Dataset schema (column: type, range of values as shown in the source viewer):
id: int64, 580 to 79M
url: string, lengths 31 to 175
text: string, lengths 9 to 245k
source: string, lengths 1 to 109
categories: string, 160 classes
token_count: int64, 3 to 51.8k
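For working with a dump like this programmatically, here is a minimal sketch assuming the rows below come from a Hugging Face dataset with the schema above; the dataset path "user/wiki-dump" is a hypothetical placeholder, not the real source.

```python
# Minimal sketch: load a dataset with the schema above and filter by token_count.
# The dataset path is a hypothetical placeholder, not the real source.
from datasets import load_dataset

ds = load_dataset("user/wiki-dump", split="train")  # columns: id, url, text, source, categories, token_count

# Keep only mid-sized articles, then inspect one record's metadata.
mid_sized = ds.filter(lambda row: 100 <= row["token_count"] <= 5000)
print(mid_sized[0]["source"], mid_sized[0]["url"], mid_sized[0]["token_count"])
```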
15,167
https://en.wikipedia.org/wiki/ICQ
ICQ was a cross-platform instant messaging (IM) and VoIP client founded in June 1996 by Yair Goldfinger, Sefi Vigiser, Amnon Amir, Arik Vardi, and Arik's father, Yossi Vardi. The name ICQ derives from the English phrase "I Seek You". Originally developed by the Israeli company Mirabilis in 1996, the client was bought by AOL in 1998, and then by Mail.Ru Group (now VK) in 2010. The ICQ client application and service were initially released in November 1996, freely available to download. The business did not have traditional marketing and relied mostly on word-of-mouth advertising instead, with customers telling their friends about it, who then informed their friends, and so on. ICQ was among the first stand-alone instant messenger (IM) applications. While real-time chat was not in itself new (Internet Relay Chat [IRC] being the most common platform at the time), the concept of a fully centralized service with individual user accounts focused on one-on-one conversations set the blueprint for later instant messaging services like AIM, and its influence is seen in modern social media applications. ICQ became the first widely adopted IM platform. At its peak around 2001, ICQ had more than 100 million registered accounts. At the time of the Mail.Ru acquisition in 2010, there were around 42 million daily users. In 2022, ICQ had about 11 million monthly users. The service was shut down on June 26, 2024, following an announcement on the ICQ website in May 2024 that the service would be discontinued.

Features of ICQ New

The last version of the service, launched in 2020 as "ICQ New", featured a number of different messaging functions:
Private chats: conversations between two users, with history synchronized to the cloud. A user could delete a sent message at any time, and a notification would be shown indicating that the message had been deleted.
A chat with oneself, which could be used to save messages from group or private chats, or to upload media content as a form of cloud storage.
Group chats with up to 25,000 simultaneous participants, which any user could create. Users could hide their phone number from other participants, see which group members had read a message, and switch off notifications for messages from specific group members.
Audio and video calls with up to five people.
Sending and receiving of audio messages, with automatic transcription to text.
Channels, where authors could publish posts as text messages and attach media files, similar to a blog. Once a post was published, subscribers received a notification as they would from regular and group chats. The channel author could remain anonymous.
Polls inside group chats.
An API through which anyone could create a bot to perform specific actions and interact with users.
"Stickers": small images or photos expressing some form of emotion, which could be selected from a provided sticker library or uploaded by users themselves. Machine learning was used to recommend stickers automatically.
"Masks": images that could be superimposed onto the camera feed in real time during video calls, or onto photos to be sent to other users.
Nicknames, which users could set to use in place of a phone number for others to search for and contact them.
"Smart answers": short phrases that appeared above the message box and could be used to answer messages. ICQ New analyzed the contents of a conversation and suggested a few pre-set answers.
UIN

ICQ users were identified and distinguished from one another by UINs, or User Identification Numbers, distributed in sequential order. The UIN was invented by Mirabilis as the user name assigned to each user upon registration. Issued UINs started at '10,000' (5 digits), and every user received a UIN when first registering with ICQ. As of ICQ6, users were also able to log in using the e-mail address they had associated with their UIN during the initial registration process. Unlike other instant messaging software or web applications, on ICQ the only permanent user information was the UIN, although it was possible to search for other users by their associated e-mail address or any other detail they had made public by updating it in their account's public profile. In addition, users could change all of their personal information, including screen name and e-mail address, without having to re-register. Since 2000, ICQ and AIM users were able to add each other to their contact lists without the need for any external clients. As a response to UIN theft and the sale of attractive UINs, ICQ started to store the e-mail addresses previously associated with a UIN. Stolen UINs could therefore sometimes be reclaimed if a valid primary e-mail address had been entered into the user profile.

History

The founding company of ICQ, Mirabilis, was established in June 1996 by five Israeli developers: Yair Goldfinger, Sefi Vigiser, Amnon Amir, Arik Vardi, and Arik's father Yossi Vardi. ICQ was one of the first text-based messengers to reach a wide range of users. The technology Mirabilis developed for ICQ was distributed free of charge. Its success encouraged AOL to acquire Mirabilis on June 8, 1998, for $287 million up front and $120 million in additional payments over three years based on performance levels. In 2002, AOL successfully patented the technology. After the purchase, the product was initially managed by Ariel Yarnitsky and Avi Shechter. ICQ's management changed at the end of 2003. Under the leadership of the new CEO, Orey Gilliam, who in 2007 also assumed responsibility for all of AOL's messaging business, ICQ resumed its growth; it was not only a highly profitable company, but one of AOL's most successful businesses. Eliav Moshe replaced Gilliam in 2009 and became ICQ's managing director. In April 2010, AOL sold ICQ to Digital Sky Technologies, headed by Alisher Usmanov, for $187.5 million. While ICQ was displaced by AOL Instant Messenger, Google Talk, and other competitors in the US and many other countries over the 2000s, it remained the most popular instant messaging network in Russian-speaking countries and an important part of online culture. Sought-after UINs commanded prices of over 11,000₽ in 2010. In September of that year, Digital Sky Technologies changed its name to Mail.Ru Group. Since the acquisition, Mail.Ru has invested in turning ICQ from a desktop client into a mobile messaging system. As of 2013, around half of ICQ's users were using its mobile apps, and in 2014 the number of users began growing for the first time since the purchase. In March 2016, the source code of the client was released under the Apache license on GitHub. In 2020, Mail.Ru Group decided to launch a new version, "ICQ New", based on the original ICQ. The updated software was presented to the general public on April 6, 2020. During the second week of January 2021, ICQ saw a renewed increase in popularity in Hong Kong, spurred by the controversy over WhatsApp's privacy policy update.
The number of downloads for the application increased 35-fold in the region. On May 24, 2024, the main page of ICQ's website announced that the service would be shutting down on June 26, 2024. ICQ recommended that users migrate to VK Messenger and VK WorkSpace.

Development history

ICQ 99a/b were the first releases that were widely available.
ICQ 2000 incorporated Notes and Reminder features.
ICQ 2001 included server-side storage of the contact list. This provided synchronization between multiple computers and enforced obtaining consent before adding UINs to the contact list, by preventing clients from modifying the local contact list directly. In 2002, AOL Time Warner announced that ICQ had been issued a United States patent for instant messaging.
ICQ 2002 was the last completely advertising-free ICQ version.
ICQ Pro 2003b was the first ICQ version to use version 10 of the ICQ protocol. ICQ 5 and 5.1 used version 9, ICQ 2002 and 2003a used version 8, and earlier versions (ICQ 2001b and all ICQ clients before it) used version 7.
ICQ 4 and later ICQ 5 (released on Monday, February 7, 2005) were upgrades of ICQ Lite. One addition was Xtraz, which offered games and features intended to appeal to younger users of the Internet. ICQ Lite was originally an idea to offer lighter users of instant messaging an alternative client that was a smaller download and less resource-hungry on relatively slow computers. ICQ 5 introduced support for skins. Few official skins were available for ICQ 5.1 on the official website; however, a number of user-generated skins were made available for download.
ICQ 6, released on April 17, 2007, was the first major update since ICQ 4. The user interface was redesigned using Boxely, the same rendering engine used in AIM Triton. This change added new features such as the ability to send IMs directly from the client's contact list. ICQ later started forcing users of v5.1 to upgrade to version 6 (and XP); those who did not upgrade found that their older version of ICQ would not start up. Although the upgrade to version 6 could be seen as a positive step, some users found that useful features, such as sending multiple files at one time, were no longer supported in the new version. At the beginning of July 2008, a network upgrade forced users to stop using ICQ 5.1 – applications that identified themselves as ICQ 5, such as Pidgin, were forced to identify themselves as ICQ 6. There seemed to be no alternative for users other than using a different IM program or patching ICQ 5.1 with a special application.
ICQ 7.0, released on January 18, 2010, included integration with Facebook and other websites. It also allowed a custom personal status, similar to Windows Live Messenger (MSN Messenger). ICQ 7.0 did not support traditional Chinese in the standard installation or with the addition of an official language pack. This made its adoption difficult among the established user base in Hong Kong and Taiwan, where traditional Chinese is the official written form.
ICQ 8, released on February 5, 2012 – "Meet the new generation of ICQ. Enjoy free video calls, messages and SMS, social networks support and more."
ICQ 10.0, released January 18, 2016. The final update was 10.0 Build 46867, released on May 27, 2022.

Criticism

Policy against unofficial clients

AOL (and later Mail.Ru) pursued an aggressive policy regarding alternative ("unauthorized") ICQ clients.
In July 2008, changes were implemented on ICQ servers that caused many unofficial clients to stop working; affected users received an official notification from "ICQ System". On December 9, 2008, another change to the ICQ servers occurred: clients sending client IDs not matching ICQ 5.1 or higher stopped working. On December 29, 2008, the ICQ press service distributed a statement characterizing alternative clients as dangerous. On January 21, 2009, ICQ servers started blocking all unofficial clients in Russia and Commonwealth of Independent States countries. Users in Russia and Ukraine received a message from UIN 1: "Системное сообщение. ICQ не поддерживает используемую вами версию. Скачайте бесплатную авторизованную версию ICQ с официального web-сайта ICQ." ("System message. The version you are using is not supported by ICQ. Download a free authorized ICQ version from ICQ's official website.") On icq.com there was an "important message" for Russian-speaking ICQ users: "ICQ осуществляет поддержку только авторизированных версий программ: ICQ Lite и ICQ 6.5." ("ICQ supports only authorized versions of its programs: ICQ Lite and ICQ 6.5.") On February 3, 2009, the events of January 21 were repeated. On December 27, 2018, ICQ announced that it would stop supporting unofficial clients, affecting many users who preferred compact clients such as Miranda NG. On December 28, 2018, ICQ stopped working on some unofficial clients. In late March 2019, ICQ stopped working on the Pidgin client, as announced in December 2018.

Cooperation with Russian intelligence services

According to a Novaya Gazeta article published in May 2018, Russian intelligence agencies had online access to ICQ users' correspondence during crime investigations. The article examined 34 verdicts of Russian courts in which evidence of the defendants' guilt had been obtained by reading correspondence on PCs or mobile devices. In six of the fourteen cases in which ICQ was involved, the capture of information occurred before the seizure of the device. Because the rival service Telegram blocked all access for the agencies, the advisor to the Russian President, Herman Klimenko, recommended using ICQ instead.

Child pornography

In 2023, an investigation by the Brazilian news outlet Núcleo Jornalismo found that ICQ was used to freely share child pornography due to lax moderation policies.

Clients

AOL's OSCAR network protocol used by ICQ was proprietary, and using a third-party client was a violation of ICQ's Terms of Service. Nevertheless, a number of third-party clients were created by using reverse engineering and protocol descriptions. These clients included:
Adium: supports ICQ, Yahoo!, AIM, MSN, Google Talk, XMPP, and others, for macOS
Ayttm: supports ICQ, Yahoo!, AIM, MSN, IRC, and XMPP
BitlBee: IRC gateway, supports ICQ, Yahoo!, AIM, MSN, Google Talk, and XMPP
Centericq: supports ICQ, Yahoo!, AIM, MSN, IRC and XMPP, text-based
climm (formerly mICQ): text-based
Jimm: supports ICQ, for Java ME mobile devices
Kopete: supports AIM, ICQ, MSN, Yahoo, XMPP, Google Talk, IRC, Gadu-Gadu, Novell GroupWise Messenger and others, for Unix-like systems
Meetro: IM and social networking combined with location; supports AIM, ICQ, MSN, Yahoo!
Miranda NG: supports ICQ, Yahoo!, AIM, MSN, IRC, Google Talk, XMPP, Gadu-Gadu, BNet and others, for Windows
Naim: ncurses-based
Pidgin (formerly Gaim): supports ICQ, Yahoo!, AIM, Gtalk, MSN, IRC, XMPP, Gadu-Gadu, SILC, Meanwhile (IBM Lotus Sametime) and others
QIP: supports ICQ, AIM, XMPP and XIMSS
stICQ: supports ICQ, for Symbian OS
Trillian: supports ICQ, IRC, Google Talk, XMPP and others

AOL-supported clients included:
AOL Instant Messenger (discontinued in 2017)
Messages/iChat: used ICQ's UIN as an AIM screen name, for macOS

See also
Comparison of instant messaging clients
Comparison of instant messaging protocols
LAN messenger
Online chat
Windows Live Messenger
Tencent QQ

References

External links
Official ICQ website

Instant messaging clients 1996 software AIM (software) clients AOL BlackBerry software IOS software Symbian software 2010 mergers and acquisitions Mergers and acquisitions of Israeli companies Android (operating system) software Formerly proprietary software 1996 establishments in Israel Internet properties disestablished in 2024 Defunct websites Defunct instant messaging clients
ICQ
Technology
3,414
7,754,624
https://en.wikipedia.org/wiki/Alkali%E2%80%93aggregate%20reaction
Alkali–aggregate reaction is a term mainly referring to a reaction which occurs over time in concrete between the highly alkaline cement paste and non-crystalline silicon dioxide, which is found in many common aggregates. This reaction can cause the expansion of the altered aggregate, leading to spalling and loss of strength of the concrete.

More accurate terminology

The alkali–aggregate reaction is a general, but relatively vague, expression which can lead to confusion. More exact definitions include the following:
Alkali–silica reaction (ASR), the most common reaction of this type;
Alkali–silicate reaction; and
Alkali–carbonate reaction.
The alkali–silica reaction is the most common form of alkali–aggregate reaction. The two other types are the alkali–silicate reaction, in which layer silicate minerals (clay minerals), sometimes present as impurities, are attacked, and the alkali–carbonate reaction, an uncommon attack on certain argillaceous dolomitic limestones, likely involving the expansion of the mineral brucite (Mg(OH)2). The pozzolanic reaction, which occurs in the setting of a mixture of slaked lime and pozzolanic materials, also has features similar to the alkali–silica reaction, mainly the formation of calcium silicate hydrate (C-S-H).

See also
Energetically modified cement (EMC)
Calthemite
Pozzolanic reaction

External links
Cement.org | Alkali-aggregate reaction
Alkali-Aggregate Reactions (AAR) – International Centre of Research and Applied Technology

Cement Concrete Inorganic reactions
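As a compact summary of the alkali–silica mechanism described above, a commonly quoted simplified scheme is sketched below (a textbook-style sketch, with NaOH standing in for the pore-solution alkalis; it is not drawn from the article text itself):

```latex
% Simplified alkali-silica reaction (ASR) scheme, textbook-style sketch.
% Step 1: alkali hydroxide attacks reactive silica to form an alkali-silica gel.
% Step 2: the hygroscopic gel absorbs water and swells, stressing the concrete.
\begin{align*}
  \mathrm{SiO_2} + 2\,\mathrm{NaOH} &\longrightarrow \mathrm{Na_2SiO_3 \cdot H_2O}
    && \text{(alkali--silica gel)} \\
  \mathrm{Na_2SiO_3 \cdot H_2O} + n\,\mathrm{H_2O} &\longrightarrow
    \mathrm{Na_2SiO_3 \cdot (n{+}1)\,H_2O}
    && \text{(swelling of the gel)}
\end{align*}
```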
Alkali–aggregate reaction
Chemistry,Engineering
385
11,436,480
https://en.wikipedia.org/wiki/Cercospora%20longipes
Cercospora longipes is a fungal plant pathogen. References longipes Fungal plant pathogens and diseases Fungus species
Cercospora longipes
Biology
25
72,125,412
https://en.wikipedia.org/wiki/Dicumyl%20peroxide
Dicumyl peroxide is an organic compound with the formula [C6H5C(Me)2O]2 (Me = CH3). Classified as a dialkyl peroxide, it is produced on a large scale industrially for use in polymer chemistry. It serves as an initiator and crosslinking agent in the production of low density polyethylene.

Production

It is obtained as a by-product of the autoxidation of cumene, which mainly affords cumene hydroperoxide. Alternatively, it can be produced by the addition of hydrogen peroxide to α-methylstyrene. Of the ca. 60,000 ton/y production of dialkyl peroxides, dicumyl peroxide is dominant.

Properties

Dicumyl peroxide is a relatively stable compound owing to the steric protection provided by the several substituents adjacent to the peroxide group. Upon heating, it breaks down by homolysis of the relatively weak O–O bond.

References

Organic peroxides Organic compounds
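The thermolysis step described under Properties can be written out explicitly; the following is a sketch based on standard peroxide chemistry (the β-scission note reflects general cumyloxy radical behavior rather than a claim made in the article):

```latex
% Thermal homolysis of the weak O-O bond in dicumyl peroxide,
% giving two cumyloxy radicals (standard peroxide chemistry).
\[
  [\mathrm{C_6H_5C(CH_3)_2O}]_2
  \;\xrightarrow{\;\Delta\;}\;
  2\,\mathrm{C_6H_5C(CH_3)_2O}^{\bullet}
\]
% The cumyloxy radicals typically abstract hydrogen or undergo
% beta-scission to acetophenone and a methyl radical.
```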
Dicumyl peroxide
Chemistry
200
11,848,801
https://en.wikipedia.org/wiki/Keldysh%20formalism
In non-equilibrium physics, the Keldysh formalism or Keldysh–Schwinger formalism is a general framework for describing the quantum mechanical evolution of a system in a non-equilibrium state or systems subject to time-varying external fields (electric field, magnetic field, etc.). Historically, it was foreshadowed by the work of Julian Schwinger and proposed almost simultaneously by Leonid Keldysh and, separately, Leo Kadanoff and Gordon Baym. It was further developed by later contributors such as O. V. Konstantinov and V. I. Perel. Extensions to driven-dissipative open quantum systems have been given not only for bosonic systems, but also for fermionic systems. The Keldysh formalism provides a systematic way to study non-equilibrium systems, usually based on the two-point functions corresponding to excitations in the system. The main mathematical object in the Keldysh formalism is the non-equilibrium Green's function (NEGF), which is a two-point function of particle fields. In this way, it resembles the Matsubara formalism, which is based on equilibrium Green functions in imaginary time and treats only equilibrium systems.

Time evolution of a quantum system

Consider a general quantum mechanical system with Hamiltonian $H$. Let the initial state of the system (at time $t_0$) be the pure state $|\psi\rangle$. If we now add a time-dependent perturbation to this Hamiltonian, say $H'(t)$, the full Hamiltonian is $H(t) = H + H'(t)$, and hence the system will evolve in time under the full Hamiltonian. In this section, we will see how time evolution actually works in quantum mechanics.

Consider a Hermitian operator $\mathcal{O}$. In the Heisenberg picture of quantum mechanics, this operator is time-dependent and the state is not. The expectation value of the operator is given by
$$\langle \mathcal{O}(t)\rangle = \langle\psi|\,\mathcal{O}_H(t)\,|\psi\rangle,$$
where, due to the time evolution of operators in the Heisenberg picture, $\mathcal{O}_H(t) = U(t_0,t)\,\mathcal{O}\,U(t,t_0)$ with $U(t_0,t) = U(t,t_0)^{\dagger}$. The time-evolution unitary operator is the time-ordered exponential of an integral,
$$U(t,t_0) = T\exp\!\Big(-\tfrac{i}{\hbar}\int_{t_0}^{t} dt'\,H(t')\Big).$$
(Note that if the Hamiltonian at one time commutes with the Hamiltonian at different times, then this can be simplified to $U(t,t_0) = \exp\!\big(-\tfrac{i}{\hbar}\int_{t_0}^{t} dt'\,H(t')\big)$.)

For perturbative quantum mechanics and quantum field theory, it is often more convenient to use the interaction picture. The interaction picture operator is
$$\mathcal{O}_I(t) = U_0(t_0,t)\,\mathcal{O}\,U_0(t,t_0),$$
where $U_0(t,t_0) = e^{-\frac{i}{\hbar}H(t-t_0)}$ is the evolution under the unperturbed Hamiltonian $H$. Then, defining
$$S(t,t_0) = U_0(t_0,t)\,U(t,t_0) = T\exp\!\Big(-\tfrac{i}{\hbar}\int_{t_0}^{t} dt'\,H'_I(t')\Big),$$
we have
$$\langle\mathcal{O}(t)\rangle = \langle\psi|\,S(t_0,t)\,\mathcal{O}_I(t)\,S(t,t_0)\,|\psi\rangle,$$
with $S(t_0,t) = S(t,t_0)^{\dagger}$. Since the time-evolution operators satisfy $S(t_3,t_2)\,S(t_2,t_1) = S(t_3,t_1)$, the above expression can be rewritten as
$$\langle\mathcal{O}(t)\rangle = \langle\psi|\,S(t_0,t_f)\,S(t_f,t)\,\mathcal{O}_I(t)\,S(t,t_0)\,|\psi\rangle,$$
where $t_f$ is any time value greater than $t$.

Path ordering on the Keldysh contour

We can write the above expression more succinctly by, purely formally, replacing each operator $\mathcal{O}(t)$ with a contour-ordered operator $\mathcal{O}(\tau)$, such that the contour parameter $\tau$ parametrizes the contour path on the time axis starting at $t_0$, proceeding to $t_f$, and then returning to $t_0$. This path is known as the Keldysh contour. $\mathcal{O}(\tau)$ has the same operator action as $\mathcal{O}(t)$ (where $t$ is the time value corresponding to $\tau$) but also carries the additional information of which branch $\tau$ lies on (that is, strictly speaking, $\mathcal{O}(\tau_1) \neq \mathcal{O}(\tau_2)$ if $\tau_1 \neq \tau_2$, even if $t_1 = t_2$ for the corresponding times). Then we can introduce the notation of path ordering on this contour, by defining
$$T_c\,\{\mathcal{O}(\tau_n)\cdots\mathcal{O}(\tau_1)\} = (\pm 1)^{\sigma}\,\mathcal{O}(\tau_{\sigma(n)})\cdots\mathcal{O}(\tau_{\sigma(1)}),$$
where $\sigma$ is a permutation such that $\tau_{\sigma(n)} \succ \tau_{\sigma(n-1)} \succ \cdots \succ \tau_{\sigma(1)}$ in the contour ordering, and the plus and minus signs are for bosonic and fermionic operators respectively. Note that this is a generalization of time ordering. With this notation, the above time evolution is written as
$$\langle\mathcal{O}(t)\rangle = \Big\langle\psi\Big|\,T_c\Big\{\exp\!\Big(-\tfrac{i}{\hbar}\int_c d\tau\,H'_I(\tau)\Big)\,\mathcal{O}_I(\tau_t)\Big\}\Big|\psi\Big\rangle,$$
where $\tau_t$ corresponds to the time $t$ on the forward branch of the Keldysh contour, and the integral over $\tau$ goes over the entire Keldysh contour. For the rest of this article, as is conventional, we will usually simply use the notation $\mathcal{O}(t)$ for $\mathcal{O}(\tau)$, where $t$ is the time corresponding to $\tau$, and whether $\tau$ is on the forward or reverse branch is inferred from context.
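The note above about commuting Hamiltonians is easy to check numerically. The following is a minimal sketch (added for illustration; it assumes ħ = 1 and uses arbitrary two-level Hamiltonians as hypothetical examples): it builds U(t, t0) as a time-ordered product of short-step propagators and compares it with the naive exponential of the integral, which agrees only when [H(t1), H(t2)] = 0.

```python
# Numerical check: time-ordered exponential vs. naive exponential of the integral.
# Assumes hbar = 1; the two-level Hamiltonians are illustrative, not from the text.
import numpy as np
from scipy.linalg import expm

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def commuting_H(t):      # [H(t1), H(t2)] = 0 for all t1, t2
    return np.cos(t) * sz

def noncommuting_H(t):   # [H(t1), H(t2)] != 0 in general
    return sz + np.cos(t) * sx

def time_ordered_U(H, t, t0, steps=4000):
    """Time-ordered exponential: product of short-step propagators,
    with later times applied on the left."""
    dt = (t - t0) / steps
    U = np.eye(2, dtype=complex)
    for k in range(steps):
        tk = t0 + (k + 0.5) * dt          # midpoint of each time slice
        U = expm(-1j * H(tk) * dt) @ U
    return U

def naive_U(H, t, t0, steps=4000):
    """exp(-i * integral of H dt): valid only if H commutes at different times."""
    dt = (t - t0) / steps
    integral = sum(H(t0 + (k + 0.5) * dt) for k in range(steps)) * dt
    return expm(-1j * integral)

for name, H in [("commuting", commuting_H), ("non-commuting", noncommuting_H)]:
    diff = np.linalg.norm(time_ordered_U(H, 2.0, 0.0) - naive_U(H, 2.0, 0.0))
    print(f"{name:14s} ||T-exp - naive exp|| = {diff:.2e}")
```

In the commuting case the two constructions agree to discretization accuracy, while in the non-commuting case they differ by an amount of order one, which is why the time ordering cannot be dropped in general.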
Keldysh diagrammatic technique for Green's functions

The non-equilibrium Green's function is defined as (setting ħ = 1 in this section and abbreviating 1 = (x₁, τ₁), etc.)
$$G(1,2) = -i\,\langle\, T_c\,\psi(1)\,\psi^{\dagger}(2)\,\rangle.$$
Or, in the interaction picture,
$$G(1,2) = -i\,\big\langle\, T_c\, e^{-i\int_c d\tau\,H'(\tau)}\,\psi(1)\,\psi^{\dagger}(2)\,\big\rangle.$$
We can expand the exponential as a Taylor series to obtain the perturbation series
$$G(1,2) = \sum_{n=0}^{\infty}\frac{(-i)^{n+1}}{n!}\int_c d\tau'_1\cdots\int_c d\tau'_n\,\big\langle\, T_c\, H'(\tau'_1)\cdots H'(\tau'_n)\,\psi(1)\,\psi^{\dagger}(2)\,\big\rangle.$$
This is the same procedure as in equilibrium diagrammatic perturbation theory, but with the important difference that both forward and reverse contour branches are included.

If, as is often the case, $H'$ is a polynomial or series as a function of the elementary fields $\psi$, we can organize this perturbation series into monomial terms and apply all possible Wick pairings to the fields in each monomial, obtaining a summation of Feynman diagrams. However, the edges of the Feynman diagram correspond to different propagators depending on whether the paired operators come from the forward ($-$) or reverse ($+$) branch. Namely,
$$G^{--}(1,2) = -i\,\langle\,T\,\psi(1)\,\psi^{\dagger}(2)\,\rangle, \qquad G^{++}(1,2) = -i\,\langle\,\bar{T}\,\psi(1)\,\psi^{\dagger}(2)\,\rangle,$$
$$G^{-+}(1,2) = \mp i\,\langle\,\psi^{\dagger}(2)\,\psi(1)\,\rangle, \qquad G^{+-}(1,2) = -i\,\langle\,\psi(1)\,\psi^{\dagger}(2)\,\rangle,$$
where the anti-time ordering $\bar{T}$ orders operators in the opposite way as time ordering, and the upper and lower signs in $G^{-+}$ are for bosonic and fermionic fields respectively. Note that $G^{--}$ is the propagator used in ordinary ground state theory. Thus, Feynman diagrams for correlation functions can be drawn and their values computed the same way as in ground state theory, except with the following modifications to the Feynman rules: Each internal vertex of the diagram is labeled with either $+$ or $-$, while external vertices carry the contour labels of the external points. Then each (unrenormalized) edge directed from a vertex $(x', t', s')$ to a vertex $(x, t, s)$ corresponds to the propagator $G^{s s'}(x, t; x', t')$. Then the diagram values for each choice of signs (there are $2^m$ such choices, where $m$ is the number of internal vertices) are all added up to find the total value of the diagram.

See also
Spin Hall effect
Kondo effect

References

Other
Gianluca Stefanucci and Robert van Leeuwen (2013). Nonequilibrium Many-Body Theory of Quantum Systems: A Modern Introduction. Cambridge University Press. DOI: https://doi.org/10.1017/CBO9781139023979
Robert van Leeuwen, Nils Erik Dahlen, Gianluca Stefanucci, Carl-Olof Almbladh and Ulf von Barth, "Introduction to the Keldysh Formalism", Lecture Notes in Physics 706, 33 (2006). arXiv:cond-mat/0506130

Condensed matter physics Electromagnetism
Keldysh formalism
Physics,Chemistry,Materials_science,Engineering
1,256
77,315,178
https://en.wikipedia.org/wiki/Tolo%20Calafat
Bartolomé "Tolo" Calafat Marcus (17 September 1970 Palma de Mallorca — 29 April 2010, Annapurna) was a Spanish mountaineer. Calafat was climbing as part of an expedition led by Juanito Oiarzabal on Annapurna when he died of cerebral edema. Calafat and his team were trapped on Annapurna by a blizzard when he became ill. A high altitude rescue attempt was made, but delayed by the weather. By the time a helicopter with doctor and mountaineer Jorge Egocheaga reached Calafat's position, 7,600m on the mountain, it was too late, and he had been covered by snow. Prior to his attempt at Annapurna, Calafat successfully summitted Mount Everest in 2006 and Cho Oyo in 2004. References 1970 births 2010 deaths Spanish mountain climbers Spanish summiters of Mount Everest Electronics engineers People from Palma de Mallorca Deaths on Annapurna Sport deaths in Nepal Mountaineering deaths
Tolo Calafat
Engineering
206
14,662,924
https://en.wikipedia.org/wiki/Tarry%20point
In geometry, the Tarry point for a triangle is the point of concurrency of the lines through the vertices of the triangle perpendicular to the corresponding sides of the triangle's first Brocard triangle. The Tarry point is the endpoint, opposite the Steiner point, of the diameter of the circumcircle drawn through the Steiner point. The point is named for Gaston Tarry. See also Concurrent lines Notes Triangle centers
Tarry point
Physics,Mathematics
83
32,552,139
https://en.wikipedia.org/wiki/Nordic%20Network%20for%20Interdisciplinary%20Environmental%20Studies
The Nordic Network for Interdisciplinary Environmental Studies (NIES) is a research network for environmental studies based primarily in the humanities. By organizing regular conferences, symposia and workshops, NIES aims to create opportunities for researchers in the Nordic countries who address environmental questions to exchange ideas and develop their work in various interdisciplinary contexts. Fields represented by members of the network include ecocriticism, environmental history, environmental philosophy, science and technology studies, art history, media studies, ecological economics, human geography, cultural studies, anthropology, archeology, sustainability studies, education for sustainability and landscape studies. NIES is responsible for organizing and editing the research series Studies in the Environmental Humanities (SEH), published by Rodopi. Formed in 2007, NIES was originally a cooperation among small academic groups in Sweden, Norway and Denmark. Today, it includes more than 100 researchers from all the Nordic countries. The network actively sponsors a wide range of educational initiatives, research projects and public outreach activities and is a key partner in pan-European and other international initiatives to build capacity and foster theoretical advancement in the environmental humanities. Since January 2011, the network's primary anchoring institution has been Mid Sweden University in Sundsvall, Sweden. National anchoring institutions include the University of Turku, the University of Oslo, the University of Iceland, Uppsala University and the University of Southern Denmark. The network's current phase of operations (2011–2015) is supported by NordForsk. References External links miun.se rodopi.nl Research institutes in Sweden Environmental science Environmental studies organizations Nordic organizations
Nordic Network for Interdisciplinary Environmental Studies
Environmental_science
317
4,825,433
https://en.wikipedia.org/wiki/Arthur%20Wahl
Arthur Charles Wahl (September 8, 1917 – March 6, 2006) was an American chemist who, as a doctoral student of Glenn T. Seaborg at the University of California, Berkeley, first isolated plutonium (94) in February 1941 shortly after the element neptunium (93) was discovered by McMillan and Abelson in 1940. Wahl was a researcher on the Manhattan Project in Los Alamos until 1946, when he joined Washington University in St. Louis. Beginning in 1952, he was the Henry V. Farr Professor of Radiochemistry; he received the American Chemical Society Award in Nuclear Chemistry in 1966 and retired in 1983. He moved back to Los Alamos in 1991 and continued his scientific writing until 2005. He died in 2006 of Parkinson's disease and pneumonia. Further reading References External links 1917 births 20th-century American chemists 2006 deaths Iowa State University alumni Manhattan Project people University of California, Berkeley alumni Washington University in St. Louis faculty Nuclear chemists
Arthur Wahl
Chemistry
200
205,488
https://en.wikipedia.org/wiki/William%20Whewell
William Whewell (24 May 1794 – 6 March 1866) was an English polymath, scientist, Anglican priest, philosopher, theologian, and historian of science. He was Master of Trinity College, Cambridge. In his time as a student there, he achieved distinction in both poetry and mathematics. The breadth of Whewell's endeavours is his most remarkable feature. In a time of increasing specialization, Whewell belonged to an earlier era when natural philosophers investigated widely. He published work in mechanics, physics, geology, astronomy, and economics, while also composing poetry, writing a Bridgewater Treatise, translating the works of Goethe, and writing sermons and theological tracts. In mathematics, Whewell introduced what is now called the Whewell equation, defining the shape of a curve without reference to an arbitrarily chosen coordinate system. He also organized thousands of volunteers internationally to study ocean tides, in what is now considered one of the first citizen science projects. He received the Royal Medal for this work in 1837. One of Whewell's greatest gifts to science was his word-smithing. He corresponded with many in his field and helped them come up with neologisms for their discoveries. Whewell coined, among other terms, scientist, physicist, linguistics, consilience, catastrophism, uniformitarianism, and astigmatism; he suggested to Michael Faraday the terms electrode, ion, dielectric, anode, and cathode. Whewell died in Cambridge in 1866 as a result of a fall from his horse.

Early life, education and marriages

Whewell was born in Lancaster, the son of John Whewell and his wife, Elizabeth Bennison. His father was a master carpenter and wished him to follow his trade, but William's success in mathematics at Lancaster Royal Grammar School and Heversham grammar school won him an exhibition (a type of scholarship) at Trinity College, Cambridge, in 1812. He was the eldest of seven children, with three brothers and three sisters born after him. Two of the brothers died as infants, while the third died in 1812. Two of his sisters married; he corresponded with them during his career as a student and then a professor. His mother died in 1807, when Whewell was 13 years old. His father died in 1816, the year Whewell received his bachelor's degree at Trinity College, but before his most significant professional accomplishments. Whewell married, firstly, in 1841, Cordelia Marshall, daughter of John Marshall. Within days of his marriage, Whewell was recommended to be Master of Trinity College in Cambridge, following Christopher Wordsworth. Cordelia died in 1855. In 1858 he married again, to Everina Frances (née Ellis), widow of Sir Gilbert Affleck, 5th Baronet, who had died in 1854. He had no children.

Career

In 1814 he was awarded the Chancellor's Gold Medal for poetry. He was Second Wrangler in 1816, President of the Cambridge Union Society in 1817, and became fellow and tutor of his college. He was professor of mineralogy from 1828 to 1832 and Knightbridge Professor of Philosophy (then called "moral theology and casuistical divinity") from 1838 to 1855. During his years as professor of philosophy, in 1841, Whewell succeeded Christopher Wordsworth as Master. Whewell influenced the syllabus of the Mathematical Tripos at Cambridge, which undergraduates studied. He was a proponent of 'mixed mathematics': applied mathematics, descriptive geometry and mathematical physics, in contrast with pure mathematics.
Under Whewell, analytic topics such as elliptical integrals were replaced by physical studies of electricity, heat and magnetism. He believed an intuitive geometrical understanding of mathematics, based on Euclid and Newton, was most appropriate. Death and legacy Whewell died in Cambridge in 1866 as a result of a fall from his horse. He was buried in the chapel of Trinity College, Cambridge, whilst his wives are buried together in the Mill Road Cemetery, Cambridge. A window dedicated to Lady Affleck, his second wife, was installed in her memory in the chancel of All Saints' Church, Cambridge and made by Morris & Co. A list of his writings was prepared after his death by Isaac Todhunter in two volumes, the first being an index of the names of persons with whom Whewell corresponded. Another book was published five years later, as a biography of Whewell's life interspersed with his letters to his father, his sisters, and other correspondence, written and compiled by his niece by marriage, Janet Mary Douglas, called Mrs Stair Douglas on the book's title page. These books are available online in their entirety as part of the Internet Archive. Endeavours History and development of science In 1826 and 1828, Whewell was engaged with George Airy in conducting experiments in Dolcoath mine in Cornwall, in order to determine the density of the earth. Their united labours were unsuccessful, and Whewell did little more in the way of experimental science. He was the author, however, of an Essay on Mineralogical Classification, published in 1828, and carried out extensive work on the tides. When Whewell started his work on tides, there was a theory explaining the forces causing the tides, based on the work of Newton, Bernoulli, and Laplace. But this explained the forces, not how tides actually propagated in oceans bounded by continents. There was a series of tidal observations for a few ports, such as London and Liverpool, which allowed tide tables to be produced for these ports. However the methods used to create such tables, and in some cases the observations, were closely guarded trade secrets. John Lubbock, a former student of Whewell's, had analysed the available historic data (covering up to 25 years) for several ports to allow tables to be generated on a theoretical basis, publishing the methodology. This work was supported by Francis Beaufort, Hydrographer of the Navy, and contributed to the publication of the Admiralty Tide Tables starting in 1833. Whewell built on Lubbock's work to develop an understanding of tidal patterns around the world that could be used to generate predictions for many locations without the need for long series of tidal observations at each port. This required extensive new observations, initially obtained through an informal network, and later through formal projects enabled by Beaufort at the Admiralty. In the first of these, in June 1834, every Coast Guard station in the United Kingdom recorded the tides every fifteen minutes for two weeks. The second, in June 1835, was an international collaboration, involving Admiralty Surveyors, other Royal Navy and British observers, as well as those from the United States, France, Spain, Portugal, Belgium, Denmark, Norway, and the Netherlands. Islands, such as the Channel Islands, were particularly interesting, adding important detail of the progress of the tides through the ocean. The Admiralty also provided the resources for data analysis, and J.F. 
Dessiou, an expert calculator on the Admiralty staff, was in charge of the calculations. Whewell made extensive use of graphical methods, and these became not just ways of displaying results, but tools in the analysis of data. He published a number of maps showing cotidal lines (a term coined by Lubbock) – lines joining points where high tide occurred at the same time. These allowed a graphical representation of the progression of tidal waves through the ocean. From this, Whewell predicted that there should be a place in the southern part of the North Sea where there was no tidal rise or fall. Such a "no-tide zone" is now called an amphidromic point. In 1840, the naval surveyor William Hewett confirmed Whewell's prediction. This involved anchoring his ship, HMS Fairy, and taking repeated soundings at the same location with lead and line, with precautions needed to allow for irregularities in the sea bed and the effects of tidal flow. The data showed a rise of no more than . Whewell published about 20 papers over a period of 20 years on his tidal researches. This was his major scientific achievement, and was an important source for his understanding of the process of scientific enquiry, the subject of one of his major works, Philosophy of the Inductive Sciences. His best-known works are two voluminous books that attempt to systematize the development of the sciences, History of the Inductive Sciences (1837) and The Philosophy of the Inductive Sciences, Founded Upon Their History (1840, 1847, 1858–60). While the History traced how each branch of the sciences had evolved since antiquity, Whewell viewed the Philosophy as the "Moral" of the previous work, as it sought to extract a universal theory of knowledge through history. In the latter, he attempted to follow Francis Bacon's plan for discovery. He examined ideas ("explication of conceptions") and by the "colligation of facts" endeavored to unite these ideas with the facts and so construct science. This colligation is an "act of thought", a mental operation consisting of bringing together a number of empirical facts by "superinducing" upon them a conception which unites the facts and renders them capable of being expressed in general laws. As an example, Whewell refers to Kepler and the discovery of the elliptical orbit: the orbit's points were colligated by the conception of the ellipse, not by the discovery of new facts. These conceptions are not "innate" (as in Kant) but, being the fruits of the "progress of scientific thought (history)", are "unfolded in clearness and distinctness".

Whewell's three steps of induction

Whewell analyzed inductive reasoning into three steps:
The selection of the (fundamental) idea, such as space, number, cause, or likeness (resemblance);
The formation of the conception, or more special modification of those ideas, as a circle, a uniform force, etc.; and
The determination of magnitudes.
Upon these follow special methods of induction applicable to quantity: the method of curves, the method of means, the method of least squares and the method of residues, and special methods depending on resemblance (to which the transition is made through the law of continuity), such as the method of gradation and the method of natural classification. In Philosophy of the Inductive Sciences Whewell was the first to use the term "consilience" to discuss the unification of knowledge between the different branches of learning.
Opponent of English empiricism

Here, as in his ethical doctrine, Whewell was moved by opposition to contemporary English empiricism. Following Immanuel Kant, he asserted against John Stuart Mill the a priori nature of necessary truth, and by his rules for the construction of conceptions he dispensed with the inductive methods of Mill. Yet, according to Laura J. Snyder, "surprisingly, the received view of Whewell's methodology in the 20th century has tended to describe him as an anti-inductivist in the Popperian mold, that is it is claimed that Whewell endorses a 'conjectures and refutations' view of scientific discovery. Whewell explicitly rejects the hypothetico-deductive claim that hypotheses discovered by non-rational guesswork can be confirmed by consequentialist testing. Whewell explained that new hypotheses are 'collected from the facts' (Philosophy of Inductive Sciences, 1849, 17)". In sum, scientific discovery is a partly empirical and partly rational process; the "discovery of the conceptions is neither guesswork nor merely a matter of observations", we infer more than we see.

Whewell's neologisms

One of Whewell's greatest gifts to science was his wordsmithing. He often corresponded with many in his field and helped them come up with new terms for their discoveries. In fact, Whewell came up with the term scientist itself in 1833, and it was first published in Whewell's anonymous 1834 review of Mary Somerville's On the Connexion of the Physical Sciences in the Quarterly Review. (Scientists had previously been known as "natural philosophers" or "men of science".)

Work in college administration

Whewell was prominent not only in scientific research and philosophy but also in university and college administration. His first work, An Elementary Treatise on Mechanics (1819), cooperated with those of George Peacock and John Herschel in reforming the Cambridge method of mathematical teaching. His work and publications also helped influence the recognition of the moral and natural sciences as an integral part of the Cambridge curriculum. In general, however, especially in later years, he opposed reform: he defended the tutorial system, and in a controversy with Connop Thirlwall (1834) opposed the admission of Dissenters; he upheld the clerical fellowship system, the privileged class of "fellow-commoners", and the authority of heads of colleges in university affairs. He opposed the appointment of the University Commission (1850) and wrote two pamphlets (Remarks) against the reform of the university (1855). He stood against the scheme of entrusting elections to the members of the senate and instead advocated the use of college funds and the subvention of scientific and professorial work. He was elected Master of Trinity College, Cambridge, in 1841, and retained that position until his death in 1866. The Whewell Professorship of International Law and the Whewell Scholarships were established through the provisions of his will.

Whewell's interests in architecture

Aside from science, Whewell was also interested in the history of architecture throughout his life. He is best known for his writings on Gothic architecture, specifically his book Architectural Notes on German Churches (first published in 1830). In this work, Whewell established a strict nomenclature for German Gothic churches and came up with a theory of stylistic development. His work is associated with the "scientific trend" of architectural writers, along with Thomas Rickman and Robert Willis.
He paid from his own resources for the construction of two new courts of rooms at Trinity College, Cambridge, built in a Gothic style. The two courts were completed in 1860 and (posthumously) in 1868, and are now collectively named Whewell's Court (in the singular).

Whewell's works in philosophy and morals

Between 1835 and 1861 Whewell produced various works on the philosophy of morals and politics, the chief of which, Elements of Morality, including Polity, was published in 1845. The peculiarity of this work—written from what is known as the intuitional point of view—is its fivefold division of the springs of action and of their objects, of the primary and universal rights of man (personal security, property, contract, family rights, and government), and of the cardinal virtues (benevolence, justice, truth, purity and order). Among Whewell's other works—too numerous to mention—were popular writings such as: the third Bridgewater Treatise, Astronomy and General Physics considered with reference to Natural Theology (1833), the two-volume treatise The Philosophy of the Inductive Sciences: Founded Upon Their History (1840), the essay Of the Plurality of Worlds (1853), in which he argued against the probability of life on other planets, the Platonic Dialogues for English Readers (1850–1861), the Lectures on the History of Moral Philosophy in England (1852), the essay Of a Liberal Education in General, with particular reference to the Leading Studies of the University of Cambridge (1845), the important edition and abridged translation of Hugo Grotius, De jure belli ac pacis (1853), and the edition of the Mathematical Works of Isaac Barrow (1860). Whewell was one of the Cambridge dons whom Charles Darwin met during his education there, and when Darwin returned from the Beagle voyage he was directly influenced by Whewell, who persuaded Darwin to become secretary of the Geological Society of London. The title pages of On the Origin of Species open with a quotation from Whewell's Bridgewater Treatise about science founded on a natural theology of a creator establishing laws: But with regard to the material world, we can at least go so far as this—we can perceive that events are brought about not by insulated interpositions of Divine power, exerted in each particular case, but by the establishment of general laws. Though Darwin used the concepts of Whewell as he made and tested his hypotheses regarding the theory of evolution, Whewell did not support Darwin's theory itself. "Whewell also famously opposed the idea of evolution. First he published a new book, Indications of the Creator, 1845, composed of extracts from his earlier works to counteract the popular anonymous evolutionary work Vestiges of the Natural History of Creation. Later Whewell opposed Darwin's theories of evolution."

Works by Whewell

(1830) New edition 1835. Third edition 1842.
(1831)
(1833) Astronomy and general physics considered with reference to Natural Theology (Bridgewater Treatise). Cambridge.
(1836) Elementary Treatise on Mechanics, 5th edition; first edition 1819.
(1837) History of the Inductive Sciences, from the Earliest to the Present Times. 3 vols, London. Volume 1, volume 2, volume 3. 2nd ed 1847 (2 vols). 3rd ed 1857 (2 vols). 1st German ed 1840–41.
(1837) On the Principles of English University Education. London, 1837.
(1840) The Philosophy of the Inductive Sciences, founded upon their history. 2 vols, London. 2nd ed 1847. Volume 1. Volume 2.
(1845) The Elements of Morality, including polity. 2 vols, London.
Volume 1. Volume 2.
(1845) Indications of the Creator, Extracts bearing upon Theology from The History and Philosophy of the Inductive Sciences, London, 1st ed, 1845.
(1846) Lectures on systematic Morality. London.
(1849) Of Induction, with especial reference to Mr. J. Stuart Mill's System of Logic. London.
(1850) Mathematical exposition of some doctrines of political economy: a second memoir. Transactions of the Cambridge Philosophical Society 9:128–49.
(1852) Lectures on the history of Moral Philosophy. Cambridge: Cambridge University Press.
(1853) Hugonis Grotii de jure belli et pacis libri tres: accompanied by an abridged translation by William Whewell, London: John W. Parker, volume 1, volume 2, volume 3.
(1853) Of the Plurality of Worlds. London.
(1857) Spedding's complete edition of the works of Bacon. Edinburgh Review 106:287–322.
(1858a) The history of scientific ideas. 2 vols, London.
(1858b) Novum Organon renovatum, London.
(1860a) On the philosophy of discovery: chapters historical and critical. London.
(1861) Plato's Republic (translation). Cambridge.
(1862) Six Lectures on Political Economy, Cambridge.
(1862) Additional Lectures on the History of Moral Philosophy, Cambridge.
(1866) Comte and Positivism. Macmillan's Magazine 13:353–62.

Honors and recognitions

Foreign Honorary Member of the American Academy of Arts and Sciences (1847)
The debating society at Lancaster Royal Grammar School is named the Whewell Society in honor of Whewell being an Old Lancastrian.
The crater Whewell on the Moon
The Gothic buildings known as Whewell's Court in Trinity College, Cambridge
The Whewell Mineral Gallery in the Sedgwick Museum of Earth Sciences, Cambridge
The mineral whewellite

In fiction

In the 1857 novel Barchester Towers, Charlotte Stanhope uses the topic of the theological arguments between Whewell and David Brewster, concerning the possibility of intelligent life on other planets, in an attempt to start up a conversation between her impecunious brother and the wealthy young widow Eleanor Bold.

See also
Catastrophism
Uniformitarianism
Earl of Bridgewater, for the other Bridgewater Treatises
Law of three stages, for Whewell's opposition to Auguste Comte's positivism
Michael Faraday

References

Further reading
Fisch, M. (1991), William Whewell Philosopher of Science, Oxford: Oxford University Press.
Fisch, M. and Schaffer, S. J. (eds.) (1991), William Whewell: A Composite Portrait, Oxford: Oxford University Press. Includes an extensive bibliography.
Whewell, W., Astronomy and General Physics Considered with Reference to Natural Theology; Bridgewater Treatises, W. Pickering, 1833 (reissued by Cambridge University Press, 2009).
Whewell, W., Of the Plurality of Worlds. An Essay; J. W. Parker and Son, 1853 (reissued by Cambridge University Press, 2009).
Yeo, R. (1991), Defining Science: William Whewell, Natural Knowledge and Public Debate in Early Victorian Britain, Cambridge: Cambridge University Press.
Zamecki, Stefan, Komentarze do naukoznawczych poglądów Williama Whewella (1794–1866): studium historyczno-metodologiczne [Commentaries to the Logological Views of William Whewell (1794–1866): A Historical-Methodological Study], Warsaw, Wydawnictwa IHN PAN, 2012; English-language summary: pp. 741–43.

External links
The philosophy of the inductive sciences, founded upon their history (1847) – complete text
William Whewell (1794–1866) by Menachem Fisch, from The Routledge Encyclopedia of Philosophy
William Whewell by Laura J.
Snyder, from the Stanford Encyclopedia of Philosophy
Six Lectures from the Archive for the History of Economic Thought – papers on mathematical economics as well as a set of introductory lectures
William Whewell from History of Economic Thought
Papers of William Whewell
The Master of Trinity at Trinity College, Cambridge
"William Whewell" at The MacTutor History of Mathematics archive

1794 births 1866 deaths 19th-century British economists 19th-century English Anglican priests 19th-century English male writers 19th-century English essayists 19th-century English historians 19th-century British linguists 19th-century English mathematicians 19th-century English philosophers 19th-century English poets 19th-century English theologians Action theorists Alumni of Trinity College, Cambridge Anglican philosophers Architectural theoreticians Deaths by horse-riding accident in England English economists English essayists English historical school of economics English logicians English male writers English philosophers English theologians Epistemologists Fellows of the American Academy of Arts and Sciences Fellows of the Royal Society Historians of science Masters of Trinity College, Cambridge Metaphilosophers Metaphysicians Metaphysics writers Ontologists People educated at Heversham Grammar School People educated at Lancaster Royal Grammar School People from Lancaster, Lancashire Philosophers of culture Philosophers of economics British philosophers of education Philosophers of history Philosophers of language Philosophers of literature Philosophers of logic Philosophers of mathematics Philosophers of mind Philosophers of psychology Philosophers of religion Philosophers of science Philosophers of social science English political philosophers Presidents of the Cambridge Union Presidents of the Geological Society of London Probability theorists Rationalists Royal Medal winners Scientists from Lancashire Second Wranglers Social philosophers Theorists on Western civilization Vice-chancellors of the University of Cambridge Writers about activism and social change Writers about religion and science Writers from Lancashire Knightbridge Professors of Philosophy Professors of Mineralogy (Cambridge) Authors of the Bridgewater Treatises Presidents of the Cambridge Philosophical Society
William Whewell
Mathematics
4,852
73,724,979
https://en.wikipedia.org/wiki/Respiratory%20syncytial%20virus%20F%20protein
The fusion glycoprotein F0 of the human respiratory syncytial virus (RSV) facilitates entry of the virus into host cells by mediating the fusion of the viral and cellular membranes. This class I fusion protein is synthesized as an inactive precursor (F0), which undergoes cleavage to form two disulfide-linked subunits, F1 and F2, that are essential for its fusion activity. The RSV F protein exists in two conformations: a metastable prefusion form and a stable postfusion form, with the prefusion form being a major target for neutralizing antibodies due to its role in viral entry. The structural transitions of the F protein during the fusion process are crucial for its function, making it a significant focus in the development of vaccines and antiviral therapies against RSV infections. References Viral structural proteins F protein, Respiratory syncytial virus
Respiratory syncytial virus F protein
Chemistry
191
6,893,544
https://en.wikipedia.org/wiki/Folding%20funnel
The folding funnel hypothesis is a specific version of the energy landscape theory of protein folding, which assumes that a protein's native state corresponds to its free energy minimum under the solution conditions usually encountered in cells. Although energy landscapes may be "rough", with many non-native local minima in which partially folded proteins can become trapped, the folding funnel hypothesis assumes that the native state is a deep free energy minimum with steep walls, corresponding to a single well-defined tertiary structure. The term was introduced by Ken A. Dill in a 1987 article discussing the stabilities of globular proteins. The folding funnel hypothesis is closely related to the hydrophobic collapse hypothesis, under which the driving force for protein folding is the stabilization associated with the sequestration of hydrophobic amino acid side chains in the interior of the folded protein. This allows the water solvent to maximize its entropy, lowering the total free energy. On the side of the protein, free energy is further lowered by favorable energetic contacts: isolation of electrostatically charged side chains on the solvent-accessible protein surface and neutralization of salt bridges within the protein's core. The molten globule state predicted by the folding funnel theory as an ensemble of folding intermediates thus corresponds to a protein in which hydrophobic collapse has occurred but many native contacts, or close residue-residue interactions represented in the native state, have yet to form. In the canonical depiction of the folding funnel, the depth of the well represents the energetic stabilization of the native state versus the denatured state, and the width of the well represents the conformational entropy of the system. The surface outside the well is shown as relatively flat to represent the heterogeneity of the random coil state. The theory's name derives from an analogy between the shape of the well and a physical funnel, in which dispersed liquid is concentrated into a single narrow area.

Background

The protein folding problem is concerned with three questions, as stated by Ken A. Dill and Justin L. MacCallum: (i) How can an amino acid sequence determine the 3D native structure of a protein? (ii) How can a protein fold so quickly despite a vast number of possible conformations (Levinthal's paradox)? How does the protein know which conformations not to search? And (iii) is it possible to create a computer algorithm to predict a protein's native structure from its amino acid sequence alone? Auxiliary factors inside the living cell, such as folding catalysts and chaperones, assist in the folding process but do not determine the native structure of a protein. Studies during the 1980s focused on models that could explain the shape of the energy landscape, a mathematical function that describes the free energy of a protein as a function of the microscopic degrees of freedom. After introducing the term in 1987, Ken A. Dill surveyed polymer theory in protein folding, in which he addressed two puzzles: the first, the Blind Watchmaker's paradox, holds that biological proteins could not originate from random sequences; the second, Levinthal's paradox, holds that protein folding cannot happen by purely random search. Dill carried the idea of the Blind Watchmaker into his metaphor for protein folding kinetics: the native state of a protein can be reached through a folding process involving some small bias and random choices that speed up the search.
That would mean even residues at very different positions in the amino acid sequence will be able to come into contact with each other. Yet a bias during the folding process can change the folding time by tens to hundreds of orders of magnitude. As the protein folding process goes through a stochastic search of conformations before reaching its final destination, the vast number of possible conformations is considered irrelevant, while kinetic traps begin to play a role. The stochastic idea of protein intermediate conformations reveals the concept of an "energy landscape" or "folding funnel", in which folding properties are related to free energy, and in which the accessible conformations of a protein are reduced as it approaches a native-like structure. The y-axis of the funnel represents the "internal free energy" of a protein: the sum of hydrogen bonds, ion pairs, torsion angle energies, and hydrophobic and solvation free energies. The many x-axes represent the conformational structures, and those that are geometrically similar to each other are close to one another in the energy landscape. The folding funnel theory is also supported by Peter G. Wolynes, Zaida Luthey-Schulten and José Onuchic: folding kinetics should be considered as the progressive organization of partially folded structures into an ensemble (a funnel), rather than as a serial linear pathway of intermediates. Native states of proteins are shown to be thermodynamically stable structures that exist under physiological conditions, as proven in experiments with ribonuclease by Christian B. Anfinsen (see Anfinsen's dogma). It is suggested that, because the landscape is encoded by the amino-acid sequence, natural selection has enabled proteins to evolve so that they are able to fold rapidly and efficiently. In a native low-energy structure, there is no competition among conflicting energy contributions, leading to minimal frustration. This notion of frustration is measured quantitatively in spin glasses, in which the folding transition temperature Tf is compared to the glass transition temperature Tg. Tf represents the strength of the native interactions in the folded structure, and Tg represents the strength of non-native interactions in other configurations. A high Tf/Tg ratio indicates a faster folding rate and fewer intermediates compared to proteins with lower ratios. In a system with high frustration, a mild difference in thermodynamic conditions can lead to different kinetic traps and landscape ruggedness.

Proposed funnel models

Funnel-shaped energy landscape

Ken A. Dill and Hue Sun Chan (1997) illustrated a folding pathway design based on Levinthal's paradox, named the "golf-course" landscape, on which a random search for the native state would prove impossible: on the hypothetically "flat playing field", the protein "ball" would take an extremely long time to find and fall into the native "hole". However, a rugged pathway deviating from the initially smooth golf course creates a directed tunnel through which the denatured protein passes to reach its native structure, and there can exist valleys (intermediate states) or hills (transition states) along the pathway to a protein's native state. Yet this proposed pathway yields a contrast between pathway dependence and pathway independence, or the Levinthal dichotomy, and emphasizes a one-dimensional route of conformation.
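The golf-course picture can be made concrete with a toy calculation, a minimal sketch added here for illustration (the one-dimensional "landscape" and all parameters are hypothetical, not taken from the studies cited above). An unbiased walk over N conformations reaches the native state in a time growing roughly as N², while even a small downhill bias makes the search time roughly linear in N; in a real, high-dimensional conformational space the unbiased search time grows exponentially, which is Levinthal's paradox.

```python
# Toy illustration of random (golf-course-like) vs. funneled (biased) search.
# The "conformation" is a single integer coordinate; the native state sits at 0.
# Hypothetical one-dimensional landscape, purely illustrative.
import random

def search_steps(n_states, bias, trials=200, max_steps=10**6):
    """Mean number of steps for a walker starting at n_states - 1 to reach 0.
    bias = 0.5 is an unbiased search; bias > 0.5 funnels the walk downhill."""
    total = 0
    for _ in range(trials):
        pos, steps = n_states - 1, 0
        while pos != 0 and steps < max_steps:
            pos += -1 if random.random() < bias else +1
            pos = min(pos, n_states - 1)   # reflecting wall at the unfolded end
            steps += 1
        total += steps
    return total / trials

random.seed(0)
for bias in (0.5, 0.55, 0.7):
    print(f"bias {bias:.2f}: ~{search_steps(50, bias):,.0f} steps to the native state")
```

Even this caricature shows the qualitative point of the funnel picture: a small, persistent bias toward native-like contacts collapses the search time by orders of magnitude relative to undirected search.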
Another approach to protein folding eliminates the term "pathway" and replaces it with "funnels", being concerned with parallel processes, ensembles and multiple dimensions instead of a single sequence of structures a protein has to pass through. Thus, an ideal funnel consists of a smooth multi-dimensional energy landscape where increasing interchain contacts correlate with decreasing degrees of freedom and, ultimately, attainment of the native state. Unlike an idealized smooth funnel, a rugged funnel demonstrates kinetic traps, energy barriers, and narrow throughway paths to the native state. This also explains the accumulation of misfolded intermediates, in which kinetic traps prevent protein intermediates from achieving their final conformation. Molecules stuck in such a trap must break the favorable contacts that do not lead to the native state and climb back uphill before finding a different search route downhill. A Moat landscape, on the other hand, illustrates a variation of routes that includes an obligatory kinetic trap through which protein chains must pass to reach their native state. This energy landscape stems from a study of hen egg white lysozyme by Christopher Dobson and his colleagues, in which half of the population undergoes normal fast folding, while the other half first forms the α-helical domain quickly and then the β-sheet domain slowly. It is different from the rugged landscape since there are no accidental kinetic traps, only obligatory ones that portions of the protein are required to pass through before reaching the final state. Both the rugged landscape and the Moat landscape nonetheless present the same concept: protein configurations may encounter kinetic traps during the folding process. The Champagne Glass landscape, in turn, involves free energy barriers due to conformational entropy, partly resembling the random golf-course landscape, in which a protein chain configuration becomes lost and has to spend time searching for the path downhill. This situation applies, for example, to the conformational search of polar residues that will eventually connect two hydrophobic clusters. The foldon volcano-shaped funnel model In another study, Rollins and Dill (2014) introduced the Foldon Funnel Model, an addition to previous folding funnels in which secondary structures form sequentially along the folding pathway and are stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape instead of the simple funnel described previously: the outer landscape slopes uphill because isolated protein secondary structures are unstable. These secondary structures are then stabilized by tertiary interactions, and although the partially assembled structures become increasingly native-like, their free energy keeps rising until the final step, from the second-to-last structure to the native state, which is downhill in free energy. The highest free energy on the volcano landscape therefore occurs at the structure just before the native state. This predicted landscape is consistent with experiments showing that most protein secondary structures are unstable on their own, and with measured protein equilibrium cooperativities. Thus, all steps before the native state is reached are in pre-equilibrium. Although it differs from earlier models, the Foldon Funnel Model still divides conformational space into two kinetic states: native versus all others. Application Folding funnel theory has both qualitative and quantitative applications. 
Visualization of funnels provides a tool that connects the statistical mechanical properties of proteins with their folding kinetics. It also suggests that the folding process is robust: it is hard to destroy by mutation as long as the native state's stability is maintained. More specifically, a mutation can block one route to the native state, but another route can take over provided that it reaches the final structure. A protein's stability increases as it approaches its native state through partially folded configurations. Local structures such as helices and turns form first, followed by global assembly. Despite being a process of trial and error, protein folding can be fast because proteins reach their native structure through this divide-and-conquer, local-to-global process. The idea of the folding funnel also helps rationalize the role of chaperones: the re-folding of a protein can be catalyzed by a chaperone pulling it apart, returning it to a high point on the energy landscape, and letting it fold again through random trial and error. Funneled landscapes suggest that different individual molecules of the same protein sequence may utilize microscopically different routes to reach the same destination. Some paths will be more populated than others. Funnels also highlight the basic difference between folding and simple classical chemical reactions. A chemical reaction starts from its reactant A and goes through a change in structure to reach its product B. Folding, on the other hand, is a transition from disorder to order, not merely from structure to structure. A simple one-dimensional reaction pathway does not capture protein folding's reduction in conformational degeneracy. In other words, folding funnels provide a microscopic framework for folding kinetics. Folding kinetics has traditionally been described by simple mass-action models, such as D-I-N (with an on-path intermediate I between the denatured state D and the native state N) or X-D-N (with an off-path intermediate X), which constitute the macroscopic framework of folding. The sequential micropath view corresponds to the mass-action model: it explains folding kinetics in terms of pathways, transition states, on- and off-path intermediates, and what one sees in experiments, but is not concerned with the behavior of an individual molecule or the state of its monomer sequence at a specific macroscopic transition state. Its shortcoming is related to Levinthal's paradox, or the searching problem. In contrast, funnel models aim to explain the kinetics in terms of underlying physical forces, to predict the microstate composition of those macrostates. Nonetheless, it proves challenging for computer simulations of energy landscapes to reconcile the "macroscopic" view of mass-action models with a "microscopic" understanding of the changes in protein conformation during folding. Insights from funnels are, by themselves, insufficient to improve computer search methods: a landscape that is smooth and funnel-shaped on a global scale can appear rough on a local scale in computer simulations. See also Chaperone – proteins that assist other proteins with folding or unfolding Levinthal paradox Protein structure prediction References Further reading Biochemical reactions Protein structure
Folding funnel
Chemistry,Biology
2,580
22,390,444
https://en.wikipedia.org/wiki/Glass%20coloring%20and%20color%20marking
Glass coloring and color marking may be obtained in several ways: by the addition of coloring ions; by precipitation of nanometer-sized colloids (so-called striking glasses such as "gold ruby" or red "selenium ruby"); by colored inclusions (as in milk glass and smoked glass); by light scattering (as in phase-separated glass); by dichroic coatings (see dichroic glass); or by colored coatings. Coloring ions Ordinary soda-lime glass appears colorless to the naked eye when it is thin, although iron oxide impurities produce a green tint which can be viewed in thick pieces or with the aid of scientific instruments. Further metals and metal oxides can be added to glass during its manufacture to change its color, which can enhance its aesthetic appeal. Examples of these additives are listed below: Iron(II) oxide may be added to glass, resulting in bluish-green glass which is frequently used in beer bottles. Together with chromium it gives a richer green color, used for wine bottles. Sulfur, together with carbon and iron salts, is used to form iron polysulfides and produce amber glass ranging from yellowish to almost black. In borosilicate glasses rich in boron, sulfur imparts a blue color. With calcium it yields a deep yellow color. Manganese can be added in small amounts to remove the green tint given by iron, or in higher concentrations to give glass an amethyst color. Manganese is one of the oldest glass additives, and purple manganese glass has been used since early Egyptian history. Manganese dioxide, which is black, is used to remove the green color from the glass; in a very slow process this is converted to sodium permanganate, a dark purple compound. In New England some houses built more than 300 years ago have window glass which is lightly tinted violet because of this chemical change, and such glass panes are prized as antiques. This process is widely confused with the formation of "desert amethyst glass", in which glass exposed to desert sunshine with a high ultraviolet component develops a delicate violet tint. Details of the process and the composition of the glass vary and so do the results, because it is not a simple matter to obtain or produce properly controlled specimens. Small concentrations of cobalt (0.025 to 0.1%) yield blue glass. The best results are achieved when using glass containing potash. Very small amounts can be used for decolorizing. 2 to 3% of copper oxide produces a turquoise color. Nickel, depending on the concentration, produces blue, violet, or even black glass. Lead crystal with added nickel acquires a purplish color. Nickel together with a small amount of cobalt was used for decolorizing lead glass. Chromium is a very powerful colorizing agent, yielding dark green or, in higher concentrations, even black glass. Together with tin oxide and arsenic it yields emerald green glass. Chromium aventurine, in which aventurescence is achieved by growth of large parallel chromium(III) oxide plates during cooling, is made from glass with added chromium oxide in an amount above its solubility limit in glass. Cadmium together with sulphur forms cadmium sulfide and results in a deep yellow color, often used in glazes. However, cadmium is toxic. Together with selenium and sulphur it yields shades of bright red and orange. Adding titanium produces yellowish-brown glass. Titanium, rarely used on its own, is more often employed to intensify and brighten other colorizing additives. Uranium (0.1 to 2%) can be added to give glass a fluorescent yellow or green color. 
Uranium glass is typically not radioactive enough to be dangerous, but if ground into a powder, such as by polishing with sandpaper, and inhaled, it can be carcinogenic. When used in lead glass with a very high proportion of lead, uranium produces a deep red color. Didymium gives a green color (used in UV filters) or lilac red. Striking glasses Selenium, like manganese, can be used in small concentrations to decolorize glass, or in higher concentrations to impart a reddish color, caused by selenium nanoparticles dispersed in the glass. It is a very important agent for making pink and red glass. When used together with cadmium sulfide, it yields a brilliant red color known as "Selenium Ruby". Pure metallic copper produces a very dark red, opaque glass, which is sometimes used as a substitute for gold in the production of ruby-colored glass. Metallic gold, in very small concentrations (around 0.001%, or 10 ppm), produces a rich ruby-colored glass ("Ruby Gold" or "Rubino Oro"), while lower concentrations produce a less intense red, often marketed as "cranberry". The color is caused by the size and dispersion of the gold particles. Ruby gold glass is usually made of lead glass with added tin. Silver compounds such as silver nitrate and silver halides can produce a range of colors from orange-red to yellow. The way the glass is heated and cooled can significantly affect the colors produced by these compounds. Photochromic lenses and photosensitive glass are also based on silver. Purple of Cassius is a purple pigment formed by the reaction of gold salts with tin(II) chloride. Coloring added to glass The principal methods here are enamelled glass, essentially a technique for painting patterns or images, used both on glass vessels and on stained glass; glass paint, typically in black; and silver stain, giving yellows to oranges on stained glass. All of these are fired in a kiln or furnace to fix them, and can be extremely durable when properly applied. This is not true of "cold-painted" glass, using oil paint or other mixtures, which rarely lasts more than a few centuries. Colored inclusions Tin oxide with antimony and arsenic oxides produces an opaque white glass (milk glass), first used in Venice to produce an imitation porcelain, very often then painted with enamels. Similarly, some smoked glasses may be based on dark-colored inclusions, but with ionic coloring it is also possible to produce dark colors (see above). Color caused by scattering Glass containing two or more phases with different refractive indices shows coloring based on the Tyndall effect and explained by Mie theory, if the dimensions of the phases are similar to or larger than the wavelength of visible light. The scattered light is blue and violet, while the transmitted light is yellow and red. Dichroic glass Dichroic glass has one or several coatings in the nanometer range (for example metals, metal oxides, or nitrides) which give the glass dichroic optical properties. The blue appearance of some automobile windshields is also caused by dichroism. See also Crystal field theory - physical explanation of coloring Color of medieval stained glass Hydrogen darkening Hydroxyl ion absorption Transparent materials References Glass engineering and science Glass chemistry
Glass coloring and color marking
Chemistry,Materials_science,Engineering
1,459
30,784,263
https://en.wikipedia.org/wiki/Vanguard%20International%20Semiconductor%20Corporation
Vanguard International Semiconductor Corporation (VIS) is a Taiwanese specialized IC foundry service provider, founded in December 1994 in Hsinchu Science Park by Morris Chang. In March 1998, VIS became a listed company on the Taiwan Over-The-Counter Stock Exchange (OTC) with the main shareholders TSMC, National Development Fund, Executive Yuan and other institutional investors. History VIS was working as a subcontractor for TSMC for the manufacturing of logic and mixed signal products, primarily focusing on the production and development of DRAM and other memory IC. In 2000, VIS announced its plan to transform from a DRAM manufacturer into a foundry service provider. As of February 2004, VIS completely terminated its DRAM production and became a pure-play foundry company. VIS acquired Fab 4 and Fab 5, two lines of 200-mm fab from Winbond Electronics Corp., expanding its production ability. The purchase was finalized in January, 2008. VIS is sponsored by the Industrial Technology Research Institute. As of 2018, VIS has a production capacity of approximately 199,000 wafers per month. VIS purchased GlobalFoundries' Fab 3E located in Tampines, Singapore for $236 million. The transfer of ownership occurred on December 31, 2019. This facility manufactures microelectromechanical systems (MEMS) as well as analog/mixed signal chips. It has a production capacity of around 35,000 200-mm wafer starts per month (WSPM). VIS is a significant supplier to the automotive industry. See also List of companies of Taiwan References External links VIS homepage Foundry semiconductor companies Computer companies of Taiwan Computer hardware companies Semiconductor companies of Taiwan Manufacturing companies based in Hsinchu Taiwanese companies established in 1994 Taiwanese brands
Vanguard International Semiconductor Corporation
Technology
354
30,353,647
https://en.wikipedia.org/wiki/Bufferbloat
Bufferbloat is the undesirable latency that comes from a router or other network equipment buffering too many data packets. Bufferbloat can also cause packet delay variation (also known as jitter), as well as reduce the overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice over IP (VoIP), audio streaming, online gaming, and even ordinary web browsing. Some communications equipment manufacturers designed unnecessarily large buffers into some of their network products. In such equipment, bufferbloat occurs when a network link becomes congested, causing packets to become queued for long periods in these oversized buffers. In a first-in first-out queuing system, overly large buffers result in longer queues and higher latency, and do not improve network throughput. It can also be induced by specific slow-speed connections hindering the on-time delivery of other packets. The bufferbloat phenomenon was described as early as 1985. It gained more widespread attention starting in 2009. According to some sources the most frequent cause of high latency ("lag") in online video games is local home network bufferbloat. High latency can render modern online gaming impossible. Buffering An established rule of thumb for the network equipment manufacturers was to provide buffers large enough to accommodate at least 250 ms of buffering for a stream of traffic passing through a device. For example, a router's Gigabit Ethernet interface would require a relatively large 32 MB buffer. Such sizing of the buffers can lead to failure of the TCP congestion control algorithm. The buffers then take some time to drain, before congestion control resets and the TCP connection ramps back up to speed and fills the buffers again. Bufferbloat thus causes problems such as high and variable latency, and choking network bottlenecks for all other flows as the buffer becomes full of the packets of one TCP stream and other packets are then dropped. A bloated buffer has an effect only when this buffer is actually used. In other words, oversized buffers have a damaging effect only when the link they buffer becomes a bottleneck. The size of the buffer serving a bottleneck can be measured using the ping utility provided by most operating systems. First, the other host should be pinged continuously; then, a several-seconds-long download from it should be started and stopped a few times. By design, the TCP congestion avoidance algorithm will rapidly fill up the bottleneck on the route. If downloading (and uploading, respectively) correlates with a direct and important increase of the round trip time reported by ping, then it demonstrates that the buffer of the current bottleneck in the download (and upload, respectively) direction is bloated. Since the increase of the round trip time is caused by the buffer on the bottleneck, the maximum increase gives a rough estimation of its size in milliseconds. In the previous example, using an advanced traceroute tool instead of the simple pinging (for example, MTR) will not only demonstrate the existence of a bloated buffer on the bottleneck, but will also pinpoint its location in the network. Traceroute achieves this by displaying the route (path) and measuring transit delays of packets across the network. The history of the route is recorded as round-trip times of the packets received from each successive host (remote node) in the route (path). 
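The ping experiment described above reduces to a small calculation. The sketch below is a minimal Python illustration (the RTT and link-rate figures are hypothetical measurements, not values from this article): the RTT increase under load approximates the queueing delay at the bottleneck, and multiplying by the link rate converts it into an approximate buffer size.

```python
def buffer_estimate_bytes(baseline_rtt_ms, loaded_rtt_ms, rate_bits_per_s):
    """Rough bottleneck buffer size implied by the ping-under-load test."""
    queue_delay_s = (loaded_rtt_ms - baseline_rtt_ms) / 1000.0
    return queue_delay_s * rate_bits_per_s / 8.0

# Hypothetical measurement: idle RTT of 20 ms rising to 520 ms while an
# upload saturates a 10 Mbit/s uplink -> roughly 0.6 MB of buffering.
print(buffer_estimate_bytes(20, 520, 10e6) / 1e6, "MB")

# Cross-check against the sizing rule cited earlier: 250 ms of buffering
# at gigabit line rate is 0.25 * 1e9 / 8, about 31 MB, consistent with
# the 32 MB figure quoted above for a Gigabit Ethernet interface.
print(buffer_estimate_bytes(0, 250, 1e9) / 1e6, "MB")
```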
Mechanism Most TCP congestion control algorithms rely on measuring the occurrence of packet drops to determine the available bandwidth between two ends of a connection. The algorithms speed up the data transfer until packets start to drop, then slow down the transmission rate. Ideally, they keep adjusting the transmission rate until it reaches the equilibrium speed of the link. So that the algorithms can select a suitable transfer speed, the feedback about packet drops must occur in a timely manner. With a large buffer that has been filled, the packets will arrive at their destination, but with a higher latency. The packets were not dropped, so TCP does not slow down once the uplink has been saturated, further filling the buffer. Newly arriving packets are dropped only when the buffer is fully saturated. Once this happens, TCP may even decide that the path of the connection has changed, and again go into the more aggressive search for a new operating point. Packets are queued within a network buffer before being transmitted; in problematic situations, packets are dropped only if the buffer is full. On older routers, buffers were fairly small so they filled quickly and therefore packets began to drop shortly after the link became saturated, so the TCP protocol could adjust and the issue would not become apparent. On newer routers, buffers have become large enough to hold several seconds of buffered data. To TCP, a congested link can appear to be operating normally as the buffer fills. The TCP algorithm is unaware the link is congested and does not start to take corrective action until the buffer finally overflows and packets are dropped. All packets passing through a simple buffer implemented as a single queue will experience similar delay, so the latency of any connection that passes through a filled buffer will be affected. Available channel bandwidth can also end up being unused, as some fast destinations may not be promptly reached due to buffers clogged with data awaiting delivery to slow destinations. These effects impair the interactivity of applications using other network protocols, including UDP, which is used in latency-sensitive applications like VoIP and online gaming. Impact on applications Regardless of bandwidth requirements, any type of service that requires consistently low latency or jitter-free transmission can be affected by bufferbloat. Such services include digital voice calls (VoIP), online gaming, video chat, and other interactive applications such as radio streaming, video on demand, and remote login. When the bufferbloat phenomenon is present and the network is under load, even normal web page loads can take many seconds to complete, and simple DNS queries can fail due to timeouts. In fact, any TCP connection can time out and disconnect, and UDP packets can be discarded. Since the continuation of a TCP download stream depends on acknowledgement (ACK) packets in the upload stream, a bufferbloat problem in the upload direction can cause failure of unrelated download applications, because the client's ACK packets do not reach the Internet server in a timely manner. Detection The DSL Reports Speedtest is an easy-to-use test that includes a score for bufferbloat. The ICSI Netalyzr was another online tool that could be used for checking networks for the presence of bufferbloat, together with checking for many other common configuration problems. The service was shut down in March 2019. 
The bufferbloat.net web site lists tools and procedures for determining whether a connection has excess buffering that will slow it down. Solutions and mitigations Several technical solutions exist, which can be broadly grouped into two categories: solutions that target the network and solutions that target the endpoints. The two types of solutions are often complementary. The problem sometimes arises from a combination of fast and slow network paths. Network solutions generally take the form of queue management algorithms. This type of solution has been the focus of the IETF AQM working group. Notable examples include: limiting the IP queue length (see TCP tuning); AQM algorithms such as CoDel and PIE; hybrid AQM and packet scheduling algorithms such as FQ-CoDel; amendments to the DOCSIS standard to enable smarter buffer control in cable modems; and integration of queue management (FQ-CoDel) into the Wi-Fi subsystem of the Linux operating system, as Linux is commonly used in wireless access points. Notable examples of solutions targeting the endpoints are: the BBR congestion control algorithm for TCP; the Micro Transport Protocol employed by many BitTorrent clients; and techniques for using fewer connections, such as HTTP pipelining or HTTP/2 instead of the plain HTTP protocol. The problem may also be mitigated by reducing the buffer size on the OS and network hardware; however, this is often not configurable, and the optimal buffer size depends on the line rate, which may differ for different destinations. Utilizing DiffServ (and employing multiple priority-based queues) helps prioritize transmission of low-latency traffic (such as VoIP, videoconferencing, and gaming), relegating congestion and bufferbloat to the non-prioritized traffic. Optimal buffer size For the longest-delay TCP connections to still get their fair share of the bandwidth, the buffer size should be at least the bandwidth-delay product divided by the square root of the number of simultaneous streams. A typical rule of thumb is 50 ms of data at line rate, but some popular consumer-grade switches have only 1 ms, which may result in extra bandwidth loss on the longer-delay connections in the case of local contention with others. See also Bandwidth-delay product Congestion window Explicit Congestion Notification Head-of-line blocking Packet loss References External links BufferBloat: What's Wrong with the Internet? A discussion with Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys April, 2011, by Jim Gettys, introduction by Vint Cerf 21 minute demonstration and explanation of typical broadband bufferbloat May 2012, by Fred Baker (IETF chair) in Spanish, English slides available TSO sizing and the FQ scheduler (Jonathan Corbet, LWN.net) Flow control (data) Internet architecture
Bufferbloat
Technology,Engineering
2,012
9,606,667
https://en.wikipedia.org/wiki/Pervious%20concrete
Pervious concrete (also called porous concrete, permeable concrete, no fines concrete and porous pavement) is a special type of concrete with a high porosity used for concrete flatwork applications that allows water from precipitation and other sources to pass directly through, thereby reducing the runoff from a site and allowing groundwater recharge. Pervious concrete is made using large aggregates with little to no fine aggregates. The concrete paste then coats the aggregates and allows water to pass through the concrete slab. Pervious concrete is traditionally used in parking areas, areas with light traffic, residential streets, pedestrian walkways, and greenhouses. It is an important application for sustainable construction and is one of many low impact development techniques used by builders to protect water quality. History Pervious concrete was first used in the 1800s in Europe as pavement surfacing and load bearing walls. Cost efficiency was the main motive due to a decreased amount of cement. It became popular again in the 1920s for two storey homes in Scotland and England. It became increasingly viable in Europe after WWII due to the scarcity of cement. It did not become as popular in the US until the 1970s. In India it became popular in 2000. Stormwater management The proper utilization of pervious concrete is a recognized Best Management Practice by the U.S. Environmental Protection Agency (EPA) for providing first flush pollution control and stormwater management. As regulations further limit stormwater runoff, it is becoming more expensive for property owners to develop real estate, due to the size and expense of the necessary drainage systems. Pervious concrete lowers the NRCS Runoff Curve Number or CN by retaining stormwater on site. This allows the planner/designer to achieve pre-development stormwater goals for pavement intense projects. Pervious concrete reduces the runoff from paved areas, which reduces the need for separate stormwater retention ponds and allows the use of smaller capacity storm sewers. This allows property owners to develop a larger area of available property at a lower cost. Pervious concrete also naturally filters storm water and can reduce pollutant loads entering into streams, ponds, and rivers. Pervious concrete functions like a storm water infiltration basin and allows the storm water to infiltrate the soil over a large area, thus facilitating recharge of precious groundwater supplies locally. All of these benefits lead to more effective land use. Pervious concrete can also reduce the impact of development on trees. A pervious concrete pavement allows the transfer of both water and air to root systems to help trees flourish even in highly developed areas. Properties Pervious concrete consists of cement, coarse aggregate (size should be 9.5 mm to 12.5 mm) and water with little to no fine aggregates. The addition of a small amount of sand will increase the strength. The mixture has a water-to-cement ratio of 0.28 to 0.40 with a void content of 15 to 25 percent. The correct quantity of water in the concrete is critical. A low water to cement ratio will increase the strength of the concrete, but too little water may cause surface failure. A proper water content gives the mixture a wet-metallic appearance. As this concrete is sensitive to water content, the mixture should be field checked. Entrained air may be measured by a Rapid Air system, where the concrete is stained black and sections are analyzed under a microscope. 
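Because the mixture is sensitive to water content, a batch is normally checked against the target water-to-cement ratio. The sketch below is a minimal Python illustration of that arithmetic; the batch quantities are hypothetical.

```python
def check_water_cement_ratio(cement_kg, water_kg, lo=0.28, hi=0.40):
    """Compare a trial batch's w/c ratio with the range given above."""
    wc = water_kg / cement_kg
    verdict = "within" if lo <= wc <= hi else "outside"
    return wc, verdict

wc, verdict = check_water_cement_ratio(cement_kg=350, water_kg=115)
print(f"w/c = {wc:.2f} ({verdict} the 0.28-0.40 range)")
# w/c = 0.33 (within the 0.28-0.40 range)
```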
A common flatwork form has riser strips on top such that the screed is 3/8-1/2 inches (9 to 12 mm) above final pavement elevation. Mechanical screeds are preferable to manual. The riser strips are removed to guide compaction. Immediately after screeding, the concrete is compacted to improve the bond and smooth the surface. Excessive compaction of pervious concrete results in higher compressive strength, but lower porosity (and thus lower permeability). Jointing varies little from other concrete slabs. Joints are tooled with a rolling jointing tool prior to curing or saw cut after curing. Curing consists of covering concrete with 6 mil plastic sheeting within 20 minutes of concrete discharge. However, this contributes to a substantial amount of waste sent to landfills. Alternatively, preconditioned absorptive lightweight aggregate as well as internal curing admixture (ICA) have been used to effectively cure pervious concrete without waste generation. Testing and inspection Pervious concrete has a common strength of though strengths up to can be reached. There is no standardized test for compressive strength. Acceptance is based on the unit weight of a sample of poured concrete using ASTM standard no. C1688. An acceptable tolerance for the density is plus or minus of the design density. Slump and air content tests are not applicable to pervious concrete because of the unique composition. The designer of a storm water management plan should ensure that the pervious concrete is functioning properly through visual observation of its drainage characteristics prior to opening of the facility. Cold climates Concerns over the resistance to the freeze-thaw cycle have limited the use of pervious concrete in cold weather environments. The rate of freezing in most applications is dictated by the local climate. Entrained air may help protect the paste as it does in regular concrete. The addition of a small amount of fine aggregate to the mixture increases the durability of the pervious concrete. Avoiding saturation during the freeze cycle is the key to the longevity of the concrete. Related, having a well prepared 8 to 24 inch (200 to 600 mm) sub-base and a good drainage preventing water stagnation will reduce the possibility of freeze-thaw damage. Using permeable concrete for pavements can make them safer for pedestrians in the winter because water won't settle on the surface and freeze leading to dangerously icy conditions. Roads can also be made safer for cars by the use of permeable concrete as the reduction in the formation of standing water will reduce the possibility of aquaplaning, and porous roads will also reduce tire noise. Maintenance To prevent reduction in permeability, pervious concrete needs to be cleaned regularly. Cleaning can be accomplished through wetting the surface of the concrete and vacuum sweeping. See also References Further reading US EPA. Office of Research and Development. "Research Highlights: Porous Pavements: Managing Rainwater Runoff." October 17, 2008. External links National Pervious Concrete Pavement Association Pervious Concrete Design Resources American Concrete Institute Building materials Concrete Environmental engineering
Pervious concrete
Physics,Chemistry,Engineering
1,304
71,881,589
https://en.wikipedia.org/wiki/Eleanor%20Janega
Eleanor Janega is an American broadcaster and medievalist. Her scholarship focuses on gender and sexuality; apocalyptic thought; propaganda; and the urban experience, in the late medieval period. Biography Despite her initial interest in pursuing Chinese history in college, particularly the 17th century transition from the Ming Dynasty to the Qing dynasty, upon encountering professors Barbara Rosenwein and Theresa Gross-Diaz at Loyola University Chicago, she says, "It was over," and her career studying Medieval history had begun. Janega gained her undergraduate degree in History (with honours) from Loyola University Chicago, and holds an MA (with distinction) in Medieval Studies and a PhD in history, both from University College London. Her doctoral thesis on the 14th-century Bohemian preacher Milíč of Kroměříž was titled Jan Milíč of Kroměříž and Emperor Charles IV: Preaching, Power, and the Church of Prague, and was supervised by Martyn Rady. She is a guest teacher in the London School of Economics Department of International History, and teaches a standalone online course on Medieval Gender and Sexuality. Janega co-hosts the Going Medieval documentary strand on the History Hit streaming service. She also co-hosts the Gone Medieval podcast, and has appeared as a talking head on radio and television. Selected publications References External links Eleanor Janega Going Medieval Living people 1982 births 21st-century American historians 21st-century American non-fiction writers 21st-century American women writers Loyola University Chicago alumni Alumni of University College London Academics of the London School of Economics American broadcasters American medievalists American history podcasters Historians of the Czech Republic Women's historians Women's studies academics Historians of sexuality Urban historians American expatriates in England
Eleanor Janega
Biology
341
876,956
https://en.wikipedia.org/wiki/AnandTech
AnandTech was an online computer hardware magazine owned by Future plc. It was founded in April 1997 by then-14-year-old Anand Lal Shimpi, who served as CEO and editor-in-chief until August 30, 2014, with Ryan Smith replacing him as editor-in-chief. The web site was a source of hardware reviews for off-the-shelf components and exhaustive benchmarking, targeted towards computer-building enthusiasts, but later expanded to cover mobile devices such as smartphones and tablets. Some of their articles on mass-market products such as mobile phones were syndicated by CNNMoney. The large accompanying forum is recommended by some books for bargain hunting in the technology field. AnandTech was acquired by Purch on 17 December 2014. Purch was acquired by Future in 2018. On August 30, 2024, the publication shut down. The content of the website was said to be preserved, but no new articles or reviews would be published. The AnandTech forums would continue to operate. History In its early stages, Matthew Witheiler served as co-owner and Senior Hardware Editor, creating insightful and in-depth reviews for the site. In 2006, an AnandTech editor launched a spin-off called DailyTech, a technology news site. The move followed a similar evolution of the news section of AnandTech's peer publication, Tom's Guide, into TG Daily some months earlier. On December 17, 2014, Purch announced the acquisition of Anandtech.com. In 2018, Anandtech and other Purch consumer brands were sold to Future. The editorial team also included Senior Editor, Ian Cutress (who departed in February 2022), as well as Motherboard expert Gavin Bonshor. In January 2023, Gavin Bonshor was promoted to the Senior Editor position, effectively replacing Dr. Ian Cutress, the previous Senior Editor. On August 30, 2024, AnandTech announced that it had ended publication effective immediately. The site will remain online as an archive, while its community forum will remain operational. Reviews Describing AnandTech in 2008, author Paul McFedries wrote that "its heart and its claim to fame is the massive collection of incredibly in-depth reviews". In 2008, blogging expert Bruce C. Brown called AnandTech one of the "big dogs in the tech field". In 2005, computer expert Leo Laporte described AnandTech as an "outstanding review and technology website for 3D hardware and other computer components", and said that it is "one of the most professional hardware review sites online". Forums AnandTech has over 350,000 registered users and over 35 million posts. The AnandTech forums are home to distributive computing teams, known collectively as TeAm AnandTech (or simply The TeAm). AnandTech contains a wide variety of sub-forums, including the casual environment of AnandTech Off-Topic (or ATOT as the members call it) to the far more technical Highly Technical forum. AnandTech also maintains several highly regulated e-commerce forums, such as Hot Deals and For Sale/For Trade. In July 2007, the forum underwent major changes that site administrators stated as necessary for furthering userbase growth. The profanity filter was removed (although use of vulgar language is limited), and the identities of traditionally anonymous volunteer moderators were revealed (except two). 
See also CNET Maximum PC TechCrunch The Tech Report Tom's Hardware ZDNet List of Internet forums References External links Future plc Computing websites Magazines established in 1997 Magazines disestablished in 2024 American technology news websites Computer magazines published in the United States Online computer magazines
AnandTech
Technology
743
2,692,659
https://en.wikipedia.org/wiki/Chi%20Tauri
Chi Tauri, Latinised from χ Tauri, is a star system in the constellation of Taurus. Parallax measurements made by the Hipparcos spacecraft put it at a distance of about from Earth. The primary component has an apparent magnitude of about 5.4, meaning it is visible with the naked eye. The main component of the system is Chi Tauri A. It is a B-type main-sequence star. Its mass is 2.6 times that of the Sun and its surface glows with an effective temperature of . It may be a binary star itself, as suggested by astrometric data from Hipparcos, although no orbit could be derived. The secondary component of the system is Chi Tauri B, separated by about 19″ from Chi Tauri A. It was thought to be a post-T Tauri star based on its unusual spectrum, but later studies ruled this out. It is a double-lined spectroscopic binary—the two stars are not resolved, but their spectra have periodic Doppler shifts indicating orbital motion. The two stars, designated Ba and Bb, are an F-type star and a G-type star, respectively. The radial velocity of Chi Tauri B shows a slow drift indicating the presence of another component in the system. Designated Chi Tauri Bc, this massive object is too dim to be detected directly, but it appears in Chi Tauri B's spectrum as an infrared excess. Because of this infrared excess, the unseen component is thought to be a pair of K-type main-sequence stars, each with a mass 70% of the Sun's. The stars within the system appear to be dynamically interacting. Naming With φ, κ1, κ2 and υ, it composed Al Kalbain, the Arabs' "Two Dogs". According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, Al Kalbain was the title for five stars: φ as Alkalbain I, this star (χ) as Alkalbain II, κ1 as Alkalbain III, κ2 as Alkalbain IV and υ as Alkalbain V. In Chinese, (), meaning Whetstone, refers to an asterism consisting of χ Tauri, ψ Tauri, 44 Tauri and φ Tauri. Consequently, the Chinese name for χ Tauri itself is (, ). References B-type main-sequence stars F-type main-sequence stars Spectroscopic binaries 5 Taurus (constellation) Tauri, Chi Durchmusterung objects Tauri, 059 027638 020430 1369
Chi Tauri
Astronomy
546
35,938,288
https://en.wikipedia.org/wiki/Dubins%20path
In geometry, the term Dubins path typically refers to the shortest curve that connects two points in the two-dimensional Euclidean plane (i.e. the x-y plane) with a constraint on the curvature of the path and with prescribed initial and terminal tangents to the path, and an assumption that the vehicle traveling the path can only travel forward. If the vehicle can also travel in reverse, then the path follows the Reeds–Shepp curve. Lester Eli Dubins (1920–2010) proved, using tools from analysis, that any such path will consist of maximum-curvature arcs and/or straight line segments. In other words, the shortest path will be made by joining circular arcs of maximum curvature and straight lines. Discussion Dubins proved his result in 1957. In 1974 Harold H. Johnson proved Dubins' result by applying Pontryagin's maximum principle. In particular, Harold H. Johnson presented necessary and sufficient conditions for a plane curve, which has bounded piecewise continuous curvature and prescribed initial and terminal points and directions, to have minimal length. In 1992 the same result was shown again using Pontryagin's maximum principle. More recently, a geometric curve-theoretic proof has been provided by J. Ayala, D. Kirszenblat and J. Hyam Rubinstein. A proof characterizing Dubins paths in homotopy classes has been given by J. Ayala. Applications The Dubins path is commonly used in the fields of robotics and control theory as a way to plan paths for wheeled robots, airplanes and underwater vehicles. There are simple geometric and analytical methods to compute the optimal path. For example, in the case of a wheeled robot, a simple kinematic car model (also known as Dubins' car) for the system is

$\dot{x} = V\cos\theta, \qquad \dot{y} = V\sin\theta, \qquad \dot{\theta} = u,$

where $(x, y)$ is the car's position, $\theta$ is the heading, the car is moving at a constant speed $V$, and the turn rate control $u$ is bounded, $|u| \le u_{\max}$. In this case the maximum turning rate corresponds to some minimum turning radius (and equivalently maximum curvature). The prescribed initial and terminal tangents correspond to initial and terminal headings. The Dubins path gives the shortest path joining two oriented points that is feasible for the wheeled-robot model. The optimal path type can be described using an analogy with cars making a 'right turn (R)', a 'left turn (L)' or driving 'straight (S)'. An optimal path will always be at least one of the six types: RSR, RSL, LSR, LSL, RLR, LRL. For example, consider that for some given initial and final positions and tangents, the optimal path is shown to be of the type 'RSR'. Then this corresponds to a right-turn arc (R) followed by a straight line segment (S) followed by another right-turn arc (R). Moving along each segment in this sequence for the appropriate length will form the shortest curve that joins a starting point A to a terminal point B with the desired tangents at each endpoint and that does not exceed the given curvature. Dubins interval problem The Dubins interval problem is a key variant of the Dubins path problem, in which intervals of heading directions are specified at the initial and terminal points. The tangent direction of the path at the initial and final points is constrained to lie within the specified intervals. One could solve this using geometrical analysis, or using Pontryagin's minimum principle.
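As an illustration of how one of the six types is evaluated, the following minimal Python sketch computes the length of the RSR word from the tangent-circle geometry described above. It is only a sketch — a complete planner would evaluate all six words and keep the shortest feasible one — and the function names are illustrative, not from any standard library.

```python
import math

def mod2pi(angle):
    """Wrap an angle into [0, 2*pi)."""
    return angle % (2.0 * math.pi)

def rsr_length(start, goal, rho):
    """Length of the RSR word (right arc, straight segment, right arc).

    start and goal are (x, y, heading) poses; rho is the minimum turning
    radius. For two right-turn circles of equal radius, the connecting
    straight segment is parallel to the line joining the circle centers.
    """
    x0, y0, th0 = start
    x1, y1, th1 = goal
    # Centers of the right-turn circles tangent to the two poses.
    cx0, cy0 = x0 + rho * math.sin(th0), y0 - rho * math.cos(th0)
    cx1, cy1 = x1 + rho * math.sin(th1), y1 - rho * math.cos(th1)
    d = math.hypot(cx1 - cx0, cy1 - cy0)    # straight-segment length
    phi = math.atan2(cy1 - cy0, cx1 - cx0)  # straight-segment direction
    arc0 = mod2pi(th0 - phi)  # clockwise sweep onto the tangent line
    arc1 = mod2pi(phi - th1)  # clockwise sweep off the tangent line
    return rho * (arc0 + arc1) + d

# Start at the origin heading north, finish at (10, 0) heading south,
# radius 1: a quarter turn right, 8 units straight, a quarter turn right.
print(rsr_length((0.0, 0.0, math.pi / 2), (10.0, 0.0, -math.pi / 2), 1.0))
# -> 11.1415..., i.e. 8 + pi
```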
References External links Dubins Curves, from Planning Algorithms by Steven M. LaValle Isochrons for a Dubins Car, a demonstration from the Wolfram Demonstrations Project An open-source Python Reeds-Shepp implementation, authored by Built Robotics Piecewise-circular curves Automated planning and scheduling Robot kinematics
Dubins path
Mathematics,Engineering
762
35,155,163
https://en.wikipedia.org/wiki/Eichler%20order
In mathematics, an Eichler order, named after Martin Eichler, is an order of a quaternion algebra that is the intersection of two maximal orders. References Number theory
Eichler order
Mathematics
40
72,497,805
https://en.wikipedia.org/wiki/Aperiodic%20crystal
Aperiodic crystals are crystals that lack three-dimensional translational symmetry, but still exhibit three-dimensional long-range order. In other words, they are periodic crystals in higher dimensions. They are classified into three different categories: incommensurate modulated structures, incommensurate composite structures, and quasicrystals. In X-ray crystallography The X-ray diffraction patterns of aperiodic crystals contain two sets of peaks, which include "main reflections" and "satellite reflections". Main reflections are usually stronger in intensity and span a lattice defined by three-dimensional reciprocal lattice vectors. Satellite reflections are weaker in intensity and are known as "lattice ghosts". These reflections do not correspond to any lattice points in physical space and cannot be indexed with the original three vectors. History The history of aperiodic crystals can be traced back to the early 20th century, when the science of X-ray crystallography was in its infancy. At that time, it was generally accepted that the ground state of matter was always an ideal crystal with three-dimensional space group symmetry, or lattice periodicity. However, in the late 1900s, a number of developments in the field of crystallography challenged this belief. Researchers began to focus on the scattering of X-rays and other particles beyond just the Bragg peaks, which allowed them to better understand the effects of defects and finite size on the structure of crystals, as well as the presence of additional spots in diffraction patterns due to periodic variations in the crystal structure. These findings showed that the ground state of matter was not always an ideal crystal, and that other, more complex structures could also exist. These structures were later classified as aperiodic crystals, and their study has continued to be an active area of research in the field of crystallography. Mathematics of the superspace approach To understand aperiodic crystal structures, one must use the superspace approach. In materials science, "superspace" or higher-dimensional space refers to the concept of describing the structures and properties of materials in terms of dimensions beyond the three dimensions of physical space. This may involve using mathematical models to describe the behavior of atoms or molecules in a material in four, five, or even higher dimensions. Aperiodic crystals can be understood as a three-dimensional physical space wherein atoms are positioned, plus the additional dimensions of the second subspace. Superspace The dimensionalities of aperiodic crystals are written (3+1), (3+2) and (3+3). The "3" represents the dimensions of the first subspace, which is also called the "external space" or "parallel space". The "d" in (3+d) represents the additional dimensions of the second subspace, which is also called the "internal space" or "perpendicular space"; it is perpendicular to the first subspace. In summary, superspace is the direct sum of the two subspaces. With the superspace approach, we can now describe a three-dimensional aperiodic structure as a higher-dimensional periodic structure. Peak indexing To index all Bragg peaks, both main and satellite reflections, additional reciprocal lattice vectors must be introduced. With respect to the three reciprocal lattice vectors $\mathbf{a}^*$, $\mathbf{b}^*$ and $\mathbf{c}^*$ spanned by the main reflections, the fourth vector can be expressed as $\mathbf{q} = \alpha\mathbf{a}^* + \beta\mathbf{b}^* + \gamma\mathbf{c}^*$, where $\mathbf{q}$ is the modulation wave vector, which represents the direction and wavelength of the modulation wave through the crystal structure. 
If at least one of the components $\alpha$, $\beta$ or $\gamma$ is an irrational number, then the structure is considered to be "incommensurately modulated". With the superspace approach, we can project the diffraction pattern from a higher-dimensional space to three-dimensional space. Example Biphenyl The biphenyl molecule is a simple organic molecular compound consisting of two phenyl rings bonded by a central C-C single bond, which exhibits a modulated molecular crystal structure. Two competing factors are important for the molecule's conformation. One is the steric hindrance of the ortho-hydrogens, which leads to repulsion between electrons and causes torsion of the molecule. As a result, the conformation of the molecule is non-planar, which often occurs when biphenyl is in the gas phase. The other factor is the π-electron effect, which favors coplanarity of the two rings. This is often the case when biphenyl is at room temperature. References Crystal structure types Tessellation
Aperiodic crystal
Physics,Chemistry,Materials_science,Mathematics
890
26,901,900
https://en.wikipedia.org/wiki/Iodobenzene%20dichloride
Iodobenzene dichloride (PhICl2) is a complex of iodobenzene with chlorine. As a reagent for organic chemistry, it is used as an oxidant and chlorinating agent. Chemical structure Single-crystal X-ray crystallography has been used to determine its structure; as can be predicted by VSEPR theory, it adopts a T-shaped geometry about the central iodine atom. Preparation Iodobenzene dichloride is not stable and is not commonly available commercially. It is prepared by passing chlorine gas through a solution of iodobenzene in chloroform, from which it precipitates. The same reaction has been reported at pilot plant scale (20 kg) as well. Ph-I + Cl2 → PhICl2 An alternate preparation involving the use of chlorine generated in situ by the action of sodium hypochlorite on hydrochloric acid has also been described. Reactions Iodobenzene dichloride is hydrolyzed by basic solutions to give iodosobenzene (PhIO) and is oxidized by sodium hypochlorite to give iodoxybenzene (PhIO2). In organic synthesis, iodobenzene dichloride is used as a reagent for the selective chlorination of alkenes and alkynes. References Further reading Iodanes Oxidizing agents Reagents for organic chemistry Phenyl compounds
Iodobenzene dichloride
Chemistry
321
38,647,394
https://en.wikipedia.org/wiki/TUGSAT-1
TUGSAT-1, also known as BRITE-Austria and CanX-3B, is the first Austrian satellite. It is an optical astronomy spacecraft operated by the Graz University of Technology as part of the international BRIght-star Target Explorer programme. Details TUGSAT-1 was manufactured by the University of Toronto based on the Generic Nanosatellite Bus, and had a mass at launch of (plus another 7 kg for the XPOD separation system). The spacecraft is cube-shaped, with each side measuring . The satellite will be used, along with five other spacecraft, to conduct photometric observations of stars with apparent magnitude of greater than 4.0 as seen from Earth. TUGSAT-1 was one of the first two BRITE satellites to be launched, along with the Austro-Canadian UniBRITE-1 spacecraft. Four more satellites, two Canadian and two Polish, were launched at later dates. Launch The TUGSAT-1 spacecraft was launched through the University of Toronto's Nanosatellite Launch System programme, as part of the NLS-8 launch, along with UniBRITE-1 and AAUSAT3. The NLS-8 launch was subcontracted to the Indian Space Research Organisation (ISRO), who placed the satellites into orbit using a Polar Satellite Launch Vehicle (PSLV) in the PSLV-CA configuration, flying from the First Launch Pad at the Satish Dhawan Space Centre. The NLS spacecraft were secondary payloads on the rocket, whose primary mission was to deploy the Franco-Indian SARAL ocean research satellite. Canada's Sapphire and NEOSSat-1 spacecraft, and the United Kingdom's STRaND-1, were also carried by the same rocket under separate launch contracts. The launch took place at 12:31 UTC on 25 February 2013, and the rocket deployed all of its payloads successfully. See also UniBRITE-1 BRITE-Toronto BRITE-Montreal Lem (BRITE-PL) Heweliusz (BRITE-PL) References Spacecraft launched in 2013 Satellites of Austria Space telescopes First artificial satellites of a country 2013 in Austria Space program of Austria Graz University of Technology Spacecraft launched by PSLV rockets
TUGSAT-1
Astronomy
448
443,235
https://en.wikipedia.org/wiki/Covariant%20transformation
In physics, a covariant transformation is a rule that specifies how certain entities, such as vectors or tensors, change under a change of basis. The transformation that describes the new basis vectors as a linear combination of the old basis vectors is defined as a covariant transformation. Conventionally, indices identifying the basis vectors are placed as lower indices and so are all entities that transform in the same way. The inverse of a covariant transformation is a contravariant transformation. Whenever a vector should be invariant under a change of basis, that is to say it should represent the same geometrical or physical object having the same magnitude and direction as before, its components must transform according to the contravariant rule. Conventionally, indices identifying the components of a vector are placed as upper indices and so are all indices of entities that transform in the same way. The sum over pairwise matching indices of a product with the same lower and upper indices is invariant under a transformation. A vector itself is a geometrical quantity, in principle, independent (invariant) of the chosen basis. A vector v is given, say, in components $v^i$ on a chosen basis $\mathbf{e}_i$. On another basis, say $\mathbf{e}'_j$, the same vector v has different components $v'^j$ and

$\mathbf{v} = \sum_i v^i \mathbf{e}_i = \sum_j v'^j \mathbf{e}'_j.$

As a vector, v should be invariant to the chosen coordinate system and independent of any chosen basis, i.e. its "real world" direction and magnitude should appear the same regardless of the basis vectors. If we perform a change of basis by transforming the vectors $\mathbf{e}_i$ into the basis vectors $\mathbf{e}'_j$, we must also ensure that the components $v^i$ transform into the new components $v'^j$ to compensate. The needed transformation of v is called the contravariant transformation rule. In the example shown, a vector is described by two different coordinate systems: a rectangular coordinate system (the black grid), and a radial coordinate system (the red grid). Basis vectors have been chosen for both coordinate systems: ex and ey for the rectangular coordinate system, and er and eφ for the radial coordinate system. The radial basis vectors er and eφ appear rotated anticlockwise with respect to the rectangular basis vectors ex and ey. The covariant transformation, performed to the basis vectors, is thus an anticlockwise rotation, rotating from the first basis vectors to the second basis vectors. The coordinates of v must be transformed into the new coordinate system, but the vector v itself, as a mathematical object, remains independent of the basis chosen, appearing to point in the same direction and with the same magnitude, invariant to the change of coordinates. The contravariant transformation ensures this, by compensating for the rotation between the different bases. If we view v from the context of the radial coordinate system, it appears to be rotated more clockwise from the basis vectors er and eφ, compared to how it appeared relative to the rectangular basis vectors ex and ey. Thus, the needed contravariant transformation to v in this example is a clockwise rotation.
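The rotated-bases example above can be checked numerically. The following minimal Python/NumPy sketch (the chosen point and vector are arbitrary) builds the Jacobian of the Cartesian-to-polar change of coordinates, transforms the basis covariantly and the components contravariantly, and verifies that the vector itself is unchanged.

```python
import numpy as np

r, phi = 2.0, np.pi / 6  # polar coordinates of an arbitrary point
# Jacobian d(x, y)/d(r, phi) for x = r cos(phi), y = r sin(phi). Its
# COLUMNS are the polar coordinate basis vectors e_r and e_phi written
# on the Cartesian basis: the covariant transformation of the basis.
J = np.array([[np.cos(phi), -r * np.sin(phi)],
              [np.sin(phi),  r * np.cos(phi)]])

v_cart = np.array([1.0, 2.0])          # components of v on e_x, e_y
v_polar = np.linalg.solve(J, v_cart)   # contravariant rule: inverse Jacobian

# Recombining the new components with the new basis returns the same
# vector: v is invariant even though basis and components each changed.
print(np.allclose(J @ v_polar, v_cart))  # True
```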
Examples of covariant transformation The derivative of a function transforms covariantly The explicit form of a covariant transformation is best introduced with the transformation properties of the derivative of a function. Consider a scalar function f (like the temperature at a location in a space) defined on a set of points p, identifiable in a given coordinate system $x^i$, $i = 1, 2, \ldots$ (such a collection is called a manifold). If we adopt a new coordinate system $x'^1, x'^2, \ldots$ then for each i, the original coordinate $x^i$ can be expressed as a function of the new coordinates, so $x^i = x^i(x'^1, x'^2, \ldots)$. One can express the derivative of f in the old coordinates in terms of the new coordinates, using the chain rule of the derivative, as

$\frac{\partial f}{\partial x'^j} = \sum_i \frac{\partial x^i}{\partial x'^j}\,\frac{\partial f}{\partial x^i}.$

This is the explicit form of the covariant transformation rule. The notation of a normal derivative with respect to the coordinates sometimes uses a comma, as follows

$f_{,i} = \frac{\partial f}{\partial x^i},$

where the index i is placed as a lower index, because of the covariant transformation. Basis vectors transform covariantly A vector can be expressed in terms of basis vectors. For a certain coordinate system, we can choose the vectors tangent to the coordinate grid. This basis is called the coordinate basis. To illustrate the transformation properties, consider again the set of points p, identifiable in a given coordinate system $x^i$ where $i = 1, 2, \ldots$ (manifold). A scalar function f, that assigns a real number to every point p in this space, is a function of the coordinates, $f = f(x^1, x^2, \ldots)$. A curve is a one-parameter collection of points c, say with curve parameter λ, c(λ). A tangent vector v to the curve is the derivative $dc/d\lambda$ along the curve, with the derivative taken at the point p under consideration. Note that we can see the tangent vector v as an operator (the directional derivative) which can be applied to a function:

$\mathbf{v}[f] = \frac{df(c(\lambda))}{d\lambda}.$

The parallel between the tangent vector and the operator can also be worked out in coordinates,

$\mathbf{v}[f] = \sum_i \frac{dx^i}{d\lambda}\,\frac{\partial f}{\partial x^i},$

or in terms of operators $\partial/\partial x^i$,

$\mathbf{v} = \sum_i \frac{dx^i}{d\lambda}\,\frac{\partial}{\partial x^i} = \sum_i v^i \mathbf{e}_i,$

where we have written $\mathbf{e}_i = \partial/\partial x^i$, the tangent vectors to the curves which are simply the coordinate grid itself. If we adopt a new coordinate system $x'^1, x'^2, \ldots$ then for each i, the old coordinate $x^i$ can be expressed as a function of the new system, so $x^i = x^i(x'^1, x'^2, \ldots)$. Let $\mathbf{e}'_j = \partial/\partial x'^j$ be the basis of tangent vectors in this new coordinate system. We can express $\mathbf{e}'_j$ in the new system by applying the chain rule on x. As a function of coordinates we find the following transformation:

$\mathbf{e}'_j = \frac{\partial}{\partial x'^j} = \sum_i \frac{\partial x^i}{\partial x'^j}\,\frac{\partial}{\partial x^i} = \sum_i \frac{\partial x^i}{\partial x'^j}\,\mathbf{e}_i,$

which indeed is the same as the covariant transformation for the derivative of a function. Contravariant transformation The components of a (tangent) vector transform in a different way, called the contravariant transformation. Consider a tangent vector v and call its components $v^i$ on a basis $\mathbf{e}_i$. On another basis $\mathbf{e}'_j$ we call the components $v'^j$, so

$\mathbf{v} = \sum_i v^i \mathbf{e}_i = \sum_j v'^j \mathbf{e}'_j,$

in which

$v^i = \frac{dx^i}{d\lambda} \qquad \text{and} \qquad v'^j = \frac{dx'^j}{d\lambda}.$

If we express the new components in terms of the old ones, then

$v'^j = \frac{dx'^j}{d\lambda} = \sum_i \frac{\partial x'^j}{\partial x^i}\,\frac{dx^i}{d\lambda} = \sum_i \frac{\partial x'^j}{\partial x^i}\,v^i.$

This is the explicit form of the transformation called the contravariant transformation, and we note that it is different from, and precisely the inverse of, the covariant rule. In order to distinguish them from the covariant (tangent) vectors, the index is placed on top. Basis differential forms transform contravariantly An example of a contravariant transformation is given by a differential form df. For f as a function of coordinates $x^i$, df can be expressed in terms of the basis $dx^i$:

$df = \sum_i \frac{\partial f}{\partial x^i}\,dx^i.$

The differentials dx transform according to the contravariant rule since

$dx'^j = \sum_i \frac{\partial x'^j}{\partial x^i}\,dx^i.$

Dual properties Entities that transform covariantly (like basis vectors) and the ones that transform contravariantly (like components of a vector and differential forms) are "almost the same" and yet they are different. They have "dual" properties. What is behind this is mathematically known as the dual space, which always goes together with a given linear vector space. Take any vector space T. A function f on T is called linear if, for any vectors v, w and scalar α,

$f(\mathbf{v} + \mathbf{w}) = f(\mathbf{v}) + f(\mathbf{w}) \qquad \text{and} \qquad f(\alpha\mathbf{v}) = \alpha f(\mathbf{v}).$

A simple example is the function which assigns a vector the value of one of its components (called a projection function). It has a vector as argument and assigns a real number, the value of a component. 
All such scalar-valued linear functions together form a vector space, called the dual space of T. The sum $f + g$ is again a linear function for linear $f$ and $g$, and the same holds for scalar multiplication $\alpha f$. Given a basis $\mathbf{e}_i$ for T, we can define a basis, called the dual basis, for the dual space in a natural way by taking the set of linear functions mentioned above: the projection functions $\omega^i$. Each projection function produces the number 1 when applied to one of the basis vectors and 0 on the others; for example, $\omega^0$ gives a 1 on $\mathbf{e}_0$ and zero elsewhere. Applying this linear function to a vector $\mathbf{v} = v^i \mathbf{e}_i$ gives (using its linearity) $\omega^0(\mathbf{v}) = \omega^0(v^i \mathbf{e}_i) = v^i \omega^0(\mathbf{e}_i) = v^0$, so just the value of the first coordinate. For this reason it is called the projection function. There are as many dual basis vectors $\omega^i$ as there are basis vectors $\mathbf{e}_i$, so the dual space has the same dimension as the linear space itself. It is "almost the same space", except that the elements of the dual space (called dual vectors) transform covariantly and the elements of the tangent vector space transform contravariantly. Sometimes an extra notation is introduced where the real value of a linear function $\sigma$ on a tangent vector $\mathbf{u}$ is given as $\sigma(\mathbf{u}) = \langle \sigma, \mathbf{u} \rangle$, where $\langle \sigma, \mathbf{u} \rangle$ is a real number. This notation emphasizes the bilinear character of the form. It is linear in $\sigma$ since that is a linear function, and it is linear in $\mathbf{u}$ since that is an element of a vector space. Co- and contravariant tensor components Without coordinates A tensor of type (r, s) may be defined as a real-valued multilinear function of r dual vectors and s vectors. Since vectors and dual vectors may be defined without dependence on a coordinate system, a tensor defined in this way is independent of the choice of a coordinate system. The notation of a tensor is $T(\sigma, \rho, \dots, \mathbf{u}, \mathbf{v}, \dots)$ for dual vectors (differential forms) $\sigma$, $\rho$ and tangent vectors $\mathbf{u}$, $\mathbf{v}$. In the second, index notation, where the arguments are replaced by upper and lower indices, the distinction between vectors and differential forms is more obvious. With coordinates Because a tensor depends linearly on its arguments, it is completely determined if one knows its values on a basis of the $\omega^i$ and $\mathbf{e}_j$: $T(\omega^{i_1}, \dots, \omega^{i_r}, \mathbf{e}_{j_1}, \dots, \mathbf{e}_{j_s}) = T^{i_1 \dots i_r}{}_{j_1 \dots j_s}$. The numbers $T^{i_1 \dots i_r}{}_{j_1 \dots j_s}$ are called the components of the tensor on the chosen basis. If we choose another basis (whose vectors are linear combinations of the original basis vectors), we can use the linear properties of the tensor, and we will find that the components with upper indices transform like the components of vectors (so contravariantly), whereas the components with lower indices transform like the basis of tangent vectors and are thus covariant. For a tensor of rank 2, we can verify that $T'_{ij} = \frac{\partial x^k}{\partial x'^i} \frac{\partial x^l}{\partial x'^j} T_{kl}$ (covariant tensor) and $T'^{ij} = \frac{\partial x'^i}{\partial x^k} \frac{\partial x'^j}{\partial x^l} T^{kl}$ (contravariant tensor). For a mixed co- and contravariant tensor of rank 2, $T'^i{}_j = \frac{\partial x'^i}{\partial x^k} \frac{\partial x^l}{\partial x'^j} T^k{}_l$ (mixed co- and contravariant tensor). See also Covariance and contravariance of vectors General covariance Lorentz covariance References Tensors Differential geometry
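A compact recap of the rotation example from the introduction, in matrix form (an added sketch, with $\theta$ the anticlockwise angle between the rectangular and the radial bases at the point considered): $\begin{pmatrix} \mathbf{e}_r \\ \mathbf{e}_\varphi \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \mathbf{e}_x \\ \mathbf{e}_y \end{pmatrix}$ and $\begin{pmatrix} v^r \\ v^\varphi \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} v^x \\ v^y \end{pmatrix}$, so that $v^r \mathbf{e}_r + v^\varphi \mathbf{e}_\varphi = v^x \mathbf{e}_x + v^y \mathbf{e}_y$. The same orthogonal matrix appears in both relations because the contravariant matrix is the inverse transpose of the covariant one, and for a rotation the inverse equals the transpose; geometrically, the basis vectors turn anticlockwise while the component pair turns clockwise, leaving the vector itself unchanged.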
Covariant transformation
Engineering
1,997
37,300,477
https://en.wikipedia.org/wiki/Common%20ICAO%20Data%20Interchange%20Network
The Common ICAO Data Interchange Network (CIDIN) is a network, run by the International Civil Aviation Organization (ICAO), which makes up part of the aeronautical fixed service. It is used to transmit text or binary messages for the purposes of air traffic control. References Aviation communications
Common ICAO Data Interchange Network
Technology
57
9,931,105
https://en.wikipedia.org/wiki/Entrainment%20%28chronobiology%29
In the study of chronobiology, entrainment refers to the synchronization of a biological clock to an environmental cycle. An example is the interaction between circadian rhythms and environmental cues, such as light and temperature. Entrainment helps organisms adapt their bodily processes to the timing of a changing environment. Entrainment is evident during travel between time zones, which is why humans experience jet lag: the internal clock takes time to re-entrain to the new light–dark cycle. Biological rhythms are endogenous; they persist even in the absence of environmental cues, as they are driven by an internal mechanism, most notably the circadian clock. Of the several possible cues (known as zeitgebers, German for 'time-givers') that can contribute to entrainment of the circadian clock, light has the greatest impact. Units of circadian time (CT) are used in describing entrainment to refer to the relationship between the rhythm and the light signal or pulse. Modes of entrainment There are two general modes of entrainment: phasic and continuous. In the phasic mode, limited interaction with the environment "resets" the clock every day by an amount equal to the "error", the difference between the period of the environmental cycle and the organism's circadian rhythm. Exposure to certain environmental stimuli will cause a phase shift, an abrupt change in the timing of the rhythm. In the continuous mode, the circadian rhythm is continuously adjusted by the environment, usually by constant light. Two properties, the free-running period of an organism and the phase response curve, are the main pieces of information needed to investigate individual entrainment. There are also limits to entrainment. Although there may be individual differences in this limit, most organisms have a limit of entrainment of about ±3 hours. Because of this limit, re-entrainment may take several days. Mechanisms of entrainment The activity/rest cycle (sleep) in animals is one of the circadian rhythms that are normally entrained by environmental cues. In mammals, such endogenous rhythms are generated by the suprachiasmatic nucleus (SCN) of the anterior hypothalamus. Entrainment is accomplished by altering the concentration of clock components through altered gene expression and protein stability. Circadian oscillations occur even in the cells of isolated organs such as the liver and heart, acting as peripheral oscillators, and it is believed that they sync up with the master pacemaker in the mammalian brain, the SCN. Such hierarchical relationships are not the only ones possible: two or more oscillators may couple in order to assume the same period without either being dominant over the other(s). This situation is analogous to coupled pendulum clocks, which can settle into a common period. Health implications When good sleep hygiene is insufficient, a person's lack of synchronization to night and day can have health consequences. There is some variation within normal chronotypes' entrainment; it is normal for humans to awaken anywhere from about 5 a.m. to 9 a.m. However, patients with delayed sleep phase disorder (DSPD), advanced sleep phase disorder (ASPD) and non-24-hour sleep–wake disorder are improperly entrained to light/dark. Applications of entrainment Entrainment is used in various fields to optimize performance and health. In sports, it helps athletes adjust to new time zones quickly. In medicine, light therapy is used to treat circadian rhythm disorders. The principles of entrainment are also applied in occupational health to design better shift work schedules. See also Crepuscular – Animals active at twilight (i.e., dusk and dawn).
Diurnality – Animals active during the day and sleeping at night. Nocturnality – Animals active at night and sleeping during the day. References Further reading Pittendrigh CS (1981) Circadian systems: Entrainment. In Handbook of Behavioral Neurobiology, Vol. 4. Biological Rhythms, J. Aschoff, ed. pp. 239–68, University of California Press, New York. Circadian rhythm
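The daily resetting of the phasic mode described under Modes of entrainment can be sketched numerically (an added illustration; the free-running period of 24.5 hours and the five-day run are purely illustrative):

# Phasic entrainment sketch: an internal clock with a free-running period
# of 24.5 h drifts 0.5 h per cycle; a daily light cue resets the clock by
# the accumulated error, keeping it locked to the 24 h day.
free_running_period = 24.5   # hours, the organism's endogenous period
day_length = 24.0            # hours, the environmental cycle (zeitgeber)

phase_error = 0.0            # offset of the internal clock vs. the environment
for day in range(1, 6):
    phase_error += free_running_period - day_length   # daily drift (+0.5 h)
    print(f"day {day}: drift before the light cue = {phase_error:+.1f} h")
    phase_error = 0.0        # the daily cue resets the clock by the error

Each day the clock accumulates a 0.5-hour error and the cue removes it, which is the "reset by the amount equal to the error" behaviour of the phasic mode.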
Entrainment (chronobiology)
Biology
835
770,475
https://en.wikipedia.org/wiki/Brodie%20knob
A brodie knob (alternative spelling: brody knob) is a doorknob-shaped handle that attaches to the steering wheel of an automobile or other vehicle or equipment with a steering wheel. Other names for this knob include suicide, necker, granny, knuckle buster, and wheel spinner. Design and use The device is a small, independently rotating knob (similar to a classic U.S. door knob) facing the driver that is securely mounted on the outside rim of a steering wheel. The protruding knob is an aftermarket accessory. The free rotation is intended to make steering with one hand easier or faster. Some heavy automobiles without a power steering system tended to have heavy and slow steering responses, requiring hand-over-hand turning of the wheel by the driver, and the knob allowed the driver to "crank" the steering wheel to make faster turns. Brodie knobs were popular on trucks and tractors before the advent of power steering. Their primary use is still in trucks, particularly semi trucks, where they allow simultaneous steering and operation of the radio or gearshift. They are also used on forklifts, farm tractors, construction equipment, riding lawnmowers, and ice resurfacers, where frequent sharp turning is required. The knobs are sometimes installed as an aftermarket accessory on farm and commercial tractors, their primary purpose being to ease one-handed steering while the driver operates other controls with the other hand or is moving in reverse. Some boats are equipped with a helm featuring a stainless-steel wheel with a brodie knob. Etymology and disadvantages The "Steering Wheel Spinner Knob" was invented by Joel R. Thorp of Wisconsin in 1936. The name "Brodie knob" is a reference to Steve Brodie and was meant to describe all manner of reckless stunts that can be performed with the spinner knob. The device allows the driver to turn the steering wheel quickly from fully one side to the other, making it possible to make cars "spin like a top" on snow-covered streets, but it also causes drivers to oversteer at speed because of the driver's reduced feel for the car's steering system and the road. The device is often called a "suicide knob" because it is notoriously useless for controlling the wheel during an emergency. It is also called a "knuckle buster" because of the disadvantage posed by the knob when letting go of the steering wheel after going around a corner: the wheel spins rapidly, and the knob can hit the user's knuckle, forearm or elbow. If the driver is wearing a long-sleeved shirt, the protruding accessory on the steering wheel's rim can also become caught in the sleeve's open cuff by the button. Attempting to free a tangled shirt sleeve from the knob may cause the driver to lose control of the car. The knobs can be dangerous if improperly installed, and in case of a collision, because they can cause additional injuries to the driver upon impact. Other names include "wheel spinner," "granny knob," and "necker's knob", the last because it facilitated driving with only one arm, leaving the other arm free for romantic purposes. Legality Some U.S. states have laws that regulate or prohibit the use of brodie knobs. Some states allow the use of an accessory knob on the steering wheel for drivers who need one to operate a vehicle because of a disability. The U.S. Occupational Safety and Health Administration (OSHA) regulations restrict the use of auxiliary devices for specific construction vehicles.
Moreover, OSHA prohibits modification of industrial equipment without the approval of the equipment manufacturer. In the Netherlands, brodie knobs are only allowed if the driver has a valid medical reason for using one; on forklifts and other equipment requiring frequent sharp steering, they are permitted regardless of medical reason. The regulations in the UK do not allow any protrusions on the steering wheel, except for drivers with disabilities who need modifications to their cars. References External links Automotive accessories Control devices Vehicle parts Tractors
Brodie knob
Technology,Engineering
819
31,734,604
https://en.wikipedia.org/wiki/Sorbinil
Sorbinil (INN) is an aldose reductase inhibitor being investigated for the treatment of diabetic complications including neuropathy and retinopathy. Aldose reductase is an enzyme, present in the lens and brain, that removes excess glucose by converting it to sorbitol. Sorbitol accumulation can lead to the development of cataracts in the lens and neuropathy in peripheral nerves. Sorbinil has been shown to inhibit aldose reductase in human brain and placenta, and in calf and rat lens. At an oral dose of 0.25 mg/kg, sorbinil reduced sorbitol accumulation in the lens and sciatic nerve of diabetic rats. References Aldose reductase inhibitors Hydantoins Fluoroarenes Drugs developed by Pfizer Spiro compounds
Sorbinil
Chemistry
168
22,055,435
https://en.wikipedia.org/wiki/December%202030%20lunar%20eclipse
A penumbral lunar eclipse will occur at the Moon’s descending node of orbit on Monday, December 9, 2030, with an umbral magnitude of −0.1613. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. Occurring only about 7.5 hours before apogee (on December 10, 2030, at 5:05 UTC), the Moon's apparent diameter will be smaller than average. Visibility The eclipse will be completely visible over Africa, Europe, and north, west, central, and south Asia, seen rising over North and South America and setting over east Asia and western Australia. Eclipse details Shown below is a table displaying the details of this particular lunar eclipse, describing the various parameters pertaining to it. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year; each season lasts about 35 days and repeats just short of six months (173 days) later, so two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2030 An annular solar eclipse on June 1. A partial lunar eclipse on June 15. A total solar eclipse on November 25. A penumbral lunar eclipse on December 9. Metonic Preceded by: Lunar eclipse of February 20, 2027 Followed by: Lunar eclipse of September 28, 2034 Tzolkinex Preceded by: Lunar eclipse of October 28, 2023 Followed by: Lunar eclipse of January 21, 2038 Half-Saros Preceded by: Solar eclipse of December 4, 2021 Followed by: Solar eclipse of December 15, 2039 Tritos Preceded by: Lunar eclipse of January 10, 2020 Followed by: Lunar eclipse of November 8, 2041 Lunar Saros 145 Preceded by: Lunar eclipse of November 28, 2012 Followed by: Lunar eclipse of December 20, 2048 Inex Preceded by: Lunar eclipse of December 30, 2001 Followed by: Lunar eclipse of November 19, 2059 Triad Preceded by: Lunar eclipse of February 9, 1944 Followed by: Lunar eclipse of October 10, 2117 Lunar eclipses of 2027–2031 Saros 145 Half-Saros cycle A lunar eclipse will be preceded and followed by solar eclipses by 9 years and 5.5 days (a half saros). This lunar eclipse is related to two total solar eclipses of Solar Saros 152. See also List of lunar eclipses and List of 21st-century lunar eclipses Notes External links 2030-12 2030 in science
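The half-saros relation stated above can be checked in a few lines of Python (an added sketch; the dates are the ones listed in this article):

from datetime import date, timedelta

# One saros is about 6585.32 days, so a half saros is ~3292.66 days,
# i.e. the "9 years and 5.5 days" mentioned above. Date arithmetic keeps
# only whole days, so results can be off by about a day.
half_saros = timedelta(days=6585.32 // 2)   # 3292 whole days

print(date(2021, 12, 4) + half_saros)   # 2030-12-09: this lunar eclipse
print(date(2030, 12, 9) + half_saros)   # 2039-12-14: within a day of the
                                        # December 15, 2039 solar eclipse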
December 2030 lunar eclipse
Astronomy
620
44,441,512
https://en.wikipedia.org/wiki/Sporobolomyces%20salmonicolor
Sporobolomyces salmonicolor is a species of fungus in the subdivision Pucciniomycotina. It occurs in both a yeast state and a hyphal state, the latter formerly known as Sporidiobolus salmonicolor. It is generally considered a Biosafety Risk Group 1 fungus; however, isolates of S. salmonicolor have been recovered from cerebrospinal fluid, infected skin, a nasal polyp, lymphadenitis and a case of endophthalmitis. It has also been reported in AIDS-related infections. The fungus exists predominantly in the anamorphic (asexual) state as a unicellular, haploid yeast, yet this species can sometimes produce a teleomorphic (sexual) state when conjugation of compatible yeast cells occurs. The asexual form consists of a characteristic, pink, ballistosporic yeast. Ballistoconidia are borne on slender extensions of the cell known as sterigmata and are forcibly ejected into the air upon maturity. Levels of airborne yeast cells peak during the night and are abundant in areas of decaying leaves and grains. Three varieties of Sporobolomyces salmonicolor have been described: S. salmonicolor var. albus, S. salmonicolor var. fischerii, and S. salmonicolor var. salmoneus. Taxonomy In 1924, Kluyver and van Niel coined the genus Sporobolomyces and classified it under the Basidiomycota. They recognized that the yeast phase produced by Sporobolomyces exhibited the same forcible discharge mechanism as the basidiospores of the Basidiomycota. They therefore hypothesized that the asexual ballistoconidia of Sporobolomyces are homologous to the basidiospores of the Basidiomycota. Their hypothesis, however, was questioned by many who did not consider the asexual nature of the ballistoconidia a basidiomycetous trait. Its classification as a basidiomycetous yeast was further demonstrated by Nyland (1949) with the discovery of its teleomorph, placed in the genus Sporidiobolus. The teleomorph presented basidiomycetous traits such as the presence of dikaryotic hyphae with clamp connections and the formation of resting spores known as teliospores. In the past, Sporobolomyces salmonicolor was thought to be conspecific with Sporobolomyces johnsonii; however, it is now well established that they are distinct taxa. Sporobolomyces salmonicolor is distinguished from S. johnsonii by the former's inability to assimilate maltose, methyl-α-D-glucoside, cellobiose or salicin. Morphology Sporobolomyces salmonicolor produces visible, liposoluble carotenoid pigments, resulting in salmon-pink colonies. The colony surface is smooth and has a pasty texture. There is considerable variation in cell and colony morphology when S. salmonicolor is grown in culture. The budding yeast-like cells produced during the asexual stage are ellipsoidal to subcylindrical and 8–25 × 2–5.5 μm. They can occur singly or in pairs. The ballistoconidia are kidney-shaped and can range in size from 6–18 × 2.5–7.0 μm. The characteristic ballistoconidia are borne by extension of the sterigmata, which can reach up to 50 μm in length. Both pseudohyphae and true hyphae may also be present. In its sexual state, Sporobolomyces salmonicolor produces dikaryotic hyphae with clamp connections. At the terminal end of the hyphae, thick-walled teliospores are produced. Teliospores are 9–15 μm in diameter, brown, spherical, and contain lipid-rich globules. Upon germination of the teliospore, basidia with basidiospores are produced. Basidia are transversely septate, two-celled and 4–6 × 20–25 μm in size.
Each basidium will generally produce two large basidiospores that are 5–6 × 7–10 μm in size. Prior to the production of the basidium, endospores have also been known to form in the interior of teliospores. This phenomenon is associated with the production of meiospores within the teliospore cytoplasm, ultimately released by rupture of the teliospore wall. Life cycle In 1969, Van der Walt and Pitout elucidated the life cycle of S. salmonicolor. They studied a colony of S. salmonicolor grown from a single cell in culture. After several generations, they observed a 2:1 ratio of diploid and haploid cells, respectively. The diploid cells were recovered from thick-walled resting spores known as teliospores. Meiosis was occurring within the teliospore, followed by germination of the teliospore and the beginning of the haploid yeast state. Sporobolomyces salmonicolor is a heterothallic species; two mating types are known. Induction of the sexual stage begins with anastomosis of compatible yeast cells to form dikaryotic hyphae with clamp connections. Hyphae have "simple" septal pores that allow continuity of the cytoplasm between cells. At the terminal end of the hypha, thick-walled resting spores called teliospores form. These germinate to form a transversely septate basidium that bears two large basidiospores. Physiology The optimum temperature for growth of Sporobolomyces salmonicolor is between , being the highest tolerable temperature. Growth does not occur at . This species does not undergo fermentation. Additionally, Sporobolomyces salmonicolor shows positive urease activity and a positive staining response when stained with diazonium blue B. Diazonium blue B staining is a technique used to classify asexual yeasts as members of the Zygomycota, Basidiomycota or Ascomycota. The major ubiquinone present is Q-10. The cell wall of S. salmonicolor contains fucose, mannose, glucose and galactose; however, xylose is absent. Colonies grow in the presence of glucose, sucrose, maltose, cellobiose, α,α-trehalose, melezitose, D-arabinose, ethanol, glycerol, D-mannitol, D-glucitol, D-gluconate, succinate, nitrate and urea. This species does not assimilate myo-inositol or D-glucuronate, and does not form extracellular starch-like compounds. Distribution and ecology Sporobolomyces salmonicolor has a broad geographical distribution. It has been isolated from many areas across the world including Europe, North and South America, Asia, Africa and Antarctica, where it is known from a broad spectrum of substrates. It is principally characterized as a phyllosphere fungus and is commonly found in areas of decaying organic material such as leaves and grains as well as ripening grapes. Isolates of this species, however, have been recovered from freshwater, marine water and clinical specimens. It has also been isolated from agricultural areas and indoor built environments. In agricultural environments, Sporobolomyces salmonicolor can pose a respiratory hazard for agricultural workers. Agricultural workers are subject to increased exposure when they partake in activities involving the handling of grains. Sporobolomyces salmonicolor has additionally been isolated from straw in hay barns or hay lofts. Workers in these settings should consider the proper use of masks to avoid infection. If individuals show atopic symptoms, a change in occupation might be considered. In indoor environments, S. salmonicolor has been associated with severe water and mould damage. Flooded basements and utility rooms are places where S.
salmonicolor may be recovered. This fungus will also commonly associate with standing water films, although little has been documented on this. It can form a pink film around stagnant toilet water. The most efficient way to avoid exposure in the home is to eliminate moisture sources and keep bathrooms clean, dry and ventilated. Pathogenicity Sporobolomyces salmonicolor is generally interpreted to be a Biosafety Risk Group 1 fungus. It is considered an opportunistic fungal pathogen of immunocompromised individuals and has been reported in AIDS-related infections. Sporobolomyces salmonicolor has been associated with nasal polyps, lymphadenitis, bone marrow involvement in AIDS patients, infected skin, pseudomeningitis and a case of endophthalmitis. S. salmonicolor is also considered a type 1 allergen and has been known to cause asthma, nosocomial allergic alveolitis, and rhinitis. A 31-year-old woman went to her physician because of decreased vision in her left eye. The left eye showed fibrinous exudates, posterior synechiae and vitritis. After a vitreous sample was sent to the lab for identification, the yeast was identified as Sporobolomyces salmonicolor. The recommended treatment was voriconazole 200 mg, twice a day for two months. Improvement in the left eye was seen within a week. Exposure to mould and yeast within a military hospital in Finland led to an outbreak of asthma, alveolitis and rhinitis. The building was known to have severe water and mould damage. After inhalation provocation tests were performed, four cases of asthma caused by Sporobolomyces salmonicolor were reported. An additional seven workers were diagnosed with rhinitis. All seven individuals with rhinitis reacted positively in nasal S. salmonicolor provocation tests. Sporobolomyces salmonicolor was recovered from the cerebrospinal fluid (CSF) of three patients in a hospital, one of whom was a kidney transplant recipient. Heavy growth of S. salmonicolor was recovered from utility rooms on the floors of each patient, and from the hospital rooms of two patients. It was suggested that this particular case was most likely caused by contamination: S. salmonicolor was probably introduced into the CSF during the collection process. Treatment Clinical infections due to Sporobolomyces salmonicolor are rare and there are currently no standard therapies for infection. Treatment with amphotericin B alone, and amphotericin B followed by either ketoconazole or fluconazole, have been successful. In one case of endophthalmitis (described in the case report above), treatment with voriconazole was likewise successful. A small number of isolates, however, demonstrate resistance to fluconazole and micafungin. Popular culture The genus Sporobolomyces was the unexpected subject of a poem, The Sporobolomycetologist, with an accompanying musical score, written by the eccentric Canadian mycologist Arthur Henry Reginald Buller. References Sporidiobolales Fungi of Europe Fungi of North America Fungi of South America Fungi of Africa Fungi of Asia Fungi of Antarctica Fungi described in 1894 Fungal pathogens of humans Fungus species
Sporobolomyces salmonicolor
Biology
2,419
15,354,822
https://en.wikipedia.org/wiki/PSG10
Pregnancy-specific beta-1-glycoprotein 10 is a protein that in humans is encoded by the PSG10 gene. References
PSG10
Chemistry
33
69,116,423
https://en.wikipedia.org/wiki/Reducing%20subspace
In linear algebra, a reducing subspace $W$ of a linear map $T: H \to H$ from a Hilbert space $H$ to itself is an invariant subspace of $T$ whose orthogonal complement $W^\perp$ is also an invariant subspace of $T$. That is, $T(W) \subseteq W$ and $T(W^\perp) \subseteq W^\perp$. One says that the subspace $W$ reduces the map $T$. One says that a linear map is reducible if it has a nontrivial reducing subspace; otherwise one says it is irreducible. If $H$ is of finite dimension $n$ and $W$ is a reducing subspace of the map $T$ represented under a basis $B$ by the matrix $M$, then $M$ can be expressed as the sum $M = P_W M P_W + P_{W^\perp} M P_{W^\perp}$, where $P_W$ is the matrix of the orthogonal projection from $H$ onto $W$ and $P_{W^\perp} = I - P_W$ is the matrix of the projection onto $W^\perp$. (Here $I$ is the identity matrix.) Furthermore, $H$ has an orthonormal basis $B'$ with a subset that is an orthonormal basis of $W$. If $Q$ is the transition matrix from $B$ to $B'$, then with respect to $B'$ the matrix $Q^{-1} M Q$ representing $T$ is a block-diagonal matrix $Q^{-1} M Q = \begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}$, where $A$ is a $d \times d$ matrix with $d = \dim W$, and $C$ is an $(n - d) \times (n - d)$ matrix. References Linear algebra Matrices
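As a minimal finite-dimensional illustration (an added example; the matrices are chosen purely for simplicity): on the Hilbert space $\mathbb{C}^2$, the diagonal map $M = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$ is reduced by $W = \operatorname{span}\{(1,0)\}$, since both $W$ and $W^\perp = \operatorname{span}\{(0,1)\}$ are invariant; $M$ is already block-diagonal with the $1 \times 1$ blocks $(2)$ and $(3)$. By contrast, for the Jordan block $N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ the subspace $\operatorname{span}\{(1,0)\}$ is invariant, but its orthogonal complement $\operatorname{span}\{(0,1)\}$ is not (since $N(0,1)^{T} = (1,0)^{T}$), so no nontrivial subspace reduces $N$ and it is irreducible.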
Reducing subspace
Mathematics
182
16,797,206
https://en.wikipedia.org/wiki/Ursa%20Major%20II%20Dwarf
Ursa Major II Dwarf (UMa II dSph) is a dwarf spheroidal galaxy situated in the Ursa Major constellation and discovered in 2006 in the data obtained by the Sloan Digital Sky Survey. The galaxy is located approximately 30 kpc from the Sun and moves towards the Sun at a velocity of about 116 km/s. It has an elliptical shape (ratio of axes ~ 2:1) with a half-light radius of about 140 pc. Ursa Major II is one of the smallest and faintest satellites of the Milky Way—its integrated luminosity is about 4000 times that of the Sun (absolute visible magnitude of about −4.2), which is much lower than the luminosity of the majority of globular clusters. UMa II is even less luminous than some individual stars, like Canopus in the Milky Way; it is comparable in luminosity to Bellatrix in Orion. However, its mass is about 5 million solar masses, which means that the galaxy's mass-to-light ratio is around 2000. This may be an overestimate, as the galaxy has a somewhat irregular shape and may be in the process of tidal disruption. The stellar population of UMa II consists mainly of old stars formed at least 10 billion years ago. The metallicity of these old stars is also very low at , which means that their heavy-element abundance is about 300 times lower than the Sun's. The stars of UMa II were probably among the first stars to form in the Universe. Currently, there is no star formation in UMa II. The measurements have so far failed to detect any neutral hydrogen in it—the upper limit is only 562 solar masses. See also Ursa Major I Dwarf Ursa Minor Dwarf Notes References Dwarf spheroidal galaxies Ursa Major Local Group Milky Way Subgroup
Ursa Major II Dwarf
Astronomy
368
24,507,573
https://en.wikipedia.org/wiki/Gymnopilus%20subpenetrans
Gymnopilus subpenetrans is a species of mushroom in the family Hymenogastraceae. See also List of Gymnopilus species External links Gymnopilus subpenetrans at Index Fungorum subpenetrans Taxa named by William Alphonso Murrill Fungus species
Gymnopilus subpenetrans
Biology
68
3,739,375
https://en.wikipedia.org/wiki/Q-systems
Q-systems are a method of directed graph transformations according to given grammar rules, developed at the Université de Montréal by Alain Colmerauer in 1967–70 for use in natural language processing. The Université de Montréal's machine translation system, TAUM-73, used Q-systems as its language formalism. The data structure manipulated by a Q-system is a Q-graph, which is a directed acyclic graph with one entry node and one exit node, where each arc bears a labelled ordered tree. An input sentence is usually represented by a linear Q-graph where each arc bears a word (a tree reduced to one node labelled by this word). After analysis, the Q-graph is usually a bundle of 1-arc paths, each arc bearing a possible analysis tree. After generation, the goal is usually to produce as many paths as desired outputs, with again one word per arc. A Q-system consists of a sequence of Q-treatments, each being a set of Q-rules of the form <matched_path> == <added_path> [<condition>]. The Q-treatments are applied in sequence, unless one of them produces the empty Q-graph, in which case the result is the last Q-graph obtained. The three parts of a rule can contain variables for labels, trees, and forests. All variables after "==" must appear in the <matched_path> part. Variables are local to rules. A Q-treatment works in two steps, addition and cleaning. It first applies all its rules exhaustively, using instantiation (one-way unification), thereby adding new paths to the current Q-graph (added arcs and their trees can be used to produce new paths). If and when this addition process halts, all arcs used in some successful rule application are erased, as well as all unused arcs that are no longer on any path from the entry node to the exit node. Hence, the result, if any (if the addition step terminates), is again a Q-graph. This allows several Q-systems to be chained, each of them performing a specialized task, together forming a complex system. For example, TAUM-73 consisted of fifteen chained Q-systems. An extension of the basic idea of the Q-systems, namely to replace instantiation by unification (to put it simply, to allow "new" variables in the right-hand side of a rule, and to replace parametrized labelled trees by logical terms), led to Prolog, designed by Alain Colmerauer and Philippe Roussel in 1972. Refinements in the other direction (reducing non-determinism and introducing typed labels) by John Chandioux led to GramR, used for programming METEO from 1985 onward. In 2009, Hong Thai Nguyen of GETALP, Laboratoire d'Informatique de Grenoble, reimplemented the Q-language in C, using ANTLR to compile the Q-systems and the Q-graphs, and an algorithm proposed by Christian Boitet (as none had been published and the sources of the previous Fortran implementation had been lost). That implementation was corrected, completed and extended (to labels using Unicode characters, not only the printable characters of the CDC 6600 of the historical version) by David Cattanéo in 2010–11. See also METEO System References Further reading Colmerauer, A: Les systèmes Q ou un formalisme pour analyser et synthétiser des phrases sur ordinateur. Mimeo, Montréal, 1969. Nguyen, H-T: Des systèmes de TA homogènes aux systèmes de TAO hétérogènes. Thèse, UJF, Grenoble, 2009. External links http://unldeco.imag.fr/unldeco/SystemsQ.po?localhost=/home/nguyenht/SYS-Q/MONITEUR/ new Q-systems demonstration Computational linguistics Grammar frameworks Linguistic research software
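To make the addition/cleaning cycle concrete, here is a minimal Python sketch of one Q-treatment applied to a linear Q-graph (an added illustration, not the historical implementation: arcs carry plain string labels rather than labelled ordered trees, rules match a fixed label sequence with no variables or conditions, and all names are invented for the example):

def find_paths(arcs, node, labels):
    # All arc sequences starting at `node` whose labels spell out `labels`.
    if not labels:
        yield []
        return
    for arc in arcs:
        src, dst, label = arc
        if src == node and label == labels[0]:
            for rest in find_paths(arcs, dst, labels[1:]):
                yield [arc] + rest

def reachable(arcs, start, forward=True):
    # Nodes reachable from `start`, following arcs forward or backward.
    seen, frontier = {start}, [start]
    while frontier:
        n = frontier.pop()
        for src, dst, _ in arcs:
            a, b = (src, dst) if forward else (dst, src)
            if a == n and b not in seen:
                seen.add(b)
                frontier.append(b)
    return seen

def q_treatment(arcs, rules, entry, exit_):
    # Phase 1 (addition): apply every rule exhaustively, bridging each
    # matched path with a new arc carrying the rewritten label.
    arcs, used = set(arcs), set()
    changed = True
    while changed:
        changed = False
        for matched_labels, new_label in rules:
            for node in {a[0] for a in arcs}:
                for path in list(find_paths(arcs, node, matched_labels)):
                    new_arc = (path[0][0], path[-1][1], new_label)
                    used.update(path)
                    if new_arc not in arcs:
                        arcs.add(new_arc)
                        changed = True
    # Phase 2 (cleaning): erase arcs consumed by rules, then arcs that no
    # longer lie on any path from the entry node to the exit node.
    arcs -= used
    live = reachable(arcs, entry) & reachable(arcs, exit_, forward=False)
    return {a for a in arcs if a[0] in live and a[1] in live}

graph = {(0, 1, "ne"), (1, 2, "pas")}          # linear Q-graph: ne -- pas
rules = [(("ne", "pas"), "NEG")]               # rewrite the pair as NEG
print(q_treatment(graph, rules, entry=0, exit_=2))   # {(0, 2, 'NEG')}

Running it on the two-arc graph for "ne pas" yields a single arc labelled NEG, mirroring how a real Q-treatment replaces a matched path by its rewritten form and then sweeps away the consumed arcs.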
Q-systems
Technology
827
47,166,226
https://en.wikipedia.org/wiki/The%20Moving%20Museum
The Moving Museum is a not-for-profit organisation that runs a nomadic programme of contemporary art exhibitions. It has held projects in Dubai, Istanbul, and London comprising large-scale exhibitions, artist residencies, public programming, publishing, artwork commissions, and digital programming. Artists are invited through a collaborative curatorial model composed of contributors from various disciplines and are included in diverse ways: as producers, collaborators, curators, and advisors. Over 50 new projects have been commissioned across a wide range of media including works by Amalia Ulman, Broomberg and Chanarin, Clunie Reid, Hannah Perry, Hito Steyerl, Jeremy Deller, Jon Rafman, Jeremy Bailey, James Bridle, Michael Rakowitz, Tom Sachs, Ryan Gander, Mai-Thu Perret, Slavs and Tatars, Zach Blas, Anne de Vries, Ben Schumacher, Ming Wong and Lucky PDF. The Moving Museum is an independent and non-political organization founded by Aya Mousawi and Simon Sakhai in 2012; a registered Community Interest Company (CIC) in England and Wales; and a registered 501(c)(3) Charity in the United States of America. The organisation's website was designed by new media artist Jeremy Bailey with Harm van den Dorpel, Joe Hamilton, and Jonas Lund. References Art exhibitions in London Art museums and galleries in the United Arab Emirates Art museums and galleries in Istanbul Internet art Art and design organizations International cultural organizations Art museums and galleries established in 2012 Community interest companies
The Moving Museum
Engineering
315
14,005,182
https://en.wikipedia.org/wiki/Inter-protocol%20exploitation
Inter-protocol exploitation is a class of security vulnerabilities that takes advantage of interactions between two communication protocols, for example the protocols used in the Internet. It is commonly discussed in the context of the Hypertext Transfer Protocol (HTTP). This attack uses the potential of the two different protocols meaningfully communicating commands and data. It was popularized in 2007 and publicly described in research of the same year. The general class of attacks that it refers to has been known since at least 1994 (see the Security Considerations section of RFC 1738). Internet Protocol implementations allow for the possibility of encapsulating exploit code to compromise a remote program that uses a different protocol. An attack can use inter-protocol communication to establish the preconditions for launching an inter-protocol exploit. For example, this process could negotiate the initial authentication communication for a vulnerability in password parsing. Inter-protocol exploitation is where one protocol attacks a service running a different protocol. This is a legacy problem, because the specifications of the protocols did not take into consideration an attack of this type. Technical details The two protocols involved in the vulnerability are termed the carrier and target. The carrier encapsulates the commands and/or data. The target protocol is used for communication to the intended victim service. Inter-protocol communication will be successful if the carrier protocol can encapsulate the commands and/or data sufficiently to meaningfully communicate to the target service. Two preconditions need to be met for successful communication across protocols: encapsulation and error tolerance. The carrier protocol must encapsulate the data and commands in a manner that the target protocol can understand. It is highly likely that the resulting data stream will induce parsing errors in the target protocol. The target protocol must be sufficiently forgiving of errors. During the inter-protocol connection it is likely that a percentage of the communication will be invalid and cause errors. To meet this precondition, the target protocol implementation must continue processing despite these errors. Current implications One of the major points of concern is the potential for this attack vector to reach through firewalls and DMZs. Inter-protocol exploits can be transmitted over HTTP and launched from web browsers on an internal subnet. An important point is that the web browser is not exploited through any conventional means. Example: JavaScript delivered over HTTP and communicating over the IRC protocol.

// Build a form that posts to an IRC server's port; the error-tolerant IRC
// daemon skips the HTTP header lines it cannot parse and interprets the
// embedded lines below as IRC commands. (Modern browsers now block
// requests to port 6667 precisely because of attacks like this.)
var form = document.createElement('form');
form.setAttribute('method', 'post');
form.setAttribute('action', 'http://irc.example.net:6667');
form.setAttribute('enctype', 'multipart/form-data');
var textarea = document.createElement('textarea');
textarea.setAttribute('name', 'payload'); // a form field needs a name to be submitted
// The payload: each line is a valid IRC command once the surrounding
// HTTP request and multipart framing are discarded by the server.
textarea.value = "USER A B C D \nNICK turtle\nJOIN #hack\nPRIVMSG #hackers: I like turtles\n";
form.appendChild(textarea);
document.body.appendChild(form);
form.submit();

Known examples of the vulnerability were also demonstrated on files constructed to be valid HTML code and a valid BMP image at the same time. References Computer network security Injection exploits
Inter-protocol exploitation
Technology,Engineering
686
48,624,090
https://en.wikipedia.org/wiki/Mireille%20Bousquet-M%C3%A9lou
Mireille Bousquet-Mélou (born 12 May 1967) is a French mathematician who specializes in enumerative combinatorics and who works as a senior researcher for the Centre national de la recherche scientifique (CNRS) at the computer science department (LaBRI) of the University of Bordeaux. Education and career Bousquet-Mélou was born in Albi, the second daughter of two high school teachers, and grew up in Pau, where her family moved when she was three. She studied at the École Normale Supérieure in Paris from 1986 to 1990, as the only woman in her entering class of mathematicians, and earned an agrégation in mathematics in 1989, with Xavier Gérard Viennot as her mentor in combinatorics. She completed her Ph.D. at the University of Bordeaux in 1991, with a dissertation on the enumeration of orthogonally convex polyominoes supervised by Viennot. She joined CNRS as a junior researcher in 1990, and completed a habilitation at Bordeaux in 1996. Awards and honors Bousquet-Mélou won the bronze medal of the CNRS in 1993, and the silver medal in 2014. Linköping University gave her an honorary doctorate in 2005, and the French Academy of Sciences gave her their Charles-Louis de Saulces de Freycinet Prize in 2009. In 2006, she was an invited speaker at the International Congress of Mathematicians in the section on combinatorics. Her presentation at the congress concerned connections between enumerative combinatorics, formal language theory, and the algebraic structure of generating functions, according to which enumeration problems whose generating functions are rational functions are often isomorphic to regular languages, and problems whose generating functions are algebraic are often isomorphic to unambiguous context-free languages. References External links Home page Combinatorialists 1967 births Living people 21st-century French women mathematicians 20th-century French mathematicians 21st-century French mathematicians 20th-century French scientists 21st-century French scientists 20th-century French women scientists 21st-century French women scientists 20th-century French women mathematicians
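A standard textbook illustration of the rational/regular correspondence described above (an added example, not taken from her congress presentation): the binary words containing no factor 11 form a regular language, their counting sequence satisfies the Fibonacci recurrence $a_n = a_{n-1} + a_{n-2}$ with $a_0 = 1$ and $a_1 = 2$, and the generating function $\sum_{n \ge 0} a_n x^n = \frac{1+x}{1-x-x^2}$ is rational, exactly as the rational/regular side of the correspondence predicts.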
Mireille Bousquet-Mélou
Mathematics
433
5,069,503
https://en.wikipedia.org/wiki/Clathrate%20gun%20hypothesis
The clathrate gun hypothesis is a proposed explanation for the periods of rapid warming during the Quaternary. The hypothesis is that changes in fluxes in upper intermediate waters in the ocean caused temperature fluctuations that alternately accumulated and occasionally released methane clathrate on upper continental slopes. This would have had an immediate impact on the global temperature, as methane is a much more powerful greenhouse gas than carbon dioxide. Despite its atmospheric lifetime of around 12 years, methane has a global warming potential 72 times that of carbon dioxide over 20 years, and 25 times over 100 years (33 when accounting for aerosol interactions). It is further proposed that these warming events caused the Bond cycles and individual interstadial events, such as the Dansgaard–Oeschger interstadials. The hypothesis was supported for the Bølling–Allerød warming and Preboreal periods, but not for Dansgaard–Oeschger interstadials, although there are still debates on the topic. While it may be important on millennial timescales, it is no longer considered relevant for near-future climate change: the IPCC Sixth Assessment Report states "It is very unlikely that gas clathrates (mostly methane) in deeper terrestrial permafrost and subsea clathrates will lead to a detectable departure from the emissions trajectory during this century". Mechanism Methane clathrate, also known commonly as methane hydrate, is a form of water ice that contains a large amount of methane within its crystal structure. Potentially large deposits of methane clathrate have been found under sediments on the ocean floors of the Earth, although estimates of the total resource size given by various experts differ by many orders of magnitude, leaving doubt as to the size of methane clathrate deposits (particularly in the viability of extracting them as a fuel resource). Indeed, cores of greater than 10 centimeters' contiguous depth had only been found in three sites as of 2000, and some resource reserve size estimates for specific deposits/locations have been based primarily on seismology. The sudden release of large amounts of natural gas from methane clathrate deposits, in runaway climate change, could be a cause of past, future, and present climate changes. In the Arctic Ocean, clathrates can exist in shallower water, stabilized by lower temperatures rather than higher pressures; these may potentially be marginally stable much closer to the surface of the sea-bed, stabilized by a frozen 'lid' of permafrost preventing methane escape. The so-called self-preservation phenomenon has been studied by Russian geologists starting in the late 1980s. This metastable clathrate state can be a basis for release events of methane excursions, such as during the interval of the Last Glacial Maximum. A study from 2010 concluded that metastable methane clathrates in the East Siberian Arctic Shelf (ESAS) region could act as a trigger of abrupt climate warming. Possible past releases Studies published in 2000 considered this hypothetical effect to be responsible for warming events in and at the end of the Last Glacial Maximum. Although periods of increased atmospheric methane match periods of continental-slope failure, later work found that the distinct deuterium/hydrogen (D/H) isotope ratio indicated that wetland methane emissions were the main contributor to atmospheric methane concentrations.
While there were major dissociation events during the last deglaciation, with the Bølling–Allerød warming triggering the disappearance of the entire methane hydrate deposit in the Barents Sea within 5000 years, those events failed to counteract the onset of the major Younger Dryas cooling period, suggesting that most of the methane stayed within the seawater after being liberated from the seafloor deposits, with very little entering the atmosphere. In 2008, it was suggested that equatorial permafrost methane clathrate may have had a role in the sudden warm-up of "Snowball Earth", 630 million years ago. Other events potentially linked to methane hydrate excursions are the Permian–Triassic extinction event and the Paleocene–Eocene Thermal Maximum. Modern deposits Most deposits of methane clathrate are in sediments too deep to respond rapidly, and 2007 modelling by Archer suggests that the methane forcing derived from them should remain a minor component of the overall greenhouse effect. Clathrate deposits destabilize from the deepest part of their stability zone, which is typically hundreds of metres below the seabed. A sustained increase in sea temperature will warm its way through the sediment eventually, and cause the shallowest, most marginal clathrate to start to break down; but it will typically take on the order of a thousand years or more for the temperature change to get that far into the seabed. Further, subsequent research on midlatitude deposits in the Atlantic and Pacific Ocean found that any methane released from the seafloor, no matter the source, fails to reach the atmosphere once the depth exceeds , while geological characteristics of the area make it impossible for hydrates to exist at depths shallower than . However, some methane clathrate deposits in the Arctic are much shallower than the rest, which could make them far more vulnerable to warming. A trapped gas deposit on the continental slope off Canada in the Beaufort Sea, located in an area of small conical hills on the ocean floor, is just below sea level and considered the shallowest known deposit of methane hydrate. However, the East Siberian Arctic Shelf averages 45 meters in depth, and it is assumed that below the seafloor, sealed by sub-sea permafrost layers, hydrate deposits are located. This would mean that where warming opens taliks or pingo-like features within the shelf, these would also serve as gas migration pathways for the formerly frozen methane, and a lot of attention has been paid to that possibility. Shakhova et al. (2008) estimate that not less than 1,400 gigatonnes of carbon is presently locked up as methane and methane hydrates under the Arctic submarine permafrost, and 5–10% of that area is subject to puncturing by open taliks. Their paper initially included the line that the "release of up to 50 gigatonnes of predicted amount of hydrate storage [is] highly possible for abrupt release at any time". A release on this scale would increase the methane content of the planet's atmosphere by a factor of twelve, equivalent in greenhouse effect to a doubling in the 2008 level of carbon dioxide.
This is what led to the original clathrate gun hypothesis, and in 2008 the United States Department of Energy National Laboratory system and the United States Geological Survey's Climate Change Science Program both identified potential clathrate destabilization in the Arctic as one of the four most serious scenarios for abrupt climate change, which were singled out for priority research. The USCCSP released a report in late December 2008 estimating the gravity of this risk. A 2012 study of the effects posited by the original hypothesis, based on a coupled climate–carbon cycle model (GCM), assessed a 1000-fold (from <1 to 1000 ppmv) methane increase—within a single pulse, from methane hydrates (based on carbon amount estimates for the PETM, with ~2000 GtC), and concluded it would increase atmospheric temperatures by more than 6 °C within 80 years. Further, carbon stored in the land biosphere would decrease by less than 25%, suggesting a critical situation for ecosystems and farming, especially in the tropics. Another 2012 assessment of the literature identified methane hydrates on the shelf of the East Arctic seas as a potential trigger. A risk of seismic activity being potentially responsible for mass methane releases has been considered as well. In 2012, seismic observations of destabilizing methane hydrate along the continental slope of the eastern United States, following the intrusion of warmer ocean currents, suggested that underwater landslides could release methane. The estimated amount of methane hydrate in this slope is 2.5 gigatonnes (about 0.2% of the amount required to cause the PETM), and it is unclear if the methane could reach the atmosphere. However, the authors of the study caution: "It is unlikely that the western North Atlantic margin is the only area experiencing changing ocean currents; our estimate of 2.5 gigatonnes of destabilizing methane hydrate may therefore represent only a fraction of the methane hydrate currently destabilizing globally." Bill McGuire notes, "There may be a threat of submarine landslides around the margins of Greenland, which are less well explored. Greenland is already uplifting, reducing the pressure on the crust beneath and also on submarine methane hydrates in the sediment around its margins, and increased seismic activity may be apparent within decades as active faults beneath the ice sheet are unloaded. This could provide the potential for the earthquake or methane hydrate destabilisation of submarine sediment, leading to the formation of submarine slides and, perhaps, tsunamis in the North Atlantic." Observed emissions East Siberian Arctic Shelf Research carried out in 2008 in the Siberian Arctic showed methane releases on the annual scale of millions of tonnes, a substantial increase on the previous estimate of 0.5 million tonnes per year, apparently through perforations in the seabed permafrost, with concentrations in some regions reaching up to 100 times normal levels. The excess methane has been detected in localized hotspots in the outfall of the Lena River and the border between the Laptev Sea and the East Siberian Sea. At the time, some of the melting was thought to be the result of geological heating, but more thawing was believed to be due to the greatly increased volumes of meltwater being discharged from the Siberian rivers flowing north.
By 2013, the same team of researchers used multiple sonar observations to quantify the density of bubbles emanating from subsea permafrost into the ocean (a process called ebullition), and found that 100–630 mg of methane per square meter is emitted daily along the East Siberian Arctic Shelf (ESAS) into the water column. They also found that during storms, when wind accelerates air-sea gas exchange, methane levels in the water column drop dramatically. Observations suggest that methane release from seabed permafrost will progress slowly, rather than abruptly. However, Arctic cyclones, fueled by global warming, and further accumulation of greenhouse gases in the atmosphere could contribute to more rapid methane release from this source. Altogether, their updated estimate now amounted to 17 million tonnes per year. However, these findings were soon questioned, as this rate of annual release would mean that the ESAS alone would account for between 28% and 75% of the observed Arctic methane emissions, which contradicts many other studies. In January 2020, it was found that the rate at which methane enters the atmosphere after it has been released from the shelf deposits into the water column had been greatly overestimated, and observations of atmospheric methane fluxes taken from multiple ship cruises in the Arctic instead indicate that only around 3.02 million tonnes of methane are emitted annually from the ESAS. A modelling study published in 2020 suggested that under present-day conditions, annual methane release from the ESAS may be as low as 1000 tonnes, with 2.6–4.5 million tonnes representing the peak potential of turbulent emissions from the shelf. Beaufort Sea continental slope A radiocarbon dating study in 2018 found that beyond the 30-meter isobath, only around 10% of the methane in surface waters can be attributed to ancient permafrost or methane hydrates. The authors suggested that even a significantly accelerated methane release would still largely fail to reach the atmosphere. Svalbard Hong et al. 2017 studied methane seepage in the shallow Arctic seas of the Barents Sea close to Svalbard. Although the temperature at the seabed has fluctuated seasonally over the last century, the fluctuation has only affected the release of methane to a depth of about 1.6 meters below the sediment-water interface. Hydrates can be stable through the top 60 meters of the sediments, and the currently observed releases originate from deeper below the sea floor. They conclude that the increased methane flux started hundreds to thousands of years ago, attributing it to "..episodic ventilation of deep reservoirs rather than warming-induced gas hydrate dissociation." Research by Klaus Wallmann et al. 2018 concluded that hydrate dissociation at Svalbard 8,000 years ago was due to isostatic rebound (continental uplift following deglaciation). As a result, the water depth became shallower, with less hydrostatic pressure, without further warming. The study also found that today's deposits at the site become unstable at a depth of ~400 meters, due to seasonal bottom water warming, and it remains unclear if this is due to natural variability or anthropogenic warming. Moreover, another paper published in 2017 found that only 0.07% of the methane released from the gas hydrate dissociation at Svalbard appears to reach the atmosphere, and usually only when wind speeds were low.
In 2020, a subsequent study confirmed that only a small fraction of methane from the Svalbard seeps reaches the atmosphere, and that wind speed holds a greater influence on the rate of release than the dissolved methane concentration on site. Finally, a paper published in 2017 indicated that the methane emissions from at least one seep field at Svalbard were more than compensated for by the enhanced carbon dioxide uptake due to the greatly increased phytoplankton activity in this nutrient-rich water. The daily amount of carbon dioxide absorbed by the phytoplankton was 1,900 times greater than the amount of methane emitted, and the negative (i.e. indirectly cooling) radiative forcing from the CO2 uptake was up to 251 times greater than the warming from the methane release. Current outlook In 2014, based on their research on the northern United States Atlantic marine continental margins from Cape Hatteras to Georges Bank, a group of scientists from the US Geological Survey, the Department of Geosciences, Mississippi State University, the Department of Geological Sciences, Brown University, and Earth Resources Technology found widespread leakage of methane from the seafloor, but they did not assign specific dates, beyond suggesting that some of the seeps were more than 1000 years old. In March 2017, a meta-analysis by the USGS Gas Hydrates Project likewise concluded that there was no evidence that methane from dissociating hydrates was reaching the atmosphere in climatically significant quantities. In June 2017, scientists from the Center for Arctic Gas Hydrate, Environment and Climate (CAGE) at the University of Tromsø published a study describing over a hundred ocean sediment craters, some 300 meters wide and up to 30 meters deep, formed by explosive eruptions attributed to destabilizing methane hydrates following ice-sheet retreat during the last glacial period, around 15,000 years ago, a few centuries after the Bølling–Allerød warming. These areas around the Barents Sea still seep methane today, and still-existing bulges containing methane reservoirs could eventually meet the same fate. Later that same year, the Arctic Council published the SWIPA 2017 report, in which it cautioned that "Arctic sources and sinks of greenhouse gases are still hampered by data and knowledge gaps." In 2018, a perspective piece devoted to tipping points in the climate system suggested that the climate change contribution from methane hydrates would be "negligible" by the end of the century, though it could grow larger on millennial timescales. In 2021, the IPCC Sixth Assessment Report no longer included methane hydrates in its list of potential tipping points, saying that "it is very unlikely that CH4 emissions from clathrates will substantially warm the climate system over the next few centuries." The report also linked terrestrial hydrate deposits to the gas emission craters discovered in the Yamal Peninsula in Siberia, Russia beginning in July 2014, but noted that since terrestrial gas hydrates predominantly form at a depth below 200 meters, a substantial response within the next few centuries can be ruled out. Likewise, a 2022 assessment of tipping points described methane hydrates as a "threshold-free feedback" rather than a tipping point. In fiction The science fiction novel Mother of Storms by John Barnes offers a fictional example of catastrophic climate change caused by methane clathrate release. In The Life Lottery by Ian Irvine, unprecedented seismic activity triggers a release of methane hydrate, reversing global cooling. The hypothesis is the basis of an experiment in the PlayStation 2 game Death By Degrees.
In Transcendent by Stephen Baxter, averting such a crisis is a major plotline. The novel The Black Silent by author David Dun features this idea as a key scientific point. In the anime Ergo Proxy, a string of explosions in the methane hydrate reserves wipes out 85% of species on Earth. The novel The Far Shore of Time by Frederik Pohl features an alien race attempting to destroy humanity by bombing the methane clathrate reserves, thus releasing the gas into the atmosphere. The novel The Swarm by Frank Schätzing features what first appear to be freak events related to the world's oceans. In Charles Stross' Laundry Files universe, an intentionally triggered clathrate gun scenario is viewed as a possible retaliatory strategy that could be utilized by Blue Hades in response to terminal violation of the Benthic Treaty. See also Atlantic meridional overturning circulation Azolla event Clathrate compound Effects of climate change Extinction event Limnic eruption Methane chimney Ocean acidification Storegga Slide References External links Methane: A Scientific Journey from Obscurity to Climate Super-Stardom Good Sept. 2004 background report from NASA GISS Wakening the Kraken History of climate variability and change Permian–Triassic extinction event Clathrates Meteorological hypotheses
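The global warming potential figures quoted in the introduction can be turned into a back-of-envelope calculation (an added sketch; the 1-gigatonne release size is purely illustrative):

# CO2-equivalent of a hypothetical methane release, using the GWP values
# cited in this article (72 over 20 years, 25 over 100 years).
release_gt_ch4 = 1.0                      # illustrative release, Gt of CH4
gwp = {"20-year": 72, "100-year": 25}

for horizon, factor in gwp.items():
    print(f"{horizon} horizon: ~{release_gt_ch4 * factor:.0f} Gt CO2-eq")
# 20-year horizon: ~72 Gt CO2-eq
# 100-year horizon: ~25 Gt CO2-eq

The same arithmetic applied to the 50-gigatonne release discussed for the East Siberian shelf is what yields a short-term greenhouse effect comparable to a doubling of atmospheric carbon dioxide.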
Clathrate gun hypothesis
Chemistry
3,649
30,173,444
https://en.wikipedia.org/wiki/Maerdy%20Branch
The Maerdy Branch was a railway branch line in South Wales. Financed and operated by the Taff Vale Railway, it became part of the Great Western Railway in the amalgamation of 1923. Designed and mainly operated as a coal-mining freight railway, its creation and demise were wholly defined by the South Wales Coalfield. Design The branch was a wholly designed line, developed by integrating a series of private industrial railways with the extension of the Taff Vale Railway from the south at . In 1840, the TVR bought the private Ferndale to Maerdy colliery track and then extended it to in 1849. Operations Passenger operations began in 1875, serving interim stations from Porth at (opened 1876), Pontygwaith, Tylorstown, and Ferndale. Though the line had opened up to Maerdy the same year (with the sinking and commissioning of Maerdy Colliery), it was not until 1889 that the passenger service was extended there from Ferndale. Passenger traffic was neither heavy nor a major contribution to line finances, and so in 1900 the TVR introduced steam rail motors. Ten or eleven return trips each weekday constituted the standard service frequency for the branch for most of its life. Closure The last passenger train ran on 13 June 1964 as a result of the Beeching cuts, leaving only the freight service to Maerdy Colliery. As a result, the line was reduced to single-track working. The line was placed into maintenance-only upkeep from June 1986 and subsequently closed completely in August that year, after which the coal mined at Maerdy was raised at Tower Colliery. Present day The track was lifted in 1996, with the trackbed and most of the bridges left in situ. This formed the canal section of the Taff Trail cycle route. In 2004, Rhondda Cynon Taff council came to an arrangement with Network Rail to buy the trackbed from just north of Maerdy Junction to Margaret Street, Pontygwaith, and convert it into a relief road for Ynyshir. Construction work started in May 2005, removing the remains of Ynyshir station and the bridges at Llanwonno Road and Station Street, and replacing the Rhondda Fach bridge at Ynyshir and the Ynyshir road bridge. Today the A4233 Porth and Lower Rhondda Fach Relief Road (Porth Bypass) has brought a significant decrease in traffic flows through the main street, Ynyshir Road. Preservation In April 2019, a local group of enthusiasts looking to improve the local economic outlook by bringing tourists to the area proposed reinstating the 3 miles of track north of Tylorstown to Maerdy. The proposal includes reinstatement of some stations and former industrial buildings. References Taff Vale Railway Mining railways Railway lines in Wales Railway lines opened in 1875 Railway lines closed in 1964 Coal in Wales
Maerdy Branch
Engineering
577
39,007
https://en.wikipedia.org/wiki/Wall
A wall is a structure and a surface that defines an area; carries a load; provides security, shelter, or soundproofing; or is decorative. There are many kinds of walls, including: Border barriers between countries Brick walls Defensive walls in fortifications Permanent, solid fences Retaining walls, which hold back dirt, stone, water, or noise Stone walls Walls in buildings that form a fundamental part of the superstructure or separate interior rooms, sometimes for fire safety Glass walls in which the primary structure is made of glass; this does not include glass-covered openings within walls, as those are windows Walls that protect from oceans (seawalls) or rivers (levees) Etymology The term wall comes from the Latin vallum meaning "an earthen wall or rampart set with palisades, a row or line of stakes, a wall, a rampart, fortification", while the Latin word murus means a defensive stone wall. English uses the same word to mean an external wall and the internal sides of a room, but this is not universal. Many languages distinguish between the two. In German, some of this distinction can be seen between Wand and Mauer, and in Spanish between pared and muro. Defensive wall The word wall originally referred to defensive walls and ramparts. Building wall The purposes of walls in buildings are to support roofs, floors, and ceilings; to enclose a space as part of the building envelope, along with a roof, to give buildings form; and to provide shelter and security. In addition, the wall may house various types of utilities such as electrical wiring or plumbing. Wall construction falls into two basic categories: framed walls or mass walls. In framed walls the load is transferred to the foundation through posts, columns, or studs. Framed walls most often have three or more separate components: the structural elements (such as 2×4 studs in a house wall), insulation, and finish elements or surfaces (such as drywall or panelling). Mass walls are made of a solid material, including masonry, concrete (including slipform stonemasonry), log building, cordwood construction, adobe, rammed earth, cob, earthbag construction, bottles, tin cans, straw-bale construction, and ice. Walls may or may not be load-bearing. Walls are required to conform to the local building and/or fire codes. There are three basic methods by which walls control water intrusion: moisture storage, drained cladding, or face-sealed cladding. Moisture storage is typical of stone and brick mass-wall buildings, where moisture is absorbed and released by the walls of the structure itself. Drained cladding, also known as a screened wall, acknowledges that moisture will penetrate the cladding, so a moisture barrier such as housewrap or felt paper inside the cladding provides a second line of defense, and sometimes a drainage plane or air gap allows a path for the moisture to drain down through and exit the wall. Sometimes ventilation is provided in addition to the drainage plane, such as in rainscreen construction. Face-sealed cladding, also called barrier wall or perfect barrier cladding, relies on maintaining a leak-free surface of the cladding. Examples of face-sealed cladding are the early exterior insulation finishing systems, structural glazing, metal clad panels, and corrugated metal. Building walls frequently become works of art, externally and internally, such as when featuring mosaic work or when murals are painted on them; or as design foci when they exhibit textures or painted finishes for effect.
Curtain wall In architecture and civil engineering, curtain wall refers to a building facade that is not load-bearing but provides decoration, finish, front, face, or historical preservation. Precast wall Precast walls are walls which have been manufactured in a factory and then shipped to where they are needed, ready to install. They are faster to install than brick and other walls, and may have a lower cost than other types of wall, such as brick compound walls. Mullion wall Mullion walls are a structural system that carries the load of the floor slab on prefabricated panels around the perimeter. Partition wall A partition wall is a usually thin wall that is used to separate or divide a room, primarily a pre-existing one. Partition walls are usually not load-bearing, and can be constructed out of many materials, including steel panels, bricks, cloth, plastic, plasterboard, wood, blocks of clay, terracotta, concrete, and glass. Some partition walls are made of sheet glass. Glass partition walls are a series of individual toughened glass panels mounted in wood or metal framing. They may be suspended from or slide along a robust aluminium ceiling track. The system does not require the use of a floor guide, which allows easy operation and an uninterrupted threshold. A timber partition consists of a wooden framework, supported on the floor or by side walls. Metal lath and plaster, properly laid, forms a reinforced partition wall. Partition walls constructed from fibre cement backer board are popular as bases for tiling in kitchens or in wet areas like bathrooms. Galvanized sheet fixed to wooden or steel members is mostly adopted in works of a temporary character. Plain or reinforced partition walls may also be constructed from concrete, including pre-cast concrete blocks. Metal-framed partitioning is also available. This partition consists of track (used primarily at the base and head of the partition) and studs (vertical sections fixed into the track, typically spaced at 24", 16", or 12"). Internal wall partitions, also known as office partitioning, are usually made of plasterboard (drywall) or varieties of glass. Toughened glass is a common option, as low-iron glass (better known as opti-white glass) increases light and solar heat transmission. Wall partitions are constructed using beads and tracking that is either hung from the ceiling or fixed into the ground. The panels are inserted into the tracking and fixed. Some wall partition variations specify their fire resistance and acoustic performance rating. Movable partitions Movable partitions are walls that open to join two or more rooms into one large floor area. These include: Sliding—a series of panels that slide in tracks fixed to the floor and ceiling, similar to sliding doors Sliding and folding—similar to sliding partitions, but with panels that fold; these are good for smaller spans Folding partition walls—a series of interlocking panels suspended from an overhead track that when extended provide an acoustical separation, and when retracted stack against a wall, ceiling, closet, or ceiling pocket Screens—usually constructed of a metal or timber frame fixed with plywood and chipboard and supported with legs for free standing and easy movement Pipe and drape—fixed or telescopic uprights and horizontals provide a ground-supported drape system with removable panels Party wall Party walls are walls that separate buildings or units within a building.
They provide fire resistance and sound resistance between occupants in a building. The minimum fire resistance and sound resistance required for the party wall is determined by a building code and may be modified to suit a variety of situations. Ownership of such walls can become a legal issue. A party wall is not a load-bearing wall and may be owned by different people. Infill wall An infill wall is the supported wall that closes the perimeter of a building constructed with a three-dimensional framework structure. Fire wall Fire walls resist the spread of fire within or sometimes between structures to provide passive fire protection. A delay in the spread of fire gives occupants more time to escape and fire fighters more time to extinguish the fire. Some fire walls allow fire-resistive window assemblies, and are made of non-combustible material such as concrete, cement block, brick, or fire-rated drywall. Wall penetrations are sealed with fire-resistive materials. A doorway in a firewall must have a rated fire door. Fire walls provide varying resistance to the spread of fire (e.g., one, two, three, or four hours). Firewalls can also act as smoke barriers when constructed vertically from slab to roof deck and horizontally from exterior wall to exterior wall, subdividing a building into sections. Shear wall Shear walls resist lateral forces such as those from an earthquake or severe wind. There are different kinds of shear walls, such as the steel plate shear wall. Knee wall Knee walls are short walls that either support rafters or add height in the top floor rooms of houses. In a -story house, the knee wall supports the half story. Cavity wall Cavity walls are walls made with a space between two "skins" to inhibit heat transfer. Pony wall Pony wall (or dwarf wall) is a general term for short walls, such as: A half wall that only extends partway from floor to ceiling, without supporting anything A stem wall—a concrete wall that extends from the foundation slab to the cripple wall or floor joists A cripple wall—a framed wall from the stem wall or foundation slab to the floor joists Demountable wall Demountable walls fall into three main types: glass walls (unitised panels or butt-jointed), laminated particle board walls (these may also include other finishes, such as whiteboard, cork board, or magnetic surfaces, typically all on purpose-made wall studs), and drywall. Solar energy A Trombe wall in passive solar building design acts as a heat sink. Shipbuilding On a ship, a wall that separates major compartments is called a bulkhead. A thinner wall between cabins is called a partition. Boundary wall Boundary walls include privacy walls, boundary-marking walls on property, and town walls. These intergrade into fences. The conventional differentiation is that a fence is of minimal thickness and often open in nature, while a wall is usually more than a nominal thickness and is completely closed, or opaque. More to the point, an exterior structure of wood or wire is generally called a fence—but one of masonry is a wall. A common term for both is barrier, which is convenient for structures that are partly wall and partly fence—for example the Berlin Wall. Another kind of wall-fence ambiguity is the ha-ha—which is set below ground level to protect a view, yet acts as a barrier (to cattle, for example). Before the invention of artillery, many of the world's cities and towns, particularly in Europe and Asia, had defensive or protective walls (also called town walls or city walls).
In fact, the English word "wall" derives from Latin vallum—a type of fortification wall. These walls are no longer relevant for defense, so such cities have grown beyond their walls, and many fortification walls, or portions of them, have been torn down—for example in Rome, Italy and Beijing, China. Examples of protective walls on a much larger scale include the Great Wall of China and Hadrian's Wall. Border wall Some walls formally mark the border between one population and another. A border wall is constructed to limit the movement of people across a certain line or border. These structures vary in placement with regard to international borders and topography. The most famous example of a border barrier in history is probably the Great Wall of China, a series of walls that separated the Empire of China from nomadic powers to the north. The most prominent recent example is the Berlin Wall, which surrounded the enclave of West Berlin and separated it from East Germany for most of the Cold War era. The US–Mexico border wall, separating the United States and Mexico, is another recent example. Retaining wall In areas of rocky soils around the world, farmers have often pulled large quantities of stone out of their fields to make farming easier and have stacked those stones to make walls that mark the field boundary, the property boundary, or both. Retaining walls resist the movement of earth, stone, or water. They may be part of a building or external. The ground surface or water on one side of a retaining wall is typically higher than on the other side. A dike is a retaining wall, as is a levee, a load-bearing foundation wall, and a sea wall. Shared wall Special laws often govern walls that neighbouring properties share. Typically, one neighbour cannot alter the common wall if it is likely to affect the building or property on the other side. A wall may also separate apartment or hotel rooms from each other. Each such wall has two sides, and breaking through the wall on one side necessarily breaks through it on the other. Portable wall Portable walls, such as room dividers or portable partitions, divide a larger open space into smaller rooms. Portable walls can be static, such as cubicle walls, or can be wall panels mounted on casters to provide an easy way to reconfigure assembly space. They are often found inside schools, churches, convention centers, hotels, and corporate facilities. Temporary wall A temporary wall is constructed for easy removal or demolition. A typical temporary wall can be constructed with 1⁄2" (13 mm) to 5⁄8" (16 mm) sheet rock (plasterboard) on metal or timber 2 × 3s (approx. 5 × 7 cm) or 2 × 4s, then taped, plastered, and compounded. Most installation companies use lattice (strips of wood) to cover the joints of the temporary wall with the ceiling. These are sometimes known as pressurized walls or temporary pressurized walls. Walls in popular culture Walls are often seen in popular culture, oftentimes representing barriers preventing progress or entry. For example: Fictional and symbolic walls The progressive/psychedelic rock band Pink Floyd used a metaphorical wall to represent the isolation felt by the protagonist of their 1979 concept album The Wall. The American poet laureate Robert Frost describes a pointless rock wall as a metaphor for the myopia of the culture-bound in his poem "Mending Wall", published in 1914. Walls are a recurring symbol in Ursula K. Le Guin's 1974 novel The Dispossessed.
In some cases, a wall may refer to an individual's debilitating mental or physical condition, seen as an impassable barrier. In George R. R. Martin's A Song of Ice and Fire series and its television adaptation, Game of Thrones, The Wall plays multiple important roles: as a colossal fortification, made of ice and fortified with magic spells; as a cultural barrier; and as a codification of assumptions. Breaches of the wall, who is allowed to cross it and who is not, and its destruction have important symbolic, logistical, and socio-political implications in the storyline. Reportedly over 700 feet high and 100 leagues (300 miles) long, it divides the northern border of the Seven Kingdoms realm from the domain of the wildlings and several categories of undead who live beyond it. Historical walls In a real-life example, the Berlin Wall, constructed by Soviet-backed East Germany to divide Berlin into NATO and Warsaw Pact zones of occupation, became a worldwide symbol of oppression and isolation. Social media walls Another common usage is as a communal surface to write upon. For instance, the social networking site Facebook previously used an electronic "wall" to log the scrawls of friends until it was replaced by the "timeline" feature. See also Ashlar Chemise (wall) Clay panel Climbing wall Crinkle crankle wall Fabric structure Great Green Wall (Africa) Great Green Wall (China) Green wall List of walls Sleeper wall Stone wall Tensile structure Terraced wall Thin-shell structure Wallpaper References External links Archaeological features Home Property law Structural system
Wall
Technology,Engineering
3,147
2,683,768
https://en.wikipedia.org/wiki/Fucitol
Fucitol, also known as L-fucitol, 1-deoxy-L-galactitol, and (2R,3S,4R,5S)-hexane-1,2,3,4,5-pentol, is a sugar alcohol obtained either from fucoidan, which is found in the North Atlantic seaweed Fucus vesiculosus, or by the reduction of fucose. See also Galactitol References External links Sugar alcohols
Fucitol
Chemistry
107
172,911
https://en.wikipedia.org/wiki/Nuclear%20weapon%20design
Nuclear weapon designs are physical, chemical, and engineering arrangements that cause the physics package of a nuclear weapon to detonate. There are three existing basic design types: Pure fission weapons are the simplest, least technically demanding, were the first nuclear weapons built, and are so far the only type ever used in warfare, by the United States on Japan in World War II. Boosted fission weapons increase yield beyond that of the implosion design by using small quantities of fusion fuel to enhance the fission chain reaction. Boosting can more than double the weapon's fission energy yield. Staged thermonuclear weapons are arrangements of two or more "stages", most usually two. The first stage is normally a boosted fission weapon as above (except for the earliest thermonuclear weapons, which used a pure fission weapon instead). Its detonation causes it to shine intensely with X-rays, which illuminate and implode the second stage filled with a large quantity of fusion fuel. This sets in motion a sequence of events which results in a thermonuclear, or fusion, burn. This process affords potential yields up to hundreds of times those of fission weapons. Pure fission weapons have historically been the first type built by new nuclear powers. Large industrial states with well-developed nuclear arsenals have two-stage thermonuclear weapons, which are the most compact, scalable, and cost-effective option once the necessary technical base and industrial infrastructure are built. Most known innovations in nuclear weapon design originated in the United States, though some were later developed independently by other states. In early news accounts, pure fission weapons were called atomic bombs or A-bombs, and weapons involving fusion were called hydrogen bombs or H-bombs. Practitioners of nuclear policy, however, favor the terms nuclear and thermonuclear, respectively. Nuclear reactions Nuclear fission separates or splits heavier atoms to form lighter atoms. Nuclear fusion combines lighter atoms to form heavier atoms. Both reactions generate roughly a million times more energy than comparable chemical reactions, making nuclear bombs a million times more powerful than non-nuclear bombs, a ratio claimed in a French patent in May 1939. In some ways, fission and fusion are opposite and complementary reactions, but the particulars are unique for each. To understand how nuclear weapons are designed, it is useful to know the important similarities and differences between fission and fusion. The following explanation uses rounded numbers and approximations. Fission When a free neutron hits the nucleus of a fissile atom like uranium-235 (235U), the uranium nucleus splits into two smaller nuclei called fission fragments, plus more neutrons (for 235U three about as often as two; an average of just under 2.5 per fission). The fission chain reaction in a supercritical mass of fuel can be self-sustaining because it produces enough surplus neutrons to offset losses of neutrons escaping the supercritical assembly. Most of these have the speed (kinetic energy) required to cause new fissions in neighboring uranium nuclei. The uranium-235 nucleus can split in many ways, provided the atomic numbers add up to 92 and the mass numbers add up to 236 (uranium-235 plus the neutron that caused the split). The following equation shows one possible split, namely into strontium-95 (95Sr), xenon-139 (139Xe), and two neutrons (n), plus energy: n + 235U → 95Sr + 139Xe + 2n + energy. The immediate energy release per atom is about 180 million electron volts (MeV); i.e., 74 TJ/kg.
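The quoted 74 TJ/kg follows directly from the 180 MeV-per-fission figure. As a quick cross-check, here is a minimal back-of-the-envelope sketch in Python; the physical constants used are standard values and are not taken from the article itself:

```python
# Convert ~180 MeV per U-235 fission into energy per kilogram of fuel.
MEV_TO_J = 1.602e-13       # joules per MeV
AVOGADRO = 6.022e23        # atoms per mole
U235_MOLAR_MASS_G = 235.0  # grams per mole

energy_per_fission_j = 180 * MEV_TO_J              # ~2.9e-11 J per fission
atoms_per_kg = 1000 / U235_MOLAR_MASS_G * AVOGADRO # ~2.6e24 atoms
energy_tj_per_kg = energy_per_fission_j * atoms_per_kg / 1e12

print(f"{energy_tj_per_kg:.0f} TJ/kg")  # ~74 TJ/kg, matching the text
```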
Only 7% of this is gamma radiation and kinetic energy of fission neutrons. The remaining 93% is kinetic energy (or energy of motion) of the charged fission fragments, flying away from each other mutually repelled by the positive charge of their protons (38 for strontium, 54 for xenon). This initial kinetic energy is 67 TJ/kg, imparting an initial speed of about 12,000 kilometers per second (i.e. 1.2 cm per nanosecond). The charged fragments' high electric charge causes many inelastic coulomb collisions with nearby nuclei, and these fragments remain trapped inside the bomb's fissile pit and tamper until their kinetic energy is converted into heat. Given the speed of the fragments and the mean free path between nuclei in the compressed fuel assembly (for the implosion design), this takes about a millionth of a second (a microsecond), by which time the core and tamper of the bomb have expanded to a ball of plasma several meters in diameter with a temperature of tens of millions of degrees Celsius. This is hot enough to emit black-body radiation in the X-ray spectrum. These X-rays are absorbed by the surrounding air, producing the fireball and blast of a nuclear explosion. Most fission products have too many neutrons to be stable, so they are radioactive, decaying by beta emission, converting neutrons into protons by throwing off beta particles (electrons), neutrinos, and gamma rays. Their half-lives range from milliseconds to about 200,000 years. Many decay into isotopes that are themselves radioactive, so from 1 to 6 (average 3) decays may be required to reach stability. In reactors, the radioactive products are the nuclear waste in spent fuel. In bombs, they become radioactive fallout, both local and global. Meanwhile, inside the exploding bomb, the free neutrons released by fission carry away about 3% of the initial fission energy. Neutron kinetic energy adds to the blast energy of a bomb, but not as effectively as the energy from charged fragments, since neutrons do not give up their kinetic energy as quickly in collisions with charged nuclei or electrons. The dominant contribution of fission neutrons to the bomb's power is the initiation of subsequent fissions. Over half of the neutrons escape the bomb core, but the rest strike 235U nuclei, causing them to fission in an exponentially growing chain reaction (1, 2, 4, 8, 16, etc.). Starting from one atom, the number of fissions can theoretically double a hundred times in a microsecond, which by the hundredth link in the chain could consume hundreds of tons of uranium or plutonium. Typically in a modern weapon, the weapon's pit contains of plutonium and at detonation produces approximately yield, representing the fissioning of approximately of plutonium. Materials which can sustain a chain reaction are called fissile. The two fissile materials used in nuclear weapons are: 235U, also known as highly enriched uranium (HEU), "oralloy" meaning "Oak Ridge alloy", or "25" (a combination of the last digit of the atomic number of uranium-235, which is 92, and the last digit of its mass number, which is 235); and 239Pu, also known as plutonium-239, or "49" (from "94" and "239"). Uranium's most common isotope, 238U, is fissionable but not fissile, meaning that it cannot sustain a chain reaction because its daughter fission neutrons are not (on average) energetic enough to cause follow-on 238U fissions. However, the neutrons released by fusion of the heavy hydrogen isotopes deuterium and tritium will fission 238U.
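The claim that a hundred doublings would consume hundreds of tons of fuel is plain arithmetic. A short illustrative sketch, assuming pure doubling from a single initial fission as the text describes:

```python
# Mass of U-235 consumed if one fission doubles 100 times (illustrative only).
AVOGADRO = 6.022e23
U235_MOLAR_MASS_G = 235.0

fissions = 2 ** 100                    # ~1.3e30 fissioned atoms
mass_tonnes = fissions / AVOGADRO * U235_MOLAR_MASS_G / 1e6
print(f"{mass_tonnes:.0f} tonnes")     # ~495 tonnes: "hundreds of tons"
```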
This 238U fission reaction in the outer jacket of the secondary assembly of a two-stage thermonuclear bomb produces by far the greatest fraction of the bomb's energy yield, as well as most of its radioactive debris. For national powers engaged in a nuclear arms race, this fact of 238U's ability to fast-fission under thermonuclear neutron bombardment is of central importance. The plenitude and cheapness of both bulk dry fusion fuel (lithium deuteride) and 238U (a byproduct of uranium enrichment) permit the economical production of very large nuclear arsenals, in comparison to pure fission weapons requiring the expensive 235U or 239Pu fuels. Fusion Fusion produces neutrons which dissipate energy from the reaction. In weapons, the most important fusion reaction is called the D-T reaction. Using the heat and pressure of fission, hydrogen-2, or deuterium (2D), fuses with hydrogen-3, or tritium (3T), to form helium-4 (4He) plus one neutron (n) and energy: 2D + 3T → 4He + n + 17.6 MeV. The total energy output, 17.6 MeV, is one tenth of that with fission, but the ingredients are only one-fiftieth as massive, so the energy output per unit mass is approximately five times as great. In this fusion reaction, 14 MeV of the 17.6 MeV (80% of the energy released in the reaction) shows up as the kinetic energy of the neutron, which, having no electric charge and being almost as massive as the hydrogen nuclei that created it, can escape the scene without leaving its energy behind to help sustain the reaction – or to generate x-rays for blast and fire. The only practical way to capture most of the fusion energy is to trap the neutrons inside a massive bottle of heavy material such as lead, uranium, or plutonium. If the 14 MeV neutron is captured by uranium (of either isotope; 14 MeV is high enough to fission both 235U and 238U) or plutonium, the result is fission and the release of 180 MeV of fission energy, multiplying the energy output tenfold. For weapon use, fission is necessary to start fusion, helps to sustain fusion, and captures and multiplies the energy carried by the fusion neutrons. In the case of a neutron bomb (see below), the last-mentioned factor does not apply, since the objective is to facilitate the escape of neutrons, rather than to use them to increase the weapon's raw power. Tritium production An essential nuclear reaction is the one that creates tritium, or hydrogen-3. Tritium is employed in two ways. First, pure tritium gas is produced for placement inside the cores of boosted fission devices in order to increase their energy yields. This is especially so for the fission primaries of thermonuclear weapons. The second way is indirect, and takes advantage of the fact that the neutrons emitted by a supercritical fission "spark plug" in the secondary assembly of a two-stage thermonuclear bomb will produce tritium in situ when these neutrons collide with the lithium nuclei in the bomb's lithium deuteride fuel supply. Elemental gaseous tritium for fission primaries is also made by bombarding lithium-6 (6Li) with neutrons (n), but in a nuclear reactor rather than in the weapon itself. This neutron bombardment will cause the lithium-6 nucleus to split, producing an alpha particle, or helium-4 (4He), plus a triton (3T) and about 4.8 MeV of energy: 6Li + n → 4He + 3T + 4.78 MeV. But as was discovered in the first test of this type of device, Castle Bravo, when lithium-7 is present one also gets some amount of the following two net reactions: 7Li + n → 3T + 4He + n 7Li + 2D → 2 4He + n + 15.123 MeV Most lithium is 7Li, and this gave Castle Bravo a yield 2.5 times larger than expected.
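The "approximately five times as great" comparison of energy per unit mass can be verified directly from the two reaction energies quoted above:

```python
# Energy per unit mass: D-T fusion versus U-235 fission, using the
# rounded figures given in the text (MeV per reaction, atomic mass units).
fission_mev, fission_amu = 180.0, 236.0  # U-235 plus the incident neutron
fusion_mev, fusion_amu = 17.6, 5.0       # deuterium plus tritium

ratio = (fusion_mev / fusion_amu) / (fission_mev / fission_amu)
print(f"~{ratio:.1f}x")  # ~4.6x, i.e. roughly five times as much per unit mass
```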
The neutrons are supplied by the nuclear reactor in a way similar to the production of plutonium 239Pu from 238U feedstock: target rods of the 6Li feedstock are arranged around a uranium-fueled core, and are removed for processing once it has been calculated that most of the lithium nuclei have been transmuted to tritium. Of the three basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three. Pure fission weapons The first task of a nuclear weapon design is to rapidly assemble a supercritical mass of fissile (weapon grade) uranium or plutonium. A supercritical mass is one in which the percentage of fission-produced neutrons captured by other neighboring fissile nuclei is large enough that each fission event, on average, causes more than one follow-on fission event. Neutrons released by the first fission events induce subsequent fission events at an exponentially accelerating rate. Each follow-on fissioning continues a sequence of these reactions that works its way throughout the supercritical mass of fuel nuclei. This process is conceived and described colloquially as the nuclear chain reaction. To start the chain reaction in a supercritical assembly, at least one free neutron must be injected and collide with a fissile fuel nucleus. The neutron joins with the nucleus (technically a fusion event) and destabilizes it, so that it explodes into two middleweight nuclear fragments (from the severing of the strong nuclear force holding the mutually-repulsive protons together), plus two or three free neutrons. These race away and collide with neighboring fuel nuclei. This process repeats over and over until the fuel assembly goes sub-critical (from thermal expansion), after which the chain reaction shuts down because the daughter neutrons can no longer find new fuel nuclei to hit before escaping the less-dense fuel mass. Each following fission event in the chain approximately doubles the neutron population (net, after losses due to some neutrons escaping the fuel mass, and others that collide with any non-fuel impurity nuclei present). For the gun assembly method (see below) of supercritical mass formation, the fuel itself can be relied upon to initiate the chain reaction. This is because even the best weapon-grade uranium contains a significant number of 238U nuclei. These are susceptible to spontaneous fission events, which occur randomly (it is a quantum mechanical phenomenon). Because the fissile material in a gun-assembled critical mass is not compressed, the design need only ensure the two sub-critical masses remain close enough to each other long enough that a 238U spontaneous fission will occur while the weapon is in the vicinity of the target. This is not difficult to arrange, as it takes but a second or two in a typical-size fuel mass for this to occur. (Still, many such bombs meant for delivery by air (gravity bomb, artillery shell or rocket) use injected neutrons to gain finer control over the exact detonation altitude, important for the destructive effectiveness of airbursts.) This condition of spontaneous fission highlights the necessity of assembling the supercritical mass of fuel very rapidly. The time required to accomplish this is called the weapon's critical insertion time. If spontaneous fission were to occur when the supercritical mass was only partially assembled, the chain reaction would begin prematurely.
Neutron losses through the void between the two subcritical masses (gun assembly) or the voids between not-fully-compressed fuel nuclei (implosion assembly) would sap the bomb of the number of fission events needed to attain the full design yield. Additionally, heat resulting from the fissions that do occur would work against the continued assembly of the supercritical mass, from thermal expansion of the fuel. This failure is called predetonation. The resulting explosion would be called a "fizzle" by bomb engineers and weapon users. Plutonium's high rate of spontaneous fission makes uranium fuel a necessity for gun-assembled bombs, with their much greater insertion time and much greater mass of fuel required (because of the lack of fuel compression). There is another source of free neutrons that can spoil a fission explosion. All uranium and plutonium nuclei have a decay mode that results in energetic alpha particles. If the fuel mass contains impurity elements of low atomic number (Z), these charged alphas can penetrate the coulomb barrier of these impurity nuclei and undergo a reaction that yields a free neutron. The rate of alpha emission of fissile nuclei is one to two million times that of spontaneous fission, so weapon engineers are careful to use fuel of high purity. Fission weapons used in the vicinity of other nuclear explosions must be protected from the intrusion of free neutrons from outside. Such shielding material will almost always be penetrated, however, if the outside neutron flux is intense enough. When a weapon misfires or fizzles because of the effects of other nuclear detonations, it is called nuclear fratricide. For the implosion-assembled design, once the critical mass is assembled to maximum density, a burst of neutrons must be supplied to start the chain reaction. Early weapons used a modulated neutron generator code named "Urchin" inside the pit containing polonium-210 and beryllium separated by a thin barrier. Implosion of the pit crushes the neutron generator, mixing the two metals, thereby allowing alpha particles from the polonium to interact with beryllium to produce free neutrons. In modern weapons, the neutron generator is a high-voltage vacuum tube containing a particle accelerator which bombards a deuterium/tritium-metal hydride target with deuterium and tritium ions. The resulting small-scale fusion produces neutrons at a protected location outside the physics package, from which they penetrate the pit. This method allows better timing of the first fission events in the chain reaction, which optimally should occur at the point of maximum compression/supercriticality. Timing of the neutron injection is a more important parameter than the number of neutrons injected: the first generations of the chain reaction are vastly more effective due to the exponential function by which neutron multiplication evolves. The critical mass of an uncompressed sphere of bare metal is for uranium-235 and for delta-phase plutonium-239. In practical applications, the amount of material required for criticality is modified by shape, purity, density, and the proximity to neutron-reflecting material, all of which affect the escape or capture of neutrons. To avoid a premature chain reaction during handling, the fissile material in the weapon must be kept subcritical. It may consist of one or more components containing less than one uncompressed critical mass each. 
A thin hollow shell can have more than the bare-sphere critical mass, as can a cylinder, which can be arbitrarily long without ever reaching criticality. Another method of reducing criticality risk is to incorporate material with a large cross-section for neutron capture, such as boron (specifically 10B, comprising 20% of natural boron). Naturally this neutron absorber must be removed before the weapon is detonated. This is easy for a gun-assembled bomb: the projectile mass simply shoves the absorber out of the void between the two subcritical masses by the force of its motion. The use of plutonium affects weapon design due to its high rate of alpha emission. This results in Pu metal spontaneously producing significant heat; a 5 kilogram mass produces 9.68 watts of thermal power. Such a piece would feel warm to the touch, which is no problem if that heat is dissipated promptly and not allowed to raise the temperature. But this is a problem inside a nuclear bomb. For this reason bombs using Pu fuel use aluminum parts to wick away the excess heat, and this complicates bomb design because Al plays no active role in the explosion processes. A tamper is an optional layer of dense material surrounding the fissile material. Due to its inertia it delays the thermal expansion of the fissioning fuel mass, keeping it supercritical for longer. Often the same layer serves both as tamper and as neutron reflector. Gun-type assembly Little Boy, the Hiroshima bomb, used of uranium with an average enrichment of around 80%, or of uranium-235, just about the bare-metal critical mass. When assembled inside its tamper/reflector of tungsten carbide, the assembled mass was more than twice critical mass. Before the detonation, the uranium-235 was formed into two sub-critical pieces, one of which was later fired down a gun barrel to join the other, starting the nuclear explosion. Analysis shows that less than 2% of the uranium mass underwent fission; the remainder, representing most of the entire wartime output of the giant Y-12 factories at Oak Ridge, scattered uselessly. The inefficiency was caused by the speed with which the uncompressed fissioning uranium expanded and became sub-critical by virtue of decreased density. Despite its inefficiency, this design, because of its shape, was adapted for use in small-diameter, cylindrical artillery shells (a gun-type warhead fired from the barrel of a much larger gun). Such warheads were deployed by the United States until 1992, accounting for a significant fraction of the 235U in the arsenal, and were some of the first weapons dismantled to comply with treaties limiting warhead numbers. The rationale for this decision was undoubtedly a combination of the lower yield and grave safety issues associated with the gun-type design. Implosion-type For both the Trinity device and the Fat Man (Nagasaki) bomb, nearly identical implosion-type plutonium fission designs were used. The Fat Man device specifically used , about in volume, of Pu-239, which is only 41% of bare-sphere critical mass. Surrounded by a U-238 reflector/tamper, the Fat Man's pit was brought close to critical mass by the neutron-reflecting properties of the U-238. During detonation, criticality was achieved by implosion. The plutonium pit was squeezed to a higher density by the simultaneous detonation of the conventional explosives placed uniformly around it, as in the "Trinity" test three weeks earlier. The explosives were detonated by multiple exploding-bridgewire detonators.
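The quoted 9.68 watts from a 5 kg piece of plutonium can be roughly cross-checked from standard decay data for 239Pu. The half-life and alpha energy below are textbook values rather than figures from the article, and minor isotopes and decay modes are ignored, so the result is approximate:

```python
import math

# Decay heat of 5 kg of Pu-239 from its alpha decay alone.
HALF_LIFE_S = 24_110 * 3.156e7  # 24,110-year half-life, in seconds
ALPHA_MEV = 5.24                # energy released per decay, MeV
MEV_TO_J = 1.602e-13
AVOGADRO = 6.022e23

atoms = 5_000 / 239.05 * AVOGADRO                # atoms in 5,000 g
activity_bq = atoms * math.log(2) / HALF_LIFE_S  # decays per second
power_w = activity_bq * ALPHA_MEV * MEV_TO_J
print(f"{power_w:.1f} W")  # ~9.6 W, consistent with the quoted 9.68 W
```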
It is estimated that only about 20% of the plutonium underwent fission; the rest, about , was scattered. An implosion shock wave might be of such short duration that only part of the pit is compressed at any instant as the wave passes through it. To prevent this, a pusher shell may be needed. The pusher is located between the explosive lens and the tamper. It works by reflecting some of the shock wave backward, thereby having the effect of lengthening its duration. It is made out of a low-density metal – such as aluminium, beryllium, or an alloy of the two metals (aluminium is easier and safer to shape, and is two orders of magnitude cheaper; beryllium has high neutron-reflective capability). Fat Man used an aluminium pusher. The series of RaLa Experiment tests of implosion-type fission weapon design concepts, carried out from July 1944 through February 1945 at the Los Alamos Laboratory and a remote site east of it in Bayo Canyon, proved the practicality of the implosion design for a fission device, with the February 1945 tests positively determining its usability for the final Trinity/Fat Man plutonium implosion design. The key to Fat Man's greater efficiency was the inward momentum of the massive U-238 tamper. (The natural uranium tamper did not undergo fission from thermal neutrons, but did contribute perhaps 20% of the total yield from fission by fast neutrons.) After the chain reaction started in the plutonium, it continued until the explosion reversed the momentum of the implosion and expanded enough to stop the chain reaction. By holding everything together for a few hundred nanoseconds more, the tamper increased the efficiency. Plutonium pit The core of an implosion weapon – the fissile material and any reflector or tamper bonded to it – is known as the pit. Some weapons tested during the 1950s used pits made with U-235 alone, or in composite with plutonium, but all-plutonium pits are the smallest in diameter and have been the standard since the early 1960s. Casting and then machining plutonium is difficult not only because of its toxicity, but also because plutonium has many different metallic phases. As plutonium cools, changes in phase result in distortion and cracking. This distortion is normally overcome by alloying it with 3.0–3.5 mol% (0.9–1.0% by weight) gallium, forming a plutonium-gallium alloy, which causes it to take up its delta phase over a wide temperature range. When cooling from molten it then has only a single phase change, from epsilon to delta, instead of the four changes it would otherwise pass through. Other trivalent metals would also work, but gallium has a small neutron absorption cross section and helps protect the plutonium against corrosion. A drawback is that gallium compounds are corrosive, so if the plutonium is recovered from dismantled weapons for conversion to plutonium dioxide for power reactors, there is the difficulty of removing the gallium. Because plutonium is chemically reactive, it is common to plate the completed pit with a thin layer of inert metal, which also reduces the toxic hazard. The Trinity gadget used galvanic silver plating; afterward, nickel deposited from nickel tetracarbonyl vapors was used, and thereafter gold became the preferred material. Recent designs improve safety by plating pits with vanadium to make the pits more fire-resistant. Levitated-pit implosion The first improvement on the Fat Man design was to put an air space between the tamper and the pit to create a hammer-on-nail impact.
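Yield scales linearly with the mass of fuel that actually fissions, at the ~74 TJ/kg (roughly 17 kilotons of TNT equivalent per kilogram) quoted earlier. The sketch below shows this efficiency arithmetic; the pit mass and fission fraction used are hypothetical placeholders, since the article's own figures for Fat Man are elided above:

```python
# TNT-equivalent yield from the mass of fissile fuel that actually fissions.
FISSION_TJ_PER_KG = 74.0
TJ_PER_KT_TNT = 4.184  # one kiloton of TNT equals 4.184 TJ

def yield_kt(pit_mass_kg: float, fission_fraction: float) -> float:
    """Yield in kilotons for a given pit mass and fission efficiency."""
    return pit_mass_kg * fission_fraction * FISSION_TJ_PER_KG / TJ_PER_KT_TNT

# Hypothetical example: a 6 kg pit fissioning at 20% efficiency.
print(f"~{yield_kt(6.0, 0.20):.0f} kt")  # ~21 kt
```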
The pit, supported on a hollow cone inside the tamper cavity, was said to be "levitated". The three tests of Operation Sandstone, in 1948, used Fat Man designs with levitated pits. The largest yield was 49 kilotons, more than twice the yield of the unlevitated Fat Man. It was immediately clear that implosion was the best design for a fission weapon. Its only drawback seemed to be its diameter. Fat Man was wide vs for Little Boy. The Pu-239 pit of Fat Man was only in diameter, the size of a softball. The bulk of Fat Man's girth was the implosion mechanism, namely concentric layers of U-238, aluminium, and high explosives. The key to reducing that girth was the two-point implosion design. Two-point linear implosion In the two-point linear implosion, the nuclear fuel is cast into a solid shape and placed within the center of a cylinder of high explosive. Detonators are placed at either end of the explosive cylinder, and a plate-like insert, or shaper, is placed in the explosive just inside the detonators. When the detonators are fired, the initial detonation is trapped between the shaper and the end of the cylinder, causing it to travel out to the edges of the shaper where it is diffracted around the edges into the main mass of explosive. This causes the detonation to form into a ring that proceeds inward from the shaper. Due to the lack of a tamper or lenses to shape the progression, the detonation does not reach the pit in a spherical shape. To produce the desired spherical implosion, the fissile material itself is shaped to produce the same effect. Due to the physics of the shock wave propagation within the explosive mass, this requires the pit to be a prolate spheroid, that is, roughly egg shaped. The shock wave first reaches the pit at its tips, driving them inward and causing the mass to become spherical. The shock may also change plutonium from delta to alpha phase, increasing its density by 23%, but without the inward momentum of a true implosion. The lack of compression makes such designs inefficient, but the simplicity and small diameter make it suitable for use in artillery shells and atomic demolition munitions – ADMs – also known as backpack or suitcase nukes; an example is the W48 artillery shell, the smallest nuclear weapon ever built or deployed. All such low-yield battlefield weapons, whether gun-type U-235 designs or linear implosion Pu-239 designs, pay a high price in fissile material in order to achieve diameters between six and ten inches (15 and 25 cm). Hollow-pit implosion A more efficient implosion system uses a hollow pit. A hollow plutonium pit was the original plan for the 1945 Fat Man bomb, but there was not enough time to develop and test the implosion system for it. A simpler solid-pit design was considered more reliable, given the time constraints, but it required a heavy U-238 tamper, a thick aluminium pusher, and three tons of high explosives. After the war, interest in the hollow pit design was revived. Its obvious advantage is that a hollow shell of plutonium, shock-deformed and driven inward toward its empty center, would carry momentum into its violent assembly as a solid sphere. It would be self-tamping, requiring a smaller U-238 tamper, no aluminium pusher, and less high explosive. Fusion-boosted fission The next step in miniaturization was to speed up the fissioning of the pit to reduce the minimum inertial confinement time. This would allow the efficient fission of the fuel with less mass in the form of tamper or the fuel itself. 
The key to achieving faster fission would be to introduce more neutrons, and among the many ways to do this, adding a fusion reaction was relatively easy in the case of a hollow pit. The easiest fusion reaction to achieve is found in a 50–50 mixture of tritium and deuterium. For fusion power experiments this mixture must be held at high temperatures for relatively lengthy times in order to have an efficient reaction. For explosive use, however, the goal is not to produce efficient fusion, but simply to provide extra neutrons early in the process. Since a nuclear explosion is supercritical, any extra neutrons will be multiplied by the chain reaction, so even tiny quantities introduced early can have a large effect on the outcome. For this reason, even the relatively low compression pressures and times (in fusion terms) found in the center of a hollow pit warhead are enough to create the desired effect. In the boosted design, the fusion fuel in gas form is pumped into the pit during arming. This will fuse into helium and release free neutrons soon after fission begins. The neutrons will start a large number of new chain reactions while the pit is still critical or nearly critical. Once the hollow pit is perfected, there is little reason not to boost; deuterium and tritium are easily produced in the small quantities needed, and the technical aspects are trivial. The concept of fusion-boosted fission was first tested on May 25, 1951, in the Item shot of Operation Greenhouse, Eniwetok, yield 45.5 kilotons. Boosting reduces diameter in three ways, all the result of faster fission: Since the compressed pit does not need to be held together as long, the massive U-238 tamper can be replaced by a light-weight beryllium shell (to reflect escaping neutrons back into the pit). The diameter is reduced. The mass of the pit can be reduced by half, without reducing yield. Diameter is reduced again. Since the mass of the metal being imploded (tamper plus pit) is reduced, a smaller charge of high explosive is needed, reducing diameter even further. The first device whose dimensions suggest employment of all these features (two-point, hollow-pit, fusion-boosted implosion) was the Swan device. It had a cylindrical shape with a diameter of and a length of . It was first tested standalone and then as the primary of a two-stage thermonuclear device during Operation Redwing. It was weaponized as the Robin primary and became the first off-the-shelf, multi-use primary, and the prototype for all that followed. After the success of Swan, seemed to become the standard diameter of boosted single-stage devices tested during the 1950s. Length was usually twice the diameter, but one such device, which became the W54 warhead, was closer to a sphere, only long. One of the applications of the W54 was the Davy Crockett XM-388 recoilless rifle projectile. It had a dimension of just , compared with its Fat Man predecessor (). Another benefit of boosting, in addition to making weapons smaller, lighter, and with less fissile material for a given yield, is that it renders weapons immune to predetonation. It was discovered in the mid-1950s that plutonium pits would be particularly susceptible to partial predetonation if exposed to the intense radiation of a nearby nuclear explosion (electronics might also be damaged, but this was a separate problem). This radiation-induced predetonation (RI) was a particular problem before effective early warning radar systems, because a first-strike attack might make retaliatory weapons useless.
Boosting reduces the amount of plutonium needed in a weapon to below the quantity which would be vulnerable to this effect. Two-stage thermonuclear Pure fission or fusion-boosted fission weapons can be made to yield hundreds of kilotons, at great expense in fissile material and tritium, but by far the most efficient way to increase nuclear weapon yield beyond ten or so kilotons is to add a second independent stage, called a secondary. In the 1940s, bomb designers at Los Alamos thought the secondary would be a canister of deuterium in liquefied or hydride form. The fusion reaction would be D-D, harder to achieve than D-T, but more affordable. A fission bomb at one end would shock-compress and heat the near end, and fusion would propagate through the canister to the far end. Mathematical simulations showed it would not work, even with large amounts of expensive tritium added. The entire fusion fuel canister would need to be enveloped by fission energy, to both compress and heat it, as with the booster charge in a boosted primary. The design breakthrough came in January 1951, when Edward Teller and Stanislaw Ulam invented radiation implosion – for nearly three decades known publicly only as the Teller-Ulam H-bomb secret. The concept of radiation implosion was first tested on May 9, 1951, in the George shot of Operation Greenhouse, Eniwetok, yield 225 kilotons. The first full test was on November 1, 1952, the Mike shot of Operation Ivy, Eniwetok, yield 10.4 megatons. In radiation implosion, the burst of X-ray energy coming from an exploding primary is captured and contained within an opaque-walled radiation channel which surrounds the nuclear energy components of the secondary. The radiation quickly turns the plastic foam that had been filling the channel into a plasma which is mostly transparent to X-rays, and the radiation is absorbed in the outermost layers of the pusher/tamper surrounding the secondary, which ablates and applies a massive force (much like an inside-out rocket engine), causing the fusion fuel capsule to implode much like the pit of the primary. As the secondary implodes, a fissile "spark plug" at its center ignites and provides neutrons and heat which enable the lithium deuteride fusion fuel to produce tritium and ignite as well. The fission and fusion chain reactions exchange neutrons with each other and boost the efficiency of both reactions. The greater implosive force, the enhanced efficiency of the fissile "spark plug" due to boosting via fusion neutrons, and the fusion explosion itself provide significantly greater explosive yield from the secondary, despite the secondary often being not much larger than the primary. For example, in the Redwing Mohawk test on July 3, 1956, a secondary called the Flute was attached to the Swan primary. The Flute was in diameter and long, about the size of the Swan. But it weighed ten times as much and yielded 24 times as much energy (355 kilotons vs 15 kilotons). Equally important, the active ingredients in the Flute probably cost no more than those in the Swan. Most of the fission came from cheap U-238, and the tritium was manufactured in place during the explosion. Only the spark plug at the axis of the secondary needed to be fissile. A spherical secondary can achieve higher implosion densities than a cylindrical secondary, because spherical implosion pushes in from all directions toward the same spot. However, in warheads yielding more than one megaton, the diameter of a spherical secondary would be too large for most applications.
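The Flute/Swan comparison reduces to simple ratios of the figures quoted in the text:

```python
# Redwing Mohawk arithmetic: the Flute secondary versus the Swan primary.
energy_ratio = 355 / 15   # 355 kt vs 15 kt, as quoted
weight_ratio = 10         # "weighed ten times as much", as quoted
print(f"energy: ~{energy_ratio:.0f}x, "
      f"yield-to-weight gain: ~{energy_ratio / weight_ratio:.1f}x")
```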
A cylindrical secondary is necessary in such cases. The small, cone-shaped re-entry vehicles in multiple-warhead ballistic missiles after 1970 tended to have warheads with spherical secondaries, and yields of a few hundred kilotons. In engineering terms, radiation implosion allows for the exploitation of several known features of nuclear bomb materials which heretofore had eluded practical application. For example: The optimal way to store deuterium in a reasonably dense state is to chemically bond it with lithium, as lithium deuteride. But the lithium-6 isotope is also the raw material for tritium production, and an exploding bomb is a nuclear reactor. Radiation implosion will hold everything together long enough to permit the complete conversion of lithium-6 into tritium, while the bomb explodes. So the bonding agent for deuterium permits use of the D-T fusion reaction without any pre-manufactured tritium being stored in the secondary. The tritium production constraint disappears. For the secondary to be imploded by the hot, radiation-induced plasma surrounding it, it must remain cool for the first microsecond, i.e., it must be encased in a massive radiation (heat) shield. The shield's massiveness allows it to double as a tamper, adding momentum and duration to the implosion. No material is better suited for both of these jobs than ordinary, cheap uranium-238, which also happens to undergo fission when struck by the neutrons produced by D-T fusion. This casing, called the pusher, thus has three jobs: to keep the secondary cool; to hold it, inertially, in a highly compressed state; and, finally, to serve as the chief energy source for the entire bomb. The consumable pusher makes the bomb more a uranium fission bomb than a hydrogen fusion bomb. Insiders never used the term "hydrogen bomb". Finally, the heat for fusion ignition comes not from the primary but from a second fission bomb called the spark plug, embedded in the heart of the secondary. The implosion of the secondary implodes this spark plug, detonating it and igniting fusion in the material around it, but the spark plug then continues to fission in the neutron-rich environment until it is fully consumed, adding significantly to the yield. In the ensuing fifty years, no one has come up with a more efficient way to build a thermonuclear bomb. It is the design of choice for the United States, Russia, the United Kingdom, China, and France, the five thermonuclear powers. On 3 September 2017 North Korea carried out what it reported as its first "two-stage thermo-nuclear weapon" test. According to Dr. Theodore Taylor, after reviewing leaked photographs of disassembled weapons components taken before 1986, Israel possessed boosted weapons and would require supercomputers of that era to advance further toward full two-stage weapons in the megaton range without nuclear test detonations. The other nuclear-armed nations, India and Pakistan, probably have single-stage weapons, possibly boosted. Interstage In a two-stage thermonuclear weapon the energy from the primary impacts the secondary. An essential energy transfer modulator called the interstage, between the primary and the secondary, protects the secondary's fusion fuel from heating too quickly, which could cause it to explode in a conventional (and small) heat explosion before the fusion and fission reactions get a chance to start. There is very little information in the open literature about the mechanism of the interstage. Its first mention in a U.S. 
government document formally released to the public appears to be a caption in a graphic promoting the Reliable Replacement Warhead Program in 2007. If built, this new design would have replaced "toxic, brittle material" and "expensive 'special' material" in the interstage. This statement suggests the interstage may contain beryllium to moderate the flux of neutrons from the primary, and perhaps something to absorb and re-radiate the X-rays in a particular manner. There is also some speculation that this interstage material, which may be code-named Fogbank, might be an aerogel, possibly doped with beryllium and/or other substances. The interstage and the secondary are encased together inside a stainless steel membrane to form the canned subassembly (CSA), an arrangement which has never been depicted in any open-source drawing. The most detailed illustration of an interstage shows a British thermonuclear weapon with a cluster of items between its primary and a cylindrical secondary. They are labeled "end-cap and neutron focus lens", "reflector/neutron gun carriage", and "reflector wrap". The origin of the drawing, posted on the internet by Greenpeace, is uncertain, and there is no accompanying explanation. Specific designs While every nuclear weapon design falls into one of the above categories, specific designs have occasionally become the subject of news accounts and public discussion, often with incorrect descriptions of how they work and what they do. Examples are listed below. Alarm Clock/Sloika The first effort to exploit the symbiotic relationship between fission and fusion was a 1940s design that mixed fission and fusion fuel in alternating thin layers. As a single-stage device, it would have been a cumbersome application of boosted fission. It first became practical when incorporated into the secondary of a two-stage thermonuclear weapon. The U.S. name, Alarm Clock, came from Teller: he called it that because it might "wake up the world" to the potential of the Super. The Russian name for the same design was more descriptive: Sloika, after a layered pastry cake. A single-stage Soviet Sloika was tested as RDS-6s on August 12, 1953. No single-stage U.S. version was tested, but the Union shot of Operation Castle, on April 26, 1954, was a two-stage thermonuclear device code-named Alarm Clock. Its yield, at Bikini, was 6.9 megatons. Because the Soviet Sloika test used dry lithium-6 deuteride eight months before the first U.S. test to use it (Castle Bravo, March 1, 1954), it was sometimes claimed that the USSR won the H-bomb race, even though the United States developed and tested the first hydrogen bomb, Ivy Mike. The 1952 U.S. Ivy Mike test used cryogenically cooled liquid deuterium as the fusion fuel in the secondary, and employed the D-D fusion reaction. However, the first Soviet test to use a radiation-imploded secondary, the essential feature of a true H-bomb, was on November 23, 1955, three years after Ivy Mike. In fact, real work on the implosion scheme in the Soviet Union only commenced in the very early part of 1953, several months after the successful testing of Sloika. Clean bombs On March 1, 1954, the largest-ever U.S. nuclear test explosion, the 15-megaton Castle Bravo shot of Operation Castle at Bikini Atoll, delivered a promptly lethal dose of fission-product fallout to a vast area of the Pacific Ocean surface.
Radiation injuries to Marshall Islanders and Japanese fishermen made that fact public and revealed the role of fission in hydrogen bombs. In response to the public alarm over fallout, an effort was made to design a clean multi-megaton weapon, relying almost entirely on fusion. The energy produced by the fissioning of unenriched natural uranium, when used as the tamper material in the secondary and subsequent stages in the Teller-Ulam design, can far exceed the energy released by fusion, as was the case in the Castle Bravo test. Replacing the fissionable material in the tamper with another material is essential to producing a "clean" bomb. In such a device, the tamper no longer contributes energy, so for any given weight, a clean bomb will have less yield. The earliest known instance of a three-stage device being tested, with the third stage, called the tertiary, being ignited by the secondary, was on May 27, 1956, in the Bassoon device, fired as the Zuni shot of Operation Redwing. This shot used non-fissionable tampers; an inert substitute material such as tungsten or lead was used. Its yield was 3.5 megatons, 85% fusion and only 15% fission. The Ripple concept, which used ablation to achieve fusion using very little fission, was and still is by far the cleanest design. Unlike previous clean bombs, which were clean simply because fission fuel had been replaced with an inert substitute, Ripple was clean by design. Ripple was also extremely efficient; designs approaching 15 kilotons of yield per kilogram of device weight were planned during Operation Dominic. Shot Androscoggin featured a proof-of-concept Ripple design, resulting in a 63-kiloton fizzle (significantly lower than the predicted 15 megatons). The concept was retested in shot Housatonic, which produced a 9.96-megaton explosion that was reportedly more than 99.9% fusion. The public records for devices that produced the highest proportion of their yield via fusion reactions are held by the peaceful nuclear explosions of the 1970s. Others include the 10-megaton Dominic Housatonic shot at over 99.9% fusion, the 50-megaton Tsar Bomba at 97% fusion, the 9.3-megaton Hardtack Poplar test at 95%, and the 4.5-megaton Redwing Navajo test at 95% fusion. The most ambitious peaceful application of nuclear explosions was pursued by the USSR with the aim of creating a long canal between the Pechora river basin and the Kama river basin, about half of which was to be constructed through a series of underground nuclear explosions. It was reported that about 250 nuclear devices might be needed to complete the project. The Taiga test was to demonstrate the feasibility of the approach. Three of these "clean" devices, of 15-kiloton yield each, were placed in separate, evenly spaced boreholes at depth. They were simultaneously detonated on March 23, 1971, catapulting a radioactive plume into the air that was carried eastward by the wind. The resulting trench was long and wide but unimpressively shallow. Despite their "clean" nature, the area still exhibits a noticeably higher (albeit mostly harmless) concentration of fission products; in addition, the intense neutron bombardment of the soil, the device itself, and the support structures activated their stable elements, creating a significant amount of man-made radioactive elements such as 60Co.
The overall danger posed by the concentration of radioactive elements present at the site created by these three devices is still negligible, but a larger-scale project of the kind envisioned would have had significant consequences, both from the fallout of the radioactive plumes and from the radioactive elements created by the neutron bombardment. On July 19, 1956, AEC Chairman Lewis Strauss said that the Redwing Zuni shot clean bomb test "produced much of importance ... from a humanitarian aspect." However, less than two days after this announcement, the dirty version of Bassoon, called Bassoon Prime, with a uranium-238 tamper in place, was tested on a barge off the coast of Bikini Atoll as the Redwing Tewa shot. Bassoon Prime produced a 5-megaton yield, of which 87% came from fission. Data obtained from this test, and others, culminated in the eventual deployment of the highest-yielding US nuclear weapon known, and the highest yield-to-weight weapon ever made: a three-stage thermonuclear weapon with a maximum "dirty" yield of 25 megatons, designated the B41 nuclear bomb, which was carried by U.S. Air Force bombers until it was decommissioned. This weapon was never fully tested. Third generation First and second generation nuclear weapons release energy as omnidirectional blasts. Third generation nuclear weapons are experimental special effect warheads and devices that can release energy in a directed manner, some of which were tested during the Cold War but were never deployed. These include: Project Prometheus, also known as the "Nuclear Shotgun", which would have used a nuclear explosion to accelerate kinetic penetrators against ICBMs. Project Excalibur, a nuclear-pumped X-ray laser intended to destroy ballistic missiles. Nuclear shaped charges that focus their energy in particular directions. Project Orion, which explored the use of nuclear explosives for rocket propulsion. Fourth generation The idea of "4th-generation" nuclear weapons has been proposed as a possible successor to the weapon designs listed above. These methods tend to revolve around using non-nuclear primaries to set off further fission or fusion reactions. For example, if antimatter were usable and controllable in macroscopic quantities, a reaction between a small amount of antimatter and an equivalent amount of matter could release energy comparable to a small fission weapon, and could in turn be used as the first stage of a very compact thermonuclear weapon. Extremely powerful lasers could also potentially be used this way, if they could be made powerful enough, and compact enough, to be viable as a weapon. Most of these ideas are versions of pure fusion weapons, and share the common property that they involve hitherto unrealized technologies as their "primary" stages. While many nations have invested significantly in inertial confinement fusion research programs, since the 1970s it has not been considered promising for direct weapons use, but rather as a tool for weapons- and energy-related research that can be used in the absence of full-scale nuclear testing. Whether any nations are aggressively pursuing "4th-generation" weapons is not clear.
In many cases (as with antimatter) the underlying technology is presently thought to be very far from viability; and if it were viable, it would be a powerful weapon in and of itself, outside of a nuclear weapons context, without providing any significant advantage over existing nuclear weapon designs. Pure fusion weapons Since the 1950s, the United States and the Soviet Union have investigated the possibility of releasing significant amounts of nuclear fusion energy without the use of a fission primary. Such "pure fusion weapons" were primarily imagined as low-yield, tactical nuclear weapons whose advantage would be their ability to be used without producing fallout on the scale of weapons that release fission products. In 1998, the United States Department of Energy declassified the facts that it had made a substantial investment in the past to develop a pure fusion weapon, that the U.S. does not have and is not developing a pure fusion weapon, and that no credible design for a pure fusion weapon resulted from this investment. Red mercury, a likely hoax substance, has been hyped as a catalyst for a pure fusion weapon. Cobalt bombs A doomsday device made popular by Nevil Shute's 1957 novel On the Beach, and the subsequent 1959 movie, the cobalt bomb is a hydrogen bomb with a jacket of cobalt. The neutron-activated cobalt would maximize the environmental damage from radioactive fallout. These bombs were popularized in the 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb; the material added to the bombs is referred to in the film as 'cobalt-thorium G'. Such "salted" weapons were investigated by the U.S. Department of Defense. Ordinary fission products are, at first, at least as deadly as neutron-activated cobalt. Initially, gamma radiation from the fission products of an equivalent-size fission-fusion-fission bomb is much more intense than that from 60Co: 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter fission-product activity drops off rapidly, so that 60Co fallout is 8 times more intense than fission at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the 60Co again after about 75 years. The triple "Taiga" nuclear salvo test, part of the preliminary March 1971 Pechora–Kama Canal project, produced a relatively small amount of fission products; as a result, neutron-activated case material accounts for most of the residual activity at the site today, chiefly 60Co. Fusion-generated neutron activation was responsible for about half of the gamma dose at the test site. That dose is too small to cause deleterious effects, and normal green vegetation exists all around the lake that was formed. Arbitrarily large multi-staged devices The idea of a device which has an arbitrarily large number of Teller-Ulam stages, with each driving a larger radiation-driven implosion than the preceding stage, is frequently suggested, but technically disputed. There are "well-known sketches and some reasonable-looking calculations in the open literature about two-stage weapons, but no similarly accurate descriptions of true three stage concepts." During the mid-1950s through early 1960s, scientists working in the weapons laboratories of the United States investigated weapons concepts as large as 1,000 megatons, and Edward Teller announced the design of a 10,000-megaton weapon code-named SUNDIAL at a meeting of the General Advisory Committee of the Atomic Energy Commission. Much of the information about these efforts remains classified, but such "gigaton" range weapons do not appear to have made it beyond theoretical investigations.
While both the US and the Soviet Union investigated (and, in the case of the Soviets, tested) "very high yield" (e.g., 50- to 100-megaton) weapon designs in the 1950s and early 1960s, these appear to represent the upper limit of Cold War weapon yields pursued seriously, and they were so physically heavy and massive that they could not be carried entirely within the bomb bays of the largest bombers. Cold War warhead development trends from the mid-1960s onward, and especially after the Limited Test Ban Treaty, instead resulted in highly compact warheads with yields ranging from hundreds of kilotons to the low megatons, which gave greater options for deliverability. Following the concern caused by the estimated gigaton scale of the 1994 Comet Shoemaker-Levy 9 impacts on the planet Jupiter, in a 1995 meeting at Lawrence Livermore National Laboratory (LLNL), Edward Teller proposed to a collective of U.S. and Russian ex-Cold War weapons designers that they collaborate on designing a 1,000-megaton nuclear explosive device for diverting extinction-class asteroids (10+ km in diameter), to be employed in the event that one of these asteroids were on an impact trajectory with Earth. Neutron bombs A neutron bomb, technically referred to as an enhanced radiation weapon (ERW), is a type of tactical nuclear weapon designed specifically to release a large portion of its energy as energetic neutron radiation. This contrasts with standard thermonuclear weapons, which are designed to capture this intense neutron radiation to increase their overall explosive yield. In terms of yield, ERWs typically produce about one-tenth that of a fission-type atomic weapon. Even with their significantly lower explosive power, ERWs are still capable of much greater destruction than any conventional bomb. Meanwhile, relative to other nuclear weapons, damage is more focused on biological material than on material infrastructure (though extreme blast and heat effects are not eliminated). ERWs are more accurately described as suppressed-yield weapons. When the yield of a nuclear weapon is less than one kiloton, its lethal radius from blast is less than that from its neutron radiation. However, the blast is more than potent enough to destroy most structures, which are less resistant to blast effects than even unprotected human beings: blast pressures of upwards of 20 psi are survivable, whereas most buildings will collapse under a pressure of only 5 psi. Commonly misconceived as a weapon designed to kill populations and leave infrastructure intact, these bombs (as mentioned above) are still very capable of leveling buildings over a large radius. The intent of their design was to kill tank crews – tanks give excellent protection against blast and heat, allowing crews to survive (relatively) very close to a detonation. Given the Soviets' vast tank forces during the Cold War, this was the perfect weapon to counter them. The neutron radiation could instantly incapacitate a tank crew out to roughly the same distance that the heat and blast would incapacitate an unprotected human (depending on design). The tank chassis would also be rendered highly radioactive, temporarily preventing its re-use by a fresh crew. Neutron weapons were also intended for use in other applications, however. For example, they are effective in anti-nuclear defenses – the neutron flux being capable of neutralizing an incoming warhead at a greater range than heat or blast.
Nuclear warheads are very resistant to physical damage, but are very difficult to harden against extreme neutron flux. ERWs were two-stage thermonuclears with all non-essential uranium removed to minimize fission yield; fusion provided the neutrons. Developed in the 1950s, they were first deployed in the 1970s, by U.S. forces in Europe. The last ones were retired in the 1990s. A neutron bomb is only feasible if the yield is sufficiently high that efficient fusion-stage ignition is possible, and if the yield is low enough that the case thickness will not absorb too many neutrons. This means that neutron bombs have a yield range of 1–10 kilotons, with the fission proportion varying from 50% at 1 kiloton to 25% at 10 kilotons (all of the fission coming from the primary stage). The neutron output per kiloton is then 10 to 15 times greater than for a pure fission implosion weapon or for a strategic warhead like a W87 or W88. Weapon design laboratories All the nuclear weapon design innovations discussed in this article originated from the following three labs in the manner described. Other nuclear weapon design labs in other countries duplicated those design innovations independently, reverse-engineered them from fallout analysis, or acquired them by espionage. Lawrence Berkeley The first systematic exploration of nuclear weapon design concepts took place in mid-1942 at the University of California, Berkeley. Important early discoveries had been made at the adjacent Lawrence Berkeley Laboratory, such as the 1940 production and isolation of plutonium using a cyclotron. A Berkeley professor, J. Robert Oppenheimer, had just been hired to run the nation's secret bomb design effort. His first act was to convene the 1942 summer conference. By the time he moved his operation to the new secret town of Los Alamos, New Mexico, in the spring of 1943, the accumulated wisdom on nuclear weapon design consisted of five lectures by Berkeley professor Robert Serber, transcribed and distributed as the Los Alamos Primer (classified at the time, but since fully declassified and widely available online as a PDF). The Primer addressed fission energy, neutron production and capture, nuclear chain reactions, critical mass, tampers, predetonation, and three methods of assembling a bomb: gun assembly, implosion, and "autocatalytic methods", the one approach that turned out to be a dead end. Los Alamos At Los Alamos, Emilio Segrè found in April 1944 that the proposed Thin Man gun-assembly bomb would not work for plutonium because of predetonation problems caused by Pu-240 impurities. So Fat Man, the implosion-type bomb, was given high priority as the only option for plutonium. The Berkeley discussions had generated theoretical estimates of critical mass, but nothing precise. The main wartime job at Los Alamos was the experimental determination of critical mass, which had to wait until sufficient amounts of fissile material arrived from the production plants: uranium from Oak Ridge, Tennessee, and plutonium from the Hanford Site in Washington. In 1945, using the results of critical mass experiments, Los Alamos technicians fabricated and assembled components for four bombs: the Trinity Gadget, Little Boy, Fat Man, and an unused spare Fat Man. After the war, those who could, including Oppenheimer, returned to university teaching positions. Those who remained worked on levitated and hollow pits and conducted weapon effects tests such as Crossroads Able and Baker at Bikini Atoll in 1946.
All of the essential ideas for incorporating fusion into nuclear weapons originated at Los Alamos between 1946 and 1952. After the Teller-Ulam radiation implosion breakthrough of 1951, the technical implications and possibilities were fully explored, but ideas not directly relevant to making the largest possible bombs for long-range Air Force bombers were shelved. Because of Oppenheimer's initial position in the H-bomb debate, in opposition to large thermonuclear weapons, and the assumption that he still had influence over Los Alamos despite his departure, political allies of Edward Teller decided he needed his own laboratory in order to pursue H-bombs. By the time it was opened in 1952, in Livermore, California, Los Alamos had finished the job Livermore was designed to do. Lawrence Livermore With its original mission no longer available, the Livermore lab tried radical new designs that failed. Its first three nuclear tests were fizzles: in 1953, two single-stage fission devices with uranium hydride pits, and in 1954, a two-stage thermonuclear device in which the secondary heated up prematurely, too fast for radiation implosion to work properly. Shifting gears, Livermore settled for taking ideas Los Alamos had shelved and developing them for the Army and Navy. This led Livermore to specialize in small-diameter tactical weapons, particularly ones using two-point implosion systems, such as the Swan. Small-diameter tactical weapons became primaries for small-diameter secondaries. Around 1960, when the superpower arms race became a ballistic missile race, Livermore warheads were more useful than the large, heavy Los Alamos warheads. Los Alamos warheads were used on the first intermediate-range ballistic missiles, IRBMs, but smaller Livermore warheads were used on the first intercontinental ballistic missiles, ICBMs, and submarine-launched ballistic missiles, SLBMs, as well as on the first multiple warhead systems on such missiles. In 1957 and 1958, both labs built and tested as many designs as possible, in anticipation that a planned 1958 test ban might become permanent. By the time testing resumed in 1961 the two labs had become duplicates of each other, and design jobs were assigned more on workload considerations than lab specialty. Some designs were horse-traded. For example, the W38 warhead for the Titan I missile started out as a Livermore project, was given to Los Alamos when it became the Atlas missile warhead, and in 1959 was given back to Livermore, in trade for the W54 Davy Crockett warhead, which went from Livermore to Los Alamos. Warhead designs after 1960 took on the character of model changes, with every new missile getting a new warhead for marketing reasons. The chief substantive change involved packing more fissile uranium-235 into the secondary, as it became available with continued uranium enrichment and the dismantlement of the large high-yield bombs. Starting with the Nova facility at Livermore in the mid-1980s, nuclear design activity pertaining to radiation-driven implosion was informed by research with indirect drive laser fusion. This work was part of the effort to investigate Inertial Confinement Fusion. Similar work continues at the more powerful National Ignition Facility. The Stockpile Stewardship and Management Program also benefited from research performed at NIF. Explosive testing Nuclear weapons are in large part designed by trial and error. The trial often involves test explosion of a prototype. 
In a nuclear explosion, a large number of discrete events, with various probabilities, aggregate into short-lived, chaotic energy flows inside the device casing. Complex mathematical models are required to approximate the processes, and in the 1950s there were no computers powerful enough to run them properly. Even today's computers and simulation software are not adequate. It was easy enough to design reliable weapons for the stockpile: if the prototype worked, it could be weaponized and mass-produced. It was much more difficult to understand how it worked or why it failed. Designers gathered as much data as possible during the explosion, before the device destroyed itself, and used the data to calibrate their models, often by inserting fudge factors into equations to make the simulations match experimental results. They also analyzed the weapon debris in fallout to see how much of a potential nuclear reaction had taken place. Light pipes An important tool for test analysis was the diagnostic light pipe. A probe inside a test device could transmit information by heating a plate of metal to incandescence, an event that could be recorded by instruments located at the far end of a long, very straight pipe. A well-known photograph shows the Shrimp device, detonated on March 1, 1954, at Bikini as the Castle Bravo test, with the silhouette of a man for scale. Its 15-megaton explosion was the largest ever conducted by the United States. The device is supported from below, at the ends; the pipes going into the shot cab ceiling, which appear to be supports, are actually diagnostic light pipes. The eight pipes at the right end (1) sent information about the detonation of the primary. Two in the middle (2) marked the time when X-rays from the primary reached the radiation channel around the secondary. The last two pipes (3) noted the time radiation reached the far end of the radiation channel, the difference between (2) and (3) being the radiation transit time for the channel. From the shot cab, the pipes turned horizontally and traveled along a causeway built on the Bikini reef to a remote-controlled data collection bunker on Namu Island. While X-rays would normally travel at the speed of light through a low-density material like the plastic foam channel filler between (2) and (3), the intensity of radiation from the exploding primary creates a relatively opaque radiation front in the channel filler, which acts like a slow-moving logjam to retard the passage of radiant energy. While the secondary is being compressed via radiation-induced ablation, neutrons from the primary catch up with the X-rays, penetrate into the secondary, and start breeding tritium via the third reaction noted in the first section above. This 6Li + n reaction is exothermic, producing 5 MeV per event. The spark plug has not yet been compressed and thus remains subcritical, so no significant fission or fusion takes place as a result. If enough neutrons arrive before implosion of the secondary is complete, though, the crucial temperature differential between the outer and inner parts of the secondary can be degraded, potentially causing the secondary to fail to ignite. The first Livermore-designed thermonuclear weapon, the Morgenstern device, failed in this manner when it was tested as Castle Koon on April 7, 1954.
The primary ignited, but the secondary, preheated by the primary's neutron wave, suffered what was termed an inefficient detonation; thus, a weapon with a predicted one-megaton yield produced only 110 kilotons, of which merely 10 kt were attributed to fusion. These timing effects, and any problems they cause, are measured by light-pipe data. The mathematical simulations which they calibrate are called radiation flow hydrodynamics codes, or channel codes. They are used to predict the effect of future design modifications. It is not clear from the public record how successful the Shrimp light pipes were. The unmanned data bunker was far enough back to remain outside the mile-wide crater, but the 15-megaton blast, two and a half times as powerful as expected, breached the bunker by blowing its 20-ton door off the hinges and across the inside of the bunker. (The nearest people were farther away, in a bunker that survived intact.) Fallout analysis The most interesting data from Castle Bravo came from radio-chemical analysis of weapon debris in fallout. Because of a shortage of enriched lithium-6, 60% of the lithium in the Shrimp secondary was ordinary lithium-7, which does not breed tritium as easily as lithium-6 does. But it does breed lithium-6 as the product of an (n, 2n) reaction (one neutron in, two neutrons out) – a known fact, but one with unknown probability. The probability turned out to be high. Fallout analysis revealed to designers that, with the (n, 2n) reaction, the Shrimp secondary effectively had two and a half times as much lithium-6 as expected. The tritium, the fusion yield, the neutrons, and the fission yield were all increased accordingly. As noted above, Bravo's fallout analysis also told the outside world, for the first time, that thermonuclear bombs are more fission devices than fusion devices. A Japanese fishing boat, Daigo Fukuryū Maru, sailed home with enough fallout on her decks to allow scientists in Japan and elsewhere to determine, and announce, that most of the fallout had come from the fission of U-238 by fusion-produced 14 MeV neutrons. Underground testing The global alarm over radioactive fallout, which began with the Castle Bravo event, eventually drove nuclear testing literally underground. The last U.S. above-ground test took place at Johnston Island on November 4, 1962. During the next three decades, until September 23, 1992, the United States conducted an average of 2.4 underground nuclear explosions per month, all but a few at the Nevada Test Site (NTS) northwest of Las Vegas. The Yucca Flat section of the NTS is covered with subsidence craters resulting from the collapse of terrain over radioactive caverns created by nuclear explosions. After the 1974 Threshold Test Ban Treaty (TTBT), which limited underground explosions to 150 kilotons or less, warheads like the half-megaton W88 had to be tested at less than full yield. Since the primary must be detonated at full yield in order to generate data about the implosion of the secondary, the reduction in yield had to come from the secondary. Replacing much of the lithium-6 deuteride fusion fuel with lithium-7 hydride limited the tritium available for fusion, and thus the overall yield, without changing the dynamics of the implosion. The functioning of the device could be evaluated using light pipes, other sensing devices, and analysis of trapped weapon debris. The full yield of the stockpiled weapon could be calculated by extrapolation.
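The lithium bookkeeping from the Bravo fallout analysis above can be made concrete with a few lines of arithmetic. The sketch below is illustrative only: it assumes that essentially all of the lithium-7 became usable tritium breeder through the (n, 2n) reaction, and that total yield scaled linearly with the effective lithium-6 inventory – deliberate oversimplifications, but ones that reproduce the factor of two and a half quoted above.

```python
# Rough check of the Castle Bravo lithium arithmetic described above.
# Assumptions (oversimplified, for illustration): every lithium-7 nucleus
# was made available for tritium breeding by the (n, 2n) reaction, and
# total yield scaled linearly with the effective lithium-6 inventory.

li6_fraction = 0.40                 # enriched lithium-6 in the Shrimp secondary
li7_fraction = 1.0 - li6_fraction   # ordinary lithium-7 (60%)

expected_inventory = li6_fraction                  # what the designers planned on
effective_inventory = li6_fraction + li7_fraction  # what (n, 2n) breeding delivered

multiplier = effective_inventory / expected_inventory
print(f"Effective lithium-6 multiplier: {multiplier:.1f}x")        # 2.5x

actual_yield_mt = 15.0              # measured Castle Bravo yield, megatons
implied_prediction_mt = actual_yield_mt / multiplier
print(f"Implied pre-shot yield estimate: {implied_prediction_mt:.0f} Mt")  # ~6 Mt
```

Under these crude assumptions, the implied pre-shot estimate of about 6 megatons is consistent with the statement that the 15-megaton blast was two and a half times as powerful as expected.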
Production facilities When two-stage weapons became standard in the early 1950s, weapon design determined the layout of the new, widely dispersed U.S. production facilities, and vice versa. Because primaries need to be compact, especially in diameter, plutonium – which has a smaller critical mass than uranium – is the fissile material of choice for pits, with beryllium reflectors. The Rocky Flats plant near Boulder, Colorado, was built in 1952 for pit production and consequently became the plutonium and beryllium fabrication facility. The Y-12 plant in Oak Ridge, Tennessee, where mass spectrometers called calutrons had enriched uranium for the Manhattan Project, was redesigned to make secondaries. Fissile U-235 makes the best spark plugs because its critical mass is larger, especially in the cylindrical shape of early thermonuclear secondaries. Early experiments used the two fissile materials in combination, as composite Pu-Oy (plutonium and oralloy, i.e., highly enriched uranium) pits and spark plugs, but for mass production, it was easier to let the factories specialize: plutonium pits in primaries, uranium spark plugs and pushers in secondaries. Y-12 made lithium-6 deuteride fusion fuel and U-238 parts, the other two ingredients of secondaries. The Hanford Site near Richland, Washington, operated plutonium production reactors and separations facilities during World War II and the Cold War. Nine plutonium production reactors were built and operated there, the first being the B Reactor, which began operations in September 1944, and the last being the N Reactor, which ceased operations in January 1987. The Savannah River Site in Aiken, South Carolina, also built in 1952, operated nuclear reactors which converted U-238 into Pu-239 for pits, and converted lithium-6 (produced at Y-12) into tritium for booster gas. Since its reactors were moderated with heavy water (deuterium oxide), it also made deuterium for booster gas and for Y-12 to use in making lithium-6 deuteride. Warhead design safety Because even low-yield nuclear warheads have astounding destructive power, weapon designers have always recognized the need to incorporate mechanisms and associated procedures intended to prevent accidental detonation. Gun-type It is inherently dangerous to have a weapon containing a quantity and shape of fissile material which can form a critical mass through a relatively simple accident. Because of this danger, the propellant in Little Boy (four bags of cordite) was inserted into the bomb in flight, shortly after takeoff on August 6, 1945. This was the first time a gun-type nuclear weapon had ever been fully assembled. If the weapon falls into water, the moderating effect of the water can also cause a criticality accident, even without the weapon being physically damaged. Similarly, a fire caused by an aircraft crashing could easily ignite the propellant, with catastrophic results. Gun-type weapons have always been inherently unsafe. In-flight pit insertion Neither of these effects is likely with implosion weapons, since there is normally insufficient fissile material to form a critical mass without the correct detonation of the lenses. However, the earliest implosion weapons had pits so close to criticality that accidental detonation with some nuclear yield was a concern. On August 9, 1945, Fat Man was loaded onto its airplane fully assembled, but later, when levitated pits made a space between the pit and the tamper, it became feasible to use in-flight pit insertion: the bomber would take off with no fissile material in the bomb.
Some older implosion-type weapons, such as the US Mark 4 and Mark 5, used this system. In-flight pit insertion will not work with a hollow pit in contact with its tamper. Steel ball safety method One method used to decrease the likelihood of accidental detonation employed metal balls. The balls were emptied into the pit: by filling the hollow core, they prevented the symmetrical implosion needed for a nuclear explosion in the event of an accident. This design was used in the Green Grass weapon, also known as the Interim Megaton Weapon, which was used in the Violet Club and Yellow Sun Mk.1 bombs. Chain safety method Alternatively, the pit can be "safed" by having its normally hollow core filled with an inert material such as a fine metal chain, possibly made of cadmium to absorb neutrons. While the chain is in the center of the pit, the pit cannot be compressed into an appropriate shape to fission; when the weapon is to be armed, the chain is removed. Similarly, although a serious fire could detonate the explosives, destroying the pit and spreading plutonium to contaminate the surroundings, as has happened in several weapons accidents, it could not cause a nuclear explosion. One-point safety While the firing of one detonator out of many will not cause a hollow pit to go critical, especially a low-mass hollow pit that requires boosting, the introduction of two-point implosion systems made that possibility a real concern. In a two-point system, if one detonator fires, one entire hemisphere of the pit will implode as designed. The high-explosive charge surrounding the other hemisphere will explode progressively, from the equator toward the opposite pole. Ideally, this will pinch the equator and squeeze the second hemisphere away from the first, like toothpaste in a tube. By the time the explosion envelops it, its implosion will be separated both in time and space from the implosion of the first hemisphere. The resulting dumbbell shape, with each end reaching maximum density at a different time, may not become critical. It is not possible to tell on the drawing board how this will play out, nor is it possible to tell using a dummy pit of U-238 and high-speed X-ray cameras, although such tests are helpful. For final determination, a test needs to be made with real fissile material. Consequently, starting in 1957, a year after Swan, both labs began one-point safety tests. Out of 25 one-point safety tests conducted in 1957 and 1958, seven had zero or slight nuclear yield (success), three had high yields of 300 t to 500 t (severe failure), and the rest had unacceptable yields between those extremes. Of particular concern was Livermore's W47, which generated unacceptably high yields in one-point testing. To prevent an accidental detonation, Livermore decided to use mechanical safing on the W47; the wire safety scheme described below was the result. When testing resumed in 1961, and continued for three decades, there was sufficient time to make all warhead designs inherently one-point safe, without need for mechanical safing. Wire safety method In the last test before the 1958 moratorium, the W47 warhead for the Polaris SLBM was found not to be one-point safe, producing an unacceptably high nuclear yield in the Hardtack II Titania test. With the test moratorium in force, there was no way to refine the design and make it inherently one-point safe. A solution was devised consisting of a boron-coated wire inserted into the weapon's hollow pit at manufacture.
The warhead was armed by withdrawing the wire onto a spool driven by an electric motor; once withdrawn, the wire could not be re-inserted. The wire had a tendency to become brittle during storage and to break or get stuck during arming, preventing complete removal and rendering the warhead a dud. It was estimated that 50–75% of warheads would have failed. This required a complete rebuild of all W47 primaries. The oil used for lubricating the wire also promoted corrosion of the pit. Strong link/weak link Under the strong link/weak link system, "weak links" are constructed between critical nuclear weapon components (the "hard links"). In the event of an accident, the weak links are designed to fail first, in a manner that precludes energy transfer between the hard links. If a hard link then fails in a manner that transfers or releases energy, that energy cannot be transferred into other weapon components, where it could potentially start a nuclear detonation. Hard links are usually critical weapon components that have been hardened to survive extreme environments, while weak links can be both components deliberately inserted into the system to act as a weak link and critical nuclear components that can fail predictably. An example of a weak link would be an electrical connector that contains wires made from a low-melting-point alloy; during a fire, those wires would melt, breaking any electrical connection. Permissive action link A permissive action link is an access control device designed to prevent unauthorized use of nuclear weapons. Early PALs were simple electromechanical switches, and they have evolved into complex arming systems that include integrated yield control options, lockout devices and anti-tamper devices. References Notes Bibliography Cohen, Sam, The Truth About the Neutron Bomb: The Inventor of the Bomb Speaks Out, William Morrow & Co., 1983. Coster-Mullen, John, Atom Bombs: The Top Secret Inside Story of Little Boy and Fat Man, self-published, 2011. Glasstone, Samuel, and Dolan, Philip J., editors, The Effects of Nuclear Weapons (third edition) (PDF), U.S. Government Printing Office, 1977. Grace, S. Charles, Nuclear Weapons: Principles, Effects and Survivability (Land Warfare: Brassey's New Battlefield Weapons Systems and Technology, vol. 10). Hansen, Chuck, Swords of Armageddon: U.S. Nuclear Weapons Development since 1945 (CD-ROM and download available), PDF, 2,600 pages, Sunnyvale, California, Chucklea Publications, 1995, 2007 (2nd ed.). The Effects of Nuclear War, Office of Technology Assessment (May 1979). Rhodes, Richard, The Making of the Atomic Bomb, Simon and Schuster, New York (1986). Rhodes, Richard, Dark Sun: The Making of the Hydrogen Bomb, Simon and Schuster, New York (1995). Smyth, Henry DeWolf, Atomic Energy for Military Purposes, Princeton University Press, 1945 (see: Smyth Report). External links Carey Sublette's Nuclear Weapon Archive is a reliable source of information and has links to other sources. Nuclear Weapons Frequently Asked Questions: Section 4.0 Engineering and Design of Nuclear Weapons. The Federation of American Scientists provides solid information on weapons of mass destruction, including nuclear weapons and their effects. More information on the design of two-stage fusion bombs. Militarily Critical Technologies List (MCTL), Part II (1998) (PDF) from the US Department of Defense at the Federation of American Scientists website.
"Restricted Data Declassification Decisions from 1946 until Present", Department of Energy report series published from 1994 until January 2001 which lists all known declassification actions and their dates. Hosted by Federation of American Scientists. The Holocaust Bomb: A Question of Time is an update of the 1979 court case USA v. The Progressive, with links to supporting documents on nuclear weapon design. Annotated bibliography on nuclear weapons design from the Alsos Digital Library for Nuclear Issues The Woodrow Wilson Center's Nuclear Proliferation International History Project or NPIHP is a global network of individuals and institutions engaged in the study of international nuclear history through archival documents, oral history interviews and other empirical sources. Design Weapon design
Nuclear weapon design
Engineering
16,940
18,512,183
https://en.wikipedia.org/wiki/Jacob%20Matham
Jacob Matham (15 October 1571 – 20 January 1631), of Haarlem, was an engraver and pen-draftsman. Biography He was the stepson and pupil of the painter and draftsman Hendrik Goltzius, and the brother-in-law of the engraver Simon van Poelenburgh, whose sister Marijtgen he married. He made several engravings after the paintings of Peter Paul Rubens from 1611 to 1615, and also a series after the work of Pieter Aertsen. In 1613, the engraver Jan van de Velde was apprenticed to him. He was the father of Jan, Theodor and Adriaen Matham, the last of whom was a notable engraver in his own right. References External links Vermeer and The Delft School, a full-text exhibition catalog from The Metropolitan Museum of Art, which contains material on Jacob Matham 1571 births 1631 deaths Draughtsmen Dutch Golden Age printmakers Artists from Haarlem Renaissance engravers Painters from Haarlem
Jacob Matham
Engineering
208
938,571
https://en.wikipedia.org/wiki/Hirola
The hirola (Beatragus hunteri), also called the Hunter's hartebeest or Hunter's antelope, is a critically endangered antelope species currently found only in Kenya, along the border with Somalia. It was first described by the big game hunter and zoologist H.C.V. Hunter in 1888. It is the only living member of the genus Beatragus, though other species are known from the fossil record. The global hirola population is estimated at 300–500 animals, and there are none in captivity. According to a document produced by the International Union for Conservation of Nature, "the loss of the hirola would be the first extinction of a mammalian genus on mainland Africa in modern human history". Description The hirola is a medium-sized antelope, tan to rufous-tawny in colour with slightly lighter underparts, predominantly white inner ears and a white tail which extends down to the hocks. It has very sharp, lyrate horns which lack a basal pedicle and are ridged along three quarters of their length. As hirola age, their coats darken towards a slate grey and the number of ridges along their horns increases. Hirola have large, dark sub-orbital glands, used for marking their territories, which give them the name "four-eyed antelope". They have white spectacles around their eyes and an inverted white chevron running between the eyes. The horns, hooves, udders, nostrils, lips and ear tips are black. Males and females look similar, although males are slightly larger, with thicker horns and darker coats. Several sources have recorded precise measurements from both captive and wild hirola. The following are maximum and minimum values taken from all sources: height at the shoulder, 99–125 cm; body weight, 73–118 kg; head and body length, 120–200 cm; horn length, 44–72 cm; horn spread (greatest outside width), 15–32 cm; tail length, 30–45 cm; ear length, 19 cm. It is not stated whether horn length was measured directly from base to tip or along the curve of the horn. There are no data on how long hirola live in the wild, but in captivity they have been known to live for 15 years. Taxonomy Authorities agree that the hirola belongs in the subfamily Alcelaphinae within the family Bovidae, but there has been debate about the genus in which it should be placed. The Alcelaphinae contains the hartebeest, wildebeest, topi, korrigum, bontebok, blesbok, tiang and tsessebe. When it was first described, the hirola was given the common name Hunter's hartebeest. Despite this, it was placed in the genus Damaliscus with the topi and given the scientific name Damaliscus hunteri. Later classifications treated it as a subspecies of the topi (Damaliscus lunatus hunteri) or placed it within its own genus as Beatragus hunteri. Recent genetic analyses of karyotypes and mitochondrial DNA support the theory that the hirola is distinct from the topi and should be placed in its own genus. They also indicate that the hirola is in fact more closely related to Alcelaphus than to Damaliscus. Placing the hirola in its own genus is further supported by behavioural observations. Neither Alcelaphus nor Damaliscus engages in flehmen, where the male tastes the urine of the female to determine oestrus; they are the only genera of bovids to have lost this behaviour. Hirola still engage in flehmen, although it is less obvious than in other species. The genus Beatragus originated around 3.1 million years ago and was once widespread, with fossils found in Ethiopia, Djibouti, Tanzania and South Africa.
Ecology The hirola is adapted to arid environments with low average annual rainfall. Its habitats range from open grassland with light bush to wooded savannahs with low shrubs and scattered trees, most often on sandy soils. Despite the arid environments they inhabit, hirola appear to be able to survive independently of surface water. Andanje observed hirola drinking on only 10 occasions in 674 observations (1.5%), and all 10 observations of drinking occurred at the height of the dry season. Hirola do, however, favour short green grass, and in 392 of 674 observations (58%) hirola were grazing on growths of short green grass around waterholes. This association with waterholes may have led to reports that hirola are dependent on surface water. Hirola are primarily grazers, but browse may be important in the dry season. They favour grasses with a high leaf-to-stem ratio, and Chloris and Digitaria species are believed to be important in their diet. Kingdon does not consider the ecological requirements of the hirola unusual and in fact considers them to be more generalist than those of either Connochaetes spp. or Damaliscus. A veterinarian who examined the digestive tracts of several hirola concluded that they were well adapted to eating dry-region grasses and roughage. They feed on the dominant grasses of the region, and Kingdon (1982) believes that quantity is more important than quality in the hirola's diet. Hirola are often found in association with other species, particularly oryx, Grant's gazelle, Burchell's zebra and topi. They avoid Coke's hartebeest, African buffalo, and elephant. Whilst hirola avoid direct association with livestock, they reportedly prefer the short grass in areas where livestock have grazed. Social structure and reproduction Female hirola give birth alone and may remain separate from the herd for up to two months, making them vulnerable to predation. Eventually the female will rejoin a nursery herd consisting of females and their young. Nursery herds number from 5 to 40, although the mean herd size is 7–9. They are usually accompanied by an adult male. Young hirola leave the nursery herd at around nine months of age and form various temporary associations. They may gather together in mixed or single-sex herds of up to three individuals; sub-adult or subordinate adult males may form bachelor herds of 2–38 individuals; female sub-adults may join an adult male; and, if no other hirola are present, young hirola may attach themselves to a herd of Grant's gazelles or simply spend most of their time alone. Adult males attempt to secure a territory on good pasture. These territories are marked with dung, with secretions from the sub-orbital glands, and by stamping grounds where males scrape the soil with their hooves and slash the vegetation with their horns. It has been suggested that at low population densities adult males abandon territory defence and will instead follow a nursery herd. Nursery herds do not defend a territory but do have home ranges, which vary considerably in size and overlap the territories of several adult males. Nursery herds are relatively stable, but bachelor herds are very unstable, with a fission–fusion dynamic. In the 1970s hirola were observed forming aggregations of up to 300 individuals to take advantage of scarce, but spatially clumped, resources during the dry season (Bunderson, 1985).
Information is lacking on male territoriality and how it relates to mating success, on how and when hirola join a herd, and on how new herds are established (Butynski, 2000). Hirola are seasonal breeders, with young being born from September to November. Data on age of sexual maturity and gestation period are not available for wild hirola; however, in captivity gestation was around 7.5 months (227–242 days), with one female mating at 1.4 years old and giving birth at 1.9 years. Another pair of hirola mated when they were 1.7 years of age. In captivity one of the main causes of mortality is wounds caused by intra-hirola aggression, including aggression between females. Threats The reasons for the historic decline of the hirola are not known, but it is likely a combination of factors, including disease (particularly rinderpest), hunting, severe drought, predation, competition for food and water from domestic livestock, and habitat loss caused by woody plant encroachment as a result of the extirpation of elephants within its range. The hirola prefers areas that are used by livestock, which puts it at increased risk from diseases like tuberculosis. It might be vulnerable to poaching, and it is also subject to the natural phenomena of predation and competition with other wild herbivores, particularly topi and Coke's hartebeest, which the IUCN also calls "threats". Population size and distribution The hirola's natural range is an area of no more than 1,500 km2 on the Kenyan–Somali border, but there is also a translocated population in Tsavo East National Park. The natural population in the 1970s was likely to number 10,000–15,000 individuals, but there was an 85–90% decline between 1983 and 1985. A survey in 1995 and 1996 estimated the population to number between 500 and 2,000 individuals, with 1,300 as the most reasonable estimate. A 2010 survey estimated a population of 402–466 hirola. A translocated population was established in Kenya's Tsavo East National Park with translocations in 1963 and 1996 (Hofmann, 1996; Andanje & Ottichilo, 1999; Butynski, 1999; East, 1999). The 1963 translocation released 30 animals, and the first survey, in December 1995, concluded that there were at least 76 hirola present in Tsavo at the time. Eight months later a further 29 translocated hirola were released into Tsavo, at least six of which were pregnant at the time (Andanje, 1997). By December 2000 the hirola population in Tsavo had returned to 77 individuals (Andanje, 2002), and by 2011 the population was estimated at 76 individuals. In 2013, 9 individuals from 7 different herds in north-eastern Kenya were fitted with GPS collars scheduled to drop off in June 2014. This marked the first time that the species was GPS-collared in the wild. These collaring events served to improve understanding of the basic ecology, natural history, movement patterns and population demographics of the species. Status and conservation Hirola are critically endangered, and their numbers continue to decline in the wild. There are between 300 and 500 individuals in the wild and none currently in captivity. Despite its being one of the rarest antelopes, conservation measures for the hirola have so far been marginal. The Arawale National Reserve was created in 1973 as a small sanctuary for the species, but it has been left unmaintained since the 1980s. In 2005, four local communities in the Ijara District, in collaboration with Terra Nuova, established the Ishaqbini Hirola Conservancy.
As of 2014, a 23 km2 predator-proof fenced sanctuary has been constructed at Ishaqbini and a founding population of 48 hirola is breeding well within the sanctuary. References Hirola Antelope Beatragus hunteri conservation status and conservation action in Kenya External links Ever heard of the hirola? No safe haven for rarest antelope Hirola Conservation Programme Kenya Wildlife Service Endemic fauna of Kenya Alcelaphinae EDGE species Mammals of Kenya Mammals of Somalia Fauna of East Africa Mammals described in 1889 Bovids of Africa Taxa named by Philip Sclater
Hirola
Biology
2,425
8,338,259
https://en.wikipedia.org/wiki/Center%20for%20Drug%20Evaluation%20and%20Research
The Center for Drug Evaluation and Research (CDER, pronounced "see'-der") is a division of the U.S. Food and Drug Administration (FDA) that monitors most drugs as defined in the Food, Drug, and Cosmetic Act. Some biological products are also legally considered drugs, but they are covered by the Center for Biologics Evaluation and Research. The center reviews applications for brand-name, generic, and over-the-counter pharmaceuticals, manages US current Good Manufacturing Practice (cGMP) regulations for pharmaceutical manufacturing, determines which medications require a medical prescription, monitors advertising of approved medications, and collects and analyzes safety data about pharmaceuticals that are already on the market. CDER receives considerable public scrutiny, and thus implements processes that tend toward objectivity and that isolate decisions from being attributed to specific individuals. Decisions on approval will often make or break a small company's stock price (as with Martha Stewart and ImClone), so the markets closely watch CDER's decisions. The center has around 1,300 employees in "review teams" that evaluate and approve new drugs. Additionally, CDER employs a "safety team" of 72 employees to determine whether new drugs are unsafe or present risks not disclosed in the product's labeling. The FDA's budget for approving, labeling, and monitoring drugs is roughly $290 million per year. The safety team monitors the effects of more than 3,000 prescription drugs on 200 million people with a budget of about $15 million a year. Patrizia Cavazzoni is the current director of CDER. Responsibilities CDER reviews New Drug Applications to ensure that the drugs are safe and effective. Its primary objective is to ensure that all prescription and over-the-counter (OTC) medications are safe and effective when used as directed. The FDA requires a four-phase series of clinical trials for testing drugs. Phase I involves testing new drugs on small groups of healthy volunteers to determine the maximum safe dosage. Phase II trials involve patients with the condition the drug is intended to treat, testing for safety and minimal efficacy in a somewhat larger group of people. Phase III trials involve one to five thousand patients to determine whether the drug is effective in treating the condition it is intended to be used for. After this stage, a New Drug Application is submitted. If the drug is approved, Phase IV trials are conducted after marketing to ensure there are no adverse effects or long-term effects of the drug that were not previously discovered. With the rapid advancement of biologically derived treatments, the FDA has stated that it is working to modernize the process of approval for new drugs. In 2017, Commissioner Scott Gottlieb estimated that the agency had more than 600 active applications for gene and cell-based therapies.
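The four-phase sequence above can be restated compactly as data. The short sketch below is purely illustrative: the structure and field names are this summary's own invention and do not correspond to any FDA system or API.

```python
# Illustrative restatement of the clinical trial phases described above.
# The data structure and names are invented for this summary only.

TRIAL_PHASES = [
    ("I", "small groups of healthy volunteers",
     "determine the maximum safe dosage"),
    ("II", "patients with the condition the drug is intended to treat",
     "test safety and minimal efficacy in a somewhat larger group"),
    ("III", "one to five thousand patients",
     "determine whether the drug effectively treats the target condition"),
    # A New Drug Application (NDA) is submitted after Phase III.
    ("IV", "the post-marketing patient population",
     "watch for adverse or long-term effects not previously discovered"),
]

for phase, subjects, goal in TRIAL_PHASES:
    print(f"Phase {phase}: {goal} ({subjects})")
```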
Divisions CDER is divided into eight offices with different responsibilities:
Office of New Drugs – responsible for oversight of clinical trials and other studies during drug development, and for the evaluation of new drug applications. It is divided into several departments based on the indication of the drug (the medical need for which it is being proposed).
Office of Generic Drugs – reviews generic drug applications to ensure that generic drugs are equivalent to their branded forms.
Office of Strategic Programs – responsible for business programs; represents CDER on the FDA Bioinformatics Board and communicates with other agencies.
Office of Pharmaceutical Quality – responsible for integrating assessment, inspection, surveillance, policy, and research activities to strengthen pharmaceutical quality on a global scale.
Office of Surveillance and Epidemiology – responsible for post-marketing surveillance to identify adverse effects that may not have been apparent during clinical trials, using the MedWatch program.
Office of Translational Sciences – promotes collaboration across offices in CDER by maintaining databases and biostatistical tools for evaluating drugs.
Office of Medical and Regulatory Policy – develops and reviews guidelines pertinent to CDER's mission of ensuring the safety of drugs.
Office of Compliance – ensures compliance with regulations relating to drug development and marketing.
History The FDA has had the responsibility of reviewing drugs since the passage of the 1906 Pure Food and Drugs Act. The 1938 Federal Food, Drug and Cosmetic Act required all new drugs to be tested before marketing by submitting the original form of the new drug application. Within the first year, the FDA's Drug Division, the predecessor to CDER, received over 1,200 applications. The Drug Amendments of 1962 required manufacturers to prove to the FDA that the drug in question was both safe and effective. In 1966, the division was reorganized to create the Office of New Drugs, which was responsible for reviewing new drug applications and clinical testing of drugs. In 1982, when the beginning of the biotechnology revolution blurred the line between a drug and a biologic, the Bureau of Drugs was merged with the FDA's Bureau of Biologics to form the National Center for Drugs and Biologics during an agency-wide reorganization under Commissioner Arthur Hayes. This reorganization similarly merged the bureaus responsible for medical devices and radiation control into the Center for Devices and Radiological Health. In 1987, under Commissioner Frank Young, CDER and the Center for Biologics Evaluation and Research (CBER) were split into their present form. The two groups were charged with enforcing different laws and had significant philosophical and cultural differences. At that time, CDER was more cautious about approving therapeutics and had a more adversarial relationship with the industry. The growing crisis around HIV testing and treatment, and an inter-agency dispute between officials from the former Bureau of Drugs and officials from the former Bureau of Biologics over whether to approve Genentech's Activase (tissue plasminogen activator), led to the split. In its original form, CDER was composed of the Offices of Management, Compliance, Drug Standards, Drug Evaluation I, Drug Evaluation II, Epidemiology and Biostatistics, and Research Resources.
The Division of Antiviral Products was added in 1989 under Drug Evaluation II due to the large number of drugs proposed for treating AIDS. The Office of Generic Drugs was also formed. In 2002, the FDA transferred a number of biologically produced therapeutics to CDER, including therapeutic monoclonal antibodies, proteins intended for therapeutic use, immunomodulators, and growth factors and other products designed to alter the production of blood cells.

External links
Food and Drug Administration
https://en.wikipedia.org/wiki/NETCONF
The Network Configuration Protocol (NETCONF) is a network management protocol developed and standardized by the IETF. It was developed in the NETCONF working group and published in December 2006 as RFC 4741, then revised in June 2011 and published as RFC 6241. The NETCONF protocol specification is an Internet Standards Track document.

NETCONF provides mechanisms to install, manipulate, and delete the configuration of network devices. Its operations are realized on top of a simple Remote Procedure Call (RPC) layer. The NETCONF protocol uses an Extensible Markup Language (XML) based data encoding for the configuration data as well as the protocol messages. The protocol messages are exchanged on top of a secure transport protocol.

The NETCONF protocol can be conceptually partitioned into four layers:
The Content layer consists of configuration data and notification data.
The Operations layer defines a set of base protocol operations to retrieve and edit the configuration data.
The Messages layer provides a mechanism for encoding remote procedure calls (RPCs) and notifications.
The Secure Transport layer provides a secure and reliable transport of messages between a client and a server.
An example of the resulting XML message framing is sketched below, after the history section.

The NETCONF protocol has been implemented in network devices such as routers and switches by some major equipment vendors. One particular strength of NETCONF is its support for robust configuration change using transactions involving a number of devices.

History
The IETF developed the Simple Network Management Protocol (SNMP) in the late 1980s and it proved to be a very popular network management protocol. In the early part of the 21st century it became apparent that, in spite of what was originally intended, SNMP was not being used to configure network equipment, but was mainly being used for network monitoring. In June 2002, the Internet Architecture Board and key members of the IETF's network management community got together with network operators to discuss the situation. The results of this meeting are documented in RFC 3535. It turned out that each network operator was primarily using a different proprietary command-line interface (CLI) to configure their devices. The CLIs had a number of features that the operators liked, including the fact that they were text-based, as opposed to the BER-encoded SNMP. In addition, many equipment vendors did not provide the option to completely configure their devices via SNMP. As operators generally liked to write scripts to help manage their boxes, they found the CLIs lacking in a number of ways, most notably the unpredictable nature of their output: the content and formatting of output was prone to change in unpredictable ways.

Around this same time, Juniper Networks had been using an XML-based network management approach. This was brought to the IETF and shared with the broader community. Collectively, these two events led the IETF in May 2003 to the creation of the NETCONF working group. This working group was chartered to work on a network configuration protocol, which would better align with the needs of network operators and equipment vendors. The first version of the base NETCONF protocol was published as RFC 4741 in December 2006. Several extensions were published in subsequent years (notifications in RFC 5277 in July 2008, partial locks in RFC 5717 in December 2009, with-defaults in RFC 6243 in June 2011, system notifications in RFC 6470 in February 2012, access control in RFC 6536 in March 2012).
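As an illustration of the XML encoding and the Messages-layer framing described above, the following Python sketch builds an <rpc> invocation carrying a <get-config> operation in the NETCONF base namespace. It uses only the standard library; the message-id value is arbitrary and the commented output layout is approximate (the actual output is a single line).

```python
# Build a NETCONF <rpc> message by hand, purely to show the XML framing.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"  # NETCONF base namespace (RFC 6241)
ET.register_namespace("", NC)

rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})  # links reply to request
get_config = ET.SubElement(rpc, f"{{{NC}}}get-config")
source = ET.SubElement(get_config, f"{{{NC}}}source")
ET.SubElement(source, f"{{{NC}}}running")  # read from the <running> datastore

print(ET.tostring(rpc, encoding="unicode"))
# <rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">
#   <get-config><source><running/></source></get-config></rpc>
```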
A revised version of the base NETCONF protocol was published as RFC 6241 in June 2011.

Protocol layers
Content
The content of NETCONF operations is well-formed XML. Most content is related to network management. Support for encoding in JavaScript Object Notation (JSON) was subsequently added. The NETMOD working group has completed work to define a "human-friendly" modeling language for defining the semantics of operational data, configuration data, notifications, and operations, called YANG. YANG is defined in RFC 6020 (version 1) and RFC 7950 (version 1.1), and is accompanied by the "Common YANG Data Types" found in RFC 6991. During the summer of 2010, the NETMOD working group was re-chartered to work on core configuration models (system, interface, and routing) as well as work on compatibility with the SNMP modeling language.

Operations
The base protocol defines the following protocol operations (the nine base operations of RFC 6241):
get: retrieve running configuration and device state information.
get-config: retrieve all or part of a specified configuration datastore.
edit-config: edit a configuration datastore by creating, deleting, merging or replacing content.
copy-config: copy an entire configuration datastore to another configuration datastore.
delete-config: delete a configuration datastore.
lock: lock an entire configuration datastore of a device.
unlock: release a configuration datastore lock previously obtained with the lock operation.
close-session: request graceful termination of a NETCONF session.
kill-session: force the termination of a NETCONF session.

Basic NETCONF functionality can be extended by the definition of NETCONF capabilities. The set of additional protocol features that an implementation supports is communicated between the server and the client during the capability exchange portion of session setup. Mandatory protocol features are not included in the capability exchange since they are assumed. RFC 4741 defines a number of optional capabilities including :xpath and :validate. Note that RFC 6241 obsoletes RFC 4741.

A capability to support subscribing and receiving asynchronous event notifications is published in RFC 5277. This document defines the <create-subscription> operation, which enables creating real-time and replay subscriptions. Notifications are then sent asynchronously using the <notification> construct. It also defines the :interleave capability, which when supported with the basic :notification capability facilitates the processing of other NETCONF operations while the subscription is active.

A capability to support partial locking of the running configuration is defined in RFC 5717. This allows multiple sessions to edit non-overlapping sub-trees within the running configuration. Without this capability, the only lock available is for the entire configuration.

A capability to monitor the NETCONF protocol is defined in RFC 6022. This document contains a data model including information about NETCONF datastores, sessions, locks, and statistics that facilitates the management of a NETCONF server. It also defines methods for NETCONF clients to discover data models supported by a NETCONF server and defines the <get-schema> operation to retrieve them.

Messages
The NETCONF messages layer provides a simple, transport-independent framing mechanism for encoding RPC invocations (<rpc> messages), RPC results (<rpc-reply> messages), and event notifications (<notification> messages). Every NETCONF message is a well-formed XML document. An RPC result is linked to an RPC invocation by a message-id attribute. NETCONF messages can be pipelined, i.e., a client can invoke multiple RPCs without having to wait for RPC result messages first. RPC messages are defined in RFC 6241 and notification messages are defined in RFC 5277.

Transport
NETCONF Protocol over Secure Shell (SSH): RFC 6242
NETCONF Protocol over Transport Layer Security (TLS) with Mutual X.509 Authentication: RFC 7589
A short example of a NETCONF client session is shown below, after the See also list.

See also
YANG
RESTCONF
Network management
Configuration management
Network monitoring
XML Schema
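For a sense of how the layers fit together in practice, here is a minimal client sketch using ncclient, a third-party Python NETCONF library that is not mentioned in this article. The host address and credentials are placeholders; the library performs the capability exchange on connect and the sketch then issues a <get-config> on the running datastore over SSH (RFC 6242 transport, IANA port 830).

```python
# Minimal NETCONF client sketch using the third-party "ncclient" library.
from ncclient import manager

with manager.connect(
    host="192.0.2.1",        # placeholder management address
    port=830,                # IANA-assigned port for NETCONF over SSH
    username="admin",        # placeholder credentials
    password="admin",
    hostkey_verify=False,    # lab use only; verify host keys in production
) as m:
    # Capabilities advertised by the server during session setup
    for cap in m.server_capabilities:
        print(cap)

    # <get-config> on the <running> datastore; the <rpc-reply> payload is
    # available as XML-encoded configuration data
    reply = m.get_config(source="running")
    print(reply.data_xml)
```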
https://en.wikipedia.org/wiki/Ruth%20Sonntag%20Nussenzweig
Ruth Sonntag Nussenzweig (20 June 1928 – 1 April 2018) was an Austrian-Brazilian immunologist specializing in the development of malaria vaccines. In a career spanning over 60 years, she was primarily affiliated with New York University (NYU). She served as C.V. Starr Professor of Medical and Molecular Parasitology at Langone Medical Center, Research Professor at the NYU Department of Pathology, and finally Professor Emerita of Microbiology and Pathology at the NYU Department of Microbiology.

Biography
Dr. Nussenzweig was born Ruth Sonntag in Vienna, Austria, to a secular Jewish family in which both of her parents were physicians. In 1939, after the Anschluss, the Sonntags fled to São Paulo, Brazil. While attending the University of São Paulo School of Medicine, she became involved in leftist politics and met Victor Nussenzweig, her future husband and lifelong research partner. After receiving her M.D., Nussenzweig moved to Paris for a research fellowship. In 1963, she did further graduate work at the NYU laboratory of immunologist Zoltán Óváry. In 1965, the Nussenzweigs returned to São Paulo, and found that working conditions had become untenable since the 1964 military coup; many of their friends and colleagues had been jailed by the regime, and Victor was singled out for questioning by the School's new military administration. Through the intervention of Baruj Benacerraf, both Nussenzweigs obtained Assistant Professorships at NYU, and moved permanently to the United States. Dr. Nussenzweig returned briefly to Brazil to defend her doctoral thesis, earning her Ph.D. from the University of São Paulo in 1968.

Dr. Nussenzweig's family includes multiple people who have made significant contributions to research and academia, including husband Victor, Professor Emeritus at the NYU School of Medicine; son Michel C. Nussenzweig, Professor of Medicine at The Rockefeller University; daughter Sonia Nussenzweig-Hotimsky, Professor of Anthropology at the Foundation School of Sociology and Politics in São Paulo; and son Andre Nussenzweig, Distinguished Investigator at the National Institutes of Health.

Research work
In 1967, Dr. Nussenzweig demonstrated that mice could acquire immunity to the Plasmodium berghei parasite. She did so by exposing the mice to P. berghei sporozoites that had been inactivated by X-ray irradiation.

Major publications
Huang, Jing; Li, Xiangming; Coelho-Dos-Reis, Jordana G A; Zhang, Min; Mitchell, Robert; Nogueira, Raquel Tayar; Tsao, Tiffany; Noe, Amy R; Ayala, Ramses; Sahi, Vincent; Gutierrez, Gabriel M; Nussenzweig, Victor; Wilson, James M; Nardin, Elizabeth H; Nussenzweig, Ruth S; Tsuji, Moriya. "Human immune system mice immunized with Plasmodium falciparum circumsporozoite protein induce protective human humoral immunity against malaria." Journal of Immunological Methods. 2015 Sep;:42-50
Teixeira, Lais H; Tararam, Cibele A; Lasaro, Marcio O; Camacho, Ariane G A; Ersching, Jonatan; Leal, Monica T; Herrera, Socrates; Bruna-Romero, Oscar; Soares, Irene S; Nussenzweig, Ruth S; Ertl, Hildegund C J; Nussenzweig, Victor; Rodrigues, Mauricio M. "Immunogenicity of a Prime-Boost Vaccine Containing the Circumsporozoite Proteins of Plasmodium vivax in Rodents." Infection and Immunity. 2014 Feb;82(2):793-807
Mishra, Satish; Nussenzweig, Ruth S; Nussenzweig, Victor. "Antibodies to Plasmodium circumsporozoite protein (CSP) inhibit sporozoite's cell traversal activity." Journal of Immunological Methods.
2012 Mar;377(1-2):47-52
Camacho, Ariane Guglielmi Ariza; Teixeira, Lais Helena; Bargieri, Daniel Youssef; Boscardin, Silvia Beatriz; Soares, Irene da Silva; Nussenzweig, Ruth Sonntag; Nussenzweig, Victor; Rodrigues, Mauricio Martins. "TLR5-dependent immunogenicity of a recombinant fusion protein containing an immunodominant epitope of malarial circumsporozoite protein and the FliC flagellin of Salmonella Typhimurium." Memórias do Instituto Oswaldo Cruz. 2011 Aug;106 Suppl 1:167-171
Mishra, Satish; Rai, Urvashi; Shiratsuchi, Takayuki; Li, Xiangming; Vanloubbeeck, Yannick; Cohen, Joe; Nussenzweig, Ruth S; Winzeler, Elizabeth A; Tsuji, Moriya; Nussenzweig, Victor. "Identification of non-CSP antigens bearing CD8 epitopes in mice immunized with irradiated sporozoites." Vaccine. 2011 Oct 6;29(43):7335-7342

Awards
Paul Ehrlich and Ludwig Darmstaedter Prize, 1985
Carlos J. Finlay Prize for Microbiology (Cuba), 1985
Albert B. Sabin Gold Medal, 2008
Warren Alpert Foundation Prize, 2015
National Order of Scientific Merit (Brazil)
https://en.wikipedia.org/wiki/Sayre%27s%20paradox
Sayre's paradox is a dilemma encountered in the design of automated handwriting recognition systems. A standard statement of the paradox is that a cursively written word cannot be recognized without being segmented and cannot be segmented without being recognized. The paradox was first articulated in a 1973 publication by Kenneth M. Sayre, after whom it was named.

Nature of the problem
It is relatively easy to design automated systems capable of recognizing words inscribed in a printed format. Such words are segmented into letters by the very act of writing them on the page. Given templates matching typical letter shapes in a given language, individual letters can be identified with a high degree of probability. In cases of ambiguity, probable letter sequences can be compared with a selection of properly spelled words in that language (called a lexicon). If necessary, syntactic features of the language can be applied to render a generally accurate identification of the words in question. Printed-character recognition systems of this sort are commonly used in processing standardized government forms, in sorting mail by zip code, and so forth.

In cursive writing, however, the letters comprising a given word typically flow sequentially without gaps between them. Unlike a sequence of printed letters, cursively connected letters are not segmented in advance. Here is where Sayre's paradox comes into play. Unless the word is already segmented into letters, template-matching techniques like those described above cannot be applied. That is, segmentation is a prerequisite for word recognition. But there are no reliable techniques for segmenting a word into letters unless the word itself has been identified. Word recognition requires letter segmentation, and letter segmentation requires word recognition. There is no way a cursive writing recognition system employing standard template-matching techniques can do both simultaneously.

Advantages to be gained from automated cursive writing recognition systems include routing mail with handwritten addresses, reading handwritten bank checks, and automated digitization of handwritten documents. These are practical incentives for finding ways of circumventing Sayre's paradox.

Avoiding the paradox
One way of ameliorating the adverse effects of the paradox is to normalize the word inscriptions to be recognized. Normalization amounts to eliminating idiosyncrasies in the penmanship of the writer, such as unusual slope of the letters and unusual slant of the cursive line. This procedure can increase the probability of a correct match with a letter template, resulting in an incremental improvement in the success rate of the system. Since improvement of this sort still depends on accurate segmentation, however, it remains subject to the limitations of Sayre's paradox. Researchers have come to realize that the only way to circumvent the paradox is by use of procedures that do not rely on accurate segmentation.

Directions of current research
Segmentation is accurate to the extent that it matches distinctions among letters in the actual inscriptions presented to the system for recognition (the input data). This is sometimes referred to as "explicit segmentation". "Implicit segmentation", by contrast, is division of the cursive line into more parts than the number of actual letters in the cursive line itself. Processing these "implicit parts" to achieve eventual word identification requires specific statistical procedures involving hidden Markov models (HMM).
A Markov model is a statistical representation of a random process in which, given the present state, future states are independent of the states that preceded it. In such a process, a given state depends only on the conditional probability of its following the state immediately before it. An example is a series of outcomes from successive casts of a die. An HMM is a Markov model whose individual states are not fully known: the conditional probabilities between states are still determinate, but the identities of the individual states are not directly observed and must be inferred from their outputs. Recognition proceeds by matching HMMs of words to be recognized with previously prepared HMMs of words in the lexicon. The best match in a given case is taken to indicate the identity of the handwritten word in question.

As with systems based on explicit segmentation, automated recognition systems based on implicit segmentation are judged more or less successful according to the percentage of correct identifications they accomplish. Instead of explicit segmentation techniques, most automated handwriting recognition systems today employ implicit segmentation in conjunction with HMM-based matching procedures. The constraints epitomized by Sayre's paradox are largely responsible for this shift in approach. A toy illustration of such HMM-based decoding appears below, after the external links.

External links
Kenneth M. Sayre and the Philosophic Institute.
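To make the HMM-based approach concrete, the Python sketch below runs the Viterbi algorithm over an "implicitly segmented" cursive line. The two letter states, the fragment observations, and all probabilities are invented solely for illustration; a real recognizer would use many more states and parameters learned from data.

```python
# Toy HMM decoding for implicit segmentation: the cursive line is over-segmented
# into fragments, each fragment emits a coarse shape observation, and Viterbi
# recovers the most likely hidden letter sequence. All numbers are made up.
states = ["a", "n"]                               # hidden states: letters
observations = ["loop", "arch", "arch"]           # fragment shapes from the line

start_p = {"a": 0.6, "n": 0.4}
trans_p = {"a": {"a": 0.2, "n": 0.8},
           "n": {"a": 0.7, "n": 0.3}}
emit_p = {"a": {"loop": 0.8, "arch": 0.2},
          "n": {"loop": 0.1, "arch": 0.9}}

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the best state path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # Trace back the most probable path from the best final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path, V[-1][last]

path, prob = viterbi(observations, states, start_p, trans_p, emit_p)
print(path, prob)   # -> ['a', 'n', 'n'] with probability ~0.093 for these toy numbers
```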
https://en.wikipedia.org/wiki/Geer%20tube
The Geer tube was an early single-tube color television cathode ray tube, developed by Willard Geer. The Geer tube used a pattern of small phosphor-covered three-sided pyramids on the inside of the CRT faceplate to mix separate red, green, and blue signals from three electron guns. The Geer tube had a number of disadvantages and was never used commercially due to the superior images generated by RCA's shadow mask system. Nevertheless, Geer's patent was awarded first, and RCA purchased an option on it in case their own developments did not become viable.

History
Color television
Color television had been studied even before commercial broadcasting became common, but it was only in the late 1940s that the problem was seriously considered. At the time, a number of systems were being proposed that used separate red, green, and blue signals (RGB), broadcast in succession. Most experimental systems broadcast entire frames in sequence, with a colored filter (or "gel") that rotated in front of an otherwise conventional black-and-white television tube. Each frame encoded one color of the picture, and the wheel spun in sync with the signal so the correct gel was in front of the screen when that colored frame was being displayed. Because they broadcast separate signals for the different colors, all of these systems were incompatible with existing black-and-white sets. Another problem was that the mechanical filter made them flicker unless very high refresh rates were used.

RCA worked along different lines entirely, using a luminance-chrominance system. This system did not directly encode or transmit the RGB signals; instead, it combined these colors into one overall brightness figure, the "luminance". Luminance closely matched the black-and-white signal of existing broadcasts, allowing it to be displayed on black-and-white televisions. This was a major advantage over the mechanical systems being proposed by other groups. Color information was then separately encoded and folded into the signal as a high-frequency modification to produce a composite video signal – on a black-and-white television, this extra information would be seen as a slight randomization of the image intensity, but the limited resolution of existing sets made this invisible in practice. On color sets, the signal would be filtered out and added to the luminance to re-create the original RGB for display.

Although RCA's system had enormous benefits, it had not been successfully developed because it was difficult to produce the display tubes. Black-and-white TVs used a continuous signal, and the tube could be coated with an even deposit of phosphor. With the luminance concept, the color was changing continually along the line, which was far too fast for any sort of mechanical filter to follow. Instead, the phosphor had to be broken down into a discrete pattern of colored spots. Focusing the right signal on each of these tiny spots was beyond the capability of the electron guns of the era.

Geer's solution
Charles Willard Geer, then an assistant professor at the University of Southern California, was lecturing on the mechanical methods of producing color television that were being experimented with in the 1940s and decided that an electronically scanned system would be superior if someone would only invent one. Mentioning it later to his wife, she replied, "You'd better get busy and invent it yourself". Geer solved the display problem with a novel application of optics.
Instead of trying to focus the electron beams onto tiny spots, he focused them onto larger areas and used simple optics to re-combine each individual primary color at any given place on the screen into a single pixel. The tube was arranged with three separate electron guns, one each for red, green, and blue (RGB), arranged around the outside of the picture area. This made a Geer tube quite large; the "necks" of the tubes normally lie behind the display area and give the TV its depth, whereas in the Geer tube the necks projected around the outside of the display area, making it much larger. The rear face of the screen was covered with a series of tiny triangular pyramids imprinted on an aluminum sheet, coated on the inside of each face with colored phosphor. Properly aligned, a given electron beam could only reach one face of the pyramids, striking it and traveling through the thin metal into the thicker phosphor layer inside. When all three guns hit their respective faces, the colored light was created in the interior of the pyramid, where it mixed, producing a proper color display on the open base, which faced the user.

One enormous advantage of the Geer system is that it could be used with any of the proposed color television broadcasting systems. CBS was promoting a "field sequential" system at 144 fields per second that they intended to display with a mechanical color filter wheel. This same signal could be displayed on a Geer tube by sending each successive field to a different gun in turn. RCA's "dot sequential" system could also be shown by de-multiplexing the signals and sending all three color signals to each of the appropriate guns at the same time. Black-and-white signals could be displayed by sending the same signal, attenuated to one third, to all three guns at the same time.

Getting the electron beam to hit the correct pyramid, and not the surrounding ones, was a major problem for the design. The beam from an electron gun is normally circular, so when it was aimed at a triangular target, some part of the beam normally went past the target pyramid and hit others on the screen. This results in overscan, causing the image to be blurry and washed out. The problem was particularly difficult to solve because the angle between the beam and the faces changed as the beams scanned the tube – pyramids near the gun would be hit at close to a right angle, but ones at the opposite side of the tube were at an acute angle. Since each gun was offset from the CRT's main axis, it was necessary to make major corrections to the raster geometry during the scan.

Competing systems
Geer filed for a patent on his design on July 11, 1944. Technicolor purchased the patent rights and started development of prototype units in concert with the Stanford Research Institute, spending a reported $500,000 in 1950 (roughly equivalent to $4 million in 2005) on development. The system was widely reported on at the time, including mentions in Time, Popular Science, Popular Mechanics, Radio Electronics, and others. Many other companies were also working on color television systems, most notably RCA. They had filed a patent on their shadow mask system only a few weeks after Geer. When Geer and Technicolor informed RCA of their patent, RCA took out licenses and added further funding to the project as a "second iron in the fire" in case none of their in-house developments worked out.
In head-to-head testing against other color television systems during the NTSC color standardization efforts that started in November 1949, Geer's tube did not fare particularly well. Overscan bled the colors into neighboring pixels and led to soft colors and poor color registration and contrast. This problem was by no means limited to the Geer tube; several different technologies were demonstrated at the show, and only the CBS mechanical system proved able to produce a picture that satisfied the judges. In 1950, the CBS system was adopted as the U.S. color standard.

Geer continued to work on the overscan problems throughout the late 1940s and into the 1950s, filing additional patents on various corrections to improve the system. Other vendors were making similar strides with their own technologies, and in 1953 the NTSC reconvened a panel to consider the color issue. This time RCA's shadow mask system quickly demonstrated itself as superior to all the other systems, including Geer's. The shadow mask remained the primary method of building color televisions, with Sony's Trinitron a distant second, until the early 2000s, when LCD technology replaced CRTs. At the same time, RCA's version of color encoding into a signal that was compatible with existing black-and-white sets was also adopted, with modifications, and remained the primary U.S. television standard until 2009, when analog television was shut down.

After NTSC
Geer continued to work on his basic concept for some time, as well as on other television-related concepts. In 1955 he filed a patent on a flat TV tube that used a gun arranged to lie beside the image area and fire upward toward the top. The beam was deflected through 90 degrees by a series of charged wires so that it traveled horizontally across the back of the picture area. A second grid, located beside the first, then bent the beams through a small angle so they hit the back of the screen. It does not appear that this device was ever constructed, and the arrangement of aiming elements suggests focusing the image would have been a serious problem. Two other inventors had been working on this problem as well: Dennis Gabor in England (better known for the development of holograms) and William Aiken in the US. Both of their patents were filed before Geer's, and the Aiken tube was successfully built in small numbers. More recently, similar concepts were used, combined with computer-controlled convergence systems, to produce "flatter" systems, typically for computer monitor use. Sony sold small-screen monochrome TVs using basically similar nearly-flat CRTs, which were also used as outside-broadcast monitors. However, these were quickly displaced by LCD-based systems.

In 1960 he filed for a patent on a three-dimensional television system that used two color tubes and a two-dimensional version of his pyramids. The vertical channels reflected the light in two directions, providing different images for each eye.

Patents
U.S. Patent 2,480,848, "Color Television Device", Charles Willard Geer/Technicolor Motion Picture Corporation, filed July 11, 1944, issued September 6, 1949
U.S. Patent 2,622,220, "Television Color Screen", Charles Willard Geer/Technicolor Motion Picture Corporation, filed March 22, 1949, issued December 16, 1952
U.S. Patent 2,850,669, "Television Picture Tube or the Like", Charles Willard Geer, filed April 26, 1955, issued September 2, 1958
U.S.
Patent 3,184,630, "Three-Dimensional Display Apparatus", Charles Willard Geer, filed July 12, 1960, issued May 18, 1965

See also
Chromatron, another early color television CRT that is no longer used
Beam-index tube
Shadow mask
Aperture grille

References
Citations
Bibliography
Edward W. Herold, "History and development of the color picture tube", Proceedings of the Society of Information Display, Volume 15, Issue 4 (August 1974), pp. 141–149.
"Teacher's Tube", Time, March 20, 1950.

Further reading
Mark Heyer and Al Pinsky, "Interview with Harold B. Law", IEEE History Center, July 15, 1975
https://en.wikipedia.org/wiki/Biodemography
Biodemography is the science dealing with the integration of biological theory and demography.

Overview
Biodemography is a new branch of human (classical) demography concerned with understanding the complementary biological and demographic determinants of, and interactions between, the birth and death processes that shape individuals, cohorts and populations. The biological component brings human demography under the unifying theoretical umbrella of evolution, and the demographic component provides an analytical foundation for many of the principles upon which evolutionary theory rests, including fitness, selection, structure, and change. Biodemographers are concerned with birth and death processes as they relate to populations in general and to humans in particular, whereas population biologists specializing in life history theory are interested in these processes only insofar as they relate to fitness and evolution.

Traditionally, evolutionary biologists seldom focused on older, post-reproductive individuals because these individuals (it is typically argued) do not contribute to fitness. In contrast, biodemographers embraced research programs expressly designed to study individuals at ages beyond their reproductive years, because information on these age classes sheds important light on longevity and aging. The biological and demographic components of biodemography are not hierarchical but reciprocal, in that both are primary windows on the world and are thus synergistic, complementary and mutually informing. However, there has been much more synthesis between the approaches to demographic research in recent years, such that collaboration between evolutionary, ecological and demographic researchers is increasingly common. An example of this is the Evolutionary Demography Society, formed in 2012/2013 to increase opportunities for inter- and multidisciplinary approaches to understanding how life history and ageing are related and lead to different population demographics.

Biodemography is one of a small number of key subdisciplines arising from the social sciences that have embraced biology, such as evolutionary psychology and neuroeconomics. However, unlike the others, which focus more narrowly on biological sub-areas (neurology) or concepts (evolution), biodemography has no explicit biological boundaries. As a consequence, it is an interdisciplinary concept, but maintains biological roots. The hierarchical organizations that are inherent to both biology (cell, organ, individual) and demography (individual, cohort, population) form a chain in which the individual serves as the link between the lower mechanistic levels and the higher functional levels. Biodemography serves to inform research on human aging through theory building using mathematical and statistical modeling, hypothesis testing using experimental methods, and coherence-seeking using genetics and evolutionary concepts.

See also
Biodemography of human longevity
Epidemiology
Max Planck Institute for Demographic Research
Paleodemography
Mortality displacement
Society for Biodemography and Social Biology

Further reading
Gavrilov L.A., Gavrilova N.S. 2012. "Biodemography of Exceptional Longevity: Early-life and mid-life predictors of human longevity". Biodemography and Social Biology, 58(1):14–39.
Curtsinger J.W., Gavrilova N.S., Gavrilov L.A. 2006. "Biodemography of Aging and Age-Specific Mortality in Drosophila melanogaster". In: Masoro E.J. & Austad S.N. (eds.): Handbook of the Biology of Aging, Sixth Edition. Academic Press.
San Diego, CA. 261–288.
Carey, J. R., and J. W. Vaupel. 2005. "Biodemography". In: D. Poston and M. Micklin (eds.): Handbook of Population. Kluwer Academic/Plenum Publishers, New York. 625–658.
Carnes, B.A., S.J. Olshansky, and D. Grahn. 2003. "Biological evidence for limits to the duration of life". Biogerontology 4: 31–45.
Gavrilov L.A., Gavrilova N.S., Olshansky S.J., Carnes B.A. 2002. "Genealogical data and biodemography of human longevity". Social Biology, 49(3-4): 160–173.
Gavrilov, L.A., Gavrilova, N.S. 2001. "Biodemographic study of familial determinants of human longevity". Population: An English Selection, 13(1): 197–222.
Leonid A. Gavrilov & Natalia S. Gavrilova (1991). The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher.
National Research Council (US) Panel for the Workshop on the Biodemography of Fertility and Family Behavior; Wachter KW, Bulatao RA (eds.) (2003). Offspring: Human Fertility Behavior in Biodemographic Perspective. Washington (DC): National Academies Press (US). doi:10.17226/10654

External links
Biodemography of Exceptional Longevity
Laboratory of Survival and Longevity
Biodemography and Paleodemography
Max Planck Institute for Demographic Research
National Institute on Aging
Biodemography and Social Biology (academic journal)
https://en.wikipedia.org/wiki/Robe
A robe is a loose-fitting outer garment. Unlike garments described as capes or cloaks, robes usually have sleeves. The English word robe derives from Middle English robe ("garment"), borrowed from Old French robe ("booty, spoils"), itself taken from the Frankish word *rouba ("spoils, things stolen, clothes"), and is related to the word rob.

Types
There are various types of robes, including:
A gown worn as part of the academic regalia of faculty or students, especially for ceremonial occasions such as convocations, congregations or graduations.
A gown worn as part of the attire of a judge or barrister.
A wide variety of long, flowing religious dress, including pulpit robes and the robes worn by various types of monks.
A gown worn as part of the official dress of a peer or royalty.
Any of several women's fashions of French origin, such as the robe à l'anglaise (18th century) and robe de style (1920s).
A gown worn in fantasy literature and role-playing games by wizards and other magical characters.
A bathrobe, worn mostly after bathing or swimming.
A gown used to cover a state of underdress, often after rising in the morning, called a dressing gown; dressing gowns are similar to bathrobes but lack the absorbent material.
(Informal usage) Any long flowing garment; for example, a cassock is sometimes called a robe, although a cassock is close-fitting.
A cured animal hide with fur or hair still attached, often from a buffalo, either worn or used in the home for warmth.

See also
Abaya - women's garment from the Middle East/North Africa
Academic stole
Buffalo robe - buffalo hide used by Native Americans
Clothing
Kaftan
Kimono - traditional Japanese garment
Mantle (royal garment)
Seamless robe of Jesus - Biblical relic
Senegalese kaftan
Thawb - ankle-length garment often worn in many places in the Middle East and Africa
Tricivara - Buddhist monastic robe
Wrap dress
https://en.wikipedia.org/wiki/Enid%20Mumford
Enid Mumford (6 March 1924 – 7 April 2006) was a British social scientist, computer scientist and Professor Emerita of Manchester University and a visiting fellow at Manchester Business School, largely known for her work on human factors and socio-technical systems.

Biography
Enid Mumford was born on Merseyside in North West England, where her father Arthur McFarland was a magistrate and her mother Dorothy Evans a teacher. She attended Wallasey High School, and received her BA in Social Science from Liverpool University in 1946.

After graduation Enid Mumford spent time working in industry, first as personnel manager for an aircraft factory and later as production manager for an alarm clock manufacturer. The first job was important for her career as an academic, since it involved looking after personnel policy and industrial relations strategy for a large number of women staff. The second job also proved invaluable, as she was running a production department, providing a level of practical experience that is unusual among academics.

Enid Mumford then joined the Faculty of Social Science at Liverpool University in 1956. She later spent a year at the University of Michigan, where she worked for the University Bureau of Public Health Economics and studied Michigan medical facilities while her husband took a higher degree in dental science. On returning to England, she joined the newly formed Manchester Business School (MBS), where she undertook many research contracts investigating the human and organisational impacts of computer-based systems. During this time she became Professor of Organisational Behaviour and Director of the Computer and Work Design Research Unit (CAWDRU). She also directed the MBA programme for four years.

At Manchester Business School, Mumford gave formative advice to students starting on research projects, advising them to choose topics of study that are interesting yet challenging, to use research methods such as large-scale surveys, face-to-face interviews and close observation, and to keep on good, respectful terms with everyone involved in their research.

She was a companion of the Chartered Institute of Personnel and Development, a Fellow of the British Computer Society (BCS) and an Honorary Fellow of the BCS from 1984, and a founder member and ex-chairperson of the BCS Sociotechnical Group. In 1983 Enid Mumford was awarded the American Warnier Prize for her contributions to information science. In 1996, she was given an Honorary Doctorate by the University of Jyväskylä in Finland. In 1999, she was the only British recipient of a Leo Lifetime Achievement Award for Exceptional Achievement in Information Systems, one of only four awarded that year. Leo Awards are given by the Association for Information Systems (AIS) and the International Conference on Information Systems (ICIS).

Work
Research in industrial relations
At the Faculty of Social Science at Liverpool University, Mumford carried out research in industrial relations in the Liverpool docks and in the North West coal industry. To collect information for the dock research, she became a canteen assistant in the canteens used by the stevedores for meals. Each canteen was in a different part of the waterfront estate and served dockers working on different shipping lines and with different cargoes.
The coal mine research required her to spend many months underground talking to miners at the coal face. For Mumford, the purpose of research was understanding, explanation and prediction. When gathering data through face-to-face interviewing programmes, she found that less formal methods were often better received and frequently produced information of superior quality. Observational research, by contrast, looks at patterns of behaviour and seeks insight into why that behaviour is taking place; it can be hard to apply statistics to such data, and a description of what has taken place, and why, is often more useful.

Human factors and socio-technical systems
Early in her career Enid Mumford realised that the implementation of large computer systems generally resulted in failure to produce a satisfactory outcome. Such failure could arise even when the underlying technology was adequate. She demonstrated that the underlying cause was an inability to overcome human factors associated with the implementation and use of computers. Four decades later, despite the identification of these sociotechnical factors and the development of methodologies to overcome such problems, large-scale computer implementations are often unsuccessful in practice.

Mumford recognised that user participation in system design is just as important as the technology being introduced. She believed it was important to take into account users' social and technical needs when creating an information system, and that user participation is needed for this to happen. Mumford described participation as the democratic process that allows staff to have control over the environment they work in and the future of their job. She specifically emphasised the importance of participative system design, an emphasis that has been accepted within the context of IS development. One of the main success factors identified for this kind of design was the ongoing evolution of the system in the post-implementation environment. Mumford's theory of the importance of user participation has been widely recognised as effective and beneficial.

Mumford also used Talcott Parsons and Edward Shils' pattern variables to propose five different contracts that can be used to evaluate employer-employee relationships. One of the contracts proposed was the work structure contract, which aimed to emphasise the importance of ensuring employees found their jobs both interesting and challenging. To implement this contract, Mumford stated the need for continual questioning of production processes and principles, alongside the identification of tools, techniques, and technologies that can be considered both efficient and humanistic. Influencing all five contracts of the employer-employee relationship was the value contract. This contract set out to develop a set of values both employees and management could agree on, precisely because the values and interests of employees differ from those of employers. Mumford observed that employees are interested in being economically rewarded in exchange for the services they provide; the aim, however, was to arrive at shared values, such as long-term humanistic profitability, ensuring both the company's economic success and employee motivation.

The socio-technical approach
While at MBS, Mumford developed a close relationship with the Tavistock Institute and became interested in their democratic socio-technical approach to work organisation.
Since then, she applied this approach to the design and implementation of computer-based systems and information technology. One of her largest socio-technical projects was with the Digital Equipment Corporation (DEC) in Boston. In the 1970s she became a member of the International Quality of Working Life Group, the goal of which was to spread the socio-technical message around the world. She later became a council member of the Tavistock Institute and was also a member of the US Socio-technical Round Table.

Mumford's 2000 conference paper titled "Socio-Technical Design: An Unfulfilled Promise or a Future Opportunity?" discussed the origins and evolution of socio-technical design, starting with its beginnings at the Tavistock Institute. Mumford outlined the promises and possibilities of socio-technical design that were apparent at the time of its conception. She highlighted the ways that it had moved from success into failure, and evaluated the socio-technical initiatives that had occurred in different nations. Despite the replacement of socio-technical projects by more efficient systems such as lean production, socio-technical notions remain essential when conceptualizing frameworks involving humans and computers (Mumford, 2000).

Choosing the type of method to use depends on a number of factors. Mumford highlighted the importance of the question "what will be most effective in enabling me to collect the data I need to test my hypothesis and answer my questions?". The chosen method may be a single technique but is preferably a blend of techniques that reinforce each other and provide different but complementary data. A mix of methods often produces the best results, as it not only accommodates the political issues in research, such as differences of opinion between researchers about how the task should be carried out, but also allows the subject to be investigated fully enough to achieve accurate results.

Among Enid Mumford's accomplishments and pioneering thinking is the development of an integrated strategy for systems implementation named Effective Technical and Human Implementation of Computer Systems (ETHICS), which incorporates work design as part of the systems planning and implementation effort. Later research has asked why ETHICS initially rose in prominence and afterwards declined over the years, applying Latour's (1999) five-loop model of the circulation of science. The findings reveal that Mumford enrolled and aligned numerous heterogeneous actors and resources that together contributed to the shaping of ETHICS. As the substance of ETHICS was formed by the interweaving of many elements, when some of these elements later changed and undermined their previous alignment, the substance of ETHICS was not reshaped, and consequently it lost its status and declined; more general lessons for IS research can be drawn from this.

An attribute of most of today's computer systems is their flexibility in terms of work organization. To help systems designers, managers and other interested groups take advantage of this flexibility and achieve good organizational as well as good technical design, Mumford developed the ETHICS method. Mumford suggested that those affected by change should be involved in it and have an input if the change is to be accepted. This reflects her ethical views, as she supported the idea of morality as a natural right.
She made it very clear that moral responsibility is personal and precious, and that no one can take it away from a person; accordingly, employees should be made aware of the changes within their organization.

Enid Mumford was always passionate about developing the information systems research community. Her favoured method was action research, as this helps to promote the cooperative development of systems; the influence of this research method is illustrated by the influential Manchester conference of 1984, the first conference to genuinely question the broadly differing conceptions of what established information systems research is.

Much of Mumford's success drew on the implementation of socio-technical design, an organisational development approach that focuses on the relationship between people and technology in the work environment. Its close relationship with action research was highlighted by its evolution in the 1960s and 1970s, which improved general work practices as well as the relationship between management and workers. With the global economy in recession during the early 1980s, socio-technical design gave way to cost-cutting methods, as organisations sought to make technology more economically viable in the workplace by introducing lean production and downsizing techniques.

ETHICS Methodology of Systems Implementation
Enid Mumford devised the ETHICS approach to the design and implementation of computer-based information systems. She explains in her work that, while others were more intent on improving the "bottom line" of corporations with the use of IT, her approach focused on everyday workers and IT's impact on their working lives (Avison et al., 2006). Her work placed the social context and human activities and needs at the centre of IS design. Findings from projects across the 1960s and 1970s were consolidated by Mumford and her peers to give rise to the systems development methodology known as ETHICS (Effective Technical & Human Implementation of Computer-based Systems). Mumford later included QUICKethics, a streamlined form of the method, to support business processes, helping them become more efficient and more effective in attaining business objectives while offering a higher-quality working environment that inspires staff.

Furthermore, Mumford's work around the ETHICS methodology, change management, and the humanly acceptable development of systems as an ethically acceptable way of using technology is echoed in Critical Research in Information Systems (CRIS), since many of the ideas that still dominate critical research likewise aim to improve social reality. The overlapping themes between Mumford's work and CRIS relate to change and change management, which have links to the issues of power and coercion. Mumford also used wording derived from the Marxist tradition of critical research, for example the "ideology of capitalism", and debated the commodification of computing and working time, which is also identified as a critical research area. This keeps Enid Mumford's work on the ETHICS methodology and change highly relevant in today's economy.

The Effective Technical and Human Implementation of Computer Systems (ETHICS) method is designed to help integrate a company and its aims with those of its stakeholders.
ETHICS uses a mix of technology and people's participation to arrive at solutions. The method can contribute greatly to encouraging people to embrace change and adopt new technological solutions, resulting in higher job satisfaction and efficiency. It follows 15 steps for designing new systems, starting with asking why change is needed and ending with evaluation and testing to see whether the system is achieving what is required.

Designing human systems for new technology concerns methods for technologies that are transforming virtually every aspect of human life, interaction, and the process of work; such changes are drastically evident in the way in which human work is performed and organised. ETHICS holds that the "bridge builders" in IT development should aim to understand the users' perspective and, furthermore, to work in collaboration on the development and growth of IT artifacts, which then results in serving the interests of the stakeholders.

Action Research
A theoretical foundation in Mumford's career was action research – an approach adopted from the Tavistock Institute in which analysis and theory are associated with remedial change. She believed "There should be no theory without practice and no practice without research." Whilst working at Turner's Asbestos Cement, she used this approach to survey the sales office, whose staff then discussed their problems internally and implemented a work structure that alleviated most of their efficiency and job satisfaction problems.

Enid Mumford: a tribute
Nineteen individuals influenced by Enid Mumford contributed to Enid Mumford: A Tribute, an article reflecting on Mumford's contributions.

Publications
Enid Mumford produced a large number of publications and books in the field of sociotechnical design. A selection:
1989. XSEL's Progress: the continuing journey of an expert system. Wiley.
1995. Effective Systems Design and Requirements Analysis: the ETHICS Approach. Macmillan.
1996. Systems Design: Ethical Tools for Ethical Change. Macmillan.
1999. Dangerous Decisions: problem solving in tomorrow's world. Plenum.
2003. Redesigning Human Systems. Idea Publishing Group.
2006. Designing human systems: an agile update to ETHICS

Books and book chapters
Mumford, E. (1983). Designing Secretaries: The Participative Design of a Word Processing System. Manchester Business School, UK. First published 1983, http://www.opengrey.eu/item/display/10068/558836
Mumford, Enid (1996). The past and the present. Chapter 1, pp. 1–13. In Systems Design: Ethical Tools for Ethical Change. Macmillan, Basingstoke, UK. First published January 1996.
Mumford, E. (1996). Systems design in an unstable environment. In Systems Design: Ethical Tools for Ethical Change, pp. 30–45. Macmillan, Basingstoke, UK. First published January 1996. https://doi.org/10.1007/978-1-349-14199-9_3
Mumford, E. (1996). An Ethical Pioneer: Mary Parker Follett. Chapter 4, pp. 46–63. In Systems Design: Ethical Tools for Ethical Change. Palgrave, London. First published January 1996. https://doi.org/10.1007/978-1-349-14199-9_4
Mumford, Enid (1996). Designing for freedom in the ethical company. Chapter 6, pp. 79–98. In Systems Design: Ethical Tools for Ethical Change. Palgrave, London, UK. First published 11 November 1996.
Mumford, E. (1996). Designing for the future. Chapter 7, pp. 99–107. In Systems Design: Ethical Tools for Ethical Change. https://doi.org/10.1007/978-1-349-14199-9_7
Mumford, Enid (1997). Requirements Analysis for Information Systems. Chapter 3, pp. 15–20.
In Systems for Sustainability, edited by Frank A. Stowell, Ray L. Ison, Rosalind Armson, Jacky Holloway, Sue Jackson and Steve McRobb. Springer, Boston, MA. First published 31 July 1997.
Mumford, E. (1999). The Problems of Problem Solving. In Dangerous Decisions: Problem Solving in Tomorrow's World (pp. 13–24). Springer, Boston, MA. https://doi.org/10.1007/978-0-585-27445-4_2
Mumford, E. (1999). Dangerous Decisions: Problem Solving in Tomorrow's World. Chapter 4, Problem Solving and the Police, pp. 59–73. First published 31 May 1999. https://link.springer.com/book/10.1007/b102291
Mumford, E. (2001). Action Research: Helping Organizations to Change. Chapter 3, pp. 46–77. In Qualitative Research in IS: Issues and Trends, edited by Trauth, Eileen M., UK. 1-930708-06-8. First published 1 July 2000. https://www.igi-global.com/gateway/chapter/28259
Mumford, Enid & Carolyn Axtell (2003). Tools and Methods to Support the Design and Implementation of New Work Systems. Chapter 17, pp. 331–346. In The New Workplace: A Guide to the Human Impact of Modern Working Practices, edited by David Holman, Toby D. Wall, Chris W. Clegg, Paul Sparrow and Ann Howard. Wiley & Sons, Chichester, UK. First published 1 January 2002.
Mumford, E. (1996). Systems Design: Ethical Tools for Ethical Change. Palgrave Macmillan, London, UK. First published 11 November 1996.
Enid Mumford, Steve Hickey, and Holly Matthies (2006). Designing Human Systems for New Technology – The ETHICS Method, by Enid Mumford (1983). Pages 37–51. https://books.google.com/books?id=he9NuM64WN8C&pg=PP1
Mumford, Enid. "Designing for Freedom in a Technical World." In Information Technology and Changes in Organizational Work, edited by Wanda J. Orlikowski, Geoff Walsham, Matthew R. Jones, and Janice I. Degross, pp. 425–441. IFIP Advances in Information and Communication Technology. Boston, MA: Springer US, 1996. https://doi.org/10.1007/978-0-387-34872-8_25

Conference and journal papers
Mumford, E. (1994). New treatments or old remedies: is business process reengineering really socio-technical design? Journal of Strategic Information Systems, 3(4), 313–326. https://doi.org/10.1016/0963-8687(94)90036-1
Mumford, E. (1995). Contracts, complexity and contradictions: The changing employment relationship. Personnel Review, 24(8), 54–70.
Mumford, E. (1995). Review: Understanding and Evaluating Methodologies. International Journal of Information Management, Vol. 15, Issue 3, pp. 243–245. Elsevier Science Ltd.
Mumford, E. (1996). Risky ideas in the risk society. Journal of Information Technology (Routledge, Ltd.), 11(4), 321. https://doi.org/10.1057/jit.1996.6
Facilitating Technology Transfer through Partnership: Learning from practice and research: IFIP TC8 WG8.6 International Working Conference on Diffusion, Adoption, and Implementation of Information Technology (25–27 June 1997), Ambleside, Cumbria, UK. Book: 383 pages, part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT), edited by Tom McMaster, Enid Mumford, E. Burton Swanson, Brian Warboys, David Wastell. Springer, Boston, MA. First published 1997. https://doi.org/10.1007/978-0-387-35092-9
Mumford, E. (1998). Problems, knowledge, solutions: solving complex problems. The Journal of Strategic Information Systems, 7(4), 255–269. https://doi.org/10.1016/S0963-8687(99)00003-7
Mumford, Enid (1999). Choosing Problem Solving Methods. Chapter 2, pp. 25–39.
In "Dangerous Decisions: Problem Solving in Tomorrow's World". Springer, Boston, MA. https://doi.org/10.1007/b102291 Mumford, E. (2000). Socio-Technical Design: An Unfulfilled Promise or a Future Opportunity? In R. Baskerville, J. Stage, & J. I. DeGross (Eds.), Organizational and Social Perspectives on Information Technology: IFIP TC8 WG8.2 International Working Conference on the Social and Organizational Perspective on Research and Practice in Information Technology, June 9–11, 2000, Aalborg, Denmark (pp. 33–46). Springer US. Mumford, E. (2001). Advice for an action researcher. Information Technology & People, 14(1), 12–27. Mumford, E. (2006). Researching people problems: Some advice to a student. Information Systems Journal, 16, 383–389. https://doi.org/10.1111/j.1365-2575.2006.00223.x Mumford, E. (2006). The story of socio-technical design: reflections on its successes, failures and potential. Information Systems Journal, 16(4), 317–342. References External links last version of Enid Mumford's website, on the Internet Archive Guardian obituary 1924 births 2006 deaths British computer scientists Information systems researchers Fellows of the British Computer Society British women computer scientists University of Michigan people 20th-century British women scientists
Enid Mumford
Technology
4,685
3,986,130
https://en.wikipedia.org/wiki/Computational%20phylogenetics
Computational phylogenetics, phylogeny inference, or phylogenetic inference focuses on computational and optimization algorithms, heuristics, and approaches involved in phylogenetic analyses. The goal is to find a phylogenetic tree representing optimal evolutionary ancestry between a set of genes, species, or taxa. Maximum likelihood, parsimony, Bayesian, and minimum evolution are typical optimality criteria used to assess how well a phylogenetic tree topology describes the sequence data. Nearest Neighbour Interchange (NNI), Subtree Prune and Regraft (SPR), and Tree Bisection and Reconnection (TBR), known as tree rearrangements, are deterministic operations used to search for an optimal or near-optimal phylogenetic tree. The space and the landscape of searching for the optimal phylogenetic tree is known as the phylogeny search space. The maximum likelihood optimality criterion is the process of finding the tree topology, along with its branch lengths, that provides the highest probability of observing the sequence data, while the parsimony optimality criterion seeks the smallest number of evolutionary state changes required for a phylogenetic tree to explain the sequence data. Traditional phylogenetics relies on morphological data obtained by measuring and quantifying the phenotypic properties of representative organisms, while the more recent field of molecular phylogenetics uses nucleotide sequences encoding genes or amino acid sequences encoding proteins as the basis for classification. Many forms of molecular phylogenetics are closely related to and make extensive use of sequence alignment in constructing and refining phylogenetic trees, which are used to classify the evolutionary relationships between homologous genes represented in the genomes of divergent species. The phylogenetic trees constructed by computational methods are unlikely to perfectly reproduce the evolutionary tree that represents the historical relationships between the species being analyzed. The historical species tree may also differ from the historical tree of an individual homologous gene shared by those species. Types of phylogenetic trees and networks Phylogenetic trees generated by computational phylogenetics can be either rooted or unrooted depending on the input data and the algorithm used. A rooted tree is a directed graph that explicitly identifies a most recent common ancestor (MRCA), usually an imputed sequence that is not represented in the input. Genetic distance measures can be used to plot a tree with the input sequences as leaf nodes and their distances from the root proportional to their genetic distance from the hypothesized MRCA. Identification of a root usually requires the inclusion in the input data of at least one "outgroup" known to be only distantly related to the sequences of interest. By contrast, unrooted trees plot the distances and relationships between input sequences without making assumptions regarding their descent. An unrooted tree can always be produced from a rooted tree, but a root cannot usually be placed on an unrooted tree without additional data on divergence rates, such as the assumption of the molecular clock hypothesis. The set of all possible phylogenetic trees for a given group of input sequences can be conceptualized as a discretely defined multidimensional "tree space" through which search paths can be traced by optimization algorithms.
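The size of this tree space is what makes exhaustive search impractical: the number of distinct unrooted binary trees on n labeled leaves is (2n - 5)!!, and the number of rooted binary trees is (2n - 3)!!. A minimal Python sketch of these standard counts (the function names are ours, not from any package):

```python
from math import prod

def double_factorial(k: int) -> int:
    """k * (k - 2) * (k - 4) * ... down to 1 or 2."""
    return prod(range(k, 0, -2))

def num_unrooted_trees(n_leaves: int) -> int:
    """Distinct unrooted binary trees on n labeled leaves: (2n - 5)!!."""
    return 1 if n_leaves < 3 else double_factorial(2 * n_leaves - 5)

def num_rooted_trees(n_leaves: int) -> int:
    """Distinct rooted binary trees on n labeled leaves: (2n - 3)!!."""
    return 1 if n_leaves < 2 else double_factorial(2 * n_leaves - 3)

for n in (4, 8, 12, 16, 20):
    print(n, num_unrooted_trees(n), num_rooted_trees(n))
```

Already at 20 taxa the unrooted count exceeds 10^20, which is why the search methods discussed below operate by local rearrangement rather than enumeration.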
Although counting the total number of trees for a nontrivial number of input sequences can be complicated by variations in the definition of a tree topology, it is always true that there are more rooted than unrooted trees for a given number of inputs and choice of parameters. Both rooted and unrooted phylogenetic trees can be further generalized to rooted or unrooted phylogenetic networks, which allow for the modeling of evolutionary phenomena such as hybridization or horizontal gene transfer. Coding characters and defining homology Morphological analysis The basic problem in morphological phylogenetics is the assembly of a matrix representing a mapping from each of the taxa being compared to representative measurements for each of the phenotypic characteristics being used as a classifier. The types of phenotypic data used to construct this matrix depend on the taxa being compared; for individual species, they may involve measurements of average body size, lengths or sizes of particular bones or other physical features, or even behavioral manifestations. Of course, since not every possible phenotypic characteristic could be measured and encoded for analysis, the selection of which features to measure is a major inherent obstacle to the method. The decision of which traits to use as a basis for the matrix necessarily represents a hypothesis about which traits of a species or higher taxon are evolutionarily relevant. Morphological studies can be confounded by examples of convergent evolution of phenotypes. A major challenge in constructing useful classes is the high likelihood of inter-taxon overlap in the distribution of the phenotype's variation. The inclusion of extinct taxa in morphological analysis is often difficult due to the absence of, or incompleteness of, fossil records, but has been shown to have a significant effect on the trees produced; in one study, only the inclusion of extinct species of apes produced a morphologically derived tree that was consistent with that produced from molecular data. Some phenotypic classifications, particularly those used when analyzing very diverse groups of taxa, are discrete and unambiguous; classifying organisms as possessing or lacking a tail, for example, is straightforward in the majority of cases, as is counting features such as eyes or vertebrae. However, the most appropriate representation of continuously varying phenotypic measurements is a controversial problem without a general solution. A common method is simply to sort the measurements of interest into two or more classes, rendering continuous observed variation as discretely classifiable (e.g., all examples with humerus bones longer than a given cutoff are scored as members of one state, and all members whose humerus bones are shorter than the cutoff are scored as members of a second state). This results in an easily manipulated data set but has been criticized for poor reporting of the basis for the class definitions and for sacrificing information compared to methods that use a continuous weighted distribution of measurements. Because morphological data is extremely labor-intensive to collect, whether from literature sources or from field observations, reuse of previously compiled data matrices is not uncommon, although this may propagate flaws in the original matrix into multiple derivative analyses.
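As a small illustration of the cutoff-based coding just described, the following sketch scores a continuous measurement into two discrete states; the taxa, measurements, and cutoff are hypothetical:

```python
def threshold_code(measurements: dict[str, float], cutoff: float) -> dict[str, int]:
    """Score each taxon 0 or 1 depending on which side of the cutoff
    its measurement falls (the two-state coding described above)."""
    return {taxon: int(value > cutoff) for taxon, value in measurements.items()}

# Hypothetical humerus lengths (cm) for four taxa.
humerus = {"taxon_A": 11.2, "taxon_B": 12.8, "taxon_C": 25.1, "taxon_D": 27.4}
print(threshold_code(humerus, cutoff=18.0))
# {'taxon_A': 0, 'taxon_B': 0, 'taxon_C': 1, 'taxon_D': 1}
```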
Molecular analysis The problem of character coding is very different in molecular analyses, as the characters in biological sequence data are immediate and discretely defined - distinct nucleotides in DNA or RNA sequences and distinct amino acids in protein sequences. However, defining homology can be challenging due to the inherent difficulties of multiple sequence alignment. For a given gapped MSA, several rooted phylogenetic trees can be constructed that vary in their interpretations of which changes are "mutations" versus ancestral characters, and which events are insertion mutations or deletion mutations. For example, given only a pairwise alignment with a gap region, it is impossible to determine whether one sequence bears an insertion mutation or the other carries a deletion. The problem is magnified in MSAs with unaligned and nonoverlapping gaps. In practice, sizable regions of a calculated alignment may be discounted in phylogenetic tree construction to avoid integrating noisy data into the tree calculation. Distance-matrix methods Distance-matrix methods of phylogenetic analysis explicitly rely on a measure of "genetic distance" between the sequences being classified, and therefore they require an MSA as an input. Distance is often defined as the fraction of mismatches at aligned positions, with gaps either ignored or counted as mismatches. Distance methods attempt to construct an all-to-all matrix from the sequence query set describing the distance between each sequence pair. From this is constructed a phylogenetic tree that places closely related sequences under the same interior node and whose branch lengths closely reproduce the observed distances between sequences. Distance-matrix methods may produce either rooted or unrooted trees, depending on the algorithm used to calculate them. They are frequently used as the basis for progressive and iterative types of multiple sequence alignments. The main disadvantage of distance-matrix methods is their inability to efficiently use information about local high-variation regions that appear across multiple subtrees. UPGMA and WPGMA The UPGMA (Unweighted Pair Group Method with Arithmetic mean) and WPGMA (Weighted Pair Group Method with Arithmetic mean) methods produce rooted trees and require a constant-rate assumption - that is, they assume an ultrametric tree in which the distances from the root to every branch tip are equal. Neighbor-joining Neighbor-joining methods apply general cluster analysis techniques to sequence analysis using genetic distance as a clustering metric. The simple neighbor-joining method produces unrooted trees, but it does not assume a constant rate of evolution (i.e., a molecular clock) across lineages. Fitch–Margoliash method The Fitch–Margoliash method uses a weighted least squares method for clustering based on genetic distance. Closely related sequences are given more weight in the tree construction process to correct for the increased inaccuracy in measuring distances between distantly related sequences. The distances used as input to the algorithm must be normalized to prevent large artifacts in computing relationships between closely related and distantly related groups.
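The distance-matrix pipeline described above can be sketched end to end on toy data: compute pairwise p-distances (the fraction of mismatched aligned positions), then cluster with a minimal UPGMA. This is an illustrative implementation under the stated constant-rate assumption, not any particular package's API; the sequences are hypothetical and assumed gap-free:

```python
import itertools

def p_distance(a: str, b: str) -> float:
    """Fraction of mismatched positions between two aligned, gap-free sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def upgma(names, dist):
    """Minimal UPGMA: repeatedly merge the closest pair of clusters,
    averaging inter-cluster distances weighted by cluster size.
    Returns the tree as nested tuples."""
    clusters = {n: (n, 1) for n in names}            # name -> (subtree, size)
    d = {frozenset(p): dist[p] for p in itertools.combinations(names, 2)}
    while len(clusters) > 1:
        i, j = min(itertools.combinations(clusters, 2),
                   key=lambda p: d[frozenset(p)])
        (ti, ni), (tj, nj) = clusters.pop(i), clusters.pop(j)
        new = f"({i},{j})"
        for k in list(clusters):
            dik, djk = d.pop(frozenset((i, k))), d.pop(frozenset((j, k)))
            d[frozenset((new, k))] = (ni * dik + nj * djk) / (ni + nj)
        d.pop(frozenset((i, j)))
        clusters[new] = ((ti, tj), ni + nj)
    return next(iter(clusters.values()))[0]

seqs = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "ACGTTCAA", "D": "TCGATCAA"}
dist = {(x, y): p_distance(seqs[x], seqs[y])
        for x, y in itertools.combinations(list(seqs), 2)}
print(upgma(list(seqs), dist))
```

Because UPGMA averages distances and assumes a constant rate, the resulting tree is rooted and ultrametric, as noted above.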
The distances calculated by the Fitch–Margoliash method must be linear; the linearity criterion for distances requires that the expected values of the branch lengths for two individual branches must equal the expected value of the sum of the two branch distances - a property that applies to biological sequences only when they have been corrected for the possibility of back mutations at individual sites. This correction is done through the use of a substitution matrix such as that derived from the Jukes-Cantor model of DNA evolution. The distance correction is only necessary in practice when the evolution rates differ among branches. Another modification of the algorithm can be helpful, especially in the case of concentrated distances (please refer to the concentration of measure phenomenon and the curse of dimensionality): one such modification has been shown to improve the efficiency of the algorithm and its robustness. The least-squares criterion applied to these distances is more accurate but less efficient than the neighbor-joining methods. An additional improvement that corrects for correlations between distances that arise from many closely related sequences in the data set can also be applied at increased computational cost. Finding the optimal least-squares tree with any correction factor is NP-complete, so heuristic search methods like those used in maximum-parsimony analysis are applied to the search through tree space. Using outgroups Independent information about the relationship between sequences or groups can be used to help reduce the tree search space and root unrooted trees. Standard usage of distance-matrix methods involves the inclusion of at least one outgroup sequence known to be only distantly related to the sequences of interest in the query set. This usage can be seen as a type of experimental control. If the outgroup has been appropriately chosen, it will have a much greater genetic distance and thus a longer branch length than any other sequence, and it will appear near the root of a rooted tree. Choosing an appropriate outgroup requires the selection of a sequence that is moderately related to the sequences of interest; too close a relationship defeats the purpose of the outgroup and too distant adds noise to the analysis. Care should also be taken to avoid situations in which the species from which the sequences were taken are distantly related, but the gene encoded by the sequences is highly conserved across lineages. Horizontal gene transfer, especially between otherwise divergent bacteria, can also confound outgroup usage. Maximum parsimony Maximum parsimony (MP) is a method of identifying the potential phylogenetic tree that requires the smallest total number of evolutionary events to explain the observed sequence data. Some ways of scoring trees also include a "cost" associated with particular types of evolutionary events and attempt to locate the tree with the smallest total cost. This is a useful approach in cases where not every possible type of event is equally likely - for example, when particular nucleotides or amino acids are known to be more mutable than others. The most naive way of identifying the most parsimonious tree is simple enumeration - considering each possible tree in succession and searching for the tree with the smallest score.
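The Jukes-Cantor correction mentioned above has a simple closed form: for an observed proportion p of differing sites, the corrected distance is d = -(3/4) ln(1 - 4p/3). A short sketch:

```python
from math import log

def jc69_distance(p: float) -> float:
    """Jukes-Cantor corrected distance (expected substitutions per site)
    for an observed proportion p of differing sites."""
    if p >= 0.75:
        raise ValueError("observed distance too large for the JC69 correction")
    return -0.75 * log(1.0 - 4.0 * p / 3.0)

for p in (0.05, 0.20, 0.40, 0.60):
    print(f"observed {p:.2f} -> corrected {jc69_distance(p):.3f}")
```

As the observed distance approaches saturation (p = 0.75, the expected mismatch fraction for unrelated sequences under this model), the corrected distance diverges, reflecting the undercounting of multiple substitutions at the same site.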
However, such exhaustive enumeration is only possible for a relatively small number of sequences or species, because the problem of identifying the most parsimonious tree is known to be NP-hard; consequently a number of heuristic search methods for optimization have been developed to locate a highly parsimonious tree, if not the best in the set. Most such methods involve a steepest descent-style minimization mechanism operating on a tree rearrangement criterion. Branch and bound The branch and bound algorithm is a general method used to increase the efficiency of searches for near-optimal solutions of NP-hard problems, first applied to phylogenetics in the early 1980s. Branch and bound is particularly well suited to phylogenetic tree construction because it inherently requires dividing a problem into a tree structure as it subdivides the problem space into smaller regions. As its name implies, it requires as input both a branching rule (in the case of phylogenetics, the addition of the next species or sequence to the tree) and a bound (a rule that excludes certain regions of the search space from consideration, on the assumption that the optimal solution cannot occupy those regions). Identifying a good bound is the most challenging aspect of the algorithm's application to phylogenetics. A simple way of defining the bound is a maximum number of assumed evolutionary changes allowed per tree. A set of criteria known as Zharkikh's rules severely limit the search space by defining characteristics shared by all candidate "most parsimonious" trees. The two most basic rules require the elimination of all but one redundant sequence (for cases where multiple observations have produced identical data) and the elimination of character sites at which two or more states do not occur in at least two species. Under ideal conditions these rules and their associated algorithm would completely define a tree. Sankoff-Morel-Cedergren algorithm The Sankoff-Morel-Cedergren algorithm was among the first published methods to simultaneously produce an MSA and a phylogenetic tree for nucleotide sequences. The method uses a maximum parsimony calculation in conjunction with a scoring function that penalizes gaps and mismatches, thereby favoring the tree that introduces a minimal number of such events (an alternative view holds that the trees to be favored are those that maximize the amount of sequence similarity that can be interpreted as homology, a point of view that may lead to different optimal trees). The imputed sequences at the interior nodes of the tree are scored and summed over all the nodes in each possible tree. The lowest-scoring tree sum provides both an optimal tree and an optimal MSA given the scoring function. Because the method is highly computationally intensive, an approximate version is used in which initial guesses for the interior alignments are refined one node at a time. Both the full and the approximate versions are in practice calculated by dynamic programming. MALIGN and POY More recent phylogenetic tree/MSA methods use heuristics to isolate high-scoring, but not necessarily optimal, trees. The MALIGN method uses a maximum-parsimony technique to compute a multiple alignment by maximizing a cladogram score, and its companion POY uses an iterative method that couples the optimization of the phylogenetic tree with improvements in the corresponding MSA.
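The per-character scoring step that parsimony searches repeat over many candidate trees can be illustrated with Fitch's small-parsimony algorithm, the unweighted relative of the Sankoff dynamic program, which counts the minimum number of state changes for one character on a fixed tree. The tree and character states below are hypothetical:

```python
def fitch(tree, leaf_states):
    """Fitch's algorithm for one character on a fixed binary tree.
    `tree` is a nested tuple whose leaves are taxon names (strings);
    returns (candidate_state_set, minimum_number_of_changes)."""
    if isinstance(tree, str):                   # leaf node
        return {leaf_states[tree]}, 0
    left, right = tree
    ls, lc = fitch(left, leaf_states)
    rs, rc = fitch(right, leaf_states)
    if ls & rs:                                 # intersection: no new change
        return ls & rs, lc + rc
    return ls | rs, lc + rc + 1                 # union: one extra change

tree = (("A", "B"), ("C", "D"))
site = {"A": "G", "B": "G", "C": "T", "D": "G"}
print(fitch(tree, site)[1])   # 1 change suffices for this character
```

Summing this score over all characters, and over the candidate trees visited by the rearrangement heuristics above, gives the parsimony score being minimized.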
However, the use of these parsimony-based methods in constructing evolutionary hypotheses has been criticized as biased due to the deliberate construction of trees reflecting minimal evolutionary events. This, in turn, has been countered by the view that such methods should be seen as heuristic approaches to find the trees that maximize the amount of sequence similarity that can be interpreted as homology. Maximum likelihood The maximum likelihood method uses standard statistical techniques for inferring probability distributions to assign probabilities to particular possible phylogenetic trees. The method requires a substitution model to assess the probability of particular mutations; roughly, a tree that requires more mutations at interior nodes to explain the observed phylogeny will be assessed as having a lower probability. This is broadly similar to the maximum-parsimony method, but maximum likelihood allows additional statistical flexibility by permitting varying rates of evolution across both lineages and sites. In fact, the method requires that evolution at different sites and along different lineages be statistically independent. Maximum likelihood is thus well suited to the analysis of distantly related sequences, but because the underlying optimization is NP-hard, it is believed to be computationally intractable on large inputs. The "pruning" algorithm, a variant of dynamic programming, is often used to reduce the search space by efficiently calculating the likelihood of subtrees. The method calculates the likelihood for each site in a "linear" manner, starting at a node whose only descendants are leaves (that is, the tips of the tree) and working backwards toward the "bottom" node in nested sets. However, the trees produced by the method are only rooted if the substitution model is irreversible, which is not generally true of biological systems. The search for the maximum-likelihood tree also includes a branch length optimization component that is difficult to improve upon algorithmically; general global optimization tools such as the Newton–Raphson method are often used. Some tools that use maximum likelihood to infer phylogenetic trees from variant allelic frequency data (VAFs) include AncesTree and CITUP. Bayesian inference Bayesian inference can be used to produce phylogenetic trees in a manner closely related to the maximum likelihood methods. Bayesian methods assume a prior probability distribution of the possible trees, which may simply be the probability of any one tree among all the possible trees that could be generated from the data, or may be a more sophisticated estimate derived from the assumption that divergence events such as speciation occur as stochastic processes. The choice of prior distribution is a point of contention among users of Bayesian-inference phylogenetics methods. Implementations of Bayesian methods generally use Markov chain Monte Carlo sampling algorithms, although the choice of move set varies; selections used in Bayesian phylogenetics include circularly permuting leaf nodes of a proposed tree at each step and swapping descendant subtrees of a random internal node between two related trees. The use of Bayesian methods in phylogenetics has been controversial, largely due to incomplete specification of the choice of move set, acceptance criterion, and prior distribution in published work.
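The pruning algorithm mentioned above can be sketched under the Jukes-Cantor model, for which the transition probability along a branch of length t (in expected substitutions per site) is 1/4 + (3/4)e^(-4t/3) between identical states and 1/4 - (1/4)e^(-4t/3) otherwise. The tree shape, branch lengths, and site pattern below are hypothetical:

```python
from math import exp

BASES = "ACGT"

def jc69_p(t: float):
    """JC69 transition-probability matrix for branch length t."""
    same = 0.25 + 0.75 * exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * exp(-4.0 * t / 3.0)
    return [[same if i == j else diff for j in range(4)] for i in range(4)]

def conditional(node, site):
    """Felsenstein pruning: vector L[x] = P(tip data below node | state x).
    A node is a leaf name (str) or a tuple (left, t_left, right, t_right)."""
    if isinstance(node, str):
        return [1.0 if b == site[node] else 0.0 for b in BASES]
    left, tl, right, tr = node
    Ll, Lr = conditional(left, site), conditional(right, site)
    Pl, Pr = jc69_p(tl), jc69_p(tr)
    return [sum(Pl[x][y] * Ll[y] for y in range(4)) *
            sum(Pr[x][y] * Lr[y] for y in range(4)) for x in range(4)]

def site_likelihood(root, site):
    """Sum over root states with the JC69 equilibrium (uniform) frequencies."""
    return sum(0.25 * lx for lx in conditional(root, site))

tree = (("A", 0.1, "B", 0.1), 0.05, ("C", 0.2, "D", 0.2), 0.05)
print(site_likelihood(tree, {"A": "A", "B": "A", "C": "G", "D": "G"}))
```

Multiplying (or summing the logarithms of) such per-site values across the alignment gives the likelihood that the tree search tries to maximize.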
Bayesian methods are generally held to be superior to parsimony-based methods; they can be more prone to long-branch attraction than maximum likelihood techniques, although they are better able to accommodate missing data. Whereas likelihood methods find the tree that maximizes the probability of the data, a Bayesian approach recovers a tree that represents the most likely clades, by drawing on the posterior distribution. However, estimates of the posterior probability of clades (measuring their 'support') can be quite wide of the mark, especially in clades that are not overwhelmingly likely. As such, other methods have been put forward to estimate posterior probability. Some tools that use Bayesian inference to infer phylogenetic trees from variant allelic frequency data (VAFs) include Canopy, EXACT, and PhyloWGS. Model selection Molecular phylogenetics methods rely on a defined substitution model that encodes a hypothesis about the relative rates of mutation at various sites along the gene or amino acid sequences being studied. At their simplest, substitution models aim to correct for differences in the rates of transitions and transversions in nucleotide sequences. The use of substitution models is necessitated by the fact that the genetic distance between two sequences increases linearly only for a short time after the two sequences diverge from each other (alternatively, the distance is linear only shortly before coalescence). The longer the amount of time after divergence, the more likely it becomes that two mutations occur at the same nucleotide site. Simple genetic distance calculations will thus undercount the number of mutation events that have occurred in evolutionary history. The extent of this undercount increases with increasing time since divergence, which can lead to the phenomenon of long branch attraction, or the misassignment of two distantly related but convergently evolving sequences as closely related. The maximum parsimony method is particularly susceptible to this problem due to its explicit search for a tree representing a minimum number of distinct evolutionary events. Types of models All substitution models assign a set of weights to each possible change of state represented in the sequence. The most common model types are implicitly reversible because they assign the same weight to, for example, a G>C nucleotide mutation as to a C>G mutation. The simplest possible model, the Jukes-Cantor model, assigns an equal probability to every possible change of state for a given nucleotide base. The rate of change between any two distinct nucleotides will be one-third of the overall substitution rate. More advanced models distinguish between transitions and transversions. The most general possible time-reversible model, called the GTR model, has six mutation rate parameters. An even more generalized model known as the general 12-parameter model breaks time-reversibility, at the cost of much additional complexity in calculating genetic distances that are consistent among multiple lineages. One possible variation on this theme adjusts the rates so that overall GC content - an important measure of DNA double helix stability - varies over time. Models may also allow for the variation of rates with positions in the input sequence. The most obvious example of such variation follows from the arrangement of nucleotides in protein-coding genes into three-base codons.
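To make the GTR parameterization above concrete, the sketch below builds a normalized GTR rate matrix from six exchangeability parameters and equilibrium base frequencies, and exponentiates it to obtain transition probabilities; the parameter values are hypothetical, and scipy's expm supplies the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def gtr_rate_matrix(exch, freqs):
    """Normalized GTR rate matrix Q from the six exchangeabilities
    (AC, AG, AT, CG, CT, GT) and base frequencies pi (order A, C, G, T),
    scaled so branch lengths are in expected substitutions per site."""
    r = np.zeros((4, 4))
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    for (i, j), x in zip(pairs, exch):
        r[i, j] = r[j, i] = x
    Q = r * np.asarray(freqs)              # Q_ij = r_ij * pi_j for i != j
    np.fill_diagonal(Q, -Q.sum(axis=1))
    scale = -np.dot(freqs, np.diag(Q))     # mean substitution rate at equilibrium
    return Q / scale

pi = [0.3, 0.2, 0.2, 0.3]                  # hypothetical base frequencies
Q = gtr_rate_matrix([1.0, 4.0, 1.0, 1.0, 4.0, 1.0], pi)
P = expm(Q * 0.1)                          # P(t) for branch length t = 0.1
print(P.round(4), P.sum(axis=1))           # each row sums to 1
```

Setting the AG and CT exchangeabilities higher than the rest reproduces the transition/transversion bias that simpler models capture with a single parameter.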
In such codon-structured data, if the location of the open reading frame (ORF) is known, rates of mutation can be adjusted for the position of a given site within a codon, since it is known that wobble base pairing can allow for higher mutation rates in the third nucleotide of a given codon without affecting the codon's meaning in the genetic code. A less hypothesis-driven example that does not rely on ORF identification simply assigns to each site a rate randomly drawn from a predetermined distribution, often the gamma distribution or log-normal distribution. Finally, a more conservative estimate of rate variations, known as the covarion method, allows autocorrelated variations in rates, so that the mutation rate of a given site is correlated across sites and lineages. Choosing the best model The selection of an appropriate model is critical for the production of good phylogenetic analyses, both because underparameterized or overly restrictive models may produce aberrant behavior when their underlying assumptions are violated, and because overly complex or overparameterized models are computationally expensive and the parameters may be overfit. The most common method of model selection is the likelihood ratio test (LRT), which produces a likelihood estimate that can be interpreted as a measure of "goodness of fit" between the model and the input data. However, care must be taken in using these results, since a more complex model with more parameters will always have a higher likelihood than a simplified version of the same model, which can lead to the naive selection of models that are overly complex. For this reason model selection computer programs will choose the simplest model that is not significantly worse than more complex substitution models. A significant disadvantage of the LRT is the necessity of making a series of pairwise comparisons between models; it has been shown that the order in which the models are compared has a major effect on the one that is eventually selected. An alternative model selection method is the Akaike information criterion (AIC), formally an estimate of the Kullback–Leibler divergence between the true model and the model being tested. It can be interpreted as a likelihood estimate with a correction factor to penalize overparameterized models. The AIC is calculated on an individual model rather than a pair, so it is independent of the order in which models are assessed. A related alternative, the Bayesian information criterion (BIC), has a similar basic interpretation but penalizes complex models more heavily. Determining the most suitable model for phylogeny reconstruction constitutes a fundamental step in numerous evolutionary studies. However, various criteria for model selection have led to debate over which criterion is preferable. It has recently been shown that, when topologies and ancestral sequence reconstruction are the desired output, choosing one criterion over another is not crucial. Instead, using the most complex nucleotide substitution model, GTR+I+G, leads to similar results for the inference of tree topology and ancestral sequences. A comprehensive step-by-step protocol on constructing phylogenetic trees, including DNA/amino acid contiguous sequence assembly, multiple sequence alignment, model testing (testing best-fitting substitution models) and phylogeny reconstruction using maximum likelihood and Bayesian inference, is available at Protocol Exchange. A non-traditional way of evaluating a phylogenetic tree is to compare it with a clustering result.
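Returning to the information criteria above: given each candidate model's maximized log-likelihood, number of free parameters, and the number of sites, AIC and BIC reduce to one-line formulas. A sketch with made-up fits (the log-likelihoods and parameter counts here are hypothetical, purely to show the comparison):

```python
from math import log

def aic(log_l: float, k: int) -> float:
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_l

def bic(log_l: float, k: int, n: int) -> float:
    """Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
    return k * log(n) - 2 * log_l

n_sites = 1000
models = {"JC69": (-5200.0, 0), "HKY85": (-5120.0, 4), "GTR+G": (-5105.0, 9)}
for name, (lnL, k) in models.items():
    print(f"{name:7s} AIC={aic(lnL, k):9.1f} BIC={bic(lnL, k, n_sites):9.1f}")
```

Unlike the LRT, both criteria score each model individually, so the outcome does not depend on the order in which models are compared.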
To compare a tree with a clustering result, one can use a multidimensional scaling technique, so-called Interpolative Joining, to perform dimensionality reduction and visualize the clustering result for the sequences in 3D, and then map the phylogenetic tree onto the clustering result. A better tree usually has a higher correlation with the clustering result. Evaluating tree support As with all statistical analysis, the estimation of phylogenies from character data requires an evaluation of confidence. A number of methods exist to test the amount of support for a phylogenetic tree, either by evaluating the support for each sub-tree in the phylogeny (nodal support) or evaluating whether the phylogeny is significantly different from other possible trees (alternative tree hypothesis tests). Nodal support The most common method for assessing tree support is to evaluate the statistical support for each node on the tree. Typically, a node with very low support is not considered valid in further analysis, and visually may be collapsed into a polytomy to indicate that relationships within a clade are unresolved. Consensus tree Many methods for assessing nodal support involve consideration of multiple phylogenies. The consensus tree summarizes the nodes that are shared among a set of trees. In a "strict consensus", only nodes found in every tree are shown, and the rest are collapsed into an unresolved polytomy. Less conservative methods, such as the "majority-rule consensus" tree, consider nodes that are supported by a given percentage of trees under consideration (such as at least 50%). For example, in maximum parsimony analysis, there may be many trees with the same parsimony score. A strict consensus tree would show which nodes are found in all equally parsimonious trees, and which nodes differ. Consensus trees are also used to evaluate support on phylogenies reconstructed with Bayesian inference (see below). Bootstrapping and jackknifing In statistics, the bootstrap is a method for inferring the variability of data that has an unknown distribution using pseudoreplications of the original data. For example, given a set of 100 data points, a pseudoreplicate is a data set of the same size (100 points) randomly sampled from the original data, with replacement. That is, each original data point may be represented more than once in the pseudoreplicate, or not at all. Statistical support involves evaluation of whether the original data has similar properties to a large set of pseudoreplicates. In phylogenetics, bootstrapping is conducted using the columns of the character matrix. Each pseudoreplicate contains the same number of species (rows) and characters (columns) randomly sampled from the original matrix, with replacement. A phylogeny is reconstructed from each pseudoreplicate, with the same methods used to reconstruct the phylogeny from the original data. For each node on the phylogeny, the nodal support is the percentage of pseudoreplicates containing that node. The statistical rigor of the bootstrap test has been empirically evaluated using viral populations with known evolutionary histories, finding that 70% bootstrap support corresponds to a 95% probability that the clade exists. However, this was tested under ideal conditions (e.g. no change in evolutionary rates, symmetric phylogenies). In practice, values above 70% are generally considered supportive, with final evaluation of confidence left to the researcher or reader. Nodes with support lower than 70% are typically considered unresolved.
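Generating the pseudoreplicates described above amounts to resampling alignment columns with replacement; a minimal sketch on a hypothetical toy alignment (in a real analysis a tree would be rebuilt from each replicate and node occurrences counted):

```python
import random

def bootstrap_replicate(alignment: dict[str, str]) -> dict[str, str]:
    """One bootstrap pseudoreplicate: sample alignment columns with
    replacement, keeping the same taxa and the same number of columns."""
    n_cols = len(next(iter(alignment.values())))
    cols = [random.randrange(n_cols) for _ in range(n_cols)]
    return {taxon: "".join(seq[c] for c in cols)
            for taxon, seq in alignment.items()}

alignment = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "ACGATCGA"}
random.seed(1)
for _ in range(3):
    print(bootstrap_replicate(alignment))
```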
Jackknifing in phylogenetics is a procedure similar to bootstrapping, except the columns of the matrix are sampled without replacement. Pseudoreplicates are generated by randomly subsampling the data—for example, a "10% jackknife" would involve randomly sampling 10% of the matrix many times to evaluate nodal support. Posterior probability Reconstruction of phylogenies using Bayesian inference generates a posterior distribution of highly probable trees given the data and evolutionary model, rather than a single "best" tree. The trees in the posterior distribution generally have many different topologies. When the input data is variant allelic frequency data (VAF), the tool EXACT can compute the probabilities of trees exactly, for small, biologically relevant tree sizes, by exhaustively searching the entire tree space. Most Bayesian inference methods utilize a Markov chain Monte Carlo iteration, and the initial steps of this chain are not considered reliable reconstructions of the phylogeny. Trees generated early in the chain are usually discarded as burn-in. The most common method of evaluating nodal support in a Bayesian phylogenetic analysis is to calculate the percentage of trees in the posterior distribution (post-burn-in) which contain the node. The statistical support for a node in Bayesian inference is expected to reflect the probability that a clade really exists given the data and evolutionary model. Therefore, the threshold for accepting a node as supported is generally higher than for bootstrapping. Step counting methods Bremer support counts the number of extra steps needed to contradict a clade. Shortcomings These measures each have their weaknesses. For example, smaller or larger clades tend to attract larger support values than mid-sized clades, simply as a result of the number of taxa in them. Bootstrap support can provide high estimates of node support as a result of noise in the data rather than the true existence of a clade. Limitations and workarounds Ultimately, there is no way to measure whether a particular phylogenetic hypothesis is accurate or not, unless the true relationships among the taxa being examined are already known (which may happen with bacteria or viruses under laboratory conditions). The best result an empirical phylogeneticist can hope to attain is a tree with branches that are well supported by the available evidence. Several potential pitfalls have been identified: Homoplasy Certain characters are more likely to evolve convergently than others; logically, such characters should be given less weight in the reconstruction of a tree. Weights in the form of a model of evolution can be inferred from sets of molecular data, so that maximum likelihood or Bayesian methods can be used to analyze them. For molecular sequences, this problem is exacerbated when the taxa under study have diverged substantially. As time since the divergence of two taxa increases, so does the probability of multiple substitutions on the same site, or back mutations, all of which result in homoplasies. For morphological data, unfortunately, the only objective way to determine convergence is by the construction of a tree – a somewhat circular method. Even so, weighting homoplasious characters does indeed lead to better-supported trees.
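Computing the Bayesian nodal support described earlier in this section reduces to discarding the burn-in portion of the MCMC sample and counting clade frequencies in the remainder. A sketch on a hypothetical chain, with each sampled tree represented simply as the set of clades (frozensets of taxon names) it contains:

```python
def posterior_clade_support(tree_samples, clade, burn_in_fraction=0.25):
    """Fraction of post-burn-in MCMC tree samples containing `clade`."""
    kept = tree_samples[int(len(tree_samples) * burn_in_fraction):]
    return sum(clade in t for t in kept) / len(kept)

# Hypothetical chain of 1000 sampled trees.
chain = ([{frozenset({"A", "C"})}] * 300 +
         [{frozenset({"A", "B"})}] * 700)
print(posterior_clade_support(chain, frozenset({"A", "B"})))  # ~0.933
```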
Further refinement of such character weighting can be brought by weighting changes in one direction higher than changes in another; for instance, the presence of thoracic wings almost guarantees placement among the pterygote insects because, although wings are often lost secondarily, there is no evidence that they have been gained more than once. Horizontal gene transfer In general, organisms can inherit genes in two ways: vertical gene transfer and horizontal gene transfer. Vertical gene transfer is the passage of genes from parent to offspring, and horizontal (also called lateral) gene transfer occurs when genes jump between unrelated organisms, a common phenomenon especially in prokaryotes; a good example of this is acquired antibiotic resistance as a result of gene exchange between various bacteria, leading to multi-drug-resistant bacterial species. There have also been well-documented cases of horizontal gene transfer between eukaryotes. Horizontal gene transfer has complicated the determination of phylogenies of organisms, and inconsistencies in phylogeny have been reported among specific groups of organisms depending on the genes used to construct evolutionary trees. The only way to determine which genes have been acquired vertically and which horizontally is to parsimoniously assume that the largest set of genes that have been inherited together have been inherited vertically; this requires analyzing a large number of genes. Hybrids, speciation, introgressions and incomplete lineage sorting The basic assumption underlying the mathematical model of cladistics is a situation where species split neatly in bifurcating fashion. While such an assumption may hold on a larger scale (bar horizontal gene transfer, see above), speciation is often much less orderly. Research since the cladistic method was introduced has shown that hybrid speciation, once thought rare, is in fact quite common, particularly in plants. Paraphyletic speciation is also common, making the assumption of a bifurcating pattern unsuitable and leading to phylogenetic networks rather than trees. Introgression can also move genes between otherwise distinct species and sometimes even genera, complicating phylogenetic analysis based on genes. This phenomenon can contribute to "incomplete lineage sorting" and is thought to be a common phenomenon across a number of groups. In species-level analysis this can be dealt with by larger sampling or better whole genome analysis. Often the problem is avoided by restricting the analysis to fewer, not closely related specimens. Taxon sampling Owing to the development of advanced sequencing techniques in molecular biology, it has become feasible to gather large amounts of data (DNA or amino acid sequences) to infer phylogenetic hypotheses. For example, it is not rare to find studies with character matrices based on whole mitochondrial genomes (~16,000 nucleotides, in many animals). However, simulations have shown that it is more important to increase the number of taxa in the matrix than to increase the number of characters, because the more taxa there are, the more accurate and more robust is the resulting phylogenetic tree. This may be partly due to the breaking up of long branches. Phylogenetic signal Another important factor that affects the accuracy of tree reconstruction is whether the data analyzed actually contain a useful phylogenetic signal, a term that is used generally to denote whether a character evolves slowly enough to have the same state in closely related taxa as opposed to varying randomly.
Tests for phylogenetic signal exist. Continuous characters Morphological characters that sample a continuum may contain phylogenetic signal, but are hard to code as discrete characters. Several methods have been used, one of which is gap coding, and there are variations on gap coding. In the original form of gap coding: group means for a character are first ordered by size. The pooled within-group standard deviation is calculated ... and differences between adjacent means ... are compared relative to this standard deviation. Any pair of adjacent means is considered different and given different integer scores ... if the means are separated by a "gap" greater than the within-group standard deviation ... times some arbitrary constant. If more taxa are added to the analysis, the gaps between taxa may become so small that all information is lost. Generalized gap coding works around that problem by comparing individual pairs of taxa rather than considering one set that contains all of the taxa. Missing data In general, the more data that are available when constructing a tree, the more accurate and reliable the resulting tree will be. Missing data are no more detrimental than simply having fewer data, although the impact is greatest when most of the missing data are in a small number of taxa. Concentrating the missing data across a small number of characters produces a more robust tree. The role of fossils Because many characters are embryological, soft-tissue, or molecular characters that (at best) hardly ever fossilize, and because the interpretation of fossils is more ambiguous than that of living taxa, extinct taxa almost invariably have higher proportions of missing data than living ones. However, despite these limitations, the inclusion of fossils is invaluable, as they can provide information in sparse areas of trees, breaking up long branches and constraining intermediate character states; thus, fossil taxa contribute as much to tree resolution as modern taxa. Fossils can also constrain the age of lineages and thus demonstrate how consistent a tree is with the stratigraphic record; stratocladistics incorporates age information into data matrices for phylogenetic analyses. See also List of phylogenetics software Bayesian network Bioinformatics Cladistics Computational biology Disk-covering method Evolutionary dynamics Microbial phylogenetics PHYLIP Phylogenetic comparative methods Phylogenetic tree Phylogenetics Population genetics Quantitative comparative linguistics Statistical classification Systematics Taxonomy (biology) References Further reading External links Computational fields of study
Computational phylogenetics
Technology,Biology
7,513
428,848
https://en.wikipedia.org/wiki/Alginite
Alginite is a component of some types of kerogen alongside amorphous organic matter. Alginite consists of organic-walled marine microfossils, distinct from the inorganic (silica)-walled microfossils that comprise diatomaceous earth. Alginite has also been described as a complex soil aggregate of algae-based fossil biomass, clay, volcanic ash, and calcium carbonate; such material contains a broad spectrum of minerals and of biological macro- and micro-organisms, and has been used to help restore fertility to severely degraded soils. At least two forms of alginite are distinguishable, "alginite A" (telalginite) and "alginite B" (lamalginite). The "A" form contains morphologically distinguishable microfossils, while the "B" form is more amorphous and film-like. References External links Sedimentology Organic minerals Petrology
Alginite
Chemistry
189
39,535,144
https://en.wikipedia.org/wiki/Air%20Force%20Plant%20PJKS
Air Force Plant Peter J. Kiewit and Sons (AFP PJKS, AFP #79) is a Formerly Used Defense Site (CO7570090038) at the Colorado Front Range, used during the Cold War (1957-1968) to provide "rocket assembly, engine testing, and research and development" for the Titan missile complexes southeast of Denver (construction began April 1959). The former "Martin Missile Test Site 1" was "deeded" to the USAF in 1957, was subsequently operated by the builder (Glenn L. Martin Company), was listed on the EPA's National Priorities List on November 21, 1989, and remained USAF property until transferred to Lockheed Martin in February 2001. The site is used by, and lies entirely within, the secure Lockheed Martin/United Launch Alliance Waterton Canyon facility that produces Titan IV launch vehicles and the GPS III space vehicles. Entirely within the East Fork Brush Creek watershed, the former USAF firearms range used by PJKS military police remains along the creek; it is managed by the Skyline Hunting and Fishing Club and is used for periodic Jefferson County police and local Boy Scout training. HGM-25A Titan I ICBMs were liquid-fueled rockets using LOX/RP-1 propellant, which required the missiles to periodically be removed from the launch silos for servicing. Environmental sites at PJKS include 59 within six operable units (e.g., OU1, OU4, & OU6), and there are six areas of concern (12 of 14 underground tanks have been removed). Groundwater contaminants include trichloroethene (TCE), hydrazine, vinyl chloride, benzene, and nitrate. In fiscal year 1996, "technical work groups were formed with EPA, the State of Colorado, USGS, and the U.S. Army Corps of Engineers to support RI site characterization and risk assessment." References External links Historic American Engineering Record (HAER) documentation, filed under Waterton Canyon Road & Colorado Highway 121, Lakewood, Jefferson County, CO: Plants of the United States Air Force Military installations closed in 2001 Buildings and structures in Jefferson County, Colorado Historic American Engineering Record in Colorado Military history of Colorado Rocketry
Air Force Plant PJKS
Engineering
460
4,194,311
https://en.wikipedia.org/wiki/Stealth%20wallpaper
For computer network security, stealth wallpaper is a material designed to prevent an indoor Wi-Fi network from extending or "leaking" to the outside of a building, where malicious persons may attempt to eavesdrop or attack a network. While it is simple to prevent all electronic signals from passing through a building by covering the interior with metal, stealth wallpaper accomplishes the more difficult task of blocking Wi-Fi signals while still allowing cellphone signals to pass through. The first stealth wallpaper was designed by UK defense contractor BAE Systems. In 2012, The Register reported that a commercial wallpaper had been developed by Grenoble Institute of Technology and the Centre Technique du Papier, with planned sale in 2013. This wallpaper blocks three selected Wi-Fi frequencies. Nevertheless, it does allow GSM and 4G signals to pass through, so cell phone use remains unaffected by the wallpaper. See also Electromagnetic shielding Faraday cage TEMPEST Wallpaper Wireless security References External links Azcom: Stealth Wallpaper Prevents Wi-Fi Signals Escaping without Blocking Mobile Phone Signals The Register: Wifi Blocking Wallpaper BAE Systems research and development Computer network security Wi-Fi
Stealth wallpaper
Technology,Engineering
236
648,042
https://en.wikipedia.org/wiki/ADE%20classification
In mathematics, the ADE classification (originally A-D-E classifications) is a situation where certain kinds of objects are in correspondence with simply laced Dynkin diagrams. The question of giving a common origin to these classifications, rather than a posteriori verification of a parallelism, was posed by Vladimir Arnold. The complete list of simply laced Dynkin diagrams comprises An, Dn, E6, E7, and E8. Here "simply laced" means that there are no multiple edges, which corresponds to all simple roots in the root system forming angles of 90° (no edge between the vertices) or 120° (single edge between the vertices). These are two of the four families of Dynkin diagrams (omitting Bn and Cn), and three of the five exceptional Dynkin diagrams (omitting F4 and G2). This list is non-redundant if one takes n ≥ 4 for Dn. If one extends the families to include redundant terms, one obtains the exceptional isomorphisms and corresponding isomorphisms of classified objects. The A, D, E nomenclature also yields the simply laced finite Coxeter groups, by the same diagrams: in this case the Dynkin diagrams exactly coincide with the Coxeter diagrams, as there are no multiple edges. Lie algebras In terms of complex semisimple Lie algebras: An corresponds to the special linear Lie algebra sl(n+1) of traceless operators, Dn corresponds to the even special orthogonal Lie algebra so(2n) of even-dimensional skew-symmetric operators, and E6, E7, E8 are three of the five exceptional Lie algebras. In terms of compact Lie algebras and corresponding simply laced Lie groups: An corresponds to the algebra of the special unitary group SU(n+1); Dn corresponds to the algebra of the even projective special orthogonal group PSO(2n); while E6, E7, E8 are three of five exceptional compact Lie algebras. Binary polyhedral groups The same classification applies to discrete subgroups of SU(2), the binary polyhedral groups; properly, binary polyhedral groups correspond to the simply laced affine Dynkin diagrams, and the representations of these groups can be understood in terms of these diagrams. This connection is known as the McKay correspondence, after John McKay. The connection to Platonic solids is described in the literature. The correspondence uses the construction of the McKay graph. Note that the ADE correspondence is not the correspondence of Platonic solids to their reflection group of symmetries: for instance, in the ADE correspondence the tetrahedron, cube/octahedron, and dodecahedron/icosahedron correspond to E6, E7, and E8, while the reflection groups of the tetrahedron, cube/octahedron, and dodecahedron/icosahedron are instead representations of the Coxeter groups A3, B3, and H3. The orbifold of C2 constructed using each discrete subgroup leads to an ADE-type singularity at the origin, termed a du Val singularity. The McKay correspondence can be extended to multiply laced Dynkin diagrams, by using a pair of binary polyhedral groups. This is known as the Slodowy correspondence, named after Peter Slodowy. Labeled graphs The ADE graphs and the extended (affine) ADE graphs can also be characterized in terms of labellings with certain properties, which can be stated in terms of the discrete Laplace operators or Cartan matrices. Proofs in terms of Cartan matrices may be found in the literature. The affine ADE graphs are the only graphs that admit a positive labeling (labeling of the nodes by positive real numbers) with the following property: Twice any label is the sum of the labels on adjacent vertices.
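This labeling condition is easy to check mechanically; the Python sketch below (node names are ours) verifies it for the affine D4 diagram, whose standard labels are 2 on the central node and 1 on the four outer nodes, and for the affine A3 diagram, a 4-cycle with all labels 1:

```python
def check_affine_labeling(labels, edges):
    """Check that twice each label equals the sum of the labels on
    adjacent vertices (the affine ADE condition stated above)."""
    neighbours = {v: [] for v in labels}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    return all(2 * labels[v] == sum(labels[u] for u in neighbours[v])
               for v in labels)

# Affine D4: a central node joined to four outer nodes.
d4_labels = {"c": 2, "n1": 1, "n2": 1, "n3": 1, "n4": 1}
d4_edges = [("c", "n1"), ("c", "n2"), ("c", "n3"), ("c", "n4")]
print(check_affine_labeling(d4_labels, d4_edges))   # True

# Affine A3: a 4-cycle with all labels equal to 1.
a3_labels = {i: 1 for i in range(4)}
a3_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(check_affine_labeling(a3_labels, a3_edges))   # True
```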
That is, the affine ADE graphs are the only graphs whose labelings give positive functions with eigenvalue 1 for the discrete Laplacian (sum of adjacent vertices minus value of vertex) – the positive solutions to the homogeneous equation ΔΦ = Φ; equivalently, the positive functions in the kernel of Δ − I. The resulting numbering is unique up to scale, and if normalized such that the smallest number is 1, consists of small integers – 1 through 6, depending on the graph. The ordinary ADE graphs are the only graphs that admit a positive labeling with the following property: Twice any label minus two is the sum of the labels on adjacent vertices. In terms of the Laplacian, these are the positive solutions to the inhomogeneous equation ΔΦ = Φ − 2. The resulting numbering is unique (scale is specified by the "2") and consists of integers; for E8 they range from 58 to 270, and this numbering has long been observed. Other classifications The elementary catastrophes are also classified by the ADE classification. The ADE diagrams are exactly the quivers of finite type, via Gabriel's theorem. There is also a link with generalized quadrangles, as the three non-degenerate GQs with three points on each line correspond to the three exceptional root systems E6, E7 and E8. The classes A and D correspond to degenerate cases where the line set is empty or we have all lines passing through a fixed point, respectively. It has been suggested that symmetries of small droplet clusters may be subject to an ADE classification. The minimal models of two-dimensional conformal field theory have an ADE classification. Four-dimensional superconformal gauge quiver theories with unitary gauge groups have an ADE classification. Extension of the classification Arnold subsequently proposed many further extensions of this classification scheme, with the idea of revisiting and generalizing the Coxeter and Dynkin classifications under the single umbrella of root systems. He tried to introduce informal concepts of complexification and symplectization, based on analogies between Picard–Lefschetz theory, which he interprets as the complexified version of Morse theory, and then to extend them to other areas of mathematics. He also tries to identify hierarchies and dictionaries between mathematical objects and theories, where for example diffeomorphisms correspond to the A type of the Dynkin classification, volume-preserving diffeomorphisms correspond to B type, and symplectomorphisms correspond to C type. In the same spirit he revisits analogies between different mathematical objects, where for example the Lie bracket in the scope of diffeomorphisms becomes analogous to (and at the same time includes as a special case) the Poisson bracket of symplectomorphisms. Trinities Arnold extended this further under the rubric of "mathematical trinities". McKay has extended his correspondence along parallel and sometimes overlapping lines. Arnold terms these "trinities" to evoke religion, and suggests that (currently) these parallels rely more on faith than on rigorous proof, though some parallels are elaborated. Further trinities have been suggested by other authors. Arnold's trinities begin with R/C/H (the real numbers, complex numbers, and quaternions), which he remarks "everyone knows", and proceeds to imagine the other trinities as "complexifications" and "quaternionifications" of classical (real) mathematics, by analogy with finding symplectic analogs of classic Riemannian geometry, which he had previously proposed in the 1970s.
In addition to examples from differential topology (such as characteristic classes), Arnold considers the three Platonic symmetries (tetrahedral, octahedral, icosahedral) as corresponding to the reals, complexes, and quaternions, which then connects with McKay's more algebraic correspondences, below. McKay's correspondences are easier to describe. Firstly, the extended Dynkin diagrams (corresponding to tetrahedral, octahedral, and icosahedral symmetry) have diagram symmetry groups, and folding along these symmetries yields other Dynkin diagrams (note that in less careful writing, the extended (tilde) qualifier is often omitted). More significantly, McKay suggests a correspondence between the nodes of the extended E8 diagram and certain conjugacy classes of the monster group, which is known as McKay's E8 observation; see also monstrous moonshine. McKay further relates the nodes of the extended E7 diagram to conjugacy classes in 2.B (an order 2 extension of the baby monster group), and the nodes of the extended E6 diagram to conjugacy classes in 3.Fi24' (an order 3 extension of the Fischer group) – note that these are the three largest sporadic groups, and that the order of the extension corresponds to the symmetries of the diagram. Turning from large simple groups to small ones, the corresponding Platonic groups have connections with the projective special linear groups PSL(2,5), PSL(2,7), and PSL(2,11) (orders 60, 168, and 660), which is deemed a "McKay correspondence". These groups are the only (simple) values for p such that PSL(2,p) acts non-trivially on p points, a fact dating back to Évariste Galois in the 1830s. In fact, the groups decompose as products of sets (not as products of groups) as A4 × Z5, S4 × Z7, and A5 × Z11: the point stabilizers of the actions on 5, 7, and 11 points are the rotation groups of the tetrahedron, cube/octahedron, and icosahedron, respectively. These groups also are related to various geometries, which dates to Felix Klein in the 1870s; see icosahedral symmetry: related geometries for historical discussion and for more recent exposition. Associated geometries (tilings on Riemann surfaces) in which the action on p points can be seen are as follows: PSL(2,5) is the symmetries of the icosahedron (genus 0) with the compound of five tetrahedra as a 5-element set, PSL(2,7) of the Klein quartic (genus 3) with an embedded (complementary) Fano plane as a 7-element set (order 2 biplane), and PSL(2,11) of the buckyball surface (genus 70) with an embedded Paley biplane as an 11-element set (order 3 biplane). Of these, the icosahedron dates to antiquity, the Klein quartic to Klein in the 1870s, and the buckyball surface to Pablo Martin and David Singerman in 2008. Algebro-geometrically, McKay also associates E6, E7, E8 respectively with: the 27 lines on a cubic surface, the 28 bitangents of a plane quartic curve, and the 120 tritangent planes of a canonical sextic curve of genus 4. The first of these is well known, while the second is connected as follows: projecting the cubic from any point not on a line yields a double cover of the plane, branched along a quartic curve, with the 27 lines mapping to 27 of the 28 bitangents, and the 28th line is the image of the exceptional curve of the blowup. Note that the fundamental representations of E6, E7, E8 have dimensions 27, 56 (28·2), and 248 (120+128), while the number of roots is 27+45 = 72, 56+70 = 126, and 112+128 = 240. This should also fit into the scheme of relating E8, E7, E6 with the largest three of the sporadic simple groups, Monster, Baby and Fischer 24', cf. monstrous moonshine. See also Elliptic surface References Sources Problem VIII. The A-D-E classifications (V. Arnold).
External links John Baez, This Week's Finds in Mathematical Physics: Week 62, Week 63, Week 64, Week 65, August 28, 1995, through October 3, 1995, and Week 230, May 4, 2006 The McKay Correspondence, Tony Smith ADE classification, McKay correspondence, and string theory, Luboš Motl, The Reference Frame, May 7, 2006 Lie groups
ADE classification
Mathematics
2,329
1,268,216
https://en.wikipedia.org/wiki/Mercury%28I%29%20chloride
Mercury(I) chloride is the chemical compound with the formula Hg2Cl2. Also known as the mineral calomel (a rare mineral) or mercurous chloride, this dense white or yellowish-white, odorless solid is the principal example of a mercury(I) compound. It is a component of reference electrodes in electrochemistry. History The name calomel is thought to come from the Greek καλός "beautiful", and μέλας "black"; or καλός and μέλι "honey" from its sweet taste. The "black" name (somewhat surprising for a white compound) is probably due to its characteristic disproportionation reaction with ammonia, which gives a spectacular black coloration due to the finely dispersed metallic mercury formed. It is also referred to as the mineral horn quicksilver or horn mercury. Calomel was taken internally and used as a laxative, for example to treat George III in 1801, and as a disinfectant, as well as in the treatment of syphilis, until the early 20th century. Until fairly recently, it was also used as a horticultural fungicide, most notably as a root dip to help prevent the occurrence of clubroot amongst crops of the family Brassicaceae. Mercury became a popular remedy for a variety of physical and mental ailments during the age of "heroic medicine". It was prescribed by doctors in America throughout the 18th century, and during the revolution, to make patients regurgitate and purge their bodies of "impurities". Benjamin Rush was a well-known advocate of mercury in medicine and used calomel to treat sufferers of yellow fever during its outbreak in Philadelphia in 1793. Calomel was given to patients as a purgative or cathartic until they began to salivate, and was often administered to patients in such great quantities that their hair and teeth fell out. Yellow fever was also treated with calomel. Lewis and Clark brought calomel on their expedition; researchers have since used that same mercury, found deep in latrine pits, to retrace the locations of the expedition's campsites. Properties Mercury is unique among the group 12 metals for its ability to form the M–M bond so readily. Hg2Cl2 is a linear molecule. The mineral calomel crystallizes in the tetragonal system, with space group I4/m 2/m 2/m. In the crystal structure, the Hg–Hg bond length is 253 pm (Hg–Hg in the metal is 300 pm) and the Hg–Cl bond length in the linear Hg2Cl2 unit is 243 pm. The overall coordination of each Hg atom is octahedral as, in addition to the two nearest neighbours, there are four other Cl atoms at 321 pm. Longer mercury polycations exist. Preparation and reactions Mercurous chloride forms by the reaction of elemental mercury and mercuric chloride: Hg + HgCl2 → Hg2Cl2 It can be prepared via a metathesis reaction involving aqueous mercury(I) nitrate, using various chloride sources including NaCl or HCl. 2 HCl + Hg2(NO3)2 → Hg2Cl2 + 2 HNO3 Ammonia causes Hg2Cl2 to disproportionate: Hg2Cl2 + 2 NH3 → Hg + Hg(NH2)Cl + NH4Cl Calomel electrode Mercurous chloride is employed extensively in electrochemistry, taking advantage of the ease of its oxidation and reduction reactions. The calomel electrode is a reference electrode, especially in older publications. Over the past 50 years, it has been superseded by the silver/silver chloride (Ag/AgCl) electrode. Although mercury electrodes have been widely abandoned due to the dangerous nature of mercury, many chemists believe they are still more accurate and are not dangerous as long as they are handled properly.
Experimental potentials measured with the calomel electrode differ little from literature values, whereas other electrodes can deviate by 70 to 100 millivolts. Photochemistry Mercurous chloride decomposes into mercury(II) chloride and elemental mercury upon exposure to UV light. Hg2Cl2 → HgCl2 + Hg The formation of Hg can be used to calculate the number of photons in the light beam, by the technique of actinometry. A light-driven reaction of mercury(II) chloride with ammonium oxalate produces mercury(I) chloride, ammonium chloride and carbon dioxide: 2 HgCl2 + (NH4)2C2O4 → Hg2Cl2(s) + 2 NH4Cl + 2 CO2 This particular reaction was discovered by J. M. Eder (hence the name Eder reaction) in 1880 and reinvestigated by W. E. Rosevaere in 1929. Related mercury(I) compounds Mercury(I) bromide, Hg2Br2, is light yellow, whereas mercury(I) iodide, Hg2I2, is greenish in colour. Both are poorly soluble. Mercury(I) fluoride is unstable in the absence of a strong acid. Safety considerations Mercurous chloride is toxic, although due to its low solubility in water it is generally less dangerous than its mercuric chloride counterpart. It was used in medicine as a diuretic and purgative (laxative) in the United States from the late 1700s through the 1860s. Calomel was also a common ingredient in teething powders in Britain up until 1954, causing widespread mercury poisoning in the form of pink disease, which at the time had a mortality rate of 1 in 10. These medicinal uses were discontinued once the compound's toxicity was recognized. It also found use in cosmetics such as soaps and skin-lightening creams, but these preparations are now illegal to manufacture or import in many countries including the US, Canada, Japan and the European Union. A study of workers involved in the production of these preparations showed that the sodium salt of 2,3-dimercapto-1-propanesulfonic acid (DMPS) was effective in lowering the body burden of mercury and in decreasing the urinary mercury concentration to normal levels. References External links International Chemical Safety Card 0984 National Pollutant Inventory - Mercury and compounds Fact Sheet NIOSH Pocket Guide to Chemical Hazards Mercury(I) compounds Chlorides Metal halides Alchemical substances Obsolete pesticides Laxatives Diuretics Luminescent minerals Chemical compounds containing metal–metal bonds
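The photon-counting step of actinometry described above reduces to simple arithmetic: moles of product scaled by Avogadro's number and divided by the quantum yield. A minimal Python sketch, in which the quantum-yield value is an illustrative assumption rather than a figure from this article:

AVOGADRO = 6.02214076e23  # particles per mole

def photons_absorbed(moles_hg, quantum_yield):
    # Photons absorbed = (Hg atoms formed) / (Hg atoms produced per photon).
    return moles_hg * AVOGADRO / quantum_yield

# Example: 1 micromole of Hg formed, assuming a quantum yield of 1.0.
print(f"{photons_absorbed(1e-6, 1.0):.3e} photons")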
Mercury(I) chloride
Chemistry
1,379
230,978
https://en.wikipedia.org/wiki/Cylinder%20head
In a piston engine, the cylinder head sits above the cylinders, forming the roof of the combustion chamber. In sidevalve engines the head is a simple plate of metal containing the spark plugs and possibly heat dissipation fins. In more modern overhead valve and overhead camshaft engines, the head is a more complicated metal block that also contains the inlet and exhaust passages, and often coolant passages, valvetrain components, and fuel injectors. Number of cylinder heads A piston engine typically has one cylinder head per bank of cylinders. Most modern engines with a "straight" (inline) layout use a single cylinder head that serves all the cylinders. Engines with a "V" layout or "flat" layout typically use two cylinder heads (one for each cylinder bank); however, a small number of 'narrow-angle' V engines (such as the Volkswagen VR5 and VR6 engines) use a single cylinder head spanning the two banks. Most radial engines have one head for each cylinder, although this is usually of the monobloc form wherein the head is made as an integral part of the cylinder. This is also common for motorcycles, and such head/cylinder components are referred to as barrels. Some engines, particularly medium- and large-capacity diesel engines built for industrial, marine, power generation, and heavy traction purposes (large trucks, locomotives, heavy equipment, etc.) have individual cylinder heads for each cylinder. This reduces repair costs, as a single failed head on a single cylinder can be changed instead of a larger, much more expensive unit fitting all the cylinders. Such a design also allows engine manufacturers to easily produce a 'family' of engines of different layouts and/or cylinder numbers without requiring new cylinder head designs. Design Engine/valvetrain configurations Sidevalve engines In a flathead (sidevalve) engine, all of the valvetrain components are contained within the block, therefore the head is usually a simple plate of metal bolted to the top of the engine block. Sidevalve engines were once universal but are now largely obsolete in automobiles, found almost exclusively in small engines such as lawnmowers, weed trimmers and chainsaws. Intake over exhaust (IOE) engines Intake over exhaust (IOE) engines combined elements of the sidevalve and overhead valve designs. Used extensively in American motorcycles in the early 1900s, the IOE engine remained in production in limited numbers until the 1990s. IOE engines are more efficient than sidevalve engines, but also more complex, larger and more expensive to manufacture. Overhead engines (OHV & OHC) In an overhead valve (OHV) or overhead camshaft (OHC) engine, the cylinder head contains several airflow passages called ports; intake ports deliver the fuel-air intake charge from the intake manifold to the combustion chamber, and exhaust ports route combustion waste gases out of the combustion chamber to the exhaust manifold. Valves open and close the ports, with the intakes offset fore-and-aft from the exhausts. The head also contains the spark plugs, and on water-cooled engines, the coolant passages. Overhead valve (OHV) engines A single camshaft located in the engine block uses pushrods and rocker arms to actuate all the valves. OHV engines are typically more compact than equivalent OHC engines, and fewer parts mean cheaper production, but they have largely been replaced by OHC designs, except in some American V8 engines. 
Overhead camshaft (OHC) engines An overhead camshaft (OHC) engine locates the camshaft(s) in the cylinder head, above the combustion chamber. Eliminating pushrods lessens valvetrain inertia and provides space for optimized port designs, both of which increase power potential. In a single overhead camshaft (SOHC) engine, the camshaft may be seated centrally between the valve rows (eliminating pushrods but still utilizing rocker arms), or directly above a single row of valves (replacing rocker arm actuation with tappets). SOHC engines were widely used from the 1960s to the 1990s. Double overhead camshaft (DOHC) engines seat a camshaft directly above each row of offset valves (intakes inboard, exhausts outboard). DOHC designs allow optimal crossflow positioning of valves to permit higher-RPM operation. They are typically larger in size (especially width) than equivalent OHV or SOHC engines. Even though the greater number of components raises production costs, DOHC engines have seen widespread use in automobile engines since the 1990s. See also Crossflow cylinder head Reverse-flow cylinder head Head gasket Junk head Monobloc head Flathead engine T-head engine References Engine technology
Cylinder head
Technology
947
31,184,294
https://en.wikipedia.org/wiki/Treatment%20of%20infections%20after%20exposure%20to%20ionizing%20radiation
Infections caused by exposure to ionizing radiation can be extremely dangerous, and are of public and government concern. Numerous studies have demonstrated that the susceptibility of organisms to systemic infection increases following exposure to ionizing radiation. The risk of systemic infection is higher when the organism has a combined injury, such as a conventional blast, thermal burn, or radiation burn. There is a direct quantitative relationship between the magnitude of the neutropenia that develops after exposure to radiation and the increased risk of developing infection. Because no controlled studies of therapeutic intervention in humans are available, almost all of the current information is based on animal research. Cause of infection Infections caused by ionizing radiation can be endogenous, originating from the oral and gastrointestinal bacterial flora, or exogenous, originating from breached skin following trauma. The organisms causing endogenous infections are generally Gram-negative bacilli such as Enterobacteriaceae (e.g. Escherichia coli, Klebsiella pneumoniae, Proteus spp.) and Pseudomonas aeruginosa. Exposure to higher doses of radiation is associated with systemic anaerobic infections due to Gram-negative bacilli and Gram-positive cocci. Fungal infections can also emerge in those who fail antimicrobial therapy and remain febrile for over 7–10 days. Exogenous infections can be caused by organisms that colonize the skin, such as Staphylococcus aureus or Streptococcus spp., and organisms that are acquired from the environment, such as Pseudomonas spp. Principles of treatment The management of established or suspected infection following exposure to radiation (characterized by neutropenia and fever) is similar to that used for other febrile neutropenic patients. However, important differences between the two conditions exist. The patient who develops neutropenia after radiation is also susceptible to irradiation damage in other tissues, such as the gastrointestinal tract, lungs and central nervous system. These patients may require therapeutic interventions not needed in other types of neutropenic infections. The response of irradiated animals to antimicrobial therapy is sometimes unpredictable, as was evident in experimental studies where metronidazole and pefloxacin therapies were detrimental. Antimicrobial agents that decrease the number of the strict anaerobic component of the gut flora (i.e., metronidazole) generally should not be given, because they may enhance systemic infection by aerobic or facultative bacteria and thus facilitate mortality after irradiation. Choice of antimicrobials An empirical regimen of antibiotics should be selected, based on the pattern of bacterial susceptibility and nosocomial infections in the particular area and institution and the degree of neutropenia. Broad-spectrum empirical therapy (see below for choices) with high doses of one or more antibiotics should be initiated at the onset of fever. These antimicrobials should be directed at the eradication of Gram-negative aerobic organisms (i.e. Enterobacteriaceae, Pseudomonas) that account for more than three-fourths of the isolates causing sepsis. Because aerobic and facultative Gram-positive bacteria (mostly alpha-hemolytic streptococci) cause sepsis in about a quarter of victims, coverage for these organisms may also be necessary in some individuals. 
A standardized plan for the management of febrile, neutropenic patients must be devised in each institution or agency. Empirical regimens must contain antibiotics broadly active against Gram-negative aerobic bacteria: a quinolone (e.g. ciprofloxacin, levofloxacin), a third- or fourth-generation cephalosporin (e.g. ceftazidime, cefepime), or an aminoglycoside (e.g. gentamicin, amikacin). Antibiotics directed against Gram-positive bacteria (amoxicillin, vancomycin, or linezolid) need to be included in instances and institutions where infections due to these organisms are prevalent. These are the antimicrobial agents that can be used for therapy of infection following exposure to irradiation: a. First choice: ciprofloxacin (a second-generation quinolone) or levofloxacin (a third-generation quinolone) +/- amoxicillin or vancomycin. Ciprofloxacin is effective against Gram-negative organisms (including Pseudomonas species) but has poor coverage for Gram-positive organisms (including Staphylococcus aureus and Streptococcus pneumoniae) and some atypical pathogens. Levofloxacin has expanded Gram-positive coverage (penicillin-sensitive and penicillin-resistant S. pneumoniae) and expanded activity against atypical pathogens. b. Second choice: ceftriaxone (a third-generation cephalosporin) or cefepime (a fourth-generation cephalosporin) +/- amoxicillin or vancomycin. Cefepime exhibits an extended spectrum of activity for Gram-positive bacteria (staphylococci) and Gram-negative organisms, including Pseudomonas aeruginosa and certain Enterobacteriaceae that generally are resistant to most third-generation cephalosporins. Cefepime is an injectable and is not available in an oral form. c. Third choice: gentamicin or amikacin (both aminoglycosides) +/- amoxicillin or vancomycin (all injectable). Aminoglycosides should be avoided whenever feasible due to associated toxicities. The second and third choices of antimicrobials are suitable for children because quinolones are not approved for use in this age group. Use of these agents should be considered in individuals exposed to doses above 1.5 Gy; they should be given to those who develop fever and neutropenia, and should be administered within 48 hours of exposure. An estimation of the exposure dose should be done by biological dosimetry whenever possible and by a detailed history of exposure. If infection is documented by cultures, the empirical regimen may require adjustment to provide appropriate coverage for the specific isolate(s). If the patient remains afebrile, the initial regimen should be continued for a minimum of 7 days. Therapy may need to be continued for at least 21–28 days, or until the risk of infection has declined because of recovery of the immune system. A mass casualty situation may mandate the use of oral antimicrobials. Modification of therapy Modifications of this initial antibiotic regimen should be made when microbiological culture shows specific bacteria that are resistant to the initial antimicrobials. The modification, if needed, should be influenced by a thorough evaluation of the history, physical examination findings, laboratory data, chest radiograph, and epidemiological information. Antifungal coverage with amphotericin B may need to be added. If diarrhea is present, cultures of stool should be examined for enteropathogens (i.e., Salmonella, Shigella, Campylobacter, and Yersinia). Oral and pharyngeal mucositis and esophagitis suggest Herpes simplex infection or candidiasis. 
Either empirical antiviral or antifungal therapy or both should be considered. In addition to infections due to neutropenia, a patient with the Acute Radiation Syndrome will also be at risk for viral, fungal and parasitic infections. If these types of infection are suspected, cultures should be performed and appropriate medication started if indicated. References External links Armed Forces Radiobiology Research Institute, Uniformed Services University Infection in Radiation Sickness, Washington DC, USA Medical consequences of nuclear war. TRIAGE AND TREATMENT OF RADIATION-INJURED MASS CASUALTIES. Borden Institute 2000s Chapter 5 INFECTIOUS COMPLICATIONS OF RADIATION INJURY. Borden Institute 2000s Radiation health effects
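The dosing criteria above (exposure above 1.5 Gy, fever with neutropenia, administration within 48 hours) amount to a simple decision rule. A minimal Python sketch of that rule exactly as stated in the text, offered as an illustration only and in no sense clinical software:

def antimicrobial_recommendation(dose_gy, febrile, neutropenic, hours_since_exposure):
    # Thresholds (1.5 Gy, 48 h) are the ones quoted in the text above.
    if dose_gy <= 1.5:
        return "not indicated by this rule"
    if febrile and neutropenic and hours_since_exposure <= 48:
        return "give"
    return "consider"

print(antimicrobial_recommendation(2.0, True, True, 24))  # -> "give"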
Treatment of infections after exposure to ionizing radiation
Chemistry,Materials_science
1,689
205,670
https://en.wikipedia.org/wiki/Ross%20154
Ross 154 (V1216 Sgr) is a star in the southern zodiac constellation of Sagittarius. It has an apparent visual magnitude of 10.44, making it much too faint to be seen with the naked eye; even under ideal conditions, viewing Ross 154 requires a telescope. The distance to this star can be estimated from parallax measurements, which place it about 9.7 light-years (roughly 3.0 parsecs) away from Earth. It is the nearest star in the southern constellation Sagittarius, and one of the nearest stars to the Sun. Description This star was first catalogued by American astronomer Frank Elmore Ross in 1925, as part of his fourth list of new variable stars. In 1926, he added it to his second list of stars showing a measurable proper motion after comparing its position with photographic plates taken earlier by fellow American astronomer E. E. Barnard. A preliminary parallax was determined in 1937 by Walter O'Connell using photographic plates from the Yale telescope in Johannesburg, South Africa. This placed the star sixth among the then-known nearby stars. Ross 154 was found to be a UV Ceti-type flare star, with a mean time between major flares of about two days. The first such flare activity was observed from Australia in 1951, when the star increased in magnitude by 0.4. Typically, the star will brighten by 3–4 magnitudes during a flare. The star's surface magnetic field strength has also been estimated. Ross 154 is an X-ray source and has been detected by several X-ray observatories, which have measured its quiescent X-ray luminosity. X-ray flare emission from this star has been observed by the Chandra observatory, including one particularly large flare. A stellar classification of M3.5V makes this a red dwarf star that is generating energy through the nuclear fusion of hydrogen at its core. It has an estimated 18% of the Sun's mass and 20% of the Sun's radius, but it is radiating only 0.4% of the luminosity of the Sun. In contrast to the Sun, where convection occurs only in the outer layers, a red dwarf with a mass this low is entirely convective. Based on its relatively high projected rotation rate, this is probably a young star with an estimated age of less than a billion years. The abundance of elements heavier than helium is about half that in the Sun. No low-mass companions have been discovered in orbit around Ross 154. Nor does it display the level of excess infrared emission that would suggest the presence of circumstellar dust. Such debris disks are rare among M-type star systems older than about 10 million years, having been primarily cleared away by drag from the stellar wind. The space velocity components of this star in the galactic coordinate system are (U, V, W) = (–12.2, –1.0, –7.2) km/s. It has not been identified as a member of a specific stellar moving group and is orbiting through the Milky Way galaxy at a varying distance from the core with an orbital eccentricity of 0.052. Based on its low velocity relative to the Sun, this is believed to be a young disk (Population I) star. This star will make its closest approach to the Sun in about 157,000 years. See also Frank Elmore Ross List of nearest stars and brown dwarfs References External links SolStation.com: Ross 154 Flare stars M-type main-sequence stars Local Bubble Sagittarius (constellation) CD-23 14742 154 0729 092403 Sagittarii, V1216
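The parallax-to-distance conversion behind such measurements is a one-line computation: a star's distance in parsecs is the reciprocal of its parallax in arcseconds. A small Python sketch, where the sample parallax is an illustrative value chosen to land near the distance quoted above rather than a catalogued figure:

LY_PER_PARSEC = 3.2616  # light-years in one parsec

def distance_from_parallax(parallax_arcsec):
    # d(pc) = 1 / p(arcsec); also convert to light-years for comparison.
    parsecs = 1.0 / parallax_arcsec
    return parsecs, parsecs * LY_PER_PARSEC

print(distance_from_parallax(0.336))  # roughly (2.98 pc, 9.7 ly)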
Ross 154
Astronomy
748
55,643,561
https://en.wikipedia.org/wiki/Alfred%20George%20Nash
Alfred George Nash FRSE (1853–23 December 1930) was a Jamaican civil engineer of Scots descent who held several legislative roles there. Life He was born in Mandeville, Jamaica in 1853. He was educated at St George's College in Kingston. He then traveled to Scotland to study engineering at the University of Edinburgh, graduating with a BSc around 1873. He worked in Britain as a civil engineer in the 1870s, returning to Jamaica in 1882. He was a Member of the Legislative Council of Jamaica. In 1897 he was elected a Fellow of the Royal Society of Edinburgh. His proposers were Alexander Crum Brown, Cargill Gilston Knott, Peter Guthrie Tait and Andrew Jamieson. He died on 23 December 1930. He is buried in St Mark's Anglican Churchyard in Mandeville, Jamaica. References 1853 births 1930 deaths People from Mandeville, Jamaica Civil engineers Fellows of the Royal Society of Edinburgh Alumni of the University of Edinburgh Members of the Legislative Council of Jamaica Jamaican engineers People educated at St. George's College, Jamaica
Alfred George Nash
Engineering
214
78,187,130
https://en.wikipedia.org/wiki/NGC%206644
NGC 6644 is a bipolar planetary nebula located in the constellation Sagittarius. NGC 6644 was discovered by American astronomer Edward Charles Pickering in 1880. Because its apparent visual magnitude is only 10.7, a telescope with an aperture of at least 150 millimeters is needed to observe it. The nebula is located about 1.1 degrees northeast of the star Lambda Sagittarii. According to the most recent studies (2010), the distance of NGC 6644 is 6.131 ± 1.226 kpc (∼20,000 light-years). See also List of planetary nebulae References Planetary nebulae 6644 Astronomical objects discovered in 1880 Sagittarius (constellation)
NGC 6644
Astronomy
140
522,037
https://en.wikipedia.org/wiki/Mr.%20Yuk
Mr. Yuk is a trademarked graphic image, created by UPMC Children's Hospital of Pittsburgh, and widely employed in the United States in labeling of substances that are poisonous if ingested. Objective To help children learn to avoid ingesting poisons, Mr. Yuk was conceived by Richard Moriarty, a pediatrician and clinical professor of pediatrics at the University of Pittsburgh School of Medicine who founded the Pittsburgh Poison Center and the National Poison Center Network. Moriarty felt that the traditional skull and crossbones representing poison was no longer appropriate for children; Congressman Bill Coyne later said that by the 1970s the symbol was "associated with swashbuckling pirates and buccaneers rather than with harmful substances." The design and color were chosen when Moriarty used focus groups of young children to determine which combination was the most unappealing. Possible expressions were "mad" (crossed eyes and intense expression), "dead" (a sunken mouth and Xs for eyes), and "sick" (a sour expression with the tongue sticking out). Children were asked to rank the faces according to which they liked the best, along with the skull and crossbones, and the "sick" face was least popular. The shade of fluorescent green that was chosen was christened "Yucky!" by a young child and gave the design its name. History In 1971, the Pittsburgh Poison Center issued the Mr. Yuk sticker. Over the next few years, Mr. Yuk stickers were used nationwide to promote poison centers in the United States of America. The stickers usually contained phone numbers of poison control centers that could give guidance if poisoning had occurred or was suspected. Usually, Mr. Yuk stickers carried the national toll-free number 1-800-222-1222. In some areas, local poison control centers and children's hospitals issue stickers with local numbers, under license. A public service announcement featuring a theme song was also produced in 1971. Effectiveness At least two peer-reviewed medical studies (Fergusson 1982, Vernberg 1984) have suggested that Mr. Yuk stickers do not effectively keep children away from potential poisons and may even attract children. Specifically, Vernberg and colleagues note concerns for using the stickers to protect young children. Fergusson and colleagues state that "the method may be effective with older children or as an adjunct to an integrated poisoning prevention campaign". To evaluate the effectiveness of six projected symbols (skull-and-crossbones, red stop sign, and four others), tests were conducted at day care centers. Children in the program rated Mr. Yuk as the most unappealing image. By contrast, children rated the skull-and-crossbones to be the most appealing. Licensing Mr. Yuk and his graphic rendering are registered trademarks and service marks of the UPMC Children's Hospital of Pittsburgh, and the rendering itself is additionally protected by copyright. The Children's Hospital of Pittsburgh of UPMC gives out free sheets of Mr. Yuk stickers if contacted by mail. Modern usage Given the evidence regarding the campaign's effectiveness, some poison control centers no longer distribute Mr. Yuk stickers. However, as of May 2024, other poison control centers, such as the Pittsburgh Poison Center, continue to offer stickers. See also Emoticon Hazard symbol Mr. Ouch Smiley Poison control center References External links Mr. 
Yuk Information Page American Association of Poison Control Centers Acceptable Labelling on Pesticide Containers Original Mr.Yuk Public Service Announcement On YouTube Symbols introduced in 1971 Male characters in advertising Public service announcement characters Public service announcements of the United States Poison control centers Culture of Pittsburgh Pictograms Stickers Children's health in the United States Articles containing video clips
Mr. Yuk
Mathematics
766
35,134,363
https://en.wikipedia.org/wiki/List%20of%20LM-series%20integrated%20circuits
The following is a list of LM-series integrated circuits. Many were among the first analog integrated circuits commercially produced since late 1965; some were groundbreaking innovations. As of 2007, many are still being used. The LM series originated with integrated circuits made by National Semiconductor. The prefix LM stands for linear monolithic, referring to the analog components integrated onto a single piece of silicon. Because of the popularity of these parts, many of them were second-sourced by other manufacturers who kept the sequence number as an aid to identification of compatible parts. Several generations of pin-compatible descendants of the original parts have since become de facto standard electronic components. The device categories covered include operational amplifiers, differential comparators, current-mode (Norton) amplifiers, instrumentation amplifiers, audio amplifiers, precision references, voltage regulators, voltage-to-frequency converters, current sources, temperature sensors and thermostats, and others. See also Linear integrated circuit, List of linear integrated circuits 4000-series integrated circuits, List of 4000-series integrated circuits 7400-series integrated circuits, List of 7400-series integrated circuits Pin compatibility Notes Suffixes that denote specific versions of the part (e.g. LM305 vs. LM305A) are not shown in this list. Obsolete 4-bit microprocessors of the LM6400 family, manufactured by Sanyo, have no relationship to the analog LM series and are not included in this list. The first digit of each part number denotes a temperature range: mostly, LM1xx indicates a military-grade range of −55 °C to +125 °C, LM2xx an industrial-grade range of −25 °C to +85 °C, and LM3xx a commercial range of 0 °C to 70 °C. Some obsolete parts continue to be manufactured by companies other than the original manufacturer. References Further reading Historical Data Books Linear Databook (1980, 1376 pages), National Semiconductor Linear Databook 1 (1988, 1262 pages), National Semiconductor Linear Databook 2 (1988, 934 pages), National Semiconductor Linear Databook 3 (1988, 930 pages), National Semiconductor Linear and Interface Databook (1990, 1658 pages), Motorola Linear Databook (1986, 568 pages), RCA Historical Design Books Analog Applications Manual (1979, 418 pages), Signetics Linear Applications Handbook (1994, 1287 pages), National Semiconductor Linear Design Seminar Slide Book (1992, 502 pages), Texas Instruments Linear Design Seminar Reference Book (1993, 451 pages), Texas Instruments Electronic design Electronics lists Linear integrated circuits
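The temperature-grade convention just described is easy to encode as a lookup table. A small Python sketch of that mapping; note the article's own qualifier ("mostly") means real part numbers admit exceptions:

TEMP_GRADES = {
    "1": ("military", -55, 125),   # LM1xx
    "2": ("industrial", -25, 85),  # LM2xx
    "3": ("commercial", 0, 70),    # LM3xx
}

def lm_temperature_grade(part):
    # Returns (grade, min degrees C, max degrees C), or None for digits
    # outside the convention described in the text.
    digit = part.upper().removeprefix("LM")[:1]
    return TEMP_GRADES.get(digit)

print(lm_temperature_grade("LM317"))  # -> ('commercial', 0, 70)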
List of LM-series integrated circuits
Engineering
517
61,594,718
https://en.wikipedia.org/wiki/Cercopithecine%20gammaherpesvirus%2014
Cercopithecine gammaherpesvirus 14 (CeHV-14) is a species of virus in the genus Lymphocryptovirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales. References External links Gammaherpesvirinae
Cercopithecine gammaherpesvirus 14
Biology
64
39,313,775
https://en.wikipedia.org/wiki/Reliable%20Event%20Logging%20Protocol
Reliable Event Logging Protocol (RELP), a networking protocol for computer data logging in computer networks, extends the functionality of the syslog protocol to provide reliable delivery of event messages. It is most often used in environments which do not tolerate message loss, such as the financial industry. Overview RELP uses TCP for message transmission. This provides basic protection against message loss, but does not guarantee delivery under all circumstances. When a connection aborts, TCP cannot reliably detect whether the last messages sent have actually reached their destination. Unlike the syslog protocol, RELP works with a backchannel which conveys information back to the sender about messages processed by the receiver. This enables RELP to always know which messages have been properly received, even in the case of a connection abort. History RELP was developed in 2008 as a reliable protocol for rsyslog-to-rsyslog communication. As RELP designer Rainer Gerhards explains, the lack of reliable transmission in industry-standard syslog was a core motivation to create RELP. Originally, RFC 3195 syslog was considered for this role in rsyslog, but it suffered from high overhead and missing support for the new IETF syslog standards (which have since been published as RFC 5424, but had not yet been finalized at that time). While RELP was initially meant solely for rsyslog use, it became adopted more widely. Currently, tools under both Linux and Windows support RELP, and there are also in-house deployments for Java. While RELP is still not formally standardized, it has evolved into an industry standard for computer logging. Technical details RELP is inspired by RFC 3195 syslog and RFC 3080. During initial connection, sender and receiver negotiate session options, like the supported command set or application-level window size. Network event messages are transferred as commands, where the receiver acknowledges each command as soon as it has processed it. Sessions may be closed by both sender and receiver, but usually should be terminated by the sender side. In order to facilitate message recovery on session aborts, RELP keeps transaction numbers for each command, and negotiates which messages need to be resent on session reestablishment. The current version of RELP does not specify native TLS support. However, practical deployments use wrappers around the RELP session in order to provide that functionality. Implementations Only publicly available implementations are listed; this list is not exhaustive: librelp (the original C RELP library), rsyslog, MonitorWare (Windows), logstash, rlp_01 (Java RELP library), jla_01 (RELP Logback plugin), jla_04 (Java Util Logging RELP handler), jla_05 (Log4j RELP plugin). References Internet protocols Internet Standards System administration
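The acknowledgement-and-resend behaviour described above can be sketched in a few lines of Python. The sketch is schematic only: the class and method names are invented for illustration, and the frame layout hinted at in the comment does not reproduce the actual RELP wire format.

class RelpLikeSender:
    """Toy model of a sender that numbers commands and resends unacked ones."""

    def __init__(self):
        self.next_txn = 1
        self.unacked = {}  # transaction number -> message awaiting an ack

    def send(self, message):
        txn = self.next_txn
        self.next_txn += 1
        self.unacked[txn] = message  # kept until the backchannel confirms it
        # A real implementation would now write a framed command along the
        # lines of f"{txn} syslog {len(message)} {message}" to the TCP socket.
        return txn

    def on_ack(self, txn):
        # The receiver reported that it processed this command.
        self.unacked.pop(txn, None)

    def on_reconnect(self):
        # After a session abort, everything never acknowledged is resent.
        return sorted(self.unacked.items())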
Reliable Event Logging Protocol
Technology
578
347,726
https://en.wikipedia.org/wiki/Schematron
Schematron is a rule-based validation language for making assertions about the presence or absence of patterns in XML trees. It is a structural schema language expressed in XML, using a small number of elements together with the XPath language. In many implementations, the Schematron XML is processed into XSLT code for deployment anywhere that XSLT can be used. Schematron is capable of expressing constraints in ways that other XML schema languages like XML Schema and DTD cannot. For example, it can require that the content of an element be controlled by one of its siblings, or it can require that the root element, regardless of what element that is, have specific attributes. Schematron can also specify required relationships between multiple XML files. Constraints and content rules may be associated with "plain-English" (or any language) validation error messages, allowing translation of numeric Schematron error codes into meaningful user error messages. Users of Schematron define all the error messages themselves. The current ISO recommendation is Information technology, Document Schema Definition Languages (DSDL), Part 3: Rule-based validation, Schematron (ISO/IEC 19757-3:2020). Uses Constraints are specified in Schematron using an XPath-based language that can be deployed as XSLT code, making it practical for applications such as the following: Adjunct to Structural Validation By testing for co-occurrence constraints, non-regular constraints, and inter-document constraints, Schematron can extend the validations that can be expressed in languages such as DTDs, RELAX NG or XML Schema. Lightweight Business Rules Engine Schematron is not a comprehensive Rete rules engine, but it can be used to express rules about complex structures within an XML document. XML Editor Syntax Highlighting Rules Some XML editors use Schematron rules to conditionally highlight XML files for errors. Not all XML editors support Schematron. Versions Schematron was invented by Rick Jelliffe while at Academia Sinica Computing Centre, Taiwan. He described Schematron as "a feather duster to reach the parts other schema languages cannot reach". The most common versions of Schematron are: Schematron 1.0 (1999) Schematron 1.3 (2000): This version used the namespace http://xml.ascc.net/schematron/. It was supported by an XSLT implementation with a plug-in architecture. Schematron 1.5 (2001): This version was widely implemented and can still be found. Schematron 1.6 (2002): This version was the base of ISO Schematron and obsoleted by it. ISO Schematron (2006): This version regularizes several features, and provides an XML output format, Schematron Validation Report Language (SVRL). It uses the new namespace http://purl.oclc.org/dsdl/schematron. ISO Schematron (2010) ISO Schematron (2016): This version added support for XSLT2. ISO Schematron (2020): This version added support for XSLT3. Schematron as an ISO Standard Schematron has been standardized by the ISO as Information technology, Document Schema Definition Languages (DSDL), Part 3: Rule-based validation, Schematron (ISO/IEC 19757-3:2020). This standard is currently not listed on the ISO Publicly Available Specifications list. Paper versions may be purchased from ISO or national standards bodies. Schemas that use ISO/IEC FDIS 19757-3 should use the following namespace: http://purl.oclc.org/dsdl/schematron Sample rule Schematron rules can be created using a standard XML editor or XForms application. 
The following is a sample schema: <schema xmlns="http://purl.oclc.org/dsdl/schematron"> <pattern> <title>Date rules</title> <rule context="Contract"> <assert test="ContractDate &lt; current-date()">ContractDate should be in the past because future contracts are not allowed.</assert> </rule> </pattern> </schema> This rule checks that the Contract element has a date before the current date; in Schematron, the assert message is reported when its test evaluates to false. If the assertion fails, validation fails and the error message in the body of the assert element is returned to the user. Implementation Schematron schemas are suitable for use in XML pipelines, thereby allowing workflow process designers to build and maintain rules using XML manipulation tools. The W3C's XProc pipelining language, for example, has native support for Schematron schema processing through its "validate-with-schematron" step. Since Schematron schemas can be transformed into XSLT stylesheets, these can themselves be used in XML pipelines which support XSLT transformation. An Apache Ant task can be used to convert Schematron rules into XSLT files. There are also native Schematron implementations, such as QuiXSchematron, the Java implementation from Innovimax/INRIA, which also supports streaming. See also XML Schema Language comparison - Comparison to other XML Schema languages. Service Modeling Language - Service Modeling Language uses Schematron. Document Schema Definition Languages References External links Academia Sinica Computing Centre's Schematron Home Page A book on Schematron (in German) Schematron online tutorial and reference Data modeling languages ISO/IEC standards XML XML-based programming languages XML-based standards
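As a rough illustration of running Schematron from code, the sketch below uses Python's lxml package, whose isoschematron module compiles a schema to XSLT 1.0. Because XPath 2.0 functions such as current-date() are out of reach there, this hypothetical rule checks only that a ContractDate element is present.

from lxml import etree, isoschematron

SCHEMA = b"""<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="Contract">
      <assert test="ContractDate">A Contract must carry a ContractDate.</assert>
    </rule>
  </pattern>
</schema>"""

# Compile the Schematron schema, then validate a document against it.
schematron = isoschematron.Schematron(etree.fromstring(SCHEMA).getroottree())
doc = etree.fromstring(b"<Contract><ContractDate>2001-01-01</ContractDate></Contract>")
print(schematron.validate(doc))  # True: the asserted element is present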
Schematron
Technology
1,177
30,872,597
https://en.wikipedia.org/wiki/Hypoxia%20%28environmental%29
Hypoxia (hypo: "below", oxia: "oxygenated") refers to low oxygen conditions. Hypoxia is problematic for air-breathing organisms, yet it is essential for many anaerobic organisms. Hypoxia applies to many situations, but usually refers to the atmosphere and natural waters. Atmospheric hypoxia Atmospheric hypoxia occurs naturally at high altitudes. Total atmospheric pressure decreases as altitude increases, causing a lower partial pressure of oxygen, which is defined as hypobaric hypoxia. Oxygen remains at 20.9% of the total gas mixture, differing from hypoxic hypoxia, where the percentage of oxygen in the air (or blood) is decreased. This is common in the sealed burrows of some subterranean animals, such as blesmols. Atmospheric hypoxia is also the basis of altitude training, which is a standard part of training for elite athletes. Several companies mimic hypoxia using normobaric artificial atmospheres. Aquatic hypoxia An aquatic system lacking dissolved oxygen (0% saturation) is termed anaerobic, reducing, or anoxic. Dissolved oxygen in good-quality water is approximately 7 ppm (0.0007%), but levels fluctuate. Many organisms require hypoxic conditions; oxygen is poisonous to anaerobic bacteria, for example. Oxygen depletion is typically expressed as a percentage of the oxygen that would dissolve in the water at the prevailing temperature and salinity. A system with a low concentration, in the range between 1 and 30% saturation, is called hypoxic or dysoxic. Most fish cannot live below 30% saturation, since they rely on oxygen to derive energy from their nutrients. Hypoxia leads to impaired reproduction of remaining fish via endocrine disruption. A "healthy" aquatic environment should seldom experience less than 80% saturation. The exaerobic zone is found at the boundary of the anoxic and hypoxic zones. Hypoxia can occur throughout the water column, at high altitudes, and near sediments on the bottom. It usually extends throughout 20–50% of the water column, but depends on the water depth and the location of pycnoclines (rapid changes in water density with depth). It can occur in 10–80% of the water column. For example, in a 10-meter water column, it can reach up to 2 meters below the surface. In a 20-meter water column, it can extend up to 8 meters below the surface. Seasonal kill Hypolimnetic oxygen depletion can lead to both summer and winter "kills". During summer stratification, inputs of organic matter and sedimentation of primary producers can increase rates of respiration in the hypolimnion. If oxygen depletion becomes extreme, aerobic organisms, like fish, may die, resulting in what is known as a "summer kill". The same phenomenon can occur in the winter, but for different reasons. During winter, ice and snow cover can attenuate light, and therefore reduce rates of photosynthesis. The freezing over of a lake also prevents air-water interactions that allow the exchange of oxygen. This creates a lack of oxygen while respiration continues. When the oxygen becomes badly depleted, aerobic organisms can die, resulting in a "winter kill". Causes of hypoxia Oxygen depletion can result from a number of natural factors, but is most often a concern as a consequence of pollution and eutrophication, in which plant nutrients enter a river, lake, or ocean and phytoplankton blooms are encouraged. While phytoplankton, through photosynthesis, will raise DO saturation during daylight hours, the dense population of a bloom reduces DO saturation during the night by respiration. 
When phytoplankton cells die, they sink towards the bottom and are decomposed by bacteria, a process that further reduces DO in the water column. If oxygen depletion progresses to hypoxia, fish kills can occur, and invertebrates like worms and clams on the bottom may be killed as well. Hypoxia may also occur in the absence of pollutants. In estuaries, for example, because freshwater flowing from a river into the sea is less dense than salt water, stratification in the water column can result. Vertical mixing between the water bodies is therefore reduced, restricting the supply of oxygen from the surface waters to the more saline bottom waters. The oxygen concentration in the bottom layer may then become low enough for hypoxia to occur. Areas particularly prone to this include shallow waters of semi-enclosed water bodies such as the Waddenzee or the Gulf of Mexico, where land run-off is substantial. In these areas a so-called "dead zone" can be created. Low dissolved oxygen conditions are often seasonal, as is the case in Hood Canal and areas of Puget Sound, in Washington State. The World Resources Institute has identified 375 hypoxic coastal zones around the world, concentrated in coastal areas in Western Europe, the Eastern and Southern coasts of the US, and East Asia, particularly in Japan. Hypoxia may also be the explanation for periodic phenomena such as the Mobile Bay jubilee, where aquatic life suddenly rushes to the shallows, perhaps trying to escape oxygen-depleted water. Recent widespread shellfish kills near the coasts of Oregon and Washington are also blamed on cyclic dead zone ecology. Phytoplankton breakdown Phytoplankton are mostly made up of lignin and cellulose, which are broken down by oxidative mechanisms that consume oxygen. Environmental factors The breakdown of phytoplankton in the environment depends on the presence of oxygen; once oxygen is no longer present in a body of water, lignin peroxidases cannot continue to break down the lignin. When oxygen is not present in the water, the time required for the breakdown of phytoplankton changes from 10.7 days to a total of 160 days. The rate of phytoplankton breakdown can be represented using this equation: G(t) = G(0)e^(−kt) In this equation, G(t) is the amount of particulate organic carbon (POC) present at a given time, t. G(0) is the concentration of POC before breakdown takes place, k is a rate constant in yr⁻¹, and t is time in years. For most phytoplankton POC, k is around 12.8 yr⁻¹, corresponding to about 28 days for nearly 96% of the carbon to be broken down in these systems. In anoxic systems, by contrast, POC breakdown takes 125 days, over four times longer. It takes approximately 1 mg of oxygen to break down 1 mg of POC in the environment, so oxygen is consumed quickly as POC is digested and hypoxia develops rapidly. About 9% of the POC in phytoplankton can be broken down in a single day at 18 °C; thus, it takes about eleven days to completely break down phytoplankton. After POC is broken down, this particulate matter can be turned into other dissolved carbon, such as carbon dioxide, bicarbonate ions, and carbonate. As much as 30% of phytoplankton can be broken down into dissolved carbon. When this particulate organic carbon interacts with 350 nm ultraviolet light, dissolved inorganic carbon is formed, removing even more oxygen from the environment in the forms of carbon dioxide, bicarbonate ions, and carbonate. Dissolved inorganic carbon is produced at a rate of 2.3–6.5 mg/(m3⋅day). 
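The first-order form of this equation is straightforward to evaluate. A minimal Python sketch using the oxic rate constant quoted above; the second rate constant is an assumed value included only to contrast slower anoxic breakdown, not a figure from the article:

import math

def poc_remaining_fraction(k_per_year, t_years):
    # G(t)/G(0) = exp(-k * t): fraction of the initial POC still present.
    return math.exp(-k_per_year * t_years)

# Oxic waters (k = 12.8 per year, from the text) versus a slower,
# assumed anoxic rate constant, compared at the same elapsed time.
for label, k in (("oxic, k=12.8/yr", 12.8), ("anoxic (assumed), k=3.0/yr", 3.0)):
    print(label, "remaining after 0.1 yr:", round(poc_remaining_fraction(k, 0.1), 3))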
As phytoplankton break down, free phosphorus and nitrogen become available in the environment, which also fosters hypoxic conditions. As the breakdown of the phytoplankton proceeds, phosphorus is converted into phosphates and nitrogen into nitrates. This depletes the oxygen in the environment even further, creating more hypoxic zones. As more nutrients such as phosphorus and nitrogen are carried into these aquatic systems, the growth of phytoplankton greatly increases, and after the phytoplankton die, hypoxic zones are formed. See also Algal blooms Anoxic event Dead zone (ecology) Cyanobacterial bloom Denitrification Eutrophication Hypoxia in fish Oxygen minimum zone References Sources External links Hypoxia in the Gulf of Mexico Scientific Assessment of Hypoxia in U.S. Coastal Waters Council on Environmental Quality Dead zone in front of Atlantic City Hypoxia in Oregon Waters Aquatic ecology Chemical oceanography Environmental science Water quality indicators Oxygen Endocrine disruptors Limnology
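A compact way to restate the saturation bands used throughout this article is as a classification over percent saturation. A minimal Python sketch; the label for the 30–80% band is an editorial placeholder, since the article names only the anoxic, hypoxic, and healthy regimes:

def oxygen_class(saturation_pct):
    # Thresholds follow the bands described above (0%, 1-30%, >= 80%).
    if saturation_pct <= 0:
        return "anoxic (anaerobic/reducing)"
    if saturation_pct < 30:
        return "hypoxic (dysoxic)"
    if saturation_pct < 80:
        return "reduced oxygen (unnamed band)"
    return "healthy"

for s in (0, 12, 55, 95):
    print(s, "->", oxygen_class(s))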
Hypoxia (environmental)
Chemistry,Biology,Environmental_science
1,829
44,690,496
https://en.wikipedia.org/wiki/Voluntary%20CQC%20Mark%20Certification
The Voluntary CQC Mark Certification is a voluntary product certification for Chinese products or products that are imported to China. The Voluntary CQC Mark Certification can be applied to products which are not in the China Compulsory Certification product catalogue and thus cannot receive a China Compulsory Certificate (CCC Certificate). The CQC Mark guarantees the conformity of the product with the Chinese standards (Guobiao standards) regarding safety, quality, environmental protection and energy efficiency. Products marked with the CQC Mark are less likely to be detained at Chinese customs. In addition, the CQC Mark raises the competitiveness of a product in the Chinese market. The whole certification process is similar to the China Compulsory Certificate (CCC) certification process. Administration The Voluntary CQC Mark Certification is conducted by the China Quality Certification Center, the largest professional certification body, which is sanctioned by the governmental agency CCIC (China Certification & Inspection Group). The CQC is also responsible for the mandated process for manufacturers to receive their CCC certification. The Voluntary CQC Mark Certification product range covers more than 500 products that do not require a mandatory CCC certification. Applicable products The following products can receive a CQC Mark if they do not require a mandatory CCC certification: Electric products and electronic components Household electric appliance accessories Electrical accessories Audio and video apparatus Lighting apparatus and tools (lamps and luminaries) Power tools Small and medium-sized electric machines and accessories Medical instruments/Medical devices Household and similar electrical appliances Machines Commercially used machines Electric wires and cables Low voltage apparatus Automotive and motorbike accessories, e.g. tyres Glass Power system relay protection and automation devices Water pumps Electric meters Low voltage apparatus and accessories High voltage equipment and appliances Generator sets Photovoltaic products Motors Additional CQC Certification for CCC certified wires and cables Test and control instruments Earthmoving machineries and accessories Electric vehicle charging stations and plugs Wind power products Thermal energy products Construction materials Textiles Building products Sanitary products Cement products Office equipment Surge protection Light electric vehicles and accessories Electric cars and accessories Bearing products Restriction of Hazardous Substances (RoHS1 certification) Certification for non-metallic materials and parts School supplies Certification for the restricted use of polycyclic aromatic hydrocarbons (PAHs) Accumulators and batteries Metal welding, cutting and heat treatment equipment Certification process The certification process is similar to the CCC certification process. With good consultancy, the CQC Mark Certification can be received in 4 months. Self-applicants may take up to 8 months to finish the whole certification procedure. The process includes the following steps: Submission of application documents and supporting materials Type Testing. A CNCA-designated test laboratory in China will test product samples Factory Inspection. 
CQC will send representatives to inspect the manufacturing facilities Evaluation of the results Approval of the CQC Certificate (or failure and retesting) Receiving the Voluntary CQC Mark Certification Application for Marking Permission at the CNCA Annual Follow-up Factory Inspections by Chinese officials Follow-Up Certification In order to keep the certification valid, the CQC certificate and the printing permission for the CQC mark must be renewed annually as part of a follow-up certification. Follow-up certifications require a one-day factory inspection. The follow-up procedure is much shorter than the initial certification process, and it is associated with lower costs. Charges Depending on the product, the fees charged for the CQC Mark Certification can vary. The following list gives an overview of the costs: Submission fees and administrative charge Charge for type testing in China Factory inspection fees Travel expenses of the Chinese officials that will be sent to inspect the factory Charges for application of Marking Permission at the CNCA Additional costs: Translator/interpreter fees Product costs for type testing Shipping/mail costs Additional fees when type testing is not successful Costs for optional change/extension of certificate (much lower than initial certification) Benefits of a Voluntary CQC Mark Certification Products marked with a CQC Mark enjoy a high reputation in the Chinese market. The mark shows that the product conforms to the requirements of the Chinese standards in regard to safety, quality, environment and performance. It greatly increases the product's competitiveness in the Chinese and international markets, and it also facilitates smoother access to the domestic market for foreign enterprises' products. The official China Compulsory Certificate product catalogue is continually being extended. This means that a product that did not require the mandatory CCC Certificate before can fall under the new product range. The Voluntary CQC Mark Certification can be changed relatively easily into a mandatory CCC Certificate, which gives the manufacturer a considerable advantage over competitors. See also Common Criteria National Development and Reform Commission Guobiao standards References General references "A Brief Guide to CCC: China Compulsory Certification", Julian Busch, Official CQC website CNCA website Symbols introduced in 2002 Certification marks Economy of China Safety codes Foreign trade of China
Voluntary CQC Mark Certification
Mathematics
999
19,330,337
https://en.wikipedia.org/wiki/Data%20%26%20Knowledge%20Engineering
Data & Knowledge Engineering is a monthly peer-reviewed academic journal in the area of database systems and knowledge base systems. It is published by Elsevier and was established in 1985. The editor-in-chief is P.P. Chen (Louisiana State University). Abstracting and indexing The journal is abstracted and indexed in Current Contents/Engineering, Computing & Technology, Ei Compendex, Inspec, Science Citation Index Expanded, Scopus, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.992. References External links Academic journals established in 1985 Elsevier academic journals English-language journals Monthly journals Knowledge engineering
Data & Knowledge Engineering
Engineering
143
31,675,783
https://en.wikipedia.org/wiki/Taphrina%20bullata
Taphrina bullata is an ascomycete fungus that is a plant pathogen. It causes leaf blisters on pear trees. References Fungal tree pathogens and diseases Pear tree diseases Taphrinomycetes Fungi described in 1866 Taxa named by Miles Joseph Berkeley Taxa named by Christopher Edmund Broome Fungus species
Taphrina bullata
Biology
65
49,479,058
https://en.wikipedia.org/wiki/Long%20intergenic%20non-protein%20coding%20rna%20598
Long intergenic non-protein coding RNA 598 is a long non-coding RNA that in humans is encoded by the LINC00598 gene. References Further reading Proteins
Long intergenic non-protein coding rna 598
Chemistry
33
7,867,732
https://en.wikipedia.org/wiki/Carrot%20seed%20oil
Carrot seed oil is the essential oil extract of the seed from the carrot plant Daucus carota. The oil has a woody, earthy, sweet smell and is yellow or amber-coloured to pale orange-brown in appearance. The pharmacologically active constituents of carrot seed extract are three flavones: luteolin, luteolin 3'-O-beta-D-glucopyranoside, and luteolin 4'-O-beta-D-glucopyranoside. Rather than the extract, the distilled (ethereal) oil is used in perfumery and food aromatization. The main constituent of this oil is carotol. Pressed carrot seed oil is extracted by cold-pressing the seeds of the carrot plant. The properties of pressed carrot seed oil are quite different from those of the essential oil. References Essential oils Vegetable oils Carrot
Carrot seed oil
Chemistry
184
23,091,839
https://en.wikipedia.org/wiki/Axle%20track
In automobiles (and other wheeled vehicles which have two wheels on an axle), the axle track is the distance between the hub flanges on an axle. Wheel track, track width or simply track refers to the distance between the centerlines of two wheels on the same axle. In the case of an axle with dual wheels, the centerline of the dual wheel assembly is used for the wheel track specification. Axle and wheel track are commonly measured in millimetres or inches. Common usage Despite their distinct definitions, axle track (frequently, though incorrectly, used interchangeably with wheel track and track width) normally refers to the distance between the centerlines of the wheels. For a vehicle with two axles, the term can be expressed as front track and rear track. For a vehicle with more than two axles, the axles are normally numbered for reference. Offset wheel track In vehicles with offset wheels, wheel track is distinct from axle track because the centerline of the wheel is not flush with the hub flange. If wheels of a different offset are fitted, the wheel track changes but the axle track does not. Railroad context In the railroad industry, the term "axle track" is not used; the same concept is called "flange gauge" or "wheel gauge". It is measured on a wheelset of a railroad car or tram from one wheel flange reference line to the reference line of the other wheel. It must be compatible with the "track gauge" – the distance between the facing edges of the running rails – of the network it runs on. The limits on the difference between the two gauges usually fall between about 9 and 35 mm. Model railroads Model railway elements such as track, rolling stock and locomotives are categorised by their wheel or track gauge. An HO scale or OO gauge model locomotive, for example, has a wheel gauge of 16.5mm. See also Track gauge – determines the distance between the reference lines of the rails Wheelbase – the distance between the front and rear axles Wheelset References Automotive engineering Track gauges Train wheels
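The offset relationship described under "Offset wheel track" can be written as a tiny formula: each wheel's centerline sits away from its hub flange by the wheel offset, so fitting identical wheels changes the track by twice any offset change. A Python sketch under an assumed sign convention (positive offset moves the centerline inboard), with made-up numbers:

def wheel_track_mm(axle_track_mm, offset_mm):
    # Identical wheels on both ends: each centerline moves inboard by the
    # offset, so the wheel track shrinks by twice the offset.
    return axle_track_mm - 2.0 * offset_mm

# Fitting wheels with 10 mm more offset narrows the wheel track by 20 mm.
print(wheel_track_mm(1500.0, 35.0) - wheel_track_mm(1500.0, 45.0))  # 20.0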
Axle track
Engineering
429
64,039,598
https://en.wikipedia.org/wiki/Iodine%20azide
Iodine azide (IN3) is an explosive inorganic compound, which in ordinary conditions is a yellow solid. Formally, it is an inter-pseudohalogen. Preparation Iodine azide can be prepared from the reaction between silver azide and elemental iodine: AgN3 + I2 → IN3 + AgI Since silver azide can only be handled safely while moist, but even small traces of water cause the iodine azide to decompose, this synthesis is done by suspending the silver azide in dichloromethane and adding a drying agent before reaction with the iodine. In this way, a pure solution of iodine azide results, which can then be carefully evaporated to form needle-shaped golden crystals. This reaction was used in the original synthesis of iodine azide in 1900, where it was obtained as unstable solutions in ether and impure crystals contaminated by iodine. Iodine azide can also be generated in situ by reacting iodine monochloride and sodium azide under conditions where it is not explosive. Properties In the solid state, iodine azide exists as a one-dimensional polymeric structure, forming two polymorphs, both of which crystallize in an orthorhombic lattice with the space group Pbam. The gas phase exists as monomeric units. Iodine azide exhibits both high reactivity and comparative stability, consequences of the polarity of the I–N bond. The group introduced by substitution with iodine azide can frequently undergo subsequent reactions due to its high energy content. The isolated compound is strongly shock- and friction-sensitive; its measured sensitivity values lie significantly below those of classical explosives like TNT or RDX, and also below those of acetone peroxide. Dilute solutions (< 3%) of the compound in dichloromethane can be handled safely. Uses Despite its explosive character, iodine azide has many practical uses in chemical synthesis. Similar to bromine azide, it can add across an alkene double bond via both ionic and radical mechanisms, giving anti stereoselectivity. Addition of IN3 to an alkene followed by reduction with lithium aluminium hydride is a convenient method of aziridine synthesis. Azirines can also be synthesized from the addition product by adding base to eliminate HI, giving a vinyl azide which undergoes thermolysis to form an azirine. Further radical modes of reactivity include radical substitutions on weak C–H bonds to form α-azido ethers, benzal acetals, and aldehydes, and the conversion of aldehydes to acyl azides. External links Raman spectrum of iodine azide References Iodine compounds Azido compounds Explosive polymers Inorganic polymers Pseudohalogens
Iodine azide
Chemistry
567
1,683,605
https://en.wikipedia.org/wiki/Reckless%20burning
Reckless burning is a crime that involves illegally setting fire to something not of building proportions, such as leaves or trash. It is a lesser charge than arson. It is usually enacted and levied in areas of high fire risk to prevent people from starting fires that could easily get out of control. See also Vandalism Arson Pyromania References Arson Fire
Reckless burning
Chemistry
73
10,686,369
https://en.wikipedia.org/wiki/Species%20at%20Risk%20Act
The Species at Risk Act (French: Loi sur les espèces en péril; SARA) is a piece of Canadian federal legislation which became law in Canada on December 12, 2002. It is designed to meet one of Canada's key commitments under the International Convention on Biological Diversity. The goal of the Act is to prevent wildlife species in Canada from disappearing by protecting endangered or threatened organisms and their habitats. It also manages species which are not yet threatened, but whose existence or habitat is in jeopardy. SARA defines a method to determine the steps that need to be taken in order to help protect existing relatively healthy environments, as well as recover threatened habitats, although timing and implementation of recovery plans have limitations. It identifies ways in which governments, organizations, and individuals can work together to preserve species at risk and establishes penalties for failure to obey the law. The Act designates COSEWIC, an independent committee of wildlife experts and scientists, to identify threatened species and assess their conservation status. COSEWIC then issues a report to the government, and the Minister of the Environment evaluates the committee's recommendations when considering whether to add a species to Schedule 1, the official List of Wildlife Species at Risk, or to change its status. The Minister forwards the assessments to the Governor in Council, and it is the Cabinet that ultimately decides whether a species is added to the list. If a species is listed as extirpated, endangered, or threatened, SARA requires that a Recovery Strategy be prepared by the federal government, in consultation with the relevant provinces and territories, wildlife management boards, and Indigenous organizations. The Recovery Strategy describes the major threats to the species and its habitat, identifies population objectives, and states in broad terms what will need to be done to stop or reverse the species' decline. Proposed Recovery Strategies are posted on the Species at Risk Public Registry, after which public comments are accepted, generally for 60 days. The recovery strategy must be finalized within 30 days of the end of the public comment period. Recent controversies In July 2016, the Government of Canada issued an emergency order to stop the development of a 2 km2 area on the South Shore (Montreal), Quebec to protect the Western Chorus Frog, which by 2009 had seen a 90% decrease in its historical range. This action was opposed by the Government of Quebec, which perceived it as overstepping provincial jurisdiction. The emergency order stopped the development of 171 new residences that had been approved by the local municipalities and by the Ministry of Sustainable Development, Environment and Parks (Quebec). Some 1,000 residences are still permitted to be constructed. The original approved plan included 35.5 hectares to be retained for Western Chorus Frog habitat, breeding ponds, and a conservation area; 87 hectares will now be set aside. On March 31, 2022, the Government of Canada moved to modernize the Species at Risk Act by removing outdated provisions. Bill S-6 proposes amendments to 29 statutes, including the Species at Risk Act. 
See also List of Wildlife Species at Risk (Canada) References Further reading Link To Copy of The Act SARA Homepage Committee on the Status of Endangered Wildlife in Canada website Species at Risk website Nature conservation in Canada Wildlife conservation in Canada Environmental law in Canada 2002 in Canadian law 2002 in the environment Convention on Biological Diversity Canadian federal legislation
Species at Risk Act
Biology
669
5,177,332
https://en.wikipedia.org/wiki/Beta%20helix
A beta helix is a tandem protein repeat structure formed by the association of parallel beta strands in a helical pattern with either two or three faces. The beta helix is a type of solenoid protein domain. The structure is stabilized by inter-strand hydrogen bonds, protein–protein interactions, and sometimes bound metal ions. Both left- and right-handed beta helices have been identified. These structures are distinct from jelly-roll folds, a different protein structure sometimes known as a "double-stranded beta helix". The first beta helix was observed in the enzyme pectate lyase, which contains a seven-turn helix that is 34 Å (3.4 nm) long. The P22 phage tailspike protein, a component of the P22 bacteriophage, has 13 turns and in its assembled homotrimer is 200 Å (20 nm) in length. Its interior is close-packed with no central pore and contains both hydrophobic residues and charged residues neutralized by salt bridges. Both pectate lyase and the P22 tailspike protein contain right-handed helices; left-handed versions have been observed in enzymes such as UDP-N-acetylglucosamine acyltransferase and archaeal carbonic anhydrase. Other proteins that contain beta helices include the antifreeze proteins from the beetle Tenebrio molitor (right-handed) and from the spruce budworm, Choristoneura fumiferana (left-handed), where regularly spaced threonines on the β-helices bind to the surface of ice crystals and inhibit their growth. Beta helices can associate with each other effectively, either face-to-face (mating the faces of their triangular prisms) or end-to-end (forming hydrogen bonds). Hence, β-helices can be used as "tags" to induce other proteins to associate, similar to coiled-coil segments. Members of the pentapeptide repeat family have been shown to possess a quadrilateral beta-helix structure. References External links SCOP family of right-handed β-helices SCOP family of left-handed β-helices CATH β-helix protein family Protein folds Helices
Beta helix
Biology
469
49,436,429
https://en.wikipedia.org/wiki/Baghir%20A.%20Suleimanov
Baghir A. Suleimanov is a petroleum scientist, Doctor of Technical Sciences, Professor, and Corresponding Member of the Azerbaijan National Academy of Sciences. Early life and career Suleimanov was born in Baku on June 22, 1959. He graduated from the Azerbaijan State Oil and Industry University. He received his PhD in 1987, his DSc in 1997, and his professorship in 2011. In 2014, he was elected a corresponding member of the Azerbaijan National Academy of Sciences (ANAS) in the specialization of "Development of oil and gas fields," and in 2019, he was elected a foreign member of the Russian Academy of Natural Sciences. He belongs to the scientific school of academician Azad Mirzajanzade. From 1981 to 1985, B. Suleimanov worked as an oil and gas production operator and engineer in the Oil and Gas Production Unit named after N. Narimanov. He worked as an assistant in the Azerbaijan Oil and Chemical Institute's "Development and exploitation of oil fields" department from 1985 to 1988, and as a senior, lead, and head scientist in the ANAS Institute of Mathematics and Mechanics' "Nonlinear mechanics of oil and gas" department from 1988 to 2000. Since 2000, he has worked at the SOCAR «OilGasScientificResearchProject» Institute as director and deputy director; since 2009 he has served as deputy director for oil and gas production at the same Institute. The construction of a scientific and experimental basis for the use of heterogeneous systems in the development of oil fields, as well as the development of novel methodologies, technologies, and chemical compositions for oil recovery, is the major focus of B. Suleimanov's research activities. B. Suleimanov has written 310 scientific publications, including 118 patents, two monographs, and four textbooks. He has mentored 24 PhD students and 6 DSc students. Scientific activity In the development of oil fields, based on the physical and mathematical modeling of the filtration of heterogeneous systems in non-homogeneous porous media, novel technologies for increasing oil recovery, bottom-hole stimulation and water shut-off have been developed and successfully applied; The existence of the S-shaped filtration law during the flow of non-Newtonian fluids in non-homogeneous layers was established and justified experimentally and theoretically; As a result of a comprehensive study of the stationary and non-stationary filtration of gassed Newtonian and non-Newtonian fluids and gas under pre-critical phase conditions, the theory of flow of the studied systems based on the slip effect in the porous medium was developed; As a result of the statistical modeling of the life cycle of oil fields, a methodology for dividing the development process into stages and determining the maximum production and recoverable resources was developed; Based on the use of fractal and multifractal dimensions and Fisher–Shannon indicators, new methods of oil field development analysis were developed and applied; Methods of selecting candidate wells for the application of various geological-technical measures have been developed based on mathematical modeling; As a result of modeling various technological processes in oil production wells, new types of well equipment - submersible pumps, workover equipment, sand screens, etc. - have been developed and successfully applied; Various oil industry chemicals, such as demulsifiers, depressants, surfactants, and polymer compositions, were created and effectively employed. 
Scientific achievements Ranked among the World's Top 2% Scientists, the prestigious list of the world's most influential people in science; h-index of 28 and i10-index of 64, based on citations of his scientific works; More than 500 thousand tons of oil and gas (in oil equivalent) were produced as a result of the application of the developed techniques and technologies. Membership in scientific and engineering institutions Corresponding member of ANAS; Member of the Presidium of the Azerbaijan Supreme Attestation Commission under the President of the Azerbaijan Republic; Foreign member of the Russian Academy of Natural Sciences, "Oil and gas" section; Member of SOCAR's central commission on oil and gas resources; Member of SOCAR's central commission for the development of oil, gas and gas-condensate fields; Deputy Chairman of the Scientific Council of the "OilGasScientificResearchProject" Institute, SOCAR; Chairman of the "Development, exploitation and drilling of wells" section of the Scientific Council of the "OilGasScientificResearchProject" Institute, SOCAR; Member of the Scientific Council of the "Oil and Gas" Institute of ANAS; Member of the Society of Petroleum Engineers (SPE). Activities in scientific and technical publications Editor-in-chief of the "Scientific Petroleum" journal; Deputy Editor-in-chief of the "SOCAR Proceedings" journal; Deputy Editor-in-chief of the "ANAS Transactions. Earth Sciences" journal; Member of the editorial board of the "Azerbaijan Oil Industry" journal; Member of the editorial board of the "Territoriya Nefteqaz" (Russia) scientific-practical journal; Member of the editorial board of the journal "Bulletin of the Russian Academy of Natural Sciences" (Russia); Member of the editorial board of the "Vesti gazovoy nauki" (Russia) scientific and technical publications; Reviewer for the Journal of Petroleum Science & Engineering; Results in Engineering; Petroleum Science & Technology; International Journal of Oil, Gas and Coal Technology; Physics of Fluids; Industrial & Engineering Chemistry Research; Energy & Fuels; Fuels; Colloids and Surfaces A: Physicochemical and Engineering Aspects; Journal of the Chemical Society of Pakistan; RSC Advances; Journal of Molecular Liquids; Heliyon; ACS Omega; and Colloid Journal. Awards and prizes 2024 — "Record in Citation" diploma awarded by the Azerbaijan National Academy of Sciences for the highest number of citations of his scientific works; 2020 — Azerbaijan State Oil and Industry University 100th anniversary (1920-2020) jubilee medal; 2019 — "Honorary oilman" breastplate of SOCAR; 2018 — "Academician Azad Mirzajanzade international silver medal" of the Russian Academy of Natural Sciences; 2018 — Winner of the fifth Republic competition in innovation; 2017 — Winner of the fourth Republic competition in innovation; 2017 — Medal for services to Ivano-Frankivsk National Technical University of Oil and Gas; 2016 — Winner of the third Republic competition in innovation; 2016 — "Taraggi" medal by order of the President of the Azerbaijan Republic; 2013 — Winner of the first Republic competition in innovation; 2009 — Honorable diploma of SOCAR for special achievements in the development of the oil and gas industry of the Azerbaijan Republic; 2004 — Honorable diploma of SOCAR for special achievements in the development of the oil and gas industry of the Azerbaijan Republic; 1985 — "Inventor of the USSR" breastplate. Selected publications Books B. A. Suleimanov, E. F. Veliyev, A. A. Aliyev. Oil and gas well cementing for engineers. — UK: John Wiley & Sons Ltd., 2023. — 272p. Baghir A. 
Suleimanov, Elchin F. Veliyev, Vladimir Vishnyakov. Nanocolloids for petroleum engineering: Fundamentals and practices. — UK: John Wiley & Sons Ltd., 2022. — 288p. Print , Online B. A. Suleimanov, E. F. Veliyev, A. D. Shavgenov. Well cementing: Fundamentals and practices. Series: Modern petroleum and gas technologies. - Moscow, Institute of Computer Science - 2022, 292p. [in Russian] B. A. Suleimanov. Enhanced oil recovery: Fundamentals and practices. Series: Modern petroleum and gas technologies. - Moscow, Institute of Computer Science - 2022, 286p. [in Russian] Vladimir Vishnyakov, Baghir Suleimanov, Ahmad Salmanov, Eldar Zeynalov. Primer on enhanced oil recovery. 1st Edition. Gulf Professional Publishing, Elsevier Inc., 2019. — 223 p. Paperback , eBook B. A. Suleimanov. Specific features of heterogeneous system filtration. Series: Modern petroleum and gas technologies. - Moscow, Institute of Computer Science - 2006, 356p. [in Russian] Articles Suleimanov B. A., Abbasov H. F. Gasified acid solution in pre-transition state for well stimulation // Journal of Dispersion Science and Technology. — Received 21 Aug 2024, Accepted 26 Dec 2024, Published online: 10 Jan 2025. — doi: 10.1080/01932691.2024.2448758; Suleimanov B. A., Abbasov E. M. Predicting of water breakthrough time into the well on pressure build-up curves // Applied and Computational Mathematics. - 2024. - Vol.23, N. 2. - P. 219-227. - doi: 10.30546/1683-6154.23.2.2024.219; Suleimanov B. A., Feyzullayev Kh. A. Simulation study of water shut-off treatment for heterogeneous layered oil reservoirs // Journal of Dispersion Science and Technology. - 2024. - P. 1-11. - doi: 10.1080/01932691.2024.2338361; Suleimanov B. A., Abbasov H. F. Wettability alteration of quartz sand using Z-type Langmuir–Blodgett hydrophobic films // Physics of Fluids. - 2024. - Vol. 36. - P. 034118. - doi: 10.1063/5.0196917; Baghir A. Suleimanov, Sabina J. Rzayeva, Aygun F. Akberova and Ulviyya T. Akhmedova. Self-foamed biosystem for deep reservoir conformance control // Petroleum Science and Technology, 2022, Vol. 40, No. 20, 2450–2467; B. A. Suleimanov, S. J. Rzayeva and U. T. Akhmedova. Self-gasified biosystems for enhanced oil recovery // International Journal of Modern Physics B, 2021, Vol. 35, No. 27, 2150274; B. A. Suleimanov, E. F. Veliyev and N. V. Naghiyeva. Colloidal dispersion gels for in-depth permeability modification // Modern Physics Letters B, 2021, Vol. 35, No. 1, 2150038; Baghir A. Suleimanov, Elchin F. Veliyev, Aliyev A. Azizagha. Colloidal dispersion nanogels for in-situ fluid diversion // Journal of Petroleum Science and Engineering, 2020, Vol. 193, No. 10, 107411; Rayyat Huseyn Ismayilov, Fuad Famil Valiyev, Nizami Vali Israfilov, Wen-Zhen Wang, Gene-Hsiang Lee, Shie-Ming Peng, Baghir A. Suleimanov. Long chain defective metal string complex with modulated oligo-α-pyridylamino ligand: Synthesis, crystal structure and properties // Journal of Molecular Structure, 2020, Vol. 1200, 126998; Baghir A. Suleimanov, Khasay A. Feyzullayev. Numerical simulation of water shut-off for heterogeneous composite oil reservoirs // SPE-198388-MS. SPE Annual Caspian Technical Conference held in Baku, Azerbaijan, 16 – 18 October 2019; Baghir A. Suleimanov, Khasay A. Feyzullayev, Elhan M. Abbasov. Numerical simulation of water shut-off performance for heterogeneous composite oil reservoirs // Applied and Computational Mathematics, 2019, Vol. 18, No. 3, 261–271; B. A. Suleimanov, N. I. Guseinova. 
Analyzing the state of oil field development based on the Fisher and Shannon information measures. Automation and Remote Control, 2019, Vol. 80, 882–896; Baghir A. Suleimanov, Hakim F. Abbasov, Fuad F. Valiyev, Rayyat H. Ismayilov, Shie-Ming Peng. Thermal-conductivity enhancement of microfluids with Ni3(μ3-ppza)4Cl2 metal string complex particles // ASME. Journal of Heat Transfer, 2019, Vol. 141, 012404; Rayyat Huseyn Ismayilov, Fuad Famil Valiyev, Dilgam Babir Tagiyev, You Song, Nizami Vali Israfilov, Wen-Zhen Wang, Gene-Hsiang Lee, Shie-Ming Peng, Baghir A. Suleimanov. Linear pentanuclear nickel(II) and tetranuclear copper(II) complexes with pyrazine-modulated tripyridyldiamine ligand: Synthesis, structure and properties // Inorganica Chimica Acta, 2018, Vol. 483, 386-391; Baghir A. Suleimanov, Arif A. Suleymanov, Elkhan M. Abbasov, Erlan T. Baspayev. A mechanism for generating the gas slippage effect near the dewpoint pressure in a porous media gas condensate flow // Journal of Natural Gas Science and Engineering, 2018, Vol. 53, 237–248; Baghir A. Suleimanov, Yashar A. Latifov, Elchin F. Veliyev, Harry Frampton. Comparative analysis of the EOR mechanisms by using low salinity and low hardness alkaline water // Journal of Petroleum Science and Engineering, 2018, Vol. 162, 35–43; Baghir A. Suleimanov, Naida I. Guseynova, Sabina C. Rzayeva, Gulnar D. Tulesheva. Experience of acidizing injection wells for enhanced oil recovery at the Zhetybai field (Kazakhstan) // SPE-189028-MS. SPE Annual Caspian Technical Conference and Exhibition, At Baku, Azerbaijan, 1-3 November, 2017; Baghir A. Suleimanov, Naida I. Guseynova, Elchin F. Veliyev. Control of displacement front uniformity by fractal dimensions // SPE-187784-MS. SPE Russian Petroleum Technology Conference, Moscow, Russia, 16-18 October 2017; Baghir A. Suleimanov, Elchin F. Veliyev. Novel polymeric nanogel as diversion agent for enhanced oil recovery // Petroleum Science and Technology, 2017, Vol. 35, No. 4, 319–326; Baghir A. Suleimanov, Elkhan M. Abbasov, and Marziya R. Sisenbayeva. Mechanism of gas saturated oil viscosity anomaly near to phase transition point // Physics of Fluids, 2017, Vol. 29, 012106; Baghir A. Suleimanov, Rayyat H. Ismayilov, Hakim F. Abbasov, Wen-Zhen Wang, Shie-Ming Peng. Thermophysical properties of nano- and microfluids with [Ni5(μ5-pppmda)4Cl2] metal string complex particles. // Colloids and Surfaces A: Physicochemical and Engineering Aspects, 2017, Vol. 513, 41–50; Baghir A. Suleimanov, Hakim F. Abbasov. Chemical control of quartz suspensions aggregative stability // Journal of Dispersion Science and Technology, 2017, Vol. 38, No. 08, 1103–1109; Rayyat Huseyn Ismayilov, Wen-Zhen Wang, Gene-Hsiang Lee, Shie-Ming Peng, Baghir A. Suleimanov. Synthesis, crystal structure and properties of a pyrimidine modulated tripyridyldiamino ligand and its complexes // Polyhedron, 2017, Vol. 122, 203-209; B. A. Suleimanov, E. F. Veliyev. Nanogels for deep reservoir conformance control // SPE-182534-MS. SPE Annual Caspian Technical Conference & Exhibition, 1-3 November 2016, Astana, Kazakhstan; B. A. Suleimanov, F. S. Ismailov, O. A. Dyshin, E. F. Veliyev. Screening evaluation of EOR methods based on fuzzy logic and Bayesian inference mechanisms // SPE-182044-MS. SPE Russian Petroleum Technology Conference and Exhibition, 24-26 October 2016, Moscow, Russia; B. A. Suleimanov, O. A. Dyshin, E. F. Veliyev. Compressive strength of polymer nanogels used for enhanced oil recovery (EOR) // SPE-181960-MS. 
SPE Russian Petroleum Technology Conference and Exhibition, 24-26 October 2016, Moscow, Russia; B. A. Suleimanov, H. F. Abbasov. Effect of copper nanoparticle aggregation on the thermal conductivity of nanofluids // Russian Journal of Physical Chemistry A, 2016, Vol. 90, 420–428; B. A. Suleimanov, E. F. Veliyev & O. A. Dyshin. Effect of nanoparticles on the compressive strength of polymer gels used for enhanced oil recovery (EOR) // Petroleum Science and Technology, 2015, Vol. 33, No. 10, 1133–1140; Baghir Alekper Suleimanov, Fakhreddin Sattar Ismailov, Oleq Aleksandrovich Dyshin, Svetlana Sirlibayevna Keldibayeva. Statistical modeling of life cycle of oil reservoir development // Journal of the Japan Petroleum Institute, 2014, Vol. 57, No. 1, 47–57; Suleimanov B. A., Dyshin O. A. Application of discrete wavelet transform to the solution of boundary value problems for quasi-linear parabolic equations // Applied Mathematics and Computation. - 2013. - Vol. 219, No. 12. - P. 7036–7047. - doi: 10.1016/j.amc.2012.11.033; B. A. Suleimanov. Mechanism of slip effect in gassed liquid flow // Colloid Journal, 2011, Vol. 73, No. 6, 846–855; B. A. Suleimanov, F. S. Ismailov, E. F. Veliyev. Nanofluid for enhanced oil recovery // Journal of Petroleum Science and Engineering, 2011, Vol. 78, 431–437; E. M. Abbasov, O. A. Dyshin, B. A. Suleimanov. Wavelet method for solving the unsteady porous-medium flow problem with discontinuous coefficients // Computational Mathematics and Mathematical Physics, 2008, Vol. 48, 2194–2210; B. A. Suleimanov, E. M. Abbasov, A. O. Efendieva. Stationary filtration in a fractal inhomogeneous porous medium // Journal of Engineering Physics and Thermophysics, 2005, Vol. 78, No. 4, 832–834; B. A. Suleimanov. On the effect of interaction between dispersed phase particles on the rheology of fractally heterogeneous disperse systems // Colloid Journal, 2004, Vol. 66, No. 2, 249–252; B. A. Suleimanov, Kh. F. Azizov, E. M. Abbasov. Slippage effect during gassed oil displacement // Energy Sources, 1996, Vol. 18, No. 7, 773–779. Presentations He has participated as a speaker in international forums, conferences and symposia held in the USA, Great Britain, Turkey, Russia, Belarus, Ukraine, Kazakhstan, Uzbekistan, and other countries. Academic editorship Fakhreddin S. Ismailov, Ahmad M. Salmanov, Bakir I. Maharramov. Oil-gas fields and prospective structures of Azerbaijan. Lesser Caucasus-Talysh oil and gas province. Book 3. Referencebook / Scientific editor: Baghir A. Suleimanov. - Baku: «MSV» publishing house, 2024. - 668 p.; ISBN 978-9952-37-978-5. Fakhreddin S. Ismailov, Ahmad M. Salmanov, Bakir I. Maharramov, Hafiz I. Shakarov. Oil-gas fields and prospective structures of Azerbaijan. Area of Caspian Sea. Book 2. Referencebook / Scientific editor: Baghir A. Suleimanov. - Baku: «MSV» publishing house, 2024. - 648 p.; ISBN 978-9952-37-978-5. Fakhreddin S. Ismailov, Ahmad M. Salmanov, Bakir I. Maharramov. Oil-gas fields and prospective structures of Azerbaijan. Qusar-Devechi, Khizi & Absheron OGR. Book 1. Referencebook / Scientific editor: Baghir A. Suleimanov. - Baku: «Mars Print» publishing house, 2023. - 776 p.; ISBN 978-9952-37-978-5. Ahmad M. Salmanov, Bakir I. Maharramov, Elmir Sh. Qaragezov, Nizami S. Kerimov. Geology of oil-gas fields and development stages of Azerbaijan area of Caspian sea. Referencebook / Scientific editor: Baghir A. Suleimanov. - Baku: «MSV» publishing house, 2023. - 508 p. ISBN 978-9952-39-083-4 Ahmad M. Salmanov, Bakir I. Maharramov, Elmir Sh. Qaragezov, Nizami S. 
Kerimov. Geology of oil-gas fields and development stages in onshore territory of Azerbaijan. Referencebook / Scientific editor: Baghir A. Suleimanov. - Baku: «MSV» publishing house, 2023. - 624 p.; ISBN 978-9952-39-147-3. Sh. Z. Ismailov, A. A. Suleymanov, I. N. Aliyev, F. F. Ahmed. Features of developing and exploiting hydrocarbon fields by surface and subsea field complexes: textbook / Scientific editor: Baghir A. Suleimanov. Ministry of Science and Education of the Republic of Azerbaijan, Azerbaijan State Oil and Industrial University. — Baku: Elm, 2023, 217 p. He is the scientific editor of the New Bibliography of Academician Azad Mirzajanzade and 4 volumes of Selected Works, published by the ANAS "Elm" publishing house in connection with the implementation of the Decree of the President of the Azerbaijan Republic on the 90th anniversary of Academician Azad Mirzajanzade. Azad Khalil oglu Mirzajanzade: biobibliographic index / Scientific editor: B. A. Suleimanov. - Baku: Elm, 2018. - 216 p. ISBN 978-9952-514-50-6 A.Kh. Mirzajanzade. Selected works. Volume I / Scientific ed. B. A. Suleimanov. - Baku: Elm, 2018. - 658 p. ISBN 978-9952-514-54-4 A.Kh. Mirzajanzade. Selected works. Volume II / Scientific ed. B. A. Suleimanov. - Baku: Elm, 2018. - 574 p. ISBN 978-9952-514-89-6 References External links Official website ANAS Suleimanov Baghir Alekper oglu Researchgate.net Baghir Suleimanov "OilGasScientificResearchProject" Institute, SOCAR 1959 births Living people Academic staff of Azerbaijan State Oil and Industry University Azerbaijani academics Azerbaijani engineers Petroleum engineers
Baghir A. Suleimanov
Engineering
5,088
77,310,453
https://en.wikipedia.org/wiki/Single-pixel%20imaging
Single-pixel imaging is a computational imaging technique for producing spatially-resolved images using a single detector instead of an array of detectors (as in conventional camera sensors). A device that implements such an imaging scheme is called a single-pixel camera. Combined with compressed sensing, the single-pixel camera can recover images from fewer measurements than the number of reconstructed pixels. Single-pixel imaging differs from raster scanning in that multiple parts of the scene are imaged at the same time, in a wide-field fashion, by using a sequence of mask patterns either in the illumination or in the detection stage. A spatial light modulator (such as a digital micromirror device) is often used for this purpose. Single-pixel cameras were developed to be simpler, smaller, and cheaper alternatives to conventional, silicon-based digital cameras, with the ability to also image a broader spectral range. Since then, the technique has been adapted and demonstrated to be suitable for numerous applications in microscopy, tomography, holography, ultrafast imaging, FLIM and remote sensing. History The origins of single-pixel imaging can be traced back to the development of dual photography and compressed sensing in the mid-2000s. The seminal paper written by Duarte et al. in 2008 at Rice University concretised the foundations of the single-pixel imaging technique. It also presented a detailed comparison of different scanning and imaging modalities in existence at that time. These developments were also one of the earliest applications of the digital micromirror device (DMD), developed by Texas Instruments for their DLP projection technology, for structured light detection. Soon, the technique was extended to computational ghost imaging, terahertz imaging, and 3D imaging. Systems based on structured detection were often termed single-pixel cameras, whereas those based on structured illumination were often referred to as computational ghost imaging. By using pulsed lasers as the light source, single-pixel imaging was applied for time-of-flight measurements used in depth-mapping LiDAR applications. Apart from the DMD, other light modulation schemes, using liquid crystals and LED arrays, were also tried. In the early 2010s, single-pixel imaging was exploited in fluorescence microscopy, for imaging biological samples. Coupled with the technique of time-correlated single photon counting (TCSPC), the use of single-pixel imaging for compressive fluorescence lifetime imaging microscopy (FLIM) has also been explored. Since the late 2010s, machine learning techniques, especially deep learning, have been increasingly used to optimise the illumination, detection, or reconstruction strategies of single-pixel imaging. Principles Theory In conventional sampling, digital data acquisition involves uniformly sampling discrete points of an analog signal at or above the Nyquist rate. For example, in a digital camera, the sampling is done with a 2-D array of $N$ pixelated detectors on a CCD or CMOS sensor ($N$ is usually millions in consumer digital cameras). Such a sample can be represented using the vector $x$ with elements $x[n]$. A vector $x$ can be expressed in terms of the coefficients of an orthonormal basis expansion, $x = \sum_{i=1}^{N} s_i \psi_i$, where $s_i$ are the coefficients and $\psi_i$ are the basis vectors. Or, more compactly: $x = \Psi s$, where $\Psi$ is the $N \times N$ basis matrix formed by stacking the $\psi_i$. It is often possible to find a basis in which the coefficient vector $s$ is sparse (with only $K \ll N$ non-zero coefficients) or $r$-compressible (the sorted coefficients decay as a power law). 
This is the principle behind compression standards like JPEG and JPEG-2000, which exploit the fact that natural images tend to be compressible in the DCT and wavelet bases. Compressed sensing aims to bypass the conventional "sample-then-compress" framework by directly acquiring a condensed representation with $M < N$ linear measurements. Similar to the previous step, this can be represented mathematically as $y = \Phi x$, where $y$ is an $M \times 1$ vector and $\Phi$ is the $M \times N$ measurement matrix. This so-called under-determined measurement makes the inverse problem ill-posed, and in general unsolvable. However, compressed sensing exploits the fact that with the proper design of $\Phi$, the compressible signal can be exactly or approximately recovered using computational methods. It has been shown that incoherence between the bases $\Phi$ and $\Psi$ (along with the existence of sparsity in $s$) is sufficient for such a scheme to work. Popular choices of $\Phi$ are random matrices or random subsets of basis vectors from Fourier, Walsh–Hadamard or Noiselet bases. It has also been shown that the optimisation given by $\hat{s} = \arg\min_{s'} \|s'\|_1$ subject to $\Phi \Psi s' = y$ works better to retrieve the signal from the random measurements $y$ than other common methods like least-squares minimisation. An improvement to the optimisation algorithm, based on total-variation minimisation, is especially useful for reconstructing images directly in the pixel basis. Single-pixel camera The single-pixel camera is an optical computer that implements the compressed sensing measurement architecture described above. It works by sequentially measuring the inner products $y_m = \langle x, \phi_m \rangle$ between the image $x$ and the set of 2-D test functions $\phi_m$, to compute the measurement vector $y$. In a typical setup, it consists of two main components: a spatial light modulator (SLM) and a single-pixel detector. The light from a wide-field source is collimated and projected onto the scene, and the reflected/transmitted light is focussed on to the detector with lenses. The SLM is used to realise the test functions $\phi_m$, often as binary pattern masks, and to introduce them either in the illumination or in the detection path. The detector integrates and converts the light signal into an output voltage, which is then digitised by an A/D converter and analysed by a computer. Rows from a randomly permuted (for incoherence) Walsh–Hadamard matrix, reshaped into square patterns, are commonly used as binary test functions in single-pixel imaging. To obtain both positive and negative values (±1 in this case), the mean light intensity can be subtracted from each measurement, since the SLM can produce only binary patterns with 0 (off) and 1 (on) conditions. An alternative is to split the positive and negative elements into two sets, measure both with the negative set inverted (i.e., -1 replaced with +1), and subtract the measurements in the end (a minimal simulation of this differential scheme is sketched below). Values between 0 and 1 can be obtained by dithering the DMD micromirrors during the detector's integration time. Examples of commonly used detectors include photomultiplier tubes, avalanche photodiodes, or hybrid photomultipliers (a sandwich of layers of photon amplification stages). A spectrometer can also be used for multispectral imaging, along with an array of detectors, one for each spectral channel. Another common addition is a time-correlated single photon counting (TCSPC) board to process the detector output, which, coupled with a pulsed laser, enables lifetime measurement and is useful in biomedical imaging. 
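The differential trick described above is easy to simulate. The following is a minimal sketch, not any published implementation: it models an 8×8 scene as a flattened vector, uses SciPy's Walsh–Hadamard matrix for the test functions, and assumes an ideal, noiseless detector and a modulator that can only show 0/1 masks.

```python
import numpy as np
from scipy.linalg import hadamard

n = 64                           # N = 64 pixels (an 8x8 scene, flattened)
H = hadamard(n)                  # Walsh-Hadamard matrix with +1/-1 entries
rng = np.random.default_rng(0)
scene = rng.random(n)            # stand-in for the unknown image x

def measure(pattern_pm1, x):
    """One differential single-pixel measurement of <pattern, x>.

    A DMD can only realise 0/1 masks, so the +1 and -1 parts of the
    pattern are displayed as two separate masks, and the two detector
    readings are subtracted to recover the signed inner product.
    """
    on_pos = (pattern_pm1 > 0).astype(float)   # mirrors on for +1 entries
    on_neg = (pattern_pm1 < 0).astype(float)   # mirrors on for -1 entries
    return on_pos @ x - on_neg @ x

m = 16                                 # measure M = 16 of N = 64 patterns
rows = rng.permutation(n)[:m]          # random rows, for incoherence
y = np.array([measure(H[r], scene) for r in rows])
```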
Advantages and drawbacks The most important advantage of the single-pixel design is the reduced size, complexity, and cost of the photon detector (just a single unit). This enables the use of exotic detectors capable of multi-spectral, time-of-flight, photon counting, and other fast detection schemes, which has made single-pixel imaging suitable for various fields, ranging from microscopy to astronomy. The quantum efficiency of a photodiode is also higher than that of the pixel sensors in a typical CCD or CMOS array. Coupled with the fact that each single-pixel measurement receives about $N/2$ times more photons than an average pixel sensor (roughly half the modulator elements are "on" for each pattern), this helps reduce image distortion from dark noise and read-out noise significantly. Another important advantage is the fill factor of SLMs like a DMD, which can reach around 90% (compared to that of a CCD/CMOS array, which is only around 50%). In addition, single-pixel imaging inherits the theoretical advantages that underpin the compressed sensing framework, such as its universality (the same measurement matrix works for many sparsifying bases $\Psi$) and robustness (measurements have equal priority, and thus the loss of a measurement does not corrupt the entire reconstruction). The main drawback the single-pixel imaging technique faces is the tradeoff between speed of acquisition and spatial resolution. Fast acquisition requires projecting fewer patterns (since each of them is measured sequentially), which leads to lower resolution of the reconstructed image; the continuation of the earlier sketch, given after this article, illustrates this tradeoff. An innovative method of "fusing" the low-resolution single-pixel image with a high spatial-resolution CCD/CMOS image (dubbed "Data Fusion") has been proposed to mitigate this problem. Deep learning methods that learn the optimal set of patterns for imaging a particular category of samples are also being developed to improve the speed and reliability of the technique. Applications Some of the research fields that are increasingly employing and developing single-pixel imaging are listed below: Multispectral and hyperspectral imaging Infrared imaging spectroscopy Diffuse optics and imaging through scattering media Time-resolved and life-time microscopy Fluorescence spectroscopy X-ray diffraction tomography Biomedical imaging Terahertz and ultrafast imaging Magnetic resonance imaging Photoacoustic imaging Holography and phase imaging Long-range imaging and remote sensing Cytometry and polarimetry Real-time and post-processed video See also Compressed sensing Computational imaging Structured light Digital micromirror device Photodetector Hadamard matrix References Further reading External links Optical imaging Signal processing
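Continuing the sketch from the Principles section, the speed/resolution tradeoff can be made concrete with the crudest possible reconstruction: zero-filling the unmeasured Hadamard coefficients and inverting the transform. A compressed-sensing solver would use the ℓ1 minimisation discussed earlier and recover more from the same M measurements; this stand-in only illustrates how image quality degrades as fewer patterns are projected.

```python
# Crude reconstruction from the M measurements above: treat the
# unmeasured Hadamard coefficients as zero and invert the transform.
# Since H @ H.T == n * I for a Hadamard matrix, the inverse is H.T / n.
s_hat = np.zeros(n)
s_hat[rows] = y                   # keep the M measured coefficients
x_hat = (H.T @ s_hat) / n

err = np.linalg.norm(x_hat - scene) / np.linalg.norm(scene)
print(f"relative error with M={m} of N={n} patterns: {err:.2f}")
# Re-running with m closer to n drives the error toward zero,
# at the cost of a proportionally longer acquisition.
```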
Single-pixel imaging
Technology,Engineering
1,902
868,936
https://en.wikipedia.org/wiki/Paris%20Gun
The Paris Gun (Paris-Geschütz) was a type of German long-range siege gun, several of which were used to bombard Paris during World War I. They were in service from March to August 1918. When the guns were first employed, Parisians believed they had been bombed by a high-altitude Zeppelin, as the sound of neither an airplane nor a gun could be heard. They were the largest pieces of artillery used during the war by barrel length, and qualify under the (later) formal definition of large-calibre artillery. Also called the "Kaiser Wilhelm Geschütz" ("Kaiser Wilhelm Gun"), they were often confused with Big Bertha, the German howitzer used against Belgian forts in the Battle of Liège in 1914; indeed, the French called them by this name as well. They were also confused with the smaller "Langer Max" (Long Max) cannon, from which they were derived. Although the famous Krupp-family artillery makers produced all these guns, the resemblance ended there. As military weapons, the Paris Guns were not a great success: the payload was small, the barrel required frequent replacement, and the guns' accuracy was good enough for only city-sized targets. The German objective was to build a psychological weapon to attack the morale of the Parisians, not to destroy the city itself. Description Due to the weapon's apparent total destruction by the Germans in the face of the final Entente offensives, its capabilities are not known with full certainty. Figures stated for the weapon's size, range, and performance varied widely depending on the source—not even the number of shells fired is certain. In the 1980s, a long note on the gun, written in German by Dr. Fritz Rausenberger, the Krupp engineer in charge of the gun's development, shortly before his death in 1926, was discovered and published. Thanks to this, the details of the gun's design and capabilities were considerably clarified. The gun was capable of firing a shell to a range of about 130 km (81 mi) and a maximum altitude of about 42 km (26 mi)—the greatest height reached by a human-made projectile until the first successful V-2 flight test in October 1942. At the start of its 182-second flight, each shell from the Paris Gun reached a speed of about 1,640 m/s (5,400 ft/s). The distance was so far that the Coriolis effect—the rotation of the Earth—was substantial enough to affect trajectory calculations (a back-of-envelope estimate is sketched below). The gun was fired at an azimuth of 232 degrees (southwest) from Crépy-en-Laon, which was at a latitude of 49.5 degrees north. Seven barrels were constructed. They used worn-out 38 cm SK L/45 "Max" long gun barrels that were fitted with an internal tube that reduced the caliber from 38 cm to 21 cm. The lining tube was longer than the original barrel and projected out of the end of the gun, so an extension was bolted to the old gun-muzzle to cover and reinforce the lining tube. A further long smooth-bore extension was attached to the end of this, giving a total barrel length of about 34 m. This smooth section was intended to improve accuracy and reduce the dispersion of the shells, as it reduced the slight yaw, produced by the gun's rifling, that a shell might have immediately after leaving the barrel. The barrel was braced to counteract barrel drop due to its length and weight, and vibrations while firing; it was mounted on a special rail-transportable carriage and fired from a prepared, concrete emplacement with a turntable. The original breech of the old gun did not require modification or reinforcement. 
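The scale of the Coriolis correction can be checked with a first-order calculation. The sketch below is only a flat-Earth, constant-speed approximation using the figures quoted above (182 s flight, roughly 130 km range, latitude 49.5°N); it ignores drag, the varying altitude, and the vertical Coriolis component, so it shows only that the drift is on the order of a kilometre, large enough that the gunners had to correct for it.

```python
import math

# First-order (flat-Earth) Coriolis drift for the Paris Gun's shot.
# Horizontal Coriolis acceleration: a = 2 * omega * v * sin(latitude);
# integrating twice over the flight gives d = omega * v * t**2 * sin(lat).
OMEGA = 7.292e-5            # Earth's rotation rate in rad/s
lat = math.radians(49.5)    # firing latitude near Crepy-en-Laon
t = 182.0                   # time of flight in seconds
v = 130e3 / t               # mean horizontal speed for a ~130 km range

drift = OMEGA * v * t * t * math.sin(lat)
print(f"approximate sideways drift: {drift:.0f} m")  # roughly 1.3 km
```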
Since it was based on a naval weapon, the gun was manned by a crew of 80 Imperial Navy sailors under the command of Vice-Admiral Maximilian Rogge, chief of the Ordnance branch of the Admiralty. It was surrounded by several batteries of standard army artillery to create a "noise-screen" chorus around the big gun so that it could not be located by French and British spotters. The projectile flew significantly higher than projectiles from previous guns. Writer and journalist Adam Hochschild described how each giant shell took about three minutes to cover the distance to the city, climbing at the top of its trajectory to by far the highest point ever reached by a man-made object, so high that gunners, in calculating where the shells would land, had to take into account the rotation of the Earth: "For the first time in warfare, deadly projectiles rained down on civilians from the stratosphere". Flying that high reduced drag from air resistance, allowing the shell to achieve its extreme range. The unfinished V-3 cannon would have been able to fire larger projectiles to a longer range, and with a substantially higher rate of fire. The unfinished Iraqi super gun would also have been substantially bigger. Projectiles The Paris Gun shells weighed about 106 kg (234 lb). The shells initially used had a diameter of 21 cm (8.3 in). The main body of the shell was composed of thick steel, containing about 7 kg (15 lb) of TNT. The small amount of explosive—around 6.6% of the weight of the shell—meant that the effect of its shellburst was small for the shell's size. The thickness of the shell casing, needed to withstand the forces of firing, meant that shells would explode into a comparatively small number of large fragments, limiting their destructive effect. An eyewitness described the crater produced by a shell falling in the Tuileries Garden as strikingly small for the size of the shell. The shells were propelled at such a high velocity that each successive shot wore away a considerable amount of steel from the rifled bore. Each shell was sequentially numbered according to its increasing diameter, and had to be fired in numeric order, lest the projectile lodge in the bore and the gun explode. Also, when the shell was rammed into the gun, the chamber was precisely measured to determine the difference in its length: a few inches off would cause a great variance in the velocity, and with it, the range. Then, with the variance determined, the additional quantity of propellant was calculated, and its measure taken from a special car and added to the regular charge. After 65 rounds had been fired, each of progressively larger caliber to allow for wear, the barrel was sent back to Krupp to be rebored, with a new set of matching shells. The shell's explosive was contained in two compartments, separated by a wall. This strengthened the shell and supported the explosive charge under the acceleration of firing. One of the shell's two fuzes was mounted in the wall, with the other in the base of the shell. The fuzes proved very reliable: every single one of the 303 shells that landed in and around Paris successfully detonated. The shell's nose was fitted with a streamlined, lightweight ballistic cap, and the side had grooves that engaged with the rifling of the gun barrel, spinning the shell as it was fired so that its flight was stable. Two copper driving bands provided a gas-tight seal against the gun barrel during firing. Use in World War I The Paris gun was used to shell Paris at a range of about 120 km (75 mi). 
The gun was fired from a wooded hill (Le mont de Joie) near Crépy, and the first shell landed at 7:18 a.m. on 23 March 1918 on the Quai de la Seine, the explosion being heard across the city. Shells continued to land at 15-minute intervals, with 21 counted on the first day, during which fifteen people were killed and thirty-six wounded. The effect on morale in Paris was immediate: by 27 March, queues of thousands had started at the Gare d'Orsay and, at the Gare Montparnasse, ticket sales out of the capital were suspended due to demand. The initial assumption was that these were bombs dropped from an airplane or Zeppelin flying too high to be seen or heard, or perhaps an "aerial torpedo". Within a few hours, sufficient casing fragments had been collected to show that the explosions were the result of shells, not bombs. By the end of the day, military authorities were aware the shells were being fired from behind German lines by a new long-range gun, although there was initial press speculation on the origin of the shells. This included the theory that they were being fired by German agents close to Paris, or even within the city itself, so abandoned quarries close to the city were searched for a hidden gun. Another possibility was that German forces had penetrated the front line, but authorities realized that such heavy artillery could not be moved and emplaced so quickly. The press reported the German gun's extraordinary range, which amazed American ordnance officers, and noted that the shells were of far smaller caliber than heavy German siege shells. The previous world distance record had been set by the German bombardment of Dunkirk, from a range well beyond that of the best American gun. Experts thought that the German weapon might be a product of the Škoda Works. Three emplacements for the gun were located within days by the French reconnaissance pilot Didier Daurat, the path of the shells which landed in Paris having revealed the direction from which they were being fired. The closest emplacement was engaged by a 34 cm railway gun while the other two sites were bombed by aircraft, although this failed to interrupt the German bombardment. Between 320 and 367 shells were fired, at a maximum rate of around 20 per day. The shells killed 250 people and wounded 620, and caused considerable damage to property. The worst incident was on 29 March 1918, when a shell hit the roof of the St-Gervais-et-St-Protais Church, collapsing the roof onto the congregation then attending the Good Friday service. A total of 91 people were killed and 68 were wounded. There was no firing between 25 and 29 March, when the first barrel was being replaced; an unconfirmed intelligence report claimed that it had exploded. Barrels were probably changed again between 7–11 April and again between 21–24 April. The diameter of the later shells increased, indicating that the used barrels had been re-bored. A further emplacement, later identified as specifically designed for the Paris Gun, was found by advancing US troops at the beginning of August, on the north side of the wooded hill at Coucy-le-Château-Auffrique. The gun was taken back to Germany in August 1918 as Allied advances threatened its security. No guns were ever captured by the Allies. It is believed that near the end of the war they were completely destroyed by the Germans. One spare mounting was captured by American troops in Bruyères-sur-Fère, near Château-Thierry, but the gun was never found; the construction plans seem to have been destroyed as well. 
After World War I Under the terms of the Treaty of Versailles, the Germans were required to turn over a complete Paris Gun to the Allies, but they never complied. In the 1930s, the German Army became interested in rockets for long-range artillery as a replacement for the Paris Gun—which was specifically banned under the Versailles Treaty. This work eventually led to the V-2 rocket that was used in World War II. Despite the ban, Krupp continued theoretical work on long-range guns. They started experimental work after the Nazi government began funding the project upon coming to power in 1933. This research led to the 21 cm K 12 (E), a refinement of the Paris Gun design concept. Although it was broadly similar in size and range to its predecessor, Krupp's engineers had significantly reduced the problem of barrel wear. They also improved mobility over the fixed Paris Gun by making the K 12 a railway gun. The first K 12 was delivered to the German Army in 1939 and a second in 1940. During World War II, they were deployed in the Nord-Pas-de-Calais region of France; they were used to shell Kent in Southern England between late 1940 and early 1941. One gun was captured by Allied forces in the Netherlands in 1945. In popular culture A parody of the Paris Gun appears in the Charlie Chaplin movie The Great Dictator. Firing at the Cathedral of Notre Dame, the "Tomanians" (the fictional country that represented Germany) succeed in blowing up a small outhouse. The destruction of the St-Gervais-et-St-Protais Church inspired Romain Rolland to write his novel Pierre et Luce. See also Krupp K5, a 28 cm World War II German railway gun. Notes References Bibliography Henry W. Miller, Railway Artillery: A Report on the Characteristics, Scope of Utility, etc. of Railway Artillery, United States Government Printing Office, 1921 Henry W. Miller, The Paris Gun: The Bombardment of Paris by the German Long Range Guns and the Great German Offensive of 1918, Jonathan Cape, Harrison Smith, New York, 1930 Ian V. Hogg, The Guns 1914–18, Ballantine Books, New York, 1971 External links The Paris Gun in the First World War.com Encyclopedia Paris Gun at S. Berliner, III's ORDNANCE Superguns Une page sur le canon de Paris 210 mm artillery Siege artillery World War I railway artillery of Germany Lost objects Paris in World War I
Paris Gun
Physics
2,694
626,077
https://en.wikipedia.org/wiki/Lites
Lites is a discontinued Unix-like operating system based on 4.4BSD and the Mach microkernel. Specifically, Lites is a multi-threaded server and emulation library that provides Unix functionality to a Mach-based system. At the time of its release, Lites provided binary compatibility with 4.4BSD, NetBSD, FreeBSD, 386BSD, UX (4.3BSD), and Linux. Lites was originally written by Johannes Helander at Helsinki University of Technology, and was further developed by the Flux Research Group at the University of Utah. See also HPBSD References External links Utah Lites Berkeley Software Distribution Mach (kernel) Microkernel-based operating systems Microkernels X86 operating systems
Lites
Technology
161
7,040,363
https://en.wikipedia.org/wiki/Resource%20breakdown%20structure
In project management, the resource breakdown structure (RBS) is a hierarchical list of resources related by function and resource type that is used to facilitate the planning and controlling of project work. The resource breakdown structure includes, at a minimum, the personnel resources needed for successful completion of a project, and preferably contains all resources on which project funds will be spent, including personnel, tools, machinery, materials, equipment and fees and licenses. Money is not considered a resource in the RBS; only those resources that will cost money are included. Definition Assignable resources, such as personnel, are typically defined from a functional point of view: "who" is doing the work is identified based on their role within the project, rather than their department or role within the parent companies. In some cases, a geographic division may be preferred. Each descending (lower) level represents an increasingly detailed description of the resource, until the resource is small enough to be used in conjunction with the work breakdown structure (WBS) to allow the work to be planned, monitored and controlled. Example In common practice, only non-expendable (i.e., durable goods) resources are listed in an RBS. Example of hierarchies of resources: 1. Engineering 1.1 Mr. Fred Jones, Manager 1.1.2 Ms. Jane Wagner, Architectural Lead 1.1.3 Software Design Team and Resources 1.1.3.1 Mr. Gary Neimi, Software Engineer 1.1.3.2 Ms. Jackie Toms, UI Designer 1.1.3.3 Standard Time Timesheet (timesheet and project tracking software) 1.1.3.4 Microsoft Project (project scheduling) 1.1.3.5 SQL Server (database) 1.1.4 Hardware Architecture Team and Resources 1.1.4.1 Ms. Korina Johannes, Resource Manager 1.1.4.2 Mr. Yan Xu, Testing Lead 1.1.4.3 Test Stand A 1.1.4.3.1 SAN Group A 1.1.4.3.2 Server A1 1.1.4.4 Test Stand B 1.1.4.4.1 SAN Group B 1.1.4.4.2 Server B1 Both human and physical resources, such as software and test instruments, are listed in the example above. The nomenclature is a numbered, hierarchical list of indented layers, each level adding an additional digit representing its position in the hierarchy. For example, the numeric labels (1.1, 1.1.2) make each resource uniquely identifiable (a minimal sketch of this numbering as a data structure follows this article). Use in Microsoft Project The RBS (also known as the User Breakdown Structure) fields in a Project file are specifically coded by the administrator of that project, usually the project manager. Sometimes a PM administrator is designated in larger projects, who will manage the Project tool itself. This field is called the Enterprise Resource Outline Code, and it falls into one of two categories: RBS (resource field) and RBS (assignment field). These are high-level fields that require managers who know how they will be used within the organization. See also Business architecture List of project management topics Microsoft Project Project planning References Schedule (project management) Enterprise architecture
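The numbered outline maps naturally onto a tree. The following is a minimal Python sketch of that idea; it is illustrative only (the class and method names are invented here, not part of any project-management tool), and it renumbers children sequentially rather than reproducing the gaps in the example above.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceNode:
    """One entry in a resource breakdown structure."""
    name: str
    children: list["ResourceNode"] = field(default_factory=list)

    def add(self, child: "ResourceNode") -> "ResourceNode":
        self.children.append(child)
        return child

    def outline(self, prefix: str = "1") -> None:
        """Print the RBS with hierarchical outline numbers."""
        print(prefix, self.name)
        for i, child in enumerate(self.children, start=1):
            child.outline(f"{prefix}.{i}")

# A fragment of the example hierarchy:
eng = ResourceNode("Engineering")
mgr = eng.add(ResourceNode("Mr. Fred Jones, Manager"))
team = mgr.add(ResourceNode("Software Design Team and Resources"))
team.add(ResourceNode("Mr. Gary Neimi, Software Engineer"))
team.add(ResourceNode("SQL Server (database)"))
eng.outline()
# 1 Engineering
# 1.1 Mr. Fred Jones, Manager
# 1.1.1 Software Design Team and Resources
# 1.1.1.1 Mr. Gary Neimi, Software Engineer
# 1.1.1.2 SQL Server (database)
```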
Resource breakdown structure
Physics
656
57,621,897
https://en.wikipedia.org/wiki/O.%20Frank%20Tuttle
Orville Frank Tuttle (June 25, 1916, Olean, New York – December 13, 1983, Tucson, Arizona) was an American mineralogist, geochemist, and petrologist, known for his research on granites and feldspars and for his pioneering development of apparatus for experimental petrography. After completing high school in Smethport, Pennsylvania, he worked in the Bradford oilfields and studied geology at Pennsylvania State College (renamed Pennsylvania State University in 1953), where he received a bachelor's degree in 1939 and a master's degree in 1940. He then matriculated at the Massachusetts Institute of Technology for his doctoral work, which was interrupted by the Second World War, during which he was engaged in wartime research on crystal growth and characterization. In 1948 he received his doctorate at MIT. In 1947, he started his collaboration in experimental petrography with Norman L. Bowen at the Geophysical Laboratory of the Carnegie Institution in Washington. There he invented the "Tuttle Press" and the "Tuttle Bomb" (a high-pressure chamber), which were widely used in experimental petrography. Together with Bowen he explored in particular the formation of granite. In 1953 he became professor of geochemistry at Pennsylvania State University. In 1959 he became dean of the college of mineral industries. In 1960, he was diagnosed with early-stage Parkinson's disease. In 1965, he moved to Stanford University, where he was granted sick leave in 1967 and formally resigned in 1971. He moved to Tucson with his wife. In 1977 he received a tentative diagnosis of Alzheimer's disease and moved to a nursing home. He was awarded the Mineralogical Society of America Award in 1952, the Arthur L. Day Medal in 1967, and the Roebling Medal in 1975. He was elected a foreign member of the Geological Society of London and, in 1968, a member of the National Academy of Sciences. He married Dawn Hardes in 1941 and the couple had two daughters. References American mineralogists American geochemists Penn State College of Earth and Mineral Sciences alumni Massachusetts Institute of Technology alumni Pennsylvania State University faculty Stanford University faculty Members of the United States National Academy of Sciences People from Olean, New York People from Smethport, Pennsylvania 1916 births 1983 deaths 20th-century American chemists
O. Frank Tuttle
Chemistry
463
30,370,937
https://en.wikipedia.org/wiki/Genetic%20codes%20%28database%29
Genetic codes is a simple ASN.1 database hosted by the National Center for Biotechnology Information that lists all the known genetic codes (translation tables); a brief usage sketch follows below. See also Genetic code References External links https://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi ftp://ftp.ncbi.nih.gov/entrez/misc/data/gc.prt Biological databases Molecular genetics Gene expression Protein biosynthesis
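Several bioinformatics libraries ship tables derived from this NCBI data. As a brief illustration (assuming Biopython is installed; its codon tables are generated from the NCBI gc.prt file), a genetic code can be looked up by its NCBI table ID:

```python
# Look up a genetic code by NCBI table ID using Biopython, whose
# codon tables are built from the NCBI genetic-codes data (gc.prt).
from Bio.Data import CodonTable

mito = CodonTable.unambiguous_dna_by_id[2]  # table 2: vertebrate mitochondrial
print(mito.names)                    # names as listed by NCBI
print(mito.forward_table["ATA"])     # 'M' here, vs 'I' in the standard code
print(mito.stop_codons)              # includes AGA/AGG, unlike table 1
```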
Genetic codes (database)
Chemistry,Biology
104
34,296,790
https://en.wikipedia.org/wiki/Data%20infrastructure
A data infrastructure is a digital infrastructure promoting data sharing and consumption. Like other infrastructures, it is a structure needed for the operation of a society, as well as the services and facilities necessary for an economy to function, in this case the data economy. Background There is an intense discussion at the international level on e-infrastructures and data infrastructures serving scientific work. The European Strategy Forum on Research Infrastructures (ESFRI) presented the first European roadmap for large-scale research infrastructures. These are modeled as layered hardware and software systems which support sharing of a wide spectrum of resources, spanning from networks, storage, computing resources, and system-level middleware software, to structured information within collections, archives, and databases. The e-Infrastructure Reflection Group (e-IRG) has proposed a similar vision. In particular, it envisions e-infrastructures where the principles of global collaboration and shared resources are intended to encompass the sharing needs of all research activities. In the framework of the Joint Information Systems Committee (JISC) e-infrastructure programme, e-infrastructures are defined in terms of the integration of networks, grids, data centers and collaborative environments, and are intended to include supporting operation centers, service registries, credential delegation services, certificate authorities, training and help desk services. The Cyberinfrastructure programme launched by the US National Science Foundation (NSF) plans to develop new research environments in which advanced computational, collaborative, data acquisition and management services are made available to researchers connected through high-performance networks. More recently, a vision for "global research data infrastructures" has been drawn up, identifying a number of recommendations for developers of future research infrastructures. This vision document highlighted the open issues affecting data infrastructure development – both technical and organizational – and identified future research directions. Besides these initiatives targeting "generic" infrastructures there are others oriented to specific domains; for example, the European Commission promotes the INSPIRE initiative for an e-infrastructure oriented to the sharing of content and service resources of European countries in the domain of geospatial datasets. Related Projects D4Science OpenAIRE EUDAT GRDI2020 (Wayback Machine, Snapshot from June 3, 2017) EPOS See also Data cooperative Hybrid Data Infrastructure Information Infrastructure Research Infrastructure Spatial Data Infrastructure References What is data infrastructure? Information systems IT infrastructure Data
Data infrastructure
Technology
483
45,162
https://en.wikipedia.org/wiki/Peridot
Peridot ( ), sometimes called chrysolite, is a yellow-green transparent variety of olivine. Peridot is one of the few gemstones that occur in only one color. Peridot can be found in mafic and ultramafic rocks occurring in lava and in peridotite xenoliths of the mantle. The gem occurs in silica-deficient rocks such as volcanic basalt and in pallasitic meteorites. Along with diamonds, peridot is one of only two gems observed to be formed not in Earth's crust, but in the molten rock of the upper mantle. Gem-quality peridot is rare on Earth's surface due to its susceptibility to alteration during its movement from deep within the mantle and to weathering at the surface. Peridot has the chemical formula (Mg,Fe)2SiO4. Peridot is one of the birthstones for the month of August. Etymology The origin of the name peridot is uncertain. The Oxford English Dictionary suggests an alteration of Anglo–Norman (classical Latin -), a kind of opal, rather than the Arabic word , meaning "gemstone". The Middle English Dictionary's entry on peridot includes several variations: , , and ; other variants substitute y for the letter i used here. The earliest use of the word in English is possibly in the 1705 register of St. Albans Abbey: the dual entry is in Latin, with the translation to English listed as peridot. It records that on his death in 1245, Bishop John bequeathed various items, including peridot gems, to the Abbey. Appearance Peridot is one of the few gemstones that occur in only one color: an olive-green. The intensity and tint of the green, however, depend on the percentage of iron in the crystal structure, so the color of individual peridot gems can vary from yellow, to olive, to brownish-green. In rare cases, peridot may have a medium-dark toned, pure green with no secondary yellow hue or brown mask. Lighter-colored gems are due to lower iron concentrations. Mineral properties Crystal structure The molecular structure of peridot consists of isomorphic olivine, silicate, magnesium, and iron in an orthorhombic crystal system. In an alternative view, the atomic structure can be described as a hexagonal, close-packed array of oxygen ions with half of the octahedral sites occupied by magnesium or iron ions and one-eighth of the tetrahedral sites occupied by silicon ions. Surface property Oxidation of peridot does not occur at natural surface temperature and pressure, but begins to occur slowly at elevated temperature, with rates increasing with temperature. The oxidation of the olivine occurs by an initial breakdown of the fayalite component, and subsequent reaction with the forsterite component, to give magnetite and orthopyroxene. Occurrence Geologically Olivine, of which peridot is a type, is a common mineral in mafic and ultramafic rocks, often found in lava and in peridotite xenoliths of the mantle, which lava carries to the surface; however, gem-quality peridot occurs in only a fraction of these settings. Peridots can also be found in meteorites. Peridots can be differentiated by size and composition. A peridot formed as a result of volcanic activity tends to contain higher concentrations of lithium, nickel, and zinc than those found in meteorites. Olivine is an abundant mineral, but gem-quality peridot is rather rare due to its chemical instability on Earth's surface. Olivine is usually found as small grains and tends to exist in a heavily weathered state, unsuitable for decorative use. Large crystals of forsterite, the variety most often used to cut peridot gems, are rare; as a result, peridot is considered to be precious.
In the ancient world, the mining of peridot, then called topazios, on St. John's Island in the Red Sea began about 300 BC. The principal source of peridot olivine today is the San Carlos Apache Indian Reservation in Arizona. It is also mined at another location in Arizona, and in Arkansas, Hawaii, Nevada, and New Mexico at Kilbourne Hole, in the US; and in Australia, Brazil, China, Egypt, Kenya, Mexico, Myanmar (Burma), Norway, Pakistan, Saudi Arabia, South Africa, Sri Lanka, and Tanzania. In meteorites Peridot crystals have been collected from some pallasite meteorites. The most commonly studied pallasitic peridot belongs to the Indonesian Jeppara meteorite, but others exist, such as the Brenham, Esquel, Fukang, and Imilac meteorites. Pallasitic (extraterrestrial) peridot differs chemically from its earthbound counterpart in that pallasitic peridot lacks nickel. Gemology Orthorhombic minerals, like peridot, have biaxial birefringence defined by three principal refractive indices: α, β, and γ. Refractive index readings of faceted gems can range around α = 1.651, β = 1.668, and γ = 1.689, with a biaxial positive birefringence of 0.037–0.038. With decreasing magnesium and increasing iron concentration, the specific gravity, color darkness, and refractive indices increase. Increasing iron concentration ultimately forms the iron-rich end-member of the olivine solid solution series, fayalite. A study of Chinese peridot gem samples determined the hydrostatic specific gravity to be 3.36. The visible-light spectroscopy of the same Chinese peridot samples showed bands between 481.0 and 493.0 nm, with the strongest absorption at 492.0 nm. The largest cut peridot olivine is a specimen in the gem collection of the Smithsonian Museum in Washington, D.C. Inclusions are common in peridot crystals, but their presence depends on the location where the crystal was found and the geological conditions that led to its crystallization. Primary negative crystals – rounded gas bubbles – form in situ with peridot, and are common in Hawaiian peridots. Secondary negative crystals form in peridot fractures. "Lily pad" cleavages are often seen in San Carlos peridots, and are a type of secondary negative crystal. They can easily be seen under reflected light as circular discs surrounding a negative crystal. Silky and rod-like inclusions are common in Pakistani peridots. The most common mineral inclusion in peridot is the chromium-rich mineral chromite. Magnesium-rich minerals can also be present in the form of pyrope and magnesiochromite. These two types of mineral inclusions are typically surrounded by "lily-pad" cleavages. Biotite flakes appear flat, brown, translucent, and tabular. Cultural history Peridot has been prized since the earliest civilizations for its claimed protective powers to drive away fears and nightmares, according to superstition. There is a superstition that it carries the gift of "inner radiance", sharpening the mind and opening it to new levels of awareness and growth, helping one to recognize and realize one's destiny and spiritual purpose. (There is no scientific evidence for any such claims.) Peridot olivine is the birthstone for the month of August. Peridot has often been mistaken for emerald and other green gems. Noted gemologist G.F. Kunz discussed the confusion between beryl and peridot in many church treasures, most notably the "Three Magi treasure" in the Dom of Cologne, Germany.
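As a quick check of the refractive-index figures quoted in the gemology paragraph above, the maximum birefringence of a biaxial mineral is simply the spread between its highest and lowest principal indices; the worked value below restates the quoted numbers rather than adding a new measurement:

\delta = n_\gamma - n_\alpha = 1.689 - 1.651 = 0.038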
Gallery Footnotes References External links Ganoksin Mineralminers USGS peridot data Emporia Edu Florida State University – Peridot Gemstones Silicate minerals
Peridot
Physics
1,634
873,829
https://en.wikipedia.org/wiki/Derecho
A derecho (, from , 'straight') is a widespread, long-lived, straight-line wind storm that is associated with a fast-moving group of severe thunderstorms known as a mesoscale convective system. Derechos can cause hurricane-force winds, heavy rains, and flash floods. In many cases, convection-induced winds take on a bow echo (backward "C") form of squall line, often forming beneath an area of diverging upper tropospheric winds, and in a region of both rich low-level moisture and warm-air advection. Derechos move rapidly in the direction of movement of their associated storms, similar to an outflow boundary (gust front), except that the wind remains sustained for a greater period of time (often increasing in strength after onset), and may reach tornado- and hurricane-force. A derecho-producing convective system may remain active for many hours and, occasionally, over multiple days. A warm-weather phenomenon, derechos mostly occur in summer, especially during June, July, and August in the Northern Hemisphere, or March, April, and May in the Southern Hemisphere, within areas of moderately strong instability and moderately strong vertical wind shear. However, derechos can occur at any time of the year, and they are about equally likely to occur during the day or at night. Various studies since the 1980s have shed light on the physical processes responsible for the production of widespread damaging winds by thunderstorms. It has become apparent that the most damaging derechos are associated with particular types of mesoscale convective systems that are self-perpetuating (meaning that the convective systems are not strongly dependent on larger-scale meteorological processes such as those associated with blizzard-producing winter storms and strong cold fronts). In addition, the term "derecho" is sometimes misapplied to convectively generated wind events that are not particularly well-organized or long-lasting. For these reasons, a more precise, physically based definition of "derecho" has been introduced within the meteorological community. Etymology Derecho comes from the Spanish adjective for "straight" (or "direct"), in contrast with a tornado, which is a "twisted" wind. The word was first used in the American Meteorological Journal in 1888 by Gustavus Detlef Hinrichs in a paper describing the phenomenon, based on a significant derecho event that crossed Iowa on 31 July 1877. Development Organized areas of thunderstorm activity reinforce pre-existing frontal zones and can outrun cold fronts. The resultant mesoscale convective system (MCS) often forms at the point of the strongest divergence of the upper-level flow, and new storm cells are developed in the area with the greatest low-level inflow. The convection tends to move east or toward the equator, roughly parallel to low-level thickness lines and usually somewhat to the right of the mean tropospheric flow. When the convection is strongly linear or slightly curved, the MCS is called a squall line, with the strongest winds typically occurring just behind the leading edge of the significant wind shift and pressure rise. Classic derechos occur with squall lines that contain bow- or spearhead-shaped features, as seen by weather radar, that are known as bow echoes or spearhead echoes. Squall lines typically "bow out" due to the formation of a mesoscale high-pressure system which forms within the stratiform rain area behind the initial convective line.
This high-pressure area is formed by strong descending air currents behind the squall line, and could come in the form of a downburst. The size of the bow may vary, and the storms associated with the bow may die and redevelop. During the cool season within the Northern Hemisphere, derechos generally develop within a pattern of mid-tropospheric southwesterly winds, in an environment of low to moderate atmospheric instability (caused by relative warmth and moisture near ground level, with cooler air aloft, as measured by convective available potential energy), and high values of vertical wind shear ( within the lowest of the atmosphere). Warm season derechos in the Northern Hemisphere most often form in west to northwesterly flow at mid-levels of the troposphere, with moderate to high levels of thermodynamic instability. As previously mentioned, derechos favor environments of low-level warm advection and significant low-level moisture. Classification and criteria A common definition is a thunderstorm complex that produces a damaging wind swath of at least , featuring a concentrated area of convectively induced wind gusts exceeding . According to the National Weather Service (NWS) criterion, a derecho is classified as a band of storms that have winds of at least along the entire span of the storm front, maintained over a time span of at least six hours. Some studies add a requirement that no more than two or three hours separate any two successive wind reports. A more recent, more physically based definition of "derecho" proposes that the term be reserved for use with convective systems that not only contain unique radar-observed features such as bow echoes and mesovortices, but also produce damage swaths at least 100 km (60 miles) wide and 650 km (400 miles) long. On January 11, 2022, the National Oceanic and Atmospheric Administration and Environment and Climate Change Canada formally revised the criteria for a storm to be classified as a derecho. A wind storm must meet the following criteria: Wind damage swath extending for more than Wind gusts of at least along most of its length Several, well-separated or greater gusts Prior to January 11, 2022, the definition for a derecho was: Wind damage swath extending for more than Wind gusts of at least along most of its length Types Four types of derechos are generally recognized: Serial derecho – This type of derecho is usually associated with a very deep low. Single-bow – A very large bow echo around or upwards of long. This type of serial derecho is less common than the multi-bow kind. A few examples of a single-bow serial derecho are the derecho that occurred in association with the October 2010 North American storm complex, and the December 2021 Midwest derecho. Multi-bow – Multiple bow echoes are embedded in a large squall line typically around long. One example of a multi-bow serial derecho is a derecho that occurred during the 1993 Storm of the Century in Florida. Because of embedded supercells, tornadoes can spin out of these types of derechos. This is a much more common type of serial derecho than the single-bow kind. Multi-bow serial derechos can be associated with line echo wave patterns (LEWPs) on weather radar. Progressive derecho – A line of thunderstorms takes a bow shape and may travel for hundreds of miles along stationary fronts. Examples of this include "Hurricane Elvis" in 2003, the August 2020 Midwest derecho, the Boundary Waters-Canadian Derecho of 4–5 July 1999, and the May 2022 Canadian derecho.
Tornado formation is less common in progressive derechos than in serial ones. Hybrid derecho – A derecho with characteristics of both a serial and a progressive derecho. Like serial derechos, these are associated with a deep low, but, like progressive derechos, they are relatively small in size. An example is the Late-May 1998 tornado outbreak and derecho that moved through the central Northern Plains and the Southern Great Lakes on 30–31 May 1998. Low dewpoint derecho – A derecho that occurs in an environment of comparatively limited low-level moisture, with appreciable moisture confined to the mid-levels of the atmosphere. Such derechos most often occur between late fall and early spring in association with strong low-pressure systems. Low dewpoint derechos are essentially organized bands of successive, dry downbursts. The Utah-Wyoming derecho of 31 May 1994 was an event of this type. It produced a wind gust at Provo, Utah, where sixteen people were injured, and removed part of the roof of the Saltair Pavilion on the Great Salt Lake. Surface dew points along the path of the derecho were about . Characteristics Winds in a derecho can be enhanced by downburst clusters embedded inside the storm. These straight-line winds may exceed , reaching in past events. Tornadoes sometimes form within derecho events, although such events are often difficult to confirm due to the additional damage caused by straight-line winds in the immediate area. With the average tornado in the United States and Canada rating in the low end of the F/EF1 classification at peak winds, and most or all of the rest of the world even lower, derechos tend to deliver the vast majority of extreme wind conditions over much of the territory in which they occur. Datasets compiled by the United States National Weather Service and other organizations show that a large swath of the north-central United States, and presumably at least the adjacent sections of Canada and much of the surface of the Great Lakes, can expect winds from over a significant area at least once in any 50-year period, including both convective events and extra-tropical cyclones and other events deriving power from baroclinic sources. Only in the 40 to 65 percent or so of the United States resting on the coast of the Atlantic basin, and in a fraction of the Everglades, are derechos surpassed in this respect, by landfalling hurricanes, which at their worst may have winds as severe as EF3 tornadoes. Certain derecho situations are among the most common instances of severe weather outbreaks that may become less favorable to tornado production as they become more violent; the height of the 30–31 May 1998 upper Middle West-Canada-New York State derecho and the latter stages of the significant tornado and severe weather outbreaks of 2003 and 2004 are three examples of this. Some upper-air measurements used for severe-weather forecasting may reflect this point of diminishing return for tornado formation, and the three situations mentioned were instances during which the rare "Particularly Dangerous Situation" variety of severe thunderstorm watch was issued by the Storm Prediction Center of the U.S. National Oceanic & Atmospheric Administration. Some derechos develop a radar signature resembling that of a hurricane in the low levels. They may have a central eye free of precipitation, with a minimum central pressure and surrounding bands of strong convection, but are really associated with an MCS developing multiple squall lines, and are not tropical in nature.
These storms have a warm core, like other mesoscale convective systems. One such derecho occurred across the Midwestern U.S. on 21 July 2003. An area of convection developed across eastern Iowa near a weak stationary/warm front and ultimately matured, taking on the shape of a wavy squall line across western Ohio and southern Indiana. The system re-intensified after leaving the Ohio Valley, starting to form a large hook, with occasional hook echoes appearing along its eastern side. A surface low pressure center formed and became more impressive later in the day. Another example is the May 2009 Southern Midwest derecho. Location Derechos in North America form predominantly from April to August, peaking in frequency from May into July. During this time of year, derechos are mostly found in the Midwestern United States and the U.S. Interior Highlands, most commonly from Oklahoma across the Ohio Valley. During mid-summer, when a hot and muggy air mass covers the north-central U.S., they will often develop farther north into Manitoba or Northwestern Ontario, sometimes well north of the Canada–US border. North Dakota, Minnesota, and upper Michigan are also vulnerable to derecho storms when such conditions are in place. They often occur along stationary fronts on the northern periphery of the region of most intense heat and humidity. Late-year derechos are normally confined to Texas and the Deep South, although a late-summer derecho struck upper parts of the New York State area after midnight on 7 September 1998. Warm-season derechos form in environments of greater instability, while cool-season derechos form in environments of greater wind shear. Although these storms most commonly occur in North America, derechos can occur elsewhere in the world, relatively frequently in a few areas. Outside North America, they sometimes are called by different names. For example, in Bangladesh and parts of Eastern India, a type of storm known as "Kalbaisakhi" or "Nor'westers" may be a progressive derecho. Derechos also occur in Europe: on 10 July 2002, a serial derecho killed eight people and injured 39 near Berlin, Germany. Derechos occur in southeastern South America (particularly Argentina and southern Brazil) and South Africa as well, and on rarer occasions, close to or north of the 60th parallel in northern Canada. Although primarily a mid-latitude phenomenon, derechos do occur in the Amazon Basin of Brazil. On 8 August 2010, a derecho struck Estonia and tore off the tower of Väike-Maarja Church. Derechos are occasionally observed in China. Damage risk Since derechos occur during warm months and often in places with cold winter climates, the people most at risk are those involved in outdoor activities. Campers, hikers, and motorists are most at risk because of trees toppled over by straight-line winds. Wide swaths of forest have been felled by such storms. People who live in mobile homes are also at risk; mobile homes that are not anchored to the ground may be overturned by the high winds. Across the United States, Michigan and New York have incurred many of the fatalities from derechos. Prior to Hurricane Katrina, the death tolls from derechos and hurricanes were comparable for the United States. Derechos may also severely damage an urban area's electrical distribution system, especially if these services are routed above ground. The derecho that struck Chicago, Illinois on 11 July 2011 left more than 860,000 people without electricity.
The June 2012 North American derecho cut electrical power to more than 3.7 million customers, starting in the Midwestern United States and continuing across the central Appalachians into the Mid-Atlantic States during a heat wave. The August 2020 Midwest derecho delivered a maximum measured wind speed of , with damage-estimated speeds as high as in the Cedar Rapids, Iowa area. The storm was referred to as one of the largest "land-based hurricanes" in recorded history, spawning 17 confirmed tornadoes across Wisconsin, Illinois, and Indiana. Ten million acres of crops were damaged or destroyed, accounting for roughly a third of the state of Iowa's agricultural area. Over a million homes across the Midwest were without basic services such as water and electricity. Iowa Governor Kim Reynolds requested $4 billion in federal aid to assist in the recovery efforts. The winds were confirmed to have first stirred in Colorado and Nebraska, then proceeded in force across five states (Iowa, Minnesota, Illinois, Indiana, and Ohio), leaving destruction in excess of $7.5 billion in estimated damages. The 21 May 2022 derecho in southern Ontario and western Quebec travelled lengthwise along the most heavily populated region in Canada, reaching peak wind speeds of 190 km/h. The derecho killed 10 people and caused $875 million in property damage, the sixth largest "insured loss event" in Canadian history. Destruction of utility poles deprived some rural communities of telephone and electricity services for several weeks. Aviation Derechos can be hazardous to aviation due to embedded microbursts, downbursts, and downburst clusters. In addition, the powerful updrafts and high cloud tops can create dangerous conditions, and their sheer size makes them very difficult to navigate around. See also Convective storm detection Extreme weather Mesocyclone Mesoscale convective vortex (MCV) Mesovortex Microburst List of derecho events References Further reading Ashley, Walker S., et al. (2004). "Derecho Families". Proceedings of the 22nd Conference on Severe Local Storms, American Meteorological Society, Hyannis, Massachusetts. External links Facts about derechos (Storm Prediction Center's "About Derechos" web page; Stephen Corfidi with Robert Johns and Jeffry Evans) What is a derecho? (University of Nebraska at Lincoln) What is a derecho? (Meteorologist Jeff Haby's education page) Derecho Hazards in the United States (Walker Ashley) Origin of the term "Derecho" as a Severe Weather Event (Meteorologist Robert Johns) A Mediterranean derecho: Catalonia (Spain), 17 August 2003 (ECSS 2003, León, Spain, 9–12 November 2004) American Museum of Natural History Science Bulletins: Derecho December 2003 Storm Severe weather and convection Weather hazards Wind Spanish words and phrases
Derecho
Physics
3,555
5,309,463
https://en.wikipedia.org/wiki/Camber%20beam
In building, a camber beam is a piece of timber cut archwise, or steel bent or rolled, with an obtuse angle in the middle, commonly used in platforms, such as church leads, and on other occasions where long and strong beams are required. The camber curve is ideally a parabola but in practice a circular segment, since even with modern materials and calculations cambers are imprecise. A camber beam is much stronger than another beam of the same size, since, being laid with the hollow side downwards, as they usually are, camber beams form a kind of supporting arch. References External links Architectural elements Building
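Because the practical camber curve is a circular segment, the rise at mid-span follows from elementary chord geometry. The relation below is standard circle geometry, and the example radius and span are illustrative values, not figures from this article:

v = R - \sqrt{R^{2} - \left(\frac{c}{2}\right)^{2}},
\qquad \text{e.g. } R = 60\text{ m},\; c = 12\text{ m} \;\Rightarrow\; v = 60 - \sqrt{3600 - 36} \approx 0.30\text{ m}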
Camber beam
Technology,Engineering
125
1,088,971
https://en.wikipedia.org/wiki/Stop-and-wait%20ARQ
Stop-and-wait ARQ, also referred to as alternating bit protocol, is a method in telecommunications to send information between two connected devices. It ensures that information is not lost due to dropped packets and that packets are received in the correct order. It is the simplest automatic repeat-request (ARQ) mechanism. A stop-and-wait ARQ sender sends one frame at a time; it is a special case of the general sliding window protocol with transmit and receive window sizes both equal to one. After sending each frame, the sender does not send any further frames until it receives an acknowledgement (ACK) signal. After receiving a valid frame, the receiver sends an ACK. If the ACK does not reach the sender before a certain time, known as the timeout, the sender sends the same frame again. The timeout countdown is reset after each frame transmission. The above behavior is a basic example of stop-and-wait. However, real-life implementations vary to address certain design issues. Typically the transmitter adds a redundancy check number to the end of each frame. The receiver uses the redundancy check number to check for possible damage. If the receiver sees that the frame is good, it sends an ACK. If the receiver sees that the frame is damaged, the receiver discards it and does not send an ACK, pretending that the frame was completely lost, not merely damaged. One problem occurs when the ACK sent by the receiver is damaged or lost. In this case, the sender does not receive the ACK, times out, and sends the frame again. Now the receiver has two copies of the same frame, and does not know whether the second one is a duplicate frame or the next frame of the sequence carrying identical data. Another problem occurs when the transmission medium has such a long latency that the sender's timeout runs out before the frame reaches the receiver. In this case the sender resends the same packet. Eventually the receiver gets two copies of the same frame, and sends an ACK for each one. The sender, waiting for a single ACK, receives two ACKs, which may cause problems if it assumes that the second ACK is for the next frame in the sequence. To avoid these problems, the most common solution is to define a one-bit sequence number in the header of the frame. This sequence number alternates (from 0 to 1) in subsequent frames. When the receiver sends an ACK, it includes the sequence number of the next packet it expects. This way, the receiver can detect duplicated frames by checking whether the frame sequence numbers alternate. If two subsequent frames have the same sequence number, they are duplicates, and the second frame is discarded. Similarly, if two subsequent ACKs reference the same sequence number, they are acknowledging the same frame. Stop-and-wait ARQ is inefficient compared to other ARQs, because the time between packets, if the ACK and the data are received successfully, is twice the transit time (assuming the turnaround time can be zero), so the throughput on the channel is a fraction of what it could be. To solve this problem, one can send more than one packet at a time, with a larger sequence number, and use one ACK for a set; this is what is done in Go-Back-N ARQ and Selective Repeat ARQ. See also Alternating bit protocol Data link layer Error detection and correction References Tanenbaum, Andrew S., Computer Networks, 4th ed. Logical link control Error detection and correction
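The alternating-bit scheme described above can be sketched in a few lines of Python. This is a toy model rather than a production implementation: the lossy channel is simulated in memory, the timeout is collapsed into a retry loop, and, as a simplifying variant, the ACK echoes the received frame's sequence number instead of the next expected one.

# Toy sketch of stop-and-wait ARQ with a one-bit alternating sequence
# number over a simulated lossy channel (all names are illustrative).
import random

LOSS_RATE = 0.3  # assumed probability that a frame or an ACK is dropped

def unreliable(payload):
    """Deliver payload, or None if the channel dropped it."""
    return None if random.random() < LOSS_RATE else payload

def transfer(messages):
    seq = 0          # sender's current one-bit sequence number
    expected = 0     # receiver's next expected sequence number
    received = []
    for msg in messages:
        while True:                      # resend until the ACK arrives
            frame = unreliable((seq, msg))
            ack = None
            if frame is not None:
                fseq, data = frame
                if fseq == expected:     # new frame: deliver it
                    received.append(data)
                    expected ^= 1
                # a duplicate (fseq != expected) is discarded, but the
                # receiver still ACKs it so the sender can move on
                ack = unreliable(fseq)
            if ack == seq:               # ACK for the outstanding frame
                seq ^= 1
                break                    # else: "timeout", loop and resend
    return received

print(transfer(["a", "b", "c"]))  # ['a', 'b', 'c'] despite losses

Because the sequence number alternates, a retransmission caused by a lost ACK is recognized as a duplicate and discarded rather than delivered twice, which is exactly the failure mode discussed above.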
Stop-and-wait ARQ
Engineering
772
31,766,044
https://en.wikipedia.org/wiki/Mobile%20collaboration
Mobile collaboration is a technology-based process of communicating using electronic assets and accompanying software designed for use in remote locations. The newest generation of hand-held electronic devices features video, audio, and telestration (on-screen drawing) capabilities broadcast over secure networks, enabling multi-party conferencing in real time (although real-time communication is not a strict requirement of mobile collaboration and may not be applicable or practical in many collaboration scenarios). Differing from traditional video conferencing, mobile collaboration uses wireless, cellular, and broadband technologies, enabling effective collaboration independent of location. Where traditional video conferencing has been limited to boardrooms, offices, and lecture theatres, recent technological advancements have extended the capabilities of video conferencing for use with discreet, hand-held mobile devices, permitting genuinely mobile collaboration. Scope The scope of mobile collaboration takes into account a number of elements that continue to evolve in their sophistication and complexity: video, audio and telestration capabilities, conferencing and telepresence systems, collaboration tools, transmission technologies, and mobility. Forecasts Cisco Systems predicts "two-thirds of the world's mobile data traffic will be video by 2015." The Unified Communications Interoperability Forum (UCIF), a non-profit alliance of technology vendors, states that "one important driver for the growth of UC (unified communications) is mobility and the remote worker. No segment is growing faster than mobile communications, and virtually every smart phone will be equipped with video chat, IM, directory, and other UC features within a few years." Impact on industry To date, the use of mobile collaboration technology extends to industries as diverse as manufacturing, energy, healthcare, insurance, government and public safety. Mobile collaboration allows multiple users in multiple locations to combine their input while working towards the resolution of problems or issues in today’s complex work environments. This can be done in real time with advanced video, audio and telestrator capabilities, comparable to working together in the same room but without the associated expense and downtime typically involved in getting the experts to remote locations. Manufacturing Manufacturers of all kinds use mobile collaboration technology in a number of ways. Recent trends in globalization and outsourcing in particular have meant that companies need to communicate with employees, suppliers, and customers the world over. The flexibility of hand-held mobile collaboration devices allows real-time communication to take place at any location where products are being designed, built, and inspected, such as an automotive assembly plant a continent away. Improved communication through mobile collaboration affects many aspects of complex manufacturing, such as production line maintenance, supply chain management and equipment field service. Energy Companies in the energy sector face unique challenges due to, for example, the vast distances between a head office and the remote, harsh environment of an offshore oil rig, as well as the frequent inadequacy or absence of necessary transmission networks.
Recent advancements in mobile collaboration technology and transmission networks are making it possible for employees in these situations to collaborate in secure and reliable ways with colleagues thousands of miles away. The use of mobile collaboration in the energy sector is enabling companies to conduct remote inspections, safety audits, maintenance, repair and overhaul work, as well as IT/communication infrastructure troubleshooting. Healthcare Although telemedicine technology has been in use for a number of years in the healthcare sector, mobile collaboration technology extends these capabilities to locations now reachable through the use of hand-held devices, such as a remote community, a long-term care facility, or a patient’s home. Healthcare professionals in multiple locations can together view, discuss, and assess patient issues. The use of mobile collaboration technology within the healthcare sector has the potential to improve the quality of and access to care, while making its delivery more cost-effective. Education Mobile collaboration technology can also be used for remote education, with uses ranging from one-on-one tutoring to large classes. Homeschooling could benefit from the technology, as students can participate in a lecture from anywhere in the world, and classes and lectures can be recorded for later review. Internet schools, including those in higher education, may also benefit from these developments in mobile education. Though such methods are not yet widely used, they may become considerably more popular. Franchise businesses Mobile collaboration between franchiser and franchisee allows communication similar to face-to-face interaction to take place remotely, via video and voice media on smartphones, tablets, and similar devices, without requiring either party to travel to the other's location. This reduces travel time and expense and provides quicker modes of communication. Franchisers with several hundred franchisees find it especially valuable. See also List of video telecommunication services and product brands Virtual collaboration Visual networking References Collaboration Mobile technology Teleconferencing Videotelephony
Mobile collaboration
Technology
997
11,005,376
https://en.wikipedia.org/wiki/Rotational%20modulation%20collimator
Rotational modulation collimators (or RMCs) are a specialization of the modulation collimator, an imaging device invented by Minoru Oda. Devices of this type create images of high-energy X-rays (or other radiations that cast shadows). Since high-energy X-rays are not easily focused, such optics have found applications in various instruments. RMCs selectively block and unblock X-rays in a way that depends on their incoming direction, converting image information into time variations. Various mathematical transformations can then reconstitute the image of the source. The Small Astronomy Satellite 3, launched in 1975, was one orbiting experiment that used RMCs. A more recent satellite that used RMCs was RHESSI. See also Coded aperture Collimator Modulation References RHESSI Imaging Explained Astronomical instruments
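A toy numerical sketch of the principle: a point source at a given angle imprints a known modulation pattern on the detector count rate as the collimator rotates, and correlating the observed time series against candidate patterns recovers the source position. The cosine transmission profile and every name below are invented for illustration; a real RMC's angular response is considerably more complex.

# Toy rotation-modulation demo (not a real RMC response function).
import numpy as np

phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)  # rotation angle

def pattern(theta):
    """Assumed transmission vs. rotation for a source at angle theta."""
    return 0.5 * (1 + np.cos(8 * (phi - theta)))

true_theta = 1.3
signal = 4.0 * pattern(true_theta) + np.random.normal(0, 0.1, phi.size)

# "Reconstruction": test candidate positions, keep the best match.
# The pattern repeats every 2*pi/8, so positions are recovered mod pi/4.
trials = np.linspace(0, np.pi / 4, 200)
scores = [np.dot(signal - signal.mean(), pattern(t) - 0.5) for t in trials]
best = trials[int(np.argmax(scores))]
print(f"recovered {best:.3f} rad; true {true_theta % (np.pi / 4):.3f} rad")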
Rotational modulation collimator
Astronomy
165
1,959,273
https://en.wikipedia.org/wiki/Metabelian%20group
In mathematics, a metabelian group is a group whose commutator subgroup is abelian. Equivalently, a group G is metabelian if and only if there is an abelian normal subgroup A such that the quotient group G/A is abelian. Subgroups of metabelian groups are metabelian, as are images of metabelian groups under group homomorphisms. Metabelian groups are solvable. In fact, they are precisely the solvable groups of derived length at most 2. Examples Any abelian group is metabelian. Any dihedral group is metabelian, as it has a cyclic normal subgroup of index 2. More generally, any generalized dihedral group is metabelian, as it has an abelian normal subgroup of index 2. If F is a field, the group of affine maps x ↦ ax + b (where a ≠ 0) acting on F is metabelian. Here the abelian normal subgroup is the group of pure translations (the maps with a = 1), and the abelian quotient group is isomorphic to the group of homotheties (the maps with b = 0). If F is a finite field with q elements, this metabelian group is of order q(q − 1). The group of direct isometries of the Euclidean plane is metabelian. This is similar to the above example, as the elements are again affine maps. The translations of the plane form an abelian normal subgroup of the group, and the corresponding quotient is the circle group. The finite Heisenberg group H3,p of order p3 is metabelian. The same is true for any Heisenberg group defined over a ring (the group of upper-triangular 3 × 3 matrices with entries in a commutative ring). All nilpotent groups of class 3 or less are metabelian. The lamplighter group is metabelian. All groups of order p5 are metabelian (for prime p). All groups G = AB with abelian subgroups A and B are metabelian (Itô's theorem). All groups of order less than 24 are metabelian. In contrast to this last example, the symmetric group S4 of order 24 is not metabelian, as its commutator subgroup is the non-abelian alternating group A4. References External links Ryan Wisnesky, Solvable groups (subsection Metabelian Groups) Groupprops, The Group Properties Wiki: Metabelian group Properties of groups Solvable groups
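In terms of the derived series, the definition and the S4 example above can be stated compactly in standard notation (G' denotes the commutator subgroup [G, G]):

G \text{ is metabelian} \iff G'' = [G', G'] = \{e\},
\qquad \text{i.e. } G \trianglerighteq G' \trianglerighteq G'' = \{e\}.

\text{For } S_4:\quad (S_4)' = A_4, \qquad (A_4)' = V_4 \neq \{e\},

so S4 has derived length 3, one step beyond the derived length of at most 2 that characterizes metabelian groups.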
Metabelian group
Mathematics
513
41,392
https://en.wikipedia.org/wiki/Narrowband%20modem
In telecommunications, a narrowband modem is a modem whose modulated output signal has an essential frequency spectrum that is limited to that which can be wholly contained within, and faithfully transmitted through, a voice channel with a nominal 4 kHz bandwidth. Note: High frequency (HF) modems are limited to operation over a voice channel with a nominal 3 kHz bandwidth. References Modems
Narrowband modem
Technology
80
25,865,359
https://en.wikipedia.org/wiki/Flat-panel%20detector
Flat-panel detectors are a class of solid-state x-ray digital radiography devices similar in principle to the image sensors used in digital photography and video. They are used both in projectional radiography and as an alternative to x-ray image intensifiers (IIs) in fluoroscopy equipment. Principles X-rays pass through the subject being imaged and strike one of two types of detectors. Indirect detectors Indirect detectors contain a layer of scintillator material, typically either gadolinium oxysulfide or cesium iodide, which converts the x-rays into light. Directly behind the scintillator layer is an amorphous silicon detector array manufactured using a process very similar to that used to make LCD televisions and computer monitors. As in a TFT-LCD display, millions of pixels, each roughly 0.2 mm across and containing a thin-film transistor, form a grid patterned in amorphous silicon on the glass substrate. Unlike in an LCD, but similar to a digital camera's image sensor chip, each pixel also contains a photodiode which generates an electrical signal in proportion to the light produced by the portion of the scintillator layer in front of the pixel. The signals from the photodiodes are amplified and encoded by additional electronics positioned at the edges of or behind the sensor array in order to produce an accurate and sensitive digital representation of the x-ray image. Direct FPDs Direct conversion imagers utilize photoconductors, such as amorphous selenium (a-Se), to capture and convert incident x-ray photons directly into electric charge. X-ray photons incident upon a layer of a-Se generate electron-hole pairs via the internal photoelectric effect. A bias voltage applied across the depth of the selenium layer draws the electrons and holes to the corresponding electrodes; the generated current is thus proportional to the intensity of the irradiation. The signal is then read out using underlying readout electronics, typically a thin-film transistor (TFT) array. By eliminating the optical conversion step inherent to indirect conversion detectors, the lateral spread of optical photons is eliminated, reducing blur in the resulting signal profile of direct conversion detectors. Coupled with the small pixel sizes achievable with TFT technology, a-Se direct conversion detectors can thus provide high spatial resolution. This high spatial resolution, coupled with a-Se's relatively high quantum detection efficiency for low-energy photons (< 30 keV), motivates the use of this detector configuration for mammography, in which high resolution is desirable to identify microcalcifications. Advantages and disadvantages Flat-panel detectors are more sensitive and faster than film. Their sensitivity allows a lower dose of radiation for a given picture quality than film. For fluoroscopy, they are lighter, far more durable, smaller in volume, more accurate, and have much less image distortion than x-ray image intensifiers, and can also be produced with larger areas. Disadvantages compared to IIs can include defective image elements, higher costs, and lower spatial resolution. In general radiography, there are time and cost savings to be made over computed radiography and (especially) film systems. In the United States, digital radiography is on course to surpass the use of computed radiography and film. In mammography, direct conversion FPDs have been shown to outperform film and indirect technologies in terms of resolution, signal-to-noise ratio, and quantum efficiency.
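For the direct-conversion case, the stated proportionality between absorbed x-ray energy and collected charge admits a back-of-envelope estimate. The sketch below assumes roughly 50 eV of absorbed energy per collected electron-hole pair in a-Se at typical bias fields; that figure, and the function name, are illustrative assumptions, since the true conversion gain depends on the applied field.

# Back-of-envelope estimate of direct-conversion (a-Se) pixel signal.
E_PHOTON_EV = 20_000    # one 20 keV photon (mammographic energy range)
W_PAIR_EV = 50          # assumed absorbed energy per e-h pair in a-Se
E_CHARGE_C = 1.602e-19  # charge of one electron, in coulombs

def pixel_charge(n_photons: int) -> float:
    """Collected charge (C) for n photons absorbed in one pixel."""
    pairs = n_photons * E_PHOTON_EV / W_PAIR_EV
    return pairs * E_CHARGE_C

# 1000 absorbed photons -> about 0.064 pC, read out via the pixel's TFT.
print(f"{pixel_charge(1000):.2e} C")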
Digital mammography is commonly recommended as the minimum standard for breast screening programmes. See also X-ray detectors References External links X-ray fluoroscopy with portable X-ray generator Nondestructive testing Radiography Medical imaging
Flat-panel detector
Materials_science
767
2,854,670
https://en.wikipedia.org/wiki/Genetic%20use%20restriction%20technology
Genetic use restriction technology (GURT), also known as terminator technology or suicide seeds, is designed to restrict access to "genetic materials and their associated phenotypic traits." The technology works by activating (or deactivating) specific genes using a controlled stimulus, in order to cause second-generation seeds either to be infertile or to lack one or more of the desired traits of the first-generation plant. GURTs can be used by agricultural firms to enhance protection of their innovations in genetically modified organisms by making it impossible for farmers to reproduce the desired traits on their own. Another possible use is to prevent the escape of genes from genetically modified organisms into the surrounding environment. The technology was originally developed under a cooperative research and development agreement between the Agricultural Research Service of the United States Department of Agriculture and Delta & Pine Land Company in the 1990s. The purpose of the development was to protect the intellectual property of biotechnology firms in a field that the United States Department of Agriculture viewed as a specifically American technological competence. The technology, while still being developed, is not yet commercially available, due to the political and scientific controversies that accompanied its development. GURT was first reported on by the Subsidiary Body on Scientific, Technical and Technological Advice (SBSTTA) to the UN Convention on Biological Diversity and discussed during the 8th Conference of the Parties to the United Nations Convention on Biological Diversity in Curitiba, Brazil, March 20–31, 2006. Process The GURT process is typically composed of four genetic components: a target gene, a promoter, a trait switch, and a genetic switch, sometimes with slightly different names given in different papers. A typical GURT involves the engineering of a plant that has a target gene in its DNA that is expressed when activated by a promoter gene. However, the promoter is separated from the target gene by a blocker sequence that prevents it from accessing the target. When the plant receives a given external input, a genetic switch in the plant takes the input, amplifies it, and converts it into a biological signal. When a trait switch receives the amplified signal, it creates an enzyme that cuts the blocker sequence out. With the blocker sequence eliminated, the promoter gene allows the target gene to express itself in the plant. In other versions of the process, an operator must bind to the trait switch in order for it to make the enzymes that cut out the blocker sequence. However, there are repressors that bind to the trait switch and prevent it from doing so. In this case, when the external input is applied, the repressors bond to the input instead of to the trait switch, allowing the enzymes to be created that cut the blocker sequence, thereby allowing the trait to be expressed. Other GURTs embody alternative approaches, such as letting the genetic switch directly affect the blocker sequence and bypass the need for a trait switch. Variants There are two broad categories of GURTs: variety-specific genetic use restriction technologies (V-GURTs) and trait-specific genetic use restriction technologies (T-GURTs). The two variants have been described as follows: V-GURTs are designed to restrict the use of all genetic materials contained in an entire plant variety. Prior to being sold to growers, the seeds of V-GURTs are activated by the seed company.
The seeds can germinate, and the plants grow and reproduce normally, but their offspring will be sterile. Thus, farmers could not save seed from year to year to replant. In contrast, T-GURTs only restrict the use of particular traits conferred by a transgene, but the seeds are fertile. Growers could replant seed from the previous harvest, but it would not carry the transgenic trait. Variety-specific GURTs or V-GURTs Variety-specific genetic use restriction technologies destroy seed development and plant fertility by means of a "genetic process triggered by a chemical inducer that will allow the plant to grow and to form seeds, but will cause the embryo of each of those seeds to produce a cell toxin that will prevent its germination if replanted, thus causing second generation seeds to be sterile". The toxin degrades the DNA or RNA of the plant. Thus, the seed from the crop is not viable and cannot be used as seed to produce subsequent crops, but only for sale as food or fodder. Trait-specific GURTs or T-GURTs Trait-specific genetic use restriction technologies modify a crop in such a way that the genetic enhancement engineered into the crop does not function until the plant is treated with a specific chemical. The chemical acts as the external input, activating the target gene. One variation of T-GURTs is the possibility that the gene could be toggled on and off with different chemical inputs, with the associated trait likewise toggled on or off. With T-GURTs, seeds could possibly be saved for planting, with the condition that the new plants would not have the enhanced traits unless the external input were added. Benefits and risks GURTs have a number of potential uses, though they have not yet been used in commercial agricultural products available on the market or in pharmaceutical applications. These uses include protection of intellectual property for biotechnological innovations, and bio-confinement (preventing escape of genetically engineered genes into nature). Intellectual property protection The original aim of the developers of GURTs was the protection of intellectual property in agricultural biotechnology. That is, the developers sought to prevent farmers from reusing patented seeds in cases where patents for biological innovations did not exist or could not be easily enforced. This problem is not generally posed for farmers using hybrid seeds, which, in any case, are not fertile or do not breed true and thus could not be used to grow subsequent crops. However, V-GURTs make it impossible for farmers to use seeds they have produced to grow crops in subsequent seasons because the entire genome of the targeted cells is destroyed. T-GURTs could be used by seed companies to allow for the commercialisation of seeds that are fertile, but that develop into plants with desired traits only when sprayed with an activator chemical sold by the company. Bio-confinement An ongoing fear raised by GURTs and other biotechnologies is that the genes of genetically modified plants might escape into nature via sexual reproduction with compatible wild plants or with other cultivated plants. This is known as 'transgene escape' and is among the highest-priority risks posed by genetic engineering of plants. This risk of escape is one of the reasons that the GURT process has not yet been used in commercial applications (indeed, the main producing companies have vowed not to commercialise these products, though they still have related research programs).
Ironically, GURTs – themselves a process for the genetic modification of plants – may also be used to secure the 'bio-confinement' of the transgenes of genetically modified plants. GURTs, because they control plant fertility in various ways, could be used to prevent the escape of transgenes into wild relatives and help reduce the risk of deleterious impacts on biodiversity. For bio-confinement, both "V- and T-GURTs could be targeted to reproductive tissues, most typically pollen and seed (or embryo)." Crops modified to produce non-food products (e.g., pharmaceuticals such as therapeutic proteins, monoclonal antibodies, and vaccines) could be armed with GURTs to prevent accidental transmission of these traits into crops meant for food. Other uses Another possible advantage is that non-viable seeds produced on V-GURT plants may reduce the propagation of volunteer plants. Volunteer plants can become an economic problem for larger-scale mechanized farming systems that incorporate crop rotation. Furthermore, under warm, wet harvest conditions non-V-GURT grain can sprout, lowering the quality of the grain produced. It is likely that this problem would not occur with the use of V-GURT grain varieties. Another proposed use is in synthetic biology, where a restricted activator chemical must be added to the fermentation medium to produce a desired output chemical. Controversy As of 2006, GURT seeds have not been commercialized anywhere in the world due to opposition from farmers, consumers, indigenous peoples, NGOs, and some governments. Using the technology, companies that manufacture genetic use restriction technologies could potentially acquire an advantageous position vis-à-vis farmers, because the seeds sold could not be resown. V-GURTs would not have an immediate impact on the many farmers who use hybrid seeds, as they do not produce their own planting seeds, buying instead specialized hybrid seeds from seed production companies. However, approximately 80 percent of farmers in Brazil and Pakistan grow crops using seeds saved from previous harvests. Another concern is that farmers purchasing the seeds would be greatly affected, given that they would have to buy new seeds every year; it has been argued that this would result in higher food prices. Some analysts have expressed concerns that GURT seeds might adversely impact biodiversity and threaten native species of plants. However, proponents of the technology dispute these claims, arguing that because non-GMO hybrid plants are used in the same way, and GURT seeds could help farmers deal with cross-pollination, the benefits outweigh the potential negatives. In 2000, the United Nations Convention on Biological Diversity recommended a de facto moratorium on field-testing and commercial sale of terminator seeds; the moratorium was re-affirmed and the language strengthened in March 2006, at the COP8 meeting of the UNCBD. Specifically, the moratorium recommended that, due to a lack of research on the technology's potential risks, no field testing of GURTs or of products using them should be allowed until there was a sufficiently justified reason to do so. India and Brazil have passed national laws to prohibit the technology. See also Cartagena Protocol on Biosafety Diamond v. 
Chakrabarty Digital rights management Genetic pollution Genetically modified organism Seed saving Transgenic maize References External links - UNEP/CBD/COP/5/2 - 11 November 1999 - Mention of genetic use restriction tech on pages 22, 42 UN Convention on Biological Diversity - Cartagena Protocol on Biosafety USPTO Patent Number 5,723,765 - method for producing a seed incapable of germination, (claim no. 10) Genetic engineering Genetics techniques
Genetic use restriction technology
Chemistry,Engineering,Biology
2,099
3,828,350
https://en.wikipedia.org/wiki/International%20Rectifier
International Rectifier was an American power management technology company manufacturing analog and mixed-signal ICs, advanced circuit devices, integrated power systems, and high-performance integrated components for computing. On 13 January 2015, the company became a part of Infineon Technologies. IR's products, as a part of Infineon Technologies' overall semiconductor portfolio, continue to be used in many applications, including lighting, automobile, satellite, aircraft, and defense systems, as well as key components in power supply systems in electronics-based products, especially microcomputers, servers, and networking and telecommunications equipment. History In the 1950s the company commercialized germanium rectifiers (1954) and created the first silicon-based rectifier (1959). In 1974 they developed the first power and Darlington transistors that used glass passivation. In 1979 they developed the first hexagonal power MOSFET. Then in 1983, they developed the first intelligent power ICs. Also in 1983 they lost a patent infringement lawsuit over the rights to doxycycline to Pfizer, Inc., resulting in a judgment of $55 million to Pfizer. To avoid bankruptcy, International Rectifier gave Pfizer its animal health and feed additive businesses. In 2000, they developed FlipFET wafer packaging. Two years later, they developed DirectFET, a MOSFET packaging technology designed to address thermal limitations found in advanced computing, consumer, and communications applications. In 2003, they developed the iMOTION Integrated Design Platform for motor control applications. In 2006, the SmartRectifier IC was introduced for AC/DC applications. In 2007 the company launched SupIRBuck integrated voltage regulators. In 2008 a GaN-based power device platform was introduced. In 2011, they introduced PowIRstage devices and CHiL digital controllers. In 2012, they followed with micro-integrated power modules for motor control applications and COOLiRIGBTs for automotive applications. In 2014, the company was bought by Infineon Technologies for $3 billion. By 2015, International Rectifier had officially become a part of Infineon Technologies. Manufacturing International Rectifier also had wafer fabrication and assembly facilities around the world. The locations included: El Segundo, California Temecula, California Leominster, Massachusetts Mesa, Arizona San Jose, California Newport, Wales Tijuana, Mexico References External links International Rectifier Corporation History Companies formerly listed on the New York Stock Exchange Equipment semiconductor companies Companies based in El Segundo, California 2014 mergers and acquisitions Electronics companies established in 1947 Electronics companies disestablished in 2014 1947 establishments in California 2014 disestablishments in California Defunct semiconductor companies of the United States Defunct manufacturing companies based in Greater Los Angeles Defunct computer companies of the United States Defunct computer hardware companies
International Rectifier
Engineering
570