Dataset fields (each record below lists these columns in order, with the ranges reported per column):
id: int64, 39 to 79M
url: string, lengths 32 to 168
text: string, lengths 7 to 145k
source: string, lengths 2 to 105
categories: list, lengths 1 to 6
token_count: int64, 3 to 32.2k
subcategories: list, lengths 0 to 27
18,517,278
https://en.wikipedia.org/wiki/K%20Puppis
k Puppis (k Pup, k Puppis) is a Bayer designation given to an optical double star in the constellation Puppis, the two components being k1 Puppis and k2 Puppis. Bayer designation Note that the Bayer designation for this star is "k" not "kappa" (κ). In Bayer's original Uranometria, k Puppis was listed as ρ (rho) Navis. When Lacaille broke apart the large constellation Argo Navis into Carina, Puppis, and Vela, he re-designated the stars with Greek letters in a single sequence across all three constellations. Additionally, Lacaille used Latin letters for many additional stars. κ (kappa) is in the constellation of Vela and so there is no kappa in Puppis. The confusion also extends to the proper name Markab which properly applies to κ Velorum (and other stars) but which has also been used for k Puppis when it is called κ Puppis. Description Both k1 Puppis and k2 Puppis are bright blue B-type stars of nearly equal brightness, +4.50 and +4.62, respectively. To the naked eye, the pair has a combined magnitude of +3.80. On the sky, the two stars are separated by approximately 9.9 seconds of arc along PA 318°. The optical pair can be distinguished easily with a small telescope. The component k1 Puppis is a binary star system in its own right, while k2 Puppis is a variable star. Each star within the k Puppis optical pair is between 450 and 470 light years from Earth. k Puppis is listed in the General Catalogue of Variable Stars as a suspected variable star, but the range and type are not stated. The International Bulletin of Variable Stars has since published research showing that k2 Puppis is the variable component. It is an SX Arietis variable with a period of 1.9093 days which is also the rotational period of the star. The total amplitude is 0.015 apparent magnitude. k2 Puppis is a chemically peculiar star with a strong magnetic field. It is classified as a He-weak star and in addition to a deficit of helium in its spectrum, it shows an overabundance of many iron peak and rare earth elements. All of its spectral lines show variability, probably due to variations in the chemical makeup of its atmosphere as it rotates. References External links Puppis Puppis, k k Puppis k Puppis k Puppis k1 Puppis B-type main-sequence stars 2948 9 037229 061555 6 Suspected variables Durchmusterung objects SX Arietis variables Helium-weak stars
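For a sense of scale, the angular separation and distance quoted above can be combined into a projected separation on the sky. The short calculation below is an illustrative estimate added here; it assumes both stars sit at roughly 460 light-years and uses the small-angle relation, and none of the intermediate numbers come from the article.

```python
# Projected separation of the k Puppis pair (illustrative estimate only).
# Small-angle rule: separation in AU = angular separation (arcsec) x distance (pc).
angular_sep_arcsec = 9.9
distance_ly = 460                      # midpoint of the quoted 450-470 light-years
distance_pc = distance_ly / 3.26156    # light-years per parsec
projected_sep_au = angular_sep_arcsec * distance_pc
print(f"{distance_pc:.0f} pc, ~{projected_sep_au:.0f} AU projected separation")
# roughly 141 pc and about 1400 AU of projected separation on the sky
```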
K Puppis
[ "Astronomy" ]
561
[ "Puppis", "Constellations" ]
18,517,425
https://en.wikipedia.org/wiki/Lauricella%27s%20theorem
In the theory of orthogonal functions, Lauricella's theorem provides a condition for checking the closure of a set of orthogonal functions, namely: Theorem – A necessary and sufficient condition that a normal orthogonal set $\{u_k\}$ be closed is that the formal series for each function of a known closed normal orthogonal set $\{v_k\}$ in terms of $\{u_k\}$ converge in the mean to that function. The theorem was proved by Giuseppe Lauricella in 1912. References G. Lauricella: Sulla chiusura dei sistemi di funzioni ortogonali, Rendiconti dei Lincei, Series 5, Vol. 21 (1912), pp. 675–85. Theorems in functional analysis
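Written out, the closure criterion above is a statement about convergence in the mean. The display below is a sketch added for clarity: the Fourier coefficients $c_{jk}$ and the interval $[a, b]$ are notation assumed here for illustration, not symbols taken from the article.

```latex
% Formal series of each v_j of the known closed set in terms of {u_k},
% with assumed Fourier coefficients c_{jk} = \int_a^b v_j(x)\, u_k(x)\, dx.
% Lauricella's condition: the series converges in the mean to v_j for every j,
\lim_{n \to \infty} \int_a^b \Bigl( v_j(x) - \sum_{k=1}^{n} c_{jk}\, u_k(x) \Bigr)^{2} \, dx = 0 ,
% and this holding for all j is necessary and sufficient for {u_k} to be closed.
```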
Lauricella's theorem
[ "Mathematics" ]
139
[ "Theorems in mathematical analysis", "Theorems in functional analysis" ]
18,518,396
https://en.wikipedia.org/wiki/P-Laplacian
In mathematics, the p-Laplacian, or the p-Laplace operator, is a quasilinear elliptic partial differential operator of 2nd order. It is a nonlinear generalization of the Laplace operator, where $p$ is allowed to range over $1 < p < \infty$. It is written as $\Delta_p u := \nabla \cdot \left( |\nabla u|^{p-2} \nabla u \right)$, where $|\nabla u|^{p-2}$ is defined as $\left( \sqrt{ \left(\tfrac{\partial u}{\partial x_1}\right)^2 + \cdots + \left(\tfrac{\partial u}{\partial x_n}\right)^2 } \right)^{p-2}$. In the special case when $p = 2$, this operator reduces to the usual Laplacian. In general, solutions of equations involving the p-Laplacian do not have second-order derivatives in the classical sense, thus solutions to these equations have to be understood as weak solutions. For example, we say that a function u belonging to the Sobolev space $W^{1,p}(\Omega)$ is a weak solution of $\Delta_p u = 0$ in $\Omega$ if for every test function $\varphi \in C_0^{\infty}(\Omega)$ we have $\int_{\Omega} |\nabla u|^{p-2} \nabla u \cdot \nabla \varphi \, dx = 0$, where $\cdot$ denotes the standard scalar product. Energy formulation The weak solution of the p-Laplace equation with Dirichlet boundary conditions, $-\Delta_p u = f$ in $\Omega$ and $u = g$ on $\partial\Omega$, in an open bounded set $\Omega$ is the minimizer of the energy functional $J(u) = \frac{1}{p} \int_{\Omega} |\nabla u|^{p} \, dx - \int_{\Omega} f u \, dx$ among all functions in the Sobolev space $W^{1,p}(\Omega)$ satisfying the boundary conditions in the sense that $u - g \in W_0^{1,p}(\Omega)$ (when $\Omega$ has a smooth boundary, this is equivalent to require that functions coincide with the boundary datum in trace sense). In the particular case $f \equiv 1$ and $\Omega$ is a ball of radius 1, the weak solution of the problem above can be explicitly computed and is given by $u(x) = C \left( 1 - |x|^{\frac{p}{p-1}} \right)$, where $C$ is a suitable constant depending on the dimension $n$ and on $p$ only. Observe that for $p > 2$ the solution is not twice differentiable in classical sense. See also Infinity Laplacian Notes Sources Further reading Notes on the p-Laplace equation by Peter Lindqvist Juan Manfredi, Strong comparison Principle for p-harmonic functions Elliptic partial differential equations
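As a small numerical illustration of the operator defined above, the sketch below approximates the one-dimensional p-Laplacian on a grid by differencing the flux $|u'|^{p-2} u'$, and checks that for $p = 2$ it reduces to the ordinary second derivative. This is an illustrative snippet written for this entry; the grid, the test function $u(x) = x^3$, and the function name are arbitrary choices, not anything from the source.

```python
import numpy as np

def p_laplacian_1d(u, dx, p):
    """Approximate d/dx(|u'|^(p-2) u') at interior grid points.

    Uses centered differences: the flux |u'|^(p-2) u' is evaluated at
    cell midpoints, then differenced once more.
    """
    du = np.diff(u) / dx                  # u' at midpoints i+1/2
    flux = np.abs(du) ** (p - 2) * du     # |u'|^(p-2) u' at midpoints
    return np.diff(flux) / dx             # divergence back at interior nodes

# Illustrative check: for p = 2 the operator is the ordinary Laplacian,
# so applying it to u(x) = x^3 should give u'' = 6x.
x = np.linspace(0.1, 1.0, 201)
u = x ** 3
lap2 = p_laplacian_1d(u, x[1] - x[0], p=2.0)
print(np.max(np.abs(lap2 - 6 * x[1:-1])))   # tiny: the scheme is exact for cubics up to roundoff
```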
P-Laplacian
[ "Mathematics" ]
315
[ "Mathematical analysis", "Mathematical analysis stubs" ]
20,741,711
https://en.wikipedia.org/wiki/ICRANet
ICRANet, the International Center for Relativistic Astrophysics Network, is an international organization which promotes research activities in relativistic astrophysics and related areas. Its members are four countries and three Universities and Research Centers: Armenia, the Federative Republic of Brazil, Italian Republic, the Vatican City State, the University of Arizona (USA), Stanford University (USA) and ICRA. ICRANet headquarters are located in Pescara, Italy. History of ICRANet foundation: ICRA and ICRANet ICRA and ICRANet In 1985, the International Center for Relativistic Astrophysics ICRA was founded by Remo Ruffini (University of Rome "La Sapienza") together with Riccardo Giacconi (Nobel Prize for Physics 2002), Abdus Salam (Nobel Prize for Physics 1979), Paul Boynton (University of Washington), George Coyne (former director of the Vatican observatory), Francis Everitt (Stanford University) and Fang Li-Zhi (University of Science and Technology of China). The Statute and the Agreement establishing ICRANet were signed on March 19, 2003, and they were recognized in the same year by the Republic of Armenia and the Vatican City State. ICRANet has been created in 2005 by a law of the Italian Government, ratified by the Italian Parliament and signed by the President of the Italian Republic Carlo Azeglio Ciampi on February 10, 2005. The Republic of Armenia, Italian Republic, the Vatican City State, ICRA, the University of Arizona and the Stanford University are the founding members. On September 12, 2005, ICRANet Steering Committee was established and had its first meeting: Remo Ruffini and Fang Li-Zhi were appointed respectively Director and Chairman of the Steering Committee. On December 19, 2006 ICRANet Scientific Committee was established and had its first meeting in Washington DC. Riccardo Giacconi was appointed Chairman and John Mester Co-Chairman. On September 21, 2005 the Director of ICRANet signed, together with the then Ambassador of Brazil in Rome Dante Coelho De Lima the adhesion of the Federative Republic of Brazil to ICRANet. The entrance of Brazil, requested by the then President of Brazil Luiz Inácio Lula da Silva has been unanimously ratified by the Brazilian Parliament. On August 12, 2011, the then President of Brazil Dilma Rousseff signed the entrance of Brazil in ICRANet. Marcel Grossmann meetings By the beginning of the twentieth century the new branch of mathematics, tensor calculus, was developed in the works of Gregorio Ricci Curbastro and Tullio Levi Civita of the University of Padua and the University of Rome "La Sapienza". Marcel Grossmann of the University of Zurich who had a deep knowledge of the Italian school of geometry and who was close to Einstein introduced to him these concepts. The collaboration between Einstein and Grossmann was essential for the development of General Relativity. Remo Ruffini and Abdus Salam in 1975 established the Marcel Grossmann meetings (MG) on Recent Developments in Theoretical and Experimental General Relativity, Gravitation, and Relativistic Field Theories, which take place every three years in different countries, gathering more than 1000 researchers. MG1 and MG2 were held in 1975 and in 1979 in Trieste; MG3 in 1982 in Shanghai; MG4 in 1985 in Rome; MG5 in 1988 in Perth; MG6 in 1991 in Kyoto; MG7 in 1994 at Stanford; MG8 in 1997 in Jerusalem; MG9 in 2000 in Rome; MG10 in 2003 in Rio de Janeiro; MG11 in 2006 in Berlin; MG12 in 2009 in Paris; MG13 in 2012 in Stockholm; MG14 in 2015 and MG15 in 2018 both in Rome. 
Since its foundation, ICRANet has always played a leading role in the organization of those meetings. Celebration of the International Year of Astronomy 2009 ICRANet has been Organizational Associate of the International Year of Astronomy 2009 and supported the global coordination of IYA2009 financially. In this occasion ICRANet organized a series of international meetings under the general title "The Sun, the Star, the Universe and General Relativity" including: the 1st Zeldovich meeting (Minsk, Belarus), the Sobral Meeting (Fortaleza, Brazil), the 1st Galileo - Xu Guangqi meeting (Shanghai, China), the 11th Italian-Korean Symposium on Relativistic Astrophysics (Seoul, South Korea) and the 5th Australasian Conference - Christchurch Meeting (Christchurch, New Zealand). Celebration of the International Year of Light 2015 Under the initiative of the United Nations and UNESCO, 2015 was declared the International Year of Light, and it represented the centenary of the formulation of the equations of general relativity by Albert Einstein, and the fiftieth anniversary of the birth of relativistic astrophysics. ICRANet was a "Bronze Associate" sponsor of those celebrations. In 2015, ICRANet also organized a series of international meetings including: the Second ICRANet César Lattes Meeting (Niterói – Rio de Janeiro – João Pessoa – Recife – Fortaleza, Brazil), the International Conference on Gravitation and Cosmology / the 4th Galileo-Xu Guangqi meeting (Beijing, China), Fourteenth Marcel Grossmann Meeting - MG14 (Rome, Italy), the 1st ICRANet Julio Garavito Meeting on Relativistic Astrophysics (Bucaramanga – Bogotá, Colombia), the 1st Sandoval Vallarta Caribbean Meeting on Relativistic Astrophysics (Mexico City, Mexico). Organization and structure The organization consists of the Director, the Steering Committee and the Scientific Committee. The members of committees are representatives of the countries and member institutions. ICRANet has a number of permanent Faculty positions. Their activities are supported by administrative staff and secretariat personnel. ICRANet financing is based by Statute on the funds provided by the governments and by voluntary contributions, donations. The initial Director of ICRANet appointed in 1985 was Remo Ruffini. Ruffini remains Director . In 2023 the Steering Committee consists of: Albania: Elida Bylyku Armenia: Nouneh Zastoukhova ICRA: Yu Wang Italy: Italian Foreign Ministry, Unità Scientifica e Tecnologica Bilaterale e Multilaterale: Amb. Lorenzo Angeloni, Cons. Alessandro Garbellini, Ministry of Economy and Finance, Ragioneria Generale dello Stato, IGAE, Uff. IX: Dr. Antonio Bartolini, Dr. Salvatore Sebastiano Vizzini MIUR: Dr. Vincenzo Di Felice, Dott.ssa Giulietta Iorio Pescara Municipality: Avv. Carlo Masci Stanford University: Francis Everitt University of Arizona: Xiaohui Fan Vatican City State: Guy J. Consolmagno, S.J. The current Chairperson (2019) of the ICRANet Steering Committee is Francis Everitt. The first Chairperson of the Scientific Committee was Riccardo Giacconi, Nobel Prize for Physics in 2002, who ended his term in 2013. The current (2019) Chairperson of the Scientific Committee is Massimo Della Valle. The Scientific Committee in 2019 consists of: Prof. Narek Sahakyan (Armenia), Dr. Barres de Almeida Ulisses (Brazil), Dr. Carlo Luciano Bianco (ICRA), Prof. Massimo Della Valle (Italy), Prof. John Mester (Stanford University), Prof. Chris Fryer (University of Arizona) and Dr. Gabriele Gionti (Vatican City State). 
The Faculty in 2019 consists of Professors Ulisses Barres de Almeida, Vladimir Belinski, Carlo Luciano Bianco, Donato Bini, Pascal Chardonnet, Christian Cherubini, Filippi Simonetta, Robert Jantzen, Roy Patrick Kerr, Hans Ohanian, Giovanni Pisani, Brian Mathew Punsly, Jorge Rueda, Remo Ruffini, Gregory Vereshchagin, and She-Sheng Xue, and is supported by an Adjunct Faculty made up of more than 30 internationally renowned scientists participating in ICRANet activities, and between eighty "Lecturers" and "Visiting Professors". Among these are the Nobel Laureates Murray Gell-Mann, Theodor Hänsch, Gerard ’t Hooft and Steven Weinberg. Member states and institutions Currently ICRANet members are four countries and three Universities and research centers. Member states: Member institutions: ICRANet has signed collaboration agreements with over 60 institutions, universities and research centers in different countries. ICRANet seats and centers The network is composed of several seats and centers. Seat agreements, establishing rights and privileges, including extraterritoriality, have been signed for the seat in Pescara in Italy, for the seat in Rio de Janeiro in Brazil and for the seat in Yerevan in Armenia. The Seat Agreement for Pescara has been ratified on May 13, 2010. The Seat agreement for Yerevan has been unanimously approved by the Parliament of Armenia on November 13, 2015. High-speed optical fiber connection with different locations are made possible by the connection to the pan-European data network for the research and education community (GÉANT) through the GARR network. Currently ICRANet centers are operative at: ICRANet Headquarters in Pescara, Italy The Department of Physics of University "La Sapienza" (Rome, Italy); Villa Ratti (Nice, France); The Presidium of the Armenian National Academy of Sciences (Yerevan, Armenia); CBPF – Centro Brasileiro de Pesquisas Físicas (Rio de Janeiro, Brazil); Isfahan University of Technology (Isfahan, Iran); National Academy of Science of Belarus (Minsk, Belarus). ICRANet Centers in Pescara, Rome and Nice ICRANet headquarters are located in Pescara, Italy. This center coordinates ICRANet activities and yearly meetings of the Scientific and the Steering committees are usually held there. International meetings such as the Italian-Korean Symposia on Relativistic Astrophysics are regularly held in this center. Scientific activities in Pescara center include the fundamental research on early cosmology by the Russian school guided by Vladimir Belinski. Activities of the ICRANet Seat at Villa Ratti in Nice include the coordination of the IRAP PhD program, as well as scientific activities connected with the ultra high energy observations by the University of Savoy and the VLT observations performed by the Côte d’Azur Observatory, which involve the thesis works of IRAP PhD students. The University of Savoy is the closest French lab to the CERN. ICRANet Center in Armenia Since January 2014, the ICRANet Center in Yerevan has been established at the Presidium of the National Academy of Sciences of Armenia, at Marshall Baghramian Avenue, 24a. Scientific activities in this center are coordinated by the Director, Dr. Narek Sahakyan. In 2014, the Government of Armenia approved the Agreement to establish the ICRANet international center in Armenia. The Seat Agreement has been signed in Rome on February 14, 2015, by the director of ICRANet, Remo Ruffini and the Ambassador of Armenia in Italy, Mr. Sargis Ghazaryan. 
On November 13, 2015, the Parliament of Armenia unanimously approved the Seat Agreement. Since January 2016 ICRANet Armenia center is registered at the Ministry of Foreign Affairs as an international organization. The main areas of scientific research in ICRANet-Armenia are in the fields of relativistic astrophysics, astroparticle physics, X-ray astrophysics, high and very high energy gamma-ray astrophysics, high energy neutrino astrophysics. The center is a full member of the MAGIC international collaboration since 2017. Also, the center is actively involved in development of the Open Universe Initiative. In Armenia, the ICRANet center collaborates with other scientific institutions from the Academy and Universities, and provides to organize joint international meetings and workshops, summer schools for PhD students and mobility programs for scientists in the field of Astrophysics. ICRANet center in Armenia coordinates ICRANet activities in the area of Central-Asian and Middle-Eastern countries. A summer school and an international scientific conference dedicated to the issues of Relativistic Astrophysics "1st Scientific ICRANet Meeting in Armenia: Black Holes: the largest energy sources in the Universe" were held in Armenia from June 28 to July 4, 2014. ICRANet Center in Brazil The Seat of ICRANet in Rio de Janeiro has been established initially on the premises granted by CBPF, with the possible expansion to the Cassino da Urca. A school of Cosmology and Astrophysics is being developed jointly with Brazilian institutions. The 2nd ICRANet César Lattes Meeting devoted to relativistic astrophysics was held in Rio de Janeiro in 2015. Currently (2019) ICRANet has signed scientific collaboration agreements with 17 Brazilian universities, institutions and research centers. There are two specific programs initiated by ICRANet, which are underway: the possibility of restructuring the mountain side of the Cassino da Urca as the Seat of ICRANet for Brazil and Latin America (with a project by the Italian Architect Carlo Serafini), building of the Brazilian Science Data Center (BSDC), a novel astrophysics data base, built following the concept of the ASI Science Data Center (ASDC) by the Italian Space Agency, which will consist on a unique research infrastructure at the interface between experimental and theoretical astrophysicists. ICRANet Center in Minsk The ICRANet-Minsk center has been established at the National Academy of Science of Belarus (NASB), with whom ICRANet has signed a cooperation agreement on 2013. The Protocol for the opening of the ICRANet-Minsk center has been signed in April 2016. The "First ICRANet-Minsk workshop on high energy astrophysics" has been held at the ICRANet-Minsk center from 26 to 28 of April 2017. ICRANet Center in Isfahan The ICRANet Center in Isfahan has been established at the Isfahan University of Technology. The Protocol of cooperation, signed in 2016 by Remo Ruffini, Director of ICRANet, and Mahnoud Modarres-Hashemi, Rector of the Isfahan University of Technology, includes the promotion and development of scientific and technological research in the fields of cosmology, gravitation and relativistic astrophysics. It also includes the organization of joint international conferences and workshops, institutional exchanges for students, researchers and faculty members. ICRANet Centers in USA The present Chairman of the ICRANet Steering Committee Francis Everitt is responsible for the ICRANet Center at the Leland Stanford Junior University. 
His notable activity has been the conception, development, launch, data acquisition, and elaboration of the final data analysis of the NASA Gravity Probe B mission, one of the most complex physics experiments ever performed in space. The first Chairman of the ICRANet Steering Committee Fang Li-Zhi developed the collaboration with the Physics Department of the University of Arizona in Tucson. The collaboration with its Astronomy Department is promoted by David Arnett. ICRANet and IRAP PhD program Since 2005 ICRANet co-organizes an International Ph.D. program in Relativistic Astrophysics — International Relativistic Astrophysics Ph.D. Program, IRAP-PhD, the first joint PhD astrophysics program with: ASI - Italian Space Agency (Italy); Bremen University (Germany); Carl von Ossietzky University of Oldenburg (Germany); CAPES - Brazilian Federal Agency for Support and Evaluation of Graduate Education (Brazil); CBPF - Brazilian Centre for Physics Research (Brazil); CNR - National Research Council (Italy); FAPERJ -Foundation "Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro" (Brazil); ICRA - International Center for Relativistic Astrophysics (Italy); ICTP - Abdus Salam International Centre for Theoretical Physics (Italy); IHES - Institut Hautes Etudes Scientifiques (France); Indian centre for space physics (India); INFN - National Institute for Nuclear Physics (Italy); NAS RA - Armenian National Academy of Sciences (Armenia); Nice University Sophia Antipolis (France); Observatory of the Côte d'Azur (France); Rome University - “Sapienza” (Italy); Savoy University (France); TWAS - Academy of sciences for the developing world; UAM - Metropolitan Autonomous University (Mexico); UNIFE - University of Ferrara (Italy). Among the associated centers, there are both institutes devoted to theory and others devoted to experiments and observations. In that way, PhD students can have a wider education on theoretical relativistic astrophysics and put it in practice. The official language of the IRAP PhD is English and students have also the opportunity to learn the national language of their hosting country, attending several academic courses in the partner Universities. By 2019, 122 students were enrolled in the IRAP PhD program: 1 from Albania, 4 from Argentina, 8 from Armenia, 1 from Austria, 2 from Belarus, 16 from Brazil, 5 from China, 9 from Colombia, 3 from Croatia, 5 from France, 5 from Germany 7 from India, 2 from Iran, 38 from Italy 2 from Kazakhstan, 1 from Lebanon, 1 from Mexico, 1 from Pakistan, 4 from Russia, 1 from Serbia, 1 from Sweden, 1 from Switzerland, 1 from Saudi Arabia, 2 from Taiwan and 1 from Turkey. The IRAP-PhD program was the only European PhD program in Astrophysics awarded the Erasmus Mundus label and funded by the European Commission in 2010–2017. Scientific research at ICRANet ICRANet main goals are training, education and research in the field of relativistic astrophysics, cosmology, theoretical physics and mathematical physics. Its main activities are devoted to promote the international scientific co-operation and to carry on scientific research. 
According to the 2018 ICRANet Scientific Report, the main areas of scientific research in ICRANet are: High Energy Gamma-rays from Active Galactic Nuclei The ICRANet Brazilian Science Data Center (BSDC) and Multi-frequency selection and studies of blazars; Exact solutions of Einstein and Einstein-Maxwell equations; Gamma-Ray Bursts; Theoretical Astroparticle Physics; Generalization of the Kerr-Newman solution; Black Holes and Quasars; The electron-positron pairs in physics and astrophysics; From nuclei to compact stars; Supernovae; Symmetries in General Relativity; Self Gravitating Systems, Galactic Structures and Galactic Dynamics; Interdisciplinary Complex Systems. Between 2006 and 2019, ICRANet has released over 1800 scientific publications in refereed journals such as Physical Review, the Astrophysical Journal, Astronomy and Astrophysics etc., in its various fields of research. New scientific concepts and terms introduced by ICRANet scientists: Black hole (Ruffini, Wheeler 1971) Ergosphere (Rees, Ruffini, Wheeler, 1974) Pursue and plunge (Rees, Ruffini, Wheeler, 1974) Black hole mass formula (Christodoulou, Ruffini, 1971) Reversible and irreversible transformations of black holes (Christodoulou, Ruffini, 1971) Dyadosphere (Damour, Ruffini, 1975; Preparata, Ruffini, Xue, 1998) Dyadotorus (Cherubini et al., 2009) Induced Gravitational Collapse (Rueda, Ruffini, 2012) Binary-driven Hypernova (Ruffini et al., 2014) Cosmic matrix (Ruffini et al., 2015) Other activities International meetings The Galileo-Xu Guangqi meetings (2009-) The Galileo-Xu Guangqi meetings have been created in the name of Galileo and Xu Guangqi, the collaborator of Matteo Ricci (Ri Ma Dou), generally recognized for bringing to China the works of Euclid and Galileo and for his strong commitment to the process of modernization and scientific development of China. The 1st Galileo - Xu Guangqi Meeting was held in Shanghai, China, in 2009. The 2nd Galileo - Xu Guangqi meeting took place in Hanbury Botanic Gardens (Ventimiglia, Italy) and Villa Ratti (Nice, France) in 2010. The 3rd and 4th Galileo - Xu Guangqi meetings were both held in Beijing, China, respectively in 2011 and 2015. Italian-Korean Symposia (1987-) The Italian-Korean Symposia on Relativistic Astrophysics is a series of biannual meetings, alternatively organized in Italy and in Korea since 1987. The symposia discussions cover topics in astrophysics and cosmology, such as gamma-ray bursts and compact stars, high energy cosmic rays, dark energy and dark matter, general relativity, black holes, and new physics related to cosmology. Stueckelberg Workshops on Relativistic Field Theories (2006-2008) These workshops represent a one-week dialogues on Relativistic Field Theories in Curved Space, which is inspired to the work of E. C. G. Stueckelberg. Invited lectures were delivered by Professors Abhay Ashtekar, Thomas Thiemann, Gerard 't Hooft and Hagen Kleinert. The Zeldovich Meetings (2009-) The Zeldovich Meetings are a series of international conferences held in Minsk, in honor of Ya. B. Zeldovich, one of the fathers of the Soviet Atomic Bomb and the founder of the Russian School on Relativistic Astrophysics, which celebrate and discuss his wide research interests, ranging from chemical physics, elementary particle and nuclear physics to astrophysics and cosmology. 
The 1st Zeldovich Meeting was held at the Belarusian State University in Minsk, from 20 to 23 April 2009; the 2nd Zeldovich Meeting was held at the National Academy of Sciences of Belarus from 10 to 14 March 2014, to celebrate Ya. B. Zeldovich's 100th anniversary; the 3rd Zeldovich Meeting was held at the National Academy of Sciences of Belarus from 23 to 27 April 2018. Other meetings ICRANet has also organized: six Italian-Sino Workshops on Cosmology and Relativistic Astrophysics, held in Pescara every year from 2004 to 2009, except for the 5th Italian-Sino Workshop held in Taipei-Hualien, Taiwan, in 2008; two ICRANet César Lattes meetings (in 2007 and 2015) and the 1st URCA meeting on Relativistic Astrophysics in Rio de Janeiro, Brazil. PhD schools In the framework of the IRAP PhD program, ICRANet has organized several PhD schools: 11 of them have been held in Nice (France), 3 in Les Houches, 1 in Ferrara (Italy), 1 in Pescara (Italy) and 1 in Beijing (China). ICRANet visiting program ICRANet has developed a program of short and long term visits for scientific collaboration. Prominent personalities have carried out their activities at ICRA and ICRANet, among them: Riccardo Giacconi, Nobel Prize for Physics in 2002; Gerardus 't Hooft, Dutch physicist and Nobel Prize for Physics in 1999; Steven Weinberg, Nobel Prize in 1979; Murray Gell-Mann, Nobel Prize in 1969; Subrahmanyan Chandrasekhar, Nobel Prize in 1983; Theodor Hänsch, Nobel Prize in 2005; Vitaly Ginzburg, Nobel Prize in 2003; Francis Everitt, Chairman of the Steering Committee of ICRANet; Isaak Khalatnikov, Russian physicist and former director of the Landau Institute for Theoretical Physics from 1965 to 1992; Roy Kerr, New Zealand mathematician and discoverer of the "Kerr Metric"; Thibault Damour; Demetrios Christodoulou; Hagen Kleinert; Neta and John Bahcall; Tsvi Piran; Charles Misner; Robert Williams; José Gabriel Funes; Fang Li-Zhi; Rashid Sunyaev. Weekly seminars ICRANet co-organizes with ICRA the Joint Astrophysics Seminar at the Department of Physics of University "La Sapienza" in Rome. All institutions collaborating with ICRANet, as well as ICRANet centers, participate in those seminars. Brazilian Science Data Center The main objective of the Brazilian Science Data Center (BSDC) is to provide data from all existing international space missions at X-ray and gamma-ray wavelengths, and later across the whole electromagnetic spectrum, for all the galactic and extragalactic sources of the Universe. Special attention will be paid to achieving and fully complying with the standards defined by the International Virtual Observatory Alliance (IVOA). In addition to these specific objectives, BSDC will promote technical seminars and annual workshops, and it will carry out a plan of scientific outreach and popularization of science aimed at fostering understanding of the Universe. The BSDC is currently being implemented at CBPF, and at the Universidade Federal do Rio Grande do Sul (UFRGS), and will be expanded to all other ICRANet centers in Brazil as well as to the other Latin-American ICRANet Centers in Argentina, Colombia and Mexico: a unique coordinated continental research network planned for Latin America. References External links http://www.icranet.org/ ICRANet information brochure 2015 Astrophysics International research institutes International scientific organizations International organizations based in Europe
ICRANet
[ "Physics", "Astronomy" ]
5,251
[ "Astronomical sub-disciplines", "Astrophysics" ]
20,744,016
https://en.wikipedia.org/wiki/Health%20management%20system
The health management system (HMS) is an evolutionary medicine regulative process proposed by Nicholas Humphrey in which actuarial assessment of fitness and economic-type cost–benefit analysis determine the body's regulation of its physiology and health. The incorporation of the cost–benefit calculations into body regulation provides a science-grounded approach to mind–body phenomena such as placebos, which are otherwise not explainable by low-level, noneconomic, and purely feedback-based homeostatic or allostatic theories. Many medical symptoms such as inflammation, fever, pain, sickness behavior, or morning sickness have an evolutionary medicine function of enabling the body to protect, heal or restore itself from injury, infection or other physiological disruption. The deployment of self-treatments has costs as well as benefits, with the result that evolution has selected management processes in the brain such that self-treatments are used only when they provide an overall cost–benefit advantage. The brain controls such physiological processes through top–down regulation. External treatment and the availability of support are factored into the health management system's cost–benefit assessment as to whether or not to deploy an evolved self-treatment. Placebos are explained as the result of false information about the availability of external treatment and support that misleads the health management system into not deploying evolved self-treatments. This results in the placebo suppression of medical symptoms. Evolutionary medicine Since Hippocrates, it has been recognized that the body has self-healing powers (vis medicatrix naturae). Modern evolutionary medicine identifies them with physiologically based self-treatments that provide the body with prophylactic, healing, or restorative capabilities against injuries, infections and physiological disruption. Examples include: Immune responses Fever Sickness behavior Nausea Morning sickness Diarrhea Hypoferremia Depression Pain These evolved self-treatments deployed by the body are experienced by humans as unpleasant and unwanted illness symptoms. Deployment Such self-treatments, according to evolutionary medicine, are deployed to increase an individual's biological fitness. Two factors affect their deployment. First, it is usually advantageous to deploy them on a precautionary basis. As a result, it will often turn out that they have been deployed apparently unnecessarily, though this has in fact been advantageous since in probabilistic terms they have provided an insurance against a potentially costly outcome. As Nesse notes: "Vomiting, for example, may cost only a few hundred calories and a few minutes, whereas not vomiting may result in a 5% chance of death" (page 77). Second, self-treatments are costly both in the energy they use and in their risk of damaging the body. Immunity – energy for activating lymphocyte and antibody production, and the risk of an immune response resulting in an immune-related disorder. Fever – energy (each 1 °C rise in blood temperature increases energy expenditure by 10–15%; 90% of the total cost of fighting pneumonia goes on increased body temperature). There is also the risk of hyperpyrexia.
Sickness behavior – restricted ability by an animal to forage and defend itself Nausea – loss of food nutrients, and potential risk of aspiration Morning sickness – loss of food nutrients when a mother needs additional, not less, nourishment Hypoferremia – impairment in biological processes needing iron resulting in iron deficiency anemia Depression – impaired activity and problem solving. Pain – restricted movement and the inability to concentrate One factor in deployment is low level physiological control by proinflammatory cytokines such as IL-1 triggered by bacterial lipopolysaccharides (LPS). Another is higher level control in which the brain takes into account what it learns about circumstances and how that makes it well and ill. Conditioning shows the existence of such learnt control: give saccharin paired in a drink with a drug that creates immunosuppression, and later on, giving saccharin alone will produce immunosuppression. Such conditioning happens both in experimental rodents and humans. Cost benefit analysis Economic resource management Evolution, according to Nicholas Humphrey, has selected an internal health management system that uses cost benefit analysis upon whether the deployment of a self-treatment aids biological fitness, and so should be activated. a specially designed procedure for "economic resource management" that is, I believe, one of the key features of the "natural health-care service" which has evolved in ourselves and other animals to help us deal throughout our lives with repeated bouts of sickness, injury, and other threats to our well-being. An analogy is explicitly made with the health economics consideration used in management decisions involving external medical treatment. Now, if you wonder about this choice of managerial terminology for talking about biological healing systems, I should say that it is quite deliberate (and so is the pun on NHS.) With the phrase "natural health-care service" I do intend to evoke, at a biological level, all the economic connotations that are so much a part of modern health-care in society. External medications External medications will affect the cost benefits advantages of deploying an evolved self-treatment. Some animals use external ones. Wild animals, including apes, do so in the form of ingested detoxifying clays, rough leaves that clear gut parasites, and pharmacologically active plants Complementary to this, research finds that animals have the ability to select and prefer substances that aid their recuperation from illness. Social support The welfare of social animals (including humans) depends upon other individuals (social buffering). The actuarial assessments of the costs and benefits of deploying a self-treatment therefore will depend upon the presence, or not, of other individuals. The presence of helpful others will affect, for example, the risk of predators when incapacitated, and—in those case in which animals do this (such as humans)—the provision of food, and care during sickness. The health management system factors in the presence of such external treatment and social support as one aspect of the circumstances needed to determine whether it is advantageous to deploy or not an evolved self-treatment. Placebos False information All humans societies use external medications, and some individuals exist that are considered to have special healing knowledge about illnesses and their treatments. Humans are also usually supportive to those in their group. 
The availability of these things will affect the cost benefits of the body deploying its own biological ones. This could, in turn, lead to the health management system (given its beliefs (information) about treatments and support) to deploy or not, or doing so differently, the body's own treatments. Nicholas Humphrey describes how the health management system explains placebos – an external treatment without direct physiological effects – as follows: Suppose, for example, a doctor gives someone who is suffering an infection a pill that she rightly believes to contain an antibiotic: because her hopes will be raised she will no doubt make appropriate adjustments to her health-management strategy – lowering her precautionary defences in anticipation of the sickness not lasting long. The health management system, in other words, when faced with an infection is tricked into making a mistaken cost benefit analysis using false information. The effect of that false information is that the benefits of the self-treatment cease to outweigh its costs. As a result, it is not deployed, and an individual does not experience unwanted medical symptoms. Lack of harm Failure to deploy an evolved self-treatment need not put an individual at risk since evolution has advantaged their deployment on a precautionary basis. As Nicholas Humphrey notes: many of the health-care measures we've been discussing are precautionary measures designed to protect from dangers that lie ahead in an uncertain future. Pain is a way of making sure you give your body rest just in case you need it. Rationing the use of the immune system is a way of making sure you have the resources to cope with renewed attacks just in case they happen. Your healing systems are basically tending to be cautious, and sometimes over-cautious, as if working on the principle of better safe than sorry. Therefore, not deploying an evolved self-treatment, and so not having a medical symptom due to placebo false information might be without consequence. Central governor The health management system's idea of a top down neural control of the body is also found in the idea that a central governor regulates muscle fatigue to protect the body from the harmful effects (such as anoxia and hyperglycemia) of over prolonged exercise. The idea of a fatigue governor was first proposed in 1924 by the 1922 Nobel Prize winner Archibald Hill, and more recently, on the basis of modern research, by Tim Noakes. Like with the health management system, the central governor shares the idea that much of what is attributed to low level feedback homeostatic regulation is, in fact, due to top down control by the brain. The advantage of this top down management is that the brain can enhance such regulation by allowing it to be modified by information. For example, in endurance running, a cost-benefit trade-off exists between the advantages of continuing to run, and the risk if this is too prolonged that it might harm the body. Being able to regulate fatigue in terms of information about the benefits and costs of continued exercise would enhance biological fitness. Low level theories exist that suggest that fatigue is due mechanical failure of the exercising muscles ("peripheral fatigue"). However, such low level theories do not explain why running muscle fatigue is affected by information relevant to cost benefit trade offs. For example, marathon runners can carry on running longer if told they are near the finishing line, than far away. 
The existence of a central governor can explain this effect. See also Central governor Deployment cost–benefit selection in physiology Evolutionary medicine Health science Management control system Mind–body Neural top–down control of physiology Placebo effect Psychogenic disease Psychosomatic medicine References External links Richard Dawkins interviews Nicholas Humphrey upon Placebos Decision support systems Human homeostasis Mind–body interventions Physiology
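The deployment logic described throughout the article is, at bottom, an expected-cost comparison. The toy sketch below was written for this entry to make that explicit; the numbers echo Nesse's vomiting example quoted earlier, while the threshold rule and the "external support" discount are illustrative assumptions rather than a model taken from the source.

```python
def deploy_self_treatment(cost_of_treatment, p_harm_if_skipped, cost_of_harm,
                          external_support=0.0):
    """Toy cost-benefit rule: deploy the evolved self-treatment when its
    expected saving exceeds its own cost.

    external_support (0..1) discounts the expected harm, standing in for the
    system's belief that outside treatment or care is available.
    """
    expected_harm = (1.0 - external_support) * p_harm_if_skipped * cost_of_harm
    return expected_harm > cost_of_treatment

# Nesse's example: vomiting costs "a few hundred calories", while not vomiting
# risks a 5% chance of death (given an arbitrarily large cost in the same units).
print(deploy_self_treatment(cost_of_treatment=300,
                            p_harm_if_skipped=0.05,
                            cost_of_harm=1_000_000))        # True: deploy (vomit)
# A placebo, i.e. a false belief that effective external care is at hand,
# can flip the decision and suppress the symptom.
print(deploy_self_treatment(300, 0.05, 1_000_000,
                            external_support=0.999))        # False: symptom suppressed
```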
Health management system
[ "Technology", "Biology" ]
2,036
[ "Physiology", "Human homeostasis", "Decision support systems", "Information systems", "Homeostasis" ]
20,748,402
https://en.wikipedia.org/wiki/PSRK
PSRK (short for Predictive Soave–Redlich–Kwong) is an estimation method for the calculation of phase equilibria of mixtures of chemical components. The original goal for the development of this method was to enable the estimation of properties of mixtures containing supercritical components. This class of substances cannot be predicted with established models, for example UNIFAC. Principle PSRK is a group-contribution equation of state. This is a class of prediction methods that combines equations of state (mostly cubic) with activity coefficient models based on group contributions, such as UNIFAC. The activity coefficient model is used to adapt the equation-of-state parameters for mixtures by a so-called mixing rule. The use of an equation of state introduces all thermodynamic relations defined for equations of state into the PRSK model. This allows the calculation of densities, enthalpies, heat capacities, and other properties. Equations As stated previously, the PSRK model is based on a combination of the Soave–Redlich–Kwong equation of state with a mixing rule whose parameters are determined by the UNIFAC method. Equation of state The equation of state of Soave is defined as follows: The original α-function has been replaced by the function of Mathias–Copeman: The parameters of the Mathias–Copeman equation are fitted to experimental vapor-pressure data of pure components and provide a better description of the vapor pressure than the original relation. The form of the equation is chosen as it can be reduced to the original Soave form by setting the parameters c2 and c3 to zero. Additionally, the parameter c1 can be obtained from the acentric factor, using the relation This may be performed if no fitted Mathias–Copeman parameter is available. Mixing rule The PSRK mixing rule calculates the parameters a and b of the equation of state by and where the parameters ai and bi are those of the pure substances, their mole fractions are given by xi, and the excess Gibbs energy by gE. The excess Gibbs energy is calculated by a slightly modified UNIFAC model. Model parameters For the equation of state PSRK needs the critical temperature and pressure, additionally at a minimum the acentric factor for all pure components in the considered mixture is also required. The integrity of the model can be improved if the acentric factor is replaced by Mathias–Copeman constants fitted to experimental vapor-pressure data of pure components. The mixing rule uses UNIFAC, which needs a variety of UNIFAC-specific parameters. Aside from some model constants, the most important parameters are the group-interaction parameters — these are obtained from parametric fits to experimental vapor–liquid equilibria of mixtures. Hence, for high-quality model parameters, experimental data (pure-component vapor pressures and VLE of mixtures) are needed. These are normally provided by factual data banks, like the Dortmund Data Bank, which has been the base for the PSRK development. In few cases additionally needed data have been determined experimentally if no data have been available from other sources. The latest available parameters have been published in 2005. The further development is now taken over by the UNIFAC Consortium. Example calculation The prediction of a vapor–liquid equilibrium is successful even in mixtures containing supercritical components. However, the mixture has to be subcritical. In the given example carbon dioxide is the supercritical component with Tc = 304.19 K and Pc = 7475 kPa. 
The critical point of the mixture lies at T = 411 K and P ≈ 15000 kPa. The composition of the mixture is near 78 mole% carbon dioxide and 22 mole% cyclohexane. PSRK describes this binary mixture quite well, the dew point curve, as well as the bubble point curve and the critical point of the mixture. Model weaknesses In a PSRK follow-up work (VTPR) some model weaknesses are quoted: The gradient of the Mathias–Copeman α-function is without any thermodynamic background and, if extrapolated to higher temperatures, the described vapor-pressure curve tends to diverge. The Soave–Redlich–Kwong equation of state describes the vapor densities of pure components and mixtures quite well, but the deviations of the liquid-density prediction are high. For the VLE prediction of mixtures with components that have very differing sizes (e. g. ethanol, C2H6O, and eicosane, C20H42) larger systematic errors are found. Heats of mixing and activity coefficients at infinite dilution are predicted poorly. Literature External links Short PSRK description from the developers UNIFAC Consortium at the Carl von Ossietzky University Oldenburg (develops the PSRK model since 2005) Group assignment for PSRK and UNIFAC Thermodynamic models
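For reference, the equations named but not displayed in the text can be written out as follows. These are the standard textbook forms of the Soave–Redlich–Kwong equation of state and the Mathias–Copeman α-function, plus the commonly quoted PSRK mixing rule; they are reconstructed here as a sketch, so the symbols and the mixing-rule constant ($A_1 \approx -0.64663$) should be checked against the original PSRK publications rather than taken as the article's own notation.

```latex
% Soave–Redlich–Kwong equation of state
P = \frac{RT}{v - b} - \frac{a(T)}{v\,(v + b)}

% Mathias–Copeman \alpha-function (reduces to Soave's form when c_2 = c_3 = 0)
a(T) = a_c\,\alpha(T), \qquad
\alpha(T) = \Bigl[\,1 + c_1\bigl(1-\sqrt{T_r}\bigr) + c_2\bigl(1-\sqrt{T_r}\bigr)^2
            + c_3\bigl(1-\sqrt{T_r}\bigr)^3\Bigr]^2

% Soave relation used when only the acentric factor \omega is available
c_1 = 0.48 + 1.574\,\omega - 0.176\,\omega^2

% PSRK mixing rule (g^E from the slightly modified UNIFAC model; A_1 \approx -0.64663)
\frac{a}{b\,RT} = \sum_i x_i \frac{a_i}{b_i RT}
  + \frac{1}{A_1}\left[\frac{g^E}{RT} + \sum_i x_i \ln\frac{b}{b_i}\right],
\qquad b = \sum_i x_i b_i
```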
PSRK
[ "Physics", "Chemistry" ]
995
[ "Thermodynamic models", "Thermodynamics" ]
20,751,641
https://en.wikipedia.org/wiki/RMIT%20School%20of%20Aerospace%2C%20Mechanical%20and%20Manufacturing%20Engineering
The RMIT School of Aerospace, Mechanical and Manufacturing Engineering (also known as SAMME) was an Australian tertiary education school within the College of Science Engineering of RMIT University. The School consisted of three major disciplines, Aerospace and Aviation Engineering, Manufacturing and Materials Engineering and Mechanical and Automotive Engineering. Location The Department was located in the adjoining Buildings 56 & 57 (Level 9) at the City campus and also on the Bundoora East campus. Industry Partners Partners of the school included: Airbus, ADF, BMW, Boeing, Ford, GKN, Holden, RAeS, Subaru and Volkswagen Group. Sir Lawrence Wackett Aerospace Centre The Sir Lawrence Wackett Aerospace Centre is a research centre created in conjunction with the School of Mathematics. It is located in Port Melbourne adjoining other Aerospace companies. The centre aims to "Create new intellectual property in partnership with industry, through research and design that addresses real world issues, for commercial use and development.". In 2004 a Memorandum of Understanding was signed between the centre and the Indian National Aerospace Laboratory. The MoU involved the centre undertaking design work for Indian aircraft. In 2007 it was successful in winning a $5 Billion tender from the Australian Department of Defence over 20 years to upgrade and replace the helicopter fleet. National Aerospace Resource Centre The National Aerospace Resource Centre is a research collection partnership between the Royal Aeronautical Society's Australian Division and the RMIT School of Aerospace, Mechanical and Manufacturing Engineering. It consists of approximately 100,000 volumes, including: technical reports (from NASA, NACA, AMRL and DSTO), conference proceedings, books, videos, aircraft manuals and journals, and is housed at RMIT's Bundoora West Library. References School of Aerospace, Mechanical and Manufacturing Engineering, RMIT Mechanical engineering schools
RMIT School of Aerospace, Mechanical and Manufacturing Engineering
[ "Engineering" ]
360
[ "Mechanical engineering schools", "Mechanical engineering organizations", "Engineering universities and colleges" ]
20,753,688
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20spectra%20database
A nuclear magnetic resonance spectra database is an electronic repository of information concerning nuclear magnetic resonance (NMR) spectra. Such repositories can be downloaded as self-contained data sets or used online. The form in which the data is stored varies, ranging from line lists that can be graphically displayed to raw free induction decay (FID) data. Data is usually annotated in a way that correlates the spectral data with the related molecular structure. Data format Line list The form in which most NMR data is reported in literature papers. It is common for databases to display line lists graphically in a manner that is similar to how processed spectra might appear. These line lists, however, lack first and higher order splitting, satellites from low-abundance isotopes like carbon or platinum, as well as information concerning line width and other informative aspects of line shape. The advantage of a line list is that it requires a minimal amount of memory. Processed image Once an FID is processed into a spectrum it can be converted into an image that usually takes up less memory than the FID. This method requires more memory than a line list but supplies the user with considerably more information. The processed image has less information than a raw FID, but it also takes less memory, is easily displayed in browsers, and requires no specialty data-handling software. Raw FID file The raw free induction decay data obtained when performing the experiment are stored according to the formatting preferences of the instrument manufacturer. This data format contains the most information and requires the most storage space. A variety of commercial and free software programs allow users to process FID data into useful spectra once the FID data is downloaded. Common search methods Some database search methods are commonly available: Compound name — May include official IUPAC names and common names. Molecular formula — Either an exact formula or a range. Molecular structure — This method requires a molecular editor interface. Registration number — Commonly the CAS Registry Number, but most databases also have their own numbering scheme. Peak range or other spectral characteristics — The user numerically enters data related to a spectrum of an unknown compound. This data is used to search for compounds which share the shifts within specified constraints. This allows users to locate the exact compound or molecules with similar functional groups. Spectra search — Software is used to search a database for spectra that resemble a submitted spectrum. List of databases The following is a partial list of nuclear magnetic resonance spectra databases: ACD/Labs Advanced Chemistry Development (ACD/Labs) is a chemoinformatics company which produces software for use in handling NMR data and predicting NMR spectra. ACD/Labs offers the Aldrich library as an add-on to their general spectrum processing software and specialized NMR software products. The NMR predictors allow users to improve the prediction of NMR spectra by adding data to user training databases. The content databases used to train the prediction algorithms (HNMR DB, CNMR DB, FNMR DB, NNMR DB, and PNMR DB) also include references to instruments and literature. These databases can be either purchased or leased as libraries through individual or group contracts. Aldrich NMR Library A portion of this database is still available in a three-volume print version from Aldrich.
The full electronic version includes a supplement of spectra not included in the paper version. In all, this database includes more than 15,000 compounds with the associated 300 MHz 1H and 75 MHz 13C spectra. The product includes the software necessary to view and handle the NMR data. This database can be purchased as a library through individual or group contracts. The spectra data appear to be stored as images of processed FID data. Biological Magnetic Resonance Data Bank The Biological Magnetic Resonance Data Bank (BioMagResBank or BMRB) is sponsored by the Department of Biochemistry at the University of Wisconsin–Madison; it is dedicated to Proteins, Peptides, Nucleic Acids, and other Biomolecules. It stores a large variety of raw NMR data. Wiley's KnowItAll NMR Spectral Library Wiley offers a comprehensive collection of spectral data, including their Sadtler standard spectra. Their collection of NMR spectral data can be searched or used to build predictions; it includes CNMR, HNMR, and XNMR (F-19 NMR, P-31 NMR, N-15 NMR, etc.) spectra. ChemGate A database that was developed and maintained by the publisher John Wiley & Sons. This database included more than 700,000 NMR, IR and MS Spectra, statistics specific to the NMR spectra are not listed. The NMR data includes 1H,13C, 11B, 15N, 17O, 19F, 29Si, and 31P. The data were in the form of graphically displayed line lists. Access to the database could be purchased piecemeal or leased as the entire library through individual or group contracts. These data are now made available through Wiley Online Library. ChemSpider The ChemSpider chemical database accepts user submitted raw NMR data. The data in accepted in the JCAMP-DX format which can be actively viewed online with the JSpecView applet or the data can be downloaded for processing with other software packages. NMRShiftDB The NMRDShiftDB features a graphically displayed line list data. The data are hosted by Cologne University. Online access is free and user participation is encouraged. The data are available under the GNU FDL license. Contained 53972 measured spectra of, among other nuclei, 13C, 1H, 15N, 11B, 19F, 29Si, and 31P NMR as of March 4, 2021. SpecInfo on the Internet Available through Wiley Online Library (John Wiley & Sons), SpecInfo on the Internet NMR is a collection of approximately 440,000 NMR spectra (organized as 13C, 1H, 19F, 31P, and 29Si NMR databases). The data are accessed via the Internet using a Java interface and are stored in a server developed jointly with BASF. The software includes PDF report generation, spectrum prediction (database-trained and/or algorithm based), structure drawing, structure search, spectrum search, text field search, and more. Access to the databases is available to subscribers either as NMR only or combined with mass spectrometry and FT-IR data. Many of these data were also made available via ChemGate, described below. Coverage can be freely verified at Compound Search. A smaller collection of these data is still available via STN International. Spectral Database for Organic Compounds The Spectral Database for Organic Compounds (SDBS) is developed and maintained by Japan's National Institute of Advanced Industrial Science and Technology. SDBS includes 14700 1H NMR spectra and 13000 13C NMR spectra as well as FT-IR, Raman, ESR, and MS data. The data are stored and displayed as an image of the processed data. Annotation is achieved by a list of the chemical shifts correlated to letters which are also used to label a molecular line drawing. 
Access to the database is available free of charge for noncommercial use. Users are requested not to download more than 50 spectra and/or compound information in one day. Between 1997 and February 2008 the database was accessed more than 200 million times. T. Saito, K. Hayamizu, M. Yanagisawa and O. Yamamoto are credited with responsibility for the NMR data. See also Chemical database NMR spectroscopy References Nuclear magnetic resonance spectroscopy Chemical databases
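Several of the databases above exchange spectra as JCAMP-DX files (ChemSpider explicitly accepts that format), so a minimal reader can turn a peak table into a plottable line list. The sketch below is written for this entry and assumes a simple uncompressed (XY..XY) peak table; real JCAMP-DX files also allow compressed XYDATA encodings, comment lines and other records that this sketch does not handle, and the example file name is hypothetical.

```python
def read_jcamp_peak_table(path):
    """Very small JCAMP-DX reader: returns header fields and (x, y) peaks.

    Assumes an uncompressed ##PEAK TABLE=(XY..XY) block, i.e. data lines of
    "x,y x,y ..." pairs; compressed XYDATA encodings are not supported.
    """
    header, peaks, in_table = {}, [], False
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("##"):
                label, _, value = line[2:].partition("=")
                label = label.replace(" ", "").upper()  # labels are space/case-insensitive
                in_table = label == "PEAKTABLE"
                if not in_table and label != "END":
                    header[label] = value.strip()
            elif in_table and line:
                for pair in line.split():
                    x, _, y = pair.partition(",")
                    peaks.append((float(x), float(y)))
    return header, peaks

# Example usage (hypothetical file name):
# header, peaks = read_jcamp_peak_table("ethanol_1H.jdx")
# print(header.get("TITLE"), len(peaks))
```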
Nuclear magnetic resonance spectra database
[ "Physics", "Chemistry" ]
1,561
[ "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy", "Chemical databases", "Spectroscopy" ]
20,754,087
https://en.wikipedia.org/wiki/Monatin
Monatin, commonly known as arruva, is a naturally occurring, high intensity sweetener isolated from the plant Sclerochiton ilicifolius, found in the Transvaal region of South Africa. Monatin contains no carbohydrate or sugar, and nearly no food energy, unlike sucrose or other nutritive sweeteners. The name "monatin" is derived from the indigenous word for it, "molomo monate," which literally means "mouth nice." Monatin is an indole derivative and, upon degradation, smells like feces. It is 3000 times sweeter than sugar. See also Sugar substitute References External links Food additives Sugar substitutes Amino acids Alpha hydroxy acids Indoles Dicarboxylic acids
Monatin
[ "Chemistry" ]
158
[ "Amino acids", "Biomolecules by chemical classification" ]
20,754,622
https://en.wikipedia.org/wiki/Cement%20bond%20log
A cement bond log documents the evaluation of the integrity of cement work performed on an oil well. In the process of drilling and completing a well, cement is injected through the wellbore and rises up the annulus between the steel casing and the formation. A sonic tool is typically run on wireline by a service company that detects the bond of the cement to the casing and formation via a principle based on resonance. Casing that is not bound has a higher resonant vibration than that which is bound, causing the imparted energy from the sonic signal to be transferred to the formation. In this sense, the amplitude of the waveform received is the basic measurement that is evaluated. The data is collected by the tool and recorded on a log which is used by the oil producing company as an indicator of zonal isolation in the well. There are production reasons and legal reasons (governed by a petroleum regulatory body in each individual state) that dictate the well must have specific areas of isolation. References Well logging Petroleum engineering
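The amplitude reading described above is commonly interpreted relative to the response of completely unbonded ("free") pipe: low amplitude means the sonic energy is being transferred to the formation through the cement, high amplitude means the casing is ringing freely. The snippet below is a simplified illustration written for this entry, not any service company's actual algorithm; the free-pipe amplitude and the cut-off fractions are assumed example values.

```python
def classify_bond(amplitude_mv, free_pipe_amplitude_mv=60.0,
                  good_bond_fraction=0.2, poor_bond_fraction=0.8):
    """Rough qualitative read of a CBL amplitude value.

    Low amplitude relative to free pipe suggests the casing is well cemented;
    high amplitude suggests little or no cement bond.
    """
    ratio = amplitude_mv / free_pipe_amplitude_mv
    if ratio <= good_bond_fraction:
        return "good bond"
    if ratio >= poor_bond_fraction:
        return "free pipe / no bond"
    return "partial bond"

for amp in (5.0, 30.0, 58.0):          # example amplitudes in millivolts
    print(amp, classify_bond(amp))
```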
Cement bond log
[ "Engineering" ]
211
[ "Petroleum engineering", "Energy engineering", "Well logging" ]
23,593,482
https://en.wikipedia.org/wiki/C8H15NOS2
The molecular formula C8H15NOS2 (molar mass: 205.341 g/mol, exact mass: 205.0595 u) may refer to: Lipoamide 6-(Methylsulfinyl)hexyl isothiocyanate Molecular formulas
C8H15NOS2
[ "Physics", "Chemistry" ]
73
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,594,484
https://en.wikipedia.org/wiki/Borate%20buffered%20saline
Borate buffered saline (abbreviated BBS) is a buffer used in some biochemical techniques to maintain the pH within a relatively narrow range. Borate buffers have an alkaline buffering capacity in the 8–10 range. Boric acid has a pKa of 9.14 at 25 °C. Applications BBS has many uses because it is isotonic and has a strong bactericidal effect. It can be used to dilute substances and has applications in coating procedures. Additives such as Polysorbate 20 and milk powder can be used to extend BBS's functionality as a washing buffer or blocking buffer. Contents The following is a sample recipe for BBS: 10 mM sodium borate, 150 mM NaCl; adjust the pH to 8.2. The simplest way to prepare a BBS solution is to use BBS tablets, which are formulated to give a ready-to-use borate buffered saline solution upon dissolution in 500 ml of deionized water. The concentrations of borate and NaCl, as well as the pH, can vary, and the resulting solution would still be referred to as "borate buffered saline". The borate concentration (which provides the buffering capacity) can vary from 10 mM to 100 mM. As BBS is used to emulate physiological conditions (as in the animal or human body), the pH value is slightly alkaline, ranging from 8.0 to 9.0. NaCl provides the isotonic salt concentration (the most commonly used 150 mM NaCl corresponds to physiological 0.9% NaCl). References Buffer solutions
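As a quick, hedged illustration of the sample recipe above, the sketch below estimates the reagent masses needed per litre. It assumes that "sodium borate" means sodium tetraborate decahydrate (borax, 381.37 g/mol) and uses 58.44 g/mol for NaCl; other borate salts have different molar masses, so the figures are indicative only.

```python
# Back-of-the-envelope reagent calculation for 1 L of the sample BBS recipe.
# "Sodium borate" is assumed to be sodium tetraborate decahydrate (borax);
# adjust M_BORAX if a different borate salt is used.
M_BORAX = 381.37   # g/mol, Na2B4O7·10H2O (assumed)
M_NACL = 58.44     # g/mol

volume_l = 1.0
borate_mm = 10.0   # mM, from the sample recipe
nacl_mm = 150.0    # mM, from the sample recipe

borax_g = borate_mm / 1000 * M_BORAX * volume_l
nacl_g = nacl_mm / 1000 * M_NACL * volume_l

print(f"Borax: {borax_g:.2f} g, NaCl: {nacl_g:.2f} g per {volume_l:.1f} L; "
      "then adjust pH to 8.2 with acid or base.")
# Expected output: roughly 3.81 g borax and 8.77 g NaCl per litre.
```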
Borate buffered saline
[ "Chemistry", "Biology" ]
320
[ "Biochemistry stubs", "Buffer solutions", "Biotechnology stubs", "Biochemistry" ]
23,596,560
https://en.wikipedia.org/wiki/Optical%20contact%20bonding
Optical contact bonding is a glueless process whereby two closely conformal surfaces are joined, being held purely by intermolecular forces. History Isaac Newton has been credited with the first description of conformal interaction observed through the interference phenomenon known as Newton's rings, though it was S. D. Poisson in 1823 who first described the optical characteristics of two identical surfaces in contact. It was not until the 19th century that objects were made with such precision that the binding phenomenon was observed. The bond was referred to as "ansprengen" in German. By 1900, optical contact bonding was being employed in the construction of optical prisms, and the following century saw further research into the phenomenon at the same time that ideas of inter-atom interactions were first being studied. Explanation Intermolecular forces such as Van der Waals forces, hydrogen bonds, and dipole–dipole interactions are typically not sufficiently strong to hold two apparently conformal rigid bodies together, since the forces drop off rapidly with distance, and the actual area in contact between the two bodies is small due to surface roughness and minor imperfections. However, if the bodies are conformal to an accuracy of better than 10 angstroms (1 nanometer), then a sufficient surface area is in close enough contact for the intermolecular interactions to have an observable macroscopic effect—that is, the two objects stick together. Such a condition requires a high degree of accuracy and surface smoothness, which is typically found in optical components, such as prisms. Production of an optical contact bond In addition to both surfaces' being practically conformal (in practice often completely flat), the surfaces must also be extremely clean and free from any small contamination that would prevent or weaken the bond—including grease films and specks of dust. For bonding to occur, the surfaces need only to be brought together; the intermolecular forces draw the bodies into the lowest energy conformation, and no pressure needs to be applied. Advantages Since the method requires no binder, balsam or glue, the physical properties of the bound object are the same as those of the objects joined. Typically, glues and binders are more heat sensitive or have undesirable properties compared to the actual bodies being joined. The use of optical contact bonding allows the production of a final product with properties as good as those of the bulk solid. This can include temperature and chemical resistances, spectral absorption properties and reduced contamination from bonding materials. Uses Originally the process was confined to optical equipment such as prisms—the earliest examples being made around 1900. Later the range of use was expanded to microelectronics and other miniaturised devices. See also Adhesion – Property of attraction between unlike molecules Gauge blocks, which are joined temporarily in a similar fashion References External links Description of production of experimental 'Lab on a chip' via contact bonding RSC Publishing (Royal Society of Chemistry) Optical Fabrication: Optical Contacting Grows More Robust by Chris Myatt, Nick Traggis, and Kathy Li Dessau. Optics Intermolecular forces Materials science
Optical contact bonding
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
626
[ "Applied and interdisciplinary physics", "Optics", "Molecular physics", "Materials science", "Intermolecular forces", " molecular", "nan", "Atomic", " and optical physics" ]
23,597,565
https://en.wikipedia.org/wiki/Design%20II%20for%20Windows
DESIGN II for Windows is a rigorous process simulator for chemical and hydrocarbon processes including refining, refrigeration, petrochemical, gas processing, gas treating, pipelines, fuel cells, ammonia, methanol and hydrogen facilities. History In 1969 the DESIGN program was first offered on the University Computing Company (UCC) time-sharing services. The DISTILL column program was merged into DESIGN to create DESIGN 2000 in 1975, and in 1984 the REFINE column and crude feeds program was merged into DESIGN 2000 to create DESIGN II. In 1991 the Windows user interface was added to DESIGN II, and DESIGN II for Windows was born. WinSim Inc. has developed and marketed DESIGN II for Windows, a steady-state process simulator, since 1995, when the company purchased the rights to the program from ChemShare Corporation. Website: http://www.winsim.com/ References Chemical engineering software
Design II for Windows
[ "Chemistry", "Engineering" ]
184
[ "Chemical engineering software", "Chemical engineering" ]
23,599,134
https://en.wikipedia.org/wiki/Converted%20wetland
A converted wetland is one that has been drained, dredged, filled, leveled, or otherwise altered for the production of an agricultural commodity. The definition is part of the Highly Erodible Land Conservation and Wetland Conservation Compliance provisions (Swampbuster) introduced in the 1985 Farm Bill (also known as the Food Security Act of 1985). The provisions aim to reduce soil loss on erosion-prone lands and to protect wetlands for the multiple benefits they provide. Description Under the swampbuster program, converted wetlands are wetlands that were drained or altered to improve agricultural production after December 23, 1985, the date swampbuster was enacted. On lands with this designation, no drainage maintenance and no additional drainage are allowed. Lands converted before December 23, 1985, are called prior converted wetlands, and alterations to these lands are subject to less stringent requirements. Under swampbuster, there are no restrictions on either drainage maintenance or additional drainage on prior converted wetlands, which are estimated to total more than . The 48 conterminous US states have lost an estimated 53 percent of their original wetlands in the past 200 years. It is estimated that 87 percent of wetland conversions from the mid-1950s to the mid-1970s were due to agricultural conversion. The wetland conservation provisions have reduced wetland conversions and have helped preserve the environmental functions of wetlands, such as flood control, sediment control, groundwater recharge, water quality, and wildlife habitat, to name a few. See also Agricultural expansion Groundwater-dependent ecosystems Wetland conservation References External links United States Department of Agriculture Wetland conservation in the United States Ecological restoration
Converted wetland
[ "Chemistry", "Engineering" ]
314
[ "Ecological restoration", "Environmental engineering" ]
10,502,110
https://en.wikipedia.org/wiki/Trirhenium%20nonachloride
Trirhenium nonachloride is a compound with the formula ReCl3, sometimes also written Re3Cl9. It is a dark red hygroscopic solid that is insoluble in ordinary solvents. The compound is important in the history of inorganic chemistry as an early example of a cluster compound with metal-metal bonds. It is used as a starting material for synthesis of other rhenium complexes. Structure and physical properties As shown by X-ray crystallography trirhenium nonachloride consists of Re3Cl12 subunits that share three chloride bridges with adjacent clusters. The interconnected network of clusters forms sheets. Around each Re center are seven ligands, four bridging chlorides, one terminal chloride, and two Re-Re bonds. The hydrate is molecular with the formula Re3Cl9(H2O)3. The heat of oxidation is evaluated according to the equation: 1/3 Re3Cl9 + 4 OH− + 2 OCl− → ReO4− + 2 H2O + 5Cl− The enthalpy for this process is 190.7 ± 0.2 kcal/mol. Preparation and reactions The compound was discovered in 1932. Trirhenium nonachloride is efficiently prepared by thermal decomposition of rhenium pentachloride or hexachlororhenic(IV) acid: 3 ReCl5 → Re3Cl9 + 3 Cl2 If the sample is vacuum sublimed at 500 °C, the resulting material is comparatively unreactive. The partially hydrated material such as can be more useful synthetically. Other synthetic methods include treating rhenium with sulfuryl chloride. This process is sometimes conducted with the addition of aluminium chloride. It is also obtained by heating Re2(O2CCH3)4Cl2 under HCl: 3/2 Re2(O2CCH3)4Cl2 + 6 HCl → Re3Cl9 + 6 HO2CCH3 Reaction of the tri- and pentachlorides gives rhenium tetrachloride: 3 ReCl5 + Re3Cl9 → 6 ReCl4 References Rhenium compounds Chlorides Metal halides Substances discovered in the 1930s
Trirhenium nonachloride
[ "Chemistry" ]
466
[ "Chlorides", "Inorganic compounds", "Metal halides", "Salts" ]
10,504,376
https://en.wikipedia.org/wiki/Spectral%20element%20method
In the numerical solution of partial differential equations, a topic in mathematics, the spectral element method (SEM) is a formulation of the finite element method (FEM) that uses high-degree piecewise polynomials as basis functions. The spectral element method was introduced in a 1984 paper by A. T. Patera. Although Patera is credited with development of the method, his work was a rediscovery of an existing method (see Development History) Discussion The spectral method expands the solution in trigonometric series, a chief advantage being that the resulting method is of a very high order. This approach relies on the fact that trigonometric polynomials are an orthonormal basis for . The spectral element method chooses instead a high degree piecewise polynomial basis functions, also achieving a very high order of accuracy. Such polynomials are usually orthogonal Chebyshev polynomials or very high order Lagrange polynomials over non-uniformly spaced nodes. In SEM computational error decreases exponentially as the order of approximating polynomial increases, therefore a fast convergence of solution to the exact solution is realized with fewer degrees of freedom of the structure in comparison with FEM. In structural health monitoring, FEM can be used for detecting large flaws in a structure, but as the size of the flaw is reduced there is a need to use a high-frequency wave. In order to simulate the propagation of a high-frequency wave, the FEM mesh required is very fine resulting in increased computational time. On the other hand, SEM provides good accuracy with fewer degrees of freedom. Non-uniformity of nodes helps to make the mass matrix diagonal, which saves time and memory and is also useful for adopting a central difference method (CDM). The disadvantages of SEM include difficulty in modeling complex geometry, compared to the flexibility of FEM. Although the method can be applied with a modal piecewise orthogonal polynomial basis, it is most often implemented with a nodal tensor product Lagrange basis. The method gains its efficiency by placing the nodal points at the Legendre-Gauss-Lobatto (LGL) points and performing the Galerkin method integrations with a reduced Gauss-Lobatto quadrature using the same nodes. With this combination, simplifications result such that mass lumping occurs at all nodes and a collocation procedure results at interior points. The most popular applications of the method are in computational fluid dynamics and modeling seismic wave propagation. A-priori error estimate The classic analysis of Galerkin methods and Céa's lemma holds here and it can be shown that, if is the solution of the weak equation, is the approximate solution and : where is related to the discretization of the domain (ie. element length), is independent from , and is no larger than the degree of the piecewise polynomial basis. Similar results can be obtained to bound the error in stronger topologies. If As we increase , we can also increase the degree of the basis functions. In this case, if is an analytic function: where depends only on . The Hybrid-Collocation-Galerkin possesses some superconvergence properties. The LGL form of SEM is equivalent, so it achieves the same superconvergence properties. Development History Development of the most popular LGL form of the method is normally attributed to Maday and Patera. However, it was developed more than a decade earlier. 
First, there is the Hybrid-Collocation-Galerkin method (HCGM), which applies collocation at the interior Lobatto points and uses a Galerkin-like integral procedure at element interfaces. The Lobatto-Galerkin method described by Young is identical to SEM, while the HCGM is equivalent to these methods. This earlier work is ignored in the spectral literature. Related methods G-NI or SEM-NI are the most used spectral methods. The Galerkin formulation of spectral methods or spectral element methods, for G-NI or SEM-NI respectively, is modified and Gauss-Lobatto integration is used instead of integrals in the definition of the bilinear form and in the functional . Their convergence is a consequence of Strang's lemma. SEM is a Galerkin based FEM (finite element method) with Lagrange basis (shape) functions and reduced numerical integration by Lobatto quadrature using the same nodes. The pseudospectral method, orthogonal collocation, differential quadrature method, and G-NI are different names for the same method. These methods employ global rather than piecewise polynomial basis functions. The extension to a piecewise FEM or SEM basis is almost trivial. The spectral element method uses a tensor product space spanned by nodal basis functions associated with Gauss–Lobatto points. In contrast, the p-version finite element method spans a space of high order polynomials by nodeless basis functions, chosen approximately orthogonal for numerical stability. Since not all interior basis functions need to be present, the p-version finite element method can create a space that contains all polynomials up to a given degree with fewer degrees of freedom. However, some speedup techniques possible in spectral methods due to their tensor-product character are no longer available. The name p-version means that accuracy is increased by increasing the order of the approximating polynomials (thus, p) rather than decreasing the mesh size, h. The hp finite element method (hp-FEM) combines the advantages of the h and p refinements to obtain exponential convergence rates. Notes Numerical differential equations Partial differential equations Computational fluid dynamics Finite element method
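As a concrete, hedged illustration of the Legendre–Gauss–Lobatto (LGL) nodes and the reduced Gauss–Lobatto quadrature described above, the sketch below computes such nodes and weights with NumPy. It is not drawn from any particular SEM code; the function name lgl_nodes_weights is our own, and only the standard formulas for the nodes and weights are assumed.

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes_weights(N):
    """Legendre-Gauss-Lobatto nodes and quadrature weights for degree N (N+1 points).

    Interior nodes are the roots of P_N'(x); the endpoints -1 and +1 are added.
    The standard weights are w_i = 2 / (N (N+1) [P_N(x_i)]^2).
    """
    PN = legendre.Legendre.basis(N)
    interior = np.sort(PN.deriv().roots().real)
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2.0 / (N * (N + 1) * PN(x) ** 2)
    return x, w

# With a nodal Lagrange basis collocated at these points, Gauss-Lobatto
# quadrature makes the element mass matrix diagonal ("mass lumping").
x, w = lgl_nodes_weights(4)
print(x)            # 5 LGL nodes on the reference element [-1, 1]
print(w, w.sum())   # weights sum to 2, the length of the reference element
```

For N = 4 this reproduces the textbook weights 1/10, 49/90, 32/45, 49/90, 1/10, which is a quick way to check the routine.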
Spectral element method
[ "Physics", "Chemistry" ]
1,150
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
10,513,240
https://en.wikipedia.org/wiki/Microsomal%20ethanol%20oxidizing%20system
The microsomal ethanol oxidizing system (MEOS) is an alternate pathway of ethanol metabolism that occurs in the smooth endoplasmic reticulum in the oxidation of ethanol to acetaldehyde. While playing only a minor role in ethanol metabolism in average individuals, MEOS activity increases after chronic alcohol consumption. The MEOS pathway requires the CYP2E1 enzyme, part of the cytochrome P450 family of enzymes, to convert ethanol to acetaldehyde. Ethanol's affinity for CYP2E1 is lower than its affinity for alcohol dehydrogenase. MEOS activity is delayed in non-chronic alcohol consumption states because an increase in MEOS activity is correlated with an increase in production of CYP2E1, seen most conclusively in alcohol dehydrogenase-negative deer mice. The MEOS pathway converts ethanol to acetaldehyde by way of a redox reaction. In this reaction, ethanol is oxidized (losing two hydrogens) and O2 is reduced (by accepting hydrogen) to form H2O. NADPH is used as the hydrogen donor, forming NADP+. This process consumes ATP and dissipates heat, which has led to the hypothesis that long-term drinkers see an increase in resting energy expenditure. According to some studies, the increase in resting energy expenditure is explained by the MEOS "expending" nine calories per gram of ethanol metabolized versus the seven calories per gram of ethanol ingested, resulting in a net loss of two calories per gram of ethanol ingested. References Metabolic pathways
Microsomal ethanol oxidizing system
[ "Chemistry" ]
332
[ "Metabolic pathways", "Metabolism" ]
12,783,915
https://en.wikipedia.org/wiki/Bomab
The BOttle MAnnequin ABsorber phantom was developed by Bush in 1949 (Bush 1949) and has since been accepted in North America as the industry standard (ANSI 1995) for calibrating whole body counting systems. The phantom consists of 10 polyethylene bottles, either cylinders or elliptical cylinders, that represent the head, neck, chest, abdomen, thighs, calves, and arms. Each section is filled with a radioactive solution, in water, that has an amount of radioactivity proportional to the volume of each section. This simulates a homogeneous distribution of material throughout the body. The solution is also acidified and contains a stable element carrier so that the radioactivity does not plate out on the container walls. The phantom, which contains a known amount of radioactivity, can be used to calibrate the whole body counter by relating the observed response to the known amount of radioactivity. As different radioactive materials emit different energies of gamma photons, the calibration has to be repeated to cover the expected energy range: usually 120 to 2,000 keV. Examples of radioactive isotopes that are used for efficiency calibration include 57Co, 60Co, 88Y, 137Cs and 152Eu. Although the phantom was designed to be used lying down, it is used in any orientation. Other uses Performance testing: BOMAB phantoms are sometimes used by performance testing organizations to test operating assay facilities. Phantoms, containing known quantities of radioactive material, are sent to assay facilities as blind samples. Design characteristics: Phantoms can be used to evaluate the relative effect of size, shape and positioning on the performance of in vivo measurement equipment. Background: A water-filled BOMAB is often used to estimate the (blank) background for in vivo assay systems. Detection Limits: A BOMAB filled with approximately 140 g of potassium (the nominal potassium content of a 70 kg man, whose naturally occurring 40K provides the activity) is sometimes used to estimate the detection sensitivity of in vivo personnel counting systems. See also Computational human phantom Imaging phantom References External links Bush F. The integral dose received from a uniformly distributed radioactive isotope. British J Radiol. 22:96-102; 1949. Health Physics Society. Specifications for the Bottle Manikin Absorber Phantom. An American National Standard. New York: American National Standards Institute; ANSI/HPS N13.35; 1995. Radiobiology
Bomab
[ "Chemistry", "Biology" ]
485
[ "Radiobiology", "Radioactivity" ]
3,504,168
https://en.wikipedia.org/wiki/Differential%20entropy
Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Claude Shannon to extend the idea of (Shannon) entropy (a measure of average surprisal) of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not. The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP). Differential entropy (described here) is commonly encountered in the literature, but it is a limiting case of the LDDP, and one that loses its fundamental association with discrete entropy. In terms of measure theory, the differential entropy of a probability measure is the negative relative entropy from that measure to the Lebesgue measure, where the latter is treated as if it were a probability measure, despite being unnormalized. Definition Let be a random variable with a probability density function whose support is a set . The differential entropy or is defined as For probability distributions which do not have an explicit density function expression, but have an explicit quantile function expression, , then can be defined in terms of the derivative of i.e. the quantile density function as . As with its discrete analog, the units of differential entropy depend on the base of the logarithm, which is usually 2 (i.e., the units are bits). See logarithmic units for logarithms taken in different bases. Related concepts such as joint, conditional differential entropy, and relative entropy are defined in a similar fashion. Unlike the discrete analog, the differential entropy has an offset that depends on the units used to measure . For example, the differential entropy of a quantity measured in millimeters will be more than the same quantity measured in meters; a dimensionless quantity will have differential entropy of more than the same quantity divided by 1000. One must take care in trying to apply properties of discrete entropy to differential entropy, since probability density functions can be greater than 1. For example, the uniform distribution has negative differential entropy; i.e., it is better ordered than as shown now being less than that of which has zero differential entropy. Thus, differential entropy does not share all properties of discrete entropy. The continuous mutual information has the distinction of retaining its fundamental significance as a measure of discrete information since it is actually the limit of the discrete mutual information of partitions of and as these partitions become finer and finer. Thus it is invariant under non-linear homeomorphisms (continuous and uniquely invertible maps), including linear transformations of and , and still represents the amount of discrete information that can be transmitted over a channel that admits a continuous space of values. For the direct analogue of discrete entropy extended to the continuous space, see limiting density of discrete points. Properties of differential entropy For probability densities and , the Kullback–Leibler divergence is greater than or equal to 0 with equality only if almost everywhere. Similarly, for two random variables and , and with equality if and only if and are independent. The chain rule for differential entropy holds as in the discrete case . Differential entropy is translation invariant, i.e. for a constant . 
Differential entropy is in general not invariant under arbitrary invertible maps. In particular, for a constant For a vector valued random variable and an invertible (square) matrix In general, for a transformation from a random vector to another random vector with same dimension , the corresponding entropies are related via where is the Jacobian of the transformation . The above inequality becomes an equality if the transform is a bijection. Furthermore, when is a rigid rotation, translation, or combination thereof, the Jacobian determinant is always 1, and . If a random vector has mean zero and covariance matrix , with equality if and only if is jointly gaussian (see below). However, differential entropy does not have other desirable properties: It is not invariant under change of variables, and is therefore most useful with dimensionless variables. It can be negative. A modification of differential entropy that addresses these drawbacks is the relative information entropy, also known as the Kullback–Leibler divergence, which includes an invariant measure factor (see limiting density of discrete points). Maximization in the normal distribution Theorem With a normal distribution, differential entropy is maximized for a given variance. A Gaussian random variable has the largest entropy amongst all random variables of equal variance, or, alternatively, the maximum entropy distribution under constraints of mean and variance is the Gaussian. Proof Let be a Gaussian PDF with mean μ and variance and an arbitrary PDF with the same variance. Since differential entropy is translation invariant we can assume that has the same mean of as . Consider the Kullback–Leibler divergence between the two distributions Now note that because the result does not depend on other than through the variance. Combining the two results yields with equality when following from the properties of Kullback–Leibler divergence. Alternative proof This result may also be demonstrated using the calculus of variations. A Lagrangian function with two Lagrangian multipliers may be defined as: where g(x) is some function with mean μ. When the entropy of g(x) is at a maximum and the constraint equations, which consist of the normalization condition and the requirement of fixed variance , are both satisfied, then a small variation δg(x) about g(x) will produce a variation δL about L which is equal to zero: Since this must hold for any small δg(x), the term in brackets must be zero, and solving for g(x) yields: Using the constraint equations to solve for λ0 and λ yields the normal distribution: Example: Exponential distribution Let be an exponentially distributed random variable with parameter , that is, with probability density function Its differential entropy is then Here, was used rather than to make it explicit that the logarithm was taken to base e, to simplify the calculation. Relation to estimator error The differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable and estimator the following holds: with equality if and only if is a Gaussian random variable and is the mean of . Differential entropies for various distributions In the table below is the gamma function, is the digamma function, is the beta function, and γE is Euler's constant. Many of the differential entropies are from. Variants As described above, differential entropy does not share all properties of discrete entropy. 
For example, the differential entropy can be negative; also it is not invariant under continuous coordinate transformations. Edwin Thompson Jaynes showed in fact that the expression above is not the correct limit of the expression for a finite set of probabilities. A modification of differential entropy adds an invariant measure factor to correct this, (see limiting density of discrete points). If is further constrained to be a probability density, the resulting notion is called relative entropy in information theory: The definition of differential entropy above can be obtained by partitioning the range of into bins of length with associated sample points within the bins, for Riemann integrable. This gives a quantized version of , defined by if . Then the entropy of is The first term on the right approximates the differential entropy, while the second term is approximately . Note that this procedure suggests that the entropy in the discrete sense of a continuous random variable should be . See also Information entropy Self-information Entropy estimation References External links Entropy and information Information theory Statistical randomness
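As a hedged, self-contained check of the closed forms discussed in the article (the Gaussian maximizer and the exponential-distribution example), the sketch below evaluates the defining integral numerically with SciPy. The helper name differential_entropy and the integration limits are our own choices, and entropies are reported in nats.

```python
import numpy as np
from scipy import integrate, stats

def differential_entropy(pdf, a, b):
    """Numerically evaluate h(X) = -integral of f(x) ln f(x) dx over [a, b], in nats."""
    h, _ = integrate.quad(lambda x: -pdf(x) * np.log(pdf(x)), a, b)
    return h

sigma, lam = 2.0, 1.5

# Gaussian: closed form is 0.5 * ln(2 * pi * e * sigma^2).
h_gauss = differential_entropy(stats.norm(scale=sigma).pdf, -30, 30)
print(h_gauss, 0.5 * np.log(2 * np.pi * np.e * sigma**2))

# Exponential with rate lambda: closed form is 1 - ln(lambda).
h_exp = differential_entropy(stats.expon(scale=1 / lam).pdf, 1e-12, 40)
print(h_exp, 1 - np.log(lam))
```

The two printed pairs should agree to many decimal places, which also illustrates the non-invariance under scaling: doubling sigma increases the Gaussian entropy by ln 2.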
Differential entropy
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,560
[ "Telecommunications engineering", "Physical quantities", "Applied mathematics", "Entropy and information", "Computer science", "Entropy", "Information theory", "Dynamical systems" ]
3,505,214
https://en.wikipedia.org/wiki/Margaret%20Geller
Margaret J. Geller (born December 8, 1947) is an American astrophysicist at the Center for Astrophysics Harvard & Smithsonian. Her work has included pioneering maps of the nearby universe, studies of the relationship between galaxies and their environment, and the development and application of methods for measuring the distribution of matter in the universe. Career Geller made pioneering maps of large-scale structure in the universe. Geller received a Bachelor of Arts degree in Physics at the University of California, Berkeley (1970) and a Ph.D. in Physics from Princeton (1974). Geller completed her doctoral dissertation, titled "Bright galaxies in rich clusters: a statistical model for magnitude distributions", under the supervision of James Peebles. Although Geller was thinking about studying solid state physics in graduate school, Charles Kittel suggested she go to Princeton to study astrophysics. After research fellowships at the Center for Astrophysics Harvard & Smithsonian and the Institute of Astronomy in Cambridge, England, she became an assistant professor of Astronomy at Harvard University (1980-1983). She then joined the permanent scientific staff of the Smithsonian Astrophysical Observatory, a partner in the Center for Astrophysics Harvard & Smithsonian. Geller is a Fellow of the American Association for the Advancement of Science and a Fellow of the American Physical Society. In 1990, she was elected as a Fellow of the American Academy of Arts and Sciences. Two years later, she was elected to the Physics section of the US National Academy of Sciences. From 2000 to 2003, she served on the Council of the National Academy of Sciences. She has received seven honorary degrees (D. S. H. C. or L. H. C.). Research Geller is known for observational and theoretical work in cosmology and extragalactic astronomy. Her long-range goals are to discover what the universe looks like and to understand how the patterns we observe today evolved. In the 1980s, she made pioneering maps of the nearby universe, which included the Great Wall and was the inspiration for Jasper Johns's 2020 piece called Slice. Her SHELS project maps the distribution of dark matter in the universe. With the 6.5-m MMT, she leads a deeper survey of the middle-aged universe called HectoMAP. Geller has developed innovative techniques for investigating the structure and mass of clusters of galaxies and the relationship between clusters and their surroundings. Geller is also a co-discoverer of hypervelocity stars, which may be an important tracer of the matter distribution in the Galaxy. Films and Public Lectures Geller has made several films for public education. Her 8-minute video Where the Galaxies Are (1989) was the first graphic voyage through the observed universe and was awarded a CINE Gold Eagle. A later 40-minute film, So Many Galaxies...So Little Time, contains more sophisticated prize-winning (IEEE/Siggraph) graphics and was on display at the National Air and Space Museum. Geller has lectured extensively to public audiences around the world. She has lectured twice in the main amphitheater at the Chautauqua Institution. She is included in NPR's list of The Best Commencement Speeches, Ever. Her story about her entry into astrophysics and meeting the renowned astrophysicist John Archibald Wheeler, entitled "Mapping the Universe", was published by The Story Collider podcast on May 21, 2014. Books Geller's work is discussed in Physics in the Twentieth Century. 
Popular articles by Geller appear with those by Robert Woodrow Wilson, David Todd Wilkinson, J. Anthony Tyson and Vera Rubin in Beyond Earth: Mapping the Universe and with others by Alan Lightman, Robert Kirshner, Vera Rubin, Alan Guth, and James E. Gunn in Bubbles, Voids and Bumps in Time: The New Cosmology. Awards and honors 1989 Newcomb Cleveland Prize of the American Association for the Advancement of Science along with John P. Huchra for "Mapping the Universe" 1990 MacArthur Foundation Fellowship 1990 American Academy of Arts and Science 1992 National Academy of Sciences 1993 Helen Sawyer Hogg Lecture of the Canadian Astronomical Society 1996 Klopsteg Memorial Award of the American Association of Physics Teachers 1997 New York Public Library Library Lion 2003 La Medaille de l'ADION of Nice Observatory 2008 Magellanic Premium by the American Philosophical Society for her research into the groupings of galaxies. 2009 Honorary Degree (D.S.H.C.) from Colby College 2010 Henry Norris Russell Lectureship of the American Astronomical Society 2010 James Craig Watson Medal of the National Academy of Sciences 2013 Julius Edgar Lilienfeld Prize of the American Physical Society 2014 Karl Schwarzschild Medal of the German Astronomical Society 2014 Honorary Degree (D.S.H.C.) from Dartmouth College 2017 Honorary Degree (L.H.C.) from University of Turin References Further reading External links Margaret Geller's homepage at the Smithsonian Astrophysical Observatory at the Accademia delle Scienzia di Torino, April 2017 at the University of Turin, April 2017 at the 2013 meeting of the American Physical Society at Chautauqua Caught in the Cosmic Web Research Features Interview Living people 1947 births Discoverers of astronomical objects Fellows of the American Academy of Arts and Sciences Harvard University faculty MacArthur Fellows Academics of the University of Cambridge Scientists from Ithaca, New York Princeton University alumni Smithsonian Institution people University of California, Berkeley alumni American women astronomers Members of the United States National Academy of Sciences Fellows of the American Physical Society Fellows of the American Association for the Advancement of Science Harvard–Smithsonian Center for Astrophysics people
Margaret Geller
[ "Astronomy" ]
1,125
[ "Astronomers", "Astronomical objects", "Discoverers of astronomical objects" ]
3,508,610
https://en.wikipedia.org/wiki/Ostwald%20ripening
Ostwald ripening is a phenomenon observed in solid solutions and liquid sols that involves the change of an inhomogeneous structure over time, in that small crystals or sol particles first dissolve and then redeposit onto larger crystals or sol particles. Dissolution of small crystals or sol particles and the redeposition of the dissolved species on the surfaces of larger crystals or sol particles was first described by Wilhelm Ostwald in 1896. For colloidal systems, Ostwald ripening is also found in water-in-oil emulsions, while flocculation is found in oil-in-water emulsions. Mechanism This thermodynamically-driven spontaneous process occurs because larger particles are more energetically favored than smaller particles. This stems from the fact that molecules on the surface of a particle are energetically less stable than the ones in the interior. Consider a cubic crystal of atoms: all the atoms inside are bonded to 6 neighbours and are quite stable, but atoms on the surface are only bonded to 5 neighbors or fewer, which makes these surface atoms less stable. Large particles are more energetically favorable since, continuing with this example, more atoms are bonded to 6 neighbors and fewer atoms are at the unfavorable surface. As the system tries to lower its overall energy, molecules on the surface of a small particle (energetically unfavorable, with only 3 or 4 or 5 bonded neighbors) will tend to detach from the particle and diffuse into the solution. Kelvin's equation describes the relationship between the radius of curvature and the chemical potential between the surface and the inner volume: where corresponds to the chemical potential, to the surface tension, to the atomic volume and to the radius of the particle. The chemical potential of an ideal solution can also be expressed as a function of the solute’s concentration if liquid and solid phases are in equilibrium. where corresponds to the Boltzmann constant, to the temperature and to the solute concentration in a solution in which the solid and the liquid phase are in equilibrium. Combining both expressions the following equation is obtained: Thus, the equilibrium concentration, , is lower around bigger particles than it is around smaller particles. where and are the particles radius, and . Inferring from Fick’s first law of diffusion, the particles will move from big concentrations, corresponding to areas surrounding small particles, to small concentrations, corresponding to areas surrounding large nanoparticles. Thus, the small particles will tend to shrink while the big particles will grow. As a result, the average size of the nanoparticles in the solution will grow, and the dispersion of sizes will decrease. Therefore, if a solution is left for a long time, in the extreme case of , its particles would evolve until they would finally form a single huge spherical particle to minimize the total surface area. The history of research progress in quantitatively modeling Ostwald ripening is long, with many derivations. In 1958, Lifshitz and Slyozov performed a mathematical investigation of Ostwald ripening in the case where diffusion of material is the slowest process. They began by stating how a single particle grows in a solution. This equation describes where the boundary is between small, shrinking particles and large, growing particles. 
They finally conclude that the average radius of the particles ⟨R⟩, grows as follows: where Note that the quantity is different from , and that the statement that ⟨R⟩ goes as relies on being zero; but because nucleation is a separate process from growth, this places outside the bounds of validity of the equation. In contexts where the actual value of is irrelevant, an approach that respects the meanings of all terms is to take the time derivative of the equation to eliminate and . Another such approach is to change the to with the initial time having a positive value. Also contained in the Lifshitz and Slyozov derivation is an equation for the size distribution function of particles. For convenience, the radius of particles is divided by the average radius to form a new variable, ρ = . Three years after that Lifshitz and Slyozov published their findings (in Russian, 1958), Carl Wagner performed his own mathematical investigation of Ostwald ripening, examining both systems where diffusion was slow and also where attachment and detachment at the particle surface was slow. Although his calculations and approach were different, Wagner came to the same conclusions as Lifshitz and Slyozov for slow-diffusion systems. This duplicate derivation went unnoticed for years because the two scientific papers were published on opposite sides of the Iron Curtain in 1961. It was not until 1975 that Kahlweit addressed the fact that the theories were identical and combined them into the Lifshitz-Slyozov-Wagner or LSW theory of Ostwald ripening. Many experiments and simulations have shown LSW theory to be robust and accurate. Even some systems that undergo spinodal decomposition have been shown to quantitatively obey LSW theory after initial stages of growth. Wagner derived that when attachment and detachment of molecules is slower than diffusion, then the growth rate becomes where is the reaction rate constant of attachment with units of length per time. Since the average radius is usually something that can be measured in experiments, it is fairly easy to tell if a system is obeying the slow-diffusion equation or the slow-attachment equation. If the experimental data obeys neither equation, then it is likely that another mechanism is taking place and Ostwald ripening is not occurring. Although LSW theory and Ostwald ripening were intended for solids ripening in a fluid, Ostwald ripening is also observed in liquid-liquid systems, for example, in an oil-in-water emulsion polymerization. In this case, Ostwald ripening causes the diffusion of monomers (i.e. individual molecules or atoms) from smaller droplets to larger droplets due to greater solubility of the single monomer molecules in the larger monomer droplets. The rate of this diffusion process is linked to the solubility of the monomer in the continuous (water) phase of the emulsion. This can lead to the destabilization of emulsions (for example, by creaming and sedimentation). Controlled Ostwald Ripening Inhibition of sulfathiazole crystal growth by polyvinylpyrrolidone. The polymer forms a noncondensed netlike film over the sulfathiazole crystal, allowing the crystal to grow out only through the openings of the net. The growth is thus controlled by the pore size of the polymer network at the crystal surface. The smaller the pore size, the higher is the supersaturation of the solution required for the crystals to grow. 
Specific examples One example of Ostwald ripening is the re-crystallization of water within ice cream which gives old ice cream a gritty, crunchy texture. Larger ice crystals grow at the expense of smaller ones within the ice cream, creating a coarser texture. Another gastronomical example is the ouzo effect, where the droplets in the cloudy microemulsion grow by Ostwald ripening. In geology, it is the textural coarsening, aging or growth of phenocrysts and crystals in solid rock which is below the solidus temperature. It is often ascribed as a process in the formation of orthoclase megacrysts, as an alternative to the physical processes governing crystal growth from nucleation and growth rate thermochemical limitations. In aqueous solution chemistry and precipitates ageing, the term refers to the growth of larger crystals from those of smaller size which have a higher solubility than the larger ones. In the process, many small crystals formed initially (nuclei) slowly disappear, except for a few that grow larger, at the expense of the small crystals (crystal growth). The smaller crystals act as fuel for the growth of bigger crystals. Limiting Ostwald ripening is fundamental in modern technology for the solution synthesis of quantum dots. Ostwald ripening is also the key process in the digestion and aging of precipitates, an important step in gravimetric analysis. The digested precipitate is generally purer, and easier to wash and filter. Ostwald ripening can also occur in emulsion systems, with molecules diffusing from small droplets to large ones through the continuous phase. When a miniemulsion is desired, an extremely hydrophobic compound is added to stop this process from taking place. Diffusional growth of larger drops in liquid water clouds in the atmosphere at the expense of smaller drops is also characterized as Ostwald ripening. See also Aggregation Coalescence (chemistry) Coalescence (physics) Critical radius Kirkendall effect Rock microstructure Viedma ripening References External links Ostwald Ripening a 3D Kinetic Monte Carlo simulation Physical chemistry Chemical engineering thermodynamics Colloidal chemistry Crystallographic defects Precipitation
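The LSW result discussed above predicts that, for diffusion-limited coarsening, the cube of the mean particle radius grows linearly in time. The sketch below is a hedged numerical illustration of that cube-root growth law; the rate constant K and the initial radius are arbitrary placeholder values chosen for illustration, not material data.

```python
import numpy as np

# LSW diffusion-limited coarsening: <R(t)>^3 = <R(0)>^3 + K * t,
# so the mean radius grows as t^(1/3) at long times.
R0 = 5e-9                             # initial mean radius in metres (assumed)
K = 1e-28                             # coarsening rate constant in m^3/s (assumed placeholder)
times = np.linspace(0.0, 3600.0, 7)   # one hour, sampled every 10 minutes

mean_radius = (R0**3 + K * times) ** (1.0 / 3.0)
for t, R in zip(times, mean_radius):
    print(f"t = {t:6.0f} s   <R> = {R * 1e9:5.2f} nm")
```

With these placeholder numbers the mean radius grows from 5 nm to roughly 8 nm over an hour, and the deceleration of growth at long times reflects the 1/3 exponent rather than any change in the rate constant.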
Ostwald ripening
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,836
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Crystallographic defects", "Chemical engineering", "Materials science", "Colloids", "Surface science", "Crystallography", "Chemical engineering thermodynamics", "nan", "Materials degradation", "Physical chemistry" ]
3,508,737
https://en.wikipedia.org/wiki/Plutonium%20tetrafluoride
Plutonium(IV) fluoride is a chemical compound with the formula PuF4. This salt is generally a brown solid but can appear a variety of colors depending on the grain size, purity, moisture content, lighting, and presence of contaminants. Its primary use in the United States has been as an intermediary product in the production of plutonium metal for nuclear weapons usage. Formation Plutonium(IV) fluoride is produced in the reaction between plutonium dioxide (PuO2) or plutonium(III) fluoride (PuF3) with hydrofluoric acid (HF) in a stream of oxygen (O2) at 450 to 600 °C. The main purpose of the oxygen stream is to avoid reduction of the product by hydrogen gas, small amounts of which are often found in HF. PuO2 + O2 + 4 HF → PuF4 + O2 + 2 H2O 4 PuF3 + O2 + 4 HF → 4 PuF4 + 2 H2O Laser irradiation of plutonium hexafluoride (PuF6) at wavelengths under 520 nm causes it to decompose into plutonium pentafluoride (PuF5) and fluorine; if this is continued, plutonium(IV) fluoride is obtained. Properties In terms of its structure, solid plutonium(IV) fluoride features 8-coordinate Pu centers interconnected by doubly bridging fluoride ligands. Reaction of plutonium tetrafluoride with barium, calcium, or lithium at 1200 °C give Pu metal: PuF4 + 2 Ba → 2 BaF2 + Pu PuF4 + 2 Ca → 2 CaF2 + Pu PuF4 + 4 Li → 4 LiF + Pu References Plutonium(IV) compounds Fluorides Metal halides Nuclear materials Actinide halides
Plutonium tetrafluoride
[ "Physics", "Chemistry" ]
397
[ "Inorganic compounds", "Salts", "Materials", "Nuclear materials", "Metal halides", "Fluorides", "Matter" ]
3,510,274
https://en.wikipedia.org/wiki/Centrosymmetry
In crystallography, a centrosymmetric point group contains an inversion center as one of its symmetry elements. In such a point group, for every point (x, y, z) in the unit cell there is an indistinguishable point (-x, -y, -z). Such point groups are also said to have inversion symmetry. Point reflection is a similar term used in geometry. Crystals with an inversion center cannot display certain properties, such as the piezoelectric effect and the frequency doubling effect (second-harmonic generation). In addition, in such crystals, one-photon absorption (OPA) and two-photon absorption (TPA) processes are mutually exclusive, i.e., they do not occur simultaneously, and provide complementary information. The following space groups have inversion symmetry: the triclinic space group 2, the monoclinic 10-15, the orthorhombic 47-74, the tetragonal 83-88 and 123-142, the trigonal 147, 148 and 162-167, the hexagonal 175, 176 and 191-194, the cubic 200-206 and 221-230. Point groups lacking an inversion center (non-centrosymmetric) can be polar, chiral, both, or neither. A polar point group is one whose symmetry operations leave more than one common point unmoved. A polar point group has no unique origin because each of those unmoved points can be chosen as one. One or more unique polar axes could be made through two such collinear unmoved points. Polar crystallographic point groups include 1, 2, 3, 4, 6, m, mm2, 3m, 4mm, and 6mm. A chiral (often also called enantiomorphic) point group is one containing only proper (often called "pure") rotation symmetry. No inversion, reflection, roto-inversion or roto-reflection (i.e., improper rotation) symmetry exists in such a point group. Chiral crystallographic point groups include 1, 2, 3, 4, 6, 222, 422, 622, 32, 23, and 432. Chiral molecules such as proteins crystallize in chiral point groups. The remaining non-centrosymmetric crystallographic point groups, written −4, −42m, −6, −6m2, and −43m (the minus signs denoting the overbar of Hermann–Mauguin roto-inversion notation), are neither polar nor chiral. See also Centrosymmetric matrix Rule of mutual exclusion References Symmetry
Centrosymmetry
[ "Physics", "Mathematics" ]
530
[ "Geometry", "Symmetry" ]
3,510,908
https://en.wikipedia.org/wiki/Minkowski%20functional
In mathematics, in the field of functional analysis, a Minkowski functional (after Hermann Minkowski) or gauge function is a function that recovers a notion of distance on a linear space. If is a subset of a real or complex vector space then the or of is defined to be the function valued in the extended real numbers, defined by where the infimum of the empty set is defined to be positive infinity (which is a real number so that would then be real-valued). The set is often assumed/picked to have properties, such as being an absorbing disk in , that guarantee that will be a real-valued seminorm on In fact, every seminorm on is equal to the Minkowski functional (that is, ) of any subset of satisfying (where all three of these sets are necessarily absorbing in and the first and last are also disks). Thus every seminorm (which is a defined by purely algebraic properties) can be associated (non-uniquely) with an absorbing disk (which is a with certain geometric properties) and conversely, every absorbing disk can be associated with its Minkowski functional (which will necessarily be a seminorm). These relationships between seminorms, Minkowski functionals, and absorbing disks is a major reason why Minkowski functionals are studied and used in functional analysis. In particular, through these relationships, Minkowski functionals allow one to "translate" certain properties of a subset of into certain properties of a function on The Minkowski function is always non-negative (meaning ). This property of being nonnegative stands in contrast to other classes of functions, such as sublinear functions and real linear functionals, that do allow negative values. However, might not be real-valued since for any given the value is a real number if and only if is not empty. Consequently, is usually assumed to have properties (such as being absorbing in for instance) that will guarantee that is real-valued. Definition Let be a subset of a real or complex vector space Define the of or the associated with or induced by as being the function valued in the extended real numbers, defined by (recall that the infimum of the empty set is , that is, ). Here, is shorthand for For any if and only if is not empty. The arithmetic operations on can be extended to operate on where for all non-zero real The products and remain undefined. Some conditions making a gauge real-valued In the field of convex analysis, the map taking on the value of is not necessarily an issue. However, in functional analysis is almost always real-valued (that is, to never take on the value of ), which happens if and only if the set is non-empty for every In order for to be real-valued, it suffices for the origin of to belong to the or of in If is absorbing in where recall that this implies that then the origin belongs to the algebraic interior of in and thus is real-valued. Characterizations of when is real-valued are given below. Motivating examples Example 1 Consider a normed vector space with the norm and let be the unit ball in Then for every Thus the Minkowski functional is just the norm on Example 2 Let be a vector space without topology with underlying scalar field Let be any linear functional on (not necessarily continuous). Fix Let be the set and let be the Minkowski functional of Then The function has the following properties: It is : It is : for all scalars It is : Therefore, is a seminorm on with an induced topology. This is characteristic of Minkowski functionals defined via "nice" sets. 
There is a one-to-one correspondence between seminorms and the Minkowski functional given by such sets. What is meant precisely by "nice" is discussed in the section below. Notice that, in contrast to a stronger requirement for a norm, need not imply In the above example, one can take a nonzero from the kernel of Consequently, the resulting topology need not be Hausdorff. Common conditions guaranteeing gauges are seminorms To guarantee that it will henceforth be assumed that In order for to be a seminorm, it suffices for to be a disk (that is, convex and balanced) and absorbing in which are the most common assumption placed on More generally, if is convex and the origin belongs to the algebraic interior of then is a nonnegative sublinear functional on which implies in particular that it is subadditive and positive homogeneous. If is absorbing in then is positive homogeneous, meaning that for all real where If is a nonnegative real-valued function on that is positive homogeneous, then the sets and satisfy and if in addition is absolutely homogeneous then both and are balanced. Gauges of absorbing disks Arguably the most common requirements placed on a set to guarantee that is a seminorm are that be an absorbing disk in Due to how common these assumptions are, the properties of a Minkowski functional when is an absorbing disk will now be investigated. Since all of the results mentioned above made few (if any) assumptions on they can be applied in this special case. Convexity and subadditivity A simple geometric argument that shows convexity of implies subadditivity is as follows. Suppose for the moment that Then for all Since is convex and is also convex. Therefore, By definition of the Minkowski functional But the left hand side is so that Since was arbitrary, it follows that which is the desired inequality. The general case is obtained after the obvious modification. Convexity of together with the initial assumption that the set is nonempty, implies that is absorbing. Balancedness and absolute homogeneity Notice that being balanced implies that Therefore Algebraic properties Let be a real or complex vector space and let be an absorbing disk in is a seminorm on is a norm on if and only if does not contain a non-trivial vector subspace. for any scalar If is an absorbing disk in and then If is a set satisfying then is absorbing in and where is the Minkowski functional associated with that is, it is the gauge of In particular, if is as above and is any seminorm on then if and only if If satisfies then Topological properties Assume that is a (real or complex) topological vector space (TVS) (not necessarily Hausdorff or locally convex) and let be an absorbing disk in Then where is the topological interior and is the topological closure of in Importantly, it was assumed that was continuous nor was it assumed that had any topological properties. Moreover, the Minkowski functional is continuous if and only if is a neighborhood of the origin in If is continuous then Minimal requirements on the set This section will investigate the most general case of the gauge of subset of The more common special case where is assumed to be an absorbing disk in was discussed above. Properties All results in this section may be applied to the case where is an absorbing disk. Throughout, is any subset of The proofs of these basic properties are straightforward exercises so only the proofs of the most important statements are given. 
The proof that a convex subset that satisfies is necessarily absorbing in is straightforward and can be found in the article on absorbing sets. For any real so that taking the infimum of both sides shows that This proves that Minkowski functionals are strictly positive homogeneous. For to be well-defined, it is necessary and sufficient that thus for all and all real if and only if is real-valued. The hypothesis of statement (7) allows us to conclude that for all and all scalars satisfying Every scalar is of the form for some real where and is real if and only if is real. The results in the statement about absolute homogeneity follow immediately from the aforementioned conclusion, from the strict positive homogeneity of and from the positive homogeneity of when is real-valued. Examples If is a non-empty collection of subsets of then for all where Thus for all If is a non-empty collection of subsets of and satisfies then for all The following examples show that the containment could be proper. Example: If and then but which shows that its possible for to be a proper subset of when The next example shows that the containment can be proper when the example may be generalized to any real Assuming that the following example is representative of how it happens that satisfies but Example: Let be non-zero and let so that and From it follows that That follows from observing that for every which contains Thus and However, so that as desired. Positive homogeneity characterizes Minkowski functionals The next theorem shows that Minkowski functionals are those functions that have a certain purely algebraic property that is commonly encountered. If holds for all and real then so that Only (1) implies (3) will be proven because afterwards, the rest of the theorem follows immediately from the basic properties of Minkowski functionals described earlier; properties that will henceforth be used without comment. So assume that is a function such that for all and all real and let For all real so by taking for instance, it follows that either or Let It remains to show that It will now be shown that if or then so that in particular, it will follow that So suppose that or in either case for all real Now if then this implies that that for all real (since ), which implies that as desired. Similarly, if then for all real which implies that as desired. Thus, it will henceforth be assumed that a positive real number and that (importantly, however, the possibility that is or has not yet been ruled out). Recall that just like the function satisfies for all real Since if and only if so assume without loss of generality that and it remains to show that Since which implies that (so in particular, is guaranteed). It remains to show that which recall happens if and only if So assume for the sake of contradiction that and let and be such that where note that implies that Then This theorem can be extended to characterize certain classes of -valued maps (for example, real-valued sublinear functions) in terms of Minkowski functionals. For instance, it can be used to describe how every real homogeneous function (such as linear functionals) can be written in terms of a unique Minkowski functional having a certain property. Characterizing Minkowski functionals on star sets Characterizing Minkowski functionals that are seminorms In this next theorem, which follows immediately from the statements above, is assumed to be absorbing in and instead, it is deduced that is absorbing when is a seminorm. 
It is also not assumed that is balanced (which is a property that is often required to have); in its place is the weaker condition that for all scalars satisfying The common requirement that be convex is also weakened to only requiring that be convex. Positive sublinear functions and Minkowski functionals It may be shown that a real-valued subadditive function on an arbitrary topological vector space is continuous at the origin if and only if it is uniformly continuous, where if in addition is nonnegative, then is continuous if and only if is an open neighborhood in If is subadditive and satisfies then is continuous if and only if its absolute value is continuous. A is a nonnegative homogeneous function that satisfies the triangle inequality. It follows immediately from the results below that for such a function if then Given the Minkowski functional is a sublinear function if and only if it is real-valued and subadditive, which happens if and only if and is convex. Correspondence between open convex sets and positive continuous sublinear functions Let be an open convex subset of If then let and otherwise let be arbitrary. Let be the Minkowski functional of where this convex open neighborhood of the origin satisfies Then is a continuous sublinear function on since is convex, absorbing, and open (however, is not necessarily a seminorm since it is not necessarily absolutely homogeneous). From the properties of Minkowski functionals, we have from which it follows that and so Since this completes the proof. See also Notes References Further reading F. Simeski, A. M. P. Boelens, and M. Ihme. "Modeling Adsorption in Silica Pores via Minkowski Functionals and Molecular Electrostatic Moments". Energies 13 (22) 5976 (2020). Convex analysis Functional analysis Hermann Minkowski
Minkowski functional
[ "Mathematics" ]
2,549
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
116,790
https://en.wikipedia.org/wiki/Hygroscopy
Hygroscopy is the phenomenon of attracting and holding water molecules via either absorption or adsorption from the surrounding environment, which is usually at normal or room temperature. If water molecules become suspended among the substance's molecules, adsorbing substances can become physically changed, e.g. changing in volume, boiling point, viscosity or some other physical characteristic or property of the substance. For example, a finely dispersed hygroscopic powder, such as a salt, may become clumpy over time due to collection of moisture from the surrounding environment. Deliquescent materials are sufficiently hygroscopic that they dissolve in the water they absorb, forming an aqueous solution. Hygroscopy is essential for many plant and animal species' attainment of hydration, nutrition, reproduction and/or seed dispersal. Biological evolution created hygroscopic solutions for water harvesting, filament tensile strength, bonding and passive motion – natural solutions being considered in future biomimetics. Etymology and pronunciation The word hygroscopy () uses combining forms of hygro- (for moisture or humidity) and -scopy. Unlike any other -scopy word, it no longer refers to a viewing or imaging mode. It did begin that way, with the word hygroscope referring in the 1790s to measuring devices for humidity level. These hygroscopes used materials, such as certain animal hairs, that appreciably changed shape and size when they became damp. Such materials were then said to be hygroscopic because they were suitable for making a hygroscope. Eventually, the word hygroscope ceased to be used for any such instrument in modern usage, but the word hygroscopic (tending to retain moisture) lived on, and thus also hygroscopy (the ability to do so). Nowadays an instrument for measuring humidity is called a hygrometer (hygro- + -meter). History Early hygroscopy literature began circa 1880. Studies by Victor Jodin (Annales Agronomiques, October 1897) focused on the biological properties of hygroscopicity. He noted pea seeds, both living and dead (without germinative capacity), responded similarly to atmospheric humidity, their weight increasing or decreasing in relation to hygrometric variation. Marcellin Berthelot viewed hygroscopicity from the physical side, a physico-chemical process. Berthelot's principle of reversibility, briefly- that water dried from plant tissue could be restored hygroscopically, was published in "Recherches sur la desiccation des plantes et des tissues végétaux; conditions d'équilibre et de réversibilité," (Annales de Chimie et de Physique, April 1903). Léo Errera viewed hygroscopicity from perspectives of the physicist and the chemist. His memoir "Sur l'Hygroscopicité comme cause de l'action physiologique à distance" (Recueil de l'lnstitut Botanique Léo Errera, Université de Bruxelles, tome vi., 1906) provided a hygroscopy definition that remains valid to this day. Hygroscopy is "exhibited in the most comprehensive sense, as displayed Overview Hygroscopic substances include cellulose fibers (such as cotton and paper), sugar, caramel, honey, glycerol, ethanol, wood, methanol, sulfuric acid, many fertilizer chemicals, many salts and a wide variety of other substances. If a compound dissolves in water, then it is considered to be hydrophilic. Zinc chloride and calcium chloride, as well as potassium hydroxide and sodium hydroxide (and many different salts), are so hygroscopic that they readily dissolve in the water they absorb: this property is called deliquescence. 
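As a rough, hedged illustration of the deliquescence behaviour just described (the quantitative criterion, comparing the vapour pressure of the resulting solution with the partial pressure of water vapour in the air, is stated a little further on in this overview), the following Python sketch treats each salt as having a deliquescence relative humidity (DRH) above which it dissolves in the water it absorbs. The DRH figures, dictionary and function names are assumptions for illustration only and are not values taken from the article.

```python
# Assumed, approximate deliquescence relative humidities near 25 degrees C,
# expressed as fractions (0..1); illustrative values only.
ASSUMED_DRH_25C = {
    "sodium chloride": 0.75,
    "calcium chloride": 0.30,
    "potassium carbonate": 0.43,
}

def deliquesces(salt: str, relative_humidity: float) -> bool:
    """Return True when ambient RH exceeds the salt's assumed DRH,
    i.e. when the air holds more water vapour than the saturated
    solution's vapour pressure can balance."""
    return relative_humidity > ASSUMED_DRH_25C[salt]

for rh in (0.25, 0.50, 0.80):
    print(rh, {salt: deliquesces(salt, rh) for salt in ASSUMED_DRH_25C})
```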
Not only is sulfuric acid hygroscopic in concentrated form but its solutions are hygroscopic down to concentrations of 10% v/v or below. A hygroscopic material will tend to become damp and cakey when exposed to moist air (such as the salt inside salt shakers during humid weather). Because of their affinity for atmospheric moisture, desirable hygroscopic materials might require storage in sealed containers. Some hygroscopic materials, e.g., sea salt and sulfates, occur naturally in the atmosphere and serve as cloud seeds, cloud condensation nuclei (CCNs). Being hygroscopic, their microscopic particles provide an attractive surface for moisture vapour to condense and form droplets. Modern-day human cloud seeding efforts began in 1946. When added to foods or other materials for the express purpose of maintaining moisture content, hygroscopic materials are known as humectants. Materials and compounds exhibit different hygroscopic properties, and this difference can lead to detrimental effects, such as stress concentration in composite materials. The volume of a particular material or compound is affected by ambient moisture and may be considered its coefficient of hygroscopic expansion (CHE) (also referred to as CME, or coefficient of moisture expansion) or the coefficient of hygroscopic contraction (CHC)—the difference between the two terms being a difference in sign convention. Differences in hygroscopy can be observed in plastic-laminated paperback book covers—often, in a suddenly moist environment, the book cover will curl away from the rest of the book. The unlaminated side of the cover absorbs more moisture than the laminated side and increases in area, causing a stress that curls the cover toward the laminated side. This is similar to the function of a thermostat's bimetallic strip. Inexpensive dial-type hygrometers make use of this principle using a coiled strip. Deliquescence is the process by which a substance absorbs moisture from the atmosphere until it dissolves in the absorbed water and forms a solution. Deliquescence occurs when the vapour pressure of the solution that is formed is less than the partial pressure of water vapour in the air. While some similar forces are at work here, it is different from capillary attraction, a process where glass or other solid substances attract water, but are not changed in the process (e.g., water molecules do not become suspended between the glass molecules). Deliquescence Deliquescence, like hygroscopy, is also characterized by a strong affinity for water and tendency to absorb moisture from the atmosphere if exposed to it. Unlike hygroscopy, however, deliquescence involves absorbing sufficient water to form an aqueous solution. Most deliquescent materials are salts, including calcium chloride, magnesium chloride, zinc chloride, ferric chloride, carnallite, potassium carbonate, potassium phosphate, ferric ammonium citrate, ammonium nitrate, potassium hydroxide, and sodium hydroxide. Owing to their very high affinity for water, these substances are often used as desiccants, which is also an application for concentrated sulfuric and phosphoric acids. Some deliquescent compounds are used in the chemical industry to remove water produced by chemical reactions (see drying tube). Biology Hygroscopy appears in both plant and animal kingdoms, the latter benefiting via hydration and nutrition. Some amphibian species secrete a hygroscopic mucus that harvests moisture from the air. 
Orb web building spiders produce hygroscopic secretions that preserve the stickiness and adhesion force of their webs. One aquatic reptile species is able to travel beyond aquatic limitations, onto land, due to its hygroscopic integument. Plants benefit from hygroscopy via hydration and reproduction – demonstrated by convergent evolution examples. Hygroscopic movement (hygrometrically activated movement) is integral in fertilization, seed/spore release, dispersal and germination. The phrase "hygroscopic movement" originated in 1904's "Vorlesungen Über Pflanzenphysiologie", translated in 1907 as "Lectures on Plant Physiology" (Ludwig Jost and R.J. Harvey Gibson, Oxford, 1907). When movement becomes larger scale, affected plant tissues are colloquially termed hygromorphs. Hygromorphy is a common mechanism of seed dispersal as the movement of dead tissues responds to hygrometric variation, e.g. spore release from the fertile margins of Onoclea sensibilis. Movement occurs when plant tissue matures, dies and desiccates, cell walls drying, shrinking; and also when humidity re-hydrates plant tissue, cell walls enlarging, expanding. The direction of the resulting force depends upon the architecture of the tissue and is capable of producing bending, twisting or coiling movements. Hygroscopic hydration examples Air plants, a Tillandsia species, are epiphytes that use their degenerated, non-nutritive roots to anchor upon rocks or other plants. Hygroscopic leaves absorb their necessary moisture from humidity in the air. The collected water molecules are transported from leaf surfaces to an internal storage network via osmotic pressure with capacity sufficient for the plant's growing requirements. The file snake (Acrochordus granulatus), from a family known as completely aquatic, has hygroscopic skin that serves as a water reservoir, retarding desiccation, allowing it to travel out of water. Another example is the sticky capture silk found in spider webs, e.g. from the orb-weaver spider (Larinioides cornutus). This spider, as is typical, coats its threads with a self-made hydrogel, an aggregate blend of glycoproteins, low molecular mass organic and inorganic compounds (LMMCs), and water. The LMMCs are hygroscopic, and thus so is the glue, its moisture-absorbing properties using environmental humidity to keep the capture silk soft and tacky. The waxy monkey tree frog (Phyllomedusa sauvagii) and the Australian green tree frog (Litoria caerulea) benefit from two hygroscopically-enabled hydration processes: transcutaneous uptake of condensation on their skin and reduced evaporative water loss due to the condensed water film barrier covering their skin. Condensation volume is enhanced by the hygroscopic secretions they wipe across their granular skin. Some toads use hygroscopic secretions to reduce evaporative water loss, Anaxyrus sp. being an example. The venomous secretion from its parotoid gland also includes hygroscopic glycosaminoglycans. When the toad wipes this protective secretion on its body, its skin becomes moistened by the surrounding environmental humidity, considered an aid in water balance. Red clover (Trifolium pratense), white clover (Trifolium repens), yellow bush lupine (Lupinus arboreus) and several members of the legume family have a hygroscopic hilar valve (hilum) that controls seed embryo moisture levels. The saguaro (Carnegiea gigantea), another eudicot species, also has hygroscopic seeds shown to imbibe up to 20% atmospheric moisture, by weight. 
Functionally, the hilar valve allows water vapor to enter or exit to ensure viability, while blocking liquid water. If, however, humidity levels gradually rise to a high enough level, the hilar valve remains open, allowing liquid water passage for germination. Physiologically, the inner and outer epidermides have independent hilar valve control. The outer epidermis has columnar-shaped cells, annularly arranged about the hilum. These counter palisade cells, being hygroscopic, respond to external humidity by swelling and closing the hilar valve during high humidity, preventing water absorption into the seed. Reversibly, they shrivel, opening the valve during low humidity, allowing the seed to expel excess moisture. The inner epidermis, inside the seed's impermeable integument, has palisade epidermis cells, a second annularly arranged hygroscopic layer attuned to the embryo's moisture level. There exists a moisture tension between inner and outer palisade cells. For the hilum to close, this moisture tension needs to exceed some minimum level (14–25% for these species). While the hilar valve is open (i.e., low outer humidity), if the humidity suddenly increases, the moisture tension reaches that protective threshold and the hilum closes, preventing moisture (liquid water) from entering. If, however, the outer humidity rises gradually, implying suitable growing conditions, the moisture tension level doesn't immediately exceed the threshold, keeping the hilum open and enabling the gradual moisture entry necessary for imbibition. Hygroscopic-assisted propagation examples Typical of hygroscopic movement are plant tissues with "closely packed long (columnar) parallel thick-walled cells (that) respond by expanding longitudinally when exposed to humidity and shrinking when dried (Reyssat et al., 2009)". Cell orientation, pattern structure (annular, planar, bi-layered or tri-layered) and the effects of the opposite-surface's cell orientation control the hygroscopic reaction. Moisture responsive seed encapsulations rely on valves opening when exposed to wetting or drying; discontinuous tissue structures provide such predetermined breaking points (sutures), often implemented via reduced cell wall thickness or seams within bi- or tri-layered structures. Graded distributions varying in density and/or cell orientation focus hygroscopic movement, frequently observed as biological actuators (a hinge function); e.g. pinecones (Pinus spp.), the ice plant (Aizoaceae spp.) and the wheat awn (Triticum spp.), described below. Hygroscopic bi-layered cell arrays act as a capitulum hinge in some plants, Xerochrysum bracteatum and Syngonanthus elegans being examples. The hygroscopic bending of involucral bracts surrounding a capitulum contributes to flower protection and pollination and assists dispersion by protecting delicate pappi filaments from entanglement or destruction by precipitation, e.g. Taraxacum (dandelions). In nature these involucral bracts have a diurnal rhythm. The whorl of hygroscopic bracts bends outward, exposing the capitulum during the day, then inward, closing it at night, as the relative humidity shifts in response to the daily temperature change. Bracts are scarious, the hinge and blade composed exclusively of dead cells (Nishikawa et al., 2008), allowing the hygroscopically activated bracts to function from flowering through achene dispersal. 
Physiologically, the bract's lower section is the source of the hinge-like function, consisting of sclerenchyma-like abaxial (inner petal) tissue, parenchyma and adaxial epidermis (outer petal tissue). Bract cell wall composition is rather uniform but its cells gradually change in orientation. The bract's hygroscopic bending is due to the differing cell orientations of its inner and outer epidermides, causing adaxial–abaxial force gradients between opposing sides that change with moisture; thus, the aggregate hygrometric force, in whorl unison, controls the capitulum's repetitive opening and closing. Some trees and shrubs in fire-prone regions evolved a dual-stage hygroscopic dispersal: an initial thermo-sensitive enabling (extreme heat or fire), then a serotinous hygroresponsive seed release. Examples are the woody fruits of Myrtaceae (e.g. Eucalyptus species plurimae, Melaleuca spp.) and Proteaceae (e.g. Hakea spp., Banksia spp., Xylomelum spp.) and the woody cones of Pinaceae (e.g. Pinus spp.) and the cypress family (Cupressaceae), e.g. the giant sequoia (Sequoiadendron giganteum). Typical in lodgepole pine (Pinus contorta), Eucalyptus, and Banksia are resin-sealed seed encapsulations that require the heat of fire to physically melt the resin, enabling serotinous seed release. Such seed encapsulations may "reduce seed loss or damage from granivores, desiccation, and fire (Moya et al., 2008; Talluto & Benkman, 2014; Lamont et al., 2016, 2020)." The similarity of dual-stage dispersal techniques between different clades, angiosperms and gymnosperms, can be interpreted as a result of convergent evolution (e.g. Clarke et al., 2013). Banksia attenuata, typical of Banksia spp., has a seed-bearing follicle composed of a bi-layer hygroscopic cell network. The woody follicle is thermo-sensitive, then hygroresponsive; serotinous humidity opening the ventral suture and exposing seed when germination conditions are favorable. Physiologically, the heat-sensitive follicle valves of Banksia spp. are sealed by a wax (resin) layer, released by high ambient temperatures (fire), "thereby facilitating opening (e.g. Huss et al., 2018)." The follicle mesocarp consists of high density branched fiber bundles; the endocarp, low density parallel fibers. A suture is caused by differential hygroscopic movements between layers, their microfibril structures having a large angle disparity (microfibril angle (MFA) γ = 75–90°). Pine cone scales (Pinaceae spp.) employ a hygromorphic hinge for their seed release. Physiology involves a bi-layered structure of closely packed long parallel thick-walled cells. Fiber alignments within layers are non-uniform, varying longitudinally, producing different microfibril angles (MFAs) of 30° and 74° between layers over the span of the scale. The region of greatest MFA, the hinge knuckle, is a small region near the scale and midrib (central stem) union. In mature pine cones the outer scale layer is the controlling tissue, its long thick-walled cells responding longitudinally to environmental humidity. Distortion occurs in the knuckle region as movement of the outer layer overtakes that of the more passive inner scale layer, forcing the scale to bend or flex. The remainder of the scale is hygroscopically passive, though it amplifies apex displacement via length and geometry; e.g. bending the scale closed with hydration or flexing it open with dehydration, releasing seed. 
Flowering plants of the Asteraceae family have hygroscopically-influenced dispersion, coordinating anemochory (wind dispersal) with favorable environmental conditions, common in A. genera Erigeron, Leontodon, Senecio, Sonchus and Taraxacum. As an example, the flight-enabling pappus of the common dandelion achene undergoes binary morphing (opened or closed) of its whisker-like filaments, in unison with chorused responses of the remaining achenes. Pappus movement is controlled via a hygroscopic actuator in the apical plate, at the beak's top, the locus for all the achene's filaments. High humidity causes each pappus to close, contracting its radially patterned structure, reducing its area and the likelihood of wind current dispersal. For any achene that becomes released, flight dynamics of the reduced pappus dramatically limit dispersal range. The hygroscopic actuator's responsiveness to changes in relative humidity (RH) is predictable, repeatable; e.g. the pappi of Centaurea imperialis remain closed at ≥ 78% RH and open completely at ≤ 75% RH. During more favorable lower humidity conditions, pappi fully expand and wind current allochory is re-enabled. The orchid tree (Bauhinia variegata) depends upon hygro-responsive twisting for its dispersal. Its seed pod contains two hygroscopic sclerenchyma fibre layers, nearly orthogonal, joining at the valves. During dehiscence the large 90° microfibril angle between endocarp layers, combined with dual-sided shrinkage, results in opposing helical torques that force a suture at the weakest point, the seed case valves; their opening releases seed. Some plants synchronize the opening of their mature seed capsule with active rainfall, termed hygrochasy. This dispersal technique is frequently observed in the arid regions of southern and eastern Africa, the Israeli desert, parts of North America and Somalia, and is believed to have evolved to offer higher survival rates in arid environs. Hygrochasy is commonly associated with family Aizoaceae spp., the ice plant, as > 98% of its species utilize post-wetting dehiscence; such dispersal is also observed in family Plantaginaceae with the alpine Veronica of New Zealand, evolving in the last 9 Myr. Common to all seed capsules are triangular circumferentially-arranged hygroscopic keels (valves) covering their seeds. These protective valves mechanically open only when hydrated with liquid water. Each keel (five for Delosperma nakurense (Engl.) Herre) is composed of cellulosic lattice tissue that swells with hydration, opening within minutes. The enlarged cells force straightening of an inherent desiccated fold in the keel, the hygroscopic hinge, near the keel's union with the capsule perimeter. Fully opened, the keel pivots over 150°, upward then backward, exposing seed compartments, one beneath each valve, separated by septa, all resting upon the capsule floor. Seeds are visible, but restrained by the cup-like ring created by the encircling keels. The final requirement for dispersal is rainfall, or sufficient moisture, to flush seed from this barrier, colloquially termed the splash cup. Seed that overflows or splashes from the cup is dispersed to the nearby ground. Any remaining seed will be preserved when keels desiccate, hygroscopically shrink, and restore to their natural folded, closed state. The hygromorphic process is reversible, repeatable; neglected seed having subsequent dispersal opportunity via future rainfalls. 
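The pappus behaviour quoted above for Centaurea imperialis (closed at roughly 78% RH and above, fully open at roughly 75% RH and below) can be pictured as a simple two-threshold switch. The Python sketch below is a minimal, hedged model of that behaviour; the hold-last-state rule for the narrow band in between, and the function name, are assumptions made for illustration rather than statements from the text.

```python
def pappus_state(rh_percent: float, previous: str,
                 close_at: float = 78.0, open_at: float = 75.0) -> str:
    """Two-threshold, hysteresis-like model of pappus opening/closing."""
    if rh_percent >= close_at:
        return "closed"          # humid air: filaments contract and close
    if rh_percent <= open_at:
        return "open"            # dry air: pappus fully expands
    return previous              # intermediate humidity: hold last state (assumed)

state = "open"
for rh in (60, 70, 76, 80, 77, 74, 76):
    state = pappus_state(rh, state)
    print(f"RH {rh:>3}% -> pappus {state}")
```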
The seeds of some flowering herbs and grasses have hygroscopic appendages (awns) that bend with changes in humidity, enabling them to disperse over the ground, termed herpochory. The awn will thrust (or twist) when the seed is released, its motion dependent upon plant physiology. Subsequent hygrometric changes cause movements to repeat, thrusting (or twisting), pushing the seed into the ground. Two angiospermae families have similar methods of dispersal, though the method of implementation varies within each family: Geraniaceae family examples are the common stork's-bill (Erodium cicutarium) and geraniums (Pelargonium sp.); Poaceae family, Needle-and-Thread (Hesperostipa comata) and wheat (Triticum spp.). All rely upon a bi-layered parallel fiber hygroscopic cell physiology to control the awn's movement for dispersal and self-burial of seeds. Alignment of cellulose fibrils in the awn's controlling cell wall determines direction of movement. If fiber alignments are tilted, non-parallel venation, a helix develops and awn movement becomes twisting (coiling) instead of bending; e.g. coiling occurs in awns of Erodium and Hesperostipa. Some plants use hygroscopic movements for ballochory (self-dispersal), active ballists forcibly ejecting their seeds; e.g. species of geranium, violet, wood sorrel, witch hazel, touch-me-not (Impatiens), and acanthus. Rupturing of the Bauhinia purpurea seed pod reportedly propels its seed up to 15 metres distance. Engineering properties Hygroscopicity is a general term used to describe a material's ability to absorb moisture from the environment. There is no standard quantitative definition of hygroscopicity, so generally the qualification of hygroscopic and non-hygroscopic is determined on a case-by-case basis. For example, pharmaceuticals that pick up more than 5% by mass, between 40 and 90% relative humidity at 25 °C, are described as hygroscopic, while materials that pick up less than 1% under the same conditions are regarded as non-hygroscopic. The amount of moisture held by hygroscopic materials is usually proportional to the relative humidity. Tables containing this information can be found in many engineering handbooks and are also available from suppliers of various materials and chemicals. Hygroscopy also plays an important role in the engineering of plastic materials. Some plastics, e.g. nylon, are hygroscopic while others are not. Polymers Many engineering polymers are hygroscopic, including nylon, ABS, polycarbonate, cellulose, carboxymethyl cellulose, and poly(methyl methacrylate) (PMMA, plexiglas, perspex). Other polymers, such as polyethylene and polystyrene, do not normally absorb much moisture, but are able to carry significant moisture on their surface when exposed to liquid water. Type-6 nylon (a polyamide) can absorb up to 9.5% of its weight in moisture. Applications in baking The hygroscopic properties of different substances are used in baking to achieve differences in moisture content and, hence, crispiness. Different varieties of sugars are used in different quantities to produce a crunchy, crisp cookie (British English: biscuit) versus a soft, chewy cake. Sugars such as honey, brown sugar, and molasses are examples of sweeteners used to create moister and chewier cakes. Research Several hygroscopic approaches to harvest atmospheric moisture have been demonstrated and require further development to assess their potential as a viable water source. 
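Before turning to the specific research examples below, the case-by-case classification criterion quoted under Engineering properties above can be written down directly. This Python sketch is a minimal, hedged illustration: the 5% and 1% thresholds (mass uptake between 40% and 90% RH at 25 °C) come from the text, while the function name and the label used for the intermediate band are assumptions.

```python
def classify_hygroscopicity(mass_uptake_percent: float) -> str:
    """Classify a material by its mass uptake over 40-90% RH at 25 C."""
    if mass_uptake_percent > 5.0:
        return "hygroscopic"
    if mass_uptake_percent < 1.0:
        return "non-hygroscopic"
    return "intermediate (not classified in the text)"

for uptake in (0.3, 2.0, 9.5):   # 9.5% is the type-6 nylon figure quoted above
    print(uptake, classify_hygroscopicity(uptake))
```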
Experiments with fog collection, in select environs, duplicated the hydrophilic surfaces and hygroscopic surface wetting observed in tree frog hydration (biomimicry). Subsequent material optimizations developed artificial hydrophilic surfaces with collection rates of 25 mg H2O/(cm2 h), more than twice the collection rate of tree frogs under comparable conditions, i.e. 100% RH (relative humidity). Another approach performs at lower 15–30% RHs but also has environs limitations; a sustainable biomass source is necessary. Super hygroscopic polymer films composed of biomass and hygroscopic salts are able to condense moisture from atmospheric humidity. By implementing rapid sorption-desorption kinetics and operating 14–24 cycles per day, this technique produced an equivalent water yield of 5.8–13.3 L kg−1 of sustainable raw materials, demonstrating the potential for low-cost, scalable atmospheric water harvesting. Hygroscopic glues are candidates for commercial development. The most common cause of synthetic glue failure at high humidity is attributed to water lubricating the contact area, impacting bond quality. Hygroscopic glues may allow more durable adhesive bonds by absorbing (pulling) inter-facial environmental moisture away from the glue-substrate boundary. Integrating hygroscopic movement into smart building designs and systems is frequently mentioned, e.g. self-opening windows. Such movement is appealing, an adaptive, self-shaping response that requires no external force or energy. However, capabilities of current material choices are limited. Biomimetic design of hygromorphic wood composites and hygro-actuated building systems have been modeled and evaluated. Hygrometric response time, precise shape changes and durability are lacking. Most currently available hygro-actuated composites are inferior and exhibit fatigue failure well before that seen in nature, e.g. in pine cone scales, indicating that a better understanding of the plants' biological structures is needed. Materials composed of fluid-responsive active bilayer systems that can direct planned conformational hygromorphing are necessary. Current composites require undesirable trade-offs between hygromorphic response time and mechanical stability that must also be balanced with changing environmental stimuli. See also Cloud condensation nuclei Critical relative humidity Efflorescent Equilibrium moisture content Hydrophile Hydrophobe References External links Video on the deliquescense of calcium chloride The movement of hygroscopic organic salts Acrochordidae Biomimetics Chemical properties Evolutionary biology concepts Mineralogy concepts Plant morphology Plant physiology Tillandsia
Hygroscopy
[ "Chemistry", "Engineering", "Biology" ]
6,254
[ "Plant physiology", "Biological engineering", "Plants", "Plant morphology", "Bionics", "Evolutionary biology concepts", "Bioinformatics", "nan", "Biomimetics" ]
117,534
https://en.wikipedia.org/wiki/Optical%20microscope
The optical microscope, also referred to as a light microscope, is a type of microscope that commonly uses visible light and a system of lenses to generate magnified images of small objects. Optical microscopes are the oldest design of microscope and were possibly invented in their present compound form in the 17th century. Basic optical microscopes can be very simple, although many complex designs aim to improve resolution and sample contrast. The object is placed on a stage and may be directly viewed through one or two eyepieces on the microscope. In high-power microscopes, both eyepieces typically show the same image, but with a stereo microscope, slightly different images are used to create a 3-D effect. A camera is typically used to capture the image (micrograph). The sample can be lit in a variety of ways. Transparent objects can be lit from below and solid objects can be lit with light coming through (bright field) or around (dark field) the objective lens. Polarised light may be used to determine crystal orientation of metallic objects. Phase-contrast imaging can be used to increase image contrast by highlighting small details of differing refractive index. A range of objective lenses with different magnification are usually provided mounted on a turret, allowing them to be rotated into place and providing an ability to zoom in. The maximum magnification power of optical microscopes is typically limited to around 1000x because of the limited resolving power of visible light. While larger magnifications are possible, no additional details of the object are resolved. Alternatives to optical microscopy which do not use visible light include scanning electron microscopy, transmission electron microscopy and scanning probe microscopy, which as a result can achieve much greater magnifications. Types There are two basic types of optical microscopes: simple microscopes and compound microscopes. A simple microscope uses the optical power of a single lens or group of lenses for magnification. A compound microscope uses a system of lenses (one set enlarging the image produced by another) to achieve a much higher magnification of an object. The vast majority of modern research microscopes are compound microscopes, while some cheaper commercial digital microscopes are simple single-lens microscopes. Compound microscopes can be further divided into a variety of other types of microscopes, which differ in their optical configurations, cost, and intended purposes. Simple microscope A simple microscope uses a lens or set of lenses to enlarge an object through angular magnification alone, giving the viewer an erect enlarged virtual image. The use of a single convex lens or groups of lenses is found in simple magnification devices such as the magnifying glass, loupes, and eyepieces for telescopes and microscopes. Compound microscope A compound microscope uses a lens close to the object being viewed to collect light (called the objective lens), which focuses a real image of the object inside the microscope (image 1). That image is then magnified by a second lens or group of lenses (called the eyepiece) that gives the viewer an enlarged inverted virtual image of the object (image 2). The use of a compound objective/eyepiece combination allows for much higher magnification. Common compound microscopes often feature exchangeable objective lenses, allowing the user to quickly adjust the magnification. A compound microscope also enables more advanced illumination setups, such as phase contrast. 
Other microscope variants There are many variants of the compound optical microscope design for specialized purposes. Some of these are physical design differences allowing specialization for certain purposes: Stereo microscope, a low-powered microscope which provides a stereoscopic view of the sample, commonly used for dissection. Comparison microscope has two separate light paths allowing direct comparison of two samples via one image in each eye. Inverted microscope, for studying samples from below; useful for cell cultures in liquid or for metallography. Fiber optic connector inspection microscope, designed for connector end-face inspection. Traveling microscope, for studying samples of high optical resolution. Other microscope variants are designed for different illumination techniques: Petrographic microscope, whose design usually includes a polarizing filter, rotating stage, and gypsum plate to facilitate the study of minerals or other crystalline materials whose optical properties can vary with orientation. Polarizing microscope, similar to the petrographic microscope. Phase-contrast microscope, which applies the phase contrast illumination method. Epifluorescence microscope, designed for analysis of samples that include fluorophores. Confocal microscope, a widely used variant of epifluorescent illumination that uses a scanning laser to illuminate a sample for fluorescence. Two-photon microscope, used to image fluorescence deeper in scattering media and reduce photobleaching, especially in living samples. Student microscope – an often low-power microscope with simplified controls and sometimes low-quality optics designed for school use or as a starter instrument for children. Ultramicroscope, an adapted light microscope that uses light scattering to allow viewing of tiny particles whose diameter is below or near the wavelength of visible light (around 500 nanometers); mostly obsolete since the advent of electron microscopes. Tip-enhanced Raman microscope, a variant of the optical microscope based on tip-enhanced Raman spectroscopy, without traditional wavelength-based resolution limits. This microscope is primarily realized on scanning-probe microscope platforms using all-optical tools. Digital microscope A digital microscope is a microscope equipped with a digital camera allowing observation of a sample via a computer. Microscopes can also be partly or wholly computer-controlled with various levels of automation. Digital microscopy allows greater analysis of a microscope image, for example, measurements of distances and areas and quantitation of a fluorescent or histological stain. Low-powered digital microscopes, USB microscopes, are also commercially available. These are essentially webcams with a high-powered macro lens and generally do not use transillumination. The camera is attached directly to a computer's USB port to show the images directly on the monitor. They offer modest magnifications (up to about 200×) without the need to use eyepieces and at a very low cost. High-power illumination is usually provided by an LED source or sources adjacent to the camera lens. Digital microscopy with very low light levels to avoid damage to vulnerable biological samples is available using sensitive photon-counting digital cameras. It has been demonstrated that a light source providing pairs of entangled photons may minimize the risk of damage to the most light-sensitive samples. 
In this application of ghost imaging to photon-sparse microscopy, the sample is illuminated with infrared photons, each spatially correlated with an entangled partner in the visible band for efficient imaging by a photon-counting camera. History Invention The earliest microscopes were single lens magnifying glasses with limited magnification, which date at least as far back as the widespread use of lenses in eyeglasses in the 13th century. Compound microscopes first appeared in Europe around 1620 including one demonstrated by Cornelis Drebbel in London (around 1621) and one exhibited in Rome in 1624. The actual inventor of the compound microscope is unknown although many claims have been made over the years. These include a claim 35 years after they appeared by Dutch spectacle-maker Johannes Zachariassen that his father, Zacharias Janssen, invented the compound microscope and/or the telescope as early as 1590. Johannes' testimony, which some claim is dubious, pushes the invention date so far back that Zacharias would have been a child at the time, leading to speculation that, for Johannes' claim to be true, the compound microscope would have to have been invented by Johannes' grandfather, Hans Martens. Another claim is that Janssen's competitor, Hans Lippershey (who applied for the first telescope patent in 1608) also invented the compound microscope. Other historians point to the Dutch innovator Cornelis Drebbel with his 1621 compound microscope. Galileo Galilei is sometimes cited as a compound microscope inventor. After 1610, he found that he could close-focus his telescope to view small objects, such as flies, close up and/or could look through the wrong end in reverse to magnify small objects. The only drawback was that his 2-foot-long telescope had to be extended out to 6 feet to view objects that close. After seeing the compound microscope built by Drebbel exhibited in Rome in 1624, Galileo built his own improved version. In 1625, Giovanni Faber coined the name microscope for the compound microscope Galileo submitted to the Accademia dei Lincei in 1624 (Galileo had called it the "occhiolino" or "little eye"). Faber coined the name from the Greek words μικρόν (micron) meaning "small", and σκοπεῖν (skopein) meaning "to look at", a name meant to be analogous with "telescope", another word coined by the Linceans. Christiaan Huygens, another Dutchman, developed a simple 2-lens ocular system in the late 17th century that was achromatically corrected, and therefore a huge step forward in microscope development. The Huygens ocular is still being produced to this day, but suffers from a small field size and other minor disadvantages. Popularization Antonie van Leeuwenhoek (1632–1724) is credited with bringing the microscope to the attention of biologists, even though simple magnifying lenses were already being produced in the 16th century. Van Leeuwenhoek's home-made microscopes were simple microscopes, with a single very small, yet strong lens. They were awkward in use, but enabled van Leeuwenhoek to see detailed images. It took about 150 years of optical development before the compound microscope was able to provide the same quality image as van Leeuwenhoek's simple microscopes, due to difficulties in configuring multiple lenses. In the 1850s, John Leonard Riddell, Professor of Chemistry at Tulane University, invented the first practical binocular microscope while carrying out one of the earliest and most extensive American microscopic investigations of cholera. 
Lighting techniques While basic microscope technology and optics have been available for over 400 years it is much more recently that techniques in sample illumination were developed to generate the high quality images seen today. In August 1893, August Köhler developed Köhler illumination. This method of sample illumination gives rise to extremely even lighting and overcomes many limitations of older techniques of sample illumination. Before development of Köhler illumination the image of the light source, for example a lightbulb filament, was always visible in the image of the sample. The Nobel Prize in physics was awarded to Dutch physicist Frits Zernike in 1953 for his development of phase contrast illumination which allows imaging of transparent samples. By using interference rather than absorption of light, extremely transparent samples, such as live mammalian cells, can be imaged without having to use staining techniques. Just two years later, in 1955, Georges Nomarski published the theory for differential interference contrast microscopy, another interference-based imaging technique. Fluorescence microscopy Modern biological microscopy depends heavily on the development of fluorescent probes for specific structures within a cell. In contrast to normal transilluminated light microscopy, in fluorescence microscopy the sample is illuminated through the objective lens with a narrow set of wavelengths of light. This light interacts with fluorophores in the sample which then emit light of a longer wavelength. It is this emitted light which makes up the image. Since the mid-20th century chemical fluorescent stains, such as DAPI which binds to DNA, have been used to label specific structures within the cell. More recent developments include immunofluorescence, which uses fluorescently labelled antibodies to recognise specific proteins within a sample, and fluorescent proteins like GFP which a live cell can express making it fluorescent. Components All modern optical microscopes designed for viewing samples by transmitted light share the same basic components of the light path. In addition, the vast majority of microscopes have the same 'structural' components (numbered below according to the image on the right): Eyepiece (ocular lens) (1) Objective turret, revolver, or revolving nose piece (to hold multiple objective lenses) (2) Objective lenses (3) Focus knobs (to move the stage) Coarse adjustment (4) Fine adjustment (5) Stage (to hold the specimen) (6) Light source (a light or a mirror) (7) Diaphragm and condenser (8) Mechanical stage (9) Eyepiece (ocular lens) The eyepiece, or ocular lens, is a cylinder containing two or more lenses; its function is to bring the image into focus for the eye. The eyepiece is inserted into the top end of the body tube. Eyepieces are interchangeable and many different eyepieces can be inserted with different degrees of magnification. Typical magnification values for eyepieces include 5×, 10× (the most common), 15× and 20×. In some high performance microscopes, the optical configuration of the objective lens and eyepiece are matched to give the best possible optical performance. This occurs most commonly with apochromatic objectives. Objective turret (revolver or revolving nose piece) Objective turret, revolver, or revolving nose piece is the part that holds the set of objective lenses. It allows the user to switch between objective lenses. 
Objective lens At the lower end of a typical compound optical microscope, there are one or more objective lenses that collect light from the sample. The objective is usually in a cylinder housing containing a glass single or multi-element compound lens. Typically there will be around three objective lenses screwed into a circular nose piece which may be rotated to select the required objective lens. These arrangements are designed to be parfocal, which means that when one changes from one lens to another on a microscope, the sample stays in focus. Microscope objectives are characterized by two parameters, namely, magnification and numerical aperture. The former typically ranges from 5× to 100× while the latter ranges from 0.14 to 0.7, corresponding to focal lengths of about 40 to 2 mm, respectively. Objective lenses with higher magnifications normally have a higher numerical aperture and a shorter depth of field in the resulting image. Some high performance objective lenses may require matched eyepieces to deliver the best optical performance. Oil immersion objective Some microscopes make use of oil-immersion objectives or water-immersion objectives for greater resolution at high magnification. These are used with index-matching material such as immersion oil or water and a matched cover slip between the objective lens and the sample. The refractive index of the index-matching material is higher than air allowing the objective lens to have a larger numerical aperture (greater than 1) so that the light is transmitted from the specimen to the outer face of the objective lens with minimal refraction. Numerical apertures as high as 1.6 can be achieved. The larger numerical aperture allows collection of more light making detailed observation of smaller details possible. An oil immersion lens usually has a magnification of 40 to 100×. Focus knobs Adjustment knobs move the stage up and down with separate adjustment for coarse and fine focusing. The same controls enable the microscope to adjust to specimens of different thickness. In older designs of microscopes, the focus adjustment wheels move the microscope tube up or down relative to the stand and had a fixed stage. Frame The whole of the optical assembly is traditionally attached to a rigid arm, which in turn is attached to a robust U-shaped foot to provide the necessary rigidity. The arm angle may be adjustable to allow the viewing angle to be adjusted. The frame provides a mounting point for various microscope controls. Normally this will include controls for focusing, typically a large knurled wheel to adjust coarse focus, together with a smaller knurled wheel to control fine focus. Other features may be lamp controls and/or controls for adjusting the condenser. Stage The stage is a platform below the objective lens which supports the specimen being viewed. In the center of the stage is a hole through which light passes to illuminate the specimen. The stage usually has arms to hold slides (rectangular glass plates with typical dimensions of 25×75 mm, on which the specimen is mounted). At magnifications higher than 100× moving a slide by hand is not practical. A mechanical stage, typical of medium and higher priced microscopes, allows tiny movements of the slide via control knobs that reposition the sample/slide as desired. If a microscope did not originally have a mechanical stage it may be possible to add one. All stages move up and down for focus. 
With a mechanical stage slides move on two horizontal axes for positioning the specimen to examine specimen details. Focusing starts at lower magnification in order to center the specimen by the user on the stage. Moving to a higher magnification requires the stage to be moved higher vertically for re-focus at the higher magnification and may also require slight horizontal specimen position adjustment. Horizontal specimen position adjustments are the reason for having a mechanical stage. Due to the difficulty in preparing specimens and mounting them on slides, for children it is best to begin with prepared slides that are centered and focus easily regardless of the focus level used. Light source Many sources of light can be used. At its simplest, daylight is directed via a mirror. Most microscopes, however, have their own adjustable and controllable light source – often a halogen lamp, although illumination using LEDs and lasers are becoming a more common provision. Köhler illumination is often provided on more expensive instruments. Condenser The condenser is a lens designed to focus light from the illumination source onto the sample. The condenser may also include other features, such as a diaphragm and/or filters, to manage the quality and intensity of the illumination. For illumination techniques like dark field, phase contrast and differential interference contrast microscopy additional optical components must be precisely aligned in the light path. Magnification The actual power or magnification of a compound optical microscope is the product of the powers of the eyepiece and the objective lens. For example a 10x eyepiece magnification and a 100x objective lens magnification gives a total magnification of 1,000×. Modified environments such as the use of oil or ultraviolet light can increase the resolution and allow for resolved details at magnifications larger than 1,000x. Operation Illumination techniques Many techniques are available which modify the light path to generate an improved contrast image from a sample. Major techniques for generating increased contrast from the sample include cross-polarized light, dark field, phase contrast and differential interference contrast illumination. A recent technique (Sarfus) combines cross-polarized light and specific contrast-enhanced slides for the visualization of nanometric samples. Other techniques Modern microscopes allow more than just observation of transmitted light image of a sample; there are many techniques which can be used to extract other kinds of data. Most of these require additional equipment in addition to a basic compound microscope. Reflected light, or incident, illumination (for analysis of surface structures) Fluorescence microscopy, both: Epifluorescence microscopy Confocal microscopy Microspectroscopy (where a UV-visible spectrophotometer is integrated with an optical microscope) Ultraviolet microscopy Near-Infrared microscopy Multiple transmission microscopy for contrast enhancement and aberration reduction. Automation (for automatic scanning of a large sample or image capture) Applications Optical microscopy is used extensively in microelectronics, nanophysics, biotechnology, pharmaceutic research, mineralogy and microbiology. Optical microscopy is used for medical diagnosis, the field being termed histopathology when dealing with tissues, or in smear tests on free cells or tissue fragments. In industrial use, binocular microscopes are common. 
Aside from applications needing true depth perception, the use of dual eyepieces reduces eye strain associated with long workdays at a microscopy station. In certain applications, long-working-distance or long-focus microscopes are beneficial. An item may need to be examined behind a window, or industrial subjects may be a hazard to the objective. Such optics resemble telescopes with close-focus capabilities. Measuring microscopes are used for precision measurement. There are two basic types. One has a reticle graduated to allow measuring distances in the focal plane. The other (and older) type has simple crosshairs and a micrometer mechanism for moving the subject relative to the microscope. Very small, portable microscopes have found some usage in places where a laboratory microscope would be a burden. Limitations At very high magnifications with transmitted light, point objects are seen as fuzzy discs surrounded by diffraction rings. These are called Airy disks. The resolving power of a microscope is taken as the ability to distinguish between two closely spaced Airy disks (or, in other words, the ability of the microscope to reveal adjacent structural detail as distinct and separate). It is these impacts of diffraction that limit the ability to resolve fine details. The extent and magnitude of the diffraction patterns are affected by both the wavelength of light (λ), the refractive materials used to manufacture the objective lens and the numerical aperture (NA) of the objective lens. There is therefore a finite limit beyond which it is impossible to resolve separate points in the objective field, known as the diffraction limit. Assuming that optical aberrations in the whole optical set-up are negligible, the resolution d can be stated as: d = λ / (2 NA). Usually a wavelength of 550 nm is assumed, which corresponds to green light. With air as the external medium, the highest practical NA is 0.95, and with oil, up to 1.5. In practice the lowest value of d obtainable with conventional lenses is about 200 nm. A new type of lens using multiple scattering of light has made it possible to improve the resolution to below 100 nm. Surpassing the resolution limit Multiple techniques are available for reaching resolutions higher than the transmitted light limit described above. Holographic techniques, as described by Courjon and Bulabois in 1979, are also capable of breaking this resolution limit, although resolution was restricted in their experimental analysis. Using fluorescent samples more techniques are available. Examples include Vertico SMI, near field scanning optical microscopy which uses evanescent waves, and stimulated emission depletion. In 2005, a microscope capable of detecting a single molecule was described as a teaching tool. Despite significant progress in the last decade, techniques for surpassing the diffraction limit remain limited and specialized. While most techniques focus on increases in lateral resolution there are also some techniques which aim to allow analysis of extremely thin samples. For example, sarfus methods place the thin sample on a contrast-enhancing surface and thereby allow direct visualization of films as thin as 0.3 nanometers. On 8 October 2014, the Nobel Prize in Chemistry was awarded to Eric Betzig, William Moerner and Stefan Hell for the development of super-resolved fluorescence microscopy. Structured illumination SMI SMI (spatially modulated illumination microscopy) is a light optical process of the so-called point spread function (PSF) engineering. 
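As a quick, hedged numerical check of the two relations used above, the total magnification as the product of eyepiece and objective powers and the diffraction-limited resolution d = λ / (2 NA), the following Python sketch reproduces the figures quoted in the text (a 10x eyepiece with a 100x objective, 550 nm green light, NA of 0.95 in air and about 1.5 with oil immersion). The function names are illustrative assumptions only.

```python
def total_magnification(eyepiece: float, objective: float) -> float:
    """Total magnification of a compound microscope."""
    return eyepiece * objective

def abbe_resolution_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Diffraction-limited resolution d = wavelength / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

print(total_magnification(10, 100))           # 1000x, as in the example above
print(round(abbe_resolution_nm(550, 0.95)))   # ~289 nm with air as the medium
print(round(abbe_resolution_nm(550, 1.5)))    # ~183 nm with oil, i.e. roughly the ~200 nm practical limit
```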
These are processes which modify the PSF of a microscope in a suitable manner to either increase the optical resolution, to maximize the precision of distance measurements of fluorescent objects that are small relative to the wavelength of the illuminating light, or to extract other structural parameters in the nanometer range. Localization microscopy SPDMphymod SPDM (spectral precision distance microscopy), the basic localization microscopy technology is a light optical process of fluorescence microscopy which allows position, distance and angle measurements on "optically isolated" particles (e.g. molecules) well below the theoretical limit of resolution for light microscopy. "Optically isolated" means that at a given point in time, only a single particle/molecule within a region of a size determined by conventional optical resolution (typically approx. 200–250 nm diameter) is being registered. This is possible when molecules within such a region all carry different spectral markers (e.g. different colors or other usable differences in the light emission of different particles). Many standard fluorescent dyes like GFP, Alexa dyes, Atto dyes, Cy2/Cy3 and fluorescein molecules can be used for localization microscopy, provided certain photo-physical conditions are present. Using this so-called SPDMphymod (physically modifiable fluorophores) technology a single laser wavelength of suitable intensity is sufficient for nanoimaging. 3D super resolution microscopy 3D super resolution microscopy with standard fluorescent dyes can be achieved by combination of localization microscopy for standard fluorescent dyes SPDMphymod and structured illumination SMI. STED Stimulated emission depletion is a simple example of how higher resolution surpassing the diffraction limit is possible, but it has major limitations. STED is a fluorescence microscopy technique which uses a combination of light pulses to induce fluorescence in a small sub-population of fluorescent molecules in a sample. Each molecule produces a diffraction-limited spot of light in the image, and the centre of each of these spots corresponds to the location of the molecule. As the number of fluorescing molecules is low the spots of light are unlikely to overlap and therefore can be placed accurately. This process is then repeated many times to generate the image. Stefan Hell of the Max Planck Institute for Biophysical Chemistry was awarded the 10th German Future Prize in 2006 and Nobel Prize for Chemistry in 2014 for his development of the STED microscope and associated methodologies. Alternatives In order to overcome the limitations set by the diffraction limit of visible light other microscopes have been designed which use other waves. Atomic force microscope (AFM) Scanning electron microscope (SEM) Scanning ion-conductance microscopy (SICM) Scanning tunneling microscope (STM) Transmission electron microscopy (TEM) Ultraviolet microscope X-ray microscope It is important to note that higher frequency waves have limited interaction with matter, for example soft tissues are relatively transparent to X-rays resulting in distinct sources of contrast and different target applications. The use of electrons and X-rays in place of light allows much higher resolution – the wavelength of the radiation is shorter so the diffraction limit is lower. 
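To make the localization idea described above a little more tangible, the following Python sketch simulates photons from a single "optically isolated" emitter spread over a diffraction-limited spot and estimates the emitter's position from the photon centroid; the localization error shrinks roughly as the spot width divided by the square root of the photon count. All numbers, names and the simple centroid estimator are assumptions for illustration and are not taken from the article.

```python
import random
import statistics

def localize(true_x_nm: float, spot_sigma_nm: float, n_photons: int) -> float:
    """Estimate an emitter position as the centroid of detected photon positions."""
    photons = [random.gauss(true_x_nm, spot_sigma_nm) for _ in range(n_photons)]
    return statistics.fmean(photons)

random.seed(0)
sigma = 100.0                     # assumed diffraction-limited spot width (nm)
for n in (10, 100, 10_000):
    errors = [abs(localize(0.0, sigma, n)) for _ in range(200)]
    print(f"{n:>6} photons -> typical error ~ {statistics.fmean(errors):5.1f} nm "
          f"(sigma/sqrt(N) = {sigma / n ** 0.5:5.1f} nm)")
```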
To make the short-wavelength probe non-destructive, the atomic beam imaging system (atomic nanoscope) has been proposed and widely discussed in the literature, but it is not yet competitive with conventional imaging systems. STM and AFM are scanning probe techniques using a small probe which is scanned over the sample surface. Resolution in these cases is limited by the size of the probe; micromachining techniques can produce probes with tip radii of 5–10 nm. Additionally, methods such as electron or X-ray microscopy use a vacuum or partial vacuum, which limits their use for live and biological samples (with the exception of an environmental scanning electron microscope). The specimen chambers needed for all such instruments also limits sample size, and sample manipulation is more difficult. Color cannot be seen in images made by these methods, so some information is lost. They are however, essential when investigating molecular or atomic effects, such as age hardening in aluminium alloys, or the microstructure of polymers. See also Digital microscope Köhler illumination Microscope slide References Cited sources Further reading "Metallographic and Materialographic Specimen Preparation, Light Microscopy, Image Analysis and Hardness Testing", Kay Geels in collaboration with Struers A/S, ASTM International 2006. "Light Microscopy: An ongoing contemporary revolution", Siegfried Weisenburger and Vahid Sandoghdar, arXiv:1412.3255 2014. External links Antique Microscopes & Scientific Instruments A site about Antique Microscopes, their Accessories, and History Antique Microscopes.com A collection of early microscopes Historical microscopes, an illustrated collection with more than 3000 photos of scientific microscopes by European makers The Golub Collection, A collection of 17th through 19th century microscopes, including extensive descriptions Molecular Expressions, concepts in optical microscopy Online tutorial of practical optical microscopy at University of Cambridge OpenWetWare Cell Centered Database Microscopes Dutch inventions Optical microscopy
Optical microscope
[ "Chemistry", "Technology", "Engineering" ]
5,734
[ "Optical microscopy", "Microscopes", "Measuring instruments", "Microscopy" ]
22,210,655
https://en.wikipedia.org/wiki/Aquatic%20locomotion
Aquatic locomotion or swimming is biologically propelled motion through a liquid medium. The simplest propulsive systems are composed of cilia and flagella. Swimming has evolved a number of times in a range of organisms including arthropods, fish, molluscs, amphibians, reptiles, birds, and mammals. Evolution of swimming Swimming evolved a number of times in unrelated lineages. Supposed jellyfish fossils occur in the Ediacaran, but the first free-swimming animals appear in the Early to Middle Cambrian. These are mostly related to the arthropods, and include the Anomalocaridids, which swam by means of lateral lobes in a fashion reminiscent of today's cuttlefish. Cephalopods joined the ranks of the active swimmers (nekton) in the late Cambrian, and chordates were probably swimming from the Early Cambrian. Many terrestrial animals retain some capacity to swim, however some have returned to the water and developed the capacities for aquatic locomotion. Most apes (including humans), however, lost the swimming instinct. In 2013 Pedro Renato Bender, a research fellow at the University of the Witwatersrand's Institute for Human Evolution, proposed a theory to explain the loss of that instinct. Termed the Saci last common ancestor hypothesis (after Saci, a Brazilian folklore character who cannot cross water barriers), it holds that the loss of instinctive swimming ability in apes is best explained as a consequence of constraints related to the adaptation to an arboreal life in the last common ancestor of apes. Bender hypothesized that the ancestral ape increasingly avoided deep-water bodies when the risks of being exposed to water were clearly higher than the advantages of crossing them. A decreasing contact with water bodies then could have led to the disappearance of the doggy paddle instinct. Micro-organisms Microbial swimmers, sometimes called microswimmers, are microscopic entities that have the ability to move in fluid or aquatic environment. Natural microswimmers are found everywhere in the natural world as biological microorganisms, such as bacteria, archaea, protists, sperm and microanimals. Bacterial Ciliates Ciliates use small flagella called cilia to move through the water. One ciliate will generally have hundreds to thousands of cilia that are densely packed together in arrays. During movement, an individual cilium deforms using a high-friction power stroke followed by a low-friction recovery stroke. Since there are multiple cilia packed together on an individual organism, they display collective behavior in a metachronal rhythm. This means the deformation of one cilium is in phase with the deformation of its neighbor, causing deformation waves that propagate along the surface of the organism. These propagating waves of cilia are what allow the organism to use the cilia in a coordinated manner to move. A typical example of a ciliated microorganism is the Paramecium, a one-celled, ciliated protozoan covered by thousands of cilia. The cilia beating together allow the Paramecium to propel through the water at speeds of 500 micrometers per second. Flagellates Certain organisms such as bacteria and animal sperm have flagellum which have developed a way to move in liquid environments. A rotary motor model shows that bacteria uses the protons of an electrochemical gradient in order to move their flagella. Torque in the flagella of bacteria is created by particles that conduct protons around the base of the flagellum. 
The direction of rotation of the flagella in bacteria comes from the occupancy of the proton channels along the perimeter of the flagellar motor. Movement of sperm is called sperm motility. The middle of the mammalian spermatozoon contains mitochondria that power the movement of the flagellum of the sperm. The motor around the base produces torque, just like in bacteria for movement through the aqueous environment. Pseudopodia Movement using a pseudopod is accomplished through increases in pressure at one point on the cell membrane. This pressure increase is the result of actin polymerization between the cortex and the membrane. As the pressure increases the cell membrane is pushed outward creating the pseudopod. When the pseudopod moves outward, the rest of the body is pulled forward by cortical tension. The result is cell movement through the fluid medium. Furthermore, the direction of movement is determined by chemotaxis. When chemoattraction occurs in a particular area of the cell membrane, actin polymerization can begin and move the cell in that direction. An excellent example of an organism that utilizes pseudopods is Naegleria fowleri. A Simple Animation Invertebrates Among the radiata, jellyfish and their kin, the main form of swimming is to flex their cup shaped bodies. All jellyfish are free-swimming, although many of these spend most of their time swimming passively. Passive swimming is akin to gliding; the organism floats, using currents where it can, and does not exert any energy into controlling its position or motion. Active swimming, in contrast, involves the expenditure of energy to travel to a desired location. In bilateria, there are many methods of swimming. The arrow worms (chaetognatha) undulate their finned bodies, not unlike fish. Nematodes swim by undulating their fin-less bodies. Some Arthropod groups can swim – including many crustaceans. Most crustaceans, such as shrimp, will usually swim by paddling with special swimming legs (pleopods). Swimming crabs swim with modified walking legs (pereiopods). Daphnia, a crustacean, swims by beating its antennae instead. There are also a number of forms of swimming molluscs. Many free-swimming sea slugs, such as sea angels, flap fin-like structures. Some shelled molluscs, such as scallops can briefly swim by clapping their two shells open and closed. The molluscs most evolved for swimming are the cephalopods. Violet sea-snails exploit a buoyant foam raft stabilized by amphiphilic mucins to float at the sea surface. Among the Deuterostomia, there are a number of swimmers as well. Feather stars can swim by undulating their many arms. Salps move by pumping waters through their gelatinous bodies. The deuterostomes most evolved for swimming are found among the vertebrates, notably the fish. Jet propulsion Jet propulsion is a method of aquatic locomotion where animals fill a muscular cavity and squirt out water to propel them in the opposite direction of the squirting water. Most organisms are equipped with one of two designs for jet propulsion; they can draw water from the rear and expel it from the rear, such as jellyfish, or draw water from front and expel it from the rear, such as salps. Filling up the cavity causes an increase in both the mass and drag of the animal. Because of the expanse of the contracting cavity, the animal's velocity fluctuates as it moves through the water, accelerating while expelling water and decelerating while vacuuming water. 
Even though these fluctuations in drag and mass can be ignored if the frequency of the jet-propulsion cycles is high enough, jet-propulsion is a relatively inefficient method of aquatic locomotion. All cephalopods can move by jet propulsion, but this is a very energy-consuming way to travel compared to the tail propulsion used by fish. The relative efficiency of jet propulsion decreases further as animal size increases. Since the Paleozoic, as competition with fish produced an environment where efficient motion was crucial to survival, jet propulsion has taken a back role, with fins and tentacles used to maintain a steady velocity. The stop-start motion provided by the jets, however, continues to be useful for providing bursts of high speed – not least when capturing prey or avoiding predators. Indeed, it makes cephalopods the fastest marine invertebrates, and they can out accelerate most fish. Oxygenated water is taken into the mantle cavity to the gills and through muscular contraction of this cavity, the spent water is expelled through the hyponome, created by a fold in the mantle. Motion of the cephalopods is usually backward as water is forced out anteriorly through the hyponome, but direction can be controlled somewhat by pointing it in different directions. Most cephalopods float (i.e. are neutrally buoyant), so do not need to swim to remain afloat. Squid swim more slowly than fish, but use more power to generate their speed. The loss in efficiency is due to the amount of water the squid can accelerate out of its mantle cavity. Jellyfish use a one-way water cavity design which generates a phase of continuous cycles of jet-propulsion followed by a rest phase. The Froude efficiency is about 0.09, which indicates a very costly method of locomotion. The metabolic cost of transport for jellyfish is high when compared to a fish of equal mass. Other jet-propelled animals have similar problems in efficiency. Scallops, which use a similar design to jellyfish, swim by quickly opening and closing their shells, which draws in water and expels it from all sides. This locomotion is used as a means to escape predators such as starfish. Afterwards, the shell acts as a hydrofoil to counteract the scallop's tendency to sink. The Froude efficiency is low for this type of movement, about 0.3, which is why it's used as an emergency escape mechanism from predators. However, the amount of work the scallop has to do is mitigated by the elastic hinge that connects the two shells of the bivalve. Squids swim by drawing water into their mantle cavity and expelling it through their siphon. The Froude efficiency of their jet-propulsion system is around 0.29, which is much lower than a fish of the same mass. Much of the work done by scallop muscles to close its shell is stored as elastic energy in abductin tissue, which acts as a spring to open the shell. The elasticity causes the work done against the water to be low because of the large openings the water has to enter and the small openings the water has to leave. The inertial work of scallop jet-propulsion is also low. Because of the low inertial work, the energy savings created by the elastic tissue is so small that it's negligible. Medusae can also use their elastic mesoglea to enlarge their bell. Their mantle contains a layer of muscle sandwiched between elastic fibers. The muscle fibers run around the bell circumferentially while the elastic fibers run through the muscle and along the sides of the bell to prevent lengthening. 
After making a single contraction, the bell vibrates passively at the resonant frequency to refill the bell. However, in contrast with scallops, the inertial work is similar to the hydrodynamic work due to how medusas expel water – through a large opening at low velocity. Because of this, the negative pressure created by the vibrating cavity is lower than the positive pressure of the jet, meaning that inertial work of the mantle is small. Thus, jet-propulsion is shown as an inefficient swimming technique. Fish Many fish swim through water by creating undulations with their bodies or oscillating their fins. The undulations create components of forward thrust complemented by a rearward force, side forces which are wasted portions of energy, and a normal force that is between the forward thrust and side force. Different fish swim by undulating different parts of their bodies. Eel-shaped fish undulate their entire body in rhythmic sequences. Streamlined fish, such as salmon, undulate the caudal portions of their bodies. Some fish, such as sharks, use stiff, strong fins to create dynamic lift and propel themselves. It is common for fish to use more than one form of propulsion, although they will display one dominant mode of swimming Gait changes have even been observed in juvenile reef fish of various sizes. Depending on their needs, fish can rapidly alternate between synchronized fin beats and alternating fin beats. According to Guinness World Records 2009, Hippocampus zosterae (the dwarf seahorse) is the slowest moving fish, with a top speed of about per hour. They swim very poorly, rapidly fluttering a dorsal fin and using pectoral fins (located behind their eyes) to steer. Seahorses have no caudal fin. Body-caudal fin (BCF) propulsion Anguilliform: Anguilliform swimmers are typically slow swimmers. They undulate the majority of their body and use their head as the fulcrum for the load they are moving. At any point during their undulation, their body has an amplitude between 0.5-1.0 wavelengths. The amplitude that they move their body through allows them to swim backwards. Anguilliform locomotion is usually seen in fish with long, slender bodies like eels, lampreys, oarfish, and a number of catfish species. Subcarangiform, Carangiform, Thunniform: These swimmers undulate the posterior half of their body and are much faster than anguilliform swimmers. At any point while they are swimming, a wavelength <1 can be seen in the undulation pattern of the body. Some Carangiform swimmers include nurse sharks, bamboo sharks, and reef sharks. Thunniform swimmers are very fast and some common Thunniform swimmers include tuna, white sharks, salmon, jacks, and mako sharks. Thunniform swimmers only undulate their high aspect ratio caudal fin, so they are usually very stiff to push more water out of the way. Ostraciiform: Ostraciiform swimmers oscillate their caudal region, making them relatively slow swimmers. Boxfish, torpedo rays, and momyrs employ Ostraciiform locomotion. The cow fish uses Osctraciiform locomotion to hover in the water column. Median paired fin (MPF) propulsion Tetraodoniform, Balistiform, Diodontiform: These swimmers oscillate their median (pectoral) fins. They are typically slow swimmers, and some notable examples include the oceanic sunfish (which has extremely modified anal and dorsal fins), puffer fish, and triggerfish. Rajiform, Amiiform, Gymnotiform: This locomotory mode is accomplished by undulation of the pectoral and median fins. 
During their undulation pattern, a wavelength >1 can be seen in their fins. They are typically slow to moderate swimmers, and some examples include rays, bowfin, and knife fishes. The black ghost knife fish is a Gymnotiform swimmer that has a very long ventral ribbon fin. Thrust is produced by passing waves down the ribbon fin while the body remains rigid. This also allows the ghost knife fish to swim in reverse. Labriform: Labriform swimmers are also slow swimmers. They oscillate their pectoral fins to create thrust. Oscillating fins create thrust when a starting vortex is shed from the trailing edge of the fin. As the foil departs from the starting vortex, the effect of that vortex diminishes, while the bound circulation remains, producing lift. Labriform swimming can be viewed as continuously starting and stopping. Wrasses and surf perch are common Labriform swimmers. Hydrofoils Hydrofoils, or fins, are used to push against the water to create a normal force to provide thrust, propelling the animal through water. Sea turtles and penguins beat their paired hydrofoils to create lift. Some paired fins, such as pectoral fins on leopard sharks, can be angled at varying degrees to allow the animal to rise, fall, or maintain its level in the water column. The reduction of fin surface area helps to minimize drag, and therefore increase efficiency. Regardless of size of the animal, at any particular speed, maximum possible lift is proportional to (wing area) x (speed)2. Dolphins and whales have large, horizontal caudal hydrofoils, while many fish and sharks have vertical caudal hydrofoils. Porpoising (seen in cetaceans, penguins, and pinnipeds) may save energy if they are moving fast. Since drag increases with speed, the work required to swim unit distance is greater at higher speeds, but the work needed to jump unit distance is independent of speed. Seals propel themselves through the water with their caudal tail, while sea lions create thrust solely with their pectoral flippers. Drag powered swimming As with moving through any fluid, friction is created when molecules of the fluid collide with organism. The collision causes drag against moving fish, which is why many fish are streamlined in shape. Streamlined shapes work to reduce drag by orienting elongated objects parallel to the force of drag, therefore allowing the current to pass over and taper off the end of the fish. This streamlined shape allows for more efficient use of energy locomotion. Some flat-shaped fish can take advantage of pressure drag by having a flat bottom surface and curved top surface. The pressure drag created allows for the upward lift of the fish. Appendages of aquatic organisms propel them in two main and biomechanically extreme mechanisms. Some use lift powered swimming, which can be compared to flying as appendages flap like wings, and reduce drag on the surface of the appendage. Others use drag powered swimming, which can be compared to oars rowing a boat, with movement in a horizontal plane, or paddling, with movement in the parasagittal plane. Drag swimmers use a cyclic motion in which they push water back in a power stroke, and return their limb forward in the return or recovery stroke. When they push water directly backwards, this moves their body forward, but as they return their limbs to the starting position, they push water forward, which will thus pull them back to some degree, and so opposes the direction that the body is heading. This opposing force is called drag. 
The return-stroke drag causes drag swimmers to employ different strategies than lift swimmers. Reducing drag on the return stroke is essential for optimizing efficiency. For example, ducks paddle through the water spreading the webs of their feet as they move water back, and then when they return their feet to the front they pull their webs together to reduce the subsequent pull of water forward. The legs of water beetles have little hairs which spread out to catch and move water back in the power stroke, but lay flat as the appendage moves forward in the return stroke. Also, one side of a water beetle leg is wider than the others and is held perpendicular to the motion when pushing backward, but the leg rotates when the limb returns forward, so the thinner side catches less water. Drag swimmers experience a lessened efficiency in swimming due to resistance which affects their optimum speed. The less drag a fish experiences, the more it will be able to maintain higher speeds. Morphology of the fish can be designed to reduce drag, such as streamlining the body. The cost of transport is much higher for the drag swimmer, and when deviating from its optimum speed, the drag swimmer is energetically strained much more than the lift swimmer. There are natural processes in place to optimize energy use, and it is thought that adjustments of metabolic rates can compensate in part for mechanical disadvantages. Semi-aquatic animals compared to fully aquatic animals exhibit exacerbation of drag. Design that allows them to function out of the water limits the efficiency possible to be reached when in the water. In water swimming at the surface exposes them to resistive wave drag and is associated with a higher cost than submerged swimming. Swimming below the surface exposes them to resistance due to return strokes and pressure, but primarily friction. Frictional drag is due to fluid viscosity and morphology characteristics. Pressure drag is due to the difference of water flow around the body and is also affected by body morphology. Semi-aquatic organisms encounter increased resistive forces when in or out of the water, as they are not specialized for either habitat. The morphology of otters and beavers, for example, must meet needs for both environments. Their fur decreases streamlining and creates additional drag. The platypus may be a good example of an intermediate between drag and lift swimmers because it has been shown to have a rowing mechanism which is similar to lift-based pectoral oscillation. The limbs of semi-aquatic organisms are reserved for use on land and using them in water not only increases the cost of locomotion, but limits them to drag-based modes. Although they are less efficient, drag swimmers are able to produce more thrust at low speeds than lift swimmers. They are also thought to be better for maneuverability due to the large thrust produced. Amphibians Most of the Amphibia have a larval state, which has inherited anguilliform motion, and a laterally compressed tail to go with it, from fish ancestors. The corresponding tetrapod adult forms, even in the tail-retaining sub-class Urodeles, are sometimes aquatic to only a negligible extent (as in the genus Salamandra, whose tail has lost its suitability for aquatic propulsion), but the majority of Urodeles, from the newts to the giant salamander Megalobatrachus, retain a laterally compressed tail for a life that is aquatic to a considerable degree, which can use in a carangiform motion. 
Of the tailless amphibians (the frogs and toads of the sub-class Anura) the majority are aquatic to an insignificant extent in adult life, but in that considerable minority that are mainly aquatic we encounter for the first time the problem of adapting the tailless-tetrapod structure for aquatic propulsion. The mode that they use is unrelated to any used by fish. With their flexible back legs and webbed feet they execute something close to the leg movements of a human 'breast stroke,' rather more efficiently because the legs are better streamlined. Reptiles From the point of view of aquatic propulsion, the descent of modern members of the class Reptilia from archaic tailed Amphibia is most obvious in the case of the order Crocodilia (crocodiles and alligators), which use their deep, laterally compressed tails in an essentially carangiform mode of propulsion (see Fish locomotion#Carangiform). Terrestrial snakes, in spite of their 'bad' hydromechanical shape with roughly circular cross-section and gradual posterior taper, swim fairly readily when required, by an anguilliform propulsion (see Fish locomotion#Anguilliform). Cheloniidae (sea turtles) have found a solution to the problem of tetrapod swimming through the development of their forelimbs into flippers of high-aspect-ratio wing shape, with which they imitate a bird's propulsive mode more accurately than do the eagle-rays themselves. Fin and flipper locomotion Aquatic reptiles such as sea turtles (see also turtles) and extinct species like Pliosauroids predominantly use their pectoral flippers to propel themselves through the water and their pelvic flippers for maneuvering. During swimming they move their pectoral flippers in a dorso-ventral motion, causing forward motion. During swimming, they rotate their front flippers to decrease drag through the water column and increase efficiency. Newly hatched sea turtles exhibit several behavioral skills that help orientate themselves towards the ocean as well as identifying the transition from sand to water. If rotated in the pitch, yaw or roll direction, the hatchlings are capable of counteracting the forces acting upon them by correcting with either their pectoral or pelvic flippers and redirecting themselves towards the open ocean. Among mammals otariids (fur seals) swim primarily with their front flippers, using the rear flippers for steering, and phocids (true seals) move the rear flippers laterally, pushing the animal through the water. Escape reactions Some arthropods, such as lobsters and shrimps, can propel themselves backwards quickly by flicking their tail, known as lobstering or the caridoid escape reaction. Varieties of fish, such as teleosts, also use fast-starts to escape from predators. Fast-starts are characterized by the muscle contraction on one side of the fish twisting the fish into a C-shape. Afterwards, muscle contraction occurs on the opposite side to allow the fish to enter into a steady swimming state with waves of undulation traveling alongside the body. The power of the bending motion comes from fast-twitch muscle fibers located in the central region of the fish. The signal to perform this contraction comes from a set of Mauthner cells which simultaneously send a signal to the muscles on one side of the fish. Mauthner cells are activated when something startles the fish and can be activated by visual or sound-based stimuli. Fast-starts are split up into three stages. 
Stage one, which is called the preparatory stroke, is characterized by the initial bending to a C-shape with small delay caused by hydrodynamic resistance. Stage two, the propulsive stroke, involves the body bending rapidly to the other side, which may occur multiple times. Stage three, the rest phase, causes the fish to return to normal steady-state swimming and the body undulations begin to cease. Large muscles located closer to the central portion of the fish are stronger and generate more force than the muscles in the tail. This asymmetry in muscle composition causes body undulations that occur in Stage 3. Once the fast-start is completed, the position of the fish has been shown to have a certain level of unpredictability, which helps fish survive against predators. The rate at which the body can bend is limited by resistance contained in the inertia of each body part. However, this inertia assists the fish in creating propulsion as a result of the momentum created against the water. The forward propulsion created from C-starts, and steady-state swimming in general, is a result of the body of the fish pushing against the water. Waves of undulation create rearward momentum against the water providing the forward thrust required to push the fish forward. Efficiency The Froude propulsion efficiency is defined as the ratio of power output to the power input: η = 2U1/(U1 + U2), where U1 = free stream velocity and U2 = jet velocity. A good efficiency for carangiform propulsion is between 50 and 80%. Minimizing drag Pressure differences occur outside the boundary layer of swimming organisms due to disrupted flow around the body. The difference on the up- and down-stream surfaces of the body is pressure drag, which creates a downstream force on the object. Frictional drag, on the other hand, is a result of fluid viscosity in the boundary layer. Higher turbulence causes greater frictional drag. Reynolds number (Re) is the measure of the relationships between inertial and viscous forces in flow ((animal's length x animal's velocity)/kinematic viscosity of the fluid). Turbulent flow can be found at higher Re values, where the boundary layer separates and creates a wake, and laminar flow can be found at lower Re values, when the boundary layer separation is delayed, reducing wake and kinetic energy loss to opposing water momentum. The body shape of a swimming organism affects the resulting drag. Long, slender bodies reduce pressure drag by streamlining, while short, round bodies reduce frictional drag; therefore, the optimal shape of an organism depends on its niche. Swimming organisms with a fusiform shape are likely to experience the greatest reduction in both pressure and frictional drag. Wing shape also affects the amount of drag experienced by an organism, as with different methods of stroke, recovery of the pre-stroke position results in the accumulation of drag. High-speed ram ventilation creates laminar flow of water from the gills along the body of an organism. The secretion of mucus along the organism's body surface, or the addition of long-chained polymers to the velocity gradient, can reduce frictional drag experienced by the organism. Buoyancy Many aquatic/marine organisms have developed organs to compensate for their weight and control their buoyancy in the water. These structures make the density of their bodies very close to that of the surrounding water. 
Some hydrozoans, such as siphonophores, has gas-filled floats; the Nautilus, Sepia, and Spirula (Cephalopods) have chambers of gas within their shells; and most teleost fish and many lantern fish (Myctophidae) are equipped with swim bladders. Many aquatic and marine organisms may also be composed of low-density materials. Deep-water teleosts, which do not have a swim bladder, have few lipids and proteins, deeply ossified bones, and watery tissues that maintain their buoyancy. Some sharks' livers are composed of low-density lipids, such as hydrocarbon squalene or wax esters (also found in Myctophidae without swim bladders), which provide buoyancy. Swimming animals that are denser than water must generate lift or adapt a benthic lifestyle. Movement of the fish to generate hydrodynamic lift is necessary to prevent sinking. Often, their bodies act as hydrofoils, a task that is more effective in flat-bodied fish. At a small tilt angle, the lift is greater for flat fish than it is for fish with narrow bodies. Narrow-bodied fish use their fins as hydrofoils while their bodies remain horizontal. In sharks, the heterocercal tail shape drives water downward, creating a counteracting upward force while thrusting the shark forward. The lift generated is assisted by the pectoral fins and upward-angle body positioning. It is supposed that tunas primarily use their pectoral fins for lift. Buoyancy maintenance is metabolically expensive. Growing and sustaining a buoyancy organ, adjusting the composition of biological makeup, and exerting physical strain to stay in motion demands large amounts of energy. It is proposed that lift may be physically generated at a lower energy cost by swimming upward and gliding downward, in a "climb and glide" motion, rather than constant swimming on a plane. Temperature Temperature can also greatly affect the ability of aquatic organisms to move through water. This is because temperature not only affects the properties of the water, but also the organisms in the water, as most have an ideal range specific to their body and metabolic needs. Q10 (temperature coefficient), the factor by which a rate increases at a 10 °C increase in temperature, is used to measure how organisms' performance relies on temperature. Most have increased rates as water becomes warmer, but some have limits to this and others find ways to alter such effects, such as by endothermy or earlier recruitment of faster muscle. For example, Crocodylus porosus, or estuarine crocodiles, were found to increase swimming speed from 15 °C to 23 °C and then to have peak swimming speed from 23 °C to 33 °C. However, performance began to decline as temperature rose beyond that point, showing a limit to the range of temperatures at which this species could ideally perform. Submergence The more of the animal's body that is submerged while swimming, the less energy it uses. Swimming on the surface requires two to three times more energy than when completely submerged. This is because of the bow wave that is formed at the front when the animal is pushing the surface of the water when swimming, creating extra drag. Secondary evolution While tetrapods lost many of their natural adaptations to swimming when they evolved onto the land, many have re-evolved the ability to swim or have indeed returned to a completely aquatic lifestyle. 
Primarily or exclusively aquatic animals have re-evolved from terrestrial tetrapods multiple times: examples include amphibians such as newts, reptiles such as crocodiles, sea turtles, ichthyosaurs, plesiosaurs and mosasaurs, marine mammals such as whales, seals and otters, and birds such as penguins. Many species of snakes are also aquatic and live their entire lives in the water. Among invertebrates, a number of insect species have adaptations for aquatic life and locomotion. Examples of aquatic insects include dragonfly larvae, water boatmen, and diving beetles. There are also aquatic spiders, although they tend to prefer other modes of locomotion under water than swimming proper. Examples are: Some breeds of dog swim recreationally. Umbra, a world record-holding dog, can swim 4 miles (6.4 km) in 73 minutes, placing her in the top 25% in human long-distance swimming competitions. The fishing cat is one wild species of cat that has evolved special adaptations for an aquatic or semi-aquatic lifestyle – webbed digits. Tigers and some individual jaguars are the only big cats known to go into water readily, though other big cats, including lions, have been observed swimming. A few domestic cat breeds also like swimming, such as the Turkish Van. Horses, moose, and elk are very powerful swimmers, and can travel long distances in the water. Elephants are also capable of swimming, even in deep waters. Eyewitnesses have confirmed that camels, including dromedary and Bactrian camels, can swim, despite the fact that there is little deep water in their natural habitats. Both domestic and wild rabbits can swim. Domestic rabbits are sometimes trained to swim as a circus attraction. A wild rabbit famously swam in an apparent attack on U.S. President Jimmy Carter's boat when it was threatened in its natural habitat. The guinea pig (or cavy) is noted as having an excellent swimming ability. Mice can swim quite well. They do panic when placed in water, but many lab mice are used in the Morris water maze, a test to measure learning. When mice swim, they use their tails like flagella and kick with their legs. Many snakes are excellent swimmers as well. Large adult anacondas spend the majority of their time in the water, and have difficulty moving on land. Many monkeys can naturally swim and some, like the proboscis monkey, crab-eating macaque, and rhesus macaque swim regularly. Human swimming Swimming has been known amongst humans since prehistoric times; the earliest record of swimming dates back to Stone Age paintings from around 7,000 years ago. Competitive swimming started in Europe around 1800 and was part of the first modern 1896 Summer Olympics in Athens, though not in a form comparable to the contemporary events. It was not until 1908 that regulations were implemented by the International Swimming Federation to produce competitive swimming. See also Animal locomotion Aquatic Fish fin Locomotion in space Robot locomotion Role of skin in locomotion Terrestrial locomotion Tradeoffs for locomotion in air and water Undulatory locomotion Aerial locomotion in marine animals References Ethology Swimming Animal locomotion
Aquatic locomotion
[ "Physics", "Biology" ]
7,270
[ "Animal locomotion", "Physical phenomena", "Behavior", "Animals", "Behavioural sciences", "Motion (physics)", "Ethology" ]
22,211,457
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93P%C3%B3sa%20theorem
In the mathematical discipline of graph theory, the Erdős–Pósa theorem, named after Paul Erdős and Lajos Pósa, relates two parameters of a graph: The size of the largest collection of vertex-disjoint cycles contained in the graph; The size of the smallest feedback vertex set in the graph: a set that contains one vertex from every cycle. Motivation and statement In many applications, we are interested in finding a minimum feedback vertex set in a graph: a small set that includes one vertex from every cycle, or, equivalently, a small set of vertices whose removal destroys all cycles. This is a hard computational problem; if we are not able to solve it exactly, we can instead try to find lower and upper bounds on the size of the minimum feedback vertex set. One approach to find lower bounds is to find a collection of vertex-disjoint cycles in a graph. For example, consider the graph in Figure 1. The cycles A-B-C-F-A and G-H-I-J-G share no vertices. As a result, if we want to remove vertices and destroy all cycles in the graph, we must remove at least two vertices: one from the first cycle and one from the second. This line of reasoning generalizes: if we can find k vertex-disjoint cycles in a graph, then every feedback vertex set in the graph must have at least k vertices. Unfortunately, in general, this bound is not tight: if the largest collection of vertex-disjoint cycles in a graph contains k cycles, then it does not necessarily follow that there is a feedback vertex set of size k. The graph in Figure 1 is an example of this: even if we destroy cycle G-H-I-J-G by removing one of the vertices G, H, I, or J, we cannot destroy all four of the cycles A-B-C-F-A, A-B-E-F-A, B-C-D-E-B, and C-D-E-F-C by removing only one more vertex. Any minimum feedback vertex set in the graph in Figure 1 has three vertices: for example, the three vertices A, C, and G. It is possible to construct examples in which the gap between the two quantities - the size of the largest collection of vertex-disjoint cycles, and the size of the smallest feedback vertex set - is arbitrarily large. The Erdős–Pósa theorem says that despite this, knowing one quantity does put lower and upper bounds on the other quantity. Formally, the theorem states that there exists a function f such that for each positive integer k, every graph either contains a collection of k vertex-disjoint cycles, or has a feedback vertex set of at most f(k) vertices. For example, suppose we have determined that for the graph in Figure 1, there is a collection of 2 vertex-disjoint cycles, but no collection of 3 vertex-disjoint cycles. Our earlier argument says that the smallest feedback vertex set must have at least 2 vertices; the Erdős–Pósa theorem says that the smallest feedback vertex set must have at most f(2) vertices. In principle, many functions could satisfy the theorem. For the purpose of discussing bounds on how large f(k) needs to be, we define the Erdős–Pósa function f to give, for each positive integer k, the least value of f(k) for which the statement of the theorem holds. Bounds on the Erdős–Pósa function In addition to proving that the function f exists, Erdős and Pósa (1965) obtained the bounds c1·k·log k ≤ f(k) ≤ c2·k·log k for some constants c1 and c2. In Big O notation, f(k) = Θ(k log k). A previous unpublished result of Béla Bollobás stated that f(2) = 3: in simpler terms, any graph which does not contain two vertex-disjoint cycles has a feedback vertex set of at most three vertices. One example showing that f(2) cannot be smaller than 3 is K5, the complete graph on 5 vertices. 
Here, because any cycle must contain at least three vertices, and there are only 5 vertices total, any two cycles must overlap in at least one vertex. On the other hand, a set of only two vertices cannot be a feedback vertex set because the other three vertices will form a cycle: a feedback vertex set must contain at least three vertices. The result that f(2) = 3 was first published by Lovász (1965), who also gave a complete characterization of the case k = 2: that is, he described the graphs which, like the example of K5 given above, do not contain two vertex-disjoint cycles. Later, proved and . Erdős–Pósa property A family F of graphs or hypergraphs is defined to have the Erdős–Pósa property if there exists a function f such that for every (hyper-)graph G and every integer k one of the following is true: G contains k vertex-disjoint subgraphs each isomorphic to a graph in F; or G contains a vertex set C of size at most f(k) such that G − C has no subgraph isomorphic to a graph in F. The definition is often phrased as follows. If one denotes by ν(G) the maximum number of vertex disjoint subgraphs of G isomorphic to a graph in F and by τ(G) the minimum number of vertices whose deletion from G leaves a graph without a subgraph isomorphic to a graph in F, then τ(G) ≤ f(ν(G)), for some function f not depending on G. Rephrased in this terminology, the Erdős–Pósa theorem states that the family F consisting of all cycles has the Erdős–Pósa property, with bounding function f(k) = Θ(k log k). Robertson and Seymour (1986) generalized this considerably. Given a graph H, let M(H) denote the family of all graphs that contain H as a minor. As a corollary of their grid minor theorem, Robertson and Seymour proved that M(H) has the Erdős–Pósa property if and only if H is a planar graph. Moreover, it is now known that the corresponding bounding function is f(k) = O(k) if H is a forest, while it is f(k) = Θ(k log k) for every other planar graph H. When we take H to be a triangle, the family M(H) consists of all graphs that contain at least one cycle, and a vertex set C such that G − C has no subgraph isomorphic to a graph in M(H) is exactly a feedback vertex set. Thus, the special case where H is a triangle is equivalent to the Erdős–Pósa theorem. References See also Pósa theorem (1962). External links List of graph classes for which the Erdös-Pósa property is known to (not) hold Theorems in graph theory
Erdős–Pósa theorem
[ "Mathematics" ]
1,300
[ "Theorems in graph theory", "Theorems in discrete mathematics" ]
22,211,736
https://en.wikipedia.org/wiki/Aerosol%20impaction
In the physics of aerosols, aerosol impaction is the process in which particles are removed from an air stream by forcing the gases to make a sharp bend. Particles above a certain size possess so much momentum that they can not follow the air stream and strike a collection surface, which is available for later analysis of mass and composition. Removal of particles from an air-stream by impaction followed by mass and composition analysis has always been a different approach as to filter sampling, yet has been little utilized for routine analysis because of lack of suitable analytical methods. Advantages The most clear and important advantage of impaction, as opposed to filtration, is that two key aerosol parameters, size and composition, can be simultaneously established. There are many advantages of impaction as a sampling method. For two of the most common configurations, an orifice and an infinite slot, theoretical predictions can be made and empirically verified that give the cuts point and shape of the collection efficiency of an impaction stage. The air stream moves over the sample, not through it as in filtration, reducing desiccation and chemical transformations of the collected sample. Almost complete control of the type of surface on which the particles are impacted, as opposed to the limited choice of filter types. By varying the speed of the air stream and the sharpness of the bend, one can separate particles into numerous size classifications while retaining a sample for analysis. Disadvantages There are also several disadvantages to impaction as a sampling method. Only a limited amount of material is available for mass and compositional analysis, as one can not collect more than a few monolayers of particles before particle bounce and mis-sizing are a potential problem. See also Deposition (aerosol physics) Cascade impactor Aerodynamic Aerosol Classifier References Particulates Aerosols
Aerosol impaction
[ "Chemistry" ]
369
[ "Particulates", "Particle technology", "Aerosols", "Colloids" ]
22,214,599
https://en.wikipedia.org/wiki/Obstacle
An obstacle (also called a barrier, impediment, or stumbling block) is an object, thing, action or situation that causes an obstruction. A obstacle blocks or hinders our way forward. Different types of obstacles include physical, economic, biopsychosocial, cultural, political, technological and military. Types Physical As physical obstacles, we can enumerate all those physical barriers that block the action and prevent the progress or the achievement of a concrete goal. Examples: architectural barriers that hinder access to people with reduced mobility; doors, gates, and access control systems, designed to keep intruders or attackers out; large objects, fallen trees or collapses through passageways, paths, roads, railroads, waterways or airfields, preventing mobility; sandbanks, rocks or coral reefs, preventing free navigation; hills, mountains and weather phenomena preventing the free traffic of aircraft; meteors, meteorites, micrometeorites, cosmic dust, comets, space debris, strong electromagnetic radiation or gravitational field, preventing a spacecraft from navigating freely in space. Sports In sports, a variety of physical barriers or obstacles were introduced in the competition rules to make them even more difficult and competitive: in the athletics, there are barriers in obstacle running contests of 110 meters and 3000 meters, as well as in high jump and in pole vault; in equestrian competitions, there are also jumps over obstacles; in tennis and volleyball, a net stands as an obstacle that divides the court; in the cycling, motorcycle and motor racing, circuit designs are interposed with difficult paths to obstruct and render more difficult the competition; in team sports, like soccer, football, basketball and volleyball, attack players are hampered by defensive players, that make it difficult to move or throw the ball towards the goal; in other sports, such as Parkour, the competitor aims to move from one point to another in the most fluid and fast as possible, jumping obstacles of urban architecture that get in the way. Economic Can be defined as those elements of material deprivation that people may have to achieve certain goals, such as: the lack of money as an obstacle to the development of certain projects; the lack of water as an obstacle to the human capacity to produce certain crops on the field and to their own survival; the lack of light as an obstacle to mobility at night; the lack of electricity as an obstacle to the benefits provided by electronic devices and electrical machines; the lack of schools and teachers as obstacles to education and the fullness of citizenship; the lack of hospitals and physicians as obstacles to a system for the improvement of public health; the lack of transportation infrastructure as an obstacle to trade, industrial and tourism activities, among others, and to economic development. 
Biopsychosocial and cultural People are prevented to achieve certain goals by biological, psychological, social or cultural barriers, such as: diseases, as obstacles to human life in its fullness; physical disabilities as obstacles to the mobility of handicapped, which can be facilitated by accessibility resources; shyness as an obstacle to social relations; fear as an obstacle that prevents facing potential enemies or socio-political opponents, or facing possible economic barriers; social exclusion or the arrest of individuals as obstacles to socio-cultural integration into a community; the lack of psychomotor coordination as an obstacle to the development of qualified abilities; the level of mastery of the spoken idiom, or the differences between the spoken languages, as barriers to national or international social relations; the different religions as obstacles to the mutual moral understanding or interreligious dialogue, nationally or internationally; Political Obstacles or difficulties which groups of citizens, their political representatives, political parties or countries interpose to each other in order to hinder the actions of certain of their opponents, such as: the prevention of a political minority group to achieving their aspirations in the parliament by the politically dominant voting majority, in the parliamentary procedure; the ideological repression, persecution and imprisonment for political reasons; the blocking of the international political and economic influence of a country by a multilateral treaty or alliance between countries opposed to such influence. Technological The improvement of living conditions of any human community is constantly challenged by the need of technologies still inaccessible or unavailable, which can be internally developed or acquired from other communities that have already developed them, and in both cases must overcome such barriers as: in the technology transfer between different countries, the trade and diplomatic negotiating skills with the countries which are providers of the desired new technologies; in the internal development approach, the educational level of the community or country, the accessible collection of specialized information, their technological and industrial base, their institutional level of scientific and technological research, development and innovation, and the level of practiced international collaboration. Military When different communities or countries, which border or not, cannot develop good relations, for economic, cultural or political reasons, they may exceed the limits of diplomatic negotiations, creating military defensive or offensive obstacles to their opponents or enemies, such as: building fortifications, entrenchments, barbed wire beds or mine fields, and other similar tactics intended to prevent or hinder movement of the enemy in a certain direction, and to protect your own forces from attack; blocking or destroying physical resources or logistic interconnections, such as bridges, highways, ports or airports, creating barriers to migration, trade, tourism, etc.; the invasion of the opponent's territory, seeking to block, destroy or use physical, logistical or strategic resources, in order to hinder existing threats. Image gallery References Architectural elements Civil engineering Military tactics Borders
Obstacle
[ "Physics", "Technology", "Engineering" ]
1,113
[ "Building engineering", "Construction", "Architectural elements", "Civil engineering", "Space", "Spacetime", "Borders", "Components", "Architecture" ]
22,215,454
https://en.wikipedia.org/wiki/Catabolite%20repression
Carbon catabolite repression, or simply catabolite repression, is an important part of global control system of various bacteria and other microorganisms. Catabolite repression allows microorganisms to adapt quickly to a preferred (rapidly metabolizable) carbon and energy source first. This is usually achieved through inhibition of synthesis of enzymes involved in catabolism of carbon sources other than the preferred one. The catabolite repression was first shown to be initiated by glucose and therefore sometimes referred to as the glucose effect. However, the term "glucose effect" is actually a misnomer since other carbon sources are known to induce catabolite repression. It was discovered by Frédéric Diénert in 1900. Jacques Monod provides a bibliography of pre-1940 literature. Escherichia coli Catabolite repression was extensively studied in Escherichia coli. E. coli grows faster on glucose than on any other carbon source. For example, if E. coli is placed on an agar plate containing only glucose and lactose, the bacteria will use glucose first and lactose second. When glucose is available in the environment, the synthesis of β-galactosidase is under repression due to the effect of catabolite repression caused by glucose. The catabolite repression in this case is achieved through the utilization of phosphotransferase system. An important enzyme from the phosphotransferase system called Enzyme II A (EIIA) plays a central role in this mechanism. There are different catabolite-specific EIIA in a single cell, even though different bacterial groups have specificities to different sets of catabolites. In enteric bacteria one of the EIIA enzymes in their set is specific for glucose transport only. When glucose levels are high inside the bacteria, EIIA mostly exists in its unphosphorylated form. This leads to inhibition of adenylyl cyclase and lactose permease, therefore cAMP levels are low and lactose can not be transported inside the bacteria. Once the glucose is all used up, the second preferred carbon source (i.e. lactose) has to be used by bacteria. Absence of glucose will "turn off" catabolite repression. When glucose levels are low, the phosphorylated form of EIIA accumulates and consequently activates the enzyme adenylyl cyclase, which will produce high levels of cAMP. cAMP binds to catabolite activator protein (CAP) and together they will bind to a promoter sequence on the lac operon. However, this is not enough for the lactose genes to be transcribed. Lactose must be present inside the cell to remove the lactose repressor from the operator sequence (transcriptional regulation). When these two conditions are satisfied, it means for the bacteria that glucose is absent and lactose is available. Next, bacteria start to transcribe the lac operon and produce β-galactosidase enzymes for lactose metabolism. The example above is a simplification of a complex process. Catabolite repression is considered to be a part of global control system and therefore it affects more genes rather than just lactose gene transcription. Bacillus subtilis Gram positive bacteria such as Bacillus subtilis have a cAMP-independent catabolite repression mechanism controlled by catabolite control protein A (CcpA). In this alternative pathway CcpA negatively represses other sugar operons so they are off in the presence of glucose. 
It works by the fact that Hpr is phosphorylated by a specific mechanism, when glucose enters through the cell membrane protein EIIC, and when Hpr is phosphorylated it can then allow CcpA to block transcription of the alternative sugar pathway operons at their respective cre sequence binding sites. Note that E. coli has a similar cAMP-independent catabolite repression mechanism that utilizes a protein called catabolite repressor activator (Cra). References External links https://web.archive.org/web/20110605181224/http://www.mun.ca/biochem/courses/4103/topics/catabintro.html http://pathmicro.med.sc.edu/mayer/geneticreg.htm Bacteria Biochemical reactions Gene expression
Catabolite repression
[ "Chemistry", "Biology" ]
927
[ "Gene expression", "Prokaryotes", "Biochemical reactions", "Molecular genetics", "Cellular processes", "Bacteria", "Molecular biology", "Biochemistry", "Microorganisms" ]
22,217,265
https://en.wikipedia.org/wiki/Nucleic%20acid%20structure%20determination
Experimental approaches of determining the structure of nucleic acids, such as RNA and DNA, can be largely classified into biophysical and biochemical methods. Biophysical methods use the fundamental physical properties of molecules for structure determination, including X-ray crystallography, NMR and cryo-EM. Biochemical methods exploit the chemical properties of nucleic acids using specific reagents and conditions to assay the structure of nucleic acids. Such methods may involve chemical probing with specific reagents, or rely on native or analogue chemistry. Different experimental approaches have unique merits and are suitable for different experimental purposes. Biophysical methods X-ray crystallography X-ray crystallography is not common for nucleic acids alone, since neither DNA nor RNA readily form crystals. This is due to the greater degree of intrinsic disorder and dynamism in nucleic acid structures and the negatively charged (deoxy)ribose-phosphate backbones, which repel each other in close proximity. Therefore, crystallized nucleic acids tend to be complexed with a protein of interest to provide structural order and neutralize the negative charge. Nuclear magnetic resonance spectroscopy (NMR) Nucleic acid NMR is the use of NMR spectroscopy to obtain information about the structure and dynamics of nucleic acid molecules, such as DNA or RNA. As of 2003, nearly half of all known RNA structures had been determined by NMR spectroscopy. Nucleic acid NMR uses similar techniques as protein NMR, but has several differences. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. The types of NMR usually done with nucleic acids are 1H or proton NMR, 13C NMR, 15N NMR, and 31P NMR. Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY) and total coherence transfer spectroscopy (TOCSY) to detect through-bond nuclear couplings, and nuclear Overhauser effect spectroscopy (NOESY) to detect couplings between nuclei that are close to each other in space. Parameters taken from the spectrum, mainly NOESY cross-peaks and coupling constants, can be used to determine local structural features such as glycosidic bond angles, dihedral angles (using the Karplus equation), and sugar pucker conformations. For large-scale structure, these local parameters must be supplemented with other structural assumptions or models, because errors add up as the double helix is traversed, and unlike with proteins, the double helix does not have a compact interior and does not fold back upon itself. NMR is also useful for investigating nonstandard geometries such as bent helices, non-Watson–Crick basepairing, and coaxial stacking. It has been especially useful in probing the structure of natural RNA oligonucleotides, which tend to adopt complex conformations such as stem-loops and pseudoknots. NMR is also useful for probing the binding of nucleic acid molecules to other molecules, such as proteins or drugs, by seeing which resonances are shifted upon binding of the other molecule. Cryogenic electron microscopy (cryo-EM) Cryogenic electron microscopy (cryo-EM) is a technique that uses an electron beam to image samples that have been cryogenically preserved in an aqueous solution. 
Liquid samples are pipetted onto small metallic grids and plunged into a liquid ethane/propane solution which is kept extremely cold by a liquid nitrogen bath. During this freezing process, water molecules in the sample do not have enough time to form the hexagonal lattices found in ice, so the sample is preserved in a glassy, water-like state (also referred to as vitrified ice), making these samples easier to image using the electron beam. An advantage of cryo-EM over X-ray crystallography is that the samples are preserved in their aqueous solution state and are not perturbed by forming a crystal of the sample. One disadvantage is that it is difficult to resolve nucleic acid or protein structures that are smaller than ~75 kilodaltons, partly due to the difficulty of having enough contrast to locate particles in this vitrified aqueous solution. Another disadvantage is that attaining atomic-level structural information about a sample requires taking many images (often referred to as electron micrographs) and averaging over those images in a process called single-particle reconstruction, which is computationally intensive. Cryo-EM is a newer, less perturbative version of transmission electron microscopy (TEM). It is less perturbative because the sample is not dried onto a surface (a drying step often used in negative-stain TEM) and because cryo-EM does not require a contrast agent such as heavy metal salts (e.g. uranyl acetate or phosphotungstic acid), which may also affect the structure of the biomolecule. Transmission electron microscopy, as a technique, utilizes the fact that samples interact with a beam of electrons, and only the parts of the sample that do not interact with the electron beam are allowed to 'transmit' onto the electron detection system. TEM, in general, has been a useful technique for determining nucleic acid structure since the 1960s. While double-stranded DNA (dsDNA) may not traditionally be considered to have structure in the sense of alternating segments of single- and double-stranded regions, in reality dsDNA is not simply a perfectly ordered double helix at every location along its length, owing to thermal fluctuations in the DNA and to alternative structures that can form, such as G-quadruplexes. Cryo-EM of nucleic acids has been performed on ribosomes, viral RNA, and single-stranded RNA structures within viruses. These studies have resolved structural features at different resolutions, from the nucleobase level (2-3 angstroms) up to tertiary structure motifs (greater than a nanometer). Chemical probing RNA chemical probing uses chemicals that react with RNAs. Importantly, their reactivity depends on local RNA structure, e.g. base-pairing or accessibility. Differences in reactivity can therefore serve as a footprint of structure along the sequence. Different reagents react at different positions on the RNA structure and have different spectra of reactivity. Recent advances allow the simultaneous study of the structure of many RNAs (transcriptome-wide probing) and the direct assay of RNA molecules in their cellular environment (in-cell probing). Structured RNA is first reacted with the probing reagents for a given incubation time. These reagents form a covalent adduct on the RNA at the site of reaction. When the RNA is reverse transcribed by a reverse transcriptase into a DNA copy, the resulting DNA is truncated at the positions of reaction because the enzyme is blocked by the adducts.
The collection of DNA molecules of various truncated lengths therefore indicates the frequency of reaction at every base position, which reflects the structure profile along the RNA. This is traditionally assayed by running the DNA on a gel, where the intensities of the bands indicate the frequency of observing a truncation at each position. Recent approaches use high-throughput sequencing to achieve the same purpose with greater throughput and sensitivity. The reactivity profile can be used to study the degree of structure at particular positions for specific hypotheses, or used in conjunction with computational algorithms to produce a complete, experimentally supported structure model. Some reagents, e.g. hydroxyl radicals, cleave the RNA molecule instead, but the result in the truncated DNA is the same. Other reagents, e.g. DMS, sometimes do not block the reverse transcriptase but instead trigger a mistake at the site in the DNA copy. These mistakes can be detected with high-throughput sequencing methods, and this readout is sometimes exploited to improve probing results, an approach known as mutational profiling (MaP). Positions on the RNA can be protected from the reagents not only by local structure but also by a protein bound over that position, which has led some studies to use chemical probing to assay protein binding as well. Hydroxyl radical probing As hydroxyl radicals are short-lived in solution, they need to be generated at the time of the experiment. This can be done using H2O2, ascorbic acid, and an Fe(II)-EDTA complex. These reagents form a system that generates hydroxyl radicals through Fenton chemistry. The hydroxyl radicals can then react with the nucleic acid molecules. Hydroxyl radicals attack the ribose/deoxyribose ring, and this results in breaking of the sugar-phosphate backbone. Sites protected by binding proteins or by RNA tertiary structure are cleaved by hydroxyl radicals at a lower rate. These positions therefore show up as an absence of bands on the gel, or as low signal in sequencing. DMS Dimethyl sulfate, known as DMS, is a chemical that can be used to modify nucleic acids in order to determine secondary structure. Reaction with DMS adds a methyl adduct at the site, known as methylation. In particular, DMS methylates N1 of adenine (A) and N3 of cytosine (C), both located at the site of natural hydrogen bonds upon base-pairing. Therefore, modification can only occur at A and C nucleobases that are single-stranded, base-paired at the end of a helix, or in a base pair at or next to a GU wobble pair, the latter two being positions where the base-pairing can occasionally open up. Moreover, since modified sites cannot be base-paired, modification sites can be detected by RT-PCR, where the reverse transcriptase falls off at methylated bases and produces distinct truncated cDNAs. These truncated cDNAs can be identified through gel electrophoresis or high-throughput sequencing. Improving upon truncation-based methods, DMS mutational profiling with sequencing (DMS-MaPseq) can detect multiple DMS modifications in a single RNA molecule, which enables one to obtain more information per read (for a read of 150 nt, typically two to three mutation sites, rather than zero to one truncation sites), determine structures of low-abundance RNAs, and identify subpopulations of RNAs with alternative secondary structures.
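To make the readout concrete, the step from sequencing counts to a per-position reactivity profile can be sketched in a few lines of code. This is only an illustrative outline under simplified assumptions: the function name, the toy numbers, and the crude top-10% normalisation below are invented for the example and do not correspond to any particular published probing pipeline, which would apply more careful background correction and normalisation.

```python
from typing import List

def reactivity_profile(treated_events: List[int], treated_coverage: List[int],
                       control_events: List[int], control_coverage: List[int]) -> List[float]:
    """Per-position reactivity: background-subtracted event (stop or mutation)
    rate, scaled so that the most reactive positions are near 1."""
    raw = []
    for te, tc, ce, cc in zip(treated_events, treated_coverage,
                              control_events, control_coverage):
        treated_rate = te / tc if tc else 0.0
        control_rate = ce / cc if cc else 0.0
        raw.append(max(treated_rate - control_rate, 0.0))
    # Crude normalisation: divide by the mean of the top 10% of raw values.
    top = sorted(raw, reverse=True)[:max(1, len(raw) // 10)]
    scale = sum(top) / len(top)
    return [r / scale for r in raw] if scale > 0 else raw

# Toy data: position 3 reacts strongly (likely unpaired); positions 1-2 are protected.
profile = reactivity_profile(treated_events=[2, 1, 40, 35],
                             treated_coverage=[1000] * 4,
                             control_events=[1, 1, 2, 3],
                             control_coverage=[1000] * 4)
print([round(r, 2) for r in profile])   # e.g. [0.03, 0.0, 1.0, 0.84]
```

Profiles of this kind are then either inspected directly or passed to folding software as restraints.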
DMS-MaPseq uses a thermostable group II intron reverse transcriptase (TGIRT) that creates a mutation (rather than a truncation) in the cDNA when it encounters a base methylated by DMS, but otherwise it reverse transcribes with high fidelity. Sequencing the resulting cDNA identifies which bases were mutated during reverse transcription; these bases cannot have been base-paired in the original RNA. DMS modification can also be used for DNA, for example in footprinting DNA-protein interactions. SHAPE Selective 2′-hydroxyl acylation analyzed by primer extension, or SHAPE, takes advantage of reagents that preferentially modify the backbone of RNA in structurally flexible regions. Reagents such as N-methylisatoic anhydride (NMIA) and 1-methyl-7-nitroisatoic anhydride (1M7) react with the 2'-hydroxyl group to form adducts on the 2'-hydroxyl of the RNA backbone. Compared to the chemicals used in other RNA probing techniques, these reagents have the advantage of being largely unbiased to base identity, while remaining very sensitive to conformational dynamics. Nucleotides which are constrained (usually by base-pairing) show less adduct formation than nucleotides which are unpaired. Adduct formation is quantified for each nucleotide in a given RNA by extension of a complementary DNA primer with reverse transcriptase and comparison of the resulting fragments with those from an unmodified control. SHAPE therefore reports on RNA structure at the individual nucleotide level. This data can be used as input to generate highly accurate secondary structure models. SHAPE has been used to analyze diverse RNA structures, including that of an entire HIV-1 genome. The best approach is to use a combination of chemical probing reagents and experimental data. In SHAPE-Seq SHAPE is extended by bar-code based multiplexing combined with RNA-Seq and can be performed in a high-throughput fashion. Carbodiimides The carbodiimide moiety can also form covalent adducts at exposed nucleobases, which are uracil, and to a smaller extent guanine, upon nucleophilic attack by a deprotonated N. They react primarily with N3 of uracil and N1 of guanine modifying two sites responsible for hydrogen bonding on the bases. 1-cyclohexyl-(2-morpholinoethyl)carbodiimide metho-p-toluene sulfonate, also known as CMCT or CMC, is the most commonly used carbodiimide for RNA structure probing. Similar to DMS, it can be detected by reverse transcription followed by gel electrophoresis or high-throughput sequencing. As it is reactive towards G and U, it can be used to complement the data from DMS probing experiments, which inform A and C. 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide, also known as EDC, is a water-soluble carbodiimide that exhibits similar reactivity as CMC, and is also used for the chemical probing of RNA structure. EDC is able to permeate into cells and is thus used for direct in-cell probing of RNA in their native environments. Kethoxal, glyoxal and derivatives Some 1,2-dicarbonyl compounds are able to react with single-stranded guanine (G) at N1 and N2, forming a five-membered ring adduct at the Watson-Crick face. 1,1-Dihydroxy-3-ethoxy-2-butanone, also known as kethoxal, has a structure related to 1,2-dicarbonyls, and was the first in this category used extensively for the chemical probing of RNA. Kethoxal causes the modification of guanine, specifically altering the N1 and the exocyclic amino group (N2) simultaneously by covalent interaction. 
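Whichever reagent is used (DMS, SHAPE reagents, carbodiimides, kethoxal, or related compounds), the resulting per-nucleotide reactivities are typically converted into restraints for a secondary-structure prediction program, either as hard "keep this position single-stranded" constraints or as soft pseudo-free-energy terms of the form ΔG(i) = m·ln(r_i + 1) + b. The sketch below illustrates that conversion only; the cutoff, slope, and intercept are stand-in numbers (values of roughly this size have been reported in the SHAPE-directed folding literature, but they are fitted parameters, not constants), and the function is a hypothetical example rather than any specific tool's interface.

```python
import math
from typing import List, Optional, Tuple

def probing_restraints(reactivities: List[Optional[float]],
                       unpaired_cutoff: float = 0.7,   # illustrative threshold
                       slope: float = 2.6,             # kcal/mol, fitted value
                       intercept: float = -0.8         # kcal/mol, fitted value
                       ) -> Tuple[List[int], List[Optional[float]]]:
    """Convert per-nucleotide reactivities into (a) positions forced to be
    single-stranded and (b) soft pseudo-free-energy terms
    dG(i) = slope * ln(r_i + 1) + intercept, applied when nucleotide i is
    predicted to be base-paired. None marks positions with no data."""
    force_single: List[int] = []
    pseudo_energies: List[Optional[float]] = []
    for i, r in enumerate(reactivities):
        if r is None:
            pseudo_energies.append(None)
            continue
        if r >= unpaired_cutoff:
            force_single.append(i)
        pseudo_energies.append(slope * math.log(r + 1.0) + intercept)
    return force_single, pseudo_energies

single, energies = probing_restraints([0.05, None, 1.3, 0.4, 0.9])
print(single)                                   # positions treated as unpaired: [2, 4]
print([round(e, 2) if e is not None else None for e in energies])
```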
Like kethoxal, glyoxal, methylglyoxal, and phenylglyoxal, which all carry the key 1,2-dicarbonyl moiety, react with free guanines and can be used to probe unpaired guanine bases in structured RNA. Due to their chemical properties, these reagents permeate readily into cells and can therefore be used to assay RNAs in their native cellular environments. LASER or NAz Probing Light-Activated Structural Examination of RNA (LASER) probing utilizes UV light to activate nicotinoyl azide (NAz), generating a highly reactive nitrenium cation in water, which reacts with solvent-accessible guanosine and adenosine residues of RNA at the C-8 position through a barrierless Friedel-Crafts reaction. LASER probing targets both single-stranded and double-stranded residues as long as they are solvent accessible. Because hydroxyl radical probing requires synchrotron radiation to measure the solvent accessibility of RNA in vivo, it is hard for many laboratories to apply hydroxyl radical probing to footprint RNA in cells. In contrast, LASER probing uses a hand-held UV lamp (20 W) for excitation, so it is much easier to apply for studying RNA solvent accessibility in vivo. This chemical probing method is light-controllable and probes the solvent accessibility of nucleobases; it has been shown to footprint RNA-binding proteins inside cells. In-line probing In-line probing does not involve treatment with any type of chemical or reagent to modify RNA structures. This type of probing assay uses the structure-dependent cleavage of RNA; single-stranded regions are more flexible and unstable and will degrade over time. In-line probing is often used to determine changes in structure due to ligand binding, since binding of a ligand can result in different cleavage patterns. The process involves incubation of structural or functional RNAs over a long period of time. This period can be several days, but varies in each experiment. The incubated products are then run on a gel to visualize the bands. The experiment is often done under two different conditions: 1) with ligand and 2) in the absence of ligand. Cleavage results in shorter band lengths and is indicative of areas that are not base-paired, as base-paired regions tend to be less sensitive to spontaneous cleavage. In-line probing is a functional assay that can be used to determine structural changes in RNA in response to ligand binding. It can directly show the change in flexibility and binding of regions of RNA in response to a ligand, as well as compare that response across analogous ligands. This assay is commonly used in dynamic studies, particularly when examining riboswitches. Nucleotide analog interference mapping (NAIM) Nucleotide analog interference mapping (NAIM) is the process of using nucleotide analogs, molecules that are similar in some ways to nucleotides but lack function, to determine the importance of a functional group at each location of an RNA molecule. The process of NAIM is to insert a single nucleotide analog into a unique site. This can be done by transcribing a short RNA using T7 RNA polymerase, then synthesizing a short oligonucleotide containing the analog in a specific position, and then ligating them together on the DNA template using a ligase.
The nucleotide analogs are tagged with a phosphorothioate; the active members of the RNA population are then distinguished from the inactive members, the inactive members have the phosphorothioate tag removed, and the analog sites are identified using gel electrophoresis and autoradiography. This indicates a functionally important nucleotide, as cleavage of the phosphorothioate by iodine results in an RNA that is cleaved at the site of the nucleotide analog insert. By running these truncated RNA molecules on a gel, the nucleotide of interest can be identified by comparison with a sequencing experiment. Site-directed incorporation results indicate positions of importance: when the RNAs are run on a gel, functional RNAs that tolerate the analog at a given position will show a band at that position, but if the analog at that position abolishes function, the functional RNA molecules will show no band corresponding to it. This process can be used to evaluate an entire region, with analogs placed at site-specific locations differing by a single nucleotide; when the functional RNAs are isolated and run on a gel, all positions where bands are produced indicate non-essential nucleotides, whereas positions where bands are absent from the functional RNA indicate that inserting a nucleotide analog at that position caused the RNA molecule to become non-functional. References RNA Molecular biology techniques
Nucleic acid structure determination
[ "Chemistry", "Biology" ]
4,046
[ "Molecular biology techniques", "Molecular biology" ]
22,222,088
https://en.wikipedia.org/wiki/Robert%20H.%20Brill
Robert Brill is an American archaeologist, best known for his work on the chemical analysis of ancient glass. Born in the US in 1929, Brill attended West Side High School in Newark, New Jersey, before going on to study for his B.S. degree at Upsala College (Brill 1993a, Brill 2006, Getty Conservation Institute 2009). Having completed his Ph.D. in physical chemistry at Rutgers University in 1954, Brill returned to Upsala College to teach chemistry. In 1960, he joined the staff of the Corning Museum of Glass as their second research scientist. Throughout his career at Corning, where a four-year directorship punctuated his time as a research scientist, Brill was a forerunner in the scientific investigation of glass, glazes and colorants, developing and challenging the usefulness of emerging techniques. His pioneering work with the application of lead and oxygen isotope analysis in archaeology led him occasionally to add the investigation of metal objects to his portfolio so that, together, his published works number more than 160 (Brill and Wampler 1967). Perhaps the most famous of these is his Chemical Analyses of Early Glass, a sum of his 39 years of work and now a seminal reference guide in the field (Brill 1999). Since 1982, Brill has served on the International Commission on Glass. Within this, he founded TC17, the technical committee for the Archaeometry of Glass, which lists among its aims the ‘promotion of collaboration among glass specialists in widely separated countries’ and the stimulation and encouragement of glass scientists ‘in developing countries’ (Archaeometry of Glass 2005). His internationalism is aptly demonstrated by his study of glasses from around the world, with his attentions most recently being focused on those from the Silk Road. It seems he was attracted by the lack of previous study and the need for further development in the field. Seeing a disparity between contemporary knowledge of glasses from the western world and those from East Asia, Brill was keen to add insight to a hitherto unexploited field and, as such, has gone on to contribute a great deal to Silk Road studies (Brill 1993b). The 1960s The 1960s saw Brill beginning to develop the analytical techniques that would define the early years of his career at Corning, and yet the scope of his interest within glass remained vast. Indeed, 1961 saw Brill pen a letter to Nature with a colleague, that was a ‘bombshell’, according to Newton, in the field of glass-dating (1971, 3). Here Brill suggested that the rather enigmatic weathering crust found to form on buried glass objects over time could be used to date the object in a method rather similar to dendrochronology, using the separate layers of the shiny lamination (Brill 1961, Brill and Hood 1961, Newton 1971). Whilst in dendrochronology the tree-rings account simply for the tree's annual growth, in the weathering crust on glass Brill suggested the accumulation of a layer of laminate might respond to some kind of annual event of climatic change (Brill 1961). Unfortunately, despite the examples of the method's successful applications provided by Brill, such as the almost accurate count of 156 layers on a bottle-base from the York River submerged in 1781 and excavated in 1935, the technique largely failed to convince and did not see widespread adoption (Brill 1961, Newton 1971). 
Isotope analysis The most important of these techniques would prove to be Brill's pioneering application of lead isotope analysis, hitherto used only in geology, to archaeological objects. Brill first presented this idea at the 1965 Seminar in Examination of Works of Art, held at the Museum of Fine Arts Boston, but the first widely published account of the method seems to be Brill and Wampler's 1967 article in the American Journal of Archaeology. Here, Brill and Wampler outlined how the technique could be used to provenance the lead contents of archaeological objects to lead ore sources around the world, based on the isotopic signature of various leads, which relates them to ‘ores occurring in different geographical areas’ (1967, 63). These different areas have different signatures because they are of varying geological age, something reflected by the individual lead isotopes which form only after the radioactive decay of uranium and thorium (Brill et al. 1965, Brill and Wampler 1967). While the lead isotope ratios used for provenancing are different, they are not unique: areas geologically similar will yield similar lead isotope signatures (Brill 1970). Furthermore, if leads were salvaged and mixed in ancient times, the isotope ratio will be compromised (Brill 1970). Aside from these two limitations, there is little else that could affect the lead isotope reading an object would yield. As such, Brill's method was greeted enthusiastically and he went on to develop the technique, as well as oxygen isotope analysis, in his 1970 publication. Here he demonstrated how the technique could be used both to classify early glasses and to a certain extent characterize the ingredients from which they were made (1970, 143). Chemical-analytical round robin In 1965, Brill launched another important innovation in glass analysis, the comparison of interlaboratory experiments in order to verify analytical results (Brill 1965). ‘Originally inspired by a plea from W E S Turner’, according to Freestone, Brill first mooted his idea at the VIIth International Congress on Glass, in Brussels (Brill 1965a, I. Freestone, pers. comm. 2009). It wasn’t until the VIIIth International Congress on Glass in 1968, however, that Brill fully launched his concept of an ‘analytical round robin’, having distributed a number of reference glasses to be tested in different laboratories using a range of current techniques including X-ray fluorescence and neutron activation analysis (1968, 49). When discussing his motive for the experiment, Brill aptly stated: 'The truth is that the chemical analysis of glasses is a difficult undertaking and still remains in some senses an art' (1968, 49). By conducting the round robin experiment, Brill hoped the results gathered from different laboratories would help ‘correlate [...] earlier results’ and ‘calibrate future analyses in reference to one another’, as well as suggest which out of the analytical procedures used was the most accurate and effective (1968, 49). The results of the round robin were presented at the 'IXth International Congress on Glass' in 1971, and showed that, as Brill suspected, there was poor agreement between certain identified elements, and therefore these might be ‘troublesome’ generally across analyses (1971, 97). These included calcium, aluminium, lead and barium, among others (Brill 1971). 
Aside from their correctional potential, the results, from 45 different laboratories in 15 countries, also provided an enormous data set from which, Brill suggested, the participants could ‘evaluate their own methods and procedures against the findings of other analysts’ (1971, 97). At the time, Brill could hardly have suspected that the data would go on to have such great import, but Croegaard's generation of preferred glass compositions, from statistical analysis of the data, were used successfully by many people until Brill's own reference guide was published in 1999 (I. Freestone, 'pers. comm.', 2009). The Middle East Brill made various trips to the Middle East, including accompanying Theodore Wertime's 1968 survey of the ancient technologies of Iran, alongside other great minds such as the noted ceramicist, Frederick Matson (UCL Institute for Archaeo-Metallurgical Studies 2007). In the years 1963-1964, the Corning Museum of Glass and the University of Missouri, following a long history of excavation at the necropolis of Beth She'Arim, conducted an examination of a huge slab of glass, some 2000 years old, that had been languishing in an ancient cistern (Brill and Wosinski 1965). Brill cannot recall who first suggested this slab, measuring 3.4m by 1.94m, could be made of glass, but the only way to test it was to drill a core through its 45 cm thickness and analyse it (Brill 1967, Brill and Wosinski 1965). On analysis of the core, Brill found that the glass was devitrified and stained, and not very homogenous, with a presence of wollastonite crystals throughout (1965, 219.2). Investigation of the manufacture technology required to produce the slab, suggested that in order to produce such a slab of glass, it would have been necessary to heat over eleven tons of batch material, and sustain it at around 1050˚C for between five and ten days (Brill 1967)! His initial interpretation was that the glass must have been heated either from above or from the sides using a kind of tank furnace; a hypothesis that was proven accurate when excavation underneath the slab suggested it had been melted in situ, in a tank whose floor was a bed of limestone blocks with a thin parting layer of clay (Brill and Wosinski 1965, Brill 1967). Brill's interpretation, that the slab and its surroundings suggest ‘some early form of reverberatory furnace’ was the first suggestion of the use of tank furnaces in early glassmaking (1967, 92). The evidence at Beth She’arim encouraged further innovative thought because whilst the slab represented glass production on a grand scale, no associated evidence for glass working was found. Brill had already suspected that historical glassmaking occurred in two phases, the heavy ‘engineering’ stage when the glass is formed from the batch ingredients and the ‘crafting’ stage when the glass is formed into artefacts (Brill, pers. comm., 2009). These stages could occur in combination at one location, or at two differing locales, and the time span of production after the initial glass melt is highly flexible. For Brill, the idea of this ‘dual nature of all glassmaking’ was ‘crystallized’ at Beth She’Arim, where only the raw glass production was represented, and would be reinforced later by the contrasting evidence, where working was favoured over production, found at Jalame, as discussed below (Brill, pers. comm., 2009). 
The 1970s Aside from the aforementioned published results of his analytical round robin and his lead and oxygen isotope studies in the early 1970s, the 1970s saw Brill publish comparatively little, perhaps due to his post as director at The Corning Museum of Glass. Those publications he did pen are largely concerned with the development of lead isotope analysis and are listed in the further reading section. Alas, before Brill could be named Director, however, the museum was to be blighted by an enormous flood, ‘possibly the greatest single catastrophe borne by an American museum’ according to Buechner, Brill's successor in 1976 (1977, 7). The Corning flood The flood was brought to Corning by Hurricane Agnes, a tropical storm that filled the Chemung River system to bursting until, on the morning of June 23, 1972, the river breached its banks and decimated the town (Martin and Edwards 1977). The Corning Glass Centre was under around twenty feet of water on the lower level's west side, while the museum itself was filled to a water-level of five feet and four inches (Martin and Edwards 1977). 528 of the museum's objects were damaged, the library's rare books were ruined and paper index systems, data and catalogues were all lost (Martin and Edwards 1977). In the wake of this destruction, Brill was named Director, so that his time holding this position, from 1972-1975, would be spent overseeing the complete restoration of the museum. Buechner praises how Brill 'painstakingly' prepared the insurance claim that would support the museum throughout the renovation process and facilitate the replacement of many wonderful objects (1977, 7). Under Brill's auspices, the Corning Museum of Glass was reopened just 39 days after the event, on August 1, but it would be another four years before the collection and library were restored to their former glory (Buechner 1977). The 1980s In 1982, Brill joined the International Commission on Glass (Corning Museum of Glass 2009). The International Commission functions through various technical committees, among which Brill saw an opening for TC17, the committee for the Archaeometry of Glass, which he founded shortly after joining. The main purpose of TC17, whose members met for the first time in Beijing in 1984, is ‘to bring together glass scientists, archaeologists and museum curators to present and discuss the results of research on early glass and glassmaking and on the conservation of historical glass objects’, as expressed in their mission statement (Archaeometry of Glass 2005). Brill chaired this committee until 2004 and received the W E S Turner Award from the International Commission of Glass on his departure, in recognition of his contribution as founder (Corning Museum of Glass 2009). Jalame One of the on-running projects of the Corning Museum of Glass published the excavation report from their many field seasons at the ancient glass factory in Jalame, in Late Roman Palestine (Brill 1988, Schreurs and Brill 1984). Brill was called upon to conduct scientific investigations of the huge amount of material generated at the site, in order to exploit the full potential of the artefacts; after all, the site was being excavated specifically because of its role as a glass factory (Brill 1988). Of the vast quantity of glass fragments from Jalame, both vessel sherds and cullet, most were aqua and green and all were soda-lime-silica glasses melted in highly reducing conditions (Schreurs and Brill 1984). 
Where the melting conditions had been increasingly reducing, a ferri-sulfide chromophore complex was shown to have formed, thus changing the bluey-aqua colour of the glass to an olive, or even an amber shade (Schreurs and Brill 1984). Despite these colour variations, Brill's further chemical analysis showed the vessel glasses to be highly homogeneous in composition, apart from a clear divide where around 40 glasses demonstrated the intentional addition of manganese (Brill 1988). Brill conducted an investigation of the furnace at Jalame, nicknamed the Red Room, in which there was a mysterious absence of glass finds of any kind (Brill 1988). Whilst work at Beth She’Arim had eventually found there to be five firing chambers responsible for heating the one tank, the fragmentary remains at Jalame made it very difficult to interpret the furnace set-up, apart from the fact that they believed there to have been only one firing chamber (Brill 1988). The Institute of Nautical Archaeology In the late eighties, Brill contributed to various studies with the Institute of Nautical Archaeology, following the excavation of a number of exciting shipwrecks including the Serçe Liman, and the Ulu Burun (Barnes et al. 1986, Brill 1989). Brill's own technique of lead isotope analysis was to provide a means for provenancing items aboard ship, and thus determine the ship's origin and her ports-of-call. The excavators of the Serçe Liman wanted to know whether she was Byzantine or Islamic, a complicated question for lead isotope analysis as the lead ores of the Eastern Mediterranean share geographical characteristics and therefore overlap (Barnes et al. 1986). Using 900 lead net sinkers divided into six loose groupings, Brill found groups III, V and VI to be Byzantine, that is with ores found in modern-day Turkey (Barnes et al. 1986). Group I, however, was taken to be most indicative of the ship's origin; this group contained net sinkers, but also two ceramic glazes and three glass vessels, all sharing virtually identical lead ores with only one isotopic match, ‘an ore from Anguran, northwest of Tehran’ according to Barnes et al. (1986, 7). The origins of Silk Road research Brill's submissions to the XIVth International Congress on Glass, which took place in New Delhi in 1986, can be seen to represent the origins of his work on the Great Silk Road, the impressive trade route carrying goods from the East through India to Europe. Here, chemical analysis of Early Indian glasses helped Brill determine the ingredients and techniques of production, ‘to make certain broad generalizations as to regions or periods of manufacture’, and therefore to follow an object's movement along the trade route (1987, 1). For the XIVth Congress, Brill conducted atomic absorption spectroscopy (AAS) and optical emission spectroscopy (OES) on samples of 38 glasses from India, and the success of his method was made clear when he was able to separate 21 samples away from those made in the Middle East and Europe (Brill 1987). The glasses were shown to have mixed alkali compositions, a feature that is ‘rare among glasses from more westerly sources’, and therefore Brill concluded that they had definitely been manufactured in India (1987, 4). Brill also collaborated with Mckinnon to conduct chemical analyses of some glass samples from Sumatra, Indonesia, the results of which would be the ‘first data of their kind from this island’ (1987, 1). 
The results of the study, which also used samples from Java, another important location for the Silk Road, were hoped by McKinnon and Brill to ‘stimulate a greater awareness of glass in the economy [...] of ancient Sumatra and further new lines of research in the archaeology of the region’ (1987, 1). The 1990s The beginning of the 1990s saw Brill accorded the Archaeological Institute of America's Pomerance Award for scientific contributions to archaeology; however the decade mostly reflects Brill's continuing dedication to Asian glasses and the study of the Silk Road (Archaeological Institute of America 2009). In Scientific Research in Early Chinese Glass, Brill reflected that in comparison to the knowledge of glassmaking in the West, ‘little is known about Chinese glass and about the role it played in the overall unfolding of glass history on a worldwide basis’ (1991, vii). One reason for this is that glass was never produced in the East in such great quantities as it was in the West but also that archaeological Chinese glasses are often prone to problems (Brill 1991). The difficulties of analysing Chinese glasses were reflected later in the publication where, following the chemical investigation of 71 samples, Brill found that identifying the ‘basic formulation’, or ‘any of the primary batch materials’ of the glasses was still almost impossible (Brill et al. 1991). Brill had greater success in differentiating between Chinese glass samples when using lead isotope analysis, a method that has proven effective in the first instance of identifying Chinese glass as the leads used here are different from those anywhere else in the world (Brill, Barnes et al. 1991). Brill found his Chinese samples to fall into two distinct groups, possessing on one hand the highest, and on the other the lowest, lead isotope ratios he had ever encountered (Brill, Barnes et al. 1991). As such, he was able to show that despite the striking similarity in the glasses’ chemical composition and appearance, the ores from which their leads were sourced must have been from very geologically-different mines (Brill, Barnes et al. 1991). Brill conducted further investigations of ancient Asian glasses for the Nara Symposium on the Silk Road's maritime route in 1991, ‘to demonstrate [...] that chemical analyses can be useful for learning how glass was traded along the Desert, Steppe, and Maritime Routes of the Silk Road’ (1993a, 71), as well as providing a more technical discussion on glass and glassmaking in China for the Glass Art Society's Toledo Conference in 1993 (Brill 1993b). Further lead isotope analysis, this time on Chinese and central Asian pigments, was conducted with a larger team for the Getty's Conservation of Ancient Sites on the Silk Road, which saw Brill et al. launching studies that held incredible potential for understanding ‘chronological or stylistic differences among Buddhist cave paintings’, or ‘distinguish[ing] between original and repainted parts of individual works’ (1993, 371). Chemical Analyses of Early Glasses In 1999, Brill published the sum of 39 years worth of results from his chemical investigations at Corning in two volumes of reference material with a third forthcoming (Brill 1999). Brill was reluctant to publish the data without any accompanying interpretation, but he felt that the most important factor was to quickly release the material into a wider sphere, made ‘readily accessible to the scientific community’ (1999, 8). 
Of Corning's 10,000 research artefacts, the master catalogue contains 6,400 samples, an abbreviated catalogue, or AbbCat, of which is presented in the two volumes (1999, 11). 19 geographical, typological or chronological categories of glass samples are recorded, spanning Brill's various research projects and collaborations, from Egypt to the East (Brill 1999). It also records the results of oxygen isotope analyses, reminding us that Brill was ever one for the integration of different investigative methods. Brill's legacy Since 2000, Dr Brill's interest in Silk Road studies and ancient glass compositions has continued, but his publication rate has slowed somewhat. His years of prolific publication, however, and his willingness to analyse glass from almost every situation have provided the archaeometry of glass with a bounty of reference material, as reflected by the Chemical Analyses of Early Glasses. Despite his official retirement from the Corning Museum of Glass on May 31, 2008, he returned to the laboratory the next day and continues to work, showing no intention of enjoying a retirement proper any time soon (Brill, pers. comm., 2009). References Archaeological Institute of America (2009) http://www.archaeological.org/webinfo.php?page=10101 Pomerance Award Winners, consulted 01.04.2009 Archaeometry of Glass (2005) http://www.icg.group.shef.ac.uk/tc17.html Archaeometry of Glass Mission Statement, consulted 20.03.2009 Barnes, I. L., Brill, R. H., Deal, E. C. and Piercy, G. V. (1986) Lead Isotope Studies of Some of the Finds from the Serçe Liman Shipwreck, In: Olin, J. S. and Blackman, M. J. (Eds.) Proceedings of the 24th International Archaeometry Symposium Washington: Smithsonian Institution Press Brill, R. H. (1961) The Record of Time in Weathered Glass Archaeology 14 (1) pp. 18–22 Brill, R. H. (1963) Ancient Glass Scientific American (November) pp. 120–130 Brill, R. H. (1965) Interlaboratory Comparison Experiments on the Analysis of Ancient Glass. In: Proceedings of the VIIth International Congress on Glass, Brussels pp. 226(1)-226(4) Brill, R. H. (1967) A Great Glass Slab from Ancient Galilee Archaeology 20 (2) pp. 89–96 Brill, R. H. (1968) The Scientific Investigation of Ancient Glasses. In: The Proceedings of the VIIIth International Congress on Glass pp. 47–68 Brill, R. H. (1970) Lead and Oxygen Isotopes in Ancient Objects. In: Allibone, T. E. (Ed.) The Impact of the Natural Sciences on Archaeology: A joint symposium of the Royal Society and the British Academy. London: Oxford University Press Brill, R. H.(1971) A Chemical-Analytical Round-Robin of Four Synthetic Ancient Glasses. In: The Proceedings of the IXth International Congress on Glass Brill, R. H. (1987) Chemical Analyses of Some Early Indian Glasses, In: Bhardwaj, H. C. (Ed.) XIVth International Congress on Glass 1986, New Delhi, India pp. 1–27 Brill, R. H. (1988) Scientific Investigations of the Jalame Glass and Related Finds, In: Weinberg, G. D. (Ed.) Excavations at Jalame: Site of a Glass Factory in Late Roman Palestine Columbia: University of Missouri Press Brill, R. H. (1989) Chemical Analyses of some Metal Finds from the Ulu Burun and Cape Gelidonya Shipwrecks. Paper submitted to George F. Bass and Cemal Pulak at the Institute of Nautical Archaeology Brill, R. H. (1991) Introduction, In: (Eds.) Brill, R. H. and Martin, J. H. Scientific Research in Early Chinese Glass New York: Corning Museum of Glass Brill, R. H. 
(1993a) Scientific Investigations of Ancient Asian Glass, In: Nara Symposium ’91: UNESCO Maritime Route of Silk Roads pp. 70–79 Brill, R. H. (1993b) Glass and Glassmaking in Ancient China, In: The Toledo Conference Journal 1993 The Glass Art Society pp. 56–69 Brill, R. H. (1999) Chemical Analyses of Early Glass, Volume 1: Catalogue of Samples New York: The Corning Museum of Glass Brill, R. H. (2006) Don’t Go with the Flow! Glass Worldwide 8 p. 12 Brill, R. H., Barnes, I. L. and Joel, E. C. (1991) Lead Isotope Studies of Early Chinese Glasses, In: (Eds.) Brill, R. H. and Martin, J. H. Scientific Research in Early Chinese Glass New York: Corning Museum of Glass pp. 65–91 Brill, R. H. and Hood, H. P. (1961) A New Method for Dating Ancient Glass Nature 189 pp. 12–14 Brill, R. H., Shields, W. R. and Wampler, J. M. (1965) New Directions in Lead Isotope Research, In: Application of Science in Examination of Works of Art Boston: Museum of Fine Arts pp. 155–166 Brill, R. H., Tong, S. S. C. and Dohrenwend, D. (1991) Chemical Analyses of Some Early Chinese Glasses, In: Brill, R. H. and Martin, J. H. (Eds.) Scientific Research in Early Chinese Glass New York: Corning Museum of Glass pp. 31–64 Brill, R. H. and Wampler, J. M. (1967) Isotope Studies of Ancient Lead American Journal of Archaeology 71 pp. 63–77 Brill, R. H. and Wosinski, J. F. (1965) A Huge Slab of Glass in the Ancient Necropolis of Beth She’Arim In: Proceedings of the VIIth International Congress on Glass, Brussels pp. 219(1)-219(11) Buechner, T. S. (1977) Preface, In: Martin, J. H. and Edwards, C K. (Eds.) The Corning Flood: Museum Under Water New York: Corning Museum of Glass Corning Museum of Glass(2009) http://www.cmog.org/dynamic.aspx?id=2190 Corning Museum of Glass Staff Biography, consulted 12.03.2009, 20.03.2009 Newton, R. G. (1971) The Enigma of the Layered Crusts on Some Weathered Glasses, A Chronological Account of the Investigations Archaeometry 13 (1) pp. 1–9 Schreurs, J. W. H. and Brill, R. H. (1984) Iron and Sulfur Related Colors in Ancient Glasses Archaeometry 26 (2) pp. 199–209 Martin, J. H. and Edwards, C. K. (1977) Introduction, In: Martin, J. H. and Edwards, C K. (Eds.) The Corning Flood: Museum Under Water New York: Corning Museum of Glass McKinnon, E. E. and Brill, R. H. (1987) Chemical Analyses of Some Glasses from Sumatra, In: Bhardwaj, H. C. (Ed.) XIVth International Congress on Glass 1986, New Delhi, India pp. 1–14 The Getty Conservation Institute (2009) http://www.getty.us/conservation/publications/pdf_publications/silkroad7.pdf Getty Silk Road Publication Contributors, consulted 12.03.2009 UCL Institute of Archaeo-Metallurgy (2007) http://www.ucl.ac.uk/iams/iransurvey/team.php Wertime's Iran Survey: The Travellers, consulted 12.03.2009 Further reading Brill, R. H. (1962) A Note on the Scientist's Definition of Glass Journal of Glass Studies 4 pp. 127–138 Brill, R. H. (1964) An Observation on the Corinth Diaretum Journal of Glass Studies 6 pp. 56–58 Brill, R. H. (Ed.) (1971) Science and Archaeology Massachusetts: MIT Press Brill, R. H. (1992) Chemical Analyses of some Glasses from Frattesina Journal of Glass Studies 34 pp. 11–22 Brill, R. H. (2001) Some Thoughts on the Chemistry and Technology of Islamic Glass, In: Carboni, S. and Whitehouse, D. (Eds.) Glass of the Sultans New York: The Metropolitan Museum of Art Brill, R. H., Barnes, L. L. and Adams, B. (1974) Lead Isotopes in Some Ancient Egyptian Objects. In: Recent Advances in Science and Technology of Materials 3 pp. 9–27 New York: Plenum Press Brill, R. H. 
and Cahill, N. D. (1988) A Red Opaque Glass from Sardis and Some Thoughts on Red Opaques in General Journal of Glass Studies 30 Brill, R. H. and Shields, W. R. (1972) Lead Isotopes in Ancient Coins. In: Special Publication of the Royal Numismatic Society 8 pp. 279–303 Oxford: Oxford University Press Brill, R. H., Yamasaki, K., Barnes, I. L., Rosman, K. J. R. and Diaz, M. (1979) Lead Isotopes in Some Japanese and Chinese Glasses Ars Orientalis 11 pp. 87–109 External links The Corning Museum of Glass History of glass Glass chemistry Living people People associated with the Corning Museum of Glass Year of birth missing (living people)
Robert H. Brill
[ "Chemistry", "Materials_science", "Engineering" ]
6,426
[ "Glass engineering and science", "Glass chemistry" ]
19,541,161
https://en.wikipedia.org/wiki/Dealkalization%20of%20water
The dealkalization of water refers to the removal of alkalinity ions from water. Chloride cycle anion ion-exchange dealkalizers remove alkalinity from water. Chloride cycle dealkalizers operate similar to sodium cycle cation water softeners. Like water softeners, dealkalizers contain ion-exchange resins that are regenerated with a concentrated salt (brine) solution - NaCl. In the case of a water softener, the cation exchange resin is exchanging sodium (the Na+ ion of NaCl) for hardness minerals such as calcium and magnesium. A dealkalizer contains strong base anion exchange resin that exchanges chloride (the Cl– ion of the NaCl) for carbonate (), bicarbonate () and sulfate (). As water passes through the anion resin the carbonate, bicarbonate and sulfate ions are exchanged for chloride ions. "Higher capacities can be realized by use of type II rather than type I strong base anion resins. Although bicarbonates are not held as tightly as chlorides on the SBA (strong base anion) resins in the hydroxide form, when the resin is predominantly in the chloride form the pH has been raised by a small addition of caustic to the brine regenerant, there will be a favorable exchange of bicarbonate for the chloride. This exchange works well only with high alkalinity waters (40% to 80%), with capacities of 4 to 10 Kg/CF being obtained. The advantages of SBA resin dealkalization is that low-cost salt is used in place of the acid necessary for the SAC (strong acid cation) and un-lined steel tanks can be used." Purpose Dealkalizers are most often used as pre-treatment to a boiler and are usually preceded by a water softener. Alkalinity is a factor that most often dictates the amount of boiler blowdown. High alkalinity promotes boiler foaming and carryover and causes high amounts of boiler blowoff. When alkalinity is the limiting factor affecting the amount of blowdown, a dealkalizer will increase the cycles of concentrations and reduce blowdown and operating costs. The reduction of blowdown by dealkalization keeps the water treatment chemicals in the boiler longer, thus minimizing the amount of chemicals required for efficient, noncorrosive operation. Carbonate and bicarbonate alkalinities are decomposed by heat in boiler water releasing carbon dioxide into the steam. This gas combines with the condensed steam in process equipment and return lines to form carbonic acid. This depresses the pH value of the condensate returns and results in corrosive attack on the equipment and piping. In general, a dealkalizer is best applied to boilers operating below 700 psi (48 bar). In order to justify installation of a dealkalizer on low-pressure boilers, the alkalinity content should be above 50 ppm with the amount of make-up water exceeding 1,000 gallons (approx. 4,000 litres) per day. Cooling system make-up will also benefit from reduced alkalinity. The addition of a dealkalizer to a cooling water system will substantially reduce the amount of acid required to treat the same amount of water. Alternate method Hydrogen cycle weak acid cation resins convert alkalinity into carbon dioxide while removing calcium and magnesium. The resin is regenerated with acids at levels close to stoichiometric requirements. References Water treatment
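As a rough illustration of how the alkalinity load translates into dealkalizer sizing, the sketch below estimates the volume of make-up water a resin bed can treat between regenerations. It reads the quoted capacity of 4 to 10 "Kg/CF" as kilograins (as CaCO3) per cubic foot of resin, which is the usual ion-exchange convention, and uses the standard conversion of about 17.1 ppm as CaCO3 per grain per US gallon. All of the numbers in the example are hypothetical, and the function is purely illustrative rather than a vendor sizing method.

```python
def dealkalizer_run_length(resin_volume_ft3: float,
                           capacity_kgr_per_ft3: float,   # kilograins (as CaCO3) per ft^3
                           alkalinity_ppm_caco3: float,   # feed alkalinity, ppm as CaCO3
                           makeup_gal_per_day: float):
    """Gallons treated per service run and days between brine regenerations."""
    PPM_PER_GRAIN_PER_GALLON = 17.1      # 1 grain per US gallon ~ 17.1 ppm as CaCO3
    total_grains = resin_volume_ft3 * capacity_kgr_per_ft3 * 1000.0
    grains_per_gallon = alkalinity_ppm_caco3 / PPM_PER_GRAIN_PER_GALLON
    gallons_per_run = total_grains / grains_per_gallon
    return gallons_per_run, gallons_per_run / makeup_gal_per_day

gallons, days = dealkalizer_run_length(resin_volume_ft3=5.0,
                                       capacity_kgr_per_ft3=7.0,   # mid-range of 4-10
                                       alkalinity_ppm_caco3=120.0,
                                       makeup_gal_per_day=2000.0)
print(f"{gallons:,.0f} gallons per run, about {days:.1f} days between regenerations")
```

With these assumed figures, a 5 cubic foot bed treating water of 120 ppm alkalinity would deliver roughly 5,000 gallons, about two and a half days of make-up at 2,000 gallons per day, before requiring brine regeneration.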
Dealkalization of water
[ "Chemistry", "Engineering", "Environmental_science" ]
717
[ "Water treatment", "Environmental engineering", "Water technology", "Water pollution" ]
19,543,615
https://en.wikipedia.org/wiki/ARGOS%20DSS
ARGOS is a Decision Support System (DSS) for crisis and emergency management of incidents involving chemical, biological, radiological, and nuclear (CBRN) releases. System In case of incidents with chemical, biological, radiological or nuclear releases, ARGOS can be used to get an overview of the situation, create a prognosis of how the situation will evolve, and calculate the consequences of the incident. Its targets are accidents, as well as terrorist-initiated events, related to CBRN industries, the transport of hazardous materials, and similar scenarios. ARGOS improves situational awareness and facilitates decision support and information sharing among emergency response organizations. As a simulation tool, ARGOS is also valuable for training response organizations and for providing information to the public. The ARGOS system makes intensive use of a geographic information system (GIS) to display data on geographic maps. Colours are used to express concentration, contamination, time of arrival, trajectories, and doses or inhalation, and iso-curves can display important threshold levels. The GIS can display nuclear power plants (NPPs), measuring stations, and weather conditions such as precipitation and wind fields. For running short-range prognoses, ARGOS can download numerical weather predictions from the meteorological office of the country in question. As its main atmospheric dispersion model, ARGOS includes the RIMPUFF model from Risø National Laboratory. User Group The current member countries of the ARGOS User Group are (January 2017): Australia, Brazil, Canada, Denmark, Estonia, Germany, Ireland, Japan, Latvia, Lithuania, Norway, Poland, Singapore, Sweden, Ukraine and United Kingdom. The ARGOS User Group has the objective of maintaining and further evolving ARGOS as a state-of-the-art decision support system for emergency response, as well as sustaining a network of expertise. The ARGOS User Group arranges bi-annual meetings where all members have equal opportunities to influence the development of the system. The group discusses experiences with ARGOS and decides, among other things, which new facilities to develop and which new models to include. History The original development of ARGOS was started in 1992 by the Danish Emergency Management Agency and the Prolog Development Center, in close cooperation with Risø National Laboratory and the Danish Meteorological Institute. References External links ARGOS DSS Homepage Atmospheric dispersion modeling
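As general background on what a puff-type atmospheric dispersion model computes (this is the generic textbook relation only, not necessarily the specific formulation implemented in RIMPUFF or ARGOS), the concentration contributed by a single puff of released mass Q, advected by a mean wind speed ū and reflected at the ground, is commonly written as

C(x,y,z,t) = \frac{Q}{(2\pi)^{3/2}\,\sigma_x \sigma_y \sigma_z}\,
\exp\!\left(-\frac{(x-\bar{u}t)^2}{2\sigma_x^2}\right)
\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
\left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)+\exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right],

where σx, σy and σz are dispersion parameters that grow with the puff's travel time and H is the effective release height. A puff model sums this contribution over many sequentially released puffs, each carried along the local wind field, which is why the numerical weather prediction data mentioned above are needed.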
ARGOS DSS
[ "Chemistry", "Engineering", "Environmental_science" ]
475
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
19,554,842
https://en.wikipedia.org/wiki/Oseen%20equations
In fluid dynamics, the Oseen equations (or Oseen flow) describe the flow of a viscous and incompressible fluid at small Reynolds numbers, as formulated by Carl Wilhelm Oseen in 1910. Oseen flow is an improved description of these flows, as compared to Stokes flow, with the (partial) inclusion of convective acceleration. Oseen's work is based on the experiments of G.G. Stokes, who had studied the falling of a sphere through a viscous fluid. He developed a correction term, which included inertial factors, for the flow velocity used in Stokes' calculations, to solve the problem known as Stokes' paradox. His approximation leads to an improvement to Stokes' calculations. Equations The Oseen equations are, in case of an object moving with a steady flow velocity U through the fluid—which is at rest far from the object—and in a frame of reference attached to the object: where u is the disturbance in flow velocity induced by the moving object, i.e. the total flow velocity in the frame of reference moving with the object is −U + u, p is the pressure, ρ is the density of the fluid, μ is the dynamic viscosity, ∇ is the gradient operator, and ∇2 is the Laplace operator. The boundary conditions for the Oseen flow around a rigid object are: with r the distance from the object's center, and p∞ the undisturbed pressure far from the object. Longitudinal and transversal waves A fundamental property of Oseen's equation is that the general solution can be split into longitudinal and transversal waves. A solution is a longitudinal wave if the velocity is irrotational and hence the viscous term drops out. The equations become In consequence Velocity is derived from potential theory and pressure is from linearized Bernoulli's equations. A solution is a transversal wave if the pressure is identically zero and the velocity field is solenoidal. The equations are Then the complete Oseen solution is given by a splitting theorem due to Horace Lamb. The splitting is unique if conditions at infinity (say ) are specified. For certain Oseen flows, further splitting of transversal wave into irrotational and rotational component is possible Let be the scalar function which satisfies and vanishes at infinity and conversely let be given such that , then the transversal wave is where is determined from and is the unit vector. Neither or are transversal by itself, but is transversal. Therefore, The only rotational component is being . Fundamental solutions The fundamental solution due to a singular point force embedded in an Oseen flow is the Oseenlet. The closed-form fundamental solutions for the generalized unsteady Stokes and Oseen flows associated with arbitrary time-dependent translational and rotational motions have been derived for the Newtonian and micropolar fluids. Using the Oseen equation, Horace Lamb was able to derive improved expressions for the viscous flow around a sphere in 1911, improving on Stokes law towards somewhat higher Reynolds numbers. Also, Lamb derived—for the first time—a solution for the viscous flow around a circular cylinder. 
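For reference, with the sign convention stated above (total velocity −U + u in the frame moving with the object), the steady Oseen equations are usually written as

-\rho\,(\mathbf{U}\cdot\nabla)\mathbf{u} = -\nabla p + \mu\,\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0,

with boundary conditions u = U on the surface of the object (so that the total velocity vanishes there) and u → 0, p → p∞ as r → ∞. The improved drag on a sphere obtained from this approximation, referred to again in the Calculations section below, is usually quoted as

F = 6\pi \mu a U \left(1 + \tfrac{3}{8}\,\mathrm{Re}\right), \qquad \mathrm{Re} = \frac{\rho U a}{\mu},

where a is the sphere radius and Re is the Reynolds number based on that radius; in the limit Re → 0 this reduces to Stokes' law, and the factor in parentheses is the amount by which the Oseen force differs from the Stokes force.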
The solution to the response of a singular force when no external boundaries are present be written as If , where is the singular force concentrated at the point and is an arbitrary point and is the given vector, which gives the direction of the singular force, then in the absence of boundaries, the velocity and pressure is derived from the fundamental tensor and the fundamental vector Now if is arbitrary function of space, the solution for an unbounded domain is where is the infinitesimal volume/area element around the point . Two-dimensional Without loss of generality taken at the origin and . Then the fundamental tensor and vector are where where is the modified Bessel function of the second kind of order zero. Three-dimensional Without loss of generality taken at the origin and . Then the fundamental tensor and vector are where Calculations Oseen considered the sphere to be stationary and the fluid to be flowing with a flow velocity () at an infinite distance from the sphere. Inertial terms were neglected in Stokes' calculations. It is a limiting solution when the Reynolds number tends to zero. When the Reynolds number is small and finite, such as 0.1, correction for the inertial term is needed. Oseen substituted the following flow velocity values into the Navier-Stokes equations. Inserting these into the Navier-Stokes equations and neglecting the quadratic terms in the primed quantities leads to the derivation of Oseen's approximation: Since the motion is symmetric with respect to axis and the divergence of the vorticity vector is always zero we get: the function can be eliminated by adding to a suitable function in , is the vorticity function, and the previous function can be written as: and by some integration the solution for is: thus by letting be the "privileged direction" it produces: then by applying the three boundary conditions we obtain the new improved drag coefficient now become: and finally, when Stokes' solution was solved on the basis of Oseen's approximation, it showed that the resultant drag force is given by where: is the Reynolds number based on the radius of the sphere, is the hydrodynamic force is the flow velocity is the fluid viscosity The force from Oseen's equation differs from that of Stokes by a factor of Correction to Stokes' solution The equations for the perturbation read: but when the velocity field is: In the far field ≫ 1, the viscous stress is dominated by the last term. That is: The inertia term is dominated by the term: The error is then given by the ratio: This becomes unbounded for ≫ 1, therefore the inertia cannot be ignored in the far field. By taking the curl, Stokes equation gives Since the body is a source of vorticity, would become unbounded logarithmically for large This is certainly unphysical and is known as Stokes' paradox. Solution for a moving sphere in incompressible fluid Consider the case of a solid sphere moving in a stationary liquid with a constant velocity. The liquid is modeled as an incompressible fluid (i.e. with constant density), and being stationary means that its velocity tends towards zero as the distance from the sphere approaches infinity. For a real body there will be a transient effect due to its acceleration as it begins its motion; however after enough time it will tend towards zero, so that the fluid velocity everywhere will approach the one obtained in the hypothetical case in which the body is already moving for infinite time. 
Thus we assume a sphere of radius a moving at a constant velocity , in an incompressible fluid that is at rest at infinity. We will work in coordinates that move along with the sphere with the coordinate center located at the sphere's center. We have: Since these boundary conditions, as well as the equation of motions, are time invariant (i.e. they are unchanged by shifting the time ) when expressed in the coordinates, the solution depends upon the time only through these coordinates. The equations of motion are the Navier-Stokes equations defined in the resting frame coordinates . While spatial derivatives are equal in both coordinate systems, the time derivative that appears in the equations satisfies: where the derivative is with respect to the moving coordinates . We henceforth omit the m subscript. Oseen's approximation sums up to neglecting the term non-linear in . Thus the incompressible Navier-Stokes equations become: for a fluid having density ρ and kinematic viscosity ν = μ/ρ (μ being the dynamic viscosity). p is the pressure. Due to the continuity equation for incompressible fluid , the solution can be expressed using a vector potential . This turns out to be directed at the direction and its magnitude is equivalent to the stream function used in two-dimensional problems. It turns out to be: where is Reynolds number for the flow close to the sphere. Note that in some notations is replaced by so that the derivation of from is more similar to its derivation from the stream function in the two-dimensional case (in polar coordinates). Elaboration can be expressed as follows: where: , so that . The vector Laplacian of a vector of the type reads: . It can thus be calculated that: Therefore: Thus the vorticity is: where we have used the vanishing of the divergence of to relate the vector laplacian and a double curl. The equation of motion's left hand side is the curl of the following: We calculate the derivative separately for each term in . Note that: And also: We thus have: Combining all the terms we have: Taking the curl, we find an expression that is equal to times the gradient of the following function, which is the pressure: where is the pressure at infinity, .is the polar angle originated from the opposite side of the front stagnation point ( where is the front stagnation point). Also, the velocity is derived by taking the curl of : These p and u satisfy the equation of motion and thus constitute the solution to Oseen's approximation. Modifications to Oseen's approximation One may question, however, whether the correction term was chosen by chance, because in a frame of reference moving with the sphere, the fluid near the sphere is almost at rest, and in that region inertial force is negligible and Stokes' equation is well justified. Far away from the sphere, the flow velocity approaches u and Oseen's approximation is more accurate. But Oseen's equation was obtained applying the equation for the entire flow field. This question was answered by Proudman and Pearson in 1957, who solved the Navier-Stokes equations and gave an improved Stokes' solution in the neighborhood of the sphere and an improved Oseen's solution at infinity, and matched the two solutions in a supposed common region of their validity. They obtained: Applications The method and formulation for analysis of flow at a very low Reynolds number is important. The slow motion of small particles in a fluid is common in bio-engineering. 
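As a small worked illustration of the low-Reynolds-number particle problems mentioned above, the sketch below estimates the terminal settling speed of a small sphere, first with Stokes drag alone and then with the Oseen-corrected drag F = 6πμaU(1 + 3Re/8) quoted earlier. Because the correction depends on the velocity being solved for, a simple fixed-point iteration is used. The particle and fluid properties are illustrative values chosen for the example, not data from any particular study.

```python
def settling_velocity(radius_m: float, rho_particle: float, rho_fluid: float,
                      mu: float, g: float = 9.81, iterations: int = 50):
    """Terminal settling speed of a small sphere.

    Balances weight minus buoyancy against drag, using Stokes drag and the
    Oseen-corrected drag F = 6*pi*mu*a*U*(1 + 3*Re/8), with Re = rho_f*U*a/mu
    based on the sphere radius, as in the drag expression quoted above."""
    u_stokes = 2.0 * radius_m**2 * (rho_particle - rho_fluid) * g / (9.0 * mu)
    u = u_stokes
    for _ in range(iterations):                  # fixed-point iteration on U
        re = rho_fluid * u * radius_m / mu       # Reynolds number (radius-based)
        u = u_stokes / (1.0 + 3.0 * re / 8.0)
    return u_stokes, u, rho_fluid * u * radius_m / mu

# Illustrative case: a 50-micrometre-radius mineral grain settling in water.
u_s, u_o, re = settling_velocity(radius_m=50e-6, rho_particle=2650.0,
                                 rho_fluid=1000.0, mu=1.0e-3)
print(f"Stokes: {u_s*1e3:.2f} mm/s, Oseen-corrected: {u_o*1e3:.2f} mm/s, Re = {re:.2f}")
```

For these numbers the correction lowers the Stokes estimate by a little over ten percent, consistent with the expectation that Stokes' law degrades as the Reynolds number approaches order one.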
Oseen's drag formulation can be used in connection with the flow of fluids under various special conditions, such as flows containing particles, sedimentation of particles, centrifugation or ultracentrifugation of suspensions, colloids, and blood (through the isolation of tumors and antigens). The fluid does not even have to be a liquid, and the particles do not need to be solid. It can be used in a number of applications, such as smog formation and the atomization of liquids. Blood flow in small vessels, such as capillaries, is characterized by small Reynolds and Womersley numbers. A vessel of diameter of with a flow of , viscosity of for blood, density of and a heart rate of , will have a Reynolds number of 0.005 and a Womersley number of 0.0126. At these small Reynolds and Womersley numbers, the viscous effects of the fluid become predominant. Understanding the movement of these particles is essential for drug delivery and for studying the movement of metastasizing cancer cells. Notes References Fluid dynamics Equations of fluid dynamics
Oseen equations
[ "Physics", "Chemistry", "Engineering" ]
2,299
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Piping", "Fluid dynamics" ]
459,163
https://en.wikipedia.org/wiki/Siphon
A siphon (; also spelled syphon) is any of a wide variety of devices that involve the flow of liquids through tubes. In a narrower sense, the word refers particularly to a tube in an inverted "U" shape, which causes a liquid to flow upward, above the surface of a reservoir, with no pump, but powered by the fall of the liquid as it flows down the tube under the pull of gravity, then discharging at a level lower than the surface of the reservoir from which it came. There are two leading theories about how siphons cause liquid to flow uphill, against gravity, without being pumped, and powered only by gravity. The traditional theory for centuries was that gravity pulling the liquid down on the exit side of the siphon resulted in reduced pressure at the top of the siphon. Then atmospheric pressure was able to push the liquid from the upper reservoir, up into the reduced pressure at the top of the siphon, like in a barometer or drinking straw, and then over. However, it has been demonstrated that siphons can operate in a vacuum and to heights exceeding the barometric height of the liquid. Consequently, the cohesion tension theory of siphon operation has been advocated, where the liquid is pulled over the siphon in a way similar to the chain fountain. It need not be one theory or the other that is correct, but rather both theories may be correct in different circumstances of ambient pressure. The atmospheric pressure with gravity theory cannot explain siphons in vacuum, where there is no significant atmospheric pressure. But the cohesion tension with gravity theory cannot explain gas siphons, siphons working despite bubbles, and the flying droplet siphon, where gases do not exert significant pulling forces, and liquids not in contact cannot exert a cohesive tension force. All known published theories in modern times recognize Bernoulli's equation as a decent approximation to idealized, friction-free siphon operation. History Egyptian reliefs from 1500 BC depict siphons used to extract liquids from large storage jars. Physical evidence for the use of siphons by Greeks are the Justice cup of Pythagoras in Samos in the 6th century BC and usage by Greek engineers in the 3rd century BC at Pergamon. Hero of Alexandria wrote extensively about siphons in the treatise Pneumatica. The Banu Musa brothers of 9th-century Baghdad invented a double-concentric siphon, which they described in their Book of Ingenious Devices. The edition edited by Hill includes an analysis of the double-concentric siphon. Siphons were studied further in the 17th century, in the context of suction pumps (and the recently developed vacuum pumps), particularly with an eye to understanding the maximum height of pumps (and siphons) and the apparent vacuum at the top of early barometers. This was initially explained by Galileo Galilei via the theory of ("nature abhors a vacuum"), which dates to Aristotle, and which Galileo restated as , but this was subsequently disproved by later workers, notably Evangelista Torricelli and Blaise Pascal – see barometer: history. Theory A practical siphon, operating at typical atmospheric pressures and tube heights, works because gravity pulling down on the taller column of liquid leaves reduced pressure at the top of the siphon (formally, hydrostatic pressure when the liquid is not moving). 
This reduced pressure at the top means gravity pulling down on the shorter column of liquid is not sufficient to keep the liquid stationary against the atmospheric pressure pushing it up into the reduced-pressure zone at the top of the siphon. So the liquid flows from the higher-pressure area of the upper reservoir up to the lower-pressure zone at the top of the siphon, over the top, and then, with the help of gravity and a taller column of liquid, down to the higher-pressure zone at the exit. The chain model is a useful but not completely accurate conceptual model of a siphon. The chain model helps to understand how a siphon can cause liquid to flow uphill, powered only by the downward force of gravity. A siphon can sometimes be thought of like a chain hanging over a pulley, with one end of the chain piled on a higher surface than the other. Since the length of chain on the shorter side is lighter than the length of chain on the taller side, the heavier chain on the taller side will move down and pull up the chain on the lighter side. Similar to a siphon, the chain model is obviously just powered by gravity acting on the heavier side, and there is clearly no violation of conservation of energy, because the chain is ultimately just moving from a higher to a lower location, as the liquid does in a siphon. There are a number of problems with the chain model of a siphon, and understanding these differences helps to explain the actual workings of siphons. First, unlike in the chain model of the siphon, it is not actually the weight on the taller side compared to the shorter side that matters. Rather it is the difference in height from the reservoir surfaces to the top of the siphon, that determines the balance of pressure. For example, if the tube from the upper reservoir to the top of the siphon has a much larger diameter than the taller section of tube from the lower reservoir to the top of the siphon, the shorter upper section of the siphon may have a much larger weight of liquid in it, and yet the lighter volume of liquid in the down tube can pull liquid up the fatter up tube, and the siphon can function normally. Another difference is that under most practical circumstances, dissolved gases, vapor pressure, and (sometimes) lack of adhesion with tube walls, conspire to render the tensile strength within the liquid ineffective for siphoning. Thus, unlike a chain, which has significant tensile strength, liquids usually have little tensile strength under typical siphon conditions, and therefore the liquid on the rising side cannot be pulled up in the way the chain is pulled up on the rising side. An occasional misunderstanding of siphons is that they rely on the tensile strength of the liquid to pull the liquid up and over the rise. While water has been found to have a significant tensile strength in some experiments (such as with the z-tube), and siphons in vacuum rely on such cohesion, common siphons can easily be demonstrated to need no liquid tensile strength at all to function. Furthermore, since common siphons operate at positive pressures throughout the siphon, there is no contribution from liquid tensile strength, because the molecules are actually repelling each other in order to resist the pressure, rather than pulling on each other. To demonstrate, the longer lower leg of a common siphon can be plugged at the bottom and filled almost to the crest with liquid as in the figure, leaving the top and the shorter upper leg completely dry and containing only air. 
When the plug is removed and the liquid in the longer lower leg is allowed to fall, the liquid in the upper reservoir will then typically sweep the air bubble down and out of the tube. The apparatus will then continue to operate as a normal siphon. As there is no contact between the liquid on either side of the siphon at the beginning of this experiment, there can be no cohesion between the liquid molecules to pull the liquid over the rise. It has been suggested by advocates of the liquid tensile strength theory, that the air start siphon only demonstrates the effect as the siphon starts, but that the situation changes after the bubble is swept out and the siphon achieves steady flow. But a similar effect can be seen in the flying-droplet siphon (see above). The flying-droplet siphon works continuously without liquid tensile strength pulling the liquid up. The siphon in the video demonstration operated steadily for more than 28 minutes until the upper reservoir was empty. Another simple demonstration that liquid tensile strength is not needed in the siphon is to simply introduce a bubble into the siphon during operation. The bubble can be large enough to entirely disconnect the liquids in the tube before and after the bubble, defeating any liquid tensile strength, and yet if the bubble is not too big, the siphon will continue to operate with little change as it sweeps the bubble out. Another common misconception about siphons is that because the atmospheric pressure is virtually identical at the entrance and exit, the atmospheric pressure cancels, and therefore atmospheric pressure cannot be pushing the liquid up the siphon. But equal and opposite forces may not completely cancel if there is an intervening force that counters some or all of one of the forces. In the siphon, the atmospheric pressure at the entrance and exit are both lessened by the force of gravity pulling down the liquid in each tube, but the pressure on the down side is lessened more by the taller column of liquid on the down side. In effect, the atmospheric pressure coming up the down side does not entirely "make it" to the top to cancel all of the atmospheric pressure pushing up the up side. This effect can be seen more easily in the example of two carts being pushed up opposite sides of a hill. As shown in the diagram, even though the person on the left seems to have his push canceled entirely by the equal and opposite push from the person on the right, the person on the left's seemingly canceled push is still the source of the force to push the left cart up. In some situations siphons do function in the absence of atmospheric pressure and due to tensile strength – see vacuum siphons – and in these situations the chain model can be instructive. Further, in other settings water transport does occur due to tension, most significantly in transpirational pull in the xylem of vascular plants. Water and other liquids may seem to have no tensile strength because when a handful is scooped up and pulled on, the liquids narrow and pull apart effortlessly. But liquid tensile strength in a siphon is possible when the liquid adheres to the tube walls and thereby resists narrowing. Any contamination on the tube walls, such as grease or air bubbles, or other minor influences such as turbulence or vibration, can cause the liquid to detach from the walls and lose all tensile strength. 
In more detail, one can look at how the hydrostatic pressure varies through a static siphon, considering in turn the vertical tube from the top reservoir, the vertical tube from the bottom reservoir, and the horizontal tube connecting them (assuming a U-shape). At liquid level in the top reservoir, the liquid is under atmospheric pressure, and as one goes up the siphon, the hydrostatic pressure decreases (under vertical pressure variation), since the weight of atmospheric pressure pushing the water up is counterbalanced by the column of water in the siphon pushing down (until one reaches the maximal height of a barometer/siphon, at which point the liquid cannot be pushed higher) – the hydrostatic pressure at the top of the tube is then lower than atmospheric pressure by an amount proportional to the height of the tube. Doing the same analysis on the tube rising from the lower reservoir yields the pressure at the top of that (vertical) tube; this pressure is lower because the tube is longer (there is more water pushing down), and requires that the lower reservoir is lower than the upper reservoir, or more generally that the discharge outlet simply be lower than the surface of the upper reservoir. Considering now the horizontal tube connecting them, one sees that the pressure at the top of the tube from the top reservoir is higher (since less water is being lifted), while the pressure at the top of the tube from the bottom reservoir is lower (since more water is being lifted), and since liquids move from high pressure to low pressure, the liquid flows across the horizontal tube from the top basin to the bottom basin. The liquid is under positive pressure (compression) throughout the tube, not tension. Bernoulli's equation is considered in the scientific literature to be a fair approximation to the operation of the siphon. In non-ideal fluids, compressibility, tensile strength and other characteristics of the working fluid (or multiple fluids) complicate Bernoulli's equation. Once started, a siphon requires no additional energy to keep the liquid flowing up and out of the reservoir. The siphon will draw liquid out of the reservoir until the level falls below the intake, allowing air or other surrounding gas to break the siphon, or until the outlet of the siphon equals the level of the reservoir, whichever comes first. In addition to atmospheric pressure, the density of the liquid, and gravity, the maximal height of the crest in practical siphons is limited by the vapour pressure of the liquid. When the pressure within the liquid drops to below the liquid's vapor pressure, tiny vapor bubbles can begin to form at the high point, and the siphon effect will end. This effect depends on how efficiently the liquid can nucleate bubbles; in the absence of impurities or rough surfaces to act as easy nucleation sites for bubbles, siphons can temporarily exceed their standard maximal height during the extended time it takes bubbles to nucleate. One siphon of degassed water was demonstrated to for an extended period of time and other controlled experiments to . For water at standard atmospheric pressure, the maximal siphon height is approximately ; for mercury it is , which is the definition of standard pressure. This equals the maximal height of a suction pump, which operates by the same principle. The ratio of heights (about 13.6) equals the ratio of densities of water and mercury (at a given temperature), since the column of water (resp. 
mercury) is balancing with the column of air yielding atmospheric pressure, and indeed maximal height is (neglecting vapor pressure and velocity of liquid) inversely proportional to density of liquid. Modern research into the operation of the siphon In 1948, Malcolm Nokes investigated siphons working in both air pressure and in a partial vacuum; for siphons in vacuum he concluded: "The gravitational force on the column of liquid in the downtake tube less the gravitational force in the uptake tube causes the liquid to move. The liquid is therefore in tension and sustains a longitudinal strain which, in the absence of disturbing factors, is insufficient to break the column of liquid". But for siphons of small uptake height working at atmospheric pressure, he wrote: "... the tension of the liquid column is neutralized and reversed by the compressive effect of the atmosphere on the opposite ends of the liquid column." Potter and Barnes at the University of Edinburgh revisited siphons in 1971. They re-examined the theories of the siphon and ran experiments on siphons in air pressure. They concluded: "By now it should be clear that, despite a wealth of tradition, the basic mechanism of a siphon does not depend upon atmospheric pressure." Gravity, pressure and molecular cohesion were the focus of work in 2010 by Hughes at the Queensland University of Technology. He used siphons at air pressure and his conclusion was: "The flow of water out of the bottom of a siphon depends on the difference in height between the inflow and outflow, and therefore cannot be dependent on atmospheric pressure…" Hughes did further work on siphons at air pressure in 2011 and concluded: "The experiments described above demonstrate that ordinary siphons at atmospheric pressure operate through gravity and not atmospheric pressure". The father and son researchers Ramette and Ramette successfully siphoned carbon dioxide under air pressure in 2011 and concluded that molecular cohesion is not required for the operation of a siphon, but: "The basic explanation of siphon action is that, once the tube is filled, the flow is initiated by the greater pull of gravity on the fluid on the longer side compared with that on the short side. This creates a pressure drop throughout the siphon tube, in the same sense that 'sucking' on a straw reduces the pressure along its length all the way to the intake point. The ambient atmospheric pressure at the intake point responds to the reduced pressure by forcing the fluid upwards, sustaining the flow, just as in a steadily sucked straw in a milkshake." Again in 2011, Richert and Binder (at the University of Hawaii) examined the siphon and concluded that molecular cohesion is not required for the operation of a siphon but relies upon gravity and a pressure differential, writing: "As the fluid initially primed on the long leg of the siphon rushes down due to gravity, it leaves behind a partial vacuum that allows pressure on the entrance point of the higher container to push fluid up the leg on that side". The research team of Boatwright, Puttick, and Licence, all at the University of Nottingham, succeeded in running a siphon in high vacuum, also in 2011. They wrote: "It is widely believed that the siphon is principally driven by the force of atmospheric pressure. An experiment is described that shows that a siphon can function even under high-vacuum conditions. 
Molecular cohesion and gravity are shown to be contributing factors in the operation of a siphon; the presence of a positive atmospheric pressure is not required". Writing in Physics Today in 2011, J. Dooley from Millersville University stated that both a pressure differential within the siphon tube and the tensile strength of the liquid are required for a siphon to operate. A researcher at Humboldt State University, A. McGuire, examined flow in siphons in 2012. Using the advanced general-purpose multiphysics simulation software package LS-DYNA he examined pressure initialisation, flow, and pressure propagation within a siphon. He concluded: "Pressure, gravity and molecular cohesion can all be driving forces in the operation of siphons". In 2014, Hughes and Gurung (at the Queensland University of Technology) ran a water siphon under varying air pressures ranging from sea level to 11.9 km () altitude. They noted: "Flow remained more or less constant during ascension indicating that siphon flow is independent of ambient barometric pressure". They used Bernoulli's equation and the Poiseuille equation to examine pressure differentials and fluid flow within a siphon. Their conclusion was: "It follows from the above analysis that there must be a direct cohesive connection between water molecules flowing in and out of a siphon. This is true at all atmospheric pressures in which the pressure in the apex of the siphon is above the vapour pressure of water, an exception being ionic liquids". Practical requirements A plain tube can be used as a siphon. An external pump has to be applied to start the liquid flowing and prime the siphon (in home use this is often done by a person inhaling through the tube until enough of it has filled with liquid; this may pose danger to the user, depending on the liquid that is being siphoned). This is sometimes done with any leak-free hose to siphon gasoline from a motor vehicle's gasoline tank to an external tank. (Siphoning gasoline by mouth often results in the accidental swallowing of gasoline, or aspirating it into the lungs, which can cause death or lung damage.) If the tube is flooded with liquid before part of the tube is raised over the intermediate high point and care is taken to keep the tube flooded while it is being raised, no pump is required. Devices sold as siphons often come with a siphon pump to start the siphon process. In some applications it can be helpful to use siphon tubing that is not much larger than necessary. Using piping of too great a diameter and then throttling the flow using valves or constrictive piping appears to increase the effect of previously cited concerns over gases or vapor collecting in the crest which serve to break the vacuum. If the vacuum is reduced too much, the siphon effect can be lost. Reducing the size of pipe used closer to requirements appears to reduce this effect and creates a more functional siphon that does not require constant re-priming and restarting. In this respect, where the requirement is to match a flow into a container with a flow out of said container (to maintain a constant level in a pond fed by a stream, for example) it would be preferable to utilize two or three smaller separate parallel pipes that can be started as required rather than attempting to use a single large pipe and attempting to throttle it. 
Automatic intermittent siphon Siphons are sometimes employed as automatic machines, in situations where it is desirable to turn a continuous trickling flow or an irregular small surge flow into a large surge volume. A common example of this is a public restroom with urinals regularly flushed by an automatic siphon in a small water tank overhead. When the container is filled, all the stored liquid is released, emerging as a large surge volume that then resets and fills again. One way to do this intermittent action involves complex machinery such as floats, chains, levers, and valves, but these can corrode, wear out, or jam over time. An alternate method is with rigid pipes and chambers, using only the water itself in a siphon as the operating mechanism. A siphon used in an automatic unattended device needs to be able to function reliably without failure. This is different from the common demonstration self-starting siphons in that there are ways the siphon can fail to function which require manual intervention to return to normal surge flow operation. A video demonstration of a self-starting siphon can be found here, courtesy of The Curiosity Show. The most common failure is for the liquid to dribble out slowly, matching the rate that the container is filling, and the siphon enters an undesired steady-state condition. Preventing dribbling typically involves pneumatic principles to trap one or more large air bubbles in various pipes, which are sealed by water traps. This method can fail if it cannot start working intermittently without water already present in parts of the mechanism, and which will not be filled if the mechanism starts from a dry state. A second problem is that the trapped air pockets will shrink over time if the siphon is not operating due to no inflow. The air in pockets is absorbed by the liquid, which pulls liquid up into the piping until the air pocket disappears, and can cause activation of water flow outside the normal range of operating when the storage tank is not full, leading to loss of the liquid seal in lower parts of the mechanism. A third problem is where the lower end of the liquid seal is simply a U-trap bend in an outflow pipe. During vigorous emptying, the kinetic motion of the liquid out the outflow can propel too much liquid out, causing a loss of the sealing volume in the outflow trap and loss of the trapped air bubble to maintain intermittent operation. A fourth problem involves seep holes in the mechanism, intended to slowly refill these various sealing chambers when the siphon is dry. The seep holes can be plugged by debris and corrosion, requiring manual cleaning and intervention. To prevent this, the siphon may be restricted to pure liquid sources, free of solids or precipitate. Many automatic siphons have been invented going back to at least the 1850s, for automatic siphon mechanisms that attempt to overcome these problems using various pneumatic and hydrodynamic principles. Applications and terminology When certain liquids needs to be purified, siphoning can help prevent either the bottom (dregs) or the top (foam and floaties) from being transferred out of one container into a new container. Siphoning is thus useful in the fermentation of wine and beer for this reason, since it can keep unwanted impurities out of the new container. Self-constructed siphons, made of pipes or tubes, can be used to evacuate water from cellars after floodings. Between the flooded cellar and a deeper place outside a connection is built, using a tube or some pipes. 
They are filled with water through an intake valve (at the highest end of the construction). When the ends are opened, the water flows through the pipe into the sewer or the river. Siphoning is common in irrigated fields to transfer a controlled amount of water from a ditch, over the ditch wall, into furrows. Large siphons may be used in municipal waterworks and industry. Their size requires control via valves at the intake, outlet and crest of the siphon. The siphon may be primed by closing the intake and outlets and filling the siphon at the crest. If intakes and outlets are submerged, a vacuum pump may be applied at the crest to prime the siphon. Alternatively the siphon may be primed by a pump at either the intake or outlet. Gas in the liquid is a concern in large siphons. The gas tends to accumulate at the crest and if enough accumulates to break the flow of liquid, the siphon stops working. The siphon itself will exacerbate the problem because as the liquid is raised through the siphon, the pressure drops, causing dissolved gases within the liquid to come out of solution. Higher temperature accelerates the release of gas from liquids so maintaining a constant, low temperature helps. The longer the liquid is in the siphon, the more gas is released, so a shorter siphon overall helps. Local high points will trap gas so the intake and outlet legs should have continuous slopes without intermediate high points. The flow of the liquid moves bubbles thus the intake leg can have a shallow slope as the flow will push the gas bubbles to the crest. Conversely, the outlet leg needs to have a steep slope to allow the bubbles to move against the liquid flow; though other designs call for a shallow slope in the outlet leg as well to allow the bubbles to be carried out of the siphon. At the crest the gas can be trapped in a chamber above the crest. The chamber needs to be occasionally primed again with liquid to remove the gas. Siphon rain gauge A siphon rain gauge is a rain gauge that can record rainfall over an extended period. A siphon is used to automatically empty the gauge. It is often simply called a "siphon gauge" and is not to be confused with a siphon pressure gauge. Siphon drainage A siphon drainage method is being implemented in several expressways as of 2022. Recent studies found that it can reduce groundwater level behind expressway retaining walls, and there was no indication of clogging. This new drainage system is being pioneered as a long-term method to limit leakage hazard in the retaining wall. Siphon drainage is also used in draining unstable slopes, and siphon roof-water drainage systems have been in use since the 1960s. Siphon spillway A siphon spillway in a dam is usually not technically a siphon, as it is generally used to drain elevated water levels. However, a siphon spillway operates as an actual siphon if it raises the flow higher than the surface of the source reservoir, as sometimes is the case when used in irrigation. In operation, a siphon spillway is considered to be "pipe flow" or "closed-duct flow". A normal spillway flow is pressurized by the height of the reservoir above the spillway, whereas a siphon flow rate is governed by the difference in height of the inlet and outlet. Some designs make use of an automatic system that uses the flow of water in a spiral vortex to remove the air above to prime the siphon. Such a design includes the volute siphon. Flush toilet Flush toilets often have some siphon effect as the bowl empties. 
Some toilets also use the siphon principle to obtain the actual flush from the cistern. The flush is triggered by a lever or handle that operates a simple diaphragm-like piston pump that lifts enough water to the crest of the siphon to start the flow of water which then completely empties the contents of the cistern into the toilet bowl. The advantage of this system was that no water would leak from the cistern excepting when flushed. These were mandatory in the UK until 2011. Early urinals incorporated a siphon in the cistern which would flush automatically on a regular cycle because there was a constant trickle of clean water being fed to the cistern by a slightly open valve. Devices that are not true siphons Siphon coffee While if both ends of a siphon are at atmospheric pressure, liquid flows from high to low, if the bottom end of a siphon is pressurized, liquid can flow from low to high. If pressure is removed from the bottom end, the liquid flow will reverse, illustrating that it is pressure driving the siphon. An everyday illustration of this is the siphon coffee brewer, which works as follows (designs vary; this is a standard design, omitting coffee grounds): a glass vessel is filled with water, then corked (so air-tight) with a siphon sticking vertically upwards another glass vessel is placed on top, open to the atmosphere – the top vessel is empty, the bottom is filled with water the bottom vessel is then heated; as the temperature increases, the vapor pressure of the water increases (it increasingly evaporates); when the water boils the vapor pressure equals atmospheric pressure, and as the temperature increases above boiling the pressure in the bottom vessel then exceeds atmospheric pressure, and pushes the water up the siphon tube into the upper vessel. a small amount of still hot water and steam remain in the bottom vessel and are kept heated, with this pressure keeping the water in the upper vessel when the heat is removed from the bottom vessel, the vapor pressure decreases, and can no longer support the column of water – gravity (acting on the water) and atmospheric pressure then push the water back into the bottom vessel. In practice, the top vessel is filled with coffee grounds, and the heat is removed from the bottom vessel when the coffee has finished brewing. What vapor pressure means concretely is that the boiling water converts high-density water (a liquid) into low-density steam (a gas), which thus expands to take up more volume (in other words, the pressure increases). This pressure from the expanding steam then forces the liquid up the siphon; when the steam then condenses down to water the pressure decreases and the liquid flows back down. Siphon pump While a simple siphon cannot output liquid at a level higher than the source reservoir, a more complicated device utilizing an airtight metering chamber at the crest and a system of automatic valves, may discharge liquid on an ongoing basis, at a level higher than the source reservoir, without outside pumping energy being added. It can accomplish this despite what initially appears to be a violation of conservation of energy because it can take advantage of the energy of a large volume of liquid dropping some distance, to raise and discharge a small volume of liquid above the source reservoir. Thus it might be said to "require" a large quantity of falling liquid to power the dispensing of a small quantity. Such a system typically operates in a cyclical or start/stop but ongoing and self-powered manner. 
Ram pumps do not work in this way. These metering pumps are true siphon pumping devices which use siphons as their power source. Inverted siphon An inverted siphon is not a siphon but a term applied to pipes that must dip below an obstruction to form a U-shaped flow path. Large inverted siphons are used to convey water being carried in canals or flumes across valleys, for irrigation or gold mining. The Romans used inverted siphons of lead pipes to cross valleys that were too big to construct an aqueduct. Inverted siphons are commonly called traps for their function in preventing sewer gases from coming back out of sewers and sometimes making dense objects like rings and electronic components retrievable after falling into a drain. Liquid flowing in one end simply forces liquid up and out the other end, but solids like sand will accumulate. This is especially important in sewerage systems or culverts which must be routed under rivers or other deep obstructions where the better term is "depressed sewer". Back siphonage Back siphonage is a plumbing term applied to the reversal of normal water flow in a plumbing system due to sharply reduced or negative pressure on the water supply side, such as high demand on water supply by fire-fighting; it is not an actual siphon as it is suction. Back siphonage is rare as it depends on submerged inlets at the outlet (home) end and these are uncommon. Back siphonage is not to be confused with backflow; which is the reversed flow of water from the outlet end to the supply end caused by pressure occurring at the outlet end. Also, building codes usually demand a check valve where the water supply enters a building to prevent backflow into the drinking water system. Anti-siphon valve Building codes often contain specific sections on back siphonage and especially for external faucets (See the sample building code quote, below). Backflow prevention devices such as anti-siphon valves are required in such designs. The reason is that external faucets may be attached to hoses which may be immersed in an external body of water, such as a garden pond, swimming pool, aquarium or washing machine. In these situations the unwanted flow is not actually the result of a siphon but suction due to reduced pressure on the water supply side. Should the pressure within the water supply system fall, the external water may be returned by back pressure into the drinking water system through the faucet. Another possible contamination point is the water intake in the toilet tank. An anti-siphon valve is also required here to prevent pressure drops in the water supply line from suctioning water out of the toilet tank (which may contain additives such as "toilet blue") and contaminating the water system. Anti-siphon valves function as a one-direction check valve. Anti-siphon valves are also used medically. Hydrocephalus, or excess fluid in the brain, may be treated with a shunt which drains cerebrospinal fluid from the brain. All shunts have a valve to relieve excess pressure in the brain. The shunt may lead into the abdominal cavity such that the shunt outlet is significantly lower than the shunt intake when the patient is standing. Thus a siphon effect may take place and instead of simply relieving excess pressure, the shunt may act as a siphon, completely draining cerebrospinal fluid from the brain. The valve in the shunt may be designed to prevent this siphon action so that negative pressure on the drain of the shunt does not result in excess drainage. 
Only excess positive pressure from within the brain should result in drainage. The anti-siphon valve in medical shunts is preventing excess forward flow of liquid. In plumbing systems, the anti-siphon valve is preventing backflow. Sample building code regulations regarding "back siphonage" from the Canadian province of Ontario: 7.6.2.3.Back Siphonage Every potable water system that supplies a fixture or tank that is not subject to pressures above atmospheric shall be protected against back-siphonage by a backflow preventer. Where a potable water supply is connected to a boiler, tank, cooling jacket, lawn sprinkler system or other device where a non-potable fluid may be under pressure that is above atmospheric or the water outlet may be submerged in the non-potable fluid, the water supply shall be protected against backflow by a backflow preventer. Where a hose bibb is installed outside a building, inside a garage, or where there is an identifiable risk of contamination, the potable water system shall be protected against backflow by a backflow preventer. Other anti-siphoning devices Along with anti-siphon valves, anti-siphoning devices also exist. The two are unrelated in application. Siphoning can be used to remove fuel from tanks. With the cost of fuel increasing, it has been linked in several countries to the rise in fuel theft. Trucks, with their large fuel tanks, are most vulnerable. The anti-siphon device prevents thieves from inserting a tube into the fuel tank. Siphon barometer A siphon barometer is the term sometimes applied to the simplest of mercury barometers. A continuous U-shaped tube of the same diameter throughout is sealed on one end and filled with mercury. When placed into the upright, "U", position, mercury will flow away from the sealed end, forming a partial vacuum, until balanced by atmospheric pressure on the other end. The term "siphon" derives from the belief that air pressure is involved in the operation of a siphon. The difference in height of the fluid between the two arms of the U-shaped tube is the same as the maximum intermediate height of a siphon. When used to measure pressures other than atmospheric pressure, a siphon barometer is sometimes called a siphon gauge; these are not siphons but follow a standard U-shaped design leading to the term. Siphon barometers are still produced as precision instruments. Siphon barometers should not be confused with a siphon rain gauge., Siphon bottle A siphon bottle (also called a soda syphon or, archaically, a siphoid) is a pressurized bottle with a vent and a valve. It is not a siphon as pressure within the bottle drives the liquid up and out a tube. A special form was the gasogene. Siphon cup A siphon cup is the (hanging) reservoir of paint attached to a spray gun, it is not a siphon as a vacuum pump extracts the paint. This name is to distinguish it from gravity-fed reservoirs. An archaic use of the term is a cup of oil in which the oil is transported out of the cup via a cotton wick or tube to a surface to be lubricated, this is not a siphon but an example of capillary action. Heron's siphon Heron's siphon is not a siphon as it works as a gravity driven pressure pump, at first glance it appears to be a perpetual motion machine but will stop when the air in the priming pump is depleted. In a slightly differently configuration, it is also known as Heron's fountain. Venturi siphon A venturi siphon, also known as an eductor, is not a siphon but a form of vacuum pump using the Venturi effect of fast flowing fluids (e.g. 
air), to produce low pressures to suction other fluids; a common example is the carburetor. See pressure head. The low pressure at the throat of the venturi is called a siphon when a second fluid is introduced, or an aspirator when the fluid is air, this is an example of the misconception that air pressure is the operating force for siphons. Siphonic roof drainage Despite the name, siphonic roof drainage does not work as a siphon; the technology makes use of gravity induced vacuum pumping to carry water horizontally from multiple roof drains to a single downpipe and to increase flow velocity. Metal baffles at the roof drain inlets reduce the injection of air which increases the efficiency of the system. One benefit to this drainage technique is reduced capital costs in construction compared to traditional roof drainage. Another benefit is the elimination of pipe pitch or gradient required for conventional roof drainage piping. However this system of gravity pumping is mainly suitable for large buildings and is not usually suitable for residential properties. Self-siphons The term self-siphon is used in a number of ways. Liquids that are composed of long polymers can "self-siphon" and these liquids do not depend on atmospheric pressure. Self-siphoning polymer liquids work the same as the siphon-chain model where the lower part of the chain pulls the rest of the chain up and over the crest. This phenomenon is also called a tubeless siphon. "Self-siphon" is also often used in sales literature by siphon manufacturers to describe portable siphons that contain a pump. With the pump, no external suction (e.g. from a person's mouth/lungs) is required to start the siphon and thus the product is described as a "self-siphon". If the upper reservoir is such that the liquid there can rise above the height of the siphon crest, the rising liquid in the reservoir can "self-prime" the siphon and the whole apparatus be described as a "self-siphon". Once primed, such a siphon will continue to operate until the level of the upper reservoir falls below the intake of the siphon. Such self-priming siphons are useful in some rain gauges and dams. In nature Anatomy The term "siphon" is used for a number of structures in human and animal anatomy, either because flowing liquids are involved or because the structure is shaped like a siphon, but in which no actual siphon effect is occurring: see Siphon (disambiguation). There has been a debate if whether the siphon mechanism plays a role in blood circulation. However, in the 'closed loop' of circulation this was discounted; "In contrast, in 'closed' systems, like the circulation, gravity does not hinder uphill flow nor does it cause downhill flow, because gravity acts equally on the ascending and descending limbs of the circuit", but for "historical reasons", the term is used. One hypothesis (in 1989) was that a siphon existed in the circulation of the giraffe. But further research in 2004 found that, "There is no hydrostatic gradient and since the 'fall' of fluid does not assist the ascending arm, there is no siphon. The giraffe's high arterial pressure, which is sufficient to raise the blood 2 m from heart to head with sufficient remaining pressure to perfuse the brain, supports this concept." However, a paper written in 2005 urged more research on the hypothesis: The principle of the siphon is not species specific and should be a fundamental principle of closed circulatory systems. 
Therefore, the controversy surrounding the role of the siphon principle may best be resolved by a comparative approach. Analyses of blood pressure on a variety of long-necked and long-bodied animals, which take into account phylogenetic relatedness, will be important. In addition experimental studies that combined measurements of arterial and venous blood pressures, with cerebral blood flow, under a variety of gravitational stresses (different head positions), will ultimately resolve this controversy. Species Some species are named after siphons because they resemble siphons in whole or in part. Geosiphons are fungi. There are species of alga belonging to the family Siphonocladaceae in the phylum Chlorophyta which have tube-like structures. Ruellia villosa is a tropical plant in the family Acanthaceae that is also known by the botanical synonym, Siphonacanthus villosus Nees'. Geology In speleology, a siphon or a sump is that part of a cave passage that lies under water and through which cavers have to dive to progress further into the cave system, but it is not an actual siphon. Rivers A river siphon occurs when part of the water flow passes under a submerged object like a rock or tree trunk. The water flowing under the obstruction can be very powerful, and as such can be very dangerous for kayaking, canyoning, and other river-based watersports. Explanation using Bernoulli's equation Bernoulli's equation may be applied to a siphon to derive its ideal flow rate and theoretical maximum height. Let the surface of the upper reservoir be the reference elevation. Let point A be the start point of siphon, immersed within the higher reservoir and at a depth −d below the surface of the upper reservoir. Let point B be the intermediate high point on the siphon tube at height +hB above the surface of the upper reservoir. Let point C be the drain point of the siphon at height −hC below the surface of the upper reservoir. Bernoulli's equation: = fluid velocity along the streamline = gravitational acceleration downwards = elevation in gravity field = pressure along the streamline = fluid density Apply Bernoulli's equation to the surface of the upper reservoir. The surface is technically falling as the upper reservoir is being drained. However, for this example we will assume the reservoir to be infinite and the velocity of the surface may be set to zero. Furthermore, the pressure at both the surface and the exit point C is atmospheric pressure. Thus: Apply Bernoulli's equation to point A at the start of the siphon tube in the upper reservoir where P = PA, v = vA and y = −d Apply Bernoulli's equation to point B at the intermediate high point of the siphon tube where P = PB, v = vB and y = hB Apply Bernoulli's equation to point C where the siphon empties. Where v = vC and y = −hC. Furthermore, the pressure at the exit point is atmospheric pressure. Thus: Velocity As the siphon is a single system, the constant in all four equations is the same. Setting equations 1 and 4 equal to each other gives: Solving for vC: Velocity of siphon: The velocity of the siphon is thus driven solely by the height difference between the surface of the upper reservoir and the drain point. The height of the intermediate high point, hB, does not affect the velocity of the siphon. However, as the siphon is a single system, vB = vC and the intermediate high point does limit the maximum velocity. The drain point cannot be lowered indefinitely to increase the velocity. 
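A minimal numerical sketch of these results is given below. It evaluates the ideal exit velocity v_C = √(2gh_C) and also the static barometric bound on the crest height, h_max ≈ P_atm/(ρg), which underlies the maximum-height discussion that follows. Friction, vapour pressure and the velocity head at the crest are ignored, and the 3 m drop is an assumed example value.

```python
# Minimal sketch of the ideal siphon relations from the Bernoulli analysis:
# exit velocity from the height difference, and the barometric limit on the
# crest height. The 3 m drop below is an assumed example value.
import math

g = 9.81            # gravitational acceleration, m/s^2
P_atm = 101325.0    # standard atmospheric pressure, Pa

def exit_velocity(h_c):
    """Ideal siphon exit velocity, v_C = sqrt(2*g*h_C), set only by the
    height difference between the upper surface and the outlet."""
    return math.sqrt(2.0 * g * h_c)

def max_crest_height(rho):
    """Static (zero-velocity) limit on the crest height, h_max ~ P_atm/(rho*g)."""
    return P_atm / (rho * g)

print(f"exit velocity for a 3 m drop: {exit_velocity(3.0):.2f} m/s")
print(f"max crest height, water   (rho = 1000 kg/m^3):  {max_crest_height(1000.0):.2f} m")
print(f"max crest height, mercury (rho = 13546 kg/m^3): {max_crest_height(13546.0):.2f} m")
```

The two crest heights differ by the density ratio of mercury to water, about 13.6, matching the figure quoted earlier.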
Equation 3 will limit the velocity to retain a positive pressure at the intermediate high point to prevent cavitation. The maximum velocity may be calculated by combining equations 1 and 3: Setting PB = 0 and solving for vmax: Maximum velocity of siphon: The depth, −d, of the initial entry point of the siphon in the upper reservoir, does not affect the velocity of the siphon. No limit to the depth of the siphon start point is implied by Equation 2 as pressure PA increases with depth d. Both these facts imply the operator of the siphon may bottom skim or top skim the upper reservoir without impacting the siphon's performance. This equation for the velocity is the same as that of any object falling height hC. This equation assumes PC is atmospheric pressure. If the end of the siphon is below the surface, the height to the end of the siphon cannot be used; rather the height difference between the reservoirs should be used. Maximum height Although siphons can exceed the barometric height of the liquid in special circumstances, e.g. when the liquid is degassed and the tube is clean and smooth, in general the practical maximum height can be found as follows. Setting equations 1 and 3 equal to each other gives: Maximum height of the intermediate high point occurs when it is so high that the pressure at the intermediate high point is zero; in typical scenarios this will cause the liquid to form bubbles and if the bubbles enlarge to fill the pipe then the siphon will "break". Setting PB = 0: Solving for hB: General height of siphon: This means that the height of the intermediate high point is limited by pressure along the streamline being always greater than zero. Maximum height of siphon: This is the maximum height that a siphon will work. Substituting values will give approximately for water and, by definition of standard pressure, for mercury. The ratio of heights (about 13.6) equals the ratio of densities of water and mercury (at a given temperature). As long as this condition is satisfied (pressure greater than zero), the flow at the output of the siphon is still only governed by the height difference between the source surface and the outlet. Volume of fluid in the apparatus is not relevant as long as the pressure head remains above zero in every section. Because pressure drops when velocity is increased, a static siphon (or manometer) can have a slightly higher height than a flowing siphon. Operation in a vacuum Experiments have shown that siphons can operate in a vacuum, via cohesion and tensile strength between molecules, provided that the liquids are pure and degassed and surfaces are very clean. The Oxford English Dictionary (OED) entry on siphon, published in 1911, states that a siphon works by atmospheric pressure. Stephen Hughes of Queensland University of Technology criticized this in a 2010 article which was widely reported in the media. The OED editors stated, "there is continuing debate among scientists as to which view is correct. ... We would expect to reflect this debate in the fully updated entry for siphon, due to be published later this year." Hughes continued to defend his view of the siphon in a late September post at the Oxford blog. The 2015 definition by the OED is: A tube used to convey liquid upwards from a reservoir and then down to a lower level of its own accord. Once the liquid has been forced into the tube, typically by suction or immersion, flow continues unaided. 
The Encyclopædia Britannica currently describes a siphon as: Siphon, also spelled syphon, instrument, usually in the form of a tube bent to form two legs of unequal length, for conveying liquid over the edge of a vessel and delivering it at a lower level. Siphons may be of any size. The action depends upon the influence of gravity (not, as sometimes thought, on the difference in atmospheric pressure; a siphon will work in a vacuum) and upon the cohesive forces that prevent the columns of liquid in the legs of the siphon from breaking under their own weight. At sea level, water can be lifted a little more than 10 metres (33 feet) by a siphon. In civil engineering, pipelines called inverted siphons are used to carry sewage or stormwater under streams, highway cuts, or other depressions in the ground. In an inverted siphon the liquid completely fills the pipe and flows under pressure, as opposed to the open-channel gravity flow that occurs in most sanitary or storm sewers. Standards in engineering or industry The American Society of Mechanical Engineers (ASME) publishes the following Tri-Harmonized Standard: ASSE 1002/ASME A112.1002/CSA B125.12 on Performance Requirements for Anti-Siphon Fill Valves (Ballcocks) for Gravity Water Closet Flush Tanks See also 1992 Guadalajara explosions for details of an accident where a plumbing method (trap, also known as an inverted siphon) was partially responsible for gas explosions. Communicating vessels Concentric siphon Gravity feed Jiggle syphon Marot jar Pythagorean cup Water level (device) References External links xkcd on the siphon Articles containing video clips Egyptian inventions Fluid dynamics Tools 16th-century BC establishments
Siphon
[ "Chemistry", "Engineering" ]
10,543
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
459,844
https://en.wikipedia.org/wiki/Surface%20acoustic%20wave
A surface acoustic wave (SAW) is an acoustic wave traveling along the surface of a material exhibiting elasticity, with an amplitude that typically decays exponentially with depth into the material, such that the wave is confined to within about one wavelength of the surface. Discovery SAWs were first explained in 1885 by Lord Rayleigh, who described the surface acoustic mode of propagation and predicted its properties in his classic paper. Named after their discoverer, Rayleigh waves have a longitudinal and a vertical shear component that can couple with any medium, such as additional layers, in contact with the surface. This coupling strongly affects the amplitude and velocity of the wave, allowing SAW sensors to directly sense mass and mechanical properties. The term 'Rayleigh waves' is often used synonymously with 'SAWs', although strictly speaking there are multiple types of surface acoustic waves, such as Love waves, which are polarised in the plane of the surface rather than having longitudinal and vertical components. SAWs such as Love and Rayleigh waves tend to propagate for much longer than bulk waves, as they only have to travel in two dimensions rather than in three. Furthermore, they generally have a lower velocity than their bulk counterparts. SAW devices Surface acoustic wave devices serve a wide range of functions in electronic systems, including delay lines, filters, correlators and DC-to-DC converters, and these capabilities make them attractive for radar and communication systems. Application in electronic components This kind of wave is commonly used in devices called SAW devices in electronic circuits. SAW devices are used as filters, oscillators and transformers, devices that are based on the transduction of acoustic waves. The transduction from electric energy to mechanical energy (in the form of SAWs) is accomplished by the use of piezoelectric materials. Electronic devices employing SAWs normally use one or more interdigital transducers (IDTs) to convert acoustic waves to electrical signals and vice versa by exploiting the piezoelectric effect of certain materials, like quartz, lithium niobate, lithium tantalate, lanthanum gallium silicate, etc. These devices are fabricated by substrate cleaning/treatments such as polishing, metallisation, photolithography, and passivation/protection (dielectric) layer manufacturing; these are typical process steps also used in the manufacturing of semiconductors such as silicon integrated circuits. All parts of the device (the substrate, its surface, the metallisation material and its thickness, the electrode edges formed by photolithography, and any layers such as a passivation coating over the metallisation) affect the performance of a SAW device, because the propagation of Rayleigh waves depends strongly on the substrate surface, its quality, and all layers in contact with it. For example, in SAW filters the sampling frequency depends on the width of the IDT fingers, the power handling capability is related to the thickness and materials of the IDT fingers, and the temperature stability depends not only on the temperature behavior of the substrate but also on the metals selected for the IDT electrodes and any dielectric layers coating the substrate and the electrodes.
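To make the dependence on finger width concrete, the short sketch below estimates the operating (synchronous) frequency of a simple IDT. It assumes a conventional 50% metallisation ratio, for which the acoustic wavelength is about four times the finger width, and an approximate Rayleigh-wave velocity for a quartz-like substrate; both numbers are assumptions for illustration rather than values from this article.

```python
# Rough illustration: synchronous frequency of a simple interdigital transducer
# (IDT). Assumes equal finger width and gap, so the SAW wavelength is about
# 4 x finger width. Velocity and geometry below are assumed example values.
def idt_synchronous_frequency(saw_velocity, finger_width):
    """f0 = v / lambda, with lambda ~ 4 * finger width for a 50% metallised IDT."""
    wavelength = 4.0 * finger_width
    return saw_velocity / wavelength

v_saw = 3150.0          # m/s, roughly the Rayleigh-wave speed on quartz (assumed)
for width_um in (8.0, 2.0, 0.8):
    f0 = idt_synchronous_frequency(v_saw, width_um * 1e-6)
    print(f"finger width {width_um:4.1f} um -> f0 ~ {f0/1e6:7.1f} MHz")
```

Reaching filters in the GHz range by this route requires sub-micrometre finger widths, which is one practical constraint on IDT fabrication.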
SAW filters are now used in mobile telephones, and provide technical advantages in performance, cost, and size over other filter technologies such as quartz crystals (based on bulk waves), LC filters, and waveguide filters, particularly at frequencies below about 1.5-2.5 GHz, depending on the RF power that needs to be filtered. The complementary technology to SAW for frequencies above 1.5-2.5 GHz is based on thin-film bulk acoustic resonators (TFBAR, or FBAR). Much research has been done in the last 20 years in the area of surface acoustic wave sensors. Sensor applications include all areas of sensing (such as chemical, optical, thermal, pressure, acceleration, torque and biological). SAW sensors have seen relatively modest commercial success to date, but are commonly commercially available for some applications such as touchscreen displays. They have been successfully applied to torque sensing in motorsport powertrains and high-performance aerospace applications, to temperature sensing in harsh environments such as high-voltage electrical power transmission, and to the combined sensing of torque and temperature on the rotors of electric motors. SAW device applications in radio and television SAW resonators are used in many of the same applications in which quartz crystals are used, because they can operate at higher frequency. They are often used in radio transmitters where tunability is not required. They are often used in applications such as garage door opener remote controls, short range radio frequency links for computer peripherals, and other devices where channelization is not required. Where a radio link might use several channels, quartz crystal oscillators are more commonly used to drive a phase locked loop. Since the resonant frequency of a SAW device is set by the mechanical properties of the crystal, it does not drift as much as a simple LC oscillator, where conditions such as capacitor performance and battery voltage vary substantially with temperature and age. SAW filters are also often used in radio receivers, as they can have precisely determined and narrow passbands. This is helpful in applications where a single antenna must be shared between a transmitter and a receiver operating at closely spaced frequencies. SAW filters are also frequently used in television receivers for extracting subcarriers from the signal; until the analog switchoff, the extraction of digital audio subcarriers from the intermediate frequency strip of a television receiver or video recorder was one of the main markets for SAW filters. An early pioneer, Jeffery Collins, incorporated surface acoustic wave devices in a Skynet receiver he developed in the 1970s; it synchronised signals faster than existing technology. SAW filters are also often used in digital receivers and are well suited to superhet applications, because the intermediate frequency signal is always at a fixed frequency after the local oscillator has been mixed with the received signal, so a filter with a fixed frequency and high Q provides excellent removal of unwanted or interfering signals. In these applications, SAW filters are almost always used with a phase locked loop synthesized local oscillator, or a varicap driven oscillator. SAW in geophysics In seismology, surface acoustic waves can be the most destructive type of seismic wave produced by earthquakes. They propagate through complex media such as the ocean floor and rock, and therefore need to be observed and monitored in order to protect people and the environment.
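How much slower a Rayleigh surface wave travels than the corresponding bulk shear wave can be estimated with a commonly used approximation, c_R ≈ c_s(0.862 + 1.14ν)/(1 + ν), where c_s is the shear-wave speed and ν the Poisson ratio. The sketch below applies it to rough, assumed values for a granite-like rock; both the formula's coefficients and the material numbers should be treated as approximate.

```python
# Illustrative estimate of how much slower a Rayleigh surface wave is than the
# bulk shear wave in the same material, using a common approximate formula.
# The material values below are rough, assumed numbers for a granite-like rock.
import math

def shear_speed(G, rho):
    """Bulk shear-wave speed c_s = sqrt(G/rho)."""
    return math.sqrt(G / rho)

def rayleigh_speed(c_s, nu):
    """Approximate Rayleigh-wave speed, c_R ~ c_s*(0.862 + 1.14*nu)/(1 + nu)."""
    return c_s * (0.862 + 1.14 * nu) / (1.0 + nu)

G, rho, nu = 25e9, 2700.0, 0.25      # shear modulus (Pa), density (kg/m^3), Poisson ratio
c_s = shear_speed(G, rho)
print(f"shear-wave speed    ~ {c_s:.0f} m/s")
print(f"Rayleigh-wave speed ~ {rayleigh_speed(c_s, nu):.0f} m/s")
```

The ratio c_R/c_s stays between roughly 0.87 and 0.96 for ordinary solids, which is the sense in which surface waves are slightly slower than their bulk counterparts.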
SAW in quantum acoustics SAWs play a key role in the field of quantum acoustics (QA) where, in contrast to quantum optics (QO) which studies the interaction between matter and light, the interaction between quantum systems (phonons, (quasi-)particles and artificial qubits) and acoustic waves is analysed. The propagation speed of the respective waves of QA is five orders of magnitude slower than that of QO. As a result, QA offers a different perspective of the quantum regime in terms of wavelengths which QO has not covered. One example of these additions is the quantum optical investigation of qubits and quantum dots fabricated in such a way as to emulate essential aspects of natural atoms, e.g. energy-level structures and coupling to an electromagnetic field. These artificial atoms are arranged into a circuit dubbed ‘giant atoms’, due to its size reaching 10−4–10−3 m. Quantum optical experiments generally made use of microwave fields for matter-light interaction, but because of the difference of wavelength between the giant atoms and microwave fields, the latter of which has a wavelength ranging between 10−2–10−1 m, SAWs were used instead for their more suitable wavelength (10−6 m). Within the fields of magnonics and spintronics, a resonant coupling between spin waves and surface acoustic waves with equal wave-vector and frequency allows for the transfer of energy from one form to another, in either direction. This can for example be useful in the construction of magnetic field sensors, which are sensitive to both the intensity and direction of external magnetic fields. These sensors, constructed using a structure of magnetostrictive and piezoelectric layers have the benefit of operating without batteries and wires, as well as having a broad range of operating conditions, such as high temperatures or rotating systems. Single electron control Even at the smallest scales of current semiconductor technology, each operation is carried out by huge streams of electrons. Reducing the number of electrons involved in these processes, with the ultimate goal of achieving single electron control is a serious challenge. This is due to the electrons being highly interactive with each other and their surroundings, making it difficult to separate just one from the rest. The use of SAWs can help with achieving this goal. When SAWs are generated on a piezoelectric surface, the strain wave generates an electromagnetic potential. The potential minima can then trap single electrons, allowing them to be individually transported. Although this technique was first thought of as a way to accurately define a standard unit of current, it turned out to be more useful in the field of quantum information. Usually, qubits are stationary, making the transfer of information between them difficult. The single electrons, carried by the SAWs, can be used as so called flying qubits, able to transport information from one place to another. To realise this a single electron source is needed, as well as a receiver between which the electron can be transported. Quantum dots (QD) are typically used for these stationary electron confinements. This potential minimum is sometimes called a SAW QD. The process, as seen in the GIF on the right, is typically as follows. First SAWs are generated with an interdigital transducer with specific dimensions between the electrodes to get the favorable wavelengths. Then from the stationary QD the electron quantum tunnels to the potential minimum, or SAW QD. 
The SAWs transfer some kinetic energy to the electron, driving it forward. It is then carried through a one dimensional channel on a surface of piezoelectric semiconductor material like GaAs. Finally, the electron tunnels out of the SAW QD and into the receiver QD, after which the transfer is complete. This process can also be repeated in both directions. SAW and 2D materials As acoustic vibrations can interact with the moving charges in a piezoelectric semiconductor through the strain-induced piezoelectric field in bulk materials, this acoustoelectric (AE) coupling is also important in 2D materials, such as graphene. In these 2D materials the two-dimensional electron gas has band gap energies generally much higher than the energy of the SAW phonons traveling through the material. Therefore the SAW phonons are typically absorbed via intra-band electronic transitions. In graphene these transitions are the only way, as the linear dispersion relation of its electrons prevents momentum/energy conservation when it would absorb a SAW for an inter-band transition. Often the interaction between moving charges and SAWs results in the diminishing of the SAW intensity as it moves through the 2D electron gas, as well as re-normalizing the SAW velocity. The charges take over kinetic energy from the SAW and lose this energy again through carrier scattering. Aside from SAW intensity attenuation, there are specific situations in which the wave can be amplified as well. By applying a voltage over the material, the charge carriers may obtain a higher drift speed than the SAW. Then they pass on a part of their kinetic energy to the SAW, causing it to amplify its intensity and velocity. The converse works as well. If the SAW is moving faster than the carriers, it may transfer kinetic energy to them, and thereby losing some velocity and intensity. SAW in microfluidics In recent years, attention has been drawn to using SAWs to drive microfluidic actuation and a variety of other processes. Owing to the mismatch of sound velocities in the SAW substrate and fluid, SAWs can be efficiently transferred into the fluid, creating significant inertial forces and fluid velocities. This mechanism can be exploited to drive fluid actions such as pumping, mixing, and jetting.[8] To drive these processes, there is a change of mode of the wave at the liquid-substrate interface. In the substrate, the SAW wave is a transverse wave and upon entering the droplet the wave becomes a longitudinal wave.[9] It is this longitudinal wave that creates the flow of fluid within the microfluidic droplet, allowing mixing to take place. This technique can be used as an alternative to microchannels and microvalves for manipulation of substrates, allowing for an open system. This mechanism has also been used in droplet-based microfluidics for droplet manipulation. Notably, using SAW as an actuation mechanism, droplets were pushed towards two or more outlets for sorting. Moreover, SAWs were used for droplet size modulation, splitting, trapping, tweezing, and nanofluidic pipetting. Droplet impact on flat and inclined surfaces has been manipulated and controlled using SAW. PDMS (polydimethylsiloxane) is a material that can be used to create microchannels and microfluidic chips. It has many uses, including in experiments where living cells are to be tested or processed. 
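Returning to the acoustoelectric interaction described in the 2D-materials paragraph above: whether the SAW is attenuated or amplified reduces to comparing the carrier drift velocity with the SAW velocity. The sketch below encodes only that sign criterion — it is not a model of the acoustoelectric gain, which also depends on carrier density, mobility, conductivity relaxation and the substrate's electromechanical coupling — and all numbers in it are illustrative assumptions.

```python
# Sign of the acoustoelectric interaction: carriers slower than the SAW damp it,
# carriers driven faster than the SAW feed energy into it (amplification).
# This is only the qualitative criterion discussed above, not a gain model.

def drift_velocity(mobility_cm2_per_Vs: float, field_V_per_cm: float) -> float:
    """Carrier drift velocity in m/s from mobility and applied field (low-field, ohmic regime)."""
    return mobility_cm2_per_Vs * field_V_per_cm * 1e-2   # (cm^2/Vs)*(V/cm) = cm/s -> m/s

def interaction_regime(v_drift: float, v_saw: float) -> str:
    if v_drift > v_saw:
        return "amplification (carriers faster than the SAW)"
    if v_drift < v_saw:
        return "attenuation (SAW loses energy to the carriers)"
    return "crossover (no net energy exchange)"

v_saw = 3500.0                 # m/s, illustrative SAW velocity on a piezoelectric substrate
mu = 1.0e4                     # cm^2/(V s), illustrative 2D-carrier mobility
for field in (1.0, 5.0, 50.0):  # V/cm, illustrative applied fields
    vd = drift_velocity(mu, field)
    print(f"E = {field:5.1f} V/cm -> v_drift = {vd:8.1f} m/s -> {interaction_regime(vd, v_saw)}")
```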
If living organisms need to be kept alive, it is important to monitor and control their environment, such as heat and pH levels; however, if these elements are not regulated, the cells may die or it may result in unwanted reactions. PDMS has been found to absorb acoustic energy, causing the PDMS to heat up quickly (exceeding 2000 Kelvin/second). The use of SAW as a way to heat these PDMS devices, along with liquids inside microchannels, is now a technique that can be done in a controlled manner with the ability to manipulate the temperature to within 0.1 °C. The development of Flexible Surface Acoustic Wave (SAW) devices has been a significant driver in the advancement of wearable technology and microfluidic systems. These devices are typically fabricated on polymer substrates, such as Polyethylene Naphthalate (PEN) and polyimide, and utilize sputtering deposition of materials like AlN and ZnO. This combination of flexibility and advanced materials has expanded their application potential across various fields. SAW in flow measurement Surface acoustic waves can be used for flow measurement. SAW relies on the propagation of a wave front, which appears similar to seismic activities. The waves are generated at the excitation centre and spread out along the surface of a solid material. An electric pulse induces them to generate SAWs that propagate like the waves of an earthquake. Interdigital transducer acts as sender and as receiver. When one is in sender mode, the two most distant ones act as receivers. The SAWs travel along the surface of the measuring tube, but a portion will couple out to the liquid. The decoupling angle depends on the liquid respectively the propagation velocity of the wave which is specific to the liquid. On the other side of the measuring tube, portions of the wave will couple into the tube and continue their way along its surface to the next interdigital transducer. Another portion will be coupled out again and travels back to the other side of the measuring tube where the effect repeats itself and the transducer on this side detects the wave. That means excitation of any one transducer here will lead to a sequence of input signals on two other transducers in the distance. Two of the transducers send their signals in the direction of flow, two in the other direction. See also Linear elasticity Love wave Phonon Picosecond ultrasonics Rayleigh wave Surface plasmon polariton Ultrasound References External links History of SAW Devices SAW Sensor Watching ripples on crystals Surface waves Microtechnology
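The statement in the flow-measurement section above that the decoupling angle depends on the liquid can be made quantitative with a Snell-type relation: the SAW radiates into the liquid at the Rayleigh angle θ with sin θ = c_liquid / v_SAW. The sound speeds and the SAW velocity below are rounded, illustrative values, so the sketch only shows how different liquids change the geometry of the acoustic path, not the behaviour of any particular flow meter.

```python
# Rayleigh (coupling) angle at which a SAW leaks from the tube wall into the liquid.
# Assumption: simple Snell-type relation sin(theta) = c_liquid / v_SAW, with rounded
# sound speeds; real meters also account for wall material and temperature.
import math

V_SAW = 3900.0   # m/s, illustrative SAW velocity on the measuring-tube surface

SOUND_SPEED_M_PER_S = {
    "water (20 C)": 1482.0,
    "ethanol": 1144.0,
    "glycerol": 1920.0,
}

for liquid, c in SOUND_SPEED_M_PER_S.items():
    theta = math.degrees(math.asin(c / V_SAW))
    print(f"{liquid:>14}: coupling angle ≈ {theta:4.1f}°")
```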
Surface acoustic wave
[ "Physics", "Materials_science", "Engineering" ]
3,248
[ "Physical phenomena", "Microtechnology", "Surface waves", "Materials science", "Waves" ]
460,235
https://en.wikipedia.org/wiki/Resultant%20force
In physics and engineering, a resultant force is the single force and associated torque obtained by combining a system of forces and torques acting on a rigid body via vector addition. The defining feature of a resultant force, or resultant force-torque, is that it has the same effect on the rigid body as the original system of forces. Calculating and visualizing the resultant force on a body is done through computational analysis, or (in the case of sufficiently simple systems) a free body diagram. The point of application of the resultant force determines its associated torque. The term resultant force should be understood to refer to both the forces and torques acting on a rigid body, which is why some use the term resultant force–torque. The force equal to the resultant force in magnitude, yet pointed in the opposite direction, is called an equilibrant force. Illustration The diagram illustrates simple graphical methods for finding the line of application of the resultant force of simple planar systems. Lines of application of the actual forces and in the leftmost illustration intersect. After vector addition is performed "at the location of ", the net force obtained is translated so that its line of application passes through the common intersection point. With respect to that point all torques are zero, so the torque of the resultant force is equal to the sum of the torques of the actual forces. Illustration in the middle of the diagram shows two parallel actual forces. After vector addition "at the location of ", the net force is translated to the appropriate line of application, whereof it becomes the resultant force . The procedure is based on a decomposition of all forces into components for which the lines of application (pale dotted lines) intersect at one point (the so-called pole, arbitrarily set at the right side of the illustration). Then the arguments from the previous case are applied to the forces and their components to demonstrate the torque relationships. The rightmost illustration shows a couple, two equal but opposite forces for which the amount of the net force is zero, but they produce the net torque    where   is the distance between their lines of application. This is "pure" torque, since there is no resultant force. Bound vector A force applied to a body has a point of application. The effect of the force is different for different points of application. For this reason a force is called a bound vector, which means that it is bound to its point of application. Forces applied at the same point can be added together to obtain the same effect on the body. However, forces with different points of application cannot be added together and maintain the same effect on the body. It is a simple matter to change the point of application of a force by introducing equal and opposite forces at two different points of application that produce a pure torque on the body. In this way, all of the forces acting on a body can be moved to the same point of application with associated torques. A system of forces on a rigid body is combined by moving the forces to the same point of application and computing the associated torques. The sum of these forces and torques yields the resultant force-torque. 
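The bookkeeping just described — moving every force to a common point of application and accumulating the associated torques — is easy to express numerically. The sketch below uses NumPy with an arbitrary example system; the reference point R is a free choice, exactly as in the text.

```python
# Resultant force and associated torque of a system of forces on a rigid body.
# Each force F_i acts at a point r_i; moving it to a chosen reference point R
# contributes a torque (r_i - R) x F_i.  Example numbers are arbitrary.
import numpy as np

forces = np.array([[ 3.0, 0.0, 0.0],     # N
                   [ 0.0, 2.0, 0.0],
                   [-1.0, 1.0, 0.0]])
points = np.array([[ 0.0, 1.0, 0.0],     # m, points of application
                   [ 2.0, 0.0, 0.0],
                   [ 1.0, 1.0, 0.0]])
R = np.zeros(3)                          # reference point (free choice)

F = forces.sum(axis=0)                              # resultant force
T = np.cross(points - R, forces).sum(axis=0)        # associated torque about R

print("resultant force F =", F)                     # [2. 3. 0.]
print("associated torque T about R =", T)           # [0. 0. 3.]
```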
Associated torque If a point R is selected as the point of application of the resultant force F of a system of n forces Fi then the associated torque T is determined from the formulas and It is useful to note that the point of application R of the resultant force may be anywhere along the line of action of F without changing the value of the associated torque. To see this add the vector kF to the point of application R in the calculation of the associated torque, The right side of this equation can be separated into the original formula for T plus the additional term including kF, because the second term is zero. To see this notice that F is the sum of the vectors Fi which yields thus the value of the associated torque is unchanged. Torque-free resultant It is useful to consider whether there is a point of application R such that the associated torque is zero. This point is defined by the property where F is resultant force and Fi form the system of forces. Notice that this equation for R has a solution only if the sum of the individual torques on the right side yield a vector that is perpendicular to F. Thus, the condition that a system of forces has a torque-free resultant can be written as If this condition is satisfied then there is a point of application for the resultant which results in a pure force. If this condition is not satisfied, then the system of forces includes a pure torque for every point of application. Wrench The forces and torques acting on a rigid body can be assembled into the pair of vectors called a wrench. If a system of forces and torques has a net resultant force F and a net resultant torque T, then the entire system can be replaced by a force F and an arbitrarily located couple that yields a torque of T. In general, if F and T are orthogonal, it is possible to derive a radial vector R such that , meaning that the single force F, acting at displacement R, can replace the system. If the system is zero-force (torque only), it is termed a screw and is mathematically formulated as screw theory. The resultant force and torque on a rigid body obtained from a system of forces Fi i=1,...,n, is simply the sum of the individual wrenches Wi, that is Notice that the case of two equal but opposite forces F and -F acting at points A and B respectively, yields the resultant W=(F-F, A×F - B× F) = (0, (A-B)×F). This shows that wrenches of the form W=(0, T) can be interpreted as pure torques. References Sources Force Dynamics (mechanics)
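When the torque-free condition above holds (the net torque about the origin is perpendicular to the resultant force F), one admissible point of application is R = (F × T0)/|F|², since then R × F = T0 and the torque about R vanishes. The sketch below checks the condition and recovers such a point for an arbitrary planar example; any point along the line of action through R would serve equally well, so this is an illustration of the wrench reduction rather than a general solver.

```python
# Reducing a force system to a single force with zero associated torque.
# If the net torque about the origin T0 is perpendicular to the resultant F,
# then R = (F x T0)/|F|^2 satisfies R x F = T0, so the torque about R vanishes.
# Example numbers are arbitrary planar forces (for which T0 is always along z,
# hence perpendicular to F whenever F is nonzero).
import numpy as np

forces = np.array([[2.0, 0.0, 0.0],
                   [0.0, 3.0, 0.0]])
points = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]])

F = forces.sum(axis=0)
T0 = np.cross(points, forces).sum(axis=0)        # net torque about the origin

if abs(np.dot(F, T0)) < 1e-12 and np.linalg.norm(F) > 0:
    R = np.cross(F, T0) / np.dot(F, F)           # one torque-free point of application
    residual = np.cross(points - R, forces).sum(axis=0)
    print("torque-free point R =", R, "residual torque =", residual)
else:
    print("No torque-free resultant: the system reduces to a wrench (force plus parallel torque).")
```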
Resultant force
[ "Physics", "Mathematics" ]
1,177
[ "Physical phenomena", "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Motion (physics)", "Dynamics (mechanics)", "Wikipedia categories named after physical quantities", "Matter" ]
460,322
https://en.wikipedia.org/wiki/Nuclear%20reaction
In nuclear physics and nuclear chemistry, a nuclear reaction is a process in which two nuclei, or a nucleus and an external subatomic particle, collide to produce one or more new nuclides. Thus, a nuclear reaction must cause a transformation of at least one nuclide to another. If a nucleus interacts with another nucleus or particle, they then separate without changing the nature of any nuclide, the process is simply referred to as a type of nuclear scattering, rather than a nuclear reaction. In principle, a reaction can involve more than two particles colliding, but because the probability of three or more nuclei to meet at the same time at the same place is much less than for two nuclei, such an event is exceptionally rare (see triple alpha process for an example very close to a three-body nuclear reaction). The term "nuclear reaction" may refer either to a change in a nuclide induced by collision with another particle or to a spontaneous change of a nuclide without collision. Natural nuclear reactions occur in the interaction between cosmic rays and matter, and nuclear reactions can be employed artificially to obtain nuclear energy, at an adjustable rate, on-demand. Nuclear chain reactions in fissionable materials produce induced nuclear fission. Various nuclear fusion reactions of light elements power the energy production of the Sun and stars. History In 1919, Ernest Rutherford was able to accomplish transmutation of nitrogen into oxygen at the University of Manchester, using alpha particles directed at nitrogen 14N + α → 17O + p.  This was the first observation of an induced nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. Eventually, in 1932 at Cambridge University, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues John Cockcroft and Ernest Walton, who used artificially accelerated protons against lithium-7, to split the nucleus into two alpha particles. The feat was popularly known as "splitting the atom", although it was not the modern nuclear fission reaction later (in 1938) discovered in heavy elements by the German scientists Otto Hahn, Lise Meitner, and Fritz Strassmann. Nuclear reaction equations Nuclear reactions may be shown in a form similar to chemical equations, for which invariant mass must balance for each side of the equation, and in which transformations of particles must follow certain conservation laws, such as conservation of charge and baryon number (total atomic mass number). An example of this notation follows: To balance the equation above for mass, charge and mass number, the second nucleus to the right must have atomic number 2 and mass number 4; it is therefore also helium-4. The complete equation therefore reads: or more simply: Instead of using the full equations in the style above, in many situations a compact notation is used to describe nuclear reactions. This style of the form A(b,c)D is equivalent to A + b producing c + D. Common light particles are often abbreviated in this shorthand, typically p for proton, n for neutron, d for deuteron, α representing an alpha particle or helium-4, β for beta particle or electron, γ for gamma photon, etc. The reaction above would be written as 6Li(d,α)α. Energy conservation Kinetic energy may be released during the course of a reaction (exothermic reaction) or kinetic energy may have to be supplied for the reaction to take place (endothermic reaction). 
This can be calculated by reference to a table of very accurate particle rest masses, as follows: according to the reference tables, the nucleus has a standard atomic weight of 6.015 atomic mass units (abbreviated u), the deuterium has 2.014 u, and the helium-4 nucleus has 4.0026 u. Thus: the sum of the rest mass of the individual nuclei = 6.015 + 2.014 = 8.029 u; the total rest mass on the two helium-nuclei = 2 × 4.0026 = 8.0052 u; missing rest mass = 8.029 – 8.0052 = 0.0238 atomic mass units. In a nuclear reaction, the total (relativistic) energy is conserved. The "missing" rest mass must therefore reappear as kinetic energy released in the reaction; its source is the nuclear binding energy. Using Einstein's mass-energy equivalence formula E = mc2, the amount of energy released can be determined. We first need the energy equivalent of one atomic mass unit: Hence, the energy released is 0.0238 × 931 MeV = 22.2 MeV. Expressed differently: the mass is reduced by 0.3%, corresponding to 0.3% of 90 PJ/kg is 270 TJ/kg. This is a large amount of energy for a nuclear reaction; the amount is so high because the binding energy per nucleon of the helium-4 nucleus is unusually high because the He-4 nucleus is "doubly magic". (The He-4 nucleus is unusually stable and tightly bound for the same reason that the helium atom is inert: each pair of protons and neutrons in He-4 occupies a filled 1s nuclear orbital in the same way that the pair of electrons in the helium atom occupy a filled 1s electron orbital). Consequently, alpha particles appear frequently on the right-hand side of nuclear reactions. The energy released in a nuclear reaction can appear mainly in one of three ways: kinetic energy of the product particles (fraction of the kinetic energy of the charged nuclear reaction products can be directly converted into electrostatic energy); emission of very high energy photons, called gamma rays; some energy may remain in the nucleus, as a metastable energy level. When the product nucleus is metastable, this is indicated by placing an asterisk ("*") next to its atomic number. This energy is eventually released through nuclear decay. A small amount of energy may also emerge in the form of X-rays. Generally, the product nucleus has a different atomic number, and thus the configuration of its electron shells is wrong. As the electrons rearrange themselves and drop to lower energy levels, internal transition X-rays (X-rays with precisely defined emission lines) may be emitted. Q-value and energy balance In writing down the reaction equation, in a way analogous to a chemical equation, one may, in addition, give the reaction energy on the right side: For the particular case discussed above, the reaction energy has already been calculated as Q = 22.2 MeV. Hence: The reaction energy (the "Q-value") is positive for exothermal reactions and negative for endothermal reactions, opposite to the similar expression in chemistry. On the one hand, it is the difference between the sums of kinetic energies on the final side and on the initial side. But on the other hand, it is also the difference between the nuclear rest masses on the initial side and on the final side (in this way, we have calculated the Q-value above). Reaction rates If the reaction equation is balanced, that does not mean that the reaction really occurs. The rate at which reactions occur depends on the energy and the flux of the incident particles, and the reaction cross section. 
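The worked example above can be reproduced mechanically: first check that charge and mass number balance, then convert the mass defect into energy. The sketch below uses exactly the rounded masses quoted in the text (6.015 u, 2.014 u, 4.0026 u) and the conversion 1 u ≈ 931.5 MeV/c²; it restates the calculation already given rather than introducing independent data.

```python
# Balance check and Q-value for 6Li + d -> 2 alpha, using the rounded values
# quoted in the text above.  Species are (Z, A, mass in atomic mass units u).

SPECIES = {
    "Li-6": (3, 6, 6.015),
    "d":    (1, 2, 2.014),
    "He-4": (2, 4, 4.0026),
}
U_TO_MEV = 931.5          # MeV per atomic mass unit (rounded)

reactants = ["Li-6", "d"]
products = ["He-4", "He-4"]

def totals(side):
    z = sum(SPECIES[s][0] for s in side)
    a = sum(SPECIES[s][1] for s in side)
    m = sum(SPECIES[s][2] for s in side)
    return z, a, m

(z_in, a_in, m_in), (z_out, a_out, m_out) = totals(reactants), totals(products)
assert (z_in, a_in) == (z_out, a_out), "charge or mass number not conserved"

q_value = (m_in - m_out) * U_TO_MEV        # positive => exothermic
print(f"mass defect = {m_in - m_out:.4f} u, Q ≈ {q_value:.1f} MeV")   # ≈ 0.0238 u, ≈ 22.2 MeV
```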
An example of a large repository of reaction rates is the REACLIB database, as maintained by the Joint Institute for Nuclear Astrophysics. Charged vs. uncharged particles In the initial collision which begins the reaction, the particles must approach closely enough so that the short-range strong force can affect them. As most common nuclear particles are positively charged, this means they must overcome considerable electrostatic repulsion before the reaction can begin. Even if the target nucleus is part of a neutral atom, the other particle must penetrate well beyond the electron cloud and closely approach the nucleus, which is positively charged. Thus, such particles must be first accelerated to high energy, for example by: particle accelerators; nuclear decay (alpha particles are the main type of interest here since beta and gamma rays are rarely involved in nuclear reactions); very high temperatures, on the order of millions of degrees, producing thermonuclear reactions; cosmic rays. Also, since the force of repulsion is proportional to the product of the two charges, reactions between heavy nuclei are rarer, and require higher initiating energy, than those between a heavy and light nucleus; while reactions between two light nuclei are the most common ones. Neutrons, on the other hand, have no electric charge to cause repulsion, and are able to initiate a nuclear reaction at very low energies. In fact, at extremely low particle energies (corresponding, say, to thermal equilibrium at room temperature), the neutron's de Broglie wavelength is greatly increased, possibly greatly increasing its capture cross-section, at energies close to resonances of the nuclei involved. Thus low-energy neutrons may be even more reactive than high-energy neutrons. Notable types While the number of possible nuclear reactions is immense, there are several types that are more common, or otherwise notable. Some examples include: Fusion reactions – two light nuclei join to form a heavier one, with additional particles (usually protons or neutrons) emitted subsequently. Spallation – a nucleus is hit by a particle with sufficient energy and momentum to knock out several small fragments or smash it into many fragments. Induced gamma emission belongs to a class in which only photons were involved in creating and destroying states of nuclear excitation. Fission reactions – a very heavy nucleus, after absorbing additional light particles (usually neutrons), splits into two or sometimes three pieces. This is an induced nuclear reaction. Spontaneous fission, which occurs without assistance of a neutron, is usually not considered a nuclear reaction. At most, it is not an induced nuclear reaction. Direct reactions An intermediate energy projectile transfers energy or picks up or loses nucleons to the nucleus in a single quick (10−21 second) event. Energy and momentum transfer are relatively small. These are particularly useful in experimental nuclear physics, because the reaction mechanisms are often simple enough to calculate with sufficient accuracy to probe the structure of the target nucleus. Inelastic scattering Only energy and momentum are transferred. (p,p') tests differences between nuclear states. (α,α') measures nuclear surface shapes and sizes. Since α particles that hit the nucleus react more violently, elastic and shallow inelastic α scattering are sensitive to the shapes and sizes of the targets, like light scattered from a small black object. (e,e') is useful for probing the interior structure. 
Since electrons interact less strongly than do protons and neutrons, they reach to the centers of the targets and their wave functions are less distorted by passing through the nucleus. Charge-exchange reactions Energy and charge are transferred between projectile and target. Some examples of this kind of reactions are: (p,n) (3He,t) Nucleon transfer reactions Usually at moderately low energy, one or more nucleons are transferred between the projectile and target. These are useful in studying outer shell structure of nuclei. Transfer reactions can occur: from the projectile to the target - stripping reactions from the target to the projectile - pick-up reactions Examples: (α,n) and (α,p) reactions. Some of the earliest nuclear reactions studied involved an alpha particle produced by alpha decay, knocking a nucleon from a target nucleus. (d,n) and (d,p) reactions. A deuteron beam impinges on a target; the target nuclei absorb either the neutron or proton from the deuteron. The deuteron is so loosely bound that this is almost the same as proton or neutron capture. A compound nucleus may be formed, leading to additional neutrons being emitted more slowly. (d,n) reactions are used to generate energetic neutrons. The strangeness exchange reaction (K, π) has been used to study hypernuclei. The reaction 14N(α,p)17O performed by Rutherford in 1917 (reported 1919), is generally regarded as the first nuclear transmutation experiment. Reactions with neutrons Reactions with neutrons are important in nuclear reactors and nuclear weapons. While the best-known neutron reactions are neutron scattering, neutron capture, and nuclear fission, for some light nuclei (especially odd-odd nuclei) the most probable reaction with a thermal neutron is a transfer reaction: Some reactions are only possible with fast neutrons: (n,2n) reactions produce small amounts of protactinium-231 and uranium-232 in the thorium cycle which is otherwise relatively free of highly radioactive actinide products. 9Be + n → 2α + 2n can contribute some additional neutrons in the beryllium neutron reflector of a nuclear weapon. 7Li + n → T + α + n unexpectedly contributed additional yield in the Bravo, Romeo and Yankee shots of Operation Castle, the three highest-yield nuclear tests conducted by the U.S. Compound nuclear reactions Either a low-energy projectile is absorbed or a higher energy particle transfers energy to the nucleus, leaving it with too much energy to be fully bound together. On a time scale of about 10−19 seconds, particles, usually neutrons, are "boiled" off. That is, it remains together until enough energy happens to be concentrated in one neutron to escape the mutual attraction. The excited quasi-bound nucleus is called a compound nucleus. Low energy (e, e' xn), (γ, xn) (the xn indicating one or more neutrons), where the gamma or virtual gamma energy is near the giant dipole resonance. These increase the need for radiation shielding around electron accelerators. See also Acoplanarity Atomic mass Atomic nucleus Atomic number CNO cycle Nuclear chain reaction Oppenheimer–Phillips process Nuclear Power References Sources Physical phenomena Nuclear chemistry Nuclear physics Nuclear fission Nuclear fusion Radioactivity
Nuclear reaction
[ "Physics", "Chemistry" ]
2,855
[ "Nuclear fission", "Physical phenomena", "Nuclear chemistry", "nan", "Nuclear physics", "Nuclear fusion", "Radioactivity" ]
460,613
https://en.wikipedia.org/wiki/Ring%20of%20integers
In mathematics, the ring of integers of an algebraic number field is the ring of all algebraic integers contained in . An algebraic integer is a root of a monic polynomial with integer coefficients: . This ring is often denoted by or . Since any integer belongs to and is an integral element of , the ring is always a subring of . The ring of integers is the simplest possible ring of integers. Namely, where is the field of rational numbers. And indeed, in algebraic number theory the elements of are often called the "rational integers" because of this. The next simplest example is the ring of Gaussian integers , consisting of complex numbers whose real and imaginary parts are integers. It is the ring of integers in the number field of Gaussian rationals, consisting of complex numbers whose real and imaginary parts are rational numbers. Like the rational integers, is a Euclidean domain. The ring of integers of an algebraic number field is the unique maximal order in the field. It is always a Dedekind domain. Properties The ring of integers is a finitely-generated -module. Indeed, it is a free -module, and thus has an integral basis, that is a basis of the -vector space  such that each element  in can be uniquely represented as with . The rank  of as a free -module is equal to the degree of  over . Examples Computational tool A useful tool for computing the integral closure of the ring of integers in an algebraic field is the discriminant. If is of degree over , and form a basis of over , set . Then, is a submodule of the spanned by . pg. 33 In fact, if is square-free, then forms an integral basis for . pg. 35 Cyclotomic extensions If is a prime,  is a th root of unity and is the corresponding cyclotomic field, then an integral basis of is given by . Quadratic extensions If is a square-free integer and is the corresponding quadratic field, then is a ring of quadratic integers and its integral basis is given by if and by if . This can be found by computing the minimal polynomial of an arbitrary element where . Multiplicative structure In a ring of integers, every element has a factorization into irreducible elements, but the ring need not have the property of unique factorization: for example, in the ring of integers , the element 6 has two essentially different factorizations into irreducibles: A ring of integers is always a Dedekind domain, and so has unique factorization of ideals into prime ideals. The units of a ring of integers is a finitely generated abelian group by Dirichlet's unit theorem. The torsion subgroup consists of the roots of unity of . A set of torsion-free generators is called a set of fundamental units. Generalization One defines the ring of integers of a non-archimedean local field as the set of all elements of with absolute value ; this is a ring because of the strong triangle inequality. If is the completion of an algebraic number field, its ring of integers is the completion of the latter's ring of integers. The ring of integers of an algebraic number field may be characterised as the elements which are integers in every non-archimedean completion. For example, the -adic integers are the ring of integers of the -adic numbers . See also Minimal polynomial (field theory) Integral closure – gives a technique for computing integral closures Notes Citations References Ring theory Algebraic number theory
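The quadratic case above can be checked computationally: for squarefree d, the integral basis is {1, √d} when d ≡ 2, 3 (mod 4) and {1, (1 + √d)/2} when d ≡ 1 (mod 4), and one can confirm that the second basis element is an algebraic integer by computing its minimal polynomial, which must be monic with integer coefficients. The sketch below uses SymPy and treats only this quadratic case; d is assumed to be a squarefree integer other than 0 and 1.

```python
# Integral basis of the ring of integers of Q(sqrt(d)) for squarefree d,
# together with a check that the second basis element really is an algebraic
# integer (monic minimal polynomial with integer coefficients).
from sympy import sqrt, Rational, minimal_polynomial, Symbol

x = Symbol("x")

def ring_of_integers_generator(d: int):
    """Return the generator g with Z-basis {1, g} of the ring of integers of Q(sqrt(d))."""
    if d % 4 == 1:
        return Rational(1, 2) * (1 + sqrt(d))   # d ≡ 1 (mod 4)
    return sqrt(d)                              # d ≡ 2, 3 (mod 4)

for d in (-1, 2, 5, -3, 10):
    g = ring_of_integers_generator(d)
    p = minimal_polynomial(g, x)                # monic with integer coefficients
    print(f"d = {d:3d}: basis {{1, {g}}}, minimal polynomial {p}")
```

Running this reproduces the familiar special cases: d = −1 gives the Gaussian integers with minimal polynomial x² + 1, and d = 5 gives the golden-ratio generator (1 + √5)/2 with minimal polynomial x² − x − 1.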
Ring of integers
[ "Mathematics" ]
713
[ "Fields of abstract algebra", "Algebraic number theory", "Ring theory", "Number theory" ]
460,637
https://en.wikipedia.org/wiki/Continuous%20linear%20extension
In functional analysis, it is often convenient to define a linear transformation on a complete, normed vector space by first defining a linear transformation on a dense subset of and then continuously extending to the whole space via the theorem below. The resulting extension remains linear and bounded, and is thus continuous, which makes it a continuous linear extension. This procedure is known as continuous linear extension. Theorem Every bounded linear transformation from a normed vector space to a complete, normed vector space can be uniquely extended to a bounded linear transformation from the completion of to In addition, the operator norm of is if and only if the norm of is This theorem is sometimes called the BLT theorem. Application Consider, for instance, the definition of the Riemann integral. A step function on a closed interval is a function of the form: where are real numbers, and denotes the indicator function of the set The space of all step functions on normed by the norm (see Lp space), is a normed vector space which we denote by Define the integral of a step function by: as a function is a bounded linear transformation from into Let denote the space of bounded, piecewise continuous functions on that are continuous from the right, along with the norm. The space is dense in so we can apply the BLT theorem to extend the linear transformation to a bounded linear transformation from to This defines the Riemann integral of all functions in ; for every The Hahn–Banach theorem The above theorem can be used to extend a bounded linear transformation to a bounded linear transformation from to if is dense in If is not dense in then the Hahn–Banach theorem may sometimes be used to show that an extension exists. However, the extension may not be unique. See also References Functional analysis Linear operators
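The application above can be illustrated numerically: the integral of a (piecewise) continuous function is the limit of the elementary integrals of step functions approximating it, which is exactly what the continuous extension produces on the closure of the step functions. The sketch below builds right-continuous step approximations on finer and finer partitions and watches their integrals converge; it is an illustration of the density-and-continuity argument, not a general-purpose integrator, and the test function is an arbitrary choice.

```python
# Illustration of the continuous-extension idea behind the Riemann integral:
# approximate f by step functions s_n (constant on n subintervals, taking the
# value at the right endpoint so the step functions are right-continuous),
# integrate the step functions exactly, and observe convergence as n grows.
import math

def step_integral(f, a, b, n):
    """Exact integral of the step function built from f on n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + (k + 1) * h) * h for k in range(n))   # value at right endpoint

f = math.sin
a, b = 0.0, math.pi          # the exact integral of sin on [0, pi] is 2

for n in (4, 16, 64, 256, 1024):
    approx = step_integral(f, a, b, n)
    print(f"n = {n:5d}: integral of step approximation ≈ {approx:.6f}")
```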
Continuous linear extension
[ "Mathematics" ]
354
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Linear operators", "Mathematical relations" ]
460,642
https://en.wikipedia.org/wiki/Equicontinuity
In mathematical analysis, a family of functions is equicontinuous if all the functions are continuous and they have equal variation over a given neighbourhood, in a precise sense described herein. In particular, the concept applies to countable families, and thus sequences of functions. Equicontinuity appears in the formulation of Ascoli's theorem, which states that a subset of C(X), the space of continuous functions on a compact Hausdorff space X, is compact if and only if it is closed, pointwise bounded and equicontinuous. As a corollary, a sequence in C(X) is uniformly convergent if and only if it is equicontinuous and converges pointwise to a function (not necessarily continuous a-priori). In particular, the limit of an equicontinuous pointwise convergent sequence of continuous functions fn on either metric space or locally compact space is continuous. If, in addition, fn are holomorphic, then the limit is also holomorphic. The uniform boundedness principle states that a pointwise bounded family of continuous linear operators between Banach spaces is equicontinuous. Equicontinuity between metric spaces Let X and Y be two metric spaces, and F a family of functions from X to Y. We shall denote by d the respective metrics of these spaces. The family F is equicontinuous at a point x0 ∈ X if for every ε > 0, there exists a δ > 0 such that d(ƒ(x0), ƒ(x)) < ε for all ƒ ∈ F and all x such that d(x0, x) < δ. The family is pointwise equicontinuous if it is equicontinuous at each point of X. The family F is uniformly equicontinuous if for every ε > 0, there exists a δ > 0 such that d(ƒ(x1), ƒ(x2)) < ε for all ƒ ∈ F and all x1, x2 ∈ X such that d(x1, x2) < δ. For comparison, the statement 'all functions ƒ in F are continuous' means that for every ε > 0, every ƒ ∈ F, and every x0 ∈ X, there exists a δ > 0 such that d(ƒ(x0), ƒ(x)) < ε for all x ∈ X such that d(x0, x) < δ. For continuity, δ may depend on ε, ƒ, and x0. For uniform continuity, δ may depend on ε and ƒ. For pointwise equicontinuity, δ may depend on ε and x0. For uniform equicontinuity, δ may depend only on ε. More generally, when X is a topological space, a set F of functions from X to Y is said to be equicontinuous at x if for every ε > 0, x has a neighborhood Ux such that for all and ƒ ∈ F. This definition usually appears in the context of topological vector spaces. When X is compact, a set is uniformly equicontinuous if and only if it is equicontinuous at every point, for essentially the same reason as that uniform continuity and continuity coincide on compact spaces. Used on its own, the term "equicontinuity" may refer to either the pointwise or uniform notion, depending on the context. On a compact space, these notions coincide. Some basic properties follow immediately from the definition. Every finite set of continuous functions is equicontinuous. The closure of an equicontinuous set is again equicontinuous. Every member of a uniformly equicontinuous set of functions is uniformly continuous, and every finite set of uniformly continuous functions is uniformly equicontinuous. Examples A set of functions with a common Lipschitz constant is (uniformly) equicontinuous. In particular, this is the case if the set consists of functions with derivatives bounded by the same constant. Uniform boundedness principle gives a sufficient condition for a set of continuous linear operators to be equicontinuous. A family of iterates of an analytic function is equicontinuous on the Fatou set. 
Counterexamples The sequence of functions fn(x) = arctan(nx), is not equicontinuous because the definition is violated at x0=0. Equicontinuity of maps valued in topological groups Suppose that is a topological space and is an additive topological group (i.e. a group endowed with a topology making its operations continuous). Topological vector spaces are prominent examples of topological groups and every topological group has an associated canonical uniformity. Definition: A family of maps from into is said to be equicontinuous at if for every neighborhood of in , there exists some neighborhood of in such that for every . We say that is equicontinuous if it is equicontinuous at every point of . Note that if is equicontinuous at a point then every map in is continuous at the point. Clearly, every finite set of continuous maps from into is equicontinuous. Equicontinuous linear maps Because every topological vector space (TVS) is a topological group, the definition of an equicontinuous family of maps given for topological groups transfers to TVSs without change. Characterization of equicontinuous linear maps A family of maps of the form between two topological vector spaces is said to be if for every neighborhood of the origin in there exists some neighborhood of the origin in such that for all If is a family of maps and is a set then let With notation, if and are sets then for all if and only if Let and be topological vector spaces (TVSs) and be a family of linear operators from into Then the following are equivalent: is equicontinuous; is equicontinuous at every point of is equicontinuous at some point of is equicontinuous at the origin. that is, for every neighborhood of the origin in there exists a neighborhood of the origin in such that (or equivalently, for every ). for every neighborhood of the origin in is a neighborhood of the origin in the closure of in is equicontinuous. denotes endowed with the topology of point-wise convergence. the balanced hull of is equicontinuous. while if is locally convex then this list may be extended to include: the convex hull of is equicontinuous. the convex balanced hull of is equicontinuous. while if and are locally convex then this list may be extended to include: for every continuous seminorm on there exists a continuous seminorm on such that for all Here, means that for all while if is barreled and is locally convex then this list may be extended to include: is bounded in ; is bounded in denotes endowed with the topology of bounded convergence (that is, uniform convergence on bounded subsets of while if and are Banach spaces then this list may be extended to include: (that is, is uniformly bounded in the operator norm). Characterization of equicontinuous linear functionals Let be a topological vector space (TVS) over the field with continuous dual space A family of linear functionals on is said to be if for every neighborhood of the origin in there exists some neighborhood of the origin in such that for all For any subset the following are equivalent: is equicontinuous. is equicontinuous at the origin. is equicontinuous at some point of is contained in the polar of some neighborhood of the origin in the (pre)polar of is a neighborhood of the origin in the weak* closure of in is equicontinuous. the balanced hull of is equicontinuous. the convex hull of is equicontinuous. the convex balanced hull of is equicontinuous. 
while if is normed then this list may be extended to include: is a strongly bounded subset of while if is a barreled space then this list may be extended to include: is relatively compact in the weak* topology on is weak* bounded (that is, is bounded in ). is bounded in the topology of bounded convergence (that is, is bounded in ). Properties of equicontinuous linear maps The uniform boundedness principle (also known as the Banach–Steinhaus theorem) states that a set of linear maps between Banach spaces is equicontinuous if it is pointwise bounded; that is, for each The result can be generalized to a case when is locally convex and is a barreled space. Properties of equicontinuous linear functionals Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of is weak-* compact; thus that every equicontinuous subset is weak-* relatively compact. If is any locally convex TVS, then the family of all barrels in and the family of all subsets of that are convex, balanced, closed, and bounded in correspond to each other by polarity (with respect to ). It follows that a locally convex TVS is barreled if and only if every bounded subset of is equicontinuous. Equicontinuity and uniform convergence Let X be a compact Hausdorff space, and equip C(X) with the uniform norm, thus making C(X) a Banach space, hence a metric space. Then Arzelà–Ascoli theorem states that a subset of C(X) is compact if and only if it is closed, uniformly bounded and equicontinuous. This is analogous to the Heine–Borel theorem, which states that subsets of Rn are compact if and only if they are closed and bounded. As a corollary, every uniformly bounded equicontinuous sequence in C(X) contains a subsequence that converges uniformly to a continuous function on X. In view of Arzelà–Ascoli theorem, a sequence in C(X) converges uniformly if and only if it is equicontinuous and converges pointwise. The hypothesis of the statement can be weakened a bit: a sequence in C(X) converges uniformly if it is equicontinuous and converges pointwise on a dense subset to some function on X (not assumed continuous). This weaker version is typically used to prove Arzelà–Ascoli theorem for separable compact spaces. Another consequence is that the limit of an equicontinuous pointwise convergent sequence of continuous functions on a metric space, or on a locally compact space, is continuous. (See below for an example.) In the above, the hypothesis of compactness of X  cannot be relaxed. To see that, consider a compactly supported continuous function g on R with g(0) = 1, and consider the equicontinuous sequence of functions on R defined by ƒn(x) = . Then, ƒn converges pointwise to 0 but does not converge uniformly to 0. This criterion for uniform convergence is often useful in real and complex analysis. Suppose we are given a sequence of continuous functions that converges pointwise on some open subset G of Rn. As noted above, it actually converges uniformly on a compact subset of G if it is equicontinuous on the compact set. In practice, showing the equicontinuity is often not so difficult. For example, if the sequence consists of differentiable functions or functions with some regularity (e.g., the functions are solutions of a differential equation), then the mean value theorem or some other kinds of estimates can be used to show the sequence is equicontinuous. It then follows that the limit of the sequence is continuous on every compact subset of G; thus, continuous on G. 
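The contrast between the Lipschitz criterion from the Examples section and the counterexample fn(x) = arctan(nx) can be probed numerically. The sketch below fixes ε = 0.5 and shows that, however small δ is taken, a suitable member of the arctan family already moves by more than ε inside (−δ, δ), whereas for gn(x) = sin(nx)/n the common Lipschitz constant 1 caps the movement at δ uniformly in n; this is a finite numerical illustration of the definitions, not a proof.

```python
# Numerical probe of the definitions at x0 = 0 for two families:
#   f_n(x) = arctan(n x)   -- the counterexample above: not equicontinuous at 0
#   g_n(x) = sin(n x) / n  -- common Lipschitz constant 1, hence (uniformly) equicontinuous
# For eps = 0.5 and any delta, some member of the arctan family already moves by more
# than eps inside (-delta, delta), while |g_n(x) - g_n(0)| <= |x| holds for every n at once.
import math

EPS = 0.5

for delta in (1e-1, 1e-3, 1e-5):
    n = math.ceil(2.0 / delta)     # witness index chosen so that n * delta >= 2
    x = 0.9 * delta                # a point with |x - 0| < delta
    jump = abs(math.atan(n * x) - math.atan(0.0))
    print(f"delta = {delta:g}: n = {n}: |arctan(n x) - arctan(0)| = {jump:.2f} > {EPS}; "
          f"sup_n |sin(n x)/n - 0| <= {x:g} < {EPS}")
```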
A similar argument can be made when the functions are holomorphic. One can use, for instance, Cauchy's estimate to show the equicontinuity (on a compact subset) and conclude that the limit is holomorphic. Note that the equicontinuity is essential here. For example, ƒn(x) = converges to a multiple of the discontinuous sign function. Generalizations Equicontinuity in topological spaces The most general scenario in which equicontinuity can be defined is for topological spaces whereas uniform equicontinuity requires the filter of neighbourhoods of one point to be somehow comparable with the filter of neighbourhood of another point. The latter is most generally done via a uniform structure, giving a uniform space. Appropriate definitions in these cases are as follows: A set A of functions continuous between two topological spaces X and Y is topologically equicontinuous at the points x ∈ X and y ∈ Y if for any open set O about y, there are neighborhoods U of x and V of y such that for every f ∈ A, if the intersection of f[U] and V is nonempty, f[U] ⊆ O. Then A is said to be topologically equicontinuous at x ∈ X if it is topologically equicontinuous at x and y for each y ∈ Y. Finally, A is equicontinuous if it is equicontinuous at x for all points x ∈ X. A set A of continuous functions between two uniform spaces X and Y is uniformly equicontinuous if for every element W of the uniformity on Y, the set is a member of the uniformity on X Introduction to uniform spaces We now briefly describe the basic idea underlying uniformities. The uniformity is a non-empty collection of subsets of where, among many other properties, every , contains the diagonal of (i.e. ). Every element of is called an entourage. Uniformities generalize the idea (taken from metric spaces) of points that are "-close" (for ), meaning that their distance is < . To clarify this, suppose that is a metric space (so the diagonal of is the set ) For any , let denote the set of all pairs of points that are -close. Note that if we were to "forget" that existed then, for any , we would still be able to determine whether or not two points of are -close by using only the sets . In this way, the sets encapsulate all the information necessary to define things such as uniform continuity and uniform convergence without needing any metric. Axiomatizing the most basic properties of these sets leads to the definition of a uniformity. Indeed, the sets generate the uniformity that is canonically associated with the metric space . The benefit of this generalization is that we may now extend some important definitions that make sense for metric spaces (e.g. completeness) to a broader category of topological spaces. In particular, to topological groups and topological vector spaces. A weaker concept is that of even continuity A set A of continuous functions between two topological spaces X and Y is said to be evenly continuous at x ∈ X and y ∈ Y if given any open set O containing y there are neighborhoods U of x and V of y such that f[U] ⊆ O whenever f(x) ∈ V. It is evenly continuous at x if it is evenly continuous at x and y for every y ∈ Y, and evenly continuous if it is evenly continuous at x for every x ∈ X. Stochastic equicontinuity Stochastic equicontinuity is a version of equicontinuity used in the context of sequences of functions of random variables, and their convergence. See also - an analogue of a continuous function in discrete spaces. Notes References . . Mathematical analysis Theory of continuous functions
Equicontinuity
[ "Mathematics" ]
3,276
[ "Theory of continuous functions", "Mathematical analysis", "Topology" ]
460,997
https://en.wikipedia.org/wiki/Backdraft
A backdraft (North American English), backdraught (British English) or smoke explosion is the abrupt burning of superheated gases in a fire caused when oxygen rapidly enters a hot, oxygen-depleted environment; for example, when a window or door to an enclosed space is opened or broken. Backdrafts are typically seen as a blast of smoke and/or flame out of an opening of a building. Backdrafts present a serious threat to firefighters. There is some debate concerning whether backdrafts should be considered a type of flashover. Burning When material is heated enough, it begins to break down into smaller compounds, including flammable or even explosive gas, typically hydrocarbons. This is called pyrolysis, and does not require oxygen. If oxygen is also provided, then the hydrocarbons can combust, starting a fire. If material undergoing pyrolysis is later given sufficient oxygen, the hydrocarbons will ignite, and therefore, combustion takes place. Cause A backdraft can occur when a compartment fire has little or no ventilation. Due to this, little or no oxygen can flow into the compartment. Then, because fires reduce oxygen, the oxygen concentration decreases. When the oxygen concentration becomes too low to support combustion, some or all of the combustion switches to pyrolysis. However, the hydrocarbons and smoke (primarily particulate matter) remain at a temperature hot enough to auto-ignite. If oxygen is then re-introduced to the compartment, e.g. by opening a door or window to a closed room, while the gasses are still hot enough to auto-ignite, combustion will restart, often abruptly or even explosively, as the gasses are heated by the combustion and expand rapidly because of the rapidly increasing temperature, combined with the energy released from combustion. The colour and movement of smoke is used by firefighters to infer fire conditions, including the risk of backdraft. Characteristic warning signs of a backdraft include yellow or brown smoke, smoke which exits small holes in puffs (a sort of breathing effect) and is often found around the edges of doors and windows, and windows which appear brown or black when viewed from the exterior due to soot from incomplete combustion. This is an indication that the room lacks enough oxygen to permit oxidation of the soot particles. Firefighters often look to see if there is soot on the inside of windows and in any cracks in the window (caused e.g. by the heat). The windows may also have a slight vibration due to varying pressure within the compartment due to intermittent combustion. If firefighters discover a room sucking air into itself, for example through a crack, they generally evacuate immediately, because this is a strong indication that a backdraft is imminent. Due to pressure differences, puffs of smoke are sometimes drawn back into the enclosed space from which they emanated, which is how the term backdraft originated. Backdrafts are very dangerous, often surprising even experienced firefighters. The most common tactic used by firefighters to defuse a potential backdraft is to ventilate a room from its highest point, allowing the heat and smoke to escape without igniting. Common signs of imminent backdraft include a sudden inrush of air upon creating an opening into a closed compartment, no visible signs of flame in a hot compartment (fire above its upper flammability limit), "pulsing" smoke plumes from openings, and auto-ignition of hot gases at openings as they mix with oxygen in the surrounding air. 
Backdrafts and flashovers ISO 13943 broadly defines flashover as a "transition to a state of total surface involvement in a fire of combustible materials within an enclosure." This definition embraces several different scenarios and includes backdrafts, but there is considerable disagreement about categorizing backdrafts as flashovers. In common usage, the term flashover describes the near-simultaneous ignition of material caused by heat attaining the autoignition temperature of the combustible material and gases in an enclosure. Flashovers according to this narrower definition, i.e. those caused by rising temperatures, would not be considered backdrafts since backdrafts are caused by the introduction of oxygen into an enclosed space with conditions already suitable for ignition, and are thus caused by chemical change. In popular culture Backdrafts were publicized by the 1991 movie Backdraft, in which a serial arsonist in Chicago uses them as a means of assassinating conspirators in a scam. In the film adaptation of Stephen King's 1408, the protagonist Mike Enslin induces one as a last-ditch effort to kill the room. The term is also used and is the title of a scene in the 2012 video game Root Double: Before Crime * After Days. References External links A backdraft (still image and video) Slow Motion Backdraft video White Smoke Warning Daniel's Block Fire-BACKDRAFT Combustion Fire Fire protection Firefighting Public safety Thermodynamics
Backdraft
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
1,038
[ "Building engineering", "Combustion", "Thermodynamics", "Fire protection", "Fire", "Dynamical systems" ]
461,227
https://en.wikipedia.org/wiki/Superconducting%20magnet
A superconducting magnet is an electromagnet made from coils of superconducting wire. They must be cooled to cryogenic temperatures during operation. In its superconducting state the wire has no electrical resistance and therefore can conduct much larger electric currents than ordinary wire, creating intense magnetic fields. Superconducting magnets can produce stronger magnetic fields than all but the strongest non-superconducting electromagnets, and large superconducting magnets can be cheaper to operate because no energy is dissipated as heat in the windings. They are used in MRI instruments in hospitals, and in scientific equipment such as NMR spectrometers, mass spectrometers, fusion reactors and particle accelerators. They are also used for levitation, guidance and propulsion in a magnetic levitation (maglev) railway system being constructed in Japan. Construction Cooling During operation, the magnet windings must be cooled below their critical temperature, the temperature at which the winding material changes from the normal resistive state and becomes a superconductor, which is in the cryogenic range far below room temperature. The windings are typically cooled to temperatures significantly below their critical temperature, because the lower the temperature, the better superconductive windings work—the higher the currents and magnetic fields they can stand without returning to their non-superconductive state. Two types of cooling systems are commonly used to maintain magnet windings at temperatures sufficient to maintain superconductivity: Liquid-cooled Liquid helium is used as a coolant for many superconductive windings. It has a boiling point of 4.2 K, far below the critical temperature of most winding materials. The magnet and coolant are contained in a thermally insulated container (dewar) called a cryostat. To keep the helium from boiling away, the cryostat is usually constructed with an outer jacket containing (significantly cheaper) liquid nitrogen at 77 K. Alternatively, a thermal shield made of conductive material and maintained in 40 K – 60 K temperature range, cooled by conductive connections to the cryocooler cold head, is placed around the helium-filled vessel to keep the heat input to the latter at acceptable level. One of the goals of the search for high temperature superconductors is to build magnets that can be cooled by liquid nitrogen alone. At temperatures above about 20 K cooling can be achieved without boiling off cryogenic liquids. Mechanical cooling Because of increasing cost and the dwindling availability of liquid helium, many superconducting systems are cooled using two stage mechanical refrigeration. In general two types of mechanical cryocoolers are employed which have sufficient cooling power to maintain magnets below their critical temperature. The Gifford–McMahon cryocooler has been commercially available since the 1960s and has found widespread application. The G-M regenerator cycle in a cryocooler operates using a piston type displacer and heat exchanger. Alternatively, 1999 marked the first commercial application using a pulse tube cryocooler. This design of cryocooler has become increasingly common due to low vibration and long service interval as pulse tube designs use an acoustic process in lieu of mechanical displacement. In a typical two-stage refrigerator, the first stage will offer higher cooling capacity but at higher temperature (≈ 77 K) with the second stage reaching ≈ 4.2 K and <  of cooling power. 
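The liquid-helium consumption implied by the cooling arrangements above can be estimated from the cryostat's static heat leak and the latent heat of vaporisation of helium. The property values below are rounded handbook figures and the heat leaks are assumed for illustration, so the result is an order-of-magnitude sketch rather than a specification.

```python
# Order-of-magnitude boil-off of liquid helium from a cryostat's static heat leak.
# Rounded property values; the heat-leak figures are assumed for illustration.
LATENT_HEAT_J_PER_KG = 20.7e3      # latent heat of vaporisation of He at 4.2 K
LIQUID_DENSITY_KG_PER_M3 = 125.0   # density of liquid helium

def boiloff_litres_per_day(heat_leak_w: float) -> float:
    kg_per_s = heat_leak_w / LATENT_HEAT_J_PER_KG
    m3_per_day = kg_per_s * 86400.0 / LIQUID_DENSITY_KG_PER_M3
    return m3_per_day * 1000.0

for q in (0.1, 0.5, 1.0):          # W, assumed static heat leaks
    print(f"heat leak {q:.1f} W -> ≈ {boiloff_litres_per_day(q):.1f} L of liquid He per day")
```

A heat leak of only one watt already boils off a few tens of litres of liquid helium per day, which is why nitrogen-jacketed or cryocooler-shielded cryostats are used to keep the load on the helium bath small.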
In use, the first stage is used primarily for ancillary cooling of the cryostat with the second stage used primarily for cooling the magnet. Coil winding materials The maximal magnetic field achievable in a superconducting magnet is limited by the field at which the winding material ceases to be superconducting, its "critical field", Hc, which for type-II superconductors is its upper critical field. Another limiting factor is the "critical current", Ic, at which the winding material also ceases to be superconducting. Advances in magnets have focused on creating better winding materials. The superconducting portions of most current magnets are composed of niobium–titanium. This material has critical temperature of and can superconduct at up to about . More expensive magnets can be made of niobium–tin (Nb3Sn). These have a Tc of 18 K. When operating at 4.2 K they are able to withstand a much higher magnetic field intensity, up to 25 T to 30 T. Unfortunately, it is far more difficult to make the required filaments from this material. This is why sometimes a combination of Nb3Sn for the high-field sections and NbTi for the lower-field sections is used. Vanadium–gallium is another material used for the high-field inserts. High-temperature superconductors (e.g. BSCCO or YBCO) may be used for high-field inserts when required magnetic fields are higher than Nb3Sn can manage. BSCCO, YBCO or magnesium diboride may also be used for current leads, conducting high currents from room temperature into the cold magnet without an accompanying large heat leak from resistive leads. Conductor structure The coil windings of a superconducting magnet are made of wires or tapes of Type II superconductors (e.g.niobium–titanium or niobium–tin). The wire or tape itself may be made of tiny filaments (about 20 micrometres thick) of superconductor in a copper matrix. The copper is needed to add mechanical stability, and to provide a low resistance path for the large currents in case the temperature rises above Tc or the current rises above Ic and superconductivity is lost. These filaments need to be this small because in this type of superconductor the current only flows in a surface layer whose thickness is limited to the London penetration depth (see Skin effect). The coil must be carefully designed to withstand (or counteract) magnetic pressure and Lorentz forces that could otherwise cause wire fracture or crushing of insulation between adjacent turns. Operation Power supply The current to the coil windings is provided by a high current, very low voltage DC power supply, since in steady state the only voltage across the magnet is due to the resistance of the feeder wires. Any change to the current through the magnet must be done very slowly, first because electrically the magnet is a large inductor and an abrupt current change will result in a large voltage spike across the windings, and more importantly because fast changes in current can cause eddy currents and mechanical stresses in the windings that can precipitate a quench (see below). So the power supply is usually microprocessor-controlled, programmed to accomplish current changes gradually, in gentle ramps. It usually takes several minutes to energize or de-energize a laboratory-sized magnet. Persistent mode An alternate operating mode used by most superconducting magnets is to short-circuit the windings with a piece of superconductor once the magnet has been energized. 
The windings become a closed superconducting loop, the power supply can be turned off, and persistent currents will flow for months, preserving the magnetic field. The advantage of this persistent mode is that stability of the magnetic field is better than is achievable with the best power supplies, and no energy is needed to power the windings. The short circuit is made by a 'persistent switch', a piece of superconductor inside the magnet connected across the winding ends, attached to a small heater. When the magnet is first turned on, the switch wire is heated above its transition temperature, so it is resistive. Since the winding itself has no resistance, no current flows through the switch wire. To go to persistent mode, the supply current is adjusted until the desired magnetic field is obtained, then the heater is turned off. The persistent switch cools to its superconducting temperature, short-circuiting the windings. Then the power supply can be turned off. The winding current, and the magnetic field, will not actually persist forever, but will decay slowly according to a normal inductive time constant (L/R): where is a small residual resistance in the superconducting windings due to joints or a phenomenon called flux motion resistance. Nearly all commercial superconducting magnets are equipped with persistent switches. Magnet quench A quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil enters the normal (resistive) state. This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two. More rarely a defect in the magnet can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the enormous current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal (this can take several seconds, depending on the size of the superconducting coil). This is accompanied by a loud bang as the energy in the magnetic field is converted to heat, and rapid boil-off of the cryogenic fluid. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces. In practice, magnets usually have safety devices to stop or limit the current when the beginning of a quench is detected. If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air. A large section of the superconducting magnets in CERN's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, necessitating the replacement of a number of magnets. In order to mitigate against potentially destructive quenches, the superconducting magnets that form the LHC are equipped with fast-ramping heaters that are activated once a quench event is detected by the complex quench protection system. As the dipole bending magnets are connected in series, each power circuit includes 154 individual magnets, and should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. 
This energy is transferred into dumps that are massive blocks of metal which heat up to several hundreds of degrees Celsius due to the resistive heating in a matter of seconds. Although undesirable, a magnet quench is a "fairly routine event" during the operation of a particle accelerator. Magnet "training" In certain cases, superconducting magnets designed for very high currents require extensive bedding in, to enable the magnets to function at their full planned currents and fields. This is known as "training" the magnet, and involves a type of material memory effect. One situation this is required in is the case of particle colliders such as CERN's Large Hadron Collider. The magnets of the LHC were planned to run at 8 TeV (2 × 4 TeV) on its first run and 14 TeV (2 × 7 TeV) on its second run, but were initially operated at a lower energy of 3.5 TeV and 6.5 TeV per beam respectively. Because of initial crystallographic defects in the material, they will initially lose their superconducting ability ("quench") at a lower level than their design current. CERN states that this is due to electromagnetic forces causing tiny movements in the magnets, which in turn cause superconductivity to be lost when operating at the high precision needed for their planned current. By repeatedly running the magnets at a lower current and then slightly increasing the current until they quench under control, the magnet will gradually both gain the required ability to withstand the higher currents of its design specification without quenches occurring, and have any such issues "shaken" out of them, until they are eventually able to operate reliably at their full planned current without experiencing quenches. History Although the idea of making electromagnets with superconducting wire was proposed by Heike Kamerlingh Onnes shortly after he discovered superconductivity in 1911, a practical superconducting electromagnet had to await the discovery of superconducting materials that could support large critical supercurrent densities in high magnetic fields. The first successful superconducting magnet was built by G.B. Yntema in 1955 using niobium wire and achieved a field of 0.7 T at 4.2 K. Then, in 1961, J.E. Kunzler, E. Buehler, F.S.L. Hsu, and J.H. Wernick made the discovery that a compound of niobium and tin could support critical-supercurrent densities greater than 100,000 amperes per square centimetre in magnetic fields of 8.8 teslas. Despite its brittle nature, niobium–tin has since proved extremely useful in supermagnets generating magnetic fields up to 20 T. The persistent switch was invented in 1960 by Dwight Adams while a postdoctoral associate at Stanford University. The second persistent switch was constructed at the University of Florida by M.S. student R.D. Lichti in 1963. It has been preserved in a showcase in the UF Physics Building. In 1962, T.G. Berlincourt and R.R. Hake discovered the high-critical-magnetic-field, high-critical-supercurrent-density properties of niobium–titanium alloys. Although niobium–titanium alloys possess less spectacular superconducting properties than niobium–tin, they are highly ductile, easily fabricated, and economical. Useful in supermagnets generating magnetic fields up to 10 teslas, niobium–titanium alloys are the most widely used supermagnet materials. 
In 1986, the discovery of high temperature superconductors by Georg Bednorz and Karl Müller energized the field, raising the possibility of magnets that could be cooled by liquid nitrogen instead of the more difficult-to-work-with helium. In 2007, a magnet with windings of YBCO achieved a world record field of . The US National Research Council has a goal of creating a 30-tesla superconducting magnet. Globally in 2014, almost six billion US dollars worth of economic activity resulted from which superconductivity is indispensable. MRI systems, most of which employ niobium–titanium, accounted for about 80% of that total. In 2016, Yoon et al. reported a 26 T no-insulation superconducting magnet that they built out of GdBa2Cu3O7–x, using a technique which was previously reported in 2013. In 2017, a YBCO magnet created by the National High Magnetic Field Laboratory (NHMFL) broke the previous world record with a strength of 32 T. This is an all superconducting user magnet, designed to last for many decades. They hold the current record as of March 2018. In 2019, a new world-record of 32.35 T with all-superconducting magnet is achieved by Institute of Electrical Engineering, Chinese Academy of Sciences (IEE, CAS). No-insulation technique for the HTS insert magnet is also used. In 2019, the NHMFL also developed a non-insulated YBCO test coil combined with a resistive magnet and broke the lab's own world record for highest continuous magnetic field for any configuration of magnet at 45.5 T. A 1.2 GHz (28.2 T) NMR magnet was achieved in 2020 using an HTS magnet. In 2022, the Hefei Institutes of Physical Science, Chinese Academy of Sciences (HFIPS, CAS) claims new world record for strongest steady magnetic field of 45.22 T reached, while the previous NHMFL 45.5 T record in 2019 was actually reached when the magnet failed immediately in a quench. Uses Superconducting magnets have a number of advantages over resistive electromagnets. They can generate much stronger magnetic fields than ferromagnetic-core electromagnets, which are limited to fields of around 2 T. The field is generally more stable, resulting in less noisy measurements. They can be smaller, and the area at the center of the magnet where the field is created is empty rather than being occupied by an iron core. Large magnets can consume much less power. In the persistent state (above), the only power the magnet consumes is that needed for refrigeration equipment. Higher fields can be achieved with cooled resistive electromagnets, as superconducting coils enter the non-superconducting state at high fields. Steady fields of over 40 T can be achieved, usually by combining a Bitter electromagnet with a superconducting magnet (often as an insert). Superconducting magnets are widely used in MRI scanners, NMR equipment, mass spectrometers, magnetic separation processes, and particle accelerators. Rail transport In Japan, after decades of research and development into superconducting maglev by Japanese National Railways and later Central Japan Railway Company (JR Central), the Japanese government gave permission to JR Central to build the Chūō Shinkansen, linking Tokyo to Nagoya and later to Osaka. Particle accelerator One of the most challenging uses of superconducting magnets is in the LHC particle accelerator. Its niobium–titanium (Nb–Ti) magnets operate at 1.9 K to allow them to run safely at 8.3 T. Each magnet stores 7 MJ. In total the magnets store . 
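As a rough illustration of the per-magnet figure quoted above, the energy stored in an inductor is E = ½LI². The sketch below is not taken from the article; the inductance and operating current are approximate published values for an LHC main dipole and are used here only for illustration, together with the 154-dipoles-per-circuit figure mentioned earlier.

```python
# Rough check of the "each magnet stores 7 MJ" figure (illustrative values only).
L_dipole = 0.099      # henries: approximate inductance of one LHC main dipole (assumed)
I_op = 11_850.0       # amperes: approximate operating current near 8.3 T (assumed)

E = 0.5 * L_dipole * I_op**2                 # stored magnetic energy, E = 1/2 * L * I^2
print(f"Stored energy per dipole: {E/1e6:.1f} MJ")          # ~7 MJ

# A powering circuit of 154 dipoles (figure quoted earlier in the quench discussion)
E_circuit = 154 * E
print(f"Energy per circuit of 154 dipoles: {E_circuit/1e9:.2f} GJ")
```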
Once or twice a day, as protons are accelerated from 450 GeV to 7 TeV, the field of the superconducting bending magnets is increased from 0.54 T to 8.3 T. Fusion reactor The central solenoid and toroidal field superconducting magnets designed for the ITER fusion reactor use niobium–tin (Nb3Sn) as a superconductor. The central solenoid coil carries a current of 46 kA and produces a magnetic field of 13.5 T. The 18 toroidal field coils at a maximum field of 11.8 T store a total energy of 41 GJ. They have been tested at a record current of 80 kA. Other lower-field ITER magnets use niobium–titanium. Most of the ITER magnets have their field varied many times per hour. Mass spectrometer One high-resolution mass spectrometer was planned to use a 21-tesla superconducting magnet. See also Fault current limiter Flux pumping References Further reading Martin N. Wilson, Superconducting Magnets (Monographs on Cryogenics), Oxford University Press, New edition (1987). Yukikazu Iwasa, Case Studies in Superconducting Magnets: Design and Operational Issues (Selected Topics in Superconductivity), Kluwer Academic / Plenum Publishers (October 1994). Habibo Brechna, Superconducting magnet systems, New York, Springer-Verlag New York, Inc., 1973. External links Making Superconducting Magnets From the National High Magnetic Field Laboratory 1986 evaluation of NbTi and Nb3Sn for particle accelerator magnets. Types of magnets Superconductivity
Superconducting magnet
[ "Physics", "Materials_science", "Engineering" ]
4,132
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
461,454
https://en.wikipedia.org/wiki/Conservative%20vector%20field
In vector calculus, a conservative vector field is a vector field that is the gradient of some function. A conservative vector field has the property that its line integral is path independent; the choice of path between two points does not change the value of the line integral. Path independence of the line integral is equivalent to the vector field under the line integral being conservative. A conservative vector field is also irrotational; in three dimensions, this means that it has vanishing curl. An irrotational vector field is necessarily conservative provided that the domain is simply connected. Conservative vector fields appear naturally in mechanics: They are vector fields representing forces of physical systems in which energy is conserved. For a conservative system, the work done in moving along a path in a configuration space depends on only the endpoints of the path, so it is possible to define potential energy that is independent of the actual path taken. Informal treatment In a two- and three-dimensional space, there is an ambiguity in taking an integral between two points as there are infinitely many paths between the two points—apart from the straight line formed between the two points, one could choose a curved path of greater length as shown in the figure. Therefore, in general, the value of the integral depends on the path taken. However, in the special case of a conservative vector field, the value of the integral is independent of the path taken, which can be thought of as a large-scale cancellation of all elements that do not have a component along the straight line between the two points. To visualize this, imagine two people climbing a cliff; one decides to scale the cliff by going vertically up it, and the second decides to walk along a winding path that is longer in length than the height of the cliff, but at only a small angle to the horizontal. Although the two hikers have taken different routes to get up to the top of the cliff, at the top, they will have both gained the same amount of gravitational potential energy. This is because a gravitational field is conservative. Intuitive explanation M. C. Escher's lithograph print Ascending and Descending illustrates a non-conservative vector field, impossibly made to appear to be the gradient of the varying height above ground (gravitational potential) as one moves along the staircase. The force field experienced by the one moving on the staircase is non-conservative in that one can return to the starting point while ascending more than one descends or vice versa, resulting in nonzero work done by gravity. On a real staircase, the height above the ground is a scalar potential field: one has to go upward exactly as much as one goes downward in order to return to the same place, in which case the work by gravity totals to zero. This suggests path-independence of work done on the staircase; equivalently, the force field experienced is conservative (see the later section: Path independence and conservative vector field). The situation depicted in the print is impossible. Definition A vector field , where is an open subset of , is said to be conservative if there exists a (continuously differentiable) scalar field on such that Here, denotes the gradient of . Since is continuously differentiable, is continuous. When the equation above holds, is called a scalar potential for . 
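As a concrete illustration of the definition (this example is not from the article), the field F(x, y) = (2xy, x²) is conservative on all of the plane, with scalar potential φ(x, y) = x²y. A short symbolic check, assuming SymPy is available:

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = x**2 * y                          # candidate scalar potential
F = sp.Matrix([2*x*y, x**2])            # the vector field F = (2xy, x^2)

grad_phi = sp.Matrix([sp.diff(phi, x), sp.diff(phi, y)])
assert sp.simplify(grad_phi - F) == sp.zeros(2, 1)   # F = grad(phi), so F is conservative

# In two dimensions the "curl" reduces to dF2/dx - dF1/dy; it vanishes for this field.
assert sp.simplify(sp.diff(F[1], x) - sp.diff(F[0], y)) == 0
print("F is the gradient of phi and has zero curl")
```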
The fundamental theorem of vector calculus states that, under some regularity conditions, any vector field can be expressed as the sum of a conservative vector field and a solenoidal field. Path independence and conservative vector field Path independence A line integral of a vector field is said to be path-independent if it depends on only two integral path endpoints regardless of which path between them is chosen: for any pair of integral paths and between a given pair of path endpoints in . The path independence is also equivalently expressed as for any piecewise smooth closed path in where the two endpoints are coincident. Two expressions are equivalent since any closed path can be made by two path; from an endpoint to another endpoint , and from to , so where is the reverse of and the last equality holds due to the path independence Conservative vector field A key property of a conservative vector field is that its integral along a path depends on only the endpoints of that path, not the particular route taken. In other words, if it is a conservative vector field, then its line integral is path-independent. Suppose that for some (continuously differentiable) scalar field over as an open subset of (so is a conservative vector field that is continuous) and is a differentiable path (i.e., it can be parameterized by a differentiable function) in with an initial point and a terminal point . Then the gradient theorem (also called fundamental theorem of calculus for line integrals) states that This holds as a consequence of the definition of a line integral, the chain rule, and the second fundamental theorem of calculus. in the line integral is an exact differential for an orthogonal coordinate system (e.g., Cartesian, cylindrical, or spherical coordinates). Since the gradient theorem is applicable for a differentiable path, the path independence of a conservative vector field over piecewise-differential curves is also proved by the proof per differentiable curve component. So far it has been proven that a conservative vector field is line integral path-independent. Conversely, if a continuous vector field is (line integral) path-independent, then it is a conservative vector field, so the following biconditional statement holds: The proof of this converse statement is the following. is a continuous vector field which line integral is path-independent. Then, let's make a function defined as over an arbitrary path between a chosen starting point and an arbitrary point . Since it is path-independent, it depends on only and regardless of which path between these points is chosen. Let's choose the path shown in the left of the right figure where a 2-dimensional Cartesian coordinate system is used. The second segment of this path is parallel to the axis so there is no change along the axis. The line integral along this path is By the path independence, its partial derivative with respect to (for to have partial derivatives, needs to be continuous.) is since and are independent to each other. Let's express as where and are unit vectors along the and axes respectively, then, since , where the last equality is from the second fundamental theorem of calculus. A similar approach for the line integral path shown in the right of the right figure results in so is proved for the 2-dimensional Cartesian coordinate system. 
This proof method can be straightforwardly expanded to a higher dimensional orthogonal coordinate system (e.g., a 3-dimensional spherical coordinate system) so the converse statement is proved. Another proof is found here as the converse of the gradient theorem. Irrotational vector fields Let (3-dimensional space), and let be a (continuously differentiable) vector field, with an open subset of . Then is called irrotational if its curl is everywhere in , i.e., if For this reason, such vector fields are sometimes referred to as curl-free vector fields or curl-less vector fields. They are also referred to as longitudinal vector fields. It is an identity of vector calculus that for any (continuously differentiable up to the 2nd derivative) scalar field on , we have Therefore, every conservative vector field in is also an irrotational vector field in . This result can be easily proved by expressing in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials). Provided that is a simply connected open space (roughly speaking, a single piece open space without a hole within it), the converse of this is also true: Every irrotational vector field in a simply connected open space is a conservative vector field in . The above statement is not true in general if is not simply connected. Let be with removing all coordinates on the -axis (so not a simply connected space), i.e., . Now, define a vector field on by Then has zero curl everywhere in ( at everywhere in ), i.e., is irrotational. However, the circulation of around the unit circle in the -plane is ; in polar coordinates, , so the integral over the unit circle is Therefore, does not have the path-independence property discussed above so is not conservative even if since where is defined is not a simply connected open space. Say again, in a simply connected open region, an irrotational vector field has the path-independence property (so as conservative). This can be proved directly by using Stokes' theorem,for any smooth oriented surface which boundary is a simple closed path . So, it is concluded that In a simply connected open region, any vector field that has the path-independence property (so it is a conservative vector field.) must also be irrotational and vice versa. Abstraction More abstractly, in the presence of a Riemannian metric, vector fields correspond to differential . The conservative vector fields correspond to the exact , that is, to the forms which are the exterior derivative of a function (scalar field) on . The irrotational vector fields correspond to the closed , that is, to the such that . As any exact form is closed, so any conservative vector field is irrotational. Conversely, all closed are exact if is simply connected. Vorticity The vorticity of a vector field can be defined by: The vorticity of an irrotational field is zero everywhere. Kelvin's circulation theorem states that a fluid that is irrotational in an inviscid flow will remain irrotational. This result can be derived from the vorticity transport equation, obtained by taking the curl of the Navier–Stokes equations. For a two-dimensional field, the vorticity acts as a measure of the local rotation of fluid elements. The vorticity does not imply anything about the global behavior of a fluid. It is possible for a fluid that travels in a straight line to have vorticity, and it is possible for a fluid that moves in a circle to be irrotational. 
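The punctured-plane example discussed in the irrotational-field section above is usually written as F = (−y, x)/(x² + y²); the explicit formula did not survive in the text, so take this as the standard choice rather than a quotation. A quick numerical check that its circulation around the unit circle is 2π, even though its curl vanishes away from the z-axis:

```python
import numpy as np

def F(x, y):
    r2 = x**2 + y**2
    return np.array([-y / r2, x / r2])   # irrotational away from the z-axis

# Line integral of F around the unit circle, parameterized by t in [0, 2*pi]
t = np.linspace(0.0, 2.0 * np.pi, 20001)
x, y = np.cos(t), np.sin(t)
dx, dy = -np.sin(t), np.cos(t)           # derivatives of the parameterization
Fx, Fy = F(x, y)
circulation = np.trapz(Fx * dx + Fy * dy, t)
print(circulation, 2 * np.pi)            # ~6.2832: nonzero, so F is not conservative here
```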
Conservative forces If the vector field associated to a force is conservative, then the force is said to be a conservative force. The most prominent examples of conservative forces are gravitational force (associated with a gravitational field) and electric force (associated with an electrostatic field). According to Newton's law of gravitation, a gravitational force acting on a mass due to a mass located at a distance from , obeys the equation where is the gravitational constant and is a unit vector pointing from toward . The force of gravity is conservative because , where is the gravitational potential energy. In other words, the gravitation field associated with the gravitational force is the gradient of the gravitation potential associated with the gravitational potential energy . It can be shown that any vector field of the form is conservative, provided that is integrable. For conservative forces, path independence can be interpreted to mean that the work done in going from a point to a point is independent of the moving path chosen (dependent on only the points and ), and that the work done in going around a simple closed loop is : The total energy of a particle moving under the influence of conservative forces is conserved, in the sense that a loss of potential energy is converted to the equal quantity of kinetic energy, or vice versa. See also Beltrami vector field Conservative force Conservative system Complex lamellar vector field Helmholtz decomposition Laplacian vector field Longitudinal and transverse vector fields Solenoidal vector field References Further reading Vector calculus Force
Conservative vector field
[ "Physics", "Mathematics" ]
2,376
[ "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Wikipedia categories named after physical quantities", "Matter" ]
461,517
https://en.wikipedia.org/wiki/CASP
Critical Assessment of Structure Prediction (CASP), sometimes called Critical Assessment of Protein Structure Prediction, is a community-wide, worldwide experiment for protein structure prediction taking place every two years since 1994. CASP provides research groups with an opportunity to objectively test their structure prediction methods and delivers an independent assessment of the state of the art in protein structure modeling to the research community and software users. Even though the primary goal of CASP is to help advance the methods of identifying protein three-dimensional structure from its amino acid sequence many view the experiment more as a "world championship" in this field of science. More than 100 research groups from all over the world participate in CASP on a regular basis and it is not uncommon for entire groups to suspend their other research for months while they focus on getting their servers ready for the experiment and on performing the detailed predictions. Selection of target proteins In order to ensure that no predictor can have prior information about a protein's structure that would put them at an advantage, it is important that the experiment be conducted in a double-blind fashion: Neither predictors nor the organizers and assessors know the structures of the target proteins at the time when predictions are made. Targets for structure prediction are either structures soon-to-be solved by X-ray crystallography or NMR spectroscopy, or structures that have just been solved (mainly by one of the structural genomics centers) and are kept on hold by the Protein Data Bank. If the given sequence is found to be related by common descent to a protein sequence of known structure (called a template), comparative protein modeling may be used to predict the tertiary structure. Templates can be found using sequence alignment methods (e.g. BLAST or HHsearch) or protein threading methods, which are better in finding distantly related templates. Otherwise, de novo protein structure prediction must be applied (e.g. Rosetta), which is much less reliable but can sometimes yield models with the correct fold (usually, for proteins less than 100-150 amino acids). Truly new folds are becoming quite rare among the targets, making that category smaller than desirable. Evaluation The primary method of evaluation is a comparison of the predicted model α-carbon positions with those in the target structure. The comparison is shown visually by cumulative plots of distances between pairs of equivalents α-carbon in the alignment of the model and the structure, such as shown in the figure (a perfect model would stay at zero all the way across), and is assigned a numerical score GDT-TS (Global Distance Test—Total Score) describing percentage of well-modeled residues in the model with respect to the target. Free modeling (template-free, or de novo) is also evaluated visually by the assessors, since the numerical scores do not work as well for finding loose resemblances in the most difficult cases. High-accuracy template-based predictions were evaluated in CASP7 by whether they worked for molecular-replacement phasing of the target crystal structure with successes followed up later, and by full-model (not just α-carbon) model quality and full-model match to the target in CASP8. 
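The GDT-TS score mentioned above is, in essence, an average over several distance cutoffs of the fraction of equivalent α-carbons that fall within that cutoff. The snippet below is a deliberately simplified sketch of that idea; the official CASP evaluation searches over many alternative superpositions and handles alignment subtleties that are ignored here, and the coordinate arrays are hypothetical inputs.

```python
import numpy as np

def gdt_ts(model_ca, target_ca, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """Simplified GDT-TS: mean, over the cutoffs, of the percentage of residues
    whose model/target alpha-carbon distance lies within the cutoff.
    Assumes model_ca and target_ca are (N, 3) arrays that are already
    superposed and aligned residue by residue."""
    dist = np.linalg.norm(model_ca - target_ca, axis=1)
    fractions = [np.mean(dist <= c) for c in cutoffs]
    return 100.0 * np.mean(fractions)

# Hypothetical example: a 100-residue model with roughly 1 angstrom of error
rng = np.random.default_rng(0)
target = rng.normal(size=(100, 3)) * 10.0
model = target + rng.normal(scale=0.8, size=(100, 3))
print(f"GDT-TS (simplified) = {gdt_ts(model, target):.1f}")
```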
Evaluation of the results is carried out in the following prediction categories: tertiary structure prediction (all CASPs) secondary structure prediction (dropped after CASP5) prediction of structure complexes (CASP2 only; a separate experiment—CAPRI—carries on this subject) residue-residue contact prediction (starting CASP4) disordered regions prediction (starting CASP5) domain boundary prediction (CASP6–CASP8) function prediction (starting CASP6) model quality assessment (starting CASP7) model refinement (starting CASP7) high-accuracy template-based prediction (starting CASP7) Tertiary structure prediction category was further subdivided into: homology modeling fold recognition (also called protein threading; note that this naming is incorrect as threading is a method) de novo structure prediction, now referred to as 'New Fold' as many methods apply evaluation, or scoring, functions that are biased by knowledge of native protein structures, such as an artificial neural network. Starting with CASP7, categories have been redefined to reflect developments in methods. The 'Template based modeling' category includes all former comparative modeling, homologous fold based models and some analogous fold based models. The 'template free modeling (FM)' category includes models of proteins with previously unseen folds and hard analogous fold based models. Due to limited numbers of template free targets (they are quite rare), in 2011 so called CASP ROLL was introduced. This continuous (rolling) CASP experiment aims at more rigorous evaluation of template free prediction methods through assessment of a larger number of targets outside of the regular CASP prediction season. Unlike LiveBench and EVA, this experiment is in the blind-prediction spirit of CASP, i.e. all the predictions are made on yet unknown structures. The CASP results are published in special supplement issues of the scientific journal Proteins, all of which are accessible through the CASP website. A lead article in each of these supplements describes specifics of the experiment while a closing article evaluates progress in the field. AlphaFold In December 2018, CASP13 made headlines when it was won by AlphaFold, an artificial intelligence program created by DeepMind. In November 2020, an improved version 2 of AlphaFold won CASP14. According to one of CASP co-founders John Moult, AlphaFold scored around 90 on a 100-point scale of prediction accuracy for moderately difficult protein targets. AlphaFold was made open source in 2021, and in CASP15 in 2022, while DeepMind did not enter, virtually all of the high-ranking teams used AlphaFold or modifications of AlphaFold. 
See also Critical Assessment of Prediction of Interactions (CAPRI) Critical Assessment of Function Annotation (CAFA) Critical Assessment of Genome Interpretation (CAGI) References External links CASP ROLL FORCASP Forum Result ranking Automated assessments for CASP15 (2022) Official ranking for servers only Official ranking for humans and servers Automated assessments for CASP14 (2020) Official ranking for servers only Official ranking for humans and servers Ranking by Zhang Lab Automated assessments for CASP13 (2018) Official ranking for servers only Official ranking for humans and servers Ranking by Zhang Lab Automated assessments for CASP12 (2016) Official ranking for servers only Official ranking for humans and servers Ranking by Zhang Lab Automated assessments for CASP11 (2014) Official ranking for servers only (126 targets) Official ranking for humans and servers (78 targets) Ranking by Zhang Lab Automated assessments for CASP10 (2012) Official ranking for servers only (127 targets) Official ranking for humans and servers (71 targets) Ranking by Zhang Lab Automated assessments for CASP9 (2010) Official ranking for servers only (147 targets) Official ranking for humans and servers (78 targets) Ranking by Grishin Lab (for server only) Ranking by Grishin Lab (for human and servers) Ranking by Zhang Lab Ranking by Cheng Lab Automated assessments for CASP8 (2008) Official ranking for servers only Official ranking for humans and servers Ranking by Zhang Lab Ranking by Grishin Lab Ranking McGuffin Lab Ranking by Cheng Lab Automated assessments for CASP7 (2006) Ranking by Livebench Ranking by Zhang Lab Bioinformatics Computational chemistry
CASP
[ "Chemistry", "Engineering", "Biology" ]
1,501
[ "Bioinformatics", "Theoretical chemistry", "Computational chemistry", "Biological engineering" ]
6,176,600
https://en.wikipedia.org/wiki/Intramolecular%20force
An intramolecular force (from Latin intra- 'within') is any force that binds together the atoms making up a molecule. Intramolecular forces are stronger than the intermolecular forces that govern the interactions between molecules. Types The classical model identifies three main types of chemical bonds — ionic, covalent, and metallic — distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted by the properties of the constituent atoms, namely electronegativity. They differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character of a bond is directly proportional to the difference in electronegativity of the bonded atoms. Ionic bond An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9 (a greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and a nonmetal, such as sodium and chlorine in NaCl. Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion. Covalent bond In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separation. Covalent bonds are generally formed between two nonmetals. There are several types of covalent bonds: in polar covalent bonds, electrons are more likely to be found around one of the two atoms, whereas in nonpolar covalent bonds, electrons are evenly shared. Homonuclear diatomic molecules are purely covalent. The polarity of a covalent bond is determined by the electronegativities of the two atoms, and thus a polar covalent bond has a dipole moment pointing from the partial positive end to the partial negative end. Polar covalent bonds represent an intermediate type in which the electrons are neither completely transferred from one atom to another nor evenly shared. Metallic bond Metallic bonds generally form within a pure metal or metal alloy. Metallic electrons are generally delocalized; the result is a large number of free electrons around positive nuclei, sometimes called an electron sea. Bond formation (Figure: comparison of the bond lengths between carbon and oxygen in a double and a triple bond.) Bonds are formed by atoms so that they are able to achieve a lower energy state. Free atoms will have more energy than a bonded atom. This is because some energy is released during bond formation, allowing the entire system to achieve a lower energy state. The bond length, or the minimum separating distance between two atoms participating in bond formation, is determined by their repulsive and attractive forces along the internuclear direction. As the two atoms get closer and closer, the positively charged nuclei repel, creating a force that attempts to push the atoms apart. As the two atoms get further apart, attractive forces work to pull them back together. Thus an equilibrium bond length is achieved, and it is a good measure of bond stability.
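One common way to make this balance of attraction and repulsion quantitative is a model potential such as the Morse potential, V(r) = De(1 − e^(−a(r − re)))², whose minimum sits at the equilibrium bond length re. The sketch below is illustrative only; the parameters are roughly those often quoted for H2 and are assumptions, not values from this article.

```python
import numpy as np

# Morse potential parameters, roughly representative of H2 (assumed, illustrative)
D_e, a, r_e = 4.75, 1.94, 0.741      # eV, 1/angstrom, angstrom

def morse(r):
    """Morse potential V(r) = D_e * (1 - exp(-a*(r - r_e)))**2, with its minimum at r = r_e."""
    return D_e * (1.0 - np.exp(-a * (r - r_e)))**2

r = np.linspace(0.3, 3.0, 10001)      # trial separations in angstrom
V = morse(r)
print(f"numerical minimum at r = {r[np.argmin(V)]:.3f} angstrom (expected ~{r_e})")

# Force is the negative slope of the potential: repulsive (positive) inside r_e,
# attractive (negative) outside it, which is what fixes the equilibrium bond length.
F = -np.gradient(V, r)
print("repulsive at 0.6 A:", F[np.searchsorted(r, 0.6)] > 0,
      " attractive at 1.2 A:", F[np.searchsorted(r, 1.2)] < 0)
```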
Biochemistry Intramolecular forces are extremely important in the field of biochemistry, where it comes into play at the most basic levels of biological structures. Intramolecular forces such as disulfide bonds give proteins and DNA their structure. Proteins derive their structure from the intramolecular forces that shape them and hold them together. The main source of structure in these molecules is the interaction between the amino acid residues that form the foundation of proteins. The interactions between residues of the same proteins forms the secondary structure of the protein, allowing for the formation of beta sheets and alpha helices, which are important structures for proteins and in the case of alpha helices, for DNA. See also Chemical bond Intermolecular force References Chemical bonding
Intramolecular force
[ "Physics", "Chemistry", "Materials_science" ]
827
[ "Chemical bonding", "Condensed matter physics", "nan" ]
6,176,811
https://en.wikipedia.org/wiki/Newman%E2%80%93Penrose%20formalism
The Newman–Penrose (NP) formalism is a set of notation developed by Ezra T. Newman and Roger Penrose for general relativity (GR). Their notation is an effort to treat general relativity in terms of spinor notation, which introduces complex forms of the usual variables used in GR. The NP formalism is itself a special case of the tetrad formalism, where the tensors of the theory are projected onto a complete vector basis at each point in spacetime. Usually this vector basis is chosen to reflect some symmetry of the spacetime, leading to simplified expressions for physical observables. In the case of the NP formalism, the vector basis chosen is a null tetrad: a set of four null vectors—two real, and a complex-conjugate pair. The two real members often asymptotically point radially inward and radially outward, and the formalism is well adapted to treatment of the propagation of radiation in curved spacetime. The Weyl scalars, derived from the Weyl tensor, are often used. In particular, it can be shown that one of these scalars— in the appropriate frame—encodes the outgoing gravitational radiation of an asymptotically flat system. Newman and Penrose introduced the following functions as primary quantities using this tetrad: Twelve complex spin coefficients (in three groups) which describe the change in the tetrad from point to point: . Five complex functions encoding Weyl tensors in the tetrad basis: . Ten functions encoding Ricci tensors in the tetrad basis: (real); (complex). In many situations—especially algebraically special spacetimes or vacuum spacetimes—the Newman–Penrose formalism simplifies dramatically, as many of the functions go to zero. This simplification allows for various theorems to be proven more easily than using the standard form of Einstein's equations. In this article, we will only employ the tensorial rather than spinorial version of NP formalism, because the former is easier to understand and more popular in relevant papers. One can refer to ref. for a unified formulation of these two versions. Null tetrad and sign convention The formalism is developed for four-dimensional spacetime, with a Lorentzian-signature metric. At each point, a tetrad (set of four vectors) is introduced. The first two vectors, and are just a pair of standard (real) null vectors such that . For example, we can think in terms of spherical coordinates, and take to be the outgoing null vector, and to be the ingoing null vector. A complex null vector is then constructed by combining a pair of real, orthogonal unit space-like vectors. In the case of spherical coordinates, the standard choice is The complex conjugate of this vector then forms the fourth element of the tetrad. Two sets of signature and normalization conventions are in use for NP formalism: and . The former is the original one that was adopted when NP formalism was developed and has been widely used in black-hole physics, gravitational waves and various other areas in general relativity. However, it is the latter convention that is usually employed in contemporary study of black holes from quasilocal perspectives (such as isolated horizons and dynamical horizons). In this article, we will utilize for a systematic review of the NP formalism (see also refs.). It's important to note that, when switching from to , definitions of the spin coefficients, Weyl-NP scalars and Ricci-NP scalars need to change their signs; this way, the Einstein-Maxwell equations can be left unchanged. 
In NP formalism, the complex null tetrad contains two real null (co)vectors and two complex null (co)vectors . Being null (co)vectors, self-normalization of naturally vanishes, so the following two pairs of cross-normalization are adopted while contractions between the two pairs are also vanishing, Here the indices can be raised and lowered by the global metric which in turn can be obtained via NP quantities and tetrad equations Four covariant derivative operators In keeping with the formalism's practice of using distinct unindexed symbols for each component of an object, the covariant derivative operator is expressed using four separate symbols () which name a directional covariant derivative operator for each tetrad direction. Given a linear combination of tetrad vectors, , the covariant derivative operator in the direction is . The operators are defined as which reduce to when acting on scalar functions. Twelve spin coefficients In NP formalism, instead of using index notations as in orthogonal tetrads, each Ricci rotation coefficient in the null tetrad is assigned a lower-case Greek letter, which constitute the 12 complex spin coefficients (in three groups), Spin coefficients are the primary quantities in NP formalism, with which all other NP quantities (as defined below) could be calculated indirectly using the NP field equations. Thus, NP formalism is sometimes referred to as spin-coefficient formalism as well. Transportation equations: covariant derivatives of tetrad vectors The sixteen directional covariant derivatives of tetrad vectors are sometimes called the transportation/propagation equations, perhaps because the derivatives are zero when the tetrad vector is parallel propagated or transported in the direction of the derivative operator. These results in this exact notation are given by O'Donnell: Interpretation of κ, ε, ν, γ from Dℓa and Δna The two equations for the covariant derivative of a real null tetrad vector in its own direction indicate whether or not the vector is tangent to a geodesic and if so, whether the geodesic has an affine parameter. A null tangent vector is tangent to an affinely parameterized null geodesic if , which is to say if the vector is unchanged by parallel propagation or transportation in its own direction. shows that is tangent to a geodesic if and only if , and is tangent to an affinely parameterized geodesic if in addition . Similarly, shows that is geodesic if and only if , and has affine parameterization when . (The complex null tetrad vectors and would have to be separated into the spacelike basis vectors and before asking if either or both of those are tangent to spacelike geodesics.) Commutators The metric-compatibility or torsion-freeness of the covariant derivative is recast into the commutators of the directional derivatives, which imply that Note: (i) The above equations can be regarded either as implications of the commutators or combinations of the transportation equations; (ii) In these implied equations, the vectors can be replaced by the covectors and the equations still hold. Weyl–NP and Ricci–NP scalars The 10 independent components of the Weyl tensor can be encoded into 5 complex Weyl-NP scalars, The 10 independent components of the Ricci tensor are encoded into 4 real scalars , , , and 3 complex scalars (with their complex conjugates), In these definitions, could be replaced by its trace-free part or by the Einstein tensor because of the normalization relations. Also, is reduced to for electrovacuum (). 
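As a concrete check of the normalization relations just described, one can take the standard flat-spacetime null tetrad built from spherical coordinates. The specific component values below and the (+,−,−,−) signature are assumptions made for this illustration, not quotations from the article; in the other common convention the signs of the two cross-normalizations flip. With these choices SymPy can verify the null conditions, the cross-normalization, and the reconstruction of the metric from the tetrad.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Minkowski metric in spherical coordinates (t, r, theta, phi), signature (+,-,-,-)
g = sp.diag(1, -1, -r**2, -(r*sp.sin(th))**2)

# A standard null tetrad, contravariant components (assumed example choice)
l = sp.Matrix([1, 1, 0, 0])
n = sp.Matrix([sp.Rational(1, 2), -sp.Rational(1, 2), 0, 0])
m = sp.Matrix([0, 0, 1/(sp.sqrt(2)*r), sp.I/(sp.sqrt(2)*r*sp.sin(th))])
mb = m.conjugate()

def dot(u, v):
    """Inner product g_ab u^a v^b."""
    return sp.simplify((u.T * g * v)[0])

# Self-normalizations vanish; cross-normalizations are l.n = 1 and m.mbar = -1
assert dot(l, l) == 0 and dot(n, n) == 0 and dot(m, m) == 0
assert dot(l, n) == 1 and dot(m, mb) == -1
assert dot(l, m) == 0 and dot(n, m) == 0      # the two pairs do not mix

# Tetrad equation: g_ab = l_a n_b + n_a l_b - m_a mbar_b - mbar_a m_b
low = lambda v: g * v                          # lower an index: v_a = g_ab v^b
l_, n_, m_, mb_ = low(l), low(n), low(m), low(mb)
g_rebuilt = l_*n_.T + n_*l_.T - m_*mb_.T - mb_*m_.T
assert sp.simplify(g_rebuilt - g) == sp.zeros(4, 4)
print("null tetrad normalizations and metric reconstruction verified")
```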
Einstein–Maxwell–NP equations NP field equations In a complex null tetrad, Ricci identities give rise to the following NP field equations connecting spin coefficients, Weyl-NP and Ricci-NP scalars (recall that in an orthogonal tetrad, Ricci rotation coefficients would respect Cartan's first and second structure equations), These equations in various notations can be found in several texts. The notation in Frolov and Novikov is identical. Also, the Weyl-NP scalars and the Ricci-NP scalars can be calculated indirectly from the above NP field equations after obtaining the spin coefficients rather than directly using their definitions. Maxwell–NP scalars, Maxwell equations in NP formalism The six independent components of the Faraday-Maxwell 2-form (i.e. the electromagnetic field strength tensor) can be encoded into three complex Maxwell-NP scalars and therefore the eight real Maxwell equations and (as ) can be transformed into four complex equations, with the Ricci-NP scalars related to Maxwell scalars by It is worthwhile to point out that, the supplementary equation is only valid for electromagnetic fields; for example, in the case of Yang-Mills fields there will be where are Yang-Mills-NP scalars. To sum up, the aforementioned transportation equations, NP field equations and Maxwell-NP equations together constitute the Einstein-Maxwell equations in Newman–Penrose formalism. Applications of the NP formalism to gravitational radiation field The Weyl scalar was defined by Newman & Penrose as (note, however, that the overall sign is arbitrary, and that Newman & Penrose worked with a "timelike" metric signature of ). In empty space, the Einstein Field Equations reduce to . From the definition of the Weyl tensor, we see that this means that it equals the Riemann tensor, . We can make the standard choice for the tetrad at infinity: In transverse-traceless gauge, a simple calculation shows that linearized gravitational waves are related to components of the Riemann tensor as assuming propagation in the direction. Combining these, and using the definition of above, we can write Far from a source, in nearly flat space, the fields and encode everything about gravitational radiation propagating in a given direction. Thus, we see that encodes in a single complex field everything about (outgoing) gravitational waves. Radiation from a finite source Using the wave-generation formalism summarised by Thorne, we can write the radiation field quite compactly in terms of the mass multipole, current multipole, and spin-weighted spherical harmonics: Here, prefixed superscripts indicate time derivatives. That is, we define The components and are the mass and current multipoles, respectively. is the spin-weight −2 spherical harmonic. See also Light-cone coordinates GHP formalism Tetrad formalism Goldberg–Sachs theorem References Wald treats the more succinct version of the Newman–Penrose formalism in terms of more modern spinor notation. Hawking and Ellis use the formalism in their discussion of the final state of a collapsing star. External links Newman–Penrose formalism on Scholarpedia Theory of relativity Mathematical notation General relativity
Newman–Penrose formalism
[ "Physics", "Mathematics" ]
2,170
[ "General relativity", "nan", "Theory of relativity" ]
1,231,204
https://en.wikipedia.org/wiki/Constructive%20quantum%20field%20theory
In mathematical physics, constructive quantum field theory is the field devoted to showing that quantum field theory can be defined in terms of precise mathematical structures. This demonstration requires new mathematics, in a sense analogous to classical real analysis, putting calculus on a mathematically rigorous foundation. Weak, strong, and electromagnetic forces of nature are believed to have their natural description in terms of quantum fields. Attempts to put quantum field theory on a basis of completely defined concepts have involved most branches of mathematics, including functional analysis, differential equations, probability theory, representation theory, geometry, and topology. It is known that a quantum field is inherently hard to handle using conventional mathematical techniques like explicit estimates. This is because a quantum field has the general nature of an operator-valued distribution, a type of object from mathematical analysis. The existence theorems for quantum fields can be expected to be very difficult to find, if indeed they are possible at all. One discovery of the theory that can be related in non-technical terms, is that the dimension d of the spacetime involved is crucial. Notable work in the field by James Glimm and Arthur Jaffe showed that with d < 4 many examples can be found. Along with work of their students, coworkers, and others, constructive field theory resulted in a mathematical foundation and exact interpretation to what previously was only a set of recipes, also in the case d < 4. Theoretical physicists had given these rules the name "renormalization," but most physicists had been skeptical about whether they could be turned into a mathematical theory. Today one of the most important open problems, both in theoretical physics and in mathematics, is to establish similar results for gauge theory in the realistic case d = 4. The traditional basis of constructive quantum field theory is the set of Wightman axioms. Konrad Osterwalder and Robert Schrader showed that there is an equivalent problem in mathematical probability theory. The examples with d < 4 satisfy the Wightman axioms as well as the Osterwalder–Schrader axioms . They also fall in the related framework introduced by Rudolf Haag and Daniel Kastler, called algebraic quantum field theory. There is a firm belief in the physics community that the gauge theory of C.N. Yang and Robert Mills (the Yang–Mills theory) can lead to a tractable theory, but new ideas and new methods will be required to actually establish this, and this could take many years. External links Axiomatic quantum field theory Functional analysis
Constructive quantum field theory
[ "Mathematics" ]
505
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
1,231,776
https://en.wikipedia.org/wiki/Photoionization
Photoionization is the physical process in which an ion is formed from the interaction of a photon with an atom or molecule. Cross section Not every interaction between a photon and an atom, or molecule, will result in photoionization. The probability of photoionization is related to the photoionization cross section of the species – the probability of an ionization event conceptualized as a hypothetical cross-sectional area. This cross section depends on the energy of the photon (proportional to its wavenumber) and the species being considered i.e. it depends on the structure of the molecular species. In the case of molecules, the photoionization cross-section can be estimated by examination of Franck-Condon factors between a ground-state molecule and the target ion. This can be initialized by computing the vibrations of a molecule and associated cation (post ionization) using quantum chemical software e.g. QChem. For photon energies below the ionization threshold, the photoionization cross-section is near zero. But with the development of pulsed lasers it has become possible to create extremely intense, coherent light where multi-photon ionization may occur via sequences of excitations and relaxations. At even higher intensities (around of infrared or visible light), non-perturbative phenomena such as barrier suppression ionization and rescattering ionization are observed. Multi-photon ionization Several photons of energy below the ionization threshold may actually combine their energies to ionize an atom. This probability decreases rapidly with the number of photons required, but the development of very intense, pulsed lasers still makes it possible. In the perturbative regime (below about 1014 W/cm2 at optical frequencies), the probability of absorbing N photons depends on the laser-light intensity I as IN . For higher intensities, this dependence becomes invalid due to the then occurring AC Stark effect. Resonance-enhanced multiphoton ionization (REMPI) is a technique applied to the spectroscopy of atoms and small molecules in which a tunable laser can be used to access an excited intermediate state. Above-threshold ionization (ATI) is an extension of multi-photon ionization where even more photons are absorbed than actually would be necessary to ionize the atom. The excess energy gives the released electron higher kinetic energy than the usual case of just-above threshold ionization. More precisely, the system will have multiple peaks in its photoelectron spectrum which are separated by the photon energies, indicating that the emitted electron has more kinetic energy than in the normal (lowest possible number of photons) ionization case. The electrons released from the target will have approximately an integer number of photon-energies more kinetic energy. Tunnel ionization When either the laser intensity is further increased or a longer wavelength is applied as compared with the regime in which multi-photon ionization takes place, a quasi-stationary approach can be used and results in the distortion of the atomic potential in such a way that only a relatively low and narrow barrier between a bound state and the continuum states remains. Then, the electron can tunnel through or for larger distortions even overcome this barrier. These phenomena are called tunnel ionization and over-the-barrier ionization, respectively. See also Ion source Radiolysis References Further reading A. Lampros and A. 
Nikolopoulos (2019) Elements of Photoionization Quantum Dynamics Method, IOP Concise Physics, ISBN 978-1643276533 Spectroscopy Ionization
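The lowest-order perturbative scaling quoted in the multi-photon ionization section above (an N-photon rate proportional to I^N) can be shown numerically. The snippet below is a toy illustration rather than a model of any particular atom; the photon number and rate constant are arbitrary assumptions.

```python
import numpy as np

N = 3                      # photons needed to ionize (assumed)
k = 1e-2                   # arbitrary rate constant in these toy units

I = np.logspace(0, 2, 5)   # laser intensities (arbitrary units)
rate = k * I**N            # lowest-order perturbative N-photon ionization rate

# On a log-log plot the slope of rate versus intensity recovers the photon number N
slope = np.polyfit(np.log(I), np.log(rate), 1)[0]
print(f"fitted slope = {slope:.2f}  (equals N = {N})")
```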
Photoionization
[ "Physics", "Chemistry" ]
714
[ "Ionization", "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Spectroscopy" ]
1,232,419
https://en.wikipedia.org/wiki/Weighted%20geometric%20mean
In statistics, the weighted geometric mean is a generalization of the geometric mean using the weighted arithmetic mean. Given a sample $x_1, x_2, \dots, x_n$ and weights $w_1, w_2, \dots, w_n$, it is calculated as: $\bar{x} = \left(\prod_{i=1}^{n} x_i^{w_i}\right)^{1/\sum_{i=1}^{n} w_i} = \exp\left(\frac{\sum_{i=1}^{n} w_i \ln x_i}{\sum_{i=1}^{n} w_i}\right)$. The second form above illustrates that the logarithm of the geometric mean is the weighted arithmetic mean of the logarithms of the individual values. If all the weights are equal, the weighted geometric mean simplifies to the ordinary unweighted geometric mean. References See also Average Central tendency Summary statistics Weighted arithmetic mean Weighted harmonic mean External links Non-Newtonian calculus website Means Mathematical analysis Non-Newtonian calculus
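A minimal sketch of the calculation just described, using the log-domain form (the weighted arithmetic mean of logarithms) to avoid overflow; the sample values and weights are made up for illustration.

```python
import math

def weighted_geometric_mean(values, weights):
    """Weighted geometric mean via the weighted arithmetic mean of logarithms."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have the same length")
    total_weight = sum(weights)
    log_mean = sum(w * math.log(x) for x, w in zip(values, weights)) / total_weight
    return math.exp(log_mean)

# With equal weights this reduces to the ordinary geometric mean.
print(weighted_geometric_mean([2.0, 8.0], [1.0, 1.0]))   # 4.0
print(weighted_geometric_mean([2.0, 8.0], [3.0, 1.0]))   # 2**0.75 * 8**0.25 ~ 2.83
```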
Weighted geometric mean
[ "Physics", "Mathematics" ]
118
[ "Means", "Mathematical analysis", "Point (geometry)", "Calculus", "Geometric centers", "Non-Newtonian calculus", "Symmetry" ]
1,232,903
https://en.wikipedia.org/wiki/Combustor
A combustor is a component or area of a gas turbine, ramjet, or scramjet engine where combustion takes place. It is also known as a burner, burner can, combustion chamber or flame holder. In a gas turbine engine, the combustor or combustion chamber is fed high-pressure air by the compression system. The combustor then heats this air at constant pressure as the fuel/air mix burns. As it burns the fuel/air mix heats and rapidly expands. The burned mix is exhausted from the combustor through the nozzle guide vanes to the turbine. In the case of ramjet or scramjet engines, the exhaust is directly fed out through the nozzle. A combustor must contain and maintain stable combustion despite very high air flow rates. To do so combustors are carefully designed to first mix and ignite the air and fuel, and then mix in more air to complete the combustion process. Early gas turbine engines used a single chamber known as a can-type combustor. Today three main configurations exist: can, annular, and cannular (also referred to as can-annular tubo-annular). Afterburners are often considered another type of combustor. Combustors play a crucial role in determining many of an engine's operating characteristics, such as fuel efficiency, levels of emissions, and transient response (the response to changing conditions such as fuel flow and air speed). Fundamentals The objective of the combustor in a gas turbine is to add energy to the system to power the turbines, and produce a high-velocity gas to exhaust through the nozzle in aircraft applications. As with any engineering challenge, accomplishing this requires balancing many design considerations, such as the following: Completely combust the fuel. Otherwise, the engine wastes the unburned fuel and creates unwanted emissions of unburned hydrocarbons, carbon monoxide (CO), and soot. Low pressure loss across the combustor. The turbine which the combustor feeds needs high-pressure flow to operate efficiently. The flame (combustion) must be held (contained) inside of the combustor. If combustion happens further back in the engine, the turbine stages can easily be overheated and damaged. Additionally, as turbine blades continue to grow more advanced and are able to withstand higher temperatures, the combustors are being designed to burn at higher temperatures and the parts of the combustor need to be designed to withstand those higher temperatures. It should be capable of relighting at high altitude in an event of engine flame-out. Uniform exit temperature profile. If there are hot spots in the exit flow, the turbine may be subjected to thermal stress or other types of damage. Similarly, the temperature profile within the combustor should avoid hot spots, as those can damage or destroy a combustor from the inside. Small physical size and weight. Space and weight are at a premium in aircraft applications, so a well designed combustor strives to be compact. Non-aircraft applications, like power-generating gas turbines, are not as constrained by this factor. Wide range of operation. Most combustors must be able to operate with a variety of inlet pressures, temperatures, and mass flows. These factors change with both engine settings and environmental conditions (i.e., full throttle at low altitude can be very different from idle throttle at high altitude). Environmental emissions. There are strict regulations on aircraft emissions of pollutants like carbon dioxide and nitrogen oxides, so combustors need to be designed to minimize those emissions. 
(See Emissions section below) Sources: History Advancements in combustor technology focused on several distinct areas; emissions, operating range, and durability. Early jet engines produced large amounts of smoke, so early combustor advances, in the 1950s, were aimed at reducing the smoke produced by the engine. Once smoke was essentially eliminated, efforts turned in the 1970s to reducing other emissions, like unburned hydrocarbons and carbon monoxide (for more details, see the Emissions section below). The 1970s also saw improvement in combustor durability, as new manufacturing methods improved liner (see Components below) lifetime by nearly 100 times that of early liners. In the 1980s combustors began to improve their efficiency across the whole operating range; combustors tended to be highly efficient (99%+) at full power, but that efficiency dropped off at lower settings. Development over that decade improved efficiencies at lower levels. The 1990s and 2000s saw a renewed focus on reducing emissions, particularly nitrogen oxides. Combustor technology is still being actively researched and advanced, and much modern research focuses on improving the same aspects. Components The case is the outer shell of the combustor, and is a fairly simple structure. The casing generally requires little maintenance. The case is protected from thermal loads by the air flowing in it, so thermal performance is of limited concern. However, the casing serves as a pressure vessel that must withstand the difference between the high pressures inside the combustor and the lower pressure outside. That mechanical (rather than thermal) load is a driving design factor in the case. The purpose of the diffuser is to slow the high-speed, highly compressed, air from the compressor to a velocity optimal for the combustor. Reducing the velocity results in an unavoidable loss in total pressure, so one of the design challenges is to limit the loss of pressure as much as possible. Furthermore, the diffuser must be designed to limit the flow distortion as much as possible by avoiding flow effects like boundary layer separation. Like most other gas turbine engine components, the diffuser is designed to be as short and light as possible. The liner contains the combustion process and introduces the various airflows (intermediate, dilution, and cooling, see Air flow paths below) into the combustion zone. The liner must be designed and built to withstand extended high-temperature cycles. For that reason liners tend to be made from superalloys like Hastelloy X. Furthermore, even though high-performance alloys are used, the liners must be cooled with air flow. Some combustors also make use of thermal barrier coatings. However, air cooling is still required. In general, there are two main types of liner cooling; film cooling and transpiration cooling. Film cooling works by injecting (by one of several methods) cool air from outside of the liner to just inside of the liner. This creates a thin film of cool air that protects the liner, reducing the temperature at the liner from around 1800 kelvins (K) to around 830 K, for example. The other type of liner cooling, transpiration cooling, is a more modern approach that uses a porous material for the liner. The porous liner allows a small amount of cooling air to pass through it, providing cooling benefits similar to film cooling. The two primary differences are in the resulting temperature profile of the liner and the amount of cooling air required. 
Transpiration cooling results in a much more even temperature profile, as the cooling air is uniformly introduced through pores. Film cooling air is generally introduced through slats or louvers, resulting in an uneven profile where it is cooler at the slat and warmer between the slats. More importantly, transpiration cooling uses much less cooling air (on the order of 10% of total airflow, rather than 20-50% for film cooling). Using less air for cooling allows more to be used for combustion, which is more and more important for high-performance, high-thrust engines. The snout is an extension of the dome (see below) that acts as an air splitter, separating the primary air from the secondary air flows (intermediate, dilution, and cooling air; see Air flow paths section below). The dome and swirler are the part of the combustor that the primary air (see Air flow paths below) flows through as it enters the combustion zone. Their role is to generate turbulence in the flow to rapidly mix the air with fuel. Early combustors tended to use bluff body domes (rather than swirlers), which used a simple plate to create wake turbulence to mix the fuel and air. Most modern designs, however, are swirl stabilized (use swirlers). The swirler establishes a local low pressure zone that forces some of the combustion products to recirculate, creating high turbulence. However, the higher the turbulence, the higher the pressure loss will be for the combustor, so the dome and swirler must be carefully designed so as not to generate more turbulence than is needed to sufficiently mix the fuel and air. The fuel injector is responsible for introducing fuel to the combustion zone and, along with the swirler (above), is responsible for mixing the fuel and air. There are four primary types of fuel injectors: pressure-atomizing, air blast, vaporizing, and premix/prevaporizing injectors. Pressure-atomizing fuel injectors rely on very high fuel pressures to atomize the fuel. This type of fuel injector has the advantage of being very simple, but it has several disadvantages. The fuel system must be robust enough to withstand such high pressures, and the fuel tends to be heterogeneously atomized, resulting in incomplete or uneven combustion which produces more pollutants and smoke. The second type of fuel injector is the air blast injector. This injector "blasts" a sheet of fuel with a stream of air, atomizing the fuel into homogeneous droplets. This type of fuel injector led to the first smokeless combustors. The air used is just some of the primary air (see Air flow paths below) that is diverted through the injector, rather than the swirler. This type of injector also requires lower fuel pressures than the pressure-atomizing type. The vaporizing fuel injector, the third type, is similar to the air blast injector in that primary air is mixed with the fuel as it is injected into the combustion zone. However, the fuel-air mixture travels through a tube within the combustion zone. Heat from the combustion zone is transferred to the fuel-air mixture, vaporizing some of the fuel (mixing it better) before it is combusted. This method allows the fuel to be combusted with less thermal radiation, which helps protect the liner. However, the vaporizer tube may have serious durability problems with low fuel flow within it (the fuel inside of the tube protects the tube from the combustion heat). The premixing/prevaporizing injectors work by mixing or vaporizing the fuel before it reaches the combustion zone. 
This method allows the fuel to be very uniformly mixed with the air, reducing emissions from the engine. One disadvantage of this method is that fuel may auto-ignite or otherwise combust before the fuel-air mixture reaches the combustion zone. If this happens, the combustor can be seriously damaged. Most igniters in gas turbine applications are electrical spark igniters, similar to automotive spark plugs. The igniter needs to be in the combustion zone where the fuel and air are already mixed, but it needs to be far enough upstream so that it is not damaged by the combustion itself. Once the combustion is initially started by the igniter, it is self-sustaining, and the igniter is no longer used. In can-annular and annular combustors (see Types of combustors below), the flame can propagate from one combustion zone to another, so igniters are not needed at each one. In some systems ignition-assist techniques are used. One such method is oxygen injection, where oxygen is fed to the ignition area, helping the fuel easily combust. This is particularly useful in some aircraft applications where the engine may have to restart at high altitude. Air flow paths Primary air This is the main combustion air. It is highly compressed air from the high-pressure compressor (often decelerated via the diffuser) that is fed through the main channels in the dome of the combustor and the first set of liner holes. This air is mixed with fuel, and then combusted. Intermediate air Intermediate air is the air injected into the combustion zone through the second set of liner holes (primary air goes through the first set). This air completes the reaction processes, diluting the high concentrations of carbon monoxide (CO) and hydrogen (H2), and also helps cool down the gases from combustion. Dilution air Dilution air is air injected through holes in the liner at the end of the combustion chamber to cool the flue gas before it reaches the turbines. The air is carefully used to produce the uniform temperature profile desired in the combustor. However, as turbine blade technology improves, allowing them to withstand higher temperatures, dilution air is used less, allowing the use of more combustion air. Cooling air Cooling air is air that is injected through small holes in the liner to generate a layer (film) of cool air to protect the liner from the combustion temperatures. The implementation of cooling air has to be carefully designed so it does not directly interact with the combustion air and process. In some cases, as much as 50% of the inlet air is used as cooling air. There are several different methods of injecting this cooling air, and the method can influence the temperature profile that the liner is exposed to (see Liner, above). Types Can Can combustors are self-contained cylindrical combustion chambers. Each "can" has its own fuel injector, igniter, liner, and casing. The primary air from the compressor is guided into each individual can, where it is decelerated, mixed with fuel, and then ignited. The secondary air also comes from the compressor, where it is fed outside of the liner (inside of which is where the combustion is taking place). The secondary air is then fed, usually through slits in the liner, into the combustion zone to cool the liner via thin film cooling. In most applications, multiple cans are arranged around the central axis of the engine, and their shared exhaust is fed to the turbine(s). 
Can-type combustors were most widely used in early gas turbine engines, owing to their ease of design and testing (one can test a single can, rather than have to test the whole system). Can-type combustors are easy to maintain, as only a single can needs to be removed, rather than the whole combustion section. Most modern gas turbine engines (particularly for aircraft applications) do not use can combustors, as they often weigh more than alternatives. Additionally, the pressure drop across the can is generally higher than in other combustors (on the order of 7%). Most modern engines that use can combustors are turboshafts featuring centrifugal compressors. Can-annular The next type of combustor is the "can-annular" combustor. Like the can-type combustor, can-annular combustors have discrete combustion zones contained in separate liners with their own fuel injectors. Unlike the can combustor, all the combustion zones share a common ring (annulus) casing. Each combustion zone no longer has to serve as a pressure vessel. The combustion zones can also "communicate" with each other via liner holes or connecting tubes that allow some air to flow circumferentially. The exit flow from the can-annular combustor generally has a more uniform temperature profile, which is better for the turbine section. It also eliminates the need for each chamber to have its own igniter. Once the fire is lit in one or two cans, it can easily spread to and ignite the others. This type of combustor is also lighter than the can type, and has a lower pressure drop (on the order of 6%). However, a can-annular combustor can be more difficult to maintain than a can combustor. Examples of gas turbine engines utilizing a can-annular combustor include the General Electric J79 turbojet and the Pratt & Whitney TF30 and Rolls-Royce Tay turbofans. Annular The final, and most commonly used, type of combustor is the fully annular combustor. Annular combustors do away with the separate combustion zones and simply have a continuous liner and casing in a ring (the annulus). There are many advantages to annular combustors, including more uniform combustion, shorter size (therefore lighter), and less surface area. Additionally, annular combustors tend to have very uniform exit temperatures. They also have the lowest pressure drop of the three designs (on the order of 5%). The annular design is also simpler, although testing generally requires a full size test rig. Engines that use an annular combustor include the CFM International CFM56, the General Electric F110 and the Pratt & Whitney F401. Almost all modern gas turbine engines use annular combustors; likewise, most combustor research and development focuses on improving this type. Double annular combustor One variation on the standard annular combustor is the double annular combustor (DAC). Like an annular combustor, the DAC is a continuous ring without separate combustion zones around the radius. The difference is that the combustor has two combustion zones around the ring: a pilot zone and a main zone. The pilot zone acts like that of a single annular combustor, and is the only zone operating at low power levels. At high power levels, the main zone is used as well, increasing air and mass flow through the combustor. GE's implementation of this type of combustor focuses on reducing emissions. A good diagram of a DAC is available from Purdue. 
Extending the same principles as the double annular combustor, triple annular and "multiple annular" combustors have been proposed and even patented. Emissions One of the driving factors in modern gas turbine design is reducing emissions, and the combustor is the primary contributor to a gas turbine's emissions. Generally speaking, there are five major types of emissions from gas turbine engines: smoke, carbon dioxide (CO2), carbon monoxide (CO), unburned hydrocarbons (UHC), and nitrogen oxides (NOx). Smoke is primarily mitigated by more evenly mixing the fuel with air. As discussed in the fuel injector section above, modern fuel injectors (such as airblast fuel injectors) evenly atomize the fuel and eliminate local pockets of high fuel concentration. Most modern engines use these types of fuel injectors and are essentially smokeless. Carbon dioxide is a product of the combustion process, and it is primarily mitigated by reducing fuel usage. On average, 1 kg of jet fuel burned produces 3.2 kg of CO2. Carbon dioxide emissions will continue to drop as manufacturers improve the overall efficiency of gas turbine engines. Unburned-hydrocarbon (UHC) and carbon-monoxide (CO) emissions are highly related. UHCs are essentially fuel that was not completely combusted. They are mostly produced at low power levels (where the engine is not burning all the fuel). Much of the UHC content reacts and forms CO within the combustor, which is why the two types of emissions are heavily related. As a result of this close relation, a combustor that is well optimized for CO emissions is inherently well optimized for UHC emissions, so most design work focuses on CO emissions. Carbon monoxide is an intermediate product of combustion, and it is eliminated by oxidation. CO and OH react to form CO2 and H. This process, which consumes the CO, requires a relatively long time ("relatively" is used because the combustion process happens incredibly quickly), high temperatures, and high pressures. This fact means that a low-CO combustor has a long residence time (essentially the amount of time the gases are in the combustion chamber). Like CO, Nitrogen oxides (NOx) are produced in the combustion zone. However, unlike CO, it is most produced during the conditions that CO is most consumed (high temperature, high pressure, long residence time). This means that, in general, reducing CO emissions results in an increase in NOx, and vice versa. This fact means that most successful emission reductions require the combination of several methods. Afterburners An afterburner (or reheat) is an additional component added to some jet engines, primarily those on military supersonic aircraft. Its purpose is to provide a temporary increase in thrust, both for supersonic flight and for takeoff (as the high wing loading typical of supersonic aircraft designs means that take-off speed is very high). On military aircraft the extra thrust is also useful for combat situations. This is achieved by injecting additional fuel into the jet pipe downstream of (i.e. after) the turbine and combusting it. The advantage of afterburning is significantly increased thrust; the disadvantage is its very high fuel consumption and inefficiency, though this is often regarded as acceptable for the short periods during which it is usually used. Jet engines are referred to as operating wet when afterburning is being used and dry when the engine is used without afterburning. 
An engine producing maximum thrust wet is at maximum power or max reheat (this is the maximum power the engine can produce); an engine producing maximum thrust dry is at military power or max dry. As with the main combustor in a gas turbine, the afterburner has both a case and a liner, serving the same purpose as their main combustor counterparts. One major difference between a main combustor and an afterburner is that the temperature rise is not constrained by a turbine section, therefore afterburners tend to have a much higher temperature rise than main combustors. Another difference is that afterburners are not designed to mix fuel as well as primary combustors, so not all the fuel is burned within the afterburner section. Afterburners also often require the use of flameholders to keep the velocity of the air in the afterburner from blowing the flame out. These are often bluff bodies or "vee-gutters" directly behind the fuel injectors that create localized low-speed flow in the same manner the dome does in the main combustor. Ramjets Ramjet engines differ in many ways from traditional gas turbine engines, but most of the same principles hold. One major difference is the lack of rotating machinery (a turbine) after the combustor. The combustor exhaust is directly fed to a nozzle. This allows ramjet combustors to burn at a higher temperature. Another difference is that many ramjet combustors do not use liners like gas turbine combustors do. Furthermore, some ramjet combustors are dump combustors rather than a more conventional type. Dump combustors inject fuel and rely on recirculation generated by a large change in area in the combustor (rather than swirlers in many gas turbine combustors). That said, many ramjet combustors are also similar to traditional gas turbine combustors, such as the combustor in the ramjet used by the RIM-8 Talos missile, which used a can-type combustor. Scramjets Scramjet (supersonic combustion ramjet) engines present a much different situation for the combustor than conventional gas turbine engines (scramjets are not gas turbines, as they generally have few or no moving parts). While scramjet combustors may be physically quite different from conventional combustors, they face many of the same design challenges, like fuel mixing and flame holding. However, as its name implies, a scramjet combustor must address these challenges in a supersonic flow environment. For example, for a scramjet flying at Mach 5, the air flow entering the combustor would nominally be Mach 2. One of the major challenges in a scramjet engine is preventing shock waves generated by combustor from traveling upstream into the inlet. If that were to happen, the engine may unstart, resulting in loss of thrust, among other problems. To prevent this, scramjet engines tend to have an isolator section (see image) immediately ahead of the combustion zone. See also Components of jet engines Notes References Notes Bibliography External links Classification of Combustion Chamber Combustion chambers Jet engine technology Jet engines
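To connect with the carbon dioxide figure quoted in the Emissions section (about 3.2 kg of CO2 per kilogram of jet fuel burned), the sketch below runs the stoichiometry for a simple surrogate fuel. Treating kerosene as the molecule C12H23 is a common approximation and an assumption of this sketch, not something stated in the article.

```python
# Rough stoichiometric check of CO2 produced per kg of jet fuel,
# approximating kerosene as the surrogate molecule C12H23 (assumption).
M_C, M_H, M_O = 12.011, 1.008, 15.999    # atomic masses, g/mol

n_c, n_h = 12, 23                         # atoms per surrogate fuel molecule
m_fuel = n_c * M_C + n_h * M_H            # molar mass of C12H23, g/mol
m_co2 = M_C + 2 * M_O                     # molar mass of CO2, g/mol

# Complete combustion: each carbon atom ends up in one CO2 molecule.
kg_co2_per_kg_fuel = n_c * m_co2 / m_fuel
print(f"~{kg_co2_per_kg_fuel:.2f} kg CO2 per kg fuel")   # ~3.16, close to the quoted ~3.2
```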
Combustor
[ "Technology" ]
5,110
[ "Jet engines", "Engines" ]
1,232,940
https://en.wikipedia.org/wiki/Micellar%20electrokinetic%20chromatography
Micellar electrokinetic chromatography (MEKC) is a chromatography technique used in analytical chemistry. It is a modification of capillary electrophoresis (CE), extending its functionality to neutral analytes, where the samples are separated by differential partitioning between micelles (pseudo-stationary phase) and a surrounding aqueous buffer solution (mobile phase). The basic set-up and detection methods used for MEKC are the same as those used in CE. The difference is that the solution contains a surfactant at a concentration that is greater than the critical micelle concentration (CMC). Above this concentration, surfactant monomers are in equilibrium with micelles. In most applications, MEKC is performed in open capillaries under alkaline conditions to generate a strong electroosmotic flow. Sodium dodecyl sulfate (SDS) is the most commonly used surfactant in MEKC applications. The anionic character of the sulfate groups of SDS causes the surfactant and micelles to have electrophoretic mobility that is counter to the direction of the strong electroosmotic flow. As a result, the surfactant monomers and micelles migrate quite slowly, though their net movement is still toward the cathode. During a MEKC separation, analytes distribute themselves between the hydrophobic interior of the micelle and hydrophilic buffer solution as shown in figure 1. Analytes that are insoluble in the interior of micelles should migrate at the electroosmotic flow velocity, , and be detected at the retention time of the buffer, . Analytes that solubilize completely within the micelles (analytes that are highly hydrophobic) should migrate at the micelle velocity, , and elute at the final elution time, . Theory The micelle velocity is defined by: where is the electrophoretic velocity of a micelle. The retention time of a given sample should depend on the capacity factor, : where is the total number of moles of solute in the micelle and is the total moles in the aqueous phase. The retention time of a solute should then be within the range: Charged analytes have a more complex interaction in the capillary because they exhibit electrophoretic mobility, engage in electrostatic interactions with the micelle, and participate in hydrophobic partitioning. The fraction of the sample in the aqueous phase, , is given by: where is the migration velocity of the solute. The value can also be expressed in terms of the capacity factor: Using the relationship between velocity, tube length from the injection end to the detector cell (), and retention time, , and , a relationship between the capacity factor and retention times can be formulated: The extra term enclosed in parentheses accounts for the partial mobility of the hydrophobic phase in MEKC. This equation resembles an expression derived for in conventional packed bed chromatography: A rearrangement of the previous equation can be used to write an expression for the retention factor: From this equation it can be seen that all analytes that partition strongly into the micellar phase (where is essentially ∞) migrate at the same time, . In conventional chromatography, separation of similar compounds can be improved by gradient elution. In MEKC, however, techniques must be used to extend the elution range to separate strongly retained analytes. Elution ranges can be extended by several techniques including the use of organic modifiers, cyclodextrins, and mixed micelle systems. 
Short-chain alcohols or acetonitrile can be used as organic modifiers that decrease and to improve the resolution of analytes that co-elute with the micellar phase. These agents, however, may alter the level of the EOF. Cyclodextrins are cyclic polysaccharides that form inclusion complexes that can cause competitive hydrophobic partitioning of the analyte. Since analyte-cyclodextrin complexes are neutral, they will migrate toward the cathode at a higher velocity than that of the negatively charged micelles. Mixed micelle systems, such as the one formed by combining SDS with the non-ionic surfactant Brij-35, can also be used to alter the selectivity of MEKC. Applications The simplicity and efficiency of MEKC have made it an attractive technique for a variety of applications. Further improvements can be made to the selectivity of MEKC by adding chiral selectors or chiral surfactants to the system. Unfortunately, this technique is not suitable for protein analysis because proteins are generally too large to partition into a surfactant micelle and tend to bind to surfactant monomers to form SDS-protein complexes. Recent applications of MEKC include the analysis of uncharged pesticides, essential and branched-chain amino acids in nutraceutical products, hydrocarbon and alcohol contents of the marjoram herb. MEKC has also been targeted for its potential to be used in combinatorial chemical analysis. The advent of combinatorial chemistry has enabled medicinal chemists to synthesize and identify large numbers of potential drugs in relatively short periods of time. Small sample and solvent requirements and the high resolving power of MEKC have enabled this technique to be used to quickly analyze a large number of compounds with good resolution. Traditional methods of analysis, like high-performance liquid chromatography (HPLC), can be used to identify the purity of a combinatorial library, but assays need to be rapid with good resolution for all components to provide useful information for the chemist. The introduction of surfactant to traditional capillary electrophoresis instrumentation has dramatically expanded the scope of analytes that can be separated by capillary electrophoresis. MEKC can also be used in routine quality control of antibiotics in pharmaceuticals or feedstuffs. References Sources Kealey, D.;Haines P.J.; instant notes, Analytical Chemistry page 182-188 Chromatography
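The Theory section above relates the retention factor to the migration times of the bulk buffer, the analyte, and the micelle. The sketch below uses the form in which this relationship is commonly written in the MEKC literature, k = (tR − t0) / (t0 (1 − tR/tmc)); the exact notation of the equations that were originally set in the Theory section may differ, and the example times are invented.

```python
def mekc_retention_factor(t0: float, tr: float, tmc: float) -> float:
    """Retention factor k in MEKC from the migration time of the bulk buffer
    (t0), the analyte (tr), and the micelle (tmc). The extra (1 - tr/tmc)
    term accounts for the moving pseudo-stationary phase; as tmc -> infinity
    the expression reduces to the conventional chromatographic (tr - t0)/t0."""
    return (tr - t0) / (t0 * (1.0 - tr / tmc))

# Example with invented migration times (minutes):
t0, tmc = 2.0, 10.0
for tr in (3.0, 5.0, 9.0):
    print(f"tr = {tr:>4.1f} min -> k = {mekc_retention_factor(t0, tr, tmc):.2f}")
```

Note how the retention factor grows without bound as the analyte's migration time approaches that of the micelle, which is why strongly micelle-bound analytes all elute together at the final elution time.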
Micellar electrokinetic chromatography
[ "Chemistry" ]
1,268
[ "Chromatography", "Separation processes" ]
1,233,042
https://en.wikipedia.org/wiki/Hadronization
Hadronization (or hadronisation) is the process of the formation of hadrons out of quarks and gluons. There are two main branches of hadronization: quark-gluon plasma (QGP) transformation and colour string decay into hadrons. The transformation of quark-gluon plasma into hadrons is studied in lattice QCD numerical simulations, which are explored in relativistic heavy-ion experiments. Quark-gluon plasma hadronization occurred shortly after the Big Bang, when the quark–gluon plasma cooled down to the Hagedorn temperature (about 150 MeV), below which free quarks and gluons cannot exist. In string breaking, new hadrons form out of quarks, antiquarks and sometimes gluons, spontaneously created from the vacuum. Statistical hadronization A highly successful description of QGP hadronization is based on statistical phase space weighting according to the Fermi–Pomeranchuk model of particle production. This approach has been developed since 1950, initially as a qualitative description of strongly interacting particle production. It was originally not meant to be an accurate description, but a phase-space estimate of the upper limit to particle yield. In the following years numerous hadronic resonances were discovered. Rolf Hagedorn postulated the statistical bootstrap model (SBM), which allows hadronic interactions to be described in terms of statistical resonance weights and the resonance mass spectrum. This turned the qualitative Fermi–Pomeranchuk model into a precise statistical hadronization model for particle production. However, this property of hadronic interactions poses a challenge for the statistical hadronization model, as the yield of particles is sensitive to unidentified high-mass hadron resonance states. The statistical hadronization model was first applied to relativistic heavy-ion collisions in 1991, which led to the recognition of the first strange anti-baryon signature of quark-gluon plasma discovered at CERN. Phenomenological studies of string model and fragmentation The QCD (quantum chromodynamics) dynamics of the hadronization process are not yet fully understood, but are modeled and parameterized in a number of phenomenological studies, including the Lund string model and various long-range QCD approximation schemes. The tight cone of particles created by the hadronization of a single quark is called a jet. In particle detectors, jets are observed rather than quarks, whose existence must be inferred. The models and approximation schemes and their predicted jet hadronization, or fragmentation, have been extensively compared with measurements in a number of high energy particle physics experiments, e.g. TASSO, OPAL and H1. Hadronization can be explored using Monte Carlo simulation. After the particle shower has terminated, partons with virtualities (how far off shell the virtual particles are) on the order of the cut-off scale remain. From this point on, the parton is in the low momentum transfer, long-distance regime in which non-perturbative effects become important. The most dominant of these effects is hadronization, which converts partons into observable hadrons. No exact theory for hadronization is known, but there are two successful models for parameterization. These models are used within event generators which simulate particle physics events. The scale at which partons are given to the hadronization is fixed by the shower Monte Carlo component of the event generator. Hadronization models typically start at some predefined scale of their own. 
This can cause significant issues if not set up properly within the shower Monte Carlo. Common choices of shower Monte Carlo are PYTHIA and HERWIG. Each of these corresponds to one of the two parameterization models. The top quark does not hadronize The top quark, however, decays via the weak force with a mean lifetime of 5×10−25 seconds. Unlike all other weak interactions, which typically are much slower than strong interactions, the top quark weak decay is uniquely faster than the time scale at which the strong force of QCD acts, so a top quark decays before it can hadronize. The top quark is therefore almost a free particle. References Quantum chromodynamics Experimental particle physics
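A quick numerical comparison behind the statement that the top quark decays before it can hadronize: its quoted mean lifetime versus the characteristic time of the strong interaction, estimated here as ħ divided by the QCD scale of a few hundred MeV. The 200 MeV value for the QCD scale is an order-of-magnitude assumption, not a figure from the article.

```python
# Order-of-magnitude comparison: top quark lifetime vs. hadronization time scale.
HBAR_GEV_S = 6.582e-25        # reduced Planck constant, GeV * s

tau_top = 5e-25               # mean top quark lifetime quoted above, s
lambda_qcd_gev = 0.2          # QCD scale, ~200 MeV (assumed order of magnitude)

t_hadronization = HBAR_GEV_S / lambda_qcd_gev   # ~3e-24 s
print(f"top lifetime        ~ {tau_top:.1e} s")
print(f"hadronization scale ~ {t_hadronization:.1e} s")
print(f"ratio (hadronization / decay) ~ {t_hadronization / tau_top:.0f}")
```

The hadronization time scale comes out several times longer than the top lifetime, which is the sense in which the top quark decays before strong-interaction binding can act.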
Hadronization
[ "Physics" ]
888
[ "Experimental physics", "Particle physics", "Experimental particle physics" ]
1,233,278
https://en.wikipedia.org/wiki/Capillary%20electrophoresis
Capillary electrophoresis (CE) is a family of electrokinetic separation methods performed in submillimeter diameter capillaries and in micro- and nanofluidic channels. Very often, CE refers to capillary zone electrophoresis (CZE), but other electrophoretic techniques including capillary gel electrophoresis (CGE), capillary isoelectric focusing (CIEF), capillary isotachophoresis and micellar electrokinetic chromatography (MEKC) belong also to this class of methods. In CE methods, analytes migrate through electrolyte solutions under the influence of an electric field. Analytes can be separated according to ionic mobility and/or partitioning into an alternate phase via non-covalent interactions. Additionally, analytes may be concentrated or "focused" by means of gradients in conductivity and pH. Instrumentation The instrumentation needed to perform capillary electrophoresis is relatively simple. A basic schematic of a capillary electrophoresis system is shown in figure 1. The system's main components are a sample vial, source and destination vials, a capillary, electrodes, a high voltage power supply, a detector, and a data output and handling device. The source vial, destination vial and capillary are filled with an electrolyte such as an aqueous buffer solution. To introduce the sample, the capillary inlet is placed into a vial containing the sample. Sample is introduced into the capillary via capillary action, pressure, siphoning, or electrokinetically, and the capillary is then returned to the source vial. The migration of the analytes is initiated by an electric field that is applied between the source and destination vials and is supplied to the electrodes by the high-voltage power supply. In the most common mode of CE, all ions, positive or negative, are pulled through the capillary in the same direction by electroosmotic flow. The analytes separate as they migrate due to their electrophoretic mobility, and are detected near the outlet end of the capillary. The output of the detector is sent to a data output and handling device such as an integrator or computer. The data is then displayed as an electropherogram, which reports detector response as a function of time. Separated chemical compounds appear as peaks with different migration times in an electropherogram. The technique is often attributed to James W. Jorgensen and Krynn DeArman Lukacs, who first demonstrated the capabilities of this technique. Capillary electrophoresis was first combined with mass spectrometry by Richard D. Smith and coworkers, and provides extremely high sensitivity for the analysis of very small sample sizes. Despite the very small sample sizes (typically only a few nanoliters of liquid are introduced into the capillary), high sensitivity and sharp peaks are achieved in part due to injection strategies that result in a concentration of analytes into a narrow zone near the inlet of the capillary. This is achieved in either pressure or electrokinetic injections simply by suspending the sample in a buffer of lower conductivity (e.g. lower salt concentration) than the running buffer. A process called field-amplified sample stacking (a form of isotachophoresis) results in concentration of analyte in a narrow zone at the boundary between the low-conductivity sample and the higher-conductivity running buffer. To achieve greater sample throughput, instruments with arrays of capillaries are used to analyze many samples simultaneously. 
Such capillary array electrophoresis (CAE) instruments with 16 or 96 capillaries are used for medium- to high-throughput capillary DNA sequencing, and the inlet ends of the capillaries are arrayed spatially to accept samples directly from SBS-standard footprint 96-well plates. Certain aspects of the instrumentation (such as detection) are necessarily more complex than for a single-capillary system, but the fundamental principles of design and operation are similar to those shown in Figure 1. Detection Separation by capillary electrophoresis can be detected by several detection devices. The majority of commercial systems use UV or UV-Vis absorbance as their primary mode of detection. In these systems, a section of the capillary itself is used as the detection cell. The use of on-tube detection enables detection of separated analytes with no loss of resolution. In general, capillaries used in capillary electrophoresis are coated with a polymer (frequently polyimide or Teflon) for increased flexibility. The portion of the capillary used for UV detection, however, must be optically transparent. For polyimide-coated capillaries, a segment of the coating is typically burned or scraped off to provide a bare window several millimeters long. This bare section of capillary can break easily, and capillaries with transparent coatings are available to increase the stability of the cell window. The path length of the detection cell in capillary electrophoresis (~ 50 micrometers) is far less than that of a traditional UV cell (~ 1 cm). According to the Beer-Lambert law, the sensitivity of the detector is proportional to the path length of the cell. To improve the sensitivity, the path length can be increased, though this results in a loss of resolution. The capillary tube itself can be expanded at the detection point, creating a "bubble cell" with a longer path length or additional tubing can be added at the detection point as shown in figure 2. Both of these methods, however, will decrease the resolution of the separation. This decrease is almost unnoticeable if a smooth aneurysm is produced in the wall of a capillary by heating and pressurization, as plug flow can be preserved. This invention by Gary Gordon, US Patent 5061361, typically triples the absorbance path length. When used with a UV absorbance detector, the wider cross-section of the analyte in the cell allows for an illuminating beam twice as large, which reduces shot noise by a factor of two. Together these two factors increase the sensitivity of Agilent Technologies's Bubble Cell CE Detector six times over that of one using a straight capillary. This cell and its manufacture are described on page 62 of the June 1995 issue of the Hewlett-Packard Journal. Fluorescence detection can also be used in capillary electrophoresis for samples that naturally fluoresce or are chemically modified to contain fluorescent tags. This mode of detection offers high sensitivity and improved selectivity for these samples, but cannot be utilized for samples that do not fluoresce. Numerous labeling strategies are used to create fluorescent derivatives or conjugates of non-fluorescent molecules, including proteins and DNA. The set-up for fluorescence detection in a capillary electrophoresis system can be complicated. The method requires that the light beam be focused on the capillary, which can be difficult for many light sources. Laser-induced fluorescence has been used in CE systems with detection limits as low as 10−18 to 10−21 mol. 
The sensitivity of the technique is attributed to the high intensity of the incident light and the ability to accurately focus the light on the capillary. Multi-color fluorescence detection can be achieved by including multiple dichroic mirrors and bandpass filters to separate the fluorescence emission amongst multiple detectors (e.g., photomultiplier tubes), or by using a prism or grating to project spectrally resolved fluorescence emission onto a position-sensitive detector such as a CCD array. CE systems with 4- and 5-color LIF detection systems are used routinely for capillary DNA sequencing and genotyping ("DNA fingerprinting") applications. In order to obtain the identity of sample components, capillary electrophoresis can be directly coupled with mass spectrometers or surface-enhanced Raman spectroscopy (SERS). In most systems, the capillary outlet is introduced into an ion source that utilizes electrospray ionization (ESI). The resulting ions are then analyzed by the mass spectrometer. This setup requires volatile buffer solutions, which will affect the range of separation modes that can be employed and the degree of resolution that can be achieved. The measurement and analysis are mostly done with specialized software. For CE-SERS, capillary electrophoresis eluants can be deposited onto a SERS-active substrate. Analyte retention times can be translated into spatial distance by moving the SERS-active substrate at a constant rate during capillary electrophoresis. This allows the subsequent spectroscopic technique to be applied to specific eluants for identification with high sensitivity. SERS-active substrates can be chosen that do not interfere with the spectrum of the analytes. Modes of separation The separation of compounds by capillary electrophoresis is dependent on the differential migration of analytes in an applied electric field. The electrophoretic migration velocity ($u_p$) of an analyte toward the electrode of opposite charge is: $u_p = \mu_p E$, where $\mu_p$ is the electrophoretic mobility of the analyte and $E$ is the applied electric field. The electrophoretic mobility can be determined experimentally from the migration time and the field strength: $\mu_p = \frac{l/t_r}{V/L_t} = \frac{l\,L_t}{t_r\,V}$, where $l$ is the distance from the inlet to the detection point, $t_r$ is the time required for the analyte to reach the detection point (migration time), $V$ is the applied voltage (field strength), and $L_t$ is the total length of the capillary. Since only charged ions are affected by the electric field, neutral analytes are poorly separated by capillary electrophoresis. The velocity of migration of an analyte in capillary electrophoresis will also depend upon the rate of electroosmotic flow (EOF) of the buffer solution. In a typical system, the electroosmotic flow is directed toward the negatively charged cathode so that the buffer flows through the capillary from the source vial to the destination vial. Separated by differing electrophoretic mobilities, analytes migrate toward the electrode of opposite charge. As a result, negatively charged analytes are attracted to the positively charged anode, counter to the EOF, while positively charged analytes are attracted to the cathode, in agreement with the EOF as depicted in figure 3. The velocity of the electroosmotic flow, $u_{eo}$, can be written as: $u_{eo} = \mu_{eo} E$, where $\mu_{eo}$ is the electroosmotic mobility, which is defined as: $\mu_{eo} = \frac{\epsilon \zeta}{\eta}$, where $\zeta$ is the zeta potential of the capillary wall, $\epsilon$ is the relative permittivity of the buffer solution, and $\eta$ is its viscosity. Experimentally, the electroosmotic mobility can be determined by measuring the retention time of a neutral analyte. 
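Before the electrophoretic and electroosmotic contributions are combined in the next step, here is a quick numeric sketch of the experimental mobility determination just described: mobility from the capillary lengths, migration time, and applied voltage. The capillary dimensions, voltage, and migration times are invented for illustration.

```python
def apparent_mobility(l_det_m: float, l_total_m: float, t_mig_s: float, voltage_v: float) -> float:
    """Apparent mobility (m^2 V^-1 s^-1) from the length to the detector (l),
    total capillary length (L), migration time (t) and applied voltage (V):
    mu = (l / t) / (V / L) = l * L / (t * V)."""
    return (l_det_m * l_total_m) / (t_mig_s * voltage_v)

# Invented example: 50 cm to the detector on a 60 cm capillary at 25 kV.
mu_neutral = apparent_mobility(0.50, 0.60, t_mig_s=180.0, voltage_v=25_000.0)  # neutral marker -> EOF mobility
mu_analyte = apparent_mobility(0.50, 0.60, t_mig_s=240.0, voltage_v=25_000.0)  # slower, anionic analyte
print(f"electroosmotic mobility   ~ {mu_neutral:.2e} m^2/(V*s)")
print(f"apparent analyte mobility ~ {mu_analyte:.2e} m^2/(V*s)")
print(f"electrophoretic mobility  ~ {mu_analyte - mu_neutral:.2e} m^2/(V*s)")
```

The neutral marker gives the electroosmotic mobility directly; subtracting it from an analyte's apparent mobility isolates the (here negative, i.e. counter-EOF) electrophoretic contribution.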
The velocity () of an analyte in an electric field can then be defined as: Since the electroosmotic flow of the buffer solution is generally greater than that of the electrophoretic mobility of the analytes, all analytes are carried along with the buffer solution toward the cathode. Even small, triply charged anions can be redirected to the cathode by the relatively powerful EOF of the buffer solution. Negatively charged analytes are retained longer in the capillary due to their conflicting electrophoretic mobilities. The order of migration seen by the detector is shown in figure 3: small multiply charged cations migrate quickly and small multiply charged anions are retained strongly. Electroosmotic flow is observed when an electric field is applied to a solution in a capillary that has fixed charges on its interior wall. Charge is accumulated on the inner surface of a capillary when a buffer solution is placed inside the capillary. In a fused-silica capillary, silanol (Si-OH) groups attached to the interior wall of the capillary are ionized to negatively charged silanoate (Si-O−) groups at pH values greater than three. The ionization of the capillary wall can be enhanced by first running a basic solution, such as NaOH or KOH through the capillary prior to introducing the buffer solution. Attracted to the negatively charged silanoate groups, the positively charged cations of the buffer solution will form two inner layers of cations (called the diffuse double layer or the electrical double layer) on the capillary wall as shown in figure 4. The first layer is referred to as the fixed layer because it is held tightly to the silanoate groups. The outer layer, called the mobile layer, is farther from the silanoate groups. The mobile cation layer is pulled in the direction of the negatively charged cathode when an electric field is applied. Since these cations are solvated, the bulk buffer solution migrates with the mobile layer, causing the electroosmotic flow of the buffer solution. Other capillaries including Teflon capillaries also exhibit electroosmotic flow. The EOF of these capillaries is probably the result of adsorption of the electrically charged ions of the buffer onto the capillary walls. The rate of EOF is dependent on the field strength and the charge density of the capillary wall. The wall's charge density is proportional to the pH of the buffer solution. The electroosmotic flow will increase with pH until all of the available silanols lining the wall of the capillary are fully ionized. In certain situations where strong electroosmotic flow toward the cathode is undesirable, the inner surface of the capillary can be coated with polymers, surfactants, or small molecules to reduce electroosmosis to very low levels, restoring the normal direction of migration (anions toward the anode, cations toward the cathode). CE instrumentation typically includes power supplies with reversible polarity, allowing the same instrument to be used in "normal" mode (with EOF and detection near the cathodic end of the capillary) and "reverse" mode (with EOF suppressed or reversed, and detection near the anodic end of the capillary). One of the most common approaches to suppressing EOF, reported by Stellan Hjertén in 1985, is to create a covalently attached layer of linear polyacrylamide. The silica surface of the capillary is first modified with a silane reagent bearing a polymerizable vinyl group (e.g. 
3-methacryloxypropyltrimethoxysilane), followed by introduction of acrylamide monomer and a free radical initiator. The acrylamide is polymerized in situ, forming long linear chains, some of which are covalently attached to the wall-bound silane reagent. Numerous other strategies for covalent modification of capillary surfaces exist. Dynamic or adsorbed coatings (which can include polymers or small molecules) are also common. For example, in capillary sequencing of DNA, the sieving polymer (typically polydimethylacrylamide) suppresses electroosmotic flow to very low levels. Besides modulating electroosmotic flow, capillary wall coatings can also serve the purpose of reducing interactions between "sticky" analytes (such as proteins) and the capillary wall. Such wall-analyte interactions, if severe, manifest as reduced peak efficiency, asymmetric (tailing) peaks, or even complete loss of analyte to the capillary wall. Efficiency and resolution The number of theoretical plates, or separation efficiency, in capillary electrophoresis is given by: where is the number of theoretical plates, is the apparent mobility in the separation medium and is the diffusion coefficient of the analyte. According to this equation, the efficiency of separation is only limited by diffusion and is proportional to the strength of the electric field, although practical considerations limit the strength of the electric field to several hundred volts per centimeter. Application of very high potentials (>20-30 kV) may lead to arcing or breakdown of the capillary. Further, application of strong electric fields leads to resistive heating (Joule heating) of the buffer in the capillary. At sufficiently high field strengths, this heating is strong enough that radial temperature gradients can develop within the capillary. Since electrophoretic mobility of ions is generally temperature-dependent (due to both temperature-dependent ionization and solvent viscosity effects), a non-uniform temperature profile results in variation of electrophoretic mobility across the capillary, and a loss of resolution. The onset of significant Joule heating can be determined by constructing an "Ohm's Law plot", wherein the current through the capillary is measured as a function of applied potential. At low fields, the current is proportional to the applied potential (Ohm's Law), whereas at higher fields the current deviates from the straight line as heating results in decreased resistance of the buffer. The best resolution is typically obtained at the maximum field strength for which Joule heating is insignificant (i.e. near the boundary between the linear and nonlinear regimes of the Ohm's Law plot). Generally capillaries of smaller inner diameter support use of higher field strengths, due to improved heat dissipation and smaller thermal gradients relative to larger capillaries, but with the drawbacks of lower sensitivity in absorbance detection due to shorter path length, and greater difficulty in introducing buffer and sample into the capillary (small capillaries require greater pressure and/or longer times to force fluids through the capillary). The efficiency of capillary electrophoresis separations is typically much higher than the efficiency of other separation techniques like HPLC. Unlike HPLC, in capillary electrophoresis there is no mass transfer between phases. 
In addition, the flow profile in EOF-driven systems is flat, rather than the rounded laminar flow profile characteristic of the pressure-driven flow in chromatography columns as shown in figure 5. As a result, EOF does not significantly contribute to band broadening as in pressure-driven chromatography. Capillary electrophoresis separations can have several hundred thousand theoretical plates. The resolution ($R_s$) of capillary electrophoresis separations can be written as: $R_s = \frac{1}{4\sqrt{2}}\,\Delta\mu_p\,\sqrt{\frac{V}{D\,(\bar{\mu}_p + \mu_{eo})}}$, where $\Delta\mu_p$ is the difference in electrophoretic mobility between the two analytes and $\bar{\mu}_p$ is their average electrophoretic mobility. According to this equation, maximum resolution is reached when the electrophoretic and electroosmotic mobilities are similar in magnitude and opposite in sign. In addition, it can be seen that high resolution requires lower velocity and, correspondingly, increased analysis time. Besides diffusion and Joule heating (discussed above), factors that may decrease the resolution in capillary electrophoresis from the theoretical limits in the above equation include, but are not limited to, the finite widths of the injection plug and detection window; interactions between the analyte and the capillary wall; instrumental non-idealities such as a slight difference in height of the fluid reservoirs leading to siphoning; irregularities in the electric field due to, e.g., imperfectly cut capillary ends; depletion of buffering capacity in the reservoirs; and electrodispersion (when an analyte has higher conductivity than the background electrolyte). Identifying and minimizing the numerous sources of band broadening is key to successful method development in capillary electrophoresis, with the objective of approaching as close as possible to the ideal of diffusion-limited resolution. Applications Capillary electrophoresis may be used for the simultaneous determination of the ions NH4+, Na+, K+, Mg2+ and Ca2+ in saliva. One of the main applications of CE in forensic science is the development of methods for amplification and detection of DNA fragments using polymerase chain reaction (PCR), which has led to rapid and dramatic advances in forensic DNA analysis. DNA separations are carried out using thin 50-μm fused silica CE capillaries filled with a sieving buffer. These capillaries have excellent capabilities to dissipate heat, permitting much higher electric field strengths to be used than in slab gel electrophoresis. Therefore, separations in capillaries are rapid and efficient. Additionally, the capillaries can be easily refilled and changed for efficient and automated injections. Detection occurs via fluorescence through a window etched in the capillary. Both single-capillary and capillary-array instruments are available, with array systems capable of running 16 or more samples simultaneously for increased throughput. A major use of CE by forensic biologists is typing of short tandem repeats (STRs) from biological samples to generate a profile from highly polymorphic genetic markers which differ between individuals. Other emerging uses for CE include the detection of specific mRNA fragments to help identify the biological fluid or tissue origin of a forensic sample. Another application of CE in forensics is ink analysis, where the analysis of inkjet printing inks is becoming more necessary due to increasingly frequent counterfeiting of documents printed by inkjet printers. The chemical composition of inks provides very important information in cases of fraudulent documents and counterfeit banknotes. Micellar electrophoretic capillary chromatography (MECC) has been developed and applied to the analysis of inks extracted from paper. 
Due to its high resolving power for inks containing several chemically similar substances, differences between inks from the same manufacturer can also be distinguished. This makes it suitable for evaluating the origin of documents based on the chemical composition of inks. It is worth noting that because of the possible compatibility of the same cartridge with different printer models, the differentiation of inks on the basis of their MECC electrophoretic profiles is a more reliable method for the determination of the ink cartridge of origin (its producer and cartridge number) than of the printer model of origin. A specialized type of CE, affinity capillary electrophoresis (ACE), utilizes intermolecular binding interactions to understand protein-ligand interactions. Pharmaceutical companies use ACE for a multitude of reasons, with one of the main ones being the determination of association/binding constants for drugs and ligands or drugs and certain vehicle systems like micelles. It is a widely used technique because of its simplicity, rapid results, and low analyte usage. The use of ACE can provide specific details in binding, separation, and detection of analytes and has proven to be highly practical for studies in life sciences. Aptamer-based affinity capillary electrophoresis is utilized for the analysis and modification of specific affinity reagents. Modified aptamers ideally exhibit high binding affinity, specificity, and nuclease resistance. Ren et al. incorporated modified nucleotides in aptamers to introduce new conformational features and high-affinity interactions from the hydrophobic and polar interactions between IL-1α and the aptamer. Huang et al. used ACE to investigate protein-protein interactions using aptamers. An α-thrombin-binding aptamer was labeled with 6-carboxyfluorescein for use as a selective fluorescent probe and was studied to elucidate information on binding sites for protein-protein and protein-DNA interactions. Capillary electrophoresis (CE) has become an important, cost-effective approach to DNA sequencing that provides high throughput and high accuracy sequencing information. Woolley and Mathies used a CE chip to sequence DNA fragments with 97% accuracy and a speed of 150 bases in 540 seconds. They used a 4-color labeling and detection format to collect fluorescent data. Fluorescence is used to view the concentrations of each part of the nucleic acid sequence, A, T, C and G, and these concentration peaks graphed from the detection are used to determine the sequence of the DNA. References Further reading External links CE animations Chromatography Electrophoresis Forensic techniques Polymerase chain reaction
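As a rough illustration of the diffusion-limited efficiency discussed in the Efficiency and resolution section, the sketch below evaluates the standard plate-number expression N = μV/(2D), which is the relationship that section describes (efficiency limited only by diffusion and proportional to the applied voltage). The mobility, voltage, and diffusion coefficient are assumed, typical small-molecule values, not figures from the article.

```python
def plate_number(mobility_m2_vs: float, voltage_v: float, diffusion_m2_s: float) -> float:
    """Diffusion-limited theoretical plate count, N = mu * V / (2 * D)."""
    return mobility_m2_vs * voltage_v / (2.0 * diffusion_m2_s)

# Assumed, typical small-molecule values:
mu = 5e-8        # apparent mobility, m^2 V^-1 s^-1
V = 25_000.0     # applied voltage, V
D = 1e-9         # diffusion coefficient, m^2 s^-1

print(f"N ~ {plate_number(mu, V, D):,.0f} theoretical plates")
```

With these inputs the estimate comes out in the several-hundred-thousand-plate range, consistent with the efficiencies quoted for CE in the text.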
Capillary electrophoresis
[ "Chemistry", "Biology" ]
4,989
[ "Chromatography", "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "Separation processes", "Instrumental analysis", "Biochemical separation processes", "Molecular biology techniques", "Electrophoresis" ]
1,233,715
https://en.wikipedia.org/wiki/Median%20voter%20theorem
In political science and social choice, the median voter theorem states that if voters and candidates are distributed along a one-dimensional spectrum and voters have single-peaked preferences, any voting method that is compatible with majority rule will elect the candidate preferred by the median voter. The theorem was first set out by Duncan Black in 1948. He wrote that he saw a large gap in economic theory concerning how voting determines the outcome of decisions, including political decisions. Black's paper triggered research on how economics can explain voting systems. A different argument due to Anthony Downs and Harold Hotelling is only loosely related to Black's median voter theorem, but is often confused with it. This model argues that politicians in a representative democracy will converge to the viewpoint of the median voter, because the median voter theorem implies that a candidate who wishes to win will adopt the positions of the median voter. However, this argument only applies to systems satisfying the median voter property, and cannot be applied to systems like ranked choice voting (RCV) or plurality voting outside of limited conditions (see below). Statement and proof of the theorem Say there is an election where candidates and voters have opinions distributed along a one-dimensional political spectrum. Voters rank candidates by proximity, i.e. the closest candidate is their first preference, the second-closest is their second preference, and so on. Then, the median voter theorem says that the candidate closest to the median voter is a majority-preferred (or Condorcet) candidate. In other words, this candidate is preferred to any one of their opponents by a majority of voters. When there are only two candidates, a simple majority vote satisfies this condition, while for multi-candidate votes any majority-rule (Condorcet) method will satisfy it. Proof sketch: Let the median voter be Marlene. The candidate who is closest to her will receive her first preference vote. Suppose that this candidate is Charles and that he lies to her left. Marlene and all voters to her left (by definition a majority of the electorate) will prefer Charles to all candidates to his right, and Marlene and all voters to her right (also a majority) will prefer Charles to all candidates to his left. The assumption that preferences are cast in order of proximity can be relaxed to say merely that they are single-peaked. The assumption that opinions lie along a real line can be relaxed to allow more general topologies. Spatial / valence models: Suppose that each candidate has a valence (attractiveness) in addition to his or her position in space, and suppose that voter i ranks candidates j in decreasing order of vj − dij, where vj is j's valence and dij is the distance from i to j. Then the median voter theorem still applies: Condorcet methods will elect the candidate voted for by the median voter. The median voter property We will say that a voting method has the "median voter property in one dimension" if it always elects the candidate closest to the median voter under a one-dimensional spatial model. We may summarize the median voter theorem as saying that all Condorcet methods possess the median voter property in one dimension. It turns out that Condorcet methods are not unique in this: Coombs' method is not Condorcet-consistent but nonetheless satisfies the median voter property in one dimension. Approval voting satisfies the same property under several models of strategic voting.
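As a quick numerical illustration of the statement above, the following Python sketch (not part of the original article; the voter and candidate positions are randomly generated assumptions) checks that, under one-dimensional proximity preferences, the candidate nearest the median voter beats every rival in a pairwise majority contest.

```python
# Numerical check of the median voter theorem under proximity preferences (sketch).
import random
import statistics

random.seed(1)
voters = [random.uniform(0, 100) for _ in range(1001)]       # odd count -> unique median
candidates = [random.uniform(0, 100) for _ in range(5)]

median = statistics.median(voters)
favorite = min(candidates, key=lambda c: abs(c - median))    # candidate nearest the median voter

def prefers(voter, a, b):
    """True if this voter ranks candidate a above candidate b (closer is better)."""
    return abs(voter - a) < abs(voter - b)

for rival in candidates:
    if rival == favorite:
        continue
    support = sum(prefers(v, favorite, rival) for v in voters)
    assert support > len(voters) / 2, "median-favorite lost a pairwise contest"
    print(f"{favorite:6.2f} beats {rival:6.2f}  ({support} of {len(voters)} voters)")
```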
Extensions to higher dimensions It is impossible to fully generalize the median voter theorem to spatial models in more than one dimension, as there is no longer a single unique "median" for all possible distributions of voters. However, it is still possible to demonstrate similar theorems under some limited conditions. The table shows an example of an election given by the Marquis de Condorcet, who concluded it showed a problem with the Borda count. The Condorcet winner on the left is A, who is preferred to B by 41:40 and to C by 60:21. The Borda winner is instead B. However, Donald Saari constructs an example in two dimensions where the Borda count (but not the Condorcet winner) correctly identifies the candidate closest to the center (as determined by the geometric median). The diagram shows a possible configuration of the voters and candidates consistent with the ballots, with the voters positioned on the circumference of a unit circle. In this case, A's mean absolute deviation is 1.15, whereas B's is 1.09 (and C's is 1.70), making B the spatial winner. Thus the election is ambiguous in that two different spatial representations imply two different optimal winners. This is the ambiguity we sought to avoid earlier by adopting a median metric for spatial models; but although the median metric achieves its aim in a single dimension, the property does not fully generalize to higher dimensions. Omnidirectional medians Despite this result, the median voter theorem can be applied to distributions that are rotationally symmetric, e.g. Gaussians, which have a single median that is the same in all directions. Whenever the distribution of voters has a unique median in all directions, and voters rank candidates in order of proximity, the median voter theorem applies: the candidate closest to the median will have a majority preference over all his or her rivals, and will be elected by any voting method satisfying the median voter property in one dimension. It follows that all median voter methods satisfy the same property in spaces of any dimension, for voter distributions with omnidirectional medians. It is easy to construct voter distributions which do not have a median in all directions. The simplest example consists of a distribution limited to 3 points not lying in a straight line, such as 1, 2 and 3 in the second diagram. Each voter location coincides with the median under a certain set of one-dimensional projections. If A, B and C are the candidates, then '1' will vote A-B-C, '2' will vote B-C-A, and '3' will vote C-A-B, giving a Condorcet cycle. This is the subject of the McKelvey–Schofield theorem. Proof. See the diagram, in which the grey disc represents the voter distribution as uniform over a circle and M is the median in all directions. Let A and B be two candidates, of whom A is the closer to the median. Then the voters who rank A above B are precisely the ones to the left (i.e. the 'A' side) of the solid red line; and since A is closer than B to M, the median is also to the left of this line. Now, since M is a median in all directions, it coincides with the one-dimensional median in the particular case of the direction shown by the blue arrow, which is perpendicular to the solid red line. Thus if we draw a broken red line through M, perpendicular to the blue arrow, then we can say that half the voters lie to the left of this line. But since this line is itself to the left of the solid red line, it follows that more than half of the voters will rank A above B. 
Relation between the median in all directions and the geometric median Whenever a unique omnidirectional median exists, it determines the result of Condorcet voting methods. At the same time the geometric median can arguably be identified as the ideal winner of a ranked preference election. It is therefore important to know the relationship between the two. In fact whenever a median in all directions exists (at least for the case of discrete distributions), it coincides with the geometric median. Lemma. Whenever a discrete distribution has a median M  in all directions, the data points not located at M  must come in balanced pairs (A,A ' ) on either side of M  with the property that A – M – A ' is a straight line (ie. not like A 0 – M – A 2 in the diagram). Proof. This result was proved algebraically by Charles Plott in 1967. Here we give a simple geometric proof by contradiction in two dimensions. Suppose, on the contrary, that there is a set of points Ai which have M  as median in all directions, but for which the points not coincident with M  do not come in balanced pairs. Then we may remove from this set any points at M, and any balanced pairs about M, without M  ceasing to be a median in any direction; so M  remains an omnidirectional median. If the number of remaining points is odd, then we can easily draw a line through M  such that the majority of points lie on one side of it, contradicting the median property of M. If the number is even, say 2n, then we can label the points A 0, A1,... in clockwise order about M  starting at any point (see the diagram). Let θ be the angle subtended by the arc from M –A 0 to M –A n . Then if θ < 180° as shown, we can draw a line similar to the broken red line through M  which has the majority of data points on one side of it, again contradicting the median property of M ; whereas if θ > 180° the same applies with the majority of points on the other side. And if θ = 180°, then A 0 and A n form a balanced pair, contradicting another assumption. Theorem. Whenever a discrete distribution has a median M  in all directions, it coincides with its geometric median. Proof. The sum of distances from any point P  to a set of data points in balanced pairs (A,A ' ) is the sum of the lengths A – P – A '. Each individual length of this form is minimized over P when the line is straight, as happens when P  coincides with M. The sum of distances from P to any data points located at M is likewise minimized when P  and M  coincide. Thus the sum of distances from the data points to P is minimized when P coincides with M. Hotelling–Downs model A related observation was discussed by Harold Hotelling as his 'principle of minimum differentiation', also known as 'Hotelling's law'. It states that if: Candidates can choose ideological positions without consequence, Candidates only care about winning the election (not their actual beliefs), All other criteria of the median voter theorem are met (i.e. voters rank candidates by ideological distance), The voting system satisfies the median voter criterion, Then all politicians will converge to the median voter. As a special case, this law applies to the situation where there are exactly two candidates in the race, if it is impossible or implausible that any more candidates will join the race, because a simple majority vote between two alternatives satisfies the Condorcet criterion. This theorem was first described by Hotelling in 1929. 
In practice, none of these conditions hold for modern American elections, though they may have held in Hotelling's time (when nominees were often previously-unknown and chosen by closed party caucuses in ideologically diverse parties). Most importantly, politicians must win primary elections, which often include challengers or competitors, to be chosen as major-party nominees. As a result, politicians must compromise between appealing to the median voter in the primary and general electorates. Similar effects imply candidates do not converge to the median voter under electoral systems that do not satisfy the median voter theorem, including plurality voting, plurality-with-primaries, plurality-with-runoff, or ranked-choice runoff (RCV). Uses of the median voter theorem The theorem is valuable for the light it sheds on the optimality (and the limits to the optimality) of certain voting systems. Valerio Dotti points out broader areas of application: The Median Voter Theorem proved extremely popular in the Political Economy literature. The main reason is that it can be adopted to derive testable implications about the relationship between some characteristics of the voting population and the policy outcome, abstracting from other features of the political process. He adds that... The median voter result has been applied to an incredible variety of questions. Examples are the analysis of the relationship between income inequality and size of governmental intervention in redistributive policies (Meltzer and Richard, 1981), the study of the determinants of immigration policies (Razin and Sadka, 1999), of the extent of taxation on different types of income (Bassetto and Benhabib, 2006), and many more. See also Arrow's impossibility theorem McKelvey–Schofield chaos theorem Median mechanism Ranked voting Median voting rule Notes References Further reading Dasgupta, Partha and Eric Maskin, "On the Robustness of Majority Rule", Journal of the European Economic Association, 2008. External links The Median Voter Model Political science theories Public choice theory Voting theory Game theory Mathematical economics
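The claim in the section above relating the omnidirectional median to the geometric median can be probed numerically. The sketch below is not from the article: it applies Weiszfeld's iteration (a standard method for the geometric median) to an invented point set made of balanced pairs through the origin, so the computed geometric median should land on that common point, consistent with the theorem.

```python
# Weiszfeld iteration for the geometric median of a 2D point set (illustrative only).
import math

def geometric_median(points, iterations=200):
    """Weiszfeld's algorithm: iteratively re-weighted average of the data points."""
    x = sum(p[0] for p in points) / len(points)   # start at the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        num_x = num_y = denom = 0.0
        for px, py in points:
            d = math.hypot(px - x, py - y) or 1e-12   # guard against division by zero
            num_x += px / d
            num_y += py / d
            denom += 1.0 / d
        x, y = num_x / denom, num_y / denom
    return x, y

# Balanced pairs through M = (0, 0): each point has an antipodal partner.
data = [(3, 0), (-3, 0), (0, 5), (0, -5), (2, 2), (-2, -2)]
print(geometric_median(data))   # expect approximately (0.0, 0.0)
```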
Median voter theorem
[ "Mathematics" ]
2,645
[ "Applied mathematics", "Game theory", "Mathematical economics" ]
1,234,125
https://en.wikipedia.org/wiki/Kleene%20fixed-point%20theorem
In the mathematical areas of order and lattice theory, the Kleene fixed-point theorem, named after American mathematician Stephen Cole Kleene, states the following: Kleene Fixed-Point Theorem. Suppose (L, ⊑) is a directed-complete partial order (dcpo) with a least element ⊥, and let f : L → L be a Scott-continuous (and therefore monotone) function. Then f has a least fixed point, which is the supremum of the ascending Kleene chain of f. The ascending Kleene chain of f is the chain ⊥ ⊑ f(⊥) ⊑ f(f(⊥)) ⊑ ... ⊑ fⁿ(⊥) ⊑ ... obtained by iterating f on the least element ⊥ of L. Expressed in a formula, the theorem states that lfp(f) = sup { fⁿ(⊥) : n ∈ ℕ }, where lfp denotes the least fixed point. Although Tarski's fixed point theorem does not consider how fixed points can be computed by iterating f from some seed (also, it pertains to monotone functions on complete lattices), this result is often attributed to Alfred Tarski, who proves it for additive functions. Moreover, the Kleene fixed-point theorem can be extended to monotone functions using transfinite iterations. Proof Source: We first have to show that the ascending Kleene chain of f exists in L. To show that, we prove the following: Lemma. If L is a dcpo with a least element, and f : L → L is Scott-continuous, then fⁿ(⊥) ⊑ fⁿ⁺¹(⊥) for all n ≥ 0. Proof. We use induction: Assume n = 0. Then f⁰(⊥) = ⊥ ⊑ f¹(⊥), since ⊥ is the least element. Assume n > 0. Then we have to show that fⁿ(⊥) ⊑ fⁿ⁺¹(⊥). By rearranging we get f(fⁿ⁻¹(⊥)) ⊑ f(fⁿ(⊥)). By inductive assumption, we know that fⁿ⁻¹(⊥) ⊑ fⁿ(⊥) holds, and because f is monotone (a property of Scott-continuous functions), the result holds as well. As a corollary of the Lemma we have the following directed ω-chain: M = { ⊥, f(⊥), f(f(⊥)), ... }. From the definition of a dcpo it follows that M has a supremum, call it m. What remains now is to show that m is the least fixed point. First, we show that m is a fixed point, i.e. that f(m) = m. Because f is Scott-continuous, f(sup M) = sup f(M), that is f(m) = sup f(M). Also, since M = f(M) ∪ {⊥} and because ⊥ has no influence in determining the supremum we have: sup f(M) = sup M. It follows that f(m) = m, making m a fixed point of f. The proof that m is in fact the least fixed point can be done by showing that any element in M is smaller than any fixed point of f (because by the property of the supremum, if all elements of a set are smaller than an element of L, then also its supremum is smaller than that same element of L). This is done by induction: Assume k is some fixed point of f. We now prove by induction over n that fⁿ(⊥) ⊑ k. The base of the induction obviously holds: f⁰(⊥) = ⊥ ⊑ k, since ⊥ is the least element of L. As the induction hypothesis, we may assume that fⁿ(⊥) ⊑ k. We now do the induction step: From the induction hypothesis and the monotonicity of f (again, implied by the Scott-continuity of f), we may conclude the following: fⁿ⁺¹(⊥) = f(fⁿ(⊥)) ⊑ f(k). Now, by the assumption that k is a fixed point of f, we know that f(k) = k, and from that we get fⁿ⁺¹(⊥) ⊑ k. See also Other fixed-point theorems References Order theory Fixed-point theorems
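As a concrete, hypothetical illustration of the ascending Kleene chain described above, the Python sketch below iterates a monotone map on a finite powerset lattice, computing graph reachability as a least fixed point. The graph and the function are invented examples, not part of the article.

```python
# Ascending Kleene chain on a finite powerset lattice (illustrative sketch).
# Bottom is the empty set; f adds a seed node plus the successors of everything
# found so far, so its least fixed point is the set of nodes reachable from 'a'.
edges = {"a": {"b"}, "b": {"c"}, "c": {"b"}, "d": {"a"}}   # assumed example graph

def f(s: frozenset) -> frozenset:
    """Monotone map on the powerset lattice ordered by inclusion."""
    return frozenset({"a"}) | frozenset(x for n in s for x in edges.get(n, ()))

chain = [frozenset()]            # bottom element of the lattice
while True:
    nxt = f(chain[-1])
    if nxt == chain[-1]:         # chain has stabilised: supremum reached
        break
    chain.append(nxt)

for i, stage in enumerate(chain):
    print(f"f^{i}(bottom) = {sorted(stage)}")
print("least fixed point:", sorted(chain[-1]))   # ['a', 'b', 'c']
```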
Kleene fixed-point theorem
[ "Mathematics" ]
617
[ "Theorems in mathematical analysis", "Order theory", "Fixed-point theorems", "Theorems in topology" ]
1,234,251
https://en.wikipedia.org/wiki/PCF%20theory
PCF theory is the name of a mathematical theory, introduced by Saharon Shelah, that deals with the cofinality of the ultraproducts of ordered sets. It gives strong upper bounds on the cardinalities of power sets of singular cardinals, and has many more applications as well. The abbreviation "PCF" stands for "possible cofinalities". Main definitions If A is an infinite set of regular cardinals and D is an ultrafilter on A, then we let cf(∏A/D) denote the cofinality of the ordered set of functions ∏A, where the ordering is defined as follows: f < g if {a ∈ A : f(a) < g(a)} ∈ D. pcf(A) is the set of cofinalities that occur if we consider all ultrafilters on A, that is, pcf(A) = {cf(∏A/D) : D is an ultrafilter on A}. Main results Obviously, pcf(A) consists of regular cardinals. Considering ultrafilters concentrated on elements of A, we get that A ⊆ pcf(A). Shelah proved that if |A| < min(A), then pcf(A) has a largest element, and there are subsets {Bθ : θ ∈ pcf(A)} of A such that for each ultrafilter D on A, cf(∏A/D) is the least element θ of pcf(A) such that Bθ ∈ D. Consequently, |pcf(A)| ≤ 2^|A|. Shelah also proved that if A is an interval of regular cardinals (i.e., A is the set of all regular cardinals between two cardinals), then pcf(A) is also an interval of regular cardinals and |pcf(A)| < |A|^{+4}. This implies the famous inequality 2^{ℵω} < ℵ_{ω4}, assuming that ℵω is a strong limit. If λ is an infinite cardinal, then J<λ is the following ideal on A: B ∈ J<λ if cf(∏A/D) < λ holds for every ultrafilter D with B ∈ D. Then J<λ is the ideal generated by the sets Bθ with θ < λ. There exist scales, i.e., for every λ ∈ pcf(A) there is a sequence of length λ of elements of ∏A which is both increasing and cofinal mod J<λ. This implies that the cofinality of ∏A under pointwise dominance is max(pcf(A)). Another consequence is that if λ is singular and no regular cardinal less than λ is Jónsson, then also λ+ is not Jónsson. In particular, there is a Jónsson algebra on ℵω+1, which settles an old conjecture. Unsolved problems The most notorious conjecture in pcf theory states that |pcf(A)| = |A| holds for every set A of regular cardinals with |A| < min(A). This would imply that if ℵω is a strong limit, then the sharp bound 2^{ℵω} < ℵ_{ω1} holds. The analogous bound follows from Chang's conjecture (Magidor) or even from the nonexistence of a Kurepa tree (Shelah). A weaker, still unsolved conjecture states that if |A| < min(A), then pcf(A) has no inaccessible limit point. This is equivalent to the statement that pcf(pcf(A)) = pcf(A). Applications The theory has found a great deal of applications, besides cardinal arithmetic. The original survey by Shelah, Cardinal arithmetic for skeptics, includes the following topics: almost free abelian groups, partition problems, failure of preservation of chain conditions in Boolean algebras under products, existence of Jónsson algebras, existence of entangled linear orders, equivalently narrow Boolean algebras, and the existence of nonisomorphic models equivalent in certain infinitary logics. In the meantime, many further applications have been found in Set Theory, Model Theory, Algebra and Topology. References Saharon Shelah, Cardinal Arithmetic, Oxford Logic Guides, vol. 29. Oxford University Press, 1994. External links Menachem Kojman: PCF Theory Set theory
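Since the displayed formulas were lost in extraction, the following LaTeX block restates the basic definitions in their standard textbook form; it is a reconstruction consistent with the prose above, not a verbatim recovery of the original article's notation.

```latex
% Standard formulation of the basic pcf definitions (a reconstruction of the
% stripped display formulas, following the usual textbook presentation).
\[
  f <_D g \iff \{\, a \in A : f(a) < g(a) \,\} \in D ,
  \qquad
  \operatorname{pcf}(A) = \{\, \operatorname{cf}(\textstyle\prod A / D) :
    D \text{ an ultrafilter on } A \,\},
\]
\[
  J_{<\lambda} = \{\, B \subseteq A : \operatorname{cf}(\textstyle\prod A / D) < \lambda
    \text{ for every ultrafilter } D \text{ with } B \in D \,\},
\]
\[
  \text{and, for } \aleph_\omega \text{ a strong limit:}\qquad
  2^{\aleph_\omega} < \aleph_{\omega_4}.
\]
```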
PCF theory
[ "Mathematics" ]
766
[ "Mathematical logic", "Set theory" ]
1,234,368
https://en.wikipedia.org/wiki/Fractional%20quantum%20Hall%20effect
The fractional quantum Hall effect (FQHE) is the observation of precisely quantized plateaus in the Hall conductance of 2-dimensional (2D) electrons at fractional values of e²/h, where e is the electron charge and h is the Planck constant. It is a property of a collective state in which electrons bind magnetic flux lines to make new quasiparticles, and excitations have a fractional elementary charge and possibly also fractional statistics. The 1998 Nobel Prize in Physics was awarded to Robert Laughlin, Horst Störmer, and Daniel Tsui "for their discovery of a new form of quantum fluid with fractionally charged excitations". The microscopic origin of the FQHE is a major research topic in condensed matter physics. Descriptions The fractional quantum Hall effect (FQHE) is a collective behavior in a 2D system of electrons. At particular magnetic fields, the electron gas condenses into a remarkable liquid state, which is very delicate, requiring high quality material with a low carrier concentration, and extremely low temperatures. As in the integer quantum Hall effect, the Hall resistance undergoes certain quantum Hall transitions to form a series of plateaus. Each particular value of the magnetic field corresponds to a filling factor ν = p/q (the ratio of electrons to magnetic flux quanta), where p and q are integers with no common factors. Here q turns out to be an odd number with the exception of two filling factors 5/2 and 7/2. The principal series of such fractions are n/(2n + 1) and n/(2n − 1). Fractionally charged quasiparticles are neither bosons nor fermions and exhibit anyonic statistics. The fractional quantum Hall effect continues to be influential in theories about topological order. Certain fractional quantum Hall phases appear to have the right properties for building a topological quantum computer. History and developments The FQHE was experimentally discovered in 1982 by Daniel Tsui and Horst Störmer, in experiments performed on heterostructures made out of gallium arsenide developed by Arthur Gossard. There were several major steps in the theory of the FQHE. Laughlin states and fractionally-charged quasiparticles: this theory, proposed by Robert B. Laughlin, is based on accurate trial wave functions for the ground state at fraction ν = 1/q, as well as its quasiparticle and quasihole excitations. The excitations have fractional charge of magnitude e/q. Fractional exchange statistics of quasiparticles: Bertrand Halperin conjectured, and Daniel Arovas, John Robert Schrieffer, and Frank Wilczek demonstrated, that the fractionally charged quasiparticle excitations of the Laughlin states are anyons with fractional statistical angle θ = π/q; the wave function acquires a phase factor of e^{iθ} (together with an Aharonov–Bohm phase factor) when identical quasiparticles are exchanged in a counterclockwise sense. A recent experiment seems to give a clear demonstration of this effect. Hierarchy states: this theory was proposed by Duncan Haldane, and further clarified by Bertrand Halperin, to explain the observed filling fractions not occurring at the Laughlin states' ν = 1/q. Starting with the Laughlin states, new states at different fillings can be formed by condensing quasiparticles into their own Laughlin states. The new states and their fillings are constrained by the fractional statistics of the quasiparticles, producing e.g. the 2/5 and 2/7 states from the 1/3 Laughlin state.
Similarly constructing another set of new states by condensing quasiparticles of the first set of new states, and so on, produces a hierarchy of states covering all the odd-denominator filling fractions. This idea has been validated quantitatively, and brings out the observed fractions in a natural order. Laughlin's original plasma model was extended to the hierarchy states by Allan H. MacDonald and others. Using methods introduced by Greg Moore and Nicholas Read, based on conformal field theory explicit wave functions can be constructed for all hierarchy states. Composite fermions: this theory was proposed by Jainendra K. Jain, and further extended by Halperin, Patrick A. Lee and Read. The basic idea of this theory is that as a result of the repulsive interactions, two (or, in general, an even number of) vortices are captured by each electron, forming integer-charged quasiparticles called composite fermions. The fractional states of the electrons are understood as the integer QHE of composite fermions. For example, this makes electrons at filling factors 1/3, 2/5, 3/7, etc. behave in the same way as at filling factor 1, 2, 3, etc. Composite fermions have been observed, and the theory has been verified by experiment and computer calculations. Composite fermions are valid even beyond the fractional quantum Hall effect; for example, the filling factor 1/2 corresponds to zero magnetic field for composite fermions, resulting in their Fermi sea. Tsui, Störmer, and Robert B. Laughlin were awarded the 1998 Nobel Prize in Physics for their work. Evidence for fractionally-charged quasiparticles Experiments have reported results that specifically support the understanding that there are fractionally-charged quasiparticles in an electron gas under FQHE conditions. In 1995, the fractional charge of Laughlin quasiparticles was measured directly in a quantum antidot electrometer at Stony Brook University, New York. In 1997, two groups of physicists at the Weizmann Institute of Science in Rehovot, Israel, and at the Commissariat à l'énergie atomique laboratory near Paris, detected such quasiparticles carrying an electric current, through measuring quantum shot noise Both of these experiments have been confirmed with certainty. A more recent experiment, measures the quasiparticle charge. Impact The FQH effect shows the limits of Landau's symmetry breaking theory. Previously it was held that the symmetry breaking theory could explain all the important concepts and properties of forms of matter. According to this view, the only thing to be done was to apply the symmetry breaking theory to all different kinds of phases and phase transitions. From this perspective, the importance of the FQHE discovered by Tsui, Stormer, and Gossard is notable for contesting old perspectives. The existence of FQH liquids suggests that there is much more to discover beyond the present symmetry breaking paradigm in condensed matter physics. Different FQH states all have the same symmetry and cannot be described by symmetry breaking theory. The associated fractional charge, fractional statistics, non-Abelian statistics, chiral edge states, etc. demonstrate the power and the fascination of emergence in many-body systems. Thus FQH states represent new states of matter that contain a completely new kind of order—topological order. For example, properties once deemed isotropic for all materials may be anisotropic in 2D planes. 
The new type of orders represented by FQH states greatly enrich our understanding of quantum phases and quantum phase transitions. See also Hall probe Laughlin wavefunction Macroscopic quantum phenomena Quantum anomalous Hall effect Quantum Hall Effect Quantum spin Hall effect Topological order Fractional Chern insulator Notes References Hall effect Correlated electrons Quantum phases Mesoscopic physics Unsolved problems in physics Unexplained phenomena
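To make the plateau values concrete, the short Python sketch below (an illustration, not part of the article) prints the quantized Hall resistance R_xy = h/(ν e²) at the first few composite-fermion fractions ν = n/(2n + 1) and n/(2n − 1); the chosen values of n are arbitrary.

```python
# Quantized Hall resistance at principal composite-fermion filling fractions (sketch).
from fractions import Fraction

h = 6.62607015e-34       # Planck constant, J*s
e = 1.602176634e-19      # elementary charge, C

def hall_resistance(nu: Fraction) -> float:
    """Hall resistance (ohms) on the plateau at filling factor nu."""
    return h / (float(nu) * e**2)

for n in range(1, 5):
    for nu in (Fraction(n, 2 * n + 1), Fraction(n, 2 * n - 1)):
        if nu.denominator == 1:      # skip the integer effect (n = 1 gives nu = 1)
            continue
        print(f"nu = {nu!s:>4}   R_xy = {hall_resistance(nu) / 1e3:8.3f} kOhm")
```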
Fractional quantum Hall effect
[ "Physics", "Chemistry", "Materials_science" ]
1,499
[ "Quantum phases", "Physical phenomena", "Hall effect", "Phases of matter", "Unsolved problems in physics", "Quantum mechanics", "Electric and magnetic fields in matter", "Electrical phenomena", "Condensed matter physics", "Correlated electrons", "Mesoscopic physics", "Solid state engineering",...
1,235,271
https://en.wikipedia.org/wiki/Betatron
A betatron is a type of cyclic particle accelerator for electrons. It consists of a torus-shaped vacuum chamber with an electron source. Circling the torus is an iron transformer core with a wire winding around it. The device functions similarly to a transformer, with the electrons in the torus-shaped vacuum chamber as its secondary coil. An alternating current in the primary coils accelerates electrons in the vacuum around a circular path. The betatron was the first machine capable of producing electron beams at energies higher than could be achieved with a simple electron gun, and the first circular accelerator in which particles orbited at a constant radius. The concept of the betatron had been proposed as early as 1922 by Joseph Slepian. Through the 1920s and 30s a number of theoretical problems related to the device were considered by scientists including Rolf Wideroe, Ernest Walton, and Max Steenbeck. The first working betatron was constructed by Donald Kerst at the University of Illinois Urbana-Champaign in 1940. History After the discovery in the 1800s of Faraday's law of induction, which showed that an electromotive force could be generated by a changing magnetic field, several scientists speculated that this effect could be used to accelerate charged particles to high energies. Joseph Slepian proposed a device in 1922 that would use permanent magnets to steer the beam while it was accelerated by a changing magnetic field. However, he did not pursue the idea past the theoretical stage. In the late 1920s, Gregory Breit and Merle Tuve at the Bureau of Terrestrial Magnetism constructed a working device that used varying magnetic fields to accelerate electrons. Their device placed two solenoidal magnets next to one another and fired electrons from a gun at the outer edge of the magnetic field. As the field was increased, the electrons accelerated in to strike a target at the center of the field, producing X-rays. This device took a step towards the betatron concept by shaping the magnetic field to keep the particles focused in the plane of acceleration. In 1929, Rolf Wideroe made the next major contribution to the development of the theory by deriving the Wideroe Condition for stable orbits. He determined that in order for the orbit radius to remain constant, the field at the radius must be exactly half of the average field over the area of the magnet. This critical calculation allowed for the development of accelerators in which the particles orbited at a constant radius, rather than spiraling inward, as in the case of Breit and Tuve's machine, or outward, as in the case of the cyclotron. Although Wideroe made valuable contributions to the development of the theory of the Betatron, he was unable to build a device in which the electrons orbited more than one and a half times, as his device had no mechanism to keep the beam focused. Simultaneously with Wideroe's experiments, Ernest Walton analyzed the orbits of electrons in a magnetic field, and determined that it was possible to construct an orbit that was radially focused in the plane of the orbit. Particles in such an orbit which moved a small distance away from the orbital radius would experience a force pushing them back to the correct radius. These oscillations about a stable orbit in a circular accelerator are now referred to as betatron oscillations. 
In 1935 Max Steenbeck applied in Germany for a patent on a device that would combine the radial focusing condition of Walton with the vertical focusing used in Breit and Tuve's machine. He later claimed to have built a working machine, but this claim was disputed. The first team unequivocally acknowledged to have built a working betatron was led by Donald Kerst at the University of Illinois. The accelerator was completed on July 15, 1940. Operation principle In a betatron, the changing magnetic field from the primary coil accelerates electrons injected into the vacuum torus, causing them to circle around the torus in the same manner as current is induced in the secondary coil of a transformer (Faraday's law). The stable orbit for the electrons satisfies Φ = 2π r₀² B(r₀), where Φ is the flux within the area enclosed by the electron orbit, r₀ is the radius of the electron orbit, and B(r₀) is the magnetic field at r₀. In other words, the magnetic field at the orbit must be half the average magnetic field over its circular cross section: B(r₀) = ½ · Φ / (π r₀²). This condition is often called Widerøe's condition. Etymology The name "betatron" (a reference to the beta particle, a fast electron) was chosen during a departmental contest. Other proposals were "rheotron", "induction accelerator", "induction electron accelerator", and even "Außerordentlichehochgeschwindigkeitselektronenentwickelndesschwerarbeitsbeigollitron", a suggestion by a German associate, for "Hard working by golly machine for generating extraordinarily high velocity electrons" or perhaps "Extraordinarily high velocity electron generator, high energy by golly-tron." Applications Betatrons were historically employed in particle physics experiments to provide high-energy beams of electrons of up to about 300 MeV. If the electron beam is directed at a metal plate, the betatron can be used as a source of energetic x-rays, which may be used in industrial and medical applications (historically in radiation oncology). A small version of a betatron was also used to provide a source of hard X-rays (by deceleration of the electron beam in a target) for prompt initiation of some experimental nuclear weapons by means of photon-induced fission and photofission in the bomb core. The Radiation Center, the first private medical center to treat cancer patients with a betatron, was opened by Dr. O. Arthur Stiennon in a suburb of Madison, Wisconsin in the late 1950s. Limitations The maximum energy that a betatron can impart is limited by the strength of the magnetic field due to the saturation of iron and by the practical size of the magnet core. The next generation of accelerators, the synchrotrons, overcame these limitations. References External links The Betatron at UIUC Accelerator physics German inventions of the Nazi period
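The Widerøe condition stated in the operation-principle section above can be checked numerically. The Python sketch below is not from the article: it assumes a 1/r guide-field profile (one profile that happens to satisfy the condition) and invented values for the orbit radius and field, then verifies that the orbit field equals half the flux-averaged field.

```python
# Numerical check of the Widerøe (betatron) condition B(r0) = Phi / (2*pi*r0^2),
# i.e. the orbit field equals half the flux-averaged field (illustrative values).
import math

r0 = 0.5                 # orbit radius in metres (assumed)
B_orbit = 0.1            # field at the orbit in tesla (assumed)

def B(r):
    """Example guide-field profile; B ~ 1/r happens to satisfy the betatron condition."""
    return B_orbit * r0 / r

# Flux through the orbit: integrate B(r) * 2*pi*r dr from 0 to r0 (midpoint rule).
steps = 100_000
dr = r0 / steps
flux = sum(B((i + 0.5) * dr) * 2 * math.pi * ((i + 0.5) * dr) * dr for i in range(steps))

average_field = flux / (math.pi * r0 ** 2)
print(f"B(r0)         = {B_orbit:.6f} T")
print(f"average field = {average_field:.6f} T")
print(f"half average  = {average_field / 2:.6f} T   (should match B(r0))")
```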
Betatron
[ "Physics" ]
1,258
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
2,561,255
https://en.wikipedia.org/wiki/Exploration%20geophysics
Exploration geophysics is an applied branch of geophysics and economic geology, which uses physical methods at the surface of the Earth, such as seismic, gravitational, magnetic, electrical and electromagnetic, to measure the physical properties of the subsurface, along with the anomalies in those properties. It is most often used to detect or infer the presence and position of economically useful geological deposits, such as ore minerals; fossil fuels and other hydrocarbons; geothermal reservoirs; and groundwater reservoirs. It can also be used to detect the presence of unexploded ordnance. Exploration geophysics can be used to directly detect the target style of mineralization by measuring its physical properties directly. For example, one may measure the density contrasts between the dense iron ore and the lighter silicate host rock, or one may measure the electrical conductivity contrast between conductive sulfide minerals and the resistive silicate host rock. Geophysical methods The main techniques used are: Seismic tomography to locate earthquakes and assist in Seismology. Reflection seismology and seismic refraction to map the surface structure of a region. Geodesy and gravity techniques, including gravity gradiometry. Magnetic techniques, including aeromagnetic surveys to map magnetic anomalies. Electrical techniques, including electrical resistivity tomography and induced polarization. Electromagnetic methods, such as magnetotellurics, ground penetrating radar, transient/time-domain electromagnetics, and SNMR. Borehole geophysics, also called well logging. Remote sensing techniques, including hyperspectral imaging. Many other techniques, or methods of integration of the above techniques, have been developed and are currently used. However these are not as common due to cost-effectiveness, wide applicability, and/or uncertainty in the results produced. Uses Exploration geophysics is also used to map the subsurface structure of a region, to elucidate the underlying structures, to recognize spatial distribution of rock units, and to detect structures such as faults, folds and intrusive rocks. This is an indirect method for assessing the likelihood of ore deposits or hydrocarbon accumulations. Methods devised for finding mineral or hydrocarbon deposits can also be used in other areas such as monitoring environmental impact, imaging subsurface archaeological sites, ground water investigations, subsurface salinity mapping, civil engineering site investigations, and interplanetary imaging. Mineral exploration Magnetometric surveys can be useful in defining magnetic anomalies which represent ore (direct detection), or in some cases gangue minerals associated with ore deposits (indirect or inferential detection). The most direct method of detection of ore via magnetism involves detecting iron ore mineralization via mapping magnetic anomalies associated with banded iron formations which usually contain magnetite in some proportion. Skarn mineralization, which often contains magnetite, can also be detected though the ore minerals themselves would be non-magnetic. Similarly, magnetite, hematite, and often pyrrhotite are common minerals associated with hydrothermal alteration, which can be detected to provide an inference that some mineralizing hydrothermal event has affected the rocks. Gravity surveying can be used to detect dense bodies of rocks within host formations of less dense wall rocks. 
This can be used to directly detect Mississippi Valley Type ore deposits, IOCG ore deposits, iron ore deposits, skarn deposits, and salt diapirs which can form oil and gas traps. Electromagnetic (EM) surveys can be used to help detect a wide variety of mineral deposits, especially base metal sulphides via detection of conductivity anomalies which can be generated around sulphide bodies in the subsurface. EM surveys are also used in diamond exploration (where the kimberlite pipes tend to have lower resistance than enclosing rocks), graphite exploration, palaeochannel-hosted uranium deposits (which are associated with shallow aquifers, which often respond to EM surveys in a conductive overburden). These are indirect inferential methods of detecting mineralization, as the commodity being sought is not directly conductive, or not sufficiently conductive to be measurable. EM surveys are also used in unexploded ordnance, archaeological, and geotechnical investigations. Regional EM surveys are conducted via airborne methods, using either fixed-wing aircraft or helicopter-borne EM rigs. Surface EM methods are based mostly on Transient EM methods using surface loops with a surface receiver, or a downhole tool lowered into a borehole which transects a body of mineralization. These methods can map out sulphide bodies within the earth in three dimensions, and provide information to geologists to direct further exploratory drilling on known mineralization. Surface loop surveys are rarely used for regional exploration, however in some cases such surveys can be used with success (e.g.; SQUID surveys for nickel ore bodies). Electric-resistance methods such as induced polarization methods can be useful for directly detecting sulfide bodies, coal, and resistive rocks such as salt and carbonates. Seismic methods can also be used for mineral exploration, since they can provide high-resolution images of geologic structures hosting mineral deposits. It is not just surface seismic surveys which are used, but also borehole seismic methods. All in all, the usage of seismic methods for mineral exploration is steadily increasing. Hydrocarbon exploration Seismic reflection and refraction techniques are the most widely used geophysical technique in hydrocarbon exploration. They are used to map the subsurface distribution of stratigraphy and its structure which can be used to delineate potential hydrocarbon accumulations, both stratigraphic and structural deposits or "traps". Well logging is another widely used technique as it provides necessary high resolution information about rock and fluid properties in a vertical section, although they are limited in areal extent. This limitation in areal extent is the reason why seismic reflection techniques are so popular; they provide a method for interpolating and extrapolating well log information over a much larger area. Gravity and magnetics are also used, with considerable frequency, in oil and gas exploration. These can be used to determine the geometry and depth of covered geological structures including uplifts, subsiding basins, faults, folds, igneous intrusions, and salt diapirs due to their unique density and magnetic susceptibility signatures compared to the surrounding rocks; the latter is particularly useful for metallic ores. Remote sensing techniques, specifically hyperspectral imaging, have been used to detect hydrocarbon microseepages using the spectral signature of geochemically altered soils and vegetation. 
Specifically at sea, two methods are used: marine seismic reflection and electromagnetic seabed logging (SBL). Marine magnetotellurics (mMT), or marine Controlled Source Electro-Magnetics (mCSEM), can provide pseudo-direct detection of hydrocarbons by detecting resistivity changes over geological traps (signalled by seismic surveys). Civil engineering Ground penetrating radar Ground penetrating radar is a non-invasive technique, and is used within civil construction and engineering for a variety of uses, including detection of utilities (buried water, gas, sewerage, electrical and telecommunication cables), mapping of soft soils, overburden for geotechnical characterization, and other similar uses. Spectral-Analysis-of-Surface-Waves The Spectral-Analysis-of-Surface-Waves (SASW) method is another non-invasive technique, which is widely used in practice to detect the shear wave velocity profile of the soil. The SASW method relies on the dispersive nature of Raleigh waves in layered media, i.e., the wave-velocity depends on the load's frequency. A material profile, based on the SASW method, is thus obtained according to: a) constructing an experimental dispersion curve, by performing field experiments, each time using a different loading frequency, and measuring the surface wave-speed for each frequency; b) constructing a theoretical dispersion curve, by assuming a trial distribution for the material properties of a layered profile; c) varying the material properties of the layered profile, and repeating the previous step, until a match between the experimental dispersion curve, and the theoretical dispersion curve is attained. The SASW method renders a layered (one-dimensional) shear wave velocity profile for the soil. Full waveform inversion Full-waveform-inversion (FWI) methods are among the most recent techniques for geotechnical site characterization, and are still under continuous development. The method is fairly general, and is capable of imaging the arbitrarily heterogeneous compressional and shear wave velocity profiles of the soil. Elastic waves are used to probe the site under investigation, by placing seismic vibrators on the ground surface. These waves propagate through the soil, and due to the heterogeneous geological structure of the site under investigation, multiple reflections and refractions occur. The response of the site to the seismic vibrator is measured by sensors (geophones), also placed on the ground surface. Two key-components are required for the profiling based on full-waveform inversion. These components are: a) a computer model for the simulation of elastic waves in semi-infinite domains; and b) an optimization framework, through which the computed response is matched to the measured response by iteratively updating an initially assumed material distribution for the soil. Other techniques Civil engineering can also use remote sensing information for topographical mapping, planning, and environmental impact assessment. Airborne electromagnetic surveys are also used to characterize soft sediments in planning and engineering roads, dams, and other structures. Magnetotellurics has proven useful for delineating groundwater reservoirs, mapping faults around areas where hazardous substances are stored (e.g. nuclear power stations and nuclear waste storage facilities), and earthquake precursor monitoring in areas with major structures such as hydro-electric dams subject to high levels of seismic activity. 
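The SASW and FWI workflows described above share the same generic structure: predict a response from an assumed profile, compare it with the measured response, and update the profile until the two match. The Python sketch below (not from the article) caricatures that loop with an invented two-layer dispersion stand-in and a brute-force grid search; real forward models and inversion schemes are far more sophisticated.

```python
# Toy profile-matching loop in the spirit of SASW/FWI (all numbers invented).
def predicted_phase_velocity(freq, v_shallow, v_deep, transition=10.0):
    """Crude two-layer stand-in: low frequencies sample deeper, faster material."""
    weight = freq / (freq + transition)          # 0 (deep) -> 1 (shallow)
    return weight * v_shallow + (1 - weight) * v_deep

frequencies = [2, 5, 10, 20, 40, 80]             # Hz (assumed survey band)
measured = [predicted_phase_velocity(f, 180.0, 420.0) for f in frequencies]  # synthetic "field data"

best = None
for v_shallow in range(100, 301, 5):             # grid search over the unknown layer velocities
    for v_deep in range(300, 601, 5):
        predicted = [predicted_phase_velocity(f, v_shallow, v_deep) for f in frequencies]
        misfit = sum((p - m) ** 2 for p, m in zip(predicted, measured))
        if best is None or misfit < best[0]:
            best = (misfit, v_shallow, v_deep)

print("recovered shear-wave velocities (m/s):", best[1], best[2])   # expect ~180 and ~420
```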
BS 5930 is the standard used in the UK as a code of practice for site investigations. Archaeology Ground penetrating radar can be used to map buried artifacts, such as graves, mortuaries, wreck sites, and other shallowly buried archaeological sites. Ground magnetometric surveys can be used for detecting buried ferrous metals, useful in surveying shipwrecks, modern battlefields strewn with metal debris, and even subtle disturbances such as large-scale ancient ruins. Sonar systems can be used to detect shipwrecks. Active sonar systems emit sound pulses into the water which then bounce off of objects and are returned to the sonar transducer. The sonar transducer is able to determine both the range and orientation of an underwater object by measuring the amount of time between the release of the sound pulse and its returned reception. Passive sonar systems are used to detect noises from marine objects or animals. This system does not emit sound pulses itself but instead focuses on sound detection from marine sources. This system simply 'listens' to the ocean, rather than measuring the range or orientation of an object. Forensics Ground penetrating radar can be used to detect grave sites. This detection is of both legal and cultural importance, providing an opportunity for affected families to pursue justice through legal punishment of those responsible and to experience closure over the loss of a loved one. Unexploded ordnance detection Unexploded ordnance (or UXO) refers to the dysfunction or non-explosion of military explosives. Examples of these include, but are not limited to: bombs, flares, and grenades. It is important to be able to locate and contain unexploded ordnance to avoid injuries, and even possible death, to those who may come in contact with them. The issue of unexploded ordnance originated as a result of the Crimean War (1853-1856). Before this, most unexploded ordnance was locally contained in smaller volumes, and was thus not a huge public issue. However, with the introduction of more widespread warfare, these quantities increased and were thus easy to lose track of and contain. According to Hooper & Hambric in their piece Unexploded Ordnance (UXO): The Problem, if we are unable to move away from war in the context of conflict resolution, this problem will only continue to get worse and will likely take more than a century to resolve. Since our global method of conflict resolution banks on warfare, we must be able to rely on specific practices to detect this unexploded ordnance, such as magnetic and electromagnetic surveys. By looking at differences in magnetic susceptibility and/or electrical conductivity in relation to the unexploded ordnance and the surrounding geology (soil, rock, etc.), we are able to detect and contain unexploded ordnance. See also Archaeological geophysics Hydrocarbon exploration Kola Superdeep Borehole Leibniz Institute for Applied Geophysics Mineral exploration Ore genesis Petroleum geology Society of Exploration Geophysicists References Geophysics Economic geology
Exploration geophysics
[ "Physics" ]
2,646
[ "Applied and interdisciplinary physics", "Geophysics" ]
2,561,815
https://en.wikipedia.org/wiki/Platform%20screen%20doors
Platform screen doors (PSDs), also known as platform edge doors (PEDs), are used at some train, rapid transit and people mover stations to separate the platform from train tracks, as well as on some bus rapid transit, tram and light rail systems. Primarily used for passenger safety, they are a relatively new addition to many metro systems around the world, some having been retrofitted to established systems. They are widely used in newer Asian and European metro systems, and Latin American bus rapid transit systems. History The idea of platform edge doors dates from as early as 1908, when Charles S. Shute of Boston was granted a patent for "Safety fence and gate for railway-platforms". The invention consisted of "a fence for railway platform edges", composed of a series of pickets bolted to the platform edge, and vertically movable pickets that could retract into a platform edge when there was a train in the station. In 1917, Carl Albert West was granted a patent for "Gate for subrailways and the like". The invention provided for spaced guides secured to a tunnel's side wall, with "a gate having its ends guided in the guides, the ends and intermediate portions of the gate having rollers engaging the side wall". Pneumatic cylinders with pistons would be used to raise the gates above the platform when a train was in the station. Unlike Shute's invention, the entire platform gate was movable, and was to retract upward. The first stations in the world with platform screen doors were the ten stations of the Saint Petersburg Metro's Line 2 that opened between 1961 and 1972. The platform "doors" are actually openings in the station wall which supports the ceiling of the platform. The track tunnels adjoining the ten stations' island platforms were built with tunnel boring machines (TBMs), and the island platforms were located in a separate vault between the two track tunnels. Usually, TBMs bore the deep-level tunnels between stations, while the station vaults are dug out manually and contain both the tracks and the platform. However, in the case of the Saint Petersburg Metro, the TBMs bored a pair of continuous tunnels that passed through ten stations, and the stations themselves were built in vaults that only contained the platform, with small openings on the sides of the vault, in order for passengers to access the trains in the tunnels. Singapore's Mass Rapid Transit, opened in 1987, is often described as the first heavy Metro system in the world to incorporate PSDs into its stations for climate control and safety reasons, rather than architectural constraints, though the light Lille Metro, opened in 1983, predates it. Types Although the terms are often used interchangeably, platform screen doors can refer to both full-height and half-height barriers. Full height platform screen doors are total barriers between the station floor and ceiling, while the half-height platform screen doors are referred to as platform edge doors or automatic platform gates, as they do not reach the ceiling and thus do not create a total barrier. Platform gates are usually only half of the height of the full-screen doors, are chest-height sliding doors at the edge of railway platforms to prevent passengers from falling off the platform edge onto the railway tracks. But they sometimes reach to the height of the train. Like full-height platform screen doors, these platform gates slide open or close simultaneously with the train doors. These two types of platform screen doors are presently the main types in the world. 
Platform screen doors and platform edge doors The doors help to: Prevent people from accidentally falling onto the tracks, getting too close to moving trains, and committing suicide (by jumping) or homicide (by pushing). Use of platform screen doors in South Korea has reduced rail related suicide by 89%. Prevent or reduce wind felt by the passengers caused by the piston effect which could in some circumstances make people lose their balance. Improve safety—reduce the risk of accidents, especially from trains passing through the station at high speeds. Improve climate control within the station (heating, ventilation, and air conditioning are more effective when the station is physically isolated from the tunnel). Improve security—access to the tracks and tunnels is restricted. Lower costs—eliminate the need for motormen or conductors when used in conjunction with automatic train operation, thereby reducing manpower costs. Prevent litter buildup on the tracks, which can be a fire risk, as well as damage and possibly obstruct trains. Improve the sound quality of platform announcements, as background noise from the tunnels and trains that are entering or exiting is reduced. At underground or indoor platforms, prevent the air from being polluted by the fumes caused by friction from the train wheels grinding against the tracks. Their primary disadvantage of PSDs is their cost. When used to retrofit older systems, they can limit the kind of rolling stock that may be used on a line, because the train doors must fit the spacing of the platform doors, which can result in additional costs, due to the otherwise unnecessary purchase of new rolling stock and consequent depot upgrades. Despite delivering an overwhelming improvement to passenger safety at the platform-train interface, platform screen doors do introduce new hazards which must be carefully managed in design and delivery. The principal hazard is entrapment between closed platform doors and the train carriage which, if undetected, can lead to fatality when the train begins to move (see ). Cases of this happening are rare, and the risk can be minimised with careful design, in particular by interlocking the door system with the signalling system, and by minimising the gap between the closed platform doors and the train body. In some cases active monitoring systems are used to monitor this gap. Half-height platform edge doors, also known as automatic platform gates, are cheaper to install than full-height platform screen doors, which require more metallic framework for support. Some railway operators may therefore prefer such an option to improve safety at railway platforms and, at the same time, keep costs low and non-air-conditioned platforms naturally ventilated. However, these gates are less effective than full platform screen doors in preventing people from intentionally jumping onto the tracks. These gates were in practical use by the Hong Kong MTR on the Disneyland Resort line for the open-air station designs. Most half-height platform edge door designs have taller designs than the ones installed on the Disneyland Resort line. Rope-type platform screen doors There are also rope-type platform screen doors at stations where a number of train types, with different lengths and train door spacings, use the same platforms. The barriers move upwards, rather than sideways, to let passengers through. 
Some Japanese, Korean, Chinese and Eastern European countries have stations that use rope-type screen doors, to lower the cost of installation and to deal with the problem of different train types and distances between car doors. Variable-type platform screen doors The first-ever full-height variable screen doors were installed on the underground platforms of Osaka Station, which opened in March 2023, but a few half-height variants can be found on a set installed at the Shinkansen platforms of Shinagawa Station in Tokyo. Their use is rare since they are a much costlier and more complicated alternative to rope-type screen doors. The only difference from the latter is that they move sideways when letting passengers through. At Osaka Station, the doors are designed as a single block (equivalent to the length of a train car). It consists of five units: one wall-like "parent door" suspended from the top and two sets of glass "child doors". When the train reaches the station, a special scanner on the platform reads the information on the ID tag placed on the train to identify its type and the number of cars. With the type and the number of cars having been instantly identified, each unit will slide automatically to match the configuration of the stopped train. The parent and child doors then slide into the optimal position to align precisely with the position of each car door. Since the technology is still new, such doors are still going through testing phases in several countries around the world. Use Argentina Line D of the Buenos Aires Subte is planned to have platform screen doors installed in the future, after the communications-based train control (CBTC) system has been installed. Australia Sydney Metro, which opened in May 2019, was the first-fully automated rapid transit rail system in Australia. There are full-height screen doors on most underground platforms, with full-height edge doors on at-grade, elevated and some underground platforms. The existing five stations on the Epping to Chatswood railway line were upgraded to rapid transit standard, all being fitted with full-height platform edge doors. In Melbourne, the Metro Tunnel, from South Kensington to South Yarra, due to open in 2025, will have platform screen doors on the underground stations. New rolling stock is being constructed, with doors that will line up with full-height PSDs on the platforms. The fully automated Suburban Rail Loop, which is due to open in 2035, will have platform screen doors at every station. The Cross River Rail in Brisbane, which is currently under construction and scheduled to open in 2026, will have platform screen doors on the new Boggo Road, Woolloongabba and Albert Street underground stations, and the new underground platforms of Roma Street station. Austria Currently, only the Serfaus U-Bahn and Line U2 of the Vienna U-Bahn (from Schottentor station to Karlsplatz station) use platform screen doors. The section was reopened on 6.12.2024 after 3 years of constructing. Bangladesh The Dhaka Metro Rail uses half-height platform screen doors at all of its elevated stations. Belarus Platform screen doors are being installed on Line 3 of the Minsk Metro, which first opened in late 2020, and will be installed at stations on the later sections of the line. Brazil The Platform Screen Doors have been present in the São Paulo Metro since 2010, when the Sacomã Station was opened. 
As of 2019, five of the six lines of the São Paulo Metro have the equipment: Lines 4 - Yellow, 5 - Lilac and 15 - Silver have the equipment installed in all of their stations. The feature is also present in some stations of Line 2 - Green and Line 3 - Red. They are planned to be installed in 41 stations of lines 1, 2 and 3 by the end of 2021, as well as all stations of line 5 by the end of 2020. PSDs are also found on the tube stations of the RIT BRT and in the Santos Light Rail since 2016. Bulgaria Half-height platform screen doors are in use on all stations of the Sofia Metro Line 3. In 2020, a rope-type screen door (RSD) system was installed in Vasil Levski Stadium Metro Station and Opalchenska Metro Station of the Sofia Metro Line 1 and Line 2. In total, such rope-type safety barriers will be installed at 10 more of the busiest stations on Lines 1 and 2 of the Sofia Metro, providing increased safety for passengers and protecting against accidental falls. Canada Screen doors are in use at all three LINK Train stations and the Union and Pearson stations along the Union Pearson Express route to Toronto Pearson International Airport in Mississauga, Ontario. Platform screen doors will be installed at all stations on the forthcoming Ontario Line. In addition, as a part of major renovations and expansions to the Bloor-Yonge interchange, platform screen doors will be installed on both Line 1 platforms. The doors will also be installed on the Line 2 platforms once CBTC signalling upgrades are made to the line. The addition of such doors at Bloor-Yonge has prompted rumours of a broader system-wide rollout, including in the forthcoming Scarborough Subway Extension and Yonge North Subway Extension, though no confirmation or funding has been announced by the Toronto Transit Commission or the Government of Ontario. Greater Montreal's forthcoming Réseau express métropolitain (REM), the 67-kilometre-long driverless complementary suburban rapid transit network opening in five phases between 2023 and 2027, will feature screen doors at each of its 26 stations. With the advent of the REM on the horizon, calls to retrofit platform edge doors in the Montreal Metro to combat delays arising from overcrowding are becoming more common. If full-height doors were to be installed, it may reduce the difficulty in opening station entrance doors at ground level due to the pressure imbalance caused by passing trains. Given that there are two different train door layouts on the Montreal Metro, with the older MR-73 trains having 4 doors on each side of the car, and MPM-10 having 3, it is unlikely platform doors will appear in the Montreal Metro until the retirement of the MR-73 fleet. In June 2023, TransLink, the operator of the Vancouver SkyTrain, announced a feasibility study into installing platform screen doors on the Expo and Millennium lines. Such installation was previously deemed infeasible, due to SkyTrain's diverse fleet and different door positions. However, with the acquisition of the Alstom Mark V trains, which will replace the ageing Mark I, the door positions allow for a feasibility study to proceed. The results will be released sometime in 2025. Chile Platform edge doors are currently in use at Lines 3 and 6 of the Santiago Metro, being a novelty in the system. China Mainland All Chinese metro systems have platform screen doors installed on most of their lines. All stations built after the mid-2000s have some form of platform barrier. 
Guangzhou Metro Line 2, which opened in 2002, was the first metro line in mainland China to have platform screen doors installed from the time of its completion. The older Guangzhou Metro Line 1 also completed the installation of platform screen doors between 2006 and 2009. Only the Dalian Metro lines 3, 12, and 13, Wuhan Metro line 1 and Changchun Metro lines 3, 4, and 8 have stations without platform screen doors on their early lines. However, many are starting the process of retrofitting these lines with platform screen gates. In addition, many bus rapid transit systems such as the Guangzhou Bus Rapid Transit also have stops that are equipped with platform screen doors. Platform screen doors are also present in some tram and light rail stops such as the Xijiao Light rail, Nanjing tram and Chengdu tram. Several underground high speed railway stations of the CRH network use platform screen doors set back from the platform edge. In addition, Fengxian District in Shanghai installed platform gates at a road crossing. Colombia Several stations on Bogotá's TransMilenio bus rapid transit system use platform screen doors. The Ayacucho Tram in Medellín also has half-height platform doors at every station. Denmark The Copenhagen Metro uses Westinghouse and Faiveley platform screen doors on all platforms. Full-height doors are used on underground stations while surface-level stations have half-height doors (except for Lufthavnen and Orientkaj). Underground stations have had platform doors since opening, while above-ground stations on lines 1 and 2 did not have them initially and had them installed later. Finland The Helsinki Metro had a trial run with Faiveley automatic platform gates installed on a single platform at Vuosaari metro station during phase one of the project. The doors, which are part of the Siemens metro automation project, were built in 2012. Phase 2 of the project has been delayed due to technical and safety-related testing of the metro automation. The doors were removed in 2015. France All lines of the VAL automated subway system are equipped with platform screen doors at every station, starting with the Lille metro in 1983. Those also include Toulouse and Rennes as well as the CDGVAL and Orlyval airport shuttles. Paris Métro's line 14 from Saint-Lazare to Bibliothèque François Mitterrand was inaugurated in 1998 with platform screen doors manufactured by Faiveley Transport. The new station Olympiades opened with platform screen doors in June 2007. Lines 1 and 4 have been retrofitted with platform edge doors, for full driverless automation effective in 2012 and 2023, respectively. Some stations on Line 13 have had platform edge doors since 2010 to manage their overcrowding, after tests conducted in 2006. Since 30 June 2020, a new kind of vertical platform screen door, called a platform curtain, is being tested on platform 2bis of Vanves–Malakoff station (in the Paris region) on the Transilien Line N commuter rail line. The experiment should end in February 2021. Transilien said that they preferred platform curtains to classical screen doors for this line because the positioning of the doors is not the same across the rolling stock, and that they plan to install them in other Transilien stations if the experiment is successful. Paris is now building a major new network, the Grand Paris Express. All of its new stations are being fitted with full-height platform screen doors, beginning with the Line 14 extension from Saint-Denis Pleyel to Orly Airport, inaugurated in 2024. 
Germany People movers at Frankfurt International Airport, Munich International Airport and Düsseldorf Airport are equipped with platform screen doors, as well as the suspended monorail in Dortmund, called H-Bahn. Plans are underway to test platform screen doors on the Munich U-Bahn in 2023, and lines U5 and U6 are to have them installed in late 2026. All stations on the forthcoming line U5 on the Hamburg U-Bahn will feature full-height platform screen doors. Greece Platform screen doors are used on the driverless Thessaloniki Metro, which opened in November 2024, and will be used on the under-construction Line 4 of the Athens Metro. Hong Kong Currently, all heavy rail and medium-capacity railway platforms outside the Light Rail network are equipped with either platform screen doors or automatic platform gates. On the East Rail line, PSDs are installed only at Hung Hom, Exhibition Centre and Admiralty stations. Automatic platform gates have also been installed at Racecourse, Lok Ma Chau, Sha Tin, Sheung Shui, Tai Po Market and Tai Wai. Installation is still in progress or soon to begin at the remaining stations. Automatic platform gates are currently only used in at-grade and elevated stations, while platform screen doors are used in all underground and some at-grade or elevated stations. None of the light rail platforms have platform screen doors or automatic platform gates installed. The MTR Corporation had, since mid-1996, been studying the feasibility of installing PSDs at the older stations to reduce suicides on the MTR and reduce air-conditioning costs. Platforms 2 and 3 of were chosen for the trial due to them being redundant platforms and receiving low numbers of passengers. Platform screen doors of two and a half cars' length were installed on each of the two platforms during the trial in 1996. As the Kwun Tong line trains consisted of eight cars, it was decided that the PSDs were to be removed to allow for smoother train operations. With the opening of the Airport Express and the Tung Chung line, Hong Kong had its first full-height PSDs fully operational in 1998. The MTR decided in 1999 to undertake the PSD Retrofitting Programme at 74 platforms of 30 select underground stations on the Kwun Tong, Island, and Tsuen Wan lines. 2,960 pairs of PSDs were ordered from Gilgen Door Systems. Choi Hung became the first station to receive platform screen doors from this programme in August 2001. The Mass Transit Railway became the first metro system in the world to retrofit PSDs on a transit system already in operation. The program was completed in March 2006. All subsequent new stations or platforms installed with PSDs also used those manufactured by Gilgen Door Systems, until the cross-harbour extension of the East Rail Line which used platform screen doors manufactured by Fangda Group. The opening of the Sunny Bay and Disneyland Resort stations in 2005 also meant the first platform-edge doors entering operation for the MTR network. These doors are currently the lowest in the entire network, compared to those on the Kwun Tong, Tsuen Wan, Island and Tung Chung lines and on the Tuen Ma and East Rail lines. In 2006, the MTR began studying ways to introduce barriers at above-ground and at-grade stations, which was considered more complicated as those stations were naturally ventilated and the introduction of full-height platform screen doors would entail the installation of air conditioning systems. In 2008, the corporation decided to install automatic platform gates (APGs) at eight stations (the MTR Corporation Limited and KCR Corporation had been operationally merged since 2007, but KCR stations were not included in this study). 
The eight stations were retrofitted with APGs in 2011. From July 2000 to December 2013, the MTR Corporation collected a surcharge of 10 cents from each Octopus-paying passenger to help pay for the installation of PSDs and APGs. Over HK$1.15 billion was collected in total. Platform screen doors were also installed on all platforms of the West Rail line (now part of the Tuen Ma line), then built by the Kowloon-Canton Railway Corporation (KCRC) before the MTR–KCR merger. The Ma On Shan line did not have gates upon opening even though it was built at the same time as the West Rail; they were eventually added from 2014 to 2017 prior to the opening of the first phase of the Tuen Ma line on 14 February 2020. The installation of platform screen doors in Hong Kong has been effective in reducing railway injuries and service disruptions. The then-longest set of platform screen doors in the world can be found in East Tsim Sha Tsui station, where it first served the East Rail line when 12-car MLR trains were still in service. Following the completion of the Kowloon Southern Link and handing over of the station to the West Rail line (now part of the Tuen Ma line), the subsequent reduction of train length from 12 to 7 cars caused many of the screen doors to be put out of service, although the trains were lengthened to eight cars in May 2018. The West Rail line (now part of the Tuen Ma line) had all stations installed with APGs, and another constituent line of the Tuen Ma line, the Ma On Shan line, had its final APGs enter service on 20 December 2017. The last non-tram/light rail stations in Hong Kong without platform screen doors or gates are all on the East Rail line, a former KCR line not part of the MTR APG retrofitting programmes. The KCR Corporation found it difficult to install APGs because of the wide curves and large gaps of their platforms, especially at certain stations. However, these remaining thirteen stations are all being retrofitted by Kaba as part of the Sha Tin to Central Link project. Adding APGs to the East Rail Line platforms requires platform strengthening with rebars and brackets as the gates, combined with heavy winds, can greatly increase structural load on the platform structure. Also, extensive waterproofing work is needed as many of these platforms are directly exposed to the elements. As of May 15, 2022, three stations on the East Rail Line (Hung Hom, Exhibition Centre, Admiralty) are equipped with platform screen doors, while the remaining stations are undergoing retrofitting. The platform screen doors presently in service in the MTR have been supplied by the Swiss manufacturer Kaba Gilgen, the Japanese Nabtesco Corporation (under the Nabco brand), the French Faiveley Transport and Shenzhen Fangda Automatic System. Apart from the MTR, all stations on the Hong Kong International Airport Automated People Mover are equipped with platform screen doors made from Westinghouse (for Phase 1) and Panasonic (for Midfield Extension). The platforms for the shuttle bus service between the North Satellite Concourse and the East Hall of Terminal One at the HKIA, Chek Lap Kok, the New Territories and the bus platforms in Yue Man Square in Kwun Tong, New Kowloon are also retrofitted with PSDs. After it reopened on 27 August 2022, the Peak Tram was retrofitted with platform edge doors on the boarding side of the terminus stations. 
India On the Delhi Metro, all stations on the Delhi Airport Metro Express line, which links to Indira Gandhi International Airport, have been equipped with full-height platform screen doors since 2011, and the six busiest stations on the Yellow Line have also been equipped with half-height platform gates. Automatic platform gates are installed at all stations of the Pink, Magenta and Grey lines. Platform screen doors are also used in all underground stations of the Chennai Metro. There are platform screen doors in all elevated and underground stations of Kolkata Metro Line 2. Platform screen doors are planned to be introduced in underground stations of Kolkata Metro Line 3, Kolkata Metro Line 4 and Kolkata Metro Line 6. There are also plans to install platform screen doors in Kolkata Metro Line 1. All stations of the under-construction Hyderabad Airport Express Metro will have provision for half-height platform screen doors (PSDs) for improved passenger safety. On the Namma Metro in Bangalore, platform doors will be installed for its phase II operations, and installation is expected to be completed by 2019. The Electronic City metro station in southern Bengaluru, on the Yellow Line, will be the first Namma Metro station to have platform screen doors installed. On the Mumbai Metro, all lines being built by MMRDA will have half-height platform screen doors at all elevated stations and full-height platform screen doors in the underground stations, as the trains used on these lines have GoA level 4, and also to reduce the risk of passenger deaths from overcrowding. Line 2A (the Yellow Line), Line 7A (the Red Line) and Line 3 (the Aqua Line) will have full-height platform screen doors, as Line 3 is fully underground, and, like the MMRDA lines above, will have GoA level 4 (unattended train operation). All underground stations on the Pune Metro will have platform screen doors. Indonesia The Soekarno–Hatta Airport Skytrain, opened in 2017, has full-height platform screen doors. The Jakarta MRT, opened in 2019, has full-height PSDs in underground stations and half-height PSDs in elevated stations. The Jakarta LRT, opened in 2019, has half-height PSDs. The Greater Jakarta LRT, which opened in 2023, has half-height platform screen doors. PSDs are used in some TransJakarta bus stops, but they are often broken and have to be turned off. Ireland The future Dublin MetroLink shall have platform screen doors. Israel The underground stations on the Red Line of the Tel Aviv Light Rail have full-height platform screen doors. Also, Elifelet, Shenkar and Kiryat Arye stations have half-height platform screen doors. Italy Platform screen doors are used in most newly built rapid transit lines and systems of new construction in Italy. PSDs are present on Turin Metro, the Venice People Mover, the Perugia Minimetrò, the Brescia Metro, Line 4 and Line 5 of the Milan Metro, Marconi Express Bologna, Pisa Mover (linking Pisa airport and Pisa Centrale station) and Line C of the Rome Metro. Japan The Tokyo Metro and Toei Subway began using barriers with the 1991 opening of the Namboku Line (which has full-height platform screen doors), and subsequently installed automatic platform gates on the Mita, Marunouchi, and Fukutoshin lines. Some railway lines, including the subway systems in Sapporo, Sendai, Nagoya, Osaka, Kyoto, and Fukuoka, also utilize barriers to some extent. 
In August 2012, the Japanese government announced plans to install barriers at stations used by 100,000 or more people per day, and the Ministry of Land, Infrastructure, Transport and Tourism allotted 36 million yen ($470,800) for research and development of the system the 2011-2012 fiscal year. A difficulty was the fact that some stations are used by different types of trains with different designs, making barrier design a challenge. , only 34 of 235 stations with over 100,000 users per day were able to implement the plan. The ministry stated that 539 of approximately 9,500 train stations across Japan have barriers. Of the Tokyo Metro stations, 78 of 179 have some type of platform barrier. In 2018, automatic platform gates were installed on the Sōbu Rapid Line platforms at . As the line's trains are long, the set of platform gates broke the world record for the longest platform doors at East Tsim Sha Tsui station in Hong Kong. In March 2023, the underground facilities at Osaka Station (nicknamed Ume-kita during planning and construction) opened. The platforms for the Haruka and Kuroshio limited express services have movable full-screen automated platform doors that cover the entire platform from the edge to the ceiling and such doors are the first of its kind. Malaysia Platform screen doors (PSD) are installed at all underground stations, from to , , from to stations and , from to . The automated announcement message reading "For safety reasons, please stand behind the yellow line" in both English and Malay languages are also heard before the train arrived at all stations. There are also platform screen doors (PSD) on the KLIA Ekspres at Kuala Lumpur Sentral and KLIA stations. Both stations at KLIA Aerotrain also have platform screen doors. The automatic platform gates (APG) also have been installed in all elevated and subsurface stations of the , and . Mexico Platform screen doors are present at various bus rapid transit systems in Mexico, such as at the stations of the Guadalajara Macrobús and the Ecovía system of Monterrey. Platform screen doors can be seen as well on the Aerotrén, an airport people mover at Mexico City International Airport. No metros in Mexico currently use any type of barrier however. Pakistan The Lahore Metro utilises half-height platform edge doors at elevated stations and full-height platform screen doors at underground stations. Many bus rapid transit systems have full-height platform screen doors installed, including the Lahore Metrobus, Rawalpindi-Islamabad Metrobus, Multan Metrobus, TransPeshawar, and Karachi Breeze. Philippines Half-height platform screen doors shall be installed on the North–South Commuter Railway, while full-height platform screen doors shall be installed on the Metro Manila Subway. The system is sought to open in stages between 2023 and 2025. Peru Full-height platform screen doors will be used in underground stations of Line 2 of the Lima Metro, which opened in 2023. Qatar Platform screen doors are in use in all stations of the Doha Metro. They are also found on the Lusail tram. Romania Platform screen doors shall be used on the future Cluj-Napoca Metro. Russia Park Pobedy (Russian: Парк Победы) is a station of the Saint Petersburg Metro that was the first station in the world with platform doors. The station was opened in 1961. 
Later, nine more stations of this type were built in Leningrad (nowadays Saint Petersburg): Petrogradskaya (Russian: Петроградская), Vasileostrovskaya (Russian: Василеостровская), Gostiny Dvor (Russian: Гостиный двор), Mayakovskaya (Russian: Маяковская), Ploshchad Alexandra Nevskogo I (Russian: Площадь Александра Невского-1), Moskovskaya (Russian: Московская), Yelizarovskaya (Russian: Елизаровская), Lomonosovskaya (Russian: Ломоносовская), and Zvyozdnaya (Russian: Звёздная). There was an electronic device to ensure that the train stopped with its doors adjacent to the platform doors; they were installed so that driverless trains could eventually be used on the lines. Line 2 uses GoA2 automatic train operation to make this easier, however, Line 3 does not. Unlike other platform screen doors, which are lightweight units with extensive glazing installed on a normal platform edge, the St Petersburg units give the appearance of a solid wall with heavyweight doorways and solid steel sliding doors, similar to a bank of elevators in a large building, and the train cannot be seen entering from the platform; passengers become familiar with the sound alone to indicate a train arrival. In May 2018, two other similar stations were opened: Novokrestovskaya (now Zenit) and Begovaya. Unlike the first ten stations that were built, these stations utilize glass screen doors, allowing the train to be seen entering from the platform, like most other systems. It is unclear why platform doors were installed here as they are absent in all other metros in Russia, the CIS (except that of Minsk, shown above), or the former Eastern bloc (excluding Sofia, also shown above, albeit on a line with equipment incompatible with that of the typical Eastern bloc metro). The only other platform doors in Russia are found on the Sheremetyevo International Airport people mover. Saudi Arabia The Al Mashaaer Al Mugaddassah Metro line in Mecca uses full platform screen doors. The Riyadh Metro which opened on 1 December 2024 uses full platform screen doors on all stations. Serbia The future Belgrade Metro will have platform screen doors in some stations. Singapore The Mass Rapid Transit (MRT) was the first rapid transit system in Asia to incorporate platform screen doors in its stations in 1987. Full height PSDs mainly manufactured by Westinghouse are installed at all underground MRT and sub-surface stations, while half-height platform screen doors were retrofitted into all elevated stations by March 2012. The LRT stations at Bukit Panjang, Sengkang and Punggol lack physical doors, only barriers with openings where the doors go (excluding the now-closed Ten Mile Junction station, which had full height doors) and vary in size according to their location on the platform. There are two variants of the full-height platform screen doors in use. The first variant, made by Westinghouse, was installed at all underground stations along the North South line and the East West line from 1987 to the completion of the initial system in 1990. The second variant incorporating more glass on the doors has since been used on all lines thereafter. Considered a novelty at the time of its installation, platform screen doors were introduced primarily to minimise hefty air-conditioning costs, especially since elevated stations are not air-conditioned and are much more economical to run in comparison. The safety aspects of these doors became more important in light of high-profile incidents where individuals were injured or killed by oncoming trains. 
In 2008, authorities began the process of retrofitting existing elevated stations with half-height screen doors. However, the Land Transport Authority stated that the retrofit was not motivated by the need to make the stations safe, "but to prevent system-wide delay and service disruption and to reduce the social cost to all commuters caused by track intrusions." The retrofit was completed in 2012. South Korea Yongdu station of Seoul Subway Line 2 was the first station on the Seoul Subway to feature platform screen doors; the station opened in October 2005. By the end of 2009, many of the 289 stations operated by Seoul Metro had platform doors by Hyundai Elevator. Seoul Metro lines 1, 2, 3, 4, 5, 6, 7, 8 and 9 were equipped with platform screen doors. Most of the stations operated by Korail have completed installation, but some of the stations are not yet equipped with platform screen doors. All stations in South Korea (except for Dorasan Station) will have platform screen doors by 2023. As of 2017, 100% of subway stations are equipped with platform screen doors in Daejeon, Gwangju, Busan, Incheon and Daegu. The platform screen doors, installed in Munyang station on Daegu Metro Line 2 by the Korea Transport Institute in 2013, have a unique rope-based platform screen named the Rope-type Platform Safety Door (RPSD). Sets of ropes separate the platform from the rails. When the train arrives, the rope screen door sets are opened vertically, allowing passengers to board and alight. This RPSD was also used in Nokdong station on Gwangju Metro Line 1, but was removed in 2012, and a new full-height platform screen door was installed in 2016 instead. Spain Half-height platform screens were first installed at Provença FGC station (Barcelona) around 2003. Later, doors were tested on Barcelona Metro line 11 before being fitted at all stations of the new lines 9 and 10, which operate driverless. Platform screen doors were also trialed on four stations of line 12 (MetroSur) of the Madrid Metro from November 2009 until January 2010. Platform doors are also found on the Madrid Barajas Airport People Mover at Adolfo Suárez Madrid–Barajas Airport and the Seville Metro line 1 light metro. Sweden Stockholm commuter rail has platform doors on two underground stations opened in July 2017, as part of the Stockholm City Line. The Stockholm Metro tested platform doors at Åkeshov metro station in 2015 and at Bagarmossen metro station in 2021; the stations on the extensions from Kungsträdgården metro station towards Nacka and towards Hagsätra will have platform screen doors when they are completed between 2026 and 2030. As there are multiple door layouts in use on the Stockholm Metro (a full-length C20 having 21 doors on each side, and the older Cx series and newer C30 having 24), it is unlikely platform doors will be common anytime soon. The underground Liseberg station in Gothenburg has platform doors which were built before its opening in 1993. The reason was safety against the freight trains that go through this tunnel. These doors are built one meter from the platform edge and therefore do not restrict the train type. Switzerland Zurich International Airport's Skymetro shuttle between the main building (hosting terminals A and B) and the detached terminal E has glass screen doors separating the tracks from the passenger hall platforms at both ends. 
Lausanne Metro's Line M2 has glass screen doors at every station, including a rare instance where platform doors are installed on a slanted surface, as the line was previously a funicular. Taiwan On Taipei Metro, platform screen doors were first installed on the Wenhu line (then known as Muzha line) in 1996. Older high-capacity MRT lines (Tamsui Line, Xindian Line, Zhonghe Line, and the Bannan Line) were initially constructed without platform screen doors but have now been retrofitted with automatic platform gates since 2018. Newer stations, on the Xinyi Line (part of the Tamsui-Xinyi Line), Luzhou and Xinzhuang Line (part of the Zhonghe-Xinlu Line), Songshan Line (part of the Songshan-Xindian Line), Circular line, and part of the Bannan Line's Dingpu Station and Taipei Nangang Exhibition Center Station) are constructed with platform screen doors. The Circular Line have installed platform screen doors since opening, but Danhai Light Rail did not, as is typical for most street railways to not have platform doors. On Kaohsiung Metro, all underground stations have installed platform screen doors, while elevated stations did not. Daliao Station installed half-height platform screen doors in 2020. On Taoyuan Metro and Taichung Metro, all elevated stations installed half-height platform screen doors while underground stations installed full-height platform screen doors. Thailand Platform screen doors were first installed on the BTS Skytrain and Bangkok MRT Systems, followed by the Airport Rail Link System in Makkasan Station (Express Platform) and Suvarnabhumi Station (both City and Express Line platforms). BTS Skytrain system first installed the platform screen doors at Siam Station, later upgrading other busy stations. Today, almost all stations on the Bangkok Electrified Rail System have installed platform screen doors to prevent people from falling onto the tracks. The BTS Skytrain has installed PSDs at 18 out of its 44 stations. PSDs have been installed at all of the stations on the Purple and Blue Lines of the Bangkok MRT system. Airport Rail Link has installed a stainless steel barrier to prevent people from falling, but has not installed full-height doors due to concerns that the high speed of the trains could break the glass. All new stations in Bangkok must install platform screen doors. Turkey Platform doors are found on Istanbul Metro lines M5, M7, M8 and M11, all fully driverless. Seyrantepe station on line M2 and F1, F3 and F4 also have platform doors. United Arab Emirates Platform screen doors are installed on all the platforms in the fully automated Dubai Metro, as well as on the Dubai Airport People Mover, Palm Jumeirah Monorail and Dubai Tram (the world's first tram system to feature platform screen doors). United Kingdom The Jubilee Line Extension project saw platform edge doors installed on its new stations that were underground, and were produced by Westinghouse. There are plans to install PEDs in existing London Underground stations along the Bakerloo, Central, Piccadilly, and Waterloo & City lines as part of New Tube for London. A provision for installing platform edge doors is found on the Northern line extension stations, but no doors were installed in the stations when they opened in 2021. PEDs are present on the Gatwick Airport shuttle system, Heathrow Airport Terminal 5 airside people-mover shuttle, Birmingham Airport AirRail Link, Stansted Airport Transit System and the Luton DART. 
The Elizabeth line, the new cross-city line for London (delivered as the Crossrail Project), has platform screen doors on each of the sixteen sub-surface platforms of its central section. Each platform has twenty-seven doors which align with the twenty-seven saloon doors of the new British Rail Class 345 which operates the service. The doors form a high glass and steel screen running the entire length of the platform. The door opening is wide, and the system includes integrated passenger information and digital advertising screens. The system is unusual in that the trains served are full-sized commuter trains, larger and longer than the trains of metro systems more commonly equipped with platform screen doors. In total, some 4 km of platform screen is provided. The Glasgow Subway will have half-height screen doors after new rolling stock is introduced in 2023. United States Platform screen doors are rare in the United States, and are nearly exclusively found on small-scale systems. Honolulu's Skyline, which began operations in June 2023, is the first and only large-scale publicly-run metro system in the country to feature platform screen doors, with platform gates at every station manufactured by Stanley Access Technologies. They are also used by the general-purpose Las Vegas Monorail system. New York City's Metropolitan Transportation Authority has not committed to installing platform screen doors in its subway system, though it has been considering such an idea since the 1980s. Their installation presents substantial technical challenges, in part because of different placements of doors on New York City Subway rolling stock. Additionally, the majority of the system cannot accommodate platform doors regardless of door locations, due to factors such as narrow platforms and structurally insufficient platform slabs. Following a series of incidents during one week in November 2016, in which three people were injured or killed after being pushed onto the tracks, the MTA started to consider installing platform edge doors for the 42nd Street Shuttle. In October 2017, the MTA formally announced that platform screen doors would be installed at the Third Avenue station as part of a pilot program, but the pilot was later postponed. Following several pushing incidents, the MTA announced a PSD pilot program at three stations in February 2022: the platform at Times Square; the platform at Sutphin Boulevard–Archer Avenue–JFK Airport; and the Third Avenue station. The MTA started soliciting bids from platform-door manufacturers in mid-2022; the doors are planned to be installed starting in December 2023 at a cost of $6 million. Designs for the platform doors were being finalized by June 2023. People movers, systems that ferry passengers across large distances they would otherwise walk, make use of platform screen doors. These systems are common at airports such as Hartsfield–Jackson Atlanta International Airport and Denver International Airport. The Port Authority of New York and New Jersey uses full-height platform screen doors at two of its systems: AirTrain JFK and AirTrain Newark (serving John F. Kennedy International Airport and Newark Liberty International Airport respectively). San Francisco International Airport has AirTrain, a 6-mile-long line whose stations are fully enclosed with platform screen doors, allowing access to the fully automated people mover. 
Chicago O'Hare International Airport has a people mover system which operates 24 hours a day and is a 2.5-mile-long (4 km) line that operates between the four terminals at the airport and parking areas; each station is fully enclosed with platform screen doors allowing access to the fully automated people mover trains. AeroTrain is a people mover system at Washington Dulles International Airport in Dulles, Virginia, with fully enclosed tracks including platform screen doors. The United States Capitol subway system, a train cart people mover system, uses platform gates. Venezuela Platform screen doors are in use on the Los Teques Metro. The first station to have screen doors implemented on the system was Guaicaipuro. Vietnam Platform screen doors will be used on the Ho Chi Minh City Metro. Incidents On the Shanghai Metro in 2007, a man forcing his way onto a crowded train became trapped between the train door and platform door as they closed. He was pulled under the departing train and killed. In 2010, a woman in Shanghai's Zhongshan Park Station was killed under the same circumstances when she got trapped between the train and platform doors. An almost identical death occurred on the Beijing Subway in 2014, the third death involving platform doors in China within the several years preceding it. In 2018, a woman was similarly trapped between the platform doors and train at Shanghai's Bao'an Highway station. She escaped injury by standing still as the train departed. On 22 January 2022, an elderly woman was killed when she got trapped between the train doors and platform screen doors at Shanghai's Qi'an Road Station. Between 1999 and 2012, London Underground's platform doors, all on the Jubilee line, were the cause of 75 injuries, including strikes to people's heads and arms. See also Anti-trespass panels, another safety technology meant to keep people off rail tracks Guard rail Pedestrian railroad safety in the United States Platform barriers, platform screen doors without the doors References External links Air pollution control systems Building biology Building engineering Construction Door automation Heating, ventilation, and air conditioning Mechanical engineering Noise control Protective barriers Railway platforms Railway safety Rapid transit Security engineering Security technology Soviet inventions Suicide prevention Train protection systems Vehicle safety technologies
Platform screen doors
[ "Physics", "Engineering" ]
9,684
[ "Systems engineering", "Applied and interdisciplinary physics", "Security engineering", "Building engineering", "Door automation", "Automation", "Construction", "Civil engineering", "Mechanical engineering", "Building biology", "Architecture" ]
2,563,319
https://en.wikipedia.org/wiki/Exchange%20bias
Exchange bias or exchange anisotropy occurs in bilayers (or multilayers) of magnetic materials where the hard magnetization behavior of an antiferromagnetic thin film causes a shift in the soft magnetization curve of a ferromagnetic film. The exchange bias phenomenon is of tremendous utility in magnetic recording, where it is used to pin the state of the readback heads of hard disk drives at exactly their point of maximum sensitivity; hence the term "bias." Fundamental science The essential physics underlying the phenomenon is the exchange interaction between the antiferromagnet and ferromagnet at their interface. Since antiferromagnets have a small or no net magnetization, their spin orientation is only weakly influenced by an externally applied magnetic field. A soft ferromagnetic film which is strongly exchange-coupled to the antiferromagnet will have its interfacial spins pinned. Reversal of the ferromagnet's moment will have an added energetic cost corresponding to the energy necessary to create a Néel domain wall within the antiferromagnetic film. The added energy term implies a shift in the switching field of the ferromagnet. Thus the magnetization curve of an exchange-biased ferromagnetic film looks like that of the normal ferromagnet except that is shifted away from the H=0 axis by an amount Hb. In most well-studied ferromagnet/antiferromagnet bilayers, the Curie temperature of the ferromagnet is larger than the Néel temperature TN of the antiferromagnet. This inequality means that the direction of the exchange bias can be set by cooling through TN in the presence of an applied magnetic field. The moment of the magnetically ordered ferromagnet will apply an effective field to the antiferromagnet as it orders, breaking the symmetry and influencing the formation of domains. The exchange bias effect is attributed to a ferromagnetic unidirectional anisotropy formed at the interface between different magnetic phases. Generally, the process of field cooling from higher temperature is used to obtain ferromagnetic unidirectional anisotropy in different exchange bias systems. In 2011, a large exchange bias has been realized after zero-field cooling from an unmagnetized state, which was attributed to the newly formed interface between different magnetic phases during the initial magnetization process. Exchange anisotropy has long been poorly understood due to the difficulty of studying the dynamics of domain walls in thin antiferromagnetic films. A naive approach to the problem would suggest the following expression for energy per unit area: where n is the number of interfacial spins interactions per unit area, Jex is the exchange constant at the interface, S refers to the spin vector, M refers to the magnetization, t refers to film thickness and H is the external field. The subscript F describes the properties of the ferromagnet and AF to the antiferromagnet. The expression omits magnetocrystalline anisotropy, which is unaffected by the presence of the antiferromagnet. At the switching field of the ferromagnet, the pinning energy represented by the first term and the Zeeman dipole coupling represented by the second term will exactly balance. The equation then predicts that the exchange bias shift Hb will be given by the expression Many experimental findings regarding the exchange bias contradict this simple model. For example, the magnitude of measured Hb values is typically 100 times less than that predicted by the equation for reasonable values of the parameters. 
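For concreteness, the naive energy balance sketched above is usually written in the Meiklejohn–Bean form shown below (a textbook reconstruction rather than a quotation of any particular source; the sign and unit conventions, including the factor of \mu_0, vary between authors):

\sigma \;=\; -\,n\,J_{\mathrm{ex}}\,\mathbf{S}_F\cdot\mathbf{S}_{AF}\;-\;\mu_0\,H\,M_F\,t_F

Balancing the interfacial pinning term against the Zeeman term at the switching field then gives the predicted loop shift

H_b \;=\; \frac{n\,J_{\mathrm{ex}}\,S_F\,S_{AF}}{\mu_0\,M_F\,t_F},

which, as noted above, overestimates typical measured values by roughly two orders of magnitude.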
The amount of hysteresis shift Hb is not correlated with the density n of uncompensated spins in the plane of the antiferromagnet that appears at the interface. In addition, the exchange bias effect tends to be smaller in epitaxial bilayers than in polycrystalline ones, suggesting an important role for defects. In recent years progress in fundamental understanding has been made via synchrotron radiation based element-specific magnetic linear dichroism experiments that can image antiferromagnetic domains and frequency-dependent magnetic susceptibility measurements that can probe the dynamics. Experiments on the Fe/FeF2 and Fe/MnF2 model systems have been particularly fruitful. Technological impact Exchange bias was initially used to stabilize the magnetization of soft ferromagnetic layers in readback heads based on the anisotropic magnetoresistance (AMR) effect. Without the stabilization, the magnetic domain state of the head could be unpredictable, leading to reliability problems. Currently, exchange bias is used to pin the harder reference layer in spin valve readback heads and MRAM memory circuits that utilize the giant magnetoresistance or magnetic tunneling effect. Similarly, the most advanced disk media are antiferromagnetically coupled, making use of interfacial exchange to effectively increase the stability of small magnetic particles whose behavior would otherwise be superparamagnetic. Desirable properties for an exchange bias material include a high Néel temperature, a large magnetocrystalline anisotropy and good chemical and structural compatibility with NiFe and Co, the most important ferromagnetic films. The most technologically significant exchange bias materials have been the rocksalt-structure antiferromagnetic oxides like NiO, CoO and their alloys and the rocksalt-structure intermetallics like FeMn, NiMn, IrMn and their alloys. History Exchange anisotropy was discovered by Meiklejohn and Bean of General Electric in 1956. The first commercial device to employ the exchange bias was IBM's anisotropic magnetoresistance (AMR) disk drive recording head, which was based on a design by Hunt in the 1970s but which didn't fully displace the inductive readback head until the early 1990s. By the mid-1990s, the spin valve head using an exchange-bias layer was well on its way to displacing the AMR head. References S. Chikazumi and S. H. Charap, Physics of Magnetism, ASIN B0007DODNA. John C. Mallinson, Magneto-Resistive and Spin Valve Heads: Fundamentals and Applications, . Ivan K. Schuller and G. Guntherodt, "The Exchange Bias Manifesto," 2002. Electric and magnetic fields in matter Magnetic hysteresis
Exchange bias
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,346
[ "Physical phenomena", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", "Hysteresis", "Magnetic hysteresis" ]
18,525,612
https://en.wikipedia.org/wiki/Silicor%20Materials
Silicor Materials Inc. is a privately held manufacturer of solar silicon and aluminum alloy. Silicor is headquartered in San Jose, California, and its silicon purification operations are performed by its wholly owned subsidiary, Silicor Materials Canada Inc., in Ontario, Canada. Silicor also has a research and development facility in Berlin, Germany, and is currently building a commercial manufacturing facility in the port of Grundartangi, Iceland. The facility will have a nameplate capacity of 16,000 metric tons, with the ability to yield up to 19,000 metric tons of solar silicon each year. To date, more than 20 million solar cells have been made with Silicor's solar silicon. Technology Silicor Materials uses a unique silicon purification process, which it obtained with the acquisition of 6N Silicon, Inc. Silicor Materials also claims to have intellectual property and patents on processes related to silicon purification, crystallization, wafering, and cell processing that enable it to produce cells with efficiencies comparable to those made with conventionally produced silicon while using what Silicor Materials claims are lower-cost materials than created through the more conventional Siemens purification method. Silicor Materials' solar silicon is shipped globally to manufacturers who cast the material into ingots, cut the ingots into bricks, cut those bricks into wafers using wire saws and convert the wafers into solar cells. These cells are then assembled into conventional aluminum-framed, glass-encapsulated solar modules (also known as solar panels) for use in distributed and centralized solar applications. Technological Benefits In contrast to the Siemens Process, which requires a total of four phase changes—solid to liquid, liquid to gas, gas to liquid, liquid to solid—Silicor's silicon only goes through two phase changes- solid to liquid and liquid to solid. This change reduces the manufacturing energy requirements to an estimated total usage of 10 to 25 kWh/kg. Silicor's method does not require the use or handling of hazardous chemicals such as silane or trichlorosilane gas, which are required in both the Siemens and Fluid Bed silicon purification processes. This contributes to improved worker safety, the avoidance of recycle and disposal costs, faster facility permitting and reduced facility construction footprint. Additionally, Silicor's resulting solar silicon has a narrow resistivity range which is achieved through adjusting the relative mix of boron and phosphorus. This property is shown to improve ingot yields. History The organization was founded as a development company in 2006 under the name Calisolar, with the goal of manufacturing low-cost photovoltaic (PV) solar cells from silicon designed specifically for the solar industry. The company then acquired 6N Silicon, Inc. in 2010. The company was renamed "Silicor Materials" in 2012, when its focus shifted exclusively to solar silicon manufacturing. See also Solar energy Photovoltaics companies Photovoltaics Silicon References External links Official Silicor Materials Website Photovoltaics manufacturers Silicon solar cells Companies based in Sunnyvale, California
Silicor Materials
[ "Engineering" ]
626
[ "Photovoltaics manufacturers", "Engineering companies" ]
18,526,787
https://en.wikipedia.org/wiki/Fragment-based%20lead%20discovery
Fragment-based lead discovery (FBLD), also known as fragment-based drug discovery (FBDD), is a method used for finding lead compounds as part of the drug discovery process. Fragments are small organic molecules of low molecular weight. It is based on identifying small chemical fragments, which may bind only weakly to the biological target, and then growing them or combining them to produce a lead with a higher affinity. FBLD can be compared with high-throughput screening (HTS). In HTS, libraries with up to millions of compounds, with molecular weights of around 500 Da, are screened, and nanomolar binding affinities are sought. In contrast, in the early phase of FBLD, libraries with a few thousand compounds with molecular weights of around 200 Da may be screened, and millimolar affinities can be considered useful. FBLD is also being used in research to discover novel potent inhibitors. This methodology could help to design multitarget drugs for multiple diseases. The multitarget inhibitor approach is based on designing a single inhibitor that acts on multiple targets. This type of drug design opens up new polypharmacological avenues for discovering innovative and effective therapies. Neurodegenerative diseases like Alzheimer’s (AD) and Parkinson’s, among others, also show rather complex etiopathologies. Multitarget inhibitors are more appropriate for addressing the complexity of AD and may provide new drugs for controlling the multifactorial nature of AD, stopping its progression. Library design In analogy to the rule of five, it has been proposed that ideal fragments should follow the 'rule of three' (molecular weight < 300, ClogP < 3, the number of hydrogen bond donors and acceptors each should be < 3 and the number of rotatable bonds should be < 3). Since the fragments have relatively low affinity for their targets, they must have high water solubility so that they can be screened at higher concentrations. Library screening and quantification In fragment-based drug discovery, the low binding affinities of the fragments pose significant challenges for screening. Many biophysical techniques have been applied to address this issue. In particular, ligand-observed nuclear magnetic resonance (NMR) methods such as water-ligand observed via gradient spectroscopy (waterLOGSY), saturation transfer difference spectroscopy (STD-NMR), 19F NMR spectroscopy and inter-ligand Overhauser effect (ILOE) spectroscopy, protein-observed NMR methods such as 1H-15N heteronuclear single quantum coherence (HSQC), which utilise isotopically labelled proteins, surface plasmon resonance (SPR), isothermal titration calorimetry (ITC) and microscale thermophoresis (MST) are routinely used for ligand screening and for the quantification of fragment binding affinity to the target protein. At modern X-ray crystallography synchrotron beamlines, several hundred data sets of protein-ligand complex crystal structures can be obtained within 24 hours. This technology makes crystallographic fragment screening possible, i.e. the use of X-ray crystallography directly for the fragment screening step. Once a fragment (or a combination of fragments) has been identified, protein X-ray crystallography is used to obtain structural models of the protein–fragment complexes. Such information can then be used to guide organic synthesis for high-affinity protein ligands and enzyme inhibitors. 
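As a concrete illustration of the 'rule of three' described under Library design above, the short sketch below applies it as a pre-filter to a fragment library. The use of the open-source RDKit toolkit, the example SMILES strings and the strict '<' cut-offs quoted above are illustrative assumptions only; published versions of the rule often use ≤ 3 for the count-based criteria.

from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_rule_of_three(smiles: str) -> bool:
    """Return True if a molecule satisfies the fragment 'rule of three' as quoted above."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # skip unparsable SMILES strings
    return (
        Descriptors.MolWt(mol) < 300             # molecular weight < 300 Da
        and Crippen.MolLogP(mol) < 3             # calculated logP (ClogP) < 3
        and Lipinski.NumHDonors(mol) < 3         # hydrogen bond donors
        and Lipinski.NumHAcceptors(mol) < 3      # hydrogen bond acceptors
        and Descriptors.NumRotatableBonds(mol) < 3
    )

# Hypothetical miniature library: indole, paracetamol and palmitic acid.
library = ["c1ccc2[nH]ccc2c1", "CC(=O)Nc1ccc(O)cc1", "CCCCCCCCCCCCCCCC(=O)O"]
fragments = [s for s in library if passes_rule_of_three(s)]
print(fragments)  # the long-chain fatty acid, for example, fails the logP cut-off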
Advantages over traditional libraries Advantages of screening low molecular weight fragment-based libraries over traditional higher molecular weight chemical libraries are several. These include: More hydrophilic hits in which hydrogen bonding is more likely to contribute to affinity (enthalpically driven binding). It is generally much easier to increase affinity by adding hydrophobic groups (entropically driven binding); starting with a hydrophilic ligand increases the chances that the final optimized ligand will not be too hydrophobic (log P < 5). Higher ligand efficiency so that the final optimized ligand will more likely be relatively low in molecular weight (MW < 500). Since two to three fragments in theory can be combined to form an optimized ligand, screening a fragment library of N compounds is equivalent to screening N² to N³ compounds in a traditional library. Fragments are less likely to contain sterically blocking groups that interfere with an otherwise favorable ligand-protein interaction, increasing the combinatorial advantage of a fragment library even further. See also Druglikeness Protein-directed dynamic combinatorial chemistry Lipinski's rule of five References Further reading Download an example of a Fragment based library here (4,532 compounds, zipped SD-File) Drug discovery Biotechnology
Fragment-based lead discovery
[ "Chemistry", "Biology" ]
975
[ "Life sciences industry", "Drug discovery", "Biotechnology", "nan", "Medicinal chemistry" ]
18,527,330
https://en.wikipedia.org/wiki/Foldy%E2%80%93Wouthuysen%20transformation
The Foldy–Wouthuysen transformation was historically significant and was formulated by Leslie Lawrance Foldy and Siegfried Adolf Wouthuysen in 1949 to understand the nonrelativistic limit of the Dirac equation, the equation for spin-1/2 particles. A detailed general discussion of the Foldy–Wouthuysen-type transformations in particle interpretation of relativistic wave equations is in Acharya and Sudarshan (1960). Its utility in high energy physics is now limited due to the primary applications being in the ultra-relativistic domain where the Dirac field is treated as a quantised field. A canonical transform The FW transformation is a unitary transformation of the orthonormal basis in which both the Hamiltonian and the state are represented. The eigenvalues do not change under such a unitary transformation, that is, the physics does not change under such a unitary basis transformation. Therefore, such a unitary transformation can always be applied: in particular a unitary basis transformation may be picked which will put the Hamiltonian in a more pleasant form, at the expense of a change in the state function, which then represents something else. See for example the Bogoliubov transformation, which is an orthogonal basis transform for the same purpose. The suggestion that the FW transform is applicable to the state or the Hamiltonian is thus not correct. Foldy and Wouthuysen made use of a canonical transform that has now come to be known as the Foldy–Wouthuysen transformation. A brief account of the history of the transformation is to be found in the obituaries of Foldy and Wouthuysen and the biographical memoir of Foldy. Before their work, there was some difficulty in understanding and gathering all the interaction terms of a given order, such as those for a Dirac particle immersed in an external field. With their procedure the physical interpretation of the terms was clear, and it became possible to apply their work in a systematic way to a number of problems that had previously defied solution. The Foldy–Wouthuysen transform was extended to the physically important cases of spin-0 and spin-1 particles, and even generalized to the case of arbitrary spins. Description The Foldy–Wouthuysen (FW) transformation is a unitary transformation on a fermion wave function of the form: where the unitary operator is the 4 × 4 matrix: Above, is the unit vector oriented in the direction of the fermion momentum. The above are related to the Dirac matrices by and , with . A straightforward series expansion applying the commutativity properties of the Dirac matrices demonstrates that above is true. The inverse so it is clear that , where is a 4 × 4 identity matrix. Transforming the Dirac Hamiltonian for a free fermion This transformation is of particular interest when applied to the free-fermion Dirac Hamiltonian operator in biunitary fashion, in the form: Using the commutativity properties of the Dirac matrices, this can be massaged over into the double-angle expression: This factors out into: Choosing a particular representation: Newton–Wigner Clearly, the FW transformation is a continuous transformation, that is, one may employ any value for which one chooses. Choosing a particular value for amounts to choosing a particular transformed representation. One particularly important representation is that in which the transformed Hamiltonian operator is diagonalized. A completely diagonal representation can be obtained by choosing such that the term in vanishes. 
This is arranged by choosing: In the Dirac-Pauli representation where is a diagonal matrix, is then reduced to a diagonal matrix: By elementary trigonometry, also implies that: so that using in and then simplifying now leads to: Prior to Foldy and Wouthuysen publishing their transformation, it was already known that is the Hamiltonian in the Newton–Wigner (NW) representation (named after Theodore Duddell Newton and Eugene Wigner) of the Dirac equation. What therefore tells us, is that by applying a FW transformation to the Dirac–Pauli representation of Dirac's equation, and then selecting the continuous transformation parameter so as to diagonalize the Hamiltonian, one arrives at the NW representation of Dirac's equation, because NW itself already contains the Hamiltonian specified in (). See this link. If one considers an on-shell mass—fermion or otherwise—given by , and employs a Minkowski metric tensor for which , it should be apparent that the expression is equivalent to the component of the energy-momentum vector , so that is alternatively specified rather simply by . Correspondence between the Dirac–Pauli and Newton–Wigner representations, for a fermion at rest Now consider a fermion at rest, which we may define in this context as a fermion for which . From or , this means that , so that and, from , that the unitary operator . Therefore, any operator in the Dirac–Pauli representation upon which we perform a biunitary transformation, will be given, for an at-rest fermion, by: Contrasting the original Dirac–Pauli Hamiltonian operator with the NW Hamiltonian , we do indeed find the "at rest" correspondence: Transforming the velocity operator In the Dirac–Pauli representation Now, consider the velocity operator. To obtain this operator, we must commute the Hamiltonian operator with the canonical position operators , i.e., we must calculate One good way to approach this calculation, is to start by writing the scalar rest mass as and then to mandate that the scalar rest mass commute with the . Thus, we may write: where we have made use of the Heisenberg canonical commutation relationship to reduce terms. Then, multiplying from the left by and rearranging terms, we arrive at: Because the canonical relationship the above provides the basis for computing an inherent, non-zero acceleration operator, which specifies the oscillatory motion known as zitterbewegung. In the Newton–Wigner representation In the Newton–Wigner representation, we now wish to calculate If we use the result at the very end of section 2 above, , then this can be written instead as: Using the above, we need simply to calculate , then multiply by . The canonical calculation proceeds similarly to the calculation in section 4 above, but because of the square root expression in , one additional step is required. First, to accommodate the square root, we will wish to require that the scalar square mass commute with the canonical coordinates , which we write as: where we again use the Heisenberg canonical relationship . Then, we need an expression for which will satisfy . It is straightforward to verify that: will satisfy when again employing . Now, we simply return the factor via , to arrive at: This is understood to be the velocity operator in the Newton–Wigner representation. Because: it is commonly thought that the zitterbewegung motion arising out of vanishes when a fermion is transformed into the Newton–Wigner representation. 
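For reference, the free-particle construction described in the preceding sections can be summarised in a common textbook convention (natural units with ħ = c = 1 are assumed here, and conventions differ slightly between presentations):

U \;=\; e^{\beta\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}}\,\theta} \;=\; \cos\theta + \beta\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}}\,\sin\theta, \qquad \hat{\mathbf{p}} = \mathbf{p}/|\mathbf{p}|,

U\,(\boldsymbol{\alpha}\cdot\mathbf{p} + \beta m)\,U^{-1} \;=\; \boldsymbol{\alpha}\cdot\mathbf{p}\left(\cos 2\theta - \frac{m}{|\mathbf{p}|}\sin 2\theta\right) + \beta\left(m\cos 2\theta + |\mathbf{p}|\sin 2\theta\right).

Choosing \tan 2\theta = |\mathbf{p}|/m removes the odd (off-diagonal) part, leaving the diagonal Newton–Wigner Hamiltonian and the corresponding velocity operator

H_{\mathrm{NW}} \;=\; \beta\sqrt{m^2 + \mathbf{p}^2}, \qquad \frac{d\mathbf{x}}{dt} \;=\; i\,[H_{\mathrm{NW}},\mathbf{x}] \;=\; \beta\,\frac{\mathbf{p}}{\sqrt{m^2 + \mathbf{p}^2}},

in contrast with the Dirac–Pauli velocity operator i[H,\mathbf{x}] = \boldsymbol{\alpha}, whose oscillatory part gives rise to zitterbewegung.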
Other applications The powerful machinery of the Foldy–Wouthuysen transform originally developed for the Dirac equation has found applications in many situations, such as acoustics and optics. It has found applications in very diverse areas such as atomic systems, synchrotron radiation, and the derivation of the Bloch equation for polarized beams. The application of the Foldy–Wouthuysen transformation in acoustics is very natural, and comprehensive, mathematically rigorous accounts of it exist. In the traditional scheme the purpose of expanding the optical Hamiltonian in a series using as the expansion parameter is to understand the propagation of the quasi-paraxial beam in terms of a series of approximations (paraxial plus nonparaxial). The situation is similar in the case of charged-particle optics. Let us recall that in relativistic quantum mechanics too one has a similar problem of understanding the relativistic wave equations as the nonrelativistic approximation plus the relativistic correction terms in the quasi-relativistic regime. For the Dirac equation (which is first-order in time) this is done most conveniently using the Foldy–Wouthuysen transformation, leading to an iterative diagonalization technique. The main framework of the newly developed formalisms of optics (both light optics and charged-particle optics) is based on the transformation technique of Foldy–Wouthuysen theory, which casts the Dirac equation in a form displaying the different interaction terms between the Dirac particle and an applied electromagnetic field in a nonrelativistic and easily interpretable form. In the Foldy–Wouthuysen theory the Dirac equation is decoupled through a canonical transformation into two two-component equations: one reduces to the Pauli equation in the nonrelativistic limit and the other describes the negative-energy states. It is possible to write a Dirac-like matrix representation of Maxwell's equations. In such a matrix form the Foldy–Wouthuysen transformation can be applied. There is a close algebraic analogy between the Helmholtz equation (governing scalar optics) and the Klein–Gordon equation, and between the matrix form of Maxwell's equations (governing vector optics) and the Dirac equation. So it is natural to use the powerful machinery of standard quantum mechanics (particularly, the Foldy–Wouthuysen transform) in analyzing these systems. The suggestion to employ the Foldy–Wouthuysen transformation technique in the case of the Helmholtz equation was mentioned in the literature as a remark. It was only in recent works that this idea was exploited to analyze the quasiparaxial approximations for specific beam optical systems. The Foldy–Wouthuysen technique is ideally suited for the Lie algebraic approach to optics. Despite these advantages, including its powerful and ambiguity-free expansion, the Foldy–Wouthuysen transformation is still little used in optics. The technique of the Foldy–Wouthuysen transformation results in what are known as the nontraditional prescriptions of Helmholtz optics and Maxwell optics, respectively. The nontraditional approaches give rise to very interesting wavelength-dependent modifications of the paraxial and aberration behaviour. The nontraditional formalism of Maxwell optics provides a unified framework of light beam optics and polarization. The nontraditional prescriptions of light optics are closely analogous to the quantum theory of charged-particle beam optics.
In optics, it has enabled the deeper connections in the wavelength-dependent regime between light optics and charged-particle optics to be seen (see Electron optics). See also Relativistic quantum mechanics Notes Fermions Dirac equation
Foldy–Wouthuysen transformation
[ "Physics", "Materials_science" ]
2,205
[ "Equations of physics", "Fermions", "Eponymous equations of physics", "Subatomic particles", "Condensed matter physics", "Dirac equation", "Matter" ]
768,566
https://en.wikipedia.org/wiki/List%20of%20important%20publications%20in%20physics
This is a list of noteworthy publications in physics, organized by type. General audience List of books on popular physics concepts Textbooks List of textbooks on classical mechanics and quantum mechanics List of textbooks in electromagnetism List of textbooks on relativity List of textbooks in thermodynamics and statistical mechanics Bibliographies by author Max Born Albert Einstein John von Neumann Emmy Noether Journals List of physics journals List of fluid mechanics journals List of materials science journals List of mathematical physics journals Applied and interdisciplinary physics Physics Publications in physics
List of important publications in physics
[ "Physics" ]
106
[ "Applied and interdisciplinary physics" ]
769,065
https://en.wikipedia.org/wiki/Niche%20construction
Niche construction is the ecological process by which an organism alters its own (or another species') local environment. These alterations can be physical changes to the organism’s environment, or they can encompass the active movement of an organism from one habitat to another, where it then experiences different environmental pressures. Examples of niche construction include the building of nests and burrows by animals, the creation of shade, the influencing of wind speed, and alterations to nutrient cycling by plants. Although these modifications are often directly beneficial to the constructor, they are not always beneficial. For example, when organisms dump detritus, they can degrade their own local environments. Within some biological evolutionary frameworks, niche construction gives rise to processes of ecological inheritance, whereby the organism in question “constructs” new or unique ecological, and perhaps even social, environments characterized by specific selective pressures. Evolution For niche construction to affect evolution it must satisfy three criteria: 1) the organism must significantly modify environmental conditions, 2) these modifications must influence one or more selection pressures on a recipient organism, and 3) there must be an evolutionary response in at least one recipient population caused by the environmental modification. The first two criteria alone provide evidence of niche construction. Recently, some biologists have argued that niche construction is an evolutionary process that works in conjunction with natural selection. Evolution entails networks of feedbacks in which previously selected organisms drive environmental changes, and organism-modified environments subsequently select for changes in organisms. The complementary match between an organism and its environment results from the two processes of natural selection and niche construction. The effect of niche construction is especially pronounced in situations where environmental alterations persist for several generations, introducing the evolutionary role of ecological inheritance. This theory emphasizes that organisms inherit two legacies from their ancestors: genes and a modified environment. A niche constructing organism may or may not be considered an ecosystem engineer. Ecosystem engineering is a related but non-evolutionary concept referring to structural changes brought about in the environment by organisms. Examples The following are some examples of niche construction: Earthworms physically and chemically modify the soil in which they live. Only by changing the soil can these primarily aquatic organisms live on land. Earthworm soil processing benefits plant species and other biota present in the soil, as originally pointed out by Darwin in his book The Formation of Vegetable Mould through the Action of Worms. Lemon ants (Myrmelachista schumanni) employ a specialized method of suppression that regulates the growth of certain trees. They live in the trunks of Duroia hirsuta trees found in the Amazonian rain forest of Peru. Lemon ants use formic acid (a chemical fairly common among species of ants) as a herbicide. By eliminating trees unsuitable for lemon ant colonies, these ants produce distinctive habitats known as Devil's gardens. Beavers build dams and thereby create lakes that drastically shape and alter riparian ecosystems.
These activities modify nutrient cycling and decomposition dynamics, influence the water and materials transported downstream, and ultimately influence plant and community composition and diversity. Benthic diatoms living in estuarine sediments in the Bay of Fundy, Canada, secrete carbohydrate exudates that bind the sand and stabilize the environment. This changes the physical state of the sand, which allows other organisms (such as the amphipod Corophium volutator) to colonize the area. Chaparrals and pines increase the frequency of forest fires through the dispersal of needles, cones, seeds and oils, essentially littering the forest floor. The benefit of this activity is facilitated by an adaptation for fire resistance which benefits them relative to their competitors. Saccharomyces cerevisiae yeast creates a novel environment out of fermenting fruit. This fermentation process in turn attracts the fruit flies with which the yeast is closely associated and which it utilizes for transportation. Cyanobacteria provide an example on a planetary scale through the production of oxygen as a waste product of photosynthesis (see Great Oxygenation Event). This dramatically changed the composition of the Earth’s atmosphere and oceans, with vast macroevolutionary and ecological consequences. Microbialites represent ancient niches constructed by bacterial communities, which give evidence that niche construction was present in early life forms. Consequences As creatures construct new niches, they can have a significant effect on the world around them. An important consequence of niche construction is that it can affect the natural selection experienced by the species doing the constructing. The common cuckoo illustrates such a consequence. It parasitizes other birds by laying its eggs in their nests. This has led to several adaptations among cuckoos, including a short incubation time for their eggs. The eggs need to hatch first so that the chick can push the host's eggs out of the nest, ensuring it has no competition for the parents' attention. Another adaptation it has acquired is that the chick mimics the calls of multiple young chicks, so that the parents bring in food not just for one offspring, but for a whole brood. Niche construction can also generate co-evolutionary interactions, as illustrated by the above earthworm, beaver and yeast examples. The development of many organisms, and the recurrence of traits across generations, has been found to depend critically on the construction of developmental environments such as nests by ancestral organisms. Ecological inheritance refers to the inherited resources and conditions, and associated modified selection pressures, that ancestral organisms bequeath to their descendants as a direct result of their niche construction. Niche construction has important implications for understanding, managing, and conserving ecosystems. History Niche construction theory (NCT) has been anticipated by diverse people in the past, including by the physicist Erwin Schrödinger in his What Is Life? and Mind and Matter essays (1944). An early advocate of the niche construction perspective in biology was the developmental biologist Conrad Waddington. He drew attention to the many ways in which animals modify their selective environments throughout their lives, by choosing and changing their environmental conditions, a phenomenon that he termed "the exploitive system".
The niche construction perspective was subsequently brought to prominence through the writings of Harvard evolutionary biologist, Richard Lewontin. In the 1970s and 1980s Lewontin wrote a series of articles on adaptation, in which he pointed out that organisms do not passively adapt through selection to pre-existing conditions, but actively construct important components of their niches. Oxford biologist John Odling-Smee (1988) was the first person to coin the term 'niche construction', and the first to make the argument that ‘niche construction’ and ‘ecological inheritance’ should be recognized as evolutionary processes. Over the next decade research into niche construction increased rapidly, with a rush of experimental and theoretical studies across a broad range of fields. Modeling niche construction Mathematical evolutionary theory explores both the evolution of niche construction, and its evolutionary and ecological consequences. These analyses suggest that niche construction is of considerable importance. For instance, niche construction can: fix genes or phenotypes that would otherwise be deleterious, create or eliminate equilibria, and affect evolutionary rates; cause evolutionary time lags, generate momentum, inertia, autocatalytic effects, catastrophic responses to selection, and cyclical dynamics; drive niche-constructing traits to fixation by creating statistical associations with recipient traits; facilitate the evolution of cooperation; regulate environmental states, allowing persistence in otherwise inhospitable conditions, facilitating range expansion and affecting carrying capacities; drive coevolutionary events, exacerbate and ameliorate competition, affect the likelihood of coexistence and produce macroevolutionary trends. Humans Niche construction theory has had a particular impact in the human sciences, including biological anthropology, archaeology, and psychology. Niche construction is now recognized to have played important roles in human evolution, including the evolution of cognitive capabilities. Its impact is probably because it is immediately apparent that humans possess an unusually potent capability to regulate, construct and destroy their environments, and that this is generating some pressing current problems (e.g. climate change, deforestation, urbanization). However, human scientists have been attracted to the niche construction perspective because it recognizes human activities as a directing process, rather than merely the consequence of natural selection. Cultural niche construction can also feed back to affect other cultural processes, even affecting genetics. Niche construction theory emphasizes how acquired characters play an evolutionary role, through transforming selective environments. This is particularly relevant to human evolution, where our species appears to have engaged in extensive environmental modification through cultural practices. Such cultural practices are typically not themselves biological adaptations (rather, they are the adaptive product of those much more general adaptations, such as the ability to learn, particularly from others, to teach, to use language, and so forth, that underlie human culture). Mathematical models have established that cultural niche construction can modify natural selection on human genes and drive evolutionary events. This interaction is known as gene-culture coevolution. There is now little doubt that human cultural niche construction has co-directed human evolution. 
Humans have modified selection, for instance, by dispersing into new environments with different climatic regimes, devising agricultural practices or domesticating livestock. A well-researched example is the finding that dairy farming created the selection pressure that led to the spread of alleles for adult lactase persistence. Analyses of the human genome have identified many hundreds of genes subject to recent selection, and human cultural activities are thought to be a major source of selection in many cases. The lactase persistence example may be representative of a very general pattern of gene-culture coevolution. Niche construction is also now central to several accounts of how language evolved. For instance, Derek Bickerton describes how our ancestors constructed scavenging niches that required them to communicate in order to recruit sufficient individuals to drive off predators away from megafauna corpses. He maintains that our use of language, in turn, created a new niche in which sophisticated cognition was beneficial. Current status While the fact that niche construction occurs is non-contentious, and its study goes back to Darwin's classic books on earthworms and corals, the evolutionary consequences of niche construction have not always been fully appreciated. Researchers differ over to what extent niche construction requires changes in understanding of the evolutionary process. Many advocates of the niche-construction perspective align themselves with other progressive elements in seeking an extended evolutionary synthesis, a stance that other prominent evolutionary biologists reject. Laubichler and Renn argue that niche construction theory offers the prospect of a broader synthesis of evolutionary phenomena through "the notion of expanded and multiple inheritance systems (from genomic to ecological, social and cultural)." Niche construction theory (NCT) remains controversial, particularly amongst orthodox evolutionary biologists. In particular, the claim that niche construction is an evolutionary process has excited controversy. A collaboration between some critics of the niche-construction perspective and one of its advocates attempted to pinpoint their differences. They wrote: "NCT argues that niche construction is a distinct evolutionary process, potentially of equal importance to natural selection. The skeptics dispute this. For them, evolutionary processes are processes that change gene frequencies, of which they identify four (natural selection, genetic drift, mutation, migration [ie. gene flow])... They do not see how niche construction either generates or sorts genetic variation independently of these other processes, or how it changes gene frequencies in any other way. In contrast, NCT adopts a broader notion of an evolutionary process, one that it shares with some other evolutionary biologists. Although the advocate agrees that there is a useful distinction to be made between processes that modify gene frequencies directly, and factors that play different roles in evolution... The skeptics probably represent the majority position: evolutionary processes are those that change gene frequencies. Advocates of NCT, in contrast, are part of a sizable minority of evolutionary biologists that conceive of evolutionary processes more broadly, as anything that systematically biases the direction or rate of evolution, a criterion that they (but not the skeptics) feel niche construction meets." 
The authors conclude that their disagreements reflect a wider dispute within evolutionary theory over whether the modern synthesis is in need of reformulation, as well as different usages of some key terms (e.g., evolutionary process). Further controversy surrounds the application of niche construction theory to the origins of agriculture within archaeology. In a 2015 review, archaeologist Bruce Smith concluded: "Explanations [for domestication of plants and animals] based on diet breadth modeling are found to have a number of conceptual, theoretical, and methodological flaws; approaches based on niche construction theory are far better supported by the available evidence in the two regions considered [eastern North America and the Neotropics]". However, other researchers see no conflict between niche construction theory and the application of behavioral ecology methods in archaeology. A critical review by Manan Gupta and colleagues, published in 2017, led to a dispute amongst critics and proponents. In 2018, another review reassessed the importance of niche construction and extragenetic adaptation in evolutionary processes. See also Nest-building in primates Person–environment fit Structures built by animals References Further reading Ertsen, Maurits W., Christof Mauch, and Edmund Russell, eds. “Molding the Planet: Human Niche Construction at Work,” RCC Perspectives: Transformations in Environment and Society 2016, no. 5. doi.org/10.5282/rcc/7723. External links http://www.nicheconstruction.com/ Ecological niche Extended evolutionary synthesis Evolutionary biology
Niche construction
[ "Biology" ]
2,750
[ "Evolutionary biology" ]
769,434
https://en.wikipedia.org/wiki/Setoid
In mathematics, a setoid (X, ~) is a set (or type) X equipped with an equivalence relation ~. A setoid may also be called E-set, Bishop set, or extensional set. Setoids are studied especially in proof theory and in type-theoretic foundations of mathematics. Often in mathematics, when one defines an equivalence relation on a set, one immediately forms the quotient set (turning equivalence into equality). In contrast, setoids may be used when a difference between identity and equivalence must be maintained, often with an interpretation of intensional equality (the equality on the original set) and extensional equality (the equivalence relation, or the equality on the quotient set). Proof theory In proof theory, particularly the proof theory of constructive mathematics based on the Curry–Howard correspondence, one often identifies a mathematical proposition with its set of proofs (if any). A given proposition may have many proofs, of course; according to the principle of proof irrelevance, normally only the truth of the proposition matters, not which proof was used. However, the Curry–Howard correspondence can turn proofs into algorithms, and differences between algorithms are often important. So proof theorists may prefer to identify a proposition with a setoid of proofs, considering proofs equivalent if they can be converted into one another through beta conversion or the like. Type theory In type-theoretic foundations of mathematics, setoids may be used in a type theory that lacks quotient types to model general mathematical sets. For example, in Per Martin-Löf's intuitionistic type theory, there is no type of real numbers, only a type of regular Cauchy sequences of rational numbers. To do real analysis in Martin-Löf's framework, therefore, one must work with a setoid of real numbers, the type of regular Cauchy sequences equipped with the usual notion of equivalence. Predicates and functions of real numbers need to be defined for regular Cauchy sequences and proven to be compatible with the equivalence relation. Typically (although it depends on the type theory used), the axiom of choice will hold for functions between types (intensional functions), but not for functions between setoids (extensional functions). The term "set" is variously used either as a synonym of "type" or as a synonym of "setoid". Constructive mathematics In constructive mathematics, one often takes a setoid with an apartness relation instead of an equivalence relation, called a constructive setoid. One sometimes also considers a partial setoid using a partial equivalence relation or partial apartness (see e.g. Barthe et al., section 1). See also Groupoid Notes References . . External links Implementation of setoids in Coq Abstract algebra Category theory Proof theory Type theory Equivalence (mathematics)
Setoid
[ "Mathematics" ]
585
[ "Functions and mappings", "Mathematical structures", "Proof theory", "Mathematical logic", "Mathematical objects", "Type theory", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Abstract algebra", "Algebra" ]
769,724
https://en.wikipedia.org/wiki/Hawaiian%E2%80%93Emperor%20seamount%20chain
The Hawaiian–Emperor seamount chain is a mostly undersea mountain range in the Pacific Ocean that reaches above sea level in Hawaii. It is composed of the Hawaiian ridge, consisting of the islands of the Hawaiian chain northwest to Kure Atoll, and the Emperor Seamounts: together they form a vast underwater mountain region of islands and intervening seamounts, atolls, shallows, banks and reefs along a line trending southeast to northwest beneath the northern Pacific Ocean. The seamount chain, containing over 80 identified undersea volcanoes, stretches about from the Aleutian Trench off the coast of the Kamchatka peninsula in the far northwest Pacific to the Kamaʻehuakanaloa Seamount (formerly Lōʻihi), the youngest volcano in the chain, which lies about southeast of the Island of Hawaiʻi. Regions The chain can be divided into three subsections. The first, the Hawaiian archipelago (also known as the Windward isles), consists of the islands comprising the U.S. state of Hawaii. As it is the closest to the hotspot, this volcanically active region is the youngest part of the chain, with ages ranging from 400,000 years to 5.1 million years. The island of Hawaii is composed of five volcanoes, of which four (Kilauea, Mauna Loa, Hualalai, and Mauna Kea) are active. The island of Maui has one active volcano, Haleakalā. Kamaʻehuakanaloa Seamount continues to grow offshore of Hawaii island, and is the only known volcano in the chain in the submarine pre-shield stage. The second part of the chain is composed of the Northwestern Hawaiian Islands, collectively referred to as the Leeward isles, the constituents of which are between 7.2 and 27.7 million years old. Erosion has long since overtaken volcanic activity at these islands, and most of them are atolls, atoll islands, and extinct islands. They contain many of the most northerly atolls in the world; Kure Atoll, in this group, is the northernmost atoll on Earth. On June 15, 2006, U.S. President George W. Bush issued a proclamation creating Papahānaumokuākea Marine National Monument under the Antiquities Act of 1906. The national monument, meant to protect the biodiversity of the Hawaiian isles, encompasses all of the northern isles, and is one of the largest such protected areas in the world. The proclamation limits tourism to the area, and called for a phase-out of fishing by 2011. The oldest and most heavily eroded part of the chain are the Emperor seamounts, which are 39 to 85 million years old. The Emperor and Hawaiian chains form an angle of about 120°. This bend was long attributed to a relatively sudden change of 60° in the direction of plate motion, but research conducted in 2003 suggests that it was the movement of the hotspot itself that caused the bend. The issue continues to remain under academic debate. All of the volcanoes in this part of the chain have long since subsided below sea level, becoming seamounts and guyots. Many of the volcanoes are named after former emperors of Japan. The seamount chain extends to the West Pacific, and terminates at the Kuril–Kamchatka Trench, a subduction zone at the border of Russia. Formation The oldest confirmed age for one of the Emperor Seamounts is 81 million years, for Detroit Seamount. However, Meiji Seamount, located to the north of Detroit Seamount, is likely somewhat older. 
In 1963, geologist John Tuzo Wilson hypothesized the origins of the Hawaiian–Emperor seamount chain, explaining that the seamounts were created by a hotspot of volcanic activity that was essentially stationary as the Pacific tectonic plate drifted in a northwesterly direction, leaving a trail of increasingly eroded volcanic islands and seamounts in its wake. An otherwise inexplicable kink in the chain marks a shift in the movement of the Pacific plate some 47 million years ago, from a northward to a more northwesterly direction, and the kink has been presented in geology texts as an example of how a tectonic plate can shift direction comparatively suddenly. A look at the USGS map on the origin of the Hawaiian Islands clearly shows this "spearpoint". In a more recent study, Sharp and Clague interpret the bend as starting at about 50 million years ago. They also conclude that the bend formed from a "traditional" cause—a change in the direction of motion of the Pacific plate. However, recent research shows that the hotspot itself may have moved with time. Some evidence comes from analysis of the orientation of the ancient magnetic field preserved by magnetite in ancient lava flows sampled at four seamounts: this evidence from paleomagnetism shows a more complex history than the commonly accepted view of a stationary hotspot. If the hotspot had remained above a fixed mantle plume during the past 80 million years, the latitude as recorded by the orientation of the ancient magnetic field preserved by magnetite (paleolatitude) should be constant for each sample; this should also signify original cooling at the same latitude as the current location of the Hawaiian hotspot. Instead of remaining constant, the paleolatitudes of the Emperor Seamounts show a change from north to south, with decreasing age. The paleomagnetic data from the seamounts of the Emperor chain suggest motion of the Hawaiian hotspot in Earth's mantle. Tarduno et al. have interpreted that the bend in the seamount chain may be caused by circulation patterns in the flowing solid mantle (mantle "wind") rather than a change in plate motion. There are two distinct interpretations for the cause of the bend in the seamounts of the Emperor chain, as previously mentioned. First, that the bend was caused only by a change in the Pacific plate motion. Second, that the bend was caused by hotspot movement only. In 2004 geologist Yaoling Niu proposed a model that attributed the bend largely to a change in plate motion along with some motion in the hotspot. Niu proposes that the bend starts at 43 Ma and is caused by a "trench jam". This "trench jam" is caused by the arrival of the Emperor chain seamounts at the northern subduction zone. These thick, buoyant seamounts resisted subduction and caused a reorientation of plate motion. This explains the sudden change in plate motion and is supported by the orientation of nearby island chains, which also have a sudden bend mirroring that of the Emperor chain. As shown by Tarduno et al., the hotspot does show some north–south motion, but Niu's model shows that for the bend to be attributed completely to hotspot motion, the Pacific plate would have to have remained stationary from 81 Ma to 43 Ma. This is not the case, as magnetic anomalies on the Pacific plate indicate motion of around 60 mm per year during that period. This model, consisting of a change in plate motion combined with small north–south motions of the hotspot, seems to be the best supported theory concerning the bend in the Emperor chain to date.
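As a rough illustration of the magnitude involved in that last argument, the figures quoted above (motion of around 60 mm per year sustained between 81 Ma and 43 Ma) correspond to a displacement of well over two thousand kilometres, which is why the plate cannot be treated as stationary over that interval. A minimal back-of-the-envelope sketch, added here only as an illustration:

rate_mm_per_yr = 60                      # plate motion quoted above
interval_yr = (81 - 43) * 1_000_000      # from 81 Ma to 43 Ma
distance_km = rate_mm_per_yr * interval_yr / 1_000_000   # mm per km
print(distance_km)                       # about 2,280 km of plate motion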
In addition to previous interpretations of the cause of the bend in the seamount chain, Hu et al. have proposed a close relationship between mantle plume migration and change in plate tectonic motion. Expanding on previous models, they interpreted the Pacific Plate's motion as having been predominantly northward prior to 47 million years ago. Traditionally, the force pulling the Pacific Plate to the north was attributed to the Izanagi–Pacific Ridge subduction zone. However, in a 2021 study, Hu et al. proposed that this subduction zone was not a strong enough force to have been pulling the Pacific Plate on its own. Instead, they introduced the concept that there was an intra-oceanic subduction zone involving the Kronotsky and Olyutorsky arcs. According to their findings, this subduction zone played a significant role in the northward pull on the Pacific Plate. Around 47 million years ago, these northern forces came to an end. Near the same time, there were notable changes in the movement of the Hawaiian hotspot. Approximately 50 Ma, the Hawaiian hotspot started to drift to the south. However, there is not a widely accepted theory as to the mechanism that caused the hotspot to drift. The combination of these events, along with new subduction zones in the west, could explain the large bend present in the Hawaiian–Emperor Seamount Chain. Aging The chain has been produced by the movement of the ocean crust over the Hawaii hotspot, an upwelling of hot rock from the Earth's mantle. As the oceanic crust moves the volcanoes farther away from their source of magma, their eruptions become less frequent and less powerful until they eventually cease altogether. At that point erosion of the volcano and subsidence of the seafloor cause the volcano to gradually diminish. As the volcano sinks and erodes, it first becomes an atoll island and then an atoll. Further subsidence causes the volcano to sink below the sea surface, becoming a seamount and/or a guyot. Economic activity From the 1960s to the 1980s, the seamounts were intensively bottom trawled. Trawling has continued since then at lower rates, particularly by Japanese ships seeking Pentaceros wheeleri. The North Pacific Fisheries Commission regulates fishing in the area. See also Isostasy Kodiak–Bowie Seamount chain New England Seamounts Oceanic trench Pacific-Kula Ridge Plate tectonics Timeline of the far future Vitória-Trindade Ridge References Informational notes Citations Further reading USGS, "The long trail of the Hawaiian hotspot" National Geographic News: John Roach, "Hot Spot That Spawned Hawaii Was on the Move, Study Finds", August 14, 2003 Evolution of Hawaiian Volcanoes from the USGS. Ken Rubin, "The Formation of the Hawaiian Islands" with tables and diagrams illustrating the progressive age of the volcanoes. Hot Spots and Mantle Plumes Seamount chains Seamounts of the Pacific Ocean Volcanism of Oceania Volcanism of the Pacific Ocean Volcanism of Hawaii Guyots Landforms of Hawaii Physical oceanography Volcanoes of Hawaii Hotspot tracks Cretaceous volcanism Cretaceous Oceania Cenozoic volcanism Cenozoic Hawaii Cenozoic Oceania
Hawaiian–Emperor seamount chain
[ "Physics" ]
2,131
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
769,911
https://en.wikipedia.org/wiki/Pectic%20acid
Pectic acid, also known as polygalacturonic acid, is a water-soluble, transparent gelatinous acid existing in over-ripe fruit and some vegetables. It is a product of pectin degradation in plants, and is produced via the interaction between pectinase and pectin (the latter being common in the wine-making industry). In the early stage of development of fruits, the pectic substance is a water-insoluble protopectin which is converted into pectin by the enzyme protopectinase during ripening of fruit. In over-ripe fruits, due to the presence of the pectic methyl esterase enzyme, the pectin gets largely converted to pectic acid which is water-insoluble. For this reason, neither immature nor over-ripe fruits are suitable for making jelly; only ripe fruits are used. References Carboxylic acids Polysaccharides
Pectic acid
[ "Chemistry" ]
196
[ "Carbohydrates", "Carboxylic acids", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs", "Polysaccharides" ]
770,314
https://en.wikipedia.org/wiki/Blot%20%28biology%29
In molecular biology and genetics, a blot is a method of transferring large biomolecules (proteins, DNA or RNA) onto a carrier, such as a membrane composed of nitrocellulose, polyvinylidene fluoride or nylon. In many instances, this is done after a gel electrophoresis, transferring the molecules from the gel onto the blotting membrane, and other times adding the samples directly onto the membrane. After the blotting, the transferred molecules are then visualized by colorant staining (for example, silver staining of proteins), autoradiographic visualization of radiolabelled molecules (performed before the blot), or specific labelling of some proteins or nucleic acids. The latter is done with antibodies or hybridization probes that bind only to some molecules of the blot and have an enzyme joined to them. After proper washing, this enzymatic activity (and so, the molecules found in the blot) is visualized by incubation with a proper reagent, rendering either a colored deposit on the blot or a chemiluminescent reaction which is registered by photographic film. Southern blot A Southern blot is a method routinely used in molecular biology for detection of a specific DNA sequence in DNA samples. Southern blotting combines transfer of electrophoresis-separated DNA fragments to a filter membrane and subsequent fragment detection by probe hybridization. Western blot A western blot is used for the detection of specific proteins in complex samples. Proteins are first separated by size using electrophoresis before being transferred to an appropriate blotting matrix (usually polyvinylidene fluoride or nitrocellulose) and subsequent detection with antibodies. Far-western blot Similar to a western blot, the far-western blot uses protein–protein interactions to detect the presence of a specific protein immobilized on a blotting matrix. Antibodies are then used to detect the presence of the protein–protein complex, making the Far-Western blot a specific case of the Western blot. Southwestern blot A southwestern blot is based on Southern blot and is used to identify and characterize DNA-binding proteins by their ability to bind to specific oligonucleotide probes. The proteins are separated by gel electrophoresis and are subsequently transferred to nitrocellulose membranes similar to other types of blotting. Eastern blot The eastern blot is used for the detection of specific posttranslational modifications of proteins. Proteins are separated by gel electrophoresis before being transferred to a blotting matrix whereupon posttranslational modifications are detected by specific substrates (cholera toxin, concanavalin, phosphomolybdate, etc.) or antibodies. Far-eastern blot The far-eastern blot is for the detection of lipid-linked oligosaccharides. High-performance thin-layer chromatography is first used to separate the lipids by physical and chemical characteristics, then transferred to a blotting matrix before the oligosaccharides are detected by a specific binding protein (i.e. antibodies or lectins). Northern blot The northern blot is for the detection of specific RNA sequences in complex samples. Northern blotting first separates samples by size via gel electrophoresis before they are transferred to a blotting matrix and detected with labeled RNA probes. Reverse northern blot The reverse northern blot differs from both northern and Southern blot in that DNA is first immobilized on a blotting matrix and specific sequences are detected with labeled RNA probes. 
Dot blot A dot blot is a special case of any of the above blots where the analyte is added directly to the blotting matrix (and appears as a "dot") as opposed to separating the sample by electrophoresis prior to blotting. List of blots Southern blot for DNA northern blot for RNA reverse northern blot for RNA western blot for proteins far-western blot for protein–protein interactions eastern blot for post-translational modification far-eastern blot for glycolipids dot blot See also Immunoscreening References Molecular biology Laboratory techniques
Blot (biology)
[ "Chemistry", "Biology" ]
892
[ "Biochemistry", "nan", "Molecular biology" ]
770,467
https://en.wikipedia.org/wiki/Tandem%20mass%20spectrometry
Tandem mass spectrometry, also known as MS/MS or MS2, is a technique in instrumental analysis in which two or more stages of analysis using one or more mass analyzers are performed, with an additional reaction step in between these analyses, to increase the ability to analyse chemical samples. A common use of tandem MS is the analysis of biomolecules, such as proteins and peptides. The molecules of a given sample are ionized and the first spectrometer (designated MS1) separates these ions by their mass-to-charge ratio (often given as m/z or m/Q). Ions of a particular m/z-ratio coming from MS1 are selected and then made to split into smaller fragment ions, e.g. by collision-induced dissociation, ion-molecule reaction, or photodissociation. These fragments are then introduced into the second mass spectrometer (MS2), which in turn separates the fragments by their m/z-ratio and detects them. The fragmentation step makes it possible to identify and separate ions that have very similar m/z-ratios in regular mass spectrometers. Structure Typical tandem mass spectrometry instrumentation setups include the triple quadrupole mass spectrometer (QqQ), multi-sector mass spectrometer, ion trap, quadrupole–time of flight (Q-TOF), Fourier transform ion cyclotron resonance (FT-ICR), and hybrid mass spectrometers. Triple quadrupole mass spectrometer Triple quadrupole mass spectrometers use the first and third quadrupoles as mass filters. When analytes pass the second quadrupole, the fragmentation proceeds through collision with gas. Quadrupole–time of flight (Q-TOF) The Q-TOF mass spectrometer combines quadrupole and TOF instruments, which together enable fragmentation experiments that yield highly accurate mass quantitations for product ions. This is a method of mass spectrometry in which fragment ion (m/z) ratios are determined through a time-of-flight measurement. Hybrid mass spectrometer A hybrid mass spectrometer consists of more than two mass analyzers. Instrumentation Multiple stages of mass analysis separation can be accomplished with individual mass spectrometer elements separated in space or using a single mass spectrometer with the MS steps separated in time. For tandem mass spectrometry in space, the different elements are often noted in a shorthand, giving the type of mass selector used.
Often the number of steps, n, is not indicated, but occasionally the value is specified; for example MS3 indicates three stages of separation. Tandem in time MS instruments do not use the modes described next, but typically collect all of the information from a precursor ion scan and a product ion scan of the entire spectrum. Each instrumental configuration utilizes a unique mode of mass identification. Tandem in space MS/MS modes When tandem MS is performed with an in-space design, the instrument must operate in one of a variety of modes. There are a number of different tandem MS/MS experimental setups, and each mode has its own applications and provides different information. Tandem MS in space uses the coupling of two instrument components which measure the same mass spectrum range but with a controlled fractionation between them in space, while tandem MS in time involves the use of an ion trap. There are four main scan experiments possible using MS/MS: precursor ion scan, product ion scan, neutral loss scan, and selected reaction monitoring. For a precursor ion scan, the product ion is selected in the second mass analyzer, and the precursor masses are scanned in the first mass analyzer. Note that precursor ion is synonymous with parent ion and product ion with daughter ion; however, the use of these anthropomorphic terms is discouraged. In a product ion scan, a precursor ion is selected in the first stage, allowed to fragment, and then all resultant masses are scanned in the second mass analyzer and detected in the detector that is positioned after the second mass analyzer. This experiment is commonly performed to identify transitions used for quantification by tandem MS. In a neutral loss scan, the first mass analyzer scans all the masses. The second mass analyzer also scans, but at a set offset from the first mass analyzer. This offset corresponds to a neutral loss that is commonly observed for the class of compounds. In a constant-neutral-loss scan, all precursors that undergo the loss of a specified common neutral are monitored. To obtain this information, both mass analyzers are scanned simultaneously, but with a mass offset that correlates with the mass of the specified neutral. Similar to the precursor-ion scan, this technique is also useful in the selective identification of closely related classes of compounds in a mixture. In selected reaction monitoring, both mass analyzers are set to a selected mass. It is a selective analysis mode, which can increase sensitivity. Fragmentation Fragmentation of gas-phase ions is essential to tandem mass spectrometry and occurs between different stages of mass analysis. There are many methods used to fragment the ions, and these can result in different types of fragmentation and thus different information about the structure and composition of the molecule. In-source fragmentation Often, the ionization process is sufficiently violent to leave the resulting ions with sufficient internal energy to fragment within the mass spectrometer. If the product ions persist in their non-equilibrium state for a moderate amount of time before auto-dissociation, this process is called metastable fragmentation. Nozzle-skimmer fragmentation refers to the purposeful induction of in-source fragmentation by increasing the nozzle-skimmer potential, usually on electrospray-based instruments.
Although in-source fragmentation allows for fragmentation analysis, it is not technically tandem mass spectrometry unless metastable ions are mass analyzed or selected before auto-dissociation and a second stage of analysis is performed on the resulting fragments. In-source fragmentation can be used in lieu of tandem mass spectrometry through the utilization of Enhanced in-Source Fragmentation Annotation (EISA) technology, which generates fragmentation that directly matches tandem mass spectrometry data. Fragments observed by EISA have higher signal intensity than traditional fragments, which suffer losses in the collision cells of tandem mass spectrometers. EISA enables fragmentation data acquisition on MS1 mass analyzers such as time-of-flight and single quadrupole instruments. In-source fragmentation is often used in addition to tandem mass spectrometry (with post-source fragmentation) to allow for two steps of fragmentation in a pseudo MS3-type of experiment. Collision-induced dissociation Post-source fragmentation is most often what is being used in a tandem mass spectrometry experiment. Energy can also be added to the ions, which are usually already vibrationally excited, through post-source collisions with neutral atoms or molecules, the absorption of radiation, or the transfer or capture of an electron by a multiply charged ion. Collision-induced dissociation (CID), also called collisionally activated dissociation (CAD), involves the collision of an ion with a neutral atom or molecule in the gas phase and subsequent dissociation of the ion. For example, consider AB+ + M → A + B+ + M, where the ion AB+ collides with the neutral species M and subsequently breaks apart. The details of this process are described by collision theory. Due to different instrumental configurations, two main types of CID are possible: (i) beam-type (in which precursor ions are fragmented in flight) and (ii) ion trap-type (in which precursor ions are first trapped, and then fragmented). A third and more recent type of CID fragmentation is higher-energy collisional dissociation (HCD). HCD is a CID technique specific to orbitrap mass spectrometers in which fragmentation takes place external to the ion trap; it happens in the HCD cell (in some instruments named "ion routing multipole"). HCD is a trap-type fragmentation that has been shown to have beam-type characteristics. Freely available, large-scale, high-resolution tandem mass spectrometry databases exist (e.g. METLIN with 850,000 molecular standards each with experimental CID MS/MS data), and are typically used to facilitate small molecule identification.
anthracene or azobenzene) for this purpose: where A is the anion. ETD cleaves randomly along the peptide backbone (c and z ions) while side chains and modifications such as phosphorylation are left intact. The technique only works well for higher charge state ions (z > 2); however, relative to collision-induced dissociation (CID), ETD is advantageous for the fragmentation of longer peptides or even entire proteins. This makes the technique important for top-down proteomics. Much like ECD, ETD is effective for peptides with modifications such as phosphorylation. Electron-transfer and higher-energy collision dissociation (EThcD) is a combination of ETD and HCD where the peptide precursor is initially subjected to an ion/ion reaction with fluoranthene anions in a linear ion trap, which generates c- and z-ions. In the second step, HCD all-ion fragmentation is applied to all ETD-derived ions to generate b- and y-ions prior to final analysis in the orbitrap analyzer. This method employs dual fragmentation to generate ion- and thus data-rich MS/MS spectra for peptide sequencing and PTM localization. Negative electron-transfer dissociation Fragmentation can also occur with a deprotonated species, in which an electron is transferred from the species to a cationic reagent in a negative electron transfer dissociation (NETD): Following this transfer event, the electron-deficient anion undergoes internal rearrangement and fragments. NETD is the ion/ion analogue of electron-detachment dissociation (EDD). NETD is compatible with fragmenting peptides and proteins along the backbone at the Cα-C bond. The resulting fragments are usually a•- and x-type product ions. Electron-detachment dissociation Electron-detachment dissociation (EDD) is a method for fragmenting anionic species in mass spectrometry. It serves as a negative counter mode to electron capture dissociation. Negatively charged ions are activated by irradiation with electrons of moderate kinetic energy. The result is ejection of electrons from the parent ionic molecule, which causes dissociation via recombination. Charge-transfer dissociation Reaction between positively charged peptides and cationic reagents, also known as charge transfer dissociation (CTD), has recently been demonstrated as an alternative high-energy fragmentation pathway for low-charge state (1+ or 2+) peptides. The proposed mechanism of CTD using helium cations as the reagent is: Initial reports are that CTD causes backbone Cα-C bond cleavage of peptides and provides a•- and x-type product ions. Photodissociation The energy required for dissociation can be added by photon absorption, resulting in ion photodissociation and represented by AB+ + hν → A + B+, where hν represents the photon absorbed by the ion. Ultraviolet lasers can be used, but can lead to excessive fragmentation of biomolecules. Infrared multiphoton dissociation Infrared photons will heat the ions and cause dissociation if enough of them are absorbed. This process is called infrared multiphoton dissociation (IRMPD) and is often accomplished with a carbon dioxide laser and an ion trapping mass spectrometer such as an FTMS. Blackbody infrared radiative dissociation Blackbody radiation can be used for photodissociation in a technique known as blackbody infrared radiative dissociation (BIRD). In the BIRD method, the entire mass spectrometer vacuum chamber is heated to create infrared light.
BIRD uses this radiation to excite increasingly more energetic vibrations of the ions, until a bond breaks, creating fragments. This is similar to infrared multiphoton dissociation, which also uses infrared light, but from a different source. BIRD is most often used with Fourier transform ion cyclotron resonance mass spectrometry. Surface-induced dissociation With surface-induced dissociation (SID), the fragmentation is a result of the collision of an ion with a surface under high vacuum. Today, SID is used to fragment a wide range of ions. Years ago, it was only common to use SID on lower mass, singly charged species because ionization methods and mass analyzer technologies were not advanced enough to properly form, transmit, or characterize ions of high m/z. Over time, self-assembled monolayer surfaces (SAMs) composed of CF3(CF2)10CH2CH2S on gold have been the most prominently used collision surfaces for SID in a tandem spectrometer. SAMs have acted as the most desirable collision targets due to their characteristically large effective masses for the collision of incoming ions. Additionally, these surfaces are composed of rigid fluorocarbon chains, which do not significantly dampen the energy of the projectile ions. The fluorocarbon chains are also beneficial because of their ability to resist facile electron transfer from the metal surface to the incoming ions. SID's ability to produce subcomplexes that remain stable and provide valuable information on connectivity is unmatched by any other dissociation technique. Since the complexes produced from SID are stable and retain distribution of charge on the fragment, this produces a unique spectrum in which the complex is centered around a narrower m/z distribution. The SID products and the energy at which they form are reflective of the strengths and topology of the complex. The unique dissociation patterns help discover the quaternary structure of the complex. The symmetric charge distribution and dissociation dependence are unique to SID and make the spectra produced distinct from those of any other dissociation technique. The SID technique is also applicable to ion-mobility mass spectrometry (IM-MS). Three different applications of this technique include characterizing the topology, intersubunit connectivity, and degree of unfolding of protein structures. Analysis of protein structure unfolding is the most commonly used application of the SID technique. In ion-mobility mass spectrometry (IM-MS), SID is used for dissociation of the source-activated precursors of three different types of protein complexes: C-reactive protein (CRP), transthyretin (TTR), and concanavalin A (Con A). This method is used to observe the degree of unfolding for each of these complexes. In this application, SID showed the structures of the precursor ions as they exist before the collision with the surface. IM-MS utilizes SID as a direct measure of the conformation of each protein's subunits. Fourier-transform ion cyclotron resonance (FTICR) mass spectrometers are able to provide ultrahigh resolution and high mass accuracy in mass measurements. These features make FTICR mass spectrometers a useful tool for a wide variety of applications, including dissociation experiments such as collision-induced dissociation (CID), electron transfer dissociation (ETD), and others. In addition, surface-induced dissociation has been implemented with this instrument for the study of fundamental peptide fragmentation.
Specifically, SID has been applied to the study of energetics and the kinetics of gas-phase fragmentation within an ICR instrument. This approach has been used to understand the gas-phase fragmentation of protonated peptides, odd-electron peptide ions, non-covalent ligand-peptide complexes, and ligated metal clusters. Quantitative proteomics Quantitative proteomics is used to determine the relative or absolute amount of proteins in a sample. Several quantitative proteomics methods are based on tandem mass spectrometry. MS/MS has become a benchmark procedure for the structural elucidation of complex biomolecules. One method commonly used for quantitative proteomics is isobaric tag labeling. Isobaric tag labeling enables simultaneous identification and quantification of proteins from multiple samples in a single analysis. To quantify proteins, peptides are labeled with chemical tags that have the same structure and nominal mass, but vary in the distribution of heavy isotopes in their structure. These tags, commonly referred to as tandem mass tags, are designed so that the mass tag is cleaved at a specific linker region upon higher-energy collisional-induced dissociation (HCD) during tandem mass spectrometry yielding reporter ions of different masses. Protein quantitation is accomplished by comparing the intensities of the reporter ions in the MS/MS spectra. Two commercially available isobaric tags are iTRAQ and TMT reagents. Isobaric tags for relative and absolute quantitation (iTRAQ) An isobaric tag for relative and absolute quantitation (iTRAQ) is a reagent for tandem mass spectrometry that is used to determine the amount of proteins from different sources in a single experiment. It uses stable isotope labeled molecules that can form a covalent bond with the N-terminus and side chain amines of proteins. The iTRAQ reagents are used to label peptides from different samples that are pooled and analyzed by liquid chromatography and tandem mass spectrometry. The fragmentation of the attached tag generates a low molecular mass reporter ion that can be used to relatively quantify the peptides and the proteins from which they originated. Tandem mass tag (TMT) A tandem mass tag (TMT) is an isobaric mass tag chemical label used for protein quantification and identification. The tags contain four regions: mass reporter, cleavable linker, mass normalization, and protein reactive group. TMT reagents can be used to simultaneously analyze 2 to 11 different peptide samples prepared from cells, tissues or biological fluids. Recent developments allow up to 16 and even 18 samples (16plex or 18plex respectively) to be analyzed. Three types of TMT reagents are available with different chemical reactivities: (1) a reactive NHS ester functional group for labeling primary amines (TMTduplex, TMTsixplex, TMT10plex plus TMT11-131C), (2) a reactive iodoacetyl functional group for labeling free sulfhydryls (iodoTMT) and (3) reactive alkoxyamine functional group for labeling of carbonyls (aminoxyTMT). Multiplexed DIA (plexDIA) The progress in data independent acquisition (DIA) enabled multiplexed quantitative proteomics with non-isobaric mass tags and a new method called plexDIA introduced in 2021. This new approach increases the number of data points by parallelizing both samples and peptides, thus achieving multiplicative gains. It has the potential to continue scaling proteomic throughput with new mass tags and algorithms. 
plexDIA is applicable to both bulk and single-cell samples and is particularly powerful for single-cell proteomics. Applications Peptides Tandem mass spectrometry can be used for protein sequencing. When intact proteins are introduced to a mass analyzer, this is called "top-down proteomics" and when proteins are digested into smaller peptides and subsequently introduced into the mass spectrometer, this is called "bottom-up proteomics". Shotgun proteomics is a variant of bottom-up proteomics in which proteins in a mixture are digested prior to separation and tandem mass spectrometry. Tandem mass spectrometry can produce a peptide sequence tag that can be used to identify a peptide in a protein database. A notation has been developed for indicating peptide fragments that arise from a tandem mass spectrum. Peptide fragment ions are indicated by a, b, or c if the charge is retained on the N-terminus and by x, y or z if the charge is maintained on the C-terminus. The subscript indicates the number of amino acid residues in the fragment. Superscripts are sometimes used to indicate neutral losses in addition to the backbone fragmentation, * for loss of ammonia and ° for loss of water. Although peptide backbone cleavage is the most useful for sequencing and peptide identification, other fragment ions may be observed under high energy dissociation conditions. These include the side-chain loss ions d, v and w, ammonium ions, and additional sequence-specific fragment ions associated with particular amino acid residues. Oligosaccharides Oligosaccharides may be sequenced using tandem mass spectrometry in a similar manner to peptide sequencing. Fragmentation generally occurs on either side of the glycosidic bond (b, c, y and z ions) but also under more energetic conditions through the sugar ring structure in a cross-ring cleavage (x ions). Again, trailing subscripts are used to indicate position of the cleavage along the chain. For cross-ring cleavage ions, the nature of the cleavage is indicated by preceding superscripts. Oligonucleotides Tandem mass spectrometry has been applied to DNA and RNA sequencing. A notation for gas-phase fragmentation of oligonucleotide ions has been proposed. Newborn screening Newborn screening is the process of testing newborn babies for treatable genetic, endocrinologic, metabolic and hematologic diseases. The development of tandem mass spectrometry screening in the early 1990s led to a large expansion of potentially detectable congenital metabolic diseases that affect blood levels of organic acids. Small molecule analysis It has been shown that tandem mass spectrometry data is highly consistent across instrument and manufacturer platforms, including quadrupole time-of-flight (QTOF) and Q Exactive instrumentation, especially at 20 eV. Limitations Tandem mass spectrometry cannot be applied to single-cell analyses, as it is not sensitive enough to analyze the small amounts of material in a single cell. These limitations are primarily due to a combination of inefficient ion production and ion losses within the instruments, which arise from chemical noise sources such as solvents. Future outlook Tandem mass spectrometry will remain a useful tool for the characterization of proteins, nucleoprotein complexes, and other biological structures. However, some challenges remain, such as characterizing the proteome both quantitatively and qualitatively.
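As an illustration of the peptide fragment-ion notation described above, the short sketch below computes m/z values for singly charged b- and y-ions of a peptide from standard monoisotopic residue masses. This is a minimal, hedged example rather than a description of any particular search engine: the peptide chosen is arbitrary, the mass constants are rounded, and real software additionally models neutral losses, higher charge states, and isotope patterns.

# Minimal sketch: singly charged b- and y-ion m/z values for a peptide.
# Monoisotopic amino acid residue masses in daltons (rounded).
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
    "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056   # H2O retained by the C-terminal (y) fragment
PROTON = 1.00728   # added for a singly protonated ion

def fragment_ions(peptide):
    """Return (b_ions, y_ions) as lists of (label, m/z) tuples."""
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    n = len(masses)
    b_ions, y_ions = [], []
    for i in range(1, n):  # one cleavage between each pair of residues
        b = sum(masses[:i]) + PROTON            # charge kept on the N-terminus
        y = sum(masses[i:]) + WATER + PROTON    # charge kept on the C-terminus
        b_ions.append(("b%d" % i, round(b, 4)))
        y_ions.append(("y%d" % (n - i), round(y, 4)))
    return b_ions, y_ions

if __name__ == "__main__":
    b_ions, y_ions = fragment_ions("PEPTIDE")
    print("b-ions:", b_ions)
    print("y-ions:", y_ions)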
See also Accelerator mass spectrometry Cross section (physics) Mass-analyzed ion-kinetic-energy spectrometry Unimolecular ion decomposition References Bibliography External links An Introduction to Mass Spectrometry by Dr Alison E. Ashcroft
Tandem mass spectrometry
[ "Physics" ]
5,174
[ "Mass spectrometry", "Spectrum (physical sciences)", "Tandem mass spectrometry" ]
770,546
https://en.wikipedia.org/wiki/Piping
Within industry, piping is a system of pipes used to convey fluids (liquids and gases) from one location to another. The engineering discipline of piping design studies the efficient transport of fluid. Industrial process piping (and accompanying in-line components) can be manufactured from wood, fiberglass, glass, steel, aluminum, plastic, copper, and concrete. The in-line components, known as fittings, valves, and other devices, typically sense and control the pressure, flow rate and temperature of the transmitted fluid, and usually are included in the field of piping design (or piping engineering), though the sensors and automatic controlling devices may alternatively be treated as part of instrumentation and control design. Piping systems are documented in piping and instrumentation diagrams (P&IDs). If necessary, pipes can be cleaned by the tube cleaning process. Piping sometimes refers to piping design, the detailed specification of the physical piping layout within a process plant or commercial building. In earlier days, this was sometimes called drafting, technical drawing, engineering drawing, and design, but is today commonly performed by designers who have learned to use automated computer-aided drawing or computer-aided design (CAD) software. Plumbing is a piping system with which most people are familiar, as it constitutes the form of fluid transportation that is used to provide potable water and fuels to their homes and businesses. Plumbing pipes also remove waste in the form of sewage, and allow venting of sewage gases to the outdoors. Fire sprinkler systems also use piping, and may transport nonpotable or potable water, or other fire-suppression fluids. Piping also has many other industrial applications, which are crucial for moving raw and semi-processed fluids for refining into more useful products. Some of the more exotic materials used in pipe construction are Inconel, titanium, chrome-moly and various other steel alloys. Engineering sub-fields Generally, industrial piping engineering has three major sub-fields: Piping material Piping design Stress analysis Stress analysis Process piping and power piping are typically checked by pipe stress engineers to verify that the routing, nozzle loads, hangers, and supports are properly placed and selected such that allowable pipe stress is not exceeded under different loads such as sustained loads, operating loads, pressure testing loads, etc., as stipulated by the ASME B31, EN 13480, GOST 32388, RD 10-249 or any other applicable codes and standards. It is necessary to evaluate the mechanical behavior of the piping under regular loads (internal pressure and thermal stresses) as well as under occasional and intermittent loading cases such as earthquake, high wind or special vibration, and water hammer. This evaluation is usually performed with the assistance of specialized (finite element) pipe stress analysis computer programs such as AutoPIPE, CAEPIPE, CAESAR, PASS/START-PROF, or ROHR2. In cryogenic pipe supports, most steels become more brittle as the temperature decreases from normal operating conditions, so it is necessary to know the temperature distribution for cryogenic conditions. Steel structures will have areas of high stress that may be caused by sharp corners in the design, or inclusions in the material. In three-dimensional pipe stress analysis, the pipes are modeled as beams with supports on both sides, and the analysis determines the bending moments in the pipes.
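As a rough illustration of the kind of check described in the stress analysis discussion above, the sketch below computes the circumferential (hoop) stress in a thin-walled pipe using Barlow's formula and compares it with an allowable stress. All numerical values in the example are assumptions, and the calculation is deliberately simplified: the design equations in the piping codes cited above include further factors (such as weld joint quality and temperature-dependent allowable stresses) that are omitted here.

# Simplified hoop-stress check for a thin-walled pipe (Barlow's formula).
# The input values in the example are illustrative assumptions only.

def hoop_stress(pressure_mpa, outside_diameter_mm, wall_thickness_mm):
    """Circumferential stress in MPa from internal pressure: s = P * D / (2 * t)."""
    return pressure_mpa * outside_diameter_mm / (2.0 * wall_thickness_mm)

def check_pipe(pressure_mpa, outside_diameter_mm, wall_thickness_mm,
               allowable_stress_mpa):
    """Return the computed hoop stress and whether it is within the allowable value."""
    s = hoop_stress(pressure_mpa, outside_diameter_mm, wall_thickness_mm)
    return s, s <= allowable_stress_mpa

if __name__ == "__main__":
    # Assumed example: 2.0 MPa internal pressure, 219.1 mm outside diameter,
    # 8.18 mm wall thickness, 137 MPa allowable stress.
    stress, acceptable = check_pipe(2.0, 219.1, 8.18, 137.0)
    print("hoop stress = %.1f MPa, within allowable: %s" % (stress, acceptable))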
Allowable (ASME) pipe grades permitted for the oil and gas industries are: carbon steel pipes and tubes (A53 Grade [A & B], A106 Grade [B & C]), and low and intermediate alloy steel pipes (A333 Grade [6], A335 Grade [P5, P9, P11, P12, P91]). Materials The material from which a pipe is manufactured often forms the basis for choosing the pipe. Materials that are used for manufacturing pipes include: Carbon steel ASTM A252 Spec Grade 1, Grade 2, Grade 3 Steel Pile Pipe Plastic piping, e.g. HDPE pipe, PE-X pipe, PP-R pipe or LDPE pipe. Low temperature service carbon steel Stainless steel Nonferrous metals, e.g. cupro-nickel, tantalum lined, etc. Nonmetallic, e.g. tempered glass, Teflon lined, PVC, etc. History Early wooden pipes were constructed out of logs that had a large hole bored lengthwise through the center. Later wooden pipes were constructed with staves and hoops similar to wooden barrel construction. Stave pipes have the advantage that they are easily transported as a compact pile of parts on a wagon and then assembled as a hollow structure at the job site. Wooden pipes were especially popular in mountain regions where transport of heavy iron or concrete pipes would have been difficult. Wooden pipes were easier to maintain than metal, because the wood did not expand or contract with temperature changes as much as metal, and consequently expansion joints and bends were not required. The thickness of the wood gave the pipes some insulating properties, which helped prevent freezing compared to metal pipes. Wood used for water pipes also does not rot very easily. Electrolysis does not affect wood pipes at all, since wood is a much better electrical insulator. In the Western United States, where redwood was used for pipe construction, it was found that redwood had "peculiar properties" that protected it from weathering, acids, insects, and fungus growths. Redwood pipes stayed smooth and clean indefinitely, while iron pipe by comparison would rapidly begin to scale and corrode and could eventually plug itself up with the corrosion. Standards There are certain standard codes that need to be followed while designing or manufacturing any piping system. Organizations that promulgate piping standards include: ASME – The American Society of Mechanical Engineers – B31 series ASME B31.1 Power piping (steam piping etc.)
ASME B31.3 Process piping ASME B31.4 Pipeline Transportation Systems for Liquid Hydrocarbons and Other Liquids and oil and gas ASME B31.5 Refrigeration piping and heat transfer components ASME B31.8 Gas transmission and distribution piping systems ASME B31.9 Building services piping ASME B31.11 Slurry Transportation Piping Systems (Withdrawn, Superseded by B31.4) ASME B31.12 Hydrogen Piping and Pipelines ASTM – American Society for Testing and Materials ASTM A252 Standard Specification for Welded and Seamless Steel Pipe Piles API – American Petroleum Institute API 5L Petroleum and natural gas industries—Steel pipe for pipeline transportation systems CWB – Canadian Welding Bureau EN 13480 – European metallic industrial piping code EN 13480-1 Metallic industrial piping – Part 1: General EN 13480-2 Metallic industrial piping – Part 2: Materials EN 13480-3 Metallic industrial piping – Part 3: Design and calculation EN 13480-4 Metallic industrial piping – Part 4: Fabrication and installation EN 13480-5 Metallic industrial piping – Part 5: Inspection and testing EN 13480-6 Metallic industrial piping – Part 6: Additional requirements for buried piping PD TR 13480-7 Metallic industrial piping – Part 7: Guidance on the use of conformity assessment procedures EN 13480-8 Metallic industrial piping – Part 8: Additional requirements for aluminium and aluminium alloy piping EN 13941 District heating pipes GOST, RD, SNiP, SP – Russian piping codes RD 10-249 Power Piping GOST 32388 Process Piping, HDPE Piping SNiP 2.05.06-85 & SP 36.13330.2012 Gas and Oil transmission piping systems GOST R 55990-2014 & SP 284.1325800.2016 Field pipelines SP 33.13330.2012 Steel Pipelines GOST R 55596-2013 District heating networks EN 1993-4-3 Eurocode 3 – Design of steel structures – Part 4-3: Pipelines AWS – American Welding Society AWWA – American Water Works Association MSS – Manufacturers' Standardization Society ANSI – American National Standards Institute NFPA – National Fire Protection Association EJMA – Expansion Joint Manufacturers Association Intro to pipe stress – https://web.archive.org/web/20161008161619/http://oakridgebellows.com/metal-expansion-joints/metal-expansion-joints-in-one-minute/part-1-thermal-growth%26#x20 (one minute) See also Drainage Firestop Gasket HDPE pipe Hydraulic machinery Hydrogen piping Hydrostatic test MS Pipe, MS Tube Pipe Cutting Pipefitter Pipe network analysis Pipe marking Pipe support Piping and plumbing fitting Coupling (piping) Double-walled pipe Elbow (piping) Nipple (plumbing) Pipe cap Street elbow Union (plumbing) Valve Victaulic Pipeline pre-commissioning Plastic pipework Plastic Pressure Pipe Systems Plumbing Riser clamp Thermal insulation References Further reading ASME B31.3 Process Piping Guide, Revision 2 from Los Alamos National Laboratory Engineering Standards Manual OST220-03-01-ESM Seismic Design and Retrofit of Piping Systems, July 2002 from American Lifelines Alliance website Engineering and Design, Liquid Process Piping. Engineer manual, entire document • (index page) • U.S. Army Corps of Engineers, EM 1110-l-4008, May 1999 External links Plumbing Mechanical engineering Building engineering Chemical engineering
Piping
[ "Physics", "Chemistry", "Engineering" ]
1,941
[ "Applied and interdisciplinary physics", "Building engineering", "Chemical engineering", "Plumbing", "Construction", "Civil engineering", "nan", "Mechanical engineering", "Piping", "Architecture" ]
771,168
https://en.wikipedia.org/wiki/Polynomial%20remainder%20theorem
In algebra, the polynomial remainder theorem or little Bézout's theorem (named after Étienne Bézout) is an application of Euclidean division of polynomials. It states that, for every number r, any polynomial f(x) is the sum of f(r) and the product of (x - r) and a polynomial in x of degree one less than the degree of f. In particular, f(r) is the remainder of the Euclidean division of f(x) by (x - r), and (x - r) is a divisor of f(x) if and only if f(r) = 0, a property known as the factor theorem. Examples Example 1 Let f(x) = x^3 - 12x^2 - 42. Polynomial division of f(x) by (x - 3) gives the quotient x^2 - 9x - 27 and the remainder -123. By the polynomial remainder theorem, f(3) = -123. Example 2 Proof that the polynomial remainder theorem holds for an arbitrary second-degree polynomial f(x) = ax^2 + bx + c by using algebraic manipulation: f(x) = ax^2 + bx + c = a(x^2 - r^2) + b(x - r) + ar^2 + br + c = (x - r)(a(x + r) + b) + f(r). So, f(x) = (x - r)(ax + ar + b) + f(r), which is exactly the formula of Euclidean division. The generalization of this proof to any degree is given below in the direct proof. Proofs Using Euclidean division The polynomial remainder theorem follows from the theorem of Euclidean division, which, given two polynomials f(x) (the dividend) and g(x) (the divisor), asserts the existence (and the uniqueness) of a quotient Q(x) and a remainder R(x) such that f(x) = Q(x)g(x) + R(x), where either R(x) = 0 or the degree of R(x) is smaller than the degree of g(x). If the divisor is g(x) = x - r, where r is a constant, then either R(x) = 0 or its degree is zero; in both cases, R(x) is a constant R that is independent of x; that is f(x) = Q(x)(x - r) + R. Setting x = r in this formula, we obtain: f(r) = R. Direct proof A constructive proof, which does not involve the existence theorem of Euclidean division, uses the identity x^k - r^k = (x - r)(x^(k-1) + x^(k-2) r + ... + x r^(k-2) + r^(k-1)). If S_k denotes the large factor in the right-hand side of this identity, and f(x) = a_n x^n + ... + a_1 x + a_0, one has f(x) - f(r) = a_n (x^n - r^n) + ... + a_1 (x - r) = (x - r)(a_n S_n + ... + a_1 S_1) (since x^0 - r^0 = 0, the constant term cancels). Adding f(r) to both sides of this equation, one gets simultaneously the polynomial remainder theorem and the existence part of the theorem of Euclidean division for this specific case. Applications The polynomial remainder theorem may be used to evaluate f(r) by calculating the remainder, R. Although polynomial long division is more difficult than evaluating the function itself, synthetic division is computationally easier. Thus, the function may be more "cheaply" evaluated using synthetic division and the polynomial remainder theorem. The factor theorem is another application of the remainder theorem: if the remainder is zero, then the linear divisor is a factor. Repeated application of the factor theorem may be used to factorize the polynomial. References Theorems about polynomials
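As a sketch of the synthetic-division evaluation mentioned in the Applications section above, the following code divides a polynomial by (x - r) using synthetic division, returning the quotient coefficients and the remainder, and checks that the remainder equals the value of the polynomial at r. The example polynomial is the one used in Example 1; the function names are illustrative only.

# Synthetic division of a polynomial by (x - r).
# Coefficients are listed from the highest-degree term down to the constant term.

def synthetic_division(coeffs, r):
    """Divide the polynomial with the given coefficients by (x - r).

    Returns (quotient_coeffs, remainder). By the polynomial remainder theorem
    the remainder equals the value of the polynomial at x = r."""
    acc = [coeffs[0]]
    for c in coeffs[1:]:
        acc.append(acc[-1] * r + c)
    return acc[:-1], acc[-1]   # the last accumulated value is the remainder

def evaluate(coeffs, x):
    """Direct evaluation by Horner's method, for comparison."""
    value = 0
    for c in coeffs:
        value = value * x + c
    return value

if __name__ == "__main__":
    f = [1, -12, 0, -42]            # f(x) = x^3 - 12x^2 - 42
    quotient, remainder = synthetic_division(f, 3)
    print("quotient:", quotient)    # [1, -9, -27], i.e. x^2 - 9x - 27
    print("remainder:", remainder)  # -123
    print("f(3):", evaluate(f, 3))  # -123, matching the remainder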
Polynomial remainder theorem
[ "Mathematics" ]
439
[ "Theorems in algebra", "Theorems about polynomials" ]
771,562
https://en.wikipedia.org/wiki/Mikhael%20Gromov%20%28mathematician%29
Mikhael Leonidovich Gromov (also Mikhail Gromov, Michael Gromov or Misha Gromov; ; born 23 December 1943) is a Russian-French mathematician known for his work in geometry, analysis and group theory. He is a permanent member of Institut des Hautes Études Scientifiques in France and a professor of mathematics at New York University. Gromov has won several prizes, including the Abel Prize in 2009 "for his revolutionary contributions to geometry". Biography Mikhail Gromov was born on 23 December 1943 in Boksitogorsk, Soviet Union. His father Leonid Gromov was Russian-Slavic and his mother Lea was of Jewish heritage. Both were pathologists. His mother was the cousin of World Chess Champion Mikhail Botvinnik, as well as of the mathematician Isaak Moiseevich Rabinovich. Gromov was born during World War II, and his mother, who worked as a medical doctor in the Soviet Army, had to leave the front line in order to give birth to him. When Gromov was nine years old, his mother gave him the book The Enjoyment of Mathematics by Hans Rademacher and Otto Toeplitz, a book that piqued his curiosity and had a great influence on him. Gromov studied mathematics at Leningrad State University where he obtained a master's degree in 1965, a doctorate in 1969 and defended his postdoctoral thesis in 1973. His thesis advisor was Vladimir Rokhlin. Gromov married in 1967. In 1970, he was invited to give a presentation at the International Congress of Mathematicians in Nice, France. However, he was not allowed to leave the USSR. Still, his lecture was published in the conference proceedings. Disagreeing with the Soviet system, he had been thinking of emigrating since the age of 14. In the early 1970s he ceased publication, hoping that this would help his application to move to Israel. He changed his last name to that of his mother. He received a coded letter saying that, if he could get out of the Soviet Union, he could go to Stony Brook, where a position had been arranged for him. When the request was granted in 1974, he moved directly to New York and worked at Stony Brook. In 1981 he left Stony Brook University to join the faculty of University of Paris VI and in 1982 he became a permanent professor at the Institut des Hautes Études Scientifiques where he remains today. At the same time, he has held professorships at the University of Maryland, College Park from 1991 to 1996, and at the Courant Institute of Mathematical Sciences in New York since 1996. He adopted French citizenship in 1992. Work Gromov's style of geometry often features a "coarse" or "soft" viewpoint, analyzing asymptotic or large-scale properties. He is also interested in mathematical biology, the structure of the brain and the thinking process, and the way scientific ideas evolve. Motivated by Nash and Kuiper's isometric embedding theorems and the results on immersions by Morris Hirsch and Stephen Smale, Gromov introduced the h-principle in various formulations. Modeled upon the special case of the Hirsch–Smale theory, he introduced and developed the general theory of microflexible sheaves, proving that they satisfy an h-principle on open manifolds. As a consequence (among other results) he was able to establish the existence of positively curved and negatively curved Riemannian metrics on any open manifold whatsoever. His result is in counterpoint to the well-known topological restrictions (such as the Cheeger–Gromoll soul theorem or Cartan–Hadamard theorem) on geodesically complete Riemannian manifolds of positive or negative curvature. 
After this initial work, he developed further h-principles partly in collaboration with Yakov Eliashberg, including work building upon Nash and Kuiper's theorem and the Nash–Moser implicit function theorem. There are many applications of his results, including topological conditions for the existence of exact Lagrangian immersions and similar objects in symplectic and contact geometry. His well-known book Partial Differential Relations collects most of his work on these problems. Later, he applied his methods to complex geometry, proving certain instances of the Oka principle on deformation of continuous maps to holomorphic maps. His work initiated a renewed study of the Oka–Grauert theory, which had been introduced in the 1950s. Gromov and Vitali Milman gave a formulation of the concentration of measure phenomenon. They defined a "Lévy family" as a sequence of normalized metric measure spaces in which any asymptotically nonvanishing sequence of sets can be metrically thickened to include almost every point. This closely mimics the phenomenon of the law of large numbers, and in fact the law of large numbers can be put into the framework of Lévy families. Gromov and Milman developed the basic theory of Lévy families and identified a number of examples, most importantly coming from sequences of Riemannian manifolds in which the lower bound of the Ricci curvature or the first eigenvalue of the Laplace–Beltrami operator diverges to infinity. They also highlighted a feature of Lévy families in which any sequence of continuous functions must be asymptotically almost constant. These considerations have been taken further by other authors, such as Michel Talagrand. Since the seminal 1964 publication of James Eells and Joseph Sampson on harmonic maps, various rigidity phenomena had been deduced from the combination of an existence theorem for harmonic mappings together with a vanishing theorem asserting that (certain) harmonic mappings must be totally geodesic or holomorphic. Gromov had the insight that the extension of this program to the setting of mappings into metric spaces would imply new results on discrete groups, following Margulis superrigidity. Richard Schoen carried out the analytical work to extend the harmonic map theory to the metric space setting; this was subsequently done more systematically by Nicholas Korevaar and Schoen, establishing extensions of most of the standard Sobolev space theory. A sample application of Gromov and Schoen's methods is the fact that lattices in the isometry group of the quaternionic hyperbolic space are arithmetic. Riemannian geometry In 1978, Gromov introduced the notion of almost flat manifolds. The famous quarter-pinched sphere theorem in Riemannian geometry says that if a complete Riemannian manifold has sectional curvatures which are all sufficiently close to a given positive constant, then it must be finitely covered by a sphere. In contrast, it can be seen by scaling that every closed Riemannian manifold has Riemannian metrics whose sectional curvatures are arbitrarily close to zero. Gromov showed that if the scaling possibility is broken by only considering Riemannian manifolds of a fixed diameter, then a closed manifold admitting such a Riemannian metric, with sectional curvatures sufficiently close to zero, must be finitely covered by a nilmanifold. The proof works by replaying the proofs of the Bieberbach theorem and Margulis lemma. Gromov's proof was given a careful exposition by Peter Buser and Hermann Karcher.
In 1979, Richard Schoen and Shing-Tung Yau showed that the class of smooth manifolds which admit Riemannian metrics of positive scalar curvature is topologically rich. In particular, they showed that this class is closed under the operation of connected sum and of surgery in codimension at least three. Their proof used elementary methods of partial differential equations, in particular to do with the Green's function. Gromov and Blaine Lawson gave another proof of Schoen and Yau's results, making use of elementary geometric constructions. They also showed how purely topological results such as Stephen Smale's h-cobordism theorem could then be applied to draw conclusions such as the fact that every closed and simply-connected smooth manifold of dimension 5, 6, or 7 has a Riemannian metric of positive scalar curvature. They further introduced the new class of enlargeable manifolds, distinguished by a condition in homotopy theory. They showed that Riemannian metrics of positive scalar curvature cannot exist on such manifolds. A particular consequence is that the torus cannot support any Riemannian metric of positive scalar curvature, which had been a major conjecture previously resolved by Schoen and Yau in low dimensions. In 1981, Gromov identified topological restrictions, based upon Betti numbers, on manifolds which admit Riemannian metrics of nonnegative sectional curvature. The principal idea of his work was to combine Karsten Grove and Katsuhiro Shiohama's Morse theory for the Riemannian distance function, with control of the distance function obtained from the Toponogov comparison theorem, together with the Bishop–Gromov inequality on volume of geodesic balls. This resulted in topologically controlled covers of the manifold by geodesic balls, to which spectral sequence arguments could be applied to control the topology of the underlying manifold. The topology of lower bounds on sectional curvature is still not fully understood, and Gromov's work remains as a primary result. As an application of Hodge theory, Peter Li and Yau were able to apply their gradient estimates to find similar Betti number estimates which are weaker than Gromov's but allow the manifold to have convex boundary. In Jeff Cheeger's fundamental compactness theory for Riemannian manifolds, a key step in constructing coordinates on the limiting space is an injectivity radius estimate for closed manifolds. Cheeger, Gromov, and Michael Taylor localized Cheeger's estimate, showing how to use Bishop−Gromov volume comparison to control the injectivity radius in absolute terms by curvature bounds and volumes of geodesic balls. Their estimate has been used in a number of places where the construction of coordinates is an important problem. A particularly well-known instance of this is to show that Grigori Perelman's "noncollapsing theorem" for Ricci flow, which controls volume, is sufficient to allow applications of Richard Hamilton's compactness theory. Cheeger, Gromov, and Taylor applied their injectivity radius estimate to prove Gaussian control of the heat kernel, although these estimates were later improved by Li and Yau as an application of their gradient estimates. Gromov made foundational contributions to systolic geometry. Systolic geometry studies the relationship between size invariants (such as volume or diameter) of a manifold M and its topologically non-trivial submanifolds (such as non-contractible curves). 
In his 1983 paper "Filling Riemannian manifolds" Gromov proved that every essential manifold with a Riemannian metric contains a closed non-contractible geodesic of length at most C(n) vol^(1/n), where vol is the volume of the manifold and C(n) is a constant depending only on its dimension n. Gromov–Hausdorff convergence and geometric group theory In 1981, Gromov introduced the Gromov–Hausdorff metric, which endows the set of all metric spaces with the structure of a metric space. More generally, one can define the Gromov–Hausdorff distance between two metric spaces, relative to the choice of a point in each space. Although this does not give a metric on the space of all metric spaces, it is sufficient in order to define "Gromov–Hausdorff convergence" of a sequence of pointed metric spaces to a limit. Gromov formulated an important compactness theorem in this setting, giving a condition under which a sequence of pointed and "proper" metric spaces must have a subsequence which converges. This was later reformulated by Gromov and others into the more flexible notion of an ultralimit. Gromov's compactness theorem had a deep impact on the field of geometric group theory. He applied it to understand the asymptotic geometry of the word metric of a group of polynomial growth, by taking the limit of well-chosen rescalings of the metric. By tracking the limits of isometries of the word metric, he was able to show that the limiting metric space has unexpected continuities, and in particular that its isometry group is a Lie group. As a consequence he was able to settle the Milnor–Wolf conjecture as posed in the 1960s, which asserts that any such group is virtually nilpotent. Using ultralimits, similar asymptotic structures can be studied for more general metric spaces. Important developments on this topic were given by Bruce Kleiner, Bernhard Leeb, and Pierre Pansu, among others. Another consequence is Gromov's compactness theorem, stating that the set of compact Riemannian manifolds with Ricci curvature ≥ c and diameter ≤ D is relatively compact in the Gromov–Hausdorff metric. The possible limit points of sequences of such manifolds are Alexandrov spaces of curvature ≥ c, a class of metric spaces studied in detail by Burago, Gromov and Perelman in 1992. Along with Eliyahu Rips, Gromov introduced the notion of hyperbolic groups. Symplectic geometry Gromov's theory of pseudoholomorphic curves is one of the foundations of the modern study of symplectic geometry. Although he was not the first to consider pseudo-holomorphic curves, he uncovered a "bubbling" phenomenon paralleling Karen Uhlenbeck's earlier work on Yang–Mills connections, and Uhlenbeck and Jonathan Sacks' work on harmonic maps. In the time since Sacks, Uhlenbeck, and Gromov's work, such bubbling phenomena have been found in a number of other geometric contexts. The corresponding compactness theorem encoding the bubbling allowed Gromov to arrive at a number of analytically deep conclusions on the existence of pseudo-holomorphic curves. A particularly famous result of Gromov's, arrived at as a consequence of the existence theory and the monotonicity formula for minimal surfaces, is the "non-squeezing theorem," which provided a striking qualitative feature of symplectic geometry. Following ideas of Edward Witten, Gromov's work is also fundamental for Gromov–Witten theory, which is a widely studied topic reaching into string theory, algebraic geometry, and symplectic geometry. From a different perspective, Gromov's work was also inspirational for much of Andreas Floer's work.
Yakov Eliashberg and Gromov developed some of the basic theory for symplectic notions of convexity. They introduce various specific notions of convexity, all of which are concerned with the existence of one-parameter families of diffeomorphisms which contract the symplectic form. They show that convexity is an appropriate context for an h-principle to hold for the problem of constructing certain symplectomorphisms. They also introduced analogous notions in contact geometry; the existence of convex contact structures was later studied by Emmanuel Giroux. Prizes and honors Prizes Prize of the Mathematical Society of Moscow (1971) Oswald Veblen Prize in Geometry (AMS) (1981) Prix Elie Cartan de l'Academie des Sciences de Paris (1984) Prix de l'Union des Assurances de Paris (1989) Wolf Prize in Mathematics (1993) Leroy P. Steele Prize for Seminal Contribution to Research (AMS) (1997) Lobachevsky Medal (1997) Balzan Prize for Mathematics (1999) Kyoto Prize in Mathematical Sciences (2002) Nemmers Prize in Mathematics (2004) Bolyai Prize in 2005 Abel Prize in 2009 "for his revolutionary contributions to geometry" Honors Invited speaker to International Congress of Mathematicians: 1970 (Nice), 1978 (Helsinki), 1983 (Warsaw), 1986 (Berkeley) Foreign member of the National Academy of Sciences (1989), the American Academy of Arts and Sciences (1989), the Norwegian Academy of Science and Letters, the Royal Society (2011), and the National Academy of Sciences of Ukraine (2023). Member of the French Academy of Sciences (1997) Delivered the 2007 Paul Turán Memorial Lectures. See also Cartan–Hadamard conjecture Cartan–Hadamard theorem Collapsing manifold Lévy–Gromov inequality Taubes's Gromov invariant Mostow rigidity theorem Ramsey–Dvoretzky–Milman phenomenon Systoles of surfaces Publications Books Major articles Notes References Marcel Berger, "Encounter with a Geometer, Part I", AMS Notices, Volume 47, Number 2 Marcel Berger, "Encounter with a Geometer, Part II"", AMS Notices, Volume 47, Number 3 External links Personal page at Institut des Hautes Études Scientifiques Personal page at NYU Anatoly Vershik, "Gromov's Geometry" 1943 births Living people Jewish French scientists People from Boksitogorsk Russian people of Jewish descent Russian emigrants to France Foreign associates of the National Academy of Sciences Foreign members of the Russian Academy of Sciences Kyoto laureates in Basic Sciences Differential geometers Russian mathematicians 20th-century French mathematicians 21st-century French mathematicians French people of Russian-Jewish descent Group theorists New York University faculty Wolf Prize in Mathematics laureates Geometers Members of the French Academy of Sciences Members of the Norwegian Academy of Science and Letters Abel Prize laureates Foreign members of the Royal Society Soviet mathematicians University of Maryland, College Park faculty Russian scientists
Mikhael Gromov (mathematician)
[ "Mathematics" ]
3,595
[ "Geometers", "Geometry" ]
772,031
https://en.wikipedia.org/wiki/Linkless%20embedding
In topological graph theory, a mathematical discipline, a linkless embedding of an undirected graph is an embedding of the graph into three-dimensional Euclidean space in such a way that no two cycles of the graph are linked. A flat embedding is an embedding with the property that every cycle is the boundary of a topological disk whose interior is disjoint from the graph. A linklessly embeddable graph is a graph that has a linkless or flat embedding; these graphs form a three-dimensional analogue of the planar graphs. Complementarily, an intrinsically linked graph is a graph that does not have a linkless embedding. Flat embeddings are automatically linkless, but not vice versa. The complete graph K6, the Petersen graph, and the other five graphs in the Petersen family do not have linkless embeddings. Every graph minor of a linklessly embeddable graph is again linklessly embeddable, as is every graph that can be reached from a linklessly embeddable graph by YΔ- and ΔY-transformations. The linklessly embeddable graphs have the Petersen family graphs as their forbidden minors, and include the planar graphs and apex graphs. They may be recognized, and a flat embedding may be constructed for them, in linear time. Definitions When the circle is mapped to three-dimensional Euclidean space by an injective function (a continuous function that does not map two different points of the circle to the same point of space), its image is a closed curve. Two disjoint closed curves that both lie on the same plane are unlinked, and more generally a pair of disjoint closed curves is said to be unlinked when there is a continuous deformation of space that moves them both onto the same plane, without either curve passing through the other or through itself. If there is no such continuous motion, the two curves are said to be linked. For example, the Hopf link is formed by two circles that each pass through the disk spanned by the other. It forms the simplest example of a pair of linked curves, but it is possible for curves to be linked in other more complicated ways. If two curves are not linked, then it is possible to find a topological disk in space, having the first curve as its boundary and disjoint from the second curve. Conversely, if such a disk exists then the curves are necessarily unlinked. The linking number of two closed curves in three-dimensional space is a topological invariant of the curves: it is a number, defined from the curves in any of several equivalent ways, that does not change if the curves are moved continuously without passing through each other. The version of the linking number used for defining linkless embeddings of graphs is found by projecting the embedding onto the plane and counting the number of crossings of the projected embedding in which the first curve passes over the second one, modulo 2. The projection must be "regular", meaning that no two vertices project to the same point, no vertex projects to the interior of an edge, and at every point of the projection where the projections of two edges intersect, they cross transversally; with this restriction, any two projections lead to the same linking number. The linking number of the unlink is zero, and therefore, if a pair of curves has nonzero linking number, the two curves must be linked. However, there are examples of curves that are linked but that have zero linking number, such as the Whitehead link.
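The linking number just described can also be computed numerically from the Gauss linking integral, an equivalent definition that yields the full signed integer rather than only its value modulo 2. The sketch below approximates that integral for two circles forming a Hopf link by breaking each curve into short segments; the particular circles and the number of sample points are arbitrary choices made for illustration.

# Numerical approximation of the Gauss linking integral for two closed curves:
#   Lk = (1 / (4*pi)) * sum over segment pairs of ((r1 - r2) . (dr1 x dr2)) / |r1 - r2|^3
# For a Hopf link the result should be close to +1 or -1.
import math

def circle_xy(n):
    """Unit circle in the xy-plane centred at the origin, sampled at n points."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n), 0.0)
            for k in range(n)]

def circle_xz(n):
    """Unit circle in the xz-plane centred at (1, 0, 0), so it threads the first circle."""
    return [(1.0 + math.cos(2 * math.pi * k / n), 0.0, math.sin(2 * math.pi * k / n))
            for k in range(n)]

def linking_number(curve1, curve2):
    """Approximate linking number of two disjoint closed polygonal curves."""
    total = 0.0
    n1, n2 = len(curve1), len(curve2)
    for i in range(n1):
        p = curve1[i]
        dp = tuple(curve1[(i + 1) % n1][k] - p[k] for k in range(3))
        for j in range(n2):
            q = curve2[j]
            dq = tuple(curve2[(j + 1) % n2][k] - q[k] for k in range(3))
            r = tuple(p[k] - q[k] for k in range(3))
            cross = (dp[1] * dq[2] - dp[2] * dq[1],
                     dp[2] * dq[0] - dp[0] * dq[2],
                     dp[0] * dq[1] - dp[1] * dq[0])
            dist = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
            total += (r[0] * cross[0] + r[1] * cross[1] + r[2] * cross[2]) / dist ** 3
    return total / (4 * math.pi)

if __name__ == "__main__":
    lk = linking_number(circle_xy(300), circle_xz(300))
    print("approximate linking number: %.3f" % lk)  # close to +1 or -1 for the Hopf link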
An embedding of a graph into three-dimensional space consists of a mapping from the vertices of the graph to points in space, and from the edges of the graph to curves in space, such that each endpoint of each edge is mapped to an endpoint of the corresponding curve, and such that the curves for two different edges do not intersect except at a common endpoint of the edges. Any finite graph has a finite (though perhaps exponential) number of distinct simple cycles, and if the graph is embedded into three-dimensional space then each of these cycles forms a simple closed curve. One may compute the linking number of each disjoint pair of curves formed in this way; if all pairs of cycles have zero linking number, the embedding is said to be linkless. In some cases, a graph may be embedded in space in such a way that, for each cycle in the graph, one can find a disk bounded by that cycle that does not cross any other feature of the graph. In this case, the cycle must be unlinked from all the other cycles disjoint from it in the graph. The embedding is said to be flat if every cycle bounds a disk in this way. A flat embedding is necessarily linkless, but there may exist linkless embeddings that are not flat: for instance, if G is a graph formed by two disjoint cycles, and it is embedded to form the Whitehead link, then the embedding is linkless but not flat. A graph is said to be intrinsically linked if, no matter how it is embedded, the embedding is always linked. Although linkless and flat embeddings are not the same, the graphs that have linkless embeddings are the same as the graphs that have flat embeddings. Examples and counterexamples As showed, each of the seven graphs of the Petersen family is intrinsically linked: no matter how each of these graphs is embedded in space, they have two cycles that are linked to each other. These graphs include the complete graph K6, the Petersen graph, the graph formed by removing an edge from the complete bipartite graph K4,4, and the complete tripartite graph K3,3,1. Every planar graph has a flat and linkless embedding: simply embed the graph into a plane and embed the plane into space. If a graph is planar, this is the only way to embed it flatly and linklessly into space: every flat embedding can be continuously deformed to lie on a flat plane. And conversely, every nonplanar linkless graph has multiple linkless embeddings. An apex graph, formed by adding a single vertex to a planar graph, also has a flat and linkless embedding: embed the planar part of the graph on a plane, place the apex above the plane, and draw the edges from the apex to its neighbors as line segments. Any closed curve within the plane bounds a disk below the plane that does not pass through any other graph feature, and any closed curve through the apex bounds a disk above the plane that does not pass through any other graph feature. If a graph has a linkless or flat embedding, then modifying the graph by subdividing or unsubdividing its edges, adding or removing multiple edges between the same pair of points, and performing YΔ- and ΔY-transformations that replace a degree-three vertex by a triangle connecting its three neighbors or the reverse all preserve flatness and linklessness. 
In particular, in a cubic planar graph (one in which all vertices have exactly three neighbors, such as the cube) it is possible to make duplicates of any independent set of vertices by performing a YΔ-transformation, adding multiple copies of the resulting triangle edges, and then performing the reverse ΔY-transformations. Characterization and recognition If a graph G has a linkless or flat embedding, then every minor of G (a graph formed by contraction of edges and deletion of edges and vertices) also has a linkless or flat embedding. Deletions cannot destroy the flatness of an embedding, and a contraction can be performed by leaving one endpoint of the contracted edge in place and rerouting all the edges incident to the other endpoint along the path of the contracted edge. Therefore, by the Robertson–Seymour theorem, the linklessly embeddable graphs have a forbidden graph characterization as the graphs that do not contain any of a finite set of minors. The set of forbidden minors for the linklessly embeddable graphs was identified by : the seven graphs of the Petersen family are all minor-minimal intrinsically linked graphs. However, Sachs was unable to prove that these were the only minimal linked graphs, and this was finally accomplished by . The forbidden minor characterization of linkless graphs leads to a polynomial time algorithm for their recognition, but not for actually constructing an embedding. described a linear time algorithm that tests whether a graph is linklessly embeddable and, if so, constructs a flat embedding of the graph. Their algorithm finds large planar subgraphs within the given graph such that, if a linkless embedding exists, it has to respect the planar embedding of the subgraph. By repeatedly simplifying the graph whenever such a subgraph is found, they reduce the problem to one in which the remaining graph has bounded treewidth, at which point it can be solved by dynamic programming. The problem of efficiently testing whether a given embedding is flat or linkless was posed by . It remains unsolved, and is equivalent in complexity to unknotting problem, the problem of testing whether a single curve in space is unknotted. Testing unknottedness (and therefore, also, testing linklessness of an embedding) is known to be in NP but is not known to be NP-complete. Related families of graphs Graphs with small Colin de Verdière invariant The Colin de Verdière graph invariant is an integer defined for any graph using algebraic graph theory. The graphs with Colin de Verdière graph invariant at most μ, for any fixed constant μ, form a minor-closed family, and the first few of these are well-known: the graphs with μ ≤ 1 are the linear forests (disjoint unions of paths), the graphs with μ ≤ 2 are the outerplanar graphs, and the graphs with μ ≤ 3 are the planar graphs. As conjectured and proved, the graphs with μ ≤ 4 are exactly the linklessly embeddable graphs. Apex graphs The planar graphs and the apex graphs are linklessly embeddable, as are the graphs obtained by YΔ- and ΔY-transformations from these graphs. The YΔY reducible graphs are the graphs that can be reduced to a single vertex by YΔ- and ΔY-transformations, removal of isolated vertices and degree-one vertices, and compression of degree-two vertices; they are also minor-closed, and include all planar graphs. However, there exist linkless graphs that are not YΔY reducible, such as the apex graph formed by connecting an apex vertex to every degree-three vertex of a rhombic dodecahedron. 
There also exist linkless graphs that cannot be transformed into an apex graph by YΔ- and ΔY-transformation, removal of isolated vertices and degree-one vertices, and compression of degree-two vertices: for instance, the ten-vertex crown graph has a linkless embedding, but cannot be transformed into an apex graph in this way. Knotless graphs Related to the concept of linkless embedding is the concept of knotless embedding, an embedding of a graph in such a way that none of its simple cycles form a nontrivial knot. The graphs that do not have knotless embeddings (that is, they are intrinsically knotted) include K7 and K3,3,1,1. However, there also exist minimal forbidden minors for knotless embedding that are not formed (as these two graphs are) by adding one vertex to an intrinsically linked graph, but the list of these is unknown. One may also define graph families by the presence or absence of more complex knots and links in their embeddings, or by linkless embedding in three-dimensional manifolds other than Euclidean space. define a graph embedding to be triple linked if there are three cycles no one of which can be separated from the other two; they show that K9 is not intrinsically triple linked, but K10 is. More generally, one can define an n-linked embedding for any n to be an embedding that contains an n-component link that cannot be separated by a topological sphere into two separated parts; minor-minimal graphs that are intrinsically n-linked are known for all n. History The question of whether K6 has a linkless or flat embedding was posed within the topology research community in the early 1970s by . Linkless embeddings were brought to the attention of the graph theory community by , who posed several related problems including the problem of finding a forbidden graph characterization of the graphs with linkless and flat embeddings; Sachs showed that the seven graphs of the Petersen family (including K6) do not have such embeddings. As observed, linklessly embeddable graphs are closed under graph minors, from which it follows by the Robertson–Seymour theorem that a forbidden graph characterization exists. The proof of the existence of a finite set of obstruction graphs does not lead to an explicit description of this set of forbidden minors, but it follows from Sachs' results that the seven graphs of the Petersen family belong to the set. These problems were finally settled by , who showed that the seven graphs of the Petersen family are the only minimal forbidden minors for these graphs. Therefore, linklessly embeddable graphs and flat embeddable graphs are both the same set of graphs, and are both the same as the graphs that have no Petersen family minor. also asked for bounds on the number of edges and the chromatic number of linkless embeddable graphs. The number of edges in an n-vertex linkless graph is at most 4n − 10: maximal apex graphs with n > 4 have exactly this many edges, and proved a matching upper bound on the more general class of K6-minor-free graphs. observed that Sachs' question about the chromatic number would be resolved by a proof of Hadwiger's conjecture that any k-chromatic graph has as a minor a k-vertex complete graph. The proof by of the case k = 6 of Hadwiger's conjecture is sufficient to settle Sachs' question: the linkless graphs can be colored with at most five colors, as any 6-chromatic graph contains a K6 minor and is not linkless, and there exist linkless graphs such as K5 that require five colors. 
The snark theorem implies that every cubic linklessly embeddable graph is 3-edge-colorable. Linkless embeddings started being studied within the algorithms research community in the late 1980s through the works of and . Algorithmically, the problem of recognizing linkless and flat embeddable graphs was settled once the forbidden minor characterization was proven: an algorithm of can be used to test in polynomial time whether a given graph contains any of the seven forbidden minors. This method does not construct linkless or flat embeddings when they exist, but an algorithm that does construct an embedding was developed by , and a more efficient linear time algorithm was found by . A final question of on the possibility of an analogue of Fáry's theorem for linkless graphs appears not to have been answered: when does the existence of a linkless or flat embedding with curved or piecewise linear edges imply the existence of a linkless or flat embedding in which the edges are straight line segments? Notes References . As cited by . . As cited by . . . . . . . . . . . . . . . . . . . . . . . . . Further reading . Topological graph theory Knot theory Graph minor theory
Linkless embedding
[ "Mathematics" ]
3,266
[ "Graph theory", "Topology", "Mathematical relations", "Topological graph theory", "Graph minor theory" ]
772,150
https://en.wikipedia.org/wiki/Degenerate%20bilinear%20form
In mathematics, specifically linear algebra, a degenerate bilinear form f(x, y) on a vector space V is a bilinear form such that the map from V to V∗ (the dual space of V) given by v ↦ (x ↦ f(x, v)) is not an isomorphism. An equivalent definition when V is finite-dimensional is that it has a non-trivial kernel: there exist some non-zero x in V such that f(x, y) = 0 for all y ∈ V. Nondegenerate forms A nondegenerate or nonsingular form is a bilinear form that is not degenerate, meaning that the map v ↦ (x ↦ f(x, v)) is an isomorphism, or equivalently in finite dimensions, if and only if f(x, y) = 0 for all y implies that x = 0. The most important examples of nondegenerate forms are inner products and symplectic forms. Symmetric nondegenerate forms are important generalizations of inner products, in that often all that is required is that the map be an isomorphism, not positivity. For example, a manifold with an inner product structure on its tangent spaces is a Riemannian manifold, while relaxing this to a symmetric nondegenerate form yields a pseudo-Riemannian manifold. Using the determinant If V is finite-dimensional then, relative to some basis for V, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero – if and only if the matrix is singular, and accordingly degenerate forms are also called singular forms. Likewise, a nondegenerate form is one for which the associated matrix is non-singular, and accordingly nondegenerate forms are also referred to as non-singular forms. These statements are independent of the chosen basis. Related notions If for a quadratic form Q there is a non-zero vector v ∈ V such that Q(v) = 0, then Q is an isotropic quadratic form. If Q has the same sign for all non-zero vectors, it is a definite quadratic form or an anisotropic quadratic form. There is the closely related notion of a unimodular form and a perfect pairing; these agree over fields but not over general rings. Examples The study of real, quadratic algebras shows the distinction between types of quadratic forms. The product zz* is a quadratic form for each of the complex numbers, split-complex numbers, and dual numbers. For z = x + ε y, the dual number form is x^2, which is a degenerate quadratic form. The split-complex case is an isotropic form, and the complex case is a definite form. Infinite dimensions Note that in an infinite-dimensional space, we can have a bilinear form ƒ for which the map v ↦ (x ↦ f(x, v)) is injective but not surjective. For example, on the space of continuous functions on a closed bounded interval, with the form f(u, v) = ∫ u(t) v(t) dt, the associated map into the dual space is injective but not surjective: for instance, the Dirac delta functional is in the dual space but is not of the required form. On the other hand, this bilinear form satisfies the condition that f(u, v) = 0 for all v implies that u = 0. In such a case where ƒ satisfies injectivity (but not necessarily surjectivity), ƒ is said to be weakly nondegenerate. Terminology If f vanishes identically on all vectors it is said to be totally degenerate. Given any bilinear form f on V the set of vectors {x ∈ V : f(x, y) = 0 for all y ∈ V} forms a totally degenerate subspace of V. The map f is nondegenerate if and only if this subspace is trivial.
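To illustrate the determinant criterion described above, the following sketch represents a bilinear form on a finite-dimensional real vector space by its matrix with respect to a chosen basis and tests degeneracy by checking whether that matrix is singular. The example matrices (an inner product, a symplectic form, and a degenerate form) are arbitrary illustrations, and the tolerance used for the floating-point test is an assumption.

# Degeneracy test for a bilinear form f(x, y) = x^T B y on a finite-dimensional space,
# using the matrix B of the form relative to a chosen basis: the form is degenerate
# exactly when det(B) = 0, i.e. when B is singular.
import numpy as np

def is_degenerate(B, tol=1e-12):
    """Return True if the bilinear form with matrix B is degenerate."""
    return abs(np.linalg.det(np.asarray(B, dtype=float))) <= tol

def radical(B, tol=1e-12):
    """Columns spanning {y : f(x, y) = 0 for all x}, i.e. the null space of B."""
    B = np.asarray(B, dtype=float)
    _, s, vt = np.linalg.svd(B)
    return vt[s <= tol].T

if __name__ == "__main__":
    dot_product = np.eye(3)                    # nondegenerate: the standard inner product
    symplectic = np.array([[0.0, 1.0],
                           [-1.0, 0.0]])       # nondegenerate: a symplectic form
    singular = np.array([[1.0, 1.0],
                         [1.0, 1.0]])          # degenerate: f(x, (1, -1)) = 0 for all x
    for name, B in [("dot product", dot_product),
                    ("symplectic", symplectic),
                    ("singular", singular)]:
        print(name, "degenerate:", is_degenerate(B))
    print("totally degenerate subspace of the singular form:")
    print(radical(singular))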
Geometrically, an isotropic line of the quadratic form corresponds to a point of the associated quadric hypersurface in projective space. Such a line is additionally isotropic for the bilinear form if and only if the corresponding point is a singularity. Hence, over an algebraically closed field, Hilbert's Nullstellensatz guarantees that the quadratic form always has isotropic lines, while the bilinear form has them if and only if the surface is singular. See also References Bilinear forms Functional analysis
Degenerate bilinear form
[ "Mathematics" ]
926
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
22,223,374
https://en.wikipedia.org/wiki/Multiscale%20geometric%20analysis
Multiscale geometric analysis or geometric multiscale analysis is an emerging area of high-dimensional signal processing and data analysis. See also Wavelet Scale space Multi-scale approaches Multiresolution analysis Singular value decomposition Compressed sensing Further reading Signal processing Spatial analysis
Multiscale geometric analysis
[ "Physics", "Technology", "Engineering" ]
53
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Spatial analysis", "Space", "Spacetime" ]
22,228,777
https://en.wikipedia.org/wiki/Interferome
Interferome is an online bioinformatics database of interferon-regulated genes (IRGs). These interferon-regulated genes are also known as interferon-stimulated genes (ISGs). The database contains information on type I (IFN alpha, beta), type II (IFN gamma) and type III (IFN lambda) regulated genes and is regularly updated. It is used by the interferon and cytokine research community both as an analysis tool and an information resource. Interferons were identified as antiviral proteins more than 50 years ago. However, their involvement in immunomodulation, cell proliferation, inflammation and other homeostatic processes has since been identified. These cytokines are used as therapeutics in many diseases such as chronic viral infections, cancer and multiple sclerosis. These interferons regulate the transcription of approximately 2000 genes in an interferon subtype-, dose-, cell type- and stimulus-dependent manner. This database of interferon-regulated genes is an attempt to integrate information from high-throughput experiments and molecular biology databases to gain a detailed understanding of interferon biology. Contents Interferome comprises the following data sets: Gene expression data of interferon-regulated genes from Homo sapiens, Mus musculus, and Pan troglodytes, manually curated from more than 30 public and in-house microarray and proteomic datasets. Tools Interferome offers many ways of searching and retrieving data from the database: Identify interferon-regulated gene signatures in microarray data; Gene Ontology analysis and annotation; Normal tissue expression of interferon-regulated genes; Regulatory analysis of interferon-regulated genes; BLAST (Basic Local Alignment Search Tool) analysis and orthologue sequence download; Interferome Management Interferome is managed by a team at Monash University (Monash Institute of Medical Research) and the University of Cambridge. References External links INTERFEROME Biological databases Immunology Cytokines Gene expression
Interferome
[ "Chemistry", "Biology" ]
415
[ "Gene expression", "Signal transduction", "Bioinformatics", "Cytokines", "Immunology", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Biological databases" ]
4,720,804
https://en.wikipedia.org/wiki/Cycle%20index
In combinatorial mathematics a cycle index is a polynomial in several variables which is structured in such a way that information about how a group of permutations acts on a set can be simply read off from the coefficients and exponents. This compact way of storing information in an algebraic form is frequently used in combinatorial enumeration. Each permutation π of a finite set of objects partitions that set into cycles; the cycle index monomial of π is a monomial in variables a1, a2, … that describes the cycle type of this partition: the exponent of ai is the number of cycles of π of size i. The cycle index polynomial of a permutation group is the average of the cycle index monomials of its elements. The phrase cycle indicator is also sometimes used in place of cycle index. Knowing the cycle index polynomial of a permutation group, one can enumerate equivalence classes due to the group's action. This is the main ingredient in the Pólya enumeration theorem. Performing formal algebraic and differential operations on these polynomials and then interpreting the results combinatorially lies at the core of species theory. Permutation groups and group actions A bijective map from a set X onto itself is called a permutation of X, and the set of all permutations of X forms a group under the composition of mappings, called the symmetric group of X, and denoted Sym(X). Every subgroup of Sym(X) is called a permutation group of degree |X|. Let G be an abstract group with a group homomorphism φ from G into Sym(X). The image, φ(G), is a permutation group. The group homomorphism can be thought of as a means for permitting the group G to "act" on the set X (using the permutations associated with the elements of G). Such a group homomorphism is formally called a permutation representation of G. A given group can have many different permutation representations, corresponding to different actions. Suppose that group G acts on set X (that is, a group action exists). In combinatorial applications the interest is in the set X; for instance, counting things in X and knowing what structures might be left invariant by G. Little is lost by working with permutation groups in such a setting, so in these applications, when a group is considered, it is a permutation representation of the group which will be worked with, and thus, a group action must be specified. Algebraists, on the other hand, are more interested in the groups themselves and would be more concerned with the kernels of the group actions, which measure how much is lost in passing from the group to its permutation representation. Disjoint cycle representation of permutations Finite permutations are most often represented as group actions on the set X = {1,2, ..., n}. A permutation in this setting can be represented by a two-line notation. Thus, the two-line notation with top row 1 2 3 4 5 and bottom row 2 3 4 5 1 corresponds to a bijection on X = {1, 2, 3, 4, 5} which sends 1 ↦ 2, 2 ↦ 3, 3 ↦ 4, 4 ↦ 5 and 5 ↦ 1. This can be read off from the columns of the notation. When the top row is understood to be the elements of X in an appropriate order, only the second row need be written. In this one-line notation, our example would be [2 3 4 5 1]. This example is known as a cyclic permutation because it "cycles" the numbers around, and a third notation for it would be (1 2 3 4 5). This cycle notation is to be read as: each element is sent to the element on its right, but the last element is sent to the first one (it "cycles" to the beginning).
With cycle notation, it does not matter where a cycle starts, so (1 2 3 4 5) and (3 4 5 1 2) and (5 1 2 3 4) all represent the same permutation. The length of a cycle is the number of elements in the cycle. Not all permutations are cyclic permutations, but every permutation can be written as a product of disjoint (having no common element) cycles in essentially one way. As a permutation may have fixed points (elements that are unchanged by the permutation), these will be represented by cycles of length one. For example: This permutation is the product of three cycles, one of length two, one of length three, and a fixed point. The elements in these cycles are disjoint subsets of X and form a partition of X. The cycle structure of a permutation can be coded as an algebraic monomial in several (dummy) variables in the following way: a variable is needed for each distinct cycle length of the cycles that appear in the cycle decomposition of the permutation. In the previous example there were three different cycle lengths, so we will use three variables, a1, a2 and a3 (in general, use the variable ak to correspond to length k cycles). The variable ai will be raised to the ji(g) power where ji(g) is the number of cycles of length i in the cycle decomposition of permutation g. We can then associate the cycle index monomial to the permutation g. The cycle index monomial of our example would be a1a2a3, while the cycle index monomial of the permutation (1 2)(3 4)(5)(6 7 8 9)(10 11 12 13) would be a1a2^2a4^2. Definition The cycle index of a permutation group G is the average of the cycle index monomials of all the permutations g in G. More formally, let G be a permutation group of order m and degree n. Every permutation g in G has a unique decomposition into disjoint cycles, say c1 c2 c3 ... . Let the length of a cycle c be denoted by |c|. Now let jk(g) be the number of cycles of g of length k, where j1(g) + 2 j2(g) + ... + n jn(g) = n. We associate to g the monomial a1^j1(g) a2^j2(g) ··· an^jn(g) in the variables a1, a2, ..., an. Then the cycle index Z(G) of G is given by Z(G) = (1/m) Σg∈G a1^j1(g) a2^j2(g) ··· an^jn(g). Example Consider the group G of rotational symmetries of a square in the Euclidean plane. Its elements are completely determined by the images of just the corners of the square. By labeling these corners 1, 2, 3 and 4 (consecutively going clockwise, say) we can represent the elements of G as permutations of the set X = {1,2,3,4}. The permutation representation of G consists of the four permutations (1 4 3 2), (1 3)(2 4), (1 2 3 4) and e = (1)(2)(3)(4) which represent the counter-clockwise rotations by 90°, 180°, 270° and 360° respectively. Notice that the identity permutation e is the only permutation with fixed points in this representation of G. As an abstract group, G is known as the cyclic group C4, and this permutation representation of it is its regular representation. The cycle index monomials are a4, a2^2, a4, and a1^4 respectively. Thus, the cycle index of this permutation group is: Z(C4) = (1/4)(a1^4 + a2^2 + 2a4). The group C4 also acts on the unordered pairs of elements of X in a natural way. Any permutation g would send {x,y} → {xg, yg} (where xg is the image of the element x under the permutation g). The set X is now {A, B, C, D, E, F} where A = {1,2}, B = {2,3}, C = {3,4}, D = {1,4}, E = {1,3} and F = {2,4}. These elements can be thought of as the sides and diagonals of the square or, in a completely different setting, as the edges of the complete graph K4.
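To make the definition above concrete, here is a small illustrative Python sketch (not part of the original article) that computes cycle index monomials and averages them over a group given as an explicit list of permutations. The example group is the rotation group of the square just described; the function and variable names are made up for this illustration.

```python
from collections import Counter
from fractions import Fraction

def cycle_type(perm):
    """Cycle type of a permutation given as a dict {point: image}.
    Returns a Counter mapping cycle length -> number of cycles of that length."""
    seen, lengths = set(), Counter()
    for start in perm:
        if start in seen:
            continue
        x, length = start, 0
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        lengths[length] += 1
    return lengths

def cycle_index(group):
    """Average of the cycle index monomials of the permutations in `group`.
    A monomial is represented as a sorted tuple of (cycle_length, exponent) pairs."""
    terms = Counter(tuple(sorted(cycle_type(g).items())) for g in group)
    return {mono: Fraction(count, len(group)) for mono, count in terms.items()}

# Rotation group C4 acting on the corners {1, 2, 3, 4} of a square
c4 = [
    {1: 1, 2: 2, 3: 3, 4: 4},   # identity            -> a1^4
    {1: 2, 2: 3, 3: 4, 4: 1},   # rotation by 90 deg  -> a4
    {1: 3, 2: 4, 3: 1, 4: 2},   # rotation by 180 deg -> a2^2
    {1: 4, 2: 1, 3: 2, 4: 3},   # rotation by 270 deg -> a4
]
print(cycle_index(c4))
# {((1, 4),): 1/4, ((4, 1),): 1/2, ((2, 2),): 1/4}, i.e. Z(C4) = (a1^4 + a2^2 + 2*a4) / 4
```

The dictionary returned here is just a machine-readable form of the averaged monomials; reading it off term by term reproduces the cycle index stated above.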
Acting on this new set, the four group elements are now represented by (A D C B)(E F), (A C)(B D)(E)(F), (A B C D)(E F) and e = (A)(B)(C)(D)(E)(F), and the cycle index of this action is: (1/4)(a1^6 + a1^2a2^2 + 2a2a4). The group C4 can also act on the ordered pairs of elements of X in the same natural way. Any permutation g would send (x,y) → (xg, yg) (in this case we would also have ordered pairs of the form (x, x)). The elements of X could be thought of as the arcs of the complete digraph D4 (with loops at each vertex). The cycle index in this case would be: (1/4)(a1^16 + a2^8 + 2a4^4). Types of actions As the above example shows, the cycle index depends on the group action and not on the abstract group. Since there are many permutation representations of an abstract group, it is useful to have some terminology to distinguish them. When an abstract group is defined in terms of permutations, it is a permutation group and the group action is the identity homomorphism. This is referred to as the natural action. The symmetric group S3 in its natural action has the elements e, (1 2), (1 3), (2 3), (1 2 3) and (1 3 2), and so, its cycle index is: Z(S3) = (1/6)(a1^3 + 3a1a2 + 2a3). A permutation group G on the set X is transitive if for every pair of elements x and y in X there is at least one g in G such that y = xg. A transitive permutation group is regular (or sometimes referred to as sharply transitive) if the only permutation in the group that has fixed points is the identity permutation. A finite transitive permutation group G on the set X is regular if and only if |G| = |X|. Cayley's theorem states that every abstract group has a regular permutation representation given by the group acting on itself (as a set) by (right) multiplication. This is called the regular representation of the group. The cyclic group C6 in its regular representation contains the six permutations (one-line form of the permutation is given first): [1 2 3 4 5 6] = (1)(2)(3)(4)(5)(6) [2 3 4 5 6 1] = (1 2 3 4 5 6) [3 4 5 6 1 2] = (1 3 5)(2 4 6) [4 5 6 1 2 3] = (1 4)(2 5)(3 6) [5 6 1 2 3 4] = (1 5 3)(2 6 4) [6 1 2 3 4 5] = (1 6 5 4 3 2). Thus its cycle index is: Z(C6) = (1/6)(a1^6 + a2^3 + 2a3^2 + 2a6). Often, when an author does not wish to use the group action terminology, the permutation group involved is given a name which implies what the action is. The following three examples illustrate this point. The cycle index of the edge permutation group of the complete graph on three vertices We will identify the complete graph K3 with an equilateral triangle in the Euclidean plane. This permits us to use geometric language to describe the permutations involved as symmetries of the triangle. Every permutation in the group S3 of vertex permutations (S3 in its natural action, given above) induces an edge permutation. These are the permutations: The identity: No vertices are permuted, and no edges; the contribution is a1^3. Three reflections in an axis passing through a vertex and the midpoint of the opposite edge: These fix one edge (the one not incident on the vertex) and exchange the remaining two; the contribution is a1a2. Two rotations, one clockwise, the other counterclockwise: These create a cycle of three edges; the contribution is a3. The cycle index of the group G of edge permutations induced by vertex permutations from S3 is (1/6)(a1^3 + 3a1a2 + 2a3). It happens that the complete graph K3 is isomorphic to its own line graph (vertex-edge dual) and hence the edge permutation group induced by the vertex permutation group is the same as the vertex permutation group, namely S3 and the cycle index is Z(S3).
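The induced action on pairs described in this passage can also be computed mechanically. The following illustrative Python fragment (an aside, not from the article; it reuses cycle_type and cycle_index from the earlier sketch) builds the edge permutation induced by a vertex permutation and reproduces the K3 edge cycle index; the same helper could be applied to K4.

```python
from itertools import combinations

def induced_edge_permutation(vertex_perm):
    """Permutation induced on unordered pairs (edges) by a vertex permutation."""
    vertices = sorted(vertex_perm)
    edges = [frozenset(e) for e in combinations(vertices, 2)]
    return {e: frozenset(vertex_perm[v] for v in e) for e in edges}

# Edge action induced by S3, the vertex permutations of the triangle K3
s3 = [
    {1: 1, 2: 2, 3: 3},                                           # identity
    {1: 2, 2: 1, 3: 3}, {1: 3, 2: 2, 3: 1}, {1: 1, 2: 3, 3: 2},   # reflections
    {1: 2, 2: 3, 3: 1}, {1: 3, 2: 1, 3: 2},                       # rotations
]
edge_group = [induced_edge_permutation(g) for g in s3]
print(cycle_index(edge_group))   # (a1^3 + 3*a1*a2 + 2*a3) / 6, the same as Z(S3)
```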
This is not the case for complete graphs on more than three vertices, since these have strictly more edges (n(n − 1)/2) than vertices (n). The cycle index of the edge permutation group of the complete graph on four vertices This is entirely analogous to the three-vertex case. These are the vertex permutations (S4 in its natural action) and the edge permutations (S4 acting on unordered pairs) that they induce: The identity: This permutation maps all vertices (and hence, edges) to themselves and the contribution is a1^6. Six permutations that exchange two vertices: These permutations preserve the edge that connects the two vertices as well as the edge that connects the two vertices not exchanged. The remaining edges form two two-cycles and the contribution is a1^2a2^2. Eight permutations that fix one vertex and produce a three-cycle for the three vertices not fixed: These permutations create two three-cycles of edges, one containing those not incident on the vertex, and another one containing those incident on the vertex; the contribution is a3^2. Three permutations that exchange two vertex pairs at the same time: These permutations preserve the two edges that connect the two pairs. The remaining edges form two two-cycles and the contribution is a1^2a2^2. Six permutations that cycle the vertices in a four-cycle: These permutations create a four-cycle of edges (those that lie on the cycle) and exchange the remaining two edges; the contribution is a2a4. We may visualize the types of permutations geometrically as symmetries of a regular tetrahedron. This yields the following description of the permutation types. The identity. Reflection in the plane that contains one edge and the midpoint of the edge opposing it. Rotation by 120 degrees about the axis passing through a vertex and the midpoint of the opposite face. Rotation by 180 degrees about the axis connecting the midpoints of two opposite edges. Six rotoreflections by 90 degrees. The cycle index of the edge permutation group G of K4 is: Z(G) = (1/24)(a1^6 + 9a1^2a2^2 + 8a3^2 + 6a2a4). The cycle index of the face permutations of a cube Consider an ordinary cube in three-space and its group of symmetries, call it C. It permutes the six faces of the cube. (We could also consider edge permutations or vertex permutations.) There are twenty-four symmetries. The identity: There is one such permutation and its contribution is a1^6. Six 90-degree face rotations: We rotate about the axis passing through the centers of the face and the face opposing it. This will fix the face and the face opposing it and create a four-cycle of the faces parallel to the axis of rotation. The contribution is a1^2a4. Three 180-degree face rotations: We rotate about the same axis as in the previous case, but now there is no four cycle of the faces parallel to the axis, but rather two two-cycles. The contribution is a1^2a2^2. Eight 120-degree vertex rotations: This time we rotate about the axis passing through two opposite vertices (the endpoints of a main diagonal). This creates two three-cycles of faces (the faces incident on the same vertex form a cycle). The contribution is a3^2. Six 180-degree edge rotations: These edge rotations rotate about the axis that passes through the midpoints of opposite edges not incident on the same face and parallel to each other and exchanges the two faces that are incident on the first edge, the two faces incident on the second edge, and the two faces that share two vertices but no edge with the two edges, i.e.
there are three two-cycles and the contribution is a2^3. The conclusion is that the cycle index of the group C is Z(C) = (1/24)(a1^6 + 6a1^2a4 + 3a1^2a2^2 + 8a3^2 + 6a2^3). Cycle indices of some permutation groups Identity group En This group contains one permutation that fixes every element (this must be a natural action); its cycle index is Z(En) = a1^n. Cyclic group Cn A cyclic group, Cn is the group of rotations of a regular n-gon, that is, n elements equally spaced around a circle. This group has φ(d) elements of order d for each divisor d of n, where φ(d) is the Euler φ-function, giving the number of natural numbers less than d which are relatively prime to d. In the regular representation of Cn, a permutation of order d has n/d cycles of length d, thus: Z(Cn) = (1/n) Σd|n φ(d) ad^(n/d). Dihedral group Dn The dihedral group is like the cyclic group, but also includes reflections. In its natural action, Z(Dn) = (1/2) Z(Cn) + (1/2) a1 a2^((n−1)/2) when n is odd, and Z(Dn) = (1/2) Z(Cn) + (1/4) (a1^2 a2^((n−2)/2) + a2^(n/2)) when n is even. Alternating group An The cycle index of the alternating group in its natural action as a permutation group is Z(An) = Σ (1 + (−1)^(j2+j4+···)) / (Πk k^jk jk!) Πk ak^jk, where the sum is over all j1, j2, ... ≥ 0 with j1 + 2j2 + 3j3 + ··· = n. The numerator is 2 for the even permutations, and 0 for the odd permutations. The 2 is needed because |Sn| / |An| = 2. Symmetric group Sn The cycle index of the symmetric group Sn in its natural action is given by the formula: Z(Sn) = Σ (1 / (Πk k^jk jk!)) Πk ak^jk, the sum again running over all j1, j2, ... ≥ 0 with j1 + 2j2 + 3j3 + ··· = n, that can be also stated in terms of complete Bell polynomials: Z(Sn) = Bn(0! a1, 1! a2, ..., (n − 1)! an) / n!. This formula is obtained by counting how many times a given permutation shape can occur. There are three steps: first partition the set of n labels into subsets, where there are jk subsets of size k. Every such subset generates (k − 1)! cycles of length k. But we do not distinguish between cycles of the same size, i.e. they are permuted by jk!. This yields n! / (Πk k^jk jk!) permutations of each cycle type. The formula may be further simplified if we sum up cycle indices over every n ≥ 0, while using an extra variable t to keep track of the total size of the cycles: Σn≥0 Z(Sn) t^n = exp(Σk≥1 ak t^k / k), thus giving a simplified form for the cycle index of Sn as the coefficient of t^n in this exponential. There is a useful recursive formula for the cycle index of the symmetric group. Set Z(S0) = 1 and consider the size l of the cycle that contains n, where 1 ≤ l ≤ n. There are C(n − 1, l − 1) ways to choose the remaining elements of the cycle and every such choice generates (l − 1)! different cycles. This yields the recurrence Z(Sn) = (1/n!) Σl=1..n C(n − 1, l − 1) (l − 1)! al (n − l)! Z(Sn−l), or Z(Sn) = (1/n) Σl=1..n al Z(Sn−l). Applications Throughout this section we will modify the notation for cycle indices slightly by explicitly including the names of the variables. Thus, for the permutation group G we will now write: Z(G) = Z(G; a1, a2, ..., an). Let G be a group acting on the set X. G also induces an action on the k-subsets of X and on the k-tuples of distinct elements of X (see #Example for the case k = 2), for 1 ≤ k ≤ n. Let fk and Fk denote the number of orbits of G in these actions respectively. By convention we set f0 = F0 = 1. We have: a) The ordinary generating function for fk is given by: Σk fk t^k = Z(G; 1 + t, 1 + t^2, ..., 1 + t^n), and b) The exponential generating function for Fk is given by: Σk Fk t^k / k! = Z(G; 1 + t, 1, ..., 1). Let G be a group acting on the set X and h a function from X to Y. For any g in G, h(xg) is also a function from X to Y. Thus, G induces an action on the set Y^X of all functions from X to Y. The number of orbits of this action is Z(G; b, b, ..., b) where b = |Y|. This result follows from the orbit counting lemma (also known as the Not Burnside's lemma, but traditionally called Burnside's lemma) and the weighted version of the result is Pólya's enumeration theorem. The cycle index is a polynomial in several variables and the above results show that certain evaluations of this polynomial give combinatorially significant results. As polynomials they may also be formally added, subtracted, differentiated and integrated. The area of symbolic combinatorics provides combinatorial interpretations of the results of these formal operations.
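As a brief worked illustration of the substitution Z(G; b, b, ..., b) mentioned above, the following Python snippet (illustrative only, not from the article) evaluates the cycle index of the cube's face permutation group at b colors, which counts face colorings of the cube up to rotation; the cycle index used is the one derived in the cube example above.

```python
from fractions import Fraction

def cube_face_colorings(b):
    """Number of ways to color the 6 faces of a cube with b colors, up to rotation.
    Evaluates Z(C; b, ..., b) with
    Z(C) = (a1^6 + 6*a1^2*a4 + 3*a1^2*a2^2 + 8*a3^2 + 6*a2^3) / 24."""
    return Fraction(b**6 + 6*b**3 + 3*b**4 + 8*b**2 + 6*b**3, 24)

print(cube_face_colorings(2))   # 10 distinct two-colorings of the cube's faces
print(cube_face_colorings(3))   # 57 distinct three-colorings
```

The result is always an integer (the Fraction reduces fully), which is a useful sanity check on a reconstructed cycle index.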
The question of what the cycle structure of a random permutation looks like is an important question in the analysis of algorithms. An overview of the most important results may be found at random permutation statistics. Notes References External links Marko Riedel, Pólya's enumeration theorem and the symbolic method Marko Riedel, Cycle indices of the set / multiset operator and the exponential formula Combinatorics Enumerative combinatorics
Cycle index
[ "Mathematics" ]
4,310
[ "Discrete mathematics", "Enumerative combinatorics", "Combinatorics" ]
4,721,451
https://en.wikipedia.org/wiki/Liquid-crystal%20polymer
Liquid crystal polymers (LCPs) are polymers with liquid-crystalline properties, usually containing aromatic rings as mesogens. In addition to uncrosslinked LCPs, polymeric materials such as liquid crystal elastomers (LCEs) and liquid crystal networks (LCNs) can exhibit liquid crystallinity as well. They are both crosslinked LCPs but have different crosslink densities. They are widely used in the digital display market. In addition, LCPs have unique properties like thermal actuation, anisotropic swelling, and soft elasticity. Therefore, they can be good actuators and sensors. One of the most famous and classical applications for LCPs is Kevlar, a strong but light fiber with wide applications, notably bulletproof vests. Background Liquid crystallinity in polymers may occur either by dissolving a polymer in a solvent (lyotropic liquid-crystal polymers) or by heating a polymer above its glass or melting transition point (thermotropic liquid-crystal polymers). Liquid-crystal polymers are present in melted/liquid or solid form. In solid form, the main example of lyotropic LCPs is the commercial aramid known as Kevlar. The chemical structure of this aramid consists of linearly substituted aromatic rings linked by amide groups. In a similar way, several series of thermotropic LCPs have been commercially produced by several companies. A high number of LCPs, produced in the 1980s, displayed order in the melt phase analogous to that exhibited by nonpolymeric liquid crystals. Processing of LCPs from liquid-crystal phases (or mesophases) gives rise to fibers and injected materials having high mechanical properties as a consequence of the self-reinforcing properties derived from the macromolecular orientation in the mesophase. LCPs can be melt-processed on conventional equipment at high speeds with excellent replication of mold details. The high ease of forming of LCPs is an important competitive advantage against other plastics, as it offsets high raw material cost. Polar and bowlic LCPs, which have unique properties and potential applications, have not been widely produced for industrial purposes. Mesophases Like small-molecule liquid crystals, liquid crystal polymers also exhibit different mesophases. The mesogen cores of the polymers will aggregate into different mesophases: nematics, cholesterics, smectics and compounds with highly polar end groups. More information about the mesophases can be found on the liquid crystal page. Classification LCPs are categorized by the location of liquid crystal cores. Due to the creation and research of different classes of LCPs, different prefixes are used to help the classification of LCPs. Main chain liquid crystal polymers (MCLCPs) have liquid crystal cores in the main chain. By contrast, side chain liquid crystal polymers (SCLCPs) have pendant side chains containing the liquid crystal cores. Main chain LCP Main chain LCPs have rigid, rod-like mesogens in the polymer backbones, which indirectly leads to the high melting temperature of this kind of LCP. To make this kind of polymer easy to process, different methods are applied to lower the transition temperature: introducing flexible sequences, introducing bends or kinks, or adding substituent groups to the aromatic mesogens. Side Chain LCP In side-chain LCPs, the mesogens are in the polymer side chains. The mesogens usually are linked to the backbones through flexible spacers, although for a few LCPs, the side chains directly link to the backbones.
If the mesogens are directly linked to the backbones, the coil-like conformation of the backbones will impede the mesogens from forming an orientational structure. Conversely, by introducing flexible spacers between the backbones and the mesogens, the ordering of mesogens can be decoupled from the conformation of the backbones. Mechanism Mesogens in LCPs can self-organize to form liquid crystal regions in different conditions. LCPs can be roughly divided into two subcategories based on the mechanism of aggregation and ordering, but the distinction is not rigidly defined. LCPs can be transformed into liquid crystals with more than one method. Lyotropic systems Lyotropic main chain LCPs have rigid mesogen cores (such as aromatic rings) in the backbones. This type of LCPs forms liquid crystals due to their rigid chain conformation but not only the aggregation of mesogen cores. Because of the rigid structure, strong solvent is needed to dissolve the lyotropic main chain polymers. When the concentration of the polymers reaches critical concentration, the mesophases begin to form and the viscosity of the polymer solution begins to decrease. Lyotropic main chain LCPs have been mainly used to generate high-strength fibers such as Kevlar. Side chain LCPs usually consist of both hydrophobic and hydrophilic segments. Usually, the side chain ends are hydrophilic. When they are dissolved in water, micelles will form due to hydrophobic force. If the volume fraction of the polymers exceeds the critical volume fraction, the micellar segregates will be packed to form a liquid crystal structure. As the concentration varies above the critical volume fraction, the liquid crystal generated may be packed in different structures. Temperature, the stiffness of the polymers, and the molecular weight of the polymers can affect the liquid crystal transformation. Lyotropic side chain LCPs such as alkyl polyoxyethylene surfactants attached to polysiloxane polymers may be used in personal care products like liquid soap. Thermotropic systems The study of thermotropic LCPs was catalyzed by the success of lyotropic LCPs. Thermotropic LCPs can only be processed when the melting temperature is far below the decomposition temperature. When above the melting temperature but below the clearing point, the thermotropic LCPs will form liquid crystals. Above the clearing point, the melt will be isotropic and clear again. Frozen liquid crystals can be obtained by quenching liquid crystal polymers below the glass transition temperature. Copolymerization can be used to adjust the melting temperature and mesophase temperature. Liquid crystal elastomers (LCEs) Finkelmann first proposed LCEs in 1981. LCEs attracted attention from researchers and industry. LCEs can be synthesized both from polymeric precursors and from monomers. LCEs can respond to heat, light, and magnetic fields. Nanomaterials can be introduced into LCE matrices (LCE-based composites) to provide different properties and tailor LCEs' ability to respond to different stimuli. Applications LCEs have many applications. For example, LCE films can be used as optical retarders due to their anisotropic structure. Because they can control the polarization state of transmitted light, they are commonly used in 3D glasses, patterned retarders for transflective displays, and flat panel LC displays. Modifying LCE with azobenzene, allows it to show light response properties. It can be applied for controlled wettability, autonomous lenses, and haptic surfaces. 
Besides the display application, research has focused on other interesting properties such as their special thermally and photogenerated macroscale mechanical responses, which means they can be good actuators. LCEs are used to make actuators and artificial muscles for robotics. They have been studied for use as lightweight energy absorbers, with potential applications in helmets, body armor, vehicle bumpers, using multi-layered, tilted beams of LCE, sandwiched between stiff supporting structures. Synthesis Polymeric precursors LCEs synthesized from the polymeric precursors can be divided into two subcategories: Poly(hydrosiloxane): A two-step crosslinking technique is applied to derive LCEs from poly(hydrosiloxane). Poly(hydrosiloxane) is mixed with a monovinyl-functionalized liquid crystalline monomer, a multifunctional vinyl crosslinker, and a catalyst. This mixture is used to generate a weakly crosslinked gel, in which the monomers are linked to the poly(hydrosiloxane) backbones. During the first crosslinking step or shortly after that, orientation is introduced into the mesogen cores of the gel with mechanical alignment methods. After that, the gel is dehydrated and the crosslinking reaction is completed. Therefore, the orientation is kept in the elastomer by crosslinking. In this way, highly ordered side chain LCEs can be produced, which are also called single-crystal or monodomain LCEs. LCPs: With LCPs as precursors, a similar two-step method can be applied. Aligned LCPs mixed with multifunctional crosslinkers directly generate LCEs. The mixture is first heated until it becomes isotropic. Fibers are drawn from the mixture and then crosslinked, thus the orientation can be trapped in the LCE. However, it is limited by the difficulty of processing caused by the high viscosity of the starting material. Low molar mass monomers Liquid crystal low molar mass monomers are mixed with crosslinkers and catalysts. The monomers can be aligned and then polymerized to keep the orientation. One advantage of this method is that the low molar mass monomers can be aligned by not only mechanical alignment, but also diamagnetic, dielectric, or surface alignment. For example, thiol-ene radical step-growth polymerization and Michael addition produce well-ordered LCEs. This is also a good way to synthesize moderately to densely crosslinked glassy LCNs. The main difference between LCEs and LCNs is the cross link density. LCNs are primarily synthesized from (meth)acrylate-based multifunctional monomers while LCEs usually come from crosslinked polysiloxanes. Properties A unique class of partially crystalline aromatic polyesters based on p-hydroxybenzoic acid and related monomers, liquid-crystal polymers are capable of forming regions of highly ordered structure while in the liquid phase. However, the degree of order is somewhat less than that of a regular solid crystal. Typically, LCPs have a high mechanical strength at high temperatures, extreme chemical resistance, inherent flame retardancy, and good weatherability. Liquid-crystal polymers come in a variety of forms from sinterable high temperature to injection moldable compounds. LCPs can be welded, though the lines created by welding are a weak point in the resulting product. LCPs have a high Z-axis coefficient of thermal expansion. LCPs are exceptionally inert. They resist stress cracking in the presence of most chemicals at elevated temperatures, including aromatic or halogenated hydrocarbons, strong acids, bases, ketones, and other aggressive industrial substances.
Hydrolytic stability in boiling water is excellent. Environments that deteriorate the polymers are high-temperature steam, concentrated sulfuric acid, and boiling caustic materials. Polar and bowlic LCPs are ferroelectrics, with reaction time order-of-magnitudes smaller than that in conventional LCs and could be used to make ultrafast switches. Bowlic columnar polymers possess long, hollow tubes; with metal or transition metal atoms added into the tube, they could potentially form ultrahigh-Tc superconductors. Uses Because of their various properties, LCPs are useful for electrical and mechanical parts, food containers, and any other applications requiring chemical inertness and high strength. LCP is particularly good for microwave frequency electronics due to low relative dielectric constants, low dissipation factors, and commercial availability of laminates. Packaging microelectromechanical systems (MEMS) is another area that LCP has recently gained more attention. The superior properties of LCPs make them especially suitable for automotive ignition system components, heater plug connectors, lamp sockets, transmission system components, pump components, coil forms and sunlight sensors and sensors for car safety belts. LCPs are also well-suited for computer fans, where their high tensile strength and rigidity enable tighter design tolerances, higher performance, and less noise, albeit at a significantly higher cost. Trade names LCP is sold by manufacturers under a variety of trade names. These include: Zenite Vectra Laperos Zenite 5145L Xydar References External links Prospector Bowlic liquid crystal from San Jose State University Liquid crystals Polymer material properties Thermoplastics ja:液晶ポリマー
Liquid-crystal polymer
[ "Chemistry", "Materials_science" ]
2,615
[ "Polymer material properties", "Polymer chemistry" ]
4,722,074
https://en.wikipedia.org/wiki/Poincar%C3%A9%20inequality
In mathematics, the Poincaré inequality is a result in the theory of Sobolev spaces, named after the French mathematician Henri Poincaré. The inequality allows one to obtain bounds on a function using bounds on its derivatives and the geometry of its domain of definition. Such bounds are of great importance in the modern, direct methods of the calculus of variations. A very closely related result is Friedrichs' inequality. Statement of the inequality The classical Poincaré inequality Let p be such that 1 ≤ p < ∞ and let Ω be a subset of Rn that is bounded in at least one direction. Then there exists a constant C, depending only on Ω and p, so that, for every function u of the Sobolev space W01,p(Ω) of zero-trace (a.k.a. zero on the boundary) functions, ‖u‖Lp(Ω) ≤ C ‖∇u‖Lp(Ω). Poincaré–Wirtinger inequality Assume that 1 ≤ p ≤ ∞ and that Ω is a bounded connected open subset of the n-dimensional Euclidean space with a Lipschitz boundary (i.e., Ω is a Lipschitz domain). Then there exists a constant C, depending only on Ω and p, such that for every function u in the Sobolev space W1,p(Ω), ‖u − uΩ‖Lp(Ω) ≤ C ‖∇u‖Lp(Ω), where uΩ = (1/|Ω|) ∫Ω u(y) dy is the average value of u over Ω, with |Ω| standing for the Lebesgue measure of the domain Ω. When Ω is a ball, the above inequality is called a (p, p)-Poincaré inequality; for more general domains Ω, the above is more familiarly known as a Sobolev inequality. The necessity to subtract the average value can be seen by considering constant functions for which the derivative is zero while, without subtracting the average, we can have the integral of the function as large as we wish. There are other conditions instead of subtracting the average that we can require in order to deal with this issue with constant functions, for example, requiring trace zero, or subtracting the average over some proper subset of the domain. The constant C in the Poincaré inequality may be different from condition to condition. Also note that the issue is not just the constant functions, because it is the same as saying that adding a constant value to a function can increase its integral while the integral of its derivative remains the same. So, simply excluding the constant functions will not solve the issue. Generalizations In the context of metric measure spaces, the definition of a Poincaré inequality is slightly different. One definition is: a metric measure space supports a (q,p)-Poincaré inequality for some 1 ≤ q, p < ∞ if there are constants C and λ ≥ 1 so that for each ball B in the space, ( (1/μ(B)) ∫B |u − uB|^q dμ )^(1/q) ≤ C rad(B) ( (1/μ(λB)) ∫λB g^p dμ )^(1/p). Here we have an enlarged ball λB in the right hand side. In the context of metric measure spaces, g is the minimal p-weak upper gradient of u in the sense of Heinonen and Koskela. Whether a space supports a Poincaré inequality has turned out to have deep connections to the geometry and analysis of the space. For example, Cheeger has shown that a doubling space satisfying a Poincaré inequality admits a notion of differentiation. Such spaces include sub-Riemannian manifolds and Laakso spaces. There exist other generalizations of the Poincaré inequality to other Sobolev spaces. For example, consider the Sobolev space H1/2(T2), i.e. the space of functions u in the L2 space of the unit torus T2 with Fourier transform û satisfying Σk∈Z2 (1 + |k|) |û(k)|^2 < ∞. In this context, the Poincaré inequality says: there exists a constant C such that, for every u ∈ H1/2(T2) with u identically zero on an open set E, ∫T2 |u(x)|^2 dx ≤ C (1 + 1/cap(E × {0})) ‖u‖2H1/2(T2), where cap(E × {0}) denotes the harmonic capacity of E × {0} when thought of as a subset of R3. Yet another generalization involves weighted Poincaré inequalities where the Lebesgue measure is replaced by a weighted version.
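As a simple worked illustration (standard, not taken from the article), consider the one-dimensional case Ω = (0, 1) with p = 2; here the optimal constant in the classical inequality can be computed explicitly from the first Dirichlet eigenvalue of the Laplacian:

\[
\|u\|_{L^2(0,1)} \le \frac{1}{\pi}\,\|u'\|_{L^2(0,1)} \qquad \text{for all } u \in W_0^{1,2}(0,1),
\]

with equality attained by \(u(x) = \sin(\pi x)\), since the first eigenvalue of \(-\tfrac{d^2}{dx^2}\) on \((0,1)\) with Dirichlet boundary conditions is \(\lambda_1 = \pi^2\).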
The Poincaré constant The optimal constant C in the Poincaré inequality is sometimes known as the Poincaré constant for the domain Ω. Determining the Poincaré constant is, in general, a very hard task that depends upon the value of p and the geometry of the domain Ω. Certain special cases are tractable, however. For example, if Ω is a bounded, convex, Lipschitz domain with diameter d, then the Poincaré constant is at most d/2 for p = 1 and d/π for p = 2, and this is the best possible estimate on the Poincaré constant in terms of the diameter alone. For smooth functions, this can be understood as an application of the isoperimetric inequality to the function's level sets. In one dimension, this is Wirtinger's inequality for functions. However, in some special cases the constant C can be determined concretely. For example, for p = 2, it is well known that over the domain of the unit isosceles right triangle, C = 1/π (< d/π, where d is the diameter of the triangle). Furthermore, for a smooth, bounded domain Ω, since the Rayleigh quotient for the Laplace operator in the space W01,2(Ω) is minimized by the eigenfunction corresponding to the minimal eigenvalue λ1 of the (negative) Laplacian, it is a simple consequence that, for any u ∈ W01,2(Ω), λ1 ∫Ω |u|^2 dx ≤ ∫Ω |∇u|^2 dx, and furthermore, that the constant λ1 is optimal. Poincaré inequality on metric-measure spaces Since the 90s there have been several fruitful ways to make sense of Sobolev functions on general metric measure spaces (metric spaces equipped with a measure that is often compatible with the metric in certain senses). For example, the approach based on "upper gradients" leads to Newtonian-Sobolev space of functions. Thus, it makes sense to say that a space "supports a Poincaré inequality". It turns out that whether a space supports any Poincaré inequality and if so, the critical exponent for which it does, is tied closely to the geometry of the space. For example, a space that supports a Poincaré inequality must be path connected. Indeed, between any pair of points there must exist a rectifiable path with length comparable to the distance of the points. Much deeper connections have been found, e.g. through the notion of modulus of path families. A good and rather recent reference is the monograph "Sobolev Spaces on Metric Measure Spaces, an approach based on upper gradients" written by Heinonen et al. Sobolev Slobodeckij Spaces and Poincaré Inequality Given p ∈ [1, ∞) and s ∈ (0, 1), the Sobolev Slobodeckij space W^(s,p)(Ω) is defined as the set of all functions u in Lp(Ω) such that the seminorm is finite. The seminorm is defined by: [u]W^(s,p)(Ω) = ( ∫Ω ∫Ω |u(x) − u(y)|^p / |x − y|^(n+sp) dx dy )^(1/p). The Poincaré Inequality in this context can be generalized as follows: ‖u − uΩ‖Lp(Ω) ≤ C [u]W^(s,p)(Ω), where uΩ is the average of u over Ω and C is a constant dependent on Ω, s, and p. This inequality holds for every bounded Ω. Proof of the Poincaré Inequality The proof follows that of Irene Drelichman and Ricardo G. Durán. Let . By applying Jensen's inequality, we obtain: By exploiting the boundedness of and further estimates: It follows that the constant is given as , however, the reference with Theorem 1 indicates that this is not the optimal constant. Poincaré on Balls We can derive a growth constant for Balls in a manner similar to previous cases. The relationship is given by the following inequality: Sketch of the Proof The proof proceeds similarly to the classical one, by using the scaling . Then, by using a form of chain rule for the fractional derivative, we get as a result. See also Friedrichs' inequality Korn's inequality Spectral gap References Leoni, Giovanni (2009), A First Course in Sobolev Spaces, Graduate Studies in Mathematics, American Mathematical Society, pp.
xvi+607. MAA Theorems in analysis Inequalities Sobolev spaces Inequality
Poincaré inequality
[ "Mathematics" ]
1,583
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems" ]
4,723,118
https://en.wikipedia.org/wiki/Detent
A detent is a mechanical or magnetic means to resist or arrest the movement of a mechanical device. Such a device can be anything ranging from a simple metal pin to a machine. The term is also used for the method involved. Magnetic detents are most often used to divide a shaft rotation into discrete increments. Magnetic detents are inherent in some types of electric motors, most often stepper motors. Mechanics Arresting movement Ratchet and pawl The ratchet-and-pawl design arrests movements by employing a small gravity- or spring-actuated lever paired with a notched wheel. The lever is mounted on a pivot point in proximity to the wheel. The vertical angle of the sides of the notches that face the direction of rotation desired is generally very acute (45 degrees or less), so that as the wheel rotates in that direction, the end of the lever is easily lifted or pushed out and over the top of a notch. Following this, the lever drops into the next notch and the next et cetera as the wheel or shaft continues to spin. The angle of the backside of the notch is severe (usually 90 degrees or greater to the end of the lever) so that the lever cannot be pushed up or out of the notch if wheel attempts to turn in the opposite direction. The lever is jammed between the back of the notch and its pivot point, stopping movement in that direction against any force that the materials used can withstand. The wheel has little resistance moving in the direction desired, other than that required to lift or push the lever over the next notch. Resisting movement To resist movement (or when creating incremental steps), methods are employed which include a spring-loaded ball detent that locates in small incremental depressions, or a piece of spring steel that snaps into position on flat surfaces or shallow notches milled into the shaft or wheel. Motor detents Stepper motors rely on magnetic detents to retain step location when winding power is removed. They are well suited to be used in printers and numerical control (CNC) devices. Examples A well-known example of a detent can be seen on the popular game show Wheel of Fortune, which employs rubber flippers to help disambiguate on which wedge the wheel has stopped after being spun by a contestant. Other common examples include: A balance control on a piece of stereo equipment which seems to "click" or "snap" into the center position of its rotation, indicating the point where the volumes of the left and right channels are equal or "balanced", or volume controls with a separate detent to match each of the digits on the control knob (typically 10). Rotary switches typically employ detents to keep the control shaft properly aligned with the appropriate contact. Any spring-powered wind-up toy employs one, in order to disallow unwinding of the spring. The ratchet wrench, which is employed to intentionally use force against the detent and comes in an increasing variety of types. It was designed to allow one to keep the wrench engaged with the bolt or nut which it is turning, in an area where the swing arc of the wrench is limited, while being able to continue to turn it in one direction by simply pulling the handle back and letting the detent reposition itself. The repositioning allows the wrench to be forcibly turned again. The scroll wheels on many computer mice employ detents to divide scrolling into discrete steps. The shutter button on many cameras has a detent to selectively activate autofocusing and exposure calculation. 
Further pressure on the button then passes the detent and activates the shutter. Many aircraft have detents on the thrust lever for pre-set thrust values e.g. climb or reverse thrust. See also Ball detent Spring plunger Ratchet (device) References Mechanisms (engineering)
Detent
[ "Engineering" ]
788
[ "Mechanical engineering", "Mechanisms (engineering)" ]
4,723,516
https://en.wikipedia.org/wiki/Firefly%20luciferase
Firefly luciferase is the light-emitting enzyme responsible for the bioluminescence of fireflies and click beetles. The enzyme catalyses the oxidation of firefly luciferin, requiring oxygen and ATP. Because of the requirement of ATP, firefly luciferases have been used extensively in biotechnology. Mechanism of reaction The chemical reaction catalyzed by firefly luciferase takes place in two steps: luciferin + ATP → luciferyl adenylate + PPi luciferyl adenylate + O2 → oxyluciferin + AMP + light Light is produced because the reaction forms oxyluciferin in an electronically excited state. The reaction releases a photon of light as oxyluciferin goes back to the ground state. Luciferyl adenylate can additionally participate in a side reaction with O2 to form hydrogen peroxide and dehydroluciferyl-AMP. About 20% of the luciferyl adenylate intermediate is oxidized in this pathway. Firefly luciferase generates light from luciferin in a multistep process. First, D-luciferin is adenylated by MgATP to form luciferyl adenylate and pyrophosphate. After activation by ATP, luciferyl adenylate is oxidized by molecular oxygen to form a dioxetanone ring. A decarboxylation reaction forms an excited state of oxyluciferin, which tautomerizes between the keto-enol form. The reaction finally emits light as oxyluciferin returns to the ground state. Bifunctionality Luciferase can function in two different pathways: a bioluminescence pathway and a CoA-ligase pathway. In both pathways, luciferase initially catalyzes an adenylation reaction with MgATP. However, in the CoA-ligase pathway, CoA can displace AMP to form luciferyl CoA. Fatty acyl-CoA synthetase similarly activates fatty acids with ATP, followed by displacement of AMP with CoA. Because of their similar activities, luciferase is able to replace fatty acyl-CoA synthetase and convert long-chain fatty acids into fatty-acyl CoA for beta oxidation. Structure The protein structure of firefly luciferase consists of 550 amino acids in two compact domains: the N-terminal domain and the C-terminal domain. The N-terminal domain is composed of two β-sheets in an αβαβα structure and a β barrel. The two β-sheets stack on top of each other, with the β-barrel covering the end of the sheets. The C-terminal domain is connected to the N-terminal domain by a flexible hinge, which can separate the two domains. The amino acid sequences on the surface of the two domains facing each other are conserved in bacterial and firefly luciferase, thereby strongly suggesting that the active site is located in the cleft between the domains. During a reaction, luciferase has a conformational change and goes into a "closed" form with the two domains coming together to enclose the substrate. This ensures that water is excluded from the reaction and does not hydrolyze ATP or the electronically excited product. Spectral differences in bioluminescence Firefly luciferase bioluminescence color can vary between yellow-green (λmax = 550 nm) to red (λmax = 620). There are currently several different mechanisms describing how the structure of luciferase affects the emission spectrum of the photon and effectively the color of light emitted. One mechanism proposes that the color of the emitted light depends on whether the product is in the keto or enol form. The mechanism suggests that red light is emitted from the keto form of oxyluciferin, while green light is emitted from the enol form of oxyluciferin. 
However, 5,5-dimethyloxyluciferin emits green light even though it is restricted to the keto form because it cannot tautomerize. Another mechanism proposes that twisting the angle between benzothiazole and thiazole rings in oxyluciferin determines the color of bioluminescence. This explanation proposes that a planar form with an angle of 0° between the two rings corresponds to a higher energy state and emits a higher-energy green light, whereas an angle of 90° puts the structure in a lower energy state and emits a lower-energy red light. The most recent explanation for the bioluminescence color examines the microenvironment of the excited oxyluciferin. Studies suggest that the interactions between the excited state product and nearby residues can force the oxyluciferin into an even higher energy form, which results in the emission of green light. For example, Arg 218 has electrostatic interactions with other nearby residues, restricting oxyluciferin from tautomerizing to the enol form. Similarly, other results have indicated that the microenvironment of luciferase can force oxyluciferin into a more rigid, high-energy structure, forcing it to emit a high-energy green light. Regulation D-luciferin is the substrate for firefly luciferase's bioluminescence reaction, while L-luciferin is the substrate for luciferyl-CoA synthetase activity. Both reactions are inhibited by the substrate's enantiomer: L-luciferin and D-luciferin inhibit the bioluminescence pathway and the CoA-ligase pathway, respectively. This shows that luciferase can differentiate between the isomers of the luciferin structure. L-luciferin is able to emit a weak light even though it is a competitive inhibitor of D-luciferin and the bioluminescence pathway. Light is emitted because the CoA synthesis pathway can be converted to the bioluminescence reaction by hydrolyzing the final product via an esterase back to D-luciferin. Luciferase activity is additionally inhibited by oxyluciferin and allosterically activated by ATP. When ATP binds to the enzyme's two allosteric sites, luciferase's affinity to bind ATP in its active site increases. Homology Firefly luciferase is thought to be a homolog of long-chain fatty acyl-CoA synthetase because of its ability to synthesize luciferyl-CoA from CoA and dehydroluciferyl-AMP. Inouye tested this hypothesis in 2010 by expressing the cDNA of Photinus pyralis and Lychocoriolaus lateralis luciferases in E. coli through cold shock gene expression. The resulting enzymes were then exposed to long-chain fatty acids, short-chain fatty acids, amino acids, and imino acids. Unsurprisingly, Inouye found that the luciferases only showed adenylation activity when exposed to long-chain fatty acids. The gene product of CG6178 in Drosophila was also found to have high amino acid sequence similarity with firefly luciferase. While it did show high adenylation activity when exposed to long-chain fatty acids, there was no luminescence when exposed to oxygen and LH2-AMP, further suggesting that luciferase emerged as a long-chain fatty acyl-CoA synthetase homolog due to gene duplication. Evolution Phylogenetic analyses performed by Zhang et al. (2020) suggest that the luciferases of the Lampyridae, Rhagopthalmidae, and Phenogodidae families diverged from the Elateridae family 205 Mya. According to phylogenetic data, the emergences of these two luciferases appeared even before the families could diverge, indicating their analogous nature due to phenotypic convergence.
See also Bioluminescence imaging References Protein domains Oxidoreductases Bioluminescence Enzymes of known structure
Firefly luciferase
[ "Chemistry", "Biology" ]
1,612
[ "Luminescence", "Oxidoreductases", "Protein classification", "Protein domains", "Biochemistry", "Bioluminescence", "Bioinorganic chemistry" ]
4,723,731
https://en.wikipedia.org/wiki/Critical%20points%20of%20the%20elements%20%28data%20page%29
Critical point References CRC.a-d David R. Lide (ed), CRC Handbook of Chemistry and Physics, 85th Edition, online version. CRC Press. Boca Raton, Florida, 2003; Section 6, Fluid Properties; Critical Constants. Also agrees with Celsius values from Section 4: Properties of the Elements and Inorganic Compounds, Melting, Boiling, Triple, and Critical Point Temperatures of the Elements Estimated accuracy for Tc and Pc is indicated by the number of digits. Above 750 K Tc values may be in error by 10 K or more. Vc values are not assumed accurate more than to a few percent. Parentheses indicate extrapolated values. From these sources: (a) D. Ambrose, Vapor-Liquid Constants of Fluids, in R.M. Stevenson, S. Malanowski, Handbook of the Thermodynamics of Organic Compounds, Elsevier, New York, (1987). (b) I.G. Dillon, P.A. Nelson, B.S. Swanson, J. Chem. Phys. 44, 4229, (1966). (c) O. Sifner, J. Klomfar, J. Phys. Chem. Ref. Data 23, 63, (1994). (d) N.B. Vargaftik, Int. J. Thermophys. 11, 467, (1990). LNG J.A. Dean (ed), Lange's Handbook of Chemistry (15th Edition), McGraw-Hill, 1999; Section 6; Table 6.5 Critical Properties KAL National Physical Laboratory, Kaye and Laby Tables of Physical and Chemical Constants; D. Ambrose, M.B. Ewing, M.L. McGlashan, Critical constants and second virial coefficients of gases (retrieved Dec 2005) SMI W.E. Forsythe (ed.), Smithsonian Physical Tables 9th ed., online version (1954; Knovel 2003). Table 259, Critical Temperatures, Pressures, and Densities of Gases See also Phase transitions Units of pressure Temperature Properties of chemical elements Chemical element data pages
Critical points of the elements (data page)
[ "Physics", "Chemistry", "Mathematics" ]
445
[ "Physical phenomena", "Physical quantities", "Properties of chemical elements", "Units of pressure", "Phases of matter", "Chemical element data pages", "Thermodynamics", "Statistical mechanics", "Phase transitions", "Chemical data pages", "Wikipedia categories named after physical quantities", ...
4,724,116
https://en.wikipedia.org/wiki/ISO%2015926
The ISO 15926 is a standard for data integration, sharing, exchange, and hand-over between computer systems. The title, "Industrial automation systems and integration—Integration of life-cycle data for process plants including oil and gas production facilities", is regarded as too narrow by the present ISO 15926 developers. After a generic data model and reference data library for process plants had been developed, it turned out that this subject is so wide that actually any state information may be modelled with it. History In 1991 a European Union ESPRIT project, named ProcessBase, started. The focus of this research project was to develop a data model for lifecycle information of a facility that would suit the requirements of the process industries. At the time that the project duration had elapsed, a consortium of companies involved in the process industries had been established: EPISTLE (European Process Industries STEP Technical Liaison Executive). Initially individual companies were members, but later this changed into a situation where three national consortia were the only members: PISTEP (UK), POSC/Caesar (Norway), and USPI-NL (Netherlands). (later PISTEP merged into POSC/Caesar, and USPI-NL was renamed to USPI). EPISTLE took over the work of the ProcessBase project. Initially this work involved a standard called ISO 10303-221 (referred to as "STEP AP221"). In that AP221 we saw, for the first time, an Annex M with a list of standard instances of the AP221 data model, including types of objects. These standard instances would be for reference and would act as a knowledge base with knowledge about the types of objects. In the early nineties EPISTLE started an activity to extend Annex M to become a library of such object classes and their relationships: STEPlib. In the STEPlib activities a group of approx. 100 domain experts from all three member consortia, spread over the various expertises (e.g. Electrical, Piping, Rotating equipment, etc.), worked together to define the "core classes". The development of STEPlib was extended with many additional classes and relationships between classes and published as Open source data. Furthermore, the concepts and relation types from the AP221 and ISO 15926-2 data models were also added to the STEPlib dictionary. This resulted in the development of Gellish English, whereas STEPlib became the Gellish English dictionary. Gellish English is a structured subset of natural English and is a modeling language suitable for knowledge modeling, product modeling and data exchange. It differs from conventional modeling languages (meta languages) as used in information technology as it not only defines generic concepts, but also includes an English dictionary. The semantic expression capability of Gellish English was significantly increased by extending the number of relation types that can be used to express knowledge and information. For modelling-technical reasons POSC/Caesar proposed a standard other than ISO 10303, called ISO 15926. EPISTLE (and ISO) supported that proposal, and continued the modelling work, thereby writing Part 2 of ISO 15926. This Part 2 has had official ISO IS (International Standard) status since 2003. POSC/Caesar started to put together their own RDL (Reference Data Library). They added many specialized classes, for example for ANSI (American National Standards Institute) pipe and pipe fittings. Meanwhile, STEPlib continued its existence, mainly driven by some members of USPI.
Since it was clear that it was not in the interest of the industry to have two libraries for, in essence, the same set of classes, the Management Board of EPISTLE decided that the core classes of the two libraries shall be merged into Part 4 of ISO 15926. This merging process has been finished. Part 4 should act as reference data for part 2 of ISO 15926 as well as for ISO 10303-221 and replaced its Annex M. On June 5, 2007 ISO 15926-4 was signed off as a TS (Technical Specification). In 1999 the work on an earlier version of Part 7 started. Initially this was based on XML Schema (the only useful W3C Recommendation available then), but when Web Ontology Language (OWL) became available it was clear that OWL provided a far more suitable environment for Part 7. Part 7 passed the first ISO ballot by the end of 2005, and an implementation project started. A formal ballot for TS (Technical Specification) was planned for December 2007. However, it was decided then to split Part 7 into more than one part, because the scope was too wide. Need for ISO 15926 In 2004, the National Institute of Standards and Technology (NIST) released a report on the impact of the lack of digital interoperability in the capital projects industry. They pegged the cost of inadequate interoperability to be $5.8 billion per year. The full report is over 200 pages. The standard ISO 15926 has thirteen parts (as of February 2022): Part 1 - Overview and fundamental principles Part 2 - Data model Part 3 - Reference data for geometry and topology Part 4 - Reference Data, the terms used within facilities for the process industry Part 6 - Methodology for the development and validation of reference data (under development) Part 7 - Template methodology Part 8 - OWL/RDF implementation Part 9 - Implementation standards, with the focus on standard web servers, web services, and security (under development) Part 10 - Conformance testing Part 11 - Methodology for simplified industrial usage of reference data (under development) Part 12 - Life cycle integration ontology in Web Ontology Language (OWL2) Part 13 - Integrated lifecycle asset planning Description The model and the library are suitable for representing lifecycle information about technical installations and their components. They can also be used for defining the terms used in product catalogs in e-commerce. Another, more limited, use of the standard is as a reference classification for harmonization purposes between shared databases and product catalogues that are not based on ISO 15926. The purpose of ISO 15926 is to provide a Lingua Franca for computer systems, thereby integrating the information produced by them. Although set up for the process industries with large projects involving many parties, and involving plant operations and maintenance lasting decades, the technology can be used by anyone willing to set up a proper vocabulary of reference data in line with Part 4. In Part 7 the concept of Templates is introduced. These are semantic constructs, using Part 2 entities, that represent a small piece of information. These constructs then are mapped to more efficient classes of n-ary relations that interlink the Nodes that are involved in the represented information. In Part 8 the Part 7 Templates are defined in OWL and instantiated in RDF. For validation and reasoning purposes all are represented in First-Order Logic as well. In Part 9 these Node and Template instances are stored in an RDF triple store, set up to a standard schema and an API.
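To make the triple-store idea in Parts 8 and 9 more tangible, here is an illustrative Python sketch using the rdflib library. The namespaces, class names, and identifiers below are invented for the example and are not actual ISO 15926 reference data items; real deployments would reference the published reference data library instead.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF

# Hypothetical namespaces; a real deployment would point at an actual RDL endpoint.
EX = Namespace("http://example.org/plant/")
RDL = Namespace("http://example.org/rdl/")

g = Graph()

# One instance of a hypothetical "has indirect property" template for an induction
# motor: physical object, property type, numeric value and unit of measure.
t = URIRef(EX["template-instance-001"])
g.add((t, RDF.type, RDL["HasIndirectPropertyTemplate"]))
g.add((t, RDL["hasPossessor"], EX["induction-motor-42"]))
g.add((t, RDL["hasPropertyType"], RDL["ColdLockedRotorTime"]))
g.add((t, RDL["hasValue"], Literal(12.5)))
g.add((t, RDL["hasUnitOfMeasure"], RDL["Second"]))

print(g.serialize(format="turtle"))
```

Such template instances, once stored in a triple store, could then be retrieved with SPARQL queries of the kind mentioned further on.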
Each participating computer system maps its data from its internal format to such ISO-standard Node and Template instances. Data can be "handed over" from one triple store to another in cases where data custodianship is handed over (e.g. from a contractor to a plant owner, or from a manufacturer to the owners of the manufactured goods). Hand-over can be for a part of all data, whilst maintaining full referential integrity. Documents are user-definable. They are defined in XML Schema and they are, in essence, only a structure containing cells that make reference to instances of Templates. This represents a view on all lifecycle data: since the data model is a 4D (space-time) model, it is possible to present the data that was valid at any given point in time, thus providing a true historical record. It is expected that this will be used for Knowledge Mining. Data can be queried by means of SPARQL. In any implementation a restricted number of triple stores can be involved, with different access rights. This is done by means of creating a CPF Server (= Confederation of Participating Façades). An Ontology Browser allows for access to one or more triple stores in a given CPF, depending on the access rights. Projects and applications There are a number of projects working on the extension of the ISO 15926 standard in different application areas. Capital-intensive projects Within the application of Capital Intensive projects, some cooperating implementation projects are running: The DEXPI project: The objective of DEXPI is to develop and promote a general standard for the process industry covering all phases of the lifecycle of a (petro-)chemical plant, ranging from specification of functional requirements to assets in operation. Finalised projects include: The EDRC Project of FIATECH Capturing Equipment Data Requirements Using ISO 15926 and Assessing Conformance. The ADI Project of FIATECH, to build the tools (which will then be made available in the public domain) The tools and deliverables can be seen on the ISO 15926 knowledge base The IDS Project of POSC Caesar Association, to define product models required for data sheets A joint ADI-IDS project is the ISO 15926 WIP Upstream Oil and Gas industry The Norwegian Oil Industry Association (OLF) has decided to use ISO 15926 (also known as the Oil and Gas Ontology) as the instrument for integrating data across disciplines and business domains for the Upstream Oil and Gas industry. It is seen as one of the enablers of what has been called the next (or second) generation of Integrated operations, where a better integration across companies is the goal. The following projects are currently running (May 2009): The Integrated Operations in the High North (IOHN) project is working on extending ISO 15926 to handle real-time data transmission and (pre-)processing to enable the next generation of Integrated Operations. The Environment Web project to include environmental reporting terms and definitions as used in EPIM's EnvironmentWeb in ISO 15926. Finalised projects include: The Integrated Information Platform (IIP) project working on establishing a real-time information pipeline based on open standards. It worked among others on: Daily Drilling Report (DDR) to including all terms and definitions in ISO 15926. This standard became mandatory on February 1, 2008 for reporting on the Norwegian Continental Shelf by the Norwegian Petroleum Directorate (NPD) and Safety Authority Norway (PSA). 
NPD says that the quality of the reports has improved considerably since. The Daily Production Report (DPR), including all its terms and definitions in ISO 15926. This standard was tested successfully on the Valhall (BP-operated) and Åsgard (StatoilHydro-operated) fields offshore Norway. The terminology and XML schemata developed have also been included in Energistics’ PRODML standard. Some technical background One of the main requirements was (and still is) that the scope of the data model covers the entire lifecycle of a facility (e.g. oil refinery) and its components (e.g. pipes, pumps and their parts, etc.). Since such a facility over such a long time entails many different types of activities on a myriad of different objects, it became clear that a generic and data-driven data model would be required. A simple example will illustrate this. There are thousands of different types of physical objects in a facility (pumps, compressors, pipes, instruments, fluids, etc.). Each of these has many properties. If all combinations were modelled in a "hard-coded" fashion, the number of combinations would be staggering, and unmanageable. The solution is a "template" that represents the semantics of: "This object has a property of X yyyy" (where yyyy is the unit of measure). Any instance of that template refers to the applicable reference data: physical object (e.g. my Induction Motor) indirect property type (e.g. the class "cold locked rotor time") base property type (here: time) scale (here: seconds) Without being able to make reference to those classes, via the Internet, it would be impossible to express this information. References External links 15926.org: A forum for ISO 15926 discussions and team collaboration. iringtoday.com: An online ISO 15926 thought leadership community geared toward engineering management. .15926 Editor: Open-source software to view, edit and verify ISO 15926 data. XMpLant: A translation tool to convert 2D and 3D plant and process CAD data to ISO 15926. Against Idiosyncrasy in Ontology Development: A critical study of ISO 15926 and of the claims made on its behalf. A Response to "Against Idiosyncrasy in Ontology Development": A rebuttal of "Against Idiosyncrasy in Ontology Development". 15926 Semantic Web Knowledge engineering Technical communication Information science Ontology (information science) Knowledge representation
ISO 15926
[ "Engineering" ]
2,649
[ "Systems engineering", "Knowledge engineering" ]
4,725,234
https://en.wikipedia.org/wiki/Extragalactic%20background%20light
The diffuse extragalactic background light (EBL) is all the accumulated radiation in the universe due to star formation processes, plus a contribution from active galactic nuclei (AGNs). This radiation covers almost all wavelengths of the electromagnetic spectrum, except the microwave, which is dominated by the primordial cosmic microwave background. The EBL is part of the diffuse extragalactic background radiation (DEBRA), which by definition covers the entire electromagnetic spectrum. After the cosmic microwave background, the EBL produces the second-most energetic diffuse background, thus being essential for understanding the full energy balance of the universe. The understanding of the EBL is also fundamental for extragalactic very-high-energy (VHE, 30 GeV-30 TeV) astronomy. VHE photons coming from cosmological distances are attenuated by pair production with EBL photons. This interaction is dependent on the spectral energy distribution (SED) of the EBL. Therefore, it is necessary to know the SED of the EBL in order to study intrinsic properties of the emission in the VHE sources. Observations The direct measurement of the EBL is difficult mainly due to the contribution of zodiacal light that is orders of magnitude higher than the EBL. Different groups have claimed the detection of the EBL in the optical and near-infrared. However, it has been proposed that these analyses have been contaminated by zodiacal light. Recently, two independent groups using different technique have claimed the detection of the EBL in the optical with no contamination from zodiacal light. There are also other techniques that set limits to the background. It is possible to set lower limits from deep galaxy surveys. On the other hand, VHE observations of extragalactic sources set upper limits to the EBL. In November 2018, astronomers reported that the EBL amounted to photons. Empirical modelings There are empirical approaches that predict the overall SED of the EBL in the local universe as well as its evolution over time. These types of modeling can be divided in four different categories according to: (i) Forward evolution, which begins with cosmological initial conditions and follows a forward evolution with time by means of semi-analytical models of galaxy formation. (ii) Backward evolution, which begins with existing galaxy populations and extrapolates them backwards in time. (iii) Evolution of the galaxy populations that is inferred over a range of redshifts. The galaxy evolution is inferred here using some quantity derived from observations such as the star formation rate density of the universe. (iv) Evolution of the galaxy populations that is directly observed over the range of redshifts that contribute significantly to the EBL. See also Cosmic infrared background Cosmic microwave background (CMB) radiation Diffuse extragalactic background radiation References Physical cosmology
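As a back-of-the-envelope illustration of the attenuation mechanism described above (a purely kinematic sketch, not an EBL model; the optical depth itself must come from a full EBL SED model), the following Python snippet computes the minimum EBL photon energy that can pair-produce with a VHE gamma-ray of a given energy, and states the exp(−τ) attenuation law. The numerical energies chosen are illustrative.

```python
# Back-of-the-envelope sketch (not an EBL model): kinematic threshold for
# pair production gamma + gamma_EBL -> e+ e-, and the exp(-tau) attenuation law.
import numpy as np

ME_C2_EV = 0.511e6          # electron rest energy [eV]
HC_EV_UM = 1.2398           # h*c [eV * micrometre]

def ebl_threshold_energy_ev(E_vhe_eV, theta=np.pi):
    """Minimum EBL photon energy able to pair-produce with a VHE photon
    of energy E_vhe_eV when the photons collide at angle theta (head-on by default)."""
    return 2.0 * ME_C2_EV**2 / (E_vhe_eV * (1.0 - np.cos(theta)))

for E_tev in (0.1, 1.0, 10.0):
    E_ev = E_tev * 1e12
    eps = ebl_threshold_energy_ev(E_ev)   # threshold EBL photon energy [eV]
    lam = HC_EV_UM / eps                  # corresponding wavelength [micrometres]
    print(f"{E_tev:5.1f} TeV gamma-ray: threshold EBL photon {eps:6.3f} eV "
          f"(lambda ~ {lam:6.2f} um; the cross-section peaks near ~2x threshold)")

# Attenuation of the observed spectrum: F_obs(E) = F_int(E) * exp(-tau(E, z)),
# where the optical depth tau must be computed from an EBL SED model (not done here).
def observed_flux(F_intrinsic, tau):
    return F_intrinsic * np.exp(-tau)
```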
Extragalactic background light
[ "Physics", "Astronomy" ]
572
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
4,725,430
https://en.wikipedia.org/wiki/Axial%20multipole%20moments
Axial multipole moments are a series expansion of the electric potential of a charge distribution localized close to the origin along one Cartesian axis, denoted here as the z-axis. However, the axial multipole expansion can also be applied to any potential or field that varies inversely with the distance to the source, i.e., as . For clarity, we first illustrate the expansion for a single point charge, then generalize to an arbitrary charge density localized to the z-axis. Axial multipole moments of a point charge The electric potential of a point charge q located on the z-axis at (Fig. 1) equals If the radius r of the observation point is greater than a, we may factor out and expand the square root in powers of using Legendre polynomials where the axial multipole moments contain everything specific to a given charge distribution; the other parts of the electric potential depend only on the coordinates of the observation point P. Special cases include the axial monopole moment , the axial dipole moment and the axial quadrupole moment . This illustrates the general theorem that the lowest non-zero multipole moment is independent of the origin of the coordinate system, but higher multipole moments are not (in general). Conversely, if the radius r is less than a, we may factor out and expand in powers of , once again using Legendre polynomials where the interior axial multipole moments contain everything specific to a given charge distribution; the other parts depend only on the coordinates of the observation point P. General axial multipole moments To get the general axial multipole moments, we replace the point charge of the previous section with an infinitesimal charge element , where represents the charge density at position on the z-axis. If the radius r of the observation point P is greater than the largest for which is significant (denoted ), the electric potential may be written where the axial multipole moments are defined Special cases include the axial monopole moment (=total charge) the axial dipole moment , and the axial quadrupole moment . Each successive term in the expansion varies inversely with a greater power of , e.g., the monopole potential varies as , the dipole potential varies as , the quadrupole potential varies as , etc. Thus, at large distances (), the potential is well-approximated by the leading nonzero multipole term. The lowest non-zero axial multipole moment is invariant under a shift b in origin, but higher moments generally depend on the choice of origin. The shifted multipole moments would be Expanding the polynomial under the integral leads to the equation If the lower moments are zero, then . The same equation shows that multipole moments higher than the first non-zero moment do depend on the choice of origin (in general). Interior axial multipole moments Conversely, if the radius r is smaller than the smallest for which is significant (denoted ), the electric potential may be written where the interior axial multipole moments are defined Special cases include the interior axial monopole moment ( the total charge) the interior axial dipole moment , etc. Each successive term in the expansion varies with a greater power of , e.g., the interior monopole potential varies as , the dipole potential varies as , etc. At short distances (), the potential is well-approximated by the leading nonzero interior multipole term. 
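As a numerical sketch of the exterior expansion above (units chosen so that the Coulomb prefactor 1/(4πε0) equals 1; the charge value, its position, and the observation point are arbitrary illustrative numbers), the following Python snippet checks that the truncated axial multipole series for a point charge on the z-axis converges to the exact Coulomb potential for r > a.

```python
# Minimal numerical check (units with 1/(4*pi*eps0) = 1): the exterior axial
# multipole expansion of a point charge q at z = a, truncated at order kmax,
# converges to the exact Coulomb potential for r > a.
import numpy as np
from scipy.special import eval_legendre

def exact_potential(q, a, r, theta):
    return q / np.sqrt(r**2 + a**2 - 2.0 * a * r * np.cos(theta))

def multipole_potential(q, a, r, theta, kmax):
    # axial multipole moments of a point charge: M_k = q * a**k
    ks = np.arange(kmax + 1)
    terms = q * a**ks * eval_legendre(ks, np.cos(theta)) / r**(ks + 1)
    return terms.sum()

q, a = 1.0, 0.3
r, theta = 1.0, 0.7          # observation point with r > a
exact = exact_potential(q, a, r, theta)
for kmax in (0, 1, 2, 5, 10):
    approx = multipole_potential(q, a, r, theta, kmax)
    print(f"kmax={kmax:2d}: multipole sum = {approx:.8f}  (exact = {exact:.8f})")
```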
See also Potential theory Multipole expansion Spherical multipole moments Cylindrical multipole moments Solid harmonics Laplace expansion References Electromagnetism Potential theory Moment (physics)
Axial multipole moments
[ "Physics", "Mathematics" ]
718
[ "Electromagnetism", "Physical phenomena", "Functions and mappings", "Physical quantities", "Quantity", "Mathematical objects", "Potential theory", "Mathematical relations", "Fundamental interactions", "Moment (physics)" ]
4,726,131
https://en.wikipedia.org/wiki/Photomedicine
Photomedicine is an interdisciplinary branch of medicine that involves the study and application of light with respect to health and disease. Photomedicine may be related to the practice of various fields of medicine including dermatology, surgery, interventional radiology, optical diagnostics, cardiology, circadian rhythm sleep disorders and oncology. A branch of photomedicine is light therapy, in which bright light strikes the retinae of the eyes; it is used to treat circadian rhythm disorders and seasonal affective disorder (SAD). The light can be sunlight or light from a light box emitting white or blue (blue/green) light. Examples Photomedicine is used as a treatment for many different conditions: PUVA for the treatment of psoriasis Photodynamic therapy (PDT) for treatment of cancer and macular degeneration - Nontoxic light-sensitive compounds are targeted to malignant or other diseased cells, then exposed selectively to light, whereupon they become toxic and destroy these cells (phototoxicity). One dermatological example of PDT is the targeting of malignant cells by bonding the light-sensitive compounds to antibodies against these cells; light exposure at particular wavelengths mediates release of free radicals or other photosensitizing agents, destroying the targeted cells. Treating circadian rhythm disorders Alopecia, pattern hair loss, etc. Free electron laser Laser hair removal IPL Photobiomodulation Optical diagnostics, for example optical coherence tomography of coronary plaques using infrared light Confocal microscopy and fluorescence microscopy of in vivo tissue Diffuse reflectance infrared Fourier transform for in vivo quantification of pigments (normal and cancerous), and hemoglobin Perpendicular-polarized flash photography and fluorescence photography of the skin See also Blood irradiation therapy Aesthetic medicine Laser hair removal Laser medicine Light therapy Neuromodulation Neurostimulation Neurotechnology Rox Anderson References Further reading Rünger, Thomas M. Photodermatology, Photoimmunology & Photomedicine. Wiley (online). External links Article: Role of Photomedicine in Gynecological Oncology Medical physics Laser medicine Light therapy
Photomedicine
[ "Physics" ]
447
[ "Applied and interdisciplinary physics", "Medical physics" ]
23,600,625
https://en.wikipedia.org/wiki/Lawson%20topology
In mathematics and theoretical computer science, the Lawson topology, named after Jimmie D. Lawson, is a topology on partially ordered sets (posets) used in the study of domain theory. The lower topology on a poset P is generated by the subbasis consisting of all complements of principal filters on P. The Lawson topology on P is the smallest common refinement of the lower topology and the Scott topology on P. Properties If P is a complete upper semilattice, the Lawson topology on P is always a complete T1 topology. See also Formal ball References G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. Mislove, D. S. Scott (2003), Continuous Lattices and Domains, Encyclopedia of Mathematics and its Applications, Cambridge University Press. External links "How Do Domains Model Topologies?," Paweł Waszkiewicz, Electronic Notes in Theoretical Computer Science 83 (2004) Domain theory General topology Order theory
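As an informal illustrative example (a standard one, sketched here without proof): on the unit interval regarded as a complete lattice, the Lawson topology reduces to the familiar Euclidean topology.

```latex
% Sketch: the Lawson topology on the complete lattice P = [0,1] with its usual order.
\begin{align*}
 &\text{Scott topology:}            && \{\emptyset,\,[0,1]\}\cup\{(a,1] : 0\le a<1\},\\
 &\text{lower topology (subbase):}  && [0,1]\setminus{\uparrow}a = [0,a),\quad 0<a\le 1,\\
 &\text{Lawson topology:}           && \text{generated by } (a,1] \text{ and } [0,b);\ \text{since } (a,1]\cap[0,b)=(a,b),\\
 &                                  && \text{this is the usual Euclidean topology on } [0,1].
\end{align*}
```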
Lawson topology
[ "Mathematics" ]
210
[ "General topology", "Topology stubs", "Topology", "Domain theory", "Order theory" ]
23,602,727
https://en.wikipedia.org/wiki/Modulational%20instability
In the fields of nonlinear optics and fluid dynamics, modulational instability or sideband instability is a phenomenon whereby deviations from a periodic waveform are reinforced by nonlinearity, leading to the generation of spectral-sidebands and the eventual breakup of the waveform into a train of pulses. It is widely believed that the phenomenon was first discovered − and modeled − for periodic surface gravity waves (Stokes waves) on deep water by T. Brooke Benjamin and Jim E. Feir, in 1967. Therefore, it is also known as the Benjamin−Feir instability. However, spatial modulation instability of high-power lasers in organic solvents was observed by Russian scientists N. F. Piliptetskii and A. R. Rustamov in 1965, and the mathematical derivation of modulation instability was published by V. I. Bespalov and V. I. Talanov in 1966. Modulation instability is a possible mechanism for the generation of rogue waves. Initial instability and gain Modulation instability only happens under certain circumstances. The most important condition is anomalous group velocity dispersion, whereby pulses with shorter wavelengths travel with higher group velocity than pulses with longer wavelength. (This condition assumes a focusing Kerr nonlinearity, whereby refractive index increases with optical intensity.) The instability is strongly dependent on the frequency of the perturbation. At certain frequencies, a perturbation will have little effect, while at other frequencies, a perturbation will grow exponentially. The overall gain spectrum can be derived analytically, as is shown below. Random perturbations will generally contain a broad range of frequency components, and so will cause the generation of spectral sidebands which reflect the underlying gain spectrum. The tendency of a perturbing signal to grow makes modulation instability a form of amplification. By tuning an input signal to a peak of the gain spectrum, it is possible to create an optical amplifier. Mathematical derivation of gain spectrum The gain spectrum can be derived by starting with a model of modulation instability based upon the nonlinear Schrödinger equation which describes the evolution of a complex-valued slowly varying envelope with time and distance of propagation . The imaginary unit satisfies The model includes group velocity dispersion described by the parameter , and Kerr nonlinearity with magnitude A periodic waveform of constant power is assumed. This is given by the solution where the oscillatory phase factor accounts for the difference between the linear refractive index, and the modified refractive index, as raised by the Kerr effect. The beginning of instability can be investigated by perturbing this solution as where is the perturbation term (which, for mathematical convenience, has been multiplied by the same phase factor as ). Substituting this back into the nonlinear Schrödinger equation gives a perturbation equation of the form where the perturbation has been assumed to be small, such that The complex conjugate of is denoted as Instability can now be discovered by searching for solutions of the perturbation equation which grow exponentially. This can be done using a trial function of the general form where and are the wavenumber and (real-valued) angular frequency of a perturbation, and and are constants. The nonlinear Schrödinger equation is constructed by removing the carrier wave of the light being modelled, and so the frequency of the light being perturbed is formally zero. 
Therefore, and don't represent absolute frequencies and wavenumbers, but the difference between these and those of the initial beam of light. It can be shown that the trial function is valid, provided and subject to the condition This dispersion relation is vitally dependent on the sign of the term within the square root, as if positive, the wavenumber will be real, corresponding to mere oscillations around the unperturbed solution, whilst if negative, the wavenumber will become imaginary, corresponding to exponential growth and thus instability. Therefore, instability will occur when that is for This condition describes the requirement for anomalous dispersion (such that is negative). The gain spectrum can be described by defining a gain parameter as so that the power of a perturbing signal grows with distance as The gain is therefore given by where as noted above, is the difference between the frequency of the perturbation and the frequency of the initial light. The growth rate is maximum for Modulation instability in soft systems Modulation instability of optical fields has been observed in photo-chemical systems, namely, photopolymerizable medium. Modulation instability occurs owing to inherent optical nonlinearity of the systems due to photoreaction-induced changes in the refractive index. Modulation instability of spatially and temporally incoherent light is possible owing to the non-instantaneous response of photoreactive systems, which consequently responds to the time-average intensity of light, in which the femto-second fluctuations cancel out. References Further reading Nonlinear optics Photonics Water waves Fluid dynamic instabilities
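To make the shape of the gain spectrum concrete, the following Python sketch uses the standard fibre-optics form of the result, g(Ω) = |β₂Ω|√(Ω_c² − Ω²) with Ω_c² = 4γP₀/|β₂|, valid for a focusing nonlinearity and anomalous dispersion (β₂ < 0). The parameter names and normalisation follow that convention and the numerical values are illustrative only, so they may differ from the notation used in the derivation above.

```python
# Sketch of the modulation-instability power-gain spectrum, using the standard
# fibre-optics convention g(W) = |beta2*W| * sqrt(Wc**2 - W**2), Wc**2 = 4*gamma*P0/|beta2|
# (focusing nonlinearity, anomalous dispersion beta2 < 0). Values are illustrative only.
import numpy as np

beta2 = -20e-27      # group-velocity dispersion [s^2/m] (anomalous)
gamma = 1.3e-3       # Kerr nonlinear coefficient [1/(W*m)]
P0 = 1.0             # pump power [W]

Wc = np.sqrt(4.0 * gamma * P0 / abs(beta2))   # cutoff frequency [rad/s]
W = np.linspace(-1.2 * Wc, 1.2 * Wc, 1001)    # perturbation frequency offset

gain = np.abs(beta2 * W) * np.sqrt(np.maximum(Wc**2 - W**2, 0.0))

# The peak gain 2*gamma*P0 occurs at W = +/- Wc/sqrt(2).
print(f"cutoff Wc = {Wc:.3e} rad/s")
print(f"max gain  = {gain.max():.3e} 1/m (expected {2.0*gamma*P0:.3e})")
```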
Modulational instability
[ "Physics", "Chemistry" ]
1,009
[ "Physical phenomena", "Fluid dynamic instabilities", "Water waves", "Waves", "Fluid dynamics" ]
23,605,014
https://en.wikipedia.org/wiki/1%2C1-Dichloro-1-fluoroethane
1,1-Dichloro-1-fluoroethane is a haloalkane with the formula CH3CCl2F. It is one of the three isomers of dichlorofluoroethane. It belongs to the hydrochlorofluorocarbon (HCFC) family of man-made compounds that contribute significantly to both ozone depletion and global warming when released into the environment. Physicochemical properties 1,1-Dichloro-1-fluoroethane is a non-flammable, colourless liquid under room-temperature atmospheric conditions. The compound is very volatile, with a boiling point of 32°C. Its critical temperature is near 204°C. Its smell has been described as ethereal (like ether). Production and applications 1,1-Dichloro-1-fluoroethane is mainly used as a solvent and foam blowing agent under the names R-141b and HCFC-141b. It is a class 2 ozone-depleting substance that has been undergoing a global phase-out of production and use under the Montreal Protocol since the late 1990s. It is being replaced by HFCs in some applications. Environmental effects The concentration of HCFC-141b in the atmosphere grew to near 25 parts per trillion by 2016. It has an ozone depletion potential (ODP) of 0.12. This is low compared to the ODP=1 of trichlorofluoromethane (CFC-11, R-11), which also grew about ten times more abundant in the atmosphere prior to the introduction of HCFC-141b and the subsequent adoption of the Montreal Protocol. HCFC-141b is also a minor but potent greenhouse gas. It has an estimated lifetime of about 10 years and a 100-year global warming potential ranging from 725 to 2500. This compares to the GWP=1 of carbon dioxide, which had a much greater atmospheric concentration of about 400 parts per million in 2020. See also IPCC list of greenhouse gases List of refrigerants References Hydrochlorofluorocarbons Halogenated solvents Ozone-depleting chemical substances Refrigerants Greenhouse gases
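As a simple worked illustration of what the 100-year GWP range quoted above means in practice (the release mass here is a made-up example value, not a measured emission):

```python
# Simple CO2-equivalent arithmetic using the 100-year GWP range quoted above;
# the release mass is a made-up illustrative value.
mass_released_kg = 10.0                      # hypothetical release of HCFC-141b
for gwp in (725, 2500):
    co2e_kg = mass_released_kg * gwp         # CO2-equivalent mass [kg]
    print(f"GWP = {gwp}: {mass_released_kg:.0f} kg HCFC-141b ~ {co2e_kg:,.0f} kg CO2e")
```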
1,1-Dichloro-1-fluoroethane
[ "Chemistry", "Environmental_science" ]
450
[ "Greenhouse gases", "Harmful chemical substances", "Environmental chemistry", "Ozone-depleting chemical substances" ]
23,606,750
https://en.wikipedia.org/wiki/Werner%20state
A Werner state is a -dimensional bipartite quantum state density matrix that is invariant under all unitary operators of the form . That is, it is a bipartite quantum state that satisfies for all unitary operators U acting on d-dimensional Hilbert space. These states were first developed by Reinhard F. Werner in 1989. General definition Every Werner state is a mixture of projectors onto the symmetric and antisymmetric subspaces, with the relative weight being the main parameter that defines the state, in addition to the dimension : where are the projectors and is the permutation or flip operator that exchanges the two subsystems A and B. Werner states are separable for p ≥ and entangled for p < . All entangled Werner states violate the PPT separability criterion, but for d ≥ 3 no Werner state violates the weaker reduction criterion. Werner states can be parametrized in different ways. One way of writing them is where the new parameter α varies between −1 and 1 and relates to p as Two-qubit example Two-qubit Werner states, corresponding to above, can be written explicitly in matrix form asEquivalently, these can be written as a convex combination of the totally mixed state with (the projection onto) a Bell state: where (or, confining oneself to positive values, ) is related to by . Then, two-qubit Werner states are separable for and entangled for . Werner-Holevo channels A Werner-Holevo quantum channel with parameters and integer is defined as where the quantum channels and are defined as and denotes the partial transpose map on system A. Note that the Choi state of the Werner-Holevo channel is a Werner state: where . Multipartite Werner states Werner states can be generalized to the multipartite case. An N-party Werner state is a state that is invariant under for any unitary U on a single subsystem. The Werner state is no longer described by a single parameter, but by N! − 1 parameters, and is a linear combination of the N! different permutations on N systems. References Quantum states
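As a concrete numerical sketch of the two-qubit case, the following Python snippet uses the common Bell-state-mixture convention ρ = p|Ψ⁻⟩⟨Ψ⁻| + (1 − p)I/4 (which may be parametrized differently above) and applies the PPT test; in this convention the state is entangled exactly when p > 1/3.

```python
# Sketch: build the two-qubit Werner state rho = p*|Psi-><Psi-| + (1-p)*I/4
# (Bell-state-mixture convention) and apply the PPT test; in this
# parametrisation the state is entangled exactly when p > 1/3.
import numpy as np

def werner_2qubit(p):
    psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)     # singlet Bell state
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

def partial_transpose_B(rho):
    # reshape to indices (a, b, a', b') and transpose subsystem B
    r = rho.reshape(2, 2, 2, 2)
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

for p in (0.2, 1 / 3, 0.5, 0.9):
    rho = werner_2qubit(p)
    min_eig = np.linalg.eigvalsh(partial_transpose_B(rho)).min()
    verdict = 'entangled' if min_eig < -1e-12 else 'PPT (separable for two qubits)'
    print(f"p={p:.3f}: min eigenvalue of rho^T_B = {min_eig:+.4f} -> {verdict}")
```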
Werner state
[ "Physics" ]
438
[ "Quantum states", "Quantum mechanics", "Quantum physics stubs" ]
23,607,237
https://en.wikipedia.org/wiki/Single-source%20data
Single-source data (also single source) is the measurement of TV and/or other mass media's advertising exposure and purchase behavior over time for the same individual or household. This measurement is gauged through the collection of data components supplied by one or more parties and overlapped through a single integrated system of data collection matched at the person or household level. How these data are stored is known as a single-source database. In TV advertising measurement, single-source data is used to explore how advertising exposure influences individuals' or households' loyalty and buying behavior across different windows of time, e.g., year, quarter, month, and week. Single-source data is a compilation of (1) home-scanned sales records and/or loyalty card purchases from retail or grocery stores and other commercial operations; (2) ad exposure (or not) from TV tune-in data from cable set-top boxes, people meters (pushbutton or passive), or household tuning meters; and (3) household demographic information. The significance of single-source data resides in its ability to provide a natural and controlled measurement of advertising effectiveness within the market, particularly through the comparison of exposed and non-exposed consumers. This data exhibits a longitudinal structure and offers a high level of disaggregation, both at the individual and temporal levels. Single-source data serves to illuminate variations in household exposure to a brand's advertisements and their corresponding purchasing patterns in the context of advertising fluctuations. Companies "Project Apollo" was designed to be a single-source, national market research service based on Nielsen's Home Scan technology for measuring consumer purchase behavior, combined with Arbitron's Portable People Meter system, measuring electronic media exposure. In January 2006, The Nielsen Company and Arbitron Inc. completed the deployment of a national pilot panel of more than 11,000 persons in 5,000 households. Seven advertisers (P&G, Unilever, Walmart, Pfizer, Pepsi, Kraft, and S.C. Johnson) signed on as members of the Project Apollo Steering Committee. The Committee worked with Arbitron and Nielsen to evaluate the utility of multimedia and purchase information from a common sample of consumers. Individuals within the sample were given incentives to voluntarily carry Arbitron's Portable People Meter, a small, pager-sized device that collects the person's exposure to electronic media sources: broadcast television networks, cable networks, and network radio, as well as audio-based commercials broadcast on these platforms. Consumer exposure to other media such as newspapers, magazines, and circulars was collected through additional survey instruments.
Off-line media consumption is measured using validated media consumption questionnaires. FMCG purchases are captured using a household scanner, and durable purchases using an online system asking respondents to check in and register what goods they bought, where they bought them, and for what price. The German panel was launched in 2008 and is currently experimenting with audio measurement using a mobile phone to capture advertising contacts on TV. More than 70 studies have been done in this panel to date by a large variety of advertisers. The Dutch panel was launched in July 2010. To circumvent the problem of unsustainable costs that brought down Project Apollo, cross-media analytics company All Media Count uses survey data from a longitudinal panel and behavioral modeling to generate individual panelists' daily media contact data for more than 10,000 media vehicles across eleven media types in China. The term “single source” is often credited to Colin McDonald, who, while at BMRB in England in 1966, used purchase and viewing diaries rather than electronic means to conduct the first quasi-single-source measurement. Although electronic means of data capture are preferred for accuracy and to minimize respondent fatigue, cost-effective methods for doing this do not yet exist for several media (magazines, newspapers, subway, transport, ambient). Also, many markets, such as China, do not have universal electronic measurement, even for TV. Services such as MRI, Roy Morgan, Simmons, TGI, and others around the world that collect such information by non-electronic or hybrid means are sometimes considered to be single source, where the data are obtained from a single panel of respondents. Despite the enthusiasm and the success of such data, single source has been plagued by high costs and small sample sizes. See also Audience measurement Media market References Broadcasting Television advertising Television technology
Single-source data
[ "Technology" ]
1,057
[ "Information and communications technology", "Television technology" ]
20,756,012
https://en.wikipedia.org/wiki/Rayleigh%20wave
Rayleigh waves are a type of surface acoustic wave that travel along the surface of solids. They can be produced in materials in many ways, such as by a localized impact or by piezo-electric transduction, and are frequently used in non-destructive testing for detecting defects. Rayleigh waves are part of the seismic waves that are produced on the Earth by earthquakes. When guided in layers they are referred to as Lamb waves, Rayleigh–Lamb waves, or generalized Rayleigh waves. Characteristics Rayleigh waves are a type of surface wave that travel near the surface of solids. Rayleigh waves include both longitudinal and transverse motions that decrease exponentially in amplitude as distance from the surface increases. There is a phase difference between these component motions. The existence of Rayleigh waves was predicted in 1885 by Lord Rayleigh, after whom they were named. In isotropic solids these waves cause the surface particles to move in ellipses in planes normal to the surface and parallel to the direction of propagation – the major axis of the ellipse is vertical. At the surface and at shallow depths this motion is retrograde, that is the in-plane motion of a particle is counterclockwise when the wave travels from left to right. At greater depths the particle motion becomes prograde. In addition, the motion amplitude decays and the eccentricity changes as the depth into the material increases. The depth of significant displacement in the solid is approximately equal to the acoustic wavelength. Rayleigh waves are distinct from other types of surface or guided acoustic waves such as Love waves or Lamb waves, both being types of guided waves supported by a layer, or longitudinal and shear waves, that travel in the bulk. Rayleigh waves have a speed slightly less than shear waves by a factor dependent on the elastic constants of the material. The typical speed of Rayleigh waves in metals is of the order of 2–5 km/s, and the typical Rayleigh speed in the ground is of the order of 50–300 m/s for shallow waves less than 100-m depth and 1.5–4 km/s at depths greater than 1 km. Since Rayleigh waves are confined near the surface, their in-plane amplitude when generated by a point source decays only as , where is the radial distance. Surface waves therefore decay more slowly with distance than do bulk waves, which spread out in three dimensions from a point source. This slow decay is one reason why they are of particular interest to seismologists. Rayleigh waves can circle the globe multiple times after a large earthquake and still be measurably large. There is a difference in the behavior (Rayleigh wave velocity, displacements, trajectories of the particle motion, stresses) of Rayleigh surface waves with positive and negative Poisson's ratio. In seismology, Rayleigh waves (called "ground roll") are the most important type of surface wave, and can be produced (apart from earthquakes), for example, by ocean waves, by explosions, by railway trains and ground vehicles, or by a sledgehammer impact. Speed and dispersion In isotropic, linear elastic materials described by Lamé parameters and , Rayleigh waves have a speed given by solutions to the equation where , , , and . Since this equation has no inherent scale, the boundary value problem giving rise to Rayleigh waves are dispersionless. An interesting special case is the Poisson solid, for which , since this gives a frequency-independent phase velocity equal to . 
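As a numerical sketch of the speed calculation, the following Python snippet solves the Rayleigh secular equation in its common textbook form, (2 − (c/cₛ)²)² = 4√(1 − (c/cₚ)²)·√(1 − (c/cₛ)²), for the Poisson solid mentioned above; the notation may differ from the formula referred to in the text, and the shear speed used is an arbitrary illustrative value.

```python
# Numerical sketch (illustrative values): solve the Rayleigh secular equation
#   (2 - (c/cs)^2)^2 = 4*sqrt(1 - (c/cp)^2) * sqrt(1 - (c/cs)^2)
# for an isotropic half-space, here a Poisson solid (lambda = mu, nu = 0.25).
import numpy as np
from scipy.optimize import brentq

def rayleigh_speed(cs, cp):
    kappa2 = (cs / cp) ** 2
    def secular(xi):  # xi = c/cs with 0 < xi < 1
        return (2.0 - xi**2) ** 2 - 4.0 * np.sqrt(1.0 - kappa2 * xi**2) * np.sqrt(1.0 - xi**2)
    return cs * brentq(secular, 1e-6, 1.0 - 1e-12)

cs = 3000.0                  # shear-wave speed [m/s], illustrative
cp = np.sqrt(3.0) * cs       # for a Poisson solid, cp = sqrt(3) * cs
c_r = rayleigh_speed(cs, cp)
print(f"c_R = {c_r:.1f} m/s, c_R/c_s = {c_r/cs:.4f}")   # ratio ~ 0.9194 for nu = 0.25
```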
For linear elastic materials with positive Poisson ratio (), the Rayleigh wave speed can be approximated as , where is the shear-wave velocity. The elastic constants often change with depth, due to the changing properties of the material. This means that the velocity of a Rayleigh wave in practice becomes dependent on the wavelength (and therefore frequency), a phenomenon referred to as dispersion. Waves affected by dispersion have a different wave train shape. Rayleigh waves on ideal, homogeneous and flat elastic solids show no dispersion, as stated above. However, if a solid or structure has a density or sound velocity that varies with depth, Rayleigh waves become dispersive. One example is Rayleigh waves on the Earth's surface: those waves with a higher frequency travel more slowly than those with a lower frequency. This occurs because a Rayleigh wave of lower frequency has a relatively long wavelength. The displacement of long wavelength waves penetrates more deeply into the Earth than short wavelength waves. Since the speed of waves in the Earth increases with increasing depth, the longer wavelength (low frequency) waves can travel faster than the shorter wavelength (high frequency) waves. Rayleigh waves thus often appear spread out on seismograms recorded at distant earthquake recording stations. It is also possible to observe Rayleigh wave dispersion in thin films or multi-layered structures. In non-destructive testing Rayleigh waves are widely used for materials characterization, to discover the mechanical and structural properties of the object being tested – like the presence of cracking, and the related shear modulus. This is in common with other types of surface waves. The Rayleigh waves used for this purpose are in the ultrasonic frequency range. They are used at different length scales because they are easily generated and detected on the free surface of solid objects. Since they are confined in the vicinity of the free surface within a depth (~ the wavelength) linked to the frequency of the wave, different frequencies can be used for characterization at different length scales. In electronic devices Rayleigh waves propagating at high ultrasonic frequencies (10–1000 MHz) are used widely in different electronic devices. In addition to Rayleigh waves, some other types of surface acoustic waves (SAW), e.g. Love waves, are also used for this purpose. Examples of electronic devices using Rayleigh waves are filters, resonators, oscillators, sensors of pressure, temperature, humidity, etc. Operation of SAW devices is based on the transformation of the initial electric signal into a surface wave that, after achieving the required changes to the spectrum of the initial electric signal as a result of its interaction with different types of surface inhomogeneity, is transformed back into a modified electric signal. The transformation of the initial electric energy into mechanical energy (in the form of SAW) and back is usually accomplished via the use of piezoelectric materials for both generation and reception of Rayleigh waves as well as for their propagation. In geophysics Generation from earthquakes Because Rayleigh waves are surface waves, the amplitude of such waves generated by an earthquake generally decreases exponentially with the depth of the hypocenter (focus). However, large earthquakes may generate Rayleigh waves that travel around the Earth several times before dissipating. 
In seismology longitudinal and shear waves are known as P waves and S waves, respectively, and are termed body waves. Rayleigh waves are generated by the interaction of P- and S- waves at the surface of the earth, and travel with a velocity that is lower than the P-, S-, and Love wave velocities. Rayleigh waves emanating outward from the epicenter of an earthquake travel along the surface of the earth at about 10 times the speed of sound in air (0.340 km/s), that is ~3 km/s. Due to their higher speed, the P and S waves generated by an earthquake arrive before the surface waves. However, the particle motion of surface waves is larger than that of body waves, so the surface waves tend to cause more damage. In the case of Rayleigh waves, the motion is of a rolling nature, similar to an ocean surface wave. The intensity of Rayleigh wave shaking at a particular location is dependent on several factors: The size of the earthquake. The distance to the earthquake. The depth of the earthquake. The geologic structure of the crust. The focal mechanism of the earthquake. The rupture directivity of the earthquake. Local geologic structure can serve to focus or defocus Rayleigh waves, leading to significant differences in shaking over short distances. In seismology Low frequency Rayleigh waves generated during earthquakes are used in seismology to characterise the Earth's interior. In intermediate ranges, Rayleigh waves are used in geophysics and geotechnical engineering for the characterisation of oil deposits. These applications are based on the geometric dispersion of Rayleigh waves and on the solution of an inverse problem on the basis of seismic data collected on the ground surface using active sources (falling weights, hammers or small explosions, for example) or by recording microtremors. Rayleigh ground waves are important also for environmental noise and vibration control since they make a major contribution to traffic-induced ground vibrations and the associated structure-borne noise in buildings. Possible animal reaction Low frequency (< 20 Hz) Rayleigh waves are inaudible, yet they can be detected by many mammals, birds, insects and spiders. Humans should be able to detect such Rayleigh waves through their Pacinian corpuscles, which are in the joints, although people do not seem to consciously respond to the signals. Some animals seem to use Rayleigh waves to communicate. In particular, some biologists theorize that elephants may use vocalizations to generate Rayleigh waves. Since Rayleigh waves decay slowly, they should be detectable over long distances. Note that these Rayleigh waves have a much higher frequency than Rayleigh waves generated by earthquakes. After the 2004 Indian Ocean earthquake, some people have speculated that Rayleigh waves served as a warning to animals to seek higher ground, allowing them to escape the more slowly traveling tsunami. At this time, evidence for this is mostly anecdotal. Other animal early warning systems may rely on an ability to sense infrasonic waves traveling through the air. See also Linear elasticity Longitudinal wave Love wave Phonon Surface acoustic wave References Further reading Viktorov, I.A. (2013) "Rayleigh and Lamb Waves: Physical Theory and Applications", Springer; Reprint of the original 1st 1967 edition by Plenum Press, New York. . Aki, K. and Richards, P. G. (2002). Quantitative Seismology (2nd ed.). University Science Books. . Fowler, C. M. R. (1990). The Solid Earth. Cambridge, UK: Cambridge University Press. . 
Lai, C.G., Wilmanski, K. (Eds.) (2005). Surface Waves in Geomechanics: Direct and Inverse Modelling for Soils and Rocks Series: CISM International Centre for Mechanical Sciences, Number 481, Springer, Wien, External links Real-time imaging of Rayleigh waves Acoustics Seismology Surface waves Waves
Rayleigh wave
[ "Physics" ]
2,192
[ "Physical phenomena", "Surface waves", "Classical mechanics", "Acoustics", "Waves", "Motion (physics)" ]
20,756,214
https://en.wikipedia.org/wiki/Zineb
Zineb is the chemical compound with the formula {Zn[S2CN(H)CH2CH2N(H)CS2]}n. Structurally, it is classified as a coordination polymer and a dithiocarbamate complex. This pale yellow solid is used as a fungicide. Production and applications It is produced by treating ethylene bis(dithiocarbamate) sodium salt, "nabam", with zinc sulfate. This procedure can be carried out by mixing nabam and zinc sulfate in a spray tank. Its uses include control of downy mildews, rusts, and redfire disease. In the US it was once registered as a "General Use Pesticide"; however, all registrations were voluntarily cancelled following an EPA special review. It continues to be used in many other countries. Structure Zineb is a polymeric complex of zinc with a dithiocarbamate. The polymer is composed of Zn(dithiocarbamate)2 subunits linked by an ethylene (-CH2CH2-) backbone. A reference compound is [Zn(S2CNEt2)2]2, which features a pair of tetrahedral Zn centers bridged by one sulfur center. See also Metam sodium - A related dithiocarbamate salt which is also used as a fungicide. Maneb - ethylene bis(dithiocarbamate) with manganese instead of zinc. Mancozeb - A common fungicide containing Zineb and Maneb. References External links Fungicides Endocrine disruptors Zinc compounds Dithiocarbamates Polymers
Zineb
[ "Chemistry", "Materials_science", "Biology" ]
342
[ "Fungicides", "Dithiocarbamates", "Endocrine disruptors", "Functional groups", "Polymer chemistry", "Polymers", "Biocides" ]
20,757,063
https://en.wikipedia.org/wiki/Hyperbolic%20law%20of%20cosines
In hyperbolic geometry, the "law of cosines" is a pair of theorems relating the sides and angles of triangles on a hyperbolic plane, analogous to the planar law of cosines from plane trigonometry, or the spherical law of cosines in spherical trigonometry. It can also be related to the relativistic velocity addition formula. History Describing relations of hyperbolic geometry, Franz Taurinus showed in 1826 that the spherical law of cosines can be related to spheres of imaginary radius, thus he arrived at the hyperbolic law of cosines in the form: which was also shown by Nikolai Lobachevsky (1830): Ferdinand Minding gave it in relation to surfaces of constant negative curvature: as did Delfino Codazzi in 1857: The relation to relativity using rapidity was shown by Arnold Sommerfeld in 1909 and Vladimir Varićak in 1910. Hyperbolic laws of cosines Take a hyperbolic plane whose Gaussian curvature is . Given a hyperbolic triangle with angles and side lengths , , and , the following two rules hold. The first is an analogue of Euclidean law of cosines, expressing the length of one side in terms of the other two and the angle between the latter: The second law has no Euclidean analogue, since it expresses the fact that lengths of sides of a hyperbolic triangle are determined by the interior angles: Houzel indicates that the hyperbolic law of cosines implies the angle of parallelism in the case of an ideal hyperbolic triangle: Hyperbolic law of Haversines In cases where is small, and being solved for, the numerical precision of the standard form of the hyperbolic law of cosines will drop due to rounding errors, for exactly the same reason it does in the Spherical law of cosines. The hyperbolic version of the law of haversines can prove useful in this case: Relativistic velocity addition via hyperbolic law of cosines Setting in (), and by using hyperbolic identities in terms of the hyperbolic tangent, the hyperbolic law of cosines can be written: In comparison, the velocity addition formulas of special relativity for the x and y-directions as well as under an arbitrary angle , where is the relative velocity between two inertial frames, the velocity of another object or frame, and the speed of light, is given by It turns out that this result corresponds to the hyperbolic law of cosines - by identifying with relativistic rapidities the equations in () assume the form: See also Hyperbolic law of sines Hyperbolic triangle trigonometry History of Lorentz transformations References Bibliography External links Non Euclidean Geometry, Math Wiki at TU Berlin Velocity Compositions and Rapidity, at MathPages Hyperbolic geometry Special relativity es:Teorema del coseno#Geometría hiperbólica fr:Théorème d'Al-Kashi#Géométrie hyperbolique pl:Twierdzenie cosinusów#Wzory cosinusów w geometriach nieeuklidesowych
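As a numerical sketch of the relativistic reading of the law (natural units with c = 1; the velocities below are arbitrary example values), the rapidity of the relative motion of two particles, computed from the standard relativistic relative-velocity formula, agrees with the hyperbolic law of cosines cosh c = cosh a · cosh b − sinh a · sinh b · cos γ, where a and b are the particles' rapidities and γ is the angle between their velocities in the lab frame.

```python
# Numerical check (natural units c = 1, illustrative velocities): the relative
# rapidity of two particles obeys the hyperbolic law of cosines with their
# rapidities as sides and the lab-frame angle between the velocities as the angle.
import numpy as np

def relative_speed(v1, v2):
    """Relativistic relative speed of two 3-velocities (|v| < 1)."""
    dot = np.dot(v1, v2)
    cross = np.cross(v1, v2)
    return np.sqrt(np.dot(v1 - v2, v1 - v2) - np.dot(cross, cross)) / (1.0 - dot)

theta = 1.1                                       # lab-frame angle between the velocities
v1 = 0.8 * np.array([1.0, 0.0, 0.0])
v2 = 0.6 * np.array([np.cos(theta), np.sin(theta), 0.0])

a, b = np.arctanh(0.8), np.arctanh(0.6)           # rapidities (the "sides" of the triangle)
c_direct = np.arctanh(relative_speed(v1, v2))     # rapidity of the relative motion

cosh_c = np.cosh(a) * np.cosh(b) - np.sinh(a) * np.sinh(b) * np.cos(theta)
c_law = np.arccosh(cosh_c)

print(f"from relative-velocity formula : c = {c_direct:.10f}")
print(f"from hyperbolic law of cosines : c = {c_law:.10f}")
```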
Hyperbolic law of cosines
[ "Physics" ]
644
[ "Special relativity", "Theory of relativity" ]