Dataset schema: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
53,268,205
https://en.wikipedia.org/wiki/Siva%20Brata%20Bhattacherjee
Siva Brata Bhattacherjee (1921–2003)—sometimes spelt Sibabrata Bhattacherjee—was a professor of physics at the University of Calcutta. He studied with the physicist, Satyendra Nath Bose, under whose supervision he completed his doctoral thesis in solid-state physics at the University College of Science (commonly known as Rajabazar Science College). In 1945, he came from the University of Dhaka to join the Khaira Laboratory of Physics at the Science College, and specialised in the field of X-ray crystallography. Dr Bhattacherjee also served as a faculty member of the Department of Technology at the erstwhile University of Manchester Institute of Science and Technology. He was married to Lilabati Bhattacharjee, Director (Mineral Physics) of the Geological Survey of India. Siva Brata is survived by their son Dr Subrata Bhattacherjee, and daughter Mrs Sonali Karmakar née Bhattacherjee. References 1921 births 2003 deaths X-ray crystallography 20th-century Indian physicists Indian crystallographers Academic staff of the University of Calcutta University of Dhaka people Bengali scientists Academics of the University of Manchester Institute of Science and Technology Indian expatriates in the United Kingdom Scientists from West Bengal
Siva Brata Bhattacherjee
[ "Chemistry", "Materials_science" ]
269
[ "X-ray crystallography", "Crystallography" ]
56,118,194
https://en.wikipedia.org/wiki/Sarracenia%20%C3%97%20swaniana
Sarracenia × swaniana is a nothospecies of carnivorous plant from the genus Sarracenia in the family Sarraceniaceae described by hort. and Nichols. It is a hybrid between Sarracenia purpurea subsp. venosa and Sarracenia minor var. minor. References swaniana Hybrid plants
Sarracenia × swaniana
[ "Biology" ]
70
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
56,125,271
https://en.wikipedia.org/wiki/Tucker%20Prize
The Tucker Prize for outstanding theses in the area of optimization is sponsored by the Mathematical Optimization Society (MOS). Up to three finalists are presented at each (triennial) International Symposium of the MOS. The winner will receive an award of $1000 and a certificate. The Albert W. Tucker Prize was approved by the Society in 1985, and was first awarded at the Thirteenth International Symposium on Mathematical Programming in 1988. Winners and finalists 1988: Andrew V. Goldberg for "Efficient graph algorithms for sequential and parallel computers". 1991: Michel Goemans for "Analysis of Linear Programming Relaxations for a Class of Connectivity Problems". Other Finalists: Leslie Hall and Mark Hartmann 1994: David P. Williamson for "On the Design of Approximation Algorithms for a Class of Graph Problems". Other Finalists: Dick Den Hertog and Jiming Liu 1997: David Karger for "Random Sampling in Graph Optimization Problems". Other Finalists: Jim Geelen and Luis Nunes Vicente 2000: Bertrand Guenin for his PhD thesis. Other Finalists: Kamal Jain and Fabian Chudak 2003: Tim Roughgarden for "Selfish Routing". Other Finalists: Pablo Parrilo and Jiming Peng 2006: Uday V. Shanbhag for "Decomposition and Sampling Methods for Stochastic Equilibrium Problems". Other Finalists: José Rafael Correa and Dion Gijswijt 2009: Mohit Singh for "Iterative Methods in Combinatorial Optimization". Other Finalists: Tobias Achterberg and Jiawang Nie 2012: Oliver Friedmann for "Exponential Lower Bounds for Solving Infinitary Payoff Games and Linear Programs". Other Finalists: Amitabh Basu and Guanghui Lan 2015: Daniel Dadush for "Integer Programming, Lattice Algorithms, and Deterministic Volume Computation". Other Finalists: Dmitriy Drusvyatskiy and Marika Karbstein 2018: Yin Tat Lee for "Faster Algorithms for Convex and Combinatorial Optimization". Other Finalists: Damek Davis and Adrien Taylor 2021: Jakub Tarnawski for "New Graph Algorithms via Polyhedral Techniques". Other Finalists: Georgina Hall and Yair Carmon See also List of computer science awards References External links Official web page (MOS) Computer science awards Triennial events Awards of the Mathematical Optimization Society Awards established in 1988 1988 establishments in the United States
Tucker Prize
[ "Technology" ]
482
[ "Science and technology awards", "Computer science", "Computer science awards" ]
56,125,679
https://en.wikipedia.org/wiki/Journal%20of%20Commutative%20Algebra
The Journal of Commutative Algebra is a peer-reviewed academic journal of mathematical research that specializes in commutative algebra and closely related fields. It has been published by the Rocky Mountain Mathematics Consortium (RMMC) since its establishment in 2009. It is currently published four times per year. Historically, the Journal of Commutative Algebra filled a niche for the Rocky Mountain Mathematics Consortium when the Canadian Applied Mathematics Quarterly, formerly published by the RMMC, was acquired by the Applied Mathematics Institute of the University of Alberta. Founding editors Jim Coykendall (currently at Clemson University) and Hal Schenck (currently at Auburn University) began the journal with the goal of creating a top-tier journal in commutative algebra. Abstracting and indexing The journal is abstracted and indexed in Current Contents/Physical, Chemical & Earth Sciences, Science Citation Index Expanded, Scopus, MathSciNet, and zbMATH. References External links Academic journals established in 2009 Algebra journals English-language journals Quarterly journals Delayed open access journals
Journal of Commutative Algebra
[ "Mathematics" ]
209
[ "Algebra journals", "Algebra" ]
56,127,040
https://en.wikipedia.org/wiki/C2-Symmetric%20ligands
In homogeneous catalysis, C2-symmetric ligands refer to ligands that lack mirror symmetry but have C2 symmetry (two-fold rotational symmetry). Such ligands are usually bidentate and are valuable in catalysis. The C2 symmetry of ligands limits the number of possible reaction pathways and thereby increases enantioselectivity, relative to asymmetrical analogues. C2-symmetric ligands are a subset of chiral ligands. Chiral ligands, including C2-symmetric ligands, combine with metals or other groups to form chiral catalysts. These catalysts engage in enantioselective chemical synthesis, in which chirality in the catalyst yields chirality in the reaction product. Examples An early C2-symmetric ligand, the diphosphine catalytic ligand DIPAMP, was developed in 1968 by William S. Knowles and coworkers of Monsanto Company, who shared the 2001 Nobel Prize in Chemistry. This ligand was used in the industrial production of L-DOPA. Some classes of C2-symmetric ligands are called privileged ligands, which are ligands that are broadly applicable to multiple catalytic processes, not only a single reaction type. Mechanistic concepts While the presence of any symmetry element within a ligand intended for asymmetric induction might appear counterintuitive, asymmetric induction only requires that the ligand be chiral (i.e. have no improper rotation axis). Asymmetry (i.e. absence of any symmetry elements) is not required. C2 symmetry improves the enantioselectivity of the complex by reducing the number of unique geometries in the transition states. Steric and kinetic factors then usually favor the formation of a single product. Chiral fence Chiral ligands work by asymmetric induction somewhere along the reaction coordinate. The image to the right illustrates how a chiral ligand may induce an enantioselective reaction. The ligand (in green) has C2 symmetry with its nitrogen, oxygen or phosphorus atoms hugging a central metal atom (in red). In this particular ligand the right side sticks out and its left side points away. The substrate in this reduction is acetophenone and the reagent (in blue) a hydride ion. In the absence of the metal and the ligand, the Re face approach of the hydride ion gives the (S)-enantiomer and the Si face approach the (R)-enantiomer in equal amounts (a racemic mixture, as expected). The presence of the ligand and metal changes all that. The carbonyl group will coordinate with the metal, and due to the steric bulk of the phenyl group it will only be able to do so with its Si face exposed to the hydride ion, giving, in the ideal situation, exclusive formation of the (R)-enantiomer. The Re face approach will simply hit the chiral fence. Note that when the ligand is replaced by its mirror image the other enantiomer will form, and that a racemic mixture of ligand will once again yield a racemic product. Also note that if the steric bulk of both carbonyl substituents is very similar the strategy will fail. Other C2-symmetric complexes Many C2-symmetric complexes are known. Some arise not from C2-symmetric ligands, but from the orientation or disposition of high symmetry ligands within the coordination sphere of the metal. Notably, EDTA and triethylenetetraamine form complexes that are C2-symmetric by virtue of the way the ligands wrap around the metal centers. Two isomers are possible for (indenyl)2MX2, Cs- and C2-symmetric. The C2-symmetric complexes are optically stable.
Asymmetric ligands Ligands containing atomic chirality centers, such as asymmetric carbon, which usually do not have C2 symmetry, remain important in catalysis. Examples include cinchona alkaloids and certain phosphoramidites. P-chiral monophosphines have also been investigated. See also Chiral anion catalysis Further reading References Coordination chemistry Stereochemistry Organometallic chemistry Ligands
C2-Symmetric ligands
[ "Physics", "Chemistry" ]
863
[ "Ligands", "Stereochemistry", "Coordination chemistry", "Space", "nan", "Spacetime", "Organometallic chemistry" ]
76,089,082
https://en.wikipedia.org/wiki/Khazan%20system
The Khazan is a traditional farming system of Goa, India. It comprises mainly rice-fish fields established on reclaimed coastal wetlands, salt marshes and mangrove forests. It involves construction of levees and sluice gates to prevent sea water from entering the fields. The Bandora (Bandiwade) copper-plate inscription of Anirjita-varman (likely a Konkan Maurya king), dated to the 5th–6th century on palaeographical grounds, refers to the khazan system as khajjana. It records the grant of tax-exempt land in Dwadasa-desha (modern Bardez), including one hala (a unit) of khajjana land. The recipient of the grant was expected to convert this wetland into a cultivated field by constructing a bund to prevent the salty sea water from entering the land. Historically, an association of villagers (gaunkaris) maintained the local khazan fields and their associated levees. This system continued under the Portuguese rule, with communidades maintaining the khazan system through an association of farmers (bhous or bhaus). References Bibliography Flood control in India Coastal construction Wetlands of India
Khazan system
[ "Engineering" ]
250
[ "Construction", "Coastal construction" ]
65,933,012
https://en.wikipedia.org/wiki/The%20Meaning%20of%20Relativity
The Meaning of Relativity: Four Lectures Delivered at Princeton University, May 1921 is a book published by Princeton University Press in 1922 that compiled the 1921 Stafford Little Lectures at Princeton University, given by Albert Einstein. The lectures were translated into English by Edwin Plimpton Adams. The lectures and the subsequent book were Einstein's last attempt to provide a comprehensive overview of his theory of relativity, and the book is his only one that provides an accessible overview of the physics and mathematics of general relativity. Einstein explained his goal in the preface of the book's German edition by stating he "wanted to summarize the principal thoughts and mathematical methods of relativity theory" and that his "principal aim was to let the fundamentals in the entire train of thought of the theory emerge clearly". Among other reviews, the lectures were the subject of the 2017 book The Formative Years of Relativity: The History and Meaning of Einstein's Princeton Lectures by Hanoch Gutfreund and Jürgen Renn. Background The book contains four of Einstein's Stafford Little Lectures that were given at Princeton University in 1921. The lectures follow a series of 1915 publications by Einstein developing the theory of general relativity. During this time, there were still many controversial issues surrounding the theories and he was still defending several of his views. The lectures and the subsequent book were Einstein's last attempt to provide a comprehensive overview of his theory of relativity. It is also his only book that provides an overview of the physics and mathematics of general relativity in a comprehensive manner that was accessible to non-specialists. Einstein explained his goal in the preface of the book's German edition by stating he "wanted to summarize the principal thoughts and mathematical methods of relativity theory" and that his "principal aim was to let the fundamentals in the entire train of thought of the theory emerge clearly". On December 27, 1949, The New York Times ran a story titled "New Einstein theory gives a master key to the universe" in reaction to the new appendix in the book's fifth edition in which Einstein expounded upon his latest unification efforts. Einstein had nothing to do with the article and subsequently refused to speak with any reporters on the matter; he reportedly used the message "[c]ome back and see me in twenty years" to brush off their inquiries. Content The book is made up of four lectures. The first is titled "Space and Time in Pre-Relativity Physics". The second lecture is titled "The Theory of Special Relativity" and discusses the special theory of relativity. The third and fourth lectures cover the general theory of relativity in two parts. Einstein added an appendix to update the book for its second edition, which was published in 1945. A second appendix was later added for the fifth edition as well, in 1955, which discusses the nonsymmetric field. The second appendix contains Einstein's attempts at a unified field theory. Reception The book has received many reviews since its initial publication. The first edition of the book was reviewed by Nature in 1923. Other early versions of the book were reviewed by George Yuri Rainich in 1946, as well as Abraham H. Taub, Philip Morrison, and I. M. Levitt in 1950. Reviews for the book's fifth edition include a short announcement in 1955 that called the book "a well-known classic".
A 1956 review of the fifth edition summarizes its publication history and contents and closes by stating "Einstein's little book then serves as an excellent tying-together of loose ends and as a broad survey of the subject." Among other references to the book, a 2005 column of The Physics Teacher included the work in a list of books "by and about Einstein that all physics teachers should have" and "should have immediate access to", while a 2019 review of another work opened by stating: "Every teacher of General Relativity depends heavily on two texts: one, the massive Gravitation by Misner, Thorne and Wheeler, the second the diminutive The Meaning of Relativity by Einstein." The Meaning of Relativity is the focus of a 2017 book, The Formative Years of Relativity by Hanoch Gutfreund and Jürgen Renn, which described The Meaning of Relativity as "Einstein's definitive exposition of his special and general theories of relativity". Publication history Original English editions Notable reprints German editions See also List of scientific publications by Albert Einstein Annus Mirabilis papers History of general relativity History of special relativity References Further reading External links The Meaning of Relativity 5th edition at Princeton University Press The Meaning of Relativity 5th edition at JSTOR The Meaning of Relativity at Springer Link An insightful tome recounts the heady early days of general relativity review by Andrew Robinson at sciencemag.org 1922 non-fiction books Physics books Theory of relativity Works by Albert Einstein Princeton University Press books
The Meaning of Relativity
[ "Physics" ]
966
[ "Theory of relativity" ]
65,937,478
https://en.wikipedia.org/wiki/Principles%20of%20Optics
Principles of Optics, colloquially known as Born and Wolf, is an optics textbook written by Max Born and Emil Wolf that was initially published in 1959 by Pergamon Press. After going through six editions with Pergamon Press, the book was transferred to Cambridge University Press, which issued an expanded seventh edition in 1999. A 60th anniversary edition was published in 2019 with a foreword by Sir Peter Knight. It is considered a classic science book and one of the most influential optics books of the twentieth century. Background In 1933, Springer published Max Born's book Optik, which dealt with all optical phenomena for which the methods of classical physics, and Maxwell's equations in particular, were applicable. In 1950, with encouragement from Sir Edward Appleton, the principal of Edinburgh University, Born decided to produce an updated version of Optik in English. He was partly motivated by the need to make money, as he had not been working long enough at Edinburgh to earn a decent pension, and at that time, was not entitled to any pension from his time working in Germany. The first problem that Born had to tackle was that after the US joined the war in 1941, Optik had been reproduced and sold widely in the US, along with many other books and periodicals. This had been done under the aegis of the Office of Alien Property, which was authorised to confiscate enemy property, so that neither the authors nor the publishers received any payment for these sales. When the war ended, the printing continued, still with no payment of royalties to authors or publishers. Born had been writing regularly to try to reclaim his book, pointing out that he was not an alien, as he had been a British citizen at the start of the war. He enlisted the support of various people and organisations, including the British Ambassador in Washington. In response, he got a letter saying that he would have to pay 2% of the retail price of any new book he wrote which was based on Optik. An article in the Manchester Guardian about how Jean Sibelius had been deprived of royalties in the same way prompted him to write a letter describing his own situation. Eventually, his rights to the book were returned and he received backdated royalties. He quickly realised that the important developments in optics which had occurred in the years since the original book had been written would need to be covered. He approached Dennis Gabor, the inventor of holography, to collaborate with him in writing the book. Emil Wolf, a research assistant at Cambridge University, was invited to write a chapter in the book. Gabor subsequently dropped out because of time constraints. Born and Wolf were then the main authors with specialist contributions from other authors. Wolf wrote several chapters and edited the other contributions; Born's input was a modified version of Optik and also collaboration with Wolf in the planning of the book, and many discussions concerning disputed points, presentation and so on. They hoped to complete the book by the end of 1951, but they were "much too optimistic". The book was actually first published in December 1959. Problems with Pergamon Press and Robert Maxwell Pergamon Press was a scientific publishing company which was set up in 1948 by Robert Maxwell and Paul Rosbaud. The latter had been a scientific advisor for Springer in Germany before and during the war and was one of the editors dealing with Optik. He was also an undercover agent for the Allies during the war.
He persuaded the authors to place the book with Pergamon Press, a decision which they would later regret. A detailed account is given by Gustav Born, Max's son. He explains how the libel laws in the UK prevented him from speaking about this until after Maxwell's death. Maxwell tried to get the authors to agree to a much lower rate of royalties for US sales than was agreed in their contract because the book was to be marketed by a different publisher, which would mean reduced profits for Pergamon. It was then actually marketed through the US branch of Pergamon but the authors still received reduced royalties. They also found that the sales figures in their statements were lower than the true figures. A clause in the contract meant that they had to go to arbitration rather than go to court to resolve this. Gustav acted for his father in the matter as Max Born was now living in Germany and was in his late seventies. The case was heard by Desmond Ackner (later Lord Ackner) in 1962. He found in favour of the authors on all counts. Nonetheless, they continued to be underpaid. Opening figures in one year's statement did not agree with closing figures from the previous year's statement. Some editions were reprinted several times but did not appear in the accounts at all. After Born's death, Wolf found that an international edition was being distributed in the Far East which he had not been told about. Pergamon sent him a small cheque when he raised the matter with them. When he threatened them with legal action, they sent another cheque for three times the amount. Wolf said that the book was re-printed seventeen times (not counting unauthorized editions and translations). Rosbaud left Pergamon Press in 1956 "because he found Maxwell to be completely dishonest". Other authors told Gustav Born that they had had the same problems with Maxwell. They included Sir Henry Dale, who shared the Nobel prize in medicine in 1936, and Edward Appleton. Contents 1st edition The book aimed to cover only those optical phenomena which can be derived from Maxwell's electromagnetic theory and is intended to give a complete picture of what was then known derived from Maxwell's equations. 2nd edition This was published in 1962. It contained corrections of errors and misprints. Lasers had been developed since the 1st edition was published but were not covered because laser operation is outside the scope of classical optics. Some references to research which used lasers were included. 3rd edition This was published in 1965. It again had corrections of errors and misprints, and references to recent publications were added. A new figure (8.54), donated by Leith and Upatnieks, showed images of the first 3-dimensional holographic image. This related to the section in Chapter VIII which described Gabor's wavefront re-construction technique (holography). 4th edition This was published in 1968 and included corrections, improvements to the text, and additional references. 5th edition This was published in 1974 and again included corrections, improvements to the text, and additional references. Significant changes were made to Sections 13.1-13.3, which deal with the optical properties of metals. It is not possible to describe the interaction of an optical electromagnetic wave with a metal using classical optical theory. Nonetheless, some of the main features can be described, at least in quantitative terms, provided the frequency dependence of conductivity and the role of free and bound electrons are taken into account.
6th edition This was published in 1980, and contained a small number of corrections. 7th edition In 1997, publication of the book was transferred to Cambridge University Press, who were willing to reset the text, thus providing an opportunity to make substantial changes to the book. The invention of the laser in 1960, a year after the first edition was published, had led to many new activities and entirely new fields in optics. A fully updated "Principles of Optics" would have required several new volumes, so Wolf decided to add only a few new topics, which would not require major revisions to the text. A new section was added to Chapter IV, presenting the principles of computerised axial tomography (or CAT), which has revolutionised diagnosis in medicine. There is also an account of the Radon transform, developed in 1917, which underlies the theory of CAT. An account of Kirchhoff-Rayleigh diffraction theory was added to Chapter VIII as it had become more popular. There is a debate as to whether it or the older Kirchhoff theory best describes diffraction effects. A recently discovered phenomenon is presented, in which spectral analysis of the light distribution of superimposed broad-band light fields provides important physical information from which the coherence properties of the light can be deduced. Chapter XIII was added, entitled "The theory of scattering of light by inhomogeneous media". The underlying theory was developed many years before in the analysis of quantum mechanical potential scattering, and had more recently been derived for optical scattering. Diffraction tomography is discussed. It is applied when the finite wavelength of the waves involved, e.g. optical and ultrasonic waves, cannot be ignored as is the case in X-ray tomography. Three new appendices were also added: Proof of the inequality for the spectral degree of coherence Evaluation of two integrals Proof of Jones' lemma Publication history To date, there have been seven editions of the book. The first six were published by Pergamon Press in 1959, 1962, 1965, 1968, 1974 and 1980. Cambridge University Press took over the book in 1997, and published an expanded seventh edition in 1999. A special Sixtieth Anniversary version was released in 2019, sixty years after the first edition. Original editions Reprints In 1999, Wolf commented that there had been seventeen authorised reprints and an unknown number of unauthorised reprints. The fifth edition was reprinted in 1975 and 1977. Between 1983 and 1993, the sixth edition of the book was reprinted seven times. Some of these reprints, including those in the years 1983 and 1986, included corrections. Cambridge University Press produced a reprint of the 6th Edition in 1997. A reprint of the 7th Edition was produced in 2002 with corrections. Fifteen reprints were made before the 60th Anniversary edition was printed in 2019. Translations Reception The first edition was very well received. A biography of Max Born said: "it presents a systematic treatment based on electromagnetic theory for all optical phenomena that can be described in terms of a continuous distribution of matter". Its timing was very opportune. The arrival of the laser shortly after its publication meant that the insights it provided into the description and analysis of light were directly applicable to the behaviour of laser light. It was extensively used by university teachers, and researchers used it as a source of rigorous information. Its excellent sales reflected its value to the world optics community.
Gabor said that the account of holography in the book was the first systematic description of the technique in an authoritative textbook. Gabor sent Wolf a copy of one of his papers with the inscription "Dear Emil, I consider you my chief prophet, Love, Dennis". The seventh edition was reviewed by Peter W. Milonni, Eugene Hecht, and William Maxwell Steen. Previous editions of the book were reviewed by Léon Rosenfeld, Walter Thompson Welford, John D. Strong, and Edgar Adrian, among others. Peter W. Milonni opened his review of the book by endorsing the book's dust jacket description, stating it is "one of the classic science books of the twentieth century, and probably the most influential book in optics published in the past 40 years." Eugene Hecht opened his review of the book by comparing the task to reviewing The Odyssey, in that it "cannot be approached without a certain awe and the foreknowledge that whatever you say is essentially irrelevant". Hecht then summarizes his own review, in order to help "anyone who hasn't the time to read the rest of this essay", by stating: "Principles of Optics is a great book, the seventh edition is a fine one, and if you work in the field you probably ought to own it." Hecht went on to state that the book "is a great, rigorous, ponderous, unwavering mathematical tract that deals with a wealth of topics in classical optics." He noted that the book can be hard to understand; he wrote: "This is a tour de force, never meant for easy reading." After analyzing some of the changes to the new edition, Hecht ended the review with the same summary as the introduction, emphasizing again that "if you work in the field you probably ought to own it". See also Bibliography of Max Born List of textbooks in electromagnetism References Further reading External links 1959 non-fiction books 1964 non-fiction books 1965 non-fiction books 1970 non-fiction books 1975 non-fiction books 1980 non-fiction books 1999 non-fiction books 2019 non-fiction books Max Born Optics Physics education in the United Kingdom Physics textbooks Pergamon Press books
Principles of Optics
[ "Physics", "Chemistry" ]
2,533
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
65,937,796
https://en.wikipedia.org/wiki/Illustrative%20model%20of%20greenhouse%20effect%20on%20climate%20change
There is a strong scientific consensus that the greenhouse effect due to carbon dioxide is a main driver of climate change. What follows is an illustrative model meant for pedagogical purposes, showing the main physical determinants of the effect. Under this understanding, global warming is determined by a simple energy budget: In the long run, Earth emits radiation in the same amount as it receives from the sun. However, the amount emitted depends both on Earth's temperature and on its albedo: The more reflective the Earth is in a certain wavelength, the less radiation it would both receive and emit in this wavelength; the warmer the Earth, the more radiation it emits. Thus changes in the albedo may have an effect on Earth's temperature, and the effect can be calculated by assuming a new steady state would be arrived at. In most of the electromagnetic spectrum, atmospheric carbon dioxide either blocks the radiation emitted from the ground almost completely, or is almost transparent, so that increasing the amount of carbon dioxide in the atmosphere, e.g. doubling the amount, will have negligible effects. However, in some narrow parts of the spectrum this is not so; doubling the amount of atmospheric carbon dioxide will make Earth's atmosphere relatively opaque in these wavelengths, which would result in Earth emitting light in these wavelengths from the upper layers of the atmosphere, rather than from lower layers or from the ground. Since the upper layers are colder, the amount emitted would be lower, leading to warming of Earth until the reduction in emission is compensated by the rise in temperature. Furthermore, such warming may cause a feedback mechanism due to other changes in Earth's albedo, e.g. due to ice melting. Structure of the atmosphere Most of the air—including ~88% of the CO2—is located in the lower part of the atmosphere known as the troposphere. The troposphere is thicker at the equator and thinner at the poles, but the global mean of its thickness is around 11 km. Inside the troposphere, the temperature drops approximately linearly at a rate of 6.5 Celsius degrees per km, from a global mean of 288 Kelvin (15 Celsius) on the ground to 220 K (-53 Celsius). At higher altitudes, up to 20 km, the temperature is approximately constant; this layer is called the tropopause. The troposphere and tropopause together contain ~99% of the atmospheric CO2. Inside the troposphere, the CO2 density drops with altitude approximately exponentially, with a typical length of 6.3 km; this means that the density at height y is approximately proportional to exp(-y/6.3 km), and it goes down to 37% at 6.3 km, and to 17% at 11 km. Higher up, through the tropopause, the density continues dropping exponentially, albeit faster, with a typical length of 4.2 km. Effect of carbon dioxide on the Earth's energy budget Earth constantly absorbs energy from sunlight and emits thermal radiation as infrared light. In the long run, Earth radiates the same amount of energy per second as it absorbs, because the amount of thermal radiation emitted depends upon temperature: If Earth absorbs more energy per second than it radiates, Earth heats up and the thermal radiation will increase, until balance is restored; if Earth absorbs less energy than it radiates, it cools down and the thermal radiation will decrease, again until balance is restored.
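To make this energy budget concrete, here is a minimal Python sketch of the balance just described. The solar constant, albedo, and Stefan–Boltzmann constant are standard values assumed for illustration, not figures taken from the article (which introduces the corresponding formulas further below):

```python
# Minimal energy-budget sketch: absorbed sunlight = emitted thermal radiation.
# S0 (solar constant) and a (albedo) are assumed standard values.
S0 = 1361.0       # W/m^2, solar constant (assumed)
a = 0.30          # Earth's albedo (assumed)
sigma = 5.670e-8  # W/m^2/K^4, Stefan-Boltzmann constant

j = S0 * (1 - a) / 4          # mean absorbed flux, averaged over the whole sphere
T_eff = (j / sigma) ** 0.25   # effective radiating temperature at equilibrium
print(f"mean flux j = {j:.0f} W/m^2")            # ~238 W/m^2
print(f"effective temperature = {T_eff:.0f} K")  # ~255 K
```

The gap between this ~255 K effective radiating temperature and the 288 K global mean ground temperature quoted above is the greenhouse effect the model sets out to explain.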
Atmospheric CO2 absorbs some of the energy radiated by the ground, but it itself emits thermal radiation: For example, in some wavelengths the atmosphere is totally opaque due to absorption by CO2; at these wavelengths, looking at Earth from outer space one would not see the ground, but the atmospheric CO2, and hence its thermal radiation—rather than the ground's thermal radiation. Had the atmosphere been at the same temperature as the ground, this would not change Earth's energy budget; but since the radiation is emitted from atmosphere layers that are cooler than the ground, less radiation is emitted. As the CO2 content of the atmosphere increases due to human activity, this process intensifies, and the total radiation emitted by Earth diminishes; therefore, Earth heats up until the balance is restored. Radiation absorption by carbon dioxide CO2 absorbs the ground's thermal radiation mainly at wavelengths between 13 and 17 microns. At this wavelength range, it is almost solely responsible for the attenuation of radiation from the ground. The amount of ground radiation that is transmitted through the atmosphere in each wavelength is related to the optical depth of the atmosphere at this wavelength, OD, by: transmitted fraction = exp(-OD). The optical depth itself is given by the Beer–Lambert law: OD = σ ∫ n(y) dy (integrated from the ground up), where σ is the absorption cross section of a single CO2 molecule, and n(y) is the number density of these molecules at altitude y. Due to the strong dependence of the cross section on wavelength, the OD changes from around 0.1 at 13 microns to ~10 at 14 microns and even higher, beyond 100, at 15 microns, then dropping off to ~10 at 16 microns, ~1 at 17 microns and below 0.1 at 18 microns. Note that the OD depends on the total number of molecules per unit area in the atmosphere, and therefore rises linearly with its CO2 content. Looking from outer space into the atmosphere at a specific wavelength, one would see to different degrees different layers of the atmosphere, but on average one would see down to an altitude such that the part of the atmosphere from this altitude and up has an optical depth of ~1. Earth will therefore radiate at this wavelength approximately according to the temperature of that altitude. The effect of increasing CO2 atmospheric content is that the optical depth increases, so that the altitude seen from outer space increases; as long as it increases within the troposphere, the radiation temperature drops and the radiation decreases. When it reaches the tropopause, any further increase in CO2 levels will have no noticeable effect, since there the temperature no longer depends on the altitude. At wavelengths of 14 to 16 microns, even the tropopause, having ~0.12 of the amount of CO2 of the whole atmosphere, has OD>1. Therefore, at these wavelengths Earth radiates mainly at the tropopause temperature, and addition of CO2 does not change this. At wavelengths smaller than 13 microns or larger than 18 microns, the atmospheric absorption is negligible, and addition of CO2 hardly changes this. Therefore, the effect of CO2 increase on radiation is relevant at wavelengths 13–14 and 16–18 microns, and addition of CO2 mainly contributes to the opacity of the troposphere, changing the altitude that is effectively seen from outer space within the troposphere. Calculating the effect on radiation One layer model We now turn to calculating the effect of CO2 on radiation, using a one-layer model, i.e.
we treat the whole troposphere as a single layer: Looking at a particular wavelength λ up to λ+dλ, the whole atmosphere has an optical depth OD, while the tropopause has an optical depth 0.12*OD; the troposphere has an optical depth of 0.88*OD. Thus, exp(-0.12*OD) of the radiation from below the tropopause is transmitted out, but this includes only exp(-OD) of the radiation that originates from the ground. Thus, the weight of the troposphere in determining the radiation that is emitted to outer space is: f(OD) = exp(-0.12*OD) - exp(-OD). A relative increase in the CO2 concentration means an equal relative increase in the total CO2 content of the atmosphere, dN/N, where N is the number of CO2 molecules. Adding a minute number of such molecules dN will increase the troposphere's weight in determining the radiation for the relevant wavelengths, approximately by the relative amount dN/N, and thus by: (dN/N)*(exp(-0.12*OD) - exp(-OD)). Since CO2 hardly influences sunlight absorption by Earth, the radiative forcing due to an increase in CO2 content is equal to the difference in the flux radiated by Earth due to such an increase. To calculate this, one must multiply the above by the difference in radiation due to the difference in temperature. According to Planck's law, this difference is B(λ,T1) - B(λ,T0), where B(λ,T) = (2πhc^2/λ^5) / (exp(hc/λkBT) - 1). The ground is at temperature T0 = 288 K, and for the troposphere we will take a typical temperature, the one at the average height of molecules, 6.3 km, where the temperature is T1 ≈ 247 K. Therefore dI, the change in Earth's emitted radiation, is, in a rough approximation: dI ≈ (dN/N)*(B(λ,T1) - B(λ,T0))*(exp(-0.12*OD) - exp(-OD))*dλ. Since dN/N = d(ln N), this can be written as: dI/d(ln N) ≈ (B(λ,T1) - B(λ,T0))*(exp(-0.12*OD) - exp(-OD))*dλ. The function f(x) = exp(-0.12x) - exp(-x) is maximal for x = 2.41, with a maximal value of 0.66, and it drops to half this value at x = 0.5 and x = 9.2. Thus we look at wavelengths for which the OD is between 0.5 and 9.2: This gives a wavelength band at the width of approximately 1 micron around 17 microns, and less than 1 micron around 13.5 microns. We therefore take: λ = 13.5 microns and again 17 microns (summing contributions from both) dλ = 0.5 micron for the 13.5 microns band, and 1 micron for the 17 microns band. This gives -2.3 W/m2 for the 13.5 microns band, and -2.7 W/m2 for the 17 microns band, for a total of -5 W/m2. A 2-fold increase in CO2 content changes the wavelength ranges only slightly, and so this derivative is approximately constant along such an increase. Thus, a 2-fold increase in CO2 content will reduce the radiation emitted by Earth by approximately: ln(2)*5 W/m2 = 3.4 W/m2. More generally, an increase by a factor c/c0 gives: ln(c/c0)*5 W/m2. These results are close to the approximation of a more elaborate yet simplified model giving ln(c/c0)*5.35 W/m2, and the radiative forcing due to CO2 doubling with much more complicated models giving 3.1 W/m2. Emission Layer Displacement Model We may make a more elaborate calculation by treating the atmosphere as compounded of many thin layers. For each such layer, at height y and thickness dy, the weight of this layer in determining the radiation temperature seen from outer space is a generalization of the expression arrived at earlier for the troposphere. It is: w(y)dy = exp(-OD(y))*σ*n(y)*dy, where OD(y) is the optical depth of the part of the atmosphere from y upwards. The total effect of CO2 on the radiation at wavelengths λ to λ+dλ is therefore: I*dλ = [B(λ,T0)*exp(-OD(0)) + ∫ B(λ,T(y))*exp(-OD(y))*σ*n(y) dy]*dλ, where B is the expression for radiation according to Planck's law presented above, and the upper limit of the integral can actually be taken as the top of the tropopause.
Thus the effect of a relative change in CO2 concentration, dN/N = dn0/n0 (where n0 is the number density near the ground), would be (noting that dN/N = d(ln N) = d(ln n0)): dI/d(ln N) = -∫ (∂exp(-OD(y))/∂(ln N)) * (dB/dy) dy, where we have used integration by parts. Because B does not depend on N, and because ∂exp(-OD)/∂(ln N) = -OD*exp(-OD) (as OD is proportional to N), we have: dI/d(ln N) = ∫ OD(y)*exp(-OD(y)) * (dB/dT)*(dT/dy) dy. Now, dT/dy is constant in the troposphere and zero in the tropopause. We denote the height of the border between them as U. So: dI/d(ln N) = (dT/dy) ∫ from 0 to U of OD(y)*exp(-OD(y))*(dB/dT) dy. The optical depth is proportional to the integral of the number density over y, as is the pressure. Therefore, OD(y) is proportional to the pressure p(y), which within the troposphere (height 0 to U) falls exponentially with decay constant 1/Hp (Hp ~ 5.6 km for CO2), thus: OD(y) = OD(0)*exp(-y/Hp). Since ln OD(y) = ln N - y/Hp + constant, viewed as a function of both y and N, we have: ∂OD/∂(ln N) = -Hp*∂OD/∂y, and therefore differentiating with respect to ln N is the same as differentiating with respect to y, times a factor of -Hp. We arrive at: dI/d(ln N) = Hp*(dT/dy) ∫ from 0 to U of (dB/dT) * (d/dy)exp(-OD(y)) dy. Since the temperature only changes by ~25% within the troposphere, one may take a (rough) linear approximation of B with T at the relevant wavelengths, and get: dI/d(ln N) ≈ Hp*(dT/dy)*(dB/dT)*(exp(-0.12*OD) - exp(-OD)). Due to the linear approximation of B we have: Hp*(dT/dy)*(dB/dT) ≈ B(λ,T1) - B(λ,T0), with T1 taken at Hp, so that totally: dI/d(ln N) ≈ (B(λ,T1) - B(λ,T0))*(exp(-0.12*OD) - exp(-OD)), giving the same result as in the one-layer model presented above, as well as the logarithmic dependence on N, except that now we see T1 is taken at 5.6 km (the pressure drop height scale), rather than 6.3 km (the density drop height scale). Comparison to the total radiation emitted by Earth The total average energy per unit time radiated by Earth is equal to the average energy flux j times the surface area 4πR^2, where R is Earth's radius. On the other hand, the average energy flux absorbed from sunlight is the solar constant S0 times Earth's cross section of πR^2, times the fraction absorbed by Earth, which is one minus Earth's albedo a. The average energy per unit time radiated out is equal to the average energy per unit time absorbed from sunlight, so: 4πR^2 * j = πR^2 * S0 * (1-a), giving: j = S0*(1-a)/4, which is about 238 W/m2 for S0 ≈ 1361 W/m2 and a ≈ 0.3. Based on the value of 3.1 W/m^2 obtained above in the section on the one layer model, the radiative forcing due to CO2 relative to the average radiated flux is therefore: 3.1/238 ≈ 1.3%. An exact calculation using the MODTRAN model, over all wavelengths and including methane and ozone greenhouse gases, gives, for tropical latitudes, an outgoing flux of 298.645 W/m2 for current CO2 levels and 295.286 W/m2 after CO2 doubling, i.e. a radiative forcing of 1.1%, under clear sky conditions, as well as a ground temperature of 299.7 K (26.6 Celsius). The radiative forcing is largely similar in different latitudes and under different weather conditions. Effect on global warming On average, the total power of the thermal radiation emitted by Earth is equal to the power absorbed from sunlight. As CO2 levels rise, the emitted radiation can maintain this equilibrium only if the temperature increases, so that the total emitted radiation is unchanged (averaged over enough time, on the order of a few years, so that diurnal and annual periods are averaged upon). According to the Stefan–Boltzmann law, the total emitted power by Earth per unit area is: j = ε*σB*T^4, where σB is the Stefan–Boltzmann constant and ε is the emissivity in the relevant wavelengths. T is some average temperature representing the effective radiation temperature. CO2 content changes the effective T, but instead one may treat T as a typical ground or lower-atmosphere temperature (same as T0 or close to it) and consider CO2 content as changing the emissivity ε. We thus re-interpret ε in the above equation as an effective emissivity that includes the CO2 effect, and take T = T0.
A change in CO2 content thus causes a change dε in this effective emissivity, so that -dε/ε is the radiative forcing divided by the total energy flux radiated by Earth. The relative change in the total radiated energy flux due to changes in emissivity and temperature is: dj/j = dε/ε + 4*dT/T. Thus, if the total emitted power is to remain unchanged, a radiative forcing relative to the total energy flux radiated by Earth causes a 1/4-fold relative change in temperature: dT/T = -(1/4)*dε/ε. Ice–albedo feedback Since warming of Earth means less ice on the ground on average, it would cause a lower albedo and more sunlight absorbed, hence further increasing Earth's temperature. As a rough estimate, we note that average temperatures on most of Earth are between -20 and +30 Celsius degrees; a good guess is that 2% of its surface is between -1 and 0 °C, and thus an equivalent area of its surface will be changed from ice-covered (or snow-covered) to either ocean or forest per 1 °C of warming. For comparison, in the northern hemisphere, the arctic sea ice shrank between 1979 and 2015 by 1.43*10^12 m2 at maxima and 2.52*10^12 m2 at minima, for an average of almost 2*10^12 m2, which is 0.4% of Earth's total surface of 510*10^12 m2. At this time the global temperature rose by ~0.6 °C. The areas of inland glaciers combined (not including the antarctic ice sheet), the antarctic sea ice, and the arctic sea ice are all comparable, so one may expect the change in the arctic sea ice is roughly a third of the total change, giving 1.2% of the Earth surface turned from ice to ocean or bare ground per 0.6 °C, or equivalently 2% per 1 °C. The antarctic ice cap size oscillates, and it is hard to predict its future course, with factors such as relative thermal insulation and constraints due to the Antarctic Circumpolar Current probably playing a part. As the difference in albedo between ice and e.g. ocean is around 2/3, this means that due to a 1 °C rise, the albedo will drop by 2%*2/3 = 4/3%. However, this will mainly happen in northern and southern latitudes, around 60 degrees off the equator, and so the effective area is actually 2%*cos(60°) = 1%, and the global albedo drop would be 2/3%. Since a change in radiation of 1.3% causes a direct change of 1 degree Celsius (without feedback), as calculated above, and this causes another change of 2/3% in radiation due to positive feedback, which is half the original change, the total factor caused by this feedback mechanism would be: 1 + 1/2 + 1/4 + ... = 2. Thus, this feedback would double the effect of the change in radiation, causing a change of ~2 K in the global temperature, which is indeed the commonly accepted short-term value. For the long-term value, including further feedback mechanisms, ~3 K is considered more probable. References Climatology Greenhouse gases Carbon dioxide
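The one-layer numbers above are easy to check numerically. The following Python sketch is a verification of my own, not code from the article: the band-averaged troposphere weight of 0.5 is an assumed representative value for f(OD) (which peaks at 0.66), and the physical constants are standard. It lands close to the article's -2.3 and -2.7 W/m2 band figures and reproduces the feedback-doubled ~2 K warming.

```python
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # standard physical constants

def B(lam, T):
    # Planck spectral flux, W/m^2 per metre of wavelength (hemispheric)
    return 2 * math.pi * h * c**2 / lam**5 / math.expm1(h * c / (lam * kB * T))

T0, T1 = 288.0, 247.0  # ground and mean-emission-height temperatures (article values)
weight = 0.5           # assumed band-average of f(OD) = exp(-0.12*OD) - exp(-OD)

total = 0.0
for lam_um, dlam_um in ((13.5, 0.5), (17.0, 1.0)):
    lam, dlam = lam_um * 1e-6, dlam_um * 1e-6
    dI = weight * (B(lam, T1) - B(lam, T0)) * dlam  # band contribution to dI/d(ln N)
    total += dI
    print(f"{lam_um} um band: {dI:.1f} W/m^2")      # roughly -2.5 and -2.9

forcing = -math.log(2) * total              # forcing for CO2 doubling; article: ~3.4
direct_warming = 1.0                        # K per ~1.3% forcing, per the article
with_feedback = direct_warming / (1 - 0.5)  # geometric series 1 + 1/2 + 1/4 + ...
print(f"doubling forcing ~ {forcing:.1f} W/m^2, warming with feedback ~ {with_feedback:.0f} K")
```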
Illustrative model of greenhouse effect on climate change
[ "Chemistry", "Environmental_science" ]
3,769
[ "Greenhouse gases", "Carbon dioxide", "Environmental chemistry" ]
51,724,747
https://en.wikipedia.org/wiki/Single%20pushout%20graph%20rewriting
In computer science, single pushout graph rewriting or SPO graph rewriting refers to a mathematical framework for graph rewriting, and is used in contrast to the double-pushout approach of graph rewriting. References Further reading Graph rewriting
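One operational difference between the two approaches concerns node deletion: in the single-pushout approach, deleting a node whose incident edges are not all covered by the match is permitted, and the dangling edges are removed along with it, whereas the double-pushout approach's gluing condition rejects such a step. The following Python sketch is a toy illustration of that contrast, my own construction rather than code from any graph-rewriting library:

```python
# Toy sketch: SPO-style node deletion removes dangling edges implicitly,
# while DPO's gluing condition rejects the same step.
def spo_delete_node(nodes, edges, v):
    """Delete v; any dangling edges are removed too (SPO semantics)."""
    return nodes - {v}, {e for e in edges if v not in e}

def dpo_delete_node(nodes, edges, v):
    """Delete v only if no edges touch it (DPO gluing condition)."""
    if any(v in e for e in edges):
        raise ValueError("gluing condition violated: dangling edges at " + v)
    return nodes - {v}, edges

nodes = {"a", "b", "c"}
edges = {("a", "b"), ("b", "c")}
print(spo_delete_node(nodes, edges, "b"))  # ({'a', 'c'}, set()) -- dangling edges dropped
try:
    dpo_delete_node(nodes, edges, "b")
except ValueError as err:
    print("DPO:", err)                     # the DPO step is simply not applicable
```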
Single pushout graph rewriting
[ "Mathematics", "Technology" ]
52
[ "Graph theory stubs", "Graph theory", "Computer science stubs", "Computer science", "Mathematical relations", "Computing stubs", "Graph rewriting" ]
51,725,805
https://en.wikipedia.org/wiki/BOD%20bottle
A BOD bottle, or incubation bottle, is the main apparatus used for the Biological Oxygen Demand (BOD) test. During the five-day BOD or BOD5 test process, the BOD bottle is used for incubating diluted samples at a temperature of 20 °C (68 °F). Structure The bottle is normally designed with a special shoulder radius to push out all air from the inside of the bottle when it is filled with a sample solution. According to Method 5210 in Standard Methods for the Examination of Water and Wastewater, the BOD bottle should include a ground-glass stopper and a flared mouth, which form a water seal preventing outside air from entering the bottle. Method 5210 also recommends using a paper, foil, or plastic cup to cap the mouth of the bottle, reducing evaporation during the incubation. Generally, the side of the BOD bottle is permanently screened with a white writing area and printed with a specific number, both to aid sample identification. Stopper There are two kinds of stopper: the Glass Pennyhead and the Glass Robotic stopper. Sizes There are many BOD bottle sizes. The dose of the solution mixture (nutrient, mineral and buffer solution) is related to the size of the bottle. For the Standard Methods 5210, a BOD bottle "having 60 mL or greater capacity (300-mL)" is mentioned as one of the apparatus for the BOD test. A 60 mL BOD bottle is available and listed as "often convenient" by EPA (Environmental Protection Agency) Method 405.1. However, EPA Method 405.1 was written in 1974 and is no longer an EPA-approved method per 40 CFR Part 136. Materials Glass is the material specified in the Standard Methods 5210 of the BOD5 test. The glass bottles are manufactured from Type 1 borosilicate glass. A black BOD bottle A black BOD bottle is coated with PVC plastic that blocks visible light. Black bottles are used in marine photosynthesis projects which need to compare oxygen levels in light and dark conditions. Disposable BOD Bottle or Carbonaceous Biochemical Oxygen Demand (CBOD) Bottle It is a carbon-coated polyethylene terephthalate (PET) bottle that is solely manufactured by Environmental Express in Charleston, SC. The bottle is lightweight, unbreakable, and recyclable. Since the bottle is designed for single use, it eliminates any potential for cross-contamination between samples. The bottle does not require any resources or energy for cleaning and rinsing, as it is disposable. The CBOD bottle is also claimed to be cheaper and to cause less contamination in the sample solution than the conventional BOD bottle. References Measuring instruments Liquid containers
BOD bottle
[ "Technology", "Engineering" ]
577
[ "Measuring instruments" ]
51,726,771
https://en.wikipedia.org/wiki/Paper-ruling%20machine
A paper-ruling machine is a device for ruling paper. In 1770, John Tetlow was awarded a patent for a "machine for ruling paper for music and other purposes." William Orville Hickok invented an "improved ruling machine" in the mid-19th century. As the device is designed for drawing lines on paper, it can produce tables and ruled paper. The functionality of the machine is based on pens manufactured especially for the device. The pens have multiple tips side by side, and water-based ink is led into them along threads. It is possible to program stop-lines on the equipment by mounting pens on shafts equipped with cams that lower and raise them at predetermined points. The spread of computerized accounting between the 1960s and 1980s significantly decreased the demand for accounting tables and ruled paper. Nowadays, their demand is primarily filled by using offset printing. References External links Hickok paper-ruling machines, automatic paper feeders, ruling pens and inks, Catalog No. 89 Machines History of printing
Paper-ruling machine
[ "Physics", "Technology", "Engineering" ]
210
[ "Physical systems", "Machines", "Mechanical engineering" ]
51,728,633
https://en.wikipedia.org/wiki/Nitrolic%20acid
Nitrolic acids are organic compounds with the functional group RC(NO2)=NOH. They are prepared by the reaction of nitroalkanes with base and nitrite sources: RCH2NO2 + HNO2 → RC(NO2)=NOH + H2O The conversion was first demonstrated by Victor Meyer using nitroethane. The reaction proceeds via the intermediacy of the nitronate anion. Occurrence Most nitrolic acids are laboratory curiosities. One exception is the compound HO2C(CH2)4C(NO2)=NOH, which is produced by the oxidation of cyclohexanone with nitric acid. This species decomposes to adipic acid and nitrous oxide: HO2C(CH2)4C(NO2)=NOH → HO2C(CH2)4CO2H + N2O This conversion is thought to be the largest anthropogenic route to N2O, which, on a molecule-to-molecule basis, has 298 times the atmospheric heat-trapping ability of carbon dioxide. Adipic acid is a precursor to many nylon polymers. In the end, nitrous oxide is produced in a roughly one-to-one mole ratio with the adipic acid. References Functional groups Organonitrogen compounds
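To put that one-to-one mole ratio in perspective, here is a small Python sketch of the implied CO2-equivalent burden. The molar masses are standard values, and the per-kilogram framing (and the assumption of no abatement of the N2O) is my own illustration, not a figure from the article:

```python
# Back-of-the-envelope CO2-equivalent of the N2O co-product, using the article's
# one-to-one mole ratio and 298x heat-trapping factor.
M_ADIPIC = 146.14   # g/mol, HO2C(CH2)4CO2H (standard molar mass)
M_N2O = 44.01       # g/mol (standard molar mass)
GWP_N2O = 298       # CO2-equivalent factor, per the article

kg_adipic = 1.0
mol = kg_adipic * 1000 / M_ADIPIC   # moles of adipic acid = moles of N2O
kg_n2o = mol * M_N2O / 1000
print(f"{kg_n2o:.2f} kg N2O per kg adipic acid")                   # ~0.30 kg
print(f"~{kg_n2o * GWP_N2O:.0f} kg CO2-equivalent (if unabated)")  # ~90 kg
```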
Nitrolic acid
[ "Chemistry" ]
277
[ "Organic compounds", "Organonitrogen compounds", "Functional groups" ]
74,567,901
https://en.wikipedia.org/wiki/Doron%20Levy
Doron Levy is a mathematician, scientist, magician, and academic. He is a professor and chair of the Department of Mathematics at the University of Maryland, College Park. He is also the Director of the Brin Mathematics Research Center. Levy's research encompasses the field of numerical analysis, applied nonlinear partial differential equations, and biology and medical applications, particularly focusing on analyzing cancer dynamics, immunology, and cell motility. He has written more than 100 peer-reviewed articles. He is the recipient of the National Science Foundation CAREER Award. Levy is a Fellow of the John Simon Guggenheim Memorial Foundation. He is an Editorial Board Member of the Bulletin of Mathematical Biology, Discrete and Continuous Dynamical Systems – Series B, Le Matematiche, Acta Applicandae Mathematicae, Frontiers in Systems Biology, Cancer Research, Applied Mathematical Modelling, PLoS One, and Differential Equations and Dynamical Systems. He is the Editor-in-Chief at ImmunoInformatics. Education Levy earned his Baccalaureate degree in Mathematics and Physics in 1991 and completed a master's degree in Applied Mathematics in 1994 from Tel Aviv University. His Master's thesis was titled "From Semi-Discrete to Fully-Discrete: The Stability of Runge-Kutta Schemes by the Energy Method". In 1997, he received a Ph.D. in Applied Mathematics under the guidance of Eitan Tadmor, with a thesis on "Topics in Approximate Methods for Non-Linear Partial Differential Equations." Afterward, he held several post-doctoral fellowships at the Laboratoire d'Analyse Numerique (University of Paris 6), École normale supérieure (Paris), University of California, Berkeley, and the Lawrence Berkeley National Laboratory. Career Following his post-doctoral fellowship at Berkeley in 2000, Levy joined the Department of Mathematics at Stanford University as an assistant professor. In 2007, he was appointed as associate professor of mathematics and a member of the Center for Scientific Computation and Mathematical Modeling at the University of Maryland, College Park. In 2014, he became a Pauli Fellow at the Wolfgang Pauli Institute of the University of Vienna in Austria. Since 2011, he has been a professor at the Department of Mathematics & Center for Scientific Computation and Mathematical Modeling of the University of Maryland, College Park. Levy served as a Member of the Board of Governors of the Institute for Mathematics and Its Applications (IMA) at the University of Minnesota in 2018 for one year, and a Member of the Board of Directors of the Society for Mathematical Biology from 2018 to 2022. Since 2022, he has been serving as the Founding Director of the Brin Mathematics Research Center at the University of Maryland, College Park. As of 2020, Levy has been chair of the Department of Mathematics and the Director of the Center for Scientific Computation and Mathematical Modeling of the University of Maryland, College Park. Research Levy's research is focused on mathematical equations and biomedical applications of mathematics with a particular interest in cancer dynamics, drug resistance, drug delivery, immunology, imaging, and cell motility. Numerical analysis During his early research career, Levy worked on developing and analyzing high-order numerical methods for approximating solutions to hyperbolic conservation laws and related equations.
He developed novel methods for approximating solutions to nonlinear partial differential equations including the Euler equations, Navier-Stokes equations, Hamilton-Jacobi equations, and nonlinear dispersive equations. Some of the approximation methods he developed used Weighted Essentially Non-Oscillatory (WENO) schemes. He developed a third-order central scheme for approximating solutions of multidimensional hyperbolic conservation laws and 2D conservation laws using compact central WENO reconstructions. In a series of works with Steve Bryson, he proposed new high-order central schemes for approximating solutions of multidimensional Hamilton-Jacobi equations. Cancer dynamics and the immune system Levy contributed to cancer dynamics by formulating a set of computational and mathematical tools designed for specific types of cancer. He discussed the need for mathematical models to understand the complexity of breast and ovarian cancers and proposed a model to explain the failure of transvaginal ultrasound-based screening in detecting low-volume high-grade serous ovarian cancer. In a collaborative study, he investigated the effects of regulatory T cell switching on the immune response and identified a biologically testable range for the switching parameter. Furthermore, he presented mathematical models for studying cancer cell growth dynamics in response to antimitotic drug treatment in vitro, for understanding the immunogenic effects of LSD1 inhibition on tumor growth and T cell dynamics, and for the interaction between immune response and cancer cells in chronic myelogenous leukemia, analyzing the stability of steady states. Levy analyzed cancer's immune response mechanisms, particularly in chronic myeloid leukemia, providing insights into the role of the immune response and drug therapy in controlling the disease. He also demonstrated that the autologous immune system may play a role in the BCR-ABL transcript variations observed in chronic phase chronic myelogenous leukemia patients on imatinib therapy. Considering the problem of drug resistance in cancer, he suggested a simple compartmental system of ordinary differential equations to model it and stated that drug resistance depends on the turnover rate of cancer cells. Additionally, he extended a model of drug resistance in solid tumors to explore the dynamics of resistance levels and the emergence of heterogeneous tumors in response to chemotherapy. Conducting a study on cervical cancer, he investigated the efficacy of combination immunotherapy using engineered T cells and IL-2. Moreover, he assessed the influence of cell density, intratumoral heterogeneity, and mutations on multidrug resistance, considering the continuum model as the most suitable approach for modeling resistance heterogeneity in metastasis. In collaboration with Heyrim Cho, he also investigated the impact of competition between cancer cells and healthy cells on optimal drug delivery and indicated that in scenarios with moderate competition, combination therapies are more effective, whereas in highly competitive situations, targeted drugs prove to be more effective. Personal life Levy is a magician member of the Academy of Magical Arts in Hollywood (Magic Castle) and a member of the Order of Merlin of the International Brotherhood of Magicians (I.B.M.). Throughout his academic career, he has highlighted the connection between performing arts and the academic world.
Awards and honors 1998 – Haim Nessyahu Prize, Israel Mathematical Union 2002 – Career Award, National Science Foundation 2014 – Fellow, John Simon Guggenheim Memorial Foundation 2024 – Fellow, American Mathematical Society Selected articles Levy, D., & Tadmor, E. (1998). From semidiscrete to fully discrete: Stability of Runge–Kutta schemes by the energy method. SIAM Review, 40(1), 40–73. Kim, P. S., Lee, P. P., & Levy, D. (2008). Dynamics and potential impact of the immune response to chronic myelogenous leukemia. PLoS Computational Biology, 4(6), e1000095. Tomasetti, C., & Levy, D. (2010). Role of symmetric and asymmetric division of stem cells in developing drug resistance. Proceedings of the National Academy of Sciences, 107(39), 16766–16771. Lavi, O., Greene, J. M., Levy, D., & Gottesman, M. M. (2013). The role of cell density and intratumoral heterogeneity in multidrug resistance. Cancer Research, 73(24), 7168–7175. Cho, H., & Levy, D. (2018). Modeling the chemotherapy-induced selection of drug-resistant traits during tumor growth. Journal of Theoretical Biology, 436, 120–134. References Living people Year of birth missing (living people) Applied mathematicians Israeli magicians Tel Aviv University alumni University of Maryland, College Park faculty Stanford University faculty Fellows of the American Mathematical Society
Doron Levy
[ "Mathematics" ]
1,632
[ "Applied mathematics", "Applied mathematicians" ]
74,575,092
https://en.wikipedia.org/wiki/Kohn%E2%80%93Luttinger%20superconductivity
Kohn–Luttinger superconductivity is a theoretical mechanism for unconventional superconductivity proposed by Walter Kohn and Joaquin Mazdak Luttinger based on Friedel oscillations. In contrast to BCS theory, in which Cooper pairs are formed due to the electron–phonon interaction, the Kohn–Luttinger mechanism is based on the fact that the screened Coulomb interaction oscillates as $\cos(2k_F r)/r^3$ and can create a Cooper instability for non-zero angular momentum $l$. Since the Kohn–Luttinger mechanism does not require any additional interactions beyond Coulomb interactions, it can lead to superconductivity in any electronic system. However, the estimated critical temperature $T_c$ for a Kohn–Luttinger superconductor is exponential in $l^4$ and thus is extremely small. For example, for metals the critical temperature is given by $k_B T_c \sim E_F \exp\!\left(-(2l)^4\right)$, where $k_B$ is the Boltzmann constant and $E_F$ is the Fermi energy. However, Kohn and Luttinger conjectured that nonspherical Fermi surfaces and variation of parameters may enhance the effect. Indeed, it is proposed that the Kohn–Luttinger mechanism is responsible for superconductivity in rhombohedral graphene, which has an annular Fermi surface. Further reading References Condensed matter physics Superconductivity
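To see how small this estimate is, the expression can be evaluated numerically. A minimal Python sketch, assuming the order-of-magnitude formula quoted above together with an illustrative Fermi energy of a few electron-volts (both are assumptions for illustration, not values from the original paper):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def kl_critical_temperature(e_fermi_ev: float, l: int) -> float:
    """Order-of-magnitude Kohn-Luttinger critical temperature in kelvin,
    using k_B * T_c ~ E_F * exp(-(2l)^4)."""
    return e_fermi_ev * math.exp(-(2 * l) ** 4) / K_B

# Illustrative value for a simple metal (E_F ~ 5 eV) and l = 2 pairing:
print(kl_critical_temperature(5.0, 2))  # ~1e-107 K: unobservably small
```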
Kohn–Luttinger superconductivity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
262
[ "Matter", "Physical quantities", "Superconductivity", "Phases of matter", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
74,576,446
https://en.wikipedia.org/wiki/Kac%20ring
In statistical mechanics, the Kac ring is a toy model introduced by Mark Kac in 1956 to explain how the second law of thermodynamics emerges from time-symmetric interactions between molecules (see reversibility paradox). Although artificial, the model is notable as a mathematically transparent example of coarse-graining and is used as a didactic tool in non-equilibrium thermodynamics. Formulation The Kac ring consists of $N$ equidistant points in a circle. Some of these points are marked. The number of marked points is $m$, where $m < N/2$. Each point represents a site occupied by a ball, which is black or white. After a unit of time, each ball moves to a neighboring point counterclockwise. Whenever a ball leaves a marked site, it switches color from black to white and vice versa. (If, however, the starting point is not marked, the ball completes its move without changing color.) An imagined observer can only measure coarse-grained (or macroscopic) quantities: the ratio $\mu = m/N$ and the overall color $\delta = (N_W - N_B)/N$, where $N_B$, $N_W$ denote the total number of black and white balls respectively. Without the knowledge of the detailed (microscopic) configuration, any distribution of marks is considered equally likely. This assumption of equiprobability is comparable to the Stosszahlansatz, which leads to the Boltzmann equation. Detailed evolution Let $\eta_j(t)$ denote the color of a ball at point $j$ and time $t$, with the convention $\eta_j(t) = +1$ for a white ball and $\eta_j(t) = -1$ for a black one. The microscopic dynamics can be mathematically formulated as $\eta_{j+1}(t+1) = \epsilon_j\,\eta_j(t)$, where $\epsilon_j = -1$ if the point $j$ is marked, $\epsilon_j = +1$ otherwise, and the index $j$ is taken modulo $N$. In analogy to molecular motion, the system is time-reversible. Indeed, if balls would move clockwise (instead of counterclockwise) and marked points changed color upon entering them (instead of leaving), the motion would be equivalent, except going backward in time. Moreover, the evolution of $\eta_j$ is periodic, where the period is at most $2N$. (After $N$ steps, each ball visits all $m$ marked points and changes color by a factor $(-1)^m$.) Periodicity of the Kac ring is a manifestation of the more general Poincaré recurrence. Coarse-graining Assuming that all balls are initially white, $\eta_j(t) = (-1)^{m_j(t)}$, where $m_j(t)$ is the number of times the ball will leave a marked point during its journey. When marked locations are unknown (and all possibilities equally likely), $m_j(t)$ becomes a random variable. Considering the limit when $N$ approaches infinity but $\mu$, $t$, and $j$ remain constant, the random variable $m_j(t)$ converges to the binomial distribution, i.e.: $P\{m_j(t) = k\} \to \binom{t}{k}\mu^k(1-\mu)^{t-k}$. Hence, the overall color after $t$ steps will be $\langle\delta(t)\rangle = \sum_{k=0}^{t}\binom{t}{k}\mu^k(1-\mu)^{t-k}(-1)^k = (1-2\mu)^t$. Since $0 < 1-2\mu < 1$, the overall color will, on average, converge monotonically and exponentially to 50% grey (a state that is analogous to thermodynamic equilibrium). An identical result is obtained for a ring rotating clockwise. Consequently, the coarse-grained evolution of the Kac ring is irreversible. It is also possible to show that the variance approaches zero: $\operatorname{Var}[\delta(t)] \to 0$ as $N \to \infty$ at fixed $\mu$ and $t$. Therefore, when $N$ is huge (of order 1023), the observer has to be extremely lucky (or patient) to detect any significant deviation from the ensemble averaged behavior. See also Ehrenfest model References Statistical mechanics
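The dynamics above are simple enough to simulate directly. A minimal Python sketch (the site count, mark density, and initial coloring are illustrative choices, not values from Kac's paper) that evolves the ring and tracks the coarse-grained color:

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu, T = 10_000, 0.1, 60          # sites, mark density, time steps
marked = rng.random(N) < mu          # marked[j]: ball leaving site j flips color
balls = np.ones(N, dtype=int)        # +1 = white, -1 = black; all white initially

factors = np.where(marked, -1, 1)
deltas = []
for t in range(T):
    deltas.append(balls.mean())          # delta(t) = (N_W - N_B) / N
    balls = np.roll(balls * factors, 1)  # flip on leaving a marked site, then move

# Theory predicts delta(t) ~ (1 - 2*mu)**t for a typical realization:
print(deltas[10], (1 - 2 * mu) ** 10)
```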
Kac ring
[ "Physics" ]
622
[ "Statistical mechanics" ]
74,578,936
https://en.wikipedia.org/wiki/Sodium%20ammonium%20tartrate
Sodium ammonium tartrate (NAT) is an organic compound with the formula NaNH4C4H4O6. The salt is derived from tartaric acid by neutralizing one of its carboxylic acid groups with ammonia and the other with sodium hydroxide. Louis Pasteur obtained enantiopure crystals of the tetrahydrate of NAT, NaNH4C4H4O6·4H2O, via the process of spontaneous resolution. His discovery led to increased study of optical activity, which eventually was shown to have broad implications for the understanding of molecular chirality. Many modifications of this salt have been investigated by X-ray crystallography, including the racemate, which crystallizes as the monohydrate. Related compounds Potassium sodium tartrate, KNaC4H4O6·4H2O, known as Rochelle salt, was the first ferroelectric material discovered. References Organic sodium salts Ferroelectric materials Tartrates Double salts Ammonium compounds
Sodium ammonium tartrate
[ "Physics", "Chemistry", "Materials_science" ]
152
[ "Physical phenomena", "Ferroelectric materials", "Double salts", "Salts", "Organic sodium salts", "Materials", "Electrical phenomena", "Ammonium compounds", "Hysteresis", "Matter" ]
47,605,998
https://en.wikipedia.org/wiki/Multipolar%20exchange%20interaction
Magnetic materials with strong spin-orbit interaction, such as LaFeAsO, PrFe4P12, YbRu2Ge2, UO2, NpO2, Ce1−xLaxB6, URu2Si2 and many other compounds, are found to have magnetic ordering constituted by high-rank multipoles, e.g. quadrupole, octupole, etc. Due to the strong spin-orbit coupling, multipoles are automatically introduced to the systems when the total angular momentum quantum number J is larger than 1/2. If those multipoles are coupled by some exchange mechanism, they can tend to order, just as in the conventional spin-1/2 Heisenberg problem. Apart from multipolar ordering, many hidden order phenomena are believed to be closely related to multipolar interactions. Tensor operator expansion Basic concepts Consider a quantum mechanical system with Hilbert space spanned by $|JM\rangle$, where $J$ is the total angular momentum and $M$ is its projection on the quantization axis. Then any quantum operator can be represented using the basis set $\{|JM\rangle\}$ as a matrix with dimension $(2J+1)$. Therefore, one can define $(2J+1)^2$ matrices to completely expand any quantum operator in this Hilbert space. Taking J=1/2 as an example, a quantum operator A can be expanded as $A = \sum_{M,M'} A_{MM'}\,|M\rangle\langle M'|$, where $A_{MM'} = \langle M|A|M'\rangle$. Obviously, the matrices $|M\rangle\langle M'|$ form a basis set in the operator space. Any quantum operator defined in this Hilbert space can be expanded by these operators. In the following, let's call these matrices a super basis to distinguish them from the eigen basis of quantum states. More specifically, the above super basis can be called a transition super basis because $|M\rangle\langle M'|$ describes the transition between the states $|M'\rangle$ and $|M\rangle$. In fact, this is not the only super basis that does the trick. We can also use Pauli matrices and the identity matrix to form a super basis $\{\mathbb{1},\sigma_x,\sigma_y,\sigma_z\}$. Since the rotation properties of $\sigma_x,\sigma_y,\sigma_z$ follow the same rules as the rank-1 tensors of cubic harmonics $\{x,y,z\}$ and the identity matrix follows the same rules as the rank-0 tensor, the basis set can be called a cubic super basis. Another commonly used super basis is the spherical harmonic super basis, which is built by replacing $\sigma_x$ and $\sigma_y$ with the raising and lowering operators $\sigma_\pm = \sigma_x \pm i\sigma_y$. Again, $\{\mathbb{1},\sigma_+,\sigma_z,\sigma_-\}$ share the same rotational properties as rank-1 spherical harmonic tensors $\{Y_1^{+1}, Y_1^{0}, Y_1^{-1}\}$, so it is called a spherical super basis. Because atomic orbitals are also described by spherical or cubic harmonic functions, one can imagine or visualize these operators using the wave functions of atomic orbitals although they are essentially matrices not spatial functions. If we extend the problem to $J=1$, we will need 9 matrices to form a super basis. For the transition super basis, we have $\{|M\rangle\langle M'|\}$ with $M,M' = 1,0,-1$. For the cubic super basis, we have the identity, the three dipole operators $J_x, J_y, J_z$, and five quadrupole operators formed from their symmetrized quadratic products. For the spherical super basis, we have the tensor operators $T^K_Q$ with $K = 0,1,2$ and $Q = -K,\dots,K$. In group theory, $T^0_0$ is called a scalar or rank-0 tensor, $T^1_Q$ are called dipole or rank-1 tensors, and $T^2_Q$ are called quadrupole or rank-2 tensors. The example tells us that for a $J$-multiplet problem, one will need all tensor operators of rank $K \le 2J$ to form a complete super basis. Therefore, for a $J=1$ system, its density matrix must have quadrupole components. This is the reason why a $J > 1/2$ problem will automatically introduce high-rank multipoles to the system. Formal definitions A general definition of the spherical harmonic super basis of a $J$-multiplet problem can be expressed as $T^K_Q = \sum_{M,M'} (-1)^{J-M}\sqrt{2K+1}\begin{pmatrix} J & K & J \\ -M & Q & M' \end{pmatrix}\,|JM\rangle\langle JM'|$, where the parentheses denote a 3-j symbol; K is the rank, which ranges over $0,1,\dots,2J$; Q is the projection index of rank K, which ranges from −K to +K. A cubic harmonic super basis where all the tensor operators are hermitian can be defined as $C^K_{Q+} = \frac{1}{\sqrt{2}}\left[(-1)^Q T^K_Q + T^K_{-Q}\right],\qquad C^K_{Q-} = \frac{i}{\sqrt{2}}\left[(-1)^Q T^K_Q - T^K_{-Q}\right]$. Then, any quantum operator defined in the $J$-multiplet Hilbert space can be expanded as $H = \sum_{K=0}^{2J}\sum_{Q=-K}^{K} c^K_Q\, T^K_Q$, where the expansion coefficients can be obtained by taking the trace inner product, e.g. $c^K_Q = \mathrm{Tr}\big[(T^K_Q)^\dagger H\big]$. 
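The definition above can be checked numerically. A minimal Python sketch using SymPy (the $\sqrt{2K+1}$ normalization and the $(-1)^{J-M}$ phase are one common convention, assumed here; other references normalize differently) that builds the spherical super basis for a given $J$ and verifies its trace orthogonality:

```python
from sympy import sqrt, zeros, Rational, simplify
from sympy.physics.wigner import wigner_3j

def tensor_operator(J, K, Q):
    """Matrix of T^K_Q in the |J,M> basis, with M = J, J-1, ..., -J."""
    dim = int(2 * J + 1)
    Ms = [J - i for i in range(dim)]
    T = zeros(dim, dim)
    for a, M in enumerate(Ms):
        for b, Mp in enumerate(Ms):
            T[a, b] = ((-1) ** int(J - M) * sqrt(2 * K + 1)
                       * wigner_3j(J, K, J, -M, Q, Mp))
    return T

J = Rational(1, 2)
ops = {(K, Q): tensor_operator(J, K, Q)
       for K in range(int(2 * J) + 1) for Q in range(-K, K + 1)}

# Trace inner products Tr[(T^K_Q)^dagger T^K'_Q'] vanish for (K,Q) != (K',Q'):
t10, t11 = ops[(1, 0)], ops[(1, 1)]
print(simplify((t10.H * t10).trace()), simplify((t10.H * t11).trace()))
```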
Apparently, one can make linear combinations of these operators to form a new super basis with different symmetries. Multi-exchange description Using the addition theorem of tensor operators, the product of a rank-$n$ tensor and a rank-$m$ tensor can generate a new tensor with rank between $|n-m|$ and $n+m$. Therefore, a high-rank tensor can be expressed as the product of low-rank tensors. This convention is useful to interpret the high-rank multipolar exchange terms as a "multi-exchange" process of dipoles (or pseudospins). For example, for the spherical harmonic tensor operators of the $J=1$ case, we have $T^2_{\pm 2} \propto T^1_{\pm 1} T^1_{\pm 1}$. If so, a quadrupole-quadrupole interaction (see next section) can be considered as a two-step dipole-dipole interaction. For example, $T^2_{2,i}\,T^2_{-2,j} \propto T^1_{1,i} T^1_{1,i}\; T^1_{-1,j} T^1_{-1,j}$, so the one-step quadrupole transition on site $i$ now becomes two steps of dipole transitions. Hence not only inter-site-exchange but also intra-site-exchange terms appear (so-called multi-exchange). If $J$ is even larger, one can expect more complicated intra-site-exchange terms to appear. However, one has to note that it is not a perturbation expansion but just a mathematical technique. The high-rank terms are not necessarily smaller than low-rank terms. In many systems, high-rank terms are more important than low-rank terms. Multipolar exchange interactions There are four major mechanisms to induce exchange interactions between two magnetic moments in a system: 1). Direct exchange 2). RKKY 3). Superexchange 4). Spin-Lattice. No matter which one is dominant, a general form of the exchange interaction can be written as $H = \sum_{ij}\sum_{KK'}\sum_{QQ'} J^{KK'}_{QQ'}(ij)\, T^K_Q(i)\, T^{K'}_{Q'}(j)$, where $i,j$ are the site indexes and $J^{KK'}_{QQ'}(ij)$ is the coupling constant that couples the two multipole moments $T^K_Q(i)$ and $T^{K'}_{Q'}(j)$. One can immediately find that if $K$ is restricted to 1 only, the Hamiltonian reduces to the conventional Heisenberg model. An important feature of the multipolar exchange Hamiltonian is its anisotropy. The value of the coupling constant is usually very sensitive to the relative angle between two multipoles. Unlike conventional spin-only exchange Hamiltonians, where the coupling constants are isotropic in a homogeneous system, the highly anisotropic atomic orbitals (recall the shape of the wave functions) coupling to the system's magnetic moments will inevitably introduce huge anisotropy even in a homogeneous system. This is one of the main reasons that most multipolar orderings tend to be non-colinear. Antiferromagnetism of multipolar moments Unlike magnetic spin ordering, where antiferromagnetism can be defined by flipping the magnetization axis of two neighboring sites from a ferromagnetic configuration, flipping the magnetization axis of a multipole is usually meaningless. Taking a $T^2_0$ moment as an example, if one flips the z-axis by making a $\pi$ rotation about the y-axis, it just changes nothing. Therefore, a suggested definition of antiferromagnetic multipolar ordering is to flip the phases of the multipoles by $\pi$ between neighboring sites, i.e. $T^K_Q(i) = e^{i\pi}\,T^K_Q(j) = -T^K_Q(j)$. In this regard, antiferromagnetic spin ordering is just a special case of this definition, i.e. flipping the phase of a dipole moment is equivalent to flipping its magnetization axis. As for high-rank multipoles, e.g. $T^2_2$, flipping the phase actually becomes a $\pi/2$ rotation about the z-axis, and for $T^2_0$ it is not any kind of rotation. Computing coupling constants Calculation of multipolar exchange interactions remains a challenging issue in many aspects. Although there were many works based on fitting model Hamiltonians with experiments, predictions of the coupling constants based on first-principles schemes remain lacking. 
Currently there are two studies that have implemented first-principles approaches to explore multipolar exchange interactions. An early study was developed in the 1980s. It is based on a mean-field approach that can greatly reduce the complexity of the coupling constants induced by the RKKY mechanism, so that the multipolar exchange Hamiltonian can be described by just a few unknown parameters, which can be obtained by fitting with experimental data. Later on, a first-principles approach to estimate the unknown parameters was further developed and obtained good agreement with a few selected compounds, e.g. cerium monopnictides. Another first-principles approach was also proposed recently. It maps all the coupling constants induced by all static exchange mechanisms to a series of DFT+U total-energy calculations and obtained agreement for uranium dioxide. References Magnetic ordering Magnetic exchange interactions
Multipolar exchange interaction
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,635
[ "Magnetic ordering", "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
59,524,587
https://en.wikipedia.org/wiki/Applied%20Spectral%20Imaging
Applied Spectral Imaging or ASI is a multinational biomedical company that develops and manufactures microscopy imaging and digital analysis tools for hospitals, service laboratories and research centers. The company provides cytogenetic, pathology, and research laboratories with bright-field, fluorescence and spectral imaging in clinical applications. Test slides can be scanned, captured, archived, reviewed on screen, analyzed with computer-assisted algorithms, and reported. ASI system platforms automate the workflow process to reduce human error in the identification and classification of chromosomal disorders, genome instability, and various oncological malignancies, among other diseases. History Founded in 1993, ASI initially focused on spectral imaging devices for the research community. In 2002, ASI made a strategic move to expand into the clinical cytogenetics market and thereby introduced its CytoLabView system for karyotyping and FISH imaging. In 2005, ASI launched its automated scanning system in order to increase throughput for case analysis, compensating for higher sample volumes and helping laboratories to better cope with a deficit of laboratory technicians and other professionals. As the demand increased for more diagnostics, ASI focused on providing faster imaging and analysis to improve turn-around time for patient results. Scanning automation and algorithms enabled laboratory technologists to spend more time on results and analysis rather than manual labor. In 2011, ASI launched a proprietary software platform named GenASIs. The software automates the manual diagnostic process. Physicians, medical scientists and laboratory technicians integrate digital technology to manage the visualization of the slide and compute the analysis. Through algorithms, tissue, suspension cells and chromosomes are analyzed for aberrations, cell classification, tumor proportion score, etc. ASI's high-throughput tray loader, introduced the same year, was manufactured to automate the sample and scanning process. In 2017, ASI introduced PathFusion and HiPath Pro, the company's full pathology imaging suite for H&E, IHC, and FISH visualization and analysis software, including tissue matching and whole slide imaging. FDA Clearances ASI has a wide FDA-cleared portfolio. Its products and Quality System (QS) are compliant with IVD medical device standards and regulations. 2001: FDA clearance for the BandView product 2005: FDA clearance for the FISHView product 2007: FDA clearance for the SpotScan application for CEP XY 2010: FDA clearance for the SpotScan application for HER2/neu 2011: FDA clearance for the SpotScan application for UroVysion 2013: FDA clearance for the SpotScan application for ALK 2015: FDA clearance for the HiPath system for the IHC family HER2, ER, PR and Ki67 Patents ASI patents cover methods and instrumentation for general fields in the life sciences. Some of the claims are specific to a special type of hardware. Others have a more general scope and refer to the application rather than the instrument. Some of the original patents are related to spectral imaging systems based on interferometry and other spectral imaging instrumentation. Functionalities The functionalities that Applied Spectral Imaging provides laboratories and hospitals include automated slide scanning, an applications interface, whole slide imaging, scoring and analysis, sharing capabilities for team review and final sign-off, database management, secure archiving of reports, connectivity to the LIS and standardized testing. 
Clinical applications ASI's clinical applications for laboratories include the scoring of chromosome analysis and karyotyping, fluorescent karyotyping, spectral karyotyping, karyotyping of multiple species, scanning and detection of metaphases and interphases, FISH review and analysis, matching of tissue FISH with H&E/IHC, brightfield whole slide imaging, IHC quantitative scoring, cytokinesis-blocked micronucleus scoring, region-of-interest annotation and measurement, tissue matching and FISH imaging, analysis and documentation of membrane IHC stains, analysis and documentation of nuclear IHC stains, chromosome comparison modules, whole slide image viewing, enhancement and documentation, data case management and network connectivity of multiple systems in a network. Products ASI HiPath Pro - Brightfield imaging analysis system for a variety of histopathology needs, including IHC scoring and whole slide imaging of H&E and IHC samples. ASI PathFusion - Bridges the gap between brightfield pathology and FISH; combines whole slide imaging, computational tissue FISH and digital tissue matching of FISH with haematoxylin and eosin (H&E) or immunohistochemistry (IHC) samples. ASI HiBand - Digital chromosome analysis for counting, indexing and karyotyping. ASI HiFISH - Computational FISH diagnostics for classification, scanning and imaging analysis. ASI CytoPower - Complete chromosome analysis, karyotyping and FISH cell classification platform. ASI Rainbow - Analysis and multicolor imaging solution for fluorescence and brightfield samples References External links Companies based in Carlsbad, California Companies established in 1993 Bioinformatics companies Multinational companies headquartered in the United States Biotechnology companies established in 1993 Biomedical engineering Biological engineering Medical technology companies of the United States Medical imaging
Applied Spectral Imaging
[ "Engineering", "Biology" ]
1,036
[ "Biological engineering", "Medical technology", "Biomedical engineering" ]
59,525,996
https://en.wikipedia.org/wiki/4%2C4%E2%80%B2-%28Hexafluoroisopropylidene%29diphthalic%20anhydride
4,4′-(Hexafluoroisopropylidene)diphthalic anhydride (6FDA) is an aromatic organofluorine compound and the dianhydride of 4,4′-(hexafluoroisopropylidene)bisphthalic acid (the name is derived from phthalic acid). Synthesis The raw materials for 6FDA are hexafluoroacetone and o-xylene. With hydrogen fluoride as a catalyst, the compounds react to give 4,4′-(hexafluoroisopropylidene)bis(o-xylene). This is oxidized with potassium permanganate to 4,4′-(hexafluoroisopropylidene)bisphthalic acid. Dehydration gives the dianhydride 6FDA. Applications 6FDA is used as a monomer for the synthesis of fluorinated polyimides. These are prepared by the polymerisation of 6FDA with an aromatic diamine such as 3,5-diaminobenzoic acid or 4,4'-diaminodiphenyl sulfide. Such fluorinated polyimides are used in special applications, e.g. gas-permeable polymer membranes, and in the fields of microelectronics and optics, such as optical lenses made from polymers, OLEDs, or high-performance CMOS contact image sensors (CISs). These polyimides are typically soluble in common organic solvents, facilitating their production and processing. They have very low water absorption, which makes them particularly suitable for special optical applications. References Monomers Carboxylic anhydrides Trifluoromethyl compounds
4,4′-(Hexafluoroisopropylidene)diphthalic anhydride
[ "Chemistry", "Materials_science" ]
373
[ "Monomers", "Polymer chemistry" ]
54,556,438
https://en.wikipedia.org/wiki/Komar%20superpotential
In general relativity, the Komar superpotential, corresponding to the invariance of the Hilbert–Einstein Lagrangian $\mathcal{L}_{\mathrm{H}} = \frac{\sqrt{-g}}{2\kappa}\,R$, is the tensor density: $U^{\alpha\beta}(\xi) = \frac{\sqrt{-g}}{\kappa}\,\nabla^{[\beta}\xi^{\alpha]}$, associated with a vector field $\xi$, and where $\nabla$ denotes the covariant derivative with respect to the Levi-Civita connection. The Komar two-form: $\mathcal{U}(\xi) = \tfrac{1}{2}\,U^{\alpha\beta}(\xi)\;i_{\partial_\beta}\,i_{\partial_\alpha}\,\mathrm{d}^4x$, where $i$ denotes the interior product, generalizes to an arbitrary vector field $\xi$ the Komar superpotential above, which was originally derived for timelike Killing vector fields. The Komar superpotential is affected by the anomalous factor problem: in fact, when computed, for example, on the Kerr–Newman solution, it produces the correct angular momentum, but just one-half of the expected mass. See also Superpotential Einstein–Hilbert action Komar mass Tensor calculus Christoffel symbols Riemann curvature tensor Notes References Equations of physics Tensors General relativity Potentials
Komar superpotential
[ "Physics", "Mathematics", "Engineering" ]
179
[ "Tensors", "Equations of physics", "Mathematical objects", "Equations", "General relativity", "Relativity stubs", "Theory of relativity" ]
54,564,418
https://en.wikipedia.org/wiki/Chandrasekhar%27s%20H-function
In atmospheric radiation, Chandrasekhar's H-function appears in the solutions of problems involving scattering, introduced by the Indian American astrophysicist Subrahmanyan Chandrasekhar. Chandrasekhar's H-function $H(\mu)$, defined in the interval $0\le\mu\le 1$, satisfies the following nonlinear integral equation $H(\mu) = 1 + \mu H(\mu)\int_0^1 \frac{\Psi(\mu')}{\mu+\mu'}\,H(\mu')\,d\mu'$, where the characteristic function $\Psi(\mu)$ is an even polynomial in $\mu$ satisfying the following condition: $\int_0^1 \Psi(\mu)\,d\mu \le \tfrac{1}{2}$. If the equality is satisfied in the above condition, it is called the conservative case, otherwise non-conservative. The albedo is given by $\varpi_0 = 2\int_0^1 \Psi(\mu)\,d\mu$. An alternate form, which is more useful in calculating the H-function numerically by iteration, was derived by Chandrasekhar as $\frac{1}{H(\mu)} = \left[1 - 2\int_0^1 \Psi(\mu')\,d\mu'\right]^{1/2} + \int_0^1 \frac{\mu'\,\Psi(\mu')}{\mu+\mu'}\,H(\mu')\,d\mu'$. In the conservative case, the above equation reduces to $\frac{1}{H(\mu)} = \int_0^1 \frac{\mu'\,\Psi(\mu')}{\mu+\mu'}\,H(\mu')\,d\mu'$. Approximation The H-function can be approximated up to an order $n$ as $H(\mu) = \frac{1}{\mu_1\mu_2\cdots\mu_n}\,\frac{\prod_{i=1}^{n}(\mu+\mu_i)}{\prod_{\alpha}(1+k_\alpha\mu)}$, where $\mu_i$ are the zeros of the Legendre polynomial $P_{2n}$ and $k_\alpha$ are the positive, non-vanishing roots of the associated characteristic equation $1 = 2\sum_{j=1}^{n} \frac{a_j\,\Psi(\mu_j)}{1-k^2\mu_j^2}$, where $a_j$ are the quadrature weights given by $a_j = \frac{1}{P_{2n}'(\mu_j)}\int_{-1}^{1}\frac{P_{2n}(\mu)}{\mu-\mu_j}\,d\mu$. Explicit solution in the complex plane In a complex variable $z$, the H-equation can be continued off the real interval, and a unique closed-form solution is obtained by contour integration of the logarithm of the associated dispersion function $T(z) = 1 - 2\int_0^1 \frac{z^2\,\Psi(\mu)}{z^2-\mu^2}\,d\mu$; the imaginary part of the relevant integrand vanishes when $z$ is real. The resulting solution is unique and bounded in the interval $0\le\mu\le1$ for conservative cases. In non-conservative cases, if the characteristic equation admits roots $\pm 1/k$, there is a further solution in addition to the unique bounded one. Properties $\int_0^1 H(\mu)\Psi(\mu)\,d\mu = 1 - \left[1 - 2\int_0^1 \Psi(\mu)\,d\mu\right]^{1/2}$; for the conservative case, this reduces to $\int_0^1 H(\mu)\Psi(\mu)\,d\mu = 1$. $\left[1 - 2\int_0^1 \Psi(\mu')\,d\mu'\right]^{1/2}\int_0^1 H(\mu)\Psi(\mu)\mu\,d\mu + \frac{1}{2}\left[\int_0^1 H(\mu)\Psi(\mu)\mu\,d\mu\right]^2 = \int_0^1 \Psi(\mu)\mu^2\,d\mu$; for the conservative case, this reduces to $\int_0^1 H(\mu)\Psi(\mu)\mu\,d\mu = \left[2\int_0^1 \Psi(\mu)\mu^2\,d\mu\right]^{1/2}$. If the characteristic function is $\Psi(\mu) = a + b\mu^2$, where $a$, $b$ are two constants (which have to satisfy $a + b/3 \le 1/2$), and if $\alpha_n = \int_0^1 H(\mu)\mu^n\,d\mu$ is the nth moment of the H-function, then we have $\alpha_0 = 1 + \frac{1}{2}\left(a\,\alpha_0^2 + b\,\alpha_1^2\right)$. See also Chandrasekhar's X- and Y-function External links MATLAB function to calculate the H function https://www.mathworks.com/matlabcentral/fileexchange/29333-chandrasekhar-s-h-function References Special functions Integral equations Scattering Scattering, absorption and radiative transfer (optics)
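For isotropic scattering, taking $\Psi(\mu) = \varpi_0/2$ with single-scattering albedo $\varpi_0$, the alternate form above can be iterated directly. A minimal Python sketch (the quadrature order and tolerance are illustrative choices):

```python
import numpy as np

def h_function(omega0: float, n: int = 64, tol: float = 1e-12):
    """Iterate Chandrasekhar's alternate form for Psi(mu) = omega0 / 2."""
    x, w = np.polynomial.legendre.leggauss(n)
    mu = 0.5 * (x + 1.0)           # Gauss-Legendre nodes mapped to [0, 1]
    wt = 0.5 * w
    psi = 0.5 * omega0 * np.ones(n)
    H = np.ones(n)
    const = np.sqrt(1.0 - omega0)  # sqrt(1 - 2 * integral of Psi)
    while True:
        integral = ((mu * psi * H * wt) / (mu[:, None] + mu[None, :])).sum(axis=1)
        H_new = 1.0 / (const + integral)
        if np.max(np.abs(H_new - H)) < tol:
            return mu, H_new
        H = H_new

mu, H = h_function(0.9)
print(H[-1])  # H at the node closest to mu = 1
```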
Chandrasekhar's H-function
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
385
[ " absorption and radiative transfer (optics)", "Special functions", "Integral equations", "Mathematical objects", "Equations", "Combinatorics", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics" ]
77,526,605
https://en.wikipedia.org/wiki/Cyber%20Security%20and%20Resilience%20Bill
On 17 July 2024, it was announced at the State Opening of Parliament that the Labour government would introduce the Cyber Security and Resilience Bill (CS&R). The proposed legislation is intended to update the existing Network and Information Security Regulations 2018, known as UK NIS. CS&R will strengthen the UK's cyber defences and resilience to hostile attacks, ensuring that the infrastructure and critical services relied upon by UK companies are protected by addressing vulnerabilities, while allowing the digital economy to deliver growth. The legislation will expand the remit of the existing regulations and put regulators on a stronger footing, as well as increasing the reporting requirements placed on businesses to help build a better picture of cyber threats. The Bill will extend and apply UK-wide. The new laws are part of the Government's pledge to enhance and strengthen UK cyber security measures and protect the digital economy. CS&R will introduce a comprehensive regulatory framework designed to enforce stringent cyber security measures across various sectors. This framework will include mandatory compliance with established cyber security standards and practices to ensure essential cyber safety measures are being implemented. Ultimately, businesses will need to demonstrate their adherence to these standards through regular audits and reporting. Also included in the legislation are potential cost-recovery mechanisms to provide resources to regulators, and powers to proactively investigate potential vulnerabilities. Key facts The key facts of the Bill were set out alongside the King's Speech. Consequences The Bill will introduce compulsory ransomware reporting so that the authorities can better understand the threat and "alert us to potential attacks by expanding the type and nature of incidents that regulated entities must report." While this information collection is likely to increase resilience to attacks, the administrative burden for businesses from this reporting might well bring additional costs on top of the expense of the original cyber incident. As modern business practices are interconnected, organisations must ensure that their partners and suppliers also adhere to the standards set by the CS&R. In the EU, the original Network and Information Security Directive (NIS Directive 2016/1148) is being updated to Directive 2022/2555, known as EU NIS 2. EU NIS 2 introduces wide-reaching changes to the existing EU cyber security laws for network and information systems. The CS&R should bring the existing UK NIS regulations 2018 to a framework similar to that of the EU. The Bill as yet contains no information on punishments for non-compliance, or on what the data regulators will demand from an organisation that has experienced a cyber security incident. Reaction Jon Ellison, NCSC Director of National Resilience, said that the proposed bill was "a landmark moment tackling the growing threat to the UK's critical systems". He continued that it will be "a crucial step towards a more comprehensive regulatory regime, fit for our volatile world". Former head of the NCSC Ciaran Martin, along with other experts, welcomed the legislative proposal. On social media, he wrote that the proposed legislation seemed sensible, with mandatory reporting requirements being significant and positive steps. 
Matt Hull, a representative of the CyberUp Campaign, said that the organisation is looking forward to the Government updating UK cyber resilience, and in particular the Computer Misuse Act 1990. Any updates to this Act would help cyber professionals protect the UK, safeguard the digital economy and unlock the potential growth within the cybersecurity industry. Schedule The Bill will proceed through seven stages of the legislative process, which take place across both Houses of the UK Parliament: first reading, second reading, committee stage, report stage, third reading, consideration by the other House and royal assent. July 17th: Bill announced. Stage: Pre-legislative scrutiny (current). Stage: First reading - the Bill will be introduced to Parliament in 2025. See also Cyber Resilience Act - EU regulation to improve cybersecurity and cyber resilience. GDPR - The General Data Protection Regulation. Malware - Examples include computer viruses, spyware and adware. References External links Cyber security in the UK Research Briefing - House of Commons Library Cybercrime in the United Kingdom Computer network security Department for Science, Innovation and Technology Internet security Labour Party (UK) Malware Technology
Cyber Security and Resilience Bill
[ "Technology", "Engineering" ]
867
[ "Malware", "Cybersecurity engineering", "Computer networks engineering", "Computer network security", "Computer security exploits" ]
77,531,717
https://en.wikipedia.org/wiki/Benzgalantamine
Benzgalantamine, sold under the brand name Zunveyl, is a medication used for the treatment of mild to moderate dementia of the Alzheimer's type. It is a cholinesterase inhibitor. Benzgalantamine is a prodrug of galantamine. The most common side effects include nausea, vomiting, diarrhea, dizziness, headache, and decreased appetite. Benzgalantamine was approved for medical use in the United States in July 2024. Medical uses Benzgalantamine is indicated for the treatment of mild to moderate dementia of the Alzheimer's type in adults. Side effects The most common side effects include nausea, vomiting, diarrhea, dizziness, headache, and decreased appetite. Society and culture Legal status Benzgalantamine was approved for medical use in the United States in July 2024. Names Benzgalantamine is the international nonproprietary name. References External links Treatment of Alzheimer's disease Benzoate esters Prodrugs
Benzgalantamine
[ "Chemistry" ]
206
[ "Chemicals in medicine", "Prodrugs" ]
70,302,266
https://en.wikipedia.org/wiki/Gamas%27s%20theorem
Gamas's theorem is a result in multilinear algebra which states the necessary and sufficient conditions for a tensor symmetrized by an irreducible representation of the symmetric group to be zero. It was proven in 1988 by Carlos Gamas. Additional proofs have been given by Pate and Berget. Statement of the theorem Let $V$ be a finite-dimensional complex vector space and $\lambda$ be a partition of $n$. From the representation theory of the symmetric group $S_n$ it is known that the partition $\lambda$ corresponds to an irreducible representation of $S_n$. Let $\chi^\lambda$ be the character of this representation. The tensor symmetrized by $\chi^\lambda$ is defined to be $\frac{\chi^\lambda(e)}{n!}\sum_{\sigma\in S_n}\chi^\lambda(\sigma)\,v_{\sigma(1)}\otimes v_{\sigma(2)}\otimes\cdots\otimes v_{\sigma(n)}$, where $e$ is the identity element of $S_n$. Gamas's theorem states that the above symmetrized tensor is non-zero if and only if it is possible to partition the set of vectors $\{v_1,\dots,v_n\}$ into linearly independent sets whose sizes are in bijection with the lengths of the columns of the partition $\lambda$. See also Algebraic combinatorics Immanant Schur polynomial References Algebraic combinatorics Theorems Multilinear algebra
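A small worked example (with hypothetical vectors chosen purely for illustration) makes the column condition concrete:

```latex
% Take n = 3 and \lambda = (2,1), whose Young diagram has columns of
% lengths 2 and 1: the symmetrized tensor of v_1, v_2, v_3 is non-zero
% iff {v_1, v_2, v_3} splits into a linearly independent pair and a
% singleton. In V = \mathbb{C}^2 with standard basis e_1, e_2:
%   v_1 = e_1,\; v_2 = e_2,\; v_3 = e_1:
%     \{v_1, v_2\} is independent and \{v_3\} is a singleton,
%     so the symmetrized tensor is non-zero;
%   v_1 = v_2 = v_3 = e_1:
%     no independent pair exists, so the symmetrized tensor vanishes.
% For \lambda = (1,1,1), a single column of length 3, the symmetrizer is
% the antisymmetrizer, and the condition reduces to
\[
  v_1 \wedge v_2 \wedge v_3 \neq 0 .
\]
```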
Gamas's theorem
[ "Mathematics" ]
210
[ "Fields of abstract algebra", "Algebraic combinatorics", "Combinatorics" ]
70,302,410
https://en.wikipedia.org/wiki/4D%20scanning%20transmission%20electron%20microscopy
4D scanning transmission electron microscopy (4D STEM) is a subset of scanning transmission electron microscopy (STEM) which utilizes a pixelated electron detector to capture a convergent beam electron diffraction (CBED) pattern at each scan location. This technique captures a two-dimensional reciprocal-space image associated with each scan point as the beam rasters across a two-dimensional region in real space, hence the name 4D STEM. Its development was enabled by evolution in STEM detectors and improvements in computational power. The technique has applications in visual diffraction imaging, phase orientation and strain mapping, and phase contrast analysis, among others. The name 4D STEM is common in literature, but it is known by other names: 4D STEM EELS, ND STEM (N- since the number of dimensions could be higher than 4), position resolved diffraction (PRD), spatially resolved diffractometry, momentum-resolved STEM, "nanobeam precision electron diffraction", scanning electron nano diffraction (SEND), nanobeam electron diffraction (NBED), or pixelated STEM. History The use of diffraction patterns as a function of position dates back to the earliest days of STEM, for instance the early review of John M. Cowley and John C. H. Spence in 1978 or the analysis in 1983 by Laurence D. Marks and David J. Smith of the orientation of different crystalline segments in nanoparticles. Later work includes the analysis of diffraction patterns as a function of probe position in 1995, where Peter Nellist, B.C. McCallum and John Rodenburg attempted electron ptychography analysis of crystalline silicon. There is also the fluctuation electron microscopy (FEM) technique, proposed in 1996 by Treacy and Gibson, which also included quantitative analysis of the differences in images or diffraction patterns taken at different locations on a given sample. The field of 4D STEM remained underdeveloped due to the limited capabilities of detectors available at the time. The earliest work used either Grigson coils to scan the diffraction pattern, or an optical camera pickup from a phosphor screen. Later on, CCD detectors became available, but while these are commonly used in transmission electron microscopy (TEM), they had limited data acquisition rates, could not distinguish where on the detector an electron strikes with high accuracy, and had low dynamic range, which made them undesirable for use in 4D STEM. In the late 2010s, the development of hybrid pixel array detectors (PADs) with single-electron sensitivity, high dynamic range, and fast readout speeds allowed for practical 4D STEM experiments. Operating Principle While the process of data collection in 4D STEM is identical to that of standard STEM, each technique utilizes different detectors and collects different data. In 4D STEM there is a pixelated electron detector located at the back focal plane which collects the CBED pattern at each scan location. An image of the sample can be constructed from the CBED patterns by selecting an area in reciprocal space and assigning the average intensity of that area in each CBED pattern to the real-space pixel the pattern corresponds to. It is also possible for an ADF or HAADF image to be taken concurrently with the CBED pattern collection, depending on where the detector is located on the microscope. An annular dark-field image taken this way may be complementary to a bright-field image constructed from the captured CBED images. 
The use of a hollow detector with a hole in the middle can allow transmitted electrons to be passed to an EELS detector while scanning. This allows for the simultaneous collection of chemical spectral information and structure information. Detectors In traditional TEM, imaging detectors use phosphorescent scintillators paired with a charge coupled device (CCD) to detect electrons. While these devices have good electron sensitivity, they lack the readout speed and dynamic range necessary for 4D STEM. Additionally, the use of a scintillator can worsen the point spread function (PSF) of the detector, since the electron's interaction with the scintillator results in a broadening of the signal. In contrast, traditional annular STEM detectors have the necessary readout speed, but instead of collecting a full CBED pattern the detector integrates the collected intensity over a range of angles into a single data point. The development of pixelated detectors in the 2010s with single-electron sensitivity, fast readout speeds, and high dynamic range has enabled 4D STEM as a viable experimental method. 4D STEM detectors are typically built as either a monolithic active pixel sensor (MAPS) or as a hybrid pixel array detector (PAD). Monolithic active pixel sensor (MAPS) A MAPS detector consists of a complementary metal–oxide–semiconductor (CMOS) chip paired with a doped epitaxial surface layer which converts high-energy electrons into many lower-energy electrons that travel down to the detector. MAPS detectors must be radiation hardened, as their direct exposure to high-energy electrons makes radiation damage a key concern. Due to its monolithic nature and straightforward design, a MAPS detector can attain high pixel densities on the order of 4000 x 4000. This high pixel density, when paired with low electron doses, can enable single-electron counting for high-efficiency imaging. Additionally, MAPS detectors tend to have high electron sensitivities and fast readout speeds, but suffer from limited dynamic range. Pixel array detector (PAD) PAD detectors consist of a photodiode bump-bonded to an integrated circuit, where each solder bump represents a single pixel on the detector. These detectors typically have lower pixel densities on the order of 128 x 128 but can achieve much higher dynamic range on the order of 32 bits. These detectors can achieve relatively high readout speeds on the order of 1 ms/pixel but are still lacking compared to their annular detector counterparts in STEM, which can achieve readout speeds on the order of 10 μs/pixel. Detector noise performance is often measured by its detective quantum efficiency (DQE), defined as $\mathrm{DQE} = \mathrm{SNR}^2_{\mathrm{out}} / \mathrm{SNR}^2_{\mathrm{in}}$, where $\mathrm{SNR}^2_{\mathrm{out}}$ is the output signal-to-noise ratio squared and $\mathrm{SNR}^2_{\mathrm{in}}$ is the input signal-to-noise ratio squared. Ideally the DQE of a sensor is 1, indicating the sensor generates zero noise. The DQE of MAPS, APS and other direct electron detectors tends to be higher than that of their CCD camera counterparts. Computational Methods A major issue in 4D STEM is the large quantity of data collected by the technique. With upwards of hundreds of TB of data produced over the course of an hour of scanning, finding pertinent information is challenging and requires advanced computation. Analysis of such large datasets can be quite complex, and computational methods to process this data are being developed. Many code repositories for analysis of 4D STEM are currently in development, including HyperSpy, LiberTEM, and Pycroscopy, among others. AI-driven analysis is possible. 
However, some methods require databases of information to train on which currently do not exist. Additionally, the lack of metrics for data quality, limited scalability due to poor cross-platform support across different manufacturers, and the lack of standardization in analysis and experimental methods bring up questions of comparability across different datasets as well as reproducibility. Selected Applications 4D STEM has been utilized in a wide array of applications; the most common uses include virtual diffraction imaging, orientation and strain mapping, and phase contrast analysis, which are covered below. The technique has also been applied in: medium-range order measurement, higher order Laue zone (HOLZ) channeling contrast imaging, position averaged CBED, fluctuation electron microscopy, biomaterials characterization, and medical fields (microstructure of pharmaceutical materials and orientation mapping of peptide crystals). This list is in no way exhaustive, and as the field is still relatively young, more applications are actively being developed. Virtual Diffraction (Dark Field / Bright Field) Imaging Virtual diffraction imaging is a method developed to generate real-space images from diffraction patterns. This technique has been used in characterizing material structures since the 90s but more recently has been applied in 4D STEM applications. This technique often works best with scanning electron nano diffraction (SEND), where the probe convergence angle is relatively low to give separated diffraction disks (thus also giving a resolution measured in nm, not Å). A "virtual detector" is not a detector at all but rather a method of data processing which integrates a subset of pixels in the diffraction pattern at each raster position to create a bright-field or dark-field image. A region of interest is selected on some representative diffraction pattern, and only those pixels within the aperture are summed to form the image. This virtual aperture can be any size/shape desired and can be created using the 4D dataset gathered from a single scan. This ability to apply different apertures to the same dataset is possible because the whole diffraction pattern is present in the 4D STEM dataset. This eliminates a typical weakness of conventional STEM operation, as STEM bright-field and dark-field detectors are placed at fixed angles and cannot be changed during imaging. With a 4D dataset, bright/dark-field images can be obtained by integrating diffraction intensities from the transmitted and diffracted beams respectively. Creating images from these patterns can give nanometer- or atomic-resolution information (depending on the pixel step size and the range of diffracted angles used to form the image) and is typically used to characterize the structure of nanomaterials. Additionally, these diffraction patterns can be indexed and analyzed using other 4D STEM techniques, such as orientation and phase mapping, or strain mapping. A key advantage of performing virtual diffraction imaging in 4D STEM is the flexibility. Any shape of aperture can be used: a circle (cognate with traditional TEM bright/dark field imaging), a rectangle, an annulus (cognate with STEM ADF/ABF imaging), or any combination of apertures in a more complex pattern. 
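In code, such a virtual detector reduces to a masked sum over the two detector axes. A minimal NumPy sketch (the array layout, names, and annular geometry are illustrative assumptions, not any particular package's API):

```python
import numpy as np

def virtual_image(data4d: np.ndarray, center: tuple, r_in: float, r_out: float):
    """Form a virtual image from a 4D-STEM dataset.

    data4d has shape (scan_y, scan_x, det_ky, det_kx): one diffraction
    pattern per scan position. An annular mask with r_in = 0 gives a
    virtual bright-field image; r_in > 0 gives a virtual dark field.
    """
    ky, kx = np.indices(data4d.shape[2:])
    r = np.hypot(ky - center[0], kx - center[1])
    mask = (r >= r_in) & (r < r_out)
    # Integrate the masked detector pixels for every scan position.
    return data4d[..., mask].sum(axis=-1)

# Example with random stand-in data:
data = np.random.rand(64, 64, 128, 128)
bf = virtual_image(data, center=(64, 64), r_in=0, r_out=10)    # bright field
adf = virtual_image(data, center=(64, 64), r_in=30, r_out=60)  # dark field
```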
The use of regular grids of apertures is particularly powerful for imaging a crystal with high signal-to-noise and minimising the effects of bending, and has been used by McCartan et al.; this also allowed the imaging of an array of superlattice spots associated with a particular crystal ordering in part of the crystal as a result of chemical segregation. Virtual diffraction imaging has been used to map interfaces, select intensity from selected areas of the diffraction plane to form enhanced dark-field images, map positions of nanoscale precipitates, create phase maps of beam-sensitive battery cathode materials, and measure the degree of crystallinity in metal-organic frameworks (MOFs). Recent work has further extended the possibilities of virtual diffraction imaging by applying a more digital approach adapted from one developed for orientation and phase mapping, or strain mapping. In these methods, the diffraction spot positions in a 4D dataset are determined for each diffraction pattern and turned into a list, and operations are performed on the list, not on the whole images. For dark-field imaging, the centroid positions for the list of diffraction spots can be simply compared against a list of centroid positions for where spots are expected, and intensity is only added where diffraction spot centroids agree with the selected positions. This gives far more selectivity than simply integrating all intensity in an aperture (particularly because it ignores diffuse intensity that does not fall in spots) and consequently much higher contrast in the resulting images; this work has recently been submitted to arXiv. Phase Orientation Mapping Phase orientation mapping is typically done with electron backscatter diffraction in SEM, which can give 2D maps of grain orientation in polycrystalline materials. The technique can also be done in TEM using Kikuchi lines, which is more applicable for thicker samples, since the formation of Kikuchi lines relies on diffuse scattering being present. Alternatively, in TEM one can utilize precession electron diffraction (PED) to record a large number of diffraction patterns; through comparison to known patterns, the relative orientations of grains can be determined. 4D STEM can also be used to map orientations, in a technique called Bragg spot imaging. The use of traditional TEM techniques typically results in better resolution than the 4D STEM approach, but can fail in regions with high strain, as the DPs become too distorted. In Bragg spot imaging, a correlation analysis is first performed to group diffraction patterns (DPs), using a correlation measure between 0 (no correlation) and 1 (exact match); the DPs are then grouped by applying a correlation threshold. A correlation image can then be obtained from each group. These are summed and averaged to obtain an overall representative diffraction template for each grouping. Different orientations can be assigned colors, which helps visualize individual grain orientations. With proper tilting and utilizing precession electron diffraction (PED), it is even possible to make 3D tomographic renderings of grain orientation and distribution. Since the technique is computationally intensive, recent efforts have focused on a machine learning approach to the analysis of diffraction patterns. Strain Mapping TEM can measure local strains and is often used to map strain in samples using convergent beam electron diffraction (CBED). 
The basis of this technique is to compare the diffraction pattern of an unstrained region of the sample with that of a strained region to see the changes in the lattice parameter. With STEM, the positions of the discs diffracted from an area of a specimen can provide spatial strain information. The use of this technique with 4D STEM datasets involves fairly involved calculations. Utilizing SEND, bright- and dark-field images can be obtained from diffraction patterns by integration of the direct and diffracted beams respectively, as discussed previously. During 4D STEM operation, the ADF detector can be used to visualize a particular region of interest through the collection of electrons scattered to large angles, in order to correlate probe location with diffraction during measurements. There is a tradeoff between resolution and strain information: larger probes average strain measurements over a large volume, while moving to smaller probe sizes gives higher real-space resolution. There are ways to combat this issue, such as spacing probes further apart than the resolution limit to increase the field of view. This strain mapping technique has been applied to many crystalline materials and has been extended to semi-crystalline and amorphous materials (such as metallic glasses), since they too exhibit deviations from mean atomic spacing in regions of high strain. Phase Contrast Analysis Differential phase contrast The differential phase contrast (DPC) imaging technique can be used in STEM to characterise magnetic and electric fields inside a thin specimen. The electric or magnetic field in samples is estimated by measuring the deflection of the electron beam caused by the field at each scan point. This differs from the more traditional annular dark-field (ADF) measurements in the placement of the detector: it sits in the bright-field area such that the center of mass of the (mostly) unscattered electron beam may be measured. Additionally, segmented or pixelated detectors are used in order to gain the necessary radial resolution. ADF detectors are typically monolithic (single-segment) and are placed in the dark-field region, such that they collect the electrons that have been scattered by the sample. Using DPC to image the local electric fields surrounding single atoms or atomic columns is possible. The use of a pixelated detector in 4D STEM and a computer to track the movement of the "center of mass" of the CBED patterns was found to provide comparable results to those found using segmented detectors. 4D STEM allows phase-change measurements along all directions to be made without the need to rotate a segmented detector to align with the specimen orientation. The ability to measure local polarization in parallel with the local electric field has also been demonstrated with 4D STEM. DPC imaging with 4D STEM is up to two orders of magnitude slower than DPC with segmented detectors and requires advanced analysis of large four-dimensional datasets. Ptychography The overlapping CBED measurements present in a 4D STEM dataset allow for the reconstruction of the complex electron probe and complex sample potential using the ptychography technique. Ptychographic reconstructions with 4D STEM data were shown to provide higher contrast than ADF, BF, ABF, and segmented DPC imaging in STEM. 
The high signal-to-noise ratio of this technique under 4D STEM makes it attractive for imaging radiation-sensitive specimens such as biological samples. The use of a pixelated detector with a hole in the middle, allowing the unscattered electron beam to pass to a spectrometer, has been shown to enable ptychographic analysis in conjunction with chemical analysis in 4D STEM. MIDI STEM The MIDI-STEM (matched illumination and detector interferometry STEM) technique, while less common, is used with ptychography to create higher-contrast phase images. The placement of a phase plate with zones of 0 and π/2 phase shift in the probe-forming aperture creates a series of concentric rings in the resulting CBED pattern. The difference in counts between the 0 and π/2 regions allows for direct measurement of the local sample phase. The counts in the different regions can be measured via complex standard detector geometries or with a pixelated detector in 4D STEM. Pixelated detectors have been shown to support this technique at atomic resolution. MIDI-STEM produces image-contrast information with less high-pass filtering than DPC or ptychography, but is less efficient at high spatial frequencies than those techniques. MIDI-STEM used in conjunction with ptychography has been shown to be more efficient in providing contrast information than either technique individually. See also Electron diffraction Detectors for transmission electron microscopy Energy filtered transmission electron microscopy (EFTEM) High-resolution transmission electron microscopy (HRTEM) Scanning confocal electron microscopy (SCEM) Scanning electron microscope (SEM) Scanning Transmission Electron Microscopy (STEM) Transmission electron microscopy (TEM) References Electron beam Electron microscopy techniques
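The center-of-mass tracking used for DPC imaging above is similarly compact in code. A minimal NumPy sketch (using the same illustrative array layout as in the virtual-imaging sketch; converting the deflection into field units requires calibrations not shown here):

```python
import numpy as np

def center_of_mass_shift(data4d: np.ndarray) -> np.ndarray:
    """Per-scan-position center of mass of the CBED pattern, relative to
    the detector center; proportional to the beam deflection used in DPC."""
    sy, sx, ny, nx = data4d.shape
    ky, kx = np.indices((ny, nx))
    total = data4d.sum(axis=(-2, -1))
    com_y = (data4d * ky).sum(axis=(-2, -1)) / total
    com_x = (data4d * kx).sum(axis=(-2, -1)) / total
    return np.stack([com_y - (ny - 1) / 2, com_x - (nx - 1) / 2], axis=-1)

shifts = center_of_mass_shift(np.random.rand(64, 64, 128, 128))
print(shifts.shape)  # (64, 64, 2): a two-component deflection map
```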
4D scanning transmission electron microscopy
[ "Chemistry" ]
3,706
[ "Electron", "Electron beam" ]
70,302,736
https://en.wikipedia.org/wiki/Dilauroyl%20peroxide
Dilauroyl peroxide is an organic compound with the formula (C11H23CO2)2. A colorless solid, it is often sold as a water-damped solid. It is the symmetrical peroxide of lauric acid. It is produced by treating lauroyl chloride with hydrogen peroxide in the presence of base: 2C11H23COCl + H2O2 + 2NaOH → (C11H23CO2)2 + 2NaCl + 2H2O References Organic peroxides Radical initiators
Dilauroyl peroxide
[ "Chemistry", "Materials_science" ]
111
[ "Radical initiators", "Organic compounds", "Polymer chemistry", "Reagents for organic chemistry", "Organic peroxides" ]
70,312,163
https://en.wikipedia.org/wiki/M.%20Grace%20Burke
Mary Grace Burke is an American materials scientist who is an emeritus professor at the University of Manchester. She was awarded the 2020 International Metallographic Society Henry Clifton Sorby Award and was the 2019-2023 President of the Royal Microscopical Society. Early life and education Burke was raised in Pittsburgh. She remained in Pittsburgh for undergraduate studies, during which she specialized in metallurgical engineering at the University of Pittsburgh. Burke attended Imperial College London, where she did her PhD research on stress corrosion cracking (SCC). Working under the supervision of P. R. Swann and F. J. Humphreys, Burke studied the mechanism of SCC of austenitic stainless steel. Burke was interested in the relationship between materials behavior and microstructure. Research and career After earning her doctorate, Burke returned to the United States, where she worked at the U.S. Steel Research Laboratory in Monroeville, Pennsylvania. She studied thermomechanical processing effects on microstructural evolution in steels, using analytical transmission electron microscopy. She also performed correlative TEM analyses in combination with atom probe field ion microscopy (APFIM). Burke also studied irradiation embrittlement of the steels and alloys used in light water reactor systems. She joined the Westinghouse Science and Technology Center, where she studied a broad range of materials and alloys for nuclear power systems. She transferred to the Bettis Atomic Power Laboratory, where she studied how microstructure impacted the performance of materials. Burke joined the University of Manchester in England as a Professor of Materials Performance and Director of the Materials Performance Centre in 2011. Awards and honors 1995 Elected Fellow of ASM International 2005 President of the Microscopy Society of America 2015 Elected Fellow of the Institute of Materials, Minerals and Mining 2018 Microanalysis Society President's Award 2018 Elected Fellow of the Microanalysis Society 2019 Elected President of the Royal Microscopical Society 2019 Elected Fellow of The Minerals, Metals & Materials Society (TMS) 2020 International Metallographic Society Henry Clifton Sorby Award 2021 Henri Coriou Award References Living people Scientists from Pittsburgh Alumni of Imperial College London University of Pittsburgh alumni Academics of the University of Manchester Materials scientists and engineers 21st-century American scientists 21st-century American women scientists Year of birth missing (living people)
M. Grace Burke
[ "Materials_science", "Engineering" ]
464
[ "Materials scientists and engineers", "Materials science" ]
71,754,236
https://en.wikipedia.org/wiki/CHIMERE%20chemistry-transport%20model
CHIMERE is a chemistry-transport model. It is a computer code that unites a set of equations representing the transport and the chemistry of atmospheric species, making it possible to quantify the evolution of air masses and pollution plumes as a function of time on different scales (from urban to continental). Using meteorological inputs and emission fluxes, CHIMERE calculates three-dimensional concentrations of pollutants in the atmosphere. Due to the input data used, the number of equations that are solved and the physico-chemistry included in the model, CHIMERE is considered a mesoscale model, i.e. it simulates the troposphere (from the surface to 20 hPa) at horizontal resolutions of 1 to 100 km and over study areas ranging from a city to a hemisphere. Simulated pollutants Atmospheric pollutants are gaseous molecules or particles present in the Earth's atmosphere in excess. Beyond a certain concentration threshold, their content can be toxic to vegetation or to human health. These thresholds are different for each pollutant, and surface-level concentrations are monitored hourly. CHIMERE simulates around a hundred gaseous and aerosol chemical species, including those monitored on a daily basis: ozone O3, the nitrogen oxides NO and NO2, particulate matter PM, carbon monoxide CO and sulfur dioxide SO2. Possible applications This numerical model can have several applications: analyze past pollution episodes, by comparing available measurements to model results: this allows not only a better understanding of the mechanics of a particular episode but also highlights the weaknesses of the model and therefore guides the path for future development. run scenarios: in particular by simulating a period for the first time under realistic conditions, then redoing the simulation with, for example, modified emissions. This type of exercise makes it possible to quantify the gain that a decline in emissions could bring or, on the contrary, to estimate the damage in advance in a possible future where emissions keep increasing. carry out air quality forecasts: this is done typically two to three days in advance and over a given region. The CHIMERE model is used by a large number of air quality monitoring agencies in Europe for this purpose. In France, it is notably the model implemented daily for pollution forecasts (as part of the "Air Quality and the Rational Use of Energy" law passed on 30 September 1996) by AIRPARIF in the Paris region and Atmo Grand Est in the Grand Est region. In France at the national level, CHIMERE is the modeling tool implemented by INERIS for the PREVAIR air quality forecasting platform. Basis of the model The CHIMERE model involves three main phases: a data preparation phase (pre-processing) essential for a simulation, the model itself for calculating atmospheric concentrations, and a results exploitation phase (post-processing). This principle is true for all digital tools of this type. Phase 1 - pre-processing: The preprocessing phase covers the preparation of the input information necessary for running a chemistry-transport model throughout its simulation (several days or weeks are calculated with a time step of a few seconds): meteorological fields and emissions (of different sources). Additional inputs are also prepared during this step, which represent the initial and boundary conditions (chemical concentrations) and land surface information (soil and surface types, vegetation). 
Phase 2 - CHIMERE model After reading all the input data, systems of stiff differential equations, including all chemical reactions included in the model (with species having from a few microseconds to several days of lifetime in the atmosphere) are integrated over time and space. At the same time, transport (advection and convection), turbulence, emissions (sources) and dry and wet deposition (sinks) are treated in the form of flows which will increase (in case of sources) or decrease (in case of sinks) the pollutant concentrations for each chemical species, cell by cell and minute by minute. Phase 3 - Post-processing: Post-processing allows to analyze the results of the simulation. Concentration fields at thousands of cell points and hour by hour often represent too much information to draw conclusions from directly. This step makes it possible to calculate scores (by comparing the results of the simulation to surface stations, by which we seek to quantify the precision of the simulations in relation to observations), synthetic maps (for example maximum ozone or particles over a day, average daily SO2 concentration, etc.). Current research with the model The CHIMERE model is under continuous development and a new version of the code is made available to users about once a year. If the regional modeling of gaseous species is relatively well represented at this point, there are still great uncertainties in the simulation of aerosols. Aerosols have different origins (anthropogenic and urban, fire combustion aerosols, or mineral aerosols) and different lifetimes, making modeling them correctly complex. Current research is revolving around impact of air quality on health, including design of models that can compute the exposure of the population to different pollutants but also to around the consideration of new species to monitor (such as pollen, which are highly allergenic). The latest version of the model includes so-called "on-line" effects. Until recently, this type of model was always in "off-line" mode; i.e. the meteorology was pre-calculated and then used to calculate the chemistry and transport of pollutants. In the latest v2020 version, feedback between meteorology and atmospheric chemistry have been implemented, making it possible to calculate the radiative impact of aerosols (direct effects) and cloud formation (indirect effects) more realistically. Development and distribution of the model The model is developed by researchers from the P.S.Laplace CNRS Institute (IPSL). The code is developed under the GNU GPL free software license and is available on a website, http://www.lmd.polytechnique.fr/chimere. References Air pollution in France Computational chemistry
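The score computation described in Phase 3 can be sketched in a few lines. The following is a minimal illustration, not CHIMERE's actual post-processing code; the hourly ozone values are hypothetical placeholders.

```python
import numpy as np

def forecast_scores(observed, modelled):
    """Basic verification scores comparing hourly surface observations
    with co-located model output (mean bias, RMSE, correlation)."""
    obs = np.asarray(observed, dtype=float)
    mod = np.asarray(modelled, dtype=float)
    bias = float(np.mean(mod - obs))                  # mean bias
    rmse = float(np.sqrt(np.mean((mod - obs) ** 2)))  # root-mean-square error
    corr = float(np.corrcoef(obs, mod)[0, 1])         # Pearson correlation
    return {"bias": bias, "rmse": rmse, "correlation": corr}

# Hypothetical 24 hourly ozone concentrations (ug/m3) at one station.
o3_observed = [42, 40, 38, 37, 36, 38, 45, 55, 68, 80, 91, 98,
               102, 104, 101, 95, 86, 75, 64, 56, 50, 47, 45, 43]
o3_modelled = [45, 43, 40, 38, 38, 41, 49, 58, 70, 84, 94, 100,
               103, 103, 99, 92, 84, 74, 62, 55, 51, 48, 46, 44]
print(forecast_scores(o3_observed, o3_modelled))
```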
CHIMERE chemistry-transport model
[ "Chemistry" ]
1,254
[ "Theoretical chemistry", "Computational chemistry" ]
71,765,640
https://en.wikipedia.org/wiki/TMEM144
Transmembrane Protein 144 (TMEM144) is a protein that in humans is encoded by the TMEM144 gene. Gene Transmembrane Protein 144 is located on the plus strand of chromosome 4 (4q32.1), spanning a total of 40,857 base pairs. The TMEM144 gene transcribes an mRNA sequence 3,210 nucleotides in length and composed of 13 exons. Protein There exist two isoforms of human Transmembrane Protein 144. Isoform one consists of 345 amino acids with a total mass of 37.6 kDa and a theoretical isoelectric point of 6.63. The second isoform is 169 amino acids long with a mass of 18.3 kDa. Expression TMEM144 is over-expressed in adult brain tissue with low regional specificity. TMEM144 appears enriched in oligodendrocytes and in immune cells such as dendritic cells and monocytes. Cellular Localization The precise subcellular localization of TMEM144 is uncertain: prediction tools place it in the plasma membrane, endoplasmic reticulum, lysosome/vacuole, or Golgi apparatus, whereas immunofluorescent staining of various human cell lines displays localization to the mitochondria. Post Translational Modifications There exist five predicted post-translational modifications for TMEM144, comprising four sites of phosphorylation and a sumoylation site. Interacting Proteins Several proteins have been observed to physically associate with TMEM144, including Transmembrane Protein 237, Homocysteine-Responsive Endoplasmic Reticulum-Resident Ubiquitin-Like Domain Member 2 Protein, Translocase of Inner Mitochondrial Membrane Domain-Containing Protein 1, Free Fatty Acid Receptor 2, Aquaporin 6, Serine Rich Single-Pass Membrane Protein 1, and Adrenoceptor Beta 2. Homology Orthologs of TMEM144 are found in both vertebrates and invertebrates; the most distant ortholog detected, in the desert locust, suggests the gene arose approximately 694 million years ago. TMEM144 takes approximately 6.8 million years to accumulate a 1% change in its amino acid sequence, indicating a moderately low rate of evolution. Clinical Significance Transmembrane Protein 144 is predicted to be a direct or indirect negative regulator of kisspeptin. High expression of TMEM144 is prognostically favorable for patients with endometrial cancer, whereas in patients with pancreatic cancer, high expression of TMEM144 is associated with poor prognostic outcomes. References Genes on human chromosome 4 Proteins
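Protein parameters of the kind quoted above (molecular mass, theoretical isoelectric point) are computed directly from the primary sequence by standard sequence-analysis tools. A minimal sketch with Biopython follows; the peptide used is a short hypothetical placeholder, not the actual 345-residue TMEM144 isoform.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Hypothetical short peptide, standing in for a real protein sequence.
seq = "MKTLLVAGAVFASYWLLSRQNETPLIRG"
pa = ProteinAnalysis(seq)
print("mass (Da):", round(pa.molecular_weight(), 1))     # sum of residue masses
print("theoretical pI:", round(pa.isoelectric_point(), 2))  # charge-balance pH
```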
TMEM144
[ "Chemistry" ]
568
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
53,280,578
https://en.wikipedia.org/wiki/Box-counting%20content
In mathematics, the box-counting content is an analog of Minkowski content. Definition Let A be a bounded subset of n-dimensional Euclidean space \mathbb{R}^n such that the box-counting dimension D of A exists. The upper and lower box-counting contents of A are defined by \overline{C}(A) = \limsup_{r \to 0^{+}} N(A, r)\, r^{D} and \underline{C}(A) = \liminf_{r \to 0^{+}} N(A, r)\, r^{D}, where N(A, r) is the maximum number of disjoint closed balls with centers a \in A and radii r > 0. If \overline{C}(A) = \underline{C}(A), then the common value, denoted C(A), is called the box-counting content of A. If 0 < C(A) < \infty, then A is said to be box-counting measurable. Examples Let I = [0, 1] denote the unit interval. Note that the box-counting dimension and the Minkowski dimension coincide with a common value of 1; i.e. \dim_B(I) = \dim_M(I) = 1. Now observe that N(I, r) = \lfloor 1/(2r) \rfloor + 1, where \lfloor x \rfloor denotes the integer part of x. Hence I is box-counting measurable with C(I) = 1/2. By contrast, I is Minkowski measurable with Minkowski content equal to 1. See also Box counting References Fractals
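A short numerical check of the example, using the packing count N(I, r) = ⌊1/(2r)⌋ + 1 from above, shows N(I, r)·r approaching the box-counting content 1/2 as r shrinks:

```python
import math

# Unit interval I = [0, 1], box-counting dimension D = 1:
# N(I, r) * r**D should converge to the box-counting content C(I) = 1/2.
for r in (0.1, 0.01, 0.001, 0.0001):
    n = math.floor(1 / (2 * r)) + 1   # max packing of closed balls of radius r
    print(f"r = {r:<8} N = {n:<6} N*r = {n * r:.4f}")
```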
Box-counting content
[ "Mathematics" ]
169
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Fractals", "Mathematical relations" ]
73,189,075
https://en.wikipedia.org/wiki/Koml%C3%B3s%27%20theorem
Komlós' theorem is a theorem from probability theory and mathematical analysis about the Cesàro convergence of a subsequence of random variables (or functions) and their subsequences to an integrable random variable (or function). It is also an existence theorem for an integrable random variable (or function). There exist a probabilistic and an analytical version for finite measure spaces. The theorem was proven in 1967 by János Komlós. There exists also a generalization from 1970 by Srishti D. Chatterji. Komlós' theorem Probabilistic version Let (\Omega, \mathcal{F}, P) be a probability space and \xi_1, \xi_2, \dots be a sequence of real-valued random variables defined on this space with \sup_n \mathbb{E}[|\xi_n|] < \infty. Then there exists a random variable \psi \in L^1(P) and a subsequence (\xi_{n_k}), such that for every arbitrary subsequence (\eta_m) = (\xi_{n_{k_m}}), when m \to \infty, \frac{\eta_1 + \cdots + \eta_m}{m} \to \psi P-almost surely. Analytic version Let (E, \mathcal{A}, \mu) be a finite measure space and f_1, f_2, \dots be a sequence of real-valued functions in L^1(\mu) with \sup_n \|f_n\|_{L^1(\mu)} < \infty. Then there exists a function f \in L^1(\mu) and a subsequence (f_{n_k}) such that for every arbitrary subsequence (g_m) = (f_{n_{k_m}}), if m \to \infty, then \frac{g_1 + \cdots + g_m}{m} \to f \mu-almost everywhere. Explanations So the theorem says that the chosen subsequence and all of its further subsequences converge in the Cesàro sense. Literature Kabanov, Yuri & Pergamenshchikov, Sergei. (2003). Two-scale stochastic systems. Asymptotic analysis and control. 10.1007/978-3-662-13242-5. Page 250. References Probability theorems Theorems in analysis
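The Cesàro convergence in the conclusion can be illustrated numerically. In the i.i.d. integrable case the strong law of large numbers already makes the full sequence (no subsequence extraction needed) Cesàro-convergent almost surely, which is the simplest instance of the behaviour the theorem guarantees for general L1-bounded sequences; the exponential distribution below is an arbitrary illustrative choice.

```python
import random

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100_000)]  # i.i.d., E[X] = 1

partial = 0.0
for n, x in enumerate(xs, start=1):
    partial += x
    if n in (10, 100, 1_000, 10_000, 100_000):
        # Cesaro means approach the limit random variable (here the constant 1)
        print(f"n = {n:>6}  Cesaro mean = {partial / n:.4f}")
```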
Komlós' theorem
[ "Mathematics" ]
308
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in probability theory", "Mathematical problems", "Mathematical theorems" ]
73,190,019
https://en.wikipedia.org/wiki/Four-dimensional%20Chern%E2%80%93Simons%20theory
In mathematical physics, four-dimensional Chern–Simons theory, also known as semi-holomorphic or semi-topological Chern–Simons theory, is a quantum field theory initially defined by Nikita Nekrasov, rediscovered and studied by Kevin Costello, and later by Edward Witten and Masahito Yamazaki. It is named after mathematicians Shiing-Shen Chern and James Simons who discovered the Chern–Simons 3-form appearing in the theory. The gauge theory has been demonstrated to be related to many integrable systems, including exactly solvable lattice models such as the six-vertex model of Lieb and the Heisenberg spin chain, and integrable field theories such as principal chiral models, symmetric space coset sigma models and Toda field theory, although the integrable field theories require the introduction of two-dimensional surface defects. The theory is also related to the Yang–Baxter equation and quantum groups such as the Yangian. The theory is similar to three-dimensional Chern–Simons theory which is a topological quantum field theory, and the relation of 4d Chern–Simons theory to the Yang–Baxter equation bears similarities to the relation of 3d Chern–Simons theory to knot invariants such as the Jones polynomial discovered by Witten. Formulation The theory is defined on a 4-dimensional manifold which is a product of two 2-dimensional manifolds: M = \Sigma \times C, where \Sigma is a smooth orientable 2-dimensional manifold, and C is a complex curve (hence has real dimension 2) endowed with a meromorphic one-form \omega. The field content is a gauge field A. The action is given by wedging the Chern–Simons 3-form \mathrm{CS}(A) = \mathrm{Tr}\big(A \wedge dA + \tfrac{2}{3} A \wedge A \wedge A\big) with \omega: S[A] = \frac{1}{2\pi} \int_{\Sigma \times C} \omega \wedge \mathrm{CS}(A). Restrictions on underlying manifolds A heuristic puts strong restrictions on the C to be considered. This theory is studied perturbatively, in the limit that the Planck constant \hbar \to 0. In the path integral formulation, the action will contain a ratio \omega/\hbar. Therefore, zeroes of \omega naïvely correspond to points at which \hbar \to \infty, at which point perturbation theory breaks down. So \omega may have poles, but not zeroes. A corollary of the Riemann–Roch theorem relates the degree of the canonical divisor defined by \omega (equal to the difference between the number of zeros and poles of \omega, with multiplicity) to the genus g of the curve C, giving \deg(\omega) = 2g - 2. Then imposing that \omega has no zeroes, g must be 0 or 1. In the latter case, \omega has no poles and C is a complex torus \mathbb{C}/\Lambda (with \Lambda a 2d lattice). If g = 0, then C is the complex projective line. The form \omega has two poles (counted with multiplicity); either a single pole with multiplicity 2, in which case it can be realized as \omega = dz on C = \mathbb{C}, or two poles of multiplicity one, which can be realized as \omega = dz/z on C = \mathbb{C}^{\times}. Therefore C is either a complex plane, cylinder or torus. There is also a topological restriction on \Sigma, due to a possible framing anomaly. This imposes that \Sigma must be a parallelizable 2d manifold, which is also a strong restriction: for example, if \Sigma is compact, then it is a torus. Surface defects and field theories The above is sufficient to obtain spin chains from the theory, but to obtain 2-dimensional integrable field theories, one must introduce so-called surface defects. A surface defect, often labelled D, is a 2-dimensional 'object' which is considered to be localized at a point z on the complex curve C but covers \Sigma, which is fixed to be \mathbb{R}^2 for engineering integrable field theories. This defect D is then the space on which a 2-dimensional field theory lives, and this theory couples to the bulk gauge field A.
Supposing the bulk gauge field A has gauge group G, the field theory on the defect can interact with the bulk gauge field if it has global symmetry group G, so that it has a current J which can couple via a term which is schematically \int_{\Sigma \times \{z\}} \langle J, A \rangle. In general, one can have multiple defects D_1, \dots, D_n localized at points z_1, \dots, z_n, and the action for the coupled theory is then, schematically, S[A, \{\phi_i\}] = \frac{1}{2\pi} \int_{\Sigma \times C} \omega \wedge \mathrm{CS}(A) + \sum_{i=1}^n \int_{\Sigma \times \{z_i\}} \mathcal{L}_i\big(\phi_i(w); A(w, z_i)\big), with \phi_i the collection of fields for the field theory on D_i, and w the coordinates for \Sigma. There are two distinct classes of defects: Order defects, which introduce new degrees of freedom on the defect which couple to the bulk gauge field. Disorder defects, where the bulk gauge field A has some singularities. Order defects are easier to define, but disorder defects are required to engineer many of the known 2-dimensional integrable field theories. Systems described by 4d Chern–Simons theory Spin chains Six-vertex model Eight-vertex model XXZ Heisenberg spin-chain Integrable field theories Gross–Neveu model Thirring model Wess–Zumino–Witten model Principal chiral model and deformations Symmetric space coset sigma models Master theories of integrable systems 4d Chern–Simons theory is a 'master theory' for integrable systems, providing a framework that incorporates many integrable systems. Another theory which shares this feature, but with a Hamiltonian rather than Lagrangian description, is classical affine Gaudin models with a 'dihedral twist', and the two theories have been shown to be closely related. Another 'master theory' for integrable systems is the anti-self-dual Yang–Mills (ASDYM) system. Ward's conjecture is the conjecture that in fact all integrable ODEs or PDEs come from ASDYM. A connection between 4d Chern–Simons theory and ASDYM has been found: they both descend from a six-dimensional holomorphic Chern–Simons theory defined on twistor space. The derivations of integrable systems from this 6d Chern–Simons theory through the alternate routes of 4d Chern–Simons theory and ASDYM in fact fit into a commuting square. See also Chern–Simons theory Integrable system Classical Gaudin model Anti-self-dual Yang–Mills equations External links nLab page References Quantum field theory Integrable systems
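The constraint on the curve C derived above can be recorded as a short worked summary. The pairing of each allowed curve with a class of solutions of the Yang–Baxter equation is the standard rational/trigonometric/elliptic dictionary in this literature, stated here informally as an addition to the article text:

```latex
% Allowed curves (no zeros of omega, deg(omega) = 2g - 2 <= 0) and the
% class of R-matrices each case is associated with:
\begin{array}{lll}
C = \mathbb{C}, & \omega = \mathrm{d}z, & \text{rational } R\text{-matrices (Yangian)}\\[2pt]
C = \mathbb{C}^{\times}, & \omega = \mathrm{d}z/z, & \text{trigonometric } R\text{-matrices (quantum affine algebras)}\\[2pt]
C = \mathbb{C}/\Lambda, & \omega = \mathrm{d}z, & \text{elliptic } R\text{-matrices}
\end{array}
```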
Four-dimensional Chern–Simons theory
[ "Physics" ]
1,215
[ "Integrable systems", "Quantum field theory", "Theoretical physics", "Quantum mechanics" ]
68,789,736
https://en.wikipedia.org/wiki/Lenglart%27s%20inequality
In the mathematical theory of probability, Lenglart's inequality was proved by Érik Lenglart in 1977. Later slight modifications are also called Lenglart's inequality. Statement Let X be a non-negative right-continuous \mathcal{F}_t-adapted process and let G be a non-negative right-continuous non-decreasing predictable process such that \mathbb{E}[X_\tau] \le \mathbb{E}[G_\tau] < \infty for any bounded stopping time \tau. Then, for all c, d > 0, \mathbb{P}\big(\sup_{t \ge 0} X_t \ge c\big) \le \frac{1}{c}\,\mathbb{E}[G_\infty \wedge d] + \mathbb{P}(G_\infty \ge d). References Citations General sources Stochastic differential equations Articles containing proofs Probabilistic inequalities
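A typical way the inequality is used (a sketch, assuming the statement above and a local martingale M with M_0 = 0 whose square is dominated by its predictable quadratic variation):

```latex
% With X = M^2 and G = <M>, so that E[M_tau^2] <= E[<M>_tau] for every
% bounded stopping time tau, Lenglart's inequality yields, for all c, d > 0,
\mathbb{P}\Big(\sup_{t \ge 0} |M_t| \ge c\Big)
  \;\le\; \frac{1}{c^{2}}\,\mathbb{E}\big[\langle M\rangle_{\infty} \wedge d\big]
  \;+\; \mathbb{P}\big(\langle M\rangle_{\infty} \ge d\big),
% a standard tool for controlling a local martingale by its bracket.
```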
Lenglart's inequality
[ "Mathematics" ]
96
[ "Theorems in probability theory", "Probabilistic inequalities", "Articles containing proofs", "Inequalities (mathematics)" ]
68,789,853
https://en.wikipedia.org/wiki/Stochastic%20Gronwall%20inequality
Stochastic Gronwall inequality is a generalization of Gronwall's inequality and has been used for proving the well-posedness of path-dependent stochastic differential equations with local monotonicity and coercivity assumptions with respect to the supremum norm. Statement Let X(t), t \ge 0, be a non-negative right-continuous \mathcal{F}_t-adapted process. Assume that A(t) is a deterministic non-decreasing càdlàg function with A(0) = 0, and let H(t) be a non-decreasing, càdlàg, adapted process starting from H(0) \ge 0. Further, let M(t) be an \mathcal{F}_t-local martingale with M(0) = 0 and càdlàg paths. Assume that for all t \ge 0, X(t) \le \int_0^t X(s^-)\,\mathrm{d}A(s) + M(t) + H(t). Then, for every p \in (0, 1), there is a finite constant c_p depending only on p (its optimal value differs between the cases below) such that an estimate of the form \mathbb{E}\big[\sup_{s \le t} X(s)^p\big] \le c_p\, e^{p A(t)}\, \mathbb{E}\big[H(t)^p\big], t \ge 0, holds if M is predictable, and also if M has no negative jumps; in the general case, a corresponding bound holds with \sup_{s \le t} X(s) replaced by X(t). Proof It has been proven by means of Lenglart's inequality. References Stochastic differential equations Articles containing proofs Probabilistic inequalities
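As a sanity check (not part of the original statement), setting M ≡ 0 recovers the shape of the classical integral-form Gronwall inequality:

```latex
% Deterministic special case: if X_t \le H_t + \int_0^t X_{s^-}\,\mathrm{d}A_s
% with H non-decreasing and A non-decreasing, A(0) = 0, then iterating the
% inequality (or using 1 + x \le e^x on the product form) yields
X_t \;\le\; H_t\, e^{A_t}, \qquad t \ge 0,
% which is the bound the stochastic version reproduces up to the constant
% c_p and the moment p \in (0,1) needed to control the martingale term M.
```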
Stochastic Gronwall inequality
[ "Mathematics" ]
185
[ "Theorems in probability theory", "Probabilistic inequalities", "Articles containing proofs", "Inequalities (mathematics)" ]
68,793,989
https://en.wikipedia.org/wiki/Solid-phase%20reversible%20immobilization
Solid-phase reversible immobilization, or SPRI, is a method of purifying nucleic acids from solution. It uses silica- or carboxyl-coated paramagnetic beads, which reversibly bind to nucleic acids in the presence of polyethylene glycol and a salt. A common application of SPRI technology is purifying samples of DNA amplified by PCR for sequencing reactions. Use in nucleic acid purification SPRI beads are paramagnetic beads coated with silica or carboxyl groups. When the beads are resuspended in solutions with high concentrations of polyethylene glycol and salts, they are capable of binding reversibly to nucleic acids. This binding is size selective, in that longer polymers of nucleic acids bind more efficiently than shorter ones. A SPRI purification typically includes the following steps: SPRI beads in a solution of polyethylene glycol and sodium chloride are mixed with a sample of nucleic acids. The nucleic acids bind to the beads. The mixture is placed in a magnetic field, which separates the nucleic-acid-bound beads from the solution. The solution is removed and the beads are washed multiple times with 80% ethanol in water. The beads are allowed to dry to remove residual ethanol. The beads are removed from the magnetic field and resuspended in water or an elution buffer, which releases the nucleic acids from the beads. The mixture is once again placed in a magnetic field, separating the beads from the solution. The solution, which now contains the purified nucleic acids, is removed and used for downstream applications. See also Nucleic acid methods DNA sequencing Solid-phase extraction References Nucleic acids
Solid-phase reversible immobilization
[ "Chemistry" ]
360
[ "Biomolecules by chemical classification", "Nucleic acids" ]
68,796,562
https://en.wikipedia.org/wiki/Daniele%20Dini
Daniele Dini is an Italian/British mechanical engineer. He is a Professor of Tribology at Imperial College London, where he is Head of the Tribology Group. Tribology is the science and engineering of friction, lubrication and wear. Education Dini received an M.Eng. degree in Mechanical Engineering from the Politecnico di Bari, Italy, in 2000. He then studied for a D.Phil. in the Department of Engineering Science at the University of Oxford, which he obtained in 2004. His D.Phil. research was performed under the supervision of Professor David Hills. Research and career Dini is currently the Shell-Royal Academy of Engineering Chair in Complex Engineering Interfaces. Previously, he was an Engineering and Physical Sciences Research Council (EPSRC) Established Career Fellow. Dini has published over 250 peer-reviewed papers in the field of tribology. According to Google Scholar, his research has been cited over 6000 times and he has an h-index of 43. He is an expert in the modelling and simulation of tribological systems across scales. Dini was promoted to full Professor in 2017; his inaugural lecture was entitled 'Releasing friction's potential'. In the same year, he succeeded Professor Hugh Spikes as Head of the Tribology Group. He is an Assistant Editor of the Elsevier journal International Journal of Solids and Structures and is on the International Advisory Editorial Board of Tribology International. He is a Co-Director of both the Shell University Technology Centre (UTC) for Fuels and Lubricants and the SKF UTC, which are based in the Department of Mechanical Engineering at Imperial College London. He is also currently Director of Research for the Department of Mechanical Engineering at Imperial College London. Honours and awards Dini is the recipient of a number of awards, including: the Tribology Bronze Medal (IMechE, 2004); the Jacob Wallenberg Foundation Award (Royal Swedish Academy of Engineering Sciences, 2007); three best paper awards: the Thomas Bernard Hall Prize (IMechE, 2008 and 2010) and the Kenneth L. Johnson Award (ASME, 2012); and Teaching Excellence in Engineering Education (Imperial College London, 2014). He was the recipient of the prestigious EPSRC Established Career Fellowship, awarded in 2016. The strong links of his group with industrial partners were recognised through the Imperial College President's Award and Medal for Excellence in External Collaboration and Partnerships in 2017. He received the Donald Julius Groen Prize from the IMechE in 2019. In 2022, Dini was presented with the inaugural Peter Jost Tribology Award from the International Tribology Council at the 7th World Tribology Congress. Dini received the Tribology Silver Medal from the IMechE in 2022. In 2014, Dini was elected as a Fellow of the Institution of Mechanical Engineers (FIMechE). In 2021, he was also elected as a Fellow of the Society of Tribologists and Lubrication Engineers (FSTLE) and a Fellow of the Royal Academy of Engineering (FREng). References External links Fellows of the Royal Academy of Engineering Fellows of the Institution of Mechanical Engineers British mechanical engineers Tribologists Alumni of the University of Oxford Academics of Imperial College London Living people Year of birth missing (living people)
Daniele Dini
[ "Materials_science" ]
663
[ "Tribology", "Tribologists" ]
68,796,711
https://en.wikipedia.org/wiki/Rural-Urban%20gradient
The Rural-Urban gradient is a gradient used to describe how Anthropocene effects alter their surroundings and how affected areas compare to areas less touched by such effects. These effects include, but are not limited to, disturbance, change in biota, pollution, and landscape modification. Mainly used in the context of ecosystem services, the gradient has also been used to describe biodiversity and behavioral change. Research Individual research on the topic is often done by taking multiple samples along a transect running outwards from a city center. At first, research mainly focused on characteristics of land cover structures, the biota of rural-urban areas and socio-economic structures. Nowadays, however, research also focuses on many ecosystem services, as well as on biodiversity and evolution. Ecosystem services In ecosystem services, rural-urban gradients have shown that Anthropocene effects alter their surroundings in multiple ways. For example, research has shown that energy consumption increases with increasing population and industrialization. As of now, there is no clear pattern in how ecosystem services are affected by the rural-urban gradient, as this still differs widely between cities and depends on other factors. Biodiversity In biodiversity, the rural-urban gradient is sometimes also used to describe the distribution of species richness along the gradient. It is known that for most groups of organisms, species richness decreases when urbanization is high. However, when urbanization is at a low to medium level, species richness tends to increase. These areas are mostly suburban, with low-density housing, and there are several reasons why species richness tends to be higher there. One is the large presence of private gardens. In these gardens a great floral diversity exists, largely consisting of non-native plants. This, combined with the combined size of all the gardens, creates a large, diverse floral area, attracting more fauna than the more urbanized cores of cities. In turn, this also creates a greater species richness than both the more urbanized city cores and the rural lands further away from the city. Another factor in biodiversity on the rural-urban gradient is the effect of invasive and introduced species. With an increase in human activity comes a greater introduction of non-native species. This, combined with research showing that traffic corridors help disperse non-native species, means that non-native species also follow a rural-urban gradient, with the highest concentration in the cities and lower concentrations moving outwards from the city. In evolution The rural-urban gradient is also studied in the light of evolution. Research on the house sparrow (Passer domesticus) has shown that populations along a rural-urban gradient can genetically differentiate from one another over relatively small distances. In contrast, research on the black-headed gull (Chroicocephalus ridibundus) has shown that this genetic differentiation does not always appear along a rural-urban gradient, as the research did not show any significant difference between the genetic make-up of urban and rural populations. Behavior In behavioral biology, the rural-urban gradient has mainly been studied in the context of songbirds. Research on European blackbirds (Turdus merula) has shown that there is significant variation in the songs of the European blackbird along a rural-urban gradient.
This is probably to prevent the song from being masked by background noise. However, since the different populations are not isolated, it is unclear whether this is an evolutionary change or part of behavioral plasticity. References Urban economics Urban planning Anthropology
Rural-Urban gradient
[ "Engineering" ]
718
[ "Urban planning", "Architecture" ]
68,797,857
https://en.wikipedia.org/wiki/Ionometallurgy
Mineral processing and extraction of metals are very energy-intensive processes, which also produce large volumes of solid residues and wastewater that in turn require energy to be treated and disposed of. Moreover, as the demand for metals increases, the metallurgical industry must rely on sources of materials with lower metal contents from both primary (e.g., mineral ores) and secondary (e.g., slags, tailings, municipal waste) raw materials. Consequently, mining activities and waste recycling must evolve towards the development of more selective, efficient and environmentally friendly mineral and metal processing routes. Mineral processing operations are needed first to concentrate the mineral phases of interest and reject the unwanted material physically or chemically associated with a defined raw material. The process, however, demands about 30 GJ/tonne of metal, which accounts for about 29% of the total energy spent on mining in the USA. Meanwhile, pyrometallurgy is a significant producer of greenhouse gas emissions and harmful flue dust. Hydrometallurgy entails the consumption of large volumes of lixiviants such as H2SO4, HCl, KCN and NaCN, which have poor selectivity. Moreover, despite the environmental concern and the use restrictions imposed by some countries, cyanidation is still considered the prime process technology to recover gold from ores. Mercury is also used by artisanal miners in less economically developed countries to concentrate gold and silver from minerals, despite its obvious toxicity. Bio-hydrometallurgy makes use of living organisms, such as bacteria and fungi, and although this method demands only the input of O2 and CO2 from the atmosphere, it requires low solid-to-liquid ratios and long contact times, which significantly reduces space-time yields. Ionometallurgy makes use of non-aqueous ionic solvents such as ionic liquids (ILs) and deep eutectic solvents (DESs), which allows the development of closed-loop flow sheets to effectively recover metals by, for instance, integrating the metallurgical unit operations of leaching and electrowinning. It allows metals to be processed at moderate temperatures in a non-aqueous environment which allows controlling metal speciation, tolerates impurities and at the same time exhibits suitable solubilities and current efficiencies. This simplifies conventional processing routes and allows a substantial reduction in the size of a metal processing plant. Metal extraction with ionic fluids DESs are fluids generally composed of two or three cheap and safe components that are capable of self-association, often through hydrogen-bond interactions, to form eutectic mixtures with a melting point lower than that of each individual component. DESs are generally liquid at temperatures lower than 100 °C, and they exhibit similar physico-chemical properties to traditional ILs, while being much cheaper and environmentally friendlier. Most of them are mixtures of choline chloride and a hydrogen-bond donor (e.g., urea, ethylene glycol, malonic acid) or mixtures of choline chloride with a hydrated metal salt. Other choline salts (e.g. acetate, citrate, nitrate) have much higher costs or need to be synthesised, and the DESs formulated from these anions are typically much more viscous and can have higher conductivities than those based on choline chloride. This results in lower plating rates and poorer throwing power, and for this reason chloride-based DES systems are still favoured.
For instance, Reline (a 1:2 molar mixture of choline chloride and urea) has been used to selectively recover Zn and Pb from a mixed metal oxide matrix. Similarly, Ethaline (a 1:2 molar mixture of choline chloride and ethylene glycol) facilitates metal dissolution in the electropolishing of steels. DESs have also demonstrated promising results in recovering metals from complex mixtures such as Cu/Zn and Ga/As, and precious metals from minerals. It has also been demonstrated that metals can be recovered from complex mixtures by electrocatalysis, using a combination of DESs as lixiviants and an oxidising agent, while metal ions can be simultaneously separated from the solution by electrowinning. Recovery of precious metals by ionometallurgy Precious metals are rare, naturally occurring metallic chemical elements of high economic value. Chemically, the precious metals tend to be less reactive than most elements. They include gold and silver, but also the so-called platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum (see precious metals). Extraction of these metals from their corresponding host minerals would typically require pyrometallurgy (e.g., roasting), hydrometallurgy (cyanidation), or both as processing routes. Early studies have demonstrated that the gold dissolution rate in Ethaline compares very favourably with the cyanidation method, and it is further enhanced by the addition of iodine as an oxidising agent. In an industrial process the iodine has the potential to be employed as an electrocatalyst, whereby it is continuously recovered in situ from the reduced iodide by electrochemical oxidation at the anode of an electrochemical cell. Dissolved metals can be selectively deposited at the cathode by adjusting the electrode potential. The method also allows better selectivity, as part of the gangue (e.g., pyrite) tends to dissolve more slowly. Sperrylite (PtAs2) and moncheite (PtTe2), which are typically the more abundant platinum minerals in many orthomagmatic deposits, do not react under the same conditions in Ethaline because disulphide (pyrite), diarsenide (sperrylite) and ditelluride (calaverite and moncheite) minerals are particularly resistant to iodine oxidation. The reaction mechanism by which the dissolution of platinum minerals takes place is still under investigation. Metal recovery from sulfide minerals with ionometallurgy Metal sulfides (e.g., pyrite FeS2, arsenopyrite FeAsS, chalcopyrite CuFeS2) are normally processed by chemical oxidation, either in aqueous media or at high temperatures. In fact, most base metals, e.g., aluminium and chromium, must be (electro)chemically reduced at high temperatures, so the process entails a high energy demand, and sometimes large volumes of aqueous waste are generated. In aqueous media chalcopyrite, for instance, is more difficult to dissolve chemically than covellite and chalcocite due to surface effects (formation of polysulfide species). The presence of Cl− ions has been suggested to alter the morphology of any sulfide surface formed, allowing the sulfide mineral to leach more easily by preventing passivation. DESs provide a high Cl− ion concentration and a low water content, while reducing the need for either high additional salt or acid concentrations, circumventing most oxide chemistry. Thus, the electrodissolution of sulfide minerals has demonstrated promising results in DES media in the absence of passivation layers, with the release into solution of metal ions which can then be recovered.
During the extraction of copper from copper sulfide minerals with Ethaline, chalcocite (Cu2S) and covellite (CuS) produce a yellow solution, indicating that the [CuCl4]2− complex is formed. Meanwhile, in the solution formed from chalcopyrite, Cu2+ and Cu+ species co-exist due to the generation of reducing Fe2+ species at the cathode. The best selective recovery of copper (>97%) from chalcopyrite can be obtained with a mixed DES of 20 wt.% ChCl-oxalic acid and 80 wt.% Ethaline. Metal recovery from oxide compounds with ionometallurgy Recovery of metals from oxide matrixes is generally carried out using mineral acids. However, electrochemical dissolution of metal oxides in DESs can enhance the dissolution by a factor of more than 10,000 in pH-neutral solutions. Studies have shown that ionic oxides such as ZnO tend to have high solubility in ChCl:malonic acid, ChCl:urea and Ethaline, which can resemble their solubilities in aqueous acidic solutions, e.g., HCl. Covalent oxides such as TiO2, however, exhibit almost no solubility. The electrochemical dissolution of metal oxides is strongly dependent on the proton activity from the HBD, i.e. the capability of the protons to act as oxygen acceptors, and on the temperature. It has been reported that eutectic ionic fluids of lower pH, such as ChCl:oxalic acid and ChCl:lactic acid, allow a better solubility than those of higher pH (e.g., ChCl:acetic acid). Hence, different solubilities can be obtained by using, for instance, different carboxylic acids as the HBD. Outlook Currently, the stability of most ionic liquids under practical electrochemical conditions is unknown, and the fundamental choice of ionic fluid is still empirical, as there is almost no data on metal-ion thermodynamics to feed into solubility and speciation models. Also, there are no Pourbaix diagrams available, no standard redox potentials, and scant knowledge of speciation or pH values. It must be noted that most processes reported in the literature involving ionic fluids have a Technology Readiness Level (TRL) of 3 (experimental proof of concept) or 4 (technology validated in the lab), which is a disadvantage for short-term implementation. However, ionometallurgy has the potential to effectively recover metals in a more selective and sustainable way, as it relies on environmentally benign solvents, reduction of greenhouse gas emissions and avoidance of corrosive and harmful reagents. References Metallurgy
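Since DESs such as Reline and Ethaline are specified by molar ratios, batch preparation reduces to simple mole-to-mass arithmetic. The sketch below assumes the usual 1:2 salt-to-donor ratio and textbook molar masses; it is an illustration of the arithmetic only, not a laboratory procedure (actual DES preparation also involves mixing and heating steps not shown).

```python
# Molar masses in g/mol.
M = {"choline chloride": 139.62, "urea": 60.06, "ethylene glycol": 62.07}

def des_masses(hbd, ratio, total_mass_g):
    """Masses of choline chloride and hydrogen-bond donor (HBD)
    needed for a batch of a ChCl:HBD eutectic at a given molar ratio."""
    m_salt, m_hbd = M["choline chloride"], M[hbd]
    per_mole_salt = m_salt + ratio * m_hbd   # g of mixture per mol of ChCl
    n_salt = total_mass_g / per_mole_salt    # mol of ChCl required
    return n_salt * m_salt, n_salt * ratio * m_hbd

for hbd in ("urea", "ethylene glycol"):
    salt_g, hbd_g = des_masses(hbd, ratio=2, total_mass_g=100.0)
    print(f"100 g ChCl:{hbd} (1:2) -> {salt_g:.1f} g ChCl + {hbd_g:.1f} g {hbd}")
```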
Ionometallurgy
[ "Chemistry", "Materials_science", "Engineering" ]
2,087
[ "Metallurgy", "Materials science" ]
68,801,278
https://en.wikipedia.org/wiki/Ecological%20evolutionary%20developmental%20biology
Ecological evolutionary developmental biology (eco-evo-devo) is a field of biology combining ecology, developmental biology and evolutionary biology to examine their relationship. The effects studied by eco-evo-devo can be the result of developmental plasticity, of symbiotic relationships, or of epigenetic inheritance. The overlap between developmental plasticity and symbioses, rooted in evolutionary concepts, defines ecological evolutionary developmental biology. Host–microorganism interactions during development characterize symbiotic relationships, whilst the spectrum of phenotypes rooted in canalization in response to environmental cues highlights plasticity. Developmental plasticity that is controlled by environmental temperature may put certain species at risk as a result of climate change. Phenotypic plasticity Phenotypic or developmental plasticity is the alteration of development through environmental factors. These factors can induce multiple types of variants that increase the fitness of an organism based on the environment it is in. These alterations can serve defense, predation, sex determination, and sexual selection. Plasticity-driven adaptation acts on evolution in three ways: by phenotypic accommodation, genetic accommodation, and genetic assimilation. Phenotypic accommodation is when an organism adjusts its phenotype to better fit its environment without the change being genetically induced. The trait that is selected by the environment through phenotypic accommodation can then be integrated into the genome; this process is called genetic accommodation. Genetic accommodation allows traits that were produced by the environment to be passed on, and it gives better responses to environmental changes. Lastly, genetic assimilation is when the induced phenotype is fixed into the genome. The trait is no longer environmentally induced. At this stage plasticity is lost, because when the environmental stimulus is removed the phenotype still remains. In some cases species change their environment to suit themselves. This phenomenon is called niche construction. These organisms can change unfavorable conditions to fit their needs. Such changes relieve selective pressures and confer advantages the organisms would not otherwise have, such as creating shelters like nests and burrows, modifying the environment physically or chemically, or making shade. Epigenetic inheritance Epigenetic inheritance is the inheritance of epigenetic marks on the DNA induced by environmental factors. A simple example of this is paramutation, first described in plants, in which one allele alters the homologous allele, as seen in alleles affecting the shape or color of the seed. These marks alter gene expression patterns, which can be transmitted to the next generation. This means that environmental cues can influence the development of the organism's offspring. This is reminiscent of the evolutionary theory of Lamarck, who stated that an organism can pass on physical characteristics acquired through use or disuse during its lifetime to its offspring. Although Lamarck's theory is not correct as stated (many organisms carry traits or genes they do not use), epigenetic inheritance shows that environmental factors such as temperature or food availability during the parent's life can impact the development of the offspring. An example of this is nutrition during youth: genes are not the only factor controlling development.
Poor nutrition can slow down and heavily delay the normal progression of puberty in a child, and it can also cause genes that were silent to become activated and other genes to turn off. Such effects are rarely considered, yet factors like malnutrition and temperature experienced by one organism can affect the following generations of that organism. Symbiotic interactions Symbiosis describes the relationship between two species living closely together in an environment, and symbiotic interactions are significant influences on eco-evo-devo dynamics. Many symbiotic organisms have co-evolved and, over time, have become reliant on these relationships. The effect on either involved organism may be positive, neutral, or negative, and these effects are used to broadly categorize different types of symbiotic relationships. Symbiotic relationships generally fall into the categories of mutualism, commensalism, parasitism/predation, amensalism, or competition, although other categorizations may be used to describe more complex or uncommon interactions. The relationship between clownfish and anemones is one example of a mutualistic symbiosis. Mutualisms are particularly common between ectotherms, making these symbiotic relationships some of the most threatened by climate change. Climate change Climate change may alter the development of organisms. As a type of developmental plasticity, the sex determination of particular animals can be influenced by the temperature of the environment. Some reptiles and ray-finned fish rely on temperature-dependent sex determination (TSD). The determination takes place during a specific period of embryonic development. Although the exact mechanisms of this type of sex determination remain unknown for most species, temperature-sensitive proteins that determine the sex of alligators have been found. The effects of rising temperatures can already be seen in animals, for example the green sea turtle. Sea turtles produce more females when exposed to higher temperatures. As a result, adult green turtle populations are currently 65% female on cooler beaches, but can reach 85% on their warmer nesting beaches. In contrast to the rising female proportion of sea turtles, fish that use TSD, such as the southern flounder, generally produce more males in response to higher temperatures. Species whose sex determination is strongly influenced by temperature may be particularly at risk from climate change. From an evolutionary standpoint, sea turtles' sex chromosomes differ from those of other species of reptiles, and this difference makes them susceptible to TSD. Researchers believe this phenomenon is worth studying, as climate change may one day have an effect on other types of vertebrates. Rising global temperatures may decrease the amount of genetic variation, hurting a species' chances of survival. A large gene pool is crucial for adapting to environmental conditions and disease. Climate change can lower the amount of genetic diversity in a population over time, which is extremely detrimental to the overall fitness of individuals in that population. Climate change affects more than just animals when it comes to development. It affects people as well, especially those in developing countries. For example, expecting mothers in areas where droughts are more common due to climate change may suffer from dehydration, which can have harmful effects on their child's development.
Dehydration can lower amniotic fluid levels, which directly affects the baby's development and can even cause premature birth. Malnutrition in children is a huge problem in developing countries. Rising global temperatures can alter growing seasons for certain food groups, making it hard for children to get the nutrients they need for healthy development. Ecological evolutionary developmental biology connects these subfields of biology, with the interaction between organisms and the environment at its core. Climate change intensely alters these interactions and is cause for concern for the overall well-being of our ecological landscape. Climate change drastically affects humans, animals, plants, and bacteria and their symbiotic relationships with each other. It is important for scientists, researchers, and people around the world to work together to find the best strategies to preserve biological diversity and to slow the rise in global temperatures and the effects of climate change. See also Climate change mitigation References Ecology Evolutionary biology Developmental biology
Ecological evolutionary developmental biology
[ "Biology" ]
1,510
[ "Evolutionary biology", "Behavior", "Developmental biology", "Reproduction", "Ecology" ]
76,101,893
https://en.wikipedia.org/wiki/Altermagnetism
In condensed matter physics, altermagnetism is a type of persistent magnetic state in ideal crystals. Altermagnetic structures are collinear and crystal-symmetry compensated, resulting in zero net magnetisation. Unlike in an ordinary collinear antiferromagnet, another magnetic state with zero net magnetization, the electronic bands in an altermagnet are not Kramers degenerate, but instead depend on the wavevector in a spin-dependent way. Related to this feature, key experimental observations were published in 2024. It has been speculated that altermagnetism may have applications in the field of spintronics. Crystal structure and symmetry In altermagnetic materials, atoms form a regular pattern with alternating spin and spatial orientation at adjacent magnetic sites in the crystal. Atoms with opposite magnetic moment are, in altermagnets, coupled by a crystal rotation or mirror symmetry. The spatial orientation of magnetic atoms may originate from the surrounding cages of non-magnetic atoms. The opposite-spin sublattices in altermagnetic manganese telluride (MnTe) are related by a spin rotation combined with a six-fold crystal rotation and a half-unit-cell translation. In altermagnetic ruthenium dioxide (RuO2), the opposite-spin sublattices are related by a four-fold crystal rotation. Electronic structure One of the distinctive features of altermagnets is a specifically spin-split band structure, which was first experimentally observed in work published in 2024. The altermagnetic band structure breaks time-reversal symmetry, so that in general E(k, s) ≠ E(−k, s) (where E is energy, k the wavevector and s the spin), as in ferromagnets; however, unlike in ferromagnets, it does not generate a net magnetization. The altermagnetic spin polarisation alternates in wavevector space and forms characteristic 2, 4, or 6 spin-degenerate nodes, which correspond to d-, g-, or i-wave order parameters, respectively. A d-wave altermagnet can be regarded as the magnetic counterpart of a d-wave superconductor. The altermagnetic spin polarization in the band structure (energy–wavevector diagram) is collinear and does not break inversion symmetry. The altermagnetic spin splitting is even in the wavevector, e.g. of the form (kx² − ky²)sz. It is thus also distinct from the noncollinear Rashba or Dresselhaus spin textures, which break inversion symmetry in noncentrosymmetric nonmagnetic or antiferromagnetic materials due to spin-orbit coupling. Unconventional time-reversal symmetry breaking, a giant ~1 eV spin splitting and an anomalous Hall effect were first theoretically predicted and experimentally confirmed in RuO2. Materials Direct experimental evidence of an altermagnetic band structure in semiconducting MnTe and metallic RuO2 was first published in 2024. Many more materials are predicted to be altermagnets – ranging from insulators, semiconductors, and metals to superconductors. Altermagnetism was predicted in 3D and 2D materials with both light and heavy elements, and can be found in nonrelativistic as well as relativistic band structures. Properties Altermagnets exhibit an unusual combination of ferromagnetic and antiferromagnetic properties, which remarkably more closely resemble those of ferromagnets. Hallmarks of altermagnetic materials such as the anomalous Hall effect have been observed before (but this effect also occurs in other magnetically compensated systems, such as non-collinear antiferromagnets). Altermagnets also exhibit unique properties such as anomalous Hall currents and spin currents that can change sign as the crystal rotates.
Experimental observations In December 2024, researchers from the University of Nottingham provided the first experimental imaging of altermagnetism, confirming its unique spin-symmetry properties. Using nitrogen-vacancy center microscopy and X-ray magnetic linear dichroism (XMLD), they visualized spin-polarized currents arising from the crystal-symmetry-protected altermagnetic order. This order featured antiparallel spin alignment within distinct crystal sublattices, creating a compensating spin polarization without macroscopic magnetization. These findings validated theoretical predictions and demonstrated the potential of altermagnetic materials in high-speed, low-energy spintronic devices. References Magnetic ordering 2024 in science
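The even-in-wavevector spin splitting can be made concrete with a toy two-band model; the parabolic dispersion and the coupling constant J below are assumptions for illustration, not a fit to MnTe or RuO2. The splitting term (kx² − ky²)sz changes sign under a 90-degree rotation of k and vanishes on the diagonals, reproducing the d-wave node structure described above.

```python
def bands(kx, ky, J=0.3):
    """Toy d-wave altermagnet: spin-up/down energies at wavevector (kx, ky)."""
    kinetic = 0.5 * (kx**2 + ky**2)   # spin-independent parabolic band
    split = J * (kx**2 - ky**2)       # even-in-k, d-wave spin splitting
    return kinetic + split, kinetic - split

for kx, ky in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    e_up, e_dn = bands(kx, ky)
    # Splitting flips sign between (1,0) and (0,1); degenerate node at (1,1).
    print(f"k = ({kx}, {ky}):  E_up = {e_up:+.2f}, E_dn = {e_dn:+.2f}")
```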
Altermagnetism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
916
[ "Magnetic ordering", "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
62,176,140
https://en.wikipedia.org/wiki/Runway%2018%20West
Runway 18 West (German Startbahn 18 West) is a 4000-meter-long runway that runs from north to south on the western edge of Frankfurt Airport. A small northern portion of the runway is located in the Frankfurt district of Flughafen, while the larger southern portion lies in the Rüsselsheim am Main district. Before going into operation in 1984, the runway met with considerable opposition, becoming an important symbol of the German environmental movement in the 1970s and 1980s. History Planning In 1962, Frankfurt Airport/Main AG, Frankfurt Rhein-Main Airport's operating company, decided to design a new arrivals terminal and a third runway. Dramatic growth in air traffic had pushed both the old airport buildings and adjoining railway system, which still exist to this day, to their limits. The Rhine-Main region was experiencing a steady economic upswing, thanks in no small part to the role played by Frankfurt Airport as a European airline hub. Airport expansion was complicated because the site was completely surrounded by forests, including protected Bannwald areas. Other obstacles included the east-to-west Bundesautobahn 3 north of the airport, the north-to-south Bundesautobahn 5 to the east, an overhead power line to the west, and the now-closed U.S. Rhein-Main Air Base to the south. Only the southwest corner of the airport offered the possibility of building a new runway, running north to south. But this would entail both immense logging operations and extension of the airport to a municipal area not belonging to the Frankfurt metropolitan zone. Economic factors took precedence over environmental considerations and on 28 December 1965, Flughafen AG applied for a construction permit for "Runway 18 West". In May 1966, the Hessian Parliament (Landtag) decided to build a new 4000-metre-long, north-south runway. Following the approval, Frankfurt Airport/Main AG decided in November 1967 to build the new runway at a cost of DM 78 million. At a time of increasing environmental awareness, more and more citizens grew skeptical about the airport expansion. Following planning approval by the Transportation Minister in March 1968, 44 legal actions for cancellation of the project (Anfechtungsklagen in German) were brought before the courts. Lawsuits Frankfurt Airport's Terminal Mitte (now Terminal 1) opened in March 1972 and the planning approval procedure for the new Runway 18 West was initiated the following year. The result was more than 100 lawsuits brought before Hessian administrative courts. Runway opponents, who increasingly joined forces in citizens' initiatives (BI), were growing in number, as both reduced flights and the 1973 oil crisis mitigated the need for a new runway. Some runway opponents also feared it could be used by NATO. Administrative courts dealt with the planned expansion for nearly a decade before construction approval was annulled for technical reasons. In March 1971, the Transportation Ministry issued a second planning approval order, which again ended up before the courts. At the end of 1978, a citizens' initiative (BI) against the expansion was founded, principally in the affected town of Mörfelden-Walldorf, but also in Frankfurt and areas surrounding the airport. In July 1978, the Federal Administrative Court referred runway opponents' claims back to the Hessian Administrative Court. In December that year, the State of Hesse sold 303 hectares of land to Frankfurt Airport/Main AG for the construction of the new runway. The expected logging zone amounted to 129 hectares. 
Intensification of the conflict When the Hessian Administrative Court ruled in favor of the new runway construction on 21 October 1980, the legal dispute ended but resistance on the ground intensified. On the planned site of Runway 18 West, opponents erected a citizens' initiative (BI) hut as of May 1980, using it as an information kiosk for people walking in the area. In July, Hessian Minister of Economics and Transport (FDP), ordered the "immediate implementation" of runway construction. In October however, the Hessian Administrative Court rejected this stop request, restoring the construction halt. Yet the first tree felling work began before the winter for technical reasons. First, a seven-hectare site was cleared directly at the airport site. On 2 November 1980, 15,000 people, mainly environmentalists and students, as well as numerous elderly people from the region, demonstrated at the edge of the forest in Walldorf. Since planned occupation actions by protesters failed due to long-running police efforts, the citizens' initiative group decided to expand its "BI-Hütte" into a permanently inhabited village to be able to react more quickly and effectively to potential land seizure and clearing operations. As a result, several illegal huts were built, in addition to a hut church of the Walldorf parish, on airport grounds. In May 1981, the Darmstadt city government president ordered that the site be seized. On 6 October, the previously cleared seven-hectare site was occupied by protesters then retaken by police. Hundreds had gathered on the site, excavated a triangular trench, and built a tower inside. The first hut village was evacuated on the morning of 2 November 1981; most protesters were removed peacefully. While the tower was more difficult to evacuate, squatters left it voluntarily the following evening. A few days after the site was cleared, a 2.5-metre-high concrete wall was erected to secure construction work. While the hut village eviction itself was peaceful, thousands gathered in the woods outside police cordons during the day and several controversial police operations were carried out against the protesters. Late in the late evening of November 3, 1981, a police operation against an anti-runway demonstration took place on Rohrbachstraße in the Nordend district of Frankfurt, seriously injuring several demonstrators. After the removal of protesters, logging and construction work began under massive police protection. Repeated attacks by demonstrators took place against the concrete wall and police officers. Frequent attempts by runway opponents to build permanent hut villages were thwarted by the police. A planned reoccupation of the Hüttendorf site on 7 November, after a rally attended by tens of thousands of protesters, was not carried out after disagreements within the movement over the question of violence. Instead of the planned mass crossing of the police cordons, fifty selected demonstrators with bare torsos were allowed onto the premises by police. In an event that would come to be called "Naked Saturday", four BI spokespersons then held an inconclusive discussion with Interior Minister Ekkehard Gries (FDP) on the cleared area of the hut village about halting the tree-felling work until a decision could be reached by the State Court. Another version of the day's events claims it was called Naked Saturday because many protesters were too lightly clothed for the colder-than-anticipated weather. 
Demonstrations On 14 November 1981, over 120,000 people demonstrated in Wiesbaden against Runway 18 West. The Land returning officer was handed 220,000 signatures in support of a referendum. At the rally, Frankfurt Magistrate Director Alexander Schubart called for a "visit" to the airport the next day. The following day, runway opponents blocked airport entrances for hours. When the police used force against them, the demonstrators fled to the adjacent highway and erected barricades. In order to clear the motorway, the police deployed federal border protection units dispatched by helicopter. For over a week, the city centre of Frankfurt and other cities in the Rhine-Main region were effectively shut down by daily protests. Police prevented protestors from occupying Frankfurt Central Station. Alexander Schubart was sentenced to two years' imprisonment on probation for coercing the state government (Section 105, Section 125 and 240 StGB) and for his call for violence and discharged from the civil service. After ten years of legal battles, he was able to spend only eight months on probation and remain in the civil service. The referendum request – the final remaining legal method to prevent runway construction – ended in 1982 with a decision by the Hessian Landtag under Minister-President Holger Börner (SPD), and rejection due to non-jurisdiction by the Hessian State Court. In the following period, the runway movement, which had shrunk after the events of the autumn of 1981, primarily shifted to weekly "Sunday walks" to the concrete wall around the construction site. During these weekly demonstrations, repeated attempts were made to dismantle the wall, obstruct construction work, and attack police forces. After Construction On 12 April 1984, Runway 18 West began operating, although opening ceremonies were dispensed with. Two days later, approximately 15,000 people demonstrated against the commissioning of the runway at the perimeter wall in the forest. On 2 November 1987, during a demonstration marking the sixth anniversary of the hut village evacuation, 14 police officers were shot at with a police firearm stolen from an earlier anti-nuclear demonstration in nearby Hanau on 8 November 1986. Nine police officers were hit, and officers Thorsten Schwalm and Klaus Eichhöfer succumbed to their injuries. The same night, a massive wave of searches and arrests began against the entire anti-runway movement. Runway opponents Andreas E. and Frank H. were indicted by the Federal Prosecutor's Office as the gunmen responsible for the two police deaths. Andreas E. was found guilty of manslaughter and sentenced to 15 years in prison. Frank H. was sentenced in 1991 to four and a half years in prison for offences unrelated to the fatal shootings. As a result of these events, the remnants of the protest movement against Runway 18 West fell apart. In 2011, a fourth runway, the Northwest Runway, was built at Frankfurt Airport despite significant resistance from the public. A year after the fourth runway's construction, the website Airport Watch reported that weekly protests against the runway were occurring at the airport. Initially, the original concrete perimeter wall remained as a relic of the Runway 18 West protests, a rare security barrier for a German airport in the period before September 11th. As of February 2018, the wall had been replaced with a modern, combined-wall-and-fence barrier, partly secured with NATO barbed wire. 
A section of the old wall, approximately 6 m long, has been preserved as a monument. Flight specifications Runway West is called '18' because it faces almost exactly south, a heading of 180 degrees. Because the Taunus Mountains prevent departures towards the north, only southbound takeoffs are permitted, in the direction of the Upper Rhine Plain. Since aircraft are supposed to take off against the wind, strong northerly winds (tailwinds for a southbound departure) limit or prevent takeoffs from the runway. Movies Keine Startbahn West – Trilogie eines Widerstandes (No Runway West - Trilogy of Resistance). 1981. Documentary film by Thomas Frickel and others. Keine Startbahn West – Eine Region wehrt sich (No Runway West - A Region Fights Back). 1982. Documentary film by Thomas Frickel and others. Wertvolle Jahre (Valuable Years). 1989/90. Documentary film by Thomas Carlé and Gruscha Rode. Literature Wolf Wetzel: Tödliche Schüsse. Eine dokumentarische Erzählung (Deadly Shots. A Documentary Narrative). Unrast, Münster 2008, . Horst Karasek: Das Dorf im Flörsheimer Wald. Eine Chronik gegen die Startbahn West (The Village in the Flörsheim Forest. A Chronicle Against Runway West). Luchterhand Verlag, Darmstadt/Neuwied 1981, . Volker Luley: Trotzdem gehört uns der Wald! von einem, der auszog das Fürchten zu verlernen (Nevertheless, the Forest Belongs to Us! From Someone Who Set Out to Unlearn Fear). Saalbau Verlag, Offenbach 1981, . Bruno Struif (ed.): Kunst gegen StartbahnWest. Arbeiten von Betroffenen (Art Against Runway West. Works by Those Affected). Anabas, Gießen 1982, . Ulrich Cremer: Bauen als Urerfahrung: dargestellt am Beispiel des Hüttendorfes gegen die Startbahn West (Building as a Primal Experience: Illustrated by the Example of the Hut Village Against Runway West). E. Weiss Verlag, Munich 1982, . External links (Runway West - Collection of images, videos and audio files) Wer nicht kämpft, hat schon verloren ("Those who do not fight have already lost.") Photos of the runway wall, 2017 References Airport infrastructure Autonomism Environmental protests Former squats Squats in Germany
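As an aside, the runway-number convention in the Flight specifications section is simple arithmetic (the heading rounded to the nearest 10 degrees, divided by 10). A minimal Python sketch of that rule, illustrative code rather than anything from the article:

```python
def runway_designator(heading_degrees: float) -> str:
    """Return the runway number for a given heading: the heading rounded
    to the nearest 10 degrees, divided by 10 (due south, 180, gives "18")."""
    number = round(heading_degrees / 10) % 36
    if number == 0:
        number = 36  # headings near 360/0 degrees are designated runway 36
    return f"{number:02d}"

# A due-south departure course, as on Frankfurt's Startbahn West:
print(runway_designator(180))  # -> "18"
```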
Runway 18 West
[ "Engineering" ]
2,575
[ "Airport infrastructure", "Aerospace engineering" ]
62,184,291
https://en.wikipedia.org/wiki/Limnological%20tower
A limnological tower is a structure constructed in a body of water to facilitate the study of aquatic ecosystems (limnology). Such towers play an important role in drinking water infrastructure by allowing the prediction of algal blooms which can block filters and affect the taste of the water. Purpose Limnological towers provide a fixed structure to which sensors and sampling devices can be affixed. The depth of the structure below water level allows for study of the various layers of water in the lake or reservoir. The management of limnological conditions can be important in reservoirs used to supply drinking water treatment plants. In certain conditions algal blooms can occur which can block filters, change the pH of the water and cause taste and odour problems. If the sensors extend to the bed level the tower can also be used to monitor the hypolimnion (lowest layer of water) which in some conditions can become anoxic (of low oxygen content) which may affect the lake ecology. Limnological towers have been constructed in reservoirs used to supply drinking water in the United Kingdom since algal blooms began causing problems with water quality. By providing data on water conditions and algae levels the towers can predict the behaviour of the algae and allow managers to make decisions to alter conditions to prevent algal blooms. These decisions may include altering water inflows (particularly where nutrient-rich intakes are concerned), activating water jets to promote the mixing of different layers of water and altering the depth from which water is abstracted. These decisions can affect the behaviour of the reservoir over a period from a few hours to a few years. Examples North America Six combined limnological and meteorological observation towers were established in the Great Lakes on the US-Canadian border in 1961. Three were installed in Lake Huron, two in Lake Ontario and one in Lake Erie by the Great Lakes Institute. These were innovative in design and cheap to construct, being built largely from water pipe. Constructed in a range of water depths, the towers provided measurements of wind speed, air temperature and rainfall, as well as water temperature and current flow at different depths. The shorter towers, in shallower water, were attached directly to the bed; towers in greater depths of water were floating units, with a submerged ballast tank, that were anchored to the lake bed by means of cables and weights. A further two limnological towers were constructed near Douglas Point in Lake Huron in the 1960s: one built offshore in 1961 and a second in 1969. They are poles anchored to the lake bed by means of a gimbal and braced by tensioned cables and anchor guys. They featured a mobile thermistor sensor that could be moved to any depth on the tower as well as fixed thermometers at various depths, and were intended to monitor the temperatures of different water layers in the lake. United Kingdom A concrete limnological tower was installed at Rutland Water, England's largest reservoir by surface area, when it was built in the early 1970s. The design of the tower was influenced by consultation with the Water Research Centre and was intended to provide the best possible tools to monitor the ecological conditions of the reservoir so that it could be best managed by its operator (the Anglian Water Authority). The tower monitors water temperature, dissolved oxygen levels and water fluorescence (which is a measure of algal content) at 2 m depth intervals. 
The tower also has the ability to draw water samples for further testing from the various depths and also mounts an automatic weather station. The data is continuous and displayed visually in real time at the reservoir control centre, situated at the dam. The site of the tower was chosen to best suit the needs of the operator. The reservoir consists of two arms – northern and southern – and has been designed such that all nutrient-rich water enters the southern arm. The intention is that nutrients will be depleted before the water is abstracted for use at the eastern end of the site. The northern arm is fed by nutrient-poor sources and should be relatively unaffected by algal blooms. A secondary outlet is available that draws solely from the northern arm, in case the southern arm is affected by algal growth. Additionally, the operators are able to draw directly from the River Nene if the reservoir water is unusable. The Queen Mother Reservoir near London also has a limnological tower. References Limnology Water supply
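As an aside, a minimal sketch of the kind of threshold screening such depth-profiled fluorescence data enables. The function name, threshold, and readings below are illustrative assumptions, not the actual control logic used at Rutland Water:

```python
# Illustrative only: the threshold and data are hypothetical, not the
# actual Rutland Water monitoring software.
BLOOM_FLUORESCENCE_THRESHOLD = 50.0  # arbitrary units, assumed value

def bloom_risk(profile: dict[float, float]) -> bool:
    """profile maps depth in metres (2 m intervals) to fluorescence,
    a proxy for algal content; flag risk if any layer exceeds the threshold."""
    return any(value > BLOOM_FLUORESCENCE_THRESHOLD for value in profile.values())

readings = {0.0: 12.3, 2.0: 18.9, 4.0: 55.1, 6.0: 20.4}  # hypothetical data
if bloom_risk(readings):
    print("Algal bloom risk: consider mixing jets or switching abstraction depth")
```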
Limnological tower
[ "Chemistry", "Engineering", "Environmental_science" ]
885
[ "Hydrology", "Water supply", "Environmental engineering" ]
51,739,274
https://en.wikipedia.org/wiki/Argonium
Argonium (also called the argon hydride cation, the hydridoargon(1+) ion, or protonated argon; chemical formula ArH+) is a cation combining a proton and an argon atom. It can be made in an electric discharge, and was the first noble gas molecular ion to be found in interstellar space. Properties Argonium is isoelectronic with hydrogen chloride. Its dipole moment is 2.18 D for the ground state. The binding energy is 369 kJ mol−1 (3.9 eV). This is smaller than that of H3+ and many other protonated species, but more than that of HeH+. Rotationless radiative lifetimes of different vibrational states vary with isotope and become shorter for the more rapid high-energy vibrations (lifetimes in ms): v = 1: ArH+ 2.28, ArD+ 9.09; v = 2: ArH+ 1.20, ArD+ 4.71; v = 3: ArH+ 0.85, ArD+ 3.27; v = 4: ArH+ 0.64, ArD+ 2.55; v = 5: ArH+ 0.46, ArD+ 2.11. The force constant of the bond is calculated at 3.88 mdyne/Å. Reactions ArH+ + H2 → Ar + H3+ ArH+ + C → Ar + CH+ ArH+ + N → Ar + NH+ ArH+ + O → Ar + OH+ ArH+ + CO → Ar + COH+ But the reverse reactions also happen: Ar + H2+ → ArH+ + H and Ar + H3+ → *ArH+ + H2. The reaction Ar+ + H2 has a cross section of 10^−18 m^2 at low energy, with a steep drop-off for energies over 100 eV. The reaction of Ar with H2+ also forms ArH+ at low energy, but when the energy exceeds 10 eV the yield reduces, and more Ar+ and H2 is produced instead. The reaction of Ar with H3+ has a maximum yield of ArH+ for energies between 0.75 and 1 eV; 0.6 eV is needed to make the reaction proceed forward, and over 4 eV more Ar+ and H starts to appear. Argonium is also produced from Ar+ ions, generated from neutral argon by cosmic rays and X-rays: Ar+ + H2 → *ArH+ + H (releasing 1.49 eV). When ArH+ encounters an electron, dissociative recombination can occur, but it is extremely slow for lower-energy electrons, allowing ArH+ to survive for a much longer time than many other similar protonated cations. ArH+ + e− → Ar + H Because the ionisation potential of argon atoms is lower than that of the hydrogen molecule (in contrast to that of helium or neon), the argon ion reacts with molecular hydrogen, while helium and neon ions strip an electron from a hydrogen molecule: Ar+ + H2 → ArH+ + H Ne+ + H2 → Ne + H+ + H (dissociative charge transfer) He+ + H2 → He + H+ + H Spectrum Artificial ArH+ made from terrestrial argon contains mostly the isotope 40Ar rather than the cosmically abundant 36Ar. Artificially it is made by an electric discharge through an argon–hydrogen mixture. Brault and Davis were the first to detect the molecule, using infrared spectroscopy to observe vibration–rotation bands. The UV spectrum has two absorption features that result in the ion breaking up. The 11.2 eV transition to the B1Π state has a low transition dipole and so does not absorb much. A 15.8 eV transition to a repulsive A1Σ+ state is at a shorter wavelength than the Lyman limit, and so there are very few photons around to drive it in space. Natural occurrence ArH+ occurs in interstellar diffuse atomic hydrogen gas. For argonium to form, the fraction of molecular hydrogen H2 must be in the range 0.0001 to 0.001. Different molecular ions form in correlation with different concentrations of H2. Argonium is detected by its absorption lines at 617.525 GHz (J = 1→0), and 1234.602 GHz (J = 2→1). These lines are due to the isotopolog 36Ar1H+ undergoing rotational transitions. 
The lines have been detected in the direction of the galactic centre SgrB2(M) and SgrB2(N), G34.26+0.15, W31C (G10.62−0.39), W49(N), and W51e; however, where absorption lines are observed, the argonium is not likely to be in the microwave source itself, but instead in the gas in front of it. Emission lines are found in the Crab Nebula, where ArH+ occurs in several spots revealed by those lines. The strongest spot is in the Southern Filament. This is also the place with the strongest concentration of Ar+ and Ar2+ ions. The column density of ArH+ in the Crab Nebula is between 10^12 and 10^13 atoms per square centimetre. Possibly the energy required to excite the ions so that they can emit comes from collisions with electrons or hydrogen molecules. Towards the Milky Way centre the column density of ArH+ is around . Two isotopologs of argonium, 36ArH+ and 38ArH+, are known to be in a distant unnamed galaxy with a redshift of z = 0.88582 (7.5 billion light years away), which is on the line of sight to the blazar PKS 1830−211. Electron neutralization and destruction of argonium outcompetes its formation rate in space if the H2 concentration is below 1 part in 10^4. History Using the McMath solar Fourier transform spectrometer at Kitt Peak National Observatory, James W. Brault and Sumner P. Davis observed ArH+ vibration-rotation infrared lines for the first time. J. W. C. Johns also observed the infrared spectrum. Use Argon facilitates the reaction of tritium (T2) with double bonds in fatty acids by forming an ArT+ (tritium argonium) intermediate. When gold is sputtered with an argon-hydrogen plasma, the actual displacement of gold is done by ArH+. References Argon compounds Cations
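As an aside, the photon energy and wavelength of the 617.525 GHz detection line quoted above follow directly from E = hν and λ = c/ν. A short illustrative Python check:

```python
# Illustrative: photon energy and wavelength of the ArH+ J = 1 -> 0
# rotational line at 617.525 GHz, using E = h*nu and lambda = c/nu.
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
e = 1.602176634e-19  # joules per electronvolt

nu = 617.525e9                 # line frequency, Hz
energy_mev = h * nu / e * 1e3  # photon energy in meV
wavelength_um = c / nu * 1e6   # wavelength in micrometres

print(f"E = {energy_mev:.3f} meV, lambda = {wavelength_um:.1f} um")
# roughly 2.55 meV and 485 um, i.e. in the far-infrared/submillimetre band
```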
Argonium
[ "Physics", "Chemistry" ]
1,347
[ "Cations", "Ions", "Matter" ]
51,746,867
https://en.wikipedia.org/wiki/Alkali%20metal%20nitrate
Alkali metal nitrates are chemical compounds consisting of an alkali metal (lithium, sodium, potassium, rubidium and caesium) and the nitrate ion. Only two are of major commercial value, the sodium and potassium salts. They are white, water-soluble salts with melting points ranging from 255 °C to 414 °C, a relatively narrow span of 159 °C. The melting point of the alkali metal nitrates tends to increase from 255 °C to 414 °C (with rubidium as an anomaly, not properly aligned in the series) as the atomic mass and the ionic radius of the naked cation increase going down the column. Similarly, the solubility of these salts in water decreases with the atomic mass of the metal. Applications Sodium and potassium nitrates are commonly used as fertilizers. As they are also strong oxidizers, they are used in pyrotechnic compositions and the manufacturing of explosives. Eutectic mixtures of alkali metal nitrates are used as molten salts. For example, a 40:7:53 mixture of NaNO2:NaNO3:KNO3 melts at 142 °C and is stable to about 600 °C. A minor use is for coloring the light emitted by fireworks: lithium nitrate produces a red color, sodium nitrate produces a yellow/orange color, potassium nitrate and rubidium nitrate produce violet colors, and caesium nitrate produces an indigo color. In general, the emitted color turns progressively from red towards violet in the visible spectrum going down the column of the alkali metals in Mendeleev's periodic table. This corresponds to a decrease of the wavelength of the light emitted during the electron de-excitation step in atoms brought to high temperature. The photons emitted by caesium are more energetic than those of lithium. See also Alkali metal hydride Alkali metal halide Ammonium nitrate Nitric acid References Nitrates
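As an aside, the colour trend described above can be made quantitative with E = hc/λ. The wavelengths in the following sketch are approximate, representative emission-line positions chosen for illustration, not definitive spectroscopic values:

```python
# Illustrative sketch of the flame-colour trend; wavelengths are
# approximate representative lines (assumptions for illustration).
h, c = 6.626e-34, 2.998e8  # J s, m/s
eV = 1.602e-19             # joules per electronvolt

flame_lines_nm = {
    "LiNO3": ("red", 671),
    "NaNO3": ("yellow/orange", 589),
    "KNO3": ("violet", 404),
    "CsNO3": ("indigo/blue", 455),
}

for salt, (colour, wavelength) in flame_lines_nm.items():
    energy = h * c / (wavelength * 1e-9) / eV  # photon energy in eV
    print(f"{salt}: {colour}, ~{wavelength} nm, ~{energy:.2f} eV")
# Shorter wavelength means higher photon energy, e.g. caesium's ~455 nm
# line is more energetic than lithium's ~671 nm line, as stated above.
```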
Alkali metal nitrate
[ "Chemistry" ]
417
[ "Oxidizing agents", "Physical chemistry stubs", "Nitrates", "Salts" ]
47,609,709
https://en.wikipedia.org/wiki/Mean%20operation
In algebraic topology, a mean or mean operation on a topological space X is a continuous, commutative, idempotent binary operation on X. If the operation is also associative, it defines a semilattice. A classic problem is to determine which spaces admit a mean. For example, Euclidean spaces admit a mean (the usual average of two vectors), but spheres of positive dimension do not, including the circle. Binary operations Means
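As a concrete instance of the definition (a standard example, spelled out here in LaTeX for illustration): the arithmetic mean of two vectors is continuous, commutative and idempotent, but not associative, so it is a mean that does not define a semilattice.

```latex
% The arithmetic mean m(x,y) = (x+y)/2 on R^n is a mean operation:
%   continuity:    m is continuous in (x, y);
%   commutativity: m(x, y) = m(y, x);
%   idempotence:   m(x, x) = x.
% It is not associative:
%   m(m(x,y), z) = (x + y + 2z)/4, while m(x, m(y,z)) = (2x + y + z)/4,
% so this particular mean does not define a semilattice.
\[
  m(x,y) = \tfrac{1}{2}(x+y), \qquad m(x,y) = m(y,x), \qquad m(x,x) = x .
\]
```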
Mean operation
[ "Physics", "Mathematics" ]
102
[ "Means", "Mathematical analysis", "Point (geometry)", "Geometric centers", "Binary relations", "Binary operations", "Topology stubs", "Topology", "Mathematical relations", "Symmetry" ]
47,611,264
https://en.wikipedia.org/wiki/Phase-locked%20loop%20range
The terms hold-in range, pull-in range (acquisition range), and lock-in range are widely used by engineers for the concepts of frequency deviation ranges within which phase-locked loop-based circuits can achieve lock under various additional conditions. History In the classic books on phase-locked loops, published in 1966, such concepts as hold-in, pull-in, lock-in, and other frequency ranges for which a PLL can achieve lock were introduced. They are widely used nowadays (see, e.g., contemporary engineering literature and other publications). Usually in engineering literature only non-strict definitions are given for these concepts. Many years of using definitions based on the above concepts has led to the advice given in a handbook on synchronization and communications, namely to check the definitions carefully before using them. Later, some rigorous mathematical definitions were given. Gardner problem on the lock-in range definition In the 1st edition of his well-known work, Phaselock Techniques, Floyd M. Gardner introduced a lock-in concept: If, for some reason, the frequency difference between input and VCO is less than the loop bandwidth, the loop will lock up almost instantaneously without slipping cycles. The maximum frequency difference for which this fast acquisition is possible is called the lock-in frequency. His notion of the lock-in frequency and corresponding definition of the lock-in range have become popular and nowadays are given in various engineering publications. However, since even for zero frequency difference there may exist initial states of the loop such that cycle slipping may take place during the acquisition process, the consideration of the initial state of the loop is of utmost importance for the cycle slip analysis and, therefore, Gardner's concept of lock-in frequency lacked rigor and required clarification. In the 2nd edition of his book, Gardner stated: "there is no natural way to define exactly any unique lock-in frequency", and he wrote that "despite its vague reality, lock-in range is a useful concept". Definitions Denote by $\theta_\Delta(t)$ the phase difference between the input (reference) signal and the local oscillator (VCO, NCO) signal; by $\theta_\Delta(0)$ the initial phase difference between the input signal and the VCO signal; by $\omega_\Delta(t)$ the frequency difference between the input signal frequency and the VCO signal frequency; and by $\omega_\Delta^{free}$ the frequency difference between the input signal frequency and the VCO free-running frequency. Note that in general $\omega_\Delta(0) \neq \omega_\Delta^{free}$, because $\omega_\Delta(0)$ also depends on the initial input of the VCO. Locked state Definition of locked state In a locked state: 1) the phase error fluctuations are small, the frequency error is small; 2) the PLL approaches the same locked state after small perturbations of the phases and filter state. Hold-in range Definition of hold-in range. A largest interval of frequency deviations $0 \leq |\omega_\Delta^{free}| \leq \omega_h$ for which a locked state exists is called a hold-in range, and $\omega_h$ is called the hold-in frequency. A value of frequency deviation belongs to the hold-in range if the loop re-achieves a locked state after small perturbations of the filter's state, the phases and frequencies of the VCO and the input signals. This effect is also called steady-state stability. In addition, for a frequency deviation within the hold-in range, after small changes in input frequency the loop re-achieves a new locked state (tracking process). Pull-in range Also called acquisition range or capture range. Assume that the loop power supply is initially switched off and then the power is switched on, and assume that the initial frequency difference is sufficiently large. 
The loop may not lock within one beat note, but the VCO frequency will be slowly tuned toward the reference frequency (acquisition process). This effect is also called transient stability. The pull-in range is used to name such frequency deviations that make the acquisition process possible (see, for example, explanations in the literature). Definition of pull-in range. The pull-in range is a largest interval of frequency deviations $0 \leq |\omega_\Delta^{free}| \leq \omega_p$ such that the PLL acquires lock for arbitrary initial phase, initial frequency, and filter state. Here $\omega_p$ is called the pull-in frequency. The difficulties of reliable numerical analysis of the pull-in range may be caused by the presence of hidden attractors in the dynamical model of the circuit. Lock-in range Assume that the PLL is initially locked. Then the reference frequency is suddenly changed in an abrupt manner (step change). The pull-in range guarantees that the PLL will eventually synchronize; however, this process may take a long time. Such a long acquisition process is called cycle slipping. If the difference between the initial and final phase deviation is larger than $2\pi$, we say that cycle slipping takes place. Here, sometimes, the limit of the difference or the maximum of the difference is considered. Definition of lock-in range. If the loop is in a locked state, then after an abrupt change of $\omega_\Delta^{free}$ within a lock-in range $0 \leq |\omega_\Delta^{free}| \leq \omega_l$, the PLL acquires lock without cycle slipping. Here $\omega_l$ is called the lock-in frequency. References Electronic oscillators Communication circuits Electronic design Radio electronics Hidden oscillation
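As an aside, for a first-order loop with a sinusoidal phase detector the averaged model reduces to dθ/dt = Δω − K·sin(θ), for which a locked equilibrium exists exactly when |Δω| ≤ K, so the hold-in, pull-in and lock-in ranges all coincide; the distinctions defined above matter for higher-order loops. A minimal Python sketch under that textbook assumption (not taken from the cited literature):

```python
# First-order PLL averaged model: d(theta)/dt = dw - K*sin(theta).
# An equilibrium (locked state) exists only when |dw| <= K.
import math

def settles(dw: float, K: float = 1.0, dt: float = 1e-3, t_max: float = 200.0) -> bool:
    theta = 2.0  # arbitrary initial phase error
    for _ in range(int(t_max / dt)):
        theta += (dw - K * math.sin(theta)) * dt  # forward Euler step
    # Locked if the phase derivative is ~0 at the final state.
    return abs(dw - K * math.sin(theta)) < 1e-6

for dw in (0.5, 0.99, 1.01):
    print(f"dw = {dw}: {'locks' if settles(dw) else 'keeps slipping cycles'}")
```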
Phase-locked loop range
[ "Mathematics", "Engineering" ]
1,008
[ "Radio electronics", "Telecommunications engineering", "Electronic design", "Electronic engineering", "Design", "Hidden oscillation", "Communication circuits", "Dynamical systems" ]
47,616,206
https://en.wikipedia.org/wiki/Belavkin%20equation
In quantum probability, the Belavkin equation, also known as Belavkin-Schrödinger equation, quantum filtering equation, stochastic master equation, is a quantum stochastic differential equation describing the dynamics of a quantum system undergoing observation in continuous time. It was derived and henceforth studied by Viacheslav Belavkin in 1988. Overview Unlike the Schrödinger equation, which describes the deterministic evolution of the wavefunction of a closed system (without interaction), the Belavkin equation describes the stochastic evolution of a random wavefunction of an open quantum system interacting with an observer: Here, is a self-adjoint operator (or a column vector of operators) of the system coupled to the external field, is the Hamiltonian, is the imaginary unit, is the Planck constant, and is a stochastic process representing the measurement noise that is a martingale with independent increments with respect to the input probability measure . Note that this noise has dependent increments with respect to the output probability measure representing the output innovation process (the observation). For , the equation becomes the standard Schrödinger equation. The stochastic process can be a mixture of two basic types: the Poisson (or jump) type , where is a Poisson process corresponding to counting observation, and the Brownian (or diffusion) type , where is the standard Wiener process corresponding to continuous observation. The equations of the diffusion type can be derived as the central limit of the jump type equations with the expected rate of the jumps increasing to infinity. The random wavefunction is normalized only in the mean-squared sense , but generally fails to be normalized for each . The normalization of for each gives the random posterior state vector , the evolution of which is described by the posterior Belavkin equation, which is nonlinear, because operators and depend on due to normalization. The stochastic process in the posterior equation has independent increments with respect to the output probability measure , but not with respect to the input measure. Belavkin also derived linear equation for unnormalized density operator and the corresponding nonlinear equation for the normalized random posterior density operator . For two types of measurement noise, this gives eight basic quantum stochastic differential equations. The general forms of the equations include all types of noise and their representations in Fock space. The nonlinear equation describing observation of position of a free particle, which is a special case of the posterior Belavkin equation of the diffusion type, was also obtained by Diosi and appeared in the works of Gisin, Ghirardi, Pearle and Rimini, although with a rather different motivation or interpretation. Similar nonlinear equations for posterior density operators were postulated (although without derivation) in quantum optics and the quantum trajectories theory, where they are called stochastic master equations. The averaging of the equations for the random density operators over all random trajectories leads to the Lindblad equation, which is deterministic. The nonlinear Belavkin equations for posterior states play the same role as the Stratonovich–Kushner equation in classical probability, while the linear equations correspond to the Zakai equation. 
The Belavkin equations describe continuous-time decoherence of initially pure state into a mixed posterior state giving a rigorous description of the dynamics of the wavefunction collapse due to an observation or measurement. Non-demolition measurement and quantum filtering Noncommutativity presents a major challenge for probabilistic interpretation of quantum stochastic differential equations due to non-existence of conditional expectations for general pairs of quantum observables. Belavkin resolved this issue by discovering the error-perturbation uncertainty relation and formulating the non-demolition principle of quantum measurement. In particular, if the stochastic process corresponds to the error (white noise in the diffusive case) of a noisy observation of operator with the accuracy coefficient , then the indirect observation perturbs the dynamics of the system by a stochastic force , called the Langevin force, which is another white noise of intensity that does not commute with the error . The result of such a perturbation is that the output process is commutative , and hence corresponds to a classical observation, while the system operators satisfy the non-demolition condition: all future observables must commute with the past observations (but not with the future observations): for all (but not ). Note that commutation of with and another operator with does not imply commutation of with , so that the algebra of future observables is still non-commutative. The non-demolition condition is necessary and sufficient for the existence of conditional expectations , which makes the quantum filtering possible. Posterior state equations Counting observation Let be a Poisson process with forward increments almost everywhere and otherwise and having the property . The expected number of events is , where is the expected rate of jumps. Then substituting for the stochastic process gives the linear Belavkin equation for the unnormalized random wavefunction undergoing counting observation. Substituting , where is the collapse operator, and , where is the energy operator, this equation can be written in the following form Normalized wavefunction is called the posterior state vector, the evolution of which is described by the following nonlinear equation where has expectation . The posterior equation can be written in the standard form with , , and . The corresponding equations for the unnormalized random density operator and for the normalized random posterior density operator are as follows where . Note that the latter equation is nonlinear. Continuous observation Stochastic process , defined in the previous section, has forward increments , which tend to as . Therefore, becomes standard Wiener process with respect to the input probability measure. Substituting for gives the linear Belavkin equation for the unnormalized random wavefunction undergoing continuous observation. The output process becomes the diffusion innovation process with increments . The nonlinear Belavkin equation of the diffusion type for the posterior state vector is with and . The corresponding equations for the unnormalized random density operator and for the normalized random posterior density operator are as follows where . The second equation is nonlinear due to normalization. Because , taking the average of these stochastic equations over all leads to the Lindblad equation Example: continuous observation of position of a free particle Consider a free particle of mass . 
The position and momentum observables correspond respectively to the operators of multiplication by $x$ and of differentiation $-i\hbar\,\partial/\partial x$. After the corresponding substitutions are made in the Belavkin equation, the posterior stochastic equation becomes a nonlinear equation for the posterior wavefunction, whose innovation term involves the deviation of the position from its posterior expectation. Motivated by the spontaneous collapse theory rather than the filtering theory, this equation was also obtained by Diosi, showing that the measurement noise is the increment of a standard Wiener process. There are closed-form solutions to this equation, as well as to the equations for a particle in linear or quadratic potentials. For a Gaussian initial state these solutions correspond to the optimal quantum linear filter. Solutions to the Belavkin equation show that in the long-time limit the wavefunction has finite dispersion, therefore resolving the quantum Zeno effect. References Quantum measurement Equations
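For orientation, a commonly quoted diffusive form of the nonlinear equation for the normalized posterior density operator is sketched below in standard modern quantum-trajectories notation; this is an illustration consistent with the description above, not necessarily Belavkin's original notation. Here H is the Hamiltonian, L the coupling (collapse) operator, and W_t a standard Wiener process:

```latex
% Diffusive stochastic master equation (quantum filtering equation) for
% the posterior density operator rho_t under continuous observation.
% Averaging over the noise (E[dW_t] = 0) removes the last term and
% recovers the deterministic Lindblad equation, as stated in the text.
\[
  d\rho_t = -\tfrac{i}{\hbar}\,[H,\rho_t]\,dt
    + \Bigl( L\rho_t L^{\dagger} - \tfrac{1}{2}\{L^{\dagger}L,\,\rho_t\} \Bigr)\,dt
    + \Bigl( L\rho_t + \rho_t L^{\dagger}
        - \operatorname{Tr}\!\bigl[(L+L^{\dagger})\,\rho_t\bigr]\,\rho_t \Bigr)\,dW_t .
\]
```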
Belavkin equation
[ "Physics", "Mathematics" ]
1,472
[ "Quantum measurement", "Mathematical objects", "Quantum mechanics", "Equations" ]
57,863,122
https://en.wikipedia.org/wiki/Journal%20of%20Mathematical%20Physics%2C%20Analysis%2C%20Geometry
The Journal of Mathematical Physics, Analysis, Geometry is a quarterly peer-reviewed scientific journal covering mathematics as applied to physics. It is published by the Verkin Institute for Low Temperature Physics and Engineering and was established in 1994 as Mathematical Physics, Analysis, Geometry. Papers are published in English, Ukrainian, and Russian. The journal is abstracted and indexed by Scopus. According to the Journal Citation Reports, the journal has a 2017 impact factor of 0.531. Editors-in-chief The following persons are or have been editors-in-chief: Vladimir Marchenko: 1994—1999 Iossif Ostrovskii: 2000—2004 Leonid Pastur: 2005—present History The Kharkov Mathematical Society was founded in 1879 and, starting in 1880, the society published the journal named Communications of the Kharkov Mathematical Society (Russian: Сообщения и протоколы заседаний математического общества при Императорском Харьковском университете). Publication was suspended in 1960, but in 1965, due to the efforts of Naum Akhiezer, the journals Theory of Functions, Functional Analysis and Their Applications and Ukrainian Geometric Collection were established. In 1994, these journals were merged by the Mathematical Division of the Verkin Institute to establish the current journal. The first editor was Vladimir Marchenko. References External links Mathematical physics journals Physics journals Academic journals published in Ukraine Quarterly journals Academic journals established in 1994 Multilingual journals Mathematical analysis journals Geometry journals
Journal of Mathematical Physics, Analysis, Geometry
[ "Mathematics" ]
358
[ "Geometry journals", "Mathematical analysis", "Geometry", "Mathematical analysis journals" ]
56,133,478
https://en.wikipedia.org/wiki/Kelvin%27s%20minimum%20energy%20theorem
In fluid mechanics, Kelvin's minimum energy theorem (named after William Thomson, 1st Baron Kelvin who published it in 1849) states that the steady irrotational motion of an incompressible fluid occupying a simply connected region has less kinetic energy than any other motion with the same normal component of velocity at the boundary (and, if the domain extends to infinity, with zero velocity there). Mathematical proof Let $\mathbf{u}$ be the velocity field of an incompressible irrotational fluid and $\mathbf{u}'$ be that of any other incompressible fluid motion with the same normal component of velocity, $\mathbf{u}'\cdot\mathbf{n} = \mathbf{u}\cdot\mathbf{n}$, at the boundary of the domain, where $\mathbf{n}$ is the unit normal vector of the bounding surface (and, if the domain extends to infinity, $\mathbf{u}' = \mathbf{u} = \mathbf{0}$ there). Then the difference between the kinetic energies is given by $T' - T = \tfrac{\rho}{2}\int_V \bigl(|\mathbf{u}'|^2 - |\mathbf{u}|^2\bigr)\,dV$, which can be rearranged to give $T' - T = \tfrac{\rho}{2}\int_V |\mathbf{u}' - \mathbf{u}|^2\,dV + \rho\int_V \mathbf{u}\cdot(\mathbf{u}' - \mathbf{u})\,dV$. Since $\mathbf{u}$ is irrotational and the domain is simply-connected, a single-valued velocity potential exists, i.e., $\mathbf{u} = \nabla\phi$. Using this, the second integral in the above equation can be written as $\rho\int_V \nabla\phi\cdot(\mathbf{u}'-\mathbf{u})\,dV = \rho\int_V \nabla\cdot\bigl[\phi\,(\mathbf{u}'-\mathbf{u})\bigr]\,dV - \rho\int_V \phi\,\nabla\cdot(\mathbf{u}'-\mathbf{u})\,dV$. The second integral is identically zero for steady incompressible fluid, i.e., $\nabla\cdot\mathbf{u} = \nabla\cdot\mathbf{u}' = 0$. Applying the Gauss theorem for the first integral we find $\rho\int_V \nabla\cdot\bigl[\phi\,(\mathbf{u}'-\mathbf{u})\bigr]\,dV = \rho\oint_S \phi\,(\mathbf{u}'-\mathbf{u})\cdot\mathbf{n}\,dS = 0$, where the surface integral is zero since the normal components of the velocities are equal there. Thus, one concludes $T' - T = \tfrac{\rho}{2}\int_V |\mathbf{u}'-\mathbf{u}|^2\,dV \geq 0$, or in other words, $T' \geq T$, where the equality holds only if $\mathbf{u}' = \mathbf{u}$, thereby proving the theorem. References Fluid dynamics
Kelvin's minimum energy theorem
[ "Chemistry", "Engineering" ]
282
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
56,133,906
https://en.wikipedia.org/wiki/Helmholtz%20minimum%20dissipation%20theorem
In fluid mechanics, Helmholtz minimum dissipation theorem (named after Hermann von Helmholtz who published it in 1868) states that the steady Stokes flow motion of an incompressible fluid has a smaller rate of dissipation than any other incompressible motion with the same velocity on the boundary. The theorem has also been studied by Diederik Korteweg in 1883 and by Lord Rayleigh in 1913. This theorem is, in fact, true for any fluid motion where the nonlinear term of the incompressible Navier-Stokes equations can be neglected, or equivalently when the convective term is a pure gradient, i.e., $\nabla\times(\boldsymbol{\omega}\times\mathbf{u}) = \mathbf{0}$, where $\boldsymbol{\omega} = \nabla\times\mathbf{u}$ is the vorticity vector. For example, the theorem also applies to unidirectional flows such as Couette flow and Hagen–Poiseuille flow, where nonlinear terms disappear automatically. Mathematical proof Let $\mathbf{u}$, $p$ and $\mathbf{e}$ be the velocity, pressure and strain rate tensor of the Stokes flow, and $\mathbf{u}'$, $p'$ and $\mathbf{e}'$ be the velocity, pressure and strain rate tensor of any other incompressible motion with $\mathbf{u}' = \mathbf{u}$ on the boundary. Let $u_i$ and $e_{ij}$ be the representation of velocity and strain tensor in index notation, where the index runs from one to three. Let $V$ be a bounded domain with sufficiently smooth boundary $S$. Consider the following integral, $\int_V e_{ij}\,(e'_{ij} - e_{ij})\,dV = \int_V e_{ij}\,\partial(u'_i - u_i)/\partial x_j\,dV$, where in the above integral, only the symmetrical part of the deformation tensor remains, because the contraction of a symmetrical and an antisymmetrical tensor is identically zero. Integration by parts gives $\int_V e_{ij}(e'_{ij}-e_{ij})\,dV = \oint_S e_{ij}\,n_j\,(u'_i - u_i)\,dS - \int_V (u'_i-u_i)\,\partial e_{ij}/\partial x_j\,dV$. The first integral is zero because the velocities of the two fields are equal at the boundary. Now, for the second integral, since $\mathbf{u}$ satisfies the Stokes flow equation, i.e., $2\mu\,\partial e_{ij}/\partial x_j = \mu\nabla^2 u_i = \partial p/\partial x_i$, we can write $\int_V e_{ij}(e'_{ij}-e_{ij})\,dV = -\tfrac{1}{2\mu}\int_V (u'_i-u_i)\,\partial p/\partial x_i\,dV$. Again doing an integration by parts gives $-\tfrac{1}{2\mu}\int_V (u'_i-u_i)\,\partial p/\partial x_i\,dV = -\tfrac{1}{2\mu}\oint_S p\,(u'_i-u_i)\,n_i\,dS + \tfrac{1}{2\mu}\int_V p\,\partial(u'_i-u_i)/\partial x_i\,dV$. The first integral is zero because velocities are equal at the boundary and the second integral is zero because the flow is incompressible, i.e., $\partial u_i/\partial x_i = \partial u'_i/\partial x_i = 0$. Therefore we have the identity $\int_V e_{ij}\,e'_{ij}\,dV = \int_V e_{ij}\,e_{ij}\,dV$. The total rate of viscous dissipation energy over the whole volume of the field $\mathbf{u}$ is given by $E = 2\mu\int_V e_{ij}\,e_{ij}\,dV$, and after a rearrangement using the above identity, we get $2\mu\int_V e'_{ij}\,e'_{ij}\,dV = 2\mu\int_V e_{ij}\,e_{ij}\,dV + 2\mu\int_V (e'_{ij}-e_{ij})(e'_{ij}-e_{ij})\,dV$. If $E'$ is the total rate of viscous dissipation energy over the whole volume of the field $\mathbf{u}'$, then we have $E' = E + 2\mu\int_V (e'_{ij}-e_{ij})(e'_{ij}-e_{ij})\,dV$. The second integral is non-negative and zero only if $e_{ij} = e'_{ij}$, thus proving the theorem ($E' \geq E$). Poiseuille flow theorem The Poiseuille flow theorem is a consequence of the Helmholtz theorem and states that The steady laminar flow of an incompressible viscous fluid down a straight pipe of arbitrary cross-section is characterized by the property that its energy dissipation is least among all laminar (or spatially periodic) flows down the pipe which have the same total flux. References Fluid dynamics
Helmholtz minimum dissipation theorem
[ "Chemistry", "Engineering" ]
538
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
56,135,961
https://en.wikipedia.org/wiki/Catalyst%20transfer%20polymerization
Catalyst transfer polymerization (CTP), or catalyst transfer polycondensation, is a type of living chain-growth polymerization that is used for synthesizing conjugated polymers. Benefits of using CTP over other methods are low polydispersity and control over number-average molecular weight in the resulting polymers. Very few monomers have been demonstrated to undergo CTP. History The first reports of CTP came simultaneously from the labs of Yokozawa and McCullough in 2004, with the recognition that polythiophene can be synthesized with low dispersity and with control over molecular weight. This recognition sparked interest in the polymerization mechanism so that it could be expanded to other monomers. Few polymers can be synthesized via CTP, so most conjugated polymers are synthesized via step-growth using palladium-catalyzed cross-coupling reactions. Characteristics CTP is exclusively performed on arene monomers to give conjugated polymers. The polymers obtained from CTP are often of low dispersity due to its living, chain-growth nature. Mass spectrometry can be used to identify end-groups on the polymer to determine if the polymer was synthesized via chain growth. Types CTP utilizes cross-coupling reactions (see Mechanism below) with monomers containing magnesium-, zinc-, boron-, and tin-based transmetallating groups, giving rise to Kumada CTP, Negishi CTP, Suzuki CTP, and Stille CTP reactions. Mechanism The mechanism of CTP has been debated. The living chain-growth nature of CTP can be explained by the existence of a π-complex (as described in this section) but can also be explained via polymer reactivity. Initiation Initiation from a metal(II) species (either Ni or Pd) involves two monomers transmetalating onto the metal center to form a complex that can undergo reductive elimination. The complex formed after reductive elimination is referred to as a π-complex because the catalyst is bound to the π system of the monomer. The catalyst can isomerize to other π-complexes via a process known as "ring-walking", moving to the π-bond adjacent to a C-X bond at the end of the chain and allowing oxidative addition to occur. The product of oxidative addition is an active polymer-metal(II)-halide, and it can react with monomers in the propagation reaction. Propagation The propagation steps of CTP occur through a cycle of transmetalation, reductive elimination, ring walking, and oxidative addition. The existence of a π-complex allows for the polymerization to be controlled, as it ensures that the catalyst cannot dissociate from the polymer chain (and start new chains). This means that the number of polymer chains at the end of the polymerization should be equal to the number of catalysts in solution, and that the average degree of polymerization of the sample at the end of polymerization should be equal to the ratio of monomers to catalysts in solution (a back-of-the-envelope check of this relation is sketched below). Termination A characteristic of CTP is its living chain-growth character, meaning that the catalyst will have a reactive chain end for the entirety of the polymerization. Therefore, to terminate the polymerization, a quenching agent must be added, such as a strong acid to protonate the polymer, or a nucleophile to end-cap the polymer. If the π-complex is too weakly bound, termination of polymer chains can occur before a quenching agent is added, causing lower-molecular-weight polymers to form. Current research into CTP focuses on finding catalysts that form strong catalyst-polymer π-complexes such that the polymerization remains living. 
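As an aside, the relation noted in the Propagation section (degree of polymerization set by the monomer-to-catalyst ratio) can be checked with trivial arithmetic. The concentrations and repeat-unit mass in this Python sketch are hypothetical, chosen only for illustration:

```python
# Back-of-the-envelope check of the living chain-growth claim: at full
# conversion, average degree of polymerization DP ~ [monomer]/[catalyst],
# so number-average molecular weight Mn ~ DP * M_repeat (end groups neglected).
monomer_conc = 0.50    # mol/L, hypothetical
catalyst_conc = 0.005  # mol/L, hypothetical
repeat_unit_mw = 166.0 # g/mol, hypothetical repeat-unit mass

dp = monomer_conc / catalyst_conc  # expected degree of polymerization
mn = dp * repeat_unit_mw           # expected Mn in g/mol
print(f"DP = {dp:.0f}, Mn = {mn:.0f} g/mol")  # DP = 100, Mn = 16600 g/mol
```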
Analysis Success of CTP is often evaluated using gel permeation chromatography (GPC), matrix-assisted laser desorption/ionization (MALDI) mass spectrometry, and nuclear magnetic resonance (NMR) spectroscopy. GPC characterization enables determination of average molecular weight. MALDI and NMR allow for identification of the end groups of the polymer chain. Polymer reactivity versus π-complex The chain-growth nature of CTP can also be described without invoking a catalyst-polymer π-complex. If we assume that no π-complex forms and that instead a polymer becomes more reactive every time a monomer is added to it, we would also see chain growth, since the largest polymers in the reaction would be the most reactive and would react with monomers preferentially. Which of these two mechanisms is at work can be elucidated by studying the end groups of the polymers using mass spectrometry. Polymers that can be synthesized by CTP A non-exhaustive list of the polymers that can be synthesized using CTP: Polythiophene Polyphenylene Polyselenophene Polytellurophene Polythiazole Polybenzothiadiazole Polypyrrole Polyfluorene References Polymerization reactions
Catalyst transfer polymerization
[ "Chemistry", "Materials_science" ]
1,007
[ "Polymerization reactions", "Polymer chemistry" ]
56,138,626
https://en.wikipedia.org/wiki/Nut%20shell%20filter
A nut shell filter is a device to remove oil from water. In the oil and gas industry, the term walnut shell filter is common since black walnuts are most often used. Typically, nut shell filters are designed for loadings under 100 mg/L oil and 100 mg/L suspended solids and operate with 90–95% removal efficiency. High oil and solids loadings reduce run times between backwashes and result in reduced effluent quality. Design A bed of nut shell media is contained in a vessel. Vessels are typically vertical, but may also be horizontal. Particles are captured as flow penetrates through the media bed. Although it is possible to use other media for this purpose, walnut and pecan shells are most commonly used since they have several desirable properties making them well suited for oil removal. First, nut shells are hard with a high modulus of elasticity, resulting in a low attrition rate and minimal media replacement, typically <5% per year. Nut shells also have an equal affinity for water and oil, allowing oil to be adsorbed during normal operation but also to be removed from the bed during agitation, which allows for media reuse. During normal operation, water typically flows down through the media bed, where oil is coalesced, attracted to the nut shells, and accumulates in the interstitial spaces between the media. Typical nut shell media is 12/20 (0.8 to 1.7 mm) and 12/16 mesh (1.2 to 1.7 mm). Although not designed for solids removal, an added benefit is that solids accumulate in the bed. As solids are collected, the differential pressure across the bed increases. Periodic backwashes are initiated to regenerate the media. Typically, backwash is triggered by one of the following: differential pressure; a timer (often 24 hours); or operator initiation (often due to exceeding the limit for effluent quality). Backwash occurs through mechanical agitation, such as backwash through a draft tube, backwash through an external media scrubber, or a mechanical mixer. If backwash is not sufficient, oil can cause media to agglomerate, known as mudballing. Typical flux of nut shell filters is 7 to 27 gpm/ft2. Commercial vessels are sized to accommodate the flow rate of water and range up to 14 feet in diameter. For continuous operation, multiple vessels are frequently used so flow can continue to be treated while backwash occurs in one vessel. For large flows, several vessels may be used. Unlike some oil / water separators, no chemicals are required for oil removal in nut shell filters. Uses Nut shell filters were designed in the 1970s to separate crude oil from oilfield produced water, which remains the principal use. Nut shell filters can be used onshore and offshore, but are more common onshore, where the treatment requirements are typically more stringent and footprint is not limited. Nut shell filters are used for tertiary treatment following primary and secondary treatment, which removes the bulk of the oil and suspended solids. Typically, effluent is reinjected for reuse or disposal, or discharged to a surface body of water. Categories Media filter References Filters Water filters Walnut
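As an aside, the flux figures above imply a simple sizing rule (required bed area = flow rate / design flux). The flow rate in this Python sketch is hypothetical, chosen only for illustration:

```python
# Illustrative sizing estimate from the quoted flux range (7-27 gpm/ft^2).
import math

flow_gpm = 2000.0   # produced-water flow to treat, hypothetical
design_flux = 15.0  # gpm/ft^2, chosen within the 7-27 range

area_ft2 = flow_gpm / design_flux               # required bed area
diameter_ft = 2.0 * math.sqrt(area_ft2 / math.pi)  # equivalent vessel diameter
print(f"bed area = {area_ft2:.0f} ft^2, vessel diameter = {diameter_ft:.1f} ft")
# about 133 ft^2 and 13 ft, consistent with vessels up to 14 feet in diameter
```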
Nut shell filter
[ "Chemistry", "Engineering" ]
637
[ "Water filters", "Water treatment", "Chemical equipment", "Filters", "Filtration" ]
74,583,808
https://en.wikipedia.org/wiki/Pines%27%20demon
In condensed matter physics, Pines' demon or, simply, demon is a collective excitation of electrons which corresponds to electrons in different energy bands moving out of phase with each other. Equivalently, a demon corresponds to counter-propagating currents of electrons from different bands. Named after David Pines, who coined the term in 1956, demons are quantum mechanical excited states of a material belonging to a broader class of exotic collective excitations, such as the magnon, phason, or exciton. Pines' demon was first experimentally observed in 2023 by A. A. Husain et al. within the transition-metal oxide distrontium ruthenate (Sr2RuO4). History Demons were originally theorized in 1956 by David Pines in the context of multiband metals with two energy bands: a heavy electron band with large effective mass $m_h$ and a light electron band with effective mass $m_l$. In the limit of $m_h \gg m_l$, the two bands are kinematically decoupled, so electrons in one band are unable to scatter to the other band while conserving momentum and energy. Within this limit, Pines pointed out that the two bands can be thought of as two distinct species of charged particles, so that it becomes possible for excitations of the two bands to be either in-phase or out-of-phase with each other. The in-phase excitation of the two bands was not a new type of excitation; it was simply the plasmon, an excitation proposed earlier by David Pines and David Bohm in 1952 which explained peaks observed in early electron energy-loss spectra of solids. The out-of-phase excitation was termed the "demon" by Pines after James Clerk Maxwell, since he thought Maxwell "lived too early to have a particle or excitation named in his honor." Pines explained his terminology by making the term a half backronym: since particles commonly have the suffix "-on" and the excitation involved distinct electron motion, the result was D.E.M.on, or simply demon for short. The demon was historically referred to as an acoustic plasmon, due to its gapless nature, which is also shared with acoustic phonons. However, with the rise of two-dimensional materials (such as graphene) and surface plasmons, the term acoustic plasmon has taken on a very different meaning as the ordinary plasmon in a low-dimensional system. Such acoustic plasmons are distinct from the demon because they do not consist of out-of-phase currents from different bands, do not exist in bulk materials, and do couple to light, unlike the demon. The demon excitation, unlike the plasmon, was only discovered many decades later, in 2023, by A. A. Husain et al. in the unconventional superconducting material Sr2RuO4 using a momentum-resolved variant of high-resolution electron energy-loss spectroscopy. Relationship with the plasmon The plasmon is a quantized vibration of the charge density in a material where all electron bands move in-phase. The plasmon is also massive (i.e., has an energy gap) in bulk materials due to the energy cost needed to overcome the long-ranged Coulomb interaction, with the energy cost being the plasma frequency . Plasmons exist in all conducting materials and play a dominant role in shaping the dielectric function of a metal at optical frequencies. Historically, plasmons were observed as early as 1941 by G. Ruthemann. 
The behavior of plasmons has widespread implications, as they serve as a tool for biological microscopy (surface plasmon resonance microscopy), underpin plasmon-based electronics (plasmonics), and underlay the original formulation of the transmon (transmission-line shunted plasma oscillation) device now used in superconducting qubits for quantum computing. The demon excitation, on the other hand, holds a number of key distinctions from the plasmon (and the acoustic plasmon). Theoretical significance Early studies of the demon in the context of superconductivity showed, under the two-band picture presented by Pines, that superconducting pairing of the light electron band can be enhanced through the existence of demons, while the pairing of the heavy electrons would be more or less unaffected. The implication is that demons would allow for orbital-selective effects on superconducting pairing. However, for the simple case of spherically symmetric metals with two bands, natural realizations of demon-enhanced superconductivity seemed unlikely, as the heavy (d-)electrons play the dominant role in the superconductivity of most transition metals considered at the time. However, more recent studies on high-temperature superconducting metal hydrides, where light electron bands participate in superconductivity, suggest demons may be playing an active role in such systems. References Quasiparticles
Pines' demon
[ "Physics", "Materials_science" ]
1,046
[ "Quasiparticles", "Subatomic particles", "Condensed matter physics", "Matter" ]
74,589,966
https://en.wikipedia.org/wiki/VITO%20experiment
The Versatile Ion polarisation Technique Online (VITO) experiment is a permanent experimental setup located in the ISOLDE facility at CERN, in the form of a beamline. The purpose of the beamline is to perform a wide range of studies using spin-polarised short-lived atomic nuclei. VITO uses circularly-polarised laser light to obtain polarised radioactive beams of different isotopes delivered by ISOLDE. These have already been used for weak-interaction studies, biological investigations, and more recently nuclear structure research. The beamline is located at the site of the former Ultra High Vacuum (UHV) beamline hosting ASPIC. Beamline setup Radioactive ion beams (RIBs) are produced by the ISOLDE facility, using a beam of high-energy protons from the Proton Synchrotron Booster (PSB) incident on a target. The interaction of the beam and the target produces radioactive species, which are extracted through thermal diffusion by heating the target. The beam of radioactive ions is then separated by mass number by one of the two mass separators at the facility. The resulting low-energy beam is delivered to the various experimental stations. The VITO beamline is modular. The first part is common to all projects and is devoted to atomic polarisation via optical pumping with circularly polarised laser light. The singly-charged ion beam of short-lived isotopes from ISOLDE (RIB) is Doppler-tuned into resonance with the laser light provided by a continuous-wave tunable laser. Next, the beam may be neutralised before it reaches a 1.5 m long section in which the ion or atom beam is overlapped with the laser and they interact many times (many excitation-decay cycles take place), leading to the polarisation of the atomic spins. The polarised beam is then transported to one of the setups that can be placed behind the polarisation line. At this point the polarised beam is implanted into a solid or liquid host. A strong magnetic field surrounding the sample allows the nuclear spin polarisation to be maintained for dozens of milliseconds to seconds by decoupling the electron and nuclear spins. In these conditions, the degree of spin polarisation and its changes can be monitored extremely efficiently by observing the spatial asymmetry in the emission of beta particles by the decaying short-lived nuclei. This is possible because the weak force that is responsible for the beta decay does not conserve parity. As few as several thousand decays might be enough to record a good signal. Nuclear Magnetic Resonance (NMR) Nuclear Magnetic Resonance (NMR) is a technique that provides information on the environment of a nucleus, from calculations based on the shift in Larmor frequency or relaxation time. β-NMR is a modification of this basic technique using the idea that beta decay from polarised radioactive nuclei is anisotropic (directional) in space. The resonances are detected as a change in the beta-decay asymmetry, which gives the method a much higher signal strength than conventional NMR (up to 10 orders of magnitude). Results One of the first experiments using polarised beams at VITO was devoted to the polarisation of the mirror nucleus argon-35. The scientific motivation for this project was provided by weak-interaction studies and the determination of the Vud matrix element in the CKM quark mixing matrix. The next, gradually upgraded, setup is centred around a high-field magnet, liquid samples and radio frequency excitations. 
The aim is to develop a method of beta-detected Nuclear Magnetic Resonance (β-NMR) to investigate the interaction of metal ions with biomolecules in liquids. The most recent studies at VITO concern the determination of spins and parities in excited nuclear states populated by beta decay. In this case, the setup consists of a solid sample surrounded by a compact magnet that allows gamma radiation and neutrons to reach the decay spectroscopy setup. External links VITO page on the ISOLDE website References Physics experiments CERN experiments
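As an aside, the Larmor frequency that (β-)NMR measurements are based on is ν = γB/2π. The following Python sketch uses the proton gyromagnetic ratio as a reference value and a hypothetical field strength; it is not a parameter of the VITO setup:

```python
# Illustrative: Larmor precession frequency, nu = gamma * B / (2*pi).
import math

gamma_proton = 2.6752218744e8  # proton gyromagnetic ratio, rad s^-1 T^-1

B = 4.7  # tesla, hypothetical high-field magnet strength
nu_mhz = gamma_proton * B / (2 * math.pi) / 1e6
print(f"proton Larmor frequency at {B} T: {nu_mhz:.1f} MHz")  # ~200 MHz
```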
VITO experiment
[ "Physics" ]
822
[ "Experimental physics", "Physics experiments" ]
60,951,296
https://en.wikipedia.org/wiki/Machine%20learning%20in%20video%20games
Artificial intelligence and machine learning techniques are used in video games for a wide variety of applications such as non-player character (NPC) control and procedural content generation (PCG). Machine learning is a subset of artificial intelligence that uses historical data to build predictive and analytical models. This is in sharp contrast to traditional methods of artificial intelligence such as search trees and expert systems. Information on machine learning techniques in the field of games is mostly known to the public through research projects, as most gaming companies choose not to publish specific information about their intellectual property. The most publicly known application of machine learning in games is likely the use of deep learning agents that compete with professional human players in complex strategy games. There has been significant application of machine learning to games such as Atari/ALE, Doom, Minecraft, StarCraft, and car racing. Other games that did not originally exist as video games, such as chess and Go, have also been affected by machine learning. Overview of relevant machine learning techniques Deep learning Deep learning is a subset of machine learning which focuses heavily on the use of artificial neural networks (ANN) that learn to solve complex tasks. Deep learning uses multiple layers of ANN and other techniques to progressively extract information from an input. Due to this complex layered approach, deep learning models often require powerful machines to train and run on. Convolutional neural networks Convolutional neural networks (CNN) are specialized ANNs that are often used to analyze image data. These types of networks are able to learn translation-invariant patterns, which are patterns that are not dependent on location. CNNs are able to learn these patterns in a hierarchy, meaning that earlier convolutional layers will learn smaller local patterns while later layers will learn larger patterns based on the previous patterns. A CNN's ability to learn visual data has made it a commonly used tool for deep learning in games. Recurrent neural network Recurrent neural networks are a type of ANN that are designed to process sequences of data in order, one part at a time rather than all at once. An RNN runs over each part of a sequence, using the current part of the sequence along with memory of previous parts of the current sequence to produce an output. These types of ANNs are highly effective at tasks such as speech recognition and other problems that depend heavily on temporal order. There are several types of RNNs with different internal configurations; the basic implementation suffers from a lack of long-term memory due to the vanishing gradient problem, thus it is rarely used over newer implementations. Long short-term memory A long short-term memory (LSTM) network is a specific implementation of an RNN that is designed to deal with the vanishing gradient problem seen in simple RNNs, which would lead to them gradually "forgetting" about previous parts of an inputted sequence when calculating the output of a current part. LSTMs solve this problem with the addition of an elaborate system that uses an additional input/output to keep track of long-term data. LSTMs have achieved very strong results across various fields, and were used by several monumental deep learning agents in games. Reinforcement learning Reinforcement learning is the process of training an agent using rewards and/or punishments. 
The way an agent is rewarded or punished depends heavily on the problem, such as giving an agent a positive reward for winning a game or a negative one for losing. Reinforcement learning is used heavily in the field of machine learning and can be seen in methods such as Q-learning, policy search, deep Q-networks and others. It has seen strong performance in both the field of games and robotics. Neuroevolution Neuroevolution involves the use of both neural networks and evolutionary algorithms. Instead of using gradient descent like most neural networks, neuroevolution models make use of evolutionary algorithms to update neurons in the network. Researchers claim that this process is less likely to get stuck in a local minimum and is potentially faster than state-of-the-art deep learning techniques. Deep learning agents Machine learning agents have been used to take the place of a human player rather than function as NPCs, which are deliberately added into video games as part of designed gameplay. Deep learning agents have achieved impressive results when used in competition with both humans and other artificial intelligence agents. Chess Chess is a turn-based strategy game that is considered a difficult AI problem due to the computational complexity of its board space. Similar strategy games are often solved with some form of a minimax tree search. These types of AI agents have been known to beat professional human players, such as in the historic 1997 Deep Blue versus Garry Kasparov match. Since then, machine learning agents have shown ever greater success than previous AI agents. Go Go is another turn-based strategy game which is considered an even more difficult AI problem than chess. The state space of Go is around 10^170 possible board states, compared to the 10^120 board states for chess. Prior to recent deep learning models, AI Go agents were only able to play at the level of a human amateur. AlphaGo Google's 2015 AlphaGo was the first AI agent to beat a professional Go player. AlphaGo used a deep learning model to train the weights of a Monte Carlo tree search (MCTS). The deep learning model consisted of two ANNs: a policy network to predict the probabilities of potential moves by opponents, and a value network to predict the win chance of a given state. The deep learning model allows the agent to explore potential game states more efficiently than a vanilla MCTS. The networks were initially trained on games of human players, and the system was then further trained by games against itself. AlphaGo Zero AlphaGo Zero, another implementation of AlphaGo, was able to train entirely by playing against itself. It was able to quickly train up to the capabilities of the previous agent. StarCraft series StarCraft and its sequel StarCraft II are real-time strategy (RTS) video games that have become popular environments for AI research. Blizzard and DeepMind have worked together to release a public StarCraft 2 environment for AI research to be done on. Various deep learning methods have been tested on both games, though most agents usually have trouble outperforming the default AI with cheats enabled or skilled players of the game. Alphastar Alphastar was the first AI agent to beat professional StarCraft 2 players without any in-game advantages. The deep learning network of the agent initially received input from a simplified zoomed-out version of the gamestate, but was later updated to play using a camera like other human players. 
StarCraft series StarCraft and its sequel StarCraft II are real-time strategy (RTS) video games that have become popular environments for AI research. Blizzard and DeepMind have worked together to release a public StarCraft 2 environment for AI research. Various deep learning methods have been tested on both games, though most agents usually have trouble outperforming the default AI with cheats enabled or skilled players of the game. AlphaStar AlphaStar was the first AI agent to beat professional StarCraft 2 players without any in-game advantages. The deep learning network of the agent initially received input from a simplified, zoomed-out version of the game state, but was later updated to play using a camera like human players. The developers have not publicly released the code or architecture of their model, but have listed several state-of-the-art machine learning techniques such as relational deep reinforcement learning, long short-term memory, auto-regressive policy heads, pointer networks, and a centralized value baseline. AlphaStar was initially trained with supervised learning: it watched replays of many human games in order to learn basic strategies. It then trained against different versions of itself and was improved through reinforcement learning. The final version was hugely successful, but was only trained to play on a specific map in a Protoss mirror matchup. Dota 2 Dota 2 is a multiplayer online battle arena (MOBA) game. Like other complex games, traditional AI agents have not been able to compete on the same level as professional human players. The only widely published information on AI agents attempted on Dota 2 is OpenAI's deep learning agent, OpenAI Five. OpenAI Five OpenAI Five utilized separate LSTM networks to learn each hero. It trained using a reinforcement learning technique known as Proximal Policy Optimization, running on a system containing 256 GPUs and 128,000 CPU cores. Five trained for months, accumulating 180 years of game experience each day, before facing off with professional players. It was eventually able to beat the 2018 Dota 2 esports champion team in a 2019 series of games. Planetary Annihilation Planetary Annihilation is a real-time strategy game which focuses on massive-scale war. The developers use ANNs in their default AI agent. Supreme Commander 2 Supreme Commander 2 is a real-time strategy (RTS) video game. The game uses multilayer perceptrons (MLPs) to control a platoon's reaction to encountered enemy units. A total of four MLPs are used, one for each platoon type: land, naval, bomber, and fighter. Generalized games There have been attempts to make machine learning agents that are able to play more than one game. These "general" gaming agents are trained to understand games based on shared properties between them. AlphaZero AlphaZero is a modified version of AlphaGo Zero which is able to play shogi, chess, and Go. The modified agent starts with only the basic rules of the game, and is also trained entirely through self-play. DeepMind was able to train this generalized agent to be competitive with previous versions of itself on Go, as well as with top agents in the other two games. Strengths and weaknesses of deep learning agents Machine learning agents are often not covered in many game design courses. Previous use of machine learning agents in games may not have been very practical, as even the 2015 version of AlphaGo took hundreds of CPUs and GPUs to train to a strong level. This potentially limits the creation of highly effective deep learning agents to large corporations or extremely wealthy individuals. The extensive training time of neural-network-based approaches can also take weeks on these powerful machines. The problem of effectively training ANN-based models extends beyond powerful hardware environments; finding a good way to represent data and learn meaningful things from it is also often a difficult problem. ANN models often overfit to very specific data and perform poorly in more generalized cases. AlphaStar shows this weakness: despite being able to beat professional players, it is only able to do so on a single map when playing a Protoss mirror matchup. OpenAI Five also shows this weakness: it was only able to beat professional players when facing a very limited hero pool out of the entire game. These examples show how difficult it can be to train a deep learning agent to perform in more generalized situations. Machine learning agents have shown great success in a variety of different games. However, agents that are too competent also risk making games too difficult for new or casual players. Research has shown that a challenge set too far above a player's skill level will lower player enjoyment. These highly trained agents are likely only desirable against very skilled human players who have many hours of experience in a given game. Given these factors, highly effective deep learning agents are likely only a desired choice in games that have a large competitive scene, where they can function as an alternative practice option to a skilled human player.
Computer vision-based players Computer vision focuses on training computers to gain a high-level understanding of digital images or videos. Many computer vision techniques also incorporate forms of machine learning, and have been applied on various video games. This application of computer vision focuses on interpreting game events using visual data. In some cases, artificial intelligence agents have used model-free techniques to learn to play games without any direct connection to internal game logic, solely using video data as input. Pong Andrej Karpathy has demonstrated that a relatively trivial neural network with just one hidden layer is capable of being trained to play Pong based on screen data alone.
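A sketch in the spirit of that one-hidden-layer setup is shown below; the 80x80 frame-difference input, layer width, and initialization are assumptions for illustration, not Karpathy's exact code:

import numpy as np

H, D = 200, 80 * 80                              # hidden units; flattened 80x80 input
rng = np.random.default_rng(0)
W1 = rng.standard_normal((H, D)) / np.sqrt(D)    # input -> hidden weights
W2 = rng.standard_normal(H) / np.sqrt(H)         # hidden -> output weights

def policy_forward(x):
    # x: flattened difference of two consecutive screen frames.
    h = np.maximum(0.0, W1 @ x)                  # ReLU hidden layer
    logit = float(W2 @ h)
    p_up = 1.0 / (1.0 + np.exp(-logit))          # probability of moving the paddle up
    return p_up, h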
Atari games In 2013, a team at DeepMind demonstrated the use of deep Q-learning to play a variety of Atari video games (Beamrider, Breakout, Enduro, Pong, Q*bert, Seaquest, and Space Invaders) from screen data. The team expanded their work to create a learning algorithm called MuZero that was able to "learn" the rules and develop winning strategies for over 50 different Atari games based on screen data. Doom Doom (1993) is a first-person shooter (FPS) game. Student researchers from Carnegie Mellon University used computer vision techniques to create an agent that could play the game using only image pixel input from the game. The students used convolutional neural network (CNN) layers to interpret incoming image data and output valid information to a recurrent neural network which was responsible for outputting game moves. Super Mario Other uses of vision-based deep learning techniques for playing games have included playing Super Mario Bros. using only image input, with deep Q-learning for training. Minecraft Researchers at OpenAI created about 2,000 hours of Minecraft gameplay video annotated with the corresponding human inputs, and then trained a machine learning model to infer those inputs from the video. The researchers then applied that model to 70,000 hours of Minecraft playthroughs available on YouTube to see how well it could recover the inputs matching the observed behavior and learn further from it, such as learning the steps and process of creating a diamond pickaxe tool. Machine learning for procedural content generation in games Machine learning has seen research for use in content recommendation and generation. Procedural content generation is the process of creating data algorithmically rather than manually. This type of content is used to add replayability to games without relying on constant additions by human developers. PCG has been used in various games for different types of content generation, examples of which include weapons in Borderlands 2, all world layouts in Minecraft and entire universes in No Man's Sky. Common approaches to PCG include techniques that involve grammars, search-based algorithms, and logic programming. These approaches require humans to manually define the range of content possible, meaning that a human developer decides what features make up a valid piece of generated content. Machine learning is theoretically capable of learning these features when given examples to train on, thus greatly reducing the complicated step of developers specifying the details of content design. Machine learning techniques used for content generation include long short-term memory (LSTM) recurrent neural networks (RNNs), generative adversarial networks (GANs), and K-means clustering. Not all of these techniques make use of ANNs, but the rapid development of deep learning has greatly increased the potential of techniques that do. Galactic Arms Race Galactic Arms Race is a space shooter video game that uses neuroevolution-powered PCG to generate unique weapons for the player. This game was a finalist in the 2010 Indie Game Challenge and its related research paper won the Best Paper Award at the 2009 IEEE Conference on Computational Intelligence and Games. The developers use a form of neuroevolution called cgNEAT to generate new content based on each player's personal preferences. Each generated item is represented by a special ANN known as a compositional pattern-producing network (CPPN). During the evolutionary phase of the game, cgNEAT calculates the fitness of current items based on player usage and other gameplay metrics; this fitness score is then used to decide which CPPNs will reproduce to create a new item. The end result is the generation of new weapon effects based on the player's preferences. Super Mario Bros. Super Mario Bros. has been used by several researchers to simulate PCG level creation, with various attempts using different methods. A version in 2014 used n-grams to generate levels similar to the ones it trained on, which was later improved by making use of MCTS to guide generation. These generations were often not optimal when taking gameplay metrics such as player movement into account; a separate research project in 2017 tried to resolve this problem by generating levels based on player movement using Markov chains. These projects were not subjected to human testing and may not meet human playability standards.
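As a rough sketch of the n-gram/Markov-chain idea just described, one can treat a level as a sequence of column tokens and sample transitions learned from example levels; the token representation and function names below are hypothetical:

import random
from collections import defaultdict

def train(levels):
    # levels: list of example levels, each a list of column tokens, e.g. ["flat", "gap", ...]
    transitions = defaultdict(list)
    for level in levels:
        for current, following in zip(level, level[1:]):
            transitions[current].append(following)
    return transitions

def generate(transitions, start, length):
    level = [start]
    while len(level) < length:
        choices = transitions.get(level[-1])
        if not choices:          # no observed successor: stop early
            break
        level.append(random.choice(choices))
    return level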
The Legend of Zelda PCG level creation for The Legend of Zelda has been attempted by researchers at the University of California, Santa Cruz. This attempt made use of a Bayesian network to learn high-level knowledge from existing levels, while principal component analysis (PCA) was used to represent the different low-level features of these levels. The researchers used PCA to compare generated levels to human-made levels and found that they were considered very similar. This test did not include playability or human testing of the generated levels. Music generation Music is often seen in video games and can be a crucial element for influencing the mood of different situations and story points. Machine learning has seen use in the experimental field of music generation; it is uniquely suited to processing raw unstructured data and forming high-level representations that could be applied to the diverse field of music. Most attempted methods have involved the use of ANNs in some form. Methods include the use of basic feedforward neural networks, autoencoders, restricted Boltzmann machines, recurrent neural networks, convolutional neural networks, generative adversarial networks (GANs), and compound architectures that use multiple methods. VRAE video game melody symbolic music generation system The 2014 research paper on "Variational Recurrent Auto-Encoders" attempted to generate music based on songs from eight different video games. This project is one of the few conducted purely on video game music. The neural network in the project was able to generate data that was very similar to the data of the games it trained on, but the generated data did not translate into good-quality music. References External links Machine learning Game artificial intelligence
Machine learning in video games
[ "Mathematics", "Engineering" ]
3,490
[ "Artificial intelligence engineering", "Game theory", "Game artificial intelligence", "Machine learning" ]
60,953,954
https://en.wikipedia.org/wiki/Schl%C3%BCsselger%C3%A4t%2041
The Schlüsselgerät 41 ("Cipher Machine 41"), also known as the SG-41 or Hitler mill, was a rotor cipher machine, first produced in 1941 in Nazi Germany, that was designed as a potential successor for the Enigma machine. It saw limited use by the Abwehr (military intelligence) towards the end of World War II. History The SG-41 was created under order of the Heereswaffenamt (Inspectorate 7/VI organisation) as a collaboration between German cryptographer Fritz Menzer and Wanderer, a leading typewriter manufacturer. The machine also acquired the nickname "Hitler mill" because of the large crank attached to the side of the unit. Instead of using a lampboard like the Enigma, the SG-41 printed both the plaintext and ciphertext of the message onto two paper tapes. Due to wartime shortages of light metals such as aluminium and magnesium, the SG-41 weighed approximately , which made it unsuitable for the front lines. Menzer intended for the SG-41 to fully replace Enigma, which he considered to no longer be secure; the Luftwaffe and Heer ordered around 11,000 units. A total of 1,000 units were produced. Various sources have reported production figures as low as 500 units due to materiel shortages, but production was halted after 1,000 units, as it was considered too heavy for use on the front. In December 1943, General Fritz Thiele ordered production to cease by the end of 1944. Beginning on 12 October 1944, the first deliveries to the Abwehr began. In the final months of the war, the SG-41 was used instead of the Abwehr Enigma. Function Functionally, the machine had greater similarities with the Boris Hagelin C-Series. The SG-41 had six encryption rotors, compared to the Enigma, which had either three or four, in addition to a number of advanced features, making it much more resistant to cryptanalysis than the Enigma or other contemporary Hagelin machines. While the Enigma rotors advanced by one for each letter enciphered, the SG-41's wheels interacted with each other and moved irregularly. Similar functionality was not adopted in a mass-produced cipher machine until 1952 with the advent of the Hagelin CX-52. Cryptanalysis The Allied codebreakers in Bletchley Park considered the device a "mystery". Only a handful of messages were able to be deciphered during the war, namely when two messages were "in depth" i.e. encrypted with the same key. The inner workings of the device were unclear until after the war, so it was not possible to perform a systematic cryptanalysis on the messages. Allied codebreakers referred to it as a "remarkable machine". SG-41Z In the final months of the war, an additional 550 units were built, which are referred to as the SG-41Z. This model only allowed the numbers 0–9 to be enciphered and was used by the Luftwaffe for weather reports. Find near Aying On 5 May 2017, two hobbyist treasure hunters found an SG-41 using a metal detector in a forest near the Bavarian city of Aying, buried approximately deep. The hobbyists donated their find to the Deutsches Museum in Munich instead of selling it privately. The museum intends to conserve it in its current condition and display it in a new permanent exhibit, BildSchriftCodes. See also Schlüsselgerät 39 References Products introduced in 1941 Broken stream ciphers Cryptographic hardware Rotor machines History of telecommunications in Germany Signals intelligence of World War II World War II military equipment of Germany Encryption devices Enigma machine Military communications of Germany
Schlüsselgerät 41
[ "Physics", "Technology" ]
754
[ "Physical systems", "Machines", "Rotor machines" ]
60,958,267
https://en.wikipedia.org/wiki/Reshetikhin%E2%80%93Turaev%20invariant
In the mathematical field of quantum topology, the Reshetikhin–Turaev invariants (RT-invariants) are a family of quantum invariants of framed links. Such invariants of framed links also give rise to invariants of 3-manifolds via the Dehn surgery construction. These invariants were discovered by Nicolai Reshetikhin and Vladimir Turaev in 1991, and were meant to be a mathematical realization of Witten's proposed invariants of links and 3-manifolds using quantum field theory. Overview To obtain an RT-invariant, one must first have a -linear ribbon category at hand. Each -linear ribbon category comes equipped with a diagrammatic calculus in which morphisms are represented by certain decorated framed tangle diagrams, where the initial and terminal objects are represented by the boundary components of the tangle. In this calculus, a (decorated framed) link diagram , being a (decorated framed) tangle without boundary, represents an endomorphism of the monoidal identity (the empty set in this calculus), or in other words, an element of . This element of is the RT-invariant associated to . Given any closed oriented 3-manifold , there exists a framed link in the 3-sphere so that is homeomorphic to the manifold obtained by surgering along . Two such manifolds and are homeomorphic if and only if and are related by a sequence of Kirby moves. Reshetikhin and Turaev used this idea to construct invariants of 3-manifolds by combining certain RT-invariants into an expression which is invariant under Kirby moves. Such invariants of 3-manifolds are known as Witten–Reshetikhin–Turaev invariants (WRT-invariants). Examples Let be a ribbon Hopf algebra over a field (one can take, for example, any quantum group over ). Consider the category , of finite dimensional representations of . There is a diagrammatic calculus in which morphisms in are represented by framed tangle diagrams with each connected component decorated by a finite dimensional representation of . That is, is a -linear ribbon category. In this way, each ribbon Hopf algebra gives rise to an invariant of framed links colored by representations of (an RT-invariant). For the quantum group over the field , the corresponding RT-invariant for links and 3-manifolds gives rise to the following family of link invariants, appearing in skein theory. Let be a framed link in with components. For each , let denote the RT-invariant obtained by decorating each component of by the unique -dimensional representation of . Then where the -tuple, denotes the Kauffman polynomial of the link , where each of the components is cabled by the Jones–Wenzl idempotent , a special element of the Temperley–Lieb algebra. To define the corresponding WRT-invariant for 3-manifolds, first of all we choose to be either a -th root of unity or an -th root of unity with odd . Assume that is obtained by doing Dehn surgery on a framed link . Then the RT-invariant for the 3-manifold is defined to be where is the Kirby coloring, are the unknot with framing, and are the numbers of positive and negative eigenvalues for the linking matrix of respectively. Roughly speaking, the first and second bracket ensure that is invariant under blowing up/down (first Kirby move) and the third bracket ensures that is invariant under handle sliding (second Kirby move). 
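Since the inline formulas above did not survive extraction, the following LaTeX sketch shows one standard way the normalization just described is written; the notation (ω for the Kirby coloring, U± for the ±1-framed unknots, b± for the eigenvalue counts of the linking matrix) is chosen here for illustration, and conventions vary between sources:

\[
  \mathrm{RT}(M) \;=\;
  \frac{\langle \omega, \ldots, \omega \rangle_{L}}
       {\langle \omega \rangle_{U_{+}}^{\,b_{+}} \, \langle \omega \rangle_{U_{-}}^{\,b_{-}}}
\]

Dividing by the unknot evaluations is what compensates for blow-ups and blow-downs, matching the invariance under the first Kirby move noted above.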
Properties The Witten–Reshetikhin–Turaev invariants for 3-manifolds satisfy the following properties: where denotes the connected sum of and where is the manifold with opposite orientation, and denotes the complex conjugate of These three properties coincide with the properties satisfied by the 3-manifold invariants defined by Witten using Chern–Simons theory (under a certain normalization). Open problems Witten's asymptotic expansion conjecture Pick . Witten's asymptotic expansion conjecture suggests that for every 3-manifold , the large -th asymptotics of is governed by the contributions of flat connections. Conjecture: There exist constants and (depending on ) for and for such that the asymptotic expansion of in the limit is given by where are the finitely many different values of the Chern–Simons functional on the space of flat -connections on . Volume conjecture for the Reshetikhin–Turaev invariant Witten's asymptotic expansion conjecture suggests that at , the RT-invariants grow polynomially in . On the contrary, at with odd , in 2018 Q. Chen and T. Yang suggested the volume conjecture for the RT-invariants, which essentially says that the RT-invariants for hyperbolic 3-manifolds grow exponentially in and that the growth rate gives the hyperbolic volume and Chern–Simons invariants of the 3-manifold. Conjecture: Let be a closed oriented hyperbolic 3-manifold. Then for a suitable choice of arguments, where is an odd positive integer. References External links https://ncatlab.org/nlab/show/Reshetikhin-Turaev+construction Quantum groups Quantum field theory
Reshetikhin–Turaev invariant
[ "Physics" ]
1,072
[ "Quantum field theory", "Quantum mechanics" ]
60,962,435
https://en.wikipedia.org/wiki/Photothermal%20ratio
The photothermal ratio (PTR), also named the photothermal quotient, is a variable that characterizes the amount of light available to plants relative to the temperature level. It is used in plant biology to characterize the growth environment of plants. Rationale Both light and temperature are important environmental variables that determine the growth and development of plants. Light is especially important in driving photosynthesis and producing sugars. Temperature is a strong driver of cell division, where available sugars are converted to produce new leaf, stem, root or reproductive biomass. As such, both are important factors – along with nutrient and water availability – in determining the source:sink balance of a plant, the amount of sugar available for a plant in relation to its growth potential. The photothermal ratio is a quantitative descriptor that can be used to approximate this balance. Calculation and units The photothermal ratio is calculated by dividing the daily light integral (photosynthetic photon flux density integrated over a day; DLI) plants are exposed to by a baseline daily temperature (Tb): PTR = DLI / Tb. Units are therefore mol quanta m−2 day−1 °C−1. Alternatively, the number of degree days has been used rather than Tb per se, with units of the form mol degree-day−1. The PTR concept was introduced in detailed studies of the growth and productivity of particular species. For these species, a baseline temperature Tb is chosen below which no leaf elongation is known to take place, which for many temperate species is a temperature around 5 °C. In characterizing the growth environment of a broad range of plants without reference to any specific species, Tb has been taken to be zero °C. Normal ranges The photothermal ratio is relatively constant over the year in the tropics, with lowland values around 1.3 mol m−2 day−1 °C−1. At higher latitudes PTR changes with the seasons, being high in spring and low in autumn. Averaged over the growing season, PTR values are around 3 in boreal zones, and around 2 in temperate zones. Plants growing in glasshouses often grow at a PTR of about 1; experiments with Arabidopsis are often carried out at a PTR around 0.2. Effects on plants Many effects that have been ascribed to light are actually dependent on temperature as well. For example, strong stem elongation at low light will only take place when temperatures are high, but not when temperatures are close to 0 °C. In wheat, the PTR in the month before anthesis strongly determines the number of kernels. In horticulture, plants grown at a high PTR generally have thicker stems, shorter internodes and more flowers, and therefore have higher marketable yield. See also Daily light integral Climate References Plants Light Botany
Photothermal ratio
[ "Physics", "Biology" ]
576
[ "Physical phenomena", "Spectrum (physical sciences)", "Plants", "Electromagnetic spectrum", "Waves", "Light", "Botany" ]
64,458,178
https://en.wikipedia.org/wiki/Combination%20antibiotic
A combination antibiotic is one in which two ingredients are added together for additional therapeutic effect. One or both ingredients may be antibiotics. Antibiotic combinations are increasingly important because of antimicrobial resistance: individual antibiotics that used to be effective are no longer effective, and because of the absence of new classes of antibiotics, combinations allow old antibiotics to continue to be used. In particular, they may be required to treat multiresistant organisms, such as carbapenem-resistant Enterobacteriaceae. Some combinations are more likely to result in successful treatment of an infection. Uses Antibiotics are used in combination for a number of reasons: to treat multiresistant organisms, such as carbapenem-resistant Enterobacteriaceae; because a person may be infected with more than one microbe simultaneously, for example in infections of the abdominal cavity after bowel perforation; because antibiotics used together may act synergistically to increase the efficacy of both; and because antibiotics used together may have a broader spectrum than each antibiotic used individually. Examples Examples of combinations include: Amoxicillin/clavulanic acid, which combines the beta-lactam amoxicillin with the suicide inhibitor clavulanic acid, which helps the amoxicillin overcome the action of beta-lactamase Trimethoprim/sulfamethoxazole Research Research into combination antibiotics is ongoing. References Antibiotics
Combination antibiotic
[ "Biology" ]
338
[ "Antibiotics", "Biocides", "Biotechnology products" ]
67,434,035
https://en.wikipedia.org/wiki/Zinc%20cycle
The zinc cycle is a biogeochemical cycle that transports zinc through the lithosphere, hydrosphere, and biosphere. Natural cycle Lithosphere Zinc-containing minerals in the Earth's crust exist primarily as sulfides, such as sphalerite and wurtzite, and carbonates such as smithsonite. Zinc minerals enter the terrestrial environment through weathering and human activities. Zinc is used by plants and other organisms, and then enters aquatic systems, where it either settles into sediments or eventually enters the oceans. Oceans Zinc is a marine micronutrient that tends to be in higher concentration in the deep ocean; it is transformed into organic zinc, which enters the food chain through diatom blooms during upwelling events in the Southern Ocean. Zinc settles to the ocean floor and is returned to the mantle through the subduction of marine sediments. The zinc cycle has historically been characterized by episodic changes in zinc deposits. Major global events, such as the formation or breakup of supercontinents and periods of significant volcanic activity, tend to create new deposits of zinc in the lithosphere. Between these events, zinc tends to cycle through the biosphere at a lower rate of change. Anthropogenic influences The anthropogenic effect on the zinc cycle has been significant. Zinc is mined as a mineral resource at a rate of 9800 Gg/yr for use in metal alloys including brass and nickel silver, for galvanizing steel, and in zinc compounds such as zinc oxide. Half of the zinc waste from industrial use comes from tailings and slag; the rest comes from the oxidation of zinc metals and landfill waste. Scientists estimate that 85% of all zinc that has been mined for human use is still in use; therefore, the amount of zinc waste going into landfills is expected to increase. Zinc is a trace nutrient present in fertilizers, which contribute 21 Gg/yr to agricultural cycling. Commercial fertilizers contain as much as 36% zinc. Only a small portion of the zinc that enters the agricultural system is removed in crops that are consumed by humans; a significant portion is recycled in manure and compost, and accumulates in the soil. References Zinc Biogeochemical cycle
Zinc cycle
[ "Chemistry" ]
464
[ "Biogeochemical cycle", "Biogeochemistry" ]
67,437,670
https://en.wikipedia.org/wiki/Replica%20cluster%20move
Replica cluster move in condensed matter physics refers to a family of non-local cluster algorithms used to simulate spin glasses. It is an extension of the Swendsen-Wang algorithm in that it generates non-trivial spin clusters informed by the interaction states on two (or more) replicas instead of just one. It is different from the replica exchange method (or parallel tempering), as it performs a non-local update on a fraction of the sites between two replicas at the same temperature, while parallel tempering directly exchanges all the spins between two replicas at different temperatures. However, the two are often used alongside each other to achieve state-of-the-art efficiency in simulating spin-glass models. The Chayes-Machta-Redner representation The Chayes-Machta-Redner (CMR) representation is a graphical representation of the Ising spin glass which extends the standard FK representation. It is based on the observation that the total Hamiltonian of two independent Ising replicas α and β can be written as the Hamiltonian of a 4-state clock model. To see this, we define the following mapping where is the orientation of the 4-state clock; then the total Hamiltonian can be represented as In the graphical representation of this model, there are two types of bonds that can be open, referred to as blue and red. To generate the bonds on the lattice, the following rules are imposed: If , or when the interactions on edge are satisfied on both replicas, then a blue bond is open with probability . If , or when the interaction on edge is satisfied in exactly one replica, then a red bond is open with probability . Otherwise, a closed bond is formed. Under these rules, it can be checked that a cycle of open bonds can only contain an even number of red bonds. A cluster formed with blue bonds is referred to as a blue cluster, and a super-cluster formed together with both blue and red bonds is referred to as a grey cluster. Once the clusters are generated, there are two types of non-local updates that can be made to the clock states independently in the clock clusters (and thus the spin states in both replicas). First, for every blue cluster, we can flip (or rotate ) the clock states with some arbitrary probability. Following this, for every grey cluster (blue clusters connected with red bonds), we can rotate all the clock states simultaneously by a random angle. It can be shown that both updates are consistent with the bond-formation rules and satisfy detailed balance. Therefore, an algorithm based on this CMR representation will be correct when used in conjunction with other ergodic algorithms. However, the algorithm is not necessarily efficient, as a giant grey cluster will tend to span the entire lattice at sufficiently low temperatures (e.g. even in paramagnetic phases of spin-glass models). Houdayer cluster move The Houdayer cluster move is a simpler cluster algorithm based on a site percolation process on sites with negative spin overlaps. It was introduced by Jerome Houdayer in 2001. For two independent Ising replicas, we can define the spin overlap as and a cluster is formed by randomly selecting a site and percolating through the adjacent sites with (with a percolation ratio of 1) until the maximal cluster is formed. The spins in the cluster are then exchanged between the two replicas. It can be shown that the exchange update is isoenergetic, meaning that the total energy is conserved in the update. This gives an acceptance ratio of 1 as calculated from the Metropolis-Hastings rule. In other words, the update is rejection-free.
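A minimal sketch of a single Houdayer move on a two-dimensional square lattice with periodic boundaries follows; the lattice layout, seed choice, and function signature are assumptions for illustration:

import numpy as np
from collections import deque

def houdayer_move(s1, s2, rng):
    # s1, s2: two L x L arrays of +/-1 spins (replicas at the same temperature).
    # rng: a NumPy Generator, e.g. np.random.default_rng().
    L = s1.shape[0]
    overlap = s1 * s2                      # q_i = s1_i * s2_i
    negative = np.argwhere(overlap == -1)  # sites where the replicas disagree
    if len(negative) == 0:
        return
    seed = tuple(negative[rng.integers(len(negative))])
    # Breadth-first percolation through adjacent negative-overlap sites.
    cluster, queue = {seed}, deque([seed])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nx % L, ny % L        # periodic boundary conditions
            if (nx, ny) not in cluster and overlap[nx, ny] == -1:
                cluster.add((nx, ny))
                queue.append((nx, ny))
    # Exchange the cluster's spins between the replicas (rejection-free).
    for x, y in cluster:
        s1[x, y], s2[x, y] = s2[x, y], s1[x, y]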
Suppressing percolation of large clusters The efficiency of this algorithm is highly sensitive to the site percolation threshold of the underlying lattice. If the percolation threshold is too small, then a giant cluster will likely span the entire lattice, resulting in the trivial update of exchanging nearly all the spins between the replicas. This is why the original algorithm only performs well in low-dimensional settings (where the site percolation threshold is sufficiently high). To efficiently extend this algorithm to higher dimensions, one has to perform certain algorithmic interventions. For instance, one can restrict the cluster moves to low-temperature replicas, where one expects only a small number of negative-overlap sites to appear (such that the algorithm does not percolate supercritically). In addition, one can perform a global spin-flip in one of the two replicas when the number of negative-overlap sites exceeds half the lattice size, in order to further suppress the percolation process. The Jorg cluster move is another way to reduce the sizes of the Houdayer clusters. In each Houdayer cluster, the algorithm forms open bonds with probability , similar to the Swendsen-Wang algorithm. This forms sub-clusters that are smaller than the Houdayer clusters, and the spins in these sub-clusters can then be exchanged between replicas in a similar fashion to a Houdayer cluster move. References Statistical mechanics Monte Carlo methods Condensed matter physics
Replica cluster move
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,041
[ "Monte Carlo methods", "Phases of matter", "Materials science", "Computational physics", "Condensed matter physics", "Statistical mechanics", "Matter" ]
77,538,436
https://en.wikipedia.org/wiki/Combustion%20efficiency
Combustion efficiency refers to the effectiveness of the burning process in converting fuel into heat energy. It is measured by the proportion of fuel that is efficiently burned and converted into useful heat, while minimizing the emissions of pollutants. Specifically, it may refer to: fuel efficiency engine efficiency depending on whether the level of efficiency is determined by the fuel itself or the combustion chamber or engine. References Energy conversion Combustion engineering
Combustion efficiency
[ "Engineering" ]
84
[ "Combustion engineering", "Industrial engineering" ]
77,541,366
https://en.wikipedia.org/wiki/Sheath%20instability
Sheath instability is an instability in a plasma where ions form sheaths near an electrode, making the plasma unstable. It is a type of Rayleigh–Taylor instability. Formula References Plasma instabilities Stability theory Plasma physics stubs Plasma phenomena Systems theory
Sheath instability
[ "Physics", "Mathematics" ]
51
[ "Physical phenomena", "Plasma physics", "Plasma phenomena", "Plasma instabilities", "Stability theory", "Plasma physics stubs", "Dynamical systems" ]
77,547,459
https://en.wikipedia.org/wiki/Tearing%20mode
Tearing modes are disruptions seen in tokamaks in which ideal MHD instabilities grow on a timescale of the order of 10−1 microseconds. Types of tearing mode Rayleigh–Taylor instability Magnetic reconnection Ballooning instability Resistive ballooning mode Universal instability Kelvin–Helmholtz instability Disruption instability Edge-localized mode Transport barrier mode References Plasma instabilities Stability theory
Tearing mode
[ "Physics", "Mathematics" ]
76
[ "Physical phenomena", "Plasma phenomena", "Plasma instabilities", "Stability theory", "Dynamical systems" ]
77,552,742
https://en.wikipedia.org/wiki/1-Fluorohexane
1-Fluorohexane is a chemical compound from the group of aliphatic saturated halogenated hydrocarbons. The chemical formula is . Synthesis 1-Fluorohexane can be obtained by reacting 1-chlorohexane or 1-bromohexane with potassium fluoride in ethylene glycol. Physical properties 1-Fluorohexane is a colorless liquid that is soluble in ether and benzene. Chemical properties The compound reacts with activated Mg: Uses The compound is primarily used in the field of organic chemistry as a reagent or solvent. Also, 1-fluorohexane is used in physical chemistry as a model compound for understanding the physico-chemical properties of fluorinated hydrocarbons. See also 1-Bromohexane 1-Chlorohexane 1-Iodohexane Perfluorohexane References Fluoroalkanes Alkylating agents
1-Fluorohexane
[ "Chemistry" ]
201
[ "Alkylating agents", "Reagents for organic chemistry" ]
77,556,775
https://en.wikipedia.org/wiki/Tritium%20breeding%20module
A tritium breeding module or TBM is a component of a fusion reactor that produces tritium. ITER will have four easily removable TBMs in order to test various material combinations and develop the breeding process. See also Breeding blanket References ITER Fusion reactors
Tritium breeding module
[ "Chemistry" ]
56
[ "Nuclear fusion", "Fusion reactors" ]
77,557,393
https://en.wikipedia.org/wiki/Normalization%20%28machine%20learning%29
In machine learning, normalization is a statistical technique with various applications. There are two main forms of normalization, namely data normalization and activation normalization. Data normalization (or feature scaling) includes methods that rescale input data so that the features have the same range, mean, variance, or other statistical properties. For instance, a popular choice of feature scaling method is min-max normalization, where each feature is transformed to have the same range (typically or ). This solves the problem of different features having vastly different scales, for example if one feature is measured in kilometers and another in nanometers. Activation normalization, on the other hand, is specific to deep learning, and includes methods that rescale the activation of hidden neurons inside neural networks. Normalization is often used to: increase the speed of training convergence, reduce sensitivity to variations and feature scales in input data, reduce overfitting, and produce better model generalization to unseen data. Normalization techniques are often theoretically justified as reducing covariate shift, smoothing optimization landscapes, and increasing regularization, though they are mainly justified by empirical success. Batch normalization Batch normalization (BatchNorm) operates on the activations of a layer for each mini-batch. Consider a simple feedforward network, defined by chaining together modules: where each network module can be a linear transform, a nonlinear activation function, a convolution, etc. is the input vector, is the output vector from the first module, etc. BatchNorm is a module that can be inserted at any point in the feedforward network. For example, suppose it is inserted just after , then the network would operate accordingly: The BatchNorm module does not operate over individual inputs. Instead, it must operate over one batch of inputs at a time. Concretely, suppose we have a batch of inputs , fed all at once into the network. We would obtain in the middle of the network some vectors: The BatchNorm module computes the coordinate-wise mean and variance of these vectors: where indexes the coordinates of the vectors, and indexes the elements of the batch. In other words, we are considering the -th coordinate of each vector in the batch, and computing the mean and variance of these numbers. It then normalizes each coordinate to have zero mean and unit variance: A small positive constant (such as the 1e-9 used as the default in the code below) is added to the variance for numerical stability, to avoid division by zero. Finally, it applies a linear transformation: Here, and are parameters inside the BatchNorm module. They are learnable parameters, typically trained by gradient descent. The following is a Python implementation of BatchNorm:

import numpy as np

def batchnorm(x, gamma, beta, epsilon=1e-9):
    # Mean and variance of each feature
    mu = np.mean(x, axis=0)   # shape (N,)
    var = np.var(x, axis=0)   # shape (N,)

    # Normalize the activations
    x_hat = (x - mu) / np.sqrt(var + epsilon)  # shape (B, N)

    # Apply the linear transform
    y = gamma * x_hat + beta  # shape (B, N)

    return y

Interpretation and allow the network to learn to undo the normalization, if this is beneficial. BatchNorm can be interpreted as removing the purely linear transformations, so that its layers focus solely on modelling the nonlinear aspects of data, which may be beneficial, as a neural network can always be augmented with a linear transformation layer on top. It is claimed in the original publication that BatchNorm works by reducing internal covariate shift, though the claim has both supporters and detractors.
Special cases The original paper recommended to only use BatchNorms after a linear transform, not after a nonlinear activation. That is, , not . Also, the bias does not matter, since it would be canceled by the subsequent mean subtraction, so it is of the form . That is, if a BatchNorm is preceded by a linear transform, then that linear transform's bias term is set to zero. For convolutional neural networks (CNNs), BatchNorm must preserve the translation invariance of these models, meaning that it must treat all outputs of the same kernel as if they are different data points within a batch. This is sometimes called Spatial BatchNorm, or BatchNorm2D, or per-channel BatchNorm. Concretely, suppose we have a 2-dimensional convolutional layer defined by: where: is the activation of the neuron at position in the -th channel of the -th layer. is a kernel tensor. Each channel corresponds to a kernel , with indices . is the bias term for the -th channel of the -th layer. In order to preserve the translational invariance, BatchNorm treats all outputs from the same kernel in the same batch as more data in a batch. That is, it is applied once per kernel (equivalently, once per channel ), not per activation : where is the batch size, is the height of the feature map, and is the width of the feature map. That is, even though there are only data points in a batch, all outputs from the kernel in this batch are treated equally. Subsequently, normalization and the linear transform are also done per kernel: Similar considerations apply for BatchNorm for n-dimensional convolutions. The following is a Python implementation of BatchNorm for 2D convolutions:

import numpy as np

def batchnorm_cnn(x, gamma, beta, epsilon=1e-9):
    # Calculate the mean and variance for each channel.
    mean = np.mean(x, axis=(0, 1, 2), keepdims=True)
    var = np.var(x, axis=(0, 1, 2), keepdims=True)

    # Normalize the input tensor.
    x_hat = (x - mean) / np.sqrt(var + epsilon)

    # Scale and shift the normalized tensor.
    y = gamma * x_hat + beta

    return y

Improvements BatchNorm has been very popular and there were many attempted improvements. Some examples include: ghost batching: randomly partition a batch into sub-batches and perform BatchNorm separately on each; weight decay on and ; and combining BatchNorm with GroupNorm. A particular problem with BatchNorm is that during training, the mean and variance are calculated on the fly for each batch (usually as an exponential moving average), but during inference, the mean and variance are frozen at the values calculated during training. This train-test disparity degrades performance. The disparity can be decreased by simulating the moving average during inference: where is a hyperparameter to be optimized on a validation set. Other works attempt to eliminate BatchNorm, such as the Normalizer-Free ResNet. Layer normalization Layer normalization (LayerNorm) is a popular alternative to BatchNorm. Unlike BatchNorm, which normalizes activations across the batch dimension for a given feature, LayerNorm normalizes across all the features within a single data sample. Compared to BatchNorm, LayerNorm's performance is not affected by batch size. It is a key component of transformer models. For a given data input and layer, LayerNorm computes the mean and variance over all the neurons in the layer. Similar to BatchNorm, learnable parameters (scale) and (shift) are applied. It is defined by: where: and the index ranges over the neurons in that layer.
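In the NumPy style of the BatchNorm implementations above, a minimal LayerNorm sketch (an illustration, not a reference implementation) could read:

import numpy as np

def layernorm(x, gamma, beta, epsilon=1e-9):
    # Mean and variance over the features of each individual sample.
    mu = np.mean(x, axis=-1, keepdims=True)    # shape (B, 1)
    var = np.var(x, axis=-1, keepdims=True)    # shape (B, 1)

    # Normalize each sample independently of the rest of the batch.
    x_hat = (x - mu) / np.sqrt(var + epsilon)  # shape (B, N)

    # Apply the learned scale and shift.
    return gamma * x_hat + beta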
Examples For example, in a CNN, a LayerNorm applies to all activations in a layer. In the previous notation, we have: Notice that the batch index is removed, while the channel index is added. In recurrent neural networks and transformers, LayerNorm is applied individually to each timestep. For example, if the hidden vector in an RNN at timestep is , where is the dimension of the hidden vector, then LayerNorm will be applied with: where: Root mean square layer normalization Root mean square layer normalization (RMSNorm) changes LayerNorm by: Essentially, it is LayerNorm where we enforce . Adaptive Adaptive layer norm (adaLN) computes the in a LayerNorm not from the layer activation itself, but from other data. It was first proposed for CNNs, and has been used effectively in diffusion transformers (DiTs). For example, in a DiT, the conditioning information (such as a text encoding vector) is processed by a multilayer perceptron into , which is then applied in the LayerNorm module of a transformer. Weight normalization Weight normalization (WeightNorm) is a technique inspired by BatchNorm that normalizes weight matrices in a neural network, rather than its activations. One example is spectral normalization, which divides weight matrices by their spectral norm. Spectral normalization is used in generative adversarial networks (GANs) such as the Wasserstein GAN. The spectral radius can be efficiently computed by the following algorithm: By reassigning after each update of the discriminator, we can upper-bound , and thus upper-bound . The algorithm can be further accelerated by memoization: at step , store . Then, at step , use as the initial guess for the algorithm. Since is very close to , so is to , thus allowing rapid convergence. CNN-specific normalization There are some activation normalization techniques that are only used for CNNs. Response normalization Local response normalization was used in AlexNet. It was applied in a convolutional layer, just after a nonlinear activation function. It was defined by: where is the activation of the neuron at location and channel . I.e., each pixel in a channel is suppressed by the activations of the same pixel in its adjacent channels. are hyperparameters picked by using a validation set. It was a variant of the earlier local contrast normalization, where is the average activation in a small window centered on location and channel . The hyperparameters , and the size of the small window, are picked by using a validation set. Similar methods were called divisive normalization, as they divide activations by a number depending on the activations. They were originally inspired by biology, where divisive normalization was used to explain nonlinear responses of cortical neurons and nonlinear masking in visual perception. Both kinds of local normalization were obviated by batch normalization, which is a more global form of normalization. Response normalization reappeared in ConvNeXT-2 as global response normalization. Group normalization Group normalization (GroupNorm) is a technique also solely used for CNNs. It can be understood as the LayerNorm for CNNs applied once per channel group. Suppose at a layer , there are channels , then it is partitioned into groups . Then, LayerNorm is applied to each group.
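A minimal NumPy sketch of GroupNorm in the same style, assuming a (batch, height, width, channels) layout as in the 2D BatchNorm code above and a channel count divisible by the number of groups:

import numpy as np

def groupnorm(x, gamma, beta, num_groups, epsilon=1e-9):
    # x has shape (B, H, W, C); split the channels into equally sized groups.
    B, H, W, C = x.shape
    x = x.reshape(B, H, W, num_groups, C // num_groups)

    # Per-sample, per-group statistics over spatial positions and group channels.
    mu = np.mean(x, axis=(1, 2, 4), keepdims=True)
    var = np.var(x, axis=(1, 2, 4), keepdims=True)

    x_hat = ((x - mu) / np.sqrt(var + epsilon)).reshape(B, H, W, C)

    # Learned per-channel scale and shift, as in the other variants.
    return gamma * x_hat + beta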
Instance normalization Instance normalization (InstanceNorm), or contrast normalization, is a technique first developed for neural style transfer, and is also only used for CNNs. It can be understood as the LayerNorm for CNNs applied once per channel, or equivalently, as group normalization where each group consists of a single channel: Adaptive instance normalization Adaptive instance normalization (AdaIN) is a variant of instance normalization, designed specifically for neural style transfer with CNNs, rather than just CNNs in general. In the AdaIN method of style transfer, we take a CNN and two input images, one for content and one for style. Each image is processed through the same CNN, and at a certain layer , AdaIN is applied. Let be the activation in the content image, and be the activation in the style image. Then, AdaIN first computes the mean and variance of the activations of the content image , then uses those as the for InstanceNorm on . Note that itself remains unchanged. Explicitly, we have: Transformers Some normalization methods were designed for use in transformers. The original 2017 transformer used the "post-LN" configuration for its LayerNorms. It was difficult to train, and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018, was found to be easier to train, requiring no warm-up and leading to faster convergence. FixNorm and ScaleNorm both normalize activation vectors in a transformer. The FixNorm method divides the output vectors from a transformer by their L2 norms, then multiplies by a learned parameter . The ScaleNorm replaces all LayerNorms inside a transformer by division with the L2 norm, then multiplication by a learned parameter (shared by all ScaleNorm modules of a transformer). Query-key normalization (QKNorm) normalizes query and key vectors to have unit L2 norm. In nGPT, many vectors are normalized to have unit L2 norm: hidden state vectors, input and output embedding vectors, weight matrix columns, and query and key vectors. Miscellaneous Gradient normalization (GradNorm) normalizes gradient vectors during backpropagation. See also Data preprocessing Feature scaling References Further reading Articles with example Python (programming language) code Deep learning Statistical data transformation Machine learning Neural networks
Normalization (machine learning)
[ "Engineering" ]
2,804
[ "Artificial intelligence engineering", "Neural networks", "Machine learning" ]
77,557,454
https://en.wikipedia.org/wiki/Schenck%20ene%20reaction
The Schenck ene reaction or the Schenck reaction is the reaction of singlet oxygen with alkenes to yield hydroperoxides. The hydroperoxides can be reduced to allylic alcohols or undergo elimination to form unsaturated carbonyl compounds. It is a type II photooxygenation reaction, and was discovered in 1944 by Günther Otto Schenck. Its results are similar to those of ene reactions, hence its name. Reaction conditions The singlet oxygen reagent can be produced via photochemical activation of triplet oxygen (regular oxygen) in the presence of photosensitizers like rose bengal. Chemical processes like the reaction between hydrogen peroxide and sodium hypochlorite are also viable. Mechanism and selectivity Historically, four mechanisms have been proposed: Experimental and computational studies show that the reaction actually proceeds via a two-step no-intermediate process. One can loosely interpret it as a mix of the perepoxide mechanism and the concerted mechanism. There is no perepoxide intermediate in the classical sense of reaction intermediates, for there exists no energy barrier between it and the hydroperoxide product. Such a mechanism can account for the selectivity of the Schenck ene reaction. The singlet oxygen is more likely to abstract hydrogen from the side with more C-H bonds due to favorable interactions in the transition state: Very bulky groups, like the tertiary butyl group, will hinder hydrogen abstraction on that side. Applications The Schenck ene reaction is utilized in the biological and biomimetic synthesis of rhodonoids. Many hydroperoxides derived from fatty acids, steroids, and terpenes are also formed via the Schenck ene reaction. For instance, the generation of cis-3-hexenal from linolenic acid: It must be noted, however, that this enzyme-catalyzed path follows a different mechanism from the usual Schenck ene reaction. Radicals are involved, and triplet oxygen is used instead of singlet oxygen. See also Ene reaction Photooxygenation Singlet oxygen Hydroperoxide References Organic reactions Reaction mechanisms
Schenck ene reaction
[ "Chemistry" ]
439
[ "Reaction mechanisms", "Chemical kinetics", "Physical organic chemistry", "Organic reactions" ]
70,321,679
https://en.wikipedia.org/wiki/Wetting%20solution
Wetting solutions are liquids containing active chemical compounds that minimise the distance between two immiscible phases by lowering the surface tension to induce optimal spreading. The boundary between the two phases, known as an interface, can be classified into five categories, namely solid-solid, solid-liquid, solid-gas, liquid-liquid and liquid-gas. Although wetting solutions have a history of acting as detergents stretching back more than four thousand years, the fundamental chemical mechanism was not elucidated until 1913, by the pioneer McBain. Since then, diverse studies have been conducted to reveal the underlying mechanism of micelle formation and the working principle of wetting solutions, broadening the area of applications. The addition of wetting solution to an aqueous droplet leads to the formation of a thin film due to its intrinsic spreading property. This property favours the formation of micelles, which are specific chemical structures consisting of a cluster of surfactant molecules with a hydrophobic core and a hydrophilic surface that can lower the surface tension between two different phases. In addition, wetting solutions can be further divided into four classes: non-ionic, anionic, cationic and zwitterionic. The spreading property may be examined by adding a drop of the liquid onto an oily surface. If the liquid is not a wetting solution, the droplet will remain intact. If the liquid is a wetting solution, the droplet will spread uniformly on the oily surface because the formation of the micelles lowers the surface tension of the liquid. Wetting solutions can be applied in pharmaceuticals, cosmetics and agriculture. Despite the many practical uses of wetting solutions, their presence can be a hindrance to water purification in industrial membrane distillation. History Wetting agents were used as soap for cleansing purposes for thousands of years. The oldest evidence of wetting solution goes back to 2800 BC in ancient Babylon. The earliest credible reference to soap is in the writings of Galen, the Greek physician, around 200 AD. Over the following centuries, wetting solutions mainly functioned as detergents due to their wetting properties. Despite the extensive use of wetting solutions, the underlying chemical mechanism remained unknown until the emergence of McBain's proposed theory in 1913. Founded on his research on how the electrical conductivity of a solution of surfactant molecules changed with concentration, he raised the possibility of surfactant molecules existing in the form of self-assembled aggregates. Not until Debye published his original hypothesis in 1949 were the reason for micelle formation and the existence of finite-shaped micelles described. McBain's discovery sparked numerous studies by Hobbs, Ooshika, Reich and Halsey from 1950 to 1956. These scholars intended to correct some of the foundational theories of the description of an equilibrium system, as well as emphasising the role of surface energy, which was overlooked in Debye's prototype. In 1976, the fundamental theory for understanding the mechanism of micelle formation was developed in Tanford's free energy model. Apart from integrating all relevant physicochemical elements and explaining the growth of micelles, he provided a comprehensive reasoning of why micelles are finite in terms of opposing interactional forces. Mechanism The chemical structure of wetting solution molecules consists of a hydrophilic head and a long hydrophobic tail.
This distinct amphiphilicity allows a molecule to bury its hydrophilic head in the aqueous bulk phase and its hydrophobic part in the organic bulk phase. Wetting solution molecules break the intermolecular forces between the molecules in the organic phase and the water molecules in the aqueous phase by displacement. Due to the lowered attractive forces, the surface tension is reduced. Upon adding more wetting solution, the elevated concentration of wetting solution molecules leads to a further decrease in surface tension and makes the molecules at the surface more crowded. The molecules will be forced to remain in the aqueous phase when there are no more vacancies for them on the surface. At this point, the surface tension is maximally lowered; the corresponding concentration is termed the critical micelle concentration (CMC). The lower the CMC, the more efficient the wetting solution is in reducing surface tension. Any additional wetting solution molecules will undergo self-aggregation into special structures called micelles. Micelles are spheres with a hydrophobic core formed by the non-polar tails of wetting solution molecules, surrounded by a hydrophilic layer arising from the molecules' polar heads. Extra wetting solution molecules will be forced to form micelles instead of adhering to the surface, hence the surface tension remains constant. Due to the minimised surface tension, the droplet can now spread thoroughly and form a thin film on the surface. Classification Generally, wetting solution molecules consist of a hydrophilic head and a long hydrophobic tail. The hydrophobic region usually contains saturated or unsaturated hydrocarbon chains, heterocyclic rings or aromatic rings. Despite the similar amphiphilic composition, the molecules can be divided into four classes with respect to the nature of the hydrophilic group, namely non-ionic, anionic, cationic and zwitterionic. The following table shows the composition and special features of the corresponding classes and common examples of the respective wetting solutions. Applications Generally, wetting solution is applied in pharmaceuticals, cosmetics and agriculture. McBain's research on maximising the application of wetting solutions has had an important role in enabling a range of options for both manufacturers and consumers and in improving product performance in the respective areas of application, such as modifying the stability of pharmaceuticals, the delivery of drugs, the effectiveness of cleansing products and water retention in soils. Pharmaceuticals Specific properties of different wetting solutions are able to alter drug delivery, which is beneficial in improving drug safety and patients' experiences. For example, solulan C-24, a non-ionic wetting solution, forms large bilayers of wetting solution molecules known as discosomes that have a lower risk of causing systemic adverse effects. Non-ionic wetting solutions are found to have a wider usage and are more efficient in reducing surface tension compared to ionic wetting solutions, which have higher toxicity and CMC values in general. To ensure the safety, efficacy and quality of the preparations, the toxicity and interaction profiles of the chosen wetting solutions are carefully investigated. Dosage form: Suspensions Suspension preparation is a liquid dosage form that contains insoluble solid drug particles.
A suspension preparation is ideal if solid particles that have become compacted together during storage can readily re-disperse throughout the liquid vehicle with gentle shaking, for a period of time that is sufficient for measuring the required dosage. Solid particles have a natural tendency to aggregate and eventually cause caking due to the air film coating them. A solution to this is using a wetting solution as the liquid vehicle for the suspension. The wetting solution increases the dispersal ability of the solid particles by replacing the air film, which increases steric hindrance, minimises interactions between solid particles and results in a decreased rate of aggregation. Topical ophthalmic solutions Wetting solutions lower the surface tension of topical ophthalmic solutions and induce instant spreading when applied onto the cornea by increasing the interaction between the two. The instant spreading increases the amount of drug molecules that are exposed to the cornea for absorption and therefore gives a quicker onset of action. The increased interaction allows the topical ophthalmic solution to remain on the corneal surface for a longer period of time, maximising the amount of drug that can diffuse from the applied layer to the corneal epithelium through the tear film, the layer that protects the cornea from the external environment. Cosmetics: Skin cleansing products Skin cleansing products including facial cleansers, body washes and shampoos contain wetting solutions. Wetting solutions allow efficient spreading and wetting of the surface of the skin and scalp by reducing the surface tension between water and the hydrophobic sebum secreted by the sebaceous glands in the skin. An efficient wetting solution penetrates the skin and clears any topical applications, body fluids including sebum secreted via openings of hair follicles, dead skin cells and microbes. Non-ionic wetting solutions have a low risk of causing skin irritation and are efficient in reducing the surface tension between different ingredients, for example, fragrances and essential oils extracted from plants, in skin cleansing products to produce a consistent liquid formula. However, non-ionic wetting solutions cost more than the other types of wetting solutions and hence are less favoured for commercial products. Cationic wetting solutions cause more severe skin irritation problems and hence are not used in skin cleansing products. They are used in hair conditioners that are applied only to the lower half of the hair length and washed off after a short period of time. Anionic and amphoteric wetting solutions are often used as a mixture in body washes and shampoos. The anionic wetting solutions formulated into skin cleansing products have often undergone chemical modification, as they often contain sulphur-based groups that trigger skin irritation by causing collagen in skin cells to swell and sometimes cause cell death. Examples of modified anionic wetting solutions include ammonium laureth sulphate and modified sulfosuccinates, both reported to exhibit low skin irritation. Agriculture Wetting solutions are widely used in agriculture to increase crop yield, which is affected by the degree of infiltration and penetration of water, nutrients and chemicals such as fertilisers and pesticides. Wetting solutions reduce surface runoff of water and nutrients and enhance water infiltration in water-repellent soil by reducing surface tension. 
Soil treated with wetting solutions has been shown to retain a high water content and an even distribution of nutrients in the root zone, including in deep soil areas, benefiting crop yield and improving water efficiency.  Examples of wetting solutions used in agriculture are modified alkylated polyol, a mixture of polyether polyol and glycol ether, and a mixture of poloxalene and 2-butoxyethanol. Industrial concerns Membrane distillation is a water purification process that utilises a hydrophobic membrane with pores to separate water vapour from contaminants, for example, oil and unwanted chemicals. The filtration efficiency and stability of the membrane can be diminished by wetting. Wetting of the hydrophobic membrane results from the presence of wetting solutions in sewage, a consequence of their increasingly wide use in different fields, for example, pharmaceuticals, cosmetics and agriculture. A possible solution is to pretreat the sewage to remove wetting solutions, limiting the amount of wetting solution in contact with the membrane. Other possible solutions to lengthen the durability of the membrane include modifying the membrane material to repel both water and oil, air-backwashing and modifying the membrane surface geometry. These solutions are costly and require further research and development to optimise the durability and efficiency of membrane distillation. References Chemical compounds Chemical substances Liquids
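The spreading behaviour described above can be made quantitative with two textbook relations. The sketch below states them in LaTeX; it is a standard-theory illustration, not a result from the sources cited in this article.

```latex
% Spreading coefficient S: a droplet spreads spontaneously into a
% thin film when S >= 0. The gammas are the solid-gas, solid-liquid
% and liquid-gas interfacial tensions.
S = \gamma_{sg} - \gamma_{sl} - \gamma_{lg}

% Young's equation for the equilibrium contact angle \theta of a
% partially wetting droplet:
\gamma_{sg} = \gamma_{sl} + \gamma_{lg}\cos\theta
```

Because a wetting solution lowers the liquid-gas tension (and often the solid-liquid tension), it drives S towards zero or above, which is why the droplet spreads into a thin film; above the CMC, additional surfactant forms micelles instead of lowering the tension further, consistent with the mechanism described above.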
Wetting solution
[ "Physics", "Chemistry" ]
2,260
[ "Molecules", "Chemical compounds", "Phases of matter", "Materials", "nan", "Chemical substances", "Matter", "Liquids" ]
70,322,089
https://en.wikipedia.org/wiki/Spenolimycin
Spenolimycin is a spectinomycin-type antibiotic which has been isolated from the bacterium Streptomyces gilvospiralis sp. Spenolimycin has the molecular formula C15H26N2O7. References Further reading Spenolimycin Antibiotics Oxygen heterocycles Secondary amines Methoxy compounds Diamines
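As a minimal illustration of what a molecular formula implies, the molar mass can be computed by summing standard atomic masses. The small parser below is a sketch for illustration only (the atomic masses are standard values; the function itself is not from any cited source).

```python
import re

# Standard atomic masses (g/mol) for the elements in the formula above.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula: str) -> float:
    """Sum standard atomic masses for a simple formula like 'C15H26N2O7'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_MASS[element] * (int(count) if count else 1)
    return total

# Spenolimycin, C15H26N2O7: roughly 346.4 g/mol.
print(f"{molar_mass('C15H26N2O7'):.2f} g/mol")
```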
Spenolimycin
[ "Chemistry", "Biology" ]
78
[ "Antibiotics", "Biocides", "Biotechnology products" ]
70,322,404
https://en.wikipedia.org/wiki/Actinobolin
Actinobolin is an antibiotic with the molecular formula C13H20N2O6. Actinobolin is produced by the bacterium Streptomyces griseoviridus var. atrofaciens. References Further reading Antibiotics Lactones Amides Isochromenes
Actinobolin
[ "Chemistry", "Biology" ]
60
[ "Biocides", "Biotechnology products", "Functional groups", "Organic compounds", "Antibiotics", "Amides", "Organic compound stubs", "Organic chemistry stubs" ]
70,328,185
https://en.wikipedia.org/wiki/Cycloheximide%20chase
Cycloheximide chase assays are an experimental technique used in molecular and cellular biology to measure steady-state protein stability. Cycloheximide is a drug that inhibits the elongation step in eukaryotic protein translation, thereby preventing protein synthesis. The addition of cycloheximide to cultured cells, followed by cell lysis at multiple timepoints, is conducted to observe protein degradation over time and can be used to determine a protein's half-life. These assays are often followed by western blotting to assess protein abundance and can be analyzed using quantitative tools such as ImageJ. Implementation Cycloheximide chase assays have been conducted using a variety of cell types such as yeast and mammalian cell lines. Depending on the cell system used for analysis, the assay may vary in application and time course. For example, yeast cells expressing a protein substrate of interest typically require cycloheximide chases lasting up to 90 minutes to allow protein turnover to occur. In contrast, proteins that are expressed in mammalian cell lines tend to be more stable at steady state and may require a chase lasting 3 to 8 hours. Depending on the complexity of the protein and whether it is overexpressed or endogenous to the model system, the required length of the chase may vary. To ensure that protein synthesis is inhibited during the entire chase, cycloheximide is often spiked into the sample every few hours. In yeast, deletion strains are frequently used to assess protein stability over time with cycloheximide chases. For example, yeast strains lacking critical degradation machinery such as chaperones, E3 ligases, and vacuolar proteins are often used to determine the mechanism of degradation for a protein substrate of interest. Drug treatments (such as MG132) are also used to inhibit steps of degradation, followed by a cycloheximide chase to observe how the stability of a protein of interest is affected. These experiments may be conducted in mammalian cells with the implementation of compatible knockdown procedures in place of the yeast deletion strains. Cycloheximide chases are also valuable for assessing how different mutations affect the stability of a protein. Experiments have been conducted in yeast and mammalian cells to determine the critical residues required for protein stability and how disease-associated mutations may affect protein half-lives within the cell. This information is valuable for understanding the complexities of protein folding and how mutations contribute to the pathogenesis of the diseases they are associated with. Advantages There are many benefits to using cycloheximide chase assays as opposed to other methods that assess protein stability. Cycloheximide chases can be used with a wide variety of model systems and can be implemented to study almost any protein substrate. Cycloheximide is a relatively inexpensive compound compared to other drugs, and it is effective when used in low doses for short periods of time. Pulse chase assays are an alternative method to cycloheximide chase assays and involve the radioactive labeling of newly translated proteins followed by a similar “chase” period. While this method is informative and provides the benefit of observing nascent protein abundance, the radioactive material it requires is expensive, has a shorter shelf-life, and demands more caution to use than cycloheximide. Disadvantages Some disadvantages to conducting cycloheximide chase assays include the toxic nature of cycloheximide. 
When used at high concentrations over a long period of time, cycloheximide will damage the DNA within the cell and impair critical cellular functions. For this reason, cycloheximide chases do not typically last for more than 12 hours. This presents a limitation if the turnover of a particularly stable protein is being studied. Additionally, cycloheximide chases only offer the ability to look at steady-state protein levels, as opposed to newly translated protein levels as with pulse chase. Therefore, only protein degradation, and not protein maturation, can be observed. References Molecular biology techniques
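Hypothetical illustration of the half-life determination described above: given band intensities quantified in ImageJ at several chase timepoints, a first-order exponential decay can be fitted to estimate the half-life. All values below are invented for illustration and are not from any cited study.

```python
import numpy as np

# Invented ImageJ band intensities from a cycloheximide chase,
# normalized to the 0-minute band.
times = np.array([0.0, 30.0, 60.0, 90.0])        # minutes after cycloheximide
intensity = np.array([1.00, 0.62, 0.41, 0.26])   # relative band intensity

# First-order decay I(t) = I0 * exp(-k*t) is linear in log space:
# ln I = ln I0 - k*t, so fit a straight line to (t, ln I).
slope, ln_i0 = np.polyfit(times, np.log(intensity), 1)
k = -slope                     # decay rate constant (1/min)
half_life = np.log(2) / k      # t_1/2 = ln(2) / k
print(f"k = {k:.4f} per min, half-life = {half_life:.0f} min")
```

For these invented intensities the fit gives a half-life of roughly 46 minutes, within the 90-minute chase window typical for yeast substrates mentioned above.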
Cycloheximide chase
[ "Chemistry", "Biology" ]
830
[ "Molecular biology techniques", "Molecular biology" ]
70,329,400
https://en.wikipedia.org/wiki/PITPNM3
Nir1, or membrane-associated phosphatidylinositol transfer protein 3 (PITPNM3), is a mammalian protein that localizes to endoplasmic reticulum (ER) and plasma membrane (PM) membrane contact sites (MCS) and aids the transfer of phosphatidylinositol between these two membranes, potentially by recruiting additional proteins to the ER-PM MCS. It is encoded by the gene PITPNM3. Classification Nir1 has been classically categorized as a class IIA phosphatidylinositol transfer protein (PITP) that transfers phosphatidylinositol (PI) and phosphatidic acid (PA) between membranes. The class IIA PITPs are the multi-domain proteins PITPNM1/Nir2 (Drosophila homolog RdgBaI) and PITPNM2/Nir3 (Drosophila homolog RdgBaII). Nir1 shares high sequence similarity with Nir2 and Nir3, which led to its original categorization as a PITP. However, it was determined that Nir1 is not directly responsible for PI transfer, as it lacks the functional PITP domain seen within Nir2 and Nir3. Localization Recently, Nir1 has been shown to localize to ER-PM MCS, both under basal conditions and upon phospholipase C (PLC) activation. Notably, PLC activation has previously been shown to regulate the localization of Nir2 and Nir3 at ER-PM MCS as well. The MCS targeting by Nir1 is achieved by the N-terminus of Nir1 localizing to the ER and the C-terminus of Nir1 localizing to the PM. The domains responsible for binding these membranes are discussed below. Structure Nir1 contains three main structural elements that are shared with Nir2 and Nir3: an N-terminal FFAT motif, a DDHD domain, and a C-terminal Lipin/Nde1/Smp2 (LNS2) domain. FFAT motif The FFAT motif is made up of double phenylalanines (FF) in an acidic tract. This motif, made of residues EFFDA in Nir1, has been shown to be necessary for the Nir proteins to associate with the ER proteins VAPA and VAPB. Mutation of the phenylalanine residues in this motif or knockout of the VAPA and VAPB proteins results in a loss of ER-PM MCS localization and causes Nir1 to become fully localized to the PM. DDHD domain The DDHD domain, made up of 3 Asp and 1 His residues, bears some similarities to that seen in PLA1 enzymes, which hydrolyze fatty acids of glycerophospholipids, including phosphatidic acid (PA). However, this domain is still largely uncharacterized. It is a putative metal-binding domain, but a role for metal binding in PITPNM function has not been established. LNS2 domain The LNS2 domain is the Lipin/Nde1/Smp2 domain. This domain was discovered as having sequence similarities to the phosphatidic acid (PA) binding region found within the Lipin family of proteins. It is also responsible for PA binding within Nir1, as it has been shown to co-localize with PA biosensors. The LNS2 domain targets the C-terminus of Nir1 to the plasma membrane in order to allow the protein to bridge the ER-PM MCS. Deletion of this domain results in Nir1 localization to the ER. It should be noted, however, that the exact domain boundaries of the LNS2 domain are still being debated, especially given the boundaries of the folded domains predicted by the AlphaFold Protein Structure Database. Function The PITPNM family of proteins has been shown to participate in the phosphoinositide cycle. Lipids cycle between the PM and the ER in order to replenish levels after signaling events deplete lipid species such as PI. When a stimulus results in the production of PA at the PM, Nir2 and Nir3 move to the ER-PM MCS, where they exchange the PA at the PM for PI that has been produced in the ER. 
As Nir1 is localized to the ER-PM MCS even without a stimulus, it is thought that Nir1 helps to recruit Nir2 to the MCS. There is evidence that Nir1 recruits Nir2 directly via binding to the uncharacterized domain between the FFAT motif and DDHD domain of Nir1. References Proteins Endoplasmic reticulum Membrane biology
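As an illustrative aside, short linear motifs like the FFAT core described above are often located computationally with a simple pattern scan. The sketch below is a toy example only: the regex is loosely modeled on the EFFDA core named in the text, not a validated FFAT predictor, and the sequence fragment is invented.

```python
import re

# Invented sequence fragment; a real analysis would scan the full
# PITPNM3 sequence retrieved from a protein database.
seq = "MSTEEDEFFDAEEKRTLA"

# FFAT-like core: double phenylalanines flanked by acidic residues,
# loosely based on the EFFDA residues mentioned above.
pattern = re.compile(r"[DE]FF[DE]A")

for m in pattern.finditer(seq):
    print(f"FFAT-like core '{m.group()}' at residues {m.start() + 1}-{m.end()}")
```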
PITPNM3
[ "Chemistry" ]
1,016
[ "Biomolecules by chemical classification", "Proteins", "Membrane biology", "Molecular biology" ]
76,113,618
https://en.wikipedia.org/wiki/Bioinspired%20armor
Bioinspired armors are materials inspired by the composition and, most importantly, the microstructures commonly found in nature's natural defense mechanisms. These microstructures have been evolved by organisms to protect themselves from exterior forces, such as predatory attacks, and are optimized to withstand large forces. By taking inspiration from these materials, armor can be designed with better penetration resistance and force-dissipation properties than previously possible. Nature uses abundantly available materials to develop structures whose mechanical properties are efficiently aimed at their protective role. By examining the microstructures produced in nature, scientists can engineer these structures with more optimal materials to produce the most mechanically robust version of these structures. Biological/bioinspired armor is specifically aimed at producing protective materials by optimizing naturally occurring defensive materials. This article covers common types of defensive materials observed in nature, how their microstructures contribute to their impressive material properties, and how scientists have used this knowledge to develop novel protective materials. Function and microstructure Armor for High-Velocity Collision Protection Nacre Nacre is the composite biological material that makes up the shell of mollusks, featuring high strength and toughness. Layers of nacre work together to protect soft-bodied organisms from external loads, mainly including predatory attack and underwater currents, and can thus dissipate a large amount of energy during impact. The hierarchical structure of nacre has been observed on many length scales, from the microscale to the nanoscale. On the microscale, the structure of nacre is akin to “brick and mortar”: composed of long, thin, brittle plates in a polygonal shape held together by a soft and flexible polymer. The mineral tiles are composed of aragonite, a polymorph of calcium carbonate, and are held together with a polymer of chitin and asparagine-rich protein. The aragonite tiles are 0.5 µm thick, while the biopolymer layer is 50 nm. The arrangement of these tiles affects the mechanical properties of the nacre. Columnar nacre occurs when the tablets stack, creating regions of aragonite platelet overlap. Sheet nacre occurs when the tablets are arranged randomly between adjacent layers. Other features on the nanoscale, such as mineral bridges and asperities, contribute to the high fracture resistance of nacre. Nacre has been studied for potential applications in human armor. Engineered nacre-like composites have been shown to possess improved ballistic-penetration resistance compared to monolithic armor of the same area and density. Turtle Shells A turtle shell is a bio-composite consisting of a keratin-coated dorsal shell, or carapace, with a flat interior made of cancellous bone sandwiched between thin layers of cortical bone. The flat, bony configuration allows for significant weight reduction, thereby resulting in higher stiffness, strength, and toughness-to-weight ratios. This design renders the shell particularly effective against sharp, high-strain assaults, providing vital protection from predators such as alligators, jaguars, and birds. Soft collagenous tissue that joins adjacent interdigitating bone plates, called sutures or interdigitating sutures, allows a turtle to respond to a variety of different loading regimes. 
Turtle shells were considered to be biological polymers before synthetic polymers were developed, and were thus one of the first natural analogs of early man-made armor. Armor for Low-Velocity Blunt Impact Loading A key application for biologically inspired armor is protection against low-velocity blunt-impact loading. Blunt impact loading refers to the direct contact of a blunt object with a body, resulting in physical force or trauma. It involves the transfer of kinetic energy without penetration, and usually can result in compression, deformation or material fracture. Common examples would include sports injuries (impacts that result in concussions), assaults (punches, kicks), or falls. Many instances in nature provide organisms with an ability to sustain these loads, and hence provide models for biomimetic armors. Hooves Bovine and equine hooves are highly studied examples of blunt impact loading and shock absorption in nature. This is primarily because the hooves act as a natural energy absorber for animals such as horses and cattle with high-force, high-velocity gait patterns. Structure Both bovine and equine hooves are made of the protein α-keratin. α-Keratin is a structural, fibrous protein, the same one found in our hair and nails. The keratin molecules are held together by hydrogen bonding and disulfide cross-links, which enhance the rigidity of the protein. α-Keratin chains twist together to form coiled-coil dimers. Coiled dimers bind with other coiled dimers to produce protofilaments, which further bind to form a protofibril, then eventually an intermediate filament (IF). Within the outer wall of the hoof are tubule structures oriented parallel to the leg (220 × 140 µm, with a 50 µm medullary cavity). Keratin intermediate filaments are organized as circular lamellae that surround these tubules. Properties Many of the mechanical properties associated with hooves are due to the presence of tubules. The layers of tubules with their lamellar keratin surroundings provide excellent crack deflection and fracture toughness. When a crack is introduced into a material, it will continue to propagate until it hits a new interface. Hooves have layers of keratin IFs surrounding each tubule, each in slightly changing orientations, creating multiple interfaces that the crack will “bump” into, making the hoof a highly fracture-resistant material. The high fiber alignment and density of keratin as well as the longitudinal orientation of the tubules help with energy absorption from blunt loading. Finally, the intratubular matrix within the tubules of the hooves is less stiff and dense than the material surrounding it, reducing the weight of the hoof. All these properties help hooves support large compressive and impact loads while providing shock absorption from the impact, and make for promising impact loading models. Horns Horns, such as those found in goats, buffalo, and rhinoceroses, are another keratin-based structure that may provide inspiration for biological armor. Given that horns serve as a defense mechanism for these animals, these structures are well able to withstand significant impacts. Structure Horn has a very similar structure to hoof, as they are both composed of α-keratin. The structural makeup of these types of horns involves a lamellar keratin structure with radially oriented tubules. These tubules are much larger than those found in hooves, with exact dimensions depending on the species, but are also surrounded by keratin IF lamellae. 
This gives the horns the ability to maintain stiffness in tension while still dissipating kinetic energy from compressive forces and impact. Properties Given the extremely similar structure of horns to hooves, they share many similar properties. One such property is crack deflection and fracture toughness. Again, the layered lamellar hierarchy of keratin surrounding the tubules creates many interfaces for cracks to run into. One study found that the critical crack length for horn was 60%, meaning that a crack would have to extend through 60% of the transverse direction of a horn before it became critical, which shows extremely high toughness. Horn also has extremely high energy absorption and ability to withstand stress. This is attributed to the tubule orientation and keratin fiber density; unique to certain animals, the spiral geometry of the horn also allows for higher energy absorption. For bighorn sheep specifically, studies found that their horns experience a compressive stress of 4.0 MPa and a tensile stress of 1.4 MPa under a fighting force of 3400 N. The compressive stress is obviously much larger than the tensile stress, but, as with many other anisotropic materials, this is because an animal such as the bighorn sheep, which uses its horns for fighting, will be more likely to experience compressive stress and blunt impacts rather than tension. Finally, similar to hooves, horns are extremely tough but also lightweight, due to the lighter, less dense intratubular matrix. Crustaceans Aside from nacre and conch, crustacean exoskeleton and cortical bone are other biological structures that possess intricate features that may be beneficial for ballistic protection and low-velocity blunt impact. Structure The exoskeleton of crustaceans such as crabs is made of chitin, an amino polysaccharide polymer. An amino polysaccharide is derived from amino sugars, sugars in which a hydroxyl group is replaced with an amine group. The chitin fibers are embedded in a calcite matrix. The crustacean exoskeleton, like many of the other models discussed previously, is hierarchical. It starts with the chitin polymer fibers arranged in a honeycomb pattern. These “honeycomb” planes are then stacked in a Bouligand pattern, consisting of several layered, rotating planes. This makes up the inner, endocuticle layer of the crustacean exoskeleton and accounts for 90% of the total exoskeleton. The outer exocuticle layer contains more densely packed chitin layers and ends up being 200 μm thick. The thickness of the exocuticle and endocuticle layers varies in different parts of crustaceans. Properties One major mechanical property of the crustacean exoskeleton is high toughness and stiffness. This is attributed to the thick, densely layered exocuticle layer. Energy-dispersive X-ray mapping found that the exocuticle layer has higher calcite mineralization in its matrix, increasing the “hardness” of the material. The endocuticle layer of the exoskeleton is responsible for the energy absorption and crack deflection properties. This layer has a much lower stacking density of the chitin polymer layers than the exocuticle layer. This means that when placed under compressive forces, it is able to dissipate the load. As mentioned for hoof and horn, the Bouligand pattern provides several interfaces for cracks to run into, creating high crack deflection. 
Overall, the combination of the hard, stiff exocuticle and the energy-absorbing, compressible endocuticle creates a structure that is extremely difficult to penetrate and serves as a good model for both blunt impact and ballistic shock applications. Conch Shell Structure Conch is a type of shell composed of a calcium carbonate composite material that has a three-order lamellar structure. The first order contains ceramic plates that are 5-60 μm thick. The second order consists of ceramic beams at two different 45° orientations, about 5-30 μm thick, and the third order is made up of thousands of tiny ceramic planks, 75-100 nm thick. Properties The lamellar layers that make up the conch shell provide strong resistance to crack propagation. This structure also gives the conch shell several modes of fracture, which act as toughening mechanisms when put under different loads. Mode 1 fracture, which involves loading orthogonal to the plane of the crack, exists in the third-order lamellae due to delamination of the layers. Mode 2 fracture, which involves loading parallel to the plane of the crack, also exists between third-order lamellae and causes a buckling effect. Compressive and bending tests showed that conch shell is highly anisotropic. It was found that conch is about 60 MPa stronger in parallel loading compared to perpendicular loading. This was validated through a Weibull analysis for dynamic compression on surfaces both parallel and perpendicular to the load; the parallel loading was found to fail at higher fracture stresses. Armor for Sharp Impact Loading Crocodiles and Alligators Crocodilian skin has the potential to be used as armor for sharp impact loading because the skin is embedded with bony deposits (scutes), i.e., osteoderms. Dubbed “armored skin”, osteoderms are composed of hydroxyapatite and collagen, the same components found in bone. The scutes have a bony network on their surface and are connected with collagen fibrils. Osteoderms are present in reptiles (e.g., lizards, crocodilia, dinosaurs), fish, and some mammals (e.g., armadillos, mice), for protection from predators. The scutes on crocodiles have multiple functions, like thermoregulation, calcium storage, and toughening. The low weight and flexibility of the scutes are attributed to their porosity (~12%), while the matrix surrounding the pores gives them their hardness. Compression tests indicate that crocodilian scutes are strongest in the axial direction, with a strength of 67 MPa. The toughness of the scutes can be attributed to pore flattening, mineral bridge formation, and collagen bridge growth as energy dissipation mechanisms upon impact loading. Crocodiles also have highly irregular polygons of keratinized skin on their heads and faces, which offers additional protection. The cracks on the head and face of the crocodilian skin are generated from a cracking response, i.e., “crocodile cracking”, that occurs when stress is applied. This cracking mechanism releases deformation energy and helps the skin maintain its hard exterior, with increased flexibility. Armor for Mobility and Movement In nature, an efficient biological armor needs to be able to protect the organism while introducing the least amount of hindrance to its function. For many organisms, movement is one of their most important abilities; therefore, the armor cannot limit their movement mechanics. 
Unlike the continuous structure used to protect stationary organisms, like clam shells, these types of armor are usually composed of many separate structures to allow for the elongation and contraction required for movement, while maintaining complete protection. There are two major classes of such biological armor found in nature: scale armor and osteoderm armor. Both of these biological armors have specialized microstructures and macrostructures that produce the impressive properties of these materials. Scales Scale armor is the most widely expressed armor in nature. It is mostly seen on aquatic animals; however, it is also seen on some land animals such as pangolins. This armor is known for its impressive flexibility as well as its compressive and puncture resistance. There are many subcategories of fish scales, but the main three are Elasmoid, Ganoid, and Placoid. These scale types are distinguished by their mechanical properties, geometric shape, and macroscopic alignments. Elasmoid scales are the typical oval-shaped scales found on ray-finned fish. These scales are known for being ultra-lightweight, puncture-resistant, and flexible, which allows for the propulsion movements required for swimming. Ganoid scales are rhomboid in shape and exhibit enhanced stiffness, which is due to their thicker layer of mineralized material. Lastly, Placoid scales are best known by their shape. They have spines that run against the pattern of the scale, which results in a sharp or rough feeling to the touch. These scales are found on animals like sharks and stingrays. Structure The strength of elasmoid scales against force loading is significant due to their macrostructure. The macrostructure comprises their positioning, shape, and scale features. Elasmoid scales are oval in shape, with about three quarters of each scale covered by neighboring scales. This overlapping not only allows for the movement of the organism while maintaining complete coverage, but also aids in compressive force resistance. When the scales are loaded with a compressive force, it is distributed across all the neighboring scales. As these scales are bent into each other, in compressive loading or in natural movement, the scales exhibit a material hardening effect. This allows each scale to be produced below the stiffness required to protect against an attack; however, when joined together in an overlapping pattern, the material is able to resist large compressive forces. This is one structure that this armor design uses to maintain light weight. Another major macroscale system of the elasmoid scales is the set of physical scale features. The scales have grooves running from the focal point of the scale towards the edge, known as radii, as well as rings around the focal point in a concentric pattern, known as circuli. Both the radii and circuli are hypothesized to help in the bending mechanisms of the scale as well as to aid in anchoring the scale into the dermis. The macroscale structures of the scales are largely important for the armor's function; however, the microstructures give the materials their impressive properties. These scales are made from composite materials: mineralized protein matrices that provide the strength and toughness of the mineral while reducing its brittleness with protein components. In the elasmoid scale there are three main layers. 
These are the limiting layer, which is the outermost layer, and the outer and inner elasmodine layers, which are defined by their level of mineralization. The limiting layer is found at the surface of the scale, where it is the first line of defense against puncture. This layer varies in thickness depending on where it is located on the scale as well as on the species; it is typically on the 10-1000 micrometer scale in thickness. The limiting layer also forms various shapes depending on whether it is posterior or anterior on the scale. The posterior limiting layer commonly forms varying pillar structures, assumed to serve varying water-interface functions, while the anterior portion is most commonly formed into the circuli shape discussed earlier. The cross-section of the anterior region shows a sawtooth shape, which is assumed to help with dermal integration. This composite material is almost completely mineral apatite with small amounts of collagen. Across varying fish species, researchers noted a carbonate substitution in the apatite structure. This material's structure is well developed for its application: this layer of the scale needs to be tough and stiff to help defend against punctures, which explains its high mineral content. The apatite volume fraction in the limiting layer is around 65%. Below the limiting layer there is a thicker basal plate composed of larger collagen fibrils, called the elasmodine layer. The elasmodine layer is split into the external elasmodine and internal elasmodine, which are distinguished by the difference in their mineral content within the composite material. The external layer contains a higher concentration of the mineral component, while the internal layer contains almost only collagen. This variation, from slightly mineralized to almost completely unmineralized composite, leads to the great flexibility of the scales. As a force is applied to the surface of the scale, as seen in a predatory attack, the outermost layer is put under compressive force and needs to resist puncture, while the innermost layer experiences tensile forces. The high mineral concentration of the limiting layer and the external elasmodine makes them better suited for dealing with compressive forces, while the almost completely unmineralized material of the inner elasmodine is better suited for stretching forces. The outer elasmodine layer is around 35% mineralized. The collagen fibril alignment in the composite material plays a large role in the material properties. These collagen fibrils form a structure known as the Bouligand structure. The Bouligand structure is a rotated plywood design that imparts multidirectional strength. The collagen fibers in one layer are all aligned linearly in a single direction to give strengthening in that one direction. These unidirectional collagen fibril plies are then slightly offset from their neighboring layers to help with the material's overall multidirectional strength. The scale's strength and elastic modulus correlate with the number of elasmodine layers and the thickness of those layers compared to the overall scale thickness. Additionally, the collagen fibrils within each lamella layer do not demonstrate grouping. Each fibril is isolated and connected to its neighbors through sacrificial bonds. This connection allows for another level of force dissipation, as the sacrificial bonds will break first under force instead of the macroscale material. 
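The rotated-plywood idea just described is simple enough to state in a few lines of code. The sketch below generates the ply orientation sequence of an idealized Bouligand stack; the ply count and pitch angle are arbitrary illustrative choices, not measured biological values.

```python
# Fiber orientation of each ply in an idealized Bouligand
# ("rotated plywood") stack: ply i is rotated i * pitch degrees
# from the first ply, wrapped to 0-180 degrees since a fiber
# direction is unchanged by a 180-degree rotation.
def bouligand_angles(num_plies: int, pitch_deg: float) -> list:
    return [(i * pitch_deg) % 180.0 for i in range(num_plies)]

# Example: 12 plies at a 15-degree pitch sample every in-plane
# direction over one half-turn, the source of the multidirectional
# strength and crack deflection described above.
print(bouligand_angles(12, 15.0))
# [0.0, 15.0, 30.0, ..., 165.0]
```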
The structure of the scale armor is specifically designed to provide protection during movement, dissipate forces, and remain lightweight. This is then combined with the microstructures of the scale to produce a material that is best suited for the required parameters. The composite material provides force resistance and puncture resistance while still providing the flexibility required for the macrostructure movements. This combination of aspects provides great protective armor for many animals on earth. An example of this armor structure in nature is the arapaima. These are large fish that exhibit these elasmoid scales and heavily rely on their microstructure and macrostructures to protect them. These fish live in the same water as piranhas, so having effective armor to protect themselves from possible piranha attacks is vital to their survival. The overlapping of the scales allows the armor to absorb kinetic energy by transmitting the impact energy to adjacent discs. A larger number of scale layers can protect from attacks with larger forces. For example, the arapaima have an average of three scale layers, which is capable of protecting from piranha attack. Without this efficiently designed armor, these fish would not be protected in their habitats. Osteoderms Osteoderms are bony deposits formed inside the dermis, commonly found in lizard species and alligators. Osteoderms form inside the dermis with or without skeletal connection. The level of osteoderm distribution varies heavily between species. Some animals are completely covered, while others only have the armor in certain areas of the body. Additionally, the size of the osteoderms varies highly depending on species and area of the body. There are typically larger plates on the back, side and belly, and smaller plates around the head and tail. There are also many types of osteoderm structures. They can form in isolated groups, creating partial coverage of the organism, but they can also form a structure more similar to fish scales, where they overlap each other to form a more complete armor. Depending on the macroscale structure of the armor, the osteoderm morphology changes. The overlapping layers are typically thinner than the standalone plates seen in the isolated groups. Osteoderms are made of mineral composite materials. They include various types of bone, mineralized and unmineralized collagen bundles, as well as blood vessels. The composite material that makes up osteoderms is made from calcium phosphate and collagen. Due to the bone-like material structure, these materials are much stiffer than the fish scales previously discussed. A complete coating of this armor would inhibit the mobility of the organism. However, these structures are connected by stiff fibers and dermal tissue, which allow for the movement of the osteoderms. The soft regions between plates, though, are not protected by the armor. Osteoderm structures are formed similarly to bone; the outer layer is made of parallel-fibered bone, with a cancellous core lined with lamellar bone. These bone structures have very similar mechanical properties to skeletal bone. By producing this dermal layer of bone, the organism creates a sacrificial layer of stiff bone at the surface of the skin to defend against point-force attack. This structure is then surrounded by flexible dermal material to allow for movement; however, as stated before, the gaps between the osteoderms are not protected, leaving some vulnerable areas. 
This armor combines larger and more mechanically robust materials in a macrostructure design that still allows for movement. Implications Military Given their combined lightweight and protective nature, nacre, conch shell and fish scales are a few of the many natural structures being studied by US military departments for bioinspired armor applications. The major complication that must be addressed for these types of armor is enhancing flexibility and reducing weight without compromising protection from ballistic-type impacts. The primary benefit of nacre-inspired biological armor is its extreme ability to resist penetration through energy dissipation. This is especially prevalent when compared to monolithic ceramic panels. Additionally, conch-inspired materials provide improved tortuosity when compared to monolithic plates. Biological materials derived from these structures show promise for lightweight armor systems without compromising impact resistance. Materials inspired by fish scales have the potential to provide bending and rotation abilities to wearable armor systems as well. Bone-inspired bio-armor systems are also being explored for ballistic shock mitigation applications. One such system developed by the US Army Research Laboratory utilizes an alternating soft and stiff material distribution as found in the bones located in the forelimbs of horses. This particular bone system not only provides exceptional load-bearing and impact-resistance abilities, but has a design replicable for alternative armor applications. Using metallic foam materials, a “coupon” panel was designed to mimic the mechanical properties of the bone systems, and placed on a cylindrical steel support. The system was tested using gas gun testing, in which a high-speed projectile was shot at the coupon. An Alulight aluminum foam (the same material used for the soft layers of the coupon) at different densities was used as a baseline for comparison. It was found that the bone- and hoof-inspired system was not only lighter, but had fewer projectile penetrations and better shock absorption. The bone system significantly reduced the accelerations of peak stress waves over both high and low frequencies, showing enhanced shock absorption. Sports helmets and Equipment Bioinspired armor also presents several applications in athletic equipment, including helmets and other protective gear. The alligator gar fish has scales resistant to cuts and punctures. Using 3D printing technology, scales of various sizes can be added to rubber pads, providing these properties to regular clothing. This was tested on Kevlar gloves in particular, with smaller scales around the finger joints and larger scales around the base of the hand. This was found to be useful in protecting against stab wounds or other sharp objects. Polymer materials inspired by conch shells have also been explored, as they are difficult to break in drop tests simulating the same impact as a bullet wound. A conch-inspired design could also have the potential to make helmets thinner and lighter while offering the same level of protection. For example, a research group at MIT developed a 3D-printed conch-inspired prototype for use in helmets and body armor. This prototype was fabricated using proprietary Stratasys photopolymers VeroMagenta and TangoBlackPlus, which were deposited and cured under UV light. 
Using 3D printing and additive manufacturing techniques, the research group arranged the layers with two orders of hierarchy and mimicked the alternating plank angles seen in conch. Drop tower testing was conducted in which the composites were dropped from different heights, reaching velocities of up to 3 m/s. A composite with only a single order of hierarchy was used for comparison. The prototype with two orders of hierarchy was able to withstand damage at all incident velocities, while the control was damaged at a velocity of 2.5 m/s. This composite was found to enhance impact performance by 70% compared to the single-order prototype, and 85% compared to a stiff material control. This study confirmed that hierarchical designs inspired by conch shell may have impressive applications in helmets due to their extraordinary ability to withstand blunt impacts. Dragon silk is a type of spider silk that is thinner than human hair, yet stronger than Kevlar. Utilizing this material would create protective gear that is light, breathable, biodegradable and durable, with potential applications in bulletproof vests. Other Applications Overview In addition to creating armor for protection, bio-inspired armor also has medicinal implications. For instance, sponges, sealants, and powders can stimulate blood coagulation and provide a barrier to an injury site. In particular, the phenol-amine crosslinking found in insect exoskeletons has anti-cellular-adhesion, antithrombotic, and antifouling properties. Combining the properties of the sponges and insect exoskeleton with the bioinspired materials mentioned above could lead to the creation of hybrid bioinspired armor with antimicrobial and antithrombotic properties and toughness. Insect Sclerotization-Inspired Technologies The process of insect sclerotization involves the hardening of the exoskeleton shell, which is facilitated by the covalent crosslinking of dopamine derivatives with chitin fibers and other endogenous proteins to generate the chitinous exoskeleton. One research group was able to create a crosslinked version of bovine serum albumin (BSA) and hydrocaffeic acid (HCA) grafted with polyethylene glycol (PEG) by using the same mechanism found in insect sclerotization. The new cross-linked molecule had antifouling properties and prevented the formation of biofilms on biomedical devices, helping maintain sterility. Using the same crosslinking method, catechol and collagen were crosslinked and chelated with zinc ions to create a “metal-phenol-polyamine system” on the surface of a sponge. The presence of the metal ions gave the sponge antimicrobial properties against gram-negative and gram-positive bacteria. The cross-linked substrates activated coagulation pathways to promote platelet aggregation, allowing the sponge armor to have hemostatic and wound-healing properties. The efficacy of this technology was modeled in rabbit and rat studies, which demonstrated successful hemostasis and wound healing when these animals were treated with the sponges. Medicinal Applications Investigating the mechanisms behind the formation of biological armor can also provide insights into human disorders and diseases. In particular, the development of the osteoderms in alligator skin can be used to investigate the disease progression of heterotopic ossification. Heterotopic ossification, also known as “Stone Man” disease, is a human disease that results in bone formation in mature soft tissues. 
While the causes of heterotopic ossification are known, the cellular mechanism has remained elusive. Alligator scutes have the potential to be a useful disease model for heterotopic ossification because the osteoderms in alligators form in soft tissues at a late stage in development (the post-embryonic stage), similar to the bone formations in heterotopic ossification. Using the alligator osteoderms to study the mechanism and disease progression of heterotopic ossification can potentially lead to more effective treatment options for the disease (e.g., pharmaceutical). Researchers at the Naval Medical Research Institute in Shanghai investigated the healing properties of shark skin on wounds exposed to seawater. Because wounds immersed in seawater are exposed to low temperatures, high salt concentrations, and various microbes, they tend to have a slower healing process. Type I collagen extracted from blue sharks and an anti-seawater-immersion polyurethane film were deposited on a sponge to create a shark skin collagen sponge bandage. The shark skin bandage shielded the wound from seawater for up to four hours after submersion and strongly promoted wound healing, compared to gauze and chitosan bandages treated with polyurethane film. Shark skin has also been investigated for applications in the transportation industry as a coating for airplanes to reduce drag. Manufacturing Processes Additive Manufacturing Additive manufacturing methods, including 3D printing/fused deposition modeling, material jetting, and powder-based fusion, have been used to manufacture bio-inspired armor. 3D printing and fused deposition modeling of biomimetic armor are popular due to their cost-effectiveness and ease of manufacturing. The filament-based 3D printing process involves extruding a polymer filament through a nozzle and printing objects layer by layer onto a hot surface. Common filaments used in this process are ABS, PLA, TPU, and PE. Once finished, the part requires no additional treatment. Powder-based 3D printing involves spreading a thin layer of powder onto a surface and connecting it with a binder into the desired geometry. Undesired powder is removed once the object is printed. Some common powders used are Sr-HT, aluminum oxide powder, and calcium polyphosphate. Compared to filament-based 3D printing, powder-based manufacturing provides higher manufacturing accuracy and finer detail on the microscale. Overall, challenges with 3D printing include recreating precise details on the nanoscale, poor surface quality, and slow printing rates. 3D printing and FDM have been used to create armors mimicking the structure of nacre, conch, and fish scales. Biomimetic nacre-like armors created through FDM show improved impact resistance compared to a monolithic panel, as well as minimized damage. For overlapping scales based on fish, 3D printing allowed flexibility while maintaining a multiple-layer defense. Stiff and soft layered extruded armor based on horse hooves also allowed for greater energy absorption relative to a monolayer. Material jetting (MJ), also referred to as 3D inkjet printing, is the process by which a photopolymer resin is deposited onto a surface as droplets, then cured by UV light. MJ features high resolution, desirable surface properties, and multi-material prints. MJ has been used to create composite stiff/soft biomimetic materials, as well as common structural motifs like Bouligand, helicoidal, overlapping, and lamellar. 
MJ is therefore useful for printing conch-shell-inspired (cross-lamellar), fish-scale-inspired, and chitin-inspired armors. Despite these benefits, MJ is expensive, requires post-processing, and uses sensitive materials that may degrade the final quality of the part. Subtractive Manufacturing Subtractive manufacturing methods have been used in the production of biomimetic armor, although they are not as common as additive manufacturing methods in this field. Biomimetic armor often takes inspiration from natural structures and organisms to create materials with enhanced properties such as strength, flexibility, and lightweight characteristics. Subtractive manufacturing methods that have been employed in the fabrication of biomimetic armor include CNC machining and laser cutting/engraving. Computer Numerical Control (CNC) machining involves the use of computer-controlled machinery to remove material from a workpiece. It allows for precise shaping and detailing of various materials, including metals, ceramics, and composites, which can be utilized in biomimetic armor production. CNC machining has been used to create nacre-like dovetail tablets using PMMA. Laser cutting/engraving involves the use of a high-powered laser beam to cut through materials. It offers high precision and can be used with a variety of materials, including metals, polymers, and composites. Laser cutting can be employed in the fabrication of biomimetic armor to create intricate designs and patterns, and has notably been used to engrave the dovetail patterns of nacre tablets onto carbon-fiber/epoxy composites. Additionally, laser engraving has been used to manufacture a segmented plate inspired by fish scales that was then placed on a soft elastomeric substrate. These subtractive manufacturing methods can be combined with other techniques and processes to create biomimetic armor that mimics the structural and functional characteristics of natural organisms, providing enhanced protection and performance in various applications. References Bioinspiration Armour
Bioinspired armor
[ "Engineering", "Biology" ]
7,369
[ "Biological engineering", "Bioinspiration" ]
76,117,102
https://en.wikipedia.org/wiki/NAURA%20Technology%20Group
NAURA Technology Group (Naura) is a partially state-owned, publicly listed Chinese company that manufactures semiconductor chip production equipment. It is currently the largest semiconductor equipment manufacturer in China. History In September 2001, Beijing Electronics Holdings, a government SASAC entity, initiated the establishment of Beijing Sevenstar Electronics (Sevenstar Electronics). On 16 March 2010, the company held its initial public offering on the ChiNext board of the Shenzhen Stock Exchange. It was the biggest gainer among mainland China stocks on that day, when its initial price of 33 yuan per share jumped 79% to 59 yuan. In 2016, Sevenstar Electronics acquired Beijing North Microelectronics (NMC) from its parent, Beijing Electronics Holdings. NMC specialized in silicon etching and physical vapor deposition (PVD) equipment. On 24 February 2017, after the company restructured following its acquisition, it was renamed NAURA Technology Group. In January 2018, the Committee on Foreign Investment in the United States (CFIUS) approved Naura's purchase of Akrion Systems, a Pennsylvania-based rival that was a supplier of advanced wafer surface preparation technology. This was the first takeover of an American company by a Chinese one to be approved since Donald Trump became President of the United States. Prior to that, the Trump administration had blocked all Chinese acquisitions of US target companies as a result of the China–United States trade war. In October 2022, Naura told its American employees in China to stop taking part in R&D activities to comply with the United States New Export Controls on Advanced Computing and Semiconductors to China. Naura stated its subsidiary, Beijing Naura Magnetoelectric Technology, was on the Bureau of Industry and Security unverified list, although it accounted for only 0.5% of the company's annual revenue. Its share price dropped 20% that week. A week later, US trade officials from the American embassy in Beijing held talks with executives of Naura. In December 2022, Beijing Naura Magnetoelectric Technology was removed from the Bureau of Industry and Security unverified list after its bona fides were able to be verified. In February 2023, Yangtze Memory Technologies reduced its equipment purchase orders from Naura by 70%. The order cancellations started in October 2022, which was around the same time the US export controls came into effect. In January 2024, Naura stated it expected its 2023 revenue to increase by around half from a year earlier, as its technology developments allowed it to fulfil local demand and gain a greater market share in the country. In February 2024, Bloomberg News reported that Naura was one of the top investment picks among Wall Street firms such as Barclays and Sanford C. Bernstein. A company comparable to Applied Materials, it would be able to fulfill local market demand in China and fill the void left by foreign firms unable to continue doing business due to geopolitical restrictions. In April 2024, it was reported that Naura was starting research on lithography systems. In September 2024, Taiwanese authorities accused eight mainland Chinese technology companies, including Naura, of illegally poaching talent from Taiwan. Naura denied poaching local workers and stated its office in Taiwan operated in accordance with local laws and regulations. In December 2024, Naura was targeted in a new round of US export controls and added to the United States Department of Commerce's Entity List. 
Business lines Naura has four business lines: Semiconductors (plasma etching, PVD, CVD, oxidation/diffusion, cleaning system, and annealing) Vacuum technology (heat treatment, crystal growth and magnetic material) Lithium battery equipment Precision components (resistors, capacitors, crystal devices, and module power supplies) See also Semiconductor industry in China Applied Materials References External links 2001 establishments in China 2010 initial public offerings Companies based in Beijing Companies listed on the Shenzhen Stock Exchange Electronics companies established in 2001 Equipment semiconductor companies Government-owned companies of China Semiconductor companies of China Companies in the CSI 100 Index 2001 in Beijing
NAURA Technology Group
[ "Engineering" ]
817
[ "Equipment semiconductor companies", "Semiconductor fabrication equipment" ]
59,533,535
https://en.wikipedia.org/wiki/Smog%20tower
Smog towers or smog-free towers (see below for other names) are structures designed as large-scale air purifiers to reduce air pollution particles (smog). This approach to the problem of urban air pollution involves air filtration and the removal of suspended mechanical particulates such as soot, and requires energy or power. Another approach is to remove urban air pollution by a chimney effect in a tall stack or updraft tower, which may be either filtered or released at altitude as with a solar updraft tower, and which may not require operating energy beyond what may be produced by the updraft. World's first air cleaning tower The world's first smog-free tower was built by Dutch artist Daan Roosegaarde. It was unveiled in September 2015 in Rotterdam, and later similar structures toured or were installed in Beijing and Tianjin, China; Kraków, Poland; Anyang, South Korea; and Abu Dhabi. The 7-meter (23 ft) tall tower uses patented positive ionisation technology and is expected to clean 30,000 m³ of air per hour. Gallery World's largest air cleaning tower In 2016, a tower was built in Xi'an, Shaanxi to tackle the city's pollution. It was funded by the provincial government and cost US$2 million. The running cost is $30,000 per year. It is under testing by researchers at the Institute of Earth Environment of the Chinese Academy of Sciences. The experimental demonstration urban updraft tower is cleaning the air in central China with little external energy input. A 60-metre urban chimney is surrounded by a solar collector. This project was led by Cao Jun Ji, a chemist at the Chinese Academy of Sciences' Key Laboratory of Aerosol Chemistry and Physics. This work has since been published, with the performance data and modelling. “I like to tell my students that we don't need to be medical doctors to save lives ... If we can just reduce the air pollution in major metropolitan areas by 20 percent, for example, we can save tens of thousands of lives each year ... I hope that people will realize that this is a really effective and cheap way to solve the PM2.5 problem.” “In the case of India, their population is more packed together, so the towers will be more effective in mitigating PM2.5 … At least during the next 10-15 years, they can use them to provide relief to residents while they invest in clean energy technology.” —David Pui, Regents Professor and LM Fingerson/TSI Chair in Mechanical Engineering of the University of Minnesota, explained. Other towers India There are at least eight smog towers in India, some of which are smaller in scale: Connaught Place (around 80 ft; since Aug 2021) Anand Vihar (around 80 ft) Lajpat Nagar Central market (20 ft; since Jan 2020) Gandhi Nagar market (12 ft) Krishna Nagar market (12 ft) Bangalore (15 more may be installed later) Chandigarh (24–25 m; water used to remove pollutants) Jaipur Projects under development In Delhi, India, Kurin Systems is developing a tall smog tower called the "Kurin City Cleaner". It is different from Daan Roosegaarde's Smog Tower in that it won't depend on the ionization technique to clean the air. The H14-grade HEPA filter, known for being able to clean up to 99.99% of particulate matter, will be used instead, together with a pre-filter and activated carbon. It is claimed the tower will filter air for up to 75,000 people in the surrounding area and clean more than 32 million cubic metres of air every day. ZNera Space proposed a Lutyens' Delhi smog tower network. 
Efficacy In 2023, researchers from IIT Bombay conducted a study of the smog tower in Connaught Place, Delhi. They found that the tower's air cleaning efficiency varies with distance: at the source it operates at 50% efficiency, but this drops to 30% just 50 meters away, and decreases further to slightly over 10% at a distance of 500 meters. They also found that the filter housing was not properly sealed, allowing contaminated air to circumvent the filtration process. Reception Some air pollution experts view smog filtration tower projects with scepticism. For example, Professor Alastair Lewis, Science Director at the National Centre for Atmospheric Science (NCAS), has argued that static air cleaners, like the prototypes in Beijing and Delhi, cannot process enough city air, quickly enough, to make a meaningful difference to urban pollution. He said that it was "easier to come up with technologies and schemes that stop harmful emissions at source, rather than to try to capture the resulting pollution once it's free and in the air". Noting that the Delhi tower would be powered by (mostly) coal-fired electricity, Sunil Dahiya of India's Centre for Research on Energy and Clean Air commented, "so we will only be adding to pollution elsewhere in the country". According to The Times, environmentalists said that "given the city's [Delhi's] size and the scale of its pollution, 2.5 million smog towers would be needed to clean its air". In rebuttal, "The objective is not to clear entire Delhi's air, it is to create special zones where people can breathe," said Anwar Ali Khan, the engineer in charge of the project. See also Air-supported structure Biofilter CityTrees Domed city Green building Green wall List of tallest buildings and structures Sustainable city Direct air capture References External links "Filtration Solutions to Mitigate Coronavirus Aerosol and PM2.5 Pollutants" by Professor David Pui David Y. H. Pui talked about the smog free towers (SALSCS) in Xi'an and Delhi (video starts from 33:44) IFC Mall installs extra-large air purifiers to manage indoor fine dust (machine translation, original text in Korean) Development of Passive/Active integrated module device for fine dust free zone implementation (2nd year) | Bucheon City, Korea's first fine dust reduction device pilot operation (machine translation, original text in Korean) Nutan Labs Smog Towers, Nutan Labs is the producer of the tower in Bangalore This device can purify air over 500 sqm WAYU air purifiers on Delhi roads turn dustbins, spittoons Vast grid of filter towers proposed across Delhi to combat toxic smog Studio Symbiosis proposes Aũra towers to alleviate air pollution in Delhi Air pollution Building biology Energy conversion Filters Industrial gases Gas technologies Scrubbers Solar power
Smog tower
[ "Chemistry", "Engineering" ]
1,397
[ "Chemical equipment", "Building engineering", "Filters", "Scrubbers", "Industrial gases", "Filtration", "Chemical process engineering", "Building biology" ]
59,537,838
https://en.wikipedia.org/wiki/Chip%20on%20board
Chip on board (COB) is a method of circuit board manufacturing in which integrated circuits (e.g. microprocessors) are attached (wired, bonded directly) to a printed circuit board, and covered by a blob of epoxy. Chip on board eliminates the packaging of individual semiconductor devices, which allows a completed product to be less costly, lighter, and more compact. In some cases, COB construction improves the operation of radio frequency systems by reducing the inductance and capacitance of integrated circuit leads. COB effectively merges two levels of electronic packaging: level 1 (components) and level 2 (wiring boards), and may be referred to as "level 1.5". Construction A finished semiconductor wafer is cut into dies. Each die is then physically bonded to the PCB. Three different methods are used to connect the terminal pads of the integrated circuit (or other semiconductor device) with the conductive traces of the printed circuit board. Flip chip In "flip chip on board", the device is inverted, with the top layer of metallization facing the circuit board. Small balls of solder are placed on the circuit board traces where connections to the chip are required. The chip and board are passed through a reflow soldering process to make the electrical connections. Wire bonding In "wire bonding", the chip is attached to the board with an adhesive. Each pad on the device is connected with a fine wire lead that is welded to the pad and to the circuit board. This is similar to the way that an integrated circuit is connected to its lead frame, but instead the chip is wire-bonded directly to the circuit board. Tape-automated bonding In "tape-automated bonding", thin flat metal tape leads are attached to the semiconductor device pads, then welded to the printed circuit board. In all cases, the chip and connections are covered with an encapsulant (a "glob-top") to reduce entry of moisture or corrosive gases to the chip, to protect the wire bonds or tape leads from physical damage, and to help dissipate heat. The printed circuit board substrate may be assembled into the final product, for example, as in a pocket calculator, or, in the case of a multi-chip module, the module may be inserted in a socket or otherwise attached to yet another circuit board. The substrate wiring board may include heat-dissipating layers where the mounted devices handle significant power, such as in LED lighting or power semiconductors. Or, the substrate may have low-loss properties required at microwave radio frequencies. COB LED modules COBs containing arrays of light-emitting diodes have made LED lighting more efficient. LED COBs include a layer of silicone containing yellow Ce:YAG phosphor that encapsulates the LEDs and turns the blue light of the LEDs into white light. The COB is usually built on an aluminum PCB that provides good thermal conductivity to a heatsink. COB LEDs can be compared with multi-chip modules or hybrid integrated circuits, since all three can incorporate multiple dies into a single unit. COB variants are also used in newer LED bulbs, in which case the substrate can be glass, sapphire, or sometimes regular phenolic. With a transparent substrate, the LED chips may be installed "upside down", shining through the substrate for higher outcoupling. Typically they are glued to the substrate with UV-setting glue, the interconnects are attached, and the encapsulant and phosphor are applied in a single step, with a back reflective coating applied to channel light out of the device.
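To make the thermal point above concrete, here is a minimal series-resistance estimate of junction temperature (a sketch only; every resistance and power value is an assumed round number, not taken from any datasheet). It illustrates why an aluminum-core board, with far lower thermal resistance than standard FR-4, keeps an LED array's junction temperature manageable.

```python
# Minimal one-dimensional thermal model for a COB LED module (illustrative).
# Junction temperature = ambient + power * sum of thermal resistances in series.
# All values below are assumed for illustration, not datasheet figures.

P_DISSIPATED_W = 20.0          # assumed: heat dissipated by the LED array

# Series thermal resistances, in kelvin per watt (assumed values)
R_JUNCTION_TO_BOARD = 0.8      # die and die-attach into the substrate
R_BOARD_ALUMINUM = 0.4         # aluminum-core PCB (aluminum conducts heat well)
R_BOARD_FR4 = 8.0              # same geometry in FR-4, for comparison
R_SINK_TO_AMBIENT = 2.0        # heatsink to ambient air

T_AMBIENT_C = 25.0

for board_name, r_board in [("aluminum PCB", R_BOARD_ALUMINUM), ("FR-4 PCB", R_BOARD_FR4)]:
    r_total = R_JUNCTION_TO_BOARD + r_board + R_SINK_TO_AMBIENT
    t_junction = T_AMBIENT_C + P_DISSIPATED_W * r_total
    print(f"{board_name}: junction temperature ~ {t_junction:.0f} C")
```

With these assumed numbers the aluminum board keeps the junction near 90 C while the FR-4 equivalent would run far above safe LED operating temperatures, which is the design rationale described above.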
References Electronics manufacturing Printed circuit board manufacturing
Chip on board
[ "Engineering" ]
751
[ "Electrical engineering", "Electronic engineering", "Electronics manufacturing", "Printed circuit board manufacturing" ]
53,292,889
https://en.wikipedia.org/wiki/Applications%20of%203D%20printing
In recent years, 3D printing has developed significantly and now performs crucial roles in many applications, the most common being manufacturing, medicine, architecture, and custom art and design; these uses range from fully functional to purely aesthetic. 3D printing processes are being used in the manufacturing and medical industries, as well as by sociocultural sectors that employ 3D printing for commercial purposes. Over the last decade there has been considerable interest in the possibilities of adopting 3D printing as a main manufacturing technology, since it could replace traditional methods that can be costly and time-consuming. Case studies have outlined how the customization abilities of 3D printing, through modifiable files, have been beneficial for cost and time effectiveness in healthcare applications. There are different types of 3D printing, such as fused filament fabrication (FFF), stereolithography (SLA), selective laser sintering (SLS), polyjet printing, Multi-Jet Fusion (MJF), direct metal laser sintering (DMLS), and electron beam melting (EBM). For a long time, the issue with 3D printing was that it demanded very high entry costs, which prevented profitable implementation by mass manufacturers compared with standard processes. Recent market trends suggest this is finally changing: the market for 3D printing has shown some of the quickest growth within the manufacturing industry in recent years. The applications of 3D printing are vast due to the ability to print complex pieces using a wide range of materials, from plastics and polymers as thermoplastic filaments, to resins, and even stem cells. Manufacturing applications AM technologies found applications starting in the 1980s in product development, data visualization, rapid prototyping, and specialized manufacturing. Their expansion into production (job production, mass production, and distributed manufacturing) has been under development in the decades since. Industrial production roles within the metalworking industries achieved significant scale for the first time in the early 2010s. Since the start of the 21st century there has been a large growth in the sales of AM machines, and their price has dropped substantially. According to Wohlers Associates, a consultancy, the market for 3D printers and services was worth $2.2 billion worldwide in 2012, up 29% from 2011. McKinsey predicts that additive manufacturing could have an economic impact of $550 billion annually by 2025. There are many applications for AM technologies, including architecture, construction (AEC), industrial design, automotive, aerospace, military, engineering, dental and medical industries, biotech (human tissue replacement), fashion, footwear, jewelry, eyewear, education, geographic information systems, food, and many other fields. Additive manufacturing's earliest applications have been on the toolroom end of the manufacturing spectrum.
For example, rapid prototyping was one of the earliest additive variants, and its mission was to reduce the lead time and cost of developing prototypes of new parts and devices, which had earlier been done only with subtractive toolroom methods such as CNC milling, turning, and precision grinding. Such subtractive methods are far more accurate than 3D printing, with accuracy down to 0.00005 in, and can create better-quality parts faster, but they are sometimes too expensive for low-accuracy prototype parts. With technological advances in additive manufacturing, however, and the dissemination of those advances into the business world, additive methods are moving ever further into the production end of manufacturing in creative and sometimes unexpected ways. Parts that were formerly the sole province of subtractive methods can now in some cases be made more profitably via additive ones. In addition, new developments in RepRap technology allow the same device to perform both additive and subtractive manufacturing by swapping magnetic-mounted tool heads. Cloud-based additive manufacturing Additive manufacturing in combination with cloud computing technologies allows decentralized and geographically independent distributed production. Cloud-based additive manufacturing refers to a service-oriented networked manufacturing model in which service consumers are able to build parts through Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Hardware-as-a-Service (HaaS), and Software-as-a-Service (SaaS). Distributed manufacturing as such is carried out by some enterprises; there are also services, such as 3D Hubs, that put people needing 3D printing in contact with owners of printers. Some companies offer online 3D printing services to both commercial and private customers, working from 3D designs uploaded to the company website. 3D-printed designs are either shipped to the customer or picked up from the service provider. There are many open-source websites that host downloadable STL files which can be modified or printed as-is. Files ranging from functional tools to aesthetic figurines are available to the general public. Open-source files can be beneficial for the user, as the printed object can be more cost-effective than commercial counterparts. Mass customization Companies have created services where consumers can customize objects using simplified web-based customization software and order the resulting items as unique 3D-printed objects. This now allows consumers to create things like custom cases for their mobile phones or models of scans of their brains. Nokia has released the 3D designs for its case so that owners can customize their own case and have it 3D printed. Rapid manufacturing Advances in RP technology have introduced materials that are appropriate for final manufacture, which has in turn introduced the possibility of directly manufacturing finished components. One advantage of 3D printing for rapid manufacturing lies in the relatively quick and inexpensive production of small numbers of parts. Rapid manufacturing is a new method of manufacturing, and many of its processes remain unproven. 3D printing is now entering the field of rapid manufacturing and was identified as a "next level" technology by many experts in a 2009 report. One of the most promising processes looks to be the adaptation of selective laser sintering (SLS) or direct metal laser sintering (DMLS), some of the better-established rapid prototyping methods.
These techniques were, however, still very much in their infancy, with many obstacles to be overcome before RM could be considered a realistic manufacturing method. There have been patent lawsuits concerning 3-D printing for manufacturing. Rapid prototyping Industrial 3D printers have existed since the early 1980s and have been used extensively for rapid prototyping and research purposes. These are generally larger machines that use proprietary powdered metals, casting media (e.g. sand), plastics, paper or cartridges, and are used for rapid prototyping by universities and commercial companies. Research 3D printing can be particularly useful in research labs due to its ability to make specialized, bespoke geometries. In 2012 a proof-of-principle project at the University of Glasgow, UK, showed that it is possible to use 3D printing techniques to assist in the production of chemical compounds. They first printed chemical reaction vessels, then used the printer to deposit reactants into them. They produced new compounds to verify the validity of the process, but did not pursue anything with a particular application. Usually, the FDM process is used to print hollow reaction vessels or microreactors. If the 3D print is performed within an inert gas atmosphere, the reaction vessels can be filled with highly reactive substances during the print. The 3D-printed objects are air- and watertight for several weeks. By printing reaction vessels in the geometry of common cuvettes or measurement tubes, routine analytical measurements such as UV/VIS, IR, and NMR spectroscopy can be performed directly in the 3D-printed vessel. In addition, 3D printing has been used in research labs as an alternative method of manufacturing components for use in experiments, such as magnetic shielding and vacuum components, with demonstrated performance comparable to traditionally produced parts. Food Additive manufacturing of food is being developed by squeezing out food, layer by layer, into three-dimensional objects. A large variety of foods are appropriate candidates, such as chocolate and candy, and flat foods such as crackers, pasta, and pizza. NASA has considered the versatility of the concept, awarding a contract to the Systems and Materials Research Consultancy to study the feasibility of printing food in space. NASA is also looking into the technology in order to create 3D-printed food to limit food waste and to make food designed to fit an astronaut's dietary needs. The food-tech startup Novameat from Barcelona has 3D-printed a steak from peas, rice, seaweed, and other ingredients laid down in a criss-cross pattern imitating intracellular proteins. One of the problems with food printing is the texture of the food: for example, foods that are not strong enough to hold their printed shape are not appropriate for 3D printing. Agile tooling Agile tooling is the process of using modular means to design tooling that is produced by additive manufacturing or 3D printing methods to enable quick prototyping and responses to tooling and fixture needs. Agile tooling uses a cost-effective and high-quality method to quickly respond to customer and market needs. It can be used in hydro-forming, stamping, injection molding and other manufacturing processes. Medical applications Surgical uses of 3D printing-centric therapies have a history beginning in the mid-1990s with anatomical modeling for bony reconstructive surgery planning.
By practicing on a tactile model before surgery, surgeons were more prepared and patients received better care. Patient-matched implants were a natural extension of this work, leading to truly personalized implants that fit one unique individual. Virtual planning of surgery and guidance using 3D-printed, personalized instruments have been applied to many areas of surgery, including total joint replacement and craniomaxillofacial reconstruction, with great success. Further study of the use of models for planning heart and solid organ surgery has led to increased use in these areas. Hospital-based 3D printing is now of great interest, and many institutions are pursuing adding this specialty within individual radiology departments. The technology is being used to create unique, patient-matched devices for rare illnesses. One example of this is the bioresorbable tracheal splint to treat newborns with tracheobronchomalacia developed at the University of Michigan. Several device manufacturers have also begun using 3D printing for patient-matched surgical guides (polymers). The use of additive manufacturing for serialized production of orthopedic implants (metals) is also increasing due to the ability to efficiently create porous surface structures that facilitate osseointegration. Printed casts for broken bones can be custom-fitted and open, letting the wearer scratch any itches, and wash and ventilate the damaged area. They can also be recycled. Fused filament fabrication (FFF) has been used to create microstructures with a three-dimensional internal geometry. Sacrificial structures or additional support materials are not needed. Structures using polylactic acid (PLA) can have fully controllable porosity in the range 20%–60%. Such scaffolds could serve as biomedical templates for cell culturing, or biodegradable implants for tissue engineering. 3D printing has been used to print patient-specific implants and devices for medical use. Successful operations include a titanium pelvis implanted into a British patient, a titanium lower jaw transplanted to a Dutch patient, and a plastic tracheal splint for an American infant. The hearing aid and dental industries are expected to be the biggest areas of future development using custom 3D printing technology. In March 2014, surgeons in Swansea used 3D-printed parts to rebuild the face of a motorcyclist who had been seriously injured in a road accident. Research is also being conducted on methods to bio-print replacements for tissue lost to arthritis and cancer. 3D printing technology can now be used to make exact replicas of organs. The printer uses images from patients' MRI or CT scans as a template and lays down layers of rubber or plastic. These models can be used to plan difficult operations, as was the case in May 2018, when surgeons used a 3D-printed replica of a kidney to practice a kidney transplant on a three-year-old boy. Thermal degradation during 3D printing of resorbable polymers, the same as those used in surgical sutures, has been studied, and parameters can be adjusted to minimize degradation during processing. Soft, pliable scaffold structures for cell cultures can be printed. In 3D printing, computer-simulated microstructures are commonly used to fabricate objects with spatially varying properties. This is achieved by dividing the volume of the desired object into smaller subcells using computer-aided simulation tools and then filling these cells with appropriate microstructures during fabrication.
Several different candidate structures with similar behaviours are checked against each other, and the object is fabricated when an optimal set of structures is found. Advanced topology optimization methods are used to ensure the compatibility of structures in adjacent cells. This flexible approach to 3D fabrication is widely used across various disciplines, from the biomedical sciences, where it is used to create complex bone structures and human tissue, to robotics, where it is used in the creation of soft robots with movable parts. 3D printing is also increasingly used in the design and fabrication of laboratory apparatus. 3D printing technology can also be used to produce personal protective equipment (PPE), which is worn by medical and laboratory professionals to protect themselves from infection when they are treating patients. Examples of PPE include face masks, face shields, connectors, gowns, and goggles. The most popular forms of 3D-printed PPE are face masks, face shields, and connectors. Additive manufacturing is nowadays also employed in the field of pharmaceutical sciences to create 3D-printed medication. Different 3D printing techniques (e.g., FDM, SLS, inkjet printing) are utilized according to their respective advantages and drawbacks for various drug delivery applications. Bio-printing In 2006, researchers at Cornell University published some of the pioneering work in 3D printing for tissue fabrication, successfully printing hydrogel bio-inks. The work at Cornell was expanded using specialized bioprinters produced by Seraph Robotics, Inc., a university spin-out, which helped to catalyze global interest in biomedical 3D printing research. 3D printing has been considered as a method of implanting stem cells capable of generating new tissues and organs in living humans. With their ability to transform into any other kind of cell in the human body, stem cells offer huge potential in 3D bioprinting. Professor Leroy Cronin of Glasgow University proposed in a 2012 TED Talk that it was possible to use chemical inks to print medicine. In 2015 the FDA approved Spritam®, a 3D-printed formulation of the drug levetiracetam. Currently, three methods of 3D printing have been explored for drug production: laser-based writing systems, inkjet printing systems, and nozzle-based systems. 3D bio-printing technology has been studied by biotechnology firms and academia for possible use in tissue engineering applications, in which organs and body parts are built using inkjet techniques. In this process, layers of living cells are deposited onto a gel medium or sugar matrix and slowly built up to form three-dimensional structures, including vascular systems. The first production system for 3D tissue printing was delivered in 2009, based on NovoGen bioprinting technology. Several terms have been used to refer to this field of research: organ printing, bio-printing, body part printing, and computer-aided tissue engineering, among others. The possibility of using 3D tissue printing to create soft tissue architectures for reconstructive surgery is also being explored. In 2013, Chinese scientists began printing ears, livers and kidneys with living tissue; researchers in China have been able to successfully print human organs using specialized 3D bioprinters that use living cells instead of plastic. Researchers at Hangzhou Dianzi University designed the "3D bioprinter" dubbed the "Regenovo".
Xu Mingen, Regenovo's developer, said that it can produce a miniature sample of liver tissue or ear cartilage in less than an hour, predicting that fully functional printed organs might take 10 to 20 years to develop. Medical devices On October 24, 2014, a five-year-old girl born without fully formed fingers on her left hand became the first child in the UK to have a prosthetic hand made with 3D printing technology. Her hand was designed by US-based e-NABLE, an open-source design organisation which uses a network of volunteers to design and make prosthetics, mainly for children. The prosthetic hand was based on a plaster cast made by her parents. A boy named Alex was also born with an arm missing from just above the elbow. The team was able to use 3D printing to build him an e-NABLE myoelectric arm that runs on servos and batteries and is actuated by electromyography signals from the muscles. With the use of 3D printers, e-NABLE has so far distributed thousands of plastic hands to children. Another example is Open Bionics, a company that makes fully functional bionic arms through 3D printing technology. 3D printing allows Open Bionics to create personalized designs for their clients, with different colours, textures, patterns, and even "Hero Arms" that emulate superheroes like Ironman or characters from Star Wars. Printed prosthetics have been used in the rehabilitation of crippled animals. In 2013, a 3D-printed foot let a crippled duckling walk again. 3D-printed hermit crab shells have let hermit crabs inhabit a new style of home. A prosthetic beak was another tool developed through 3D printing to help a bald eagle named Beauty, whose beak was severely mutilated by a gunshot to the face. Since 2014, commercially available titanium knee implants for dogs made with a 3D printer have been used to restore the animals' mobility; over 10,000 dogs in Europe and the United States had been treated after only one year. In February 2015, the FDA approved the marketing of a surgical bolt which facilitates less-invasive foot surgery and eliminates the need to drill through bone. The 3D-printed titanium device, the 'FastForward Bone Tether Plate', is approved for use in corrective surgery to treat bunions. In October 2015, the group of Professor Andreas Herrmann at the University of Groningen developed the first 3D-printable resins with antimicrobial properties. Employing stereolithography, they incorporated quaternary ammonium groups into dental appliances, which kill bacteria on contact. This type of material can be further applied in medical devices and implants. 3D printing has been especially beneficial for the creation of patient-specific prosthetics for large or invasive surgeries. In a case study published in 2020 about the benefits of 3D printing for hip prostheses, three patients with acetabular defects needed revisions of total hip arthroplasty (THA). 3D printing was utilized to produce prostheses that were specific to each of the three patients and their complex bone defects, which resulted in better post-procedure recovery and prognosis for the individuals. In a case study about the applications of 3D printing in occupational therapy, the customization and quick, low-cost fabrication it offers were utilized in tools such as customized scissor handles and bottle openers for people with hand motor complications. Beverage holders, writing guides, grip strengtheners, and other occupational therapy items were designed, printed, and compared with commercially available counterparts in a cost analysis.
It found that the 3D-printed items were on average 10.5 times more cost-effective than commercial alternatives. 3D printing for medical devices can range from human prosthetics, to animal prostheses, to medical machine tools: On June 6, 2011, the company Xilloc Medical, together with researchers at the University of Hasselt in Belgium, successfully printed a new jawbone for an 83-year-old Dutch woman from the province of Limburg. 3D printing has been used to produce prosthetic beaks for eagles, a Brazilian goose named Victoria, and a Costa Rican toucan called Grecia. In March 2020, the Isinnova company in Italy printed 100 respirator valves in 24 hours for a hospital that lacked them in the midst of the coronavirus outbreak. 3D printing technology has thus proven beneficial in many areas of healthcare. Pharmaceutical Formulations In May 2015 the first pharmaceutical formulation manufactured by 3D printing was produced. In August 2015 the FDA approved the first 3D-printed tablet. Binder-jetting into a powder bed of the drug allows very porous tablets to be produced, which enables high drug doses in a single formulation that rapidly dissolves and is easily absorbed. This has been demonstrated for Spritam, a reformulation of levetiracetam for the treatment of epilepsy. Additive manufacturing has been increasingly utilized by scientists in the pharmaceutical field, and after the first FDA approval of a 3D-printed formulation, scientific interest in 3D printing applications for drug delivery grew even further. Research groups around the world are studying different ways of incorporating drugs within a 3D-printed formulation, for example by incorporating poorly water-soluble drugs in self-emulsifying systems or emulsion gels. 3D printing technology allows scientists to develop formulations with a personalized approach, i.e. dosage forms tailored specifically to an individual patient. Moreover, depending on the advantages of the techniques utilized, formulations with various properties can be achieved. These may contain multiple drugs in a single dosage form, multi-compartmental designs, drug delivery systems with distinct release characteristics, etc. In earlier years, researchers mainly focused on the fused deposition modelling (FDM) technique; nowadays, other printing techniques such as selective laser sintering (SLS), stereolithography (SLA) and semi-solid extrusion (SSE) are also gaining traction and are being used for pharmaceutical applications. Industrial applications Apparel 3D printing has entered the world of clothing, with fashion designers experimenting with 3D-printed bikinis, shoes, and dresses. In commercial production, Nike used 3D printing to prototype and manufacture the 2012 Vapor Laser Talon football shoe for players of American football, and New Balance is 3D manufacturing custom-fit shoes for athletes. 3D printing has come to the point where companies are printing consumer-grade eyewear with on-demand custom fit and styling (although they cannot print the lenses). On-demand customization of glasses is possible with rapid prototyping. However, academics have commented that consumer acceptance of such mass-customized apparel items may be limited by the potential reduction of brand value communication. In the world of high fashion, couturiers such as Karl Lagerfeld designing for Chanel, Iris van Herpen, and Noa Raviv working with technology from Stratasys have employed and featured 3D printing in their collections.
Selections from their lines, and from others working with 3D printing, were showcased at the 2016 Metropolitan Museum of Art Anna Wintour Costume Center exhibition "Manus x Machina". Vanessa Friedman, fashion director and chief fashion critic at The New York Times, says 3D printing will have significant value for fashion companies down the road, especially if it transforms into a print-it-yourself tool for shoppers. "There's real sense that this is not going to happen anytime soon," she says, "but it will happen, and it will create dramatic change in how we think both about intellectual property and how things are in the supply chain". She adds: "Certainly some of the fabrications that brands can use will be dramatically changed by technology." During the COVID-19 pandemic, the Ukrainian-American undergraduate Karina Popovich founded Makers for COVID-19, which used 3D printing to create face shields, face masks and other items of personal protective equipment. Industrial art and jewelry 3D printing is used to manufacture moulds for making jewelry, and even the jewelry itself. 3D printing is becoming popular in the customisable gifts industry, with products such as personalized models of art and dolls, in many shapes: in metal or plastic, or as consumable art, such as 3D-printed chocolate. Transportation Industries In cars, trucks, and aircraft, additive manufacturing is beginning to transform both unibody and fuselage design and production, and powertrain design and production. For example, General Electric uses high-end 3D printers to build parts for turbines. Many of these systems are used for rapid prototyping before mass production methods are employed. In early 2014, the Swedish supercar manufacturer Koenigsegg announced the One:1, a supercar that utilizes many 3D-printed components. In the limited run of vehicles Koenigsegg produces, the One:1 has side-mirror internals, air ducts, titanium exhaust components, and complete turbocharger assemblies that were 3D printed as part of the manufacturing process. Urbee was the first car in the world whose bodywork and windows were "printed" using 3D printing technology. Created in 2010 through a partnership between the US engineering group Kor Ecologic and Stratasys (a manufacturer of 3D printers), it is a hybrid vehicle with a futuristic look. In 2014, Local Motors debuted Strati, a functioning vehicle that was entirely 3D printed using ABS plastic and carbon fiber, except for the powertrain. In 2015, the company produced another iteration known as the LM3D Swim that was 80 percent 3D-printed. In 2016, the company used 3D printing in the creation of automotive parts, such as those used in Olli, a self-driving vehicle developed by the company. In May 2015 Airbus announced that its new Airbus A350 XWB included over 1000 components manufactured by 3D printing. 3D printing is also being utilized by air forces to print spare parts for planes. In 2015, a Royal Air Force Eurofighter Typhoon fighter jet flew with printed parts. The United States Air Force has begun to work with 3D printers, and the Israeli Air Force has also purchased a 3D printer to print spare parts. In 2017, GE Aviation revealed that it had used design for additive manufacturing to create a helicopter engine with 16 parts instead of 900, making it 40% lighter and 60% cheaper. This also led to a simplified supply chain requiring less support from outside suppliers, as many of the parts could be produced in-house.
Construction, home development The use of 3D printing to produce scale models within architecture and construction has steadily increased in popularity as the cost of 3D printers has fallen. This has enabled faster turnaround of such scale models and allowed a steady increase in the speed of production and the complexity of the objects being produced. Construction 3D printing, the application of 3D printing to fabricate construction components or entire buildings, has been in development since the mid-1990s; development of new technologies has steadily gained pace since 2012, and this sub-sector of 3D printing is beginning to mature. Firearms In 2012, the US-based group Defense Distributed disclosed plans to "design a working plastic gun that could be downloaded and reproduced by anybody with a 3D printer." Defense Distributed has also designed a 3D-printable AR-15-type rifle lower receiver (capable of lasting more than 650 rounds) and a 30-round M16 magazine. The AR-15 has multiple receivers (both an upper and a lower receiver), but the legally controlled part is the one that is serialized (the lower, in the AR-15's case). Soon after Defense Distributed succeeded in designing the first working blueprint to produce a plastic gun with a 3D printer in May 2013, the United States Department of State demanded that they remove the instructions from their website. After Defense Distributed released their plans, questions were raised regarding the effects that 3D printing and widespread consumer-level CNC machining may have on gun control effectiveness. In 2014, a man from Japan became the first person in the world to be imprisoned for making 3D-printed firearms. Yoshitomo Imura posted videos and blueprints of the gun online and was sentenced to jail for two years. Police found at least two guns in his household that were capable of firing bullets. Computers and robots 3D printing can also be used to make laptop and other computer cases, for example the Novena and VIA OpenBook standard laptop cases: a Novena motherboard can be bought and used in a printed VIA OpenBook case. Open-source robots are built using 3D printers: 3&DBot is an Arduino 3D printer-robot with wheels, and ODOI is a 3D-printed humanoid robot. Double Robotics grants access to their technology (an open SDK). Soft sensors and actuators 3D printing has found its place in soft sensor and actuator manufacturing, inspired by the 4D printing concept. The majority of conventional soft sensors and actuators are fabricated using multistep, low-yield processes entailing manual fabrication, post-processing/assembly, and lengthy iterations, with little flexibility in the customization and reproducibility of final products. 3D printing has been a game changer in these fields by introducing custom geometrical, functional, and control properties while avoiding the tedious and time-consuming aspects of the earlier fabrication processes. Sociocultural applications In 2005, a rapidly expanding hobbyist and home-use market was established with the inauguration of the open-source RepRap and Fab@Home projects. Virtually all home-use 3D printers released to date have their technical roots in the ongoing RepRap Project and associated open-source software initiatives. In distributed manufacturing, one study has found that 3D printing could become a mass-market product, enabling consumers to save money on purchasing common household objects.
For example, instead of going to a store to buy an object made in a factory by injection molding (such as a measuring cup or a funnel), a person might instead print it at home from a downloaded 3D model. Art and jewellery In 2005, academic journals began to report on the possible artistic applications of 3D printing technology, as used by artists such as Martin John Callanan at The Bartlett school of architecture. By 2007 the mass media had followed, with an article in the Wall Street Journal and Time magazine listing a printed design among the 100 most influential designs of the year. During the 2011 London Design Festival, an installation curated by Murray Moss and focused on 3D printing was held in the Victoria and Albert Museum (the V&A). The installation was called Industrial Revolution 2.0: How the Material World will Newly Materialize. At the 3DPrintshow in London, which took place in November 2013 and 2014, the art sections had works made with 3D-printed plastic and metal. Several artists, such as Joshua Harker, Davide Prete, Sophie Kahn, Helena Lukasova, and Foteini Setaki, showed how 3D printing can modify aesthetic and artistic processes. In 2015, engineers and designers at MIT's Mediated Matter Group and Glass Lab created an additive 3D printer that prints with glass, called G3DP. The results can be structural as well as artistic: transparent glass vessels printed on it are part of some museum collections. The use of 3D scanning technologies allows the replication of real objects without the use of moulding techniques, which in many cases can be more expensive, more difficult, or too invasive to perform, particularly for precious artwork or delicate cultural heritage artifacts where direct contact with the moulding substances could harm the original object's surface. 3D selfies A 3D photo booth such as the Fantasitron located at Madurodam, the miniature park, generates 3D selfie models from 2D pictures of customers. These selfies are often printed by dedicated 3D printing companies such as Shapeways. These models are also known as 3D portraits, 3D figurines or mini-me figurines. Communication Employing the additive layer technology offered by 3D printing, terahertz devices which act as waveguides, couplers and bends have been created. The complex shapes of these devices could not be achieved using conventional fabrication techniques. The commercially available professional-grade printer EDEN 260V was used to create structures with a minimum feature size of 100 μm. The printed structures were later DC sputter-coated with gold (or another metal) to create a terahertz plasmonic device. In 2016, artist/scientist Janine Carr created the first 3D-printed vocal percussion (beatbox) as a waveform, with the ability to play the soundwave by laser, along with four vocalised emotions which were also playable by laser. Domestic use Some early consumer examples of 3D printing include the 64DD released in 1999 in Japan. As of 2012, domestic 3D printing was mainly practiced by hobbyists and enthusiasts, and was little used for practical household applications beyond, for example, ornamental objects. Some practical examples include a working clock and gears printed for home woodworking machines, among other purposes. Websites associated with home 3D printing tended to include backscratchers, coat hooks, door knobs, etc. As of 2023, consumer 3D printing has become increasingly common: an estimated 85% of 3D printers sold are now in the personal/desktop market.
Now more than ever, it is increasingly common to see 3D printing utilized by at-home DIY/maker communities, as 3D printers have become significantly more affordable for consumer audiences in recent years. The open-source Fab@Home project has developed printers for general use. They have been used in research environments to produce chemical compounds with 3D printing technology, including new ones, initially without immediate application, as a proof of principle. The printer can print with anything that can be dispensed from a syringe as a liquid or paste. The developers of the chemical application envisage both industrial and domestic use for this technology, including enabling users in remote locations to produce their own medicine or household chemicals. 3D printing is now working its way into households, and more and more children are being introduced to the concept of 3D printing at earlier ages. As more people gain access to the technology, new household uses will emerge. The OpenReflex SLR film camera was developed for 3D printing as an open-source student project. Education and research 3D printing, and open-source 3D printers in particular, are the latest technology making inroads into the classroom. 3D printing allows students to create prototypes of items without the use of the expensive tooling required in subtractive methods. Students design and produce actual models they can hold. The classroom environment allows students to learn and employ new applications for 3D printing. RepRaps, for example, have already been used for an educational mobile robotics platform. Some authors have claimed that 3D printers offer an unprecedented "revolution" in STEM education. The evidence for such claims comes both from the low-cost ability for rapid prototyping in the classroom by students and from the fabrication of low-cost, high-quality scientific equipment from open-hardware designs, forming open-source labs. Engineering and design principles are explored, as well as architectural planning. Students recreate duplicates of museum items such as fossils and historical artifacts for study in the classroom without risking damage to sensitive collections. Other students interested in graphic design can easily construct models with complex working parts. 3D printing gives students a new perspective on topographic maps. Science students can study cross-sections of internal organs of the human body and other biological specimens, and chemistry students can explore 3D models of molecules and the relationships within chemical compounds. The true representation of exactly scaled bond lengths and bond angles in 3D-printed molecular models can be used in organic chemistry lecture courses to explain molecular geometry and reactivity. According to a paper by Kostakis et al., 3D printing and design can electrify various literacies and creative capacities of children in accordance with the spirit of the interconnected, information-based world. Future applications for 3D printing might include creating open-source scientific equipment. Environmental use In Bahrain, large-scale 3D printing using a sandstone-like material has been used to create unique coral-shaped structures which encourage coral polyps to colonize and regenerate damaged reefs. These structures have a much more natural shape than other structures used to create artificial reefs and, unlike concrete, are neither acidic nor alkaline, having a neutral pH.
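As a concrete illustration of the home-printing workflow described in the sociocultural and domestic-use sections above, the sketch below generates a simple parametric model directly as an ASCII STL file, the mesh format consumer slicers accept. The box shape, dimensions, and file name are hypothetical examples invented for illustration; a real customization service would emit far more complex geometry, but the principle of programmatically producing mesh triangles from user parameters is the same.

```python
# Minimal parametric ASCII-STL generator (an illustrative sketch, not any real
# service's code). It emits a simple customizable box, the kind of geometry a
# web customizer might produce for home printing.

def box_triangles(w, d, h):
    """Yield the 12 triangles of a w x d x h box, wound counter-clockwise
    as seen from outside, so facet normals point outward."""
    v = [(x, y, z) for x in (0.0, w) for y in (0.0, d) for z in (0.0, h)]
    quads = [
        (0, 1, 3, 2),  # x = 0 face
        (4, 6, 7, 5),  # x = w face
        (0, 4, 5, 1),  # y = 0 face
        (2, 3, 7, 6),  # y = d face
        (0, 2, 6, 4),  # z = 0 face
        (1, 5, 7, 3),  # z = h face
    ]
    for a, b, c, e in quads:
        yield (v[a], v[b], v[c])
        yield (v[a], v[c], v[e])

def write_stl(path, triangles, name="custom_part"):
    """Write triangles to an ASCII STL file. Most slicers recompute facet
    normals from vertex order, so a zero normal is written for simplicity."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x:.3f} {y:.3f} {z:.3f}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Example "customization": a 40 x 20 x 10 mm box, sliceable by common tools.
write_stl("custom_box.stl", box_triangles(40.0, 20.0, 10.0))
```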
Cultural heritage In the last several years, 3D printing has been used intensively in the cultural heritage field for preservation, restoration and dissemination purposes. Many European and North American museums have purchased 3D printers and actively recreate missing pieces of their relics. Scan the World is the largest archive of 3D-printable objects of cultural significance from across the globe. Each object, originating from 3D scan data provided by their community, is optimised for 3D printing and free to download on MyMiniFactory. By working alongside museums, such as the Victoria and Albert Museum, and private collectors, the initiative serves as a platform for democratizing the art object. The Metropolitan Museum of Art and the British Museum have started using their 3D printers to create museum souvenirs that are available in the museum shops. Other museums, like the National Museum of Military History and Varna Historical Museum, have gone further and sell digital models of their artifacts, created using Artec 3D scanners, through the online platform Threeding in a 3D-printing-friendly file format which everyone can 3D print at home. Specialty materials Consumer-grade 3D printing has resulted in new materials developed specifically for 3D printers. For example, filament materials have been developed to imitate wood in appearance as well as texture. Furthermore, new technologies, such as infusing carbon fiber into printable plastics, allow for stronger, lighter materials. In addition to new structural materials that have been developed due to 3D printing, new technologies have allowed patterns to be applied directly to 3D-printed parts. Iron oxide-free Portland cement powder has been used to create architectural structures up to 9 feet in height. See also 3D printing processes 3D printing Construction 3D printing Health and safety hazards of 3D printing References Sources 3D printing Industrial design Industrial processes
Applications of 3D printing
[ "Engineering" ]
7,761
[ "Industrial design", "Design engineering", "Design" ]
53,293,378
https://en.wikipedia.org/wiki/Position-specific%20isotope%20analysis
Position-specific isotope analysis, also called site-specific isotope analysis, is a branch of isotope analysis aimed at determining the isotopic composition of a particular atom position in a molecule. Isotopes are elemental variants with different numbers of neutrons in their nuclei, thereby having different atomic masses. Isotopes are found in varying natural abundances depending on the element; their abundances in specific compounds can vary from random distributions (i.e., stochastic distribution) due to environmental conditions that act on the mass variations differently. These differences in abundances are called "fractionations," which are characterized via stable isotope analysis. Isotope abundances can vary across an entire substrate (i.e., "bulk" isotope variation), across specific compounds within a substrate (i.e., compound-specific isotope variation), or across positions within specific molecules (i.e., position-specific isotope variation). Isotope abundances can be measured in a variety of ways (e.g., isotope ratio mass spectrometry, laser spectrometry, NMR, ESI-MS). Early analyses varied in technique, but were commonly limited to measuring average isotope compositions over molecules or samples. While this allows isotope analysis of the bulk substrate, it eliminates the ability to distinguish variation between different sites of the same element within the molecule. The field of position-specific isotope biogeochemistry studies these intramolecular variations, known as "position-specific isotope" and "site-specific isotope" enrichments. It focuses on position-specific isotope fractionations in many contexts, the development of technologies to measure these fractionations, and the application of position-specific isotope enrichments to questions surrounding biogeochemistry, microbiology, enzymology, medicinal chemistry, and earth history. Position-specific isotope enrichments can retain critical information about the synthesis and source of the atoms in the molecule. Indeed, bulk isotope analysis averages site-specific isotope effects across the molecule, and so while all those values have an influence on the bulk value, signatures of specific processes may be diluted or indistinguishable. While the theory of position-specific isotope analysis has existed for decades, new technologies now allow these methods to be applied much more widely. The potential applications of this approach are widespread, such as understanding metabolism in biomolecules, environmental pollutants in air, inorganic reaction mechanisms, etc. Clumped isotope analysis, a subset of position-specific isotope analysis, has already proven useful in characterizing methane sources, paleoenvironments, and paleoaltimetry, among many other applications. More specific case studies of position-specific isotope fractionation are detailed below. Theory Stable isotopes do not decay, and the heavy and light isotope masses affect how they partition within the environment. Any deviation from a random distribution of the light and heavy isotopes within the environment is called fractionation, and consistent fractionations as a result of a particular process or reaction are called "isotope effects." Isotope Effects Isotope effects are recurring patterns in the partitioning of heavy and light isotopes across different chemical species or compounds, or between atomic sites within a molecule.
These isotope effects can come about from a near infinite number of processes, but most of them can be narrowed down into two main categories, based on the nature of the chemical reaction creating or destroying the compound of interest: (1) Kinetic isotope effects manifest in irreversible reactions, when one isotopologue is preferred because its transition state is lower in energy. The preferred isotopologue will depend on whether the transition state of the molecule during a chemical reaction is more like the reactant or the product. Normal isotope effects are defined as those which partition the lighter isotope into the products of the reaction. Inverse isotope effects are less common, as they preferentially partition the heavier isotope into the products. (2) Equilibrium isotope effects manifest in reversible reactions, when molecules can exchange freely to reach the lowest possible energy state. These variations can occur on a compound-specific level, but also on a position-specific level within a molecule. For instance, the carboxyl site of amino acids is exchangeable, and therefore its carbon isotope signature can change over time and may not represent the original carbon source of the molecule. Biological fractionation Chemical reactions in biological processes are controlled by enzymes that catalyze the conversion of substrate to product. Since enzymes can alter the transition state structure for reactions, they also change kinetic and equilibrium isotope effects. Placed in the context of a metabolism, the expression of isotope effects on biomolecules is further controlled by branch points. Different pathways of biosynthesis will use different enzymes, yielding a range of position-specific isotope enrichments. This variability allows position-specific isotope measurements to discern multiple biosynthetic pathways from the same metabolic product. Biogeochemists use position-specific isotope enrichments from amino acids, lipids, and sugars in nature to interpret the relative importance of different metabolisms. Mechanism The position-specific isotope effect of an enzymatic reaction is expressed as the ratio of rate constants for a monoisotopic substrate and a substrate substituted with one rare isotope. For example, the enzyme formate dehydrogenase catalyzes the reaction of formate and NAD+ to carbon dioxide and NADH. The hydrogen of formate is directly transferred to NAD+. This step has an isotope effect, because the rate of protium transfer from formate to NAD+ is nearly three times faster than the rate of the same reaction with a deuterium transfer. This is also an example of a primary isotope effect. A primary isotope effect is one in which the rare isotope is substituted where a bond is broken or formed. Secondary isotope effects occur on other positions in the molecule and are controlled by the molecular geometry of the transition state. These are generally considered to be negligible but do arise in certain cases, especially for hydrogen isotopes. Unlike abiotic reactions, enzymatic reactions occur through a series of steps, including substrate-enzyme binding, conversion of substrate to product, and dissociation of the enzyme-product complex. The observed isotope effect of an enzyme will be controlled by the rate limiting step in this mechanism. If the step that converts substrate to product is rate limiting, the enzyme will express its intrinsic isotope effect, that of the bond forming or breaking reaction.
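Written out explicitly, the position-specific isotope effect defined above is simply a ratio of rate constants; for the formate dehydrogenase example in the text, where protium transfer is nearly three times faster than deuterium transfer:

```latex
% Position-specific KIE as a ratio of rate constants; the numeric value is the
% formate dehydrogenase example given in the text (protium ~3x faster than deuterium).
\[
  \mathrm{KIE} = \frac{k_{\text{light}}}{k_{\text{heavy}}},
  \qquad
  \mathrm{KIE}_{\mathrm{H/D}} = \frac{k_{\mathrm{H}}}{k_{\mathrm{D}}} \approx 3
  \quad \text{(formate dehydrogenase)} .
\]
% KIE > 1 is a normal isotope effect; KIE < 1 is an inverse isotope effect.
```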
Abiological fractionation As in biotic molecules, position-specific isotope enrichments in abiotic molecules can reflect the source of chemical precursors and the synthesis pathway. The energy for abiotic reactions can come from many different sources, which will affect fractionation. For instance, metal catalysts can speed up abiotic reactions. Reactions can be slowed down or sped up by different temperature and pressure conditions, which will affect the equilibrium constant or activation energy of reversible and irreversible reactions, respectively. For example, carbon in the interstellar medium and solar nebula partitions into distinct states based on thermodynamic favorability. Measuring site-specific isotope enrichments of carbon from organic molecules extracted from carbonaceous chondrites can elucidate where each carbon atom comes from, and how organic molecules can be synthesized abiotically. More broadly, these isotope enrichments can provide information about physical processes in the region where the molecular precursors were formed, and where the molecule formed in the solar system (i.e., nucleosynthetic heterogeneity, mass independent fractionation, self-shielding, etc.). Another example of distinct site-specific fractionations in abiotic molecules is Fischer-Tropsch-type synthesis, which is thought to produce abiogenic hydrocarbon chains. Through this reaction mechanism, site-specific carbon enrichments would become progressively depleted as carbon chain length increases, and would be distinct from the site-specific enrichments of hydrocarbons of biological origin. Analysis Substrates need to be prepared and analyzed in a specific way to elucidate site-specific isotope enrichments. This requires clean separation of the compound of interest from the original sample, which can require a variety of different preparatory chemistries. Once isolated, position-specific isotope enrichments can be analyzed with a variety of instruments, which all have different advantages and provide varying degrees of precision. Enzymatic Reaction To measure the kinetic isotope effects of enzymatic reactions, biochemists perform in vitro experiments with enzymes and substrates. The goal of these experiments is to measure the difference in the enzymatic reaction rates for the monoisotopic substrate and the substrate with one rare isotope. There are two popularly used techniques in these experiments: internal competition studies and direct comparison experiments. Both measure position-specific isotope effects. Direct Comparison Direct comparison experiments are primarily used for measuring hydrogen/deuterium isotope effects in enzymatic reactions. The monoisotopic substrate and a deuterated form of the substrate are separately exposed to the enzyme of interest over a range of concentrations. The Michaelis-Menten kinetic parameters for both substrates are determined, and the position-specific isotope effect at the site of deuteration is expressed as the ratio of the monoisotopic rate constant over the rare isotope rate constant. Internal Competition For isotopes of elements like carbon and sulfur, the difference in kinetic parameters is too small, and the measurement precision too low, to measure an isotope effect by directly comparing the rates of the monoisotopic and rare isotope substrates. Instead, the two are mixed together using the natural abundance of stable isotopes in molecules.
The enzyme is exposed to both isotopes simultaneously, and its preference for the light isotope is analyzed by collecting the product of the reaction and measuring its isotope composition. For example, if an enzyme removes a carbon from a molecule by turning it into carbon dioxide, that carbon dioxide product can be collected and measured on an Isotope Ratio Mass Spectrometer for its carbon isotope composition. If the carbon dioxide has less 13C than the substrate mixture, the enzyme has preferentially reacted with the substrate that has a 12C at the site that is decarboxylated. In this way, internal competition experiments are also position-specific. If only the CO2 is measured, then only the isotope effect on the site of decarboxylation is recorded. Chemical degradation Before the advent of technologies that analyze whole molecules for their intramolecular isotopic structure, molecules were sequentially degraded and converted to CO2 and measured on an Isotope Ratio Mass Spectrometer, revealing position-specific 13C enrichments. Ninhydrin Reaction In 1961, Abelson and Hoering developed a technique for removing the carboxylic acid of amino acids using the ninhydrin reaction. This reaction converts the carboxylic acid to a molecule of CO2, which is measured via an Isotope Ratio Mass Spectrometer. Ozonolysis Reaction Lipids are of particular interest to stable isotope geochemists because they are preserved in rocks for millions of years. Monson & Hayes used ozonolysis to characterize the position-specific isotope abundances of unsaturated fatty acids, turning different carbon positions into carbon dioxide. Using this technique, they directly measured an isotopic pattern in fatty acids that had been predicted for years. Preparatory Chemistry Derivatization In some cases, additional functional groups will need to be added to molecules to facilitate the other separation and analysis methods. Derivatization can change the properties of an analyte; for instance, it can make a polar and non-volatile compound non-polar and more volatile, which would be necessary for analysis in certain types of chromatography. It is important to note, however, that derivatization is not ideal for site-specific analyses, as it adds additional elements that must be accounted for in analyses. Chromatography Chromatography facilitates separation of distinct molecules within a mixture based on their respective chemical properties, and how those properties interact with the substrate coating the chromatographic column. This separation can happen "on-line," during the measurement itself, or prior to measurements to isolate a pure compound. Gas and liquid chromatography have distinct advantages, based on the molecules of interest. For example, aqueously soluble molecules are more easily separated with liquid chromatography, while volatile, nonpolar molecules like propane or ethane are separated with gas chromatography. Instrumental Analysis A variety of different instruments can be used to perform position-specific isotope analysis, and each has distinct advantages and drawbacks. Many of them require comparing the sample of interest to a standard of known isotopic composition; fractionation within the instrument and variation of instrumental conditions over time can affect the accuracy of individual measurements if not standardized.
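To make the comparison against a standard concrete, the sketch below shows the delta notation commonly used to report IRMS measurements, here applied to a hypothetical internal competition experiment. The VPDB 13C/12C reference ratio of about 0.011180 is a commonly cited value; the sample ratios are invented for illustration, and this is not the software of any particular instrument.

```python
# Sketch: reporting carbon isotope compositions in delta notation
# relative to a standard, as an IRMS measurement would.

R_VPDB = 0.011180  # commonly cited 13C/12C ratio of the VPDB standard

def delta13C(r_sample: float, r_standard: float = R_VPDB) -> float:
    """delta13C in per mille relative to the chosen standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_substrate = 0.011020    # hypothetical 13C/12C of the substrate mixture
R_product_co2 = 0.010965  # hypothetical 13C/12C of the released CO2

d_substrate = delta13C(R_substrate)
d_product = delta13C(R_product_co2)

# Product CO2 depleted in 13C relative to the substrate indicates that
# the enzyme preferred 12C at the decarboxylated position.
print(f"delta13C(substrate)   = {d_substrate:+.1f} per mille")
print(f"delta13C(product CO2) = {d_product:+.1f} per mille")
print(f"apparent site-specific fractionation = {d_product - d_substrate:+.1f} per mille")
```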
GC-IRMS and LC-MS Initial measurements of position-specific isotope enrichments were made using isotope ratio mass spectrometry, in which sites on a molecule were first degraded to CO2; the CO2 was captured and purified, and then measured for its isotope composition on an Isotope Ratio Mass Spectrometer (IRMS). Py-GC-MS was also used in these experiments to degrade molecules even further and characterize their intramolecular isotopic distributions. Both GC-MS and LC-MS are capable of characterizing position-specific isotope enrichments in isotopically labelled molecules. In these molecules, 13C is so abundant that it can be seen on a mass spectrometer with low sensitivity. The resolution of these instruments can distinguish two molecules with a 1 Dalton difference in their molecular masses; however, this difference could arise from the addition of any of several rare isotopes (17O, 13C, 2H, etc.). For this reason, mass spectrometers using quadrupoles or time-of-flight detection techniques cannot be used for measuring position-specific enrichments at natural abundances. Spectroscopy Laser spectroscopy can be used to measure isotope enrichments of gases in the environment. Laser spectroscopy takes advantage of the different vibrational frequencies of isotopologues, which cause them to absorb different wavelengths of light. Transmission of light through the gaseous sample at a controlled temperature can be quantitatively converted into an isotopic composition. For N2O, these measurements can determine the position-specific isotope enrichments of 15N. These measurements are fast and can reach relatively good precision (1-10 per mille). The technique is used to characterize environmental gas fluxes and the effects on these fluxes. This method is limited to the measurement and characterization of gases. Nuclear magnetic resonance (NMR) Nuclear magnetic resonance observes small differences in molecular responses to oscillating magnetic fields. It is able to characterize atoms with active nuclides that have a non-zero nuclear spin (e.g., 13C, 1H, 17O, 35Cl, 15N, 37Cl), which makes it particularly useful for identifying certain isotopes. In typical proton or 13C NMR, the chemical shifts of protium (1H) and carbon-13 atoms within a molecule are measured, respectively, as they are excited by a magnetic field and then relax with a diagnostic resonance frequency. With site-specific natural isotope fractionation (SNIF) NMR, the relaxation resonances of the deuterium and 13C atoms are measured instead. NMR does not have the sensitivity to detect isotopologues with multiple rare isotopes. The only peaks that appear in a SNIF-NMR spectrum are those of the isotopologues with a single rare isotope. Since the instrument is only measuring the resonances of the rare isotopes, each isotopologue will have one peak. For example, a molecule with six chemically unique carbon atoms will have six peaks in a 13C SNIF-NMR spectrum. The site of 13C substitution can be determined by the chemical shift of each of the peaks. As a result, NMR is able to identify site-specific isotope enrichments within molecules. Orbitrap Mass Spectrometry The Orbitrap is a high-resolution Fourier transform mass spectrometer that has recently been adapted to allow for site-specific analyses. Molecules introduced into the Orbitrap are fragmented, accelerated, and analyzed.
Because the Orbitrap characterizes molecular masses by measuring oscillations at radio frequencies, it is able to reach very high levels of precision, depending on the measurement method (e.g., down to 0.1 per mille for long integration times). It is significantly faster than site-specific isotope measurements that can be performed using NMR, and can measure molecules with different rare isotopes but the same nominal mass at natural abundances (unlike GC-MS and LC-MS). It is also widely generalizable to molecules that can be introduced via gas or liquid solvent. The resolution of the Orbitrap is such that nominal isobars (e.g., 2H versus 15N versus 13C enrichments) can be distinguished from one another, and so molecules do not need to be converted into a homogeneous substrate to facilitate isotope analysis. Like other isotope measurements, measurements of site-specific enrichments on the Orbitrap should be compared to a standard of known composition. Case studies To illustrate the utility of position-specific isotope enrichments, several case studies are described below in which scientists used position-specific isotope analyses to answer important questions about biochemistry, pollution, and climate. Phosphoenolpyruvate carboxylase Phosphoenolpyruvate carboxylase (PEPC) is an enzyme that combines bicarbonate and phosphoenolpyruvate (PEP) to form the four-carbon acid oxaloacetate. It is an important enzyme in C4 photosynthesis and anaplerotic pathways. It is also responsible for the position-specific enrichment of oxaloacetate, due to the equilibrium isotope effect of converting the linear molecule CO2 into the trigonal planar molecule HCO3-, which partitions 13C into bicarbonate. Inside the PEPC enzyme, H12CO3- reacts 1.0022 times faster than H13CO3-, so PEPC has a 0.22% kinetic isotope effect. This is not enough to compensate for the 13C enrichment in bicarbonate. Thus, oxaloacetate is left with a 13C-enriched carbon at the C4 position. Additionally, the C1 site experiences a small inverse secondary isotope effect due to its bonding environment in the transition state, leaving the C1 site of oxaloacetate enriched in 13C as well. In this way, PEPC simultaneously partitions 13C into both the C4 and C1 sites of oxaloacetate, an example of multiple position-specific isotope effects. Amino acids The first paper on site-specific enrichment used the ninhydrin reaction to cleave the carboxyl site off alpha-amino acids in photosynthetic organisms. The authors demonstrated an enriched carboxyl site relative to the bulk δ13C of the molecules, which they attribute to uptake of heavier CO2 through the Calvin cycle. A recent study applied similar theory to understand enrichments in methionine, which the authors suggested would be powerful in origin and synthesis studies. Carbohydrates In 2012, a team of scientists used NMR spectroscopy to measure all of the position-specific carbon isotope abundances of glucose and other sugars. It was shown that the isotope abundances are heterogeneous. Different portions of the sugar molecules are used for biosynthesis based on the metabolic pathway an organism uses. Therefore, any interpretations of position-specific isotopes of molecules downstream of glucose have to consider this intramolecular heterogeneity. Glucose is the monomer of cellulose, the polymer that makes plants and trees rigid. After the advent of position-specific analyses of glucose, biogeochemists from Sweden looked at the concentric tree rings of a Pinus nigra that recorded yearly growth between 1961 and 1995.
They digested the cellulose down to its glucose units and used NMR spectroscopy to analyze its intramolecular isotopic patterns. They found correlations with position-specific isotope enrichments that were not apparent with whole-molecule carbon isotope analysis of glucose. By measuring position-specific enrichments in the 6-carbon glucose molecule, they gathered six times more information from the same sample. Fatty acids The biosynthesis of fatty acids begins with acetyl-CoA precursors that are brought together to make long straight-chain lipids. Acetyl-CoA is produced in aerobic organisms by pyruvate dehydrogenase, an enzyme that has been shown to express a large, 2.3% isotope effect on the C2 site of pyruvate and a small fractionation on the C3 site. These become the odd and even carbon positions of fatty acids, respectively, and in theory this would result in a pattern of 13C depletions and enrichments at odd and even positions, respectively. In 1982, Monson and Hayes developed technology for measuring the position-specific carbon isotope abundances of fatty acids. Their experiments on Escherichia coli revealed the predicted relative 13C depletions at odd-numbered carbon sites. However, this pattern was not found in Saccharomyces cerevisiae that were fed glucose. Instead, its fatty acids were 13C enriched at the odd positions. This has been interpreted as either a product of isotope effects during fatty acid degradation or the intramolecular isotopic heterogeneity of glucose that ultimately is reflected in the position-specific patterns of fatty acids. Nitrous Oxide Site-specific isotope enrichments of N2O are measured in the environment to help disentangle microbial sources and sinks. Different isotopologues of N2O absorb light at different wavelengths. Laser spectroscopy exploits these differences, scanning across wavelengths to measure the abundance of 14N-15N-16O vs. 15N-14N-16O, a distinction that is impossible on other instruments. These measurements have achieved very high precision, down to 0.2 per mille. Environmental pollutants Position-specific isotopes can be used to trace environmental pollutants through local and global environments. This is especially useful because heavy isotopes are often used to synthesize chemicals and are then incorporated into the natural environment through biodegradation. Thus, tracing position-specific isotopes in the environment can help trace the movement of these pollutants and chemical products. Case study conclusions These case studies represent some potential applications for position-specific isotope analysis, but certainly not all. The opportunities for samples to measure and processes to characterize are virtually unlimited, and new methodological developments will help make these measurements possible going forward. References Analytical chemistry Isotopes
Position-specific isotope analysis
[ "Physics", "Chemistry" ]
4,667
[ "nan", "Isotopes", "Nuclear physics" ]
53,299,610
https://en.wikipedia.org/wiki/Tommy%20Gate
Tommy Gate is an American brand of hydraulic liftgate, or tail lift, manufactured by Woodbine Manufacturing Company. The company was formed in 1965 by Delbert "Bus" Brown, and its production facility is located in Woodbine, Iowa. History Prior to founding Woodbine Manufacturing Company, Delbert Brown manufactured farming equipment under the name of Brown Manufacturing Company. After Brown invented what was then one of the first trenching machines, Brown Manufacturing Company was sold to Omaha Steel Works. Three years later, Brown founded Woodbine Manufacturing Company and launched the Tommy Gate brand. Expansion The Woodbine manufacturing facility was initially built in 1965 to occupy 70,000 square feet of production space. It expanded in 1980 to 90,000 square feet and once again in 2000, when it grew to 140,000 square feet. The most recent expansion, completed in 2011, grew the plant to an overall 200,000 square feet (including 40,000 square feet of warehouse space). Products Tommy Gate manufactures a variety of hydraulic liftgates for trucks and other vehicles. Their main product lines include: Parallel-arm: Versatile and capable of handling heavy loads. Rail-gate: Ideal for low-clearance items. Tuckunder: Compact design for smaller vehicles. They also offer specialized liftgates like dump-through, level-ride, and side-loader models. Tommy Gate focuses on durability, reliability, and ease of use, with options for customization like remote controls and platform extensions. References External links Companies based in Iowa Manufacturing companies based in Iowa Logistics industry in the United States Mechanical engineering Hydraulics
Tommy Gate
[ "Physics", "Chemistry", "Engineering" ]
319
[ "Applied and interdisciplinary physics", "Physical systems", "Hydraulics", "Mechanical engineering", "Fluid dynamics" ]
53,302,354
https://en.wikipedia.org/wiki/Polymerization-induced%20phase%20separation
Polymerization-induced phase separation (PIPS) is the occurrence of phase separation in a multicomponent mixture induced by the polymerization of one or more components. The increase in molecular weight of the reactive component renders one or more components mutually immiscible in one another, resulting in spontaneous phase segregation. Types Polymerization-induced phase separation can be initiated either through thermally induced polymerization or photopolymerization. The process generally occurs through spinodal decomposition, commonly resulting in the formation of co-continuous phases. Control over morphology The morphology of the final phase-separated structures is generally random owing to the stochastic nature of the onset and process of phase separation. Several approaches have been investigated to control morphology. Tran-Cong-Miyata and co-workers used periodic irradiation in photoreactive polymer blends to control morphology, specifically the width of the resultant spinodal modes in the phase-separated morphology. Li and co-workers employed holography, a process of holographic polymerization, in order to direct the phase-separated structure to have the same patterns as the holographic field. Recently, Hosein and co-workers demonstrated that nonlinear optical pattern formations that occur in photopolymer systems may be used to direct the organization of blends to have the same morphology as the light pattern. Applications The process is commonly used in control of the morphology of polymer blends, for applications in thermoelectrics, solid-state lighting, polymer electrolytes, composites, membrane formation, and surface pattern formations. References Polymer chemistry
Polymerization-induced phase separation
[ "Chemistry", "Materials_science", "Engineering" ]
331
[ "Materials science", "Polymer chemistry" ]
73,196,215
https://en.wikipedia.org/wiki/Keratin-associated%20protein
Keratin-associated proteins (KRTAPs, KAPs) and keratins are the major components of hair and nails. The content of KRTAPs in hair varies considerably between species, ranging from less than 3% in human hair to 30–40% in echidna quill. Both keratin and KRTAPs are extensively cross-linked in hair through disulfide bonds via numerous cysteine residues in keratins. Given the economic importance of wool, the KRTAP family has been studied intensively in sheep. Genetics The KRTAP family of genes is unique to mammals. The family has evolved rapidly, with about 188 genes in the mouse genome, 175 in the sloth, 122 in humans, but only 35 in dolphins (where only 9 genes are functional). In humans, there are 101 intact KRTAP genes and 21 (non-functional) pseudogenes. There are two major groups of KRTAP genes: high/ultrahigh cysteine (HS-KRTAP) and high glycine-tyrosine (HGT-KRTAP), which are thought to have independently originated based on their distinct amino acid compositions. Human KRTAP loci The KRTAP locus on human chromosome 17 includes the following 40 genes (in this order on the chromosome; lower-case "p" indicates pseudogenes): KRTAP3-3, KRTAP3-2, KRTAP3p1, KRTAP3-1, KRTAP1-5, KRTAP1-4, KRTAP1-3, KRTAP1-1, KRTAP2-1, KRTAP2-2, KRTAP2-3, KRTAP2-4, KRTAP4p2, KRTAP4-7, KRTAP4-8, KRTAP4p1, KRTAP4-9, KRTAP4-11, KRTAP4-12, KRTAP4-6, KRTAP4-5, KRTAP4-4, KRTAP4-3, KRTAP4-2, KRTAP4-1, KRTAP4p3, KRTAP9-1, KRTAP9-9, KRTAP9-2, KRTAP9-3, KRTAP9-8, KRTAP9-4, KRTAP9-5, KRTAP9-6, KRTAP9-12, KRTAP9-7, KRTAP9-10, KRTAP29-1, KRTAP16-1, and KRTAP17-1. Similarly, the KRTAP locus on human chromosome 21 contains the following genes: KRTAP24-1, KRTAP25-1, KRTAP26-1, KRTAP27-1, KRTAP13-6, KRTAP13p2, KRTAP13-2, KRTAP13-1, KRTAP13-3, KRTAP13-4, KRTAP13p1, KRTAP13-5, KRTAP19-1, KRTAP19-2, KRTAP19-3, KRTAP19-4, KRTAP19-5, KRTAP19p1, KRTAP19p2, KRTAP19p3, KRTAP19-6, KRTAP19p5, KRTAP19-7, KRTAP6-3, KRTAP6-2, KRTAP19-9, KRTAP6-1, KRTAP20-1, KRTAP20-2, KRTAP19p4, KRTAP21p1, KRTAP21-2, KRTAP21-1, KRTAP8p1, KRTAP8p2, KRTAP8-1, KRTAP7p1, KRTAP11-1, KRTAP19-8, KRTAP10-1, KRTAP10-2, KRTAP10-3, KRTAP10-4, KRTAP10-5, KRTAP10-6, KRTAP10-7, KRTAP10-8, KRTAP10-9, KRTAP10-10, KRTAP10-11, KRTAP12-4, KRTAP12-3, KRTAP12-2, KRTAP12-1, KRTAP12p1, KRTAP10-12, KRTAP10p1. The other KRTAP genes form similar, but smaller, clusters on chromosomes 2 and 11. It has been proposed to change the protein names from KRTAP to KAP, with the numbering scheme remaining the same; the gene names, however, would remain the same (KRTAPx-x etc.). See also Human chromosome 17 Keratin Keratin-associated protein 5-6 References External links KRTAP proteins in Uniprot The KRTAP locus on human chromosome 17 (NCBI genome viewer) Hair proteins
Keratin-associated protein
[ "Chemistry", "Biology" ]
1,089
[ "Biomolecules by chemical classification", "Organ systems", "Molecular biology", "Proteins", "Hair" ]
73,196,457
https://en.wikipedia.org/wiki/DESY%20%28particle%20accelerator%29
The particle accelerator DESY (acronym for Deutsches Elektronen-Synchrotron or German Electron Synchrotron) was the first particle accelerator of the DESY research centre in Hamburg and the one that gave the research centre its name. The DESY synchrotron was used for research in particle physics from 1964 to 1978 and served as a pre-accelerator for other accelerator facilities at DESY. Construction of the synchrotron started in 1960. With a circumference of 300 m, it was the world's largest facility of its kind and accelerated electrons to 7.4 GeV. The first electrons circulated in the accelerator on 25 February 1964, and research activities into elementary particles at the DESY synchrotron started in May 1964. In the experiments carried out at DESY, the electron beams were directed at fixed targets. Research at the DESY particle accelerator DESY first attracted international attention in 1966 with its confirmation of the theory of quantum electrodynamics. A world first, the production of proton–antiproton pairs using high-energy radiation, was also achieved at the DESY accelerator in 1966. Additionally, protons were probed very accurately, showing that they do not have a solid nucleus. In the following decade, DESY established itself as a skills centre for developing and operating particle accelerator facilities. Before 1964, no continuous soft-X-ray radiation sources existed. In that year, research began using the synchrotron radiation that occurs as a side effect of electron acceleration in the DESY ring. Synchrotron radiation was first used for absorption spectroscopy at the synchrotron in 1967. The European Molecular Biology Laboratory (EMBL) made use of this new technology's potential and in 1972 established a permanent branch at DESY with the aim of analyzing the structure of biological molecules through synchrotron radiation. Pre-accelerator and test beam facility The particle physics experiments at the original DESY synchrotron ran until 1978. After that, it was rebuilt and upgraded several times, serving as a pre-accelerator for DESY's larger accelerator facilities starting in 1973 for the storage ring DORIS, and from 1978 mainly for PETRA. After a fundamental modification to become the proton synchrotron DESY III, the facility went back into operation in 1987 together with the newly built electron synchrotron DESY II as a pre-accelerator for HERA. With the shutdown of HERA in 2007, the proton synchrotron DESY III was also decommissioned after 43 years of operation. Today, the DESY II electron synchrotron still serves as a pre-accelerator for PETRA III and as a test beam facility with three beamlines used by research groups worldwide to test detector components. References External links DESY Particle accelerators Particle physics facilities Synchrotron radiation facilities Buildings and structures in Altona, Hamburg
DESY (particle accelerator)
[ "Materials_science" ]
584
[ "Materials testing", "Synchrotron radiation facilities" ]
73,196,880
https://en.wikipedia.org/wiki/David%20R.%20Shonnard
David R. Shonnard is an American engineer. He holds the Richard and Bonnie Robbins Chair in Sustainable Use of Materials and is a former director of the Michigan Technological University Sustainable Futures Institute. He has expertise in systems analysis for sustainability, environmental life cycle assessments of renewable energy technologies, and chemical recycling of waste plastics for a circular economy. Biography Shonnard earned a Ph.D. from the University of California at Davis and had appointments at Lawrence Livermore National Laboratory and the University of California at Berkeley prior to joining Michigan Technological University in 1993. He has served on advisory committees for the DOE, USDA, and the REMADE Institute in the areas of biomass research and development and the materials circular economy. He is co-author of two green engineering and sustainable engineering textbooks and has published over 200 works appearing in peer-reviewed research journals, technical reports, and conference proceedings. Research interests Shonnard has broad research interests that include diffusion and adsorption of pollutants in soils, atmospheric transport of hazardous compounds, environmental risk assessment, in-situ subsurface remediation, environmentally-conscious design of chemical processes, advanced biofuels reaction engineering, and applications of pyrolysis to waste plastics recycling. Sponsors of his research program include federal agencies (NSF, DOE, DARPA, USDA, FAA), state programs (MI MTRAC), and numerous industrial firms. He holds patents in enzymatic and chemical conversion technologies and is the founder of a company, SuPyRec, to commercialize chemical recycling of waste plastics. Select publications References Living people Year of birth missing (living people) Place of birth missing (living people) Chemical engineering academics University of California, Davis alumni University of Nevada, Reno alumni Michigan Technological University faculty American company founders
David R. Shonnard
[ "Chemistry" ]
351
[ "Chemical engineering academics", "Chemical engineers" ]
73,198,875
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Kaplansky%20theorem
The Erdős–Kaplansky theorem is a theorem from functional analysis. The theorem makes a fundamental statement about the dimension of the dual spaces of infinite-dimensional vector spaces; in particular, it shows that the algebraic dual space is not isomorphic to the vector space itself. A more general formulation allows one to compute the exact dimension of any function space. The theorem is named after Paul Erdős and Irving Kaplansky. Statement Let $V$ be an infinite-dimensional vector space over a field $K$ and let $B$ be some basis of it. Then for the dual space $V^*$, $\dim V^* = \operatorname{card}(K^B) = |K|^{|B|}$. By Cantor's theorem, this cardinal is strictly larger than the dimension of $V$. More generally, if $X$ is an arbitrary infinite set, the dimension of the space $K^X$ of all functions $X \to K$ is given by: $\dim K^X = |K|^{|X|}$. For example, for $K = \mathbb{R}$ and $X = \mathbb{N}$, this gives $\dim \mathbb{R}^{\mathbb{N}} = |\mathbb{R}|^{\aleph_0} = 2^{\aleph_0}$, the cardinality of the continuum. When $X$ is finite, it's a standard result that $\dim K^X = |X|$. This gives us a full characterization of the dimension of this space. References Functional analysis Paul Erdős
Erdős–Kaplansky theorem
[ "Mathematics" ]
179
[ "Theorems in mathematical analysis", "Theorems in functional analysis" ]
73,200,138
https://en.wikipedia.org/wiki/EPIC-Seq
EPIC-seq (short for Epigenetic Expression Inference by Cell-free DNA Sequencing) is a high-throughput method that specifically targets gene promoters using cell-free DNA (cfDNA) sequencing. By employing non-invasive techniques such as blood sampling, it infers the expression levels of targeted genes. It consists of both wet and dry lab stages. EPIC-seq involves deep sequencing of transcription start sites (TSS). The underlying hypothesis is that deep sequencing of these TSSs allows fragmentomic features, that is, chromatin fragmentation patterns or properties, to support higher-resolution analyses than alternative methods. The method has been shown effective for gene-level expression inference, molecular subtyping of diffuse large B cell lymphoma (DLBCL), histological classification of non-small-cell lung cancer (NSCLC), evaluation of the results of immunotherapy agents, and assessment of the genes' prognostic importance. EPIC-seq uses machine learning to deduce the RNA expression of the genes and proposes two new metrics: promoter fragmentation entropy (PFE), an adjusted Shannon Index for entropy, and the nucleosome-depleted region (NDR) score, the depth of sequencing in NDR regions. PFE showed superior performance compared to earlier metrics for fragmentomic features. Additionally, EPIC-seq has been mentioned as a possible solution for detecting tissue damage and esophageal cancer using methylation profiles of cfDNAs, profiling of donor liver molecular networks, and inflammatory bowel disease (IBD) detection. Background Historical Usage of cfDNA and fragmentomic features cfDNA, chromatin-fragmented DNA molecules released into blood plasma upon cell death, has been used for detecting transplant tissue rejection, prenatal fetal aneuploidy testing, tumour profiling, and early cancer detection in previous research. Nevertheless, prevalent liquid biopsy methods for cfDNA profiling depend on detecting germline or somatic genetic variations, which may be absent even in patients bearing a high disease burden and in cancers with high tumour mutation rates. Historically, the use of fragmentomic features of cfDNA samples was shown to be another way to approach the problems mentioned. These features demonstrated the capability to inform the tissue-of-origin classification of cfDNA molecules, which can help segregate tumour-related somatic mutations. However, current methods that use fragmentomic features, such as shallow whole genome sequencing (WGS) of cfDNA, do not fully cover the effects of all tissues and provide too low a sequencing depth and breadth to infer low-level (for example, gene-level) properties. Hence, these methods require a high tumour burden from the patients. Circulating Tumor DNA profiling Circulating tumour DNA (ctDNA) molecules are tumour-derived cell-free DNA (cfDNA) circulating in the bloodstream and are not associated with cells. CtDNA primarily arises from chromatin fragmentation accompanying tumour cell death and can be extracted by liquid biopsy. CtDNA analysis has been implemented for noninvasive identification of tumour genetic characteristics and early recognition of various cancer forms. The majority of current ctDNA analysis depends on genetic differences in germline or somatic cells to diagnose diseases and detect tumour cells at an early stage. While looking at genetic variations of ctDNA can be beneficial, not all ctDNAs contain genetic mutations.
EPIC-seq utilized epigenetic features of ctDNA to inform the tissue-of-origin of these unmutated molecules, which is helpful for cancer classification. Fragmentomic Features for Tissue-of-origin classification The majority of circulating cfDNA molecules are fragments linked to nucleosomes, so they represent unique chromatin arrangements found in the nuclear genomes of the cells they originate from. In particular, open chromatin areas are more accessible to endonuclease cleavage, whereas genomic regions linked to nucleosomal complexes are often shielded from endonuclease activity. Several studies have identified specific chromatin fragmentomic characteristics that aid in informing tissue origins through cfDNA profiling. These features include: Reduced sequencing coverage depth Disruption of nucleosome positioning near transcription start sites (TSSs) Length of cfDNA fragments Principles of EPIC-seq Currently, the majority of circulating tumour DNA (ctDNA) fragmentomic techniques lack the ability to achieve gene-level resolution and are effective mainly in inferring expression at elevated ctDNA levels. Consequently, they are primarily applicable to patients with notably advanced tumour burdens typically seen in late-stage cancer. To address this limitation, EPIC-seq employs hybrid capture-based targeted deep sequencing of regions flanking transcription start sites (TSS) in cfDNA. This approach allows for the acquisition of ctDNA fragmentation features crucial for predicting gene expression, such as Promoter Fragmentation Entropy (PFE) and the Nucleosome Depleted Region (NDR) score. These key fragmentomic features can capture gene-level associations with expression levels throughout the genome, enabling the construction of a predictive model for transcriptional output. This allows for high-resolution monitoring of cfDNA fragmentation and gene-level analysis. Promoter Fragmentation entropy EPIC-seq hypothesizes that cfDNA fragments originating from active promoters, which are less shielded by nucleosomes and thus more susceptible to endonuclease cleavage, will display more erratic cleavage patterns compared to fragments from inactive promoters, which are better protected by nucleosomes. PFE is a variation of the Shannon Index, which is a quantitative measure for estimating diversity. In the context of EPIC-seq, PFE calculates the diversity of cfDNA fragment lengths where both ends of the fragment are situated within the 2 kb flanking region of each gene's TSS. The higher the PFE of a gene's TSS, the more likely the gene is highly expressed. Nucleosome Depleted region Actively expressed genes have open chromatin at their TSS region, so they are less shielded by nucleosomes and, therefore, more susceptible to endonuclease cleavage. Consequently, the depth of cfDNA originating from the TSS of active genes tends to be shallower compared to that of inactive genes. NDR quantifies the normalized depth within each 2-kilobase window surrounding each TSS. The lower the NDR of a gene's TSS site, the more likely the gene is highly expressed. Methods Wet Lab workflow 1. Collection and Processing of plasma Peripheral blood samples were obtained and processed to isolate plasma following standard protocols. Upon centrifugation, plasma specimens were preserved at −80 °C, awaiting the extraction of ctDNA. The extraction of cfDNA from plasma volumes ranging from 2 to 16 ml was carried out using established laboratory procedures.
Following isolation, the concentration of cfDNA was determined using fluorescence-based quantification methods. 2. Sequencing Library preparation A typical amount of 32 ng of cfDNA was utilized for library preparation. DNA input was adjusted to mitigate the effects of high molecular-weight DNA contamination. The library preparation process encompassed end repair, A-tailing, and adapter ligation, which also incorporated molecular barcodes into each read. These procedures were conducted according to standard ligation-based library preparation protocols, with overnight ligation performed at 4 °C. Following this, shotgun cfDNA libraries underwent hybrid capture targeting specific genomic regions, as detailed below. 3. Custom Capture Panels sequencing Custom capture panels tailored to specific cancer types or personalized selectors were utilized in EPIC-seq. The capture panels targeted transcription start site regions of genes of interest. Enrichment for EPIC-seq was performed following established laboratory protocols. Subsequently, hybridization captures were pooled, and the pooled samples underwent short-read sequencing. Dry Lab workflow Since EPIC-seq involves computational processing after the wet-lab portion, the following steps are summarized from the developers' description in the original paper. 4. Demultiplexing and Error correction If multiplexed paired-end sequencing is used, demultiplexing needs to be done to sort the reads of different samples into different files. After demultiplexing, error correction and read-pair elimination based on unique identifier and barcode matching of pairs can be done. The developers adapted the demultiplexing and error correction steps from the CAPP-seq demultiplexing pipeline. 5. Outer Sequence Removal and trimming To preserve shorter fragment reads, barcode removal and adapter trimming need to be done. After read preprocessing, the reads should be aligned to the human reference genome. The original EPIC-seq used hg19, but a more recent human reference genome can be used for better results. One should be careful about the aligner's options, since some aligners can interfere with the inclusion of shorter reads paired with longer ones. For deduplication, the attached customized molecular barcodes should be exploited. These barcodes include endogenous and exogenous unique molecular identifiers (UMIs) and are handy for distinguishing Polymerase Chain Reaction (PCR) duplicates from genuine duplicates, and hence for PCR duplicate cleansing. This portion is especially important for oncologic applications, since low-abundance mutations can be suppressed by PCR duplicates. 6. Read Normalization and quality control If the data for different samples are going to be contrasted with each other, one can downsample the reads to achieve comparability. The sequencing coverage depth reported for reasonable analysis results is greater than 500-fold; thus, any sample whose mean sequencing depth does not exceed this threshold can be dropped for more accurate outcomes. Also, EPIC-seq uses an estimated expected cfDNA fragment length density of 140–185 bp, based on the chromatosomal length. Samples that have an outlier fragment length density can be dropped for higher correlation results. As the last quality control step, mapping quality should be considered.
A looser threshold can be applied to EPIC-seq reads, compared to WGS, because the TSS selection criteria imposed during the design phase make the reads more unique for EPIC-seq. Fragmentomic Feature Analysis 7. Shannon's entropy For the measurement of the diversity of fragmentomic features, the PFE metric, derived from Shannon's Index of entropy, was developed. A default of 201 bins covering lengths 100 to 300 bp is used for density estimation by the maximum likelihood method. The probability of having a fragment with size $i$, ($p_i$), is computed by dividing the number of fragments with size $i$ by the total number of fragments. Shannon's entropy is calculated with the formula: $H = -\sum_i p_i \log p_i$. 8. Dirichlet-multinomial model Next, to guard against different sequencing depths from different runs and other factors that can distort the fragment length distribution, Bayesian normalization via the Dirichlet-multinomial model should be done. For every sample, a multinomial maximum-likelihood fragment length distribution is generated based on the fragment lengths observed in that sample. Two intervals of 250 base pairs are used, located between the −1000th and −750th base pairs and between the +750th and +1000th base pairs relative to the centre of the TSS. This is done to prevent gene expression from impacting the generated distribution, as the selected intervals are relatively far away from the TSS. Then, the fragment length densities from that distribution are sampled for each of the 201 fragment sizes and used as parameters for generating a Dirichlet distribution. The initial parameter for the Dirichlet distribution is set to 20. From the obtained Dirichlet distribution, 2000 fragments are sampled, and Shannon's entropy is calculated for those. The Shannon entropies are subsequently compared with the Shannon entropy values of five randomly selected background sets. 9. PFE calculation PFE is calculated as the probability of the gene-specific entropy being higher than $\eta$ times each of the background-set entropies individually, where the scaling variable (denoted $\eta$ here) is sampled from the Gamma distribution with shape 1 and rate 0.5. Also, as the last step, the expected value of the sum of the gene-specific entropy probabilities over the backgrounds is reported as PFE. That probability is based on the Dirichlet distribution generated in the previous step. 10. NDR calculation NDR is the normalized measure of sequencing depth, downsampled to 2000-fold by default, within the 2000-base-pair windows defined during the read preprocessing and quality control steps. 11. Machine Learning for Expression prediction With deep WGS data of cfDNA from a carcinoma-of-unknown-primary patient with a very low quantified ctDNA concentration, the developers trained a machine learning model using bootstrapping. The results of RNA-sequencing on PBMC runs for five different individuals were recorded, and the average of three of these individuals' expression levels was used as a reference for gene expression. The genes are clustered into 10 clusters based on reference gene expression to increase the resolution at the core promoters. Then, genes used as a background value for the PFE calculation are removed. Next, all the fragments in extended TSS regions (windows of 2000 base pairs centred on the TSS) are pooled. The PFE and NDR scores are calculated for the pooled fragments, and these scores are further normalized based on their 95th percentile; a simplified sketch of these computations is given below.
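The following Python sketch is a rough illustration of steps 7-10 only. It deliberately simplifies the published procedure: the Dirichlet-multinomial normalization is reduced to a single Dirichlet smoothing step, the background handling is approximated, and all function names, constants, and simulated inputs are hypothetical rather than taken from the EPIC-seq codebase.

```python
import numpy as np

# Simplified sketch of PFE (steps 7-9) and NDR (step 10) from simulated data.
rng = np.random.default_rng(0)
BINS = np.arange(100, 301)  # 201 fragment-length bins (100-300 bp)

def shannon_entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def fragment_length_probs(lengths: np.ndarray) -> np.ndarray:
    counts = np.array([(lengths == b).sum() for b in BINS], dtype=float)
    return counts / counts.sum()

def pfe(tss_lengths, background_sets, n_draws=200):
    """Approximate probability that the gene-specific entropy exceeds
    eta times each background entropy, with eta ~ Gamma(shape=1, rate=0.5)."""
    p = fragment_length_probs(tss_lengths)
    # Dirichlet smoothing stands in for the full Dirichlet-multinomial model.
    h_gene = np.mean([shannon_entropy(rng.dirichlet(p * 2000 + 1))
                      for _ in range(n_draws)])
    exceed = 0.0
    for bg in background_sets:
        h_bg = shannon_entropy(fragment_length_probs(bg))
        eta = rng.gamma(shape=1.0, scale=2.0, size=n_draws)  # scale = 1/rate
        exceed += np.mean(h_gene > eta * h_bg)
    return exceed / len(background_sets)

def ndr_score(depth_in_window: np.ndarray, genome_wide_mean_depth: float) -> float:
    """Normalized mean depth in the 2-kb TSS window (lower = more open chromatin)."""
    return float(depth_in_window.mean() / genome_wide_mean_depth)

# Simulated data: an "active" TSS with diverse fragment lengths,
# compared against five narrow (low-entropy) background sets.
active = rng.integers(100, 301, size=2000)
backgrounds = [rng.integers(160, 181, size=2000) for _ in range(5)]
print("PFE ~", round(pfe(active, backgrounds), 3))
print("NDR ~", round(ndr_score(rng.poisson(300, 2000).astype(float), 500.0), 3))
```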
Using these two features (PFE and NDR), the developers bootstrapped 600 expression prediction models developed for WGS data and used them in a weighted fashion. Among those models, there are 200 univariable standalone NDR models, 200 univariable standalone PFE models, and 200 integrated NDR-PFE models. Advantages High throughput EPIC-seq inherits the advantages of high-throughput sequencing: fast sequencing times, high scalability, higher sequencing depths, lower costs, and low error rates. Another advantage of EPIC-seq is that it is non-invasive. This also eliminates the risks of invasive methods performed on high-risk tissues and allows scientists to study tissues that would otherwise be too dangerous or difficult to sample. Independence from a High Tumour Burden requirement As mentioned in the introduction, two major limitations of the predecessor methods are not inherited by EPIC-seq: the dependency of common liquid biopsy methods on germline or somatic variants, which are not certain to be found even in high-disease-burden patients, and the insufficient range of cfDNA tissue consideration, genomic breadth, and genomic depth of methods like shallow WGS, which causes low resolution and a low level of inference of gene expression and, again, requires a high tumour burden for higher resolution. EPIC-seq uses fragmentomic features instead of variant calling, thus it is not bound by the existence of the variation. Also, since it does targeted sequencing instead of whole genome sequencing, it allows scientists to increase the sequencing depth and hence provide better resolution. Moreover, it also provides more sensitive and comprehensive tissue-of-origin information. Different Prediction sensitivities Furthermore, the method showed consistent performance in cancer identification, classification, and treatment-effect problems like NSCLC and DLBCL identification, histological classification of subtypes of NSCLC, molecular classification of subtypes of DLBCL, DLBCL COO detection, programmed death-ligand 1 immune-checkpoint inhibition response prediction in advanced NSCLC cases, and prognostic value detection of individual genes. Generalizability WES was done with EPIC-seq, and it detected a correlation between the biological signal and active genes' exonic regions; this shows that EPIC-seq can be generalized to the expression of genes of interest rather than only cancer genes. Robustness on cfDNA levels In general, EPIC-seq analysis results showed a significant correlation between the inspected biological effect and the developed score. For the classification tasks, Area Under the receiver operating characteristic (ROC) Curve (AUC) scores were over 90% with a sufficient significance interval. Also, for these tasks, cfDNA levels did not change the performance unfavourably even when the levels were below 1%. So, the method shows good robustness against cfDNA levels as well. Finally, EPIC-seq did not show any significant changes under different pre-analytical factors, which proves that the method is robust under different circumstances that can be caused by the instruments and tools used before the analysis. Limitations While EPIC-seq offers significant potential in various biomedical applications, it also has limitations that warrant consideration in its implementation and interpretation. Dependency on Known Cancer-Associated genes One limitation of EPIC-seq is its reliance on prior knowledge of genes associated with specific cancers. The effectiveness of the EPIC-seq model hinges on the availability of comprehensive gene expression profiles for the targeted cancer types.
This dependency may restrict its applicability to cancers with well-characterized gene expression patterns, limiting its utility in cancers with less understood molecular signatures. Limited applicability to specific cancer types EPIC-seq may be more effective in cancers with prominent genes or well-defined molecular subtypes. Consequently, its utility may be limited in cancers with less distinct genetic profiles or those characterized by significant interpatient variability. This restricts its generalizability across different cancer types and necessitates cautious interpretation of results in diverse oncological contexts. Limited Performance in Early-stage cancer EPIC-seq may exhibit enhanced performance in detecting late-stage cancer due to higher levels of ctDNA and more pronounced genetic alterations. For example, EPIC-seq's sensitivity for detecting NSCLC diminishes significantly in patients with low tumor-DNA burden (below 1%), reducing detection rates by approximately 34%. Applications Noninvasive cancer detection EPIC-seq has demonstrated remarkable potential in noninvasive cancer detection, notably in the diagnosis of lung cancer, the leading cause of cancer-related mortality. Using EPIC-seq, researchers have achieved high accuracy in distinguishing between NSCLC patients, DLBCL patients, and healthy individuals. Noninvasive Classification of Cancer subtypes EPIC-seq enables the subclassification of NSCLC into histological subtypes such as lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). EPIC-seq can also aid with the classification of cell-of-origin (COO) subtypes in DLBCL. By analyzing epigenetic and transcriptional signatures, EPIC-seq-derived classifiers provide valuable insights into tumor heterogeneity and molecular subtyping, informing tailored treatment strategies. Therapeutic Response prediction In addition to diagnosis and classification, EPIC-seq holds promise in predicting patient response to various cancer therapies, including immune-checkpoint inhibition (ICI). By analyzing changes in gene expression patterns captured through EPIC-seq, researchers can forecast patient response to PD-(L)1 blockade therapy, which can greatly aid personalized cancer treatment. EPIC-seq-derived indices have shown significant correlation with treatment response, offering potential prognostic markers for therapy outcome prediction. Immunotranscriptomic profiling of Classical Hodgkin Lymphoma EPIC-seq has been shown to be effective for inference of the epigenetic expression of classical Hodgkin lymphoma's (cHL) subtypes. The expression of Hodgkin and Reed/Sternberg cells and their corresponding T cells was inferred with EPIC-seq. Bulk single-cell RNA sequencing results show a significant correlation with EPIC-seq profilings of these cell types. Possible use cases Research in different areas mentions possible use cases of EPIC-seq. The integrated analysis toolkit for whole-genome-wide features of cfDNA (INAC) compiles different tools, including EPIC-seq's PFE and NDR scores, to provide comprehensive in silico analysis of cfDNA, exemplified by disease state and clinical outcome inference, transcriptome modeling, and copy number profiling. EPIC-seq has also been mentioned as a potential application in clinical IBD cases. It can be used for surveillance of IBD in high-risk groups and of precancerous development caused by IBD. It has also been named as a possibly superior method for clinical detection of IBD-related gut damage, compared to current methods.
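To illustrate how gene-level scores of this kind could drive the classifiers described in the applications above, here is a toy Python sketch using scikit-learn on synthetic PFE-like features. The data, feature counts, and effect sizes are entirely invented; this is a minimal illustration of the general approach, not the published EPIC-seq classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy example: gene-level PFE-like scores across a panel of TSSs feeding a
# cancer-vs-healthy classifier, evaluated with the AUC metric mentioned above.
rng = np.random.default_rng(1)
n_genes = 50
healthy = rng.normal(0.50, 0.05, size=(100, n_genes))
cancer = rng.normal(0.55, 0.05, size=(100, n_genes))  # slightly elevated PFE

X = np.vstack([healthy, cancer])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"toy AUC = {auc:.2f}")
```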
Alternatives As EPIC-seq studies epigenetic markers to infer gene expression, one can study epigenetic sequencing methods like ChIP-seq, ATAC-seq, MeDIP-seq, and bisulfite-free DNA methylation sequencing in combination with methods for profiling RNA expression such as RNA-seq and scRNA-seq. Since the method is mainly developed for early cancer detection or subgrouping, liquid biopsy methods, such as the Twist cfDNA Pan-Cancer Reference Standard, can be used as alternatives. Different liquid biopsy methods focus on cell-free tumour markers, tumour methylation markers, exomes, proteins, lipids, carbohydrates, electrolytes, metabolites, RNA, extracellular vesicles, circulating tumour cells, and tumour-educated platelets for early identification of cancer non-invasively. Some of the proposed liquid biopsy methods provide comprehensive detection of cancer types, such as ATR-FTIR spectroscopy and CancerSEEK, while others, like Dxcover and SelectMdx, operate on more specific (even single) cancer targets. EPIC-seq utilizes fragmentomic features to infer expression levels of genes. Several studies also employ fragmentomic features to infer cancer existence, infer cell death, and detect other clinical conditions such as transplant failure. ctDNA by Fragment Size analysis This method uses in vivo and in silico ctDNA fragment length selection to enrich the variant proportion in the plasma. The method's size-selection criteria are based on the fragment length properties of blood ctDNA, so it may not generalize well to other non-invasive sampling methods. Furthermore, it employs supervised machine learning methods like Random Forest and Logistic Regression on shallow WGS to classify cancer patients and healthy individuals. The method can be used for different cancer types. Plasma DNA End-Motif profiling This method tries to identify 4-bp-long end motifs from each strand's 5' end on bisulfite sequencing reads of plasma cfDNAs. Hierarchical clustering of the motifs is done to detect any under- or overrepresentation of these motifs due to the presence of cancer. The method incorporates Support Vector Machines and Logistic Regression to distinguish cancer patients from healthy ones. The method has also been applied to transplant patients with clustering and multidimensional scaling (MDS) analysis and shows applicability there. The same analysis types also proved that this method applies to prenatal testing. This method is also informative for cell type origins. Orientation-aware Plasma cell-free DNA Fragmentation analysis Using sequencing-depth inconsistencies in open chromatin regions and signals derived from upstream/downstream orientation-sensitive sequencing read densities, this method infers the tissue of origin of the cfDNA fragments obtained from bisulfite sequencing. The method uses a mathematical formulation to generate signals for orientation-aware cfDNA fragmentation based on the empirical peak periods and positions of the upstream/downstream ends of the reads. The method was shown to be useful for tissue-of-origin inference, pregnancy identification, cancer detection, and transplant monitoring. This method also provides information on how much each tissue of origin contributes to the cfDNA reads. DNA Evaluation of Fragments for early interception The method analyzes shallow WGS reads in windows while considering cfDNA fragment length and coverage. The genome-wide pattern of cfDNA fragmentation features is then fed to a gradient tree-boosting machine learning model to predict the patient's cancer status.
They also used machine learning classifiers to predict the tissue of origin. Overall, the method can be used to identify whether a patient has cancer. Even though the method does not specifically classify the cancer types during prediction, it is used for the detection of different cancers. In vivo Nucleosome footprinting The method produces genome-wide mappings of in vivo nucleosome occupancy to detect the tissue of origin of cfDNA molecules. The method uses the endpoint positions of aligned reads, which are expected to be close to nucleosome core particle (NCP) sites. The Windowed Protection Score (WPS) is proposed to quantify the cfDNA density close to NCPs, defined as the frequency of cfDNA fragments that completely cover a 120-base-pair window centred at a given location minus the frequency of fragments with an endpoint within the same interval. Then, peaks in the WPS are called heuristically to identify footprints. The cells contributing to cfDNA are then predicted from the footprints. These footprints can be used for identifying non-malignant epigenetic or genetic sites, like transcription factor binding sites, and for detecting malignancy-related biomarkers based on the extent of tissue damage and cell death. ctDNA Nucleosome Pattern Employment for Transcriptional Regulation profiling The method has mainly been developed for detecting the various phenotypes of metastatic castration-resistant prostate cancer. It requires the usage of patient-derived xenografts for enrichment of ctDNA in blood for further analysis. After WGS, the method utilizes the tool Griffin for inspection of local promoter coverage, nucleosome positioning, fragment size analysis, and composite transcription factor binding sites plus open chromatin sites of ctDNA reads. It also checks the histone modifications and applies dimensionality reduction on the found sites to identify putative promoter, enhancer, and gene-repressive heterochromatic marks. To interrogate chromatin phasing, the distance between open chromatin regions, the method uses TritonNP, newly developed software that uses Fourier transforms and band-pass filters. XGBoost is utilized for cancer subtype classification using the features detected in the previous steps. cfDNA Methylation, Copy Number, and Fragmentation Analysis for early detection of multiple cancer types The method is proposed as an assay that employs both cfDNA whole genome methylation sequencing and fragmentomic feature information for multicancer classification. Copy number ratios calculated for healthy and cancerous tissues are used as an identifier of cancer type and cancer existence. As done in EPIC-seq, the method also utilizes fragment lengths: the ratio of short fragments over long fragments is used as an identifier score. Using the single-base or region-level methylation percentages at detected cancer methylation markers for each cancer type, the copy number ratios, and the short/long fragment ratios, the method employs a custom Support Vector Machines algorithm to classify the cancer type, if one exists. This method reports cancer detection and tissue-of-origin for 4 cancer types. However, it requires the detection of specific methylation sites/regions of interest for each cancer type. References Biochemistry methods Molecular biology
EPIC-Seq
[ "Chemistry", "Biology" ]
5,551
[ "Biochemistry methods", "Biochemistry", "Molecular biology" ]
73,203,014
https://en.wikipedia.org/wiki/Combined%20cycle%20powered%20railway%20locomotive
A combined cycle powered locomotive is a patented idea to use two prime movers, a gas turbine and a steam turbine, to gain the efficiency of a combined cycle power plant or a combined gas and steam engine. Steam locomotives were tested in the past but were not ideal for low speeds, and gas turbine locomotives (GTELs) were used by Union Pacific until the 1970s. Combined cycle power uses the heat from the gas turbine to make steam from water to turn a steam turbine, instead of that heat being exhausted and wasted. Engine efficiency for combined cycle can reach 60%, compared to diesel motors' 45% efficiency; for example, a 40%-efficient gas turbine whose exhaust heat drives a steam bottoming cycle that recovers a third of the remaining energy yields roughly 0.40 + 0.60 × 0.33 ≈ 60% overall. The gas and steam turbines would turn their separate generators, and the steam turbine would have a clutch between it and its generator because steam power is not easily adjustable. Compressed hydrogen would be in one fuel tank, and water would be in another storage tank for the steam, and the Rankine cycle could condense most of the steam back to water to put back into the water tank to repeat the cycle for the steam turbine. Current diesel electric locomotives such as the GE Evolution Series with a cab could still be the lead cab, pusher, and distributed power, with the combined cycle powered locomotive as a slug. See also Hydrogen economy Hydrogen fuel cell train Turbine-electric powertrain Schnabel car References External links Researching an Air-Steam Combined-cycle Locomotive Steam locomotives Gas turbine locomotives Steam vehicles Freight transport Locomotive engines
Combined cycle powered railway locomotive
[ "Technology" ]
286
[ "Locomotive engines", "Engines" ]
63,087,895
https://en.wikipedia.org/wiki/Energy-based%20model
An energy-based model (EBM) (also called Canonical Ensemble Learning or Learning via Canonical Ensemble – CEL and LCE, respectively) is an application of the canonical ensemble formulation from statistical physics for learning from data. The approach prominently appears in generative artificial intelligence. EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models. An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution. Energy-based generative neural networks are a class of generative models, which aim to learn explicit probability distributions of data in the form of energy-based models, the energy functions of which are parameterized by modern deep neural networks. Boltzmann machines are a special form of energy-based models with a specific parametrization of the energy. Description For a given input $x$, the model describes an energy $E_\theta(x)$ such that the Boltzmann distribution $p_\theta(x) = \exp(-E_\theta(x))/Z(\theta)$ is a probability (density), and typically $x \in \mathbb{R}^D$. Since the normalization constant $Z(\theta) = \int_{x \in X} \exp(-E_\theta(x))\,dx$ (also known as the partition function) depends on all the Boltzmann factors of all possible inputs $x$, it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation. However, for maximizing the likelihood during training, the gradient of the log-likelihood of a single training example $x$ is given by using the chain rule: $\partial_\theta \log p_\theta(x) = \mathbb{E}_{x' \sim p_\theta}[\partial_\theta E_\theta(x')] - \partial_\theta E_\theta(x)$ (1). The expectation in the above formula for the gradient can be approximately estimated by drawing samples $x'$ from the distribution $p_\theta$ using Markov chain Monte Carlo (MCMC). Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation via blocked Gibbs sampling. Newer approaches make use of more efficient Stochastic Gradient Langevin Dynamics (LD), drawing samples using: $x'_{k+1} = x'_k - \frac{\alpha}{2}\frac{\partial E_\theta(x'_k)}{\partial x'_k} + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \alpha)$. A replay buffer of past values $x'_k$ is used with LD to initialize the optimization module. The parameters $\theta$ of the neural network are therefore trained in a generative manner via MCMC-based maximum likelihood estimation: the learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method (e.g., Langevin dynamics or Hybrid Monte Carlo), and then updates the parameters based on the difference between the training examples and the synthesized ones – see equation (1). This process can be interpreted as an alternating mode seeking and mode shifting process, and also has an adversarial interpretation. Essentially, the model learns a function that associates low energies to correct values, and higher energies to incorrect values. After training, given a converged energy model $E_\theta$, the Metropolis–Hastings algorithm can be used to draw new samples. The acceptance probability is given by: $P_{\mathrm{acc}}(x_i \to x^*) = \min\left(1, \frac{p_\theta(x^*)}{p_\theta(x_i)}\right)$. History The term "energy-based models" was first coined in a 2003 JMLR paper where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs. Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables. Characteristics EBMs demonstrate useful properties: Simplicity and stability–The EBM is the only object that needs to be designed and trained. Separate networks need not be trained to ensure balance.
Adaptive computation time – an EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples; given infinite time, the procedure produces true samples. Flexibility – in variational autoencoders (VAEs) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes, whereas EBMs can learn to assign low energies to disjoint regions (multiple modes). Adaptive generation – EBM generators are implicitly defined by the probability distribution, and automatically adapt as the distribution changes (without retraining), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples. Compositionality – individual models are unnormalized probability distributions, allowing models to be combined through products of experts or other hierarchical techniques. Experimental results On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM generated high-quality images relatively quickly. It supported combining features learned from one type of image to generate other types of images. It generalized to out-of-distribution datasets, outperforming flow-based and autoregressive models. EBMs were relatively resistant to adversarial perturbations, behaving better under attack than models explicitly trained against such perturbations for classification. Applications Target applications include natural language processing, robotics and computer vision. The first energy-based generative neural network was the generative ConvNet, proposed in 2016 for image patterns, in which the neural network is a convolutional neural network. The model has since been generalized to various domains, such as learning distributions of videos and of 3D voxels, and has been made more effective in its variants. These models have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution), and data reconstruction (e.g., image reconstruction and linear interpolation). Alternatives EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) and normalizing flows. Extensions Joint energy-based models Joint energy-based models (JEM), proposed in 2020 by Grathwohl et al., allow any classifier with softmax output to be interpreted as an energy-based model. The key observation is that such a classifier is trained to predict the conditional probability p_θ(y | x) = exp(f_θ(x)[y]) / Σ_{y′} exp(f_θ(x)[y′]), where f_θ(x)[y] is the y-th index of the logits f_θ(x) corresponding to class y. Without any change to the logits, it was proposed to reinterpret them as describing a joint probability density p_θ(x, y) = exp(f_θ(x)[y]) / Z(θ), with unknown partition function Z(θ) and energy E_θ(x, y) = −f_θ(x)[y]. By marginalization, we obtain the unnormalized density p_θ(x) = Σ_y p_θ(x, y) = Σ_y exp(f_θ(x)[y]) / Z(θ), and therefore E_θ(x) = −log Σ_y exp(f_θ(x)[y]), so that any classifier can be used to define an energy function E_θ(x); a short illustration follows below. See also Empirical likelihood Posterior predictive distribution Contrastive learning Literature Implicit Generation and Generalization in Energy-Based Models, Yilun Du, Igor Mordatch, https://arxiv.org/abs/1903.08689 Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky, https://arxiv.org/abs/1912.03263 References External links Statistical models Machine learning Statistical mechanics Hamiltonian mechanics
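A minimal sketch of the MCMC-based maximum-likelihood training loop described above, in PyTorch-style Python. The energy_net module, step counts, and the decoupling of the step size from the noise scale (common in practice, though the Langevin update above ties the noise variance to the step size α) are illustrative assumptions, not a canonical implementation.

```python
import torch

def langevin_samples(energy_net, x, n_steps=60, step_size=10.0, noise_std=0.005):
    # Approximate samples from p(x) ∝ exp(-E(x)) via Langevin dynamics:
    # x_{k+1} = x_k - (step_size/2) * dE/dx + noise. A decoupled noise_std is a
    # practical assumption; the formula above would use noise_std = sqrt(α).
    x = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        grad, = torch.autograd.grad(energy_net(x).sum(), x)
        x = x - 0.5 * step_size * grad + noise_std * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()

def ebm_loss(energy_net, x_data, x_init):
    # Minimizing E(data) - E(model samples) follows the log-likelihood
    # gradient (*): data energies are pushed down, sampled energies up.
    x_model = langevin_samples(energy_net, x_init)
    return energy_net(x_data).mean() - energy_net(x_model).mean()

def mh_accept(energy_net, x_cur, x_prop):
    # Metropolis-Hastings acceptance for a symmetric proposal, where the
    # acceptance probability reduces to min(1, exp(E(x_cur) - E(x_prop)));
    # the unknown partition function Z(θ) cancels out of the ratio.
    with torch.no_grad():
        log_ratio = energy_net(x_cur) - energy_net(x_prop)
    return torch.rand_like(log_ratio) < torch.exp(log_ratio.clamp(max=0.0))
```

In a full loop, x_init would be drawn either from noise or from a replay buffer of past chains, as noted above.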
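A small illustration of the JEM reinterpretation just described; logits is assumed to be the [batch, num_classes] output f_θ(x) of an arbitrary softmax classifier, and the function name is hypothetical.

```python
import torch

def jem_energies(logits):
    # E(x, y) = -f(x)[y];  E(x) = -log Σ_y exp(f(x)[y])  (Grathwohl et al., 2020)
    e_xy = -logits                          # per-class joint energies
    e_x = -torch.logsumexp(logits, dim=-1)  # marginal energy of x alone
    return e_xy, e_x
```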
Energy-based model
[ "Physics", "Mathematics", "Engineering" ]
1,424
[ "Machine learning", "Theoretical physics", "Classical mechanics", "Hamiltonian mechanics", "Artificial intelligence engineering", "Statistical mechanics", "Dynamical systems" ]
63,090,080
https://en.wikipedia.org/wiki/Using%20the%20Borsuk%E2%80%93Ulam%20Theorem
Using the Borsuk–Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry is a graduate-level mathematics textbook in topological combinatorics. It describes the use of results in topology, and in particular the Borsuk–Ulam theorem, to prove theorems in combinatorics and discrete geometry. It was written by Czech mathematician Jiří Matoušek, and published in 2003 by Springer-Verlag in their Universitext series. Topics The topic of the book is part of a relatively new field of mathematics crossing between topology and combinatorics, now called topological combinatorics. The starting point of the field, and one of the central inspirations for the book, was a proof that László Lovász published in 1978 of a 1955 conjecture by Martin Kneser, according to which the Kneser graphs KG(n, k) have no graph coloring with n − 2k + 1 colors (a small computational check of the matching upper bound appears below). Lovász used the Borsuk–Ulam theorem in his proof, and Matoušek gathers many related results, published subsequently, to show that this connection between topology and combinatorics is not just a proof trick but a substantial area of study. The book has six chapters. After two chapters reviewing the basic notions of algebraic topology and proving the Borsuk–Ulam theorem, the applications to combinatorics and geometry begin in the third chapter, with topics including the ham sandwich theorem, the necklace splitting problem, Gale's lemma on points in hemispheres, and several results on colorings of Kneser graphs. After another chapter on more advanced topics in equivariant topology, two more chapters of applications follow, separated according to whether the equivariance is modulo two or uses a more complicated group action. Topics in these chapters include the van Kampen–Flores theorem on embeddability of skeletons of simplices into lower-dimensional Euclidean spaces, and topological and multicolored variants of Radon's theorem and Tverberg's theorem on partitions into subsets with intersecting convex hulls. Audience and reception The book is written at a graduate level, and has exercises making it suitable as a graduate textbook. Some knowledge of topology would be helpful for readers but is not necessary. Reviewer Mihaela Poplicher writes that it is not easy to read, but is "very well written, very interesting, and very informative". And reviewer Imre Bárány writes that "The book is well written, and the style is lucid and pleasant, with plenty of illustrative examples." Matoušek intended this material to become part of a broader textbook on topological combinatorics, to be written jointly by him, Anders Björner, and Günter M. Ziegler. However, this was not completed before Matoušek's untimely death in 2015. References Combinatorics Algebraic topology Mathematics textbooks 2003 non-fiction books
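Hedged aside: the Kneser bound just stated is easy to check computationally on small cases. The sketch below (function names and the 1-based ground set are illustrative assumptions) verifies the standard proper coloring of KG(n, k) with n − 2k + 2 colors — one more than the n − 2k + 1 colors that Lovász's theorem rules out.

```python
from itertools import combinations

def kneser_coloring_is_proper(n, k):
    # Vertices of KG(n, k): k-subsets of {1..n}; edges join disjoint subsets.
    # Standard coloring: color(S) = min(min(S), n - 2k + 2). Two sets sharing
    # a minimum element intersect; sets in the last color class lie inside
    # the (2k-1)-element tail {n-2k+2, ..., n}, so they intersect too.
    last = n - 2 * k + 2
    color = lambda s: min(min(s), last)
    vertices = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    return all(s & t or color(s) != color(t)   # disjoint pairs (edges) must differ
               for s, t in combinations(vertices, 2))

assert kneser_coloring_is_proper(5, 2)  # KG(5,2) is the Petersen graph: 3 colors
```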
Using the Borsuk–Ulam Theorem
[ "Mathematics" ]
584
[ "Discrete mathematics", "Algebraic topology", "Combinatorics", "Fields of abstract algebra", "Topology" ]
63,090,329
https://en.wikipedia.org/wiki/Brequinar
Brequinar (DuP-785) is a drug that acts as a potent and selective inhibitor of the enzyme dihydroorotate dehydrogenase. It blocks synthesis of pyrimidine-based nucleotides in the body and so inhibits cell growth. Brequinar was invented by DuPont Pharmaceuticals in the 1980s. In 2001, Bristol-Myers Squibb acquired DuPont, and in 2017, Clear Creek Bio acquired the rights to brequinar from BMS. Brequinar has been investigated as an immunosuppressant for preventing rejection after organ transplant and also as an anti-cancer drug, but was not accepted for medical use in either application, largely due to its narrow therapeutic dose range and severe side effects when dosed inappropriately. It has been researched as part of potential combination therapies for some cancers, and alternatively as an antiparasitic or antiviral drug. Clear Creek Bio is currently developing brequinar as a potential treatment for COVID-19. Inhibition of dihydroorotate dehydrogenase activity by brequinar may represent an efficient approach to eliminating undifferentiated cells, enabling safer therapies based on differentiated cells derived from pluripotent stem cells (PSCs). Brequinar has also been shown to completely inhibit vaccinia virus in a cell-based assay. See also Leflunomide - clinically used DHODH inhibitor Methotrexate - the most widely used pyrimidine synthesis inhibitor References Antiviral drugs Quinolines Biphenyls Carboxylic acids Organofluorides
Brequinar
[ "Chemistry", "Biology" ]
331
[ "Antiviral drugs", "Carboxylic acids", "Biocides", "Functional groups" ]
63,093,632
https://en.wikipedia.org/wiki/NGC%20970
NGC 970 is an interacting galaxy pair in the constellation Triangulum. It is estimated to be 471 million light-years from the Milky Way and has a diameter of approximately 100,000 light-years. The object was discovered on September 14, 1850, by Bindon Blood Stoney. See also List of NGC objects (1–1000) Notes References 970 Interacting galaxies Triangulum Spiral galaxies 009786
NGC 970
[ "Astronomy" ]
87
[ "Triangulum", "Constellations" ]
63,094,759
https://en.wikipedia.org/wiki/Poincar%C3%A9%20and%20the%20Three-Body%20Problem
Poincaré and the Three-Body Problem is a monograph in the history of mathematics on the work of Henri Poincaré on the three-body problem in celestial mechanics. It was written by June Barrow-Green, as a revision of her 1993 doctoral dissertation, and published in 1997 by the American Mathematical Society and London Mathematical Society as Volume 11 in their shared History of Mathematics series. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. Topics The three-body problem concerns the motion of three bodies interacting under Newton's law of universal gravitation, and the existence of orbits for those three bodies that remain stable over long periods of time. This problem has been of great interest mathematically since Newton's formulation of the laws of gravity, in particular with respect to the joint motion of the sun, earth, and moon. The centerpiece of Poincaré and the Three-Body Problem is a memoir on this problem by Henri Poincaré, entitled Sur le problème des trois corps et les équations de la dynamique [On the problem of the three bodies and the equations of dynamics]. This memoir won the King Oscar Prize in 1889, commemorating the 60th birthday of Oscar II of Sweden, and was scheduled to be published in Acta Mathematica on the king's birthday, until Lars Edvard Phragmén and Poincaré determined that there were serious errors in the paper. Poincaré called for the paper to be withdrawn, spending more than the prize money to do so. In 1890 it was finally published in revised form, and over the next ten years Poincaré expanded it into a monograph, Les méthodes nouvelles de la mécanique céleste [New methods in celestial mechanics]. Poincaré's work led to the discovery of chaos theory, set up a long-running separation between mathematicians and dynamical astronomers over the convergence of series, and became the initial claim to fame for Poincaré himself. The detailed story behind these events, long forgotten, was brought back to life in a sequence of publications by multiple authors in the early and mid 1990s, including Barrow-Green's dissertation, a journal publication based on the dissertation, and this book. The first chapter of Poincaré and the Three-Body Problem introduces the problem and its second chapter surveys early work on this problem, in which some particular solutions were found by Newton, Jacob Bernoulli, Daniel Bernoulli, Leonhard Euler, Joseph-Louis Lagrange, Pierre-Simon Laplace, Alexis Clairaut, Charles-Eugène Delaunay, Hugo Gyldén, Anders Lindstedt, George William Hill, and others. The third chapter surveys the early work of Poincaré, which includes work on differential equations, series expansions, and some special solutions of the three-body problem, and the fourth chapter surveys the history of the founding of Acta Mathematica by Gösta Mittag-Leffler and of the prize competition announced by Mittag-Leffler in 1885, which Barrow-Green suggests may have been deliberately set with Poincaré's interests in mind and which Poincaré's memoir would win. The fifth chapter concerns Poincaré's memoir itself; it includes a detailed comparison of the significant differences between the withdrawn and published versions, and overviews the new mathematical content it contained, including not only the possibility of chaotic orbits but also homoclinic orbits and the use of integrals to construct invariants of systems. 
After a chapter on Poincaré's expanded monograph and his other later work on the three-body problem, the remainder of the book discusses the influence of Poincaré's work on later mathematicians. This includes contributions on the singularities of solutions by Paul Painlevé, Edvard Hugo von Zeipel, Tullio Levi-Civita, Jean Chazy, Richard McGehee, Donald G. Saari, and Zhihong Xia, on the stability of solutions by Aleksandr Lyapunov, on numerical results by George Darwin, Forest Ray Moulton, and Bengt Strömgren, on power series by Giulio Bisconcini and Karl F. Sundman, and on KAM theory by Andrey Kolmogorov, Vladimir Arnold, and Jürgen Moser, with additional contributions by George David Birkhoff, Jacques Hadamard, V. K. Melnikov, and Marston Morse. However, much of modern chaos theory is left out of the story "as amply dealt with elsewhere", and the work of Qiudong Wang generalizing Sundman's convergent series from three bodies to arbitrary numbers of bodies is also omitted. An epilogue considers the impact of modern computer power on the numerical study of Poincaré's theories. Audience and reception This book is aimed at specialists in the history of mathematics, but can be read by any student of mathematics familiar with differential equations, although the central part of the book, analyzing Poincaré's work, may be too light on mathematical detail to be readily understandable without reference to other material. Reviewer Ll. G. Chambers writes "This is a superb piece of work and it throws new light on one of the most fundamental topics of mechanics." Reviewer Jean Mawhin calls it "the definitive work about the chaotic story of the King Oscar Prize" and "pleasantly accessible"; reviewer R. Duda calls it "clearly organized, well written, richly documented", and both Mawhin and Duda call it a "valuable addition" to the literature. And reviewer Albert C. Lewis writes that it "provides insights into higher mathematics that justify its being on every university mathematics student's reading list". Although reviewer Florin Diacu (himself a noted researcher on the n-body problem) complains that Wang was omitted, that Barrow-Green "sometimes fails to see connections ... within Poincaré's own work", and that some of her translations are inaccurate, he also recommends the book. References Astronomical dynamical systems Books about the history of mathematics 1997 non-fiction books
Poincaré and the Three-Body Problem
[ "Astronomy", "Mathematics" ]
1,249
[ "Astronomical objects", "Astronomical dynamical systems", "Dynamical systems" ]
63,095,115
https://en.wikipedia.org/wiki/Jenara%20Vicenta%20Arnal%20Yarza
Jenara Vicenta Arnal Yarza (September 19, 1902 – May 27, 1960) was the first woman to hold a Ph.D. in chemistry (Chemical Sciences) in Spain. She was noted for her work in electrochemistry and her research into the formation of fluorine from potassium bifluoride. In later years, she was recognized for her contribution to the pedagogy of teaching science at the elementary and secondary levels, with a focus on the practical uses of chemistry in daily life. She was awarded a national honor, the Orden Civil de Alfonso X el Sabio. Early life and education Arnal Yarza was born into a humble family; her father, Luis Arnal Foz, was a laborer from Zaragoza who later repaired pianos. Her mother, Vicenta Yarza Marquina, of Brea (Zaragoza), was a housewife. After the death of her parents, she had the responsibility of taking care of her two younger siblings. Her sister Pilar was a pianist who studied in Paris and gave concerts in the Teatro Real de Madrid. Her brother Pablo died young, but had a short career as a professor of Physics and Chemistry at the Consejo Superior de Investigaciones Científicas (CSIC). Jenara's vocation led her to teacher training studies at the Escuela de Zaragoza and to a degree in Elementary Education (primary school teaching) on December 3, 1921. Her desire for learning impelled her to continue her studies at the School of Sciences at the University of Zaragoza in the realm of Chemical Sciences, first as a non-matriculated student in 1922–23. Later she continued as a matriculated student, and received high grades and honors in all of her classes. She received her graduate degree from the University of Zaragoza on March 12, 1927. She defended her doctoral thesis on October 6, 1929, and obtained her Ph.D. in chemistry from the Faculty of Sciences of the University of Zaragoza on December 13, 1929. Her doctoral thesis was titled Estudio potenciométrico del ácido hipocloroso y de sus sales ("Potentiometric study of hypochlorous acid and its salts"). Thus, Arnal Yarza became the first woman to obtain a doctorate in Chemical Sciences in Spain, later followed by other researchers. Career in chemistry After completing her studies, in 1926 she began work as a researcher in theoretical chemistry in the laboratories of the Faculty of the University of Zaragoza. Her research would later take her to other public and private research centers, such as the Escuela Industrial of Zaragoza, the Escuela Superior de Trabajo of Madrid, the Anstalt für Anorganische Chemie of the University of Basel (as a fellowship recipient of the Junta para Ampliación de Estudios e Investigaciones Científicas), and the electrochemistry department of the National Institute of Physics and Chemistry (Instituto Nacional de Física y Química) of Madrid, where she continued and expanded upon the work she had begun in Switzerland and Germany after going there to research electrochemistry as a fellow of the JAE. During her tenure at the INFQ, she published 11 articles about electrochemical research, and in particular, electrolytic analysis. In 1929, Dr. Arnal Yarza became a member of the Spanish Society for Physics and Chemistry (Sociedad Española de Física y Química) in recognition of her distinguished research career in Spain and abroad. While she worked at the laboratories of the Anstalt für anorganische Chemie in Basel, she studied under Friedrich Fichter, professor of inorganic chemistry and then vice-president of the International Union of Chemistry. 
Together they worked on the chemical oxidation of various metals, and specifically on the creation of fluorine and of persulfates of zinc and lanthanum by the electrolysis of molten potassium bifluoride. They published the results of their work in 1931 in the notable Swiss periodical Helvetica Chimica Acta. Arnal Yarza also researched chemical oxidation produced by the action of fluorine in the gaseous state. She spent some time studying at the Technische Hochschule in Dresden thanks to a two-semester extension, from 1932, of her original scholarship. After the Spanish Civil War broke out while she was in Madrid, Arnal Yarza left Spain in 1937 and resided for a time in France. She later returned to the Spanish "National Zone". Throughout the war she was able to continue her research work without being sanctioned. During the Spanish Civil War and the early years of Franco's dictatorship, very few women, all unmarried, were allowed to participate in scientific research. While Jenara did not return to full-time research after the war, she remained interested in science while teaching secondary school, and completed various works for the Consejo Superior de Investigaciones Científicas (CSIC) while she served at the teaching institute Instituto de Pedagogía San José de Calasanz. She collaborated in writing for the Boletín Bibliográfico del CSIC journal, most notably in publications dedicated to primary school teachers published in the Auxiliary Library of Education (Biblioteca Auxiliar de Educación). She was the second woman to serve as the director of a department of physics and chemistry at a Spanish secondary school, from 1930 onward. In May 1947 Arnal Yarza obtained authorization to travel to London to attend the First Centennial of the Royal Society and the XI International Congress of Pure and Applied Chemistry. In December, the General Office of Secondary Education (Dirección General de Enseñanzas Medias) gave her permission to go on a trip to Japan as a delegate of the (Foreign) Exchange Section of CSIC. Upon her return to Spain, Arnal Yarza gave conferences and facilitated the exchange of publications by CSIC with Japanese universities and centers of advanced research. Later, she would return to Japan under the auspices of the CSIC for two years to advance her studies in chemistry. In July 1953, she made a trip to attend the XIII International Congress of Pure and Applied Chemistry in Stockholm and Uppsala. That same year she made her last trip within Europe for research purposes, to attend the meeting of the International Committee of Electrochemical Thermodynamics and Kinetics in Vienna from September 28 to October 5, 1953. Jenara Vicenta Arnal Yarza died suddenly on May 27, 1960, of a cerebral hemorrhage due to thrombosis. After her death, the Ministry of Education awarded her the distinguished honor of the Orden Civil de Alfonso X el Sabio. Career in teaching Arnal Yarza began her teaching career in 1926, working as an assistant instructor in practical classes, with the goal of becoming a professor of Analytical Chemistry, in the Faculty of Sciences of the University of Zaragoza, and worked there until 1927. At the time she was in charge of the first course of Inorganic Chemistry, as the director was on leave. In the same year she obtained a contract as a temporary assistant to the professor of Electrochemistry and Advanced Physics in the same faculty, which she held until 1930. 
On April 9, 1930, after passing the requirements for professorship, she became the eleventh Spanish woman to receive the title of professor and the second woman professor of sciences, after Ángela García de la Puerta. Thus she began her career in secondary education. Her first secondary teaching post was at the Instituto Nacional Femenino Infanta Cristina, a girls' school in Barcelona, from 1930 until its closure in 1931, where she served as acting professor. In 1933 she was transferred to the Institute of Secondary Education in Calatayud. Later she was a professor of Physics and Chemistry at the Institute of Bilbao, from which she finally transferred to Madrid, where she was assigned to the Instituto Velázquez from 1935 to 1936. When the Spanish Civil War broke out, the Republican government maintained her as a government employee, earning two-thirds of her salary, at the same time that there was a reduction of personnel at the Ministry of Public Instruction. Arnal Yarza did not have any political inclinations towards either of the two sides. This stance spared her from reprisals and permitted her to leave Madrid and, after a time in France, to enter the National Zone, where she presented herself before the Commission of Culture and Teaching of the Junta Técnica del Estado, which reinstated her to her position in Bilbao. In 1940, she was readmitted as a professor at the Beatriz Galindo Institute in Madrid without any sanctions, and there she was able to continue her duties on the directorial team of the center until her untimely death in 1960. As an educator, Jenara distinguished herself with her pedagogical approach to the teaching of Natural Sciences, Physics and Chemistry. She believed that the teaching of basic sciences fostered cultural development in students, by providing knowledge of the natural world while developing their mental discipline via observation, experimentation and the interpretation of results. She believed that the method of teaching science should be adapted to the cognitive level of the student: for elementary students of 5–12 years of age, the focus should be on experiencing science via observation, experimentation and discovery; for older students of 12–15 years of age, she emphasized lessons that contained practical applications of science as part of professional development or for recreation. She detailed these approaches in a 1933 monograph edition of the journal Bordón, which was dedicated to the teaching of Natural Sciences. Publications In 1930, Arnal Yarza completed research in collaboration with Rius Miró and Ángela García de la Puerta on the subject of electrolytic oxidation of chlorides. In addition, she published distinguished works in the journals Helvetica Chimica Acta and Transactions of the American Chemical Society. Also with Rius Miró, she published "Estudio del potencial del electrodo de cloro y sus aplicaciones al análisis", in the Anales de la Sociedad Española de Física y Química in 1933, and "La oxidación electrolítica" in 1935. She also authored additional educational publications: Física y Química de la vida diaria ("Physics and chemistry in daily life") (1954 and 1959); Los primeros pasos en el laboratorio de Física y Química ("First steps in the Physics and Chemistry laboratory") (1956); and Química en Acción ("Chemistry in action") (1959). 
In addition, Jenara collaborated with Inés García Escalera, professor at the Institute of Secondary Education of Alcalá de Henares, on two books: Lecciones de cosas ("Lessons about things") (1958) and El mundo del saber (ciencias y letras) ("The world of knowledge (arts and sciences)") (1968 and 1970), later re-edited in 1982. She also translated specialized books about the history of science, such as Historia de la Química ("The history of chemistry") and Historia de la Física ("The history of physics"). Legacy One of her former students from her time in Japan, the Spanish ambassador, created the Vicenta Arnal Prize in her honor for graduating students of the Instituto Beatriz Galindo, and in recent years his son has continued to award the prize. In March 2019, the city of Zaragoza proposed changing the name of a street from Calle Rudesindo Nasarre Ariño to Calle Jenara Vicenta Arnal Yarza in her honor, in an effort to recognize the contributions of four notable women from that city and to comply with the Historical Memory Law. The plan was cancelled in September of that year. References Bibliography Cien años de Política Científica en España. María Jesús Santesmases and Ana Romero de Pablos. Fundación BBVA. 2008. 424 pages. (Spanish) De analfabetas científicas a catedráticas de Física y Química de Instituto en España: el esfuerzo de un grupo de mujeres para alcanzar un reconocimiento profesional y científico. Delgado Martínez, Mª Ángeles and López Martínez, J. Damián. Revista de Educación. 2004. Number 333. pp. 255–268. (Spanish) Pioneras españolas en las ciencias. Las mujeres del Instituto Nacional de Física y Química. Carmen Magallón Portolés. Consejo Superior de Investigaciones Científicas. 2004. 408 pages. (Spanish) Women in Their Element: Selected Women's Contributions to the Periodic System. Lykknes, Annette and Van Tiggelen, Brigitte. World Scientific Publishing Co. 2019. 556 pages. 1902 births 1960 deaths Spanish chemists Spanish women chemists Spanish women scientists University of Zaragoza alumni Academic staff of the University of Zaragoza People from Zaragoza Electrochemists
Jenara Vicenta Arnal Yarza
[ "Chemistry" ]
2,689
[ "Electrochemistry", "Electrochemists" ]
63,095,268
https://en.wikipedia.org/wiki/1%2C6-Hexanediol%20diacrylate
1,6-Hexanediol diacrylate (HDDA or HDODA) is a difunctional acrylate ester monomer used in the manufacture of polymers. It is particularly useful in ultraviolet-light-cure applications. It is also used in adhesives, sealants, alkyd coatings, elastomers, photopolymers, and inks for improved adhesion, hardness, and abrasion and heat resistance. Like other acrylate monomers, it is usually supplied with an added radical inhibitor such as hydroquinone. Preparation The material is prepared by acid-catalyzed esterification of 1,6-hexanediol with acrylic acid. Other uses As the molecule has acrylic functionality, it is capable of undergoing the Michael reaction with an amine. This allows its use in epoxy chemistry, where it speeds up the cure time considerably. See also TMPTA (trimethylolpropane triacrylate), a triacrylate crosslinker Pentaerythritol tetraacrylate, a tetraacrylate crosslinker References External links Product Stewardship Information Acrylate esters Monomers
1,6-Hexanediol diacrylate
[ "Chemistry", "Materials_science" ]
256
[ "Monomers", "Polymer chemistry" ]
63,097,582
https://en.wikipedia.org/wiki/NGC%203686
NGC 3686 is a spiral galaxy that, together with the three other spiral galaxies NGC 3681, NGC 3684, and NGC 3691, forms a quartet of galaxies in the constellation Leo. It was discovered on 14 March 1784 by William Herschel. It is a member of the NGC 3607 Group of galaxies, which in turn belongs to the Leo II Groups, a series of galaxies and galaxy clusters strung out from the right edge of the Virgo Supercluster. References External links Leo (constellation) 3686 17840314 Barred spiral galaxies 035268
NGC 3686
[ "Astronomy" ]
118
[ "Leo (constellation)", "Constellations" ]
63,098,501
https://en.wikipedia.org/wiki/Qbox
Qbox is an open-source software package for atomic-scale simulations of molecules, liquids and solids. It implements first-principles (or ab initio) molecular dynamics, a simulation method in which inter-atomic forces are derived from quantum mechanics. Qbox is released under a GNU General Public License (GPL) with documentation provided at http://qboxcode.org. It is available as a FreeBSD port. Main features Born–Oppenheimer molecular dynamics in the microcanonical (NVE) or canonical (NVT) ensemble; Car–Parrinello molecular dynamics; constrained molecular dynamics for thermodynamic integration; efficient computation of maximally localized Wannier functions; local, GGA and hybrid density functional approximations (LDA, PBE, SCAN, PBE0, B3LYP, HSE06, ...); electronic structure in the presence of a constant electric field; computation of the electronic polarizability; electronic response to arbitrary external potentials; infrared and Raman spectroscopy. Methods and approximations Qbox computes molecular dynamics trajectories of atoms using Newton's equations of motion, with forces derived from electronic structure calculations performed using density functional theory. Simulations can be performed either within the Born–Oppenheimer approximation or using Car–Parrinello molecular dynamics. The electronic ground state is computed at each time step by solving the Kohn–Sham equations. Various levels of density functional theory approximation can be used, including the local-density approximation (LDA), the generalized gradient approximation (GGA), or hybrid functionals that incorporate a fraction of Hartree–Fock exchange energy. Electronic wave functions are expanded in a plane-wave basis set. The electron–ion interaction is represented by pseudopotentials. Examples of use Electronic properties of nanoparticles; electronic properties of aqueous solutions; free energy landscapes of molecules; infrared and Raman spectra of hydrogen at high pressure; properties of solid–liquid interfaces. Code architecture and implementation Qbox is written in C++ and implements parallelism using both the message passing interface (MPI) and the OpenMP application programming interface. It makes use of the BLAS, LAPACK, ScaLAPACK, FFTW and Apache Xerces libraries. Qbox was designed for operation on massively parallel computers such as the IBM Blue Gene supercomputer or the Cray XC40 supercomputer. In 2006 it was used to establish a performance record on the BlueGene/L computer installed at the Lawrence Livermore National Laboratory. Interface with other simulation software The functionality of Qbox can be enhanced by coupling it with other simulation software using a client-server paradigm. Examples of Qbox coupled operation include: free energy computations, coupled with the Software Suite for Advanced Ensemble Simulations (SSAGES); quasiparticle energy computations, coupled with the WEST many-body perturbation theory software package; path-integral quantum simulations, coupled with the i-PI universal force engine. See also List of quantum chemistry and solid-state physics software Density functional theory References External links Computational chemistry software Physics software Free physics software
Qbox
[ "Physics", "Chemistry" ]
632
[ "Computational chemistry software", "Chemistry software", "Computational physics", "Computational chemistry", "Physics software" ]
47,618,107
https://en.wikipedia.org/wiki/Doering%E2%80%93LaFlamme%20allene%20synthesis
In organic chemistry, the Doering–LaFlamme allene synthesis is a reaction of alkenes that converts them to allenes by insertion of a carbon atom. This name reaction is named for William von Eggers Doering and a co-worker, who first reported it. The reaction is a two-stage process, in which first the alkene is reacted with dichlorocarbene or dibromocarbene to form a dihalocyclopropane. This intermediate is then reacted with a reducing metal, such as sodium or magnesium, or with an organolithium reagent. Either approach results in metal-halogen exchange to convert the gem-dihalogenated carbon to a 1-metallo-1-halocyclopropane. This species undergoes α-elimination of metal halide and ring-opening via an electrocyclic reaction (at least formally) to give the allene. Several different mechanisms for the electrocyclic rearrangement have been studied. In a study in which an enantioenriched substituted cyclopropyl Grignard reagent was prepared, the reaction was shown to give the allene with very high levels of enantiospecificity, suggesting a concerted mechanism. Similarly, in a computational study of the bromolithiocyclopropane, a concerted mechanism was found to be favored. A discrete cyclopropylidene carbene was found to be unlikely, although early ejection of LiBr (roughly simultaneous to C–C bond scission and before formation of the orthogonal pi bonds of the allene) was suggested. References Name reactions
Doering–LaFlamme allene synthesis
[ "Chemistry" ]
353
[ "Name reactions" ]
47,618,620
https://en.wikipedia.org/wiki/Silicon%20organic%20water%20repellent
Organosilicon water repellent: a water solution of siliconate. The water-repelling liquid is applied: to give the surface of materials excellent water resistance, so that the surface does not absorb water; to make the material frost- and corrosion-resistant; and to reduce soiling of the surface. In addition, the treated surface does not change its appearance and maintains air permeability – the material does not sweat and retains the ability to release water vapour. The liquid is a methyl hydride siloxane polymer of low viscosity, colorless to light yellow. It is readily soluble in aromatic and chlorinated hydrocarbons, and undergoes gelation in the presence of amines, amino alcohols, strong acids and alkalis. It does not dissolve in lower alcohols or water. The positive effects of applying methyl hydride siloxane: improved water resistance of various building materials – water remains on the surface in the form of droplets and does not penetrate the material; increased frost resistance and improved thermal insulation; unhindered air exchange – the structure releases vapour to the outside and does not accumulate moisture; protection against UV and infrared radiation; preservation of the appearance of the material; extended service life of the material; prevention of surface mosses and lichens. Water emulsion of organosilicon: a water emulsion of methyl hydride siloxane with added emulsifier, biocides and stabilizers. The solids content of the emulsion SE 50-94M is 50%; its color ranges from white to light gray. Application: the emulsion of oligo methyl hydride siloxane has properties and characteristics similar to those of methyl hydride siloxane, and is likewise used to impart water repellency to various materials. However, because it is a water emulsion, it can also be applied as an additive in the production of solutions and mixtures, that is, by the volumetric method: for concrete, asbestos, gypsum, ceramic and porcelain; in the production of waterproof papers and leather; in the production of water-resistant fabrics; by the volumetric method in the manufacture of paving tiles, slabs, curbs and fences of various silicate materials; as a plasticizer in the preparation of plaster, lime and cement mortars; and as an air-entraining admixture in the preparation of cement mortar. A further liquid product is a mixture of tetraethoxysilane and polyethoxysiloxanes. 
Application Metal manufacture: binding agent in the manufacture of ceramic molds for precision core-mold casting; manufacture of rods exposed to high temperatures; manufacture of non-stick paints. Textile industry: feltproofing of woolen cloths; abatement of carpet shrinkage; anti-rot and anti-dust protection of carpets; impregnating compound for filter cloths. Construction engineering: hydrophobization of construction materials; treatment of coated surfaces; porosity-decreasing impregnation of concrete; manufacture of acid-resistant cement. Glasswork and ceramics: antireflection treatment of optical glass; application of light-diffusing coatings to electric light bulbs; binding agent for ceramic mixtures resistant to strongly corrosive media; manufacture of fireproof materials withstanding temperatures of about 1750 °C and stresses above 127 kg/cm². Coating industry: paint additives forming quick-drying, thermostable and water-resistant coats with constant gloss. Chemistry Commercially available siliconates include potassium methyl siliconate (CAS 31795-24-1, CH5KO3Si) and sodium methyl siliconate (CAS 16589-43-8, CH5NaO3Si). These are supplied as a concentrate in water with an active content of between 30 and 40% by weight. This solution is further diluted in water prior to application by spraying, dipping or rolling onto a mineral building material, such as brickwork, to make the surface water repellent. The diluted solution is clear and stable, with a high pH of 13 to 14. When applied to a surface, the siliconate reacts with carbon dioxide in the air to form an insoluble, water-resistant treatment within 24 hours. CH5KO3Si + silanol-functional substrate (Si–OH) → CH4O3Si + KOH. The methyl group has now attached itself to the substrate. 2 KOH + CO2 → K2CO3 + H2O. The salts formed by this reaction are often the cause of white efflorescence when too much of the solution is applied to the surface. References See also Hydrophobe Amphiphile Froth flotation Hydrophile Hydrophobic effect Hydrophobicity scales Superhydrophobe Superhydrophobic coating Chemical properties Intermolecular forces Articles containing video clips
Silicon organic water repellent
[ "Chemistry", "Materials_science", "Engineering" ]
1,066
[ "Molecular physics", "Materials science", "nan", "Intermolecular forces" ]
47,623,232
https://en.wikipedia.org/wiki/Dibromine%20monoxide
Dibromine monoxide is the chemical compound composed of bromine and oxygen with the formula Br2O. It is a dark brown solid which is stable below −40 °C and is used in bromination reactions. It is similar to dichlorine monoxide, the monoxide of its halogen neighbor one period higher on the periodic table. The molecule is bent, with C2v molecular symmetry. The Br−O bond length is 1.85 Å and the Br−O−Br bond angle is 112°, similar to dichlorine monoxide. Reactions Dibromine monoxide can be prepared by reacting bromine vapor or a solution of bromine in carbon tetrachloride with mercury(II) oxide at low temperatures: 2 Br2 + 2 HgO → HgBr2·HgO + Br2O It can also be formed by thermal decomposition of bromine dioxide or by passing an electrical current through a 1:5 mixture of bromine and oxygen gases. References Bromine(I) compounds Oxides Nonmetal halides
Dibromine monoxide
[ "Chemistry" ]
218
[ "Inorganic compounds", "Oxides", "Inorganic compound stubs", "Salts" ]