Dataset schema (column: type, observed min to max):
id: int64 (39 to 79M)
url: string (length 32 to 168)
text: string (length 7 to 145k)
source: string (length 2 to 105)
categories: list (length 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (length 0 to 27)
76,521,746
https://en.wikipedia.org/wiki/Thorium%20triiodide
Thorium triiodide is a binary inorganic compound of thorium metal and iodine with the chemical formula ThI3. Synthesis Th metal is heated with iodine in a vacuum at 800 °C. Physical properties The compound forms black or grey crystals. Chemical properties ThI3 reacts with water. References Thorium compounds Nuclear materials Iodides Actinide halides
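A balanced equation consistent with the synthesis described above would be the following sketch; the reagent (elemental iodine) and the stoichiometry are inferred from the formula ThI3 and are not stated explicitly in the source.

```latex
% Balanced equation inferred from the formula ThI3 and the stated
% conditions; the source does not give the reagent or stoichiometry.
2\,\mathrm{Th} + 3\,\mathrm{I_2}
  \;\xrightarrow{\;800\,^{\circ}\mathrm{C},\ \text{vacuum}\;}\;
  2\,\mathrm{ThI_3}
```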
Thorium triiodide
[ "Physics" ]
68
[ "Materials", "Nuclear materials", "Matter" ]
76,527,771
https://en.wikipedia.org/wiki/Thermophone
A thermophone is a type of transducer that converts an electrical signal into heat, which then becomes sound. It can be thought of as a type of loudspeaker that uses heat fluctuations to produce sound, instead of mechanical vibration. The basic principle of the thermophone has been known since the 19th century. Thermophones have been used to calibrate acoustical apparatus (like microphones) since the 20th century. In recent years, the name thermoacoustic speaker has also been used. Beginnings Thermoacoustics is the study of the interaction between heat and sound. It is the basis of the thermophone. Bryan Higgins in 1802 reported "singing flames" which occurred when the necks of jars were put over a hydrogen gas flame. Sondhauss (1850) and Rijke (1859) performed further experiments. A theory of thermoacoustics was produced by Lord Rayleigh in 1878. The theory and practice of creating sound with electric heat emerged in the late 19th century. In 1880, William Henry Preece observed that, upon connecting a microphone transmitter to a platinum wire, sounds were produced. In 1917, Harold D. Arnold and I. B. Crandall of Bell Labs developed a quantitative theory for the thermophone. Since then, thermophones have been used as a precision device for microphone calibration. However, they did not see widespread use elsewhere due to their poor efficiency. Description When alternating current is passed through a thin conductor, that conductor periodically heats up and cools down following the variations in current strength. This periodic heating and cooling creates temperature waves which the conductor propagates into the surroundings. As the temperature waves propagate away from the conductor, the thermal expansion and contraction of the transmission medium (e.g. air) produces corresponding sound waves. An ideal thermophone is made of a conductor which is very thin and has a small heat capacity. Modern thermophones In 1999, Shinoda and others presented a porous doped silicon thermophone for ultrasonic emission. In 2008, Xiao et al. reported a thermophone made of carbon nanotubes. Since then, there has been a resurgence of research into thermophones and thermoacoustics. New materials for thermophones are being explored, and thermophones have been created using VLSI processes (as integrated circuits are). References Transducers Electrical components 19th-century inventions Solid state engineering
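One consequence of the Joule-heating mechanism just described is frequency doubling: because heating power goes as the square of the current, a pure AC drive at frequency f produces heat, and hence sound, at 2f, unless a DC bias is superimposed. The sketch below is not from the source; the resistance, drive level, and frequency are assumed values chosen only to illustrate the effect.

```python
import numpy as np

# Minimal sketch of the thermophone drive principle (not from the source):
# Joule heating P(t) = I(t)^2 * R means a pure AC drive at frequency f
# heats the wire at 2f, while adding a DC bias restores a component at f.
R = 10.0                       # wire resistance, ohms (illustrative value)
f = 1000.0                     # drive frequency, Hz (illustrative value)
t = np.linspace(0, 5 / f, 5000)

i_ac = 0.1 * np.sin(2 * np.pi * f * t)   # pure AC drive
i_biased = 0.2 + i_ac                    # same AC drive with a DC bias

for name, i in [("pure AC", i_ac), ("DC-biased", i_biased)]:
    p = i**2 * R                                  # instantaneous heating power
    spectrum = np.abs(np.fft.rfft(p - p.mean()))  # spectrum of the fluctuation
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    peak = freqs[np.argmax(spectrum)]
    print(f"{name}: dominant heating frequency ~ {peak:.0f} Hz")
# pure AC   -> ~2000 Hz (frequency doubling)
# DC-biased -> ~1000 Hz (component restored at the drive frequency)
```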
Thermophone
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
506
[ "Electrical components", "Electronic engineering", "Condensed matter physics", "Electrical engineering", "Solid state engineering", "Components" ]
76,529,639
https://en.wikipedia.org/wiki/Thorium%20trifluoride
Thorium trifluoride is a binary inorganic compound of thorium metal and fluorine with the chemical formula ThF3. Synthesis Reaction of thorium metal with thorium tetrafluoride: Th + 3 ThF4 → 4 ThF3. References Thorium compounds Nuclear materials Fluorides Actinide halides
Thorium trifluoride
[ "Physics", "Chemistry" ]
55
[ "Salts", "Materials", "Nuclear materials", "Fluorides", "Matter" ]
76,530,711
https://en.wikipedia.org/wiki/Fertility%20Clinic%20Success%20Rate%20and%20Certification%20Act
The Fertility Clinic Success Rate and Certification Act (FCSRCA) of 1992 is a United States law that requires all assisted reproductive technology (ART) clinics to report pregnancy success rate data to the Centers for Disease Control and Prevention (CDC) in a standardized manner, and requires the CDC to publish these pregnancy success rates. FCSRCA is the primary consumer protection regulation for in-vitro fertilization in the US. Though participation in FCSRCA is mandatory, there is no penalty for non-participation. In 2024, approximately 90% of fertility clinics participated, though the results are susceptible to manipulation by cherry-picking couples with a higher chance of conception. The CDC annually audits a sampling of participating clinics for validity. Criticism The FCSRCA has been criticized for its lack of enforceability and for being insufficient. Currently, the fertility industry in the United States is largely self-regulated, with voluntary guidelines established by the American Society for Reproductive Medicine (ASRM). FCSRCA also does not collect embryo data, including how many embryos are created with each IVF cycle, nor how many are discarded, frozen, or implanted. Further reading References External links National Assisted Reproductive Technology (ART) Surveillance System, CDC Assisted reproductive technology Healthcare in the United States Fertility medicine
Fertility Clinic Success Rate and Certification Act
[ "Biology" ]
252
[ "Assisted reproductive technology", "Medical technology" ]
59,872,034
https://en.wikipedia.org/wiki/FHI-aims
FHI-aims (Fritz Haber Institute ab initio materials simulations) is a software package for computational molecular and materials science written in Fortran. It uses density functional theory and many-body perturbation theory to simulate chemical and physical properties of atoms, molecules, nanostructures, solids, and surfaces. Originally developed at the Fritz Haber Institute in Berlin, the ongoing development of the FHI-aims source code is now driven by a worldwide community of collaborating research institutions. Overview The FHI-aims software package is an all-electron, full-potential electronic structure code utilizing numeric atom-centered basis functions for its electronic structure calculations. The localized basis set enables the accurate treatment of all electrons on the same footing in periodic and non-periodic systems without relying on approximations for the core states, such as pseudopotentials. Importantly, the basis sets enable high numerical accuracy on par with the best available all-electron reference methods while remaining scalable to system sizes of up to several thousand atoms. In order to achieve this for bulk solids, surfaces or other low-dimensional systems and molecules, the choice of basis functions is crucial. The workload of the simulations is efficiently distributable for parallel computing using the MPI communication protocol. The code is routinely used on platforms ranging from laptops to distributed-parallel supercomputers with ten thousand CPUs, and the scalability of the code has been tested up to hundreds of thousands of CPUs. The primary production methods of FHI-aims are density functional theory as well as many-body methods and higher-level quantum chemistry approaches. For the exchange-correlation treatment, local (LDA), semi-local (e.g., PBE, PBEsol), meta-GGA, and hybrid (e.g., HSE06, B3LYP) functionals have been implemented. The resulting orbitals can be used within the framework of many-body perturbation theory, such as Møller-Plesset perturbation theory or the GW approximation. Moreover, thermodynamic properties of the molecules and solids are accessible via Born-Oppenheimer molecular dynamics and path integral molecular dynamics methods. The first step is to expand the Kohn-Sham orbitals into a set of basis functions. Since FHI-aims is an all-electron full-potential code that is computationally efficient without compromising accuracy, the choice of basis function is crucial in order to achieve this accuracy. Therefore, FHI-aims is based on numerically tabulated atom-centered orbitals (NAOs) of the form φ_i(r) = [u_i(r)/r] Y_lm(Ω). As the name implies, the radial shape u_i(r) is numerically tabulated and, therefore, fully flexible. This allows the creation of optimized element-dependent basis sets that are as compact as possible while retaining a high and transferable accuracy in production calculations up to meV-level total energy convergence. To obtain real-valued φ_i, Y_lm here denotes the real parts (m = 0, ..., l) and imaginary parts (m = −l, ..., −1) of the complex spherical harmonics, with l and m an implicit function of the radial function index i. History The first line of code of the actual FHI-aims code was written in late 2004, using the atomic solver employed in the Fritz Haber Institute pseudopotential program package fhi98PP as a foundation to obtain radial functions for use as basis functions. 
The first developments benefitted heavily from the excellent set of numerical technologies described in several publications by Bernard Delley and coworkers in the context of the DMol3 code, as well as from many broader methodological developments published in the electronic structure theory community over the years. Initial efforts in FHI-aims focused on developing a complete numeric atom-centered basis set library for density-functional theory, ranging from "light" settings to highly accurate (few meV/atom) total energies, available for the elements up to nobelium (Z=102) across the periodic table. By 2006, work on parallel functionality, support for periodic boundary conditions, total energy gradients (forces) and on exact exchange and many-body perturbation theory had commenced. On May 18, 2009, an initial formal point release of the code, "051809", was made available and laid the foundation for broadening the user and developer base of the code. See also List of quantum chemistry and solid-state physics software References Fortran software Computational chemistry software Computational physics Density functional theory software Physics software
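As an illustration of what "numerically tabulated" means in practice, here is a hedged sketch, not FHI-aims code and not its file formats: a radial function u(r) known only on a grid is spline-interpolated and combined with a spherical harmonic to evaluate an NAO-type basis function φ(r) = u(r)/r · Y_lm at arbitrary points. The hydrogen 1s form for u(r) is a stand-in for a real tabulated radial function.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.special import sph_harm

# Sketch of a numerically tabulated atom-centered orbital (NAO),
# phi(r) = u(r)/r * Y_lm(theta, phi).  This is NOT FHI-aims code;
# the hydrogen 1s radial part u(r) = 2*r*exp(-r) (atomic units)
# merely stands in for a tabulated radial function.
r_grid = np.linspace(1e-6, 20.0, 400)    # linear grid for brevity
u_tab = 2.0 * r_grid * np.exp(-r_grid)   # "tabulated" values of u(r)
u = CubicSpline(r_grid, u_tab)           # interpolate the table

def nao(x, y, z, l=0, m=0):
    """Evaluate phi(r) = u(r)/r * Y_lm at a Cartesian point."""
    r = np.sqrt(x*x + y*y + z*z)
    theta = np.arccos(z / r)             # polar angle
    phi_az = np.arctan2(y, x)            # azimuthal angle
    # scipy's sph_harm(m, l, azimuth, polar) is the complex Y_lm;
    # for m = 0 it is already real, which suffices for this sketch.
    return u(r) / r * np.real(sph_harm(m, l, phi_az, theta))

print(nao(0.5, 0.0, 0.5))                # value of the s-type NAO at one point
```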
FHI-aims
[ "Physics", "Chemistry" ]
916
[ "Computational chemistry software", "Chemistry software", "Computational physics", "Computational chemistry", "Density functional theory software", "Physics software" ]
56,505,339
https://en.wikipedia.org/wiki/Supersymmetric%20localization
Supersymmetric localization is a method to exactly compute correlation functions of supersymmetric operators in certain supersymmetric quantum field theories, such as the partition function, supersymmetric Wilson loops, etc. The method can be seen as an extension of the Berline–Vergne–Atiyah–Bott formula (or the Duistermaat–Heckman formula) for equivariant integration to path integrals of certain supersymmetric quantum field theories. Although the method cannot be applied to general local operators, it does provide the full nonperturbative answer for the restricted class of supersymmetric operators. It is a powerful tool which is currently extensively used in the study of supersymmetric quantum field theory. The method, built on previous works by E. Witten, in its modern form involves subjecting the theory to a nontrivial supergravity background, such that the fermionic symmetry preserved by the latter can be used to perform the localization computation. Applications range from the derivation of Seiberg–Witten theory and the proof of the Erickson–Semenoff–Zarembo and Drukker–Gross conjectures to checks of various dualities and precision tests of the AdS/CFT correspondence. References Supersymmetric quantum field theory
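The core of the localization argument can be stated compactly. The following is a standard textbook-style sketch, not taken from the article: given a fermionic symmetry Q with QS = 0 and a fermionic functional V with Q²V = 0, the deformed partition function is independent of the deformation parameter t.

```latex
% Schematic localization argument (standard sketch, assuming
% Q S = 0 and Q^2 V = 0 for a fermionic symmetry Q):
Z(t) = \int \mathcal{D}\phi\; e^{-S[\phi] - t\,QV[\phi]},
\qquad
\frac{dZ}{dt} = -\int \mathcal{D}\phi\; Q\!\left(V\, e^{-S - t\,QV}\right) = 0,
```

so Z(t) = Z(0), and taking t → ∞ collapses the exact path integral onto the saddle points of QV, leaving a sum of classical contributions weighted by one-loop determinants.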
Supersymmetric localization
[ "Physics" ]
277
[ "Supersymmetric quantum field theory", "Quantum physics stubs", "Quantum mechanics", "Supersymmetry", "Symmetry" ]
56,511,089
https://en.wikipedia.org/wiki/Braking%20test%20track
The braking test track is a fundamental element of the vehicle industry proving grounds, designed for conducting vehicle braking system operability and efficiency tests under various braking circumstances. Such tests are highly significant for road safety. Testing is an indispensable step before newly developed braking systems can be manufactured and used under real traffic conditions. The effects of all factors influencing the process of braking, other than human factors, can be thoroughly tested in an environment designed to this end, i.e. on a braking test track. Structure and characteristics Several lanes enabling braking on surfaces with different material characteristics are necessary for testing as many of the circumstances that directly influence the braking effect as possible. Consequently, most of the traditional or so-called classic test tracks have several (5 to 8) differently paved braking lanes. Most test tracks attempt to provide surfaces with both higher and lower grip coefficients for testing and development companies. Asphalt (with a higher and lower coefficient), basalt, ceramic and concrete are frequently used materials for surface paving. Certain test tracks even have chequered surfaces with alternating tiles of higher and lower coefficients, increasing the challenge for intelligent braking systems. The number of settings provided by the available surfaces may be doubled by wetting them, for which mostly a 1–2 mm thick water coating is used. So-called aquaplaning lanes are also frequently provided for testers. The triggering of the phenomenon of aquaplaning, however, requires the provision of higher water levels. It is also important to provide for the collection of the water spilt onto the wettable braking surfaces, which is in most cases managed by using ditches. In order to enable the testing of extreme braking situations, e.g. emergency braking from high speed, it is necessary for the lanes to be designed with a length of typically 150–250 m. Additionally, acceleration lanes of the appropriate length are also required for reaching high speed, which enable even heavy-duty vehicles to reach 100 km/h. This way it becomes possible to test the braking systems of trucks and buses. An adequate amount of data and test results, obtained by repeated tests, is necessary to draw conclusions. In order to perform repeated tests fluently and safely, it is necessary to design a route separated from the test (braking) zone that allows the test vehicle to return to the beginning of the acceleration lane quickly and safely. Goal and utilization The significance of braking system testing The braking systems of vehicles, especially those of motorised vehicles, play a highly important role in road safety. The activation of the braking systems decelerates and stops the vehicle (or keeps it in an immobile standing position, e.g. with the handbrake). Depending on the country, different national or occasionally international regulations apply to braking systems. The braking effect of an installed system depends on several factors, e.g. on human factors (routine, health condition and mental state of the driver), technical factors (structure of the system, technical condition of the individual components), weather (e.g. wet, icy surfaces), and the condition and quality of the pavement. Braking test tracks enable the testing of such technical, meteorological and pavement factors. 
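A rough worked example, not from the source, shows why lane lengths of 150–250 m and grip variation matter, using the idealized stopping-distance formula d = v²/(2µg). The friction coefficients below are illustrative assumptions, not measured values for any particular track surface.

```python
# Idealized stopping distance d = v^2 / (2 * mu * g) from 100 km/h.
# The friction coefficients are illustrative assumptions only.
G = 9.81                      # gravitational acceleration, m/s^2
v = 100 / 3.6                 # 100 km/h in m/s

surfaces = {
    "dry asphalt (high grip)": 0.9,
    "wet asphalt": 0.5,
    "polished basalt, wetted (low grip)": 0.2,
}

for name, mu in surfaces.items():
    d = v**2 / (2 * mu * G)
    print(f"{name}: mu = {mu:.1f} -> ~{d:.0f} m")
# dry asphalt -> ~44 m; wet asphalt -> ~79 m; wet basalt -> ~197 m,
# which is why low-grip lanes approach the 150-250 m lengths cited above.
```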
Traditional tests Most traditional test tracks are equipped with surfaces suitable for testing ABS, ATC and ESP systems as well as brake-force intervention systems connected to all braking systems. At the same time such surfaces are also suitable for the testing of other active systems. Automatic Emergency Braking System tests The number of necessary vehicle industry tests has multiplied with the implementation of ADAS (advanced driver-assistance systems). The combination of several functions has made the planned tests more complex too. Recently, an increasing number of cars have been equipped with emergency braking systems. Such systems need to be capable of sensing an obstacle, which triggers the command that initiates the braking; at the same time, however, the appropriate intervention (braking) is also indispensable. Protocols for such tests, which all developers strive to comply with (Euro NCAP active safety system tests), already exist; these tests are conducted, however, under highly limited, well-predefined circumstances. It is practical to carry out such tests on several different surfaces. Platooning tests Platooning tests, i.e. testing vehicles advancing in convoy, pose a further challenge. A typical situation to be tested is when the first vehicle produces a signal for intervention (which could even be an intervention based on sensing), and then transmits it to the following vehicles, which need to receive and process the signal with the least possible latency, and initiate the necessary intervention. Other tests It is not unusual to use a braking test track for testing the efficiency of the braking system of a vehicle crossing several different lanes at an angle. Braking test tracks on existing proving grounds Most proving grounds in Europe have braking test tracks. The best-known examples are the Boxberg Proving Ground constructed by Bosch (Germany), Automotive Testing Papenburg GmbH constructed by Daimler (Germany), Applus Idiada (Spain), and the Aldenhoven Testing Center (Germany). The Zalaegerszeg Test Track currently under construction in Hungary will also feature a braking test track. Boxberg Proving Ground, Boxberg, Germany The braking test track of the Boxberg Proving Ground provides testing opportunities under various grip conditions on its seven lanes differing in quality and grip and its track sections equipped with track wetting functions. The surface of the braking test track is partially shared with one of the acceleration lanes of the dynamic platform. Available lanes: chequered lane, asphalt, ceramic, basalt (polished), concrete, aquaplaning lane, basalt concrete. Automotive Testing Papenburg GmbH, Papenburg, Germany The 300 m long braking test track of the Papenburg Test Ground provides testing opportunities on eight lanes differing in quality and grip, which can be wetted as preferred, and which are connected through a 280 m long and 30 m wide acceleration lane to the oval platform, thus enabling acceleration to higher speeds. The braking test track is closed with a 150 m long asphalt safety surface. Available lanes: chequered lane, asphalt (100 m aquaplaning lane), mixed basalt & asphalt, basalt (polished), asphalt, concrete, “blue asphalt”. Aldenhoven Testing Center, Aldenhoven, Germany The braking test track at the Aldenhoven Testing Center is 150 m long and contains an asphalt lane and a ceramic pavement lane. Both lanes are 4 m wide and may be wetted as preferred. The braking test lane is surrounded on both sides by a safety zone. 
A 200 m access acceleration lane is also part of the braking test track. Applus Idiada, Tarragona, Spain The braking test track at Applus Idiada is divided into two separate zones. Zone 1 may be used by only one vehicle at a time. The surface used for the braking tests is 250 m long and has five lanes with different kinds of pavement. The lanes may be wetted as preferred. The braking test track is closed with a safety area. Lanes: concrete, basalt, asphalt, ceramic, aquaplaning lane Zone 2 may be used by two vehicles simultaneously, as there is a dividing area between the two lanes used for braking tests. Both lanes are 250 m long and 5 m wide and are paved with asphalt; one of them, however, is not wettable. Zala ZONE Vehicle Industry Test Track, Zalaegerszeg, Hungary The braking test track under construction at the Zala ZONE Vehicle Industry Test Track is designed for testing ABS, ATC and ESP systems; it has six differently paved lanes, which can be wetted separately by the built-in wetting and draining system. Its 700 m acceleration lane and 200 m long braking surface enable the testing of longer combination vehicles as well. References Road transport Vehicle technology Road test tracks Vehicle industry Automotive industry
Braking test track
[ "Engineering" ]
1,609
[ "Vehicle technology", "Mechanical engineering by discipline" ]
67,779,688
https://en.wikipedia.org/wiki/Transition%20metal%20acyl%20complexes
Transition metal acyl complexes are organometallic complexes containing one or more acyl (RCO) ligands. Such compounds occur as transient intermediates in many industrially useful reactions, especially carbonylations. Structure and bonding Acyl complexes are usually low-spin, i.e. spin-paired. Monometallic acyl complexes adopt one of two related structures, C-bonded and η2-C,O-bonded. These forms sometimes interconvert. For the purpose of electron counting, C-bonded acyl ligands count as 1-electron ligands, akin to pseudohalides. η2-Acyl ligands count as 3-electron "L-X" ligands. Bridging acyl ligands are also well known, in which the carbon bonds to one metal and the oxygen bonds to a second metal. One example is the bis(μ-acetyl) complex [(CO)3Fe(C(O)CH3)2Fe(CO)3]2−. Synthesis Metal acyls are often generated by the reaction of low-valent metal centers with acyl chlorides. Illustrative is the oxidative addition of acetyl chloride to Vaska's complex, converting square planar Ir(I) to octahedral Ir(III): IrCl(CO)(PPh3)2 + CH3C(O)Cl → IrCl2(C(O)CH3)(CO)(PPh3)2 Some acyl complexes can be produced from aldehydes by C-H oxidative addition. This reaction underpins hydroacylation. In a related reaction, metal carbonyl anions are acylated by acyl chlorides: (C5H5)Fe(CO)2Na + CH3C(O)Cl → (C5H5)Fe(CO)2COCH3 + NaCl Another important route to metal acyls entails insertion of CO into a metal-alkyl bond. In this pathway, the alkyl ligand migrates to an adjacent CO ligand. This reaction is a step in the hydroformylation process. Coordinatively saturated metal carbonyls react with organolithium reagents to give acyls. This reaction proceeds by attack of the alkyl nucleophile on the electrophilic CO ligand. Reactions In a practical sense, the most important reaction of metal acyls is their detachment by reductive elimination of aldehydes from acyl metal hydrides: LnM(C(O)R)(H) → LnM + RCHO This reaction is the final step of hydroformylation. Another important reaction is decarbonylation. This reaction requires that the acyl complex be coordinatively unsaturated: LnM(C(O)R) → Ln−1M(C(O)R) + L Ln−1M(C(O)R) → Ln−1M(CO)(R) The oxygen center of acyl ligands is basic. This aspect is manifested in O-alkylations, which convert acyl complexes to alkoxycarbene complexes. Applications Metal acyl complexes participate in several commercial processes, including hydroformylation, acetic acid synthesis, the Eastman acetic anhydride process, and ethylene-carbon monoxide copolymerization. A reaction involving metal acyl complexes of occasional value in organic synthesis is the Tsuji–Wilkinson decarbonylation reaction of aldehydes. References Organometallic chemistry Transition metals Coordination chemistry Ligands
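As a worked example of the 1-electron counting rule stated above (using the neutral/covalent counting method with commonly tabulated ligand contributions), the acyl product of the acylation reaction shown above, (C5H5)Fe(CO)2(COCH3), reaches an 18-electron count:

```python
# Neutral (covalent) electron count for CpFe(CO)2(C(O)CH3) -- a sketch
# using commonly tabulated ligand contributions; the C-bonded acyl
# counts as a 1-electron X-type ligand, as stated in the text.
contributions = {
    "Fe (group 8 metal, neutral)": 8,
    "eta5-C5H5 (neutral radical)": 5,
    "2 x CO (2e each)": 4,
    "C-bonded acyl C(O)CH3": 1,
}
total = sum(contributions.values())
for ligand, e in contributions.items():
    print(f"{ligand:32s} {e:2d} e-")
print(f"{'total':32s} {total:2d} e-")   # 18-electron complex
```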
Transition metal acyl complexes
[ "Chemistry" ]
709
[ "Ligands", "Organometallic chemistry", "Coordination chemistry" ]
55,053,454
https://en.wikipedia.org/wiki/Classical%20Electrodynamics%20%28book%29
Classical Electrodynamics is a textbook written by theoretical particle and nuclear physicist John David Jackson. The book originated as lecture notes that Jackson prepared for teaching graduate-level electromagnetism first at McGill University and then at the University of Illinois at Urbana-Champaign. Intended for graduate students, and often known as Jackson for short, it has been a standard reference on its subject since its first publication in 1962. The book is notorious for the difficulty of its problems, and its tendency to treat non-obvious conclusions as self-evident. A 2006 survey by the American Physical Society (APS) revealed that 76 out of the 80 U.S. physics departments surveyed require all first-year graduate students to complete a course using the third edition of this book. Overview Advanced topics treated in the first edition include magnetohydrodynamics, plasma physics, the vector form of Kirchhoff's diffraction theory, special relativity, and radiation emitted by moving and colliding charges. Jackson's choice of these topics is aimed at students interested in theoretical physics in general and nuclear and high-energy physics in particular. The necessary mathematical methods include vector calculus, ordinary and partial differential equations, Fourier series, Green's functions, and some special functions (the Bessel functions and Legendre polynomials). In the second edition, some new topics were added, including the Stokes parameters, the Kramers–Kronig dispersion relations, and the Sommerfeld–Brillouin problem. The two chapters on special relativity were rewritten entirely, with the basic results of relativistic kinematics being moved to the problems and replaced by a discussion on the electromagnetic Lagrangian. Materials on transition and collision radiation and multipole fields were modified. A total of 117 new problems were added. While the previous two editions use Gaussian units, the third uses SI units, albeit for the first ten chapters only. Jackson wrote that this is in acknowledgement of the fact that virtually all undergraduate textbooks on electrodynamics employ SI units and admitted he had "betrayed" an agreement he had with Edward Purcell that they would support each other in the use of Gaussian units. In the third edition, some materials, such as those on magnetostatics and electromagnetic induction, were rearranged or rewritten, while others, such as discussions of plasma physics, were eliminated altogether. One major addition is the use of numerical techniques. More than 110 new problems were added. 
Table of contents (third edition) Introduction and Survey Chapter 1: Introduction to Electrostatics Chapter 2: Boundary-value Problems in Electrostatics I Chapter 3: Boundary-value Problems in Electrostatics II Chapter 4: Multipoles, Electrostatics of Macroscopic Media, Dielectrics Chapter 5: Magnetostatics, Faraday's Law, Quasi-static Fields Chapter 6: Maxwell Equations, Macroscopic Electromagnetism, Conservation Laws Chapter 7: Plane Electromagnetic Waves and Wave Propagation Chapter 8: Waveguides, Resonant Cavities, and Optical Fibers Chapter 9: Radiating Systems, Multipole Fields and Radiation Chapter 10: Scattering and Diffraction Chapter 11: Special Theory of Relativity Chapter 12: Dynamics of Relativistic Particles and Electromagnetic Fields Chapter 13: Collisions, Energy Loss, and Scattering of Charged Particles, Cherenkov and Transition Radiation Chapter 14: Radiation by Moving Charges Chapter 15: Bremsstrahlung, Method of Virtual Quanta, Radiative Beta Processes Chapter 16: Radiation Damping, Classical Models of Charged Particles Appendix on Units and Dimensions Bibliography Index Editions Reception According to a 2015 review of Andrew Zangwill's Modern Electrodynamics in the American Journal of Physics, "[t]he classic electrodynamics text for the past four decades has been the monumental work by J. D. Jackson, the book from which most current-generation physicists took their first course." First edition L.C. Levitt, who worked at the Boeing Scientific Research Laboratory, commented that the first edition offers a lucid, comprehensive, and self-contained treatment of electromagnetism going from Coulomb's law of electrostatics all the way to self-fields and radiation reaction. However, it does not consider electrodynamics in media with spatial dispersion and radiation scattering in bulk matter. He recommended Electrodynamics of Continuous Media by Lev Landau and Evgeny Lifshitz as a supplement. Second edition Reviewer Royce Zia from the Virginia Polytechnic Institute wrote that according to many students and professors, a major problem with the first edition of the book was how mathematically heavy the book was, which distracted students from the essential physics. In the second edition, many issues were addressed, more insightful discussions added and misleading diagrams removed. Extended chapters on the applications of electromagnetism brought students closer to research. Third edition Physicist Wayne Saslow from Texas A&M University observed that some important new applications were added to the text, such as fiber optics and dielectric waveguides, which are crucial in modern communications technology, and synchrotron light sources, responsible for advances in condensed-matter physics, and that fragments of the excised chapter on magnetohydrodynamics and plasma physics were scattered throughout the text. Saslow argued that Jackson's broad background in electrical engineering, nuclear and high-energy physics served him well in writing this book. Ronald Fox, a professor of physics at the Georgia Institute of Technology, opined that this book compares well with Classical Electricity and Magnetism by Melba Phillips and Wolfgang Panofsky, and The Classical Theory of Fields by Lev Landau and Evgeny Lifshitz. Classical Electrodynamics is much broader and has many more problems for students to solve. Landau and Lifshitz is simply too dense to be used as a textbook for beginning graduate students. 
However, the problems in Jackson do not pertain to other branches of physics, such as condensed-matter physics and biophysics. For optimal results, one must fill in the steps between equations and solve a lot of practice problems. Suggested readings and references are valuable. The third edition retains the book's reputation for the difficulty of the exercises it contains, and for its tendency to treat non-obvious conclusions as self-evident. Fox stated that Jackson is the most popular text on classical electromagnetism in the post-war era and that the only other graduate book of comparable fame is Classical Mechanics by Herbert Goldstein. However, while Goldstein's text has been facing competition from Vladimir Arnold's Mathematical Methods of Classical Mechanics, Jackson remained unchallenged (as of 1999). Fox took an advanced course on electrodynamics in 1965 using the first edition of Jackson and taught graduate electrodynamics for the first time in 1978 using the second edition. Jagdish Mehra, a physicist and historian of science, wrote that Jackson's text is not as good as the book of the same name by Julian Schwinger et al. Whereas Jackson treats the subject as a branch of applied mathematics, Schwinger integrates the two, illuminating the properties of the mathematical objects used with physical phenomena. Unlike Jackson, Schwinger employs variational methods and Green's functions extensively. Mehra took issue with the use of SI units in the third edition, which he considered to be more appropriate for engineering than for theoretical physics. More specifically, he argued that electric and magnetic fields should not have different units because they are components of the electromagnetic field strength tensor. Jackson himself responded to Mehra's review. Andrew Zangwill, a physicist at the Georgia Institute of Technology, noted the mixed reviews of Jackson after surveying the literature and reviews on Amazon. He pointed out that Jackson often leaves out the details in going from one equation to the next, and filling them in is often quite difficult. He stated that four different instructors at his school had worked on an alternative to Jackson using lecture notes developed over roughly a decade with the goal of strengthening the student's understanding of electrodynamics rather than treating it as a topic of applied mathematics. Thomas Peters from the University of Zürich argued that while Jackson has historically been training students to perform difficult mathematical calculations, a task that is undoubtedly important, there is much more to electrodynamics than this. He wrote that Modern Electrodynamics by Andrew Zangwill offers a "stimulating fresh look" on this subject. James Russ, an experimental high-energy physicist at Carnegie Mellon University, was of the opinion that the examples are challenging, and the fine points of physics are often left as exercises. He added that Modern Electrodynamics by Andrew Zangwill is a better choice for beginning graduate students, but Jackson offers more comprehensive coverage and remains a fine reference. He recommended having both on the shelf. See also List of textbooks in electromagnetism A Treatise on Electricity and Magnetism Introduction to Electrodynamics (textbook) Classical Mechanics (textbook) General Relativity (textbook) Notes References Further reading Electromagnetism Electrodynamics Physics textbooks 1962 non-fiction books
Classical Electrodynamics (book)
[ "Physics", "Mathematics" ]
1,852
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions", "Electrodynamics", "Dynamical systems" ]
55,053,716
https://en.wikipedia.org/wiki/GW170817
GW170817 was a gravitational wave (GW) signal observed by the LIGO and Virgo detectors on 17 August 2017, originating from the shell elliptical galaxy NGC 4993, about 140 million light years away. The signal was produced by the last moments of the inspiral process of a binary pair of neutron stars, ending with their merger. To date, it is the only GW detection to be definitively correlated with any electromagnetic observation. Unlike the five prior GW detections—which were of merging black holes and thus not expected to have detectable electromagnetic signals—the aftermath of this merger was seen across the electromagnetic spectrum by 70 observatories on 7 continents and in space, marking a significant breakthrough for multi-messenger astronomy. The discovery and subsequent observations of GW170817 were given the Breakthrough of the Year award for 2017 by the journal Science. The gravitational wave signal, designated GW170817, had a duration of approximately 100 seconds, and showed the characteristic intensity and frequency expected of the inspiral of two neutron stars. Analysis of the slight variation in arrival time of the GW at the three detector locations (two LIGO and one Virgo) yielded an approximate angular direction to the source. Independently, a short gamma-ray burst (sGRB) of around 2 seconds, designated GRB 170817A, was detected by the Fermi and INTEGRAL spacecraft beginning 1.7 seconds after the GW merger signal. These detectors have very limited directional sensitivity, but indicated a large area of the sky which overlapped the gravitational wave position. The co-occurrence confirmed a long-standing hypothesis that neutron star mergers constitute an important class of sGRB progenitor events. An intense observing campaign was prioritized, to scan the region indicated by the gravitational wave detection for the expected emission at optical wavelengths. During this search, 11 hours after the signal, an astronomical transient SSS17a, later designated kilonova AT 2017gfo, was observed in the galaxy NGC 4993. It was captured by numerous telescopes, from radio to X-ray wavelengths, over the following days and weeks, and was found to be a fast-moving, rapidly-cooling cloud of neutron-rich material, as expected of debris ejected from a neutron-star merger. In October 2018, astronomers reported that, in retrospect, an sGRB event detected in 2015 (GRB 150101B) may represent an earlier case of the same astrophysics reported for GW170817. The similarities between the two events in terms of gamma ray, optical, and x-ray emissions, as well as to the nature of the associated host galaxies, were considered "striking", suggesting that the earlier event may also be the result of a neutron star merger, and that together these may signify a hitherto-unknown class of kilonova transients, making kilonovae more diverse and common in the universe than previously understood. Later research construed another sGRB predating GW170817 also to be a kilonova, again based on its resemblance to the GW170817 signature. Announcement The observations were officially announced on 16 October 2017 at press conferences at the National Press Club in Washington, D.C., and at the ESO headquarters in Garching bei München in Germany. Some information was leaked before the official announcement, beginning on 18 August 2017 when astronomer J. Craig Wheeler of the University of Texas at Austin tweeted "New LIGO. Source with optical counterpart. Blow your sox off!" 
He later deleted the tweet and apologized for scooping the official announcement embargo. Other people followed up on the rumor, and reported that the public logs of several major telescopes listed priority interruptions in order to observe NGC 4993, a galaxy about 40 Mpc away in the Hydra constellation. The collaboration had earlier declined to comment on the rumors, not adding to a previous announcement that there were several triggers under analysis. Gravitational wave detection The gravitational wave signal lasted for approximately 100 seconds starting from a frequency of 24 hertz. It covered approximately 3,000 cycles, increasing in amplitude and frequency to a few hundred hertz in the typical inspiral chirp pattern, ending with the collision received at 12:41:04.4 UTC. It arrived first at the Virgo detector in Italy, then 22 milliseconds later at the LIGO-Livingston detector in Louisiana, United States, and another 3 milliseconds later at the LIGO-Hanford detector in the state of Washington, in the United States. The signal was detected and analyzed by a comparison with a prediction from general relativity defined from the post-Newtonian expansion. An automatic computer search of the LIGO-Hanford datastream triggered an alert to the LIGO team about 6 minutes after the event. The gamma-ray alert had already been issued at this point (16 seconds post-event), so the timing near-coincidence was automatically flagged. The LIGO/Virgo team issued a preliminary alert (with only the crude gamma-ray position) to astronomers in the follow-up teams at 40 minutes post-event. Sky localisation of the event required combining data from the three interferometers, but this was delayed by two problems. The Virgo data were delayed by a data transmission problem, and the LIGO Livingston data were contaminated by a brief burst of instrumental noise a few seconds prior to the event peak, which persisted parallel to the rising transient signal in the lowest frequencies. These required manual analysis and interpolation before the sky location could be announced about 4.5 hours after the event. The three detections localized the source to an area of 31 square degrees in the southern sky at 90% probability. More detailed calculations later refined the localization to within 28 square degrees. In particular, the absence of a clear detection by the Virgo interferometer implied that the source was localized within one of its blind spots, a constraint which reduced the search area considerably. Gamma ray detection The first electromagnetic signal detected was GRB 170817A, a short gamma-ray burst, detected after the merger time and lasting for about 2 seconds. GRB 170817A was first recorded by the Fermi Gamma-ray Space Telescope, which issued an automatic alert just 14 seconds after the detection. After the LIGO/Virgo circular 40 minutes later, manual processing of data from the INTEGRAL gamma-ray telescope retrieved independent data for the event. The difference in arrival time between Fermi and INTEGRAL helped to improve the sky localization. This GRB was relatively faint given the proximity of the host galaxy NGC 4993, possibly due to its jets not being pointed directly toward Earth, but rather at an angle of about 30 degrees off axis. Electromagnetic follow-up A series of alerts to other astronomers were issued, beginning with a report of the gamma-ray detection and single-detector LIGO trigger at 13:21 UTC, and a three-detector sky location at 17:54 UTC. These prompted a massive search by many survey and robotic telescopes. 
In addition to the expected large size of the search area (about 150 times the area of a full moon), this search was challenging because the search area was near the Sun in the sky and thus visible for at most a few hours after dusk for any given telescope. In total six teams (One-Meter, Two Hemispheres (1M2H), DLT40, VISTA, Master, DECam, and Las Cumbres Observatory (Chile)) imaged the same new source independently in a 90-minute interval. The first to detect optical light associated with the collision was the 1M2H team running the Swope Supernova Survey, which found it in an image of NGC 4993 taken 10 hours and 52 minutes after the GW event by the Swope Telescope operating in the near infrared at Las Campanas Observatory, Chile. They were also the first to announce it, naming their detection SSS17a in a circular issued 12 hours 26 minutes post-event. The new source was later given an official International Astronomical Union (IAU) designation AT 2017gfo. The 1M2H team surveyed all galaxies in the region of space predicted by the gravitational wave observations, and identified a single new transient. By identifying the host galaxy of the merger, it is possible to provide an accurate distance consistent with that based on gravitational waves alone. The detection of the optical and near-infrared source provided a huge improvement in localisation, reducing the uncertainty from several degrees to 0.0001 degree; this enabled many large ground and space telescopes to follow up the source over the following days and weeks. Within hours after localization, many additional observations were made across the infrared and visible spectrum. Over the following days, the color of the optical source changed from blue to red as the source expanded and cooled. Numerous optical and infrared spectra were observed; early spectra were nearly featureless, but after a few days, broad features emerged indicative of material ejected at roughly 10 percent of light speed. There are multiple strong lines of evidence that AT 2017gfo is indeed the aftermath of GW170817. The color evolution and spectra are dramatically different from any known supernova. The distance of NGC 4993 is consistent with that independently estimated from the GW signal. No other transient has been found in the GW sky localisation region. Finally, various archive images show nothing at the location of AT 2017gfo, ruling out a foreground variable star in the Milky Way. The source was detected in the ultraviolet (but not in X-rays) 15.3 hours after the event by the Swift Gamma-Ray Burst Mission. After initial lack of X-ray and radio detections, the source was detected in X-rays 9 days later using the Chandra X-ray Observatory, and 16 days later in the radio using the Karl G. Jansky Very Large Array (VLA) in New Mexico. More than 70 observatories covering the electromagnetic spectrum observed the source. The radio and X-ray light increased to a peak 150 days after the merger, diminishing afterwards. Astronomers have monitored the optical afterglow of GW170817 using the Hubble Space Telescope. In March 2020, continued X-ray emission at 5-sigma was observed by the Chandra Observatory 940 days after the merger. Other detectors No neutrinos consistent with the source were found in follow-up searches by the IceCube and ANTARES neutrino observatories and the Pierre Auger Observatory. A possible explanation for the non-detection of neutrinos is because the event was observed at a large off-axis angle and thus the outflow jet was not directed towards Earth. 
Astrophysical origin and products The origin and properties (masses and spins) of a double neutron star system like GW170817 are the result of a long sequence of complex binary star interactions. The gravitational wave signal indicated that it was produced by the collision of two neutron stars with a total mass of about 2.82 solar masses (M☉). If low spins are assumed, consistent with those observed in binary neutron stars that will merge within a Hubble time, the total mass is about 2.74 M☉. The total energy output of the gravitational wave was ≃63 foe. The masses of the progenitor stars have greater uncertainty. The chirp mass, a directly observable parameter which may be roughly equated to the geometric mean of the prior masses, was measured at 1.188 M☉. The larger progenitor (m1) has a 90% chance of being between 1.36 and 2.26 M☉, and the smaller (m2) has a 90% chance of being between 0.86 and 1.36 M☉. Under the low-spin assumption, the ranges are 1.36 to 1.60 M☉ for m1 and 1.17 to 1.36 M☉ for m2, inside a 12 km radius. The neutron star merger event resulted in a spherically expanding kilonova, characterized by a short gamma-ray burst followed by a longer optical afterglow powered by the radioactive decay of heavy r-process nuclei. GW170817 therefore confirmed neutron star mergers to be viable sites for the r-process, in which the nucleosynthesis of around half the isotopes of elements heavier than iron can occur. A total of 16,000 times the mass of the Earth in heavy elements is believed to have formed, including approximately 10 Earth masses just of the two elements gold and platinum. A hypermassive neutron star was believed to have formed initially, as evidenced by the large amount of ejecta (much of which would have been swallowed by an immediately forming black hole). At first, the lack of evidence for emissions being powered by neutron star spindown, which would occur for longer-surviving neutron stars, suggested it collapsed into a black hole within milliseconds. However, a more detailed analysis of the GW170817 signal tail later found evidence of further features consistent with the seconds-long spindown of an intermediate or remnant hypermassive magnetar, the energy of which was below the estimated sensitivity of the LIGO search algorithms at the time. This was confirmed in 2023 by a statistically independent method of analysis revealing the central engine of GRB 170817A. The precise nature of the ultimately stable remnant remains uncertain. Scientific importance Scientific interest in the event was enormous, with dozens of preliminary papers (and almost 100 preprints) published the day of the announcement, including 8 letters in Science, 6 in Nature, and 32 in a special issue of The Astrophysical Journal Letters devoted to the subject. The interest and effort was global: the paper describing the multi-messenger observations is coauthored by almost 4,000 astronomers (about one-third of the worldwide astronomical community) from more than 900 institutions, using more than 70 observatories on all 7 continents and in space. The event also provided a limit on the difference between the speed of light and that of gravity. Assuming the first photons were emitted between zero and ten seconds after peak gravitational wave emission, the difference between the speeds of gravitational and electromagnetic waves, vGW − vEM, is constrained to between −3×10⁻¹⁵ and +7×10⁻¹⁶ times the speed of light, which improves on the previous estimate by about 14 orders of magnitude. 
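The chirp mass mentioned above has the standard closed form M_c = (m1·m2)^(3/5) / (m1 + m2)^(1/5). The snippet below is illustrative only: it assumes equal component masses purely as a sanity check that a binary of roughly 1.365 M☉ per star reproduces the measured chirp mass of 1.188 M☉.

```python
# Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5)  (standard definition).
# Equal component masses are an assumption used only as a sanity check.
def chirp_mass(m1: float, m2: float) -> float:
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

m = 1.365                                      # solar masses, assumed equal
print(f"M_c = {chirp_mass(m, m):.3f} M_sun")   # ~1.188, matching the measured value
```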
In addition, GW170817 allowed investigation of the equivalence principle (through Shapiro delay measurement) and Lorentz invariance. The limits of possible violations of Lorentz invariance (values of 'gravity sector coefficients') are reduced by the new observations by up to ten orders of magnitude. The event also excluded some alternatives to general relativity, including variants of scalar–tensor theory, Hořava–Lifshitz gravity, Dark Matter Emulators, and bimetric gravity. Furthermore, an analysis published in July 2018 used GW170817 to show that gravitational waves propagate fully through the 3+1 curved spacetime described by general relativity, ruling out hypotheses involving "leakage" into higher, non-compact spatial dimensions. Gravitational wave signals such as GW170817 may be used as a standard siren to provide an independent measurement of the Hubble constant. An initial estimate of the constant derived from the observation is 70 (+12/−8) (km/s)/Mpc, broadly consistent with current best estimates. Further studies improved the precision of the measurement. Together with the observation of future events of this kind, the uncertainty is expected to reach two percent within five years and one percent within ten years. Electromagnetic observations help support the theory that neutron star mergers contribute to rapid neutron capture (r-process) nucleosynthesis—previously assumed to be associated with supernova explosions—and are therefore the primary source of r-process elements heavier than iron, including gold and platinum. The first identification of r-process elements in a neutron star merger was obtained during a re-analysis of GW170817 spectra. The spectra provided direct proof of strontium production during a neutron star merger. This also provided the most direct proof that neutron stars are made of neutron-rich matter. Since then, several r-process elements have been identified in the ejecta, including yttrium, lanthanum and cerium. In October 2017, Stephen Hawking, in his last broadcast interview, discussed the overall scientific importance of GW170817. In September 2018, astronomers reported related studies about possible mergers of neutron stars (NS) and white dwarfs (WD), including NS-NS, NS-WD, and WD-WD mergers. See also Gravitational-wave astronomy List of gravitational wave observations Multi-messenger astronomy Notes References External links Related videos (16 October 2017): Neutron stars Gravitational waves August 2017 2017 in science 2017 in outer space Hydra (constellation)
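The order of magnitude of the speed bound quoted above follows from simple arithmetic: the assumed emission delay divided by the light-travel time from the source. The sketch below is not from the source's analysis; it only uses the ≤10 s delay and ~140 million light-year distance stated in the text.

```python
# Order-of-magnitude check of the gravity/light speed constraint (sketch).
# Assumes the <= 10 s photon emission delay and ~140 million light-year
# distance quoted in the text; not a substitute for the published analysis.
YEAR_S = 3.156e7                      # seconds per year
distance_ly = 140e6                   # light years to NGC 4993 (from the text)
travel_time_s = distance_ly * YEAR_S  # light (and GW) travel time in seconds

delta_t_s = 10.0                      # assumed max emission delay (from the text)
bound = delta_t_s / travel_time_s     # fractional speed difference |v_GW - v_EM|/c
print(f"|v_GW - v_EM|/c <~ {bound:.1e}")   # ~2e-15, consistent with the quoted bound
```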
GW170817
[ "Physics", "Astronomy" ]
3,399
[ "Hydra (constellation)", "Physical phenomena", "Constellations", "Waves", "Gravitational waves" ]
55,054,778
https://en.wikipedia.org/wiki/Plane%20of%20polarization
For light and other electromagnetic radiation, the plane of polarization is the plane spanned by the direction of propagation and either the electric vector or the magnetic vector, depending on the convention. It can be defined for polarized light, remains fixed in space for linearly-polarized light, and undergoes axial rotation for circularly-polarized light. Unfortunately the two conventions are contradictory. As originally defined by Étienne-Louis Malus in 1811, the plane of polarization coincided (although this was not known at the time) with the plane containing the direction of propagation and the magnetic vector. In modern literature, the term plane of polarization, if it is used at all, is likely to mean the plane containing the direction of propagation and the electric vector, because the electric field has the greater propensity to interact with matter. For waves in a birefringent (doubly-refractive) crystal, under the old definition, one must also specify whether the direction of propagation means the ray direction (Poynting vector) or the wave-normal direction, because these directions generally differ and are both perpendicular to the magnetic vector (Fig.1). Malus, as an adherent of the corpuscular theory of light, could only choose the ray direction. But Augustin-Jean Fresnel, in his successful effort to explain double refraction under the wave theory (1822 onward), found it more useful to choose the wave-normal direction, with the result that the supposed vibrations of the medium were then consistently perpendicular to the plane of polarization. In an isotropic medium such as air, the ray and wave-normal directions are the same, and Fresnel's modification makes no difference. Fresnel also admitted that, had he not felt constrained by the received terminology, it would have been more natural to define the plane of polarization as the plane containing the vibrations and the direction of propagation. That plane, which became known as the plane of vibration, is perpendicular to Fresnel's "plane of polarization" but identical with the plane that modern writers tend to call by that name! It has been argued that the term plane of polarization, because of its historical ambiguity, should be avoided in original writing. One can easily specify the orientation of a particular field vector; and even the term plane of vibration carries less risk of confusion than plane of polarization. Physics of the term For electromagnetic (EM) waves in an isotropic medium (that is, a medium whose properties are independent of direction), the electric field vectors (E and D) are in one direction, and the magnetic field vectors (B and H) are in another direction, perpendicular to the first, and the direction of propagation is perpendicular to both the electric and the magnetic vectors. In this case the direction of propagation is both the ray direction and the wave-normal direction (the direction perpendicular to the wavefront). For a linearly-polarized wave (also called a plane-polarized wave), the orientations of the field vectors are fixed (Fig.2). Because innumerable materials are dielectrics or conductors while comparatively few are ferromagnets, the reflection or refraction of EM waves (including light) is more often due to differences in the electric properties of media than to differences in their magnetic properties. 
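For a plane wave in such an isotropic medium, these orthogonality relations follow directly from Maxwell's equations; the following standard result (a sketch, not taken from the article) makes the two candidate planes explicit.

```latex
% For a plane wave E = E_0 e^{i(k·r − ωt)} in a source-free isotropic
% medium, Maxwell's equations give (standard result, not from the article):
\mathbf{B} \;=\; \frac{1}{\omega}\,\mathbf{k}\times\mathbf{E},
\qquad
\mathbf{k}\cdot\mathbf{E} = 0,
\qquad
\mathbf{k}\cdot\mathbf{B} = 0,
```

so E, B, and the propagation vector k are mutually perpendicular, and the two historical conventions pick out the planes spanned by {k, E} and {k, B} respectively.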
That circumstance tends to draw attention to the electric vectors, so that we tend to think of the direction of polarization as the direction of the electric vectors, and the "plane of polarization" as the plane containing the electric vectors and the direction of propagation. Indeed, that is the convention used in the online Encyclopædia Britannica, and in Feynman's lecture on polarization. In the latter case one must infer the convention from the context: Feynman keeps emphasizing the direction of the electric (E) vector and leaves the reader to presume that the "plane of polarization" contains that vector — and this interpretation indeed fits the examples he gives. The same vector is used to describe the polarization of radio signals and antennas (Fig.3). If the medium is magnetically isotropic but electrically non-isotropic (like a doubly-refracting crystal), the magnetic vectors B and H are still parallel, and the electric vectors E and D are still perpendicular to both, and the ray direction is still perpendicular to E and the magnetic vectors, and the wave-normal direction is still perpendicular to D and the magnetic vectors; but there is generally a small angle between the electric vectors E and D, hence the same angle between the ray direction and the wave-normal direction (Fig.1). Hence D, E, the wave-normal direction, and the ray direction are all in the same plane, and it is all the more natural to define that plane as the "plane of polarization". This "natural" definition, however, depends on the theory of EM waves developed by James Clerk Maxwell in the 1860s — whereas the word polarization was coined about 50 years earlier, and the associated mystery dates back even further. History of the term Three candidates Whether by accident or by design, the plane of polarization has always been defined as the plane containing a field vector and a direction of propagation. In Fig.1, there are three such planes, to which we may assign numbers for ease of reference: (1)  the plane containing both electric vectors and both propagation directions  (i.e., the plane normal to the magnetic vectors); (2a)  the plane containing the magnetic vectors and the wave-normal  (i.e., the plane normal to D); (2b)  the plane containing the magnetic vectors and the ray  (i.e., the plane normal to E). In an isotropic medium, E and D have the same direction, so that the ray and wave-normal directions merge, and the planes (2a) and (2b) become one: (2)  the plane containing both magnetic vectors and both propagation directions  (i.e., the plane normal to the electric vectors). Malus's choice Polarization was discovered — but not named or understood — by Christiaan Huygens, as he investigated the double refraction of "Iceland crystal" (transparent calcite, now called Iceland spar). The essence of his discovery, published in his Treatise on Light (1690), was as follows. When a ray (meaning a narrow beam of light) passes through two similarly oriented calcite crystals at normal incidence, the ordinary ray emerging from the first crystal suffers only the ordinary refraction in the second, while the extraordinary ray emerging from the first suffers only the extraordinary refraction in the second. But when the second crystal is rotated 90° about the incident rays, the roles are interchanged, so that the ordinary ray emerging from the first crystal suffers only the extraordinary refraction in the second, and vice versa. 
At intermediate positions of the second crystal, each ray emerging from the first is doubly refracted by the second, giving four rays in total; and as the crystal is rotated from the initial orientation to the perpendicular one, the brightnesses of the rays vary, giving a smooth transition between the extreme cases in which there are only two final rays. Huygens defined a principal section of a calcite crystal as a plane normal to a natural surface and parallel to the axis of the obtuse solid angle. This axis was parallel to the axes of the spheroidal secondary waves by which he (correctly) explained the directions of the extraordinary refraction. The term polarization was coined by Étienne-Louis Malus in 1811.  In 1808, in the midst of confirming Huygens' geometric description of double refraction (while disputing his physical explanation), Malus had discovered that when a ray of light is reflected off a non-metallic surface at the appropriate angle, it behaves like one of the two rays emerging from a calcite crystal. As this behavior had previously been known only in connection with double refraction, Malus described it in that context. In particular, he defined the plane of polarization of a polarized ray as the plane, containing the ray, in which a principal section of a calcite crystal must lie in order to cause only ordinary refraction. This definition was all the more reasonable because it meant that when a ray was polarized by reflection (off an isotropic medium), the plane of polarization was the plane of incidence and reflection — that is, the plane containing the incident ray, the normal to the reflective surface, and the polarized reflected ray. But, as we now know, this plane happens to contain the magnetic vectors of the polarized ray, not the electric vectors. The plane of the ray and the magnetic vectors is the one numbered (2b) above. The implication that the plane of polarization contains the magnetic vectors is still found in the definition given in the online Merriam-Webster dictionary. Even Julius Adams Stratton, having said that "It is customary to define the polarization in terms of E", promptly adds: "In optics, however, the orientation of the vectors is specified traditionally by the 'plane of polarization,' by which is meant the plane normal to E containing both H and the axis of propagation." That definition is identical with Malus's. Fresnel's choice In 1821, Augustin-Jean Fresnel announced his hypothesis that light waves are exclusively transverse and therefore always polarized in the sense of having a particular transverse orientation, and that what we call unpolarized light is in fact light whose orientation is rapidly and randomly changing. Supposing that light waves were analogous to shear waves in elastic solids, and that a higher refractive index corresponded to a higher density of the luminiferous aether, he found that he could account for the partial reflection (including polarization by reflection) at the interface between two transparent isotropic media, provided that the vibrations of the aether were perpendicular to the plane of polarization. Thus the polarization, according to the received definition, was "in" a certain plane if the vibrations were perpendicular to that plane! 
Fresnel himself found this implication inconvenient; later that year he wrote: Adopting this hypothesis, it would have been more natural to have called the plane of polarisation that in which the oscillations are supposed to be made: but I wished to avoid making any change in the received appellations. But he soon felt obliged to make a less radical change. In his successful model of double refraction, the displacement of the medium was constrained to be tangential to the wavefront, while the force was allowed to deviate from the displacement and from the wavefront. Hence, if the vibrations were perpendicular to the plane of polarization, then the plane of polarization contained the wave-normal but not necessarily the ray. In his "Second Memoir" on double refraction, Fresnel formally adopted this new definition, acknowledging that it agreed with the old definition in an isotropic medium such as air, but not in a birefringent crystal. The vibrations normal to Malus's plane of polarization are electric, and the electric vibration tangential to the wavefront is D (Fig.1). Thus, in terms of the above numbering, Fresnel changed the "plane of polarization" from (2b) to (2a). Fresnel's definition remains compatible with the Merriam-Webster definition, which fails to specify the propagation direction. And it remains compatible with Stratton's definition, because that is given in the context of an isotropic medium, in which planes (2a) and (2b) merge into (2). What Fresnel called the "more natural" choice was a plane containing D and a direction of propagation. In Fig.1, the only plane meeting that specification is the one labeled "Plane of vibration" and later numbered (1) — that is, the one that modern authors tend to identify with the "plane of polarization". We might therefore wish that Fresnel had been less deferential to his predecessors. That scenario, however, is less realistic than it may seem, because even after Fresnel's transverse-wave theory was generally accepted, the direction of the vibrations was the subject of continuing debate. "Plane of vibration" The principle that refractive index depended on the density of the aether was essential to Fresnel's aether drag hypothesis. But it could not be extended to birefringent crystals — in which at least one refractive index varies with direction — because density is not directional. Hence his explanation of refraction required a directional variation in stiffness of the aether within a birefringent medium, plus a variation in density between media. James MacCullagh and Franz Ernst Neumann avoided this complication by supposing that a higher refractive index corresponded always to the same density but a greater elastic compliance (lower stiffness). To obtain results that agreed with observations on partial reflection, they had to suppose, contrary to Fresnel, that the vibrations were within the plane of polarization. The question called for an experimental determination of the direction of vibration, and the challenge was answered by George Gabriel Stokes. He defined the plane of vibration as "the plane passing through the ray and the direction of vibration" (in agreement with Fig.1). Now suppose that a fine diffraction grating is illuminated at normal incidence. At large angles of diffraction, the grating will appear somewhat edge-on, so that the directions of vibration will be crowded towards the direction parallel to the plane of the grating. 
If the planes of polarization coincide with the planes of vibration (as MacCullagh and Neumann said), they will be crowded in the same direction; and if the planes of polarization are normal to the planes of vibration (as Fresnel said), the planes of polarization will be crowded in the normal direction. To find the direction of the crowding, one could vary the polarization of the incident light in equal steps, and determine the planes of polarization of the diffracted light in the usual manner. Stokes performed such an experiment in 1849, and it found in favor of Fresnel. In 1852, Stokes noted a much simpler experiment that leads to the same conclusion. Sunlight scattered from a patch of blue sky 90° from the sun is found, by the methods of Malus, to be polarized in the plane containing the line of sight and the sun. But it is obvious from the geometry that the vibrations of that light can only be perpendicular to that plane. There was, however, a sense in which MacCullagh and Neumann were correct. If we attempt an analogy between shear waves in a non-isotropic elastic solid, and EM waves in a magnetically isotropic but electrically non-isotropic crystal, the density must correspond to the magnetic permeability (both being non-directional), and the compliance must correspond to the electric permittivity (both being directional). The result is that the velocity of the solid corresponds to the H field, so that the mechanical vibrations of the shear wave are in the direction of the magnetic vibrations of the EM wave. But Stokes's experiments were bound to detect the electric vibrations, because those have the greater propensity to interact with matter. In short, the MacCullagh-Neumann vibrations were the ones that had a mechanical analog, but Fresnel's vibrations were the ones that were more likely to be detected in experiments. Modern practice The electromagnetic theory of light further emphasized the electric vibrations because of their interactions with matter, whereas the old "plane of polarization" contained the magnetic vectors. Hence the electromagnetic theory would have reinforced the convention that the vibrations were normal to the plane of polarization — provided, of course, that one was familiar with the historical definition of the plane of polarization. But if one was influenced by physical considerations alone, then, as Feynman and the Britannica illustrate, one would pay attention to the electric vectors and assume that the "plane" of polarization (if one needed such a concept) contained those vectors. However, it is not clear that a "plane of polarization" is needed at all: knowing what field vectors are involved, one can specify the polarization by specifying the orientation of a particular vector, or, as Born and Wolf suggest, by specifying the "plane of vibration" of that vector.  Hecht also prefers the term plane of vibration (or, more usually, plane-of-vibration), which he defines as the plane of E and the wave-normal, in agreement with Fig.1 above. Remaining uses In an optically chiral medium — that is, one in which the direction of polarization gradually rotates as the wave propagates — the choice of definition of the "plane of polarization" does not affect the existence or direction ("handedness") of the rotation. This is one context in which the ambiguity of the term plane of polarization causes no further confusion. There is also a context in which the original definition might still suggest itself. 
In a non-magnetic non-chiral crystal of the biaxial class (in which there is no ordinary refraction, but both refractions violate Snell's law), there are three mutually perpendicular planes for which the speed of light is isotropic within the plane provided that the electric vectors are normal to the plane. This situation naturally draws attention to a plane normal to the vibrations as envisaged by Fresnel, and that plane is indeed the plane of polarization as defined by Fresnel or Malus. In most contexts, however, the concept of a "plane of polarization" distinct from a plane containing the electric "vibrations" has arguably become redundant, and has certainly become a source of confusion. In the words of Born & Wolf, "it is… better not to use this term." See also E-plane and H-plane Plane of incidence Notes References Bibliography W.S. Aldis, 1879, A Chapter on Fresnel's Theory of Double Refraction, 2nd Ed., Cambridge: Deighton, Bell, & Co. / London: George Bell & Sons. M. Born and E. Wolf, 1970, Principles of Optics, 4th Ed., Oxford: Pergamon Press. J.Z. Buchwald, 1989, The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century, University of Chicago Press, . O. Darrigol, 2012, A History of Optics: From Greek Antiquity to the Nineteenth Century, Oxford, . A. Fresnel, 1822, De la Lumière (On Light), in J. Riffault (ed.), Supplément à la traduction française de la cinquième édition du "Système de Chimie" par Th.Thomson, Paris: Chez Méquignon-Marvis, 1822, pp.1–137,535–9; reprinted in Fresnel, 1866–70, vol.2, pp.3–146; translated by T. Young as "Elementary view of the undulatory theory of light", Quarterly Journal of Science, Literature, and Art, vol.22 (Jan.–Jun.1827), pp.127–41, 441–54; vol.23 (Jul.–Dec.1827), pp.113–35, 431–48; vol.24 (Jan.–Jun.1828), pp.198–215; vol.25 (Jul.–Dec.1828), pp.168–91, 389–407; vol.26 (Jan.–Jun.1829), pp.159–65. A. Fresnel, 1827, "Mémoire sur la double réfraction", Mémoires de l'Académie Royale des Sciences de l'Institut de France, vol. (for 1824, printed 1827), pp.45–176; reprinted as "Second mémoire…" in Fresnel, 1866–70, vol.2, pp.479–596; translated by A.W. Hobson as "Memoir on double refraction", in R.Taylor (ed.), Scientific Memoirs, vol. (London: Taylor & Francis, 1852), pp.238–333. (Cited page numbers are from the translation.) A. Fresnel (ed. H. de Senarmont, E. Verdet, and L. Fresnel), 1866–70, Oeuvres complètes d'Augustin Fresnel (3 volumes), Paris: Imprimerie Impériale; vol.1 (1866), vol.2 (1868), vol.3 (1870). E. Hecht, 2017, Optics, 5th Ed., Pearson Education, . C. Huygens, 1690, Traité de la Lumière (Leiden: Van der Aa), translated by S.P. Thompson as Treatise on Light, University of Chicago Press, 1912; Project Gutenberg, 2005. (Cited page numbers match the 1912 edition and the Gutenberg HTML edition.) B. Powell (July 1856), "On the demonstration of Fresnel's formulas for reflected and refracted light; and their applications", Philosophical Magazine and Journal of Science, Series 4, vol.12, no.76, pp.1–20. J.A. Stratton, 1941, Electromagnetic Theory, New York: McGraw-Hill. E. T. Whittaker, 1910, A History of the Theories of Aether and Electricity: From the Age of Descartes to the Close of the Nineteenth Century, London: Longmans, Green, & Co. Light Optics Physical optics Polarization (waves) Electromagnetic radiation Antennas (radio) History of physics Planes (geometry)
Plane of polarization
[ "Physics", "Chemistry", "Mathematics" ]
4,544
[ "Physical phenomena", "Applied and interdisciplinary physics", "Optics", "Spectrum (physical sciences)", "Electromagnetic radiation", "Mathematical objects", "Electromagnetic spectrum", "Astrophysics", "Infinity", "Waves", "Radiation", "Light", " molecular", "Atomic", "Planes (geometry)"...
55,059,510
https://en.wikipedia.org/wiki/Immunological%20memory
Immunological memory is the ability of the immune system to quickly and specifically recognize an antigen that the body has previously encountered and initiate a corresponding immune response. Generally, these are secondary, tertiary and other subsequent immune responses to the same antigen. The adaptive immune system and antigen-specific receptor generation (TCR, antibodies) are responsible for adaptive immune memory. After the inflammatory immune response to danger-associated antigen, some of the antigen-specific T cells and B cells persist in the body and become long-living memory T and B cells. After a second encounter with the same antigen, they recognize the antigen and mount a faster and more robust response. Immunological memory is the basis of vaccination. Emerging research shows that even the innate immune system can initiate a more efficient immune response and pathogen elimination after previous stimulation with a pathogen or with PAMPs or DAMPs. Innate immune memory (also called trained immunity) is neither antigen-specific nor dependent on gene rearrangement; instead, the altered response is caused by changes in epigenetic programming and shifts in cellular metabolism. Innate immune memory has been observed in invertebrates as well as in vertebrates. Adaptive immune memory Development of adaptive immune memory Immunological memory arises after a primary immune response against the antigen. It is thus created by each individual after an initial exposure to a potentially dangerous agent. The course of the secondary immune response is similar to that of the primary immune response. After the memory B cell recognizes the antigen, it presents the peptide:MHC II complex to nearby effector T cells. That leads to the activation and rapid proliferation of these cells. After the primary immune response has disappeared, the effector cells of the immune response are eliminated. However, antibodies that were previously created in the body remain and represent the humoral component of immunological memory, comprising an important defensive mechanism in subsequent infections. In addition to the antibodies, a small number of memory T and B cells remain in the body and make up the cellular component of the immunological memory. They stay in the blood circulation in a resting state, and at a subsequent encounter with the same antigen these cells are able to respond immediately and eliminate the antigen. Memory cells are long-lived and last up to several decades in the body. Immunity to chickenpox, measles, and some other diseases lasts a lifetime. Immunity to many diseases eventually wears off. The immune system's response to a few diseases, such as dengue, counterproductively worsens the next infection (antibody-dependent enhancement). As of 2019, researchers are still trying to find out why some vaccines produce life-long immunity, while the effectiveness of other vaccines drops to zero in less than 30 years (for mumps) or less than six months (for H3N2 influenza). Evolution of adaptive immune memory The evolutionary invention of memory T and B cells is widespread; however, the conditions required to develop this costly adaptation are specific. First, in order to evolve immune memory, the initial molecular machinery cost must be high and will demand losses in other host characteristics. Second, middling- or long-lived organisms have a higher chance of evolving such apparatus.
The cost of this adaptation increases if the host has a middling lifespan, as the immune memory must be effective earlier in life. Furthermore, research models show that the environment plays an essential role in the diversity of memory cells in a population. Comparing the influence of multiple infections with a specific disease to that of the disease diversity of an environment provides evidence that memory cell pools accrue diversity based on the number of individual pathogens encountered, even at the cost of efficiency when encountering more common pathogens. Individuals living in isolated environments such as islands have a less diverse population of memory cells, which nevertheless mount sturdier immune responses. That indicates that the environment plays a large role in the evolution of memory cell populations. Previously acquired immune memory can be depleted by measles in unvaccinated children, leaving them at risk of infection by other pathogens in the years after infection. Memory B cells Memory B cells are plasma cells that are able to produce antibodies for a long time. The memory B cell response differs slightly from that of the naive B cells involved in the primary immune response. The memory B cell has already undergone clonal expansion, differentiation and affinity maturation, so it is able to divide multiple times faster and produce antibodies with much higher affinity (especially IgG). In contrast, the naive plasma cell is fully differentiated and cannot be further stimulated by antigen to divide or increase antibody production. Memory B cell activity in secondary lymphatic organs is highest during the first 2 weeks after infection. Subsequently, after 2 to 4 weeks its response declines. After the germinal center reaction the memory plasma cells are located in the bone marrow, which is the main site of antibody production within the immunological memory. Memory T cells Memory T cells can be either CD4+ or CD8+. These memory T cells do not require further antigen stimulation to proliferate; therefore, they do not need a signal via MHC. Memory T cells can be divided into two functionally distinct groups based on the expression of the CCR7 chemokine receptor. This receptor determines the direction of migration into secondary lymphatic organs. Those memory T cells that do not express CCR7 (i.e., are CCR7−) have receptors to migrate to the site of inflammation in the tissue and represent an immediate effector cell population. These cells were named effector memory T cells (TEM). After repeated stimulation they produce large amounts of IFN-γ, IL-4 and IL-5. In contrast, CCR7+ memory T cells lack proinflammatory and cytotoxic function but have receptors for lymph node migration. These cells were named central memory T cells (TCM). They effectively stimulate dendritic cells, and after repeated stimulation they are able to differentiate into CCR7− effector memory T cells. Both populations of these memory cells originate from naive T cells and remain in the body for several years after initial immunization. Experimental techniques used to study these cells include measuring antigen-stimulated cell proliferation and cytokine release, staining with peptide-MHC multimers or using an activation-induced marker (AIM) assay.
Innate immune memory Many invertebrates, such as species of freshwater snails, copepod crustaceans, and tapeworms, have been observed activating innate immune memory to instigate a more efficient immune response to a second encounter with specific pathogens, despite lacking an adaptive branch of the immune system. RAG1-deficient mice without functional T and B cells were able to survive the administration of a lethal dose of Candida albicans when exposed previously to a much smaller amount, showing that vertebrates also retain this ability. Despite not having the ability to manufacture antibodies like the adaptive immune system, the innate immune system has immune memory properties as well. Innate immune memory (trained immunity) is defined as a long-term functional reprogramming of innate immune cells evoked by exogenous or endogenous insults and leading to an altered response towards a second challenge after returning to a non-activated state. When innate immune cells receive an activation signal, for example through recognition of PAMPs by PRRs, they start the expression of proinflammatory genes, initiate an inflammatory response, and undergo epigenetic reprogramming. After the second stimulation, the transcription activation is faster and more robust. Immunological memory has been reported in monocytes, macrophages, NK cells, ILC1, ILC2, and ILC3 cells. Concomitantly, some nonimmune cells, for example epithelial stem cells in barrier tissues or fibroblasts, change their epigenetic state and respond differently after a priming insult. Mechanism of innate immune memory At the steady state, unstimulated cells have reduced biosynthetic activities and more condensed chromatin with reduced gene transcription. The interaction of exogenous PAMPs (β-glucan, muramyl peptide) or endogenous DAMPs (oxidized LDL, uric acid) with PRRs initiates a cellular response. The triggered intracellular signaling cascades lead to the upregulation of metabolic pathways such as glycolysis, the Krebs cycle, and fatty acid metabolism. An increase in metabolic activity provides cells with energy and building blocks, which are needed for the production of signaling molecules such as cytokines and chemokines. Signal transduction changes the epigenetic marks and increases chromatin accessibility, to allow binding of transcription factors and start transcription of genes connected with inflammation. There is an interplay between metabolism and epigenetic changes, because some metabolites such as fumarate and acetyl-CoA can activate or inhibit enzymes involved in chromatin remodeling. After the stimulus subsides, there is no need for the production of immune factors, and their expression in immune cells is terminated. Several epigenetic modifications created during stimulation remain. Characteristic epigenetic rewiring in trained cells is the accumulation of H3K4me3 on immune gene promoters and the increase of H3K4me1 and H3K27ac on enhancers. Additionally, cellular metabolism does not return to the state before stimulation, and trained cells remain in a prepared state. This status can last from weeks to several months and can be transmitted to daughter cells. Secondary stimulation induces a new response, which is faster and stronger. Evolution of innate immune memory Immune memory brings a major evolutionary advantage when the organism faces repeated infections. Inflammation is very costly, and increased effectiveness of the response accelerates pathogen elimination and prevents damage to the host's own tissue.
Classical adaptive immune memory evolved in jawed vertebrates and in jawless fish (lamprey), which together represent only about 1% of living organisms. Some form of immune memory is nevertheless reported in other species: in plants and invertebrates, faster kinetics, an increased magnitude of the immune response and an improved survival rate can be seen after secondary infection encounters. Immune memory is common across the vast majority of biodiversity on Earth. It has been proposed that immune memory in innate and adaptive immunity represents an evolutionary continuum, in which a more robust immune response, mediated by epigenetic reprogramming, evolved first, while specificity through antigen-specific receptors evolved later in some vertebrates. Evolutionary mechanisms leading to the development of immunological memory The emergence of the adaptive immune system is rooted in the deep history of evolution, dating back roughly 500 million years. Investigations and recent studies found that two major macroevolutionary events led to its emergence: the origin of RAG and two whole rounds of genome duplication (WGD). The earliest evidence for the emergence of features resembling the AIS dates to the era when jawed and jawless vertebrates diverged phylogenetically. Early investigations around the 1970s led to the discovery of unique inverted-repeat flanking signal sequences while groups studied the RAG genome. These so-called RAG transposons invaded regions of the genome which may have been involved in the AIS. The culmination of several works and reviews suggests that these disruptions could have been selected for a rearrangement that maintained genomic integrity, which ultimately led to mechanisms like RAG diversification in the AIS. This discovery led to the hypothesis that there was an invasion event of a regulatory element-like region, because these repeats resembled a remnant transposable element. This invasion was argued to be necessary for the emergence of BCR- and TCR-dependent immunity as we now see in all gnathostomes. According to recent scientific findings, around 450–500 million years ago the vertebrate genome went through two rounds of whole-genome duplication. This is usually referred to as the "2R hypothesis". Such intense genomic events lead to gene sub-functionalization and neofunctionalization, or in many cases to loss of function. Ohno proposed 40 years ago that the evolutionary events which led to whole-genome duplication were key to the emergence of the diversity we see in adaptive immunity and memory. Further works illustrate that the newer genic regions which arose from this duplication event are major contributors to today's adaptive immune systems, which control immunological memory in gnathostomes. Okada's work investigating ohnologues that arose from WGD provides further evidence that today's AIS is a remnant of the WGD events. See also Immunity (medical) Seroconversion Serostatus Virgin soil epidemic References Immune system
Immunological memory
[ "Biology" ]
2,600
[ "Immune system", "Organ systems" ]
55,063,733
https://en.wikipedia.org/wiki/Insilico%20Medicine
Insilico Medicine is a biotechnology company headquartered in Boston, Massachusetts, with additional facilities in Pak Shek Kok, Hong Kong in Hong Kong Science Park near the Chinese University of Hong Kong, and in New York, at The Cure by Deerfield. The company combines genomics, big data analysis, and deep learning for in silico drug discovery. History In 2011, Alex Zhavoronkov published an article in the journal PLOS One with Dr. Charles Cantor, previously director of the Human Genome Project at the Department of Energy (DOE) and founder of Sequenom, on the International Aging Research Portfolio (IARP), establishing a public data set tracking government research funding and outcomes. This work formed the basis for an artificial intelligence (AI) pharmacological analysis platform. Zhavoronkov founded Insilico Medicine in 2014 as an alternative to animal testing for research and development programs in the pharmaceutical industry, using AI and deep-learning techniques to analyze how a compound will affect cells and what drugs can be used to treat the cells, in addition to possible side effects. Through its Pharma.AI division, the company provides machine learning services to different pharmaceutical, biotechnology, and skin care companies. Insilico is known for hiring mainly through hackathons such as its own MolHack online hackathon. The company has multiple collaborations in the applications of next-generation artificial intelligence technologies such as generative adversarial networks (GANs) and reinforcement learning to the generation of novel molecular structures with desired properties. In 2016, Insilico published an algorithm that it called the "Insilico Pathway Activation Network Decomposition Analysis" or "iPANDA" algorithm, asserted to allow researchers "to quickly and efficiently analyze signaling and metabolic pathway perturbation states using gene expression data". In conjunction with Alan Aspuru-Guzik's group at Harvard, they published a journal article about an improved architecture for molecular generation which combines GANs, reinforcement learning, and a differentiable neural computer. In 2017, Insilico was named one of the top five AI companies by NVIDIA for its potential for social impact. Insilico has resources in Belgium, Russia, and the UK and hires talent through hackathons and other local competitions. By mid-2017, Insilico had raised $8.26 million in funding from investors including Deep Knowledge Ventures, JHU A-Level Capital, Jim Mellon, and Juvenescence. In 2019 it raised another $37 million from Fidelity Investments, Eight Roads Ventures, Qiming Venture Partners, WuXi AppTec, Baidu, Sinovation, Lilly Asia Ventures, Pavilion Capital, BOLD Capital, and other investors. The company "focused exclusively on drug discovery until 2019 when it began developing its own therapeutics". In January 2021, Insilico entered into a partnership with Fosun Pharma to facilitate entry into the Chinese market. Later in 2021, after developing a novel preclinical candidate molecule for a novel target, the company announced a $255 million Series C megaround from Warburg Pincus, Sequoia Capital, Orbimed, Mirae Asset Financial Group, and over 25 biotechnology, AI, and pharmaceutical investors. By mid-2021, it claimed to have nominated eight preclinical candidates. Another $60 million in new Series D financing was raised in 2022. It was reported that over $400 million had been invested in the company.
In 2023, Zhavoronkov stated that he "moved the company's R&D to China to capitalize on 'half a trillion dollars' worth of infrastructure and hundreds of thousands of scientists [provided by the government] to enable AI-designed drugs". In mid-2024, it was reported that the corporate headquarters had relocated to Boston, Massachusetts. In November 2024, Insilico was named one of the top 50 AI innovators by Fortune magazine. Research The company "applies DL, big data, and genomics for in silico drug discovery" for various conditions. It has sought to develop AI to "identify novel drug targets for untreated diseases", and has pursued dual-purpose therapeutics, "going after a specific disease or several diseases while targeting ageing at the same time". In 2019, the company, in partnership with researchers at the University of Toronto, used AI to design potential new drugs. One was reported to have shown promising initial results when tested in mice. Research areas for therapeutics have included fibrosis, immunology, oncology and the central nervous system. To demonstrate the capacity of its proprietary AI platforms, the company published two projects on identifying therapeutic targets for ageing and amyotrophic lateral sclerosis on 29 March and 28 June 2022, respectively. The company has collaborated with scientists at the University of Chicago, George Mason University, and the University of Liverpool, focusing on ageing. For ALS, the company worked with researchers from Answer ALS, Johns Hopkins University School of Medicine, Harvard Medical School, Mayo Clinic, Tsinghua University, and 4B Technologies Limited. In 2023, it was reported that Insilico had initiated "one of the first mid-stage human trials of a drug discovered and designed by artificial intelligence". Outside of human drugs, in 2021 the company partnered with the Swiss company Syngenta on weedkillers. References External links Deborah Borfitz, "Insilico Medicine's AI-Driven Platform Pushes The Envelope Of Drug Discovery", Bio-IT World (15 December 2022). Biotechnology companies of Hong Kong Drug discovery companies Biogerontology organizations Life extension organizations Organizations established in 2014 Warburg Pincus companies
Insilico Medicine
[ "Chemistry" ]
1,164
[ "Drug discovery companies", "Drug discovery" ]
61,304,144
https://en.wikipedia.org/wiki/Coherent%20Raman%20scattering%20microscopy
Coherent Raman scattering (CRS) microscopy is a multi-photon microscopy technique based on Raman-active vibrational modes of molecules. The two major techniques in CRS microscopy are stimulated Raman scattering (SRS) and coherent anti-Stokes Raman scattering (CARS). SRS and CARS were theoretically predicted and experimentally realized in the 1960s. In 1982 the first CARS microscope was demonstrated. In 1999, CARS microscopy using a collinear geometry and high numerical aperture objective was developed in Xiaoliang Sunney Xie's lab at Harvard University. This advancement made the technique more compatible with modern laser scanning microscopes. Since then, CRS's popularity in biomedical research has grown. CRS is mainly used to image lipid, protein, and other bio-molecules in live or fixed cells or tissues without labeling or staining. CRS can also be used to image samples labeled with Raman tags, which can avoid interference from other molecules and normally allows for stronger CRS signals than would normally be obtained for common biomolecules. CRS also finds application in other fields, such as materials science and environmental science. Background Coherent Raman scattering is based on Raman scattering (or spontaneous Raman scattering). In spontaneous Raman, only one monochromatic excitation laser is used. Spontaneous Raman scattering's signal intensity grows linearly with the average power of a continuous-wave pump laser. In CRS, two lasers are used to excite specific vibrational modes of the molecules to be imaged. The laser with a higher photon energy is normally called the pump laser and the laser with a lower photon energy is called the Stokes laser. In order to produce a signal, their photon energy difference must match the energy of a vibrational mode: $\omega_p - \omega_S = \Omega_v$, where $\omega_p$ and $\omega_S$ are the angular frequencies of the pump and Stokes lasers and $\Omega_v$ is the angular frequency of the vibrational mode. CRS is a nonlinear optical process, where the signal level is normally a function of the product of the powers of the pump and Stokes lasers. Therefore, most CRS microscopy experiments are performed with pulsed lasers, whose higher peak power improves the signal levels of CRS significantly. Coherent anti-Stokes Raman scattering (CARS) Microscopy In CARS, anti-Stokes photons (higher in energy, shorter wavelength than the pump) are detected as signals. In CARS microscopy, there are normally two ways to detect the newly generated photons. One is called forward-detected CARS, the other epi-detected CARS. In forward-detected CARS, the generated CARS photons together with the pump and Stokes lasers go through the sample. The pump and Stokes lasers are completely blocked by a high optical density (OD) notch filter. The CARS photons are then detected by a photomultiplier tube (PMT) or a CCD camera. In epi-detected CARS, back-scattered CARS photons are redirected by a dichroic mirror or polarizing beam splitter. After high OD filters are used to block back-scattered pump and Stokes lasers, the newly generated photons are detected by a PMT. The signal intensity of CARS has the following relationship with the pump and Stokes laser intensities $I_p$ and $I_S$, the number of molecules $N$ in the focus of the lasers, and the third-order Raman susceptibility $\chi^{(3)}$ of the molecule: $I_{\mathrm{CARS}} \propto I_p^2 I_S N^2 |\chi^{(3)}|^2$. The signal-to-noise ratio (SNR), which is a more important characteristic in imaging experiments, depends on the square root of the number of CARS photons generated: $\mathrm{SNR}_{\mathrm{CARS}} \propto \sqrt{n_{\mathrm{CARS}}} \propto I_p \sqrt{I_S}\, N |\chi^{(3)}|$.
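To make the frequency-matching condition above concrete, here is a minimal sketch that computes the Stokes wavelength needed to address a given Raman shift and the anti-Stokes wavelength at which the CARS signal then appears (using $\omega_{as} = 2\omega_p - \omega_S$). The 800 nm pump and the 2850 cm−1 CH2 stretch are illustrative example values, not prescribed settings.

```python
# Bookkeeping for pump/Stokes/anti-Stokes wavelengths in CRS (example values).

def wavenumber_cm(wavelength_nm: float) -> float:
    """Convert a vacuum wavelength in nm to a wavenumber in cm^-1."""
    return 1e7 / wavelength_nm

def stokes_wavelength_nm(pump_nm: float, raman_shift_cm: float) -> float:
    """Stokes wavelength whose detuning from the pump equals the Raman shift."""
    return 1e7 / (wavenumber_cm(pump_nm) - raman_shift_cm)

def anti_stokes_wavelength_nm(pump_nm: float, stokes_nm: float) -> float:
    """CARS emission at omega_as = 2*omega_p - omega_S, blue-shifted from the pump."""
    return 1e7 / (2 * wavenumber_cm(pump_nm) - wavenumber_cm(stokes_nm))

pump = 800.0    # nm (assumed example value)
shift = 2850.0  # cm^-1, CH2 symmetric stretch (assumed example value)
stokes = stokes_wavelength_nm(pump, shift)
print(f"Stokes: {stokes:.0f} nm")                                        # ~1036 nm
print(f"anti-Stokes: {anti_stokes_wavelength_nm(pump, stokes):.0f} nm")  # ~651 nm
```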
There are other non-linear optical processes that also generate photons at the anti-Stokes wavelength. Those signals are normally called the non-resonant (NR) four-wave-mixing (FWM) background in CARS microscopy. This background can interfere with the CARS signal either constructively or destructively. However, the problem can be partially circumvented by subtracting the on- and off-resonance images or by using mathematical methods to retrieve background-free images. Stimulated Raman scattering (SRS) microscopy In SRS, the intensity of the energy transfer from the pump wavelength to the Stokes laser wavelength is measured as a signal. There are two ways to measure SRS signals. One is to measure the increase of power in the Stokes laser, which is called stimulated Raman gain (SRG). The other is to measure the decrease of power in the pump laser, which is called stimulated Raman loss (SRL). Since the change of power is on the order of 10−3 to 10−6 compared with the original power of the pump and Stokes lasers, a modulation transfer scheme is normally employed to extract the SRS signals. The SRS signal depends on the pump and Stokes laser powers in the following way: $\Delta I_{\mathrm{SRS}} \propto I_p I_S N\, \mathrm{Im}(\chi^{(3)})$, where $N$ is the number of molecules in the focus. Shot noise limited detection can be achieved if electronic noise from the detectors is reduced well below the optical noise and the lasers are shot noise limited at the detection frequency (modulation frequency). In the shot noise limited case, the signal-to-noise ratio (SNR) of SRS is $\mathrm{SNR}_{\mathrm{SRS}} \propto I_p \sqrt{I_S}\, N\, \mathrm{Im}(\chi^{(3)})$ for SRG detection, with the roles of $I_p$ and $I_S$ interchanged for SRL detection. The signal of SRS is free from the non-resonant background which plagues CARS microscopy, although a much smaller non-resonant background from other optical processes (e.g. cross-phase modulation, multi-color multi-photon absorption) may exist. SRS can be detected in either the forward or the epi direction. In forward-detected SRS, the modulated laser is blocked by a high OD notch filter and the other laser is measured by a photodiode. Modulation transferred from the modulated laser to the originally unmodulated laser is normally extracted by a lock-in amplifier from the output of the photodiode. In epi-detected SRS, there are normally two methods to detect the SRS signal. One method is to detect the back-scattered light in front of the objective by a photodiode with a hole at the center. The other method is similar to epi-detected CARS microscopy, where the back-scattered light goes through the objective and is deflected to the side of the light path, normally with the combination of a polarizing beam splitter and a quarter wave-plate. The Stokes (or pump) laser is then detected after filtering out the pump (or Stokes) laser. Two-color, multi-color, and hyper-spectral CRS microscopy One pair of laser wavelengths only gives access to a single vibrational frequency. Imaging samples at different wavenumbers can provide a more specific and quantitative chemical mapping of the sample. This can be achieved by imaging at different wavenumbers one after another. This operation always involves some type of tuning: tuning of one of the lasers' wavelengths, tuning of a spectral filtering device, or tuning of the time delay between the pump and Stokes lasers in the case of spectral-focusing CRS. Another way of performing multi-color CRS is to use one picosecond laser with a narrow spectral bandwidth (<1 nm) as pump or Stokes and another laser with a broad spectral bandwidth. In this case, the spectrum of the transmitted broadband laser can be spread by a grating and measured by an array of detectors.
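As a numerical illustration of the modulation-transfer scheme described above, the sketch below simulates a pump beam carrying a tiny stimulated Raman loss at a modulation frequency and recovers it with a software lock-in (multiply by the reference, then low-pass by averaging). The sample rate, modulation frequency, modulation depth, and noise level are all assumed, illustrative values, not instrument specifications.

```python
# Toy lock-in recovery of a small SRS modulation transfer (values assumed).
import numpy as np

fs, f_mod, T = 1e6, 20e3, 0.01   # sample rate (Hz), modulation (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
depth = 1e-5                      # fractional stimulated Raman loss on the pump

reference = np.sin(2 * np.pi * f_mod * t)
pump = 1.0 - depth * reference    # pump intensity carrying the tiny SRS signal
pump += 1e-6 * np.random.default_rng(1).normal(size=t.size)  # noise floor

# Lock-in demodulation: mix with the reference, then low-pass (here: average).
recovered = -2 * np.mean(pump * reference)
print(f"recovered modulation depth: {recovered:.1e}")  # ~1e-05
```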
Spectral-focusing CRS CRS normally uses narrow-bandwidth lasers (bandwidth < 1 nm) to maintain a good spectral resolution of ~15 cm−1. Lasers with sub-1 nm bandwidth are picosecond lasers. In spectral-focusing CRS, femtosecond pump and Stokes lasers are equally linearly chirped into picosecond pulses. The effective bandwidth becomes smaller, and therefore high spectral resolution can be achieved this way with femtosecond lasers, which normally have a broad bandwidth. The wavenumber tuning of spectral-focusing CRS can be achieved both by changing the center wavelength of the lasers and by changing the delay between the pump and Stokes lasers. Applications Coherent Raman histology One of the major applications for CRS is label-free histology, which is also called coherent Raman histology (CRH), or sometimes stimulated Raman histology. In CRH, CRS images are obtained at lipid and protein bands, and after some image processing an image similar to H&E staining can be obtained. Different from H&E staining, CRH can be done on live and fresh tissue and does not need fixation or staining. Cell metabolism The metabolism of small molecules like glucose, cholesterol, and drugs is studied with CRS in live cells. CRS provides a way to measure molecular distribution and quantities with relatively high throughput. Myelin imaging Myelin is rich in lipid. CRS is routinely used to image myelin in live or fixed tissues to study neurodegenerative diseases or other neural disorders. Pharmaceutical research The functions of drugs can be studied by CRS too. For example, the anti-leukemia drug imatinib was studied with SRS in leukemia cell lines. The study revealed the possible mechanism of its metabolism in cells and provided insight into ways to improve drug effectiveness. Raman tags Even though CRS allows label-free imaging, Raman tags can also be used to boost the signal for specific targets. For example, deuterated molecules are used to shift the Raman signal to a band where interference from other molecules is absent. Specially engineered molecules containing isotopes can be used as Raman tags to achieve super-multiplexing multi-color imaging with SRS. Comparison to confocal Raman microscopy Confocal Raman microscopy normally uses continuous-wave lasers to provide a spontaneous Raman spectrum over a broad wavenumber range for each point in an image. It takes a long time to scan the whole sample, since each pixel requires seconds for data acquisition. The whole imaging process is long, and therefore it is more suitable for samples that do not move. CRS, on the other hand, measures signals at a single wavenumber but allows for fast scanning. If more spectral information is needed, multi-color or hyperspectral CRS can be used, and the scanning speed or data quality will be compromised accordingly. Comparison between SRS and CARS In CRS microscopy, we can regard SRS and CARS as two aspects of the same process. The CARS signal is always mixed with the non-resonant four-wave-mixing background and has a quadratic dependence on the concentration of the chemicals being imaged. SRS has a much smaller background and depends linearly on the concentration of the chemical being imaged. Therefore, SRS is more suitable for quantitative imaging than CARS. On the instrument side, SRS requires modulation and demodulation (e.g. a lock-in amplifier or resonant detector). For multi-channel imaging, SRS requires multichannel demodulation while CARS only needs a PMT array or a CCD. Therefore, the instrumentation required is more complicated for SRS than for CARS. On the sensitivity side, SRS and CARS normally provide similar sensitivities. Their differences are mainly due to detection methods.
In CARS microscopy, PMTs, APDs or CCDs are used as detectors to detect the photons generated in the CARS process. PMTs are most commonly used due to their large detection area and high speed. In SRS microscopy, photodiodes are normally used to measure laser beam intensities. Because of such differences, the applications of CARS and SRS are also different. PMTs normally have relatively low quantum efficiency compared with photodiodes. This negatively impacts the SNR of CARS microscopy. PMTs also have reduced sensitivity for lasers with wavelengths longer than 650 nm. Therefore, with the commonly used laser system for CRS (Ti-sapphire laser), CARS is mainly used to image in the high wavenumber region (2800–3400 cm−1). The SNR of CARS microscopy is normally poor for fingerprint imaging (400–1800 cm−1). SRS microscopy mainly uses silicon photodiodes as detectors. Si photodiodes have much higher quantum efficiency than PMTs, which is one of the reasons that the SNR of SRS can be better than that of CARS in many cases. Si photodiodes also suffer reduced sensitivity when the laser wavelength is longer than 850 nm. However, the sensitivity is still relatively high and allows for imaging in the fingerprint region (400–1800 cm−1). See also References Microscopy Raman spectroscopy
Coherent Raman scattering microscopy
[ "Chemistry" ]
2,489
[ "Microscopy" ]
61,308,580
https://en.wikipedia.org/wiki/Anti-structure
In crystallography, an anti-structure is obtained from a salt structure by exchanging anion and cation positions. For instance, calcium fluoride, CaF2, crystallizes in a cubic motif called the fluorite structure. The same crystal structure is found in numerous ionic compounds with formula AB2, such as ceria (CeO2), zirconia (cubic ZrO2), and uranium dioxide (UO2). In the corresponding anti-structure, called the antifluorite structure, anions and cations are swapped, as in beryllium carbide (Be2C), lithium oxide (Li2O), and potassium sulfate (K2SO4). Other anti-structures include: anti-SnO2: Ti2N anti-PbCl2: Co2P anti-CdCl2: Co2N anti-CdI2: Cs2O anti-NbS2: Hf2S anti-ReO3: Cu3N anti-LaF3: Cu3P, Cu3As References Crystallography
Anti-structure
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
227
[ "Crystallography", "Condensed matter physics", "Materials science" ]
69,193,526
https://en.wikipedia.org/wiki/Isomorphic%20Labs
Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who also serves as the CEO. The company was incorporated on February 24, 2021 and announced on November 4, 2021. It was established under Alphabet Inc. as a spin-off from its AI research lab DeepMind, of which Hassabis is also founder and CEO. The company draws upon DeepMind's AlphaFold technology, which can be used to predict protein structures in the human body with high accuracy, allowing its researchers to find new target pathways for drug delivery. In December 2022, Isomorphic Labs announced its second office location in Lausanne, Switzerland. In January 2024, Isomorphic Labs partnered with Novartis AG and Eli Lilly and Company on AI drug discovery and research. In May 2024, Google DeepMind and Isomorphic Labs announced the release of AlphaFold 3, freely available on the AlphaFold server for non-commercial research. AlphaFold 3 is not limited to predicting how proteins fold; it can also predict their interactions with molecules typically found in drugs, such as ligands or antibodies, which is expected to significantly accelerate drug discovery. In November 2024, preliminary results of CASP16 showed that AlphaFold 3-based models did not significantly outperform older methods for predicting protein-ligand interactions. The top-performing models in the CASP16 Pose Prediction for Pharma Targets section were ClusPro and CoDock, utilizing AlphaFold 2-based predictions, human visual inspection, and manual adjustments. References External links Alphabet Inc. Biotechnology companies Drug discovery companies Artificial intelligence companies Companies based in London 2021 establishments in the United States
Isomorphic Labs
[ "Chemistry", "Engineering", "Biology" ]
343
[ "Biotechnology organizations", "Drug discovery companies", "Drug discovery", "Biotechnology companies" ]
69,196,020
https://en.wikipedia.org/wiki/1105%20%28number%29
1105 (eleven hundred [and] five, or one thousand one hundred [and] five) is the natural number following 1104 and preceding 1106. Mathematical properties 1105 is the smallest positive integer that is a sum of two positive squares in exactly four different ways, a property that can be connected (via the sum of two squares theorem) to its factorization as the product of the three smallest prime numbers that are congruent to 1 modulo 4. It is also the smallest member of a cluster of three consecutive integers (1105, 1106, 1107) that each have eight divisors, and the second-smallest Carmichael number, after 561, one of the first four Carmichael numbers identified by R. D. Carmichael in his 1910 paper introducing this concept. Its binary representation 10001010001 and its base-4 representation 101101 are both palindromes, and (because the binary representation has nonzeros only in even positions and its base-4 representation uses only the digits 0 and 1) it is a member of the Moser–de Bruijn sequence of sums of distinct powers of four. As a number of the form $n(n^2+1)/2$ for $n = 13$, 1105 is the magic constant for $13 \times 13$ magic squares, and as a difference of two consecutive fourth powers ($1105 = 7^4 - 6^4$) it is a rhombic dodecahedral number (a type of figurate number), and a magic number for body-centered cubic crystals. These properties are closely related: the difference of two consecutive fourth powers is always a magic constant for an odd magic square whose size is the sum of the two consecutive numbers (here $13 = 6 + 7$).
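These claims are easy to check numerically. The short sketch below (using an ad hoc trial-division helper) verifies the four sum-of-two-squares representations, the Carmichael property via Korselt's criterion, and the magic-constant and fourth-power identities.

```python
# Quick numerical checks of 1105's stated properties.

def prime_factors(m: int) -> dict:
    """Trial division; returns {prime: exponent}."""
    f, d = {}, 2
    while d * d <= m:
        while m % d == 0:
            f[d] = f.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        f[m] = f.get(m, 0) + 1
    return f

n = 1105

# Sum of two positive squares in exactly four ways (unordered pairs).
root = int(n**0.5)
reps = [(a, b) for a in range(1, root + 1)
        for b in range(a, root + 1) if a * a + b * b == n]
assert reps == [(4, 33), (9, 32), (12, 31), (23, 24)]

# Product of the three smallest primes congruent to 1 mod 4.
assert prime_factors(n) == {5: 1, 13: 1, 17: 1}

# Carmichael number via Korselt's criterion: composite, squarefree,
# and p - 1 divides n - 1 for every prime factor p.
f = prime_factors(n)
assert len(f) > 1 and all(e == 1 for e in f.values())
assert all((n - 1) % (p - 1) == 0 for p in f)

# Magic constant of a 13 x 13 magic square, and 7^4 - 6^4.
assert n == 13 * (13**2 + 1) // 2 == 7**4 - 6**4

# Palindromic in binary (and in base 4: 101101).
b = bin(n)[2:]
assert b == b[::-1] == "10001010001"
print("all checks pass")
```

References Integers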
1105 (number)
[ "Mathematics" ]
329
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
69,199,096
https://en.wikipedia.org/wiki/Intrinsic%20bond%20orbitals
Intrinsic bond orbitals (IBO) are localized molecular orbitals giving exact and non-empirical representations of wave functions. They are obtained by unitary transformation and form an orthogonal set of orbitals localized on a minimal number of atoms. IBOs present an intuitive and unbiased interpretation of chemical bonding with naturally arising Lewis structures. For this reason IBOs have been successfully employed for the elucidation of molecular structures and electron flow along the intrinsic reaction coordinate (IRC). IBOs have also found application as Wannier functions in the study of solids. Theory In the IBO method, molecular wave-functions calculated using self-consistent field (SCF) methods, such as Kohn-Sham density functional theory (DFT), are expressed as linear combinations of localized molecular orbitals. In order to arrive at IBOs, intrinsic atomic orbitals (IAOs) are first calculated as representations of a molecular wave function for which each IAO can be assigned to a specific atom. This allows for a chemically intuitive orbital picture, as opposed to the commonly used large and diffuse basis sets for the construction of more complex molecular wavefunctions. IAOs are constructed from tabulated free-atom AOs of standard basis-sets under consideration of the molecular environment. This yields polarized atomic orbitals that resemble the free-atom AOs as much as possible, before orthonormalization of the polarized AOs results in the set of IAOs. IAOs are thus a minimal basis for a given molecule in which atomic contributions can be distinctly assigned. The set of all IAOs spans the occupied molecular orbitals exactly, which renders them an exact representation of the wavefunction. Since IAOs are associated with a specific atom, they can provide atom-specific properties such as the partial charge. Compared to other charges, such as the Mulliken charge, the IAO charges are independent of the employed basis set. IBOs are constructed as a linear combination over IAOs with the condition of minimizing the number of atoms over which the orbital charge is spread. Each IBO can thereby be divided into the contributions of the individual atoms, expressed as the electronic occupation $n_A(i)$ of orbital $i$ on atom $A$. The localization is performed in the spirit of the Pipek-Mezey localization scheme, maximizing a localization functional $L = \sum_i \sum_A [n_A(i)]^p$ with $p = 2$ or $p = 4$. While the choice of the exponent does not affect the resulting IBOs in most cases, the choice of $p = 4$ localizes the orbitals in aromatic systems, unlike $p = 2$. The process of IBO construction is performed by unitary transformation of canonical MOs, which ensures that the IBOs remain an exact and physically accurate representation of the molecular wavefunction due to the invariance of Slater determinant wavefunctions towards unitary rotations. The unitary matrix $U$, which produces the localized IBOs upon matrix multiplication with the set of occupied MOs, is thereby chosen to effectively minimize the spread of the IBOs over the atoms of a molecule. The product of the occupied MOs with $U$ is a set of localized IBOs, closely resembling the chemically intuitive shapes of molecular orbitals, allowing for distinction of bond types, atomic contributions and polarization.
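As a rough illustration of the objective being maximized, the following sketch evaluates a Pipek-Mezey-style functional from a fabricated orbital-on-atom occupation matrix. A real IBO implementation would obtain $n_A(i)$ from IAO projectors and maximize $L$ over unitary rotations of the occupied space; neither step is attempted here, and all numbers are toy inputs.

```python
# Toy evaluation of the localization functional L = sum_i sum_A n_A(i)^p.
import numpy as np

def localization_functional(n_iA: np.ndarray, p: int = 4) -> float:
    """Large when each orbital's charge sits on few atoms (i.e., localized)."""
    return float(np.sum(n_iA ** p))

# One perfectly localized orbital vs. one spread evenly over 8 atoms;
# each row holds 2 electrons, mimicking a doubly occupied orbital.
localized = np.zeros(8); localized[0] = 2.0
delocalized = np.full(8, 2.0 / 8)

for p in (2, 4):
    print(p, localization_functional(localized[None, :], p),
             localization_functional(delocalized[None, :], p))
# p=2: 4.0 vs 0.5; p=4: 16.0 vs ~0.031 -- localization raises L,
# so maximizing L over unitary rotations drives orbitals onto few atoms.
```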
Application in structure and bonding In his original paper introducing IBOs, Knizia showed the versatility of his method for describing not only classical bonding situations, such as the σ and π bond, but also aromatic systems and non-trivial bonds. The differentiation of σ and π bonds in acrylic acid is possible based on IBO geometries, as is the identification of the IBOs corresponding to the oxygen lone pairs. Benzene provided an example of a delocalized aromatic system to test the IBO method. Apart from the C-C and C-H σ-bonds, the six-electron π-system is expressed as three delocalized IBOs. Representation of non-Lewis bonding was demonstrated on diborane B2H6, with one IBO stretching over B-H-B, corresponding to the 3-center-2-electron bond. Transition metal compounds IBO analysis was used to explain the stability of electron-rich gold-carbene complexes, mimicking reactive intermediates in gold catalysis. While these complexes are sometimes depicted with an Au-C double bond, representing the sigma donation of the carbene and the π backbonding of Au, IBO analysis points towards a minimal amount of π-backbonding, with the respective orbital mainly localized on Au. The σ-donating carbene orbital is likewise strongly polarized towards C. Stabilization of the compound thus occurs through strong donation of the aromatic carbene substituents into the carbene carbon p-orbitals, outcompeting the Au π-backdonation. IBO analysis was thus able to refute the double-bond character of the gold-carbene complexes and provided deep insight into the electronic structure of Cy3P-Au-C(4-OMe-C6H4)2 (Cy = cyclohexyl). The π-backbonding character was again evaluated for gold-vinylidene complexes, another common type of gold catalysis intermediate. IBO analysis revealed significantly stronger π-backbonding for the gold-vinylidenes compared to the gold-carbenes. This was attributed to the geometric inability of aromatic vinylidene substituents to compete with Au for π-interactions, since the respective orbitals are perpendicular to each other. Knizia and Klein similarly employed IBO for the analysis of [Fe(CO)3NO]–. The even polarization of IBOs between Fe and N points towards a covalently bonded NO ligand. The double bond occurs via two d-p π-interactions and results in a formal Fe0 center. Confirmed by further calculations, IBO proved to be a fast and straightforward method to interpret bonding in this case. Making use of the low computational cost, a Cloke-Wilson rearrangement catalyzed by [Fe(CO)3NO]– was investigated by constructing the IBOs for every stationary point along the IRC. It was found that one of the Fe-NO π bonds takes an active part in catalysis by electron transfer to and from the substrate, explaining the unique catalytic activity of [Fe(CO)3NO]– compared to the isoelectronic [Fe(CO)4]2–. Apart from the above-mentioned compounds, the IBO method has been employed to investigate various other transition metal complexes, such as gold-diarylallenylidenes or diplatinum diboranyl complexes, proving to be a valuable tool for gaining insight into the extent and nature of bonding. Main group compounds IBO analysis has been employed in main group chemistry to elucidate oftentimes non-trivial electronic structures. The bonding of phosphaaluminirenes was, for example, investigated, showing a 3-center-2π-electron bond in the AlCP cycle. Further application was found in confirming the distonic nature of a phosphorus-containing radical cation reported by Chen et al. (see figure). While the IAO charge analysis yielded a positive charge on the chelated P, IBOs showed the localization of the unpaired electron on the other P atom, confirming the spatial separation of radical site and charge.
Another example is the elucidation of the electronic structure of the hexamethylbenzene dication. Three π-bonding IBOs were found between the basal C5Me5 plane and the apical C, reminiscent of Cp* coordination complexes. The three π bonds are thereby polarized towards the apical C, which in turn coordinates to a CH3+ cation with its lone pair. IBO analysis therefore revealed the Lewis-acidic and Lewis-basic character of the apical C. Applications of IBO to cluster compounds have included zirconium-doped boron clusters. IBO analysis showed that the unusual stability of the neutral ZrB12 cluster stems from several multicenter σ bonds. The B-B σ-bonding orbitals extend to the central Zr atom, forming the multicenter bonds. This example displays the method's aptitude for analyzing cluster compounds and multicenter bonding. Valence virtual intrinsic bond orbitals Although IBOs typically describe occupied orbitals, the description of unoccupied orbitals can likewise be of value for interpreting chemical interactions. Valence virtual IBOs (vvIBOs) were introduced with the investigation of high-valent formal Ni(IV) complexes. The bonding and antibonding manifolds of the compound were described using IBOs and vvIBOs, respectively. Compared to the widely used HOMOs/LUMOs, which are often spread over the whole molecule and can be difficult to interpret, vvIBOs allow for a more direct interpretation of chemical interactions with unoccupied orbitals. Electron flow along the IRC In 2015, Knizia and Klein introduced the analysis of electron flow in reactions with IBO as a non-empirical and straightforward method of evaluating curly-arrow mechanisms. Since IBOs are exact representations of Kohn-Sham wavefunctions, they can provide physical confirmation of curly-arrow mechanisms based on first principles. Because IBOs usually represent chemical bonds and lone pairs, this method allows for the elucidation of bond rearrangements in terms of the elemental steps and their sequence. By calculating the root mean square deviations of the partial charge distributions compared to the initial charge distribution, IBOs taking an active part in a reaction can be distinguished from those that remain unchanged along the IRC. Knizia and Klein demonstrate the versatility of this method in their original report, first presenting a simple SN2-type self-exchange reaction of H3CCl and Cl–, followed by the migration of π bonds in a substitution reaction (SN2) and π- to σ-bond transformations in a Claisen rearrangement. Electron flow can be easily followed by observing the migration of an IBO, and bond types are easily distinguished based on the geometries of the IBOs. The value of IBO analysis along the IRC shows especially for complex reactions, such as a cyclopropanation reaction with only one transition state and no intermediates, reported by Haven et al. Calculations by Knizia and Klein yielded a precise curly-arrow mechanism for this reaction. Closed-shell systems Examples of IBO analysis along the IRC include the investigation of C-H bond activation by gold-vinylidene complexes. Through this method, it is possible to distinguish between concerted and stepwise reactions. The C-H activation, previously thought to be a single-step reaction, was in this case revealed to consist of three distinct phases: i) hydride transfer, ii) C-C bond formation and iii) sigma-to-pi rearrangement of the lone pair coordinated to Au.
Other reports of IBO analysis along the IRC include the elucidation and confirmation of a previously proposed mechanism for a [3,3]-sigmatropic rearrangement of a Au(I)-vinyl species, and the epoxidation of alkenes by peracids. For the latter, the textbook four-curly-arrow mechanism was found to be physically inaccurate. Instead, seven changing IBOs were found, yielding an ideal mechanism featuring seven curly arrows. The combination of IBO analysis with other computational methods, such as natural bond orbital (NBO) analysis for a Ti-catalyzed pyrrole synthesis or natural localized molecular orbital (NLMO) analysis for an intramolecular cycloaddition of a phosphaalkene to an arene, has likewise led to insightful results regarding the specifics of the reaction mechanisms.

Open-shell systems
Klein and Knizia furthermore introduced the first examples of IBOs used for the analysis of open-shell systems during proton-coupled electron transfer (PCET) and hydrogen atom transfer (HAT). The differentiation between PCET, a separate but concerted electron and proton transfer, and HAT, the transfer of a hydrogen atom, was shown for two well-studied model systems of enzymatic Fe-oxo active sites. IBOs along the IRC were calculated for the alpha and beta spin manifolds, respectively. While the IBO of the alpha-spin electron travelled together with the proton to take part in the formation of a new H-O bond in the case of HAT, for PCET the electron was transferred to the Fe center separately from the transferred proton. The successful application of the IBO method to these two examples of open-shell systems was suggested to pave the way for broader applications to similar problems.

See also
Localized molecular orbitals

References
Electronic structure methods
Intrinsic bond orbitals
[ "Physics", "Chemistry" ]
2,587
[ "Quantum chemistry", "Quantum mechanics", "Computational physics", "Electronic structure methods", "Computational chemistry" ]
69,201,841
https://en.wikipedia.org/wiki/Natural%20resonance%20theory
In computational chemistry, natural resonance theory (NRT) is an iterative, variational functional embedded into the natural bond orbital (NBO) program, commonly run in Gaussian, GAMESS, ORCA, Ampac and other software packages. NRT was developed in 1997 by Frank A. Weinhold and Eric D. Glendening, chemistry professors at the University of Wisconsin-Madison and Indiana State University, respectively. Given a list of NBOs for an idealized natural Lewis structure, the NRT functional creates a list of Lewis resonance structures and calculates the resonance weights of each contributing resonance structure. Structural and chemical properties, such as bond order, valency, and bond polarity, may be calculated from resonance weights. Specifically, bond orders may be divided into their covalent and ionic contributions, while valency is the sum of bond orders of a given atom. This aims to provide quantitative results that agree with qualitative notions of chemical resonance. In contrast to the "wavefunction resonance theory" (i.e., the superposition of wavefunctions), NRT uses the density matrix resonance theory, performing a superposition of density matrices to realize resonance. NRT has applications in ab initio calculations, including calculating the bond orders of intra- and intermolecular interactions and the resonance weights of radical isomers.

History
During the 1930s, Professor Linus Pauling and postdoctoral researcher George Wheland applied quantum-mechanical formalism to calculate the resonance energy of organic molecules. To do this, they estimated the structure and properties of molecules described by more than one Lewis structure as a linear combination of all Lewis structures,

ψi = Σκ aiκ ψiκ,

where aiκ and ψiκ denote the weight and single-electron eigenfunction from the wavefunction for a Lewis structure κ, respectively. Their formalism assumes that localized valence bond wavefunctions are mutually orthogonal. While this assumption ensures that the sum of the weights of the resonance structures describing the molecule is one, it creates difficulties in computing aiκ. The Pauling-Wheland formalism also assumes that cross-terms from density matrix multiplication may be neglected. This facilitates the averaging of chemical properties but, like the first assumption, is not true for actual wavefunctions. Additionally, in the case of polar bonding, these assumptions necessitate the generation of ionic resonance structures that often overlap with covalent structures. In other words, superfluous resonance structures are calculated for polar molecules. Overall, the Pauling-Wheland formulation of resonance theory was unsuitable for quantitative purposes. Glendening and Weinhold sought to create a new formalism, within their ab initio NBO program, that would provide an accurate quantitative measure of resonance theory, matching chemical intuition. To do this, instead of evaluating a linear combination of wavefunctions, they express a linear combination of density operators, Γ (i.e., matrices), for localized structures α,

Γ = Σα ωα Γα,  with ωα ≥ 0 and Σα ωα = 1,

so that the sum of all weights is one. In the context of NBO, the true density operator Γ represents the NBOs of an idealized natural Lewis structure. Once NRT has generated a set of density operators, Γα, for localized resonance structures, α, a least-squares variational functional is employed to quantify the resonance weights of each structure. It does this by measuring the variational error, δw = min ||Γ − Σα ωα Γα||, of the linear combination of resonance structures relative to the true density operator Γ.
To evaluate a single resonance structure, one can take δref, the absolute difference between the true density operator and a single-term expansion consisting of the leading reference structure, δref = ||Γ − Γref||. Now, the extent to which each reference structure represents the true structure may be evaluated as the "fractional improvement",

fw = (δref − δw)/δref.

From this equation, it is evident that as fw approaches one and δw approaches zero, the multi-structure resonance expansion becomes a better representation of the true structure than the single reference.

Updates
In 2019, Glendening, Wright and Weinhold introduced a quadratic programming (QP) strategy for variational minimization in NRT. This new feature is integrated into version 7.0 of the NBO program. In this program, the matrix root-mean-square deviation (Frobenius norm) of the resonance weights is calculated. The mean-squared density matrices, representing deviation from the true density matrix, may be rewritten as a Gram matrix, and an iterative algorithm is used to minimize the Gram matrix and solve the QP.

Theory

Generation of resonance structures and their density matrices
From a given wavefunction, Ψ, a list of optimal NBOs for a Lewis-type wavefunction is generated along with a list of non-Lewis NBOs (e.g., incorporating some antibonding interactions). When these latter orbitals have nonzero value, there is "delocalization" (i.e., deviation from the ideal Lewis-type wavefunction). From this, NRT generates a "delocalization list" from deviation from the parent structure and describes a series of alternative structures reflecting the delocalization. A threshold for the number of generated resonance structures can be set by controlling the desired energetic maximum (NRTTHR threshold). The NBOs for a resonance structure formula can then be calculated from the CHOOSE option. Operationally, there are three ways in which alternative resonance structures may be generated: (1) from the LEWIS option, considering the Wiberg bond indices; (2) from the delocalization list; (3) specified by the user. Below is an example of how NRT may generate a list of resonance structures. (1) Given an input wavefunction, NRT creates a list of reference Lewis structures. The LEWIS option tests each structure and rejects those that do not conform to Lewis bonding theory (i.e., those that do not fulfill the octet rule, pose unreasonable formal charges, etc.). (2) The PARENT and CHOOSE operations determine the optimal set of NBOs corresponding to a specific resonance structure. Additionally, CHOOSE is able to eliminate identical resonance structures. (3) A user may then call SELECT to select the structure that best matches the true molecular structure. This option may also show other structures within a defined energy threshold NRTTHR, deviating from optimal Lewis density. (4) Two other operations, CONDNS and KEKULE, are run to remove redundant ionic structures and append structures related by bond shifts, respectively. (5) Lastly, SECRES is called to calculate the NBOs and density matrices of each resonance structure.

Generation of resonance weights
To compute the variational error, δw, NRT offers the following optimization methods: the steepest descent algorithms BFGS and POWELL, and the simulated annealing methods ANNEAL and MULTI. Most commonly, the NRT program computes an initial guess of the resonance weights in which the weight of structure α is proportional to the exponential of its non-Lewis density, ρα (a schematic numerical sketch of the weight optimization is given below).
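The weight optimization can be pictured with a small numerical sketch. The code below is not the NBO implementation; it merely illustrates, under the assumption of real symmetric density matrices, how nonnegative normalized weights minimizing the Frobenius-norm error can be found by projected gradient descent on the quadratic (Gram-matrix) form.

```python
import numpy as np

def nrt_weights(gamma, gammas, steps=5000, lr=0.01):
    """Minimize ||gamma - sum_a w_a * gammas[a]||_F over the weight simplex.
    gamma: true density matrix; gammas: list of resonance-structure matrices."""
    G = np.array([[np.trace(A @ B) for B in gammas] for A in gammas])  # Gram
    b = np.array([np.trace(gamma @ A) for A in gammas])
    w = np.full(len(gammas), 1.0 / len(gammas))   # uniform initial guess
    for _ in range(steps):
        w -= lr * (G @ w - b)      # gradient of the quadratic objective
        w = np.clip(w, 0.0, None)  # enforce w_a >= 0
        w /= w.sum()               # crude projection: enforce sum w_a = 1
    return w

# Toy 2x2 check: the "true" matrix is a 70/30 mix of two structures
A, B = np.diag([2.0, 0.0]), np.diag([1.0, 1.0])
print(nrt_weights(0.7 * A + 0.3 * B, [A, B]))     # approximately [0.7, 0.3]
```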
Then the BFGS and POWELL steepest descent methods optimize for the nearest local minimum in energy. In contrast, the ANNEAL option finds the global maximum of the fractional improvement, fw, by performing a controlled, iterative random walk across the fw surface. This method is more computationally expensive than the BFGS and POWELL steepest descent methods. After optimization, SUPPL evaluates the weight of each resonance structure and modifies the list of resonance structures by either retaining or adding resonance structures of high weight and deleting or excluding those of low weight. It continues this process until either convergence is achieved or oscillation occurs.

Updates
In NBO version 7.0, the $NRTSTR function does not need to be called to generate a list of representative resonance structures, and the $CHOOSE algorithm has been adapted to be "essentially identical to the NLS [natural Lewis structure] algorithm", increasing the overall optimization of each resonance structure by reducing the amount to which the parent Lewis structure contributes to the resonance structure.

Applications

Main group chemistry

Bond order of the pnictogen bond
In 2015, Liu et al. conducted ab initio MP2/aug-cc-pVDZ calculations and used NRT in NBO version 5.0 to determine the natural bond order (i.e., a measure of electron density) of weak, noncovalent "pnicogen bond" interactions—analogous to the hydrogen bond—between various compounds. Their results are summarized in the following table. These results indicate that the ionic bond order of the O···P pnictogen bond is the greatest contribution to the total bond order. Therefore, this weak, noncovalent interaction is primarily electrostatic.

Bond order of Ge2M compounds
In 2018, Minh et al. used NRT in the NBO 5.G program, with density obtained from the B3P86/6-311+G(d) level of theory, to calculate the bond orders in a series of Ge2M compounds, where M is a first-row transition metal. The results are found in the following table. These results show that the Ge–Ge bond order ranges from 1.5 to 2.4, while the Ge–M bond order ranges from 0.3 to 1.7. Furthermore, the Ge–Ge bond is primarily covalent, whereas the Ge–M bond usually has an equal mix of covalent and ionic nature. Exceptions to this are Cr, Mn, and Cu, where the ionic component is dominant because of smaller overlap with the 4s orbital of the M atom, leading to less stability. Interactions with M = Cr, Mn, and Cu are described as an electron transfer from the 4s atomic orbital on the M atom to a π molecular orbital of the Ge2 fragment. Interactions with the other M atoms are described by two electron transfers: firstly, an electron transfer from the Ge2 fragment into an empty 3d atomic orbital on M, and secondly, an electron transfer from the 3d atomic orbital on M into an antibonding orbital on Ge2.

Resonance structures and bond order of regium bonds
In 2019, Zheng et al. used NRT at the wB97XD level in the GENBO 6.0W program to generate natural Lewis resonance structures and calculate the bond orders of regium bond interactions between phosphonates and metal halides MX (M = Cu, Ag, Au; X = F, Cl, Br). In a regium bond interaction, electron donors participate in a charge transfer to the metal species. Results of this analysis are shown in the following figures and tables. In the case of H3PO:···MX complexes, these results indicate that ωI is "the best natural Lewis structure" and that the lone pair of electrons on the oxygen atom interacts with a MX sigma antibonding orbital.
Zheng et al. also analyzed MX interactions with trans- and cis-phosphinous acid to compare the electron-donating abilities of phosphorus and oxygen atoms. The results above demonstrate that when phosphorus acts as the electron donor, the weights of ωI and ωII are similar. This is indicative of 3-center 4-electron bonding models. Despite greater mixing, ωII is determined to be the best natural Lewis structure for both the trans- and cis-complexes, with CuBr and AgBr as the only exceptions. The researchers explain that this result is consistent with analyses showing the preference for phosphorus to form covalent interactions. Overall, "the degree of covalency for P–M bonds decreases in the order of F > Cl > Br, Au > Cu > Ag, while the degree of noncovalent for O–M bonds, there is an increase according to F < Cl < Br, Au < Cu < Ag in the entire family."

Weight of resonance structures of arsenic radicals
In 2015, Viana et al. used NRT to determine the weight of resonance structures of the arsenic radical isomers of AsCO, AsSiO and AsGeO, which are of interest in the fields of astrochemistry and astrobiology. The results are shown in the following figures and table. According to Viana et al., "for most of the isomers, the percentage weight of the secondary resonance structure is negligible. In cyclic structures, the resonance weights lead to very similar percentage values."

Limitations
Calculating chemical and physical properties by using linear combinations of density matrices, rather than wavefunctions, may result in negative, and therefore erroneous, resonance weights because it is mathematically impossible to expand the density matrix without introducing negative values.

See also
Natural bond orbital
Chirgwin-Coulson weights

References

External links
Frank Weinhold
https://www2.chem.wisc.edu/users/weinhold
https://scholar.google.com/citations?user=47IzzwYAAAAJ&hl=en
Eric D. Glendening
https://www.indstate.edu/cas/chem_phys/eric-d-glendening
https://scholar.google.com/citations?user=iRjJ1Y0AAAAJ&hl=en
Natural Bond Orbital 7.0 Homepage
https://nbo7.chem.wisc.edu/
Quantum chemistry
Computational chemistry
Natural resonance theory
[ "Physics", "Chemistry" ]
2,739
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry", " molecular", "Atomic", " and optical physics" ]
77,968,557
https://en.wikipedia.org/wiki/D-square%20law
The d-square law or d²-law is a relationship between diameter and time for an isolated, spherical droplet when it evaporates quasi-steadily, which was first observed by Boris Sreznevsky in 1882 and was explained by Irving Langmuir in 1918. If d and t are the droplet diameter and time, then the d²-law pertains to the relation

d²(t) = d₀² − K(t − t₀),

where t₀ is the initial time, d₀ is the initial droplet diameter and K is called the evaporation constant.

References
Equations of fluid dynamics
Combustion
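As a short numerical illustration of the relation above (all values arbitrary), the squared diameter decays linearly in time, so the droplet lifetime follows directly as d₀²/K:

```python
import numpy as np

def droplet_diameter(t, d0, K, t0=0.0):
    """d-square law: d(t)^2 = d0^2 - K*(t - t0); clipped at full evaporation."""
    return np.sqrt(np.clip(d0**2 - K * (t - t0), 0.0, None))

d0 = 50e-6   # initial diameter: 50 micrometres (arbitrary)
K = 1.0e-9   # evaporation constant in m^2/s (illustrative value)
print("lifetime:", d0**2 / K, "s")           # 2.5 s until full evaporation
print(droplet_diameter(1.0, d0, K))          # diameter after 1 s
```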
D-square law
[ "Physics", "Chemistry" ]
103
[ "Equations of fluid dynamics", "Equations of physics", "Combustion", "Fluid dynamics stubs", "Fluid dynamics" ]
77,970,038
https://en.wikipedia.org/wiki/Stuff%20Matters
Stuff Matters: Exploring the Marvelous Materials That Shape Our Man-Made World is a 2014 non-fiction book by the British materials scientist Mark Miodownik. The book explores many of the common materials people encounter during their daily lives and seeks to explain the science behind them in an accessible manner. Miodownik devotes a chapter each to ten such materials, discussing their scientific qualities alongside quirky facts and anecdotes about their impacts on human history. Called "a hugely enjoyable marriage of science and art", Stuff Matters was critically and commercially successful, becoming a New York Times best seller and a winner of the Royal Society Prize for Science Books.

Background
Miodownik was working at University College London as a professor of materials and society at the time the book was published. He first gained interest in his field of study during his teenage years following an attempted robbery while on the subway. He was stabbed with a razor through multiple layers of clothing, leading him to be curious about the qualities of steel that provided for such a sharp and strong edge. The author would go on to earn a doctorate in jet engine alloys before entering into academic work. He was an occasional presenter on instructional television programs, and the year after publishing Stuff Matters he was the recipient of the American Association for the Advancement of Science's Public Engagement with Science Award. Stuff Matters was the author's first published popular science work.

Synopsis
Each of the book's chapters begins with the same photograph of Miodownik sitting on the rooftop of his London apartment. In each iteration of the photo, a different object is circled (a teacup in one chapter, a flowerpot in another, and so on), with that chapter focused on the history and science of the material of which the highlighted item consists. Over the course of the book, Miodownik covers a number of materials that have been around for a long time (steel, paper, glass, porcelain), some introduced last century (concrete, plastics, carbon fiber), and a few relatively new inventions (graphene, aerogels). He includes a chapter on chocolate, due in large part to his own obsession with the sweet. Miodownik seeks to draw connections from the materials to the lives of the people who use them, saying, "The material world is not just a display of our technology and culture, it is part of us. We invented it, we made it, and in turn, it makes us who we are." The author takes varying approaches to explaining each material's attributes and their importance since, according to him, the "materials and our relationships with them are too diverse for a single approach to suit them all". In the process of describing the book's subjects he intersperses scientific knowledge with insights into the materials' impacts on human history. For instance, historically the Chinese had a technological edge over the rest of the world in many respects (they alone held the secret to making porcelain for hundreds of years, for one example). However, their culture preferred other materials over glass, and Miodownik surmises that the resulting lack of advancement with that substance later held the culture back scientifically, as glass is a key component in such tools as microscopes and telescopes.
Elsewhere, the author describes how the sudden 19th-century surge in popularity of billiards can be linked to the invention of both nylon and vinyl (the need for a cheap alternative to ivory for making pool balls led to the increased development of celluloid, the success of which led to further innovation in plastics). While much of the book relates the history of the selected materials, Miodownik also devotes time to many of their futures, including the development of a type of concrete that is infused with bacteria meant to self-repair cracks as they occur. Also described are aerogels, which are ultralight materials that are the best thermal insulators known to man. Composed of over 99% air, these materials are able to produce Rayleigh scattering in much the same way as the Earth's atmosphere, thereby appearing blue to the naked eye. This effect, combined with the aerogel's light weight, leads Miodownik to say that holding a sample is "like holding a piece of sky". The material is extremely expensive to make, however, and outside of occasional specific applications for NASA (it was a key component of that agency's Stardust mission), practical uses have been difficult to find. Miodownik writes that civilization is built on the materials around us, and that we acknowledge their importance by naming our historical eras after them. The Stone, Bronze, and Iron Ages are well known, and Miodownik argues that the steel age likely began in the late 19th century and that we could be considered to be currently living in the silicon age. The constant desire for improvements in our lives (improved comfort, improved safety, etc.) drives the constant improvements to the materials that comprise our world. Therefore, Miodownik concludes, materials are "a multi-scale expression of our human needs and desires".

Reception
Stuff Matters was a New York Times best seller and won the 2014 Royal Society Prize for Science Books as well as the 2015 National Academies Communication Award. The book was met with generally positive reviews. Writing for The New York Times Book Review, Rose George praised Miodownik's blend of science and storytelling. The Wall Street Journal called it a "thrilling account of the modern material world", while The Independent was impressed with the "learned, elegant discourse" Miodownik conducts in each chapter. The Observer's Robin McKie considered the book "deftly written" and appreciated the author's conclusions drawn from the historical record. The reviewer for the Financial Times enjoyed the book but was critical of the occasional error, as when Miodownik mistakenly identifies the Greek word for chocolate as being much older than it is. The reviewer for Entertainment Weekly wrote that Miodownik occasionally lapsed into technical speak in a book meant for a broader audience, but that the author's clear enthusiasm for his subject outweighed any such negative aspects. Science News considered Miodownik's explanations of the more science-intensive material to be accessible and praised the humor interspersed throughout the book. Stuff Matters was well received by certain trade journals as well. The American Ceramic Society Bulletin wrote that Miodownik's writing worked both as an introduction for the layperson and as a "reminder of the field's broad purpose" for those with more knowledge of the subject, while the journal of the Boston Society of Architects particularly enjoyed the book's chapter on concrete.
Bill Gates reviewed the book favorably on his website, writing, "In political contests, voters sometimes put more weight on whether they'd like to have a beer with a candidate than on the candidate's qualifications. Miodownik would pass anyone's beer test, and he has serious qualifications."

References
English-language non-fiction books
2014 non-fiction books
Materials science
Houghton Mifflin books
Science books
Stuff Matters
[ "Physics", "Materials_science", "Engineering" ]
1,437
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
77,973,234
https://en.wikipedia.org/wiki/E-Defense
The 3-D Full-Scale Earthquake Testing Facility or E-Defense () is an earthquake shaking table facility in Miki, Hyōgo Prefecture, Japan. Operated by the Japanese National Research Institute for Earth Science and Disaster Resilience (NIED), it was the largest 3D earthquake shake table in the world when it was commissioned.

History
After the destructive Great Hanshin earthquake of 1995, the Science and Technology Agency established a round-table conference, which in May 1996 recommended that an earthquake research centre be founded to prevent future earthquake damage in urban areas. It was recommended that the research centre should have a three-dimensional shake table. Development of the table's actuators began in 1995, and the design and construction of E-Defense began in 1998 or 1999 (sources vary), intending to replicate the ground motions of the Great Hanshin earthquake. Mitsubishi Heavy Industries machinery systems designed and constructed the facility, which is located at Miki Earthquake Disaster Memorial Park. Construction of the table's foundation started in 1999 and was completed in 2001. Operations began in 2005 after a total construction cost of 45 billion yen. The nickname "E-Defense" was selected in a public competition, with the letter "E" standing for Earth. E-Defense could not reproduce the ground motions of the 2011 Tōhoku earthquake due to the long period and long duration of the shaking. NIED tried to simulate the ground motions of this earthquake for five minutes but was initially only able to manage 1.5 minutes of shaking due to insufficient oil for the actuators. After more accumulators were installed and bypass valves were added to the actuators, the goal of five minutes of sustained shaking was achieved.

Facility
The table is 20 by 15 metres (an area of 300 square metres), making it the largest earthquake shaking table in the world when it was constructed. It can move in the x, y, and z directions and perform yaw, pitch, and roll rotations. It can accelerate up to 1 g horizontally in both directions and up to 1.5 g vertically, and it can carry a maximum payload of 1,200 tons. The table has five horizontal actuators for each horizontal direction and 14 vertical actuators, each with a maximum driving force of 4,500 kilonewtons. They can generate frequencies with good accuracy up to 15 hertz, which can be increased to 30 hertz with lower accuracy. Universal joints are placed between the actuators and the table. The facility is on a six-hectare site, which includes several buildings. These are the experiment building, which contains the shaking table; the operation building, which contains the control system for the shaking table; the hydraulic unit building, which contains equipment that powers the shaking table; and the preparation building, where test structures are prepared.

Experiments
As of 2020, 113 experiments have been run on the table, at an average of 7.1 experiments per year. Experiments are either projects run by the NIED, projects run jointly by the NIED and other organisations, or projects run by other organisations. Most of the design and construction time for experiments takes place outside the main E-Defense facility to maximise the use of the table. Experimental structures are placed onto the table using two cranes with a combined maximum loading capacity of 9,000 kilonewtons.
Due to the high cost of running the experiments, it is E-Defense policy that the results are not the intellectual property of those conducting the experiments but are instead shared with the international earthquake engineering community, so that the results can have a high impact.

References

Further reading
Earthquake engineering
Research institutes in Japan
Miki, Hyōgo
Earthquake and seismic risk mitigation
Science and technology in Japan
E-Defense
[ "Engineering" ]
752
[ "Structural engineering", "Earthquake engineering", "Earthquake and seismic risk mitigation", "Civil engineering" ]
70,704,600
https://en.wikipedia.org/wiki/Protein%20nanoparticles
Protein nanotechnology is a burgeoning field of research that integrates the diverse physicochemical properties of proteins with nanoscale technology. This field assimilated into pharmaceutical research to give rise to a new classification of nanoparticles termed protein (or protein-based) nanoparticles (PNPs). PNPs garnered significant interest due to their favorable pharmacokinetic properties such as high biocompatibility, biodegradability, and low toxicity. Together, these characteristics have the potential to overcome the challenges encountered with synthetic NP drug delivery strategies. Overcoming these existing challenges, including low bioavailability, a slow excretion rate, high toxicity, and a costly manufacturing process, would open the door to considerable therapeutic advancements within oncology, theranostics, and clinical translational research. Continued advancement within this field is required for the clinical translation of PNPs. As of 2022, only one PNP formulation (Abraxane) and five VLPs (Gardasil, Ceravix, Mosquirix, Sci-B-Vac, Gardasil9) are approved by the FDA for clinical use. FDA approval of PNP formulations is restrained by complications arising from in-vivo interactions between PNPs and the biological environment that jeopardize their safety or function. For example, PNPs may undergo protein conformation changes, form a protein corona, or induce inflammation, and may thus risk patient well-being.

Synthesis methods
To capitalize on the favorable characteristics of PNPs, improvements within PNP synthesis methods are being widely explored. Advancements or the development of new synthesis methods are desirable, as existing methods (sonochemistry, thermal decomposition, and colloidal/hydrothermal/microemulsion methods) contribute to systemic toxicity and are limited to hydrophilic drugs. As a result, recent advancements seek to overcome these challenges and achieve commercial-scale production. In addition, newly developed PNP synthesis methods such as electrospray or desolvation provide a more sustainable approach compared to traditional nanoparticle methods. Unlike synthetic nanoparticles, PNPs can be synthesized under mild conditions and without toxic chemicals or organic solvents. PNPs are also naturally sourced and readily degradable. Yet, despite these advantages and the addition of new synthesis methods, the methods remain relatively expensive and do not deliver full control of PNP size, greatly limiting their application in biomedicine.

Types of protein
Numerous proteins are utilized in PNP synthesis. They are often sourced naturally from animal and plant sources. Accordingly, generally shared advantages of animal proteins include high biocompatibility, biodegradability, non-immunogenicity, drug loading efficiency, cell uptake, and easy and cost-effective production. Tables 2–4 below compile the common proteins used in PNP synthesis. The types of PNPs share similar physical properties such as high biocompatibility, non-immunogenicity, high drug efficiency, high biodegradability, and high cell uptake. Due to the abundance of proteins necessary for proper bodily function, the body has developed processes to take up proteins into tissues and cells. PNPs take advantage of these natural processes to enhance their cellular uptake. This abundance, together with the natural sourcing and subsequent purification of the proteins, also reduces immunogenic responses and produces low toxicity levels in the body.
As the PNPs are degraded, the tissues assimilate the amino acids into energy or protein production.

Protein nanoparticle modifications
PNPs can be chemically modified to increase particle stability, reduce degradation, and enhance favorable characteristics. Crosslinking is a common modification that can utilize synthetic or natural cross-linkers. Natural cross-linkers are significantly less toxic than synthetic cross-linkers. Driving factors in the modification of PNPs stem from their surface properties (surface charge, hydrophobicity, functional groups, etc.). Functional groups can bind to tissue-specific ligands for targeted drug delivery. Functional ligands may include protein receptors, antibodies, and smaller peptides. The purpose of ligand binding is to direct the PNP to the target cells, thereby reducing systemic toxicity and improving the retention and excretion of the PNP within tissues. The optimal ligand for PNP modification depends on the target cell. Modification of a PNP surface with ligands can be achieved through chemical conjugation, though chemical dyes for imaging and peptides for immune activation can also be attached [11,33,34]. One example is the ligand anti-human epidermal growth factor receptor 2, which targets breast cancer cells. The following provides additional applications of ligand modifications and their therapeutic applications [12]. In addition to chemical conjugation, genetic modification can facilitate direct attachment of the modifying protein monomers to the PNP surface. This results in a co-assembly and offers a solution to existing challenges with direct attachment of large proteins. Attaching large proteins to PNPs interferes with the self-assembly process and induces steric interactions, though smaller protein attachments are generally tolerated by protein NPs. A significant limitation of direct attachment via genetic modification of protein monomers is that it cannot accommodate the attachment of multiple components. Enzymatic ligation helps overcome this limitation by providing a site-specific covalent link to the PNP surface following PNP assembly. This strategy can also provide greater control over the density and ratios of attached proteins. The modification of VLPs is unique due to their nanocage architecture. PNPs with cage structures can fully encapsulate functional components in their interior, termed co-encapsulation. Drug encapsulation within VLP cages can occur through two processes. The first process occurs in vitro and requires disassembling the cage and reassembling it in the presence of the drug components to be encapsulated [8]. Since loading efficiency is influenced by electrostatic interactions, the drug compounds cannot be fully encapsulated without interfering with the VLP cage self-assembly. The other process is the encapsulation of drug components in vivo. This involves direct genetic attachment of the drug components to the interior of the VLP cage, guiding drugs for encapsulation directly to the interior of the cage.

Therapeutic drug delivery applications
Due to PNPs' breadth of favorable pharmacokinetic properties, such as high biocompatibility, high biodegradability, high modifiability, low toxicity, high cell uptake, and a fast excretion rate, PNPs are prime candidates for anti-cancer therapy. Previous anticancer therapies relied on the enhanced permeability effect to passively accumulate within tumors.
This resulted in greater toxicity due to the higher concentrations required to achieve critical drug efficacy levels. Newer strategies allow PNPs to actively target the tumor microenvironment via the attachment of ligands and site-specific protein receptors. Active targeting decreases the total concentration of drug required to deliver an effective dose, thereby reducing systemic side effects. In addition to active tumor targeting, PNPs can also be engineered to respond to changing external environments such as pH, temperature, or enzyme concentration. The tumor microenvironment is slightly acidic, so PNPs can be engineered to release their drug cargo only under specific tumor physiological conditions. Another application is photothermal or photodynamic therapy. PNPs selectively accumulate in the tumor microenvironment, where they are subsequently irradiated using a 1064 nm wavelength laser. The light energy is transferred into heat energy, increasing the temperature of the tumor microenvironment to inhibit tumor growth. Ferritin is a favorable protein for this application due to its high thermal stability. In-vivo imaging is another application of PNPs. PNPs can carry fluorescent dyes that selectively accumulate in the tumor microenvironment. This is important because a significant limitation of Green Fluorescent Protein, the standard protein for tumor imaging, is its insufficient deep tissue penetration. Due to their small size, PNPs can deliver fluorescent dyes deep into the tissue, overcoming this challenge and providing more accurate tumor imaging. This strategy may also be applied to MRI imaging using PNPs carrying magnetic components to tumor microenvironments for subsequent scanning. Other applications include vaccine development through VLPs carrying immunogenic components. Since VLPs do not carry any attenuated genetic material, these vaccines pose a safer alternative, especially for the immunocompromised or elderly. PNPs may also treat neurological diseases as they can cross the blood-brain barrier [28]. Lastly, PNPs may find applications within ophthalmic drug delivery, as PNPs have a significantly longer circulation time in the eye than eye drops.

Drug delivery challenges and regulations
Despite the numerous pharmacokinetic advantages of PNPs, there remain several critical challenges to their clinical translation. Only two PNPs have been FDA-approved, despite over 50 PNP formulations to date (2022). The first is Abraxane, an albumin nanoparticle carrying paclitaxel used for breast cancer, non-small cell lung cancer, and pancreatic cancer treatment. The second FDA-approved PNP is Ontak, a protein conjugate carrying IL-2 and diphtheria toxin used for cutaneous T-cell lymphoma. The two approved formulations are summarized in Table 5 below. The low approval rate of PNPs is due to limited existing control over drug encapsulation and the pharmacokinetic variability between PNP batches. Balancing both the repeatability of these two properties and their relative interactions is important because it ensures the predictability of their clinical outcomes, greater patient safety, and that protein loading does not interfere with the PNP's properties. Another limitation surrounds the cost and feasibility of large-scale production. Many synthesis methods that can deliver greater homogeneity between produced nanoparticles are also more costly options or cannot achieve mass production. This limitation is compounded by the lower yields of PNP manufacturing.
This limits the availability of PNPs to broad clinical adoption [20,29].

References
Protein complexes
Nanotechnology
Protein nanoparticles
[ "Chemistry", "Materials_science", "Engineering" ]
2,124
[ "Pharmacology", "Drug delivery devices", "Materials science", "Nanomedicine", "Nanotechnology" ]
63,475,215
https://en.wikipedia.org/wiki/Wheal%20Maid
Wheal Maid (also Wheal Maiden) is a former mine in the Camborne-Redruth-St Day Mining District, 1.5 km east of St Day. Between 1800 and 1840, profits are said to have been up to £200,000. In 1852, the mine was amalgamated with Poldice Mine and Carharrack Mine and worked as St Day United mine. Throughout the 1970s and 1980s, the mine site was turned into large lagoons and used as a tip for two other nearby mines: Mount Wellington and Wheal Jane. There were suggestions that the mine could be used as a landfill site for rubbish imported from New York and as the site of a power plant that would produce up to 40 megawatts of electricity; the concept was opposed by local residents and by Cornwall County Council, with Doris Ansari, the chair of the council's planning committee, saying that the idea "[did] not seem right for Cornwall". The site was bought from Carnon Enterprises by Gwennap District Council for a price of £1 in 2002. An investigation by the Environment Agency that concluded in 2007 found that soil near the mine had high levels of arsenic, copper and zinc contamination, and by 2012 it was deemed too hazardous for human activity. The mine gains attention during dry spells, when the lagoons dry up, leaving brightly coloured stains on the pit banks and beds.

2014 murder
In 2014, a 72-year-old man from Falmouth died at the site after what was initially thought to be a cycling accident. It was later found that the man had been murdered. A 34-year-old was found guilty and sentenced to life imprisonment, to serve at least 28 years.

References
Mines in Cornwall
Pollution in the United Kingdom
Soil contamination
2014 murders in the United Kingdom
Wheal Maid
[ "Chemistry", "Environmental_science" ]
361
[ "Environmental chemistry", "Soil contamination" ]
63,478,374
https://en.wikipedia.org/wiki/Judith%20Breuer
Judith Breuer is a British virologist who is professor of virology and director of the Pathogen Genomics Unit at University College London. She was elected a Fellow of the Academy of Medical Sciences in 2019. Breuer is part of the United Kingdom genome sequencing team that looks to map the spread of the coronavirus disease 2019.

Early life and education
As a child, Breuer was inspired by Vera Brittain and Simone de Beauvoir. She eventually studied medicine at the Middlesex Hospital medical school. During her doctoral degree, Breuer studied the genes of HIV-2 tissue culture isolates. Her medical career started in East London, where she noticed that there was a large population of adults with chickenpox. This is rare for countries like the United Kingdom, where children usually contract the disease. She undertook her specialist training in virology at St Mary's Hospital, London in the early nineties, and moved to St Bartholomew's Hospital in 1993. She was elected a Fellow of the Royal College of Pathologists in 1998.

Research and career
In 2005 Breuer joined University College London, where she serves as Chair of Molecular Virology. She simultaneously holds a clinical position at Great Ormond Street Hospital. In 2012 she was made co-Director of the Division of Infection & Immunity. Her research focusses on genome sequencing and phylogenetics. She also studies how viral evolution impacts public health practices and policy. Breuer demonstrated a methodology that enables the recovery of low-copy viral DNA from clinical samples, which can then be used for whole genome sequencing. She has primarily investigated the genetic association of Varicella zoster virus, Herpes simplex virus and human parainfluenza viruses. Breuer has investigated norovirus, which causes pandemics in cycles of between two and five years. Using phylogenetic trees, Breuer showed that pandemic strains of norovirus exist in the population long before the virus spreads around the world. She believes that changes in the immunity of a population create an environment that allows the pandemic to spread, and that pandemic strains may exist in children before they emerge in the wider population. Alongside norovirus, Breuer has extensively studied the Varicella zoster virus, which causes chickenpox and shingles. It is the smallest of all herpesviridae. For almost three decades it was unclear how the Varicella zoster virus retained its dormancy. Breuer was the first to identify a latency-associated genetic transcript, which can persist in the neurons of almost all adults. She demonstrated that the diversity of human cytomegalovirus (HCMV) in clinical samples is not caused by frequent mutation, as was previously thought, but instead by multi-strain infection. This finding demonstrates that HCMV does not mutate faster than other viruses, making it easier to identify a vaccination. In 2016 Breuer launched the Pathogen Genomics Unit at University College London, which allows the scientific community to better sequence pathogen genomes. She was elected a Fellow of the Academy of Medical Sciences in 2019. Her research includes the development of new tools and tests to protect people from antimicrobial resistance. Supported by the Department of Health and Social Care, Breuer looks to identify and treat antimicrobial-resistant diseases, ensure appropriate treatment pathways, and prevent the spread of antibiotic-resistant diseases between people.
This aspect of her research makes use of artificial intelligence to quickly interpret test results, collating information from electronic health records and learning how clinicians make use of test results in clinical care. To achieve this, Breuer is involved with the design of new diagnostic tools, comprehensive randomized controlled trials and clinical management mechanisms. In 2020 Breuer was appointed the London lead of a national response effort to sequence the genome and map the spread of the novel coronavirus disease.

Selected publications

References
Living people
Year of birth missing (living people)
Alumni of the University of London
Academics of University College London
British virologists
Fellows of the Academy of Medical Sciences (United Kingdom)
Pathogen genomics
Women virologists
Judith Breuer
[ "Biology" ]
849
[ "Molecular genetics", "DNA sequencing", "Pathogen genomics" ]
63,479,435
https://en.wikipedia.org/wiki/Floral%20isolation
Floral isolation is a form of reproductive isolation found in angiosperms. Reproductive isolation is the process of species evolving mechanisms to prevent reproduction with other species. In plants, this is accomplished through the manipulation of the pollinator's behavior (ethological isolation) or through morphological characteristics of flowers that favor intraspecific pollen transfer (morphological isolation). Preventing interbreeding prevents hybridization and gene flow between the species (introgression), and consequently protects the genetic integrity of the species. Reproductive isolation occurs in many organisms, and floral isolation is one form present in plants. Floral isolation occurs prior to pollination and is divided into two types: morphological isolation and ethological isolation. Floral isolation was championed by Verne Grant in the mid-20th century as an important mechanism of reproductive isolation in plants.

Morphological isolation
Mechanical or morphological isolation is a form of floral isolation where the characteristics of the flower prevent reproduction between species. These morphological differences primarily affect the positioning of reproductive structures within flowers and control the placement of pollen on the pollinator's body to promote transfer within the same species. For example, flowers of Salvia mellifera have anthers and stigmas which are positioned to contact the dorsal surface of the bumblebee abdomen, while flowers of the co-occurring Salvia apiana place pollen on the bumblebee's flanks.

Ethological isolation
Ethological isolation is a form of floral isolation caused predominantly by the behavior of pollinators. Flowers can have morphological features which attract or reward specific types of pollinators. The relationship between floral signals and pollinators can promote floral constancy, where different pollinators preferentially visit one species over others. The color or odor of flowers promotes this isolation as plants effectively manipulate the behavior of their animal pollinators. An example of this type of manipulation is found in orchids, which mimic female bees and wasps in order to attract male pollinators, a form of sexual deception referred to as pseudocopulation.

References
Botany
Evolution of plants
Floral isolation
[ "Biology" ]
412
[ "Evolution of plants", "Plants", "Botany" ]
63,479,676
https://en.wikipedia.org/wiki/David%20Tom%C3%A1nek
David Tománek (born July 1954) is a U.S.-Swiss physicist of Czech origin and a researcher in nanoscience and nanotechnology. He is Emeritus Professor of Physics at Michigan State University. He is known for predicting the structure and calculating properties of surfaces, atomic clusters including the C60 buckminsterfullerene, nanotubes, nanowires and nanohelices, graphene, and two-dimensional materials including phosphorene.

Academic career
Tománek earned a doctoral degree in Physics from the Freie Universität Berlin in 1983 under the supervision of Karl Heinz Bennemann and became Hochschulassistent there in 1984. Between 1985 and 1987 he worked as a postdoctoral researcher at Bell Labs under the supervision of Michael A. Schlüter and at the University of California, Berkeley under the supervision of Steven G. Louie. Since 1987, he has been Professor of Physics at Michigan State University, where he directs the Computational Nanotechnology Laboratory at the Department of Physics and Astronomy.

Research
Tománek and his research group have worked in many areas of nanoscience and nanotechnology. As a graduate student at FU Berlin, he studied structural and electronic properties of surfaces, including reconstruction and photoemission spectra. He was intrigued by the unusual structure and electronic properties of atomic clusters, including collective electronic excitations and superconductivity. His computational studies of growth regimes of silicon and carbon clusters have made use of the semi-quantitative Linear Combination of Atomic Orbitals (LCAO) or tight-binding method. During his 1994 sabbatical stay at the laboratory of Richard E. Smalley, he turned his interest to the unique properties of nanotubes formed of carbon (CNTs) and other materials. He studied their morphology, formation, mechanical stiffness, their ability to conduct heat and electrons, and field electron emission. After 2000, he became involved in studies of two-dimensional materials including phosphorene. In the following years, he has continued identifying applications of carbon nanotubes and two-dimensional materials in fields including low-resistance contacts to nanostructures, nanomechanical energy storage, and purification and desalination of water.

Conferences
Tománek initiated a series of annual Nanotube (NT) conferences and a Gordon Research Conference on Two-dimensional electronics beyond graphene.

Honors and awards
In 2004 Tománek was elected a Fellow of the American Physical Society, and in 2005 he received the prestigious Alexander von Humboldt Senior Scientist Award (Germany). In 2008 he received the Japan Carbon Award for Life-Time Achievement and was chosen by the American Physical Society as a member of the Outstanding Referees Program for excellence in peer review. In 2016 he received the Lee Hsun Research Award for Materials Science from the Chinese Academy of Sciences. His h-index is currently 85.

References

External links
Computational Nanotechnology Laboratory at Michigan State University
Google profile
Living people
Swiss physicists
1954 births
Czech physicists
Carbon scientists
Theoretical physicists
David Tománek
[ "Physics" ]
621
[ "Theoretical physics", "Theoretical physicists" ]
63,481,723
https://en.wikipedia.org/wiki/Timeline%20of%20crystallography
This is a timeline of crystallography.

17th century
1669 - In his book De solido intra solidum naturaliter contento Nicolas Steno asserted that, although the number and size of crystal faces may vary from one crystal to another, the angles between corresponding faces are always the same. This was the original statement of the first law of crystallography (Steno's law).

18th century
1723 - Moritz Anton Cappeller introduced the term crystallography in his book Prodromus Crystallographiae De Crystallis Improprie Sic Dictis Commentarium.
1766 - Pierre-Joseph Macquer, in his Dictionnaire de Chymie, promoted mechanisms of crystallization based on the idea that crystals are composed of polyhedral molecules (primitive integrantes).
1772 - Jean-Baptiste L. Romé de l'Isle developed geometrical ideas on crystal structure in his Essai de Cristallographie. He also described the twinning phenomenon in crystals.
1781 - Abbé René Just Haüy (often termed the "Father of Modern Crystallography") discovered that crystals always cleave along crystallographic planes. Based on this observation, and the fact that the inter-facial angles in each crystal species always have the same value, Haüy concluded that crystals must be periodic and composed of regularly arranged rows of tiny polyhedra (molécules intégrantes). This theory explained why all crystal planes are related by small rational numbers (the law of rational indices).
1783 - Jean-Baptiste L. Romé de l'Isle in the second edition of his Cristallographie used the contact goniometer to discover the law of constancy of interfacial angles: angles are constant and characteristic for crystals of the same chemical substance.
1784 - René Just Haüy published his law of decrements: a crystal is composed of molecules arranged periodically in three dimensions.
1795 - René Just Haüy lectured on his law of symmetry: "the manner in which Nature creates crystals is always obeying ... the law of the greatest possible symmetry, in the sense that oppositely situated but corresponding parts are always equal in number, arrangement, and form of their faces".

19th century
1801 - René Just Haüy published his multi-volume Traité de Minéralogie in Paris. A second edition under the title Traité de Cristallographie was published in 1822.
1801 - Déodat de Dolomieu published his Sur la philosophie minéralogique et sur l'espèce minéralogique in Paris.
1815 - René Just Haüy published his law of symmetry.
1815 - Christian Samuel Weiss, founder of the dynamist school of crystallography, developed a geometric treatment of crystals in which crystallographic axes are the basis for classification of crystals rather than Haüy's polyhedral molecules.
1819 - Eilhard Mitscherlich discovered crystallographic isomorphism.
1822 - Friedrich Mohs attempted to bring the molecular approach of Haüy and the geometric approach of Weiss into agreement.
1823 - Franz Ernst Neumann invented a system of crystal face notation, by using the reciprocals of the intercepts with crystal axes, which became the standard for the next 60 years.
1824 - Ludwig August Seeber conceived of the concept of using an array of discrete (molecular) points to represent a crystal.
1826 - Moritz Ludwig Frankenheim derived the 32 crystal classes by using the crystallographic restriction, consistent with Haüy's laws, that only 2, 3, 4 and 6-fold rotational axes are permitted.
1830 - Johann F. C. Hessel published an independent geometrical derivation of the 32 point groups (crystal classes).
1832 - Friedrich Wöhler and Justus von Liebig discovered polymorphism in molecular crystals, using the example of benzamide.
1839 - William Hallowes Miller invented zonal relations by projecting the faces of a crystal upon the surface of a circumscribed sphere. Miller indices are defined which form a notation system in crystallography for planes in crystal (Bravais) lattices.
1840 - Gabriel Delafosse, independently of Seeber, represented crystal structure as an array of discrete points generated by defined translations.
1842 - Moritz Frankenheim derived 15 different theoretical networks of points in space not dependent on molecular shape.
1848 - Louis Pasteur discovered that sodium ammonium tartrate can crystallize in left- and right-handed forms and showed that the two forms can rotate polarized light in opposite directions. This was the first demonstration of molecular chirality, and also the first explanation of isomerism.
1850 - Auguste Bravais derived the 14 space lattices.
1869 - Axel Gadolin, independently of Hessel, derived the 32 crystal classes using stereographic projection.
1877 - Paul Heinrich von Groth founded the journal Zeitschrift für Krystallographie und Mineralogie, and served as its editor for 44 years.
1877 - Ernest-François Mallard, building on the work of Auguste Bravais, published a memoir on optically "anomalous" crystals (that is, those crystals the morphology of which seems to be of greater symmetry than their optics), in which the importance of crystal twinning and "pseudosymmetry" were used as explanatory concepts.
1879 - Leonhard Sohncke listed the 65 crystallographic point systems using rotations and reflections in addition to translations.
1888 - Friedrich Reinitzer discovered the existence of liquid crystals during investigations of cholesteryl benzoate.
1889 - Otto Lehmann, after receiving a letter from Friedrich Reinitzer, used polarizing light to explain the phenomenon of liquid crystals.
1891 - Derivation of the 230 space groups (by adding mirror-image symmetry to Sohncke's work) by a collaborative effort of Evgraf Fedorov and Arthur Schoenflies.
1894 - William Barlow, using a sphere packing approach, independently derived the 230 space groups.
1894 - Pierre Curie described what is now called Curie's principle for the symmetry properties of crystals.
1895 - Wilhelm Conrad Röntgen on 8 November 1895 produced and detected electromagnetic radiation in a wavelength range now known as X-rays or Röntgen rays, an achievement that earned him the first Nobel Prize in Physics in 1901. X-rays became the major mode of crystallographic research in the 20th century.
1899 - Hermanus Haga and Cornelis Wind observed X-ray diffuse broadening through a slit and deduced that the wavelength of X-rays is on the order of an angstrom.

20th century
1905 - Charles Glover Barkla discovered the X-ray polarization effect.
1908 - Bernhard Walter and Robert Wichard Pohl observed X-ray diffraction from a slit.
1912 - Max von Laue discovered diffraction patterns from crystals in an x-ray beam.
1912 - Bragg diffraction, expressed through Bragg's law, was first presented by Lawrence Bragg on 11 November 1912 to the Cambridge Philosophical Society.
1912 - Heinrich Baumhauer discovered and described polytypism in crystals of carborundum, or silicon carbide.
1913 - Lawrence Bragg published the first observation of x-ray diffraction by crystals. Similar observations were also published by Torahiko Terada in the same year.
1913 - Georges Friedel stated Friedel's law, a property of Fourier transforms of real functions. Friedel's law is used in X-ray diffraction, crystallography and scattering from a real potential within the Born approximation.
1914 - Max von Laue won the Nobel Prize in Physics "for his discovery of the diffraction of X-rays by crystals."
1915 - William and Lawrence Bragg published the book X rays and crystal structure and shared the Nobel Prize in Physics "for their services in the analysis of crystal structure by means of X-rays."
1916 - Peter Debye and Paul Scherrer discovered powder (polycrystalline) diffraction.
1916 - Paul Peter Ewald predicted the Pendellösung effect, which is a foundational aspect of the dynamical diffraction theory of X-rays.
1917 - Albert W. Hull independently discovered powder diffraction in researching the crystal structure of metals.
1920 - Reginald Oliver Herzog and Willi Jancke published the first systematic analysis of X-ray diffraction patterns of cellulose extracted from a variety of sources.
1921 - Paul Peter Ewald introduced a spherical construction for explaining the occurrence of diffraction spots, which is now called Ewald's sphere.
1922 - Charles Galton Darwin formulated the theory of X-ray diffraction from imperfect crystals and introduced the concept of mosaicity in crystallography.
1922 - Ralph Wyckoff published a book containing tables with the positional coordinates permitted by the symmetry elements. These positions are now known as Wyckoff positions. This book was the forerunner of the International tables for crystallography, which first appeared in 1935.
1923 - Roscoe Dickinson and Albert Raymond, and independently, H.J. Gonell and Hermann Mark, first showed that an organic molecule, specifically hexamethylenetetramine, could be characterized by x-ray crystallography.
1923 - William H. Bragg and Reginald E. Gibbs elucidated the structure of quartz.
1923 - Paul Peter Ewald published his book Kristalle und Röntgenstrahlen (Crystals and X-rays).
1924 - Louis de Broglie in his PhD thesis Recherches sur la théorie des quanta introduced his theory of electron waves. This was the start of electron and neutron diffraction and crystallography.
1924 - J.D. Bernal established the structure of graphite.
1926 - Victor Goldschmidt distinguished between atomic and ionic radii and postulated some rules for atom substitution in crystal structures.
1927 - Frits Zernike and Jan Albert Prins proposed the pair distribution function for analyzing molecular structures in solution-phase diffraction.
1927 - Two groups demonstrated electron diffraction: the first in the Davisson–Germer experiment, the other led by George Paget Thomson and Alexander Reid. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident.
1928 - Felix Machatschki, working with Goldschmidt, showed that silicon can be replaced by aluminium in feldspar structures.
1928 - Kathleen Lonsdale used x-rays to determine that the structure of benzene is a flat hexagonal ring.
1928 - Paul Niggli introduced reduced cells for simplifying structures using a technique now known as Niggli reduction.
1928 - Hans Bethe published the first non-relativistic explanation of electron diffraction based upon Schrödinger's equation, which remains central to all further analysis.
1928 - Carl Hermann introduced and Charles Mauguin modified the international standard notation for crystallographic groups, called Hermann–Mauguin notation.
1929 - Linus Pauling formulated a set of rules (later called Pauling's rules) to describe the structure of complex ionic crystals. 1929 - William Howard Barnes published the crystal structure of ice. 1930 - Lawrence Bragg assembled the first classification of silicates, describing their structure in terms of groupings of SiO4 tetrahedra. 1930 - Gas electron diffraction was developed by Herman Mark and Raymond Wierl. 1931 - Paul Ewald and Carl Hermann published the first volume of the Strukturbericht (Structure Report), which established the systematic classification of crystal structure prototypes, also known as the Strukturbericht designation. 1931 - Fritz Laves enumerated the Laves tilings for the first time. 1932 - W. H. Zachariasen published an article entitled The atomic arrangement in glass, which perhaps had more influence than any other published work on the science of glass. 1932 - Friedrich Rinne introduced the concept of paracrystallinity for liquid crystals and amorphous materials. 1932 - Vadim E. Lashkaryov and Ilya D. Usyskin determined the positions of hydrogen atoms in ammonium chloride crystals using electron diffraction. 1934 - Arthur Patterson introduced the Patterson function, which uses diffraction intensities to determine the interatomic distances within a crystal, setting limits to the possible phase values for the reflected x-rays. 1934 - Martin Julian Buerger developed the equi-inclination Weissenberg X-ray camera. Buerger invented the precession camera in 1942. 1934 - C. Arnold Beevers and Henry Lipson invented the Beevers–Lipson strip as a calculation aid for Fourier methods in the determination of the crystal structure of CuSO4·5H2O. 1934 - Fritz Laves investigated the structures of intermetallic compounds of formula AB2. These structures were subsequently named Laves phases. 1935 - First publication of the International tables for the determination of crystal structures, edited by Carl Hermann. The successor volumes are currently published by the IUCr as the International tables for crystallography. 1935 - William Astbury established the structure of keratin using x-ray crystallography; this work provided the foundation for Linus Pauling's 1951 discovery of the α-helix. 1936 - Peter Debye won the Nobel Prize in Chemistry "for his contributions to our knowledge of molecular structure through his investigations on dipole moments and on the diffraction of X-rays and electrons in gases." 1936 - Hans Boersch showed that the electron microscope could be used as a micro-diffraction camera with an aperture, the birth of selected area electron diffraction. 1937 - Clinton Joseph Davisson and George Paget Thomson shared the Nobel Prize in Physics "for their experimental discovery of the diffraction of electrons by crystals." 1939 - Linus Pauling published the book The Nature of the Chemical Bond and the Structure of Molecules and Crystals. 1939 - André Guinier discovered small-angle X-ray scattering. 1939 - Walther Kossel and Gottfried Möllenstedt published the first work on convergent beam electron diffraction (CBED). It was extended by Peter Goodman and Gunter Lehmpfuhl, then mainly by the groups of John Steeds and Michiyoshi Tanaka, who showed how to use CBED patterns to determine point groups and space groups. 1941 - The International Centre for Diffraction Data was founded. 1945 - George W. Brindley and Keith Robinson solved the crystal structure of kaolinite.
1945 - The crystal structure of the perovskite BaTiO3 was first published by Helen Megaw, based on X-ray diffraction data from barium titanate. 1945 - A.F. Wells published the classic reference book, Structural inorganic chemistry, which subsequently went through five editions. 1946 - Foundation of the International Union of Crystallography. 1946 - James Batcheller Sumner shared the Nobel Prize in Chemistry "for his discovery that enzymes can be crystallized". 1947 - Lewis Stephen Ramsdell systematically classified the polytypes of silicon carbide, and introduced the Ramsdell notation. 1948 - The first congress and general assembly of the International Union of Crystallography was held at Harvard University. 1948 - Acta Crystallographica was founded by the International Union of Crystallography (IUCr) with P.P. Ewald as its first editor. 1948 - Ernest O. Wollan and Clifford Shull published the first series of neutron diffraction experiments for crystallography, performed at the Oak Ridge National Laboratory. 1948 - George Pake used solid state NMR spectroscopy to determine hydrogen atom distances in a single crystal of gypsum. 1949 - Clifford Shull opened a new field of magnetic crystallography based on neutron diffraction. 1950 - Jerome Karle and Herbert A. Hauptman introduced formulae for phase determination known as direct methods. 1951 - Johannes Martin Bijvoet and his colleagues, using anomalous scattering, confirmed that Emil Fischer's arbitrary assignment of absolute configuration, in relation to the direction of optical rotation of polarized light, was correct in practice. 1951 - Linus Pauling determined the structure of the α-helix and the β-sheet in polypeptide chains. 1951 - Alexei Vasilievich Shubnikov published Symmetry and antisymmetry of finite figures, which opened up the field of antisymmetry in magnetic structures. 1952 - David Sayre suggested that the phase problem could be more easily solved by having at least one more intensity measurement beyond those of the Bragg peaks in each dimension. This concept is understood today as oversampling. 1952 - Geoffrey Wilkinson and Ernst Otto Fischer determined the structure of ferrocene, the first metallic sandwich compound, for which they won the 1973 Nobel Prize in Chemistry. The structure was soon refined by Jack Dunitz, Leslie Orgel, and Alexander Rich. 1953 - Arne Magnéli introduced the term homologous series to describe polytypes of transition metal oxides that exhibit crystallographic shear structures. 1953 - Determination of the structure of DNA by three British teams, for which James Watson, Francis Crick and Maurice Wilkins won the 1962 Nobel Prize in Physiology or Medicine (Rosalind Franklin's death in 1958 made her ineligible for the award). 1954 - Ukichiro Nakaya's book Snow Crystals: Natural and Artificial, dedicated to the modern study of snow crystals, is published. 1954 - Linus Pauling won the Nobel Prize in Chemistry "for his research into the nature of the chemical bond and its application to the elucidation of the structure of complex substances." 1956 - Durward W. J. Cruickshank developed the theoretical framework for anisotropic displacement parameters, also known as the thermal ellipsoid. 1956 - James Menter published the first electron microscope images showing the lattice structure of a material. 1958 - William Burton Pearson published A Handbook of Lattice Spacings and Structures of Metals and Alloys, where he introduced the Pearson symbols for crystal structure types.
1959 - Norio Kato and Andrew Richard Lang observed Pendellösung fringes in X-ray diffraction from silicon and quartz. The observation of similar fringes in neutron diffraction was made by Clifford Shull in 1968. 1960 - John Kendrew determined the structure of myoglobin, for which he shared the 1962 Nobel Prize in Chemistry. 1960 - After many years of research, Max Perutz determined the structure of haemoglobin, for which he shared the 1962 Nobel Prize in Chemistry. 1960 - Lester Germer and his coworkers at Bell Labs built the first modern low-energy electron diffraction camera, using a flat phosphor screen combined with ultra-high vacuum; this was the start of quantitative surface crystallography. 1962 - Alan Mackay demonstrated that there exists close packing of spheres to yield icosahedral structures. 1962 - Michael Rossmann and David Blow laid the foundation for the molecular replacement approach, which provides phase information without requiring additional experimental effort. 1962 - Max Perutz and John Kendrew shared the Nobel Prize for Chemistry "for their studies of the structures of globular proteins", namely haemoglobin and myoglobin respectively. 1962 - James Watson, Francis Crick and Maurice Wilkins won the Nobel Prize in Physiology or Medicine "for their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material," specifically for their determination of the structure of DNA. 1963 - Isabella Karle developed the symbolic addition procedure in direct methods for inverting X-ray diffraction data. 1963 - Jürg Waser introduced the restrained least-squares method, also known as regularized least squares, for crystallographic structure fitting. 1964 - Dorothy Hodgkin won the Nobel Prize for Chemistry "for her determinations by X-ray techniques of the structures of important biochemical substances." The substances included penicillin and vitamin B12. 1965 - David Chilton Phillips, Louise Johnson and their co-workers published the structure of lysozyme, the first enzyme to have its structure determined. 1965 - Olga Kennard established the Cambridge Structural Database. 1967 - Hugo Rietveld invented the Rietveld refinement method for the computation of crystal structures. 1968 - Erwin Félix Lewy-Bertaut introduced magnetic space groups to account for the spin ordering of magnetic structures observed in neutron crystallography. 1968 - Aaron Klug and David DeRosier used electron microscopy to visualise the structure of the tail of bacteriophage T4, a common virus, thus signalling a breakthrough in macromolecular structure determination. 1968 - Dorothy Hodgkin, after 35 years of work, finally deciphered the structure of insulin. 1969 - Benno P. Schoenborn conducted the first structural study of macromolecules (myoglobin) by neutron diffraction, at the Brookhaven National Laboratory. 1970 - Albert Crewe demonstrated imaging of single atoms in scanning transmission electron microscopy. 1971 - Establishment of the Protein Data Bank (PDB). At the PDB, Edgar Meyer developed the first general software tools for handling and visualizing protein structural data. 1971 - Gerd Rosenbaum, Kenneth Holmes, and Jean Witz first discussed the potential of synchrotron X-ray diffraction for biological applications. 1972 - The first quantitative matching of atomic scale images and dynamical simulations was published by J. G. Allpress, E. A. Hewat, A. F. Moodie and J. V. Sanders.
1972 - Michael Glazer established the classification of octahedral tilting patterns in perovskite crystal structures, later also known as the Glazer tilts. 1973 - Alex Rich's group published the first report of a polynucleotide crystal structure, that of the yeast transfer RNA (tRNA) for phenylalanine. 1973 - Geoffrey Wilkinson and Ernst Fischer shared the Nobel Prize in Chemistry "for their pioneering work, performed independently, on the chemistry of the organometallic, so called sandwich compounds", specifically the structure of ferrocene. 1976 - Douglas L. Dorset and Herbert A. Hauptman used direct methods to solve crystal structures from electron diffraction data. 1976 - Boris Delaunay, building on his work in the 1930s, proved that the regularity of a system of points, an (r, R) system or Delone set, can be established by postulating the points' congruence within a sphere of a defined finite radius. 1976 - William Lipscomb won the Nobel Prize in Chemistry "for his studies on the structure of boranes illuminating problems of chemical bonding." 1978 - Stephen C. Harrison provided the first high-resolution structure of a virus: tomato bushy stunt virus, which is icosahedral in form. 1978 - Günter Bergerhoff and I. David Brown initiated the Inorganic Crystal Structure Database. 1979 - The first award of the Gregori Aminoff Prize for a contribution in the field of crystallography was made by the Royal Swedish Academy of Sciences to Paul Peter Ewald. 1979 - A team involving Alfred Y. Cho and others at Bell Labs made the first reconstruction of atomic structures at the materials interface between gallium arsenide and aluminium using X-ray diffraction. 1980 - Jerome Karle and Wayne Hendrickson developed multi-wavelength anomalous dispersion (MAD), a technique to facilitate the determination of the three-dimensional structure of biological macromolecules via a solution of the phase problem. 1982 - Aaron Klug won the Nobel Prize in Chemistry "for his development of crystallographic electron microscopy and his structural elucidation of biologically important nucleic acid-protein complexes." 1983 - John R. Helliwell promoted the use of synchrotron radiation in the crystallography of molecular biology. 1983 - Effectively simultaneously, Ian Robinson used surface X-ray diffraction (SXRD) to solve the structure of the gold 2x1 (110) surface, Laurence D. Marks used electron microscopy, and Gerd Binnig and Heinrich Rohrer used the scanning tunneling microscope. 1984 - A team led by Dan Shechtman, also involving Ilan Blech, Denis Gratias, and John W. Cahn, discovered quasicrystals in a metallic alloy. These structures have no unit cell and no periodic translational order but have long-range bond orientational order, which generates a defined diffraction pattern. 1984 - Aaron Klug and his colleagues provided an advance in determining the structure of protein–nucleic acid complexes when they solved the structure of the 206-kDa nucleosome core particle. 1985 - Jerome Karle shared the Nobel Prize in Chemistry with Herbert A. Hauptman "for their outstanding achievements in the development of direct methods for the determination of crystal structures". Karle developed the theoretical basis for multiple-wavelength anomalous diffraction (MAD). 1985 - Hartmut Michel and his colleagues reported the first high-resolution X-ray crystal structure of an integral membrane protein when they published the structure of a photosynthetic reaction centre.
1985 - Kunio Takayanagi led a team which solved the structure of the 7x7 reconstruction of the silicon (111) surface using Patterson function methods with ultra-high vacuum electron diffraction. This surface structure had defeated many prior attempts. 1986 - Ernst Ruska shared the Nobel Prize in Physics "for his fundamental work in electron optics, and for the design of the first electron microscope". 1987 - John M. Cowley and Alexander F. Moodie shared the first IUCr Ewald Prize "for their outstanding achievements in electron diffraction and microscopy. They carried out pioneering work on the dynamical scattering of electrons and the direct imaging of crystal structures and structure defects by high-resolution electron microscopy. The physical optics approach used by Cowley and Moodie takes into account many hundreds of scattered beams, and represents a far-reaching extension of the dynamical theory for X-rays, first developed by P.P. Ewald". 1987 - Don Craig Wiley and Jack L. Strominger solved the structure of the soluble portion of a class I MHC molecule known as HLA-A2. This structure revealed the presence of a pocket which holds the antigenic peptide, which is recognized by the receptors of T cells only when firmly bound to the MHC product and presented at the surface of an infected cell. This structure strongly influenced the concept of T cell recognition in future work. 1988 - Johann Deisenhofer, Robert Huber and Hartmut Michel shared the Nobel Prize in Chemistry "for the determination of the three-dimensional structure of a photosynthetic reaction centre." 1989 - Gautam R. Desiraju defined crystal engineering as "the understanding of intermolecular interactions in the context of crystal packing and the utilization of such understanding in the design of new solids with desired physical and chemical properties." 1991 - Georg E. Schulz and colleagues reported the structure of a bacterial porin, a membrane protein with a cylindrical shape (a "β-barrel"). 1991 - The crystallographic information file (CIF) format was introduced by Sydney R. Hall, Frank H. Allen, and I. David Brown, based on the self-defining text archive and retrieval (STAR) file format developed by Sydney R. Hall. 1991 - Sumio Iijima used electron diffraction to determine the structure of carbon nanotubes. 1992 - The International Union of Crystallography changed its definition of a crystal to "any solid having an essentially discrete diffraction pattern", thus formally recognizing quasicrystals. 1992 - First release of the CNS software package by Axel T. Brunger. CNS is an extension of X-PLOR, released in 1987, and is used for solving structures based on X-ray diffraction or solution NMR data. 1994 - Jan Pieter Abrahams et al. reported the structure of an F1-ATPase, which uses the proton-motive force across the inner mitochondrial membrane to facilitate the synthesis of adenosine triphosphate (ATP). 1994 - Roger Vincent and Paul Midgley invented the precession electron diffraction method for electron crystallography in a transmission electron microscope. 1994 - Bertram Brockhouse and Clifford Shull shared the Nobel Prize in Physics "for pioneering contributions to the development of neutron scattering techniques for studies of condensed matter"; specifically, Brockhouse "for the development of neutron spectroscopy" and Shull "for the development of the neutron diffraction technique."
1994 - Philip Coppens led a team of researchers to uncover the transient structure of sodium nitroprusside, a first example of X-ray excited-state crystallography. 1995 - Douglas L. Dorset published Structural Electron Crystallography, a major text on electron crystallography. 1997 - The Bilbao Crystallographic Server was launched at the University of the Basque Country, led by Mois Ilia Aroyo and Juan Manuel Perez-Mato. 1997 - The X-ray crystal structure of bacteriorhodopsin marked the first time the lipidic cubic phase (LCP) was used to facilitate the crystallization of a membrane protein; LCP has since been used to obtain the structures of many unique membrane proteins, including G protein-coupled receptors (GPCRs). 1997 - Paul D. Boyer and John E. Walker shared one half of the Nobel Prize in Chemistry "for their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)". Walker determined the crystal structure of ATP synthase, and this structure confirmed a mechanism earlier proposed by Boyer, mainly on the basis of isotopic studies. 1997 - Nobuo Niimura led a team that first used a neutron image plate for structure determination of lysozyme at the Institut Laue–Langevin. 1998 - The structure of tubulin and the location of the taxol-binding site were first determined by Eva Nogales and her team using electron crystallography. 1998 - A group led by Jon Gjønnes combined three-dimensional electron diffraction with precession electron diffraction and direct methods to solve the structure of an intermetallic compound, together with dynamical refinements. 1999 - Jianwei Miao, Janos Kirz, David Sayre and co-workers performed the first experiment to extend crystallography to allow structural determination of non-crystalline specimens, which has become known as coherent diffraction imaging (CDI), lensless imaging, or computational microscopy. 1999 - A team led by Michael O'Keeffe and Omar Yaghi synthesized and determined the structure of MOF-5, the first metal-organic framework (MOF) compound. In the ensuing years, the duo and mathematician Olaf Delgado-Friedrichs further developed the periodic net theory proposed by Alexander F. Wells to characterize MOFs. 21st century 2000 - Janos Hajdu, Richard Neutze, and colleagues calculated that they could use Sayre's ideas from the 1950s to implement a "diffraction before destruction" concept, using an X-ray free-electron laser (XFEL). 2001 - Harry F. Noller's group published the 5.5-Å structure of the complete Thermus thermophilus 70S ribosome. This structure revealed that the major functional regions of the ribosome were based on RNA, establishing the primordial role of RNA in translation. 2001 - Roger Kornberg's group published the 2.8-Å structure of Saccharomyces cerevisiae RNA polymerase. The structure allowed both transcription initiation and elongation mechanisms to be deduced. Simultaneously, this group reported the structure of free RNA polymerase II, which contributed towards the eventual visualisation of the interaction between DNA, RNA, and the ribosome. 2003 - Raimond Ravelli et al. demonstrated the X-ray radiation damage-induced phasing method for structure determination. 2005 - The first X-ray free-electron laser in the soft X-ray regime, FLASH, became an operational user facility at DESY for X-ray diffraction experiments. 2007 - Ute Kolb and co-workers developed automated diffraction tomography for electron crystallography by combining diffraction and tomography within a transmission electron microscope.
2007 - Two X-ray crystal structures of a GPCR, the human β2 adrenergic receptor, were published. Because many drugs elicit their biological effect(s) by binding to a GPCR, the structures of these and other GPCRs may be used to develop efficacious drugs with few side effects. 2009 - The first hard X-ray free-electron laser, the Linac Coherent Light Source, became operational at the SLAC National Accelerator Laboratory. 2009 - Luca Bindi, Paul Steinhardt, Nan Yao, and Peter Lu identified the first naturally occurring quasicrystal using X-ray and electron crystallography. 2009 - Venkatraman Ramakrishnan, Thomas A. Steitz and Ada E. Yonath shared the Nobel Prize in Chemistry "for studies of the structure and function of the ribosome." 2009 - Judith Howard and her collaborators created the Olex2 crystallographic software package. 2011 - Gustaaf Van Tendeloo led a team, including Sandra Van Aert and Kees Joost Batenburg, that determined the 3D atomic positions of a silver nanoparticle using electron tomography. 2011 - Dan Shechtman received the Nobel Prize in Chemistry "for the discovery of quasicrystals." 2011 - Henry N. Chapman, Petra Fromme, John C. H. Spence and 85 co-workers used femtosecond pulses from an X-ray free-electron laser (XFEL) to examine the structure of nanocrystals of Photosystem I. By using very brief x-ray pulses, most radiation damage is mitigated; the technique is called serial femtosecond crystallography. 2012 - Jianwei Miao and his co-workers applied the coherent diffraction imaging (CDI) method in atomic electron tomography (AET). 2013 - Tamir Gonen and his co-workers demonstrated microcrystal electron diffraction (microED) for lysozyme microcrystals at the Janelia Farm Research Campus. 2014 - Carmelo Giacovazzo published Phasing in Crystallography: A Modern Perspective, a comprehensive opus on phasing methods in X-ray and electron crystallography. 2014 - The International Union of Crystallography and UNESCO named 2014 the International Year of Crystallography to commemorate the century of discovery since the advent of X-ray diffraction. 2017 - Lukas Palatinus and co-workers used dynamical structure refinement to resolve hydrogen atom positions in nanocrystals using electron diffraction. 2017 - Jacques Dubochet, Joachim Frank and Richard Henderson shared the Nobel Prize in Chemistry "for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution." 2019 - The Cambridge Structural Database reached the milestone of one million structures. 2020 - Two independent groups, led respectively by Holger Stark and Sjors Scheres, demonstrated that single-particle cryoelectron microscopy has reached atomic resolution. 2021 - Kenneth G. Libbrecht published the book Snow Crystals: A Case Study in Spontaneous Structure Formation, summarizing his decade-spanning work on the subject of engineering conditions for designer ice crystals. 2022 - Leonid Dubrovinsky, Igor A. Abrikosov, and Natalia Dubrovinskaia led a team that demonstrated high-pressure crystallography in the terapascal regime. 2024 - A team led by Anders Madsen developed a deep learning model, PhAI, to solve the crystallographic phase problem for small molecules. References Further reading Crystallography before 20th century Burke, John G. (1966), Origins of the science of crystals, University of California Press. Lima-de-Faria, José (ed.)
(1990), Historical atlas of crystallography, Springer Netherlands. Crystallography in the 20th century and beyond Milestones in crystallography, Nature, August 2014. History of X-ray crystallography Ewald, P. P. (ed.) (1962), 50 Years of x-ray diffraction, IUCr, Oosthoek. Authier, André (2013), Early days of x-ray crystallography, Oxford Univ. Press. History of electron crystallography History of neutron crystallography History of NMR crystallography History of structure determination History of macromolecular crystallography History of crystallographic organizations and journals Crystallography Chemistry timelines
Timeline of crystallography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
7,584
[ "Crystallography", "Condensed matter physics", "Materials science" ]
63,481,932
https://en.wikipedia.org/wiki/Phenserine
Phenserine. IUPAC name: (3aS,8aR)-1,3a,8-trimethyl-1H,2H,3H,3aH,8H,8aH-pyrrolo[2,3-b]indol-5-yl N-phenylcarbamate. Legal status: investigational. Route of administration: by mouth. Bioavailability: ~100%. Metabolism: liver. Metabolites: (−)-N1-norphenserine, (−)-N8-norphenserine, (−)-N1,N8-bisnorphenserine. Elimination half-life: 12.6 minutes. Duration of action: 8.25 hours. Excretion: renal or hepatic clearance. CAS number: 101246-66-6. PubChem CID: 192706. DrugBank: DB04892. ChemSpider: 167225. UNII: SUE285UG3S. ChEMBL: 74926. Chemical formula: C20H23N3O2. Melting point: 150 °C. SMILES: [H][C@]12N(C)CC[C@@]1(C)C1=C(C=CC(OC(=O)NC3=CC=CC=C3)=C1)N2C. StdInChI: InChI=1S/C20H23N3O2/c1-20-11-12-22(2)18(20)23(3)17-10-9-15(13-16(17)20)25-19(24)21-14-7-5-4-6-8-14/h4-10,13,18H,11-12H2,1-3H3,(H,21,24)/t18-,20+/m1/s1. StdInChIKey: PBHFNBQPZCRWQP-QUCCMNQESA-N. Synonyms: (-)-phenserine, (-)-eseroline phenylcarbamate, N-phenylcarbamoyleseroline, N-phenylcarbamoyl eseroline.
Phenserine (also known as (-)-phenserine or (-)-eseroline phenylcarbamate) is a synthetic drug which has been investigated as a medication to treat Alzheimer's disease (AD), as the drug exhibits neuroprotective and neurotrophic effects. Research on phenserine, initially patented by the National Institute on Aging (NIA), has been suspended since phase III clinical trials in 2006, which were conducted shortly after the drug licenses were issued. The abandonment of the clinical trials meant the drug never gained FDA approval. A retrospective meta-analysis of the phenserine research proposed that its clinical failure arose from methodological issues that were not fully resolved before proceeding to the subsequent clinical phases. Phenserine was introduced as an inhibitor of acetylcholinesterase (AChE) and demonstrated significant alleviation of numerous neuropathological manifestations, improving cognitive functions of the brain. The ameliorative mechanism involves both cholinergic and non-cholinergic pathways. Clinically translatable doses of phenserine show relatively high tolerability and rarely produce severe adverse effects. With respect to overdosing of the drug (20 mg/kg), a few cholinergic adverse effects were reported, including nausea and tremor, which are not life-threatening. An administration form of phenserine, (-)-phenserine tartrate, which exhibits high bioavailability and solubility, is taken by mouth.
Phenserine and its metabolites can readily access the brain, with high permeability across the blood-brain barrier, and continue to act for a long duration despite the drug's relatively short half-life. Posiphen ((+)-phenserine), the enantiomer of (-)-phenserine, is also a potential drug, by itself or synergistically with (-)-phenserine, for mitigating the progression of neurological diseases, mainly Alzheimer's disease. History Phenserine was first investigated as a substitute for physostigmine, which had failed to satisfy the clinical standards for treating Alzheimer's disease, and was developed into a more suitable remedy. It was initially invented by Nigel Greig, whose laboratory is affiliated with the National Institute on Aging (NIA) under the US National Institutes of Health (NIH), which subsequently released a patent on phenserine as an AChE inhibitor in 1995. During phase I in 2000, a supplementary patent regarding its inhibitory mechanism upon β-amyloid precursor protein (APP) synthesis was added. Following 6 years of phase I and II trials, Axonyx Corporation licensed phenserine to Daewoong Pharmaceutical and QR Pharma (which later adopted the new corporate name Annovis Bio) in 2006; these companies then planned to undertake phase III trials and commercialize the drug. However, clinical deficits were discovered, most notably in a double-blinded, placebo-controlled, 7-month phase III trial conducted on 377 mild to moderate Alzheimer's disease patients across Austria, Croatia, Spain, and the UK, and no significant drug efficacy was exhibited. This led to the relinquishment of phenserine development, despite its marketable potential. Approval status Phenserine failed in phase III of Alzheimer's disease-aimed clinical trials, and there has been no promise of trial resumption since 2006. Methodological problems in the trials are frequently cited as the principal reason for the failure to obtain FDA approval, as well as for the scarcity of Alzheimer's disease drugs. The underlying complications were generated by inordinate variance in clinical outcomes and poor determination of optimal dosing. Intra- and inter-site variations were incurred by a lack of baseline evaluation and longitudinal assessment of placebo groups. This produced inadequate statistical power and, thus, insufficient statistical significance. With regard to dose determination, the criteria for enrolling human subjects were not meticulously established before dosing, and the effective dose range was not completely established in phases I and II, a problem that persisted into phase III. Compared to other Alzheimer's disease drugs, such as donepezil, tacrine and metrifonate, the clinical development of phenserine involved comparably high compliance in outcome measures and protocol regimentation in methods and clinical phase transitions. Pharmacological benefits Phenserine was invented as an Alzheimer's disease-oriented treatment in particular, and has also been shown to have alleviative effects upon other neurological disorders: Parkinson's disease, dementia and amyotrophic lateral sclerosis. Administration of phenserine within a short delay of disease onset was shown to diminish the severity of neurodegeneration and the accompanying cognitive impairments. Its post-injury intervention at clinically translatable doses has been shown to significantly mitigate various neurodegenerative manifestations, preventing chronic deterioration in cognitive functions.
The collective neuropathological cascades in the brain are either naturally occurring or provoked by mild or moderate traumatic brain injuries, such as concussion, diffuse axonal injury, and ischemic and hypoxic brain injuries. Traumatic brain injuries have been substantially examined, and induced to form test groups, in phenserine research. They are highly correlated with the onset of neurodegenerative disorders, precipitating cognitive and behavioral impairments. Phenserine was proven to mitigate the multiple cascades of neuropathology triggered by traumatic brain injuries via both cholinergic and non-cholinergic mechanisms. Pharmacodynamics (mechanism of action) Cholinergic mechanism Phenserine serves as an acetylcholinesterase (AChE) inhibitor which selectively acts on the acetylcholinesterase enzyme. It prevents acetylcholine from being hydrolyzed by the enzyme and enables the neurotransmitter to be retained longer at the synaptic clefts. This mechanism promotes the activation of cholinergic neuronal circuits and thereby enhances memory and cognition in Alzheimer's subjects. Non-cholinergic mechanism In clinical trials, phenserine was demonstrated to alleviate neurodegeneration, repressing programmed neuronal cell death and enhancing stem cell survival and differentiation. The alleviation can be achieved by increases in the levels of the neurotrophic factor BDNF and the anti-apoptotic protein Bcl-2, which subsequently reduce the expression of the pro-apoptotic markers GFAP and activated caspase 3. The treatment also suppresses the levels of the Alzheimer's disease-inducing proteins β-amyloid precursor protein (APP) and Aβ peptide. The drug's interaction with the APP gene mediates the expression of both APP and its downstream product, the Aβ protein. This regulating action reverses glial cell-favored differentiation and increases neuronal cell output. Phenserine also attenuates neuroinflammation, which involves excessive activation of microglial cells to remove cellular wastes from injury lesions. The accumulation of activated glia near the site of brain injury is unnecessarily prolonged, stimulating oxidative stress. The inflammatory response was significantly weakened by the introduction of phenserine, as evidenced by reduced expression of the pro-inflammatory markers IBA-1 and TNF-α. Phenserine also restores the integrity of the blood-brain barrier, whose disruption by the degradative enzyme MMP-9 leads to neuroinflammation. Alpha-synucleins, toxic aggregates resulting from protein misfolding, are highly observed in Parkinson's disease. The drug therapy was shown to neutralize the toxicity of alpha-synuclein by modulating its protein translation, alleviating the symptoms of the disease. Dosage Clinically, the translatable dose of phenserine was primarily employed within a range of 1 to 5 mg/kg, where the unit calibration took account of body surface area. This standard dose range was generally well tolerated in long-term trials by neuronal cell cultures, animal models and humans. An increment in dosing to 10 mg/kg is still tolerated without instigating any physiological complication. Maximal administration of phenserine up to 15 mg/kg has been reported in rats. Overdose Doses of 20 mg/kg and above are appraised as overdoses, at which cholinergic adverse effects ensue.
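The body-surface-area calibration mentioned in the dosage paragraph above is conventionally handled with the FDA's human equivalent dose (HED) conversion. The worked example below is illustrative only and is not drawn from the phenserine trials themselves; the Km factors are the standard published values (about 6 for rats and 37 for adult humans).
\[
\text{HED}\ (\text{mg/kg}) = \text{animal dose}\ (\text{mg/kg}) \times \frac{K_m^{\text{animal}}}{K_m^{\text{human}}}, \qquad \text{e.g.}\quad 5\ \text{mg/kg (rat)} \times \frac{6}{37} \approx 0.8\ \text{mg/kg (human)}.
\]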
The symptoms of overdosing include: Nausea Vomiting Dizziness Tremors Bradycardia Mild symptoms were reported in clinical trials, but no other serious adverse effects were observed. Tremor was also noted as one of the dose-limiting actions. Chemistry Pharmacokinetics Oral bioavailability of phenserine was shown to be very high, up to 100%. Its bioavailability was tested by computing the drug's delivery rate across the rat blood-brain barrier. The drug concentration reached in the brain is 10-fold higher than plasma levels, verifying phenserine as a brain-permeable AChE inhibitor. Despite its short plasma half-life of 8 to 12 minutes, phenserine exhibits a long duration of action, with a half-life of 8.25 hours over which the inhibitory effect on AChE fades time-dependently. With the administration of phenserine, 70% or higher AChE inhibition in the blood was observed in preclinical studies, and with systemic phenserine administration, the extracellular ACh level in the striatum increased up to three-fold. Through PET studies and microdialysis, the compound's brain permeability was further elucidated. Enantiomer (posiphen) (-)-Phenserine, generally referred to simply as phenserine, acts as the active enantiomer for the inhibition of acetylcholinesterase (AChE), while posiphen, its opposite enantiomer, was demonstrated to be a comparably poor AChE inhibitor. Several companies were involved in the history of posiphen research. In 2005, an Investigational New Drug (IND) application for posiphen was filed with the FDA by TorreyPines Therapeutics, while its phase I trial on animal models had been implemented by Axonyx. Axonyx and TorreyPines Therapeutics officially signed a merger agreement in 2006 and licensed the drug to QR Pharma in 2008. Clinical trials of posiphen against Alzheimer's disease are still underway. Interactions Currently, 282 drugs have been reported to interact with phenserine. Current research A 5-year double-blinded, donepezil-controlled clinical study for validation of Alzheimer's disease course modification using phenserine has been underway since 2018, involving 200 patients in the UK and US. The study aims to reduce variation in AD therapeutic response between patients via optimal dose formulation. References Experimental drugs for Alzheimer's disease Neuroprotective agents Neurotrophic factors
Phenserine
[ "Chemistry" ]
3,150
[ "Neurotrophic factors", "Neurochemistry", "Signal transduction" ]
67,785,773
https://en.wikipedia.org/wiki/Semi-inclusive%20deep%20inelastic%20scattering
In high-energy particle physics, in nucleon-lepton scattering, semi-inclusive deep inelastic scattering (SIDIS) is a method of obtaining information on the nucleon structure. It expands the traditional method of deep inelastic scattering (DIS). In DIS, only the scattered lepton is detected, while the remnants of the shattered nucleon are ignored (an inclusive experiment). In SIDIS, a high-momentum hadron, known as the leading hadron, is detected in addition to the scattered lepton. This allows additional details about the kinematics of the scattering process to be obtained. Usefulness The leading hadron results from the hadronization of the struck quark. The latter retains the information on its motion inside the nucleon, including its transverse momentum, which gives access to the transverse momentum distributions (TMDs) of partons. Likewise, by detecting the leading hadron, one essentially tags (i.e. identifies) the quark on which the scattering occurred. For example, if the leading hadron is a kaon, we know that the scattering occurred on one of the strange quarks of the nucleon's quark sea. In DIS the struck quark is not identified and the information is an indistinguishable sum over all the quark flavors. SIDIS makes it possible to disentangle this information. Experiments SIDIS measurements were pioneered at DESY by the HERMES experiment. They are currently (2021) being carried out at CERN by the COMPASS experiment and by several experiments at Jefferson Lab. SIDIS will be an important technique in the future Electron-Ion Collider scientific program. References Quantum chromodynamics Nuclear physics Scattering
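As a supplementary note on the kinematics referred to in the article above: the scattering is conventionally described by the standard Lorentz-invariant DIS variables, given here in textbook form rather than taken from the article itself. Writing \(P\), \(l\) and \(l'\) for the four-momenta of the nucleon and of the incoming and scattered lepton, \(p_h\) for that of the leading hadron, and \(q = l - l'\) for the exchanged virtual photon,
\[
Q^2 = -q^2, \qquad x = \frac{Q^2}{2\,P \cdot q}, \qquad y = \frac{P \cdot q}{P \cdot l}, \qquad z = \frac{P \cdot p_h}{P \cdot q}.
\]
The extra variable SIDIS adds over DIS is \(z\), the fraction of the virtual photon's energy carried by the detected hadron; a hadron with large \(z\) is "leading" and is most likely to retain the memory of the struck quark.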
Semi-inclusive deep inelastic scattering
[ "Physics", "Chemistry", "Materials_science" ]
354
[ "Matter", "Hadrons", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics", "Subatomic particles" ]
67,786,287
https://en.wikipedia.org/wiki/United%20Nations%20General%20Assembly%20Resolution%201%20%28I%29
United Nations General Assembly Resolution 1 was the first resolution passed by the United Nations General Assembly, on 24 January 1946. It created the United Nations Atomic Energy Commission to "deal with the problems raised by the discovery of atomic energy", commissioning it to "make specific proposals... for the elimination from national armaments of atomic weapons and of all other major weapons adaptable to mass destruction", among other issues regarding nuclear technology. References 1946 in the United Nations Nuclear energy
United Nations General Assembly Resolution 1 (I)
[ "Physics", "Chemistry" ]
99
[ "Nuclear energy", "Radioactivity", "Nuclear physics" ]
67,789,038
https://en.wikipedia.org/wiki/Spaces%20of%20test%20functions%20and%20distributions
In mathematical analysis, the spaces of test functions and distributions are topological vector spaces (TVSs) that are used in the definition and application of distributions. Test functions are usually infinitely differentiable complex-valued (or sometimes real-valued) functions on a non-empty open subset that have compact support. The space of all test functions, denoted by is endowed with a certain topology, called the canonical LF topology, that makes it into a complete Hausdorff locally convex TVS. The strong dual space of is called the space of distributions and is denoted by where the subscript indicates that the continuous dual space of denoted by is endowed with the strong dual topology. There are other possible choices for the space of test functions, which lead to other different spaces of distributions. If then the use of Schwartz functions as test functions gives rise to a certain subspace of whose elements are called tempered distributions. These are important because they allow the Fourier transform to be extended from "standard functions" to tempered distributions. The set of tempered distributions forms a vector subspace of the space of distributions and is thus one example of a space of distributions; there are many other spaces of distributions. There also exist other major classes of test functions that are subsets of such as spaces of analytic test functions, which produce very different classes of distributions. The theory of such distributions has a different character from the previous one because there are no analytic functions with non-empty compact support. Use of analytic test functions leads to Sato's theory of hyperfunctions. Notation The following notation will be used throughout this article: is a fixed positive integer and is a fixed non-empty open subset of Euclidean space. denotes the natural numbers. will denote a non-negative integer or If is a function then will denote its domain, and its support, denoted by, is defined to be the closure of the set in For two functions , the following notation defines a canonical pairing: A multi-index of size is an element in (given that is fixed, if the size of multi-indices is omitted then the size should be assumed to be ). The length of a multi-index is defined as and denoted by Multi-indices are particularly useful when dealing with functions of several variables; in particular, we introduce the following notations for a given multi-index : We also introduce a partial order of all multi-indices by if and only if for all When we define their multi-index binomial coefficient as: will denote a certain non-empty collection of compact subsets of (described in detail below). Definitions of test functions and distributions In this section, we will formally define real-valued distributions on . With minor modifications, one can also define complex-valued distributions, and one can replace with any (paracompact) smooth manifold. Note that for all and any compact subsets and of , we have: Distributions on are defined to be the continuous linear functionals on when this vector space is endowed with a particular topology called the canonical LF topology. This topology is unfortunately not easy to define but it is nevertheless still possible to characterize distributions in a way so that no mention of the canonical LF-topology is made.
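To make the definitions above concrete (the symbols themselves did not survive extraction, so the formulas below are the standard textbook ones, supplied for illustration): the canonical example of a test function on \(\mathbb{R}^n\) is the bump function
\[
\phi(x) = \begin{cases} e^{-1/(1-|x|^2)}, & |x| < 1, \\ 0, & |x| \ge 1, \end{cases}
\]
which is infinitely differentiable and has compact support (the closed unit ball); and the prototypical distribution is the Dirac delta, \(\delta(\phi) := \phi(0)\), which is linear and satisfies \(|\delta(\phi)| \le \sup_{x \in K} |\phi(x)|\) for every test function \(\phi\) with support in a compact set \(K\) containing the origin, which is exactly a bound of the kind required by the characterization stated next (with constant 1 and no derivatives needed).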
Proposition: If is a linear functional on then it is a distribution if and only if the following equivalent conditions are satisfied: For every compact subset there exist constants and (dependent on ) such that for all For every compact subset there exist constants and such that for all with support contained in For any compact subset and any sequence in if converges uniformly to zero on for all multi-indices , then The above characterizations can be used to determine whether or not a linear functional is a distribution, but more advanced uses of distributions and test functions (such as applications to differential equations) is limited if no topologies are placed on and To define the space of distributions we must first define the canonical LF-topology, which in turn requires that several other locally convex topological vector spaces (TVSs) be defined first. First, a (non-normable) topology on will be defined, then every will be endowed with the subspace topology induced on it by and finally the (non-metrizable) canonical LF-topology on will be defined. The space of distributions, being defined as the continuous dual space of is then endowed with the (non-metrizable) strong dual topology induced by and the canonical LF-topology (this topology is a generalization of the usual operator norm induced topology that is placed on the continuous dual spaces of normed spaces). This finally permits consideration of more advanced notions such as convergence of distributions (both sequences and nets), various (sub)spaces of distributions, and operations on distributions, including extending differential equations to distributions. Choice of compact sets K Throughout, will be any collection of compact subsets of such that (1) and (2) for any compact there exists some such that The most common choices for are: The set of all compact subsets of or A set where and for all , and is a relatively compact non-empty open subset of (here, "relatively compact" means that the closure of in either or is compact). We make into a directed set by defining if and only if Note that although the definitions of the subsequently defined topologies explicitly reference in reality they do not depend on the choice of that is, if and are any two such collections of compact subsets of then the topologies defined on and by using in place of are the same as those defined by using in place of Topology on Ck(U) We now introduce the seminorms that will define the topology on Different authors sometimes use different families of seminorms so we list the most common families below. However, the resulting topology is the same no matter which family is used. All of the functions above are non-negative -valued seminorms on As explained in this article, every set of seminorms on a vector space induces a locally convex vector topology. Each of the following sets of seminorms generates the same locally convex vector topology on (so for example, the topology generated by the seminorms in is equal to the topology generated by those in ). With this topology, becomes a locally convex Fréchet space that is not normable.
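The defining families of seminorms above lost their formulas in extraction; the following is one standard representative choice, offered as a sketch rather than a reconstruction of the exact families A through D. For a compact set \(K \subseteq U\) and an integer \(i \le k\),
\[
p_{i,K}(f) := \sup_{x \in K,\ |\alpha| \le i} \left| \partial^\alpha f(x) \right|,
\]
and, for the case \(k = \infty\), fixing an exhaustion \(K_1 \subseteq K_2 \subseteq \cdots\) of \(U\) by compact sets, one complete translation-invariant metric of the Fréchet-combination type referred to in the "Metric defining the topology" paragraph below is
\[
d(f,g) := \sum_{i=1}^{\infty} 2^{-i}\, \frac{p_{i,K_i}(f-g)}{1 + p_{i,K_i}(f-g)}.
\]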
Every element of is a continuous seminorm on Under this topology, a net in converges to if and only if for every multi-index with and every compact the net of partial derivatives converges uniformly to on For any any (von Neumann) bounded subset of is a relatively compact subset of In particular, a subset of is bounded if and only if it is bounded in for all The space is a Montel space if and only if The topology on is the superior limit of the subspace topologies induced on by the TVSs as ranges over the non-negative integers. A subset of is open in this topology if and only if there exists such that is open when is endowed with the subspace topology induced on it by Metric defining the topology If the family of compact sets satisfies and for all then a complete translation-invariant metric on can be obtained by taking a suitable countable Fréchet combination of any one of the above defining families of seminorms (A through D). For example, using the seminorms results in a metric of the Fréchet-combination form sketched above. Often, it is easier to just consider seminorms (avoiding any metric) and use the tools of functional analysis. Topology on Ck(K) As before, fix Recall that if is any compact subset of then For any compact subset is a closed subspace of the Fréchet space and is thus also a Fréchet space. For all compact satisfying denote the inclusion map by Then this map is a linear embedding of TVSs (that is, it is a linear map that is also a topological embedding) whose image (or "range") is closed in its codomain; said differently, the topology on is identical to the subspace topology it inherits from and also is a closed subset of The interior of relative to is empty. If is finite then is a Banach space with a topology that can be defined by the norm. And when then is even a Hilbert space. The space is a distinguished Schwartz Montel space, so if then it is not normable and thus not a Banach space (although like all other it is a Fréchet space). Trivial extensions and independence of Ck(K)'s topology from U The definition of depends on so we will let denote the topological space which by definition is a topological subspace of Suppose is an open subset of containing and for any compact subset let denote the vector subspace of consisting of maps with support contained in Given a function, its trivial extension is by definition the function defined by: so that Let denote the map that sends a function in to its trivial extension on . This map is a linear injection, and for every compact subset (where is also a compact subset of since ) we have. If is restricted to then the following induced linear map is a homeomorphism (and thus a TVS-isomorphism): and thus the next two maps (which like the previous map are defined by ) are topological embeddings: (the topology on is the canonical LF topology, which is defined later). Using the injection the vector space is canonically identified with its image in (however, if then is not a topological embedding when these spaces are endowed with their canonical LF topologies, although it is continuous). Because through this identification, can also be considered as a subset of Importantly, the subspace topology inherits from (when it is viewed as a subset of ) is identical to the subspace topology that it inherits from (when is viewed instead as a subset of via the identification). Thus the topology on is independent of the open subset of that contains .
This justifies the practice of writing instead of Canonical LF topology Recall that denotes all those functions in that have compact support in where note that is the union of all as ranges over Moreover, for every , is a dense subset of. The special case when gives us the space of test functions. This section defines the canonical LF topology as a direct limit. It is also possible to define this topology in terms of its neighborhoods of the origin, which is described afterwards. Topology defined by direct limits For any two sets and , we declare that if and only if which in particular makes the collection of compact subsets of into a directed set (we say that such a collection is ). For all compact satisfying there are inclusion maps Recall from above that the map is a topological embedding. The collection of maps forms a direct system in the category of locally convex topological vector spaces that is directed by (under subset inclusion). This system's direct limit (in the category of locally convex TVSs) is the pair where are the natural inclusions and where is now endowed with the (unique) strongest locally convex topology making all of the inclusion maps continuous. Topology defined by neighborhoods of the origin If is a convex subset of then is a neighborhood of the origin in the canonical LF topology if and only if it satisfies the following condition: Note that any convex set satisfying this condition is necessarily absorbing in Since the topology of any topological vector space is translation-invariant, any TVS-topology is completely determined by the set of neighborhoods of the origin. This means that one could actually define the canonical LF topology by declaring that a convex balanced subset is a neighborhood of the origin if and only if it satisfies condition . Topology defined via differential operators A linear differential operator is a sum where and all but finitely many of are identically . The integer is called the order of the differential operator If is a linear differential operator of order then it induces a canonical linear map defined by where we shall reuse notation and also denote this map by For any the canonical LF topology on is the weakest locally convex TVS topology making all linear differential operators in of order into continuous maps from into Properties of the canonical LF topology Canonical LF topology's independence from One benefit of defining the canonical LF topology as the direct limit of a direct system is that we may immediately use the universal property of direct limits. Another benefit is that we can use well-known results from category theory to deduce that the canonical LF topology is actually independent of the particular choice of the directed collection of compact sets. And by considering different collections (in particular, those mentioned at the beginning of this article), we may deduce different properties of this topology. In particular, we may deduce that the canonical LF topology makes into a Hausdorff locally convex strict LF-space (and also a strict LB-space if ), which of course is the reason why this topology is called "the canonical LF topology" (see this footnote for more details). Universal property From the universal property of direct limits, we know that if is a linear map into a locally convex space (not necessarily Hausdorff), then is continuous if and only if is bounded if and only if for every the restriction of to is continuous (or bounded).
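In standard multi-index notation (supplied here for illustration, since the formulas in the differential-operator definition above were stripped), a linear differential operator of order \(k\) with smooth coefficients has the form
\[
P := \sum_{|\alpha| \le k} c_\alpha\, \partial^\alpha, \qquad c_\alpha \in C^\infty(U),
\]
with all but finitely many of the \(c_\alpha\) identically zero, and its order is the largest \(|\alpha|\) for which \(c_\alpha\) is not identically zero; the induced map \(\phi \mapsto P\phi\) is then the canonical linear map described in the differential-operator paragraph above.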
Dependence of the canonical LF topology on Suppose is an open subset of containing Let denote the map that sends a function in to its trivial extension on (which was defined above). This map is a continuous linear map. If (and only if) then is a dense subset of and is a topological embedding. Consequently, if then the transpose of is neither one-to-one nor onto. Bounded subsets A subset is bounded in if and only if there exists some such that and is a bounded subset of Moreover, if is compact and then is bounded in if and only if it is bounded in For any any bounded subset of (resp. ) is a relatively compact subset of (resp. ), where Non-metrizability For all compact the interior of in is empty so that is of the first category in itself. It follows from Baire's theorem that is not metrizable and thus also not normable (see this footnote for an explanation of how the non-metrizable space can be complete even though it does not admit a metric). The fact that is a nuclear Montel space makes up for the non-metrizability of (see this footnote for a more detailed explanation). Relationships between spaces Using the universal property of direct limits and the fact that the natural inclusions are all topological embeddings, one may show that all of the maps are also topological embeddings. Said differently, the topology on is identical to the subspace topology that it inherits from where recall that 's topology was defined to be the subspace topology induced on it by In particular, both and induce the same subspace topology on However, this does not imply that the canonical LF topology on is equal to the subspace topology induced on by ; these two topologies on are in fact not equal to each other since the canonical LF topology is not metrizable while the subspace topology induced on it by is metrizable (since recall that is metrizable). The canonical LF topology on is actually finer than the subspace topology that it inherits from (thus the natural inclusion is continuous but not a topological embedding). Indeed, the canonical LF topology is so fine that if denotes some linear map that is a "natural inclusion" (such as or or other maps discussed below) then this map will typically be continuous, which (as is explained below) is ultimately the reason why locally integrable functions, Radon measures, etc. all induce distributions (via the transpose of such a "natural inclusion"). Said differently, the reason why there are so many different ways of defining distributions from other spaces ultimately stems from how very fine the canonical LF topology is. Moreover, since distributions are just continuous linear functionals on the fine nature of the canonical LF topology means that more linear functionals on end up being continuous ("more" means as compared to a coarser topology that we could have placed on such as for instance, the subspace topology induced by some which although it would have made metrizable, it would have also resulted in fewer linear functionals on being continuous and thus there would have been fewer distributions; moreover, this particular coarser topology also has the disadvantage of not making into a complete TVS). Other properties The differentiation map is a continuous linear operator. The bilinear multiplication map given by is not continuous; it is, however, hypocontinuous. Distributions As discussed earlier, continuous linear functionals on a are known as distributions on .
Thus the set of all distributions on is the continuous dual space of which when endowed with the strong dual topology is denoted by We have the canonical duality pairing between a distribution and a test function which is denoted using angle brackets by $\langle T, f \rangle := T(f)$. One interprets this notation as the distribution $T$ acting on the test function $f$ to give a scalar, or symmetrically as the test function $f$ acting on the distribution $T$. Characterizations of distributions Proposition. If is a linear functional on then the following are equivalent: is a distribution; is a continuous function. is continuous at the origin. is uniformly continuous. is a bounded operator. is sequentially continuous. explicitly, for every sequence in that converges in to some is sequentially continuous at the origin; in other words, maps null sequences to null sequences. explicitly, for every sequence in that converges in to the origin (such a sequence is called a null sequence), a null sequence is by definition a sequence that converges to the origin. maps null sequences to bounded subsets. explicitly, for every sequence in that converges in to the origin, the sequence is bounded. maps Mackey convergent null sequences to bounded subsets; explicitly, for every Mackey convergent null sequence in the sequence is bounded. a sequence is said to be Mackey convergent to the origin if there exists a divergent sequence of positive real numbers such that the sequence is bounded; every sequence that is Mackey convergent to the origin necessarily converges to the origin (in the usual sense). The kernel of is a closed subspace of The graph of is closed. There exists a continuous seminorm on such that There exist a constant , a collection of continuous seminorms that defines the canonical LF topology of , and a finite subset such that For every compact subset there exist constants and such that for all For every compact subset there exist constants and such that for all with support contained in For any compact subset and any sequence in if converges uniformly to zero for all multi-indices then Any of the statements immediately above (that is, statements 14, 15, and 16) but with the additional requirement that compact set belongs to Topology on the space of distributions The topology of uniform convergence on bounded subsets is also called the strong dual topology. This topology is chosen because it is with this topology that becomes a nuclear Montel space and it is with this topology that the kernels theorem of Schwartz holds. No matter what dual topology is placed on a sequence of distributions converges in this topology if and only if it converges pointwise (although this need not be true of a net). No matter which topology is chosen, will be a non-metrizable, locally convex topological vector space. The space is separable and has the strong Pytkeev property but it is neither a k-space nor a sequential space, which in particular implies that it is not metrizable and also that its topology can not be defined using only sequences. Topological properties Topological vector space categories The canonical LF topology makes into a complete distinguished strict LF-space (and a strict LB-space if and only if ), which implies that is a meager subset of itself. Furthermore, as well as its strong dual space, is a complete Hausdorff locally convex barrelled bornological Mackey space.
The strong dual of is a Fréchet space if and only if so in particular, the strong dual of which is the space of distributions on , is not metrizable (note that the weak-* topology on also is not metrizable and moreover, it further lacks almost all of the nice properties that the strong dual topology gives ). The three spaces and the Schwartz space as well as the strong duals of each of these three spaces, are complete nuclear Montel bornological spaces, which implies that all six of these locally convex spaces are also paracompact reflexive barrelled Mackey spaces. The spaces and are both distinguished Fréchet spaces. Moreover, both and are Schwartz TVSs. Convergent sequences Convergent sequences and their insufficiency to describe topologies The strong dual spaces of and are sequential spaces but not Fréchet-Urysohn spaces. Moreover, neither the space of test functions nor its strong dual is a sequential space (not even an Ascoli space), which in particular implies that their topologies can not be defined entirely in terms of convergent sequences. A sequence in converges in if and only if there exists some such that contains this sequence and this sequence converges in ; equivalently, it converges if and only if the following two conditions hold: There is a compact set containing the supports of all For each multi-index the sequence of partial derivatives tends uniformly to Neither the space nor its strong dual is a sequential space, and consequently, their topologies can not be defined entirely in terms of convergent sequences. For this reason, the above characterization of when a sequence converges is not enough to define the canonical LF topology on The same can be said of the strong dual topology on What sequences do characterize Nevertheless, sequences do characterize many important properties, as we now discuss. It is known that in the dual space of any Montel space, a sequence converges in the strong dual topology if and only if it converges in the weak* topology, which in particular, is the reason why a sequence of distributions converges (in the strong dual topology) if and only if it converges pointwise (this leads many authors to use pointwise convergence to actually define the convergence of a sequence of distributions; this is fine for sequences but it does not extend to the convergence of nets of distributions since a net may converge pointwise but fail to converge in the strong dual topology). Sequences characterize continuity of linear maps valued in locally convex spaces. Suppose is a locally convex bornological space (such as any of the six TVSs mentioned earlier). Then a linear map into a locally convex space is continuous if and only if it maps null sequences in to bounded subsets of . More generally, such a linear map is continuous if and only if it maps Mackey convergent null sequences to bounded subsets of So in particular, if a linear map into a locally convex space is sequentially continuous at the origin then it is continuous. However, this does not necessarily extend to non-linear maps and/or to maps valued in topological spaces that are not locally convex TVSs.
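Written out in symbols of our choosing (the text's are elided), the bornological criterion just stated reads:

$$u : X \to Y \text{ linear, } X \text{ bornological:} \qquad u \text{ is continuous} \iff \{u(x_n)\}_{n \in \mathbb{N}} \text{ is bounded in } Y \text{ for every null sequence } (x_n) \text{ in } X.$$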
For every is sequentially dense in Furthermore, is a sequentially dense subset of (with its strong dual topology) and also a sequentially dense subset of the strong dual space of Sequences of distributions A sequence of distributions converges with respect to the weak-* topology on to a distribution if and only if for every test function For example, if is the function and is the distribution corresponding to then as so in Thus, for large the function can be regarded as an approximation of the Dirac delta distribution. Other properties The strong dual space of is TVS isomorphic to via the canonical TVS-isomorphism defined by sending to (that is, to the linear functional on defined by sending to ); On any bounded subset of the weak and strong subspace topologies coincide; the same is true for ; Every weakly convergent sequence in is strongly convergent (although this does not extend to nets). Localization of distributions Preliminaries: Transpose of a linear operator Operations on distributions and spaces of distributions are often defined by means of the transpose of a linear operator. This is because the transpose allows for a unified presentation of the many definitions in the theory of distributions and also because its properties are well known in functional analysis. For instance, the well-known Hermitian adjoint of a linear operator between Hilbert spaces is just the operator's transpose (but with the Riesz representation theorem used to identify each Hilbert space with its continuous dual space). In general the transpose of a continuous linear map is the linear map or equivalently, it is the unique map satisfying for all and all (the prime symbol in does not denote a derivative of any kind; it merely indicates that is an element of the continuous dual space ). Since is continuous, the transpose is also continuous when both duals are endowed with their respective strong dual topologies; it is also continuous when both duals are endowed with their respective weak* topologies (see the articles polar topology and dual system for more details). In the context of distributions, the characterization of the transpose can be refined slightly. Let be a continuous linear map. Then by definition, the transpose of is the unique linear operator that satisfies: Since is dense in (here, actually refers to the set of distributions ) it is sufficient that the defining equality hold for all distributions of the form where Explicitly, this means that a continuous linear map is equal to if and only if the condition below holds: where the right hand side equals Extensions and restrictions to an open subset Let be open subsets of Every function can be extended from its domain to a function on by setting it equal to zero on the complement This extension is a smooth compactly supported function called the trivial extension and it will be denoted by This assignment defines the operator which is a continuous injective linear map. It is used to canonically identify as a vector subspace of (although not as a topological subspace).
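A sketch of the extension by zero in standard notation; the letters $f$, $U$, $V$, and $E_{VU}$ are our labels, since the text's own symbols are elided:

$$E_{VU} f(x) = \begin{cases} f(x) & x \in U, \\ 0 & x \in V \setminus U. \end{cases}$$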
Its transpose (explained here) is called the restriction mapping and, as the name suggests, the image of a distribution under this map is a distribution on called the restriction of to The defining condition of the restriction is: If then the (continuous injective linear) trivial extension map is not a topological embedding (in other words, if this linear injection was used to identify as a subset of then 's topology would be strictly finer than the subspace topology that induces on it; importantly, it would not be a topological subspace since that requires equality of topologies) and its range is also not dense in its codomain Consequently, if then the restriction mapping is neither injective nor surjective. A distribution is said to be extendable to a given larger open set if it belongs to the range of the transpose of the corresponding trivial extension map, and it is called extendable if it is extendable to the whole space. Unless the restriction to is neither injective nor surjective. Spaces of distributions For all and all , all of the following canonical injections are continuous and have an image/range that is a dense subset of their codomain: where the topologies on the LB-spaces are the canonical LF topologies as defined below (so in particular, they are not the usual norm topologies). The range of each of the maps above (and of any composition of the maps above) is dense in the codomain. Indeed, is even sequentially dense in every For every the canonical inclusion into the normed space (here has its usual norm topology) is a continuous linear injection and the range of this injection is dense in its codomain if and only if . Suppose that is one of the LF-spaces (for ) or LB-spaces (for ) or normed spaces (for ). Because the canonical injection is a continuous injection whose image is dense in the codomain, this map's transpose is a continuous injection. This injective transpose map thus allows the continuous dual space of to be identified with a certain vector subspace of the space of all distributions (specifically, it is identified with the image of this transpose map). This continuous transpose map is not necessarily a TVS-embedding so the topology that this map transfers from its domain to the image is finer than the subspace topology that this space inherits from A linear subspace of carrying a locally convex topology that is finer than the subspace topology induced by is called a space of distributions. Almost all of the spaces of distributions mentioned in this article arise in this way (e.g. tempered distribution, restrictions, distributions of order some integer, distributions induced by a positive Radon measure, distributions induced by an -function, etc.) and any representation theorem about the dual space of may, through the transpose, be transferred directly to elements of the space Compactly supported Lp-spaces Given the vector space of compactly supported functions on and its topology are defined as direct limits of the spaces in a manner analogous to how the canonical LF-topologies on were defined. For any compact let denote the set of all elements in (which recall are equivalence classes of Lebesgue measurable functions on ) having a representative whose support (which recall is the closure of in ) is a subset of (such an is almost everywhere defined in ). The set is a closed vector subspace and is thus a Banach space and, when , even a Hilbert space. Let be the union of all as ranges over all compact subsets of The set is a vector subspace of whose elements are the (equivalence classes of) compactly supported functions defined on (or almost everywhere on ).
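In symbols of our choosing for the elided formulas, the construction just described reads:

$$L^p(K) := \{ f \in L^p(U) : \operatorname{supp} f \subseteq K \}, \qquad L_c^p(U) := \bigcup_{K \subseteq U \text{ compact}} L^p(K).$$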
Endow with the final topology (direct limit topology) induced by the inclusion maps as ranges over all compact subsets of This topology is called the canonical LF topology and it is equal to the final topology induced by any countable set of inclusion maps () where are any compact sets with union equal to This topology makes into an LB-space (and thus also an LF-space) with a topology that is strictly finer than the norm (subspace) topology that induces on it. Radon measures The inclusion map is a continuous injection whose image is dense in its codomain, so the transpose is also a continuous injection. Note that the continuous dual space can be identified as the space of Radon measures, where there is a one-to-one correspondence between the continuous linear functionals and integration with respect to a Radon measure; that is, if then there exists a Radon measure on such that for all and if is a Radon measure on then the linear functional on defined by is continuous. Through the injection every Radon measure becomes a distribution on . If is a locally integrable function on then the distribution is a Radon measure; so Radon measures form a large and important space of distributions. The following is the theorem of the structure of distributions of Radon measures, which shows that every Radon measure can be written as a sum of derivatives of locally L∞ functions in : Positive Radon measures A linear function on a space of functions is called positive if, whenever a function that belongs to its domain is non-negative (meaning that it is real-valued and nowhere negative), the value of the functional at it is also non-negative. One may show that every positive linear functional on is necessarily continuous (that is, necessarily a Radon measure). Lebesgue measure is an example of a positive Radon measure. Locally integrable functions as distributions One particularly important class of Radon measures are those that are induced by locally integrable functions. The function is called locally integrable if it is Lebesgue integrable over every compact subset of . This is a large class of functions which includes all continuous functions and all Lp space functions. The topology on is defined in such a fashion that any locally integrable function yields a continuous linear functional on – that is, an element of – denoted here by , whose value on the test function is given by the Lebesgue integral of the product of the two functions. Conventionally, one abuses notation by identifying with provided no confusion can arise, and thus the pairing between and is often written If and are two locally integrable functions, then the associated distributions and are equal to the same element of if and only if and are equal almost everywhere. In a similar manner, every Radon measure on defines an element of whose value on the test function is As above, it is conventional to abuse notation and write the pairing between a Radon measure and a test function as Conversely, as shown in a theorem by Schwartz (similar to the Riesz representation theorem), every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure. Test functions as distributions The test functions are themselves locally integrable, and so define distributions. The space of test functions is sequentially dense in with respect to the strong topology on This means that for any there is a sequence of test functions that converges to (in its strong dual topology) when considered as a sequence of distributions.
Or equivalently, Furthermore, is also sequentially dense in the strong dual space of Distributions with compact support The inclusion map is a continuous injection whose image is dense in its codomain, so the transpose is also a continuous injection. Thus the image of the transpose, denoted by forms a space of distributions when it is endowed with the strong dual topology of (transferred to it via the transpose map so the topology of is finer than the subspace topology that this set inherits from ). The elements of can be identified as the space of distributions with compact support. Explicitly, if is a distribution on then the following are equivalent: belongs to the space just defined; the support of is compact; the restriction of to when that space is equipped with the subspace topology inherited from (a coarser topology than the canonical LF topology), is continuous; there is a compact subset of such that for every test function whose support is completely outside of , the value of the distribution at that test function is zero. Compactly supported distributions define continuous linear functionals on the space ; recall that the topology on is defined such that a sequence of test functions converges to 0 if and only if all derivatives of converge uniformly to 0 on every compact subset of . Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support. Thus compactly supported distributions can be identified with those distributions that can be extended from to Distributions of finite order Let The inclusion map is a continuous injection whose image is dense in its codomain, so the transpose is also a continuous injection. Consequently, the image of denoted by forms a space of distributions when it is endowed with the strong dual topology of (transferred to it via the transpose map so 's topology is finer than the subspace topology that this set inherits from ). The elements of this space are the distributions of order at most k. The distributions of order at most zero, which are also called distributions of order zero, are exactly the distributions that are Radon measures (described above). For a nonzero integer k, a distribution of order k is a distribution of order at most k that is not a distribution of order at most k − 1. A distribution is said to be of finite order if there is some integer k such that it is a distribution of order at most k, and the set of distributions of finite order is denoted by Note that if then so that is a vector subspace of and furthermore, if and only if Structure of distributions of finite order Every distribution with compact support in is a distribution of finite order. Indeed, every distribution in is a distribution of finite order, in the following sense: If is an open and relatively compact subset of and if is the restriction mapping from to , then the image of under is contained in The following is the theorem of the structure of distributions of finite order, which shows that every distribution of finite order can be written as a sum of derivatives of Radon measures: Example. (Distributions of infinite order) Let and for every test function let Then is a distribution of infinite order on . Moreover, can not be extended to a distribution on ; that is, there exists no distribution on such that the restriction of to is equal to . Tempered distributions and Fourier transform Defined below are the tempered distributions, which form a subspace of the space of distributions on This is a proper subspace: while every tempered distribution is a distribution and an element of , the converse is not true.
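A standard counterexample (our choice of function, not the text's): the locally integrable function $f(x) = e^{x}$ on $\mathbb{R}$ defines a distribution that is not tempered, because its pairing with a Schwartz function of merely exponential decay diverges:

$$\phi(x) = e^{-\sqrt{1 + x^2}} \in \mathcal{S}(\mathbb{R}), \qquad \int_{\mathbb{R}} e^{x} \phi(x)\, dx = \int_{\mathbb{R}} e^{\,x - \sqrt{1 + x^2}}\, dx = \infty,$$

since the exponent tends to $0$ as $x \to +\infty$, so the integrand tends to $1$.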
Tempered distributions are useful if one studies the Fourier transform since all tempered distributions have a Fourier transform, which is not true for an arbitrary distribution in Schwartz space The Schwartz space is the space of all smooth functions that are rapidly decreasing at infinity along with all partial derivatives. Thus is in the Schwartz space provided that any derivative of multiplied with any power of converges to 0 as the argument tends to infinity. These functions form a complete TVS with a suitably defined family of seminorms. More precisely, for any multi-indices $\alpha$ and $\beta$ define: $p_{\alpha,\beta}(f) = \sup_{x} \left| x^{\alpha} \partial^{\beta} f(x) \right|.$ Then $f$ is in the Schwartz space if all the values $p_{\alpha,\beta}(f)$ are finite. The family of seminorms defines a locally convex topology on the Schwartz space. For the seminorms are, in fact, norms on the Schwartz space. One can also use the following family of seminorms to define the topology: Otherwise, one can define a norm on via The Schwartz space is a Fréchet space (i.e. a complete metrizable locally convex space). Because the Fourier transform changes differentiation into multiplication by polynomials and vice versa, this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function. A sequence in converges to 0 in if and only if the functions converge to 0 uniformly in the whole of which implies that such a sequence must converge to zero in is dense in The subset of all analytic Schwartz functions is dense in as well. The Schwartz space is nuclear and the tensor product of two maps induces a canonical surjective TVS-isomorphism where represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product). Tempered distributions The inclusion map is a continuous injection whose image is dense in its codomain, so the transpose is also a continuous injection. Thus, the image of the transpose map, denoted by forms a space of distributions when it is endowed with the strong dual topology of (transferred to it via the transpose map so the topology of is finer than the subspace topology that this set inherits from ). The space is called the space of tempered distributions. It is the continuous dual of the Schwartz space. Equivalently, a distribution is a tempered distribution if and only if The derivative of a tempered distribution is again a tempered distribution. Tempered distributions generalize the bounded (or slow-growing) locally integrable functions; all distributions with compact support and all square-integrable functions are tempered distributions. More generally, all functions that are products of polynomials with elements of Lp space for are tempered distributions. The tempered distributions can also be characterized as slowly growing, meaning that each derivative grows at most as fast as some polynomial. This characterization is dual to the behaviour of the derivatives of a function in the Schwartz space, where each derivative decays faster than every inverse power of An example of a rapidly falling function is for any positive Fourier transform To study the Fourier transform, it is best to consider complex-valued test functions and complex-linear distributions. The ordinary continuous Fourier transform is a TVS-automorphism of the Schwartz space, and the Fourier transform on tempered distributions is defined to be its transpose, which (abusing notation) will again be denoted by . So the Fourier transform of the tempered distribution $T$ is defined by $\widehat{T}(\psi) = T(\widehat{\psi})$ for every Schwartz function $\psi$; $\widehat{T}$ is thus again a tempered distribution. The Fourier transform is a TVS isomorphism from the space of tempered distributions onto itself.
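Under one common normalization (our choice of convention, not fixed by the text), the compatibility with differentiation mentioned next takes the explicit form:

$$\widehat{\phi}(\xi) = \int_{\mathbb{R}^n} \phi(x)\, e^{-2\pi i\, x \cdot \xi}\, dx \quad \Longrightarrow \quad \widehat{\partial_j T} = 2\pi i\, \xi_j\, \widehat{T} \quad \text{for a tempered distribution } T.$$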
This operation is compatible with differentiation in the sense that and also with convolution: if is a tempered distribution and is a smooth function on is again a tempered distribution and is the convolution of and . In particular, the Fourier transform of the constant function equal to 1 is the Dirac delta distribution. Expressing tempered distributions as sums of derivatives If is a tempered distribution, then there exists a constant and positive integers and such that for all Schwartz functions This estimate along with some techniques from functional analysis can be used to show that there is a continuous slowly increasing function and a multi-index such that Restriction of distributions to compact sets If then for any compact set there exists a continuous function compactly supported in (possibly on a larger set than itself) and a multi-index such that on Tensor product of distributions Let and be open sets. Assume all vector spaces to be over the field of real or complex numbers. For define for every and every the following functions: Given and define the following functions: where and These definitions associate every and with the (respective) continuous linear map: Moreover, if either (resp. ) has compact support then it also induces a continuous linear map of (resp. denoted by or is the distribution in defined by: Schwartz kernel theorem The tensor product defines a bilinear map; the span of the range of this map is a dense subspace of its codomain. Furthermore, it induces continuous bilinear maps: where denotes the space of distributions with compact support and is the Schwartz space of rapidly decreasing functions. This result does not hold for Hilbert spaces such as and its dual space. Why does such a result hold for the space of distributions and test functions but not for other "nice" spaces like the Hilbert space ? This question led Alexander Grothendieck to discover nuclear spaces, nuclear maps, and the injective tensor product. He ultimately showed that it is precisely because is a nuclear space that the Schwartz kernel theorem holds. Like Hilbert spaces, nuclear spaces may be thought of as generalizations of finite dimensional Euclidean space. Using holomorphic functions as test functions The success of the theory led to investigation of the idea of hyperfunction, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example Feynman integrals. See also Notes References Bibliography Further reading M. J. Lighthill (1959). Introduction to Fourier Analysis and Generalised Functions. Cambridge University Press. (requires very little knowledge of analysis; defines distributions as limits of sequences of functions under integrals) V.S. Vladimirov (2002). Methods of the theory of generalized functions. Taylor & Francis. Functional analysis Generalized functions Generalizations of the derivative Smooth functions Topological vector spaces Schwartz distributions
Spaces of test functions and distributions
[ "Mathematics" ]
8,579
[ "Functions and mappings", "Functional analysis", "Vector spaces", "Mathematical objects", "Space (mathematics)", "Topological vector spaces", "Mathematical relations" ]
59,885,583
https://en.wikipedia.org/wiki/Journal%20of%20Alloys%20and%20Compounds
The Journal of Alloys and Compounds is a peer-reviewed scientific journal covering experimental and theoretical approaches to materials problems that involve compounds and alloys. It is published by Elsevier and the editors-in-chief are Hongge Pan and Livio Battezzati. It was the first journal established to focus specifically on a group of inorganic elements. History The journal was established by William Hume-Rothery in 1958 as the Journal of the Less-Common Metals, focussing on the chemical elements in the rows of the periodic table for the actinide and lanthanide series. The lanthanides are sometimes referred to as the rare earths. The journal was not strictly limited to articles about those specific elements: it also included papers about the preparation and use of other elements and alloys. The journal developed out of an international symposium on metals and alloys above 1200 °C which Hume-Rothery organized at Oxford University on September 17–18, 1958. The conference included more than 100 participants from several countries. The papers presented at the symposium "The study of metals and alloys above 1200°C" were published as volume 1 of the journal. It was the first journal dealing specifically with a category of inorganic elements. The title of "Less-Common Metals" was something of a misnomer, since these metals are actually found fairly commonly, but in small amounts. The journal obtained its current name in 1991 and is considered a particularly rich source of information on hydrogen-metal systems. Retractions In 2017, Elsevier was reported to be retracting three papers from the journal, which was one of several journals affected by falsified reviews; this led to a broader discussion of the processes for reviewing journal articles. Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Service Current Contents/Physical, Chemical & Earth Sciences Science Citation Index Scopus According to the Journal Citation Reports, the journal has a 2022 impact factor of 6.371. References External links Materials science journals Academic journals established in 1958 Rare earth elements English-language journals Elsevier academic journals
Journal of Alloys and Compounds
[ "Materials_science", "Engineering" ]
420
[ "Materials science journals", "Materials science" ]
62,567,637
https://en.wikipedia.org/wiki/Radio%20Spectrum%20Policy%20Group
The Radio Spectrum Policy Group (RSPG) is an advisory group, founded on 26 July 2002, that advises the European Commission on matters related to the radio spectrum. The group is made up of representatives from the European Commission and the member states of the European Union. The group focuses on dealing with the radio spectrum with regard to telecommunications, health and transportation. The group was re-formed on 11 June 2019 under the same name. References External links European Commission Radio spectrum
Radio Spectrum Policy Group
[ "Physics" ]
88
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
62,567,781
https://en.wikipedia.org/wiki/Radio%20Spectrum%20Policy%20Programme
The Radio Spectrum Policy Programme (RSPP) was a five-year programme which set out regulatory requirements, goals and priorities of the European Union relating to the radio spectrum. It was first adopted on 14 March 2012. It attempted to standardise the frequencies that different types of communication could use and also set goals as to when this standardisation should be complete. However, some member states did not meet certain goals laid out in the programme. A legislative review recommended implementing an adapted programme as legislation in a regulation, and so a modified version was incorporated into a proposed regulation. The legislation was supported by the European Parliament, but was subsequently removed after criticism from member states in the European Council. In 2015, the Radio Spectrum Policy Group said the programme had mostly met its goals. The modified version was then used as a basis for the section on the radio spectrum in the European Electronic Communications Code. History The programme was adopted by the European Council and the European Parliament on 14 March 2012. It was managed and created by the European Commission. The first version laid out goals and their timescales, which aimed to standardise the assignment of the radio spectrum across the EU. It also stipulated that the commission had to produce a report on what the programme had achieved by April 2014. Several member states failed to meet certain goals for a variety of reasons, which meant that the programme did not achieve standardisation in all member states early on. There were several member states that missed the target for the assignment of the 800 MHz band. In the year after its introduction, the European Commission initiated three different legislative reviews of the programme, with the third review proposing that the commission adopt the programme into regulation. This was because not all of the member states met the goals on time. Adopting it as a regulation would mean that, once in force, it would be enforceable in member states without national legislation, ensuring that every member state met the goals on time. In 2013, the commission modified the programme and added the new version as legislation to a regulatory proposal. The European Parliament supported the legislation; however, member states in the European Council did not agree with the legislation due to its "intrusiveness into national prerogatives". The legislation was then removed by the council from the regulatory proposal. In 2015, the Radio Spectrum Policy Group in their annual report said that the programme's objectives had been mostly achieved. In 2016, the European Electronic Communications Code was created, which incorporated a section on the radio spectrum, and this section was mostly based on the modified 2013 programme. The code was implemented, along with the section, in 2018. Aims of the programme Some of the goals of the programme included switching from analogue to digital broadcasting, assignment of certain frequencies to mobile broadband throughout the EU, and making use of the freed radio spectrum space for wireless communication. The proposed legislation in 2013 aimed to phase out national differences in the allocation of the radio spectrum. References Works cited European Union law 2012 in law 2012 in the European Union Radio spectrum
Radio Spectrum Policy Programme
[ "Physics" ]
601
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
76,536,009
https://en.wikipedia.org/wiki/Thorium%20trichloride
Thorium trichloride is a binary inorganic compound of thorium metal and chlorine with the chemical formula . Synthesis The compound can be prepared by reducing thorium tetrachloride at 800 °C; it can also be prepared by direct reaction of the two elements. Other routes are also known. Physical properties The compound forms crystals of the uranium trichloride structure type. Chemical properties Above 630 °C thorium trichloride disproportionates into the dichloride and tetrachloride. Uses Thorium trichloride has been proposed as a reactor fuel in a dual fluid reactor. References Thorium compounds Nuclear materials Chlorides Actinide halides
Thorium trichloride
[ "Physics", "Chemistry" ]
137
[ "Chlorides", "Inorganic compounds", "Salts", "Materials", "Nuclear materials", "Matter" ]
76,542,190
https://en.wikipedia.org/wiki/Dalpiciclib
Dalpiciclib is a drug for the treatment of various forms of cancer. In China, dalpiciclib is approved for use in combination with fulvestrant for treatment of HR-positive, HER2-negative recurrent or metastatic breast cancer in patients who have progressed after previous endocrine therapy. Dalpiciclib is a CDK inhibitor that targets the CDK4 and CDK6 isoforms. References CDK inhibitors Piperidines 2-Aminopyridines Cyclopentanes Pyridopyrimidines Ketones Lactams
Dalpiciclib
[ "Chemistry" ]
119
[ "Ketones", "Functional groups" ]
76,547,002
https://en.wikipedia.org/wiki/Steven%20Gao
Steven Shichang Gao (; born 1972) is a British Chinese electronic engineer and Professor of Electronic Engineering. His research mainly includes antennas, MIMO, intelligent antennas and phased arrays for mobile and satellite communications, navigation and sensing. He obtained his doctorate from Shanghai University in 1999. He completed post-doctoral research at the National University of Singapore and subsequently moved to the United Kingdom in 2001 to work as a research fellow at the University of Birmingham. The following year, he began teaching at Northumbria University as a senior lecturer and was promoted to reader in 2006. In 2007, he moved to the Surrey Space Centre at the University of Surrey, and from 2013 to 2022 he worked at the University of Kent as a professor and Chair of RF and Microwave Engineering. In September 2022, he joined the Chinese University of Hong Kong as a professor and the director of the Center for Intelligent Electromagnetic Systems. Since 2023, Gao has served as the editor-in-chief of the journal IEEE Antennas and Wireless Propagation Letters. Gao is a Fellow of the Institute of Electrical and Electronics Engineers, the Royal Aeronautical Society, and the Institution of Engineering and Technology. References Chinese electrical engineers Chinese expatriates in the United Kingdom Academic staff of the Chinese University of Hong Kong Academics of the University of Surrey Academics of the University of Kent Academics of Northumbria University Fellows of the IEEE Fellows of the Royal Aeronautical Society Fellows of the Institution of Engineering and Technology Scientific journal editors Living people 1972 births Chinese expatriates in Singapore Shanghai University alumni
Steven Gao
[ "Engineering" ]
313
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
56,518,900
https://en.wikipedia.org/wiki/Why%20Is%20It%20So%3F
Why Is It So? is an educational science series produced in Australia by ABC Television from 1963 to 1986. The series was hosted by American scientist Julius Sumner Miller, who demonstrated experiments in the world of physics. The series was also screened in the United States, Canada, New Zealand and in Europe. This program was based on his 1959 series Why Is It So? in the United States on KNXT (now KCBS-TV) Channel 2 in Los Angeles. Several segments from the program have been uploaded to the ABC Science YouTube channel. References External links 1960s Australian documentary television series 1970s Australian documentary television series 1980s Australian documentary television series Science education television series Physics education Australian Broadcasting Corporation original programming 1963 Australian television series debuts 1986 Australian television series endings
Why Is It So?
[ "Physics" ]
148
[ "Applied and interdisciplinary physics", "Physics education" ]
56,520,281
https://en.wikipedia.org/wiki/William%20Foster%20Nye
William Foster Nye (May 20, 1824 – August 12, 1910) was an American businessman and founder of a lubricating oil business in New Bedford, Massachusetts, which is still in existence today and known as Nye Lubricants. He was a relative of Bill Nye of Bill Nye the Science Guy fame. Life and career Nye was born in the village of Pocasset (at the time considered part of the town of Sandwich), one of the eight children of Syrena née Dimmock and Ebenezer Nye. His family was descended from Benjamin Nye who had emigrated from England in 1635 and settled in Massachusetts where he eventually built and operated a sawmill near Sandwich. At the age of 16 Nye was apprenticed to Prince Weeks, a master builder in New Bedford. On finishing his apprenticeship, he worked for a pipe organ-building company in Boston and then spent three years in Calcutta as a carpenter for the Frederic Tudor Ice Company. On his return to Massachusetts he married Mary Keith on May 20, 1851. Nye then sailed to California, crossing the Isthmus of Panama on foot, and arriving in San Francisco shortly after the Fire of 1851 which had destroyed much of the city. He worked in the re-building of the city for several years, helping to construct some of San Francisco's first brick buildings. In 1855, Nye returned to New Bedford and set up an oil and kerosene business which he operated until the outbreak of the American Civil War when he joined the Union Army as a sutler to the Massachusetts Artillery and the 4th Massachusetts Cavalry. He was with the advance guard of the cavalry when it entered Richmond, Virginia in 1865 and set up a trading post there in one of the city's remaining brick buildings. For a time he was the sole tradesman operating in Richmond. After his regiment was mustered out in November 1865 Nye returned to New Bedford and began developing the lubricant oil business for which he became principally known. Nye's oil business, originally run out of small rented premises in Fairhaven, focused primarily on highly refined lubricant oils for watches, clocks, typewriters, sewing machines, and bicycles. In the late 1860s he acquired an entire catch of 2,200 pilot whales which would supply the raw material for his lubricating oils for several years. He expanded the business in 1877 with the purchase of a large brick building on Fish Island which became its principal refinery. By 1888, his company had become one of the world's largest suppliers of refined lubricant oils. In 1896 Nye absorbed Ezra Kelley's oil company, his main rival. He remained actively involved in the business until shortly before his death in 1910 at the age of 86. Nye's son, Joseph Keith Nye (1858–1923) worked extensively with his father, patented several inventions for the improvement of the refining process, and took over the company after his father's death. After Joseph's death in 1923, the business was acquired by his associate Anderson W. Kelley. It was subsequently acquired by the Mock family in 1956 and still operates today under the name Nye Lubricants. A keen believer in spiritualism, Nye had been one of the founders and most active promoters of the Onset Bay Grove Association. Their summer retreat, Onset Bay Grove, built by the association in the late 1870s, claimed to be the "largest community of spiritualism yet formed in the fifty years history of its teachings." 
In his later years Nye said of his beliefs: That I am a spiritualist must be to those I leave behind me the touch that withers my memory or the ever living archway about which they can entwine earth's fragrant flowers and through which they may in gladness follow me to the evergreen shore. Nye is buried with his wife Mary in Riverside Cemetery in Fairhaven. Notes References 1824 births 1910 deaths American manufacturing businesspeople Businesspeople from Massachusetts People from Sandwich, Massachusetts 19th-century American businesspeople Tribologists
William Foster Nye
[ "Materials_science" ]
829
[ "Tribology", "Tribologists" ]
56,520,290
https://en.wikipedia.org/wiki/Neutral%20Point%20Clamped
Neutral point clamped (NPC) inverters are a widely used topology of multilevel inverters in high-power applications. Inverters of this kind can be used in applications of up to several megawatts. See also Active power filter Synchronverter References Power electronics
Neutral Point Clamped
[ "Engineering" ]
68
[ "Electronic engineering", "Power electronics" ]
56,523,750
https://en.wikipedia.org/wiki/Cyanoalanine
Cyanoalanine (more accurately β-cyano-L-alanine) is an amino acid with the formula NCCH2CH(NH2)CO2H. Like most amino acids, it exists as the zwitterion NCCH2CH(NH3+)CO2−. It is a rare example of a nitrile-containing amino acid. It is a white, water-soluble solid. It can be found in common vetch seeds. Cyanoalanine arises in nature by the action of cyanide on cysteine, catalyzed by L-3-cyanoalanine synthase: HSCH2CH(NH2)CO2H + HCN → NCCH2CH(NH2)CO2H + H2S It is converted to aspartic acid and asparagine enzymatically. References Alpha-Amino acids Nitriles
Cyanoalanine
[ "Chemistry" ]
187
[ "Nitriles", "Functional groups" ]
56,524,376
https://en.wikipedia.org/wiki/Pollen%20DNA%20barcoding
Pollen DNA barcoding is the process of identifying pollen donor plant species through the amplification and sequencing of specific, conserved regions of plant DNA. Being able to accurately identify pollen has a wide range of applications, though it has been difficult in the past due to the limitations of microscopic identification of pollen. Identifying pollen using DNA barcoding involves specifically targeting gene regions that are found in most to all plant species but have high variation between members of different species. The unique sequence of base pairs for each species within these target regions can be used as an identifying feature. The applications of pollen DNA barcoding range from forensics to food safety to conservation. Each of these fields benefits from the creation of plant barcode reference libraries. These libraries vary widely in the size and scope of their collections as well as in what target region(s) they specialize in. One of the main challenges of identifying pollen is that it is often collected as a mixture of pollen from several species. Metabarcoding is the process of identifying the individual species' DNA from a mixed DNA sample and is commonly used to catalog pollen in mixed pollen loads found on pollinating animals and in environmental DNA (also called eDNA), which is DNA extracted straight from the environment, such as in soil or water samples. Advantages Some of the principal constraints of microscopic identification are the expertise and time requirements. Identifying pollen via microscopy requires a high level of expertise in the pollen characteristics of the specific plants being studied. With expertise it can still be extremely difficult to identify pollen accurately with high taxonomic resolution. The skills required to do DNA barcoding are much more common, making the approach easier to adopt. Pollen DNA barcoding is a technique that has grown in popularity due to the decreased costs associated with "next generation sequencing" (NGS) techniques and is being continually improved in efficiency, including through the use of a dual-indexing approach. Some of the other major advantages include the savings in time and resources compared to microscopic identification. Identifying pollen is time-consuming, involving spreading pollen on a slide, staining the pollen to improve visibility, then focusing in on individual pollen grains and identifying them based on size and shape, as well as the shape and number of pores. If a pollen reference library is not available, then pollen has to be collected from wild specimens or from herbarium specimens and is then added to a pollen reference library. Rare plants visited by some pollinators can be difficult to determine; by using pollen DNA barcoding, researchers can uncover "invisible" interactions between plants and pollinators. Challenges There are many challenges when it comes to genetic barcoding of pollen. The amplification process of DNA can mean that even small pieces of plant DNA can be detected, including those from contaminants in a sample. Strict procedures to prevent contamination are important and can be facilitated by the hardiness of the pollen coat, which allows the pollen to be washed of contaminants without damaging the internal pollen DNA. DNA barcode reference libraries are still being built and standardized target regions are being gradually adopted. These challenges are likely due to the newness of DNA barcoding and will likely improve with the wider adoption of DNA barcoding as a tool used by taxonomists.
The amount contributed by each species to a mixed pollen load can be difficult to determine through the use of DNA barcoding. However, scientists have been able to compare pollen amounts via rank order. Alternatives Innovations in automated microscopy and imaging software offer one potential alternative in the identification of pollen. Through the use of pattern-recognition software, researchers have developed software that can characterize microscopic pollen images based on texture analysis. Target regions There have been several different regions of plant DNA that have been used as targets for genetic barcoding, including rbcL, matK, trnH-psbA, ITS1 and ITS2. A combination of rbcL and matK has been recommended for use in plant DNA barcoding. It has been found that trnL is better for degraded DNA and ITS1 is better for differentiating species within a genus (an illustrative matching sketch follows below). Applications Use in pollination networks Being able to identify pollen is especially important in the study of pollination networks, which are made up of all the interactions between plants and the animals that facilitate their pollination. Identifying the pollen carried on insects helps scientists understand which plants are being visited by which insects. Insects can also have homologous features making them difficult to identify and are themselves sometimes identified through genetic barcoding (usually of the CO1 region). Not every insect that visits a flower is a pollinator. Many lack features such as hairs allowing them to carry pollen, while others avoid the pollen-laden anthers to steal nectar. Pollination networks are made more accurate by including what pollen is being carried by which insects. Some scientists argue that pollination effectiveness (PE), which is measured by studying the germination rates of seeds produced from flowers visited only once by a single animal, is the best way to determine which animals are important pollinators, though other scientists have used DNA barcoding to determine the genetic origin of pollen found on insects and have argued that this, in conjunction with other traits, is a good indication of pollination effectiveness. By studying the composition and structure of pollination networks, conservationists can understand the stability of a pollination network and identify which species are most important and which are most at risk from perturbations leading to pollinator declines. Another advantage of pollen DNA barcoding is that it can be used to determine the source of pollen found on museum specimens of insects, and these records of insect-plant interactions can then be compared to modern-day interactions to see how pollination networks have changed over time due to global warming, land use change, and other factors. Forensics Being able to accurately identify pollen found on evidence helps forensic investigators identify which regions evidence originated from, based on the plants that are endemic to those regions. In addition to this, atmospheric pollen originating from illegal cannabis farms was successfully detected by scientists, which in the future could allow law enforcement officials to narrow down the search areas for illegal farms. Ancient pollen Due to the hardy structure of pollen, which has evolved to survive being transported sometimes great distances while keeping the internal genetic information intact, the origin of pollen found mixed in ancient substrates can often be determined through DNA barcoding.
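The assignment step of a metabarcoding workflow can be illustrated with a toy k-mer matcher. This is a minimal sketch under stated assumptions: the reference sequences, similarity threshold, and function names below are all invented for illustration, and real pipelines rely on curated libraries and dedicated alignment tools rather than this naive Jaccard comparison.

```python
# Toy illustration of assigning reads from a mixed pollen load to species
# via a barcode reference library. Requires Python 3.10+ for the type hints.

def kmer_set(seq: str, k: int = 8) -> set[str]:
    """Decompose a DNA sequence into its set of overlapping k-mers."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two k-mer sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical ITS2-like reference library: species -> barcode sequence.
REFERENCE = {
    "Trifolium pratense": "ATCGTACGGATCCGTAGCTAGGCTAACGTTAGC",
    "Salix alba":         "GGCATCCGTTAGGCTAATCGGATCGTACCGTAA",
    "Brassica napus":     "TTAGCGGATCGTACCGTTAGGCATCCGATAGCA",
}

def assign(read: str, min_sim: float = 0.3) -> str | None:
    """Return the best-matching species for a read, or None if too dissimilar."""
    read_kmers = kmer_set(read)
    best_species, best_sim = None, 0.0
    for species, barcode in REFERENCE.items():
        sim = jaccard(read_kmers, kmer_set(barcode))
        if sim > best_sim:
            best_species, best_sim = species, sim
    return best_species if best_sim >= min_sim else None

if __name__ == "__main__":
    # A "read" from a mixed pollen load: a fragment of the first reference.
    read = "ATCGTACGGATCCGTAGCTAGG"
    print(assign(read))  # -> Trifolium pratense
```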
Food safety Honeybees carry pollen as well as the nectar used in their production of honey. For food quality and safety concerns it is important to understand the plant provenance of human-consumed bee products including honey, royal jelly, and pollen pellets. Investigators can test which plants honeybees foraged on, and thus the origin of the nectar used in honey, by collecting pollen packets from honeybees' corbicular loads and identifying the pollen via DNA metabarcoding. See also Aeroplankton References Bioinformatics Molecular genetics Palynology DNA barcoding
Pollen DNA barcoding
[ "Chemistry", "Engineering", "Biology" ]
1,394
[ "Genetics techniques", "Biological engineering", "DNA barcoding", "Bioinformatics", "Molecular genetics", "Molecular biology", "Phylogenetics" ]
56,525,552
https://en.wikipedia.org/wiki/Elementary%20flow
In the larger context of the Navier-Stokes equations (and especially in the context of potential theory), elementary flows are basic flows that can be combined, using various techniques, to construct more complex flows. In this article the term "flow" is used interchangeably with the term "solution" for historical reasons. The techniques involved in creating more complex solutions include, for example, superposition, techniques such as topology, and considering elementary flows as local solutions on a certain neighborhood, subdomain or boundary layer to be patched together. Elementary flows can be considered the basic building blocks (fundamental solutions, local solutions and solitons) of the different types of equations derived from the Navier-Stokes equations. Some of the flows reflect specific constraints such as incompressible or irrotational flows, or both, as in the case of potential flow, and some of the flows may be limited to the case of two dimensions. Due to the relationship between fluid dynamics and field theory, elementary flows are relevant not only to aerodynamics but to all field theory in general. To put this in perspective, boundary layers can be interpreted as topological defects on generic manifolds, and, considering fluid dynamics analogies and limit cases in electromagnetism, quantum mechanics and general relativity, one can see how all these solutions are at the core of recent developments in theoretical physics such as the AdS/CFT duality, the SYK model, the physics of nematic liquids, strongly correlated systems and even quark–gluon plasmas. Two-dimensional uniform flow For steady-state, spatially uniform flow of a fluid in the plane, the velocity vector is where is the absolute magnitude of the velocity (i.e., ); is the angle the velocity vector makes with the positive axis ( is positive for angles measured in a counterclockwise sense from the positive axis); and and are the unit basis vectors of the coordinate system. Because this flow is incompressible (i.e., ) and two-dimensional, its velocity can be expressed in terms of a stream function, : where and is a constant. In cylindrical coordinates: and This flow is irrotational (i.e., ) so its velocity can be expressed in terms of a potential function, : where and is a constant. In cylindrical coordinates Two-dimensional line source The case of a vertical line emitting at a fixed rate a constant quantity of fluid Q per unit length is a line source. The problem has a cylindrical symmetry and can be treated in two dimensions on the orthogonal plane. Line sources and line sinks (below) are important elementary flows because they play the role of monopole for incompressible fluids (which can also be considered examples of solenoidal fields, i.e. divergence-free fields). Generic flow patterns can also be decomposed in terms of multipole expansions, in the same manner as for electric and magnetic fields, where the monopole is essentially the first non-trivial (e.g. constant) term of the expansion. This flow pattern is also both irrotational and incompressible. This is characterized by a cylindrical symmetry: where the total outgoing flux is constant. Therefore, $v_r = \frac{Q}{2\pi r}.$ This is derived from a stream function $\psi(r,\theta) = \frac{Q}{2\pi}\,\theta$ or from a potential function $\phi(r,\theta) = \frac{Q}{2\pi}\ln r.$ Two-dimensional line sink The case of a vertical line absorbing at a fixed rate a constant quantity of fluid Q per unit length is a line sink. Everything is the same as in the case of a line source apart from the negative sign.
This is derived from a stream function $\psi(r,\theta) = -\frac{Q}{2\pi}\,\theta$ or from a potential function $\phi(r,\theta) = -\frac{Q}{2\pi}\ln r.$ Given that the two results are the same apart from a minus sign, we can treat line sources and line sinks transparently with the same stream and potential functions, permitting Q to assume both positive and negative values and absorbing the minus sign into the definition of Q. Two-dimensional doublet or dipole line source If we consider a line source and a line sink at a distance d we can reuse the results above and the stream function will be The last approximation is to the first order in d. Given It remains The velocity is then And the potential instead Two-dimensional vortex line This is the case of a vortex filament rotating at constant speed; there is a cylindrical symmetry and the problem can be solved in the orthogonal plane. Dual to the case above of line sources, vortex lines play the role of monopoles for irrotational flows. In this case too the flow is both irrotational and incompressible and therefore a case of potential flow. This is characterized by a cylindrical symmetry: where the total circulation is constant for every closed line around the central vortex and is zero for any line not including the vortex. Therefore, $v_\theta = \frac{\Gamma}{2\pi r}.$ This is derived from a stream function $\psi(r,\theta) = -\frac{\Gamma}{2\pi}\ln r$ or from a potential function $\phi(r,\theta) = \frac{\Gamma}{2\pi}\,\theta,$ which is dual to the previous case of a line source. Generic two-dimensional potential flow Given an incompressible two-dimensional flow which is also irrotational we have: which is in cylindrical coordinates We look for a solution with separated variables: which gives Given that the left part depends only on r and the right part depends only on , the two parts must be equal to a constant independent of r and . The constant shall be positive. Therefore, The solution to the second equation is a linear combination of and In order to have a single-valued velocity (and also a single-valued stream function) m shall be a positive integer. Therefore the most generic solution is given by The potential is instead given by References Specific Further reading External links Fluid dynamics
Elementary flow
[ "Chemistry", "Engineering" ]
1,124
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
56,527,569
https://en.wikipedia.org/wiki/Conservation%20and%20restoration%20of%20photographic%20plates
The conservation and restoration of photographic plates is the practice of caring for and maintaining photographic plates to preserve their materials and content. The practice includes the measures that can be taken by conservators, curators, collection managers, and other professionals to conserve the material unique to photographic plate processes. This practice includes understanding the composition and agents of deterioration of photographic plates, as well as the preventive and interventive conservation measures that can be taken to increase a photographic image's longevity. History Composition In general, black and white photographic negatives are made up of fine silver particles (or color dyes for color negatives), which are embedded in a thin layer called a binder; the two together comprising the emulsion. This emulsion layer sits upon what is called the support, which can be paper, metal, film, or, as in the case of photographic plates, glass. Before exposure, a photographic plate consists of a photosensitive substance layered on a support medium. Glass plates emerged as a common support medium for photographic negatives from the mid-nineteenth century to the 1920s. Depending upon the period, there can be variants to the binder and, thus, the chemistry of the image. In the case of the wet plate collodion process, the image is run under a wash bath to stop the development of the image after exposure. An important part of the photographic process, "fixing", is then used to wash away the silver particles that are not part of the image, which then produces a stable negative image. The fix bath will ensure that the remaining silver halide crystals are no longer sensitive to additional light exposure, removing all excess. This negative image can then be used over many years to produce paper positives. It is important for the conservator to understand the chemistry, in order to prevent further chemical reactions. Processes Collodion glass plate negative: This process was invented by the Englishman Frederick Scott Archer in 1851. While the first process to take advantage of glass plates was the albumen print method, it was quite laborious and was quickly surpassed by the collodion glass plate negative in common use. The collodion photographic process was a wet plate process, which meant that the glass plate itself had to be wet while it was exposed and throughout processing. This required a portable darkroom to be taken wherever a photographer went, in order to produce a negative image successfully. During the process, the collodion emulsion was poured onto a glass plate before being exposed. The glass plate was then developed, fixed, washed, and protected with a varnish. Collodion is made by dissolving nitrocellulose (cotton treated with nitric acid) in a mixture of ether and alcohol. Because collodion was both complex and dangerous to produce, it was often purchased by the photographer. Once dissolved, iodide was added. Over time, bromide was added to help the image be more sensitive to light. Sometimes albumen (made of egg whites) was used to help the collodion stick to the glass plate. Pyrogallic acid or ferrous sulfate was often used to develop the latent image, then sodium thiosulfate (also known as hypo) or potassium cyanide was used to fix the image. Gelatin dry plate negative: This process was invented by Richard Leach Maddox in 1871, but it was not commonly used until 1879, when the process became commercially successful. Because of this process's advances in photography, it soon replaced the wet plate process in the 1880s.
The collodion binder formerly used was replaced by gelatin, which already contained light-sensitive silver salts. This meant the emulsion was already present and did not have to be painted on the glass plate right before exposure – which now took less than one second. Because of this advancement, photographers did not have to carry a portable darkroom, as the plate could be developed later. To make the gelatin dry plate, the glass was cleaned, polished, and treated to ensure the gelatin would adhere to the glass plate. Treatments included applying thin layers of gelatin or albumen, or chemically etching the surface. After 1879, when further improvements were made to the gelatin emulsion, gelatin glass plates began being mass-produced by companies such as Wratten & Wainwright, Keystone Dry Plate Works, and notably the Eastman Dry Plate Company. This led to the widespread use and production of photographic glass plates until around 1925 and marked the start of the development of modern photography as an industry.
Screen plate: The screen plate process is also known as Autochrome Lumière and was invented in France in 1907 by Louis and Auguste Lumière. Screen plates were an additive color screen process considered the first successful color process for commercial photography. The Lumière brothers drew upon the color theories of James Clerk Maxwell and Louis Ducos du Hauron from the second half of the 19th century. Autochrome plates were "covered in microscopic red, green, and blue colored potato starch grains". These grains, before being placed on the glass plate, were sorted through sieves to break them down to "thousandths of a millimeter" in diameter. Once broken down in size, they were separated into groups, then dyed either red, violet, or green. The grains were mixed and then spread over a glass plate covered with a tacky varnish. A second varnish was then applied over the layer of starch grains. The second coating of varnish was a hydrophobic layer composed of castor oil, cellulose nitrate, and dammar resin.
Ambrotype: The ambrotype process closely resembled the wet plate collodion process in composition and creation, and was considered to be a "collodion positive". In 1850, Louis Désiré Blanquart-Evrard realized that an underexposed negative appeared as a positive when placed against a dark background. Dark backgrounds, created with paints, fabrics, and papers, were used to achieve this effect. In some instances, bleach was used, after the image was developed, to yield a softer appearance. In this process, the chemical composition and fixing bath are critical elements to the lifespan of the image, but the material that backs the glass plate may also cause deterioration.

Agents of deterioration
There are ten accepted agents of deterioration: dissociation, fire, incorrect relative humidity, incorrect temperature, light, pests, pollutants, physical forces, thieves, and water. Photographic plates face risks of damage from both external forces and from their own chemical composition. For a conservator to create an appropriate plan to protect against agents of deterioration, they must understand what might impact a photographic plate. The following list addresses how each agent of deterioration harms photographic glass plates.

Relative humidity and temperature
Relative humidity (RH) and temperature are two of the most common threats to photographic plates. As with all material collections, high temperature in combination with high humidity can cause mold growth and attract pests.
Photographic plates face significant structural and chemical challenges unique to their makeup. There are two types of photographic glass plates: collodion wet plates and gelatin dry plates. Structurally, both are held together by an emulsion of light-sensitive silver halides in a binder: collodion (cellulose nitrate) for wet plates, and gelatin for dry plates. Fluctuations in RH strain the emulsion, causing the binder to expand and contract. The strain from incorrect RH can also cause the emulsion to crack or separate along the plates' edges. With gelatin dry plates, high humidity can cause mold to grow on the emulsion. High levels of humidity can cause glass plates that have been stored incorrectly to stick together, compromising the image on the plate. Increasing RH can cause deterioration of other elements; these include the silver halide, varnish, and glass support. Decreasing the RH will cause deterioration by eventually leading to the flaking of the binder and dehydration of the glass. Much like RH, temperatures must be precise and closely monitored for the correct storage of photographic glass plates. A safe temperature to keep glass plates is ; however, a fluctuation of +/- would not cause a significant impact, making the safest range . Low temperatures help offset a plate's inherent fragility by delaying the chemical reactions that cause decay of the plate's structure. Increasing temperatures or frequent large fluctuations will speed up the decay process.

Theft and dissociation
Although theft and dissociation can occur separately, it is not uncommon for the two to go hand-in-hand. Dissociation typically results over time from an ordered system falling apart due to a lack of routine maintenance or from a catastrophic event leading to data loss. If an inventory is not regularly updated, it becomes easy for one or several glass plates to go missing. Regular inventory maintenance can also serve as a deterrent against theft. Ensuring glass plates are locked and stored where only designated museum staff can access them is the best preventative measure against theft.

Water and pests
Deterioration in glass is often directly related to moisture, from humidity or direct contact. Enough moisture over time will cause the chemical composition of the image to change. In the 1990s, the United States National Archives began to notice that some glass plates in its collection had developed, on the non-image-bearing side, a crystalline deposit known as "sick glass". If a glass plate has been subject to large amounts of moisture, mold could grow on the plate's emulsion. Mold will eat away at the emulsion and attract other living pests. Insects will be more likely to appear in areas already compromised by inappropriate storage conditions. Insects will produce waste materials that, like dust, build up over time, causing further damage. Pests eat glass plate storage materials such as paper envelopes or cardboard boxes.

Light
Photographic plates and all photographic materials are susceptible to light. Extensive and ongoing exposure to light can cause significant and irreversible deterioration. Sunlight is the type of light most damaging to photographic plates. However, indoor lighting and other forms of UV lighting all pose a threat to photographic plates, causing fading and yellowing. Light is especially threatening to color photographic materials as it causes accelerated fading of the color dyes.
Exposure to light could lead to deterioration and discoloration of the pigments present on the plate.

Pollutants and fire
Air pollution can threaten photographic plates through poor air quality and dirt that can damage the materials. This can include dust and gaseous pollution in an urban environment. Air pollution can cause fading of photographic materials. If a plate is subject to poor air quality, debris removal must be done with care using a cotton cloth; if done incorrectly, the glass might be subject to abrasions. Other sources of air pollution include "photocopying machines, construction materials, paint fumes, cardboard, carpets, and janitorial supplies". Fire can cause severe damage to photographic glass plates. The heat produced by a fire can accelerate the chemical decomposition of the plate's emulsion. Pollutants in the air produced by the fire, such as smoke and debris, can also attach to or rest upon plates. The same care used to remove dust and other air pollutants should be taken when removing debris in the aftermath of a fire.

Material and chemical
The glass composition of photographic plates can be a factor of deterioration. Due to poor quality or an inherent fragility, "sick glass" can occur. Environmental conditions are usually linked to the increase or presence of this glass corrosion. The effects of "sick glass" can be weeping and crizzling, caused by excessive alkali and a lack of stabilizers. Weeping involves droplets forming on the glass that appear as tiny crystals. This deterioration is especially threatening to cased photographs because the cover glass could be corroded and damage the image underneath. Corrosion of the glass plate support can also damage the image layer by causing the lifting of the binder and varnish layers. The other chemical components of glass-plate negatives can also be threatening agents of deterioration. For instance, the silver image layer could undergo oxidative deterioration, leading to fading and discoloration. Additionally, the collodion binder itself is made up of cellulose nitrate, which is known to be a highly flammable compound. Most of these agents of deterioration result from poor chemical processing or inherent fragility, but poor environmental and storage conditions usually accelerate them.

Physical
Glass plates are relatively stable dimensionally but also very fragile and brittle. Glass is highly susceptible to breakage, cracks, and fractures. This can be caused by human error, including dropping or bumping the glass plate, or it can be caused by failure of storage equipment, housing, shelves, etc., which may lead to an impact against the glass. Different breakage and stress states affect the image layer and binder differently. Types of breakage:
Impact break: Point of impact and surrounding radiating arcs.
Cracks: Running perpendicular to applied stress.
Blind cracks: Breaks that do not carry through the whole thickness of the glass.

Preventive conservation

Environment
Environmental controls are a crucial part of the preservation of photographic glass plates. Relative humidity (RH), temperature, and light play a significant role in keeping the multiple materials in photographic glass plates maintained. The following environmental regulatory measures are taken for their preservation:
For photographic glass plates, the temperature is kept cool at approximately . RH levels are generally kept at 30–40%. If RH drops below 30%, the image binder of the glass plate will dehydrate.
If RH rises above 40%, the glass will begin hydrating. For cased glass plate photographs, such as ambrotypes, RH levels are kept at 40–50% and the temperature between . These levels differ because the case is at risk of embrittlement and the brass mat and glass are at risk of deterioration. Though cold storage is safe for photographic plates, given proper acclimation periods back to room temperature, frozen storage, unlike for film photography, is not recommended. Fluctuations, called "cycling", in RH and temperature should be avoided. Environmental fluctuations can contribute to mold growth, chemical deterioration including discoloration and yellowing, degradation of the silver halide crystals resulting in silver mirroring, and deterioration of the emulsion. Acceptable fluctuations include +/- 2 degrees of temperature and +/- 3% of relative humidity. Photographic glass plates, especially negatives, are preserved in dark enclosures due to their risk of deterioration when exposed to light, particularly UV and sunlight. If displayed, spot-lighting and uneven heating of the photographic plate are avoided. Light levels are kept below 50 lux.

Handling
Photographic glass plates are handled carefully to avoid physical or chemical deterioration and damage – the following measures aid in their preservation through proper handling:
To prevent fingerprints, non-vinyl plastic gloves are worn when handling – either latex or nitrile. Cotton gloves are not recommended by conservators due to the possibility of glass easily slipping from the cotton material. Cotton gloves are also susceptible to snagging on the emulsion, if it is flaking, or on the edges of the glass support.
When handled, a glass plate is held not by one edge or corner but by two opposite edges, and always with two hands.
When placed on a flat surface, the glass plate is always laid with the emulsion side up.
Glass plates are never stacked, nor is any pressure placed upon them.
The sleeve or enclosure is labeled before the glass plate is placed inside.
Since glass plates are fragile and brittle, duplicates are created if a glass plate is to be used often for printing. This helps to minimize the threat of breakage of the original.

Storage
Storage of photographic glass plates is important to their preservation. Museums and other cultural institutions take the following measures to ensure their glass plates are properly housed:
Photographic glass plates are housed in four-flap enclosures, emulsion side up. These four-flap buffered enclosures prevent a glass plate from being pulled in and out, which would cause further deterioration of the image from flaking and abrasions. The four-flap enclosure allows the glass plate to be accessed by unfolding the flaps without pulling the plate across any surface or material.
The photographic glass plates are stored vertically on the long side of the plate in storage boxes whose material is free of acid, lignin, polyvinyl chloride (PVC), dye, sulfur, and alum. Any paper storage material used should have a pH between 7 and 8.5.
Glass plates should not be packed tightly and should not rub against each other. Each plate should be separated with stiffeners made of acid-free folder stock or cardboard to support the plate.
Photographic glass plates stored in a partially filled box will have spacers, most likely acid-free corrugated paperboard, inserted to prevent significant bumping or moving.
Glass plates larger than 10" x 12" are stored in legal-size boxes that are partially filled so that the box does not become too heavy.
The extra space in the box is filled with board or spacers to avoid shifting when jostled.
Storage boxes of photographic glass plates are stored on a lower shelf, specifically below four feet. This helps prevent someone from lifting them down from above their head and dropping them from a great height.
Each storage box of photographic glass plates should be labeled with words such as "Heavy", "Handle with Care", and "Caution: Contains Glass Negatives", so all with access to the collection know to be extra careful when lifting the box off a shelf.
When there are concerns about the reactivity of housing materials, the Photographic Activity Test (PAT) by the Image Permanence Institute should be consulted. The use of the PAT is a standard in the preservation of photographic plates. The PAT "explores the possibility of chemical interactions between photographs and a given material after prolonged contact".
It is considered best practice to use steel shelving to store photographic plates. It is not recommended to use wood cabinets or crates. Wood shelves are susceptible to termites and are more prone to trigger chemical reactions with the plates. Wood shelves tend to possess finishes, paints, and glues that cause off-gassing. Acetic acid and formaldehyde build-up are also more likely to occur. Lastly, given the weight of photographic plates, it is doubtful that relatively weak wood shelving can hold the weight of the collection.

Storage of broken photographic plates
Broken or cracked glass plates are stored specially, separate from other photographic plates, and in the following ways:
Broken glass plates are stored flat, unlike intact plates, which are stored vertically. Stacking broken plates only five plates high is recommended due to the plates' weight. This will prevent further breakage and damage.
Photographic glass plates with cracked or damaged binder are stored on sink mats. Those with minor flaking are still housed in the four-flap enclosure, labeled appropriately to describe the damage. Glass plates with extensive flaking are stored on sink mats horizontally and placed in a storage box with a label that reads "Caution: Broken glass. Carry Horizontally."
Broken glass plate shards are "sandwiched" in between two pieces of buffered board and placed inside the four-flap enclosure. AIC advises that form-fit support should be created for broken glass shards by cutting out two pieces of 4-ply mat board that fit each shard. These pieces are then glued to each side of the buffered board with either wheat starch paste or 3M #415 double-stick tape, placing each shard in between the form-fit support to help prevent further damage. These broken shards are then placed in individual four-flap enclosures and stored flat, with appropriate labeling that warns of their broken condition.
Another method of storing broken shards is to place them on sink mats. If this method is used, each piece is separated with paperboard spacers to prevent the pieces from touching. These paperboard spacers are sometimes attached with adhesives to the mat so that physical damage does not occur to the shards. They are stored horizontally and placed in a storage box with a label that reads "Caution: Broken glass. Carry Horizontally."

Maintenance/housekeeping
Maintenance/housekeeping of photographic plates requires minimal intervention:
Light cleaning of plates is carried out occasionally by removing dust with a soft brush for their preservation.
To dust the emulsion side, it is best to use an unused paint brush and, very gently, brush from the center to the outside of the plate.
To clean the underside of the plate (the non-emulsion side), dip a cotton ball or cotton round into a cup of distilled water, and work from the middle of the plate to the outside. Water on the emulsion side will wash the emulsion away, causing the image to be lost forever; care must be taken to ensure this cleaning treatment is used only on the underside of the glass support and never on the emulsion side of the plate.
Conservators should also keep the surrounding collections area clean of dust, pests, and any other debris that may attract pests. Food and drink should not be permitted in the storage area as they attract pests.
To prevent deterioration from air pollutants, it is helpful to have the air entering the storage area filtered and purified, windows closed, obsolete/outdated media minimized, and enclosures and cabinets in use to protect collection objects.

Conservation treatment
Many broken or cracked glass plates may benefit from conservation treatment. There are various actions taken in reassembling and restoring these plates using the following materials and methods:

Handling
Conservators tend to wear neoprene gloves to help protect the emulsion from fingerprints that will cause deterioration over time. They avoid handling glass fragments to help prevent further breaking of the glass. A box padded with foamed polyethylene and lined with tight-weave tissue or sintered Teflon is preferred by conservators for storing fragments, as it helps prevent further breaking or cracking.

Adhesives
Conservators use various adhesives; each adhesive type has benefits and disadvantages for different situations.
Paraloid B-72 – A solution of 50–70% B-72 in a solvent with added silica is used to reassemble glass plate fragments. It takes 1–2 hours to dry. One issue with this adhesive is that it creates "snowflakes" in between pieces, making an invisible reassembly impossible.
Epoxy resin – This adhesive is very strong and has minimal shrinkage. It yellows over time, however, and is not advisable for glass plates with a collodion binder, because the solvents needed to reverse the repair could damage the collodion.
Cyanoacrylates – This adhesive bonds firmly with alkaline surfaces but is very brittle and is only used for temporary repairs.
Pressure-sensitive tape – This adhesive type is ubiquitous, easy to use, and completely removable but only provides minimal support.
Sticky wax – As the pieces are assembled, sticky wax, such as that used for lost-wax casting in jewelry making, is handy for holding the shards in place.

Backing material
Silpat sheet – This is made of silicone and fibreglass; it is textured and provides air pockets that prevent damage during capillary application, and it does not traumatize the emulsion side of the glass.
Secondary support – This method is used for glass plates broken into many pieces or over in size. A second piece of glass is used, with silicone inserted as a barrier layer.

Application
Wicking – This is used by conservators to apply the adhesive to the glass with a wooden or glass applicator. A capillary tube or bottle puts the appropriate amount of glue on the glass shard without excess.
Direct application – When repairing a broken plate on an inclined plane, conservators apply the adhesive to the fracture interface. The shard is placed directly next to its corresponding piece on the inclined plane.
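The environmental guidelines given earlier (RH held at 30–40% with fluctuations of no more than +/- 3%, temperature fluctuations of no more than +/- 2 degrees, and light kept below 50 lux) are simple enough to monitor automatically. What follows is a minimal illustrative sketch in Python; the function name, data format, and reporting style are assumptions made for this example, not part of any cited preservation standard.

# Minimal sketch: flag environmental readings that fall outside the
# ranges quoted in the Environment section above. Constants and the
# data format are illustrative assumptions, not a cited standard.

RH_RANGE = (30.0, 40.0)   # percent, for uncased glass plates
MAX_RH_SWING = 3.0        # percent, allowed fluctuation between readings
MAX_TEMP_SWING = 2.0      # degrees, allowed fluctuation between readings
MAX_LUX = 50.0            # ceiling on light level

def check_readings(readings):
    """readings: time-ordered list of (temperature, rh_percent, lux) tuples.
    Returns a list of human-readable warnings."""
    warnings = []
    for i, (temp, rh, lux) in enumerate(readings):
        if not RH_RANGE[0] <= rh <= RH_RANGE[1]:
            warnings.append(f"reading {i}: RH {rh}% outside the 30-40% range")
        if lux > MAX_LUX:
            warnings.append(f"reading {i}: light {lux} lux exceeds 50 lux")
        if i > 0:
            prev_temp, prev_rh, _ = readings[i - 1]
            if abs(temp - prev_temp) > MAX_TEMP_SWING:
                warnings.append(f"reading {i}: temperature swing exceeds 2 degrees")
            if abs(rh - prev_rh) > MAX_RH_SWING:
                warnings.append(f"reading {i}: RH swing exceeds 3%")
    return warnings

# Example: one in-range reading followed by a humid, over-lit one.
print(check_readings([(18.0, 35.0, 20.0), (18.5, 45.0, 60.0)]))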
Repair methods and techniques
Photoshop software assembly – Conservators virtually assemble broken glass shards in Photoshop after scanning or photographing all the pieces. Once all the images are in Photoshop, conservators construct a copy of the glass plate by moving and rotating the parts until the plate is fully assembled. This allows conservators to understand how the glass plate should be reconstructed while avoiding further damage, and it permits research and study of the plate without the risk of harm from continual handling.
Inclined assembly – This method involves applying an adhesive to the glass shard interfaces and assembling them on an inclined surface covered with Mylar or Silpat. The glass shards are reassembled either by direct application, which involves applying the adhesive directly to the shard interface and attaching it to its corresponding piece, or by wicking.
Vertical assembly – This method is used because the glass shards fit back together most accurately vertically. This also helps to protect the side of the binder layer. The adhesive is not applied until all the pieces are assembled, enabling conservators to recognize any misalignment before the adhesive is applied. As the last step, the adhesive is applied through wicking.
Light-line – A light line is often used to check that all pieces are aligned; any misalignment shows as a crooked line. Once the pieces are aligned, the light line appears straight again.

Projects
The vertical assembly method, along with a light line, was used in The Glass Plate Negative Project at the Heritage Conservation Centre, as outlined in the case study. This study shows how conservators also deal with other conservation issues, including accretions and residue. For instance, while the plates were considered structurally stable, they may have needed surface cleaning. This was completed by using swabs dampened with water/ethanol solutions to reduce stains or remove any tape residue. Pressure-sensitive labels were removed mechanically. Conservators used Whatman lens tissues to wipe off any other residue marks.

References

External links
Conservation of Glass Plate Negatives at the Smithsonian
Case Study: Repair of a Broken Glass Plate Negative
The Conservation of Glass Plates: Student Placement
The Preservation of Glass Plate Negatives
A Method of Rehousing Glass Plate Negatives
Gaylord Archival – rehousing resources
Guidelines for Exhibition Light Levels for Photographic Materials
Digitizing glass plate negatives

Conservation and restoration of cultural heritage
Monochrome photography
Glass
Conservation and restoration of photographic plates
[ "Physics", "Chemistry" ]
5,430
[ "Homogeneous chemical mixtures", "Amorphous solids", "Unsolved problems in physics", "Glass" ]
56,528,165
https://en.wikipedia.org/wiki/Oxygen%20rebound%20mechanism
In biochemistry, the oxygen rebound mechanism is the pathway for hydroxylation of organic compounds by iron-containing oxygenases. Many enzymes effect the hydroxylation of hydrocarbons as a means for biosynthesis, detoxification, gene regulation, and other functions. These enzymes often utilize Fe-O centers that convert C-H bonds into C-OH groups. The oxygen rebound mechanism starts with abstraction of H from the hydrocarbon, giving an organic radical and an iron hydroxide. In the rebound step, the organic radical attacks the Fe-OH center to give an alcohol group, which is bound to Fe as a ligand. Dissociation of the alcohol from the metal allows the cycle to start anew. This mechanistic scenario is an alternative to the direct insertion of an O center into a C-H bond. The pathway is an example of C-H activation. Three main classes of these enzymes are cytochrome P450, alpha-ketoglutarate-dependent hydroxylases, and nonheme-diiron hydroxylases. References Organic redox reactions Enzymes Oxygenases
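Schematically, the abstraction and rebound steps described above can be written as a two-step sequence (a generic sketch with oxidation states omitted, not the mechanism of any one enzyme):

\[
\mathrm{Fe{=}O} \;+\; \mathrm{H{-}CR_3} \;\longrightarrow\; \mathrm{Fe{-}OH} \;+\; {}^{\bullet}\mathrm{CR_3}
\]
\[
\mathrm{Fe{-}OH} \;+\; {}^{\bullet}\mathrm{CR_3} \;\longrightarrow\; \mathrm{Fe}(\mathrm{HO{-}CR_3}) \;\longrightarrow\; \mathrm{Fe} \;+\; \mathrm{HO{-}CR_3}
\]

The first line is the hydrogen atom abstraction; the second is the rebound of the carbon radical onto the iron-bound hydroxide, followed by dissociation of the alcohol.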
Oxygen rebound mechanism
[ "Chemistry" ]
232
[ "Organic redox reactions", "Organic reactions" ]
56,528,765
https://en.wikipedia.org/wiki/Fluoroanion
In chemistry, a fluoroanion or fluorometallate anion is a polyatomic anion that contains one or more fluorine atoms. The ions, and the salts formed from them, are also known as complex fluorides. They can occur in salts, or in solution, but seldom as pure acids. Fluoroanions often contain elements in higher oxidation states. Most can be considered as fluorometallates, which are a subclass of halometallates. Anions that contain both fluorine and oxygen can be called "oxofluoroanions" (or rarely "fluorooxoanions"). The following is a list of fluoroanions in atomic number order.
trifluoroberyllate
tetrafluoroberyllate
tetrafluoroborate
magnesium tetrafluoride
trifluoroaluminate
tetrafluoroaluminate
pentafluoroaluminate
hexafluoroaluminate
heptafluoroaluminate
hexafluorosilicate
hexafluorophosphate
sulfur trifluoride anion
pentafluorosulfate, aka pentafluorosulfite or sulfur pentafluoride ion
sulfur pentafluoride anion
tetrafluorochlorate
hexafluorotitanate
hexafluorovanadate(III)
hexafluorovanadate(IV)
hexafluorovanadate(V)
trifluoromanganate
hexafluoromanganate(III)
hexafluoromanganate(IV)
heptafluoromanganate(IV)
tetrafluoroferrate (1− and 2−)
hexafluoroferrate (4− and 3−)
tetrafluorocobaltate(II)
hexafluorocobaltate (III and IV)
heptafluorocobaltate(IV)
tetrafluoronickelate
hexafluoronickelate (II, III and IV)
hexafluorocuprate
tetrafluorozincate
hexafluorogallate
hexafluorogermanate
hexafluoroarsenate
tetrafluorobromate
hexafluorobromate
pentafluorozirconate
hexafluorozirconate
octafluorozirconate
hexafluoroniobate
heptafluoroniobate
octafluoromolybdate
tetrafluoropalladate
hexafluororhodate
hexafluororuthenate(IV)
hexafluororuthenate(V)
hexafluoroindate
hexafluorostannate
fluoroantimonate
hexafluoroiodate (1−)
octafluoroxenate
tetrafluorolanthanate
pentafluorocerate(IV)
hexafluorocerate(IV)
heptafluorocerate(IV)
octafluorocerate(IV)
pentafluorohafnate
hexafluorohafnate
heptafluorotantalate
octafluorotantalate
heptafluorotungstate
octafluorotungstate
octafluororhenate
hexafluoroplatinate
tetrafluoroaurate
hexafluoroaurate
hexafluorothallate(III)
tetrafluorobismuthate
hexafluorobismuthate
hexafluorothorate
hexafluorouranate(IV)
hexafluorouranate(V)
octafluorouranate(IV)
octafluorouranate(V)

References

Fluorine compounds
Anions
Double salts
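For orientation, the conventional formulas of a few of the most common entries above are (standard compositions, supplied here as examples):

\[
\mathrm{BF_4^-} \text{ (tetrafluoroborate)}, \quad
\mathrm{SiF_6^{2-}} \text{ (hexafluorosilicate)}, \quad
\mathrm{PF_6^-} \text{ (hexafluorophosphate)},
\]
\[
\mathrm{AsF_6^-} \text{ (hexafluoroarsenate)}, \quad
\mathrm{SbF_6^-} \text{ (hexafluoroantimonate)}, \quad
\mathrm{TiF_6^{2-}} \text{ (hexafluorotitanate)}.
\]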
Fluoroanion
[ "Physics", "Chemistry" ]
784
[ "Matter", "Anions", "Double salts", "Salts", "Ions" ]
74,972,478
https://en.wikipedia.org/wiki/Tricyclodecane
Tricyclodecane (TCD) is an organic compound with the formula C10H16. It is classed as a hydrocarbon. It has two main stereoisomers: the endo and exo forms. The exo form is used chiefly as a component of jet fuel, primarily because of its high energy density; it also has a low freezing point. Because of this, its properties have been studied extensively. It is often called tetrahydrodicyclopentadiene.

Reactions
Its reactions with other materials have been studied, as have various production methods. The two isomers can interconvert in the presence of aluminum chloride as catalyst, adsorbed on substrates such as silicon dioxide or zeolites, with the exo isomer favored as the major product.

References

Further reading

Aviation fuels
Cycloalkanes
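The isomerization just described can be summarized as a Lewis acid-catalyzed equilibrium (a schematic summary of the statement above, not a full mechanism):

\[
\text{\textit{endo}-}\mathrm{C_{10}H_{16}} \;\overset{\mathrm{AlCl_3}}{\rightleftharpoons}\; \text{\textit{exo}-}\mathrm{C_{10}H_{16}}
\]

with the equilibrium lying toward the exo isomer.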
Tricyclodecane
[ "Engineering" ]
181
[ "Aviation fuels", "Aerospace engineering" ]
74,975,314
https://en.wikipedia.org/wiki/Outline%20of%20fluid%20dynamics
The following outline is provided as an overview of and topical guide to fluid dynamics.

What type of thing is fluid dynamics?
Fluid dynamics can be described as all of the following:
An academic discipline – one with academic departments, curricula and degrees; national and international societies; and specialized journals.
A scientific field (a branch of science) – a widely recognized category of specialized expertise within science, typically embodying its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods.
A physical science – one that studies non-living systems.
A branch of physics – the study of matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force.
A branch of mechanics – the area of mathematics and physics concerned with the relationships between force, matter, and motion among physical objects.
A branch of continuum mechanics – a subject that models matter without using the information that it is made out of atoms; that is, it models matter from a macroscopic viewpoint rather than a microscopic one.
A subdiscipline of fluid mechanics – the branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them.
A biological science – a field that studies the role of physical processes in living organisms. For an example of a biological area involving fluid dynamics, see hemodynamics.

Branches of fluid dynamics

History of fluid dynamics
History of fluid dynamics

Mathematical equations and concepts

Types of fluid flow

Fluid properties

Fluid phenomena

Concepts in aerodynamics

Fluid dynamics research

Fluid dynamics journals

Methods used in fluid dynamics research

Tools used in fluid dynamics research

Applications of fluid dynamics

Fluid dynamics organizations
Von Karman Institute for Fluid Dynamics
Max Planck Institute for Dynamics and Self-Organization

Fluid dynamics publications

Books on fluid dynamics
An Album of Fluid Motion (1982)

Journals pertaining to fluid dynamics
Annual Review of Fluid Mechanics
Journal of Fluid Mechanics
Physics of Fluids
Physical Review Fluids
Experiments in Fluids
European Journal of Mechanics B: Fluids
Theoretical and Computational Fluid Dynamics
Computers and Fluids
International Journal for Numerical Methods in Fluids
Flow, Turbulence and Combustion

Persons influential in fluid dynamics
Contributors to the field of fluid dynamics come from a wide array of fields, and in addition to their other titles, each is also a fluid dynamicist. Following is a list of notable fluid dynamicists:

Miscellaneous concepts
These topics need placement in the sections above, or in new sections.

References

External links

Fluid dynamics
Outline of fluid dynamics
[ "Chemistry", "Engineering" ]
539
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
73,605,210
https://en.wikipedia.org/wiki/Motzkin%E2%80%93Taussky%20theorem
The Motzkin–Taussky theorem is a result from operator and matrix theory about the representation of a sum of two bounded linear operators (resp. matrices). The theorem was proven by Theodore Motzkin and Olga Taussky-Todd. The theorem is used in perturbation theory, where e.g. operators of the form are examined.

Statement
Let be a finite-dimensional complex vector space. Furthermore, let be such that all linear combinations are diagonalizable for all . Then all eigenvalues of are of the form (i.e. they are linear in and ) and are independent of the choice of . Here stands for an eigenvalue of .

Comments
Motzkin and Taussky call the above property of linearity of the eigenvalues property L.

Bibliography

Notes

Mathematical theorems
Linear algebra
Perturbation theory
Linear operators
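One standard way to render the statement, with the notation (A and B for the operators, alpha and beta for the scalars) supplied here for readability rather than taken from the text:

\[
\text{If } \alpha A + \beta B \text{ is diagonalizable for all } \alpha, \beta \in \mathbb{C},
\text{ then } \lambda_i(\alpha A + \beta B) = \alpha\,\lambda_i + \beta\,\mu_i, \quad i = 1, \dots, n,
\]

where \(\lambda_1, \dots, \lambda_n\) are the eigenvalues of \(A\) and \(\mu_1, \dots, \mu_n\) those of \(B\), in a fixed pairing that does not depend on \(\alpha\) and \(\beta\).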
Motzkin–Taussky theorem
[ "Physics", "Mathematics" ]
183
[ "Functions and mappings", "Mathematical theorems", "Algebra", "Mathematical objects", "Quantum mechanics", "Linear operators", "Mathematical relations", "nan", "Linear algebra", "Mathematical problems", "Perturbation theory" ]
73,613,536
https://en.wikipedia.org/wiki/Ross%20Baldick
Ross Baldick is an American professor emeritus of electrical and computer engineering at the University of Texas at Austin. He is a fellow of the Institute of Electrical and Electronics Engineers (IEEE) through its Power and Energy Society. He is the chairman of the System Economics Sub-Committee of the IEEE Power Engineering Society and an associate editor of IEEE Transactions on Power Systems. His research interests are the application of optimization and economic theory to electric power system operations, and the public policy and technical issues related to electric transmission under deregulation.

Education and career
He received his bachelor of science in mathematics and physics and bachelor of engineering in electrical engineering from the University of Sydney, Australia, in 1983 and 1985, respectively. He received his Master of Science and Doctor of Philosophy in electrical engineering and computer sciences from the University of California, Berkeley, in 1988 and 1990, respectively. From 1991 to 1992, after completing his doctoral studies, he worked as a post-doctoral fellow at the Lawrence Berkeley National Laboratory. From 1992 to 1993, he was an assistant professor at Worcester Polytechnic Institute, Worcester, MA. In 1993, Baldick joined the University of Texas at Austin faculty, where he remained until his retirement in 2021.

Research
Baldick's research interests in electric power span multiple areas, and he has contributed to over one hundred peer-reviewed journal articles. His research focuses on optimization and economic theory applied to electric power system operations and on the public policy and technical issues associated with electric transmission under deregulation. He is the author of the textbook "Applied Optimization: Formulation and Algorithms for Engineering Systems."

Honors and awards
In 2008, Baldick was named an IEEE Fellow for his contributions to analyzing and optimizing electric power systems. Baldick received the 2014 IEEE Power and Energy Society Outstanding Engineering Educator Award.

Selected publications

References

Living people
21st-century American academics
Electrical engineering
University of Sydney alumni
University of California alumni
University of California, Berkeley alumni
Fellows of the IEEE
Electrical engineering academics
University of Texas faculty
Year of birth missing (living people)
Ross Baldick
[ "Engineering" ]
398
[ "Electrical engineering" ]
64,926,134
https://en.wikipedia.org/wiki/Quadric%20%28algebraic%20geometry%29
In mathematics, a quadric or quadric hypersurface is the subspace of N-dimensional space defined by a polynomial equation of degree 2 over a field. Quadrics are fundamental examples in algebraic geometry. The theory is simplified by working in projective space rather than affine space. An example is the quadric surface in projective space over the complex numbers C. A quadric has a natural action of the orthogonal group, and so the study of quadrics can be considered as a descendant of Euclidean geometry. Many properties of quadrics hold more generally for projective homogeneous varieties. Another generalization of quadrics is provided by Fano varieties. By definition, a quadric X of dimension n over a field k is the subspace of defined by q = 0, where q is a nonzero homogeneous polynomial of degree 2 over k in variables . (A homogeneous polynomial is also called a form, and so q may be called a quadratic form.) If q is the product of two linear forms, then X is the union of two hyperplanes. It is common to assume that and q is irreducible, which excludes that special case. Here algebraic varieties over a field k are considered as a special class of schemes over k. When k is algebraically closed, one can also think of a projective variety in a more elementary way, as a subset of defined by homogeneous polynomial equations with coefficients in k. If q can be written (after some linear change of coordinates) as a polynomial in a proper subset of the variables, then X is the projective cone over a lower-dimensional quadric. It is reasonable to focus attention on the case where X is not a cone. For k of characteristic not 2, X is not a cone if and only if X is smooth over k. When k has characteristic not 2, smoothness of a quadric is also equivalent to the Hessian matrix of q having nonzero determinant, or to the associated bilinear form b(x,y) = q(x+y) – q(x) – q(y) being nondegenerate. In general, for k of characteristic not 2, the rank of a quadric means the rank of the Hessian matrix. A quadric of rank r is an iterated cone over a smooth quadric of dimension r − 2. It is a fundamental result that a smooth quadric over a field k is rational over k if and only if X has a k-rational point. That is, if there is a solution of the equation q = 0 of the form with in k, not all zero (hence corresponding to a point in projective space), then there is a one-to-one correspondence defined by rational functions over k between minus a lower-dimensional subset and X minus a lower-dimensional subset. For example, if k is infinite, it follows that if X has one k-rational point then it has infinitely many. This equivalence is proved by stereographic projection. In particular, every quadric over an algebraically closed field is rational. A quadric over a field k is called isotropic if it has a k-rational point. An example of an anisotropic quadric is the quadric in projective space over the real numbers R. Linear subspaces of quadrics A central part of the geometry of quadrics is the study of the linear spaces that they contain. (In the context of projective geometry, a linear subspace of is isomorphic to for some .) A key point is that every linear space contained in a smooth quadric has dimension at most half the dimension of the quadric. Moreover, when k is algebraically closed, this is an optimal bound, meaning that every smooth quadric of dimension n over k contains a linear subspace of dimension . 
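A concrete example, in coordinates chosen here for illustration: over the complex numbers, the smooth quadric surface

\[
X = \{\, [x_0 : x_1 : x_2 : x_3] \in \mathbf{P}^3 : x_0 x_3 - x_1 x_2 = 0 \,\}
\]

has dimension \(n = 2\); it contains lines, such as \(x_0 = x_1 = 0\), of dimension \(1 = n/2\), which attains the bound just stated, and it contains no plane.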
Over any field k, a smooth quadric of dimension n is called split if it contains a linear space of dimension over k. Thus every smooth quadric over an algebraically closed field is split. If a quadric X over a field k is split, then it can be written (after a linear change of coordinates) as if X has dimension 2m − 1, or if X has dimension 2m. In particular, over an algebraically closed field, there is only one smooth quadric of each dimension, up to isomorphism. For many applications, it is important to describe the space Y of all linear subspaces of maximal dimension in a given smooth quadric X. (For clarity, assume that X is split over k.) A striking phenomenon is that Y is connected if X has odd dimension, whereas it has two connected components if X has even dimension. That is, there are two different "types" of maximal linear spaces in X when X has even dimension. The two families can be described by: for a smooth quadric X of dimension 2m, fix one m-plane Q contained in X. Then the two types of m-planes P contained in X are distinguished by whether the dimension of the intersection is even or odd. (The dimension of the empty set is taken to be −1 here.) Low-dimensional quadrics Let X be a split quadric over a field k. (In particular, X can be any smooth quadric over an algebraically closed field.) In low dimensions, X and the linear spaces it contains can be described as follows. A quadric curve in is called a conic. A split conic over k is isomorphic to the projective line over k, embedded in by the 2nd Veronese embedding. (For example, ellipses, parabolas and hyperbolas are different kinds of conics in the affine plane over R, but their closures in the projective plane are all isomorphic to over R.) A split quadric surface X is isomorphic to , embedded in by the Segre embedding. The space of lines in the quadric surface X has two connected components, each isomorphic to . A split quadric 3-fold X can be viewed as an isotropic Grassmannian for the symplectic group Sp(4,k). (This is related to the exceptional isomorphism of linear algebraic groups between SO(5,k) and .) Namely, given a 4-dimensional vector space V with a symplectic form, the quadric 3-fold X can be identified with the space LGr(2,4) of 2-planes in V on which the form restricts to zero. Furthermore, the space of lines in the quadric 3-fold X is isomorphic to . A split quadric 4-fold X can be viewed as the Grassmannian Gr(2,4), the space of 2-planes in a 4-dimensional vector space (or equivalently, of lines in ). (This is related to the exceptional isomorphism of linear algebraic groups between SO(6,k) and .) The space of 2-planes in the quadric 4-fold X has two connected components, each isomorphic to . The space of 2-planes in a split quadric 5-fold is isomorphic to a split quadric 6-fold. Likewise, both components of the space of 3-planes in a split quadric 6-fold are isomorphic to a split quadric 6-fold. (This is related to the phenomenon of triality for the group Spin(8).) As these examples suggest, the space of m-planes in a split quadric of dimension 2m always has two connected components, each isomorphic to the isotropic Grassmannian of (m − 1)-planes in a split quadric of dimension 2m − 1. Any reflection in the orthogonal group maps one component isomorphically to the other. The Bruhat decomposition A smooth quadric over a field k is a projective homogeneous variety for the orthogonal group (and for the special orthogonal group), viewed as linear algebraic groups over k. 
Like any projective homogeneous variety for a split reductive group, a split quadric X has an algebraic cell decomposition, known as the Bruhat decomposition. (In particular, this applies to every smooth quadric over an algebraically closed field.) That is, X can be written as a finite union of disjoint subsets that are isomorphic to affine spaces over k of various dimensions. (For projective homogeneous varieties, the cells are called Schubert cells, and their closures are called Schubert varieties.) Cellular varieties are very special among all algebraic varieties. For example, a cellular variety is rational, and (for k = C) the Hodge theory of a smooth projective cellular variety is trivial, in the sense that for . For a cellular variety, the Chow group of algebraic cycles on X is the free abelian group on the set of cells, as is the integral homology of X (if k = C). A split quadric X of dimension n has only one cell of each dimension r, except in the middle dimension of an even-dimensional quadric, where there are two cells. The corresponding cell closures (Schubert varieties) are: For , a linear space contained in X. For r = n/2, both Schubert varieties are linear spaces contained in X, one from each of the two families of middle-dimensional linear spaces (as described above). For , the Schubert variety of dimension r is the intersection of X with a linear space of dimension r + 1 in ; so it is an r-dimensional quadric. It is the iterated cone over a smooth quadric of dimension 2r − n. Using the Bruhat decomposition, it is straightforward to compute the Chow ring of a split quadric of dimension n over a field, as follows. When the base field is the complex numbers, this is also the integral cohomology ring of a smooth quadric, with mapping isomorphically to . (The cohomology in odd degrees is zero.) For n = 2m − 1, , where |h| = 1 and |l| = m. For n = 2m, , where |h| = 1 and |l| = m, and a is 0 for m odd and 1 for m even. Here h is the class of a hyperplane section and l is the class of a maximal linear subspace of X. (For n = 2m, the class of the other type of maximal linear subspace is .) This calculation shows the importance of the linear subspaces of a quadric: the Chow ring of all algebraic cycles on X is generated by the "obvious" element h (pulled back from the class of a hyperplane in ) together with the class of a maximal linear subspace of X. Isotropic Grassmannians and the projective pure spinor variety The space of r-planes in a smooth n-dimensional quadric (like the quadric itself) is a projective homogeneous variety, known as the isotropic Grassmannian or orthogonal Grassmannian OGr(r + 1, n + 2). (The numbering refers to the dimensions of the corresponding vector spaces. In the case of middle-dimensional linear subspaces of a quadric of even dimension 2m, one writes for one of the two connected components.) As a result, the isotropic Grassmannians of a split quadric over a field also have algebraic cell decompositions. The isotropic Grassmannian W = OGr(m,2m + 1) of (m − 1)-planes in a smooth quadric of dimension 2m − 1 may also be viewed as the variety of Projective pure spinors, or simple spinor variety, of dimension m(m + 1)/2. (Another description of the pure spinor variety is as .) To explain the name: the smallest SO(2m + 1)-equivariant projective embedding of W lands in projective space of dimension . 
The action of SO(2m + 1) on this projective space does not come from a linear representation of SO(2m+1) over k, but rather from a representation of its simply connected double cover, the spin group Spin(2m + 1) over k. This is called the spin representation of Spin(2m + 1), of dimension . Over the complex numbers, the isotropic Grassmannian OGr(r + 1, n + 2) of r-planes in an n-dimensional quadric X is a homogeneous space for the complex algebraic group , and also for its maximal compact subgroup, the compact Lie group SO(n + 2). From the latter point of view, this isotropic Grassmannian is where U(r+1) is the unitary group. For r = 0, the isotropic Grassmannian is the quadric itself, which can therefore be viewed as For example, the complex projectivized pure spinor variety OGr(m, 2m + 1) can be viewed as SO(2m + 1)/U(m), and also as SO(2m+2)/U(m+1). These descriptions can be used to compute the cohomology ring (or equivalently the Chow ring) of the spinor variety: where the Chern classes of the natural rank-m vector bundle are equal to . Here is understood to mean 0 for j > m. Spinor bundles on quadrics The spinor bundles play a special role among all vector bundles on a quadric, analogous to the maximal linear subspaces among all subvarieties of a quadric. To describe these bundles, let X be a split quadric of dimension n over a field k. The special orthogonal group SO(n+2) over k acts on X, and therefore so does its double cover, the spin group G = Spin(n+2) over k. In these terms, X is a homogeneous space G/P, where P is a maximal parabolic subgroup of G. The semisimple part of P is the spin group Spin(n), and there is a standard way to extend the spin representations of Spin(n) to representations of P. (There are two spin representations for n = 2m, each of dimension , and one spin representation V for n = 2m − 1, of dimension .) Then the spinor bundles on the quadric X = G/P are defined as the G-equivariant vector bundles associated to these representations of P. So there are two spinor bundles of rank for n = 2m, and one spinor bundle S of rank for n = 2m − 1. For n even, any reflection in the orthogonal group switches the two spinor bundles on X. For example, the two spinor bundles on a quadric surface are the line bundles O(−1,0) and O(0,−1). The spinor bundle on a quadric 3-fold X is the natural rank-2 subbundle on X viewed as the isotropic Grassmannian of 2-planes in a 4-dimensional symplectic vector space. To indicate the significance of the spinor bundles: Mikhail Kapranov showed that the bounded derived category of coherent sheaves on a split quadric X over a field k has a full exceptional collection involving the spinor bundles, along with the "obvious" line bundles O(j) restricted from projective space: if n is even, and if n is odd. Concretely, this implies the split case of Richard Swan's calculation of the Grothendieck group of algebraic vector bundles on a smooth quadric; it is the free abelian group for n even, and for n odd. When k = C, the topological K-group (of continuous complex vector bundles on the quadric X) is given by the same formula, and is zero. Notes References Algebraic geometry Projective geometry Algebraic homogeneous spaces
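In standard coordinates, the split forms and the Chow ring presentations described above are commonly written as follows (a standard rendering, with the notation supplied here for readability):

\[
q = x_0 x_1 + x_2 x_3 + \cdots + x_{2m-2}\,x_{2m-1} + x_{2m}^2 \qquad (\dim X = 2m - 1),
\]
\[
q = x_0 x_1 + x_2 x_3 + \cdots + x_{2m}\,x_{2m+1} \qquad (\dim X = 2m),
\]

and, for the Chow ring of a split quadric \(X\) of dimension \(n\),

\[
CH^*(X) \cong \mathbf{Z}[h, l]/(h^{m} - 2l,\; l^2) \qquad (n = 2m - 1),
\]
\[
CH^*(X) \cong \mathbf{Z}[h, l]/(h^{m+1} - 2hl,\; l^2 - a\,h^{m} l) \qquad (n = 2m),
\]

with \(|h| = 1\), \(|l| = m\), and \(a = 0\) for \(m\) odd, \(a = 1\) for \(m\) even, matching the prose description of \(h\) as the hyperplane class and \(l\) as the class of a maximal linear subspace.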
Quadric (algebraic geometry)
[ "Mathematics" ]
3,279
[ "Fields of abstract algebra", "Algebraic geometry" ]
66,285,578
https://en.wikipedia.org/wiki/N-Desmethyltamoxifen
N-Desmethyltamoxifen (developmental code name ICI-55,548) is a major metabolite of tamoxifen, a selective estrogen receptor modulator (SERM). N-Desmethyltamoxifen is further metabolized into endoxifen (4-hydroxy-N-desmethyltamoxifen), which is thought to be the major active form of tamoxifen in the body. In one study, N-desmethyltamoxifen had an affinity for the estrogen receptor of 2.4% relative to estradiol. For comparison, tamoxifen, endoxifen, and afimoxifene (4-hydroxytamoxifen) had relative binding affinities of 2.8%, 181%, and 181%, respectively. References Amines Hormonal antineoplastic drugs Human drug metabolites Prodrugs Selective estrogen receptor modulators Triphenylethylenes
N-Desmethyltamoxifen
[ "Chemistry" ]
216
[ "Functional groups", "Prodrugs", "Human drug metabolites", "Amines", "Chemicals in medicine", "Bases (chemistry)" ]
66,294,087
https://en.wikipedia.org/wiki/Furry%27s%20theorem
In quantum electrodynamics, Furry's theorem states that if a Feynman diagram consists of a closed loop of fermion lines with an odd number of vertices, its contribution to the amplitude vanishes. As a corollary, a single photon cannot arise from the vacuum or be absorbed by it. The theorem was first derived by Wendell H. Furry in 1937, as a direct consequence of the conservation of energy and charge conjugation symmetry. Theory Quantum electrodynamics has a number of symmetries, one of them being the discrete symmetry of charge conjugation. This acts on fields through a unitary charge conjugation operator which anticommutes with the photon field as , while leaving the vacuum state invariant . Considering the simplest case of the correlation function of a single photon operator gives so this correlation function must vanish. For photon operators, this argument shows that under charge conjugation this picks up a factor of and thus vanishes when is odd. More generally, since the charge conjugation operator also anticommutes with the vector current , Furry's theorem states that the correlation function of any odd number of on-shell or off-shell photon fields and/or currents must vanish in quantum electrodynamics. Since the theorem holds at the non-perturbative level, it must also hold at each order in perturbation theory. At leading order this means that any fermion loop with an odd number of vertices must have a vanishing contribution to the amplitude. An explicit calculation of these diagrams reveals that this is because the diagram with a fermion going clockwise around the loop cancels with the second diagram where the fermion goes anticlockwise. The vanishing of the three vertex loop can also be seen as a consequence of the renormalizability of quantum electrodynamics since the bare Lagrangian does not have any counterterms involving three photons. Applications and limitations Furry's theorem allows for the simplification of a number of amplitude calculations in quantum electrodynamics. In particular, since the result also holds when photons are off-shell, all Feynman diagrams which have at least one internal fermion loops with an odd number of vertices have a vanishing contribution to the amplitude and can be ignored. Historically the theorem was important in showing that the scattering of photons by an external field, known as Delbrück scattering, does not proceed via a triangle diagram and must instead proceed through a box diagram. In the presence of a background charge density or a nonzero chemical potential, Furry's theorem is broken, although if both these vanish then it does hold at nonzero temperatures as well as at zero temperatures. It also does not apply in the presence of a strong background magnetic field where photon splitting interactions are allowed, a process that may be detected in astrophysical settings such as around neutron stars. The theorem also does not hold when Weyl fermions are involved in the loops rather than Dirac fermions, resulting in non-vanishing odd vertex number diagrams. In particular, the non-vanishing of the triangle diagram with Weyl fermions gives rise to the chiral anomaly, with the sum of these having to cancel for a quantum theory to be consistent. While the theorem has been formulated in quantum electrodynamics, a version of it holds more generally. 
For example, while the Standard Model is not charge conjugation invariant due to weak interactions, the fermion loop diagrams with an odd number of photons attached will still vanish since these are equivalent to a purely quantum electrodynamical diagram. Similarly, any diagram involving such loops as sub-diagrams will also vanish. It is however no longer true that all odd number photon diagrams need to vanish. For example, relaxing the requirement of charge conjugation and parity invariance of quantum electrodynamics, as occurs when weak interactions are included, allows for a three-photon vertex term. While this term does give rise to interactions, they only occur if two of the photons are virtual; searching for such interactions must be done indirectly, such as through bremsstrahlung experiments from electron-positron collisions. In non-Abelian Yang–Mills theories, Furry's theorem does not hold since these involve noncommuting color charges. For example, the quark triangle diagrams with three external gluons are proportional to two different generator traces and so they do not cancel. However, charge conjugation arguments can still be applied in limited cases such as to deduce that the triangle diagram for a color neutral spin boson vanishes. See also Landau–Yang theorem Ward–Takahashi identity Wick's theorem References Quantum electrodynamics Scattering theory Theorems in quantum mechanics
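The charge conjugation argument sketched in the Theory section can be written out explicitly (a standard rendering, with \(C\) denoting the charge conjugation operator and \(A_\mu\) the photon field):

\[
C A_\mu C^{-1} = -A_\mu, \qquad C\,|0\rangle = |0\rangle,
\]

so that, inserting \(C^{-1}C\) between the fields and using the invariance of the vacuum,

\[
\langle 0 |\, A_{\mu_1} \cdots A_{\mu_n} \,| 0 \rangle
= \langle 0 |\, (C A_{\mu_1} C^{-1}) \cdots (C A_{\mu_n} C^{-1}) \,| 0 \rangle
= (-1)^n \, \langle 0 |\, A_{\mu_1} \cdots A_{\mu_n} \,| 0 \rangle,
\]

which forces the correlation function to vanish whenever \(n\) is odd.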
Furry's theorem
[ "Physics", "Chemistry", "Mathematics" ]
975
[ "Theorems in quantum mechanics", "Scattering theory", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Scattering", "Physics theorems" ]
61,309,152
https://en.wikipedia.org/wiki/Proteolipid
A proteolipid is a protein covalently linked to lipid molecules, which can be fatty acids, isoprenoids or sterols. The process of such a linkage is known as protein lipidation, and falls into the wider category of acylation and post-translational modification. Proteolipids are abundant in brain tissue, and are also present in many other animal and plant tissues. They include ghrelin, a peptide hormone associated with feeding. Many proteolipids have bound fatty acid chains, which often provide an interface for interacting with biological membranes and act as lipid anchors that direct proteins to specific membrane zones. Proteolipids were discovered serendipitously in 1951 by Jordi Folch Pi and Marjorie Lees while extracting sulfatides from brain lipids. They are not to be confused with lipoproteins, a kind of spherical assembly made up of many molecules of lipids and some apolipoproteins.

Structure
Depending on the type of lipid attached to the protein, a proteolipid can contain myristoyl, palmitoyl, or prenyl groups. These groups each serve different functions and have different preferences as to which amino acid residue they attach to. The processes are respectively named myristoylation (usually at an N-terminal Gly), palmitoylation (to cysteine), and prenylation (also to cysteine). Despite the seemingly specific names, N-myristoylation and S-palmitoylation can also involve some other fatty acids, most commonly in plants and viral proteolipids. The article on lipid-anchored proteins has more information on these canonical classes.
Lipidated peptides are a type of peptide amphiphile that incorporate one or more alkyl/lipid chains attached to a peptide head group. As with peptide amphiphiles, they self-assemble depending on the hydrophilic/hydrophobic balance, as well as on interactions between the peptide units, which depend on the charge of the amino acid residues. Lipidated peptides combine the structural features of amphiphilic surfactants with the functions of bioactive peptides, and they are known to assemble into a variety of nanostructures.

Function and application
Due to the desirable properties of peptides, such as high receptor affinity and bioactivity and low toxicity, the use of peptides in therapeutics (i.e. as peptide therapeutics) has great potential, as shown by a fast-growing market with over 100 approved peptide-based drugs. The disadvantages are that peptides have low oral bioavailability and stability. Lipidation as a chemical modification tool in the development of therapeutic agents has proven useful in overcoming these issues, with four lipidated peptide drugs currently approved for use in humans, and various others in clinical trials. Two of the approved drugs are the long-acting anti-diabetic GLP-1 analogue liraglutide (Victoza®) and the long-acting insulin analogue insulin detemir (Levemir®). The other two are the antibiotics daptomycin and polymyxin B.
Lipidated peptides also have applications in other areas, such as use in the cosmetic industry. A commercially available lipidated peptide, Matrixyl, is used in anti-wrinkle creams. Matrixyl is a pentapeptide with the sequence KTTKS and an attached palmitoyl lipid chain, and it is able to stimulate collagen and fibronectin production in fibroblasts. Several studies have shown promising results for palmitoyl-KTTKS, and topical formulations have been found to significantly reduce fine lines and wrinkles, helping to delay the aging process in the skin.
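The hydrophilic/hydrophobic balance mentioned above can be made concrete with a back-of-the-envelope calculation. The sketch below scores the KTTKS head group of Matrixyl on the Kyte–Doolittle hydropathy scale; the scale values are the published Kyte–Doolittle ones, while the function name and the framing of the example are assumptions for illustration only.

# Minimal sketch: Kyte-Doolittle GRAVY (grand average of hydropathy)
# for a peptide head group, using the published Kyte-Doolittle values.
# A strongly negative score, as for KTTKS, indicates a hydrophilic head
# group, consistent with amphiphilic self-assembly once a palmitoyl
# tail is attached.

KYTE_DOOLITTLE = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def gravy(sequence):
    """Mean hydropathy over a one-letter-code amino acid sequence."""
    return sum(KYTE_DOOLITTLE[aa] for aa in sequence) / len(sequence)

print(gravy("KTTKS"))  # -2.0: a markedly hydrophilic head group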
The Hamley group have also carried out investigations of palmitoyl-KTTKS, and found it to self-assemble into nanotapes in the pH range 3–7, in addition to stimulating human dermal and corneal fibroblasts in a concentration-dependent manner, suggesting that stimulation occurs above the critical aggregation concentration. There exist some rarer forms of protein acylation that may not have a membrane-related function. They include serine O-octanoylation in ghrelin, serine O-palmitoleoylation in Wnt proteins, and O-palmitoylation in histone H4 by LPCAT1. Hedgehog proteins are double-modified by (N-)palmitate and cholesterol. Some skin ceramides are proteolipids. The amino group on lysine can also be myristoylated via a poorly understood mechanism. In bacteria All bacteria use proteolipids, sometimes confusingly referred to as bacterial lipoproteins, in their cell membrane. A common modification consists of N-acyl and S-diacylglycerol groups attached to an N-terminal cysteine residue. Braun's lipoprotein, found in gram-negative bacteria, is a representative of this group. In addition, Mycobacterium attaches O-mycolates to proteins destined for the outer membrane. The plant chloroplast is capable of many of the same modifications that bacteria perform on proteolipids. One database for such N-acyl diacylglycerylated cell wall proteolipids is DOLOP. Pathogenic spirochetes, including B. burgdorferi and T. pallidum, use their proteolipid adhesins to stick to victim cells. These proteins are also potent antigens, and are in fact the main immunogens of these two species. Proteolipids include bacterial antibiotics that are not synthesised in the ribosome. Products of nonribosomal peptide synthetases may also involve a peptide structure linked to lipids. These are usually referred to as "lipopeptides". Bacterial "lipoproteins" and "lipopeptides" (LP) are potent inducers of sepsis, second only to lipopolysaccharide (LPS) in their ability to cause an inflammatory response. While LPS is detected by the toll-like receptor TLR4, LPs are detected by TLR2. Bacillus Many proteolipids are produced by the Bacillus subtilis family. They are composed of a cyclic peptide of 7–10 amino acids and a β-hydroxy fatty acid chain of varying length, ranging from 13 to 19 carbon atoms. These can be divided into three families depending on the structure of the cyclic peptide sequence: surfactins, iturins, and fengycins. Lipidated peptides produced by Bacillus strains have many useful bioactivities, such as anti-bacterial, anti-viral, anti-fungal, and anti-tumour properties, making them very attractive for use in a wide range of industries. Surfactins As the name implies, surfactins are potent biosurfactants (surfactants produced by bacteria, yeast, or fungi), and they have been shown to reduce the surface tension of water from 72 to 27 mN/m at very low concentrations. Furthermore, surfactins are also able to permeabilize lipid membranes, allowing them to have specific antimicrobial and antiviral activities. Since surfactins are biosurfactants, they have diverse functional properties. These include low toxicity, biodegradability and a high tolerance of variations in temperature and pH, making them very interesting for use in a wide range of applications. Iturins Iturins are pore-forming lipopeptides with antifungal activity, which depends on their interaction with the cytoplasmic membrane of the target cells.
Mycosubtilin is an iturin isoform that can interact with membranes by targeting ergosterol (a sterol found in fungal membranes) via its alcohol group, giving it antifungal properties. Fengycins Fengycins are another class of biosurfactant produced by Bacillus subtilis, with antifungal activity against filamentous fungi. There are two classes of fengycins, fengycin A and fengycin B, which differ by only one amino acid at position 6 of the peptide sequence: the former has an alanine residue and the latter a valine. Streptomyces Daptomycin is another naturally occurring lipidated peptide, produced by the Gram-positive bacterium Streptomyces roseosporus. The structure of daptomycin consists of a decanoyl lipid chain attached to a partially cyclised peptide head group. It has very potent antimicrobial properties and is used as an antibiotic to treat life-threatening conditions caused by Gram-positive bacteria, including MRSA (methicillin-resistant Staphylococcus aureus) and vancomycin-resistant Enterococci. As with the Bacillus subtilis lipidated peptides, permeation of the cell membrane is what gives it its properties; the mechanism of action of daptomycin is thought to involve the insertion of the decanoyl chain into the bacterial membrane, causing disruption. This then causes a serious depolarization, resulting in the inhibition of various synthesis processes, including those of DNA, RNA and protein, leading to cell death. See also Myelin proteolipid protein References External links GO:0006497: gene ontology term for protein lipidation Lipids Proteins Physiology
Proteolipid
[ "Chemistry", "Biology" ]
2,015
[ "Biomolecules by chemical classification", "Physiology", "Organic compounds", "Molecular biology", "Proteins", "Lipids" ]
61,309,845
https://en.wikipedia.org/wiki/Cefoperazone/sulbactam
Cefoperazone/sulbactam is a combination drug used as an antibiotic. It is effective for the treatment of urinary tract infections. It contains cefoperazone, a β-lactam antibiotic, and sulbactam, a β-lactamase inhibitor, which helps prevent bacteria from breaking down cefoperazone. References Drugs developed by Pfizer Combination antibiotics Antibiotics
Cefoperazone/sulbactam
[ "Biology" ]
88
[ "Antibiotics", "Biocides", "Biotechnology products" ]
61,314,096
https://en.wikipedia.org/wiki/Bobbitt%20reaction
The Bobbitt reaction is a name reaction in organic chemistry. It is named after the American chemist James M. Bobbitt. The reaction allows the synthesis of 1-, 4-, and N-substituted 1,2,3,4-tetrahydroisoquinolines and also 1- and 4-substituted isoquinolines. General Reaction Scheme The reaction scheme below shows the synthesis of 1,2,3,4-tetrahydroisoquinoline from benzaldehyde and 2,2-diethoxyethylamine. Reaction Mechanism A possible mechanism is depicted below: First, the benzalaminoacetal 3 is formed by condensation of benzaldehyde 1 and 2,2-diethoxyethylamine 2. After the condensation, the C=N double bond in 3 is hydrogenated to form 4. Subsequently, a molecule of ethanol is eliminated. Cyclization then gives compound 5. After that, the C=C double bond in 5 is hydrogenated, forming 1,2,3,4-tetrahydroisoquinoline 6. Applications The Bobbitt reaction has found application in the preparation of some alkaloids such as carnegine, lophocerine, salsolidine, and salsoline. See also Pomeranz–Fritsch reaction References Nitrogen heterocycle forming reactions Heterocycle forming reactions Name reactions Isoquinolines
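The overall transformation can be traced with a cheminformatics toolkit. The following is a minimal sketch using RDKit (an assumption for illustration; the article does not reference any software), encoding the reactants, the condensation intermediate, and the product as SMILES strings:

```python
# Minimal RDKit sketch (illustrative; RDKit is assumed, not referenced in the
# article) tracing the Bobbitt synthesis of 1,2,3,4-tetrahydroisoquinoline.
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

species = {
    "benzaldehyde (1)": "O=Cc1ccccc1",
    "2,2-diethoxyethylamine (2)": "CCOC(OCC)CN",
    "benzalaminoacetal (3)": "CCOC(OCC)C/N=C/c1ccccc1",   # condensation product
    "1,2,3,4-tetrahydroisoquinoline (6)": "C1NCCc2ccccc12",
}
for name, smiles in species.items():
    mol = Chem.MolFromSmiles(smiles)   # parse and sanity-check each structure
    print(f"{name}: {CalcMolFormula(mol)}")
```

Printing the molecular formulas makes the mass balance of the sequence (loss of water on condensation, then loss of two ethanol molecules across the remaining steps) easy to verify by hand.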
Bobbitt reaction
[ "Chemistry" ]
308
[ "Name reactions", "Ring forming reactions", "Heterocycle forming reactions", "Organic reactions" ]
61,314,883
https://en.wikipedia.org/wiki/Centre%20for%20Digital%20Built%20Britain
The Centre for Digital Built Britain (CDBB) was a partnership between the University of Cambridge and the UK's Department for Business, Energy and Industrial Strategy. The CDBB was established in 2017 to support the transformation of the UK built environment, using digital technologies to better design, build, maintain and integrate assets. Prior to its closure in March 2022, it was the home of the UK BIM programme, begun by the UK BIM Task Group (2011–2017), and the National Digital Twin programme. History In May 2011, UK Government Chief Construction Adviser Paul Morrell called for adoption of Building Information Modelling (BIM) on UK government construction projects. The UK BIM Task Group was a UK Government-funded group, managed through the Cabinet Office, and created in 2011. Chaired by Mark Bew, it was founded to "drive adoption of BIM across government" in support of the Government Construction Strategy. It led the government's BIM programme and requirements, including a free-to-use set of UK standards and tools that defined 'level 2 BIM'. The BIM Task Group later took responsibility for delivering the Digital Built Britain strategy, published in February 2015. The work of the BIM Task Group continued under the stewardship of the Cambridge-based Centre for Digital Built Britain, announced by the Department for Business, Energy and Industrial Strategy in December 2017 and formally launched in early 2018. Its role was to support the transformation of the UK's construction sector using digital technologies to better plan, build, maintain and use infrastructure. In October 2019, the CDBB, the UK BIM Alliance (renamed 'nima' in 2022) and the BSI Group launched the UK BIM Framework. Superseding the BIM levels approach, the framework describes an overarching approach to implementing BIM in the UK, integrating the international ISO 19650 series of standards into UK processes and practice. In March 2022, the CDBB completed its mission, passing the Digital Twin Hub, International Programme, and Climate Resilience Demonstrator to the Connected Places Catapult. The final research projects within the Construction Innovation Hub Programme were due to complete by September 2022. Structure and work The CDBB was led by Professor Andy Neely, building on the work of the Cambridge Centre for Smart Infrastructure and Construction (CSIC), Cambridge Big Data, the Distributed Information and Automation Lab, the Cambridge Service Alliance and the Institute for Manufacturing. The CDBB was based in the CSIC's facility, the Maxwell Centre, in West Cambridge. The CDBB was a member of the Construction Innovation Hub, alongside the Building Research Establishment and the Manufacturing Technology Centre, and collaborated with other partners in the Transforming Construction Sector Deal. The Digital Built Britain strategy expanded the remit beyond BIM to include other digital processes and technologies, including new contractual frameworks, open data standards, data analytics and big data. For example, in November 2018, the CDBB published The Gemini Principles, a framework to guide the development of the National Digital Twin - an ecosystem of connected digital twins. The National Digital Twin was first recommended in the National Infrastructure Commission's December 2017 Data for the Public Good report. Mott MacDonald CTO Mark Enzer, Head of the National Digital Twin programme, was awarded an OBE in October 2020. External links CDBB website Digital Twin Hub - The National Digital Twin programme online community.
References Construction industry of the United Kingdom Building information modeling 2017 establishments in England
Centre for Digital Built Britain
[ "Engineering" ]
699
[ "Building engineering", "Building information modeling" ]
72,185,156
https://en.wikipedia.org/wiki/Fuch%C5%AB-Yamanouchi%20Tile%20Kiln%20ruins
The Fuchū-Yamanouchi Tile Kiln ruins are an archaeological site consisting of the remains of a Nara period kiln located in what is now the Fuchu neighborhood of the city of Sakaide and the Kokubunji neighborhood of the city of Takamatsu, Kagawa Prefecture, on the island of Shikoku, Japan. It has been protected by the central government as a National Historic Site since 1922. Overview The use of tiled roofs, which was a symbol of continental culture and the advanced state of the central administration, spread during the Asuka and Nara periods to Buddhist temples and regional administrative centers. The Fuchū-Yamanouchi kiln is located about one kilometer southwest of the Sanuki Kokubun-ji and is on the border between Sakaide and Takamatsu cities. This anagama kiln is located on a slope and has a total length of over three meters. In the vicinity of the kiln, eaves tiles with the same pattern as roof tiles from the ruins of the Sanuki Kokubun-ji and the Sanuki Kokubun-niji have been unearthed, so it is clear that the tiles of both temples were fired at this kiln. There were once around 14 kiln sites in the area, but today only this one remains in almost complete shape. Based on excavated roof tiles, it is believed to have been built in the Nara period. The site is about 15 minutes on foot from Kokubu Station on the JR Shikoku Yosan Line. See also List of Historic Sites of Japan (Kagawa) References External links Takamatsu City official site Kagawa Prefecture homepage Takamatsu, Kagawa Sakaide, Kagawa Japanese pottery kiln sites History of Kagawa Prefecture Historic Sites of Japan Sanuki Province
Fuchū-Yamanouchi Tile Kiln ruins
[ "Chemistry", "Engineering" ]
363
[ "Kilns", "Japanese pottery kiln sites" ]
72,188,498
https://en.wikipedia.org/wiki/Pompidou%20Group
The Council of Europe International Cooperation Group on Drugs and Addiction, also known as the Pompidou Group (French: Groupe Pompidou; and formerly the Cooperation Group to Combat Drug Abuse and Illicit Trafficking in Drugs), is the co-operation platform of the Council of Europe on matters of drug policy, currently composed of 42 countries. It was established as an ad hoc inter-governmental platform in 1971 and operated as such until its incorporation into the Council of Europe in 1980. Its headquarters are in Strasbourg, France. History During the 1960s, the "French Connection", a large-scale drug smuggling scheme allowing the import of heroin into the United States via Turkey and France, raised international concerns. On 6 August 1971, French President Georges Pompidou sent a letter to his counterparts in Germany, Belgium, Italy, Luxembourg, the Netherlands and the United Kingdom expressing his concerns and proposing a joint effort "to better understand and tackle the growing drug problems in Europe." It has been suggested that the initiative was pressed by a letter addressed to Pompidou by U.S. President Richard Nixon in 1969. The Group was officially launched at the first ministerial meeting held in Paris on 4 November 1971. According to its website:"Until 1979, the group operated without a formal status supported by the countries holding its presidency: France from 1971 to 1977 and Sweden from 1977 to 1979. The group developed as a sui generis entity throughout the 1970s, and three other countries (Denmark, Ireland and Sweden) joined it during that decade."After the death of Pompidou in 1974, the group started to informally adopt the name "Pompidou Group." On 27 March 1980, the Committee of Ministers of the Council of Europe adopted Resolution (80)2, integrating the Pompidou Group into the institutional framework of the Council as an inter-governmental body, after which numerous countries joined it. As the European integration process and expansion of the Schengen Area took over many drug-related areas of competence of European countries, the Pompidou Group reoriented its action towards monitoring. It publishes on a number of topics, such as reviews of seizures carried out at borders, guidelines for customs officers, drug markets, and epidemiology. Since 1989, the Group has also worked on human rights, health, prevention (including the role of police in drug use prevention), and more recently on harm reduction and HIV/AIDS. Since 2004, the Group has awarded every two years a "European Drug Prevention Prize" to drug prevention projects involving young people. More recently, the group has started working on topics such as addiction to the internet, trade in precursors, on-line drug sales, gender-related issues, prison policies, etc. In 1999 and 2010, the group signed Memoranda of Understanding with the EU's European Monitoring Centre for Drugs and Drug Addiction. On 16 June 2021, marking the fiftieth anniversary of the initiative, the Committee of Ministers of the Council of Europe adopted Resolution CM/Res(2021)4, making important changes to the status and mandate of the group. It also officially changed its name to "Council of Europe International Cooperation Group on Drugs and Addiction." Structure The Presidency is the main political body of the Pompidou Group. It takes primary responsibility for supervising the work of the group. The Ministerial Conference elects the countries holding the Presidency and Vice-Presidency for a four-year term.
The Ministerial Conference is the policy-making body and high-level political forum. It is composed of the ministers responsible for drug policies in their countries, who meet every four years. The Ministerial Conference establishes the Group's strategy and priorities. The Permanent Correspondents is the main decision-making body. It is composed of officials representing their governments in between Ministerial Conferences. Permanent Correspondents meet twice a year. Membership Although the group was launched among seven European countries, its membership has expanded in number and in nature over the years. Member states As of 2022, the Pompidou Group consists of 41 member states. As an "Enlarged Partial Agreement", membership of the Pompidou Group is also open to countries not members of the Council of Europe, including states outside Europe. Observer status is also possible for states. In addition, the US and the Holy See "at their request and after deliberation by the Permanent Correspondents, have been associated with the work of the Pompidou Group on an ad hoc technical basis." Intergovernmental and non-governmental observers Beyond countries' governments, as of 2022, the European Commission, the European Monitoring Centre for Drugs and Drug Addiction, the Conference of INGOs of the Council of Europe, the Inter-American Drug Abuse Control Commission (CICAD/OAS), the United Nations Office on Drugs and Crime (UNODC), and the World Health Organisation enjoy observer status. In turn, the Pompidou Group enjoys observer or similar status in a number of EU and UN fora. Criticism Some governments have criticized the overlap of discussions held at the Pompidou Group with those taking place in fora like the European Union (Horizontal Drugs Group) or the United Nations (Commission on Narcotic Drugs). Countries have also lamented the membership fees. Civil society stakeholders have criticized the Pompidou Group for leaving little room for the direct participation and involvement of non-governmental organizations in its work and discussions. In 2022, while announcing the withdrawal of his country from the Pompidou Group, Russian Deputy Foreign Minister Oleg Syromolotov nonetheless declared that "the expert dialogue with the EU on combatting drugs has until recently been one of the few that has not been subject to political conjuncture." The consensus of the Pompidou Group around stringent drug policies, compatible with the zero-tolerance approach of the Russian Federation on drugs, and in particular with strong positions opposing decriminalization and legalization of drugs, has long been criticized by observers. In 2021, the Executive Secretary of the Pompidou Group, Denis Huber, declared:"The Pompidou Group, with the diversity of its members, has no official stance on the issue of decriminalisation, but it will continue to play its role of a platform of cooperation and dialogue for discussing both health and criminal related problems associated with drug use and abuse." References External links 1971 establishments in France Council of Europe Drug control law Drug control treaties Drug policy Drug policy organizations International organizations based in Europe International organizations based in France Organizations based in Strasbourg Organizations established in 1971 Politics of Europe Georges Pompidou
Pompidou Group
[ "Chemistry" ]
1,340
[ "Drug control law", "Regulation of chemicals" ]
62,579,382
https://en.wikipedia.org/wiki/H4K8ac
H4K8ac, representing an epigenetic modification to the DNA packaging protein histone H4, is a mark indicating the acetylation at the 8th lysine residue of the histone H4 protein. It has been implicated in the prevalence of malaria. Nomenclature H4K8ac indicates acetylation of lysine 8 on the histone H4 protein subunit: H4 denotes the histone H4 family, K is the standard abbreviation for lysine, 8 is the position of the residue counting from the N-terminus, and ac indicates the acetyl group. Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal ends of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of post-translational modifications, such as the one seen in H3K36me3. H4 Histone H4 modifications are not as well characterized as those of H3, and H4 has fewer sequence variants, which might reflect its important function. H4K8ac H4K8ac is one of a group of 17 modifications found together at active promoters. H4K8ac is found more often in active promoters and transcribed regions than other marks. H4K8 is acetylated by a different group of enzymes than other H4 lysines. Lysine acetylation and deacetylation Proteins are typically acetylated on lysine residues, and this reaction relies on acetyl-coenzyme A as the acetyl group donor. In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well. The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity, but this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling. Epigenetic implications The post-translational modification of histone tails by either histone modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones come from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding.
Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Methods The histone mark acetylation can be detected in a variety of ways: 1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes show enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation. Clinical significance This mark has been implicated in the prevalence of malaria. See also Histone acetylation References Epigenetics Post-translational modification
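At its core, the ChIP-seq "enrichment" described in the Methods section is a normalized ratio of ChIP read counts to input-control read counts within a genomic window. The following is a minimal illustrative sketch; the function name and all counts are hypothetical and not drawn from any named pipeline:

```python
# Minimal sketch of ChIP-seq fold enrichment over an input control.
# All names and numbers here are hypothetical and purely illustrative.
def fold_enrichment(chip_counts, input_counts, chip_total, input_total, pseudo=1.0):
    """Library-size-normalized ChIP/input ratio for each candidate window."""
    ratios = []
    for chip, ctrl in zip(chip_counts, input_counts):
        chip_rate = (chip + pseudo) / chip_total    # reads per sequenced read
        ctrl_rate = (ctrl + pseudo) / input_total
        ratios.append(chip_rate / ctrl_rate)
    return ratios

# Hypothetical read counts in three promoter windows (1M reads per library):
print(fold_enrichment([120, 15, 300], [30, 20, 40], 1e6, 1e6))
# A ratio well above 1 marks a window as enriched for the immunoprecipitated mark.
```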
H4K8ac
[ "Chemistry" ]
1,203
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
62,580,833
https://en.wikipedia.org/wiki/Novak%E2%80%93Tyson%20model
The Novak–Tyson Model is a non-linear dynamics framework developed in the context of cell-cycle control by Bela Novak and John J. Tyson. It is a prevalent theoretical model that describes a hysteretic, bistable bifurcation, which many biological systems have been shown to express. Historical background Bela Novak and John Tyson were at the Department of Biology at the Virginia Polytechnic Institute and State University in Blacksburg, Virginia, when this model was first published in the Journal of Cell Science in 1993. In 1990, two key papers were published that identified and characterized important dynamic relationships between cyclin and MPF in interphase-arrested frog egg extracts. The first was Solomon's 1990 Cell paper, titled "Cyclin activation of p34cdc2", and the second was Felix's 1990 Nature paper, titled "Triggering of cyclin degradation in interphase extracts of amphibian eggs by cdc2 kinase". Solomon's paper showed a distinct cyclin concentration threshold for the activation of MPF. Felix's paper looked at cyclin B degradation in these extracts and found that MPF degrades cyclin B in a concentration-dependent and time-delayed manner. In response to these observations, three competing models were published in the following year, 1991, by Norel and Agur, Goldbeter, and Tyson. These competing theories all attempted to model the experimental observations seen in the 1990 papers regarding the cyclin-MPF network. The Norel and Agur model Norel and Agur's model proposes a mechanism where cyclin catalytically drives the production of MPF, which in turn autocatalyzes. This model assumes that MPF activates cyclin degradation via APC activation, and it decouples cyclin degradation from MPF destruction. However, this model is unable to recreate the observed cyclin-dependent MPF activity relationship seen in Solomon's 1990 paper, as it shows no upper steady-state level of MPF activity. Goldbeter model Goldbeter proposed a model where cyclin also catalytically activates MPF, but without an autocatalytic, positive feedback loop. The model describes a two-step process, where MPF first activates the APC, and then the APC drives cyclin degradation. When graphing the MPF activity with respect to cyclin concentration, the model shows a sigmoidal shape, with a hypersensitive, threshold region similar to what was observed by Solomon. However, this model depicts an effectively asymptotic plateau behavior at cyclin concentrations above the threshold, whereas the observed curve shows a steady increase in MPF activity at cyclin concentrations above the threshold. Tyson model In Tyson's 1991 model, cyclin is a stoichiometric activator of Cdc2, as cyclin binds with phosphorylated Cdc2 to form preMPF, which is activated by Cdc25 to generate MPF. Because Cdc25 itself is also activated by MPF, the conversion of preMPF to active MPF is a self-amplifying process in this model. Tyson neglected the role of MPF in activating the APC, assuming that only a phosphorylated form of cyclin was rapidly degraded. Tyson's model predicts an S-shaped curve, which is phenotypically consistent with Solomon's experimental results. However, this model generates additional lower turning-point behavior on the S-curve that implies hysteresis when interpreted as a threshold.
The Novak–Tyson model, first published in the paper titled "Numerical analysis of a comprehensive model of M-phase control in Xenopus oocyte extracts and intact embryos", builds on the Goldbeter and Tyson 1991 models in order to generate a unifying theory encapsulating the observed dynamics of the cyclin-MPF relationship. Model The model proposes a complex set of feedback relationships that are mathematically defined by a series of rate constants and ordinary differential equations. It employs concepts seen in the previous models, such as stoichiometric binding of Cdc2 and cyclin B, positive feedback loops through Cdc25 and Wee1, and delayed activation by MPF of the APC, but includes additional reactions such as those of Wee1 and Cdc25. The result is a non-linear dynamic system with an S-shaped curve similar to that of Tyson's 1991 model. In the process, this model makes four key predictions. Discontinuous bistable hysteresis According to the Novak–Tyson model, rather than describing Solomon's observations as a sigmoidal switch as seen in the Goldbeter model, the threshold behavior of cyclin-concentration-dependent MPF activity is instead a discontinuity of a bistable system. Moreover, due to the S-shape dynamics, the Novak–Tyson model additionally predicts that the cyclin concentration threshold for activation is higher than the cyclin concentration threshold for inactivation; that is, this model predicts a dynamically hysteretic behavior. Critical slowing down Since the Novak–Tyson model predicts that the observed threshold is actually a discontinuity in the system dynamics, it additionally predicts a critical slowing down behavior near the threshold, which is a characteristic behavior of discontinuous bistable systems. Biochemical regulation Since the model predicts that MPF activation at the interphase-to-mitosis transition is governed by the turning point of an S-shaped curve, Novak and Tyson suggest that transition-delaying checkpoint signals biochemically move the turning point to larger values of cyclin B concentration. Phosphatase regulation Novak and Tyson predict that unreplicated DNA interferes with M-phase initiation by activating the phosphatases that oppose MPF in the positive feedback loops. This prediction suggests a possible role for regulated serine/threonine protein phosphatases in cell cycle control. Model validation At the time of publishing, the predictions from the paper were all experimentally untested and were based only on the signaling pathways and mathematical modeling proposed by Novak and Tyson. However, since then, two papers have experimentally validated three of the four predictions listed above, namely the discontinuous bistable hysteresis, critical slowing down, and biochemical regulation predictions. Weaknesses According to Novak and Tyson, this model, as with any biologically detailed, mathematically driven model, is heavily reliant on parameter estimation, especially given the mathematical complexity of this particular model. Ultimately these parameters are fitted to experimental data, making the model inherently sensitive to the compounded uncertainty of the various experiments used to measure them.
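The hysteresis the model predicts can be illustrated with a much smaller toy system. The following is a minimal sketch, not the published Novak–Tyson equation set, with all parameter values assumed for illustration: a one-variable positive-feedback module in which M stands in for active MPF and C for total cyclin. Quasi-statically sweeping C up and then back down settles on different branches, giving distinct activation and inactivation thresholds:

```python
# Minimal sketch (assumed parameters; not the Novak-Tyson equations) of a
# bistable positive-feedback module: dM/dt = a*C + M^4/(K^4 + M^4) - d*M.
def steady_state(C, M0, a=0.3, K=0.5, d=1.2, dt=0.01, steps=20000):
    """Relax dM/dt to its steady state from M0 by forward-Euler integration."""
    M = M0
    for _ in range(steps):
        M += dt * (a * C + M**4 / (K**4 + M**4) - d * M)
    return M

cyclin = [i / 50 for i in range(51)]   # total cyclin C from 0.0 to 1.0
M, up, down = 0.0, [], []
for C in cyclin:                       # sweep cyclin upward from the low state
    M = steady_state(C, M)
    up.append(M)
for C in reversed(cyclin):             # sweep cyclin back down from the high state
    M = steady_state(C, M)
    down.append(M)

for C, mu, md in zip(cyclin, up, reversed(down)):
    print(f"C={C:.2f}  up={mu:.3f}  down={md:.3f}")  # branches differ mid-range
```

Because each sweep starts from the state reached at the previous cyclin level, the upward sweep jumps to the high-M branch only at a higher C than the C at which the downward sweep falls back, which is the signature separating a bistable discontinuity from a merely sigmoidal (Goldbeter-style) switch.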
Novak–Tyson model
[ "Biology" ]
1,391
[ "Cell cycle", "Cellular processes", "Biology theories" ]
62,582,142
https://en.wikipedia.org/wiki/H4K12ac
H4K12ac is an epigenetic modification to the DNA packaging protein histone H4. It is a mark that indicates the acetylation at the 12th lysine residue of the histone H4 protein. H4K12ac is involved in learning and memory. It is possible that restoring this modification could reduce age-related decline in memory. Nomenclature H4K12ac indicates acetylation of lysine 12 on the histone H4 protein subunit: H4 denotes the histone H4 family, K is the standard abbreviation for lysine, 12 is the position of the residue counting from the N-terminus, and ac indicates the acetyl group. Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal ends of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of post-translational modifications, such as the one seen in H3K36me3. H4 Histone H4 modifications are not as well characterized as those of H3, and H4 has fewer sequence variants, which might reflect its important function. H4K12ac Acetylation of histone H4K5 and H4K12 is enriched at centromeres. H4K8ac and H4K12ac are associated with active promoters, forming part of a common backbone of modifications. H4K8ac localizes more to gene bodies than promoters compared with other acetylations, and so may facilitate transcriptional elongation. H4K12ac is involved in learning and memory, so restoring it could help reduce age-related decline in memory. It has also been implicated in enabling developmental plasticity. High levels early in development are thought to provide transcriptional flexibility, while lower levels are thought to contribute to repressed transcription. In this way, H4K12 acetylation and deacetylation could open and close "critical windows" of environmental sensitivity. Lysine acetylation and deacetylation Proteins are typically acetylated on lysine residues, and this reaction relies on acetyl-coenzyme A as the acetyl group donor. In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well. The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity, but this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling. Epigenetic implications The post-translational modification of histone tails by either histone modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region.
The current understanding and interpretation of histones come from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Methods The histone mark acetylation can be detected in a variety of ways: 1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes show enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation. See also Histone acetylation References Epigenetics Post-translational modification
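The naming scheme described in the Nomenclature section above is regular enough to parse mechanically. The following is a minimal illustrative sketch (the regular expression and function name are ad hoc, not from any standard library for histone marks):

```python
# Minimal sketch parsing histone-mark nomenclature such as "H4K12ac" into its
# components, per the naming scheme described above. Illustrative only.
import re

MARK = re.compile(r"^(H[12345][AB]?)([A-Z])(\d+)([a-z]+\d*)$")

def parse_mark(name):
    """Split e.g. 'H4K12ac' -> (histone, residue code, position, modification)."""
    m = MARK.match(name)
    if not m:
        raise ValueError(f"unrecognised mark: {name}")
    histone, residue, pos, mod = m.groups()
    return histone, residue, int(pos), mod

print(parse_mark("H4K12ac"))   # ('H4', 'K', 12, 'ac')
print(parse_mark("H3K36me3"))  # ('H3', 'K', 36, 'me3')
```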
H4K12ac
[ "Chemistry" ]
1,311
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
69,203,926
https://en.wikipedia.org/wiki/Rosalind%20Kornfeld
Rosalind Hauk Kornfeld (1935–2007) was a scientist at Washington University in St. Louis known for her research determining the structure and formation of oligosaccharides. The Society for Glycobiology annually awards a lifetime achievement award in her honor. Education and career Rosalind Hauk was born in Dallas, Texas, and grew up in Chevy Chase, Maryland. She earned a bachelor of science degree from George Washington University in 1957. She went on to receive her Ph.D. degree in 1961, working on enzymes in rabbit muscles. For a brief period she stayed at George Washington University as a postdoctoral investigator before moving to the National Institutes of Health as a fellow working with Victor Ginsburg. She moved back to St. Louis in 1965 when she accepted a position at Washington University in St. Louis. She started as a research instructor, was promoted to research assistant professor in 1971, and then to research associate professor in 1978. In 1981 she was named a professor of medicine and professor of biochemistry and molecular biophysics. Kornfeld retired in 2001. Kornfeld founded the Academic Women's Network at Washington University and then served as its first president. She also served as president of the Society for Glycobiology in 1993. Research Kornfeld's research laid the groundwork for the field of glycobiology with her investigations into nucleotide sugar biosynthesis and glycan ligands for lectins. Kornfeld's notable accomplishments include defining the structure and function of N-acetylglucosamine-1-phosphodiester alpha-N-acetylglucosaminidase, commonly known as the 'uncovering enzyme' or UCE. She then worked to place that enzyme within the trans Golgi network. Kornfeld wrote two influential reviews on oligosaccharides, the second of which has been cited over 6000 times as of 2021. Selected publications Awards and honors Starting in 2007, the Academic Women's Network at Washington University has awarded the Rosalind Kornfeld Lecture for Distinguished Women in Science. Since 2008, the Society for Glycobiology has awarded the Rosalind Kornfeld Lifetime Achievement Award to honor accomplishments in research within the field. The second edition of Essentials of Glycobiology is dedicated to Rosalind Kornfeld and Roger W. Jeanloz, who are noted as "pioneers in the elucidation of glycan structure and function". Personal life Her husband was Stuart Kornfeld, whom she met when she was a first-year graduate student in biochemistry in 1958. They married in 1959 and worked together for 48 years; during his speech accepting the 2010 George M. Kober medal he acknowledged the key role she played in their joint accomplishments, a situation that has been noted by others. Her son, Kerry Kornfeld, is also a scientist, and while he was in high school, they jointly published a paper on lectins. References Washington University in St. Louis faculty George Washington University alumni 1935 births 2007 deaths Women biochemists
Rosalind Kornfeld
[ "Chemistry" ]
631
[ "Biochemists", "Women biochemists" ]
69,209,212
https://en.wikipedia.org/wiki/Forouhi%E2%80%93Bloomer%20model
The Forouhi–Bloomer model is a mathematical formula for the frequency dependence of the complex-valued refractive index. The model can be used to fit the refractive index of amorphous and crystalline semiconductor and dielectric materials at energies near and greater than their optical band gap. The dispersion relation bears the names of Rahim Forouhi and Iris Bloomer, who created the model and interpreted the physical significance of its parameters. The model is aphysical due to its incorrect asymptotic behavior and non-Hermitian character. These shortcomings inspired modified versions of the model as well as development of the Tauc–Lorentz model. Mathematical formulation The complex refractive index is given by

$\tilde{n}(E) = n(E) + i k(E)$

where $n(E)$ is the real component of the complex refractive index, commonly called the refractive index; $k(E)$ is the imaginary component of the complex refractive index, commonly called the extinction coefficient; and $E$ is the photon energy (related to the angular frequency by $E = \hbar \omega$). The real and imaginary components of the refractive index are related to one another through the Kramers–Kronig relations. Forouhi and Bloomer derived a formula for $k(E)$ for amorphous materials. The formula and complementary Kramers–Kronig integral are given by

$k(E) = \frac{A \left( E - E_g \right)^2}{E^2 - B E + C}$,

$n(E) = n(\infty) + \frac{2}{\pi} \, \mathcal{P} \int_{0}^{\infty} \frac{\xi k(\xi) - E k(E)}{\xi^2 - E^2} \, \mathrm{d}\xi$,

where $E_g$ is the bandgap of the material; $A$, $B$, $C$, and $n(\infty)$ are fitting parameters; and $\mathcal{P}$ denotes the Cauchy principal value. The fitting parameters are subject to the constraints $A > 0$, $B > 0$, $C > 0$, and $4C - B^2 > 0$. Evaluating the Kramers–Kronig integral,

$n(E) = n(\infty) + \frac{B_0 E + C_0}{E^2 - B E + C}$,

where

$B_0 = \frac{A}{Q} \left( -\frac{B^2}{2} + E_g B - E_g^2 + C \right)$,

$C_0 = \frac{A}{Q} \left( \left( E_g^2 + C \right) \frac{B}{2} - 2 E_g C \right)$,

$Q = \frac{1}{2} \sqrt{4C - B^2}$.

The Forouhi–Bloomer model for crystalline materials is similar to that of amorphous materials. The formulas for $k(E)$ and $n(E)$ are given by

$k(E) = \sum_{i=1}^{q} \frac{A_i \left( E - E_g \right)^2}{E^2 - B_i E + C_i}$,

$n(E) = n(\infty) + \sum_{i=1}^{q} \frac{B_{0,i} E + C_{0,i}}{E^2 - B_i E + C_i}$,

where all variables are defined similarly to the amorphous case, but with unique values for each value of the summation index $i$. Thus, the model for amorphous materials is a special case of the model for crystalline materials when the sum is over a single term only. References See also Cauchy equation Sellmeier equation Lorentz oscillator model Tauc–Lorentz model Brendel–Bormann oscillator model Condensed matter physics Electric and magnetic fields in matter Optics
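As a concrete illustration, here is a minimal sketch implementing the amorphous-material formulas above. The sample parameter values are assumed purely for illustration and do not correspond to any particular material:

```python
# Minimal sketch of the amorphous Forouhi-Bloomer dispersion n(E), k(E).
# Parameter values in the example are assumed for illustration only.
import numpy as np

def forouhi_bloomer(E, A, B, C, Eg, n_inf):
    """Return (n, k) at photon energies E (eV) for the amorphous FB model."""
    if not (A > 0 and B > 0 and C > 0 and 4 * C - B**2 > 0):
        raise ValueError("require A, B, C > 0 and 4C - B^2 > 0")
    E = np.asarray(E, dtype=float)
    denom = E**2 - B * E + C            # never zero when 4C - B^2 > 0
    # Note: k > 0 even below Eg, one of the aphysical traits noted above.
    k = A * (E - Eg) ** 2 / denom
    Q = 0.5 * np.sqrt(4 * C - B**2)
    B0 = (A / Q) * (-(B**2) / 2 + Eg * B - Eg**2 + C)
    C0 = (A / Q) * ((Eg**2 + C) * B / 2 - 2 * Eg * C)
    n = n_inf + (B0 * E + C0) / denom
    return n, k

E = np.linspace(1.5, 6.0, 5)            # photon energies in eV
n, k = forouhi_bloomer(E, A=0.05, B=7.0, C=13.0, Eg=1.6, n_inf=1.5)
for e, ni, ki in zip(E, n, k):
    print(f"E = {e:.2f} eV   n = {ni:.3f}   k = {ki:.4f}")
```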
Forouhi–Bloomer model
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
438
[ "Applied and interdisciplinary physics", "Optics", "Phases of matter", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", " molecular", "Atomic", "Matter", " and optical physics" ]
69,212,521
https://en.wikipedia.org/wiki/Agents%20of%20deterioration
The 'ten agents of deterioration' are a conceptual framework developed by the Canadian Conservation Institute (CCI) used to categorise the major causes of change, loss or damage to cultural heritage objects (such as collections held by galleries, libraries, archives and museums). Also referred to as the 'agents of change', the framework was first developed in the late 1980s and early 1990s. The defined agents reflect and systematise the main chemical and physical deterioration pathways to which most physical material is subject. They are a major influence on the applied practice of conservation, restoration, and collection management, finding particular use in risk management for cultural heritage collections. CCI defines ten 'agents': dissociation, fire, incorrect relative humidity, incorrect temperature, light and ultraviolet light, pests, pollutants (or contaminants), physical forces, thieves and vandals (at times referred to as 'criminals'), and water. The number of primary agents has remained the same since 1994, when 'custodial neglect' (now termed dissociation) was added, though the scope and names of some categories have been updated over time to reflect new research or thinking. Each category may be further subcategorised as rare and/or catastrophic (Type 1), sporadic (Type 2), or constant/ongoing (Type 3), particularly when applied to risk assessments. For example, within the category of physical forces, an earthquake may be designated a Type 1 event; a handling accident where an object is dropped as Type 2; and ongoing physical wear from daily handling as Type 3. Dissociation Dissociation refers to the loss of information associated with an object, such as provenance or location information, without which the object loses significance or is lost. In earlier versions of the framework this was referred to as 'custodial neglect'. Dissociation can cover loss of identification labels, misplacement of parts of an object, and lack of descriptive information, for example. Neglecting a collection is also part of dissociation. By not doing the proper research and not making sure everything stays together, institutions can lose information and cause their collections to lose value. Dissociation can also be caused by a natural disaster. Having a good documentation plan and good backups for electronic systems can help mitigate damage from events beyond an institution's control. Rigorous information management protocols are necessary, such as regular collection audits and administrative reviews. Good documentation practice and making sure labels stay with their respective objects are essential to help prevent dissociation. Fire Fire directly consumes cultural heritage through burning or by the deposition of smoke and soot on surfaces. Fire suppression systems can also cause damage - e.g. water damage from sprinklers. For example, about 18 million objects were destroyed in the 2018 fire at the National Museum of Brazil. Preventive maintenance is critical to prevent and minimize the risk of fire. Strategies include banning smoking and other sources of flame and heat, routine maintenance of fire extinguishers, maintaining a regular schedule for the maintenance and testing of smoke detectors, and protecting the building and contents with sprinkler systems. Materials can be classified based on their level of vulnerability to heat and combustion.
Jean Tétreault identified five levels of sensitivity, from very low for non-combustible materials to very high for self-igniting, easily combustible materials. Inorganic materials, such as ceramics, stone, glass, and metal, have lower relative sensitivity to fire, as compared to organic materials, such as wood, paper, or textile, that are highly reactive to fire. Certain materials are known for their extremely high relative sensitivity to fire, such as organic solvents with a flash point below 32 °C, which makes them flammable, and most dangerous when the flash point is below 21 °C. An object's susceptibility to fire is a salient factor that affects the rate at which fire can spread within a space. Considering that fire is an exothermic chain reaction that produces energy in the form of light and heat, it is important to note that for the chain reaction to persist, the fuel must be in a suitable condition and quantity. If the material is wet, the heat produced is first consumed in driving off moisture rather than directly breaking down the fuel; the chain reaction continues only while there is sufficient fuel to feed on. Incorrect relative humidity Relative humidity (RH) may result in change or damage to cultural heritage when the RH is too high, too low or when it fluctuates. High RH can cause mould growth, salt efflorescence, increased pest activity, swelling of wood, rapid metal corrosion and accelerated hydrolysis reactions in paper and other substrates. Low RH can result in cracking and shrinkage of wooden objects, and desiccation and embrittlement of paper and organic textiles. Fluctuations in RH compound these effects and cause physical damage where organic materials contract and expand, particularly in mixed-media objects where materials expand and contract at different rates (e.g. panel paintings). For this reason, museum environments often have humidity control as part of their heating, ventilation and cooling (HVAC) system, to keep RH stable and within defined limits. Relative humidity is the amount of water held in the air expressed as a percentage of the total amount of water that could be held in fully saturated air at a given temperature. The capacity of air to hold moisture is directly related to temperature, as warm air can hold more water than cold air. Some changes undergone by objects are reversible by adjusting the RH, but damage like cracks may be irreversible. Keeping the RH within an appropriate range for the type of material and as consistent as possible will prevent most RH-based damage. Keeping storage and display spaces between 40-60% RH will avoid most damaging effects, but maintaining a stable RH is often considered more important than adhering to absolute ranges. Guidelines for museum environments have been developed by professional conservation organisations such as the International Institute for Conservation (IIC), the International Council of Museums (ICOM-CC), the American Institute for Conservation (AIC) and the Australian Institute for the Conservation of Cultural Materials (AICCM). The RH of spaces can be measured by a number of tools, including humidity indicator cards, thermo-hygrographs, hygrometers, psychrometers and data loggers. RH data is monitored and analysed to determine if adjustments are needed. Adjustments may be possible via humidifiers, dehumidifiers, adjustments to existing heating and air conditioning systems, and passive control measures.
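The temperature dependence of RH can be made concrete with a short calculation. The following is a minimal sketch using the standard Magnus approximation for saturation vapour pressure; the Magnus coefficients are published values, while the example temperatures and RH are illustrative only:

```python
# Minimal sketch (Magnus approximation) showing why RH depends on temperature:
# warming the same parcel of air, with no change in its moisture content,
# lowers its relative humidity. Example numbers are illustrative only.
import math

def saturation_vp(t_celsius):
    """Approximate saturation vapour pressure (hPa) via the Magnus formula."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def rh_after_warming(rh_initial, t_initial, t_final):
    """RH after changing air temperature at constant moisture content."""
    vp = rh_initial / 100.0 * saturation_vp(t_initial)  # actual vapour pressure
    return 100.0 * vp / saturation_vp(t_final)

# Air at 15 C and 60% RH warmed to 22 C (e.g. by a heating system):
print(f"{rh_after_warming(60.0, 15.0, 22.0):.1f}% RH")  # roughly 39% RH
```

The same arithmetic run in reverse explains condensation: cooling moist indoor air against a cold wall or window pushes its RH toward 100%.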
Some objects have RH requirements that differ from those of the other objects in the same exhibition or storage room. For these objects, special display cases or storage containers can be created to accommodate their needs; within them, the RH and temperature can differ from those of the surrounding room. Special crates and containers can also be made to ensure objects travel safely. Incorrect temperature Two primary chemical deterioration mechanisms affecting cultural heritage are hydrolysis and oxidation, which can result in chain scission or cross-linking. These occur at ambient temperatures to varying degrees (depending on the material); the rate of chemical reactions increases with temperature. Consequently, cool storage (e.g. storage at 10 °C, 4 °C or below freezing) is often used to slow the deterioration of vulnerable materials such as cellulose nitrate and cellulose acetate film. Higher temperatures may also cause softening or melting of materials with low melting points, such as waxes and some plastics. Some materials become brittle at lower temperatures, increasing the chance of physical damage on handling. On freezing, ice crystals can physically disrupt delicate surfaces such as photographic emulsions. Light, ultraviolet and infrared Light, as it relates to collections maintenance, is primarily a concern in the visible and ultraviolet (UV) ranges of the electromagnetic spectrum. Visible light exposure will fade many colourants. Higher-energy ultraviolet wavelengths can also cause bleaching, yellowing, discolouration and physical weakening of substrates, rendering some material brittle and prone to breakage. Light radiation provides energy to induce chemical changes within the molecular structure of materials. Damage from light, including loss of color and strength, is cumulative and irreversible. Controlling light damage is a process of compromise, as light is also necessary for people working with or viewing cultural heritage objects. Light exposure can be reduced by limiting either the amount of time sensitive objects are put on display, or the strength at which they are illuminated. Dimmers, timer switches, and motion sensors may be used to limit exposure times. Cultural organisations often develop schedules for exhibition changeovers, in order to control the rate at which light-induced damage occurs. Light levels for light-sensitive objects such as textiles, works on paper, and dyed leather are generally kept at lower levels where possible (e.g. at 50 lux), with 200 lux a more common guideline for more light-resistant materials, such as oil paintings, bone, and natural leather. Some material types, such as stone, metal, and glass, are not negatively impacted by visible light. Ultraviolet light is not normally needed for vision (unless UV-induced fluorescence is an important part of the viewing experience), so cultural organisations tend to remove or shield sources of natural light from storage or display spaces. Curtains, shades and UV-absorbing filters are also useful control strategies. Light and UV levels can be measured with a light meter so that adjustments can be made. Pests Many insect species feed on organic cultural heritage material - for example, carpet beetles and clothes moths are attracted to protein-based fibres such as wool and silk; silverfish graze on the surface of books and photographs; and various species of borers can infest wooden furniture or frames. Insects can also be attracted by accumulations of dust and debris on objects, which they feed on.
Insects, birds and rodents may use cultural heritage objects as nesting material, soil them with excrement, or damage them by scratching or piercing. Bird droppings can etch the surface of outdoor metal sculpture, and birds' feet can scratch it. Damage from insects and other museum pests typically occurs because these pests are drawn to collection objects which they view as a food source. Certain material types, such as wood, organic textiles, furs, and paper, are more vulnerable to insect damage than others. Integrated pest management or IPM has become a key strategy in monitoring and controlling pests in museum environments. IPM focusses on prevention, through housekeeping and maintenance, and on monitoring pest populations using a system of glue traps. This allows museum or repository staff to identify vulnerable locations, catch new infestations, identify the type of insects present, and then act to eliminate the infestation. A range of possible treatments are available to address insect infestations. Chemical treatments are no longer a preferred treatment method for cultural objects due to human safety risks and often undesirable effects on the objects themselves. Instead, non-chemical methods are preferred, and include freezing, controlled heating, radiation, and anoxic treatments. Even options as simple as regulating the temperature and relative humidity of a space can be effective at curtailing an infestation, depending on the pest. Each option has benefits and drawbacks, and the choice of treatment used should be undertaken in consultation with a qualified professional. Pollutants Atmospheric pollutants such as ozone, sulphur dioxide, hydrogen sulphide and nitrogen dioxide cause corrosion, acidification and discolouration of a variety of materials. Indoor pollutants such as formaldehyde and other volatile organic acids cause similar problems, and may be present in carpets, paints and varnishes or in the materials used to construct display cases or storage furniture (wood, plastics, fabrics, resins). Pollutants may also be generated internally by objects - for example, the deterioration of cellulose acetate film results in the generation of acetic acid, which can damage other objects nearby. Atmospheric pollutants can tarnish or corrode metal objects. Silver objects are vulnerable to sulphurous gasses, which cause them to tarnish, and lead and pewter objects will corrode when exposed to volatile organic acids. Damage can be minimised by storing vulnerable silver objects in enclosures with activated charcoal or in silvercloth, which adsorb sulphur. Silver objects can also be coated, or lacquered, with a clear barrier material such as Paraloid B-72 to prevent tarnishing, but these coatings require periodic reapplication. One potential source of volatile organic acids is wooden shelves or wooden storage and display furniture. Dust is also categorised as an outdoor and indoor pollutant in this context, as it can cause damage to a surface by abrasion when removed, or by staining it when it absorbs humidity. Dust can contain skin, mold and inorganic fragments like silica or sulfur. Dust can become bound to a surface over time, making it significantly more difficult to remove. Dust is also hygroscopic, meaning it is able to attract and hold water molecules, creating an ideal climate for mold spores to grow and cause biological damage. Dust's hygroscopic nature can also prompt chemical reactions on a surface, especially upon metals.
Inorganic dust particles may have tough, sharp edges which can tear fibres and abrade softer surfaces if not properly removed. Dust accumulation can be prevented by storing and displaying collection objects in closed cabinets or cases, or by using dust covers. Dust minimisation strategies also include the use of air filters in heating and air conditioning systems, using vacuum cleaners equipped with HEPA filters, and housekeeping strategies using soft cloths. Physical forces This category includes sources of mechanical damage, where objects may be bent, broken, distorted, abraded, worn, etc. The change occurs by some applied force, which may be as varied as seismic movement from earthquakes, vibration from roads, electrical equipment or amplified music, or simple storage accidents such as shocks and rubbing, or handling accidents where objects are bumped or knocked. Handling training and guidelines help to prevent accidental damage due to physical forces when moving and working with museum objects. Handling guidelines may contain advice to carefully inspect objects before picking them up, clear paths of obstacles and trip hazards, line trolleys and carts used for transport with polyethylene foam padding, and plan all steps of a procedure in advance. In storage, objects are housed in a manner that makes them easily accessible, and fragile objects may have custom supports or mounts and padded storage boxes. Thieves and vandals At times referred to as 'criminals', this category includes deliberate theft of or damage to cultural heritage. Many famous examples exist, such as the 1990 theft of paintings from the Isabella Stewart Gardner Museum, or the 2012 attack on the Rothko painting at the Tate Gallery, though it is possible that many thefts go unreported or even unnoticed in large institutions, or when inventory checks are not frequent. Control strategies include limiting access to collections based on their value, rarity, portability and/or accessibility, both to staff and potential visitors. Storage and display furniture may be locked and alarmed. Cultural organisations may install security cameras, motion sensors and alarms and employ security guards and patrols. Water Water damage usually occurs through leaks in building fabric or through flooding associated with weather events or the failure of water-carrying infrastructure (plumbing, wet-pipe sprinkler systems, air conditioning). Condensation may occur where the temperature of the air drops suddenly, as when warm indoor air hits a cooler external wall or window. Water may soften or solubilise applied media (paints, adhesives, coatings), cause staining and leave tidelines after evaporation, cause physical damage through impact, weaken substrates and foster microorganism growth, and swell, shrink or distort organic materials. Water can also carry pollutants and contaminants, such as mud and sewage, which leave stains. The 1966 Florence flood was a formative moment in the development of the conservation-restoration profession and particularly preventive conservation. Other frameworks This is not the only framework used to categorise the deterioration of materials in cultural heritage professions. For example, deterioration may also be categorised according to source: biological, chemical, physical. Sustainability The ten agents of deterioration are often used to frame discussions about sustainability in the cultural heritage sector, when the cost of controlling or minimising deterioration is compared to the perceived benefits.
Costs may be financial (e.g. the cost of running air conditioning), environmental (e.g. the use of plastics as storage and packing material, the use of solvents for conservation treatment, or the use of energy to run air conditioning systems) or even the labour required to sustain an activity. The repercussions of established conservation-restoration practice on the environment and on climate change are increasingly debated, particularly the profession's emphasis on tight control of temperature and humidity. Recent environmental guidelines for cultural heritage collections, such as those developed by the Australian Institute for the Conservation of Cultural Materials (AICCM), emphasise sustainability and resilience as a guiding principle and directly reference climate change as a reason for frequent review. These guidelines highlight the need to consider the local climate and to allow variations in relative humidity and temperature values accordingly. See also Conservation and restoration of cultural property Disaster preparedness References Michalski, S., 'An overall framework for preventive conservation and remedial conservation', in ICOM Committee for Conservation 9th Triennial Meeting, Dresden (1990), 589–591. Waller, Robert. 1994. 'Conservation risk assessment: A strategy for managing resources for preventive conservation', in Preprints of the Contributions to the Ottawa Congress, 12–16 September 1994, Preventive Conservation: Practice, Theory and Research. London: IIC. A. Roy and P. Smith (Eds.). Agents of deterioration Materials degradation
Agents of deterioration
[ "Materials_science", "Engineering" ]
3,698
[ "Materials degradation", "Materials science" ]
63,487,960
https://en.wikipedia.org/wiki/Taiwan%20Semiconductor%20Research%20Institute
The Taiwan Semiconductor Research Institute () (TSRI) is a research institute in Taiwan which was created in 2019 through the merger of the National Nano Device Laboratories and the National Chip Implementation Center. It is part of the National Applied Research Laboratories under the Ministry of Science and Technology. Overview According to the China Times, the Taiwan Semiconductor Research Institute is the "world's only national science and technology research and development center which integrates integrated circuit design, chip offline manufacturing, and semiconductor component manufacturing process research." History The Taiwan Semiconductor Research Institute was created in 2019 through the merger of the National Nano Device Laboratories and the National Chip Implementation Center under the National Applied Research Laboratories. TSRI was inaugurated on 30 January 2019 at Hsinchu Science Park. National Chip Implementation Center The Chip Implementation Center Establishment Project was initiated in 1992, with the National Chip Implementation Center (NCIC) being inaugurated in 1997. In 2003 it was incorporated into NARLabs. In 2007 the CIC had 106 employees, of whom 66 were full-time researchers. National Nano Device Laboratories The National Nano Device Laboratories (NDL) was established under the National Submicron Device Laboratories Establishment Project in 1988. It began operating its first class-10 cleanroom in 1992. In 1993 it was renamed the National Millimicron Device Laboratories, and in 2002 it was renamed the National Nano Device Laboratories. It was incorporated into NARLabs in 2003. See also Industrial Technology Research Institute National Center for High-Performance Computing References 2019 establishments in Taiwan Research institutes established in 2019 Research institutes in Taiwan Computer science institutes Semiconductors
Taiwan Semiconductor Research Institute
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
312
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
63,488,647
https://en.wikipedia.org/wiki/Phenol%20sulfur%20transferase%20deficiency
Phenol sulfur transferase deficiency, in short PST deficiency, is the lack or reduced activity of the functional enzyme phenol sulfur transferase, which is crucial in the detoxification of mainly phenolic compounds by catalysing the sulfate conjugation of the hydroxyl groups in toxic phenolic compounds, resulting in more hydrophilic forms for more efficient excretion. The metabolic disorder was first discovered in the late 1990s by Dr. Rosemary Waring during her research with autistic children, which is also why the deficiency is commonly associated with autism. Mutations in the PST genes account for the genetic causes of the deficiency; single nucleotide polymorphisms and methylation of promoters are two examples of mutations that respectively cause conformational abnormalities in, and diminished expression of, the enzyme, resulting in reduced detoxification of phenolic compounds and reduced regulation of phenolic neurotransmitters. The deficiency may cause symptoms like flushing, tachycardia, and depression, and be a risk factor for disorders like autism, migraine, and cancer, while it also limits the use of phenolic drugs in PST-deficient patients. There is currently no drug available for treating PST deficiency. However, some people suffering from PST deficiency have reported that taking a digestive enzyme supplement containing xylanase 10 minutes before eating greatly reduces symptoms. Phenol sulfur transferase Phenol sulfur transferase, in short PST or SULT1, is a subfamily of the cytosolic sulfotransferases (SULTs), consisting of at least 8 isoforms in humans, that catalyses the transfer of a sulfuryl group from 3′-phosphoadenosine 5′-phosphosulfate (PAPS) to phenolic compounds, resulting in more hydrophilic products that can be more easily expelled from tissues for excretion. At high concentrations, PST can also catalyse the sulfate conjugation of amino groups. This enzyme subfamily, which exists in nearly all human tissues, is important for the detoxification of phenol-containing xenobiotics and endogenous compounds, including the biotransformation of neurotransmitters and drugs. Its expression is controlled by the PST genes located on chromosomes 2, 4, and 16, depending on the isoform; for example, the genes for the predominant isoform throughout the body of human adults, SULT1A1, which is highly heritable and variable between individuals, and for the most important isoform in the nervous system, SULT1A3, are located on chromosome 16 at positions 16p11.2 to 16p12.1. Discovery PST deficiency was first discovered in the late 1990s by Dr. Rosemary Waring through a series of tests during her research on the mechanisms and characteristics of sulfation in autistic children. In a test in which individuals were administered paracetamol, the level of sulfate conjugates in urine was found to be significantly lower in the autistic individuals than in the non-autistic controls, reflecting a decreased ability to form sulfated metabolites. The level of sulfate in plasma was also found to be significantly lower in autistic children, leading to a reduced activity of PST. She therefore concluded that there was possibly a deficiency of PST in autistic children due to the reduction of sulfate in plasma as a substrate of PST.
Pathophysiology Causes PST deficiency can be caused by inherited mutations in the PST genes, for example the SULT1A1*2 polymorphism, a single nucleotide polymorphism at the 638th base of the SULT1A1 gene from guanine to adenine that changes the 213th amino acid residue of the resulting SULT1A1 from arginine to histidine. This mutation causes a conformational change in the enzyme, reducing the size of the binding site and altering its thermochemical properties, which halves the substrate binding affinity and enzyme thermostability and results in diminished enzymatic activity. Methylation at the distal and proximal promoters of the PST genes is another mutation that accounts for the deficiency, causing a reduction in PST expression rather than conformational abnormalities. The methylation prevents the binding of RNA polymerase, which inhibits the mRNA expression of the gene for the production of PST, and finally results in PST deficiency. Disease-causing mechanisms PST deficiency can directly cause disease through the resulting defect in phenol sulfoconjugation, which reduces the removal of toxic phenolic compounds. In the liver, where PST serves as one of the important enzymes involved in detoxification, reduced transcriptional and translational levels of the PST genes lead to the accumulation of phenolic xenobiotics and cause liver diseases like hepatic steatosis and cirrhosis, or even liver cancers like hepatocellular carcinoma when accumulated phenolic carcinogens trigger their development. In clinical neurochemistry, PST, in particular the SULT1A3 isoform, is responsible for the degradation of phenolic neurotransmitters such as dopamine and norepinephrine, and is therefore important in the regulation of neurotransmitters, which greatly affects neurological function. Deficiency or down-regulation of SULT1A3 causes the retention of neurotransmitters in synapses, which affects brain functions including cognitive flexibility and associative learning. Clinical impact Related disorders Symptoms of PST deficiency mainly result from disruptions in multiple metabolic processes due to the accumulation of phenols in the body. Common symptoms include polydipsia, flushing, tachycardia, night sweats, and gastrointestinal problems such as diarrhoea. Neurological and psychiatric disorders such as depression may also occur when the regulation of phenolic neurotransmitters is disrupted. PST deficiency is also a risk factor for various diseases including autism, migraine, and cancers. Autism It is suspected that mutations of the PST genes, including both microdeletions and microduplications, are risk factors for autism spectrum disorder, especially mutations causing decreased SULT1A activity, which are commonly reported in autistic individuals. Some studies have found that sulfotransferases like PST are involved in glycosylation, and PST deficiency may therefore cause impaired glycosylation, leading to dystroglycanopathies in which severe abnormalities of the central nervous system, including neuronal migration and cortical defects, occur and finally result in autistic behaviours. However, it is still unclear whether PST deficiency is a cause of autism, or just a biomarker for the disorder.
Although recent research has associated autism with mutations at position 16p11.2 on chromosome 16, where the gene of the predominant PST isoform in the nervous system, SULT1A3, is located, the large number of genes in this region means that PST deficiency resulting from a mutation there may not be a cause of autism but merely a condition associated with the mutation of another, autism-causing gene. Migraine PST deficiency in platelets is a risk factor for migraine. It is believed that reduced PST levels and activity raise the amount of unconjugated amines in the bloodstream and the central nervous system, resulting in a rise in catecholamine levels which contributes to the recurring headaches of migraine. It has also been found that dietary intake of foods that are rich in amines may further lower the activity of PST and trigger more serious migraine symptoms. Cancers It is controversial whether PST deficiency increases or decreases the risk of cancer. One major function of PST is to inactivate phenolic carcinogens, so a deficiency of PST would reduce the inactivation of carcinogens and result in a higher risk of cancer; however, some studies have also found that PST, specifically SULT1A1, is responsible for the toxification of dietary and environmental mutagens, which would increase the risk of cancer, so a decreased risk may instead be associated with the deficient state of SULT1A1. Pharmacological impacts The metabolism of phenolic drugs, such as paracetamol and salicylamide, is greatly dependent on phenol sulfoconjugation by PST, and careful control of the dosage forms, routes, rates, and durations of administration of those drugs is therefore important in PST-deficient patients, to prevent accumulation of the drugs in the body and depletion of PST capacity for the sulfoconjugation of other xenobiotics and endogenous substances. High doses of nonsteroidal anti-inflammatory drugs (NSAIDs), such as aspirin, also cause short-term inhibition of PST activity, and should be administered to PST-deficient patients with caution to prevent further reduction in PST activity and accumulation of phenolic compounds, which would have adverse impacts. References Detoxification Causes of autism Chemistry Metabolic disorders
Phenol sulfur transferase deficiency
[ "Chemistry" ]
1,928
[ "Metabolic disorders", "Metabolism" ]
63,490,326
https://en.wikipedia.org/wiki/Kac%27s%20lemma
In ergodic theory, Kac's lemma, demonstrated by mathematician Mark Kac in 1947, is a lemma stating that in a measure space the orbits of almost all the points contained in a set A of that space, whose measure is μ(A), return to A within an average time inversely proportional to μ(A). The lemma extends what is stated by the Poincaré recurrence theorem, in which it is only shown that the points return infinitely many times. Application In physics, a dynamical system evolving in time may be described in a phase space, that is, by the evolution in time of some variables. If these variables are bounded, that is, have a minimum and a maximum, then by a theorem due to Liouville a measure can be defined on the space, yielding a measure space in which the lemma applies. As a consequence, given a configuration of the system (a point in the phase space), the average return period close to this configuration (in the neighbourhood of the point) is inversely proportional to the considered size of the volume surrounding the configuration. Normalizing the measure of the space to 1, it becomes a probability space and the measure of a set represents the probability of finding the system in the states represented by the points of that set. In this case the lemma implies that the smaller the probability of being in a certain state (or close to it), the longer the time of return near that state. In formulas, if A is the region close to the starting point and ⟨τ⟩ is the average return period, then ⟨τ⟩ = τ₀ / μ(A), where τ₀ is a characteristic time of the system in question and μ(A) the (normalized) measure of A. Note that since the volume of A, and therefore μ(A), depends exponentially on the number of variables in the system (μ(A) ∝ ε^N for a region of infinitesimal side ε, therefore less than 1, in N dimensions), μ(A) decreases very rapidly as the number of variables of the system increases, and consequently the return period increases exponentially. In practice, as the number of variables needed to describe the system increases, the return period increases rapidly. References Further reading Ergodic theory Lemmas
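A quick numerical illustration (a sketch, not part of the original article): for an ergodic, measure-preserving map, Kac's lemma predicts that the mean return time to a set A equals 1/μ(A). The irrational rotation x ↦ (x + α) mod 1 preserves Lebesgue measure and is ergodic, so a long orbit should return to an interval of measure μ about once every 1/μ steps:

```python
# Numerical check of Kac's lemma for the irrational rotation on [0, 1).
# The rotation preserves Lebesgue measure and is ergodic, so the mean
# return time to A = [0, mu) should be close to 1/mu.
import math

alpha = (math.sqrt(5) - 1) / 2      # irrational rotation angle
mu = 0.01                           # measure of the target set A = [0, mu)

x = 0.005                           # start inside A
steps_since_entry = 0
return_times = []

for _ in range(2_000_000):
    x = (x + alpha) % 1.0
    steps_since_entry += 1
    if x < mu:                      # the orbit has re-entered A
        return_times.append(steps_since_entry)
        steps_since_entry = 0

print(sum(return_times) / len(return_times))   # ~100.0, i.e. 1/mu
```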
Kac's lemma
[ "Mathematics" ]
408
[ "Ergodic theory", "Mathematical problems", "Mathematical theorems", "Lemmas", "Dynamical systems" ]
58,248,385
https://en.wikipedia.org/wiki/Walsh%E2%80%93Lebesgue%20theorem
The Walsh–Lebesgue theorem is a famous result from harmonic analysis proved by the American mathematician Joseph L. Walsh in 1929, using results proved by Lebesgue in 1907. The theorem states the following: let K be a compact subset of the Euclidean plane ℝ² such that the relative complement of K with respect to ℝ² is connected. Then every real-valued continuous function on ∂K (i.e. the boundary of K) can be approximated uniformly on ∂K by (real-valued) harmonic polynomials in the real variables x and y. Generalizations The Walsh–Lebesgue theorem has been generalized to Riemann surfaces and to ℝⁿ. In 1974 Anthony G. O'Farrell gave a generalization of the Walsh–Lebesgue theorem by means of the 1964 Browder–Wermer theorem, with related techniques. References Theorems in harmonic analysis Theorems in approximation theory
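Stated symbolically (a restatement of the theorem above, not an addition to it; ∂K denotes the boundary of K):

```latex
\forall f \in C(\partial K, \mathbb{R}),\ \forall \varepsilon > 0,\
\exists\, p \in \mathbb{R}[x, y] \text{ harmonic} :\quad
\sup_{(x,y) \in \partial K} \left| f(x,y) - p(x,y) \right| < \varepsilon .
```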
Walsh–Lebesgue theorem
[ "Mathematics" ]
168
[ "Theorems in approximation theory", "Theorems in mathematical analysis", "Theorems in harmonic analysis" ]
58,264,058
https://en.wikipedia.org/wiki/Measurable%20acting%20group
In mathematics, a measurable acting group is a special group that acts on some space in a way that is compatible with structures of measure theory. Measurable acting groups are found in the intersection of measure theory and group theory, two sub-disciplines of mathematics. Measurable acting groups are the basis for the study of invariant measures in abstract settings, most famously the Haar measure, and the study of stationary random measures. Definition Let (G, 𝒢, ⊙) be a measurable group, where 𝒢 denotes the σ-algebra on G and ⊙ the group law. Let further (S, 𝒮) be a measurable space and let 𝒢 ⊗ 𝒮 be the product σ-algebra of the σ-algebras 𝒢 and 𝒮. Let G act on S with group action Φ: G × S → S, (g, s) ↦ g · s. If Φ is a measurable function from (G × S, 𝒢 ⊗ 𝒮) to (S, 𝒮), then it is called a measurable group action. In this case, the group G is said to act measurably on S. Example: Measurable groups as measurable acting groups One special case of measurable acting groups are measurable groups themselves. If S = G, 𝒮 = 𝒢, and the group action is the group law, then a measurable group is a group (G, 𝒢, ⊙) acting measurably on G. References Group theory Measure theory
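A routine concrete instance (not from the original text, but standard): the additive group of the reals, equipped with its Borel σ-algebra, acts measurably on itself by translation.

```latex
% Translation action of (R, B(R), +) on (R, B(R)):
\Phi : \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \qquad \Phi(g, s) = g + s .
% Phi is continuous, hence measurable with respect to
% B(R) (x) B(R) = B(R^2), so the action is a measurable group action.
```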
Measurable acting group
[ "Mathematics" ]
223
[ "Group theory", "Fields of abstract algebra" ]
56,531,421
https://en.wikipedia.org/wiki/Temporal%20light%20artefacts
Temporal light artefacts (TLAs) are undesired effects in the visual perception of a human observer induced by temporal light modulations. Two well-known examples of such unwanted effects are flicker and stroboscopic effect. Flicker is a directly visible light modulation at relatively low frequencies (< 80 Hz) and small intensity modulation levels. Stroboscopic effect may become visible to a person when a moving object is illuminated by modulated light at somewhat higher frequencies (> 80 Hz) and larger intensity variations. Relevance Various scientific committees have assessed the potential health, performance and safety-related aspects resulting from temporal light modulations. TLAs must be limited to certain levels to avoid annoyance due to their direct visibility to humans and to prevent potential health issues. After longer exposure, TLAs may reduce task performance and cause fatigue. Possible health effects for specific persons are photosensitive epileptic seizures, migraine and aggravation of autistic behaviour. The incorrect perception of the motion of an object due to stroboscopic effect may be unacceptable in working environments with fast-moving or rotating machinery. Types TLAs are generally unwanted effects that may be perceived by humans because the light output of lighting equipment varies with time. The different TLA phenomena, the associated terms and definitions and their visibility aspects are given in a technical note of the CIE; see CIE TN 006:2016. In CIE TN 006:2016 three types of TLAs are distinguished: Flicker refers to unacceptable (irritating) light variation of a light source that is perceived by an average person, either directly or via a reflecting surface; Stroboscopic effect is an unwanted effect which may become visible to an average person when a moving or rotating object is illuminated by a time-modulated light source; Phantom array (or ghosting) may be perceived by an average person when making an eye saccade over a small light source having a periodic fluctuation; the light source is then perceived as a series of spatially extended light spots. Further background and explanations of the different TLA phenomena are given in a recorded webinar, "Is it all just flicker?". Models for the visibility of flicker and stroboscopic effect from the temporal behaviour of the luminous output of LEDs are in the doctoral thesis of Perz. Root causes The root cause of TLAs is the variation of the light intensity of lighting equipment. Important factors that can contribute, and that determine the magnitude and type of light modulation of lighting equipment, are: Light source technology: LEDs do not intrinsically produce temporal modulation; they simply reproduce the input current waveform very well, and any ripple in the current waveform is reproduced as a light ripple because LEDs have a fast response; therefore, compared to conventional lighting technologies (incandescent, fluorescent), more variety in TLA properties is seen for LED lighting. Power source technology (driver, electrical ballast): Many types and topologies of LED drivers and electrical ballasts are applied; simpler electronics and limited or no buffer capacitors often result in larger residual current ripple and thus larger temporal light modulation. Light regulation: Dimming technologies, whether externally applied dimmers (incompatible dimmers) or internal light-level regulators, may have a large impact; the level of temporal light modulation generally increases at lower light levels.
Mains voltage fluctuations: Electrical mains voltage variations are caused by switching or varying loads of electrical apparatus connected to the mains network, or may be intentionally applied, e.g. for power-line communication. Visible light communication technologies: Intentional temporal light modulations (TLMs) like LiFi can be applied, e.g. for communication purposes; these additional TLMs may give rise to unwanted TLAs. Metrics Several simple metrics such as modulation depth, flicker index and flicker percentage are often used to assess the acceptability of flicker. None of these metrics is suitable for objectively assessing the visibility and acceptability of TLAs by humans. Human perception of TLAs is impacted by various factors: modulation depth, frequency, wave shape and duty cycle. More advanced metrics have been developed and validated to objectively assess the visibility of TLAs: for flicker, the short-term flicker indicator PstLM; for stroboscopic effect, the stroboscopic effect visibility measure SVM. For flicker, two alternative measures of visibility have also been derived, the Flicker Visibility Measure FVM and the Time domain Flicker Visibility Measure TFVM. NOTE - The application of the SVM metric is limited to human perception of stroboscopic effect in normal application environments (residential, office) where the speed of movement of persons and/or objects is limited. For the phantom array effect, no metric has been defined yet. Measurement methods Standardised test and measurement methods Measurement of PstLM, and optionally testing the effect of mains voltage fluctuations or dimming: see IEC TR 61547-1, edition 3; Measurement of SVM, and optionally testing the effect of dimming: see IEC TR 63158; TLA: Test Methods and Guidance for Acceptance Criteria, see NEMA 77-2017; Guidance on the measurement of temporal light modulation of light sources and lighting systems: see CIE TN 012. Recommended limits Recommended limits for the TLA phenomena flicker and stroboscopic effect are in the NEMA 77-2017 publication. Improper use of cameras for TLA assessment If smartphone cameras, video cameras or film cameras are used in the presence of temporally modulated light, a variety of artefacts may be seen in the picture or in the recording, e.g. vertical or horizontal banding with varying brightness (this category of unwanted effects is temporal light interference - TLI). However, the type of artefact depends very much on the camera technology and camera settings. Different cameras will show different artefacts depending on the type of shutter, picture frame rate and the mitigation measures taken in the camera. Apart from the possible variety of effects that can be seen, there is also a difference between what people perceive directly and what people perceive via a camera and display or monitor. Hence, usage of common cameras is not a valid and objective means to assess the potential TLAs from lighting equipment.
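The simple waveform metrics mentioned above are easy to compute from a sampled light-output waveform. The sketch below uses the common textbook definitions (percent flicker = 100·(max − min)/(max + min); flicker index = area above the mean divided by total area over one period); it illustrates those simple metrics only, not the perception-based PstLM or SVM measures, which require the standardized procedures cited above.

```python
# Simple waveform metrics for temporal light modulation, computed over one
# period of a sampled light-output waveform.  Illustrative sketch only:
# PstLM and SVM require the full standardized procedures referenced above.
import math

def percent_flicker(samples):
    """Percent flicker, a.k.a. modulation depth: 100*(max-min)/(max+min)."""
    hi, lo = max(samples), min(samples)
    return 100.0 * (hi - lo) / (hi + lo)

def flicker_index(samples):
    """Flicker index: area above the mean divided by total area, one period."""
    mean = sum(samples) / len(samples)
    area_above = sum(s - mean for s in samples if s > mean)
    return area_above / sum(samples)

# Example: a rectified-sine light ripple riding on a constant output.
wave = [1.0 + 0.3 * abs(math.sin(math.pi * i / 200)) for i in range(200)]
print(percent_flicker(wave))  # ~13 %
print(flicker_index(wave))    # dimensionless; 0 for perfectly steady light
```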
See also Temporal light effects Temporal light interference Flicker (light) Stroboscopic effect (lighting) Stroboscopic effect References Further reading LightingEurope TLA position paper, LightingEurope Position Paper on Flicker and Stroboscopic Effect (Temporal Light Artefacts), September 2016; NEMA TLA position paper, Temporal Light Artifacts (Flicker and Stroboscopic Effects), 15 June 2015; ZVEI information paper, Temporal Light Artefacts – TLA, March 2017 (in German and English); CIE Technical Note CIE TN 008:2017, Final Report CIE Stakeholder Workshop for Temporal Light Modulation Standards for Lighting Systems. Lighting Optical illusions
Temporal light artefacts
[ "Physics" ]
1,402
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
56,533,153
https://en.wikipedia.org/wiki/TCMTB
(Benzothiazol-2-ylthio)methyl thiocyanate (TCMTB) is a chemical compound classified as a benzothiazole. Properties TCMTB is an oily, flammable, red to brown liquid with a pungent odor that is very slightly soluble in water. It decomposes on heating, producing hydrogen cyanide, sulfur oxides, and nitrogen oxides. Its degradation products are 2-mercaptobenzothiazole (2-MBT) and 2-benzothiazolesulfonic acid. Uses TCMTB is used as a wideband microbicide, paint fungicide, and paint gallicide. The active substance was approved in 1980 in the United States. It is used, for example, in leather preservation, for the protection of paper products, in wood preservatives, and against germs in industrial water. In the US, TCMTB is used as a fungicide for seed dressing in cereals, safflower, cotton and sugar beet. It is also used to deal with fungal problems when extracting hydrocarbons via fracking. Approval TCMTB is not an authorized plant protection product in the European Union. In Germany, Austria and Switzerland, no plant protection products containing this active substance are authorized. TCMTB contributes to health problems in tannery workers, as it is a potential carcinogen and a hepatotoxin. It is also a skin sensitizer, and may cause contact dermatitis in those exposed to the poisonous compound. Hence, it is mainly used in developing countries. References Benzothiazoles Thioethers Thiocyanates Biocides Fungicides
TCMTB
[ "Chemistry", "Biology", "Environmental_science" ]
357
[ "Fungicides", "Toxicology", "Functional groups", "Thiocyanates", "Biocides" ]
56,536,030
https://en.wikipedia.org/wiki/Webduino
The BPI Bit (also referred to as BPI:bit, stylised as webduino:bit) is an embedded system based on the ESP32 with an Xtensa 32-bit LX6 single/dual-core processor. The board has an ESP32 module with a processing capacity of up to 600 DMIPS, a built-in 448 KB ROM and 520 KB SRAM, accelerometer and magnetometer sensors, 2.4 GHz WiFi, Bluetooth and USB connectivity, a display consisting of 25 light-emitting diodes, and two programmable buttons, and it can be powered by either USB or an external battery pack. The device's inputs and outputs run through five ring connectors that are part of the 23-pin edge connector. The BPI:bit provides a wide range of onboard resources, supporting a photosensitive sensor, a digital triaxial sensor, a digital compass, and a temperature sensor interface. The webduino:bit has 25 intelligently controlled LED light sources in which the control circuit and the RGB chip are integrated in a 5050-component package. The cascading port transmits the signal over a single line. Each of a pixel's three primary colours can display 256 brightness levels, giving a full-colour display of 16,777,216 colours, with a scan frequency of not less than 400 Hz. The BPI:bit uses the MPU9250 on board; the MPU-9250 is a multi-chip module (MCM) consisting of two dies integrated into a single QFN package. One die houses the 3-axis gyroscope and the 3-axis accelerometer. The other die houses the AK8963 3-axis magnetometer from Asahi Kasei Microdevices Corporation. Hence, the MPU-9250 is a 9-axis MotionTracking device that combines a 3-axis gyroscope, 3-axis accelerometer, 3-axis magnetometer and a Digital Motion Processor™ (DMP), all in a small 3×3×1 mm package available as a pin-compatible upgrade from the MPU-6515. With its dedicated I²C sensor bus, the MPU-9250 directly provides complete 9-axis MotionFusion™ output. The MPU-9250 MotionTracking device, with its 9-axis integration, on-chip MotionFusion™, and run-time calibration firmware, enables manufacturers to eliminate the costly and complex selection, qualification, and system-level integration of discrete devices, guaranteeing optimal motion performance for consumers. The MPU-9250 is also designed to interface with multiple non-inertial digital sensors, such as pressure sensors, on its auxiliary I²C port. BPI:bit interface: BPI:bit for Arduino source code on GitHub: https://github.com/BPI-STEAM BPI:bit for Webduino source code on GitHub: https://github.com/webduinoio BPI:bit wiki page: http://wiki.banana-pi.org/BPI-Bit BPI:UNO32 for Webduino & Arduino The BPI:UNO32 (also referred to as BPI UNO32, stylised as BPI-UNO32) uses Espressif's ESP-WROOM-32 as its MCU. The ESP32 is a single-chip solution integrating 2.4 GHz Wi-Fi and Bluetooth dual mode. TSMC's 40-nanometre technology gives it good power consumption, RF performance, stability, versatility and reliability, allowing it to deal with various application scenarios. It has two separately controllable CPU cores with a clock frequency of up to 240 MHz, 448 KB ROM, and 520 KB SRAM. The BPI-UNO32's physical dimensions fully match the Arduino UNO R3. BPI:UNO32 for Arduino source code on GitHub: https://github.com/BPI-STEAM BPI-UNO32 wiki page: http://wiki.banana-pi.org/BPI-UNO32 BPI:Smart for Webduino & Arduino The BPI:Smart (also referred to as BPI-Smart, stylised as Webduino Smart) uses an ESP8266 design; Webduino officially supports it, and it can also support Arduino. BPI:Smart board dimensions: 3 cm in length, 2.5 cm in width, 1.3 cm in height, and a weight of 85 grams. Digital pins: 0, 2, 4, 5, 14, and 16. PWM pins: 12, 13, 15.
Analog pin: AD (A0). Other pins: TX, RX, 3.3V, VCC, RST, and GND. The BPI:Smart has a photocell sensor, an RGB LED, and a micro switch button on board. The photocell is connected to the AD pin, and the RGB (red, green, blue) LED is connected to pins 15, 12, and 13 respectively (the LED is common cathode, whereas most of the examples on this site use common anode RGB LEDs). The micro switch button is connected to pin 4. Please take note when you use these pins. BPI-Smart wiki page: http://wiki.banana-pi.org/BPI-Smart Arduino Physical computing
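Because the BPI:bit's 25-LED matrix uses the single-wire cascaded protocol described above (WS2812-style), boards flashed with MicroPython can drive it with the standard neopixel module. A minimal sketch follows; the data pin (GPIO 4 here) is an assumption and should be verified against the board's pinout:

```python
# Minimal MicroPython sketch for the BPI:bit 5x5 LED matrix (illustrative).
# Assumes the WS2812-style LED chain is wired to GPIO 4 -- verify against
# the official pinout before use.
from machine import Pin
import neopixel

np = neopixel.NeoPixel(Pin(4), 25)   # 25 LEDs in a single cascaded chain

for i in range(25):
    np[i] = (0, 8, 0)                # dim green; each channel is 0-255
np.write()                           # latch the data out over the single line
```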
Webduino
[ "Engineering" ]
1,170
[ "Physical computing", "Robotics engineering" ]
64,946,914
https://en.wikipedia.org/wiki/Jocic%20reaction
In organic chemistry, the Jocic reaction, also called the Jocic–Reeve reaction (named after Zivojin Jocic and Wilkins Reeve), is a name reaction that generates α-substituted carboxylic acids from trichloromethylcarbinols and corresponding nucleophiles in the presence of sodium hydroxide. The reaction involves nucleophilic displacement of the hydroxyl group in a 1,1,1-trichloro-2-hydroxyalkyl structure with concomitant conversion of the trichloromethyl portion to a carboxylic acid or other acyl group. The key stage of the reaction is an SN2 reaction, in which the nucleophile displaces the oxygen with geometric inversion. Mechanism The reaction mechanism involves an epoxide intermediate that undergoes an SN2 reaction by the nucleophile. As a result of this mechanistic aspect, the reaction can easily occur at secondary or tertiary positions, and chiral products can be made by using chiral alcohol substrates. The reaction is one stage of the Corey–Link reaction, the Bargellini reaction, and other processes for synthesizing α-amino acids and related structures. Using hydride as the nucleophile, which also reduces the carbonyl of the product, allows this sequence to be used as a homologation reaction for primary alcohols. Scope Examples of this reaction include: generation of α-azidocarboxylic acids with the use of sodium azide as the nucleophile in DME in the presence of sodium hydroxide; and conversion of aldehydes to homoelongated carboxylic acids, by first reacting with trichloromethide to form a trichloromethylcarbinol, which then undergoes a Jocic reaction with either sodium borohydride or sodium phenylseleno(triethoxy)borate as the nucleophile in sodium hydroxide. This reaction can be followed by the introduction of an amine, to form the corresponding homoelongated amides. References Name reactions Substitution reactions
Jocic reaction
[ "Chemistry" ]
448
[ "Name reactions" ]
64,952,415
https://en.wikipedia.org/wiki/Pharmacovigilance%20Programme%20of%20India
The Pharmacovigilance Programme of India (PvPI) is an Indian government organization which identifies and responds to drug safety problems. Its activities include receiving reports of adverse drug events and taking the action necessary to remedy problems. The Central Drugs Standard Control Organisation established the program in July 2010 with the All India Institute of Medical Sciences, New Delhi as the National Coordination Centre; the centre later shifted to the Indian Pharmacopoeia Commission in Ghaziabad on 15 April 2011. History Many developed countries set up their pharmacovigilance programs following the thalidomide scandal in the 1960s. India set up its program in the 1980s. This general concept of drug safety monitoring went through different forms, but the Central Drugs Standard Control Organisation established the present Pharmacovigilance Programme of India in 2010. The program is now well integrated with government legislation, with a regulator as leader and a research center as part of the Indian Pharmacopoeia Commission. Activities As of 2018 there were 250 centers around India capable of responding to reports of serious adverse reactions. One of the challenges for the organization is training doctors and hospitals to report adverse drug reactions when patients have them. The Pharmacovigilance Programme makes these reports itself, but ideally such reports could originate from any clinic. The Pharmacovigilance Programme seeks to encourage a culture and social expectation of reporting drug problems.
Pharmacovigilance Programme of India
[ "Chemistry", "Biology" ]
581
[ "Biotechnology products", "Regulation of biotechnologies", "National agencies for drug regulation", "Regulators of biotechnology products", "Drug safety" ]
64,958,336
https://en.wikipedia.org/wiki/Henry%20Vernon%20Wong
Henry Vernon Wong is a Jamaican-American physicist known for his work in plasma physics. He is professor emeritus at the University of Texas at Austin. Career Wong's early education was at Cornwall College in Montego Bay, Jamaica. He won a Jamaica Scholarship to the University of the West Indies, graduating with a B.Sc. in physics in 1961. He obtained his D.Phil. in nuclear physics from Wadham College, Oxford in 1964. Wong remained at Oxford during 1964–1965 as a postdoctoral scholar. In 1965, he was the recipient of a CIBA Fellowship to continue his research at the International Centre for Theoretical Physics in Trieste, Italy. The following year he joined the Laboratori Gas Ionizzati in Rome. In 1967, Wong joined the Fusion Research Center (FRC) of the University of Texas at Austin as a research scientist. Awards and honors In 1961, Wong was awarded a Rhodes Scholarship to Wadham College, Oxford. In 1988, he was elected a fellow of the American Physical Society. Selected publications Wong, H. Vernon. "Stability of Bernstein‐Greene‐Kruskal Wave with Small Fraction of Trapped Electrons", The Physics of Fluids 15, 632 (1972); DOI:10.1063/1.1693958 Wong, H. Vernon. "Sideband instabilities in free electron lasers", Physics of Fluids B: Plasma Physics 2, 1635 (1990). DOI:10.1063/1.859489 Wong, H. Vernon. "Particle canonical variables and guiding center Hamiltonian up to second order in the Larmor radius", Physics of Plasmas 7, 73 (2000). DOI:10.1063/1.873782 Wong, H. Vernon. "Nonlinear finite-Larmor-radius drift-kinetic equation", Physics of Plasmas 12, 112305 (2005). DOI:10.1063/1.2116867 References University of the West Indies alumni Alumni of the University of Oxford Alumni of Wadham College, Oxford Living people 20th-century American physicists 21st-century American physicists Jamaican scientists Jamaican Rhodes Scholars Plasma physicists Fellows of the American Physical Society University of Texas at Austin faculty 1938 births Cornwall College, Jamaica alumni
Henry Vernon Wong
[ "Physics" ]
457
[ "Plasma physicists", "Plasma physics" ]
64,959,915
https://en.wikipedia.org/wiki/DNA%20end%20resection
DNA end resection, also called 5′–3′ degradation, is a biochemical process in which the blunt end of a section of double-stranded DNA (dsDNA) is modified by cutting away some nucleotides from the 5' end to produce a 3' single-stranded sequence. The presence of a section of single-stranded DNA (ssDNA) allows the broken end of the DNA to line up accurately with a matching sequence, so that it can be accurately repaired. Double-strand breaks (DSBs) can occur at any phase of the cell cycle, triggering DNA end resection and repair activities, but they are also normal intermediates in meiotic recombination. Furthermore, the natural ends of linear chromosomes resemble DSBs, and although DNA breaks can damage the integrity of genomic DNA, the natural ends are packed into complex specialized protective DNA packages called telomeres that prevent DNA repair activities. Telomeres and mitotic DSBs have different functionality, but both experience the same 5′–3′ degradation process. Background A double-strand break is a kind of DNA damage in which both strands of the double helix are severed. DSBs frequently arise during DNA replication in the cell cycle. Furthermore, DSBs can lead to genome rearrangements and instability. A DSB that is misrepaired or left unrepaired has the potential to be catastrophic, such that the cell will not be able to complete mitosis when it next divides, and will either die or, in rare cases, undergo chromosomal loss, duplications, or even mutations. Three mechanisms exist to repair DSBs: non-homologous end joining (NHEJ), microhomology-mediated end joining (MMEJ), and homologous recombination (HR). Of these, only NHEJ does not rely on DNA end resection. Mechanism Accurate repair of DSBs is essential to the upkeep of genome integrity. Of the three mechanisms that exist to repair DSBs, the NHEJ and HR repair mechanisms are the dominant pathways. Several highly conserved proteins trigger the DNA damage checkpoint upon detection of DSBs, with ensuing repair by either the NHEJ or the HR repair pathway. The NHEJ mechanism functions by ligating the two DSB ends, while HR relies on a homologous template to repair DSB ends. DNA end resection in the HR pathway occurs only at two specific phases: the S and G2 phases. Since the HR pathway requires sister chromatids for activation, this event only happens in the S and G2 phases of the cell cycle, during replication. DSBs that have not begun DNA end resection can be ligated by the NHEJ pathway, but resection of a few nucleotides inhibits the NHEJ pathway and commits DNA repair to the HR pathway. The NHEJ pathway is involved throughout the cell cycle, but it is critical to DNA repair during the G1 phase. In the G1 phase there are no sister chromatids to repair DSBs via the HR pathway, making the NHEJ pathway a critical repair mechanism. Before resection can take place, the break needs to be detected. In animals, this detection is done by PARP1; similar systems exist in other eukaryotes: in plants, PARP2 seems to play this role. PARP binding then recruits the MRN complex to the breakage site. This is a highly conserved complex consisting of Mre11, Rad50 and NBS1 (known as Nibrin in mammals, or Xrs2 in yeast, where this complex is called the MRX complex). Before resection can start, CtBP-interacting protein (CtIP) needs to bind to the MRN complex so that the first phase of resection can begin, namely short-range end resection.
After phosphorylated CtIP binds, the Mre11 subunit is able to cut the 5'-terminated strand endonucleolytically, probably about 300 base pairs from the end, and then acts as a 3'→5' exonuclease to strip away the end of the 5' strand. Resection of telomere DSBs Linear chromosomes are packed into complex specialized protective DNA packages called telomeres. The structure of telomeres is highly conserved and is organized in multiple short tandem DNA repeats. Telomeres and DSBs have different functionality, in that telomeres prevent DNA repair activities. During telomeric DNA replication in the S/G2 and G1 phases of the cell cycle, lagging-strand synthesis leaves a short 3' overhang called a G-tail. Telomeric DNA ends in the 3' G-tail because the G-rich 3' strand extends beyond its complementary C-rich 5' strand. The G-tails serve a major function in that they control telomere homeostasis. Telomeres in G1 phase In the G1 phase of the cell cycle, the telomere-associated proteins Rif1, Rif2, and Rap1 bind to telomeric DNA and prevent access by the MRX complex. In S. cerevisiae, for example, this process is negatively regulated by their activity. The MRX complex and the Ku complex bind simultaneously and independently to DSB ends. In the presence of the telomere-associated proteins, MRX fails to bind to the DSB ends while the Ku complex does bind. The Ku complex bound to the DSB ends protects the telomeres from nucleolytic degradation by Exo1. This results in an inhibition of telomerase elongation at the DSB ends and prevents further telomere action in the G1 phase of the cell cycle. Telomeres in the late S/G2 phase In the late S/G2 phase of the cell cycle, the telomere-associated proteins Rif1, Rif2, and Rap1 still exhibit their inhibitory effect by binding to telomeric DNA; in this phase, however, the cyclin-dependent protein kinase Cdk1 promotes telomeric resection. This control is exerted by cyclin-dependent kinases, which phosphorylate parts of the resection machinery. This process alleviates the inhibitory effect of the telomere-associated proteins and allows Cdc13 (a protein that binds telomeric DNA on both the lagging and leading strands) to cover telomeric DNA. The binding of Cdc13 to DNA suppresses the DNA damage checkpoint and allows resection to occur while allowing telomerase elongation at the DSB ends. Resection of mitotic DSBs One of the important regulatory controls in mitotic cells is deciding which specific DSB repair pathway to take. Once a DSB is detected, highly conserved complexes are recruited to the DNA ends. If the cell is in the G1 phase of the cell cycle, the Ku complex prevents resection from occurring and recruits the NHEJ pathway factors. DSBs in the NHEJ pathway are ligated, a step that requires the DNA ligase activity of the Dnl4–Lif1/XRCC4 heterodimer and the Nej1/XLF protein. This process results in error-prone religation of DSB ends in the G1 phase of the cell cycle. If the cells are in the S/G2 phase, mitotic DSBs are controlled through Cdk1 activity, which involves phosphorylation of Sae2 at Ser267. After phosphorylation by Cdk1, the MRX complex binds to the dsDNA ends and generates short stretches of ssDNA extending in the 5' direction. Resection then continues through the activity of the helicase Sgs1 and the nucleases Exo1 and Dna2.
Involvement of Sae2 Ser267 in DSB processing is highly conserved throughout eukaryotes, such that Sae2, along with the MRX complex, is involved in two major functions: single-strand annealing and the processing of hairpin DNA structures. Like all ssDNA in the nucleus, the resected region is first coated by the replication protein A (RPA) complex, but RPA is then replaced with RAD51 to form a nucleoprotein filament which can take part in the search for a matching region, allowing HR to take place. The RPA-coated 3' ssDNA promotes the recruitment of Mec1. Mec1, in addition to Cdk1, further phosphorylates Sae2. The phosphorylation of Sae2 by Mec1 helps increase the effect of resection, and this in turn leads to activation of the DNA damage checkpoint. Regulators The choice of DNA repair pathway is highly regulated, to guarantee that cells in the S/G2 and G1 phases use the appropriate mechanism. Regulators in both the NHEJ and HR pathways mediate the appropriate DNA repair response. Furthermore, recent studies of DNA repair show that the regulation of DNA end resection is governed by the activity of Cdk1 in the cell replication cycle. NHEJ pathway DNA end resection is key in determining whether the NHEJ pathway is used. For the NHEJ pathway to occur, positive regulators such as the Ku and MRX complexes mediate the recruitment of other NHEJ-associated proteins such as Tel1, Lif1, Dnl4, and Nej1. Since NHEJ does not rely on end resection, it can take place in the G1 phase of the cell cycle. Both Ku and the NHEJ-associated proteins prevent initiation of resection. Resection ensures that DSBs are not repaired by NHEJ (which joins broken DNA ends together without ensuring that they match), but rather by methods based on homology (matching DNA sequences). Cyclin-dependent protein kinases such as Cdk1 in yeast serve as negative regulators of the NHEJ pathway; any activity associated with the presence of cyclin-dependent protein kinases inhibits the NHEJ pathway. Positive regulators The presence of ssDNA allows the broken end of the DNA to line up accurately with a matching sequence, so that it can be accurately repaired. For the HR pathway to occur in the S and G2 phases of the cell cycle, the availability of a sister chromatid is required. 5′–3′ resection automatically links a DSB to the HR pathway. Cyclin-dependent protein kinases such as Cdk1 serve as positive regulators of the HR pathway, promoting 5′–3′ nucleolytic degradation of DNA ends. Along with Cdk1, the MRX complex, B1 cyclin, and Spo11-induced DSBs serve as positive regulators of the HR pathway. See also Exonuclease Double-strand breaks Blunt ends Non-homologous end joining Nucleotide Cell cycle Telomere NHEJ Homologous Recombination Microhomology-mediated end joining References DNA repair
DNA end resection
[ "Biology" ]
2,277
[ "Molecular genetics", "DNA repair", "Cellular processes" ]
70,714,339
https://en.wikipedia.org/wiki/Spur%20%28chemistry%29
A spur or track in radiation chemistry is a region of high concentration of chemical products left after ionizing radiation passes through a medium. The spur model, proposed by Samuel and Magee in 1953, describes the kinetic behavior of reaction spurs involving one type of radical in a diffusion-driven environment. The spurs from gamma rays or X-rays are considered to be spherical, while those from alpha particles are cylindrical and are also called tracks. See also Linear energy transfer Radiobiology References Nuclear chemistry Reaction mechanisms
Spur (chemistry)
[ "Physics", "Chemistry" ]
96
[ "Reaction mechanisms", "Nuclear chemistry", "Nuclear chemistry stubs", "Physical organic chemistry", "Nuclear physics", "nan", "Chemical kinetics" ]
70,716,297
https://en.wikipedia.org/wiki/Spatial%20anxiety
Spatial anxiety (sometimes also referred to as spatial orientation discomfort) is a sense of anxiety an individual experiences while processing environmental information contained in one's geographical space (in the sense of Montello's classification of space), with the purpose of navigating and orienting through that space (usually unfamiliar or little known). Spatial anxiety is also linked to the feeling of stress in anticipation of a task requiring spatial performance (such as mental rotation, spatial perception, spatial visualisation, object location memory, or dynamic spatial ability). Particular cases of spatial anxiety can result in a more severe form of distress, as in agoraphobia. Classification It is still being investigated whether spatial anxiety should be considered one solid, concrete ("unitary") construct (including the experiences of anxiety due to any spatial task), or whether it could be considered a "multifactorial construct" (with various subcomponents), attributing the experience of anxiety to several aspects. Evidence has shown that spatial anxiety seems to be a "multifactorial construct" that entails two components: anxiety regarding navigation, and anxiety regarding the demand for rotation and visualization skills. Gender and further individual differences Gender differences appear to be among the most prominent differences in spatial anxiety as well as in navigational strategies. Evidence shows higher levels of spatial anxiety in women, who tend to choose route strategies, as opposed to men, who tend to choose orientation strategies (which, in turn, have been found to be negatively related to spatial anxiety). Spatial anxiety levels also seem to vary across different age groups. Evidence has shown that spatial anxiety also appears early on, during the elementary school years, with anxiety varying in level but tending to remain stable, with minimal fluctuations, across the life span. Measuring instruments There are two primary ways of measuring spatial anxiety. One of them is Lawton's Spatial Anxiety Scale, which was dominant during the era of its creation. The other is the Child Spatial Anxiety Questionnaire, which was the first to assess spatial anxiety levels related to spatial abilities other than navigation and map reading. Lawton's Spatial Anxiety Scale The scale measures the degree of anxiety regarding the individual's experience and performance in tasks assessing one's processing of information related to the environment, such as way-finding and navigation. In total there are eight statements. Some examples are "leaving a store that you have been to for the first time and deciding which way to turn to get to a destination" and "finding your way around in an unfamiliar mall". The rating takes place on a 5-point scale expressing the degree of anxiety on a continuum from "not at all" to "very much". Child Spatial Anxiety Questionnaire The Child Spatial Anxiety Questionnaire was designed for young children and attempts to assess anxiety related to a wider range of spatial abilities than usual. Children are asked to report the level of anxiety they feel in particular situations demanding spatial abilities. In total it includes eight situations. Some examples are: "how do you feel being asked to say which direction is right or left?", "how do you feel when you are asked to point to a certain place on a map, like this one?", "how do you feel when you have to solve a maze like this in one minute?".
In the original version, the rating takes place on a 3-point scale which includes three different faces, each facial expression representing a different emotional state (ranging from "calm" to "somewhat nervous" to "very nervous"). In the revised version, assessment takes place on a 5-point scale, with two more facial expressions added. Cognitive maps in individuals with spatial anxiety Self-reported spatial anxiety is negatively correlated with performance in spatial tasks, both small-scale – such as mental rotation and spatial visualization – and large-scale – such as environment learning – with participants scoring higher on the spatial anxiety scale showing lowered performance. Spatial anxiety is also negatively correlated with navigation proficiency ratings on self-reported sense-of-direction measures, as well as with orientation (map-based) and route (egocentric) strategies. Additionally, as anxiety has been shown to influence performance on tasks that utilize working memory resources, working memory is bound to be affected by spatial anxiety, especially visuo-spatial working memory. There is evidence demonstrating a negative relationship between spatial anxiety and environment learning ability. For example, spatial anxiety has been found to induce more errors in directional pointing tasks. In an experiment where participants were required to use directional instructions to move a toy car in a virtual three-dimensional environment, those with higher reported spatial anxiety performed with less accuracy. As spatial anxiety increases, pointing accuracy decreases and navigation errors increase significantly. This effect has also been shown in patients with cognitive impairment. Early detection might therefore allow for timely therapeutic intervention, e.g., in Alzheimer's disease. Moreover, spatial anxiety has been shown to relate to gender differences in spatial abilities. Generally, women report higher levels of spatial anxiety than men. The use of orientation (map-view-based) strategies in indoor and/or outdoor environments can be associated with lower levels of spatial anxiety. Women tend to report using route strategies more than orientation strategies, whereas men report the opposite. Spatial anxiety also contributes to gender differences in environment learning. Recent findings in university students indicate that men rely more than women upon distal gradient cues that provide information on both orientation and direction (i.e., hill lines), whereas women depend upon proximal pinpoint (i.e., landmark) cues more than other cue types when identifying a visual scene. The addition of an exogenous stressor would differentially alter the impact of spatial anxiety on performance in men and women by producing a higher perception of stress in women than in men, resulting in decreased performance in women. The findings suggest that gender differences in the perception of distal gradient and new cues varied based on stress condition. Some studies have found that acute stress can reduce memory for spatial locations, and people reporting difficulties in memorizing landmarks and directions when they are displaced also report higher levels of spatial anxiety. In addition, it has been demonstrated that people with agoraphobia have reduced visuo-spatial working memory when required to process multiple spatial elements simultaneously.
Specifically, in tasks where they were required to navigate using landmarks independent of themselves (allocentric coordinates), visuo-spatial working memory deficits were shown to hinder their performance. Bilateral vestibulopathy can cause higher levels of spatial anxiety, potentially related to hippocampal atrophy. Overall, the role of the vestibular system in spatial anxiety is not yet fully understood, but vestibular function plays a relevant role in emotion processing and the development of (vertigo-related) anxiety, as well as in spatial perception. Possible explanations for the negative correlation between spatial anxiety and the ability to form cognitive maps include: individuals lacking a sense of their own position with respect to the external environment are more likely to get anxious when faced with unplanned navigation, and the anxiety about becoming lost may itself reduce the ability to attend to cues necessary for way-finding strategies. The influence of spatial anxiety can be counteracted by positive beliefs, such as spatial self-efficacy and confidence (i.e., the belief that one will do well in cognitive tasks). For example, it has been demonstrated that confidence was a predictive factor for accuracy in mental rotation tasks, with participants being more accurate when they were more confident. When this factor was manipulated, performance was significantly affected. Furthermore, a stronger self-perception of spatial self-efficacy has a positive role in supporting environment learning beyond the role of gender. See also Spatial cognition Agoraphobia Navigation Sex differences in psychology References External links Child Spatial Anxiety Questionnaire (CSAQ) (northwestern.edu) SpatialAnxietyQuestionnaire A sample of the CSAQ's items Anxiety Spatial cognition Navigation Orientation (geometry) Agoraphobia
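Both instruments described above are scored by simple aggregation of Likert-type item ratings. A minimal sketch of such scoring in Python follows; the example ratings, the 8-item length check, and the use of a mean score are illustrative assumptions, not a published scoring manual:

# Minimal sketch of Likert-scale scoring for an 8-item spatial anxiety
# questionnaire (illustrative; not an official scoring procedure).
def spatial_anxiety_score(ratings, scale_min=1, scale_max=5):
    """Return the mean item rating, a common summary for Likert scales."""
    if len(ratings) != 8:
        raise ValueError("Expected 8 item ratings")
    if any(r < scale_min or r > scale_max for r in ratings):
        raise ValueError("Ratings must lie on the scale")
    return sum(ratings) / len(ratings)

# Example: a hypothetical respondent rating each item from 1 ("not at all")
# to 5 ("very much").
print(spatial_anxiety_score([2, 3, 4, 2, 5, 3, 3, 4]))  # -> 3.25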
Spatial anxiety
[ "Physics", "Mathematics" ]
1,630
[ "Topology", "Space", "Spatial cognition", "Geometry", "Spacetime", "Orientation (geometry)" ]
70,717,136
https://en.wikipedia.org/wiki/Heat%20content%20%28fuel%29
In the U.S. energy industry, heat content is the amount of heat energy that will be released by combustion of a unit quantity of a fuel or by transformation of another energy form. For example, fossil fuels are rated by heat content, with a distinction made between gross heat content (which includes the heat energy used to vaporize moisture in the fuel) and net heat content (which excludes the heat energy used to vaporize moisture in the fuel). The term is also sometimes applied to other energy forms, such as the heat content of a kilowatt-hour of electricity or a pound of steam. References Energy
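The gross and net figures differ by the latent heat of the water vapor formed or driven off during combustion. A minimal sketch of the conversion is given below; the latent-heat constant and the sample fuel numbers are illustrative assumptions:

# Minimal sketch: net heat content from gross heat content.
# Assumes the difference is the latent heat of vaporization of the water
# per kilogram of fuel (~2.44 MJ/kg of water near 25 C).
H_VAP_MJ_PER_KG = 2.44  # latent heat of vaporization of water, MJ/kg

def net_heat_content(gross_mj_per_kg, water_kg_per_kg_fuel):
    """Subtract the heat tied up in vaporizing water from the gross value."""
    return gross_mj_per_kg - water_kg_per_kg_fuel * H_VAP_MJ_PER_KG

# Example with illustrative numbers for a moist solid fuel:
print(net_heat_content(20.0, 0.5))  # -> 18.78 MJ/kg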
Heat content (fuel)
[ "Physics" ]
125
[ "Energy (physics)", "Energy", "Physical quantities" ]
70,718,499
https://en.wikipedia.org/wiki/Angzarr
Angzarr (⍼) is the name of a ghost character-like Unicode symbol of unknown origin and unknown meaning. It was added to Unicode 3.2, but the symbol had been present in works prior to its release. The name is an abbreviation of its ISO 9573-13 name, "Angle with Down Zig-zag Arrow", also reflected in its Unicode name, "Right Angle with Downwards Zigzag Arrow". Its HTML entity reference, originally defined in ISO 9573-13, is &angzarr;. History The earliest known use of the symbol is found in a 1963 Monotype typeset catalog of arrow characters; it does not appear in an earlier 1954 edition of the same catalog. Monotype listed the symbol as matrix serial number S9576. A later 1972 Monotype catalog, for mathematical characters, listed it under another serial number, S16139; the reason for the redundant serial number is unclear. It is unknown why Monotype added the character, or what purpose it was intended to serve, although much of Monotype's character repertoire for movable type originated from customer requests, including corporate logos. In 1988, the International Organization for Standardization added the symbol to its Standard Generalized Markup Language (SGML) definition, apparently pulling it from the Monotype character set. The STIX Fonts project adopted the Angzarr symbol from the ISO's SGML characters. In March 2000, the Angzarr symbol reached wide distribution when the Unicode Project proposed adding it to the Unicode Standard. The symbol appeared in the ISO publication Proposal for Encoding Additional Mathematical Symbols, although the symbol has no specific purpose. The lack of meaning associated with the Angzarr symbol gained notoriety in 2022 when a blog post was published on its unknown origins. The blog was updated in 2023, confirming the appearance of Angzarr in a 1972 Monotype typeset catalogue with a scan of the page, and in 2024, confirming its appearance in earlier Monotype catalogues. See also List of Unicode characters Arrow (symbol) References Unicode Symbols Symbols introduced in 1963
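Since the character is encoded at U+237C, it can be inspected with standard Unicode tooling; a short Python example:

# Look up the Angzarr character (U+237C) with Python's unicodedata module.
import unicodedata

ch = "\u237C"
print(ch)                      # prints the glyph if the font supports it
print(unicodedata.name(ch))    # RIGHT ANGLE WITH DOWNWARDS ZIGZAG ARROW
print(hex(ord(ch)))            # 0x237c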
Angzarr
[ "Astronomy", "Mathematics" ]
425
[ "Astronomical coordinate systems", "Horizontal coordinate system", "Symbols" ]
61,324,337
https://en.wikipedia.org/wiki/Applied%20category%20theory
Applied category theory is an academic discipline in which methods from category theory are used to study other fields including but not limited to computer science, physics (in particular quantum mechanics), natural language processing, control theory, probability theory and causality. The application of category theory in these domains can take different forms. In some cases the formalization of the domain into the language of category theory is the goal, the idea here being that this would elucidate the important structure and properties of the domain. In other cases the formalization is used to leverage the power of abstraction in order to prove new results about the field. List of applied category theorists Samson Abramsky John C. Baez Bob Coecke Joachim Lambek Valeria de Paiva Gordon Plotkin Dana Scott David Spivak See also Categorical quantum mechanics ZX-calculus DisCoCat Petri net Univalent foundations String diagrams External links Journals: Compositionality Conferences: Applied category theory Symposium on Compositional Structures (SYCO) Books: Picturing Quantum Processes Categories for Quantum Theory An Invitation to Applied Category Theory (preprint) Category Theory for the Sciences (preprint) Institutes: the Quantum Group at the University of Oxford TallCat, a research group at Tallinn University of Technology Topos Institute Cybercat Institute Software: DisCoPy, a Python toolkit for computing with string diagrams CatLab.jl, a framework for applied category theory in the Julia language CQL, a query language based on Kan extensions Companies: Conexus AI, a data integration company Symbolica, a machine learning company Mascots: Gremlin-Morgoth References Category theory
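As a toy illustration of what "formalizing a domain in the language of category theory" can look like in software, the sketch below encodes a small finite category in Python and checks the composition and identity laws. The objects, morphisms, and composition table are invented for the example and are not drawn from any of the libraries listed above:

# Toy finite category: objects, morphisms with domain/codomain, and
# an explicit composition table. Illustrative only.
objects = {"A", "B", "C"}
morphisms = {
    "id_A": ("A", "A"), "id_B": ("B", "B"), "id_C": ("C", "C"),
    "f": ("A", "B"), "g": ("B", "C"), "gf": ("A", "C"),
}
# Every morphism's domain and codomain must be objects of the category.
assert all(d in objects and c in objects for d, c in morphisms.values())

def compose(second, first):
    """Return the name of (second after first), or raise if undefined."""
    dom2, _ = morphisms[second]
    _, cod1 = morphisms[first]
    if dom2 != cod1:
        raise ValueError("morphisms not composable")
    if second.startswith("id_"):
        return first
    if first.startswith("id_"):
        return second
    if (second, first) == ("g", "f"):
        return "gf"
    raise ValueError("composition not in table")

assert compose("g", "f") == "gf"        # g after f
assert compose("gf", "id_A") == "gf"    # right identity law
assert compose("id_C", "gf") == "gf"    # left identity law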
Applied category theory
[ "Mathematics" ]
330
[ "Functions and mappings", "Mathematical structures", "Category theory stubs", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
61,327,141
https://en.wikipedia.org/wiki/Cell%20Proliferation%20%28journal%29
Cell Proliferation is a monthly open-access online-only scientific journal covering cell biology. It was established in 1968 as Cell and Tissue Kinetics, obtaining its current title in 1991. It is published by John Wiley & Sons and the editor-in-chief is Q. Zhou (Chinese Academy of Sciences). According to the Journal Citation Reports, the journal has a 2021 impact factor of 8.755, ranking it 42nd out of 194 journals in the category "Cell Biology". References External links Molecular and cellular biology journals Academic journals established in 1968 Wiley (publisher) academic journals English-language journals Monthly journals Online-only journals Open access journals
Cell Proliferation (journal)
[ "Chemistry" ]
131
[ "Molecular and cellular biology journals", "Molecular biology" ]
76,548,512
https://en.wikipedia.org/wiki/Chloroflexus%20aggregans
Chloroflexus aggregans is a bacterium from the genus Chloroflexus which has been isolated from hot springs in Japan. Etymology The Chloroflexus aggregans name originates from the aggregate-forming strains indicative of a new species within the Chloroflexus genus; the name refers to the visible aggregates formed by the species. Discovery and isolation In 1995, Satoshi Hanada, Akira Hiraishi, Keizo Shimada, and Katsumi Matsuura discovered a new strain of the Chloroflexus genus, named Chloroflexus aggregans. The researchers discovered two strains of this bacterial species: MD-66T and YI-9. The "T" in MD-66T represents the type strain. The former, MD-66T, was discovered in the Meotobuchi hot spring, while the YI-9 strain was from the Yufuin hot spring. Phylogenetics Phylogenetically, Chloroflexus bacteria are very distinct from green sulfur bacteria but are still taxonomic relatives, and there is some overlap between these groups. For instance, the light-harvesting systems responsible for photosynthesis in both groups rely on bacteriochlorophyll pigments. Currently, the molecular phylogenetic data remain unknown for most Chloroflexus strains. Moreover, Chloroflexus strains have not yet been isolated in axenic cultures, meaning cultures grown in the absence of other species. Currently, the closest known relative of C. aggregans is C. aurantiacus. Morphology At first, the researchers classified these microbes as the C. aurantiacus species because they had a similar morphological appearance and a high degree of genetic similarity. However, C. aggregans' production of mat-like aggregates when cultured in the researchers' lab suggested that it was a different species than C. aurantiacus, resulting in the discovery of a new species. Genomics Phenotypically, the species resembles the Chloroflexus aurantiacus bacteria. Genotypically, the species' 16S rRNA sequences are 92.8% similar to those of C. aurantiacus. Its genome is 4.7 megabases (Mb). Metabolism Chloroflexus aggregans has an extremely versatile mixotrophic metabolism. This is an advantage in its environment, since the microbial mats it inhabits have constantly fluctuating conditions that follow a general daily cycle. During the daytime, when light is abundant, it is the main energy source, and C. aggregans exhibits photoautotrophy, photomixotrophy, and photoheterotrophy. It performs photosynthesis through the use of chlorosomes, which are large pigment-containing complexes that can harvest light. During the afternoon, when there is less light and lower oxygen concentration in the microbial mats, the bacteria switch to chemoheterotrophy and use oxygen as their final electron acceptor (O2 respiration). At night, when light is not available and the microbial mats are anaerobic, the bacteria continue to exhibit a chemoheterotrophic metabolism, but it is instead based on fermentation. Finally, to complete their daily metabolic cycle, C. aggregans vertically migrate to the surface of their microbial mats, which are microaerobic, in the early morning. Here, they switch to chemoautotrophy based on O2 respiration. When exhibiting heterotrophy, C. aggregans can utilize a diverse range of organic substrates as carbon sources, but grows optimally when either yeast extract or Casamino Acids are used. Ecology Currently, C.
aggregans are known to reside in microbial mats in freshwater hot springs, living closely associated with other microorganisms in multilayered sheets. Specifically, they have been discovered and sampled from these hot springs in Japan. They coexist with filamentous, unicellular cyanobacteria in these mats. When exhibiting a heterotrophic metabolism, C. aggregans relies on organic substrates excreted by these cyanobacterial neighbors to obtain carbon for biosynthesis. To occupy these hot springs, C. aggregans are thermophiles, and isolated cultures have been shown to exhibit optimal growth between . They are filamentous, meaning the cells grow into long rods that only divide terminally, forming unbranched, multicellular filaments. Uniquely, these long filaments of C. aggregans then associate into dense, mat-like aggregates, setting the bacteria apart from other species of Chloroflexus. Evolution of photosynthesis 16S rRNA data have shown that bacterial species within the Chloroflexus genus are among the earliest bacteria that were able to perform photosynthesis. However, much still remains unknown about Chloroflexus aggregans, and its complete genome has yet to be fully sequenced. Thus, continued study of this organism could be important to help elucidate the origins of photosynthesis in bacteria. In addition, studying the broader evolutionary relationships of C. aggregans to other groups of early photosynthetic bacteria could help scientists build a phylogenetic tree of these related phyla, deducing their evolutionary order. For instance, a study comparing the signature sequences in highly conserved proteins of photosynthetic bacteria found that organisms in the genus Chloroflexus evolved before cyanobacteria. Resolving these phylogenies could further help scientists understand how photosynthesis developed. Today, this process sustains almost all life on Earth by providing oxygen to the atmosphere and energy for organisms in higher trophic levels. Therefore, it is highly valuable to study how this process first arose. References External links LPSN Type strain of Chloroflexus aggregans at BacDive - the Bacterial Diversity Metadatabase Thermophiles Chloroflexota
Chloroflexus aggregans
[ "Biology" ]
1,317
[ "Bacteria stubs", "Bacteria" ]
76,556,523
https://en.wikipedia.org/wiki/Drug%20antagonism
Drug antagonism refers to a medicine stopping the action or effect of another substance, preventing a biological response. The stopping actions are carried out by four major mechanisms, namely chemical, pharmacokinetic, receptor and physiological antagonism. The four mechanisms are widely used in reducing overstimulated physiological actions. Drug antagonists can be used in a variety of medications, including anticholinergics, antihistamines, etc. The antagonistic effect can be quantified by pharmacodynamics. Some can even serve as antidotes for toxicities and overdose. Receptor Antagonism Mechanism of Action Receptors bind with endogenous ligands to produce a physiological effect and to regulate the body and cellular homeostasis. In a ligand-receptor interaction, the ligand binds with the receptor to form a drug-receptor complex, producing a biological response. The biological nature of receptors can be enzymes, nucleic acids or cellular proteins. Common types of receptors include G-protein coupled receptors, nuclear receptors and ion channels. Functional antagonists do not produce a biological response after binding with a receptor; they block the binding of endogenous ligands to the receptors and thus inhibit the subsequent physiological effect. Types of receptor antagonism Reversible and irreversible competitive antagonism In competitive antagonism, both agonist and antagonist bind to the same active site. Increasing the agonist dose can reverse the effect of reversible competitive antagonism. Irreversible competitive antagonism occurs when the antagonist binds to the same spot on the receptor as the agonist but dissociates from the receptor very slowly or not at all. As a result, when the agonist is delivered, there is no change in antagonist occupancy. Since a receptor can only hold one molecule at a time, competitive antagonists can reduce the agonist occupancy (the percentage of receptors to which the agonist is bound). Raising the agonist concentration can restore the agonist occupancy and the subsequent tissue response, owing to the competition between the two; the antagonism is thus surmountable. The extent to which the competitive antagonist shifts the agonist log concentration–effect curve to the right, while the curve maintains its maximum and slope, is measured by the dose ratio, which rises linearly with the antagonist concentration. Non-competitive (irreversible) antagonism Allosteric antagonists The agonist and antagonist bind different active sites, and the antagonist obstructs the chain of events that triggers the agonist response at a point downstream from the agonist binding site on the receptor. It binds irreversibly to its site. One example is ketamine, which enters the NMDA receptor's ion channel pore and blocks it, stopping ions from passing through the channel. Also, medications like nifedipine and verapamil stop Ca2+ from entering through the cell membrane and so non-selectively prevent drugs that act at any receptor coupled to these calcium channels from causing smooth muscle contraction. Partial agonists and full agonists In the presence of a full agonist exerting its maximal effect, a partial agonist can behave like a competitive antagonist, lowering the effect of receptor binding and generating merely a submaximal reaction. These variations can be evaluated in terms of effectiveness, indicating the agonist-receptor complex's "strength" in causing a tissue response; this relies on receptor occupancy and response.
A particular medication of intermediate efficacy may appear as a partial agonist in one tissue (with a lower level of receptor expression) and a full agonist in another (with a high level of receptor expression), across distinct cell types expressing the same receptor at varying densities. Clinical use Reversible and irreversible competitive antagonism Competitive antagonists are usually structurally similar to the active compound, since they are structural analogues that have to bind to the same pocket. Reversible competitive antagonists like antihistamines compete with histamine to bind to histamine receptors, blocking the allergic response mediated by histamine. They are used in treating histamine-mediated allergies and allergic rhinitis. Irreversible competitive antagonists like phenoxybenzamine do not dissociate from alpha-adrenergic receptors. Phenoxybenzamine is used to block the activity of alpha receptors in the sympathetic pathway and is used in the treatment of paroxysmal hypertension and sweating resulting from pheochromocytoma, and in benign prostatic hyperplasia. Chemical antagonism Mechanism of Action Chemical antagonism occurs when a chemical antagonist combines with a ligand to form an inactive product compound, inhibiting the response. In chemical antagonism, receptors are not involved in the process; the antagonist directly binds with or removes the ligand, preventing it from binding to the receptor. As the ligand cannot stimulate the receptor, no physiological effect is generated by the receptors, providing an inhibitory effect. The common types of chemical antagonism include chelating agents, neutralising antibodies and salt aggregation. Clinical use Chelating agent Chelating agents are organic compounds which are capable of linking to metal ions. They are usually useful for removing toxic heavy metal ions from the body. Dimercaprol is a common chelating agent used to treat toxic exposure to arsenic, mercury, gold, and lead. The SH-ligands of dimercaprol compete with the -SH groups of natural enzymes for the heavy metal, forming a stable metal complex that can be excreted through urine. This action antagonises the toxic metal ions and helps remove them from the body's circulation. However, dimercaprol has a narrow therapeutic index and has largely been replaced by its derivative, 2,3-dimercaptosuccinic acid (DMSA). Neutralising antibodies Neutralising antibodies block pathogen entry into cells to prevent further infection and replication. Infliximab is a monoclonal antibody binding to tumour necrosis factor-alpha (TNF-alpha), inhibiting its pro-inflammatory action. Its efficacious anti-inflammatory action is used clinically in Crohn's disease, active rheumatoid arthritis, psoriatic arthritis, and active ankylosing spondylitis. Salt aggregation Salt aggregation refers to reactions between a drug and an active compound that form a salt. Strongly anionic unfractionated heparin reacts with the positively charged, arginine-rich protamine peptide to generate a salt aggregate. The resulting salt aggregate has no anticoagulant activity and is inactive. Protamine acts quickly, taking only five minutes to neutralize unfractionated heparin, and its half-life is only ten minutes. Pharmacokinetic antagonism Mechanism of Action A pharmacokinetic antagonist is a drug which affects the pharmacokinetic profile (absorption, distribution, metabolism, excretion) of another chemical (or drug), thereby reducing the action of the target chemical.
There could be a rise in the active drug's rate of metabolic breakdown. As an alternative, there may be a decrease in the rate at which the active medication is absorbed from the digestive system, or a rise in the rate at which the drug is excreted by the kidneys. Clinical use Drugs affecting absorption: antacids Most drugs are taken orally and are absorbed through the gastrointestinal tract. Antacids increase the pH in the stomach and cause premature release of enteric-coated drugs, which are designed to be protected from the acidic environment of the stomach. For example, proton-pump inhibitors (PPIs) are enteric-coated to protect them from decomposition in an acidic environment. Co-administration of antacids with PPIs can lead to premature release into the acidic gastric environment and inactivate the PPIs before absorption. This type of pharmacokinetic antagonism should be carefully avoided to prevent loss of drug efficacy. Since most drugs are either weakly acidic or weakly basic, a modified pH also affects the extent to which the drug is ionised at a given site, thus affecting the time required for absorption and onset. Drugs affecting metabolism: phenytoin Many drugs are metabolised by a set of liver enzymes called CYP450s. The activity of these enzymes determines the rate of pro-drug activation and the rate of inactivation of active drugs. For example, warfarin, an anticoagulant commonly used in atrial fibrillation, is metabolised by an enzyme called CYP2C9. Phenytoin, a CYP2C9 inducer, increases this enzyme's activity and the rate of warfarin breakdown, thereby reducing warfarin's efficacy. Patients should avoid the co-administration of warfarin and phenytoin. In cases where both drugs must be used together, the warfarin dose may be titrated up to cope with the reduced efficacy. Drugs affecting excretion: intravenous sodium bicarbonate The kidney excretes most drugs through urine. When urine is made more alkaline, weakly acidic drugs ionise in it, making it difficult for them to be reabsorbed. Therefore, in cases of aspirin (weak acid) toxicity, injecting intravenous sodium bicarbonate can increase urine pH, thereby increasing the excretion of aspirin through urine. A similar approach can be used for other weakly acidic drug toxicities. Physiological antagonism Mechanism of action Physiological antagonism refers to the behaviour in which an antagonist produces the opposite effect of the agonist but does not bind to the same active site as the agonist; a physiological antagonist binds to a different receptor from the original agonist receptor. Clinical use Both insulin and glucagon are synthesised naturally in the human body to regulate blood glucose levels at homeostasis. Insulin binds to insulin receptors to decrease blood glucose levels, whilst glucagon binds to glucagon receptors to increase blood glucose levels. In cases of insulin-induced hypoglycaemia, a glucagon injection can help increase blood glucose levels. Another example is epinephrine (a bronchodilator) and histamine (a bronchoconstrictor). Epinephrine binds to adrenergic receptors to promote bronchodilation whilst histamine binds to histamine receptors, leading to bronchoconstriction. Since they have opposite effects through different pathways, they are considered physiological antagonists, and one can be used to counteract the effects of the other.
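The ion-trapping effect behind urinary alkalinization can be sketched with the Henderson-Hasselbalch relationship. The snippet below is a rough illustration only; the pKa of about 3.5 for aspirin and the example urine pH values are assumptions made for the sake of the example:

# Sketch of pH-dependent ionization of a weak acid (Henderson-Hasselbalch).
# For a weak acid, fraction ionized = 1 / (1 + 10**(pKa - pH)).
def fraction_ionized_weak_acid(pka, ph):
    return 1.0 / (1.0 + 10 ** (pka - ph))

PKA_ASPIRIN = 3.5  # assumed pKa for aspirin (illustrative)

for urine_ph in (6.0, 8.0):  # before vs. after bicarbonate alkalinization
    f = fraction_ionized_weak_acid(PKA_ASPIRIN, urine_ph)
    print(f"pH {urine_ph}: ionized {f:.5f}, un-ionized {1 - f:.5f}")

# Raising urine pH from 6 to 8 cuts the reabsorbable (un-ionized)
# fraction by roughly 100-fold, favoring excretion.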
Quantifying effects of antagonists using pharmacodynamics Pharmacodynamics Pharmacodynamics (PD) is the core framework for quantifying the effects of antagonists by measuring a drug's efficacy and safety. PD emphasises the relationship between the dose and response of a certain drug, which can be illustrated using a dose-response curve. Efficacy Efficacy is the maximal effect (Emax) that an agonist can produce. As a receptor antagonist does not activate receptors after binding, it is said to have zero efficacy. A competitive antagonist does not affect the Emax of the agonist, because the action of the antagonist is reversible and the maximum effect of the agonist can still be achieved by increasing the agonist concentration. A non-competitive antagonist (or allosteric antagonist) lowers the Emax of an agonist: the Emax decreases as the antagonist concentration rises, so a higher concentration of antagonist results in a lower Emax. The maximal efficacy of the agonist is reduced because the inhibition cannot be reversed by increasing the agonist concentration. Potency Potency is the amount of drug needed to give a certain therapeutic effect. It is affected by the drug's affinity for the receptors and the number of receptors available. For antagonists, the half-maximal inhibitory concentration (IC50) is used to measure potency. IC50 is the concentration of antagonist needed to give 50% inhibition. It can be directly compared with EC50, which is commonly used to measure the potency of an agonist; EC50 is the concentration of agonist needed to give 50% of the maximal response. IC50 is significant in determining the optimal dose of an antagonist. A high concentration of an antagonist in the body may result in cellular toxicity and damage to the cell membrane. A lower IC50 means the inhibitory effect can be achieved with a lower concentration of antagonist, and therefore with a lower risk of toxicity. For example, the IC50 of antagonists of cancer cell growth is essential for determining the optimal dose which inhibits cancer cells while inducing less harmful systemic effects in the body. Therapeutic index The therapeutic index (TI) is used to quantify the risks and benefits of a certain drug. It describes the relationship between the toxic dose and the minimum effective dose, thus providing important insight into the safety of a drug. The therapeutic index is calculated using the following equation: TI = TD50 / ED50, where TD50 is the dose at which toxicity presents in 50% of the population, and ED50 is the dose needed to produce 50% of the maximal response. From the equation, a high TI indicates that the drug needs a high dose to induce toxicity in 50% of the population or a low dose to achieve the minimum effective dose, and vice versa. In the case of physiological antagonists, for example, insulin has a narrow TI. A narrow TI indicates that either an excess or a lack of insulin can cause significant risks. On one hand, a lack of insulin may result in high blood glucose levels and kidney or cardiovascular damage. On the other hand, excess insulin may result in insulin-induced hypoglycemia, as mentioned above. Another example is dimercaprol, a chemical antagonist used in treating metal toxicity. Dimercaprol has a narrow TI, so it has been replaced by its derivative, 2,3-dimercaptosuccinic acid (DMSA).
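These dose-response quantities can be made concrete with a small numerical sketch. The snippet below uses the Hill equation, with the competitive-antagonist shift modeled by the standard Gaddum/Schild factor (1 + [B]/Kb); all parameter values (Emax, EC50, Kb, TD50, ED50) are invented for illustration:

# Sketch: agonist dose-response with a competitive antagonist, plus a
# therapeutic index calculation. All numbers are illustrative.
def effect(agonist_conc, emax=100.0, ec50=1.0, hill=1.0,
           antagonist_conc=0.0, kb=1.0):
    """Hill equation; a competitive antagonist scales EC50 by (1 + [B]/Kb)."""
    ec50_apparent = ec50 * (1.0 + antagonist_conc / kb)
    a_n = agonist_conc ** hill
    return emax * a_n / (a_n + ec50_apparent ** hill)

# Rightward shift without loss of Emax (surmountable antagonism):
print(effect(1.0))                          # 50.0 (agonist at its EC50)
print(effect(1.0, antagonist_conc=9.0))     # ~9.1, shifted curve
print(effect(1000.0, antagonist_conc=9.0))  # ~99.0, Emax still reachable

# Therapeutic index TI = TD50 / ED50 (illustrative doses, in mg):
td50, ed50 = 500.0, 25.0
print("TI =", td50 / ed50)  # 20.0 -> comparatively wide safety margin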
Upregulation of receptors in functional antagonists Upregulation of receptors is an increase in the number of receptors or in their sensitivity. The receptors involved in functional antagonism are regulated in sensitivity, number and location, so changes in receptors are common. Long-term use of an antagonist drug, or continuous exposure to an antagonist, may cause upregulation and hypersensitivity of receptors, meaning an increase in both the number and the sensitivity of receptors. The increase in receptor number is due to increased expression of receptors after prolonged inhibition. The upregulation of receptors is clinically important. One example is the upregulation of β-receptors caused by β-receptor antagonists (also called β-blockers). Prolonged use of β-blockers results in the blockade of β-receptors, causing cells (mainly myocardial cells) to increase their expression of β-receptors. After the blockade is removed, more receptors are available for stimulation, resulting in a higher sensitivity of β-receptors known as β-receptor hypersensitivity. Abrupt discontinuation of a β-blocker may therefore aggravate coronary artery disease or tachycardia, or even lead to sudden cardiac death. To prevent these adverse effects, β-blocker doses must be reduced gradually over 10–14 days. Antidotes Antidotes are agents that can neutralise the effects of a poison or toxin. Antidotes counteract the effects of toxins in many ways, such as by blocking the absorption of the toxin, binding and neutralising the poison, opposing the toxin's end-organ effects, or blocking the toxin's conversion to more hazardous metabolites. In addition to lowering the amount of free or active poison present, antidote delivery may also lessen the toxin's effects on organs through competitive inhibition, receptor blockade, or direct antagonistic interaction. The therapeutic index or ratio (TD50/ED50), which is the ratio of the toxic dose (TD) or lethal dose (LD) to the effective dose (ED), determines the level of safety associated with a substance. Mechanism of action Decrease the active toxin level Agents that "bind" to the toxin can reduce the free or active toxin present. This binding can be nonspecific or specific. Activated charcoal is the most frequently utilised non-specific binding agent, as it has a strong adsorption capacity and can prevent the toxin's enterohepatic recirculation. Chelation agents, immunotherapy, and bioscavenger therapy are examples of specific binders. Urinary alkalization or hemadsorption may improve elimination in some circumstances. Block the site of action of the toxin This might occur either at the enzyme or at the receptor level. There are two possible approaches at the enzyme level: competitive inhibition or reactivation of enzyme activity. Ethyl alcohol or fomepizole used in methyl alcohol or ethylene glycol poisoning is a typical example of competitive enzyme inhibition. By competing with methyl alcohol and ethylene glycol for alcohol dehydrogenase (ADH), these drugs reduce the production of harmful metabolites. At the receptor level, the traditional antidotes include naloxone and flumazenil. Flumazenil functions as a competitive antagonist at the benzodiazepine site of the GABA-A receptor complex. By doing this, it reduces the inward chloride current and reverses CNS and respiratory depression. Flumazenil is useful in treating benzodiazepine-induced coma and preventing its recurrence.
Decreasing the toxic metabolite Antidotes can be employed either to mop up hazardous metabolites or to change them into less toxic forms once they have developed. Hepatic glutathione stores are replenished by N-acetylcysteine, and this process leads to the conjugation of the poisonous metabolite N-acetyl-p-benzoquinone imine (NAPQI). See also Receptor antagonist Receptor Dose–response relationship Pharmacodynamics Antidotes References Medicinal chemistry Drugs Pharmacodynamics
Drug antagonism
[ "Chemistry", "Biology" ]
3,800
[ "Pharmacology", "Products of chemical industry", "Pharmacodynamics", "nan", "Medicinal chemistry", "Biochemistry", "Chemicals in medicine", "Drugs" ]
76,557,122
https://en.wikipedia.org/wiki/Oleg%20Starykh
Oleg Aleksandrovich Starykh () is a Russian physicist. Starykh earned a Master of Science in physics from the Moscow Institute of Physics and Technology, then worked as a researcher for the Institute for High Pressure Physics before obtaining a doctorate in theoretical condensed matter physics at the Russian Academy of Sciences. Starykh split his postdoctoral research experience among the University of Houston's Texas Center for Superconductivity, the University of California, Davis, the University of Florida, and Yale University between 1993 and 2000. Starykh subsequently began his teaching career at Hofstra University as an assistant professor. In 2004, he joined the University of Utah faculty as an associate professor, and was promoted to a full professorship in 2012. Starykh was elected a fellow of the American Physical Society in 2020. References Living people Year of birth missing (living people) 20th-century Russian physicists Fellows of the American Physical Society 21st-century Russian physicists Russian expatriates in the United States Hofstra University faculty University of Utah faculty Moscow Institute of Physics and Technology alumni Condensed matter physicists
Oleg Starykh
[ "Physics", "Materials_science" ]
224
[ "Condensed matter physicists", "Condensed matter physics" ]
76,558,849
https://en.wikipedia.org/wiki/V762%20Cassiopeiae
V762 Cassiopeiae is a red supergiant and a variable star located about 2,500 light-years away in the Cassiopeia constellation. Its apparent magnitude varies between 5.82 and 5.95, which makes it faintly visible to the naked eye under dark skies. It is a relatively cool star with an average surface temperature of 3,869 K. Characteristics V762 Cassiopeiae has a spectral classification of K0I, meaning that it is an evolved K-type red supergiant star. It is estimated to be ten million years old, has around 16.9 times the Sun's mass and has expanded to 266 times the Sun's diameter. It radiates 15,000 times the solar luminosity from its photosphere at an effective temperature of 3,869 K, which gives it an orange-red hue, typical of red supergiants. Parallax measurements from the Gaia spacecraft show that V762 Cassiopeiae is located 2,480 light-years away. At that distance, V762 Cassiopeiae's apparent brightness is diminished by 1.04 magnitudes due to interstellar extinction. Hipparcos satellite data showed that the star is variable, and because of that it was given the variable-star designation V762 Cassiopeiae in 1999. The variability amplitude in visible light is only about 0.1 magnitudes. The International Variable Star Index lists it as an irregular variable, but the General Catalogue of Variable Stars (GCVS) classifies it as a BY Draconis star. The GCVS designation is erroneous, since BY Draconis variability is a characteristic of main sequence stars. Distance and titleholding Some websites claim V762 Cassiopeiae is the "farthest star visible to the naked eye", at a distance of 16,308 light-years. This is inconsistent with parallax measurements from both Hipparcos, which found a parallax of , corresponding to a distance of about 2,800 light-years, and Gaia DR3, which lists a parallax of , corresponding to a distance of about 2,500 light-years. The websites claiming that V762 Cassiopeiae is the "farthest star visible to the naked eye" also do not cite any references for the distance of 16,308 light-years, making the origin of this value uncertain. Notes References Cassiopeia (constellation) BD+70 0090 007389 005926 0365 Cassiopeiae, V762 K-type supergiants
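The parallax-to-distance conversion underlying these figures is d(parsecs) = 1000 / parallax(milliarcseconds); a short sketch follows. The 1.3 mas input is an illustrative value chosen to be consistent with the roughly 2,500 light-year Gaia distance quoted above, not a catalogued measurement:

# Sketch: converting a stellar parallax to a distance.
LY_PER_PARSEC = 3.26156  # light-years per parsec

def parallax_mas_to_lightyears(parallax_mas):
    """d [pc] = 1000 / p [mas]; then convert parsecs to light-years."""
    distance_pc = 1000.0 / parallax_mas
    return distance_pc * LY_PER_PARSEC

# Illustrative parallax near the value implied by ~2,500 ly:
print(round(parallax_mas_to_lightyears(1.3)))  # ~2509 light-years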
V762 Cassiopeiae
[ "Astronomy" ]
551
[ "Cassiopeia (constellation)", "Constellations" ]
77,977,581
https://en.wikipedia.org/wiki/Cholera%20autoinducer-1
Cholera autoinducer-1 (CAI-1) is an autoinducer signaling molecule present in the bacterium Vibrio cholerae, which causes cholera. CAI-1 is known structurally as (S)-3-hydroxytridecan-4-one and regulates the expression of virulence factors. Discovery V. cholerae, the bacterium that produces CAI-1, was discovered twice, independently and at different locations: by Filippo Pacini in 1854 and again by Robert Koch in 1883. CAI-1 has no readily apparent recorded discovery date, but some of the earliest studies on autoinducers within, and the quorum-sensing nature of, V. cholerae were done by the microbiology department of Harvard Medical School. Prior to the study of V. cholerae, researchers obtained data from a close relative, Vibrio harveyi. Notably, the accumulation of autoinducer in colonies of V. cholerae decreases LuxO repression of hapR expression. Research indicates that hapR plays a role in protecting the cell from virulence factors, thereby helping the cell survive. Chemical structure and properties CAI-1 is identified as (S)-3-hydroxytridecan-4-one. It is produced by multiple Vibrio species, functioning as an intra-genus signal. CAI-1 works directly with AI-2, which aids interspecies communication. AI-2 is identified as (2S,4S)-2-methyl-2,3,3,4-tetrahydroxytetrahydrofuran borate. The two autoinducers are created by the synthases CqsA and LuxS, respectively. CAI-1 has a molecular weight of 214.34 g/mol, with the formula C13H26O2. It is synthesized by CqsA, a pyridoxal phosphate-dependent aminotransferase enzyme, although CqsA's role is not fully known. CqsA can be identified by its substrates (S)-2-aminobutyrate and decanoyl coenzyme A. Furthermore, an example of the effects of CAI-1 on the bacterium's capacity to grow and reproduce is shown by studies from the National Library of Medicine (NIH): an abundance of autoinducer within Vibrio cholerae can lead to the formation of a certain protease that, upon activation, degrades attachment sites, thereby freeing the bacterial strain from its restraints so it can be carried to another area for growth. Role in biofilm formation CAI-1 plays a critical role in regulating biofilm formation in Vibrio cholerae by controlling the switch between biofilm production and dispersal through quorum sensing. At low cell densities, when CAI-1 levels are minimal, V. cholerae favors biofilm formation as a survival strategy in environmental reservoirs. Biofilms protect the bacteria from environmental stresses and allow them to persist in aquatic environments. As cell density increases and CAI-1 accumulates, it activates the quorum sensing pathways that inhibit further biofilm formation and promote dispersal, enabling the bacteria to spread and infect new hosts. As part of the quorum sensing system, CAI-1 primarily functions to sense the presence of Vibrio species and integrates signals from multiple autoinducers. Research shows that while CAI-1 alone can initiate a response at low cell densities, both CAI-1 and the more abundantly produced autoinducer-2 (AI-2) must be present in sufficient quantities to fully repress biofilm formation. This dual autoinducer system allows V. cholerae to assess both its own population and the presence of other bacterial species in the environment, adjusting its biofilm behavior based on its surroundings. These insights suggest that CAI-1 plays a key role in ensuring that V.
cholerae only disperses from biofilms when sufficient bacterial density is achieved, and when the surrounding environment supports successful transmission and colonization. This makes CAI-1 a potential target for therapies aimed at disrupting biofilm formation to control cholera outbreaks. Role in quorum sensing Quorum sensing is essentially the intercommunication between cells that allows them to interact with and respond to their environment. V. cholerae uses two autoinducers: cholera autoinducer-1 (CAI-1) and autoinducer-2 (AI-2). CAI-1 is the major quorum-sensing signal. Quorum sensing is the cell-to-cell communication that allows organisms to share information. This process can be used to control pathogenicity and biofilm formation, determine population size, modulate virulence mechanisms, and track changes in the cell or in gene expression. Quorum sensing relies on the production, release, accumulation, and detection of autoinducers. CAI-1 and AI-2 transduce information into a cell using a phosphorylation-dephosphorylation cascade affecting target gene expression. Specifically in V. cholerae, CqsS detects CAI-1 and LuxPQ detects AI-2. Information from these two autoinducers is transduced through LuxO to set the levels of the HapR transcription factor. HapR represses genes for biofilm formation and the production of virulence factors; this is the outcome of quorum sensing at high cell densities. At lower cell densities, without autoinducers, HapR is not produced. Signaling pathway In Vibrio cholerae, it is through quorum sensing that bacterial cells are able to communicate and adjust their behavior based on the number of bacterial cells present. When the concentration of CAI-1 reaches a certain threshold, it binds to CqsS (the receptor for CAI-1) and shuts off the phosphorylation cascade that CqsS drives in its absence. Without bound CAI-1, CqsS undergoes autophosphorylation and the phosphate group is transferred to LuxO, a response regulator; active phospho-LuxO inhibits the synthesis of HapR, a master regulator, through the expression of small regulatory RNAs (sRNAs). While HapR levels are low, virulence factors, including those responsible for the production of cholera toxin, and biofilm formation are upregulated, which enhances the ability of the bacteria to cause infection. CAI-1 binding to CqsS reverses this cascade, allowing HapR to accumulate and repress these genes. The way this system works allows for efficient adaptation to the host environment and for V. cholerae to infect more efficiently. Function in Vibrio cholerae pathogenesis CAI-1 is an important signaling molecule that has a role in regulating virulence factors, such as cholera toxin, in Vibrio cholerae. This is done through a quorum sensing mechanism. As V. cholerae multiplies and CAI-1 accumulates, the expression of virulence genes, including those encoding cholera toxin, is regulated in a density-dependent manner. This toxin has the ability to disrupt the electrolyte balance in intestinal epithelial cells, which can lead to severe diarrhea, a common symptom of cholera. In addition to cholera toxin, there are other virulence factors such as surface adhesins, which are essential in helping the bacteria adhere to the intestinal mucosa.
This adhesion consequently assists the bacteria in establishing infection. Because CAI-1 helps coordinate biofilm formation, it plays a significant role in colonization of the human gut; biofilms provide a way for V. cholerae to avoid attacks from host immune responses as well as from antibiotics. The signaling cascades initiated by CAI-1 enable V. cholerae to survive and infect in the harsh conditions of the gut. Observing the important role of CAI-1 in V. cholerae can highlight possible key targets for future therapeutic intervention. References Wikipedia Student Program Cholera Signal transduction Bacterial substances
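The molecular weight quoted in the chemistry section above can be checked directly from the molecular formula C13H26O2; a small sketch using standard atomic weights, which reproduces the ~214.3 g/mol figure to within rounding of the atomic-weight values:

# Sketch: molecular weight of CAI-1 from its formula C13H26O2.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def molecular_weight(counts):
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in counts.items())

cai1 = {"C": 13, "H": 26, "O": 2}
print(round(molecular_weight(cai1), 2))  # ~214.35 g/mol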
Cholera autoinducer-1
[ "Chemistry", "Biology" ]
1,696
[ "Bacterial substances", "Signal transduction", "Biomolecules", "Bacteria", "Biochemistry", "Neurochemistry" ]
77,983,529
https://en.wikipedia.org/wiki/Actinide%20contraction
The actinide contraction is the greater-than-expected decrease in the atomic radii and ionic radii of the elements in the actinide series, from left to right. Description It is more pronounced than the lanthanide contraction because the 5f electrons are less effective at shielding than 4f electrons. It is caused by the poor shielding of nuclear charge by the 5f electrons, along with the expected periodic trend of increasing electronegativity and nuclear charge on moving from left to right. About 40-50% of the actinide contraction has been attributed to relativistic effects. A decrease in atomic radii can be observed across the 5f elements from atomic number 89, actinium, to 102, nobelium. This results in smaller than otherwise expected atomic radii and ionic radii for the subsequent d-block elements, starting with 103, lawrencium. This effect causes the radii of transition metals of groups 5 and 6 to become unusually similar, as the expected increase in radius going down a group is nearly cancelled out by the f-block insertion, and it has many other far-ranging consequences in post-actinide elements. The decrease in ionic radii (M3+) is much more uniform than the decrease in atomic radii. References Atomic radius Chemical bonding Periodic table
Actinide contraction
[ "Physics", "Chemistry", "Materials_science" ]
273
[ "Periodic table", "Atomic radius", "Condensed matter physics", "nan", "Chemical bonding", "Atoms", "Matter" ]
77,985,195
https://en.wikipedia.org/wiki/Model-Informed%20Precision%20Dosing
Model-Informed Precision Dosing (MIPD for short) is the use of pharmacometric models with computer software to optimize drug dosage for an individual patient. Developed from the late 1960s under the impetus of clinical pharmacologists such as Lewis Sheiner and Roger Jelliffe, these approaches involve applying the equations and parameters describing a drug's pharmacokinetics and pharmacodynamics to define the dosage regimen for a given individual that is most likely to produce circulating concentrations associated with maximum efficacy and minimum toxicity. Models typically take into account the patient's demographic characteristics (age, gender, ethnicity), clinical profile (body measurements, renal and hepatic function, comorbidities, co-medications, dietary habits, substance use) and possibly genetic factors (e.g. polymorphisms affecting cytochromes or drug transporters). When starting a treatment, these models can be used to select a priori the optimal dosage for a patient, based on simulations. During the treatment course, these same models can be used to integrate the results of Therapeutic Drug Monitoring (i.e. the measurement and medical interpretation of circulating drug concentrations) or the measurement of biomarkers of efficacy or toxicity, in an a posteriori approach to dose optimization derived from Bayesian inference and feedback loops. Practically, these approaches make extensive use of computer software dedicated to the clinical use of pharmacokinetic/pharmacodynamic models, belonging to the class of computerized clinical decision support tools. They complement Model-Informed Drug Development (MIDD), which is mainly carried out by pharmaceutical industry researchers prior to marketing. Prescribers are expected to make increasingly regular use of model-informed precision dosing tools for patient treatment and follow-up. Dosage individualization represents the quantitative aspect of precision medicine, while the qualitative aspect lies in the personalized choice of the best drug to treat a given pathology. This optimization of dose selection is especially desirable for drugs with a narrow therapeutic index (i.e. effective concentrations close to toxic ones). It is also important when a treatment is to be applied to patients with particular characteristics, such as children, frail elderly persons, polymorbid patients or those already heavily treated. Technical hurdles still limit the wide implementation of these approaches in clinical practice, but it is to be expected that electronic patient records will continue to develop, enabling the increasing integration of model-informed precision dosing into medical practice. References Pharmacology Chemical pathology
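A minimal numerical sketch of the a posteriori (Bayesian) step follows: a one-compartment intravenous model with a log-normal prior on clearance, updated with a single measured concentration to give a maximum a posteriori (MAP) estimate and an adjusted dose. Every number (population parameters, error model, target concentration) is invented for illustration and does not correspond to any real drug:

# Sketch: MAP Bayesian dose individualization for a 1-compartment IV model.
# Illustrative parameters only; not a real drug model.
import numpy as np
from scipy.optimize import minimize_scalar

V = 50.0       # volume of distribution (L), assumed known
CL_POP = 5.0   # population clearance (L/h), prior median
OMEGA = 0.3    # between-subject variability of log(CL)
SIGMA = 0.5    # residual error SD (mg/L)

def conc(dose_mg, cl, t_h):
    """C(t) = (Dose/V) * exp(-(CL/V) * t) for an IV bolus."""
    return (dose_mg / V) * np.exp(-(cl / V) * t_h)

def neg_log_posterior(log_cl, dose_mg, t_obs, c_obs):
    prior = ((log_cl - np.log(CL_POP)) / OMEGA) ** 2
    resid = ((c_obs - conc(dose_mg, np.exp(log_cl), t_obs)) / SIGMA) ** 2
    return 0.5 * (prior + resid)

# One observed level: 100 mg bolus, concentration 1.1 mg/L at 12 h.
res = minimize_scalar(neg_log_posterior, args=(100.0, 12.0, 1.1))
cl_map = np.exp(res.x)
print(f"MAP clearance: {cl_map:.2f} L/h")

# Choose the next dose so the predicted 12 h level hits a 1.5 mg/L target.
target = 1.5
new_dose = target * V / np.exp(-(cl_map / V) * 12.0)
print(f"Suggested dose: {new_dose:.0f} mg")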
Model-Informed Precision Dosing
[ "Chemistry", "Biology" ]
517
[ "Pharmacology", "Chemical pathology", "Medicinal chemistry stubs", "Medicinal chemistry", "Biochemistry", "Pharmacology stubs" ]
77,985,307
https://en.wikipedia.org/wiki/Flurpiridaz%20%2818F%29
Flurpiridaz (18F), sold under the brand name Flyrcado, is a cyclotron-produced radioactive diagnostic agent for use with positron emission tomography (PET) myocardial perfusion imaging under rest or stress (pharmacologic or exercise). It is given by intravenous injection. The most common adverse reactions include dyspnea (shortness of breath), headache, angina pectoris (severe pain in the chest), chest pain, fatigue, ST segment changes, flushing, nausea, abdominal pain, dizziness, and arrhythmia (irregular heartbeat). Flurpiridaz (18F) was approved for medical use in the United States in September 2024. Medical uses Flurpiridaz (18F) is indicated for positron emission tomography myocardial perfusion imaging under rest or stress (pharmacologic or exercise) in adults with known or suspected coronary artery disease to evaluate for myocardial ischemia and infarction. History Flurpiridaz F-18 is a fluorine-18-labeled agent developed by Lantheus Medical Imaging for the diagnosis of coronary artery disease. The efficacy and safety of flurpiridaz (18F) were evaluated in two prospective, multicenter, open-label clinical studies in adults with either suspected CAD (Study 1: NCT03354273) or known or suspected CAD (Study 2: NCT01347710). Study 1 evaluated the sensitivity (ability to designate an imaged patient with disease as positive) and specificity (ability to designate an imaged patient without disease as negative) of flurpiridaz (18F) for the detection of significant CAD in subjects with suspected CAD who were scheduled for invasive coronary angiography (ICA). Across three flurpiridaz (18F) imaging readers, estimates of sensitivity ranged from 74% to 89% and estimates of specificity ranged from 53% to 70% for CAD defined as at least 50% narrowing of an artery. Study 2 evaluated the sensitivity and specificity of flurpiridaz (18F) for the detection of significant CAD in subjects with known or suspected CAD who had ICA without intervention within 60 days prior to imaging or were scheduled for ICA. Across three flurpiridaz (18F) imaging readers, estimates of sensitivity ranged from 63% to 77% and estimates of specificity ranged from 66% to 86% for CAD defined as at least 50% narrowing of an artery. Society and culture Legal status Flurpiridaz (18F) was approved for medical use in the United States in September 2024. Names Flurpiridaz (18F) is the international nonproprietary name. References Further reading External links Medicinal radiochemistry PET radiotracers Radiopharmaceuticals Pyridazines Chloroarenes Tert-butyl compounds
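The sensitivity and specificity figures reported in these studies follow from simple counts of true and false classifications; a sketch with made-up numbers (the counts below are illustrative, not study data):

# Sketch: sensitivity and specificity from a 2x2 confusion table.
def sensitivity(tp, fn):
    """True-positive rate: diseased patients correctly read as positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: disease-free patients correctly read as negative."""
    return tn / (tn + fp)

# Illustrative reader counts (not from the flurpiridaz studies):
tp, fn, tn, fp = 80, 20, 60, 40
print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 80%
print(f"specificity = {specificity(tn, fp):.0%}")  # 60%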
Flurpiridaz (18F)
[ "Chemistry" ]
646
[ "Medicinal radiochemistry", "PET radiotracers", "Radiopharmaceuticals", "Medicinal chemistry", "Chemicals in medicine" ]
66,299,647
https://en.wikipedia.org/wiki/HL-2A
HL-2A (Huan-Liuqi-2A) is a medium-sized tokamak for fusion research in Chengdu, China. It was constructed by the China National Nuclear Corporation from early 1999 to 2002, based on the main components (magnet coils and plasma vessel) of the former German ASDEX device. HL-2A was the first tokamak with a divertor in China. The research goals of HL-2A are the study of fundamental fusion plasma physics to support the international ITER fusion reactor. References Tokamaks Fusion reactors
HL-2A
[ "Physics", "Chemistry" ]
117
[ "Nuclear fusion", "Fusion reactors", "Plasma physics stubs", "Plasma physics" ]
66,300,206
https://en.wikipedia.org/wiki/Dual%20Steenrod%20algebra
In algebraic topology, through an algebraic operation (dualization), there is an associated commutative algebra from the noncommutative Steenrod algebras called the dual Steenrod algebra. This dual algebra has a number of surprising benefits, such as being commutative and providing technical tools for computing the Adams spectral sequence in many cases (such as ) with much ease. Definition Recall that the Steenrod algebra $\mathcal{A}_p$ (also denoted $\mathcal{A}$) is a graded noncommutative Hopf algebra which is cocommutative, meaning its comultiplication is cocommutative. This implies that if we take the dual Hopf algebra, denoted $\mathcal{A}_p^*$, or just $\mathcal{A}^*$, then this gives a graded-commutative algebra which has a noncommutative comultiplication. We can summarize this duality through dualizing the commutative diagrams of the Steenrod algebra's Hopf algebra structure: if we dualize the multiplication $\mu\colon \mathcal{A}_p \otimes \mathcal{A}_p \to \mathcal{A}_p$ and the comultiplication $\psi\colon \mathcal{A}_p \to \mathcal{A}_p \otimes \mathcal{A}_p$, we get maps $\mu^*\colon \mathcal{A}_p^* \to \mathcal{A}_p^* \otimes \mathcal{A}_p^*$ and $\psi^*\colon \mathcal{A}_p^* \otimes \mathcal{A}_p^* \to \mathcal{A}_p^*$, giving the main structure maps for the dual Hopf algebra. It turns out there's a nice structure theorem for the dual Hopf algebra, separated by whether the prime is $2$ or odd. Case of p=2 In this case, the dual Steenrod algebra is a graded commutative polynomial algebra $\mathcal{A}^* = \mathbb{Z}/2[\xi_1, \xi_2, \ldots]$ where the degree $\deg(\xi_n) = 2^n - 1$. Then, the coproduct map is given by $\Delta\colon \mathcal{A}^* \to \mathcal{A}^* \otimes \mathcal{A}^*$ sending $\Delta(\xi_n) = \sum_{0 \leq i \leq n} \xi_{n-i}^{2^i} \otimes \xi_i$, where $\xi_0 = 1$. General case of p > 2 For all other prime numbers, the dual Steenrod algebra is slightly more complex and involves a graded-commutative exterior algebra in addition to a graded-commutative polynomial algebra. If we let $\Lambda(x, y)$ denote an exterior algebra over $\mathbb{Z}/p$ with generators $x$ and $y$, then the dual Steenrod algebra has the presentation $\mathcal{A}^* = \mathbb{Z}/p[\xi_1, \xi_2, \ldots] \otimes \Lambda(\tau_0, \tau_1, \ldots)$, where $\deg(\xi_n) = 2(p^n - 1)$ and $\deg(\tau_n) = 2p^n - 1$. In addition, it has the comultiplication $\Delta$ defined by $\Delta(\xi_n) = \sum_{0 \leq i \leq n} \xi_{n-i}^{p^i} \otimes \xi_i$ and $\Delta(\tau_n) = \tau_n \otimes 1 + \sum_{0 \leq i \leq n} \xi_{n-i}^{p^i} \otimes \tau_i$, where again $\xi_0 = 1$. Rest of Hopf algebra structure in both cases The rest of the Hopf algebra structures can be described exactly the same in both cases. There is both a unit map $\eta\colon \mathbb{Z}/p \to \mathcal{A}^*$ and a counit map $\varepsilon\colon \mathcal{A}^* \to \mathbb{Z}/p$, which are both isomorphisms in degree $0$: these come from the original Steenrod algebra. In addition, there is also a conjugation map $c$ defined recursively by the equations $c(\xi_0) = 1$ and $\sum_{0 \leq i \leq n} \xi_{n-i}^{p^i} c(\xi_i) = 0$ for $n > 0$. In addition, we will denote by $\overline{\mathcal{A}^*}$ the kernel of the counit map $\varepsilon$, which is isomorphic to $\mathcal{A}^*$ in degrees $> 0$. See also Adams-Novikov spectral sequence References Algebraic topology Hopf algebras Homological algebra
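As a small worked instance of the coproduct formula at $p = 2$ (using Milnor's indexing as reconstructed above), expanding the sum for $n = 2$ gives

$\Delta(\xi_2) = \sum_{0 \leq i \leq 2} \xi_{2-i}^{2^i} \otimes \xi_i = \xi_2 \otimes 1 + \xi_1^2 \otimes \xi_1 + 1 \otimes \xi_2,$

since $\xi_0 = 1$; the three terms come from $i = 0, 1, 2$ respectively.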
Dual Steenrod algebra
[ "Mathematics" ]
479
[ "Mathematical structures", "Algebraic topology", "Fields of abstract algebra", "Topology", "Category theory", "Homological algebra" ]
66,307,448
https://en.wikipedia.org/wiki/Neural%20network%20quantum%20states
Neural Network Quantum States (NQS or NNQS) is a general class of variational quantum states parameterized in terms of an artificial neural network. It was first introduced in 2017 by the physicists Giuseppe Carleo and Matthias Troyer to approximate wave functions of many-body quantum systems. Given a many-body quantum state $|\Psi\rangle$ comprising $N$ degrees of freedom and a choice of associated quantum numbers $s_1, \ldots, s_N$, then an NQS parameterizes the wave-function amplitudes $\langle s_1, \ldots, s_N | \Psi; W \rangle = F(s_1, \ldots, s_N; W)$, where $F$ is an artificial neural network of parameters (weights) $W$, $N$ input variables ($s_1, \ldots, s_N$) and one complex-valued output corresponding to the wave-function amplitude. This variational form is used in conjunction with specific stochastic learning approaches to approximate quantum states of interest. Learning the Ground-State Wave Function One common application of NQS is to find an approximate representation of the ground state wave function of a given Hamiltonian $H$. The learning procedure in this case consists in finding the best neural-network weights that minimize the variational energy $E(W) = \frac{\langle\Psi|H|\Psi\rangle}{\langle\Psi|\Psi\rangle}$. Since, for a general artificial neural network, computing the expectation value is an exponentially costly operation in $N$, stochastic techniques based, for example, on the Monte Carlo method are used to estimate $E(W)$, analogously to what is done in Variational Monte Carlo; see for example for a review. More specifically, a set of samples $s^{(1)}, \ldots, s^{(M)}$, with $M \gg 1$, is generated such that they are distributed according to the Born probability density $P(s) = \frac{|F(s; W)|^2}{\langle\Psi|\Psi\rangle}$. Then it can be shown that the sample mean of the so-called "local energy" $E_{\mathrm{loc}}(s) = \frac{\langle s|H|\Psi\rangle}{\langle s|\Psi\rangle}$ is a statistical estimate of the quantum expectation value $E(W)$, i.e. $E(W) \approx \frac{1}{M}\sum_{i=1}^{M} E_{\mathrm{loc}}(s^{(i)})$. Similarly, it can be shown that the gradient of the energy with respect to the network weights is also approximated by a sample mean, $\frac{\partial E(W)}{\partial W_k} \approx \frac{2}{M}\,\mathrm{Re}\sum_{i=1}^{M}\left(E_{\mathrm{loc}}(s^{(i)}) - \bar{E}\right) O_k^*(s^{(i)})$, where $\bar{E}$ is the sample mean of the local energy and $O_k(s) = \frac{\partial \log F(s; W)}{\partial W_k}$ can be efficiently computed, in deep networks, through backpropagation. The stochastic approximation of the gradients is then used to minimize the energy, typically using a stochastic gradient descent approach. When the neural-network parameters are updated at each step of the learning procedure, a new set of samples is generated, in an iterative procedure similar to what is done in unsupervised learning. Connection with Tensor Networks Neural-network representations of quantum wave functions share some similarities with variational quantum states based on tensor networks. For example, connections with matrix product states have been established. These studies have shown that NQS support volume-law scaling for the entropy of entanglement. In general, given an NQS with fully-connected weights, it corresponds, in the worst case, to a matrix product state of exponentially large bond dimension in $N$. See also Differentiable programming References Quantum mechanics Quantum Monte Carlo Machine learning
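A minimal self-contained sketch of this sampling-based energy estimate is given below, for a small transverse-field Ising chain with a simple product/Jastrow-style log-amplitude standing in for the neural network. The ansatz, its parameters, and the Hamiltonian couplings are all illustrative choices, not the architecture of the original work:

# Sketch: variational Monte Carlo energy estimate for an NQS-style ansatz.
# Toy model: 1D transverse-field Ising chain,
# H = -J * sum_i s_i s_{i+1} - h * sum_i sigma^x_i (periodic boundary).
import numpy as np

rng = np.random.default_rng(0)
N, J, h = 10, 1.0, 0.5
a = rng.normal(0, 0.1, N)   # illustrative "network" weights (on-site)
b = rng.normal(0, 0.1, N)   # illustrative pair weights

def log_psi(s):
    """Log-amplitude of a simple Jastrow-style ansatz (stand-in for a NN)."""
    return np.dot(a, s) + np.dot(b, s * np.roll(s, -1))

def local_energy(s):
    """E_loc(s) = <s|H|psi>/<s|psi> in the sigma^z basis."""
    diag = -J * np.sum(s * np.roll(s, -1))           # zz term is diagonal
    offdiag = 0.0
    for i in range(N):                               # sigma^x flips spin i
        s_flip = s.copy(); s_flip[i] *= -1
        offdiag += -h * np.exp(log_psi(s_flip) - log_psi(s))
    return diag + offdiag

# Metropolis sampling from |psi(s)|^2 via single spin flips.
s = rng.choice([-1.0, 1.0], size=N)
samples = []
for step in range(20000):
    i = rng.integers(N)
    s_new = s.copy(); s_new[i] *= -1
    # |psi(s')/psi(s)|^2 = exp(2 * (log_psi(s') - log_psi(s))) for real weights
    if rng.random() < np.exp(2 * (log_psi(s_new) - log_psi(s))):
        s = s_new
    if step > 2000 and step % 10 == 0:   # burn-in, then thin
        samples.append(local_energy(s))

print("estimated E =", np.mean(samples))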
Neural network quantum states
[ "Physics", "Chemistry", "Engineering" ]
530
[ "Quantum chemistry", "Machine learning", "Quantum mechanics", "Artificial intelligence engineering", "Quantum states", "Quantum Monte Carlo" ]
74,990,551
https://en.wikipedia.org/wiki/Competence%20stimulating%20peptide
Competence stimulating peptides (CSP) are chemical messengers that assist the initiation of quorum sensing, and exist in many bacterial genera. Bacterial transformation of DNA is driven by CSP-coupled quorum sensing. Competence stimulating peptides are a subset of proteins that promote quorum sensing in numerous bacterial genera including Streptococcus and Bacillus. Quorum sensing contributes to the regulation of specific gene expression in response to fluctuations in cell population density. Streptococcus pneumoniae, a highly studied gram-positive bacterium, is capable of quorum sensing and can release autoinducers, chemical signals whose concentration increases with density. CSPs are part of a unique form of regulation involved in DNA processing. This form of DNA processing starts abruptly and at the same time in all cells of a constantly or exponentially growing culture, and then rapidly decreases after about 12 minutes of exponential growth. Background Competence is the ability of bacteria to pull DNA fragments from the environment and integrate them into their chromosome. Competence stimulating peptide is a 17-amino acid signal peptide that triggers quorum sensing, which aids competence, biofilm formation, and virulence. The propensity of S. pneumoniae to become competent is critical to the bacterium's development of antibiotic resistance. In the species whose development of competence has been studied, a substantial fraction of cells in culture becomes competent only under specific growth conditions (e.g. growth-limiting conditions). S. pneumoniae is unique in the sense that virtually all cells of a culture develop the ability to become competent at the same time. This competency period only lasts for a short time, and studies indicate that it does not affect the growth rate of the culture. There are two main specificity groups that S. pneumoniae can be divided into based on the CSP signal they produce and their compatible receptors. The CSP1 signal is received by the receptor ComD1 and the CSP2 signal is received by ComD2. Physiology and biochemistry Streptococcus pneumoniae is one of the most highly studied bacterial species containing CSP, though other genera and species also utilize the hormone-like protein. Variations in structure, receptor specificity, and codon sequence occur even between different strains of the same species. However, homologous CSPs retain a single negatively charged N-terminus, an arginine residue in position three (C3), and a positively charged C-terminus. Signal-receptor specificity is demonstrated in streptococcal species through the relationship between the CSP1 and CSP2 signals and the receptors ComD1 and ComD2. Variations in receptor specificity and composition can be estimated based on nuclear magnetic resonance (NMR) spectroscopy analysis. Alterations in the structure of CSP signals, such as CSP1 and CSP2, have been shown to inhibit the cellular response to these peptides, often resulting in reduced biofilm production. Replacement of the first glutamate residue in CSP1 inhibits receptor activation of competency genes, and hydrophobic regions on the CSP1 molecule play key roles in effective ComD1 and ComD2 binding.
Implications in health and industry Quorum sensing bacteria within the human microbiome are responsible for many diseases, including sinusitis, otitis media, pneumonia, bacteraemia, osteomyelitis, septic arthritis, and meningitis. In the United States alone, more than 22,000 deaths per year are attributed to this pathogen. S. pneumoniae uses the competence stimulating peptide and quorum sensing to initiate its attack, establish an infection, and develop antibiotic resistance genes. Overall, the competence stimulating peptide allows S. pneumoniae to mount a more pervasive attack on the human host. Current studies in health and industry center on understanding and intercepting the competence system of S. pneumoniae. The goal is to limit cell–cell communication in the hope of attenuating S. pneumoniae infectivity. Inhibiting the competence stimulating peptide shows potential as a way to combat pneumococcal infections. References Peptides Microbial population biology
Competence stimulating peptide
[ "Chemistry" ]
987
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
74,991,751
https://en.wikipedia.org/wiki/Brian%20Green%20%28chemist%29
Brian Noel Green OBE (25 December 1933 – 17 December 2021) was an English mass spectrometrist. Early life and career Green was born in Urmston, Manchester, UK, on Christmas Day 1933. He attended Manchester Grammar School and in 1955 graduated from Manchester University. Green had a long career at Metropolitan-Vickers before moving to VG MICROMASS in 1972. Honours and awards In 1985, Green was awarded the Order of the British Empire for contributions to mass spectrometry, and in 1996, the British Mass Spectrometry Society awarded him the Aston Medal. Legacy In Green's honour, the British Mass Spectrometry Society now presents the B. N. Green Prize to an early career scientist with the best flash oral presentation at their annual meeting. Notable works Ferrige, A.G., Seddon, M.J., Green, B.N., Jarvis, S.A., Skilling, J. and Staunton, J. (1992), Disentangling electrospray spectra with maximum entropy. Rapid Commun. Mass Spectrom., 6: 707–711. https://doi.org/10.1002/rcm.1290061115 The Analysis of Human Haemoglobin Variants Using Mass Spectrometry. Micromass UK Ltd: Wilmslow (2021, ISBN 978-1-5262-0895-8) References 1933 births 2021 deaths People educated at Manchester Grammar School 20th-century English chemists
Brian Green (chemist)
[ "Physics", "Chemistry" ]
318
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
74,998,455
https://en.wikipedia.org/wiki/Hydrogen%20fuel%20cell%20power%20plant
A hydrogen fuel cell power plant is a type of fuel cell power plant (or station) which uses a hydrogen fuel cell to generate electricity for the power grid. Such plants are larger in scale than backup generators such as the Bloom Energy Server and can be up to 60% efficient in converting hydrogen to electricity. The fuel cell process produces little to no nitrogen oxides, which are produced when hydrogen is burned in a combined cycle hydrogen power plant. If the hydrogen is produced with electrolysis (so-called green hydrogen), this could be a solution to the energy storage problem of renewable energy. Shinincheon Bitdream Hydrogen Fuel Cell Power Plant The Shinincheon Bitdream Hydrogen Fuel Cell Power Plant in Incheon, South Korea can produce 78.96 megawatts of power. It opened in 2021 and is one of the first large-scale fuel cell power plants built for the grid rather than as a backup generator. The plant also purifies the air, drawing in and filtering out an estimated 2.4 tons of fine dust per year. It also produces hot water as a by-product, which is used to heat local homes (district heating). A rough fuel-consumption estimate for a plant of this size is worked through at the end of this article. Cogeneration or combined cycle Fuel cells produce large amounts of hot water, and a cogeneration or combined cycle configuration can put this heat to further use or generate additional electricity with a steam turbine, raising efficiency above 80% with a phosphoric acid fuel cell. Water uses Further studies are needed to determine whether the by-product water is potable. Dry regions with water shortages could use the water for agriculture or other greywater applications. Another use would be to feed the hot water by-product into high-temperature electrolysis to produce more hydrogen fuel. High temperature electrolysis at nuclear power plants High-temperature electrolysis at nuclear power plants could produce hydrogen at scale and more efficiently. The DOE Office of Nuclear Energy has demonstration projects to test hydrogen production at three nuclear facilities in the United States: Nine Mile Point Nuclear Generating Station in Oswego, NY Davis–Besse Nuclear Power Station in Oak Harbor, Ohio Prairie Island Nuclear Power Plant in Red Wing, Minnesota See also Strategic natural gas reserve Comparison of fuel cell types Green energy Hydrogen economy Hydrogen fuel cell train Hydrogen fuel enhancement Hydrogen storage Underground hydrogen storage Blue hydrogen Intermountain Power Plant Smart grid Pumped-storage hydroelectricity Methanol economy References Mechanical engineering Power station technology Energy conversion Fuel technology
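As a rough worked example (the 78.96 MW output and 60% efficiency figures come from the article above; the lower heating value of hydrogen, roughly 120 MJ/kg, is a standard figure supplied here as an assumption), the plant's hydrogen consumption can be estimated:

```latex
% Hydrogen input power at 60% electrical efficiency:
P_{\mathrm{H_2}} = \frac{P_{\mathrm{elec}}}{\eta}
                 = \frac{78.96\ \mathrm{MW}}{0.60}
                 \approx 131.6\ \mathrm{MW}

% Mass flow of hydrogen, taking LHV \approx 120 MJ/kg:
\dot{m}_{\mathrm{H_2}} = \frac{131.6\ \mathrm{MW}}{120\ \mathrm{MJ/kg}}
                       \approx 1.1\ \mathrm{kg/s}
                       \approx 95\ \mathrm{t/day}
```

The same bookkeeping motivates high-temperature electrolysis: the electrical work required to split water is ΔG = ΔH − TΔS, so operating at higher temperature shifts part of the roughly 286 kJ/mol (HHV) total demand from electricity to heat, which reactor heat or a fuel cell's hot-water by-product can supply.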
Hydrogen fuel cell power plant
[ "Physics", "Engineering" ]
492
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
75,000,176
https://en.wikipedia.org/wiki/International%20Society%20for%20Transgenic%20Technologies
The International Society for Transgenic Technologies (ISTT) is an organization dedicated to advancing research, communication, and technology exchange regarding transgenic technologies. Purpose Support for scientific research and education in the field of generating genetically modified model organisms in adherence with the 3Rs principles. Promotion of science and technology used in the generation and analysis of genetically modified organisms for biomedical research and biotechnological application. Providing the organizational framework for a scientific community that includes academic and industrial scientists, students and technical assistants, and in general, any individuals with an interest in the generation and analysis of genetically modified organisms. Providing a communication and knowledge sharing platform that brings together scientists from academic research and industry, as well as research technology experts. Organization of a regular international scientific conference entitled "Transgenic Technology Meeting". Publication of specialist information in the form of books, protocols, and other specialist texts in the field of transgenic technologies. Organization and promotion of courses, seminars, and other educational activities for training in transgenic technologies. Cooperation with other national and international societies with similar aims (e.g., IMGS, AALAS, AAALAC, FELASA). Providing information to the public about the benefits associated with using and applying transgenic technologies. Providing local, national and international bodies with expert advice and guidance on scientific, technical or other aspects of generating genetically modified organisms. Resources and education Every one and a half years the ISTT organizes an international scientific conference, the Transgenic Technology Meeting, also known as the TT Meeting. To promote communication and technology exchange, the website of the society publishes information and protocols related to transgenic technologies as well as the locations of transgenic service facilities, recognized as a valuable resource in the scientific literature. A collection of ISTT subject-related protocols has been published in the book Advanced Protocols for Animal Transgenesis – an ISTT Manual. The society is also associated with the peer-reviewed scientific journal Transgenic Research, which publishes scientific findings on transgenic and genome-edited higher model organisms. As a platform for the rapid exchange of scientific information, the ISTT hosts two mailing lists: the public transgenic-list (often referred to as tg-l) and the members-only ISTT-list, with around 1,500 and 660 participants respectively (as of April 2024). Presidents Rebecca Haffner-Krausz (since 2023) Ernst Martin Füchtbauer (2020–2023) Wojtek Auerbach (2017–2019) Jan Parker-Thornburg (2014–2016) Lluís Montoliu (2006–2014) Awards ISTT Prize The ISTT Prize recognizes individuals for their outstanding contributions to the field of transgenic technologies and is presented at the Transgenic Technology Meeting. Prominent winners include Ralph Brinster (2011), Janet Rossant (2014), Mario Capecchi (2017) and Rudolf Jaenisch (2025). ISTT Young Investigator Award The ISTT Young Investigator Award recognizes outstanding achievements by young scientists whose work is advancing the field of transgenic technologies with new ideas and who have recently received an academic degree.
The ISTT Young Investigator Award is presented at the Transgenic Technology Meetings. Prominent winners include Feng Zhang (2014) and Alexis Komor (2017), recognized for their work on genome editing in model organisms. 3Rs Award The 3Rs Award recognizes outstanding achievements by a researcher or research team that advances the field of transgenic technologies with new methods and improvements in strict accordance with the 3Rs principles for the reduction, refinement, and replacement of animals used in research. The prize is awarded during the Transgenic Technology Meetings. See also International Mammalian Genome Society List of genetics research organizations References External links Official website from the International Society for Transgenic Technologies Genetic engineering Genome editing Scientific societies International educational organizations
International Society for Transgenic Technologies
[ "Chemistry", "Engineering", "Biology" ]
770
[ "Genetics techniques", "Biological engineering", "Genome editing", "Genetic engineering", "Molecular biology" ]
72,203,153
https://en.wikipedia.org/wiki/Phosphorus%20porphyrin
Phosphorus-centered (or P-centered) porphyrins are conjugated polycyclic ring systems consisting of either four pyrroles with inward-facing nitrogens and a phosphorus atom at their core, or porphyrins with one of the four pyrroles substituted for a phosphole. Unmodified porphyrins are composed of pyrroles linked by unsaturated hydrocarbon bridges, often acting as multidentate ligands centered around a transition metal such as Cu(II), Zn(II), Co(II), or Fe(III). Being highly conjugated molecules with many accessible energy levels, porphyrins are used in biological systems to perform light-energy conversion and are modified synthetically to perform similar functions as photoswitches or catalytic electron carriers. Phosphorus(III) and (V) ions are much smaller than the typical metal centers and bestow distinct photochemical properties on the porphyrin. Similar compounds with other pnictogen cores (As, Sb, Bi) or different polycyclic rings coordinated to phosphorus result in other changes to the porphyrin's chemistry. Synthesis Phosphorus core Early experiments with group 15 elements at the porphyrin core by Barbour et al. in 1992 included syntheses of P-centered porphyrin compounds from meso-tetra-p-tolylporphyrin (TTP). Phosphorus oxychloride (POCl3) added to H2TTP forms the P-centered porphyrin molecule [P(TTP)Cl2]+Cl−. Several chemical variants were synthesized by refluxing in pyridine solvent with alcohols to produce [P(TTP)OCH2CH3]+Cl−, [P(TTP)(O-p-C6H4OH)2]+OH−, and other similar compounds. Other researchers, including Poddutoori in 2015 and 2022, have used such synthetic methods to yield hypervalent phosphorus(V) bonded to the porphyrin along with axial alcohol substituents. Later syntheses have been performed with other phosphorus precursors, including PhPCl2 and octaethylporphyrin (OEP) in DCM to yield [POEP(Cl)2]+Cl−. PCl3/POCl3 and KPF6 yield similar porphyrin products with a PF6− counterion. Other halogens such as bromine have been used successfully in place of chlorine in this synthesis method for P-centered rings. More syntheses with complex alcohols have been reported. Porphyrins functionalized with axial carbazolylvinylnaphthalimides are synthesized using methods similar to the previously described process for binding alcohols to phosphorus cores. In a similar synthetic process, Susumu et al. linked several modified porphyrins in a center-to-edge bonding scheme. The chlorines on a P-centered porphyrin are first substituted by an external hydroxyphenyl group on another porphyrin. The substituent porphyrins are then refluxed with POCl3 to synthesize the final center-to-edge porphyrin array. The resulting P-centered complex consists of three porphyrins with phosphorus atoms bound at each core. Phosphaporphyrins Synthesis of a phosphole-substituted porphyrin, or phosphaporphyrin, follows a more complex chemical route. Phosphaporphyrins are not created using an unmodified porphyrin ring as a synthetic reagent. As reported by Matano and Imahori in 2008, a phosphaporphyrin is constructed from a phosphole linked to two pyrrole functional groups, which is then bound to another pyrrole molecule. Specifically, addition of 2,5-bis(hydroxymethyl)-1-phenyl-1-thiophosphole to excess pyrrole in the presence of BF3·OEt2 results in the phosphatripyrrane precursor. A phosphaporphyrinogen ring is formed through dehydration condensation of the phosphatripyrrane with a 2,5-difunctionalized pyrrole.
Dichloro-dicyanobenzoquinone (DDQ) oxidation of the product yields the highly conjugated 18π-system phosphaporphyrin product. Metals can be coordinated in the core of the phosphaporphyrin by introducing metal salts. Rhodium metal is very easily inserted into the core of a phosphaporphyrin without the presence of a stabilizing thiophene-substituted pyrrole. Properties Several varieties of the P-centered porphyrin exist. The porphyrin with a core phosphorus(V) ion can be tuned with additional substituents added either to the outside of the polycyclic ring system or axially to the core phosphorus. Meso-substituted porphyrins like meso-tetra-p-tolylporphyrin (TTP) and octaethylporphyrin (OEP) are often used in the synthesis of the core phosphorus porphyrin. Substituents on the hypervalent phosphorus also give rise to a diverse array of molecules with varying properties. Axial substituents on the phosphorus include a wide variety of alkyl (-CH3, -CH2CH3), alkoxy (-OCH3, -OCH2CH3), aryl (-C6H5), and halide functional groups. Crystallographic experiments reveal that substituents affect the structural properties of these complexes. The porphyrin P-N bond distances decrease as the electronegativity of the axial substituents increases. Porphyrins bound to unnaturally small ions at the core exhibit ruffling, a deviation in position of the carbon atoms from the median plane of the aromatic conjugated system. This phenomenon, like saddling and doming, has also been observed with small transition metal ions like nickel(II). The ruffling effect of phosphorus(V) in a porphyrin is apparent because of the small size of the ion. More electronegative axial groups result in greater ruffling, and large, sterically bulky groups have a similar effect. The degree to which various substituents cause ruffling was determined by Akiba et al. in 2001. Phosphaporphyrins possess phospholes usually bound to a phenyl group resting above the porphyrin plane in a trigonal pyramidal molecular geometry, as is typical of phosphorus centers. In addition to the bound phenyl group, these molecules may also possess a metal ion core that coordinates to the three pyrroles and the phosphole and distorts the naturally planar molecule. Very negative nucleus-independent chemical shift (NICS) values, used to quantify aromaticity, indicate the aromatic character of the phosphaporphyrins. These values, however, are more positive than NICS values for the undistorted four-pyrrole structure, a result of the less planar π-system in phosphaporphyrins. Photoelectrochemical properties Unmodified porphyrins have been identified and used in spectroscopic studies because of their highly conjugated π-systems. Isolated monolayers of porphyrins with dicationic transition metals yield spectroscopic data similar to those of porphyrins suspended in dilute solution. Axial ligand substitution at the metal was proposed as a method of varying the electrochemical properties and has since been done extensively with phosphorus ions. Phosphorus-core porphyrins have been researched extensively to assess their photochemical and electrochemical properties. Axial groups bonded to the core phosphorus influence the oxidation and reduction potentials of porphyrins. Just as the electronegativity of the axial group affects the P-N bonding distance and plane ruffling, it also affects the molecular redox potentials. Akiba et al. in 2002 were the first to quantify the redox potentials of porphyrin molecules with different axial groups.
Generally, as the electronegativity of the axial groups increases, the oxidative and reductive potentials become more positive, indicating that the complex can accept an electron more easily. A 2022 study by Sharma et al. compounded the axial-group electrochemical effects with the effects of aryl substituents on the outer porphyrin ring to determine their overall influence on P-centered porphyrin electrochemistry. More electron-withdrawing groups on the outside of the porphyrin, like 3,4,5-trimethoxyphenyl, resulted in a greater red-shift in the absorption spectra than less withdrawing groups like a simple phenyl. Additionally, the high oxidation potential of the modified porphyrins allows for their use in electrochemical applications. Artificial photosynthetic systems can be fine-tuned by the addition of different functional groups for catalysis and energy storage. Poddutoori et al. in 2015 explored the electrochemical applications of the P-core porphyrins deposited with Ir(III)Cp on tin(II) oxide to form a pre-catalyst for water-splitting reactions. The ability to form an effective catalytic system is a result of the high redox potential (1.62–1.65 V) of the phosphorus-modified porphyrin bound to tin(II) oxide. In 2016, he devised a similar application for P-centered porphyrins with tin(IV) oxide and Mn(II)typ. Photooxidation of the Mn complex allows the reduction of the tin(IV) ion to tin(III) through the porphyrin as an electron carrier, in a similar fashion to the 2015 publication. These photoreduction applications mimic the natural role of porphyrins in photosynthesis; however, the phosphorus(V) center allows for tuning and more wide-ranging applications than transition metal ions. Phosphaporphyrins have been studied to a lesser degree. UV-vis and cyclic voltammetry (CV) experiments with phosphaporphyrins reveal that the single pyrrole-to-phosphole substitution narrows the HOMO-LUMO gap in an 18π system, allowing for a more easily accessible excited state and unique electrochemical applications. The phosphole component was shown by Reau et al. in 2002 to be prone to hyperpolarizability. This is particularly true of phospholes bound to a palladium(II) ion, as is often the case in porphyrins. Nonlinear optical (NLO) properties like molecular hyperpolarizability in metal-phosphole hybrids yield unique electrochemical applications. A variety of phosphole-metal complexes have been synthesized to bridge copper(II) and silver(I) metal ions through phosphorus-containing chelating ligands. The new electrochemically stable complexes indicate the importance of the NLO properties that are unique to phosphaporphyrins by virtue of their phosphorus-integrated π-system. Photochemical applications As early as 2002, axial substituents on the phosphorus core have been utilized for photochemistry. Reddy and Maiya pioneered the use of a P-centered porphyrin with azobenzene substituents capable of reversible (E/Z) isomerization upon irradiation at a wavelength of 345 nm. The isomerization is made possible through quenching of the fluorescence by the highly conjugated porphyrin ring and the reversibility of the process. The high yield of both isomers after several iterations of the reaction indicates that the photoswitching is a reliable, stable process. Photovoltaic devices using NiO have also been made more efficient by hole injection from phosphorus porphyrins, yielding ground-state reactants that are regenerated at a much faster rate than is typical for a TiO2 cell.
Biological applications Photodynamic therapy (PDT) is used to target and destroy DNA in cases of extreme cell proliferation, as is common in cancer. Porphyrins are used to generate reactive oxygen species in the body to degrade nucleotides, stopping proliferation. Tetraphenylporphyrin phosphorus(V) complexes with DNA and possesses an oxidation potential larger than that of guanine (1.4–1.8 V > 1.24 V). Upon coupling to DNA fragments through electrostatic interaction, a P-centered porphyrin electron is excited with 365 nm radiation to the S2 state and proceeds down an energy cascade via two mechanisms to degrade guanine. The first mechanism involves electron transfer that degrades a sequence of consecutive guanine nucleotides, while the second proceeds via oxygen radical formation that indiscriminately destroys guanine residues in DNA. Reactivity P-centered porphyrins typically react in redox reactions, where they serve as tunable intermediate electron carriers in biological reactions for DNA degradation as well as in other industrial catalytic reactions. Tin(II/IV) oxide is photoreduced by iridium and manganese complexes through P-centered porphyrin electron carriers in catalysis. Photovoltaic cells involving NiO complexes supplemented with high-oxidation-potential P-centered porphyrins improve cell efficiency, as is the case in indium tin oxide cells with porphyrins containing axial carbazolylvinylnaphthalimides bound to a core phosphorus. Phosphorus(V) porphyrins are particularly good electron carriers in redox reactions because of their adjustable reduction potential. Poddutoori et al. in 2021 investigated the electron transfer of naphthalene, a commonly excited molecule in photochemistry, to octaethylporphyrin and TEP. The relatively low potentials of the porphyrins yield highly energetic charge-separated states upon the transfer of the electron from naphthalene. The axial and peripheral substituent diversity is key to accessing the wide range of charge-separated states and electron transfer across electronically diverse reactants to form a great variety of redox products. A rough thermodynamic bookkeeping for such photoinduced electron transfer is sketched below. Phosphaporphyrins after synthesis can be complexed with metals like the unmodified porphyrin molecule. Nickel, palladium, and platinum can be coordinated as the metal center of a phosphaporphyrin by reacting the conjugated ring with the metal salts Ni(cod)2, Pd(dba)2, and Pt(dba)2 in DCM/dichlorobenzene, respectively.
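As an illustrative aside (the potentials quoted above come from the article; the free-energy expression below is the standard Rehm-Weller treatment of photoinduced electron transfer, supplied here as general context rather than taken from any of the cited studies, and the Coulombic work term is neglected for simplicity), the feasibility of the excited porphyrin oxidizing guanine can be estimated as:

```latex
% Rehm-Weller estimate for photoinduced electron transfer
% (D = guanine donor, A = phosphorus(V) porphyrin acceptor);
% the Coulombic work term is neglected here for simplicity.
\Delta G_{\mathrm{ET}} \approx e\,[E_{\mathrm{ox}}(\mathrm{D}) - E_{\mathrm{red}}(\mathrm{A})] - E_{00}
```

Electron transfer is downhill when ΔG_ET < 0, i.e., when the excitation energy E_00 more than compensates for the gap between the guanine oxidation potential (about 1.24 V) and the porphyrin ground-state reduction potential; this is why the strongly positive potentials of phosphorus(V) porphyrins, combined with excitation at 365 nm, can drive guanine oxidation.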
Derivatives and structural analogs Other pnictogen ion cores Phosphorus(V) is a prevalent ion center in modified porphyrin complexes but is not the only group 15 element that has been used in place of a transition metal ion. Antimony and bismuth were also identified as suitable porphyrin cores as early as 1991 by Barbour et al. Stronger electron donation from the less electronegative antimony results in a more positive reduction potential than for the phosphorus ion. Sterically, arsenic(V), being a larger ion than phosphorus(V), gives a more stable, planar coordinated porphyrin, as opposed to the ruffled porphyrin coordinated to phosphorus. Conversely, a pnictogen element too large to coordinate neatly into the porphyrin hole, like bismuth, causes a disruption of the symmetry of the ring. Calixphyrins Calixphyrins are analogous to porphyrins with two of the hydrocarbon bridges between pyrroles fully saturated. Like phosphaporphyrins, a calixphyrin pyrrole can be substituted with a phosphole to form a P-centered calixphyrin. The results of the increased saturation are mixed sp2 and sp3 hybridization of the ring carbons and an extension of the metal-nitrogen bond length. Matano et al. in 2006 used the first P,S-hybridized calixphyrin coordinated to palladium to catalyze the Heck reaction to form substituted alkenes. Like porphyrins, the electronic properties of this complex are also tunable through the influence of various functional groups. In 2009, these molecules were examined in comparison to porphyrins due to their more restricted π-systems and carefully characterized with a focus on the tunability of their electronic properties with different metals and substituents. Calixpyrroles Calixpyrroles have all four hydrocarbon bridges fully saturated, breaking the conjugation between heterocycles present in other porphyrinoids. A phosphole replacing a pyrrole allows for chemistry similar to other phosphaporphyrinoids, with increased flexibility of the tetradentate ligand by virtue of sp3 hybridization. Isophlorins Isophlorins are less stable, nonaromatic complexes that engage in chemistry similar to the porphyrinoids. The 20π systems are synthesized from 18π porphyrins via redox-coupled complexation. Although the 4nπ conjugated system would suggest that the molecule exhibits antiaromatic character, geometric and magnetic criteria confirm that the complex is nonaromatic. References Phosphorus heterocycles Porphyrins
Phosphorus porphyrin
[ "Chemistry" ]
3,621
[ "Porphyrins", "Biomolecules" ]