Dataset fields per record: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list).
1,206,859
https://en.wikipedia.org/wiki/Extended%20X-ray%20absorption%20fine%20structure
Extended X-ray absorption fine structure (EXAFS), along with X-ray absorption near edge structure (XANES), is a subset of X-ray absorption spectroscopy (XAS). Like other absorption spectroscopies, XAS techniques follow Beer's law. The X-ray absorption coefficient of a material as a function of energy is obtained by directing X-rays of a narrow energy range at a sample, while recording the incident and transmitted X-ray intensity as the incident X-ray energy is incremented. When the incident X-ray energy matches the binding energy of an electron of an atom within the sample, the number of X-rays absorbed by the sample increases dramatically, causing a drop in the transmitted X-ray intensity. This results in an absorption edge. Every element has a set of unique absorption edges corresponding to the different binding energies of its electrons, giving XAS element selectivity. XAS spectra are most often collected at synchrotrons because the high intensity of synchrotron X-ray sources allows the concentration of the absorbing element to reach as low as a few parts per million; absorption would be undetectable if the source were too weak. Because X-rays are highly penetrating, XAS samples can be gases, solids or liquids.

Background

EXAFS spectra are displayed as plots of the absorption coefficient of a given material versus energy, typically in a 500–1000 eV range beginning before an absorption edge of an element in the sample. The X-ray absorption coefficient is usually normalized to unit step height. This is done by regressing a line to the region before and after the absorption edge, subtracting the pre-edge line from the entire data set and dividing by the absorption step height, which is determined by the difference between the pre-edge and post-edge lines at the value of E0 (on the absorption edge). The normalized absorption spectra are often called XANES spectra. These spectra can be used to determine the average oxidation state of the element in the sample. The XANES spectra are also sensitive to the coordination environment of the absorbing atom in the sample. Fingerprinting methods have been used to match the XANES spectra of an unknown sample to those of known "standards". Linear combination fitting of several different standard spectra can give an estimate of the amount of each of the known standard spectra within an unknown sample.
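The step-height normalization just described is straightforward to implement. The following is a minimal sketch, not tied to any particular analysis package; the function name, the array layout, the choice of fit windows, and the simple linear pre-edge and post-edge models are all illustrative assumptions.

```python
import numpy as np

def normalize_xas(energy, mu, e0, pre_window=(-150.0, -50.0), post_window=(50.0, 300.0)):
    """Normalize an absorption spectrum mu(E) to unit edge step.

    energy, mu : 1-D arrays of X-ray energy (eV) and absorption coefficient
    e0         : absorption-edge energy (eV)
    The pre- and post-edge regions (given relative to e0, in eV) are each
    fit with a straight line; the step height is their difference at e0.
    """
    rel = energy - e0

    # Fit a line to the pre-edge region and subtract it from all data.
    pre = (rel >= pre_window[0]) & (rel <= pre_window[1])
    pre_fit = np.polyfit(energy[pre], mu[pre], 1)
    mu_flat = mu - np.polyval(pre_fit, energy)

    # Fit a line to the post-edge region of the pre-edge-subtracted data.
    post = (rel >= post_window[0]) & (rel <= post_window[1])
    post_fit = np.polyfit(energy[post], mu_flat[post], 1)

    # Edge step = post-edge line minus pre-edge line (now zero) at E0.
    step = np.polyval(post_fit, e0)
    return mu_flat / step
```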
The dominant physical process in X-ray absorption is one where the absorbed photon ejects a core photoelectron from the absorbing atom, leaving behind a core hole. The ejected photoelectron's energy will be equal to that of the absorbed photon minus the binding energy of the initial core state. The atom with the core hole is now excited, and the ejected photoelectron interacts with electrons in the surrounding non-excited atoms. If the ejected photoelectron is taken to have a wave-like nature and the surrounding atoms are described as point scatterers, it is possible to imagine the backscattered electron waves interfering with the forward-propagating waves. The resulting interference pattern shows up as a modulation of the measured absorption coefficient, thereby causing the oscillation in the EXAFS spectra. A simplified plane-wave single-scattering theory has been used for interpretation of EXAFS spectra for many years, although modern methods (like FEFF, GNXAS) have shown that curved-wave corrections and multiple-scattering effects cannot be neglected.

The photoelectron scattering amplitude at low photoelectron kinetic energies (5–200 eV) becomes much larger, so that multiple scattering events become dominant in the XANES (or NEXAFS) spectra. The wavelength of the photoelectron changes as a function of the energy of the incoming photon, while the phase and amplitude of the backscattered wave depend on the type of atom doing the backscattering and on the distance of the backscattering atom from the central atom. The dependence of the scattering on atomic species makes it possible to obtain information pertaining to the chemical coordination environment of the original absorbing (centrally excited) atom by analyzing these EXAFS data.

EXAFS Equation

The effect of the backscattered photoelectron on the absorption spectrum is described by the EXAFS equation, first demonstrated by Sayers, Stern, and Lytle. The oscillatory part of the dipole matrix element is given by

χ(k) = Σ_j [N_j f_j(k) / (k R_j²)] exp(−2k²σ_j²) exp(−2R_j/λ(k)) sin(2kR_j + δ_j(k)),

where the sum is over the sets of neighbors of the absorbing atom, N_j is the number of atoms at distance R_j, k is the photoelectron wavenumber (with k² proportional to the photoelectron kinetic energy), exp(−2k²σ_j²) is the thermal vibration factor with σ_j² being the mean square amplitude of the atoms' relative displacements, λ(k) is the mean free path of the photoelectron with momentum ħk (this is related to coherence of the quantum state), and f_j(k) is an element-dependent scattering factor. The origin of the oscillations in the absorption cross section is the sin(2kR_j + δ_j(k)) term, which imposes the interference condition, leading to peaks in absorption when the wavelength of the photoelectron is equal to an integer fraction of 2R_j (the round-trip distance from the absorbing atom to the scattering atom). This is analogous to the eigenstates of the particle-in-a-box toy model. The factor δ_j(k) inside the sine is an element-dependent phase shift.
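As an illustration of the equation above, the sketch below evaluates χ(k) for a single coordination shell. The numerical values (coordination number, bond length, Debye–Waller factor, mean free path) and the constant scattering amplitude and phase are placeholder assumptions chosen only to show the functional form, not parameters for any real material; in reality f and δ are k-dependent.

```python
import numpy as np

def chi_single_shell(k, N=6, R=2.0, sigma2=0.005, lam=8.0, f=0.5, delta=0.3):
    """Single-shell EXAFS oscillation chi(k).

    k      : photoelectron wavenumber grid (1/angstrom)
    N      : number of neighbors in the shell
    R      : absorber-scatterer distance (angstrom)
    sigma2 : mean-square relative displacement (angstrom^2)
    lam    : photoelectron mean free path (angstrom)
    f, delta : scattering amplitude and phase shift, taken as constants
               here for simplicity.
    """
    return (N * f / (k * R**2)
            * np.exp(-2.0 * k**2 * sigma2)   # thermal (Debye-Waller) damping
            * np.exp(-2.0 * R / lam)         # finite mean free path
            * np.sin(2.0 * k * R + delta))   # interference term

k = np.linspace(2.0, 14.0, 500)   # avoid k = 0, where 1/k diverges
chi = chi_single_shell(k)
# Plotting k**2 * chi versus k shows the characteristic damped oscillation.
```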
Experimental considerations

Since EXAFS requires a tunable X-ray source, data are frequently collected at synchrotrons, often at beamlines which are especially optimized for the purpose. The utility of a particular synchrotron for studying a particular solid depends on the brightness of the X-ray flux at the absorption edges of the relevant elements. Recent developments in the design and quality of crystal optics have allowed some EXAFS measurements to take place in a lab setting, where the tunable X-ray source is achieved via a Rowland circle geometry. While experiments requiring high X-ray flux or specialized sample environments can still only be performed at synchrotron facilities, absorption edges in the 5–30 keV range are feasible for lab-based EXAFS studies.

Applications

XAS is an interdisciplinary technique, and its unique properties, as compared to X-ray diffraction, have been exploited for understanding the details of local structure in:

glass, amorphous and liquid systems
solid solutions
doping and ionic implantation of materials for electronics
local distortions of crystal lattices
organometallic compounds
metalloproteins
metal clusters
vibrational dynamics
ions in solutions
chemical speciation analysis

XAS provides information on the peculiarities of local structural and thermal disorder in crystalline and multi-component materials that is complementary to diffraction. The use of atomistic simulations such as molecular dynamics or the reverse Monte Carlo method can help in extracting more reliable and richer structural information.

Examples

EXAFS is, like XANES, a highly sensitive technique with elemental specificity. As such, EXAFS is an extremely useful way to determine the chemical state of practically important species which occur in very low abundance or concentration. Frequent use of EXAFS occurs in environmental chemistry, where scientists try to understand the propagation of pollutants through an ecosystem. EXAFS can be used along with accelerator mass spectrometry in forensic examinations, particularly in nuclear non-proliferation applications.

History

A very detailed, balanced and informative account of the history of EXAFS (originally called Kossel's structures) is given by R. Stumm von Bordwehr. A more modern and accurate account of the history of XAFS (EXAFS and XANES) is given in an award lecture by Edward A. Stern, the leader of the group that developed the modern version of EXAFS.

See also

X-ray absorption spectroscopy
X-ray absorption near edge structure
Surface-extended X-ray absorption fine structure

Bibliography

F. W. Lytle, "The EXAFS family tree: a personal history of the development of extended X-ray absorption fine structure", in A. Kodre and I. Arčon (editors), Proceedings of the 36th International Conference on Microelectronics, Devices and Materials, MIDEM, Postojna, Slovenia, October 2000, pp. 191–196.

External links

International X-ray Absorption Society
FEFF Project, University of Washington, Seattle
GNXAS project and XAS laboratory, Università di Camerino
EXAFS Spectroscopy Laboratory (Riga, Latvia)
Community web site for XAFS
Extended X-ray absorption fine structure
[ "Chemistry", "Materials_science", "Engineering" ]
1,759
[ "X-ray absorption spectroscopy", "Materials science", "Laboratory techniques in condensed matter physics" ]
1,207,000
https://en.wikipedia.org/wiki/Larch%20Prover
The Larch Prover, or LP for short, is an interactive theorem proving system for multi-sorted first-order logic. It was used at MIT and elsewhere during the 1990s to reason about designs for circuits, concurrent algorithms, hardware, and software. Unlike most theorem provers, which attempt to find proofs automatically for correctly stated conjectures, LP was intended to assist users in finding and correcting flaws in conjectures—the predominant activity in the early stages of the design process. It worked efficiently on large problems, had many important user amenities, and could be used by relatively naïve users.

Development

LP was developed by Stephen Garland and John Guttag at the MIT Laboratory for Computer Science, with assistance from James Horning and James Saxe at the DEC Systems Research Center, as part of the Larch project on formal specifications. It extended the REVE 2 equational term rewriting system developed by Pierre Lescanne and Randy Forgaard, with assistance from David Detlefs and Katherine Yelick. It supports proofs by equational term rewriting (for terms with associative-commutative operators), cases, contradiction, induction, generalization, and specialization. LP was written in the CLU programming language.

Sample LP Axiomatization

    declare sorts E, S
    declare variables e, e1, e2: E, x, y, z: S
    declare operators
      {}: -> S
      {__}: E -> S
      insert: E, S -> S
      __ \union __: S, S -> S
      __ \in __: E, S -> Bool
      __ \subseteq __: S, S -> Bool
      ..

    set name setAxioms
    assert
      sort S generated by {}, insert;
      {e} = insert(e, {});
      ~(e \in {});
      e \in insert(e1, x) <=> e = e1 \/ e \in x;
      {} \subseteq x;
      insert(e, x) \subseteq y <=> e \in y /\ x \subseteq y;
      e \in (x \union y) <=> e \in x \/ e \in y
      ..

    set name extensionality
    assert \A e (e \in x <=> e \in y) => x = y

Sample LP Proofs

    set name setTheorems

    prove e \in {e}
    qed

    prove \E x \A e (e \in x <=> e = e1 \/ e = e2)
      resume by specializing x to insert(e2, {e1})
    qed

    % Three theorems about union (proved using extensionality)

    prove x \union {} = x
      instantiate y by x \union {} in extensionality
    qed

    prove x \union insert(e, y) = insert(e, x \union y)
      resume by contradiction
        set name lemma
        critical-pairs *Hyp with extensionality
    qed

    prove ac \union
      resume by contradiction
        set name lemma
        critical-pairs *Hyp with extensionality
      resume by contradiction
        set name lemma
        critical-pairs *Hyp with extensionality
    qed

    % Three theorems about subset

    set proof-methods =>, normalization

    prove e \in x /\ x \subseteq y => e \in y by induction on x
      resume by case ec = e1c
        set name lemma
        complete
    qed

    prove x \subseteq y /\ y \subseteq x => x = y
      set name lemma
      prove e \in xc <=> e \in yc by <=>
        complete
        complete
      instantiate x by xc, y by yc in extensionality
    qed

    prove (x \union y) \subseteq z <=> x \subseteq z /\ y \subseteq z by induction on x
    qed

    % An alternate induction rule

    prove sort S generated by {}, {__}, \union
      set name lemma
      resume by induction
        critical-pairs *GenHyp with *GenHyp
        critical-pairs *InductHyp with lemma
    qed

Bibliography

Pascal André, Annya Romanczuk, Jean-Claude Royer, and Aline Vasconcelos, "Checking the consistency of UML class diagrams using Larch Prover", Proceedings of the 2000 International Conference on Rigorous Object-Oriented Methods, page 1, York, UK, BCS Learning & Development Ltd., Swindon, GBR, January 2000.

Boutheina Chetali, "Formal verification of concurrent programs using the Larch Prover", IEEE Transactions on Software Engineering 24:1, pages 46–62, January 1998. doi: 10.1109/32.663997.
Manfred Broy, "Experiences with software specification and verification using LP, the Larch proof assistant", Formal Methods in System Design 8:3, pages 221–272, 1996.

Urban Engberg, Peter Grønning, and Leslie Lamport, "Mechanical Verification of Concurrent Systems with TLA", Computer-Aided Verification, G. v. Bochmann and D. K. Probst (editors), Proceedings of the Fourth International Conference (CAV'92), Lecture Notes in Computer Science 663, Springer-Verlag, June 1992, pages 44–55.

Urban Engberg, Reasoning in the Temporal Logic of Actions, BRICS Dissertation Series DS 96–1, Department of Computer Science, University of Aarhus, Denmark, August 1996. ISSN 1396-7002.

Stephen J. Garland and John V. Guttag, "Inductive methods for reasoning about abstract data types", Fifteenth Annual ACM Symposium on Principles of Programming Languages, pages 219–228, San Diego, CA, January 1988.

Stephen J. Garland and John V. Guttag, "LP: The Larch Prover", Ninth International Conference on Automated Deduction, Lecture Notes in Computer Science 310, pages 748–749, Argonne, Illinois, May 1988. Springer-Verlag.

Stephen J. Garland, John V. Guttag, and Jørgen Staunstrup, "Verification of VLSI circuits using LP", The Fusion of Hardware Design and Verification, pages 329–345, Glasgow, Scotland, July 4–6, 1988. IFIP WG 10.2, North-Holland.

Stephen J. Garland and John V. Guttag, "An overview of LP, the Larch Prover", Third International Conference on Rewriting Techniques and Applications, Lecture Notes in Computer Science 355, pages 137–151, Chapel Hill, NC, April 1989. Springer-Verlag.

Stephen J. Garland and John V. Guttag, "Using LP to debug specifications", Programming Concepts and Methods, Sea of Galilee, Israel, April 2–5, 1990. IFIP WG 2.2/2.3, North-Holland.

Stephen J. Garland and John V. Guttag, A Guide to LP: The Larch Prover, MIT Laboratory for Computer Science, December 1991. Also published as Digital Equipment Corporation Systems Research Center Report 82, 1991.

Victor Luchangco, Ekrem Söylemez, Stephen Garland, and Nancy Lynch, "Verifying timing properties of concurrent algorithms", FORTE '94: Seventh International Conference on Formal Description Techniques, pages 259–273, Berne, Switzerland, October 4–7, 1994. Chapman & Hall.

Ursula Martin and Michael Lai, "Some experiments with a completion theorem prover", Journal of Symbolic Computation 13:1, pages 81–100, 1992. ISSN 0747-7171.

Ursula Martin and Jeannette M. Wing (editors), First International Workshop on Larch, Proceedings of the First International Workshop on Larch, Dedham, Massachusetts, July 13–15, 1992, Workshops in Computing, Springer-Verlag, 1992. Contributions include:
Michel Bidoit and Rolf Hennicker, "How to prove observational theorems with LP", pages 18–35
Boutheina Chetali and Pierre Lescanne, "An exercise in LP: the proof of a non-restoring division circuit", pages 55–68
Christine Choppy and Michel Bidoit, "Integrating ASSPEGIQUE and LP", pages 69–85
Niels Mellergaard and Jørgen Staunstrup, "Generating proof obligations for circuits", pages 185–200
E. A. Scott and K. J. Norrie, "Using LP to study the language PL0+", pages 227–245
Frédéric Voisin, "A new front-end for the Larch Prover", pages 282–296
J. M. Wing, E. Rollins, and A. Moorman Zaremski, "Thoughts on a Larch/ML and a new application for LP", pages 297–312
Toh Ne Win, Michael D. Ernst, Stephen J. Garland, Dilsun Kirli, and Nancy Lynch, "Using simulated execution in verifying distributed algorithms", Software Tools for Technology Transfer 6:1, Lenore D. Zuck, Paul C. Attie, Agostino Cortesi, and Supratik Mukhopadhyay (editors), pages 67–76. Springer-Verlag, July 2004.

Tsvetomir P. Petrov, Anya Pogosyants, Stephen J. Garland, Victor Luchangco, and Nancy A. Lynch, "Computer-assisted verification of an algorithm for concurrent timestamps", Formal Description Techniques IX: Theory, Application, and Tools (FORTE/PSTV), Reinhard Gotzhein and Jan Bredereke (editors), pages 29–44, Kaiserslautern, Germany, October 8–11, 1996. Chapman & Hall.

James B. Saxe, Stephen J. Garland, John V. Guttag, and James J. Horning, "Using transformations and verification in circuit design", Formal Methods in System Design 3:3, pages 181–209, December 1993.

Jørgen F. Søgaard-Andersen, Stephen J. Garland, John V. Guttag, Nancy A. Lynch, and Anya Pogosyants, "Computer-assisted simulation proofs", Fifth Conference on Computer-Aided Verification (CAV '93), Costas Courcoubetis (editor), Lecture Notes in Computer Science 697, pages 305–319, Elounda, Greece, June 1993. Springer-Verlag.

Jørgen Staunstrup, Stephen J. Garland, and John V. Guttag, "Localized verification of circuit descriptions", Automatic Verification Methods for Finite State Systems, Lecture Notes in Computer Science 407, pages 349–364, Grenoble, France, June 1989. Springer-Verlag.

Jørgen Staunstrup, Stephen J. Garland, and John V. Guttag, "Mechanized verification of circuit descriptions using the Larch Prover", Theorem Provers in Circuit Design, Victoria Stavridou, Thomas F. Melham, and Raymond T. Boute (editors), IFIP Transactions A-10, pages 277–299, Nijmegen, The Netherlands, June 22–24, 1992. North-Holland.

Mark T. Vandevoorde and Deepak Kapur, "Distributed Larch Prover (DLP): an experiment in parallelizing a rewrite-rule based prover", International Conference on Rewriting Techniques and Applications (RTA 1996), Lecture Notes in Computer Science 1103, pages 420–423. Springer-Verlag.

Frédéric Voisin, "A new proof manager and graphic interface for the Larch Prover", International Conference on Rewriting Techniques and Applications (RTA 1996), Lecture Notes in Computer Science 1103, pages 408–411. Springer-Verlag.

Jeannette M. Wing and Chun Gong, "Experience with the Larch Prover", ACM SIGSOFT Software Engineering Notes 15:4, pages 140–143, September 1990. https://doi.org/10.1145/99571.99835

External links

Larch website
On-line documentation for LP
Larch Prover
[ "Mathematics" ]
2,510
[ "Theorem proving software systems", "Automated theorem proving", "Mathematical software" ]
1,207,207
https://en.wikipedia.org/wiki/Little%E2%80%93Parks%20effect
In condensed matter physics, the Little–Parks effect was discovered in 1962 by William A. Little and Ronald D. Parks in experiments with empty and thin-walled superconducting cylinders subjected to a parallel magnetic field. It was one of the first experiments to indicate the importance of the Cooper-pairing principle in BCS theory. The essence of the Little–Parks effect is a slight suppression of the cylinder's superconductivity by the persistent current.

Explanation

The electrical resistance of such cylinders shows a periodic oscillation with the magnetic flux piercing the cylinder, the period being

ΔΦ = h/(2e),

where h is the Planck constant and e is the elementary charge. The explanation provided by Little and Parks is that the resistance oscillation reflects a more fundamental phenomenon, i.e. a periodic oscillation of the superconducting critical temperature Tc. The Little–Parks effect consists in a periodic variation of Tc with the magnetic flux Φ, which is the product of the (coaxial) magnetic field and the cross-sectional area of the cylinder.

Tc depends on the kinetic energy of the superconducting electrons. More precisely, Tc is the temperature at which the free energies of normal and superconducting electrons are equal, for a given magnetic field. To understand the periodic oscillation of Tc, which constitutes the Little–Parks effect, one needs to understand the periodic variation of the kinetic energy. The kinetic energy oscillates because the applied magnetic flux increases the kinetic energy while superconducting vortices, periodically entering the cylinder, compensate for the flux effect and reduce the kinetic energy. Thus, the periodic oscillation of the kinetic energy and the related periodic oscillation of the critical temperature occur together.

The Little–Parks effect is a result of the collective quantum behavior of superconducting electrons. It reflects the general fact that it is the fluxoid rather than the flux which is quantized in superconductors. The Little–Parks effect can be seen as a result of the requirement that quantum physics be invariant with respect to the gauge choice for the electromagnetic potential, of which the magnetic vector potential A forms part.

Electromagnetic theory implies that a particle with electric charge q travelling along some path P in a region with zero magnetic field B, but non-zero vector potential A (with B = ∇ × A), acquires a phase shift φ, given in SI units by

φ = (q/ħ) ∫_P A · dx.

In a superconductor, the electrons form a quantum superconducting condensate, called a Bardeen–Cooper–Schrieffer (BCS) condensate. In the BCS condensate all electrons behave coherently, i.e. as one particle. Thus the phase of the collective BCS wavefunction behaves under the influence of the vector potential in the same way as the phase of a single electron. Therefore, the BCS condensate flowing around a closed path in a multiply connected superconducting sample acquires a phase difference Δφ determined by the magnetic flux Φ_B through the area enclosed by the path (via Stokes' theorem and ∇ × A = B), and given by

Δφ = q Φ_B / ħ.

This phase effect is responsible for the quantized-flux requirement and the Little–Parks effect in superconducting loops and empty cylinders. The quantization occurs because the superconducting wave function must be single-valued in a loop or an empty superconducting cylinder: its phase difference around a closed loop must be an integer multiple of 2π, with the charge q = 2e for the BCS electronic superconducting pairs.
If the period of the Little–Parks oscillations is 2π with respect to the superconducting phase variable, it follows from the formula above that the period with respect to the magnetic flux is the same as the magnetic flux quantum, namely

ΔΦ = 2πħ/(2e) = h/(2e).

Applications

Little–Parks oscillations are a widely used proof mechanism of Cooper pairing. A good example is the study of the superconductor–insulator transition. The challenge here is to separate Little–Parks oscillations from weak (anti-)localization, as in the results of Altshuler et al., where the authors observed the Aharonov–Bohm effect in a dirty metallic film.

History

Fritz London predicted that the fluxoid is quantized in a multiply connected superconductor. It was shown experimentally that the trapped magnetic flux exists only in discrete quantum units h/2e. Deaver and Fairbank were able to achieve an accuracy of 20–30% because of the wall thickness of the cylinder. Little and Parks examined a "thin-walled" cylinder (materials: Al, In, Pb, Sn and Sn–In alloys; diameter about 1 micron) at temperatures very close to the transition temperature in an applied magnetic field in the axial direction. They found magnetoresistance oscillations with a period consistent with h/2e. What they actually measured were the infinitesimally small changes of resistance versus temperature at (different) constant magnetic fields. The figure to the right shows instead measurements of the resistance for varying applied magnetic field, which corresponds to varying magnetic flux, with the different colors (probably) representing different temperatures.
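The periodic Tc suppression can be illustrated numerically. The sketch below uses the standard parabolic approximation found in textbook treatments of the effect, ΔTc(Φ)/Tc ≈ −(ξ(0)/R)² (n − Φ/Φ₀)² minimized over the integer winding number n; the prefactor involving the coherence length ξ(0) and cylinder radius R, as well as the numerical values, are illustrative assumptions, not parameters from the original experiment.

```python
import numpy as np

PHI0 = 2.067833848e-15  # magnetic flux quantum h/(2e), in Wb

def tc_shift(phi, tc0=1.0, xi0=100e-9, radius=500e-9):
    """Little-Parks suppression of Tc for a thin-walled cylinder.

    Parabolic approximation: dTc/tc0 = -(xi0/R)^2 * min_n (n - phi/PHI0)^2,
    where n is the integer number of flux quanta (vortices) in the cylinder.
    phi : applied flux through the cylinder cross-section, in Wb.
    """
    frac = phi / PHI0
    n = np.round(frac)  # winding number that minimizes the kinetic energy
    return -tc0 * (xi0 / radius) ** 2 * (n - frac) ** 2

flux = np.linspace(0.0, 3.0 * PHI0, 601)
dtc = tc_shift(flux)
# dtc oscillates with period PHI0: it vanishes whenever the flux is an
# integer multiple of h/(2e) and is most negative at half-integer flux.
```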
Little–Parks effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,036
[ "Electrical resistance and conductance", "Physical quantities", "Superconductivity", "Phases of matter", "Materials science", "Condensed matter physics", "Matter" ]
1,208,420
https://en.wikipedia.org/wiki/Three-body%20problem
In physics, specifically classical mechanics, the three-body problem is to take the initial positions and velocities (or momenta) of three point masses that orbit each other in space and calculate their subsequent trajectories using Newton's laws of motion and Newton's law of universal gravitation. Unlike the two-body problem, the three-body problem has no general closed-form solution, meaning there is no equation that always solves it. When three bodies orbit each other, the resulting dynamical system is chaotic for most initial conditions. Because there are no solvable equations for most three-body systems, the only way to predict the motions of the bodies is to estimate them using numerical methods.

The three-body problem is a special case of the n-body problem. Historically, the first specific three-body problem to receive extended study was the one involving the Earth, the Moon, and the Sun. In an extended modern sense, a three-body problem is any problem in classical mechanics or quantum mechanics that models the motion of three particles.

Mathematical description

The mathematical statement of the three-body problem can be given in terms of the Newtonian equations of motion for the vector positions r₁, r₂, r₃ of three gravitationally interacting bodies with masses m₁, m₂, m₃:

d²r₁/dt² = −G m₂ (r₁ − r₂)/|r₁ − r₂|³ − G m₃ (r₁ − r₃)/|r₁ − r₃|³,
d²r₂/dt² = −G m₃ (r₂ − r₃)/|r₂ − r₃|³ − G m₁ (r₂ − r₁)/|r₂ − r₁|³,
d²r₃/dt² = −G m₁ (r₃ − r₁)/|r₃ − r₁|³ − G m₂ (r₃ − r₂)/|r₃ − r₂|³,

where G is the gravitational constant. As astronomer Juhan Frank describes, "These three second-order vector differential equations are equivalent to 18 first order scalar differential equations." As June Barrow-Green notes with regard to an alternative presentation, if Pᵢ represent three particles with masses mᵢ, mutual distances PᵢPⱼ = rᵢⱼ, and coordinates qᵢⱼ (i, j = 1, 2, 3) in an inertial coordinate system, the problem is described by nine second-order differential equations.

The problem can also be stated equivalently in the Hamiltonian formalism, in which case it is described by a set of 18 first-order differential equations, one for each component of the positions rᵢ and momenta pᵢ:

drᵢ/dt = ∂H/∂pᵢ,   dpᵢ/dt = −∂H/∂rᵢ,

where H is the Hamiltonian:

H = Σᵢ |pᵢ|²/(2mᵢ) − Σ_{i<j} G mᵢ mⱼ / |rᵢ − rⱼ|.

In this case, H is simply the total energy of the system, gravitational plus kinetic.

Restricted three-body problem

In the restricted three-body problem formulation, in the description of Barrow-Green, "two... bodies revolve around their centre of mass in circular orbits under the influence of their mutual gravitational attraction, and... form a two body system... [whose] motion is known. A third body (generally known as a planetoid), assumed massless with respect to the other two, moves in the plane defined by the two revolving bodies and, while being gravitationally influenced by them, exerts no influence of its own." Per Barrow-Green, "[t]he problem is then to ascertain the motion of the third body."

That is to say, the two-body motion is taken to consist of circular orbits around the center of mass, and the planetoid is assumed to move in the plane defined by those orbits. (That is, it is useful to consider the effective potential.) With respect to a rotating reference frame, the two co-orbiting bodies are stationary, and the third can be stationary as well at the Lagrangian points, or move around them, for instance on a horseshoe orbit. The restricted three-body problem is easier to analyze theoretically than the full problem. It is of practical interest as well, since it accurately describes many real-world problems, the most important example being the Earth–Moon–Sun system. For these reasons, it has occupied an important role in the historical development of the three-body problem. Mathematically, the problem is stated as follows.
Let m₁, m₂ be the masses of the two massive bodies, with (planar) coordinates (x₁, y₁) and (x₂, y₂), and let (x, y) be the coordinates of the planetoid. For simplicity, choose units such that the distance between the two massive bodies, as well as the gravitational constant, are both equal to 1. Then the motion of the planetoid is given by

d²x/dt² = −m₁ (x − x₁)/r₁³ − m₂ (x − x₂)/r₂³,
d²y/dt² = −m₁ (y − y₁)/r₁³ − m₂ (y − y₂)/r₂³,

where rᵢ = √((x − xᵢ)² + (y − yᵢ)²) is the distance from the planetoid to massive body i. In this form the equations of motion carry an explicit time dependence through the coordinates xᵢ(t), yᵢ(t); however, this time dependence can be removed through a transformation to a rotating reference frame, which simplifies any subsequent analysis.

Solutions

General solution

There is no general closed-form solution to the three-body problem. In other words, it does not have a general solution that can be expressed in terms of a finite number of standard mathematical operations. Moreover, the motion of three bodies is generally non-repeating, except in special cases.

However, in 1912 the Finnish mathematician Karl Fritiof Sundman proved that there exists an analytic solution to the three-body problem in the form of a Puiseux series, specifically a power series in terms of powers of t^(1/3). This series converges for all real t, except for initial conditions corresponding to zero angular momentum. In practice, the latter restriction is insignificant since initial conditions with zero angular momentum are rare, having Lebesgue measure zero.

An important issue in proving this result is the fact that the radius of convergence for this series is determined by the distance to the nearest singularity. Therefore, it is necessary to study the possible singularities of the three-body problem. As is briefly discussed below, the only singularities in the three-body problem are binary collisions (collisions between two particles at an instant) and triple collisions (collisions between three particles at an instant). Collisions of any number are somewhat improbable, since it has been shown that they correspond to a set of initial conditions of measure zero. However, there is no known criterion that can be put on the initial state in order to avoid collisions for the corresponding solution. So Sundman's strategy consisted of the following steps:

1. Using an appropriate change of variables to continue analyzing the solution beyond the binary collision, in a process known as regularization.
2. Proving that triple collisions only occur when the angular momentum L vanishes. By restricting the initial data to L ≠ 0, he removed all real singularities from the transformed equations for the three-body problem.
3. Showing that if L ≠ 0, then not only can there be no triple collision, but the system is strictly bounded away from a triple collision. This implies, by Cauchy's existence theorem for differential equations, that there are no complex singularities in a strip (depending on the value of L) in the complex plane centered around the real axis (related to the Cauchy–Kovalevskaya theorem).
4. Finding a conformal transformation that maps this strip into the unit disc. For example, if s = t^(1/3) (the new variable after the regularization) and the strip is |Im s| < β, then this map is given by

σ = (exp(πs/(2β)) − 1)/(exp(πs/(2β)) + 1).

This finishes the proof of Sundman's theorem. The corresponding series, however, converges extremely slowly: obtaining a value of meaningful precision requires so many terms that this solution is of little practical use. Indeed, in 1930, David Beloriszky calculated that if Sundman's series were to be used for astronomical observations, then the computations would involve at least 10^8,000,000 terms.
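Because the Sundman series is useless in practice, trajectories are instead computed by numerical integration (see the Numerical approaches section below). The following is a minimal sketch using SciPy's general-purpose integrator; the initial conditions are the coordinates commonly quoted for Burrau's Pythagorean problem (masses 3, 4, 5 at rest at the corresponding vertices of a 3:4:5 triangle, with G = 1), and a production code would use a higher-order or regularized scheme with careful error control.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0
masses = np.array([3.0, 4.0, 5.0])

def rhs(t, state):
    """Newtonian three-body equations: state = [x1,y1,x2,y2,x3,y3, vx1,...]."""
    pos = state[:6].reshape(3, 2)
    vel = state[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

# Pythagorean (Burrau) configuration: bodies start at rest, each opposite
# the triangle side whose length equals its mass.
pos0 = np.array([[1.0, 3.0], [-2.0, -1.0], [1.0, -1.0]])
vel0 = np.zeros((3, 2))
state0 = np.concatenate([pos0.ravel(), vel0.ravel()])

sol = solve_ivp(rhs, (0.0, 10.0), state0, rtol=1e-10, atol=1e-12,
                dense_output=True)
# sol.y[:6] holds the positions; the tight tolerances matter because close
# encounters make the problem numerically delicate.
```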
Special-case solutions In 1767, Leonhard Euler found three families of periodic solutions in which the three masses are collinear at each instant. In 1772, Lagrange found a family of solutions in which the three masses form an equilateral triangle at each instant. Together with Euler's collinear solutions, these solutions form the central configurations for the three-body problem. These solutions are valid for any mass ratios, and the masses move on Keplerian ellipses. These four families are the only known solutions for which there are explicit analytic formulae. In the special case of the circular restricted three-body problem, these solutions, viewed in a frame rotating with the primaries, become points called Lagrangian points and labeled L1, L2, L3, L4, and L5, with L4 and L5 being symmetric instances of Lagrange's solution. In work summarized in 1892–1899, Henri Poincaré established the existence of an infinite number of periodic solutions to the restricted three-body problem, together with techniques for continuing these solutions into the general three-body problem. In 1893, Meissel stated what is now called the Pythagorean three-body problem: three masses in the ratio 3:4:5 are placed at rest at the vertices of a 3:4:5 right triangle, with the heaviest body at the right angle and the lightest at the smaller acute angle. Burrau further investigated this problem in 1913. In 1967 Victor Szebehely and C. Frederick Peters established eventual escape of the lightest body for this problem using numerical integration, while at the same time finding a nearby periodic solution. In the 1970s, Michel Hénon and Roger A. Broucke each found a set of solutions that form part of the same family of solutions: the Broucke–Hénon–Hadjidemetriou family. In this family, the three objects all have the same mass and can exhibit both retrograde and direct forms. In some of Broucke's solutions, two of the bodies follow the same path. In 1993, physicist Cris Moore at the Santa Fe Institute found a zero angular momentum solution with three equal masses moving around a figure-eight shape. In 2000, mathematicians Alain Chenciner and Richard Montgomery proved its formal existence. The solution has been shown numerically to be stable for small perturbations of the mass and orbital parameters, which makes it possible for such orbits to be observed in the physical universe. But it has been argued that this is unlikely since the domain of stability is small. For instance, the probability of a binary–binary scattering event resulting in a figure-8 orbit has been estimated to be a small fraction of a percent. In 2013, physicists Milovan Šuvakov and Veljko Dmitrašinović at the Institute of Physics in Belgrade discovered 13 new families of solutions for the equal-mass zero-angular-momentum three-body problem. In 2015, physicist Ana Hudomal discovered 14 new families of solutions for the equal-mass zero-angular-momentum three-body problem. In 2017, researchers Xiaoming Li and Shijun Liao found 669 new periodic orbits of the equal-mass zero-angular-momentum three-body problem. This was followed in 2018 by an additional 1,223 new solutions for a zero-angular-momentum system of unequal masses. In 2018, Li and Liao reported 234 solutions to the unequal-mass "free-fall" three-body problem. The free-fall formulation starts with all three bodies at rest. Because of this, the masses in a free-fall configuration do not orbit in a closed "loop", but travel forward and backward along an open "track". 
In 2023, Ivan Hristov, Radoslava Hristova, Dmitrašinović and Kiyotaka Tanikawa published a search for "periodic free-fall orbits" of the three-body problem, limited to the equal-mass case, and found 12,409 distinct solutions.

Numerical approaches

Using a computer, the problem may be solved to arbitrarily high precision using numerical integration. There have been attempts to create computer programs that numerically solve the three-body problem (and by extension, the n-body problem) involving both electromagnetic and gravitational interactions, and incorporating modern theories of physics such as special relativity. In addition, using the theory of random walks, an approximate probability of different outcomes may be computed.

History

The gravitational problem of three bodies in its traditional sense dates in substance from 1687, when Isaac Newton published his Philosophiæ Naturalis Principia Mathematica, in which Newton, after having solved the two-body problem, attempted to determine whether any long-term stability is possible, especially for a system like that of the Earth, the Moon, and the Sun. Guided by the major Renaissance astronomers Nicolaus Copernicus, Tycho Brahe and Johannes Kepler, Newton introduced later generations to the beginning of the gravitational three-body problem. In Proposition 66 of Book 1 of the Principia, and its 22 Corollaries, Newton took the first steps in the definition and study of the problem of the movements of three massive bodies subject to their mutually perturbing gravitational attractions. In Propositions 25 to 35 of Book 3, Newton also took the first steps in applying his results of Proposition 66 to the lunar theory, the motion of the Moon under the gravitational influence of Earth and the Sun. Later, this problem was also applied to other planets' interactions with the Earth and the Sun.

The physical problem was first addressed by Amerigo Vespucci and subsequently by Galileo Galilei, as well as Simon Stevin, although they did not recognize what they had contributed. Though Galileo determined that the speed of fall of all bodies changes uniformly and in the same way, he did not apply it to planetary motions; Vespucci, in 1499, had used knowledge of the position of the Moon to determine his position in Brazil. The problem became of technical importance in the 1720s, as an accurate solution would be applicable to navigation, specifically for the determination of longitude at sea, a problem solved in practice by John Harrison's invention of the marine chronometer. However, the accuracy of the lunar theory was low, due to the perturbing effect of the Sun and planets on the motion of the Moon around Earth.

Jean le Rond d'Alembert and Alexis Clairaut, who developed a longstanding rivalry, both attempted to analyze the problem in some degree of generality; they submitted their competing first analyses to the Académie Royale des Sciences in 1747. It was in connection with their research, in Paris during the 1740s, that the name "three-body problem" (French: problème des trois corps) began to be commonly used. An account published in 1761 by Jean le Rond d'Alembert indicates that the name was first used in 1747.

From the end of the 19th century to the early 20th century, scientists developed the approach of treating the three-body problem with short-range attractive two-body forces, which gave P. F. Bedaque, H.-W. Hammer and U. van Kolck the idea of renormalizing the short-range three-body problem, providing a rare example of a renormalization group limit cycle at the beginning of the 21st century.
George William Hill worked on the restricted problem in the late 19th century, with an application to the motion of Venus and Mercury. At the beginning of the 20th century, Karl Sundman approached the problem mathematically and systematically by providing a function-theoretic proof, valid for all values of time, that a solution exists. It was the first time the three-body problem had been solved theoretically. However, because the resulting solution gives little qualitative insight and converges far too slowly to be of practical use, it still left some issues unresolved. In the 1970s, V. Efimov discovered an implication of two-body forces for three-body systems, now named the Efimov effect.

In 2017, Shijun Liao and Xiaoming Li applied a new strategy of numerical simulation for chaotic systems called the clean numerical simulation (CNS), with the use of a national supercomputer, to successfully gain 695 families of periodic solutions of the three-body system with equal mass. In 2019, Breen et al. announced a fast neural network solver for the three-body problem, trained using a numerical integrator. In September 2023, several further possible solutions to the problem were reported.

Other problems involving three bodies

The term "three-body problem" is sometimes used in the more general sense to refer to any physical problem involving the interaction of three bodies.

A quantum-mechanical analogue of the gravitational three-body problem in classical mechanics is the helium atom, in which a helium nucleus and two electrons interact according to the inverse-square Coulomb interaction. Like the gravitational three-body problem, the helium atom cannot be solved exactly.

In both classical and quantum mechanics, however, there exist nontrivial interaction laws besides the inverse-square force that do lead to exact analytic three-body solutions. One such model consists of a combination of harmonic attraction and a repulsive inverse-cube force. This model is considered nontrivial since it is associated with a set of nonlinear differential equations containing singularities (compared with, e.g., harmonic interactions alone, which lead to an easily solved system of linear differential equations). In these two respects it is analogous to (insoluble) models having Coulomb interactions, and as a result has been suggested as a tool for intuitively understanding physical systems like the helium atom.

Within the point vortex model, the motion of vortices in a two-dimensional ideal fluid is described by equations of motion that contain only first-order time derivatives; i.e., in contrast to Newtonian mechanics, it is the velocity and not the acceleration that is determined by the relative positions. As a consequence, the three-vortex problem is still integrable, while at least four vortices are required to obtain chaotic behavior. One can draw parallels between the motion of a passive tracer particle in the velocity field of three vortices and the restricted three-body problem of Newtonian mechanics.

The gravitational three-body problem has also been studied using general relativity. Physically, a relativistic treatment becomes necessary in systems with very strong gravitational fields, such as near the event horizon of a black hole. However, the relativistic problem is considerably more difficult than in Newtonian mechanics, and sophisticated numerical techniques are required. Even the full two-body problem (i.e.
for an arbitrary ratio of masses) does not have a rigorous analytic solution in general relativity.

n-body problem

The three-body problem is a special case of the n-body problem, which describes how n objects move under one of the physical forces, such as gravity. These problems have a global analytical solution in the form of a convergent power series, as was proven by Karl F. Sundman for n = 3 and by Qiudong Wang for n > 3 (see n-body problem for details). However, the Sundman and Wang series converge so slowly that they are useless for practical purposes; therefore, it is currently necessary to approximate solutions by numerical analysis in the form of numerical integration or, for some cases, classical trigonometric series approximations (see n-body simulation). Atomic systems, e.g. atoms, ions, and molecules, can be treated in terms of the quantum n-body problem. Among classical physical systems, the n-body problem usually refers to a galaxy or to a cluster of galaxies; planetary systems, such as stars, planets, and their satellites, can also be treated as n-body systems. Some applications are conveniently treated by perturbation theory, in which the system is considered as a two-body problem plus additional forces causing deviations from a hypothetical unperturbed two-body trajectory.

See also

Few-body systems
Galaxy formation and evolution
Gravity assist
Lagrange point
Low-energy transfer
Michael Minovitch
n-body simulation
Symplectic integrator
Sitnikov problem
Two-body problem
Synodic reference frame
Triple star system
The Three-Body Problem (novel)
3 Body Problem (TV series)

External links

The '3-body problem' may not be so chaotic after all, new study suggests (Live Science, October 22, 2024)
Physicists Discover a Whopping 13 New Solutions to Three-Body Problem (Science, March 8, 2013)
Three-body problem
[ "Physics", "Astronomy", "Mathematics" ]
3,993
[ "Functions and mappings", "Concepts in astronomy", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Classical mechanics", "Mechanics", "Mathematical relations", "Equations of astronomy", "Chaotic maps", "Mathematical physics", "Dynamical systems" ]
1,208,872
https://en.wikipedia.org/wiki/Shannon%27s%20source%20coding%20theorem
In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the statistical limits to possible data compression for data whose source is an independent identically-distributed random variable, and the operational meaning of the Shannon entropy.

Named after Claude Shannon, the source coding theorem shows that, in the limit, as the length of a stream of independent and identically-distributed random variable (i.i.d.) data tends to infinity, it is impossible to compress such data such that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source, without it being virtually certain that information will be lost. However, it is possible to get the code rate arbitrarily close to the Shannon entropy, with negligible probability of loss.

The source coding theorem for symbol codes places an upper and a lower bound on the minimal possible expected length of codewords as a function of the entropy of the input word (which is viewed as a random variable) and of the size of the target alphabet.

Note that, for data that exhibits more dependencies (whose source is not an i.i.d. random variable), the Kolmogorov complexity, which quantifies the minimal description length of an object, is more suitable to describe the limits of data compression. Shannon entropy takes into account only frequency regularities while Kolmogorov complexity takes into account all algorithmic regularities, so in general the latter is smaller. On the other hand, if an object is generated by a random process in such a way that it has only frequency regularities, entropy is close to complexity with high probability (Shen et al. 2017).

Statements

Source coding is a mapping from (a sequence of) symbols from an information source to a sequence of alphabet symbols (usually bits) such that the source symbols can be exactly recovered from the binary bits (lossless source coding) or recovered within some distortion (lossy source coding). This is one approach to data compression.

Source coding theorem

In information theory, the source coding theorem (Shannon 1948) informally states that (MacKay 2003, p. 81; Cover 2006, Chapter 5):

n i.i.d. random variables, each with entropy H(X), can be compressed into more than n·H(X) bits with negligible risk of information loss, as n → ∞; but conversely, if they are compressed into fewer than n·H(X) bits, it is virtually certain that information will be lost.

The coded sequence represents the compressed message in a biunivocal way, under the assumption that the decoder knows the source. From a practical point of view, this hypothesis is not always true. Consequently, when entropy encoding is applied, the transmitted message must also convey a description of the source: usually, the information that characterizes the source is inserted at the beginning of the transmitted message.

Source coding theorem for symbol codes

Let Σ₁ and Σ₂ denote two finite alphabets, and let Σ₁* and Σ₂* denote the sets of all finite words from those alphabets (respectively). Suppose that X is a random variable taking values in Σ₁, and let f be a uniquely decodable code from Σ₁* to Σ₂*, where |Σ₂| = a. Let S denote the random variable given by the length of the codeword f(X). If f is optimal in the sense that it has the minimal expected word length for X, then (Shannon 1948):

H(X)/log₂ a ≤ E[S] < H(X)/log₂ a + 1,

where E denotes the expected value operator.
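These bounds are easy to check empirically with a Huffman code, which is a concrete optimal prefix-free symbol code. Below is a minimal sketch for a binary target alphabet (a = 2, so log₂ a = 1); the example distribution is an illustrative assumption, chosen dyadic so that the expected length meets the lower bound exactly.

```python
import heapq
import math

def huffman_lengths(probs):
    """Return codeword lengths of a binary Huffman code for `probs`."""
    # Heap items: (probability, tiebreak counter, symbol indices in subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every codeword inside them.
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
        counter += 1
    return lengths

probs = [0.5, 0.25, 0.125, 0.0625, 0.0625]   # example source distribution
lengths = huffman_lengths(probs)

entropy = -sum(p * math.log2(p) for p in probs)
expected_len = sum(p * l for p, l in zip(probs, lengths))
assert entropy <= expected_len < entropy + 1
print(entropy, expected_len)  # equal here because the probabilities are dyadic
```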
Proof: source coding theorem

Given that X is an i.i.d. source, its time series X₁, …, Xₙ is i.i.d. with entropy H(X) in the discrete-valued case and differential entropy in the continuous-valued case. The source coding theorem states that for any ε > 0, i.e. for any rate H(X) + ε larger than the entropy of the source, there is a large enough n and an encoder that takes n i.i.d. repetitions of the source, X^(1:n), and maps them to n(H(X) + ε) binary bits such that the source symbols X^(1:n) are recoverable from the binary bits with probability of at least 1 − ε.

Proof of achievability. Fix some ε > 0, and let

p(x₁, …, xₙ) = Pr[X₁ = x₁, …, Xₙ = xₙ].

The typical set A(n, ε) is defined as follows:

A(n, ε) = { (x₁, …, xₙ) : |−(1/n) log₂ p(x₁, …, xₙ) − H(X)| < ε }.

The asymptotic equipartition property (AEP) shows that for large enough n, the probability that a sequence generated by the source lies in the typical set A(n, ε) approaches one. In particular, for sufficiently large n, P(A(n, ε)) can be made arbitrarily close to 1, and specifically greater than 1 − ε (see AEP for a proof).

The definition of typical sets implies that those sequences that lie in the typical set satisfy

2^(−n(H(X)+ε)) ≤ p(x₁, …, xₙ) ≤ 2^(−n(H(X)−ε)).

Note that:

The probability of a sequence being drawn from A(n, ε) is greater than 1 − ε.
|A(n, ε)| ≤ 2^(n(H(X)+ε)), which follows from the left-hand side (the lower bound) on p(x₁, …, xₙ).
|A(n, ε)| ≥ (1 − ε) 2^(n(H(X)−ε)), which follows from the upper bound on p(x₁, …, xₙ) and the lower bound on the total probability of the whole set A(n, ε).

Since |A(n, ε)| ≤ 2^(n(H(X)+ε)), n(H(X) + ε) bits are enough to point to any string in this set.

The encoding algorithm: the encoder checks if the input sequence lies within the typical set; if yes, it outputs the index of the input sequence within the typical set; if not, the encoder outputs an arbitrary n(H(X) + ε)-digit number. As long as the input sequence lies within the typical set (with probability at least 1 − ε), the encoder does not make any error. So, the probability of error of the encoder is bounded above by ε.

Proof of converse: the converse is proved by showing that any set of size smaller than 2^(n(H(X)−ε)) (in the sense of exponent) would cover a set of probability bounded away from 1.

Proof: source coding theorem for symbol codes

For 1 ≤ i ≤ n, let sᵢ denote the word length of each possible xᵢ. Define qᵢ = a^(−sᵢ)/C, where C is chosen so that q₁ + … + qₙ = 1. Then

H(X) = −Σᵢ pᵢ log₂ pᵢ
     ≤ −Σᵢ pᵢ log₂ qᵢ
     = Σᵢ pᵢ sᵢ log₂ a + log₂ C
     ≤ Σᵢ pᵢ sᵢ log₂ a
     = E[S] log₂ a,

where the second line follows from Gibbs' inequality and the fourth line follows from Kraft's inequality, C = Σᵢ a^(−sᵢ) ≤ 1, so log₂ C ≤ 0. Hence E[S] ≥ H(X)/log₂ a.

For the second inequality we may set sᵢ = ⌈−log_a pᵢ⌉, so that −log_a pᵢ ≤ sᵢ < −log_a pᵢ + 1. Then a^(−sᵢ) ≤ pᵢ, so Σᵢ a^(−sᵢ) ≤ Σᵢ pᵢ = 1, and so by Kraft's inequality there exists a prefix-free code having those word lengths. Thus the minimal S satisfies

E[S] = Σᵢ pᵢ sᵢ < Σᵢ pᵢ (−log_a pᵢ + 1) = H(X)/log₂ a + 1.

Extension to non-stationary independent sources

Fixed-rate lossless source coding for discrete-time non-stationary independent sources: define the typical set A(n, ε) as

A(n, ε) = { (x₁, …, xₙ) : |−(1/n) log₂ p(X₁, …, Xₙ) − H̄ₙ(X)| < ε },

where H̄ₙ(X) = (1/n) Σᵢ H(Xᵢ) is the average per-symbol entropy. Then, for given δ > 0, P(A(n, ε)) > 1 − δ for n large enough. Now we just encode the sequences in the typical set, and the usual methods in source coding show that the cardinality of this set is smaller than 2^(n(H̄ₙ(X)+ε)). Thus, on average, H̄ₙ(X) + ε bits suffice for encoding with probability greater than 1 − δ, where ε and δ can be made arbitrarily small by making n larger.

See also

Channel coding
Error exponent
Noisy-channel coding theorem
Shannon's source coding theorem
[ "Mathematics", "Technology", "Engineering" ]
1,289
[ "Discrete mathematics", "Coding theory", "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory", "Mathematical problems", "Articles containing proofs", "Mathematical theorems", "Mathematical theorems in theoretical computer science" ]
1,209,000
https://en.wikipedia.org/wiki/Electric%20flux
In electromagnetism, electric flux is the measure of the electric field through a given surface. The electric flux through a closed surface is proportional to the total electric charge contained within that surface. The electric field E can exert a force on an electric charge at any point in space. The electric field is the negative gradient of the electric potential.

Overview

An electric charge, such as a single electron in space, has an electric field surrounding it. In pictorial form, this electric field is shown as "lines of flux" being radiated from a dot (the charge). These are called Gauss lines. Note that field lines are a graphic illustration of field strength and direction and have no physical meaning as isolated lines. The density of these lines corresponds to the electric field strength, which could also be called the electric flux density: the number of "lines" per unit area. Electric flux is directly proportional to the total number of electric field lines going through a surface. For simplicity in calculations, it is often convenient to consider a surface perpendicular to the flux lines. If the electric field is uniform, the electric flux passing through a surface of vector area S is

Φ_E = E · S = E S cos θ,

where E is the electric field (having the unit V/m), E is its magnitude, S is the area of the surface, and θ is the angle between the electric field lines and the normal (perpendicular) to S.

For a non-uniform electric field, the electric flux dΦ_E through a small surface area dS is given by

dΦ_E = E · dS

(the electric field, E, multiplied by the component of area perpendicular to the field). The electric flux over a surface S is therefore given by the surface integral

Φ_E = ∬_S E · dS,

where E is the electric field and dS is an infinitesimal area on the surface with an outward-facing surface normal defining its direction.

For a closed Gaussian surface, the electric flux is given by

Φ_E = ∮_S E · dA = Q/ε₀,

where E is the electric field, dA is an infinitesimal area on the closed surface S, Q is the total electric charge inside the surface, and ε₀ is the electric constant (a universal constant, also called the permittivity of free space, ε₀ ≈ 8.854×10⁻¹² F·m⁻¹). This relation is known as Gauss's law for electric fields in its integral form and it is one of Maxwell's equations.

While the electric flux is not affected by charges that are not within the closed surface, the net electric field E can be affected by charges that lie outside the closed surface. While Gauss's law holds for all situations, it is most useful for "by hand" calculations when high degrees of symmetry exist in the electric field. Examples include spherical and cylindrical symmetry.

The SI unit of electric flux is the volt-meter (V·m), or, equivalently, newton-meter squared per coulomb (N·m²·C⁻¹). Thus, the unit of electric flux expressed in terms of SI base units is kg·m³·s⁻³·A⁻¹. Its dimensional formula is [M L³ T⁻³ I⁻¹].

See also

Magnetic flux
Maxwell's equations
Electric field
Magnetic field
Electromagnetic field

External links

Electric flux – HyperPhysics
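As a concrete check of Gauss's law stated above, integrating E · dA numerically over a sphere centered on a point charge should return Q/ε₀ regardless of the sphere's radius. The following is a minimal sketch; the charge value and radius are arbitrary illustrative choices.

```python
import numpy as np

EPS0 = 8.8541878128e-12          # permittivity of free space, F/m
K = 1.0 / (4.0 * np.pi * EPS0)   # Coulomb constant

def flux_point_charge(q=1e-9, radius=0.5, n_theta=200, n_phi=400):
    """Numerically integrate E . dA over a sphere around a point charge."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    dtheta = theta[1] - theta[0]
    dphi = 2.0 * np.pi / n_phi
    T, _ = np.meshgrid(theta, phi, indexing="ij")

    # For a point charge at the center, E is radial with magnitude kq/r^2,
    # and the outward normal of the sphere is also radial, so
    # E . dA = (kq/r^2) * r^2 sin(theta) dtheta dphi -- independent of radius.
    integrand = K * q * np.sin(T) * dtheta * dphi
    return integrand.sum()

total_flux = flux_point_charge(q=1e-9)
print(total_flux, 1e-9 / EPS0)  # the two values agree closely, as Gauss's law requires
```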
Electric flux
[ "Physics", "Mathematics" ]
585
[ "Quantity", "Electromagnetic quantities", "Physical quantities" ]
1,209,057
https://en.wikipedia.org/wiki/Wnt%20signaling%20pathway
In cellular biology, the Wnt signaling pathways are a group of signal transduction pathways which begin with proteins that pass signals into a cell through cell surface receptors. The name Wnt, pronounced "wint", is a portmanteau created from the names Wingless and Int-1. Wnt signaling pathways use either nearby cell-cell communication (paracrine) or same-cell communication (autocrine). They are highly evolutionarily conserved in animals, which means they are similar across animal species from fruit flies to humans.

Three Wnt signaling pathways have been characterized: the canonical Wnt pathway, the noncanonical planar cell polarity pathway, and the noncanonical Wnt/calcium pathway. All three pathways are activated by the binding of a Wnt-protein ligand to a Frizzled family receptor, which passes the biological signal to the Dishevelled protein inside the cell. The canonical Wnt pathway leads to regulation of gene transcription, and is thought to be negatively regulated in part by the SPATS1 gene. The noncanonical planar cell polarity pathway regulates the cytoskeleton that is responsible for the shape of the cell. The noncanonical Wnt/calcium pathway regulates calcium inside the cell.

Wnt signaling was first identified for its role in carcinogenesis, then for its function in embryonic development. The embryonic processes it controls include body axis patterning, cell fate specification, cell proliferation and cell migration. These processes are necessary for proper formation of important tissues including bone, heart and muscle. Its role in embryonic development was discovered when genetic mutations in Wnt pathway proteins produced abnormal fruit fly embryos. Later research found that the genes responsible for these abnormalities also influenced breast cancer development in mice. Wnt signaling also controls tissue regeneration in adult bone marrow, skin and intestine.

This pathway's clinical importance was demonstrated by mutations that lead to various diseases, including breast and prostate cancer, glioblastoma, type II diabetes and others. In recent years, researchers reported the first successful use of Wnt pathway inhibitors in mouse models of disease.

History and etymology

The discovery of Wnt signaling was influenced by research on oncogenic (cancer-causing) retroviruses. In 1982, Roel Nusse and Harold Varmus infected mice with mouse mammary tumor virus in order to mutate mouse genes to see which mutated genes could cause breast tumors. They identified a new mouse proto-oncogene that they named int1 (integration 1).

Int1 is highly conserved across multiple species, including humans and Drosophila. Its presence in D. melanogaster led researchers to discover in 1987 that the int1 gene in Drosophila was actually the already known and characterized gene Wingless (Wg). Since previous research by Christiane Nüsslein-Volhard and Eric Wieschaus (which won them the Nobel Prize in Physiology or Medicine in 1995) had already established the function of Wg as a segment polarity gene involved in the formation of the body axis during embryonic development, researchers determined that the mammalian int1 discovered in mice is also involved in embryonic development.

Continued research led to the discovery of further int1-related genes; however, because those genes were not identified in the same manner as int1, the int gene nomenclature was inadequate. Thus, the int/Wingless family became the Wnt family and int1 became Wnt1.
The name Wnt is a portmanteau of int and Wg and stands for "Wingless-related integration site". Proteins Wnt comprises a diverse family of secreted lipid-modified signaling glycoproteins that are 350–400 amino acids in length. The lipid modification of all Wnts is palmitoleoylation of a single totally conserved cysteine residue. Palmitoleoylation is necessary because it is required for Wnt to bind its carrier protein Wntless (WLS), so that it can be transported to the plasma membrane for secretion, and because it allows the Wnt protein to bind its receptor Frizzled. Wnt proteins also undergo glycosylation, which attaches a carbohydrate in order to ensure proper secretion. In Wnt signaling, these proteins act as ligands to activate the different Wnt pathways via paracrine and autocrine routes. These proteins are highly conserved across species. They can be found in mice, humans, Xenopus, zebrafish, Drosophila and many others. Mechanism Foundation Wnt signaling begins when a Wnt protein binds to the N-terminal extracellular cysteine-rich domain of a Frizzled (Fz) family receptor. These receptors span the plasma membrane seven times and constitute a distinct family of G-protein coupled receptors (GPCRs). However, to facilitate Wnt signaling, co-receptors may be required alongside the interaction between the Wnt protein and Fz receptor. Examples include lipoprotein receptor-related protein (LRP)-5/6, receptor tyrosine kinase (RTK), and ROR2. Upon activation of the receptor, a signal is sent to the phosphoprotein Dishevelled (Dsh), which is located in the cytoplasm. This signal is transmitted via a direct interaction between Fz and Dsh. Dsh proteins are present in all organisms and they all share the following highly conserved protein domains: an amino-terminal DIX domain, a central PDZ domain, and a carboxy-terminal DEP domain. These different domains are important because after Dsh, the Wnt signal can branch off into multiple pathways and each pathway interacts with a different combination of the three domains. Canonical and noncanonical pathways The three best characterized Wnt signaling pathways are the canonical Wnt pathway, the noncanonical planar cell polarity pathway, and the noncanonical Wnt/calcium pathway. As their names suggest, these pathways belong to one of two categories: canonical or noncanonical. The difference between the categories is that a canonical pathway involves the protein beta-catenin (β-catenin) while a noncanonical pathway operates independently of it. Canonical pathway The canonical Wnt pathway (or Wnt/β-catenin pathway) is the Wnt pathway that causes an accumulation of β-catenin in the cytoplasm and its eventual translocation into the nucleus to act as a transcriptional coactivator of transcription factors that belong to the TCF/LEF family. Without Wnt, β-catenin would not accumulate in the cytoplasm since a destruction complex would normally degrade it. This destruction complex includes the following proteins: Axin, adenomatosis polyposis coli (APC), protein phosphatase 2A (PP2A), glycogen synthase kinase 3 (GSK3) and casein kinase 1α (CK1α). It degrades β-catenin by targeting it for ubiquitination, which subsequently sends it to the proteasome to be digested. However, as soon as Wnt binds Fz and LRP5/6, the destruction complex function becomes disrupted. This is due to Wnt causing the translocation of the negative Wnt regulator, Axin, and the destruction complex to the plasma membrane. 
Phosphorylation by other proteins in the destruction complex subsequently binds Axin to the cytoplasmic tail of LRP5/6. Axin becomes dephosphorylated and its stability and levels decrease. Dsh then becomes activated via phosphorylation and its DIX and PDZ domains inhibit the GSK3 activity of the destruction complex. This allows β-catenin to accumulate and localize to the nucleus and subsequently induce a cellular response via gene transcription alongside the TCF/LEF (T-cell factor/lymphoid enhancing factor) transcription factors. β-catenin recruits other transcriptional coactivators, such as BCL9, Pygopus and Parafibromin/Hyrax. The complexity of the transcriptional complex assembled by β-catenin is beginning to emerge thanks to new high-throughput proteomics studies. However, a unified theory of how β-catenin drives target gene expression is still missing, and tissue-specific players might assist β-catenin in defining its target genes. The sheer number of β-catenin-interacting proteins complicates our understanding: β-catenin may be directly phosphorylated at Ser552 by Akt, which causes its dissociation from cell-cell contacts and accumulation in the cytosol; thereafter, 14-3-3ζ interacts with β-catenin (pSer552) and enhances its nuclear translocation. BCL9 and Pygopus have been reported, in fact, to possess several β-catenin-independent functions (and are therefore likely Wnt signaling-independent). Noncanonical pathways The noncanonical planar cell polarity (PCP) pathway does not involve β-catenin. It does not use LRP-5/6 as its co-receptor and is thought to use NRH1, Ryk, PTK7 or ROR2. The PCP pathway is activated via the binding of Wnt to Fz and its co-receptor. The receptor then recruits Dsh, which uses its PDZ and DIX domains to form a complex with Dishevelled-associated activator of morphogenesis 1 (DAAM1). DAAM1 then activates the small G-protein Rho through a guanine nucleotide exchange factor. Rho activates Rho-associated kinase (ROCK), which is one of the major regulators of the cytoskeleton. Dsh also forms a complex with Rac1 and mediates profilin binding to actin. Rac1 activates JNK and can also lead to actin polymerization. Profilin binding to actin can result in restructuring of the cytoskeleton and gastrulation. The noncanonical Wnt/calcium pathway also does not involve β-catenin. Its role is to help regulate calcium release from the endoplasmic reticulum (ER) in order to control intracellular calcium levels. Like other Wnt pathways, upon ligand binding, the activated Fz receptor directly interacts with Dsh and activates specific Dsh-protein domains. The domains involved in Wnt/calcium signaling are the PDZ and DEP domains. However, unlike other Wnt pathways, the Fz receptor directly interfaces with a trimeric G-protein. This co-stimulation of Dsh and the G-protein can lead to the activation of either PLC or cGMP-specific PDE. If PLC is activated, the plasma membrane component PIP2 is cleaved into DAG and IP3. When IP3 binds its receptor on the ER, calcium is released. Increased concentrations of calcium and DAG can activate Cdc42 through PKC. Cdc42 is an important regulator of ventral patterning. Increased calcium also activates calcineurin and CaMKII. CaMKII induces activation of the transcription factor NFAT, which regulates cell adhesion, migration and tissue separation. Calcineurin activates TAK1 and NLK kinase, which can interfere with TCF/β-catenin signaling in the canonical Wnt pathway. However, if PDE is activated, calcium release from the ER is inhibited. 
PDE mediates this through the inhibition of PKG, which subsequently causes the inhibition of calcium release. Integrated Wnt Pathway The binary distinction of canonical and non-canonical Wnt signaling pathways has come under scrutiny and an integrated, convergent Wnt pathway has been proposed. Some evidence for this was found for one Wnt ligand (Wnt5A). Evidence for a convergent Wnt signaling pathway that shows integrated activation of Wnt/Ca2+ and Wnt/β-catenin signaling, for multiple Wnt ligands, was described in mammalian cell lines. Other pathways Wnt signaling also regulates a number of other signaling pathways that have not been as extensively elucidated. One such pathway includes the interaction between Wnt and GSK3. During cell growth, Wnt can inhibit GSK3 in order to activate mTOR in the absence of β-catenin. However, Wnt can also serve as a negative regulator of mTOR via activation of the tumor suppressor TSC2, which is upregulated via the Dsh and GSK3 interaction. During myogenesis, Wnt uses PA and CREB to activate the MyoD and Myf5 genes. Wnt also acts in conjunction with Ryk and Src to allow for regulation of neuron repulsion during axonal guidance. Wnt also regulates gastrulation: CK1 serves as an inhibitor of the Rap1 GTPase in order to modulate the cytoskeleton during gastrulation. Further regulation of gastrulation is achieved when Wnt uses ROR2 along with the CDC42 and JNK pathway to regulate the expression of PAPC. Dsh can also interact with aPKC, Par3, Par6 and Lgl in order to control cell polarity and microtubule cytoskeleton development. While these pathways overlap with components associated with PCP and Wnt/calcium signaling, they are considered distinct pathways because they produce different responses. Regulation In order to ensure proper functioning, Wnt signaling is constantly regulated at several points along its signaling pathways. For example, Wnt proteins are palmitoylated. The protein Porcupine mediates this process, which means that it helps regulate when the Wnt ligand is secreted by determining when it is fully formed. Secretion is further controlled with proteins such as GPR177 (wntless) and evenness interrupted, and complexes such as the retromer complex. Upon secretion, the ligand can be prevented from reaching its receptor through the binding of proteins such as the stabilizers Dally and glypican 3 (GPC3), which inhibit diffusion. In cancer cells, both the heparan sulfate chains and the core protein of GPC3 are involved in regulating Wnt binding and activation for cell proliferation. Wnt recognizes a heparan sulfate structure on GPC3, which contains IdoA2S and GlcNS6S, and the 3-O-sulfation in GlcNS6S3S enhances the binding of Wnt to the heparan sulfate glypican. A cysteine-rich domain at the N-lobe of GPC3 has been identified to form a Wnt-binding hydrophobic groove including phenylalanine-41 that interacts with Wnt. Blocking the Wnt binding domain using a nanobody called HN3 can inhibit Wnt activation. At the Fz receptor, the binding of proteins other than Wnt can antagonize signaling. Specific antagonists include Dickkopf (Dkk), Wnt inhibitory factor 1 (WIF-1), secreted Frizzled-related proteins (SFRP), Cerberus, Frzb, Wise, SOST, and Naked cuticle. These constitute inhibitors of Wnt signaling. However, other molecules also act as activators. Norrin and R-Spondin2 activate Wnt signaling in the absence of Wnt ligand. Interactions between Wnt signaling pathways also regulate Wnt signaling. 
As previously mentioned, the Wnt/calcium pathway can inhibit TCF/β-catenin, preventing canonical Wnt pathway signaling. Prostaglandin E2 (PGE2) is an essential activator of the canonical Wnt signaling pathway. Interaction of PGE2 with its receptors E2/E4 stabilizes β-catenin through cAMP/PKA-mediated phosphorylation. The synthesis of PGE2 is necessary for Wnt signaling-mediated processes such as tissue regeneration and control of the stem cell population in zebrafish and mice. The unstructured regions of several oversized intrinsically disordered proteins also play crucial roles in regulating Wnt signaling. Induced cell responses Embryonic development Wnt signaling plays a critical role in embryonic development. It operates in both vertebrates and invertebrates, including humans, frogs, zebrafish, C. elegans, Drosophila and others. It was first found in the segment polarity of Drosophila, where it helps to establish anterior and posterior polarities. It is implicated in other developmental processes. As its function in Drosophila suggests, it plays a key role in body axis formation, particularly the formation of the anteroposterior and dorsoventral axes. It is involved in the induction of cell differentiation to prompt formation of important organs such as lungs and ovaries. Wnt further ensures the development of these tissues through proper regulation of cell proliferation and migration. Wnt signaling functions can be divided into axis patterning, cell fate specification, cell proliferation and cell migration. Axis patterning In early embryo development, the formation of the primary body axes is a crucial step in establishing the organism's overall body plan. The axes include the anteroposterior axis, dorsoventral axis, and right-left axis. Wnt signaling is implicated in the formation of the anteroposterior and dorsoventral (DV) axes. Wnt signaling activity in anterior-posterior development can be seen in mammals, fish and frogs. In mammals, the primitive streak and other surrounding tissues produce the morphogenic compounds Wnts, BMPs, FGFs, Nodal and retinoic acid to establish the posterior region during the late gastrula stage. These proteins form concentration gradients. Areas of highest concentration establish the posterior region while areas of lowest concentration indicate the anterior region. In fish and frogs, β-catenin produced by canonical Wnt signaling causes the formation of organizing centers, which, alongside BMPs, elicit posterior formation. Wnt involvement in DV axis formation can be seen in the formation of the Spemann organizer, which establishes the dorsal region. β-catenin produced by canonical Wnt signaling induces the formation of this organizer via the activation of the genes twin and siamois. Similarly, in avian gastrulation, cells of the Koller's sickle express different mesodermal marker genes that allow for the differential movement of cells during the formation of the primitive streak. Wnt signaling activated by FGFs is responsible for this movement. Wnt signaling is also involved in the axis formation of specific body parts and organ systems later in development. In vertebrates, sonic hedgehog (Shh) and Wnt morphogenetic signaling gradients establish the dorsoventral axis of the central nervous system during neural tube axial patterning. High Wnt signaling establishes the dorsal region while high Shh signaling indicates the ventral region. Wnt is involved in the DV formation of the central nervous system through its involvement in axon guidance. 
Wnt proteins guide the axons of the spinal cord in an anterior-posterior direction. Wnt is also involved in the formation of the limb DV axis. Specifically, Wnt7a helps produce the dorsal patterning of the developing limb. In the embryonic differentiation waves model of development, Wnt plays a critical role as part of a signalling complex in competent cells ready to differentiate. Wnt reacts to the activity of the cytoskeleton, stabilizing the initial change created by a passing wave of contraction or expansion, and simultaneously signals the nucleus through the use of its different signalling pathways as to which wave the individual cell has participated in. Wnt activity thereby amplifies mechanical signalling that occurs during development. Cell fate specification Cell fate specification or cell differentiation is a process where undifferentiated cells become more specialized cell types. Wnt signaling induces differentiation of pluripotent stem cells into mesoderm and endoderm progenitor cells. These progenitor cells further differentiate into cell types such as endothelial, cardiac and vascular smooth muscle lineages. Wnt signaling induces blood formation from stem cells. Specifically, Wnt3 leads to mesoderm-committed cells with hematopoietic potential. Wnt1 antagonizes neural differentiation and is a major factor in self-renewal of neural stem cells. This allows for regeneration of nervous system cells, which is further evidence of a role in promoting neural stem cell proliferation. Wnt signaling is involved in germ cell determination, gut tissue specification, hair follicle development, lung tissue development, trunk neural crest cell differentiation, nephron development, ovary development and sex determination. Wnt signaling also antagonizes heart formation; Wnt inhibition was shown to be a critical inducer of heart tissue during development, and small-molecule Wnt inhibitors are routinely used to produce cardiomyocytes from pluripotent stem cells. Cell proliferation To achieve the mass differentiation of cells needed to form the specified cell tissues of different organisms, proliferation and growth of embryonic stem cells must take place. This process is mediated through canonical Wnt signaling, which increases nuclear and cytoplasmic β-catenin. Increased β-catenin can initiate transcriptional activation of proteins such as cyclin D1 and c-myc, which control the G1 to S phase transition in the cell cycle. Entry into the S phase causes DNA replication and ultimately mitosis, which are responsible for cell proliferation. This proliferation increase is directly paired with cell differentiation because as the stem cells proliferate, they also differentiate. This allows for overall growth and development of specific tissue systems during embryonic development. This is apparent in systems such as the circulatory system where Wnt3a leads to proliferation and expansion of hematopoietic stem cells needed for red blood cell formation. The biochemistry of cancer stem cells is subtly different from that of other tumor cells. These so-called Wnt-addicted cells hijack and depend on constant stimulation of the Wnt pathway to promote their uncontrolled growth, survival and migration. In cancer, Wnt signaling can become independent of regular stimuli, through mutations in downstream oncogenes and tumor suppressor genes that become permanently activated even though the normal receptor has not received a signal. 
β-catenin binds to transcription factors such as the protein TCF4, and in combination the two molecules activate the necessary genes. LF3 strongly inhibits this binding in vitro and in cell lines, and reduced tumor growth in mouse models. It prevented tumor cell replication and reduced the cells' ability to migrate, all without affecting healthy cells. No cancer stem cells remained after treatment. The discovery was the product of "rational drug design", involving AlphaScreens and ELISA technologies. Cell migration Cell migration during embryonic development allows for the establishment of body axes, tissue formation, limb induction and several other processes. Wnt signaling helps mediate this process, particularly during convergent extension. Signaling from both the Wnt PCP pathway and canonical Wnt pathway is required for proper convergent extension during gastrulation. Convergent extension is further regulated by the Wnt/calcium pathway, which blocks convergent extension when activated. Wnt signaling also induces cell migration in later stages of development through the control of the migration behavior of neuroblasts, neural crest cells, myocytes, and tracheal cells. Wnt signaling is involved in another key migration process known as the epithelial-mesenchymal transition (EMT). This process allows epithelial cells to transform into mesenchymal cells so that they are no longer held in place by laminin. It involves cadherin down-regulation so that cells can detach from laminin and migrate. Wnt signaling is an inducer of EMT, particularly in mammary development. Insulin sensitivity Insulin is a peptide hormone involved in glucose homeostasis within certain organisms. Specifically, it leads to upregulation of glucose transporters in the cell membrane in order to increase glucose uptake from the bloodstream. This process is partially mediated by activation of Wnt/β-catenin signaling, which can increase a cell's insulin sensitivity. In particular, Wnt10b is a Wnt protein that increases this sensitivity in skeletal muscle cells. Clinical implications Cancer Since its initial discovery, Wnt signaling has had an association with cancer. When Wnt1 was discovered, it was first identified as a proto-oncogene in a mouse model for breast cancer. The fact that Wnt1 is a homolog of Wg shows that it is involved in embryonic development, which often calls for rapid cell division and migration. Misregulation of these processes can lead to tumor development via excess cell proliferation. Canonical Wnt pathway activity is involved in the development of benign and malignant breast tumors. The role of the Wnt pathway in tumor chemoresistance has also been well documented, as has its role in the maintenance of a distinct subpopulation of cancer-initiating cells. Its presence is revealed by elevated levels of β-catenin in the nucleus and/or cytoplasm, which can be detected with immunohistochemical staining and Western blotting. Increased β-catenin expression is correlated with poor prognosis in breast cancer patients. This accumulation may be due to factors such as mutations in β-catenin, deficiencies in the β-catenin destruction complex (most frequently through mutations in structurally disordered regions of APC), overexpression of Wnt ligands, loss of inhibitors and/or decreased activity of regulatory pathways (such as the Wnt/calcium pathway). Breast tumors can metastasize due to Wnt involvement in EMT. 
Research looking at metastasis of basal-like breast cancer to the lungs showed that repression of Wnt/β-catenin signaling can prevent EMT, which can inhibit metastasis. Wnt signaling has been implicated in the development of other cancers as well as in desmoid fibromatosis. Changes in CTNNB1 expression, which is the gene that encodes β-catenin, can be measured in breast, colorectal, melanoma, prostate, lung, and other cancers. Increased expression of Wnt ligand proteins such as Wnt1, Wnt2 and Wnt7A was observed in the development of glioblastoma, oesophageal cancer and ovarian cancer, respectively. Other proteins that can cause multiple cancer types in the absence of proper functioning include ROR1, ROR2, SFRP4, Wnt5A, WIF1 and those of the TCF/LEF family. Wnt signaling is further implicated in the pathogenesis of bone metastasis from breast and prostate cancer, with studies suggesting discrete on and off states. Wnt is down-regulated during the dormancy stage by autocrine DKK1 to avoid immune surveillance, as well as during the dissemination stages by intracellular Dact1. Meanwhile, Wnt is activated during the early outgrowth phase by E-selectin. The link between PGE2 and Wnt suggests that a chronic inflammation-related increase of PGE2 may lead to activation of the Wnt pathway in different tissues, resulting in carcinogenesis. Type II diabetes Diabetes mellitus type 2 is a common disease that causes reduced insulin secretion and increased insulin resistance in the periphery. It results in increased blood glucose levels, or hyperglycemia, which can be fatal if untreated. Since Wnt signaling is involved in insulin sensitivity, malfunctioning of its pathway could be involved. Overexpression of Wnt5b, for instance, may increase susceptibility due to its role in adipogenesis, since obesity and type II diabetes have high comorbidity. Wnt signaling is a strong activator of mitochondrial biogenesis. This leads to increased production of reactive oxygen species (ROS) known to cause DNA and cellular damage. This ROS-induced damage is significant because it can cause acute hepatic insulin resistance, or injury-induced insulin resistance. Mutations in Wnt signaling-associated transcription factors, such as TCF7L2, are linked to increased susceptibility. See also AXIN1 GSK-3 Management of hair loss Wingless localisation element 3 (WLE3) WNT1-inducible-signaling pathway protein 1 (WISP1) WNT1-inducible-signaling pathway protein 2 (WISP2) WNT1-inducible-signaling pathway protein 3 (WISP3) References Further reading External links Signal transduction Genes Evolutionary developmental biology
Wnt signaling pathway
[ "Chemistry", "Biology" ]
5,991
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
1,209,416
https://en.wikipedia.org/wiki/DIDO%20%28nuclear%20reactor%29
DIDO was a materials testing nuclear reactor at the Atomic Energy Research Establishment at Harwell, Oxfordshire in the United Kingdom. It used enriched uranium metal fuel, and heavy water as both neutron moderator and primary coolant. There was also a graphite neutron reflector surrounding the core. In the design phase, DIDO was known as AE334 after its engineering design number. DIDO was designed to have a high neutron flux, largely to reduce the time required for testing of materials intended for use in nuclear power reactors. This also allowed for the production of intense beams of neutrons for use in neutron diffraction. DIDO was shut down in 1990, and its decommissioning is being planned. In all, six DIDO class reactors were constructed based on this design: DIDO, first criticality 1956. PLUTO, also at Harwell, first criticality 1957. HIFAR (Australia), first criticality January 1958. Dounreay Materials Testing Reactor (DMTR) at Dounreay Nuclear Power Development Establishment in Scotland, first criticality May 1958. DR-3 at Risø National Laboratory (Denmark), first criticality January 1960. FRJ-II at Jülich Research Centre (Germany), first criticality 1962. HIFAR was the last to shut down, in 2007. See also List of nuclear reactors References Buildings and structures in Oxfordshire Former nuclear research institutes Neutron facilities Nuclear research institutes in the United Kingdom Nuclear research reactors Science and technology in the United Kingdom Vale of White Horse
DIDO (nuclear reactor)
[ "Physics" ]
311
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
1,209,759
https://en.wikipedia.org/wiki/Temporal%20difference%20learning
Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods. While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate, predictions about the future before the final outcome is known. This is a form of bootstrapping, as illustrated with the following example: Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather will be on Saturday – and thus be able to change, say, Saturday's model before Saturday arrives. Temporal difference methods are related to the temporal difference model of animal learning. Mathematical formulation The tabular TD(0) method is one of the simplest TD methods. It is a special case of more general stochastic approximation methods. It estimates the state value function of a finite-state Markov decision process (MDP) under a policy π. Let Vπ denote the state value function of the MDP with states (St), rewards (Rt) and discount rate γ under the policy π: Vπ(s) = E[ Σ_{t=0}^{∞} γ^t R_{t+1} | S_0 = s ]. We drop the action from the notation for convenience. Vπ satisfies the Hamilton–Jacobi–Bellman equation: Vπ(s) = E[ R_1 + γ Vπ(S_1) | S_0 = s ], so R_1 + γ Vπ(S_1) is an unbiased estimate for Vπ(s). This observation motivates the following algorithm for estimating Vπ. The algorithm starts by initializing a table V(s) arbitrarily, with one value for each state of the MDP. A positive learning rate α is chosen. We then repeatedly evaluate the policy π, obtain a reward r and update the value function for the current state using the rule: V(S_t) ← V(S_t) + α [ R_{t+1} + γ V(S_{t+1}) − V(S_t) ], where S_t and S_{t+1} are the current and next states, respectively. The value R_{t+1} + γ V(S_{t+1}) is known as the TD target, and R_{t+1} + γ V(S_{t+1}) − V(S_t) is known as the TD error (a code sketch illustrating this update appears below). TD-Lambda TD-Lambda is a learning algorithm invented by Richard S. Sutton based on earlier work on temporal difference learning by Arthur Samuel. This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon at the level of expert human players. The lambda (λ) parameter refers to the trace decay parameter, with 0 ≤ λ ≤ 1. Higher settings lead to longer lasting traces; that is, a larger proportion of credit from a reward can be given to more distant states and actions when λ is higher, with λ = 1 producing learning parallel to Monte Carlo RL algorithms. In neuroscience The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appears to mimic the error function in the algorithm. The error function reports back the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward. Dopamine cells appear to behave in a similar manner. In one experiment measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice. 
Initially the dopamine cells increased firing rates when the monkey received juice, indicating a difference in expected and actual rewards. Over time this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. Subsequently, the firing rate for the dopamine cells decreased below normal activation when the expected reward was not produced. This closely mimics how the error function in TD is used for reinforcement learning. The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research. It has also been used to study conditions such as schizophrenia or the consequences of pharmacological manipulations of dopamine on learning. See also PVLV Q-learning Rescorla–Wagner model State–action–reward–state–action (SARSA) Notes Works cited Further reading See final chapter and appendix. External links Connect Four TDGravity Applet (+ mobile phone version) – self-learned using TD-Leaf method (combination of TD-Lambda with shallow tree search) Self Learning Meta-Tic-Tac-Toe Example web app showing how temporal difference learning can be used to learn state evaluation constants for a minimax AI playing a simple board game. Reinforcement Learning Problem, document explaining how temporal difference learning can be used to speed up Q-learning TD-Simulator Temporal difference simulator for classical conditioning Computational neuroscience Reinforcement learning Subtraction
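To make the tabular TD(0) rule above concrete, here is a minimal, self-contained sketch in Python. The toy environment (a 1-D random walk with one rewarded terminal, a standard illustration) and all names are assumptions for this sketch, not taken from the article:

```python
import random

def td0_evaluate(n_states=5, alpha=0.1, gamma=1.0, episodes=5000):
    """Tabular TD(0) evaluation of a uniformly random policy on a 1-D
    random walk: states 0..n_states+1, where 0 and n_states+1 are
    terminal and only the right-hand terminal pays reward 1."""
    V = [0.0] * (n_states + 2)            # arbitrary initialization
    for _ in range(episodes):
        s = (n_states + 1) // 2           # start each episode in the middle
        while s not in (0, n_states + 1):
            s_next = s + random.choice((-1, 1))
            r = 1.0 if s_next == n_states + 1 else 0.0
            # TD target is r + gamma*V[s_next]; the bracket is the TD error.
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V[1:-1]                        # estimated values of interior states

print(td0_evaluate())
```

With γ = 1, the estimates converge toward the true values 1/6, 2/6, ..., 5/6, the probabilities of eventually reaching the rewarded terminal from each interior state.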
Temporal difference learning
[ "Mathematics" ]
1,009
[ "Sign (mathematics)", "Subtraction" ]
1,209,805
https://en.wikipedia.org/wiki/Depaneling
Depaneling or depanelization is a process step in high-volume electronics assembly production. In order to increase the throughput of printed circuit board (PCB) manufacturing and surface mount (SMT) lines, PCBs are often arranged, in a process called panelization, so that a single large board consists of many smaller individual PCBs that will be used in the final product. This PCB cluster is called a panel or multiblock. The large panel is broken up or "depaneled" at a certain step in the process - depending on the product, it may happen right after the SMT process, after in-circuit test (ICT), after soldering of through-hole elements, or even right before the final assembly of the PCB assembly (PCBA) into the enclosure. Risks When selecting a depaneling technique, it is important to be mindful of the risks, including: Mechanical strain: depaneling can be a violent operation and may bend the PCB, causing some components to fracture or, in the worst case, break traces. Ways to mitigate this are avoiding placing components near the edge of the PCBA, and orienting components parallel to the break line. Tolerance: some methods of depaneling may result in the PCBA being a different size than intended. Ways to mitigate this are communicating with the manufacturer about which dimensions are critical, and selecting a depaneling method that meets your needs. Hand depaneling will have the loosest tolerance, laser depaneling the tightest. Depaneling methods There are different methods to depanel a PCB board. The most commonly used depaneling methods are: V-Scoring Tab Routing Breakaway Rail V-Scoring In the V-scoring method, a V-shaped groove is placed between the individual PCBs. These grooves can easily be broken by hand or with a saw, and the PCB boards are separated. In V-scoring, the groove depth should be selected carefully; normally the groove should be one-third of the actual PCB thickness. Tab routing In tab routing, small tabs are placed at the edges of individual PCBs. PCB boards can easily be separated by removing the tabs. The advantage of tab routing over V-scoring is that it can handle different shapes of PCBs. While selecting this method, one should be careful about tab stiffness, as the tabs need to hold the whole board. Breakaway Rail A breakaway rail, also called an edge rail, is used at the border of the PCB panel. The main purpose of the breakaway rail is to protect the PCB boards from any damage at the edges. The rails are easily removed from the PCB panel during depaneling. Main depanel technologies There are seven main depaneling cutting techniques currently in use: hand break, pizza cutter / V-cut, water jet cutter, punch, router, saw, and laser. Hand break This method is suitable for strain-resistant circuits (e.g. without SMD components). The operator simply breaks the PCB, usually along a prepared V-groove line, with the help of a proper fixture. Pizza cutter / V-cut A pizza cutter is a rotary blade, sometimes rotating using its own motor. The operator moves a pre-scored PCB along a V-groove line, usually with the help of a special fixture. This method is often used only for cutting huge panels into smaller ones. The equipment is cheap and requires only sharpening of the blade and greasing as maintenance. It uses an aluminium-based jig to secure the PCB in place. Water jet cutter In order to cut the PCB panel, a high-pressure water stream is used. The water stream is normally mixed with abrasive particles, which help produce a smooth cut. Water jet cutting has high precision and accuracy and is usually used to cut metal sheets. 
Punch Punching is a process where single PCBs are punched out of the panel through the use of a special fixture. It is a two-part fixture, with sharp blades on one part and supports on the other. The production capacity of such a system is high, but fixtures are quite expensive and require regular sharpening. Router A depaneling router is a machine similar to a wood router. It uses a router bit to mill the material of the PCB. The hardness of the PCB material wears down the bit, which must be replaced periodically. Routing requires that single boards are connected using tabs in a panel. The bit mills the whole material of the tab. It produces much dust that has to be vacuumed. It is important for the vacuum system to be ESD-safe. Also the fixturing of the PCB must be tight - usually an aluminium jig or a vacuum holding system is used. The two most important parameters of the routing process are feed rate and rotational speed. They are chosen according to the bit type and diameter and should remain proportional (i.e. increasing the feed rate should be done together with increasing the rotational speed; see the worked example below). Routers generate vibrations of the same frequency as their rotational speed (and higher harmonics), which might be important if there are vibration-sensitive components on the surface of the board. The strain level is lower than for other depaneling methods. Their advantage is that they are able to cut arcs and turn at sharp angles. Their disadvantage is lower capacity. Saw A saw is able to cut through panels at high feed rates. It can cut both V-grooved and not-V-grooved PCBs. It does not cut much material and therefore generates low amounts of dust. The disadvantages are the ability to cut in straight lines only and higher stress than for routing. Laser Laser cutting is now being offered as an additional method by some manufacturers. UV laser depaneling makes use of a 355 nm wavelength (ultraviolet), diode-pumped, Nd:YAG laser source. At this wavelength the laser is capable of cutting, drilling and structuring on rigid and flex circuit substrates. The laser beam, capable of cut widths under 25 μm, is controlled by high-precision, galvo-scanning mirrors with a repeat accuracy of +/-4 μm. A variety of substrate materials can be cut with a UV laser source, including FR-4 and similar resin-based substrates, polyimide, ceramics, PTFE, PET, aluminium, brass and copper. Advantages: accuracy, precision, low mechanical stress and flexible contour and cut capabilities. Disadvantages: the initial capital investment is often higher than for traditional depaneling technologies, and the optimal board thickness is recommended to be no more than 1 mm. Other laser sources have also been used for depaneling, but are considered outdated, as UV laser technology provides cleaner cuts, less thermal stress and higher precision capabilities. See also Capacitor flex cracking References External links Depaneling: a study in yield and productivity: saw systems can provide a low stress and fast alternative to hand breaking methods CircuitPeople PCB Panel Calculator Printed circuit board manufacturing
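As the worked example promised in the routing section, the sketch below applies the standard chip-load rule of thumb from milling practice, under which feed rate stays proportional to spindle speed. The specific numbers are illustrative assumptions, not vendor data:

```python
def router_feed_rate(rpm, flutes, chip_load_mm):
    """Feed rate (mm/min) that keeps the chip load constant:
    feed = RPM x number of flutes x chip load per tooth.
    Doubling the rotational speed therefore requires doubling the
    feed rate, keeping the two proportional."""
    return rpm * flutes * chip_load_mm

# Illustrative numbers only; real values come from the bit supplier.
print(router_feed_rate(rpm=30000, flutes=2, chip_load_mm=0.05))  # 3000.0 mm/min
```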
Depaneling
[ "Engineering" ]
1,442
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
443,415
https://en.wikipedia.org/wiki/Parallelogram%20of%20force
The parallelogram of forces is a method for solving (or visualizing) the results of applying two forces to an object. When more than two forces are involved, the geometry is no longer a parallelogram, but the same principles apply to a polygon of forces. The resultant force due to the application of a number of forces can be found geometrically by drawing arrows for each force. The parallelogram of forces is a graphical manifestation of the addition of vectors. Newton's proof Preliminary: the parallelogram of velocity Suppose a particle moves at a uniform rate along a line from A to B (Figure 2) in a given time (say, one second), while in the same time, the line AB moves uniformly from its position at AB to a position at DC, remaining parallel to its original orientation throughout. Accounting for both motions, the particle traces the line AC. Because a displacement in a given time is a measure of velocity, the length of AB is a measure of the particle's velocity along AB, the length of AD is a measure of the line's velocity along AD, and the length of AC is a measure of the particle's velocity along AC. The particle's motion is the same as if it had moved with a single velocity along AC. Newton's proof of the parallelogram of force Suppose two forces act on a particle at the origin (the "tails" of the vectors) of Figure 1. Let the lengths of the vectors F1 and F2 represent the velocities the two forces could produce in the particle by acting for a given time, and let the direction of each represent the direction in which they act. Each force acts independently and will produce its particular velocity whether the other force acts or not. At the end of the given time, the particle has both velocities. By the above proof, they are equivalent to a single velocity, Fnet. By Newton's second law, this vector is also a measure of the force which would produce that velocity, thus the two forces are equivalent to a single force. Bernoulli's proof for perpendicular vectors We model forces as Euclidean vectors, or members of ℝ². Our first assumption is that the resultant of two forces is in fact another force, so that for any two forces x and y there is another force x ⊕ y. Our final assumption is that the resultant of two forces doesn't change when rotated: if R is any rotation (any orthogonal map for the usual vector space structure of ℝ² with det R = 1), then for all forces x and y, R(x ⊕ y) = Rx ⊕ Ry. Consider two perpendicular forces x of length a and y of length b, with c being the length of x ⊕ y. Rotating the pair so that first x and then y is carried onto the direction of x ⊕ y, and comparing the two resultants so obtained, both of which lie along x ⊕ y so that their lengths can be equated, yields c² = a² + b², so that c is the length of x + y; the same rotations show that x ⊕ y points along x + y. Thus for the case where x and y are perpendicular, x ⊕ y = x + y. However, when combining our two sets of auxiliary forces we used the associativity of ⊕. Using this additional assumption, we will form an additional proof below. Algebraic proof of the parallelogram of force We model forces as Euclidean vectors, or members of ℝ². Our first assumption is that the resultant of two forces is in fact another force, so that for any two forces x and y there is another force x ⊕ y. We assume commutativity, as these are forces being applied concurrently, so the order shouldn't matter: x ⊕ y = y ⊕ x. Consider the map (x, y) ↦ x ⊕ y. If ⊕ is associative, then this map will be linear. Since it also sends (x, 0) to x and (0, y) to y, it must then be ordinary vector addition. Thus ⊕ must be equivalent to the normal vector addition operator. 
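As a numerical illustration of the parallelogram rule, the following sketch computes the magnitude of the resultant from the law of cosines, c = sqrt(a² + b² + 2ab·cos γ), and cross-checks it against component-wise vector addition. All numbers and names are illustrative:

```python
import math

def resultant(a, b, gamma_deg):
    """Magnitude of the resultant of two forces of magnitudes a and b
    separated by an angle gamma, per the parallelogram rule:
    c = sqrt(a^2 + b^2 + 2*a*b*cos(gamma))."""
    g = math.radians(gamma_deg)
    return math.sqrt(a * a + b * b + 2 * a * b * math.cos(g))

# Cross-check against component-wise vector addition (x along the
# first force, the second force at angle gamma from it).
a, b, gamma = 3.0, 4.0, 90.0
fx = a + b * math.cos(math.radians(gamma))
fy = b * math.sin(math.radians(gamma))
print(resultant(a, b, gamma))   # 5.0 in the perpendicular case
print(math.hypot(fx, fy))       # same value, computed as x + y
```

For the perpendicular case treated in Bernoulli's proof, cos γ = 0 and the formula reduces to c² = a² + b², matching the 3-4-5 example above.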
Controversy The mathematical proof of the parallelogram of force is not generally accepted to be mathematically valid. Various proofs were developed (chiefly Duchayla's and Poisson's), and these also caused objections. That the parallelogram of force was true was not questioned, but why it was true. Today the parallelogram of force is accepted as an empirical fact, non-reducible to Newton's first principles. See also Newton's Mathematical Principles of Natural Philosophy, Axioms or Laws of Motion, Corollary I, at Wikisource Vector (geometric) Net force References Force Vector calculus Diagrams
Parallelogram of force
[ "Physics", "Mathematics" ]
883
[ "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Wikipedia categories named after physical quantities", "Matter" ]
443,757
https://en.wikipedia.org/wiki/Topological%20quantum%20field%20theory
In gauge theory and mathematical physics, a topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants. While TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory and the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for mathematical work related to topological field theory. In condensed matter physics, topological quantum field theories are the low-energy effective theories of topologically ordered states, such as fractional quantum Hall states, string-net condensed states, and other strongly correlated quantum liquid states. Overview In a topological field theory, correlation functions do not depend on the metric of spacetime. This means that the theory is not sensitive to changes in the shape of spacetime; if spacetime warps or contracts, the correlation functions do not change. Consequently, they are topological invariants. Topological field theories are not very interesting on the flat Minkowski spacetime used in particle physics. Minkowski space can be contracted to a point, so a TQFT applied to Minkowski space results in trivial topological invariants. Consequently, TQFTs are usually applied to curved spacetimes, such as, for example, Riemann surfaces. Most of the known topological field theories are defined on spacetimes of dimension less than five. It seems that a few higher-dimensional theories exist, but they are not very well understood. Quantum gravity is believed to be background-independent (in some suitable sense), and TQFTs provide examples of background-independent quantum field theories. This has prompted ongoing theoretical investigations into this class of models. (Caveat: It is often said that TQFTs have only finitely many degrees of freedom. This is not a fundamental property. It happens to be true in most of the examples that physicists and mathematicians study, but it is not necessary. A topological sigma model targets infinite-dimensional projective space, and if such a thing could be defined it would have countably infinitely many degrees of freedom.) Specific models The known topological field theories fall into two general classes: Schwarz-type TQFTs and Witten-type TQFTs. Witten TQFTs are also sometimes referred to as cohomological field theories. Schwarz-type TQFTs In Schwarz-type TQFTs, the correlation functions or partition functions of the system are computed by the path integral of metric-independent action functionals. For instance, in the BF model, the spacetime is a two-dimensional manifold M, the observables are constructed from a two-form F, an auxiliary scalar B, and their derivatives. The action (which determines the path integral) is S = ∫_M B F. The spacetime metric does not appear anywhere in the theory, so the theory is explicitly topologically invariant. The first example appeared in 1977 and is due to A. Schwarz. Another more famous example is Chern–Simons theory, which can be applied to knot invariants. In general, partition functions depend on a metric, but the above examples are metric-independent. Witten-type TQFTs The first example of Witten-type TQFTs appeared in Witten's 1988 paper, i.e. topological Yang–Mills theory in four dimensions. 
Though its action functional contains the spacetime metric gαβ, after a topological twist it turns out to be metric-independent. The independence of the stress-energy tensor Tαβ of the system from the metric depends on whether the BRST operator is closed. Following Witten's example many other examples can be found in string theory. Witten-type TQFTs arise if the following conditions are satisfied: The action S of the TQFT has a symmetry, i.e. if δ denotes a symmetry transformation (e.g. a Lie derivative) then δS = 0 holds. The symmetry transformation is exact, i.e. δ² = 0. There are existing observables O which satisfy δO = 0 for all such O. The stress-energy tensor (or similar physical quantities) is of the form T^{αβ} = δG^{αβ} for an arbitrary tensor G^{αβ}. As an example: given a 2-form field B with a differential operator δ which satisfies δ² = 0, an action built from B and δB carries the symmetry δ and has a δ-exact stress-energy tensor, so any averages of δ-closed observables taken with the corresponding Haar measure are independent of the "geometric" field g and are therefore topological. The argument uses the δ-exactness of the stress-energy tensor together with the invariance of the Haar measure under symmetry transformations; since such an average is only a number, its Lie derivative vanishes. Mathematical formulations The original Atiyah–Segal axioms Atiyah suggested a set of axioms for topological quantum field theory, inspired by Segal's proposed axioms for conformal field theory and by Witten's geometric meaning of supersymmetry. Atiyah's axioms are constructed by gluing the boundary with a differentiable (topological or continuous) transformation, while Segal's axioms are for conformal transformations. These axioms have been relatively useful for mathematical treatments of Schwarz-type QFTs, although it isn't clear that they capture the whole structure of Witten-type QFTs. The basic idea is that a TQFT is a functor from a certain category of cobordisms to the category of vector spaces. There are in fact two different sets of axioms which could reasonably be called the Atiyah axioms. These axioms differ basically in whether or not they apply to a TQFT defined on a single fixed n-dimensional Riemannian / Lorentzian spacetime M or a TQFT defined on all n-dimensional spacetimes at once. Let Λ be a commutative ring with 1 (for almost all real-world purposes we will have Λ = Z, R or C). Atiyah originally proposed the axioms of a topological quantum field theory (TQFT) in dimension d defined over a ground ring Λ as following: A finitely generated Λ-module Z(Σ) associated to each oriented closed smooth d-dimensional manifold Σ (corresponding to the homotopy axiom), An element Z(M) ∈ Z(∂M) associated to each oriented smooth (d + 1)-dimensional manifold (with boundary) M (corresponding to an additive axiom). These data are subject to the following axioms (4 and 5 were added by Atiyah): Z is functorial with respect to orientation-preserving diffeomorphisms of Σ and M, Z is involutory, i.e. Z(Σ*) = Z(Σ)* where Σ* is Σ with opposite orientation and Z(Σ)* denotes the dual module, Z is multiplicative. Z(∅) = Λ for the empty d-dimensional manifold and Z(∅) = 1 for the empty (d + 1)-dimensional manifold. Z(M*) = Z(M)* (the hermitian axiom). If ∂M = Σ0* ∪ Σ1, so that Z(M) can be viewed as a linear transformation between hermitian vector spaces, then this is equivalent to Z(M*) being the adjoint of Z(M). Remark. 
If for a closed manifold M we view Z(M) as a numerical invariant, then for a manifold with a boundary we should think of Z(M) ∈ Z(∂M) as a "relative" invariant. Let f : Σ → Σ be an orientation-preserving diffeomorphism, and identify opposite ends of Σ × I by f. This gives a manifold Σf, and our axioms imply Z(Σf) = Tr Σ(f), where Σ(f) is the induced automorphism of Z(Σ). Remark. For a manifold M with boundary Σ we can always form the double M ∪_Σ M*, which is a closed manifold. The fifth axiom shows that Z(M ∪_Σ M*) = |Z(M)|², where on the right we compute the norm in the hermitian (possibly indefinite) metric. The relation to physics Physically (2) + (4) are related to relativistic invariance while (3) + (5) are indicative of the quantum nature of the theory. Σ is meant to indicate the physical space (usually, d = 3 for standard physics) and the extra dimension in Σ × I is "imaginary" time. The space Z(Σ) is the Hilbert space of the quantum theory, and a physical theory with a Hamiltonian H will have a time evolution operator e^{itH} or an "imaginary time" operator e^{−tH}. The main feature of topological QFTs is that H = 0, which implies that there is no real dynamics or propagation along the cylinder Σ × I. However, there can be non-trivial "propagation" (or tunneling amplitudes) from Σ0 to Σ1 through an intervening manifold M with ∂M = Σ0* ∪ Σ1; this reflects the topology of M. If ∂M = Σ, then the distinguished vector Z(M) in the Hilbert space Z(Σ) is thought of as the vacuum state defined by M. For a closed manifold M the number Z(M) is the vacuum expectation value. In analogy with statistical mechanics it is also called the partition function. The reason why a theory with a zero Hamiltonian can be sensibly formulated resides in the Feynman path integral approach to QFT. This incorporates relativistic invariance (which applies to general (d + 1)-dimensional "spacetimes") and the theory is formally defined by a suitable Lagrangian, a functional of the classical fields of the theory. A Lagrangian which involves only first derivatives in time formally leads to a zero Hamiltonian, but the Lagrangian itself may have non-trivial features which relate to the topology of M. Atiyah's examples In 1988, M. Atiyah published a paper in which he described many new examples of topological quantum field theory that were considered at that time. It contains some new topological invariants along with some new ideas: Casson invariant, Donaldson invariant, Gromov's theory, Floer homology and Jones–Witten theory. d = 0 In this case Σ consists of finitely many points. To a single point we associate a vector space V = Z(point) and to n points the n-fold tensor product: V⊗n = V ⊗ … ⊗ V. The symmetric group Sn acts on V⊗n. A standard way to get the quantum Hilbert space is to start with a classical symplectic manifold (or phase space) and then quantize it. Let us extend Sn to a compact Lie group G and consider "integrable" orbits for which the symplectic structure comes from a line bundle; then quantization leads to the irreducible representations V of G. This is the physical interpretation of the Borel–Weil theorem or the Borel–Weil–Bott theorem. The Lagrangian of these theories is the classical action (holonomy of the line bundle). Thus topological QFTs with d = 0 relate naturally to the classical representation theory of Lie groups and the symmetric group. d = 1 We should consider periodic boundary conditions given by closed loops in a compact symplectic manifold X. 
The holonomy of such loops, as used in the d = 0 case as a Lagrangian, is then used to modify the Hamiltonian. For a closed surface M the invariant Z(M) of the theory is the number of pseudoholomorphic maps f : M → X in the sense of Gromov (they are ordinary holomorphic maps if X is a Kähler manifold). If this number becomes infinite, i.e. if there are "moduli", then we must fix further data on M. This can be done by picking some points Pi and then looking at holomorphic maps f : M → X with f(Pi) constrained to lie on a fixed hyperplane. Witten has written down the relevant Lagrangian for this theory. Floer has given a rigorous treatment, i.e. Floer homology, based on Witten's Morse theory ideas; for the case when the boundary conditions are over the interval instead of being periodic, the initial and end-points of the path lie on two fixed Lagrangian submanifolds. This theory has been developed as Gromov–Witten invariant theory. Another example is holomorphic conformal field theory. This might not have been considered a strictly topological quantum field theory at the time because Hilbert spaces are infinite-dimensional. The conformal field theories are also related to the compact Lie group G, in which the classical phase space consists of a central extension of the loop group (LG). Quantizing these produces the Hilbert spaces of the theory of irreducible (projective) representations of LG. The group Diff+(S1) now substitutes for the symmetric group and plays an important role. As a result, the partition function in such theories depends on the complex structure; thus it is not purely topological. d = 2 Jones–Witten theory is the most important theory in this case. Here the classical phase space, associated with a closed surface Σ, is the moduli space of flat G-bundles over Σ. The Lagrangian is an integer multiple of the Chern–Simons function of a G-connection on a 3-manifold (which has to be "framed"). The integer multiple k, called the level, is a parameter of the theory and k → ∞ gives the classical limit. This theory can be naturally coupled with the d = 0 theory to produce a "relative" theory. The details have been described by Witten, who shows that the partition function for a (framed) link in the 3-sphere is just the value of the Jones polynomial for a suitable root of unity. The theory can be defined over the relevant cyclotomic field. By considering a Riemann surface with boundary, we can couple it to the d = 1 conformal theory instead of coupling the d = 2 theory to d = 0. This has developed into Jones–Witten theory and has led to the discovery of deep connections between knot theory and quantum field theory. d = 3 Donaldson has defined the integer invariant of smooth 4-manifolds by using moduli spaces of SU(2)-instantons. These invariants are polynomials on the second homology. Thus 4-manifolds should have extra data consisting of the symmetric algebra of H2. Witten has produced a supersymmetric Lagrangian which formally reproduces the Donaldson theory. Witten's formula might be understood as an infinite-dimensional analogue of the Gauss–Bonnet theorem. At a later date, this theory was further developed and became the Seiberg–Witten gauge theory, which reduces SU(2) to U(1) in N = 2, d = 4 gauge theory. The Hamiltonian version of the theory has been developed by Floer in terms of the space of connections on a 3-manifold. Floer uses the Chern–Simons function, which is the Lagrangian of Jones–Witten theory, to modify the Hamiltonian. 
Witten has also shown how one can couple the d = 3 and d = 1 theories together: this is quite analogous to the coupling between d = 2 and d = 0 in Jones–Witten theory. Now, topological field theory is viewed as a functor, not on a fixed dimension but on all dimensions at the same time. The case of a fixed spacetime Let BordM be the category whose morphisms are n-dimensional submanifolds of M and whose objects are connected components of the boundaries of such submanifolds. Regard two morphisms as equivalent if they are homotopic via submanifolds of M, and so form the quotient category hBordM: The objects in hBordM are the objects of BordM, and the morphisms of hBordM are homotopy equivalence classes of morphisms in BordM. A TQFT on M is a symmetric monoidal functor from hBordM to the category of vector spaces. Note that cobordisms can, if their boundaries match, be sewn together to form a new bordism. This is the composition law for morphisms in the cobordism category. Since functors are required to preserve composition, this says that the linear map corresponding to a sewn-together morphism is just the composition of the linear map for each piece. There is an equivalence of categories between the category of 2-dimensional topological quantum field theories and the category of commutative Frobenius algebras (a computational sketch of this correspondence is given below). All n-dimensional spacetimes at once To consider all spacetimes at once, it is necessary to replace hBordM by a larger category. So let Bordn be the category of bordisms, i.e. the category whose morphisms are n-dimensional manifolds with boundary, and whose objects are the connected components of the boundaries of n-dimensional manifolds. (Note that any (n−1)-dimensional manifold may appear as an object in Bordn.) As above, regard two morphisms in Bordn as equivalent if they are homotopic, and form the quotient category hBordn. Bordn is a monoidal category under the operation which maps two bordisms to the bordism made from their disjoint union. A TQFT on n-dimensional manifolds is then a functor from hBordn to the category of vector spaces, which maps disjoint unions of bordisms to their tensor product. For example, for (1 + 1)-dimensional bordisms (2-dimensional bordisms between 1-dimensional manifolds), the map associated with a pair of pants gives a product or coproduct, depending on how the boundary components are grouped – which is commutative or cocommutative, while the map associated with a disk gives a counit (trace) or unit (scalars), depending on the grouping of boundary components, and thus (1+1)-dimensional TQFTs correspond to Frobenius algebras. Furthermore, we can consider simultaneously 4-dimensional, 3-dimensional and 2-dimensional manifolds related by the above bordisms, and from them we can obtain ample and important examples. Development at a later time Looking at the development of topological quantum field theory, we should consider its many applications to Seiberg–Witten gauge theory, topological string theory, the relationship between knot theory and quantum field theory, and quantum knot invariants. Furthermore, it has generated topics of great interest in both mathematics and physics. Also of important recent interest are non-local operators in TQFT. If string theory is viewed as fundamental, then non-local TQFTs can be viewed as non-physical models that provide a computationally efficient approximation to local string theory. 
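To illustrate the correspondence between 2-dimensional TQFTs and commutative Frobenius algebras noted above, the following sketch computes the closed-surface invariants Z(Σg) from the structure constants of a finite-dimensional commutative Frobenius algebra, using the standard handle-element formula Z(Σg) = ε(h^g), where h is built from the inverse of the Frobenius pairing. The example algebra and all variable names are illustrative assumptions for this sketch:

```python
import numpy as np

def partition_function(mult, unit, counit, genus):
    """Closed genus-g surface invariant of the 2d TQFT attached to a
    commutative Frobenius algebra A.  mult[i,j,k] are structure
    constants (e_i e_j = sum_k mult[i,j,k] e_k), `unit` is 1_A, and
    `counit` is the trace form epsilon.  Z(genus g) = counit(h^g)."""
    # Gram matrix of the Frobenius pairing <e_i, e_j> = counit(e_i e_j).
    gram = np.einsum('ijk,k->ij', mult, counit)
    gram_inv = np.linalg.inv(gram)          # nondegeneracy of the pairing
    # Handle element h = sum_{i,j} (gram^{-1})_{ij} e_i e_j.
    handle = np.einsum('ij,ijk->k', gram_inv, mult)
    z = unit.copy()
    for _ in range(genus):                  # multiply by h once per handle
        z = np.einsum('ijk,i,j->k', mult, z, handle)
    return counit @ z

# Example: A = C x C with pointwise multiplication and counit weights
# (theta1, theta2); then Z(genus g) = theta1^(1-g) + theta2^(1-g).
mult = np.zeros((2, 2, 2)); mult[0, 0, 0] = mult[1, 1, 1] = 1.0
unit = np.array([1.0, 1.0]); counit = np.array([2.0, 3.0])
print([partition_function(mult, unit, counit, g) for g in range(3)])
# [5.0, 2.0, 0.8333...]  i.e. theta1+theta2, dim(A), 1/2 + 1/3
```

Note that the torus value equals the dimension of the algebra, reflecting the trace interpretation of gluing a cylinder to itself.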
Witten-type TQFTs and dynamical systems Stochastic (partial) differential equations (SDEs) are the foundation for models of everything in nature above the scale of quantum degeneracy and coherence and are essentially Witten-type TQFTs. All SDEs possess a topological or BRST supersymmetry, which in the operator representation of stochastic dynamics is the exterior derivative, commuting with the stochastic evolution operator. This supersymmetry expresses the preservation of the continuity of phase space by continuous flows, and the phenomenon of its spontaneous breakdown by a global non-supersymmetric ground state encompasses such well-established physical concepts as chaos, turbulence, 1/f and crackling noises, self-organized criticality, etc. The topological sector of the theory for any SDE can be recognized as a Witten-type TQFT. See also Quantum topology Topological defect Topological entropy in physics Topological order Topological quantum number Topological quantum computer Topological string theory Arithmetic topology Cobordism hypothesis References Quantum field theory Topology
Topological quantum field theory
[ "Physics", "Mathematics" ]
4,181
[ "Quantum field theory", "Quantum mechanics", "Topology", "Space", "Geometry", "Spacetime" ]
444,349
https://en.wikipedia.org/wiki/Extracorporeal%20membrane%20oxygenation
Extracorporeal membrane oxygenation (ECMO) is a form of extracorporeal life support, providing prolonged cardiac and respiratory support to persons whose heart and lungs are unable to provide an adequate amount of oxygen, gas exchange or blood supply (perfusion) to sustain life. The technology for ECMO is largely derived from cardiopulmonary bypass, which provides shorter-term support with arrested native circulation. The device used is a membrane oxygenator, also known as an artificial lung. ECMO works by temporarily drawing blood from the body to allow artificial oxygenation of the red blood cells and removal of carbon dioxide. Generally, it is used either post-cardiopulmonary bypass or in late-stage treatment of a person with profound heart and/or lung failure, although it is now seeing use as a treatment for cardiac arrest in certain centers, allowing treatment of the underlying cause of arrest while circulation and oxygenation are supported. ECMO is also used to support patients with the acute viral pneumonia associated with COVID-19 in cases where artificial ventilation alone is not sufficient to sustain blood oxygenation levels. Medical uses Guidelines that describe the indications and practice of ECMO are published by the Extracorporeal Life Support Organization (ELSO). Criteria for the initiation of ECMO vary by institution, but generally include acute severe cardiac or pulmonary failure that is potentially reversible and unresponsive to conventional management. Examples of clinical situations that may prompt the initiation of ECMO include the following: Hypoxemic respiratory failure with a ratio of arterial oxygen tension to fraction of inspired oxygen (PaO2/FiO2) of <100 mmHg despite optimization of the ventilator settings, including the fraction of inspired oxygen (FiO2), positive end-expiratory pressure (PEEP), and inspiratory to expiratory (I:E) ratio Hypercapnic respiratory failure with an arterial pH <7.20 Refractory cardiogenic shock Thyroid storm Cardiac arrest Failure to wean from cardiopulmonary bypass after cardiac surgery As a bridge to either heart transplantation or placement of a ventricular assist device As a bridge to lung transplantation Septic shock is a more controversial but increasingly studied use of ECMO Hypothermia, with a core temperature between 28 and 24 °C and cardiac instability, or with a core temperature below 24 °C. (A simple numeric sketch of the hypoxemia and pH thresholds above is given below.) In those with cardiac arrest or cardiogenic shock, it is believed to improve survival and good outcomes. However, a recent clinical trial has shown that in patients with cardiogenic shock following acute myocardial infarction, ECLS did not improve survival (as measured via 30-day mortality); on the contrary, it resulted in increased complications (e.g., major bleeding, lower limb ischemia). This finding is corroborated by a recent meta-analysis that used data from four previous clinical trials, indicating a need to reassess current guidelines for initiation of ECLS treatment. Use in COVID-19 patients Beginning in early February 2020, doctors in China increasingly used ECMO as an adjunct support for patients presenting with acute viral pneumonia associated with SARS-CoV-2 infection (COVID-19) when, with ventilation alone, the blood oxygenation levels still remained too low to sustain the patient. Initial reports indicated that it assisted in restoring patients' blood oxygen saturation and reducing fatalities among the approximately 3% of severe cases where it was utilized.
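A minimal sketch (not a clinical tool) of the numeric thresholds in the criteria list above, PaO2/FiO2 < 100 mmHg and arterial pH < 7.20; the function and variable names are illustrative assumptions:

```python
def pf_ratio(pao2_mmhg: float, fio2_fraction: float) -> float:
    """PaO2/FiO2 ratio, with FiO2 given as a fraction (e.g. 1.0 for 100% oxygen)."""
    return pao2_mmhg / fio2_fraction

def meets_numeric_thresholds(pao2_mmhg: float, fio2_fraction: float,
                             arterial_ph: float) -> bool:
    hypoxemic = pf_ratio(pao2_mmhg, fio2_fraction) < 100.0   # despite optimized ventilation
    hypercapnic = arterial_ph < 7.20
    return hypoxemic or hypercapnic

# Example: PaO2 of 60 mmHg on 80% oxygen gives a PaO2/FiO2 of 75 mmHg.
print(meets_numeric_thresholds(60.0, 0.8, 7.35))  # True (hypoxemic criterion)
```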
For critically ill patients, the mortality rate was reduced from around 59–71% with conventional therapy to approximately 46% with extracorporeal membrane oxygenation. A March 2021 Los Angeles Times cover story illustrated the efficacy of ECMO in an extremely challenging COVID patient. In February 2021, three pregnant Israeli women who had "very serious" cases of COVID-19 were given ECMO treatment, and it appeared this treatment option would continue to be used. Outcomes Early studies had shown survival benefit with use of ECMO for people in acute respiratory failure, especially in the setting of acute respiratory distress syndrome. A registry maintained by ELSO of nearly 51,000 people who have received ECMO has reported outcomes of 75% survival for neonatal respiratory failure, 56% survival for pediatric respiratory failure, and 55% survival for adult respiratory failure. Other observational and uncontrolled clinical trials have reported survival rates from 50 to 70%. These reported survival rates are better than historical survival rates. Even though ECMO is used for a range of conditions with varying mortality rates, early detection is key to preventing progressive deterioration and improving survival outcomes. In the United Kingdom, veno-venous ECMO deployment is concentrated in designated ECMO centers to potentially improve care and promote better outcomes. Contraindications Most contraindications are relative, balancing the risks of the procedure versus the potential benefits. The relative contraindications are: Conditions incompatible with normal life if the person recovers Preexisting conditions that affect the quality of life (CNS status, end-stage malignancy, risk of systemic bleeding with anticoagulation) Age and size Futility: those who are too sick, have been on conventional therapy too long, or have a fatal diagnosis. Side effects and complications Neurologic A common consequence in ECMO-treated adults is neurological injury, which may include intracerebral hemorrhage, subarachnoid hemorrhage, ischemic infarctions in susceptible areas of the brain, hypoxic-ischemic encephalopathy, unexplained coma, and brain death. Bleeding occurs in 30 to 40% of those receiving ECMO and can be life-threatening. It is due to both the necessary continuous heparin infusion and platelet dysfunction. Meticulous surgical technique, maintaining platelet counts greater than 100,000/mm3, and maintaining the target activated clotting time reduce the likelihood of bleeding. Blood Heparin-induced thrombocytopenia (HIT) is increasingly common among people receiving ECMO. When HIT is suspected, the heparin infusion is usually replaced by a non-heparin anticoagulant. There is retrograde blood flow in the descending aorta whenever the femoral artery and vein are used for VA (Veno-Arterial) ECMO. Stasis of the blood can occur if left ventricular output is not maintained, which may result in thrombosis. Bridge-to-assist device In VA ECMO, those whose cardiac function does not recover sufficiently to be weaned from ECMO may be bridged to a ventricular assist device (VAD) or transplant. A variety of complications can occur during cannulation, including vessel perforation with bleeding, arterial dissection, distal ischemia, and incorrect location. Children Preterm infants, whose heart and lung function is immature, are at unacceptably high risk for intraventricular hemorrhage (IVH) if ECMO is performed at a gestational age less than 32 weeks.
Infections The prevalence of hospital-acquired infections during ECMO is 10–12%, higher than in other critically ill patients. Coagulase-negative staphylococci, Candida spp., Enterobacteriaceae and Pseudomonas aeruginosa are the most frequently involved pathogens. ECMO patients display a high incidence of ventilator-associated pneumonia (24.4 cases/1000 ECMO days), with a major role played by Enterobacteriaceae. The infectious risk was shown to increase with the duration of the ECMO run, which is the most important risk factor for the development of infections. Other ECMO-specific factors predisposing to infections include the severity of illness in ECMO patients, the high risk of bacterial translocation from the gut and ECMO-related impairment of the immune system. Another important issue is the microbial colonisation of catheters, ECMO cannulae and the oxygenator. Types There are several forms of ECMO; the two most common are veno-arterial (VA) ECMO and veno-venous (VV) ECMO. In both modalities, blood drained from the venous system is oxygenated outside of the body. In VA ECMO, this blood is returned to the arterial system and in VV ECMO the blood is returned to the venous system. In VV ECMO, no cardiac support is provided. Veno-arterial In veno-arterial (VA) ECMO, a venous cannula is usually placed in the right or left common femoral vein for extraction, and an arterial cannula is usually placed into the right or left femoral artery for infusion. The tip of the femoral venous cannula should be maintained near the junction of the inferior vena cava and right atrium, while the tip of the femoral arterial cannula is maintained in the iliac artery. In adults, accessing the femoral artery is preferred because the insertion is simpler. Central VA ECMO may be used if cardiopulmonary bypass has already been established or emergency re-sternotomy has been performed (with cannulae in the right atrium (or SVC/IVC for tricuspid repair) and ascending aorta). VA ECMO is typically reserved for situations in which native cardiac function is minimal, to mitigate the increased cardiac stroke work associated with pumping against the retrograde flow delivered by the aortic cannula. Veno-venous In veno-venous (VV) ECMO, cannulae are usually placed in the right common femoral vein for drainage and right internal jugular vein for infusion. Alternatively, a dual-lumen catheter is inserted into the right internal jugular vein, draining blood from the superior and inferior vena cavae and returning it to the right atrium. Initiation ECMO should be performed only by clinicians with training and experience in its initiation, maintenance, and discontinuation. ECMO insertion is typically performed in the operating room setting by a cardiothoracic surgeon. ECMO management is commonly performed by a registered nurse, respiratory therapist, or a perfusionist. Once it has been decided to initiate ECMO, the patient is anticoagulated with intravenous heparin to prevent thrombus formation from clotting off the oxygenator. Prior to initiation, an IV bolus of heparin is given, and the activated clotting time (ACT) is measured to ensure that it is between 300 and 350 seconds. Once the ACT is within this range, ECMO can be initiated and a heparin drip is then started as a maintenance dose. Cannulation Cannulae can be placed percutaneously by the Seldinger technique, a relatively straightforward and common method for obtaining access to blood vessels, or via surgical cutdown.
The largest cannulae that can be placed in the vessels are used in order to maximize flow and minimize shear stress. Limb ischemia is one of the notorious complications of ECMO, but it can be avoided by using a proper distal limb perfusion method. In addition, ECMO can be used intraoperatively during lung transplantation to stabilize the patient, with excellent outcomes. ECMO required for complications post-cardiac surgery can be placed directly into the appropriate chambers of the heart or great vessels. Peripheral (femoral or jugular) cannulation can allow patients awaiting lung transplantation to remain awake and ambulatory, with improved post-transplant outcomes. Titration Following cannulation and connection to the ECMO circuit, the appropriate amount of blood flow through the ECMO circuit is determined using hemodynamic parameters and physical exam. Goals of maintaining end-organ perfusion via the ECMO circuit are balanced with sufficient physiologic blood flow through the heart to prevent stasis and subsequent formation of blood clot. Maintenance Once the initial respiratory and hemodynamic goals have been achieved, the blood flow is maintained at that rate. Frequent assessment and adjustments are facilitated by continuous venous oximetry, which directly measures the oxyhemoglobin saturation of the blood in the venous limb of the ECMO circuit. Special considerations VV ECMO is typically used for respiratory failure, while VA ECMO is used for cardiac failure. There are unique considerations for each type of ECMO, which influence management. Blood flow High flow rates are usually desired during VV ECMO to optimize oxygen delivery. In contrast, the flow rate used during VA ECMO must be high enough to provide adequate perfusion pressure and venous oxyhemoglobin saturation (measured on drainage blood) but low enough to provide sufficient preload to maintain left ventricular output. Diuresis Since most people are fluid-overloaded when ECMO is initiated, aggressive diuresis is warranted once the patient is stable on ECMO. Ultrafiltration can be easily added to the ECMO circuit if the patient has inadequate urine output. ECMO "chatter", or instability of ECMO waveforms, represents under-resuscitation and would support cessation of aggressive diuresis or ultrafiltration. There is an increased risk of acute kidney injury related to the use of ECMO and the systemic inflammatory response. Left ventricular monitoring Left ventricular output is rigorously monitored during VA ECMO because left ventricular function can be impaired from increased afterload, which can in turn lead to formation of thrombus within the heart. Weaning and discontinuing For those with respiratory failure, improvements in radiographic appearance, pulmonary compliance, and arterial oxyhemoglobin saturation indicate that the person may be ready to be taken off ECMO support. For those with cardiac failure, enhanced aortic pulsatility correlates with improved left ventricular output and indicates that they may be ready to be taken off ECMO support. If all markers are in good status, the blood flow on the ECMO circuit is slowly decreased and the patient's parameters are observed during this time to ensure that the patient can tolerate the changes. When the flows are below 2 liters per minute, permanent removal is attempted and the patient is continuously monitored during this time until the cannulae can be removed.
Veno-venous ECMO liberation trial VV ECMO trials are performed by eliminating all countercurrent sweep gas through the oxygenator. Extracorporeal blood flow remains constant, but gas transfer does not occur. They are then observed for several hours, during which the ventilator settings that are necessary to maintain adequate oxygenation and ventilation off ECMO are determined as indicated by arterial and venous blood gas results. Veno-arterial ECMO liberation trial VA ECMO trials require temporary clamping of both the drainage and infusion lines, while allowing the ECMO circuit to circulate through a bridge between the arterial and venous limbs. This prevents thrombosis of stagnant blood within the ECMO circuit. In addition, the arterial and venous lines should be flushed continuously with heparinized saline or intermittently with heparinized blood from the circuit. In general, VA ECMO trials are shorter in duration than VV ECMO trials because of the higher risk of thrombus formation. History ECMO was developed in the 1950s by John Gibbon, and then by C. Walton Lillehei. The first use for neonates was in 1965. Banning Gray Lary first demonstrated that intravenous oxygen could maintain life. His results were published in Surgical Forum in November 1951. Lary commented on his initial work in a 2007 presentation wherein he writes, "Our research began by assembling an apparatus that, for the first time, kept animals alive while breathing pure nitrogen. This was accomplished with very small bubbles of oxygen injected into the blood stream. These bubbles were made by adding a 'wetting agent' to oxygen being forced through a porcelain filter into the venous blood stream. Shortly after its initial presentation to the American College of Surgeons, this apparatus was reviewed by Walton Lillehei who with DeWall made the first practical heart[–]lung machine that employed a bubble oxygenator. With variations such machines were used for the next twenty years." Manufacturers Medtronic Maquet (Getinge Group) Xenios AG (Fresenius Medical Care) Sorin Group Terumo Nipro MicroPort Availability by country Research Randomized controlled trials (RCTs) Four randomized controlled trials (RCTs) have been conducted to evaluate the effectiveness of ECMO in respiratory failure patients. Early trials conducted by Zapol et al. and Morris et al. were plagued by technical challenges related to the ECMO technology available in the 1970s and 1990s. The CESAR and EOLIA trials utilized modern ECMO systems and are considered the central ECMO RCTs. CESAR Trial (2009) The Conventional Ventilatory Support vs. Extracorporeal Membrane Oxygenation for Severe Adult Respiratory Failure (CESAR) Trial was a UK-based multicenter RCT aiming to evaluate the safety, efficacy and cost-effectiveness of ECMO compared to conventional mechanical ventilation in adults with severe but reversible respiratory failure. Death or severe disability at 6 months or prior to hospital discharge was the primary outcome. The primary outcome was analyzed by intention to treat only. Economic analysis included quality-adjusted life-years (QALYs), analysis of cost-generating events, cost-utility at 6 months post-randomization and modelling of lifetime cost-utility. The trial planned to enroll 180 patients, 90 in each arm. The trial met its enrollment goal of 180 patients. 68 of the 90 (75%) of the patients intended to be treated with ECMO were actually treated with ECMO. Survival of patients allocated to the ECMO group (i.e.
referred for consideration for treatment with ECMO) was significantly higher than that of patients allocated to the conventional ventilation group (63% vs 47%, p=0.03). The referral to ECMO group gained 0.03 QALY compared to the conventional ventilation group at the 6-month follow-up. The referral to ECMO group had longer lengths of stay and higher costs. No standardized treatment protocol for the conventional ventilation group is the main limitation of the CESAR study. The trial authors note that this occurred due to the inability of enrolling sites to agree on a protocol. This resulted in control patients not receiving lung protective ventilation, which is known to reduce mortality in ARDS patients. The authors conclude that referral of patients with severe, potentially reversible respiratory failure to an ECMO center can significantly improve 6-month, severe-disability-free survival. The CESAR trial results do not provide a direct survival comparison for treatment with ECMO versus conventional mechanical ventilation alone, since only 75% of the ECMO group were actually treated with ECMO. EOLIA Trial (2018) The ECMO to Rescue Lung Injury in Severe ARDS (EOLIA) Trial was designed to evaluate the effects of early ECMO initiation compared to continued standard of care (conventional mechanical ventilation) in severe ARDS patients. Mortality at 60 days was the primary endpoint. The calculated sample size was 331 patients, with an intent to show a 20% reduction in absolute mortality in the ECMO group. The main secondary endpoint was treatment failure – cross-over to ECMO due to refractory hypoxemia or death in the control group, and death in the ECMO group. Following the fourth planned interim analysis the trial was ended due to futility. A total of 249 patients were enrolled at study termination. Thirty-five control group patients (28%) required emergency cross-over to ECMO. Results of EOLIA demonstrated no significant difference in 60-day mortality between the ECMO group and the control group (35% vs 46%, respectively). The interpretation of this result, however, is complicated by the cross-over patients. The secondary endpoint, treatment failure, demonstrated a relative risk of 0.62 (p<0.001) in favor of the ECMO group. Results of the secondary endpoint should be interpreted cautiously due to the primary endpoint results. With respect to safety, the ECMO group had significantly higher rates of severe thrombocytopenia and bleeding requiring transfusion, but lower rates of ischemic stroke. The primary limitation of the EOLIA Trial was that it was underpowered. For EOLIA to have been properly powered to detect an 11% absolute reduction in mortality as significant, a total of 624 patients would need to have been enrolled (the sketch below reproduces this calculation). Such a trial would take 9 years based on the EOLIA recruitment rates and is likely not feasible. The main conclusion the study authors drew from these results is that early ECMO initiation in severe ARDS patients does not provide a mortality benefit compared to continued standard of care treatment. Subsequent editorials by key opinion leaders suggest that the practical implication is that ECMO may improve mortality if used as a rescue therapy for patients failing conventional ARDS therapies. References External links Extracorporeal Education Portal Intensive care medicine Medical equipment Membrane technology
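The sample-size figure quoted in the EOLIA discussion above can be reproduced with the standard two-proportion formula. A sketch, assuming the conventional two-sided α = 0.05 and 80% power (the trial's exact assumptions may have differed slightly):

```python
from math import ceil, sqrt

p1, p2 = 0.46, 0.35        # 60-day mortality: control vs ECMO, from EOLIA
z_alpha = 1.96             # z-value for two-sided alpha = 0.05
z_beta = 0.8416            # z-value for 80% power
p_bar = (p1 + p2) / 2      # pooled proportion under the null hypothesis

n_per_arm = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
              + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p1 - p2) ** 2)
print(ceil(n_per_arm), "per arm =>", 2 * ceil(n_per_arm), "patients total")
# ~312 per arm, i.e. ~624 total, matching the figure quoted in the text.
```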
Extracorporeal membrane oxygenation
[ "Chemistry", "Biology" ]
4,363
[ "Medical technology", "Membrane technology", "Medical equipment", "Separation processes" ]
445,020
https://en.wikipedia.org/wiki/Dilution%20refrigerator
A 3He/4He dilution refrigerator is a cryogenic device that provides continuous cooling to temperatures as low as 2 mK, with no moving parts in the low-temperature region. The cooling power is provided by the heat of mixing of the helium-3 and helium-4 isotopes. The dilution refrigerator was first proposed by Heinz London in the early 1950s, and was experimentally realized in 1964 in the Kamerlingh Onnes Laboratorium at Leiden University. Theory of operation The refrigeration process uses a mixture of two isotopes of helium: helium-3 and helium-4. When cooled below approximately 870 millikelvins, the mixture undergoes spontaneous phase separation to form a 3He-rich phase (the concentrated phase) and a 3He-poor phase (the dilute phase). As shown in the phase diagram, at very low temperatures the concentrated phase is essentially pure 3He, while the dilute phase contains about 6.6% 3He and 93.4% 4He. The working fluid is 3He, which is circulated by vacuum pumps at room temperature. The 3He enters the cryostat at a pressure of a few hundred millibar. In the classic dilution refrigerator (known as a wet dilution refrigerator), the 3He is precooled and purified by liquid nitrogen at 77 K and a 4He bath at 4.2 K. Next, the 3He enters a vacuum chamber where it is further cooled to a temperature of 1.2–1.5 K by the 1 K bath, a vacuum-pumped 4He bath (as decreasing the pressure of the helium reservoir depresses its boiling point). The 1 K bath liquefies the 3He gas and removes the heat of condensation. The 3He then enters the main impedance, a capillary with a large flow resistance. It is cooled by the still (described below) to a temperature of 500–700 mK. Subsequently, the 3He flows through a secondary impedance and one side of a set of counterflow heat exchangers where it is cooled by a cold flow of 3He. Finally, the pure 3He enters the mixing chamber, the coldest area of the device. In the mixing chamber, two phases of the 3He–4He mixture, the concentrated phase (practically 100% 3He) and the dilute phase (about 6.6% 3He and 93.4% 4He), are in equilibrium and separated by a phase boundary. Inside the chamber, the 3He is diluted as it flows from the concentrated phase through the phase boundary into the dilute phase. The heat necessary for the dilution is the useful cooling power of the refrigerator, as the process of moving the 3He through the phase boundary is endothermic and removes heat from the mixing chamber environment. The 3He then leaves the mixing chamber in the dilute phase. On the dilute side and in the still the 3He flows through superfluid 4He which is at rest. The 3He is driven through the dilute channel by a pressure gradient just like any other viscous fluid. On its way up, the cold, dilute 3He cools the downward flowing concentrated 3He via the heat exchangers and enters the still. The pressure in the still is kept low (about 10 Pa) by the pumps at room temperature. The vapor in the still is practically pure 3He, which has a much higher partial pressure than 4He at 500–700 mK. Heat is supplied to the still to maintain a steady flow of 3He. The pumps compress the 3He to a pressure of a few hundred millibar and feed it back into the cryostat, completing the cycle. Cryogen-free dilution refrigerators Modern dilution refrigerators can precool the 3He with a cryocooler in place of liquid nitrogen, liquid helium, and a 1 K bath. No external supply of cryogenic liquids is needed in these "dry cryostats" and operation can be highly automated.
However, dry cryostats have high energy requirements and are subject to mechanical vibrations, such as those produced by pulse tube refrigerators. The first experimental machines were built in the 1990s, when (commercial) cryocoolers became available, capable of reaching a temperature lower than that of liquid helium and having sufficient cooling power (on the order of 1 watt at 4.2 K). Pulse tube coolers are commonly used cryocoolers in dry dilution refrigerators. Dry dilution refrigerators generally follow one of two designs. One design incorporates an inner vacuum can, which is used to initially precool the machine from room temperature down to the base temperature of the pulse tube cooler (using heat-exchange gas). However, every time the refrigerator is cooled down, a vacuum seal that holds at cryogenic temperatures needs to be made, and low temperature vacuum feed-throughs must be used for the experimental wiring. The other design is more demanding to realize, requiring heat switches that are necessary for precooling, but no inner vacuum can is needed, greatly reducing the complexity of the experimental wiring. Cooling power The cooling power (in watts) at the mixing chamber is approximately given by $\dot{Q}_m = \dot{n}_3\left(95\,T_m^2 - 11\,T_i^2\right)$, where $\dot{n}_3$ is the 3He molar circulation rate, Tm is the mixing-chamber temperature, and Ti the temperature of the 3He entering the mixing chamber. There will only be useful cooling when $T_i < 2.8\,T_m$. This sets a maximum temperature of the last heat exchanger, as above this all cooling power is used up only cooling the incident 3He. Inside of a mixing chamber there is negligible thermal resistance between the pure and dilute phases, and the cooling power reduces to $\dot{Q}_m = 84\,\dot{n}_3\,T_m^2$. A low Tm can only be reached if Ti is low. In dilution refrigerators, Ti is reduced by using heat exchangers as shown in the schematic diagram of the low-temperature region above. However, at very low temperatures this becomes more and more difficult due to the so-called Kapitza resistance. This is a heat resistance at the surface between the helium liquids and the solid body of the heat exchanger. It is inversely proportional to $T^4$ and the heat-exchanging surface area A. In other words: to get the same heat resistance one needs to increase the surface by a factor 10,000 if the temperature reduces by a factor 10. In order to get a low thermal resistance at low temperatures (below about 30 mK), a large surface area is needed. The lower the temperature, the larger the area. In practice, one uses very fine silver powder. Limitations There is no fundamental limiting low temperature of dilution refrigerators. Yet the temperature range is limited to about 2 mK for practical reasons. At very low temperatures, both the viscosity and the thermal conductivity of the circulating fluid become larger if the temperature is lowered. To reduce the viscous heating, the diameters of the inlet and outlet tubes of the mixing chamber must go as $T_m^{-3}$, and to get low heat flow the lengths of the tubes should go as $T_m^{-8}$. That means that, to reduce the temperature by a factor 2, one needs to increase the diameter by a factor of 8 and the length by a factor of 256. Hence the volume should be increased by a factor of $2^{14}$ = 16,384. In other words: every cm3 at 2 mK would become 16,384 cm3 at 1 mK. The machines would become very big and very expensive. There is a powerful alternative for cooling below 2 mK: nuclear demagnetization.
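A quick numerical sketch of the cooling-power relations above; the circulation rate and the temperatures are illustrative assumed figures, not values from the text:

```python
def cooling_power(n3, Tm, Ti):
    """Mixing-chamber cooling power in W: n3 in mol/s, temperatures in K."""
    return n3 * (95 * Tm ** 2 - 11 * Ti ** 2)

n3 = 100e-6   # ~100 micromol/s circulation rate (assumed)
Tm = 0.010    # 10 mK mixing chamber
Ti = 0.025    # 25 mK incoming 3He, below the useful-cooling limit 2.8 * Tm = 28 mK

print(f"Q = {cooling_power(n3, Tm, Ti) * 1e6:.2f} uW")              # ~0.26 uW
# With ideal heat exchangers (Ti -> Tm) the expression reduces to 84 * n3 * Tm^2:
print(f"ideal-exchanger limit = {84 * n3 * Tm ** 2 * 1e6:.2f} uW")  # ~0.84 uW
```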
See also Adiabatic demagnetization Magnetic refrigeration Helium-3 refrigerator Refrigerated transport Dewar Timeline of low-temperature technology References External links Lancaster University, Ultra Low Temperature Physics – Description of dilution refrigeration. Harvard University, Marcus Lab – Hitchhiker's Guide to the Dilution Refrigerator. Cryogenics Cooling technology
Dilution refrigerator
[ "Physics" ]
1,616
[ "Applied and interdisciplinary physics", "Cryogenics" ]
445,637
https://en.wikipedia.org/wiki/Reflection%20high-energy%20electron%20diffraction
Reflection high-energy electron diffraction (RHEED) is a technique used to characterize the surface of crystalline materials. RHEED systems gather information only from the surface layer of the sample, which distinguishes RHEED from other materials characterization methods that also rely on diffraction of high-energy electrons. Transmission electron microscopy, another common electron diffraction method, samples mainly the bulk of the sample due to the geometry of the system, although in special cases it can provide surface information. Low-energy electron diffraction (LEED) is also surface sensitive, but LEED achieves surface sensitivity through the use of low energy electrons. Introduction A RHEED system requires an electron source (gun), photoluminescent detector screen and a sample with a clean surface, although modern RHEED systems have additional parts to optimize the technique. The electron gun generates a beam of electrons which strike the sample at a very small angle relative to the sample surface. Incident electrons diffract from atoms at the surface of the sample, and a small fraction of the diffracted electrons interfere constructively at specific angles and form regular patterns on the detector. The electrons interfere according to the position of atoms on the sample surface, so the diffraction pattern at the detector is a function of the sample surface. Figure 1 shows the most basic setup of a RHEED system. Surface diffraction In the RHEED setup, only atoms at the sample surface contribute to the RHEED pattern. The glancing angle of incident electrons allows them to escape the bulk of the sample and to reach the detector. Atoms at the sample surface diffract (scatter) the incident electrons due to the wavelike properties of electrons. The diffracted electrons interfere constructively at specific angles according to the crystal structure and spacing of the atoms at the sample surface and the wavelength of the incident electrons. Some of the electron waves created by constructive interference collide with the detector, creating specific diffraction patterns according to the surface features of the sample. Users characterize the crystallography of the sample surface through analysis of the diffraction patterns. Figure 2 shows a RHEED pattern. Two types of diffraction contribute to RHEED patterns. Some incident electrons undergo a single, elastic scattering event at the crystal surface, a process termed kinematic scattering. Dynamic scattering occurs when electrons undergo multiple diffraction events in the crystal and lose some of their energy due to interactions with the sample. Users extract qualitative data from the kinematically diffracted electrons. These electrons account for the high intensity spots or rings common to RHEED patterns. RHEED users also analyze dynamically scattered electrons with complex techniques and models to gather quantitative information from RHEED patterns. Kinematic scattering analysis RHEED users construct Ewald's spheres to find the crystallographic properties of the sample surface. Ewald's spheres show the allowed diffraction conditions for kinematically scattered electrons in a given RHEED setup.
The diffraction pattern at the screen relates to the Ewald's sphere geometry, so RHEED users can directly calculate the reciprocal lattice of the sample with a RHEED pattern, the energy of the incident electrons and the distance from the detector to the sample. The user must relate the geometry and spacing of the spots of a perfect pattern to the Ewald's sphere in order to determine the reciprocal lattice of the sample surface. The Ewald's sphere analysis is similar to that for bulk crystals, however the reciprocal lattice for the sample differs from that for a 3D material due to the surface sensitivity of the RHEED process. The reciprocal lattices of bulk crystals consist of a set of points in 3D space. However, only the first few layers of the material contribute to the diffraction in RHEED, so there are no diffraction conditions in the dimension perpendicular to the sample surface. Due to the lack of a third diffracting condition, the reciprocal lattice of a crystal surface is a series of infinite rods extending perpendicular to the sample's surface. These rods originate at the conventional 2D reciprocal lattice points of the sample's surface. The Ewald's sphere is centered on the sample surface with a radius equal to the magnitude of the wavevector of the incident electrons, $|\vec{k}_i| = 2\pi/\lambda$, where λ is the electrons' de Broglie wavelength. Diffraction conditions are satisfied where the rods of reciprocal lattice intersect the Ewald's sphere. Therefore, the magnitude of a vector from the origin of the Ewald's sphere to the intersection of any reciprocal lattice rods is equal in magnitude to that of the incident beam. This is expressed as $|\vec{k}_{hl}| = |\vec{k}_i|$ (2). Here, khl is the wave vector of the elastically diffracted electrons of the order (hl) at any intersection of reciprocal lattice rods with the Ewald's sphere. The projections of the two vectors onto the plane of the sample's surface differ by a reciprocal lattice vector Ghl: $\vec{k}_{hl}^{\,\parallel} - \vec{k}_i^{\,\parallel} = \vec{G}_{hl}$ (3). Figure 3 shows the construction of the Ewald's sphere and provides examples of the G, khl and ki vectors. Many of the reciprocal lattice rods meet the diffraction condition, however the RHEED system is designed such that only the low orders of diffraction are incident on the detector. The RHEED pattern at the detector is a projection only of the k vectors that are within the angular range that contains the detector. The size and position of the detector determine which of the diffracted electrons are within the angular range that reaches the detector, so the geometry of the RHEED pattern can be related back to the geometry of the reciprocal lattice of the sample surface through use of trigonometric relations and the distance from the sample to detector. The k vectors are labeled such that the vector k00 that forms the smallest angle with the sample surface is called the 0th order beam. The 0th order beam is also known as the specular beam. Each successive intersection of a rod and the sphere further from the sample surface is labeled as a higher order reflection. Because of the way the center of the Ewald's sphere is positioned, the specular beam forms the same angle with the substrate as the incident electron beam. The specular point has the greatest intensity on a RHEED pattern and is labeled as the (00) point by convention. The other points on the RHEED pattern are indexed according to the reflection order they project.
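The Ewald construction just described can be made numeric. A sketch, in which the 15 keV beam energy, 300 mm screen distance and 4.0 Å surface row spacing are illustrative assumptions rather than values from the text:

```python
from math import pi, sqrt

h = 6.626e-34    # Planck constant, J s
m0 = 9.109e-31   # electron rest mass, kg
e = 1.602e-19    # elementary charge, C
c = 2.998e8      # speed of light, m/s

def electron_wavelength(V):
    """Relativistically corrected de Broglie wavelength (m) at acceleration voltage V."""
    return h / sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c ** 2)))

lam = electron_wavelength(15e3)   # ~0.099 angstrom for 15 keV electrons
k_i = 2 * pi / lam                # Ewald sphere radius, 1/m

d = 4.0e-10                       # assumed spacing of surface atom rows, m
G = 2 * pi / d                    # spacing of the reciprocal lattice rods, 1/m

L = 0.30                          # assumed sample-to-detector distance, m
# Because k_i >> G the sphere is locally almost flat, so adjacent streaks land
# roughly t = L * G / k_i = L * lam / d apart on the screen (small-angle limit).
t = L * G / k_i
print(f"lambda = {lam * 1e10:.4f} A, radius/rod spacing = {k_i / G:.0f}, streak spacing = {t * 1e3:.1f} mm")
```

The printed ratio of sphere radius to rod spacing (about 40 here) quantifies the point made in the next paragraph: the Ewald sphere is much larger than the reciprocal lattice spacing.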
The radius of the Ewald's sphere is much larger than the spacing between reciprocal lattice rods because the incident beam has a very short wavelength due to its high-energy electrons. Rows of reciprocal lattice rods actually intersect the Ewald's sphere as an approximate plane because identical rows of parallel reciprocal lattice rods sit directly in front and behind the single row shown. Figure 3 shows a cross sectional view of a single row of reciprocal lattice rods fulfilling the diffraction conditions. The reciprocal lattice rods in Figure 3 show the end-on view of these planes, which are perpendicular to the computer screen in the figure. The intersections of these effective planes with the Ewald's sphere form circles, called Laue circles. The RHEED pattern is a collection of points on the perimeters of concentric Laue circles around the center point. However, interference effects between the diffracted electrons still yield strong intensities at single points on each Laue circle. Figure 4 shows the intersection of one of these planes with the Ewald's Sphere. The azimuthal angle affects the geometry and intensity of RHEED patterns. The azimuthal angle is the angle at which the incident electrons intersect the ordered crystal lattice on the surface of the sample. Most RHEED systems are equipped with a sample holder that can rotate the crystal around an axis perpendicular to the sample surface. RHEED users rotate the sample to optimize the intensity profiles of patterns. Users generally index at least two RHEED scans at different azimuth angles for reliable characterization of the crystal's surface structure. Figure 5 shows a schematic diagram of an electron beam incident on the sample at different azimuth angles. Users sometimes rotate the sample around an axis perpendicular to the sampling surface during RHEED experiments to create a RHEED pattern called the azimuthal plot. Rotating the sample changes the intensity of the diffracted beams due to their dependence on the azimuth angle. RHEED specialists characterize film morphologies by measuring the changes in beam intensity and comparing these changes to theoretical calculations, which can effectively model the dependence of the intensity of diffracted beams on the azimuth angle. Dynamic scattering analysis The dynamically, or inelastically, scattered electrons provide several types of information about the sample as well. The brightness or intensity at a point on the detector depends on dynamic scattering, so all analysis involving the intensity must account for dynamic scattering. Some inelastically scattered electrons penetrate the bulk crystal and fulfill Bragg diffraction conditions. These inelastically scattered electrons can reach the detector to yield Kikuchi diffraction patterns, which are useful for calculating diffraction conditions. Kikuchi patterns are characterized by lines connecting the intense diffraction points on a RHEED pattern. Figure 6 shows a RHEED pattern with visible Kikuchi lines. RHEED system requirements Electron gun The electron gun is one of the most important pieces of equipment in a RHEED system. The gun limits the resolution and testing limits of the system. Tungsten filaments are the primary electron source for the electron gun of most RHEED systems due to the low work function of tungsten. In the typical setup, the tungsten filament is the cathode and a positively biased anode draws electrons from the tip of the tungsten filament.
The magnitude of the anode bias determines the energy of the incident electrons. The optimal anode bias is dependent upon the type of information desired. At large incident angles, electrons with high energy can penetrate the surface of the sample and degrade the surface sensitivity of the instrument. However, the dimensions of the Laue zones are proportional to the inverse square of the electron energy, meaning that more information is recorded at the detector at higher incident electron energies. For general surface characterization, the electron gun is operated in the range of 10–30 keV. In a typical RHEED setup, one magnetic and one electric field focus the incident beam of electrons. A negatively biased Wehnelt electrode positioned between the cathode filament and anode applies a small electric field, which focuses the electrons as they pass through the anode. An adjustable magnetic lens focuses the electrons onto the sample surface after they pass through the anode. A typical RHEED source has a focal length around 50 cm. The beam is focused to the smallest possible point at the detector rather than the sample surface so that the diffraction pattern has the best resolution. Phosphor screens that exhibit photoluminescence are widely used as detectors. These detectors emit green light from areas where electrons hit their surface and are common to TEM as well. The detector screen is useful for aligning the pattern to an optimal position and intensity. CCD cameras capture the patterns to allow for digital analysis. Sample surface The sample surface must be extremely clean for effective RHEED experiments. Contaminants on the sample surface interfere with the electron beam and degrade the quality of the RHEED pattern. RHEED users employ two main techniques to create clean sample surfaces. Small samples can be cleaved in the vacuum chamber prior to RHEED analysis. The newly exposed, cleaved surface is analyzed. Large samples, or those that are not able to be cleaved prior to RHEED analysis, can be coated with a passive oxide layer prior to analysis. Subsequent heat treatment under the vacuum of the RHEED chamber removes the oxide layer and exposes the clean sample surface. Vacuum requirements Because gas molecules diffract electrons and affect the quality of the electron gun, RHEED experiments are performed under vacuum. The RHEED system must operate at a pressure low enough to prevent significant scattering of the electron beams by gas molecules in the chamber. At electron energies of 10 keV, a chamber pressure of 10−5 mbar or lower is necessary to prevent significant scattering of electrons by the background gas. In practice, RHEED systems are operated under ultrahigh vacuum. The chamber pressure is minimized as much as possible in order to optimize the process. The vacuum conditions limit the types of materials and processes that can be monitored in situ with RHEED. RHEED patterns of real surfaces Previous analysis focused only on diffraction from a perfectly flat crystal surface. However, non-flat surfaces add additional diffraction conditions to RHEED analysis. Streaked or elongated spots are common to RHEED patterns. As Fig 3 shows, the reciprocal lattice rods with the lowest orders intersect the Ewald sphere at very small angles, so the intersection between the rods and sphere is not a singular point if the sphere and rods have thickness.
The incident electron beam diverges and electrons in the beam have a range of energies, so in practice, the Ewald sphere is not infinitely thin as it is theoretically modeled. The reciprocal lattice rods have a finite thickness as well, with their diameters dependent on the quality of the sample surface. Streaks appear in the place of perfect points when broadened rods intersect the Ewald sphere. Diffraction conditions are fulfilled over the entire intersection of the rods with the sphere, yielding elongated points or 'streaks' along the vertical axis of the RHEED pattern. In real cases, streaky RHEED patterns indicate a flat sample surface, while the broadening of the streaks indicates a small area of coherence on the surface. Surface features and polycrystalline surfaces add complexity or change RHEED patterns from those from perfectly flat surfaces. Growing films, nucleating particles, crystal twinning, grains of varying size and adsorbed species add complicated diffraction conditions to those of a perfect surface. Superimposed patterns of the substrate and heterogeneous materials, complex interference patterns and degradation of the resolution are characteristic of complex surfaces or those partially covered with heterogeneous materials. Specialized RHEED techniques Film growth RHEED is an extremely popular technique for monitoring the growth of thin films. In particular, RHEED is well suited for use with molecular beam epitaxy (MBE), a process used to form high quality, ultrapure thin films under ultrahigh vacuum growth conditions. The intensities of individual spots on the RHEED pattern fluctuate in a periodic manner as a result of the relative surface coverage of the growing thin film. Figure 8 shows an example of the intensity fluctuating at a single RHEED point during MBE growth. Each full period corresponds to formation of a single atomic layer thin film. The oscillation period is highly dependent on the material system, electron energy and incident angle, so researchers obtain empirical data to correlate the intensity oscillations and film coverage before using RHEED for monitoring film growth (a toy model of these oscillations is sketched at the end of this article). Video 1 depicts a metrology instrument recording the RHEED intensity oscillations and deposition rate for process control and analysis. RHEED-TRAXS Reflection high-energy electron diffraction – total reflection angle X-ray spectroscopy is a technique for monitoring the chemical composition of crystals. RHEED-TRAXS analyzes X-ray spectral lines emitted from a crystal as a result of electrons from a RHEED gun colliding with the surface. RHEED-TRAXS is preferable to X-ray microanalysis (XMA) (such as EDS and WDS) because the incidence angle of the electrons on the surface is very small, typically less than 5°. As a result, the electrons do not penetrate deeply into the crystal, meaning the X-ray emission is restricted to the top of the crystal, allowing for real-time, in-situ monitoring of surface stoichiometry. The experimental setup is fairly simple. Electrons are fired onto a sample causing X-ray emission. These X-rays are then detected using a silicon–lithium (Si–Li) crystal placed behind beryllium windows, which maintain the vacuum. MCP-RHEED MCP-RHEED is a system in which an electron beam is amplified by a micro-channel plate (MCP). This system consists of an electron gun and an MCP equipped with a fluorescent screen opposite to the electron gun.
Because of the amplification, the intensity of the electron beam can be decreased by several orders of magnitude and the damage to the samples is diminished. This method is used to observe the growth of insulator crystals such as organic films and alkali halide films, which are easily damaged by electron beams. References Further reading Introduction to RHEED, A.S. Arrot, Ultrathin Magnetic Structures I, Springer-Verlag, 1994, pp. 177–220 A Review of the Geometrical Fundamentals of RHEED with Application to Silicon Surfaces, John E. Mahan, Kent M. Geib, G.Y. Robinson, and Robert G. Long, J.V.S.T., A 8, 1990, pp. 3692–3700 Crystallography Electron spectroscopy Diffraction Measuring instruments X-ray spectroscopy
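As promised in the Film growth section above, here is a toy sketch of why the specular intensity oscillates during layer-by-layer growth: the intensity is taken to peak when the top layer is complete and to dip at half coverage, where the step density is largest. This crude step-density model is an assumption for illustration only; it is not one of the quantitative models researchers actually fit to their data:

```python
import numpy as np

rate = 0.5                               # assumed deposition rate, monolayers/s
t = np.linspace(0.0, 10.0, 1001)         # time, s
coverage = (rate * t) % 1.0              # fractional coverage of the growing layer
roughness = coverage * (1.0 - coverage)  # step-density proxy, maximal at 0.5
intensity = 1.0 - 4.0 * roughness        # normalized specular intensity in [0, 1]

layers = int(rate * t[-1])
print(f"{layers} monolayers grown -> {layers} full intensity oscillations")
```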
Reflection high-energy electron diffraction
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
3,616
[ "Spectrum (physical sciences)", "Electron spectroscopy", "Materials science", "Measuring instruments", "Crystallography", "Diffraction", "Condensed matter physics", "X-ray spectroscopy", "Spectroscopy" ]
12,680,031
https://en.wikipedia.org/wiki/Activating%20protein%202
Activating Protein 2 (AP-2) is a family of closely related transcription factors which plays a critical role in regulating gene expression during early development. References External links Gene expression Transcription factors
Activating protein 2
[ "Chemistry", "Biology" ]
39
[ "Gene expression", "Signal transduction", "Molecular genetics", "Cellular processes", "Induced stem cells", "Molecular biology", "Biochemistry", "Transcription factors" ]
12,681,282
https://en.wikipedia.org/wiki/Internalnet
An internalnet is a computer network composed of devices inside and on the human body. Such a system could be used to link nanochondria, bionic implants, wearable computers, and other devices. See also Nanomedicine Personal area network External links PC Magazine definition Smart computing Bionics Computer networks by scale Implants (medicine)
Internalnet
[ "Technology", "Engineering", "Biology" ]
73
[ "Computing stubs", "Bionics", "Computer network stubs" ]
12,683,145
https://en.wikipedia.org/wiki/Elastic%20instability
Elastic instability is a form of instability occurring in elastic systems, such as buckling of beams and plates subject to large compressive loads. There are many ways to study this kind of instability. One of them is to use the method of incremental deformations based on superposing a small perturbation on an equilibrium solution. Single degree of freedom-systems Consider as a simple example a rigid beam of length L, hinged in one end and free in the other, and having an angular spring attached to the hinged end. The beam is loaded in the free end by a force F acting in the compressive axial direction of the beam, see the figure to the right. Moment equilibrium condition Assuming a clockwise angular deflection $\theta$, the clockwise moment exerted by the force becomes $M_F = F L \sin\theta$. The moment equilibrium equation is given by $F L \sin\theta = k_\theta \theta$, where $k_\theta$ is the spring constant of the angular spring (N·m/radian). Assuming $\theta$ is small enough, implementing the Taylor expansion of the sine function and keeping the two first terms yields $F L \left(\theta - \tfrac{1}{6}\theta^3\right) \approx k_\theta \theta$, which has three solutions, the trivial $\theta = 0$, and $\theta \approx \pm\sqrt{6\left(1 - \tfrac{k_\theta}{F L}\right)}$, which is imaginary (i.e. not physical) for $F L < k_\theta$ and real otherwise. This implies that for small compressive forces, the only equilibrium state is given by $\theta = 0$, while if the force exceeds the value $k_\theta / L$ there is suddenly another mode of deformation possible. Energy method The same result can be obtained by considering energy relations. The energy stored in the angular spring is $E_\mathrm{spring} = \int_0^\theta k_\theta \hat{\theta}\,\mathrm{d}\hat{\theta} = \tfrac{1}{2} k_\theta \theta^2$ and the work done by the force is simply the force multiplied by the vertical displacement of the beam end, which is $L(1 - \cos\theta)$. Thus, $E_\mathrm{force} = F L (1 - \cos\theta)$. The energy equilibrium condition $E_\mathrm{spring} = E_\mathrm{force}$, with the cosine expanded to second order, now yields $F = k_\theta / L$ as before (besides the trivial $\theta = 0$). Stability of the solutions Any solution is stable iff a small change in the deformation angle results in a reaction moment trying to restore the original angle of deformation. The net clockwise moment acting on the beam is $M(\theta) = F L \sin\theta - k_\theta \theta$. An infinitesimal clockwise change of the deformation angle $\theta$ results in a moment $M(\theta + \Delta\theta) = M(\theta) + \Delta M$ with $\Delta M = \Delta\theta \left(F L \cos\theta - k_\theta\right)$, which can be rewritten as $\Delta M = \Delta\theta\, F L \left(\cos\theta - \tfrac{\sin\theta}{\theta}\right)$, since $k_\theta = F L \tfrac{\sin\theta}{\theta}$ due to the moment equilibrium condition. Now, a solution is stable iff a clockwise change results in a negative change of moment and vice versa. Thus, the condition for stability becomes $F L \cos\theta - k_\theta < 0$. The solution $\theta = 0$ is stable only for $F L < k_\theta$, which is expected. By expanding the cosine term in the equation, the approximate stability condition is obtained: $|\theta| > \sqrt{2\left(1 - \tfrac{k_\theta}{F L}\right)}$ for $F L > k_\theta$, which the two other solutions satisfy. Hence, these solutions are stable. Multiple degrees of freedom-systems By attaching another rigid beam to the original system by means of an angular spring a two degrees of freedom-system is obtained. Assume for simplicity that the beam lengths and angular springs are equal. The equilibrium conditions become $F L (\sin\theta_1 + \sin\theta_2) = k_\theta \theta_1$ and $F L \sin\theta_2 = k_\theta (\theta_2 - \theta_1)$, where $\theta_1$ and $\theta_2$ are the angles of the two beams. Linearizing by assuming these angles are small yields $F L (\theta_1 + \theta_2) = k_\theta \theta_1$ and $F L\, \theta_2 = k_\theta (\theta_2 - \theta_1)$. The non-trivial solutions to the system are obtained by finding the roots of the determinant of the system matrix, i.e. for $\tfrac{F L}{k_\theta} = \tfrac{3 \mp \sqrt{5}}{2}$. Thus, for the two degrees of freedom-system there are two critical values for the applied force F. These correspond to two different modes of deformation which can be computed from the nullspace of the system matrix. Dividing the equations by $\theta_1$ yields $\tfrac{\theta_2}{\theta_1} = \tfrac{k_\theta}{F L} - 1$. For the lower critical force the ratio is positive and the two beams deflect in the same direction, while for the higher force they form a "banana" shape. These two states of deformation represent the buckling mode shapes of the system (a numerical check of these critical loads is sketched below). See also Buckling Cavitation (elastomers) Drucker stability Further reading Theory of elastic stability, S. Timoshenko and J. Gere Continuum mechanics Structural analysis Mechanics
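The two critical loads and mode shapes above can be verified numerically by casting the linearized equations as a generalized eigenvalue problem. A sketch, assuming equal beam lengths L and equal spring constants k as in the text:

```python
import numpy as np

# Linearized equilibrium: F L (theta1 + theta2) = k theta1, F L theta2 = k (theta2 - theta1),
# i.e. the generalized eigenvalue problem B theta = (F L / k) A theta.
A = np.array([[1.0, 1.0],     # load-side coefficients of (theta1, theta2)
              [0.0, 1.0]])
B = np.array([[1.0, 0.0],     # spring-side coefficients
              [-1.0, 1.0]])

lam, modes = np.linalg.eig(np.linalg.inv(A) @ B)  # lam = F L / k at buckling
order = np.argsort(lam)
for l, mode in zip(lam[order], modes[:, order].T):
    print(f"F_c = {l:.4f} k/L, theta2/theta1 = {mode[1] / mode[0]:+.4f}")
# Expected: F_c ~ 0.382 k/L with same-sign angles (beams lean together), and
# F_c ~ 2.618 k/L with opposite signs (the "banana" mode), i.e. (3 -+ sqrt(5))/2.
```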
Elastic instability
[ "Physics", "Engineering" ]
700
[ "Structural engineering", "Continuum mechanics", "Structural analysis", "Classical mechanics", "Mechanics", "Mechanical engineering", "Aerospace engineering" ]
12,687,690
https://en.wikipedia.org/wiki/Zonal%20and%20poloidal
In magnetic confinement fusion the zonal direction primarily connotes the poloidal direction (i.e. the short way around the torus), the corresponding coordinate being denoted by y in the slab approximation or θ in magnetic coordinates. However, in the fusion context, usage is restricted to the context of zonal plasma flows and there will in general be a toroidal component in such flows as well. Thus, although the term zonal has come into use in plasma physics to emphasize an analogy with zonal flows in geophysics, it does not uniquely identify the direction of flow, unlike the case in geophysics. See also Toroidal and poloidal Zonal and meridional Zonal flow (plasma) Zonal flow Orientation (geometry) Magnetic confinement fusion
Zonal and poloidal
[ "Physics", "Mathematics" ]
154
[ "Plasma physics", "Topology", "Plasma physics stubs", "Space", "Geometry", "Spacetime", "Orientation (geometry)" ]
12,688,342
https://en.wikipedia.org/wiki/Zonal%20flow%20%28plasma%29
In toroidally confined fusion plasma experiments the term zonal flow means a plasma flow within a magnetic surface primarily in the poloidal direction. This usage is inspired by the analogy between the quasi-two-dimensional nature of large-scale atmospheric and oceanic flows, where zonal means latitudinal, and the similarly quasi-two-dimensional nature of low-frequency flows in a strongly magnetized plasma. Zonal flows in the toroidal plasma context are further characterized by being localized in their radial extent transverse to the magnetic surfaces (in contrast to global plasma rotation); by having little or no variation in either the poloidal or toroidal direction, so that they are m = n = 0 modes (where m and n are the poloidal and toroidal mode numbers, respectively); by having zero real frequency when analyzed by linearization around an unperturbed toroidal equilibrium state (in contrast to the geodesic acoustic mode branch, which has finite frequency); and by arising via a self-organization phenomenon driven by low-frequency drift-type modes, in which energy is transferred to longer wavelengths by modulational instability or turbulent inverse cascade. See also Zonal and poloidal Zonal flow References Plasma phenomena
Zonal flow (plasma)
[ "Physics" ]
243
[ "Plasma phenomena", "Physical phenomena", "Plasma physics stubs", "Plasma physics" ]
17,229,545
https://en.wikipedia.org/wiki/Cryogenic%20Rare%20Event%20Search%20with%20Superconducting%20Thermometers
The Cryogenic Rare Event Search with Superconducting Thermometers (CRESST) is a collaboration of European experimental particle physics groups involved in the construction of cryogenic detectors for direct dark matter searches. The participating institutes are the Max Planck Institute for Physics (Munich), Technical University of Munich, University of Tübingen (Germany), University of Oxford (Great Britain), the Comenius University Bratislava (Slovakia) and the Istituto Nazionale di Fisica Nucleare (INFN, Italy). The CRESST collaboration currently runs an array of cryogenic detectors in the underground laboratory of the Gran Sasso National Laboratory. The modular detectors used by CRESST facilitate discrimination of background radiation events by the simultaneous measurement of phonon and photon signals from scintillating calcium tungstate crystals. By cooling the detectors to temperatures of a few millikelvin, the excellent discrimination and energy resolution of the detectors allow identification of rare particle events. CRESST-I took data in 2000 using sapphire detectors with tungsten thermometers. CRESST-II uses CaWO4 scintillating crystal calorimeters. It was prototyped in 2004, had a 47.9 kg-day commissioning run in 2007 and operated from 2009 to 2011. The CRESST-II Phase 1 experiment observed excess events above the known background that could be understood to constitute a dark matter signal. However, later analysis showed that these excess events were due to a previously unaccounted-for excess of background from the detector itself and not a true signal from dark matter. The source of the excess background in the detector was removed for Phase 2. Phase 2 had a new CaWO4 crystal with better radiopurity, improved detectors, and significantly reduced background. It began in July 2013 to explore the excess signals seen in the prior run. The results of Phase 2 showed no signal above the expected background, confirming that the result of Phase 1 had indeed been due to excess background from components of the detector. CRESST-II first detected the alpha decay of tungsten-180 (180W). CRESST-II Phase 1 full results were published in 2012. New Phase 2 results were presented in July 2014, with a limit on spin-independent WIMP-nucleon scattering for WIMP masses below 3 GeV/c2. In 2015 the CRESST detectors were upgraded, improving sensitivity by a factor of 100 and allowing dark-matter particles with a mass around that of a proton to be detected. In 2019, the team reported results of the first phase of CRESST-III, which ran from 2016 to 2018. CRESST-III used a single 23.6-g CaWO4 detector with a lowered energy threshold of 30.1 eV, about 1/10 that of CRESST-II. This allows the detection of WIMPs as light as 0.16 GeV/c2, slightly heavier than a pion. Despite many events from the electron capture decay of 179Ta, there was an unexplained excess of events imparting less than 200 eV. The experiment is planning an upgrade to accommodate 100 detector modules. References External links CRESST Official Website CRESST Publication 2004 CRESST Publication 2011 Gran Sasso National Laboratory Experiments for dark matter search
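The link between the 30.1 eV threshold and the 0.16 GeV/c2 mass reach quoted above can be estimated from two-body kinematics. A back-of-the-envelope sketch (natural units with c = 1; the assumed halo speed and the choice of oxygen as the lightest nucleus in CaWO4 are illustrative assumptions, and the collaboration's actual analysis is far more detailed):

```python
# Maximum recoil energy a particle of mass m_chi can give a nucleus of mass m_N:
# E_max = 2 * mu^2 * v^2 / m_N, with mu the reduced mass.

m_N = 14.9            # oxygen nucleus mass, GeV
m_chi = 0.16          # candidate WIMP mass, GeV
v = 780e3 / 3.0e8     # assumed maximum WIMP speed (escape + Earth motion), units of c

mu = m_chi * m_N / (m_chi + m_N)    # reduced mass, GeV
E_max = 2 * mu ** 2 * v ** 2 / m_N  # maximum recoil energy, GeV
print(f"E_max = {E_max * 1e9:.0f} eV")  # a few tens of eV, same order as the 30.1 eV threshold
```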
Cryogenic Rare Event Search with Superconducting Thermometers
[ "Physics" ]
668
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
15,494,156
https://en.wikipedia.org/wiki/IEEE%20Journal%20of%20Selected%20Topics%20in%20Quantum%20Electronics
The IEEE Journal of Selected Topics in Quantum Electronics is a bimonthly peer-reviewed scientific journal published by the IEEE Photonics Society. It covers research on quantum electronics. The editor-in-chief is José Capmany (Universitat Politècnica de València). According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.544. See also IEEE Journal of Quantum Electronics References External links Journal of Selected Topics in Quantum Electronics Quantum mechanics journals Optics journals Electronics journals Bimonthly journals English-language journals Academic journals established in 1995
IEEE Journal of Selected Topics in Quantum Electronics
[ "Physics" ]
116
[ "Quantum mechanics", "Quantum physics stubs" ]
15,495,670
https://en.wikipedia.org/wiki/Green%20Star%20%28Australia%29
Green Star is a voluntary sustainability rating system for buildings in Australia. It was launched in 2003 by the Green Building Council of Australia (GBCA). The Green Star rating system assesses the sustainability of projects at all stages of the built environment life cycle. Ratings can be achieved at the planning phase for communities, during the design, construction or fit out phase of buildings, or during the ongoing operational phase. The system assesses and rates buildings, fitouts and communities against a range of environmental impact categories, and aims to encourage leadership in environmentally sustainable design and construction, showcase innovation in sustainable building practices, and consider occupant health, productivity and operational cost savings. In 2013, the GBCA released The Value of Green Star, a report that analysed data from 428 Green Star-certified projects occupying 5,746,000 square metres across Australia and compared it to the ‘average’ Australian building and minimum practice benchmarks. The research found that, on average, Green Star-certified buildings produce 62% fewer greenhouse gas emissions and use 66% less electricity than average Australian buildings. Green Star buildings use 51% less potable water than average buildings. Green Star-certified buildings have also been found to recycle 96% of their construction and demolition waste, compared to the average of 58% for new construction projects. Rating system Green Star benchmarks projects against the nine Green Star categories of: Management; Indoor Environment Quality; Energy; Transport; Water; Materials; Land Use & Ecology; Emissions; and Innovation. Within each category are credits which address specific aspects of sustainable building design, construction or performance. Ratings for buildings are available at the design stage ('Design' ratings), at the post-construction phase (known as 'As Built' ratings) or for interior fitouts (‘Interiors’ ratings). Green Star - Communities rates projects at the community or precinct scale against the categories of: Liveability; Economic Prosperity; Environment; Design; Governance; and Innovation. Green Star certification is a formal process in which an independent assessment panel reviews documentary evidence that a project meets Green Star benchmarks within each credit. The assessment panel awards points, with a Green Star rating determined by comparing the overall score with the rating scale. Green Star rating tools for building, fitout and community design and construction reward projects that achieve best practice or above, which means ratings of 1, 2 or 3 stars are not awarded by those tools. The ongoing performance of a building, however, can be rated at any of the six star levels: buildings assessed using the Green Star – Performance rating tool can achieve a Green Star rating from 1 to 6 Star Green Star. Projects More than 1900 projects around Australia have achieved Green Star ratings. The first building to achieve a Green Star rating was 8 Brindabella Circuit at Canberra Airport, which achieved a 5 Star Green Star – Office Design v1 rating in 2004. In 2005, Council House 2 in Melbourne became the first building to achieve a 6 Star Green Star – Office Design v1 rating. Flinders Medical Centre – New South Wing was the first healthcare facility in Australia to achieve a Green Star rating. Scarborough Beach Pool was the first aquatic facility to achieve a 6 Star Green Star rating. 
Bond University Mirvac School for Sustainability achieved the first Green Star rating for an educational facility. Other well-known Green Star projects include 1 Bligh Street in Sydney and the Melbourne Convention and Exhibition Centre. Controversy The launch of the Green Star rating system was met with some scepticism by green groups, which argued that the rating system was funded mostly by development industry companies. There was controversy over a proposal to expand the forest certification of timber and composite timber products, but this issue was resolved with the release of the revised ‘Timber’ credit in 2010. There has also been concern over various aspects of the certification, including the timeframe for awarding it, the transfer of properties once it is awarded, and termination rights. See also Green building House Energy Rating References External links What is Green Star?, Green Building Council of Australia Building engineering Sustainable building in Australia Building energy rating Energy conservation in Australia Forest certification Sustainable building rating systems
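As an illustration of the rating rule described above (design and construction tools award only 4, 5 or 6 Star ratings, while the Performance tool can award 1 to 6 Stars), here is a small Python sketch. The threshold scores are assumed placeholders, not the GBCA's published scale.

# Illustrative sketch of the Green Star rating rule: design and
# construction tools only award ratings of best practice or above
# (4-6 Stars), while the Performance tool can award 1-6 Stars.
# The cutoff scores below are assumptions for illustration only.
def green_star_rating(score, performance_tool=False):
    thresholds = [(75, 6), (60, 5), (45, 4), (30, 3), (20, 2), (10, 1)]
    for cutoff, stars in thresholds:
        if score >= cutoff:
            if stars <= 3 and not performance_tool:
                return None  # below best practice: no rating awarded
            return stars
    return None

print(green_star_rating(68))                         # 5
print(green_star_rating(25))                         # None (design tool)
print(green_star_rating(25, performance_tool=True))  # 2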
Green Star (Australia)
[ "Engineering" ]
822
[ "Building engineering", "Civil engineering", "Architecture" ]
15,497,991
https://en.wikipedia.org/wiki/BELBIC
In recent years, the use of biologically inspired methods such as evolutionary algorithms has increasingly been employed to solve and analyze complex computational problems. BELBIC (Brain Emotional Learning Based Intelligent Controller) is one such controller, proposed by Caro Lucas, Danial Shahmirzadi and Nima Sheikholeslami, which adopts the network model developed by Moren and Balkenius to mimic those parts of the brain which are known to produce emotion (namely, the amygdala, orbitofrontal cortex, thalamus and sensory input cortex). Emotions and learning Traditionally, the study of learning in biological systems was conducted at the expense of overlooking its lesser known counterparts: motivation and emotion. However, these phenomena cannot be separated. Motivation is the drive that causes any system to do anything – without it, there is no reason to act. Emotions indicate how successful a course of action has been and whether another set of actions should have been taken instead – they are a constant feedback to the learning system. Learning, on the other hand, guarantees that the motivation and emotional subsystems are able to adapt to constantly changing conditions. Thus, in the study of biological organisms, emotions have risen to prominence as an integral part of any biologically inspired system. But how does any living organism benefit from its emotions? It is crucial to answer this question as we attempt to increasingly employ biologically inspired methods in solving computational problems. Every creature has innate abilities that accommodate its survival in the world. It can identify food, shelter, partners, and danger. But these "simple mappings between stimuli and reactions will not be enough to keep the organisms from encountering problems." For example, if a given animal knows that its predator has qualities A, B and C, it will flee from all creatures that have those qualities, and thus waste much of its energy and resources on non-existent danger. We cannot expect evolution to provide more advanced algorithms for assessing danger, because the predator is also evolving at the same speed. Thus, biological systems need to be equipped with the ability to learn. This learning and re-learning mechanism allows them to adapt to highly complex and advanced situations. To learn effectively, every learning organism needs an evaluation of the current situation and also feedback on how beneficial the results of learning were. For the most part, these evaluation mechanisms are built-in. And so we encounter a new problem: whereas creatures take appropriate measures in real time based on their evaluations, these built-in evaluation procedures are developed in evolutionary time. But all creatures need to learn new evaluation techniques in their lifetimes just as they learn the proper reactions. This is where the ability to condition emotional reactions comes into play. Biological organisms associate innate emotional stimuli with other stimuli they encounter in the world and thus give them an emotional significance when needed. These evaluations can be conditioned to operate at very specific times, specific places or when accompanied by other specific stimuli. There is another reason why these observations are so significant, and that is the creation of artificial systems. These systems do not evolve over time but are designed with certain abilities from the start. Thus, their adaptability must be built-in. Computational model A model is a simplified description of a phenomenon. 
It brings to life some aspects of this phenomenon while overlooking others. What aspects are kept in the model and what are overlooked greatly depends on the topic of study. Thus, the nature of a model depends on the purpose the investigator plans to carry out. A computational model is one which can be mathematically analyzed, tested and simulated using computer systems. To construct a computational model of emotional learning in the brain requires a thorough analysis of the amygdala and the orbitofrontal cortex and the interaction between them: In mammals, emotional responses are processed in a part of the brain called the limbic system, which lies in the cerebral cortex. The main components of the limbic system are the amygdala, orbitofrontal cortex, thalamus and the sensory cortex. The amygdala is an almond-shaped area that is placed such that it can communicate with all other cortices within the limbic system. The primary affective conditioning of the system occurs within the amygdala. That is, the association between a stimulus and its emotional consequence takes place in this region. It has been suggested that learning takes place in two fundamental steps. First, a particular stimulus is correlated with an emotional response. This stimulus can be any of an endless number of phenomena, from observing a face, to detecting a scent, hearing a noise, etc. Second, this emotional consequence shapes an association between the stimulus and the response. This analysis is quite influential in part because it was one of the first to suggest that emotions play a key part in learning. In more recent studies, it has been shown that the association between a stimulus and its emotional consequence takes place in the amygdala. "In this region, highly analyzed stimulus representations in the cortex are associated with an emotional value. Therefore, emotions are properties of stimuli". The task of the amygdala is thus to assign a primary emotional value to each stimulus that has been paired with a primary reinforcer – the reinforcer is the reward and punishment that the mammal receives. This task is aided by the orbitofrontal cortex. "In terms of learning theory, the amygdala appears to handle the presentation of primary reinforcement, while the orbitofrontal cortex is involved in the detection of omission of reinforcement." The first thing we notice in the computational model developed by Moren and Balkenius is that quite a number of interacting learning systems exist in the brain that deal with emotional learning. The computational model is built from the following components: Th: thalamus; CX: sensory cortex; A: input structures in the amygdala; E: output structures in the amygdala; O: orbitofrontal cortex; Rew/Pun: external signals identifying the presentation of reward and punishment; CR/UR: conditioned response/unconditioned response; V: associative strength from the cortical representation to the amygdala, which is changed by learning; W: inhibitory connection from the orbitofrontal cortex to the amygdala, which is changed during learning. In this model, the sensory input enters through the thalamus Th. In biological systems, the thalamus takes on the task of initiating the process of a response to stimuli. It does so by passing the signal to the amygdala and the sensory cortex. This signal is then analyzed in the cortical area – CX. In biological systems, the sensory cortex operates by distributing the incoming signals appropriately between the amygdala and the orbitofrontal cortex. 
This sensory representation in CX is then sent to the amygdala A through the pathway V. This is the main pathway for learning in this model. Reward and punishment signals enter the amygdala to strengthen the connection between the amygdala and the pathway. At a later stage, if a similar representation is activated in the cortex, E becomes activated and produces an emotional response. O, the orbitofrontal cortex, operates based on the difference between the perceived (i.e. expected) reward/punishment and the actual received reward/punishment. The perceived reward/punishment is the one that has been developed in the brain over time using learning mechanisms, and it reaches the orbitofrontal cortex via the sensory cortex and the amygdala. The received reward/punishment, on the other hand, comes courtesy of the outside world and is the actual reward/punishment that the species has just obtained. If these two are identical, the output passes through E as usual. If not, the orbitofrontal cortex inhibits and restrains the emotional response to make way for further learning; the pathway W is only activated under such conditions. Controller In most industrial processes that contain complex nonlinearities, control algorithms rely on linearized models. One reason is that these linear models can be developed using straightforward methods from process test data. However, if the process is highly complex and nonlinear, and subject to frequent disturbances, a nonlinear model will be required. Biologically motivated intelligent controllers have been increasingly employed in these situations. Amongst them, fuzzy logic, neural networks and genetic algorithms are some of the most widely employed tools in control applications with highly complex, nonlinear settings. BELBIC is one such nonlinear controller – a neuromorphic controller based on the computational learning model shown above – that produces the control action. This model is employed much like an algorithm in these control engineering applications. In these new approaches, intelligence is not given to the system from the outside but is actually acquired by the system itself. This simple model has been employed as a feedback controller applied to control design problems. One rationale behind this use in control engineering is a belief held by many experts in the field that there has been too much focus on fully rational deliberative approaches, whereas in many real-world circumstances, we are only provided with a bounded rationality. Factors like computational complexity, multiplicity of objectives and prevalence of uncertainty lead to a desire for more ad-hoc, rule-of-thumb approaches. Emotional decision making is highly capable of addressing these issues because it is neither fully cognitive nor fully behavioral. BELBIC, which is a model-free controller, suffers from the same drawback as all intelligent model-free controllers: it cannot be applied to unstable systems or systems with an unstable equilibrium point. This is a natural result of the trial-and-error manner of the learning procedure, i.e. exploration for finding the appropriate control signals can lead to instability. By integrating imitative learning and fuzzy inference systems, BELBIC has been generalized in order to be capable of controlling unstable systems. 
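The prose description of the V and W pathways may be easier to follow in code. Below is a minimal Python sketch of a Moren–Balkenius-style learning step; the class name, learning rates and the exact form of the update rules are illustrative assumptions, and published BELBIC variants differ in detail (for example in how the thalamic input and the error terms are handled).

import numpy as np

class EmotionalLearner:
    def __init__(self, n_inputs, alpha=0.1, beta=0.05):
        self.V = np.zeros(n_inputs)  # amygdala gains (pathway V)
        self.W = np.zeros(n_inputs)  # orbitofrontal inhibitory gains (pathway W)
        self.alpha = alpha           # amygdala learning rate
        self.beta = beta             # orbitofrontal learning rate

    def step(self, S, rew):
        """S: sensory input vector; rew: reward/punishment signal."""
        A = self.V * S         # amygdala node outputs
        O = self.W * S         # orbitofrontal node outputs
        E = A.sum() - O.sum()  # emotional response / controller output
        # Amygdala weights only grow: a learned reward association is not
        # unlearned here, mirroring the monotonic V pathway.
        self.V += self.alpha * S * max(0.0, rew - A.sum())
        # Orbitofrontal weights track the mismatch between the response and
        # the received reward, inhibiting the amygdala when reinforcement is
        # omitted (the W pathway).
        self.W += self.beta * S * (E - rew)
        return E

In a control application, S would be assembled from plant measurements and rew from a designer-chosen emotional cue (cost) function, so the same loop can serve as the feedback controller described below.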
Applications To date, BELBIC and its modified versions have been tested on the following applications: HVAC systems (heating, ventilating and air conditioning): these are some of the most challenging plants in control systems and consume 50% of the total world energy consumption. Unstable systems (or stable systems with an unstable equilibrium point) Inverted pendulum systems Nonlinear systems Cell-to-cell mapping algorithm Electrically heated micro heat exchanger: this device has been developed to accelerate fluid and heat exchange in reduced systems. RoboCup Rescue Simulation: a large multi-agent system is one of the most challenging environments to control and coordinate because there needs to be precise coordination between agents. Control of intelligent washing machines: intelligent control of home appliances has gained considerable attention from scientists and the industry in recent years. In the case of washing machines, intelligent control could mean both easier use and energy and water conservation. Autolanding systems Speed regulation of DC motors Active queue management Aerospace launch vehicle control Impossibles AIBO 4-legged Robocup competition Predicting the geomagnetic activity index; in this application, various extended models have been proposed by researchers. Babaei et al. presented a multi-agent model of brain emotional learning, and Lotfi and Akbarzadeh proposed a supervised learning version of brain emotional learning to forecast geomagnetic activity indices. Gene expression microarray classification Speed control of switched reluctance motors Intelligent control of a micro heat exchanger Model-free control of an overhead travelling crane Autopilot control design for a 2-DOF helicopter model Path tracking for a car Attitude control of a quadrotor Digital servo systems Multi-agent systems Secondary control of microgrids Position control of a real laboratory EHS actuator: electrohydraulic servo valves are known to be nonlinear and non-smooth due to many factors such as leakage, friction, hysteresis, null shift, saturation, dead zone, and especially the fluid flow expression through the servomechanism. See also Fuzzy logic Evolutionary algorithm Neural network Genetic algorithm Caro Lucas References External links A Practical Tutorial on Genetic Algorithm Programming a Genetic Algorithm step by step. Fuzzy logic – article at Stanford Encyclopedia of Philosophy International Society for Genetic and Evolutionary Computation IEEE Computational Intelligence Society (IEEE CIS) A collection of non-linear models and demo applets (in Monash University's Virtual Lab) Nonlinear Dynamics I: Chaos at MIT's OpenCourseWare PSO-BELBIC scheme for two-coupled distillation column process Brain Emotional Learning-inspired Models Cognitive science Control engineering
BELBIC
[ "Engineering" ]
2,490
[ "Control engineering" ]
15,499,236
https://en.wikipedia.org/wiki/List%20of%20hydrodynamic%20instabilities%20named%20after%20people
This is a list of hydrodynamic and plasma instabilities named after people (eponymous instabilities). See also Eponym List of fluid flows named after people Instability Hydrodynamic stability Scientific phenomena named after people References Chandrasekhar, S., Hydrodynamic and Hydromagnetic Stability, Dover Publications, New York (1981). Drazin, P. G. and W. H. Reid, Hydrodynamic Stability, Cambridge Univ. Press, London (1981). Hydrodynamic instabilities Fluid dynamics Fluid dynamic instabilities
List of hydrodynamic instabilities named after people
[ "Physics", "Chemistry", "Engineering" ]
124
[ "Physical phenomena", "Fluid dynamic instabilities", "Chemical engineering", "Plasma phenomena", "Plasma instabilities", "Piping", "Fluid dynamics" ]
15,500,383
https://en.wikipedia.org/wiki/Sand%20separator
A sand separator is a device that separates sand or other solids from water. One version of the sand separator uses centrifugal force to separate sand or other heavy particles out of the water. The separated material drops down into a tank or reservoir where it can be removed later; in the case of in-well separators, the separated sand drops into the bottom of the well. It is not a true filter, since there is no physical barrier to separate out the particles, but it is often used upstream of a filter to first remove the bulk of the contaminant, while the filter does the final cleaning. This type of design reduces the time required to flush and clean the filter. Applications Sand separators are used in micro irrigation systems to remove sand and silt particles from irrigation water, and in grinding circuits within the mineral processing industry, where the coarse particles return to the mill for re-grinding while the finer products are passed on to a subsequent stage of treatment. References See also Hydrocyclone Cyclonic separation Water filters Irrigation
Sand separator
[ "Chemistry", "Engineering" ]
213
[ "Water filters", "Water treatment", "Filters", "Civil engineering", "Civil engineering stubs" ]
20,606,232
https://en.wikipedia.org/wiki/Jacquet%E2%80%93Langlands%20correspondence
In mathematics, the Jacquet–Langlands correspondence is a correspondence between automorphic forms on GL2 and its twisted forms, proved by Hervé Jacquet and Robert Langlands in their book Automorphic Forms on GL(2) using the Selberg trace formula. It was one of the first examples of the Langlands philosophy that maps between L-groups should induce maps between automorphic representations. There are generalized versions of the Jacquet–Langlands correspondence relating automorphic representations of GLr(D) and GLdr(F), where D is a division algebra of degree d2 over the local or global field F. Suppose that G is an inner twist of the algebraic group GL2, in other words the multiplicative group of a quaternion algebra. The Jacquet–Langlands correspondence is a bijection between automorphic representations of G of dimension greater than 1 and cuspidal automorphic representations of GL2 that are square integrable (modulo the center) at each ramified place of G. Corresponding representations have the same local components at all unramified places of G. The Jacquet–Langlands correspondence was later extended to division algebras of higher dimension. References Automorphic forms Theorems in harmonic analysis
Jacquet–Langlands correspondence
[ "Mathematics" ]
252
[ "Theorems in mathematical analysis", "Theorems in harmonic analysis" ]
20,606,961
https://en.wikipedia.org/wiki/Computational%20lithography
Computational lithography (also known as computational scaling) is the set of mathematical and algorithmic approaches designed to improve the resolution attainable through photolithography. Computational lithography came to the forefront of photolithography technologies in 2008, when the semiconductor industry faced challenges associated with the transition to a 22 nanometer CMOS microfabrication process, and has become instrumental in further shrinking the design nodes and topology of semiconductor transistor manufacturing. History Computational lithography means the use of computers to simulate the printing of micro-lithography structures. Pioneering work was done from the early 1980s by Chris Mack at NSA in developing PROLITH, Rick Dill at IBM, and Andy Neureuther at the University of California, Berkeley. These tools were limited to lithography process optimization, as the algorithms were limited to a few square micrometres of resist. Commercial full-chip optical proximity correction (OPC), using model forms, was first implemented by TMA (now a subsidiary of Synopsys) and Numerical Technologies (also part of Synopsys) around 1997. Since then the market and complexity have grown significantly. With the move to sub-wavelength lithography at the 180 nm and 130 nm nodes, RET techniques such as assist features and phase-shift masks started to be used together with OPC. For the transition from the 65 nm to the 45 nm node, customers worried not only that design rules were insufficient to guarantee printing without yield-limiting hotspots, but also that tape-out might require thousands of CPUs or weeks of run time. This predicted exponential increase in computational complexity for mask synthesis on moving to the 45 nm process node spawned significant venture capital investment in design for manufacturing start-up companies. A number of startup companies promoting their own disruptive solutions to this problem started to appear; techniques ranging from custom hardware acceleration to radical new algorithms such as inverse lithography were touted to resolve the forthcoming bottlenecks. Despite this activity, incumbent OPC suppliers were able to adapt and keep their major customers, with RET and OPC being used together as for previous nodes, but now on more layers and with larger data files, and turnaround-time concerns were met by new algorithms and improvements in multi-core commodity processors. The term computational lithography was first used by Brion Technologies (now a subsidiary of ASML) in 2005 to promote their hardware-accelerated full-chip lithography simulation platform. Since then the term has been used by the industry to describe full-chip mask synthesis solutions. With 45 nm in full production and the introduction of EUV lithography delayed, 32 nm and 22 nm are expected to run on existing 193 nm scanner technology. Not only are throughput and capability concerns resurfacing, but new computational lithography techniques such as Source Mask Optimization (SMO) are also seen as a way to squeeze better resolution specific to a given design. Today, all the major mask synthesis vendors have settled on the term "computational lithography" to describe and promote the set of mask synthesis technologies required for 22 nm. Techniques comprising computational lithography Computational lithography makes use of a number of numerical simulations to improve the performance (resolution and contrast) of cutting-edge photomasks. 
The combined techniques include Resolution Enhancement Technology (RET), Optical Proximity Correction (OPC), Source Mask Optimization (SMO), etc. The techniques vary in terms of their technical feasibility and engineering practicality, resulting in the adoption of some and the continual R&D of others. Resolution enhancement technology Resolution enhancement technologies, first used in the 90 nanometer generation, use the mathematics of diffraction optics to specify multi-layer phase-shift photomasks that use interference patterns in the photomask to enhance resolution on the printed wafer surface. Optical proximity correction Optical proximity correction uses computational methods to counteract the effects of diffraction-related blurring and under-exposure by modifying on-mask geometries with means such as: adjusting linewidths depending on the density of surrounding geometries (a trace surrounded by a large open area will be over-exposed compared with the same trace surrounded by a dense pattern), adding "dog-bone" endcaps to the end of lines to prevent line shortening, and correcting for electron beam proximity effects. OPC can be broadly divided into rule-based and model-based approaches. Inverse lithography technology, which treats OPC as an inverse imaging problem, is also a useful technique because it can provide unintuitive mask patterns. Complex modeling of the lens system and photoresist Beyond the models used for RET and OPC, computational lithography attempts to improve chip manufacturability and yields, for example by using the signature of the scanner to help improve the accuracy of the OPC model: polarization characteristics of the lens pupil, the Jones matrix of the stepper lens, optical parameters of the photoresist stack, diffusion through the photoresist, and stepper illumination control variables. Computational effort The computational effort behind these methods is immense. According to one estimate, the calculations required to adjust OPC geometries to take into account variations in focus and exposure for a state-of-the-art integrated circuit will take approximately 100 CPU-years of computer time. This does not include modeling the 3D polarization of the light source or any of the several other systems that need to be modeled in production computational photolithographic mask-making flows. Brion Technologies, a subsidiary of ASML, markets a rack-mounted hardware accelerator dedicated to computational lithographic calculations; a mask-making shop can purchase a large number of these systems to run in parallel. Others have claimed significant acceleration using re-purposed off-the-shelf graphics cards for their high parallel throughput. 193 nm deep UV photolithography The periodic enhancement in the resolution achieved through photolithography has been a driving force behind Moore's Law. Resolution improvements enable printing of smaller geometries on an integrated circuit. The minimum feature size that a projection system typically used in photolithography can print is given approximately by CD = k1 × λ / NA, where CD is the minimum feature size (also called the critical dimension), λ is the wavelength of light used, NA is the numerical aperture of the lens as seen from the wafer, and k1 (commonly called the k1 factor) is a coefficient that encapsulates process-related factors. 
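As a quick worked example of this formula in Python, using typical 193 nm immersion values: the NA of 1.35 is representative of production immersion scanners and the k1 value is an assumption, so the result is illustrative only.

# Worked example of the resolution equation CD = k1 * wavelength / NA.
# k1 = 0.30 and NA = 1.35 are assumed, representative 193 nm immersion
# values, not figures from the text.
def min_feature_size(k1, wavelength_nm, na):
    """Approximate printable critical dimension in nanometres."""
    return k1 * wavelength_nm / na

print(min_feature_size(0.30, 193.0, 1.35))  # ~42.9 nm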
Historically, resolution enhancements in photolithography have been achieved through the progression of stepper illumination sources to smaller and smaller wavelengths: from "g-line" (436 nm) and "i-line" (365 nm) sources based on mercury lamps, to the current systems based on deep ultraviolet excimer laser sources at 193 nm. However, the progression to yet finer wavelength sources has been stalled by the intractable problems associated with extreme ultraviolet lithography and x-ray lithography, forcing semiconductor manufacturers to extend the current 193 nm optical lithography systems until some form of next-generation lithography proves viable (although 157 nm steppers have also been marketed, they have proven cost-prohibitive at $50M each). Efforts to improve resolution by increasing the numerical aperture have led to the use of immersion lithography. As further improvements in resolution through wavelength reduction or increases in numerical aperture have become either technically challenging or economically unfeasible, much attention has been paid to reducing the k1 factor. The k1 factor can be reduced through process improvements, such as phase-shift photomasks. These techniques have enabled photolithography at the 32 nanometer CMOS process technology node using a wavelength of 193 nm (deep ultraviolet). However, with the ITRS roadmap calling for the 22 nanometer node to be in use by 2011, photolithography researchers have had to develop an additional suite of improvements to make 22 nm technology manufacturable. While the increase in mathematical modeling has been underway for some time, the degree and expense of those calculations have justified the use of a new term to cover the changing landscape: computational lithography. See also International Technology Roadmap for Semiconductors References Lithography (microfabrication) Computational fields of study
Computational lithography
[ "Materials_science", "Technology" ]
1,678
[ "Computational fields of study", "Microtechnology", "Computing and society", "Nanotechnology", "Lithography (microfabrication)" ]
20,607,372
https://en.wikipedia.org/wiki/Light%20dark%20matter
Light dark matter, in astronomy and cosmology, refers to dark matter candidates in the form of weakly interacting massive particles (WIMPs) with masses less than 1 GeV (i.e., a mass similar to or less than that of a neutron or proton). These particles are heavier than warm dark matter and hot dark matter, but are lighter than the traditional forms of cold dark matter, such as Massive Compact Halo Objects (MACHOs). The Lee–Weinberg bound limits the mass of the favored dark matter candidates, WIMPs that interact via the weak interaction, to about 2 GeV or more. This bound arises as follows. The lower the mass of WIMPs is, the lower the annihilation cross section, which is of the order m2/M4, where m is the WIMP mass and M the mass of the Z boson. This means that low mass WIMPs, which would be abundantly produced in the early universe, freeze out (i.e. stop interacting) much earlier, and thus at a higher temperature, than higher mass WIMPs. This leads to a higher relic WIMP density. If the mass is lower than about 2 GeV, the WIMP relic density would overclose the universe. Some of the few loopholes allowing one to avoid the Lee–Weinberg bound without introducing new forces below the electroweak scale have been ruled out by accelerator experiments (i.e. CERN, Tevatron) and in decays of B mesons. A viable way of building light dark matter models is thus by postulating new light bosons. This increases the annihilation cross section and reduces the coupling of dark matter particles to the Standard Model, making them consistent with accelerator experiments. Current methods to search for light dark matter particles include direct detection through electron recoil. Motivation In recent years, light dark matter has become popular due in part to the many benefits of the theory. Sub-GeV dark matter has been used to explain the positron excess in the Galactic Center observed by INTEGRAL, and excess gamma rays from the Galactic Center and extragalactic sources. It has also been suggested that light dark matter may explain a small discrepancy in the measured value of the fine structure constant in different experiments. Furthermore, the lack of dark matter signals in higher energy ranges in direct detection experiments incentivizes sub-GeV searches. Theoretical models Because the constraints placed on the mass of WIMPs in the popular freeze out model predict WIMP masses greater than 2 GeV, the freeze out model must be altered to allow for lower mass dark matter particles. Scalar dark matter The Lee–Weinberg limit, which restricts the mass of dark matter particles to >2 GeV, may not apply in two special cases where dark matter is a scalar particle. The first case requires that the scalar dark matter particle is coupled with a massive fermion. This model rules out dark matter particles less than 100 MeV because observations of gamma ray production do not align with theoretical predictions for particles in this mass range. This discrepancy may be resolved by requiring an asymmetry between the dark matter particles and antiparticles, as well as by adding new particles. The second case predicts that the scalar dark matter particle is coupled with a new gauge boson. The production of gamma rays due to annihilation in this case is predicted to be very low. Freeze in model The thermal freeze in model proposes that dark matter particles were very weakly interacting shortly after the Big Bang, such that they were essentially decoupled from the plasma. Furthermore, their initial abundance was small. 
Dark matter production occurs predominantly when the temperature of the plasma falls under the mass of the dark matter particle itself. This is in contrast to the thermal freeze out theory, in which the initial abundance of dark matter was large, and annihilation into lighter particles decreases and eventually stops as the temperature of the plasma decreases. The freeze in model allows for dark matter particles well under the 2 GeV mass limit to exist. Asymmetric dark matter Observations show that the density of dark matter is about 5 times the density of baryonic matter. Asymmetric dark matter theories attempt to explain this relationship by suggesting that the ratio between the number densities of particles and antiparticles is the same in baryonic matter as it is in dark matter. This further implies that the mass of the dark matter particle is close to 5 times the mass of a baryonic particle, placing the mass of dark matter in the few GeV range. Experiments In general, the methods for detecting dark matter which apply to all heavier dark matter candidates also apply to light dark matter. These methods include direct detection and indirect detection. Dark matter particles with masses lighter than 1 GeV can be directly detected by searching for electron recoils. The greatest difficulty in using this method is creating a detector with a low enough threshold energy for detection while also minimizing background signals. Electron beam dump experiments can also be used to search for light dark matter particles. XENON10 XENON10 is a liquid xenon detector that searches for and places limits on the mass of dark matter by directly detecting electron recoil. This experiment placed the first sub-GeV limits on the mass of dark matter using direct detection in 2012. SENSEI SENSEI is a silicon detector capable of measuring the electronic recoil of a dark matter particle between 500 keV and 4 MeV using CCD technology. The experiment has been working to further rule out possible mass ranges of dark matter below 1 GeV, with its most recent results being published in October 2020. See also Axion Axion Dark Matter Experiment Dark matter halo Minimal Supersymmetric Standard Model Neutralino Scalar field dark matter Weakly interacting massive particles Weakly interacting slender particles References Further reading Astroparticle physics Dark matter Physics beyond the Standard Model
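The scaling argument behind the Lee–Weinberg bound can be made concrete with a toy Python calculation; normalizations are omitted and all numbers are illustrative, showing only that the relic density grows as the WIMP mass falls.

# Toy illustration of the Lee-Weinberg scaling. The relic density scales
# inversely with the annihilation cross section, and the cross section
# scales as m^2 / M^4, so lighter WIMPs freeze out with a larger relic
# density. Proportionality constants are omitted; only the relative
# trend between masses is meaningful.
M_Z = 91.2  # Z boson mass in GeV

def relative_relic_density(m_wimp_gev):
    sigma = m_wimp_gev**2 / M_Z**4  # ~ annihilation cross section
    return 1.0 / sigma              # relic density ~ 1 / sigma

for m in (0.5, 2.0, 10.0):
    print(m, relative_relic_density(m))  # relic density falls as mass rises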
Light dark matter
[ "Physics", "Astronomy" ]
1,171
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Astroparticle physics", "Unsolved problems in physics", "Astrophysics", "Particle physics", "Exotic matter", "Physics beyond the Standard Model", "Matter" ]
20,607,517
https://en.wikipedia.org/wiki/Yamaguchi%20esterification
The Yamaguchi esterification is the chemical reaction of an aliphatic carboxylic acid and 2,4,6-trichlorobenzoyl chloride (TCBC, the Yamaguchi reagent) to form a mixed anhydride which, upon reaction with an alcohol in the presence of a stoichiometric amount of DMAP, produces the desired ester. It was first reported by Masaru Yamaguchi et al. in 1979. It is especially useful in the synthesis of macrolactones and highly functionalised esters. Reaction mechanism The aliphatic carboxylate adds to the carbonyl carbon of the Yamaguchi reagent, forming a mixed anhydride, which is then attacked by DMAP regioselectively at the less hindered carbon, producing an acyl-substituted DMAP. This highly electrophilic agent is then attacked by the alcohol to form the product ester. The in situ formation of the symmetric aliphatic anhydride is proposed to explain the regioselectivity observed in the reactions of aliphatic acids, based on the fact that aliphatic carboxylates are more nucleophilic, and aliphatic anhydrides more electrophilic towards DMAP and alcohol, than their counterparts. See also Mitsunobu reaction Macrolide References External links Yamaguchi esterification—organic-chemistry.org Investigation of the Yamaguchi Esterification Mechanism. Synthesis of a Lux-S Enzyme Inhibitor Using an Improved Esterification Method. I. Dhimitruka, J. SantaLucia, Org. Lett., 2006, 8, 47–50. Article Condensation reactions Carbon-heteroatom bond forming reactions Esterification reactions Name reactions
Yamaguchi esterification
[ "Chemistry" ]
373
[ "Esterification reactions", "Coupling reactions", "Organic reactions", "Name reactions", "Carbon-heteroatom bond forming reactions", "Condensation reactions" ]
20,610,136
https://en.wikipedia.org/wiki/Reproductive%20system
The reproductive system of an organism, also known as the genital system, is the biological system made up of all the anatomical organs involved in sexual reproduction. Many non-living substances such as fluids, hormones, and pheromones are also important accessories to the reproductive system. Unlike most organ systems, the sexes of differentiated species often have significant differences. These differences allow for a combination of genetic material between two individuals, which allows for the possibility of greater genetic fitness of the offspring. Animals In mammals, the major organs of the reproductive system include the external genitalia (penis and vulva) as well as a number of internal organs, including the gamete-producing gonads (testicles and ovaries). Diseases of the human reproductive system are very common and widespread, particularly communicable sexually transmitted infections. Most other vertebrates have similar reproductive systems consisting of gonads, ducts, and openings. However, there is a great diversity of physical adaptations as well as reproductive strategies in every group of vertebrates. Vertebrates Vertebrates share key elements of their reproductive systems. They all have gamete-producing organs known as gonads. In females, these gonads are then connected by oviducts to an opening to the outside of the body, typically the cloaca, but sometimes to a unique pore such as a vagina. Humans The human reproductive system usually involves internal fertilization by sexual intercourse. During this process, the male inserts his erect penis into the female's vagina and ejaculates semen, which contains sperm. The sperm then travels through the vagina and cervix into the uterus or fallopian tubes for fertilization of the ovum. Upon successful fertilization and implantation, gestation of the fetus then occurs within the female's uterus for approximately nine months; this process is known as pregnancy in humans. Gestation ends with childbirth, with delivery following labor. Labor consists of the muscles of the uterus contracting, the cervix dilating, and the baby passing out through the vagina (the female genital organ). Human babies and children are nearly helpless and require high levels of parental care for many years. One important type of parental care is the use of the mammary glands in the female breasts to nurse the baby. The female reproductive system has two functions: the first is to produce egg cells, and the second is to protect and nourish the offspring until birth. The male reproductive system has one function: to produce and deposit sperm. Humans have a high level of sexual differentiation. In addition to differences in nearly every reproductive organ, numerous differences typically occur in secondary sexual characteristics. Male The male reproductive system is a series of organs located outside of the body and around the pelvic region of a male that contribute towards the reproduction process. The primary direct function of the male reproductive system is to provide the male sperm for fertilization of the ovum. The major reproductive organs of the male can be grouped into three categories. The first category is sperm production and storage. Production takes place in the testicles, which are housed in the temperature-regulating scrotum; immature sperm then travel to the epididymides for development and storage. The second category is the ejaculatory fluid-producing glands, which include the seminal vesicles, prostate, and the vasa deferentia. 
The final category comprises the organs used for copulation and the deposition of the spermatozoa (sperm); these include the penis, urethra, vas deferens, and Cowper's gland. Major secondary sex characteristics include larger, more muscular stature, a deepened voice, facial and body hair, broad shoulders, and the development of an Adam's apple. Important sexual hormones of males are the androgens, particularly testosterone. The testes release a hormone that controls the development of sperm. This hormone is also responsible for the development of physical characteristics in men such as facial hair and a deep voice. Female The human female reproductive system is a series of organs primarily located inside the body and around the pelvic region of a female that contribute towards the reproductive process. The human female reproductive system contains three main parts: the vulva, which leads through the vaginal opening to the vagina and on to the uterus; the uterus, which holds the developing fetus; and the ovaries, which produce the female's ova. The breasts are involved during the parenting stage of reproduction, but in most classifications they are not considered to be part of the female reproductive system. The vagina meets the outside at the vulva, which also includes the labia, clitoris and urethra; during intercourse, this area is lubricated by mucus secreted by the Bartholin's glands. The vagina is attached to the uterus through the cervix, while the uterus is attached to the ovaries via the fallopian tubes. Each ovary contains hundreds of ova (singular ovum). Approximately every 28 days, the pituitary gland releases a hormone that stimulates some of the ova to develop and grow. One ovum is released and it passes through the fallopian tube into the uterus. Hormones produced by the ovaries prepare the uterus to receive the ovum. The ovum moves through the fallopian tube and awaits sperm for fertilization to occur. If fertilization does not occur, the lining of the uterus, called the endometrium, and unfertilized ova are shed each cycle through the process of menstruation. If the ovum is fertilized by sperm, it attaches to the endometrium and embryonic development begins. Other mammals Most mammal reproductive systems are similar; however, there are some notable differences between non-human mammals and humans. For instance, most male mammals have a penis which is stored internally until erect, and most have a penis bone or baculum. Additionally, both males and females of most species do not remain continually sexually fertile as humans do, and the females of most mammalian species do not grow permanent mammaries as human females do. Like humans, most groups of mammals have descended testicles found within a scrotum; however, others have descended testicles that rest on the ventral body wall, and a few groups of mammals, such as elephants, have undescended testicles found deep within their body cavities near their kidneys. The reproductive system of marsupials is unique in that the female has two vaginae, both of which open externally through one orifice but lead to different compartments within the uterus; males usually have a two-pronged penis, which corresponds to the females' two vaginae. Marsupials typically develop their offspring in an external pouch containing teats to which their newborn young (joeys) attach themselves for post-uterine development. Also, marsupials have a unique prepenial scrotum. 
The newborn joey instinctively crawls and wriggles its way, while clinging to fur, to its mother's pouch. With regard to males, the mammalian penis has a structure similar to that found in reptiles and a small percentage of birds, while the scrotum is present only in mammals. Regarding females, the vulva is unique to mammals, with no homologue in birds, reptiles, amphibians, or fish. The clitoris, however, can be found in some reptiles and birds. In place of the uterus and vagina, non-mammal vertebrate groups have an unmodified oviduct leading directly to a cloaca, which is a shared exit-hole for gametes, urine, and feces. Monotremes (i.e. platypus and echidnas), a group of egg-laying mammals, also lack a uterus, vagina, and vulva, and in that respect have a reproductive system resembling that of a reptile. Dogs In domestic canines, sexual maturity (puberty) occurs between the ages of 6 and 12 months for both males and females, although this can be delayed until up to two years of age for some large breeds. Horses The mare's reproductive system is responsible for controlling gestation, birth, and lactation, as well as her estrous cycle and mating behavior. The stallion's reproductive system is responsible for his sexual behavior and secondary sex characteristics (such as a large crest). Even-toed ungulates Birds Male and female birds have a cloaca, an opening through which eggs, sperm, and wastes pass. Intercourse is performed by pressing the lips of the cloacae together; some birds also possess an intromittent organ, known as a phallus, that is analogous to the mammals' penis. The female lays amniotic eggs in which the young fetus continues to develop after it leaves the female's body. Unlike most vertebrates, female birds typically have only one functional ovary and oviduct. As a group, birds, like mammals, are noted for their high level of parental care. Reptiles Reptiles are almost all sexually dimorphic, and exhibit internal fertilization through the cloaca. Some reptiles lay eggs while others are ovoviviparous (animals that deliver live young). Reproductive organs are found within the cloaca of reptiles. Most male reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis-like organ, while male snakes and lizards each possess a pair of penis-like organs. Amphibians Most amphibians exhibit external fertilization of eggs, typically within the water, though some amphibians such as caecilians have internal fertilization. All have paired, internal gonads, connected by ducts to the cloaca. Fish Fish exhibit a wide range of different reproductive strategies. Most fish, however, are oviparous and exhibit external fertilization. In this process, females use their cloaca to release large quantities of their gametes, called spawn, into the water, and one or more males release "milt", a white fluid containing many sperm, over the unfertilized eggs. Other species of fish are oviparous and have internal fertilization aided by pelvic or anal fins that are modified into an intromittent organ analogous to the human penis. A small portion of fish species are either viviparous or ovoviviparous, and are collectively known as livebearers. Fish gonads are typically pairs of either ovaries or testicles. Most fish are sexually dimorphic but some species are hermaphroditic or unisexual. 
Invertebrates Invertebrates have an extremely diverse array of reproductive systems; the only commonality may be that they all lay eggs. Also, aside from cephalopods and arthropods, nearly all other invertebrates are hermaphroditic and exhibit external fertilization. Cephalopods All cephalopods are sexually dimorphic and reproduce by laying eggs. Most cephalopods have semi-internal fertilization, in which the male places his gametes inside the female's mantle cavity or pallial cavity to fertilize the ova found in the female's single ovary. Likewise, male cephalopods have only a single testicle. In the female of most cephalopods the nidamental glands aid in development of the egg. The "penis" in most unshelled male cephalopods (Coleoidea) is a long and muscular end of the gonoduct used to transfer spermatophores to a modified arm called a hectocotylus. That in turn is used to transfer the spermatophores to the female. In species where the hectocotylus is missing, the "penis" is long and able to extend beyond the mantle cavity and transfer the spermatophores directly to the female. Insects Most insects reproduce oviparously, i.e. by laying eggs. The eggs are produced by the female in a pair of ovaries. Sperm, produced by the male in one testis or more commonly two, is transmitted to the female during mating by means of external genitalia. The sperm is stored within the female in one or more spermathecae. At the time of fertilization, the eggs travel along oviducts to be fertilized by the sperm and are then expelled from the body ("laid"), in most cases via an ovipositor. Arachnids Arachnids may have one or two gonads, which are located in the abdomen. The genital opening is usually located on the underside of the second abdominal segment. In most species, the male transfers sperm to the female in a package, or spermatophore. Complex courtship rituals have evolved in many arachnids to ensure the safe delivery of the sperm to the female. Arachnids usually lay yolky eggs, which hatch into immatures that resemble adults. Scorpions, however, are either ovoviviparous or viviparous, depending on species, and bear live young. Plants Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most varied physically and show a correspondingly great diversity in methods of reproduction. Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants, and through his work it was first understood that the pollination process involves both biotic and abiotic interactions. Fungi Fungal reproduction is complex, reflecting the differences in lifestyles and genetic makeup within this diverse kingdom of organisms. It is estimated that a third of all fungi reproduce using more than one method of propagation; for example, reproduction may occur in two well-differentiated stages within the life cycle of a species, the teleomorph and the anamorph. Environmental conditions trigger genetically determined developmental states that lead to the creation of specialized structures for sexual or asexual reproduction. 
These structures aid reproduction by efficiently dispersing spores or spore-containing propagules. See also Major systems of the human body Reproductive system disease Human sexuality Human sexual behavior Plant sexuality Meiosis References Cited literature External links Fertility Endocrine system
Reproductive system
[ "Biology" ]
3,115
[ "Behavior", "Reproductive system", "Endocrine system", "Sex", "Reproduction", "Organ systems" ]
20,612,358
https://en.wikipedia.org/wiki/Hydrogen%20silsesquioxane
Hydrogen silsesquioxane(s) (HSQ, H-SiOx, THn, H-resin) are inorganic compounds with the empirical formula [HSiO3/2]n. The cubic H8Si8O12 (TH8) is used as the visual representation for HSQ. TH8, TH10, TH12, and TH14 have been characterized by elemental analysis, gas chromatography–mass spectroscopy (GC-MS), IR spectroscopy, and NMR spectroscopy. High purity semiconductor-grade HSQ has been investigated as a negative resist in photolithography and electron-beam (e-beam) lithography. HSQ is commonly delivered in methyl isobutyl ketone (MIBK) and can be used to form 0.01–2 μm films on substrates/wafers. When exposed to electrons or extreme ultraviolet radiation (EUV), HSQ cross-links via hydrogen evolution concomitant with Si-O bond crosslinking. Recently, the possibility of crosslinking HSQ using ultrashort laser pulses through multiphoton absorption and its application to 3D printing of silica glass have been demonstrated. Sufficiently dosed and exposed regions form a low dielectric constant (low-k) Si-rich oxide that is chemically resistant/insoluble towards developers, such as tetramethylammonium hydroxide (TMAH). Sub-10 nm patterning is achievable with HSQ. The nanoscale patterning capability and the low k of the Si-rich oxide produced are potentially useful for a broad range of nano applications and devices. HSQ has been available as 1 and 6% (wt%) MIBK solutions from Dow Inc. (formerly Dow Corning), called XR-1541-001 and XR-1541-006, respectively. HSQ in MIBK has a short shelf life. Alternatively, Applied Quantum Materials Inc. (AQM) produces HSQ with a longer shelf life. HSQ solutions derived from AQM dry silone resin are available in the United States from DisChem, Inc in concentrations ranging from 1-20% in MIBK under the brand name H-SiQ. EM Resist Ltd (UK) also supplies HSQ worldwide both as powder and in solution. References Further reading Optical materials
Hydrogen silsesquioxane
[ "Physics" ]
489
[ "Materials stubs", "Materials", "Optical materials", "Matter" ]
20,614,850
https://en.wikipedia.org/wiki/Biochemistry%20%28journal%29
Biochemistry is a peer-reviewed academic journal in the field of biochemistry. Founded in 1962, the journal is now published weekly by the American Chemical Society, with 51 or 52 annual issues. According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.9. The previous editor-in-chief was Richard N. Armstrong (Vanderbilt University School of Medicine) (2004–16). After his death, Alanna Schepartz (UC Berkeley) was appointed editor-in-chief. Indexing Biochemistry is indexed in: References External links Biochemistry website NCBI: Biochemistry Academic journals established in 1962 American Chemical Society academic journals Biochemistry journals English-language journals Weekly journals
Biochemistry (journal)
[ "Chemistry" ]
140
[ "Biochemistry stubs", "Biochemistry journals", "Biochemistry literature", "Biochemistry journal stubs" ]
747,591
https://en.wikipedia.org/wiki/Kammback
A Kammback, also known as a Kamm tail or K-tail, is an automotive styling feature wherein the rear of the car slopes downwards before being abruptly cut off with a vertical or near-vertical surface. A Kammback reduces aerodynamic drag, thus improving efficiency and reducing fuel consumption, while maintaining a practical shape for a vehicle. The Kammback is named after German aerodynamicist Wunibald Kamm for his work developing the design in the 1930s. Some vehicles incorporate the Kammback design based on aerodynamic principles, while some use a cut-off tail as a design or marketing feature. Origins As the speed of cars increased during the 1920s and 1930s, designers observed and began to apply the principles of automotive aerodynamics. As aerodynamic drag increases, more energy, and thus more fuel, is required to propel the vehicle. In 1922, Paul Jaray patented a car based on a teardrop profile (i.e. with a rounded nose and long, tapered tail) to minimize the aerodynamic drag that is created at higher speeds. The streamliner vehicles of the mid-1930s, such as the Tatra 77, Chrysler Airflow and Lincoln-Zephyr, were designed according to these discoveries. However, the long tail was not a practical shape for a car, so automotive designers sought other solutions. In 1935, German aircraft designer Georg Hans Madelung showed alternatives to minimize drag without a long tail. In 1936, a similar theory was applied to cars when Baron Reinhard Koenig-Fachsenfeld developed a smooth roofline shape ending abruptly at a vertical surface, which achieved drag nearly as low as that of a fully streamlined body. Koenig-Fachsenfeld applied the shape to an aerodynamic bus design and patented the idea. Koenig-Fachsenfeld worked with Wunibald Kamm at Stuttgart University, investigating vehicle shapes to "provide a good compromise between everyday utility (e.g. vehicle length and interior dimensions) and an attractive drag coefficient". In addition to aerodynamic efficiency, Kamm emphasized vehicle stability in his design, mathematically and empirically proving the effectiveness of the design. In 1938, Kamm produced a prototype using a Kammback shape, based on a BMW 328. The Kammback, along with other aerodynamic modifications, gave the prototype a drag coefficient of 0.25. The earliest mass-produced cars using Kammback principles were the 1949–1951 Nash Airflyte in the United States and the 1952–1955 Borgward Hansa 2400 in Europe. Aerodynamic theory The ideal shape to minimize drag is a "teardrop", a smooth airfoil-like shape, but it is not practical for road vehicles because of size constraints. However, researchers, including Kamm, found that abruptly cutting off the tail resulted in a minimal increase in drag. The reason for this is that a turbulent wake region forms behind the vertical surface at the rear of the car. This wake region mimics the effect of the tapered tail in that air in the free stream does not enter this region (avoiding boundary layer separation); therefore, smooth airflow is maintained, minimizing drag. Kamm's design is based on the tail being truncated at the point where the cross-section area is 50% of the car's maximum cross-section, which Kamm found represented a good compromise, as by that point the turbulence typical of flat-back vehicles had been mostly eliminated at typical speeds. The Kammback presented a partial solution to the problem of aerodynamic lift, which was becoming severe as sports car racing speeds increased during the 1950s.
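To make the drag discussion above concrete, here is a minimal sketch using the standard drag equation F_d = ½ρv²C_dA. The frontal area, cruising speed, and the blunt-tail drag coefficient below are illustrative assumptions; only C_d = 0.25 is a figure from the text (Kamm's 1938 BMW 328 prototype).

```python
# Illustrative sketch: power consumed by aerodynamic drag alone,
# using the standard drag equation F_d = 0.5 * rho * v^2 * C_d * A.
# Frontal area, speed, and the "blunt tail" Cd are assumed example
# values; Cd = 0.25 is the figure quoted for Kamm's 1938 prototype.

RHO_AIR = 1.225  # air density at sea level, kg/m^3

def drag_power_kw(cd: float, frontal_area_m2: float, speed_kmh: float) -> float:
    """Return the power (kW) needed to overcome aerodynamic drag."""
    v = speed_kmh / 3.6                                   # km/h -> m/s
    force = 0.5 * RHO_AIR * v**2 * cd * frontal_area_m2   # drag force, N
    return force * v / 1000.0                             # P = F*v, in kW

area = 1.8   # assumed frontal area, m^2
speed = 120  # assumed cruising speed, km/h

for label, cd in [("blunt-tailed 1930s sedan (assumed)", 0.55),
                  ("Kamm 1938 prototype", 0.25)]:
    print(f"{label}: Cd={cd:.2f} -> {drag_power_kw(cd, area, speed):.1f} kW")
```

Because drag power grows with the cube of speed, the benefit of a lower drag coefficient is most pronounced at highway and racing speeds.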
The design paradigm of sloping the tail to reduce drag was carried to an extreme on cars such as the Cunningham C-5R, resulting in an airfoil effect that lifted the rear of the car at speed, risking instability or loss of control. The Kammback decreased the area of the lifting surface while creating a low-pressure zone underneath the tail. Some studies showed that the addition of a rear spoiler to a Kammback design was not beneficial because the overall drag increased with the angles that were studied. Usage In 1959, the Kammback came into use on full-body racing cars as an anti-lift measure, and within a few years it would be used on virtually all such vehicles. The design had a resurgence in the early 2000s as a method to reduce fuel consumption in hybrid electric vehicles. Several cars have been marketed as Kammbacks despite their profiles not adhering to the aerodynamic philosophy of a true Kammback. These models include the 1971–1977 Chevrolet Vega Kammback wagon, the 1981–1982 AMC Eagle Kammback, the AMC AMX-GT, and the Pontiac Firebird–based "Type K" concept cars. Some models that are marketed as "coupes", such as BMW and Mercedes-Benz SUVs like the X6 and GLC Coupé, "use a sort-of Kammback shape, though their tail ends have a few more lumps and bumps than a proper Kammback ought to have." Cars that have had a Kammback include: 1940 BMW 328 "Mille Miglia" Kamm coupé 1952 Cunningham C-4RK 1958–1963 Lotus Elite 1961 Ferrari 250 GT SWB Breadvan 1962–1964 Ferrari 250 GTO 1963 Aston Martin DP215 1963–1964 Porsche 904 Carrera GTS 1963–1967 Alfa Romeo Giulia TZ 1963–1974 Bizzarrini Iso Grifo 1964–1965 Shelby Daytona 1964–1968 Ferrari 275 GTB 1965–1968 Ford GT40 1965–1970 Aston Martin DB6 1965–1996, 2005–present Mini Marcos 1966 Porsche 906 1966–1970 Unipower GT 1966–1974 Saab Sonett II and III 1967–1977 Alfa Romeo Tipo 33 1968–1973 Ferrari 365 GTB/4 ("Daytona") 1968–1976 Ferrari Dino 1968–1978 Lamborghini Espada 1969–1971 Fiat 850 Coupe and Sport Coupe 1970–1975 Citroën SM 1970–1977 Alfa Romeo Montreal 1970–1986 Citroën GS 1970–1978 Datsun 240Z, 260Z, 280Z 1971–1989 Alfa Romeo Alfasud 1971–1973 Ford Mustang Fastback 1972–1982 Maserati Khamsin 1972–1984 Alfa Romeo Alfetta 1974–1991 Citroën CX 1983–1991 Honda CR-X 1985–1995 Autobianchi Y10 / Lancia Y10 1986–2016 Daewoo LeMans 1991–1998 Mazda MX-3 1994–1998 Mazda Familia Neo/323C 1999–2005 Audi A2 2000–2006 Honda Insight 2004–present Toyota Prius 2007–2015 Renault Laguna III 2010–2014 Honda Insight (2nd generation) 2010–2016 Honda CR-Z 2010–present Audi A7 2012–2022 Hyundai Veloster 2017–2022 Hyundai Ioniq 2018–2023 Kia Stinger 2020–present Tesla Model Y 2020–present Ford Mustang Mach-E 2024–present Li Mega 2024–present Aston Martin Valour See also Fastback, a similar automotive styling feature Liftback, a type of tailgate that cars with a Kammback often use References Automotive styling features Aerodynamics
Kammback
[ "Chemistry", "Engineering" ]
1,467
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
748,047
https://en.wikipedia.org/wiki/Radiant%20barrier
A radiant barrier is a type of building material that reflects thermal radiation and reduces heat transfer. Because thermal energy is also transferred by conduction and convection, in addition to radiation, radiant barriers are often supplemented with thermal insulation that slows down heat transfer by conduction or convection. A radiant barrier reflects heat radiation (radiant heat), preventing transfer from one side of the barrier to another due to a reflective, low emittance surface. In building applications, this surface is typically a very thin, mirror-like aluminum foil. The foil may be coated for resistance to the elements or for abrasion resistance. The radiant barrier may be one or two sided. One sided radiant barrier may be attached to insulating materials, such as polyisocyanurate, rigid foam, bubble insulation, or oriented strand board (OSB). Reflective tape can be adhered to strips of radiant barrier to make it a contiguous vapor barrier or, alternatively, radiant barrier can be perforated for vapor transmittance. Reflectivity and emissivity All materials in existence give off, or emit, energy by thermal radiation as a result of their temperature. The amount of energy radiated depends on the surface temperature and a property called emissivity (also called "emittance"). Emissivity is expressed as a number between zero and one at a given wavelength. The higher the emissivity, the greater the emitted radiation at that wavelength. A related material property is reflectivity (also called "reflectance"). This is a measure of how much energy is reflected by a material at a given wavelength. Reflectivity is also expressed as a number between 0 and 1 (or a percentage between 0 and 100). At a given wavelength and angle of incidence the emissivity and reflectivity values sum to 1 by Kirchhoff's law. Radiant barrier materials must have low emissivity (usually 0.1 or less) at the wavelengths at which they are expected to function. For typical building materials, the wavelengths are in the mid- and long-infrared spectrum, in the range of 3-15 micrometres. Radiant barriers may or may not exhibit high visual reflectivity. While reflectivity and emissivity must sum to 1 at a given wavelength, reflectivity at one set of wavelengths (visible) and emissivity at a different set of wavelengths (thermal) do not necessarily sum to 1. Therefore, it is possible to create visibly dark colored surfaces with low thermal emissivity. To perform properly, radiant barriers need to face open space (e.g., air or vacuum) through which there would otherwise be radiation. History In 1860, the French scientist Jean Claude Eugene Peclet experimented with the insulating effect of high and low emissive metals facing air spaces. Peclet experimented with a wide variety of metals ranging from tin to cast iron, and came to the conclusion that neither the color nor the visual reflectance were significant determining factors in the materials’ performance. Peclet calculated the reduction in BTUs for high and low emissive surfaces facing into various air spaces, discovering the benefits of a radiant barrier in reducing the transfer of heat. In 1925, two German businessmen Schmidt and Dykerhoff filed for patents on reflective surfaces for use as building insulation because recent improvements in technology allowed low emissivity aluminum foil to be commercially viable. 
This became the launching pad for radiant barrier and reflective insulation around the world, and within the next 15 years, millions of square feet of radiant barrier were installed in the US alone. Within 30 years, radiant barrier was making a name for itself, and was included in projects at MIT, Princeton, and Frank Sinatra's residence in Palm Springs, California. Applications Space exploration For the Apollo program, NASA helped develop a thin aluminum foil that reflected 95% of the radiant heat. A metalized film was used to protect spacecraft, equipment, and astronauts from thermal radiation or to retain heat in the extreme temperature fluctuations of space. The aluminum was vacuum-coated to a thin film and applied to the base of the Apollo landing vehicles. It was also used in numerous other NASA projects like the James Webb Space Telescope and Skylab. In the vacuum of outer space, heat transfer occurs only by radiation, so a radiant barrier is much more effective there than it is on Earth, where 5% to 45% of the heat transfer can still occur via convection and conduction, even when an effective radiant barrier is deployed. Radiant barrier is a Space Foundation Certified Space Technology(TM). Radiant barrier was inducted into the Space Technology Hall of Fame in 1996. Textiles Since the 1970s, sheets of metalized polyester called space blankets have been commercially available as a means to prevent hypothermia and other cold weather injuries. Because of their durability and light weight, these blankets are popular for survival and first aid applications. Crowds of runners draped in reflective metalized film are a common sight after a marathon, especially where the temperatures are particularly cold, as during the annual New York City Marathon, which takes place in the fall. Window treatments Window glass can be coated to achieve low emissivity or "low-e". Some windows use laminate polyester film where at least one layer has been metalized using a process called sputtering. Sputtering occurs when a metal, most often aluminum, is vaporized and the polyester film is passed through it. This process can be adjusted to control the amount of metal that ultimately coats the surface of the film. These metalized films are applied to one or more surfaces of the glass to resist the transfer of radiant heat, yet the films are so thin that they allow visible light to pass through. Since the thin coatings are fragile and can be damaged when exposed to air and moisture, manufacturers typically use multiple pane windows. While films are typically applied to the glass during manufacturing, some films may be available for homeowners to apply themselves. Homeowner-applied window films are typically expected to last 10–15 years. Construction Roofs and attics When radiant solar energy strikes a roof, heating the roofing material (shingles, tiles or roofing sheets) and roof sheathing by conduction, it causes the underside of the roof surface and the roof framing to radiate heat downward through the roof space (attic / ceiling cavity) toward the attic floor / upper ceiling surface. When a radiant barrier is placed between the roofing material and the insulation on the attic floor, much of the heat radiated from the hot roof is reflected back toward the roof and the low emissivity of the underside of the radiant barrier means that very little radiant heat is emitted downwards.
This makes the top surface of the insulation cooler than it would have been without a radiant barrier and thus reduces the amount of heat that moves through the insulation into the rooms below. This is different from the "cool roof" strategy, which reflects solar energy before it heats the roof, but both are a means of reducing radiant heat. According to a study by the Florida Solar Energy Center, a white tile or white metal cool roof can outperform a traditional black shingle roof with a radiant barrier in the attic, but the black shingle roof with a radiant barrier outperformed the red tile cool roof. For installing a radiant barrier under a metal or tile roof, the radiant barrier (shiny side down) should not be applied directly over the roof sheathing, because a high contact area reduces the efficacy of the metallic surface as a low emitter. Vertical battens (also known as furring strips) may be applied atop the sheathing; then OSB with a radiant barrier may be put atop the battens. The battens allow more air space than construction without battens. If an air space is not present or is too small, heat will conduct from the radiant barrier into the substructure, resulting in unwanted infrared radiation onto lower regions. Wood is a poor insulator, so it conducts heat from the radiant barrier to its lower surfaces, which in turn shed heat by emitting infrared radiation. According to the US Department of Energy, "Reflective insulation and radiant barrier products must have an air space adjacent to the reflective material to be effective." The most common application for a radiant barrier is as a facing for attics. For a traditional shingle/tile/iron roof, radiant barriers may be applied beneath the rafters or trusses and under the roof decking. This application method has the radiant barrier sheets draped beneath the trusses or rafters, creating a small air space above, with the radiant barrier facing into the entire interior attic space below. Reflective foil laminate is a product commonly used as the radiant barrier sheet. Another method of applying a radiant barrier to a roof in new construction is to use a radiant barrier that is pre-laminated to OSB panels or roof sheathing. Manufacturers of this installation method often tout the savings in labor costs in using a product that serves as roof decking and radiant barrier in one. To apply a radiant barrier in an existing attic, it may be stapled to the underside of the roof rafters. This method offers the same benefits as the draped method in that dual air spaces are provided. However, it is essential that the vents be allowed to remain open to prevent moisture from being trapped in the attic. In general, it is preferred to have the radiant barrier applied shiny side down to the underside of the roof with an air space facing down; this way dust will not defeat it, as it would a shiny-side-up barrier. The final method of installing a radiant barrier in an attic is to lay it over the top of the insulation on the attic floor. While this method can be more effective in the winter, there are a few potential concerns with this application, which the US Department of Energy and the Reflective Insulation Manufacturers Association International have addressed. First, a breathable radiant barrier should always be used here. This is usually achieved by small perforations in the radiant barrier foil.
The vapor transmission rate of the radiant barrier should be at least 5 perms, as measured with ASTM E96, and the moisture in the insulation should be checked before installation. Second, the product should meet the required flame spread, which includes ASTM E84 with the ASTM E2599 method. Lastly, this method allows dust to accumulate on the top surface of the radiant barrier, potentially reducing its efficiency over time. Energy savings According to a 2010 study by the Building Envelope Research Program of the Oak Ridge National Laboratory, homes with air-conditioning duct work in the attic in the hottest climate zones, such as in the US Deep South, could benefit the most from radiant barrier interventions, with annual utility bill savings up to $150, whereas homes in milder climates, e.g., Baltimore, could see savings about half those of their southern neighbors. On the other hand, if there are no ducts or air handlers in the attic, the annual savings could be much smaller, from about $12 in Miami to $5 in Baltimore. Nevertheless, a radiant barrier may still help to improve comfort and to reduce the peak air-conditioning load. Shingle temperature One common misconception regarding radiant barriers is that the heat reflecting off the radiant barrier back to the roof has the potential to increase the roof temperature and possibly damage the shingles. Performance testing by the Florida Solar Energy Center demonstrated that the increase in temperature at the hottest part of the day was no more than about 5 degrees F. In fact, this study showed that a radiant barrier has the potential to decrease the roof temperature once the sun goes down, because it reduces heat loss, or transfer, from the attic through the roof. RIMA International wrote a technical paper on the subject which included statements collected from large roofing manufacturers, and none said that a radiant barrier would in any way affect the warranty of the shingles. Attic dust accumulation When laying a radiant barrier over the insulation on the attic floor, it is possible for dust to accumulate on the top side. Many factors, like dust particle size, dust composition and the amount of ventilation in the attic, affect how dust accumulates and thus the ultimate performance of a radiant barrier in an attic. A study by the Tennessee Valley Authority mechanically applied a small amount of dust over a radiant barrier and found no significant effect when testing for performance. However, the TVA referenced a previous study which stated that it was possible for a radiant barrier to collect so much dust that its reflectivity could be decreased by nearly half. A double-sided radiant barrier on the attic floor is not immune to the dust concern. The TVA study also tested a double-sided radiant barrier with black plastic draped on top to simulate heavy dust accumulation, as well as a single-sided radiant barrier with heavy kraft paper on the top. The test indicated that the radiant barrier was no longer performing, and the small air spaces created between the peaks of the insulation were not sufficient to block radiant heat. Walls Radiant barrier may be used as a vented skin around the exterior of a wall. Furring strips are applied to the sheathing to create a vented air space between the radiant barrier and the siding, and vents are used at the top and bottom to allow convective heat to rise naturally to the attic.
If brick is being used on the exterior, then a vented air space may already be present, and furring strips are not necessary. Wrapping a house with radiant barrier can result in a 10% to 20% reduction in the tonnage requirement of the air-conditioning system, and save both energy and construction costs. Floors Reflective foil, bubble foil insulations, and radiant barriers are noted for their ability to reflect unwanted solar radiation in hot climates, when applied properly. Reflective foils are fabricated from aluminum foils with a variety of backings such as roofing paper, kraft paper, plastic film, polyethylene bubbles, or cardboard. Reflective bubble foil is basically a plastic bubble wrap sheet with a reflective foil layer and belongs to a class of insulation products known as radiant foils. Reflective bubble/foil insulations are primarily radiant barriers, and reflective insulation systems work by reducing radiant heat gain. To be effective, the reflective surface must face an air space; also, dust accumulation on the reflective surface will reduce its reflective capability. The radiant barrier should be installed in a manner that minimizes dust accumulation on the reflective surface. Radiant barriers are more effective in hot climates than in cooler/cold climates (especially when cooling air ducts are located in the attic). When the sun heats a roof, it is primarily the sun's radiant energy that makes the roof hot. Much of this heat travels by conduction through the roofing materials to the attic side of the roof. The hot roof material then radiates its gained heat energy onto the cooler attic surfaces, including the air ducts and the attic floor. A radiant barrier reduces the radiant heat transfer from the underside of the roof to the other surfaces in the attic. Some studies show that radiant barriers can reduce cooling costs 5% to 10% when used in a warm, sunny climate. The reduced heat gain may even allow for a smaller air conditioning system. In cool climates, however, it is usually more cost-effective to install more thermal insulation than to add a radiant barrier. Both the American Department of Energy (DOE, Energy Efficiency & Renewable Energy Department) and Natural Resources Canada (NRCan) state that these systems are not recommended for cold or very cold climates. Canada Canada has a cold climate, so these products do not perform there as promoted. Though they are often marketed as offering very high insulating values, there is no specific standard for radiant insulation products, so be wary of posted testimonials and manufacturers' thermal performance claims. Research has shown that the insulation value of reflective bubble foil insulations and radiant barriers can vary from RSI 0 (R-0) to RSI 0.62 (R-3.5) per thickness of material. A study conducted by the CMHC (Canada Mortgage & Housing Corporation) on four homes in Paris, Ontario, found that the performance of the bubble foil was similar to an uninsulated floor. It also performed a cost-benefit analysis, and the cost-benefit ratio was $12 to $13 per cubic metre RSI. The effective insulating value depends on the number of adjacent dead air spaces, layers of foil and where they are installed. If the foil is laminated to rigid foam insulation, the total insulating value is obtained by adding the RSI of the foam insulation to the RSI of the dead air space and the foil. If there is no air space or clear bubble layer, the RSI value of the film is zero.
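As a rough numerical illustration of the emissivity effect described in the "Reflectivity and emissivity" section, the sketch below applies the Stefan–Boltzmann law to two large parallel surfaces. The surface temperatures are assumed example values, and the emissivities (about 0.9 for ordinary building materials, about 0.05 for aluminum foil) are typical textbook figures rather than values from this article; the article itself only states that radiant barriers need emissivity of roughly 0.1 or less.

```python
# Rough sketch: net radiant heat flux from a hot roof deck to the cooler
# attic floor, modeled as two large parallel gray surfaces. Temperatures
# and emissivities are assumed illustrative values, not article data.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_flux(e_hot: float, e_cold: float, t_hot_c: float, t_cold_c: float) -> float:
    """Net radiative flux (W/m^2) between two parallel gray surfaces."""
    th, tc = t_hot_c + 273.15, t_cold_c + 273.15
    # Standard parallel-plate gray-body exchange formula.
    return SIGMA * (th**4 - tc**4) / (1.0 / e_hot + 1.0 / e_cold - 1.0)

hot, cold = 65.0, 30.0  # assumed deck and attic-floor temperatures, deg C

print("plain deck (e ~ 0.9 both sides):", round(net_flux(0.90, 0.90, hot, cold)), "W/m^2")
print("foil on deck (e ~ 0.05):        ", round(net_flux(0.05, 0.90, hot, cold)), "W/m^2")
```

With these assumed numbers the low-emissivity surface cuts the radiant flux from roughly 215 W/m² to about 13 W/m², which is the effect the attic installations described above are exploiting.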
See also Aluminized cloth Bivouac sack Cool Roof Emissivity Fire proximity suit Interior radiation control coating Low emissivity R-value Space blanket Thermal insulation Thin-film deposition References External links How a radiant barrier saves energy Radiant Barriers: A Question and Answer Primer Radiant Barrier Fact Sheet Department of Energy This entry incorporates public domain text originally from the Oak Ridge National Laboratory and the U.S. Department of Energy. Roofs Heat transfer Materials Technical fabrics Building insulation materials
Radiant barrier
[ "Physics", "Chemistry", "Technology", "Engineering" ]
3,503
[ "Transport phenomena", "Structural engineering", "Physical phenomena", "Heat transfer", "Structural system", "Materials", "Thermodynamics", "Roofs", "Matter" ]
749,012
https://en.wikipedia.org/wiki/Current%20source
A current source is an electronic circuit that delivers or absorbs an electric current which is independent of the voltage across it. A current source is the dual of a voltage source. The term current sink is sometimes used for sources fed from a negative voltage supply. Figure 1 shows the schematic symbol for an ideal current source driving a resistive load. There are two types. An independent current source (or sink) delivers a constant current. A dependent current source delivers a current which is proportional to some other voltage or current in the circuit. Background (Figure 2 presents the schematic symbols in a table: voltage source and current source; controlled voltage source and controlled current source; battery of cells and single cell.) An ideal current source generates a current that is independent of the voltage changes across it. An ideal current source is a mathematical model, which real devices can approach very closely. If the current through an ideal current source can be specified independently of any other variable in a circuit, it is called an independent current source. Conversely, if the current through an ideal current source is determined by some other voltage or current in a circuit, it is called a dependent or controlled current source. Symbols for these sources are shown in Figure 2. The internal resistance of an ideal current source is infinite. An independent current source with zero current is identical to an ideal open circuit. The voltage across an ideal current source is completely determined by the circuit it is connected to. When connected to a short circuit, there is zero voltage and thus zero power delivered. When connected to a load resistance, the current source manages the voltage in such a way as to keep the current constant; so in an ideal current source the voltage across the source approaches infinity as the load resistance approaches infinity (an open circuit). No physical current source is ideal. For example, no physical current source can operate when applied to an open circuit. There are two characteristics that define a current source in real life. One is its internal resistance and the other is its compliance voltage. The compliance voltage is the maximum voltage that the current source can supply to a load. Over a given load range, it is possible for some types of real current sources to exhibit nearly infinite internal resistance. However, when the current source reaches its compliance voltage, it abruptly stops being a current source. In circuit analysis, a current source having finite internal resistance is modeled by placing a resistance of that value in parallel with an ideal current source (the Norton equivalent circuit). However, this model is only useful when a current source is operating within its compliance voltage. Implementations Passive current source The simplest non-ideal current source consists of a voltage source in series with a resistor. The amount of current available from such a source is given by the ratio of the voltage across the voltage source to the resistance of the resistor (Ohm's law; $I = V/R$). This value of current will only be delivered to a load with zero voltage drop across its terminals (a short circuit, an uncharged capacitor, a charged inductor, a virtual ground circuit, etc.)
The current delivered to a load with nonzero voltage (drop) across its terminals (a linear or nonlinear resistor with a finite resistance, a charged capacitor, an uncharged inductor, a voltage source, etc.) will always be different. It is given by the ratio of the voltage drop across the resistor (the difference between the exciting voltage and the voltage across the load) to its resistance. For a nearly ideal current source, the value of the resistor should be very large, but this implies that, for a specified current, the voltage source must be very large (in the limit as the resistance and the voltage go to infinity, the current source will become ideal and the current will not depend at all on the voltage across the load). Thus, efficiency is low (due to power loss in the resistor) and it is usually impractical to construct a "good" current source this way. Nonetheless, it is often the case that such a circuit will provide adequate performance when the specified current and load resistance are small. For example, a 5 V voltage source in series with a 4.7 kΩ resistor will provide an approximately constant current of about 1 mA (5 V / 4.7 kΩ) to a load resistance in the range of 50 to 450 Ω. A Van de Graaff generator is an example of such a high voltage current source. It behaves as an almost constant current source because of its very high output voltage coupled with its very high output resistance, and so it supplies the same few microamperes at any output voltage up to hundreds of thousands of volts (or even tens of megavolts) for large laboratory versions. Active current sources without negative feedback In these circuits the output current is not monitored and controlled by means of negative feedback. Current-stable nonlinear implementation They are implemented by active electronic components (transistors) having a current-stable nonlinear output characteristic when driven by a steady input quantity (current or voltage). These circuits behave as dynamic resistors, adjusting their effective resistance to compensate for current variations. For example, if the load increases its resistance, the transistor decreases its effective output resistance (and vice versa) to keep up a constant total resistance in the circuit. Active current sources have many important applications in electronic circuits. They are often used in place of ohmic resistors in analog integrated circuits (e.g., a differential amplifier) to generate a current that depends only slightly on the voltage across the load. The common emitter configuration driven by a constant input current or voltage and the common source (common cathode) configuration driven by a constant voltage naturally behave as current sources (or sinks) because the output impedance of these devices is naturally high. The output part of the simple current mirror is an example of such a current source widely used in integrated circuits. The common base, common gate and common grid configurations can serve as constant current sources as well. A JFET can be made to act as a current source by tying its gate to its source. The current then flowing is the $I_{DSS}$ (drain-source saturation current) of the FET. These can be purchased with this connection already made, and in this case the devices are called current regulator diodes, constant-current diodes, or current-limiting diodes (CLDs). Alternatively, an enhancement-mode N-channel MOSFET (metal–oxide–semiconductor field-effect transistor) could be used instead of a JFET in the circuits listed below for similar functionality. Following voltage implementation An example: the bootstrapped current source.
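Returning briefly to the 5 V / 4.7 kΩ passive example above, a minimal numerical sketch (the load values are those quoted in the text) shows just how "approximately constant" the current actually is:

```python
# Sketch of the passive current source described above: a 5 V source in
# series with a 4.7 kohm resistor, driving loads from 50 to 450 ohms.
# The current is I = V / (R_series + R_load), so it stays near 1 mA
# only while R_load remains small compared with R_series.

V_SUPPLY = 5.0     # volts
R_SERIES = 4700.0  # ohms

for r_load in (50, 150, 250, 350, 450):
    i_ma = V_SUPPLY / (R_SERIES + r_load) * 1000.0
    print(f"R_load = {r_load:3d} ohm -> I = {i_ma:.3f} mA")
```

Over the stated load range the current varies only from about 1.05 mA down to 0.97 mA, a spread of roughly ±4%, which is the sense in which the source is "approximately constant".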
Voltage compensation implementation The simple resistor passive current source is ideal only when the voltage across it is zero; so voltage compensation, by applying parallel negative feedback, might be considered to improve the source. Operational amplifiers with feedback effectively work to minimise the voltage across their inputs. This makes the inverting input a virtual ground, with the current running through the feedback path (the load) and the passive current source. The input voltage source, the resistor, and the op-amp constitute an "ideal" current source with value $I = V_{in}/R$. The transimpedance amplifier and the op-amp inverting amplifier are typical implementations of this idea. The floating load is a serious disadvantage of this circuit solution. Current compensation implementation Typical examples are the Howland current source and its derivative, the Deboo integrator. In the latter example (Fig. 1), the Howland current source consists of an input voltage source $V_{in}$, a positive resistor R, a load (the capacitor C, acting as an impedance) and a negative impedance converter INIC (its resistor network and the op-amp). The input voltage source and the resistor R constitute an imperfect current source passing a current through the load (Fig. 3 in the source). The INIC acts as a second current source passing a "helping" current through the load. As a result, the total current flowing through the load is constant and the circuit impedance seen by the input source is increased. However, the Howland current source isn't widely used because it requires the four resistors to be perfectly matched, and its impedance drops at high frequencies. The grounded load is an advantage of this circuit solution. Current sources with negative feedback They are implemented as a voltage follower with series negative feedback driven by a constant input voltage source (i.e., a negative feedback voltage stabilizer). The voltage follower is loaded by a constant (current-sensing) resistor acting as a simple current-to-voltage converter connected in the feedback loop. The external load of this current source is connected somewhere in the path of the current supplying the current-sensing resistor but out of the feedback loop. The voltage follower adjusts its output current flowing through the load so as to make the voltage drop across the current-sensing resistor R equal to the constant input voltage. Thus the voltage stabilizer keeps up a constant voltage drop across a constant resistor; so, a constant current flows through the resistor and, respectively, through the load. If the input voltage varies, this arrangement will act as a voltage-to-current converter (voltage-controlled current source, VCCS); it can be thought of as a reversed (by means of negative feedback) current-to-voltage converter. The resistance R determines the transfer ratio (transconductance). Current sources implemented as circuits with series negative feedback have the disadvantage that the voltage drop across the current-sensing resistor decreases the maximal voltage across the load (the compliance voltage). Simple transistor current sources Constant current diode The simplest constant-current source or sink is formed from one component: a JFET with its gate attached to its source. Once the drain-source voltage reaches a certain minimum value, the JFET enters saturation where current is approximately constant. This configuration is known as a constant-current diode, as it behaves much like a dual to the constant voltage diode (Zener diode) used in simple voltage sources.
Due to the large variability in saturation current of JFETs, it is common to also include a source resistor (shown in the adjacent image) which allows the current to be tuned down to a desired value. Zener diode current source In this bipolar junction transistor (BJT) implementation (Figure 4) of the general idea above, a Zener voltage stabilizer (R1 and DZ1) drives an emitter follower (Q1) loaded by a constant emitter resistor (R2) sensing the load current. The external (floating) load of this current source is connected to the collector so that almost the same current flows through it and the emitter resistor (they can be thought of as connected in series). The transistor, Q1, adjusts the output (collector) current so as to keep the voltage drop across the constant emitter resistor, R2, almost equal to the relatively constant voltage drop across the Zener diode, DZ1. As a result, the output current is almost constant even if the load resistance and/or voltage vary. The operation of the circuit is considered in detail below. A Zener diode, when reverse biased (as shown in the circuit), has a constant voltage drop across it irrespective of the current flowing through it. Thus, as long as the Zener current ($I_Z$) is above a certain level (called the holding current), the voltage across the Zener diode ($V_Z$) will be constant. Resistor R1 supplies the Zener current and the base current ($I_B$) of the NPN transistor (Q1). The constant Zener voltage is applied across the base of Q1 and the emitter resistor, R2. The voltage across R2 is given by $V_{R2} = V_Z - V_{BE}$, where $V_{BE}$ is the base-emitter drop of Q1. The emitter current of Q1, which is also the current through R2, is given by $I_{R2} = \frac{V_{R2}}{R_2} = \frac{V_Z - V_{BE}}{R_2}$. Since $V_Z$ is constant and $V_{BE}$ is also (approximately) constant for a given temperature, it follows that $V_{R2}$ is constant and hence $I_{R2}$ is also constant. Due to transistor action, the emitter current, $I_{R2}$, is very nearly equal to the collector current, $I_C$, of the transistor (which in turn is the current through the load). Thus, the load current is constant (neglecting the output resistance of the transistor due to the Early effect) and the circuit operates as a constant current source. As long as the temperature remains constant (or doesn't vary much), the load current will be independent of the supply voltage, R1 and the transistor's gain. R2 allows the load current to be set at any desirable value and is calculated by $R_2 = \frac{V_Z - V_{BE}}{I_{R2}}$, where $V_{BE}$ is typically 0.65 V for a silicon device. ($I_{R2}$ is also the emitter current and is assumed to be the same as the collector, or required load, current, provided the current gain is sufficiently large.) Resistance $R_1$ is calculated as $R_1 = \frac{V_S - V_Z}{I_Z + K \cdot I_B}$, where $K$ = 1.2 to 2 (so that $R_1$ is low enough to ensure adequate $I_Z$), $I_B = I_C / h_{FE(\min)}$, and $h_{FE(\min)}$ is the lowest acceptable current gain for the particular transistor type being used. LED current source The Zener diode can be replaced by any other diode; e.g., a light-emitting diode LED1 as shown in Figure 5. The LED voltage drop ($V_D$) is now used to derive the constant voltage, and it also has the additional advantage of tracking (compensating) $V_{BE}$ changes due to temperature. $R_2$ is calculated as $R_2 = \frac{V_D - V_{BE}}{I_{R2}}$ and $R_1$ as $R_1 = \frac{V_S - V_D}{I_D}$, where $I_D$ is the LED current. Transistor current source with diode compensation Temperature changes will change the output current delivered by the circuit of Figure 4 because $V_{BE}$ is sensitive to temperature. Temperature dependence can be compensated using the circuit of Figure 6, which includes a standard diode, D, (of the same semiconductor material as the transistor) in series with the Zener diode, as shown in the image on the left.
The diode drop ($V_D$) tracks the $V_{BE}$ changes due to temperature and thus significantly counteracts the temperature dependence of the CCS. Resistance $R_2$ is now calculated as $R_2 = \frac{V_Z + V_D - V_{BE}}{I_{R2}}$. Since $V_D \approx V_{BE}$, $R_2 \approx \frac{V_Z}{I_{R2}}$. (In practice, $V_D$ is never exactly equal to $V_{BE}$ and hence it only suppresses the change in $V_{BE}$ rather than nulling it out.) $R_1$ is calculated as $R_1 = \frac{V_S - V_Z - V_D}{I_Z + K \cdot I_B}$ (the compensating diode's forward voltage drop, $V_D$, appears in the equation and is typically 0.65 V for silicon devices). Note that this only works well if DZ1 is a reference diode or another stable voltage source. With "normal" Zener diodes, especially those with lower Zener voltages (<5 V), the added diode might even worsen the overall temperature dependency. Current mirror with emitter degeneration Series negative feedback is also used in the two-transistor current mirror with emitter degeneration. Negative feedback is a basic feature in some current mirrors using multiple transistors, such as the Widlar current source and the Wilson current source. Constant current source with thermal compensation One limitation with the circuits in Figures 5 and 6 is that the thermal compensation is imperfect. In bipolar transistors, as the junction temperature increases, the $V_{BE}$ drop (voltage drop from base to emitter) decreases. In the two previous circuits, a decrease in $V_{BE}$ will cause an increase in voltage across the emitter resistor, which in turn will cause an increase in collector current drawn through the load. The end result is that the amount of "constant" current supplied is at least somewhat dependent on temperature. This effect is mitigated to a large extent, but not completely, by corresponding voltage drops for the diode, D1, in Figure 6, and the LED, LED1, in Figure 5. If the power dissipation in the active device of the CCS is not small and/or insufficient emitter degeneration is used, this can become a non-trivial issue. Imagine in Figure 5, at power up, that the LED has 1 V across it driving the base of the transistor. At room temperature there is about a 0.6 V drop across the base-emitter junction and hence 0.4 V across the emitter resistor, giving an approximate collector (load) current of $0.4\,\mathrm{V}/R_E$ amps, where $R_E$ is the emitter resistance. Now imagine that the power dissipation in the transistor causes it to heat up. This causes the base-emitter drop (which was 0.6 V at room temperature) to fall to, say, 0.2 V. Now the voltage across the emitter resistor is 0.8 V, twice what it was before the warmup. This means that the collector (load) current is now twice the design value! This is an extreme example of course, but serves to illustrate the issue. The circuit to the left overcomes the thermal problem (see also, current limiting). To see how the circuit works, assume the voltage has just been applied at V+. Current runs through R1 to the base of Q1, turning it on and causing current to begin to flow through the load into the collector of Q1. This same load current then flows out of Q1's emitter and consequently through the sense resistor to ground. When this current through the sense resistor is sufficient to cause a voltage drop that is equal to the $V_{BE}$ drop of Q2, Q2 begins to turn on. As Q2 turns on, it pulls more current through its collector resistor, R1, which diverts some of the injected current from the base of Q1, causing Q1 to conduct less current through the load. This creates a negative feedback loop within the circuit, which keeps the voltage at Q1's emitter almost exactly equal to the $V_{BE}$ drop of Q2.
Since Q2 is dissipating very little power compared to Q1 (since all the load current goes through Q1, not Q2), Q2 will not heat up any significant amount and the reference (current setting) voltage across the sense resistor will remain steady at ≈0.6 V, or one diode drop above ground, regardless of the thermal changes in the $V_{BE}$ drop of Q1. The circuit is still sensitive to changes in the ambient temperature in which the device operates, as the $V_{BE}$ drop in Q2 varies slightly with temperature. Op-amp current sources The simple transistor current source from Figure 4 can be improved by inserting the base-emitter junction of the transistor in the feedback loop of an op-amp (Figure 7). Now the op-amp increases its output voltage to compensate for the $V_{BE}$ drop. The circuit is actually a buffered non-inverting amplifier driven by a constant input voltage. It keeps up this constant voltage across the constant sense resistor. As a result, the current flowing through the load is constant as well; it is exactly the Zener voltage divided by the sense resistance. The load can be connected either in the emitter (Figure 7) or in the collector (Figure 4), but in both cases it is floating, as in all the circuits above. The transistor is not needed if the required current doesn't exceed the sourcing ability of the op-amp. The article on the current mirror discusses another example of these so-called gain-boosted current mirrors. Voltage regulator current sources The general negative feedback arrangement can be implemented by an IC voltage regulator (the LM317 voltage regulator in Figure 8). As with the bare emitter follower and the precise op-amp follower above, it keeps up a constant voltage drop (1.25 V) across a constant resistor (1.25 Ω); so, a constant current (1 A) flows through the resistor and the load. The LED is on when the voltage across the load exceeds 1.8 V (the indicator circuit introduces some error). The grounded load is an important advantage of this solution. Curpistor tubes Nitrogen-filled glass tubes with two electrodes and a calibrated Becquerel (decays per second) amount of 226Ra offer a constant number of charge carriers per second for conduction, which determines the maximum current the tube can pass over a voltage range from 25 to 500 V. Current and voltage source comparison Most sources of electrical energy (mains electricity, a battery, etc.) are best modeled as voltage sources, however some (notably solar cells) are better modeled using current sources. Sometimes it is easier to view a current source as a voltage source and vice versa (see the conversion in Figure 9) using Norton's and Thévenin's theorems. Voltage sources provide an almost-constant output voltage as long as the current drawn from the source is within the source's capabilities. An ideal voltage source loaded by an open circuit (i.e., an infinite impedance) will provide no current (and hence no power). But when the load resistance approaches zero (a short circuit), the current (and thus power) approach infinity. Such a theoretical device has a zero ohm output impedance in series with the source. Real-world voltage sources instead have a non-zero output impedance, which is preferably very low (often much less than 1 ohm). Conversely, a current source provides a constant current, as long as the impedance of the load is sufficiently lower than the current source's parallel impedance (which is preferably very high and ideally infinite). In the case of transistor current sources, impedances of a few megohms (at low frequencies) are typical.
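The Norton–Thévenin conversion mentioned above (Figure 9) can be checked with a short sketch. The component values are arbitrary assumptions chosen only to show that the two models deliver identical load currents:

```python
# Sketch of Thevenin <-> Norton source equivalence for a resistive load.
# A Thevenin source (Vth in series with Rs) and its Norton equivalent
# (In = Vth/Rs in parallel with the same Rs) deliver identical load
# currents for every load resistance. Component values are arbitrary.

V_TH = 10.0    # Thevenin voltage, volts (assumed)
R_S = 1000.0   # internal resistance, ohms (assumed)
I_N = V_TH / R_S  # Norton current, amperes

for r_load in (100.0, 1000.0, 10000.0):
    i_thevenin = V_TH / (R_S + r_load)
    # Norton model: current divider between Rs and the load.
    i_norton = I_N * R_S / (R_S + r_load)
    print(f"R_load = {r_load:7.0f} ohm  "
          f"Thevenin: {i_thevenin * 1e3:.4f} mA  "
          f"Norton: {i_norton * 1e3:.4f} mA")
```

The two columns agree for every load, which is the content of the equivalence: either model can stand in for the other in circuit analysis.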
Because power is current squared times resistance, as a load resistance connected to a current source approaches zero (a short circuit), the voltage across the load, and thus the power delivered, both approach zero while the current remains constant. Ideal current sources don't exist. Hypothetically connecting one to an ideal open circuit would create the paradox of running a constant, non-zero current (from the current source) through an element with a defined zero current (the open circuit). As the load resistance of an ideal current source approaches infinity (an open circuit), the voltage across the load would approach infinity (because voltage equals current times resistance), and hence the power drawn would also approach infinity. The current of a real current source connected to an open circuit instead flows through the current source's internal parallel impedance (and is wasted as heat). Similarly, ideal voltage sources don't exist. Hypothetically connecting one to an ideal short circuit would result in a similar paradox of finite non-zero voltage across an element with a defined zero voltage (the short circuit). Just as voltage sources with different voltages should not be connected in parallel, current sources with different currents should not be connected in series. Note that some circuits use elements that are similar but not identical to voltage or current sources and may work when connected in these manners that are disallowed for actual current or voltage sources. Also, just as voltage sources may be connected in series to add their voltages, current sources may be connected in parallel to add their currents. Charging a capacitor Because the charge on a capacitor is equal to the integral of current with respect to time, an ideal constant current source charges a capacitor linearly with time, regardless of any series resistance. The Wilkinson analog-to-digital converter, for instance, uses this linear behavior to measure an unknown voltage by measuring the amount of time it takes a current source to charge a capacitor to that voltage. A voltage source instead charges a capacitor through a resistor non-linearly with time, because the charging current from the voltage source decreases exponentially with time. See also Constant current Current limiting Current loop Current mirror Current sources and sinks Fontana bridge, a compensated current source Iron-hydrogen resistor Saturable reactor Voltage-to-current converter Welding power supply, a device used for arc welding, many of which are designed as constant current devices. Widlar current source References Further reading "Current Sources & Voltage References" Linden T. Harrison; Publ. Elsevier-Newnes 2005; 608 pages; External links Current Sources and Current Mirrors FET Constant-Current Source/Limiter - Vishay JFET Current Source and pSpice Simulation Using Current Sources / Sinks / Mirrors In Audio Differential Amplifiers and Current Sources Analog circuits Electric current Electrical power control
Current source
[ "Physics", "Engineering" ]
5,039
[ "Physical quantities", "Analog circuits", "Electronic engineering", "Electric current", "Wikipedia categories named after physical quantities" ]
750,326
https://en.wikipedia.org/wiki/Compact%20group
In mathematics, a compact (topological) group is a topological group whose topology realizes it as a compact topological space (that is, a space in which every open cover admits a finite subcover). Compact groups are a natural generalization of finite groups with the discrete topology and have properties that carry over in significant fashion. Compact groups have a well-understood theory, in relation to group actions and representation theory. In the following we will assume all groups are Hausdorff spaces. Compact Lie groups Lie groups form a class of topological groups, and the compact Lie groups have a particularly well-developed theory. Basic examples of compact Lie groups include the circle group T and the torus groups Tn, the orthogonal group O(n), the special orthogonal group SO(n) and its covering spin group Spin(n), the unitary group U(n) and the special unitary group SU(n), the compact forms of the exceptional Lie groups: G2, F4, E6, E7, and E8. The classification theorem of compact Lie groups states that up to finite extensions and finite covers this exhausts the list of examples (which already includes some redundancies). This classification is described in more detail in the next subsection. Classification Given any compact Lie group G one can take its identity component G0, which is connected. The quotient group G/G0 is the group of components π0(G), which must be finite since G is compact. We therefore have a finite extension $1 \to G_0 \to G \to \pi_0(G) \to 1.$ Meanwhile, for connected compact Lie groups, we have the following result: Theorem: Every connected compact Lie group is the quotient by a finite central subgroup of a product of a simply connected compact Lie group and a torus. Thus, the classification of connected compact Lie groups can in principle be reduced to knowledge of the simply connected compact Lie groups together with information about their centers. (For information about the center, see the section below on fundamental group and center.) Finally, every compact, connected, simply-connected Lie group K is a product of finitely many compact, connected, simply-connected simple Lie groups Ki, each of which is isomorphic to exactly one of the following: the compact symplectic group $\mathrm{Sp}(n)$, $n \geq 1$; the special unitary group $\mathrm{SU}(n)$, $n \geq 3$; the spin group $\mathrm{Spin}(n)$, $n \geq 7$; or one of the five exceptional groups G2, F4, E6, E7, and E8. The restrictions on n are to avoid special isomorphisms among the various families for small values of n. For each of these groups, the center is known explicitly. The classification is through the associated root system (for a fixed maximal torus), which in turn is classified by its Dynkin diagram. The classification of compact, simply connected Lie groups is the same as the classification of complex semisimple Lie algebras. Indeed, if K is a simply connected compact Lie group, then the complexification of the Lie algebra of K is semisimple. Conversely, every complex semisimple Lie algebra has a compact real form isomorphic to the Lie algebra of a compact, simply connected Lie group. Maximal tori and root systems A key idea in the study of a connected compact Lie group K is the concept of a maximal torus, that is, a subgroup T of K that is isomorphic to a product of several copies of the circle group $S^1$ and that is not contained in any larger subgroup of this type. A basic example is the case $K = \mathrm{SU}(n)$, in which case we may take $T$ to be the group of diagonal elements in $K$. A basic result is the torus theorem, which states that every element of $K$ belongs to a maximal torus and that all maximal tori are conjugate.
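As a concrete illustration of the diagonal example just mentioned (standard material, spelled out here for reference), the maximal torus of $\mathrm{SU}(n)$ can be written explicitly:

```latex
% The diagonal maximal torus of SU(n); the determinant-one condition
% removes one circle factor, leaving an (n-1)-dimensional torus.
T = \left\{ \operatorname{diag}\!\left(e^{i\theta_1}, \ldots, e^{i\theta_n}\right)
      : \theta_1 + \cdots + \theta_n \equiv 0 \pmod{2\pi} \right\}
  \;\cong\; (S^1)^{\,n-1}.
```

For $n = 2$ this is the one-parameter family $\operatorname{diag}(e^{i\theta}, e^{-i\theta})$ that reappears in the SU(2) example later in the article.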
The maximal torus in a compact group plays a role analogous to that of the Cartan subalgebra in a complex semisimple Lie algebra. In particular, once a maximal torus has been chosen, one can define a root system and a Weyl group similar to what one has for semisimple Lie algebras. These structures then play an essential role both in the classification of connected compact groups (described above) and in the representation theory of a fixed such group (described below). The root systems associated to the simple compact groups appearing in the classification of simply connected compact groups are as follows: The special unitary groups $\mathrm{SU}(n)$ correspond to the root system $A_{n-1}$ The odd spin groups $\mathrm{Spin}(2n+1)$ correspond to the root system $B_n$ The compact symplectic groups $\mathrm{Sp}(n)$ correspond to the root system $C_n$ The even spin groups $\mathrm{Spin}(2n)$ correspond to the root system $D_n$ The exceptional compact Lie groups correspond to the five exceptional root systems G2, F4, E6, E7, or E8 Fundamental group and center It is important to know whether a connected compact Lie group is simply connected, and if not, to determine its fundamental group. For compact Lie groups, there are two basic approaches to computing the fundamental group. The first approach applies to the classical compact groups $\mathrm{SU}(n)$, $\mathrm{U}(n)$, $\mathrm{Sp}(n)$, and $\mathrm{SO}(n)$ and proceeds by induction on $n$. The second approach uses the root system and applies to all connected compact Lie groups. It is also important to know the center of a connected compact Lie group. The center of a classical group $G$ can easily be computed "by hand," and in most cases consists simply of whatever roots of the identity are in $G$. (The group SO(2) is an exception: the center is the whole group, even though most elements are not roots of the identity.) Thus, for example, the center of $\mathrm{SU}(n)$ consists of the nth roots of unity times the identity, a cyclic group of order $n$. In general, the center can be expressed in terms of the root lattice and the kernel of the exponential map for the maximal torus. The general method shows, for example, that the simply connected compact group corresponding to the exceptional root system $G_2$ has trivial center. Thus, the compact $G_2$ group is one of very few simple compact groups that are simultaneously simply connected and center free. (The others are $F_4$ and $E_8$.) Further examples Amongst groups that are not Lie groups, and so do not carry the structure of a manifold, examples are the additive group Zp of p-adic integers, and constructions from it. In fact any profinite group is a compact group. This means that Galois groups are compact groups, a basic fact for the theory of algebraic extensions in the case of infinite degree. Pontryagin duality provides a large supply of examples of compact commutative groups. These are in duality with abelian discrete groups. Haar measure Compact groups all carry a Haar measure, which will be invariant by both left and right translation (the modulus function must be a continuous homomorphism to the positive reals (R+, ×), and so equal to 1). In other words, these groups are unimodular. Haar measure is easily normalized to be a probability measure, analogous to dθ/2π on the circle. Such a Haar measure is in many cases easy to compute; for example for orthogonal groups it was known to Adolf Hurwitz, and in the Lie group cases it can always be given by an invariant differential form. In the profinite case there are many subgroups of finite index, and the Haar measure of a coset will be the reciprocal of the index. Therefore, integrals are often computable quite directly, a fact applied constantly in number theory.
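Two small worked instances of the statements above (standard facts, included for illustration): translation invariance of the normalized measure on the circle, and the coset measure in a profinite group.

```latex
% Haar measure d\theta/2\pi on the circle group: invariance under
% the group translation z \mapsto e^{i\alpha} z.
\int_0^{2\pi} f\!\left(e^{i(\theta + \alpha)}\right) \frac{d\theta}{2\pi}
  = \int_0^{2\pi} f\!\left(e^{i\theta}\right) \frac{d\theta}{2\pi}
  \qquad \text{for all } \alpha \in \mathbb{R}.

% Profinite example: in G = \mathbb{Z}_p the open subgroup
% p^k \mathbb{Z}_p has index p^k, so each of its cosets has measure
\mu\!\left(a + p^k \mathbb{Z}_p\right) = p^{-k}.
```

The second line is exactly the "reciprocal of the index" rule stated above, and it is what makes p-adic integrals directly computable in number theory.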
If $G$ is a compact group and $\mu$ is the associated Haar measure, the Peter–Weyl theorem provides a decomposition of $L^2(G, \mu)$ as an orthogonal direct sum of finite-dimensional subspaces of matrix entries for the irreducible representations of $G$. Representation theory The representation theory of compact groups (not necessarily Lie groups and not necessarily connected) was founded by the Peter–Weyl theorem. Hermann Weyl went on to give the detailed character theory of the compact connected Lie groups, based on maximal torus theory. The resulting Weyl character formula was one of the influential results of twentieth century mathematics. The combination of the Peter–Weyl theorem and the Weyl character formula led Weyl to a complete classification of the representations of a connected compact Lie group; this theory is described in the next section. A combination of Weyl's work and Cartan's theorem gives a survey of the whole representation theory of compact groups G. That is, by the Peter–Weyl theorem the irreducible unitary representations ρ of G are into a unitary group (of finite dimension) and the image will be a closed subgroup of the unitary group by compactness. Cartan's theorem states that Im(ρ) must itself be a Lie subgroup in the unitary group. If G is not itself a Lie group, there must be a kernel to ρ. Further one can form an inverse system, for the kernel of ρ smaller and smaller, of finite-dimensional unitary representations, which identifies G as an inverse limit of compact Lie groups. Here the fact that in the limit a faithful representation of G is found is another consequence of the Peter–Weyl theorem. The unknown part of the representation theory of compact groups is thereby, roughly speaking, thrown back onto the complex representations of finite groups. This theory is rather rich in detail, but is qualitatively well understood. Representation theory of a connected compact Lie group Certain simple examples of the representation theory of compact Lie groups can be worked out by hand, such as the representations of the rotation group SO(3), the special unitary group SU(2), and the special unitary group SU(3). We focus here on the general theory. See also the parallel theory of representations of a semisimple Lie algebra. Throughout this section, we fix a connected compact Lie group K and a maximal torus T in K. Representation theory of T Since T is commutative, Schur's lemma tells us that each irreducible representation $\rho$ of T is one-dimensional: $\rho : T \to \mathrm{GL}(1; \mathbb{C}) = \mathbb{C}^{*}.$ Since, also, T is compact, $\rho$ must actually map into the unit circle $S^1 \subset \mathbb{C}^{*}$. To describe these representations concretely, we let $\mathfrak{t}$ be the Lie algebra of T and we write points $h \in T$ as $h = e^{2\pi H}, \quad H \in \mathfrak{t}.$ In such coordinates, $\rho$ will have the form $\rho\!\left(e^{2\pi H}\right) = e^{2\pi i \lambda(H)}$ for some linear functional $\lambda$ on $\mathfrak{t}$. Now, since the exponential map is not injective, not every such linear functional $\lambda$ gives rise to a well-defined map of T into $S^1$. Rather, let $\Gamma$ denote the kernel of the exponential map: $\Gamma = \left\{ H \in \mathfrak{t} : e^{2\pi H} = e \right\},$ where $e$ is the identity element of T. (We scale the exponential map here by a factor of $2\pi$ in order to avoid such factors elsewhere.) Then for $\lambda$ to give a well-defined map $\rho$, $\lambda$ must satisfy $\lambda(H) \in \mathbb{Z}, \quad H \in \Gamma,$ where $\mathbb{Z}$ is the set of integers. A linear functional $\lambda$ satisfying this condition is called an analytically integral element. This integrality condition is related to, but not identical to, the notion of integral element in the setting of semisimple Lie algebras. Suppose, for example, T is just the group of complex numbers of absolute value 1.
The Lie algebra is the set of purely imaginary numbers X = iy, and the kernel of the (scaled) exponential map is the set of numbers of the form in, where n is an integer. A linear functional λ takes integer values on all such numbers if and only if it is of the form λ(iy) = ky for some integer k. The irreducible representations of T in this case are one-dimensional and of the form ρ(e^{iθ}) = e^{ikθ} with k an integer. Representation theory of K We now let Σ denote a finite-dimensional irreducible representation of K (over C). We then consider the restriction of Σ to T. This restriction is not irreducible unless Σ is one-dimensional. Nevertheless, the restriction decomposes as a direct sum of irreducible representations of T. (Note that a given irreducible representation of T may occur more than once.) Now, each irreducible representation of T is described by a linear functional λ as in the preceding subsection. If a given λ occurs at least once in the decomposition of the restriction of Σ to T, we call λ a weight of Σ. The strategy of the representation theory of K is to classify the irreducible representations in terms of their weights. We now briefly describe the structures needed to formulate the theorem; more details can be found in the article on weights in representation theory. We need the notion of a root system for K (relative to a given maximal torus T). The construction of this root system is very similar to the construction for complex semisimple Lie algebras. Specifically, the roots are the nonzero weights for the adjoint action of T on the complexified Lie algebra of K. The root system R has all the usual properties of a root system, except that the elements of R may not span the dual space of 𝔱. We then choose a base Δ for R and we say that an integral element λ is dominant if ⟨λ, α⟩ ≥ 0 for all α in Δ. Finally, we say that one weight λ1 is higher than another weight λ2 if their difference λ1 − λ2 can be expressed as a linear combination of elements of Δ with non-negative coefficients. The irreducible finite-dimensional representations of K are then classified by a theorem of the highest weight, which is closely related to the analogous theorem classifying representations of a semisimple Lie algebra. The result says that: every irreducible representation has a highest weight, the highest weight is always a dominant, analytically integral element, two irreducible representations with the same highest weight are isomorphic, and every dominant, analytically integral element arises as the highest weight of an irreducible representation. The theorem of the highest weight for representations of K is then almost the same as for semisimple Lie algebras, with one notable exception: the concept of an integral element is different. The weights of a representation are analytically integral in the sense described in the previous subsection. Every analytically integral element is integral in the Lie algebra sense, but not the other way around. (This phenomenon reflects that, in general, not every representation of the Lie algebra comes from a representation of the group K.) On the other hand, if K is simply connected, the set of possible highest weights in the group sense is the same as the set of possible highest weights in the Lie algebra sense. The Weyl character formula If Σ is a representation of K, we define the character of Σ to be the function χ : K → C given by χ(x) = trace(Σ(x)). This function is easily seen to be a class function, i.e., χ(xyx^{-1}) = χ(y) for all x and y in K. Thus, χ is determined by its restriction to T. The study of characters is an important part of the representation theory of compact groups. 
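Because the irreducible representations of T are one-dimensional, each coincides with its own character. A quick numerical check of the circle-group example above, assuming only the formulas just given: the functions e^{ikθ} with integer k are orthonormal with respect to dθ/2π, while a non-integer exponent fails the integrality condition because it is not single-valued on the circle.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)

def pair(k, l):
    """L^2 inner product of e^{ik theta} and e^{il theta} with respect to
    d(theta)/2pi, approximated by an equally spaced Riemann sum."""
    return (np.exp(1j * k * theta) * np.conj(np.exp(1j * l * theta))).mean()

print(abs(pair(3, 3)))  # 1.0: each character has norm one
print(abs(pair(3, 5)))  # ~0.0: distinct characters are orthogonal

# A non-integer exponent violates the integrality condition: it is not
# single-valued on the circle, since theta and theta + 2*pi disagree.
k = 0.5
print(np.exp(1j * k * 0.0), np.exp(1j * k * 2 * np.pi))  # 1 versus -1
```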
One crucial result, which is a corollary of the Peter–Weyl theorem, is that the characters form an orthonormal basis for the set of square-integrable class functions on K. A second key result is the Weyl character formula, which gives an explicit formula for the character (or, rather, the restriction of the character to T) in terms of the highest weight of the representation. In the closely related representation theory of semisimple Lie algebras, the Weyl character formula is an additional result established after the representations have been classified. In Weyl's analysis of the compact group case, however, the Weyl character formula is actually a crucial part of the classification itself. Specifically, in Weyl's analysis of the representations of K, the hardest part of the theorem, showing that every dominant, analytically integral element is actually the highest weight of some representation, is proved in a totally different way from the usual Lie algebra construction using Verma modules. In Weyl's approach, the construction is based on the Peter–Weyl theorem and an analytic proof of the Weyl character formula. Ultimately, the irreducible representations of K are realized inside the space of continuous functions on K. The SU(2) case We now consider the case of the compact group SU(2). The representations are often considered from the Lie algebra point of view, but we here look at them from the group point of view. We take the maximal torus to be the set of matrices of the form diag(e^{iθ}, e^{-iθ}) with θ real. According to the example discussed above in the section on representations of T, the analytically integral elements are labeled by integers, so that the dominant, analytically integral elements are non-negative integers m. The general theory then tells us that for each m, there is a unique irreducible representation of SU(2) with highest weight m. Much information about the representation corresponding to a given m is encoded in its character. Now, the Weyl character formula says, in this case, that the character is given by χ_m(θ) = sin((m+1)θ)/sin(θ). We can also write the character as a sum of exponentials as follows: χ_m(θ) = e^{imθ} + e^{i(m-2)θ} + ... + e^{-i(m-2)θ} + e^{-imθ}. (If we use the formula for the sum of a finite geometric series on the above expression and simplify, we obtain the earlier expression.) From this last expression and the standard formula for the character in terms of the weights of the representation, we can read off that the weights of the representation are m, m-2, ..., -(m-2), -m, each with multiplicity one. (The weights are the integers appearing in the exponents of the exponentials and the multiplicities are the coefficients of the exponentials.) Since there are m+1 weights, each with multiplicity 1, the dimension of the representation is m+1. Thus, we recover much of the information about the representations that is usually obtained from the Lie algebra computation. (A numerical check of these formulas appears after the list of tools below.) An outline of the proof We now outline the proof of the theorem of the highest weight, following the original argument of Hermann Weyl. We continue to let K be a connected compact Lie group and T a fixed maximal torus in K. We focus on the most difficult part of the theorem, showing that every dominant, analytically integral element is the highest weight of some (finite-dimensional) irreducible representation. The tools for the proof are the following: The torus theorem, which says that every element of K is conjugate to an element of T. The Weyl integral formula, which reduces the integration of class functions over K to an integration over T. The Peter–Weyl theorem for class functions, which states that the characters of the irreducible representations form an orthonormal basis for the space of square-integrable class functions on K. With these tools in hand, we proceed with the proof. 
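Before continuing the outline, here is a small numerical illustration of the SU(2) case discussed above and of two of the tools just listed. It assumes the SU(2) conventions from the text and the standard SU(2) form of the Weyl integral formula, namely that the integral of a class function f over SU(2) equals (2/π) ∫ f(θ) sin²(θ) dθ taken over 0 ≤ θ ≤ π; with that assumption it checks the two expressions for the character, reads off the dimension m+1, and verifies the orthonormality of characters on which the proof below turns.

```python
import numpy as np
from scipy.integrate import quad

def chi(m, t):
    """Character of the irreducible SU(2) representation with highest weight m,
    evaluated on the torus element diag(e^{it}, e^{-it})."""
    return np.sin((m + 1) * t) / np.sin(t)

def chi_weight_sum(m, t):
    """The same character as a sum of exponentials over the weights m, m-2, ..., -m."""
    weights = np.arange(-m, m + 1, 2)
    return np.exp(1j * weights * t).sum().real

m, t = 4, 0.7
assert abs(chi(m, t) - chi_weight_sum(m, t)) < 1e-12  # two forms agree
print(chi_weight_sum(m, 1e-9))  # ~5.0 = m + 1, the dimension of the representation

def inner(m, n):
    """<chi_m, chi_n> via the (assumed) SU(2) Weyl integral formula."""
    f = lambda t: chi(m, t) * chi(n, t) * (2 / np.pi) * np.sin(t) ** 2
    return quad(f, 0, np.pi)[0]

print(round(inner(3, 3), 8), round(inner(3, 5), 8))  # expect 1.0 and 0.0
```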
The first major step in the argument is to prove the Weyl character formula. The formula states that if Σ is an irreducible representation with highest weight λ, then the character χ of Σ satisfies χ(e^X) = (∑_{w in W} det(w) e^{i⟨w(λ+δ), X⟩}) / (∑_{w in W} det(w) e^{i⟨wδ, X⟩}) for all X in the Lie algebra of T, where W is the Weyl group. Here δ is half the sum of the positive roots. (The notation uses the convention of "real weights"; this convention requires an explicit factor of i in the exponent.) Weyl's proof of the character formula is analytic in nature and hinges on the fact that the L² norm of the character is 1. Specifically, if there were any additional terms in the numerator, the Weyl integral formula would force the norm of the character to be greater than 1. Next, we let Φ_λ denote the function on the right-hand side of the character formula. We show that even if λ is not known to be the highest weight of a representation, Φ_λ is a well-defined, Weyl-invariant function on T, which therefore extends to a class function on K. Then using the Weyl integral formula, one can show that as λ ranges over the set of dominant, analytically integral elements, the functions Φ_λ form an orthonormal family of class functions. We emphasize that we do not currently know that every such λ is the highest weight of a representation; nevertheless, the expressions on the right-hand side of the character formula give a well-defined set of functions Φ_λ, and these functions are orthonormal. Now comes the conclusion. The set of all Φ_λ, with λ ranging over the dominant, analytically integral elements, forms an orthonormal set in the space of square-integrable class functions. But by the Weyl character formula, the characters of the irreducible representations form a subset of the Φ_λ's. And by the Peter–Weyl theorem, the characters of the irreducible representations form an orthonormal basis for the space of square-integrable class functions. If there were some λ that is not the highest weight of a representation, then the corresponding Φ_λ would not be the character of a representation. Thus, the characters would be a proper subset of the set of Φ_λ's. But then we would have an impossible situation: an orthonormal basis (the set of characters of the irreducible representations) would be contained in a strictly larger orthonormal set (the set of Φ_λ's). Thus, every λ must actually be the highest weight of a representation. Duality The topic of recovering a compact group from its representation theory is the subject of the Tannaka–Krein duality, now often recast in terms of Tannakian category theory. From compact to non-compact groups The influence of the compact group theory on non-compact groups was formulated by Weyl in his unitarian trick. Inside a general semisimple Lie group there is a maximal compact subgroup, and the representation theory of such groups, developed largely by Harish-Chandra, makes intensive use of the restriction of a representation to such a subgroup, and also of the model of Weyl's character theory. See also Peter–Weyl theorem Maximal torus Root system Locally compact group p-compact group Protorus Classifying finite-dimensional representations of Lie algebras Weights in the representation theory of semisimple Lie algebras References Bibliography Topological groups Lie groups Fourier analysis
Compact group
[ "Mathematics" ]
4,263
[ "Lie groups", "Mathematical structures", "Space (mathematics)", "Topological spaces", "Algebraic structures", "Topological groups" ]
750,712
https://en.wikipedia.org/wiki/Geometrodynamics
In theoretical physics, geometrodynamics is an attempt to describe spacetime and associated phenomena completely in terms of geometry. Technically, its goal is to unify the fundamental forces and reformulate general relativity as a configuration space of three-metrics, modulo three-dimensional diffeomorphisms. The origin of this idea can be found in the works of the English mathematician William Kingdon Clifford. This theory was enthusiastically promoted by John Wheeler in the 1960s, and work on it continues in the 21st century. Einstein's geometrodynamics The term geometrodynamics is sometimes used as a synonym for general relativity. More properly, some authors use the phrase Einstein's geometrodynamics to denote the initial value formulation of general relativity, introduced by Arnowitt, Deser, and Misner (ADM formalism) around 1960. In this reformulation, spacetimes are sliced up into spatial hyperslices in a rather arbitrary fashion, and the vacuum Einstein field equation is reformulated as an evolution equation describing how, given the geometry of an initial hyperslice (the "initial value"), the geometry evolves over "time". This requires giving constraint equations which must be satisfied by the original hyperslice. It also involves some "choice of gauge"; specifically, choices about how the coordinate system used to describe the hyperslice geometry evolves. Wheeler's geometrodynamics Wheeler wanted to reduce physics to geometry in an even more fundamental way than the ADM reformulation of general relativity with a dynamic geometry whose curvature changes with time. It attempts to realize three concepts: mass without mass, charge without charge, and field without field. He wanted to lay the foundation for quantum gravity and unify gravitation with electromagnetism (the strong and weak interactions were not yet sufficiently well understood in 1960 to be included). Wheeler introduced the notion of geons, gravitational wave packets confined to a compact region of spacetime and held together by the gravitational attraction of the (gravitational) field energy of the wave itself. Wheeler was intrigued by the possibility that geons could affect test particles much like a massive object, hence mass without mass. Wheeler was also much intrigued by the fact that the (nonspinning) point-mass solution of general relativity, the Schwarzschild vacuum, has the nature of a wormhole. Similarly, in the case of a charged particle, the geometry of the Reissner–Nordström electrovacuum solution suggests that the symmetry between electric field lines (which "end" in charges) and magnetic field lines (which never end) could be restored if the electric field lines do not actually end but only go through a wormhole to some distant location or even another branch of the universe. George Rainich had shown decades earlier that one can obtain the electromagnetic field tensor from the electromagnetic contribution to the stress–energy tensor, which in general relativity is directly coupled to spacetime curvature; Wheeler and Misner developed this into the so-called already-unified field theory which partially unifies gravitation and electromagnetism, yielding charge without charge. In the ADM reformulation of general relativity, Wheeler argued that the full Einstein field equation can be recovered once the momentum constraint can be derived, and suggested that this might follow from geometrical considerations alone, making general relativity something like a logical necessity. 
Specifically, curvature (the gravitational field) might arise as a kind of "averaging" over very complicated topological phenomena at very small scales, the so-called spacetime foam. This would realize geometrical intuition suggested by quantum gravity, or field without field. These ideas captured the imagination of many physicists, even though Wheeler himself quickly dashed some of the early hopes for his program. In particular, spin 1/2 fermions proved difficult to handle. For this, one has to go to the Einsteinian unified field theory of the Einstein–Maxwell–Dirac system, or more generally, the Einstein–Yang–Mills–Dirac–Higgs system. Geometrodynamics also attracted attention from philosophers intrigued by the possibility of realizing some of Descartes' and Spinoza's ideas about the nature of space. Modern notions of geometrodynamics More recently, Christopher Isham, Jeremy Butterfield, and their students have continued to develop quantum geometrodynamics to take account of recent work toward a quantum theory of gravity and further developments in the very extensive mathematical theory of initial value formulations of general relativity. Some of Wheeler's original goals remain important for this work, particularly the hope of laying a solid foundation for quantum gravity. The philosophical program also continues to motivate several prominent contributors. Topological ideas in the realm of gravity date back to Riemann, Clifford, and Weyl and found a more concrete realization in the wormholes of Wheeler, characterized by the Euler–Poincaré invariant. They result from attaching handles to black holes. Observationally, Albert Einstein's general relativity (GR) is rather well established for the solar system and double pulsars. However, in GR the metric plays a double role: measuring distances in spacetime and serving as a gravitational potential for the Christoffel connection. This dichotomy seems to be one of the main obstacles for quantizing gravity. As early as 1924, Arthur Stanley Eddington suggested in his book The Mathematical Theory of Relativity (2nd edition) that the connection be regarded as the basic field and the metric merely as a derived concept. Consequently, the primordial action in four dimensions should be constructed from a metric-free topological action such as the Pontryagin invariant of the corresponding gauge connection. As in Yang–Mills theory, a quantization can be achieved by amending the definition of curvature and the Bianchi identities via topological ghosts. In such a graded Cartan formalism, the nilpotency of the ghost operators is on par with the Poincaré lemma for the exterior derivative. Using a BRST antifield formalism with a duality gauge fixing, a consistent quantization in spaces of double dual curvature is obtained. The constraint imposes instanton-type solutions on the curvature-squared "Yang–Mielke theory" of gravity, proposed in its affine form by Weyl in 1919 and by Yang in 1974. However, these exact solutions exhibit a "vacuum degeneracy". One needs to modify the double duality of the curvature via scale-breaking terms in order to retain Einstein's equations with an induced cosmological constant of partially topological origin as the unique macroscopic "background". Such scale-breaking terms arise more naturally in a constraint formalism, the so-called BF scheme, in which the gauge curvature is denoted by F. 
In the case of gravity, it departs from the special linear group SL(5, R) in four dimensions, thus generalizing (Anti-)de Sitter gauge theories of gravity. After applying spontaneous symmetry breaking to the corresponding topological BF theory, again Einstein spaces emerge with a tiny cosmological constant related to the scale of symmetry breaking. Here the "background" metric is induced via a Higgs-like mechanism. The finiteness of such a deformed topological scheme may convert into asymptotic safety after quantization of the spontaneously broken model. Richard J. Petti believes that cosmological models with torsion but no rotating particles based on Einstein–Cartan theory illustrate a situation of "a (nonpropagating) field without a field". See also Mathematics of general relativity Hamilton–Jacobi–Einstein equation (HJEE) Numerical relativity Black hole electron Teleparallelism References Works cited General references This Ph.D. thesis offers a readable account of the long development of the notion of "geometrodynamics". This book focuses on the philosophical motivations and implications of the modern geometrodynamics program. See chapter 43 for superspace and chapter 44 for spacetime foam. online version (subscription required) online version (subscription required) Further reading Grünbaum, Adolf (1973): Geometrodynamics and Ontology, The Journal of Philosophy, vol. 70, no. 21, December 6, 1973, pp. 775–800, online version (subscription required) Mielke, Eckehard W. (1987): Geometrodynamics of Gauge Fields – On the geometry of Yang–Mills and gravitational gauge theories, (Akademie-Verlag, Berlin), 242 pages. (2nd edition, Springer International Publishing Switzerland, Mathematical Physics Studies 2017), 373 pages. Theories of gravity
Geometrodynamics
[ "Physics" ]
1,756
[ "Theoretical physics", "Theories of gravity" ]
751,115
https://en.wikipedia.org/wiki/Topographic%20prominence
In topography, prominence or relative height (also referred to as autonomous height, and shoulder drop in US English, and drop in British English) measures the height of a mountain or hill's summit relative to the lowest contour line encircling it but containing no higher summit within it. It is a measure of the independence of a summit. The key col ("saddle") around the peak is a unique point on this contour line and the parent peak is some higher mountain, selected according to various criteria. Definitions The prominence of a peak is the least drop in height necessary in order to get from the summit to any higher terrain. This can be calculated for a given peak in the following manner: for every path connecting the peak to higher terrain, find the lowest point on the path; the key col (or highest saddle, or linking col, or link) is defined as the highest of these points, along all connecting paths; the prominence is the difference between the elevation of the peak and the elevation of its key col. On a given landmass, the highest peak's prominence will be identical to its elevation. An alternative equivalent definition is that the prominence is the height of the peak's summit above the lowest contour line encircling it, but containing no higher summit within it; see Figure 1. Illustration The parent peak may be either close or far from the subject peak. The summit of Mount Everest is the parent peak of Aconcagua in Argentina at a distance of 17,755 km (11,032 miles), as well as the parent of the South Summit of Mount Everest at a distance of 360 m (1200 feet). The key col may also be close to the subject peak or far from it. The key col for Aconcagua, if sea level is disregarded, is the Bering Strait at a distance of 13,655 km (8,485 miles). The key col for the South Summit of Mount Everest is about 100 m (330 feet) distant. A way to visualize prominence is to imagine raising sea level so the parent peak and subject peak are two separate islands. Then lower it until a tiny land bridge forms between the two islands. This land bridge is the key col of the subject peak, and the peak's prominence is its elevation from that key col. In mountaineering Prominence is interesting to many mountaineers because it is an objective measurement that is strongly correlated with the subjective significance of a summit. Peaks with low prominence are either subsidiary tops of some higher summit or relatively insignificant independent summits. Peaks with high prominence tend to be the highest points around and are likely to have extraordinary views. Only summits with a sufficient degree of prominence are regarded as independent mountains. For example, the world's second-highest mountain is K2 (height 8,611 m, prominence 4,017 m). While Mount Everest's South Summit (height 8,749 m, prominence 11 m) is taller than K2, it is not considered an independent mountain because it is a sub-summit of the main summit (which has a height and prominence of 8,848 m). Many lists of mountains use topographic prominence as a criterion for inclusion in the list, or cutoff. John and Anne Nuttall's The Mountains of England and Wales uses a cutoff of 15 m (about 50 ft), and Alan Dawson's list of Marilyns uses 150 m (about 500 ft). (Dawson's list and the term "Marilyn" are limited to Britain and Ireland). In the contiguous United States, the famous list of "fourteeners" (14,000 foot / 4268 m peaks) uses a cutoff of 300 ft / 91 m (with some exceptions). 
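Stepping back to the definition at the start of this article, the key-col computation can be made concrete on a toy one-dimensional elevation profile. The sketch below is illustrative only (real calculations run on two-dimensional elevation models, where all connecting paths must be considered): for each direction toward higher terrain it records the lowest point passed, and the prominence is the drop from the summit to the highest such low point.

```python
def prominence(profile, i):
    """Prominence of the summit at index i of a 1-D elevation profile."""
    h = profile[i]
    cols = []
    # walk left and right until strictly higher terrain is found
    for step in (-1, 1):
        lowest, j = h, i + step
        while 0 <= j < len(profile):
            lowest = min(lowest, profile[j])
            if profile[j] > h:          # reached higher terrain
                cols.append(lowest)     # lowest point along this path
                break
            j += step
    if not cols:
        return h          # highest point of the landmass: prominence = elevation
    return h - max(cols)  # drop to the key col (highest of the low points)

profile = [0, 300, 150, 500, 350, 900, 0]  # hypothetical elevations in metres
print(prominence(profile, 1))  # 300 - 150 = 150
print(prominence(profile, 3))  # 500 - 350 = 150
print(prominence(profile, 5))  # 900: island high point
```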
Also in the U.S., 2000 ft (610 m) of prominence has become an informal threshold that signifies that a peak has major stature. Lists with a high topographic prominence cutoff tend to favor isolated peaks or those that are the highest point of their massif; a low value, such as the Nuttalls', results in a list with many summits that may be viewed by some as insignificant. While the use of prominence as a cutoff to form a list of peaks ranked by elevation is standard and is the most common use of the concept, it is also possible to use prominence as a mountain measure in itself. This generates lists of peaks ranked by prominence, which are qualitatively different from lists ranked by elevation. Such lists tend to emphasize isolated high peaks, such as range or island high points and stratovolcanoes. One advantage of a prominence-ranked list is that it needs no cutoff since a peak with high prominence is automatically an independent peak. Parent peak It is common to define a peak's parent as a particular peak in the higher terrain connected to the peak by the key col. If there are many higher peaks there are various ways of defining which one is the parent, not necessarily based on geological or geomorphological factors. The "parent" relationship defines a hierarchy which defines some peaks as subpeaks of others. For example, in Figure 1, the middle peak is a subpeak of the right peak, which is a subpeak of the left peak, which is the highest point on its landmass. In that example, there is no controversy about the hierarchy; in practice, there are different definitions of parent. These different definitions follow. Encirclement or island parentage Also known as prominence island parentage, this is defined as follows. In Figure 2 the key col of peak A is at the meeting place of two closed contours, one encircling A (and no higher peaks) and the other containing at least one higher peak. The encirclement parent of A is the highest peak that is inside this other contour. In terms of the falling-sea model, the two contours together bound an "island", with two pieces connected by an isthmus at the key col. The encirclement parent is the highest point on this entire island. For example, the encirclement parent of Mont Blanc, the highest peak in the Alps, is Mount Everest. Mont Blanc's key col is a piece of low ground near Lake Onega in northwestern Russia (at about 113 m elevation), on the divide between lands draining into the Baltic and Caspian Seas. This is the meeting place of two contours, one of them encircling Mont Blanc; the other contour encircles Mount Everest. This example demonstrates that the encirclement parent can be very far away from the peak in question when the key col is low. This means that, while simple to define, the encirclement parent often does not satisfy the intuitive requirement that the parent peak should be close to the child peak. For example, one common use of the concept of parent is to make clear the location of a peak. If we say that Peak A has Mont Blanc for a parent, we would expect to find Peak A somewhere close to Mont Blanc. This is not always the case for the various concepts of parent, and is least likely to be the case for encirclement parentage. Figure 3 shows a schematic range of peaks with the color underlying the minor peaks indicating the encirclement parent. In this case the encirclement parent of M is H whereas an intuitive view might be that L was the parent. Indeed, if col "k" were slightly lower, L would be the true encirclement parent. 
The encirclement parent is the highest possible parent for a peak; all other definitions indicate a (possibly different) peak on the combined island, a "closer" peak than the encirclement parent (if there is one), which is still "better" than the peak in question. The differences lie in what criteria are used to define "closer" and "better." Prominence parentage The (prominence) parent peak of peak A can be found by dividing the island or region in question into territories, by tracing the two hydrographic runoffs, one in each direction, downwards from the key col of every peak that is more prominent than peak A. The parent is the peak whose territory peak A is in. For hills with low prominence in Britain, a definition of "parent Marilyn" is sometimes used to classify low hills ("Marilyn" being a British term for a hill with a prominence of at least 150 m). This is found by dividing the region of Britain in question into territories, one for each Marilyn. The parent Marilyn is the Marilyn whose territory the hill's summit is in. If the hill is on an island (in Britain) whose highest point is less than 150 m, it has no parent Marilyn. Prominence parentage is the only definition used in the British Isles because encirclement parentage breaks down when the key col approaches sea level. Using the encirclement definition, the parent of almost any small hill in a low-lying coastal area would be Ben Nevis, an unhelpful and confusing outcome. Meanwhile, "height" parentage (see below) is not used because there is no obvious choice of cutoff. This choice of method might at first seem arbitrary, but it provides every hill with a clear and unambiguous parent peak that is taller and more prominent than the hill itself, while also being connected to it (via ridge lines). The parent of a low hill will also usually be nearby; this becomes less likely as the hill's height and prominence increase. Using prominence parentage, one may produce a "hierarchy" of peaks going back to the highest point on the island. One such chain in Britain would read: Billinge Hill → Winter Hill → Hail Storm Hill → Boulsworth Hill → Kinder Scout → Cross Fell → Helvellyn → Scafell Pike → Snowdon → Ben Nevis. At each stage in the chain, both height and prominence increase. Line parentage Line parentage, also called height parentage, is similar to prominence parentage, but it requires a prominence cutoff criterion. The height parent is the closest peak to peak A (along all ridges connected to A) that has a greater height than A, and satisfies some prominence criteria. The disadvantage of this concept is that it goes against the intuition that a parent peak should always be more significant than its child. However it can be used to build an entire lineage for a peak which contains a great deal of information about the peak's position. In general, the analysis of parents and lineages is intimately linked to studying the topology of watersheds. Issues in choice of summit and key col Alteration of the landscape by humans and presence of water features can give rise to issues in the choice of location and height of a summit or col. In Britain, extensive discussion has resulted in a protocol that has been adopted by the main sources of prominence data in Britain and Ireland. Other sources of data commonly ignore human-made alterations, but this convention is not universally agreed upon; for example, some authors discount modern structures but allow ancient ones. 
Another disagreement concerns mountaintop removal, though for high-prominence peaks (and for low-prominence subpeaks with intact summits), the difference in prominence values for the two conventions is typically relatively small. Examples The key col and parent peak are often close to the sub-peak but this is not always the case, especially when the key col is relatively low. It is only with the advent of computer programs and geographical databases that thorough analysis has become possible. For example, the key col of Denali in Alaska (6,194 m) is a 56 m col near Lake Nicaragua. Denali's encirclement parent is Aconcagua (6,960 m), in Argentina, and its prominence is 6,138 m. (To further illustrate the rising-sea model of prominence, if sea level rose 56 m, North and South America would be separate continents and Denali would be 6,138 m, its current prominence, above sea level. At a slightly lower level, the continents would still be connected and the high point of the combined landmass would be Aconcagua, the encirclement parent.) While it is natural for Aconcagua to be the parent of Denali, since Denali is a major peak, consider the following situation: Peak A is a small hill on the coast of Alaska, with elevation 100 m and key col 50 m. Then the encirclement parent of Peak A is also Aconcagua, even though there will be many peaks closer to Peak A which are much higher and more prominent than Peak A (for example, Denali). This illustrates the disadvantage of using the encirclement parent. A hill in a low-lying area like the Netherlands will often be a direct child of Mount Everest, with its prominence about the same as its height and its key col placed at or near the foot of the hill, well below, for instance, the 113-meter-high key col of Mont Blanc. Calculations and mathematics When the key col for a peak is close to the peak itself, prominence is easily computed by hand using a topographic map. However, when the key col is far away, or when one wants to calculate the prominence of many peaks at once, software can apply surface network modeling to a digital elevation model to find exact or approximate key cols. Since topographic maps typically show elevation using contour lines, the exact elevation is typically bounded by an upper and lower contour, and not specified exactly. Prominence calculations may use the high contour (giving a pessimistic estimate), the low contour (giving an optimistic estimate), their mean (giving a "midrange" or "rise" prominence) or an interpolated value (customary in Britain); a short sketch of these conventions appears after this passage. The choice of method depends largely on the preference of the author and historical precedent. Pessimistic prominence (and sometimes optimistic prominence) was for many years used in US and international lists, but mean prominence is becoming preferred. Wet prominence and dry prominence There are two varieties of topographic prominence: wet prominence and dry prominence. Wet prominence is the standard topographic prominence discussed in this article. Wet prominence assumes that the surface of the earth includes all permanent water, snow, and ice features. Thus, the wet prominence of the highest summit of an ocean island or landmass is always equal to the summit's elevation. Dry prominence, on the other hand, ignores water, snow, and ice features and assumes that the surface of the earth is defined by the solid bottom of those features. 
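The following sketch shows one common reading of the contour conventions mentioned in the calculations discussion above (exactly which contour is taken for the summit and which for the key col varies between authors, so treat the assignment here as an assumption); the map readings and contour interval are hypothetical.

```python
def prominence_estimates(summit_low, col_low, interval):
    """Bracket a peak's prominence from contour data.

    summit_low / col_low: highest contour at or below the summit / key col;
    interval: the map's contour interval. True elevations lie somewhere
    between a contour and the next one above it."""
    summit_high = summit_low + interval    # next contour above the summit
    col_high = col_low + interval          # next contour above the col
    pessimistic = summit_low - col_high    # smallest prominence consistent with the map
    optimistic = summit_high - col_low     # largest prominence consistent with the map
    mean = (pessimistic + optimistic) / 2  # "midrange" or "rise" prominence
    return pessimistic, optimistic, mean

print(prominence_estimates(summit_low=620, col_low=480, interval=10))
# (130, 150, 140.0)
```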
The dry prominence of a summit is equal to its wet prominence unless the summit is the highest point of a landmass or island, or its key col is covered by snow or ice. If its highest surface col is on water, snow, or ice, the dry prominence of that summit is equal to its wet prominence plus the depth of its highest submerged col. Because Earth has no higher summit than Mount Everest, Everest's prominence is either undefined or its height above the lowest contour line. In a dry Earth, the lowest contour line would be the deepest hydrologic feature, the Challenger Deep, at 10,924 m depth. Everest's dry prominence would be this depth plus Everest's wet prominence of 8,848 m, totaling 19,772 m. The dry prominence of Mauna Kea is equal to its wet prominence (4,205 m) plus the depth of its highest submerged col (about 5,125 m). Totaling 9,330 m, this is greater than that of any mountain apart from Everest. The dry prominence of Aconcagua is equal to its wet prominence (6,960 m) plus the depth of the highest submerged col of the Bering Strait (about 40 m), or about 7,000 m. Mauna Kea is relatively close to its submerged key col in the Pacific Ocean, and the corresponding contour line that surrounds Mauna Kea is a relatively compact area of the ocean floor. By contrast, a contour line around Everest lower than 9,330 m below Everest's peak would surround most of the major continents of the Earth; even a contour surrounding just Afro-Eurasia would run through the Bering Strait, whose highest submerged col is about 40 m deep, only 8,888 m below the peak of Everest. As a result, Mauna Kea's prominence might be subjectively more impressive than Everest's, and some authorities have called it the tallest mountain from peak to underwater base. Dry prominence is also useful for measuring submerged seamounts. Seamounts have a dry topographic prominence, a topographic isolation, and a negative topographic elevation. List of most prominent summits on Earth by 'dry' prominence Prominence values are accurate to perhaps 100 m owing to uncertainties in ocean sounding depths. See also Height above average terrain (HAAT) – a similar measurement for FM and TV transmitters Ultra-prominent summit Lists List of mountain lists List of tallest mountains in the Solar System List of the most prominent summits of the world List of ultra-prominent summits of Africa List of ultra-prominent summits of Antarctica List of ultra-prominent summits of Australia List of ultra-prominent summits of the Alps List of the most prominent summits of the British Isles List of European ultra-prominent peaks List of ultra-prominent summits of North America List of the most prominent summits of Greenland List of the most prominent summits of Canada List of the most prominent summits of the Rocky Mountains List of the most prominent summits of the United States List of the most prominent summits of New England List of the most prominent summits of Mexico List of the most prominent summits of Central America List of the most prominent summits of the Caribbean List of ultra-prominent summits of South America List of islands by highest point References Topography Physical geography Mountaineering Vertical extent
Topographic prominence
[ "Physics", "Mathematics" ]
3,690
[ "Vertical extent", "Physical quantities", "Quantity", "Size", "Wikipedia categories named after physical quantities" ]
751,413
https://en.wikipedia.org/wiki/Colorado%20River%20Aqueduct
The Colorado River Aqueduct, or CRA, is a water conveyance in Southern California in the United States, operated by the Metropolitan Water District of Southern California. The aqueduct impounds water from the Colorado River at Lake Havasu on the California–Arizona border, west across the Mojave and Colorado deserts to the east side of the Santa Ana Mountains. It is one of the primary sources of drinking water for Southern California. Originally conceived by William Mulholland and designed by Chief Engineer Frank E. Weymouth of the MWD, it was the largest public works project in southern California during the Great Depression. The project employed 30,000 people over an eight-year period and as many as 10,000 at one time. The system is composed of two reservoirs, five pumping stations, of canals, of tunnels, and of buried conduit and siphons. Average annual throughput is . Route The Colorado River Aqueduct begins near Parker Dam on the Colorado River. There, the water is pumped up the Whipple Mountains where the water emerges and begins flowing through of siphons and open canals on the southern Mojave Desert. At Iron Mountain, the water is again lifted, . The aqueduct then turns southwest towards the Eagle Mountains. There the water is lifted two more times, first by to an elevation of more than , then by to an elevation of above sea level. The CRA then runs through the deserts of the Coachella Valley and through the San Gorgonio Pass. Near Cabazon, the aqueduct begins to run underground until it enters the San Jacinto Tunnel at the base of the San Jacinto Mountains. On the other side of the mountains the aqueduct continues to run underground until it reaches the terminus at Lake Mathews. From there, of distribution lines, along with eight more tunnels, delivers water to member cities. Some of the water is siphoned off in San Jacinto via the San Diego canal, part of the San Diego Aqueduct that delivers water to San Diego County. Background and construction As the Los Angeles metropolitan area grew in the early 1900s, Mulholland and others began looking for new sources of water. Eventually, Los Angeles laid claim to the waters of the Owens Valley, east of the Sierra Nevada, and in 1913 completed the Los Angeles Aqueduct to deliver its waters to the burgeoning city. By the early 1920s, Los Angeles had grown so rapidly that the Owens River watershed could no longer supply the city's needs for domestic and agricultural water. By 1923, Mulholland and his engineers were looking east to an even larger water supply, the Colorado River. The plan was to dam the Colorado River and carry its waters across hundreds of miles of mountains and deserts. In 1924, the first steps were taken to create a metropolitan water district, made up of various cities throughout southern California. The Metropolitan Water District ("Met") was incorporated on December 6, 1928, and in 1929 took over where Los Angeles had left off, planning for a Colorado River aqueduct. (During the same period, as a hedge against the possible abandonment of the planned Colorado River aqueduct, Los Angeles also undertook an extension of the Los Angeles Aqueduct to the Mono Lake Basin.) The MWD considered eight routes for the aqueduct. In 1931, the MWD board of directors chose the Parker route which would require the building of the Parker Dam. The Parker route was chosen because it was seen as the safest and most economical. A $220 million bond was approved on September 29, 1931. 
Work began in January 1933 near Thousand Palms, and in 1934 the United States Bureau of Reclamation began work on the Parker Dam. Construction of the aqueduct was finished in 1935. Water first flowed in the aqueduct on January 7, 1939. The CRA contributed to urban growth in the south coast region. Although the CRA brought "too much, too expensive" water in its early years of operation, subsidies (via property taxes) and expansion of MWD's service area brought reduced prices and expanded demand. (Holding supply constant, that meant that the quantity demanded rose to meet supplies.) On subsidies and sprawl, note that it was not until 1954 that Met's revenue from selling water exceeded the cost of delivering it; it was not until 1973 that revenue from sales exceeded revenue from taxes. Since about 80 percent of Met's costs are fixed, revenue needs to cover far more than operating expenses in order to pay for all costs. In 1955, the aqueduct was recognized by the American Society of Civil Engineers (ASCE) as one of their "Seven Modern Civil Engineering Wonders". In popular culture The Colorado River Aqueduct was featured by Huell Howser in California's Gold. See also San Jacinto Tunnel Kaleshwaram Lift Irrigation Project Ganga Water Lift Project References External links Metropolitan Water District of Southern California official website Colorado River Aqueduct Page at Maven's Notebook The Colorado River Aqueduct at The Center for Land Use Interpretation The Los Angeles Aqueduct at the Los Angeles Department of Water and Power Aqueducts in California Aqueduct, Colorado River Geography of the Colorado Desert Geography of Riverside County, California Geography of San Bernardino County, California Historic American Engineering Record in California History of Los Angeles Interbasin transfer Water in California Historic Civil Engineering Landmarks 1939 establishments in California Coachella Valley San Gorgonio Pass
Colorado River Aqueduct
[ "Engineering", "Environmental_science" ]
1,076
[ "Hydrology", "Civil engineering", "Interbasin transfer", "Historic Civil Engineering Landmarks" ]
3,454,072
https://en.wikipedia.org/wiki/Structural%20gene
A structural gene is a gene that codes for any RNA or protein product other than a regulatory factor (i.e. regulatory protein). A term derived from the lac operon, structural genes are typically viewed as those containing sequences of DNA corresponding to the amino acids of a protein that will be produced, as long as said protein does not function to regulate gene expression. Structural gene products include enzymes and structural proteins. Also encoded by structural genes are non-coding RNAs, such as rRNAs and tRNAs (but excluding any regulatory miRNAs and siRNAs). Placement in the genome In prokaryotes, structural genes of related function are typically adjacent to one another on a single strand of DNA, forming an operon. This permits simpler regulation of gene expression, as a single regulatory factor can affect transcription of all associated genes. This is best illustrated by the well-studied lac operon, in which three structural genes (lacZ, lacY, and lacA) are all regulated by a single promoter and a single operator. Prokaryotic structural genes are transcribed into a polycistronic mRNA and subsequently translated. In eukaryotes, structural genes are not sequentially placed. Each gene is instead composed of coding exons and interspersed non-coding introns. Regulatory sequences are typically found in non-coding regions upstream and downstream from the gene. Structural gene mRNAs must be spliced prior to translation to remove intronic sequences. This in turn lends itself to the eukaryotic phenomenon of alternative splicing, in which a single mRNA from a single structural gene can produce several different proteins based on which exons are included. Despite the complexity of this process, it is estimated that up to 94% of human genes are spliced in some way. Furthermore, different splicing patterns occur in different tissue types. An exception to this layout in eukaryotes is the genes for histone proteins, which lack introns entirely. Also distinct are the rDNA clusters of structural genes, in which the 28S, 5.8S, and 18S sequences are adjacent, separated by short internal transcribed spacers; likewise, the 45S rDNA occurs at five distinct places in the genome but is clustered into adjacent repeats. In eubacteria these genes are organized into operons. However, in archaebacteria these genes are non-adjacent and exhibit no linkage. Role in human disease The identification of the genetic basis for the causative agent of a disease can be an important component of understanding its effects and spread. Location and content of structural genes can elucidate the evolution of virulence, as well as provide necessary information for treatment. Likewise, understanding the specific changes in structural gene sequences underlying a gain or loss of virulence aids in understanding the mechanism by which diseases affect their hosts. For example, Yersinia pestis (the bubonic plague) was found to carry several virulence and inflammation-related structural genes on plasmids. Likewise, the structural gene responsible for tetanus was determined to be carried on a plasmid as well. Diphtheria is caused by a bacterium, but only after that bacterium has been infected by a bacteriophage carrying the structural genes for the toxin. In Herpes simplex virus, the structural gene sequence responsible for virulence was found in two locations in the genome despite only one location actually producing the viral gene product. 
This was hypothesized to serve as a potential mechanism for strains to regain virulence if lost through mutation. Understanding the specific changes in structural genes underlying a gain or loss of virulence is a necessary step in the formation of specific treatments, as well as the study of possible medicinal uses of toxins. Phylogenetics As far back as 1974, DNA sequence similarity was recognized as a valuable tool for determining relationships among taxa. Structural genes in general are more highly conserved due to functional constraint, and so can prove useful in examinations of more disparate taxa. Original analyses enriched samples for structural genes via hybridization to mRNA. More recent phylogenetic approaches focused on structural genes of known function, conserved to varying degrees. rRNA sequences are frequent targets, as they are conserved in all species. Microbiology has specifically targeted the 16S rRNA gene to determine species-level differences. In higher-order taxa, COI is now considered the "barcode of life," and is applied for most biological identification. Debate Despite the widespread classification of genes as either structural or regulatory, these categories are not an absolute division. Recent genetic discoveries call into question the distinction between regulatory and structural genes. The distinction between regulatory and structural genes can be attributed to the original 1959 work on lac operon protein expression. In this instance, a single regulatory protein was detected that affected the transcription of the other genes now known to compose the lac operon. From this point forward, the two types of coding sequences were separated. However, increasing discoveries of gene regulation suggest greater complexity. Structural gene expression is regulated by numerous factors including epigenetics (e.g. methylation), RNAi, and more. Regulatory and structural genes can be epigenetically regulated identically, so not all regulation is coded for by "regulatory genes". There are also examples of proteins that do not decidedly fit either category, such as chaperone proteins. These proteins aid in the folding of other proteins, a seemingly regulatory role. Yet these same proteins also aid in the movement of their chaperoned proteins across membranes, and have now been implicated in immune responses (see Hsp60) and in the apoptotic pathway (see Hsp70). More recently, microRNAs were found to be produced from the internal transcribed spacers of rRNA genes. Thus an internal component of a structural gene is, in fact, regulatory. Binding sites for microRNAs were also detected within coding sequences of genes. Typically interfering RNAs target the 3' UTR, but inclusion of binding sites within the sequence of the protein itself allows the transcripts of these proteins to effectively regulate the microRNAs within the cell. This interaction was demonstrated to have an effect on expression, and thus again a structural gene contains a regulatory component. References External links Model of Lac Operon The SGC protein browser SILVA database of aligned rRNA sequence data Barcode of Life database of COI barcoded species Genes Gene expression
Structural gene
[ "Chemistry", "Biology" ]
1,304
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
3,454,720
https://en.wikipedia.org/wiki/Regulator%20gene
In genetics, a regulator gene, regulator, or regulatory gene is a gene involved in controlling the expression of one or more other genes. Regulatory sequences, the DNA elements through which regulator genes act, are often located at the five prime (5') end of the start site of transcription of the gene they regulate. In addition, these sequences can also be found at the three prime (3') end of the transcription start site. In both cases, whether the regulatory sequence occurs before (5') or after (3') the gene it regulates, the sequence is often many kilobases away from the transcription start site. A regulator gene may encode a protein, or it may work at the level of RNA, as in the case of genes encoding microRNAs. An example of a regulator gene is a gene that codes for a repressor protein that inhibits the activity of an operator (a region of DNA to which repressor proteins bind, thus inhibiting the transcription of RNA by RNA polymerase). In prokaryotes, regulator genes often code for repressor proteins. Repressor proteins bind to operators or promoters, preventing RNA polymerase from transcribing RNA. They are usually constantly expressed so the cell always has a supply of repressor molecules on hand. Inducers cause repressor proteins to change shape or otherwise become unable to bind DNA, allowing RNA polymerase to continue transcription. Regulator genes can be located within an operon, adjacent to it, or far away from it. Other regulatory genes code for activator proteins. An activator binds to a site on the DNA molecule and causes an increase in transcription of a nearby gene. In prokaryotes, a well-known activator protein is the catabolite activator protein (CAP), involved in positive control of the lac operon. In the regulation of gene expression, studied in evolutionary developmental biology (evo-devo), both activators and repressors play important roles. Regulatory genes can also be described as positive or negative regulators, based on the environmental conditions that surround the cell. Positive regulators are regulatory elements that permit RNA polymerase binding to the promoter region, thus allowing transcription to occur. In terms of the lac operon, the positive regulator would be the CRP-cAMP complex that must be bound close to the site of the start of transcription of the lac genes. The binding of this positive regulator allows RNA polymerase to bind successfully to the promoter of the lac gene sequence, which advances the transcription of the lac genes lacZ, lacY, and lacA. Negative regulators are regulatory elements which obstruct the binding of RNA polymerase to the promoter region, thus repressing transcription. In terms of the lac operon, the negative regulator would be the lac repressor, which binds to the promoter in the same site that RNA polymerase normally binds. The binding of the lac repressor to RNA polymerase's binding site inhibits the transcription of the lac genes. Only when an inducer is bound to the lac repressor will the binding site be free for RNA polymerase to carry out transcription of the lac genes. Gene regulatory elements Promoters reside at the beginning of the gene and serve as the site where the transcription machinery assembles and transcription of the gene begins. Enhancers turn on the promoters at specific locations, times, and levels and can be simply defined as the "promoters of the promoter." Silencers are thought to turn off gene expression at specific time points and locations. 
Insulators, also called boundary elements, are DNA sequences that create cis-regulatory boundaries that prevent the regulatory elements of one gene from affecting neighboring genes. The general dogma is that these regulatory elements get activated by the binding of transcription factors, proteins that bind to specific DNA sequences and control mRNA transcription. There could be several transcription factors that need to bind to one regulatory element in order to activate it. In addition, several other proteins, called transcription cofactors, bind to the transcription factors themselves to control transcription. Negative regulators Negative regulators act to prevent transcription or translation. Negative regulators such as cFLIP suppress cell death mechanisms, leading to pathological disorders like cancer, and thus play a crucial role in drug resistance. Circumvention of such actors is a challenge in cancer therapy. Negative regulators of cell death in cancer include cFLIP, the Bcl2 family, Survivin, HSP, IAP, NF-κB, Akt, mTOR, and FADD. Detection There are several different techniques to detect regulatory genes, a few of which are used far more frequently than others. One of these is called ChIP-chip. ChIP-chip is an in vivo technique used to determine genomic binding sites for transcription factors in two-component system response regulators. An in vitro microarray-based assay (DAP-chip) can be used to determine gene targets and functions of two-component signal transduction systems. This assay takes advantage of the fact that response regulators can be phosphorylated and thus activated in vitro using small-molecule donors like acetyl phosphate. Phylogenetic footprinting Phylogenetic footprinting is a technique that utilizes multiple sequence alignments to determine locations of conserved sequences such as regulatory elements. Along with multiple sequence alignments, phylogenetic footprinting also requires statistical rates of conserved and non-conserved sequences. Using the information provided by multiple sequence alignments and statistical rates, one can identify the best-conserved motifs in the orthologous regions of interest. References External links Plant Transcription Factor Database http://www.news-medical.net/life-sciences/Gene-Expression-Techniques.aspx http://www.britannica.com/science/regulator-gene https://www.boundless.com/biology/textbooks/boundless-biology-textbook/gene-expression-16/regulation-of-gene-expression-111/prokaryotic-versus-eukaryotic-gene-expression-453-11678/ Gene expression
Regulator gene
[ "Chemistry", "Biology" ]
1,241
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
22,105,918
https://en.wikipedia.org/wiki/Cosmological%20constant%20problem
In cosmology, the cosmological constant problem or vacuum catastrophe is the substantial disagreement between the observed values of vacuum energy density (the small value of the cosmological constant) and the much larger theoretical value of zero-point energy suggested by quantum field theory. Depending on the Planck energy cutoff and other factors, the quantum vacuum energy contribution to the effective cosmological constant is calculated to be between 50 and as many as 120 orders of magnitude greater than has actually been observed, a state of affairs described by physicists as "the largest discrepancy between theory and experiment in all of science" and "the worst theoretical prediction in the history of physics". History The basic problem of a vacuum energy producing a gravitational effect was identified as early as 1916 by Walther Nernst. He predicted that the value had to be either zero or very small. In 1926, Wilhelm Lenz concluded that "If one allows waves of the shortest observed wavelengths cm, ... and if this radiation, converted to material density (), contributed to the curvature of the observable universe – one would obtain a vacuum energy density of such a value that the radius of the observable universe would not reach even to the Moon." After the development of quantum field theory in the 1940s, the first to address contributions of quantum fluctuations to the cosmological constant was Yakov Zeldovich in the 1960s. In quantum mechanics, the vacuum itself should experience quantum fluctuations. In general relativity, those quantum fluctuations constitute energy that would add to the cosmological constant. However, this calculated vacuum energy density is many orders of magnitude bigger than the observed cosmological constant. Original estimates of the degree of mismatch were as high as 120 to 122 orders of magnitude; however, modern research suggests that, when Lorentz invariance is taken into account, the degree of mismatch is closer to 60 orders of magnitude. With the development of inflationary cosmology in the 1980s, the problem became much more important: as cosmic inflation is driven by vacuum energy, differences in modeling vacuum energy lead to huge differences in the resulting cosmologies. Were the vacuum energy precisely zero, as was once believed, then the expansion of the universe would not accelerate as observed, according to the standard Λ-CDM model. Cutoff dependence The calculated vacuum energy is a positive, rather than negative, contribution to the cosmological constant because the existing vacuum has negative quantum-mechanical pressure, while in general relativity, the gravitational effect of negative pressure is a kind of repulsion. (Pressure here is defined as the flux of quantum-mechanical momentum across a surface.) Roughly, the vacuum energy is calculated by summing over all known quantum-mechanical fields, taking into account interactions and self-interactions between the ground states, and then removing all interactions below a minimum "cutoff" wavelength to reflect that existing theories break down and may fail to be applicable around the cutoff scale. Because the energy is dependent on how fields interact within the current vacuum state, the vacuum energy contribution would have been different in the early universe; for example, the vacuum energy would have been significantly different prior to electroweak symmetry breaking during the quark epoch. 
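The size of the mismatch sketched above can be reproduced with a few lines of arithmetic. The following rough order-of-magnitude check, not a rigorous calculation, compares a naive Planck-cutoff vacuum energy density with an observed dark-energy density of about 5.4×10⁻¹⁰ J/m³ (the measured value discussed under "Estimated values" below); the logarithm of the ratio comes out near 123, in line with the figures quoted above.

```python
import math
from scipy.constants import c, G, hbar

E_planck = math.sqrt(hbar * c**5 / G)        # Planck energy, ~1.96e9 J
l_planck = math.sqrt(hbar * G / c**3)        # Planck length, ~1.6e-35 m
rho_cutoff = E_planck / l_planck**3          # naive cutoff estimate, ~4.6e113 J/m^3

rho_observed = 5.4e-10                       # J/m^3, measured dark-energy density
print(math.log10(rho_cutoff / rho_observed)) # ~122.9 orders of magnitude
```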
Renormalization The vacuum energy in quantum field theory can be set to any value by renormalization. This view treats the cosmological constant as simply another fundamental physical constant not predicted or explained by theory. Such a renormalization constant must be chosen very accurately because of the many-orders-of-magnitude discrepancy between theory and observation, and many theorists consider this ad hoc constant as equivalent to ignoring the problem. Estimated values The vacuum energy density of the Universe based on 2015 measurements by the Planck collaboration is 5.96 × 10^-27 kg/m³ ≘ 5.36 × 10^-10 J/m³ ≈ 3.35 GeV/m³, or about 2.5 × 10^-47 GeV^4 in geometrized units. One assessment, made by Jérôme Martin of the Institut d'Astrophysique de Paris in 2012, placed the expected theoretical vacuum energy scale around 10^8 GeV^4, for a difference of about 55 orders of magnitude. Proposed solutions Some proposals involve modifying gravity to diverge from general relativity. These proposals face the hurdle that the results of observations and experiments so far have tended to be extremely consistent with general relativity and the ΛCDM model, and inconsistent with thus-far proposed modifications. In addition, some of the proposals are arguably incomplete, because they solve the "new" cosmological constant problem by proposing that the actual cosmological constant is exactly zero rather than a tiny number, but fail to solve the "old" cosmological constant problem of why quantum fluctuations seem to fail to produce substantial vacuum energy in the first place. Nevertheless, many physicists argue that, due in part to a lack of better alternatives, proposals to modify gravity should be considered "one of the most promising routes to tackling" the cosmological constant problem. Bill Unruh and collaborators have argued that when the energy density of the quantum vacuum is modeled more accurately as a fluctuating quantum field, the cosmological constant problem does not arise. Going in a different direction, George F. R. Ellis and others have suggested that in unimodular gravity, the troublesome contributions simply do not gravitate. Recently, a fully diffeomorphism-invariant action principle that gives the equations of motion for trace-free Einstein gravity has been proposed, in which the cosmological constant emerges as an integration constant. Another argument, due to Stanley Brodsky and Robert Shrock, is that in light-front quantization, the quantum field theory vacuum becomes essentially trivial. In the absence of vacuum expectation values, there is no contribution from quantum electrodynamics, weak interactions, and quantum chromodynamics to the cosmological constant. It is thus predicted to be zero in a flat spacetime. From the light-front quantization insight, the origin of the cosmological constant problem is traced back to unphysical non-causal terms in the standard calculation, which lead to an erroneously large value of the cosmological constant. In 2018, a mechanism for cancelling Λ out was proposed through the use of a symmetry-breaking potential in a Lagrangian formalism in which matter shows a non-vanishing pressure. The model assumes that standard matter provides a pressure which counterbalances the action due to the cosmological constant. Luongo and Muccino have shown that this mechanism permits the vacuum energy to be as large as quantum field theory predicts, with the huge magnitude removed by a counterbalancing term due to baryons and cold dark matter only. In 1999, Andrew Cohen, David B.
Kaplan and Ann Nelson proposed that correlations between the UV and IR cutoffs in effective quantum field theory are enough to reduce the theoretical cosmological constant to the measured cosmological constant due to the Cohen–Kaplan–Nelson (CKN) bound. In 2021, Nikita Blinov and Patrick Draper confirmed through the holographic principle that the CKN bound predicts the measured cosmological constant, all while maintaining the predictions of effective field theory in less extreme conditions. Some propose an anthropic solution, and argue that we live in one region of a vast multiverse that has different regions with different vacuum energies. These anthropic arguments posit that only regions of small vacuum energy such as the one in which we live are reasonably capable of supporting intelligent life. Such arguments have existed in some form since at least 1981. Around 1987, Steven Weinberg estimated that the maximum allowable vacuum energy for gravitationally bound structures to form is problematically large, even given the observational data available in 1987, and concluded the anthropic explanation appears to fail; however, more recent estimates by Weinberg and others, based on other considerations, find the bound to be closer to the actual observed level of dark energy. Anthropic arguments gradually gained credibility among many physicists after the discovery of dark energy and the development of the theoretical string theory landscape, but are still derided by a substantial skeptical portion of the scientific community as being problematic to verify. Proponents of anthropic solutions are themselves divided on multiple technical questions surrounding how to calculate the proportion of regions of the universe with various dark energy constants. See also Notes References External links (video by Fermilab's Don "Dr. Don" Lincoln) Physical cosmology Physics beyond the Standard Model
Cosmological constant problem
[ "Physics", "Astronomy" ]
1,702
[ "Astronomical sub-disciplines", "Theoretical physics", "Unsolved problems in physics", "Astrophysics", "Particle physics", "Physics beyond the Standard Model", "Physical cosmology" ]
22,108,748
https://en.wikipedia.org/wiki/Hippo%20signaling%20pathway
The Hippo signaling pathway, also known as the Salvador-Warts-Hippo (SWH) pathway, is a signaling pathway that controls organ size in animals through the regulation of cell proliferation and apoptosis. The pathway takes its name from one of its key signaling components—the protein kinase Hippo (Hpo). Mutations in this gene lead to tissue overgrowth, or a "hippopotamus"-like phenotype. A fundamental question in developmental biology is how an organ knows to stop growing after reaching a particular size. Organ growth relies on several processes occurring at the cellular level, including cell division and programmed cell death (or apoptosis). The Hippo signaling pathway is involved in restraining cell proliferation and promoting apoptosis. As many cancers are marked by unchecked cell division, this signaling pathway has become increasingly significant in the study of human cancer. The Hippo pathway also has a critical role in stem cell and tissue-specific progenitor cell self-renewal and expansion. The Hippo signaling pathway appears to be highly conserved. While most of the Hippo pathway components were identified in the fruit fly (Drosophila melanogaster) using mosaic genetic screens, orthologs to these components (genes that are related through speciation events and thus tend to retain the same function in different species) have subsequently been found in mammals. Thus, the delineation of the pathway in Drosophila has helped to identify many genes that function as oncogenes or tumor suppressors in mammals. Mechanism The Hippo pathway consists of a core kinase cascade in which Hpo phosphorylates the protein kinase Warts (Wts). Hpo (MST1/2 in mammals) is a member of the Ste-20 family of protein kinases. This highly conserved group of serine/threonine kinases regulates several cellular processes, including cell proliferation, apoptosis, and various stress responses. Once phosphorylated, Wts (LATS1/2 in mammals) becomes active. Misshapen (Msn, MAP4K4/6/7 in mammals) and Happyhour (Hppy, MAP4K1/2/3/5 in mammals) act in parallel to Hpo to activate Wts. Wts is a nuclear DBF-2-related kinase. These kinases are known regulators of cell cycle progression, growth, and development. Two proteins are known to facilitate the activation of Wts: Salvador (Sav) and Mob as tumor suppressor (Mats). Sav (SAV1 in mammals) is a WW domain-containing protein, meaning that this protein contains a sequence of amino acids in which a tryptophan and an invariant proline are highly conserved. Hpo can bind to and phosphorylate Sav, which may function as a scaffold protein because this Hpo-Sav interaction promotes phosphorylation of Wts. Hpo can also phosphorylate and activate Mats (MOBKL1A/B in mammals), which allows Mats to associate with and strengthen the kinase activity of Wts. Activated Wts can then go on to phosphorylate and inactivate the transcriptional coactivator Yorkie (Yki). Yki is unable to bind DNA by itself. In its active state, Yki binds to the transcription factor Scalloped (Sd), and the Yki-Sd complex becomes localized to the nucleus. This allows for the expression of several genes that promote organ growth, such as cyclin E, which promotes cell cycle progression, and diap1 (Drosophila inhibitor of apoptosis protein-1), which, as its name suggests, prevents apoptosis. Yki also activates expression of the bantam microRNA, a positive growth regulator that specifically affects cell number.
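The logic of this core cascade lends itself to a compact summary. The Python sketch below is a deliberately simplified Boolean model of the Drosophila signal flow just described (active Hpo, with Sav and Mats, activates Wts; active Wts phosphorylates Yki and keeps it out of the nucleus; nuclear Yki switches pro-growth genes on). It is an illustration of the wiring only, not a quantitative or validated biological model.

# Toy Boolean sketch of the Drosophila Hippo core cascade (illustration only).
def hippo_cascade(hpo_active, sav_present=True, mats_present=True):
    # Sav scaffolds the Hpo-Wts interaction; Mats strengthens Wts kinase activity.
    wts_active = hpo_active and sav_present and mats_present
    # Active Wts phosphorylates Yki (Ser168), promoting cytoplasmic retention.
    yki_nuclear = not wts_active
    # Nuclear Yki binds Sd and drives pro-growth targets (cyclin E, diap1, bantam).
    growth_targets_on = yki_nuclear
    return {"Wts_active": wts_active,
            "Yki_nuclear": yki_nuclear,
            "growth_genes_on": growth_targets_on}

print(hippo_cascade(hpo_active=True))   # pathway on  -> growth genes repressed
print(hippo_cascade(hpo_active=False))  # pathway off -> Yki nuclear, growth genes on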
Thus, the inactivation of Yki by Wts inhibits growth through the transcriptional repression of these pro-growth regulators. By phosphorylating Yki at serine 168, Wts promotes the association of Yki with 14-3-3 proteins, which help to anchor Yki in the cytoplasm and prevent its transport to the nucleus. In mammals, the two Yki orthologs are Yes-associated protein (YAP) and transcriptional coactivator with PDZ-binding motif (WWTR1, also known as TAZ). When activated, YAP and TAZ can bind to several transcription factors including p73, Runx2 and several TEADs. YAP regulates the expression of Hoxa1 and Hoxc13 in mouse and human epithelial cells in vivo and in vitro. The upstream regulators of the core Hpo/Wts kinase cascade include the transmembrane protein Fat and several membrane-associated proteins. As an atypical cadherin, Fat (FAT1-4 in mammals) may function as a receptor, though an extracellular ligand has not been positively identified. The GPI-anchored cell surface protein glypican-3 (GPC3) is known to interact with Fat1 in human liver cancer. GPC3 is also shown to modulate Yap signaling in liver cancer. While Fat is known to bind to another atypical cadherin, Dachsous (Ds), during tissue patterning, it is unclear what role Ds has in regulating tissue growth. Nevertheless, Fat is recognized as an upstream regulator of the Hpo pathway. Fat activates Hpo through the apical protein Expanded (Ex; FRMD6/Willin in mammals). Ex interacts with two other apically-localized proteins, Kibra (KIBRA in mammals) and Merlin (Mer; NF2 in mammals), to form the Kibra-Ex-Mer (KEM) complex. Both Ex and Mer are FERM domain-containing proteins, while Kibra, like Sav, is a WW domain-containing protein. The KEM complex physically interacts with the Hpo kinase cascade, thereby localizing the core kinase cascade to the plasma membrane for activation. Fat may also regulate Wts independently of Ex/Hpo, through the inhibition of the unconventional myosin Dachs. Normally, Dachs can bind to and promote the degradation of Wts. In cancer In the fruit fly, the Hippo signaling pathway involves a kinase cascade built around the Salvador (Sav), Warts (Wts) and Hippo (Hpo) proteins. Many of the genes involved in the Hippo signaling pathway are recognized as tumor suppressors, while Yki/YAP/TAZ is identified as an oncogene. YAP/TAZ can reprogram cancer cells into cancer stem cells. YAP has been found to be elevated in some human cancers, including breast cancer, colorectal cancer, and liver cancer. This may be explained by YAP's recently defined role in overcoming contact inhibition, a fundamental growth control property of normal cells in vitro and in vivo, in which proliferation stops after cells reach confluence (in culture) or occupy maximum available space inside the body and touch one another. This property is typically lost in cancerous cells, allowing them to proliferate in an uncontrolled manner. In fact, YAP overexpression antagonizes contact inhibition. Many of the pathway components recognized as tumor suppressor genes are mutated in human cancers. For example, mutations in Fat4 have been found in breast cancer, while NF2 is mutated in familial and sporadic schwannomas. Additionally, several human cancer cell lines harbor mutations of the SAV1 and MOBK1B proteins. However, recent research by Marc Kirschner and Taran Gujral has demonstrated that Hippo pathway components may play a more nuanced role in cancer than previously thought.
Hippo pathway inactivation enhanced the effect of 15 FDA-approved oncology drugs by promoting chemo-retention. In another study, the Hippo pathway kinases LATS1/2 were found to suppress cancer immunity in mice. Not all studies, however, support a role for Hippo signaling in promoting carcinogenesis. In hepatocellular carcinoma, for instance, it was suggested that AXIN1 mutations would provoke Hippo signaling pathway activation, fostering cancer development, but a recent study demonstrated that such an effect cannot be detected. Thus, the exact role of Hippo signaling in the cancer process awaits further elucidation. As a drug target Two venture-backed oncology startups, Vivace Therapeutics and the General Biotechnologies subsidiary Nivien Therapeutics, are actively developing kinase inhibitors targeting the Hippo pathway. Regulation of human organ size The heart is the first organ formed during mammalian development. A properly sized and functional heart is vital throughout the entire lifespan. Loss of cardiomyocytes because of injury or diseases leads to heart failure, which is a major cause of human morbidity and mortality. Unfortunately, the regenerative potential of the adult heart is limited. The Hippo pathway is a recently identified signaling cascade that plays an evolutionarily conserved role in organ size control by inhibiting cell proliferation, promoting apoptosis, regulating fates of stem/progenitor cells, and in some circumstances, limiting cell size. Research indicates a key role of this pathway in regulation of cardiomyocyte proliferation and heart size. Inactivation of the Hippo pathway or activation of its downstream effector, the Yes-associated protein transcription coactivator, improves cardiac regeneration. Several known upstream signals of the Hippo pathway such as mechanical stress, G-protein-coupled receptor signaling, and oxidative stress are known to play critical roles in cardiac physiology. In addition, Yes-associated protein has been shown to regulate cardiomyocyte fate through multiple transcriptional mechanisms. Gene name confusion Note that the Hippo pathway TAZ protein is often confused with the gene TAZ, which is unrelated to the Hippo pathway. The gene TAZ produces the protein tafazzin. The official gene name for the Hippo TAZ protein is WWTR1. Also, the official names for MST1 and MST2 are STK4 and STK3, respectively. All databases for bioinformatics use the official gene symbols, and commercial sources for PCR primers or siRNA also go by the official gene names. Summary table References Further reading Valentina Rausch, Carsten G. Hansen (2020). "The Hippo Pathway, YAP/TAZ, and the Plasma Membrane". Trends in Cell Biology. https://doi.org/10.1016/j.tcb.2019.10.005 Signal transduction Animal genes
Hippo signaling pathway
[ "Chemistry", "Biology" ]
2,235
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
8,028,338
https://en.wikipedia.org/wiki/Systematic%20evolution%20of%20ligands%20by%20exponential%20enrichment
Systematic evolution of ligands by exponential enrichment (SELEX), also referred to as in vitro selection or in vitro evolution, is a combinatorial chemistry technique in molecular biology for producing oligonucleotides of either single-stranded DNA or RNA that specifically bind to a target ligand or ligands. These single-stranded DNA or RNA molecules are commonly referred to as aptamers. Although SELEX has emerged as the most commonly used name for the procedure, some researchers have referred to it as SAAB (selected and amplified binding site) and CASTing (cyclic amplification and selection of targets). SELEX was first introduced in 1990. In 2015, a special issue was published in the Journal of Molecular Evolution in honor of the quarter century since the introduction of SELEX. The process begins with the synthesis of a very large oligonucleotide library, consisting of randomly generated sequences of fixed length flanked by constant 5' and 3' ends. The constant ends serve as primer binding sites, while a small fraction of the random regions is expected to bind specifically to the chosen target. For a randomly generated region of length n, the number of possible sequences in the library using conventional DNA or RNA is 4^n (n positions with four possibilities (A, T, C, G) at each position). The sequences in the library are exposed to the target ligand, which may be a protein or a small organic compound, and those that do not bind the target are removed, usually by affinity chromatography or target capture on paramagnetic beads. The bound sequences are eluted and amplified by PCR to prepare for subsequent rounds of selection, in which the stringency of the elution conditions can be increased to identify the tightest-binding sequences. A caution to consider in this method is that the selection of extremely high, sub-nanomolar binding affinity entities may not in fact improve specificity for the target molecule. Off-target binding to related molecules could have significant clinical effects. SELEX has been used to develop a number of aptamers that bind targets interesting for both clinical and research purposes. Nucleotides with chemically modified sugars and bases have been incorporated into SELEX reactions to increase the chemical diversity at each base, expanding the possibilities for specific and sensitive binding, or increasing stability in serum or in vivo. Procedure Aptamers have emerged as a novel category in the field of bioreceptors due to their wide applications ranging from biosensing to therapeutics. Several variations of their screening process, called SELEX, have been reported which can yield sequences with desired properties needed for their final use. Generating a single-stranded oligonucleotide library The first step of SELEX involves the synthesis of fully or partially randomized oligonucleotide sequences of some length flanked by defined regions which allow PCR amplification of those randomized regions and, in the case of RNA SELEX, in vitro transcription of the randomized sequence. While Ellington and Szostak demonstrated that chemical synthesis is capable of generating ~10^15 unique sequences for oligonucleotide libraries in their 1990 paper on in vitro selection, they found that amplification of these synthesized oligonucleotides led to significant loss of pool diversity due to PCR bias and defects in synthesized fragments.
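The combinatorics above are easy to make concrete. The short Python sketch below compares the theoretical diversity 4^n with the ~10^15-sequence synthesis scale quoted in the text; the lengths chosen are illustrative only.

SYNTH_POOL_SIZE = 1e15  # ~10^15 unique sequences, the practical synthesis scale cited above

def library_diversity(n):
    """Number of possible sequences for a random region of length n (4 bases per position)."""
    return 4.0 ** n

for n in (20, 30, 40):
    diversity = library_diversity(n)
    covered = min(1.0, SYNTH_POOL_SIZE / diversity)
    print(f"n={n}: 4^n = {diversity:.2e} sequences, "
          f"a 1e15 pool covers at most {covered:.2%}")
# For n=20, 4^20 ~ 1.1e12 < 1e15, so every possible sequence can be present;
# by n=30 (4^30 ~ 1.2e18) a 1e15 pool samples under 0.1% of sequence space.

This is why, as described next, the pool is amplified so that each sequence that is present exists in many copies: for short random regions the library can be exhaustive, while for longer regions it is necessarily a sparse sample.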
The oligonucleotide pool is amplified and a sufficient amount of the initial library is added to the reaction so that there are numerous copies of each individual sequence, minimizing the loss of potential binding sequences due to stochastic events. Before the library is introduced to the target for incubation and selective retention, the sequence library must be converted to single-stranded oligonucleotides to achieve structural conformations with target binding properties. Target incubation Immediately prior to target introduction, the single-stranded oligonucleotide library is often heated and cooled slowly to renature oligonucleotides into thermodynamically stable secondary and tertiary structures. Once prepared, the randomized library is incubated with immobilized target to allow oligonucleotide-target binding. There are several considerations for this target incubation step, including the target immobilization method and strategies for subsequent unbound oligonucleotide separation, incubation time and temperature, incubation buffer conditions, and target versus oligonucleotide concentrations. Examples of target immobilization methods include affinity chromatography columns, nitrocellulose binding assay filters, and paramagnetic beads. Recently, SELEX reactions have been developed where the target is whole cells, which are grown to near-complete confluence and incubated with the oligonucleotide library on culture plates. Incubation buffer conditions are altered based on the intended target and desired function of the selected aptamer. For example, in the case of negatively charged small molecules and proteins, high salt buffers are used for charge screening to allow nucleotides to approach the target and increase the chance of a specific binding event. Alternatively, if the desired aptamer function is in vivo protein or whole cell binding for potential therapeutic or diagnostic application, incubation buffer conditions similar to in vivo plasma salt concentrations and homeostatic temperatures are more likely to generate aptamers that can bind in vivo. Another consideration in incubation buffer conditions is non-specific competitors. If there is a high likelihood of non-specific oligonucleotide retention in the reaction conditions, non-specific competitors, which are small molecules or polymers other than the SELEX library that have similar physical properties to the library oligonucleotides, can be used to occupy these non-specific binding sites. Varying the relative concentration of target and oligonucleotides can also affect properties of the selected aptamers. If a good binding affinity for the selected aptamer is not a concern, then an excess of target can be used to increase the probability that at least some sequences will bind during incubation and be retained. However, this provides no selective pressure for high binding affinity, which requires the oligonucleotide library to be in excess so that there is competition between unique sequences for available specific binding sites. Binding sequence elution and amplification Once the oligonucleotide library has been incubated with target for sufficient time, unbound oligonucleotides are washed away from immobilized target, often using the incubation buffer, so that specifically bound oligonucleotides are retained.
With unbound sequences washed away, the specifically bound sequences are then eluted by creating denaturing conditions that promote oligonucleotide unfolding or loss of binding conformation, including flowing in deionized water, using denaturing solutions containing urea and EDTA, or applying high heat and physical force. Upon elution of bound sequences, the retained oligonucleotides are reverse-transcribed to DNA in the case of RNA or modified-base selections, or simply collected for amplification in the case of DNA SELEX. These DNA templates from eluted sequences are then amplified via PCR and converted to single-stranded DNA, RNA, or modified-base oligonucleotides, which are used as the initial input for the next round of selection. Obtaining ssDNA One of the most critical steps in the SELEX procedure is obtaining single-stranded DNA (ssDNA) after the PCR amplification step. This will serve as input for the next cycle, so it is of vital importance that all the DNA is single-stranded and as little as possible is lost. Because of its relative simplicity, one of the most used methods is using biotinylated reverse primers in the amplification step, after which the complementary strands can be bound to a resin, followed by elution of the other strand with lye (sodium hydroxide). Another method is asymmetric PCR, where the amplification step is performed with an excess of forward primer and very little reverse primer, which leads to the production of more of the desired strand. A drawback of this method is that the product must be purified from double-stranded DNA (dsDNA) and other left-over material from the PCR reaction. Enzymatic degradation of the unwanted strand can be performed by tagging this strand using a 5'-phosphorylated primer, as the phosphorylated strand is recognized by enzymes such as lambda exonuclease. These enzymes then selectively degrade the phosphate-tagged strand, leaving the complementary strand intact. All of these methods recover approximately 50 to 70% of the DNA. For a detailed comparison, refer to the article by Svobodová et al., where these and other methods are experimentally compared. In classical SELEX, the process of randomized single-stranded library generation, target incubation, and binding sequence elution and amplification described above is repeated until the vast majority of the retained pool consists of target binding sequences, though there are modifications and additions to the procedure that are often used, which are discussed below. Negative or counter selection In order to increase the specificity of aptamers selected by a given SELEX procedure, a negative selection, or counter selection, step can be added prior to or immediately following target incubation. To eliminate sequences with affinity for target immobilization matrix components from the pool, negative selection can be used where the library is incubated with target immobilization matrix components and unbound sequences are retained. Negative selection can also be used to eliminate sequences that bind target-like molecules or cells by incubating the oligonucleotide library with small molecule target analogs, undesired cell types, or non-target proteins and retaining the unbound sequences. Tracking selection progression To track the progress of a SELEX reaction, the number of target-bound molecules, which is equivalent to the number of oligonucleotides eluted, can be compared to the estimated total input of oligonucleotides following elution at each round.
The number of eluted oligonucleotides can be estimated from eluate concentration measurements via 260 nm absorbance or fluorescent labeling of oligonucleotides. As the SELEX reaction approaches completion, the fraction of the oligonucleotide library that binds target approaches 100%, such that the number of eluted molecules approaches the total oligonucleotide input estimate, but may converge at a lower number. Caveats and considerations Some SELEX reactions can generate probes that are dependent on primer binding regions for secondary structure formation. There are aptamer applications for which a short sequence, and thus primer truncation, is desirable. An advancement on the original method allows an RNA library to omit the constant primer regions, which can be difficult to remove after the selection process because they stabilize secondary structures that are unstable when formed by the random region alone. Chemically modified nucleotides Recently, SELEX has expanded to include the use of chemically modified nucleotides. These chemically modified oligonucleotides offer many potential advantages for selected aptamers, including greater stability and nuclease resistance, enhanced binding for select targets, expanded physical properties such as increased hydrophobicity, and more diverse structural conformations. The genetic alphabet, and thus the space of possible aptamers, is also expanded using unnatural base pairs; the use of these unnatural base pairs has been applied to SELEX, and high-affinity DNA aptamers have been generated. SELEX variants and alternative aptamer selection methods FRELEX was developed in 2016 by NeoVentures Biotechnology Inc to allow the selection of aptamers without immobilizing the target or the oligonucleotide library. Immobilization is a necessary component of SELEX; however, it has the potential to inhibit key epitopes, and thus weaken the likelihood of successful binding, particularly when working with small molecules. FRELEX follows a similar overall methodology to SELEX; however, instead of immobilizing the target, the researcher introduces a series of random and blocker oligonucleotides to an immobilization field before introduction to the target. This allows the researcher to better target small molecules that may be lost during partitioning. It also can be used in some circumstances to select an aptamer library without knowing the target. Most modern aptamer selection methods strive to improve the conventional SELEX aptamer search method. Despite the publication of various methods aimed at increasing the affinity and specificity of aptamers, experimental approaches face limitations in the number and variety of sequences that can be examined and selected. Library capacity for SELEX experiments is practically limited to 10^15 candidates, whereas, assuming there is a 4-monomer repertoire from which pools can be created, there are ~1.6 × 10^60 unique sequences in a sequence space limited to 100-residue sequences, which is clearly beyond experimental capabilities. The library of oligonucleotides must be extremely diverse and must avoid both linear structures, which are incapable of providing a stable spatial arrangement, and double-stranded structures; due to these limitations, oligonucleotide libraries can cover a diversity of only ~10^6 sequences. This means that existing aptamers may not fully cover the diversity of target molecules or may not have optimal properties due to limitations of the underlying method.
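To put those numbers side by side, the short sketch below (illustrative figures only, taken from the claims above) computes the fraction of 100-mer sequence space that even a maximal experimental library can reach:

LIBRARY_CAPACITY = 1e15   # practical upper bound on SELEX library size (from the text)
N_RESIDUES = 100          # length of the random region considered above

sequence_space = 4.0 ** N_RESIDUES                 # ~1.6e60 unique 100-mers
fraction_covered = LIBRARY_CAPACITY / sequence_space
print(f"sequence space: {sequence_space:.1e}")                         # ~1.6e60
print(f"fraction covered by a 1e15 library: {fraction_covered:.1e}")   # ~6e-46
# Even a maximal library samples a vanishingly small corner of sequence space,
# which motivates the in silico pre-filtering approaches described next.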
To yield the best possible aptamers, one must maximize the effectiveness of the discovery process and the library itself. RNA and DNA secondary structure prediction by dynamic programming algorithms such as RNAfold (ViennaRNA) and by machine learning models such as SPOT-RNA and MXfold2 provides the opportunity to assess the ability of sequences in the primary library to fold into complex structures, allowing for the selection of only the most promising sequences from the entire pool. However, these algorithms are computationally slow, making them poorly suited for this task at library scale. For this reason, algorithms like Ufold from the University of California and AliNA from Xelari Inc. have been developed, which demonstrate a significant increase in computational speed due to their more efficient architectures, and can be applied for preliminary in silico analysis of these libraries. Prior targets The technique has been used to evolve aptamers of extremely high binding affinity to a variety of target ligands, including small molecules such as ATP and adenosine and proteins such as prions and vascular endothelial growth factor (VEGF). Moreover, SELEX has been used to select high-affinity aptamers for complex targets such as tumor cells, tumor exosomes, or tumor tissue. Clinical uses of the technique are suggested by aptamers that bind tumor markers and GFP-related fluorophores; a VEGF-binding aptamer trade-named Macugen has been approved by the FDA for treatment of macular degeneration. Additionally, SELEX has been utilized to obtain highly specific catalytic DNA, or DNAzymes. Several metal-specific DNAzymes have been reported, including the GR-5 DNAzyme (lead-specific), the CA1-3 DNAzymes (copper-specific), the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific). These developed aptamers have seen diverse application in therapies for macular degeneration and various research applications including biosensors, fluorescent labeling of proteins and cells, and selective enzyme inhibition. See also References Further reading External links Aptamer Base Evolution Genetics techniques Molecular biology
Systematic evolution of ligands by exponential enrichment
[ "Chemistry", "Engineering", "Biology" ]
3,184
[ "Genetics techniques", "Biochemistry", "Genetic engineering", "Molecular biology" ]
8,030,835
https://en.wikipedia.org/wiki/Pneumatic%20barrier
A pneumatic barrier is a method to contain oil spills. It is also called a bubble curtain. Air bubbling through a perforated pipe causes an upward water flow that slows the spread of oil. It can also be used to stop fish from entering polluted water. A further application of the pneumatic barrier is to decrease the salt-water exchange in navigation locks and prevent salt intrusion in rivers. Pneumatic barriers are also known as air curtains. The pneumatic barrier is a (non-patented) invention of the Dutch engineer Johan van Veen from around 1940. A pneumatic barrier is an active (as opposed to passive) method of waterway oil spill control. (An example of a passive method would be a containment boom.) Method of operation The pneumatic barrier consists of a perforated pipe and a compressed air source. Air escaping from the pipe provides a "hump" of rising water and air which contains the oil spill. Anchors to keep the pipe in a particular spot are helpful. In case of a density current due to salinity differences, the barrier mixes the salt water, but also slows down the speed of the density current. Unique considerations At water-current speeds exceeding one foot per second (about 0.3 m/s), the pneumatic barrier no longer functions effectively, limiting deployable sites. Environmental issues The release of compressed air in the water adds oxygen to the local environment. This may be particularly useful in areas that have become a dead zone due to eutrophication. Air curtains may have another application. Dolphin and whale beaching has increased with the rise in ocean temperatures. In February 2017, a group of nearly 400 whales beached near Golden Bay on the tip of New Zealand's South Island, following a similar incident earlier that week. The simplicity of an air curtain system, requiring only air compressors and perforated hoses, could allow for rapid deployment and create aerated zones of oxygenated seawater during a marine emergency. Air curtains are also used to control the release of smoke particles into the environment. After a natural disaster, or during brush clearing activities, debris is disposed of by incineration in either a ceramic or earth pit containment. Similar to an air curtain used to separate indoor air from outdoor air, for instance in restaurants and walk-in refrigerators, a powerful air curtain can defeat the chimney effect of the incineration process to eliminate any smoke from a brush incinerator. The air curtain acts as a lid on the process, and forces the smoke back into the fuel bed for a cleaner burn. Disadvantages As with all active systems, a mechanical failure can result in total loss of protection. External links Development of an air bubble curtain to reduce underwater noise of percussive piling Marine Environmental Research 49(2000)79-93, Elsevier Retrieved 2/16/2017 Bubble Curtains: Can They Dampen Offshore Energy Sound for Whales? National Geographic Retrieved 2/16/2017 YouTube: How an Air Curtain Works by Berner International Retrieved 2/16/2017 Fluid dynamics Pollution
Pneumatic barrier
[ "Chemistry", "Engineering" ]
626
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
8,031,252
https://en.wikipedia.org/wiki/Osmotic-controlled%20release%20oral%20delivery%20system
The osmotic-controlled release oral delivery system (OROS) is an advanced controlled-release oral drug delivery system in the form of a rigid tablet with a semipermeable outer membrane and one or more small laser-drilled holes in it. As the tablet passes through the body, water is absorbed through the semipermeable membrane via osmosis, and the resulting osmotic pressure is used to push the active drug through the laser-drilled opening(s) in the tablet and into the gastrointestinal tract. OROS is a trademarked name owned by ALZA Corporation, which pioneered the use of osmotic pumps for oral drug delivery. Rationale Pros and cons Osmotic release systems have a number of major advantages over other controlled-release mechanisms. They are significantly less affected by factors such as pH, food intake, GI motility, and differing intestinal environments. Using an osmotic pump to deliver drugs has additional inherent advantages regarding control over drug delivery rates. This allows for much more precise drug delivery over an extended period of time, which results in much more predictable pharmacokinetics. However, osmotic release systems are relatively complicated, somewhat difficult to manufacture, and may cause irritation or even blockage of the GI tract due to prolonged release of irritating drugs from the non-deformable tablet. Oral osmotic release systems Single-layer The Elementary Osmotic Pump (EOP) was developed by ALZA in 1974, and was the first practical example of an osmotic pump based drug release system for oral use. It was introduced to the market in the early 1980s in Osmosin (indomethacin) and Acutrim (phenylpropanolamine), but unexpectedly severe issues with GI irritation and cases of GI perforation led to the withdrawal of Osmosin. Merck & Co. later developed the Controlled-Porosity Osmotic Pump (CPOP) with the intention of addressing some of the issues that led to Osmosin's withdrawal via a new approach to the final stage of the release mechanism. Unlike the EOP, the CPOP had no pre-formed hole in the outer shell for the drug to be expelled out of. Instead, the CPOP's semipermeable membrane was designed to form numerous small pores upon contact with water through which the drug would be expelled via osmotic pressure. The pores were formed via the use of a pH insensitive leachable or dissolvable additive such as sorbitol. Multi-layer Both the EOP and CPOP were relatively simple designs, and were limited by their inability to deliver poorly soluble drugs. This led to the development of an additional internal "push layer" composed of material (a swellable polymer) that would expand as it absorbed water, which then pushed the drug layer (which incorporates a viscous polymer for suspension of poorly soluble drugs) out of the exit hole at a controlled rate. Osmotic agents such as sodium chloride, potassium chloride, or xylitol are added to both the drug and push layers to increase the osmotic pressure. The initial design developed in 1982 by ALZA researchers was designated the Push-Pull Osmotic Pump (PPOP), and Procardia XL (nifedipine) was one of the first drugs to utilize this PPOP design. In the early 1990s, an ALZA-funded research program began to develop a new dosage form of methylphenidate for the treatment of children with attention deficit hyperactivity disorder (ADHD). Methylphenidate's short half-life required multiple doses to be administered each day to attain long-lasting coverage, which made it an ideal candidate for the OROS technology.
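The controlled, essentially constant rate that makes osmotic pumps attractive follows from a generic relation found in the drug-delivery literature, dm/dt = (A·k/h)·Δπ·C: while excess solid drug keeps the dispensed fluid saturated, Δπ and C stay roughly constant, giving zero-order release. The sketch below is a minimal illustration of that idea; every parameter value is invented for the example and describes no actual OROS product.

# Minimal sketch of zero-order delivery from an elementary osmotic pump.
# dm/dt = (A * k / h) * delta_pi * C while solid drug maintains saturation.
# All numbers below are invented for illustration only.
A = 2.0e-4            # membrane area, m^2 (assumed)
h = 2.0e-4            # membrane thickness, m (assumed)
k = 8.0e-19           # membrane permeability, m^2/(Pa*s) (assumed)
delta_pi = 5.0e6      # osmotic pressure difference, Pa (assumed)
C = 200.0             # drug concentration in the pumped fluid, kg/m^3 (assumed)
drug_load = 3.0e-5    # total drug in the core, kg (30 mg, assumed)

rate = (A * k / h) * delta_pi * C        # kg/s, constant while the core stays saturated
hours_to_empty = drug_load / rate / 3600
print(f"release rate ~ {rate * 1e6 * 3600:.2f} mg/hour")   # ~2.9 mg/hour
print(f"core depleted after ~ {hours_to_empty:.1f} hours")  # ~10 hours

A flat profile of this kind is exactly what the PPOP was built to deliver; as the next paragraph describes, the methylphenidate program found that a flat profile was not actually what the drug needed.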
Multiple candidate pharmacokinetic profiles were evaluated and tested in an attempt to determine the optimal way to deliver the drug, which was especially important given the puzzling failure of an existing extended-release formulation of methylphenidate (Ritalin SR) to act as expected. The zero-order (flat) release profile that the PPOP was optimized to deliver failed to maintain its efficacy over time, which suggested that acute tolerance to methylphenidate formed over the course of the day. This explained why Ritalin SR was inferior to twice-daily Ritalin IR, and led to the hypothesis that an ascending pattern of drug delivery was necessary to maintain clinical effect. Trials designed to test this hypothesis were successful, and ALZA subsequently developed a modified PPOP design that utilized an overcoat of methylphenidate designed to release immediately and rapidly raise serum levels, followed by 10 hours of first-order (ascending) drug delivery. This design was called the Push-Stick Osmotic Pump (PSOP), and utilized two separate drug layers with different concentrations of methylphenidate in addition to the (now quite robust) push layer. List of OROS medications OROS medications include: References Pharmaceutical industry Drug delivery devices Dosage forms Alza brands Pharmacology Pharmacokinetics
Osmotic-controlled release oral delivery system
[ "Chemistry", "Biology" ]
1,042
[ "Pharmacology", "Life sciences industry", "Pharmacokinetics", "Pharmaceutical industry", "Drug delivery devices", "Medicinal chemistry" ]
8,031,769
https://en.wikipedia.org/wiki/Vlaams%20Instituut%20voor%20Biotechnologie
VIB is a research institute located in Flanders, Belgium. It was founded by the Flemish government in 1995, and became a full-fledged institute on 1 January 1996. The main objective of VIB is to strengthen the excellence of Flemish life sciences research and to turn the results into new economic growth. VIB spends almost 80% of its budget on research activities, while almost 12% is spent on technology transfer activities and stimulating the creation of new businesses; in addition, VIB spends approximately 2% on socio-economic activities. VIB is a member of EU-LIFE, an alliance of leading life sciences research centres in Europe. The institute is led by Christine Durinx and Jérôme Van Biervliet. Ajit Shetty is chairman of the board of directors. Goals VIB's mission is to conduct frontline biomolecular research in life sciences for the benefit of scientific progress and the benefit of society. The strategic goals of the VIB are: Strategic basic research Technology transfer policy to transfer the inventions to consumers and patients Scientific information for the general public Research Centers VIB scientists work on the normal and abnormal (pathological) processes occurring in cells, organs and organisms (humans, plants, microorganisms). Instead of relocating scientists to a new campus, the VIB researchers work in research departments on six Flemish campuses: Ghent University, KU Leuven, University of Antwerp, Vrije Universiteit Brussel, IMEC and Hasselt University. Ghent University: VIB Inflammation Research Center, UGent (Bart Lambrecht) VIB Center for Plant Systems Biology, UGent (Dirk Inzé) VIB Medical Biotechnology Center, UGent (Nico Callewaert) Institute of Plant Biotechnology Outreach (IPBO), UGent (Marc Van Montagu) KU Leuven: VIB Center for Cancer Biology, KU Leuven (Scientific directors: Diether Lambrechts and Chris Marine) VIB Center for Brain & Disease Research, KU Leuven (Scientific directors: Patrik Verstreken and Joris de Wit) VIB Center for Microbiology, KU Leuven (Scientific director: Kevin Verstrepen) IMEC Campus NERF, a joint research initiative between IMEC, VIB and KU Leuven University of Antwerp: VIB Department of Molecular Genetics, University of Antwerp (Rosa Rademakers) VIB Center for Molecular Neurology, University of Antwerp Vrije Universiteit Brussel: VIB Structural Biology Research Center, Vrije Universiteit Brussel (Jan Steyaert) VIB Laboratory Myeloid Cell Immunology, Vrije Universiteit Brussel (Jo Van Ginderachter) VIB Nanobody Service Facility, Vrije Universiteit Brussel Hasselt University Campus Service facilities VIB has established several core facilities focused on advanced technologies, which make high through-flow technologies available to academic and industrial researchers in Flanders. VIB BioInformatics Training and Service facility VIB Compound Screening service Facility, UGent VIB Genetic Service Facility, University of Antwerp VIB Nucleomics Core, KU Leuven VIB Nanobody Service Facility, Vrije Universiteit Brussel VIB Protein Service Facility, UGent VIB Proteomics Expertise Center, UGent VIB Bio Imaging Core, UGent and KU Leuven VIB Metabolomics Core, UGent Spin-offs VIB was involved in the creation of spin-offs from academic research groups, such as Ablynx, DevGen, CropDesign, ActoGeniX, Pronota (formerly Peakadilly), Agrosavfe, Multiplicom, Q-biologicals, SoluCel, Aphea.Bio and Aelin Therapeutics.
See also Belgian Society of Biochemistry and Molecular Biology BIOMED (University of Hasselt) EMBL Flanders Investment and Trade Flemish institute for technological research GIMV Herman Van Den Berghe Institute for the promotion of Innovation by Science and Technology (IWT) Jozef Schell Lisbon Strategy Marc Van Montagu Participatiemaatschappij Vlaanderen Raymond Hamers Science and technology in Flanders Walter Fiers Wellcome Trust References Sources J. Comijn, P. Raeymaekers, A. Van Gysel, M. Veugelers, Today = Tomorrow : a tribute to life sciences research and innovation : 10 years of VIB, Snoeck, 2006, Biotechnology industry in Belgium External links Official website Bioinformatics organizations Biological research institutes Biology societies Education in Belgium Educational organisations based in Belgium Flanders Genetics organizations Gene banks Information technology organizations based in Europe International research institutes International scientific organizations based in Europe Medical and health organisations based in Belgium Molecular biology institutes Molecular biology organizations Scientific organisations based in Belgium Research institutes Research institutes in Belgium Science and technology in Belgium Science and technology in Europe Systems science institutes Vrije Universiteit Brussel
Vlaams Instituut voor Biotechnologie
[ "Chemistry", "Biology" ]
997
[ "Bioinformatics", "Bioinformatics organizations", "Molecular biology organizations", "Molecular biology" ]
8,032,596
https://en.wikipedia.org/wiki/Gasoline%20and%20diesel%20usage%20and%20pricing
The usage and pricing of gasoline (or petrol) results from factors such as crude oil prices, processing and distribution costs, local demand, the strength of local currencies, local taxation or subsidy, and the availability of local sources of gasoline (supply). Since fuels are traded worldwide, the trade prices are similar. The price paid by consumers largely reflects national pricing policy. Most countries impose taxes on gasoline (petrol), whose use causes air pollution and climate change, whereas a few, such as Venezuela, subsidize the cost. Some countries' taxes do not cover all the negative externalities, that is, they do not make the polluter pay the full cost. Western countries have among the highest usage rates per person. The largest consumer is the United States. Fuel prices in the United States In 2008, a report by Cambridge Energy Research Associates stated that 2007 had been the year of peak gasoline usage in the United States, and that record energy prices would cause an "enduring shift" in energy consumption practices. According to the report, in April fuel consumption had been lower than a year before for the sixth straight month, suggesting 2008 would be the first year US usage declined in 17 years. The total annual distance driven in the US began declining in 2006. After Hurricane Katrina and Hurricane Rita, gas prices started rising to record high levels. In terms of the aggregate economy, increases in crude oil prices significantly predict the growth of real gross domestic product (GDP), but increases in natural gas prices do not. In August 2005, after the damage from Hurricane Katrina ran up gas prices, on August 30, a day after Katrina's landfall, prices in the spot market, which typically include a premium above the wellhead price, had surged past , and on 22 September 2005, the day before Rita's landfall, the spot price had risen to . In the fifteen years prior to the 1973 oil crisis, gasoline prices in the U.S. had lagged well behind inflation. A 249-page May 2004 report by the Government Accountability Office (GAO), the "congressional watchdog", investigating the impact of about 2,600 petroleum industry mergers from the 1990s to 2004, said that one of the consequences of the mergers was that they enhanced the ability of the merged companies to control prices. Companies engaged in exploration and production were more likely to merge to improve efficiency and to decrease costs. In early 2020, the gas price dropped to a national average of $1.73 a gallon. In 2021, the average price increased to $3.01/gallon. By the end of June 2022, the price of gasoline reached a record high of over $5/gallon, with some places reporting $6/gallon. While prices came down after the June peak, they began to tick up again. Gas prices hit $3.79 a gallon the week of September 29, 2022, up from $3.73 on September 23, 2022 — an increase of $0.06 per gallon over the week. Since October 10, 2022, the price of gasoline has gone down again. Gas prices dropped to $3.64 a gallon by November 28, 2022, down from $3.76 the week before – a decrease of $0.12 per gallon during that time.
Factors affecting gasoline prices According to the Energy Information Administration (EIA), as of March 2022, factors that affect the price of gasoline in the United States include the price of crude oil per barrel, costs and profits related to refining, distribution, and marketing, and taxes, along with the charge set by refiners for gasoline based on octane level; premium grade cost about 68 cents per gallon more than regular grade in 2021. The largest component of the average price of $2.80/gallon of regular grade gasoline in the United States from 2012 through 2021, representing 54.8% of the price of gas, was the price of crude oil. The second largest component during the same period was taxes—federal and state taxes representing 17% of the price of gas. The third component, representing 14.3%, was distribution and marketing. The fourth component, representing 14%, was refining costs and profits. In 2021, with the average price increased to $3.01/gallon, crude oil accounted for 53.6%, taxes for 16.4%, distribution and marketing for 15.6%, and refining costs and profits for 14.4%. Crude oil Crude oil is the greatest contributing factor when it comes to the price of gasoline and diesel. This includes the resources it takes to explore for it, remove it from the ground, and transport it. Between 2004 and 2008, there was an increase in fuel costs due in large part to a worldwide increase in demand for crude oil. Prices leapt from , causing a corresponding increase in gas prices. On the supply side, OPEC (or the Organization of the Petroleum Exporting Countries) has a great deal to do with the price of gasoline, both in the United States and around the world. Speculation in oil commodities can also affect the gasoline market. Marketing and distribution Distribution and marketing makes up the remaining 5%. The cost of transporting crude oil to a refinery and then gasoline to a point of distribution is passed on to the consumer. In addition, the cost of marketing the fuel brand is passed on. Taxes Other factors Aside from this breakdown, many other factors affect gasoline prices. Extreme weather, war, or natural disaster in areas where oil is produced can also raise the price of a gallon of gasoline. Legislation by several states for cleaner-burning fuel also affects certain areas' prices of gasoline. The balance between supply and demand directly affects the price of gasoline. U.S. consumption of gasoline follows a seasonal pattern, where every year it is more expensive during the summer months when more people are driving. The long-term trend of consumption has been to increase year over year since 1950, with dips around the introduction of corporate average fuel economy standards and the 1979 energy crisis, the early 1990s recession and Gulf War, and the Great Recession. Petrol usage and pricing in Europe Most European countries have higher fuel taxes than the US, but Russia and some neighboring countries have a much smaller tax, with fuel prices similar to the US. Competitive petrol pricing in the UK is led by supermarkets with their own forecourts. Generally each supermarket tends to match the others' prices, the lead players being Asda, Tesco, Sainsbury's and Morrisons. Countries with subsidised gasoline A number of countries subsidize fossil fuels such as petrol/gasoline and other petroleum products. Subsidies make transport of people and goods cheaper, but discourage fuel efficiency.
In some countries, the soaring cost of crude oil since 2003 has led to these subsidies being cut, moving inflation from the government debt to the general populace, sometimes resulting in political unrest. Fuel subsidies are common in oil-rich nations. Countries with subsidized fuel include Saudi Arabia, Iran, Egypt, Burma, Kuwait, Bahrain, Trinidad and Tobago, Brunei, Venezuela, Ecuador and Bolivia. In February 2010, the Iranian government implemented an energy price reform by which the energy subsidies were to be removed in five years; the most important price hike was in gasoline, as the price went up from 1000 rials ($0.10 US) to 4000 rials ($0.40 US) per litre, with a ration of 100 litres per month for private passenger cars (later reduced to 60 litres per month). On 26 December 2010, the Bolivian government issued a decree removing subsidies which had fixed petrol/gasoline and diesel prices for the past seven years. Arguing that illegal exports (contraband) of gasoline and diesel fuel to neighboring countries by individuals for personal profit were harming the economy, Bolivia eliminated the subsidies and raised gasoline prices by as much as 83%. After widespread labor strikes, the Bolivian government canceled all future planned price hikes. Venezuela had the cheapest gasoline in the world for decades; however, on 19 February 2016, President Nicolás Maduro used decree powers to raise fuel prices by 6,000%. This was the first rise in petrol prices in 20 years, and he also set in place a sharp devaluation of the currency, which he said aimed to shore up the country's failing economy, hard hit by falling prices for oil, which makes up 95% of foreign income. Prices at the pump in Venezuela jumped as much as 6,086% for 95 octane gasoline, from 0.097 bolivars to 6 bolivars. On 20 May 2020, the government increased the price to US$0.50 a litre. Iran The Iranian government introduced an energy price reform in February 2010. The reform was brought forward by the government and approved with some changes by the parliament. The major aim of the policy was to slow down the increasing trend of energy consumption in Iran by removing the energy subsidies. The plan included electricity, natural gas, gasoline, and diesel subsidies. According to the plan, all energy prices were to increase by 20 percent annually. The price reform was particularly important in gasoline, as consumption had been increasing dramatically, creating a huge burden on the government budget. Furthermore, to meet demand, Iran had to import gasoline from other countries, which made the country vulnerable to possible sanctions by the US and European countries. The gas price prior to the reform was $0.10 US per liter, with a quota of 100 liters per month per passenger car. The reform raised the price to $0.40 US per liter, and the quota was later reduced to 60 liters per month. The price for over-quota consumption and for imported cars was $0.70 US per liter. The energy price reform included a cash-rebate program through which each person received 455,000 rials ($15 US) per month from the government. The overall consumption of gasoline after the reform decreased from about 65 million liters per day to about 54 million liters per day. The price of gasoline based on the official USD to IRR rate was US$0.29/litre in 2018, the second-cheapest price in the world after Venezuela. Nigeria Petrol subsidies mainly benefit rich people.
On 1 January 2012, the Nigerian government, headed by President Goodluck Ebele Jonathan, tried to end the subsidy on petrol and deregulate oil prices by announcing a new price for petrol of US$0.88/litre, up from the old subsidised price of US$0.406/litre (Lagos); in areas distant from Lagos, petrol was priced at US$1.25/litre. This led to the longest general strike (eight days), riots and Arab Spring-like protests, and on 16 January 2012 the government capitulated by announcing a new price of US$0.60/litre, with an envisaged price of US$2.0/litre in distant areas. In May 2016, the Buhari administration increased fuel prices again to NGN 145 per litre ($0.43 at black market rates for the currency). In September 2020, the government announced an increase in the pump price of petrol to NGN 151.56 per litre from NGN 148. Mexico PEMEX, a government company in charge of selling oil in Mexico, is subsidized by the Mexican government. This serves to quell inflationary pressures in Mexico. Mexico buys much of its gasoline and diesel from the United States and resells it at US$98 per barrel. Many residents of US border communities cross the border to buy fuel in Mexico, thereby enjoying a cheaper fuel subsidy at the expense of Mexican taxpayers. This has caused frequent supply shortages at a number of filling stations along the border for Mexican drivers, especially truck and bus drivers who use diesel. In 2017, Mexico ended its oil industry subsidies, leading to increased prices and widespread protests throughout the country. Trinidad and Tobago Trinidad and Tobago, through its national energy agencies Petrotrin and the Trinidad and Tobago National Petroleum Marketing Company Limited (NP), offers petroleum fuels at varying subsidised prices to users within the country. Unleaded gasoline is offered in two grades: RON 91 at US$0.43/litre and RON 95 at US$0.91/litre. Diesel is offered at US$0.24/litre, making this fuel some of the cheapest in the world. There were an estimated 791,086 cars in the country as of February 2015, consuming 1.2 billion litres of liquid fuel annually. The Government of Trinidad and Tobago spent an estimated US$173.2 million in subsidies for gasoline and diesel in the half-year period October 2014 – March 2015. United States The oil industry receives subsidies through the United States tax code, which include the Percentage Depletion Allowance, the Domestic Manufacturing Tax Deduction, the Foreign Tax Credit and the Expensing of Intangible Drilling Costs. It is estimated that these tax deductions are worth $4 billion annually and are currently being debated by the government for reform. Although such subsidies exist, the sale of fuel is also taxed at rates that far exceed the sales tax rates for other goods, to help pay for bridge and road repair. It is thus unclear whether the tax impact on fuel is a net subsidy or not. However, fuel taxes can account for less than half of government road and highway spending. The additional spending is clearly a subsidy, since the existence of these roads creates fuel demand. Moreover, since the fuel tax itself is allocated to road repair, and petroleum vehicles are the main users of roads, one cannot claim the existence of such a tax cancels out other subsidies. In 2021–22, gasoline and diesel prices surged in the United States, reaching record highs, as part of a larger trend of inflation. The average price of regular gasoline rose from $1.773 during the week of April 27, 2020, to $5 as of June 11, 2022, an all-time high.
California led the nation in gas prices, with the average for a gallon of regular gas in the state reaching $6.43 as of June 11, 2022. Venezuela Venezuela currently has a total of 1,500 gas stations: 500 of these sell gas at the subsidized price, 500 sell gas at a dollar-denominated price, and the rest sell gas interchangeably at both (subsidized and unsubsidized) prices. In 2013, PDVSA, the Venezuelan state-owned oil company, spent US$1.7 billion in direct costs of importing gasoline and subsidizing all sales of gasoline in the internal Venezuelan market. The sale price of gasoline was US$0.015 per liter, a fixed price in the local currency that had been in effect since 1997. Given the low price of gasoline, it is distributed free of charge to gas stations. On May 30, 2020, the government announced a price increase to US$0.5 per liter, which remained the price as of 4 June 2021, though with supply shortages at the service stations. Countries that formerly subsidised gasoline Indonesia With oil reaching over US$145 a barrel, Indonesia further increased prices in May 2008 to Rp 6,000 (approx. US$0.65) per litre, and diesel to Rp 5,500 (approx. US$0.60) per litre, while kerosene was raised to Rp 2,500 (approx. US$0.28), moves which caused widespread protests. Furthermore, in November 2014, the new government led by President Joko Widodo reallocated the government subsidy for gasoline and diesel to the nation's infrastructure, education and health budgets, and accordingly raised the prices of subsidized gasoline and diesel by Rp 2,000 each, so that they became Rp 8,500 and Rp 7,500 respectively. This decision fuelled inflation and protests throughout the archipelago. Malaysia Malaysia had subsidised gasoline since 1983. In 2014, Malaysia abolished fuel subsidies and began using a managed float system, in order to control the country's large current account deficit. Gasoline prices around the world Protests India Wide protests against petrol price hikes have been frequent in the last decade. On 24 May 2012, the petrol price was hiked by ₹7.50, resulting in prices in the range of ₹73–82 ($0.97–1.09) per litre across the country. The opposition declared a bandh on 31 May 2012 across the country to protest against the price hike, which evoked a mixed response amid incidents of stone-pelting, arson and road blockades in some parts of the country. See also Automobile costs References Petrol Prices in India Petrol Prices in Malaysia External links United States Who is in the Oil Futures Market and How Has It Changed?, by Rice University's Baker Institute For Public Policy FAQs about gas prices at FuelEconomy.gov by the US Dept. of Energy Factors affecting gas prices (US Dept. of Energy) Understanding Gasoline Prices - 2005 report from the United States Government Accountability Office AAA's Daily Fuel Gauge Report US EIA Gasoline and Diesel Fuel Report US Energy Information Administration 2012 NACS Retail Fuels Report International European Market Observatory Official EU statistics on Gasoline and Diesel prices Global Fuel Price Comparison UK petrol prices compared to other countries showing contributing factors. Gas Prices Around the World Interactive Gas Prices Around the World Conde Nast Portfolio EU Fuel Prices Fuel prices in EU countries. Fuel prices with and without taxes & duties. Petroleum economics Oil and gas markets Pricing Energy economics Car costs Late modern economic history
Gasoline and diesel usage and pricing
[ "Environmental_science" ]
3,570
[ "Energy economics", "Environmental social science" ]
8,033,091
https://en.wikipedia.org/wiki/Half-metal
A half-metal is any substance that acts as a conductor to electrons of one spin orientation, but as an insulator or semiconductor to those of the opposite orientation. Although all half-metals are ferromagnetic (or ferrimagnetic), most ferromagnets are not half-metals. Many of the known examples of half-metals are oxides, sulfides, or Heusler alloys. Types of half-metallic compounds theoretically predicted so far include some Heusler alloys, such as NiMnSb and PtMnSb; some Si-containing half-Heusler alloys with Curie temperatures over 600 K, such as NiCrSi and PdCrSi; some transition-metal oxides, including rutile-structured CrO2; some perovskites; and a few more simply structured zincblende (ZB) compounds, including CrAs and certain superlattices. NiMnSb and CrO2 have been experimentally determined to be half-metals at very low temperatures. In half-metals, the valence band for one spin orientation is partially filled while there is a gap in the density of states for the other spin orientation. This results in conducting behavior only for electrons in the first spin orientation; a toy numerical illustration of this spin-resolved density of states follows the links below. In some half-metals, the majority spin channel is the conducting one, while in others it is the minority channel. Half-metals were first described in 1983, as an explanation for the electrical properties of manganese-based Heusler alloys. Some notable half-metals are chromium(IV) oxide, magnetite, and lanthanum strontium manganite (LSMO), as well as chromium arsenide. Half-metals have attracted some interest for their potential use in spintronics. References Further reading http://www-users.york.ac.uk/~ah566/research/half_metals.html http://www.tcd.ie/Physics/People/Michael.Coey/oxsen/newsletter/january98/halfmeta.htm Metals Spintronics
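The sketch below (Python; the rectangular density-of-states model and all numerical values are illustrative assumptions, not data for any real material) evaluates the spin polarization P = (N_up(E_F) - N_down(E_F)) / (N_up(E_F) + N_down(E_F)) at the Fermi level, which is 100% for an ideal half-metal because only one spin channel has states at E_F.

```python
def dos_up(e):
    """Toy majority-spin DOS: metallic, nonzero across the Fermi level."""
    return 1.0 if -5.0 <= e <= 5.0 else 0.0

def dos_down(e):
    """Toy minority-spin DOS: a 2 eV gap centred on the Fermi level (E_F = 0)."""
    return 1.0 if (-5.0 <= e <= -1.0) or (1.0 <= e <= 5.0) else 0.0

def spin_polarization(e_fermi=0.0):
    """P = (N_up - N_down) / (N_up + N_down) evaluated at the Fermi level."""
    n_up, n_down = dos_up(e_fermi), dos_down(e_fermi)
    return (n_up - n_down) / (n_up + n_down)

# For this idealized half-metal, only the majority channel conducts:
print(spin_polarization())  # 1.0, i.e. 100% spin polarization at E_F
```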
Half-metal
[ "Physics", "Chemistry", "Materials_science" ]
435
[ "Materials science stubs", "Metals", "Spintronics", "Condensed matter physics", "Condensed matter stubs" ]
8,034,347
https://en.wikipedia.org/wiki/Life-cycle%20engineering
Life-cycle engineering (LCE) is a sustainability-oriented engineering methodology that takes into account the comprehensive technical, environmental, and economic impacts of decisions within the product life cycle. Alternatively, it can be defined as "sustainability-oriented product development activities within the scope of one to several product life cycles." LCE requires analysis to quantify sustainability, setting appropriate targets for environmental impact. The application of complementary methodologies and technologies enables engineers to apply LCE to fulfill environmental objectives. LCE was first introduced in the 1980s as a bottom-up engineering approach, and widely adopted in the 1990s as a systematic 'cradle-to-grave' approach. The goal of LCE is to find the best possible compromise in product engineering to meet the needs of society while minimizing environmental impacts. The methodology is closely related to, and overlaps with, life-cycle assessment (LCA) to assess environmental impacts, and life cycle costing (LCC) to assess economic impacts. The product life cycle is formally defined by ISO 14040 as the "consecutive and interlinked stages of a product system, from raw material acquisition or generation from natural resources to final disposal." Comprehensive life cycle analysis considers both upstream and downstream processes. Upstream processes include "the extraction and production of raw materials and manufacturing," and downstream processes include product disposal (such as recycling or sending waste to landfill). LCE aims to reduce the negative consequences of consumption and production, and ensure a good standard of living for future generations, by reducing waste and making product development and engineering processes more efficient and sustainable. Definition Life cycle engineering is defined in the CIRP Encyclopedia of Production Engineering as: "the engineering activities which include the application of technological and scientific principles to manufacturing products with the goal of protecting the environment, conserving resources, encouraging economic progress, keeping in mind social concerns, and the need for sustainability, while optimizing the product life cycle and minimizing pollution and waste." The definition of LCE is often challenged in regard to its primary purpose, but its consensus purpose is to evaluate, and contribute to improving, the environmental, health, and overall sustainability consequences of products at all life cycle stages. Quantifying environmental sustainability The first step in completing LCA or LCE is determining the appropriate sustainability thresholds to use as environmental targets for the product system. The proposed Lyngby framework for LCE is a combined top-down and bottom-up approach for LCE that uses targets based on planetary boundaries. Planetary boundaries can be used to establish limits for the earth's carrying capacity, defining upper thresholds for the environmental system. The IPAT equation [Impact = Population (or Volume) x Affluence (or Consumption) x Technology (or Impact per Unit of Consumption)] is an accepted method for quantifying the impact of consumption. LCE can be leveraged to manage total environmental impact by addressing the technology effect (single product and product life cycle) and the volume effect (anticipated volume growth as consumption and population increase) of product engineering. 
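A short numerical sketch of the IPAT decomposition follows (Python; all figures are made-up illustrative assumptions, not data from any LCE study). It shows how a technology-side improvement can be offset by the volume effect of growing population and consumption.

```python
def impact(population, consumption_per_person, impact_per_unit):
    """IPAT: total impact = population x affluence x technology factor."""
    return population * consumption_per_person * impact_per_unit

baseline = impact(1_000_000, 50, 2.0)       # 100,000,000 impact units
# A 30% technology improvement (lower impact per unit consumed)...
improved_tech = impact(1_000_000, 50, 1.4)  # 70,000,000 impact units
# ...can be eaten up by the volume effect: more people consuming more.
volume_growth = impact(1_200_000, 60, 1.4)  # 100,800,000 impact units

print(baseline, improved_tech, volume_growth)
# The net impact ends up slightly above baseline despite the better technology.
```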
Impacts are considered within the context of technical boundary conditions to verify the feasibility of proposed solutions. Complementary methodologies and technologies Modern technology provides innovative new opportunities for LCE: Visual analytics (VA) integrates visualization and data analytics to process large, dynamic data sets and solve complex problems. Researchers gather and synthesize historical and real-time data and information flows across all life cycle stages, including impacts from upstream and downstream stages. LCA uses quantified data to build predictive (e.g. simulation-based methods, scenario analysis) and visual models to guide decision-making. By simplifying the presentation of models/results and tailoring visualizations to the audience, VA makes it easier for people to interact with data, enabling collaboration and improved knowledge transfer. Augmented reality (AR) and Mixed reality (MR) allow interaction with real and virtual objects in a given environment. In the interpretation phase of LCA, where inventories and process impacts are considered, AR/MR facilitates interaction with complex data sets to investigate scenarios and validate assumptions. It has the potential to break down barriers that inhibit the flow of information. Integrated process design is a methodology that involves identifying and integrating processes throughout the entire life cycle with the objective of improving performance. Using this information, analysis identifies enhancements, redefining information exchange and increasing interoperability between systems. The proposed integrated approach promotes synergies between fields like life cycle engineering and product design to improve performance over the current product life cycle. These systems and processes need to be integrated to break down barriers when "gathering & synthesizing information flows across life cycle stages." Building information modeling (BIM) empowers LCE via digital rendering of buildings and building systems, encouraging more advanced building system analysis through interchange, use, and constant upgrade of building data for the duration of the building life cycle. BIM allows for overall improved information management in buildings and building systems at all points in the life cycle through advanced data visualization, communication and coordination. BIM includes calculation models and processes that estimate environmental impacts of buildings by considering energy use, material use, and emission information throughout the life cycle of building systems. Application LCE is most commonly used as part of green building rating systems or by individual parties aiming to assess the environmental or sustainability consequences of specific building projects or products. Stakeholders that want to develop more sustainable operations on a life-cycle level or assess their products from a life-cycle perspective use LCE to assess and improve operations to maximize efficiency and meet desired environmental or economic goals. Minimizing adverse environmental consequences and optimizing resource use are two central concepts in the application of LCE. A major implementation of LCE on an international scale is in the United Nations' Sustainable Development Goals (SDG). The SDGs are 17 goals covering international environmental, economic, and social challenges that are to be addressed by 2030. 
LCE is to be implemented in the solutions to these issues, as they require evaluation and action on a full life-cycle level, and are directly or indirectly tied to sustainable policies and decision-making. Key themes in life cycle engineering Key themes in LCE are economic, social, environmental and technological. These themes are interlinked and can be influenced by life cycle engineering. Economic implications Life cycle engineering is an assessment methodology and practice faced with increasing demand in the architectural, construction, and design industries. The shift toward "green building" or sustainable construction has increased the need for LCE in the design, construction, operation, and demolition of buildings. Newly realized environmental and economic benefits of sustainable building practices are determined and made accessible through LCE. LCE provides value to businesses by revealing and quantifying the benefits of sustainable construction with regard to environmental impact, energy reduction, economic savings, and commercial or social attractiveness. The costs of conducting LCE, life-cycle assessment (LCA), and life-cycle cost analysis (LCCA) are outweighed and justified by the benefits of such assessments, increasing the integration of LCE within sustainable construction practices. Specific demand for LCE in sustainable construction practices can be attributed to green building rating systems such as Leadership in Energy and Environmental Design (LEED) – developed by the U.S. Green Building Council – and Green Globes – developed by the Green Building Initiative. Green building rating systems have supported and encouraged the use of LCE and LCA as methods to improve the standards and requirements of rating systems, while also advancing industry-wide standards for integrated building sustainability considerations. External links Department Life Cycle Engineering – LBP – University of Stuttgart Industrial ecology
Life-cycle engineering
[ "Chemistry", "Engineering" ]
1,557
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
10,377,480
https://en.wikipedia.org/wiki/Fusion%20splicing
Fusion splicing is the act of joining two optical fibers end-to-end. The goal is to fuse the two fibers together in such a way that light passing through the fibers is not scattered or reflected back by the splice, and so that the splice and the region surrounding it are almost as strong as the intact fiber. The source of heat used to melt and fuse the two glass fibers being spliced is usually an electric arc, but can also be a laser, a gas flame, or a tungsten filament through which current is passed. Governing standards ANSI/EIA/TIA-455 See also Fiber-optic communication Optical fiber connector Optical time-domain reflectometer References Further reading "How to Precision Clean All Fiber Optic Connections": Edward J. Forrest, Jr. Fiber Optic Association Industrial processes Fiber optics Glass production Articles containing video clips
Fusion splicing
[ "Materials_science", "Engineering" ]
178
[ "Glass engineering and science", "Glass production" ]
19,425,312
https://en.wikipedia.org/wiki/Asymmetric%20hydrogenation
Asymmetric hydrogenation is a chemical reaction that adds two atoms of hydrogen to a target (substrate) molecule with three-dimensional spatial selectivity. Critically, this selectivity does not come from the target molecule itself, but from other reagents or catalysts present in the reaction. This allows spatial information (what chemists refer to as chirality) to transfer from one molecule to the target, forming the product as a single enantiomer. The chiral information is most commonly contained in a catalyst and, in this case, the information in a single molecule of catalyst may be transferred to many substrate molecules, amplifying the amount of chiral information present. Similar processes occur in nature, where a chiral molecule like an enzyme can catalyse the introduction of a chiral centre to give a product as a single enantiomer, such as an amino acid, that a cell needs to function. By imitating this process, chemists can generate many novel synthetic molecules that interact with biological systems in specific ways, leading to new pharmaceutical agents and agrochemicals. The importance of asymmetric hydrogenation in both academia and industry contributed to two of its pioneers, William Standish Knowles and Ryōji Noyori, being collectively awarded one half of the 2001 Nobel Prize in Chemistry. History In 1956 a heterogeneous catalyst made of palladium deposited on silk was shown to effect asymmetric hydrogenation. Later, in 1968, the groups of William Knowles and Leopold Horner independently published the first examples of asymmetric hydrogenation using homogeneous catalysts. While exhibiting only modest enantiomeric excesses, these early reactions demonstrated feasibility. By 1972, enantiomeric excess of 90% was achieved, and the first industrial synthesis of the Parkinson's drug L-DOPA commenced using this technology. The field of asymmetric hydrogenation continued to experience a number of notable advances. Henri Kagan developed DIOP, an easily prepared C2-symmetric diphosphine that gave high ee's in certain reactions. Ryōji Noyori introduced ruthenium-based catalysts for the asymmetric hydrogenation of polar substrates, such as ketones and aldehydes. Robert H. Crabtree demonstrated the ability of iridium compounds to catalyse hydrogenation reactions in 1979 with the invention of Crabtree's catalyst. In the early 1990s, the independent introduction of P,N ligands by several groups further expanded the scope of chiral ligands beyond C2-symmetric designs, which are not fundamentally superior to chiral ligands lacking rotational symmetry. Today, asymmetric hydrogenation is a routine methodology in laboratory- and industrial-scale organic chemistry. The importance of asymmetric hydrogenation was recognized by the 2001 Nobel Prize in Chemistry awarded to William Standish Knowles and Ryōji Noyori. Mechanism Asymmetric hydrogenations operate by conventional mechanisms invoked for other hydrogenations. These include inner sphere mechanisms, outer sphere mechanisms and σ-bond metathesis mechanisms. The type of mechanism employed by a catalyst is largely dependent on the ligands used in a system, which in turn leads to certain catalyst-substrate affinities. Inner sphere mechanisms The so-called inner sphere mechanism entails coordination of the alkene to the metal center. 
Other characteristics of this mechanism include a tendency for homolytic splitting of dihydrogen when more electron-rich, low-valent metals are present, while electron-poor, high-valent metals normally exhibit heterolytic cleavage of dihydrogen assisted by a base. Two proposed inner sphere mechanisms for catalytic hydrogenation with rhodium complexes are the unsaturated and dihydride pathways. In the unsaturated mechanism, the chiral product formed will have the opposite configuration to that of the catalyst used. While the thermodynamically favoured complex between the catalyst and the substrate is unable to undergo hydrogenation, the unstable, unfavoured complex undergoes hydrogenation rapidly. The dihydride mechanism, on the other hand, sees the complex initially hydrogenated to the dihydride form. This subsequently allows for the coordination of the double bond on the non-hindered side. Through insertion and reductive elimination, the product's chirality matches that of the ligand. The preference for producing one enantiomer instead of another in these reactions is often explained in terms of steric interactions between the ligand and the prochiral substrate. Consideration of these interactions has led to the development of quadrant diagrams where "blocked" areas are denoted with a shaded box, while "open" areas are left unfilled. In the modeled reaction, large groups on an incoming olefin will tend to orient to fill the open areas of the diagram, while smaller groups will be directed to the blocked areas, and hydrogen delivery will then occur to the back face of the olefin, fixing the stereochemistry. Outer sphere mechanisms Some catalysts operate by "outer sphere mechanisms" such that the substrate never bonds directly to the metal but rather interacts with its ligands, often a metal hydride together with a protic hydrogen on a ligand. As such, in most cases dihydrogen is split heterolytically, with the metal acting as a Lewis acid and either an external or internal base "deprotonating" the hydride. For an example of this mechanism we can consider the BINAP-Ru-diamine system. The dihalide form of the catalyst is converted to the active catalyst by reaction with H2 in the presence of base: RuCl2(BINAP)(diamine) + 2 KOBu-t + 2 H2 → RuH2(BINAP)(diamine) + 2 KCl + 2 HOBu-t The resulting catalysts have three kinds of ligands: hydrides, which transfer to the unsaturated substrate; diamines, which interact with the substrate and with the base activator through the second coordination sphere; and diphosphine, which confers asymmetry. The "Noyori-class" of catalysts are often referred to as bifunctional catalysts to emphasize the fact that both the metal and the (amine) ligand are functional. In the hydrogenation of C=O containing substrates, the mechanism was long assumed to operate by a six-membered pericyclic transition state/intermediate whereby the ruthenium hydride and amine N-H centers (H-Ru-N-H) interact with the carbonyl substrate R2C=O. More recent DFT and experimental studies have shown that this model is largely incorrect. Instead, the amine backbone interacts strongly with the base activator, which is often used in large excess. However, in both cases the substrate does not bond directly with the metal centre, making this a clear example of an outer sphere mechanism. Metals Practical AH employs catalysts based on the platinum-group metals. 
Base metals Iron is a popular research target for many catalytic processes, owing largely to its low cost and low toxicity relative to other transition metals. Asymmetric hydrogenation methods using iron have been realized, although in terms of rates and selectivity they are inferior to catalysts based on precious metals. In some cases, structurally ill-defined nanoparticles have proven to be the active species in situ, and the modest selectivity observed may result from their uncontrolled geometries. Ligand classes Phosphine ligands Chiral phosphine ligands, especially C2-symmetric ligands, are the source of chirality in most asymmetric hydrogenation catalysts. Of these the BINAP ligand is well known, as a result of its Nobel Prize-winning application in the Noyori asymmetric hydrogenation. Chiral phosphine ligands can be generally classified as mono- or bidentate. They can be further classified according to the location of the stereogenic centre, at phosphorus versus on the organic substituents. Ligands with a C2 symmetry element have been particularly popular, in part because the presence of such an element reduces the possible binding conformations of a substrate to a metal-ligand complex dramatically (often resulting in exceptional enantioselectivity). Monodentate phosphines Monophosphine-type ligands were among the first to appear in asymmetric hydrogenation, e.g., the ligand CAMP. Continued research into these types of ligands has explored both P-alkyl and P-heteroatom bonded ligands, with P-heteroatom ligands like the phosphites and phosphoramidites generally achieving more impressive results. Structural classes of ligands that have been successful include those based on the binaphthyl structure of MonoPHOS or the spiro ring system of SiPHOS. Notably, these monodentate ligands can be used in combination with each other to achieve a synergistic improvement in enantioselectivity, something that is not possible with the diphosphine ligands. Chiral diphosphine ligands The diphosphine ligands have received considerably more attention than the monophosphines and, perhaps as a consequence, have a much longer list of achievements. This class includes the first ligand to achieve high selectivity (DIOP), the first ligand to be used in industrial asymmetric synthesis (DIPAMP) and what is likely the best known chiral ligand (BINAP). Chiral diphosphine ligands are now ubiquitous in asymmetric hydrogenation. P,N and P,O ligands The use of P,N ligands in asymmetric hydrogenation can be traced to the C2-symmetric bisoxazoline ligands. However, these symmetric ligands were soon superseded by monooxazoline ligands, whose lack of C2 symmetry in no way limits their efficacy in asymmetric catalysis. Such ligands generally consist of an achiral nitrogen-containing heterocycle that is functionalized with a pendant phosphorus-containing arm, although both the exact nature of the heterocycle and the chemical environment of the phosphorus center have varied widely. No single structure has emerged as consistently effective with a broad range of substrates, although certain privileged structures (like the phosphine-oxazoline or PHOX architecture) have been established. Moreover, within a narrowly defined substrate class the performance of metallic complexes with chiral P,N ligands can closely approach perfect conversion and selectivity in systems otherwise very difficult to target. Certain complexes derived from chelating P,O ligands have shown promising results in the hydrogenation of α,β-unsaturated ketones and esters. 
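Ligand performance throughout this discussion is quoted as enantiomeric excess (ee). As a quick reference, the sketch below (Python; the example ratios are illustrative assumptions) computes ee from the measured amounts of the two product enantiomers.

```python
def enantiomeric_excess(major, minor):
    """ee = (major - minor) / (major + minor), reported as a percentage.

    `major` and `minor` are the amounts (moles, concentrations, or peak
    areas) of the two enantiomers of the product.
    """
    return 100.0 * (major - minor) / (major + minor)

# A 95:5 mixture of enantiomers corresponds to 90% ee:
print(enantiomeric_excess(95, 5))        # 90.0
# Near-perfect selectivity, e.g. 99.95:0.05, gives 99.9% ee:
print(enantiomeric_excess(99.95, 0.05))  # 99.9
```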
NHC ligands Simple N-heterocyclic carbene (NHC)-based ligands have proven impractical for asymmetric hydrogenation. Some C,N ligands combine an NHC with a chiral oxazoline to give a chelating ligand. NHC-based ligands of the first type have been generated as large libraries from the reaction of smaller libraries of individual NHCs and oxazolines. NHC-based catalysts featuring a bulky seven-membered metallocycle on iridium have been applied to the catalytic hydrogenation of unfunctionalized olefins and vinyl ether alcohols with conversions and ee's in the high 80s or 90s. The same system has been applied to the synthesis of a number of aldol, vicinal dimethyl and deoxypolyketide motifs, and to the deoxypolyketides themselves. C2-symmetric NHCs have shown themselves to be highly useful ligands for asymmetric hydrogenation. Acyclic substrates Substrates can be classified according to their polarity. Nonpolar substrates are dominated by alkenes. Polar substrates include ketones, enamines and ketimines. Nonpolar substrates Alkenes that are particularly amenable to asymmetric hydrogenation often feature a polar functional group adjacent to the site to be hydrogenated. In the absence of this functional group, catalysis often results in low ee's. For some unfunctionalized olefins, however, iridium with P,N-based ligands has proven effective. Alkene substrates are often classified according to their substituents, e.g., 1,1-disubstituted, 1,2-diaryl trisubstituted, 1,1,2-trialkyl and tetrasubstituted olefins, and even within these classes variations may exist that make different solutions optimal. In contrast to the case of olefins, asymmetric hydrogenation of enamines has favoured diphosphine-type ligands; excellent results have been achieved with both iridium- and rhodium-based systems. However, even the best systems often suffer from low ee's and a lack of generality. Certain pyrrolidine-derived enamines of aromatic ketones are amenable to asymmetric hydrogenation with cationic rhodium(I) phosphonite systems together with I2 and acetic acid, with ee values usually above 90% and potentially as high as 99.9%. A similar system using iridium(I) and a very closely related phosphoramidite ligand is effective for the asymmetric hydrogenation of pyrrolidine-type enamines where the double bond is inside the ring: in other words, of dihydropyrroles. In both cases, the enantioselectivity dropped substantially when the ring size was increased from five to six. Imines and ketones Ketones and imines are related functional groups, and effective technologies for the asymmetric hydrogenation of each are also closely related. An early example is Noyori's ruthenium/chiral diphosphine/diamine system. For carbonyl and imine substrates, end-on η1 coordination can compete with the η2 mode. For η1-bound substrates, the hydrogen-accepting carbon is removed from the catalyst and resists hydrogenation. Iridium/P,N ligand-based systems have been effective for some ketones and imines. For example, a consistent system for benzylic aryl imines uses the P,N ligand SIPHOX in conjunction with iridium(I) in a cationic complex to achieve asymmetric hydrogenation with ee >90%. An efficient catalyst for ketones (turnover number (TON) up to 4,550,000 and ee up to 99.9%) is an iridium(I) system with a closely related tridentate ligand. 
The BINAP/diamine-Ru catalyst is effective for the asymmetric reduction of both functionalized and simple ketones, and it can reduce aromatic, heteroaromatic, and olefinic ketones enantioselectively. Better stereoselectivity is achieved when one substituent is larger than the other (see Flippin-Lodge angle). Aromatic substrates The asymmetric hydrogenation of aromatic (especially heteroaromatic) substrates is a very active field of ongoing research. Catalysts in this field must contend with a number of complicating factors, including the tendency of highly stable aromatic compounds to resist hydrogenation, the potential coordinating (and therefore catalyst-poisoning) abilities of both substrate and product, and the great diversity in substitution patterns that may be present on any one aromatic ring. Of these substrates the most consistent success has been seen with nitrogen-containing heterocycles, where the aromatic ring is often activated either by protonation or by further functionalization of the nitrogen (generally with an electron-withdrawing protecting group). Such strategies are less applicable to oxygen- and sulfur-containing heterocycles, since they are both less basic and less nucleophilic; this additional difficulty may help to explain why few effective methods exist for their asymmetric hydrogenation. Quinolines, isoquinolines and quinoxalines Two systems exist for the asymmetric hydrogenation of 2-substituted quinolines with isolated yields generally greater than 80% and ee values generally greater than 90%. The first is an iridium(I)/chiral phosphine/I2 system, first reported by Zhou et al. While the first chiral phosphine used in this system was MeOBiPhep, newer iterations have focused on improving the performance of this ligand. To this end, systems use phosphines (or related ligands) with improved air stability, recyclability, ease of preparation, and lower catalyst loading, and have explored the potential role of achiral phosphine additives. As of October 2012 no mechanism appears to have been proposed, although both the necessity of I2 or a halogen surrogate and the possible role of the heteroaromatic N in assisting reactivity have been documented. The second is an organocatalytic transfer hydrogenation system based on Hantzsch esters and a chiral Brønsted acid. In this case, the authors envision a mechanism where the quinoline is alternately protonated in an activating step, then reduced by conjugate addition of hydride from the Hantzsch ester. Much of the asymmetric hydrogenation chemistry of quinoxalines is closely related to that of the structurally similar quinolines. Effective (and efficient) results can be obtained with an Ir(I)/phosphinite/I2 system and a Hantzsch ester-based organocatalytic system, both of which are similar to the systems discussed earlier with regard to quinolines. Pyridines Pyridines are highly variable substrates for asymmetric reduction (even compared to other heteroaromatics), in that five carbon centers are available for differential substitution on the initial ring. As of October 2012 no method seems to exist that can control all five, although at least one reasonably general method exists. The most general method of asymmetric pyridine hydrogenation is actually a heterogeneous method, where asymmetry is generated from a chiral oxazolidinone bound to the C2 position of the pyridine. 
Hydrogenating such functionalized pyridines over a number of different heterogeneous metal catalysts gave the corresponding piperidines with the substituents at the C3, C4, and C5 positions in an all-cis geometry, in high yield and excellent enantioselectivity. The oxazolidinone auxiliary is also conveniently cleaved under the hydrogenation conditions. Methods designed specifically for 2-substituted pyridine hydrogenation can involve asymmetric systems developed for related substrates like 2-substituted quinolines and quinoxalines. For example, an iridium(I)/chiral phosphine/I2 system is effective in the asymmetric hydrogenation of activated (alkylated) 2-pyridiniums and certain cyclohexanone-fused pyridines. Similarly, chiral Brønsted acid catalysis with a Hantzsch ester as a hydride source is effective for some 2-alkyl pyridines with additional activating substitution. Indoles and pyrroles The asymmetric hydrogenation of indoles has been established with N-Boc protection. A Pd(TFA)2/H8-BINAP system achieves the enantioselective cis-hydrogenation of 2,3- and 2-substituted indoles. Akin to the behavior of indoles, pyrroles can be converted to pyrrolidines by asymmetric hydrogenation. Oxygen- and sulfur-containing heterocycles The asymmetric hydrogenation of furans and benzofurans is challenging. Asymmetric hydrogenation of thiophenes and benzothiophenes has been catalyzed by some ruthenium(II) complexes of N-heterocyclic carbenes (NHC). This system appears to possess superb selectivity (ee > 90%) and perfect diastereoselectivity (all cis) if the substrate has a fused (or directly bound) phenyl ring, but yields only racemic product in all other tested cases. Heterogeneous catalysis No heterogeneous catalyst has been commercialized for asymmetric hydrogenation. The first asymmetric hydrogenations used palladium deposited on a silk support. Cinchona alkaloids have been used as chiral modifiers for enantioselective hydrogenation. An alternative technique, and one that allows more control over the structural and electronic properties of active catalytic sites, is the immobilization of catalysts that have been developed for homogeneous catalysis on a heterogeneous support. Covalent bonding of the catalyst to a polymer or other solid support is perhaps most common, although immobilization of the catalyst may also be achieved by adsorption onto a surface, ion exchange, or even physical encapsulation. One drawback of this approach is the potential for the proximity of the support to change the behaviour of the catalyst, lowering the enantioselectivity of the reaction. To avoid this, the catalyst is often bound to the support by a long linker, though cases are known where the proximity of the support can actually enhance the performance of the catalyst. The final approach involves the construction of metal-organic frameworks (MOFs) that incorporate chiral reaction sites from a number of different components, potentially including chiral and achiral organic ligands, structural metal ions, catalytically active metal ions, and/or preassembled catalytically active organometallic cores. One of these involved ruthenium-based catalysts. As little as 0.005 mol% of such catalysts proved sufficient to achieve the asymmetric hydrogenation of aryl ketones, although the usual conditions featured 0.1 mol% of catalyst and resulted in an enantiomeric excess of 90.6–99.2%. 
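Catalyst loadings and turnover numbers like those quoted above are related by simple arithmetic: TON is the moles of product formed per mole of catalyst, so the loading caps the achievable TON. A minimal sketch (Python; the assumption of complete conversion is an idealization for illustration):

```python
def max_turnover_number(loading_mol_percent):
    """Upper bound on TON: moles of product per mole of catalyst,
    assuming every substrate molecule is converted to product."""
    return 100.0 / loading_mol_percent

print(max_turnover_number(0.005))  # 20000.0 for the 0.005 mol% loading cited
print(max_turnover_number(0.1))    # 1000.0 for the more usual 0.1 mol% loading
```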
Industrial applications Asymmetric hydrogenations are used in the production of several drugs, such as the antibacterial levofloxacin, the antibiotic carbapenem, and the antipsychotic agent BMS-181100. Knowles' research into asymmetric hydrogenation and its application to the production-scale synthesis of L-DOPA gave asymmetric hydrogenation a strong start in the industrial world. A 2001 review indicated that asymmetric hydrogenation accounted for 50% of production-scale, 90% of pilot-scale, and 74% of bench-scale catalytic, enantioselective processes in industry, with the caveat that asymmetric catalytic methods in general were not yet widely used. Where asymmetric hydrogenation has replaced kinetic resolution-based methods, it has resulted in substantial improvements in process efficiency, as can be seen in a number of specific cases. For example, Roche's Catalysis Group was able to achieve the synthesis of (S,S)-Ro 67-8867 in 53% overall yield, a dramatic increase above the 3.5% that was achieved in the resolution-based synthesis. Roche's synthesis of mibefradil was likewise improved by replacing resolution with asymmetric hydrogenation, reducing the step count by three and increasing the yield of a key intermediate to 80% from the original 70%. Noyori-inspired hydrogenation catalysts have been applied to the commercial synthesis of a number of fine chemicals. (R)-1,2-Propanediol, a precursor to the antibacterial levofloxacin, can be efficiently synthesized from hydroxyacetone using Noyori asymmetric hydrogenation. Newer routes focus on the hydrogenation of (R)-methyl lactate. An antibiotic carbapenem is also prepared using Noyori asymmetric hydrogenation via (2S,3R)-methyl 2-(benzamidomethyl)-3-hydroxybutanoate, which is synthesized from racemic methyl 2-(benzamidomethyl)-3-oxobutanoate by dynamic kinetic resolution. The antipsychotic agent BMS-181100 is synthesized using a BINAP/diamine-Ru catalyst. References Organic reactions Chemical processes Green chemistry Hydrogenation
Asymmetric hydrogenation
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
5,074
[ "Green chemistry", "Asymmetry", "Chemical engineering", "Environmental chemistry", "Organic reactions", "Chemical processes", "nan", "Hydrogenation", "Chemical process engineering", "Symmetry" ]
19,426,859
https://en.wikipedia.org/wiki/Lambeth%20Waterworks%20Company
The Lambeth Waterworks Company was a utility company supplying water to parts of south London in England. The company was established in 1785 with works in north Lambeth and became part of the publicly owned Metropolitan Water Board in 1904. Origins The Lambeth Waterworks Company, founded by an Act of Parliament (25 Geo. 3. c. 89) to supply water to south and west London, established premises on the south bank of the River Thames close to the present site of Hungerford Bridge, where the Royal Festival Hall now stands. The company's first water intake lay on the south side of the river, supplied directly from the river. After complaints that the water was foul, the intake was moved to the middle of the river. The company expanded to supply Kennington in 1802 and about this time replaced its wooden pipes with iron ones. Infrastructure In 1832 the company built a reservoir at Streatham Hill, and then obtained an Act of Parliament (4 & 5 Will. 4. c. vii) to extend its area of supply. In the same year the company purchased land in Brixton and built a reservoir and works on Brixton Hill adjacent to Brixton Prison. Around the 1850s the quality of drinking water became a matter of public concern, and John Snow examined the state of the waters in 1849. Parliament passed the Metropolis Water Act 1852 to "make provision for securing the supply to the Metropolis of pure and wholesome water". Under the act, it became unlawful for any water company to extract water for domestic use from the tidal reaches of the Thames after 31 August 1855, and from 31 December 1855 all such water was required to be "effectually filtered". The directors had already decided in 1847 to move the intake for their reservoirs to Seething Wells. The facilities were completed in 1852, and the Lambeth company was joined there by the Chelsea Waterworks Company, which took advantage of its pipes to the city. The facilities played a role in John Snow's statistical investigations during a cholera outbreak: the facility was further upriver than many other waterworks and hence had cleaner water, leading to fewer cholera deaths. However, the inlets pumped in too much silt with the water because of turbulence caused by the discharge (confluence) of the River Mole/Ember and The Rythe into the Thames immediately upstream. The Lambeth Waterworks Company thus moved upstream to Molesey, between Sunbury and Molesey Locks, where it built the Molesey Reservoirs in 1872, and the Chelsea Waterworks Company followed it there three years later. See also London water supply infrastructure References London water infrastructure British companies established in 1785 Former water company predecessors of Thames Water 1785 establishments in England Water supply Companies established in 1785
Lambeth Waterworks Company
[ "Chemistry", "Engineering", "Environmental_science" ]
536
[ "Hydrology", "Water supply", "Environmental engineering" ]
18,419,244
https://en.wikipedia.org/wiki/Gadoteridol
Gadoteridol (INN) is a gadolinium-based MRI contrast agent, used particularly in the imaging of the central nervous system. It is sold under the brand name ProHance. Gadoteridol was first approved for use in the United States in 1992. References Organogadolinium compounds MRI contrast agents
Gadoteridol
[ "Chemistry" ]
69
[ "Pharmacology", "Nuclear magnetic resonance", "Medicinal chemistry stubs", "Nuclear chemistry stubs", "Nuclear magnetic resonance stubs", "Pharmacology stubs" ]
18,419,320
https://en.wikipedia.org/wiki/ITPR2
Inositol 1,4,5-trisphosphate receptor, type 2, also known as ITPR2, is a protein which in humans is encoded by the ITPR2 gene. The protein encoded by this gene is both a receptor for inositol triphosphate and a calcium channel. See also Inositol trisphosphate receptor References Further reading External links Ion channels
ITPR2
[ "Chemistry" ]
84
[ "Neurochemistry", "Ion channels" ]
18,419,422
https://en.wikipedia.org/wiki/Ryanodine%20receptor%203
Ryanodine receptor 3 is one of a class of ryanodine receptors and a protein that in humans is encoded by the RYR3 gene. The protein encoded by this gene is both a calcium channel and a receptor for the plant alkaloid ryanodine. RYR3 and RYR1 control the resting calcium ion concentration in skeletal muscle. See also Ryanodine receptor References Further reading External links Ion channels EF-hand-containing proteins
Ryanodine receptor 3
[ "Chemistry" ]
93
[ "Neurochemistry", "Ion channels" ]
18,419,544
https://en.wikipedia.org/wiki/TPCN1
Two pore segment channel 1 (TPC1) is a human protein encoded by the TPCN1 gene. The protein encoded by this gene is an ion channel. In contrast to other calcium and sodium channels, which have four homologous domains, each containing six transmembrane segments (S1 to S6), TPCN1 contains only two domains (each containing segments S1 to S6). Structure The structure of a TPC1 ortholog from Arabidopsis thaliana has been solved by two laboratories. The structures were solved using X-ray crystallography and contained the fold of a voltage-gated ion channel and EF hands. Only a single voltage sensor domain appears to be responsible for voltage sensing. Filoviral infections Genetic knockout and pharmacological inhibition experiments demonstrate that the two-pore channels, TPC1 and TPC2, are required for infection by the filoviruses Ebola and Marburg in mice. See also Two-pore channel References Further reading External links Ion channels
TPCN1
[ "Chemistry" ]
217
[ "Neurochemistry", "Ion channels" ]
18,419,902
https://en.wikipedia.org/wiki/KCNK7
Potassium channel, subfamily K, member 7, also known as KCNK7 or K2P7.1, is a protein which is encoded in humans by the KCNK7 gene. K2P7.1 is a potassium channel containing two pore-forming P domains. Multiple transcript variants encoding different isoforms have been found for this gene. Function This gene encodes a member of the superfamily of potassium channel proteins containing two pore-forming P domains. The product of this gene has not been shown to be a functional channel; it may require other non-pore-forming proteins for activity. See also Tandem pore domain potassium channel References Further reading External links Ion channels
KCNK7
[ "Chemistry" ]
139
[ "Neurochemistry", "Ion channels" ]
18,419,910
https://en.wikipedia.org/wiki/KCNK16
Potassium channel subfamily K member 16 is a protein that in humans is encoded by the KCNK16 gene. The protein encoded by this gene, K2P16.1, is a potassium channel containing two pore-forming P domains. See also Tandem pore domain potassium channel References Further reading External links Ion channels
KCNK16
[ "Chemistry" ]
65
[ "Neurochemistry", "Ion channels" ]
18,419,911
https://en.wikipedia.org/wiki/KCNK18
Potassium channel subfamily K member 18 (KCNK18), also known as TWIK-related spinal cord potassium channel (TRESK) or K2P18.1 is a protein that in humans is encoded by the KCNK18 gene. K2P18.1 is a potassium channel containing two pore-forming P domains. A flaw in this gene could help trigger migraine headaches. If the gene does not work properly, environmental factors can more easily trigger pain centres in the brain and cause a severe headache. See also Tandem pore domain potassium channel References Further reading External links Ion channels
KCNK18
[ "Chemistry" ]
124
[ "Neurochemistry", "Ion channels" ]
18,420,709
https://en.wikipedia.org/wiki/Paraspecies
A paraspecies (a paraphyletic species) is a species, living or fossil, that gave rise to one or more daughter species without itself becoming extinct. Geographically widespread species that have given rise to one or more daughter species as peripheral isolates without themselves becoming extinct (i.e. through peripatric speciation) are examples of paraspecies. Paraspecies are expected from evolutionary theory (Crisp and Chandler, 1996), and are empirical realities in many terrestrial and aquatic taxa. Examples A well-documented example of a living mammal species that gave rise to another living species is the evolution of the polar bear from the brown bear. An example of a living reptile paraspecies is New Zealand's North Island tuatara Sphenodon punctatus, which gave rise to the Brothers Island tuatara Sphenodon guntheri. An example of a living bird paraspecies is Empidonax occidentalis, the Cordilleran flycatcher. An example of a living plant paraspecies is Pouteria cuspidata, the pouteria trees or eggfruits. See also Cladogenesis Anagenesis, also known as "phyletic change", where no branching event occurred (or is known to have occurred) Notes and references Evolutionary biology
Paraspecies
[ "Biology" ]
275
[ "Evolutionary biology" ]
18,421,531
https://en.wikipedia.org/wiki/Precision%20engineering
Precision engineering is a subdiscipline of electrical engineering, software engineering, electronics engineering, mechanical engineering, and optical engineering concerned with designing machines, fixtures, and other structures that have exceptionally low tolerances, are repeatable, and are stable over time. These approaches have applications in machine tools, MEMS, NEMS, optoelectronics design, and many other fields. The field focuses on the design, development and manufacture of products with high levels of accuracy and repeatability, using advanced technologies and techniques to achieve tight tolerances and dimensional control in the manufacturing process. Overview Professors Hiromu Nakazawa and Pat McKeown provide the following list of goals for precision engineering: Create a highly precise movement. Reduce the dispersion of the product's or part's function. Eliminate fitting and promote assembly, especially automatic assembly. Reduce the initial cost. Reduce the running cost. Extend the life span. Enable the design safety factor to be lowered. Improve interchangeability of components so that corresponding parts made by other factories or firms can be used in their place. Improve quality control through higher machine accuracy capabilities and hence reduce scrap, rework, and conventional inspection. Achieve a greater wear/fatigue life of components. Make functions independent of one another. Achieve greater miniaturization and packing densities. Achieve further advances in technology and the underlying sciences. Technical Societies American Society for Precision Engineering euspen - European Society for Precision Engineering and Nanotechnology JSPE - The Japan Society for Precision Engineering DSPE - Dutch Society for Precision Engineering SPETA - Singapore Precision Engineering and Technology Association See also Abbe error Accuracy and precision Flexures Kinematic coupling Measurement uncertainty Kinematic determinacy References External links Precision Engineering, the Journal of the International Societies for Precision Engineering and Nanotechnology Precision Engineering Centre at Cranfield University Mechanical engineering
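One of the classic calculations behind these goals is the Abbe error noted in the "See also" list: an angular error in a guideway, acting over an offset between the measurement axis and the point of interest, produces a displacement error of roughly offset times the sine of the angle. A minimal sketch follows (Python; the offset and angle values are illustrative assumptions, not taken from the article).

```python
import math

def abbe_error(offset_m, angle_rad):
    """Displacement error caused by an angular error acting over an
    Abbe offset: error = offset * sin(angle), approximately offset * angle
    for small angles."""
    return offset_m * math.sin(angle_rad)

# A 10 microradian pitch error over a 100 mm Abbe offset:
error = abbe_error(0.100, 10e-6)
print(f"{error * 1e9:.1f} nm")  # ~1000.0 nm, i.e. a 1 micrometre error
```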
Precision engineering
[ "Physics", "Engineering" ]
386
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
18,421,631
https://en.wikipedia.org/wiki/Radius%20of%20curvature
In differential geometry, the radius of curvature, R, is the reciprocal of the curvature. For a curve, it equals the radius of the circular arc which best approximates the curve at that point. For surfaces, the radius of curvature is the radius of a circle that best fits a normal section or combinations thereof. Definition In the case of a space curve, the radius of curvature is the length of the curvature vector. In the case of a plane curve, R is the absolute value of ds/dφ = 1/κ, where s is the arc length from a fixed point on the curve, φ is the tangential angle and κ is the curvature. Formula In two dimensions If the curve is given in Cartesian coordinates as y(x), i.e., as the graph of a function, then the radius of curvature is (assuming the curve is differentiable up to order 2) R = (1 + y′²)^(3/2) / |y″|, where y′ = dy/dx, y″ = d²y/dx², and |y″| denotes the absolute value of y″. If the curve is given parametrically by functions x(t) and y(t), then the radius of curvature is R = (ẋ² + ẏ²)^(3/2) / |ẋÿ − ẏẍ|, where ẋ = dx/dt and ẍ = d²x/dt² (and likewise for y). Heuristically, this result can be interpreted as R = |v|³ / |v × a|, where v = (ẋ, ẏ) is the velocity of the parametrization and a = (ẍ, ÿ) its acceleration. In n dimensions If γ : ℝ → ℝⁿ is a parametrized curve in ℝⁿ, then the radius of curvature at each point of the curve, ρ = 1/κ, is given by ρ = |γ′|³ / √(|γ′|²|γ″|² − (γ′ · γ″)²). As a special case, if f(t) is a function from ℝ to ℝ, then the radius of curvature of its graph, γ(t) = (t, f(t)), is ρ(t) = (1 + f′(t)²)^(3/2) / |f″(t)|. Derivation Let γ be as above, and fix t. We want to find the radius ρ of a parametrized circle which matches γ in its zeroth, first, and second derivatives at t. Clearly the radius will not depend on the position γ(t), only on the velocity γ′(t) and acceleration γ″(t). There are only three independent scalars that can be obtained from two vectors v and w, namely v · v, v · w, and w · w. Thus the radius of curvature must be a function of the three scalars |γ′(t)|², |γ″(t)|² and γ′(t) · γ″(t). The general equation for a parametrized circle in ℝⁿ is g(u) = A cos(h(u)) + B sin(h(u)) + C, where C ∈ ℝⁿ is the center of the circle (irrelevant since it disappears in the derivatives), A, B ∈ ℝⁿ are perpendicular vectors of length ρ (that is, A · A = B · B = ρ² and A · B = 0), and h : ℝ → ℝ is an arbitrary function which is twice differentiable at t. The relevant derivatives of g work out to be |g′|² = ρ²h′², g′ · g″ = ρ²h′h″, |g″|² = ρ²(h′⁴ + h″²). If we now equate these derivatives of g to the corresponding derivatives of γ at t we obtain |γ′(t)|² = ρ²h′(t)², γ′(t) · γ″(t) = ρ²h′(t)h″(t), |γ″(t)|² = ρ²(h′(t)⁴ + h″(t)²). These three equations in three unknowns (ρ, h′(t) and h″(t)) can be solved for ρ, giving the formula for the radius of curvature: ρ = |γ′|³ / √(|γ′|²|γ″|² − (γ′ · γ″)²), where the parameter t has been omitted for readability. Examples Semicircles and circles For a semicircle of radius a in the upper half-plane, y = √(a² − x²), the signed radius of curvature is ρ = −a; for a semicircle of radius a in the lower half-plane, y = −√(a² − x²), it is ρ = a. The circle of radius a has a radius of curvature equal to a. Ellipses In an ellipse with major axis 2a and minor axis 2b, the vertices on the major axis have the smallest radius of curvature of any points, ρ = b²/a, and the vertices on the minor axis have the largest radius of curvature of any points, ρ = a²/b. The radius of curvature of an ellipse parametrized as (a cos t, b sin t), as a function of the parameter t (the Jacobi amplitude), is ρ(t) = (a² sin²t + b² cos²t)^(3/2) / (ab). This can also be written as ρ(t) = (a²/b)(1 − e² cos²t)^(3/2), where the eccentricity of the ellipse, e, is given by e² = 1 − b²/a². Applications For the use in differential geometry, see Cesàro equation. For the radius of curvature of the Earth (approximated by an oblate ellipsoid), see also arc measurement. Radius of curvature is also used in a three-part equation for bending of beams. Radius of curvature (optics) Thin films technologies Printed electronics Minimum railway curve radius AFM probe Stress in semiconductor structures Stress in the semiconductor structure involving evaporated thin films usually results from the thermal expansion (thermal stress) during the manufacturing process. Thermal stress occurs because film depositions are usually made above room temperature. 
Upon cooling from the deposition temperature to room temperature, the difference in the thermal expansion coefficients of the substrate and the film causes thermal stress. Intrinsic stress results from the microstructure created in the film as atoms are deposited on the substrate. Tensile stress results from microvoids (small holes, considered to be defects) in the thin film, because of the attractive interaction of atoms across the voids. The stress in thin film semiconductor structures results in the buckling of the wafers. The radius of curvature of the stressed structure is related to the stress tensor in the structure, and can be described by the modified Stoney formula. The topography of the stressed structure, including the radii of curvature, can be measured using optical scanner methods. Modern scanner tools are capable of measuring the full topography of the substrate and both principal radii of curvature, providing accuracy on the order of 0.1% for radii of curvature of 90 meters and more. See also Base curve radius Bend radius Degree of curvature (civil engineering) Osculating circle Track transition curve References Further reading External links The Geometry Center: Principal Curvatures 15.3 Curvature and Radius of Curvature Differential geometry Curvature (mathematics) Curves Integral calculus Multivariable calculus Theoretical physics Radii
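A minimal numerical sketch of the curvature-to-stress relationship follows, using the classic (unmodified) Stoney formula; the silicon material constants and the film/substrate thicknesses are illustrative assumptions, and real metrology would use the modified formula appropriate to the structure.

```python
def stoney_film_stress(radius_m, t_substrate_m, t_film_m,
                       youngs_modulus_pa, poisson_ratio):
    """Film stress from measured wafer curvature via the Stoney formula:
    sigma_f = E_s * t_s**2 / (6 * (1 - nu_s) * t_f * R)."""
    biaxial_modulus = youngs_modulus_pa / (1.0 - poisson_ratio)
    return biaxial_modulus * t_substrate_m**2 / (6.0 * t_film_m * radius_m)

# A 1 um film on a 525 um silicon wafer bowed to a 90 m radius of curvature:
stress = stoney_film_stress(radius_m=90.0,
                            t_substrate_m=525e-6,
                            t_film_m=1e-6,
                            youngs_modulus_pa=130e9,  # Si(100), assumed
                            poisson_ratio=0.28)       # Si(100), assumed
print(f"{stress / 1e6:.0f} MPa")  # roughly 92 MPa of film stress
```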
Radius of curvature
[ "Physics", "Mathematics" ]
979
[ "Geometric measurement", "Physical quantities", "Calculus", "Theoretical physics", "Integral calculus", "Multivariable calculus", "Curvature (mathematics)" ]
18,422,596
https://en.wikipedia.org/wiki/Second%20moment%20method
In mathematics, the second moment method is a technique used in probability theory and analysis to show that a random variable has positive probability of being positive. More generally, the "moment method" consists of bounding the probability that a random variable fluctuates far from its mean, by using its moments. The method is often quantitative, in that one can often deduce a lower bound on the probability that the random variable is larger than some constant times its expectation. The method involves comparing the second moment of random variables to the square of the first moment. First moment method The first moment method is a simple application of Markov's inequality for integer-valued variables. For a non-negative, integer-valued random variable , we may want to prove that with high probability. To obtain an upper bound for , and thus a lower bound for , we first note that since takes only integer values, . Since is non-negative we can now apply Markov's inequality to obtain . Combining these we have ; the first moment method is simply the use of this inequality. Second moment method In the other direction, being "large" does not directly imply that is small. However, we can often use the second moment to derive such a conclusion, using Cauchy–Schwarz inequality. The method can also be used on distributional limits of random variables. Furthermore, the estimate of the previous theorem can be refined by means of the so-called Paley–Zygmund inequality. Suppose that is a sequence of non-negative real-valued random variables which converge in law to a random variable . If there are finite positive constants , such that hold for every , then it follows from the Paley–Zygmund inequality that for every and in Consequently, the same inequality is satisfied by . Example application of method Setup of problem The Bernoulli bond percolation subgraph of a graph at parameter is a random subgraph obtained from by deleting every edge of with probability , independently. The infinite complete binary tree is an infinite tree where one vertex (called the root) has two neighbors and every other vertex has three neighbors. The second moment method can be used to show that at every parameter with positive probability the connected component of the root in the percolation subgraph of is infinite. Application of method Let be the percolation component of the root, and let be the set of vertices of that are at distance from the root. Let be the number of vertices in . To prove that is infinite with positive probability, it is enough to show that . Since the events form a decreasing sequence, by continuity of probability measures this is equivalent to showing that . The Cauchy–Schwarz inequality gives Therefore, it is sufficient to show that that is, that the second moment is bounded from above by a constant times the first moment squared (and both are nonzero). In many applications of the second moment method, one is not able to calculate the moments precisely, but can nevertheless establish this inequality. In this particular application, these moments can be calculated. For every specific in , Since , it follows that which is the first moment. Now comes the second moment calculation. For each pair , in let denote the vertex in that is farthest away from the root and lies on the simple path in to each of the two vertices and , and let denote the distance from to the root. In order for , to both be in , it is necessary and sufficient for the three simple paths from to , and the root to be in . 
Since the number of edges contained in the union of these three paths is 2n − k(u, v), we obtain P(u ∈ K, v ∈ K) = p^(2n − k(u, v)). The number of pairs (u, v) such that k(u, v) = k is equal to 2^(2n − k − 1) for k = 0, 1, ..., n − 1, and equal to 2^n for k = n (in which case u = v). Hence, E[X_n^2] = 2^n p^n + Σ_{k=0}^{n−1} 2^(2n−k−1) p^(2n−k) = E[X_n] + (1/2) Σ_{k=0}^{n−1} (2p)^(2n−k) ≤ E[X_n] + (p/(2p − 1)) (2p)^(2n) for p > 1/2, so that E[X_n^2] ≤ (1 + p/(2p − 1)) (E[X_n])^2, since E[X_n] = (2p)^n ≥ 1. By the Cauchy–Schwarz bound above, P(X_n > 0) ≥ (2p − 1)/(3p − 1) > 0 for every n, which completes the proof. Discussion The choice of the random variables X_n was rather natural in this setup. In some more difficult applications of the method, some ingenuity might be required in order to choose the random variables X_n for which the argument can be carried through. The Paley–Zygmund inequality is sometimes used instead of the Cauchy–Schwarz inequality and may occasionally give more refined results. Under the (incorrect) assumption that the events {v ∈ K}, v ∈ T_n, are always independent, one has E[X_n^2] ≈ (E[X_n])^2, and the second moment is equal to the first moment squared up to lower-order terms. The second moment method typically works in situations in which the corresponding events or random variables are "nearly independent". In this application, the random variables X_n are given as sums X_n = Σ_{v ∈ T_n} 1_{v ∈ K}. In other applications, the corresponding useful random variables are integrals X_t = ∫ f_t(s) dμ(s), where the functions f_t are random. In such a situation, one considers the product measure μ × μ and calculates E[X_t^2] = E[∫∫ f_t(x) f_t(y) dμ(x) dμ(y)] = ∫∫ E[f_t(x) f_t(y)] dμ(x) dμ(y), where the last step is typically justified using Fubini's theorem. References Probabilistic inequalities Articles containing proofs Moment (mathematics)
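The percolation argument above lends itself to a numerical check. The following is a minimal Monte Carlo sketch (not from the article) that simulates Bernoulli bond percolation on the first n levels of the binary tree and compares the observed survival frequency with the lower bound P(X_n > 0) ≥ (2p − 1)/(3p − 1) derived above; the parameter values are arbitrary illustrations.

```python
import random

# Monte Carlo check of the bound P(X_n > 0) >= (2p - 1)/(3p - 1).
# Simulates Bernoulli bond percolation on the first `depth` levels of the
# infinite binary tree, keeping each edge with probability p.

def survives(p, depth):
    """Return True if some vertex at distance `depth` is connected to the root."""
    frontier = 1  # number of level-k vertices currently connected to the root
    for _ in range(depth):
        nxt = 0
        for _ in range(frontier):
            # Each connected vertex has two downward edges, each open w.p. p.
            nxt += (random.random() < p) + (random.random() < p)
        if nxt == 0:
            return False
        frontier = nxt
    return True

p, depth, trials = 0.7, 12, 2000
hits = sum(survives(p, depth) for _ in range(trials))
print(f"estimated P(X_n > 0) ~ {hits / trials:.3f}")
print(f"second moment bound    {(2*p - 1)/(3*p - 1):.3f}")
```

For p = 0.7 the bound evaluates to 4/11 ≈ 0.364, while the simulated survival probability is considerably larger, illustrating that the second moment method certifies positivity without being sharp.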
Second moment method
[ "Physics", "Mathematics" ]
963
[ "Mathematical analysis", "Moments (mathematics)", "Physical quantities", "Theorems in probability theory", "Probabilistic inequalities", "Inequalities (mathematics)", "Articles containing proofs", "Moment (physics)" ]
18,422,886
https://en.wikipedia.org/wiki/Uvaricin
Uvaricin is a bis(tetrahydrofuranoid) fatty acid lactone that was first isolated in 1982 from the roots of the Annonaceae Uvaria acuminata. Uvaricin was the first known example in a class of compounds known as acetogenins. Acetogenins, which are found in plants of the family Annonaceae, seem to kill cells by inhibiting NADH dehydrogenase in the mitochondrion. A method to synthesize uvaricin was first published in 1998, and an improved stereoselective synthesis was published in 2001. References Acetate esters Tetrahydrofurans Furanones Lipid metabolism Polyketides
Uvaricin
[ "Chemistry" ]
147
[ "Biomolecules by chemical classification", "Lipid biochemistry", "Natural products", "Polyketides", "Lipid metabolism", "Metabolism" ]
18,424,177
https://en.wikipedia.org/wiki/Wavelength%20selective%20switching
Wavelength selective switching components are used in WDM optical communications networks to route (switch) signals between optical fibres on a per-wavelength basis. What is a WSS A WSS comprises a switching array that operates on light that has been dispersed in wavelength without the requirement that the dispersed light be physically demultiplexed into separate ports. This is termed a 'disperse and switch' configuration. For example, an 88 channel WDM system can be routed from a "common" fiber to any one of N fibers, as if by employing 88 1 x N switches. This represents a significant simplification of a demultiplex, switch and multiplex architecture, which would require (in addition to N + 1 mux/demux elements) a non-blocking switch for 88 N x N channels and would severely test the manufacturability limits of large-scale optical cross-connects for even moderate fiber counts. A more practical approach, and one adopted by the majority of WSS manufacturers, is shown schematically in Figure 1 (to be uploaded). The various incoming channels of a common port are dispersed continuously onto a switching element which then directs and attenuates each of these channels independently to the N switch ports. The dispersive mechanism is generally based on holographic or ruled diffraction gratings similar to those used commonly in spectrometers. It can be advantageous, for achieving resolution and coupling efficiency, to employ a combination of a reflective or transmissive grating and a prism – known as a GRISM. The operation of the WSS can be bidirectional, so the wavelengths can be multiplexed together from different ports onto a single common port. To date, the majority of deployments have used a fixed channel bandwidth of 50 or 100 GHz, and 9 output ports are typically used. Microelectromechanical Mirrors (MEMS) The simplest and earliest commercial WSS were based on movable mirrors using Micro-Electro-Mechanical Systems (MEMS). The incoming light is broken into a spectrum by a diffraction grating (shown at the RHS of the Figure) and each wavelength channel then focuses on a separate MEMS mirror. By tilting the mirror in one dimension, the channel can be directed back into any of the fibers in the array. A second tilting axis allows transient crosstalk to be minimized; otherwise switching (e.g.) from port 1 to port 3 will always involve passing the beam across port 2. The second axis provides a means to attenuate the signal without increasing the coupling into neighboring fibers. This technology has the advantage of a single steering surface, not necessarily requiring polarization diversity optics. It works well in the presence of a continuous signal, allowing the mirror tracking circuits to dither the mirror and maximise coupling. MEMS-based WSS typically produce good extinction ratios, but poor open-loop performance for setting a given attenuation level. The main limitations of the technology arise from the channelization that the mirrors naturally enforce. During manufacturing, the channels must be carefully aligned with the mirrors, complicating the manufacturing process. Post-manufacturing alignment adjustments have been mainly limited to adjusting the gas pressure within the hermetic enclosure. This enforced channelization has also proved, so far, an insurmountable obstacle to implementing flexible channel plans where different channel sizes are required within a network. 
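As a rough illustration of the 'disperse and switch' geometry, the sketch below evaluates the grating equation d(sin θ_i + sin θ_d) = mλ to map 50 GHz-spaced channels to spot positions on the switching array. All parameters (groove density, incidence angle, focal length) are illustrative assumptions, not values from any particular product.

```python
import math

# Map WDM channels to positions on the switching array via the grating
# equation d(sin(theta_i) + sin(theta_d)) = m * lambda, then a focusing lens.
lines_per_mm = 1000.0
d = 1e-3 / lines_per_mm          # groove spacing in metres (here 1 um)
m = 1                            # diffraction order
theta_i = math.radians(45.0)     # incidence angle (assumed)
focal_length = 0.10              # lens focal length in metres (assumed)
c = 299_792_458.0                # speed of light, m/s

def diffraction_angle(wavelength_m):
    return math.asin(m * wavelength_m / d - math.sin(theta_i))

ref = diffraction_angle(c / 193.1e12)   # reference channel at 193.1 THz
for ch in range(5):
    freq = 193.1e12 + ch * 50e9         # 50 GHz grid
    x = focal_length * math.tan(diffraction_angle(c / freq) - ref)
    print(f"{freq/1e12:.3f} THz -> spot at {x*1e6:8.1f} um")
```

With these numbers, adjacent 50 GHz channels land roughly 75 µm apart, which is the scale on which per-channel mirrors or pixel columns must sit.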
A further MEMS limitation is that the phase of light at the mirror edge is not well controlled in a physical mirror, so artefacts can arise in the switching of light near the channel edge due to interference of the light from each channel. Binary Liquid Crystal (LC) Liquid crystal switching avoids both the high cost of small volume MEMS fabrication and potentially some of its fixed channel limitations. The concept is illustrated in Figure 3 (to be uploaded). A diffraction grating breaks the incoming light into a spectrum. A software-controlled binary liquid crystal stack individually tilts each optical channel, and a second grating (or a second pass of the first grating) is used to spectrally recombine the beams. The offsets created by the liquid crystal stack cause the resulting spectrally recombined beams to be spatially offset, and hence to focus, through a lens array, into separate fibers. Polarization diversity optics ensures low Polarization Dependent Loss (PDL). This technology has the advantages of relatively low cost parts, simple electronic control and stable beam positions without active feedback. It is capable of configuring to a flexible grid spectrum by the use of a fine pixel grid. The inter-pixel gaps must be small compared to the beam size, to avoid perturbing the transmitted light significantly. Furthermore, each grid must be replicated for each of the switching stages, creating the requirement of individually controlling thousands of pixels on different substrates, so the advantages of this technology in terms of simplicity are negated as the wavelength resolution becomes finer. The main disadvantage of this technology arises from the thickness of the stacked switching elements. Keeping the optical beam tightly focused over this depth is difficult and has, so far, limited the ability of high port count WSS to achieve very fine (12.5 GHz or less) granularity. Liquid Crystal on Silicon (LCoS) Liquid crystal on silicon (LCoS) is particularly attractive as a switching mechanism in a WSS because of its near-continuous addressing capability, enabling much new functionality. In particular, the bands of wavelengths which are switched together (channels) need not be preconfigured in the optical hardware but can be programmed into the switch through software control. Additionally, it is possible to take advantage of this ability to reconfigure channels while the device is operating. A schematic of an LCoS WSS is shown in Figure 4 (to be uploaded). LCoS technology has enabled the introduction of more flexible wavelength grids which help to unlock the full spectral capacity of optical fibers. Even more surprising features rely on the phase matrix nature of the LCoS switching element. Features in common use include such things as shaping the power levels within a channel or broadcasting the optical signal to more than one port. LCoS-based WSS also permit dynamic control of channel centre frequency and bandwidth through on-the-fly modification of the pixel arrays via embedded software. The degree of control of channel parameters can be very fine-grained, with independent control of the centre frequency and either upper- or lower-band-edge of a channel with better than 1 GHz resolution possible. This is advantageous from a manufacturability perspective, with different channel plans being able to be created from a single platform and even different operating bands (such as C and L) being able to use an identical switch matrix. 
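Because the LCoS panel is addressed as a pixel matrix, a channel plan is just data. The sketch below shows how a control layer might translate (centre, width) channel requests into pixel-column ranges; the GHz-per-pixel figure and the port assignments are invented for illustration and do not describe any real device.

```python
# Software-defined channel plans on an LCoS switch: the optics image the
# spectrum onto pixel columns at some fixed GHz/pixel, and a channel is just
# a run of columns steered to one output port. Figures are illustrative.

GHZ_PER_PIXEL = 1.0          # assumed spectral width imaged onto one column
START_GHZ = 191_300.0        # assumed low edge of the addressed band

def channel_to_pixels(centre_ghz, width_ghz):
    """Convert a (centre, width) channel request to a pixel-column range."""
    lo = int(round((centre_ghz - width_ghz / 2 - START_GHZ) / GHZ_PER_PIXEL))
    hi = int(round((centre_ghz + width_ghz / 2 - START_GHZ) / GHZ_PER_PIXEL))
    return lo, hi

# A mixed 50 GHz / 100 GHz / flexgrid plan on one device, set purely in software:
plan = [(193_100.0, 50.0, "port 3"), (193_175.0, 100.0, "port 1"),
        (193_262.5, 37.5, "port 7")]
for centre, width, port in plan:
    lo, hi = channel_to_pixels(centre, width)
    print(f"{centre/1000:.4f} THz +/-{width/2:>5.2f} GHz -> columns {lo}..{hi} -> {port}")
```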
Products have been introduced allowing switching between 50 GHz channels and 100 GHz channels, or a mix of channels, without introducing any errors or "hits" to the existing traffic. More recently, this has been extended to support the whole concept of Flexible or Elastic networks under ITU-T G.694.1 through products such as Finisar's Flexgrid™ WSS. For more detailed information on the applications of LCoS in telecommunications and, in particular, Wavelength Selective Switches, see chapter 16 in Optical Fiber Telecommunications VIA, edited by Kaminow, Li and Willner, Academic Press. MEMS Arrays A further array-based switch engine uses an array of individual reflective MEMS mirrors to perform the necessary beam steering (Figure 5, to be uploaded). These arrays are typically a derivative of the Texas Instruments DLP range of spatial light modulators. In this case, the angle of the MEMS mirrors is changed to deflect the beam. However, current implementations only allow the mirrors to have two possible states, giving two potential beam angles. This complicates the design of multi-port WSS and has limited their application to relatively low-port-count devices. Future Developments Dual WSS It is likely that in the future two WSS could use the same optical module, utilizing different wavelength processing regions of a single matrix switch such as LCoS, provided that the issues associated with device isolation can be appropriately addressed. Channel selectivity ensures only wavelengths required to be dropped locally (up to the maximum number of transceivers in the bank) are presented to any mux/demux module through each fiber, which in turn reduces the filtering and extinction requirements on the mux/demux module. Contentionless WSS This provides cost and performance benefits for next generation colorless, directionless, contentionless (CDC) reconfigurable optical add-drop multiplexer (ROADM) networks, resulting from improved scalability of add/drop ports and removal of erbium-doped fiber amplifier (EDFA) arrays (which are required to overcome splitting losses in multicast switches). Advanced Spatial Light Modulators The technical maturity of spatial light modulators based on consumer driven applications has been highly advantageous to their adoption in the telecommunications arena. There are developments in MEMS phased arrays and other electro-optic spatial light modulators that could be envisaged in the future to be applicable to telecom switching and wavelength processing, perhaps bringing faster switching or having an advantage in simplicity of optical design through polarisation-independent operation. For example, the design principles developed for LCoS could be applied to other phase-controllable arrays in a straightforward fashion if a suitable phase stroke (greater than 2π at 1550 nm) can be achieved. However, the requirements for low electrical crosstalk and high fill factor over the very small pixels required to allow switching in a compact form factor remain serious practical impediments to achieving these goals. References External links Lumentum WSS Products Finisar WSS Products II-VI WSS Products Calient WSS Products Optical devices Photonics
Wavelength selective switching
[ "Materials_science", "Engineering" ]
1,984
[ "Glass engineering and science", "Optical devices" ]
18,426,062
https://en.wikipedia.org/wiki/Structured%20ASIC%20platform
Structured ASIC is an intermediate technology between ASIC and FPGA, offering high performance, a characteristic of ASIC, and low NRE cost, a characteristic of FPGA. Using structured ASIC allows products to be introduced quickly to market, to have lower cost and to be designed with ease. In an FPGA, interconnects and logic blocks are programmable after fabrication, offering high flexibility of design and ease of debugging in prototyping. However, the capability of FPGAs to implement large circuits is limited, in both size and speed, due to complexity in programmable routing and the significant space occupied by programming elements, e.g. SRAMs and MUXes. On the other hand, the ASIC design flow is expensive: every different design needs a completely different set of masks. The structured ASIC is a solution between these two. It has basically the same structure as an FPGA, but is mask-programmable instead of field-programmable, being configured through one or several via layers between metal layers. Every SRAM configuration bit can be replaced by the choice of putting a via or not between metal contacts. A number of commercial vendors have introduced structured ASIC products. They have a wide range of configurability, from a single via layer to 6 metal and 6 via layers. Altera's HardCopy II and eASIC's Nextreme are examples of commercial structured ASICs. See also Gate array Altera Corp - "HardCopy II Structured ASICs" eASIC Corp - "Nextreme Structured ASIC" References Chun Hok Ho et al. - "Floating Point FPGA: Architecture and Modelling" Chun Hok Ho et al. - "Domain-Specific Hybrid FPGA: Architecture and Floating Point Applications" Steve Wilton et al. - "A Synthesizable Datapath-Oriented Embedded FPGA Fabric" Steve Wilton et al. - "A Synthesizable Datapath-Oriented Embedded FPGA Fabric for Silicon Debug Applications" Andy Ye and Jonathan Rose - "Using Bus-Based Connections to Improve Field-Programmable Gate Array Density for Implementing Datapath Circuits" Ian Kuon, Aaron Egier and Jonathan Rose - "Design, Layout and Verification of an FPGA using Automated Tools" Ian Kuon, Russell Tessier and Jonathan Rose - "FPGA Architecture: Survey and Challenges" Ian Kuon and Jonathan Rose - "Measuring the Gap Between FPGAs and ASICs" Stephane Badel and Elizabeth J. Brauer - "Implementation of Structured ASIC Fabric Using Via-Programmable Differential MCML Cells" Kanupriya Gulati, Nikhil Jayakumar and Sunil P. Khatri - "A Structured ASIC Design Approach Using Pass Transistor Logic" Hee Kong Phoon, Matthew Yap and Chuan Khye Chai - "A Highly Compatible Architecture Design for Optimum FPGA to Structured-ASIC Migration" Yajun Ran and Malgorzata Marek-Sadowska - "Designing Via-Configurable Logic Blocks for Regular Fabric" R. Reed Taylor and Herman Schmit - "Creating a Power-aware Structured ASIC" Jennifer L. Wong, Farinaz Koushanfar and Miodrag Potkonjak - "Flexible ASIC: Shared Masking for Multiple Media Processors" External links: eda.ee.ucla.edu/EE201A-04Spring/ASICslides.ppt Application-specific integrated circuits Electronic circuits Logic design
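The SRAM-bit-to-via substitution described above can be made concrete with a toy model. In the sketch below, the same 2-input lookup table is configured once by rewritable SRAM bits (the FPGA view) and once by a fixed tuple standing in for vias placed on a single customization layer (the structured ASIC view); this is purely conceptual and does not model any vendor's fabric.

```python
# Toy illustration of the 'SRAM bit -> via' substitution: the configuration
# medium changes, the logic function does not.

def lut2(config_bits, a, b):
    """Evaluate a 2-input LUT: config_bits[i] is the output for input index i."""
    return config_bits[(a << 1) | b]

# FPGA view: bits live in SRAM and can be rewritten in the field.
sram_bits = [0, 1, 1, 0]            # XOR truth table, reprogrammable

# Structured ASIC view: the same truth table frozen as via placements on one
# customization layer (via present = 1, via absent = 0), fixed at mask time.
via_mask = (0, 1, 1, 0)

for a in (0, 1):
    for b in (0, 1):
        assert lut2(sram_bits, a, b) == lut2(via_mask, a, b)
print("Same logic function; only the configuration medium differs.")
```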
Structured ASIC platform
[ "Technology", "Engineering" ]
735
[ "Electronic engineering", "Application-specific integrated circuits", "Electronic circuits", "Computer engineering" ]
6,100,843
https://en.wikipedia.org/wiki/Crystallographic%20Information%20File
Crystallographic Information File (CIF) is a standard text file format for representing crystallographic information, promulgated by the International Union of Crystallography (IUCr). CIF was developed by the IUCr Working Party on Crystallographic Information in an effort sponsored by the IUCr Commission on Crystallographic Data and the IUCr Commission on Journals. The file format was initially published by Hall, Allen, and Brown and has since been revised, most recently in versions 1.1 and 2.0. Full specifications for the format are available at the IUCr website. Many computer programs for molecular viewing are compatible with this format, including Jmol. mmCIF Closely related is mmCIF, macromolecular CIF, which is intended as a successor to the Protein Data Bank (PDB) format. It is now the default format used by the Protein Data Bank. Also closely related is the Crystallographic Information Framework, a broader system of exchange protocols based on data dictionaries and relational rules expressible in different machine-readable manifestations, including, but not restricted to, Crystallographic Information File and XML. References External links International Union of Crystallography Chemical file formats Computer file formats Crystallography
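CIF's core syntax is line-oriented `_tag value` pairs (plus `loop_` constructs for tables, which are omitted here). A minimal sketch of extracting unit-cell parameters without any external library is shown below; real-world files should be read with a full CIF parser, and the quartz numbers are illustrative.

```python
# Minimal CIF reading: handles only simple `_tag value` pairs, not loops or
# multi-line values. Use a proper crystallography library for real files.

SAMPLE_CIF = """\
data_quartz
_cell_length_a    4.9137
_cell_length_b    4.9137
_cell_length_c    5.4047
_cell_angle_alpha 90
_cell_angle_beta  90
_cell_angle_gamma 120
"""

def read_items(text):
    items = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0].startswith("_"):
            items[parts[0]] = parts[1]
    return items

cell = read_items(SAMPLE_CIF)
print("a =", cell["_cell_length_a"], "Å, gamma =", cell["_cell_angle_gamma"], "°")
```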
Crystallographic Information File
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
258
[ "Chemistry software", "Materials science", "Crystallography", "Condensed matter physics", "Chemical file formats" ]
6,101,309
https://en.wikipedia.org/wiki/Quantities%20of%20information
The mathematical theory of information is based on probability theory and statistics, and measures information with several quantities of information. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. The most common unit of information is the bit, or more correctly the shannon, based on the binary logarithm. Although bit is more frequently used in place of shannon, its name is not distinguished from the bit as used in data processing to refer to a binary value or stream regardless of its entropy (information content). Other units include the nat, based on the natural logarithm, and the hartley, based on the base 10 or common logarithm. In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p is zero. This is justified because lim_{p→0+} p log p = 0 for any logarithmic base. Self-information Shannon derived a measure of information content called the self-information or "surprisal" of a message m: I(m) = log(1/p(m)) = −log p(m), where p(m) = Pr(M = m) is the probability that message m is chosen from all possible choices in the message space M. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of shannons or more often simply "bits" (a bit in other contexts is rather defined as a "binary digit", whose average information content is at most 1 shannon). Information from a source is gained by a recipient only if the recipient did not already have that information to begin with. Messages that convey information over a certain (P = 1) event (or one which is known with certainty, for instance, through a back-channel) provide no information, as the above equation indicates. Infrequently occurring messages contain more information than more frequently occurring messages. It can also be shown that a compound message of two (or more) unrelated messages would have a quantity of information that is the sum of the measures of information of each message individually. That can be derived using this definition by considering a compound message providing information regarding the values of two random variables M and N using a message which is the concatenation of the elementary messages m and n, each of whose information content are given by I(m) and I(n) respectively. If the messages m and n each depend only on M and N, and the processes M and N are independent, then since p(m, n) = p(m) p(n) (the definition of statistical independence) it is clear from the above definition that I(m, n) = I(m) + I(n). An example: The weather forecast broadcast is: "Tonight's forecast: Dark. Continued darkness until widely scattered light in the morning." This message contains almost no information. However, a forecast of a snowstorm would certainly contain information since such does not happen every evening. There would be an even greater amount of information in an accurate forecast of snow for a warm location, such as Miami. The amount of information in a forecast of snow for a location where it never snows (impossible event) is the highest (infinity). Entropy The entropy of a discrete message space M is a measure of the amount of uncertainty one has about which message will be chosen. It is defined as the average self-information of a message m from that message space: H(M) = E[I(M)] = Σ_m p(m) log(1/p(m)), where E[·] denotes the expected value operation. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable (e.g. p(m) = 1/|M|). In this case H(M) = log |M|. 
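A small numerical illustration of these two definitions (not part of the original article): self-information of individual probabilities and the entropy of a four-message space, in base-2 units.

```python
import math

# Self-information and entropy with base-2 logs, so results are in
# shannons/bits.

def self_information(p):
    return -math.log2(p)

def entropy(dist):
    return sum(p * self_information(p) for p in dist if p > 0)

# A certain event carries no information; a rare one carries a lot.
for p in (1.0, 0.5, 0.01):
    print(f"I(p={p}) = {self_information(p):.3f} bits")

# Entropy is maximised by the uniform distribution: log2(4) = 2 bits.
print(f"H(uniform on 4) = {entropy([0.25] * 4):.3f} bits")
print(f"H(skewed)       = {entropy([0.7, 0.1, 0.1, 0.1]):.3f} bits")
```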
Sometimes the function H is expressed in terms of the probabilities of the distribution: H(p_1, p_2, ..., p_k) = −Σ_{i=1}^{k} p_i log p_i, where each p_i ≥ 0 and Σ_{i=1}^{k} p_i = 1. An important special case of this is the binary entropy function: H_b(p) = H(p, 1 − p) = −p log p − (1 − p) log(1 − p). Joint entropy The joint entropy of two discrete random variables X and Y is defined as the entropy of the joint distribution of X and Y: H(X, Y) = E_{X,Y}[−log p(x, y)] = −Σ_{x,y} p(x, y) log p(x, y). If X and Y are independent, then the joint entropy is simply the sum of their individual entropies. (Note: The joint entropy should not be confused with the cross entropy, despite similar notations.) Conditional entropy (equivocation) Given a particular value of a random variable Y, the conditional entropy of X given Y = y is defined as: H(X|y) = −Σ_x p(x|y) log p(x|y), where p(x|y) = p(x, y)/p(y) is the conditional probability of x given y. The conditional entropy of X given Y, also called the equivocation of X about Y, is then given by: H(X|Y) = E_Y[H(X|y)] = −Σ_{x,y} p(x, y) log p(x|y). This uses the conditional expectation from probability theory. A basic property of the conditional entropy is that: H(X|Y) = H(X, Y) − H(Y). Kullback–Leibler divergence (information gain) The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions, a "true" probability distribution p, and an arbitrary probability distribution q. If we compress data in a manner that assumes q is the distribution underlying some data, when, in reality, p is the correct distribution, Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression, or, mathematically, D_KL(p(X) ‖ q(X)) = Σ_x p(x) log (p(x)/q(x)). It is in some sense the "distance" from q to p, although it is not a true metric due to its not being symmetric. Mutual information (transinformation) It turns out that one of the most useful and important measures of information is the mutual information, or transinformation. This is a measure of how much information can be obtained about one random variable by observing another. The mutual information of X relative to Y (which represents conceptually the average amount of information about X that can be gained by observing Y) is given by: I(X; Y) = Σ_{x,y} p(x, y) log (p(x, y)/(p(x) p(y))). A basic property of the mutual information is that: I(X; Y) = H(X) − H(X|Y). That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y. Mutual information is symmetric: I(X; Y) = I(Y; X) = H(X) + H(Y) − H(X, Y). Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) of the posterior probability distribution of X given the value of Y to the prior distribution on X: I(X; Y) = E_{p(y)}[D_KL(p(X|Y = y) ‖ p(X))]. In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution: I(X; Y) = D_KL(p(X, Y) ‖ p(X) p(Y)). Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution. Differential entropy The basic measures of discrete entropy have been extended by analogy to continuous spaces by replacing sums with integrals and probability mass functions with probability density functions. Although, in both cases, mutual information expresses the number of bits of information common to the two sources in question, the analogy does not imply identical properties; for example, differential entropy may be negative. 
The differential analogies of entropy, joint entropy, conditional entropy, and mutual information are defined as follows: h(X) = −∫ f(x) log f(x) dx, h(X, Y) = −∫∫ f(x, y) log f(x, y) dx dy, h(X|Y) = −∫∫ f(x, y) log f(x|y) dx dy, and I(X; Y) = ∫∫ f(x, y) log (f(x, y)/(f(x) f(y))) dx dy, where f(x, y) is the joint density function, f(x) and f(y) are the marginal distributions, and f(x|y) is the conditional distribution. See also Information theory References Information theory
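The discrete identities above are easy to verify numerically. The sketch below computes I(X; Y) for a small made-up joint distribution both as H(X) + H(Y) − H(X, Y) and as the Kullback–Leibler divergence from the product of the marginals; the two results agree.

```python
import math

# Check I(X;Y) = H(X) + H(Y) - H(X,Y) = D_KL(p(x,y) || p(x)p(y))
# for a small joint distribution (values invented for illustration).

joint = {("a", 0): 0.30, ("a", 1): 0.10,
         ("b", 0): 0.15, ("b", 1): 0.45}

px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0) + p
    py[y] = py.get(y, 0) + p

def H(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Mutual information as KL divergence from the product of marginals.
mi_kl = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())

# Mutual information as an entropy difference.
hxy = -sum(p * math.log2(p) for p in joint.values())
mi_ent = H(px) + H(py) - hxy

print(f"I(X;Y) via KL:        {mi_kl:.6f} bits")
print(f"I(X;Y) via entropies: {mi_ent:.6f} bits")
```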
Quantities of information
[ "Mathematics", "Technology", "Engineering" ]
1,394
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
6,106,797
https://en.wikipedia.org/wiki/Charge%20carrier%20density
Charge carrier density, also known as carrier concentration, denotes the number of charge carriers per volume. In SI units, it is measured in m−3. As with any density, in principle it can depend on position. However, usually carrier concentration is given as a single number, and represents the average carrier density over the whole material. Charge carrier densities appear in equations concerning the electrical conductivity, related phenomena like the thermal conductivity, and chemical bonds like the covalent bond. Calculation The carrier density is usually obtained theoretically by integrating the density of states over the energy range of charge carriers in the material (e.g. integrating over the conduction band for electrons, integrating over the valence band for holes). If the total number of charge carriers is known, the carrier density can be found by simply dividing by the volume. To show this mathematically, charge carrier density is a particle density, so integrating it over a volume V gives the number of charge carriers N in that volume: N = ∫_V n(r) dV, where n(r) is the position-dependent charge carrier density. If the density does not depend on position and is instead equal to a constant n_0, this equation simplifies to N = V · n_0. Semiconductors The carrier density is important for semiconductors, where it is an important quantity for the process of chemical doping. Using band theory, the electron density n_0 is the number of electrons per unit volume in the conduction band. For holes, p_0 is the number of holes per unit volume in the valence band. To calculate this number for electrons, we start with the idea that the total density of conduction-band electrons, n_0, is just adding up the conduction electron density across the different energies in the band, from the bottom of the band E_c to the top of the band E_top: n_0 = ∫_{E_c}^{E_top} N(E) f(E) dE. Because electrons are fermions, the density of conduction electrons at any particular energy, N(E) f(E), is the product of the density of states N(E), or how many conducting states are possible, with the Fermi–Dirac distribution f(E) = 1/(1 + exp((E − E_F)/kT)), which tells us the portion of those states which will actually have electrons in them. In order to simplify the calculation, instead of treating the electrons as fermions, according to the Fermi–Dirac distribution, we instead treat them as a classical non-interacting gas, which is given by the Maxwell–Boltzmann distribution: f(E) ≈ exp(−(E − E_F)/kT). This approximation has negligible effects when the magnitude |E − E_F| ≫ kT, which is true for semiconductors near room temperature. This approximation is invalid at very low temperatures or an extremely small band-gap. The three-dimensional density of states is: N(E) = (1/(2π²)) (2m*/ħ²)^(3/2) √(E − E_c). After combination and simplification, these expressions lead to: n_0 = 2 (m* k T / (2πħ²))^(3/2) exp(−(E_c − E_F)/kT). Here m* is the effective mass of the electrons in that particular semiconductor, and the quantity E_c − E_F is the difference in energy between the conduction band and the Fermi level, which in an intrinsic semiconductor is half the band gap E_g: E_c − E_F = E_g/2. A similar expression can be derived for holes. The carrier concentration can be calculated by treating electrons moving back and forth across the bandgap just like the equilibrium of a reversible reaction from chemistry, leading to an electronic mass action law. The mass action law defines a quantity n_i called the intrinsic carrier concentration, which for undoped materials satisfies n_i = n_0 = p_0. The following table lists a few values of the intrinsic carrier concentration for intrinsic semiconductors, in order of increasing band gap. These carrier concentrations will change if these materials are doped. For example, doping pure silicon with a small amount of phosphorus will increase the carrier density of electrons, n. 
Then, since n > p, the doped silicon will be an n-type extrinsic semiconductor. Doping pure silicon with a small amount of boron will increase the carrier density of holes, so then p > n, and it will be a p-type extrinsic semiconductor. Metals The carrier density is also applicable to metals, where it can be estimated from the simple Drude model. In this case, the carrier density (in this context, also called the free electron density) can be estimated by: n = N_A Z ρ_m / A, where N_A is the Avogadro constant, Z is the number of valence electrons, ρ_m is the density of the material, and A is the atomic mass. Since metals can display multiple oxidation numbers, the exact definition of how many "valence electrons" an element should have in elemental form is somewhat arbitrary, but the following table lists the free electron densities given in Ashcroft and Mermin, which were calculated using the formula above based on reasonable assumptions about the valence Z, and with mass densities ρ_m calculated from experimental crystallography data. The values for n among metals inferred for example by the Hall effect are often on the same orders of magnitude, but this simple model cannot predict carrier density to very high accuracy. Measurement The density of charge carriers can be determined in many cases using the Hall effect, the voltage of which depends inversely on the carrier density. References Density Charge carriers
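Both formulas above lend themselves to quick numerical checks. The sketch below first evaluates the semiconductor expression for intrinsic silicon at 300 K, using standard textbook effective masses and band gap (the result is order-of-magnitude only), and then the Drude estimate for copper with an assumed valence of one.

```python
import math

# --- Intrinsic carrier concentration of silicon at 300 K -------------------
k  = 1.380649e-23            # Boltzmann constant, J/K
h  = 6.62607015e-34          # Planck constant, J s
m0 = 9.1093837e-31           # electron rest mass, kg
T  = 300.0
Eg = 1.12 * 1.602176634e-19  # band gap of Si, J (textbook value)

me_eff, mh_eff = 1.08 * m0, 0.81 * m0  # assumed density-of-states masses

def N_eff(m_eff):
    """Effective density of states 2 (2 pi m* k T / h^2)^(3/2), in m^-3."""
    return 2.0 * (2.0 * math.pi * m_eff * k * T / h**2) ** 1.5

ni = math.sqrt(N_eff(me_eff) * N_eff(mh_eff)) * math.exp(-Eg / (2 * k * T))
# Close to the commonly quoted ~1e10 cm^-3 for silicon.
print(f"n_i(Si, 300 K) ~ {ni:.2e} m^-3 = {ni/1e6:.2e} cm^-3")

# --- Drude free electron density of copper ---------------------------------
N_A = 6.02214076e23          # Avogadro constant, 1/mol
Z   = 1                      # assumed valence of Cu
rho = 8.96                   # g/cm^3
A   = 63.546                 # g/mol

n = N_A * Z * rho / A        # electrons per cm^3
# ~8.5e22 cm^-3, in line with the value tabulated for copper.
print(f"n(Cu) ~ {n:.2e} cm^-3 = {n*1e6:.2e} m^-3")
```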
Charge carrier density
[ "Physics", "Materials_science", "Mathematics" ]
963
[ "Physical phenomena", "Physical quantities", "Charge carriers", "Quantity", "Mass", "Electrical phenomena", "Density", "Condensed matter physics", "Wikipedia categories named after physical quantities", "Matter" ]
4,655,598
https://en.wikipedia.org/wiki/List%20of%20boiling%20and%20freezing%20information%20of%20solvents
See also Freezing-point depression Boiling-point elevation References Chemistry-related lists Phase transitions Phases of matter
List of boiling and freezing information of solvents
[ "Physics", "Chemistry" ]
22
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "nan", "Statistical mechanics", "Matter" ]
4,656,507
https://en.wikipedia.org/wiki/Bose%E2%80%93Hubbard%20model
The Bose–Hubbard model gives a description of the physics of interacting spinless bosons on a lattice. It is closely related to the Hubbard model that originated in solid-state physics as an approximate description of superconducting systems and the motion of electrons between the atoms of a crystalline solid. The model was introduced by Gersch and Knollman in 1963 in the context of granular superconductors. (The term 'Bose' in its name refers to the fact that the particles in the system are bosonic.) The model rose to prominence in the 1980s after it was found to capture the essence of the superfluid-insulator transition in a way that was much more mathematically tractable than fermionic metal-insulator models. The Bose–Hubbard model can be used to describe physical systems such as bosonic atoms in an optical lattice, as well as certain magnetic insulators. Furthermore, it can be generalized and applied to Bose–Fermi mixtures, in which case the corresponding Hamiltonian is called the Bose–Fermi–Hubbard Hamiltonian. Hamiltonian The physics of this model is given by the Bose–Hubbard Hamiltonian: H = −t Σ_{⟨i,j⟩} (b†_i b_j + b†_j b_i) + (U/2) Σ_i n_i (n_i − 1) − μ Σ_i n_i. Here ⟨i, j⟩ denotes summation over all neighboring lattice sites i and j, while b†_i and b_i are bosonic creation and annihilation operators such that n_i = b†_i b_i gives the number of particles on site i. The model is parametrized by the hopping amplitude t that describes boson mobility in the lattice, the on-site interaction U which can be attractive (U < 0) or repulsive (U > 0), and the chemical potential μ, which essentially sets the number of particles. If unspecified, typically the phrase 'Bose–Hubbard model' refers to the case where the on-site interaction is repulsive. This Hamiltonian has a global U(1) symmetry, which means that it is invariant (its physical properties are unchanged) by the transformation b_i → e^{iθ} b_i. In a superfluid phase, this symmetry is spontaneously broken. Hilbert space The dimension of the Hilbert space of the Bose–Hubbard model is given by D_b = (N_b + L − 1)! / (N_b! (L − 1)!), where N_b is the total number of particles, while L denotes the total number of lattice sites. At fixed N_b or L, the Hilbert space dimension grows polynomially, but at a fixed density of bosons per site, it grows exponentially in the number of sites (for example, at unit filling N_b = L the dimension grows roughly as 4^L). Analogous Hamiltonians may be formulated to describe spinless fermions (the Fermi-Hubbard model) or mixtures of different atom species (Bose–Fermi mixtures, for example). In the case of a mixture, the Hilbert space is simply the tensor product of the Hilbert spaces of the individual species. Typically additional terms are included to model interaction between species. Phase diagram At zero temperature, the Bose–Hubbard model (in the absence of disorder) is in either a Mott insulating state at small t/U, or in a superfluid state at large t/U. The Mott insulating phases are characterized by integer boson densities, by the existence of an energy gap for particle-hole excitations, and by zero compressibility. The superfluid is characterized by long-range phase coherence, a spontaneous breaking of the Hamiltonian's continuous U(1) symmetry, a non-zero compressibility and superfluid susceptibility. At non-zero temperature, in certain parameter regimes a regular fluid phase appears that does not break the symmetry and does not display phase coherence. Both of these phases have been experimentally observed in ultracold atomic gases. In the presence of disorder, a third, "Bose glass" phase exists. The Bose glass is a Griffiths phase, and can be thought of as a Mott insulator containing rare 'puddles' of superfluid. 
These superfluid pools are not interconnected, so the system remains insulating, but their presence significantly changes model thermodynamics. The Bose glass phase is characterized by finite compressibility, the absence of a gap, and by an infinite superfluid susceptibility. It is insulating despite the absence of a gap, as low tunneling prevents the generation of excitations which, although close in energy, are spatially separated. The Bose glass has a non-zero Edwards–Anderson order parameter and has been suggested (but not proven) to display replica symmetry breaking. Mean-field theory The phases of the clean Bose–Hubbard model can be described using a mean-field Hamiltonian: H^MF = (U/2) Σ_i n_i (n_i − 1) − μ Σ_i n_i − zt Σ_i (ψ* b_i + ψ b†_i − |ψ|²), where z is the lattice co-ordination number. This can be obtained from the full Bose–Hubbard Hamiltonian by setting b_i = ψ + δb_i where ψ = ⟨b_i⟩, neglecting terms quadratic in δb_i (assumedly infinitesimal) and relabelling δb_i → b_i. Because this decoupling breaks the U(1) symmetry of the initial Hamiltonian for all non-zero values of ψ, this parameter acts as a superfluid order parameter. For simplicity, this decoupling assumes ψ to be the same on every site, which precludes exotic phases such as supersolids or other inhomogeneous phases. (Other decouplings are possible.) The phase diagram can be determined by calculating the energy of this mean-field Hamiltonian using second-order perturbation theory and finding the condition for which the coefficient of |ψ|² in the energy expansion changes sign. To do this, the Hamiltonian is written as a site-local piece plus a perturbation: H^MF = H_0 + V, where H_0 = (U/2) n(n − 1) − μn, and the bilinear term V = −zt ψ b† and its conjugate are treated as the perturbation. The order parameter ψ is assumed to be small near the phase transition. The local term is diagonal in the Fock basis, giving the zeroth-order energy contribution: E^(0)_n = (U/2) n(n − 1) − μn, where n is an integer that labels the filling of the Fock state. The perturbative piece can be treated with second-order perturbation theory, which leads to: E^(2)_n = −(zt|ψ|)² [ (n + 1)/(Un − μ) + n/(μ − U(n − 1)) ]. The energy can be expressed as a series expansion in even powers of the order parameter (also known as the Landau formalism): E = E^(0) + r|ψ|² + O(|ψ|⁴). After doing so, the condition for the mean-field, second-order phase transition between the Mott insulator and the superfluid phase is given by: 1/(zt) = (n + 1)/(Un − μ) + n/(μ − U(n − 1)), where the integer n describes the filling of the Mott insulating lobe. Plotting this line for different integer values of n generates the boundary of the different Mott lobes, as shown in the phase diagram. Implementation in optical lattices Ultracold atoms in optical lattices are considered a standard realization of the Bose–Hubbard model. The ability to tune model parameters using simple experimental techniques and the lack of the lattice dynamics that are present in solid-state electronic systems mean that ultracold atoms offer a clean, controllable realisation of the Bose–Hubbard model. The biggest downside with optical lattice technology is the trap lifetime, with atoms typically trapped for only a few tens of seconds. To see why ultracold atoms offer such a convenient realization of Bose–Hubbard physics, the Bose–Hubbard Hamiltonian can be derived starting from the second quantized Hamiltonian that describes a gas of ultracold atoms in the optical lattice potential. This Hamiltonian is given by: H = ∫ d³r ψ̂†(r) (−(ħ²/2m)∇² + V_latt(r)) ψ̂(r) + (g/2) ∫ d³r ψ̂†(r) ψ̂†(r) ψ̂(r) ψ̂(r) − μ ∫ d³r ψ̂†(r) ψ̂(r), where V_latt(r) is the optical lattice potential, g is the (contact) interaction amplitude, and μ is the chemical potential. The tight binding approximation results in the substitution ψ̂(r) = Σ_i b_i w⁰_i(r), which leads to the Bose–Hubbard Hamiltonian provided the physics are restricted to the lowest band (n = 0) and the interactions are local at the level of the discrete mode. 
Mathematically, this can be stated as the requirement that ∫ w⁰_i*(r) w⁰_j(r) w⁰_k(r) w⁰_l(r) d³r = 0 except for the case i = j = k = l. Here, wⁿ_i is a Wannier function for a particle in an optical lattice potential localized around site i of the lattice and for the n-th Bloch band. Subtleties and approximations The tight-binding approximation significantly simplifies the second quantized Hamiltonian, though it introduces several limitations at the same time: For single-site states with several particles in a single state, the interactions may couple to higher Bloch bands, which contradicts base assumptions. Still, a single band model is able to address low-energy physics of such a setting but with parameters U and J becoming density-dependent. Instead of one parameter U, the interaction energy of n particles may be described by U_n, close to but not equal to U. When considering (fast) lattice dynamics, additional terms are added to the Hamiltonian so that the time-dependent Schrödinger equation is obeyed in the (time-dependent) Wannier function basis. The terms come from the Wannier functions' time dependence. Otherwise, the lattice dynamics may be incorporated by making the key parameters of the model time-dependent, varying with the instantaneous value of the optical potential. Experimental results Quantum phase transitions in the Bose–Hubbard model were experimentally observed by Greiner et al., and density dependent interaction parameters were observed by Immanuel Bloch's group. Single-atom resolution imaging of the Bose–Hubbard model has been possible since 2009 using quantum gas microscopes. Further applications The Bose–Hubbard model is of interest in the field of quantum computation and quantum information. Entanglement of ultra-cold atoms can be studied using this model. Numerical simulation In the calculation of low energy states the term proportional to n²U means that large occupation of a single site is improbable, allowing for truncation of the local Hilbert space to states containing at most d particles. Then the local Hilbert space dimension is d + 1. The dimension of the full Hilbert space grows exponentially with the number of lattice sites, limiting exact computer simulations of the entire Hilbert space to systems of 15-20 particles in 15-20 lattice sites. Experimental systems contain several million sites, with average filling above unity. One-dimensional lattices may be studied using density matrix renormalization group (DMRG) and related techniques such as time-evolving block decimation (TEBD). This includes calculating the ground state of the Hamiltonian for systems of thousands of particles on thousands of lattice sites, and simulating its dynamics governed by the time-dependent Schrödinger equation. Recently, two dimensional lattices have been studied using projected entangled pair states, a generalization of matrix product states in higher dimensions, both for the ground state and finite temperature. Higher dimensions are significantly more difficult due to the rapid growth of entanglement. All dimensions may be treated by quantum Monte Carlo algorithms, which provide a way to study properties of the Hamiltonian's thermal states, and in particular the ground state. Generalizations Bose–Hubbard-like Hamiltonians may be derived for different physical systems containing ultracold atom gas in the periodic potential. 
They include: systems with longer-ranged density-density interactions of the form V n_i n_j between neighbouring sites, which may stabilise a supersolid phase for certain parameter values; dimerised magnets, where spin-1/2 electrons are bound together in pairs called dimers that have bosonic excitation statistics and are described by a Bose–Hubbard model; long-range dipolar interaction; systems with interaction-induced tunneling terms; internal spin structure of atoms, for example due to trapping an entire degenerate manifold of hyperfine spin states (for F = 1 it leads to the spin-1 Bose–Hubbard model); situations where the gas experiences an additional potential, for example in disordered systems. The disorder might be realised by a speckle pattern, or using a second, incommensurate, weaker, optical lattice. In the latter case inclusion of the disorder amounts to including an extra term of the form H_ε = Σ_i ε_i n_i, where the ε_i are random on-site energies. See also Jaynes–Cummings–Hubbard model References Quantum lattice models
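Two of the calculations described above are small enough to reproduce directly. First, the mean-field Mott lobe boundary: solving the transition condition for the hopping gives, with U = 1, zt(μ) = (n − μ)(μ − (n − 1))/(μ + 1) inside lobe n, and the lobe tip can be located numerically. This is a sketch under the conventions used above, not a general phase-diagram tool.

```python
import numpy as np

# Mean-field Mott lobe boundary, U = 1 units: zt = (n - mu)(mu - (n-1))/(mu + 1).
def lobe_boundary(mu, n):
    return (n - mu) * (mu - (n - 1)) / (mu + 1)

for n in (1, 2, 3):
    mu = np.linspace(n - 1 + 1e-6, n - 1e-6, 100_001)
    zt = lobe_boundary(mu, n)
    i = np.argmax(zt)
    print(f"lobe n={n}: tip at mu/U = {mu[i]:.4f}, zt/U = {zt[i]:.6f}")
# For n = 1 the tip reproduces the known mean-field value
# zt/U = 3 - 2*sqrt(2) ~ 0.171573 at mu/U = sqrt(2) - 1.
```

Second, a minimal exact diagonalization of a tiny Bose–Hubbard chain (open boundary), illustrating both the combinatorial Hilbert-space dimension and the brute-force computation that is feasible only for a handful of sites; the parameters are arbitrary.

```python
import itertools
import numpy as np

L, Nb, t, U, mu = 4, 4, 1.0, 4.0, 0.0   # sites, bosons, hopping, interaction

# Fock basis: all occupation tuples (n_1, ..., n_L) with sum Nb.
basis = [s for s in itertools.product(range(Nb + 1), repeat=L) if sum(s) == Nb]
index = {s: k for k, s in enumerate(basis)}
dim = len(basis)                         # equals C(Nb + L - 1, Nb)

H = np.zeros((dim, dim))
for k, s in enumerate(basis):
    # Diagonal: on-site interaction and chemical potential.
    H[k, k] = sum(0.5 * U * n * (n - 1) - mu * n for n in s)
    # Off-diagonal: hopping -t (b_i^dag b_j + h.c.) on neighbouring sites.
    for i in range(L - 1):
        for a, b in ((i, i + 1), (i + 1, i)):   # move one boson b -> a
            if s[b] > 0:
                r = list(s)
                r[b] -= 1
                r[a] += 1
                H[index[tuple(r)], k] += -t * np.sqrt(s[b] * (s[a] + 1))

E = np.linalg.eigvalsh(H)
print(f"dim = {dim}, ground-state energy E0 = {E[0]:.6f}")
```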
Bose–Hubbard model
[ "Physics" ]
2,285
[ "Quantum mechanics", "Quantum lattice models" ]
4,658,070
https://en.wikipedia.org/wiki/American%20Superconductor
American Superconductor Corporation (AMSC) is an American energy technologies company headquartered in Ayer, Massachusetts. The firm specializes in using superconductors for the development of diverse power systems, including but not limited to superconducting wire. Moreover, AMSC employs superconductors in the construction of ship protection systems. The company has a subsidiary, AMSC Windtec, located in Klagenfurt, Austria. History American Superconductor was founded on April 9, 1987, by MIT professor and material scientist Gregory J. Yurek, in his kitchen. The founding team included Yet-Ming Chiang, David A. Rudman and John B. Vander Sande. The company completed its initial public offering in 1991. Over the next twenty years, the company made several acquisitions, including that of the Austrian wind power company WindTec. The company operates across three primary business segments: production of high-temperature superconductor (HTS) wire, which has a significantly higher electrical current capacity than copper wire; development of HTS-based motors and generators; and design and manufacturing of power electronic systems for wind farms and transmission systems. Projects Chicago ComEd Resilient Electric Grid Project On Aug 31, 2021 American Superconductor and ComEd announced the successful integration of AMSC’s REG system, which utilizes high-temperature superconductor wire to enhance the reliability, resiliency and performance of the electric power grid. This REG system has been running in commercial service since then. This project was partially funded by Homeland Security as it protects this part of the grid from EMP and other hazards. A second, larger phase is under design. Detroit Edison Project American Superconductor installed a test of a superconducting electric power transmission power cable in the Detroit Edison Frisbee substation in 2001. Holbrook Superconductor Project The world's first production superconducting transmission power cable, the Holbrook Superconductor Project, was commissioned in late June 2008. The suburban Long Island electrical substation is fed by about 600 meters of high-temperature superconductor wire manufactured by American Superconductor, installed underground and chilled to superconducting temperatures with liquid nitrogen. Tres Amigas Project American Superconductor was chosen as a supplier for the Tres Amigas Project, the United States' first renewable energy market hub. The Tres Amigas renewable energy market hub will be a multi-mile, triangular electricity pathway of Superconductor Electricity Pipelines capable of transferring and balancing many gigawatts of power between three U.S. power grids (the Eastern Interconnection, the Western Interconnection and the Texas Interconnection). Unlike traditional powerlines, it will transfer power as DC instead of AC current. It will be located in Clovis, New Mexico. Korea's LS Cable AMSC will sell three million meters of wire to allow LS Cable to build 10–15 miles of superconducting cabling for the grid. This represents an order of magnitude increase over the size of the current largest installation, at Long Island Power. HTS rotors AMSC has demonstrated a 36.5 MW (49,000 horsepower) high-temperature superconductor (HTS) electric motor for the United States Navy, and is developing a similar 10 megawatt wind turbine generator through its wholly owned Austria-based subsidiary AMSC Windtec. This would be one of the world's most powerful turbines. It operates at 30–40 kelvins, and the cooling system uses 40 kW. 
2009 government stimulus In 2009, the Department of Energy announced that they would provide $4.8M to AMSC for further development of superconducting electrical cables. Sinovel controversy In early 2011, a Serbian employee of American Superconductor sold the company's proprietary wind turbine control software to the company's largest customer, China based Sinovel. Sinovel promptly ended its payments to American Superconductor, causing the company to lose 84% of its market cap. The employee was bribed for only $20,500, and later pleaded guilty to bribery charges. References External links American Superconductor website Superconductivity Technology companies based in Massachusetts Ayer, Massachusetts Companies based in Middlesex County, Massachusetts Energy companies established in 1987 Technology companies established in 1987 1978 establishments in Massachusetts Companies listed on the Nasdaq 1991 initial public offerings
American Superconductor
[ "Physics", "Materials_science", "Engineering" ]
912
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
15,504,805
https://en.wikipedia.org/wiki/Indexing%20%28motion%29
Indexing in reference to motion is moving (or being moved) into a new position or location quickly and easily but also precisely. When indexing a machine part, its new location is known to within a few hundredths of a millimeter (thousandths of an inch), or often even to within a few thousandths of a millimeter (ten-thousandths of an inch), despite the fact that no elaborate measuring or layout was needed to establish that location. In reference to multi-edge cutting inserts, indexing is the process of exposing a new cutting edge for use. Indexing is a necessary kind of motion in many areas of mechanical engineering and machining. An object that indexes, or can be indexed, is said to be indexable. Usually when the word indexing is used, it refers specifically to rotation. That is, indexing is most often the quick and easy but precise rotation of a machine part through a certain known number of degrees. For example, Machinery's Handbook, 25th edition, in its section on milling machine indexing, says, "Positioning a workpiece at a precise angle or interval of rotation for a machining operation is called indexing." In addition to that most classic sense of the word, the swapping of one part for another, or other controlled movements, are also sometimes referred to as indexing, even if rotation is not the focus. Examples from everyday life There are various examples of indexing that laypersons (non-engineers and non-machinists) can find in everyday life. These motions are not always called by the name indexing, but the idea is essentially similar: The motion of a retractable utility knife blade, which often will have well-defined discrete positions (fully retracted, ¼-exposed, ½-exposed, ¾-exposed, fully exposed) The indexing of a revolver's cylinder with each shot Manufacturing applications Indexing is vital in manufacturing, especially mass production, where a well-defined cycle of motions must be repeated quickly and easily—but precisely—for each interchangeable part that is made. Without indexing capability, all manufacturing would have to be done on a craft basis, and interchangeable parts would have very high unit cost because of the time and skill needed to produce each unit. In fact, the evolution of modern technologies depended on the shift in methods from crafts (in which toolpath is controlled via operator skill) to indexing-capable toolpath control. A prime example of this theme was the development of the turret lathe, whose turret indexes tool positions, one after another, to allow successive tools to move into place, take precisely placed cuts, then make way for the next tool. How indexing is achieved in manufacturing Indexing capability is provided in two fundamental ways: with or without Information technology (IT). Non-IT-assisted physical guidance Non-IT-assisted physical guidance was the first means of providing indexing capability, via purely mechanical means. It allowed the Industrial Revolution to progress into the Machine Age. It is achieved by jigs, fixtures, and machine tool parts and accessories, which control toolpath by the very nature of their shape, physically limiting the path for motion. Some archetypal examples, developed to perfection before the advent of the IT era, are drill jigs, the turrets on manual turret lathes, indexing heads for manual milling machines, rotary tables, and various indexing fixtures and blocks that are simpler and less expensive than indexing heads, and serve quite well for most indexing needs in small shops. 
Although indexing heads of the pre-CNC era are now mostly obsolete in commercial manufacturing, the principle of purely mechanical indexing is still a vital part of current technology, in concert with IT, even as it has been extended to newer uses, such as the indexing of CNC milling machine toolholders or of indexable cutter inserts, whose precisely controlled size and shape allows them to be rotated or replaced quickly and easily without changing overall tool geometry. IT-assisted physical guidance IT-assisted physical guidance (for example, via NC, CNC, or robotics) has been developed since the World War II era and uses electromechanical and electrohydraulic servomechanisms to translate digital information into position control. These systems also ultimately physically limit the path for motion, as jigs and other purely mechanical means do; but they do it not simply through their own shape, but rather using changeable information. References Bibliography Mechanical engineering Metalworking terminology
Indexing (motion)
[ "Physics", "Engineering" ]
921
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
15,505,098
https://en.wikipedia.org/wiki/Binding%20coefficient
In medicinal chemistry and pharmacology, a binding coefficient is a quantity representing the extent to which a chemical compound will bind to a macromolecule. The preferential binding coefficient can be derived from the Kirkwood–Buff theory of solutions. Preferential binding is defined as a thermodynamic expression that describes the binding of the cosolvent over the solvent, in a system that is open to both the solvent and the cosolvent. Consequently, preferential interaction coefficients are measures of interactions that involve "solutes that participate in a reaction in solution." See also Binding constant Partition coefficient Binding affinity References Medicinal chemistry
Binding coefficient
[ "Chemistry", "Biology" ]
132
[ "Pharmacology", "Medicinal chemistry stubs", "Biochemistry stubs", "Medicinal chemistry", "nan", "Biochemistry", "Pharmacology stubs" ]
15,509,033
https://en.wikipedia.org/wiki/Photoacid
Photoacids are molecules that become more acidic upon absorption of light. Either the light causes a photodissociation to produce a strong acid, or the light causes photoassociation (such as a ring-forming reaction) that leads to an increased acidity and dissociation of a proton. There are two main types of molecules that release protons upon illumination: photoacid generators (PAGs) and photoacids (PAHs). PAGs undergo proton photodissociation irreversibly, while PAHs are molecules that undergo proton photodissociation and thermal reassociation. In this latter case, the excited state is strongly acidic, but reversibly so. Photoacid generators An example due to photodissociation is triphenylsulfonium triflate. This colourless salt consists of a sulfonium cation and the triflate anion. Many related salts are known, including those with other noncoordinating anions and those with diverse substituents on the phenyl rings. The triphenylsulfonium salts absorb at a wavelength of 233 nm, which induces a dissociation of one of the three phenyl rings. This dissociated phenyl radical then re-combines with the remaining diphenylsulfonium radical cation to liberate an H+ ion. The second reaction is irreversible, and therefore the entire process is irreversible, so triphenylsulfonium triflate is a photoacid generator. The ultimate products are thus a neutral organic sulfide and the strong acid triflic acid. [(C6H5)3S+][CF3SO3−] + hν → [(C6H5)2S•+][CF3SO3−] + C6H5• [(C6H5)2S•+][CF3SO3−] + C6H5• → (C6H5C6H4)(C6H5)S + CF3SO3H Applications of these photoacids include photolithography and catalysis of the polymerization of epoxides. Photoacids An example of a photoacid which undergoes excited-state proton transfer without prior photolysis is the fluorescent dye pyranine (8-hydroxy-1,3,6-pyrenetrisulfonate or HPTS). The Förster cycle was proposed by Theodor Förster and combines knowledge of the ground state acid dissociation constant (pKa), absorption, and fluorescence spectra to predict the pKa in the excited state of a photoacid. The name photoacid can be abbreviated PAH, where the H does not stand for a word starting with H, but rather for a hydrogen atom which is lost when the molecule reacts as a Brønsted acid. This use of PAH should not be confused with other meanings of PAH in chemistry and in medicine. References Photochemistry Lithography (microfabrication) Microtechnology Light-sensitive chemicals Acids
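The Förster cycle mentioned above turns two electronic transition energies (ideally the 0–0 energies) and the ground-state pKa into an excited-state estimate, pKa* = pKa − (E_HA − E_A−)/(kT ln 10). The sketch below evaluates it with rough, pyranine-like wavelengths; all numbers are illustrative assumptions, and approximating the 0–0 energies by absorption maxima is itself crude.

```python
import math

# Forster-cycle estimate of an excited-state pKa. The 0-0 transition
# energies are crudely approximated by absorption maxima; all values are
# illustrative, pyranine-like assumptions, not measured data.

hc_over_k = 1.4388           # cm*K (second radiation constant hc/k)
T = 298.0                    # K
pKa_ground = 7.3             # approximate ground-state pKa of pyranine

nu_HA = 1e7 / 405.0          # acid form absorption, cm^-1 (405 nm, assumed)
nu_A  = 1e7 / 455.0          # base form absorption, cm^-1 (455 nm, assumed)

delta_pKa = hc_over_k * (nu_HA - nu_A) / (T * math.log(10))
print(f"pKa* ~ {pKa_ground - delta_pKa:.1f}  (drop of {delta_pKa:.1f} units)")
```

With these inputs the estimated drop is between five and six pKa units, consistent with pyranine becoming a strong photoacid in the excited state.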
Photoacid
[ "Chemistry", "Materials_science", "Engineering" ]
632
[ "Light-sensitive chemicals", "Acids", "Microtechnology", "Materials science", "Light reactions", "nan", "Nanotechnology", "Lithography (microfabrication)" ]
17,234,513
https://en.wikipedia.org/wiki/DSIF
DSIF (DRB Sensitivity Inducing Factor) is a protein complex that can either negatively or positively affect transcription by RNA polymerase II (Pol II). It can interact with the negative elongation factor (NELF) to promote the stalling of Pol II at some genes, which is called promoter-proximal pausing. The pause occurs soon after initiation, once 20–60 nucleotides have been transcribed. This stalling is relieved by positive transcription elongation factor b (P-TEFb), after which Pol II enters productive elongation and resumes synthesis to completion. In humans, DSIF is composed of hSPT4 and hSPT5. hSPT5 has a direct role in mRNA capping, which occurs while elongation is paused. SPT5 is conserved from bacteria to humans. SPT4 and SPT5 in yeast are the homologs of hSPT4 and hSPT5. In bacteria, the homologous complex only contains NusG, a Spt5 homolog. Archaea have both proteins. The complex locks the RNA polymerase (RNAP) clamp into a closed state to prevent the elongation complex (EC) from dissociating. The Spt5 NGN domain helps anneal the two strands of DNA upstream. The single KOW domain in bacteria and archaea anchors a ribosome to the RNAP. Role in Diseases HIV DSIF plays the same role in HIV-1 gene expression as it does in normal transcription. This is because P-TEFb phosphorylates DSIF in the same way regardless of whether P-TEFb undergoes normal cellular regulation or bypasses it due to Tat. References Gene expression
DSIF
[ "Chemistry", "Biology" ]
351
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
17,234,794
https://en.wikipedia.org/wiki/Fracture%20%28mineralogy%29
In the field of mineralogy, fracture is the texture and shape of a rock's surface formed when a mineral is fractured. Minerals often have a highly distinctive fracture, making it a principal feature used in their identification. Fracture differs from cleavage in that the latter involves clean splitting along the cleavage planes of the mineral's crystal structure, as opposed to more general breakage. All minerals exhibit fracture, but when very strong cleavage is present, it can be difficult to see. Terminology Five types of fracture are recognized in mineralogy: conchoidal, earthy, hackly, splintery (or fibrous), and uneven fractures. Conchoidal fracture Conchoidal fracture is breakage that resembles the concentric ripples of a mussel shell. It often occurs in amorphous or fine-grained mineraloids such as flint, opal or obsidian, but may also occur in crystalline minerals such as quartz. Subconchoidal fracture is similar to conchoidal fracture, but with less significant curvature. Note that obsidian is an igneous rock, not a mineral, but it does illustrate conchoidal fracture well. Earthy fracture Earthy fracture is reminiscent of freshly broken soil. It is frequently seen in relatively soft, loosely bound minerals, such as limonite, kaolinite and aluminite. Hackly fracture Hackly fracture (also known as jagged fracture) is jagged, sharp and uneven. It occurs when metals are torn, and so is often encountered in native metals such as copper and silver. Splintery fracture Splintery fracture comprises sharp elongated points. It is particularly seen in fibrous minerals such as chrysotile, but may also occur in non-fibrous minerals such as kyanite. Uneven fracture Uneven fracture is a rough surface or one with random irregularities. It occurs in a wide range of minerals including arsenopyrite, pyrite and magnetite. See also Cleavage (crystal) Fracture (geology) Mineral#Cleavage, parting, fracture, and tenacity References Rudolf Duda and Lubos Rejl: Minerals of the World (Arch Cape Press, 1990) What is fracture? Mineralogy Fracture mechanics
Fracture (mineralogy)
[ "Materials_science", "Engineering" ]
444
[ "Structural engineering", "Materials degradation", "Materials science", "Fracture mechanics" ]
17,235,432
https://en.wikipedia.org/wiki/Negative%20elongation%20factor
In molecular biology, the NELF (negative elongation factor) is a four-subunit protein complex (NELF-A, NELF-B, NELF-C/NELF-D, and NELF-E) that negatively impacts transcription by RNA polymerase II (Pol II) by pausing about 20–60 nucleotides downstream from the transcription start site (TSS). Structure The NELF has four subunits within its complex, which are the following: NELF-A, NELF-B, NELF-C/NELF-D, and NELF-E. The NELF-A subunit is encoded by the gene WHSC2 (Wolf-Hirschhorn syndrome candidate 2). Micro-sequencing analysis demonstrated that NELF-B was the protein previously identified as being encoded by the gene COBRA1. It is unknown whether NELF-C and NELF-D are peptides resulting from the same mRNA with different translation initiation sites, possibly differing only in an extra 9 amino acids for NELF-C at the N-terminus, or peptides from different mRNAs entirely. A single NELF complex consists of either NELF-C or NELF-D, but not both. NELF-E is also known as RDBP. Function and Interactions NELF is located in the nucleus. NELF binds in a stable complex with DSIF (5,6-dichloro-1-β-d-ribofuranosylbenzimidazole (DRB)-sensitivity inducing factor) and RNA polymerase II together, but not with either alone. Due to its role in transcription, NELF is also a key player in the negative function of DSIF. NELF also works with DSIF to inhibit the speed of Pol II during the elongation phase in transcription. In D. melanogaster, the HSP70 gene is affected by NELF and DSIF through the induction of promoter proximal pausing. It is thought that NELF arose to assist DSIF by amplifying its negative effects in order to increase gene expression control. P-TEFb (positive transcription elongation factor b) inhibits the effect of NELF and DSIF on Pol II elongation, via its phosphorylation of serine-2 of the C-terminal domain of Pol II, and the SPT5 subunit of DSIF, causing dissociation of NELF. Another mechanism, interaction of enhancer RNA with NELF, causes dissociation of NELF from RNA polymerase II, resulting in productive elongation of mRNA, as studied in two immediate early genes. However, many mechanisms by which NELF and DSIF operate remain unclear. NELF homologues exist in some metazoans (e.g. insects and vertebrates) but have not been found in plants, yeast, or nematodes (worms). Interactions by subunit: NELF-A: Pol II complex. NELF-B: KIAA1191, NELF-E, and an early sequence of BRCA1. NELF-C/D: ARAF1, PCF11, and KAT8. NELF-E: NELF-B and HIV TAR RNA. NELF undergoes phase separation in vitro and condensation in vivo. Clinical Significance The NELF complex is also possibly a player in the enlistment of gene PCF11 to the stopped Pol II in HIV-1 latency. NELF-A may play a role in the phenotype of Wolf-Hirschhorn syndrome (WHS) as it is mapped to the critical area of deletion on the short arm of chromosome 4. Pol II pausing controlled by NELF is a key source of R-loop aggregation in mammary epithelial cells that are BRCA1-deficient, which could ultimately lead to tumorigenesis. References Protein complexes Transcription factors
Negative elongation factor
[ "Chemistry", "Biology" ]
811
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
17,238,630
https://en.wikipedia.org/wiki/Multiple-try%20Metropolis
Multiple-try Metropolis (MTM) is a sampling method that is a modified form of the Metropolis–Hastings method, first presented by Liu, Liang, and Wong in 2000. It is designed to help the sampling trajectory converge faster, by increasing both the step size and the acceptance rate. Background Problems with Metropolis–Hastings In Markov chain Monte Carlo, the Metropolis–Hastings algorithm (MH) can be used to sample from a probability distribution which is difficult to sample from directly. However, the MH algorithm requires the user to supply a proposal distribution, which can be relatively arbitrary. In many cases, one uses a Gaussian distribution centered on the current point in the probability space, of the form Q(x′; x) = N(x, σ²I). This proposal distribution is convenient to sample from and may be the best choice if one has little knowledge about the target distribution, π(x). If desired, one can use the more general multivariate normal distribution, Q(x′; x) = N(x, Σ), where Σ is the covariance matrix which the user believes is similar to the target distribution. Although this method must converge to the stationary distribution in the limit of infinite sample size, in practice the progress can be exceedingly slow. If σ² is too large, almost all steps under the MH algorithm will be rejected. On the other hand, if σ² is too small, almost all steps will be accepted, and the Markov chain will be similar to a random walk through the probability space. In the simpler case of a random walk with step size σ, we see that N steps only take us a distance of about σ√N. In this event, the Markov chain will not fully explore the probability space in any reasonable amount of time. Thus the MH algorithm requires reasonable tuning of the scale parameter (σ² or Σ). Problems with high dimensionality Even if the scale parameter is well-tuned, as the dimensionality of the problem increases, progress can still remain exceedingly slow. To see this, again consider Q(x′; x) = N(x, I). In one dimension, this corresponds to a Gaussian distribution with mean 0 and variance 1. For one dimension, this distribution has a mean step of zero, however the mean squared step size is given by ⟨(x′ − x)²⟩ = 1. As the number of dimensions increases, the expected step size becomes larger and larger. In N dimensions, the probability of moving a radial distance r is related to the chi distribution, and is given by P(r) ∝ r^(N−1) e^(−r²/2). This distribution is peaked at r = √(N−1), which is approximately √N for large N. This means that the step size will increase as roughly the square root of the number of dimensions. For the MH algorithm, large steps will almost always land in regions of low probability, and therefore be rejected. If we now add the scale parameter σ back in, we find that to retain a reasonable acceptance rate, we must make the transformation σ → σ/√N. In this situation, the acceptance rate can now be made reasonable, but the exploration of the probability space becomes increasingly slow. To see this, consider a slice along any one dimension of the problem. By making the scale transformation above, the expected step size in any one dimension is not σ but instead σ/√N. As this step size is much smaller than the "true" scale of the probability distribution (assuming that σ is somehow known a priori, which is the best possible case), the algorithm executes a random walk along every parameter. The multiple-try Metropolis algorithm Suppose Q(x; y) is an arbitrary proposal function. We require that Q(x; y) > 0 only if Q(y; x) > 0. Additionally, π(x) is the likelihood function. Define w(x; y) = π(x) Q(x; y) λ(x; y), where λ(x; y) is a non-negative symmetric function in x and y that can be chosen by the user. Now suppose the current state is x.
The MTM algorithm is as follows: 1) Draw k independent trial proposals y_1, …, y_k from Q(·; x). Compute the weights w(y_j; x) for each of these. 2) Select y from the y_j with probability proportional to the weights. 3) Now produce a reference set by drawing x*_1, …, x*_(k−1) from the distribution Q(·; y). Set x*_k = x (the current point). 4) Accept y with probability r = min(1, [w(y_1; x) + ⋯ + w(y_k; x)] / [w(x*_1; y) + ⋯ + w(x*_k; y)]). It can be shown that this method satisfies the detailed balance property and therefore produces a reversible Markov chain with π(x) as the stationary distribution. If Q is symmetric (as is the case for the multivariate normal distribution), then one can choose λ(x; y) = 1/Q(x; y), which gives w(x; y) = π(x). Disadvantages Multiple-try Metropolis needs to compute the energy of other states at every step. If the slow part of the process is calculating the energy, then this method can be slower. If the slow part of the process is finding neighbors of a given point, or generating random numbers, then again this method can be slower. It can be argued that this method only appears faster because it puts much more computation into a "single step" than Metropolis-Hastings does. See also Markov chain Monte Carlo Metropolis–Hastings algorithm Detailed balance References Liu, J. S., Liang, F. and Wong, W. H. (2000). The multiple-try method and local optimization in Metropolis sampling, Journal of the American Statistical Association, 95(449): 121–134 JSTOR Monte Carlo methods Markov chain Monte Carlo
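A minimal sketch of one MTM step in Python with NumPy, under the symmetric-proposal simplification described above (λ(x; y) = 1/Q(x; y), so the weights reduce to w(x; y) = π(x)); the target density and tuning constants below are illustrative assumptions, not part of the original reference:

import numpy as np

def mtm_step(x, target_pdf, k=5, sigma=1.0, rng=np.random.default_rng()):
    # 1) k independent Gaussian trial proposals around the current point x,
    #    each weighted by the (unnormalized) target density.
    trials = x + sigma * rng.standard_normal((k, x.size))
    w_trials = np.array([target_pdf(t) for t in trials])
    if w_trials.sum() == 0:
        return x  # every trial landed where the target density vanishes
    # 2) Select one trial y with probability proportional to its weight.
    y = trials[rng.choice(k, p=w_trials / w_trials.sum())]
    # 3) Reference set: k - 1 fresh draws around y, plus the current point x.
    refs = y + sigma * rng.standard_normal((k - 1, x.size))
    w_refs = sum(target_pdf(r) for r in refs) + target_pdf(x)
    # 4) Generalized Metropolis accept/reject ratio.
    if rng.random() < min(1.0, w_trials.sum() / w_refs):
        return y
    return x

# Usage: sample a 2-D standard normal from its unnormalized density.
target = lambda z: np.exp(-0.5 * z @ z)
x, samples = np.zeros(2), []
for _ in range(5000):
    x = mtm_step(x, target, k=5, sigma=2.0)
    samples.append(x)
print(np.mean(samples, axis=0), np.std(samples, axis=0))  # near 0 and 1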
Multiple-try Metropolis
[ "Physics" ]
974
[ "Monte Carlo methods", "Computational physics" ]
1,818,265
https://en.wikipedia.org/wiki/Quantum%20fluid
A quantum fluid refers to any system that exhibits quantum mechanical effects at the macroscopic level such as superfluids, superconductors, ultracold atoms, etc. Typically, quantum fluids arise in situations where both quantum mechanical effects and quantum statistical effects are significant. Most matter is either solid or gaseous (at low densities) near absolute zero. However, for the cases of helium-4 and its isotope helium-3, there is a pressure range where they can remain liquid down to absolute zero because the amplitude of the quantum fluctuations experienced by the helium atoms is larger than the inter-atomic distances. In the case of solid quantum fluids, it is only a fraction of its electrons or protons that behave like a "fluid". One prominent example is that of superconductivity, where quasi-particles made up of pairs of electrons bound through a phonon-mediated interaction (Cooper pairs) act as bosons, which are then capable of collapsing into the ground state to establish a supercurrent with a resistivity near zero. Derivation Quantum mechanical effects become significant for physics in the range of the de Broglie wavelength. For condensed matter, this is when the de Broglie wavelength of a particle is greater than the spacing between the particles in the lattice that comprises the matter. The de Broglie wavelength associated with a massive particle is λ = h/p, where h is the Planck constant and p is the momentum of the particle. The momentum can be found from the kinetic theory of gases, where the mean kinetic energy satisfies p²/2m = (3/2) k_B T. Here, the temperature can be found as T = p²/(3 m k_B). Of course, we can replace the momentum here with the momentum derived from the de Broglie wavelength like so: T = h²/(3 m k_B λ²). Hence, we can say that quantum fluids will manifest at approximate temperature regions where λ ≳ d, where d is the lattice spacing (or inter-particle spacing). Mathematically, this is stated like so: T ≲ h²/(3 m k_B d²). It is easy to see how the above definition relates to the particle density, n. We can write d as n^(−1/3) for a three dimensional lattice, so that the condition becomes T ≲ h² n^(2/3)/(3 m k_B). The above temperature limit has different meaning depending on the quantum statistics followed by each system, but generally refers to the point at which the system manifests quantum fluid properties. For a system of fermions, k_B T is an estimation of the Fermi energy of the system, where processes important to phenomena such as superconductivity take place. For bosons, T gives an estimation of the Bose-Einstein condensation temperature. See also Bose–Einstein condensate Superconductivity Superfluidity Classical fluid Liquid helium Fermi liquid Luttinger liquid Quantum spin liquid Macroscopic quantum phenomena Topological order References Condensed matter physics Quantum phases Exotic matter
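A quick numerical check of this criterion in Python (a sketch; the number density of liquid helium-4 is an assumed round value of 2.2 × 10²⁸ m⁻³, not a figure from the article):

h  = 6.62607015e-34                 # Planck constant, J s
kB = 1.380649e-23                   # Boltzmann constant, J/K
m  = 4.002602 * 1.66053906660e-27   # mass of a helium-4 atom, kg
n  = 2.2e28                         # assumed number density of liquid He-4, m^-3

d = n ** (-1.0 / 3.0)               # inter-particle spacing d = n^(-1/3)
T = h**2 / (3 * m * kB * d**2)      # temperature below which lambda_dB > d
print(f"d = {d:.2e} m, quantum regime below T ~ {T:.1f} K")

This prints a threshold on the order of 10 K, the right order of magnitude for the few-kelvin range in which liquid helium actually becomes a quantum fluid.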
Quantum fluid
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
519
[ "Quantum phases", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Exotic matter", "Matter" ]
1,818,270
https://en.wikipedia.org/wiki/Hamiltonian%20vector%20field
In mathematics and physics, a Hamiltonian vector field on a symplectic manifold is a vector field defined for any energy function or Hamiltonian. Named after the physicist and mathematician Sir William Rowan Hamilton, a Hamiltonian vector field is a geometric manifestation of Hamilton's equations in classical mechanics. The integral curves of a Hamiltonian vector field represent solutions to the equations of motion in the Hamiltonian form. The diffeomorphisms of a symplectic manifold arising from the flow of a Hamiltonian vector field are known as canonical transformations in physics and (Hamiltonian) symplectomorphisms in mathematics. Hamiltonian vector fields can be defined more generally on an arbitrary Poisson manifold. The Lie bracket of two Hamiltonian vector fields corresponding to functions f and g on the manifold is itself a Hamiltonian vector field, with the Hamiltonian given by the Poisson bracket of f and g. Definition Suppose that (M, ω) is a symplectic manifold. Since the symplectic form ω is nondegenerate, it sets up a fiberwise-linear isomorphism ω: TM → T*M between the tangent bundle TM and the cotangent bundle T*M, with the inverse Ω = ω⁻¹: T*M → TM. Therefore, one-forms on a symplectic manifold M may be identified with vector fields, and every differentiable function H: M → R determines a unique vector field X_H, called the Hamiltonian vector field with the Hamiltonian H, by defining for every vector field Y on M, dH(Y) = ω(X_H, Y). Note: Some authors define the Hamiltonian vector field with the opposite sign. One has to be mindful of varying conventions in physical and mathematical literature. Examples Suppose that M is a 2n-dimensional symplectic manifold. Then locally, one may choose canonical coordinates (q¹, …, qⁿ, p₁, …, pₙ) on M, in which the symplectic form is expressed as: ω = Σᵢ dqⁱ ∧ dpᵢ, where d denotes the exterior derivative and ∧ denotes the exterior product. Then the Hamiltonian vector field with Hamiltonian H takes the form: X_H = (∂H/∂pᵢ, −∂H/∂qⁱ) = Ω dH, where Ω is a 2n × 2n square matrix and Ω = [[0, Iₙ], [−Iₙ, 0]]. The matrix Ω is frequently denoted with J. Suppose that M = R²ⁿ is the 2n-dimensional symplectic vector space with (global) canonical coordinates. If H = pᵢ then X_H = ∂/∂qⁱ; if H = qⁱ then X_H = −∂/∂pᵢ; if H = ½ Σⱼ pⱼ² then X_H = Σⱼ pⱼ ∂/∂qʲ; if H = ½ Σⱼ (pⱼ² + (qʲ)²) then X_H = Σⱼ (pⱼ ∂/∂qʲ − qʲ ∂/∂pⱼ). Properties The assignment f ↦ X_f is linear, so that the sum of two Hamiltonian functions transforms into the sum of the corresponding Hamiltonian vector fields. Suppose that (q¹, …, qⁿ, p₁, …, pₙ) are canonical coordinates on M (see above). Then a curve γ(t) = (q(t), p(t)) is an integral curve of the Hamiltonian vector field X_H if and only if it is a solution of Hamilton's equations: dqⁱ/dt = ∂H/∂pᵢ, dpᵢ/dt = −∂H/∂qⁱ. The Hamiltonian H is constant along the integral curves, because dH(γ̇) = ω(X_H, X_H) = 0. That is, H(γ(t)) is actually independent of t. This property corresponds to the conservation of energy in Hamiltonian mechanics. More generally, if two functions F and H have a zero Poisson bracket (cf. below), then F is constant along the integral curves of H, and similarly, H is constant along the integral curves of F. This fact is the abstract mathematical principle behind Noether's theorem. The symplectic form ω is preserved by the Hamiltonian flow. Equivalently, the Lie derivative L_{X_H} ω = 0. Poisson bracket The notion of a Hamiltonian vector field leads to a skew-symmetric bilinear operation on the differentiable functions on a symplectic manifold M, the Poisson bracket, defined by the formula {f, g} = ω(X_g, X_f) = dg(X_f) = L_{X_f} g, where L_X denotes the Lie derivative along a vector field X. Moreover, one can check that the following identity holds: X_{{f,g}} = [X_f, X_g], where the right hand side represents the Lie bracket of the Hamiltonian vector fields with Hamiltonians f and g.
As a consequence (a proof at Poisson bracket), the Poisson bracket satisfies the Jacobi identity: {f, {g, h}} + {g, {h, f}} + {h, {f, g}} = 0, which means that the vector space of differentiable functions on M, endowed with the Poisson bracket, has the structure of a Lie algebra over R, and the assignment f ↦ X_f is a Lie algebra homomorphism, whose kernel consists of the locally constant functions (constant functions if M is connected). Remarks Notes Works cited See section 3.2. External links Hamiltonian vector field on nLab Hamiltonian mechanics Symplectic geometry William Rowan Hamilton
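A small SymPy sketch (an illustration with one degree of freedom, using the sign conventions adopted above) that computes Hamiltonian vector fields in canonical coordinates and checks the identity X_{{f,g}} = [X_f, X_g] componentwise; the two observables f and g are arbitrary choices:

import sympy as sp

q, p = sp.symbols('q p')

def X(H):
    # Hamiltonian vector field of H as its (d/dq, d/dp) components,
    # read off from Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
    return (sp.diff(H, p), -sp.diff(H, q))

def poisson(f, g):
    # Poisson bracket {f, g} = L_{X_f} g = dg(X_f), the convention used above.
    Xf = X(f)
    return sp.expand(Xf[0] * sp.diff(g, q) + Xf[1] * sp.diff(g, p))

def lie_bracket(V, W):
    # Lie bracket [V, W] of two vector fields given componentwise.
    coords = (q, p)
    return tuple(sp.expand(sum(V[j] * sp.diff(W[i], c) - W[j] * sp.diff(V[i], c)
                               for j, c in enumerate(coords)))
                 for i in range(2))

f = p**2 / 2 + q**4   # arbitrary observables
g = q * p
lhs = lie_bracket(X(f), X(g))
rhs = tuple(sp.expand(comp) for comp in X(poisson(f, g)))
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
print('X_{{f,g}} = [X_f, X_g] verified for the chosen f and g')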
Hamiltonian vector field
[ "Physics", "Mathematics" ]
776
[ "Hamiltonian mechanics", "Theoretical physics", "Classical mechanics", "Dynamical systems" ]
1,818,980
https://en.wikipedia.org/wiki/Danburite
Danburite is a calcium boron silicate mineral with a chemical formula of CaB2(SiO4)2. It has a Mohs hardness of 7 to 7.5 and a specific gravity of 3.0. The mineral has an orthorhombic crystal form. It is usually colourless, like quartz, but can also be either pale yellow or yellowish-brown. It typically occurs in contact metamorphic rocks. The Dana classification of minerals categorizes danburite as a sorosilicate, while the Strunz classification scheme lists it as a tectosilicate; its structure can be interpreted as either. Its crystal symmetry and form are similar to those of topaz; however, topaz is an aluminium fluorine-bearing nesosilicate. The clarity, resilience, and strong dispersion of danburite make it valuable as a cut stone for jewelry. It is named for Danbury, Connecticut, United States, where it was first discovered in 1839 by Charles Upham Shephard. References Calcium minerals Tectosilicates Orthorhombic minerals Minerals in space group 62 Luminescent minerals Gemstones
Danburite
[ "Physics", "Chemistry" ]
237
[ "Luminescence", "Luminescent minerals", "Materials", "Gemstones", "Matter" ]
1,820,902
https://en.wikipedia.org/wiki/Pyroelectric%20fusion
Pyroelectric fusion refers to the technique of using pyroelectric crystals to generate high strength electrostatic fields to accelerate deuterium ions (tritium might also be used someday) into a metal hydride target also containing deuterium (or tritium) with sufficient kinetic energy to cause these ions to undergo nuclear fusion. It was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 °C to +7 °C, combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. Though the energy of the deuterium ions generated by the crystal has not been directly measured, the authors used 100 keV (a temperature of about 10⁹ K) as an estimate in their modeling. At these energy levels, two deuterium nuclei can fuse to produce a helium-3 nucleus, a 2.45 MeV neutron and bremsstrahlung. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces. History The process of light ion acceleration using electrostatic fields and deuterium ions to produce fusion in solid deuterated targets was first demonstrated by Cockcroft and Walton in 1932 (see Cockcroft–Walton generator). That process is used in miniaturized versions of their original accelerator, in the form of small sealed tube neutron generators, for petroleum exploration. The process of pyroelectricity has been known since ancient times. The first use of a pyroelectric field to accelerate deuterons was in a 1997 experiment conducted by Drs. V.D. Dougar Jabon, G.V. Fedorovich, and N.V. Samsonenko. This group was the first to utilize a lithium tantalate (LiTaO3) pyroelectric crystal in fusion experiments. The novel idea with the pyroelectric approach to fusion is in its application of the pyroelectric effect to generate accelerating electric fields. This is done by heating the crystal from −34 °C to +7 °C over a period of a few minutes. Nuclear D-D fusion driven by pyroelectric crystals was proposed by Naranjo and Putterman in 2002. It was also discussed by Brownridge and Shafroth in 2004. The possibility of using pyroelectric crystals in a neutron production device (by D-D fusion) was proposed in a conference paper by Geuther and Danon in 2004 and later in a publication discussing electron and ion acceleration by pyroelectric crystals. None of these later authors had prior knowledge of the earlier 1997 experimental work of Dougar Jabon, Fedorovich, and Samsonenko, who mistakenly believed that fusion occurred within the crystals. The key ingredient of using a tungsten needle to produce sufficient ion beam current for use with a pyroelectric crystal power supply was first demonstrated in the 2005 Nature paper, although in a broader context tungsten emitter tips have been used as ion sources in other applications for many years. In 2010, it was found that tungsten emitter tips are not necessary to increase the acceleration potential of pyroelectric crystals; the acceleration potential can allow positive ions to reach kinetic energies between 300 and 310 keV. 2005–2009 In April 2005, a UCLA team headed by chemistry professor James K. Gimzewski and physics professor Seth Putterman utilized a tungsten probe attached to a pyroelectric crystal to increase the electric field strength.
Brian Naranjo, a graduate student working under Putterman, conducted the experiment demonstrating the use of a pyroelectric power source for producing fusion on a laboratory bench top device. The device used a lithium tantalate (LiTaO3) pyroelectric crystal to ionize deuterium atoms and to accelerate the deuterons towards a stationary erbium dideuteride (ErD2) target. Around 1000 fusion reactions per second took place, each resulting in the production of an 820 keV helium-3 nucleus and a 2.45 MeV neutron. The team anticipates applications of the device as a neutron generator or possibly in microthrusters for space propulsion. A team at Rensselaer Polytechnic Institute, led by Yaron Danon and his graduate student Jeffrey Geuther, improved upon the UCLA experiments using a device with two pyroelectric crystals and capable of operating at non-cryogenic temperatures. Pyroelectric fusion has been hyped in the news media, which overlooked the earlier work of Dougar Jabon, Fedorovich and Samsonenko. Pyroelectric fusion is not related to the earlier claims of fusion reactions said to have been observed during sonoluminescence (bubble fusion) experiments conducted under the direction of Rusi Taleyarkhan of Purdue University. Naranjo of the UCLA team was one of the main critics of these earlier prospective fusion claims from Taleyarkhan. 2010–present The first successful results with pyroelectric fusion using a tritiated target were reported in 2010. Putterman and Naranjo worked with T. Venhaus of Los Alamos National Laboratory to measure a 14.1 MeV neutron signal far above background. See also Neutron sources Neutron generator Pyroelectricity References External links "UCLA crystal fusion website" "Mostly cold fusion" "Physics News Update 729" "Coming in out of the cold: nuclear fusion, for real | csmonitor.com" "Nuclear fusion on the desktop...really! - Science - nbcnews.com" "Supplementary methods for "Observation of nuclear fusion driven by a pyroelectric crystal"" Nuclear fusion Neutron sources
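A quick kinematic check of the quoted energies in Python (a sketch; the masses are rounded values in atomic mass units): in the D(d,n)³He branch the two fragments fly apart with equal and opposite momenta, so they share the released energy in inverse proportion to their masses, reproducing the 2.45 MeV neutron and roughly 820 keV helium-3 nucleus mentioned above.

# Nonrelativistic two-body kinematics for D + D -> He-3 + n.
Q = 3.27                      # MeV released in the D(d,n)He-3 branch
m_n, m_he3 = 1.0087, 3.0160   # fragment masses, atomic mass units

E_n = Q * m_he3 / (m_n + m_he3)    # lighter fragment carries most energy
E_he3 = Q * m_n / (m_n + m_he3)
print(f"neutron: {E_n:.2f} MeV, helium-3: {E_he3:.2f} MeV")
# -> roughly 2.45 MeV and 0.82 MeV, the values quoted above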
Pyroelectric fusion
[ "Physics", "Chemistry" ]
1,227
[ "Nuclear fusion", "Nuclear physics" ]
1,821,411
https://en.wikipedia.org/wiki/Diamond-like%20carbon
Diamond-like carbon (DLC) is a class of amorphous carbon material that displays some of the typical properties of diamond. DLC is usually applied as coatings to other materials that could benefit from such properties. DLC exists in seven different forms. All seven contain significant amounts of sp3 hybridized carbon atoms. The reason that there are different types is that even diamond can be found in two crystalline polytypes. The more common one uses a cubic lattice, while the less common one, lonsdaleite, has a hexagonal lattice. By mixing these polytypes at the nanoscale, DLC coatings can be made that are at the same time amorphous, flexible, and yet purely sp3 bonded "diamond". The hardest, strongest, and slickest is tetrahedral amorphous carbon (ta-C). Ta-C can be considered to be the "pure" form of DLC, since it consists almost entirely of sp3 bonded carbon atoms. Fillers such as hydrogen, graphitic sp2 carbon, and metals are used in the other six forms to reduce production expenses or to impart other desirable properties. The various forms of DLC can be applied to almost any material that is compatible with a vacuum environment. History In 2006, the market for outsourced DLC coatings was estimated at about €30,000,000 in the European Union. In 2011, researchers at Stanford University announced a super-hard amorphous diamond, created under conditions of ultrahigh pressure. The material lacks the crystalline structure of diamond but has the light weight characteristic of carbon. In 2021, Chinese researchers announced AM-III, a super-hard, fullerene-based form of amorphous carbon. It is also a semiconductor with a bandgap range of 1.5 to 2.2 eV. The material demonstrated a hardness of 113 GPa in a Vickers hardness test, versus roughly 70 to 100 GPa for diamond. It was hard enough to scratch the surface of a diamond. Distinction from natural and synthetic diamond Naturally occurring diamond is almost always found in the crystalline form with a purely cubic orientation of sp3 bonded carbon atoms. Sometimes there are lattice defects or inclusions of atoms of other elements that give color to the stone, but the lattice arrangement of the carbons remains cubic and bonding is purely sp3. The internal energy of the cubic polytype is slightly lower than that of the hexagonal form, and growth rates from molten material in both natural and bulk synthetic diamond production methods are slow enough that the lattice structure has time to grow in the lowest energy (cubic) form that is possible for sp3 bonding of carbon atoms. In contrast, DLC is typically produced by processes in which high energy precursor carbon (e.g. in plasmas, in filtered cathodic arc deposition, in sputter deposition and in ion beam deposition) is rapidly cooled or quenched on relatively cold surfaces. In those cases cubic and hexagonal lattices can be randomly intermixed, layer by atomic layer, because there is no time available for one of the crystalline geometries to grow at the expense of the other before the atoms are "frozen" in place in the material. Amorphous DLC coatings can result in materials that have no long-range crystalline order. Without long range order there are no brittle fracture planes, so such coatings are flexible and conformal to the underlying shape being coated, while still being as hard as diamond. In fact this property has been exploited to study atom-by-atom wear at the nanoscale in DLC. Production There are several methods for producing DLC, which rely on the lower density of sp2 than sp3 carbon.
So the application of pressure, impact, catalysis, or some combination of these at the atomic scale can force sp2 bonded carbon atoms closer together into sp3 bonds. This must be done vigorously enough that the atoms cannot simply spring back apart into separations characteristic of sp2 bonds. Usually techniques either combine such a compression with a push of the new cluster of sp3 bonded carbon deeper into the coating so that there is no room for expansion back to separations needed for sp2 bonding; or the new cluster is buried by the arrival of new carbon destined for the next cycle of impacts. It is reasonable to envisage the process as a "hail" of projectiles that produce localized, faster, nanoscale versions of the classic combinations of heat and pressure that produce natural and synthetic diamond. Because they occur independently at many places across the surface of a growing film or coating, they tend to produce an analog of a cobblestone street with the cobbles being nodules or clusters of sp3 bonded carbon. Depending upon the particular "recipe" being used, there are cycles of deposition of carbon and impact or continuous proportions of new carbon arriving and projectiles conveying the impacts needed to force the formation of the sp3 bonds. As a result, ta-C may have the structure of a cobblestone street, or the nodules may "melt together" to make something more like a sponge, or the cobbles may be so small as to be nearly invisible to imaging. A classic "medium" morphology for a ta-C film is shown in the figure. Properties As implied by the name, diamond-like carbon (DLC), the value of such coatings accrues from their ability to provide some of the properties of diamond to surfaces of almost any material. The primary desirable qualities are hardness, wear resistance, and slickness (the friction coefficient of a DLC film against polished steel ranges from 0.05 to 0.20). DLC properties depend strongly on plasma processing deposition parameters, such as bias voltage, DLC coating thickness, and interlayer thickness. Moreover, heat treatment also changes coating properties such as hardness, toughness and wear rate. However, which properties are added to a surface and to what degree depends upon which of the seven forms are applied, and further upon the amounts and types of diluents added to reduce the cost of production. In 2006 the Association of German Engineers, VDI, the largest engineering association in Western Europe, issued an authoritative report, VDI2840, in order to clarify the existing multiplicity of confusing terms and trade names. It provides a unique classification and nomenclature for diamond-like-carbon (DLC) and diamond films. It succeeded in reporting all information necessary to identify and to compare different DLC films which are offered on the market. Quoting from that document: These [sp3] bonds can occur not only with crystals - in other words, in solids with long-range order - but also in amorphous solids where the atoms are in a random arrangement. In this case there will be bonding only between a few individual atoms and not in a long-range order extending over a large number of atoms. The bond types have a considerable influence on the material properties of amorphous carbon films. If the sp2 type is predominant the film will be softer, if the sp3 type is predominant the film will be harder. A secondary determinant of quality was found to be the fractional content of hydrogen.
Some of the production methods involve hydrogen or methane as a catalyst and a considerable percentage of hydrogen can remain in the finished DLC material. When it is recalled that the soft plastic, polyethylene is made from carbon that is bonded purely by the diamond-like sp3 bonds, but also includes chemically bonded hydrogen, it is not surprising to learn that fractions of hydrogen remaining in DLC films degrade them almost as much as do residues of sp2 bonded carbon. The VDI2840 report confirmed the utility of locating a particular DLC material onto a 2-dimensional map on which the X-axis described the fraction of hydrogen in the material and the Y-axis described the fraction of sp3 bonded carbon atoms. The highest quality of diamond-like properties was affirmed to be correlated with the proximity of the map point plotting the (X,Y) coordinates of a particular material to the upper left corner at (0,1), namely 0% hydrogen and 100% sp3 bonding. That "pure" DLC material is ta-C and others are approximations that are degraded by diluents such as hydrogen, sp2 bonded carbon, and metals. Valuable properties of materials that are ta-C, or nearly ta-C follow. Hardness Within the "cobblestones", nodules, clusters, or "sponges" (the volumes in which local bonding is sp3) bond angles may be distorted from those found in either pure cubic or hexagonal lattices because of intermixing of the two. The result is internal (compressive) stress that can appear to add to the hardness measured for a sample of DLC. Hardness is often measured by nanoindentation methods in which a finely pointed stylus of natural diamond is forced into the surface of a specimen. If the sample is so thin that there is only a single layer of nodules, then the stylus may enter the DLC layer between the hard cobblestones and push them apart without sensing the hardness of the sp3 bonded volumes. Measurements would be low. Conversely, if the probing stylus enters a film thick enough to have several layers of nodules so it cannot be spread laterally, or if it enters on top of a cobblestone in a single layer, then it will measure not only the real hardness of the diamond bonding, but an apparent hardness even greater because the internal compressive stress in those nodules would provide further resistance to penetration of the material by the stylus. Nanoindentation measurements have reported hardness as great as 50% more than values for natural crystalline diamond. Since the stylus is blunted in such cases or even broken, actual numbers for hardness that exceed that of natural diamond are meaningless. They only show that the hard parts of an optimal ta-C material will break natural diamond rather than the inverse. Nevertheless, from a practical viewpoint it does not matter how the resistance of a DLC material is developed, it can be harder than natural diamond in usage. One method of testing the coating hardness is by means of the Persoz pendulum. In a microhardness test of a DLC coating (without metal added), a case-hardened 9310 bearing steel was tested using a diamond-tipped indenter tool supplied by Fisher Scientific International. The tool used a comparison of force applied to indentation depth, similar to the Rockwell Scale hardness measurement method. Microhardness testing of uncoated steel was limited to an indentation depth of approximately 1.2 microns. This same bearing steel was then coated with a 2.0 micron thickness DLC coating. 
Microhardness testing on the coated steel was then conducted, limiting indentation of the coating to a depth of approximately 0.15 microns, or 7.5 percent of the coating thickness. Measurements were repeated five times on uncoated steel and 12 times on coated steel. As a reference, the uncoated bearing steel had a hardness of Rockwell C 60. The average microhardness measured was 7,133 N/mm2 for the uncoated steel and 9,571 N/mm2 for the coated steel, indicating that the coating was approximately 34 percent harder than the Rockwell C 60 steel. A measurement of the plastic deformation, or permanent indentation scar, caused by the micro-indenter indicated an elasticity of 35 percent for the steel and 86 percent for the DLC. Measurement of plastic deformation is used for Vickers hardness measurements. As expected, the greater "closing" of the indentation scar for the coating suggested a much higher Vickers hardness, in a ratio of greater than two times that of the uncoated steel, thereby rendering Vickers hardness calculations not meaningful. Bonding of DLC coatings The same internal stress that benefits the hardness of DLC materials makes it difficult to bond such coatings to the substrates to be protected. The internal stresses try to "pop" the DLC coatings off of the underlying samples. This challenging downside of extreme hardness is answered in several ways, depending upon the particular "art" of the production process. The most simple is to exploit the natural chemical bonding that happens in cases in which incident carbon ions supply both the material to be impacted into sp3 bonded carbon atoms and the impacting energies that compress carbon volumes condensed earlier. In this case the first carbon ions will impact the surface of the item to be coated. If that item is made of a carbide-forming substance such as Ti or the Fe in steel, a layer of carbide will be formed that is later bonded to the DLC grown on top of it. Other methods of bonding include such strategies as depositing intermediate layers that have atomic spacings that grade from those of the substrate to those characteristic of sp3 bonded carbon. In 2006 there were as many successful recipes for bonding DLC coatings as there were sources of DLC. Tribology DLC coatings are often used to prevent wear due to their excellent tribological properties. DLC is very resistant to abrasive and adhesive wear, making it suitable for use in applications that experience extreme contact pressure, both in rolling and sliding contact. DLC is often used to prevent wear on razor blades and metal cutting tools, including lathe inserts and milling cutters. DLC is used in bearings, cams, cam followers, and shafts in the automobile industry. The coatings reduce wear during the 'break-in' period, where drive train components may be starved for lubrication. DLCs may also be used in chameleon coatings that are designed to prevent wear during launch, orbit, and re-entry of land-launched space vehicles. DLC provides lubricity at ambient atmosphere and in vacuum, unlike graphite, which requires moisture to be lubricious. Diamond-like carbon coatings with embedded isolated carbon nanoparticles are a recent development in this area. The wear rate of amorphous DLC can be reduced by up to 60% by embedding isolated carbon nanoparticles during DLC deposition. The isolated particles were created in situ through rapid plasma quenching with helium pulses. Despite the favorable tribological properties of DLC, it must be used with caution on ferrous metals.
If it is used at higher temperatures, the substrate or counterface may carburize, which could lead to loss of function due to a change in hardness. The final, end-use temperature of a coated component should be kept below the temperature at which a PVD DLC coating is applied. A new interface design between a DLC-coated silicon wafer and metal is reported to increase the durability of the DLC-coated silicon wafer against high contact stress from approximately 1.0 GPa to beyond 2.5 GPa. Electrical If a DLC material is close enough to ta-C on plots of bonding ratios and hydrogen content, it can be an insulator with a high value of resistivity. Perhaps more interesting is that if prepared in the "medium" cobblestone version such as shown in the above figure, electricity is passed through it by a mechanism of hopping conductivity. In this type of conduction of electricity the electrons move by quantum mechanical tunneling between pockets of conductive material isolated in an insulator. The result is that such a process makes the material something like a semiconductor. Further research on electrical properties is needed to explicate such conductivity in ta-C in order to determine its practical value. However, a different electrical property, electron emissivity, has been shown to occur at uniquely high levels for ta-C. Such high values allow electrons to be emitted from ta-C coated electrodes into vacuum or into other solids with the application of modest levels of applied voltage. This has supported important advances in medical technology. Applications Applications of DLC typically utilize the ability of the material to reduce abrasive wear. Tooling components, such as endmills, drill bits, dies and molds often use DLC in this manner. DLC is also used in the engines of modern supersport motorcycles, Formula 1 racecars, NASCAR vehicles, and as a coating on hard-disk platters and hard-disk read heads to protect against head crashes. Virtually all of the multi-bladed razors used for wet shaving have the edges coated with hydrogen-free DLC to reduce friction, preventing abrasion of sensitive skin. It is also being used as a coating by some weapon manufacturers/custom gunsmiths. Some forms have been certified in the EU for food service and find extensive uses in the high-speed actions involved in processing novelty foods such as potato chips and in guiding material flows in packaging foodstuffs with plastic wraps. DLC coats the cutting edges of tools for the high-speed, dry shaping of difficult exposed surfaces of wood and aluminium, for example on automobile dashboards. DLC coatings are widely used for lithium based energy storage batteries to improve their performance. DLC can increase retention capacity by 40% and cycle life by 400% for lithium batteries. The wear, friction, and electrical properties of DLC make it an appealing material for medical applications. DLC has proved to have excellent bio-compatibility as well. This has enabled many medical procedures, such as percutaneous coronary intervention employing brachytherapy, to benefit from the unique electrical properties of DLC. At low voltages and low temperatures electrodes coated with DLC can emit enough electrons to be arranged into disposable, micro-X-ray tubes as small as the radioactive seeds that are introduced into arteries or tumors in conventional brachytherapy.
The same dose of prescribed radiation can be applied from the inside out, with the additional possibility of switching the radiation on and off in the prescribed pattern for the X-rays being used. DLC has proved to be an excellent coating to prolong the life of and reduce complications with replacement hip joints and artificial knees. It also has been successfully applied to coronary artery stents, reducing the incidence of thrombosis. The implantable human heart pump can be considered the ultimate biomedical application, where DLC coating is used on the blood-contacting surfaces of the key components of the device. In terms of hardness, soft DLC coatings have shown better biocompatibility levels than hard DLC coatings, which may help in choosing the appropriate DLC coating for specific biomechanical applications, such as load-carrying or non-load-carrying implants. Environmental benefits of durable products The increase in lifetime of articles coated with DLC that wear out because of abrasion can be described by the formula f = g^μ, where g is a number that characterizes the type of DLC, the type of abrasion, and the substrate material, and μ is the thickness of the DLC coating in μm. For "low-impact" abrasion (pistons in cylinders, impellers in pumps for sandy liquids, etc.), g for pure ta-C on 304 stainless steel is 66. This means that a one-μm thickness (that is ≈5% of the thickness of a human hair-end) would increase the service lifetime of the article it coated from a week to over a year, and a two-μm thickness would increase it from a week to 85 years. These are measured values, though in the case of the 2 μm coating the lifetime was extrapolated from the last time the sample was evaluated, as the testing apparatus itself wore out. There are environmental arguments that a sustainable economy ought to encourage products to be engineered for durability, in other words, to have planned durability (the opposite of planned obsolescence). Currently there are about 100 outsource vendors of DLC coatings; most of these coatings are loaded with amounts of graphite and hydrogen and so give much lower g-numbers than 66 on the same substrates. See also Chemical vapor deposition Cathodic arc deposition Poly(hydridocarbyne) References External links "Diamond-like carbon coatings" at AZo Materials "Selected Manuscripts we published on Noncrystalline Diamond Films": Bibliography of early work on DLC "Diamond-like tip better than the best": Recent applications of DLC at the nanoscale (1 March 2010) Allotropes of carbon Coatings Superhard materials Thin film deposition
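A quick arithmetic check of the lifetime formula above in Python, using the article's own numbers (g = 66 for pure ta-C on 304 stainless steel under low-impact abrasion, and a one-week uncoated baseline):

# Lifetime multiplier f = g**mu for a DLC coating of thickness mu (in um).
g = 66                # ta-C on 304 stainless steel, low-impact abrasion
baseline_weeks = 1    # the uncoated article wears out in about a week

for mu in (1.0, 2.0):
    f = g ** mu
    years = baseline_weeks * f / 52
    print(f"{mu:.0f} um coating: f = {f:.0f}, lifetime ~ {years:.1f} years")
# -> about 1.3 years for 1 um and about 84 years for 2 um, matching the
#    "over a year" and "85 years" figures quoted above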
Diamond-like carbon
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
4,171
[ "Allotropes of carbon", "Thin film deposition", "Allotropes", "Coatings", "Thin films", "Materials", "Superhard materials", "Planes (geometry)", "Solid state engineering", "Matter" ]
12,690,112
https://en.wikipedia.org/wiki/KASCADE
KASCADE was a European physics experiment started in 1996 at Forschungszentrum Karlsruhe, Germany (now Karlsruher Institut für Technologie), an extensive air shower experiment array built to study the primary composition of cosmic rays and hadronic interactions by simultaneously measuring the electronic, muonic and hadronic shower components. KASCADE-Grande was a further extension of the previous project, created by reassembling 37 detectors of the former EAS-TOP experiment, which ran between 1987 and 2000 at Campo Imperatore, Gran Sasso Laboratories, Italy. By this Grande extension of KASCADE the energy range was extended to 10¹⁴–10¹⁸ eV. The experiment contributed significantly to the development of the CORSIKA simulation program, which is used heavily in astroparticle physics. Co-located with KASCADE-Grande is the LOPES experiment. LOPES consists of radio antennas and measures the radio emission of extensive air showers. KASCADE (including all extensions) stopped operation in 2013, but a part of the detectors is still used in other experiments for cosmic-ray air showers, e.g., LOFAR or Tunka. The data acquired by KASCADE-Grande have meanwhile been made accessible to the public in the KASCADE Cosmic-Ray Data Center (KCDC). Results KASCADE studied heavier components of cosmic rays, finding a "knee" near 80 PeV in 2011, and extending the spectrum measurements to 200 PeV. Later, a knee-like feature in the heavy component and an ankle-like feature in the light component of cosmic rays was discovered at an energy of about 10¹⁷ eV. Participants Institut für Kernphysik and Institut für Experimentelle Kernphysik of Karlsruher Institut für Technologie (KIT), Germany Dipartimento di Fisica Generale dell' Università and Istituto di Fisica dello Spazio Interplanetario of Istituto Nazionale di Astrofisica Torino, Italy Universität Siegen, Germany Universität Wuppertal, Germany Soltan Institute for Nuclear Studies, Łódź, Poland Institute of Physics and Nuclear Engineering, Bucharest, Romania References External links of KASCADE KASCADE Cosmic-Ray Data Center (KCDC) LOPES experiment Cosmic-ray experiments
KASCADE
[ "Physics", "Astronomy" ]
466
[ "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Particle physics", "Particle physics stubs" ]
12,691,877
https://en.wikipedia.org/wiki/Analytic%20function%20of%20a%20matrix
In mathematics, every analytic function can be used for defining a matrix function that maps square matrices with complex entries to square matrices of the same size. This is used for defining the exponential of a matrix, which is involved in the closed-form solution of systems of linear differential equations. Extending scalar function to matrix functions There are several techniques for lifting a real function to a square matrix function such that interesting properties are maintained. All of the following techniques yield the same matrix function, but the domains on which the function is defined may differ. Power series If the analytic function f has the Taylor expansion f(x) = c₀ + c₁x + c₂x² + ⋯, then a matrix function f(A) can be defined by substituting x by a square matrix: powers become matrix powers, additions become matrix sums and multiplications by coefficients become scalar multiplications. If the series converges for |x| < r, then the corresponding matrix series converges for matrices A such that ‖A‖ < r for some matrix norm that satisfies ‖AB‖ ≤ ‖A‖ ‖B‖. Diagonalizable matrices A square matrix A is diagonalizable if there is an invertible matrix P such that D = P⁻¹AP is a diagonal matrix, that is, D has the shape D = diag(d₁, …, dₙ). As A = PDP⁻¹, it is natural to set f(A) = P diag(f(d₁), …, f(dₙ)) P⁻¹. It can be verified that the matrix f(A) does not depend on a particular choice of P. Jordan decomposition All complex matrices, whether they are diagonalizable or not, have a Jordan normal form A = PJP⁻¹, where the matrix J consists of Jordan blocks. Consider these blocks separately and apply the power series to each Jordan block: for a block with diagonal value λ, the entry j places above the diagonal of f(J) is f⁽ʲ⁾(λ)/j!, the j-th Taylor coefficient of f about λ. This definition can be used to extend the domain of the matrix function beyond the set of matrices with spectral radius smaller than the radius of convergence of the power series. Note that there is also a connection to divided differences. A related notion is the Jordan–Chevalley decomposition which expresses a matrix as a sum of a diagonalizable and a nilpotent part. Hermitian matrices A Hermitian matrix has all real eigenvalues and can always be diagonalized by a unitary matrix P, according to the spectral theorem. In this case, the Jordan definition is natural. Moreover, this definition allows one to extend standard inequalities for real functions: if f(a) ≥ g(a) for all eigenvalues a of A, then f(A) ⪰ g(A). (As a convention, X ⪰ Y means that X − Y is a positive-semidefinite matrix.) The proof follows directly from the definition. Cauchy integral Cauchy's integral formula from complex analysis can also be used to generalize scalar functions to matrix functions. Cauchy's integral formula states that for any analytic function f defined on a set D, one has f(x) = (1/2πi) ∮_C f(z) (z − x)⁻¹ dz, where C is a closed simple curve inside the domain D enclosing x. Now, replace x by a matrix A and consider a path C inside D that encloses all eigenvalues of A. One possibility to achieve this is to let C be a circle around the origin with radius larger than ‖A‖ for an arbitrary matrix norm ‖·‖. Then, f(A) is definable by f(A) = (1/2πi) ∮_C f(z) (zI − A)⁻¹ dz. This integral can readily be evaluated numerically using the trapezium rule, which converges exponentially in this case. That means that the precision of the result doubles when the number of nodes is doubled. In routine cases, this is bypassed by Sylvester's formula. This idea applied to bounded linear operators on a Banach space, which can be seen as infinite matrices, leads to the holomorphic functional calculus. Matrix perturbations The above Taylor power series allows the scalar x to be replaced by the matrix. However, this is not true in general when expanding in terms of A(η) = A + ηB about η = 0 unless A and B commute, [A, B] = 0.
A counterexample is f(x) = x², which has a finite length Taylor series. We compute this in two ways. Distributive law: f(A + ηB) = (A + ηB)² = A² + η(AB + BA) + η²B². Using the scalar Taylor expansion for f(a + ηb) and replacing scalars with matrices at the end: f(a + ηb) = a² + 2aηb + η²b² would give A² + 2ηAB + η²B². The scalar expression assumes commutativity while the matrix expression does not, and thus they cannot be equated directly unless [A, B] = 0. For some f(x) this can be dealt with using the same method as scalar Taylor series. For example, f(x) = 1/x: if A⁻¹ exists then f(A + ηB) = (A + ηB)⁻¹ = (I + ηA⁻¹B)⁻¹ A⁻¹. The expansion of the first term then follows the power series given above, (I + ηA⁻¹B)⁻¹ = I − ηA⁻¹B + (ηA⁻¹B)² − ⋯. The convergence criteria of the power series then apply, requiring ‖ηA⁻¹B‖ to be sufficiently small under the appropriate matrix norm. For more general problems, which cannot be rewritten in such a way that the two matrices commute, the ordering of matrix products produced by repeated application of the Leibniz rule must be tracked. Arbitrary function of a 2×2 matrix An arbitrary function f(A) of a 2×2 matrix A has its Sylvester's formula simplify to f(A) = [f(λ₊)(A − λ₋I) − f(λ₋)(A − λ₊I)] / (λ₊ − λ₋), where λ± are the eigenvalues of its characteristic equation, det(A − λI) = 0, and are given by λ± = (tr A ± √((tr A)² − 4 det A)) / 2. However, if there is degeneracy (λ₊ = λ₋ = λ), the following formula is used: f(A) = f(λ)I + f′(λ)(A − λI), where f′ is the derivative of f. Examples Matrix polynomial Matrix root Matrix logarithm Matrix exponential Matrix sign function Classes of matrix functions Using the semidefinite ordering (X ⪰ Y when X − Y is positive-semidefinite and X ≻ Y when X − Y is positive definite), some of the classes of scalar functions can be extended to matrix functions of Hermitian matrices. Operator monotone A function f is called operator monotone if and only if A ⪯ H implies f(A) ⪯ f(H) for all self-adjoint matrices A, H with spectra in the domain of f. This is analogous to a monotone function in the scalar case. Operator concave/convex A function f is called operator concave if and only if t f(A) + (1 − t) f(H) ⪯ f(tA + (1 − t)H) for all self-adjoint matrices A, H with spectra in the domain of f and t ∈ [0, 1]. This definition is analogous to a concave scalar function. An operator convex function can be defined by switching ⪯ to ⪰ in the definition above. Examples The matrix log is both operator monotone and operator concave. The matrix square is operator convex. The matrix exponential is none of these. Loewner's theorem states that a function on an open interval is operator monotone if and only if it has an analytic extension to the upper and lower complex half planes so that the upper half plane is mapped to itself. See also Algebraic Riccati equation Sylvester's formula Loewner order Matrix calculus Trace inequalities Trigonometric functions of matrices Notes References Matrix theory Mathematical physics
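A small NumPy/SciPy sketch of the diagonalization definition above, checked against SciPy's general-purpose funm (the example matrix is an arbitrary choice, not one from the original article):

import numpy as np
from scipy.linalg import funm, expm

def matrix_function(A, f):
    # f(A) = P diag(f(d1), ..., f(dn)) P^{-1} for a diagonalizable matrix A.
    d, P = np.linalg.eig(A)
    return P @ np.diag(f(d)) @ np.linalg.inv(P)

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # diagonalizable, with eigenvalues 1 and 3

# Matrix exponential via the eigendecomposition definition...
E1 = matrix_function(A, np.exp)
# ...versus SciPy's funm (Schur-Parlett) and expm (Pade) implementations.
E2 = funm(A, np.exp)
E3 = expm(A)
print(np.allclose(E1, E2), np.allclose(E1, E3))  # True True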
Analytic function of a matrix
[ "Physics", "Mathematics" ]
1,220
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
12,693,432
https://en.wikipedia.org/wiki/Jet%20quenching
In high-energy physics, jet quenching is a phenomenon that can occur in the collision of ultra-high-energy particles. In general, the collision of high-energy particles can produce jets of elementary particles that emerge from these collisions. Collisions of ultra-relativistic heavy-ion beams create a hot and dense medium comparable to the conditions in the early universe, and these jets then interact strongly with the medium, leading to a marked reduction of their energy. This energy reduction is called "jet quenching". Physics background In the context of high-energy hadron collisions, quarks and gluons are collectively called partons. The jets emerging from the collisions originally consist of partons, which quickly combine to form hadrons, a process called hadronization. Only the resulting hadrons can be directly observed. The hot, dense medium produced in the collisions is also composed of partons; it is known as a quark–gluon plasma (QGP). In this realm, the laws of physics that apply are those of quantum chromodynamics (QCD). High-energy nucleus–nucleus collisions make it possible to study the properties of the QGP medium through the observed changes in the jet fragmentation functions as compared to the unquenched case. According to QCD, high-momentum partons produced in the initial stage of a nucleus–nucleus collision will undergo multiple interactions inside the collision region prior to hadronization. In these interactions, the energy of the partons is reduced through collisional energy loss and medium-induced gluon radiation, the latter being the dominant mechanism in a QGP. The effect of jet quenching in QGP is the main motivation for studying jets as well as high-momentum particle spectra and particle correlations in heavy-ion collisions. Accurate jet reconstruction will allow measurements of the jet fragmentation functions and consequently the degree of quenching and therefore provide insight on the properties of the hot dense QGP medium created in the collisions. Experimental evidence of jet quenching First evidence of parton energy loss was observed at the Relativistic Heavy Ion Collider (RHIC) from the suppression of high-pT particles, by studying the nuclear modification factor and the suppression of back-to-back correlations. In ultra-relativistic heavy-ion collisions at a center-of-momentum energy of 2.76 TeV per nucleon pair at the Large Hadron Collider (LHC), interactions between the high-momentum parton and the hot, dense medium produced in the collisions are expected to lead to jet quenching. Indeed, in November 2010 CERN announced the first direct observation of jet quenching, based on experiments with heavy-ion collisions, which involved ATLAS, CMS and ALICE. See also Parity (physics) References External links Jet Suppression in Heavy Ion Collisions Jetting through the Quark Soup Review of Jet Quenching (2017) Review of Jet Quenching (2009) Experimental particle physics
Jet quenching
[ "Physics" ]
610
[ "Particle physics stubs", "Experimental physics", "Particle physics", "Experimental particle physics" ]
12,694,871
https://en.wikipedia.org/wiki/Iodolactonization
Iodolactonization (or, more generally, halolactonization) is an organic reaction that forms a ring (the lactone) by the addition of an oxygen and iodine across a carbon-carbon double bond. It is an intramolecular variant of the halohydrin synthesis reaction. The reaction was first reported by M. J. Bougalt in 1904 and has since become one of the most effective ways to synthesize lactones. Strengths of the reaction include the mild conditions and incorporation of the versatile iodine atom into the product. Iodolactonization has been used in the synthesis of many natural products including those with medicinal applications such as vernolepin and vernomenin, two compounds used in tumor growth inhibition, and vibralactone, a pancreatic lipase inhibitor. Iodolactonization has also been used by Elias James Corey to synthesize numerous prostaglandins. History Bougalt's report of iodolactonization represented the first example of a reliable lactonization that could be used in many different systems. Bromolactonization was actually developed in the twenty years prior to Bougalt’s publication of iodolactonization. However, bromolactonization is much less commonly used because the simple electrophilic addition of bromine to an alkene, seen below, can compete with the bromolactonization reaction and reduce the yield of the desired lactone. Chlorolactonization methods first appeared in the 1950s but are even less commonly employed than bromolactonization. The use of elemental chlorine is procedurally difficult because it is a gas at room temperature, and the electrophilic addition product can be rapidly produced as in bromolactonization. Mechanism The reaction mechanism involves the formation of a positively charged halonium ion in a molecule that also contains a carboxylic acid (or other functional group that is a precursor to it). The oxygen of the carboxyl acts as a nucleophile, attacking to open the halonium ring and instead form a lactone ring. The reaction is usually performed under mildly basic conditions to increase the nucleophilicity of the carboxyl group. Scope The iodolactonization reaction includes a number of nuances that affect product formation including regioselectivity, ring size preference, and thermodynamic and kinetic control. In terms of regioselectivity, iodolactonization preferentially occurs at the most hindered carbon atom adjacent to the iodonium cation. This is due to the fact that the more substituted carbon is better able to maintain a partial positive charge and is thus more electrophilic and susceptible to nucleophilic attack. When multiple double bonds in a molecule are equally reactive, conformational preferences dominate. However, when one double bond is more reactive, that reactivity always dominates regardless of conformational preference. Both five- and six-membered rings could be formed in the iodolactonization shown below, but the five-membered ring is formed preferentially as predicted by Baldwin's rules for ring closure. According to the rules, 5-exo-tet ring closures are favored while 6-endo-tet ring closures are disfavored. The regioselectivity of each iodolactonization can be predicted and explained using Baldwin's rules. Stereoselective iodolactonizations have been seen in literature and can be very useful in synthesizing large molecules such as the aforementioned vernopelin and vernomenin because the lactone can be formed while maintaining other stereocenters. 
Stereoselective iodolactonizations have been reported in the literature and can be very useful in synthesizing large molecules such as the aforementioned vernolepin and vernomenin, because the lactone can be formed while maintaining other stereocenters. The ring closure can even be driven by stereocenters adjacent to the carbon-carbon multiple bond, as shown below. Even in systems without existing stereocenters, Bartlett and coworkers found that stereoselectivity was achievable. They were able to synthesize the cis and trans five-membered lactones by adjusting reaction conditions such as temperature and reaction time. The trans product was formed under thermodynamic conditions (e.g. a long reaction time) while the cis product was formed under kinetic conditions (e.g. a relatively shorter reaction time). Applications Iodolactonization has been used in the synthesis of many biologically important products such as the tumor growth inhibitors vernolepin and vernomenin, the pancreatic lipase inhibitor vibralactone, and prostaglandins, lipids found in animals. The following total syntheses all use iodolactonization as a key step in obtaining the desired product. In 1977, Samuel Danishefsky and coworkers were able to synthesize the tumor growth inhibitors dl-vernolepin and dl-vernomenin via a multistep process in which an iodolactonization was employed. This synthesis demonstrates the use of iodolactonization to preferentially form a five-membered ring over a four- or six-membered ring, as expected from Baldwin's rules. In 2006, Zhou and coworkers synthesized another natural product, vibralactone, in which the key step was the formation of a lactone. The stereoselectivity of the iodolactonization sets a critical stereochemical configuration for the target compound. In 1969, Corey and coworkers synthesized prostaglandin E2 using an iodolactone intermediate. Again, the stereoselectivity of the iodolactonization plays an integral role in product formation. See also Iodolactamization — the lactam analogue Halogen addition reaction References Oxygen heterocycle forming reactions Lactones Organic reactions
Iodolactonization
[ "Chemistry" ]
1,190
[ "Ring forming reactions", "Organic reactions" ]
2,515,660
https://en.wikipedia.org/wiki/Universal%20polar%20stereographic%20coordinate%20system
The universal polar stereographic (UPS) coordinate system is used in conjunction with the universal transverse Mercator (UTM) coordinate system to locate positions on the surface of the Earth. Like the UTM coordinate system, the UPS coordinate system uses a metric-based Cartesian grid laid out on a conformally projected surface. UPS covers the Earth's polar regions, specifically the areas north of 84°N and south of 80°S, which are not covered by the UTM grids, plus an additional 30 minutes of latitude extending into the UTM grid to provide some overlap between the two systems. In the polar regions, directions can become complicated, with all geographic north–south lines converging at the poles. The difference between UPS grid north and true north can therefore be anything up to 180°—in some places, grid north is true south, and vice versa. UPS grid north is arbitrarily defined as being along the prime meridian in the Antarctic and the 180th meridian in the Arctic; thus, east and west on the grids when moving directly away from the pole are along the 90°E and 90°W meridians respectively. Projection system As the name indicates, the UPS system uses a stereographic projection. Specifically, the projection used in the system is a secant version based on an ellipsoidal model of the Earth. The scale factor at each pole is adjusted to 0.994, so that the latitude of true scale is 81.11451786859362545° (about 81° 06' 52.3") North and South. The scale factor inside the regions at latitudes higher than this parallel is too small, whereas the regions at latitudes below this line have scale factors that are too large, reaching 1.0016 at 80° latitude. The scale factor at the origin (the poles) is adjusted to minimize the overall distortion of scale within the mapped region. As with the Mercator projection, the region near the tangent (or secant) point on a stereographic map remains very close to true scale for an angular distance of a few degrees. In the ellipsoidal model, a stereographic projection tangent to the pole has a scale factor of less than 1.003 at 84° latitude and 1.008 at 80° latitude. The adjustment of the scale factor in the UPS projection reduces the average scale distortion over the entire zone. References External links National Geospatial-Intelligence Agency, Geospatial Sciences Publications GeographicLib provides a utility GeoConvert (with source code) for conversions between geographic, UTM, UPS, and MGRS. Here is an online version of GeoConvert. Geographic coordinate systems
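For illustration, here is a minimal sketch of the conversion on a spherical Earth model; the actual UPS grid is defined on the ellipsoidal form of the projection as described above, so this shows only the structure of the computation. The 0.994 scale factor and the 2,000,000 m false easting/northing are the standard UPS constants, while the mean radius and the function name are assumptions.

```python
import math

# Minimal sketch of a north-polar stereographic conversion on a SPHERICAL
# Earth model. The real UPS grid uses the ellipsoidal form of the projection
# (see text); this only illustrates the structure of the computation.
R = 6_371_000.0          # mean Earth radius in metres (assumption)
K0 = 0.994               # UPS scale factor at the pole
FALSE_E = 2_000_000.0    # UPS false easting in metres
FALSE_N = 2_000_000.0    # UPS false northing in metres

def ups_north_spherical(lat_deg: float, lon_deg: float) -> tuple[float, float]:
    """Approximate UPS easting/northing for the northern zone (lat > 84 N)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Radial distance from the pole on the projected plane.
    rho = 2.0 * R * K0 * math.tan(math.pi / 4.0 - lat / 2.0)
    # In the Arctic, grid north runs along the 180th meridian, so grid east
    # when moving away from the pole lies along the 90 E meridian.
    easting = FALSE_E + rho * math.sin(lon)
    northing = FALSE_N - rho * math.cos(lon)
    return easting, northing

print(ups_north_spherical(87.0, 45.0))  # a point at 87 N, 45 E
```

For production use, the GeoConvert utility mentioned in the external links performs the full ellipsoidal conversion.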
Universal polar stereographic coordinate system
[ "Mathematics" ]
548
[ "Geographic coordinate systems", "Coordinate systems" ]
2,515,701
https://en.wikipedia.org/wiki/Copper%28I%29%20thiophene-2-carboxylate
Copper(I) thiophene-2-carboxylate or CuTC is a coordination complex derived from copper and thiophene-2-carboxylic acid. It is used as a reagent to promote the Ullmann reaction between aryl halides. References Thiophenes Copper(I) compounds Reagents for organic chemistry
Copper(I) thiophene-2-carboxylate
[ "Chemistry" ]
78
[ "Reagents for organic chemistry" ]
2,515,784
https://en.wikipedia.org/wiki/Regime%20shift
Regime shifts are large, abrupt, persistent changes in the structure and function of ecosystems, the climate, financial systems or other complex systems. A regime is a characteristic behaviour of a system which is maintained by mutually reinforcing processes or feedbacks. Regimes are considered persistent relative to the time period over which the shift occurs. The change of regimes, or the shift, usually occurs when a smooth change in an internal process (feedback) or a single disturbance (external shock) triggers a completely different system behavior. Although such non-linear changes have been widely studied in different disciplines ranging from atoms to climate dynamics, regime shifts have gained importance in ecology because they can substantially affect the flow of ecosystem services that societies rely upon, such as the provision of food, clean water or climate regulation. Moreover, the occurrence of regime shifts is expected to increase as human influence on the planet increases – the Anthropocene – including current trends in human-induced climate change and biodiversity loss. When regime shifts are associated with a critical or bifurcation point, they may also be referred to as critical transitions. History of the concept Scholars have been interested in systems exhibiting non-linear change for a long time. Since the early twentieth century, mathematicians have developed a body of concepts and theory for the study of such phenomena based on the study of non-linear system dynamics. This research led to the development of concepts such as catastrophe theory, a branch of bifurcation theory in dynamical systems. In ecology, the idea of systems with multiple regimes – domains of attraction called alternative stable states – only arose in the late '60s, based upon the first reflections on the meaning of stability in ecosystems by Richard Lewontin and Crawford "Buzz" Holling. The first work on regime shifts in ecosystems was done in a diversity of ecosystems and included important work by Noy-Meir (1975) in grazing systems; May (1977) in grazing systems, harvesting systems, insect pests and host-parasitoid systems; Jones and Walters (1976) with fisheries systems; and Ludwig et al. (1978) with insect outbreaks. These early efforts to understand regime shifts were criticized for the difficulty of demonstrating bi-stability, their reliance on simulation models, and the lack of high-quality long-term data. However, by the 1990s more substantial evidence of regime shifts had been collected for kelp forests, coral reefs, drylands and shallow lakes. This work led to a revitalization of research on ecological reorganization and the conceptual clarification that resulted in the regime shift conceptual framework in the early 2000s. Outside of ecology, similar concepts of non-linear change have been developed in other academic disciplines. One example is historical institutionalism in political science, sociology and economics, where concepts like path dependency and critical junctures are used to explain phenomena where the output of a system is determined by its history, or the initial conditions, and where its domains of attraction are reinforced by feedbacks. Concepts such as international institutional regimes, socio-technical transitions and increasing returns have an epistemological basis similar to regime shifts, and utilize similar mathematical models. Current applications of the regime shift concept During the last decades, research on regime shifts has grown exponentially.
Academic papers reported by ISI Web of Knowledge rose from less than 5 per year prior to 1990 to more than 300 per year from 2007 to 2011. However, the application of regime shift related concepts is still contested. Although there is no agreement on a single definition, the slight differences among definitions reside in the meaning of stability – the measure of what a regime is – and the meaning of abruptness. Both depend on the definition of the system under study and are thus relative; ultimately it is a matter of scale. Mass extinctions are regime shifts on the geological time scale, while financial crises or pest outbreaks are regime shifts that require a totally different parameter setting. In order to apply the concept to a particular problem, one has to conceptually limit its range of dynamics by fixing analytical categories such as time and space scales, range of variations and exogenous/endogenous processes. For example, while for oceanographers a regime must last for at least decades and should include climate variability as a driver, for marine biologists regimes of only five years are acceptable and could be induced by population dynamics alone. A non-exhaustive range of current definitions of regime shifts in recent scientific literature from ecology and allied fields is collected in Table 1. Table 1. Definitions of regime shifts and modifications used to apply the concept to particular research questions from scientific literature published between 2004 and 2009. Theoretical basis The theoretical basis for regime shifts has been developed from the mathematics of non-linear systems. In short, regime shifts describe dynamics characterized by the possibility that a small disturbance can produce large effects. In such situations the common notion of proportionality between inputs and outputs of a system is incorrect. Conversely, the regime shift concept also emphasizes the resilience of systems – suggesting that in some situations substantial management or human impact can have little effect on a system. Regime shifts are hard to reverse and in some cases irreversible. The regime shift concept shifts analytical attention away from linearity and predictability, towards reorganization and surprise. Thus, the regime shift concept offers a framework to explore the dynamics and causal explanations of non-linear change in nature and society. Regime shifts are triggered either by the weakening of stabilizing internal processes – feedbacks – or by external shocks which exceed the stabilizing capacity of a system. Systems prone to regime shifts can show three different types of change: smooth, abrupt or discontinuous, depending on the configuration of processes that define a system – in particular the interaction between a system's fast and slow processes. Smooth change can be described by a quasi-linear relationship between fast and slow processes; abrupt change shows a non-linear relationship between fast and slow variables; while discontinuous change is characterized by a difference in the trajectory of the fast variable when the slow one increases compared to when it decreases. In other words, the point at which the system flips from one regime to another is different from the point at which the system flips back. Systems that exhibit this last type of change demonstrate hysteresis. Hysteretic systems have two important properties. First, the reversal of discontinuous change requires that a system change back past the conditions at which the change first occurred. This occurs because systemic change alters the feedback processes that maintain a system in a particular regime. Second, hysteresis greatly enhances the role of history in a system, and demonstrates that the system has memory – in that its dynamics are shaped by past events.
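A minimal numerical sketch of such hysteresis, assuming a Noy-Meir/May-style grazed-vegetation model of the kind cited in the history section; all parameter values and the sweep range are illustrative, not taken from the literature:

```python
import numpy as np

# Minimal sketch of hysteresis in a grazed-vegetation model of the kind
# studied by Noy-Meir (1975) and May (1977):
#   dx/dt = r*x*(1 - x/K) - c*x**2/(x**2 + 1)
# where x is biomass and c is grazing pressure. All numbers are illustrative.
r, K = 1.0, 10.0

def equilibrate(x, c, dt=0.02, n=5_000):
    """Integrate to (near) equilibrium with simple forward Euler."""
    for _ in range(n):
        x += dt * (r * x * (1.0 - x / K) - c * x * x / (x * x + 1.0))
    return x

cs = np.linspace(1.0, 2.6, 60)
up, down = [], []
x = K                       # start on the high-biomass branch
for c in cs:                # slowly increase grazing pressure...
    x = equilibrate(x, c)
    up.append(x)
for c in cs[::-1]:          # ...then decrease it again
    x = equilibrate(x, c)
    down.append(x)

# The system collapses at a higher c on the way up than the c at which it
# recovers on the way down: the equilibrium depends on history (memory).
i = 44  # compare both sweep directions at the same grazing pressure
print(f"c = {cs[i]:.2f}: biomass going up = {up[i]:.2f}, "
      f"going down = {down[::-1][i]:.2f}")
```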
Conditions at which a system shifts its dynamics from one set of processes to another are often called thresholds. In ecology, for example, a threshold is a point at which there is an abrupt change in an ecosystem quality, property or phenomenon, or where small changes in an environmental driver produce large responses in an ecosystem. Thresholds are, however, a function of several interacting parameters; thus they change in time and space. Hence, the same system can present smooth, abrupt or discontinuous change depending on its parameter configuration. Thresholds will be present, however, only in cases where abrupt and discontinuous change is possible. Evidence Empirical evidence has increasingly complemented model-based work on regime shifts. Early work on regime shifts in ecology was developed in models for predation, grazing, fisheries and insect outbreak dynamics. Since the 1980s, further development of models has been complemented by empirical evidence for regime shifts from ecosystems including kelp forests, coral reefs, drylands and lakes. Scholars have collected evidence for regime shifts across a wide variety of ecosystems and across a range of scales. For example, at the local scale, one of the best documented examples is woody plant encroachment, which is thought to follow a smooth change dynamic. Woody encroachment refers to small changes in herbivory rates that can shift drylands from grass-dominated regimes towards woody-dominated savannas. Encroachment has been documented to impact ecosystem services related to cattle ranching in wet savannas in Africa and South America. At the regional scale, rainforest areas in the Amazon and East Asia are thought to be at risk of shifting towards savanna regimes given the weakening of the moisture-recycling feedback driven by deforestation. The shift from forest to savanna potentially affects the provision of food, fresh water, climate regulation and support for biodiversity. At the global scale, the accelerating retreat of Arctic sea ice in summer is reinforcing climate warming through the albedo feedback, potentially affecting sea levels and climate regulation worldwide. Aquatic systems have been heavily studied in the search for regime shifts. Lakes work like microcosms (almost closed systems) that to some extent allow experimentation and data gathering. Eutrophication is a well-documented abrupt change from clear-water to murky-water regimes, which leads to toxic algal blooms and reduction of fish productivity in lakes and coastal ecosystems. Eutrophication is driven by nutrient inputs, particularly those coming from fertilizers used in agriculture. It is an example of discontinuous change with hysteresis. Once the lake has shifted to a murky-water regime, a new feedback of phosphorus recycling maintains the system in the eutrophic state even if nutrient inputs are significantly reduced. Another example widely studied in aquatic and marine systems is trophic level decline in food webs. It usually implies the shift from ecosystems dominated by high numbers of predatory fish to a regime dominated by lower trophic groups such as pelagic planktivores (e.g. jellyfish).
Affected food webs often show impacts on fisheries productivity, an increased risk of eutrophication and hypoxia, invasion of non-native species, and impacts on recreational values. Hypoxia, or the development of so-called dead zones, is another regime shift in aquatic and marine-coastal environments. Hypoxia, similarly to eutrophication, is driven by nutrient inputs of anthropogenic origin, but also of natural origin in the form of upwellings. At high nutrient concentrations the levels of dissolved oxygen decrease, making life impossible for the majority of aquatic organisms. Impacts on ecosystem services include the collapse of fisheries and the production of gases toxic to humans. In marine systems, two well-studied regime shifts happen in coral reefs and kelp forests. Coral reefs are three-dimensional structures which work as habitat for marine biodiversity. Hard coral-dominated reefs can shift to a regime dominated by fleshy algae, but they have also been reported to shift towards soft-coral, corallimorpharian, urchin-barren or sponge-dominated regimes. Coral reef transitions are reported to affect ecosystem services like calcium fixation, water cleansing, support for biodiversity, fisheries productivity, coastline protection and recreational services. On the other hand, kelp forests are highly productive marine ecosystems found in temperate regions of the ocean. Kelp forests are characteristically dominated by brown macroalgae and host high levels of biodiversity, providing provisioning ecosystem services for both the cosmetic industry and fisheries. Such services are substantially reduced when a kelp forest shifts towards an urchin-barren regime, driven mainly by the discharge of nutrients from the coast and overfishing. Overfishing and overharvest of keystone predators, such as sea otters, applies top-down pressure on the system, while bottom-up pressure arises from nutrient pollution. Soil salinization is an example of a well-known regime shift in terrestrial systems. It is driven by the removal of deep-rooted vegetation and by irrigation, which cause elevation of the soil water table and an increase in soil surface salinity. Once the system flips, ecosystem services related to food production – both crops and cattle – are significantly reduced. Dryland degradation, also known as desertification, is a well-known but controversial type of regime shift. Dryland degradation occurs when the loss of vegetation transforms an ecosystem from being vegetated to being dominated by bare soils. While this shift has been proposed to be driven by a combination of farming and cattle grazing, loss of semi-nomadic traditions, extension of infrastructure, reduction of managerial flexibility and other economic factors, it is controversial because it has been difficult to determine whether there is indeed a regime shift and which drivers have caused it. For example, poverty has been proposed as a driver of dryland degradation, but studies repeatedly find contradictory evidence. Dryland degradation typically reduces biomass productivity, diminishing provisioning and supporting services for agriculture and water cycling. Polar regions have been the focus of research examining the impacts of climate warming. Regime shifts in polar regions include the melting of the Greenland ice sheet and the possible collapse of the thermohaline circulation system.
While the melting of the Greenland ice sheet is driven by global warming and threatens worldwide coastlines with an increase of sea level, the collapse of the thermohaline circulation is driven by the increase of fresh water in the North Atlantic, which in turn weakens the density-driven water transport between the tropics and polar areas. Both regime shifts have serious implications for marine biodiversity, water cycling, security of housing and infrastructure and climate regulation, amongst other ecosystem services. Detection of whether a regime shift has occurred Using current well-known statistical methods such as average standard deviates, principal component analysis, or artificial neural networks, one can detect whether a regime shift has occurred. Such analyses require long-term data series, and the threshold under study has to have been crossed. Hence, the answer will depend on the quality of the data; the approach is event-driven and only allows one to explore past trends. Some scholars have argued, based on statistical analysis of time series, that certain phenomena do not correspond to regime shifts. Nevertheless, the statistical rejection of the hypothesis that a system has multiple attractors does not imply that the null hypothesis is true. In order to do so one has to prove that the system has only one attractor. In other words, evidence that data do not exhibit multiple regimes does not rule out the possibility that a system could shift to an alternative regime in the future. Moreover, in management decision making, it can be risky to assume that a system has only one regime when plausible alternative regimes have highly negative consequences. On the other hand, a more relevant question than "has a regime shift occurred?" is "is the system prone to regime shifts?". This question is important because, even if a system has shown smooth change in the past, its dynamics can potentially become abrupt or discontinuous in the future depending on its parameter configuration. Such a question has been explored separately in different disciplines for different systems, pushing methods development forward (e.g. climate-driven regime shifts in the ocean or the stability of food webs) and continuing to inspire new research. Frontiers of research Regime shift research is occurring across multiple ecosystems and at multiple scales. New areas of research include early warnings of regime shifts and new forms of modeling. Early-warning signals and critical slowing down As a system approaches a critical transition, its recovery from small perturbations becomes slower – a phenomenon known as critical slowing down – which can leave statistical signatures such as rising variance and autocorrelation in time series. It remains unclear how well such signals work for all regime shifts, and whether the early warnings give enough time to take appropriate managerial corrections to avoid the shift. Additionally, early-warning signals depend on intensive, good-quality data series that are rare in ecology. However, researchers have used high-quality data to predict regime shifts in a lake ecosystem. Changes in spatial patterns as an indicator of regime shifts have also become a topic of research.
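A minimal sketch of two such leading indicators, rolling variance and rolling lag-1 autocorrelation, computed on a synthetic time series; the window length and the AR(1) toy process are illustrative assumptions, not a prescribed method:

```python
import numpy as np

# Minimal sketch of two common early-warning indicators -- rolling variance and
# rolling lag-1 autocorrelation -- on a synthetic record. Real analyses require
# detrending and careful choice of window length.
rng = np.random.default_rng(0)

# Synthetic record whose recovery rate slows over time (critical slowing down):
# an AR(1) process x[t+1] = a[t]*x[t] + noise with a[t] drifting toward 1.
n = 2000
a = np.linspace(0.2, 0.99, n)
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = a[t] * x[t] + rng.normal(scale=0.1)

def rolling_indicators(series, window=200):
    var, ac1 = [], []
    for i in range(len(series) - window):
        w = series[i : i + window]
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])  # lag-1 autocorrelation
    return np.array(var), np.array(ac1)

var, ac1 = rolling_indicators(x)
# Both indicators rise as the (simulated) transition is approached.
print(f"variance: early {var[:100].mean():.4f} -> late {var[-100:].mean():.4f}")
print(f"lag-1 AC: early {ac1[:100].mean():.2f} -> late {ac1[-100:].mean():.2f}")
```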
New approaches to modeling Another front of research is the development of new approaches to modeling. Dynamic models, Bayesian belief networks, Fisher information, and fuzzy cognitive maps have been used as tools to explore the phase space where regime shifts are likely to happen and to understand the dynamics that govern dynamic thresholds. Models are useful oversimplifications of reality, whose limits are set by the current understanding of the real system as well as the assumptions of the modeler. Therefore, a deep understanding of causal relationships and the strength of feedbacks is required to capture possible regime shift dynamics. Nevertheless, such deep understanding is available only for heavily studied systems such as shallow lakes. Methods development is required to tackle the problem of limited time-series data and limited understanding of system dynamics, in a way that allows identification of the main drivers of regime shifts as well as prioritization of managerial options. Other emerging areas Other emerging areas of research include the role of regime shifts in the Earth system, cascading consequences among regime shifts, and regime shifts in social-ecological systems. References Ecology
Regime shift
[ "Biology" ]
3,345
[ "Ecology" ]
2,515,807
https://en.wikipedia.org/wiki/Structural%20stability
In mathematics, structural stability is a fundamental property of a dynamical system which means that the qualitative behavior of the trajectories is unaffected by small perturbations (to be exact, C1-small perturbations). Examples of such qualitative properties are the numbers of fixed points and periodic orbits (but not their periods). Unlike Lyapunov stability, which considers perturbations of initial conditions for a fixed system, structural stability deals with perturbations of the system itself. Variants of this notion apply to systems of ordinary differential equations, vector fields on smooth manifolds and flows generated by them, and diffeomorphisms. Structurally stable systems were introduced by Aleksandr Andronov and Lev Pontryagin in 1937 under the name "systèmes grossiers", or rough systems. They announced a characterization of rough systems in the plane, the Andronov–Pontryagin criterion. In this case, structurally stable systems are typical: they form an open, dense set in the space of all systems endowed with an appropriate topology. In higher dimensions this is no longer true, indicating that typical dynamics can be very complex (cf. strange attractor). An important class of structurally stable systems in arbitrary dimensions is given by Anosov diffeomorphisms and flows. During the late 1950s and the early 1960s, Maurício Peixoto and Marília Chaves Peixoto, motivated by the work of Andronov and Pontryagin, developed and proved Peixoto's theorem, the first global characterization of structural stability. Definition Let G be an open domain in Rn with compact closure and smooth (n−1)-dimensional boundary. Consider the space X1(G) consisting of restrictions to G of C1 vector fields on Rn that are transversal to the boundary of G and are inward oriented. This space is endowed with the C1 metric in the usual fashion. A vector field F ∈ X1(G) is weakly structurally stable if for any sufficiently small perturbation F1, the corresponding flows are topologically equivalent on G: there exists a homeomorphism h: G → G which transforms the oriented trajectories of F into the oriented trajectories of F1. If, moreover, for any ε > 0 the homeomorphism h may be chosen to be C0 ε-close to the identity map when F1 belongs to a suitable neighborhood of F depending on ε, then F is called (strongly) structurally stable. These definitions extend in a straightforward way to the case of n-dimensional compact smooth manifolds with boundary. Andronov and Pontryagin originally considered the strong property. Analogous definitions can be given for diffeomorphisms in place of vector fields and flows: in this setting, the homeomorphism h must be a topological conjugacy. Note that topological equivalence is realized with a loss of smoothness: the map h cannot, in general, be a diffeomorphism. Moreover, although topological equivalence respects the oriented trajectories, unlike topological conjugacy it is not time-compatible. Thus, the relevant notion of topological equivalence is a considerable weakening of the naïve C1 conjugacy of vector fields. Without these restrictions, no continuous-time system with fixed points or periodic orbits could have been structurally stable. Weakly structurally stable systems form an open set in X1(G), but it is unknown whether the same property holds in the strong case.
Examples Necessary and sufficient conditions for the structural stability of C1 vector fields on the unit disk D that are transversal to the boundary and on the two-sphere S2 were determined in the foundational paper of Andronov and Pontryagin. According to the Andronov–Pontryagin criterion, such fields are structurally stable if and only if they have only finitely many singular points (equilibrium states) and periodic trajectories (limit cycles), which are all non-degenerate (hyperbolic), and do not have saddle-to-saddle connections. Furthermore, the non-wandering set of the system is precisely the union of singular points and periodic orbits. In particular, structurally stable vector fields in two dimensions cannot have homoclinic trajectories, which enormously complicate the dynamics, as discovered by Henri Poincaré. Structural stability of non-singular smooth vector fields on the torus can be investigated using the theory developed by Poincaré and Arnaud Denjoy. Using the Poincaré recurrence map, the question is reduced to determining the structural stability of diffeomorphisms of the circle. As a consequence of the Denjoy theorem, an orientation-preserving C2 diffeomorphism ƒ of the circle is structurally stable if and only if its rotation number is rational, ρ(ƒ) = p/q, and the periodic trajectories, which all have period q, are non-degenerate: the Jacobian of ƒq at the periodic points is different from 1; see circle map. Dmitri Anosov discovered that hyperbolic automorphisms of the torus, such as Arnold's cat map, are structurally stable. He then generalized this statement to a wider class of systems, which have since been called Anosov diffeomorphisms and Anosov flows. One celebrated example of an Anosov flow is given by the geodesic flow on a surface of constant negative curvature, cf. Hadamard billiards.
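The hyperbolicity underlying Anosov's result can be checked directly for the cat map; a minimal numerical sketch (the matrix is the standard cat map, while the tolerance check is an incidental choice):

```python
import numpy as np

# Arnold's cat map is induced by the integer matrix below acting on the torus
# R^2/Z^2. It is hyperbolic -- no eigenvalue on the unit circle -- which is
# the property behind its structural stability (Anosov).
A = np.array([[2, 1],
              [1, 1]])

eigvals = np.linalg.eigvals(A)
print(eigvals)          # -> approx [2.618, 0.382], i.e. (3 +- sqrt(5)) / 2
print(np.abs(eigvals))  # one modulus > 1 (expanding), one < 1 (contracting)
assert not np.any(np.isclose(np.abs(eigvals), 1.0))  # hyperbolicity check
```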
History and significance Structural stability of the system provides a justification for applying the qualitative theory of dynamical systems to the analysis of concrete physical systems. The idea of such qualitative analysis goes back to the work of Henri Poincaré on the three-body problem in celestial mechanics. Around the same time, Aleksandr Lyapunov rigorously investigated the stability of small perturbations of an individual system. In practice, the evolution law of the system (i.e. the differential equations) is never known exactly, due to the presence of various small interactions. It is, therefore, crucial to know that the basic features of the dynamics are the same for any small perturbation of the "model" system, whose evolution is governed by a certain known physical law. Qualitative analysis was further developed by George Birkhoff in the 1920s, but was first formalized with the introduction of the concept of rough system by Andronov and Pontryagin in 1937. This was immediately applied to the analysis of physical systems with oscillations by Andronov, Witt, and Khaikin. The term "structural stability" is due to Solomon Lefschetz, who oversaw the translation of their monograph into English. Ideas of structural stability were taken up by Stephen Smale and his school in the 1960s in the context of hyperbolic dynamics. Earlier, Marston Morse and Hassler Whitney initiated, and René Thom developed, a parallel theory of stability for differentiable maps, which forms a key part of singularity theory. Thom envisaged applications of this theory to biological systems. Both Smale and Thom worked in direct contact with Maurício Peixoto, who developed Peixoto's theorem in the late 1950s. When Smale started to develop the theory of hyperbolic dynamical systems, he hoped that structurally stable systems would be "typical". This would have been consistent with the situation in low dimensions: dimension two for flows and dimension one for diffeomorphisms. However, he soon found examples of vector fields on higher-dimensional manifolds that cannot be made structurally stable by an arbitrarily small perturbation (such examples were later constructed on manifolds of dimension three). This means that in higher dimensions, structurally stable systems are not dense. In addition, a structurally stable system may have transversal homoclinic trajectories of hyperbolic saddle closed orbits and infinitely many periodic orbits, even though the phase space is compact. The closest higher-dimensional analogue of the structurally stable systems considered by Andronov and Pontryagin is given by the Morse–Smale systems. See also Homeostasis Self-stabilization Superstabilization Stability theory References Dynamical systems Stability theory
Structural stability
[ "Physics", "Mathematics" ]
1,681
[ "Stability theory", "Mechanics", "Dynamical systems" ]
2,516,966
https://en.wikipedia.org/wiki/Phosphor%20thermometry
Phosphor thermometry is an optical method for surface temperature measurement. The method exploits luminescence emitted by a phosphor material. Phosphors are fine white or pastel-colored inorganic powders which may be stimulated by any of a variety of means to luminesce, i.e. emit light. Certain characteristics of the emitted light change with temperature, including brightness, color, and afterglow duration. The latter is most commonly used for temperature measurement. History The first mention of temperature measurement utilizing a phosphor is in two patents originally filed in 1932 by Paul Neubert. Time dependence of luminescence Typically, a short-duration ultraviolet lamp or laser source illuminates the phosphor coating, which in turn luminesces visibly. When the illuminating source ceases, the luminescence will persist for a characteristic time, steadily decreasing. The time required for the brightness to decrease to 1/e of its original value is known as the decay time or lifetime and is signified as $\tau$; it is a function of temperature $T$. The intensity $I$ of the luminescence commonly decays exponentially as $$I = I_0 \, e^{-t/\tau},$$ where $I_0$ is the initial intensity (or amplitude), $t$ is the time and $\tau$ is the decay time, a parameter which can be temperature dependent. A temperature sensor based on direct decay time measurement has been shown to work at temperatures from 1,000 °C up to as high as 1,600 °C. In that work, a doped YAG phosphor was grown onto an undoped YAG fiber to form a monolithic structure for the probe, and a laser was used as the excitation source. Subsequently, other versions using LEDs as the excitation source were realized. These devices can measure temperature up to 1,000 °C, and are used in microwave and plasma processing applications. If the excitation source is periodic rather than pulsed, then the time response of the luminescence is correspondingly different. For instance, there is a phase difference $\varphi$ between a sinusoidally varying light-emitting diode (LED) signal of frequency $f$ and the fluorescence that results (see figure). The phase difference varies with decay time and hence temperature as $$\tan \varphi = 2 \pi f \tau.$$ Temperature dependence of emission lines: intensity ratio The second method of temperature detection is based on the intensity ratio of two separate emission lines; the change in coating temperature is reflected by the change of the phosphorescence spectrum. This method enables surface temperature distributions to be measured. The intensity ratio method has the advantage that contaminated optics have little effect on the measurement, as it compares ratios between emission lines. The emission lines are equally affected by 'dirty' surfaces or optics.
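A minimal sketch of the decay-time method described above, assuming a synthetic single-exponential signal and an invented τ(T) calibration curve; real probes use measured, phosphor-specific calibrations:

```python
import numpy as np

# Minimal sketch of the lifetime method: simulate a single-exponential decay
# I(t) = I0*exp(-t/tau), recover tau by a linear fit to log(I), then look the
# temperature up in a calibration curve. The calibration below is invented
# purely for illustration.
rng = np.random.default_rng(1)

tau_true = 50e-6                         # 50 microsecond decay time (assumed)
t = np.linspace(0, 250e-6, 500)
I = np.exp(-t / tau_true) * (1 + rng.normal(scale=0.01, size=t.size))

# log(I) = -t/tau + const, so tau = -1/slope.
slope, _ = np.polyfit(t, np.log(I), 1)
tau_fit = -1.0 / slope
print(f"fitted decay time: {tau_fit*1e6:.1f} us")

# Hypothetical monotonic calibration tau(T) used for interpolation.
cal_T = np.array([300.0, 500.0, 700.0, 900.0])        # kelvin
cal_tau = np.array([200e-6, 100e-6, 50e-6, 20e-6])    # decay time falls with T
T = np.interp(tau_fit, cal_tau[::-1], cal_T[::-1])    # np.interp needs ascending x
print(f"inferred temperature: {T:.0f} K")
```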
Temperature dependence Several observations are pertinent to the figure on the right: Oxysulfide materials exhibit several different emission lines, each having a different temperature dependence. Substituting one rare earth for another, in this instance changing La to Gd, shifts the temperature dependence. The YAG:Cr material (Y3Al5O12:Cr3+) shows less sensitivity but covers a wider temperature range than the more sensitive materials. Sometimes decay times are constant over a wide range before becoming temperature dependent above some threshold value. This is illustrated for the YVO4:Dy curve; it also holds for several other materials (not shown in the figure). Manufacturers sometimes add a second rare earth as a sensitizer. This may enhance the emission and alter the nature of the temperature dependence. Gallium is likewise sometimes substituted for some of the aluminium in YAG, again altering the temperature dependence. The emission decay of dysprosium (Dy) phosphors is sometimes non-exponential with time. Consequently, the value assigned to the decay time will depend on the analysis method chosen. This non-exponential character often becomes more pronounced as the dopant concentration increases. In the high-temperature part, the two lutetium phosphate samples are single crystals rather than powders; this has only a minor effect on the decay time and its temperature dependence. However, the decay time of a given phosphor depends on the particle size, especially below one micrometer. There are further parameters influencing the luminescence of thermographic phosphors, e.g. the excitation energy, the dopant concentration, the composition, or the absolute pressure of the surrounding gas phase. Therefore, care has to be taken to keep these parameters constant for all measurements. Thermographic phosphor application in a thermal barrier coating A thermal barrier coating (TBC) allows gas turbine components to survive higher temperatures in the hot section of engines while having acceptable lifetimes. These coatings are thin ceramic coatings (several hundred micrometers), usually based on oxide materials. Early works considered the integration of luminescent materials as erosion sensors in TBCs. The notion of a "thermal barrier sensor coating" (sensor TBC) for temperature detection was introduced in 1998. Instead of applying a phosphor layer on the surface where the temperature needs to be measured, it was proposed to locally modify the composition of the TBC so that it acts as a thermographic phosphor as well as a protective thermal barrier. This dual-functional material enables surface temperature measurement but also could provide a means to measure temperature within the TBC and at the metal/topcoat interface, hence enabling the manufacture of an integrated heat flux gauge. First results on yttria-stabilized zirconia co-doped with europia (YSZ:Eu) powders were published in 2000. They also demonstrated sub-surface measurements, looking through a 50 μm undoped YSZ layer and detecting the phosphorescence of a thin (10 μm) YSZ:Eu layer (bi-layer system) underneath, using the ESAVD technique to produce the coating. The first results on electron beam physical vapour deposition of TBCs were published in 2001. The coating tested was a monolayer coating of standard YSZ co-doped with dysprosia (YSZ:Dy). First work on industrial atmospheric plasma sprayed (APS) sensor coating systems commenced around 2002 and was published in 2005. They demonstrated the capabilities of APS sensor coatings for in-situ two-dimensional temperature measurements in burner rigs using a high-speed camera system. Further, temperature measurement capabilities of APS sensor coatings were demonstrated beyond 1400 °C. Results on multilayer sensing TBCs, enabling simultaneous temperature measurements below and on the surface of the coating, were reported. Such a multilayer coating could also be used as a heat flux gauge in order to monitor the thermal gradient and to determine the heat flux through the thickness of the TBC under realistic service conditions.
Applications for thermographic phosphors in TBCs While the previously mentioned methods focus on temperature detection, phosphorescent materials included in a thermal barrier coating can also act as a microprobe detecting ageing mechanisms or changes in other physical parameters that affect the local atomic surroundings of the optically active ion. The detection of hot corrosion processes in YSZ due to vanadium attack has been demonstrated. See also Fluorescence Luminescence Photoluminescence Thermometer Thermometry References Further reading Thermometers Measurement
Phosphor thermometry
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,495
[ "Physical quantities", "Quantity", "Measuring instruments", "Measurement", "Size", "Thermometers" ]
2,517,465
https://en.wikipedia.org/wiki/Enzyme%20multiplied%20immunoassay%20technique
Enzyme multiplied immunoassay technique (EMIT) is a common method for qualitative and quantitative determination of therapeutic and recreational drugs and certain proteins in serum and urine. It is an immunoassay in which a drug or metabolite in the sample competes with a drug/metabolite labelled with an enzyme to bind to an antibody. The more drug there is in the sample, the more free enzyme there will be, and the increased enzyme activity causes a change in color. Determination of drug levels in serum is particularly important when the difference between the concentrations needed to produce a therapeutic effect and adverse side reactions (the therapeutic window) is small. EMIT therapeutic drug monitoring tests provide accurate information about the concentration of drugs such as immunosuppressants and some antibiotics. EMIT urine assays for drugs such as cannabinoids, morphine, and amphetamine are designed to detect the drug itself or a metabolite of the drug present in a concentration above a pre-specified minimum detection cutoff limit. In the U.S., the cutoff limits must be set in accordance with the Mandatory Guidelines for Federal Workplace Drug Testing Programs that were developed by SAMHSA (the Substance Abuse and Mental Health Services Administration, a branch of the U.S. Department of Health and Human Services). The setting of reasonable cutoff limits helps reduce false positive results that occur from assay limitations. Because of the social and legal consequences, a positive test result must be confirmed by an alternative method, usually gas chromatography/mass spectrometry (GC/MS). As an example, the SAMHSA cutoffs for cannabinoids are 50 ng/ml for the immunoassay and 15 ng/ml as confirmed by GC/MS. Immunoassays that do not conform to the SAMHSA guidelines, featuring a cutoff of 20 ng/ml, have been shown to produce false positives from passive inhalation of marijuana smoke. See also Drug tests Blood tests Screening (medicine) References Urine tests Blood tests Drug testing
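The two-stage screen-and-confirm logic described above can be sketched as follows; the cutoffs are the SAMHSA cannabinoid values quoted in the text, while the function and variable names are illustrative:

```python
# Minimal sketch of the two-stage decision logic for a cannabinoid urine test,
# using the SAMHSA cutoffs quoted above: 50 ng/ml for the EMIT screen and
# 15 ng/ml for GC/MS confirmation. Names are illustrative, not a standard API.
SCREEN_CUTOFF_NG_ML = 50.0   # immunoassay (EMIT) cutoff
CONFIRM_CUTOFF_NG_ML = 15.0  # GC/MS confirmation cutoff

def cannabinoid_result(screen_ng_ml: float, gcms_ng_ml: float | None) -> str:
    """Report a positive only when the screen is positive AND GC/MS confirms."""
    if screen_ng_ml < SCREEN_CUTOFF_NG_ML:
        return "negative (screen below cutoff)"
    if gcms_ng_ml is None:
        return "presumptive positive (awaiting confirmation)"
    if gcms_ng_ml >= CONFIRM_CUTOFF_NG_ML:
        return "confirmed positive"
    return "negative (not confirmed by GC/MS)"

print(cannabinoid_result(62.0, 21.0))  # -> confirmed positive
print(cannabinoid_result(62.0, 9.0))   # -> negative (not confirmed by GC/MS)
```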
Enzyme multiplied immunoassay technique
[ "Chemistry" ]
423
[ "Blood tests", "Chemical pathology" ]
2,518,272
https://en.wikipedia.org/wiki/Aeroacoustics
Aeroacoustics is a branch of acoustics that studies noise generation via either turbulent fluid motion or aerodynamic forces interacting with surfaces. Noise generation can also be associated with periodically varying flows. A notable example of this phenomenon is the Aeolian tones produced by wind blowing over fixed objects. Although no complete scientific theory of the generation of noise by aerodynamic flows has been established, most practical aeroacoustic analysis relies upon the so-called aeroacoustic analogy, proposed by Sir James Lighthill in the 1950s while at the University of Manchester, whereby the governing equations of motion of the fluid are coerced into a form reminiscent of the wave equation of "classical" (i.e. linear) acoustics on the left-hand side, with the remaining terms as sources on the right-hand side. History The modern discipline of aeroacoustics can be said to have originated with the first publication of Lighthill in the early 1950s, when noise generation associated with the jet engine was beginning to be placed under scientific scrutiny. Lighthill's equation Lighthill rearranged the Navier–Stokes equations, which govern the flow of a compressible viscous fluid, into an inhomogeneous wave equation, thereby making a connection between fluid mechanics and acoustics. This is often called "Lighthill's analogy" because it presents a model for the acoustic field that is not, strictly speaking, based on the physics of flow-induced/generated noise, but rather on the analogy of how it might be represented through the governing equations of a compressible fluid. The continuity and the momentum equations are given by $$\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho \mathbf{v} \right) = 0, \qquad \frac{\partial}{\partial t}\left( \rho \mathbf{v} \right) + \nabla \cdot \left( \rho\, \mathbf{v} \otimes \mathbf{v} \right) = -\nabla p + \nabla \cdot \boldsymbol{\sigma},$$ where $\rho$ is the fluid density, $\mathbf{v}$ is the velocity field, $p$ is the fluid pressure and $\boldsymbol{\sigma}$ is the viscous stress tensor. Note that $\rho\, \mathbf{v} \otimes \mathbf{v}$ is a tensor (see also tensor product). Differentiating the conservation of mass equation with respect to time, taking the divergence of the momentum equation and subtracting the latter from the former, we arrive at $$\frac{\partial^2 \rho}{\partial t^2} = \nabla^2 p + \nabla \cdot \left[ \nabla \cdot \left( \rho\, \mathbf{v} \otimes \mathbf{v} - \boldsymbol{\sigma} \right) \right].$$ Subtracting $c_0^2 \nabla^2 \rho$, where $c_0$ is the speed of sound in the medium in its equilibrium (or quiescent) state, from both sides of the last equation results in the celebrated Lighthill equation of aeroacoustics, $$\frac{\partial^2 \rho}{\partial t^2} - c_0^2 \nabla^2 \rho = \left( \nabla \otimes \nabla \right) : \mathbf{T},$$ where $\nabla \otimes \nabla$ is the Hessian operator and $\mathbf{T} = \rho\, \mathbf{v} \otimes \mathbf{v} + \left( p - c_0^2 \rho \right) \mathbb{I} - \boldsymbol{\sigma}$ is the so-called Lighthill turbulence stress tensor for the acoustic field ($\mathbb{I}$ denotes the identity tensor). The Lighthill equation is an inhomogeneous wave equation. Using Einstein notation, Lighthill's equation can be written as $$\frac{\partial^2 \rho}{\partial t^2} - c_0^2 \nabla^2 \rho = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j}, \qquad T_{ij} = \rho v_i v_j + \left( p - c_0^2 \rho \right) \delta_{ij} - \sigma_{ij}.$$ Each of the acoustic source terms, i.e. the terms in $T_{ij}$, may play a significant role in the generation of noise depending upon the flow conditions considered. The first term, $\rho v_i v_j$, describes the inertial effect of the flow (the Reynolds stress, after Osborne Reynolds), whereas the second term describes non-linear acoustic generation processes, and the last term corresponds to sound generation/attenuation due to viscous forces. In practice, it is customary to neglect the effects of viscosity on the fluid, as its effects are small in turbulent noise generation problems such as jet noise. Lighthill provides an in-depth discussion of this matter. In aeroacoustic studies, both theoretical and computational efforts are made to solve for the acoustic source terms in Lighthill's equation in order to make statements regarding the relevant aerodynamic noise generation mechanisms present. Finally, it is important to realize that Lighthill's equation is exact in the sense that no approximations of any kind have been made in its derivation.
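The rearrangement above can be checked symbolically; a sketch in one space dimension (the 1-D reduction and the use of sympy are conveniences for illustration, not part of Lighthill's presentation):

```python
import sympy as sp

# Symbolic check, in one space dimension, that differentiating continuity in
# time and subtracting the divergence of momentum reproduces Lighthill's form
#   rho_tt - c0^2 rho_xx = d^2/dx^2 [ rho*v^2 + (p - c0^2*rho) - sigma ].
x, t, c0 = sp.symbols("x t c0")
rho = sp.Function("rho")(x, t)
v = sp.Function("v")(x, t)
p = sp.Function("p")(x, t)
sigma = sp.Function("sigma")(x, t)

continuity = sp.diff(rho, t) + sp.diff(rho * v, x)
momentum = (sp.diff(rho * v, t) + sp.diff(rho * v**2, x)
            + sp.diff(p, x) - sp.diff(sigma, x))

lhs = sp.diff(rho, t, 2) - c0**2 * sp.diff(rho, x, 2)
rhs = sp.diff(rho * v**2 + (p - c0**2 * rho) - sigma, x, 2)

# (lhs - rhs) must vanish identically given continuity = 0 and momentum = 0:
residual = (lhs - rhs) - (sp.diff(continuity, t) - sp.diff(momentum, x))
print(sp.simplify(residual))  # -> 0
```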
Landau–Lifshitz aeroacoustic equation In their classical text on fluid mechanics, Landau and Lifshitz derive an aeroacoustic equation analogous to Lighthill's (i.e., an equation for sound generated by "turbulent" fluid motion), but for the incompressible flow of an inviscid fluid. The inhomogeneous wave equation that they obtain is for the pressure rather than for the density of the fluid. Furthermore, unlike Lighthill's equation, Landau and Lifshitz's equation is not exact; it is an approximation. If one is to allow for approximations to be made, a simpler way (without necessarily assuming the fluid is incompressible) to obtain an approximation to Lighthill's equation is to assume that $p - p_0 = c_0^2 \left( \rho - \rho_0 \right)$, where $\rho_0$ and $p_0$ are the (characteristic) density and pressure of the fluid in its equilibrium state. Then, upon substituting the assumed relation between pressure and density into Lighthill's equation, we obtain (for an inviscid fluid, $\boldsymbol{\sigma} = 0$) $$\frac{1}{c_0^2} \frac{\partial^2 p}{\partial t^2} - \nabla^2 p = \frac{\partial^2 \left( \rho v_i v_j \right)}{\partial x_i \partial x_j}.$$ And for the case when the fluid is indeed incompressible, i.e. $\rho = \rho_0$ (for some positive constant $\rho_0$) everywhere, then we obtain exactly the equation given in Landau and Lifshitz, namely $$\frac{1}{c_0^2} \frac{\partial^2 p}{\partial t^2} - \nabla^2 p = \rho_0 \frac{\partial^2 \left( v_i v_j \right)}{\partial x_i \partial x_j}.$$ A similar approximation, in the context of Lighthill's equation, namely $T_{ij} \approx \rho_0 v_i v_j$, is suggested by Lighthill [see Eq. (7) in the latter paper]. Of course, one might wonder whether we are justified in assuming that $p - p_0 = c_0^2 \left( \rho - \rho_0 \right)$. The answer is affirmative, if the flow satisfies certain basic assumptions. In particular, if the density and pressure fluctuations are small compared with their equilibrium values, i.e. $|\rho - \rho_0| \ll \rho_0$ and $|p - p_0| \ll p_0$, then the assumed relation follows directly from the linear theory of sound waves (see, e.g., the linearized Euler equations and the acoustic wave equation). In fact, the approximate relation between $p$ and $\rho$ that we assumed is just a linear approximation to the generic barotropic equation of state of the fluid. However, even after the above deliberations, it is still not clear whether one is justified in using an inherently linear relation to simplify a nonlinear wave equation. Nevertheless, it is a very common practice in nonlinear acoustics, as the textbooks on the subject show: e.g., Naugolnykh and Ostrovsky and Hamilton and Morfey. See also Acoustic theory Aeolian harp Computational aeroacoustics References External links M. J. Lighthill, "On Sound Generated Aerodynamically. I. General Theory," Proc. R. Soc. Lond. A 211 (1952) pp. 564–587. This article on JSTOR. M. J. Lighthill, "On Sound Generated Aerodynamically. II. Turbulence as a Source of Sound," Proc. R. Soc. Lond. A 222 (1954) pp. 1–32. This article on JSTOR. L. D. Landau and E. M. Lifshitz, Fluid Mechanics 2ed., Course of Theoretical Physics vol. 6, Butterworth-Heinemann (1987) §75. Preview from Amazon. K. Naugolnykh and L. Ostrovsky, Nonlinear Wave Processes in Acoustics, Cambridge Texts in Applied Mathematics vol. 9, Cambridge University Press (1998) chap. 1. Preview from Google. M. F. Hamilton and C. L. Morfey, "Model Equations," Nonlinear Acoustics, eds. M. F. Hamilton and D. T. Blackstock, Academic Press (1998) chap. 3. Preview from Google. Aeroacoustics at the University of Mississippi Aeroacoustics at the University of Leuven International Journal of Aeroacoustics Examples in Aeroacoustics from NASA Aeroacoustics.info Acoustics Aerodynamics Fluid dynamics Sound
Aeroacoustics
[ "Physics", "Chemistry", "Engineering" ]
1,453
[ "Chemical engineering", "Classical mechanics", "Acoustics", "Aerodynamics", "Aerospace engineering", "Piping", "Fluid dynamics" ]
2,518,328
https://en.wikipedia.org/wiki/Herbrand%27s%20theorem
Herbrand's theorem is a fundamental result of mathematical logic obtained by Jacques Herbrand (1930). It essentially allows a certain kind of reduction of first-order logic to propositional logic. Herbrand's theorem is the logical foundation for most automatic theorem provers. Although Herbrand originally proved his theorem for arbitrary formulas of first-order logic, the simpler version shown here, restricted to formulas in prenex form containing only existential quantifiers, became more popular. Statement Let $(\exists y_1, \ldots, y_n)\, F(y_1, \ldots, y_n)$ be a formula of first-order logic with $F(y_1, \ldots, y_n)$ quantifier-free, though it may contain additional free variables. This version of Herbrand's theorem states that the above formula is valid if and only if there exists a finite sequence of terms $t_{ij}$, possibly in an expansion of the language, with $1 \le i \le k$ and $1 \le j \le n$, such that $$F(t_{11}, \ldots, t_{1n}) \lor \ldots \lor F(t_{k1}, \ldots, t_{kn})$$ is valid. If it is valid, it is called a Herbrand disjunction for $(\exists y_1, \ldots, y_n)\, F(y_1, \ldots, y_n)$. Informally: a formula in prenex form containing only existential quantifiers is provable (valid) in first-order logic if and only if a disjunction composed of substitution instances of its quantifier-free subformula is a tautology (propositionally derivable). The restriction to formulas in prenex form containing only existential quantifiers does not limit the generality of the theorem, because formulas can be converted to prenex form and their universal quantifiers can be removed by Herbrandization. Conversion to prenex form can be avoided if structural Herbrandization is performed. Herbrandization can be avoided by imposing additional restrictions on the variable dependencies allowed in the Herbrand disjunction. Proof sketch A proof of the non-trivial direction of the theorem can be constructed according to the following steps: If the formula $(\exists y_1, \ldots, y_n)\, F(y_1, \ldots, y_n)$ is valid, then by completeness of cut-free sequent calculus, which follows from Gentzen's cut-elimination theorem, there is a cut-free proof of $\vdash (\exists y_1, \ldots, y_n)\, F(y_1, \ldots, y_n)$. Starting from the leaves and working downwards, remove the inferences that introduce existential quantifiers. Remove contraction inferences on previously existentially quantified formulas, since the formulas (now with terms substituted for previously quantified variables) might not be identical anymore after the removal of the quantifier inferences. The removal of contractions accumulates all the relevant substitution instances of $F$ on the right side of the sequent, thus resulting in a proof of $\vdash F(t_{11}, \ldots, t_{1n}), \ldots, F(t_{k1}, \ldots, t_{kn})$, from which the Herbrand disjunction can be obtained. However, sequent calculus and cut-elimination were not known at the time of Herbrand's proof, and Herbrand had to prove his theorem in a more complicated way. Generalizations of Herbrand's theorem Herbrand's theorem has been extended to higher-order logic by using expansion-tree proofs. The deep representation of expansion-tree proofs corresponds to a Herbrand disjunction when restricted to first-order logic. Herbrand disjunctions and expansion-tree proofs have been extended with a notion of cut. Due to the complexity of cut-elimination, Herbrand disjunctions with cuts can be non-elementarily smaller than a standard Herbrand disjunction. Herbrand disjunctions have been generalized to Herbrand sequents, allowing Herbrand's theorem to be stated for sequents: "a Skolemized sequent is derivable if and only if it has a Herbrand sequent". See also Herbrand structure Herbrand interpretation Herbrand universe Compactness theorem Notes References Proof theory Theorems in the foundations of mathematics Metatheorems
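A standard worked example, not taken from this article, may clarify the statement:

```latex
% Worked example (standard, illustrative): the valid formula
%   (exists y) (P(y) -> P(f(y)))
% has no one-term Herbrand disjunction, but a two-term one exists.
\[
  (\exists y)\,\bigl(P(y) \to P(f(y))\bigr)
\]
% Taking terms t_1 = c (a fresh constant) and t_2 = f(c) gives
\[
  \bigl(P(c) \to P(f(c))\bigr) \;\lor\; \bigl(P(f(c)) \to P(f(f(c)))\bigr),
\]
% which is a propositional tautology in the atoms P(c), P(f(c)), P(f(f(c))):
% if the first disjunct is false, then P(c) is true and P(f(c)) is false,
% which makes the second disjunct true. A single substitution instance
% P(t) -> P(f(t)) is not a tautology, so k = 2 terms are genuinely needed.
```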
Herbrand's theorem
[ "Mathematics" ]
748
[ "Proof theory", "Foundations of mathematics", "Mathematical logic", "Mathematical problems", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
2,518,584
https://en.wikipedia.org/wiki/Prescaler
A prescaler is an electronic counting circuit used to reduce a high-frequency electrical signal to a lower frequency by integer division. The prescaler takes the basic timer clock frequency (which may be the CPU clock frequency or may be some higher or lower frequency) and divides it by some value before feeding it to the timer, according to how the prescaler register(s) are configured. The configurable prescaler values, referred to as prescales, may be limited to a few fixed values (powers of 2), or they may be any integer value from 1 to 2^P, where P is the number of prescaler bits. The purpose of the prescaler is to allow the timer to be clocked at the rate a user desires. For shorter (8- and 16-bit) timers, there will often be a tradeoff between resolution (high resolution requires a high clock rate) and range (high clock rates cause the timer to overflow more quickly). For example, one cannot (without some tricks) achieve 1 μs resolution and a 1 s maximum period using a 16-bit timer. In this example, using 1 μs resolution would limit the period to about 65 ms maximum, since a 16-bit counter overflows after 2^16 = 65,536 ticks. However, the prescaler allows tweaking the ratio between resolution and maximum period to achieve a desired effect. Example of use Prescalers are typically used at very high frequencies to extend the upper frequency range of frequency counters, phase-locked loop (PLL) synthesizers, and other counting circuits. When used in conjunction with a PLL, a prescaler introduces a normally undesired change in the relationship between the frequency step size and the phase detector comparison frequency. For this reason, it is common either to restrict the integer to a low value, or to use a dual-modulus prescaler in this application. A dual-modulus prescaler is one that has the ability to selectively divide the input frequency by one of two (normally consecutive) integers, such as 32 and 33. Common fixed-integer microwave prescalers are available in moduli 2, 4, 5, 8 and 10, and can operate at frequencies in excess of 10 GHz. Nomenclature A prescaler is essentially a counter-divider, and thus the names may be used somewhat interchangeably. See also Frequency divider References Electronic circuits
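The resolution/range tradeoff can be made concrete with a small sketch; the 16 MHz input clock and the power-of-two prescale set are assumptions typical of small microcontrollers, not taken from any particular datasheet:

```python
# Minimal sketch of the resolution-vs-range tradeoff for a 16-bit timer.
# The 16 MHz input clock and the power-of-two prescale values are assumptions
# typical of small microcontrollers, not taken from any specific datasheet.
F_CLK = 16_000_000          # timer input clock in Hz (assumed)
TIMER_BITS = 16

for prescale in (1, 8, 64, 256, 1024):
    tick = prescale / F_CLK                 # resolution: seconds per timer tick
    overflow = tick * (2 ** TIMER_BITS)     # range: time until the counter wraps
    print(f"prescale {prescale:5d}: tick = {tick*1e6:9.3f} us, "
          f"overflow after {overflow*1e3:9.3f} ms")
```

Running this shows, for instance, a 0.0625 μs tick with a 4.096 ms range at prescale 1, versus a 64 μs tick with a range of over 4 s at prescale 1024.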
Prescaler
[ "Engineering" ]
479
[ "Electronic engineering", "Electronic circuits" ]
2,520,153
https://en.wikipedia.org/wiki/Laser%20ultrasonics
Laser-ultrasonics uses lasers to generate and detect ultrasonic waves. It is a non-contact technique used to measure material thickness, detect flaws and carry out materials characterization. The basic components of a laser-ultrasonic system are a generation laser, a detection laser and a detector. Ultrasound generation by laser The generation lasers are short-pulse (from tens of nanoseconds to femtoseconds) and high peak power lasers. Common lasers used for ultrasound generation are solid-state Q-switched Nd:YAG and gas lasers (CO2 or excimers). The physical principle is thermal expansion (also called the thermoelastic regime) or ablation. In the thermoelastic regime, the ultrasound is generated by the sudden thermal expansion due to the heating of a tiny surface of the material by the laser pulse. If the laser power is sufficient to heat the surface above the material's boiling point, some material is evaporated (typically some nanometres) and ultrasound is generated by the recoil effect of the expanding evaporated material. In the ablation regime, a plasma is often formed above the material surface, and its expansion can make a substantial contribution to the ultrasonic generation. Consequently, the emissivity patterns and modal content differ between the two mechanisms. The frequency content of the generated ultrasound is partially determined by the frequency content of the laser pulses, with shorter pulses giving higher frequencies. For very high frequency generation (up to hundreds of GHz), femtosecond lasers are used, often in a pump-probe configuration with the detection system (see picosecond ultrasonics). Historically, fundamental research into the nature of laser-ultrasonics was started in 1979 by Richard J. Dewhurst and Stuart B. Palmer. They set up a new laboratory in the Department of Applied Physics, University of Hull. Dewhurst provided the laser-matter expertise and Palmer the ultrasound expertise. Investigations were directed towards the development of a scientific insight into the physical processes converting laser-matter interaction into ultrasound. The studies were also aimed at assessing the characteristics of the ultrasound propagating from the near field into the far field. Importantly, quantitative measurements were performed between 1979 and 1982. In solids, the measurements included amplitudes of longitudinal and shear waves in absolute terms. Ultrasound generation by a laser pulse for both the thermoelastic regime and the transition to the plasma regime was examined. By comparing measurements with theoretical predictions, a description of the magnitude and direction of the stresses leading to ultrasonic generation was presented for the first time. It led to the proposition that laser-generated ultrasound could be regarded as a standard acoustic source. Additionally, they showed that surface modification can sometimes be used to amplify the magnitude of ultrasonic signals. Their research also included the first quantitative studies of laser-induced Rayleigh waves, which can dominate ultrasonic surface waves. In studies beyond 1982, surface waves were shown to have potential use in non-destructive testing. One type of investigation included surface-breaking crack depth estimations in metals, using artificial cracks. Crack sizing was demonstrated using wideband laser-ultrasonics. Findings were first reported at a Royal Society meeting in London, with detailed publications elsewhere. Important features of laser ultrasonics were summarised in 1990.
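A minimal numerical sketch of the pulse-duration/bandwidth relationship mentioned above; the Gaussian pulse shapes and the two durations are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: the frequency content of the generated ultrasound scales
# inversely with the laser pulse duration. Gaussian pulse shapes and the two
# durations below are illustrative assumptions.
fs = 100e9                       # sampling rate, 100 GS/s
t = np.arange(-200e-9, 200e-9, 1 / fs)

def bandwidth_hz(fwhm_s: float) -> float:
    """Return the -3 dB spectral width of a Gaussian pulse of given FWHM."""
    sigma = fwhm_s / 2.355                       # FWHM = 2*sqrt(2*ln 2)*sigma
    pulse = np.exp(-0.5 * (t / sigma) ** 2)
    spec = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    half = spec >= spec[0] / np.sqrt(2.0)        # -3 dB relative to DC
    return freqs[half].max()

for fwhm in (10e-9, 1e-9):                       # 10 ns vs 1 ns pulses
    print(f"{fwhm*1e9:4.0f} ns pulse -> bandwidth ~ "
          f"{bandwidth_hz(fwhm)/1e6:7.1f} MHz")
```

The tenfold shorter pulse yields roughly ten times the bandwidth, consistent with the qualitative statement in the text.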
Ultrasound detection by laser

For scientific investigations in the early 1980s, Michelson interferometers were exploited. They were capable of measuring ultrasonic signals quantitatively, in typical ranges of 20 nm down to 5 pm, and possessed a broadband frequency response up to about 50 MHz. Unfortunately, good signals required samples with polished surfaces, and the interferometers suffered a serious loss of sensitivity when used on rough industrial surfaces. A significant breakthrough for the application of laser ultrasonics came in 1986, when the first optical interferometer capable of reasonable detection sensitivity on rough industrial surfaces was demonstrated. Monchalin et al. at the National Research Council of Canada in Boucherville showed that a Fabry–Pérot interferometer system could assess optical speckle returning from rough surfaces. It provided the impetus for the translation of laser ultrasonics into industrial applications.

Today, ultrasound waves may be detected optically by a variety of techniques. Most techniques use continuous or long-pulse (typically tens of microseconds) lasers, but some use short pulses to down-convert very high frequencies to DC in a classic pump-probe configuration with the generation. Some techniques (notably conventional Fabry–Pérot detectors) require high frequency stability, which usually implies a long coherence length. Common detection techniques include interferometry (homodyne, heterodyne or Fabry–Pérot), optical beam deflection (GCLAD) and knife-edge detection. With GCLAD (gas-coupled laser acoustic detection), a laser beam is passed through the region where one wants to measure or record the acoustic changes. The ultrasound waves create changes in the refractive index of the air. When the laser beam encounters these changes, it is slightly deflected and displaced to a new course. This change is detected and converted to an electrical signal by a custom-built photodetector. This enables high-sensitivity detection of ultrasound on rough surfaces for frequencies up to 10 MHz.

In practice, the choice of technique is often determined by the physical optics and the condition of the sample surface. Many techniques fail to work well on rough surfaces (e.g. simple interferometers), and there are many different schemes to overcome this problem. For instance, photorefractive crystals and four-wave mixing are used in an interferometer to compensate for the effects of surface roughness. These techniques are usually expensive, both in monetary cost and in light budget (thus requiring more laser power to achieve the same signal-to-noise ratio under ideal conditions). At low to moderate frequencies (say, below 1 GHz), the mechanism for detection is the movement of the surface of the sample. At high frequencies (say, above 1 GHz), other mechanisms may come into play (for instance, modulation of the sample's refractive index with stress). Under ideal circumstances most detection techniques can be considered theoretically as interferometers and, as such, their ultimate sensitivities are all roughly equal. This is because, in all these techniques, interferometry is used to linearize the detection transfer function, and when linearized, maximum sensitivity is achieved. Under these conditions, photon shot noise dominates the sensitivity; this is fundamental to all the optical detection techniques. However, the ultimate limit is determined by the phonon shot noise.
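To give a sense of scale for the photon-shot-noise limit just mentioned, the sketch below evaluates the textbook shot-noise-limited minimum detectable displacement for an idealized two-beam (Michelson-type) interferometer, dx_min = (lambda/4*pi) * sqrt(2*h*nu*B / (eta*P)). The wavelength, optical power, bandwidth and quantum efficiency are illustrative assumptions, not figures from the article.

    import math

    # Shot-noise-limited minimum detectable displacement (SNR = 1) for an
    # idealized two-beam interferometer. All numbers below are illustrative
    # assumptions, not values from the article.

    h = 6.626e-34      # Planck constant, J*s
    c = 2.998e8        # speed of light, m/s

    def dx_min(lam, power, bandwidth, eta=0.9):
        nu = c / lam   # optical frequency, Hz
        return (lam / (4 * math.pi)) * math.sqrt(
            2 * h * nu * bandwidth / (eta * power))

    # 532 nm detection laser, 10 mW on the photodiode, 50 MHz bandwidth
    print(f"dx_min ~ {dx_min(532e-9, 10e-3, 50e6):.2e} m")  # a few picometres

The result, a few picometres for tens of milliwatts over a 50 MHz bandwidth, is consistent with the picometre-scale sensitivities quoted above, and the square-root dependence on power shows why simply raising the optical power improves sensitivity only slowly.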
Since the phonon frequency is many orders of magnitude lower than the photon frequency, the ultimate sensitivity of ultrasonic detection can be much higher. The usual method for increasing the sensitivity of optical detection is to use more optical power. However, the shot-noise-limited SNR is proportional to the square root of the total detection power, so increasing the optical power has a limited effect, and damaging power levels are easily reached before an adequate SNR is achieved. Consequently, optical detection frequently has a lower SNR than non-optical contacting techniques. Optical generation (at least when firmly in the thermoelastic regime) is proportional to the optical power used, and it is generally more efficient to improve the generation rather than the detection (again, the limit is the damage threshold). Techniques like CHOTs (cheap optical transducers) can overcome the limit of optical detection sensitivity by passively amplifying the amplitude of vibration before optical detection, and can result in an increase in sensitivity of several orders of magnitude.

Ultrasonic laser technique operation

Laser ultrasonics belongs to the family of measurement techniques known as non-destructive techniques (NDT), that is, methods which do not change the state of the measurand itself. It is a contactless ultrasonic inspection technique based on the excitation and measurement of ultrasound using two lasers. A laser pulse is directed onto the sample under test, and its interaction with the surface generates an ultrasonic pulse that propagates through the material. The vibrations produced by the ultrasound can subsequently be measured, for example by a self-mixing vibrometer: the high performance of the instrument makes it suitable for an accurate measurement of the ultrasonic wave and therefore for modelling the characteristics of the sample.

When the laser beam hits the surface of the material, its behaviour varies according to the power of the laser used. At high power there is genuine ablation, or vaporization, of the material at the point of incidence between the laser and the surface: a small portion of material is removed, and the recoil force produces a longitudinal compression that is the origin of the ultrasonic wave. This longitudinal wave tends to propagate in the direction normal to the surface of the material, regardless of the angle of incidence of the laser. This makes it possible to accurately estimate the thickness of the material, knowing the propagation speed of the wave, without worrying about the angle of incidence. The use of a high-power laser, with the consequent vaporization of material, is the optimal way to obtain an ultrasonic response from the object. However, to remain within the scope of non-destructive measurement, it is preferable to avoid this phenomenon by using low-power lasers. In this case the generation of ultrasound takes place through local overheating at the point of incidence of the laser: the cause of wave generation is now the thermal expansion of the material. In this way there is both the generation of longitudinal waves, as in the previous case, and the generation of transverse waves, whose angle with the direction normal to the surface depends on the material.
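The thickness estimate mentioned above reduces to simple time-of-flight arithmetic: in a pulse-echo measurement the longitudinal wave crosses the part twice, so thickness = v * dt / 2. A minimal sketch follows; the sound speed and echo delay are assumed example values, not measurements from the article.

    # Pulse-echo thickness from time of flight: the longitudinal wave travels
    # through the part and back, so d = v * dt / 2.
    # Sound speed and echo delay below are assumed example values.

    def thickness(v_long, round_trip_time):
        return v_long * round_trip_time / 2.0

    v_steel = 5900.0    # m/s, typical longitudinal sound speed in steel
    dt = 3.4e-6         # s, delay between successive back-wall echoes (example)
    print(f"thickness ~ {thickness(v_steel, dt) * 1e3:.2f} mm")  # ~10 mm

Because the laser-generated longitudinal wave travels normal to the surface regardless of the beam's angle of incidence, this estimate needs no angular correction.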
In the thermoelastic regime, the thermal energy dissipates after a few moments, leaving the surface intact: in this way the measurement is repeatable indefinitely (assuming the material is sufficiently resistant to thermal stresses) and non-destructive, as required in almost all areas of application of this technology.

The movement of the object causes a shift in the phase of the optical signal, which cannot be identified directly by an optical receiver: it is first necessary to transform the phase modulation into an amplitude modulation (in this case, a modulation of the optical intensity). Ultrasound detection can therefore be divided into three steps: the conversion from ultrasound to a phase-modulated optical signal, the conversion from phase modulation to amplitude modulation, and finally the reading of the amplitude-modulated signal with its conversion into an electrical signal (a numerical sketch of these three steps is given after the applications section below).

Industrial applications

Well-established applications of laser-ultrasonics are composite inspection for the aerospace industry and on-line hot tube thickness measurement for the metallurgical industry. Optical generation and detection of ultrasound offer scanning techniques to produce ultrasonic images known as B- and C-scans, and for TOFD (time-of-flight diffraction) studies. One of the first detections of small defects (as small as 3 mm × 3 mm) in composites was demonstrated by Dewhurst and Shan in 1993, for which they were awarded an outstanding paper award by the American Society for Non-Destructive Testing in 1994. At about the same time, significant advances in composite examination came from the National Research Council of Canada and elsewhere. A wide range of applications has since been described in the literature. In 2022 the first online grain-size gauge based on laser ultrasonics was installed in the hot strip mill in Borlänge, Sweden, to monitor the grain size along the length of the steel strip after the last stand.
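As forward-referenced above, the following minimal sketch walks through the three detection steps, assuming an idealized two-beam interferometer biased at quadrature. All parameters (wavelength, displacement amplitude, responsivity) are illustrative assumptions, not values from the article.

    import numpy as np

    # Three detection steps, for an idealized interferometer at quadrature.
    lam = 532e-9                          # detection wavelength, m (assumed)
    fs = 500e6                            # sample rate, Hz
    t = np.arange(0, 2e-6, 1 / fs)        # 2 microsecond record

    # Step 1: surface displacement phase-modulates the reflected beam.
    u = 1e-10 * np.sin(2 * np.pi * 10e6 * t)   # 0.1 nm motion at 10 MHz
    phi = 4 * np.pi * u / lam                  # optical phase (double pass)

    # Step 2: interference at quadrature converts phase to intensity.
    I0 = 1.0
    I = I0 * (1 + np.sin(phi))            # ~ I0 * (1 + phi) for small phi

    # Step 3: a photodetector converts intensity to an electrical signal.
    responsivity = 0.3                    # A/W, illustrative
    i_sig = responsivity * (I - I0)       # AC photocurrent carries the ultrasound
    print(f"peak signal current ~ {np.max(np.abs(i_sig)):.2e} A")

The quadrature bias makes the intensity respond linearly to small phase shifts, which is the linearization of the detection transfer function discussed in the detection section above.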
Laser ultrasonics
[ "Physics", "Materials_science" ]
2,361
[ "Nondestructive testing", "Materials testing", "Classical mechanics", "Acoustics" ]