id: int64 (39 to 79M)
url: string (lengths 32 to 168)
text: string (lengths 7 to 145k)
source: string (lengths 2 to 105)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (lengths 0 to 27)
18,093,768
https://en.wikipedia.org/wiki/Hybrid%20sulfur%20cycle
The hybrid sulfur cycle (HyS) is a two-step water-splitting process intended to be used for hydrogen production. Based on sulfur oxidation and reduction, it is classified as a hybrid thermochemical cycle because it uses an electrochemical (instead of a thermochemical) reaction for one of the two steps. The remaining thermochemical step is shared with the sulfur-iodine cycle. The HyS cycle was initially proposed and developed by Westinghouse Electric Corp. in the 1970s, so it is also known as the "Westinghouse" cycle. Current development efforts in the United States are being led by the Savannah River National Laboratory. Process description The two reactions in the HyS cycle are as follows: H2SO4 → H2O + SO2 + ½ O2 (thermochemical, T > 800 °C) SO2 + 2 H2O → H2SO4 + H2 (electrochemical, T = 80–120 °C) Net reaction: H2O → H2 + ½ O2 Sulfur dioxide acts to depolarize the anode of the electrolyzer. This results in a significant decrease in the reversible cell potential (and, therefore, the electric power requirement) for reaction (2). The standard cell potential for reaction (2) is -0.158 V at 298.15 K, compared to -1.229 V for the electrolysis of water (with oxygen evolution as the anodic reaction). See also Cerium(IV) oxide-cerium(III) oxide cycle Copper-chlorine cycle Iron oxide cycle Sulfur-iodine cycle Zinc–zinc oxide cycle References Chemical reactions Inorganic reactions Hydrogen production
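To make the advantage of the SO2-depolarized anode concrete, the sketch below compares the minimum (reversible) electrical work per mole of hydrogen implied by the two cell potentials quoted above (0.158 V versus 1.229 V). It is an illustrative calculation only: the two-electron transfer per H2 follows from reaction (2), the Faraday constant is standard, and real electrolyzers operate well above these reversible potentials.

```python
# Illustrative comparison of minimum electrical work for the HyS electrolysis
# step versus conventional water electrolysis, using the standard cell
# potentials quoted in the article. These are lower bounds only.
F = 96485.0          # Faraday constant, C per mole of electrons
N_ELECTRONS = 2      # electrons transferred per mole of H2 in reaction (2)

def electrical_work_kj_per_mol_h2(cell_potential_volts: float) -> float:
    """Minimum reversible electrical work W = n * F * E, in kJ per mole of H2."""
    return N_ELECTRONS * F * cell_potential_volts / 1000.0

w_hys = electrical_work_kj_per_mol_h2(0.158)    # SO2-depolarized anode
w_water = electrical_work_kj_per_mol_h2(1.229)  # ordinary water electrolysis

print(f"HyS electrolysis step: ~{w_hys:.1f} kJ/mol H2")   # ~30 kJ/mol
print(f"Water electrolysis:    ~{w_water:.1f} kJ/mol H2") # ~237 kJ/mol
print(f"Electrical work ratio: ~{w_hys / w_water:.2f}")   # ~0.13
```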
Hybrid sulfur cycle
[ "Chemistry" ]
359
[ "Chemical reaction stubs", "Inorganic reactions", "nan" ]
18,097,410
https://en.wikipedia.org/wiki/Canadian%20Penning%20Trap%20Mass%20Spectrometer
The Canadian Penning Trap Mass Spectrometer (CPT) is one of the major pieces of experimental equipment that is installed on the ATLAS superconducting heavy-ion linac facility at the Physics Division of the Argonne National Laboratory. It was developed and operated by physicist Guy Savard and a collaboration of other scientists at Argonne, the University of Manitoba, McGill University, Texas A&M University and the State University of New York. Development The CPT was originally built for the Tandem Accelerator Superconducting Cyclotron (TASCC) facility at Chalk River Laboratories in Chalk River, Ontario, Canada. However, it was transferred to Argonne National Laboratory when the TASCC accelerator was decommissioned in 1998 due to funding issues. The CPT spectrometer is designed to provide high-precision mass measurements of short-lived isotopes using radio-frequency (RF) fields. Accurate mass measurements of particular isotopes such as selenium-68 are important in the understanding of the detailed reaction mechanisms involved in the rapid-proton capture process, which occurs in astrophysical events like supernova explosions and X-ray bursts. An X-ray burst is one possible site for the rp-process mechanism, which involves the accretion of hydrogen and helium from one star onto the surface of its neutron star binary companion. Mass measurements are required as key inputs to network calculations used to describe this process in terms of the abundances of the nuclides produced, the light-curve profile of the X-ray bursts, and the energy produced. In the current configuration, more than 100 radioactive isotopes have been measured, with half-lives much less than a second and with a precision (Δm/m) approaching 10⁻⁹. Recently, a novel injection system, the RF gas cooler, has been installed on the CPT to allow fast reaction products to be decelerated, thermalized and bunched for rapid and efficient injection. This enhances the investigative capabilities of the CPT on isotopes around the N=Z line with particular emphasis on isotopes of interest to low-energy tests of the electroweak interaction and the rp-process. See also Gammasphere Helical Orbit Spectrometer (HELIOS) Mass Spectrometry Penning Trap RFQ Beam Coolers References External links ANL site Mass spectrometry Spectrometers Science and technology in Canada
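The article does not spell out the measurement principle, but Penning trap mass spectrometry in general determines a mass from the ion's cyclotron frequency, ν_c = qB/(2πm), in a known magnetic field. The sketch below illustrates that relation; the field strength and frequency used are hypothetical placeholders, not CPT specifications.

```python
import math

# Illustrative Penning-trap mass determination: a singly charged ion's mass
# follows from its measured cyclotron frequency nu_c = q*B / (2*pi*m).
# The field and frequency below are hypothetical, not CPT operating values.
E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def mass_from_cyclotron_frequency(nu_c_hz: float, b_tesla: float, charge_state: int = 1) -> float:
    """Return the ion mass in atomic mass units for a measured cyclotron frequency."""
    q = charge_state * E_CHARGE
    m_kg = q * b_tesla / (2.0 * math.pi * nu_c_hz)
    return m_kg / AMU

# Example: a hypothetical 5.9 T field and ~1.33 MHz cyclotron frequency give a
# mass of roughly 68 u, the regime of selenium-68 mentioned in the article.
print(f"{mass_from_cyclotron_frequency(1.334e6, 5.9):.2f} u")
```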
Canadian Penning Trap Mass Spectrometer
[ "Physics", "Chemistry" ]
494
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Nuclear and atomic physics stubs", "Nuclear physics", "Spectrometers", "Spectroscopy", "Matter" ]
18,097,527
https://en.wikipedia.org/wiki/Delivery%20Bar%20Code%20Sorter
A Delivery Bar Code Sorter (DBCS) is a mail sorting machine used primarily by the United States Postal Service. Introduced in 1990, these machines sort letters at a rate of approximately 36,000 pieces per hour, with a 99% accuracy rate. A computer scans the address on each piece and sorts it into one of up to 286 pockets, setting the mail up for delivery by the letter carrier. References Machines Mail sorting United States Postal Service
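As an illustration of the sorting step described above, the sketch below maps a scanned routing code to one of the machine's pockets through a sort plan. The routing codes, pocket assignments, and helper names are hypothetical; only the figures of roughly 36,000 pieces per hour and up to 286 pockets come from the article.

```python
# Hypothetical sketch of the per-letter sorting decision: look up the scanned
# routing code in a sort plan and assign a pocket. Codes and pocket numbers
# are invented for illustration.
MAX_POCKETS = 286        # machine capacity per the article
REJECT_POCKET = 0        # hypothetical pocket for unreadable mail

sort_plan = {            # routing-code -> pocket number (illustrative)
    "606011234": 17,
    "606015678": 18,
    "606029999": 42,
}

def assign_pocket(routing_code: str) -> int:
    """Return the pocket for a scanned routing code, or the reject pocket."""
    return sort_plan.get(routing_code, REJECT_POCKET)

print(36000 / 3600, "letters per second")   # ~10 pieces/s at the quoted rate
print(assign_pocket("606011234"))            # -> 17
print(assign_pocket("000000000"))            # -> 0 (reject)
```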
Delivery Bar Code Sorter
[ "Physics", "Technology", "Engineering" ]
91
[ "Physical systems", "Machines", "Mechanical engineering" ]
91,100
https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley%20experiment
The Michelson–Morley experiment was an attempt to measure the motion of the Earth relative to the luminiferous aether, a supposed medium permeating space that was thought to be the carrier of light waves. The experiment was performed between April and July 1887 by American physicists Albert A. Michelson and Edward W. Morley at what is now Case Western Reserve University in Cleveland, Ohio, and published in November of the same year. The experiment compared the speed of light in perpendicular directions in an attempt to detect the relative motion of matter, including their laboratory, through the luminiferous aether, or "aether wind" as it was sometimes called. The result was negative, in that Michelson and Morley found no significant difference between the speed of light in the direction of movement through the presumed aether, and the speed at right angles. This result is generally considered to be the first strong evidence against some aether theories, as well as initiating a line of research that eventually led to special relativity, which rules out motion against an aether. Of this experiment, Albert Einstein wrote, "If the Michelson–Morley experiment had not brought us into serious embarrassment, no one would have regarded the relativity theory as a (halfway) redemption." Michelson–Morley type experiments have been repeated many times with steadily increasing sensitivity. These include experiments from 1902 to 1905, and a series of experiments in the 1920s. More recently, in 2009, optical resonator experiments confirmed the absence of any aether wind at the 10−17 level. Together with the Ives–Stilwell and Kennedy–Thorndike experiments, Michelson–Morley type experiments form one of the fundamental tests of special relativity. Detecting the aether Physics theories of the 19th century assumed that just as surface water waves must have a supporting substance, i.e., a "medium", to move across (in this case water), and audible sound requires a medium to transmit its wave motions (such as air or water), so light must also require a medium, the "luminiferous aether", to transmit its wave motions. Because light can travel through a vacuum, it was assumed that even a vacuum must be filled with aether. Because the speed of light is so great, and because material bodies pass through the aether without obvious friction or drag, it was assumed to have a highly unusual combination of properties. Designing experiments to investigate these properties was a high priority of 19th-century physics. Earth orbits around the Sun at a speed of around , or . The Earth is in motion, so two main possibilities were considered: (1) The aether is stationary and only partially dragged by Earth (proposed by Augustin-Jean Fresnel in 1818), or (2) the aether is completely dragged by Earth and thus shares its motion at Earth's surface (proposed by Sir George Stokes, 1st Baronet in 1844). In addition, James Clerk Maxwell (1865) recognized the electromagnetic nature of light and developed what are now called Maxwell's equations, but these equations were still interpreted as describing the motion of waves through an aether, whose state of motion was unknown. Eventually, Fresnel's idea of an (almost) stationary aether was preferred because it appeared to be confirmed by the Fizeau experiment (1851) and the aberration of star light. According to the stationary and the partially dragged aether hypotheses, Earth and the aether are in relative motion, implying that a so-called "aether wind" (Fig. 2) should exist. 
Although it would be theoretically possible for the Earth's motion to match that of the aether at one moment in time, it was not possible for the Earth to remain at rest with respect to the aether at all times, because of the variation in both the direction and the speed of the motion. At any given point on the Earth's surface, the magnitude and direction of the wind would vary with time of day and season. By analyzing the return speed of light in different directions at various different times, it was thought to be possible to measure the motion of the Earth relative to the aether. The expected relative difference in the measured speed of light was quite small, given that the velocity of the Earth in its orbit around the Sun has a magnitude of about one hundredth of one percent of the speed of light. During the mid-19th century, measurements of aether wind effects of first order, i.e., effects proportional to v/c (v being Earth's velocity, c the speed of light) were thought to be possible, but no direct measurement of the speed of light was possible with the accuracy required. For instance, the Fizeau wheel could measure the speed of light to perhaps 5% accuracy, which was quite inadequate for measuring directly a first-order 0.01% change in the speed of light. A number of physicists therefore attempted to make measurements of indirect first-order effects not of the speed of light itself, but of variations in the speed of light (see First order aether-drift experiments). The Hoek experiment, for example, was intended to detect interferometric fringe shifts due to speed differences of oppositely propagating light waves through water at rest. The results of such experiments were all negative. This could be explained by using Fresnel's dragging coefficient, according to which the aether and thus light are partially dragged by moving matter. Partial aether-dragging would thwart attempts to measure any first order change in the speed of light. As pointed out by Maxwell (1878), only experimental arrangements capable of measuring second order effects would have any hope of detecting aether drift, i.e., effects proportional to v2/c2. Existing experimental setups, however, were not sensitive enough to measure effects of that size. 1881 and 1887 experiments Michelson experiment (1881) Michelson had a solution to the problem of how to construct a device sufficiently accurate to detect aether flow. In 1877, while teaching at his alma mater, the United States Naval Academy in Annapolis, Michelson conducted his first known light speed experiments as a part of a classroom demonstration. In 1881, he left active U.S. Naval service while in Germany concluding his studies. In that year, Michelson used a prototype experimental device to make several more measurements. The device he designed, later known as a Michelson interferometer, sent yellow light from a sodium flame (for alignment), or white light (for the actual observations), through a half-silvered mirror that was used to split it into two beams traveling at right angles to one another. After leaving the splitter, the beams traveled out to the ends of long arms where they were reflected back into the middle by small mirrors. They then recombined on the far side of the splitter in an eyepiece, producing a pattern of constructive and destructive interference whose transverse displacement would depend on the relative time it takes light to transit the longitudinal vs. the transverse arms. 
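To put the first-order and second-order magnitudes side by side, the short sketch below uses the article's figure of an orbital speed of about one hundredth of one percent of the speed of light (v/c ≈ 10⁻⁴); the rounded values of v and c are assumptions for illustration.

```python
# Orders of magnitude for aether-drift effects, using v/c ~ 1e-4 as stated
# in the article (v and c are rounded illustrative values).
v = 3.0e4        # Earth's orbital speed, m/s (approximate)
c = 3.0e8        # speed of light, m/s (approximate)

first_order = v / c          # ~1e-4: target of early drift experiments
second_order = (v / c) ** 2  # ~1e-8: what Michelson-Morley had to resolve

print(f"first order  v/c     ~ {first_order:.0e}")
print(f"second order v^2/c^2 ~ {second_order:.0e}")
```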
If the Earth is traveling through an aether medium, a light beam traveling parallel to the flow of that aether will take longer to reflect back and forth than would a beam traveling perpendicular to the aether, because the increase in elapsed time from traveling against the aether wind is more than the time saved by traveling with the aether wind. Michelson expected that the Earth's motion would produce a fringe shift equal to 0.04 fringes—that is, of the separation between areas of the same intensity. He did not observe the expected shift; the greatest average deviation that he measured (in the northwest direction) was only 0.018 fringes; most of his measurements were much less. His conclusion was that Fresnel's hypothesis of a stationary aether with partial aether dragging would have to be rejected, and thus he confirmed Stokes' hypothesis of complete aether dragging. However, Alfred Potier (and later Hendrik Lorentz) pointed out to Michelson that he had made an error of calculation, and that the expected fringe shift should have been only 0.02 fringes. Michelson's apparatus was subject to experimental errors far too large to say anything conclusive about the aether wind. Definitive measurement of the aether wind would require an experiment with greater accuracy and better controls than the original. Nevertheless, the prototype was successful in demonstrating that the basic method was feasible. Michelson–Morley experiment (1887) In 1885, Michelson began a collaboration with Edward Morley, spending considerable time and money to confirm with higher accuracy Fizeau's 1851 experiment on Fresnel's drag coefficient, to improve on Michelson's 1881 experiment, and to establish the wavelength of light as a standard of length. At this time Michelson was professor of physics at the Case School of Applied Science, and Morley was professor of chemistry at Western Reserve University (WRU), which shared a campus with the Case School on the eastern edge of Cleveland. Michelson suffered a mental health crisis in September 1885, from which he recovered by October 1885. Morley ascribed this breakdown to the intense work of Michelson during the preparation of the experiments. In 1886, Michelson and Morley successfully confirmed Fresnel's drag coefficient – this result was also considered as a confirmation of the stationary aether concept. This result strengthened their hope of finding the aether wind. Michelson and Morley created an improved version of the Michelson experiment with more than enough accuracy to detect this hypothetical effect. The experiment was performed in several periods of concentrated observations between April and July 1887, in the basement of Adelbert Dormitory of WRU (later renamed Pierce Hall, demolished in 1962). As shown in the diagram to the right, the light was repeatedly reflected back and forth along the arms of the interferometer, increasing the path length to . At this length, the drift would be about 0.4 fringes. To make that easily detectable, the apparatus was assembled in a closed room in the basement of the heavy stone dormitory, eliminating most thermal and vibrational effects. Vibrations were further reduced by building the apparatus on top of a large block of sandstone (Fig. 1), about a foot thick and square, which was then floated in a circular trough of mercury. They estimated that effects of about 0.01 fringe would be detectable. 
Michelson and Morley and other early experimentalists using interferometric techniques in an attempt to measure the properties of the luminiferous aether, used (partially) monochromatic light only for initially setting up their equipment, always switching to white light for the actual measurements. The reason is that measurements were recorded visually. Purely monochromatic light would result in a uniform fringe pattern. Lacking modern means of environmental temperature control, experimentalists struggled with continual fringe drift even when the interferometer was set up in a basement. Because the fringes would occasionally disappear due to vibrations caused by passing horse traffic, distant thunderstorms and the like, an observer could easily "get lost" when the fringes returned to visibility. The advantages of white light, which produced a distinctive colored fringe pattern, far outweighed the difficulties of aligning the apparatus due to its low coherence length. As Dayton Miller wrote, "White light fringes were chosen for the observations because they consist of a small group of fringes having a central, sharply defined black fringe which forms a permanent zero reference mark for all readings." Use of partially monochromatic light (yellow sodium light) during initial alignment enabled the researchers to locate the position of equal path length, more or less easily, before switching to white light. The mercury trough allowed the device to turn with close to zero friction, so that once having given the sandstone block a single push it would slowly rotate through the entire range of possible angles to the "aether wind", while measurements were continuously observed by looking through the eyepiece. The hypothesis of aether drift implies that because one of the arms would inevitably turn into the direction of the wind at the same time that another arm was turning perpendicularly to the wind, an effect should be noticeable even over a period of minutes. The expectation was that the effect would be graphable as a sine wave with two peaks and two troughs per rotation of the device. This result could have been expected because during each full rotation, each arm would be parallel to the wind twice (facing into and away from the wind giving identical readings) and perpendicular to the wind twice. Additionally, due to the Earth's rotation, the wind would be expected to show periodic changes in direction and magnitude during the course of a sidereal day. Because of the motion of the Earth around the Sun, the measured data were also expected to show annual variations. Most famous "failed" experiment After all this thought and preparation, the experiment became what has been called the most famous failed experiment in history. Instead of providing insight into the properties of the aether, Michelson and Morley's article in the American Journal of Science reported the measurement to be as small as one-fortieth of the expected displacement (Fig. 7), but "since the displacement is proportional to the square of the velocity" they concluded that the measured velocity was "probably less than one-sixth" of the expected velocity of the Earth's motion in orbit and "certainly less than one-fourth". Although this small "velocity" was measured, it was considered far too small to be used as evidence of speed relative to the aether, and it was understood to be within the range of an experimental error that would allow the speed to actually be zero. 
For instance, Michelson wrote about the "decidedly negative result" in a letter to Lord Rayleigh in August 1887: From the standpoint of the then current aether models, the experimental results were conflicting. The Fizeau experiment and its 1886 repetition by Michelson and Morley apparently confirmed the stationary aether with partial aether dragging, and refuted complete aether dragging. On the other hand, the much more precise Michelson–Morley experiment (1887) apparently confirmed complete aether dragging and refuted the stationary aether. In addition, the Michelson–Morley null result was further substantiated by the null results of other second-order experiments of different kind, namely the Trouton–Noble experiment (1903) and the experiments of Rayleigh and Brace (1902–1904). These problems and their solution led to the development of the Lorentz transformation and special relativity. After the "failed" experiment Michelson and Morley ceased their aether drift measurements and started to use their newly developed technique to establish the wavelength of light as a standard of length. Light path analysis and consequences Observer resting in the aether The beam travel time in the longitudinal direction can be derived as follows: Light is sent from the source and propagates with the speed of light c in the aether. It passes through the half-silvered mirror at the origin at T = 0. The reflecting mirror is at that moment at distance L (the length of the interferometer arm) and is moving with velocity v. The beam hits the mirror at time T1 and thus travels the distance cT1. At this time, the mirror has traveled the distance vT1. Thus cT1 = L + vT1 and consequently the travel time T1 = L/(c − v). The same consideration applies to the backward journey, with the sign of v reversed, resulting in cT2 = L − vT2 and T2 = L/(c + v). The total travel time T∥ = T1 + T2 is: T∥ = L/(c − v) + L/(c + v) = (2L/c)·1/(1 − v²/c²). Michelson obtained this expression correctly in 1881; however, in the transverse direction he obtained the incorrect expression T⊥ = 2L/c, because he overlooked the increase in path length in the rest frame of the aether. This was corrected by Alfred Potier (1882) and Hendrik Lorentz (1886). The derivation in the transverse direction can be given as follows (analogous to the derivation of time dilation using a light clock): The beam is propagating at the speed of light c and hits the mirror at time T3, traveling the distance cT3. At the same time, the mirror has traveled the distance vT3 in the x direction. So in order to hit the mirror, the travel path of the beam is L in the y direction (assuming equal-length arms) and vT3 in the x direction. This inclined travel path follows from the transformation from the interferometer rest frame to the aether rest frame. Therefore, the Pythagorean theorem gives the actual beam travel distance of √(L² + (vT3)²). Thus cT3 = √(L² + (vT3)²) and consequently the travel time T3 = L/√(c² − v²), which is the same for the backward journey. The total travel time T⊥ = 2T3 is: T⊥ = 2L/√(c² − v²) = (2L/c)·1/√(1 − v²/c²). The time difference between T∥ and T⊥ is given by ΔT = T∥ − T⊥ = (2L/c)·[1/(1 − v²/c²) − 1/√(1 − v²/c²)]. To find the path difference, simply multiply by c: Δλ₁ = 2L·[1/(1 − v²/c²) − 1/√(1 − v²/c²)]. The path difference is denoted by Δλ₁ because the beams are out of phase by some number of wavelengths (λ). To visualise this, consider taking the two beam paths along the longitudinal and transverse plane, and lying them straight (an animation of this is shown at minute 11:00, The Mechanical Universe, episode 41). One path will be longer than the other; this distance is Δλ₁. Alternatively, consider the rearrangement of the speed of light formula c·ΔT = Δλ₁. 
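A quick numerical check of the two travel times derived above, using the arm length of roughly 11 m quoted later in the article and v/c ≈ 10⁻⁴, is sketched below; the exact values of v and c are rounded assumptions.

```python
import math

# Numerical check of the longitudinal and transverse round-trip times in the
# aether frame, for L ~ 11 m and v/c ~ 1e-4 (rounded illustrative values).
L = 11.0        # effective arm length, m
c = 3.0e8       # speed of light, m/s
v = 3.0e4       # Earth's orbital speed, m/s
beta2 = (v / c) ** 2

t_longitudinal = (2 * L / c) / (1 - beta2)
t_transverse = (2 * L / c) / math.sqrt(1 - beta2)
dt = t_longitudinal - t_transverse

print(f"time difference ~ {dt:.2e} s")              # ~3.7e-16 s
print(f"path difference ~ {c * dt * 1e9:.0f} nm")   # ~110 nm
```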
If the relation v²/c² ≪ 1 is true (if the velocity of the aether is small relative to the speed of light), then the expression can be simplified using a first order binomial expansion, 1/(1 − x) ≈ 1 + x and 1/√(1 − x) ≈ 1 + x/2 with x = v²/c². So, rewriting the above in terms of powers of v²/c² and applying the binomial simplification: Δλ₁ ≈ 2L·[(1 + v²/c²) − (1 + v²/2c²)]. Therefore: Δλ₁ ≈ Lv²/c². It can be seen from this derivation that aether wind manifests as a path difference. The path difference is greatest when the interferometer arms lie along and across the aether wind, and it vanishes when the arms are at a 45° angle to it. The path difference can be any fraction of the wavelength, depending on the angle and speed of the aether wind. To prove the existence of the aether, Michelson and Morley sought to find the "fringe shift". The idea was simple: the fringes of the interference pattern should shift when rotating it by 90° as the two beams have exchanged roles. To find the fringe shift, subtract the path difference in the first orientation from the path difference in the second, then divide by the wavelength, λ, of light: n = (Δλ₁ − Δλ₂)/λ ≈ 2Lv²/(λc²). Note the difference between Δλ₁, which is some number of wavelengths, and λ, which is a single wavelength. As can be seen by this relation, fringe shift n is a unitless quantity. Since L ≈ 11 meters and λ ≈ 500 nanometers, the expected fringe shift was n ≈ 0.44. The negative result led Michelson to the conclusion that there is no measurable aether drift. However, he never accepted this on a personal level, and the negative result haunted him for the rest of his life. Observer comoving with the interferometer If the same situation is described from the view of an observer co-moving with the interferometer, then the effect of aether wind is similar to the effect experienced by a swimmer, who tries to move with velocity c against a river flowing with velocity v. In the longitudinal direction the swimmer first moves upstream, so his velocity is diminished due to the river flow to c − v. On his way back moving downstream, his velocity is increased to c + v. This gives the beam travel times T1 and T2 as mentioned above. In the transverse direction, the swimmer has to compensate for the river flow by moving at a certain angle against the flow direction, in order to sustain his exact transverse direction of motion and to reach the other side of the river at the correct location. This diminishes his speed to √(c² − v²), and gives the beam travel time T3 as mentioned above. Mirror reflection The classical analysis predicted a relative phase shift between the longitudinal and transverse beams which in Michelson and Morley's apparatus should have been readily measurable. What is not often appreciated (since there was no means of measuring it), is that motion through the hypothetical aether should also have caused the two beams to diverge as they emerged from the interferometer by about 10⁻⁸ radians. For an apparatus in motion, the classical analysis requires that the beam-splitting mirror be slightly offset from an exact 45° if the longitudinal and transverse beams are to emerge from the apparatus exactly superimposed. In the relativistic analysis, Lorentz-contraction of the beam splitter in the direction of motion causes it to become more perpendicular by precisely the amount necessary to compensate for the angle discrepancy of the two beams. 
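The expected fringe shift quoted above follows directly from n ≈ 2Lv²/(λc²); the brief check below reproduces the ≈0.44 figure using the arm length and wavelength given in the article (v and c again rounded).

```python
# Expected fringe shift on rotating the 1887 interferometer by 90 degrees,
# n ~ 2 L v^2 / (lambda c^2), with values from the article (v, c rounded).
L = 11.0             # effective arm length, m
wavelength = 500e-9  # ~500 nm
v = 3.0e4            # m/s
c = 3.0e8            # m/s

n = 2 * L * (v / c) ** 2 / wavelength
print(f"expected fringe shift n ~ {n:.2f}")   # ~0.44
```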
Length contraction and Lorentz transformation A first step to explaining the Michelson and Morley experiment's null result was found in the FitzGerald–Lorentz contraction hypothesis, now simply called length contraction or Lorentz contraction, first proposed by George FitzGerald (1889) in a letter to the same journal that published the Michelson–Morley paper, as "almost the only hypothesis that can reconcile" the apparent contradictions. It was independently also proposed by Hendrik Lorentz (1892). According to this law all objects physically contract by the factor 1/γ = √(1 − v²/c²) along the line of motion (originally thought to be relative to the aether), γ being the Lorentz factor. This hypothesis was partly motivated by Oliver Heaviside's discovery in 1888 that electrostatic fields are contracting in the line of motion. But since there was no reason at that time to assume that binding forces in matter are of electric origin, length contraction of matter in motion with respect to the aether was considered an ad hoc hypothesis. If length contraction of the longitudinal arm, L → L√(1 − v²/c²), is inserted into the above formula for T∥, then the light propagation time in the longitudinal direction becomes equal to that in the transverse direction: T∥ = (2L/c)·1/√(1 − v²/c²) = T⊥. However, length contraction is only a special case of the more general relation, according to which the transverse length is larger than the longitudinal length by the ratio γ. This can be achieved in many ways. If L₁ is the moving longitudinal length and L₂ the moving transverse length, L₀ being the rest length of each arm, then it is given: L₁ = (φ/γ)·L₀ and L₂ = φ·L₀. The factor φ can be arbitrarily chosen, so there are infinitely many combinations to explain the Michelson–Morley null result. For instance, if φ = 1 the relativistic value of length contraction of L₁ occurs, but if φ = γ then no length contraction but an elongation of L₂ occurs. This hypothesis was later extended by Joseph Larmor (1897), Lorentz (1904) and Henri Poincaré (1905), who developed the complete Lorentz transformation including time dilation in order to explain the Trouton–Noble experiment, the Experiments of Rayleigh and Brace, and Kaufmann's experiments. It has the form x′ = γφ(x − vt), y′ = φy, z′ = φz, t′ = γφ(t − vx/c²). It remained to define the value of φ, which was shown by Lorentz (1904) to be unity. In general, Poincaré (1905) demonstrated that only φ = 1 allows this transformation to form a group, so it is the only choice compatible with the principle of relativity, i.e., making the stationary aether undetectable. Given this, length contraction and time dilation obtain their exact relativistic values. Special relativity Albert Einstein formulated the theory of special relativity by 1905, deriving the Lorentz transformation and thus length contraction and time dilation from the relativity postulate and the constancy of the speed of light, thus removing the ad hoc character from the contraction hypothesis. Einstein emphasized the kinematic foundation of the theory and the modification of the notion of space and time, with the stationary aether no longer playing any role in his theory. He also pointed out the group character of the transformation. Einstein was motivated by Maxwell's theory of electromagnetism (in the form as it was given by Lorentz in 1895) and the lack of evidence for the luminiferous aether. This allows a more elegant and intuitive explanation of the Michelson–Morley null result. In a comoving frame the null result is self-evident, since the apparatus can be considered as at rest in accordance with the relativity principle, thus the beam travel times are the same. 
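A one-line numerical check of the contraction hypothesis described above: shrinking the longitudinal arm by √(1 − v²/c²) makes the two classical travel times from the earlier derivation coincide. The values of L, v, and c are the same rounded assumptions as before.

```python
import math

# Check that FitzGerald-Lorentz contraction of the longitudinal arm removes
# the classical travel-time difference (same rounded L, v, c as above).
L, v, c = 11.0, 3.0e4, 3.0e8
beta2 = (v / c) ** 2
contraction = math.sqrt(1 - beta2)

t_longitudinal_contracted = (2 * L * contraction / c) / (1 - beta2)
t_transverse = (2 * L / c) / math.sqrt(1 - beta2)

# ~0 up to floating-point rounding: the two times are now equal
print(abs(t_longitudinal_contracted - t_transverse))
```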
In a frame relative to which the apparatus is moving, the same reasoning applies as described above in "Length contraction and Lorentz transformation", except the word "aether" has to be replaced by "non-comoving inertial frame". Einstein wrote in 1916: The extent to which the null result of the Michelson–Morley experiment influenced Einstein is disputed. Alluding to some statements of Einstein, many historians argue that it played no significant role in his path to special relativity, while other statements of Einstein probably suggest that he was influenced by it. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance. It was later shown by Howard Percy Robertson (1949) and others (see Robertson–Mansouri–Sexl test theory), that it is possible to derive the Lorentz transformation entirely from the combination of three experiments. First, the Michelson–Morley experiment showed that the speed of light is independent of the orientation of the apparatus, establishing the relationship between longitudinal (β) and transverse (δ) lengths. Then in 1932, Roy Kennedy and Edward Thorndike modified the Michelson–Morley experiment by making the path lengths of the split beam unequal, with one arm being very short. The Kennedy–Thorndike experiment took place for many months as the Earth moved around the sun. Their negative result showed that the speed of light is independent of the velocity of the apparatus in different inertial frames. In addition it established that besides length changes, corresponding time changes must also occur, i.e., it established the relationship between longitudinal lengths (β) and time changes (α). So both experiments do not provide the individual values of these quantities. This uncertainty corresponds to the undefined factor as described above. It was clear due to theoretical reasons (the group character of the Lorentz transformation as required by the relativity principle) that the individual values of length contraction and time dilation must assume their exact relativistic form. But a direct measurement of one of these quantities was still desirable to confirm the theoretical results. This was achieved by the Ives–Stilwell experiment (1938), measuring α in accordance with time dilation. Combining this value for α with the Kennedy–Thorndike null result shows that β must assume the value of relativistic length contraction. Combining β with the Michelson–Morley null result shows that δ must be zero. Therefore, the Lorentz transformation with is an unavoidable consequence of the combination of these three experiments. Special relativity is generally considered the solution to all negative aether drift (or isotropy of the speed of light) measurements, including the Michelson–Morley null result. Many high precision measurements have been conducted as tests of special relativity and modern searches for Lorentz violation in the photon, electron, nucleon, or neutrino sector, all of them confirming relativity. Incorrect alternatives As mentioned above, Michelson initially believed that his experiment would confirm Stokes' theory, according to which the aether was fully dragged in the vicinity of the earth (see Aether drag hypothesis). However, complete aether drag contradicts the observed aberration of light and was contradicted by other experiments as well. In addition, Lorentz showed in 1886 that Stokes's attempt to explain aberration is contradictory. 
Furthermore, the assumption that the aether is not carried in the vicinity, but only within matter, was very problematic as shown by the Hammar experiment (1935). Hammar directed one leg of his interferometer through a heavy metal pipe plugged with lead. If aether were dragged by mass, it was theorized that the mass of the sealed metal pipe would have been enough to cause a visible effect. Once again, no effect was seen, so aether-drag theories are considered to be disproven. Walther Ritz's emission theory (or ballistic theory) was also consistent with the results of the experiment, not requiring aether. The theory postulates that light has always the same velocity in respect to the source. However de Sitter noted that emitter theory predicted several optical effects that were not seen in observations of binary stars in which the light from the two stars could be measured in a spectrometer. If emission theory were correct, the light from the stars should experience unusual fringe shifting due to the velocity of the stars being added to the speed of the light, but no such effect could be seen. It was later shown by J. G. Fox that the original de Sitter experiments were flawed due to extinction, but in 1977 Brecher observed X-rays from binary star systems with similar null results. Furthermore, Filippas and Fox (1964) conducted terrestrial particle accelerator tests specifically designed to address Fox's earlier "extinction" objection, the results being inconsistent with source dependence of the speed of light. Subsequent experiments Although Michelson and Morley went on to different experiments after their first publication in 1887, both remained active in the field. Other versions of the experiment were carried out with increasing sophistication. Morley was not convinced of his own results, and went on to conduct additional experiments with Dayton Miller from 1902 to 1904. Again, the result was negative within the margins of error. Miller worked on increasingly larger interferometers, culminating in one with a (effective) arm length that he tried at various sites, including on top of a mountain at the Mount Wilson Observatory. To avoid the possibility of the aether wind being blocked by solid walls, his mountaintop observations used a special shed with thin walls, mainly of canvas. From noisy, irregular data, he consistently extracted a small positive signal that varied with each rotation of the device, with the sidereal day, and on a yearly basis. His measurements in the 1920s amounted to approximately instead of the nearly expected from the Earth's orbital motion alone. He remained convinced this was due to partial entrainment or aether dragging, though he did not attempt a detailed explanation. He ignored critiques demonstrating the inconsistency of his results and the refutation by the Hammar experiment. Miller's findings were considered important at the time, and were discussed by Michelson, Lorentz and others at a meeting reported in 1928. There was general agreement that more experimentation was needed to check Miller's results. Miller later built a non-magnetic device to eliminate magnetostriction, while Michelson built one of non-expanding Invar to eliminate any remaining thermal effects. Other experimenters from around the world increased accuracy, eliminated possible side effects, or both. So far, no one has been able to replicate Miller's results, and modern experimental accuracies have ruled them out. 
Roberts (2006) has pointed out that the primitive data reduction techniques used by Miller and other early experimenters, including Michelson and Morley, were capable of creating apparent periodic signals even when none existed in the actual data. After reanalyzing Miller's original data using modern techniques of quantitative error analysis, Roberts found Miller's apparent signals to be statistically insignificant. Using a special optical arrangement involving a 1/20 wave step in one mirror, Roy J. Kennedy (1926) and K.K. Illingworth (1927) (Fig. 8) converted the task of detecting fringe shifts from the relatively insensitive one of estimating their lateral displacements to the considerably more sensitive task of adjusting the light intensity on both sides of a sharp boundary for equal luminance. If they observed unequal illumination on either side of the step, such as in Fig. 8e, they would add or remove calibrated weights from the interferometer until both sides of the step were once again evenly illuminated, as in Fig. 8d. The number of weights added or removed provided a measure of the fringe shift. Different observers could detect changes as little as 1/1500 to 1/300 of a fringe. Kennedy also carried out an experiment at Mount Wilson, finding only about 1/10 the drift measured by Miller and no seasonal effects. In 1930, Georg Joos conducted an experiment using an automated interferometer with arms forged from pressed quartz having a very low coefficient of thermal expansion, that took continuous photographic strip recordings of the fringes through dozens of revolutions of the apparatus. Displacements of 1/1000 of a fringe could be measured on the photographic plates. No periodic fringe displacements were found, placing an upper limit to the aether wind of . In the table below, the expected values are related to the relative speed between Earth and Sun of . With respect to the speed of the solar system around the galactic center of about , or the speed of the solar system relative to the CMB rest frame of about , the null results of those experiments are even more obvious. Recent experiments Optical tests Optical tests of the isotropy of the speed of light became commonplace. New technologies, including the use of lasers and masers, have significantly improved measurement precision. (In the following table, only Essen (1955), Jaseja (1964), and Shamir/Fox (1969) are experiments of Michelson–Morley type, i.e., comparing two perpendicular beams. The other optical experiments employed different methods.) Recent optical resonator experiments During the early 21st century, there has been a resurgence in interest in performing precise Michelson–Morley type experiments using lasers, masers, cryogenic optical resonators, etc. This is in large part due to predictions of quantum gravity that suggest that special relativity may be violated at scales accessible to experimental study. The first of these highly accurate experiments was conducted by Brillet & Hall (1979), in which they analyzed a laser frequency stabilized to a resonance of a rotating optical Fabry–Pérot cavity. They set a limit on the anisotropy of the speed of light resulting from the Earth's motions of Δc/c ≈ 10−15, where Δc is the difference between the speed of light in the x- and y-directions. As of 2015, optical and microwave resonator experiments have improved this limit to Δc/c ≈ 10−18. In some of them, the devices were rotated or remained stationary, and some were combined with the Kennedy–Thorndike experiment. 
In particular, Earth's direction and velocity (ca. ) relative to the CMB rest frame are ordinarily used as references in these searches for anisotropies. Other tests of Lorentz invariance Examples of other experiments not based on the Michelson–Morley principle, i.e., non-optical isotropy tests achieving an even higher level of precision, are Clock comparison or Hughes–Drever experiments. In Drever's 1961 experiment, 7Li nuclei in the ground state, which has total angular momentum J = 3/2, were split into four equally spaced levels by a magnetic field. Each transition between a pair of adjacent levels should emit a photon of equal frequency, resulting in a single, sharp spectral line. However, since the nuclear wave functions for different MJ have different orientations in space relative to the magnetic field, any orientation dependence, whether from an aether wind or from a dependence on the large-scale distribution of mass in space (see Mach's principle), would perturb the energy spacings between the four levels, resulting in an anomalous broadening or splitting of the line. No such broadening was observed. Modern repeats of this kind of experiment have provided some of the most accurate confirmations of the principle of Lorentz invariance. See also Michelson–Morley Award Moving magnet and conductor problem The Light (Glass) LIGO References Notes Experiments Bibliography (Series "A" references) External links Physics experiments Aether theories Case Western Reserve University Tests of special relativity 1887 in science
Michelson–Morley experiment
[ "Physics" ]
7,241
[ "Experimental physics", "Physics experiments" ]
91,173
https://en.wikipedia.org/wiki/Axial%20tilt
In astronomy, axial tilt, also known as obliquity, is the angle between an object's rotational axis and its orbital axis, which is the line perpendicular to its orbital plane; equivalently, it is the angle between its equatorial plane and orbital plane. It differs from orbital inclination. At an obliquity of 0 degrees, the two axes point in the same direction; that is, the rotational axis is perpendicular to the orbital plane. The rotational axis of Earth, for example, is the imaginary line that passes through both the North Pole and South Pole, whereas the Earth's orbital axis is the line perpendicular to the imaginary plane through which the Earth moves as it revolves around the Sun; the Earth's obliquity or axial tilt is the angle between these two lines. Over the course of an orbital period, the obliquity usually does not change considerably, and the orientation of the axis remains the same relative to the background of stars. This causes one pole to be pointed more toward the Sun on one side of the orbit, and more away from the Sun on the other side—the cause of the seasons on Earth. Standards There are two standard methods of specifying a planet's tilt. One way is based on the planet's north pole, defined in relation to the direction of Earth's north pole, and the other way is based on the planet's positive pole, defined by the right-hand rule: The International Astronomical Union (IAU) defines the north pole of a planet as that which lies on Earth's north side of the invariable plane of the Solar System; under this system, Venus is tilted 3° and rotates retrograde, opposite that of most of the other planets. The IAU also uses the right-hand rule to define a positive pole for the purpose of determining orientation. Using this convention, Venus is tilted 177° ("upside down") and rotates prograde. Earth Earth's orbital plane is known as the ecliptic plane, and Earth's tilt is known to astronomers as the obliquity of the ecliptic, being the angle between the ecliptic and the celestial equator on the celestial sphere. It is denoted by the Greek letter Epsilon ε. Earth currently has an axial tilt of about 23.44°. This value remains about the same relative to a stationary orbital plane throughout the cycles of axial precession. But the ecliptic (i.e., Earth's orbit) moves due to planetary perturbations, and the obliquity of the ecliptic is not a fixed quantity. At present, it is decreasing at a rate of about 46.8″ per century (see details in Short term below). History The ancient Greeks had good measurements of the obliquity since about 350 BCE, when Pytheas of Marseilles measured the shadow of a gnomon at the summer solstice. About 830 CE, the Caliph Al-Mamun of Baghdad directed his astronomers to measure the obliquity, and the result was used in the Arab world for many years. In 1437, Ulugh Beg determined the Earth's axial tilt as 23°30′17″ (23.5047°). During the Middle Ages, it was widely believed that both precession and Earth's obliquity oscillated around a mean value, with a period of 672 years, an idea known as trepidation of the equinoxes. Perhaps the first to realize this was incorrect (during historic time) was Ibn al-Shatir in the fourteenth century and the first to realize that the obliquity is decreasing at a relatively constant rate was Fracastoro in 1538. 
The first accurate, modern, western observations of the obliquity were probably those of Tycho Brahe from Denmark, about 1584, although observations by several others, including al-Ma'mun, al-Tusi, Purbach, Regiomontanus, and Walther, could have provided similar information. Seasons Earth's axis remains tilted in the same direction with reference to the background stars throughout a year (regardless of where it is in its orbit) – this is known as axial parallelism. This means that one pole (and the associated hemisphere of Earth) will be directed away from the Sun at one side of the orbit, and half an orbit later (half a year later) this pole will be directed towards the Sun. This is the cause of Earth's seasons. Summer occurs in the Northern hemisphere when the north pole is directed toward and the south pole away from the Sun. Variations in Earth's axial tilt can influence the seasons and is likely a factor in long-term climatic change (also see Milankovitch cycles). Oscillation Short term The exact angular value of the obliquity is found by observation of the motions of Earth and planets over many years. Astronomers produce new fundamental ephemerides as the accuracy of observation improves and as the understanding of the dynamics increases, and from these ephemerides various astronomical values, including the obliquity, are derived. Annual almanacs are published listing the derived values and methods of use. Until 1983, the Astronomical Almanac's angular value of the mean obliquity for any date was calculated based on the work of Newcomb, who analyzed positions of the planets until about 1895: where is the obliquity and is tropical centuries from B1900.0 to the date in question. From 1984, the Jet Propulsion Laboratory's DE series of computer-generated ephemerides took over as the fundamental ephemeris of the Astronomical Almanac. Obliquity based on DE200, which analyzed observations from 1911 to 1979, was calculated: where hereafter is Julian centuries from J2000.0. JPL's fundamental ephemerides have been continually updated. For instance, according to IAU resolution in 2006 in favor of the P03 astronomical model, the Astronomical Almanac for 2010 specifies: These expressions for the obliquity are intended for high precision over a relatively short time span, perhaps several centuries. Jacques Laskar computed an expression to order good to 0.02″ over 1000 years and several arcseconds over 10,000 years. where here is multiples of 10,000 Julian years from J2000.0. These expressions are for the so-called mean obliquity, that is, the obliquity free from short-term variations. Periodic motions of the Moon and of Earth in its orbit cause much smaller (9.2 arcseconds) short-period (about 18.6 years) oscillations of the rotation axis of Earth, known as nutation, which add a periodic component to Earth's obliquity. The true or instantaneous obliquity includes this nutation. Long term Using numerical methods to simulate Solar System behavior over a period of several million years, long-term changes in Earth's orbit, and hence its obliquity, have been investigated. For the past 5 million years, Earth's obliquity has varied between and , with a mean period of 41,040 years. This cycle is a combination of precession and the largest term in the motion of the ecliptic. For the next 1 million years, the cycle will carry the obliquity between and . The Moon has a stabilizing effect on Earth's obliquity. 
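The polynomial ephemeris expressions referred to above are not reproduced here, but as a crude first-order stand-in the sketch below extrapolates the mean obliquity linearly from the two figures quoted earlier in the article (about 23.44° now, decreasing by roughly 46.8″ per century). This is an illustrative approximation valid only over a few centuries, not the Almanac formula.

```python
# Crude linear approximation to the mean obliquity near J2000.0, using only
# the two values quoted in the article: ~23.44 degrees now, decreasing at
# ~46.8 arcseconds per Julian century. Not the Almanac polynomial.
OBLIQUITY_J2000_DEG = 23.44
RATE_ARCSEC_PER_CENTURY = -46.8

def mean_obliquity_deg(julian_centuries_from_j2000: float) -> float:
    """First-order estimate of the mean obliquity, in degrees."""
    return OBLIQUITY_J2000_DEG + (RATE_ARCSEC_PER_CENTURY / 3600.0) * julian_centuries_from_j2000

print(f"{mean_obliquity_deg(0.0):.3f} deg now")
print(f"{mean_obliquity_deg(1.0):.3f} deg one century after J2000")
```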
Frequency map analysis conducted in 1993 suggested that, in the absence of the Moon, the obliquity could change rapidly due to orbital resonances and chaotic behavior of the Solar System, reaching as high as 90° in as little as a few million years (also see Orbit of the Moon). However, more recent numerical simulations made in 2011 indicated that even in the absence of the Moon, Earth's obliquity might not be quite so unstable, varying only by about 20–25°. To resolve this contradiction, the diffusion rate of the obliquity has been calculated, and it was found that it would take billions of years for Earth's obliquity to reach near 90°. The Moon's stabilizing effect will continue for less than two billion years. As the Moon continues to recede from Earth due to tidal acceleration, resonances may occur which will cause large oscillations of the obliquity. Solar System bodies All four of the innermost, rocky planets of the Solar System may have had large variations of their obliquity in the past. Since obliquity is the angle between the axis of rotation and the direction perpendicular to the orbital plane, it changes as the orbital plane changes due to the influence of other planets. But the axis of rotation can also move (axial precession), due to torque exerted by the Sun on a planet's equatorial bulge. Like Earth, all of the rocky planets show axial precession. If the precession rate were very fast, the obliquity would actually remain fairly constant even as the orbital plane changes. The rate varies due to tidal dissipation and core-mantle interaction, among other things. When a planet's precession rate approaches certain values, orbital resonances may cause large changes in obliquity. The amplitude of the contribution having one of the resonant rates is divided by the difference between the resonant rate and the precession rate, so it becomes large when the two are similar. Mercury and Venus have most likely been stabilized by the tidal dissipation of the Sun. Earth was stabilized by the Moon, as mentioned above, but before its formation, Earth, too, could have passed through times of instability. Mars's obliquity is quite variable over millions of years and may be in a chaotic state; it can vary from 0° to 60° over some millions of years, depending on perturbations of the planets. Some authors dispute that Mars's obliquity is chaotic, and show that tidal dissipation and viscous core-mantle coupling are adequate for it to have reached a fully damped state, similar to Mercury and Venus. The occasional shifts in the axial tilt of Mars have been suggested as an explanation for the appearance and disappearance of rivers and lakes over the course of the existence of Mars. A shift could cause a burst of methane into the atmosphere, causing warming, but then the methane would be destroyed and the climate would become arid again. The obliquities of the outer planets are considered relatively stable. Extrasolar planets The stellar obliquity, i.e. the axial tilt of a star with respect to the orbital plane of one of its planets, has been determined for only a few systems. By 2012, sky-projected spin–orbit misalignment had been observed for 49 stars; this misalignment serves as a lower limit to the stellar obliquity. Most of these measurements rely on the Rossiter–McLaughlin effect. Since the launch of space-based telescopes such as the Kepler space telescope, it has become possible to determine and estimate the obliquity of an extrasolar planet. 
The rotational flattening of the planet and the entourage of moons and/or rings, which are traceable with high-precision photometry, provide access to planetary obliquity. Many extrasolar planets have since had their obliquity determined, such as Kepler-186f and Kepler-413b. Astrophysicists have applied tidal theories to predict the obliquity of extrasolar planets. It has been shown that the obliquities of exoplanets in the habitable zone around low-mass stars tend to be eroded in less than 10⁹ years, which means that they would not have tilt-induced seasons as Earth has. See also Axial parallelism Milankovitch cycles Polar motion Pole shift Rotation around a fixed axis True polar wander References External links National Space Science Data Center Obliquity of the Ecliptic Calculator Precession Planetary science
Axial tilt
[ "Physics", "Astronomy" ]
2,497
[ "Physical quantities", "Precession", "Planetary science", "Wikipedia categories named after physical quantities", "Astronomical sub-disciplines" ]
91,256
https://en.wikipedia.org/wiki/Computer%20and%20network%20surveillance
Computer and network surveillance is the monitoring of computer activity and data stored locally on a computer or data being transferred over computer networks such as the Internet. This monitoring is often carried out covertly and may be completed by governments, corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent government agencies. Computer and network surveillance programs are widespread today and almost all Internet traffic can be monitored. Surveillance allows governments and other agencies to maintain social control, recognize and monitor threats or any suspicious or abnormal activity, and prevent and investigate criminal activities. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens. Many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens will result in a mass surveillance society, with limited political and/or personal freedoms. Such fear has led to numerous lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance". Network surveillance The vast majority of computer surveillance involves the monitoring of personal data and traffic on the Internet. For example, in the United States, the Communications Assistance For Law Enforcement Act mandates that all phone calls and broadband internet traffic (emails, web traffic, instant messaging, etc.) be available for unimpeded, real-time monitoring by Federal law enforcement agencies. Packet capture (also known as "packet sniffing") is the monitoring of data traffic on a network. Data sent between computers over the Internet or between any networks takes the form of small chunks called packets, which are routed to their destination and assembled back into a complete message. A packet capture appliance intercepts these packets, so that they may be examined and analyzed. Computer technology is needed to perform traffic analysis and sift through intercepted data to look for important/useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install such packet capture technology so that Federal law enforcement and intelligence agencies are able to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic. These technologies can be used both by the intelligence and for illegal activities. There is far too much data gathered by these packet sniffers for human investigators to manually search through. Thus, automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic, filtering out, and reporting to investigators those bits of information which are "interesting", for example, the use of certain words or phrases, visiting certain types of web sites, or communicating via email or chat with a certain individual or group. 
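The automated filtering described above, sifting intercepted traffic for "interesting" items such as certain words or phrases, can be illustrated with a toy sketch like the one below. The packet records, keyword list, and field names are entirely hypothetical; this models the idea, not any real capture or surveillance system's interface.

```python
# Toy illustration of automated traffic filtering: flag captured items whose
# payload mentions any watchlisted keyword. All data and field names are
# hypothetical placeholders.
captured_packets = [
    {"src": "10.0.0.5", "dst": "203.0.113.7", "payload": "meeting at noon"},
    {"src": "10.0.0.9", "dst": "198.51.100.2", "payload": "quarterly report attached"},
]
watchlist = {"meeting", "transfer"}

def is_interesting(packet: dict) -> bool:
    """Return True if the payload contains any watchlisted keyword."""
    words = set(packet["payload"].lower().split())
    return bool(words & watchlist)

flagged = [p for p in captured_packets if is_interesting(p)]
for p in flagged:
    print(p["src"], "->", p["dst"], ":", p["payload"])
```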
Billions of dollars per year are spent by agencies such as the Information Awareness Office, NSA, and the FBI, for the development, purchase, implementation, and operation of systems which intercept and analyze this data, extracting only the information that is useful to law enforcement and intelligence agencies. Similar systems are now used by the Iranian security services to more easily distinguish between peaceful citizens and terrorists. All of the technology was allegedly installed by Germany's Siemens AG and Finland's Nokia. With its rapid development, the Internet has become a primary form of communication, and more people are potentially subject to Internet surveillance. There are advantages and disadvantages to network monitoring. For instance, systems described as "Web 2.0" have greatly impacted modern society. Tim O'Reilly, who first explained the concept of "Web 2.0", stated that Web 2.0 provides communication platforms that are "user generated", with self-produced content, motivating more people to communicate with friends online. However, Internet surveillance also has a disadvantage. One researcher from Uppsala University said "Web 2.0 surveillance is directed at large user groups who help to hegemonically produce and reproduce surveillance by providing user-generated (self-produced) content. We can characterize Web 2.0 surveillance as mass self-surveillance". Surveillance companies monitor people while they are focused on work or entertainment. Employers, too, monitor their employees. They do so in order to protect the company's assets and to control public communications, but most importantly, to make sure that their employees are actively working and being productive. Such monitoring can affect people emotionally, for example by provoking jealousy. A research group states "...we set out to test the prediction that feelings of jealousy lead to 'creeping' on a partner through Facebook, and that women are particularly likely to engage in partner monitoring in response to jealousy". The study shows that women can become jealous of other people when they are in an online group. Virtual assistants have become socially integrated into many people's lives. Currently, virtual assistants such as Amazon's Alexa or Apple's Siri cannot call 911 or local services. They are constantly listening for commands and recording parts of conversations that will help improve algorithms. If law enforcement could be called using a virtual assistant, officers would then be able to access all the information saved on the device. Because the device is connected to the home's Internet connection, law enforcement would also know the exact location of the individual making the call. While virtual assistant devices are popular, many debate their lack of privacy. The devices listen to every conversation the owner has; even when the owner is not addressing the assistant, the device keeps listening in the hope that the owner will need assistance, as well as to gather data. Corporate surveillance Corporate surveillance of computer activity is very common. The data collected is most often used for marketing purposes or sold to other corporations, but is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor their products and/or services to be desirable to their customers. 
The data can also be sold to other corporations so that they can use it for the aforementioned purpose, or it can be used for direct marketing purposes, such as targeted advertisements, where ads are targeted to the user of the search engine by analyzing their search history and emails (if they use free webmail services), which are kept in a database. Such surveillance also serves several business purposes, which may include the following:
Preventing misuse of resources. Companies can discourage unproductive personal activities such as online shopping or web surfing on company time. Monitoring employee performance is one way to reduce unnecessary network traffic and reduce the consumption of network bandwidth.
Promoting adherence to policies. Online surveillance is one means of verifying employee observance of company networking policies.
Preventing lawsuits. Firms can be held liable for discrimination or employee harassment in the workplace. Organizations can also be involved in infringement suits through employees who distribute copyrighted material over corporate networks.
Safeguarding records. Federal legislation requires organizations to protect personal information. Monitoring can determine the extent of compliance with company policies and programs overseeing information security. Monitoring may also deter unlawful appropriation of personal information, and potential spam or viruses.
Safeguarding company assets. The protection of intellectual property, trade secrets, and business strategies is a major concern. The ease of information transmission and storage makes it imperative to monitor employee actions as part of a broader policy.
The second component of prevention is determining the ownership of technology resources. The ownership of the firm's networks, servers, computers, files, and e-mail should be explicitly stated. There should be a distinction between an employee's personal electronic devices, which should be limited and proscribed, and those owned by the firm.
For instance, Google Search stores identifying information for each web search. An IP address and the search phrase used are stored in a database for up to 18 months. Google also scans the content of emails of users of its Gmail webmail service in order to create targeted advertising based on what people are talking about in their personal email correspondence. Google is, by far, the largest Internet advertising agency; millions of sites place Google's advertising banners and links on their websites in order to earn money from visitors who click on the ads. Each page containing Google advertisements adds, reads, and modifies "cookies" on each visitor's computer. These cookies track the user across all of these sites and gather information about their web surfing habits, keeping track of which sites they visit, and what they do when they are on these sites. This information, along with the information from their email accounts and search engine histories, is stored by Google and used to build a profile of the user in order to deliver better-targeted advertising. The United States government often gains access to these databases, either by producing a warrant for it, or by simply asking. The Department of Homeland Security has openly stated that it uses data collected from consumer credit and direct marketing agencies to augment the profiles of individuals that it is monitoring.
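The profile-building described above can be pictured with a small sketch that aggregates tracking-cookie hits into a per-user interest summary. The cookie log, site-to-topic mapping, and categories below are invented for illustration; real advertising networks combine far richer signals.

```python
# Minimal, hypothetical sketch of third-party-cookie profile building.
from collections import Counter, defaultdict

cookie_log = [
    ("cookie-abc", "running-shoes.example.com"),
    ("cookie-abc", "marathon-training.example.org"),
    ("cookie-abc", "cheap-flights.example.net"),
    ("cookie-xyz", "guitar-lessons.example.com"),
]

SITE_TOPICS = {
    "running-shoes.example.com": "fitness",
    "marathon-training.example.org": "fitness",
    "cheap-flights.example.net": "travel",
    "guitar-lessons.example.com": "music",
}

# Aggregate visits per cookie identifier into a topic histogram.
profiles = defaultdict(Counter)
for cookie_id, site in cookie_log:
    profiles[cookie_id][SITE_TOPICS.get(site, "other")] += 1

for cookie_id, interests in profiles.items():
    top_topic, hits = interests.most_common(1)[0]
    print(f"{cookie_id}: likely interested in {top_topic} ({hits} visits)")
```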
Malicious software In addition to monitoring information sent over a computer network, there is also a way to examine data stored on a computer's hard drive, and to monitor the activities of a person using the computer. A surveillance program installed on a computer can search the contents of the hard drive for suspicious data, can monitor computer use, collect passwords, and/or report back activities in real-time to its operator through the Internet connection. A keylogger is an example of this type of program. Normal keylogging programs store their data on the local hard drive, but some are programmed to automatically transmit data over the network to a remote computer or Web server. There are multiple ways of installing such software. The most common is remote installation, using a backdoor created by a computer virus or trojan. This tactic has the advantage of potentially subjecting multiple computers to surveillance. Viruses often spread to thousands or millions of computers, and leave "backdoors" which are accessible over a network connection, and enable an intruder to remotely install software and execute commands. These viruses and trojans are sometimes developed by government agencies, such as CIPAV and Magic Lantern. More often, however, viruses created by other people or spyware installed by marketing agencies can be used to gain access through the security breaches that they create. Another method is "cracking" into the computer to gain access over a network. An attacker can then install surveillance software remotely. Servers and computers with permanent broadband connections are most vulnerable to this type of attack. Another source of security cracking is employees giving out information or users using brute force tactics to guess their password. One can also physically place surveillance software on a computer by gaining entry to the place where the computer is stored and install it from a compact disc, floppy disk, or thumbdrive. This method shares a disadvantage with hardware devices in that it requires physical access to the computer. One well-known worm that uses this method of spreading itself is Stuxnet. Social network analysis One common form of surveillance is to create maps of social networks based on data from social networking sites as well as from traffic analysis information from phone call records such as those in the NSA call database, and internet traffic data gathered under CALEA. These social network "maps" are then data mined to extract useful information such as personal interests, friendships and affiliations, wants, beliefs, thoughts, and activities. Many U.S. government agencies such as the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA), and the Department of Homeland Security (DHS) are currently investing heavily in research involving social network analysis. The intelligence community believes that the biggest threat to the U.S. comes from decentralized, leaderless, geographically dispersed groups. These types of threats are most easily countered by finding important nodes in the network, and removing them. To do this requires a detailed map of the network. 
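The idea of finding the important nodes in such a map can be illustrated with a short sketch using the networkx package. The toy call graph below is invented; betweenness centrality is one standard measure of which nodes act as brokers whose removal most fragments a network.

```python
# Sketch of the "find the important nodes" idea using betweenness centrality.
# Requires the third-party networkx package; the call graph is hypothetical.
import networkx as nx

calls = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "dave"),
    ("carol", "dave"), ("dave", "erin"), ("erin", "frank"),
    ("erin", "grace"), ("frank", "grace"),
]

G = nx.Graph()
G.add_edges_from(calls)

# Betweenness centrality scores nodes that sit on many shortest paths,
# i.e. the brokers connecting otherwise separate parts of the network.
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{node:6s} {score:.3f}")
```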
Jason Ethier of Northeastern University, in his study of modern social network analysis, said the following of the Scalable Social Network Analysis Program developed by the Information Awareness Office:
Monitoring from a distance
With only commercially available equipment, it has been shown that it is possible to monitor computers from a distance by detecting the radiation emitted by the CRT monitor. This form of computer surveillance, known as TEMPEST, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters.
IBM researchers have also found that, for most computer keyboards, each key emits a slightly different noise when pressed. The differences are individually identifiable under some conditions, and so it is possible to log keystrokes without actually requiring logging software to run on the associated computer.
In 2015, lawmakers in California passed a law prohibiting any investigative personnel in the state from forcing businesses to hand over digital communications without a warrant, calling this the Electronic Communications Privacy Act. At the same time in California, state senator Jerry Hill introduced a bill requiring law enforcement agencies to disclose more information on their use of the Stingray phone tracker device and the information gathered with it. As the law took effect in January 2016, cities are now required to operate under new guidelines on how and when law enforcement uses this device. Some legislators and public officials have disagreed with this technology because of the warrantless tracking, but now, if a city wants to use this device, its use must be considered in a public hearing. Some jurisdictions, such as Santa Clara County, have pulled out of using the StingRay.
It has also been shown, by Adi Shamir et al., that even the high-frequency noise emitted by a CPU includes information about the instructions being executed.
Policeware and govware
In German-speaking countries, spyware used or made by the government is sometimes called govware. Some countries like Switzerland and Germany have a legal framework governing the use of such software. Known examples include the Swiss MiniPanzer and MegaPanzer and the German R2D2 (trojan). Policeware is software designed to police citizens by monitoring their discussions and interactions. Within the U.S., Carnivore was the first incarnation of secretly installed e-mail monitoring software installed in Internet service providers' networks to log computer communication, including transmitted e-mails. Magic Lantern is another such application, this time running on a targeted computer in a trojan style and performing keystroke logging. CIPAV, deployed by the FBI, is a multi-purpose spyware/trojan.
The Clipper Chip, formerly known as MYK-78, is a small hardware chip, designed in the nineties, that the government can install into phones. It was intended to secure private communication and data by encrypting voice messages in a way that authorized government agencies could decode. The Clipper Chip was designed during the Clinton administration to, "…protect personal safety and national security against a developing information anarchy that fosters criminals, terrorists and foreign foes." The government portrayed it as the solution to the secret codes or cryptographic keys that the age of technology created. This raised public controversy, because the Clipper Chip was thought to be the next "Big Brother" tool.
This led to the failure of the Clipper proposal, even though there have been many attempts to push the agenda. The "Consumer Broadband and Digital Television Promotion Act" (CBDTPA) was a bill proposed in the United States Congress. CBDTPA was known as the "Security Systems and Standards Certification Act" (SSSCA) while in draft form and was killed in committee in 2002. Had CBDTPA become law, it would have prohibited technology that could be used to read digital content under copyright (such as music, video, and e-books) without digital rights management (DRM) that prevented access to this material without the permission of the copyright holder. Surveillance as an aid to censorship Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some forms of surveillance. And even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can lead to self-censorship. In March 2013 Reporters Without Borders issued a Special report on Internet surveillance that examines the use of technology that monitors online activity and intercepts electronic communication in order to arrest journalists, citizen-journalists, and dissidents. The report includes a list of "State Enemies of the Internet", Bahrain, China, Iran, Syria, and Vietnam, countries whose governments are involved in active, intrusive surveillance of news providers, resulting in grave violations of freedom of information and human rights. Computer and network surveillance is on the increase in these countries. The report also includes a second list of "Corporate Enemies of the Internet", Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany), companies that sell products that are liable to be used by governments to violate human rights and freedom of information. Neither list is exhaustive and they are likely to be expanded in the future. Protection of sources is no longer just a matter of journalistic ethics. Journalists should equip themselves with a "digital survival kit" if they are exchanging sensitive information online, storing it on a computer hard-drive or mobile phone. Individuals associated with high-profile rights organizations, dissident groups, protest groups, or reform groups are urged to take extra precautions to protect their online identities. Countermeasures Countermeasures against surveillance vary based on the type of eavesdropping targeted. Electromagnetic eavesdropping, such as TEMPEST and its derivatives, often requires hardware shielding, such as Faraday cages, to block unintended emissions. To prevent interception of data in transit, encryption is a key defense. When properly implemented with end-to-end encryption, or while using tools such as Tor, and provided the device remains uncompromised and free from direct monitoring via electromagnetic analysis, audio recording, or similar methodologies, the content of communication is generally considered secure. For a number of years, numerous government initiatives have sought to weaken encryption or introduce backdoors for law enforcement access. Privacy advocates and the broader technology industry strongly oppose these measures, arguing that any backdoor would inevitably be discovered and exploited by malicious actors. 
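As a concrete picture of the encryption countermeasure discussed above, the sketch below encrypts and decrypts a message with the symmetric Fernet construction from the third-party "cryptography" package. It is only a minimal illustration: in practice the key would be established through a key-exchange protocol rather than generated and used in one script, and end-to-end systems layer such primitives with authentication and forward secrecy.

```python
# Minimal sketch of protecting message content in transit with authenticated
# symmetric encryption (Fernet, from the "cryptography" package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # shared secret between sender and recipient
cipher = Fernet(key)

message = b"meet at the usual place at 18:00"
token = cipher.encrypt(message)  # what an eavesdropper on the wire would see

print("ciphertext:", token[:40], b"...")
print("decrypted :", cipher.decrypt(token))
```

A mandated backdoor or escrowed decryption key would amount to a deliberately introduced weakness in schemes like this one.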
Such vulnerabilities would endanger everyone's private data while failing to hinder criminals, who could switch to alternative platforms or create their own encrypted systems. Surveillance remains effective even when encryption is correctly employed, by exploiting metadata that is often accessible to packet sniffers unless countermeasures are applied. This includes DNS queries, IP addresses, phone numbers, URLs, timestamps, and communication durations, which can reveal significant information about user activity and interactions or associations with a person of interest. See also Anonymizer, a software system that attempts to make network activity untraceable Computer surveillance in the workplace Cyber spying Datacasting, a means of broadcasting files and Web pages using radio waves, allowing receivers near total immunity from traditional network surveillance techniques. Differential privacy, a method to maximize the accuracy of queries from statistical databases while minimizing the chances of violating the privacy of individuals. ECHELON, a signals intelligence (SIGINT) collection and analysis network operated on behalf of Australia, Canada, New Zealand, the United Kingdom, and the United States, also known as AUSCANNZUKUS and Five Eyes GhostNet, a large-scale cyber spying operation discovered in March 2009 List of government surveillance projects Internet censorship and surveillance by country Mass surveillance China's Golden Shield Project Mass surveillance in Australia Mass surveillance in China Mass surveillance in East Germany Mass surveillance in India Mass surveillance in North Korea Mass surveillance in the United Kingdom Mass surveillance in the United States Surveillance Surveillance by the United States government: 2013 mass surveillance disclosures, reports about NSA and its international partners' mass surveillance of foreign nationals and U.S. citizens Bullrun (code name), a highly classified NSA program to preserve its ability to eavesdrop on encrypted communications by influencing and weakening encryption standards, by obtaining master encryption keys, and by gaining access to data before or after it is encrypted either by agreement, by force of law, or by computer network exploitation (hacking) Carnivore, a U.S. Federal Bureau of Investigation system to monitor email and electronic communications COINTELPRO, a series of covert, and at times illegal, projects conducted by the FBI aimed at U.S. domestic political organizations Communications Assistance For Law Enforcement Act Computer and Internet Protocol Address Verifier (CIPAV), a data gathering tool used by the U.S. Federal Bureau of Investigation (FBI) Dropmire, a secret surveillance program by the NSA aimed at surveillance of foreign embassies and diplomatic staff, including those of NATO allies Magic Lantern, keystroke logging software developed by the U.S. Federal Bureau of Investigation Mass surveillance in the United States NSA call database, a database containing metadata for hundreds of billions of telephone calls made in the U.S. 
NSA warrantless surveillance (2001–07) NSA whistleblowers: William Binney, Thomas Andrews Drake, Mark Klein, Edward Snowden, Thomas Tamm, Russ Tice Spying on United Nations leaders by United States diplomats Stellar Wind (code name), code name for information collected under the President's Surveillance Program Tailored Access Operations, NSA's hacking program Terrorist Surveillance Program, an NSA electronic surveillance program Total Information Awareness, a project of the Defense Advanced Research Projects Agency (DARPA) TEMPEST, codename for studies of unintentional intelligence-bearing signals which, if intercepted and analyzed, may disclose the information transmitted, received, handled, or otherwise processed by any information-processing equipment References External links "Selected Papers in Anonymity", Free Haven Project, accessed 16 September 2011. Yan, W. (2019) Introduction to Intelligent Surveillance: Surveillance Data Capture, Transmission, and Analytics, Springer. Computer forensics Surveillance Espionage techniques
Computer and network surveillance
[ "Engineering" ]
4,513
[ "Cybersecurity engineering", "Computer forensics" ]
92,193
https://en.wikipedia.org/wiki/Circular%20dichroism
Circular dichroism (CD) is dichroism involving circularly polarized light, i.e., the differential absorption of left- and right-handed light. Left-hand circular (LHC) and right-hand circular (RHC) polarized light represent two possible spin angular momentum states for a photon, and so circular dichroism is also referred to as dichroism for spin angular momentum. This phenomenon was discovered by Jean-Baptiste Biot, Augustin Fresnel, and Aimé Cotton in the first half of the 19th century. Circular dichroism and circular birefringence are manifestations of optical activity. It is exhibited in the absorption bands of optically active chiral molecules. CD spectroscopy has a wide range of applications in many different fields. Most notably, UV CD is used to investigate the secondary structure of proteins. UV/Vis CD is used to investigate charge-transfer transitions. Near-infrared CD is used to investigate geometric and electronic structure by probing metal d→d transitions. Vibrational circular dichroism, which uses light from the infrared energy region, is used for structural studies of small organic molecules, and most recently proteins and DNA.
Physical principles
Circular polarization of light
Electromagnetic radiation consists of an electric and magnetic field that oscillate perpendicular to one another and to the propagating direction, a transverse wave. While linearly polarized light occurs when the electric field vector oscillates only in one plane, circularly polarized light occurs when the direction of the electric field vector rotates about its propagation direction while the vector retains constant magnitude. At a single point in space, the circularly polarized vector will trace out a circle over one period of the wave frequency, hence the name. If the electric field vectors of circularly polarized light are drawn at one moment of time for a range of positions, the plot of the electric vector forms a helix along the direction of propagation. For left circularly polarized light (LCP) with propagation towards the observer, the electric vector rotates counterclockwise. For right circularly polarized light (RCP), the electric vector rotates clockwise.
Interaction of circularly polarized light with matter
When circularly polarized light passes through an absorbing optically active medium, the speeds of the right and left polarizations differ, as do their wavelengths and the extents to which they are absorbed. Circular dichroism is the difference in this absorption, Δε ≡ ε_L − ε_R. The electric field of a light beam causes a linear displacement of charge when interacting with a molecule (electric dipole), whereas its magnetic field causes a circulation of charge (magnetic dipole). These two motions combined cause an excitation of an electron in a helical motion, which includes translation and rotation and their associated operators. The rotational strength of a sample is determined experimentally from the measured Δε, and it can also be calculated theoretically from the electric and magnetic transition dipole moments. These two relations show that, in order to have a non-zero rotational strength, the electric and magnetic dipole moment operators must transform as the same irreducible representation. This is possible only in the chiral (pure-rotation) point groups, making only chiral molecules CD active. Simply put, since circularly polarized light itself is "chiral", it interacts differently with chiral molecules. That is, the two types of circularly polarized light are absorbed to different extents.
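The following small numpy sketch illustrates this differential absorption: equal left- and right-circular components (i.e., linearly polarized input) pass through a medium that absorbs them slightly unequally, and the transmitted beam traces out a narrow ellipse. The absorbance values are invented for illustration; for such small differences the resulting ellipticity in degrees is roughly 33 times ΔA, consistent with the relations developed in the following sections.

```python
# Illustrative sketch: unequal absorption of the LCP and RCP components turns
# linearly polarized input into slightly elliptical output.
import numpy as np

A_L, A_R = 0.5020, 0.5000          # absorbances of LCP and RCP (ΔA = 0.002), hypothetical

E_in = 1.0                          # equal input field amplitudes for both components
E_L = E_in * 10 ** (-A_L / 2)       # field amplitude scales as the square root of intensity
E_R = E_in * 10 ** (-A_R / 2)

# Reconstruct the transmitted field over one optical cycle and measure the
# major and minor axes of the ellipse it traces out.
t = np.linspace(0, 2 * np.pi, 2001)
Ex = E_L * np.cos(t) + E_R * np.cos(t)   # x components add
Ey = E_L * np.sin(t) - E_R * np.sin(t)   # LCP rotates one way, RCP the other
r = np.hypot(Ex, Ey)
ellipticity = np.degrees(np.arctan2(r.min(), r.max()))

print(f"ΔA = {A_L - A_R:.4f}")
print(f"ellipticity of transmitted light ≈ {ellipticity:.3f} degrees")
```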
In a CD experiment, equal amounts of left and right circularly polarized light of a selected wavelength are alternately radiated into a (chiral) sample. One of the two polarizations is absorbed more than the other one, and this wavelength-dependent difference of absorption is measured, yielding the CD spectrum of the sample. Due to the interaction with the molecule, the electric field vector of the light traces out an elliptical path after passing through the sample. It is important that the chirality of the molecule can be conformational rather than structural. That is, for instance, a protein molecule with a helical secondary structure can have a CD that changes with changes in the conformation.
Delta absorbance
By definition, ΔA = A_L − A_R, where ΔA (Delta Absorbance) is the difference between the absorbance of left circularly polarized (LCP) and right circularly polarized (RCP) light (this is what is usually measured). ΔA is a function of wavelength, so for a measurement to be meaningful the wavelength at which it was performed must be known.
Molar circular dichroism
It can also be expressed, by applying Beer's law, as ΔA = (ε_L − ε_R)·C·l, where ε_L and ε_R are the molar extinction coefficients for LCP and RCP light, C is the molar concentration, and l is the path length in centimeters (cm). Then Δε = ε_L − ε_R is the molar circular dichroism. This intrinsic property is what is usually meant by the circular dichroism of the substance. Since Δε is a function of wavelength, a molar circular dichroism value (Δε) must specify the wavelength at which it is valid.
Extrinsic effects on circular dichroism
In many practical applications of circular dichroism (CD), as discussed below, the measured CD is not simply an intrinsic property of the molecule, but rather depends on the molecular conformation. In such a case the CD may also be a function of temperature, concentration, and the chemical environment, including solvents. In this case the reported CD value must also specify these other relevant factors in order to be meaningful. In ordered structures lacking two-fold rotational symmetry, optical activity, including differential transmission (and reflection) of circularly polarized waves, also depends on the propagation direction through the material. In this case, so-called extrinsic 3d chirality is associated with the mutual orientation of light beam and structure.
Molar ellipticity
Although ΔA is usually measured, for historical reasons most measurements are reported in degrees of ellipticity. Molar ellipticity is circular dichroism corrected for concentration. Molar circular dichroism and molar ellipticity, [θ], are readily interconverted by the equation [θ] = 3298.2·Δε. This relationship is derived by defining the ellipticity of the polarization as tan θ = (E_R − E_L)/(E_R + E_L), where E_R and E_L are the magnitudes of the electric field vectors of the right-circularly and left-circularly polarized light, respectively. When E_R equals E_L (when there is no difference in the absorbance of right- and left-circularly polarized light), θ is 0° and the light is linearly polarized. When either E_R or E_L is equal to zero (when there is complete absorbance of the circularly polarized light in one direction), θ is 45° and the light is circularly polarized. Generally, the circular dichroism effect is small, so tan θ is small and can be approximated as θ in radians.
Since the intensity or irradiance, , of light is proportional to the square of the electric-field vector, the ellipticity becomes: Then by substituting for I using Beer's law in natural logarithm form: The ellipticity can now be written as: Since , this expression can be approximated by expanding the exponentials in a Taylor series to first-order and then discarding terms of in comparison with unity and converting from radians to degrees: The linear dependence of solute concentration and pathlength is removed by defining molar ellipticity as, Then combining the last two expression with Beer's law, molar ellipticity becomes: The units of molar ellipticity are historically (deg·cm2/dmol). To calculate molar ellipticity, the sample concentration (g/L), cell pathlength (cm), and the molecular weight (g/mol) must be known. If the sample is a protein, the mean residue weight (average molecular weight of the amino acid residues it contains) is often used in place of the molecular weight, essentially treating the protein as a solution of amino acids. Using mean residue ellipticity facilitates comparing the CD of proteins of different molecular weight; use of this normalized CD is important in studies of protein structure. Mean residue ellipticity Methods for estimating secondary structure in polymers, proteins and polypeptides in particular, often require that the measured molar ellipticity spectrum be converted to a normalized value, specifically a value independent of the polymer length. Mean residue ellipticity is used for this purpose; it is simply the measured molar ellipticity of the molecule divided by the number of monomer units (residues) in the molecule. Application to biological molecules In general, this phenomenon will be exhibited in absorption bands of any optically active molecule. As a consequence, circular dichroism is exhibited by biological molecules, because of their dextrorotary and levorotary components. Even more important is that a secondary structure will also impart a distinct CD to its respective molecules. Therefore, the alpha helix of proteins and the double helix of nucleic acids have CD spectral signatures representative of their structures. The capacity of CD to give a representative structural signature makes it a powerful tool in modern biochemistry with applications that can be found in virtually every field of study. CD is closely related to the optical rotatory dispersion (ORD) technique, and is generally considered to be more advanced. CD is measured in or near the absorption bands of the molecule of interest, while ORD can be measured far from these bands. CD's advantage is apparent in the data analysis. Structural elements are more clearly distinguished since their recorded bands do not overlap extensively at particular wavelengths as they do in ORD. In principle, these two spectral measurements can be interconverted through an integral transform (Kramers–Kronig relation), if all the absorptions are included in the measurements. The far-UV (ultraviolet) CD spectrum of proteins can reveal important characteristics of their secondary structure. CD spectra can be readily used to estimate the fraction of a molecule that is in the alpha-helix conformation, the beta-sheet conformation, the beta-turn conformation, or some other (e.g. random coil) conformation. These fractional assignments place important constraints on the possible secondary conformations that the protein can be in. 
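A short sketch can chain these definitions together: a measured ΔA is converted to molar circular dichroism Δε via Beer's law, then to molar ellipticity using the commonly quoted factor of about 3298.2 deg·cm²·dmol⁻¹ per unit of Δε, and finally to mean residue ellipticity by dividing by the residue count. The sample parameters below are invented for illustration.

```python
# Sketch of the CD unit conversions described above; values are hypothetical.
def delta_epsilon(delta_A, conc_molar, path_cm):
    """Molar circular dichroism Δε = ΔA / (C·l), in M^-1 cm^-1."""
    return delta_A / (conc_molar * path_cm)

def molar_ellipticity(d_eps):
    """Molar ellipticity [θ] ≈ 3298.2·Δε, in deg·cm²·dmol^-1."""
    return 3298.2 * d_eps

def mean_residue_ellipticity(theta_molar, n_residues):
    """Mean residue ellipticity: molar ellipticity per monomer unit."""
    return theta_molar / n_residues

delta_A  = -1.0e-3      # measured differential absorbance at 222 nm (illustrative)
conc     = 8.0e-6       # protein concentration, mol/L
path     = 0.1          # cuvette path length, cm
residues = 230          # number of amino acid residues in the protein

d_eps = delta_epsilon(delta_A, conc, path)
theta = molar_ellipticity(d_eps)
mre   = mean_residue_ellipticity(theta, residues)
print(f"Δε = {d_eps:.0f} M^-1 cm^-1, [θ] = {theta:.3g}, MRE = {mre:.0f} deg cm^2 dmol^-1")
```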
CD cannot, in general, say where the alpha helices that are detected are located within the molecule or even completely predict how many there are. Despite this, CD is a valuable tool, especially for showing changes in conformation. It can, for instance, be used to study how the secondary structure of a molecule changes as a function of temperature or of the concentration of denaturing agents, e.g. Guanidinium chloride or urea. In this way it can reveal important thermodynamic information about the molecule (such as the enthalpy and Gibbs free energy of denaturation) that cannot otherwise be easily obtained. Anyone attempting to study a protein will find CD a valuable tool for verifying that the protein is in its native conformation before undertaking extensive and/or expensive experiments with it. Also, there are a number of other uses for CD spectroscopy in protein chemistry not related to alpha-helix fraction estimation. Moreover, CD spectroscopy has been used in bioinorganic interface studies. Specifically it has been used to analyze the differences in secondary structure of an engineered protein before and after titration with a reagent. The near-UV CD spectrum (>250 nm) of proteins provides information on the tertiary structure. The signals obtained in the 250–300 nm region are due to the absorption, dipole orientation and the nature of the surrounding environment of the phenylalanine, tyrosine, cysteine (or S-S disulfide bridges) and tryptophan amino acids. Unlike in far-UV CD, the near-UV CD spectrum cannot be assigned to any particular 3D structure. Rather, near-UV CD spectra provide structural information on the nature of the prosthetic groups in proteins, e.g., the heme groups in hemoglobin and cytochrome c. Visible CD spectroscopy is a very powerful technique to study metal–protein interactions and can resolve individual d–d electronic transitions as separate bands. CD spectra in the visible light region are only produced when a metal ion is in a chiral environment, thus, free metal ions in solution are not detected. This has the advantage of only observing the protein-bound metal, so pH dependence and stoichiometries are readily obtained. Optical activity in transition metal ion complexes have been attributed to configurational, conformational and the vicinal effects. Klewpatinond and Viles (2007) have produced a set of empirical rules for predicting the appearance of visible CD spectra for Cu2+ and Ni2+ square-planar complexes involving histidine and main-chain coordination. CD gives less specific structural information than X-ray crystallography and protein NMR spectroscopy, for example, which both give atomic resolution data. However, CD spectroscopy is a quick method that does not require large amounts of proteins or extensive data processing. Thus CD can be used to survey a large number of solvent conditions, varying temperature, pH, salinity, and the presence of various cofactors. CD spectroscopy is usually used to study proteins in solution, and thus it complements methods that study the solid state. This is also a limitation, in that many proteins are embedded in membranes in their native state, and solutions containing membrane structures are often strongly scattering. CD is sometimes measured in thin films. CD spectroscopy has also been done using semiconducting materials such as TiO2 to obtain large signals in the UV range of wavelengths, where the electronic transitions for biomolecules often occur. 
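As an illustration of the thermal-denaturation analysis mentioned above, the sketch below fits a simple two-state unfolding model (neglecting ΔCp) to a synthetic CD melt in order to extract a melting temperature and a van't Hoff enthalpy. The "data" are generated within the script; real analyses usually also fit sloping folded and unfolded baselines.

```python
# Two-state fit to a synthetic CD thermal melt; a minimal sketch, not a full protocol.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J mol^-1 K^-1

def two_state(T, theta_folded, theta_unfolded, Tm, dH):
    """CD signal vs temperature for a two-state unfolding transition (ΔCp = 0)."""
    dG = dH * (1.0 - T / Tm)            # Gibbs-Helmholtz with ΔCp neglected
    K = np.exp(-dG / (R * T))           # unfolding equilibrium constant
    f_unfolded = K / (1.0 + K)
    return theta_folded + (theta_unfolded - theta_folded) * f_unfolded

# Synthetic melt: noisy ellipticity readings between 280 K and 360 K
T = np.linspace(280, 360, 41)
true = two_state(T, -20000, -2000, Tm=330.0, dH=300e3)
data = true + np.random.default_rng(0).normal(0, 300, T.size)

popt, _ = curve_fit(two_state, T, data, p0=(-18000, -3000, 325, 250e3))
print(f"fitted Tm ≈ {popt[2]:.1f} K, ΔH_vH ≈ {popt[3]/1e3:.0f} kJ/mol")
```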
Experimental limitations CD has also been studied in carbohydrates, but with limited success due to the experimental difficulties associated with measurement of CD spectra in the vacuum ultraviolet (VUV) region of the spectrum (100–200 nm), where the corresponding CD bands of unsubstituted carbohydrates lie. Substituted carbohydrates with bands above the VUV region have been successfully measured. Measurement of CD is also complicated by the fact that typical aqueous buffer systems often absorb in the range where structural features exhibit differential absorption of circularly polarized light. Phosphate, sulfate, carbonate, and acetate buffers are generally incompatible with CD unless made extremely dilute e.g. in the 10–50 mM range. The TRIS buffer system should be completely avoided when performing far-UV CD. Borate and Onium compounds are often used to establish the appropriate pH range for CD experiments. Some experimenters have substituted fluoride for chloride ion because fluoride absorbs less in the far UV, and some have worked in pure water. Another, almost universal, technique is to minimize solvent absorption by using shorter path length cells when working in the far UV, 0.1 mm path lengths are not uncommon in this work. In addition to measuring in aqueous systems, CD, particularly far-UV CD, can be measured in organic solvents e.g. ethanol, methanol, trifluoroethanol (TFE). The latter has the advantage to induce structure formation of proteins, inducing beta-sheets in some and alpha helices in others, which they would not show under normal aqueous conditions. Most common organic solvents such as acetonitrile, THF, chloroform, dichloromethane are however, incompatible with far-UV CD. It may be of interest to note that the protein CD spectra used in secondary structure estimation are related to the π to π* orbital absorptions of the amide bonds linking the amino acids. These absorption bands lie partly in the so-called vacuum ultraviolet (wavelengths less than about 200 nm). The wavelength region of interest is actually inaccessible in air because of the strong absorption of light by oxygen at these wavelengths. In practice these spectra are measured not in vacuum but in an oxygen-free instrument (filled with pure nitrogen gas). Once oxygen has been eliminated, perhaps the second most important technical factor in working below 200 nm is to design the rest of the optical system to have low losses in this region. Critical in this regard is the use of aluminized mirrors whose coatings have been optimized for low loss in this region of the spectrum. The usual light source in these instruments is a high pressure, short-arc xenon lamp. Ordinary xenon arc lamps are unsuitable for use in the low UV. Instead, specially constructed lamps with envelopes made from high-purity synthetic fused silica must be used. Light from synchrotron sources has a much higher flux at short wavelengths, and has been used to record CD down to 160 nm. In 2010 the CD spectrophotometer at the electron storage ring facility ISA at the University of Aarhus in Denmark was used to record solid state CD spectra down to 120 nm. At the quantum mechanical level, the feature density of circular dichroism and optical rotation are identical. Optical rotary dispersion and circular dichroism share the same quantum information content. 
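The path-length consideration above follows directly from the Beer-Lambert law, as the small sketch below shows: buffer absorbance scales linearly with path, so a 0.1 mm cell cuts the solvent background a hundredfold relative to a 10 mm cell, while the sample concentration can be raised to compensate. The extinction and concentration values are illustrative only.

```python
# Quick Beer-Lambert illustration of why short path lengths help in the far UV.
def absorbance(epsilon, conc_molar, path_cm):
    """Beer-Lambert law: A = ε·C·l."""
    return epsilon * conc_molar * path_cm

buffer_eps, buffer_conc = 30.0, 0.05       # hypothetical buffer component absorbing in the far UV
for path_cm in (1.0, 0.1, 0.01):           # 10 mm, 1 mm and 0.1 mm cells
    A_buffer = absorbance(buffer_eps, buffer_conc, path_cm)
    print(f"path {path_cm*10:4.1f} mm: buffer absorbance ≈ {A_buffer:.3f}")
```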
See also
Chirality-induced spin selectivity
Hyper Rayleigh scattering optical activity
Linear dichroism
Magnetic circular dichroism
Optical activity
Optical isomerism
Optical rotation
Optical rotatory dispersion
Protein Circular Dichroism Data Bank
Synchrotron radiation circular dichroism spectroscopy
Two-photon circular dichroism
Vibrational circular dichroism
References
External links
Circular Dichroism spectroscopy by Alliance Protein Laboratories, a commercial service provider
An Introduction to Circular Dichroism Spectroscopy by Applied Photophysics, an equipment supplier
An animated, step-by-step tutorial on Circular Dichroism and Optical Rotation by Prof Valev.
Polarization (waves)
Physical quantities
Protein structure
Circular dichroism
[ "Physics", "Chemistry", "Mathematics" ]
3,663
[ "Physical phenomena", "Physical quantities", "Quantity", "Astrophysics", "Structural biology", "Protein structure", "Polarization (waves)", "Physical properties" ]
92,206
https://en.wikipedia.org/wiki/Magnetic%20circular%20dichroism
Magnetic circular dichroism (MCD) is the differential absorption of left and right circularly polarized (LCP and RCP) light, induced in a sample by a strong magnetic field oriented parallel to the direction of light propagation. MCD measurements can detect transitions which are too weak to be seen in conventional optical absorption spectra, and it can be used to distinguish between overlapping transitions. Paramagnetic systems are common analytes, as their near-degenerate magnetic sublevels provide strong MCD intensity that varies with both field strength and sample temperature. The MCD signal also provides insight into the symmetry of the electronic levels of the studied systems, such as metal ion sites. History It was first shown by Faraday that optical activity (the Faraday effect) could be induced in matter by a longitudinal magnetic field (a field in the direction of light propagation). The development of MCD really began in the 1930s when a quantum mechanical theory of MOR (magnetic optical rotatory dispersion) in regions outside absorption bands was formulated. The expansion of the theory to include MCD and MOR effects in the region of absorptions, which were referred to as "anomalous dispersions" was developed soon thereafter. There was, however, little effort made to refine MCD as a modern spectroscopic technique until the early 1960s. Since that time there have been numerous studies of MCD spectra for a very large variety of samples, including stable molecules in solutions, in isotropic solids, and in the gas phase, as well as unstable molecules entrapped in noble gas matrices. More recently, MCD has found useful application in the study of biologically important systems including metalloenzymes and proteins containing metal centers. Differences between CD and MCD In natural optical activity, the difference between the LCP light and the RCP light is caused by the asymmetry of the molecules (i.e. chiral molecules). Because of the handedness of the molecule, the absorption of the LCP light would be different from the RCP light. However, in MCD in the presence of a magnetic field, LCP and RCP no longer interact equivalently with the absorbing medium. Thus, there is not the same direct relation between magnetic optical activity and molecular stereochemistry which would be expected, because it is found in natural optical activity. So, natural CD is much more rare than MCD which does not strictly require the target molecule to be chiral. Although there is much overlap in the requirements and use of instruments, ordinary CD instruments are usually optimized for operation in the ultraviolet, approximately 170–300 nm, while MCD instruments are typically required to operate in the visible to near infrared, approximately 300–2000 nm. The physical processes that lead to MCD are substantively different from those of CD. However, like CD, it is dependent on the differential absorption of left and right hand circularly polarized light. MCD will only exist at a given wavelength if the studied sample has an optical absorption at that wavelength. This is distinctly different from the related phenomenon of optical rotatory dispersion (ORD), which can be observed at wavelengths far from any absorption band. Measurement The MCD signal ΔA is derived via the absorption of the LCP and RCP light as This signal is often presented as a function of wavelength λ, temperature T or magnetic field H. MCD spectrometers can simultaneously measure absorbance and ΔA along the same light path. 
This eliminates error introduced through multiple measurements or different instruments that previously occurred before this advent. The MCD spectrometer example shown below begins with a light source that emits a monochromatic wave of light. This wave is passed through a Rochon prism linear polarizer, which separates the incident wave into two beams that are linearly polarized by 90 degrees. The two beams follow different paths- one beam (the extraordinary beam) traveling directly to a photomultiplier (PMT), and the other beam (the ordinary beam) passing through a photoelastic modulator (PEM) oriented at 45 degrees to the direction of the ordinary ray polarization. The PMT for the extraordinary beam detects the light intensity of the input beam. The PEM is adjusted to cause an alternating plus and minus 1/4 wavelength shift of one of the two orthogonal components of the ordinary beam. This modulation converts the linearly polarized light into circularly polarized light at the peaks of the modulation cycle. Linearly polarized light can be decomposed into two circular components with intensity represented as The PEM will delay one component of linearly polarized light with a time dependence that advances the other component by 1/4 λ (hence, quarter-wave shift). The departing circularly polarized light oscillates between RCP and LCP in a sinusoidal time-dependence as depicted below: The light finally travels through a magnet containing the sample, and the transmittance is recorded by another PMT. The schematic is given below: The intensity of light from the ordinary wave that reaches the PMT is governed by the equation: Here A– and A+ are the absorbances of LCP or RCP, respectively; ω is the modulator frequency – usually a high acoustic frequency such as 50 kHz; t is time; and δ0 is the time-dependent wavelength shift. This intensity of light passing through the sample is converted into a two-component voltage via a current/voltage amplifier. A DC voltage will emerge corresponding to the intensity of light passed through the sample. If there is a ΔA, then a small AC voltage will be present that corresponds to the modulation frequency, ω. This voltage is detected by the lock in amplifier, which receives its reference frequency, ω, directly from the PEM. From such voltage, ΔA and A can be derived using the following relations: where Vex is the (DC) voltage measured by the PMT from the extraordinary wave, and Vdc is the DC component of the voltage measured by the PMT for the ordinary wave (measurement path not shown in the diagram). Some superconducting magnets have a small sample chamber, far too small to contain the entire optical system. Instead, the magnet sample chamber has windows on two opposite sides. Light from the source enters one side, interacts with the sample (usually also temperature controlled) in the magnetic field, and exits through the opposite window to the detector. Optical relay systems that allow the source and detector each to be about a meter from the sample are typically employed. This arrangement avoids many of the difficulties that would be encountered if the optical apparatus had to operate in the high magnetic field, and also allows for a much less expensive magnet. Applications MCD can be used as an optical technique for the detection of electronic structure of both the ground states and excited states. It is also a strong addition to the more commonly used absorption spectroscopy, and there are two reasons that explain this. 
First, a transition buried under a stronger transition can appear in MCD if the first derivative of the absorption is much larger for the weaker transition or it is of the opposite sign. Second, MCD will be found where no absorption is detected at all if ΔA > (ΔAmin) but A < Amin, where (ΔA)min and Amin are the minimum of ΔA and A that are detectable. Typically, (ΔAmin) and Amin are of the magnitudes around 10−5 and 10−3 respectively. So, a transition can only be detected in MCD, not in the absorption spectroscopy, if ΔA/A > 10−2. This happens in paramagnetic systems that are at lower temperature or that have sharp lines in the spectroscopy. In biology, metalloproteins are the most likely candidates for MCD measurements, as the presence of metals with degenerate energy levels leads to strong MCD signals. In the case of ferric heme proteins, MCD is capable of determining both oxidation and spin state to a remarkably exquisite degree. In regular proteins, MCD is capable of stoichiometrically measuring the tryptophan content of proteins, assuming there are no other competing absorbers in the spectroscopic system. In addition, the application of MCD spectroscopy greatly improved the level of understanding in the ferrous non-heme systems because of the direct observation of the d–d transitions, which generally can not be obtained in optical absorption spectroscopy owing to the weak extinction coefficients and are often electron paramagnetic resonance silent due to relatively large ground-state sublevel splittings and fast relaxation times. Theory Consider a system of localized, non-interacting absorbing centers. Based on the semi-classical radiation absorption theory within the electric dipole approximation, the electric vector of the circularly polarized waves propagates along the +z direction. In this system, is the angular frequency, and = n – ik is the complex refractive index. As the light travels, the attenuation of the beam is expressed as where is the intensity of light at position , is the absorption coefficient of the medium in the direction, and is the speed of light. Circular dichroism (CD) is then defined by the difference between left () and right () circularly polarized light, , following the sign convention of natural optical activity. In the presence of a static, uniform external magnetic field applied parallel to the direction of propagation of light, the Hamiltonian for the absorbing center takes the form for describing the system in the external magnetic field and describing the applied electromagnetic radiation. The absorption coefficient for a transition between two eigenstates of , and , can be described using the electric dipole transition operator as The term is a frequency-independent correction factor allowing for the effect of the medium on the light wave electric field, composed of the permittivity and the real refractive index . Discrete line spectrum In cases of a discrete spectrum, the observed at a particular frequency can be treated as a sum of contributions from each transition, where is the contribution at from the transition, is the absorption coefficient for the transition, and is a bandshape function (). Because eigenstates and depend on the applied external field, the value of varies with field. 
It is frequently useful to compare this value to the absorption coefficient in the absence of an applied field, often denoted When the Zeeman effect is small compared to zero-field state separations, line width, and and when the line shape is independent of the applied external field , first-order perturbation theory can be applied to separate into three contributing Faraday terms, called , , and . The subscript indicates the moment such that contributes a derivative-shaped signal and and contribute regular absorptions. Additionally, a zero-field absorption term is defined. The relationships between , , and these Faraday terms are for external field strength , Boltzmann constant , temperature , and a proportionality constant . This expression requires assumptions that is sufficiently high in energy that , and that the temperature of the sample is high enough that magnetic saturation does not produce nonlinear term behavior. Though one must pay attention to proportionality constants, there is a proportionality between and molar extinction coefficient and absorbance for concentration and path length . These Faraday terms are the usual language in which MCD spectra are discussed. Their definitions from perturbation theory are where is the degeneracy of ground state , labels states other than or , and and label the levels within states and and (respectively), is the energy of unperturbed state , is the angular momentum operator, is the spin operator, and indicates the real part of the expression. Origins of A, B, and C Faraday Terms The equations in the previous subsection reveal that the , , and terms originate through three distinct mechanisms. The term arises from Zeeman splitting of the ground or excited degenerate states. These field-dependent changes in energies of the magnetic sublevels causes small shifts in the bands to higher/lower energy. The slight offsets result in incomplete cancellation of the positive and negative features, giving a net derivative shape in the spectrum. This intensity mechanism is generally independent of sample temperature. The term is due to the field-induced mixing of states. Energetic proximity of a third state to either the ground state or excited state gives appreciable Zeeman coupling in the presence of an applied external field. As the strength of the magnetic field increases, the amount of mixing increases to give growth of an absorption band shape. Like the term, the term is generally temperature independent. Temperature dependence of term intensity can sometimes be observed when is particularly low-lying in energy. The term requires the degeneracy of the ground state, often encountered for paramagnetic samples. This happens due to a change in the Boltzmann population of the magnetic sublevels, which is dependent on the degree of field-induced splitting of the sublevel energies and on the sample temperature. Decrease of the temperature and increase of the magnetic field increases the term intensity until it reaches the maximum (saturation limit). Experimentally, the term spectrum can be obtained from MCD raw data by subtraction of MCD spectra measured in the same applied magnetic field at different temperatures, while and terms can be distinguished via their different band shapes. The relative contributions of A, B and C terms to the MCD spectrum are proportional to the inverse line width, energy splitting, and temperature: where is line width and is the zero-field state separation. 
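As a quick numeric check of these proportionalities, the sketch below compares 1/Γ, 1/ΔW, and 1/kT using representative values of the kind quoted in the next sentence (line width of order 1000 cm⁻¹, zero-field splitting of order 10,000 cm⁻¹, and a temperature of 10 K). The numbers are illustrative, not measured.

```python
# Relative A-, B- and C-term contributions estimated from the stated
# proportionalities to 1/linewidth, 1/splitting and 1/kT.
k_B_cm = 0.695          # Boltzmann constant expressed in cm^-1 per kelvin
gamma  = 1000.0         # line width, cm^-1 (representative)
dW     = 10000.0        # zero-field state separation, cm^-1 (representative)
T      = 10.0           # sample temperature, K

weights = {
    "A term": 1.0 / gamma,
    "B term": 1.0 / dW,
    "C term": 1.0 / (k_B_cm * T),
}
ref = weights["A term"]
for name, w in weights.items():
    print(f"{name}: relative contribution ≈ {w / ref:.1f}")
```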
For typical values of = 1000 cm−1, = 10,000 cm−1 and = 6 cm−1 (at 10 K), the three terms make relative contributions 1:0.1:150. So, at low temperature the term dominates over and for paramagnetic samples. Example on C terms In the visible and near-ultraviolet regions, the hexacyanoferrate(III) ion (Fe(CN)63−) exhibits three strong absorptions at 24500, 32700, and 40500 cm−1, which have been ascribed to ligand to metal charge transfer (LMCT) transitions. They all have lower energy than the lowest-energy intense band for the Fe(II) complex Fe(CN)62− found at 46000 cm−1. The red shift with increasing oxidation state of the metal is characteristic of LMCT bands. Additionally, only A terms, which are temperature independent, should be involved in MCD structure for closed-shell species. These features can be explained as follows. The ground state of the anion is 2T2g, which derives from the electronic configuration (t2g)5. So, there would be an unpaired electron in the d orbital of Fe3+ From that, the three bands can be assigned to the transitions 2t2g→2t1u1, 2t2g →2t1u2, 2t2g →2t2u. Two of the excited states are of the same symmetry, and, based on the group theory, they could mix with each other so that there are no pure σ and π characters in the two t1u states, but for t2u, there would be no intermixing. The A terms are also possible from the degenerate excited states, but the studies of temperature dependence showed that the A terms are not as dependent as the C term. An MCD study of Fe(CN)63− embedded in a thin polyvinyl alcohol (PVA) film revealed a temperature dependence of the C term. The room-temperature C0/D0 values for the three bands in the Fe(CN)63− spectrum are 1.2, −0.6, and 0.6, respectively, and their signs (positive, negative, and positive) establish the energy ordering as 2t2g→2t1u2<2t2g→2t2u<2t2g→2t1u1 Example on A and B terms To have an A- and B-term in the MCD spectrum, a molecule must contain degenerate excited states (A-term) and excited states close enough in energy to allow mixing (B-term). One case exemplifying these conditions is a square planar, d8 complex such as [(n-C4H9)4N]2Pt(CN)4. In addition to containing A- and B-terms, this example demonstrates the effects of spin-orbit coupling in metal to ligand charge transfer (MLCT) transitions. As shown in figure 1, the molecular orbital diagram of [(n-C4H9)4N]2Pt(CN)4 reveals MLCT into the antibonding π* orbitals of cyanide. The ground state is diamagnetic (thereby eliminating any C-terms) and the LUMO is the a2u. The dipole-allowed MLCT transitions are a1g-a2u and eg-a2u. Another transition, b2u-a2u, is a weak (orbitally forbidden singlet) but can still be observed in MCD. Because A- and B-terms arise from the properties of states, all singlet and triplet excited states are given in figure 2. Mixing of all these singlet and triplet states will occur and is attributed to the spin orbit coupling of platinum 5d orbitals (ζ ~ 3500 cm−1), as shown in figure 3. The black lines on the figure indicate the mixing of 1A2u with 3Eu to give two A2u states. The red lines show the 1Eu, 3Eu, 3A2u, and 3B1u states mixing to give four Eu states. The blue lines indicate remnant orbitals after spin-orbit coupling that are not a result of mixing. See also Circular dichroism Faraday effect X-ray magnetic circular dichroism References Polarization (waves) Spectroscopy Magneto-optic effects
Magnetic circular dichroism
[ "Physics", "Chemistry", "Materials_science" ]
3,693
[ "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Electric and magnetic fields in matter", "Astrophysics", "Optical phenomena", "Polarization (waves)", "Magneto-optic effects", "Spectroscopy" ]
92,236
https://en.wikipedia.org/wiki/Electron%20transport%20chain
An electron transport chain (ETC) is a series of protein complexes and other molecules which transfer electrons from electron donors to electron acceptors via redox reactions (both reduction and oxidation occurring simultaneously) and couples this electron transfer with the transfer of protons (H+ ions) across a membrane. Many of the enzymes in the electron transport chain are embedded within the membrane. The flow of electrons through the electron transport chain is an exergonic process. The energy from the redox reactions creates an electrochemical proton gradient that drives the synthesis of adenosine triphosphate (ATP). In aerobic respiration, the flow of electrons terminates with molecular oxygen as the final electron acceptor. In anaerobic respiration, other electron acceptors are used, such as sulfate. In an electron transport chain, the redox reactions are driven by the difference in the Gibbs free energy of reactants and products. The free energy released when a higher-energy electron donor and acceptor convert to lower-energy products, while electrons are transferred from a lower to a higher redox potential, is used by the complexes in the electron transport chain to create an electrochemical gradient of ions. It is this electrochemical gradient that drives the synthesis of ATP via coupling with oxidative phosphorylation with ATP synthase. In eukaryotic organisms, the electron transport chain, and site of oxidative phosphorylation, is found on the inner mitochondrial membrane. The energy released by reactions of oxygen and reduced compounds such as cytochrome c and (indirectly) NADH and FADH is used by the electron transport chain to pump protons into the intermembrane space, generating the electrochemical gradient over the inner mitochondrial membrane. In photosynthetic eukaryotes, the electron transport chain is found on the thylakoid membrane. Here, light energy drives electron transport through a proton pump and the resulting proton gradient causes subsequent synthesis of ATP. In bacteria, the electron transport chain can vary between species but it always constitutes a set of redox reactions that are coupled to the synthesis of ATP through the generation of an electrochemical gradient and oxidative phosphorylation through ATP synthase. Mitochondrial electron transport chains Most eukaryotic cells have mitochondria, which produce ATP from reactions of oxygen with products of the citric acid cycle, fatty acid metabolism, and amino acid metabolism. At the inner mitochondrial membrane, electrons from NADH and FADH pass through the electron transport chain to oxygen, which provides the energy driving the process as it is reduced to water. The electron transport chain comprises an enzymatic series of electron donors and acceptors. Each electron donor will pass electrons to an acceptor of higher redox potential, which in turn donates these electrons to another acceptor, a process that continues down the series until electrons are passed to oxygen, the terminal electron acceptor in the chain. Each reaction releases energy because a higher-energy donor and acceptor convert to lower-energy products. Via the transferred electrons, this energy is used to generate a proton gradient across the mitochondrial membrane by "pumping" protons into the intermembrane space, producing a state of higher free energy that has the potential to do work. 
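The size of this energy release can be estimated with a short worked example using commonly quoted standard (pH 7) reduction potentials, roughly −0.32 V for NAD+/NADH and +0.82 V for ½O2/H2O; actual values in the cell depend on concentrations and conditions.

```python
# Free energy available from the overall redox span of the chain, ΔG°' = -nFΔE.
# The standard potentials below are commonly quoted textbook values.
F = 96485.0                       # Faraday constant, C/mol
n = 2                             # electrons transferred per NADH

E_acceptor = +0.82                # ½ O2 / H2O couple, volts (pH 7)
E_donor    = -0.32                # NAD+ / NADH couple, volts (pH 7)
delta_E    = E_acceptor - E_donor # overall potential difference driving the chain

delta_G = -n * F * delta_E        # joules per mole of NADH oxidized
print(f"ΔE°' = {delta_E:.2f} V, ΔG°' ≈ {delta_G/1000:.0f} kJ/mol NADH")
```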
This entire process is called oxidative phosphorylation since ADP is phosphorylated to ATP by using the electrochemical gradient that the redox reactions of the electron transport chain have established, driven by the energy-releasing reactions of oxygen. Mitochondrial redox carriers Energy associated with the transfer of electrons down the electron transport chain is used to pump protons from the mitochondrial matrix into the intermembrane space, creating an electrochemical proton gradient (ΔpH) across the inner mitochondrial membrane. This proton gradient is largely but not exclusively responsible for the mitochondrial membrane potential (ΔΨ). It allows ATP synthase to use the flow of H+ through the enzyme back into the matrix to generate ATP from adenosine diphosphate (ADP) and inorganic phosphate. Complex I (NADH coenzyme Q reductase; labeled I) accepts electrons from the Krebs cycle electron carrier nicotinamide adenine dinucleotide (NADH), and passes them to coenzyme Q (ubiquinone; labeled Q), which also receives electrons from Complex II (succinate dehydrogenase; labeled II). Q passes electrons to Complex III (cytochrome bc1 complex; labeled III), which passes them to cytochrome c (cyt c). Cyt c passes electrons to Complex IV (cytochrome c oxidase; labeled IV). Four membrane-bound complexes have been identified in mitochondria. Each is an extremely complex transmembrane structure that is embedded in the inner membrane. Three of them are proton pumps. The structures are electrically connected by lipid-soluble electron carriers and water-soluble electron carriers. The overall electron transport chain can be summarized as follows:
NADH, H+ → Complex I → Q → Complex III → cytochrome c → Complex IV → H2O
                       ↑
                  Complex II
                       ↑
                  Succinate
Complex I In Complex I (NADH ubiquinone oxidoreductase, Type I NADH dehydrogenase, or mitochondrial complex I), two electrons are removed from NADH and transferred to a lipid-soluble carrier, ubiquinone (Q). The reduced product, ubiquinol (QH2), freely diffuses within the membrane, and Complex I translocates four protons (H+) across the membrane, thus producing a proton gradient. Complex I is one of the main sites at which premature electron leakage to oxygen occurs, thus being one of the main sites of production of superoxide. The pathway of electrons is as follows: NADH is oxidized to NAD+ by reducing flavin mononucleotide to FMNH2 in one two-electron step. FMNH2 is then oxidized in two one-electron steps, through a semiquinone intermediate. Each electron thus transfers from the FMNH2 to an Fe–S cluster, and from the Fe–S cluster to ubiquinone (Q). Transfer of the first electron results in the free-radical (semiquinone) form of Q, and transfer of the second electron reduces the semiquinone form to the ubiquinol form, QH2. During this process, four protons are translocated from the mitochondrial matrix to the intermembrane space. As the electrons move through the complex, an electron current is produced along the 180 Angstrom width of the complex within the membrane. This current powers the active transport of four protons to the intermembrane space per two electrons from NADH. Complex II In Complex II (succinate dehydrogenase or succinate-CoQ reductase), additional electrons are delivered into the quinone pool (Q) originating from succinate and transferred (via flavin adenine dinucleotide (FAD)) to Q.
Complex II consists of four protein subunits: succinate dehydrogenase (SDHA); succinate dehydrogenase [ubiquinone] iron–sulfur subunit mitochondrial (SDHB); succinate dehydrogenase complex subunit C (SDHC); and succinate dehydrogenase complex subunit D (SDHD). Other electron donors (e.g., fatty acids and glycerol 3-phosphate) also direct electrons into Q (via FAD). Complex II is a parallel electron transport pathway to Complex I, but unlike Complex I, no protons are transported to the intermembrane space in this pathway. Therefore, the pathway through Complex II contributes less energy to the overall electron transport chain process. Complex III In Complex III (cytochrome bc1 complex or CoQH-cytochrome c reductase; ), the Q-cycle contributes to the proton gradient by an asymmetric absorption/release of protons. Two electrons are removed from QH at the QO site and sequentially transferred to two molecules of cytochrome c, a water-soluble electron carrier located within the intermembrane space. The two other electrons sequentially pass across the protein to the Qi site where the quinone part of ubiquinone is reduced to quinol. A proton gradient is formed by one quinol (2H+2e-) oxidations at the Qo site to form one quinone (2H+2e-) at the Qi site. (In total, four protons are translocated: two protons reduce quinone to quinol and two protons are released from two ubiquinol molecules.) QH2 + 2(Fe^{III}) + 2 H -> Q + 2(Fe^{II}) + 4 H When electron transfer is reduced (by a high membrane potential or respiratory inhibitors such as antimycin A), Complex III may leak electrons to molecular oxygen, resulting in superoxide formation. This complex is inhibited by dimercaprol (British Anti-Lewisite, BAL), naphthoquinone and antimycin. Complex IV In Complex IV (cytochrome c oxidase; ), sometimes called cytochrome AA3, four electrons are removed from four molecules of cytochrome c and transferred to molecular oxygen (O) and four protons, producing two molecules of water. The complex contains coordinated copper ions and several heme groups. At the same time, eight protons are removed from the mitochondrial matrix (although only four are translocated across the membrane), contributing to the proton gradient. The exact details of proton pumping in Complex IV are still under study. Cyanide is an inhibitor of Complex IV. Coupling with oxidative phosphorylation According to the chemiosmotic coupling hypothesis, proposed by Nobel Prize in Chemistry winner Peter D. Mitchell, the electron transport chain and oxidative phosphorylation are coupled by a proton gradient across the inner mitochondrial membrane. The efflux of protons from the mitochondrial matrix creates an electrochemical gradient (proton gradient). This gradient is used by the FF ATP synthase complex to make ATP via oxidative phosphorylation. ATP synthase is sometimes described as Complex V of the electron transport chain. The F component of ATP synthase acts as an ion channel that provides for a proton flux back into the mitochondrial matrix. It is composed of a, b and c subunits. Protons in the inter-membrane space of mitochondria first enter the ATP synthase complex through an a subunit channel. Then protons move to the c subunits. The number of c subunits determines how many protons are required to make the F turn one full revolution. For example, in humans, there are 8 c subunits, thus 8 protons are required. After c subunits, protons finally enter the matrix through an a subunit channel that opens into the mitochondrial matrix. 
This reflux releases free energy produced during the generation of the oxidized forms of the electron carriers (NAD and Q) with energy provided by O. The free energy is used to drive ATP synthesis, catalyzed by the F component of the complex.Coupling with oxidative phosphorylation is a key step for ATP production. However, in specific cases, uncoupling the two processes may be biologically useful. The uncoupling protein, thermogenin—present in the inner mitochondrial membrane of brown adipose tissue—provides for an alternative flow of protons back to the inner mitochondrial matrix. Thyroxine is also a natural uncoupler. This alternative flow results in thermogenesis rather than ATP production. Reverse electron flow Reverse electron flow is the transfer of electrons through the electron transport chain through the reverse redox reactions. Usually requiring a significant amount of energy to be used, this can reduce the oxidized forms of electron donors. For example, NAD+ can be reduced to NADH by Complex I. There are several factors that have been shown to induce reverse electron flow. However, more work needs to be done to confirm this. One example is blockage of ATP synthase, resulting in a build-up of protons and therefore a higher proton-motive force, inducing reverse electron flow. Prokaryotic electron transport chains In eukaryotes, NADH is the most important electron donor. The associated electron transport chain is NADH → Complex I → Q → Complex III → cytochrome c → Complex IV → O where Complexes I, III and IV are proton pumps, while Q and cytochrome c are mobile electron carriers. The electron acceptor for this process is molecular oxygen. In prokaryotes (bacteria and archaea) the situation is more complicated, because there are several different electron donors and several different electron acceptors. The generalized electron transport chain in bacteria is: Donor Donor Donor ↓ ↓ ↓ dehydrogenase → quinone → bc → cytochrome ↓ ↓ oxidase(reductase) oxidase(reductase) ↓ ↓ Acceptor Acceptor Electrons can enter the chain at three levels: at the level of a dehydrogenase, at the level of the quinone pool, or at the level of a mobile cytochrome electron carrier. These levels correspond to successively more positive redox potentials, or to successively decreased potential differences relative to the terminal electron acceptor. In other words, they correspond to successively smaller Gibbs free energy changes for the overall redox reaction. Individual bacteria use multiple electron transport chains, often simultaneously. Bacteria can use a number of different electron donors, a number of different dehydrogenases, a number of different oxidases and reductases, and a number of different electron acceptors. For example, E. coli (when growing aerobically using glucose and oxygen as an energy source) uses two different NADH dehydrogenases and two different quinol oxidases, for a total of four different electron transport chains operating simultaneously. A common feature of all electron transport chains is the presence of a proton pump to create an electrochemical gradient over a membrane. Bacterial electron transport chains may contain as many as three proton pumps, like mitochondria, or they may contain two or at least one. Electron donors In the current biosphere, the most common electron donors are organic molecules. Organisms that use organic molecules as an electron source are called organotrophs. 
Chemoorganotrophs (animals, fungi, protists) and photolithotrophs (plants and algae) constitute the vast majority of all familiar life forms. Some prokaryotes can use inorganic matter as an electron source. Such an organism is called a (chemo)lithotroph ("rock-eater"). Inorganic electron donors include hydrogen, carbon monoxide, ammonia, nitrite, sulfur, sulfide, manganese oxide, and ferrous iron. Lithotrophs have been found growing in rock formations thousands of meters below the surface of Earth. Because of their volume of distribution, lithotrophs may actually outnumber organotrophs and phototrophs in our biosphere. The use of inorganic electron donors such as hydrogen as an energy source is of particular interest in the study of evolution. This type of metabolism must logically have preceded the use of organic molecules and oxygen as an energy source. Dehydrogenases: equivalents to complexes I and II Bacteria can use several different electron donors. When organic matter is the electron source, the donor may be NADH or succinate, in which case electrons enter the electron transport chain via NADH dehydrogenase (similar to Complex I in mitochondria) or succinate dehydrogenase (similar to Complex II). Other dehydrogenases may be used to process different energy sources: formate dehydrogenase, lactate dehydrogenase, glyceraldehyde-3-phosphate dehydrogenase, H dehydrogenase (hydrogenase), electron transport chain. Some dehydrogenases are also proton pumps, while others funnel electrons into the quinone pool. Most dehydrogenases show induced expression in the bacterial cell in response to metabolic needs triggered by the environment in which the cells grow. In the case of lactate dehydrogenase in E. coli, the enzyme is used aerobically and in combination with other dehydrogenases. It is inducible and is expressed when the concentration of DL-lactate in the cell is high. Quinone carriers Quinones are mobile, lipid-soluble carriers that shuttle electrons (and protons) between large, relatively immobile macromolecular complexes embedded in the membrane. Bacteria use ubiquinone (Coenzyme Q, the same quinone that mitochondria use) and related quinones such as menaquinone (Vitamin K). Archaea in the genus Sulfolobus use caldariellaquinone. The use of different quinones is due to slight changes in redox potentials caused by changes in structure. The change in redox potentials of these quinones may be suited to changes in the electron acceptors or variations of redox potentials in bacterial complexes. Proton pumps A proton pump is any process that creates a proton gradient across a membrane. Protons can be physically moved across a membrane, as seen in mitochondrial Complexes I and IV. The same effect can be produced by moving electrons in the opposite direction. The result is the disappearance of a proton from the cytoplasm and the appearance of a proton in the periplasm. Mitochondrial Complex III is this second type of proton pump, which is mediated by a quinone (the Q cycle). Some dehydrogenases are proton pumps, while others are not. Most oxidases and reductases are proton pumps, but some are not. Cytochrome bc1 is a proton pump found in many, but not all, bacteria (not in E. coli). As the name implies, bacterial bc1 is similar to mitochondrial bc1 (Complex III). Cytochrome electron carriers Cytochromes are proteins that contain iron. They are found in two very different environments. 
Some cytochromes are water-soluble carriers that shuttle electrons to and from large, immobile macromolecular structures embedded in the membrane. The mobile cytochrome electron carrier in mitochondria is cytochrome c. Bacteria use a number of different mobile cytochrome electron carriers. Other cytochromes are found within macromolecules such as Complex III and Complex IV. They also function as electron carriers, but in a very different, intramolecular, solid-state environment. Electrons may enter an electron transport chain at the level of a mobile cytochrome or quinone carrier. For example, electrons from inorganic electron donors (nitrite, ferrous iron, etc.) enter the electron transport chain at the cytochrome level. When electrons enter at a redox level greater than NADH, the electron transport chain must operate in reverse to produce this necessary, higher-energy molecule. Electron acceptors and terminal oxidase/reductase As there are a number of different electron donors (organic matter in organotrophs, inorganic matter in lithotrophs), there are a number of different electron acceptors, both organic and inorganic. As with other steps of the ETC, an enzyme is required to help with the process. If oxygen is available, it is most often used as the terminal electron acceptor in aerobic bacteria and facultative anaerobes. An oxidase reduces the O2 to water while oxidizing something else. In mitochondria, the terminal membrane complex (Complex IV) is cytochrome oxidase, which oxidizes reduced cytochrome c. Aerobic bacteria use a number of different terminal oxidases. For example, E. coli (a facultative anaerobe) does not have a cytochrome oxidase or a bc1 complex. Under aerobic conditions, it uses two different terminal quinol oxidases (both proton pumps) to reduce oxygen to water. Bacterial terminal oxidases can be split into classes according to the molecules that act as terminal electron acceptors. Class I oxidases are cytochrome oxidases and use oxygen as the terminal electron acceptor. Class II oxidases are quinol oxidases and can use a variety of terminal electron acceptors. Both of these classes can be subdivided into categories based on what redox-active components they contain, e.g. heme aa3. Class I terminal oxidases are much more efficient than Class II terminal oxidases. Mostly in anaerobic environments, different electron acceptors are used, including nitrate, nitrite, ferric iron, sulfate, carbon dioxide, and small organic molecules such as fumarate. When bacteria grow in anaerobic environments, the terminal electron acceptor is reduced by an enzyme called a reductase. E. coli can use fumarate reductase, nitrate reductase, nitrite reductase, DMSO reductase, or trimethylamine-N-oxide reductase, depending on the availability of these acceptors in the environment. Most terminal oxidases and reductases are inducible. They are synthesized by the organism as needed, in response to specific environmental conditions. Photosynthetic In oxidative phosphorylation, electrons are transferred from an electron donor such as NADH to an acceptor such as O2 through an electron transport chain, releasing energy. In photophosphorylation, the energy of sunlight is used to create a high-energy electron donor which can subsequently reduce oxidized components and couple to ATP synthesis via proton translocation by the electron transport chain. Photosynthetic electron transport chains, like the mitochondrial chain, can be considered as a special case of the bacterial systems.
They use mobile, lipid-soluble quinone carriers (phylloquinone and plastoquinone) and mobile, water-soluble carriers (cytochromes). They also contain a proton pump. The proton pump in all photosynthetic chains resembles mitochondrial Complex III. The commonly-held theory of symbiogenesis proposes that both organelles descended from bacteria. See also Charge-transfer complex CoRR hypothesis Electron equivalent Hydrogen hypothesis Respirasome Electric bacteria References Further reading – Editorial commentary mentioning two unusual ETCs: that of Geobacter sulfurreducens and that of cable bacteria. Also has schematic of E. coli ETC. External links Khan Academy, video lecture KEGG pathway: Oxidative phosphorylation, overlaid with genes found in Pseudomonas fluorescens Pf0-1. Click "help" for a how-to. Cellular respiration Integral membrane proteins
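The bioenergetics discussed above can be illustrated with a short calculation. The following Python sketch is not part of the original article: it uses standard textbook values (redox potentials at pH 7, roughly three ATP per full rotation of F1) that are assumptions rather than figures quoted in the text, and simply shows how the free energy of NADH-to-oxygen electron transfer and the proton cost per ATP follow from the relations described above.

```python
# Back-of-the-envelope bioenergetics for the mitochondrial ETC.
# The redox potentials and the 3-ATP-per-rotation figure are standard
# textbook values assumed here, not taken from the article itself.

FARADAY = 96485.0          # C per mol of electrons

def delta_g_prime(n_electrons, e_donor, e_acceptor):
    """Standard Gibbs free energy change (J/mol) for electron transfer,
    using delta_G = -n * F * (E_acceptor - E_donor)."""
    return -n_electrons * FARADAY * (e_acceptor - e_donor)

# NADH -> O2 (two electrons): E°'(NAD+/NADH) ≈ -0.32 V, E°'(1/2 O2/H2O) ≈ +0.82 V
dg = delta_g_prime(2, -0.32, +0.82)
print(f"NADH -> O2: dG°' ≈ {dg / 1000:.0f} kJ/mol")   # ≈ -220 kJ/mol

# Protons per ATP, from the c-ring stoichiometry mentioned in the text:
# 8 c subunits -> 8 H+ per full rotation, an assumed ~3 ATP per rotation,
# plus ~1 H+ commonly attributed to phosphate/ADP-ATP transport.
c_subunits, atp_per_rotation, transport_h = 8, 3, 1
h_per_atp = c_subunits / atp_per_rotation + transport_h
print(f"H+ consumed per ATP ≈ {h_per_atp:.2f}")
```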
Electron transport chain
[ "Chemistry", "Biology" ]
4,781
[ "Biochemistry", "Cellular respiration", "Metabolism" ]
16,901,031
https://en.wikipedia.org/wiki/Dispersionless%20equation
Dispersionless (or quasi-classical) limits of integrable partial differential equations (PDE) arise in various problems of mathematics and physics and have been intensively studied in recent literature (see e.g. references below). They typically arise when considering slowly modulated long waves of an integrable dispersive PDE system. Examples Dispersionless KP equation The dispersionless Kadomtsev–Petviashvili equation (dKPE), also known (up to an inessential linear change of variables) as the Khokhlov–Zabolotskaya equation, has the form It arises from the commutation of the following pair of 1-parameter families of vector fields where is a spectral parameter. The dKPE is the -dispersionless limit of the celebrated Kadomtsev–Petviashvili equation, arising when considering long waves of that system. The dKPE, like many other (2+1)-dimensional integrable dispersionless systems, admits a (3+1)-dimensional generalization. The Benney moment equations The dispersionless KP system is closely related to the Benney moment hierarchy, each of which is a dispersionless integrable system: These arise as the consistency condition between and the simplest two evolutions in the hierarchy are: The dKP is recovered on setting and eliminating the other moments, as well as identifying and . If one sets , so that the countably many moments are expressed in terms of just two functions, the classical shallow water equations result: These may also be derived from considering slowly modulated wave train solutions of the nonlinear Schrödinger equation. Such 'reductions', expressing the moments in terms of finitely many dependent variables, are described by the Gibbons-Tsarev equation. Dispersionless Korteweg–de Vries equation The dispersionless Korteweg–de Vries equation (dKdVE) reads as It is the dispersionless or quasiclassical limit of the Korteweg–de Vries equation. It is satisfied by -independent solutions of the dKP system. It is also obtainable from the -flow of the Benney hierarchy on setting Dispersionless Novikov–Veselov equation The dispersionless Novikov-Veselov equation is most commonly written as the following equation for a real-valued function : where the following standard notation of complex analysis is used: , . The function here is an auxiliary function, defined uniquely from up to a holomorphic summand. Multidimensional integrable dispersionless systems See for systems with contact Lax pairs, and e.g., and references therein for other systems. See also Integrable systems Nonlinear Schrödinger equation Nonlinear systems Davey–Stewartson equation Dispersive partial differential equation Kadomtsev–Petviashvili equation Korteweg–de Vries equation References Citations Bibliography Kodama Y., Gibbons J. "Integrability of the dispersionless KP hierarchy", Nonlinear World 1, (1990). Zakharov V.E. "Dispersionless limit of integrable systems in 2+1 dimensions", Singular Limits of Dispersive Waves, NATO ASI series, Volume 320, 165-174, (1994). Dunajski M. "Solitons, instantons and twistors", Oxford University Press, 2010. External links Ishimori_system at the dispersive equations wiki Takebe T. "Lectures on Dispersionless Integrable Hierarchies", 2014 Partial differential equations Integrable systems
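Because the displayed formulas in the text above did not survive extraction, the following math block records the forms in which these equations are commonly quoted. Sign and scaling conventions vary between authors, so these are standard textbook expressions, not reconstructions of the article's exact notation.

```latex
% Commonly quoted forms (conventions vary); assumed, not taken from the article.
\[
  \text{dKP:}\qquad (u_t + u\,u_x)_x + u_{yy} = 0,
\]
\[
  \text{dKdV (Hopf / inviscid Burgers form, up to scaling):}\qquad u_t + u\,u_x = 0 .
\]
```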
Dispersionless equation
[ "Physics" ]
770
[ "Integrable systems", "Theoretical physics" ]
16,907,616
https://en.wikipedia.org/wiki/Electromagnetically%20induced%20grating
Electromagnetically induced grating (EIG) is an optical interference phenomenon where an interference pattern is used to build a dynamic spatial diffraction grating in matter. EIGs are dynamically created by light interference on optically resonant materials and rely on population inversion and/or optical coherence properties of the material. They were first demonstrated with population gratings on atoms. EIGs can be used for purposes of atomic/molecular velocimetry, to probe the material optical properties such as coherence and population life-times, and switching and routing of light. Related but different effects are thermally induced gratings and photolithography gratings. Writing, reading and phase-matching conditions for EIG diffraction Figure 1 shows a possible beam configuration to write and read an EIG. The period of the grating is controlled by the angle . The writing and reading frequencies are not necessarily the same. EB is referred as the "backward" reading beam and ER is the signal obtained by diffraction on the grating. The phase-matching conditions for the EIG for the plane-wave approximation is given by the simple geometric relation: , where the angles are given according to Fig. 2, and are the frequencies of the writing (W, W') and reading beam (R), respectively, and n is the effective index of refraction of the medium. Types of EIG Matter gratings The writing lasers form a grating by modulating density of matter or by localizing matter (trapping) on the regions of maxima (or minima) of the writing interference fields. A thermal grating is an example. Matter gratings have slow dynamics (milliseconds) compared to population and phase gratings (potentially nanoseconds and faster). Population gratings The writing lasers are resonant with optical transitions in the matter and the grating is formed by optical pumping (See Fig. 3). This type of grating can be easily tuned to produce multiple orders of diffraction. Coherence gratings A grating where the writing lasers form a coherent matter pattern. An example is a pattern of electromagnetically induced transparency. Applications Usually two lasers at an angle are used to build an EIG. The EIG is used to diffract a third laser, to monitor the behavior of the underlying substrate where the EIG was written or to serve as a switch for one of the lasers involved in the process. See also Atomic coherence Electromagnetically induced transparency Bragg's law Optical lattice Spectral hole burning Kerr effect Stimulated Raman spectroscopy References Wave mechanics Quantum mechanics Nonlinear optics
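The phase-matching relation referred to above did not survive extraction and is not reproduced here. The Python sketch below only illustrates the generic two-beam interference result that fixes the grating period from the writing-beam angle; the function name and example numbers are illustrative assumptions, not values from the article.

```python
import math

def fringe_period(wavelength_nm, full_angle_deg, n_medium=1.0):
    """Spacing of the interference pattern written by two beams of the same
    wavelength crossing at a full angle inside a medium of index n_medium:
    period = lambda / (2 n sin(theta/2)).
    This is the generic two-beam interference result, used only to show how
    the writing angle sets the grating period; it is not the article's
    (omitted) phase-matching relation."""
    lam = wavelength_nm / n_medium
    return lam / (2.0 * math.sin(math.radians(full_angle_deg) / 2.0))

# Example (hypothetical numbers): 780 nm writing beams crossing at 2 degrees
print(f"grating period ≈ {fringe_period(780, 2.0) / 1000:.1f} µm")   # ≈ 22 µm
```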
Electromagnetically induced grating
[ "Physics" ]
547
[ "Physical phenomena", "Theoretical physics", "Quantum mechanics", "Classical mechanics", "Waves", "Wave mechanics" ]
16,908,428
https://en.wikipedia.org/wiki/Post-transcriptional%20regulation
Post-transcriptional regulation is the control of gene expression at the RNA level. It occurs once the RNA polymerase has been attached to the gene's promoter and is synthesizing the nucleotide sequence. Therefore, as the name indicates, it occurs between the transcription phase and the translation phase of gene expression. These controls are critical for the regulation of many genes across human tissues. It also plays a big role in cell physiology, being implicated in pathologies such as cancer and neurodegenerative diseases. Mechanism After being produced, the stability and distribution of the different transcripts is regulated (post-transcriptional regulation) by means of RNA binding protein (RBP) that control the various steps and rates controlling events such as alternative splicing, nuclear degradation (exosome), processing, nuclear export (three alternative pathways), sequestration in P-bodies for storage or degradation and ultimately translation. These proteins achieve these events thanks to an RNA recognition motif (RRM) that binds a specific sequence or secondary structure of the transcripts, typically at the 5’ and 3’ UTR of the transcript. In short, the dsRNA sequences, which will be broken down into siRNA inside of the organism, will match up with the RNA to inhibit the gene expression in the cell. Modulating the capping, splicing, addition of a Poly(A) tail, the sequence-specific nuclear export rates and in several contexts sequestration of the RNA transcript occurs in eukaryotes but not in prokaryotes. This modulation is a result of a protein or transcript which in turn is regulated and may have an affinity for certain sequences. Capping changes the five prime end of the mRNA to a three prime end by 5'-5' linkage, which protects the mRNA from 5' exonuclease, which degrades foreign RNA. The cap also helps in ribosomal binding. In addition, it represents a unique mark for a correct gene. Therefore, it helps to select the mRNA that is going to be translated. RNA splicing removes the introns, noncoding regions that are transcribed into RNA, in order to make the mRNA able to create proteins. Cells do this by spliceosomes binding on either side of an intron, looping the intron into a circle and then cleaving it off. The two ends of the exons are then joined. Addition of poly(A) tail otherwise known as polyadenylation. That is, a stretch of RNA that is made solely of adenine bases is added to the 3' end, and acts as a buffer to the 3' exonuclease in order to increase the half-life of mRNA. In addition, a long poly(A) tail can increase translation. Poly(A)-binding protein (PABP) binds to a long poly(A) tail and mediates the interaction between EIF4E and EIF4G which encourages the initiation of translation. RNA editing is a process which results in sequence variation in the RNA molecule, and is catalyzed by enzymes. These enzymes include the adenosine deaminase acting on RNA (ADAR) enzymes, which convert specific adenosine residues to inosine in an mRNA molecule by hydrolytic deamination. Three ADAR enzymes have been cloned, ADAR1, ADAR2 and ADAR3, although only the first two subtypes have been shown to have RNA editing activity. Many mRNAs are vulnerable to the effects of RNA editing, including the glutamate receptor subunits GluR2, GluR3, GluR4, GluR5 and GluR6 (which are components of the AMPA and kainate receptors), the serotonin2C receptor, the GABA-alpha3 receptor subunit, the tryptophan hydroxylase enzyme TPH2, the hepatitis delta virus and more than 16% of microRNAs. 
In addition to ADAR enzymes, CDAR enzymes exist, and these convert cytosines in specific RNA molecules to uracil. These enzymes are termed 'APOBEC' and have genetic loci at 22q13, a region close to the chromosomal deletion which occurs in velocardiofacial syndrome (22q11) and which is linked to psychosis. RNA editing is extensively studied in relation to infectious diseases, because the editing process alters viral function. mRNA stability can be manipulated in order to control its half-life, and the poly(A) tail has some effect on this stability, as previously stated. Stable mRNA can have a half-life of up to a day or more, which allows for the production of more protein product; unstable mRNA is used in regulation that must occur quickly. mRNA stability is an important factor that is based on mRNA degradation rates. Nuclear export. Only one-twentieth of the total amount of RNA leaves the nucleus to proceed with translation. The rest of the RNA molecules, usually excised introns and damaged RNAs, are kept in the nucleus, where they are eventually degraded. mRNA only leaves the nucleus when it is ready for translation, which means that nuclear export is delayed until processing is complete. Some regulatory mechanisms target this nuclear export step in order to control gene expression; an example of regulated nuclear transport of mRNA can be observed in HIV. Transcription attenuation Transcription attenuation is a type of prokaryotic regulation that happens only under certain conditions. This process occurs at the beginning of RNA transcription and causes the RNA chain to terminate before the gene is expressed. Transcription attenuation is caused by the incorrect formation of a nascent RNA chain. This nascent RNA chain adopts an alternative secondary structure that does not interact appropriately with the RNA polymerase. In order for gene expression to proceed, regulatory proteins must bind to the RNA chain and remove the attenuation, which is costly for the cell. In prokaryotes there are two mechanisms of transcription attenuation: intrinsic termination and factor-dependent termination. - In the intrinsic termination mechanism, also known as Rho-independent termination, the RNA chain forms a stable hairpin structure at the 3' end of the gene that causes the RNA polymerase to stop transcribing. The stem-loop is followed by a run of U's (poly-U tail) which stalls the polymerase, so the RNA hairpin has enough time to form. The polymerase then dissociates, due to the weak binding between the poly-U tail of the transcript RNA and the poly-A tract of the DNA template, causing the mRNA to be prematurely released. This process inhibits transcription. To clarify, this mechanism is called Rho-independent because it does not require any additional protein factor, as factor-dependent termination does, making it a simpler mechanism for the cell to regulate gene transcription. Some examples of bacteria where this type of regulation predominates are Neisseria, Psychrobacter and Pasteurellaceae, as well as the majority of bacteria in the Firmicutes phylum. - In factor-dependent termination, a protein factor complex containing the Rho factor binds to a segment of the RNA chain transcript. The Rho complex then moves along the transcript in the 3' direction, looking for a paused RNA polymerase. If the polymerase is found, the process immediately stops, which results in the abortion of RNA transcription.
Even though this system is not as common as the one described above, some bacteria use this type of termination; one example is the tna operon in E. coli. This type of regulation is not efficient in eukaryotes, because transcription occurs in the nucleus while translation occurs in the cytoplasm, so the mechanism cannot be coupled in the way it is in prokaryotes, where both processes take place in the cytoplasm. MicroRNA mediated regulation MicroRNAs (miRNAs) appear to regulate the expression of more than 60% of protein-coding genes of the human genome. If a miRNA is abundant it can behave as a "switch", turning some genes on or off. However, altered expression of many miRNAs only leads to a modest 1.5- to 4-fold change in protein expression of their target genes. Individual miRNAs often repress several hundred target genes. Repression usually occurs either through translational silencing of the mRNA or through degradation of the mRNA, via complementary binding, mostly to specific sequences in the 3' untranslated region of the target gene's mRNA. The mechanism of translational silencing or degradation of mRNA is implemented through the RNA-induced silencing complex (RISC). Feedback in the regulation of RNA binding proteins RNA-binding proteins (RBPs) form dynamic assemblages with mRNAs known as messenger ribonucleoprotein complexes (mRNPs). These complexes are essential for the regulation of gene expression, ensuring that all the steps are performed correctly throughout the whole process. Therefore, they are important control factors for protein levels and cell phenotypes. Moreover, they affect mRNA stability by regulating its conformation in response to the environment, stress or extracellular signals. However, their ability to bind and control such a wide variety of RNA targets allows them to form complex regulatory networks (PTRNs). These networks make it challenging to study each RNA-binding protein individually. Due to new methodological advances, the identification of RBPs is steadily expanding, showing that they belong to broad families of proteins. RBPs can significantly impact multiple biological processes and have to be very accurately expressed. Overexpression can change the rate of mRNA target binding, causing binding to low-affinity RNA sites and deleterious effects on cellular fitness. Failure to synthesize them at the right level is also problematic, because it can lead to cell death. Therefore, RBPs are regulated via auto-regulation, so they are in control of their own actions. Furthermore, they use both negative feedback, to maintain homeostasis, and positive feedback, to create binary genetic changes in the cell. In metazoans and bacteria, many genes involved in post-transcriptional regulation are themselves regulated post-transcriptionally. For Drosophila RBPs associated with splicing or nonsense-mediated decay, analyses of protein-protein and protein-RNA interaction profiles have revealed ubiquitous interactions with RNA and protein products of the same gene. It remains unclear whether these observations are driven by ribosome-proximal or ribosome-mediated contacts, or if some protein complexes, particularly RNPs, undergo co-translational assembly. Significance This area of study has gained importance due to the increasing evidence that post-transcriptional regulation plays a larger role than previously expected.
Even though proteins with DNA-binding domains are more abundant than proteins with RNA-binding domains, a study by Cheadle et al. (2005) showed that during T-cell activation 55% of significant changes at the steady-state level had no corresponding changes at the transcriptional level, meaning they were a result of stability regulation alone. Furthermore, RNA found in the nucleus is more complex than that found in the cytoplasm: more than 95% (of bases) of the RNA synthesized by RNA polymerase II never reaches the cytoplasm. The main reason for this is the removal of introns, which account for 80% of the total bases. Some studies have shown that even after processing, mRNA levels differ greatly between the cytoplasm and the nucleus. Developmental biology is a good source of models of regulation, but due to technical difficulties it was easier to determine transcription factor cascades than regulation at the RNA level. In fact, several key genes such as nanos are known to bind RNA, but often their targets are unknown. Although RNA-binding proteins may post-transcriptionally regulate a large amount of the transcriptome, the targeting of a single gene is of interest to the scientific community for medical reasons; RNA interference and microRNAs are both examples of post-transcriptional regulation, which regulate the destruction of RNA and can change chromatin structure. To study post-transcriptional regulation several techniques are used, such as RIP-Chip (RNA immunoprecipitation on chip). microRNA role in cancer Deficiency of expression of a DNA repair gene occurs in many cancers (see DNA repair defect and cancer risk and microRNA and DNA repair). Altered microRNA (miRNA) expression that either decreases accurate DNA repair or increases inaccurate microhomology-mediated end joining (MMEJ) DNA repair is often observed in cancers. Deficiency of accurate DNA repair may be a major source of the high frequency of mutations in cancer (see mutation frequencies in cancers). Repression of DNA repair genes in cancers by changes in the levels of microRNAs may be a more frequent cause of repression than mutation or epigenetic methylation of DNA repair genes. For instance, BRCA1 is employed in the accurate homologous recombinational repair (HR) pathway. Deficiency of BRCA1 can cause breast cancer. Down-regulation of BRCA1 due to mutation occurs in about 3% of breast cancers. Down-regulation of BRCA1 due to methylation of its promoter occurs in about 14% of breast cancers. However, increased expression of miR-182 down-regulates BRCA1 mRNA and protein expression, and increased miR-182 is found in 80% of breast cancers. In another example, a mutated, constitutively (persistently) expressed version of the oncogene c-Myc is found in many cancers. Among many functions, c-Myc negatively regulates the microRNAs miR-150 and miR-22. These microRNAs normally repress expression of two genes essential for MMEJ, Lig3 and Parp1, thereby inhibiting this inaccurate, mutagenic DNA repair pathway. Muvarak et al. showed, in leukemias, that constitutive expression of c-Myc, leading to down-regulation of miR-150 and miR-22, allowed increased expression of Lig3 and Parp1. This generates genomic instability through increased inaccurate MMEJ DNA repair, and likely contributes to progression to leukemia. To show the frequent ability of microRNAs to alter DNA repair expression, Hatano et al.
performed a large screening study, in which 810 microRNAs were transfected into cells that were then subjected to ionizing radiation (IR). For 324 of these microRNAs, DNA repair was reduced (cells were killed more efficiently by IR) after transfection. For a further 75 microRNAs, DNA repair was increased, with less cell death after IR. This indicates that alterations in microRNAs may often down-regulate DNA repair, a likely important early step in progression to cancer. See also Cis-regulatory element Glossary of gene expression terms RNA interference References External links Wormbook.org on RNA-binding protein Gene expression Molecular biology RNA
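As a rough illustration of the mRNA stability and miRNA effects discussed above, the following Python sketch uses a simple first-order turnover model. All rate constants are hypothetical and chosen only to show how the degradation rate sets half-life and steady-state abundance; they are not values from the article.

```python
import math

# Toy first-order model of mRNA turnover: d[mRNA]/dt = k_syn - k_deg * [mRNA].
# All rate constants below are hypothetical illustration values.

def half_life(k_deg):
    return math.log(2) / k_deg          # t1/2 = ln(2) / k_deg

def steady_state(k_syn, k_deg):
    return k_syn / k_deg                # setting d[mRNA]/dt = 0

k_syn = 10.0            # transcripts produced per hour (hypothetical)
k_deg_basal = 0.35      # per hour  -> half-life of roughly 2 h (hypothetical)
k_deg_mirna = 0.70      # miRNA assumed to double the degradation rate

for label, k in [("basal", k_deg_basal), ("+miRNA", k_deg_mirna)]:
    print(f"{label}: t1/2 = {half_life(k):.1f} h, "
          f"steady state = {steady_state(k_syn, k):.1f} transcripts")

# Doubling k_deg halves the steady-state level: a modest ~2-fold change,
# in line with the 1.5- to 4-fold effects described for most miRNA targets.
```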
Post-transcriptional regulation
[ "Chemistry", "Biology" ]
3,094
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
11,487,713
https://en.wikipedia.org/wiki/Cyclic%20olefin%20copolymer
Cyclic olefin copolymer (COC) is an amorphous polymer made by several polymer manufacturers. COC is a relatively new class of polymers as compared to commodities such as polypropylene and polyethylene. This newer material is used in a wide variety of applications including packaging films, lenses, vials, displays, and medical devices. Various types In 2005 there were "several types of commercial cyclic olefin copolymers based on different types of cyclic monomers and polymerization methods. Cyclic olefin copolymers are produced by chain copolymerization of cyclic monomers such as 8,9,10-trinorborn-2-ene (norbornene) or 1,2,3,4,4a,5,8,8a-octahydro-1,4:5,8-dimethanonaphthalene (tetracyclododecene) with ethene (such as Polyplastics subsidiary TOPAS Advanced Polymers' TOPAS, Mitsui Chemical's APEL), or by ring-opening metathesis polymerization of various cyclic monomers followed by hydrogenation (Japan Synthetic Rubber's ARTON, Zeon Chemical's Zeonex and Zeonor)." These later materials using a single type of monomer are more properly named cyclic olefin polymers (COP). Chemical and physical properties Typical COC material has a higher modulus than HDPE and PP, similar to PET or PC. COC also has a high moisture barrier for a clear polymer along with a low moisture absorption rate. In medical and analytical applications, COC is noted to be a high purity product with low extractables. COC is also a halogen-free and BPA-free product. Some grades of COC have shown a lack of estrogenic activity. The optical properties of COC are exceptional, and in many ways very similar to glass. COC materials offer exceptional transparency, low birefringence, high Abbe number and high heat resistance. The moisture insensitivity of COC is often an advantage over competing materials such as polycarbonate and acrylics. The high flow of COC enables higher aspect ratio (squatter and shallower) optical component fabrication than other optical polymers. High ultraviolet transmission is a hallmark of COC materials, with optimized grades the leading polymer alternatives to quartz glass in analytical and diagnostic applications. Some properties vary due to monomer content. These include glass transition temperature, viscosity, and stiffness. The glass transition temperature of these polymers can exceed 200°C. COC resins are commonly supplied in pellet form and are suited to standard polymer processing techniques such as single and twin screw extrusion, injection molding, injection blow molding and stretch blow molding (ISBM), compression molding, extrusion coating, biaxial orientation, thermoforming and many others. COC is noted for high dimensional stability with little change seen after processing. COC and COP are generally attacked by non-polar solvents, such as toluene. COC shows good chemical resistance and barrier to other solvents, such as alcohols, and is very resistant to attack from acids and bases. Electronic properties of COC are in some respects similar to fluoropolymers, most notably a similarly low dissipation factor or tan delta, and low permittivity. It is a very good insulator. Applications Packaging COC is commonly extruded with cast or blown film equipment in the manufacture of packaging films. Most often, due to cost, COC is used as a modifier in monolayer or multilayer film to provide properties not delivered by base resins such as polyethylene. Grades of COC based on ethylene show a certain amount of compatibility with polyethylene and can be blended with PE via commercial dry blending equipment. 
These films are then used in consumer applications including food and healthcare packaging. Key COC enhancements can include thermoformability, shrink, deadfold, easy tear, enhanced stiffness, heat resistance and higher moisture barrier. Common applications include shrink films and labels, twist films, protective or bubble packaging, and forming films. Another noted application which often relies on a high percentage of COC in the end product is pharmaceutical blister packaging. Healthcare The high purity, moisture barrier, clarity, and sterilization compatibility of COC resins make them an excellent alternative to glass in a wide range of medical products. Breakage prevention and weight reduction are common reasons for choosing COC in these applications. COC has a very low-energy and nonreactive surface, which can extend shelf life and purity of medications such as insulin and other protein drugs in applications such as vials, syringes and cartridges. The high UV transmission of COC also drives diagnostic applications such as cuvettes and microplates. COC plays an increasingly important role in microfluidics due to its chemical resistance, clarity and unusually high mold detail replication which makes it possible to reliably mold submicron features. Most COC grades can undergo sterilization by gamma radiation, steam, or ethylene oxide. Optics These polymers are commercially used in optical films, lenses, touch screens, light guide panels, reflection films, and other components for mobile devices, displays, cameras, copiers and other optical assemblies. Fiber spinning COC has unique electrical properties that resist dielectric breakdown and have a very low dielectric loss over time. Because of this COC is used in filter media that require a charge retention to work properly. Electronics The low dielectric constant of COC, even at high frequency, has led to its use in certain antenna applications as well as capacitors requiring higher temperature resistance than polypropylene can provide. See also Ethylene Polynorbornene References External links Physical and electrical properties of TOPAS Polymer chemistry Copolymers
Cyclic olefin copolymer
[ "Chemistry", "Materials_science", "Engineering" ]
1,233
[ "Materials science", "Polymer chemistry" ]
11,489,016
https://en.wikipedia.org/wiki/Molecular%20binding
Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. It is formed when atoms or molecules bind together by sharing of electrons. It often, but not always, involves some chemical bonding. In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10−14—and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds. Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks. Types Molecular binding can be classified into the following types: Non-covalent – no chemical bonds are formed between the two interacting molecules hence the association is fully reversible Reversible covalent – a chemical bond is formed, however the free energy difference separating the noncovalently-bonded reactants from bonded product is near equilibrium and the activation barrier is relatively low such that the reverse reaction which cleaves the chemical bond easily occurs Irreversible covalent – a chemical bond is formed in which the product is thermodynamically much more stable than the reactants such that the reverse reaction does not take place. Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes can have kinetics that closely resemble irreversible covalent inhibitors. Among the tightest known protein–protein complexes is that between the enzyme angiogenin and ribonuclease inhibitor; the dissociation constant for the human proteins is 5x10−16 mol/L. Another biological example is the binding protein streptavidin, which has extraordinarily high affinity for biotin (vitamin B7/H, dissociation constant, Kd ≈10−14 mol/L). In such cases, if the reaction conditions change (e.g., the protein moves into an environment where biotin concentrations are very low, or pH or ionic conditions are altered), the reverse reaction can be promoted. For example, the biotin-streptavidin interaction can be broken by incubating the complex in water at 70 °C, without damaging either molecule. An example of change in local concentration causing dissociation can be found in the Bohr effect, which describes the dissociation of ligands from hemoglobin in the lung versus peripheral tissues. Some protein–protein interactions result in covalent bonding, and some pharmaceuticals are irreversible antagonists that may or may not be covalently bound. Drug discovery has been through periods when drug candidates that bind covalently to their targets are attractive and then are avoided; the success of bortezomib made boron-based covalently binding candidates more attractive in the late 2000s. Driving force In order for the complex to be stable, the free energy of complex by definition must be lower than the solvent separated molecules. 
The binding may be primarily entropy-driven (release of ordered solvent molecules around the isolated molecule that results in a net increase of entropy of the system). When the solvent is water, this is known as the hydrophobic effect. Alternatively, the binding may be enthalpy-driven, where non-covalent attractive forces such as electrostatic attraction, hydrogen bonding, and van der Waals / London dispersion forces are primarily responsible for the formation of a stable complex. Complexes that have a strong entropy contribution to formation tend to have weak enthalpy contributions. Conversely, complexes that have a strong enthalpy component tend to have a weak entropy component. This phenomenon is known as enthalpy-entropy compensation. Measurement The strength of binding between the components of a molecular complex is measured quantitatively by the binding constant (KA), defined as the ratio of the concentration of the complex divided by the product of the concentrations of the isolated components at equilibrium, in molar units. For the association A + B ⇌ AB, KA = [AB] / ([A][B]). When the molecular complex prevents the normal functioning of an enzyme, the binding constant is also referred to as the inhibition constant (KI). Examples Molecules that can participate in molecular binding include proteins, nucleic acids, carbohydrates, lipids, and small organic molecules such as drugs. Hence the types of complexes that form as a result of molecular binding include: protein–protein protein–DNA protein–hormone protein–drug Proteins that form stable complexes with other molecules are often referred to as receptors, while their binding partners are called ligands. See also Receptor (biochemistry) Supramolecular chemistry References Medicinal chemistry Molecular physics
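A minimal sketch of how the dissociation constants quoted above translate into receptor occupancy, assuming simple 1:1 binding with the ligand in excess; the Kd for streptavidin–biotin is the one quoted in the text, while the other Kd and the concentrations are illustrative assumptions.

```python
# Fraction of receptor occupied for simple 1:1 binding, assuming the free
# ligand concentration is approximately the total ligand (ligand in excess).

def fraction_bound(ligand_conc, kd):
    """Occupancy = [L] / ([L] + Kd) for A + B <-> AB with K_A = 1 / Kd."""
    return ligand_conc / (ligand_conc + kd)

kd_biotin_streptavidin = 1e-14   # mol/L, as quoted in the text
kd_typical_drug = 1e-8           # mol/L, a hypothetical 10 nM drug

for name, kd in [("streptavidin-biotin", kd_biotin_streptavidin),
                 ("10 nM drug-target", kd_typical_drug)]:
    for conc in (1e-12, 1e-9, 1e-6):
        print(f"{name}: [L] = {conc:.0e} M -> occupancy {fraction_bound(conc, kd):.4f}")
```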
Molecular binding
[ "Physics", "Chemistry", "Biology" ]
1,045
[ "Molecular physics", "Biochemistry", " molecular", "Medicinal chemistry", "nan", "Atomic", " and optical physics" ]
11,491,735
https://en.wikipedia.org/wiki/Markov%20partition
A Markov partition in mathematics is a tool used in dynamical systems theory, allowing the methods of symbolic dynamics to be applied to the study of hyperbolic dynamics. By using a Markov partition, the system can be made to resemble a discrete-time Markov process, with the long-term dynamical characteristics of the system represented as a Markov shift. The appellation 'Markov' is appropriate because the resulting dynamics of the system obeys the Markov property. The Markov partition thus allows standard techniques from symbolic dynamics to be applied, including the computation of expectation values, correlations, topological entropy, topological zeta functions, Fredholm determinants and the like. Motivation Let (M, φ) be a discrete dynamical system. A basic method of studying its dynamics is to find a symbolic representation: a faithful encoding of the points of M by sequences of symbols such that the map φ becomes the shift map. Suppose that M has been divided into a number of pieces E1, E2, ..., Er, which are thought to be as small and localized as possible, with virtually no overlaps. The behavior of a point x under the iterates of φ can be tracked by recording, for each n, the part Ei which contains φ^n(x). This results in an infinite sequence on the alphabet {1, 2, ..., r} which encodes the point. In general, this encoding may be imprecise (the same sequence may represent many different points) and the set of sequences which arise in this way may be difficult to describe. Under certain conditions, which are made explicit in the rigorous definition of a Markov partition, the assignment of the sequence to a point of M becomes an almost one-to-one map whose image is a symbolic dynamical system of a special kind called a shift of finite type. In this case, the symbolic representation is a powerful tool for investigating the properties of the dynamical system (M, φ). Formal definition A Markov partition is a finite cover of the invariant set of the manifold by a set of curvilinear rectangles {E1, ..., Er} such that (1) for any pair of points x, y ∈ Ei, the intersection Ws(x) ∩ Wu(y) lies in Ei; (2) Int Ei ∩ Int Ej = ∅ for i ≠ j; (3) if x ∈ Int Ei and φ(x) ∈ Int Ej, then φ(Wu(x) ∩ Ei) ⊃ Wu(φx) ∩ Ej and φ(Ws(x) ∩ Ei) ⊂ Ws(φx) ∩ Ej. Here, Wu(x) and Ws(x) are the unstable and stable manifolds of x, respectively, and Int Ei simply denotes the interior of Ei. These last two conditions can be understood as a statement of the Markov property for the symbolic dynamics; that is, the movement of a trajectory from one open cover to the next is determined only by the most recent cover, and not the history of the system. It is this property of the covering that merits the 'Markov' appellation. The resulting dynamics is that of a Markov shift; that this is indeed the case is due to theorems by Yakov Sinai (1968) and Rufus Bowen (1975), thus putting symbolic dynamics on a firm footing. Variants of the definition are found, corresponding to conditions on the geometry of the pieces Ei. Examples Markov partitions have been constructed in several situations. Anosov diffeomorphisms of the torus. Dynamical billiards, in which case the covering is countable. Markov partitions make homoclinic and heteroclinic orbits particularly easy to describe. The doubling map x ↦ 2x mod 1 on the unit interval has the Markov partition E0 = (0, 1/2), E1 = (1/2, 1), and in this case the symbolic representation of a real number in [0, 1) is its binary expansion. For example, the point 1/3 = 0.0101... in binary corresponds to the itinerary (0, 1, 0, 1, ...). The assignment of points of [0, 1) to their sequences in the Markov partition is well defined except on the dyadic rationals - morally speaking, this is because 0.0111... = 0.1000..., in the same way as 0.0999... = 0.1000... in decimal expansions. References Dynamical systems Symbolic dynamics Diffeomorphisms Markov models
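A minimal Python sketch of the symbolic coding described above, using the doubling map and the two-piece partition; this illustrates the general idea and is not code from any reference in the article.

```python
# Minimal sketch of the symbolic coding discussed above: the doubling map
# T(x) = 2x mod 1 with the two-piece partition E0 = [0, 1/2), E1 = [1/2, 1).
# Recording which piece each iterate lands in reproduces the binary expansion.

def doubling_map(x):
    return (2.0 * x) % 1.0

def symbolic_code(x, n_symbols=16):
    """Itinerary of x under the doubling map with respect to {E0, E1}."""
    symbols = []
    for _ in range(n_symbols):
        symbols.append(0 if x < 0.5 else 1)
        x = doubling_map(x)
    return symbols

# 1/3 = 0.010101... in binary, so its itinerary is the periodic sequence 0,1,0,1,...
print(symbolic_code(1 / 3, 12))

# Floating-point round-off eventually corrupts the tail for long itineraries;
# exact arithmetic (fractions.Fraction) avoids this in a real implementation.
```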
Markov partition
[ "Physics", "Mathematics" ]
706
[ "Symbolic dynamics", "Mechanics", "Dynamical systems" ]
11,493,977
https://en.wikipedia.org/wiki/Mekorot
Mekorot (, lit. "Sources") is the national water company of Israel and the country's top agency for water management. Founded in 1937, it supplies Israel with approx. 80% of its drinking water and operates a cross-country water supply network known as the National Water Carrier. Mekorot and its subsidiaries have partnered with numerous countries around the world in areas including desalination and water management. History Mekorot was established as the "Ḥevrat ha-Mayim" ('Water Company') on 15 February 1937 by Levi Shkolnik (later Eshkol, Prime Minister of Israel between 1963-1969), water engineer Simcha Blass, and Pinchas Koslovsky (later Sapir, Minister of Finance between 1963-1968). Water supply system Mekorot supplies approx. 80% of Israel's drinking water and approx. 65% of its water supplies. Mekorot supplies over 1.7 billion cubic meters of water to homes, agricultural fields, & industrial plants throughout Israel. The company provides water & services to the private & public sectors in Israel & to the Palestinian Authority & the Kingdom of Jordan, through political agreements. The company operates about 13,000 km of pipelines, 3,000 production & supply facilities, 1,200 drillings, 1,000 water reservoirs & pools and 20 desalination facilities. The company has accumulated investments of NIS 1.5-2 billion per year, & a three-year development plan. Mekorot's water supply system unifies most of the regional water plants, the National Water Carrier and Yarkon-Negev plant, and draws water from the Sea of Galilee, aquifers, boreholes, seawater, desalinated water, and brackish water. 2004 - Establishment of the WaTech technological entrepreneurship center For technological entrepreneurship and collaborations between Mekorot and start-up companies, entrepreneurs, academia and investors in the field of water technologies in Israel and around the world. As of 2024, Mekorot has contracted with nearly 10 startups and operates R&D centers in the water field. National Water Carrier Mekorot's National Water Carrier, known in Hebrew as Hamovil ha'artzi, runs from Lake Kinneret (also known as The Sea of Galilee) in the north to the northern Negev Desert in the south. The system has been expanded to pipe water from desalination plants on the Mediterranean coast. Groundbreaking national project for transferring water throughout Israel, in all directions and collecting water from desalination facilities that were established along the coastal strip in the west of the country, to cope with the increased demand for water in the country. 2004-2015 – The New National Carrier Groundbreaking national project for transferring water throughout the country, in all directions and collecting water from desalination facilities that were established along the coastal strip in the west of the country, to cope with the increased demand for water in the country. 2007 – Launching of the National Filtration Plant As part of its commitment to water quality, Mekorot builds the main filtration plant at the Eshkol site in the Beit Natofa Valley to improve the water of the Sea of Galilee before its flows into the National Carrier pipeline system. 2022 - Inauguration of the fifth water supply system to Jerusalem Inaugurated with the aim of expanding the scope of water supply to Jerusalem and the surrounding towns, including meeting future water needs in the region. 
2023 - Launching of the National Carrier Flow Reversal Project
Delivery of surplus desalinated water to its destinations: strengthening the Sea of Galilee, increasing the water supply to northern towns and continuing to meet Israel's obligations to Jordan in the east.

Water Tariffs
Water tariffs are set by the Water Authority. Tariffs are updated every six months according to changes in the Consumer Price Index, electricity rates and the average wage index. The rates vary according to use: domestic consumption and services, industry, and agriculture. Rates for industrial and agricultural use are lower than those for domestic consumption and services. The bulk water tariff is the same throughout the country, regardless of the difference in supply costs.

Water Treatment
In 2008, Mekorot established a central water filtering plant for water pumped from Lake Kinneret (the Sea of Galilee). The company also improved quality control. As a result, water quality has improved and less chlorine is added to the water as a disinfecting agent. Over the years, Mekorot has also established the National Laboratory at the site, which assists in monitoring water quality. Mekorot's operations aim at maximum utilization of the mix of water resources through various means. In the field of desalination, Mekorot supplies desalinated seawater and operates 23 desalination facilities, including some in the desert, which produce more than one million cubic meters of desalinated water a day. In the field of water recycling, the company operates nine waste treatment and water reuse facilities, which provide 85% of the water for Israel's agriculture. In addition, the company operates more than 1,000 wells nationwide, some of which reach a depth of up to 1.5 km, to produce water from aquifers and treat it for drinking and agriculture. In other areas of Israel, Mekorot has developed innovative technologies for capturing floodwater in different geographical regions.

Integrated Water Solutions
The company provides an integrated package of water solutions, thanks to the company's specialization in a wide range of areas, including advanced models for water management & operation, the optimal combination of different water types, seawater & water desalination, wastewater processing & water recycling for agricultural uses, improving water resources to the proper quality, security of water sources & more.

Innovation & Digitization
Mekorot uses advanced IT & OT technologies to implement a comprehensive digital transformation of its processes, enabling fast, efficient & secure flow of professional and business information, paperless online work, & protection of critical infrastructures, such as information infrastructure, against cyber & other attacks. Moreover, the company continues to implement innovative water technologies, such as the deployment of a fiber-optic network in its water piping, which carries high-speed communications throughout Israel.

International Activities
The signing of the Abraham Accords opened a new business horizon for the company - in March 2021, Mekorot became the first Israeli government company in the infrastructure field to operate in the Persian Gulf, signing a development contract with the Kingdom of Bahrain. In September 2021 the company completed the doubling of the water supply to the Kingdom of Jordan, & in April 2022 signed an agreement for the development of the water economy in Azerbaijan.
Since then, the company has signed agreements and memoranda of understanding with several countries, including France, Singapore, Morocco, Argentina, Chile and Italy.

Financial Stability
Since 2003, Mekorot has consistently been ranked at the highest financial strength rating (ilAAA) by the international rating company Maalot Standard & Poor's. In 2019, Mekorot was listed on the Tel Aviv Stock Exchange, after raising its first capital in the company's history by issuing tradable bonds, accompanied by the publication of a prospectus. To date, the company has raised over NIS 5 billion while maintaining its high credit rating.

List of CEOs
Mekorot's CEO is Amit Lang; the chairman of the board is Yitzhak Aharonovich.

See also
Water supply and sanitation in Israel

References

External links
Mekorot web site
Government-owned companies of Israel Hydraulic engineering Utilities of Israel Water supply and sanitation in Israel
Mekorot
[ "Physics", "Engineering", "Environmental_science" ]
1,543
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
11,494,409
https://en.wikipedia.org/wiki/Element%20%28category%20theory%29
In category theory, the concept of an element, or a point, generalizes the more usual set theoretic concept of an element of a set to an object of any category. This idea often allows restating of definitions or properties of morphisms (such as monomorphism or product) given by a universal property in more familiar terms, by stating their relation to elements. Some very general theorems, such as Yoneda's lemma and the Mitchell embedding theorem, are of great utility for this, by allowing one to work in a context where these translations are valid. This approach to category theory – in particular the use of the Yoneda lemma in this way – is due to Grothendieck, and is often called the method of the functor of points.

Definition
Suppose C is any category and A, T are two objects of C. A T-valued point of A is simply a morphism from T to A. The set of all T-valued points of A varies functorially with T, giving rise to the "functor of points" of A; according to the Yoneda lemma, this completely determines A as an object of C.

Properties of morphisms
Many properties of morphisms can be restated in terms of points. For example, a map f is said to be a monomorphism if
For all maps g, h of a suitable type, if f ∘ g = f ∘ h then g = h.
Suppose f : B → C and g, h : A → B in C. Then g and h are A-valued points of B, and therefore monomorphism is equivalent to the more familiar statement
f is a monomorphism if it is an injective function on points of B.
Some care is necessary. f is an epimorphism if the dual condition holds:
For all maps g, h (of some suitable type), g ∘ f = h ∘ f implies g = h.
In set theory, the term "epimorphism" is synonymous with "surjection", i.e.
Every point of C is the image, under f, of some point of B.
This is clearly not the translation of the first statement into the language of points, and in fact these statements are not equivalent in general. However, in some contexts, such as abelian categories, "monomorphism" and "epimorphism" are backed by sufficiently strong conditions that in fact they do allow such a reinterpretation on points.
Similarly, categorical constructions such as the product have pointed analogues. Recall that if A, B are two objects of C, their product A × B is an object such that
There exist maps p : A × B → A and q : A × B → B, and for any T and maps f : T → A, g : T → B, there exists a unique map h : T → A × B such that p ∘ h = f and q ∘ h = g.
In this definition, f and g are T-valued points of A and B, respectively, while h is a T-valued point of A × B. An alternative definition of the product is therefore:
A × B is an object of C, together with projection maps p and q, such that p and q furnish a bijection between points of A × B and pairs of points of A and B.
This is the more familiar definition of the product of two sets.

Geometric origin
The terminology is geometric in origin; in algebraic geometry, Grothendieck introduced the notion of a scheme in order to unify the subject with arithmetic geometry, which dealt with the same idea of studying solutions to polynomial equations (i.e. algebraic varieties) but where the solutions are not complex numbers but rational numbers, integers, or even elements of some finite field. A scheme is then just that: a scheme for collecting together all the manifestations of a variety defined by the same equations but with solutions taken in different number sets. One scheme gives a complex variety, whose points are its C-valued points, as well as the set of Q-valued points (rational solutions to the equations), and even Z/p-valued points (solutions modulo p).
One feature of the language of points is evident from this example: it is, in general, not enough to consider just points with values in a single object. For example, the equation x² + 1 = 0 (which defines a scheme) has no real solutions, but it has complex solutions, namely ±i. It also has one solution modulo 2 and two modulo 5, 13, 29, etc. (all primes that are 1 modulo 4). Just taking the real solutions would give no information whatsoever.

Relation with set theory
The situation is analogous to the case where C is the category Set, of sets of actual elements. In this case, we have the "one-pointed" set {1}, and the elements of any set S are the same as the {1}-valued points of S. In addition, though, there are the {1, 2}-valued points, which are pairs of elements of S, or elements of S × S. In the context of sets, these higher points are extraneous: S is determined completely by its {1}-valued points. However, as shown above, this is special (in this case, it is because all sets are iterated coproducts of {1}).

References
Category theory
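The solution counts modulo small primes mentioned above are easy to verify directly. The Python sketch below is an illustration added for this purpose (it is not part of the original article); it counts the Z/p-valued points of the scheme defined by x² + 1 = 0 for the first few primes.

def count_solutions_mod_p(p):
    """Number of x in {0, ..., p-1} with x^2 + 1 divisible by p."""
    return sum(1 for x in range(p) if (x * x + 1) % p == 0)

for p in [2, 3, 5, 7, 11, 13, 17, 29]:
    print(p, count_solutions_mod_p(p))
# Prints one solution for p = 2, two for p = 5, 13, 17, 29 (primes that are
# 1 modulo 4), and none for p = 3, 7, 11.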
Element (category theory)
[ "Mathematics" ]
1,020
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Mathematical relations" ]
699,689
https://en.wikipedia.org/wiki/Polaron
A polaron is a quasiparticle used in condensed matter physics to understand the interactions between electrons and atoms in a solid material. The polaron concept was proposed by Lev Landau in 1933 and Solomon Pekar in 1946 to describe an electron moving in a dielectric crystal where the atoms displace from their equilibrium positions to effectively screen the charge of an electron, forming what is known as a phonon cloud. This lowers the electron mobility and increases the electron's effective mass.
The general concept of a polaron has been extended to describe other interactions between the electrons and ions in metals that result in a bound state, or a lowering of energy compared to the non-interacting system. Major theoretical work has focused on solving the Fröhlich and Holstein Hamiltonians. This is still an active field of research to find exact numerical solutions to the case of one or two electrons in a large crystal lattice, and to study the case of many interacting electrons.
Experimentally, polarons are important to the understanding of a wide variety of materials. The electron mobility in semiconductors can be greatly decreased by the formation of polarons. Organic semiconductors are also sensitive to polaronic effects, which is particularly relevant in the design of organic solar cells that effectively transport charge. Polarons are also important for interpreting the optical conductivity of these types of materials.
The polaron, a fermionic quasiparticle, should not be confused with the polariton, a bosonic quasiparticle analogous to a hybridized state between a photon and an optical phonon.

Polaron theory
The energy spectrum of an electron moving in a periodic potential of a rigid crystal lattice is called the Bloch spectrum, which consists of allowed bands and forbidden bands. An electron with energy inside an allowed band moves as a free electron but has an effective mass that differs from the electron mass in vacuum. However, a crystal lattice is deformable and displacements of atoms (ions) from their equilibrium positions are described in terms of phonons. Electrons interact with these displacements, and this interaction is known as electron-phonon coupling. One possible scenario was proposed in the seminal 1933 paper by Lev Landau, which includes the production of a lattice defect such as an F-center and a trapping of the electron by this defect. A different scenario was proposed by Solomon Pekar that envisions dressing the electron with lattice polarization (a cloud of virtual polar phonons). Such an electron with the accompanying deformation moves freely across the crystal, but with increased effective mass. Pekar coined for this charge carrier the term polaron.
Landau and Pekar constructed the basis of polaron theory. A charge placed in a polarizable medium will be screened. Dielectric theory describes the phenomenon by the induction of a polarization around the charge carrier. The induced polarization will follow the charge carrier when it is moving through the medium. The carrier together with the induced polarization is considered as one entity, which is called a polaron (see Fig. 1).
While polaron theory was originally developed for electrons, there is no fundamental reason why it could not be any other charged particle interacting with phonons. Indeed, other charged particles such as (electron) holes and ions generally follow the polaron theory. For example, the proton polaron was identified experimentally in 2017 on ceramic electrolytes, after its existence had been hypothesized.
Usually, in covalent semiconductors the coupling of electrons with lattice deformation is weak and polarons do not form. In polar semiconductors the electrostatic interaction with induced polarization is strong and polarons are formed at low temperature, provided that their concentration is not large and the screening is not efficient. Another class of materials in which polarons are observed is molecular crystals, where the interaction with molecular vibrations may be strong. In the case of polar semiconductors, the interaction with polar phonons is described by the Fröhlich Hamiltonian. On the other hand, the interaction of electrons with molecular phonons is described by the Holstein Hamiltonian. Usually, the models describing polarons may be divided into two classes. The first class represents continuum models where the discreteness of the crystal lattice is neglected. In that case, polarons are weakly coupled or strongly coupled depending on whether the polaron binding energy is small or large compared to the phonon frequency. The second class of systems commonly considered are lattice models of polarons. In this case, there may be small or large polarons, depending on the relative size of the polaron radius to the lattice constant . A conduction electron in an ionic crystal or a polar semiconductor is the prototype of a polaron. Herbert Fröhlich proposed a model Hamiltonian for this polaron through which its dynamics are treated quantum mechanically (Fröhlich Hamiltonian). The strength of electron-phonon interaction is determined by the dimensionless coupling constant . Here is electron mass, is the phonon frequency and , , are static and high frequency dielectric constants. In table 1 the Fröhlich coupling constant is given for a few solids. The Fröhlich Hamiltonian for a single electron in a crystal using second quantization notation is: The exact form of γ depends on the material and the type of phonon being used in the model. In the case of a single polar mode , here is the volume of the unit cell. In the case of molecular crystal γ is usually momentum independent constant. A detailed advanced discussion of the variations of the Fröhlich Hamiltonian can be found in J. T. Devreese and A. S. Alexandrov. The terms Fröhlich polaron and large polaron are sometimes used synonymously since the Fröhlich Hamiltonian includes the continuum approximation and long range forces. There is no known exact solution for the Fröhlich Hamiltonian with longitudinal optical (LO) phonons and linear (the most commonly considered variant of the Fröhlich polaron) despite extensive investigations. Despite the lack of an exact solution, some approximations of the polaron properties are known. The physical properties of a polaron differ from those of a band-carrier. A polaron is characterized by its self-energy , an effective mass and by its characteristic response to external electric and magnetic fields (e. g. dc mobility and optical absorption coefficient). When the coupling is weak ( small), the self-energy of the polaron can be approximated as: and the polaron mass , which can be measured by cyclotron resonance experiments, is larger than the band mass of the charge carrier without self-induced polarization: When the coupling is strong (α large), a variational approach due to Landau and Pekar indicates that the self-energy is proportional to α² and the polaron mass scales as α⁴. 
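As a numerical illustration of the weak-coupling behavior described above, the sketch below evaluates the Fröhlich coupling constant and the corresponding leading-order self-energy and mass enhancement. Because the equations in the text were not preserved, the explicit formulas used here, α = (e²/4πε₀ħ)·√(m/2ħω)·(1/ε∞ − 1/εstat), E₀ ≈ −αħω and m*/m ≈ 1 + α/6, together with the rough GaAs-like parameters, are conventional values quoted from the general literature and should be treated as assumptions of this example.

import numpy as np
from scipy.constants import e, epsilon_0, hbar, m_e

def froehlich_alpha(m_band, hbar_omega_LO, eps_inf, eps_stat):
    """Dimensionless Froehlich coupling constant, evaluated in SI units."""
    omega = hbar_omega_LO / hbar
    return (e**2 / (4 * np.pi * epsilon_0 * hbar)
            * np.sqrt(m_band / (2 * hbar * omega))
            * (1 / eps_inf - 1 / eps_stat))

# Rough GaAs-like parameters (assumed, for illustration only).
m_band = 0.067 * m_e              # band mass
hbar_omega = 36.8e-3 * e          # LO-phonon energy (about 36.8 meV), in joules
alpha = froehlich_alpha(m_band, hbar_omega, eps_inf=10.9, eps_stat=12.9)

self_energy = -alpha * hbar_omega    # weak-coupling self-energy, -alpha*hbar*omega
mass_ratio = 1.0 + alpha / 6.0       # weak-coupling polaron mass, m*/m = 1 + alpha/6
print(f"alpha = {alpha:.3f}, E0 = {self_energy / e * 1e3:.2f} meV, m*/m = {mass_ratio:.4f}")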
The Landau–Pekar variational calculation yields an upper bound to the polaron self-energy , valid for all α, where is a constant determined by solving an integro-differential equation. It was an open question for many years whether this expression was asymptotically exact as α tends to infinity. Finally, Donsker and Varadhan, applying large deviation theory to Feynman's path integral formulation for the self-energy, showed the large α exactitude of this Landau–Pekar formula. Later, Lieb and Thomas gave a shorter proof using more conventional methods, and with explicit bounds on the lower order corrections to the Landau–Pekar formula. Feynman introduced the variational principle for path integrals to study the polaron. He simulated the interaction between the electron and the polarization modes by a harmonic interaction between a hypothetical particle and the electron. The analysis of an exactly solvable ("symmetrical") 1D-polaron model, Monte Carlo schemes and other numerical schemes demonstrate the remarkable accuracy of Feynman's path-integral approach to the polaron ground-state energy. Experimentally more directly accessible properties of the polaron, such as its mobility and optical absorption, have been investigated subsequently. In the strong coupling limit, , the spectrum of excited states of a polaron begins with polaron-phonon bound states with energies less than , where is the frequency of optical phonons. In the lattice models the main parameter is the polaron binding energy: , here summation is taken over the Brillouin zone. Note that this binding energy is purely adiabatic, i.e. does not depend on the ionic masses. For polar crystals the value of the polaron binding energy is strictly determined by the dielectric constants ,, and is of the order of 0.3-0.8 eV. If polaron binding energy is smaller than the hopping integral the large polaron is formed for some type of electron-phonon interactions. In the case when the small polaron is formed. There are two limiting cases in the lattice polaron theory. In the physically important adiabatic limit all terms which involve ionic masses are cancelled and formation of polaron is described by nonlinear Schrödinger equation with nonadiabatic correction describing phonon frequency renormalization and polaron tunneling. In the opposite limit the theory represents the expansion in . Polaron optical absorption The expression for the magnetooptical absorption of a polaron is: Here, is the cyclotron frequency for a rigid-band electron. The magnetooptical absorption Γ(Ω) at the frequency Ω takes the form Σ(Ω) is the so-called "memory function", which describes the dynamics of the polaron. Σ(Ω) depends also on α, β (β, where is the Boltzmann constant and is the temperature) and . In the absence of an external magnetic field () the optical absorption spectrum (3) of the polaron at weak coupling is determined by the absorption of radiation energy, which is reemitted in the form of LO phonons. At larger coupling, , the polaron can undergo transitions toward a relatively stable internal excited state called the "relaxed excited state" (RES) (see Fig. 2). The RES peak in the spectrum also has a phonon sideband, which is related to a Franck–Condon-type transition. A comparison of the DSG results with the optical conductivity spectra given by approximation-free numerical and approximate analytical approaches is given in ref. 
Calculations of the optical conductivity for the Fröhlich polaron performed within the Diagrammatic Quantum Monte Carlo method, see Fig. 3, fully confirm the results of the path-integral variational approach at In the intermediate coupling regime the low-energy behavior and the position of the maximum of the optical conductivity spectrum of ref. follow well the prediction of Devreese. There are the following qualitative differences between the two approaches in the intermediate and strong coupling regime: in ref., the dominant peak broadens and the second peak does not develop, giving instead rise to a flat shoulder in the optical conductivity spectrum at . This behavior can be attributed to the optical processes with participation of two or more phonons. The nature of the excited states of a polaron needs further study. The application of a sufficiently strong external magnetic field allows one to satisfy the resonance condition , which {(for )} determines the polaron cyclotron resonance frequency. From this condition also the polaron cyclotron mass can be derived. Using the most accurate theoretical polaron models to evaluate , the experimental cyclotron data can be well accounted for. Evidence for the polaron character of charge carriers in AgBr and AgCl was obtained through high-precision cyclotron resonance experiments in external magnetic fields up to 16 T. The all-coupling magneto-absorption calculated in ref., leads to the best quantitative agreement between theory and experiment for AgBr and AgCl. This quantitative interpretation of the cyclotron resonance experiment in AgBr and AgCl by the theory of Peeters provided one of the most convincing and clearest demonstrations of Fröhlich polaron features in solids. Experimental data on the magnetopolaron effect, obtained using far-infrared photoconductivity techniques, have been applied to study the energy spectrum of shallow donors in polar semiconductor layers of CdTe. The polaron effect well above the LO phonon energy was studied through cyclotron resonance measurements, e. g., in II–VI semiconductors, observed in ultra-high magnetic fields. The resonant polaron effect manifests itself when the cyclotron frequency approaches the LO phonon energy in sufficiently high magnetic fields. In the lattice models the optical conductivity is given by the formula: Here is the activation energy of polaron, which is of the order of polaron binding energy . This formula was derived and extensively discussed in and was tested experimentally for example in photodoped parent compounds of high temperature superconductors. Polarons in two dimensions and in quasi-2D structures The great interest in the study of the two-dimensional electron gas (2DEG) has also resulted in many investigations on the properties of polarons in two dimensions. A simple model for the 2D polaron system consists of an electron confined to a plane, interacting via the Fröhlich interaction with the LO phonons of a 3D surrounding medium. The self-energy and the mass of such a 2D polaron are no longer described by the expressions valid in 3D; for weak coupling they can be approximated as: It has been shown that simple scaling relations exist, connecting the physical properties of polarons in 2D with those in 3D. An example of such a scaling relation is: where () and () are, respectively, the polaron and the electron-band masses in 2D (3D). The effect of the confinement of a Fröhlich polaron is to enhance the effective polaron coupling. 
However, many-particle effects tend to counterbalance this effect because of screening. Also in 2D systems cyclotron resonance is a convenient tool to study polaron effects. Although several other effects have to be taken into account (nonparabolicity of the electron bands, many-body effects, the nature of the confining potential, etc.), the polaron effect is clearly revealed in the cyclotron mass. An interesting 2D system consists of electrons on films of liquid He. In this system the electrons couple to the ripplons of the liquid He, forming "ripplopolarons". The effective coupling can be relatively large and, for some values of the parameters, self-trapping can result. The acoustic nature of the ripplon dispersion at long wavelengths is a key aspect of the trapping.
For GaAs/AlxGa1−xAs quantum wells and superlattices, the polaron effect is found to decrease the energy of the shallow donor states at low magnetic fields and leads to a resonant splitting of the energies at high magnetic fields. The energy spectra of such polaronic systems as shallow donors ("bound polarons"), e. g., the D0 and D− centres, constitute the most complete and detailed polaron spectroscopy realised in the literature. In GaAs/AlAs quantum wells with sufficiently high electron density, anticrossing of the cyclotron-resonance spectra has been observed near the GaAs transverse optical (TO) phonon frequency rather than near the GaAs LO-phonon frequency. This anticrossing near the TO-phonon frequency was explained in the framework of the polaron theory. Besides optical properties, many other physical properties of polarons have been studied, including the possibility of self-trapping, polaron transport, magnetophonon resonance, etc.

Extensions of the polaron concept
Also significant are the extensions of the polaron concept: acoustic polaron, piezoelectric polaron, electronic polaron, bound polaron, trapped polaron, spin polaron, molecular polaron, solvated polarons, polaronic exciton, Jahn-Teller polaron, small polaron, bipolarons and many-polaron systems. These extensions of the concept are invoked, e. g., to study the properties of conjugated polymers, colossal magnetoresistance perovskites, high-Tc superconductors, layered MgB2 superconductors, fullerenes, quasi-1D conductors, and semiconductor nanostructures.
The possibility that polarons and bipolarons play a role in high-Tc superconductors has renewed interest in the physical properties of many-polaron systems and, in particular, in their optical properties. Theoretical treatments have been extended from one-polaron to many-polaron systems.
A new aspect of the polaron concept has been investigated for semiconductor nanostructures: the exciton-phonon states are not factorizable into an adiabatic product Ansatz, so that a non-adiabatic treatment is needed. The non-adiabaticity of the exciton-phonon systems leads to a strong enhancement of the phonon-assisted transition probabilities (as compared to those treated adiabatically) and to multiphonon optical spectra that are considerably different from the Franck–Condon progression, even for small values of the electron-phonon coupling constant, as is the case for typical semiconductor nanostructures.

In biophysics
The Davydov soliton is a self-trapped amide I excitation propagating along the protein α-helix; it is a solution of the Davydov Hamiltonian. The mathematical techniques that are used to analyze Davydov's soliton are similar to some that have been developed in polaron theory.
In this context the Davydov soliton corresponds to a polaron that is (i) large, so the continuum limit approximation is justified, (ii) acoustic, because the self-localization arises from interactions with acoustic modes of the lattice, and (iii) weakly coupled, because the anharmonic energy is small compared with the phonon bandwidth.
It has been shown that the system of an impurity in a Bose–Einstein condensate is also a member of the polaron family. This allows the hitherto inaccessible strong coupling regime to be studied, since the interaction strengths can be externally tuned through the use of a Feshbach resonance. This was recently realized experimentally by two research groups. The existence of the polaron in a Bose–Einstein condensate was demonstrated for both attractive and repulsive interactions, including the strong coupling regime, and was observed dynamically.

See also
Exciton
Sigurd Zienau
TI-polaron

References

External links
Quasiparticles Ions
Polaron
[ "Physics", "Chemistry", "Materials_science" ]
3,839
[ "Matter", "Subatomic particles", "Condensed matter physics", "Quasiparticles", "Ions" ]
700,131
https://en.wikipedia.org/wiki/One-loop%20Feynman%20diagram
In physics, a one-loop Feynman diagram is a connected Feynman diagram with only one cycle (unicyclic). Such a diagram can be obtained from a connected tree diagram by taking two external lines of the same type and joining them together into an edge.
Diagrams with loops (in graph theory, these kinds of loops are called cycles, while the word loop is an edge connecting a vertex with itself) correspond to the quantum corrections to the classical field theory. Because one-loop diagrams only contain one cycle, they express the next-to-classical contributions, called the semiclassical contributions.
One-loop diagrams are usually computed as the integral over one independent momentum that can "run in the cycle". The Casimir effect, Hawking radiation and the Lamb shift are examples of phenomena whose existence can be implied using one-loop Feynman diagrams, especially the well-known "triangle diagram".
The evaluation of one-loop Feynman diagrams usually leads to divergent expressions, which are due either to zero-mass particles in the cycle of the diagram (infrared divergence) or to insufficient falloff of the integrand for high momenta (ultraviolet divergence). Infrared divergences are usually dealt with by assigning the zero-mass particles a small mass λ, evaluating the corresponding expression and then taking the limit λ → 0. Ultraviolet divergences are dealt with by renormalization.

See also
Tadpole (physics)
Quantum field theory Diagrams
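The ultraviolet divergence mentioned above can be made concrete with a toy integral. In Euclidean space the simplest one-loop "bubble", ∫ d⁴k/(2π)⁴ 1/(k² + m²)², grows only logarithmically with a hard momentum cutoff Λ. The Python sketch below is an illustration added here (it is not from the article): it performs the radial integral numerically and shows that each factor of 10 in the cutoff adds a constant, the signature of a logarithmic divergence.

import numpy as np
from scipy.integrate import quad

def bubble(cutoff, m=1.0):
    """Euclidean one-loop bubble integral with a hard momentum cutoff.
    The angular integration over the 3-sphere contributes the area 2*pi^2."""
    radial, _ = quad(lambda k: k**3 / (k**2 + m**2) ** 2, 0.0, cutoff)
    return 2 * np.pi**2 / (2 * np.pi) ** 4 * radial

for L in (10.0, 100.0, 1000.0, 10000.0):
    print(f"cutoff {L:>8.0f}: I = {bubble(L):.6f}")
# Each decade of cutoff adds about ln(10)/(8*pi^2) ~ 0.029: a logarithmic
# ultraviolet divergence, which is removed by renormalization.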
One-loop Feynman diagram
[ "Physics" ]
297
[ "Quantum field theory", "Quantum mechanics", "Quantum physics stubs" ]
700,141
https://en.wikipedia.org/wiki/Semiclassical%20physics
In physics, semiclassical refers to a theory in which one part of a system is described quantum mechanically, whereas the other is treated classically. For example, external fields will be constant, or when changing will be classically described. In general, it incorporates a development in powers of the Planck constant, resulting in the classical physics of power 0, and the first nontrivial approximation to the power of (−1). In this case, there is a clear link between the quantum-mechanical system and the associated semi-classical and classical approximations, as it is similar in appearance to the transition from physical optics to geometric optics. History Max Planck was the first to introduce the idea of quanta of energy in 1900 while studying black-body radiation. In 1906, he was also the first to write that quantum theory should replicate classical mechanics at some limit, particularly if the Planck constant h were infinitesimal. With this idea he showed that Planck's law for thermal radiation leads to the Rayleigh–Jeans law, the classical prediction (valid for large wavelength). Instances Some examples of a semiclassical approximation include: WKB approximation: electrons in classical external electromagnetic fields. semiclassical gravity: quantum field theory within a classical curved gravitational background (see general relativity). quantum chaos; quantization of classical chaotic systems. magnetic properties of materials and astrophysical bodies under the effect of large magnetic fields (see for example De Haas–Van Alphen effect) quantum field theory, only Feynman diagrams with at most a single closed loop (see for example one-loop Feynman diagram) are considered, which corresponds to the powers of the Planck constant. See also Bohr model Correspondence principle Classical limit Eikonal approximation Einstein–Brillouin–Keller method Old quantum theory References Quantum mechanics Quantum field theory Quantum chemistry Theoretical chemistry Computational chemistry
Semiclassical physics
[ "Physics", "Chemistry" ]
378
[ "Quantum field theory", "Quantum chemistry", "Theoretical physics", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry", " molecular", "nan", "Atomic", "Quantum physics stubs", " and optical physics" ]
700,154
https://en.wikipedia.org/wiki/WKB%20approximation
In mathematical physics, the WKB approximation or WKB method is a method for finding approximate solutions to linear differential equations with spatially varying coefficients. It is typically used for a semiclassical calculation in quantum mechanics in which the wavefunction is recast as an exponential function, semiclassically expanded, and then either the amplitude or the phase is taken to be changing slowly. The name is an initialism for Wentzel–Kramers–Brillouin. It is also known as the LG or Liouville–Green method. Other often-used letter combinations include JWKB and WKBJ, where the "J" stands for Jeffreys. Brief history This method is named after physicists Gregor Wentzel, Hendrik Anthony Kramers, and Léon Brillouin, who all developed it in 1926. In 1923, mathematician Harold Jeffreys had developed a general method of approximating solutions to linear, second-order differential equations, a class that includes the Schrödinger equation. The Schrödinger equation itself was not developed until two years later, and Wentzel, Kramers, and Brillouin were apparently unaware of this earlier work, so Jeffreys is often neglected credit. Early texts in quantum mechanics contain any number of combinations of their initials, including WBK, BWK, WKBJ, JWKB and BWKJ. An authoritative discussion and critical survey has been given by Robert B. Dingle. Earlier appearances of essentially equivalent methods are: Francesco Carlini in 1817, Joseph Liouville in 1837, George Green in 1837, Lord Rayleigh in 1912 and Richard Gans in 1915. Liouville and Green may be said to have founded the method in 1837, and it is also commonly referred to as the Liouville–Green or LG method. The important contribution of Jeffreys, Wentzel, Kramers, and Brillouin to the method was the inclusion of the treatment of turning points, connecting the evanescent and oscillatory solutions at either side of the turning point. For example, this may occur in the Schrödinger equation, due to a potential energy hill. Formulation Generally, WKB theory is a method for approximating the solution of a differential equation whose highest derivative is multiplied by a small parameter . The method of approximation is as follows. For a differential equation assume a solution of the form of an asymptotic series expansion in the limit . The asymptotic scaling of in terms of will be determined by the equation – see the example below. Substituting the above ansatz into the differential equation and cancelling out the exponential terms allows one to solve for an arbitrary number of terms in the expansion. WKB theory is a special case of multiple scale analysis. An example This example comes from the text of Carl M. Bender and Steven Orszag. Consider the second-order homogeneous linear differential equation where . Substituting results in the equation To leading order in ϵ (assuming, for the moment, the series will be asymptotically consistent), the above can be approximated as In the limit , the dominant balance is given by So is proportional to ϵ. Setting them equal and comparing powers yields which can be recognized as the eikonal equation, with solution Considering first-order powers of fixes This has the solution where is an arbitrary constant. We now have a pair of approximations to the system (a pair, because can take two signs); the first-order WKB-approximation will be a linear combination of the two: Higher-order terms can be obtained by looking at equations for higher powers of . Explicitly, for . 
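The statement that the WKB ansatz solves the equation order by order can be verified symbolically for the leading-order approximation. In the sketch below (an added illustration with an arbitrarily chosen Q(x); it is not part of the original article), the ansatz y = Q^(-1/4) exp(S/ε) with S' = √Q is substituted into ε²y'' = Q(x)y, and the relative residual comes out exactly of order ε², because the O(ε) terms cancel for this choice of amplitude.

import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)
Q = 1 + x**2                                     # an arbitrary smooth, positive Q(x)
S = sp.integrate(sp.sqrt(Q), x)                  # eikonal phase, S' = sqrt(Q)
y_wkb = Q**sp.Rational(-1, 4) * sp.exp(S / eps)  # leading-order WKB ansatz

residual = eps**2 * sp.diff(y_wkb, x, 2) - Q * y_wkb
rel = sp.simplify(residual / (Q * y_wkb))        # relative error of the ansatz
coeff = sp.simplify(rel / eps**2)
print(coeff.has(eps))   # False: the relative residual is proportional to eps**2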
Precision of the asymptotic series The asymptotic series for is usually a divergent series, whose general term starts to increase after a certain value . Therefore, the smallest error achieved by the WKB method is at best of the order of the last included term. For the equation with an analytic function, the value and the magnitude of the last term can be estimated as follows: where is the point at which needs to be evaluated and is the (complex) turning point where , closest to . The number can be interpreted as the number of oscillations between and the closest turning point. If is a slowly changing function, the number will be large, and the minimum error of the asymptotic series will be exponentially small. Application in non relativistic quantum mechanics The above example may be applied specifically to the one-dimensional, time-independent Schrödinger equation, which can be rewritten as Approximation away from the turning points The wavefunction can be rewritten as the exponential of another function (closely related to the action), which could be complex, so that its substitution in Schrödinger's equation gives: Next, the semiclassical approximation is used. This means that each function is expanded as a power series in . Substituting in the equation, and only retaining terms up to first order in , we get: which gives the following two relations: which can be solved for 1D systems, first equation resulting in:and the second equation computed for the possible values of the above, is generally expressed as: Thus, the resulting wavefunction in first order WKB approximation is presented as, In the classically allowed region, namely the region where the integrand in the exponent is imaginary and the approximate wave function is oscillatory. In the classically forbidden region , the solutions are growing or decaying. It is evident in the denominator that both of these approximate solutions become singular near the classical turning points, where , and cannot be valid. (The turning points are the points where the classical particle changes direction.) Hence, when , the wavefunction can be chosen to be expressed as:and for ,The integration in this solution is computed between the classical turning point and the arbitrary position x'. Validity of WKB solutions From the condition: It follows that: For which the following two inequalities are equivalent since the terms in either side are equivalent, as used in the WKB approximation: The first inequality can be used to show the following: where is used and is the local de Broglie wavelength of the wavefunction. The inequality implies that the variation of potential is assumed to be slowly varying. This condition can also be restated as the fractional change of or that of the momentum , over the wavelength , being much smaller than . Similarly it can be shown that also has restrictions based on underlying assumptions for the WKB approximation that:which implies that the de Broglie wavelength of the particle is slowly varying. Behavior near the turning points We now consider the behavior of the wave function near the turning points. For this, we need a different method. 
Near the first turning points, , the term can be expanded in a power series, To first order, one finds This differential equation is known as the Airy equation, and the solution may be written in terms of Airy functions, Although for any fixed value of , the wave function is bounded near the turning points, the wave function will be peaked there, as can be seen in the images above. As gets smaller, the height of the wave function at the turning points grows. It also follows from this approximation that: Connection conditions It now remains to construct a global (approximate) solution to the Schrödinger equation. For the wave function to be square-integrable, we must take only the exponentially decaying solution in the two classically forbidden regions. These must then "connect" properly through the turning points to the classically allowed region. For most values of , this matching procedure will not work: The function obtained by connecting the solution near to the classically allowed region will not agree with the function obtained by connecting the solution near to the classically allowed region. The requirement that the two functions agree imposes a condition on the energy , which will give an approximation to the exact quantum energy levels.The wavefunction's coefficients can be calculated for a simple problem shown in the figure. Let the first turning point, where the potential is decreasing over x, occur at and the second turning point, where potential is increasing over x, occur at . Given that we expect wavefunctions to be of the following form, we can calculate their coefficients by connecting the different regions using Airy and Bairy functions. First classical turning point For ie. decreasing potential condition or in the given example shown by the figure, we require the exponential function to decay for negative values of x so that wavefunction for it to go to zero. Considering Bairy functions to be the required connection formula, we get: We cannot use Airy function since it gives growing exponential behaviour for negative x. When compared to WKB solutions and matching their behaviours at , we conclude: , and . Thus, letting some normalization constant be , the wavefunction is given for increasing potential (with x) as: Second classical turning point For ie. increasing potential condition or in the given example shown by the figure, we require the exponential function to decay for positive values of x so that wavefunction for it to go to zero. Considering Airy functions to be the required connection formula, we get: We cannot use Bairy function since it gives growing exponential behaviour for positive x. When compared to WKB solutions and matching their behaviours at , we conclude: , and . Thus, letting some normalization constant be , the wavefunction is given for increasing potential (with x) as: Common oscillating wavefunction Matching the two solutions for region , it is required that the difference between the angles in these functions is where the phase difference accounts for changing cosine to sine for the wavefunction and difference since negation of the function can occur by letting . Thus: Where n is a non-negative integer. This condition can also be rewritten as saying that: The area enclosed by the classical energy curve is . Either way, the condition on the energy is a version of the Bohr–Sommerfeld quantization condition, with a "Maslov correction" equal to 1/2. 
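The quantization condition just stated, in its standard form ∫ p dx = (n + 1/2)πħ between the turning points, can be checked on the harmonic oscillator, for which it happens to reproduce the exact spectrum. The Python sketch below is an added illustration (the harmonic oscillator is not treated in this section): it evaluates the integral numerically for V(x) = ½mω²x² and solves for the energy, recovering E_n = ħω(n + 1/2).

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, m, w = 1.0, 1.0, 1.0                       # units with hbar = m = omega = 1

def action(E):
    """Integral of p(x) = sqrt(2m(E - V(x))) between the turning points of
    the harmonic oscillator V(x) = m*w**2*x**2/2."""
    xt = np.sqrt(2 * E / (m * w**2))             # classical turning point
    integrand = lambda x: np.sqrt(max(2 * m * (E - 0.5 * m * w**2 * x**2), 0.0))
    value, _ = quad(integrand, -xt, xt)
    return value

for n in range(4):
    # Bohr-Sommerfeld condition with the Maslov correction of 1/2.
    E_n = brentq(lambda E: action(E) - (n + 0.5) * np.pi * hbar, 1e-6, 50.0)
    print(f"n = {n}: E_WKB = {E_n:.6f}  (exact hbar*w*(n + 1/2) = {n + 0.5:.6f})")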
It is possible to show that after piecing together the approximations in the various regions, one obtains a good approximation to the actual eigenfunction. In particular, the Maslov-corrected Bohr–Sommerfeld energies are good approximations to the actual eigenvalues of the Schrödinger operator. Specifically, the error in the energies is small compared to the typical spacing of the quantum energy levels. Thus, although the "old quantum theory" of Bohr and Sommerfeld was ultimately replaced by the Schrödinger equation, some vestige of that theory remains, as an approximation to the eigenvalues of the appropriate Schrödinger operator. General connection conditions Thus, from the two cases the connection formula is obtained at a classical turning point, : and: The WKB wavefunction at the classical turning point away from it is approximated by oscillatory sine or cosine function in the classically allowed region, represented in the left and growing or decaying exponentials in the forbidden region, represented in the right. The implication follows due to the dominance of growing exponential compared to decaying exponential. Thus, the solutions of oscillating or exponential part of wavefunctions can imply the form of wavefunction on the other region of potential as well as at the associated turning point. Probability density One can then compute the probability density associated to the approximate wave function. The probability that the quantum particle will be found in the classically forbidden region is small. In the classically allowed region, meanwhile, the probability the quantum particle will be found in a given interval is approximately the fraction of time the classical particle spends in that interval over one period of motion. Since the classical particle's velocity goes to zero at the turning points, it spends more time near the turning points than in other classically allowed regions. This observation accounts for the peak in the wave function (and its probability density) near the turning points. Applications of the WKB method to Schrödinger equations with a large variety of potentials and comparison with perturbation methods and path integrals are treated in Müller-Kirsten. Examples in quantum mechanics Although WKB potential only applies to smoothly varying potentials, in the examples where rigid walls produce infinities for potential, the WKB approximation can still be used to approximate wavefunctions in regions of smoothly varying potentials. Since the rigid walls have highly discontinuous potential, the connection condition cannot be used at these points and the results obtained can also differ from that of the above treatment. Bound states for 1 rigid wall The potential of such systems can be given in the form: where . Finding wavefunction in bound region, ie. within classical turning points and , by considering approximations far from and respectively we have two solutions: Since wavefunction must vanish near , we conclude . For airy functions near , we require . We require that angles within these functions have a phase difference where the phase difference accounts for changing sine to cosine and allowing . Where n is a non-negative integer. Note that the right hand side of this would instead be if n was only allowed to non-zero natural numbers. Thus we conclude that, for In 3 dimensions with spherically symmetry, the same condition holds where the position x is replaced by radial distance r, due to its similarity with this problem. 
Bound states within 2 rigid wall The potential of such systems can be given in the form: where . For between and which are thus the classical turning points, by considering approximations far from and respectively we have two solutions: Since wavefunctions must vanish at and . Here, the phase difference only needs to account for which allows . Hence the condition becomes: where but not equal to zero since it makes the wavefunction zero everywhere. Quantum bouncing ball Consider the following potential a bouncing ball is subjected to: The wavefunction solutions of the above can be solved using the WKB method by considering only odd parity solutions of the alternative potential . The classical turning points are identified and . Thus applying the quantization condition obtained in WKB: Letting where , solving for with given , we get the quantum mechanical energy of a bouncing ball: This result is also consistent with the use of equation from bound state of one rigid wall without needing to consider an alternative potential. Quantum Tunneling The potential of such systems can be given in the form: where . Its solutions for an incident wave is given as where the wavefunction in the classically forbidden region is the WKB approximation but neglecting the growing exponential. This is a fair assumption for wide potential barriers through which the wavefunction is not expected to grow to high magnitudes. By the requirement of continuity of wavefunction and its derivatives, the following relation can be shown: where and . Using we express the values without signs as: Thus, the transmission coefficient is found to be: where , and . The result can be stated as where . See also Airy function Einstein–Brillouin–Keller method Field electron emission Instanton Langer correction Maslov index Method of dominant balance Method of matched asymptotic expansions Method of steepest descent Old quantum theory Perturbation methods Quantum tunneling Slowly varying envelope approximation Supersymmetric WKB approximation References Modern references Historical references External links (An application of the WKB approximation to the scattering of radio waves from the ionosphere.) Approximations Asymptotic analysis Mathematical physics
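The tunneling result in the Quantum Tunneling subsection above can be tested on a barrier for which the leading WKB exponent is known in closed form. For the inverted parabola V(x) = V0 − ½mω²x², the integral 2∫√(2m(V − E)) dx/ħ between the turning points equals 2π(V0 − E)/(ħω). The sketch below is an added illustration with assumed parameters (it uses the standard exponential factor T ≈ exp(−2∫|p| dx/ħ), not the full prefactor derived above): it performs the quadrature numerically and compares with the closed form.

import numpy as np
from scipy.integrate import quad

hbar, m, V0, w = 1.0, 1.0, 5.0, 1.0

def wkb_transmission(E):
    """WKB tunneling probability T ~ exp(-2/hbar * integral of |p|) through
    the inverted parabolic barrier V(x) = V0 - m*w**2*x**2/2, for E < V0."""
    xt = np.sqrt(2 * (V0 - E) / (m * w**2))      # classical turning point
    integrand = lambda x: np.sqrt(max(2 * m * (V0 - 0.5 * m * w**2 * x**2 - E), 0.0))
    value, _ = quad(integrand, -xt, xt)
    return np.exp(-2.0 * value / hbar)

for E in (1.0, 2.0, 3.0, 4.0):
    analytic = np.exp(-2 * np.pi * (V0 - E) / (hbar * w))
    print(f"E = {E}: numeric {wkb_transmission(E):.3e}  analytic {analytic:.3e}")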
WKB approximation
[ "Physics", "Mathematics" ]
3,209
[ "Mathematical analysis", "Applied mathematics", "Theoretical physics", "Mathematical relations", "Asymptotic analysis", "Mathematical physics", "Approximations" ]
701,099
https://en.wikipedia.org/wiki/Luttinger%20liquid
A Luttinger liquid, or Tomonaga–Luttinger liquid, is a theoretical model describing interacting electrons (or other fermions) in a one-dimensional conductor (e.g. quantum wires such as carbon nanotubes). Such a model is necessary as the commonly used Fermi liquid model breaks down for one dimension.
The Tomonaga–Luttinger liquid was first proposed by Sin-Itiro Tomonaga in 1950. The model showed that under certain constraints, second-order interactions between electrons could be modelled as bosonic interactions. In 1963, J.M. Luttinger reformulated the theory in terms of Bloch sound waves and showed that the constraints proposed by Tomonaga were unnecessary in order to treat the second-order perturbations as bosons. But his solution of the model was incorrect; the correct solution was given by Daniel C. Mattis and Elliott H. Lieb in 1965.

Theory
Luttinger liquid theory describes low energy excitations in a 1D electron gas as bosons. Starting with the free electron Hamiltonian, the electron field is separated into left and right moving components, and the dispersion undergoes linearization in a range around the Fermi points. Expressions for bosons in terms of fermions are used to represent the Hamiltonian as a product of two boson operators in a Bogoliubov transformation. The completed bosonization can then be used to predict spin-charge separation. Electron-electron interactions can be treated to calculate correlation functions.

Features
Among the hallmark features of a Luttinger liquid are the following:
The response of the charge (or particle) density to some external perturbation consists of waves ("plasmons" - or charge density waves) propagating at a velocity that is determined by the strength of the interaction and the average density. For a non-interacting system, this wave velocity is equal to the Fermi velocity, while it is higher (lower) for repulsive (attractive) interactions among the fermions.
Likewise, there are spin density waves (whose velocity, to lowest approximation, is equal to the unperturbed Fermi velocity). These propagate independently from the charge density waves. This fact is known as spin-charge separation.
Charge and spin waves are the elementary excitations of the Luttinger liquid, unlike the quasiparticles of the Fermi liquid (which carry both spin and charge). The mathematical description becomes very simple in terms of these waves (solving the one-dimensional wave equation), and most of the work consists in transforming back to obtain the properties of the particles themselves (or treating impurities and other situations where 'backscattering' is important). See bosonization for one technique used.
Even at zero temperature, the particles' momentum distribution function does not display a sharp jump, in contrast to the Fermi liquid (where this jump indicates the Fermi surface).
There is no 'quasiparticle peak' in the momentum-dependent spectral function (i.e. no peak whose width becomes much smaller than the excitation energy above the Fermi level, as is the case for the Fermi liquid). Instead, there is a power-law singularity, with a 'non-universal' exponent that depends on the interaction strength.
Around impurities, there are the usual Friedel oscillations in the charge density, at a wavevector of 2kF. However, in contrast to the Fermi liquid, their decay at large distances is governed by yet another interaction-dependent exponent.
At small temperatures, the scattering of these Friedel oscillations becomes so efficient that the effective strength of the impurity is renormalized to infinity, 'pinching off' the quantum wire. More precisely, the conductance becomes zero as temperature and transport voltage go to zero (and rises like a power law in voltage and temperature, with an interaction-dependent exponent). Likewise, the tunneling rate into a Luttinger liquid is suppressed to zero at low voltages and temperatures, as a power law. The Luttinger model is thought to describe the universal low-frequency/long-wavelength behaviour of any one-dimensional system of interacting fermions (that has not undergone a phase transition into some other state). Physical systems Attempts to demonstrate Luttinger-liquid-like behaviour in those systems are the subject of ongoing experimental research in condensed matter physics. Among the physical systems believed to be described by the Luttinger model are: artificial 'quantum wires' (one-dimensional strips of electrons) defined by applying gate voltages to a two-dimensional electron gas, or by other means (lithography, AFM, etc.) electrons in carbon nanotubes electrons moving along edge states in the fractional quantum Hall effect or integer quantum Hall effect although the latter is often considered a more trivial example. electrons hopping along one-dimensional chains of molecules (e.g. certain organic molecular crystals) fermionic atoms in quasi-one-dimensional atomic traps a 1D 'chain' of half-odd-integer spins described by the Heisenberg model (the Luttinger liquid model also works for integer spins in a large enough magnetic field) electrons in lithium molybdenum purple bronze. See also Fermi liquid Bibliography References External links Short introduction (Stuttgart University, Germany) List of books (FreeScience Library) Theoretical physics Statistical mechanics Condensed matter physics
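The linearization step described in the Theory section above can be illustrated on a simple band. The Python sketch below is an added example (the tight-binding dispersion and parameters are assumed, not taken from the article): it expands ε(k) = −2t cos k about the Fermi points ±kF and compares the exact excitation energies with the linear right- and left-moving branches of slope vF used in the Tomonaga–Luttinger model.

import numpy as np

t, kF = 1.0, np.pi / 3            # hopping amplitude and Fermi momentum (assumed)

def band(k):
    """1D tight-binding dispersion."""
    return -2.0 * t * np.cos(k)

vF = 2.0 * t * np.sin(kF)         # Fermi velocity, d(epsilon)/dk at +kF
EF = band(kF)

for dk in (0.05, 0.1, 0.2):
    exact_R = band(kF + dk) - EF          # right movers, exact
    exact_L = band(-kF - dk) - EF         # left movers, exact
    print(f"dk = {dk}: exact R {exact_R:+.4f}, L {exact_L:+.4f}  vs  linear {vF * dk:+.4f}")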
Luttinger liquid
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,106
[ "Theoretical physics", "Phases of matter", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter" ]
701,141
https://en.wikipedia.org/wiki/Supersymmetry%20breaking
In particle physics, supersymmetry breaking or SUSY breaking is a process by which seemingly non-supersymmetric physics emerges from a supersymmetric theory. Assuming a breaking of supersymmetry is a necessary step to reconcile supersymmetry with experimental observations. Superpartner particles, whose masses would equal the masses of the regular particles if supersymmetry were exact, become much heavier with supersymmetry breaking. In supergravity, this results in a slightly modified counterpart of the Higgs mechanism where the gravitinos become massive. Supersymmetry breaking is also relevant in the domain of applicability of stochastic differential equations, which includes classical physics, and encompasses nonlinear dynamical phenomena such as chaos, turbulence, and pink noise. Various mechanisms for this breaking have been discussed by physicists, including soft SUSY breaking and types of spontaneous symmetry breaking.

Supersymmetry breaking scale
The energy scale at which supersymmetry breaking takes place is known as the supersymmetry breaking scale. In the scenario known as low energy supersymmetry, in which supersymmetry fully solves the hierarchy problem, this scale should not be far from 1000 GeV, and therefore should be accessible using the Large Hadron Collider and future accelerators. However, supersymmetry may also be broken at high energy scales. Nature does not have to be supersymmetric at any scale.

See also
Supersymmetric theory of stochastic dynamics
Big Bang

References
Supersymmetric quantum field theory Symmetry
Supersymmetry breaking
[ "Physics", "Mathematics" ]
317
[ "Symmetry", "Supersymmetric quantum field theory", "Quantum mechanics", "Geometry", "Supersymmetry", "Quantum physics stubs" ]
701,188
https://en.wikipedia.org/wiki/Quantum%20vacuum%20state
In quantum field theory, the quantum vacuum state (also called the quantum vacuum or vacuum state) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. The term zero-point field is sometimes used as a synonym for the vacuum state of a quantized field which is completely individual. According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space". According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of the quantum field. The QED vacuum of quantum electrodynamics (or QED) was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s, it was reformulated by Feynman, Tomonaga, and Schwinger, who jointly received the Nobel prize for this work in 1965. Today, the electromagnetic interactions and the weak interactions are unified (at very high energies only) in the theory of the electroweak interaction. The Standard Model is a generalization of the QED work to include all the known elementary particles and their interactions (except gravity). Quantum chromodynamics (or QCD) is the portion of the Standard Model that deals with strong interactions, and QCD vacuum is the vacuum of quantum chromodynamics. It is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions. Non-zero expectation value If the quantum field theory can be accurately described through perturbation theory, then the properties of the vacuum are analogous to the properties of the ground state of a quantum mechanical harmonic oscillator, or more accurately, the ground state of a measurement problem. In this case, the vacuum expectation value (VEV) of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies (for example, Quantum chromodynamics or the BCS theory of superconductivity), field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass. Energy The vacuum state is associated with a zero-point energy, and this zero-point energy (equivalent to the lowest possible energy state) has measurable effects. It may be detected as the Casimir effect in the laboratory. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant. The energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg (or 0.6 eV). An outstanding requirement imposed on a potential Theory of Everything is that the energy of the quantum vacuum state must explain the physically observed cosmological constant. Symmetry For a relativistic field theory, the vacuum is Poincaré invariant, which follows from Wightman axioms but can also be proved directly without these axioms. Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEV's. The VEV may break some of the internal symmetries of the Lagrangian of the field theory. In this case, the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred. See Higgs mechanism, standard model. 
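The Casimir effect mentioned in the Energy section above provides one of the few directly measurable consequences of the vacuum zero-point energy. The Python sketch below is an added numerical illustration; it assumes the standard ideal result for perfectly conducting parallel plates, P = π²ħc/(240 d⁴), which is quoted from the general literature rather than derived in this article.

import numpy as np
from scipy.constants import hbar, c

def casimir_pressure(d):
    """Attractive pressure (in pascals) between ideal, perfectly conducting
    parallel plates separated by d metres: P = pi^2 * hbar * c / (240 * d^4)."""
    return np.pi**2 * hbar * c / (240.0 * d**4)

for d_nm in (100, 500, 1000):
    d = d_nm * 1e-9
    print(f"d = {d_nm:4d} nm: P = {casimir_pressure(d):.3e} Pa")
# At 100 nm the attraction is of order 10 Pa and falls off as 1/d^4.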
Non-linear permittivity Quantum corrections to Maxwell's equations are expected to result in a tiny nonlinear electric polarization term in the vacuum, resulting in a field-dependent electrical permittivity ε deviating from the nominal value ε0 of vacuum permittivity. These theoretical developments are described, for example, in Dittrich and Gies. The theory of quantum electrodynamics predicts that the QED vacuum should exhibit a slight nonlinearity so that in the presence of a very strong electric field, the permittivity is increased by a tiny amount with respect to ε0. Subject to ongoing experimental efforts is the possibility that a strong electric field would modify the effective permeability of free space, becoming anisotropic with a value slightly below μ0 in the direction of the electric field and slightly exceeding μ0 in the perpendicular direction. The quantum vacuum exposed to an electric field exhibits birefringence for an electromagnetic wave traveling in a direction other than the electric field. The effect is similar to the Kerr effect but without matter being present. This tiny nonlinearity can be interpreted in terms of virtual pair production. A characteristic electric field strength for which the nonlinearities become sizable is predicted to be enormous, about 1.3×1018 V/m, known as the Schwinger limit; the equivalent Kerr constant has been estimated, being about 1020 times smaller than the Kerr constant of water. Explanations for dichroism from particle physics, outside quantum electrodynamics, also have been proposed. Experimentally measuring such an effect is challenging, and has not yet been successful. Virtual particles The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not. The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state, and is described picturesquely as evidence of "virtual particles". It is sometimes attempted to provide an intuitive picture of virtual particles, or variances, based upon the Heisenberg energy-time uncertainty principle ΔE Δt ≥ ħ/2 (with ΔE and Δt being the energy and time variations respectively; ΔE is the accuracy in the measurement of energy and Δt is the time taken in the measurement, and ħ is the reduced Planck constant) arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times. Although the phenomenon of virtual particles is accepted, this interpretation of the energy-time uncertainty relation is not universal. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation because energy and time (unlike position x and momentum p, for example) do not satisfy a canonical commutation relation (such as [x, p] = iħ). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. The many approaches to the energy-time uncertainty principle form a long and continuing subject.
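Returning to the nonlinear-permittivity discussion above, the characteristic field strength quoted there (the Schwinger limit) has a standard closed form in terms of the electron mass and charge. This is the usual textbook expression, supplied here for reference rather than taken from the article's own notation:

```latex
E_{\mathrm{S}} \;=\; \frac{m_{e}^{2} c^{3}}{e\hbar} \;\approx\; 1.32 \times 10^{18}\ \mathrm{V/m}
```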
Physical nature of the quantum vacuum According to Astrid Lambrecht (2002): "When one empties out a space of all matter and lowers the temperature to absolute zero, one produces in a Gedankenexperiment [thought experiment] the quantum vacuum state." According to Fowler & Guggenheim (1939/1965), the third law of thermodynamics may be precisely enunciated as follows: It is impossible by any procedure, no matter how idealized, to reduce any assembly to the absolute zero in a finite number of operations. (See also Wilks, J. (1971). The Third Law of Thermodynamics, Chapter 6 in Thermodynamics, volume 1, ed. W. Jost, of H. Eyring, D. Henderson, W. Jost, Physical Chemistry. An Advanced Treatise, Academic Press, New York, p. 477.) Photon-photon interaction can occur only through interaction with the vacuum state of some other field, such as the Dirac electron-positron vacuum field; this is associated with the concept of vacuum polarization. According to Milonni (1994): "... all quantum fields have zero-point energies and vacuum fluctuations." This means that there is a component of the quantum vacuum respectively for each component field (considered in the conceptual absence of the other fields), such as the electromagnetic field, the Dirac electron-positron field, and so on. According to Milonni (1994), some of the effects attributed to the vacuum electromagnetic field can have several physical interpretations, some more conventional than others. The Casimir attraction between uncharged conductive plates is often proposed as an example of an effect of the vacuum electromagnetic field. Schwinger, DeRaad, and Milton (1978) are cited by Milonni (1994) as validly, though unconventionally, explaining the Casimir effect with a model in which "the vacuum is regarded as truly a state with all physical properties equal to zero." In this model, the observed phenomena are explained as the effects of the electron motions on the electromagnetic field, called the source field effect. Milonni writes: The basic idea here will be that the Casimir force may be derived from the source fields alone even in completely conventional QED, ... Milonni provides detailed argument that the measurable physical effects usually attributed to the vacuum electromagnetic field cannot be explained by that field alone, but require in addition a contribution from the self-energy of the electrons, or their radiation reaction. He writes: "The radiation reaction and the vacuum fields are two aspects of the same thing when it comes to physical interpretations of various QED processes including the Lamb shift, van der Waals forces, and Casimir effects." This point of view is also stated by Jaffe (2005): "The Casimir force can be calculated without reference to vacuum fluctuations, and like all other observable effects in QED, it vanishes as the fine structure constant, α, goes to zero." Notations The vacuum state is written as |0⟩ or |⟩. The vacuum expectation value (see also Expectation value) of any field φ should be written as ⟨0|φ|0⟩. See also Pair production Vacuum energy Lamb shift False vacuum decay Squeezed coherent state Quantum fluctuation Scharnhorst effect Van der Waals force Casimir effect References Further reading Free pdf copy of The Structured Vacuum – thinking about nothing by Johann Rafelski and Berndt Muller (1985). M. E. Peskin and D. V. Schroeder, An introduction to Quantum Field Theory. H. Genz, Nothingness: The Science of Empty Space. E. W. Davis, V. L. Teofilo, B. Haisch, H. E. Puthoff, L. J. Nickisch, A. Rueda and D. C.
Cole (2006), "Review of Experimental Concepts for Studying the Quantum Vacuum Field". External links Energy into Matter Quantum field theory Vacuum Quantum states Articles containing video clips
Quantum vacuum state
[ "Physics" ]
2,209
[ "Quantum field theory", "Vacuum", "Quantum mechanics", "Quantum states", "Matter" ]
701,333
https://en.wikipedia.org/wiki/Nuclear%20fission%20product
Nuclear fission products are the atomic fragments left after a large atomic nucleus undergoes nuclear fission. Typically, a large nucleus like that of uranium fissions by splitting into two smaller nuclei, along with a few neutrons, the release of heat energy (kinetic energy of the nuclei), and gamma rays. The two smaller nuclei are the fission products. (See also Fission products (by element)). About 0.2% to 0.4% of fissions are ternary fissions, producing a third light nucleus such as helium-4 (90%) or tritium (7%). The fission products themselves are usually unstable and therefore radioactive. Due to being relatively neutron-rich for their atomic number, many of them quickly undergo beta decay. This releases additional energy in the form of beta particles, antineutrinos, and gamma rays. Thus, fission events normally result in beta and additional gamma radiation that begins immediately after, even though this radiation is not produced directly by the fission event itself. The produced radionuclides have varying half-lives, and therefore vary in radioactivity. For instance, strontium-89 and strontium-90 are produced in similar quantities in fission, and each nucleus decays by beta emission. But 90Sr has a 30-year half-life, and 89Sr a 50.5-day half-life. Thus in the 50.5 days it takes half the 89Sr atoms to decay, emitting the same number of beta particles as there were decays, less than 0.4% of the 90Sr atoms have decayed, emitting only 0.4% of the betas. The radioactive emission rate is highest for the shortest lived radionuclides, although they also decay the fastest. Additionally, less stable fission products are less likely to decay to stable nuclides, instead decaying to other radionuclides, which undergo further decay and radiation emission, adding to the radiation output. It is these short lived fission products that are the immediate hazard of spent fuel, and the energy output of the radiation also generates significant heat which must be considered when storing spent fuel. As there are hundreds of different radionuclides created, the initial radioactivity level fades quickly as short lived radionuclides decay, but never ceases completely as longer lived radionuclides make up more and more of the remaining unstable atoms. In fact the short lived products are so predominant that 87 percent decay to stable isotopes within the first month after removal from the reactor core. Formation and decay The sum of the atomic mass of the two atoms produced by the fission of one fissile atom is always less than the atomic mass of the original atom. This is because some of the mass is lost as free neutrons, and once kinetic energy of the fission products has been removed (i.e., the products have been cooled to extract the heat provided by the reaction), then the mass associated with this energy is lost to the system also, and thus appears to be "missing" from the cooled fission products. Since the nuclei that can readily undergo fission are particularly neutron-rich (e.g. 61% of the nucleons in uranium-235 are neutrons), the initial fission products are often more neutron-rich than stable nuclei of the same mass as the fission product (e.g. stable zirconium-90 is 56% neutrons compared to unstable strontium-90 at 58%). The initial fission products therefore may be unstable and typically undergo beta decay to move towards a stable configuration, converting a neutron to a proton with each beta emission. (Fission products do not decay via alpha decay.) 
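The strontium comparison above is simple exponential-decay arithmetic. A minimal sketch, using only the half-lives quoted in the text; this is illustrative and not part of the source article:

```python
import math

def fraction_decayed(t_days: float, half_life_days: float) -> float:
    """Fraction of an initially pure sample that has decayed after t_days."""
    return 1.0 - math.exp(-math.log(2) * t_days / half_life_days)

t = 50.5                                   # days, one half-life of Sr-89
sr89 = fraction_decayed(t, 50.5)           # ~0.50
sr90 = fraction_decayed(t, 30 * 365.25)    # ~0.003, i.e. well under 0.4%

print(f"Sr-89 decayed: {sr89:.1%}, Sr-90 decayed: {sr90:.2%}")
```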
A few neutron-rich and short-lived initial fission products decay by ordinary beta decay (this is the source of perceptible half life, typically a few tenths of a second to a few seconds), followed by immediate emission of a neutron by the excited daughter-product. This process is the source of so-called delayed neutrons, which play an important role in control of a nuclear reactor. The first beta decays are rapid and may release high energy beta particles or gamma radiation. However, as the fission products approach stable nuclear conditions, the last one or two decays may have a long half-life and release less energy. Radioactivity over time Fission products have half-lives of 90 years (samarium-151) or less, except for seven long-lived fission products that have half lives of 211,100 years (technetium-99) or more. Therefore, the total radioactivity of a mixture of pure fission products decreases rapidly for the first several hundred years (controlled by the short-lived products) before stabilizing at a low level that changes little for hundreds of thousands of years (controlled by the seven long-lived products). This behavior of pure fission products with actinides removed, contrasts with the decay of fuel that still contains actinides. This fuel is produced in the so-called "open" (i.e., no nuclear reprocessing) nuclear fuel cycle. A number of these actinides have half lives in the missing range of about 100 to 200,000 years, causing some difficulty with storage plans in this time-range for open cycle non-reprocessed fuels. Proponents of nuclear fuel cycles which aim to consume all their actinides by fission, such as the Integral Fast Reactor and molten salt reactor, use this fact to claim that within 200 years, their fuel wastes are no more radioactive than the original uranium ore. Fission products emit beta radiation, while actinides primarily emit alpha radiation. Many of each also emit gamma radiation. Yield Each fission of a parent atom produces a different set of fission product atoms. However, while an individual fission is not predictable, the fission products are statistically predictable. The amount of any particular isotope produced per fission is called its yield, typically expressed as percent per parent fission; therefore, yields total to 200%, not 100%. (The true total is in fact slightly greater than 200%, owing to rare cases of ternary fission.) While fission products include every element from zinc through the lanthanides, the majority of the fission products occur in two peaks. One peak occurs at about (expressed by atomic masses 85 through 105) strontium to ruthenium while the other peak is at about tellurium to neodymium (expressed by atomic masses 130 through 145). The yield is somewhat dependent on the parent atom and also on the energy of the initiating neutron. In general the higher the energy of the state that undergoes nuclear fission, the more likely that the two fission products have similar mass. Hence, as the neutron energy increases and/or the energy of the fissile atom increases, the valley between the two peaks becomes more shallow. For instance, the curve of yield against mass for 239Pu has a more shallow valley than that observed for 235U when the neutrons are thermal neutrons. The curves for the fission of the later actinides tend to make even more shallow valleys. In extreme cases such as 259Fm, only one peak is seen; this is a consequence of symmetric fission becoming dominant due to shell effects. 
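The radioactivity-over-time behaviour described above (a rapid early drop followed by a long, nearly flat tail) can be illustrated with a toy two-component mixture. The equal initial atom counts below are assumed purely for illustration; the half-lives are the two quoted in the text:

```python
import math

def activity(n_atoms: float, half_life_years: float, t_years: float) -> float:
    """Activity (decays per year) of a single species at time t."""
    lam = math.log(2) / half_life_years
    return lam * n_atoms * math.exp(-lam * t_years)

# One short-lived and one long-lived product (90 y for Sm-151, 211,100 y for Tc-99),
# starting from equal numbers of atoms.
for t in (0, 100, 1_000, 100_000):
    total = activity(1e6, 90, t) + activity(1e6, 211_100, t)
    print(t, f"{total:.3g}")
```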
The adjacent figure shows a typical fission product distribution from the fission of uranium. Note that in the calculations used to make this graph, the activation of fission products was ignored and the fission was assumed to occur in a single moment rather than a length of time. In this bar chart results are shown for different cooling times (time after fission). Because of the stability of nuclei with even numbers of protons and/or neutrons, the curve of yield against element is not a smooth curve but tends to alternate. Note that the curve against mass number is smooth. Production Small amounts of fission products are naturally formed as the result of either spontaneous fission of natural uranium, which occurs at a low rate, or as a result of neutrons from radioactive decay or reactions with cosmic ray particles. The microscopic tracks left by these fission products in some natural minerals (mainly apatite and zircon) are used in fission track dating to provide the cooling (crystallization) ages of natural rocks. The technique has an effective dating range of 0.1 Ma to >1.0 Ga depending on the mineral used and the concentration of uranium in that mineral. About 1.5 billion years ago in a uranium ore body in Africa, a natural nuclear fission reactor operated for a few hundred thousand years and produced approximately 5 tonnes of fission products. These fission products were important in providing proof that the natural reactor had occurred. Fission products are produced in nuclear weapon explosions, with the amount depending on the type of weapon. The largest source of fission products is from nuclear reactors. In current nuclear power reactors, about 3% of the uranium in the fuel is converted into fission products as a by-product of energy generation. Most of these fission products remain in the fuel unless there is fuel element failure or a nuclear accident, or the fuel is reprocessed. Power reactors Commercial nuclear fission reactors are operated in the otherwise self-extinguishing prompt subcritical state. Certain fission products decay over seconds to minutes, producing additional delayed neutrons crucial to sustaining criticality. An example is bromine-87 with a half-life of about a minute. Operating in this delayed critical state, power changes slowly enough to permit human and automatic control. Analogous to fire dampers varying the movement of wood embers towards new fuel, control rods are moved as the nuclear fuel burns up over time. In a nuclear power reactor, the main sources of radioactivity are fission products along with actinides and activation products. Fission products are most of the radioactivity for the first several hundred years, while actinides dominate roughly 103 to 105 years after fuel use. Most fission products are retained near their points of production. They are important to reactor operation not only because some contribute delayed neutrons useful for reactor control, but some are neutron poisons that inhibit the nuclear reaction. Buildup of neutron poisons is a key to how long a given fuel element can be kept in the reactor. Fission product decay also generates heat that continues even after the reactor has been shut down and fission stopped. This decay heat requires removal after shutdown; loss of this cooling damaged the reactors at Three Mile Island and Fukushima. If the fuel cladding around the fuel develops holes, fission products can leak into the primary coolant. 
Depending on the chemistry, they may settle within the reactor core or travel through the coolant system and chemistry control systems are provided to remove them. In a well-designed power reactor running under normal conditions, coolant radioactivity is very low. The isotope responsible for most of the gamma exposure in fuel reprocessing plants (and the Chernobyl site in 2005) is caesium-137. Iodine-129 is a major radioactive isotope released from reprocessing plants. In nuclear reactors both caesium-137 and strontium-90 are found in locations away from the fuel because they're formed by the beta decay of noble gases (xenon-137, with a 3.8-minute half-life, and krypton-90, with a 32-second half-life) which enable them to be deposited away from the fuel, e.g. on control rods. Nuclear reactor poisons Some fission products decay with the release of delayed neutrons, important to nuclear reactor control. Other fission products, such as xenon-135 and samarium-149, have a high neutron absorption cross section. Since a nuclear reactor must balance neutron production and absorption rates, fission products that absorb neutrons tend to "poison" or shut the reactor down; this is controlled with burnable poisons and control rods. Build-up of xenon-135 during shutdown or low-power operation may poison the reactor enough to impede restart or interfere with normal control of the reaction during restart or restoration of full power. This played a major role in the Chernobyl disaster. Nuclear weapons Nuclear weapons use fission as either the partial or the main energy source. Depending on the weapon design and where it is exploded, the relative importance of the fission product radioactivity will vary compared to the activation product radioactivity in the total fallout radioactivity. The immediate fission products from nuclear weapon fission are essentially the same as those from any other fission source, depending slightly on the particular nuclide that is fissioning. However, the very short time scale for the reaction makes a difference in the particular mix of isotopes produced from an atomic bomb. For example, the 134Cs/137Cs ratio provides an easy method of distinguishing between fallout from a bomb and the fission products from a power reactor. Almost no caesium-134 is formed by nuclear fission (because xenon-134 is stable). The 134Cs is formed by the neutron activation of the stable 133Cs which is formed by the decay of isotopes in the isobar (A = 133). So in a momentary criticality, by the time that the neutron flux becomes zero too little time will have passed for any 133Cs to be present. While in a power reactor plenty of time exists for the decay of the isotopes in the isobar to form 133Cs, the 133Cs thus formed can then be activated to form 134Cs only if the time between the start and the end of the criticality is long. According to Jiri Hala's textbook, the radioactivity in the fission product mixture in an atom bomb is mostly caused by short-lived isotopes such as iodine-131 and barium-140. After about four months, cerium-141, zirconium-95/niobium-95, and strontium-89 represent the largest share of radioactive material. After two to three years, cerium-144/praseodymium-144, ruthenium-106/rhodium-106, and promethium-147 are responsible for the bulk of the radioactivity. After a few years, the radiation is dominated by strontium-90 and caesium-137, whereas in the period between 10,000 and a million years it is technetium-99 that dominates. 
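The point earlier in this section about caesium-137 being deposited away from the fuel comes down to its short-lived gaseous precursor decaying in transit. A hedged sketch of the conversion, using the xenon-137 half-life quoted above and treating Cs-137 as effectively stable on this timescale (the 20-minute figure is just an illustrative choice):

```python
import math

XE137_HALF_LIFE_S = 3.8 * 60               # from the text above
lam = math.log(2) / XE137_HALF_LIFE_S      # decay constant of Xe-137

def cs137_fraction(t_seconds: float) -> float:
    """Fraction of an initial Xe-137 population that has become Cs-137
    after t seconds (Cs-137's own 30-year half-life is ignored here)."""
    return 1.0 - math.exp(-lam * t_seconds)

# After ~20 minutes essentially all of the mobile xenon has converted,
# wherever the gas happened to drift in the meantime.
print(f"{cs137_fraction(20 * 60):.1%}")    # ~97%
```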
Application Some fission products (such as 137Cs) are used in medical and industrial radioactive sources. 99TcO4− (pertechnetate) ion can react with steel surfaces to form a corrosion resistant layer. In this way these metaloxo anions act as anodic corrosion inhibitors - it renders the steel surface passive. The formation of 99TcO2 on steel surfaces is one effect which will retard the release of 99Tc from nuclear waste drums and nuclear equipment which has become lost prior to decontamination (e.g. nuclear submarine reactors which have been lost at sea). In a similar way the release of radio-iodine in a serious power reactor accident could be retarded by adsorption on metal surfaces within the nuclear plant. Much of the other work on the iodine chemistry which would occur during a bad accident has been done. Decay For fission of uranium-235, the predominant radioactive fission products include isotopes of iodine, caesium, strontium, xenon and barium. The threat becomes smaller with the passage of time. Locations where radiation fields once posed immediate mortal threats, such as much of the Chernobyl Nuclear Power Plant on day one of the accident and the ground zero sites of U.S. atomic bombings in Japan (6 hours after detonation) are now relatively safe because the radioactivity has decreased to a low level. Many of the fission products decay through very short-lived isotopes to form stable isotopes, but a considerable number of the radioisotopes have half-lives longer than a day. The radioactivity in the fission product mixture is initially mostly caused by short lived isotopes such as 131I and 140Ba; after about four months 141Ce, 95Zr/95Nb and 89Sr take the largest share, while after about two or three years the largest share is taken by 144Ce/144Pr, 106Ru/106Rh and 147Pm. Later 90Sr and 137Cs are the main radioisotopes, being succeeded by 99Tc. In the case of a release of radioactivity from a power reactor or used fuel, only some elements are released; as a result, the isotopic signature of the radioactivity is very different from an open air nuclear detonation, where all the fission products are dispersed. Fallout countermeasures The purpose of radiological emergency preparedness is to protect people from the effects of radiation exposure after a nuclear accident or bomb. Evacuation is the most effective protective measure. However, if evacuation is impossible or even uncertain, then local fallout shelters and other measures provide the best protection. Iodine At least three isotopes of iodine are important. 129I, 131I (radioiodine) and 132I. Open air nuclear testing and the Chernobyl disaster both released iodine-131. The short-lived isotopes of iodine are particularly harmful because the thyroid collects and concentrates iodide – radioactive as well as stable. Absorption of radioiodine can lead to acute, chronic, and delayed effects. Acute effects from high doses include thyroiditis, while chronic and delayed effects include hypothyroidism, thyroid nodules, and thyroid cancer. It has been shown that the active iodine released from Chernobyl and Mayak has resulted in an increase in the incidence of thyroid cancer in the former Soviet Union. One measure which protects against the risk from radio-iodine is taking a dose of potassium iodide (KI) before exposure to radioiodine. The non-radioactive iodide "saturates" the thyroid, causing less of the radioiodine to be stored in the body. 
Administering potassium iodide reduces the effects of radio-iodine by 99% and is a prudent, inexpensive supplement to fallout shelters. A low-cost alternative to commercially available iodine pills is a saturated solution of potassium iodide. Long-term storage of KI is normally in the form of reagent-grade crystals. The administration of known goitrogen substances can also be used as a prophylaxis in reducing the bio-uptake of iodine (whether it be the nutritional non-radioactive iodine-127 or radioactive iodine, radioiodine - most commonly iodine-131, as the body cannot discern between different iodine isotopes). Perchlorate ions, a common water contaminant in the USA due to the aerospace industry, have been shown to reduce iodine uptake and thus are classified as a goitrogen. Perchlorate ions are a competitive inhibitor of the process by which iodide is actively deposited into thyroid follicular cells. Studies involving healthy adult volunteers determined that at levels above 0.007 milligrams per kilogram per day (mg/(kg·d)), perchlorate begins to temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition", thus perchlorate is a known goitrogen). The reduction of the iodide pool by perchlorate has dual effects – reduction of excess hormone synthesis and hyperthyroidism, on the one hand, and reduction of thyroid inhibitor synthesis and hypothyroidism on the other. Perchlorate remains very useful as a single dose application in tests measuring the discharge of radioiodide accumulated in the thyroid as a result of many different disruptions in the further metabolism of iodide in the thyroid gland. Treatment of thyrotoxicosis (including Graves' disease) with 600–2,000 mg potassium perchlorate (430–1,400 mg perchlorate) daily for periods of several months or longer was once common practice, particularly in Europe, and perchlorate use at lower doses to treat thyroid problems continues to this day. Although 400 mg of potassium perchlorate divided into four or five daily doses was used initially and found effective, higher doses were introduced when 400 mg/day was discovered not to control thyrotoxicosis in all subjects. Current regimens for treatment of thyrotoxicosis (including Graves' disease), when a patient is exposed to additional sources of iodine, commonly include 500 mg potassium perchlorate twice per day for 18–40 days. Prophylaxis with perchlorate-containing water at concentrations of 17 ppm, which corresponds to 0.5 mg/kg-day personal intake, if one is 70 kg and consumes 2 litres of water per day, was found to reduce baseline radioiodine uptake by 67%. This is equivalent to ingesting a total of just 35 mg of perchlorate ions per day. In another related study where subjects drank just 1 litre of perchlorate-containing water per day at a concentration of 10 ppm, i.e. 10 mg of perchlorate ions were ingested daily, an average 38% reduction in the uptake of iodine was observed. However, when the average perchlorate absorption in perchlorate plant workers subjected to the highest exposure has been estimated as approximately 0.5 mg/kg-day, as in the above paragraph, a 67% reduction of iodine uptake would be expected. Studies of chronically exposed workers though have thus far failed to detect any abnormalities of thyroid function, including the uptake of iodine. This may well be attributable to sufficient daily exposure to or intake of healthy iodine-127 among the workers and the short 8-hour biological half-life of perchlorate in the body.
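The dose figures quoted in the preceding paragraphs reduce to straightforward unit arithmetic. A small sketch, assuming (as the text does) a 70 kg adult drinking 2 litres of water per day, and reading 1 ppm in water as 1 mg per litre:

```python
def daily_intake_mg(conc_ppm: float, litres_per_day: float = 2.0) -> float:
    """Daily perchlorate intake in mg, treating 1 ppm as 1 mg/L."""
    return conc_ppm * litres_per_day

def dose_mg_per_kg(conc_ppm: float, body_mass_kg: float = 70.0,
                   litres_per_day: float = 2.0) -> float:
    """Daily intake normalized by body mass (mg per kg per day)."""
    return daily_intake_mg(conc_ppm, litres_per_day) / body_mass_kg

print(daily_intake_mg(17))              # 34 mg/day, i.e. roughly the "35 mg" figure above
print(round(dose_mg_per_kg(17), 2))     # ~0.49 mg/kg-day, i.e. the "0.5 mg/kg-day" figure
print(round(dose_mg_per_kg(250), 2))    # ~7.14 mg/kg-day, matching the "7.15" figure below
```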
To completely block the uptake of iodine-131 by the purposeful addition of perchlorate ions to a populace's water supply, aiming at dosages of 0.5 mg/kg-day, or a water concentration of 17 ppm, would therefore be grossly inadequate at truly reducing radioiodine uptake. Perchlorate ion concentrations in a region's water supply would need to be much higher, at least 7.15 mg/kg of body weight per day, or a water concentration of 250 ppm, assuming people drink 2 liters of water per day, to be truly beneficial to the population at preventing bioaccumulation when exposed to a radioiodine environment, independent of the availability of iodate or iodide drugs. The continual distribution of perchlorate tablets or the addition of perchlorate to the water supply would need to continue for no less than 80–90 days, beginning immediately after the initial release of radioiodine was detected. After 80–90 days passed, released radioactive iodine-131 would have decayed to less than 0.1% of its initial quantity, at which time the danger from biouptake of iodine-131 is essentially over. In the event of a radioiodine release, the ingestion of prophylaxis potassium iodide, if available, or even iodate, would rightly take precedence over perchlorate administration, and would be the first line of defense in protecting the population from a radioiodine release. However, in the event of a radioiodine release too massive and widespread to be controlled by the limited stock of iodide and iodate prophylaxis drugs, then the addition of perchlorate ions to the water supply, or distribution of perchlorate tablets would serve as a cheap, efficacious, second line of defense against carcinogenic radioiodine bioaccumulation. The ingestion of goitrogen drugs is, much like potassium iodide also not without its dangers, such as hypothyroidism. In all these cases however, despite the risks, the prophylaxis benefits of intervention with iodide, iodate, or perchlorate outweigh the serious cancer risk from radioiodine bioaccumulation in regions where radioiodine has sufficiently contaminated the environment. Caesium The Chernobyl accident released a large amount of caesium isotopes which were dispersed over a wide area. 137Cs is an isotope which is of long-term concern as it remains in the top layers of soil. Plants with shallow root systems tend to absorb it for many years. Hence grass and mushrooms can carry a considerable amount of 137Cs, which can be transferred to humans through the food chain. One of the best countermeasures in dairy farming against 137Cs is to mix up the soil by deeply ploughing the soil. This has the effect of putting the 137Cs out of reach of the shallow roots of the grass, hence the level of radioactivity in the grass will be lowered. Also the removal of top few centimeters of soil and its burial in a shallow trench will reduce the dose to humans and animals as the gamma rays from 137Cs will be attenuated by their passage through the soil. The deeper and more remote the trench is, the better the degree of protection. Fertilizers containing potassium can be used to dilute cesium and limit its uptake by plants. In livestock farming, another countermeasure against 137Cs is to feed to animals prussian blue. This compound acts as an ion-exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to consume several grams of prussian blue per day. The prussian blue reduces the biological half-life (different from the nuclear half-life) of the caesium. 
The physical or nuclear half-life of 137Cs is about 30 years. Caesium in humans normally has a biological half-life of between one and four months. An added advantage of the prussian blue is that the caesium which is stripped from the animal in the droppings is in a form which is not available to plants. Hence it prevents the caesium from being recycled. The form of prussian blue required for the treatment of animals, including humans, is a special grade. Attempts to use the pigment grade used in paints have not been successful. Strontium The addition of lime to soils which are poor in calcium can reduce the uptake of strontium by plants. Likewise, in areas where the soil is low in potassium, the addition of a potassium fertilizer can discourage the uptake of cesium into plants. However, such treatments with either lime or potash should not be undertaken lightly, as they can alter the soil chemistry greatly, resulting in a change in the plant ecology of the land. Health concerns For introduction of radionuclides into an organism, ingestion is the most important route. Insoluble compounds are not absorbed from the gut and cause only local irradiation before they are excreted. Soluble forms, however, show a wide range of absorption percentages. See also Fission product yield Fission products (by element) Long-lived fission product Notes References Bibliography Paul Reuss, Neutron Physics, ch. 2.10.2, p. 75 External links Iodine fallout studies in the United States The Live Chart of Nuclides - IAEA Color-map of product yields, and detailed data by clicking on a nuclide. Radioactive contamination Nuclear technology Nuclear chemistry Nuclear physics Nuclear fission Inorganic chemistry Radiobiology
Nuclear fission product
[ "Physics", "Chemistry", "Technology", "Biology" ]
5,651
[ "Nuclear fission", "Radioactive contamination", "Nuclear chemistry", "Radiobiology", "Nuclear technology", "Fission products", "Nuclear fallout", "Environmental impact of nuclear power", "nan", "Nuclear physics", "Radioactivity" ]
701,756
https://en.wikipedia.org/wiki/Salt%20%28cryptography%29
In cryptography, a salt is random data fed as an additional input to a one-way function that hashes data, a password or passphrase. Salting helps defend against attacks that use precomputed tables (e.g. rainbow tables), by vastly increasing the size of the table needed for a successful attack. It also helps protect passwords that occur multiple times in a database, as a new salt is used for each password instance. Additionally, salting does not place any burden on users. Typically, a unique salt is randomly generated for each password. The salt and the password (or its version after key stretching) are concatenated and fed to a cryptographic hash function, and the output hash value is then stored with the salt in a database. The salt does not need to be encrypted, because knowing the salt would not help the attacker. Salting is broadly used in cybersecurity, from Unix system credentials to Internet security. Salts are related to cryptographic nonces. Example Without a salt, identical passwords will map to identical hash values, which could make it easier for a hacker to guess the passwords from their hash value. Instead, a salt is generated and appended to each password, which causes the resultant hash to output different values for the same original password. The salt and hash are then stored in the database. To later test if a password a user enters is correct, the same process can be performed on it (appending that user's salt to the password and calculating the resultant hash): if the result does not match the stored hash, it could not have been the correct password that was entered. In practice, a salt is usually generated using a cryptographically secure pseudorandom number generator (CSPRNG). CSPRNGs are designed to produce unpredictable random numbers which can be alphanumeric. While generally discouraged due to lower security, some systems use timestamps or simple counters as a source of salt. Sometimes, a salt may be generated by combining a random value with additional information, such as a timestamp or user-specific data, to ensure uniqueness across different systems or time periods. Common mistakes Salt re-use Using the same salt for all passwords is dangerous because a precomputed table which simply accounts for the salt will render the salt useless. Generation of precomputed tables for databases with unique salts for every password is not viable because of the computational cost of doing so. But, if a common salt is used for all the entries, creating such a table (that accounts for the salt) then becomes a viable and possibly successful attack. Because salt re-use can cause users with the same password to have the same hash, cracking a single hash can result in other passwords being compromised too. Salt length If a salt is too short, an attacker may precompute a table of every possible salt appended to every likely password. Using a long salt ensures such a table would be prohibitively large. 16 bytes (128 bits) or more is generally sufficient to provide a large enough space of possible values, minimizing the risk of collisions (i.e., two different passwords ending up with the same salt). Benefits To understand the difference between cracking a single password and a set of them, consider a file with users and their hashed passwords. Say the file is unsalted. Then an attacker could pick a string, call it attempt[0], and then compute hash(attempt[0]). A user whose hash stored in the file is hash(attempt[0]) may or may not have password attempt[0].
However, even if attempt[0] is not the user's actual password, it will be accepted as if it were, because the system can only check passwords by computing the hash of the password entered and comparing it to the hash stored in the file. Thus, each match cracks a user password, and the chance of a match rises with the number of passwords in the file. In contrast, if salts are used, the attacker would have to compute hash(attempt[0] || salt[a]), compare against entry A, then hash(attempt[0] || salt[b]), compare against entry B, and so on. This prevents any one attempt from cracking multiple passwords, given that salt re-use is avoided. Salts also combat the use of precomputed tables for cracking passwords. Such a table might simply map common passwords to their hashes, or it might do something more complex, like store the start and end points of a set of precomputed hash chains. In either case, salting can defend against the use of precomputed tables by lengthening hashes and having them draw from larger character sets, making it less likely that the table covers the resulting hashes. In particular, a precomputed table would need to cover the string password || salt rather than simply password. The modern shadow password system, in which password hashes and other security data are stored in a non-public file, somewhat mitigates these concerns. However, they remain relevant in multi-server installations which use centralized password management systems to push passwords or password hashes to multiple systems. In such installations, the root account on each individual system may be treated as less trusted than the administrators of the centralized password system, so it remains worthwhile to ensure that the security of the password hashing algorithm, including the generation of unique salt values, is adequate. Another (lesser) benefit of a salt is as follows: two users might choose the same string as their password. Without a salt, this password would be stored as the same hash string in the password file. This would disclose the fact that the two accounts have the same password, allowing anyone who knows one of the account's passwords to access the other account. By salting the passwords with two random characters, even if two accounts use the same password, no one can discover this just by reading hashes. Salting also makes it extremely difficult to determine if a person has used the same password for multiple systems. Unix implementations 1970s–1980s Earlier versions of Unix used a password file /etc/passwd to store the hashes of salted passwords (passwords prefixed with two-character random salts). In these older versions of Unix, the salt was also stored in the passwd file (as cleartext) together with the hash of the salted password. The password file was publicly readable for all users of the system. This was necessary so that user-privileged software tools could find user names and other information. The security of passwords is therefore protected only by the one-way functions (enciphering or hashing) used for the purpose. Early Unix implementations limited passwords to eight characters and used a 12-bit salt, which allowed for 4,096 possible salt values. This was an appropriate balance for 1970s computational and storage costs. 1980s–present The shadow password system is used to limit access to hashes and salt. The salt is eight characters, the hash is 86 characters, and the password length is effectively unlimited.
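As a concrete illustration of the hash-then-verify scheme described in the Example section above, here is a minimal sketch, not a production recommendation. The function names and parameter choices (PBKDF2 with SHA-256, a 16-byte salt, an illustrative iteration count) are assumptions of this sketch rather than anything specified in the article:

```python
import hashlib
import hmac
import secrets

# Illustrative parameters -- not prescribed by the article.
HASH_NAME = "sha256"
ITERATIONS = 100_000
SALT_BYTES = 16            # 128-bit salt, matching the "16 bytes or more" guidance above

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Generate a fresh random salt and return (salt, digest) for storage."""
    salt = secrets.token_bytes(SALT_BYTES)
    digest = hashlib.pbkdf2_hmac(HASH_NAME, password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the salted hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac(HASH_NAME, password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)

# Usage: two users with the same password get different salts, hence different digests.
s1, d1 = hash_password("correct horse battery staple")
s2, d2 = hash_password("correct horse battery staple")
assert d1 != d2 and verify_password("correct horse battery staple", s1, d1)
```

The constant-time comparison (hmac.compare_digest) is a conventional extra precaution against timing side channels; it is not required by the salting argument itself.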
Web-application implementations It is common for a web application to store in a database the hash value of a user's password. Without a salt, a successful SQL injection attack may yield easily crackable passwords. Because many users re-use passwords for multiple sites, the use of a salt is an important component of overall web application security. Some additional references for using a salt to secure password hashes in specific languages or libraries (PHP, the .NET libraries, etc.) can be found in the external links section below. See also Password cracking Cryptographic nonce Initialization vector Padding "Spice" in the Hasty Pudding cipher Rainbow tables Pepper (cryptography) References External links OWASP Cryptographic Cheat Sheet how to encrypt user passwords Cryptography Password authentication
Salt (cryptography)
[ "Mathematics", "Engineering" ]
1,605
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
701,991
https://en.wikipedia.org/wiki/Creation%20and%20annihilation%20operators
Creation operators and annihilation operators are mathematical operators that have widespread applications in quantum mechanics, notably in the study of quantum harmonic oscillators and many-particle systems. An annihilation operator (usually denoted ) lowers the number of particles in a given state by one. A creation operator (usually denoted ) increases the number of particles in a given state by one, and it is the adjoint of the annihilation operator. In many subfields of physics and chemistry, the use of these operators instead of wavefunctions is known as second quantization. They were introduced by Paul Dirac. Creation and annihilation operators can act on states of various types of particles. For example, in quantum chemistry and many-body theory the creation and annihilation operators often act on electron states. They can also refer specifically to the ladder operators for the quantum harmonic oscillator. In the latter case, the creation operator is interpreted as a raising operator, adding a quantum of energy to the oscillator system (similarly for the lowering operator). They can be used to represent phonons. Constructing Hamiltonians using these operators has the advantage that the theory automatically satisfies the cluster decomposition theorem. The mathematics for the creation and annihilation operators for bosons is the same as for the ladder operators of the quantum harmonic oscillator. For example, the commutator of the creation and annihilation operators that are associated with the same boson state equals one, while all other commutators vanish. However, for fermions the mathematics is different, involving anticommutators instead of commutators. Ladder operators for the quantum harmonic oscillator In the context of the quantum harmonic oscillator, one reinterprets the ladder operators as creation and annihilation operators, adding or subtracting fixed quanta of energy to the oscillator system. Creation/annihilation operators are different for bosons (integer spin) and fermions (half-integer spin). This is because their wavefunctions have different symmetry properties. First consider the simpler bosonic case of the photons of the quantum harmonic oscillator. Start with the Schrödinger equation for the one-dimensional time independent quantum harmonic oscillator, Make a coordinate substitution to nondimensionalize the differential equation The Schrödinger equation for the oscillator becomes Note that the quantity is the same energy as that found for light quanta and that the parenthesis in the Hamiltonian can be written as The last two terms can be simplified by considering their effect on an arbitrary differentiable function which implies, coinciding with the usual canonical commutation relation , in position space representation: . Therefore, and the Schrödinger equation for the oscillator becomes, with substitution of the above and rearrangement of the factor of 1/2, If one defines as the "creation operator" or the "raising operator" and as the "annihilation operator" or the "lowering operator", the Schrödinger equation for the oscillator reduces to This is significantly simpler than the original form. Further simplifications of this equation enable one to derive all the properties listed above thus far. Letting , where is the nondimensionalized momentum operator one has and Note that these imply The operators and may be contrasted to normal operators, which commute with their adjoints. 
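Many of the displayed equations in the derivation above appear to have been stripped during text extraction. Before the derivation continues, here are the standard textbook forms of the objects being discussed; this is presumably what the missing displays contained, though the article's exact notation may differ:

```latex
a = \sqrt{\frac{m\omega}{2\hbar}}\left(\hat{x} + \frac{i}{m\omega}\,\hat{p}\right),
\qquad
a^{\dagger} = \sqrt{\frac{m\omega}{2\hbar}}\left(\hat{x} - \frac{i}{m\omega}\,\hat{p}\right)

[a,\, a^{\dagger}] = 1,
\qquad
\hat{H} = \hbar\omega\left(a^{\dagger}a + \tfrac{1}{2}\right)

a\,|n\rangle = \sqrt{n}\,|n-1\rangle,
\qquad
a^{\dagger}|n\rangle = \sqrt{n+1}\,|n+1\rangle,
\qquad
a^{\dagger}a\,|n\rangle = n\,|n\rangle
```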
Using the commutation relations given above, the Hamiltonian operator can be expressed as One may compute the commutation relations between the and operators and the Hamiltonian: These relations can be used to easily find all the energy eigenstates of the quantum harmonic oscillator as follows. Assuming that is an eigenstate of the Hamiltonian . Using these commutation relations, it follows that This shows that and are also eigenstates of the Hamiltonian, with eigenvalues and respectively. This identifies the operators and as "lowering" and "raising" operators between adjacent eigenstates. The energy difference between adjacent eigenstates is . The ground state can be found by assuming that the lowering operator possesses a nontrivial kernel: with . Applying the Hamiltonian to the ground state, So is an eigenfunction of the Hamiltonian. This gives the ground state energy , which allows one to identify the energy eigenvalue of any eigenstate as Furthermore, it turns out that the first-mentioned operator in (*), the number operator plays the most important role in applications, while the second one, can simply be replaced by . Consequently, The time-evolution operator is then Explicit eigenfunctions The ground state of the quantum harmonic oscillator can be found by imposing the condition that Written out as a differential equation, the wavefunction satisfies with the solution The normalization constant is found to be from ,  using the Gaussian integral. Explicit formulas for all the eigenfunctions can now be found by repeated application of to . Matrix representation The matrix expression of the creation and annihilation operators of the quantum harmonic oscillator with respect to the above orthonormal basis is These can be obtained via the relationships and . The eigenvectors are those of the quantum harmonic oscillator, and are sometimes called the "number basis". Generalized creation and annihilation operators Thanks to representation theory and C*-algebras the operators derived above are actually a specific instance of a more generalized notion of creation and annihilation operators in the context of CCR and CAR algebras. Mathematically and even more generally ladder operators can be understood in the context of a root system of a semisimple Lie group and the associated semisimple Lie algebra without the need of realizing the representation as operators on a functional Hilbert space. In the Hilbert space representation case the operators are constructed as follows: Let be a one-particle Hilbert space (that is, any Hilbert space, viewed as representing the state of a single particle). The (bosonic) CCR algebra over is the algebra-with-conjugation-operator (called *) abstractly generated by elements , where ranges freely over , subject to the relations in bra–ket notation. The map from to the bosonic CCR algebra is required to be complex antilinear (this adds more relations). Its adjoint is , and the map is complex linear in . Thus embeds as a complex vector subspace of its own CCR algebra. In a representation of this algebra, the element will be realized as an annihilation operator, and as a creation operator. In general, the CCR algebra is infinite dimensional. If we take a Banach space completion, it becomes a C*-algebra. The CCR algebra over is closely related to, but not identical to, a Weyl algebra. 
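The matrix representation given earlier in this section is easy to check numerically in a truncated number basis. A small sketch (NumPy assumed; the truncation dimension is an arbitrary choice). Note that the commutator comes out as the identity except in the last diagonal entry, which is an artifact of truncating the infinite-dimensional basis:

```python
import numpy as np

N = 8                                    # truncation of the number basis
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)             # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T                        # creation operator, the adjoint of a

commutator = a @ adag - adag @ a         # equals the identity, except that the
print(np.diag(commutator))               # last entry is spoiled by the truncation
```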
For fermions, the (fermionic) CAR algebra over is constructed similarly, but using anticommutator relations instead, namely The CAR algebra is finite dimensional only if is finite dimensional. If we take a Banach space completion (only necessary in the infinite dimensional case), it becomes a algebra. The CAR algebra is closely related, but not identical to, a Clifford algebra. Physically speaking, removes (i.e. annihilates) a particle in the state whereas creates a particle in the state . The free field vacuum state is the state with no particles, characterized by If is normalized so that , then gives the number of particles in the state . Creation and annihilation operators for reaction-diffusion equations The annihilation and creation operator description has also been useful to analyze classical reaction diffusion equations, such as the situation when a gas of molecules diffuse and interact on contact, forming an inert product: . To see how this kind of reaction can be described by the annihilation and creation operator formalism, consider particles at a site on a one dimensional lattice. Each particle moves to the right or left with a certain probability, and each pair of particles at the same site annihilates each other with a certain other probability. The probability that one particle leaves the site during the short time period is proportional to , let us say a probability to hop left and to hop right. All particles will stay put with a probability . (Since is so short, the probability that two or more will leave during is very small and will be ignored.) We can now describe the occupation of particles on the lattice as a 'ket' of the form . It represents the juxtaposition (or conjunction, or tensor product) of the number states , located at the individual sites of the lattice. Recall that and for all , while This definition of the operators will now be changed to accommodate the "non-quantum" nature of this problem and we shall use the following definition: note that even though the behavior of the operators on the kets has been modified, these operators still obey the commutation relation Now define so that it applies to . Correspondingly, define as applying to . Thus, for example, the net effect of is to move a particle from the to the -th site while multiplying with the appropriate factor. This allows writing the pure diffusive behavior of the particles as The reaction term can be deduced by noting that particles can interact in different ways, so that the probability that a pair annihilates is , yielding a term where number state is replaced by number state at site at a certain rate. Thus the state evolves by Other kinds of interactions can be included in a similar manner. This kind of notation allows the use of quantum field theoretic techniques to be used in the analysis of reaction diffusion systems. Creation and annihilation operators in quantum field theories In quantum field theories and many-body problems one works with creation and annihilation operators of quantum states, and . These operators change the eigenvalues of the number operator, by one, in analogy to the harmonic oscillator. The indices (such as ) represent quantum numbers that label the single-particle states of the system; hence, they are not necessarily single numbers. For example, a tuple of quantum numbers is used to label states in the hydrogen atom. The commutation relations of creation and annihilation operators in a multiple-boson system are, where is the commutator and is the Kronecker delta. 
For fermions, the commutator is replaced by the anticommutator Therefore, exchanging disjoint (i.e. ) operators in a product of creation or annihilation operators will reverse the sign in fermion systems, but not in boson systems. If the states labelled by i are an orthonormal basis of a Hilbert space H, then the result of this construction coincides with the CCR algebra and CAR algebra construction in the previous section but one. If they represent "eigenvectors" corresponding to the continuous spectrum of some operator, as for unbound particles in QFT, then the interpretation is more subtle. Normalization conventions While Zee obtains the momentum space normalization via the symmetric convention for Fourier transforms, Tong and Peskin & Schroeder use the common asymmetric convention to obtain . Each derives . Srednicki additionally merges the Lorentz-invariant measure into his asymmetric Fourier measure, , yielding . See also Fock space Segal–Bargmann space Optical phase space Bogoliubov–Valatin transformation Holstein–Primakoff transformation Jordan–Wigner transformation Jordan–Schwinger transformation Klein transformation Canonical commutation relations Notes References Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G. M. Temmer. North Holland, John Wiley & Sons. Ch. XII. online Quantum mechanics Quantum field theory
Creation and annihilation operators
[ "Physics" ]
2,443
[ "Quantum field theory", "Quantum operators", "Quantum mechanics" ]
702,351
https://en.wikipedia.org/wiki/Ham%20sandwich%20theorem
In mathematical measure theory, for every positive integer n the ham sandwich theorem states that given n measurable "objects" in n-dimensional Euclidean space, it is possible to divide each one of them in half (with respect to their measure, e.g. volume) with a single (n − 1)-dimensional hyperplane. This is possible even if the objects overlap. It was proposed by Hugo Steinhaus and proved by Stefan Banach (explicitly in dimension 3, without stating the theorem in the n-dimensional case), and also, years later, called the Stone–Tukey theorem after Arthur H. Stone and John Tukey. Naming The ham sandwich theorem takes its name from the case when n = 3 and the three objects to be bisected are the ingredients of a ham sandwich. Sources differ on whether these three ingredients are two slices of bread and a piece of ham, bread and cheese and ham, or bread and butter and ham. In two dimensions, the theorem is known as the pancake theorem, referring to the flat nature of the two objects to be bisected by a line. History According to Beyer and Zardecki, the earliest known paper about the ham sandwich theorem, specifically the case n = 3 of bisecting three solids with a plane, is a 1938 note in a Polish mathematics journal. Beyer and Zardecki's paper includes a translation of this note, which attributes the posing of the problem to Hugo Steinhaus, and credits Stefan Banach as the first to solve the problem, by a reduction to the Borsuk–Ulam theorem. The note poses the problem in two ways: first, formally, as "Is it always possible to bisect three solids, arbitrarily located, with the aid of an appropriate plane?" and second, informally, as "Can we place a piece of ham under a meat cutter so that meat, bone, and fat are cut in halves?" The note then offers a proof of the theorem. A more modern reference is the paper of Stone and Tukey, which is the basis of the name "Stone–Tukey theorem". This paper proves the n-dimensional version of the theorem in a more general setting involving measures. The paper attributes the case n = 3 to Stanislaw Ulam, based on information from a referee; but Beyer and Zardecki claim that this is incorrect, given the note mentioned above, although "Ulam did make a fundamental contribution in proposing" the Borsuk–Ulam theorem. Two-dimensional variant: proof using a rotating-knife The two-dimensional variant of the theorem (also known as the pancake theorem) can be proved by an argument which appears in the fair cake-cutting literature (see e.g. the Robertson–Webb rotating-knife procedure). For each angle α between 0° and 180°, a straight line ("knife") of angle α can bisect pancake #1. To see this, sweep a straight line of angle α along its normal across the plane; the fraction of pancake #1 on one side of the line changes continuously from 0 to 1, so by the intermediate value theorem it must be equal to 1/2 somewhere along the way. It is possible that an entire range of translations of our line yield a fraction of 1/2; in this case, it is a canonical choice to pick the middle one of all such translations. When the knife is at angle 0, it also cuts pancake #2, but the pieces are probably unequal (if we are lucky and the pieces are equal, we are done). Define the 'positive' side of the knife as the side on which the fraction of pancake #2 is larger. We now turn the knife, and translate it as described above. When the angle is α, define p(α) as the fraction of pancake #2 on the positive side of the knife. Initially p(0) > 1/2. The function p is continuous, since small changes in the angle lead to small changes in the position of the knife. When the knife is at angle 180, the knife is upside-down, so p(180) < 1/2. 
By the intermediate value theorem, there must be an angle α at which p(α) = 1/2. Cutting at that angle bisects both pancakes simultaneously. n-dimensional variant: proof using the Borsuk–Ulam theorem The ham sandwich theorem can be proved as follows using the Borsuk–Ulam theorem. This proof follows the one described by Steinhaus and others (1938), attributed there to Stefan Banach, for the case n = 3. In the field of equivariant topology, this proof would fall under the configuration-space/test-map paradigm. Let A1, A2, …, An denote the n compact (or, more generally, bounded and Lebesgue-measurable) subsets of Rn that we wish to simultaneously bisect. Let S be the unit (n − 1)-sphere in Rn. For each point p on S, we can define a continuum of affine hyperplanes with normal vector p: the hyperplanes π(p, s) = {x : x · p = s}, for s ranging over the real numbers. For each such hyperplane, we call the set {x : x · p > s} the "positive side" of π(p, s), which is the side pointed to by the vector p. By the intermediate value theorem, every family of such hyperplanes contains at least one hyperplane that bisects the bounded set An: at one extreme translation, no volume of An is on the positive side, and at the other extreme translation, all of An's volume is on the positive side, so in between there must be a closed interval of possible values of s for which π(p, s) bisects the volume of An. If An has volume zero, we pick s(p) = 0 for all p. Otherwise, the interval is compact and we can canonically pick s(p) as its midpoint for each p. Thus we obtain a continuous function s on the sphere such that for each point p the hyperplane π(p, s(p)) bisects An. Note further that we have s(−p) = −s(p), and thus π(−p, s(−p)) = π(p, s(p)) for all p: antipodal points give the same hyperplane, with opposite positive sides. Now we define a function f from the sphere to Rn−1 as follows: f(p) = (volume of A1 on the positive side of π(p, s(p)), …, volume of An−1 on the positive side of π(p, s(p))). This function is continuous (which can be proven with the dominated convergence theorem). By the Borsuk–Ulam theorem, there are antipodal points p and −p on the sphere such that f(p) = f(−p). Antipodal points correspond to hyperplanes π(p, s(p)) and π(−p, s(−p)) that are equal except that they have opposite positive sides. Thus, f(p) = f(−p) means that the volume of Ai is the same on the positive and negative side of π(p, s(p)), for i = 1, …, n − 1. Since π(p, s(p)) also bisects An by construction, it is the desired ham sandwich cut that simultaneously bisects the volumes of A1, …, An. Measure theoretic versions In measure theory, Stone and Tukey proved two more general forms of the ham sandwich theorem. Both versions concern the bisection of n subsets X1, …, Xn of a common set X, where X has a Carathéodory outer measure and each Xi has finite outer measure. Their first general formulation is as follows: for any suitably continuous real function, there is a point p of the n-sphere and a real number s0 such that the surface where the function takes the value s0 divides X into the part where the function is less than s0 and the part where it is greater than s0, of equal measure, and simultaneously bisects the outer measure of X1, …, Xn. The proof is again a reduction to the Borsuk–Ulam theorem. This theorem generalizes the standard ham sandwich theorem by a suitable choice of the function. Their second formulation is as follows: for any n + 1 measurable functions f0, f1, …, fn over X that are linearly independent over any subset of X of positive measure, there is a linear combination f = a0f0 + a1f1 + … + anfn such that the surface f(x) = 0, dividing X into f(x) < 0 and f(x) > 0, simultaneously bisects the outer measure of X1, …, Xn. This theorem generalizes the standard ham sandwich theorem by letting f0 = 1 and letting fi, for i > 0, be the i-th coordinate of x. Discrete and computational geometry versions In discrete geometry and computational geometry, the ham sandwich theorem usually refers to the special case in which each of the sets being divided is a finite set of points. Here the relevant measure is the counting measure, which simply counts the number of points on either side of the hyperplane. 
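As a concrete illustration of the counting-measure case just described (the precise two-dimensional statement follows below), here is a brute-force Python sketch. It tries every line through a pair of input points and accepts one whose open half-planes each contain at most half of the red points and at most half of the blue points. For point sets in general position such a pair is expected to exist (this is how exhaustive search is usually justified), but the sketch simply returns None if none is found; it runs in roughly cubic time and is a teaching aid, not the optimal algorithm mentioned later, and all function names are our own.

```python
from itertools import combinations

def side(p, a, b):
    """Sign of the cross product: >0 left of the directed line a->b, <0 right, 0 on it."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def bisected(points, a, b):
    """Each open half-plane of the line through a and b contains at most half of the points."""
    left = sum(1 for p in points if side(p, a, b) > 0)
    right = sum(1 for p in points if side(p, a, b) < 0)
    return left <= len(points) // 2 and right <= len(points) // 2

def ham_sandwich_cut(red, blue):
    """Return two points defining a line that bisects both sets in the counting measure."""
    for a, b in combinations(red + blue, 2):
        if bisected(red, a, b) and bisected(blue, a, b):
            return a, b
    return None  # not expected for points in general position

red = [(0, 0), (2, 1), (4, 0), (1, 3), (3, 3)]
blue = [(0, 2), (2, 4), (4, 2), (5, 5)]
print(ham_sandwich_cut(red, blue))
```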
In two dimensions, the theorem can be stated as follows: For a finite set of points in the plane, each colored "red" or "blue", there is a line that simultaneously bisects the red points and bisects the blue points, that is, the number of red points on either side of the line is equal and the number of blue points on either side of the line is equal. There is an exceptional case when points lie on the line. In this situation, we count each of these points as either being on one side, on the other, or on neither side of the line (possibly depending on the point), i.e. "bisecting" in fact means that each side contains less than half of the total number of points. This exceptional case is actually required for the theorem to hold, of course when the number of red points or the number of blue points is odd, but also in specific configurations with even numbers of points, for instance when all the points lie on the same line and the two colors are separated from each other (i.e. colors don't alternate along the line). A situation where the numbers of points on each side cannot match each other is provided by adding an extra point off the line in the previous configuration. In computational geometry, this ham sandwich theorem leads to a computational problem, the ham sandwich problem. In two dimensions, the problem is this: given a finite set of points in the plane, each colored "red" or "blue", find a ham sandwich cut for them. First, Megiddo described an algorithm for the special, separated case. Here all red points are on one side of some line and all blue points are on the other side, a situation where there is a unique ham sandwich cut, which Megiddo could find in linear time. Later work gave an algorithm for the general two-dimensional case; its running time is O(n log n), where the symbol O indicates the use of Big O notation. Finally, an optimal O(n)-time algorithm was found. This algorithm was later extended to higher dimensions, where the running time is O(n^(d−1)). Given d sets of points in general position in d-dimensional space, the algorithm computes a (d − 1)-dimensional hyperplane that has an equal number of points of each of the sets in both of its half-spaces, i.e., a ham-sandwich cut for the given points. If d is a part of the input, then no polynomial time algorithm is expected to exist, because if the points are on a moment curve, the problem becomes equivalent to necklace splitting, which is PPA-complete. A linear-time algorithm that area-bisects two disjoint convex polygons has also been described. Generalizations The original theorem works for at most n collections, where n is the number of dimensions. To bisect a larger number of collections without going to higher dimensions, one can use, instead of a hyperplane, an algebraic surface of degree k, i.e., an (n − 1)–dimensional surface defined by a polynomial function of degree k: Given (k + n choose n) − 1 measures in an n–dimensional space, there exists an algebraic surface of degree k which bisects them all. This generalization is proved by mapping the n–dimensional plane into a ((k + n choose n) − 1)–dimensional plane, and then applying the original theorem. For example, for n = 2 and k = 2, the 2–dimensional plane is mapped to a 5–dimensional plane via: (x, y) → (x, y, x², y², xy). See also Exact division References External links ham sandwich theorem on the Earliest known uses of some of the words of mathematics Ham Sandwich Cuts by Danielle MacNevin An interactive 2D demonstration Theorems in measure theory Articles containing proofs Theorems in topology
Ham sandwich theorem
[ "Mathematics" ]
2,231
[ "Theorems in mathematical analysis", "Theorems in measure theory", "Theorems in topology", "Topology", "Mathematical problems", "Articles containing proofs", "Mathematical theorems" ]
702,705
https://en.wikipedia.org/wiki/Sonification
Sonification is the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques. For example, the rate of clicking of a Geiger counter conveys the level of radiation in the immediate vicinity of the device. Though many experiments with data sonification have been explored in forums such as the International Community for Auditory Display (ICAD), sonification faces many challenges to widespread use for presenting and analyzing data. For example, studies show it is difficult, but essential, to provide adequate context for interpreting sonifications of data. Many sonification attempts are coded from scratch due to the lack of flexible tooling for sonification research and data exploration. History The Geiger counter, invented in 1908, is one of the earliest and most successful applications of sonification. A Geiger counter has a tube of low-pressure gas; each particle detected produces a pulse of current when it ionizes the gas, producing an audio click. The original version was only capable of detecting alpha particles. In 1928, Geiger and Walther Müller (a PhD student of Geiger) improved the counter so that it could detect more types of ionizing radiation. In 1913, Dr. Edmund Fournier d'Albe of University of Birmingham invented the optophone, which used selenium photosensors to detect black print and convert it into an audible output. A blind reader could hold a book up to the device and hold an apparatus to the area she wanted to read. The optophone played a set group of notes: . Each note corresponded with a position on the optophone's reading area, and that note was silenced if black ink was sensed. Thus, the missing notes indicated the positions where black ink was on the page and could be used to read. Pollack and Ficks published the first perceptual experiments on the transmission of information via auditory display in 1954. They experimented with combining sound dimensions such as timing, frequency, loudness, duration, and spatialization and found that they could get subjects to register changes in multiple dimensions at once. These experiments did not get into much more detail than that, since each dimension had only two possible values. In 1970, Nonesuch Records released a new electronic music composition by the American composer Charles Dodge, "The Earth's Magnetic Field." It was produced at the Columbia-Princeton Electronic Music Center. As the title suggests, the composition's electronic sounds were synthesized from data from the earth's magnetic field. As such, it may well be the first sonification of scientific data for artistic, rather than scientific, purposes. John M. Chambers, Max Mathews, and F.R. Moore at Bell Laboratories did the earliest work on auditory graphing in their "Auditory Data Inspection" technical memorandum in 1974. They augmented a scatterplot using sounds that varied along frequency, spectral content, and amplitude modulation dimensions to use in classification. They did not do any formal assessment of the effectiveness of these experiments. In 1976, philosopher of technology, Don Ihde, wrote, "Just as science seems to produce an infinite set of visual images for virtually all of its phenomena--atoms to galaxies are familiar to us from coffee table books to science magazines; so 'musics,' too, could be produced from the same data that produces visualizations." 
This appears to be one of the earliest references to sonification as a creative practice. In early 1982 Sara Bly of the University of California, Davis, released two publications - with examples - of her work on the use of computer-generated sound to present data. At the time, the field of scientific visualization was gaining momentum. Among other things, her studies and the accompanying examples compared the properties between visual and aural presentation, demonstrating that "Sound offers and enhancement and an alternative to graphic tools." Her work provides early experiment-based data to help inform matching appropriate data representation to type and purpose. Also in the 1980s, pulse oximeters came into widespread use. Pulse oximeters can sonify oxygen concentration of blood by emitting higher pitches for higher concentrations. However, in practice this particular feature of pulse oximeters may not be widely utilized by medical professionals because of the risk of too many audio stimuli in medical environments. In 1990, the National Center for Supercomputing Applications began generating scientific data sonifications and visualizations from the same source data and a paper describing this work was presented at the June 1991 SPIE Conference on Extracting Meaning from Complex Data. Included in the supporting information for the paper was a video, winner of the 1991 Nicograph Multimedia Grand Prize, comprising several data visualizations paired with their corresponding data sonifications. In 1992, the International Community for Auditory Display (ICAD) was founded by Gregory Kramer as a forum for research on auditory display which includes data sonification. ICAD has since become a home for researchers from many different disciplines interested in the use of sound to convey information through its conference and peer-reviewed proceedings. In May 2022, NASA reported the sonification (converting astronomical data associated with pressure waves into sound) of the black hole at the center of the Perseus galaxy cluster. In 2024, Adhyâropa Records released The Volcano Listening Project by Leif Karlstrom, which merges geophysics research and computer music synthesis with acoustic instrumental and vocal performances by Billy Contreras, Todd Sickafoose, and other acoustic musicians. Some existing applications and projects Auditory thermometer Biodiversity Clocks, e.g., with an audible tick every second, and with special chimes every 15 minutes Cluster analysis of high dimensional data Cockpit auditory displays DNA Financial market monitoring Geiger counter Gravitational waves at LIGO Image sonification for the visually impaired Interactive sonification Medical and surgical auditory displays Multimodal (combined sense) displays to minimize visual overload and fatigue Navigation Ocean science Protein folding dynamics Pulse oximetry in operating rooms and intensive care Sonar Space physics Storm and weather sonification Synaesthesia Tiltification based on psychoacoustic sonification Variometer (rate-of-climb indicator) in a glider (sailplane) beeps with a variable pitch corresponding to the meter reading Vehicle speed alarm Volcanic activity detection Video sonification Sonification techniques Many different components can be altered to change the user's perception of the sound, and in turn, their perception of the underlying information being portrayed. 
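As a minimal, self-contained sketch of the simplest such mapping — data value to pitch — the following Python (standard library only) renders one short sine tone per data point, with higher values mapped to higher frequencies. The frequency range, note length, and file name are arbitrary illustrative choices, not part of any established sonification toolkit.

```python
import math, struct, wave

def sonify(values, path="sonification.wav", rate=8000, note_s=0.25,
           f_min=220.0, f_max=880.0):
    """Parameter mapping: each data value becomes one tone; higher value -> higher pitch."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(rate)
        for v in values:
            freq = f_min + (v - lo) / span * (f_max - f_min)
            frames = b"".join(
                struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate)))
                for i in range(int(rate * note_s)))
            w.writeframes(frames)

# A rising-then-falling series is heard as a rising-then-falling melody
sonify([1, 3, 4, 7, 5, 2])
```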
Often, an increase or decrease in some level in this information is indicated by an increase or decrease in pitch, amplitude or tempo, but could also be indicated by varying other less commonly used components. For example, a stock market price could be portrayed by rising pitch as the stock price rose, and lowering pitch as it fell. To allow the user to determine that more than one stock was being portrayed, different timbres or brightnesses might be used for the different stocks, or they may be played to the user from different points in space, for example, through different sides of their headphones. Many studies have been undertaken to try to find the best techniques for various types of information to be presented, and as yet, no conclusive set of techniques to be used has been formulated. As the area of sonification is still considered to be in its infancy, current studies are working towards determining the best set of sound components to vary in different situations. Several different techniques for auditory rendering of data can be categorized: Acoustic sonification Audification Model-based sonification Parameter mapping Stream-based sonification An alternative approach to traditional sonification is "sonification by replacement", for example Pulsed Melodic Affective Processing (PMAP). In PMAP rather than sonifying a data stream, the computational protocol is musical data itself, for example MIDI. The data stream represents a non-musical state: in PMAP an affective state. Calculations can then be done directly on the musical data, and the results can be listened to with the minimum of translation. See also References External links International Community for Auditory Display Sonification Report (1997) provides an introduction to the status of the field and current research agendas. The Sonification Handbook, an Open Access book that gives a comprehensive introductory presentation of the key research areas in sonification and auditory display. Using Sound to Extract Meaning from Complex Data, C. Scaletti and A. Craig, 1991. Supporting video Auditory Information Design, PhD Thesis by Stephen Barrass 1998, User Centred Approach to Designing Sonifications. Mozzi : interactive sensor sonification on Arduino microprocessor. Preliminary report on design rationale, syntax, and semantics of LSL: A specification language for program auralization, D. Boardman and AP Mathur, 1993. A specification language for program auralization, D. Boardman, V. Khandelwal, and AP Mathur, 1994. Sonification tutorial SonEnvir general sonification environment Sonification.de provides information about Sonification and Auditory Display, links to interesting event and related projects Sonification for Exploratory Data Analysis, PhD Thesis by Thomas Hermann 2002, developing Model Based Sonification. Sonification of Mobile and Wireless Communications Interactive Sonification a hub to news and upcoming events in the field of interactive sonification zero-th space-time association CodeSounding — an open source sonification framework which makes possible to hear how any existing Java program "sounds like", by assigning instruments and pitches to code statements (if, for, etc.) and playing them as they are executed at runtime. In this way the flowing of execution is played as a flow of music and its rhythm changes depending on user interaction. LYCAY, a Java library for sonification of Java source code WebMelody, a system for sonification of activity of web servers. 
Sonification of a Cantor set Sonification Sandbox v.3.0, a Java program to convert datasets to sounds, GT Sonification Lab, School of Psychology, Georgia Institute of Technology. Program Sonification using Java, an online chapter (with code) explaining how to implement sonification using speech synthesis, MIDI note generation, and audio clips. Live Sonification of Ocean Swell Multimodal interaction Display technology Auditory displays Sound Acoustics
Sonification
[ "Physics", "Engineering" ]
2,088
[ "Display technology", "Electronic engineering", "Classical mechanics", "Acoustics" ]
703,071
https://en.wikipedia.org/wiki/Brennschluss
Brennschluss (a loanword, from the German Brennschluss) is either the cessation of fuel burning in a rocket or the time at which the burning ceases: the cessation may result from the consumption of the propellants, from deliberate shutoff, or from some other cause. After Brennschluss, the rocket is subject only to external forces, notably that due to gravity. According to Walter Dornberger, Brennschluss literally meant "end of burning." He goes on to state, "the German word is preferred to the form 'all-burnt,' which is used in England, because at Brennschluss considerable quantities of fuel may still be left in the tanks." Cultural references The term Brennschluss is used in various English literary works, including: The science fiction short story Honeymoon in Hell (1950) by Fredric Brown The science fiction short story Desire No More (1954) by Algis Budrys The novel Gravity's Rainbow (1973) by Thomas Pynchon The science fiction novel Rip Foster Rides the Gray Planet (1952) written by Harold L. Goodwin (under the pseudonym Blake Savage) The science fiction novel The Rolling Stones (also called Space Family Stone) (1952) by Robert A. Heinlein The science fiction novel Double Star (1956) by Robert A. Heinlein The science fiction short story Delivery Guaranteed by Robert Silverberg The science fiction novel Fiasco (1986) by Stanislaw Lem References German language Rocketry
Brennschluss
[ "Engineering" ]
328
[ "Rocketry", "Aerospace engineering" ]
703,764
https://en.wikipedia.org/wiki/Nuclear%20bag%20fiber
A nuclear bag fiber is a type of intrafusal muscle fiber that lies in the center of a muscle spindle. Each has many nuclei concentrated in bags, and they cause excitation of the primary sensory fibers. There are two kinds of bag fibers, distinguished by contraction speed and motor innervation. BAG2 fibers are the largest. They have no striations in the middle region and swell to enclose the nuclei, hence their name. BAG1 fibers are smaller than BAG2 fibers. Both bag types extend beyond the spindle capsule. These fibers sense the dynamic length of the muscle; they are sensitive to both length and velocity. See also Nuclear chain fiber List of distinct cell types in the adult human body References External links http://www.unmc.edu/Physiology/Mann/mann11.html Nervous system Muscular system
Nuclear bag fiber
[ "Biology" ]
166
[ "Organ systems", "Nervous system" ]
703,772
https://en.wikipedia.org/wiki/Nuclear%20chain%20fiber
A nuclear chain fiber is a specialized sensory organ contained within a muscle. Nuclear chain fibers are intrafusal fibers that, along with nuclear bag fibers, make up the muscle spindle responsible for the detection of changes in muscle length. There are 3–9 nuclear chain fibers per muscle spindle that are half the size of the nuclear bag fibers. Their nuclei are aligned in a chain and they excite the secondary nerve. They are static, whereas the nuclear bag fibers are dynamic in comparison. The name "nuclear chain" refers to the structure of the central region of the fiber, where the sensory axons wrap around the intrafusal fibers. The secondary nerve association involves an efferent and afferent pathway that measure the stress and strain placed on the muscle (usually the extrafusal fibers connected from the muscle portion to a bone). The afferent pathway resembles a spring wrapping around the nuclear chain fiber and connecting to one of its ends away from the bone. Again, depending on the stress and strain the muscles sustains, this afferent and efferent coordination will measure the "stretch of the spring" and communicate the results to the central nervous system. A similar structure attaching one end to muscle and the other end to a tendon is known as a Golgi tendon organ. However, Golgi tendon organs differ from nuclear chain and nuclear bag fibers in that they are considered in series rather than in parallel to the muscle fibers. Innervation As intrafusal muscle fibers, nuclear chain fibers are innervated by both sensory afferents and motor efferents. The afferent innervation is via type Ia sensory fibers and type II sensory fibers. These project to the nucleus proprius in the dorsal horn of the spinal cord. Efferent innervation is via static γ motor neurons. Stimulation of γ neurons causes the nuclear chain to shorten along with the extrafusal muscle fibers. This shortening allows the nuclear chain fiber to be sensitive to changes in length while its corresponding muscle is contracted. See also List of distinct cell types in the adult human body References External links Unmc.edu Nervous system Muscular system
Nuclear chain fiber
[ "Biology" ]
439
[ "Organ systems", "Nervous system" ]
12,437,267
https://en.wikipedia.org/wiki/Lamella%20clarifier
A lamella clarifier or inclined plate settler (IPS) is a type of clarifier designed to remove particulates from liquids. Range of applications Lamella clarifiers can be used in a range of industries, including mining and metal finishing, as well as to treat groundwater, industrial process water and backwash from sand filters. Lamella clarifiers are ideal for applications where the solids loading is variable and the solids sizing is fine. They are more common than conventional clarifiers at many industrial sites, due to their smaller footprint. One specific application is pre-treatment for effluent entering membrane filters. Lamella clarifiers are considered one of the best options for pre-treatment ahead of ultrafiltration. Their all-steel design minimizes the chances that part of the inclined plate will chip off and be carried over into the membrane, especially compared to tube settlers, which are made of plastic. Further, lamella clarifiers may maintain the required water quality to the membrane with or without the use of chemicals. This is a cost-saving measure, both in purchasing chemicals and limiting damage to the membrane, as membranes do not work well with the large particles contained in flocculants and coagulants. Lamella clarifiers are also used in the municipal wastewater treatment processes. The most common wastewater application for lamella clarifiers is as part of the tertiary treatment stage. Lamella clarifiers can be integrated into the treatment process or stand-alone units can be used to increase the flow through existing water treatment plants. One option for integrating lamella clarifiers into existing plants is for conventional or sludge blanket clarifiers to be upgraded by attaching a bundle of inclined plates or tubes before the overflow in the so-called "clear water zone". This can increase the settling area by two-fold resulting in a decrease in the solids loading in the overflow. Advantages and limitations The main advantage of lamella clarifiers over other clarifying systems is the large effective settling area caused by the use of inclined plates, which improves the operating conditions of the clarifiers in a number of ways. The unit is more compact usually requiring only 65-80 % of the area of clarifiers operating without inclined plates. Therefore, where site footprint constraints are of concern a lamella clarifier system is preferred. The reduced required area allows the possibility for the clarifiers to be located and operated inside, reducing some of the common problems of algae growth, clogging due to blowing debris accumulation and odour control, that occur when the machinery is outdoors. Operation within an enclosed space also allows for a better control of operating temperature and pressure conditions. The inclined plates mean the clarifier can operate with overflow rates 2 to 4 times that of traditional clarifiers which allow a greater influent flow rate and thus a more time efficient clarification process. Lamella clarifiers also offer a simple design without requiring the use of chemicals. They are therefore able to act as pre-treatment for delicate membrane processes. Where necessary flocculants may be added to promote efficiency. Lamella clarifier performance may be improved by the addition of flocculants and coagulants. These chemicals optimize the settling process and cause a higher purity of overflow water by ensuring all smaller solids are settled into the sludge underflow. 
A further advantage of the lamella clarifier is its distinct absence of mechanical, moving parts. The system therefore requires no energy input except for the influent pump and has a much lower propensity for mechanical failure than other clarifiers. This advantage extends to safety considerations when operating the plant. The absence of mechanical parts results in a safer working environment, with less possibility for injury. Whilst the lamella clarifier has overcome many difficulties encountered by the use of more traditional clarifiers, there are still some disadvantages involved with the configuration and running of the equipment. Lamella clarifiers are unable to treat most raw feed mixtures, which require some pre-treatment to remove materials that could decrease separation efficiency. The feed requires initial processing in advanced fine screening and grit and grease removal to ensure the influent mixture is of an appropriate composition. The layout of the clarifier creates extra turbulence as the water turns a corner from the feed to the inclined plates. This area of increased turbulence coincides with the sludge collection point and the flowing water can cause some resuspension of solids, whilst simultaneously diluting the sludge. This results in the need for further treatment to remove the excess moisture from the sludge. Clarifier inlets and discharge must be designed to distribute flow evenly. Regular maintenance is required as sludge flows down the inclined plates leaving them dirty. Regular cleaning helps prevent uneven flow distribution. Additionally, poorly maintained plates can cause uneven flow distribution and sacrifice the efficiency of the process. The closely packed plates make the cleaning difficult. However, removable and independently supported lamellar plates can be installed. Commercially available lamella clarifiers require different concrete basin geometry and structural support to the conventional clarification systems widely used in industry, thus increasing the cost of installing a new (lamellar) clarification system. Available designs Typical lamella clarifier design consists of a series of inclined plates inside a vessel, see first figure. The untreated feed water stream enters from the top of the vessel and flows down a feed channel underneath the inclined plates. Water then flows up inside the clarifier between the inclined plates. During this time solids settle onto the plates and eventually fall to the bottom of the vessel. The route a particle takes will be dependent upon the flow rate of the suspension and the settling rate of the particle and can be seen in the second figure. At the bottom of the vessel a hopper or funnel collects these particles as sludge. Sludge may be continuously or intermittently discharged. Above the inclined plates all particles have settled and clarified water is produced which is drawn off into an outlet channel. The clarified water exits the system in an outlet stream. There are a number of proprietary lamella clarifier designs. Inclined plates may be based on circular, hexagonal or rectangular tubes. Some possible design characteristics include: Tube or plate spacing of 50 mm Tube or plate length 1–2 m Plate pitches between 45° and 70° allow for self-cleaning, lower pitches require backwash Minimum plate pitch 7° Typical loading rates are 5 to 10 m/h Main process characteristics Lamella clarifiers can handle a maximum feed water concentration of 10000 mg/L of grease and 3000 mg/L of solids. 
Expected separation efficiencies for a typical unit are: 90-99% removal of free oils and greases under standard operation conditions. 20-40% removal of emulsified oils and greases with no chemical amendment. 50-99% removal with the addition of chemical agent(s). Treated water has a turbidity of around 1-2 NTU. Initial investment required for a typical lamella clarifier varies from US$750 to US$2500 per cubic meter of water to be treated, depending on the design of the clarifier. The surface loading rate (also known as surface overflow rate or surface settling rate) for a lamella clarifier falls between 10 and 25 m/h. For these settling rates, the retention time in the clarifier is low, at around 20 minutes or less, with operating capacities tending to range from 1–3 m3/hour/m2 (of projected area). Assessment of characteristics Separation of solids is described by sedimentation effectiveness, η. Which is dependent on concentration, flow rate, particle size distribution, flow patterns and plate packing and is defined by the following equation. η = (c1-c2)/c2 where c1 is inlet concentration and c2 outlet concentration. Inclined angle of plates allows for increased loading rate/throughput and decreased retention time relative to conventional clarifiers. Increase in the loading rate of 2-3 times the conventional clarifier (of the same size). The total surface area required for settling can be calculated for a lamella plate with N plates, each plate of width W, with plate pitch θ and tube spacing p. Where, A = W∙(Np+cos θ) Table 1 presents the characteristics and operating ranges of different clarification units. Where overflow rate is a measure of the fluid loading capacity of the clarifier and is defined as, the influent flow rate divided by the horizontal area of the clarifier. The retention time is the average time that a particulate remains in the clarifier. The turbidity is a measure of cloudiness. Higher values for turbidity removal efficiency correspond to less particulates remaining in the clarified stream. The settling velocity of a particulate can also be determined by using Stokes' law. Design heuristics Rise rate: Rise rates can be between 0.8 and 4.88 m/h from different sources (Kucera, 2011). Plate loading: Loadings on plates should be limited to 2.9 m/h to ensure laminar flow is maintained between plates. Plate angle: The general consensus is that plates should be inclined at a 50-70° angle from the horizontal to allow for self-cleaning. This results in the projected plate area of the lamella clarifier taking up approximately 50% of the space of a conventional clarifier. Plate spacing: Typical spacing between plates is 50 mm, though plates can be spaced in the range of 50–80 mm apart, given that the particles > 50 mm in size have been removed in pre-treatment stages. Plate length: Depending on the scale of the system, total plate lengths can vary, however, the plate length should allow for the plates to rise 125 mm above the top water level, with 1.5 m of space left below the plates at the bottom of the clarifier for collection of sludge. Most plates have a length of 1–2 m. Plate materials: Plates should be made of stainless steel, with the exception of situations in which the system has been dosed with chlorine to prevent algal growth. In these circumstances, the plates may be plastic or plastic coated. Feed point: Feed should be introduced at least 20% above the base of the plate to prevent disturbance of the settling zones at the base of the plates. 
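The definitions and heuristics above can be turned into a quick design check. The Python sketch below approximates the horizontally projected settling area of a plate pack as N · W · L · cos θ — a common simplification that ignores the contribution of the plate spacing, and not necessarily the exact expression intended above — and then computes the surface loading rate as influent flow divided by that area, for comparison against the quoted rise-rate and plate-loading limits. All numerical inputs are illustrative, not taken from a real design.

```python
import math

def projected_area(n_plates, width_m, length_m, angle_deg):
    """Approximate projected settling area of an inclined-plate pack: N * W * L * cos(theta).

    Simplification (assumption): the contribution of the plate spacing is neglected."""
    return n_plates * width_m * length_m * math.cos(math.radians(angle_deg))

def surface_loading_rate(flow_m3_per_h, projected_area_m2):
    """Surface loading rate (m/h) = influent flow rate / projected settling area."""
    return flow_m3_per_h / projected_area_m2

# Illustrative numbers only
area = projected_area(n_plates=40, width_m=1.2, length_m=1.5, angle_deg=55)
rate = surface_loading_rate(flow_m3_per_h=120.0, projected_area_m2=area)
print(f"area = {area:.1f} m2, loading = {rate:.2f} m/h")
# Compare the result against the heuristics above: plate loading <= 2.9 m/h,
# rise rates of roughly 0.8-4.88 m/h.
```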
Post-treatment systems Both the overflow stream and the underflow stream from a lamella clarifier will often require post-treatment. The underflow stream is often put through a dewatering process such as a thickener or a belt press filter to increase the density of slurry. This is an important post-treatment as the underflow slurry is often not able to be recycled back into the process. In such a case it often needs to be transported to a disposal plant, and the cost of this transport depends on the volume and weight of the slurry. Hence an efficient dewatering process can result in a substantial cost saving. Where the slurry can be recycled through the process it often needs to be dried, and dewatering again is an important step in this process. The post-treatment required for the overflow stream depends both on the nature of the inlet stream and what the overflow will be used for. For example, if the fluid being put through the lamella clarifier comes from a heavy industrial plant it may require post-treatment to remove oil and grease especially if the effluent is going to be discharged to the environment. A separation process unit such as a coalescer is often used to physically trigger a separation of the water and the oils. For the treatment of potable water the overflow from the lamella clarifier will require further treatment to remove organic molecules as well as disinfection to remove bacteria. It will also be passed through a series of polishing units to remove the odour and improve the colour of the water. There is a tendency with lamella clarifiers for algae to grow on the inclined plates and this can be a problem especially if the overflow is being discharged to the environment or if the lamella clarifier is being utilized as pre-treatment for a membrane filtration unit. In either of these cases the overflow requires post-treatment such as an anthracite-sand filter to prevent the algae from spreading downstream of the lamella clarifier. As the inclined plates in the lamella clarifier are made of steel it is not recommended that chlorine be used to control the biological growth as it could accelerate the corrosion of the plates. New developments One variation on the standard design of a lamella clarifier being developed is the way the effluent is collected at the top of the inclined plates. Rather than the effluent flowing over the top of the inclined plates to the outlet channel it flows through orifices at the top of the plates. This design allows for more consistent back pressure in the channels between the plates and hence a more consistent flow profile develops. Obviously this design only works for relatively clean effluent streams as the orifices would quickly become blocked with deposits which would severely reduce the efficiency of the unit. Another new design includes an adjustable upper portion of the vessel so that vessel height can be changed. This height adjustment is relative to a deflector, which directs the inlet stream. This design intended to be used for decanting storm water. Another design variation, which improves the efficiency of the separation unit is the way the effluent enters the lamella clarifier. Standard clarifier design has the effluent entering at the bottom of the inclined plates, colliding with the sludge sliding down the plates. This mixing region renders the bottom 20% of the inclined plates unusable for settling. 
By designing the lamella clarifier so that the effluent enters the inclined plates without interfering with the downward slurry flow the capacity of the lamella clarifier can be improved by 25%. References Water treatment
Lamella clarifier
[ "Chemistry", "Engineering", "Environmental_science" ]
2,959
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
12,437,648
https://en.wikipedia.org/wiki/Antisymmetrizer
In quantum mechanics, an antisymmetrizer (also known as an antisymmetrizing operator) is a linear operator that makes a wave function of N identical fermions antisymmetric under the exchange of the coordinates of any pair of fermions. After application of the wave function satisfies the Pauli exclusion principle. Since is a projection operator, application of the antisymmetrizer to a wave function that is already totally antisymmetric has no effect, acting as the identity operator. Mathematical definition Consider a wave function depending on the space and spin coordinates of N fermions: where the position vector ri of particle i is a vector in and σi takes on 2s+1 values, where s is the half-integral intrinsic spin of the fermion. For electrons s = 1/2 and σ can have two values ("spin-up": 1/2 and "spin-down": −1/2). It is assumed that the positions of the coordinates in the notation for Ψ have a well-defined meaning. For instance, the 2-fermion function Ψ(1,2) will in general be not the same as Ψ(2,1). This implies that in general and therefore we can define meaningfully a transposition operator that interchanges the coordinates of particle i and j. In general this operator will not be equal to the identity operator (although in special cases it may be). A transposition has the parity (also known as signature) −1. The Pauli principle postulates that a wave function of identical fermions must be an eigenfunction of a transposition operator with its parity as eigenvalue Here we associated the transposition operator with the permutation of coordinates π that acts on the set of N coordinates. In this case π = (ij), where (ij) is the cycle notation for the transposition of the coordinates of particle i and j. Transpositions may be composed (applied in sequence). This defines a product between the transpositions that is associative. It can be shown that an arbitrary permutation of N objects can be written as a product of transpositions and that the number of transposition in this decomposition is of fixed parity. That is, either a permutation is always decomposed in an even number of transpositions (the permutation is called even and has the parity +1), or a permutation is always decomposed in an odd number of transpositions and then it is an odd permutation with parity −1. Denoting the parity of an arbitrary permutation π by (−1)π, it follows that an antisymmetric wave function satisfies where we associated the linear operator with the permutation π. The set of all N! permutations with the associative product: "apply one permutation after the other", is a group, known as the permutation group or symmetric group, denoted by SN. We define the antisymmetrizer as Properties of the antisymmetrizer In the representation theory of finite groups the antisymmetrizer is a well-known object, because the set of parities forms a one-dimensional (and hence irreducible) representation of the permutation group known as the antisymmetric representation. The representation being one-dimensional, the set of parities form the character of the antisymmetric representation. The antisymmetrizer is in fact a character projection operator and is quasi-idempotent, This has the consequence that for any N-particle wave function Ψ(1, ...,N) we have Either Ψ does not have an antisymmetric component, and then the antisymmetrizer projects onto zero, or it has one and then the antisymmetrizer projects out this antisymmetric component Ψ'. 
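A small numerical sketch can make the action of the antisymmetrizer concrete, and previews the connection with the Slater determinant discussed below. The Python code applies the parity-weighted sum over permutations to a product (Hartree-type) wave function of three one-particle functions and compares the result with the determinant of the orbital matrix evaluated at the same coordinates. The normalization 1/N! used here is one common convention and need not match the convention intended in this article; the three "orbitals" are arbitrary illustrative functions.

```python
import itertools, math
import numpy as np

# Three arbitrary one-particle functions standing in for spin-orbitals (illustrative only)
orbitals = [lambda x: math.exp(-x**2),
            lambda x: x * math.exp(-x**2),
            lambda x: (2*x**2 - 1) * math.exp(-x**2)]

def hartree_product(coords):
    """Simple product wave function: orbital i evaluated at coordinate i."""
    return math.prod(orb(x) for orb, x in zip(orbitals, coords))

def parity(perm):
    """(-1) raised to the number of inversions of the permutation."""
    inversions = sum(1 for i, j in itertools.combinations(range(len(perm)), 2)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def antisymmetrize(psi, coords):
    """Apply (1/N!) * sum over permutations pi of sgn(pi) * psi(permuted coordinates)."""
    n = len(coords)
    total = sum(parity(p) * psi([coords[i] for i in p])
                for p in itertools.permutations(range(n)))
    return total / math.factorial(n)

coords = [0.3, -0.7, 1.1]
slater = np.linalg.det(np.array([[orb(x) for x in coords] for orb in orbitals]))
print(antisymmetrize(hartree_product, coords) * math.factorial(3))  # equals the determinant
print(slater)
```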
The antisymmetrizer carries a left and a right representation of the group: with the operator representing the coordinate permutation π. Now it holds, for any N-particle wave function Ψ(1, ...,N) with a non-vanishing antisymmetric component, that showing that the non-vanishing component is indeed antisymmetric. If a wave function is symmetric under any odd parity permutation it has no antisymmetric component. Indeed, assume that the permutation π, represented by the operator , has odd parity and that Ψ is symmetric, then As an example of an application of this result, we assume that Ψ is a spin-orbital product. Assume further that a spin-orbital occurs twice (is "doubly occupied") in this product, once with coordinate k and once with coordinate q. Then the product is symmetric under the transposition (k, q) and hence vanishes. Notice that this result gives the original formulation of the Pauli principle: no two electrons can have the same set of quantum numbers (be in the same spin-orbital). Permutations of identical particles are unitary, (the Hermitian adjoint is equal to the inverse of the operator), and since π and π−1 have the same parity, it follows that the antisymmetrizer is Hermitian, The antisymmetrizer commutes with any observable (Hermitian operator corresponding to a physical—observable—quantity) If it were otherwise, measurement of could distinguish the particles, in contradiction with the assumption that only the coordinates of indistinguishable particles are affected by the antisymmetrizer. Connection with Slater determinant In the special case that the wave function to be antisymmetrized is a product of spin-orbitals the Slater determinant is created by the antisymmetrizer operating on the product of spin-orbitals, as below: The correspondence follows immediately from the Leibniz formula for determinants, which reads where B is the matrix To see the correspondence we notice that the fermion labels, permuted by the terms in the antisymmetrizer, label different columns (are second indices). The first indices are orbital indices, n1, ..., nN labeling the rows. Example By the definition of the antisymmetrizer Consider the Slater determinant By the Laplace expansion along the first row of D so that By comparing terms we see that Intermolecular antisymmetrizer One often meets a wave function of the product form where the total wave function is not antisymmetric, but the factors are antisymmetric, and Here antisymmetrizes the first NA particles and antisymmetrizes the second set of NB particles. The operators appearing in these two antisymmetrizers represent the elements of the subgroups SNA and SNB, respectively, of SNA+NB. Typically, one meets such partially antisymmetric wave functions in the theory of intermolecular forces, where is the electronic wave function of molecule A and is the wave function of molecule B. When A and B interact, the Pauli principle requires the antisymmetry of the total wave function, also under intermolecular permutations. The total system can be antisymmetrized by the total antisymmetrizer which consists of the (NA + NB)! terms in the group SNA+NB. However, in this way one does not take advantage of the partial antisymmetry that is already present. It is more economic to use the fact that the product of the two subgroups is also a subgroup, and to consider the left cosets of this product group in SNA+NB: where τ is a left coset representative. 
Since we can write The operator represents the coset representative τ (an intermolecular coordinate permutation). Obviously the intermolecular antisymmetrizer has a factor NA! NB! fewer terms then the total antisymmetrizer. Finally, so that we see that it suffices to act with if the wave functions of the subsystems are already antisymmetric. See also Slater determinant Identical particles References Pauli exclusion principle Permutations Quantum chemistry Quantum mechanics Determinants
Antisymmetrizer
[ "Physics", "Chemistry", "Mathematics" ]
1,734
[ "Functions and mappings", "Quantum chemistry", "Permutations", "Pauli exclusion principle", "Quantum mechanics", "Mathematical objects", "Combinatorics", "Theoretical chemistry", "Quantum operators", "Mathematical relations", " molecular", "Atomic", " and optical physics" ]
12,440,032
https://en.wikipedia.org/wiki/C5H4O2
The molecular formula C5H4O2 (molar mass: 96.085 g/mol) may refer to: Furfural (2-furaldehyde) 3-Furaldehyde Protoanemonin Pyrones 2-Pyrone 4-Pyrone Molecular formulas
C5H4O2
[ "Physics", "Chemistry" ]
79
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
15,172,243
https://en.wikipedia.org/wiki/Bolidophyceae
Bolidophyceae is a class of photosynthetic heterokont picophytoplankton, and consist of less than 20 known species. They are distinguished by the angle of flagellar insertion and swimming patterns as well as recent molecular analyses. Bolidophyceae is the sister taxon to the diatoms (Bacillariophyceae). They lack the characteristic theca of the diatoms, and have been proposed as an intermediate group between the diatoms and all other heterokonts. Taxonomy Class Bolidophyceae Guillou & Chretiennot-Dinet 1999 Order Parmales Booth & Marchant 1987 Family Pentalaminaceae Marchant 1987 Genus Pentalamina Marchant 1987 Species Pentalamina corona Marchant 1987 Family Triparmaceae Booth & Marchant 1988 Genus Tetraparma Booth 1987 Species T. catinifera Species T. gracilis Species T. insecta Bravo-Sierra & Hernández-Becerril 2003 Species T. pelagica Booth & Marchant 1987 Species T. silverae Fujita & Jordan 2017 Species T. trullifera Fujita & Jordan 2017 Genus Triparma Booth & Marchant 1987 Species T. columacea Booth 1987 Species T. eleuthera Ichinomiya & Lopes dos Santos 2016 Species T. laevis Booth 1987 Species T. mediterranea (Guillou & Chrétiennot-Dinet) Ichinomiya & Lopes dos Santos 2016 Species T. pacifica (Guillou & Chrétiennot-Dinet) Ichinomiya & Lopes dos Santos 2016 Species T. retinervis Booth 1987 Species T. strigata Booth 1987 Species T. verrucosa Booth 1987 Gallery In the gallery, all scale bar represent 1 μm. References External links SEM images of Bolidophyceae (Parmales): http://www.mikrotax.org/Nannotax3/index.php?dir=non_cocco/Parmales Ochrophyte classes Ochrophyta
Bolidophyceae
[ "Biology" ]
431
[ "Ochrophyta", "Algae" ]
15,173,176
https://en.wikipedia.org/wiki/Fermi%20and%20Frost
"Fermi and Frost" is a science fiction short story by American writer Frederik Pohl, first published in the January 1985 issue of Isaac Asimov's Science Fiction Magazine. It won the Hugo Award for Best Short Story in 1986. Summary The story opens with an astronomer who is at an airport when a nuclear war begins. Recognized by a fan, he is offered a seat on a plane escaping to Iceland. Though Reykjavík is destroyed by a thermonuclear warhead, the rest of the island is unharmed. The survivors take advantage of Iceland's geology and experience with cold weather to prepare for the nuclear winter that follows. Interwoven into the story is speculation about the Fermi paradox and the perspective on the possibility of alien life given the prospects of nuclear war. Sources, references, external links, quotations Short stories by Frederik Pohl 1985 short stories Hugo Award for Best Short Story–winning works Works originally published in Asimov's Science Fiction Fermi paradox
Fermi and Frost
[ "Astronomy" ]
202
[ "Astronomical hypotheses", "Fermi paradox" ]
15,179,440
https://en.wikipedia.org/wiki/Sakacin
Sakacins are bacteriocins produced by Lactobacillus sakei. They are often clustered with the other lactic acid bacteriocins. The best known sakacins are sakacin A, G, K, P, and Q. In particular, sakacin A and P have been well characterized. List of named sakacins Sakacin A is a small, 41 amino acid (the precursor is 90 aa), heat-stable polypeptide. It has been characterized genetically. The regulation of sakacin A has been shown to be related to pheromones (possibly quorum sensing) and temperature changes. It is identical to curvacin/curvaticin A. Sakacin B is a heat and pH stable protein. Sakacin G is a 37 amino acid long (small) polypeptide. Sakacin K is closely related to Sakacin A (and curvacin A), sharing the first 30 N-terminal amino acids. It has been studied extensively for its industrial applications. Sakacin M is a heat-resistant protein, MW = 4640. Sakacin P is a small, heat-stable, ribosomally synthesized polypeptide. Its genetics has been well-characterized. Sakacin Q was discovered in a strain producing Sakacin P. Sakacin R is very similar to sakacin P. It is 43 amino acids long, and is also known as sakacin 674. Sakacin T is a class II bacteriocin. It is produced from a single operon with sakacin X; there are three distinct promoters in the operon, the two sakacins are chemically distinct, though similar. Sakacin T Sakacin X is a class IIa bacteriocin. It appears in the references with Sakacin T (above). Sakacin Z was apparently never published and is known from a reference to unpublished data (refers to B. Ray, under Table 6, page 551) The conventions governing the naming of sakacins are somewhat confused. Sakacin Z was named because it is produced by L. sakei Z, just as Sakacin 670 was named because it was produced by L. sakei 670; but the remaining naming convention uses letters A-Z, of which few are unambiguously available. Worse yet, many strains produce several sakacins so that naming them by strain is ambiguous. Applications of the Sakacins Many of the sakacins have been tested for industrial applications and inserted into other lactic acid bacteria. Some have been engineered for production in food environments as well. Many were actually discovered in food contexts, like Greek dry cured sausage (sakacin B). In modern food chemistry, the sakacins have been studied for their use against Listeria in the production of sausages (like Portuguese lingüiça) and cured meat products (such as ham and cold cuts), cheeses, and other lactic acid fermented products. They are also used to repress unwanted bacterial growth that might cause ropiness, sliminess, malodor and other product defects. References Bacteriocins Peptides Bacterial toxins
Sakacin
[ "Chemistry" ]
681
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
15,181,102
https://en.wikipedia.org/wiki/ZNF259
Zinc finger protein ZPR1 is a protein that in humans is encoded by the ZNF259 gene. References Further reading External links Transcription factors
ZNF259
[ "Chemistry", "Biology" ]
32
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
1,739,569
https://en.wikipedia.org/wiki/Mass-independent%20fractionation
Mass-independent isotope fractionation or Non-mass-dependent fractionation (NMD), refers to any chemical or physical process that acts to separate isotopes, where the amount of separation does not scale in proportion with the difference in the masses of the isotopes. Most isotopic fractionations (including typical kinetic fractionations and equilibrium fractionations) are caused by the effects of the mass of an isotope on atomic or molecular velocities, diffusivities or bond strengths. Mass-independent fractionation processes are less common, occurring mainly in photochemical and spin-forbidden reactions. Observation of mass-independently fractionated materials can therefore be used to trace these types of reactions in nature and in laboratory experiments. Mass-independent fractionation in nature The most notable examples of mass-independent fractionation in nature are found in the isotopes of oxygen and sulfur. The first example was discovered by Robert N. Clayton, Toshiko Mayeda, and Lawrence Grossman in 1973, in the oxygen isotopic composition of refractory calcium–aluminium-rich inclusions in the Allende meteorite. The inclusions, thought to be among the oldest solid materials in the Solar System, show a pattern of low 18O/16O and 17O/16O relative to samples from the Earth and Moon. Both ratios vary by the same amount in the inclusions, although the mass difference between 18O and 16O is almost twice as large as the difference between 17O and 16O. Originally this was interpreted as evidence of incomplete mixing of 16O-rich material (created and distributed by a large star in a supernova) into the Solar nebula. However, recent measurement of the oxygen-isotope composition of the Solar wind, using samples collected by the Genesis spacecraft, shows that the most 16O-rich inclusions are close to the bulk composition of the solar system. This implies that Earth, the Moon, Mars, and asteroids all formed from 18O- and 17O-enriched material. Photodissociation of carbon monoxide in the Solar nebula has been proposed to explain this isotope fractionation. Mass-independent fractionation also has been observed in ozone. Large, 1:1 enrichments of 18O/16O and 17O/16O in ozone were discovered in laboratory synthesis experiments by Mark Thiemens and John Heidenreich in 1983, and later found in stratospheric air samples measured by Konrad Mauersberger. These enrichments were eventually traced to the three-body ozone formation reaction. O + O2 → O3* + M → O3 + M* Theoretical calculations by Rudolph Marcus and others suggest that the enrichments are the result of a combination of mass-dependent and mass-independent kinetic isotope effects (KIE) involving the excited state O3* intermediate related to some unusual symmetry properties. The mass-dependent isotope effect occurs in asymmetric species, and arises from the difference in zero-point energy of the two formation channels available (e.g., 18O16O + 16O vs 18O + 16O16O for formation of 18O16O16O.) These mass-dependent zero-point energy effects cancel one another out and do not affect the enrichment in heavy isotopes observed in ozone. The mass-independent enrichment in ozone is still not fully understood, but may be due to isotopically symmetric O3* having a shorter lifetime than asymmetric O3*, thus not allowing a statistical distribution of energy throughout all the degrees of freedom, resulting in a mass-independent distribution of isotopes. 
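A standard way to quantify the oxygen effect described above is to compare a sample's 17O/16O and 18O/16O shifts against the mass-dependent expectation. In the Python sketch below, delta values (per mil deviations of the isotope ratios from a standard) are combined into a 17O anomaly using a reference mass-dependent slope of about 0.52; the exact slope and notation vary between authors, and the sample numbers are invented for illustration. A mass-dependently fractionated sample gives an anomaly near zero, while a sample in which both deltas shift by the same amount, as in the Allende inclusions, gives a large non-zero anomaly.

```python
def oxygen_17_anomaly(d17O, d18O, slope=0.52):
    """17O excess: deviation of d17O from the mass-dependent line d17O ~ slope * d18O.

    d17O and d18O are per mil deviations of 17O/16O and 18O/16O from a standard;
    the reference slope (~0.52 here) varies slightly between conventions."""
    return d17O - slope * d18O

# Mass-dependent fractionation: d17O tracks ~0.52 * d18O, so the anomaly is ~0
print(oxygen_17_anomaly(d17O=5.2, d18O=10.0))     # 0.0 per mil
# Mass-independent pattern: both deltas shifted by the same amount (cf. the CAIs above)
print(oxygen_17_anomaly(d17O=-40.0, d18O=-40.0))  # -19.2 per mil
```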
Mass-independent carbon dioxide fractionation The mass-independent distribution of isotopes in stratospheric ozone can be transferred to carbon dioxide (CO2). This anomalous isotopic composition in CO2 can be used to quantify gross primary production, the uptake of CO2 by vegetation through photosynthesis. This effect of terrestrial vegetation on the isotopic signature of atmospheric CO2 was simulated with a global model and confirmed experimentally. Mass-independent sulfur fractionation Mass-independent fractionation of sulfur can be observed in ancient sediments, where it preserves a signal of the prevailing environmental conditions. The creation and transfer of the mass-independent signature into minerals would be unlikely in an atmosphere containing abundant oxygen, constraining the Great Oxygenation Event to some time after . Prior to this time, the MIS record implies that sulfate-reducing bacteria did not play a significant role in the global sulfur cycle, and that the MIS signal is due primarily to changes in volcanic activity. See also Equilibrium fractionation Kinetic fractionation Isotope geochemistry References Isotopes Geochemistry Fractionation
Mass-independent fractionation
[ "Physics", "Chemistry" ]
953
[ "Fractionation", "Separation processes", "Isotopes", "nan", "Nuclear physics" ]
1,740,295
https://en.wikipedia.org/wiki/Transition%20metal%20pincer%20complex
In chemistry, a transition metal pincer complex is a type of coordination complex with a pincer ligand. Pincer ligands are chelating agents that binds tightly to three adjacent coplanar sites in a meridional configuration. The inflexibility of the pincer-metal interaction confers high thermal stability to the resulting complexes. This stability is in part ascribed to the constrained geometry of the pincer, which inhibits cyclometallation of the organic substituents on the donor sites at each end. In the absence of this effect, cyclometallation is often a significant deactivation process for complexes, in particular limiting their ability to effect C-H bond activation. The organic substituents also define a hydrophobic pocket around the reactive coordination site. Stoichiometric and catalytic applications of pincer complexes have been studied at an accelerating pace since the mid-1970s. Most pincer ligands contain phosphines. Reactions of metal-pincer complexes are localized at three sites perpendicular to the plane of the pincer ligand, although in some cases one arm is hemi-labile and an additional coordination site is generated transiently. Early examples of pincer ligands (not called such originally) were anionic with a carbanion as the central donor site and flanking phosphine donors; these compounds are referred to as PCP pincers. Scope of pincer ligands Although the most common class of pincer ligands features PCP donor sets, variations have been developed where the phosphines are replaced by thioethers and tertiary amines. Many pincer ligands also feature nitrogenous donors at the central coordinating group position (see figure), such as pyridines. An easily prepared pincer ligand is POCOP. Many tridentate ligands types occupy three contiguous, coplanar coordination sites. The most famous such ligand is terpyridine (“terpy”). Terpy and its relatives lack the steric bulk of the two terminal donor sites found in traditional pincer ligands. Metal pincer complexes are often prepared through C-H bond activation. Ni(II) N,N,N pincer complexes are active in Kumada, Sonogashira, and Suzuki-Miyaura coupling reactions with unactivated alkyl halides. Types of pincer ligands The pincer ligand is most often an anionic, two-electron donor to the metal centre. It consists of a rigid, planar backbone usually consisting of aryl frameworks and has two neutral, two-electron donor groups at the meta-positions. The general formula for pincer ligands is 2,6-(ER2)2C6H3 – abbreviated ECE – where E is the two-electron donor and C is the ipso-carbon of the aromatic backbone (e.g. PCP – two phosphine donors). Due to the firm tridentate coordination mode, it allows the metal complexes to exhibit high thermal stability as well as air-stability. It also implies that a reduced number of coordination sites are available for reactivity, which often limits the number of undesirable products formed in the reaction due to ligand exchange, as this process is suppressed. There are various types of pincer ligands that are used in transition metal catalysis. Often, they have the same two-electron donor flanking the metal centre, but this is not a requirement. The most common pincer ligand designs are PCP, NCN, PCN, SCS, and PNO. Other elements that have been employed at different positions in the ligand are boron, arsenic, silicon, and even selenium. By altering the properties of the pincer ligands, it is possible to significantly alter the chemistry at the metal centre. 
Changing the hardness/softness of the donor, using electron-withdrawing groups (EWGs) in the backbone, and altering the steric constraints of the ligands are all methods used to tune the reactivity at the metal centre. Synthesis The synthesis of the ligands often involves the reaction of 1,3-dibromoethylbenzene with a secondary phosphine, followed by deprotonation of the quaternary phosphorus intermediates to generate the ligand. To generate the metal complex, two common routes are employed. One is a simple oxidative addition of the ipso-C-X bond (where X = Br, I) to a metal centre, often a M(0) (M = Pd, Mo, Fe, Ru, Ni, Pt), though other metal complexes with higher oxidation states available can also be used (e.g. Rh(COD)Cl2). The other significant method of metal introduction is through C-H bond activation. The major difference is that the metal used in this method is already in a higher oxidation state (e.g. PdCl2 – a Pd(II) species). However, these reactions have been found to proceed much more efficiently by employing metal complexes with weakly-bound ligands (e.g. Pd(BF4)2(CH3CN)2 or Pd(OTf)2(CH3CN)2, where OTf = CF3SO3−). Role in catalysis The potential value of pincer ligands in catalysis has been investigated, although no process has been commercialized. Aspirational applications are motivated by the high thermal stability and rigidity. Disadvantages include the cost of the ligands. Suzuki-Miyaura coupling Pincer complexes have been shown to catalyse Suzuki-Miyaura coupling reactions, a versatile carbon-carbon bond forming reaction. Typical Suzuki couplings employ Pd(0) catalysts with monodentate tertiary phosphine ligands (e.g. Pd(PPh3)4). It is a very selective method to couple aryl substituents together, but requires elevated temperatures. Using PCP pincer-palladium catalysts, aryl-aryl couplings can be achieved with turnover numbers (TONs) upwards of 900,000 and high yields. Additionally, other groups have found that very low catalyst loadings can be achieved with asymmetric palladium pincer complexes. Catalyst loadings of 0.0001 mol % have been found to give TONs upwards of 190,000, and upper-limit TONs can reach 1,100,000. Sonogashira coupling Sonogashira coupling has found widespread use in coupling aryl halides with alkynes. TONs upwards of 2,000,000 and low catalyst loadings of 0.005 mol % can be achieved with PNP-based catalysts. Dehydrogenation of alkanes Alkanes undergo dehydrogenation at high temperatures. This conversion is typically promoted heterogeneously because homogeneous catalysts generally do not survive the required temperatures (~200 °C). The corresponding conversion can be catalyzed homogeneously by pincer catalysts, which are sufficiently thermally robust. Proof of concept was established in 1996 by Jensen and co-workers. They reported that iridium and rhodium pincer complexes catalyze the dehydrogenation of cyclooctane with a turnover frequency of 12 min−1 at 200 °C. They found that the dehydrogenation was performed at a rate two orders of magnitude greater than those previously reported. The iridium pincer complex was also found to exhibit higher activity than the rhodium complex. This rate difference may be due to the availability of the Ir(V) oxidation state, which allows stronger Ir-C and Ir-H bonds. The homogeneously catalyzed process can be coupled to other reactions such as alkene metathesis. Such tandem reactions have not been demonstrated with heterogeneous catalysts.
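As a rough illustration of the turnover numbers quoted above, the sketch below computes a TON from a catalyst loading and a conversion. The substrate amount, loading, and yield used here are illustrative assumptions, not data from any specific study.

# Illustrative turnover-number (TON) estimate for a pincer-catalysed coupling.
# All numbers below are assumed example values, not measured data.
substrate_mmol = 10.0              # assumed amount of aryl halide
catalyst_loading = 0.0001 / 100    # 0.0001 mol % expressed as a mole fraction
yield_fraction = 0.19              # assumed 19% conversion to coupled product

catalyst_mmol = substrate_mmol * catalyst_loading
product_mmol = substrate_mmol * yield_fraction

ton = product_mmol / catalyst_mmol   # moles of product per mole of catalyst
print(f"TON ≈ {ton:,.0f}")           # ≈ 190,000 for these assumed values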
History The original work on PCP ligands arose from studies of the Pt(II) complexes derived from long-chain ditertiary phosphines, species of the type R2P(CH2)nPR2 where n > 4 and R = tert-butyl. Platinum metalates one methylene group with release of HCl, giving species such as PtCl(R2P(CH2)2CH(CH2)2PR2). Pincer complexes catalyze the dehydrogenation of alkanes. Early reports described the dehydrogenation of cyclooctane by an Ir pincer complex with a turnover frequency of 12 min−1 at 200 °C. The complexes are thermally stable at such temperatures for days. See also Carbene Coupling reaction Heck reaction Kumada coupling References Coordination chemistry
Transition metal pincer complex
[ "Chemistry" ]
1,791
[ "Coordination chemistry" ]
1,740,611
https://en.wikipedia.org/wiki/C6H8O7
The molecular formula C6H8O7 (molar mass: 192.12 g/mol, exact mass: 192.0270 u) may refer to: Citric acid Isocitric acid Molecular formulas
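A quick check of the quoted molar mass, summing standard atomic weights for C6H8O7; the atomic weights below are common reference values and the snippet is only a verification sketch.

# Verify the molar mass of C6H8O7 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}
formula = {"C": 6, "H": 8, "O": 7}

molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")  # ≈ 192.12 g/mol, matching the value above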
C6H8O7
[ "Physics", "Chemistry" ]
60
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
1,740,813
https://en.wikipedia.org/wiki/Acid%20salt
Acid salts are a class of salts that produce an acidic solution after being dissolved in a solvent. The resulting solution has a greater electrical conductivity than that of the pure solvent. The acidic solution formed by an acid salt arises during the partial neutralization of diprotic or polyprotic acids. A half-neutralization occurs because replaceable hydrogen atoms remain from the partial dissociation of weak acids that have not reacted with hydroxide ions (OH−) to form water molecules. Formation The acid–base property of the resulting solution from a neutralization reaction depends on the remaining salt products. A salt containing reactive cations undergoes hydrolysis, by which the cations react with water molecules, causing deprotonation of the conjugate acids. For example, the acid salt ammonium chloride is the main species formed upon the half neutralization of ammonia in an aqueous solution of hydrogen chloride: NH3(aq) + HCl(aq) → NH4Cl(aq). Examples of acid salts Use in food Acid salts are often used in foods as part of leavening agents. In this context, the acid salts are referred to as "leavening acids." Common leavening acids include cream of tartar and monocalcium phosphate. An acid salt can be mixed with certain base salts (such as sodium bicarbonate, or baking soda) to create baking powders which release carbon dioxide. Leavening agents can be slow-acting (e.g. sodium aluminum phosphate), which react when heated, or fast-acting (e.g. cream of tartar), which react immediately at low temperatures. Double-acting baking powders contain both slow- and fast-acting leavening agents and react at low and high temperatures to provide leavening throughout the baking process. Disodium phosphate, Na2HPO4, is used in foods, and monosodium phosphate, NaH2PO4, is used in animal feed, toothpaste and evaporated milk. Intensity of acid An acid with a higher Ka value dominates the chemical reaction. It serves as a better contributor of protons (H+). A comparison between the Ka and Kb values indicates the acid–base property of the resulting solution: The solution is acidic if Ka > Kb. It contains a greater concentration of H+ ions than of OH− ions due to more extensive cation hydrolysis compared to that of anion hydrolysis. The solution is alkaline if Kb > Ka. Anions hydrolyze more than cations, causing an excess concentration of OH− ions. The solution is expected to be neutral only when Ka = Kb. Other possible factors that could vary the pH level of a solution are the relevant equilibrium constants and the additional amounts of any base or acid. For example, in ammonium chloride solution, NH4+ is the main influence on the acidity of the solution. Its Ka value (about 5.6 × 10−10 at 25 °C) is much greater than that of water molecules. This ensures its deprotonation when reacting with water, and is responsible for the pH below 7 at room temperature. Cl− will have no affinity for H+ nor any tendency to hydrolyze, as its Kb value is negligibly small. Hydrolysis of ammonium at room temperature produces: NH4+(aq) + H2O(l) ⇌ NH3(aq) + H3O+(aq) See also Base salt Salt (chemistry) Oxoacid Sodium bicarbonate Sodium bisulfate: an example of acid salt Disodium phosphate: an example of acid salt Monosodium phosphate: an example of acid salt References Salts Acids
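As a sketch of how the Ka comparison above translates into a pH value, the following estimates the pH of an ammonium chloride solution using the standard weak-acid approximation; the 0.1 M concentration is an assumed illustrative input and the Ka value is an approximate literature figure.

import math

# Estimate the pH of an NH4Cl solution from Ka of NH4+ (illustrative values).
Ka = 5.6e-10        # approximate acid dissociation constant of NH4+ at 25 °C
c = 0.10            # assumed concentration of NH4Cl in mol/L

# Weak-acid approximation: [H3O+] ≈ sqrt(Ka * c), valid when dissociation is small.
h3o = math.sqrt(Ka * c)
pH = -math.log10(h3o)
print(f"[H3O+] ≈ {h3o:.2e} M, pH ≈ {pH:.2f}")  # pH slightly above 5: acidic, as expected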
Acid salt
[ "Chemistry" ]
703
[ "Acids", "Salts" ]
1,741,375
https://en.wikipedia.org/wiki/Rearrangement%20reaction
In organic chemistry, a rearrangement reaction is a broad class of organic reactions where the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. Often a substituent moves from one atom to another atom in the same molecule, hence these reactions are usually intramolecular. In the example below, the substituent R moves from carbon atom 1 to carbon atom 2. Intermolecular rearrangements also take place. A rearrangement is not well represented by simple and discrete electron transfers (represented by curved arrows in organic chemistry texts). The actual mechanism of alkyl groups moving, as in the Wagner–Meerwein rearrangement, probably involves transfer of the moving alkyl group fluidly along a bond, not ionic bond-breaking and forming. In pericyclic reactions, explanation by orbital interactions gives a better picture than simple discrete electron transfers. It is, nevertheless, possible to draw the curved arrows for a sequence of discrete electron transfers that give the same result as a rearrangement reaction, although these are not necessarily realistic. In allylic rearrangement, the reaction is indeed ionic. Three key rearrangement reactions are 1,2-rearrangements, pericyclic reactions and olefin metathesis. 1,2-rearrangements A 1,2-rearrangement is an organic reaction where a substituent moves from one atom to another atom in a chemical compound. In a 1,2-shift the movement involves two adjacent atoms, but shifts over larger distances are possible. Skeletal isomerization is not normally encountered in the laboratory, but is the basis of large applications in oil refineries. In general, straight-chain alkanes are converted to branched isomers by heating in the presence of a catalyst. Examples include isomerisation of n-butane to isobutane and pentane to isopentane. Highly branched alkanes have favorable combustion characteristics for internal combustion engines. Further examples are the Wagner–Meerwein rearrangement and the Beckmann rearrangement, which is relevant to the production of certain nylons. Pericyclic reactions A pericyclic reaction is a type of reaction with multiple carbon–carbon bond making and breaking wherein the transition state of the molecule has a cyclic geometry, and the reaction progresses in a concerted fashion. Examples are hydride shifts and the Claisen rearrangement. Olefin metathesis Olefin metathesis is a formal exchange of the alkylidene fragments in two alkenes. It is a catalytic reaction with carbene, or more accurately, transition metal carbene complex intermediates. In this example (ethenolysis), a pair of vinyl compounds forms a new symmetrical alkene with expulsion of ethylene. Other rearrangement reactions 1,3-rearrangements 1,3-rearrangements take place over 3 carbon atoms. Examples: the Fries rearrangement and a 1,3-alkyl shift of verbenone to chrysanthenone. See also Beckmann rearrangement Curtius rearrangement Hofmann rearrangement Lossen rearrangement Schmidt reaction Tiemann rearrangement Wolff rearrangement Photochemical rearrangements Thermal rearrangement of aromatic hydrocarbons Mumm rearrangement References
Rearrangement reaction
[ "Chemistry" ]
700
[ "Rearrangement reactions", "Organic reactions" ]
1,741,453
https://en.wikipedia.org/wiki/Parent%E2%80%93offspring%20conflict
Parent–offspring conflict (POC) is an expression coined in 1974 by Robert Trivers. It is used to describe the evolutionary conflict arising from differences in optimal parental investment (PI) in an offspring from the standpoint of the parent and the offspring. PI is any investment by the parent in an individual offspring that decreases the parent's ability to invest in other offspring, while the selected offspring's chance of surviving increases. POC occurs in sexually reproducing species and is based on a genetic conflict: Parents are equally related to each of their offspring and are therefore expected to equalize their investment among them. Offspring are only half or less related to their siblings (and fully related to themselves), so they try to get more PI than the parents intended to provide even at their siblings' disadvantage. However, POC is limited by the close genetic relationship between parent and offspring: If an offspring obtains additional PI at the expense of its siblings, it decreases the number of its surviving siblings. Therefore, any gene in an offspring that leads to additional PI decreases (to some extent) the number of surviving copies of itself that may be located in siblings. Thus, if the costs in siblings are too high, such a gene might be selected against despite the benefit to the offspring. The problem of specifying how an individual is expected to weigh a relative against itself has been examined by W. D. Hamilton in 1964 in the context of kin selection. Hamilton's rule says that altruistic behavior will be positively selected if the benefit to the recipient multiplied by the genetic relatedness of the recipient to the performer is greater than the cost to the performer of a social act. Conversely, selfish behavior can only be favoured when Hamilton's inequality is not satisfied. This leads to the prediction that, other things being equal, POC will be stronger under half siblings (e.g., unrelated males father a female's successive offspring) than under full siblings. Occurrence In plants In plants, POC over the allocation of resources to the brood members may affect both brood size (number of seeds matured within a single fruit) and seed size. Concerning brood size, the most economic use of maternal resources is achieved by packing as many seeds as possible in one fruit, i.e., minimizing the cost of packing per seed. In contrast, offspring benefits from low numbers of seeds per fruit, which reduces sibling competition before and after dispersal. Conflict over seed size arises because there usually exists an inverse exponential relationship between seed size and fitness, that is, the fitness of a seed increases at a diminishing rate with resource investment but the fitness of the maternal parent has an optimum, as demonstrated by Smith and Fretwell (see also marginal value theorem). However, the optimum resource investment from the offspring's point of view would be the amount that optimizes its inclusive fitness (direct and indirect fitness), which is higher than the maternal parent's optimum. This conflict about resource allocation is most obviously manifested in the reduction of brood size (i.e. a decrease in the proportion of ovules matured into seeds). Such reduction can be assumed to be caused by the offspring: If the maternal parent's interest were to produce as few seeds as observed, selection would not favour the production of extra ovules that do not mature into seeds. 
(Although other explanations for this phenomenon exist, such as genetic load, resource depletion or maternal regulation of offspring quality, they could not be supported by experiments.) There are several possibilities for how the offspring can affect parental resource allocation to brood members. Evidence exists for siblicide by dominant embryos: Embryos formed early kill the remaining embryos through an aborting chemical. In oaks, early fertilized ovules prevent the fertilization of other ovules by inhibiting the pollen tube entry into the embryo sac. In some species, the maternal parent has evolved postfertilization abortion of few-seeded pods. Nevertheless, cheating by the offspring is also possible here, namely by late siblicide, when the postfertilization abortion has ceased. According to the general POC model, reduction of brood size – if caused by POC – should depend on genetic relatedness between offspring in a fruit. Indeed, abortion of embryos is more common in out-crossing than in self-pollinating plants (seeds in cross-pollinating plants are less related than in self-pollinating plants). Moreover, the level of solicitation of resources by the offspring is also increased in cross-pollinating plants: There are several reports that the average weight of crossed seeds is greater than that of seeds produced by self-fertilization. In birds Some of the earliest examples of parent-offspring conflict were seen in bird broods and especially in raptor species. While parent birds often lay two eggs and attempt to raise two or more young, the strongest fledgling takes a greater share of the food brought by parents and will often kill the weaker sibling (siblicide). Such conflicts have been suggested as a driving force in the evolution of optimal clutch size in birds. In the blue-footed booby, parent-offspring conflict arises in times of food scarcity. When there is less food available in a given year, the older, dominant chick will often kill the younger chick either by attacking it directly or by driving it from the nest. Parents try to prevent siblicide by building nests with steeper sides and by laying heavier second eggs. In mammals Even before POC theory arose, debates took place over whether infants wean themselves or mothers actively wean their infants. Furthermore, it was discussed whether maternal rejections increase infant independence. It turned out that both mother and infant contribute to infant independence. Maternal rejections can be followed by a short-term increase in infant contact, but they eventually result in a long-term decrease in contact, as has been shown for several primates: In wild baboons, infants that are rejected early and frequently spend less time in contact, whereas those that are not rejected stay much longer in the proximity of their mother and suckle or ride even at advanced ages. In wild chimpanzees, an abrupt increase in maternal rejections and a decrease in mother-offspring contact is found when mothers resume estrus and consort with males. In rhesus macaques, a high probability of conception in the following mating season is associated with a high rate of maternal rejection. Rejection and behavioral conflicts can occur during the first months of an infant's life and when the mother resumes estrus. These findings suggest that the reproduction of the mother is influenced by the interaction with her offspring. So there is a potential for conflicts over PI.
It was also observed in rhesus macaques that the number of contacts made by the offspring is significantly higher than the number of contacts made by the mother during a mating season, whereas the opposite holds for the number of broken contacts. This fact suggests that the mother resists the offspring's demands for contact, whereas the offspring is apparently more interested in spending time in contact. At three months of infant age, a shift from mother to infant in responsibility for maintaining contact takes place. So when the infant becomes more independent, its effort to maintain proximity to its mother increases. This might sound paradoxical but becomes clear when one takes into account that POC increases during the period of PI. In summary, all these findings are consistent with POC theory. One might object that time in contact is not a reasonable measure for PI and that, for example, time for milk transfer (lactation) would be a better one. Here one can argue that mother and infant have different thermoregulatory needs because they have different surface-to-volume ratios, resulting in more rapid heat loss in infants compared to adults. So infants may be more sensitive to low temperatures than their mothers. An infant might try to compensate by increased contact time with its mother, which could initiate a behavioral conflict over time. Consistency of this hypothesis has been shown for Japanese macaques, where decreasing temperatures result in higher rates of maternal rejection and an increased number of contacts made by infants. In social insects In eusocial species, the parent-offspring conflict takes on a unique role because of haplodiploidy and the prevalence of sterile workers. Sisters are more related to each other (0.75) than to their mothers (0.5) or brothers (0.25). In most cases, this drives female workers to try to obtain a sex ratio of 3:1 (females to males) in the colony. However, queens are equally related to both sons and daughters, so they prefer a sex ratio of 1:1. The conflict in social insects is about the level of investment the queen should provide for each sex for current and future offspring. It is generally thought that workers will win this conflict and the sex ratio will be closer to 3:1; however, there are examples, like in Bombus terrestris, where the queen has considerable control in forcing a 1:1 ratio. In amphibians Many species of frogs and salamanders display complex social behavior with highly involved parental care that includes egg attendance, tadpole transport, and tadpole feeding. Energy expenditure Both males and females of the strawberry poison-dart frog care for their offspring; however, females invest in more costly ways. Females of certain poison frog species produce unfertilized, non-developing trophic eggs which provide nutrition to their tadpoles. The tadpoles vibrate vigorously against mother frogs to solicit nutritious eggs. These maternal trophic eggs are beneficial for offspring, positively influencing larval survival, size at metamorphosis, and post-metamorphic survival. In the neotropical, foam-nesting pointedbelly frog (Leptodactylus podicipinus), females providing parental care to tadpoles have reduced body condition and food ingestion. Females that are attending to their offspring have significantly lower body mass, ovary mass, and stomach volume. This indicates that the cost of parental care in the pointedbelly frog has the potential to affect future reproduction of females due to the reduction in body condition and food intake.
In the Puerto Rican common coqui, parental care is performed exclusively by males and consists of attending to the eggs and tadpoles at an oviposition site. When brooding, males have a higher frequency of empty stomachs and lose a significant portion of their initial body mass during parental care. Abdominal fat bodies of brooding males during the middle of parental care were significantly smaller than those of non-brooding males. Another major behavioral component of parental care is nest defense against conspecific egg cannibals. This defense behavior includes aggressive calling, sustained biting, wrestling, and blocking directed against the nest intruder. Females of the Allegheny Mountain dusky salamander exhibit less activity and become associated with the nest site well in advance of oviposition in preparation for the reproductive season. This results in a reduced food intake and a decrease in body weight over the brooding period. Females either stop or greatly reduce their foraging activities and instead eat opportunistically following oviposition. Since nutritional intake is reduced, there is a decrease in body weight in females. Females of the red-backed salamander make a substantial parental investment in terms of clutch size and brooding behavior. When brooding, females usually do not leave their eggs to forage but rather rely upon their fat reserves and any resources they encounter at their oviposition site. In addition, females could experience metabolic costs while safeguarding their offspring from desiccation, intruders, and predators. Time investment The plasticity of tadpoles may play a role in the weaning conflict in egg-feeding frogs, in which the offspring prefer to devote resources to growth, while the mother prefers nutrients to help her young become independent. A similar conflict happens in direct-developing frogs that care for clutches: protected tadpoles have the advantage of a slower, safer development, but they need to be ready to reach independence rapidly due to the risks of predation or desiccation. In the neotropical Zimmerman’s poison frog, the males provide a specific kind of parental care in the form of transportation. The tadpoles are cannibalistic, which is why the males typically separate them from their siblings after hatching by transporting them to small bodies of water. However, in some cases parents do not transport their tadpoles but let them all hatch into the same pool. In order to escape their cannibalistic siblings, the tadpoles will actively seek transportative parental care. When a male frog approaches the water body in which the tadpoles had been deposited, tadpoles will almost “jump” on the back of the adult, mimicking an attack, while adults do not assist with this movement. While this is an obvious example of sibling conflict, the one-sided interaction between tadpoles and frogs could be seen as a form of parent-offspring conflict, in which the offspring attempts to extract more from the interaction than the parent is willing to provide. In this scenario, a tadpole climbing onto an unwilling frog (one that enters the pool for reasons other than tadpole transportation, such as egg deposition, cooling off, or sleeping) might be analogous to mammalian offspring seeking to nurse after weaning. In times of danger, the tadpoles of Zimmerman’s poison frog do not passively await parental assistance but instead exhibit an almost aggressive approach in mounting the adult frogs.
Trade-offs with mating Reproductive attempts in the strawberry poison-dart frog, such as courtship activity, decrease significantly or cease entirely in tadpole-rearing females compared to non-rearing females. Most brooding males of the common coqui cease calling during parental care while gravid females are still available and known to mate, so non-calling males miss potential opportunities to reproduce. Caring for tadpoles comes at the cost of other current reproductive opportunities for females, leading to the hypothesis that frequent reproduction is associated with reduced survival in frogs. In humans An important illustration of POC within humans is provided by David Haig’s (1993) work on genetic conflicts in pregnancy. Haig argued that fetal genes would be selected to draw more resources from the mother than would be optimal for the mother to give. The placenta, for example, secretes allocrine hormones that decrease the sensitivity of the mother to insulin and thus make a larger supply of blood sugar available to the fetus. The mother responds by increasing the level of insulin in her bloodstream, and to counteract this effect the placenta has insulin receptors that stimulate the production of insulin-degrading enzymes. About 30 percent of human conceptions do not progress to full term (22 percent before becoming clinical pregnancies), creating a second arena for conflict between the mother and the fetus. The fetus will have a lower quality cut-off point for spontaneous abortion than the mother. The mother's quality cut-off point also declines as she nears the end of her reproductive life, which becomes significant for older mothers. Older mothers have a higher incidence of offspring with genetic defects. Indeed, with parental age on both sides, the mutational load increases as well. Initially, the maintenance of pregnancy is controlled by the maternal hormone progesterone, but in later stages it is controlled by the fetal human chorionic gonadotrophin released into the maternal bloodstream. The release of fetal human chorionic gonadotrophin causes the release of maternal progesterone. There is also conflict over blood supply to the placenta, with the fetus being prepared to demand a larger blood supply than is optimal for the mother (or even for itself, since high birth weight is a risk factor). This results in hypertension and, significantly, high birth weight is positively correlated with maternal blood pressure. A tripartite (fetus–mother–father) immune conflict in humans and other placentals During pregnancy, there is a two-way traffic of immunologically active cell lines through the placenta. Fetal lymphocyte lines may survive in women even decades after giving birth. See also Intrauterine cannibalism The kinship theory of genomic imprinting Intragenomic and intrauterine conflict in humans References External links Genetic conflicts in human pregnancy Parent-offspring conflict Evolutionary biology Reproduction
Parent–offspring conflict
[ "Biology" ]
3,344
[ "Biological interactions", "Behavior", "Evolutionary biology", "Reproduction" ]
1,742,210
https://en.wikipedia.org/wiki/System%20equivalence
In the systems sciences, system equivalence is the behavior of a parameter or component of a system in a way similar to that of a parameter or component of a different system. Similarity means that mathematically the parameters and components will be indistinguishable from each other. Equivalence can be very useful in understanding how complex systems work. Overview Examples of equivalent systems are first- and second-order (in the independent variable) translational, electrical, torsional, fluidic, and caloric systems. Equivalent systems can be used to change large and expensive mechanical, thermal, and fluid systems into a simple, cheaper electrical system. Then the electrical system can be analyzed to validate that the system dynamics will work as designed. This is a preliminary, inexpensive way for engineers to test that their complex system performs the way they are expecting. This testing is necessary when designing new complex systems that have many components. Businesses do not want to spend millions of dollars on a system that does not perform the way that they were expecting. Using the equivalent system technique, engineers can verify and prove to the business that the system will work. This lowers the risk factor that the business is taking on the project. The following is a chart of equivalent variables for the different types of systems: {| class="wikitable" ! System type ! Flow variable ! Effort variable ! Compliance ! Inductance ! Resistance |- | Mechanical | dx/dt | F = force | spring (k) | mass (m) | damper (c) |- | Electrical | i = current | V = voltage | capacitance (C) | inductance (L) | resistance (R) |- | Thermal | qh = heat flow rate | ∆T = change in temperature | object (C) | inductance (L) | conduction and convection (R) |- | Fluid | qm = mass flow rate, qv = volume flow rate | p = pressure, h = height | tank (C) | mass (m) | valve or orifice (R) |} Flow variable: moves through the system Effort variable: puts the system into action Compliance: stores energy as potential Inductance: stores energy as kinetic Resistance: dissipates or uses energy The equivalents shown in the chart are not the only way to form mathematical analogies. In fact there are any number of ways to do this. A common requirement for analysis is that the analogy correctly models energy storage and flow across energy domains. To do this, the equivalences must be compatible. A pair of variables whose product is power (or energy) in one domain must be equivalent to a pair of variables in the other domain whose product is also power (or energy). These are called power conjugate variables. The thermal variables shown in the chart are not power conjugates and thus do not meet this criterion. See mechanical–electrical analogies for more detailed information on this. Even specifying power conjugate variables does not result in a unique analogy and there are at least three analogies of this sort in use. At least one more criterion is needed to uniquely specify the analogy, such as the requirement that impedance is equivalent in all domains as is done in the impedance analogy. Examples Mechanical systems Force: F = m d²x/dt² + c dx/dt + k x Electrical systems Voltage: V = L d²q/dt² + R dq/dt + q/C All the fundamental variables of these systems have the same functional form. Discussion The system equivalence method may be used to describe systems of two types: "vibrational" systems (which are thus described - approximately - by harmonic oscillation) and "translational" systems (which deal with "flows"). These are not mutually exclusive; a system may have features of both.
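As a rough numerical sketch of the analogy in the chart, the snippet below integrates a mass-spring-damper system and its electrical analogue (series RLC) under equivalent parameters (m ↔ L, c ↔ R, k ↔ 1/C, F ↔ V) and shows that the two responses coincide. The parameter values, the step input, and the simple Euler integrator are illustrative assumptions, not part of any standard.

# Sketch: second-order mechanical and electrical systems with equivalent
# parameters produce identical responses.  All numerical values are assumed.
def step_response(a2, a1, a0, drive, dt=1e-3, steps=5000):
    """Integrate a2*y'' + a1*y' + a0*y = drive with y(0) = y'(0) = 0 (explicit Euler)."""
    y, ydot = 0.0, 0.0
    trace = []
    for _ in range(steps):
        yddot = (drive - a1 * ydot - a0 * y) / a2
        ydot += yddot * dt
        y += ydot * dt
        trace.append(y)
    return trace

m, c, k, F = 2.0, 0.5, 8.0, 1.0        # mass, damper, spring, force (assumed)
L, R, C, V = m, c, 1.0 / k, F          # equivalent inductance, resistance, capacitance, voltage

displacement = step_response(m, c, k, F)    # mechanical: m x'' + c x' + k x = F
charge = step_response(L, R, 1.0 / C, V)    # electrical: L q'' + R q' + q/C = V

print(max(abs(x - q) for x, q in zip(displacement, charge)))  # 0.0 - identical dynamics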
Similarities also exist; the two systems can often be analysed by the methods of Euler, Lagrange and Hamilton, so that in both cases the energy is quadratic in the relevant degree(s) of freedom, provided they are linear. Vibrational systems are often described by some sort of wave (partial differential) equation, or oscillator (ordinary differential) equation. Furthermore, these sorts of systems follow the capacitor or spring analogy, in the sense that the dominant degree of freedom in the energy is the generalized position. In more physical language, these systems are predominantly characterised by their potential energy. This often works for solids, or (linearized) undulatory systems near equilibrium. On the other hand, flow systems may be easier described by the hydraulic analogy or the diffusion equation. For example, Ohm's law was said to be inspired by Fourier's law (as well as the work of C.-L. Navier). Other laws include Fick's laws of diffusion and generalized transport problems. The most important idea is the flux, or rate of transfer of some important physical quantity considered (like electric or magnetic fluxes). In these sorts of systems, the energy is dominated by the derivative of the generalized position (generalized velocity). In physics parlance, these systems tend to be kinetic energy-dominated. Field theories, in particular electromagnetism, draw heavily from the hydraulic analogy. See also Capacitor analogy Hydraulic analogy Analogical models For harmonic oscillators, see Universal oscillator equation and Equivalent systems Linear time-invariant system Resonance Q-factor Impedance Thermal inductance References Further reading Panos J. Antsaklis, Anthony N. Michel (2006), Linear Systems, 670 pp. M.F. Kaashoek & J.H. Van Schuppen (1990), Realization and Modelling in System Theory. Katsuhiko Ogata (2003), System dynamics, Prentice Hall; 4 edition (July 30, 2003), 784 pp. External links A simulation using a hydraulic analog as a mental model for the dynamics of a first order system System Analogies, Engs 22 — Systems Course, Dartmouth College. Applied mathematics Dynamical systems Systems engineering Systems theory Equivalence (mathematics)
System equivalence
[ "Physics", "Mathematics", "Engineering" ]
1,214
[ "Applied mathematics", "Systems engineering", "Mechanics", "Dynamical systems" ]
1,742,660
https://en.wikipedia.org/wiki/Dicyclopentadiene
Dicyclopentadiene, abbreviated DCPD, is a chemical compound with the formula C10H12. At room temperature, it is a white brittle wax, although lower-purity samples can be straw-coloured liquids. The pure material smells somewhat of soy wax or camphor, with less pure samples possessing a stronger acrid odor. Its energy density is 10,975 Wh/l. Dicyclopentadiene is co-produced in large quantities in the steam cracking of naphtha and gas oils to ethylene. The major use is in resins, particularly unsaturated polyester resins. It is also used in inks, adhesives, and paints. The top seven suppliers worldwide together had an annual capacity in 2001 of 179 kilotonnes (395 million pounds). DCPD was discovered in 1885 as a hydrocarbon among the products of pyrolysis of phenol by Henry Roscoe, who did not identify the structure (that was done during the following decade) but accurately assumed that it was a dimer of some hydrocarbon. History and structure For many years the structure of dicyclopentadiene was thought to feature a cyclobutane ring as the fusion between the two subunits. Through the efforts of Alder and coworkers, the structure was deduced in 1931. The spontaneous dimerization of neat cyclopentadiene at room temperature to form dicyclopentadiene proceeds to around 50% conversion over 24 hours and yields the endo isomer in better than 99:1 ratio as the kinetically favored product (about 150:1 endo:exo at 80 °C). However, prolonged heating results in isomerization to the exo isomer. The pure exo isomer was first prepared by base-mediated elimination of hydroiodo-exo-dicyclopentadiene. Thermodynamically, the exo isomer is about 0.7 kcal/mol more stable than the endo isomer. The exo isomer also has a lower reported melting point of 19 °C. Both isomers are chiral. Reactions Above 150 °C, dicyclopentadiene undergoes a retro-Diels–Alder reaction at an appreciable rate to yield cyclopentadiene. The reaction is reversible and at room temperature cyclopentadiene dimerizes over the course of hours to re-form dicyclopentadiene. Cyclopentadiene is a useful diene in Diels–Alder reactions as well as a precursor to metallocenes in organometallic chemistry. It is not available commercially as the monomer, due to the rapid formation of dicyclopentadiene; hence, it must be prepared by "cracking" the dicyclopentadiene (heating the dimer and isolating the monomer by distillation) shortly before it is needed. The thermodynamic parameters of this process have been measured. At temperatures above about 125 °C in the vapor phase, dissociation to cyclopentadiene monomer starts to become thermodynamically favored (the dissociation constant Kd = [cyclopentadiene]²/[dicyclopentadiene]). For instance, the values of Kd at 149 °C and 195 °C were found to be 277 and 2200, respectively. By extrapolation, Kd is on the order of 10−4 at 25 °C, and dissociation is disfavored. In accord with the negative values of ΔH° and ΔS° for the Diels–Alder reaction, dissociation of dicyclopentadiene is more thermodynamically favorable at high temperatures. Equilibrium constant measurements imply that ΔH° = –18 kcal/mol and ΔS° = –40 eu for cyclopentadiene dimerization. Dicyclopentadiene polymerizes. Copolymers are formed with ethylene or styrene. The "norbornene double bond" participates. Using ring-opening metathesis polymerization, the homopolymer polydicyclopentadiene is formed. Hydroformylation of DCPD gives the dialdehyde called TCD dialdehyde (TCD = tricyclodecane). This dialdehyde can be oxidized to the dicarboxylic acid and to a diol.
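As a quick consistency check on the thermodynamic values quoted above, a two-point van 't Hoff estimate from the Kd values at 149 °C and 195 °C recovers a dissociation enthalpy of roughly +18 kcal/mol, the negative of the dimerization ΔH°. This is only a sketch: it ignores any temperature dependence of ΔH° and the units/standard state of the reported Kd values.

import math

# Two-point van 't Hoff estimate of the DCPD dissociation enthalpy
# from the equilibrium constants quoted in the text (149 °C and 195 °C).
R = 1.987e-3                       # gas constant, kcal/(mol*K)
T1, K1 = 149 + 273.15, 277
T2, K2 = 195 + 273.15, 2200

dH_dissociation = R * math.log(K2 / K1) / (1 / T1 - 1 / T2)
print(f"{dH_dissociation:.1f} kcal/mol")   # ≈ +18 kcal/mol for dissociation,
                                           # i.e. ≈ -18 kcal/mol for dimerization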
All of these derivatives have some use in polymer science. Hydrogenation of dicyclopentadiene gives tetrahydrodicyclopentadiene (C10H16), which is a component of jet fuel JP-10, and rearranges to adamantane with aluminium chloride or acid at elevated temperature. References External links MSDS for dicyclopentadiene Inchem fact sheet for dicyclopentadiene CDC — NIOSH Pocket Guide to Chemical Hazards Cyclopentadienes Monomers Dimers (chemistry) Cyclopentenes
Dicyclopentadiene
[ "Chemistry", "Materials_science" ]
1,041
[ "Monomers", "Dimers (chemistry)", "Polymer chemistry" ]
1,743,649
https://en.wikipedia.org/wiki/Serial%20analysis%20of%20gene%20expression
Serial Analysis of Gene Expression (SAGE) is a transcriptomic technique used by molecular biologists to produce a snapshot of the messenger RNA population in a sample of interest in the form of small tags that correspond to fragments of those transcripts. Several variants have been developed since, most notably a more robust version, LongSAGE, RL-SAGE and the most recent SuperSAGE. Many of these have improved the technique with the capture of longer tags, enabling more confident identification of a source gene. Overview Briefly, SAGE experiments proceed as follows: The mRNA of an input sample (e.g. a tumour) is isolated and a reverse transcriptase and biotinylated primers are used to synthesize cDNA from mRNA. The cDNA is bound to Streptavidin beads via interaction with the biotin attached to the primers, and is then cleaved using a restriction endonuclease called an anchoring enzyme (AE). The location of the cleavage site and thus the length of the remaining cDNA bound to the bead will vary for each individual cDNA (mRNA). The cleaved cDNA downstream from the cleavage site is then discarded, and the remaining immobile cDNA fragments upstream from cleavage sites are divided in half and exposed to one of two adaptor oligonucleotides (A or B) containing several components in the following order upstream from the attachment site: 1) Sticky ends with the AE cut site to allow for attachment to cleaved cDNA; 2) A recognition site for a restriction endonuclease known as the tagging enzyme (TE), which cuts about 15 nucleotides downstream of its recognition site (within the original cDNA/mRNA sequence); 3) A short primer sequence unique to either adaptor A or B, which will later be used for further amplification via PCR. After adaptor ligation, cDNA are cleaved using TE to remove them from the beads, leaving only a short "tag" of about 11 nucleotides of original cDNA (15 nucleotides minus the 4 corresponding to the AE recognition site). The cleaved cDNA tags are then repaired with DNA polymerase to produce blunt end cDNA fragments. These cDNA tag fragments (with adaptor primers and AE and TE recognition sites attached) are ligated, sandwiching the two tag sequences together, and flanking adaptors A and B at either end. These new constructs, called ditags, are then PCR amplified using anchor A and B specific primers. The ditags are then cleaved using the original AE, and allowed to link together with other ditags, which will be ligated to create a cDNA concatemer with each ditag being separated by the AE recognition site. These concatemers are then transformed into bacteria for amplification through bacterial replication. The cDNA concatemers can then be isolated and sequenced using modern high-throughput DNA sequencers, and these sequences can be analysed with computer programs which quantify the recurrence of individual tags. Analysis The output of SAGE is a list of short sequence tags and the number of times it is observed. Using sequence databases a researcher can usually determine, with some confidence, from which original mRNA (and therefore which gene) the tag was extracted. Statistical methods can be applied to tag and count lists from different samples in order to determine which genes are more highly expressed. For example, a normal tissue sample can be compared against a corresponding tumor to determine which genes tend to be more (or less) active. 
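As a sketch of the counting and comparison step described above, the snippet below tallies tag occurrences in two samples and reports a simple fold change per tag; the tag sequences, counts, and pseudocount are illustrative assumptions rather than part of the published protocol.

from collections import Counter

# Toy SAGE-style analysis: count tag occurrences in two samples and compare.
normal_tags = ["CATGAAGCTT", "CATGTTGGCC", "CATGAAGCTT", "CATGCCAATG"]
tumour_tags = ["CATGAAGCTT"] * 6 + ["CATGCCAATG", "CATGTTGGCC"]

normal = Counter(normal_tags)
tumour = Counter(tumour_tags)

# Normalise to counts per total tags sequenced, then report fold changes.
n_total, t_total = sum(normal.values()), sum(tumour.values())
for tag in sorted(set(normal) | set(tumour)):
    n = (normal[tag] + 0.5) / n_total      # 0.5 pseudocount avoids division by zero
    t = (tumour[tag] + 0.5) / t_total
    print(tag, f"fold change ≈ {t / n:.2f}")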
History In 1979 teams at Harvard and Caltech extended the basic idea of making DNA copies of mRNAs in vitro to amplifying a library of such copies in bacterial plasmids. In 1982–1983, the idea of selecting random or semi-random clones from such a cDNA library for sequencing was explored by Greg Sutcliffe and coworkers and by Putney et al., who sequenced 178 clones from a rabbit muscle cDNA library. In 1991 Adams and co-workers coined the term expressed sequence tag (EST) and initiated more systematic sequencing of cDNAs as a project (starting with 600 brain cDNAs). The identification of ESTs proceeded rapidly, and millions of ESTs are now available in public databases (e.g. GenBank). In 1995, the idea of reducing the tag length from 100 to 800 bp down to a tag length of 10 to 22 bp helped reduce the cost of mRNA surveys. In that year, the original SAGE protocol was published by Victor Velculescu at the Oncology Center of Johns Hopkins University. Although SAGE was originally conceived for use in cancer studies, it has been successfully used to describe the transcriptome of other diseases and in a wide variety of organisms. Comparison to DNA microarrays The general goal of the technique is similar to the DNA microarray. However, SAGE sampling is based on sequencing mRNA output, not on hybridization of mRNA output to probes, so transcription levels are measured more quantitatively than by microarray. In addition, the mRNA sequences do not need to be known a priori, so genes or gene variants which are not known can be discovered. Microarray experiments are much cheaper to perform, so large-scale studies do not typically use SAGE. Quantifying gene expression is more exact in SAGE because it involves directly counting the number of transcripts, whereas spot intensities in microarrays fall in non-discrete gradients and are prone to background noise. Variant protocols miRNA cloning MicroRNAs, or miRNAs for short, are small (~22 nt) segments of RNA which have been found to play a crucial role in gene regulation. One of the most commonly used methods for cloning and identifying miRNAs within a cell or tissue was developed in the Bartel Lab and published in a paper by Lau et al. (2001). Since then, several variant protocols have arisen, but most have the same basic format. The procedure is quite similar to SAGE: The small RNAs are isolated, then linkers are added to each, and the RNA is converted to cDNA by RT-PCR. Following this, the linkers, containing internal restriction sites, are digested with the appropriate restriction enzyme and the sticky ends are ligated together into concatemers. Following concatenation, the fragments are ligated into plasmids and are used to transform bacteria to generate many copies of the plasmid containing the inserts. These may then be sequenced to identify the miRNAs present, and the expression level of a given miRNA can be analysed by counting the number of times it is present, similar to SAGE. LongSAGE and RL-SAGE LongSAGE was a more robust version of the original SAGE developed in 2002, which had a higher throughput, using 20 μg of mRNA to generate a cDNA library of thousands of tags. Robust LongSAGE (RL-SAGE) further improved on the LongSAGE protocol, with the ability to generate a complete cDNA library from an mRNA input of 50 ng, much smaller than the previous LongSAGE input of 2 μg mRNA, and using a lower number of ditag polymerase chain reactions (PCR).
SuperSAGE SuperSAGE is a derivative of SAGE that uses the type III endonuclease EcoP15I of phage P1 to cut 26 bp long sequence tags from each transcript's cDNA, expanding the tag size by at least 6 bp as compared to the predecessor techniques SAGE and LongSAGE. The longer tag size allows for a more precise allocation of the tag to the corresponding transcript, because each additional base increases the precision of the annotation considerably. As in the original SAGE protocol, so-called ditags are formed, using blunt-ended tags. However, SuperSAGE avoids the bias observed during the less random LongSAGE 20 bp ditag ligation. By direct sequencing with high-throughput sequencing techniques (next-generation sequencing, e.g. pyrosequencing), hundreds of thousands or millions of tags can be analyzed simultaneously, producing very precise and quantitative gene expression profiles. Therefore, tag-based gene expression profiling, also called "digital gene expression profiling" (DGE), can today provide highly accurate transcription profiles that overcome the limitations of microarrays. 3'end mRNA sequencing, massive analysis of cDNA ends In the mid-2010s several techniques combined with next-generation sequencing were developed that employ the "tag" principle for "digital gene expression profiling" but without the use of the tagging enzyme. The "MACE" approach (Massive Analysis of cDNA Ends) generates tags somewhere in the last 1500 bp of a transcript. The technique no longer depends on restriction enzymes and thereby circumvents bias related to the absence or location of the restriction site within the cDNA. Instead, the cDNA is randomly fragmented and the 3' ends are sequenced from the 5' end of the cDNA molecule that carries the poly-A tail. The sequencing length of the tag can be freely chosen. Because of this, the tags can be assembled into contigs and the annotation of the tags can be drastically improved. Therefore, MACE is also used for the analysis of non-model organisms. In addition, the longer contigs can be screened for polymorphisms. As UTRs show a large number of polymorphisms between individuals, the MACE approach can be applied for allele determination, allele-specific gene expression profiling and the search for molecular markers for breeding. In addition, the approach allows determining alternative polyadenylation of the transcripts. Because MACE requires only the 3' ends of transcripts, even partly degraded RNA can be analyzed with less degradation-dependent bias. The MACE approach uses unique molecular identifiers to allow for identification of PCR bias. See also High-throughput sequencing Transcriptomics Cap analysis of gene expression RNA-Seq DNA microarrays Expressed sequence tags References External links SAGEnet SAGE for Beginners A review of the SAGE technique at the Science Creative Quarterly Molecular biology
Serial analysis of gene expression
[ "Chemistry", "Biology" ]
2,102
[ "Biochemistry", "Molecular biology" ]
1,744,212
https://en.wikipedia.org/wiki/Tazobactam
Tazobactam is a pharmaceutical drug that inhibits the action of bacterial β-lactamases, especially those belonging to the SHV-1 and TEM groups. It is commonly used as its sodium salt, tazobactam sodium. Tazobactam is combined with the extended spectrum β-lactam antibiotic piperacillin in the drug piperacillin/tazobactam, used in infections due to Pseudomonas aeruginosa. Tazobactam broadens the spectrum of piperacillin by making it effective against organisms that express β-lactamase and would normally degrade piperacillin. Tazobactam was patented in 1982 and came into medical use in 1992. See also Ceftolozane Sulbactam Clavulanate References Antibiotics Beta-lactam antibiotics Beta-lactamase inhibitors Sulfones Triazoles
Tazobactam
[ "Chemistry", "Biology" ]
190
[ "Biotechnology products", "Functional groups", "Antibiotics", "Sulfones", "Biocides" ]
1,744,318
https://en.wikipedia.org/wiki/Underground%20storage%20tank
An underground storage tank (UST) is, according to United States federal regulations, a storage tank, including any underground piping connected to the tank, that has at least 10 percent of its volume underground. Definition & Regulation in U.S. federal law "Underground storage tank" or "UST" means any one or combination of tanks including connected underground pipes that is used to contain regulated substances, and the volume of which including the volume of underground pipes is 10 percent or more beneath the surface of the ground. This does not include, among other things, any farm or residential tank of 1,100 gallons or less capacity used for storing motor fuel for noncommercial purposes, tanks for storing heating oil for consumption on the premises, or septic tanks. USTs are regulated in the United States by the U.S. Environmental Protection Agency to prevent the leaking of petroleum or other hazardous substances and the resulting contamination of groundwater and soil. In 1984, U.S. Congress amended the Resource Conservation Recovery Act to include Subtitle I: Underground Storage Tanks, calling on the U.S. Environmental Protection Agency (EPA) to regulate the tanks. In 1985, when it was launched, there were more than 2 million tanks in the country and more than 750,000 owners and operators. The program was given 90 staff to oversee this responsibility. In September 1988, the EPA published initial underground storage tank regulations, including a 10-year phase-in period that required all operators to upgrade their USTs with spill prevention and leak detection equipment. For USTs in service in the United States, the EPA and states collectively require tank operators to take financial responsibility for any releases or leaks associated with the operation of those below ground tanks. As a condition to keep a tank in operation a demonstrated ability to pay for any release must be shown via UST insurance, a bond, or some other ability to pay. EPA updated UST and state program approval regulations in 2015, the first major changes since 1988. The revisions increase the emphasis on properly operating and maintaining UST equipment. The revisions will help prevent and detect UST releases, which are a leading source of groundwater contamination. The revisions will also help ensure all USTs in the United States, including those in Indian country, meet the same minimum standards. The changes established federal requirements that are similar to key portions of the Energy Policy Act of 2005. In addition, EPA added new operation and maintenance requirements and addressed UST systems deferred in the 1988 UST regulation. The changes: Added secondary containment requirements for new and replaced tanks and piping Added operator training requirements Added periodic operation and maintenance requirements for UST systems Added requirements to ensure UST system compatibility before storing certain biofuel blends Removed past deferrals for emergency generator tanks, field constructed tanks, and airport hydrant systems Updated codes of practice Made editorial and technical corrections. Tank types Underground storage tanks fall into four different types: Steel/aluminum tanks, made by manufacturers in most states and conforming to standards set by the Steel Tank Institute. Composite overwrapped, a metal tank (aluminum/steel) with filament windings like glass fiber/aramid or carbon fiber or a plastic compound around the metal cylinder for corrosion protection and to form an interstitial space. 
Tanks made from composite material, fiberglass/aramid or carbon fiber with a metal liner (aluminum or steel). See metal matrix composite. Composite tanks such as carbon fiber with a polymer liner (thermoplastic). See rotational molding and fibre-reinforced plastic (FRP). Underground storage tanks for water are traditionally called cisterns and are usually constructed from bricks and mortar or concrete. Petroleum underground storage tanks Petroleum USTs are used throughout North America at automobile filling stations and by the US military. Many have leaked, allowing petroleum to contaminate the soil and groundwater and enter as vapor into buildings, ending up as brownfields or Superfund sites. Many USTs installed before 1980 consisted of bare steel pipes, which corrode over time. Faulty installation can also cause structural failure of the tank or piping, causing leaks. Regulation in the US The 1984 Hazardous and Solid Waste Amendments to the Resource Conservation and Recovery Act (RCRA) required EPA to develop regulations for the underground storage of motor fuels to minimize and prevent environmental damage, by mandating owners and operators of UST systems to verify, maintain, and clean up sites damaged by petroleum contamination. In December 1988, EPA regulations asking owners to locate, remove, upgrade, or replace underground storage tanks became effective. Each state was given authority to establish such a program within its own jurisdiction, to compensate owners for the cleanup of underground petroleum leaks, to set standards and licensing for installers, and to register and inspect underground tanks. Most upgrades to USTs consisted of the installation of corrosion control (cathodic protection, interior lining, or a combination of cathodic protection and interior lining), overfill protection (to prevent overfills of the tank during tank filling operations), spill containment (to catch spills when filling), and leak detection for both the tank and piping. Many USTs were removed without replacement during the 10-year program. Many thousands of old underground tanks were replaced with newer tanks made of corrosion resistant materials (such as fiberglass, steel clad with a thick FRP shell, and well-coated steel with galvanic anodes) and others constructed as double walled tanks to form an interstice between two tank walls (a tank within a tank) which allowed for the detection of leaks from the inner or outer tank wall through monitoring of the interstice using vacuum, pressure or a liquid sensor probe. Piping was replaced during the same period with much of the new piping being double-wall construction and made of fiberglass or plastic materials. Tank monitoring systems capable of detecting small leaks (must be capable of detecting a 0.1 gallons-per-hour with a probability of detection of 95% or greater and a probability of false alarm of 5% or less) were installed and other methods were adopted to alert the tank operator of leaks and potential leaks. U.S. regulations required that UST cathodic protection systems be tested by a cathodic protection expert (minimum every three years) and that systems be monitored to ensure continued compliant operation. Some industrial owners, who previously stored fuel in underground tanks, switched to above-ground tanks to avoid environmental regulations that require monitoring of fuel storage. Many states, however, do not permit above-ground storage of motor fuel for resale to the public. 
The EPA Underground Storage Tank Program is considered to have been very successful. The national inventory of underground tanks has been reduced by more than half, and most of the rest have been replaced or upgraded to much safer standards. Of the approximately one million underground storage tank sites in the United States as of 2008, most of which handled some type of fuel, an estimated 500,000 have had leaks. There were approximately 600,000 active USTs at 223,000 sites subject to federal regulation. In 2012, EPA published guidance on how to screen buildings vulnerable to petroleum vapor intrusion, and in June 2015, U.S. EPA finally released its "Technical Guide for Assessing and Mitigating the Vapor Intrusion Pathway from Subsurface Vapor Sources to Indoor Air" and "Technical Guide For Addressing Petroleum Vapor Intrusion At Leaking Underground Storage Tank Sites". Definition in the UK Similarly to the US, the UK defines an underground tank as having 10% of its combined potential volume below the ground. Decommissioning an underground tank in the UK The requirements set by the Environment Agency for decommissioning an underground tank apply to all underground storage tanks and not just those used for the storage of fuels. The agency gives extensive guidance in The Blue Book and PETEL 65/34. The Environment Agency states that any tank no longer in use should be immediately decommissioned. This process includes both the closing and removal of a UST system (the tank and any ancillaries connected to it) as a whole and the replacing of individual tanks or lengths of pipe. Regardless of whether the decommissioning of the tank is permanent or temporary, it must be ensured that the tank and all components do not cause pollution. This is true of the removal and of any filling of the tank with inert material. Decommissioning of a tank can be carried out by removal from the ground after any volatile gas or liquid has been removed; this is called bottoming and degassing the tank. The other option involves filling the tank with either: a sand and cement slurry, hydrophobic foam, or foamed concrete. If any plan is made to leave the tank on site, the owner will be responsible for keeping a record of: the tank's capacity, the product it contained, the method used to decommission the tank (if any), and the date of the decommissioning. If any tanks and their pipework have been deemed unsuitable for petroleum spirits, then they should not be used for the storage of any hydrocarbon-based products without first checking their integrity. See also Environmental impact of the petroleum industry References External links EPA UST Program EPA summary of 2015 rule and compliance information State UST programs: California New Mexico Oklahoma Pennsylvania Texas Wisconsin Oil storage Containers Fuels Filling stations Petroleum infrastructure Storage tanks
Underground storage tank
[ "Chemistry", "Engineering" ]
1,895
[ "Chemical equipment", "Storage tanks", "Fuels", "Chemical energy sources" ]
1,744,650
https://en.wikipedia.org/wiki/Dispersion%20%28water%20waves%29
In fluid dynamics, dispersion of water waves generally refers to frequency dispersion, which means that waves of different wavelengths travel at different phase speeds. Water waves, in this context, are waves propagating on the water surface, with gravity and surface tension as the restoring forces. As a result, water with a free surface is generally considered to be a dispersive medium. For a certain water depth, surface gravity waves – i.e. waves occurring at the air–water interface, with gravity as the only force restoring the surface to flatness – propagate faster with increasing wavelength. On the other hand, for a given (fixed) wavelength, gravity waves in deeper water have a larger phase speed than in shallower water. In contrast with the behavior of gravity waves, capillary waves (i.e. only forced by surface tension) propagate faster for shorter wavelengths. Besides frequency dispersion, water waves also exhibit amplitude dispersion. This is a nonlinear effect, by which waves of larger amplitude have a different phase speed from small-amplitude waves. Frequency dispersion for surface gravity waves This section is about frequency dispersion for waves on a fluid layer forced by gravity, and according to linear theory. For surface tension effects on frequency dispersion, see surface tension effects in Airy wave theory and capillary wave. Wave propagation and dispersion The simplest propagating wave of unchanging form is a sine wave. A sine wave with water surface elevation η(x, t) is given by: η(x, t) = a sin(θ(x, t)), where a is the amplitude (in metres) and θ = θ(x, t) is the phase function (in radians), depending on the horizontal position (x, in metres) and time (t, in seconds): θ = kx − ωt, with k = 2π/λ and ω = 2π/T, where: λ is the wavelength (in metres), T is the period (in seconds), k is the wavenumber (in radians per metre) and ω is the angular frequency (in radians per second). Characteristic phases of a water wave are: the upward zero-crossing at θ = 0, the wave crest at θ = ½π, the downward zero-crossing at θ = π and the wave trough at θ = 1½π. A certain phase repeats itself after an integer m multiple of 2π: sin(θ) = sin(θ+m•2π). Essential for water waves, and other wave phenomena in physics, is that free propagating waves of non-zero amplitude only exist when the angular frequency ω and wavenumber k (or equivalently the wavelength λ and period T ) satisfy a functional relationship: the frequency dispersion relation ω² = [Ω(k)]². The dispersion relation has two solutions: ω = +Ω(k) and ω = −Ω(k), corresponding to waves travelling in the positive or negative x–direction. The dispersion relation will in general depend on several other parameters in addition to the wavenumber k. For gravity waves, according to linear theory, these are the acceleration by gravity g and the water depth h. The dispersion relation for these waves is: ω² = g k tanh(kh), an implicit equation with tanh denoting the hyperbolic tangent function. An initial wave phase θ = θ0 propagates as a function of space and time. Its subsequent position is given by: x = (θ0 + ωt) / k. This shows that the phase moves with the velocity: cp = ω / k, which is called the phase velocity. Phase velocity A sinusoidal wave, of small surface-elevation amplitude and with a constant wavelength, propagates with the phase velocity, also called celerity or phase speed. While the phase velocity is a vector and has an associated direction, celerity or phase speed refer only to the magnitude of the phase velocity. According to linear theory for waves forced by gravity, the phase speed depends on the wavelength and the water depth.
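The dependence of the phase speed on wavelength and depth can be made concrete numerically. The following GNU Octave sketch is purely illustrative: the value g = 9.81 m/s², the 10 m depth and the chosen wavelengths are assumed example inputs, not values taken from this article; it simply evaluates the linear dispersion relation ω² = g k tanh(kh) and the resulting phase speed.

% Illustrative sketch: phase speed from the linear dispersion relation.
% g, h and the wavelengths below are assumed example values.
g = 9.81;                  % acceleration by gravity [m/s^2]
h = 10;                    % water depth [m]
lambda = [1 10 100 1000];  % wavelengths [m]
k = 2*pi ./ lambda;        % wavenumbers [rad/m]
omega = sqrt(g .* k .* tanh(k*h));  % linear dispersion relation
c_p = omega ./ k;          % phase speed [m/s]
disp([lambda(:), c_p(:)]); % longer waves travel faster at this fixed depth

At this depth the 1000 m component is effectively a shallow-water wave and its phase speed is close to √(gh) ≈ 9.9 m/s, while the shortest components behave as deep-water waves.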
For a fixed water depth, long waves (with large wavelength) propagate faster than shorter waves. In the left figure, it can be seen that shallow water waves, with wavelengths λ much larger than the water depth h, travel with the phase velocity cp = √(gh), with g the acceleration by gravity and cp the phase speed. Since this shallow-water phase speed is independent of the wavelength, shallow water waves do not have frequency dispersion. Using another normalization for the same frequency dispersion relation, the figure on the right shows that, for a fixed wavelength λ, the phase speed cp increases with increasing water depth, until, in deep water with water depth h larger than half the wavelength λ (so for h/λ > 0.5), the phase velocity cp is independent of the water depth: cp = gT / (2π), with T the wave period (the reciprocal of the frequency f, T=1/f ). So in deep water the phase speed increases with the wavelength, and with the period. Since the phase speed satisfies cp = λ/T, wavelength and period (or frequency) are related. For instance in deep water: λ = (g / 2π) T². The dispersion characteristics for intermediate depth are given below. Group velocity Interference of two sinusoidal waves with slightly different wavelengths, but the same amplitude and propagation direction, results in a beat pattern, called a wave group. As can be seen in the animation, the group moves with a group velocity cg different from the phase velocity cp, due to frequency dispersion. The group velocity is depicted by the red lines (marked B) in the two figures above. In shallow water, the group velocity is equal to the shallow-water phase velocity. This is because shallow water waves are not dispersive. In deep water, the group velocity is equal to half the phase velocity: cg = ½ cp. The group velocity also turns out to be the energy transport velocity. This is the velocity with which the mean wave energy is transported horizontally in a narrow-band wave field. In the case of a group velocity different from the phase velocity, a consequence is that the number of waves counted in a wave group is different when counted from a snapshot in space at a certain moment, from when counted in time from the measured surface elevation at a fixed position. Consider a wave group of length Λg and group duration of τg. The group velocity is: cg = Λg / τg. The number of waves in a wave group, measured in space at a certain moment is: Λg / λ. While measured at a fixed location in time, the number of waves in a group is: τg / T. So the ratio of the number of waves measured in space to those measured in time is: cg / cp. So in deep water, with cg = ½ cp, a wave group has twice as many waves in time as it has in space. The water surface elevation η(x,t), as a function of horizontal position x and time t, for a bichromatic wave group of full modulation can be mathematically formulated as: η(x,t) = a sin(k1 x − ω1 t) + a sin(k2 x − ω2 t), with: a the wave amplitude of each frequency component in metres, k1 and k2 the wave number of each wave component, in radians per metre, and ω1 and ω2 the angular frequency of each wave component, in radians per second. Both ω1 and k1, as well as ω2 and k2, have to satisfy the dispersion relation: ω1² = g k1 tanh(k1 h) and ω2² = g k2 tanh(k2 h). Using trigonometric identities, the surface elevation is written as: η(x,t) = [ 2a cos( ½(k1 − k2) x − ½(ω1 − ω2) t ) ] sin( ½(k1 + k2) x − ½(ω1 + ω2) t ). The part between square brackets is the slowly varying amplitude of the group, with group wave number ½ ( k1 − k2 ) and group angular frequency ½ ( ω1 − ω2 ).
As a result, the group velocity is, for the limit k1 → k2 : Wave groups can only be discerned in case of a narrow-banded signal, with the wave-number difference k1 − k2 small compared to the mean wave number  (k1 + k2). Multi-component wave patterns The effect of frequency dispersion is that the waves travel as a function of wavelength, so that spatial and temporal phase properties of the propagating wave are constantly changing. For example, under the action of gravity, water waves with a longer wavelength travel faster than those with a shorter wavelength. While two superimposed sinusoidal waves, called a bichromatic wave, have an envelope which travels unchanged, three or more sinusoidal wave components result in a changing pattern of the waves and their envelope. A sea state – that is: real waves on the sea or ocean – can be described as a superposition of many sinusoidal waves with different wavelengths, amplitudes, initial phases and propagation directions. Each of these components travels with its own phase velocity, in accordance with the dispersion relation. The statistics of such a surface can be described by its power spectrum. Dispersion relation In the table below, the dispersion relation ω2 = [Ω(k)]2 between angular frequency ω = 2π / T and wave number k = 2π / λ is given, as well as the phase and group speeds. Deep water corresponds with water depths larger than half the wavelength, which is the common situation in the ocean. In deep water, longer period waves propagate faster and transport their energy faster. The deep-water group velocity is half the phase velocity. In shallow water, for wavelengths larger than twenty times the water depth, as found quite often near the coast, the group velocity is equal to the phase velocity. History The full linear dispersion relation was first found by Pierre-Simon Laplace, although there were some errors in his solution for the linear wave problem. The complete theory for linear water waves, including dispersion, was derived by George Biddell Airy and published in about 1840. A similar equation was also found by Philip Kelland at around the same time (but making some mistakes in his derivation of the wave theory). The shallow water (with small h / λ) limit, ω2 = gh k2, was derived by Joseph Louis Lagrange. Surface tension effects In case of gravity–capillary waves, where surface tension affects the waves, the dispersion relation becomes: with σ the surface tension (in N/m). For a water–air interface (with and ) the waves can be approximated as pure capillary waves – dominated by surface-tension effects – for wavelengths less than . For wavelengths above the waves are to good approximation pure surface gravity waves with very little surface-tension effects. Interfacial waves For two homogeneous layers of fluids, of mean thickness h below the interface and above – under the action of gravity and bounded above and below by horizontal rigid walls – the dispersion relationship ω2 = Ω2(k) for gravity waves is provided by: where again ρ and are the densities below and above the interface, while coth is the hyperbolic cotangent function. For the case is zero this reduces to the dispersion relation of surface gravity waves on water of finite depth h. When the depth of the two fluid layers becomes very large (h→∞, →∞), the hyperbolic cotangents in the above formula approaches the value of one. 
Then: Nonlinear effects Shallow water Amplitude dispersion effects appear for instance in the solitary wave: a single hump of water traveling with constant velocity in shallow water with a horizontal bed. Note that solitary waves are near-solitons, but not exactly – after the interaction of two (colliding or overtaking) solitary waves, they have changed a bit in amplitude and an oscillatory residual is left behind. The single soliton solution of the Korteweg–de Vries equation, of wave height H in water depth h far away from the wave crest, travels with the velocity: So for this nonlinear gravity wave it is the total water depth under the wave crest that determines the speed, with higher waves traveling faster than lower waves. Note that solitary wave solutions only exist for positive values of H, solitary gravity waves of depression do not exist. Deep water The linear dispersion relation – unaffected by wave amplitude – is for nonlinear waves also correct at the second order of the perturbation theory expansion, with the orders in terms of the wave steepness (where a is wave amplitude). To the third order, and for deep water, the dispersion relation is so This implies that large waves travel faster than small ones of the same frequency. This is only noticeable when the wave steepness is large. Waves on a mean current: Doppler shift Water waves on a mean flow (so a wave in a moving medium) experience a Doppler shift. Suppose the dispersion relation for a non-moving medium is: with k the wavenumber. Then for a medium with mean velocity vector V, the dispersion relationship with Doppler shift becomes: where k is the wavenumber vector, related to k as: k = |k|. The dot product k•V is equal to: k•V = kV cos α, with V the length of the mean velocity vector V: V = |V|. And α the angle between the wave propagation direction and the mean flow direction. For waves and current in the same direction, k•V=kV. See also Other articles on dispersion Dispersive partial differential equation Capillary wave Dispersive water-wave models Airy wave theory Benjamin–Bona–Mahony equation Boussinesq approximation (water waves) Cnoidal wave Camassa–Holm equation Davey–Stewartson equation Kadomtsev–Petviashvili equation (also known as KP equation) Korteweg–de Vries equation (also known as KdV equation) Luke's variational principle Nonlinear Schrödinger equation Shallow water equations Stokes' wave theory Trochoidal wave Wave turbulence Whitham equation Notes References , 2 Parts, 967 pages. Originally published in 1879, the 6th extended edition appeared first in 1932. External links Mathematical aspects of dispersive waves are discussed on the Dispersive Wiki. Water waves Wave mechanics Fluid dynamics Physical oceanography
Dispersion (water waves)
[ "Physics", "Chemistry", "Engineering" ]
2,856
[ "Physical phenomena", "Applied and interdisciplinary physics", "Water waves", "Chemical engineering", "Classical mechanics", "Waves", "Wave mechanics", "Physical oceanography", "Piping", "Fluid dynamics" ]
1,744,868
https://en.wikipedia.org/wiki/Damping
In physical systems, damping is the loss of energy of an oscillating system by dissipation. Damping is an influence within or upon an oscillatory system that has the effect of reducing or preventing its oscillation. Examples of damping include viscous damping in a fluid (see viscous drag), surface friction, radiation, resistance in electronic oscillators, and absorption and scattering of light in optical oscillators. Damping not based on energy loss can be important in other oscillating systems such as those that occur in biological systems and bikes (ex. Suspension (mechanics)). Damping is not to be confused with friction, which is a type of dissipative force acting on a system. Friction can cause or be a factor of damping. The damping ratio is a dimensionless measure describing how oscillations in a system decay after a disturbance. Many systems exhibit oscillatory behavior when they are disturbed from their position of static equilibrium. A mass suspended from a spring, for example, might, if pulled and released, bounce up and down. On each bounce, the system tends to return to its equilibrium position, but overshoots it. Sometimes losses (e.g. frictional) damp the system and can cause the oscillations to gradually decay in amplitude towards zero or attenuate. The damping ratio is a measure describing how rapidly the oscillations decay from one bounce to the next. The damping ratio is a system parameter, denoted by ("zeta"), that can vary from undamped (), underdamped () through critically damped () to overdamped (). The behaviour of oscillating systems is often of interest in a diverse range of disciplines that include control engineering, chemical engineering, mechanical engineering, structural engineering, and electrical engineering. The physical quantity that is oscillating varies greatly, and could be the swaying of a tall building in the wind, or the speed of an electric motor, but a normalised, or non-dimensionalised approach can be convenient in describing common aspects of behavior. Oscillation cases Depending on the amount of damping present, a system exhibits different oscillatory behaviors and speeds. Where the spring–mass system is completely lossless, the mass would oscillate indefinitely, with each bounce of equal height to the last. This hypothetical case is called undamped. If the system contained high losses, for example if the spring–mass experiment were conducted in a viscous fluid, the mass could slowly return to its rest position without ever overshooting. This case is called overdamped. Commonly, the mass tends to overshoot its starting position, and then return, overshooting again. With each overshoot, some energy in the system is dissipated, and the oscillations die towards zero. This case is called underdamped. Between the overdamped and underdamped cases, there exists a certain level of damping at which the system will just fail to overshoot and will not make a single oscillation. This case is called critical damping. The key difference between critical damping and overdamping is that, in critical damping, the system returns to equilibrium in the minimum amount of time. Damped sine wave A damped sine wave or damped sinusoid is a sinusoidal function whose amplitude approaches zero as time increases. It corresponds to the underdamped case of damped second-order systems, or underdamped second-order differential equations. 
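A short numerical illustration of such a damped sinusoid follows. This GNU Octave sketch uses made-up parameter values (initial amplitude, decay rate, frequency and phase are all assumed examples, not values from this article) and plots the waveform together with its exponential envelope, the form described in this section.

% Illustrative damped sine wave (all parameter values are assumed examples)
A0     = 1.0;       % initial amplitude of the envelope
lambda = 0.5;       % decay rate [1/s]
omega  = 2*pi*2;    % angular frequency (2 Hz) [rad/s]
phi    = 0;         % phase angle at t = 0
t = 0:0.001:5;                                   % time axis [s]
y = A0 * exp(-lambda*t) .* cos(omega*t + phi);   % damped sinusoid
env = A0 * exp(-lambda*t);                       % exponential envelope
plot(t, y, t, env, t, -env);
xlabel('t [s]'); ylabel('y(t)');

With these assumed numbers the time constant is 1/λ = 2 s, and the damping ratio and Q factor follow from the relations listed below.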
Damped sine waves are commonly seen in science and engineering, wherever a harmonic oscillator is losing energy faster than it is being supplied. A true sine wave starting at time = 0 begins at the origin (amplitude = 0). A cosine wave begins at its maximum value due to its phase difference from the sine wave. A given sinusoidal waveform may be of intermediate phase, having both sine and cosine components. The term "damped sine wave" describes all such damped waveforms, whatever their initial phase. The most common form of damping, which is usually assumed, is the form found in linear systems. This form is exponential damping, in which the outer envelope of the successive peaks is an exponential decay curve. That is, when you connect the maximum point of each successive curve, the result resembles an exponential decay function. The general equation for an exponentially damped sinusoid may be represented as: where: is the instantaneous amplitude at time ; is the initial amplitude of the envelope; is the decay rate, in the reciprocal of the time units of the independent variable ; is the phase angle at ; is the angular frequency. Other important parameters include: Frequency: , the number of cycles per time unit. It is expressed in inverse time units , or hertz. Time constant: , the time for the amplitude to decrease by the factor of e. Half-life is the time it takes for the exponential amplitude envelope to decrease by a factor of 2. It is equal to which is approximately . Damping ratio: is a non-dimensional characterization of the decay rate relative to the frequency, approximately , or exactly . Q factor: is another non-dimensional characterization of the amount of damping; high Q indicates slow damping relative to the oscillation. Damping ratio definition The damping ratio is a parameter, usually denoted by ζ (Greek letter zeta), that characterizes the frequency response of a second-order ordinary differential equation. It is particularly important in the study of control theory. It is also important in the harmonic oscillator. In general, systems with higher damping ratios (one or greater) will demonstrate more of a damping effect. Underdamped systems have a value of less than one. Critically damped systems have a damping ratio of exactly 1, or at least very close to it. The damping ratio provides a mathematical means of expressing the level of damping in a system relative to critical damping. For a damped harmonic oscillator with mass m, damping coefficient c, and spring constant k, it can be defined as the ratio of the damping coefficient in the system's differential equation to the critical damping coefficient: where the system's equation of motion is . and the corresponding critical damping coefficient is or where is the natural frequency of the system. The damping ratio is dimensionless, being the ratio of two coefficients of identical units. Derivation Using the natural frequency of a harmonic oscillator and the definition of the damping ratio above, we can rewrite this as: This equation is more general than just the mass–spring system, and also applies to electrical circuits and to other domains. 
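As a numerical illustration of this definition, the damping ratio of a mass–spring–damper can be computed directly from m, c and k. The values in the GNU Octave sketch below are assumed examples, not taken from the article.

% Damping ratio of a mass-spring-damper (assumed example values)
m = 2.0;     % mass [kg]
k = 50.0;    % spring constant [N/m]
c = 4.0;     % damping coefficient [N*s/m]
c_crit  = 2*sqrt(k*m);       % critical damping coefficient
zeta    = c / c_crit;        % damping ratio (dimensionless)
omega_n = sqrt(k/m);         % undamped natural frequency [rad/s]
% here zeta = 0.2 < 1, so this example system is underdamped
printf("c_crit = %.1f N*s/m, zeta = %.2f, omega_n = %.1f rad/s\n", c_crit, zeta, omega_n);

The same equation of motion, rewritten in terms of the natural frequency and the damping ratio, is solved in the derivation that follows for the different damping regimes.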
It can be solved with the approach where C and s are both complex constants, with s satisfying Two such solutions, for the two values of s satisfying the equation, can be combined to make the general real solutions, with oscillatory and decaying properties in several regimes: Undamped Is the case where corresponds to the undamped simple harmonic oscillator, and in that case the solution looks like , as expected. This case is extremely rare in the natural world with the closest examples being cases where friction was purposefully reduced to minimal values. Underdamped If s is a pair of complex values, then each complex solution term is a decaying exponential combined with an oscillatory portion that looks like . This case occurs for , and is referred to as underdamped (e.g., bungee cable). Overdamped If s is a pair of real values, then the solution is simply a sum of two decaying exponentials with no oscillation. This case occurs for , and is referred to as overdamped. Situations where overdamping is practical tend to have tragic outcomes if overshooting occurs, usually electrical rather than mechanical. For example, landing a plane in autopilot: if the system overshoots and releases landing gear too late, the outcome would be a disaster. Critically damped The case where is the border between the overdamped and underdamped cases, and is referred to as critically damped. This turns out to be a desirable outcome in many cases where engineering design of a damped oscillator is required (e.g., a door closing mechanism). Q factor and decay rate The Q factor, damping ratio ζ, and exponential decay rate α are related such that When a second-order system has (that is, when the system is underdamped), it has two complex conjugate poles that each have a real part of ; that is, the decay rate parameter represents the rate of exponential decay of the oscillations. A lower damping ratio implies a lower decay rate, and so very underdamped systems oscillate for long times. For example, a high quality tuning fork, which has a very low damping ratio, has an oscillation that lasts a long time, decaying very slowly after being struck by a hammer. Logarithmic decrement For underdamped vibrations, the damping ratio is also related to the logarithmic decrement . The damping ratio can be found for any two peaks, even if they are not adjacent. For adjacent peaks: where where x0 and x1 are amplitudes of any two successive peaks. As shown in the right figure: where , are amplitudes of two successive positive peaks and , are amplitudes of two successive negative peaks. Percentage overshoot In control theory, overshoot refers to an output exceeding its final, steady-state value. For a step input, the percentage overshoot (PO) is the maximum value minus the step value divided by the step value. In the case of the unit step, the overshoot is just the maximum value of the step response minus one. The percentage overshoot (PO) is related to damping ratio (ζ) by: Conversely, the damping ratio (ζ) that yields a given percentage overshoot is given by: Examples and applications Viscous drag When an object is falling through the air, the only force opposing its freefall is air resistance. An object falling through water or oil would slow down at a greater rate, until eventually reaching a steady-state velocity as the drag force comes into equilibrium with the force from gravity. This is the concept of viscous drag, which for example is applied in automatic doors or anti-slam doors. 
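The damping-ratio relations given earlier in this article can be chained together. The GNU Octave sketch below starts from two assumed successive peak amplitudes (made-up numbers for illustration only), estimates the damping ratio from the logarithmic decrement, and converts it to a Q factor and a percentage overshoot.

% From two successive peak amplitudes to damping ratio, Q and overshoot
x0 = 1.00;  x1 = 0.60;                      % assumed successive peak amplitudes
delta = log(x0/x1);                         % logarithmic decrement
zeta  = delta / sqrt(4*pi^2 + delta^2);     % damping ratio
Q     = 1 / (2*zeta);                       % quality factor
PO    = 100 * exp(-pi*zeta / sqrt(1 - zeta^2));   % percentage overshoot [%]
printf("delta = %.3f, zeta = %.3f, Q = %.1f, overshoot = %.1f%%\n", delta, zeta, Q, PO);

For an underdamped system such as this one, a small logarithmic decrement corresponds to a small damping ratio, a high Q and a large overshoot, consistent with the relations above.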
Damping in electrical systems Electrical systems that operate with alternating current (AC) use resistors to damp LC resonant circuits. Magnetic damping and Magnetorheological damping Kinetic energy that causes oscillations is dissipated as heat by electric eddy currents which are induced by passing through a magnet's poles, either by a coil or aluminum plate. Eddy currents are a key component of electromagnetic induction where they set up a magnetic flux directly opposing the oscillating movement, creating a resistive force. In other words, the resistance caused by magnetic forces slows a system down. An example of this concept being applied is the brakes on roller coasters. Magnetorheological Dampers (MR Dampers) use Magnetorheological fluid, which changes viscosity when subjected to a magnetic field. In this case, Magnetorheological damping may be considered an interdisciplinary form of damping with both viscous and magnetic damping mechanisms. References "Damping". Encyclopædia Britannica. OpenStax, College. "Physics". Lumen. Dimensionless numbers of mechanics Engineering ratios Ordinary differential equations Mathematical analysis Classical mechanics
Damping
[ "Physics", "Mathematics", "Engineering" ]
2,410
[ "Mathematical analysis", "Metrics", "Engineering ratios", "Quantity", "Classical mechanics", "Mechanics", "Dimensionless numbers of mechanics" ]
1,744,922
https://en.wikipedia.org/wiki/Gamma%20spectroscopy
Gamma-ray spectroscopy is the qualitative study of the energy spectra of gamma-ray sources, such as in the nuclear industry, geochemical investigation, and astrophysics. Gamma-ray spectrometry, on the other hand, is the method used to acquire a quantitative spectrum measurement. Most radioactive sources produce gamma rays, which are of various energies and intensities. When these emissions are detected and analyzed with a spectroscopy system, a gamma-ray energy spectrum can be produced. A detailed analysis of this spectrum is typically used to determine the identity and quantity of gamma emitters present in a gamma source, and is a vital tool in radiometric assay. The gamma spectrum is characteristic of the gamma-emitting nuclides contained in the source, just like in an optical spectrometer, the optical spectrum is characteristic of the material contained in a sample. Gamma ray characteristics Gamma rays are the highest-energy form of electromagnetic radiation, being physically the same as all other forms (e.g., X-rays, visible light, infrared, radio) but having (in general) higher photon energy due to their shorter wavelength. Because of this, the energy of gamma-ray photons can be resolved individually, and a gamma-ray spectrometer can measure and display the energies of the gamma-ray photons detected. Radioactive nuclei (radionuclides) commonly emit gamma rays in the energy range from a few keV to ~10 MeV, corresponding to the typical energy levels in nuclei with reasonably long lifetimes. Such sources typically produce gamma-ray "line spectra" (i.e., many photons emitted at discrete energies), whereas much higher energies (upwards of 1 TeV) may occur in the continuum spectra observed in astrophysics and elementary particle physics. The difference between gamma rays and X-rays is somewhat blurred. Gamma rays arise from transitions between nuclear energy levels and are monoenergetic, whereas X-rays are either related to transitions between atomic energy levels (characteristic X rays, which are monoenergetic), or are electrically generated (X-ray tube, linear accelerator) and have a broad energy range. Components of a gamma spectrometer The main components of a gamma spectrometer are the energy-sensitive radiation detector and the electronic devices that analyse the detector output signals, such as a pulse sorter (i.e., multichannel analyzer). Additional components may include signal amplifiers, rate meters, peak position stabilizers, and data handling devices. Detector Gamma spectroscopy detectors are passive materials that are able to interact with incoming gamma rays. The most important interaction mechanisms are the photoelectric effect, the Compton effect, and pair production. Through these processes, the energy of the gamma ray is absorbed and converted into a voltage signal by detecting the energy difference before and after the interaction (or, in a scintillation counter, the emitted photons using a photomultiplier). The voltage of the signal produced is proportional to the energy of the detected gamma ray. Common detector materials include sodium iodide (NaI) scintillation counters, high-purity germanium detectors such as Bismuth germanate, and more recently, GAGG:Ce. To accurately determine the energy of the gamma ray, it is advantageous if the photoelectric effect occurs, as it absorbs all of the energy of the incident ray. Absorbing all the energy is also possible when a series of these interaction mechanisms take place within the detector volume. 
With Compton interaction or pair production, a portion of the energy may escape from the detector volume, without being absorbed. The absorbed energy thus gives rise to a signal that behaves like a signal from a ray of lower energy. This leads to a spectral feature overlapping the regions of lower energy. Using larger detector volumes reduces this effect. More sophisticated methods of reducing this effect include using Compton-suppression shields and employing segmented detectors with add-back (see: clover (detector)). Data acquisition The voltage pulses produced for every gamma ray that interacts within the detector volume are then analyzed by a multichannel analyzer (MCA). In the MCA, a pulse-shaping amplifier takes the transient voltage signal and reshapes it into a Gaussian or trapezoidal shape. From this shape, the signal is then converted into a digital form, using a fast analog-to-digital converter (ADC). In new systems with a very high-sampling-rate ADC, the analog-to-digital conversion can be performed without reshaping. Additional logic in the MCA then performs pulse-height analysis, sorting the pulses by their height into specific bins, or channels. Each channel represents a specific range of energy in the spectrum, the number of detected signals for each channel represents the spectral intensity of the radiation in this energy range. By changing the number of channels, it is possible to fine-tune the spectral resolution and sensitivity. The MCA can send its data to a computer, which stores, displays, and further analyzes the data. A variety of software packages are available from several manufacturers, and generally include spectrum analysis tools such as energy calibration (converting bins to energies), peak area and net area calculation, and resolution calculation. A USB sound card can serve as a cheap, consumer off-the-shelf ADC, a technique pioneered by Marek Dolleiser. Specialized computer software performs pulse-height analysis on the digitized waveform, forming a complete MCA. Sound cards have high-speed but low-resolution (up to 192 kHz) ADC chips, allowing for reasonable quality for a low-to-medium count rate. The "sound card spectrometer" has been further refined in amateur and professional circles. Detector performance Gamma spectroscopy systems are selected to take advantage of several performance characteristics. Two of the most important include detector resolution and detector efficiency. Detector energy resolution Gamma rays detected in a spectroscopic system produce peaks in the spectrum. These peaks can also be called lines by analogy to optical spectroscopy. The width of the peaks is determined by the resolution of the detector, a very important characteristic of gamma spectroscopic detectors, and high resolution enables the spectroscopist to separate two gamma lines that are close to each other. Gamma spectroscopy systems are designed and adjusted to produce symmetrical peaks of the best possible resolution. The peak shape is usually a Gaussian distribution. In most spectra the horizontal position of the peak is determined by the gamma ray's energy, and the area of the peak is determined by the intensity of the gamma ray and the efficiency of the detector. The most common figure used to express detector resolution is full width at half maximum (FWHM). This is the width of the gamma ray peak at half of the highest point on the peak distribution. Energy resolution figures are given with reference to specified gamma ray energies. 
Resolution can be expressed in absolute (i.e., eV or MeV) or relative terms. For example, a sodium iodide (NaI) detector may have a FWHM of 9.15 keV at 122 keV, and 82.75 keV at 662 keV. These resolution values are expressed in absolute terms. To express the energy resolution in relative terms, the FWHM in eV or MeV is divided by the energy of the gamma ray and usually shown as percentage. Using the preceding example, the resolution of the detector is 7.5% at 122 keV, and 12.5% at 662 keV. A typical resolution of a coaxial germanium detector is about 2 keV at 1332 keV, yielding a relative resolution of 0.15%. Detector efficiency Not all gamma rays emitted by the source that pass through the detector will produce a count in the system. The probability that an emitted gamma ray will interact with the detector and produce a count is the efficiency of the detector. High-efficiency detectors produce spectra in less time than low-efficiency detectors. In general, larger detectors have higher efficiency than smaller detectors, although the shielding properties of the detector material are also important factors. Detector efficiency is measured by comparing a spectrum from a source of known activity to the count rates in each peak to the count rates expected from the known intensities of each gamma ray. Efficiency, like resolution, can be expressed in absolute or relative terms. The same units are used (i.e., percentages); therefore, the spectroscopist must take care to determine which kind of efficiency is being given for the detector. Absolute efficiency values represent the probability that a gamma ray of a specified energy passing through the detector will interact and be detected. Relative efficiency values are often used for germanium detectors, and compare the efficiency of the detector at 1332 keV to that of a 3 in × 3 in NaI detector (i.e., 1.2×10 −3 cps/Bq at 25 cm). Relative efficiency values greater than one hundred percent can therefore be encountered when working with very large germanium detectors. The energy of the gamma rays being detected is an important factor in the efficiency of the detector. An efficiency curve can be obtained by plotting the efficiency at various energies. This curve can then be used to determine the efficiency of the detector at energies different from those used to obtain the curve. High-purity germanium (HPGe) detectors typically have higher sensitivity. Scintillation detectors Scintillation detectors use crystals that emit light when gamma rays interact with the atoms in the crystals. The intensity of the light produced is usually proportional to the energy deposited in the crystal by the gamma ray; a well known situation where this relationship fails is the absorption of < 200 keV radiation by intrinsic and doped sodium iodide detectors. The mechanism is similar to that of a thermoluminescent dosimeter. The detectors are joined to photomultipliers; a photocathode converts the light into electrons; and then by using dynodes to generate electron cascades through delta ray production, the signal is amplified. Common scintillators include thallium-doped sodium iodide (NaI(Tl))—often simplified to sodium iodide (NaI) detectors—and bismuth germanate (BGO). Because photomultipliers are also sensitive to ambient light, scintillators are encased in light-tight coverings. Scintillation detectors can also be used to detect alpha- and beta-radiation. 
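Before turning to the individual detector types, the resolution arithmetic quoted earlier in this section can be checked with a few lines of GNU Octave. The FWHM and energy values below are the NaI figures given above; the script itself is only an editorial illustration of the calculation.

% Relative energy resolution from the FWHM values quoted above
E    = [122, 662];        % gamma-ray energies [keV]
FWHM = [9.15, 82.75];     % absolute resolution [keV]
rel  = 100 * FWHM ./ E;   % relative resolution [%]
disp(rel);                % approximately 7.5 % and 12.5 %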
Sodium iodide-based detectors Thallium-doped sodium iodide (NaI(Tl)) has two principal advantages: It can be produced in large crystals, yielding good efficiency, and it produces intense bursts of light compared to other spectroscopic scintillators. NaI(Tl) is also convenient to use, making it popular for field applications such as the identification of unknown materials for law enforcement purposes. Electron hole recombination will emit light that can re-excite pure scintillation crystals; however, the thallium dopant in NaI(Tl) provides energy states within the band gap between the conduction and valence bands. Following excitation in doped scintillation crystals, some electrons in the conduction band will migrate to the activator states; the downward transitions from the activator states will not re-excite the doped crystal, so the crystal is transparent to this radiation. An example of a NaI spectrum is the gamma spectrum of the caesium isotope —see Figure 1. emits a single gamma line of 662 keV. The 662 keV line shown is actually produced by , the decay product of , which is in secular equilibrium with . The spectrum in Figure 1 was measured using a NaI-crystal on a photomultiplier, an amplifier, and a multichannel analyzer. The figure shows the number of counts within the measuring period versus channel number. The spectrum indicates the following peaks (from left to right): low energy x radiation (due to internal conversion of the gamma ray), backscatter at the low energy end of the Compton distribution, and a photopeak (full energy peak) at an energy of 662 keV The Compton distribution is a continuous distribution that is present up to channel 150 in Figure 1. The distribution arises because of primary gamma rays undergoing Compton scattering within the crystal: Depending on the scattering angle, the Compton electrons have different energies and hence produce pulses in different energy channels. If many gamma rays are present in a spectrum, Compton distributions can present analysis challenges. To reduce gamma rays, an anticoincidence shield can be used—see Compton suppression. Gamma ray reduction techniques are especially useful for small lithium-doped germanium (Ge(Li)) detectors. The gamma spectrum shown in Figure 2 is of the cobalt isotope , with two gamma rays with 1.17 MeV and 1.33 MeV respectively. (See the decay scheme article for the decay scheme of cobalt-60.) The two gamma lines can be seen well-separated; the peak to the left of channel 200 most likely indicates a strong background radiation source that has not been subtracted. A backscatter peak can be seen near channel 150, similar to the second peak in Figure 1. Sodium iodide systems, as with all scintillator systems, are sensitive to changes in temperature. Changes in the operating temperature caused by changes in environmental temperature will shift the spectrum on the horizontal axis. Peak shifts of tens of channels or more are commonly observed. Such shifts can be prevented by using spectrum stabilizers. Because of the poor resolution of NaI-based detectors, they are not suitable for the identification of complicated mixtures of gamma ray-producing materials. Scenarios requiring such analyses require detectors with higher resolution. 
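The position of the Compton distribution and of the backscatter feature discussed above follows from Compton kinematics. The standard Compton-scattering expressions are not given explicitly in this article, so the GNU Octave sketch below supplies them as an aid and evaluates the Compton edge and the 180° backscatter energy for the 662 keV line.

% Compton edge and backscatter energy for the 662 keV Cs-137 line
E    = 662;                     % incident gamma-ray energy [keV]
mec2 = 511;                     % electron rest energy [keV]
E_edge = E * (2*E/mec2) / (1 + 2*E/mec2);   % Compton edge [keV], about 478 keV
E_back = E - E_edge;                        % 180-degree backscatter energy [keV], about 184 keV
printf("Compton edge: %.0f keV, backscatter peak: %.0f keV\n", E_edge, E_back);

This is consistent with the continuous Compton distribution ending well below the 662 keV photopeak and with the backscatter structure appearing below 250 keV.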
Semiconductor-based detectors Semiconductor detectors, also called solid-state detectors, are fundamentally different from scintillation detectors: They rely on detection of the charge carriers (electrons and holes) generated in semiconductors by energy deposited by gamma ray photons. In semiconductor detectors, an electric field is applied to the detector volume. An electron in the semiconductor is fixed in its valence band in the crystal until a gamma ray interaction provides the electron enough energy to move to the conduction band. Electrons in the conduction band can respond to the electric field in the detector, and therefore move to the positive contact that is creating the electrical field. The gap created by the moving electron is called a "hole", and is filled by an adjacent electron. This shuffling of holes effectively moves a positive charge to the negative contact. The arrival of the electron at the positive contact and the hole at the negative contact produces the electrical signal that is sent to the preamplifier, the MCA, and on through the system for analysis. The movement of electrons and holes in a solid-state detector is very similar to the movement of ions within the sensitive volume of gas-filled detectors such as ionization chambers. Common semiconductor-based detectors include germanium, cadmium telluride, and cadmium zinc telluride. Germanium detectors provide significantly improved energy resolution in comparison to sodium iodide detectors, as explained in the preceding discussion of resolution. Germanium detectors produce the highest resolution commonly available today. However, a disadvantage is the requirement of cryogenic temperatures for the operation of germanium detectors, typically by cooling with liquid nitrogen. Interpretation of measurements Backscatter peak In a real detector setup, some photons can and will undergo one or potentially more Compton scattering processes (e.g. in the housing material of the radioactive source, in shielding material or material otherwise surrounding the experiment) before entering the detector material. This leads to a peak structure that can be seen in the above shown energy spectrum of (Figure 1, the first peak left of the Compton edge), the so-called backscatter peak. The detailed shape of backscatter peak structure is influenced by many factors, such as the geometry of the experiment (source geometry, relative position of source, shielding and detector) or the type of the surrounding material (giving rise to different ratios of the cross sections of Photo- and Compton-effect). The basic principle, however, is as follows: Gamma-ray sources emit photons isotropically Some photons will undergo a Compton scattering process in e.g. the shielding material or the housing of the source with a scattering angle close to 180° and some of these photons will subsequently be detected by the detector. The result is a peak structure with approximately the energy of the incident photon minus the energy of the Compton edge. The backscatter peak usually appears wide and occurs at lower than 250 keV. Single escape and double escape peaks For incident photon energies E larger than two times the rest mass of the electron (1.022 MeV), pair production can occur. The resulting positron annihilates with one of the surrounding electrons, typically producing two photons with 511 keV. In a real detector (i.e. a detector of finite size) it is possible that after the annihilation: Both photons deposit their energy in the detector. 
This results in a peak with E, identical to the energy of the incident photon. One of the two photons escapes the detector and only one of the photons deposits its energy in the detector, resulting in a peak with E − 511 keV, the single escape peak. Both photons escape the detector, resulting in a peak with E − 2 × 511 keV, the double escape peak. The above Am-Be-source spectrum shows an example of single and double escape peak in a real measurement. Calibration and background radiation If a gamma spectrometer is used for identifying samples of unknown composition, its energy scale must be calibrated first. Calibration is performed by using the peaks of a known source, such as caesium-137 or cobalt-60. Because the channel number is proportional to energy, the channel scale can then be converted to an energy scale. If the size of the detector crystal is known, one can also perform an intensity calibration, so that not only the energies but also the intensities of an unknown source—or the amount of a certain isotope in the source—can be determined. Because some radioactivity is present everywhere (i.e., background radiation), the spectrum should be analyzed when no source is present. The background radiation must then be subtracted from the actual measurement. Lead absorbers can be placed around the measurement apparatus to reduce background radiation. See also Alpha-particle spectroscopy Gamma probe Gamma ray spectrometer Isomeric shift Liquid scintillation counting Mass spectrometry Mössbauer spectroscopy Perturbed angular correlation Pandemonium effect Total absorption spectroscopy Scintillation counter X-ray spectroscopy Works cited Nucleonica Wiki. Gamma Spectrum Generator. Accessed 8 October 2008. References External links Amateur gamma spectrometry of a chunk of a black mold picked in Minamisoma, close to the Fukushima Dai-ichi nuclear plant. Japan. On-line gamma-ray energy spectrum conversion utility Spectrometers Spectroscopy Nuclear physics Radioactivity Gamma rays
Gamma spectroscopy
[ "Physics", "Chemistry" ]
3,938
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Electromagnetic spectrum", "Gamma rays", "Nuclear physics", "Spectrometers", "Spectroscopy", "Radioactivity" ]
1,744,973
https://en.wikipedia.org/wiki/Transfer%20length%20method
The Transfer Length Method or the "Transmission Line Model" (both abbreviated as TLM) is a technique used in semiconductor physics and engineering to determine the specific contact resistivity between a metal and a semiconductor. TLM has been developed because with the ongoing device shrinkage in microelectronics the relative contribution of the contact resistance at metal-semiconductor interfaces in a device could not be neglected any more and an accurate measurement method for determining the specific contact resistivity was required. General description The goal of the transfer length method (TLM) is the determination of the specific contact resistivity of a metal-semiconductor junction. To create a metal-semiconductor junction a metal film is deposited on the surface of a semiconductor substrate. The TLM is usually used to determine the specific contact resistivity when the metal-semiconductor junction shows ohmic behaviour. In this case the contact resistivity can be defined as the voltage difference across the interfacial layer between the deposited metal and the semiconductor substrate divided by the current density which is defined as the current divided by the interfacial area through which the current is passing: In this definition of the specific contact resistivity refers to the voltage value just below the metal-semiconductor interfacial layer while represents the voltage value just above the metal-semiconductor interfacial layer. There are two different methods of performing TLM measurements which are both introduced in the remainder of this section. One is called just transfer length method while the other is named circular transfer length method (c-TLM). TLM To determine the specific contact resistivity an array of rectangular metal pads is deposited on the surface of a semiconductor substrate as it is depicted in the image to the right. The definition of the rectangular pads can be done by utilizing photolithography while the metal deposition can be done with sputter deposition, thermal evaporation or electroless deposition. In the image to the right the distance between the pads increases from the bottom to the top. Therefore, when the resistance between adjacent pads is measured the total resistance increases accordingly as it is indicated in the graph beneath the depiction of the metal pads. In this graph the abscissa represents the distance between two adjacent metal pads while the circles represent measured resistance values. The total resistivity can be separated into a component due to the uncovered semiconductor substrate and a component that corresponds to the voltage drop in two metal-covered areas. The former component can be described with the formula , whereas represents the sheet resistance of the semiconductor substrate and the width of the metal pads. The other component that contributes to the total resistance is denoted by because when two adjacent pads are characterized two identical metallized areas have to be considered. This means that the total resistance can be written in the following functional form, with the pad distance as independent variable: If the contribution of the metal layer itself is neglected then arises because of the voltage drop at the metal-semiconductor interface as well as in the semiconductor substrate underneath. This means that during a total resistance measurement, the voltage drops exponentially (and hence also the current density) in the metallic regions (see also theory section for further explanation). 
As it is derived in the next section of this article the majority of the voltage drop underneath a metallic pad takes place within the length which is defined as the transfer length . Physically speaking this means that the main part of the area underneath a metallic contact through which current enters the metal via the metal-semiconductor interface is given by the transfer length multiplied with the width of the pad . This situation is also depicted in the figure in this section where the current density distribution underneath two adjacent metal pads during a resistance measurement is depicted with a green colouring. All in all this means that (if the metal pad length is much larger than the transfer length) that a relation between and can be stated: Since can be extracted from a linear fit through the data points and can be obtained from the y-intercept of the linear fit an estimation of is possible. Circular TLM The original TLM method as described above has the drawback, that the current does not just flow within the area given by times . This means that the current density distribution also spreads to the vertical sides of the metallic pads in the figure in the TLM section, a phenomenon that is not considered in the derivation of the formula describing . To account for this geometrical issue instead of rectangular metallic pads, circular pads with radius are used which are separated from a holohedral metallic coating by a distance (see figure to the right). When the total resistance between circular pad and holohedral coating is measured three distinguishable components contribute to the measured value, namely the gap resistance and the contact resistances at the inner and outer end of the gap area ( and ). This is expressed in the following formula: As will be derived in the theory section an expression for that allows the extraction of from experimental data as long as is much larger than : Similar to the TLM method and can be obtained with a multiple linear regression analysis utilizing data-pairs of and . Theory TLM In the last section the basic principle of TLM was introduced and now more details about the theoretical background are given. The main purpose here is to find an expression that relates the measurable quantity with the specific contact resistivity which is intended to be determined with TLM. Therefore, in the image to the right a resistor network is illustrated that describes the situation when a voltage is applied between two adjacent metallic pads. The resistor () in the middle takes account for the part that is not covered with metal while the rest describes the situation for the metallic pads. The horizontal resistor elements () represent the resistance due to the semiconductor substrate and the vertical resistor elements () take account for the resistance due to the metal-semiconductor interfacial layer. In this description pairs of horizontal and vertical resistor elements describe the situation within a volume element of length in a metallic pad area. This methodology is also used for the derivation of the telegrapher's equations which are used to describe the behaviour of transmission lines. Because of this analogy, the described measurement technique in this article is often called the transmission line method. 
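Before continuing with the derivation, the extraction described above, a straight-line fit of the total resistance against pad spacing, can be sketched numerically. In the spirit of the GNU Octave script given later for the circular geometry, the lines below use made-up resistance values, an assumed pad width W and assumed spacings; they only illustrate how the sheet resistance, the transfer length and the specific contact resistivity follow from the slope and the intercept.

% Illustrative linear-TLM extraction (all numbers below are assumed)
W = 100;                                  % pad width [um]
d = [5 10 20 30 40];                      % pad spacings [um]
R = [11.0 13.4 18.6 23.5 28.7];           % measured total resistances [ohm] (made up)
p = polyfit(d, R, 1);                     % straight-line fit: R = p(1)*d + p(2)
R_S   = p(1) * W;                         % sheet resistance [ohm/sq], from the slope R_S/W
L_T   = p(2) * W / (2 * R_S);             % transfer length [um], from the intercept 2*R_S*L_T/W
rho_c = R_S * L_T^2 * 1e-8;               % specific contact resistivity [ohm*cm^2]
printf("R_S = %.1f ohm/sq, L_T = %.2f um, rho_c = %.2e ohm*cm^2\n", R_S, L_T, rho_c);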
By using Kirchhoff's circuit laws the following expressions for the voltage as well as for the current within the above considered length element (read square in the figure in this section) are obtained for a steady state situation where both voltage and current are not a function of time: By taking the limit the following two differential equations are obtained: These two coupled differential equations can be separated by differentiating one with respect to such that the other can plugged in. By doing so finally, two differential equations are obtained which do not depend on each other: Both differential equations have solutions of the form whereat and are constants which need to be determined by using appropriate boundary conditions and is given by which is the inverse of the previously defined transfer length . Two boundary conditions can be obtained by defining the voltage as well as the current at the beginning of a metallic pad area as and respectively. In a formal manner this means that and when using the settings in the figure in this section. By using the pair of coupled differential equations above two more boundary conditions are obtained, namely and . Eventually two equations, describing the voltage and the current as a function of distance are obtained by using the four stated boundary conditions: When a measurement is performed, it can be assumed that no current is flowing at the opposing end of each metallic pad, which in turn means that . This allows a further refinement of the equation describing the voltage when using the relation : The last equation describes the voltage drop across the region covered by a metallic pad (compare with the figure in this section). By realizing that the resistance value can be expressed with and by setting in the last formula an expression can be found that relates to the specific contact resistivity : The last equation allows the calculation of by utilizing experimental data. Since goes to 1 as increases and is significantly larger than the transfer length often the estimation is used instead of the strictly derived equality. This is identical to what was stated in the general description section. In summary the voltage as well as the current as a function of distance in the region of a metallic pad has been derived by utilizing a model that is similar to the telegrapher's equations. This enabled to find an expression that allows the calculation of the specific contact resistivity of the metal-semiconductor junction by using the experimentally found quantities and and the width of a metallic pad. Circular TLM The physical idea of deriving differential equations for the c-TLM method is the same as for TLM but polar coordinates are used instead of cartesian coordinates. This changes the resistor network that describes the metal covered area as can be seen in the figure to the right. Like for TLM by using Kirchhoff's circuit laws two coupled differential equations are obtained. When the current is eliminated a different equation for the voltage is obtained: A general solution to this type of differential equations is given as follows, whereat and are unspecified constants and is . The functions and are zero-order modified Bessel functions of the first and second kind respectively. By utilizing the coupled differential equations above and the differentiation rules for modified Bessel functions (, ) an expression for the current can be obtained. The functions and are first-order Bessel functions of the first and second kind respectively. 
Now after having obtained expressions for the current as well as for the voltage, expressions for the contact resistances corresponding to the inner and outer boundary of the gap area have to be found (compare with the schematic illustration of the measurement metallization in the general section). The contact resistance at the inner boundary is given by and during a measurement the current in the middle of the circular metallic pad is zero (). Since the modified Bessel function tends to infinity when tends to zero (see figures to the right), the constant has to be zero because the voltage can not be infinite. Considering this, the contact resistance at the inner boundary of the gap area equates to: In a similar manner an expression for the contact resistance at the outer boundary of the gap area can be found when is replaced with (compare with the drawing in the general section). Here, also a boundary condition for the current can be given, namely . This means that A has to be zero because the function tends to infinity (see figure to the right) as goes to infinity. In turn this means that the contact resistance at the outer boundary of the gap area is given by: The resistance due to the gap area itself can be found by considering the horizontal differential resistor in the figure in this section and by integrating from to . By adding , and an expression for the total resistance can be given: When the outer and the inner radius are much larger than the transfer length the quotients of the modified Bessel functions are approximately one. This means that when is substituted with the same formula for as given in the general section is found, which can be used for extracting and from experimental data: Practical example In this section a practical example of a c-TLM measurement is presented. By utilizing photolithography and sputter deposition, metallic c-TLM pads were deposited on the surface of a semiconductor thin film. The gap spacings of the c-TLM pads ranged between 20 μm and 200 μm while step sizes of 20 μm where chosen. To obtain values for the total resistance corresponding to each c-TLM pad, current-voltage measurements were performed across each gap spacing. The plot to the left shows the recorded measurement data, whereat the green arrow indicates an increase of the gap length. The curves are linear (which proofs that there is an ohmic contact between the metal and semiconductor layer) and the value of the total resistance for each c-TLM pad is obtained by taking the inverse of the slope. For the extraction of , and the specific contact resistivity the equation obtained for the total resistance is re-written as follows, with , , and : . This re-writing was done to compactify the notation and also because in this particular example the inner diameter was kept constant for each c-TLM pad. Since 10 measurements of were performed -each corresponding to a different gap length - a system of linear equations can be obtained, which can be written in matrix-vector form. The vector on the left side contains the values from the resistance measurements , all of them exhibiting a measurement error . Therefore a measurement error vector is added to the matrix-vector product. Before proceeding the matrix-vector equation is written in a more compact form: The goal is to find values of and such that the euclidean norm of the error vector becomes minimal. With this premise the error vector must be normal to the column space of , which means that . 
This means that multiplication of the matrix-vector equation with the transposed matrix of yields: . Since all components of can be calculated and the components of are provided by the resistance measurements, the coefficients and can be calculated. Finally, from the two coefficients, the values for , and the specific contact resistivity can be calculated as well. A plot to the left shows the measured resistance values as a function of the gap length together with the fitting function corresponding to the determined coefficients and . The following GNU Octave script corresponds to the performed measurement series and also includes the obtained resistance values. A plot of the measurement points together with the fitting function is created, and the values for , and the specific contact resistivity are calculated as well.

% vectors that contain the obtained measurement data
d = 20:20:200; % gap lengths in micrometres
R_row = [112.258772, 125.071437, 130.619235, 138.959548, 139.110758, 148.420932, 148.474871, 160.83128, 166.670412, 167.614947]; % measured total resistances in ohms
R = transpose(R_row);

% Here the column vectors of X are defined
r_i = 200; % inner radius in micrometres
x1 = transpose(log((r_i + d)/r_i));
x2 = transpose(1./(r_i + d) + 1/r_i);

% Define the matrix X
X = [x1, x2];

% Obtain the values A and B by solving the normal equations
beta = inv(transpose(X)*X)*transpose(X)*R;
A = beta(1);
B = beta(2);

% Define the fitting function
d_fit = 0:1:200;
R_fit = A*log((r_i + d_fit)/r_i) + B*(1./(r_i + d_fit) + 1/r_i);

% Plot the fitting function and the measurement values
scatter(d, R, "r");
hold on;
plot(d_fit, R_fit);
set(gca,'fontsize',14);
xlabel('Gap length [μm]');
ylabel('Total Resistance [Ohms]');

% Calculate the physical properties
R_S = A*2*pi; % sheet resistance, given in ohms per square
L_T = 2*pi*B/R_S; % transfer length, given in μm
rho_c = (R_S*(L_T)^2)*10^(-8); % specific contact resistivity, given in Ohm*cm^2

See also References Further reading Semiconductors
Transfer length method
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,183
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
5,933,183
https://en.wikipedia.org/wiki/Liebigs%20Annalen
Justus Liebig's Annalen der Chemie (often cited as Liebigs Annalen) was one of the oldest and historically most important journals in the field of organic chemistry worldwide. It was established in 1832 and edited by Justus von Liebig with Friedrich Wöhler and others until Liebig's death in 1873. The journal was originally titled Annalen der Pharmacie; its name was changed to Justus Liebig's Annalen der Chemie in 1874. In its first decades of publishing, the journal was both a periodical containing news of the chemical and pharmaceutical fields and a publisher of primary research. During this time, it was noted to contain rebuttals and criticism of the works it published, inserted by Justus von Liebig during his tenure as an editor. After 1874, changes were made to editorial policies, and the journal published only completed research; later on, in the 20th century, its focus was narrowed to only print articles on organic chemistry, though it had always placed emphasis on the field. The journal was especially influential in the mid-19th century, but by the post-World War II period was considered "no longer as preeminent as it once was". The journal has undergone mergers and changes in name throughout its history, from its inception to changes made following Liebig's death and its eventual consolidation with other journals in the late 20th century. In 1997, the journal merged with Recueil des Travaux Chimiques des Pays-Bas to form Liebigs Annalen/Recueil, and in 1998, it was absorbed by European Journal of Organic Chemistry by merger of a number of other national European chemistry journals. Content Many chemical syntheses and discoveries were published in Liebigs Annalen. Among these were Robert Bunsen and Gustav Kirchhoff's discovery of caesium and its later isolation by Carl Setterberg, Adolf Windaus' studies on the constitution of cholesterol and vitamins for which he was awarded the 1928 Nobel prize in Chemistry, and many of Georg Wittig's publications, including the preparation of phenyllithium. Liebigs Annalen published news on advances in chemistry and pharmacy in addition to primary research, mainly during Justus von Liebig's time as editor. From 1839 to 1855, the journal published a summary report of the advances made in chemistry for the year. One example of a news item published in the Annalen was the discovery of ether as it is used in surgical anesthesia by Henry Jacob Bigelow, which Liebig had been informed of through a letter from Edward Everett. Lothar Meyer and Dmitri Mendeleev both published their versions of the periodic table in Liebigs Annalen in 1870 and 1871, respectively, though both had published elsewhere in the years prior to their separate printings of the "full periodic system" in the Annalen. By 1957, the content of Liebigs Annalen was entirely organic chemistry. Under Liebig's editorship As an editor, Justus von Liebig would often promote his own work in the journal. Liebig would also publish his criticism on articles published in the journal, including attacks on theoretical frameworks of organic chemistry that were in conflict with his support of radical theory. These criticisms were later described by chemist and historian J. R. Partington in his series A History of Chemistry: Similarly, on Liebig and Hermann Kolbe, a contemporary organic chemist of similar reputation, J. P. 
Phillips of the University of Louisville Department of Chemistry wrote "...that the polemical outbursts for which Liebig and Kolbe were famous were not mere episodes in low comedy but a reasonably consistent defense of the conservative position that organic theory must develop from experiment alone." Following Liebig's death, Jacob Volhard, head of the group publishing the Annalen in 1878, altered the policies of the journal to only accept and print finished research papers not already printed in other papers and "to exclude articles of a polemical nature". History The history of Liebigs Annalen started with the monthly Magazin für Pharmacie und die dahin einschlagenden Wissenschaften, a work printed in Lemgo and Heidelberg (later exclusively in Heidelberg), edited by professor of pharmacy Philipp Lorenz Geiger, which Justus von Liebig joined in 1831 as co-editor. The name was changed by the end of 1831 to Magazin für Pharmacie und Experimentalkritik, in the following year merged with the Archiv der Pharmazie, then known as the Archiv des Apothekervereins im nördlichen Teutschland, edited by Rudolph Brandes. In 1834, the Neues Journal der Pharmazie für Ärzte, Apotheker und Chemiker was merged with the Annalen, resulting in a brief period wherein there were 4 editors: Liebig, Brandes, Geiger, and Johann Trommsdorff. The first volume of the journal after the merger included papers from several well-known names in chemistry, including Jöns Jacob Berzelius and Joseph Louis Gay-Lussac, not to mention Liebig himself. Brandes withdrew from the journal in 1835 due to disagreements with Liebig, going on to publish the Archiv der Pharmazie independently; Annalen der Pharmacie was renamed to Annalen der Chemie und Pharmacie on the publication of volume 33 in 1840 in an effort to be more inclusive of the related fields of research in chemistry and thus broaden the potential audience. In 1837, Liebig left Germany for Britain to meet with the British Association for the Advancement of Science and to market his work, and around that time met with Thomas Graham and Jean-Baptiste Dumas. Upon returning to Germany, due to the perceived poor quality of the Annalen while he was away, Liebig fired his co-editors Emanuel Merck and Friedrich Mohr, making himself the sole editor of the Annalen. At this point, the journal was starting publication outside of Germany, namely in France and England. Liebig acknowledged "the cooperation" of Graham and Dumas from 1838 to 1842, but would break away from them in 1842, and remained the only editor until 1851, at which point he invited Hermann Kopp to take over management of the journal; Kopp's name would appear on the title page of the journal as editor from 1851 until his death in 1892, though several other editors, including Jacob Volhard, joined the editorial board during his tenure. After Liebig's death in 1873, the journal's name was changed to Justus Liebig's Annalen der Chemie und Pharmazie. This name was shortened to Justus Liebig's Annalen der Chemie beginning with volume 173 in 1874, which was kept until it was merged with the Dutch journal Recueil des Travaux Chimiques des Pays-Bas in 1997. Shortly before the merger, in 1995, Liebigs Annalen started publishing articles in English. The resulting publication, titled Liebigs Annalen/Recueil, became part of the European Journal of Organic Chemistry in January 1998.
Prior to the mergers in the late 20th century, Liebigs Annalen faced difficulties due to paper shortages and reduced research publication during World War I, the deaths of several editors in the 1910s, and further publishing difficulties during World War II. For several years prior to World War II, several Nobel Prize recipients served on the editorial board, including Richard Willstätter, Adolf Windaus, Heinrich Otto Wieland, Hans Fischer and Richard Kuhn. Publications during the war and in the post-war period were fewer in number and had poor paper quality due to shortages, and printing moved from Heidelberg to Munich in 1945 and to Weinheim by 1947. By the later 1950s, printing volume and quality had been brought back to pre-war averages, but by this point the journal was described as "no longer as preeminent as it once was". Editors The editorial board of Liebigs Annalen throughout its history has included many notable figures in German chemistry. Title history Annalen der Pharmacie, 1832–1839 Annalen der Chemie und Pharmacie, 1840–1873 (, CODEN JLACBF) Justus Liebig's Annalen der Chemie und Pharmacie, 1873–1874 (, CODEN JLACBF) Justus Liebig's Annalen der Chemie, 1874–1944 & 1947–1978 (, CODEN JLACBF) Liebigs Annalen der Chemie, 1979–1994 (, CODEN LACHDL) Liebigs Annalen, 1995–1996 (, CODEN LANAEM) Liebigs Annalen/Recueil, 1997 (, CODEN LIARFV) European Journal of Organic Chemistry, 1998+ (Print ; e, CODEN EJOCFK) Notes References External links of European Journal of Organic Chemistry (the successor journal of Liebigs Annalen, including a complete archive of the latter) Liebigs Annalen at the Internet Archive Justus Liebigs Annalen der Chemie at the Hathi Trust Organic chemistry journals Academic journals established in 1832 Wiley-VCH academic journals Publications disestablished in 1997 Justus von Liebig Defunct journals Science and technology in Germany English-German multilingual journals
Liebigs Annalen
[ "Chemistry" ]
1,958
[ "Organic chemistry journals" ]
5,937,299
https://en.wikipedia.org/wiki/Pendulum%20%28mechanics%29
A pendulum is a body suspended from a fixed support such that it freely swings back and forth under the influence of gravity. When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back towards the equilibrium position. When released, the restoring force acting on the pendulum's mass causes it to oscillate about the equilibrium position, swinging it back and forth. The mathematics of pendulums are in general quite complicated. Simplifying assumptions can be made, which in the case of a simple pendulum allow the equations of motion to be solved analytically for small-angle oscillations. Simple gravity pendulum A simple gravity pendulum is an idealized mathematical model of a real pendulum. It is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. Since in the model there is no frictional energy loss, when given an initial displacement it swings back and forth with a constant amplitude. The model is based on the assumptions: The rod or cord is massless, inextensible and always remains under tension. The bob is a point mass. The motion occurs in two dimensions. The motion does not lose energy to external friction or air resistance. The gravitational field is uniform. The support is immobile. The differential equation which governs the motion of a simple pendulum is where is the magnitude of the gravitational field, is the length of the rod or cord, and is the angle from the vertical to the pendulum. Small-angle approximation The differential equation given above is not easily solved, and there is no solution that can be written in terms of elementary functions. However, adding a restriction to the size of the oscillation's amplitude gives a form whose solution can be easily obtained. If it is assumed that the angle is much less than 1 radian (often cited as less than 0.1 radians, about 6°), or then substituting for into using the small-angle approximation, yields the equation for a harmonic oscillator, The error due to the approximation is of order (from the Taylor expansion for ). Let the starting angle be . If it is assumed that the pendulum is released with zero angular velocity, the solution becomes The motion is simple harmonic motion where is the amplitude of the oscillation (that is, the maximum angle between the rod of the pendulum and the vertical). The corresponding approximate period of the motion is then which is known as Christiaan Huygens's law for the period. Note that under the small-angle approximation, the period is independent of the amplitude ; this is the property of isochronism that Galileo discovered. Rule of thumb for pendulum length gives If SI units are used (i.e. measure in metres and seconds), and assuming the measurement is taking place on the Earth's surface, then , and (0.994 is the approximation to 3 decimal places). Therefore, relatively reasonable approximations for the length and period are: where is the number of seconds between two beats (one beat for each side of the swing), and is measured in metres. 
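As a quick numerical illustration of the small-angle period and the rule of thumb above, a minimal GNU Octave sketch is given below; the gravitational acceleration g = 9.81 m/s² and the example length of 1 m are assumptions chosen only for illustration.

% Minimal sketch: small-angle pendulum period and the rule-of-thumb length.
g = 9.81;              % gravitational acceleration in m/s^2 (assumed)
L = 1.0;               % example pendulum length in metres (assumed)
T = 2*pi*sqrt(L/g);    % small-angle (Huygens) period in seconds
t_beat = T/2;          % time between two beats (one beat for each side of the swing)
L_rule = t_beat^2;     % rule-of-thumb estimate of the length in metres
printf("T = %.3f s, beat interval = %.3f s, rule-of-thumb length = %.3f m\n", T, t_beat, L_rule);

For L = 1 m this prints a period of about 2.006 s and a rule-of-thumb length of about 1.006 m, consistent with the approximation quoted above.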
Arbitrary-amplitude period For amplitudes beyond the small angle approximation, one can compute the exact period by first inverting the equation for the angular velocity obtained from the energy method (), and then integrating over one complete cycle, or twice the half-cycle or four times the quarter-cycle which leads to Note that this integral diverges as approaches the vertical so that a pendulum with just the right energy to go vertical will never actually get there. (Conversely, a pendulum close to its maximum can take an arbitrarily long time to fall down.) This integral can be rewritten in terms of elliptic integrals as where is the incomplete elliptic integral of the first kind defined by Or more concisely by the substitution expressing in terms of , Here is the complete elliptic integral of the first kind defined by For comparison of the approximation to the full solution, consider the period of a pendulum of length 1 m on Earth ( = ) at an initial angle of 10 degrees is The linear approximation gives The difference between the two values, less than 0.2%, is much less than that caused by the variation of with geographical location. From here there are many ways to proceed to calculate the elliptic integral. Legendre polynomial solution for the elliptic integral Given and the Legendre polynomial solution for the elliptic integral: where denotes the double factorial, an exact solution to the period of a simple pendulum is: Figure 4 shows the relative errors using the power series. is the linear approximation, and to include respectively the terms up to the 2nd to the 10th powers. Power series solution for the elliptic integral Another formulation of the above solution can be found if the following Maclaurin series: is used in the Legendre polynomial solution above. The resulting power series is: more fractions available in the On-Line Encyclopedia of Integer Sequences with having the numerators and having the denominators. Arithmetic-geometric mean solution for elliptic integral Given and the arithmetic–geometric mean solution of the elliptic integral: where is the arithmetic-geometric mean of and . This yields an alternative and faster-converging formula for the period: The first iteration of this algorithm gives This approximation has the relative error of less than 1% for angles up to 96.11 degrees. Since the expression can be written more concisely as The second order expansion of reduces to A second iteration of this algorithm gives This second approximation has a relative error of less than 1% for angles up to 163.10 degrees. Approximate formulae for the nonlinear pendulum period Though the exact period can be determined, for any finite amplitude rad, by evaluating the corresponding complete elliptic integral , where , this is often avoided in applications because it is not possible to express this integral in a closed form in terms of elementary functions. This has made way for research on simple approximate formulae for the increase of the pendulum period with amplitude (useful in introductory physics labs, classical mechanics, electromagnetism, acoustics, electronics, superconductivity, etc. The approximate formulae found by different authors can be classified as follows: ‘Not so large-angle’ formulae, i.e. those yielding good estimates for amplitudes below rad (a natural limit for a bob on the end of a flexible string), though the deviation with respect to the exact period increases monotonically with amplitude, being unsuitable for amplitudes near to rad. 
One of the simplest formulae found in literature is the following one by Lima (2006): , where . ‘Very large-angle’ formulae, i.e. those which approximate the exact period asymptotically for amplitudes near to rad, with an error that increases monotonically for smaller amplitudes (i.e., unsuitable for small amplitudes). One of the better such formulae is that by Cromer, namely: . Of course, the increase of with amplitude is more apparent when , as has been observed in many experiments using either a rigid rod or a disc. As accurate timers and sensors are currently available even in introductory physics labs, the experimental errors found in ‘very large-angle’ experiments are already small enough for a comparison with the exact period, and a very good agreement between theory and experiments in which friction is negligible has been found. Since this activity has been encouraged by many instructors, a simple approximate formula for the pendulum period valid for all possible amplitudes, to which experimental data could be compared, was sought. In 2008, Lima derived a weighted-average formula with this characteristic: where , which presents a maximum error of only 0.6% (at ). Arbitrary-amplitude angular displacement The Fourier series expansion of is given by where is the elliptic nome, and the angular frequency. If one defines can be approximated using the expansion (see ). Note that for , thus the approximation is applicable even for large amplitudes. Equivalently, the angle can be given in terms of the Jacobi elliptic function with modulus For small , , and , so the solution is well-approximated by the solution given in Pendulum (mechanics)#Small-angle approximation. Examples The animations below depict the motion of a simple (frictionless) pendulum with increasing amounts of initial displacement of the bob, or equivalently increasing initial velocity. The small graph above each pendulum is the corresponding phase plane diagram; the horizontal axis is displacement and the vertical axis is velocity. With a large enough initial velocity the pendulum does not oscillate back and forth but rotates completely around the pivot. Compound pendulum A compound pendulum (or physical pendulum) is one where the rod is not massless, and may have extended size; that is, an arbitrarily shaped rigid body swinging by a pivot . In this case the pendulum's period depends on its moment of inertia around the pivot point. The equation of torque gives: where: is the angular acceleration. is the torque The torque is generated by gravity so: where: is the total mass of the rigid body (rod and bob) is the distance from the pivot point to the system's centre-of-mass is the angle from the vertical Hence, under the small-angle approximation, (or equivalently when ), where is the moment of inertia of the body about the pivot point . The expression for is of the same form as the conventional simple pendulum and gives a period of And a frequency of If the initial angle is taken into consideration (for large amplitudes), then the expression for becomes: and gives a period of: where is the maximum angle of oscillation (with respect to the vertical) and is the complete elliptic integral of the first kind. An important concept is the equivalent length, , the length of a simple pendulums that has the same angular frequency as the compound pendulum: Consider the following cases: The simple pendulum is the special case where all the mass is located at the bob swinging at a distance from the pivot. 
Thus, and , so the expression reduces to: . Notice , as expected (the definition of equivalent length). A homogeneous rod of mass and length swinging from its end has and , so the expression reduces to: . Notice , a homogeneous rod oscillates as if it were a simple pendulum of two-thirds its length. A heavy simple pendulum: combination of a homogeneous rod of mass and length swinging from its end, and a bob at the other end. Then the system has a total mass of , and the other parameters being (by definition of centre-of-mass) and , so the expression reduces to: Where . Notice these formulae can be particularized into the two previous cases studied before just by considering the mass of the rod or the bob to be zero respectively. Also notice that the formula does not depend on both the mass of the bob and the rod, but actually on their ratio, . An approximation can be made for : Notice how similar it is to the angular frequency in a spring-mass system with effective mass. Damped, driven pendulum The above discussion focuses on a pendulum bob only acted upon by the force of gravity. Suppose a damping force, e.g. air resistance, as well as a sinusoidal driving force acts on the body. This system is a damped, driven oscillator, and is chaotic. Equation (1) can be written as (see the Torque derivation of Equation (1) above). A damping term and forcing term can be added to the right hand side to get where the damping is assumed to be directly proportional to the angular velocity (this is true for low-speed air resistance, see also Drag (physics)). and are constants defining the amplitude of forcing and the degree of damping respectively. is the angular frequency of the driving oscillations. Dividing through by : For a physical pendulum: This equation exhibits chaotic behaviour. The exact motion of this pendulum can only be found numerically and is highly dependent on initial conditions, e.g. the initial velocity and the starting amplitude. However, the small angle approximation outlined above can still be used under the required conditions to give an approximate analytical solution. Physical interpretation of the imaginary period The Jacobian elliptic function that expresses the position of a pendulum as a function of time is a doubly periodic function with a real period and an imaginary period. The real period is, of course, the time it takes the pendulum to go through one full cycle. Paul Appell pointed out a physical interpretation of the imaginary period: if is the maximum angle of one pendulum and is the maximum angle of another, then the real period of each is the magnitude of the imaginary period of the other. Coupled pendula Coupled pendulums can affect each other's motion, either through a direction connection (such as a spring connecting the bobs) or through motions in a supporting structure (such as a tabletop). The equations of motion for two identical simple pendulums coupled by a spring connecting the bobs can be obtained using Lagrangian mechanics. The kinetic energy of the system is: where is the mass of the bobs, is the length of the strings, and , are the angular displacements of the two bobs from equilibrium. The potential energy of the system is: where is the gravitational acceleration, and is the spring constant. The displacement of the spring from its equilibrium position assumes the small angle approximation. 
The Lagrangian is then which leads to the following set of coupled differential equations: Adding and subtracting these two equations in turn, and applying the small angle approximation, gives two harmonic oscillator equations in the variables and : with the corresponding solutions where and , , , are constants of integration. Expressing the solutions in terms of and alone: If the bobs are not given an initial push, then the condition requires , which gives (after some rearranging): See also Harmonograph Conical pendulum Cycloidal pendulum Double pendulum Inverted pendulum Kapitza's pendulum Rayleigh–Lorentz pendulum Elastic pendulum Mathieu function Pendulum equations (software) References Further reading External links Mathworld article on Mathieu Function Differential equations Dynamical systems Horology Mathematical physics Mathematics
Pendulum (mechanics)
[ "Physics", "Mathematics" ]
2,921
[ "Physical quantities", "Time", "Horology", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Differential equations", "Equations", "Mechanics", "Spacetime", "Mathematical physics", "Dynamical systems" ]
4,494,768
https://en.wikipedia.org/wiki/Emergent%20gravity
Emergent gravity may refer to: Induced gravity, a theory proposed by Andrei Sakharov in 1967, or Entropic gravity, a theory proposed by Erik Verlinde in 2009. Theories of gravity
Emergent gravity
[ "Physics" ]
41
[ "Theoretical physics", "Theories of gravity" ]
4,495,335
https://en.wikipedia.org/wiki/Tautology%20%28logic%29
In mathematical logic, a tautology (from ) is a formula that is true regardless of the interpretation of its component terms, with only the logical constants having a fixed meaning. For example, a formula that states, "the ball is green or the ball is not green," is always true, regardless of what a ball is and regardless of its colour. Tautology is usually, though not always, used to refer to valid formulas of propositional logic. The philosopher Ludwig Wittgenstein first applied the term to redundancies of propositional logic in 1921, borrowing from rhetoric, where a tautology is a repetitive statement. In logic, a formula is satisfiable if it is true under at least one interpretation, and thus a tautology is a formula whose negation is unsatisfiable. In other words, it cannot be false. Unsatisfiable statements, both through negation and affirmation, are known formally as contradictions. A formula that is neither a tautology nor a contradiction is said to be logically contingent. Such a formula can be made either true or false based on the values assigned to its propositional variables. The double turnstile notation is used to indicate that S is a tautology. Tautology is sometimes symbolized by "Vpq", and contradiction by "Opq". The tee symbol is sometimes used to denote an arbitrary tautology, with the dual symbol (falsum) representing an arbitrary contradiction; in any symbolism, a tautology may be substituted for the truth value "true", as symbolized, for instance, by "1". Tautologies are a key concept in propositional logic, where a tautology is defined as a propositional formula that is true under any possible Boolean valuation of its propositional variables. A key property of tautologies in propositional logic is that an effective method exists for testing whether a given formula is always satisfied (equiv., whether its negation is unsatisfiable). The definition of tautology can be extended to sentences in predicate logic, which may contain quantifiers—a feature absent from sentences of propositional logic. Indeed, in propositional logic, there is no distinction between a tautology and a logically valid formula. In the context of predicate logic, many authors define a tautology to be a sentence that can be obtained by taking a tautology of propositional logic, and uniformly replacing each propositional variable by a first-order formula (one formula per propositional variable). The set of such formulas is a proper subset of the set of logically valid sentences of predicate logic (i.e., sentences that are true in every model). History The word tautology was used by the ancient Greeks to describe a statement that was asserted to be true merely by virtue of saying the same thing twice, a pejorative meaning that is still used for rhetorical tautologies. Between 1800 and 1940, the word gained new meaning in logic, and is currently used in mathematical logic to denote a certain type of propositional formula, without the pejorative connotations it originally possessed. In 1800, Immanuel Kant wrote in his book Logic: Here, analytic proposition refers to an analytic truth, a statement in natural language that is true solely because of the terms involved. In 1884, Gottlob Frege proposed in his Grundlagen that a truth is analytic exactly if it can be derived using logic. However, he maintained a distinction between analytic truths (i.e., truths based only on the meanings of their terms) and tautologies (i.e., statements devoid of content). 
In his Tractatus Logico-Philosophicus in 1921, Ludwig Wittgenstein proposed that statements that can be deduced by logical deduction are tautological (empty of meaning), as well as being analytic truths. Henri Poincaré had made similar remarks in Science and Hypothesis in 1905. Although Bertrand Russell at first argued against these remarks by Wittgenstein and Poincaré, claiming that mathematical truths were not only non-tautologous but were synthetic, he later spoke in favor of them in 1918: Here, logical proposition refers to a proposition that is provable using the laws of logic. Many logicians in the early 20th century used the term 'tautology' for any formula that is universally valid, whether a formula of propositional logic or of predicate logic. In this broad sense, a tautology is a formula that is true under all interpretations, or that is logically equivalent to the negation of a contradiction. Tarski and Gödel followed this usage and it appears in textbooks such as that of Lewis and Langford. This broad use of the term is less common today, though some textbooks continue to use it. Modern textbooks more commonly restrict the use of 'tautology' to valid sentences of propositional logic, or valid sentences of predicate logic that can be reduced to propositional tautologies by substitution. Background Propositional logic begins with propositional variables, atomic units that represent concrete propositions. A formula consists of propositional variables connected by logical connectives, built up in such a way that the truth of the overall formula can be deduced from the truth or falsity of each variable. A valuation is a function that assigns each propositional variable to either T (for truth) or F (for falsity). So by using the propositional variables A and B, the binary connectives and representing disjunction and conjunction respectively, and the unary connective representing negation, the following formula can be obtained:. A valuation here must assign to each of A and B either T or F. But no matter how this assignment is made, the overall formula will come out true. For if the first disjunct is not satisfied by a particular valuation, then A or B must be assigned F, which will make one of the following disjunct to be assigned T. In natural language, either both A and B are true or at least one of them is false. Definition and examples A formula of propositional logic is a tautology if the formula itself is always true, regardless of which valuation is used for the propositional variables. There are infinitely many tautologies. In many of the following examples A represents the statement "object X is bound", B represents "object X is a book", and C represents "object X is on the shelf". Without a specific referent object X, corresponds to the proposition "all bound things are books". ("A or not A"), the law of excluded middle. This formula has only one propositional variable, A. Any valuation for this formula must, by definition, assign A one of the truth values true or false, and assign A the other truth value. For instance, "The cat is black or the cat is not black". ("if A implies B, then not-B implies not-A", and vice versa), which expresses the law of contraposition. For instance, "If it's bound, it is a book; if it's not a book, it's not bound" and vice versa. ("if not-A implies both B and its negation not-B, then not-A must be false, then A must be true"), which is the principle known as reductio ad absurdum. 
For instance, "If it's not bound, we know it's a book, if it's not bound, we know it's also not a book, so it is bound". ("if not both A and B, then not-A or not-B", and vice versa), which is known as De Morgan's law. "If it is not both a book and bound, then we are sure that it's not a book or that it's not bound" and vice versa. ("if A implies B and B implies C, then A implies C"), which is the principle known as hypothetical syllogism. "If it's bound, then it's a book and if it's a book, then it's on that shelf, so if it's bound, it's on that shelf". ("if at least one of A or B is true, and each implies C, then C must be true as well"), which is the principle known as proof by cases. "Bound things and books are on that shelf. If it's either a book or it's blue, it's on that shelf". A minimal tautology is a tautology that is not the instance of a shorter tautology. is a tautology, but not a minimal one, because it is an instantiation of . Verifying tautologies The problem of determining whether a formula is a tautology is fundamental in propositional logic. If there are n variables occurring in a formula then there are 2n distinct valuations for the formula. Therefore, the task of determining whether or not the formula is a tautology is a finite and mechanical one: one needs only to evaluate the truth value of the formula under each of its possible valuations. One algorithmic method for verifying that every valuation makes the formula to be true is to make a truth table that includes every possible valuation. For example, consider the formula There are 8 possible valuations for the propositional variables A, B, C, represented by the first three columns of the following table. The remaining columns show the truth of subformulas of the formula above, culminating in a column showing the truth value of the original formula under each valuation. Because each row of the final column shows T, the sentence in question is verified to be a tautology. It is also possible to define a deductive system (i.e., proof system) for propositional logic, as a simpler variant of the deductive systems employed for first-order logic (see Kleene 1967, Sec 1.9 for one such system). A proof of a tautology in an appropriate deduction system may be much shorter than a complete truth table (a formula with n propositional variables requires a truth table with 2n lines, which quickly becomes infeasible as n increases). Proof systems are also required for the study of intuitionistic propositional logic, in which the method of truth tables cannot be employed because the law of the excluded middle is not assumed. Tautological implication A formula R is said to tautologically imply a formula S if every valuation that causes R to be true also causes S to be true. This situation is denoted . It is equivalent to the formula being a tautology (Kleene 1967 p. 27). For example, let be . Then is not a tautology, because any valuation that makes false will make false. But any valuation that makes true will make true, because is a tautology. Let be the formula . Then , because any valuation satisfying will make true—and thus makes true. It follows from the definition that if a formula is a contradiction, then tautologically implies every formula, because there is no truth valuation that causes to be true, and so the definition of tautological implication is trivially satisfied. Similarly, if is a tautology, then is tautologically implied by every formula. 
Substitution There is a general procedure, the substitution rule, that allows additional tautologies to be constructed from a given tautology (Kleene 1967 sec. 3). Suppose that is a tautology and for each propositional variable in a fixed sentence is chosen. Then the sentence obtained by replacing each variable in with the corresponding sentence is also a tautology. For example, let be the tautology: . Let be and let be . It follows from the substitution rule that the sentence: is also a tautology. Semantic completeness and soundness An axiomatic system is complete if every tautology is a theorem (derivable from axioms). An axiomatic system is sound if every theorem is a tautology. Efficient verification and the Boolean satisfiability problem The problem of constructing practical algorithms to determine whether sentences with large numbers of propositional variables are tautologies is an area of contemporary research in the area of automated theorem proving. The method of truth tables illustrated above is provably correct – the truth table for a tautology will end in a column with only T, while the truth table for a sentence that is not a tautology will contain a row whose final column is F, and the valuation corresponding to that row is a valuation that does not satisfy the sentence being tested. This method for verifying tautologies is an effective procedure, which means that given unlimited computational resources it can always be used to mechanistically determine whether a sentence is a tautology. This means, in particular, the set of tautologies over a fixed finite or countable alphabet is a decidable set. As an efficient procedure, however, truth tables are constrained by the fact that the number of valuations that must be checked increases as 2k, where k is the number of variables in the formula. This exponential growth in the computation length renders the truth table method useless for formulas with thousands of propositional variables, as contemporary computing hardware cannot execute the algorithm in a feasible time period. The problem of determining whether there is any valuation that makes a formula true is the Boolean satisfiability problem; the problem of checking tautologies is equivalent to this problem, because verifying that a sentence S is a tautology is equivalent to verifying that there is no valuation satisfying . The Boolean satisfiability problem is NP-complete, and consequently, tautology is co-NP-complete. It is widely believed that (equivalently for all NP-complete problems) no polynomial-time algorithm can solve the satisfiability problem, although some algorithms perform well on special classes of formulas, or terminate quickly on many instances. Tautologies versus validities in first-order logic The fundamental definition of a tautology is in the context of propositional logic. The definition can be extended, however, to sentences in first-order logic. These sentences may contain quantifiers, unlike sentences of propositional logic. In the context of first-order logic, a distinction is maintained between logical validities, sentences that are true in every model, and tautologies (or, tautological validities), which are a proper subset of the first-order logical validities. In the context of propositional logic, these two terms coincide. 
A tautology in first-order logic is a sentence that can be obtained by taking a tautology of propositional logic and uniformly replacing each propositional variable by a first-order formula (one formula per propositional variable). For example, because is a tautology of propositional logic, is a tautology in first order logic. Similarly, in a first-order language with a unary relation symbols R,S,T, the following sentence is a tautology: It is obtained by replacing with , with , and with in the propositional tautology: . Not all logical validities are tautologies in first-order logic. For example, the sentence: is true in any first-order interpretation, but it corresponds to the propositional sentence which is not a tautology of propositional logic. Tautologies in Non-Classical Logics Whether a given formula is a tautology depends on the formal system of logic that is in use. For example, the following formula is a tautology of classical logic but not of intuitionistic logic: See also Normal forms Algebraic normal form Conjunctive normal form Disjunctive normal form Logic optimization Related logical topics Boolean algebra Boolean domain Boolean function Contradiction False (logic) Syllogism Law of identity List of logic symbols Logic synthesis Logical consequence Logical graph Logical truth Vacuous truth References Further reading Bocheński, J. M. (1959) Précis of Mathematical Logic, translated from the French and German editions by Otto Bird, Dordrecht, South Holland: D. Reidel. Enderton, H. B. (2002) A Mathematical Introduction to Logic, Harcourt/Academic Press, . Kleene, S. C. (1967) Mathematical Logic, reprinted 2002, Dover Publications, . Reichenbach, H. (1947). Elements of Symbolic Logic, reprinted 1980, Dover, Wittgenstein, L. (1921). "Logisch-philosophiche Abhandlung", Annalen der Naturphilosophie (Leipzig), v. 14, pp. 185–262, reprinted in English translation as Tractatus logico-philosophicus, New York City and London, 1922. External links Logical expressions Logical truth Mathematical logic Propositional calculus Propositions Semantics Sentences by type
Tautology (logic)
[ "Mathematics" ]
3,467
[ "Mathematical logic", "Logical expressions", "Logical truth" ]
4,497,595
https://en.wikipedia.org/wiki/Nitrilotriacetic%20acid
Nitrilotriacetic acid (NTA) is the aminopolycarboxylic acid with the formula N(CH2CO2H)3. It is a colourless solid. Its conjugate base nitrilotriacetate is used as a chelating agent for Ca2+, Co2+, Cu2+, and Fe3+. Production and use Nitrilotriacetic acid is commercially available as the free acid and as the sodium salt. It is produced from ammonia, formaldehyde, and sodium cyanide or hydrogen cyanide. Worldwide capacity is estimated at 100 thousand tonnes per year. NTA is also cogenerated as an impurity in the synthesis of EDTA, arising from reactions of the ammonia coproduct. Older routes to NTA included alkylation of ammonia with chloroacetic acid and oxidation of triethanolamine. Coordination chemistry and applications The conjugate base of NTA is a tripodal tetradentate trianionic ligand, forming coordination compounds with a variety of metal ions. Like EDTA, its sodium salt is used for water softening to remove Ca2+. For this purpose, NTA is a replacement for triphosphate, which once was widely used in detergents, and cleansers, but can cause eutrophication of lakes. In one application, sodium NTA removes Cr, Cu, and As from wood that had been treated with chromated copper arsenate. Laboratory uses In the laboratory, this compound is used in complexometric titrations. A variant of NTA is used for protein isolation and purification in the His-tag method. The modified NTA is used to immobilize nickel on a solid support. This allows purification of proteins containing a tag consisting of six histidine residues at either terminus. The His-tag binds the metal of metal chelator complexes. Previously, iminodiacetic acid was used for that purpose. Now, nitrilotriacetic acid is more commonly used. For laboratory uses, Ernst Hochuli et al. (1987) coupled the NTA ligand and nickel ions to agarose beads. This Ni-NTA Agarose is the most used tool to purify His-tagged proteins via affinity chromatography. Toxicity and environment In contrast to EDTA, NTA is easily biodegradable and is almost completely removed during wastewater treatment. The environmental impacts of NTA are minimal. Despite widespread use in cleaning products, the concentration in the water supply is too low to have a sizeable impact on human health or environmental quality. Related compounds N-Methyliminodiacetic acid (MIDA), the N-methyl derivative of IDA Imidodiacetic acid, the amino diacetic acid N-(2-Carboxyethyl)iminodiacetic acid, a more biodegradable analogue of NTA N-hydroxyiminodiacetic acid (HIDA), (registry number = 87339-38-6) See HIDA scan. References Amines Acetic acids Chelating agents IARC Group 2B carcinogens Tripodal ligands
Nitrilotriacetic acid
[ "Chemistry" ]
665
[ "Functional groups", "Chelating agents", "Amines", "Bases (chemistry)", "Process chemicals" ]
4,498,159
https://en.wikipedia.org/wiki/Estramustine%20phosphate
Estramustine phosphate (EMP), also known as estradiol normustine phosphate and sold under the brand names Emcyt and Estracyt, is a dual estrogen and chemotherapy medication which is used in the treatment of prostate cancer in men. It is taken multiple times a day by mouth or by injection into a vein. Side effects of EMP include nausea, vomiting, gynecomastia, feminization, demasculinization, sexual dysfunction, blood clots, and cardiovascular complications. EMP is a dual cytostatic and hence chemotherapeutic agent and a hormonal anticancer agent of the estrogen type. It is a prodrug of estramustine and estromustine in terms of its cytostatic effects and a prodrug of estradiol in relation to its estrogenic effects. EMP has strong estrogenic effects at typical clinical dosages, and consequently has marked antigonadotropic and functional antiandrogenic effects. EMP was introduced for medical use in the early 1970s. It is available in the United States, Canada, the United Kingdom, other European countries, and elsewhere in the world. Medical uses EMP is indicated, in the United States, for the palliative treatment of metastatic and/or progressive prostate cancer, whereas in the United Kingdom it is indicated for the treatment of unresponsive or relapsing prostate cancer. The medication is usually reserved for use in hormone-refractory cases of prostate cancer, although it has been used as a first-line monotherapy as well. Response rates with EMP in prostate cancer are said to be equivalent to conventional high-dose estrogen therapy. Due to its relatively severe side effects and toxicity, EMP has rarely been used in the treatment of prostate cancer. This is especially true in Western countries today. As a result, and also due to the scarce side effects of gonadotropin-releasing hormone modulators (GnRH modulators) like leuprorelin, EMP was almost abandoned. However, encouraging clinical research findings resulted in renewed interest of EMP for the treatment of prostate cancer. EMP has been used at doses of 140 to 1,400 mg/day orally in the treatment of prostate cancer. However, oral EMP is most commonly used at a dose of 560 to 640 mg/day (280–320 mg twice daily). The recommended dosage of oral EMP in the Food and Drug Administration (FDA) label for Emcyt is 14 mg per kg of body weight (i.e., one 140 mg oral capsule for each 10 kg or 22 lbs of body weight) given in 3 or 4 divided doses per day. The label states that most patients in studies of oral EMP in the United States have received 10 to 16 mg per kg per day. This would be about 900 to 1,440 mg/day for a 90-kg or 200-lb man. Lower doses of oral EMP, such as 280 mg/day, have been found to have comparable effectiveness as higher doses but with improved tolerability and reduced toxicity. Doses of 140 mg/day have been described as a very low dosage. EMP has been used at doses of 240 to 450 mg/day intravenously. EMP and other estrogens such as polyestradiol phosphate and ethinylestradiol are far less costly than newer therapies such as GnRH modulators, abiraterone acetate, and enzalutamide. In addition, estrogens may offer significant benefits over other means of androgen deprivation therapy, for instance in terms of bone loss and fractures, hot flashes, cognition, and metabolic status. EMP has been used to prevent the testosterone flare at the start of GnRH agonist therapy in men with prostate cancer. 
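To make the dosing arithmetic in the label description above concrete, a minimal GNU Octave sketch follows; the 90 kg body weight is only an example, while the 14 mg/kg/day figure and the 140 mg capsule strength are taken from the label description above.

% Worked example of the labelled oral EMP dose of 14 mg per kg of body weight,
% supplied as 140 mg capsules taken in 3 or 4 divided doses per day.
body_weight_kg = 90;                   % example body weight (assumption)
daily_dose_mg  = 14 * body_weight_kg;  % 14 mg/kg/day from the label description
n_capsules     = daily_dose_mg / 140;  % one 140 mg capsule per 10 kg of body weight
printf("daily dose = %d mg, i.e. %d capsules of 140 mg split into 3-4 doses\n", daily_dose_mg, n_capsules);

For a 90 kg man this gives 1,260 mg/day (nine capsules), within the 900 to 1,440 mg/day range quoted above.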
Available forms EMP is or has been available in the form of both capsules (140 mg, 280 mg) for oral administration and aqueous solutions (300 mg) for intravenous injection. Contraindications EMP is contraindicated when used in children, patients hypersensitive to estrogens or nitrogen mustards, those with peptic ulcer (an ulcer in the digestive tract), those with severely compromised liver function, those with weak heart muscle (also known as myocardial insufficiency) and those with thromboembolic disorders or complications related to fluid retention. Side effects The side effects of EMP overall have been described as relatively severe. The most common side effects of EMP have been reported to be gastrointestinal side effects like nausea, vomiting, and diarrhea, with nausea and vomiting occurring in 40% of men. They are usually mild or moderate in severity, and the nausea and vomiting can be managed with prophylactic antiemetic medications. Nonetheless, severe cases of gastrointestinal side effects with EMP may require dose reduction or discontinuation of therapy. Although nausea and vomiting have been reported to be the most common side effects of EMP, gynecomastia (male breast development) has been found to occur in as many as 83% of men treated with EMP, and the incidence of erectile dysfunction is possibly similar to or slightly less than the risk of gynecomastia. As a rule, feminization, a gynoid fat distribution, demasculinization, and impotence are said to occur in virtually or nearly 100% of men treated with high-dose estrogen therapy. Decreased sexual activity has also been reported in men treated with EMP. These side effects are due to high estrogen levels and low testosterone levels. Prophylactic irradiation of the breasts can be used to decrease the incidence and severity of gynecomastia with estrogens. Severe adverse effects of EMP are thromboembolic and cardiovascular complications including pulmonary embolism, deep vein thrombosis, stroke, thrombophlebitis, coronary artery disease (ischemic heart disease; e.g., myocardial infarction), thrombophlebitis, and congestive heart failure with fluid retention. EMP produces cardiovascular toxicity similarly to diethylstilbestrol, but to a lesser extent in comparison at low doses (e.g., 280 mg/day oral EMP vs. 1 mg/day oral diethylstilbestrol). The prostate cancer disease state also increases the risk of thromboembolism, and combination with docetaxel may exacerbate the risk of thromboembolism as well. Meta-analyses of clinical trials have found that the overall risk of thromboembolism with EMP is 4 to 7%, relative to 0.4% for chemotherapy regimens without EMP. Thromboembolism is the major toxicity-related cause of discontinuation of EMP. Anticoagulant therapy with medications such as aspirin, warfarin, unfractionated and low-molecular-weight heparin, and vitamin K antagonists can be useful for decreasing the risk of thromboembolism with EMP and other estrogens like diethylstilbestrol and ethinylestradiol. Adverse liver function tests are commonly seen with EMP, but severe liver dysfunction is rare with the medication. Central nervous system side effects are rarely seen with EMP, although enlarged ventricles and neuronal pigmentation have been reported in monkeys treated with very high doses of EMP (20–140 mg/kg/day) for 3 to 6 months. EMP does not appear to have cytostatic effects in normal brain tissue. In women treated with EMP in clinical studies, a few instances of minor gynecological hemorrhages have been observed. 
EMP is described as relatively well tolerated among cytostatic antineoplastic and nitrogen-mustard agents, rarely or not at all being associated with significant hematologic toxicity such as myelosuppression (bone marrow suppression), gastrointestinal toxicity, or other more marked toxicity associated with such agents. In contrast to most other cytostatic agents, which often cause myelosuppression, leukopenia (decreased white blood cell count), and neutropenia (decreased neutrophil count), EMP actually produces leukocytosis (increased white blood cell count) as a side effect. In a small low-dose study using 280 mg/day oral EMP for 150 days, tolerability was significantly improved, with gastrointestinal irritation occurring in only 15% of men, and there was no incidence of severe cardiovascular toxicity or deep vein thrombosis. In addition, no other side effects besides slight transient elevated liver enzymes were observed. These findings suggest that lower doses of oral EMP may be a safer option than higher doses for the treatment of prostate cancer. However, a subsequent 2004 meta-analysis of 23 studies of thromboembolic events with EMP found substantial incidence of thromboembolic events regardless of dosage and no association of EMP dose with risk of these complications. Overdose There has been no clinical experience with overdose of EMP. Overdose of EMP may result in pronounced manifestations of the known adverse effects of the medication. There is no specific antidote for overdose of EMP. In the event of overdose, gastric lavage should be used to evacuate gastric contents as necessary and treatment should be symptom-based and supportive. In the case of dangerously low counts of red blood cells, white blood cells, or platelets, whole blood may be given as needed. Liver function should be monitored with EMP overdose. After an overdose of EMP, hematological and hepatic parameters should continue to be monitored for at least 6 weeks. EMP has been used at high doses of as much as 1,260 mg/day by the oral route and 240 to 450 mg/day by intravenous injection. Interactions EMP has been reported to increase the efficacy and toxicity of tricyclic antidepressants like amitriptyline and imipramine. When products containing calcium, aluminium, and/or magnesium, such as dairy products like milk, various foods dietary supplements, and antacids, are consumed concomitantly with EMP, an insoluble chelate complex/phosphate salt between EMP and these metals can be formed, and this can markedly impair the absorption and hence oral bioavailability of EMP. There may be an increased risk of angioedema in those concurrently taking ACE inhibitors. Pharmacology Pharmacodynamics EMP, also known as estradiol normustine phosphate, is a combined estrogen ester and nitrogen mustard ester. It consists of estradiol, an estrogen, linked with a phosphate ester as well as an ester of normustine, a nitrogen mustard. In terms of its pharmacodynamic effects, EMP is a prodrug of estramustine, estromustine, and estradiol. As a prodrug of estradiol, EMP is an estrogen and hence an agonist of the estrogen receptors. EMP itself has only very weak affinity for the estrogen receptors. The medication is of about 91% higher molecular weight than estradiol due to the presence of its C3 normustine and C17β phosphate esters. Because EMP is a prodrug of estradiol, it may be considered to be a natural and bioidentical form of estrogen, although it does have additional cytostatic activity via estramustine and estromustine. 
EMP acts by a dual mechanism of action: 1) direct cytostatic activity via a number of actions; and 2) as a form of high-dose estrogen therapy via estrogen receptor-mediated antigonadotropic and functional antiandrogenic effects. The antigonadotropic and functional antiandrogenic effects of EMP consist of strong suppression of gonadal androgen production and hence circulating levels of androgens such as testosterone; greatly increased levels of sex hormone-binding globulin and hence a decreased fraction of free androgens in the circulation; and direct antiandrogenic actions in prostate cells. The free androgen index with oral EMP has been found to be on average 4.6-fold lower than with orchiectomy. As such, EMP therapy results in considerably stronger androgen deprivation than orchiectomy. Metabolites of EMP, including estramustine, estromustine, estradiol, and estrone, have been found to act as weak antagonists of the androgen receptor ( = 0.5–3.1 μM), although the clinical significance of this is unknown. Extremely high levels of estradiol and estrone occur during EMP therapy. The estrogenic metabolites of EMP are responsible for its most common adverse effects and its cardiovascular toxicity. EMP has been described as having relatively weak estrogenic effects in some publications. However, it has shown essentially the same rates and degrees of estrogenic effects, such as breast tenderness, gynecomastia, cardiovascular toxicity, changes in liver protein synthesis, and testosterone suppression, as high-dose diethylstilbestrol and ethinylestradiol in clinical studies. The notion that EMP has relatively weak estrogen activity may have been based on animal research, which found that EMP had 100-fold lower uterotrophic effects than estradiol in rats, and may also not have taken into account the very high doses of EMP used clinically in humans. The mechanism of action of the cytostatic effects of EMP is complex and only partially understood. EMP is considered to mainly be a mitotic inhibitor, inhibiting mechanisms involved in the mitosis phase of the cell cycle. Specifically, it binds to microtubule-associated proteins and/or to tubulin and produces depolymerization of microtubules (Kd = 10–20 μM for estramustine), resulting in the arrest of cell division in the G2/M phase (specifically metaphase). EMP was originally thought to mediate its cytostatic effects as a prodrug of normustine, a nitrogen mustard, and hence was thought to be an alkylating antineoplastic agent. However, subsequent research has found that EMP is devoid of alkylating actions, and that the influence of EMP on microtubules is mediated by intact estramustine and estromustine, with normustine or estradiol alone having only minor or negligible effects. As such, the unique properties of the estramustine and estromustine structures, containing a carbamate-ester bond, appear to be responsible for the cytostatic effects of EMP. In addition to its antimitotic actions, EMP has also been found to produce other cytostatic effects, including induction of apoptosis, interference with DNA synthesis, nuclear matrix interaction, cell membrane alterations, induction of reactive oxygen species (free oxygen radicals), and possibly additional mechanisms. EMP has been found to have a radiosensitizing effect in prostate cancer and glioma cells, improving sensitivity to radiation therapy as well. The cytostatic metabolites of EMP are accumulated in tissues in a selective manner, for instance in prostate cancer cells. 
This may be due to the presence of a specific estramustine-binding protein (EMBP) (Kd = 10–35 nM for estramustine), also known as prostatin or prostatic secretion protein (PSP), which has been detected in prostate cancer, glioma, melanoma, and breast cancer cells. Because of its tissue selectivity, EMP is said to produce minimal cytostatic effects in healthy tissues, and its tissue selectivity may be responsible for its therapeutic cytostatic efficacy against prostate cancer cells. EMP was originally developed as a dual ester prodrug of an estrogen and normustine as a nitrogen mustard alkylating antineoplastic agent which, due to the affinity of the estrogen moiety for estrogen receptors, would be selectively accumulated in estrogen target tissues and hence estrogen receptor-positive tumor cells. Consequentially, it was thought that EMP would preferentially deliver the alkylating normustine moiety to these tissues, allowing for reduced cytostatic effects in healthy tissues and hence improved efficacy and tolerability. However, subsequent research found that there is very limited and slow cleavage of the normustine ester and that EMP is devoid of alkylating activity. In addition, it appears that estramustine and estromustine may be preferentially accumulated in estrogen target tissues not due to affinity for the estrogen receptors, but instead due to affinity for the distinct EMBP. Extremely high, pregnancy-like levels of estradiol may be responsible for the leukocytosis (increased white blood cell count) that is observed in individuals treated with EMP. This side effect is in contrast to most other cytotoxic agents, which instead cause myelosuppression (bone marrow suppression), leukopenia (decreased white blood cell count), and neutropenia (decreased neutrophil count). Antigonadotropic effects EMP at a dosage 280 mg/day has been found to suppress testosterone levels in men into the castrate range (to 30 ng/dL) within 20 days and to the low castrate range (to 10 ng/dL) within 30 days. Similarly, a dosage of 70 mg/day EMP suppressed testosterone levels into the castrate range within 4 weeks. Pharmacokinetics Upon oral ingestion, EMP is rapidly and completely dephosphorylated by phosphatases into estramustine during the first pass in the gastrointestinal tract. Estramustine is also partially but considerably oxidized into estromustine by 17β-hydroxysteroid dehydrogenases during the first pass. As such, EMP reaches the circulation as estramustine and estromustine, and the major metabolite of EMP is estromustine. A limited quantity of approximately 10 to 15% of estramustine and estromustine is further slowly metabolized via hydrolysis of the normustine ester into estradiol and estrone, respectively. This reaction is believed to be catalyzed by carbamidases, although the genes encoding the responsible enzymes have not been characterized. The circulating levels of normustine formed from EMP are insignificant. Release of nitrogen mustard gas from normustine via cleavage of the carboxylic acid group has not been demonstrated and does not seem to occur. The oral bioavailability of EMP is low, which is due to profound first-pass metabolism; specifically, dephosphorylation of EMP. The oral bioavailability of EMP specifically as estramustine and estromustine is 44 to 75%, suggesting that absorption may be incomplete. In any case, there is a linear relationship between the oral dose of EMP and circulating levels of estramustine and estromustine. 
Consumption of calcium, aluminium, or magnesium with oral EMP can markedly impair its bioavailability due to diminished absorption from the intestines, and this may interfere with its therapeutic effectiveness at low doses. Following a single oral dose of 420 mg EMP in men with prostate cancer, maximal levels of estromustine were 310 to 475 ng/mL (475,000 pg/mL) and occurred after 2 to 3 hours. Estradiol levels with 280 mg/day oral EMP have been found to increase to very high concentrations within one week of therapy. In one study, levels of estradiol were over 20,000 pg/mL after 10 days, were about 30,000 pg/mL after 30 days, and peaked at about 40,000 pg/mL at 50 days. Another study found lower estradiol levels of 4,900 to 9,000 pg/mL during chronic therapy with 560 mg/day oral EMP. An additional study found estradiol levels of about 17,000 pg/mL with 140 mg/day oral EMP and 38,000 pg/mL with 280 mg/day oral EMP. The circulating levels of estradiol and estrone during EMP therapy have been reported to exceed normal levels in men by more than 100- and 1,000-fold, respectively. Levels of estramustine and estradiol in the circulation are markedly lower than those of estromustine and estrone, respectively, with a ratio of about 1:10 in both cases. Nonetheless, estradiol levels during EMP therapy appear to be similar to those that occur in mid-to-late pregnancy, which range from 5,000 to 40,000 pg/mL. No unchanged EMP is seen in the circulation with oral administration. The pharmacokinetics of EMP are different with intravenous injection. Following a single intravenous injection of 300 mg EMP, levels of EMP were higher than those of its metabolites for the first 8 hours. This is likely due to the bypassing of first-pass metabolism. However, by 24 hours after the dose, unchanged EMP could no longer be detected in the circulation. The clearance of EMP from blood plasma is 4.85 ± 0.684 L/h. The volumes of distribution of EMP with intravenous injection were small; under a two-compartment model, the volume of distribution for the central compartment was 0.043 L/kg and for the peripheral compartment was 0.11 L/kg. The plasma protein binding of EMP is high. Estramustine is accumulated in tumor tissue, for instance prostate cancer and glioma tissue, with estramustine levels much higher in these tissues than in plasma (e.g., 6.3- and 15.9-fold, respectively). Conversely, levels of estromustine in tumor versus plasma are similar (1.0- and 0.5-fold, respectively). Estramustine and estromustine appear to accumulate in adipose tissue. The elimination half-life of estromustine with oral EMP was 13.6 hours on average, with a range of 8.8 to 22.7 hours. Conversely, the elimination half-life of estromustine with intravenous injection was 10.3 hours, with a range of 7.36 to 12.3 hours. For comparison, the corresponding elimination half-lives of estrone were 16.5 and 14.7 hours for oral and intravenous administration, respectively. Estramustine and estromustine are mainly excreted in bile and hence in feces. They are not believed to be excreted in urine. Chemistry EMP, also known as estradiol 3-normustine 17β-phosphate or as estradiol 3-(bis(2-chloroethyl)carbamate) 17β-(dihydrogen phosphate), is a synthetic estrane steroid and a derivative of estradiol. It is an estrogen ester; specifically, EMP is a diester of estradiol with a C3 normustine (nitrogen mustard–carbamate moiety) ester and a C17β phosphate ester. EMP is provided as the sodium or meglumine salt. 
EMP is similar as a compound to other estradiol esters such as estradiol sulfate and estradiol valerate, but differs in the presence of its nitrogen mustard ester moiety. Antineoplastic agents related to EMP, although none of them were marketed, include alestramustine, atrimustine, cytestrol acetate, estradiol mustard, ICI-85966, and phenestrol. Due to its hydrophilic phosphate ester moiety, EMP is a readily water-soluble compound. This is in contrast to most other estradiol esters, which are fatty acid esters and lipophilic compounds that are not particularly soluble in water. Unlike EMP, estramustine is highly lipophilic, practically insoluble in water, and non-ionizable. The phosphate ester of EMP was incorporated into the molecule in order to increase its water solubility and allow for intravenous administration. The molecular weight of EMP sodium is 564.3 g/mol, of EMP meglumine is 715.6 g/mol, of EMP is 520.4 g/mol, of estramustine is 440.4 g/mol, and of estradiol is 272.4 g/mol. As a result of these differences in molecular weights, EMP contains about 52%, EMP sodium about 48%, and EMP meglumine about 38% of the amount of estradiol within their structures as does an equal-mass quantity of estradiol. History EMP was first synthesized in the mid-1960s and was patented in 1967. It was initially developed for the treatment of breast cancer. The idea for EMP was inspired by the uptake and accumulation of radiolabeled estrogens into breast cancer tissue. However, initial clinical findings of EMP in women with breast cancer were disappointing. Subsequently, radiolabeled EMP was found to be taken up into and accumulated rat prostate gland, and this finding culminated in the medication being repurposed for the treatment of prostate cancer. EMP was introduced for medical use in the treatment of this condition in the early 1970s, and was approved in the United States for this indication in 1981. EMP was originally introduced for use by intravenous injection. Subsequently, an oral formulation was introduced, and the intravenous preparation was almost abandoned in favor of the oral version. Society and culture Generic names EMP is provided as the sodium salt for oral administration, which has the generic names estramustine phosphate sodium () and estramustine sodium phosphate (, ), and as the meglumine salt for intravenous administration, which has the generic name estramustine phosphate meglumine. The is estramustine phosphate. The name estramustine phosphate is a contraction of estradiol normustine phosphate. EMP is also known by its former developmental code names Leo 299, Ro 21-8837, and Ro 21-8837/001. Brand names EMP is most commonly marketed under the brand names Estracyt and Emcyt, but has also been sold under a number of other brand names, including Amsupros, Biasetyl, Cellmustin, Estramustin HEXAL, Estramustina Filaxis, Estranovag, Multosin, Multosin Injekt, Proesta, Prostamustin, and Suloprost. Availability EMP is marketed in the United States, Canada, and Mexico under the brand name Emcyt, whereas the medication is marketed under the brand name Estracyt in the United Kingdom and elsewhere throughout Europe as well as in Argentina, Chile, and Hong Kong. It has been discontinued in a number of countries, including Australia, Brazil, Ireland, and Norway. Research EMP has been studied in the treatment of other cancers such as glioma and breast cancer. It has been found to slightly improve quality of life in people with glioma during the first 3 months of therapy. 
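As a rough illustration of the molecular-weight comparison above, the fraction of estradiol contained in each salt form can be computed directly from the molecular weights quoted in this section; the short Python sketch below simply reproduces that arithmetic using the figures given in the text.

```python
# Fraction of estradiol (by mass) in estramustine phosphate (EMP) and its salts,
# using the molecular weights quoted above (g/mol).
MW = {
    "estradiol": 272.4,
    "EMP": 520.4,
    "EMP sodium": 564.3,
    "EMP meglumine": 715.6,
}

for form in ("EMP", "EMP sodium", "EMP meglumine"):
    fraction = MW["estradiol"] / MW[form]
    print(f"{form}: {fraction:.0%} estradiol by weight")

# Expected output (matching the ~52%, ~48% and ~38% figures in the text):
# EMP: 52% estradiol by weight
# EMP sodium: 48% estradiol by weight
# EMP meglumine: 38% estradiol by weight
```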
References Further reading Antiandrogens Antigonadotropins Carbamates Chloroethyl compounds DNA replication inhibitors Estradiol esters Estranes Estrogens Hormonal antineoplastic drugs Mitotic inhibitors Nitrogen mustards Organochlorides Phosphate esters Drugs developed by Pfizer Prodrugs Prostate cancer
Estramustine phosphate
[ "Chemistry" ]
5,893
[ "Chemicals in medicine", "Harmful chemical substances", "Prodrugs", "Mitotic inhibitors" ]
4,498,490
https://en.wikipedia.org/wiki/Non-Euclidean%20crystallographic%20group
In mathematics, a non-Euclidean crystallographic group, NEC group or N.E.C. group is a discrete group of isometries of the hyperbolic plane. These symmetry groups correspond to the wallpaper groups in Euclidean geometry. An NEC group which contains only orientation-preserving elements is called a Fuchsian group, and any non-Fuchsian NEC group has an index 2 Fuchsian subgroup of orientation-preserving elements. The hyperbolic triangle groups are notable NEC groups. Others are listed in Orbifold notation. See also Non-Euclidean geometry Isometry group Fuchsian group Uniform tilings in hyperbolic plane References Non-Euclidean geometry Hyperbolic geometry Symmetry Discrete groups
Non-Euclidean crystallographic group
[ "Physics", "Mathematics" ]
145
[ "Geometry", "Symmetry" ]
4,499,323
https://en.wikipedia.org/wiki/Alexander%20Pines
Alexander Pines (June 22, 1945 – November 1, 2024) was an American chemist. He was the Glenn T. Seaborg Professor Emeritus, University of California, Berkeley, Chancellor's Professor Emeritus and Professor of the Graduate School, University of California, Berkeley, and a member of the California Institute for Quantitative Biosciences (QB3) and the Department of Bioengineering. Background Pines was born on June 22, 1945, and grew up in Bulawayo in Southern Rhodesia (now Zimbabwe). He studied undergraduate mathematics and chemistry in Israel at Hebrew University of Jerusalem. Coming to the United States in 1968, Pines obtained his Ph.D. in chemical physics at M.I.T. in 1972 and joined the UC Berkeley faculty later that year. Pines died on November 1, 2024, at the age of 79. Research Pines was a pioneer in the development and applications of nuclear magnetic resonance (NMR) spectroscopy of non-liquid samples. In his early work, he demonstrated time-reversal of dipole-dipole couplings in many-body spin systems, and introduced high sensitivity, cross polarization NMR of dilute spins such as carbon-13 in solids (Proton Enhanced Nuclear Induction Spectroscopy), thereby helping to launch the era of modern solid-state NMR in chemistry. He also developed the areas of multiple-quantum spectroscopy, adiabatic sech/tanh inversion pulses, zero-field NMR, double rotation and dynamic-angle spinning, iterative maps for pulse sequences and quantum control, and the quantum geometric phase. His combination of optical pumping and cross-polarization made it possible to observe enhanced NMR of surfaces and the selective "lighting up" of solution NMR and magnetic resonance imaging (MRI) by means of laser-polarized xenon. Until he retired to emeritus status, his program was composed of two complementary components. The first is the establishment of new concepts and techniques in NMR and MRI, in order to extend their applicability and enhance their capability to investigate molecular structure, organization and function from materials to organisms. Examples of methodologies emanating from these efforts include: novel polarization and detection methods, ex-situ and mobile NMR and MRI, laser-polarized NMR and MRI, functionalized NMR biosensors and molecular imaging, ultralow and zero-field SQUID NMR and MRI, remote detection of NMR and MRI amplified by means of laser magnetometers, and miniaturization including fluid flow through porous materials and "microfluidic chemistry and NMR/MRI on a chip". The second component of his research program involves the application of such novel methods to problems in chemistry, materials science, and biomedicine. Awards Among his many prestigious awards and honors, Pines received the Langmuir Medal of the American Chemical Society, the Faraday Medal of the Royal Society of Chemistry, the Wolf Prize for Chemistry (together with Richard R. Ernst) in 1991. He was awarded the F.A. Cotton Medal for Excellence in Chemical Research of the American Chemical Society in 1999. In 2005, an Ampere Symposium was held in honor of Pines' 60th birthday in Chamonix, France, and in 2008, he was awarded the Russell Varian Prize at the European Magnetic Resonance Conference. (Previous Varian Prizes winners: Jean Jeener, Erwin Hahn, Nicolaas Bloembergen, John. S. Waugh, and Alfred G. Redfield.) Pines was also recognized by numerous teaching honors, including the University of California's Distinguished Teaching Award. He was a member of the U.S. 
National Academy of Sciences, the American Academy of Arts and Sciences and a Foreign Member of the Royal Society (London); he was Doctor Honoris Causa at the Weizmann Institute of Science, Universite Paul Cezanne, University of Paris and the University of Rome, and past President of the International Society of Magnetic Resonance. References External links Biography U.C. Berkeley College of Chemistry faculty page Pines Group Webpage Alex Pines webpage 1945 births 2024 deaths Scientists from Tel Aviv 21st-century American chemists Foreign members of the Royal Society Hebrew University of Jerusalem alumni Israeli chemists Israeli emigrants to the United States Israeli Jews Jewish American scientists Jewish chemists Massachusetts Institute of Technology School of Science alumni Members of the United States National Academy of Sciences Rhodesian Jews UC Berkeley College of Chemistry faculty White Rhodesian people Wolf Prize in Chemistry laureates Nuclear magnetic resonance Fellows of the American Physical Society 21st-century American Jews
Alexander Pines
[ "Physics", "Chemistry" ]
923
[ "Nuclear magnetic resonance", "Nuclear physics" ]
4,499,533
https://en.wikipedia.org/wiki/Type-II%20superconductor
In superconductivity, a type-II superconductor is a superconductor that exhibits an intermediate phase of mixed ordinary and superconducting properties at intermediate temperatures and fields above the superconducting phases. It also features the formation of magnetic field vortices with an applied external magnetic field. This occurs above a certain critical field strength Hc1. The vortex density increases with increasing field strength. At a higher critical field Hc2, superconductivity is destroyed. Type-II superconductors do not exhibit a complete Meissner effect. History In 1935, J.N. Rjabinin and Lev Shubnikov experimentally discovered type-II superconductors. In 1950, the theory of the two types of superconductors was further developed by Lev Landau and Vitaly Ginzburg in their paper on Ginzburg–Landau theory. In their argument, a type-I superconductor had positive free energy of the superconductor-normal metal boundary. Ginzburg and Landau pointed out the possibility of type-II superconductors, which should form an inhomogeneous state in strong magnetic fields. However, at that time, all known superconductors were type-I, and they commented that there was no experimental motivation to consider the precise structure of the type-II superconducting state. The theory for the behavior of the type-II superconducting state in a magnetic field was greatly improved by Alexei Alexeyevich Abrikosov, who was elaborating on the ideas of Lars Onsager and Richard Feynman on quantum vortices in superfluids. The quantum vortex solution in a superconductor is also very closely related to Fritz London's work on magnetic flux quantization in superconductors. The Nobel Prize in Physics was awarded for the theory of type-II superconductivity in 2003. Vortex state Ginzburg–Landau theory introduced the superconducting coherence length ξ in addition to the London magnetic field penetration depth λ. According to Ginzburg–Landau theory, in a type-II superconductor the Ginzburg–Landau parameter satisfies κ = λ/ξ > 1/√2. Ginzburg and Landau showed that this leads to negative energy of the interface between the superconducting and normal phases. The existence of the negative interface energy had also been known since the mid-1930s from the early works by the London brothers. A negative interface energy suggests that the system should be unstable against maximizing the number of such interfaces. This instability was not observed until the experiments of Shubnikov in 1936, where two critical fields were found. In 1952 an observation of type-II superconductivity was also reported by Zavaritskii. Fritz London demonstrated that a magnetic flux can penetrate a superconductor via a topological defect that has integer phase winding and carries quantized magnetic flux. Onsager and Feynman demonstrated that quantum vortices should form in superfluids. A 1957 paper by A. A. Abrikosov generalized these ideas. In the limit of very short coherence length the vortex solution is identical to London's fluxoid, where the vortex core is approximated by a sharp cutoff rather than a gradual vanishing of the superconducting condensate near the vortex center. Abrikosov found that the vortices arrange themselves into a regular array known as a vortex lattice. Near the so-called upper critical magnetic field, the problem of a superconductor in an external field is equivalent to the problem of the vortex state in a rotating superfluid, discussed by Lars Onsager and Richard Feynman. Flux pinning In the vortex state, a phenomenon known as flux pinning becomes possible.
This is not possible with type-I superconductors, since they cannot be penetrated by magnetic fields. If a superconductor is cooled in a field, the field can be trapped, which can allow the superconductor to be suspended over a magnet, with the potential for a frictionless joint or bearing. The worth of flux pinning is seen through many implementations such as lifts, frictionless joints, and transportation. The thinner the superconducting layer, the stronger the pinning that occurs when exposed to magnetic fields. Materials Type-II superconductors are usually made of metal alloys or complex oxide ceramics. All high-temperature superconductors are type-II superconductors. While most elemental superconductors are type-I, niobium, vanadium, and technetium are elemental type-II superconductors. Boron-doped diamond and silicon are also type-II superconductors. Metal alloy superconductors can also exhibit type-II behavior (e.g., niobium–titanium, one of the most common superconductors in applied superconductivity), as well as intermetallic compounds like niobium–tin. Other type-II examples are the cuprate-perovskite ceramic materials which have achieved the highest superconducting critical temperatures. These include La1.85Ba0.15CuO4, BSCCO, and YBCO (Yttrium-Barium-Copper-Oxide), which is famous as the first material to achieve superconductivity above the boiling point of liquid nitrogen (77 K). Due to strong vortex pinning, the cuprates are close to ideally hard superconductors. Important uses Strong superconducting electromagnets (used in MRI scanners, NMR machines, and particle accelerators) often use coils wound of niobium-titanium wires or, for higher fields, niobium-tin wires. These materials are type-II superconductors with substantial upper critical field Hc2, and in contrast to, for example, the cuprate superconductors with even higher Hc2, they can be easily machined into wires. Recently, however, 2nd generation superconducting tapes are allowing replacement of cheaper niobium-based wires with much more expensive, but superconductive at much higher temperatures and magnetic fields "2nd generation" tapes. References Superconductivity
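As a rough numerical illustration of the Ginzburg–Landau criterion discussed above, the Python sketch below classifies a material as type-I or type-II from its penetration depth λ and coherence length ξ, and estimates the upper critical flux density from the standard relation Bc2 = Φ0/(2πξ^2), where Φ0 is the magnetic flux quantum. The numerical values fed in are illustrative placeholders (roughly of the order found for niobium), not measured data.

```python
import math

PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb

def classify(lambda_m, xi_m):
    """Classify a superconductor from its penetration depth and coherence length."""
    kappa = lambda_m / xi_m                 # Ginzburg-Landau parameter
    kind = "type-II" if kappa > 1 / math.sqrt(2) else "type-I"
    bc2 = PHI0 / (2 * math.pi * xi_m**2)    # upper critical flux density, tesla
    return kappa, kind, bc2

# Illustrative (not measured) values:
kappa, kind, bc2 = classify(lambda_m=40e-9, xi_m=38e-9)
print(f"kappa = {kappa:.2f} -> {kind}, Bc2 ~ {bc2:.2f} T")
```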
Type-II superconductor
[ "Physics", "Materials_science", "Engineering" ]
1,279
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
4,499,573
https://en.wikipedia.org/wiki/Adduct
In chemistry, an adduct (; alternatively, a contraction of "addition product") is a product of a direct addition of two or more distinct molecules, resulting in a single reaction product containing all atoms of all components. The resultant is considered a distinct molecular species. Examples include the addition of sodium bisulfite to an aldehyde to give a sulfonate. It can be considered as a single product resulting from the direct combination of different molecules which comprises all atoms of the reactant molecules. Adducts often form between Lewis acids and Lewis bases. A good example is the formation of adducts between the Lewis acid borane and the oxygen atom in the Lewis bases, tetrahydrofuran (THF): or diethyl ether: . Many Lewis acids and Lewis bases reacting in the gas phase or in non-aqueous solvents to form adducts have been examined in the ECW model. Trimethylborane, trimethyltin chloride and bis(hexafluoroacetylacetonato)copper(II) are examples of Lewis acids that form adducts which exhibit steric effects. For example: trimethyltin chloride, when reacting with diethyl ether, exhibits steric repulsion between the methyl groups on the tin and the ethyl groups on oxygen. But when the Lewis base is tetrahydrofuran, steric repulsion is reduced. The ECW model can provide a measure of these steric effects. Compounds or mixtures that cannot form an adduct because of steric hindrance are called frustrated Lewis pairs. Adducts are not necessarily molecular in nature. A good example from solid-state chemistry is the adducts of ethylene or carbon monoxide of . The latter is a solid with an extended lattice structure. Upon formation of the adduct, a new extended phase is formed in which the gas molecules are incorporated (inserted) as ligands of the copper atoms within the structure. This reaction can also be considered a reaction between a base and a Lewis acid where the copper atom plays the electron-receiving role and the pi electrons of the gas molecule play the electron-donating role. Adduct ions An adduct ion is formed from a precursor ion and contains all of the constituent atoms of that ion as well as additional atoms or molecules. Adduct ions are often formed in a mass spectrometer ion source. See also Adductomics DNA adduct References Chemical reactions Solid-state chemistry General chemistry
Adduct
[ "Physics", "Chemistry", "Materials_science" ]
518
[ "Condensed matter physics", "nan", "Solid-state chemistry" ]
4,500,316
https://en.wikipedia.org/wiki/Loop%20quantum%20cosmology
Loop quantum cosmology (LQC) is a finite, symmetry-reduced model of loop quantum gravity (LQG) that predicts a "quantum bridge" between contracting and expanding cosmological branches. The distinguishing feature of LQC is the prominent role played by the quantum geometry effects of loop quantum gravity (LQG). In particular, quantum geometry creates a brand new repulsive force which is totally negligible at low space-time curvature but rises very rapidly in the Planck regime, overwhelming the classical gravitational attraction and thereby resolving singularities of general relativity. Once singularities are resolved, the conceptual paradigm of cosmology changes and one has to revisit many of the standard issues—e.g., the "horizon problem"—from a new perspective. Since LQG is based on a specific quantum theory of Riemannian geometry, geometric observables display a fundamental discreteness that play a key role in quantum dynamics: While predictions of LQC are very close to those of quantum geometrodynamics (QGD) away from the Planck regime, there is a dramatic difference once densities and curvatures enter the Planck scale. In LQC the Big Bang is replaced by a quantum bounce. Study of LQC has led to many successes, including the emergence of a possible mechanism for cosmic inflation, resolution of gravitational singularities, as well as the development of effective semi-classical Hamiltonians. This subfield originated in 1999 by Martin Bojowald, and further developed in particular by Abhay Ashtekar and Jerzy Lewandowski, as well as Tomasz Pawłowski and Parampreet Singh, et al. In late 2012 LQC represented a very active field in physics, with about three hundred papers on the subject published in the literature. There has also recently been work by Carlo Rovelli, et al. on relating LQC to spinfoam cosmology. However, the results obtained in LQC are subject to the usual restriction that a truncated classical theory, then quantized, might not display the true behaviour of the full theory due to artificial suppression of degrees of freedom that might have large quantum fluctuations in the full theory. It has been argued that singularity avoidance in LQC is by mechanisms only available in these restrictive models and that singularity avoidance in the full theory can still be obtained but by a more subtle feature of LQG. Big bounce in loop quantum cosmology Due to the quantum geometry, the Big Bang is replaced by a big bounce without any assumptions on the matter content or any fine tuning. An important feature of loop quantum cosmology is the effective space-time description of the underlying quantum evolution. The effective dynamics approach has been extensively used in loop quantum cosmology to describe physics at the Planck scale and the very early universe. Rigorous numerical simulations have confirmed the validity of the effective dynamics, which provides an excellent approximation to the full loop quantum dynamics. It has been shown that only when the states have very large quantum fluctuations at late times, which means that they do not lead to macroscopic universes as described by general relativity, that the effective dynamics has departures from the quantum dynamics near bounce and the subsequent evolution. In such a case, the effective dynamics overestimates the density at the bounce, but still captures the qualitative aspects extremely well. 
Scale-invariant loop quantum cosmology If the underlying spacetime geometry with matter has a scale invariance, which has been proposed to resolve the problem of time, Immirzi ambiguity and hierarchy problem of fundamental couplings, then the resulting loop quantum geometry has no definitive discrete gaps or a minimum size. Consequently, in scale-invariant LQC, the Big Bang is shown not to be replaced by a quantum bounce. See also Canonical quantum gravity Dilaton References External links Loop quantum cosmology on arxiv.org Quantum Nature of The Big Bang in Loop Quantum Cosmology Gravity and the Quantum Loop Quantum Cosmology, Martin Bojowald Did our cosmos exist before the Big Bang? Abhay Ashtekar, Parampreet Singh "Loop Quantum Cosmology: A Status Report" Physical cosmology Loop quantum gravity
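As a toy illustration of the effective dynamics described above, the following Python sketch integrates the LQC effective Friedmann and continuity equations, H^2 = (8πG/3)ρ(1 − ρ/ρc) and dρ/dt = −3H(ρ + P), for a stiff fluid (P = ρ, modelling a massless scalar field). The units, the value of ρc, and the initial data are arbitrary code choices made only to display the bounce; this is a sketch of the symmetry-reduced effective equations, not of the full quantum theory.

```python
import math

G = 1.0          # Newton's constant in code units
RHO_C = 1.0      # critical density in code units (an arbitrary normalization)

def derivs(state):
    """Effective LQC equations for a stiff fluid (P = rho)."""
    a, h, rho = state
    da = a * h
    # Effective Raychaudhuri equation, consistent with H^2 = (8 pi G/3) rho (1 - rho/rho_c)
    dh = -4.0 * math.pi * G * (rho + rho) * (1.0 - 2.0 * rho / RHO_C)
    drho = -3.0 * h * (rho + rho)
    return (da, dh, drho)

def rk4_step(state, dt):
    k1 = derivs(state)
    k2 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start in a contracting branch (H < 0) well below the critical density.
rho0 = 0.01
h0 = -math.sqrt(8.0 * math.pi * G / 3.0 * rho0 * (1.0 - rho0 / RHO_C))
state = (1.0, h0, rho0)

for step in range(4000):
    state = rk4_step(state, dt=0.001)
    if step % 500 == 0:
        a, h, rho = state
        print(f"t={step*0.001:5.2f}  a={a:.4f}  H={h:+.4f}  rho={rho:.4f}")
# The scale factor contracts, rho grows toward RHO_C, H passes through zero
# (the bounce), and the universe re-expands instead of reaching a singularity.
```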
Loop quantum cosmology
[ "Physics", "Astronomy" ]
863
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
4,500,344
https://en.wikipedia.org/wiki/Phorbol
Phorbol is a natural, plant-derived organic compound. It is a member of the tigliane family of diterpenes. Phorbol was first isolated in 1934 as the hydrolysis product of croton oil, which is derived from the seeds of the purging croton, Croton tiglium. The structure of phorbol was determined in 1967. Various esters of phorbol have important biological properties, the most notable of which is the capacity to act as tumor promoters through activation of protein kinase C. They mimic diacylglycerols, glycerol derivatives in which two hydroxyl groups have reacted with fatty acids to form esters. The most common and potent phorbol ester is 12-O-tetradecanoylphorbol-13-acetate (TPA), also called phorbol-12-myristate-13-acetate (PMA), which is used as a biomedical research tool in contexts such as models of carcinogenesis. History and source Phorbol is a natural product found in many plants, especially those of the Euphorbiaceae and Thymelaeaceae families. Phorbol is the active constituent of the highly toxic New World tropical manchineel or beach apple, Hippomane mancinella. It is very soluble in most polar organic solvents, as well as in water. In the manchineel, this leads to an additional exposure risk during rain, where liquid splashing from an undamaged tree may also be injurious. Contact with the tree or consumption of its fruit can lead to symptoms such as severe pain and swelling. The purging croton, Croton tiglium, is the source of croton oil from which phorbol was initially isolated. Its seeds and oil have been used for hundreds of years in traditional medicine, generally as a purgative, and the seeds were mentioned in Chinese herbal texts 2000 years ago. The purgative effects of the oil are largely attributed to the high percentage of phorbol esters contained in the oil. Phorbol was isolated from C. tiglium seeds in 1934. The structure of the compound was determined in 1967, and a total synthesis was described in 2015. Mechanism of action Phorbol derivatives work primarily by interacting with protein kinase C (PKC), although they can interact with other phospholipid membrane receptors. The esters bind to PKC in a similar way to its natural ligand, diacylglycerol, and activate the kinase. Diacylglycerol is degraded quickly by the body, allowing PKC to be reversibly activated. When phorbol esters bind to the receptor, they are not degraded as efficiently by the body, leading to constitutively active PK. PKC is involved in a number of important cell signaling pathways. Thus, phorbol ester exposure can show a wide range of results. The main results of phorbol exposure are tumor promotion and inflammatory response. Although phorbol is not a carcinogen itself, it greatly enhances the action of other substances and promotes tumor proliferation. PKC is a key component in biological pathways controlling cell growth and differentiation. When phorbol esters bind to PKC, cell proliferation pathways are activated. This effect greatly promotes tumors when the cells are exposed to even a sub-carcinogenic amount of a substance. PKC is also involved in activation of inflammation pathways such as the NF-κB pathway. Thus, exposure to phorbol products can induce an inflammatory response in tissues. Symptoms can include edema and pain, especially to the skin and mucous membranes. While phorbol itself does not have irritant activity, nearly all phorbol esters are highly irritant, with a wide range of half-maximal inhibitory concentration (IC50) values. 
The median lethal dose (LD50) of phorbol esters for male mice was found to be about 27 mg/kg, with the mice showing hemorrhage and congestion of pulmonary blood vessels, as well as lesions throughout the body. Total synthesis A total synthesis of enantiopure phorbol was developed in 2015. While this synthesis will not replace natural isolation products, it will enable researchers to create phorbol analogs for use in research, especially creating phorbol derivatives that can be evaluated for anti-cancer activity. Previously, the difficulty with synthesizing phorbol had been creating C–C bonds, especially in the six-membered ring at the top of the molecule. This synthesis starts from (+)-3-carene, and uses a series of 19 steps to eventually create (+)-phorbol. Uses in biomedical research Because of their mechanism of action, phorbol esters can be used to study tumor proliferation and pain response. TPA is most commonly used in the laboratory to induce a cellular response. For example, TPA can be used to measure response to pain and test compounds that may mitigate the inflammatory response. TPA and other phorbol esters can also be used to induce tumor formation and to study mechanism of action. TPA, together with ionomycin, can also be used to stimulate T-cell activation, proliferation, and cytokine production, and is used in protocols for intracellular staining of these cytokines. Possible and purported medicinal uses The phorbol ester tigilanol tiglate reportedly has in vitro anti-cancer, antiviral, and antibacterial activities. The phorbol derivatives in croton oil are used in folk medicine, with purported purgative, counter-irritant, or anthelmintic activities. References Further reading External links Diterpenes Alcohols Benzoazulenes Ketones Total synthesis Cyclopropanes Cyclopentenes Protein kinase C activators Phorbol esters
Phorbol
[ "Chemistry" ]
1,239
[ "Total synthesis", "Ketones", "Functional groups", "Chemical synthesis" ]
4,500,356
https://en.wikipedia.org/wiki/ISO/IEC%2015288
The ISO/IEC 15288 Systems and software engineering — System life cycle processes is a technical standard in systems engineering which covers processes and lifecycle stages, developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Planning for the ISO/IEC 15288:2002(E) standard started in 1994 when the need for a common systems engineering process framework was recognized. ISO/IEC/IEEE 15288 is managed by ISO/IEC JTC1/SC7, which is the committee responsible for developing standards in the area of Software and Systems Engineering. ISO/IEC/IEEE 15288 is part of the SC 7 Integrated set of Standards, and other standards in this domain include: ISO/IEC TR 15504 which addresses capability ISO/IEC 12207 and ISO/IEC 15288 which address lifecycle and ISO 9001 & ISO 90003 which address quality History The previously accepted standard MIL STD 499A (1974) was cancelled after a memo from the United States Secretary of Defense (SECDEF) prohibited the use of most U.S. Military Standards without a waiver (this memo was rescinded in 2005). The first edition was issued on 1 November 2002. Stuart Arnold was the editor and Harold Lawson was the architect of the standard. In 2004 this standard was adopted by the Institute of Electrical and Electronics Engineers as IEEE 15288. ISO/IEC 15288 was updated in 2008, then again (as a joint publication with IEEE) in 2015 and 2023. ISO/IEC/IEEE 15288:2023, 16 May 2023 Revises: ISO/IEC/IEEE 15288:2015 (jointly with IEEE), 15 May 2015 Revises: ISO/IEC 15288:2008 (harmonized with ISO/IEC 12207:2008), 1 February 2008 Revises: ISO/IEC 15288:2002 (first edition), 1 November 2002 Processes The standard defines thirty processes grouped into four categories: Agreement processes Organizational project-enabling processes Technical management processes Technical processes The standard defines two agreement processes: Acquisition process (clause 6.1.1) Supply process (clause 6.1.2) The standard defines six organizational project-enabling processes: Life cycle model management process (clause 6.2.1) Infrastructure management process (clause 6.2.2) Portfolio management process (clause 6.2.3) Human resources management process (clause 6.2.4) Quality management process (clause 6.2.5) Knowledge management process (clause 6.2.6) The standard defines eight technical management processes: Project planning process (clause 6.3.1) Project assessment and control process (clause 6.3.2) Decision management process (clause 6.3.3) Risk management process (clause 6.3.4) Configuration management process (clause 6.3.5) Information management process (clause 6.3.6) Measurement process (clause 6.3.7) Quality assurance process (clause 6.3.8) The standard defines fourteen technical processes: Business or mission analysis process (clause 6.4.1) Stakeholder needs and requirements definition process (clause 6.4.2) System requirements definition process (clause 6.4.3) Architecture definition process (clause 6.4.4) Design definition process (clause 6.4.5) System analysis process (clause 6.4.6) Implementation process (clause 6.4.7) Integration process (clause 6.4.8) Verification process (clause 6.4.9) Transition process (clause 6.4.10) Validation process (clause 6.4.11) Operation process (clause 6.4.12) Maintenance process (clause 6.4.13) Disposal process (clause 6.4.14) Each process is defined by a purpose, outcomes and activities. Activities are further divided into tasks. 
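For readers who want to work with this taxonomy programmatically (for example, to build a process-compliance checklist), the structure described above maps naturally onto a simple data structure. The sketch below is an illustrative, abridged representation only; the process names and clause numbers are taken from the list above, but the grouping into a Python dictionary is not part of the standard itself.

```python
# Abridged map of ISO/IEC/IEEE 15288 process groups to (some of) their processes.
# Clause numbers follow the list given above; the selection is illustrative, not exhaustive.
LIFE_CYCLE_PROCESSES = {
    "Agreement": {
        "6.1.1": "Acquisition",
        "6.1.2": "Supply",
    },
    "Organizational project-enabling": {
        "6.2.1": "Life cycle model management",
        "6.2.5": "Quality management",
    },
    "Technical management": {
        "6.3.1": "Project planning",
        "6.3.4": "Risk management",
    },
    "Technical": {
        "6.4.3": "System requirements definition",
        "6.4.9": "Verification",
        "6.4.11": "Validation",
    },
}

for group, processes in LIFE_CYCLE_PROCESSES.items():
    print(f"{group} processes:")
    for clause, name in processes.items():
        print(f"  {clause}  {name} process")
```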
See also Systems development life cycle System lifecycle Capability Maturity Model Integration (CMMI) ISO/IEC 12207 Concept of operations or CONOPS References Systems engineering 15288 15288
ISO/IEC 15288
[ "Engineering" ]
845
[ "ISO/IEC JTC1/SC7 standards", "Systems engineering" ]
4,500,678
https://en.wikipedia.org/wiki/River%20morphology
The terms river morphology and its synonym stream morphology are used to describe the shapes of river channels and how they change in shape and direction over time. The morphology of a river channel is a function of a number of processes and environmental conditions, including the composition and erodibility of the bed and banks (e.g., sand, clay, bedrock); erosion comes from the power and consistency of the current, and can effect the formation of the river's path. Also, vegetation and the rate of plant growth; the availability of sediment; the size and composition of the sediment moving through the channel; the rate of sediment transport through the channel and the rate of deposition on the floodplain, banks, bars, and bed; and regional aggradation or degradation due to subsidence or uplift. River morphology can also be affected by human interaction, which is a way the river responds to a new factor in how the river can change its course. An example of human induced change in river morphology is dam construction, which alters the ebb flow of fluvial water and sediment, therefore creating or shrinking estuarine channels. A river regime is a dynamic equilibrium system, which is a way of classifying rivers into different categories. The four categories of river regimes are Sinuous canali- form rivers, Sinuous point bar rivers, Sinuous braided rivers, and Non-sinuous braided rivers. The study of river morphology is accomplished in the field of fluvial geomorphology, the scientific term. See also Bedload, Suspended load Sediment, sedimentation, erosion River, Stream, Canal Water, Water resource Flood References Rosgen, Dave (1996). Applied River Morphology. 2nd ed. (Fort Collins, CO: Wildland Hydrology, publ.) . Brice J C. Planform properties of meandering rivers [C].River Meandering, Proceedings of the October 24–26, 1983 Rivers '83 Conference, ASCE. New Orleans, Louisi- ana, 1983. 1-15. External links River Morphology at Delft University of Technology River Engineering and Morphology at WL | Delft Hydraulics River morphology in Delft Cluster Rivers Hydraulic engineering Water streams Geomorphology Sedimentology Fluvial landforms
River morphology
[ "Physics", "Engineering", "Environmental_science" ]
465
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
4,501,128
https://en.wikipedia.org/wiki/Neutron%20time-of-flight%20scattering
In neutron time-of-flight scattering, a form of inelastic neutron scattering, the initial position and velocity of a pulse of neutrons are fixed, and their final position and the time after the pulse that the neutrons are detected are measured. By the principle of conservation of momentum, these pairs of coordinates may be transformed into momenta and energies for the neutrons, and the experimentalist may use this information to calculate the momentum and energy transferred to the sample. Inverse geometry spectrometers are also possible. In this case, the final position and velocity are fixed, and the incident coordinates are varied. Time-of-flight scattering can be performed at either a research reactor or a spallation source. Time-of-flight spectrometers at pulsed sources Time-of-flight spectrometers at pulsed sources include Pharos at LANSCE's Lujan Center at Los Alamos National Laboratory, MAPS, MARI, HET, MERLIN and LET at the ISIS neutron source, and ARCS, CNCS, and SEQUOIA at the Spallation Neutron Source, iBIX, SuperHRPD, PLANET, SENJU, TAKUMI, iMATERIA and NOVA at the J-PARC and SKAT-EPSILON, DIN-2PI, NERA at the IBR-2 pulsed reactor. Time-of-flight spectrometers at continuous sources Time-of-flight spectrometers at continuous sources include DCS and FCS at the NIST laboratories in Maryland, IN4, IN5, and IN6 at the Institut Laue-Langevin, TOFTOF at the Forschungsneutronenquelle Heinz Maier-Leibnitz, PELICAN at the Australian Nuclear Science and Technology Organisation, FOCUS at the Paul Scherrer Institute. Related projects Integrated Infrastructure Initiative for Neutron Scattering and Muon Spectroscopy (NMI3) is a European consortium of 18 partner organisations from 12 countries, including all major facilities in the fields of neutron scattering and muon spectroscopy. References Neutron scattering
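As a worked example of the kinematics involved, the Python sketch below converts a measured flight time over a known path length into a neutron speed, kinetic energy, and de Broglie wavelength, and then forms the energy transfer for a direct-geometry instrument in which the incident energy is fixed. The path length, flight time, and incident energy are made-up illustrative numbers, not parameters of any particular spectrometer.

```python
# Neutron time-of-flight kinematics (non-relativistic, adequate for thermal
# and cold neutrons).
M_N = 1.674927e-27      # neutron mass, kg
H   = 6.62607015e-34    # Planck constant, J s
MEV = 1.602176634e-22   # 1 meV in joules

def neutron_from_tof(path_m, time_s):
    """Speed (m/s), kinetic energy (meV) and wavelength (angstrom) from a flight time."""
    v = path_m / time_s
    e_mev = 0.5 * M_N * v**2 / MEV
    wavelength_angstrom = H / (M_N * v) * 1e10
    return v, e_mev, wavelength_angstrom

# Illustrative numbers: a 4 m sample-to-detector flight path and a 2 ms flight time.
v, e_f, lam = neutron_from_tof(path_m=4.0, time_s=2.0e-3)
print(f"v = {v:.0f} m/s, E_f = {e_f:.1f} meV, lambda = {lam:.2f} A")

# Direct geometry: the incident energy E_i is fixed by the instrument; the
# energy transferred to the sample is then E_i - E_f.
e_i = 25.0  # meV, illustrative
print(f"energy transfer E_i - E_f = {e_i - e_f:.1f} meV")
```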
Neutron time-of-flight scattering
[ "Chemistry" ]
415
[ "Scattering", "Scattering stubs", "Neutron scattering" ]
4,501,325
https://en.wikipedia.org/wiki/Plasma%20parameters
Plasma parameters define various characteristics of a plasma, an electrically conductive collection of charged and neutral particles of various species (electrons and ions) that responds collectively to electromagnetic forces. Such particle systems can be studied statistically, i.e., their behaviour can be described based on a limited number of global parameters instead of tracking each particle separately. Fundamental The fundamental plasma parameters in a steady state are the number density of each particle species present in the plasma, the temperature of each species, the mass of each species, the charge of each species, and the magnetic flux density B. Using these parameters and physical constants, other plasma parameters can be derived. Other All quantities are in Gaussian (cgs) units except energy and temperature, which are in electronvolts. For the sake of simplicity, a single ionic species is assumed. The ion mass is expressed in units of the proton mass, and the ion charge Z in units of the elementary charge (in the case of a fully ionized atom, Z equals the respective atomic number). The other physical quantities used are the Boltzmann constant kB, the speed of light c, and the Coulomb logarithm ln Λ. The derived parameters fall into groups of characteristic frequencies, lengths, and velocities, as well as dimensionless numbers such as the number of particles in a Debye sphere, the Alfvén speed to speed of light ratio, the electron plasma frequency to gyrofrequency ratio, the ion plasma frequency to gyrofrequency ratio, the thermal pressure to magnetic pressure ratio (beta, β), and the magnetic field energy to ion rest energy ratio. Collisionality In the study of tokamaks, collisionality is a dimensionless parameter which expresses the ratio of the electron-ion collision frequency to the banana orbit frequency. The plasma collisionality ν* is defined, up to a numerical factor of order unity, as ν* = νei qR / (ε^(3/2) vT,i), where νei denotes the electron-ion collision frequency, R is the major radius of the plasma, ε is the inverse aspect-ratio, q is the safety factor, and vT,i = (kB Ti/mi)^(1/2) is the ion thermal speed. The plasma parameters mi and Ti denote, respectively, the mass and temperature of the ions, and kB is the Boltzmann constant. Electron temperature Temperature is a statistical quantity whose formal definition is T = (∂U/∂S)V,N, the change in internal energy with respect to entropy, holding volume and particle number constant. A practical definition comes from the fact that the atoms, molecules, or whatever particles in a system have an average kinetic energy. The average means to average over the kinetic energy of all the particles in a system. If the velocities of a group of electrons, e.g., in a plasma, follow a Maxwell–Boltzmann distribution, then the electron temperature is defined as the temperature of that distribution. For other distributions, not assumed to be in equilibrium or have a temperature, two-thirds of the average energy is often referred to as the temperature, since for a Maxwell–Boltzmann distribution with three degrees of freedom the average energy is (3/2)kBT. The SI unit of temperature is the kelvin (K), but using the above relation the electron temperature is often expressed in terms of the energy unit electronvolt (eV). Each kelvin (1 K) corresponds to about 8.617×10^−5 eV; this factor is the ratio of the Boltzmann constant to the elementary charge. Each eV is equivalent to 11,605 kelvins, which can be calculated by the relation T = E/kB. The electron temperature of a plasma can be several orders of magnitude higher than the temperature of the neutral species or of the ions. This is a result of two facts. Firstly, many plasma sources heat the electrons more strongly than the ions.
Secondly, atoms and ions are much heavier than electrons, and energy transfer in a two-body collision is much more efficient if the masses are similar. Therefore, equilibration of the temperature happens very slowly, and is not achieved during the time range of the observation. See also Ball-pen probe Langmuir probe References NRL Plasma Formulary – Naval Research Laboratory (2018) Plasma parameters Astrophysics
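As a small numerical companion to the definitions above, the Python sketch below evaluates two of the most commonly used derived quantities, the electron Debye length and the electron plasma frequency, and performs the kelvin/electronvolt conversion discussed in the electron temperature section. SI formulas are used here for convenience (the article's quantities are quoted in Gaussian units), and the density and temperature are arbitrary example values.

```python
import math

# SI constants
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E    = 1.602176634e-19    # elementary charge, C
KB   = 1.380649e-23       # Boltzmann constant, J/K
ME   = 9.1093837015e-31   # electron mass, kg

def debye_length(n_e, t_e_ev):
    """Electron Debye length (m) for density n_e (m^-3) and temperature T_e (eV)."""
    return math.sqrt(EPS0 * t_e_ev * E / (n_e * E**2))

def plasma_frequency(n_e):
    """Electron plasma (angular) frequency (rad/s) for density n_e (m^-3)."""
    return math.sqrt(n_e * E**2 / (EPS0 * ME))

def ev_to_kelvin(t_ev):
    return t_ev * E / KB   # 1 eV corresponds to about 11,605 K

# Example values (arbitrary, of the order found in a laboratory discharge):
n_e, t_e = 1e18, 5.0      # m^-3, eV
print(f"Debye length     = {debye_length(n_e, t_e):.2e} m")
print(f"Plasma frequency = {plasma_frequency(n_e):.2e} rad/s")
print(f"T_e = {t_e} eV   = {ev_to_kelvin(t_e):.3e} K")
```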
Plasma parameters
[ "Physics", "Astronomy" ]
762
[ "Astronomical sub-disciplines", "Astrophysics" ]
19,121,488
https://en.wikipedia.org/wiki/Hodoscope
A hodoscope (from the Greek "hodos" for way or path, and "skopos" an observer) is an instrument used in particle detectors to detect passing charged particles and determine their trajectories. Hodoscopes are characterized by being made up of many segments; the combination of which segments record a detection is then used to infer where the particle passed through hodoscope. The typical detector segment is a piece of scintillating material, which emits light when a particle passes through it. The scintillation light can be converted to an electrical signal either by a photomultiplier tube (PMT) or a PIN diode. If a segment measures some significant amount of light, the experimenter can infer that a particle passed through that segment. In addition to coordinate information, for some systems the strength of the light can be proportional to the deposited energy. By doing necessary calibrations, the deposited energy can be determined, which then can be used to infer information about the original particle's energy. As an example: a simple hodoscope might be used to determine where a particle crossed a plane or a wall. In this case, the experimenter could use two segments shaped like strips, arranged in two layers. One layer of strips could be arranged horizontally, while a second layer could be arranged vertically. A particle passing through the wall would hit a strip in each layer; the vertical strip would reveal the particle's horizontal position when it crossed the wall, while the horizontal strip would indicate the particle's vertical position. Hodoscopes are some of the simplest detectors for tracking charged particles. However, their spatial resolution is limited by the segment size. In applications where the spatial resolution is very important, hodoscopes have been superseded by other detectors such as drift chambers and time projection chambers. Further reading http://www.scifun.ed.ac.uk/pages/pp4ss/pp4ss-hodoscope.html External links Particle detectors
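To make the strip-crossing example above concrete, the short Python sketch below reconstructs the crossing point of a particle from the indices of the strips that fired in the two layers. The strip pitch, layer geometry, and function names are hypothetical choices for illustration, not a description of any particular detector.

```python
def hit_position(vertical_strip, horizontal_strip, pitch_cm=5.0):
    """Infer the (x, y) crossing point, in cm, from the fired strip indices.

    The vertical strips segment the wall in x, the horizontal strips in y;
    the particle is assigned the centre of each fired strip, so the spatial
    resolution is limited to roughly one strip pitch.
    """
    x = (vertical_strip + 0.5) * pitch_cm
    y = (horizontal_strip + 0.5) * pitch_cm
    return x, y

# Example: the 3rd vertical strip and the 7th horizontal strip recorded light.
x, y = hit_position(vertical_strip=2, horizontal_strip=6)
print(f"particle crossed the wall near x = {x:.1f} cm, y = {y:.1f} cm")
```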
Hodoscope
[ "Physics", "Technology", "Engineering" ]
417
[ "Particle physics stubs", "Particle detectors", "Particle physics", "Measuring instruments" ]
19,122,210
https://en.wikipedia.org/wiki/Global%20Design%20Effort
The Global Design Effort (GDE) was an international team tasked with designing the International Linear Collider (ILC), a particle accelerator to succeed machines such as the Large Hadron Collider (LHC) and the Stanford Linear Accelerator (SLAC), with the endorsement of the International Committee for Future Accelerators. Between 2005–2013, the GDE led planning, research and development, and produced an ILC Technical Design Report. The Global Design Effort was headed by Barry Barish of Caltech, former director of the LIGO laboratory. References External links ILC Global Design Effort (GDE) Particle physics facilities
Global Design Effort
[ "Physics" ]
130
[ "Particle physics stubs", "Particle physics" ]
19,127,190
https://en.wikipedia.org/wiki/Muffin-tin%20approximation
The muffin-tin approximation is a shape approximation of the potential well in a crystal lattice. It is most commonly employed in quantum mechanical simulations of the electronic band structure in solids. The approximation was proposed by John C. Slater. The augmented plane wave (APW) method, which uses the muffin-tin approximation, is a method to approximate the energy states of an electron in a crystal lattice. The basic approximation lies in the potential, which is assumed to be spherically symmetric in the muffin-tin region and constant in the interstitial region. Wave functions (the augmented plane waves) are constructed by matching solutions of the Schrödinger equation within each sphere with plane-wave solutions in the interstitial region, and linear combinations of these wave functions are then determined by the variational method. Many modern electronic structure methods employ the approximation, among them the APW method, the linear muffin-tin orbital method (LMTO), and various Green's function methods. One application is found in the variational theory developed by Jan Korringa (1947) and by Walter Kohn and N. Rostoker (1954) referred to as the KKR method. This method has been adapted to treat random materials as well, where it is called the KKR coherent potential approximation. In its simplest form, non-overlapping spheres are centered on the atomic positions. Within these regions, the screened potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced. In the interstitial region of constant potential, the single electron wave functions can be expanded in terms of plane waves. In the atom-centered regions, the wave functions can be expanded in terms of spherical harmonics and the eigenfunctions of a radial Schrödinger equation. Such use of functions other than plane waves as basis functions is termed the augmented plane-wave approach (of which there are many variations). It allows for an efficient representation of single-particle wave functions in the vicinity of the atomic cores where they can vary rapidly (and where plane waves would be a poor choice on convergence grounds in the absence of a pseudopotential). See also Anderson's rule Band gap Bloch waves Kohn–Sham equations Kronig–Penney model Local-density approximation References Electronic band structures Electronic structure methods Computational physics Condensed matter physics
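The following Python sketch illustrates, very schematically, the shape approximation itself: the potential is taken to be a spherically symmetric (here, a screened Coulomb) function inside non-overlapping spheres centred on the atoms and a constant in the interstitial region. The functional form inside the spheres, the sphere radius, and the interstitial constant are arbitrary placeholders; a real APW/LMTO/KKR code would obtain them self-consistently.

```python
import math

def muffin_tin_potential(r_point, atom_positions, r_mt=1.0, z_eff=1.0,
                         screening=1.0, v_interstitial=-0.1):
    """Evaluate a schematic muffin-tin potential at a 3D point.

    Inside a sphere of radius r_mt around the nearest atom the potential is a
    spherically symmetric screened Coulomb well; everywhere else it is the
    constant interstitial value. All numbers are illustrative, in arbitrary units.
    """
    for atom in atom_positions:
        r = math.dist(r_point, atom)
        if r < r_mt:
            r = max(r, 1e-8)                      # avoid the nuclear singularity
            return -z_eff * math.exp(-screening * r) / r
    return v_interstitial

# A tiny "crystal": two atoms on a line, probed along the x axis.
atoms = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
for x in [0.2, 0.8, 1.5, 2.4, 2.9]:
    v = muffin_tin_potential((x, 0.0, 0.0), atoms)
    print(f"x = {x:3.1f}  V = {v:8.3f}")
```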
Muffin-tin approximation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
516
[ "Electron", "Quantum chemistry", "Phases of matter", "Quantum mechanics", "Materials science", "Computational physics", "Electronic structure methods", "Electronic band structures", "Computational chemistry", "Condensed matter physics", "Matter" ]
10,061,569
https://en.wikipedia.org/wiki/Perpendicular%20axis%20theorem
The perpendicular axis theorem (or plane figure theorem) states that for a planar lamina the moment of inertia about an axis perpendicular to the plane of the lamina is equal to the sum of the moments of inertia about two mutually perpendicular axes in the plane of the lamina, which intersect at the point where the perpendicular axis passes through. This theorem applies only to planar bodies and is valid when the body lies entirely in a single plane. Define perpendicular axes x, y, and z (which meet at origin O) so that the body lies in the xy plane, and the z axis is perpendicular to the plane of the body. Let Ix, Iy and Iz be moments of inertia about axis x, y, z respectively. Then the perpendicular axis theorem states that Iz = Ix + Iy. This rule can be applied with the parallel axis theorem and the stretch rule to find polar moments of inertia for a variety of shapes. If a planar object has rotational symmetry such that Ix and Iy are equal, then the perpendicular axes theorem provides the useful relationship: Iz = 2Ix = 2Iy. Derivation Working in Cartesian coordinates, the moment of inertia of the planar body about the z axis is given by: Iz = ∫(x^2 + y^2) dm = ∫x^2 dm + ∫y^2 dm. On the plane, z = 0, so these two terms are the moments of inertia about the y and x axes respectively, giving the perpendicular axis theorem. The converse of this theorem is also derived similarly. Note that because Ix = ∫(y^2 + z^2) dm and Iy = ∫(x^2 + z^2) dm, the integrand measures the squared distance from the axis of rotation, so for a y-axis rotation the deviation distance from the axis of rotation of a point in the plane is equal to its x coordinate. References See also Parallel axis theorem Stretch rule Rigid bodies Physics theorems Articles containing proofs Classical mechanics Moment (physics)
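A standard worked example (a uniform thin disc of mass M and radius R, lying in the xy plane with its centre at the origin) shows how the symmetric form of the theorem is used in practice; the LaTeX fragment below spells out the two-line calculation.

```latex
% Thin uniform disc of mass M and radius R in the xy-plane, axes through its centre.
% The polar moment I_z is known, and symmetry gives I_x = I_y.
\begin{align*}
  I_z &= \tfrac{1}{2} M R^2, \\
  I_x &= I_y \quad\Longrightarrow\quad I_z = I_x + I_y = 2 I_x, \\
  I_x &= I_y = \tfrac{1}{2} I_z = \tfrac{1}{4} M R^2 .
\end{align*}
```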
Perpendicular axis theorem
[ "Physics", "Mathematics" ]
335
[ "Equations of physics", "Physical quantities", "Quantity", "Classical mechanics", "Mechanics", "Articles containing proofs", "Physics theorems", "Moment (physics)" ]
10,062,927
https://en.wikipedia.org/wiki/Overdrive%20voltage
Overdrive voltage, usually abbreviated as VOV, is typically referred to in the context of MOSFET transistors. The overdrive voltage is defined as the voltage between transistor gate and source (VGS) in excess of the threshold voltage (VTH), where VTH is defined as the minimum voltage required between gate and source to turn the transistor on (allow it to conduct electricity). Due to this definition, overdrive voltage is also known as "excess gate voltage" or "effective voltage." Overdrive voltage can be found using the simple equation: VOV = VGS − VTH. Technology VOV is important as it directly affects the output drain terminal current (ID) of the transistor, an important property of amplifier circuits. By increasing VOV, ID can be increased until saturation is reached. Overdrive voltage is also important because of its relationship to VDS, the drain voltage relative to the source, which can be used to determine the region of operation of the MOSFET. For an n-channel MOSFET, the regions of operation follow from comparing VDS with VOV: if VGS < VTH (i.e., VOV < 0), the device is in cutoff; if VGS > VTH and VDS < VOV, it operates in the triode (linear) region; and if VGS > VTH and VDS ≥ VOV, it operates in saturation. A more physics-related explanation follows: In an NMOS transistor, the channel region under zero bias has an abundance of holes (i.e., it is p-type silicon). By applying a negative gate bias (VGS < 0) we attract more holes, and this is called accumulation. A positive gate voltage (VGS > 0) will attract electrons and repel holes, and this is called depletion because we are depleting the number of holes. At a critical voltage called the threshold voltage (VTH) the channel will actually be so depleted of holes and rich in electrons that it will INVERT to being n-type silicon, and this is called the inversion region. As we increase this voltage, VGS, beyond VTH, we are then said to be overdriving the gate by creating a stronger channel, hence the overdrive (often called Vov, Vod, or Von) is defined as (VGS − VTH). See also MOSFET Threshold voltage Electronic amplifier Short-channel effect Biasing References Electrical parameters Semiconductors MOSFETs
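The relationships above can be summarized in a few lines of code. The Python sketch below classifies the operating region of an n-channel MOSFET from VGS, VDS and VTH, and evaluates the drain current with the simplest long-channel square-law model (ignoring channel-length modulation and other second-order effects); the transconductance parameter k = μnCox·(W/L) and all numerical values are illustrative assumptions.

```python
def nmos_operating_point(v_gs, v_ds, v_th, k=2e-3):
    """Region of operation and drain current (A) of an ideal long-channel NMOS.

    k is the square-law transconductance parameter mu_n * C_ox * (W/L), in A/V^2.
    Channel-length modulation, subthreshold conduction, etc. are ignored.
    """
    v_ov = v_gs - v_th                      # overdrive voltage
    if v_ov <= 0:
        return "cutoff", 0.0
    if v_ds < v_ov:
        i_d = k * (v_ov * v_ds - 0.5 * v_ds**2)
        return "triode", i_d
    i_d = 0.5 * k * v_ov**2
    return "saturation", i_d

# Illustrative bias point: VTH = 0.5 V, VGS = 1.2 V (so VOV = 0.7 V), VDS = 1.0 V.
region, i_d = nmos_operating_point(v_gs=1.2, v_ds=1.0, v_th=0.5)
print(f"region = {region}, I_D = {i_d*1e3:.2f} mA")   # saturation, 0.49 mA
```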
Overdrive voltage
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
460
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Electrical engineering", "Solid state engineering", "Matter", "Electrical parameters" ]
10,064,136
https://en.wikipedia.org/wiki/Separation%20principle
In control theory, a separation principle, more formally known as a principle of separation of estimation and control, states that under some assumptions the problem of designing an optimal feedback controller for a stochastic system can be solved by designing an optimal observer for the state of the system, which feeds into an optimal deterministic controller for the system. Thus the problem can be broken into two separate parts, which facilitates the design. The first instance of such a principle is in the setting of deterministic linear systems, namely that if a stable observer and a stable state feedback are designed for a linear time-invariant system (LTI system hereafter), then the combined observer and feedback is stable. The separation principle does not hold in general for nonlinear systems. Another instance of the separation principle arises in the setting of linear stochastic systems, namely that state estimation (possibly nonlinear) together with an optimal state feedback controller designed to minimize a quadratic cost, is optimal for the stochastic control problem with output measurements. When process and observation noise are Gaussian, the optimal solution separates into a Kalman filter and a linear-quadratic regulator. This is known as linear-quadratic-Gaussian control. More generally, under suitable conditions and when the noise is a martingale (with possible jumps), again a separation principle applies and is known as the separation principle in stochastic control. The separation principle also holds for high gain observers used for state estimation of a class of nonlinear systems and control of quantum systems. Proof of separation principle for deterministic LTI systems Consider a deterministic LTI system dx/dt = Ax + Bu, y = Cx, where u represents the input signal, y represents the output signal, and x represents the internal state of the system. We can design an observer of the form dx̂/dt = (A − LC)x̂ + Bu + Ly and state feedback u = −Kx̂. Define the error e = x − x̂. Then de/dt = (A − LC)e. Now we can write the closed-loop dynamics as d/dt [x; e] = [A − BK, BK; 0, A − LC][x; e]. Since this is a block triangular matrix, the eigenvalues are just those of A − BK together with those of A − LC. Thus the stability of the observer and feedback are independent. References Brezinski, Claude. Computational Aspects of Linear Control (Numerical Methods and Algorithms). Springer, 2002. Control theory Stochastic control
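This spectral decoupling is easy to verify numerically. The minimal Python sketch below builds the closed-loop block matrix for an illustrative plant; the matrices A, B, C and the gains K, L are made-up example values, chosen only so that both A − BK and A − LC are stable, and the check confirms that the closed-loop eigenvalues are exactly the union of the controller and observer eigenvalues.

```python
import numpy as np

# Plant: dx/dt = A x + B u,  y = C x   (illustrative second-order example)
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

K = np.array([[12.0, 4.0]])          # state-feedback gain, u = -K x_hat
L = np.array([[10.0],
              [30.0]])               # observer gain

# Closed-loop dynamics in (x, e) coordinates, with e = x - x_hat:
#   d/dt [x; e] = [[A - BK, BK], [0, A - LC]] [x; e]
top = np.hstack([A - B @ K, B @ K])
bottom = np.hstack([np.zeros_like(A), A - L @ C])
closed_loop = np.vstack([top, bottom])

# The spectrum of the combined system is the union of the controller and observer spectra.
eig_cl = np.sort_complex(np.linalg.eigvals(closed_loop))
eig_sep = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                          np.linalg.eigvals(A - L @ C)]))
print(eig_cl)
print(eig_sep)
assert np.allclose(eig_cl, eig_sep)
```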
Separation principle
[ "Mathematics" ]
446
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
10,064,152
https://en.wikipedia.org/wiki/Continental%20rise
The continental rise is a low-relief zone of accumulated sediments that lies between the continental slope and the abyssal plain. It is a major part of the continental margin, covering around 10% of the ocean floor. Formation This geologic structure results from deposition of sediments, mainly due to mass wasting, the gravity-driven downhill motion of sand and other sediments. Mass wasting can occur gradually, with sediments accumulating discontinuously, or in large, sudden events. Large mass wasting occurrences are often triggered by sudden events such as earthquakes or oversteepening of the continental slope. More gradual accumulation of sediments occurs when hemipelagic sediments suspended in the ocean slowly settle to the ocean basin. Slope Because the continental rise lies below the continental slope and is formed from sediment deposition, it has a very gentle slope, usually ranging from 1:50 to 1:500. As the continental rise extends seaward, the layers of sediment thin, and the rise merges with the abyssal plain, typically forming a slope of around 1:1000. Accompanying Structures Alluvial Fans Deposition of sediments at the mouth of submarine canyons may form enormous fan-shaped accumulations called submarine fans on both the continental slope and continental rise. Alluvial or sedimentary fans are shallow cone-shaped reliefs at the base of the continental slope that merge together, forming the continental rise. Erosional submarine canyons slope downward and lead to alluvial fan valleys with increasing depth. It is in this zone that sediment is deposited, forming the continental rise. Alluvial fans such as the Bengal Fan are among the largest sedimentary structures in the world. Many alluvial fans also contain critical oil and natural gas reservoirs, making them key points for the collection of seismic data. Abyssal Plain Beyond the continental rise stretches the abyssal plain, which lies on top of basaltic oceanic crust and spans the majority of the seafloor. The abyssal plain hosts life forms which are uniquely adapted to survival in its cold, high pressure, and dark conditions. The flatness of the abyssal plain is interrupted by massive underwater mountain chains near the tectonic boundaries of Earth's plates. The sediments are mostly silt and clay. References Oceanography
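To give a feel for how gentle these gradients are, a slope quoted as a ratio can be converted to an angle; the short Python sketch below (purely illustrative) does this for the figures above.

```python
import math

def slope_angle_degrees(ratio_denominator):
    """Angle in degrees for a gradient of 1 : ratio_denominator (rise over run)."""
    return math.degrees(math.atan(1.0 / ratio_denominator))

for denom in (50, 500, 1000):
    print(f"1:{denom} gradient is about {slope_angle_degrees(denom):.3f} degrees")
# 1:50 is roughly 1.15 degrees, 1:500 roughly 0.11 degrees, 1:1000 roughly 0.06 degrees
```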
Continental rise
[ "Physics", "Environmental_science" ]
455
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
10,066,313
https://en.wikipedia.org/wiki/Burgers%20vector
In materials science, the Burgers vector, named after Dutch physicist Jan Burgers, is a vector, often denoted as b, that represents the magnitude and direction of the lattice distortion resulting from a dislocation in a crystal lattice. Concepts The vector's magnitude and direction is best understood when the dislocation-bearing crystal structure is first visualized without the dislocation, that is, the perfect crystal structure. In this perfect crystal structure, a rectangle whose lengths and widths are integer multiples of a (the unit cell edge length) is drawn encompassing the site of the original dislocation's origin. Once this encompassing rectangle is drawn, the dislocation can be introduced. This dislocation will have the effect of deforming, not only the perfect crystal structure, but the rectangle as well. The said rectangle could have one of its sides disjoined from the perpendicular side, severing the connection of the length and width line segments of the rectangle at one of the rectangle's corners, and displacing each line segment from each other. What was once a rectangle before the dislocation was introduced is now an open geometric figure, whose opening defines the direction and magnitude of the Burgers vector. Specifically, the breadth of the opening defines the magnitude of the Burgers vector, and, when a set of fixed coordinates is introduced, an angle between the termini of the dislocated rectangle's length line segment and width line segment may be specified. When calculating the Burgers vector practically, one may draw a rectangular clockwise circuit (Burgers circuit) from a starting point to enclose the dislocation. The Burgers vector will be the vector to complete the circuit, i.e., from the start to the end of the circuit. One can also use a counterclockwise Burgers circuit from a starting point to enclose the dislocation. The Burgers vector will instead be from the end to the start of the circuit. The direction of the vector depends on the plane of dislocation, which is usually on one of the closest-packed crystallographic planes. The magnitude is usually represented by the equation (for BCC and FCC lattices only) ||b|| = (a/2)√(h² + k² + l²), where a is the unit cell edge length of the crystal, ||b|| is the magnitude of the Burgers vector, and h, k, and l are the components of the Burgers vector b = (a/2)⟨h k l⟩; the coefficient 1/2 arises because in BCC and FCC lattices the shortest lattice vectors can be expressed as (a/2)⟨111⟩ and (a/2)⟨110⟩ respectively. Comparatively, for simple cubic lattices the shortest lattice vectors are a⟨100⟩, and hence the magnitude is represented by ||b|| = a√(h² + k² + l²). Generally, the Burgers vector of a dislocation is defined by performing a line integral over the distortion field around the dislocation line, bi = ∮L (∂ui/∂xj) dxj, where the integration path L is a Burgers circuit around the dislocation line, u is the displacement field, and ∂ui/∂xj is the distortion field. In most metallic materials, the magnitude of the Burgers vector for a dislocation is of a magnitude equal to the interatomic spacing of the material, since a single dislocation will offset the crystal lattice by one close-packed crystallographic spacing unit. In edge dislocations, the Burgers vector and dislocation line are perpendicular to one another. In screw dislocations, they are parallel. The Burgers vector is significant in determining the yield strength of a material by affecting solute hardening, precipitation hardening and work hardening. The Burgers vector plays an important role in determining the direction of the dislocation line.
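As a worked example of the magnitude formula, the sketch below evaluates ||b|| for the common slip systems of an FCC and a BCC metal; the lattice constants are approximate textbook values and the function is only an illustration of the relation above.

```python
import math

def burgers_magnitude(a, h, k, l, lattice="fcc"):
    """Magnitude of a Burgers vector b = (a/2)[h k l] for FCC/BCC, or a[h k l] for simple cubic."""
    coeff = 0.5 if lattice in ("fcc", "bcc") else 1.0
    return coeff * a * math.sqrt(h**2 + k**2 + l**2)

# Aluminium (FCC), a ~ 0.405 nm, slip along (a/2)<110>:
print(burgers_magnitude(0.405, 1, 1, 0, "fcc"))   # ~0.286 nm
# Iron (BCC), a ~ 0.287 nm, slip along (a/2)<111>:
print(burgers_magnitude(0.287, 1, 1, 1, "bcc"))   # ~0.249 nm
```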
See also Frank–Read source Dislocations References Crystallography Materials science Mineralogy concepts Vectors (mathematics and physics) de:Versetzung (Materialwissenschaft)#Der Burgersvektor
Burgers vector
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
772
[ "Applied and interdisciplinary physics", "Materials science", "Crystallography", "Condensed matter physics", "nan" ]
10,067,215
https://en.wikipedia.org/wiki/Quality%20engineering
Quality engineering is the discipline of engineering concerned with the principles and practice of product and service quality assurance and control. In software development, it is the management, development, operation and maintenance of IT systems and enterprise architectures with high quality standard. Description Quality engineering is the discipline of engineering that creates and implements strategies for quality assurance in product development and production as well as software development. Quality Engineers focus on optimizing product quality which W. Edwards Deming defined as: Quality engineering body of knowledge includes: Management and leadership The quality system Elements of a quality system Product and process design Classification of quality characteristics Design inputs and review Design verification Reliability and maintainability Product and process control Continuous improvement Quality control tools Quality management and planning tools Continuous improvement techniques Corrective action Preventive action Statistical process control (SPC) Risk management Roles Auditor: Quality engineers may be responsible for auditing their own companies or their suppliers for compliance to international quality standards such as ISO9000 and AS9100. They may also be independent auditors under an auditing body. Process quality: Quality engineers may be tasked with value stream mapping and statistical process control to determine if a process is likely to produce a defective product. They may create inspection plans and criteria to ensure defective parts are detected prior to completion. Supplier quality: Quality engineers may be responsible for auditing suppliers or performing root cause and corrective action at their facility or overseeing such activity to prevent the delivery of defective products. Software IT services are increasingly interlinked in workflows across platform boundaries, device and organisational boundaries, for example in cyber-physical systems, business-to-business workflows or when using cloud services. In such contexts, quality engineering facilitates the necessary all-embracing consideration of quality attributes. In such contexts an "end-to-end" view of quality from management to operation is vital. Quality engineering integrates methods and tools from enterprise architecture-management, Software product management, IT service management, software engineering and systems engineering, and from software quality management and information security management. This means that quality engineering goes beyond the classic disciplines of software engineering, information security management or software product management since it integrates management issues (such as business and IT strategy, risk management, business process views, knowledge and information management, operative performance management), design considerations (including the software development process, requirements analysis, software testing) and operative considerations (such as configuration, monitoring, IT service management). In many of the fields where it is used, quality engineering is closely linked to compliance with legal and business requirements, contractual obligations and standards. As far as quality attributes are concerned, reliability, security and safety of IT services play a predominant role. In quality engineering, quality objectives are implemented in a collaborative process. 
This process requires the interaction of largely independent actors whose knowledge is based on different sources of information. Quality objectives Quality objectives describe basic requirements for software quality. In quality engineering they often address the quality attributes of availability, security, safety, reliability and performance. With the help of quality models like ISO/IEC 25000 and methods like the Goal Question Metric approach it is possible to attribute metrics to quality objectives. This allows measuring the degree of attainment of quality objectives. This is a key component of the quality engineering process and, at the same time, is a prerequisite for its continuous monitoring and control. To ensure effective and efficient measuring of quality objectives the integration of core numbers, which were identified manually (e.g. by expert estimates or reviews), and automatically identified metrics (e.g. by statistical analysis of source codes or automated regression tests) as a basis for decision-making is favourable. Actors The end-to-end quality management approach to quality engineering requires numerous actors with different responsibilities and tasks, different expertise and involvement in the organisation. Different roles involved in quality engineering: Business architect, IT architect, Security officer, Requirements engineer, Software quality manager, Test manager, Project manager, Product manager and Security architect. Typically, these roles are distributed over geographic and organisational boundaries. Therefore, appropriate measures need to be taken to coordinate the heterogeneous tasks of the various roles in quality engineering and to consolidate and synchronize the data and information necessary to fulfill the tasks, and make them available to each actor in an appropriate form. Knowledge management Knowledge management plays an important part in quality engineering. The quality engineering knowledge base comprises manifold structured and unstructured data, ranging from code repositories via requirements specifications, standards, test reports and enterprise architecture models to system configurations and runtime logs. Software and system models play an important role in mapping this knowledge. The data of the quality engineering knowledge base are generated, processed and made available both manually as well as tool-based in a geographically, organisationally and technically distributed context. Of prime importance is the focus on quality assurance tasks, early recognition of risks, and appropriate support for the collaboration of actors. This results in the following requirements for a quality engineering knowledge base: Knowledge is available in a quality as required. Important quality criteria include that knowledge is consistent and up-to-date as well as complete and adequate in terms of granularity in relation to the tasks of the appropriate actors. Knowledge is interconnected and traceable in order to support interaction between the actors and to facilitate analysis of data. Such traceability relates not only to interconnectedness of data across different levels of abstraction (e.g. connection of requirements with the services realizing them) but also to their traceability over time periods, which is only possible if appropriate versioning concepts exist. Data can be interconnected both manually as well as (semi-) automatically. Information has to be available in a form that is consistent with the domain knowledge of the appropriate actors. 
Therefore, the knowledge base has to provide adequate mechanisms for information transformation (e.g. aggregation) and visualization. The RACI concept is an example of an appropriate model for assigning actors to information in a quality engineering knowledge base. In contexts, where actors from different organisations or levels interact with each other, the quality engineering knowledge base has to provide mechanisms for ensuring confidentiality and integrity. Quality engineering knowledge bases offer a whole range of possibilities for analysis and finding information in order to support quality control tasks of actors. Collaborative processes The quality engineering process comprises all tasks carried out manually and in a (semi-)automated way to identify, fulfil and measure any quality features in a chosen context. The process is a highly collaborative one in the sense that it requires interaction of actors, widely acting independently from each other. The quality engineering process has to integrate any existing sub-processes that may comprise highly structured processes such as IT service management and processes with limited structure such as agile software development. Another important aspect is change-driven procedure, where change events, such as changed requirements are dealt with in the local context of information and actors affected by such change. A pre-requisite for this is methods and tools, which support change propagation and change handling. The objective of an efficient quality engineering process is the coordination of automated and manual quality assurance tasks. Code review or elicitation of quality objectives are examples of manual tasks, while regression tests and the collection of code metrics are examples for automatically performed tasks. The quality engineering process (or its sub-processes) can be supported by tools such as ticketing systems or security management tools. See also Index of quality engineering articles Seven Basic Tools of Quality Engineering management Manufacturing engineering Mission assurance Systems engineering W. Edwards Deming Associations American Society for Quality INFORMS Institute of Industrial Engineers External links Txture is a tool for textual IT-Architecture documentation and analysis. mbeddr is a set of integrated and extensible languages for embedded software engineering, plus an integrated development environment (IDE). qeunit.com is a blog on QE matters TMAP.net is the body of knowledge from Sogeti References Enterprise architecture Engineering disciplines Systems engineering Software quality Knowledge management Quality Quality engineering
Quality engineering
[ "Engineering" ]
1,610
[ "Systems engineering", "Quality engineering", "nan" ]
7,800,566
https://en.wikipedia.org/wiki/Surface%20photovoltage
Surface photovoltage (SPV) measurements are a widely used method to determine the minority carrier diffusion length of semiconductors. Since the transport of minority carriers determines the behavior of the p-n junctions that are ubiquitous in semiconductor devices, surface photovoltage data can be very helpful in understanding their performance. As a contactless method, SPV is a popular technique for characterizing poorly understood compound semiconductors where the fabrication of ohmic contacts or special device structures may be difficult. Theory As the name suggests, SPV measurements involve monitoring the potential of a semiconductor surface while generating electron-hole pairs with a light source. The surfaces of semiconductors are often depletion regions (or space charge regions) where a built-in electric field due to defects has swept out mobile charge carriers. A reduced carrier density means that the electronic energy band of the majority carriers is bent away from the Fermi level. This band-bending gives rise to a surface potential. When a light source creates electron-hole pairs deep within the semiconductor, they must diffuse through the bulk before reaching the surface depletion region. The photogenerated minority carriers have a shorter diffusion length than the much more numerous majority carriers, with which they can radiatively recombine. The change in surface potential upon illumination is therefore a measure of the ability of minority carriers to reach the surface, namely the minority carrier diffusion length. As always in diffusive processes, the diffusion length is approximately related to the lifetime by the expression L = √(Dτ), where D is the diffusion coefficient. The diffusion length is independent of any built-in fields in contrast to the drift behavior of the carriers. Note that the photogenerated majority carriers will also diffuse towards the surface but their number as a fraction of the thermally generated majority carrier density in a moderately doped semiconductor will be too small to create a measurable photovoltage. Both carrier types will also diffuse towards the rear contact where their collection can confuse interpretation of the data when the diffusion lengths are larger than the film thickness. In a real semiconductor, the measured diffusion length includes the effect of surface recombination, which is best understood through its effect on carrier lifetime: 1/τeff = 1/τbulk + 2S/d, where τeff is the effective carrier lifetime, τbulk is the bulk carrier lifetime, S is the surface recombination velocity and d is the film or wafer thickness. Even for well characterized materials, uncertainty about the value of the surface recombination velocity reduces the accuracy with which the diffusion length can be determined for thinner films. Experimental methods Surface photovoltage measurements are performed by placing a wafer or sheet film of a semiconducting material on a ground electrode and positioning a Kelvin probe a small distance above the sample. The surface is illuminated with light of fixed wavelength in industrial applications or with light whose wavelength is scanned using a monochromator so as to vary the absorption depth of the photons. The deeper in the semiconductor that carrier generation occurs, the fewer the number of minority carriers that will reach the surface and the smaller the photovoltage. On a semiconductor whose spectral absorption coefficient is known, the minority carrier diffusion length can in principle be extracted from a measurement of photovoltage versus wavelength. 
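The two lifetime and diffusion-length relations above can be combined numerically. In the minimal Python sketch below, the 2S/d form of the surface term and all parameter values are illustrative assumptions for a silicon-like wafer, not data from any particular measurement.

```python
import math

def effective_lifetime(tau_bulk, S, d):
    """Effective minority-carrier lifetime of a wafer of thickness d with recombination velocity S at both surfaces."""
    return 1.0 / (1.0 / tau_bulk + 2.0 * S / d)

def diffusion_length(D, tau):
    """Diffusion length L = sqrt(D * tau)."""
    return math.sqrt(D * tau)

# Illustrative numbers for a silicon-like wafer (placeholders, not measured values):
tau_b = 100e-6     # bulk lifetime, 100 microseconds
S = 100.0          # surface recombination velocity, cm/s
d = 0.03           # wafer thickness, 300 micrometres expressed in cm
D = 30.0           # minority-carrier diffusion coefficient, cm^2/s

tau_eff = effective_lifetime(tau_b, S, d)
print(f"tau_eff = {tau_eff*1e6:.1f} us, L = {diffusion_length(D, tau_eff)*1e4:.0f} um")
# Surface recombination shortens the effective lifetime (here from 100 us to 60 us) and hence L.
```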
The optical properties of a novel semiconductor may not be well known or may not be homogeneous across the sample. The temperature of the semiconductor must be carefully controlled during an SPV measurement lest thermal drift complicate the comparison of different samples. Typically SPV measurements are done in an AC-coupled fashion using a chopped light source rather than a vibrating Kelvin probe. Significance The minority carrier diffusion length is critical in determining the performance of devices such as photoconducting detectors and bipolar transistors. In both cases the ratio of the diffusion length to the device dimensions determines the gain. In photovoltaic devices, photodiodes and field-effect transistors, the drift behavior due to built-in fields is more important under typical conditions than the diffusive behavior. Even so, SPV is a convenient method of measuring the density of impurity-derived recombination centers that limit device performance. SPV is performed both as an automated and routine test of material quality in a production environment and as an experimental tool to probe the behavior of less well studied semiconducting materials. Time-resolved photoluminescence is an alternate contactless method of determining minority carrier transport properties. See also Kelvin probe force microscope Photo-reflectance Scanning Kelvin probe References External links Freiberg Instruments vendor of industrial and scientific SPV and Minority Carrier Lifetime measurement systems Semilab vendor of commercial SPV and Minority Carrier Lifetime measurement systems KP Technology vendors of and consultants about Kelvin probes ASTM standard F391-96 "Standard Test Methods for Minority Carrier Diffusion Length in Extrinsic Semiconductors by Measurement of Steady-State Surface Photovoltage" Semiconductor analysis Condensed matter physics
Surface photovoltage
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
983
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
7,801,429
https://en.wikipedia.org/wiki/Reproductive%20suppression
Reproductive suppression is the prevention or inhibition of reproduction in otherwise healthy adult individuals. It occurs in birds, mammals, and social insects. It is sometimes accompanied by cooperative breeding. It is maintained by behavioral mechanisms such as aggression, and physiological mechanisms such as pheromone signalling. In evolutionary terms, it may be explained by the theory of inclusive fitness. Overview Reproductive suppression is the prevention or inhibition of reproduction in otherwise healthy adult individuals. It includes delayed sexual maturation (puberty) or inhibition of sexual receptivity, facultatively increased interbirth interval through delayed or inhibited ovulation or spontaneous or induced abortion, abandonment of immature and dependent offspring, mate guarding, selective destruction and worker policing of eggs in some eusocial insects or cooperatively breeding birds, and infanticide of the offspring of subordinate females either by directly killing by dominant females or males in mammals or indirectly through the withholding of assistance with infant care in marmosets and some carnivores. The reproductive suppression model holds that "females can optimize their lifetime reproductive success by suppressing reproduction when future (physical or social) conditions for the survival of offspring are likely to be greatly improved over present ones”. When intragroup competition is high it may be beneficial to suppress the reproduction of others, and for subordinate females to suppress their own reproduction until a later time when social competition is reduced. This leads to reproductive skew within a social group, with some individuals having more offspring than others. The cost of reproductive suppression to the individual is lowest at the earliest stages of a reproductive event and reproductive suppression is often easiest to induce at the pre-ovulatory or earliest stages of pregnancy in mammals, and greatest after a birth. Therefore, neuroendocrine cues for assessing reproductive success should evolve to be reliable at early stages in the ovulatory cycle. Reproductive suppression occurs in its most extreme form in eusocial insects such as termites, paper wasps and honeybees, and in a mammal, the naked mole rat, which depend on a complex division of labor within the group for survival and in which specific genes, epigenetics and other factors determine whether individuals will permanently be unable to breed or able to reach reproductive maturity under particular social conditions. It is found, too, among cooperatively breeding fish; cooperatively breeding birds such as some woodpeckers; and mammals in which a breeding pair depends on helpers whose reproduction is suppressed. In eusocial and cooperatively breeding animals, most non-reproducing helpers engage in kin selection, enhancing their own inclusive fitness by ensuring the survival of offspring they are closely related to. Wolf packs suppress subordinate breeding. Pre-fertilization Environmental cues Food shortage Female mammals experience delays in the onset of puberty or increase their interbirth intervals in response to environmental conditions that are associated with low abundance and poor quality of foods. Nutritional stress is apparently linked to the female endocrine system and ovulatory pheromonal cycle. 
For example, Orangutans (Pongo pygmaeus) experience high ketone (due to fat burning) and low estrogen levels during times of low food availability and poor food quality, but low ketone and high estrogen levels that stimulates onset of the ovulation during the fruit masting seasons that occurs about every four years on Sumatra. Since adult female orangutans are solitary, interference by other females can be ruled out as influencing their temporary reproductive suppression and facultatively long interbirth intervals. Polygynandrous yellow baboons (Papio cynocephalus) in Kenya were far less likely to ovulate or conceive during periods of drought or extreme heat, especially if they live in large groups, resulting in longer interbirth intervals during periods of nutritional and thermal stress. Population density High population density can lead to reproductive suppression. For example, when high numbers of yellow baboon (Papio cynocephalus) females are simultaneously in estrous at Mikumi National Park, Tanzania, dominant females form coalitions that attack subordinate females, disrupting their reproductive cycle and preventing conception. This places constraints on the number of offspring born in a single generation. Since infant mortality in these baboons depends in part on the number and ages of other infants born into the group, pre-ovulatory females that are most susceptible to stress-induced delay or inhibition of ovulation are the most frequent targets of female coalition attacks. Female attackers are often in advanced stages of pregnancy and have the most to lose if the number of infants in the group reaches an unsustainable level. A similar study of Chacma baboons (Papio ursinus) noted high levels of female-female aggression around the mating season when the number of ovulating females was high (indicated by sexual swellings) and that aggression directed toward suppressing the mating opportunities of ovulating females. Among elephant seals (Mirounga), high neonatal mortality occurs when the number of pups born in a season is high, with deaths resulting from injury and starvation. To counteract loss of their pups elephant seals conceive next year's offspring immediately after giving birth to this year's young, but delay implantation for 4 months. Female striped mice (Rhabdomys pumilio) in monogamous social groups do not experience reproductive suppression, but those living in communally breeding groups with high population density and large numbers of old breeding females do. Social cues Aggression Chronic physiological stress resulting from aggression by dominant toward subordinate individuals is thought to be a major cause of delayed maturation and suppressed ovulation in subordinate individuals of a wide range of species. In response to stress the HPA axis (hypothalamic-pituitary-adrenocortical axis) is activated, producing high concentrations of circulating adrenal glucocorticoids which, when chronically present, negatively influence an animal's health and can lead to reproductive suppression in male marmots (Marmota marmota), and blocks of ovulatory cycles of female talapoin monkeys (Miopithecus talapoin), meerkats (Suricata suricatta) and marmots. In cooperatively breeding meerkats (Suricata suricatta) the dominant breeding female is the oldest and heaviest female who is highly aggressive toward subordinate females during the breeding season and temporarily evicts subordinate females from the group. 
Evicted subordinates suffer repeated attacks during the later stages of the breeding female's pregnancy and showed glucocorticoid levels twice normal. Approximately 3 days after giving birth, subordinate females return to the group to help rear the breeding female's pups, at which time the dominant female is no longer aggressive toward them. By inducing the stress response in subordinates, the breeding female disrupts the ovulatory cycle and prevents mating of the subordinate females, or causes the pregnancies of subordinates to fail if they do occur. If her social group is small the breeding female evicts all subordinate females, but if it is large she targets older females, pregnant females and non-kin. However, reproductive suppression through stress does not apply to many species including orangutans for which subordinate males have lower GC levels than dominant ones, marmosets for which nonbreeding subordinate females were not found to have higher GC levels than dominant breeding females, social carnivores and other animals. However the correlation between glucocorticoid levels and position in the social hierarchy is dubious. Dominant breeding males and females often have higher glucocorticoid levels among the dwarf mongoose (Helogale parvula), African wild dogs (Lycaon pictus) and gray wolves (Canis lupus) in which older and heavier individuals make up the breeding individuals in the group with younger individuals helping to raise their pups. Pheromones Pheromonal suppression of subordinate male and female reproduction is the mechanism by which dominant breeding pairs suppress the reproduction of non-breeders in communally and cooperatively breeding species, especially in the absence of stress. Wolves (Canis lupus), coyotes (Canis latrons) and African wild dogs (Lycaon pictus) live in packs with a dominant breeding pair that does most of the territorial urine scent marking in the group. During the breeding season, the dominant male urinates over the dominant female's urine, possibly to hide the female's reproductive status from other males. Pheromones in urine are implicated as a possible mechanism to shut down the subordinate's reproductive cycle. Only the dominant pair mate and have young, producing one litter per year. Subordinate females either do not exhibit estrous hormonal changes or they occur irregularly. Occasionally a subordinate animal leaves the pack and forms a new mating pair when it then exhibits of the territorial urine marking and mating behavior that was suppressed when living in the presence of the dominant breeding pair. Yellow, banded and dwarf mongooses live in families similar to wolf packs with scent marking done almost exclusively by the dominant pair. Dwarf mongooses show synchronous estrous but only the dominant female regularly gives birth. Subordinate females do not conceive or abort early, but do lactate after the dominant female gives birth and participate in communal nursing. When the subordinate females leave the parental pack they will become fully reproductively active in new groups they help establish. In captivity, more than one dwarf mongoose in a group can give birth but the young of subordinates do not survive. However, if the size of their cage is increased and density is lower, reproductive activity of subordinates resumes. As in wolves, pheromonal control over the reproductive cycle of the mongoose is implicated as the mechanism for reproductive suppression. 
The house mouse (Mus musculus) is territorial with a single dominant male marking its territorial boundary with urine while subordinate males do not urine mark. A female's rate of urine marking is elevated in response to a dominant male's urine marking, and his urine contains pheromones that accelerate her ovulation. The presence of a dominant adult male and his urine are crucial for young females to initiate puberty. Females isolated from males often do not experience puberty or have unstable ovulatory cycles. Contact with the urine of a dominant female signals a young prepubescent female to delay the onset of puberty. When housed with only an adult male, female mice reach puberty within 25 days, but puberty will not occur until 40–50 days if they are housed with adult females. Being grouped with other females suppresses the young female's serum Luteinizing hormone and prolactin. Exposure to the male elevates their luteinizing hormone levels followed by increased serum estradiol. Female mice have a short ovulatory cycle when housed with males and a long cycle when housed with only other females. Queen pheromones are among the most important chemical messages regulating insect societies but are yet to be isolated. in natural colonies of the Saxon wasp, Dolichovespula saxonica, queens emit reliable chemical cues of their true fertility, but as the queen's signals decrease incidences of worker reproduction increases. In colonies of Bombus terrestris, the queen can control oogenesis and worker egg laying by regulating concentrations of juvenile hormone (JH) in workers. This is likely achieved through queen pheromones. Social self-inhibition Game theory suggests that subordinates suppress their own reproduction to avoid the social cost imposed on them if they reproduce, including expulsion from the group and infanticide against their progeny by dominant breeding individuals. Forced dispersal from the group may be extremely dangerous, involving high levels of predation and difficulty finding food and sleeping sites. In the absence of stress or pheromones as a mechanism, a "self-inhibition" model was proposed that suggests subordinates can choose to inhibit their own reproduction in cooperatively breeding groups in order to remain flexible and share in the group's productivity. When group members are closely related, independent breeding opportunities are poor, and detection of mating and pregnancy is high, subordinates choose to suppress their own reproduction. The self-inhibition model has been applied to female common marmosets (Callithrix jacchus). When they are introduced into a new group, female marmosets are immediately subordinate, experience a drop in chrorionic gonadotrophin (CG) and within the first four days and stops ovulating. Subordinate female ovaries are smaller than dominants and secrete little estrogen. Older female marmosets are less subject to ovulatory self-restraint and the scent of an unfamiliar dominant female does not affect their ovulation. However, continued exposure to the dominant female may represent a social, rather than biological, cue that causes the subordinates to shut down their ovulation. Mate guarding Limiting access to reproductive partners through mate guarding is a form of reproductive suppression. In the banded mongoose (Mungos mungo) the oldest males have the most offspring. Yet mating in this species is highly promiscuous. 
The older males increase their reproductive success by mate guarding the oldest most fertile females since they cannot guard all the females in the group. Therefore, successfully breeding older males are highly selective in their mate guarding and exert mate selection. Mate guarding is also a mechanism of reproductive suppression of subordinates in mandrills (Mandrillus sphinx) in which males over the age of 8 mate-guard, with alpha males performing 94% of the mate-guarding and being responsible for siring 69% of the offspring. Eusociality Ovary activation in honeybee workers is inhibited by queen pheromones. In the Cape honeybee social parasites that are of the worker caste enter the colony and kill the resident queen, activate their own ovaries and parthenogenetically produce diploid female offspring (thelytoky) – behaviors linked to a single locus on chromosome 13. The parasites produced queen-like pheromones falsely signally the presence of a queen suppressing the reproduction of worker's native to the colony and policing or destroying these eggs. Alternative slicing of a gene homologous to the gemini transcription factor of Drosophila controls worker sterility. Knocking out an exon results in rapid worker ovary activation. A 9 nucleotide deletion from the normal altruistic worker genome may turn a honeybee into a parasite. As for many eusocial insects, termite workers are totipotent and rarely produce offspring although they are capable of reproduction when the breeding pair dies. Researchers hypothesized that the gene Neofem2, which is involved in cockroach communication, plays a critical role in queen-worker communication. They experimentally took the king or queen out of the colony. Some workers displayed pre-mating head butting behavior and can go on to become reproductive while others did not. However, when the Neofem2 gene was suppressed butting behavior did not occur, indicating that the gene is essential in allowing the queen to suppress worker reproduction. Neofem2 may play a role in cockroach egg pheromones. Naked mole-rats resemble eusocial insects in becoming reversible nonbreeders at birth. Underground-dwelling mole-rats having an elaborate division of labor, high densities as colonies mature, high levels of inbreeding and extreme permanent reproductive despotism. Helping to rear the dominant females young may be the only way they can ensure an individual's survival. Post-fertilization Destruction of eggs Destroying eggs such that only one female's eggs survive is documented in cooperatively breeding birds such as acorn woodpeckers (Melanerpes formicivorus). About 25% of acorn woodpecker groups contain two or more joint-nesting females that cooperate to raise young, but cooperative female destroy each other's eggs prior to laying their own. These eggs are eaten by other members of the group, including the mother. In Cape honey bees (Apis mellifera capensis), the multiple mating of the queen and existence of half-siblings creates sub-families within the colony. Sometimes the queen fails to control worker reproduction. In response workers remove eggs laid by other workers, leading to less than 1% of offspring being worker-laid. Worker policing of worker eggs occurs in the European hornet (Vespa crabo) honeybees and wasps. Spontaneous abortion Social stress and resource limitation can lead to high rates of spontaneous abortion as a mechanism of reproductive suppression. 
In meerkats, subordinate females that have conceived prior to being evicted from the group by the breeding female during the breeding season fail to carry their pregnancies as a result of increased stress (higher GC levels) and reduced access to food and other resources. Evicted females regularly copulate with males from neighboring groups. Subordinate females have low conception rates and increased abortion rates in response to low food availability and high predation risk. Dwarf mongooses show synchronous estrous but only the dominant female regularly bears young. If subordinate females do manage to conceive, they abort the pregnancy. Infanticide Infanticide is a mechanism of reproductive success in many species. Alpha female dwarf mongooses and African wild dogs kill offspring other than their own, but alpha males do not. This may be because the breeding alpha may have sired a rare subordinate's offspring. Subordinate females time their pregnancies such that they give birth several days after the alpha female, to reduce the risk of infanticide. Pregnant subordinate meerkats kill the pups of the dominant breeding female. No dominant female pups have ever been observed killed after their 4th day of life. Evicted subordinate females are kept out for 3 days after the breeding female gave birth. Eight infanticides involving the killing of a subordinate female's offspring by the breeding female have been observed in wild common marmosets, as well as two instances of infanticide in which the subordinate female bred and then killed the breeding female's babies. Infanticidal marmoset females are often in the last trimester of pregnancy, but postpartum females do not commit infanticide. References Further reading Behavioral ecology Reproduction in animals Sociobiology
Reproductive suppression
[ "Biology" ]
3,774
[ "Reproduction in animals", "Behavior", "Reproduction", "Behavioral ecology", "Behavioural sciences", "Sociobiology", "Ethology" ]
7,803,002
https://en.wikipedia.org/wiki/Marchetti%27s%20constant
Marchetti's constant is the average time spent by a person for commuting each day. Its value is approximately one hour, or half an hour for a one-way trip. It is named after Italian physicist Cesare Marchetti, though Marchetti himself attributed the "one hour" finding to transportation analyst and engineer Yacov Zahavi. Marchetti posits that although forms of urban planning and transport may change, and although some live in villages and others in cities, people gradually adjust their lives to their conditions (including location of their homes relative to their workplace) such that the average travel time stays approximately constant. Ever since Neolithic times, people have kept the average time spent per day for travel the same, even though the distance may increase due to the advancements in the means of transportation. In his 1934 book Technics and Civilization, Lewis Mumford attributes this observation to Bertrand Russell: Mr. Bertrand Russell has noted that each improvement in locomotion has increased the area over which people are compelled to move: so that a person who would have had to spend half an hour to walk to work a century ago must still spend half an hour to reach his destination, because the contrivance that would have enabled him to save time had he remained in his original situation now—by driving him to a more distant residential area—effectually cancels out the gain. A related concept is that of Zahavi, who also noticed that people seem to have a constant "travel time budget", that is, "a stable daily amount of time that people make available for travel." David Metz, former chief scientist at the Department of Transport, UK, cites data of average travel time in Britain drawn from the British National Travel Survey in support of Marchetti's and Zahavi's conclusions. The work casts doubt on the contention that investment in infrastructure saves travel time. Instead, it appears from Metz's figures that people invest travel time saved in travelling a longer distance, a particular example of Jevons paradox described by the Lewis–Mogridge position. Because of the constancy of travel times as well as induced travel, Robert Cervero has argued that the World Bank and other international aid agencies evaluate transportation investment proposals in developing and rapidly motorizing cities less on the basis of potential travel-time savings and more on the accessibility benefits they confer. See also Braess's paradox Urban sprawl References Urban planning Urban geography Commuting Transport economics Transportation planning Sustainable transport
Marchetti's constant
[ "Physics", "Engineering" ]
510
[ "Physical systems", "Transport", "Sustainable transport", "Urban planning", "Architecture" ]
7,803,747
https://en.wikipedia.org/wiki/S-knot
In loop quantum gravity, an s-knot is an equivalence class of spin networks under diffeomorphisms. In this formalism, s-knots represent the quantum states of the gravitational field. External links Living Reviews in Relativity: Loop Quantum Gravity: Diffeomorphism invariance Loop quantum gravity
S-knot
[ "Physics" ]
65
[ "Quantum mechanics", "Quantum physics stubs" ]
18,103,572
https://en.wikipedia.org/wiki/Multiplicative%20number%20theory
Multiplicative number theory is a subfield of analytic number theory that deals with prime numbers and with factorization and divisors. The focus is usually on developing approximate formulas for counting these objects in various contexts. The prime number theorem is a key result in this subject. The Mathematics Subject Classification for multiplicative number theory is 11Nxx. Scope Multiplicative number theory deals primarily in asymptotic estimates for arithmetic functions. Historically the subject has been dominated by the prime number theorem, first by attempts to prove it and then by improvements in the error term. The Dirichlet divisor problem that estimates the average order of the divisor function d(n) and Gauss's circle problem that estimates the average order of the number of representations of a number as a sum of two squares are also classical problems, and again the focus is on improving the error estimates. The distribution of prime numbers among residue classes modulo an integer is an area of active research. Dirichlet's theorem on primes in arithmetic progressions shows that there are infinitely many primes in each co-prime residue class, and the prime number theorem for arithmetic progressions shows that the primes are asymptotically equidistributed among the residue classes. The Bombieri–Vinogradov theorem gives a more precise measure of how evenly they are distributed. There is also much interest in the size of the smallest prime in an arithmetic progression; Linnik's theorem gives an estimate. The twin prime conjecture, namely that there are infinitely many primes p such that p+2 is also prime, is the subject of active research. Chen's theorem shows that there are infinitely many primes p such that p+2 is either prime or the product of two primes. Methods The methods belong primarily to analytic number theory, but elementary methods, especially sieve methods, are also very important. The large sieve and exponential sums are usually considered part of multiplicative number theory. The distribution of prime numbers is closely tied to the behavior of the Riemann zeta function and the Riemann hypothesis, and these subjects are studied both from a number theory viewpoint and a complex analysis viewpoint. Standard texts A large part of analytic number theory deals with multiplicative problems, and so most of its texts contain sections on multiplicative number theory; several well-known texts deal specifically with multiplicative problems. See also Multiplicative combinatorics Additive combinatorics Additive number theory Sum-product phenomenon Analytic number theory
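The flavour of such asymptotic statements is easy to see numerically; the following small Python sketch (illustrative only) counts primes with a sieve of Eratosthenes and compares π(x) with x/log x, whose ratio tends to 1 by the prime number theorem.

```python
import math

def prime_count(n):
    """pi(n): number of primes <= n, via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return sum(sieve)

# The prime number theorem: pi(x) ~ x / log x, so the ratio slowly approaches 1.
for x in (10**3, 10**5, 10**6):
    pi_x = prime_count(x)
    print(x, pi_x, round(pi_x / (x / math.log(x)), 4))
```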
Multiplicative number theory
[ "Mathematics" ]
522
[ "Analytic number theory", "Number theory" ]
18,104,802
https://en.wikipedia.org/wiki/Complementarity%20theory
A complementarity problem is a type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a function of two vector variables subject to certain requirements (constraints) which include: that the inner product of the two vectors must equal zero, i.e. they are orthogonal. In particular for finite-dimensional real vector spaces this means that, if one has vectors X and Y with all nonnegative components (xi ≥ 0 and yi ≥ 0 for all i: in the first quadrant if 2-dimensional, in the first octant if 3-dimensional), then for each pair of components xi and yi one of the pair must be zero, hence the name complementarity. For example, X = (1, 0) and Y = (0, 2) are complementary, but X = (1, 1) and Y = (2, 0) are not. A complementarity problem is a special case of a variational inequality. History Complementarity problems were originally studied because the Karush–Kuhn–Tucker conditions in linear programming and quadratic programming constitute a linear complementarity problem (LCP) or a mixed complementarity problem (MCP). In 1963 Lemke and Howson showed that, for two-person games, computing a Nash equilibrium point is equivalent to an LCP. In 1968 Cottle and Dantzig unified linear and quadratic programming and bimatrix games. Since then the study of complementarity problems and variational inequalities has expanded enormously. Areas of mathematics and science that contributed to the development of complementarity theory include: optimization, equilibrium problems, variational inequality theory, fixed point theory, topological degree theory and nonlinear analysis. See also Mathematical programming with equilibrium constraints nl format for representing complementarity problems References Further reading Collections External links CPNET:Complementarity Problem Net Mathematical optimization Functional analysis Topology
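The componentwise condition is straightforward to test in code; the minimal Python sketch below checks nonnegativity and the complementarity condition for the two example pairs given above.

```python
import numpy as np

def is_complementary(x, y, tol=1e-12):
    """Check the complementarity conditions: x >= 0, y >= 0 and x_i * y_i = 0 for every i."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return bool(np.all(x >= -tol) and np.all(y >= -tol) and np.all(np.abs(x * y) <= tol))

print(is_complementary([1, 0], [0, 2]))   # True: in each coordinate one of the two entries is zero
print(is_complementary([1, 1], [2, 0]))   # False: x_1 * y_1 = 2, so the inner product is nonzero
```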
Complementarity theory
[ "Physics", "Mathematics" ]
394
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Topology", "Space", "Mathematical relations", "Geometry", "Mathematical optimization", "Spacetime" ]
2,421,876
https://en.wikipedia.org/wiki/Monod%E2%80%93Wyman%E2%80%93Changeux%20model
In biochemistry, the Monod–Wyman–Changeux model (MWC model, also known as the symmetry model or concerted model) describes allosteric transitions of proteins made up of identical subunits. It was proposed by Jean-Pierre Changeux in his PhD thesis, and described by Jacques Monod, Jeffries Wyman, and Jean-Pierre Changeux. It contrasts with the sequential model and substrate presentation. The concept of two distinct symmetric states is the central postulate of the MWC model. The main idea is that regulated proteins, such as many enzymes and receptors, exist in different interconvertible states in the absence of any regulator. The ratio of the different conformational states is determined by thermal equilibrium. This model is defined by the following rules: An allosteric protein is an oligomer of protomers that are symmetrically related (for hemoglobin, we shall assume, for the sake of algebraic simplicity, that all four subunits are functionally identical). Each protomer can exist in (at least) two conformational states, designated T and R; these states are in equilibrium whether or not ligand is bound to the oligomer. The ligand can bind to a protomer in either conformation. Only the conformational change alters the affinity of a protomer for the ligand. The regulators merely shift the equilibrium toward one state or another. For instance, an agonist will stabilize the active form of a pharmacological receptor. Phenomenologically, it looks as if the agonist provokes the conformational transition. One crucial feature of the model is the dissociation between the binding function (the fraction of protein bound to the regulator), and the state function (the fraction of protein under the activated state), cf below. In "induced-fit" models, those functions are identical. In the historical model, each allosteric unit, called a protomer (generally assumed to be a subunit), can exist in two different conformational states – designated 'R' (for relaxed) or 'T' (for tense) states. In any one molecule, all protomers must be in the same state. That is to say, all subunits must be in either the R or the T state. Proteins with subunits in different states are not allowed by this model. The R state has a higher affinity for the ligand than the T state. Because of that, although the ligand may bind to the subunit when it is in either state, the binding of a ligand will increase the equilibrium in favor of the R state. Two equations can be derived that express the fractional occupancy of the ligand binding site (Ȳ) and the fraction of the proteins in the R state (R̄): Ȳ = [α(1 + α)^(n−1) + Lcα(1 + cα)^(n−1)] / [(1 + α)^n + L(1 + cα)^n] and R̄ = (1 + α)^n / [(1 + α)^n + L(1 + cα)^n], where L is the allosteric constant, that is the ratio of proteins in the T and R states in the absence of ligand, c is the ratio of the affinities of R and T states for the ligand, α is the normalized concentration of ligand, and n is the number of ligand-binding protomers. It is not immediately obvious that the expression for Ȳ is a form of the Adair equation, but in fact it is, as one can see by multiplying out the expressions in parentheses and comparing the coefficients of powers of α with corresponding coefficients in the Adair equation. This model explains sigmoidal binding properties (i.e. positive cooperativity) as change in concentration of ligand over a small range will lead to a large increase in the proportion of molecules in the R state, and thus will lead to a high association of the ligand to the protein. It cannot explain negative cooperativity. 
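A small numerical sketch makes the sigmoidal behaviour and the binding/state distinction concrete; the parameter values below (n = 4, L = 1000, c = 0.01) are illustrative, hemoglobin-like choices rather than fitted constants.

```python
def mwc_binding(alpha, n=4, L=1000.0, c=0.01):
    """MWC binding function Y-bar and state function R-bar for normalized ligand concentration alpha."""
    denom = (1 + alpha)**n + L * (1 + c * alpha)**n
    Y = (alpha * (1 + alpha)**(n - 1) + L * c * alpha * (1 + c * alpha)**(n - 1)) / denom
    R = (1 + alpha)**n / denom
    return Y, R

for alpha in (0.1, 1.0, 3.0, 10.0, 100.0):
    Y, R = mwc_binding(alpha)
    print(f"alpha={alpha:>6}: Y={Y:.3f}  R_state_fraction={R:.3f}")
# Y rises sigmoidally with alpha, and the binding (Y) and state (R) functions are not identical.
```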
The MWC model proved very popular in enzymology, and pharmacology, although it has been shown inappropriate in a certain number of cases. The best example of a successful application of the model is the regulation of hemoglobin function. Extensions of the model have been proposed for lattices of proteins by various authors. Edelstein argued that the MWC model gave a better account of the data for hemoglobin than the sequential model could do. He and Changeux applied the model to signal transduction. Changeux has discussed the status of the model after 50 years. See also Sequential model References Protein structure
Monod–Wyman–Changeux model
[ "Chemistry" ]
873
[ "Protein structure", "Structural biology" ]
2,422,001
https://en.wikipedia.org/wiki/ISACA
ISACA is an international professional association focused on IT (information technology) governance. On its IRS filings, it is known as the Information Systems Audit and Control Association, although ISACA now goes by its acronym only. ISACA currently offers 8 certification programs, as well as other micro-certificates. History ISACA originated in United States in 1967, when a group of individuals working on auditing controls in computer systems started to become increasingly critical of the operations of their organizations. They identified a need for a centralized source of information and guidance in the field. In 1969, Stuart Tyrnauer, an employee of the (later) Douglas Aircraft Company, incorporated the group as the EDP Auditors Association (EDPAA). Tyrnauer served as the body's founding chairman for the first three years. In 1976 the association formed an education foundation to undertake large-scale research efforts to expand the knowledge of and value accorded to the fields of governance and control of information technology. The association became the Information Systems Audit and Control Association in 1994. the organization had dropped its long title and branded itself as ISACA. In March 2016, ISACA bought the CMMI Institute, which is behind the Capability Maturity Model Integration. In January 2020, ISACA updated and refreshed its look and digital presence, introducing a new logo. Current status ISACA currently serves more than 170,000 constituents (members and professionals holding ISACA certifications) in more than 180 countries. The job titles of members are such as IS auditor, consultant, educator, IS security professional, regulator, chief information officer, chief information security officer and internal auditor. They work in nearly all industry categories. There is a network of ISACA chapters with more than 225 chapters established in over 180 countries. Chapters provide education, resource sharing, advocacy, networking and other benefits. Major publications COBIT ISACA Framework Frameworks, Standards and Models Blockchain Framework and Guidance Risk IT Framework IT Audit Framework - (ITAF™): A Professional Practices Framework for IT Audit, 4th Edition Business Model for Information Systems (BMIS) Capability Maturity Model Integrated(CMMI) Information System Control Journal Insights and Expertise Audit Programs and tools Publications - over 200 professional publications and Guidance on Audit & Assurance, Emerging Technology, Governance, Information Security, Information Technology, Privacy, Risk. 
Some of the topics include: Artificial Intelligence Blockchain Certification Exam Prep Guides for CISA, CRISC, CISM, CGEIT, CDPSE, CET and several Certificate Courses Cloud Computing COBIT Compliance Cybersecurity Data Governance Data Science Internet of Things Network Infrastructure Software Development Threats and Controls Vendor Management Young Professionals White Papers - Over 200 white papers on a range of contemporary topics News and Trends Certifications Certified Information Systems Auditor (CISA, 1978) Certified Information Security Manager (CISM, 2002) Certified in the Governance of Enterprise IT (CGEIT, 2007) Certified in Risk and Information Systems Control (CRISC, 2010) Cybersecurity Practitioner Certification (CSX-P, 2015) Certified Data Privacy Solutions Engineer (CDPSE, 2020) Information Technology Certified Associate (ITCA, 2021) Certified in Emerging Technology (CET, 2021) The CSX-P, ISACA's first cybersecurity certification, was introduced in the summer of 2015. It is one of the few certifications that require the individual to work in a live environment, with real problems, in order to become certified. Specifically, the exam puts test takers in a live network with a real incident taking place. The candidate's efforts to respond to the incident and fix the problem determine the score awarded. Certificates IT Audit Fundamentals Certificate IT Risk Fundamentals Certificate Certificate of Cloud Auditing Knowledge Cybersecurity Audit Certificate Computing Fundamentals Certificate Networks and Infrastructure Fundamentals Certificate Cybersecurity Fundamentals Certificate Software Development Fundamentals Certificate Data Science Fundamentals Certificate Cloud Fundamentals Certificate Blockchain Fundamentals Certificate IoT Fundamentals Certificate Artificial Intelligence Fundamentals Certificate COBIT Design and Implementation Implementing the NIST Cybersecurity Framework Using COBIT 2019 COBIT Foundation COBIT 5 Certificates See also Information assurance Information Security Information security management system IT risk Risk IT Framework COBIT Committee of Sponsoring Organizations of the Treadway Commission (COSO) (ISC)² Information Systems Security Association List of international professional associations IAPP References External links ISACA official webpage Official ISACA CSX webpage Knowledge Management and Organizational Learning (Framework) Information technology organizations Computer security organizations Auditing organizations Organizations established in 1967 Professional accounting bodies
ISACA
[ "Technology" ]
922
[ "Information technology", "Information technology organizations" ]
2,423,712
https://en.wikipedia.org/wiki/Magic%20acid
Magic acid (FSO3H·SbF5) is a superacid consisting of a mixture, most commonly in a 1:1 molar ratio, of fluorosulfuric acid (HSO3F) and antimony pentafluoride (SbF5). This conjugate Brønsted–Lewis superacid system was developed in the 1960s by Ronald Gillespie and his team at McMaster University, and has been used by George Olah to stabilise carbocations and hypercoordinated carbonium ions in liquid media. Magic acid and other superacids are also used to catalyze isomerization of saturated hydrocarbons, and have been shown to protonate even weak bases, including methane, xenon, halogens, and molecular hydrogen. History The term "superacid" was first used in 1927 when James Bryant Conant found that perchloric acid could protonate ketones and aldehydes to form salts in nonaqueous solution. The term itself was coined by R. J. Gillespie, after Conant combined sulfuric acid with fluorosulfuric acid, and found the solution to be several million times more acidic than sulfuric acid alone. The magic acid system was developed in the 1960s by Ronald Gillespie, and was used to study stable carbocations. Gillespie also used the acid system to generate electron-deficient inorganic cations. The name originated after a Christmas party in 1966, when a member of the Olah lab placed a paraffin candle into the acid, and found that it dissolved quite rapidly. Examination of the solution with 1H-NMR showed a tert-butyl cation, suggesting that the paraffin chain that forms the wax had been cleaved, then isomerized into the relatively stable tertiary carbocation. The name appeared in a paper published by the Olah lab. Properties Structure Although a 1:1 molar ratio of HSO3F and SbF5 best generates carbonium ions, the effects of the system at other molar ratios have also been documented. When the SbF5 : HSO3F ratio is less than 0.2, the following two equilibria, determined by 19F NMR spectroscopy, are the most prominent in solution. (In both of the structures involved, the sulfur has tetrahedral coordination, not planar. The double bonds between sulfur and oxygen are more properly represented as single bonds, with formal negative charges on the oxygen atoms and a formal plus two charge on the sulfur. The antimony atoms will also have a formal charge of minus one.) Of these, Equilibrium I accounts for 80% of the NMR data, while Equilibrium II accounts for about 20%. As the ratio of the two compounds increases from 0.4–1.4, new NMR signals appear and increase in intensity with increasing concentrations of SbF5. The resolution of the signals decreases as well, because of the increasing viscosity of the liquid system. Strength All proton-producing acids stronger than 100% sulfuric acid are considered superacids, and are characterized by low values of the Hammett acidity function. For instance, sulfuric acid, H2SO4, has a Hammett acidity function, H0, of −12; perchloric acid, HClO4, has a Hammett acidity function of −13; and that of the 1:1 magic acid system, HSO3F·SbF5, is −23. Fluoroantimonic acid, the strongest known superacid, is believed to reach extrapolated H0 values down to −28. Uses Observations of stable carbocations Magic acid has low nucleophilicity, allowing for increased stability of carbocations in solution. The "classical" trivalent carbocation can be observed in the acid medium, and has been found to be planar and sp2-hybridized. Because the carbon atom is surrounded by only six valence electrons, it is highly electron deficient and electrophilic. 
It is easily described by Lewis dot structures because it contains only two-electron, single bonds to adjacent carbon atoms. Many tertiary cycloalkyl cations can also be formed in superacidic solutions. One such example is the 1-methyl-1-cyclopentyl cation, which is formed from both the cyclopentane and cyclohexane precursor. In the case of the cyclohexane, the cyclopentyl cation is formed from isomerization of the secondary carbocation to the tertiary, more stable carbocation. Cyclopropylcarbenium ions, alkenyl cations, and arenium cations have also been observed. As use of the magic acid system became more widespread, however, higher-coordinate carbocations were observed. Penta-coordinate carbocations, also described as nonclassical ions, cannot be depicted using only two-electron, two-center bonds, and require, instead, two-electron, three (or more) center bonding. In these ions, two electrons are delocalized over more than two atoms, rendering these bond centers so electron deficient that they enable saturated alkanes to participate in electrophilic reactions. The discovery of hypercoordinated carbocations fueled the nonclassical ion controversy of the 1950s and 60s. Due to the slow timescale of 1H-NMR, the rapidly equilibrating positive charges on hydrogen atoms would likely go undetected. However, IR spectroscopy, Raman spectroscopy, and 13C NMR have been used to investigate bridged carbocation systems. One controversial cation, the norbornyl cation, has been observed in several media, magic acid among them. The bridging methylene carbon atom is pentacoordinated, with three two-electron, two-center bonds, and one two-electron, three-center bond with its remaining sp3 orbital. Quantum mechanical calculations have also shown that the classical model is not an energy minimum. Reactions with alkanes Magic acid is capable of protonating alkanes. For instance, methane reacts to form the CH5+ ion at 140 °C and atmospheric pressure, though some hydrocarbon ions of greater molecular weight are also formed as byproducts. Hydrogen gas is another reaction byproduct. In the presence of deuterated magic acid rather than the ordinary acid, methane has been shown to exchange its hydrogen atoms for deuterium atoms, and HD is released rather than H2. This is evidence to suggest that in these reactions, methane is indeed a base, and can accept a proton from the acid medium to form CH5+. This ion is then deprotonated, explaining the hydrogen exchange, or loses a hydrogen molecule to form CH3+, the carbenium ion. This species is quite reactive, and can yield several new carbocations. Larger alkanes, such as ethane, are also reactive in magic acid, and both exchange hydrogen atoms and condense to form larger carbocations, such as protonated neopentane. This ion is then cleaved at higher temperatures, and reacts to release hydrogen gas and form the t-amyl cation at lower temperatures. It is on this note that George Olah suggests we no longer take as synonymous the names "alkane" and "paraffin." The word "paraffin" is derived from the Latin "parum affinis", meaning "lacking in affinity." He says, "It is, however, with some nostalgia that we make this recommendation, as ‘inert gases’ at least maintained their ‘nobility’ as their chemical reactivity became apparent, but referring to ‘noble hydrocarbons’ would seem to be inappropriate." Catalysis with hydroperoxides Magic acid catalyzes cleavage-rearrangement reactions of tertiary hydroperoxides and tertiary alcohols. 
The nature of the experiments used to determine the mechanism, namely the fact that they took place in superacid medium, allowed observation of the carbocation intermediates formed. It was determined that the mechanism depends on the amount of magic acid used. Near molar equivalency, only O–O cleavage is observed, but with increasing excess of magic acid, C–O cleavage competes with O–O cleavage. The excess acid likely deactivates the hydrogen peroxide formed in C–O heterolysis. Magic acid also catalyzes electrophilic hydroxylation of aromatic compounds with hydrogen peroxide, resulting in high-yield preparation of monohydroxylated products. Phenols exist as completely protonated species in superacid solutions, and when produced in the reaction, are then deactivated toward further electrophilic attack. Protonated hydrogen peroxide is the active hydroxylating agent. Catalysis with ozone Oxygenation of alkanes can be catalyzed by a magic acid– solution in the presence of ozone. The mechanism is similar to that of protolysis of alkanes, with an electrophilic insertion into the single σ bonds of the alkane. The hydrocarbon–ozone complex transition state has the form of a penta-coordinated ion. Alcohols, ketones, and aldehydes are oxygenated by electrophilic insertion as well. Safety As with all strong acids, and especially superacids, proper personal protective equipment should be used. In addition to the obligatory gloves and goggles, the use of a faceshield and full-face respirator are also recommended. Predictably, magic acid is highly toxic upon ingestion and inhalation, causes severe skin and eye burns, and is toxic to aquatic life. See also Fluoroantimonic acid, the strongest known superacid References Superacids Antimony(V) compounds Sulfonic acids Fluoro complexes
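Since the Hammett acidity function quoted in the Strength section above is logarithmic, differences in H0 translate roughly into powers of ten in protonating ability. The short sketch below just performs that arithmetic; the rounded H0 values are those quoted above, and the comparison is only an order-of-magnitude reading, not a rigorous thermodynamic statement.

```python
# Order-of-magnitude comparison of acid strength from Hammett H0 values.
# A difference of one H0 unit is read as roughly a tenfold difference.

H0_VALUES = {
    "sulfuric acid (100%)": -12,
    "perchloric acid": -13,
    "magic acid (1:1)": -23,
    "fluoroantimonic acid (extrapolated)": -28,
}

REFERENCE = "sulfuric acid (100%)"

for acid, h0 in H0_VALUES.items():
    factor = 10 ** (H0_VALUES[REFERENCE] - h0)   # more negative H0 => stronger
    print(f"{acid:<38} H0 = {h0:>4}   ~{factor:.0e} x {REFERENCE}")
```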
Magic acid
[ "Chemistry" ]
1,996
[ "Superacids", "Acids", "Functional groups", "Sulfonic acids" ]
2,424,431
https://en.wikipedia.org/wiki/Electrovacuum%20solution
In general relativity, an electrovacuum solution (electrovacuum) is an exact solution of the Einstein field equation in which the only nongravitational mass–energy present is the field energy of an electromagnetic field, which must satisfy the (curved-spacetime) source-free Maxwell equations appropriate to the given geometry. For this reason, electrovacuums are sometimes called (source-free) Einstein–Maxwell solutions. Definition In general relativity, the geometric setting for physical phenomena is a Lorentzian manifold, which is interpreted as a curved spacetime, and which is specified by defining a metric tensor (or by defining a frame field). The Riemann curvature tensor of this manifold and associated quantities such as the Einstein tensor , are well-defined. In general relativity, they can be interpreted as geometric manifestations (curvature and forces) of the gravitational field. We also need to specify an electromagnetic field by defining an electromagnetic field tensor on our Lorentzian manifold. To be classified as an electrovacuum solution, these two tensors are required to satisfy two following conditions The electromagnetic field tensor must satisfy the source-free curved spacetime Maxwell field equations and The Einstein tensor must match the electromagnetic stress–energy tensor, . The first Maxwell equation is satisfied automatically if we define the field tensor in terms of an electromagnetic potential vector . In terms of the dual covector (or potential one-form) and the electromagnetic two-form, we can do this by setting . Then we need only ensure that the divergences vanish (i.e. that the second Maxwell equation is satisfied for a source-free field) and that the electromagnetic stress–energy matches the Einstein tensor. Invariants The electromagnetic field tensor is antisymmetric, with only two algebraically independent scalar invariants, Here, the star is the Hodge star. Using these, we can classify the possible electromagnetic fields as follows: If but , we have an electrostatic field, which means that some observers will measure a static electric field, and no magnetic field. If but , we have an magnetostatic field, which means that some observers will measure a static magnetic field, and no electric field. If , the electromagnetic field is said to be null, and we have a null electrovacuum. Null electrovacuums are associated with electromagnetic radiation. An electromagnetic field which is not null is called non-null, and then we have a non-null electrovacuum. Einstein tensor The components of a tensor computed with respect to a frame field rather than the coordinate basis are often called physical components, because these are the components which can (in principle) be measured by an observer. In the case of an electrovacuum solution, an adapted frame can always be found in which the Einstein tensor has a particularly simple appearance. Here, the first vector is understood to be a timelike unit vector field; this is everywhere tangent to the world lines of the corresponding family of adapted observers, whose motion is "aligned" with the electromagnetic field. The last three are spacelike unit vector fields. For a non-null electrovacuum, an adapted frame can be found in which the Einstein tensor takes the form where is the energy density of the electromagnetic field, as measured by any adapted observer. From this expression, it is easy to see that the isotropy group of our non-null electrovacuum is generated by boosts in the direction and rotations about the axis. 
In other words, the isotropy group of any non-null electrovacuum is a two-dimensional abelian Lie group isomorphic to SO(1,1) x SO(2). For a null electrovacuum, an adapted frame can be found in which the Einstein tensor takes the form From this it is easy to see that the isotropy group of our null electrovacuum includes rotations about the axis; two further generators are the two parabolic Lorentz transformations aligned with the direction given in the article on the Lorentz group. In other words, the isotropy group of any null electrovacuum is a three-dimensional Lie group isomorphic to E(2), the isometry group of the euclidean plane. The fact that these results are exactly the same in curved spacetimes as for electrodynamics in flat Minkowski spacetime is one expression of the equivalence principle. Eigenvalues The characteristic polynomial of the Einstein tensor of a non-null electrovacuum must have the form Using Newton's identities, this condition can be re-expressed in terms of the traces of the powers of the Einstein tensor as where This necessary criterion can be useful for checking that a putative non-null electrovacuum solution is plausible, and is sometimes useful for finding non-null electrovacuum solutions. The characteristic polynomial of a null electrovacuum vanishes identically, even if the energy density is nonzero. This possibility is a tensor analogue of the well known that a null vector always has vanishing length, even if it is not the zero vector. Thus, every null electrovacuum has one quadruple eigenvalue, namely zero. Rainich conditions In 1925, George Yuri Rainich presented purely mathematical conditions which are both necessary and sufficient for a Lorentzian manifold to admit an interpretation in general relativity as a non-null electrovacuum. These comprise three algebraic conditions and one differential condition. The conditions are sometimes useful for checking that a putative non-null electrovacuum really is what it claims, or even for finding such solutions. Analogous necessary and sufficient conditions for a null electrovacuum have been found by Charles Torre. Test fields Sometimes one can assume that the field energy of any electromagnetic field is so small that its gravitational effects can be neglected. Then, to obtain an approximate electrovacuum solution, we need only solve the Maxwell equations on a given vacuum solution. In this case, the electromagnetic field is often called a test field, in analogy with the term test particle (denoting a small object whose mass is too small to contribute appreciably to the ambient gravitational field). Here, it is useful to know that any Killing vectors which may be present will (in the case of a vacuum solution) automatically satisfy the curved spacetime Maxwell equations. Note that this procedure amounts to assuming that the electromagnetic field, but not the gravitational field, is "weak". Sometimes we can go even further; if the gravitational field is also considered "weak", we can independently solve the linearised Einstein field equations and the (flat spacetime) Maxwell equations on a Minkowksi vacuum background. Then the (weak) metric tensor gives the approximate geometry; the Minkowski background is unobservable by physical means, but mathematically much simpler to work with, whenever we can get away with such a sleight-of-hand. 
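As a quick numerical illustration of the eigenvalue structure described above, the sketch below builds the flat-spacetime stress–energy tensor of a purely electric field and checks that it is traceless, with eigenvalues in two equal-and-opposite pairs. The signature (−,+,+,+), the omitted constant factors (such as 1/4π), and the field value are assumptions made for the example, not quantities taken from the article.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-+++)
E = 3.0                                # electric field along x (arbitrary magnitude)

F = np.zeros((4, 4))                   # antisymmetric field tensor F_ab
F[0, 1], F[1, 0] = E, -E

F_up = eta @ F @ eta                   # both indices raised: F^ab
invariant = np.sum(F * F_up)           # F_ab F^ab; negative for an electric-type field

# Electromagnetic stress-energy (constant factors omitted):
#   T_ab = F_ac F_b^c - (1/4) eta_ab F_cd F^cd
T = F @ eta @ F.T - 0.25 * eta * invariant

mixed = eta @ T                        # the operator T^a_b
print("F_ab F^ab   :", invariant)                      # -2 * E**2
print("trace T^a_a :", np.trace(mixed))                # ~0 (traceless)
print("eigenvalues :", np.sort(np.linalg.eigvals(mixed).real))
# -> two eigenvalues -E**2/2 and two eigenvalues +E**2/2 (pairs of +/- energy density)
```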
Examples Noteworthy individual non-null electrovacuum solutions include: Reissner–Nordström electrovacuum (which describes the geometry around a charged spherical mass), Kerr–Newman electrovacuum (which describes the geometry around a charged, rotating object), Melvin electrovacuum (a model of a cylindrically symmetric magnetostatic field), Garfinkle–Melvin electrovacuum (like the preceding, but including a gravitational wave traveling along the axis of symmetry), Bertotti–Robinson electrovacuum: this is a simple spacetime having a remarkable product structure; it arises from a kind of "blow up" of the horizon of the Reissner–Nordström electrovacuum, Witten electrovacuums (discovered by Louis Witten, father of Edward Witten). Noteworthy individual null electrovacuum solutions include: the monochromatic electromagnetic plane wave, an exact solution which is the general relativitistic analogue of the plane waves in classical electromagnetism, Bell–Szekeres electrovacuum (a colliding plane wave model). Some well known families of electrovacuums are: Weyl–Maxwell electrovacuums: this is the family of all static axisymmetric electrovacuum solutions; it includes the Reissner–Nordström electrovacuum, Ernst–Maxwell electrovacuums: this is the family of all stationary axisymmetric electrovacuum solutions; it includes the Kerr–Newman electrovacuum, Beck–Maxwell electrovacuums: all nonrotating cylindrically symmetric electrovacuum solutions, Ehlers–Maxwell electrovacuums: all stationary cylindrically symmetric electrovacuum solutions, Szekeres electrovacuums: all pairs of colliding plane waves, where each wave may contain both gravitational and electromagnetic radiation; these solutions are null electrovacuums outside the interaction zone, but generally non-null electrovacuums inside the interaction zone, due to the non-linear interaction of the two waves after they collide. Many pp-wave spacetimes admit an electromagnetic field tensor turning them into exact null electrovacuum solutions. See also Classification of electromagnetic fields Exact solutions in general relativity Lorentz group References See section 5.4 for the Rainich conditions, section 19.4 for the Weyl–Maxwell electrovacuums, section 21.1 for the Ernst-Maxwell electrovacuums, section 24.5 for pp-waves, section 25.5 for Szekeres electrovacuums, etc. The definitive resource on colliding plane waves, including the examples mentioned above. Exact solutions in general relativity Electromagnetism
Electrovacuum solution
[ "Physics", "Mathematics" ]
1,964
[ "Exact solutions in general relativity", "Electromagnetism", "Physical phenomena", "Mathematical objects", "Equations", "Fundamental interactions" ]
2,424,548
https://en.wikipedia.org/wiki/Tetrameric%20protein
A tetrameric protein is a protein with a quaternary structure of four subunits (tetrameric). Homotetramers have four identical subunits (such as glutathione S-transferase), and heterotetramers are complexes of different subunits. A tetramer can be assembled as a dimer of dimers with two homodimer subunits (such as sorbitol dehydrogenase), or two heterodimer subunits (such as hemoglobin). Subunit interactions in tetramers The interactions between the subunits forming a tetramer are primarily determined by non-covalent interactions. Hydrophobic effects, hydrogen bonds and electrostatic interactions are the primary sources for this binding process between subunits. For homotetrameric proteins such as sorbitol dehydrogenase (SDH), the structure is believed to have evolved from a monomeric to a dimeric and finally a tetrameric structure. The binding process in SDH and many other tetrameric enzymes can be described by the gain in free energy, which can be determined from the rates of association and dissociation. In SDH, the tetramer is assembled from four subunits (A, B, C and D). Hydrogen bonds between subunits Hydrogen bonding networks between subunits have been shown to be important for the stability of the tetrameric quaternary protein structure. For example, a study of SDH which used diverse methods such as protein sequence alignments, structural comparisons, energy calculations, gel filtration experiments and enzyme kinetics experiments revealed an important hydrogen bonding network which stabilizes the tetrameric quaternary structure in mammalian SDH. Tetramers in immunology In immunology, MHC tetramers can be used in tetramer assays to quantify numbers of antigen-specific T cells (especially CD8+ T cells). MHC tetramers are based on recombinant class I molecules that, through the action of bacterial BirA, have been biotinylated. These molecules are folded with the peptide of interest and β2M and tetramerized by a fluorescently labeled streptavidin. (Streptavidin binds to four biotins per molecule.) This tetramer reagent will specifically label T cells that express T cell receptors that are specific for a given peptide-MHC complex. For example, a Kb/FAPGNYPAL tetramer will specifically bind to Sendai virus-specific cytotoxic T cells in a C57BL/6 mouse. Antigen-specific responses can be measured as CD8+, tetramer+ T cells as a fraction of all CD8+ lymphocytes. The reason for using a tetramer, as opposed to a single labeled MHC class I molecule, is that the tetrahedral tetramers can bind to three TCRs at once, allowing specific binding in spite of the low (1 micromolar) affinity of the typical class I-peptide-TCR interaction. MHC class II tetramers can also be made, although these are more difficult to work with practically. Homotetramers and heterotetramers A homotetramer is a protein complex made up of four identical subunits which are associated but not covalently bound. Conversely, a heterotetramer is a 4-subunit complex where one or more subunits differ. Examples of homotetramers include: enzymes like beta-glucuronidase export factors such as SecB from Escherichia coli magnesium ion transporters such as CorA. lectins such as Concanavalin A IMPDH and IMPDH2 Examples of heterotetramers include haemoglobin, the NMDA receptor, some aquaporins, some AMPA receptors, as well as some enzymes. 
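The flow-cytometry readout described above reduces to a simple ratio. The sketch below shows that calculation with invented event counts; the function name and numbers are hypothetical, for illustration only.

```python
# Antigen-specific response reported as tetramer-positive CD8+ T cells
# as a fraction of all CD8+ lymphocytes (toy numbers).

def tetramer_positive_fraction(cd8_total, cd8_tetramer_positive):
    if cd8_total <= 0:
        raise ValueError("no CD8+ events counted")
    return cd8_tetramer_positive / cd8_total

events = {"cd8_total": 50_000, "cd8_tetramer_positive": 420}
fraction = tetramer_positive_fraction(**events)
print(f"antigen-specific CD8+ T cells: {fraction:.2%} of CD8+ lymphocytes")
```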
Purification of heterotetramers Ion-exchange chromatography is useful for isolating specific heterotetrameric protein assemblies, allowing purification of specific complexes according to both the number and the position of charged peptide tags. Nickel affinity chromatography may also be employed for heterotetramer purification. Intragenic complementation Multiple copies of a polypeptide encoded by a gene often can form an aggregate referred to as a multimer. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone. When a mixed multimer displays increased functionality relative to the unmixed multimers, the phenomenon is referred to as intragenic complementation. In humans, argininosuccinate lyase (ASL) is a homotetrameric enzyme that can undergo intragenic complementation. An ASL disorder in humans can arise from mutations in the ASL gene, particularly mutations that affect the active site of the tetrameric enzyme. ASL disorder is associated with considerable clinical and genetic heterogeneity which is considered to reflect the extensive intragenic complementation occurring among different individual patients. References External links T-cell Group - Cardiff University Immunology Proteins Protein complexes
Tetrameric protein
[ "Chemistry", "Biology" ]
1,091
[ "Immunology", "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
2,425,912
https://en.wikipedia.org/wiki/Scalar%20field%20solution
In general relativity, a scalar field solution is an exact solution of the Einstein field equation in which the gravitational field is due entirely to the field energy and momentum of a scalar field. Such a field may or may not be massless, and it may be taken to have minimal curvature coupling, or some other choice, such as conformal coupling. Definition In general relativity, the geometric setting for physical phenomena is a Lorentzian manifold, which is physically interpreted as a curved spacetime, and which is mathematically specified by defining a metric tensor (or by defining a frame field). The curvature tensor of this manifold and associated quantities such as the Einstein tensor , are well-defined even in the absence of any physical theory, but in general relativity they acquire a physical interpretation as geometric manifestations of the gravitational field. In addition, we must specify a scalar field by giving a function . This function is required to satisfy two following conditions: The function must satisfy the (curved spacetime) source-free wave equation , The Einstein tensor must match the stress-energy tensor for the scalar field, which in the simplest case, a minimally coupled massless scalar field, can be written . Both conditions follow from varying the Lagrangian density for the scalar field, which in the case of a minimally coupled massless scalar field is Here, gives the wave equation, while gives the Einstein equation (in the case where the field energy of the scalar field is the only source of the gravitational field). Physical interpretation Scalar fields are often interpreted as classical approximations, in the sense of effective field theory, to some quantum field. In general relativity, the speculative quintessence field can appear as a scalar field. For example, a flux of neutral pions can in principle be modeled as a minimally coupled massless scalar field. Einstein tensor The components of a tensor computed with respect to a frame field rather than the coordinate basis are often called physical components, because these are the components which can (in principle) be measured by an observer. In the special case of a minimally coupled massless scalar field, an adapted frame (the first is a timelike unit vector field, the last three are spacelike unit vector fields) can always be found in which the Einstein tensor takes the simple form where is the energy density of the scalar field. Eigenvalues The characteristic polynomial of the Einstein tensor in a minimally coupled massless scalar field solution must have the form In other words, we have a simple eigenvalue and a triple eigenvalue, each being the negative of the other. Multiply out and using Gröbner basis methods, we find that the following three invariants must vanish identically: Using Newton's identities, we can rewrite these in terms of the traces of the powers. We find that We can rewrite this in terms of index gymnastics as the manifestly invariant criteria: Examples Notable individual scalar field solutions include the Janis–Newman–Winicour scalar field solution, which is the unique static and spherically symmetric massless minimally coupled scalar field solution. See also Exact solutions in general relativity Lorentz group References See section 3.3 for the stress-energy tensor of a minimally coupled scalar field. Exact solutions in general relativity
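As a short worked form of the eigenvalue claim above, here is the adapted-frame stress–energy of a minimally coupled massless scalar field. It assumes the (−,+,+,+) signature and a timelike gradient of the field; the sign pattern depends on these conventions.

```latex
% Minimally coupled massless scalar field, assumed signature (-,+,+,+):
T_{ab} = \nabla_a\phi\,\nabla_b\phi - \tfrac{1}{2}\, g_{ab}\, \nabla_c\phi\,\nabla^c\phi .
% In an adapted orthonormal frame in which the gradient of phi is timelike:
T_{\hat a \hat b} = \varepsilon\,\mathrm{diag}(1,1,1,1), \qquad
T^{\hat a}{}_{\hat b} = \varepsilon\,\mathrm{diag}(-1,1,1,1), \qquad
\varepsilon = \tfrac{1}{2}\,\dot\phi^{2},
% giving one simple eigenvalue (-epsilon) and a triple eigenvalue (+epsilon),
% each the negative of the other, as stated in the text.
```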
Scalar field solution
[ "Physics", "Mathematics" ]
681
[ "Exact solutions in general relativity", "Mathematical objects", "Equations", "Relativity stubs", "Theory of relativity" ]
2,425,949
https://en.wikipedia.org/wiki/Lambdavacuum%20solution
In general relativity, a lambdavacuum solution is an exact solution to the Einstein field equation in which the only term in the stress–energy tensor is a cosmological constant term. This can be interpreted physically as a kind of classical approximation to a nonzero vacuum energy. These are discussed here as distinct from the vacuum solutions in which the cosmological constant is vanishing. Terminological note: this article concerns a standard concept, but there is apparently no standard term to denote this concept, so we have attempted to supply one for the benefit of Wikipedia. Definition The Einstein field equation is often written as with a so-called cosmological constant term . However, it is possible to move this term to the right hand side and absorb it into the stress–energy tensor , so that the cosmological constant term becomes just another contribution to the stress–energy tensor. When other contributions to that tensor vanish, the result is a lambdavacuum. An equivalent formulation in terms of the Ricci tensor is Physical interpretation A nonzero cosmological constant term can be interpreted in terms of a nonzero vacuum energy. There are two cases: : positive vacuum energy density and negative isotropic vacuum pressure, as in de Sitter space, : negative vacuum energy density and positive isotropic vacuum pressure, as in anti-de Sitter space. The idea of the vacuum having a nonvanishing energy density might seem counterintuitive, but this does make sense in quantum field theory. Indeed, nonzero vacuum energies can even be experimentally verified in the Casimir effect. Einstein tensor The components of a tensor computed with respect to a frame field rather than the coordinate basis are often called physical components, because these are the components which can (in principle) be measured by an observer. A frame consists of four unit vector fields Here, the first is a timelike unit vector field and the others are spacelike unit vector fields, and is everywhere orthogonal to the world lines of a family of observers (not necessarily inertial observers). Remarkably, in the case of lambdavacuum, all observers measure the same energy density and the same (isotropic) pressure. That is, the Einstein tensor takes the form Saying that this tensor takes the same form for all observers is the same as saying that the isotropy group of a lambdavacuum is , the full Lorentz group. Eigenvalues The characteristic polynomial of the Einstein tensor of a lambdavacuum must have the form Using Newton's identities, this condition can be re-expressed in terms of the traces of the powers of the Einstein tensor as where are the traces of the powers of the linear operator corresponding to the Einstein tensor, which has second rank. Relation with Einstein manifolds The definition of a lambdavacuum solution makes sense mathematically irrespective of any physical interpretation, and lambdavacuums are a special case of a concept that is studied by pure mathematicians. Einstein manifolds are pseudo-Riemannian manifolds in which the Ricci tensor is proportional to the metric tensor. The Lorentzian manifolds that are also Einstein manifolds are precisely the lambdavacuum solutions. 
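A short worked form of the rearrangement described in the definition above, in geometrized units and with an assumed sign convention, reads:

```latex
% Moving the cosmological-constant term to the right-hand side (G = c = 1):
G_{ab} + \Lambda g_{ab} = 8\pi T_{ab}
\quad\Longleftrightarrow\quad
G_{ab} = 8\pi\,\bigl(T_{ab} + T^{(\Lambda)}_{ab}\bigr),
\qquad
T^{(\Lambda)}_{ab} = -\frac{\Lambda}{8\pi}\, g_{ab}.
% Read as a perfect fluid, this contribution has
\rho_{\Lambda} = \frac{\Lambda}{8\pi}, \qquad p_{\Lambda} = -\rho_{\Lambda},
% so a positive cosmological constant corresponds to positive vacuum energy
% density and negative isotropic pressure, as stated in the text.
```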
Examples Noteworthy individual examples of lambdavacuum solutions include: de Sitter space, often referred to as the dS cosmological model, anti-de Sitter space, often referred to as the AdS cosmological model, de Sitter–Schwarzschild metric, which models a spherically symmetric massive object immersed in a de Sitter universe (and likewise for AdS), Kerr–de Sitter metric, the rotating generalization of the latter, Nariai spacetime; this is the only solution in general relativity, other than the Bertotti–Robinson electrovacuum, that has a Cartesian product structure. See also Exact solutions in general relativity Exact solutions in general relativity
Lambdavacuum solution
[ "Mathematics" ]
791
[ "Exact solutions in general relativity", "Mathematical objects", "Equations" ]
412,531
https://en.wikipedia.org/wiki/Sphygmomanometer
A sphygmomanometer ( ), also known as a blood pressure monitor, or blood pressure gauge, is a device used to measure blood pressure, composed of an inflatable cuff to collapse and then release the artery under the cuff in a controlled manner, and a mercury or aneroid manometer to measure the pressure. Manual sphygmomanometers are used with a stethoscope when using the auscultatory technique. A sphygmomanometer consists of an inflatable cuff, a measuring unit (the mercury manometer, or aneroid gauge), and a mechanism for inflation which may be a manually operated bulb and valve or a pump operated electrically. Etymology The word sphygmomanometer uses the combining form of sphygmo- + manometer. The roots involved are as follows: Greek sphygmos "pulse", plus the scientific term manometer (from French manomètre), i.e. "pressure meter", itself coined from manos "thin, sparse", and metron "measure". Most sphygmomanometers were mechanical gauges with dial faces, or mercury columns, during most of the 20th century. Since the advent of electronic medical devices, names such as "meter" and "monitor" can also apply, as devices can automatically monitor blood pressure on an ongoing basis. History The sphygmomanometer was invented by Samuel Siegfried Karl Ritter von Basch in the year 1881. Scipione Riva-Rocci introduced a more easily-usable version in 1896. In 1901, pioneering neurosurgeon Dr. Harvey Cushing brought an example of Riva-Rocci's device to the US, modernized and popularized it within the medical community. Further improvement came in 1905 when Russian physician Nikolai Korotkov included diastolic blood pressure measurement following his discovery of "Korotkoff sounds". William A. Baum invented the Baumanometer brand in 1916, while working for The Life Extension Institute which performed insurance and employment physicals. Types Both manual and digital meters are currently employed, with different trade-offs in accuracy versus convenience. Manual A stethoscope is required for auscultation (see below). Manual meters are best used by trained practitioners, and, while it is possible to obtain a basic reading through palpation alone, this yields only the systolic pressure. Mercury sphygmomanometers are considered the gold standard. They indicate pressure with a column of mercury, which does not require recalibration. Because of their accuracy, they are often used in clinical trials of drugs and in clinical evaluations of high-risk patients, including pregnant women. A frequently used wall mounted mercury sphygmomanometer is also known as a Baumanometer. Aneroid sphygmomanometers (mechanical types with a dial) are in common use; they may require calibration checks, unlike mercury manometers. Aneroid sphygmomanometers are considered safer than mercury sphygmomanometers, although inexpensive ones are less accurate. A major cause of departure from calibration is mechanical jarring. Aneroids mounted on walls or stands are not susceptible to this particular problem. Digital Digital meters employ oscillometric measurements and electronic calculations rather than auscultation. They may use manual or automatic inflation, but both types are electronic, easy to operate without training, and can be used in noisy environments. They calculate systolic and diastolic pressures by oscillometric detection, employing either deformable membranes that are measured using differential capacitance, or differential piezoresistance, and they include a microprocessor. 
They estimate mean arterial blood pressure and measure pulse rate; while systolic and diastolic pressures are obtained less accurately than with manual meters, and calibration is also a concern. Digital oscillometric monitors may not be advisable for some patients, such as those with arteriosclerosis, arrhythmia, preeclampsia, pulsus alternans, and pulsus paradoxus, as their calculations may not be correct for these conditions, and in these cases, an analog sphygmomanometer is preferable when used by a trained person. Digital instruments may use a cuff placed, in order of accuracy and inverse order of portability and convenience, around the upper arm, the wrist, or a finger. Recently, a group of researchers at Michigan State University developed a smartphone based device that uses oscillometry to estimate blood pressure. The oscillometric method of detection used gives blood pressure readings that differ from those determined by auscultation, and vary according to many factors, such as pulse pressure, heart rate and arterial stiffness, although some instruments are claimed also to measure arterial stiffness, and some can detect irregular heartbeats. Operation In humans, the cuff is normally placed smoothly and snugly around an upper arm, at roughly the same vertical height as the heart while the subject is seated with the arm supported. Other sites of placement depend on species and may include the flipper or tail. It is essential that the correct size of cuff is selected for the patient. Too small a cuff results in too high a pressure, while too large a cuff results in too low a pressure. For clinical measurements it is usual to measure and record both arms in the initial consultation to determine if the pressure is significantly higher in one arm than the other. A difference of 10 mmHg may be a sign of coarctation of the aorta. If the arms read differently, the higher reading arm would be used for later readings. The cuff is inflated until the artery is completely occluded. With a manual instrument, listening with a stethoscope to the brachial artery, the examiner slowly releases the pressure in the cuff at a rate of approximately 2 mmHg per heart beat. As the pressure in the cuffs falls, a "whooshing" or pounding sound is heard (see Korotkoff sounds) when blood flow first starts again in the artery. The pressure at which this sound began is noted and recorded as the systolic blood pressure. The cuff pressure is further released until the sound can no longer be heard. This is recorded as the diastolic blood pressure. In noisy environments where auscultation is impossible (such as the scenes often encountered in emergency medicine), systolic blood pressure alone may be read by releasing the pressure until a radial pulse is palpated (felt). In veterinary medicine, auscultation is rarely of use, and palpation or visualization of pulse distal to the sphygmomanometer is used to detect systolic pressure. Digital instruments use a cuff which may be placed, according to the instrument, around the upper arm, wrist, or a finger, in all cases elevated to the same height as the heart. They inflate the cuff and gradually reduce the pressure in the same way as a manual meter, and measure blood pressures by the oscillometric method. Significance By observing the mercury in the column, or the aneroid gauge pointer, while releasing the air pressure with a control valve, the operator notes the values of the blood pressure in mmHg. 
The peak pressure in the arteries during the cardiac cycle is the systolic pressure, and the lowest pressure (at the resting phase of the cardiac cycle) is the diastolic pressure. A stethoscope, applied lightly over the artery being measured, is used in the auscultatory method. Systolic pressure (first phase) is identified with the first of the continuous Korotkoff sounds. Diastolic pressure is identified at the moment the Korotkoff sounds disappear (fifth phase). Measurement of the blood pressure is carried out in the diagnosis and treatment of hypertension (high blood pressure), and in many other healthcare scenarios. See also Critical closing pressure References External links 1881 introductions 19th-century inventions Austrian inventions Blood pressure Medical equipment Physiological instruments Pressure gauges
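The auscultatory procedure described in the Operation section above can be summarised as a small decision rule over the deflation sequence. The sketch below is a schematic illustration with invented sample data; it is not a description of any real device's logic.

```python
# Auscultatory reading: during slow deflation, systolic pressure is taken at the
# first appearance of Korotkoff sounds and diastolic pressure at their
# disappearance (fifth phase).

def read_auscultatory(deflation_samples):
    """deflation_samples: (cuff_pressure_mmHg, sounds_heard) pairs in order of
    decreasing pressure. Returns (systolic, diastolic) or None if not found."""
    systolic = diastolic = None
    for pressure, sounds_heard in deflation_samples:
        if sounds_heard and systolic is None:
            systolic = pressure                      # first Korotkoff sound
        elif systolic is not None and not sounds_heard and diastolic is None:
            diastolic = pressure                     # sounds disappear
    if systolic is None or diastolic is None:
        return None
    return systolic, diastolic

samples = [(180, False), (160, False), (122, True), (110, True),
           (95, True), (81, False), (70, False)]
print(read_auscultatory(samples))   # -> (122, 81)
```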
Sphygmomanometer
[ "Technology", "Engineering", "Biology" ]
1,655
[ "Physiological instruments", "Medical equipment", "Measuring instruments", "Pressure gauges", "Medical technology" ]
412,702
https://en.wikipedia.org/wiki/Monotypic%20taxon
In biology, a monotypic taxon is a taxonomic group (taxon) that contains only one immediately subordinate taxon. A monotypic species is one that does not include subspecies or smaller, infraspecific taxa. In the case of genera, the term "unispecific" or "monospecific" is sometimes preferred. In botanical nomenclature, a monotypic genus is a genus in the special case where a genus and a single species are simultaneously described. Theoretical implications Monotypic taxa present several important theoretical challenges in biological classification. One key issue is known as "Gregg's Paradox": if a single species is the only member of multiple hierarchical levels (for example, being the only species in its genus, which is the only genus in its family), then each level needs a distinct definition to maintain logical structure. Otherwise, the different taxonomic ranks become effectively identical, which creates problems for organizing biological diversity in a hierarchical system. When taxonomists identify a monotypic taxon, this often reflects uncertainty about its relationships rather than true evolutionary isolation. This uncertainty is evident in many cases across different species. For instance, the diatom Licmophora juergensii is placed in a monotypic genus because scientists have not yet found clear evidence of its relationships to other species. Some taxonomists argue against monotypic taxa because they reduce the information content of biological classifications. As taxonomists Backlund and Bremer explain in their critique, "'Monotypic' taxa do not provide any information about the relationships of the immediately subordinate taxon". When monotypic taxa are sister to a single larger group, they might be merged into that group; however, when they are sister to multiple other groups, they may need to remain separate to maintain a natural classification. From a cladistic perspective, which focuses on shared derived characteristics to determine evolutionary relationships, the theoretical status of monotypic taxa is complex. Some argue they can only be justified when relationships cannot be resolved through synapomorphies (shared derived characteristics); otherwise, they would necessarily exclude related species and thus be paraphyletic. However, others contend that while most taxonomic groups can be classified as either monophyletic (containing all descendants of a common ancestor) or paraphyletic (excluding some descendants), these concepts do not apply to monotypic taxa because they contain only a single member. Monotypic taxa are part of a broader challenge in biological classification known as aphyly – situations where evolutionary relationships are poorly supported by evidence. This includes both monotypic groups and cases where traditional groupings are found to be artificial. Understanding how monotypic taxa fit into this bigger picture helps identify areas needing further research. The German lichenologist Robert Lücking suggests that the common application of the term monotypic is frequently misleading, "since each taxon by definition contains exactly one type and is hence "monotypic", regardless of the total number of units", and suggests using "monospecific" for a genus with a single species, and "monotaxonomic" for a taxon containing only one unit. Conservation implications Species in monotypic genera tend to be more threatened with extinction than average species. 
Studies have found this pattern particularly pronounced in amphibians, where about 6.56% of monotypic genera are critically endangered, compared to birds and mammals where around 4.54% and 4.02% of monotypic genera face critical endangerment respectively. Studies have found that extinction of monotypic genera is particularly associated with island species. Among 25 documented extinct monotypic genera studied, 22 occurred on islands, with flightless animals being particularly vulnerable to human impacts. Examples Just as the term monotypic is used to describe a taxon including only one subdivision, the contained taxon can also be referred to as monotypic within the higher-level taxon, e.g. a genus monotypic within a family. Some examples of monotypic groups are: Plants In the order Amborellales, there is only one family, Amborellaceae, and there is only one genus, Amborella, and in this genus there is only one species, Amborella trichopoda. The flowering plant Breonadia salicina is the only species in the monotypic genus Breonadia. The family Cephalotaceae includes only one genus, Cephalotus, and only one species, Cephalotus follicularis – the Albany pitcher plant. The division Ginkgophyta is monotypic, containing the single class Ginkgoopsida. This class is also monotypic, containing the single order Ginkgoales. Picomonas judraskeda is the only known species in the division Picozoa. Animals The madrone butterfly is the only species in the monotypic genus Eucheira. However, there are two subspecies of this butterfly, E. socialis socialis and E. socialis westwoodi, which means the species E. socialis is not monotypic. Erithacus rubecula, the European robin, is the only extant member of its genus. Delphinapterus leucas or the beluga whale is the only member of its genus and lacks subspecies. Dugong dugon is the only species in the monotypic genus Dugong. Homo sapiens (humans) are monotypic, as they have too little genetic diversity to harbor any living subspecies. Limnognathia maerski is a microscopic animal and the only species in the monotypic phylum Micrognathozoa. The narwhal is a medium-sized cetacean that is the only member of the monotypic genus Monodon. The platypus is the only member of the monotypic genus Ornithorhynchus. The salamanderfish is the only member of the order Lepidogalaxiiformes, which is the sister group to the remaining euteleosts. Ozichthys albimaculosus, the cream-spotted cardinalfish, found in tropical Australia and southern New Guinea, is the type species of the monotypic genus Ozichthys. The bearded reedling is the only species in the monotypic genus Panurus, which is the only genus in the monotypic family Panuridae. Canines form the only living subfamily of the dog family, Canidae See also Glossary of scientific naming Monophyly References External links Conservation biology Speciation
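Monotypy, in the monospecific sense discussed above, amounts to a simple structural check on a classification. The toy sketch below finds monospecific genera in a small, simplified mapping; the data are illustrative and far from a complete classification.

```python
# Find genera that contain exactly one species (monospecific genera).

genera = {
    "Amborella": ["Amborella trichopoda"],
    "Cephalotus": ["Cephalotus follicularis"],
    "Erithacus": ["Erithacus rubecula"],
    "Panthera": ["Panthera leo", "Panthera tigris", "Panthera onca"],
}

monospecific = sorted(genus for genus, species in genera.items() if len(species) == 1)
print(monospecific)   # ['Amborella', 'Cephalotus', 'Erithacus']
```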
Monotypic taxon
[ "Biology" ]
1,307
[ "Evolutionary processes", "Speciation", "Conservation biology" ]
412,909
https://en.wikipedia.org/wiki/Substructural%20logic
In logic, a substructural logic is a logic lacking one of the usual structural rules (e.g. of classical and intuitionistic logic), such as weakening, contraction, exchange or associativity. Two of the more significant substructural logics are relevance logic and linear logic. Examples In a sequent calculus, one writes each line of a proof as Γ ⊢ Σ. Here the structural rules are rules for rewriting the LHS of the sequent, denoted Γ, initially conceived of as a string (sequence) of propositions. The standard interpretation of this string is as conjunction: we expect to read A, B ⊢ C as the sequent notation for (A and B) implies C. Here we are taking the RHS Σ to be a single proposition C (which is the intuitionistic style of sequent); but everything applies equally to the general case, since all the manipulations are taking place to the left of the turnstile symbol ⊢. Since conjunction is a commutative and associative operation, the formal setting-up of sequent theory normally includes structural rules for rewriting the sequent Γ accordingly; for example, for deducing B, A ⊢ C from A, B ⊢ C. There are further structural rules corresponding to the idempotent and monotonic properties of conjunction: from A, A ⊢ C we can deduce A ⊢ C. Also, from A ⊢ C one can deduce, for any B, A, B ⊢ C. Linear logic, in which duplicated hypotheses 'count' differently from single occurrences, leaves out both of these rules, while relevant (or relevance) logics merely leave out the latter rule, on the grounds that B is clearly irrelevant to the conclusion. The above are basic examples of structural rules. It is not that these rules are contentious when applied in conventional propositional calculus. They occur naturally in proof theory, and were first noticed there (before receiving a name). Premise composition There are numerous ways to compose premises (and in the multiple-conclusion case, conclusions as well). One way is to collect them into a set. But since e.g. {a,a} = {a}, we have contraction for free if premises are sets. We also have associativity and permutation (or commutativity) for free, among other properties. In substructural logics, typically premises are not composed into sets, but rather they are composed into more fine-grained structures, such as trees or multisets (sets that distinguish multiple occurrences of elements) or sequences of formulae. For example, in linear logic, since contraction fails, the premises must be composed in something at least as fine-grained as multisets. History Substructural logics are a relatively young field. The first conference on the topic was held in October 1990 in Tübingen, as "Logics with Restricted Structural Rules". During the conference, Kosta Došen proposed the term "substructural logics", which is now in use today. See also Substructural type system Residuated lattice References F. Paoli (2002), Substructural Logics: A Primer, Kluwer. G. Restall (2000) An Introduction to Substructural Logics, Routledge. Further reading Galatos, Nikolaos, Peter Jipsen, Tomasz Kowalski, and Hiroakira Ono (2007), Residuated Lattices. An Algebraic Glimpse at Substructural Logics, Elsevier. External links Non-classical logic
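The point about premise composition above can be made concrete with ordinary data structures. The sketch below contrasts sets, sequences, and multisets of premises; it is only an illustration of the bookkeeping involved, not a formal proof system.

```python
from collections import Counter

# Premises collected into a set: contraction and exchange come "for free".
print(frozenset(["A", "A", "B"]) == frozenset(["A", "B"]))   # True: {A, A} = {A}

# Premises kept as a sequence: multiplicity and order both matter.
print(("A", "A", "B") == ("A", "B"))          # False: duplicated hypotheses count
print(("A", "A", "B") == ("B", "A", "A"))     # False: order matters too

# Premises as a multiset (here, a Counter): exchange holds, contraction does not,
# which is the kind of fine-grained structure linear logic requires.
print(Counter(("A", "A", "B")) == Counter(("B", "A", "A")))   # True
print(Counter(("A", "A", "B")) == Counter(("A", "B")))        # False
```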
Substructural logic
[ "Mathematics" ]
726
[ "Substructural logic", "Proof theory" ]
412,937
https://en.wikipedia.org/wiki/Teichoic%20acid
Teichoic acids (cf. Greek τεῖχος, teīkhos, "wall", to be specific a fortification wall, as opposed to τοῖχος, toīkhos, a regular wall) are bacterial copolymers of glycerol phosphate or ribitol phosphate and carbohydrates linked via phosphodiester bonds. Teichoic acids are found within the cell wall of most Gram-positive bacteria such as species in the genera Staphylococcus, Streptococcus, Bacillus, Clostridium, Corynebacterium, and Listeria, and appear to extend to the surface of the peptidoglycan layer. They can be covalently linked to N-acetylmuramic acid or a terminal D-alanine in the tetrapeptide crosslinkage between N-acetylmuramic acid units of the peptidoglycan layer, or they can be anchored in the cytoplasmic membrane with a lipid anchor. Teichoic acids that are anchored to the lipid membrane are referred to as lipoteichoic acids (LTAs), whereas teichoic acids that are covalently bound to peptidoglycan are referred to as wall teichoic acids (WTA). Structure The most common structure of wall teichoic acids is a ManNAc(β1→4)GlcNAc disaccharide with one to three glycerol phosphates attached to the C4 hydroxyl of the ManNAc residue, followed by a long chain of glycerol or ribitol phosphate repeats. Most of the variation occurs in the long-chain tail, and generally involves sugar subunits attached to the sides or the body of the repeats. Four types of WTA repeats have been named, as of 2013. Lipoteichoic acids follow a similar pattern, with most of the variation in the repeats, although the set of enzymes used is different, at least in the case of Type I LTA. The repeats are anchored onto the membrane via a (di)glucosyl-diacylglycerol (Glc(2)DAG) anchor. Type IV LTA from Streptococcus pneumoniae represents a special case where both types intersect: after the tail is synthesized with an undecaprenyl phosphate (C55-P) intermediate "head", different TagU/LCP (LytR-CpsA-Psr) family enzymes attach it either to the wall, to form a WTA, or to the GlcDAG anchor. Function The main function of teichoic acids is to provide flexibility to the cell wall by attracting cations such as calcium and potassium. Teichoic acids can be substituted with D-alanine ester residues, or D-glucosamine, giving the molecule zwitterionic properties. These zwitterionic teichoic acids are suspected ligands for toll-like receptors 2 and 4. Teichoic acids also assist in regulation of cell growth by limiting the ability of autolysins to break the β(1-4) bond between the N-acetyl glucosamine and the N-acetylmuramic acid. Lipoteichoic acids may also act as receptor molecules for some Gram-positive bacteriophages; however, this has not yet been conclusively supported. Teichoic acid is an acidic polymer and contributes negative charge to the cell wall. Biosynthesis WTA and Type IV LTA Enzymes involved in the biosynthesis of WTAs have been named: TarO, TarA, TarB, TarF, TarK, and TarL. Their roles are: TarO (EC 2.7.8.33) starts off the process by connecting GlcNAc to a biphospho-undecaprenyl (bactoprenyl) in the inner membrane. TarA (EC 2.4.1.187) connects a ManNAc to the lipid-linked GlcNAc formed by TarO via a β-(1,4) linkage. TarB (EC 2.7.8.44) connects a single glycerol-3-phosphate to the C4 hydroxyl of ManNAc. TarF (EC 2.7.8.12) connects more glycerol-3-phosphate units to the glycerol tail. In Tag-producing bacteria, this is the final step (a long glycerol tail). Otherwise it only adds one unit. TarK (EC 2.7.8.46) connects the initial ribitol-5-phosphate unit. 
It is necessary in Bacillus subtilis W23 for Tar production, but S. aureus has both functions in the same TarL/K enzyme. TarL (EC 2.7.8.47) constructs the long ribitol-5-phosphate tail. Following the synthesis, the ATP-binding cassette transporters (teichoic-acid-transporting ATPase) TarGH flip the cytoplasmic complex to the external surface of the inner membrane. The redundant TagTUV enzymes link this product to the cell wall. The enzymes TarI and TarJ are responsible for producing the substrates that lead to the polymer tail. Many of these proteins are located in a conserved gene cluster. Later (2013) studies have identified a few more enzymes that attach unique sugars to the WTA repeat units. A set of enzymes and transporters named DltABCE, which add alanines to both wall and lipoteichoic acids, has also been found. Note that the set of genes is named "Tag" (teichoic acid glycerol) instead of "Tar" (teichoic acid ribitol) in B. subtilis 168, which lacks the TarK/TarL enzymes. Due to the role of B. subtilis as the main model strain, some linked UniProt entries are in fact the "Tag" ortholog, as they are better annotated. A similarity search may be used to access the corresponding genes in the Tar-producing B. subtilis W23 (BACPZ). TarB/F/L/K all bear some similarities to each other, and belong to the same family. As an antibiotic drug target Targeting teichoic acid biosynthesis as an antibiotic strategy was proposed in 2004. A further review in 2013 identified more specific parts of the pathways to inhibit, given newer knowledge. See also Lipoteichoic acid – a major constituent of the cell wall of gram-positive bacteria Sir James Baddiley References External links Bioinformatic mappings (see also the EC entries): UniProt: WTA KW-0777, pathway:547.789 (poly(glucopyranosyl N-acetylgalactosamine 1-phosphate) teichoic acid biosynthesis), pathway:547.827 (poly(glycerol phosphate) teichoic acid biosynthesis), pathway:547.790 (poly(ribitol phosphate) teichoic acid biosynthesis) UniProt: LTA pathway:547.556 (lipoteichoic acid biosynthesis) gene ontology: GO:0019350 Organic acids Cell biology
Teichoic acid
[ "Chemistry", "Biology" ]
1,561
[ "Organic acids", "Acids", "Cell biology", "Organic compounds" ]
412,942
https://en.wikipedia.org/wiki/Lipopolysaccharide
Lipopolysaccharide, now more commonly known as endotoxin, is a collective term for components of the outermost membrane of the cell envelope of gram-negative bacteria, such as E. coli and Salmonella with a common structural architecture. Lipopolysaccharides (LPS) are large molecules consisting of three parts: an outer core polysaccharide termed the O-antigen, an inner core oligosaccharide and Lipid A (from which toxicity is largely derived), all covalently linked. In current terminology, the term endotoxin is often used synonymously with LPS, although there are a few endotoxins (in the original sense of toxins that are inside the bacterial cell that are released when the cell disintegrates) that are not related to LPS, such as the so-called delta endotoxin proteins produced by Bacillus thuringiensis. Lipopolysaccharides can have substantial impacts on human health, primarily through interactions with the immune system. LPS is a potent activator of the immune system and is a pyrogen (agent that causes fever). In severe cases, LPS can trigger a brisk host response and multiple types of acute organ failure which can lead to septic shock. In lower levels and over a longer time period, there is evidence LPS may play an important and harmful role in autoimmunity, obesity, depression, and cellular senescence. Discovery The toxic activity of LPS was first discovered and termed endotoxin by Richard Friedrich Johannes Pfeiffer. He distinguished between exotoxins, toxins that are released by bacteria into the surrounding environment, and endotoxins, which are toxins "within" the bacterial cell and released only after destruction of the bacterial outer membrane. Subsequent work showed that release of LPS from Gram negative microbes does not necessarily require the destruction of the bacterial cell wall, but rather, LPS is secreted as part of the normal physiological activity of membrane vesicle trafficking in the form of bacterial outer membrane vesicles (OMVs), which may also contain other virulence factors and proteins. Functions in bacteria LPS is a major component of the outer cell membrane of gram-negative bacteria, contributing greatly to the structural integrity of the bacteria and protecting the membrane from certain kinds of chemical attack. LPS is the most abundant antigen on the cell surface of most gram-negative bacteria, contributing up to 80% of the outer membrane of E. coli and Salmonella. LPS increases the negative charge of the cell membrane and helps stabilize the overall membrane structure. It is of crucial importance to many gram-negative bacteria, which die if the genes coding for it are mutated or removed. However, it appears that LPS is nonessential in at least some gram-negative bacteria, such as Neisseria meningitidis, Moraxella catarrhalis, and Acinetobacter baumannii. It has also been implicated in non-pathogenic aspects of bacterial ecology, including surface adhesion, bacteriophage sensitivity, and interactions with predators such as amoebae. LPS is also required for the functioning of omptins, a class of bacterial protease. Composition LPS are amphipathic and composed of three parts: the O antigen (or O polysaccharide) which is hydrophilic, the core oligosaccharide (also hydrophilic), and Lipid A, the hydrophobic domain. O-antigen The repetitive glycan polymer contained within an LPS is referred to as the O antigen, O polysaccharide, or O side-chain of the bacteria. 
The O antigen is attached to the core oligosaccharide, and comprises the outermost domain of the LPS molecule. The structure and composition of the O chain are highly variable from strain to strain, determining the serological specificity of the parent bacterial strain; there are over 160 different O antigen structures produced by different E. coli strains. The presence or absence of O chains determines whether the LPS is considered "rough" or "smooth". Full-length O-chains would render the LPS smooth, whereas the absence or reduction of O-chains would make the LPS rough. Bacteria with rough LPS usually have cell membranes that are more penetrable to hydrophobic antibiotics, since a rough LPS is more hydrophobic. O antigen is exposed on the very outer surface of the bacterial cell, and, as a consequence, is a target for recognition by host antibodies. Core The core domain always contains an oligosaccharide component that attaches directly to lipid A and commonly contains sugars such as heptose and 3-Deoxy-D-manno-oct-2-ulosonic acid (also known as KDO, keto-deoxyoctulosonate). The core oligosaccharide is less variable in its structure and composition, a given core structure being common to large groups of bacteria. The LPS cores of many bacteria also contain non-carbohydrate components, such as phosphate, amino acids, and ethanolamine substituents. Lipid A Lipid A is, in normal circumstances, a phosphorylated glucosamine disaccharide decorated with multiple fatty acids. These hydrophobic fatty acid chains anchor the LPS into the bacterial membrane, and the rest of the LPS projects from the cell surface. The lipid A domain is the most bioactive and responsible for much of the toxicity of Gram-negative bacteria. When bacterial cells are lysed by the immune system, fragments of membrane containing lipid A may be released into the circulation, causing fever, diarrhea, and possibly fatal endotoxic septic shock (a form of septic shock). The lipid A moiety is a very conserved component of the LPS. However, lipid A structure varies among bacterial species, and this structure largely defines the degree and nature of the overall host immune activation. Lipooligosaccharides The "rough form" of LPS has a lower molecular weight due to the absence of the O polysaccharide. In its place is a short oligosaccharide: this form is known as lipooligosaccharide (LOS), and is a glycolipid found in the outer membrane of some types of Gram-negative bacteria, such as Neisseria spp. and Haemophilus spp. LOS plays a central role in maintaining the integrity and functionality of the outer membrane of the Gram-negative cell envelope. LOS molecules play an important role in the pathogenesis of certain bacterial infections because they are capable of acting as immunostimulators and immunomodulators. Furthermore, LOS molecules are responsible for the ability of some bacterial strains to display molecular mimicry and antigenic diversity, aiding in the evasion of host immune defenses and thus contributing to the virulence of these bacterial strains. In the case of Neisseria meningitidis, the lipid A portion of the molecule has a symmetrical structure and the inner core is composed of 3-deoxy-D-manno-2-octulosonic acid (KDO) and heptose (Hep) moieties. The outer core oligosaccharide chain varies depending on the bacterial strain. LPS detoxification A highly conserved host enzyme called acyloxyacyl hydrolase (AOAH) may detoxify LPS when it enters, or is produced in, animal tissues. It may also convert LPS in the intestine into an LPS inhibitor. 
Neutrophils, macrophages and dendritic cells produce this lipase, which inactivates LPS by removing the two secondary acyl chains from lipid A to produce tetraacyl LPS. If mice are given LPS parenterally, those that lack AOAH develop high titers of non-specific antibodies, develop prolonged hepatomegaly, and experience prolonged endotoxin tolerance. LPS inactivation may be required for animals to restore homeostasis after parenteral LPS exposure. Although mice have many other mechanisms for inhibiting LPS signaling, none is able to prevent these changes in animals that lack AOAH. Dephosphorylation of LPS by intestinal alkaline phosphatase can reduce the severity of Salmonella typhimurium and Clostridioides difficile infection, restoring normal gut microbiota. Alkaline phosphatase prevents intestinal inflammation (and "leaky gut") from bacteria by dephosphorylating the lipid A portion of LPS. Biosynthesis and transport The entire process of making LPS starts with a molecule called lipid A-Kdo2, which is first created on the surface of the bacterial cell's inner membrane. Then, additional sugars are added to this molecule on the inner membrane before it is moved to the space between the inner and outer membranes (periplasmic space) with the help of a protein called MsbA. The O-antigen, another part of LPS, is made by special enzyme complexes on the inner membrane. It is then moved to the outer membrane through three different systems: one is Wzy-dependent, another relies on ABC transporters, and the third involves a synthase-dependent process. Ultimately, LPS is transported to the outer membrane by a membrane-to-membrane bridge of lipopolysaccharide transport (Lpt) proteins. This transporter is a potential antibiotic target. Biological effects on hosts infected with Gram-negative bacteria LPS storage in the body The human body carries endogenous stores of LPS. The epithelial surfaces are colonized by a complex microbial flora (including gram-negative bacteria), which outnumber human cells by a factor of 10 to 1. Gram-negative bacteria will shed endotoxins. This host-microbial interaction is a symbiotic relationship which plays a critical role in systemic immunologic homeostasis. When this is disrupted, it can lead to diseases such as endotoxemia and endotoxic septic shock. Immune response LPS acts as the prototypical endotoxin because it binds the CD14/TLR4/MD2 receptor complex in many cell types, but especially in monocytes, dendritic cells, macrophages and B cells, which promotes the secretion of pro-inflammatory cytokines, nitric oxide, and eicosanoids. Bruce Beutler was awarded a portion of the 2011 Nobel Prize in Physiology or Medicine for his work demonstrating that TLR4 is the LPS receptor. As part of the cellular stress response, superoxide is one of the major reactive oxygen species induced by LPS in various cell types that express TLR (toll-like receptor). LPS is also an exogenous pyrogen (fever-inducing substance). LPS function has been under experimental research for several years due to its role in activating many transcription factors. LPS also produces many types of mediators involved in septic shock. Of mammals, humans are much more sensitive to LPS than other primates and other animals (e.g., mice). A dose of 1 μg/kg induces shock in humans, but mice will tolerate a dose up to a thousand times higher. This may relate to differences in the level of circulating natural antibodies between the two species. 
It may also be linked to multiple immune tactics against pathogens, and be part of a multi-faceted anti-microbial strategy that has been informed by human behavioral changes over our species' evolution (e.g., meat eating, agricultural practices, and smoking). Said et al. showed that LPS causes an IL-10-dependent inhibition of CD4 T-cell expansion and function by up-regulating PD-1 levels on monocytes, which leads to IL-10 production by monocytes after binding of PD-1 by PD-L1. Endotoxins are in large part responsible for the dramatic clinical manifestations of infections with pathogenic Gram-negative bacteria, such as Neisseria meningitidis, the pathogen that causes meningococcal disease, including meningococcemia, Waterhouse–Friderichsen syndrome, and meningitis. Portions of the LPS from several bacterial strains have been shown to be chemically similar to human host cell surface molecules; the ability of some bacteria to present molecules on their surface which are chemically identical or similar to the surface molecules of some types of host cells is termed molecular mimicry. For example, in Neisseria meningitidis L2,3,5,7,9, the terminal tetrasaccharide portion of the oligosaccharide (lacto-N-neotetraose) is the same tetrasaccharide as that found in paragloboside, a precursor for ABH glycolipid antigens found on human erythrocytes. In another example, the terminal trisaccharide portion (lactotriaose) of the oligosaccharide from pathogenic Neisseria spp. LOS is also found in lactoneoseries glycosphingolipids from human cells. Most meningococci from groups B and C, as well as gonococci, have been shown to have this trisaccharide as part of their LOS structure. The presence of these human cell surface 'mimics' may, in addition to acting as a 'camouflage' from the immune system, play a role in the abolishment of immune tolerance when infecting hosts with certain human leukocyte antigen (HLA) genotypes, such as HLA-B35. LPS can be sensed directly by hematopoietic stem cells (HSCs) through binding to TLR4, causing them to proliferate in reaction to a systemic infection. This response activates TLR4-TRIF-ROS-p38 signaling within the HSCs, and sustained TLR4 activation can cause proliferative stress, impairing their competitive repopulating ability. Infection of mice with S. typhimurium showed similar results, validating the experimental model in vivo as well. Effect of variability on immune response O-antigens (the outer carbohydrates) are the most variable portion of the LPS molecule, imparting antigenic specificity. In contrast, lipid A is the most conserved part. However, lipid A composition also may vary (e.g., in number and nature of acyl chains even within or between genera). Some of these variations may impart antagonistic properties to these LPS. For example, diphosphoryl lipid A of Rhodobacter sphaeroides (RsDPLA) is a potent antagonist of LPS in human cells, but is an agonist in hamster and equine cells. It has been speculated that conical lipid A (e.g., from E. coli) is more agonistic, while less conical lipid A like that of Porphyromonas gingivalis may activate a different signal (TLR2 instead of TLR4), and completely cylindrical lipid A like that of Rhodobacter sphaeroides is antagonistic to TLRs. In general, LPS gene clusters are highly variable between different strains, subspecies, and species of bacterial pathogens of plants and animals. 
Normal human blood serum contains anti-LOS antibodies that are bactericidal, and patients that have infections caused by serotypically distinct strains possess anti-LOS antibodies that differ in their specificity compared with normal serum. These differences in humoral immune response to different LOS types can be attributed to the structure of the LOS molecule, primarily within the structure of the oligosaccharide portion of the LOS molecule. In Neisseria gonorrhoeae it has been demonstrated that the antigenicity of LOS molecules can change during an infection due to the ability of these bacteria to synthesize more than one type of LOS, a characteristic known as phase variation. Additionally, Neisseria gonorrhoeae, as well as Neisseria meningitidis and Haemophilus influenzae, are capable of further modifying their LOS in vitro, for example through sialylation (modification with sialic acid residues), and as a result are able to increase their resistance to complement-mediated killing or even down-regulate complement activation or evade the effects of bactericidal antibodies. Sialylation may also contribute to hindered neutrophil attachment and phagocytosis by immune system cells as well as a reduced oxidative burst. Haemophilus somnus, a pathogen of cattle, has also been shown to display LOS phase variation, a characteristic which may help in the evasion of bovine host immune defenses. Taken together, these observations suggest that variations in bacterial surface molecules such as LOS can help the pathogen evade both the humoral (antibody and complement-mediated) and the cell-mediated (killing by neutrophils, for example) host immune defenses. Non-canonical pathways of LPS recognition Recently, it was shown that in addition to TLR4-mediated pathways, certain members of the family of the transient receptor potential ion channels recognize LPS. LPS-mediated activation of TRPA1 was shown in mice and Drosophila melanogaster flies. At higher concentrations, LPS activates other members of the sensory TRP channel family as well, such as TRPV1, TRPM3 and to some extent TRPM8. LPS is recognized by TRPV4 on epithelial cells. TRPV4 activation by LPS was necessary and sufficient to induce nitric oxide production with a bactericidal effect. Testing Lipopolysaccharide is a significant factor that makes bacteria harmful, and it helps categorize them into different groups based on their structure and function. This makes LPS a useful marker for telling apart various Gram-negative bacteria. Swiftly identifying and understanding the types of pathogens involved is crucial for promptly managing and treating infections. Since LPS is the main trigger for the immune response in host cells, it acts as an early signal of an acute infection. Therefore, LPS testing is more specific and meaningful than many other serological tests. The current methods for testing LPS are quite sensitive, but many of them struggle to differentiate between different LPS groups. Additionally, the nature of LPS, which has both water-attracting and water-repelling properties (amphiphilic), makes it challenging to develop sensitive and user-friendly tests. The typical detection methods rely on identifying the lipid A part of LPS because lipid A is very similar among different bacterial species and serotypes. LPS testing techniques fall into several overlapping categories: in vivo tests, in vitro tests, modified immunoassays, biological assays, and chemical assays. 
Endotoxin Activity Assay Because LPS is very difficult to measure in whole blood and because most LPS is bound to proteins and complement, the Endotoxin Activity Assay (EAA™) was developed and cleared by the US FDA in 2003. EAA is a rapid in vitro chemiluminescent immunodiagnostic test. It utilizes a specific monoclonal antibody to measure the endotoxin activity in EDTA whole blood specimens. This assay uses the biological response of the neutrophils in a patient’s blood to an immunological complex of endotoxin and exogenous antibody – the chemiluminescent reaction formed creates an emission of light. The amount of chemiluminescence is proportional to the logarithmic concentration of LPS in the sample and is a measure of the endotoxin activity in the blood. The assay reacts specifically with the lipid A moiety of LPS of Gram-negative bacteria and does not cross-react with cell wall constituents of Gram-positive bacteria and other microorganisms. Pathophysiology LPS is a powerful toxin that, when in the body, triggers inflammation by binding to cell receptors. Excessive LPS in the blood, endotoxemia, may cause a highly lethal form of sepsis known as endotoxic septic shock. This condition includes symptoms that fall along a continuum of pathophysiologic states, starting with a systemic inflammatory response syndrome (SIRS) and ending in multiorgan dysfunction syndrome (MODS) before death. Early symptoms include rapid heart rate, quick breathing, temperature changes, and blood clotting issues, resulting in blood vessels widening and reduced blood volume, leading to cellular dysfunction. Recent research indicates that even small LPS exposure is associated with autoimmune diseases and allergies. High levels of LPS in the blood can lead to metabolic syndrome, increasing the risk of conditions like diabetes, heart disease, and liver problems. LPS also plays a crucial role in symptoms caused by infections from harmful bacteria, including severe conditions like Waterhouse-Friderichsen syndrome, meningococcemia, and meningitis. Certain bacteria can adapt their LPS to cause long-lasting infections in the respiratory and digestive systems. Recent studies have shown that LPS disrupts cell membrane lipids, affecting cholesterol and metabolism, potentially leading to high cholesterol, abnormal blood lipid levels, and non-alcoholic fatty liver disease. In some cases, LPS can interfere with toxin clearance, which may be linked to neurological issues. Health effects In general, the health effects of LPS are due to its ability to act as a potent activator and modulator of the immune system, especially its inducement of inflammation. LPS is directly cytotoxic and is highly immunostimulatory – as host immune cells recognize LPS, complement is strongly activated. Complement activation and a rising anti-inflammatory response can lead to immune cell dysfunction, immunosuppression, widespread coagulopathy, and serious tissue damage, and can progress to multi-system organ failure and death. Endotoxemia The presence of endotoxins in the blood is called endotoxemia. A high level of endotoxemia can lead to septic shock, or more specifically endotoxic septic shock, while a lower concentration of endotoxins in the bloodstream is called metabolic endotoxemia. Endotoxemia is associated with obesity, diet, cardiovascular diseases, and diabetes, and host genetics might also have an effect. 
Moreover, endotoxemia of intestinal origin, especially at the host-pathogen interface, is considered to be an important factor in the development of alcoholic hepatitis, which is likely to develop on the basis of the small bowel bacterial overgrowth syndrome and an increased intestinal permeability. Lipid A may cause uncontrolled activation of mammalian immune systems with production of inflammatory mediators that may lead to endotoxic septic shock. This inflammatory reaction is primarily mediated by Toll-like receptor 4, which is responsible for immune system cell activation. Damage to the endothelial layer of blood vessels caused by these inflammatory mediators can lead to capillary leak syndrome, dilation of blood vessels, and a decrease in cardiac function, and can further worsen shock. LPS is also a potent activator of complement. Uncontrolled complement activation may trigger destructive endothelial damage leading to disseminated intravascular coagulation (DIC), or atypical hemolytic uremic syndrome (aHUS), with injury to various organs such as the kidneys and lungs. The skin can show the effects of vascular damage, often coupled with depletion of coagulation factors, in the form of petechiae, purpura and ecchymoses. The limbs can also be affected, sometimes with devastating consequences such as the development of gangrene, requiring subsequent amputation. Loss of function of the adrenal glands can cause adrenal insufficiency, and additional hemorrhage into the adrenals causes Waterhouse-Friderichsen syndrome, both of which can be life-threatening. It has also been reported that gonococcal LOS can cause damage to human fallopian tubes. Treatment of Endotoxemia Toraymyxin is a widely used extracorporeal endotoxin removal therapy through direct hemoadsorption (also referred to as hemoperfusion). It is a polystyrene-derived cartridge with molecules of polymyxin B (PMX-B) covalently bound to mesh fibers contained within it. Polymyxins are cyclic cationic polypeptide antibiotics derived from Bacillus polymyxa with effective antimicrobial activity against Gram-negative bacteria, but their intravenous clinical use has been limited due to their nephrotoxicity and neurotoxicity side effects. The extracorporeal use of the Toraymyxin cartridge allows PMX-B to bind lipid A through a very stable interaction with its hydrophobic residues, thereby neutralizing endotoxins as the blood is filtered through the extracorporeal circuit inside the cartridge, thus reversing endotoxemia and avoiding its toxic systemic effects. Auto-immune disease The molecular mimicry of some LOS molecules is thought to cause autoimmune-based host responses, such as flareups of multiple sclerosis. Other examples of bacterial mimicry of host structures via LOS are found with the bacteria Helicobacter pylori and Campylobacter jejuni, organisms which cause gastrointestinal disease in humans, and Haemophilus ducreyi, which causes chancroid. Certain C. jejuni LPS serotypes (attributed to certain tetra- and pentasaccharide moieties of the core oligosaccharide) have also been implicated in Guillain–Barré syndrome and a variant of Guillain–Barré called Miller-Fisher syndrome. Link to obesity Epidemiological studies have shown that increased endotoxin load, which can be a result of increased populations of endotoxin-producing bacteria in the intestinal tract, is associated with certain obesity-related patient groups. 
Other studies have shown that purified endotoxin from Escherichia coli can induce obesity and insulin-resistance when injected into germ-free mouse models. A more recent study has uncovered a potentially contributing role for Enterobacter cloacae B29 toward obesity and insulin resistance in a human patient. The presumed mechanism for the association of endotoxin with obesity is that endotoxin induces an inflammation-mediated pathway accounting for the observed obesity and insulin resistance. Bacterial genera associated with endotoxin-related obesity effects include Escherichia and Enterobacter. Depression There is experimental and observational evidence that LPS might play a role in depression. Administration of LPS in mice can lead to depressive symptoms, and there seem to be elevated levels of LPS in some people with depression. Inflammation may sometimes play a role in the development of depression, and LPS is pro-inflammatory. Cellular senescence Inflammation induced by LPS can induce cellular senescence, as has been shown for the lung epithelial cells and microglial cells (the latter leading to neurodegeneration). Role as contaminant in biotechnology and research Lipopolysaccharides are frequent contaminants in plasmid DNA prepared from bacteria or proteins expressed from bacteria, and must be removed from the DNA or protein to avoid contaminating experiments and to avoid toxicity of products manufactured using industrial fermentation. Ovalbumin is frequently contaminated with endotoxins. Ovalbumin is one of the extensively studied proteins in animal models and also an established model allergen for airway hyper-responsiveness (AHR). Commercially available ovalbumin that is contaminated with LPS can falsify research results, as it does not accurately reflect the effect of the protein antigen on animal physiology. In pharmaceutical production, it is necessary to remove all traces of endotoxin from drug product containers, as even small amounts of endotoxin will cause illness in humans. A depyrogenation oven is used for this purpose. Temperatures in excess of 300 °C are required to fully break down LPS. The standard assay for detecting presence of endotoxin is the Limulus Amebocyte Lysate (LAL) assay, utilizing blood from the Horseshoe crab (Limulus polyphemus). Very low levels of LPS can cause coagulation of the limulus lysate due to a powerful amplification through an enzymatic cascade. However, due to the dwindling population of horseshoe crabs, and the fact that there are factors that interfere with the LAL assay, efforts have been made to develop alternative assays, with the most promising ones being ELISA tests using a recombinant version of a protein in the LAL assay, Factor C. See also Bioaerosol Depyrogenation Host-pathogen interface Mucopolysaccharide Nesfatin-1 Schwartzman reaction AOAH References External links Membrane-active molecules Saccharolipids Bacterial toxins
Lipopolysaccharide
[ "Physics", "Chemistry" ]
6,135
[ "Molecules", "Membrane-active molecules", "Matter" ]
412,984
https://en.wikipedia.org/wiki/Glide%20reflection
In geometry, a glide reflection or transflection is a geometric transformation that consists of a reflection across a hyperplane and a translation ("glide") in a direction parallel to that hyperplane, combined into a single transformation. Because the distances between points are not changed under glide reflection, it is a motion or isometry. When the context is the two-dimensional Euclidean plane, the hyperplane of reflection is a straight line called the glide line or glide axis. When the context is three-dimensional space, the hyperplane of reflection is a plane called the glide plane. The displacement vector of the translation is called the glide vector. When some geometrical object or configuration appears unchanged by a transformation, it is said to have symmetry, and the transformation is called a symmetry operation. Glide-reflection symmetry is seen in frieze groups (patterns which repeat in one dimension, often used in decorative borders), wallpaper groups (regular tessellations of the plane), and space groups (which describe e.g. crystal symmetries). Objects with glide-reflection symmetry are in general not symmetrical under reflection alone, but two applications of the same glide reflection result in a double translation, so objects with glide-reflection symmetry always also have a simple translational symmetry. When a reflection is composed with a translation in a direction perpendicular to the hyperplane of reflection, the composition of the two transformations is a reflection in a parallel hyperplane. However, when a reflection is composed with a translation in any other direction, the composition of the two transformations is a glide reflection, which can be uniquely described as a reflection in a parallel hyperplane composed with a translation in a direction parallel to the hyperplane. A single glide is represented as frieze group p11g. A glide reflection can be seen as a limiting rotoreflection, where the rotation becomes a translation. It can also be given a Schoenflies notation as S2∞, Coxeter notation as [∞+,2+], and orbifold notation as ∞×. Frieze groups In the Euclidean plane, reflections and glide reflections are the only two kinds of indirect (orientation-reversing) isometries. For example, there is an isometry consisting of the reflection on the x-axis, followed by translation of one unit parallel to it. In coordinates, it takes (x, y) to (x + 1, −y). This isometry maps the x-axis to itself; any other line which is parallel to the x-axis gets reflected in the x-axis, so this system of parallel lines is left invariant. The isometry group generated by just a glide reflection is an infinite cyclic group. Combining two equal glide reflections gives a pure translation with a translation vector that is twice that of the glide reflection, so the even powers of the glide reflection form a translation group. In the case of glide-reflection symmetry, the symmetry group of an object contains a glide reflection, and hence the group generated by it. If that is all it contains, this type is frieze group p11g. Example pattern with this symmetry group: A typical example of glide reflection in everyday life would be the track of footprints left in the sand by a person walking on a beach. Frieze group nr. 6 (glide-reflections, translations and rotations) is generated by a glide reflection and a rotation about a point on the line of reflection. It is isomorphic to a semi-direct product of Z and C2. 
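The coordinate form given above can be checked numerically. The following minimal sketch (plain Python, written for illustration rather than taken from any cited source) applies the glide reflection (x, y) ↦ (x + 1, −y) and verifies that applying it twice yields the pure translation (x, y) ↦ (x + 2, y), which is why the even powers of a glide reflection form a translation group.

def glide_reflection(point):
    """Reflect across the x-axis, then translate one unit parallel to it."""
    x, y = point
    return (x + 1.0, -y)

p = (0.5, 2.0)
once = glide_reflection(p)                     # (1.5, -2.0): orientation is reversed
twice = glide_reflection(glide_reflection(p))  # (2.5, 2.0): a pure translation by (2, 0)
assert twice == (p[0] + 2.0, p[1])
print(once, twice)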
Example pattern with this symmetry group: For any symmetry group containing some glide-reflection symmetry, the translation vector of any glide reflection is one half of an element of the translation group. If the translation vector of a glide reflection is itself an element of the translation group, then the corresponding glide-reflection symmetry reduces to a combination of reflection symmetry and translational symmetry. Wallpaper groups Glide-reflection symmetry with respect to two parallel lines with the same translation implies that there is also translational symmetry in the direction perpendicular to these lines, with a translation distance which is twice the distance between glide reflection lines. This corresponds to wallpaper group pg; with additional symmetry it occurs also in pmg, pgg and p4g. If there are also true reflection lines in the same direction then they are evenly spaced between the glide reflection lines. A glide reflection line parallel to a true reflection line already implies this situation. This corresponds to wallpaper group cm. The translational symmetry is given by oblique translation vectors from one point on a true reflection line to two points on the next, supporting a rhombus with the true reflection line as one of the diagonals. With additional symmetry it occurs also in cmm, p3m1, p31m, p4m and p6m. In the Euclidean plane 3 of 17 wallpaper groups require glide reflection generators. p2gg has orthogonal glide reflections and 2-fold rotations. cm has parallel mirrors and glides, and pg has parallel glides. (Glide reflections are shown below as dashed lines) Space groups Glide planes are noted in the Hermann–Mauguin notation by a, b or c, depending on which axis the glide is along. (The orientation of the plane is determined by the position of the symbol in the Hermann–Mauguin designation.) If the axis is not defined, then the glide plane may be noted by g. When the glide plane is parallel to the screen, these planes may be indicated by a bent arrow in which the arrowhead indicates the direction of the glide. When the glide plane is perpendicular to the screen, these planes can be represented either by dashed lines when the glide is parallel to the plane of the screen or dotted lines when the glide is perpendicular to the plane of the screen. Additionally, a centered lattice can cause a glide plane to exist in two directions at the same time. This type of glide plane may be indicated by a bent arrow with an arrowhead on both sides when the glide plan is parallel to the plane of the screen or a dashed and double-dotted line when the glide plane is perpendicular to the plane of the screen. There is also the n glide, which is a glide along the half of a diagonal of a face, and the d glide, which is along a fourth of either a face or space diagonal of the unit cell . The latter is often called the diamond glide plane as it features in the diamond structure. The n glide plane may be indicated by diagonal arrow when it is parallel to the plane of the screen or a dashed-dotted line when the glide plane is perpendicular to the plane of the screen. A d glide plane may be indicated by a diagonal half-arrow if the glide plane is parallel to the plane of the screen or a dashed-dotted line with arrows if the glide plane is perpendicular to the plane of the screen. If a d glide plane is present in a crystal system, then that crystal must have a centered lattice. 
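The glide-plane translations just described can be made concrete in fractional coordinates. The sketch below is a hedged toy example (the mirror orientation and the sample atom position are assumptions chosen for illustration, not taken from any crystallographic software): an a glide with its mirror plane perpendicular to the c axis shifts a point by half the a cell edge while reflecting z, and an n glide in the same orientation shifts by half the face diagonal (a + b)/2.

from fractions import Fraction as F

HALF = F(1, 2)

def a_glide_perp_c(p):
    # a glide, mirror plane perpendicular to c: reflect z, then translate by a/2.
    x, y, z = p
    return ((x + HALF) % 1, y % 1, (-z) % 1)

def n_glide_perp_c(p):
    # n glide, mirror plane perpendicular to c: reflect z, then translate by (a + b)/2.
    x, y, z = p
    return ((x + HALF) % 1, (y + HALF) % 1, (-z) % 1)

atom = (F(1, 10), F(1, 5), F(3, 10))   # hypothetical fractional coordinates (0.1, 0.2, 0.3)
print(a_glide_perp_c(atom))            # (3/5, 1/5, 7/10), i.e. (0.6, 0.2, 0.7)
print(n_glide_perp_c(atom))            # (3/5, 7/10, 7/10), i.e. (0.6, 0.7, 0.7)
# Applying the same glide twice is a whole lattice translation, i.e. the identity
# modulo the unit cell.
assert a_glide_perp_c(a_glide_perp_c(atom)) == atom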
In today's version of Hermann–Mauguin notation, the symbol e is used in cases where there are two possible ways of designating the glide direction because both are true. For example if a crystal has a base-centered Bravais lattice centered on the C face, then a glide of half a cell unit in the a direction gives the same result as a glide of half a cell unit in the b direction. The isometry group generated by just a glide reflection is an infinite cyclic group. Combining two equal glide plane operations gives a pure translation with a translation vector that is twice that of the glide reflection, so the even powers of the glide reflection form a translation group. In the case of glide-reflection symmetry, the symmetry group of an object contains a glide reflection and the group generated by it. For any symmetry group containing a glide reflection, the glide vector is one half of an element of the translation group. If the translation vector of a glide plane operation is itself an element of the translation group, then the corresponding glide plane symmetry reduces to a combination of reflection symmetry and translational symmetry. Examples and applications Glide symmetry can be observed in nature among certain fossils of the Ediacara biota; the machaeridians; and certain palaeoscolecid worms. It can also be seen in many extant groups of sea pens. In Conway's Game of Life, a commonly occurring pattern called the glider is so named because it repeats its configuration of cells, shifted by a glide reflection, after two steps of the automaton. After four steps and two glide reflections, the pattern returns to its original orientation, shifted diagonally by one unit. Continuing in this way, it moves across the array of the game. See also Screw axis Lattice (group) Notes References External links Glide Reflection at cut-the-knot Euclidean symmetries Transformation (function) Crystallography
Glide reflection
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,781
[ "Functions and mappings", "Euclidean symmetries", "Transformation (function)", "Mathematical objects", "Materials science", "Crystallography", "Mathematical relations", "Condensed matter physics", "Geometry", "Symmetry" ]
413,102
https://en.wikipedia.org/wiki/Folding%40home
Folding@home (FAH or F@h) is a distributed computing project aimed to help scientists develop new therapeutics for a variety of diseases by the means of simulating protein dynamics. This includes the process of protein folding and the movements of proteins, and is reliant on simulations run on volunteers' personal computers. Folding@home is currently based at the University of Pennsylvania and led by Greg Bowman, a former student of Vijay Pande. The project utilizes graphics processing units (GPUs), central processing units (CPUs), and ARM processors like those on the Raspberry Pi for distributed computing and scientific research. The project uses statistical simulation methodology that is a paradigm shift from traditional computing methods. As part of the client–server model network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation. Volunteers can track their contributions on the Folding@home website, which makes volunteers' participation competitive and encourages long-term involvement. Folding@home is one of the world's fastest computing systems. With heightened interest in the project as a result of the COVID-19 pandemic, the system achieved a speed of approximately 1.22 exaflops by late March 2020 and reached 2.43 exaflops by April 12, 2020, making it the world's first exaflop computing system. This level of performance from its large-scale computing network has allowed researchers to run computationally costly atomic-level simulations of protein folding thousands of times longer than formerly achieved. Since its launch on October 1, 2000, Folding@home was involved in the production of 226 scientific research papers. Results from the project's simulations agree well with experiments. Background Proteins are an essential component to many biological functions and participate in virtually all processes within biological cells. They often act as enzymes, performing biochemical reactions including cell signaling, molecular transportation, and cellular regulation. As structural elements, some proteins act as a type of skeleton for cells, and as antibodies, while other proteins participate in the immune system. Before a protein can take on these roles, it must fold into a functional three-dimensional structure, a process that often occurs spontaneously and is dependent on interactions within its amino acid sequence and interactions of the amino acids with their surroundings. Protein folding is driven by the search to find the most energetically favorable conformation of the protein, i.e., its native state. Thus, understanding protein folding is critical to understanding what a protein does and how it works, and is considered a holy grail of computational biology. Despite folding occurring within a crowded cellular environment, it typically proceeds smoothly. However, due to a protein's chemical properties or other factors, proteins may misfold, that is, fold down the wrong pathway and end up misshapen. Unless cellular mechanisms can destroy or refold misfolded proteins, they can subsequently aggregate and cause a variety of debilitating diseases. Laboratory experiments studying these processes can be limited in scope and atomic detail, leading scientists to use physics-based computing models that, when complementing experiments, seek to provide a more complete picture of protein folding, misfolding, and aggregation. 
Due to the complexity of proteins' conformation or configuration space (the set of possible shapes a protein can take), and limits in computing power, all-atom molecular dynamics simulations have been severely limited in the timescales that they can study. While most proteins typically fold in the order of milliseconds, before 2010, simulations could only reach nanosecond to microsecond timescales. General-purpose supercomputers have been used to simulate protein folding, but such systems are intrinsically costly and typically shared among many research groups. Further, because the computations in kinetic models occur serially, strong scaling of traditional molecular simulations to these architectures is exceptionally difficult. Moreover, as protein folding is a stochastic process (i.e., random) and can statistically vary over time, it is challenging computationally to use long simulations for comprehensive views of the folding process. Protein folding does not occur in one step. Instead, proteins spend most of their folding time, nearly 96% in some cases, waiting in various intermediate conformational states, each a local thermodynamic free energy minimum in the protein's energy landscape. Through a process known as adaptive sampling, these conformations are used by Folding@home as starting points for a set of simulation trajectories. As the simulations discover more conformations, the trajectories are restarted from them, and a Markov state model (MSM) is gradually created from this cyclic process. MSMs are discrete-time master equation models which describe a biomolecule's conformational and energy landscape as a set of distinct structures and the short transitions between them. The adaptive sampling Markov state model method significantly increases the efficiency of simulation as it avoids computation inside the local energy minimum itself, and is amenable to distributed computing (including on GPUGRID) as it allows for the statistical aggregation of short, independent simulation trajectories. The amount of time it takes to construct a Markov state model is inversely proportional to the number of parallel simulations run, i.e., the number of processors available. In other words, it achieves linear parallelization, leading to an approximately four orders of magnitude reduction in overall serial calculation time. A completed MSM may contain tens of thousands of sample states from the protein's phase space (all the conformations a protein can take on) and the transitions between them. The model illustrates folding events and pathways (i.e., routes) and researchers can later use kinetic clustering to view a coarse-grained representation of the otherwise highly detailed model. They can use these MSMs to reveal how proteins misfold and to quantitatively compare simulations with experiments. Between 2000 and 2010, the length of the proteins Folding@home has studied have increased by a factor of four, while its timescales for protein folding simulations have increased by six orders of magnitude. In 2002, Folding@home used Markov state models to complete approximately a million CPU days of simulations over the span of several months, and in 2011, MSMs parallelized another simulation that required an aggregate 10 million CPU hours of computing. In January 2010, Folding@home used MSMs to simulate the dynamics of the slow-folding 32-residue NTL9 protein out to 1.52 milliseconds, a timescale consistent with experimental folding rate predictions but a thousand times longer than formerly achieved. 
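The Markov state model idea described above can be illustrated with a deliberately tiny sketch (an assumption-laden toy, not Folding@home's MSMBuilder code: the three states, the trajectories and the lag time are invented for the example). Short discretized trajectories are pooled into a transition-count matrix, the counts are row-normalized into transition probabilities, and the second-largest eigenvalue gives the slowest implied relaxation timescale.

import numpy as np

# Toy discretized trajectories; each entry is a conformational state index
# (say 0 = unfolded, 1 = intermediate, 2 = folded). Values are invented.
trajs = [
    [0, 0, 1, 1, 1, 2, 2, 2, 2],
    [0, 1, 1, 0, 1, 2, 2, 1, 2],
    [2, 2, 2, 1, 1, 1, 0, 0, 1],
]
n_states, lag = 3, 1  # lag time in trajectory steps (assumed)

counts = np.zeros((n_states, n_states))
for traj in trajs:
    for i, j in zip(traj[:-lag], traj[lag:]):
        counts[i, j] += 1

# Row-normalize the counts into a transition probability matrix.
T = counts / counts.sum(axis=1, keepdims=True)

# The largest eigenvalue of a transition matrix is 1 (equilibrium); the next
# largest sets the slowest implied timescale, here in units of trajectory steps.
eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
print(T)
print("slowest implied timescale:", -lag / np.log(eigvals[1]))

Because the independent trajectories only need to be pooled rather than run in sequence, this kind of estimate parallelizes naturally; in the NTL9 model described here, the number of states and trajectories is of course many orders of magnitude larger than in this toy.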
The model consisted of many individual trajectories, each two orders of magnitude shorter, and provided an unprecedented level of detail into the protein's energy landscape. In 2010, Folding@home researcher Gregory Bowman was awarded the Thomas Kuhn Paradigm Shift Award from the American Chemical Society for the development of the open-source MSMBuilder software and for attaining quantitative agreement between theory and experiment. For his work, Pande was awarded the 2012 Michael and Kate Bárány Award for Young Investigators for "developing field-defining and field-changing computational methods to produce leading theoretical models for protein and RNA folding", and the 2006 Irving Sigal Young Investigator Award for his simulation results which "have stimulated a re-examination of the meaning of both ensemble and single-molecule measurements, making Pande's efforts pioneering contributions to simulation methodology." Examples of application in biomedical research Protein misfolding can result in a variety of diseases including Alzheimer's disease, cancer, Creutzfeldt–Jakob disease, cystic fibrosis, Huntington's disease, sickle-cell anemia, and type II diabetes. Cellular infection by viruses such as HIV and influenza also involve folding events on cell membranes. Once protein misfolding is better understood, therapies can be developed that augment cells' natural ability to regulate protein folding. Such therapies include the use of engineered molecules to alter the production of a given protein, help destroy a misfolded protein, or assist in the folding process. The combination of computational molecular modeling and experimental analysis has the possibility to fundamentally shape the future of molecular medicine and the rational design of therapeutics, such as expediting and lowering the costs of drug discovery. The goal of the first five years of Folding@home was to make advances in understanding folding, while the current goal is to understand misfolding and related disease, especially Alzheimer's. The simulations run on Folding@home are used in conjunction with laboratory experiments, but researchers can use them to study how folding in vitro differs from folding in native cellular environments. This is advantageous in studying aspects of folding, misfolding, and their relationships to disease that are difficult to observe experimentally. For example, in 2011, Folding@home simulated protein folding inside a ribosomal exit tunnel, to help scientists better understand how natural confinement and crowding might influence the folding process. Furthermore, scientists typically employ chemical denaturants to unfold proteins from their stable native state. It is not generally known how the denaturant affects the protein's refolding, and it is difficult to experimentally determine if these denatured states contain residual structures which may influence folding behavior. In 2010, Folding@home used GPUs to simulate the unfolded states of Protein L, and predicted its collapse rate in strong agreement with experimental results. The large data sets from the project are freely available for other researchers to use upon request and some can be accessed from the Folding@home website. The Pande lab has collaborated with other molecular dynamics systems such as the Blue Gene supercomputer, and they share Folding@home's key software with other researchers, so that the algorithms which benefited Folding@home may aid other scientific areas. 
In 2011, they released the open-source Copernicus software, which is based on Folding@home's MSM and other parallelizing methods and aims to improve the efficiency and scaling of molecular simulations on large computer clusters or supercomputers. Summaries of all scientific findings from Folding@home are posted on the Folding@home website after publication. Alzheimer's disease Alzheimer's disease is an incurable neurodegenerative disease which most often affects the elderly and accounts for more than half of all cases of dementia. Its exact cause remains unknown, but the disease is identified as a protein misfolding disease. Alzheimer's is associated with toxic aggregations of the amyloid beta (Aβ) peptide, caused by Aβ misfolding and clumping together with other Aβ peptides. These Aβ aggregates then grow into significantly larger senile plaques, a pathological marker of Alzheimer's disease. Due to the heterogeneous nature of these aggregates, experimental methods such as X-ray crystallography and nuclear magnetic resonance (NMR) have had difficulty characterizing their structures. Moreover, atomic simulations of Aβ aggregation are highly demanding computationally due to their size and complexity. Preventing Aβ aggregation is a promising method to developing therapeutic drugs for Alzheimer's disease, according to Naeem and Fazili in a literature review article. In 2008, Folding@home simulated the dynamics of Aβ aggregation in atomic detail over timescales of the order of tens of seconds. Prior studies were only able to simulate about 10 microseconds. Folding@home was able to simulate Aβ folding for six orders of magnitude longer than formerly possible. Researchers used the results of this study to identify a beta hairpin that was a major source of molecular interactions within the structure. The study helped prepare the Pande lab for future aggregation studies and for further research to find a small peptide which may stabilize the aggregation process. In December 2008, Folding@home found several small drug candidates which appear to inhibit the toxicity of Aβ aggregates. In 2010, in close cooperation with the Center for Protein Folding Machinery, these drug leads began to be tested on biological tissue. In 2011, Folding@home completed simulations of several mutations of Aβ that appear to stabilize the aggregate formation, which could aid in the development of therapeutic drug therapies for the disease and greatly assist with experimental nuclear magnetic resonance spectroscopy studies of Aβ oligomers. Later that year, Folding@home began simulations of various Aβ fragments to determine how various natural enzymes affect the structure and folding of Aβ. Huntington's disease Huntington's disease is a neurodegenerative genetic disorder that is associated with protein misfolding and aggregation. Excessive repeats of the glutamine amino acid at the N-terminus of the huntingtin protein cause aggregation, and although the behavior of the repeats is not completely understood, it does lead to the cognitive decline associated with the disease. As with other aggregates, there is difficulty in experimentally determining its structure. Scientists are using Folding@home to study the structure of the huntingtin protein aggregate and to predict how it forms, assisting with rational drug design methods to stop the aggregate formation. 
The N17 fragment of the huntingtin protein accelerates this aggregation, and while there have been several mechanisms proposed, its exact role in this process remains largely unknown. Folding@home has simulated this and other fragments to clarify their roles in the disease. Since 2008, its drug design methods for Alzheimer's disease have been applied to Huntington's. Cancer More than half of all known cancers involve mutations of p53, a tumor suppressor protein present in every cell which regulates the cell cycle and signals for cell death in the event of damage to DNA. Specific mutations in p53 can disrupt these functions, allowing an abnormal cell to continue growing unchecked, resulting in the development of tumors. Analysis of these mutations helps explain the root causes of p53-related cancers. In 2004, Folding@home was used to perform the first molecular dynamics study of the refolding of p53's protein dimer in an all-atom simulation of water. The simulation's results agreed with experimental observations and gave insights into the refolding of the dimer that were formerly unobtainable. This was the first peer reviewed publication on cancer from a distributed computing project. The following year, Folding@home powered a new method to identify the amino acids crucial for the stability of a given protein, which was then used to study mutations of p53. The method was reasonably successful in identifying cancer-promoting mutations and determined the effects of specific mutations which could not otherwise be measured experimentally. Folding@home is also used to study protein chaperones, heat shock proteins which play essential roles in cell survival by assisting with the folding of other proteins in the crowded and chemically stressful environment within a cell. Rapidly growing cancer cells rely on specific chaperones, and some chaperones play key roles in chemotherapy resistance. Inhibitions to these specific chaperones are seen as potential modes of action for efficient chemotherapy drugs or for reducing the spread of cancer. Using Folding@home and working closely with the Center for Protein Folding Machinery, the Pande lab hopes to find a drug which inhibits those chaperones involved in cancerous cells. Researchers are also using Folding@home to study other molecules related to cancer, such as the enzyme Src kinase, and some forms of the engrailed homeodomain: a large protein which may be involved in many diseases, including cancer. In 2011, Folding@home began simulations of the dynamics of the small knottin protein EETI, which can identify carcinomas in imaging scans by binding to surface receptors of cancer cells. Interleukin 2 (IL-2) is a protein that helps T cells of the immune system attack pathogens and tumors. However, its use as a cancer treatment is restricted due to serious side effects such as pulmonary edema. IL-2 binds to these pulmonary cells differently than it does to T cells, so IL-2 research involves understanding the differences between these binding mechanisms. In 2012, Folding@home assisted with the discovery of a mutant form of IL-2 which is three hundred times more effective in its immune system role but carries fewer side effects. In experiments, this altered form significantly outperformed natural IL-2 in impeding tumor growth. Pharmaceutical companies have expressed interest in the mutant molecule, and the National Institutes of Health are testing it against a large variety of tumor models to try to accelerate its development as a therapeutic. 
Osteogenesis imperfecta Osteogenesis imperfecta, known as brittle bone disease, is an incurable genetic bone disorder which can be lethal. Those with the disease are unable to make functional connective bone tissue. This is most commonly due to a mutation in Type-I collagen, which fulfills a variety of structural roles and is the most abundant protein in mammals. The mutation causes a deformation in collagen's triple helix structure, which if not naturally destroyed, leads to abnormal and weakened bone tissue. In 2005, Folding@home tested a new quantum mechanical method that improved upon prior simulation methods, and which may be useful for future computing studies of collagen. Although researchers have used Folding@home to study collagen folding and misfolding, the interest stands as a pilot project compared to Alzheimer's and Huntington's research. Viruses Folding@home is assisting in research towards preventing some viruses, such as influenza and HIV, from recognizing and entering biological cells. In 2011, Folding@home began simulations of the dynamics of the enzyme RNase H, a key component of HIV, to try to design drugs to deactivate it. Folding@home has also been used to study membrane fusion, an essential event for viral infection and a wide range of biological functions. This fusion involves conformational changes of viral fusion proteins and protein docking, but the exact molecular mechanisms behind fusion remain largely unknown. Fusion events may consist of over a half million atoms interacting for hundreds of microseconds. This complexity limits typical computer simulations to about ten thousand atoms over tens of nanoseconds: a difference of several orders of magnitude. The development of models to predict the mechanisms of membrane fusion will assist in the scientific understanding of how to target the process with antiviral drugs. In 2006, scientists applied Markov state models and the Folding@home network to discover two pathways for fusion and gain other mechanistic insights. Following detailed simulations from Folding@home of small cells known as vesicles, in 2007, the Pande lab introduced a new computing method to measure the topology of its structural changes during fusion. In 2009, researchers used Folding@home to study mutations of influenza hemagglutinin, a protein that attaches a virus to its host cell and assists with viral entry. Mutations to hemagglutinin affect how well the protein binds to a host's cell surface receptor molecules, which determines how infective the virus strain is to the host organism. Knowledge of the effects of hemagglutinin mutations assists in the development of antiviral drugs. As of 2012, Folding@home continues to simulate the folding and interactions of hemagglutinin, complementing experimental studies at the University of Virginia. In March 2020, Folding@home launched a program to assist researchers around the world who are working on finding a cure and learning more about the coronavirus pandemic. The initial wave of projects simulate potentially druggable protein targets from SARS-CoV-2 virus, and the related SARS-CoV virus, about which there is significantly more data available. Drug design Drugs function by binding to specific locations on target molecules and causing some desired change, such as disabling a target or causing a conformational change. Ideally, a drug should act very specifically, and bind only to its target without interfering with other biological functions. 
However, it is difficult to precisely determine where and how tightly two molecules will bind. Due to limits in computing power, current in silico methods usually must trade speed for accuracy; e.g., use rapid protein docking methods instead of computationally costly free energy calculations. Folding@home's computing performance allows researchers to use both methods, and evaluate their efficiency and reliability. Computer-assisted drug design has the potential to expedite and lower the costs of drug discovery. In 2010, Folding@home used MSMs and free energy calculations to predict the native state of the villin protein to within 1.8 angstrom (Å) root mean square deviation (RMSD) from the crystalline structure experimentally determined through X-ray crystallography. This accuracy has implications to future protein structure prediction methods, including for intrinsically unstructured proteins. Scientists have used Folding@home to research drug resistance by studying vancomycin, an antibiotic drug of last resort, and beta-lactamase, a protein that can break down antibiotics like penicillin. Chemical activity occurs along a protein's active site. Traditional drug design methods involve tightly binding to this site and blocking its activity, under the assumption that the target protein exists in one rigid structure. However, this approach works for approximately only 15% of all proteins. Proteins contain allosteric sites which, when bound to by small molecules, can alter a protein's conformation and ultimately affect the protein's activity. These sites are attractive drug targets, but locating them is very computationally costly. In 2012, Folding@home and MSMs were used to identify allosteric sites in three medically relevant proteins: beta-lactamase, interleukin-2, and RNase H. Approximately half of all known antibiotics interfere with the workings of a bacteria's ribosome, a large and complex biochemical machine that performs protein biosynthesis by translating messenger RNA into proteins. Macrolide antibiotics clog the ribosome's exit tunnel, preventing synthesis of essential bacterial proteins. In 2007, the Pande lab received a grant to study and design new antibiotics. In 2008, they used Folding@home to study the interior of this tunnel and how specific molecules may affect it. The full structure of the ribosome was determined only as of 2011, and Folding@home has also simulated ribosomal proteins, as many of their functions remain largely unknown. Patterns of participation Like other distributed computing projects, Folding@home is an online citizen science project. In these projects non-specialists contribute computer processing power or help to analyze data produced by professional scientists. Participants receive little or no obvious reward. Research has been carried out into the motivations of citizen scientists and most of these studies have found that participants are motivated to take part because of altruistic reasons; that is, they want to help scientists and make a contribution to the advancement of their research. Many participants in citizen science have an underlying interest in the topic of the research and gravitate towards projects that are in disciplines of interest to them. Folding@home is no different in that respect. Research carried out recently on over 400 active participants revealed that they wanted to help make a contribution to research and that many had friends or relatives affected by the diseases that the Folding@home scientists investigate. 
Folding@home attracts participants who are computer hardware enthusiasts. These groups bring considerable expertise to the project and are able to build computers with advanced processing power. Other distributed computing projects attract these types of participants, and such projects are often used to benchmark the performance of modified computers; this aspect of the hobby is accommodated through the competitive nature of the project. Individuals and teams can compete to see who can process the most work units. This research, which involved interviews and ethnographic observation of online groups, showed that teams of hardware enthusiasts can sometimes work together, sharing best practice with regard to maximizing processing output. Such teams can become communities of practice, with a shared language and online culture. This pattern of participation has been observed in other distributed computing projects. Another key observation of Folding@home participants is that many are male. This has also been observed in other distributed projects. Furthermore, many participants work in computer and technology-based jobs and careers. Not all Folding@home participants are hardware enthusiasts. Many participants run the project software on unmodified machines and do take part competitively. By January 2020, the number of users was down to 30,000. However, it is difficult to ascertain what proportion of participants are hardware enthusiasts, although, according to the project managers, the contribution of the enthusiast community is substantially larger in terms of processing power. Performance Supercomputer FLOPS performance is assessed by running the legacy LINPACK benchmark. This short-term testing has difficulty in accurately reflecting sustained performance on real-world tasks because LINPACK more efficiently maps to supercomputer hardware. Computing systems vary in architecture and design, so direct comparison is difficult. Despite this, FLOPS remain the primary speed metric used in supercomputing. In contrast, Folding@home determines its FLOPS using wall-clock time by measuring how much time its work units take to complete. On September 16, 2007, due in large part to the participation of PlayStation 3 consoles, the Folding@home project officially attained a sustained performance level higher than one native petaFLOPS, becoming the first computing system of any kind to do so. Top500's fastest supercomputer at the time was BlueGene/L, at 0.280 petaFLOPS. The following year, on May 7, 2008, the project attained a sustained performance level higher than two native petaFLOPS, followed by the three and four native petaFLOPS milestones in August 2008 and September 28, 2008 respectively. On February 18, 2009, Folding@home achieved five native petaFLOPS, and was the first computing project to meet these five levels. In comparison, November 2008's fastest supercomputer was IBM's Roadrunner at 1.105 petaFLOPS. On November 10, 2011, Folding@home's performance exceeded six native petaFLOPS with the equivalent of nearly eight x86 petaFLOPS. In mid-May 2013, Folding@home attained over seven native petaFLOPS, with the equivalent of 14.87 x86 petaFLOPS. It then reached eight native petaFLOPS on June 21, followed by nine on September 9 of that year, with 17.9 x86 petaFLOPS. On May 11, 2016, Folding@home announced that it was moving towards reaching the 100 x86 petaFLOPS mark.
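The wall-clock accounting behind these figures can be made concrete with a toy calculation; the operation counts and times below are invented for illustration and are not the project's real benchmarking code. If the number of floating-point operations in a work unit is known from its simulation parameters, each client's sustained rate is simply operations divided by elapsed time, and the project-wide figure is the sum over clients returning work.

# Illustrative only: estimating aggregate throughput from wall-clock
# completion times of work units (all numbers are made up).
completed_units = [
    # (floating-point operations in the unit, wall-clock seconds to finish)
    (1.2e15, 36_000),
    (8.0e14, 21_600),
    (2.5e15, 90_000),
]
total_flops = sum(ops / seconds for ops, seconds in completed_units)
print(f"Estimated sustained throughput: {total_flops / 1e12:.2f} teraFLOPS")

Roughly speaking, "native" and "x86-equivalent" figures differ only in how one GPU or Cell operation is weighted relative to an x86 floating-point operation, so converting between them amounts to a per-architecture scale factor.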
Participation grew further with increased awareness of the project during the coronavirus pandemic in 2020. On March 20, 2020, Folding@home announced via Twitter that it was running with over 470 native petaFLOPS, the equivalent of 958 x86 petaFLOPS. By March 25, it reached 768 petaFLOPS, or 1.5 x86 exaFLOPS, making it the first exaFLOP computing system. The project's sustained computing power has since declined to 14.3 petaFLOPS, or 27.5 x86 petaFLOPS. Points Similarly to other distributed computing projects, Folding@home quantitatively assesses user computing contributions to the project through a credit system. All units from a given protein project have uniform base credit, which is determined by benchmarking one or more work units from that project on an official reference machine before the project is released. Each user receives these base points for completing every work unit, though, through the use of a passkey, they can receive added bonus points for reliably and rapidly completing units that are more computationally demanding or have a greater scientific priority. Users may also receive credit for their work by clients on multiple machines. This point system attempts to align awarded credit with the value of the scientific results. Users can register their contributions under a team, which combines the points of all its members. A user can start their own team, or they can join an existing team. In some cases, a team may have its own community-driven sources of help or recruitment such as an Internet forum. The points can foster friendly competition between individuals and teams to compute the most for the project, which can benefit the folding community and accelerate scientific research. Individual and team statistics are posted on the Folding@home website. A user who neither starts a new team nor joins an existing one automatically becomes part of a "Default" team. This "Default" team has a team number of "0". Statistics are accumulated for this "Default" team as well as for specially named teams. Software Folding@home software at the user's end involves three primary components: work units, cores, and a client. Work units A work unit is the protein data that the client is asked to process. Work units are a fraction of the simulation between the states in a Markov model. After the work unit has been downloaded and completely processed by a volunteer's computer, it is returned to Folding@home servers, which then award the volunteer the credit points. This cycle repeats automatically. All work units have associated deadlines, and if a deadline is exceeded, the user may not get credit and the unit will be automatically reissued to another participant. As protein folding occurs serially, and many work units are generated from their predecessors, this allows the overall simulation process to proceed normally if a work unit is not returned after a reasonable period of time. Due to these deadlines, the minimum system requirement for Folding@home is a Pentium 3 450 MHz CPU with Streaming SIMD Extensions (SSE). However, work units for high-performance clients have a much shorter deadline than those for the uniprocessor client, as a major part of the scientific benefit is dependent on rapidly completing simulations. Before public release, work units go through several quality assurance steps to keep problematic ones from becoming fully available. These testing stages include internal, beta, and advanced, before a final full release across Folding@home.
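A simplified sketch of the assignment-and-deadline cycle described above follows. This is not Folding@home's actual client or server code: the server and client objects, the three-day deadline, and the bonus multiplier are invented stand-ins, and the real passkey bonus scheme is only qualitatively similar in that faster, reliable returns earn more credit.

import time

DEADLINE_SECONDS = 3 * 24 * 3600   # hypothetical per-unit deadline
BASE_CREDIT = 1000                 # hypothetical uniform base credit for a project

def award_credit(elapsed_seconds, has_passkey):
    # Base points for any valid return; an invented speed bonus stands in
    # for the real passkey-based quick-return bonus.
    credit = BASE_CREDIT
    if has_passkey and elapsed_seconds < DEADLINE_SECONDS:
        credit *= 1 + (DEADLINE_SECONDS - elapsed_seconds) / DEADLINE_SECONDS
    return round(credit)

def process_one_unit(server, client):
    unit = server.assign_unit(client)      # download the protein data
    started = time.time()
    result = client.run_core(unit)         # the molecular dynamics core does the work
    elapsed = time.time() - started
    if elapsed > DEADLINE_SECONDS:
        server.reissue(unit)               # missed deadline: reassigned to another participant
        return 0
    server.store_result(unit, result)      # successor units are generated from this state
    return award_credit(elapsed, client.has_passkey)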
Folding@home's work units are normally processed only once, except in the rare event that errors occur during processing. If this occurs for three different users, the unit is automatically pulled from distribution. The Folding@home support forum can be used to differentiate between issues arising from problematic hardware and bad work units. Cores Specialized molecular dynamics programs, referred to as "FahCores" and often abbreviated "cores", perform the calculations on the work unit as a background process. A large majority of Folding@home's cores are based on GROMACS, one of the fastest and most popular molecular dynamics software packages, which largely consists of manually optimized assembly language code and hardware optimizations. Although GROMACS is open-source software and there is a cooperative effort between the Pande lab and GROMACS developers, Folding@home uses a closed-source license to help ensure data validity. Less active cores include ProtoMol and SHARPEN. Folding@home has used AMBER, CPMD, Desmond, and TINKER, but these have since been retired and are no longer in active service. Some of these cores perform explicit solvation calculations in which the surrounding solvent (usually water) is modeled atom-by-atom; while others perform implicit solvation methods, where the solvent is treated as a mathematical continuum. The core is separate from the client to enable the scientific methods to be updated automatically without requiring a client update. The cores periodically create calculation checkpoints so that if they are interrupted they can resume work from that point upon startup. Client A Folding@home participant installs a client program on their personal computer. The user interacts with the client, which manages the other software components in the background. Through the client, the user may pause the folding process, open an event log, check the work progress, or view personal statistics. The computer clients run continuously in the background at a very low priority, using idle processing power so that normal computer use is unaffected. The maximum CPU use can be adjusted via client settings. The client connects to a Folding@home server and retrieves a work unit and may also download the appropriate core for the client's settings, operating system, and the underlying hardware architecture. After processing, the work unit is returned to the Folding@home servers. Computer clients are tailored to uniprocessor and multi-core processor systems, and graphics processing units. The diversity and power of each hardware architecture provides Folding@home with the ability to efficiently complete many types of simulations in a timely manner (in a few weeks or months rather than years), which is of significant scientific value. Together, these clients allow researchers to study biomedical questions formerly considered impractical to tackle computationally. Professional software developers are responsible for most of Folding@home's code, both for the client and server-side. The development team includes programmers from Nvidia, ATI, Sony, and Cauldron Development. Clients can be downloaded only from the official Folding@home website or its commercial partners, and will only interact with Folding@home computer files. They will upload and download data with Folding@home's data servers (over port 8080, with 80 as an alternate), and the communication is verified using 2048-bit digital signatures. 
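The 2048-bit signature check mentioned above can be sketched with a standard RSA verification; the padding and hash choices here are assumptions for illustration rather than Folding@home's documented scheme, and the key and payload are placeholders. The point is only that a public key shipped with the client lets it confirm that work data and cores really came from the project's servers.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def payload_is_authentic(public_key_pem: bytes, payload: bytes, signature: bytes) -> bool:
    # Verify an RSA signature (e.g. from a 2048-bit key) over a downloaded payload.
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False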
While the client's graphical user interface (GUI) is open-source, the client is proprietary software citing security and scientific integrity as the reasons. However, this rationale of using proprietary software is disputed since while the license could be enforceable in the legal domain retrospectively, it does not practically prevent the modification (also known as patching) of the executable binary files. Likewise, binary-only distribution does not prevent the malicious modification of executable binary-code, either through a man-in-the-middle attack while being downloaded via the internet, or by the redistribution of binaries by a third-party that have been previously modified either in their binary state (i.e. patched), or by decompiling and recompiling them after modification. These modifications are possible unless the binary files – and the transport channel – are signed and the recipient person/system is able to verify the digital signature, in which case unwarranted modifications should be detectable, but not always. Either way, since in the case of Folding@home the input data and output result processed by the client-software are both digitally signed, the integrity of work can be verified independently from the integrity of the client software itself. Folding@home uses the Cosm software libraries for networking. Folding@home was launched on October 1, 2000, and was the first distributed computing project aimed at bio-molecular systems. Its first client was a screensaver, which would run while the computer was not otherwise in use. In 2004, the Pande lab collaborated with David P. Anderson to test a supplemental client on the open-source BOINC framework. This client was released to closed beta in April 2005; however, the method became unworkable and was shelved in June 2006. Graphics processing units The specialized hardware of graphics processing units (GPU) is designed to accelerate rendering of 3-D graphics applications such as video games and can significantly outperform CPUs for some types of calculations. GPUs are one of the most powerful and rapidly growing computing platforms, and many scientists and researchers are pursuing general-purpose computing on graphics processing units (GPGPU). However, GPU hardware is difficult to use for non-graphics tasks and usually requires significant algorithm restructuring and an advanced understanding of the underlying architecture. Such customization is challenging, more so to researchers with limited software development resources. Folding@home uses the open-source OpenMM library, which uses a bridge design pattern with two application programming interface (API) levels to interface molecular simulation software to an underlying hardware architecture. With the addition of hardware optimizations, OpenMM-based GPU simulations need no significant modification but achieve performance nearly equal to hand-tuned GPU code, and greatly outperform CPU implementations. Before 2010, the computing reliability of GPGPU consumer-grade hardware was largely unknown, and circumstantial evidence related to the lack of built-in error detection and correction in GPU memory raised reliability concerns. In the first large-scale test of GPU scientific accuracy, a 2010 study of over 20,000 hosts on the Folding@home network detected soft errors in the memory subsystems of two-thirds of the tested GPUs. 
These errors strongly correlated with board architecture, though the study concluded that reliable GPU computing was very feasible as long as attention is paid to hardware characteristics, for example through software-side error detection. The first generation of Folding@home's GPU client (GPU1) was released to the public on October 2, 2006, delivering a 20–30 times speedup for some calculations over its CPU-based GROMACS counterparts. It was the first time GPUs had been used for either distributed computing or major molecular dynamics calculations. GPU1 gave researchers significant knowledge and experience with the development of GPGPU software, but in response to scientific inaccuracies with DirectX, on April 10, 2008, it was succeeded by GPU2, the second generation of the client. Following the introduction of GPU2, GPU1 was officially retired on June 6. Compared to GPU1, GPU2 was more scientifically reliable and productive, ran on ATI and CUDA-enabled Nvidia GPUs, and supported more advanced algorithms, larger proteins, and real-time visualization of the protein simulation. Following this, the third generation of Folding@home's GPU client (GPU3) was released on May 25, 2010. While backward compatible with GPU2, GPU3 was more stable, efficient, and flexible in its scientific abilities, and used OpenMM on top of an OpenCL framework. Although these GPU3 clients did not natively support the operating systems Linux and macOS, Linux users with Nvidia graphics cards were able to run them through the Wine software application. GPUs remain Folding@home's most powerful platform in FLOPS. As of November 2012, GPU clients account for 87% of the entire project's x86 FLOPS throughput. Native support for Nvidia and AMD graphics cards under Linux was introduced with FahCore 17, which uses OpenCL rather than CUDA. PlayStation 3 From March 2007 until November 2012, Folding@home took advantage of the computing power of PlayStation 3s. At the time of its inception, its main streaming Cell processor delivered a 20 times speed increase over PCs for some calculations, processing power which could not be found on other systems such as the Xbox 360. The PS3's high speed and efficiency introduced other opportunities for worthwhile optimizations according to Amdahl's law, and significantly changed the tradeoff between computing efficiency and overall accuracy, allowing the use of more complex molecular models at little added computing cost. This allowed Folding@home to run biomedical calculations that would have been otherwise infeasible computationally. The PS3 client was developed in a collaborative effort between Sony and the Pande lab and was first released as a standalone client on March 23, 2007. Its release made Folding@home the first distributed computing project to use PS3s. On September 18 of the following year, the PS3 client became a channel of Life with PlayStation upon that service's launch. In the types of calculations it can perform, at the time of its introduction, the client fit in between a CPU's flexibility and a GPU's speed. However, unlike clients running on personal computers, users were unable to perform other activities on their PS3 while running Folding@home. The PS3's uniform console environment made technical support easier and made Folding@home more user-friendly. The PS3 also had the ability to stream data quickly to its GPU, which was used for real-time atomic-level visualizing of the current protein dynamics.
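The hardware flexibility that OpenMM gives the GPU cores discussed above comes from its interchangeable Platform back ends: the same simulation script can target CUDA, OpenCL, or the CPU. A minimal sketch using the modern OpenMM Python API is below; it is not Folding@home's actual core, and "input.pdb" is a placeholder for a prepared protein structure.

import openmm as mm
from openmm import app, unit

pdb = app.PDBFile("input.pdb")                      # placeholder structure with hydrogens
forcefield = app.ForceField("amber14-all.xml")      # a standard protein force field
system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=app.NoCutoff,
                                 constraints=app.HBonds)
integrator = mm.LangevinIntegrator(300 * unit.kelvin,
                                   1 / unit.picosecond,
                                   0.002 * unit.picoseconds)

# Swapping this one string moves the same simulation between hardware back ends.
platform = mm.Platform.getPlatformByName("CUDA")    # or "OpenCL" / "CPU"
simulation = app.Simulation(pdb.topology, system, integrator, platform)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.step(5000)                               # a short trajectory segment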
On November 6, 2012, Sony ended support for the Folding@home PS3 client and other services available under Life with PlayStation. Over its lifetime of five years and seven months, more than 15 million users contributed over 100 million hours of computing to Folding@home, greatly assisting the project with disease research. Following discussions with the Pande lab, Sony decided to terminate the application. Pande considered the PlayStation 3 client a "game changer" for the project. Multi-core processing client Folding@home can use the parallel computing abilities of modern multi-core processors. The ability to use several CPU cores simultaneously allows completing the full simulation far faster. Working together, these CPU cores complete single work units proportionately faster than the standard uniprocessor client. This method is scientifically valuable because it enables much longer simulation trajectories to be performed in the same amount of time, and reduces the traditional difficulties of scaling a large simulation to many separate processors. A 2007 publication in the Journal of Molecular Biology relied on multi-core processing to simulate the folding of part of the villin protein approximately 10 times longer than was possible with a single-processor client, in agreement with experimental folding rates. In November 2006, first-generation symmetric multiprocessing (SMP) clients were publicly released for open beta testing, referred to as SMP1. These clients used Message Passing Interface (MPI) communication protocols for parallel processing, as at that time the GROMACS cores were not designed to be used with multiple threads. This was the first time a distributed computing project had used MPI. Although the clients performed well in Unix-based operating systems such as Linux and macOS, they were troublesome under Windows. On January 24, 2010, SMP2, the second generation of the SMP clients and the successor to SMP1, was released as an open beta and replaced the complex MPI with a more reliable thread-based implementation. SMP2 supports a trial of a special category of bigadv work units, designed to simulate proteins that are unusually large and computationally intensive and have a great scientific priority. These units originally required a minimum of eight CPU cores, which was raised to sixteen later, on February 7, 2012. Along with these added hardware requirements over standard SMP2 work units, they require more system resources such as random-access memory (RAM) and Internet bandwidth. In return, users who run these are rewarded with a 20% increase over SMP2's bonus point system. The bigadv category allows Folding@home to run especially demanding simulations for long times that had formerly required use of supercomputing clusters and could not be performed anywhere else on Folding@home. Many users with hardware able to run bigadv units have later had their hardware setup deemed ineligible for bigadv work units when CPU core minimums were increased, leaving them only able to run the normal SMP work units. This frustrated many users who invested significant amounts of money into the program only to have their hardware be obsolete for bigadv purposes shortly after. As a result, Pande announced in January 2014 that the bigadv program would end on January 31, 2015. V7 The V7 client is the seventh and latest generation of the Folding@home client software, and is a full rewrite and unification of the prior clients for Windows, macOS, and Linux operating systems. 
It was released on March 22, 2012. Like its predecessors, V7 can run Folding@home in the background at a very low priority, allowing other applications to use CPU resources as they need. It is designed to make the installation, start-up, and operation more user-friendly for novices, and offer greater scientific flexibility to researchers than prior clients. V7 uses Trac for managing its bug tickets so that users can see its development process and provide feedback. V7 consists of four integrated elements. The user typically interacts with V7's open-source GUI, named FAHControl. This has Novice, Advanced, and Expert user interface modes, and has the ability to monitor, configure, and control many remote folding clients from one computer. FAHControl directs FAHClient, a back-end application that in turn manages each FAHSlot (or slot). Each slot acts as replacement for the formerly distinct Folding@home v6 uniprocessor, SMP, or GPU computer clients, as it can download, process, and upload work units independently. The FAHViewer function, modeled after the PS3's viewer, displays a real-time 3-D rendering, if available, of the protein currently being processed. Google Chrome In 2014, a client for the Google Chrome and Chromium web browsers was released, allowing users to run Folding@home in their web browser. The client used Google's Native Client (NaCl) feature on Chromium-based web browsers to run the Folding@home code at near-native speed in a sandbox on the user's machine. Due to the phasing out of NaCL and changes at Folding@home, the web client was permanently shut down in June 2019. Android In July 2015, a client for Android mobile phones was released on Google Play for devices running Android 4.4 KitKat or newer. On February 16, 2018, the Android client, which was offered in cooperation with Sony, was removed from Google Play. Plans were announced to offer an open source alternative in the future. Comparison to other molecular simulators Rosetta@home is a distributed computing project aimed at protein structure prediction and is one of the most accurate tertiary structure predictors. The conformational states from Rosetta's software can be used to initialize a Markov state model as starting points for Folding@home simulations. Conversely, structure prediction algorithms can be improved from thermodynamic and kinetic models and the sampling aspects of protein folding simulations. As Rosetta only tries to predict the final folded state, and not how folding proceeds, Rosetta@home and Folding@home are complementary and address very different molecular questions. Anton is a special-purpose supercomputer built for molecular dynamics simulations. In October 2011, Anton and Folding@home were the two most powerful molecular dynamics systems. Anton is unique in its ability to produce single ultra-long computationally costly molecular trajectories, such as one in 2010 which reached the millisecond range. These long trajectories may be especially helpful for some types of biochemical problems. However, Anton does not use Markov state models (MSM) for analysis. In 2011, the Pande lab constructed a MSM from two 100-μs Anton simulations and found alternative folding pathways that were not visible through Anton's traditional analysis. They concluded that there was little difference between MSMs constructed from a limited number of long trajectories or one assembled from many shorter trajectories. 
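At its core, a Markov state model of the kind compared here is a row-stochastic transition matrix estimated from trajectories whose frames have been assigned to discrete conformational states. A minimal sketch of that estimation step follows; the clustering that produces the state assignments is assumed to have been done already.

import numpy as np

def estimate_transition_matrix(discrete_trajs, n_states, lag=1):
    # discrete_trajs: list of integer arrays; each entry is the state index
    # of one trajectory frame. lag is the lag time in frames.
    counts = np.zeros((n_states, n_states))
    for traj in discrete_trajs:
        for i, j in zip(traj[:-lag], traj[lag:]):
            counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0            # leaves unvisited states as zero rows
    return counts / row_sums

# Toy example: two short discretized trajectories over three states.
T = estimate_transition_matrix(
    [np.array([0, 0, 1, 1, 2, 2, 1]), np.array([2, 1, 0, 0, 0, 1])],
    n_states=3)
print(T)

Whether the counts come from a few very long trajectories or from many short ones, as in the comparison above, only the aggregated transition statistics enter the model.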
In June 2011 Folding@home added sampling of an Anton simulation in an effort to better determine how its methods compare to Anton's. However, unlike Folding@home's shorter trajectories, which are more amenable to distributed computing and other parallelizing methods, longer trajectories do not require adaptive sampling to sufficiently sample the protein's phase space. Due to this, it is possible that a combination of Anton's and Folding@home's simulation methods would provide a more thorough sampling of this space. See also BOINC DreamLab, for use on smartphones Foldit List of distributed computing projects Comparison of software for molecular mechanics modeling Molecular modeling on GPUs SETI@home Storage@home Molecule editor Volunteer computing World Community Grid References Sources External links Bioinformatics Computational biology Computational chemistry 2000 software Cross-platform software Data mining and machine learning software Distributed computing projects Hidden Markov models Mathematical and theoretical biology Medical technology Medical research organizations Molecular dynamics software Molecular modelling Molecular modelling software PlayStation 3 software Proprietary cross-platform software Protein folds Protein structure Simulation software Science software for Linux Science software for macOS Science software for Windows University of Pennsylvania
Folding@home
[ "Chemistry", "Mathematics", "Engineering", "Biology" ]
9,809
[ "Molecular dynamics software", "Molecular modelling software", "Molecular physics", "Computational chemistry software", "Distributed computing projects", "Mathematical and theoretical biology", "Biological engineering", "Applied mathematics", "Information technology projects", "Bioinformatics", ...
413,204
https://en.wikipedia.org/wiki/Storm%20drain
A storm drain, storm sewer (United Kingdom, U.S. and Canada), highway drain, surface water drain/sewer (United Kingdom), or stormwater drain (Australia and New Zealand) is infrastructure designed to drain excess rain and ground water from impervious surfaces such as paved streets, car parks, parking lots, footpaths, sidewalks, and roofs. Storm drains vary in design from small residential dry wells to large municipal systems. Drains receive water from street gutters on most motorways, freeways and other busy roads, as well as towns in areas with heavy rainfall that leads to flooding, and coastal towns with regular storms. Even rain gutters from houses and buildings can connect to the storm drain. Since many storm drainage systems are gravity sewers that drain untreated storm water into rivers or streams, any hazardous substances poured into the drains will contaminate the destination bodies of water. Storm drains sometimes cannot manage the quantity of rain that falls in heavy rains or storms. Inundated drains can cause basement and street flooding. Many areas require detention tanks inside a property that temporarily hold runoff in heavy rains and restrict outlet flow to the public sewer. This reduces the risk of overwhelming the public sewer. Some storm drains mix stormwater (rainwater) with sewage, either intentionally in the case of combined sewers, or unintentionally. Nomenclature Several related terms are used differently in American and British English. Function Inlet There are two main types of stormwater drain (highway drain or road gully in the UK) inlets: side inlets and grated inlets. Side inlets are located adjacent to the curb and rely on the ability of the opening under the back stone or lintel to capture flow. They are usually depressed at the invert of the channel to improve capture capacity. Many inlets have gratings or grids to prevent people, vehicles, large objects or debris from falling into the storm drain. Grate bars are spaced so that the flow of water is not impeded, but sediment and many small objects can also fall through. However, if grate bars are too far apart, the openings may present a risk to pedestrians, bicyclists, and others in the vicinity. Grates with long narrow slots parallel to traffic flow are of particular concern to cyclists, as the front tire of a bicycle may become stuck, causing the cyclist to go over the handlebars or lose control and fall. Storm drains in streets and parking areas must be strong enough to support the weight of vehicles, and are often made of cast iron or reinforced concrete. Some of the heavier sediment and small objects may settle in a catch basin, or sump, which lies immediately below the outlet, where water from the top of the catch basin reservoir overflows into the sewer proper. The catchbasin serves much the same function as the "trap" in household wastewater plumbing in trapping objects. In the United States, unlike the plumbing trap, the catch basin does not necessarily prevent sewer gases such as hydrogen sulfide and methane from escaping. However, in the United Kingdom, where they are called gully pots, they are designed as true water-filled traps and do block the egress of gases and rodents. Most catchbasins contain stagnant water during drier parts of the year and can, in warm countries, become mosquito breeding grounds. Larvicides or disruptive larval hormones, sometimes released from "mosquito biscuits", have been used to control mosquito breeding in catch basins. 
Mosquitoes may be physically prevented from reaching the standing water or migrating into the sewer proper by the use of an "inverted cone filter". Another method of mosquito control is to spread a thin layer of oil on the surface of stagnant water, interfering with the breathing tubes of mosquito larvae. The performance of catch basins at removing sediment and other pollutants depends on the design of the catchbasin (for example, the size of the sump), and on routine maintenance to retain the storage available in the sump to capture sediment. Municipalities typically have large vacuum trucks that perform this task. Catch basins act as the first-line pretreatment for other treatment practices, such as retention basins, by capturing large sediments and street litter from urban runoff before it enters the storm drainage pipes. Piping Pipes can come in many different cross-sectional shapes (rectangular, square, bread-loaf-shaped, oval, inverted pear-shaped, egg shaped, and most commonly, circular). Drainage systems may have many different features including waterfalls, stairways, balconies and pits for catching rubbish, sometimes called Gross Pollutant Traps (GPTs). Pipes made of different materials can also be used, such as brick, concrete, high-density polyethylene or galvanized steel. Fibre reinforced plastic is being used more commonly for drain pipes and fittings. Outlet Most drains have a single large exit at their point of discharge (often covered by a grating) into a canal, river, lake, reservoir, sea or ocean. Other than catchbasins, typically there are no treatment facilities in the piping system. Small storm drains may discharge into individual dry wells. Storm drains may be interconnected using slotted pipe, to make a larger dry well system. Storm drains may discharge into human-made excavations known as recharge basins or retention ponds. Environmental impacts Water quantity Storm drains are often unable to manage the quantity of rain that falls during heavy rains and/or storms. When storm drains are inundated, basement and street flooding can occur. Unlike catastrophic flooding events, this type of urban flooding occurs in built-up areas where human-made drainage systems are prevalent. Urban flooding is the primary cause of sewer backups and basement flooding, which can affect properties repeatedly. Clogged drains also contribute to flooding by the obstruction of storm drains. Communities or cities can help reduce this by cleaning leaves from the storm drains to stop ponding or flooding into yards. Snow in the winter can also clog drains when there is an unusual amount of rain in the winter and snow is plowed atop storm drains. Runoff into storm sewers can be minimized by including sustainable urban drainage systems (UK term) or low impact development or green infrastructure practices (US terms) into municipal plans. To reduce stormwater from rooftops, flows from eaves troughs (rain gutters and downspouts) may be infiltrated into adjacent soil, rather than discharged into the storm sewer system. Storm water runoff from paved surfaces can be directed to unlined ditches (sometimes called swales or bioswales) before flowing into the storm sewers, again to allow the runoff to soak into the ground. Permeable paving materials can be used in building sidewalks, driveways and in some cases, parking lots, to infiltrate a portion of the stormwater volume. 
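A common first-pass estimate of the peak flow a drain must carry is the rational method, Q = C · i · A, where C is a dimensionless runoff coefficient (near 1 for pavement and roofs, much lower for ground that infiltrates well), i is the design rainfall intensity, and A is the contributing area. The sketch below uses illustrative values only; real designs take C, i, and A from local drainage codes and rainfall statistics.

def peak_runoff_m3_per_s(c, intensity_mm_per_hr, area_hectares):
    # Rational method Q = C * i * A. With i in mm/hr and A in hectares,
    # 1 mm/hr over 1 ha is 10 m^3/hr, i.e. 1/360 m^3/s, hence the divisor.
    return c * intensity_mm_per_hr * area_hectares / 360.0

# Hypothetical 2-hectare paved car park in a 50 mm/hr design storm:
paved = peak_runoff_m3_per_s(0.9, 50, 2.0)
# The same area with permeable paving and swales might behave more like C = 0.4:
greened = peak_runoff_m3_per_s(0.4, 50, 2.0)
print(f"paved: {paved:.2f} m3/s, with infiltration measures: {greened:.2f} m3/s")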
Many areas require that properties have detention tanks that temporarily hold rainwater runoff, and restrict the outlet flow to the public sewer. This lessens the risk of overburdening the public sewer during heavy rain. An overflow outlet may also connect higher on the outlet side of the detention tank. This overflow prevents the detention tank from completely filling. Restricting the outlet flow and temporarily holding the water in a detention tank in this way makes it far less likely for rain to overwhelm the public sewers. Water quality The first flush from urban runoff can be extremely dirty. Storm water may become contaminated while running down the road or other impervious surface, or from lawn chemical run-off, before entering the drain. Water running off these impervious surfaces tends to pick up gasoline, motor oil, heavy metals, trash and other pollutants from roadways and parking lots, as well as fertilizers and pesticides from lawns. Roads and parking lots are major sources of nickel, copper, zinc, cadmium, lead and polycyclic aromatic hydrocarbons (PAHs), which are created as combustion byproducts of gasoline and other fossil fuels. Roof runoff contributes high levels of synthetic organic compounds and zinc (from galvanized gutters). Fertilizer use on residential lawns, parks and golf courses is a significant source of nitrates and phosphorus. Separation of undesired runoff can be achieved by installing devices within the storm sewer system. These devices are relatively new and can only be installed with new development or during major upgrades. They are referred to as oil-grit separators (OGS) or oil-sediment separators (OSS). They consist of a specialized manhole chamber, and use the water flow and/or gravity to separate oil and grit. Mosquito breeding Catch basins are commonly designed with a sump area below the outlet pipe level: a reservoir for water and debris that helps prevent the pipe from clogging. Unless constructed with permeable bottoms to let water infiltrate into underlying soil, this subterranean basin can become a mosquito breeding area, because it is cool, dark, and retains stagnant water for a long time. Combined with standard grates, which have holes large enough for mosquitoes to enter and leave the basin, this is a major problem in mosquito control. Basins can be filled with concrete up to the pipe level to prevent this reservoir from forming. Without proper maintenance, the functionality of the basin is questionable, as these catch basins are most commonly not cleaned annually as is needed to make them perform as designed. The trapping of debris then serves no purpose because, once filled, the basins operate as if they were not present, yet they continue to allow a shallow area of water retention where mosquitoes can breed. Moreover, even if cleaned and maintained, the water reservoir remains filled, accommodating the breeding of mosquitoes. Relationship to sanitary sewer systems Storm drains are separate and distinct from sanitary sewer systems. The separation of storm sewers from sanitary sewers helps prevent sewage treatment plants from becoming overwhelmed by infiltration/inflow during a rainstorm, which could discharge untreated sewage into the environment. Many storm drainage systems drain untreated storm water into rivers or streams. In the US, many local governments conduct public awareness campaigns about this, lest people dump waste into the storm drain system.
In Cleveland, Ohio, for example, all new catch basins installed have inscriptions on them not to dump any waste, and usually include a fish imprint as well. Trout Unlimited Canada recommends that a yellow fish symbol be painted next to existing storm drains. Combined sewers Cities that installed their sewage collection systems before the 1930s typically used single piping systems to transport both urban runoff and sewage. This type of collection system is referred to as a combined sewer system (CSS). The cities' rationale when combined sewers were built was that it would be cheaper to build just a single system. In these systems a sudden large rainfall that exceeds sewage treatment capacity is allowed to overflow directly from storm drains into receiving waters via structures called combined sewer overflows. Storm drains are typically installed at shallower depths than combined sewers. This is because combined sewers were designed to accept sewage flows from buildings with basements, in addition to receiving surface runoff from streets. About 860 communities in the US have combined sewer systems, serving about 40 million people. New York City, Washington, D.C., Seattle and other cities with combined systems have this problem due to a large influx of storm water after every heavy rain event. Some cities have dealt with this by adding large storage tanks or ponds to hold the water until it can be treated. Chicago has a system of tunnels, collectively called the Deep Tunnel, underneath the city for storing its stormwater. Many areas require detention tanks or roof detention systems that temporarily hold runoff in heavy rains and restrict outlet flow to the public sewer. This lessens the risk of overwhelming the public sewer in heavy rain. An overflow outlet may also connect higher on the outlet side of the detention tank. This overflow prevents the detention tank from completely filling. By restricting the flow of water in this way and temporarily holding the water in a detention vault or tank or by rooftop detention, public sewers are less likely to overflow. Regulations and local building codes Building codes and local government ordinances vary significantly on the handling of storm drain runoff. New developments might be required to construct their storm drain processing capacity for returning the runoff to the water table and bioswales may be required in sensitive ecological areas to protect the watershed. In the United States, cities, suburban communities, and towns with over 100,000 population, smaller community drainage systems in urbanized areas, and additional municipal systems that are specifically designated by state agencies are required to obtain discharge permits for their storm sewer systems under the Clean Water Act. The Environmental Protection Agency (EPA) issued stormwater regulations for large cities in 1990 and for other communities in 1999. The permits require local governments to operate stormwater management programs, covering both construction of new buildings and facilities, and maintenance of their existing municipal drainage networks. For new construction projects, many municipalities require builders to obtain approval of the site drainage system and structural plans. State government facilities, such as roads and highways, are also subject to the stormwater management regulations. Examples Southeastern Los Angeles County installed thousands of stainless steel, full-capture trash devices on their road drains in 2011. 
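The detention tanks with restricted outlets and overflow connections described in the sections above can be illustrated with a simple storage-routing sketch: inflow fills the tank, a small orifice releases flow to the public sewer at a rate limited by the water depth (Q = Cd * a * sqrt(2 g h)), and anything beyond the tank's capacity leaves through the overflow. All dimensions below are invented for illustration and do not constitute a sizing method.

import math

def route_detention(inflows_m3_per_s, dt_s=60.0, plan_area_m2=50.0,
                    max_depth_m=2.0, orifice_area_m2=0.01, cd=0.6):
    # Step a detention tank through an inflow series; returns outlet and
    # overflow rates. Outlet flow follows the orifice equation.
    g, depth = 9.81, 0.0
    outflow, overflow = [], []
    for q_in in inflows_m3_per_s:
        q_out = cd * orifice_area_m2 * math.sqrt(2 * g * depth) if depth > 0 else 0.0
        depth += (q_in - q_out) * dt_s / plan_area_m2
        spill = max(0.0, depth - max_depth_m) * plan_area_m2 / dt_s
        depth = min(max(depth, 0.0), max_depth_m)
        outflow.append(q_out)
        overflow.append(spill)
    return outflow, overflow

# A made-up storm burst: the restricted outlet stays far below the inflow peak.
storm = [0.05] * 10 + [0.25] * 20 + [0.02] * 30
out, over = route_detention(storm)
print(f"peak inflow 0.25 m3/s, peak outlet flow {max(out):.3f} m3/s, "
      f"peak overflow {max(over):.3f} m3/s")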
Exploration An international subculture has grown up around exploring stormwater drains. Societies such as the Cave Clan regularly explore the drains underneath cities. This is commonly known as "urban exploration", but is also known as draining when in specific relation to storm drains. Residence In several large American cities, homeless people live in storm drains. At least 300 people live in the 200 miles of underground storm drains of Las Vegas, many of them making a living finding unclaimed winnings in the gambling machines. An organization called Shine a Light was founded in 2009 to help the drain residents after over 20 drowning deaths occurred in the preceding years. A man in San Diego was evicted from a storm drain after living there for nine months in 1986. History Archaeological studies have revealed use of rather sophisticated stormwater runoff systems in ancient cultures. For example, in Minoan Crete around 2000 BC, cities such as Phaistos were designed to have storm drains and channels to collect precipitation runoff. At Cretan Knossos, storm drains include stone-lined structures large enough for a person to crawl through. Other examples of early civilizations with elements of stormwater drain systems include early people of Mainland Orkney such as Gurness and the Brough of Birsay in Scotland. Gallery See also Urban runoff Water pollution Pervious concrete roads References External links EPA – Combined Sewer Overflows EPA Storm Drain Stenciling Took Kit 7 Steps to Clean Water from Great Lakes Green Initiative (example of a local public awareness program) Drainage Environmental engineering Flood control Hydraulic engineering Subterranea (geography) Drain Road hazards
Storm drain
[ "Physics", "Chemistry", "Technology", "Engineering", "Environmental_science" ]
3,017
[ "Hydrology", "Water treatment", "Stormwater management", "Chemical engineering", "Road hazards", "Water pollution", "Physical systems", "Flood control", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]