Dataset columns: id (int64, 39 to 79M); url (string, length 32 to 168); text (string, length 7 to 145k); source (string, length 2 to 105); categories (list, length 1 to 6); token_count (int64, 3 to 32.2k); subcategories (list, length 0 to 27).
248,182
https://en.wikipedia.org/wiki/Space%20charge
Space charge is an interpretation of a collection of electric charges in which excess electric charge is treated as a continuum of charge distributed over a region of space (either a volume or an area) rather than distinct point-like charges. This model typically applies when charge carriers have been emitted from some region of a solid—the cloud of emitted carriers can form a space charge region if they are sufficiently spread out, or the charged atoms or molecules left behind in the solid can form a space charge region. Space charge effects are most pronounced in dielectric media (including vacuum); in highly conductive media, the charge tends to be rapidly neutralized or screened. The sign of the space charge can be either negative or positive. This situation is perhaps most familiar in the area near a metal object when it is heated to incandescence in a vacuum. This effect was first observed by Thomas Edison in light bulb filaments, where it is sometimes called the Edison effect. Space charge is a significant phenomenon in many vacuum and solid-state electronic devices. Cause Physical explanation When a metal object is placed in a vacuum and is heated to incandescence, the energy is sufficient to cause electrons to "boil" away from the surface atoms and surround the metal object in a cloud of free electrons. This is called thermionic emission. The resulting cloud is negatively charged, and can be attracted to any nearby positively charged object, thus producing an electric current which passes through the vacuum. Space charge can result from a range of phenomena, but the most important are: Combination of the current density and spatially inhomogeneous resistance Ionization of species within the dielectric to form heterocharge Charge injection from electrodes and from a stress enhancement Polarization in structures such as water trees. "Water tree" is a name given to a tree-like figure appearing in a water-impregnated polymer insulating cable. It has been suggested that in alternating current (AC) most carriers injected at electrodes during a half cycle are ejected during the next half cycle, so the net balance of charge on a cycle is practically zero. However, a small fraction of the carriers can be trapped at levels deep enough to retain them when the field is inverted. The amount of charge in AC should increase slower than in direct current (DC) and become observable after longer periods of time. Hetero and homo charge Hetero charge means that the polarity of the space charge is opposite to that of neighboring electrode, and homo charge is the reverse situation. Under high voltage application, a hetero charge near the electrode is expected to reduce the breakdown voltage, whereas a homo charge will increase it. After polarity reversal under ac conditions, the homo charge is converted to hetero space charge. Mathematical explanation If the near "vacuum" has a pressure of 10−6 mmHg or less, the main vehicle of conduction is electrons. The emission current density (J) from the cathode, as a function of its thermodynamic temperature T, in the absence of space-charge, is given by Richardson's law: where = elementary positive charge (i.e., magnitude of electron charge), = electron mass, = Boltzmann constant = , = Planck constant = , = work function of the cathode, = mean electron reflection coefficient. The reflection coefficient can be as low as 0.105 but is usually near 0.5. For tungsten, (1 − )A0 = , and . At 2500 °C, the emission is 28207 A/m2. 
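As a rough illustration of the Richardson emission figure quoted above, the following Python sketch evaluates J = A_eff T^2 exp(-W/kT). The effective Richardson constant (1 - r)A0 and the tungsten work function are assumed textbook values (the article's own numbers were lost in extraction), so the result only lands in the same order of magnitude as the quoted 28207 A/m^2.

import math

def richardson_current_density(T, work_function_eV, A_eff=6.0e5):
    """Thermionic emission current density J = A_eff * T**2 * exp(-W / (k*T)).

    T                : cathode temperature in kelvin
    work_function_eV : work function W in electronvolts
    A_eff            : effective Richardson constant (1 - r)*A0 in A m^-2 K^-2 (assumed value)
    """
    k_eV = 8.617333262e-5  # Boltzmann constant in eV/K
    return A_eff * T**2 * math.exp(-work_function_eV / (k_eV * T))

# Tungsten cathode at 2500 degrees C (2773.15 K); W ~ 4.54 eV is a typical
# textbook value, not taken from this article.
J = richardson_current_density(T=2773.15, work_function_eV=4.54)
print(f"J ~ {J:.3g} A/m^2")  # on the order of 2-3 x 10^4 A/m^2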
The emission current as given above is many times greater than that normally collected by the electrodes, except in some pulsed valves such as the cavity magnetron. Most of the electrons emitted by the cathode are driven back to it by the repulsion of the cloud of electrons in its neighborhood. This is called the space charge effect. In the limit of large current densities, J is given by the Child–Langmuir equation below, rather than by the thermionic emission equation above. Occurrence Space charge is an inherent property of all vacuum tubes. This has at times made life harder or easier for electrical engineers who used tubes in their designs. For example, space charge significantly limited the practical application of triode amplifiers which led to further innovations such as the vacuum tube tetrode. On the other hand, space charge was useful in some tube applications because it generates a negative EMF within the tube's envelope, which could be used to create a negative bias on the tube's grid. Grid bias could also be achieved by using an applied grid voltage in addition to the control voltage. This could improve the engineer's control and fidelity of amplification. It allowed the construction of space charge tubes for car radios that required only 6 or 12 volts anode voltage (typical examples were the 6DR8/EBF83, 6GM8/ECC86, 6DS8/ECH83, 6ES6/EF97 and 6ET6/EF98). Space charges can also occur within dielectrics. For example, when gas near a high voltage electrode begins to undergo dielectric breakdown, electrical charges are injected into the region near the electrode, forming space charge regions in the surrounding gas. Space charges can also occur within solid or liquid dielectrics that are stressed by high electric fields. Trapped space charges within solid dielectrics are often a contributing factor leading to dielectric failure within high voltage power cables and capacitors. In semiconductor physics, space charge layers that are depleted of charge carriers are used as a model to explain the rectifying behaviour of p–n junctions and the buildup of a voltage in photovoltaic cells. Space-charge-limited current In vacuum (Child's law) First proposed by Clement D. Child in 1911, Child's law states that the space-charge-limited current (SCLC) in a plane-parallel vacuum diode varies directly as the three-halves power of the anode voltage and inversely as the square of the distance d separating the cathode and the anode. For electrons, the current density J (amperes per meter squared) is written: where is the anode current and S the surface area of the anode receiving the current; is the magnitude of the charge of the electron and is its mass. The equation is also known as the "three-halves-power law" or the Child–Langmuir law. Child originally derived this equation for the case of atomic ions, which have much smaller ratios of their charge to their mass. Irving Langmuir published the application to electron currents in 1913, and extended it to the case of cylindrical cathodes and anodes. The equation's validity is subject to the following assumptions: Electrons travel ballistically between electrodes (i.e., no scattering). In the interelectrode region, the space charge of any ions is negligible. The electrons have zero velocity at the cathode surface. The assumption of no scattering (ballistic transport) is what makes the predictions of Child–Langmuir law different from those of Mott–Gurney law. The latter assumes steady-state drift transport and therefore strong scattering. 
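A minimal sketch of the Child-Langmuir limit described above, using the standard planar-diode form J = (4*eps0/9) * sqrt(2e/m_e) * V^(3/2) / d^2 (which the missing display presumably stated); the 200 V, 1 mm operating point is made up purely for illustration.

import math

EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def child_langmuir_J(V, d):
    """Space-charge-limited current density (A/m^2) in a planar vacuum diode.

    V : anode voltage in volts
    d : cathode-anode spacing in metres
    """
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_ELECTRON) * V**1.5 / d**2

# Illustrative operating point: 200 V across a 1 mm gap.
print(f"{child_langmuir_J(200.0, 1e-3):.3g} A/m^2")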
Child's law was further generalized by Buford R. Conley in 1995 for the case of non-zero velocity at the cathode surface with the following equation: where is the initial velocity of the particle. This equation reduces to Child's Law for the special case of equal to zero. In recent years, various models of SCLC current have been revised as reported in two review papers. In semiconductors In semiconductors and insulating materials, an electric field causes charged particles, electrons, to reach a specific drift velocity that is parallel to the direction of the field. This is different from the behavior of the free charged particles in a vacuum, in which a field accelerates the particle. The proportionality factor between the magnitudes of the drift velocity, , and the electric field, , is called the mobility, : Drift regime (Mott–Gurney law) The Child's law behavior of a space-charge-limited current that applies in a vacuum diode doesn't generally apply to a semiconductor/insulator in a single-carrier device, and is replaced by the Mott–Gurney law. For a thin slab of material of thickness , sandwiched between two selective Ohmic contacts, the electric current density, , flowing through the slab is given by: where is the voltage that has been applied across the slab and is the permittivity of the solid. The Mott–Gurney law offers some crucial insight into charge-transport across an intrinsic semiconductor, namely that one should not expect the drift current to increase linearly with the applied voltage, i.e., from Ohm's law, as one would expect from charge-transport across a metal or highly doped semiconductor. Since the only unknown quantity in the Mott–Gurney law is the charge-carrier mobility, , the equation is commonly used to characterize charge transport in intrinsic semiconductors. Using the Mott–Gurney law for characterizing amorphous semiconductors, along with semiconductors containing defects and/or non-Ohmic contacts, should however be approached with caution as significant deviations both in the magnitude of the current and the power law dependence with respect to the voltage will be observed. In those cases the Mott–Gurney law can not be readily used for characterization, and other equations which can account for defects and/or non-ideal injection should be used instead. During the derivation of the Mott–Gurney law, one has to make the following assumptions: There is only one type of charge carrier present, i.e., only electrons or holes. The material has no intrinsic conductivity, but charges are injected into it from one electrode and captured by the other. The carrier mobility, , and the permittivity, , are constant throughout the sample. The current flow is not limited by traps or energetic disorder. The current is not predominantly due to doping. The electric field at the charge-injecting electrode is zero, meaning that the current is governed by drift only. Derivation Consider a crystal of thickness carrying a current . Let be the electric field at a distance from the surface, and the number of electrons per unit volume. Then the current is given has two contributions, one due to drift and the other due to diffusion: When is the electrons mobility and the diffusion coefficient. Laplace's equation gives for the field: Hence, eliminating , we have: After integrating, making use of the Einstein relation and neglecting the term we obtain for the electric field: where is a constant. We may neglect the term because we are supposing that and . 
Since, at , , we have: It follows that the potential drop across the crystal is: Making use of () and () we can write in terms of . For small , is small and , so that: Thus the current increases as the square of . For large , and we obtain: As an application example, the steady-state space-charge-limited current across a piece of intrinsic silicon with a charge-carrier mobility of 1500 cm2/V-s, a relative dielectric constant of 11.9, an area of 10−8 cm2 and a thickness of 10−4 cm can be calculated by an online calculator to be 126.4 μA at 3 V. Note that in order for this calculation to be accurate, one must assume all the points listed above. In the case where the electron/hole transport is limited by trap states in the form of exponential tails extending from the conduction/valence band edges, the drift current density is given by the Mark-Helfrich equation, where is the elementary charge, with being the thermal energy, is the effective density of states of the charge carrier type in the semiconductor, i.e., either or , and is the trap density. Low voltage regime In the case where a very small applied bias is applied across the single-carrier device, the current is given by: Note that the equation describing the current in the low voltage regime follows the same thickness scaling as the Mott–Gurney law, , but increases linearly with the applied voltage. Saturation regimes When a very large voltage is applied across the semiconductor, the current can transition into a saturation regime. In the velocity-saturation regime, this equation takes the following form Note the different dependence of on between the Mott–Gurney law and the equation describing the current in the velocity-saturation regime. In the ballistic case (assuming no collisions), the Mott–Gurney equation takes the form of the more familiar Child–Langmuir law. In the charge-carrier saturation regime, the current through the sample is given by, where is the effective density of states of the charge carrier type in the semiconductor. Shot noise Space charge tends to reduce shot noise. Shot noise results from the random arrivals of discrete charge; the statistical variation in the arrivals produces shot noise. A space charge develops a potential that slows the carriers down. For example, an electron approaching a cloud of other electrons will slow down due to the repulsive force. The slowing carriers also increases the space charge density and resulting potential. In addition, the potential developed by the space charge can reduce the number of carriers emitted. When the space charge limits the current, the random arrivals of the carriers are smoothed out; the reduced variation results in less shot noise. See also Thermionic emission Vacuum tube Grid leak References Electricity Theories Microwave technology Vacuum tubes Mass spectrometry Semiconductors
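For the intrinsic-silicon example above, a sketch of the Mott-Gurney estimate with the standard 9/8 prefactor is given below. With the quoted parameters it returns roughly 160 uA rather than the 126.4 uA attributed to the online calculator, so the exact quoted figure evidently rests on that calculator's additional assumptions; the sketch shows only the drift-only, trap-free formula.

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def mott_gurney_current(V, mobility_cm2, eps_r, thickness_cm, area_cm2):
    """Drift-only SCLC through a trap-free single-carrier slab, returned as a current in amperes.

    Uses J = (9/8) * eps * mu * V**2 / L**3 in SI units.
    """
    mu = mobility_cm2 * 1e-4   # cm^2/(V s) -> m^2/(V s)
    L = thickness_cm * 1e-2    # cm -> m
    area = area_cm2 * 1e-4     # cm^2 -> m^2
    J = (9.0 / 8.0) * eps_r * EPS0 * mu * V**2 / L**3
    return J * area

# Parameters quoted in the text for intrinsic silicon at 3 V.
I = mott_gurney_current(V=3.0, mobility_cm2=1500.0, eps_r=11.9,
                        thickness_cm=1e-4, area_cm2=1e-8)
print(f"I ~ {I*1e6:.0f} uA")  # ~160 uA with the 9/8 prefactor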
Space charge
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,803
[ "Electrical resistance and conductance", "Physical quantities", "Spectrum (physical sciences)", "Semiconductors", "Instrumental analysis", "Vacuum tubes", "Mass", "Vacuum", "Materials", "Electronic engineering", "Mass spectrometry", "Condensed matter physics", "Solid state engineering", "M...
248,590
https://en.wikipedia.org/wiki/Luteinizing%20hormone
Luteinizing hormone (LH, also known as luteinising hormone, lutropin and sometimes lutrophin) is a hormone produced by gonadotropic cells in the anterior pituitary gland. The production of LH is regulated by gonadotropin-releasing hormone (GnRH) from the hypothalamus. In females, an acute rise of LH known as an LH surge, triggers ovulation and development of the corpus luteum. In males, where LH had also been called interstitial cell–stimulating hormone (ICSH), it stimulates Leydig cell production of testosterone. It acts synergistically with follicle-stimulating hormone (FSH). Etymology The term luteinizing comes from the Latin "luteus", meaning "yellow". This is in reference to the corpus luteum, which is a mass of cells that forms in an ovary after an ovum (egg) has been discharged. The corpus luteum is so named because it often has a distinctive yellow color. The process of forming the corpus luteum is known as "luteinization", and thus the hormone that triggers this process is termed the "luteinizing" hormone. Structure LH is a heterodimeric glycoprotein. Each monomeric unit is a glycoprotein molecule; one alpha and one beta subunit make the full, functional protein. Its structure is similar to that of the other glycoprotein hormones, follicle-stimulating hormone (FSH), thyroid-stimulating hormone (TSH), and human chorionic gonadotropin (hCG). The protein dimer contains 2 glycopeptidic subunits (labeled alpha- and beta- subunits) that are non-covalently associated: The alpha subunits of LH, FSH, TSH, and hCG are identical, and contain 92 amino acids in human but 96 amino acids in almost all other vertebrate species (glycoprotein hormones do not exist in invertebrates). The beta subunits vary. LH has a beta subunit of 120 amino acids (LHB) that confers its specific biologic action and is responsible for the specificity of the interaction with the LH receptor. This beta subunit contains an amino acid sequence that exhibits large homologies with that of the beta subunit of hCG and both stimulate the same receptor. However, the hCG beta subunit contains an additional 24 amino acids, and the two hormones differ in the composition of their sugar moieties. The different composition of these oligosaccharides affects bioactivity and speed of degradation. The biologic half-life of LH is 20 minutes, shorter than that of FSH (3–4 hours) and hCG (24 hours). The biological half-life of LH is 23 hours subcutaneous or terminal half life of 10-12 hours. Genes The gene for the alpha subunit is located on chromosome 6q12.21. The luteinizing hormone beta subunit gene is localized in the LHB/CGB gene cluster on chromosome 19q13.32. In contrast to the alpha gene activity, beta LH subunit gene activity is restricted to the pituitary gonadotropic cells. It is regulated by the gonadotropin-releasing hormone from the hypothalamus. GnRH activates Egr1 which interacts with transcription factors NR5A1 and PITX1 at the gene promoter to up-regulate LHB transcription. Function In both males and females, LH works upon endocrine cells in the gonads to produce androgens. Effects in females LH supports theca cells in the ovaries that provide androgens and hormonal precursors for estradiol production. At the time of menstruation, FSH initiates follicular growth, specifically affecting granulosa cells. With the rise in estrogens, LH receptors are also expressed on the maturing follicle, which causes it to produce more estradiol. 
Eventually, when the follicle has fully matured, a spike in 17α-hydroxyprogesterone production by the follicle inhibits the production of estrogens. Previously, the preovulatory LH surge was attributed to a decrease in estrogen-mediated negative feedback of GnRH in the hypothalamus, subsequently stimulating the release of LH from the anterior pituitary. Some studies, however, attribute the LH surge to positive feedback from estradiol after production by the dominant follicle exceeds a certain threshold. Exceptionally high levels of estradiol induce hypothalamic production of progesterone, which stimulates elevated GnRH secretion, triggering a surge in LH. The increase in LH production only lasts for 24 to 48 hours. This "LH surge" triggers ovulation, thereby not only releasing the egg from the follicle, but also initiating the conversion of the residual follicle into a corpus luteum that, in turn, produces progesterone to prepare the endometrium for a possible implantation. LH is necessary to maintain luteal function for the second two weeks of the menstrual cycle. If pregnancy occurs, LH levels will decrease, and luteal function will instead be maintained by the action of hCG (human chorionic gonadotropin), a hormone very similar to LH but secreted from the new placenta. Gonadal steroids (estrogens and androgens) generally have negative feedback effects on GnRH-1 release at the level of the hypothalamus and at the gonadotropes, reducing their sensitivity to GnRH. Positive feedback by estrogens also occurs in the gonadal axis of female mammals and is responsible for the midcycle surge of LH that stimulates ovulation. Although estrogens inhibit kisspeptin (Kp) release from kiss1 neurons in the ARC, estrogens stimulate Kp release from the Kp neurons in the AVPV. As estrogens' levels gradually increase the positive effect predominates, leading to the LH surge. GABA-secreting neurons that innervate GnRH-1 neurons also can stimulate GnRH-1 release. These GABA neurons also possess ERs and may be responsible for the GnRH-1 surge. Part of the inhibitory action of endorphins on GnRH-1 release is through inhibition of these GABA neurons. Rupture of the ovarian follicle at ovulation causes a drastic reduction in estrogen synthesis and a marked increase in secretion of progesterone by the corpus luteum in the ovary, reinstating a predominantly negative feedback on hypothalamic secretion of GnRH-1. Effects in males LH acts upon the Leydig cells of the testis and is regulated by gonadotropin-releasing hormone (GnRH). The Leydig cells produce testosterone under the control of LH. LH binds to LH receptors on the membrane surface of Leydig cells. Binding to this receptor causes an increase in cyclic adenosine monophosphate (cAMP), a secondary messenger, which allows cholesterol to translocate into the mitochondria. Within the mitochondria, cholesterol is converted to pregnenolone by CYP11A1. Pregnenolone is then converted to dehydroepiandrosterone (DHEA). DHEA is then converted to androstenedione by 3β-hydroxysteroid dehydrogenase (3β-HSD) and then finally converted to testosterone by 17β-hydroxysteroid dehydrogenase (HSD17B). The onset of puberty is controlled by two major hormones: FSH initiates spermatogenesis and LH signals the release of testosterone, an androgen that exerts both endocrine activity and intratesticular activity on spermatogenesis. LH is released from the pituitary gland, and is controlled by pulses of gonadotropin-releasing hormone. 
When bloodstream testosterone levels are low, the pituitary gland is stimulated to release LH. As the levels of testosterone increase, it will act on the pituitary through a negative feedback loop and inhibit the release of GnRH and LH consequently. Androgens (including testosterone and dihydrotestosterone) inhibit monoamine oxidase (MAO) in the pineal gland, leading to increased melatonin and reduced LH and FSH by melatonin-induced increase of Gonadotropin-Inhibitory Hormone (GnIH) synthesis and secretion. Testosterone can also be aromatized into estradiol (E2) to inhibit LH. E2 decreases pulse amplitude and responsiveness to GnRH from the hypothalamus onto the pituitary. Changes in LH and testosterone blood levels and pulse secretions are induced by changes in sexual arousal in human males. Effects in the brain Luteinizing hormone receptors are located in areas of the brain associated with cognitive function. The role of LH role in the central nervous system (CNS) may be of relevance to understanding and treating post-menopausal cognitive decline. Some research has observed an inverse relationship between circulating LH and CNS LH levels. After ovariectomy (a procedure used to mimic menopause) in female mice, circulating LH levels surge while CNS levels of LH fall. Treatments that lower circulating LH restore LH levels in the CNS. Normal levels LH levels are normally low during childhood and in women, high after menopause. Since LH is secreted as pulses, it is necessary to follow its concentration over a sufficient period of time to get proper information about its blood level. During reproductive years, typical levels are between 1 and 20 IU/L. Physiologic high LH levels are seen during the LH surge (v.s.) and typically last 48 hours. In males over 18 years of age, reference ranges have been estimated to be 1.8–8.6 IU/L. LH is measured in international units (IU). When quantifying the amount of LH in a sample in IUs, it is important to know which international standard your lot of LH was calibrated against since they can vary broadly from year to year. For human urinary LH, one IU is defined as 1/189th of an ampule denoted 96/602 and distributed by the NIBSC, corresponding to approximately 0.04656 μg of LH protein for a single IU, but older standard versions are still widely in use. Predicting ovulation The detection of a surge in release of luteinizing hormone indicates impending ovulation. LH can be detected by urinary ovulation predictor kits (OPK, also LH-kit) that are performed, typically daily, around the time ovulation may be expected. A conversion from a negative to a positive reading would suggest that ovulation is about to occur within 24–48 hours, giving women two days to engage in sexual intercourse or artificial insemination with the intention of conceiving. The recommended testing frequency differs between manufacturers. For example, the Clearblue test is taken daily, and an increased frequency does not decrease the risk of missing an LH surge. On the other hand, the Chinese company Nantong Egens Biotechnology recommends using their test twice per day. If testing once per day, no significant difference has been found between testing LH in the morning versus in the evening, in relation to conception rates, and recommendations of what time in the day to take the test varies between manufacturers and healthcare workers. Tests may be read manually using a color-change paper strip, or digitally with the assistance of reading electronics. 
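To make the IU definition above concrete, here is a small conversion sketch from IU/L to an approximate mass concentration, assuming the urinary 96/602 standard's roughly 0.04656 ug per IU quoted above; other calibration standards will give different factors.

UG_PER_IU_URINARY_LH = 0.04656  # approximate ug of LH protein per IU (96/602 standard)

def iu_per_litre_to_ng_per_ml(lh_iu_per_l):
    """Convert an LH concentration from IU/L to ng/mL under the 96/602 urinary standard."""
    ug_per_l = lh_iu_per_l * UG_PER_IU_URINARY_LH
    return ug_per_l  # 1 ug/L equals 1 ng/mL

# A mid-reproductive-years level of, say, 10 IU/L:
print(f"{iu_per_litre_to_ng_per_ml(10.0):.3f} ng/mL")  # ~0.47 ng/mL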
Tests for luteinizing hormone may be combined with testing for estradiol in tests such as the Clearblue fertility monitor. The sensitivity of LH tests are measured in milli international unit, with tests commonly available in the range 10–40 m.i.u. (the lower the number, the higher the sensitivity). As sperm can stay viable in the woman for several days, LH tests are not recommended for contraceptive practices, as the LH surge typically occurs after the beginning of the fertile window. Disease states Excess In children with precocious puberty of pituitary or central origin, LH and FSH levels may be in the reproductive range instead of the low levels typical for their age. During the reproductive years, relatively elevated LH is frequently seen in patients with polycystic ovary syndrome; however, it would be unusual for them to have LH levels outside of the normal reproductive range. Persistently high LH levels are indicative of situations where the normal restricting feedback from the gonad is absent, leading to a pituitary production of both LH and FSH. While this is typical in menopause, it is abnormal in the reproductive years. There it may be a sign of: Premature menopause Gonadal dysgenesis, Turner syndrome, Klinefelter syndrome Castration Swyer syndrome Polycystic ovary syndrome Certain forms of congenital adrenal hyperplasia Testicular failure Pregnancy – BetaHCG can mimic LH so tests may show elevated LH Note: A medical drug for inhibiting luteinizing hormone secretion is butinazocine. Deficiency Diminished secretion of LH can result in failure of gonadal function (hypogonadism). This condition is typically manifest in males as failure in production of normal numbers of sperm. In females, amenorrhea is commonly observed. Conditions with very low LH secretions include: Pasqualini syndrome Kallmann syndrome Hypothalamic suppression Hypopituitarism Eating disorder Female athlete triad Hyperprolactinemia Hypogonadism Gonadal suppression therapy GnRH antagonist GnRH agonist (inducing an initial stimulation (flare up) followed by permanent blockage of the GnRH pituitary receptor) As a medication Luteinizing hormone is available mixed with FSH in the form of menotropin, and other forms of urinary gonadotropins. More purified forms of urinary gonadotropins may reduce the LH portion in relation to FSH. Recombinant luteinizing hormone is available as lutropin alfa (Luveris). Role in phosphorylation Phosphorylation is a biochemical process that involves the addition of phosphate to an organic compound. Steroidogenesis entails processes by which cholesterol is converted to biologically active steroid hormones. A study shows that LH via a PKA signaling pathway regulates the phosphorylation and localization of DRP1 within mitochondria of the steroidogenic cells of the ovary. References External links Recombinant proteins Gynaecological endocrinology Glycoproteins Gonadotropin-releasing hormone and gonadotropins Drugs developed by Merck Peptide hormones Sex hormones Human hormones Hormones of the hypothalamus-pituitary-gonad axis Anterior pituitary hormones
Luteinizing hormone
[ "Chemistry", "Biology" ]
3,245
[ "Behavior", "Biotechnology products", "Sex hormones", "Recombinant proteins", "Glycoproteins", "Glycobiology", "Sexuality" ]
248,604
https://en.wikipedia.org/wiki/Room%20acoustics
Room acoustics is a subfield of acoustics dealing with the behaviour of sound in enclosed or partially-enclosed spaces. The architectural details of a room influences the behaviour of sound waves within it, with the effects varying by frequency. Acoustic reflection, diffraction, and diffusion can combine to create audible phenomena such as room modes and standing waves at specific frequencies and locations, echos, and unique reverberation patterns. Frequency zones The way that sound behaves in a room can be broken up into four different frequency zones: The first zone is below the frequency that has a wavelength of twice the longest length of the room. In this zone, sound behaves very much like changes in static air pressure. Above that zone, until wavelengths are comparable to the dimensions of the room, room resonances dominate. This transition frequency is popularly known as the Schroeder frequency, or the cross-over frequency, and it differentiates the low frequencies which create standing waves within small rooms from the mid and high frequencies. The third region which extends approximately 2 octaves is a transition to the fourth zone. In the fourth zone, sounds behave like rays of light bouncing around the room. Natural modes For frequencies under the Schroeder frequency, certain wavelengths of sound will build up as resonances within the boundaries of the room, and the resonating frequencies can be determined using the room's dimensions. Similar to the calculation of standing waves inside a pipe with two closed ends, the modal frequencies and the sound pressure of those modes at a particular position of a rectilinear room can be defined as where are mode numbers corresponding to the x-,y-, and z-axis of the room, is the speed of sound in , are the dimensions of the room in meters. is the amplitude of the sound wave, and are coordinates of a point contained inside the room. Modes can occur in all three dimensions of a room. Axial modes are one-dimensional, and build up between one set of parallel walls. Tangential modes are two-dimensional, and involve four walls bounding the space perpendicular to each other. Finally, oblique modes concern all walls within the simplified rectilinear room. A modal density analysis method using concepts from psychoacoustics, the "Bonello criterion", analyzes the first 48 room modes and plots the number of modes in each one-third of an octave. The curve increases monotonically (each one-third of an octave must have more modes than the preceding one). Other systems to determine correct room ratios have more recently been developed. Reverberation of the room After determining the best dimensions of the room, using the modal density criteria, the next step is to find the correct reverberation time. The most appropriate reverberation time depends on the use of the room. RT60 is a measure of reverberation time. Times about 1.5 to 2 seconds are needed for opera theaters and concert halls. For broadcasting and recording studios and conference rooms, values under one second are frequently used. The recommended reverberation time is always a function of the volume of the room. Several authors give their recommendations A good approximation for broadcasting studios and conference rooms is: TR[1 kHz] = [0.4 log (V+62)] – 0.38 seconds, with V=volume of the room in m3. Ideally, the RT60 should have about the same value at all frequencies from 30 to 12,000 Hz. To get the desired RT60, several acoustics materials can be used as described in several books. 
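A short sketch of the two calculations described above: the natural mode frequencies of a rigid rectangular room, f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2), and the quoted broadcasting-studio reverberation target TR[1 kHz] = 0.4 log(V + 62) - 0.38 s (taken here as a base-10 logarithm). The 6 m x 4.5 m x 3 m room and the 343 m/s speed of sound are illustrative assumptions, not values from the text.

import math
from itertools import product

C_SOUND = 343.0  # speed of sound in air at about 20 degrees C, m/s

def mode_frequencies(Lx, Ly, Lz, max_index=4):
    """Axial, tangential and oblique mode frequencies of a rigid rectangular room (Hz)."""
    freqs = []
    for nx, ny, nz in product(range(max_index + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue
        f = 0.5 * C_SOUND * math.sqrt((nx / Lx)**2 + (ny / Ly)**2 + (nz / Lz)**2)
        freqs.append((f, (nx, ny, nz)))
    return sorted(freqs)

def broadcast_rt60(volume_m3):
    """Recommended 1 kHz RT60 for broadcasting studios, per the approximation quoted above."""
    return 0.4 * math.log10(volume_m3 + 62.0) - 0.38

# Hypothetical 6 m x 4.5 m x 3 m control room: five lowest modes and target RT60.
for f, mode in mode_frequencies(6.0, 4.5, 3.0)[:5]:
    print(f"{f:6.1f} Hz  mode {mode}")
print(f"target RT60 ~ {broadcast_rt60(6.0 * 4.5 * 3.0):.2f} s")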
A valuable simplification of the task was proposed by Oscar Bonello in 1979. It consists of using standard acoustic panels of 1 m2 hung from the walls of the room (only if the panels are parallel). These panels use a combination of three Helmholtz resonators and a wooden resonant panel. This system gives a large acoustic absorption at low frequencies (under 500 Hz) and reduces at high frequencies to compensate for the typical absorption by people, lateral surfaces, ceilings, etc. Acoustic space is an acoustic environment in which sound can be heard by an observer. The term acoustic space was first mentioned by Marshall McLuhan, a professor and a philosopher. Nature of acoustics In reality, there are some properties of acoustics that affect the acoustic space. These properties can either improve the quality of the sound or interfere with the sound. Reflection is the change in direction of a wave when it hits an object. Many acoustic engineers took advantage from this. It is used for interior designs, either use reflections for benefits or eliminates the reflections. The sound waves usually reflect off the wall and interfere with other sound waves that are generated later. To prevent sound waves reflecting directly to the receiver, a diffusor is introduced. A diffusor has different depths in it, causing the sound to scatter in random directions evenly. It changes the disturbing echo of the sound into a mild reverb which decays over time. Diffraction is the change of a sound wave's propagation to avoid obstacles. According to Huygens’ principle, when a sound wave is partially blocked by an obstacle, the remaining part that gets through acts as a source of secondary waves. For instance, if a person is in a room and shouts with the door open, the people on either side of the hallway will hear it. The sound waves that left the door become a source then spread out in the hallway. The sounds from the surroundings might interfere with the acoustic space like the example given. Uses of acoustic space The application of acoustic space is very useful in architecture. Some kinds of architecture need a proficient design to bring out the best performances. For example, concert halls, auditoriums, theaters, or even cathedrals. Concert Hall – a place that is designed to hold a concert. A good concert hall usually holds around 1700 to 2600 audience. There are three main attributes of a good concert halls: clarity, ambiance, and loudness. If the seats are well positioned, the audience will hear clear sound from every single seat. For more ambiance, reverberation times are designed as preferred. For instance, romantic music usually requires an amount of reverberation time to enhance the emotions, therefore, the ceilings of the concert hall should be high. Theater – a place that is designed for live performances. The first priority for sound design in a theater is speech. Speech has to be heard clearly, even if it is a soft whisper. The reverb is not needed in this case, it interrupts the words spoken by the actors. The intensity has to be increased, in order to enlarge the acoustic space, to cover the theater without disrupting the dynamic. In large theaters, amplification must be used. Cathedral (and church) have an area called a choir, usually located near the transept, where the tower is located in most cathedrals. The choir is for the choir to sing. This kind of singing needs a soft cloudy sound for ambiance and emotion. 
The height of the cathedral not only shows religious pride but also improves the acoustics. There is more reverberation when a source generates sound in such a space. See also Acoustic board Acoustics Anechoic room Architectural acoustics Digital room correction Noise control Sound proofing Whispering gallery Notes References External links Understanding Acoustic Treatment Does Acoustic Treatment Make a Difference? Acoustics Building engineering
Room acoustics
[ "Physics", "Engineering" ]
1,521
[ "Building engineering", "Classical mechanics", "Acoustics", "Civil engineering", "Architecture" ]
248,635
https://en.wikipedia.org/wiki/Group%20cohomology
In mathematics (more specifically, in homological algebra), group cohomology is a set of mathematical tools used to study groups using cohomology theory, a technique from algebraic topology. Analogous to group representations, group cohomology looks at the group actions of a group G in an associated G-module M to elucidate the properties of the group. By treating the G-module as a kind of topological space with elements of representing n-simplices, topological properties of the space may be computed, such as the set of cohomology groups . The cohomology groups in turn provide insight into the structure of the group G and G-module M themselves. Group cohomology plays a role in the investigation of fixed points of a group action in a module or space and the quotient module or space with respect to a group action. Group cohomology is used in the fields of abstract algebra, homological algebra, algebraic topology and algebraic number theory, as well as in applications to group theory proper. As in algebraic topology, there is a dual theory called group homology. The techniques of group cohomology can also be extended to the case that instead of a G-module, G acts on a nonabelian G-group; in effect, a generalization of a module to non-Abelian coefficients. These algebraic ideas are closely related to topological ideas. The group cohomology of a discrete group G is the singular cohomology of a suitable space having G as its fundamental group, namely the corresponding Eilenberg–MacLane space. Thus, the group cohomology of can be thought of as the singular cohomology of the circle S1. Likewise, the group cohomology of is the singular cohomology of A great deal is known about the cohomology of groups, including interpretations of low-dimensional cohomology, functoriality, and how to change groups. The subject of group cohomology began in the 1920s, matured in the late 1940s, and continues as an area of active research today. Motivation A general paradigm in group theory is that a group G should be studied via its group representations. A slight generalization of those representations are the G-modules: a G-module is an abelian group M together with a group action of G on M, with every element of G acting as an automorphism of M. We will write G multiplicatively and M additively. Given such a G-module M, it is natural to consider the submodule of G-invariant elements: Now, if N is a G-submodule of M (i.e., a subgroup of M mapped to itself by the action of G), it isn't in general true that the invariants in are found as the quotient of the invariants in M by those in N: being invariant 'modulo N ' is broader. The purpose of the first group cohomology is to precisely measure this difference. The group cohomology functors in general measure the extent to which taking invariants doesn't respect exact sequences. This is expressed by a long exact sequence. Definitions The collection of all G-modules is a category (the morphisms are equivariant group homomorphisms, that is group homomorphisms f with the property for all g in G and x in M). Sending each module M to the group of invariants yields a functor from the category of G-modules to the category Ab of abelian groups. This functor is left exact but not necessarily right exact. We may therefore form its right derived functors. Their values are abelian groups and they are denoted by , "the n-th cohomology group of G with coefficients in M". Furthermore, the group may be identified with . 
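The displayed objects referred to in this passage did not survive extraction; the standard identities they denote are:
\[
M^{G} = \{\, m \in M : g\cdot m = m \ \text{for all } g \in G \,\},\qquad
H^{n}(G,M) := R^{n}\bigl((-)^{G}\bigr)(M),\qquad H^{0}(G,M) = M^{G}.
\]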
Cochain complexes The definition using derived functors is conceptually very clear, but for concrete applications, the following computations, which some authors also use as a definition, are often helpful. For let be the group of all functions from to M (here means ). This is an abelian group; its elements are called the (inhomogeneous) n-cochains. The coboundary homomorphisms are defined by One may check that so this defines a cochain complex whose cohomology can be computed. It can be shown that the above-mentioned definition of group cohomology in terms of derived functors is isomorphic to the cohomology of this complex Here the groups of n-cocycles, and n-coboundaries, respectively, are defined as The functors Extn and formal definition of group cohomology Interpreting G-modules as modules over the group ring one can note that i.e., the subgroup of G-invariant elements in M is identified with the group of homomorphisms from , which is treated as the trivial G-module (every element of G acts as the identity) to M. Therefore, as Ext functors are the derived functors of Hom, there is a natural isomorphism These Ext groups can also be computed via a projective resolution of , the advantage being that such a resolution only depends on G and not on M. We recall the definition of Ext more explicitly for this context. Let F be a projective -resolution (e.g. a free -resolution) of the trivial -module : e.g., one may always take the resolution of group rings, with morphisms Recall that for -modules N and M, HomG(N, M) is an abelian group consisting of -homomorphisms from N to M. Since is a contravariant functor and reverses the arrows, applying to F termwise and dropping produces a cochain complex : The cohomology groups of G with coefficients in the module M are defined as the cohomology of the above cochain complex: This construction initially leads to a coboundary operator that acts on the "homogeneous" cochains. These are the elements of , that is, functions that obey The coboundary operator is now naturally defined by, for example, The relation to the coboundary operator d that was defined in the previous section, and which acts on the "inhomogeneous" cochains , is given by reparameterizing so that and so on. Thus as in the preceding section. Group homology Dually to the construction of group cohomology there is the following definition of group homology: given a G-module M, set DM to be the submodule generated by elements of the form g·m − m, g ∈ G, m ∈ M. Assigning to M its so-called coinvariants, the quotient is a right exact functor. Its left derived functors are by definition the group homology The covariant functor which assigns MG to M is isomorphic to the functor which sends M to where is endowed with the trivial G-action. Hence one also gets an expression for group homology in terms of the Tor functors, Note that the superscript/subscript convention for cohomology/homology agrees with the convention for group invariants/coinvariants, while which is denoted "co-" switches: superscripts correspond to cohomology H* and invariants XG while subscripts correspond to homology H∗ and coinvariants XG := X/G. Specifically, the homology groups Hn(G, M) can be computed as follows. Start with a projective resolution F of the trivial -module as in the previous section. Apply the covariant functor to F termwise to get a chain complex : Then Hn(G, M) are the homology groups of this chain complex, for n ≥ 0. 
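For reference, the coboundary homomorphism on inhomogeneous cochains referred to above (its displayed formula was lost in extraction) is standardly written as
\[
(d^{n}f)(g_{1},\dots,g_{n+1}) = g_{1}\cdot f(g_{2},\dots,g_{n+1})
 + \sum_{i=1}^{n}(-1)^{i}\, f(g_{1},\dots,g_{i-1},\,g_{i}g_{i+1},\,g_{i+2},\dots,g_{n+1})
 + (-1)^{n+1} f(g_{1},\dots,g_{n}),
\]
with \(d^{n+1}\circ d^{n}=0\), \(Z^{n}(G,M)=\ker d^{n}\), \(B^{n}(G,M)=\operatorname{im} d^{n-1}\), and \(H^{n}(G,M)=Z^{n}(G,M)/B^{n}(G,M)\).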
Group homology and cohomology can be treated uniformly for some groups, especially finite groups, in terms of complete resolutions and the Tate cohomology groups. The group homology of abelian groups G with values in a principal ideal domain k is closely related to the exterior algebra . Low-dimensional cohomology groups H 1 The first cohomology group is the quotient of the so-called crossed homomorphisms, i.e. maps (of sets) f : G → M satisfying f(ab) = f(a) + af(b) for all a, b in G, modulo the so-called principal crossed homomorphisms, i.e. maps f : G → M given by f(g) = gm−m for some fixed m ∈ M. This follows from the definition of cochains above. If the action of G on M is trivial, then the above boils down to H1(G,M) = Hom(G, M), the group of group homomorphisms G → M, since the crossed homomorphisms are then just ordinary homomorphisms and the coboundaries (i.e. the principal crossed homomorphisms) must have image identically zero: hence there is only the zero coboundary. On the other hand, consider the case of where denotes the non-trivial -structure on the additive group of integers, which sends a to -a for every ; and where we regard as the group . By considering all possible cases for the images of , it may be seen that crossed homomorphisms constitute all maps satisfying and for some arbitrary choice of integer t. Principal crossed homomorphisms must additionally satisfy for some integer m: hence every crossed homomorphism sending -1 to an even integer is principal, and therefore: with the group operation being pointwise addition: , noting that is the identity element. H 2 If M is a trivial G-module (i.e. the action of G on M is trivial), the second cohomology group H2(G,M) is in one-to-one correspondence with the set of central extensions of G by M (up to a natural equivalence relation). More generally, if the action of G on M is nontrivial, H2(G,M) classifies the isomorphism classes of all extensions of G by M, in which the action of G on E (by inner automorphisms), endows (the image of) M with an isomorphic G-module structure. In the example from the section on immediately above, as the only extension of by with the given nontrivial action is the infinite dihedral group, which is a split extension and so trivial inside the group. This is in fact the significance in group-theoretical terms of the unique non-trivial element of . An example of a second cohomology group is the Brauer group: it is the cohomology of the absolute Galois group of a field k which acts on the invertible elements in a separable closure: See also . Basic examples Group cohomology of a finite cyclic group For the finite cyclic group of order with generator , the element in the associated group ring is a divisor of zero because its product with , given bygivesThis property can be used to construct the resolution of the trivial -module via the complexgiving the group cohomology computation for any -module . Note the augmentation map gives the trivial module its -structure byThis resolution gives a computation of the group cohomology since there is the isomorphism of cohomology groupsshowing that applying the functor to the complex above (with removed since this resolution is a quasi-isomorphism), gives the computationforFor example, if , the trivial module, then , , and , hence Explicit cocycles Cocycles for the group cohomology of a cyclic group can be given explicitly using the Bar resolution. 
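As an aside before the explicit cocycles that follow: the finite-cyclic-group computation sketched above lost its displayed maps in extraction, and a standard statement of it is as follows. For \(G = \mathbb{Z}/m\) with generator \(t\) and norm element \(N = 1 + t + \dots + t^{m-1}\), the periodic resolution and resulting cohomology read
\[
\cdots \xrightarrow{\,t-1\,} \mathbb{Z}[G] \xrightarrow{\,N\,} \mathbb{Z}[G] \xrightarrow{\,t-1\,} \mathbb{Z}[G] \xrightarrow{\,\varepsilon\,} \mathbb{Z} \to 0,
\]
\[
H^{0}(G,M)=M^{G},\qquad H^{2k-1}(G,M)=\{m\in M : Nm=0\}/(t-1)M,\qquad H^{2k}(G,M)=M^{G}/NM \quad (k\ge 1),
\]
so for the trivial module \(M=\mathbb{Z}\) one gets \(H^{0}=\mathbb{Z}\), \(H^{\text{odd}}=0\) and \(H^{2k}=\mathbb{Z}/m\) for \(k\ge 1\).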
We get a complete set of generators of -cocycles for odd as the mapsgiven byfor odd, , a primitive -th root of unity, a field containing -th roots of unity, andfor a rational number denoting the largest integer not greater than . Also, we are using the notationwhere is a generator for . Note that for non-zero even indices the cohomology groups are trivial. Cohomology of free groups Using a resolution Given a set the associated free group has an explicit resolution of the trivial module which can be easily computed. Notice the augmentation maphas kernel given by the free submodule generated by the set , so.Because this object is free, this gives a resolutionhence the group cohomology of with coefficients in can be computed by applying the functor to the complex , givingthis is because the dual mapsends any -module morphismto the induced morphism on by composing the inclusion. The only maps which are sent to are -multiples of the augmentation map, giving the first cohomology group. The second can be found by noticing the only other mapscan be generated by the -basis of maps sending for a fixed , and sending for any . Using topology The group cohomology of free groups generated by letters can be readily computed by comparing group cohomology with its interpretation in topology. Recall that for every group there is a topological space , called the classifying space of the group, which has the property.In addition, it has the property that its topological cohomology is isomorphic to group cohomologygiving a way to compute some group cohomology groups. Note could be replaced by any local system which is determined by a mapfor some abelian group . In the case of for letters, this is represented by a wedge sum of circles which can be showed using the Van-Kampen theorem, giving the group cohomology Group cohomology of an integral lattice For an integral lattice of rank (hence isomorphic to ), its group cohomology can be computed with relative ease. First, because , and has , which as abelian groups are isomorphic to , the group cohomology has the isomorphismwith the integral cohomology of a torus of rank . Properties In the following, let M be a G-module. Long exact sequence of cohomology In practice, one often computes the cohomology groups using the following fact: if is a short exact sequence of G-modules, then a long exact sequence is induced: The so-called connecting homomorphisms, can be described in terms of inhomogeneous cochains as follows. If is represented by an n-cocycle then is represented by where is an n-cochain "lifting" (i.e. is the composition of with the surjective map M → N). Functoriality Group cohomology depends contravariantly on the group G, in the following sense: if f : H → G is a group homomorphism, then we have a naturally induced morphism Hn(G, M) → Hn(H, M) (where in the latter, M is treated as an H-module via f). This map is called the restriction map. If the index of H in G is finite, there is also a map in the opposite direction, called transfer map, In degree 0, it is given by the map Given a morphism of G-modules M → N, one gets a morphism of cohomology groups in the Hn(G, M) → Hn(G, N). Products Similarly to other cohomology theories in topology and geometry, such as singular cohomology or de Rham cohomology, group cohomology enjoys a product structure: there is a natural map called cup product: for any two G-modules M and N. 
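Another aside, before the ring structure discussed next: the free-group and integral-lattice computations above have concise closed forms, which the garbled displays were presumably stating. For the free group \(F_n\) on \(n\) letters with trivial integer coefficients, and for the rank-\(n\) lattice,
\[
H^{0}(F_n,\mathbb{Z})=\mathbb{Z},\qquad H^{1}(F_n,\mathbb{Z})\cong \mathbb{Z}^{n},\qquad H^{k}(F_n,M)=0 \ \text{for } k\ge 2,
\]
\[
H^{k}(\mathbb{Z}^{n};\mathbb{Z}) \cong H^{k}\bigl((S^{1})^{n};\mathbb{Z}\bigr) \cong \Lambda^{k}\mathbb{Z}^{n} \cong \mathbb{Z}^{\binom{n}{k}}.
\]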
This yields a graded anti-commutative ring structure on where R is a ring such as or For a finite group G, the even part of this cohomology ring in characteristic p, carries a lot of information about the group the structure of G, for example the Krull dimension of this ring equals the maximal rank of an abelian subgroup . For example, let G be the group with two elements, under the discrete topology. The real projective space is a classifying space for G. Let the field of two elements. Then a polynomial k-algebra on a single generator, since this is the cellular cohomology ring of Künneth formula If, M = k is a field, then H*(G; k) is a graded k-algebra and the cohomology of a product of groups is related to the ones of the individual groups by a Künneth formula: For example, if G is an elementary abelian 2-group of rank r, and then the Künneth formula shows that the cohomology of G is a polynomial k-algebra generated by r classes in H1(G; k)., Homology vs. cohomology As for other cohomology theories, such as singular cohomology, group cohomology and homology are related to one another by means of a short exact sequence where A is endowed with the trivial G-action and the term at the left is the first Ext group. Amalgamated products Given a group A which is the subgroup of two groups G1 and G2, the homology of the amalgamated product (with integer coefficients) lies in a long exact sequence The homology of can be computed using this: This exact sequence can also be applied to show that the homology of the and the special linear group agree for an infinite field k. Change of group The Hochschild–Serre spectral sequence relates the cohomology of a normal subgroup N of G and the quotient G/N to the cohomology of the group G (for (pro-)finite groups G). From it, one gets the inflation-restriction exact sequence. Cohomology of the classifying space Group cohomology is closely related to topological cohomology theories such as sheaf cohomology, by means of an isomorphism The expression at the left is a classifying space for . It is an Eilenberg–MacLane space , i.e., a space whose fundamental group is and whose higher homotopy groups vanish). Classifying spaces for and are the 1-sphere S1, infinite real projective space and lens spaces, respectively. In general, can be constructed as the quotient , where is a contractible space on which acts freely. However, does not usually have an easily amenable geometric description. More generally, one can attach to any -module a local coefficient system on and the above isomorphism generalizes to an isomorphism Further examples Semi-direct products of groups There is a way to compute the semi-direct product of groups using the topology of fibrations and properties of Eilenberg-Maclane spaces. Recall that for a semi-direct product of groups there is an associated short exact sequence of groupsUsing the associated Eilenberg-Maclane spaces there is a Serre fibrationwhich can be put through a Serre spectral sequence. This gives an -pagewhich gives information about the group cohomology of from the group cohomology groups of . Note this formalism can be applied in a purely group-theoretic manner using the Lyndon–Hochschild–Serre spectral sequence. Cohomology of finite groups Higher cohomology groups are torsion The cohomology groups Hn(G, M) of finite groups G are all torsion for all n≥1. 
Indeed, by Maschke's theorem the category of representations of a finite group is semi-simple over any field of characteristic zero (or more generally, any field whose characteristic does not divide the order of the group), hence, viewing group cohomology as a derived functor in this abelian category, one obtains that it is zero. The other argument is that over a field of characteristic zero, the group algebra of a finite group is a direct sum of matrix algebras (possibly over division algebras which are extensions of the original field), while a matrix algebra is Morita equivalent to its base field and hence has trivial cohomology. If the order of G is invertible in a G-module M (for example, if M is a -vector space), the transfer map can be used to show that for A typical application of this fact is as follows: the long exact cohomology sequence of the short exact sequence (where all three groups have a trivial G-action) yields an isomorphism Tate cohomology Tate cohomology groups combine both homology and cohomology of a finite group G: where is induced by the norm map: Tate cohomology enjoys similar features, such as long exact sequences, product structures. An important application is in class field theory, see class formation. Tate cohomology of finite cyclic groups, is 2-periodic in the sense that there are isomorphisms A necessary and sufficient criterion for a d-periodic cohomology is that the only abelian subgroups of G are cyclic. For example, any semi-direct product has this property for coprime integers n and m. Applications Algebraic K-theory and homology of linear groups Algebraic K-theory is closely related to group cohomology: in Quillen's +-construction of K-theory, K-theory of a ring R is defined as the homotopy groups of a space Here is the infinite general linear group. The space has the same homology as i.e., the group homology of GL(R). In some cases, stability results assert that the sequence of cohomology groups becomes stationary for large enough n, hence reducing the computation of the cohomology of the infinite general linear group to the one of some . Such results have been established when R is a field or for rings of integers in a number field. The phenomenon that group homology of a series of groups stabilizes is referred to as homological stability. In addition to the case just mentioned, this applies to various other groups such as symmetric groups or mapping class groups. Projective representations and group extensions In quantum mechanics we often have systems with a symmetry group We expect an action of on the Hilbert space by unitary matrices We might expect but the rules of quantum mechanics only require where is a phase. This projective representation of can also be thought of as a conventional representation of a group extension of by as described by the exact sequence Requiring associativity leads to which we recognise as the statement that i.e. that is a cocycle taking values in We can ask whether we can eliminate the phases by redefining which changes This we recognise as shifting by a coboundary The distinct projective representations are therefore classified by Note that if we allow the phases themselves to be acted on by the group (for example, time reversal would complex-conjugate the phase), then the first term in each of the coboundary operations will have a acting on it as in the general definitions of coboundary in the previous sections. 
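Filling in the displayed relations in the projective-representation discussion above (lost in extraction), the standard bookkeeping is
\[
U(g_{1})\,U(g_{2}) = e^{i\varphi(g_{1},g_{2})}\,U(g_{1}g_{2}),
\]
and associativity of \(U(g_{1})U(g_{2})U(g_{3})\) forces
\[
\varphi(g_{2},g_{3}) - \varphi(g_{1}g_{2},g_{3}) + \varphi(g_{1},g_{2}g_{3}) - \varphi(g_{1},g_{2}) \in 2\pi\mathbb{Z},
\]
i.e. \(e^{i\varphi}\) is a 2-cocycle with values in \(U(1)\). Redefining \(U(g)\mapsto e^{i\theta(g)}U(g)\) shifts \(\varphi(g_{1},g_{2})\) by the coboundary \(\theta(g_{1})+\theta(g_{2})-\theta(g_{1}g_{2})\), so the physically distinct projective representations are classified by \(H^{2}(G,U(1))\); when the phases themselves carry a \(G\)-action (for example complex conjugation under time reversal), the twisted coboundary of the general definitions applies instead.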
For example, Extensions Cohomology of topological groups Given a topological group G, i.e., a group equipped with a topology such that product and inverse are continuous maps, it is natural to consider continuous G-modules, i.e., requiring that the action is a continuous map. For such modules, one can again consider the derived functor of . A special case occurring in algebra and number theory is when G is profinite, for example the absolute Galois group of a field. The resulting cohomology is called Galois cohomology. Non-abelian group cohomology Using the G-invariants and the 1-cochains, one can construct the zeroth and first group cohomology for a group G with coefficients in a non-abelian group. Specifically, a G-group is a (not necessarily abelian) group A together with an action by G. The zeroth cohomology of G with coefficients in A is defined to be the subgroup of elements of A fixed by G. The first cohomology of G with coefficients in A is defined as 1-cocycles modulo an equivalence relation instead of by 1-coboundaries. The condition for a map to be a 1-cocycle is that and if there is an a in A such that . In general, is not a group when A is non-abelian. It instead has the structure of a pointed set – exactly the same situation arises in the 0th homotopy group, which for a general topological space is not a group but a pointed set. Note that a group is in particular a pointed set, with the identity element as distinguished point. Using explicit calculations, one still obtains a truncated long exact sequence in cohomology. Specifically, let be a short exact sequence of G-groups, then there is an exact sequence of pointed sets History and relation to other fields The low-dimensional cohomology of a group was classically studied in other guises, well before the notion of group cohomology was formulated in 1943–45. The first theorem of the subject can be identified as Hilbert's Theorem 90 in 1897; this was recast into Emmy Noether's equations in Galois theory (an appearance of cocycles for ). The idea of factor sets for the extension problem for groups (connected with ) arose in the work of Otto Hölder (1893), in Issai Schur's 1904 study of projective representations, in Otto Schreier's 1926 treatment, and in Richard Brauer's 1928 study of simple algebras and the Brauer group. A fuller discussion of this history may be found in . In 1941, while studying (which plays a special role in groups), Heinz Hopf discovered what is now called Hopf's integral homology formula , which is identical to Schur's formula for the Schur multiplier of a finite, finitely presented group: where and F is a free group. Hopf's result led to the independent discovery of group cohomology by several groups in 1943-45: Samuel Eilenberg and Saunders Mac Lane in the United States ; Hopf and Beno Eckmann in Switzerland; Hans Freudenthal in the Netherlands ; and Dmitry Faddeev in the Soviet Union (, ). The situation was chaotic because communication between these countries was difficult during World War II. From a topological point of view, the homology and cohomology of G was first defined as the homology and cohomology of a model for the topological classifying space BG as discussed above. In practice, this meant using topology to produce the chain complexes used in formal algebraic definitions. From a module-theoretic point of view this was integrated into the Cartan–Eilenberg theory of homological algebra in the early 1950s. 
The application in algebraic number theory to class field theory provided theorems valid for general Galois extensions (not just abelian extensions). The cohomological part of class field theory was axiomatized as the theory of class formations. In turn, this led to the notion of Galois cohomology and étale cohomology (which builds on it) . Some refinements in the theory post-1960 have been made, such as continuous cocycles and John Tate's redefinition, but the basic outlines remain the same. This is a large field, and now basic in the theories of algebraic groups. The analogous theory for Lie algebras, called Lie algebra cohomology, was first developed in the late 1940s, by Claude Chevalley and Eilenberg, and Jean-Louis Koszul . It is formally similar, using the corresponding definition of invariant for the action of a Lie algebra. It is much applied in representation theory, and is closely connected with the BRST quantization of theoretical physics. Group cohomology theory also has a direct application in condensed matter physics. Just like group theory being the mathematical foundation of spontaneous symmetry breaking phases, group cohomology theory is the mathematical foundation of a class of quantum states of matter—short-range entangled states with symmetry. Short-range entangled states with symmetry are also known as symmetry-protected topological states. See also Lyndon–Hochschild–Serre spectral sequence N-group (category theory) Postnikov tower Notes References Works cited Further reading Chapter 6 of Algebraic number theory Cohomology theories Group theory Homological algebra
Group cohomology
[ "Mathematics" ]
5,872
[ "Mathematical structures", "Group theory", "Fields of abstract algebra", "Category theory", "Algebraic number theory", "Number theory", "Homological algebra" ]
248,671
https://en.wikipedia.org/wiki/Gluconeogenesis
Gluconeogenesis (GNG) is a metabolic pathway that results in the biosynthesis of glucose from certain non-carbohydrate carbon substrates. It is a ubiquitous process, present in plants, animals, fungi, bacteria, and other microorganisms. In vertebrates, gluconeogenesis occurs mainly in the liver and, to a lesser extent, in the cortex of the kidneys. It is one of two primary mechanisms – the other being degradation of glycogen (glycogenolysis) – used by humans and many other animals to maintain blood sugar levels, avoiding low levels (hypoglycemia). In ruminants, because dietary carbohydrates tend to be metabolized by rumen organisms, gluconeogenesis occurs regardless of fasting, low-carbohydrate diets, exercise, etc. In many other animals, the process occurs during periods of fasting, starvation, low-carbohydrate diets, or intense exercise. In humans, substrates for gluconeogenesis may come from any non-carbohydrate sources that can be converted to pyruvate or intermediates of glycolysis (see figure). For the breakdown of proteins, these substrates include glucogenic amino acids (although not ketogenic amino acids); from breakdown of lipids (such as triglycerides), they include glycerol, odd-chain fatty acids (although not even-chain fatty acids, see below); and from other parts of metabolism that includes lactate from the Cori cycle. Under conditions of prolonged fasting, acetone derived from ketone bodies can also serve as a substrate, providing a pathway from fatty acids to glucose. Although most gluconeogenesis occurs in the liver, the relative contribution of gluconeogenesis by the kidney is increased in diabetes and prolonged fasting. The gluconeogenesis pathway is highly endergonic until it is coupled to the hydrolysis of ATP or GTP, effectively making the process exergonic. For example, the pathway leading from pyruvate to glucose-6-phosphate requires 4 molecules of ATP and 2 molecules of GTP to proceed spontaneously. These ATPs are supplied from fatty acid catabolism via beta oxidation. Precursors In humans the main gluconeogenic precursors are lactate, glycerol (which is a part of the triglyceride molecule), alanine and glutamine. Altogether, they account for over 90% of the overall gluconeogenesis. Other glucogenic amino acids and all citric acid cycle intermediates (through conversion to oxaloacetate) can also function as substrates for gluconeogenesis. Generally, human consumption of gluconeogenic substrates in food does not result in increased gluconeogenesis. In ruminants, propionate is the principal gluconeogenic substrate. In nonruminants, including human beings, propionate arises from the β-oxidation of odd-chain and branched-chain fatty acids, and is a (relatively minor) substrate for gluconeogenesis. Lactate is transported back to the liver where it is converted into pyruvate by the Cori cycle using the enzyme lactate dehydrogenase. Pyruvate, the first designated substrate of the gluconeogenic pathway, can then be used to generate glucose. Transamination or deamination of amino acids facilitates entering of their carbon skeleton into the cycle directly (as pyruvate or oxaloacetate), or indirectly via the citric acid cycle. The contribution of Cori cycle lactate to overall glucose production increases with fasting duration. Specifically, after 12, 20, and 40 hours of fasting by human volunteers, the contribution of Cori cycle lactate to gluconeogenesis was 41%, 71%, and 92%, respectively. 
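As an illustrative worked equation (a sketch only; the exact coefficients depend on the textbook convention for counting water and phosphate, and this equation is not taken from the article itself), the commonly quoted net reaction from pyruvate to glucose is

\[ 2\,\text{pyruvate} + 4\,\text{ATP} + 2\,\text{GTP} + 2\,\text{NADH} + 2\,\text{H}^+ + 4\,\text{H}_2\text{O} \longrightarrow \text{glucose} + 4\,\text{ADP} + 2\,\text{GDP} + 6\,\text{P}_i + 2\,\text{NAD}^+, \]

which makes the energy cost noted above explicit: one ATP per pyruvate at pyruvate carboxylase, one GTP per oxaloacetate at PEP carboxykinase, and one ATP per triose at the reversed phosphoglycerate kinase step.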
Whether even-chain fatty acids can be converted into glucose in animals has been a longstanding question in biochemistry. Odd-chain fatty acids can be oxidized to yield acetyl-CoA and propionyl-CoA, the latter serving as a precursor to succinyl-CoA, which can be converted to oxaloacetate and enter into gluconeogenesis. In contrast, even-chain fatty acids are oxidized to yield only acetyl-CoA, whose entry into gluconeogenesis requires the presence of a glyoxylate cycle (also known as glyoxylate shunt) to produce four-carbon dicarboxylic acid precursors. The glyoxylate shunt comprises two enzymes, malate synthase and isocitrate lyase, and is present in fungi, plants, and bacteria. Despite some reports of glyoxylate shunt enzymatic activities detected in animal tissues, genes encoding both enzymatic functions have only been found in nematodes, in which they exist as a single bi-functional enzyme. Genes coding for malate synthase alone (but not isocitrate lyase) have been identified in other animals including arthropods, echinoderms, and even some vertebrates. Mammals found to possess the malate synthase gene include monotremes (platypus) and marsupials (opossum), but not placental mammals. The existence of the glyoxylate cycle in humans has not been established, and it is widely held that fatty acids cannot be converted to glucose in humans directly. Carbon-14 has been shown to end up in glucose when it is supplied in fatty acids, but this can be expected from the incorporation of labelled atoms derived from acetyl-CoA into citric acid cycle intermediates which are interchangeable with those derived from other physiological sources, such as glucogenic amino acids. In the absence of other glucogenic sources, the 2-carbon acetyl-CoA derived from the oxidation of fatty acids cannot produce a net yield of glucose via the citric acid cycle, since an equivalent two carbon atoms are released as carbon dioxide during the cycle. During ketosis, however, acetyl-CoA from fatty acids yields ketone bodies, including acetone, and up to ~60% of acetone may be oxidized in the liver to the pyruvate precursors acetol and methylglyoxal. Thus ketone bodies derived from fatty acids could account for up to 11% of gluconeogenesis during starvation. Catabolism of fatty acids also produces energy in the form of ATP that is necessary for the gluconeogenesis pathway. Location In mammals, gluconeogenesis has been believed to be restricted to the liver, the kidney, the intestine, and muscle, but recent evidence indicates gluconeogenesis occurring in astrocytes of the brain. These organs use somewhat different gluconeogenic precursors. The liver preferentially uses lactate, glycerol, and glucogenic amino acids (especially alanine) while the kidney preferentially uses lactate, glutamine and glycerol. Lactate from the Cori cycle is quantitatively the largest source of substrate for gluconeogenesis, especially for the kidney. The liver uses both glycogenolysis and gluconeogenesis to produce glucose, whereas the kidney only uses gluconeogenesis. After a meal, the liver shifts to glycogen synthesis, whereas the kidney increases gluconeogenesis. The intestine uses mostly glutamine and glycerol. Propionate is the principal substrate for gluconeogenesis in the ruminant liver, and the ruminant liver may make increased use of gluconeogenic amino acids (e.g., alanine) when glucose demand is increased. 
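A short carbon-counting sketch (standard bookkeeping, not taken from the article itself) makes the "no net yield" argument concrete: acetyl-CoA contributes two carbons on entering the citric acid cycle,

\[ \text{acetyl-CoA}\,(2\,C) + \text{oxaloacetate}\,(4\,C) \to \text{citrate}\,(6\,C) \to \cdots \to \text{oxaloacetate}\,(4\,C) + 2\,\text{CO}_2, \]

so one turn of the cycle regenerates the 4-carbon acceptor and releases exactly the two carbons that entered; no additional oxaloacetate accumulates from which glucose could be withdrawn.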
The capacity of liver cells to use lactate for gluconeogenesis declines from the preruminant stage to the ruminant stage in calves and lambs. In sheep kidney tissue, very high rates of gluconeogenesis from propionate have been observed. In all species, the formation of oxaloacetate from pyruvate and TCA cycle intermediates is restricted to the mitochondrion, and the enzymes that convert Phosphoenolpyruvic acid (PEP) to glucose-6-phosphate are found in the cytosol. The location of the enzyme that links these two parts of gluconeogenesis by converting oxaloacetate to PEP – PEP carboxykinase (PEPCK) – is variable by species: it can be found entirely within the mitochondria, entirely within the cytosol, or dispersed evenly between the two, as it is in humans. Transport of PEP across the mitochondrial membrane is accomplished by dedicated transport proteins; however no such proteins exist for oxaloacetate. Therefore, in species that lack intra-mitochondrial PEPCK, oxaloacetate must be converted into malate or aspartate, exported from the mitochondrion, and converted back into oxaloacetate in order to allow gluconeogenesis to continue. Pathway Gluconeogenesis is a pathway consisting of a series of eleven enzyme-catalyzed reactions. The pathway will begin in either the liver or kidney, in the mitochondria or cytoplasm of those cells, this being dependent on the substrate being used. Many of the reactions are the reverse of steps found in glycolysis. Gluconeogenesis begins in the mitochondria with the formation of oxaloacetate by the carboxylation of pyruvate. This reaction also requires one molecule of ATP, and is catalyzed by pyruvate carboxylase. This enzyme is stimulated by high levels of acetyl-CoA (produced in β-oxidation in the liver) and inhibited by high levels of ADP and glucose. Oxaloacetate is reduced to malate using NADH, a step required for its transportation out of the mitochondria. Malate is oxidized to oxaloacetate using NAD+ in the cytosol, where the remaining steps of gluconeogenesis take place. Oxaloacetate is decarboxylated and then phosphorylated to form phosphoenolpyruvate using the enzyme PEPCK. A molecule of GTP is hydrolyzed to GDP during this reaction. The next steps in the reaction are the same as reversed glycolysis. However, fructose 1,6-bisphosphatase converts fructose 1,6-bisphosphate to fructose 6-phosphate, using one water molecule and releasing one phosphate (in glycolysis, phosphofructokinase 1 converts F6P and ATP to F1,6BP and ADP). This is also the rate-limiting step of gluconeogenesis. Glucose-6-phosphate is formed from fructose 6-phosphate by phosphoglucoisomerase (the reverse of step 2 in glycolysis). Glucose-6-phosphate can be used in other metabolic pathways or dephosphorylated to free glucose. Whereas free glucose can easily diffuse in and out of the cell, the phosphorylated form (glucose-6-phosphate) is locked in the cell, a mechanism by which intracellular glucose levels are controlled by cells. The final step in gluconeogenesis, the formation of glucose, occurs in the lumen of the endoplasmic reticulum, where glucose-6-phosphate is hydrolyzed by glucose-6-phosphatase to produce glucose and release an inorganic phosphate. Like two steps prior, this step is not a simple reversal of glycolysis, in which hexokinase catalyzes the conversion of glucose and ATP into G6P and ADP. Glucose is shuttled into the cytoplasm by glucose transporters located in the endoplasmic reticulum's membrane. 
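A minimal sketch in Python (purely illustrative; it simply restates the bypass reactions and compartments described above and is not a metabolic model) of the four steps at which gluconeogenesis departs from reversed glycolysis:

```python
# Illustrative summary of the gluconeogenesis "bypass" steps described above.
# Each entry: (enzyme, reaction, compartment, energy cost per reaction).
BYPASS_STEPS = [
    ("pyruvate carboxylase",
     "pyruvate + CO2 -> oxaloacetate",
     "mitochondrion", "1 ATP"),
    ("PEP carboxykinase (PEPCK)",
     "oxaloacetate -> phosphoenolpyruvate + CO2",
     "mitochondrion and/or cytosol (species-dependent)", "1 GTP"),
    ("fructose 1,6-bisphosphatase",
     "fructose 1,6-bisphosphate + H2O -> fructose 6-phosphate + Pi",
     "cytosol", "none"),
    ("glucose-6-phosphatase",
     "glucose 6-phosphate + H2O -> glucose + Pi",
     "endoplasmic reticulum lumen", "none"),
]

for enzyme, reaction, compartment, cost in BYPASS_STEPS:
    print(f"{enzyme:30s} | {compartment:48s} | cost: {cost:5s} | {reaction}")
```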
Regulation While most steps in gluconeogenesis are the reverse of those found in glycolysis, three regulated and strongly endergonic reactions are replaced with more kinetically favorable reactions. Hexokinase/glucokinase, phosphofructokinase, and pyruvate kinase enzymes of glycolysis are replaced with glucose-6-phosphatase, fructose-1,6-bisphosphatase, and PEP carboxykinase/pyruvate carboxylase. These enzymes are typically regulated by similar molecules, but with opposite results. For example, acetyl CoA and citrate activate gluconeogenesis enzymes (pyruvate carboxylase and fructose-1,6-bisphosphatase, respectively), while at the same time inhibiting the glycolytic enzyme pyruvate kinase. This system of reciprocal control allows glycolysis and gluconeogenesis to inhibit each other and prevents a futile cycle in which glucose is synthesized only to be broken down again. Pyruvate kinase can also be bypassed by 86 pathways not related to gluconeogenesis, for the purpose of forming pyruvate and subsequently lactate; some of these pathways use carbon atoms originating from glucose. The majority of the enzymes responsible for gluconeogenesis are found in the cytosol; the exceptions are mitochondrial pyruvate carboxylase and, in animals, phosphoenolpyruvate carboxykinase. The latter exists as an isozyme located in both the mitochondrion and the cytosol. The rate of gluconeogenesis is ultimately controlled by the action of a key enzyme, fructose-1,6-bisphosphatase, which is also regulated through signal transduction by cAMP and its phosphorylation. Global control of gluconeogenesis is mediated by glucagon (released when blood glucose is low); it triggers phosphorylation of enzymes and regulatory proteins by Protein Kinase A (a cyclic AMP-regulated kinase), resulting in inhibition of glycolysis and stimulation of gluconeogenesis. Insulin counteracts glucagon by inhibiting gluconeogenesis. Type 2 diabetes is marked by excess glucagon and by insulin resistance. Insulin can no longer inhibit the gene expression of enzymes such as PEPCK, which leads to hyperglycemia. The anti-diabetic drug metformin reduces blood glucose primarily through inhibition of gluconeogenesis, overcoming the failure of insulin to inhibit gluconeogenesis due to insulin resistance. Studies have shown that the absence of hepatic glucose production has no major effect on the control of fasting plasma glucose concentration. Compensatory induction of gluconeogenesis occurs in the kidneys and intestine, driven by glucagon, glucocorticoids, and acidosis. Insulin resistance In the liver, the FOX protein FOXO6 normally promotes gluconeogenesis in the fasted state, but insulin blocks FOXO6 upon feeding. In a condition of insulin resistance, insulin fails to block FOXO6, so gluconeogenesis continues even upon feeding, resulting in high blood glucose (hyperglycemia). Insulin resistance is a common feature of metabolic syndrome and type 2 diabetes. For this reason, gluconeogenesis is a target of therapy for type 2 diabetes, such as the antidiabetic drug metformin, which inhibits gluconeogenic glucose formation and stimulates glucose uptake by cells. Origins Gluconeogenesis is considered one of the most ancient anabolic pathways and is likely to have been exhibited in the last universal common ancestor. Rafael F.
Say and Georg Fuchs stated in 2010 that "all archaeal groups as well as the deeply branching bacterial lineages contain a bifunctional fructose 1,6-bisphosphate (FBP) aldolase/phosphatase with both FBP aldolase and FBP phosphatase activity. This enzyme is missing in most other Bacteria and in Eukaryota, and is heat-stabile even in mesophilic marine Crenarchaeota". It is proposed that fructose 1,6-bisphosphate aldolase/phosphatase was an ancestral gluconeogenic enzyme and had preceded glycolysis. However, the chemical mechanisms of gluconeogenesis and glycolysis, whether anabolic or catabolic, are similar, suggesting that the two pathways originated at the same time. Fructose 1,6-bisphosphate has been shown to be synthesized continuously and nonenzymatically within a freezing solution. The synthesis is accelerated in the presence of amino acids such as glycine and lysine, implying that the first anabolic enzymes were amino acids. The prebiotic reactions in gluconeogenesis can also proceed nonenzymatically during dehydration-desiccation cycles. Such chemistry could have occurred in hydrothermal environments, including temperature gradients and cycling of freezing and thawing. Mineral surfaces might have played a role in the phosphorylation of metabolic intermediates from gluconeogenesis and have been shown to produce tetrose, hexose phosphates, and pentose from formaldehyde, glyceraldehyde, and glycolaldehyde. See also Bioenergetics References External links Overview at indstate.edu Interactive diagram at uakron.edu The chemical logic behind gluconeogenesis metpath: Interactive representation of gluconeogenesis Biochemical reactions Carbohydrate metabolism Diabetes Exercise biochemistry Glycobiology Hepatology Metabolic pathways
Gluconeogenesis
[ "Chemistry", "Biology" ]
3,772
[ "Carbohydrate metabolism", "Biochemistry", "Exercise biochemistry", "Biochemical reactions", "Carbohydrate chemistry", "Metabolic pathways", "Glycobiology", "Metabolism" ]
248,710
https://en.wikipedia.org/wiki/Asymptotic%20equipartition%20property
In information theory, the asymptotic equipartition property (AEP) is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of data compression. Roughly speaking, the theorem states that although there are many series of results that may be produced by a random process, the one actually produced is most probably from a loosely defined set of outcomes that all have approximately the same chance of being the one actually realized. (This is a consequence of the law of large numbers and ergodic theory.) Although there are individual outcomes which have a higher probability than any outcome in this set, the vast number of outcomes in the set almost guarantees that the outcome will come from the set. One way of intuitively understanding the property is through Cramér's large deviation theorem, which states that the probability of a large deviation from mean decays exponentially with the number of samples. Such results are studied in large deviations theory; intuitively, it is the large deviations that would violate equipartition, but these are unlikely. In the field of pseudorandom number generation, a candidate generator of undetermined quality whose output sequence lies too far outside the typical set by some statistical criteria is rejected as insufficiently random. Thus, although the typical set is loosely defined, practical notions arise concerning sufficient typicality. Definition Given a discrete-time stationary ergodic stochastic process on the probability space , the asymptotic equipartition property is an assertion that, almost surely, where or simply denotes the entropy rate of , which must exist for all discrete-time stationary processes including the ergodic ones. The asymptotic equipartition property is proved for finite-valued (i.e. ) stationary ergodic stochastic processes in the Shannon–McMillan–Breiman theorem using the ergodic theory and for any i.i.d. sources directly using the law of large numbers in both the discrete-valued case (where is simply the entropy of a symbol) and the continuous-valued case (where is the differential entropy instead). The definition of the asymptotic equipartition property can also be extended for certain classes of continuous-time stochastic processes for which a typical set exists for long enough observation time. The convergence is proven almost sure in all cases. Discrete-time i.i.d. sources Given is an i.i.d. source which may take values in the alphabet , its time series is i.i.d. with entropy . The weak law of large numbers gives the asymptotic equipartition property with convergence in probability, since the entropy is equal to the expectation of The strong law of large numbers asserts the stronger almost sure convergence, Convergence in the sense of L1 asserts an even stronger Discrete-time finite-valued stationary ergodic sources Consider a finite-valued sample space , i.e. , for the discrete-time stationary ergodic process defined on the probability space . The Shannon–McMillan–Breiman theorem, due to Claude Shannon, Brockway McMillan, and Leo Breiman, states that we have convergence in the sense of L1. Chung Kai-lai generalized this to the case where may take value in a set of countable infinity, provided that the entropy rate is still finite. Non-stationary discrete-time source producing independent symbols The assumptions of stationarity/ergodicity/identical distribution of random variables is not essential for the asymptotic equipartition property to hold. 
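Since the displayed formulas above were lost in extraction, note that for an i.i.d. source the property says that −(1/n) log p(X₁, …, Xₙ) converges to the entropy H(X). The following short simulation (an illustrative sketch only; the Bernoulli(0.3) source and the fixed seed are arbitrary choices, not taken from the article) checks this numerically:

```python
import math
import random

p = 0.3                                                 # P(X = 1) for the Bernoulli source
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))    # entropy in bits per symbol

random.seed(0)
for n in (100, 1_000, 10_000, 100_000):
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    # log2-probability of the observed sequence under the true source distribution
    log_prob = sum(math.log2(p) if x == 1 else math.log2(1 - p) for x in xs)
    print(f"n={n:6d}   -(1/n) log2 p(x^n) = {-log_prob / n:.4f}   H = {H:.4f}")
```

As n grows, the empirical per-symbol log-probability concentrates near H, which is the equipartition statement for the i.i.d. case.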
Indeed, as is quite clear intuitively, the asymptotic equipartition property requires only some form of the law of large numbers to hold, which is fairly general. However, the expression needs to be suitably generalized, and the conditions need to be formulated precisely. We assume that the source is producing independent symbols, with possibly different output statistics at each instant. We assume that the statistics of the process are known completely, that is, the marginal distribution of the process seen at each time instant is known. The joint distribution is just the product of marginals. Then, under the condition (which can be relaxed) that for all i, for some M > 0, the following holds (AEP): where Applications The asymptotic equipartition property for non-stationary discrete-time independent process leads us to (among other results) the source coding theorem for non-stationary source (with independent output symbols) and noisy-channel coding theorem for non-stationary memoryless channels. Measure-theoretic form is a measure-preserving map on the probability space . If is a finite or countable partition of , then its entropy is with the convention that . We only consider partitions with finite entropy: . If is a finite or countable partition of , then we construct a sequence of partitions by iterating the map:where is the least upper bound partition, that is, the least refined partition that refines both and :Write to be the set in where falls in. So, for example, is the -letter initial segment of the -name of . Write to be the information (in units of nats) about we can recover, if we know which element in the partition that falls in:Similarly, the conditional information of partition , conditional on partition , about , is is the Kolmogorov-Sinai entropyIn other words, by definition, we have a convergence in expectation. The SMB theorem states that when is ergodic, we have convergence in L1. If is not necessarily ergodic, then the underlying probability space would be split up into multiple subsets, each invariant under . In this case, we still have L1 convergence to some function, but that function is no longer a constant function. When is ergodic, is trivial, and so the functionsimplifies into the constant function , which by definition, equals , which equals by a proposition. Continuous-time stationary ergodic sources Discrete-time functions can be interpolated to continuous-time functions. If such interpolation f is measurable, we may define the continuous-time stationary process accordingly as . If the asymptotic equipartition property holds for the discrete-time process, as in the i.i.d. or finite-valued stationary ergodic cases shown above, it automatically holds for the continuous-time stationary process derived from it by some measurable interpolation. i.e. where n corresponds to the degree of freedom in time . and are the entropy per unit time and per degree of freedom respectively, defined by Shannon. An important class of such continuous-time stationary process is the bandlimited stationary ergodic process with the sample space being a subset of the continuous functions. The asymptotic equipartition property holds if the process is white, in which case the time samples are i.i.d., or there exists T > 1/2W, where W is the nominal bandwidth, such that the T-spaced time samples take values in a finite set, in which case we have the discrete-time finite-valued stationary ergodic process. 
Any time-invariant operation also preserves the asymptotic equipartition property, stationarity and ergodicity, and a stationary process may easily be turned into a non-stationary one, without losing the asymptotic equipartition property, by nulling out a finite number of time samples in the process. Category theory A category-theoretic definition for the equipartition property is given by Gromov. Given a sequence of Cartesian powers of a measure space P, this sequence admits an asymptotically equivalent sequence HN of homogeneous measure spaces (i.e. all sets have the same measure; all morphisms are invariant under the group of automorphisms, and thus factor as a morphism to the terminal object). The above requires a definition of asymptotic equivalence. This is given in terms of a distance function, which measures how much an injective correspondence differs from an isomorphism. An injective correspondence is a partially defined map that is a bijection; that is, it is a bijection between a subset and . Then define where |S| denotes the measure of a set S. In what follows, the measures of P and Q are taken to be 1, so that the measure spaces are probability spaces. This distance is commonly known as the earth mover's distance or Wasserstein metric. Similarly, define with taken to be the counting measure on P. Thus, this definition requires that P be a finite measure space. Finally, let A sequence of injective correspondences is then asymptotically equivalent when Given a homogeneous space sequence HN that is asymptotically equivalent to PN, the entropy H(P) of P may be taken as See also Cramér's theorem (large deviations) Noisy-channel coding theorem Shannon's source coding theorem Notes References Journal articles Claude E. Shannon. "A Mathematical Theory of Communication". Bell System Technical Journal, July/October 1948. Sergio Verdu and Te Sun Han. "The Role of the Asymptotic Equipartition Property in Noiseless Source Coding." IEEE Transactions on Information Theory, 43(3): 847–857, 1997. Textbooks Information theory Theorems in statistics
Asymptotic equipartition property
[ "Mathematics", "Technology", "Engineering" ]
1,962
[ "Mathematical theorems", "Theorems in statistics", "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory", "Mathematical problems" ]
248,808
https://en.wikipedia.org/wiki/Algebraic%20variety
Algebraic varieties are the central objects of study in algebraic geometry, a sub-field of mathematics. Classically, an algebraic variety is defined as the set of solutions of a system of polynomial equations over the real or complex numbers. Modern definitions generalize this concept in several different ways, while attempting to preserve the geometric intuition behind the original definition. Conventions regarding the definition of an algebraic variety differ slightly. For example, some definitions require an algebraic variety to be irreducible, which means that it is not the union of two smaller sets that are closed in the Zariski topology. Under this definition, non-irreducible algebraic varieties are called algebraic sets. Other conventions do not require irreducibility. The fundamental theorem of algebra establishes a link between algebra and geometry by showing that a monic polynomial (an algebraic object) in one variable with complex number coefficients is determined by the set of its roots (a geometric object) in the complex plane. Generalizing this result, Hilbert's Nullstellensatz provides a fundamental correspondence between ideals of polynomial rings and algebraic sets. Using the Nullstellensatz and related results, mathematicians have established a strong correspondence between questions on algebraic sets and questions of ring theory. This correspondence is a defining feature of algebraic geometry. Many algebraic varieties are differentiable manifolds, but an algebraic variety may have singular points while a differentiable manifold cannot. Algebraic varieties can be characterized by their dimension. Algebraic varieties of dimension one are called algebraic curves and algebraic varieties of dimension two are called algebraic surfaces. In the context of modern scheme theory, an algebraic variety over a field is an integral (irreducible and reduced) scheme over that field whose structure morphism is separated and of finite type. Overview and definitions An affine variety over an algebraically closed field is conceptually the easiest type of variety to define, which will be done in this section. Next, one can define projective and quasi-projective varieties in a similar way. The most general definition of a variety is obtained by patching together smaller quasi-projective varieties. It is not obvious that one can construct genuinely new examples of varieties in this way, but Nagata gave an example of such a new variety in the 1950s. Affine varieties For an algebraically closed field and a natural number , let be an affine -space over , identified to through the choice of an affine coordinate system. The polynomials in the ring can be viewed as K-valued functions on by evaluating at the points in , i.e. by choosing values in K for each xi. For each set S of polynomials in , define the zero-locus Z(S) to be the set of points in on which the functions in S simultaneously vanish, that is to say A subset V of is called an affine algebraic set if V = Z(S) for some S. A nonempty affine algebraic set V is called irreducible if it cannot be written as the union of two proper algebraic subsets. An irreducible affine algebraic set is also called an affine variety. (Some authors use the phrase affine variety to refer to any affine algebraic set, irreducible or not.) Affine varieties can be given a natural topology by declaring the closed sets to be precisely the affine algebraic sets. This topology is called the Zariski topology. 
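As a concrete, hedged illustration (a SymPy sketch, not part of the article's formal development; the generators chosen here describe the twisted cubic curve that appears in the examples below), one can take S = {y − x², z − x³} in C[x, y, z], check that a parametrized point lies in the zero-locus Z(S), and compute a Gröbner basis of the ideal generated by S:

```python
from sympy import symbols, groebner

x, y, z, t = symbols("x y z t")

# Generators of an ideal in C[x, y, z]; Z(S) is the curve {(t, t^2, t^3)}.
S = [y - x**2, z - x**3]

# Every point of the parametrized curve satisfies both generators,
# so the curve is contained in the zero-locus Z(S).
point = {x: t, y: t**2, z: t**3}
print([f.subs(point).expand() for f in S])    # -> [0, 0]

# A Groebner basis gives a canonical set of generators for the ideal,
# useful for deciding whether a given polynomial vanishes on Z(S).
print(groebner(S, x, y, z, order="lex"))
```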
Given a subset V of , we define I(V) to be the ideal of all polynomial functions vanishing on V: For any affine algebraic set V, the coordinate ring or structure ring of V is the quotient of the polynomial ring by this ideal. Projective varieties and quasi-projective varieties Let be an algebraically closed field and let be the projective n-space over . Let in be a homogeneous polynomial of degree d. It is not well-defined to evaluate on points in in homogeneous coordinates. However, because is homogeneous, meaning that , it does make sense to ask whether vanishes at a point . For each set S of homogeneous polynomials, define the zero-locus of S to be the set of points in on which the functions in S vanish: A subset V of is called a projective algebraic set if V = Z(S) for some S. An irreducible projective algebraic set is called a projective variety. Projective varieties are also equipped with the Zariski topology by declaring all algebraic sets to be closed. Given a subset V of , let I(V) be the ideal generated by all homogeneous polynomials vanishing on V. For any projective algebraic set V, the coordinate ring of V is the quotient of the polynomial ring by this ideal. A quasi-projective variety is a Zariski open subset of a projective variety. Notice that every affine variety is quasi-projective. Notice also that the complement of an algebraic set in an affine variety is a quasi-projective variety; in the context of affine varieties, such a quasi-projective variety is usually not called a variety but a constructible set. Abstract varieties In classical algebraic geometry, all varieties were by definition quasi-projective varieties, meaning that they were open subvarieties of closed subvarieties of a projective space. For example, in Chapter 1 of Hartshorne a variety over an algebraically closed field is defined to be a quasi-projective variety, but from Chapter 2 onwards, the term variety (also called an abstract variety) refers to a more general object, which locally is a quasi-projective variety, but when viewed as a whole is not necessarily quasi-projective; i.e. it might not have an embedding into projective space. So classically the definition of an algebraic variety required an embedding into projective space, and this embedding was used to define the topology on the variety and the regular functions on the variety. The disadvantage of such a definition is that not all varieties come with natural embeddings into projective space. For example, under this definition, the product is not a variety until it is embedded into a larger projective space; this is usually done by the Segre embedding. Furthermore, any variety that admits one embedding into projective space admits many others, for example by composing the embedding with the Veronese embedding; thus many notions that should be intrinsic, such as that of a regular function, are not obviously so. The earliest successful attempt to define an algebraic variety abstractly, without an embedding, was made by André Weil. In his Foundations of Algebraic Geometry, using valuations. Claude Chevalley made a definition of a scheme, which served a similar purpose, but was more general. However, Alexander Grothendieck's definition of a scheme is more general still and has received the most widespread acceptance. 
In Grothendieck's language, an abstract algebraic variety is usually defined to be an integral, separated scheme of finite type over an algebraically closed field, although some authors drop the irreducibility or the reducedness or the separateness condition or allow the underlying field to be not algebraically closed. Classical algebraic varieties are the quasiprojective integral separated finite type schemes over an algebraically closed field. Existence of non-quasiprojective abstract algebraic varieties One of the earliest examples of a non-quasiprojective algebraic variety was given by Nagata. Nagata's example was not complete (the analog of compactness), but soon afterwards he found an algebraic surface that was complete and non-projective. Since then other examples have been found: for example, it is straightforward to construct toric varieties that are not quasi-projective but complete. Examples Subvariety A subvariety is a subset of a variety that is itself a variety (with respect to the topological structure induced by the ambient variety). For example, every open subset of a variety is a variety. See also closed immersion. Hilbert's Nullstellensatz says that closed subvarieties of an affine or projective variety are in one-to-one correspondence with the prime ideals or non-irrelevant homogeneous prime ideals of the coordinate ring of the variety. Affine variety Example 1 Let , and A2 be the two-dimensional affine space over C. Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A2 by evaluating at the points in A2. Let subset S of C[x, y] contain a single element : The zero-locus of is the set of points in A2 on which this function vanishes: it is the set of all pairs of complex numbers (x, y) such that y = 1 − x. This is called a line in the affine plane. (In the classical topology coming from the topology on the complex numbers, a complex line is a real manifold of dimension two.) This is the set : Thus the subset of A2 is an algebraic set. The set V is not empty. It is irreducible, as it cannot be written as the union of two proper algebraic subsets. Thus it is an affine algebraic variety. Example 2 Let , and A2 be the two-dimensional affine space over C. Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A2 by evaluating at the points in A2. Let subset S of C[x, y] contain a single element g(x, y): The zero-locus of g(x, y) is the set of points in A2 on which this function vanishes, that is the set of points (x,y) such that x2 + y2 = 1. As g(x, y) is an absolutely irreducible polynomial, this is an algebraic variety. The set of its real points (that is, the points for which x and y are real numbers) is known as the unit circle; this name is also often given to the whole variety. Example 3 The following example is neither a hypersurface, nor a linear space, nor a single point. Let A3 be the three-dimensional affine space over C. The set of points (x, x2, x3) for x in C is an algebraic variety, and more precisely an algebraic curve that is not contained in any plane. It is the twisted cubic. It may be defined by the equations The irreducibility of this algebraic set needs a proof. One approach in this case is to check that the projection (x, y, z) → (x, y) is injective on the set of the solutions and that its image is an irreducible plane curve.
For more difficult examples, a similar proof may always be given, but may involve a difficult computation: first a Gröbner basis computation to compute the dimension, followed by a random linear change of variables (not always needed); then a Gröbner basis computation for another monomial ordering to compute the projection and to prove that it is generically injective and that its image is a hypersurface, and finally a polynomial factorization to prove the irreducibility of the image. General linear group The set of n-by-n matrices over the base field k can be identified with the affine n2-space with coordinates such that is the (i, j)-th entry of the matrix . The determinant is then a polynomial in and thus defines the hypersurface in . The complement of is then an open subset of that consists of all the invertible n-by-n matrices, the general linear group . It is an affine variety, since, in general, the complement of a hypersurface in an affine variety is affine. Explicitly, consider where the affine line is given coordinate t. Then amounts to the zero-locus in of the polynomial in : i.e., the set of matrices A such that has a solution. This is best seen algebraically: the coordinate ring of is the localization , which can be identified with . The multiplicative group k* of the base field k is the same as and thus is an affine variety. A finite product of it is an algebraic torus, which is again an affine variety. A general linear group is an example of a linear algebraic group, an affine variety that has a structure of a group in such a way that the group operations are morphisms of varieties. Characteristic variety Let A be a not-necessarily-commutative algebra over a field k. Even if A is not commutative, it can still happen that A has a -filtration so that the associated ring is commutative, reduced and finitely generated as a k-algebra; i.e., is the coordinate ring of an affine (reducible) variety X. For example, if A is the universal enveloping algebra of a finite-dimensional Lie algebra , then is a polynomial ring (the PBW theorem); more precisely, the coordinate ring of the dual vector space . Let M be a filtered module over A (i.e., ). If is finitely generated as a -algebra, then the support of in X, i.e., the locus where does not vanish, is called the characteristic variety of M. The notion plays an important role in the theory of D-modules. Projective variety A projective variety is a closed subvariety of a projective space. That is, it is the zero locus of a set of homogeneous polynomials that generate a prime ideal. Example 1 A plane projective curve is the zero locus of an irreducible homogeneous polynomial in three indeterminates. The projective line P1 is an example of a projective curve; it can be viewed as the curve in the projective plane defined by . For another example, first consider the affine cubic curve in the 2-dimensional affine space (over a field of characteristic not two). It has the associated cubic homogeneous polynomial equation: which defines a curve in P2 called an elliptic curve. The curve has genus one (genus formula); in particular, it is not isomorphic to the projective line P1, which has genus zero. Using genus to distinguish curves is very basic: in fact, the genus is the first invariant one uses to classify curves (see also the construction of moduli of algebraic curves). Example 2: Grassmannian Let V be a finite-dimensional vector space. The Grassmannian variety Gn(V) is the set of all n-dimensional subspaces of V.
It is a projective variety: it is embedded into a projective space via the Plücker embedding: where bi are any set of linearly independent vectors in V, is the n-th exterior power of V, and the bracket [w] means the line spanned by the nonzero vector w. The Grassmannian variety comes with a natural vector bundle (or locally free sheaf in other terminology) called the tautological bundle, which is important in the study of characteristic classes such as Chern classes. Jacobian variety and abelian variety Let C be a smooth complete curve and the Picard group of it; i.e., the group of isomorphism classes of line bundles on C. Since C is smooth, can be identified as the divisor class group of C and thus there is the degree homomorphism . The Jacobian variety of C is the kernel of this degree map; i.e., the group of the divisor classes on C of degree zero. A Jacobian variety is an example of an abelian variety, a complete variety with a compatible abelian group structure on it (the name "abelian" is however not because it is an abelian group). An abelian variety turns out to be projective (in short, algebraic theta functions give an embedding into a projective space. See equations defining abelian varieties); thus, is a projective variety. The tangent space to at the identity element is naturally isomorphic to hence, the dimension of is the genus of . Fix a point on . For each integer , there is a natural morphism where is the product of n copies of C. For (i.e., C is an elliptic curve), the above morphism for turns out to be an isomorphism; in particular, an elliptic curve is an abelian variety. Moduli varieties Given an integer , the set of isomorphism classes of smooth complete curves of genus is called the moduli of curves of genus and is denoted as . There are few ways to show this moduli has a structure of a possibly reducible algebraic variety; for example, one way is to use geometric invariant theory which ensures a set of isomorphism classes has a (reducible) quasi-projective variety structure. Moduli such as the moduli of curves of fixed genus is typically not a projective variety; roughly the reason is that a degeneration (limit) of a smooth curve tends to be non-smooth or reducible. This leads to the notion of a stable curve of genus , a not-necessarily-smooth complete curve with no terribly bad singularities and not-so-large automorphism group. The moduli of stable curves , the set of isomorphism classes of stable curves of genus , is then a projective variety which contains as an open dense subset. Since is obtained by adding boundary points to , is colloquially said to be a compactification of . Historically a paper of Mumford and Deligne introduced the notion of a stable curve to show is irreducible when . The moduli of curves exemplifies a typical situation: a moduli of nice objects tend not to be projective but only quasi-projective. Another case is a moduli of vector bundles on a curve. Here, there are the notions of stable and semistable vector bundles on a smooth complete curve . The moduli of semistable vector bundles of a given rank and a given degree (degree of the determinant of the bundle) is then a projective variety denoted as , which contains the set of isomorphism classes of stable vector bundles of rank and degree as an open subset. Since a line bundle is stable, such a moduli is a generalization of the Jacobian variety of . 
In general, in contrast to the case of moduli of curves, a compactification of a moduli need not be unique and, in some cases, different non-equivalent compactifications are constructed using different methods and by different authors. An example over is the problem of compactifying , the quotient of a bounded symmetric domain by an action of an arithmetic discrete group . A basic example of is when , Siegel's upper half-space and commensurable with ; in that case, has an interpretation as the moduli of principally polarized complex abelian varieties of dimension (a principal polarization identifies an abelian variety with its dual). The theory of toric varieties (or torus embeddings) gives a way to compactify , a toroidal compactification of it. But there are other ways to compactify ; for example, there is the minimal compactification of due to Baily and Borel: it is the projective variety associated to the graded ring formed by modular forms (in the Siegel case, Siegel modular forms; see also Siegel modular variety). The non-uniqueness of compactifications is due to the lack of moduli interpretations of those compactifications; i.e., they do not represent (in the category-theory sense) any natural moduli problem or, in the precise language, there is no natural moduli stack that would be an analog of moduli stack of stable curves. Non-affine and non-projective example An algebraic variety can be neither affine nor projective. To give an example, let and the projection. Here X is an algebraic variety since it is a product of varieties. It is not affine since P1 is a closed subvariety of X (as the zero locus of p), but an affine variety cannot contain a projective variety of positive dimension as a closed subvariety. It is not projective either, since there is a nonconstant regular function on X; namely, p. Another example of a non-affine non-projective variety is (cf. .) Non-examples Consider the affine line over . The complement of the circle in is not an algebraic variety (nor even an algebraic set). Note that is not a polynomial in (although it is a polynomial in the real coordinates ). On the other hand, the complement of the origin in is an algebraic (affine) variety, since the origin is the zero-locus of . This may be explained as follows: the affine line has dimension one and so any subvariety of it other than itself must have strictly less dimension; namely, zero. For similar reasons, a unitary group (over the complex numbers) is not an algebraic variety, while the special linear group is a closed subvariety of , the zero-locus of . (Over a different base field, a unitary group can however be given a structure of a variety.) Basic results An affine algebraic set V is a variety if and only if I(V) is a prime ideal; equivalently, V is a variety if and only if its coordinate ring is an Every nonempty affine algebraic set may be written uniquely as a finite union of algebraic varieties (where none of the varieties in the decomposition is a subvariety of any other). The dimension of a variety may be defined in various equivalent ways. See Dimension of an algebraic variety for details. A product of finitely many algebraic varieties (over an algebraically closed field) is an algebraic variety. A finite product of affine varieties is affine and a finite product of projective varieties is projective. Isomorphism of algebraic varieties Let be algebraic varieties. 
We say and are isomorphic, and write , if there are regular maps and such that the compositions and are the identity maps on and respectively. Discussion and generalizations The basic definitions and facts above enable one to do classical algebraic geometry. To be able to do more — for example, to deal with varieties over fields that are not algebraically closed — some foundational changes are required. The modern notion of a variety is considerably more abstract than the one above, though equivalent in the case of varieties over algebraically closed fields. An abstract algebraic variety is a particular kind of scheme; the generalization to schemes on the geometric side enables an extension of the correspondence described above to a wider class of rings. A scheme is a locally ringed space such that every point has a neighbourhood that, as a locally ringed space, is isomorphic to a spectrum of a ring. Basically, a variety over is a scheme whose structure sheaf is a sheaf of -algebras with the property that the rings R that occur above are all integral domains and are all finitely generated -algebras, that is to say, they are quotients of polynomial algebras by prime ideals. This definition works over any field . It allows you to glue affine varieties (along common open sets) without worrying whether the resulting object can be put into some projective space. This also leads to difficulties since one can introduce somewhat pathological objects, e.g. an affine line with zero doubled. Such objects are usually not considered varieties, and are eliminated by requiring the schemes underlying a variety to be separated. (Strictly speaking, there is also a third condition, namely, that one needs only finitely many affine patches in the definition above.) Some modern researchers also remove the restriction on a variety having integral domain affine charts, and when speaking of a variety only require that the affine charts have trivial nilradical. A complete variety is a variety such that any map from an open subset of a nonsingular curve into it can be extended uniquely to the whole curve. Every projective variety is complete, but not vice versa. These varieties have been called "varieties in the sense of Serre", since Serre's foundational paper FAC on sheaf cohomology was written for them. They remain typical objects to start studying in algebraic geometry, even if more general objects are also used in an auxiliary way. One way that leads to generalizations is to allow reducible algebraic sets (and fields that aren't algebraically closed), so the rings R may not be integral domains. A more significant modification is to allow nilpotents in the sheaf of rings, that is, rings which are not reduced. This is one of several generalizations of classical algebraic geometry that are built into Grothendieck's theory of schemes. Allowing nilpotent elements in rings is related to keeping track of "multiplicities" in algebraic geometry. For example, the closed subscheme of the affine line defined by x2 = 0 is different from the subscheme defined by x = 0 (the origin). More generally, the fiber of a morphism of schemes X → Y at a point of Y may be non-reduced, even if X and Y are reduced. Geometrically, this says that fibers of good mappings may have nontrivial "infinitesimal" structure. There are further generalizations called algebraic spaces and stacks. 
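A small worked example of the multiplicity remark above (standard, though not spelled out in the text): the subschemes of the affine line defined by x = 0 and by x² = 0 have coordinate rings

\[ k[x]/(x) \cong k \qquad \text{and} \qquad k[x]/(x^2) = \{\, a + b\varepsilon : a, b \in k,\ \varepsilon^2 = 0 \,\}, \]

respectively; in the second ring the class ε of x is a nonzero nilpotent, so Spec k[x]/(x²) is a "point with multiplicity two", even though both subschemes have the same underlying point set {0}.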
Algebraic manifolds An algebraic manifold is an algebraic variety that is also an m-dimensional manifold, and hence every sufficiently small local patch is isomorphic to km. Equivalently, the variety is smooth (free from singular points). When is the real numbers, R, algebraic manifolds are called Nash manifolds. Algebraic manifolds can be defined as the zero set of a finite collection of analytic algebraic functions. Projective algebraic manifolds are an equivalent definition for projective varieties. The Riemann sphere is one example. See also Variety (disambiguation) — listing also several mathematical meanings Function field of an algebraic variety Birational geometry Motive (algebraic geometry) Analytic variety Zariski–Riemann space Semi-algebraic set Fano variety Mnëv's universality theorem Notes References Sources Milne J., Jacobian Varieties, published as Chapter VII of Arithmetic geometry (Storrs, Conn., 1984), 167–212, Springer, New York, 1986. Algebraic geometry
Algebraic variety
[ "Mathematics" ]
5,385
[ "Fields of abstract algebra", "Algebraic geometry" ]
248,950
https://en.wikipedia.org/wiki/Presolar%20grains
Presolar grains are interstellar solid matter in the form of tiny solid grains that originated at a time before the Sun was formed. Presolar grains formed within outflowing and cooling gases from earlier presolar stars. The study of presolar grains is typically considered part of the field of cosmochemistry and meteoritics. The stellar nucleosynthesis that took place within each presolar star gives to each granule an isotopic composition unique to that parent star, which differs from the isotopic composition of the Solar System's matter as well as from the galactic average. These isotopic signatures often fingerprint very specific astrophysical nuclear processes that took place within the parent star or formation event and prove their presolar origin. Terminology Presolar grains are individual solid grains which condensed around distant stars or as part of novae, and potentially supernovae outflows, which were accreted in the early solar nebula and remain in relatively unaltered chondritic meteorites. As they were accreted before the formation of the Solar System, they must be presolar. Presolar grains also exist in the interstellar medium. Researchers occasionally use the term stardust to refer to presolar grains, particularly in science communication, though the term is sometimes used interchangeably in the scientific literature. History In the 1960s, the noble gases neon and xenon were discovered to have unusual isotopic ratios in primitive meteorites; their origin and the type of matter that contained them was a mystery. These discoveries were made by vaporizing a bulk sample of a meteorite within a mass spectrometer, in order to count the relative abundance of the isotopes of the very small amount of noble gases trapped as inclusions. During the 1970s similar experiments discovered more components of trapped xenon isotopes. Competing speculations about the origins of the xenon isotopic components were advanced, all within the existing paradigm that the variations were created by processes within an initially homogeneous solar gas cloud. A new theoretical framework for interpretation was advanced during the 1970s when Donald D. Clayton rejected the popular belief among meteoriticists that the Solar System began as a uniform hot gas. Instead he predicted that unusual but predictable isotopic compositions would be found within thermally condensed interstellar grains that had condensed during mass loss from stars of differing types. He argued that such grains exist throughout the interstellar medium. Clayton's first papers using that idea in 1975 pictured an interstellar medium populated with supernova grains that are rich in the radiogenic isotopes of Ne and Xe that had defined the extinct radioactivities. Clayton defined several types of presolar grains likely to be discovered: stardust from red giant stars, sunocons (acronym from SUperNOva CONdensates) from supernovae, nebcons from nebular condensation by accretion of cold cloud gaseous atoms and molecules, and novacons from nova condensation. Despite vigorous and continuous active development of this picture, Clayton's suggestions lay unsupported by others for a decade until such grains were discovered within meteorites. 
The first unambiguous consequence of the existence of presolar grains within meteorites came from the laboratory of Edward Anders in Chicago, who found, using traditional mass spectrometry, that the xenon isotopic abundances contained within an acid-insoluble carbonaceous residue (the material remaining after the meteorite bulk had been dissolved in acids) matched almost exactly the predictions for isotopic xenon in red-giant dust condensate. It then seemed certain that presolar grains were contained within Anders' acid-insoluble residue. Finding the actual presolar grains and documenting them was a much harder challenge that required locating the grains and showing that their isotopes matched those within the red-giant star. There followed a decade of intense experimental searching in the attempt to isolate individual grains of those xenon carriers. But what was really needed to discover presolar grains was a new type of mass spectrometer that could measure the smaller number of atoms in a single grain. Sputtering ion probes were pursued by several laboratories in the attempt to demonstrate such an instrument, but the contemporary ion probes needed to be technologically much better. In 1987, diamond grains and silicon carbide grains were found to exist abundantly in those same acid-insoluble residues and also to contain large concentrations of noble gases. Improvements in secondary ion mass spectrometry (SIMS) in turn allowed significant isotopic anomalies to be measured in the structural chemical elements of these grains. Improved SIMS experiments showed that the silicon isotopes within each SiC grain did not have solar isotopic ratios but rather those expected in certain red-giant stars. The discovery of presolar grains is therefore dated to 1987. Measuring the isotopic abundance ratios of the structural elements (e.g. silicon in an SiC grain) in microscopic presolar grains required two difficult technological and scientific steps: 1) locating micron-sized presolar grains within the meteorite's overwhelming mass; 2) developing SIMS technology to a sufficiently high level to measure isotopic abundance ratios within micron-sized grains. Ernst Zinner became an important leader in SIMS applications to microscopic grains. In January 2020, analysis of the Murchison meteorite concluded that out of 40 presolar silicon carbide grains examined, one had formed 3 ± 2 billion years before Earth's 4.6-billion-year-old Sun. This would make some of the grains the oldest solid material ever discovered on Earth. In meteorites Presolar grains are the solid matter that was contained in the interstellar gas before the Sun formed. The presolar component can be identified in the laboratory by its abnormal isotopic abundances, and consists of refractory minerals which survived the collapse of the solar nebula and the subsequent formation of planetesimals. To meteorite researchers, the term presolar grains has come to mean presolar grains found in meteorites, of which 99% are stardust. Many other types of cosmic dust have not been detected in meteorites. Presolar grains comprise only about 0.1 percent of the total mass of particulate matter found in meteorites. Such grains are isotopically distinct material found in the fine-grained matrix of meteorites, such as primitive chondrites. Their isotopic differences from the encasing meteorite require that they predate the Solar System.
The crystallinity of those clusters ranges from micrometer-sized silicon carbide crystals (up to about 10^13 atoms) down to nanometer-sized diamonds (about 1000 atoms) and unlayered graphene crystals of fewer than 100 atoms. The refractory grains achieved their mineral structures by condensing thermally within the slowly cooling, expanding gases of supernovae and of red giant stars. Characterization Presolar grains are investigated using scanning or transmission electron microscopes (SEM/TEM) and mass spectrometric methods (noble gas mass spectrometry, resonance ionization mass spectrometry (RIMS), secondary ion mass spectrometry (SIMS, NanoSIMS)). Presolar grains that consist of diamonds are only a few nanometers in size and are, therefore, called nanodiamonds. Because of their small size, nanodiamonds are hard to investigate and, although they are among the first presolar grains discovered, relatively little is known about them. The typical sizes of other presolar grains are in the range of micrometers. Presolar grains consisting of the following minerals have so far been identified: diamond (C), as nanometer-sized grains possibly formed by vapor deposition; graphite (C), as particles and onions, some with unlayered graphene cores; and silicon carbide (SiC), as submicrometer to micrometer sized grains. Presolar SiC occurs as single-polytype grains or polytype intergrowths. The atomic structures observed contain the two lowest-order polytypes: hexagonal 2H and cubic 3C (with varying degrees of stacking fault disorder) as well as 1-dimensionally disordered SiC grains. In comparison, SiC synthesized in terrestrial laboratories is known to form over a hundred polytypes. Also identified are titanium carbide (TiC) and other carbides within C and SiC grains; silicon nitride (Si3N4); corundum (Al2O3); spinel (MgAl2O4); hibonite (CaAl12O19); titanium oxide (TiO2); and silicate minerals (olivine and pyroxene). Information on stellar evolution The study of presolar grains provides information about nucleosynthesis and stellar evolution. Grains bearing the isotopic signature of "r-process" (rapid neutron capture) and alpha process (alpha capture) types of nucleosynthesis are useful in testing models of supernova explosions. 1% of presolar grains (supernova grains) have very large excesses of calcium-44, a stable isotope of calcium which normally composes only 2% of the calcium abundance. The calcium in some presolar grains is composed primarily of 44Ca, which is presumably the remains of the extinct radionuclide titanium-44, a titanium isotope formed in abundance in Type II supernovae such as SN 1987A by the rapid capture of four alpha particles by 28Si, after the process of silicon burning normally begins and prior to the supernova explosion. However, 44Ti has a half-life of only 59 years, and thus it is soon converted entirely to 44Ca. Excesses of the decay products of the longer-lived, but extinct, nuclides calcium-41 (half-life 99,400 years) and aluminium-26 (730,000 years) have also been detected in such grains. The rapid-process isotopic anomalies of these grains include excesses of nitrogen-15 and oxygen-18 relative to Solar System abundances, as well as excesses of the neutron-rich stable nuclides 42Ca and 49Ti. Other presolar grains provide isotopic and physical information on asymptotic giant branch stars (AGB stars), which have manufactured the largest portion of the refractory elements lighter than iron in the galaxy.
Because the elements in these particles were made at different times (and places) in the early Milky Way, the set of collected particles further provides insight into galactic evolution prior to the formation of the Solar System. In addition to providing information on nucleosynthesis of the grain's elements, solid grains provide information on the physico-chemical conditions under which they condensed, and on events subsequent to their formation. For example, consider red giants — which produce much of the carbon in our galaxy. Their atmospheres are cool enough for condensation processes to take place, resulting in the precipitation of solid particles (i.e., multiple atom agglomerations of elements such as carbon) in their atmosphere. This is unlike the atmosphere of the Sun, which is too hot to allow atoms to build up into more complex molecules. These solid fragments of matter are then injected into the interstellar medium by radiation pressure. Hence, particles bearing the signature of stellar nucleosynthesis provide information on (i) condensation processes in red giant atmospheres, (ii) radiation and heating processes in the interstellar medium, and (iii) the types of particles that carried the elements of which we are made, across the galaxy to the Solar System. See also Circumstellar dust Cosmic dust Cosmochemistry Extraterrestrial diamonds Extraterrestrial materials Glossary of meteoritics Interplanetary dust cloud List of meteorite minerals References External links Presolar grain research Presolar grains in meteorites Moving Stars and Shifting Sands of Presolar History Presolar Grains in Meteorites: An Overview and Some Implications Cosmic dust Interstellar media Meteorite mineralogy and petrology Meteorite minerals Nucleosynthesis
Presolar grains
[ "Physics", "Chemistry", "Astronomy" ]
2,488
[ "Nuclear fission", "Interstellar media", "Outer space", "Astrophysics", "Nucleosynthesis", "Nuclear physics", "Nuclear fusion", "Astronomical objects", "Cosmic dust" ]
248,988
https://en.wikipedia.org/wiki/Product%20rule
In calculus, the product rule (or Leibniz rule or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as (u·v)' = u'·v + u·v', or in Leibniz's notation as d(uv)/dx = (du/dx)·v + u·(dv/dx). The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts. Discovery Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using "infinitesimals" (a precursor to the modern differential). (However, J. M. Child, a translator of Leibniz's papers, argues that it is due to Isaac Barrow.) Here is Leibniz's argument: Let u and v be functions. Then d(uv) is the same thing as the difference between two successive uv's; let one of these be uv, and the other u+du times v+dv; then: d(uv) = (u + du)·(v + dv) − u·v = u·dv + v·du + du·dv. Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that d(uv) = u·dv + v·du, and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain d(uv)/dx = u·(dv/dx) + v·(du/dx), which can also be written in Lagrange's notation as (u·v)' = u'·v + u·v'. Examples Suppose we want to differentiate f(x) = x^2·sin(x). By using the product rule, one gets the derivative f'(x) = 2x·sin(x) + x^2·cos(x) (since the derivative of x^2 is 2x and the derivative of the sine function is the cosine function). One special case of the product rule is the constant multiple rule, which states: if c is a number and f(x) is a differentiable function, then c·f(x) is also differentiable, and its derivative is (c·f)'(x) = c·f'(x). This follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear. The rule for integration by parts is derived from the product rule, as is (a weak version of) the quotient rule. (It is a "weak" version in that it does not prove that the quotient is differentiable but only says what its derivative is if it is differentiable.) Proofs Limit definition of derivative Let and suppose that and are each differentiable at . We want to prove that is differentiable at and that its derivative, , is given by . To do this, (which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used. The fact that follows from the fact that differentiable functions are continuous. Linear approximations By definition, if are differentiable at , then we can write linear approximations: and where the error terms are small with respect to h: that is, also written . Then: The "error terms" consist of items such as and which are easily seen to have magnitude Dividing by and taking the limit gives the result. Quarter squares This proof uses the chain rule and the quarter square function with derivative . We have: and differentiating both sides gives: Multivariable chain rule The product rule can be considered a special case of the chain rule for several variables, applied to the multiplication function : Non-standard analysis Let u and v be continuous functions in x, and let dx, du and dv be infinitesimals within the framework of non-standard analysis, specifically the hyperreal numbers. Using st to denote the standard part function that associates to a finite hyperreal number the real infinitely close to it, this gives This was essentially Leibniz's proof exploiting the transcendental law of homogeneity (in place of the standard part above). Smooth infinitesimal analysis In the context of Lawvere's approach to infinitesimals, let be a nilsquare infinitesimal. Then and , so that since Dividing by then gives or .
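As a quick sanity check of the rule as stated above, the identity can be verified symbolically. The sketch below uses SymPy, and the particular functions u = x^2 and v = sin x are arbitrary illustrative choices rather than anything prescribed by the article.

```python
# A minimal symbolic check of the product rule, assuming SymPy is available.
# The functions u = x**2 and v = sin(x) are arbitrary illustrations.
import sympy as sp

x = sp.symbols('x')
u, v = x**2, sp.sin(x)

lhs = sp.diff(u * v, x)                      # (u*v)'
rhs = sp.diff(u, x) * v + u * sp.diff(v, x)  # u'*v + u*v'

print(sp.simplify(lhs - rhs) == 0)           # True: the two sides agree
print(lhs)                                   # x**2*cos(x) + 2*x*sin(x)
```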
Logarithmic differentiation Let . Taking the absolute value of each function and the natural log of both sides of the equation, Applying properties of the absolute value and logarithms, Taking the logarithmic derivative of both sides and then solving for : Solving for and substituting back for gives: Note: Taking the absolute value of the functions is necessary for the logarithmic differentiation of functions that may have negative values, as logarithms are only real-valued for positive arguments. This works because , which justifies taking the absolute value of the functions for logarithmic differentiation. Generalizations Product of more than two factors The product rule can be generalized to products of more than two factors. For example, for three factors we have For a collection of functions , we have The logarithmic derivative provides a simpler expression of the last form, as well as a direct proof that does not involve any recursion. The logarithmic derivative of a function , denoted here , is the derivative of the logarithm of the function. It follows that Using that the logarithm of a product is the sum of the logarithms of the factors, the sum rule for derivatives gives immediately The last above expression of the derivative of a product is obtained by multiplying both members of this equation by the product of the Higher derivatives It can also be generalized to the general Leibniz rule for the nth derivative of a product of two factors, by symbolically expanding according to the binomial theorem: Applied at a specific point x, the above formula gives: Furthermore, for the nth derivative of an arbitrary number of factors, one has a similar formula with multinomial coefficients: Higher partial derivatives For partial derivatives, we have where the index runs through all subsets of , and is the cardinality of . For example, when , Banach space Suppose X, Y, and Z are Banach spaces (which includes Euclidean space) and B : X × Y → Z is a continuous bilinear operator. Then B is differentiable, and its derivative at the point (x,y) in X × Y is the linear map D(x,y)B : X × Y → Z given by This result can be extended to more general topological vector spaces. In vector calculus The product rule extends to various product operations of vector functions on : For scalar multiplication: For dot product: For cross product of vector functions on : There are also analogues for other analogs of the derivative: if f and g are scalar fields then there is a product rule with the gradient: Such a rule will hold for any continuous bilinear product operation. Let B : X × Y → Z be a continuous bilinear map between vector spaces, and let f and g be differentiable functions into X and Y, respectively. The only properties of multiplication used in the proof using the limit definition of derivative is that multiplication is continuous and bilinear. So for any continuous bilinear operation, This is also a special case of the product rule for bilinear maps in Banach space. Derivations in abstract algebra and differential geometry In abstract algebra, the product rule is the defining property of a derivation. In this terminology, the product rule states that the derivative operator is a derivation on functions. 
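For reference, the generalizations discussed above (products of several factors and higher derivatives of a product) can be written out compactly. The following LaTeX gives the standard textbook identities: the three-factor rule, the k-factor rule obtained via the logarithmic derivative, and the general Leibniz rule for the nth derivative.

```latex
% Three factors:
(uvw)' = u'vw + uv'w + uvw'

% k factors, via the logarithmic derivative:
\frac{d}{dx}\prod_{i=1}^{k} f_i(x)
  = \Bigl(\prod_{i=1}^{k} f_i(x)\Bigr)\sum_{i=1}^{k}\frac{f_i'(x)}{f_i(x)}

% General Leibniz rule for the nth derivative of a product:
(fg)^{(n)} = \sum_{k=0}^{n}\binom{n}{k}\, f^{(k)}\, g^{(n-k)}
```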
In differential geometry, a tangent vector to a manifold M at a point p may be defined abstractly as an operator on real-valued functions which behaves like a directional derivative at p: that is, a linear functional v which is a derivation, meaning that it satisfies v(fg) = f(p)·v(g) + g(p)·v(f). Generalizing (and dualizing) the formulas of vector calculus to an n-dimensional manifold M, one may take differential forms of degrees k and l, denoted α and β, with the wedge or exterior product operation α ∧ β, as well as the exterior derivative d. Then one has the graded Leibniz rule: d(α ∧ β) = dα ∧ β + (−1)^k α ∧ dβ. Applications Among the applications of the product rule is a proof that d(x^n)/dx = n·x^(n−1) when n is a positive integer (this rule is true even if n is not positive or is not an integer, but the proof of that must rely on other methods). The proof is by mathematical induction on the exponent n. If n = 0 then x^n is constant and n·x^(n−1) = 0. The rule holds in that case because the derivative of a constant function is 0. If the rule holds for any particular exponent n, then for the next value, n + 1, we have d(x^(n+1))/dx = d(x·x^n)/dx = x·d(x^n)/dx + x^n·d(x)/dx = x·(n·x^(n−1)) + x^n·1 = (n + 1)·x^n. Therefore, if the proposition is true for n, it is true also for n + 1, and therefore for all natural n. See also References Articles containing proofs Differentiation rules Theorems in analysis Theorems in calculus
Product rule
[ "Mathematics" ]
1,744
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in calculus", "Calculus", "Mathematical problems", "Articles containing proofs", "Mathematical theorems" ]
249,254
https://en.wikipedia.org/wiki/Clique%20problem
In computer science, the clique problem is the computational problem of finding cliques (subsets of vertices, all adjacent to each other, also called complete subgraphs) in a graph. It has several different formulations depending on which cliques, and what information about the cliques, should be found. Common formulations of the clique problem include finding a maximum clique (a clique with the largest possible number of vertices), finding a maximum weight clique in a weighted graph, listing all maximal cliques (cliques that cannot be enlarged), and solving the decision problem of testing whether a graph contains a clique larger than a given size. The clique problem arises in the following real-world setting. Consider a social network, where the graph's vertices represent people, and the graph's edges represent mutual acquaintance. Then a clique represents a subset of people who all know each other, and algorithms for finding cliques can be used to discover these groups of mutual friends. Along with its applications in social networks, the clique problem also has many applications in bioinformatics, and computational chemistry. Most versions of the clique problem are hard. The clique decision problem is NP-complete (one of Karp's 21 NP-complete problems). The problem of finding the maximum clique is both fixed-parameter intractable and hard to approximate. And, listing all maximal cliques may require exponential time as there exist graphs with exponentially many maximal cliques. Therefore, much of the theory about the clique problem is devoted to identifying special types of graph that admit more efficient algorithms, or to establishing the computational difficulty of the general problem in various models of computation. To find a maximum clique, one can systematically inspect all subsets, but this sort of brute-force search is too time-consuming to be practical for networks comprising more than a few dozen vertices. Although no polynomial time algorithm is known for this problem, more efficient algorithms than the brute-force search are known. For instance, the Bron–Kerbosch algorithm can be used to list all maximal cliques in worst-case optimal time, and it is also possible to list them in polynomial time per clique. History and applications The study of complete subgraphs in mathematics predates the "clique" terminology. For instance, complete subgraphs make an early appearance in the mathematical literature in the graph-theoretic reformulation of Ramsey theory by . But the term "clique" and the problem of algorithmically listing cliques both come from the social sciences, where complete subgraphs are used to model social cliques, groups of people who all know each other. used graphs to model social networks, and adapted the social science terminology to graph theory. They were the first to call complete subgraphs "cliques". The first algorithm for solving the clique problem is that of , who were motivated by the sociological application. Social science researchers have also defined various other types of cliques and maximal cliques in social network, "cohesive subgroups" of people or actors in the network all of whom share one of several different kinds of connectivity relation. Many of these generalized notions of cliques can also be found by constructing an undirected graph whose edges represent related pairs of actors from the social network, and then applying an algorithm for the clique problem to this graph. 
Since the work of Harary and Ross, many others have devised algorithms for various versions of the clique problem. In the 1970s, researchers began studying these algorithms from the point of view of worst-case analysis. See, for instance, , an early work on the worst-case complexity of the maximum clique problem. Also in the 1970s, beginning with the work of and , researchers began using the theory of NP-completeness and related intractability results to provide a mathematical explanation for the perceived difficulty of the clique problem. In the 1990s, a breakthrough series of papers beginning with showed that (assuming P ≠ NP) it is not even possible to approximate the problem accurately and efficiently. Clique-finding algorithms have been used in chemistry, to find chemicals that match a target structure and to model molecular docking and the binding sites of chemical reactions. They can also be used to find similar structures within different molecules. In these applications, one forms a graph in which each vertex represents a matched pair of atoms, one from each of two molecules. Two vertices are connected by an edge if the matches that they represent are compatible with each other. Being compatible may mean, for instance, that the distances between the atoms within the two molecules are approximately equal, to within some given tolerance. A clique in this graph represents a set of matched pairs of atoms in which all the matches are compatible with each other. A special case of this method is the use of the modular product of graphs to reduce the problem of finding the maximum common induced subgraph of two graphs to the problem of finding a maximum clique in their product. In automatic test pattern generation, finding cliques can help to bound the size of a test set. In bioinformatics, clique-finding algorithms have been used to infer evolutionary trees, predict protein structures, and find closely interacting clusters of proteins. Listing the cliques in a dependency graph is an important step in the analysis of certain random processes. In mathematics, Keller's conjecture on face-to-face tiling of hypercubes was disproved by , who used a clique-finding algorithm on an associated graph to find a counterexample. Definitions An undirected graph is formed by a finite set of vertices and a set of unordered pairs of vertices, which are called edges. By convention, in algorithm analysis, the number of vertices in the graph is denoted by and the number of edges is denoted by . A clique in a graph is a complete subgraph of . That is, it is a subset of the vertices such that every two vertices in are the two endpoints of an edge in . A maximal clique is a clique to which no more vertices can be added. For each vertex that is not part of a maximal clique, there must be another vertex that is in the clique and non-adjacent to , preventing from being added to the clique. A maximum clique is a clique that includes the largest possible number of vertices. The clique number is the number of vertices in a maximum clique of . Several closely related clique-finding problems have been studied. In the maximum clique problem, the input is an undirected graph, and the output is a maximum clique in the graph. If there are multiple maximum cliques, one of them may be chosen arbitrarily. In the weighted maximum clique problem, the input is an undirected graph with weights on its vertices (or, less frequently, edges) and the output is a clique with maximum total weight. 
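Returning to the modular product of graphs mentioned in the applications above, a small sketch may help make that reduction concrete. The code below assumes both graphs are stored as adjacency dictionaries mapping each vertex to the set of its neighbours; the function name and representation are illustrative choices, not a fixed API. Cliques in the resulting product graph correspond to common induced subgraphs of the two inputs.

```python
# A sketch of the modular product: vertices are pairs (u, v), and two pairs are
# adjacent when their first coordinates differ, their second coordinates differ,
# and the coordinatewise adjacencies agree (both edges or both non-edges).
from itertools import combinations

def modular_product(G, H):
    vertices = [(u, v) for u in G for v in H]
    edges = set()
    for (u1, v1), (u2, v2) in combinations(vertices, 2):
        if u1 == u2 or v1 == v2:
            continue
        if (u2 in G[u1]) == (v2 in H[v1]):
            edges.add(((u1, v1), (u2, v2)))
    return vertices, edges

# Example: a triangle and a three-vertex path share a one-edge induced subgraph.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
verts, edges = modular_product(triangle, path)
```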
The maximum clique problem is the special case in which all weights are equal. As well as the problem of optimizing the sum of weights, other more complicated bicriterion optimization problems have also been studied. In the maximal clique listing problem, the input is an undirected graph, and the output is a list of all its maximal cliques. The maximum clique problem may be solved using as a subroutine an algorithm for the maximal clique listing problem, because the maximum clique must be included among all the maximal cliques. In the -clique problem, the input is an undirected graph and a number . The output is a clique with vertices, if one exists, or a special value indicating that there is no -clique otherwise. In some variations of this problem, the output should list all cliques of size . In the clique decision problem, the input is an undirected graph and a number , and the output is a Boolean value: true if the graph contains a -clique, and false otherwise. The first four of these problems are all important in practical applications. The clique decision problem is not of practical importance; it is formulated in this way in order to apply the theory of NP-completeness to clique-finding problems. The clique problem and the independent set problem are complementary: a clique in is an independent set in the complement graph of and vice versa. Therefore, many computational results may be applied equally well to either problem, and some research papers do not clearly distinguish between the two problems. However, the two problems have different properties when applied to restricted families of graphs. For instance, the clique problem may be solved in polynomial time for planar graphs while the independent set problem remains NP-hard on planar graphs. Algorithms Finding a single maximal clique A maximal clique, sometimes called inclusion-maximal, is a clique that is not included in a larger clique. Therefore, every clique is contained in a maximal clique. Maximal cliques can be very small. A graph may contain a non-maximal clique with many vertices and a separate clique of size 2 which is maximal. While a maximum (i.e., largest) clique is necessarily maximal, the converse does not hold. There are some types of graphs in which every maximal clique is maximum; these are the complements of the well-covered graphs, in which every maximal independent set is maximum. However, other graphs have maximal cliques that are not maximum. A single maximal clique can be found by a straightforward greedy algorithm. Starting with an arbitrary clique (for instance, any single vertex or even the empty set), grow the current clique one vertex at a time by looping through the graph's remaining vertices. For each vertex that this loop examines, add to the clique if it is adjacent to every vertex that is already in the clique, and discard otherwise. This algorithm runs in linear time. Because of the ease of finding maximal cliques, and their potential small size, more attention has been given to the much harder algorithmic problem of finding a maximum or otherwise large clique. However, some research in parallel algorithms has studied the problem of finding a maximal clique. In particular, the problem of finding the lexicographically first maximal clique (the one found by the algorithm above) has been shown to be complete for the class of polynomial-time functions. This result implies that the problem is unlikely to be solvable within the parallel complexity class NC. 
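A sketch of the greedy procedure just described is shown below, assuming the graph is stored as an adjacency dictionary; the vertex labels and the helper name are illustrative.

```python
# Grow a maximal clique by scanning the vertices once and keeping each vertex
# that is adjacent to everything already in the clique.
def greedy_maximal_clique(adj):
    clique = set()
    for v in adj:
        if all(v in adj[u] for u in clique):
            clique.add(v)
    return clique

# Example: a 4-cycle 0-1-2-3 with the chord 0-2; {0, 1, 2} is one maximal clique.
graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(greedy_maximal_clique(graph))   # {0, 1, 2}
```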
Cliques of fixed size One can test whether a graph contains a -vertex clique, and find any such clique that it contains, using a brute force algorithm. This algorithm examines each subgraph with vertices and checks to see whether it forms a clique. It takes time , as expressed using big O notation. This is because there are subgraphs to check, each of which has edges whose presence in needs to be checked. Thus, the problem may be solved in polynomial time whenever is a fixed constant. However, when does not have a fixed value, but instead may vary as part of the input to the problem, the time is exponential. The simplest nontrivial case of the clique-finding problem is finding a triangle in a graph, or equivalently determining whether the graph is triangle-free. In a graph with edges, there may be at most triangles (using big theta notation to indicate that this bound is tight). The worst case for this formula occurs when is itself a clique. Therefore, algorithms for listing all triangles must take at least time in the worst case (using big omega notation), and algorithms are known that match this time bound. For instance, describe an algorithm that sorts the vertices in order from highest degree to lowest and then iterates through each vertex in the sorted list, looking for triangles that include and do not include any previous vertex in the list. To do so the algorithm marks all neighbors of , searches through all edges incident to a neighbor of outputting a triangle for every edge that has two marked endpoints, and then removes the marks and deletes from the graph. As the authors show, the time for this algorithm is proportional to the arboricity of the graph (denoted ) multiplied by the number of edges, which is . Since the arboricity is at most , this algorithm runs in time . More generally, all -vertex cliques can be listed by a similar algorithm that takes time proportional to the number of edges multiplied by the arboricity to the power . For graphs of constant arboricity, such as planar graphs (or in general graphs from any non-trivial minor-closed graph family), this algorithm takes time, which is optimal since it is linear in the size of the input. If one desires only a single triangle, or an assurance that the graph is triangle-free, faster algorithms are possible. As observe, the graph contains a triangle if and only if its adjacency matrix and the square of the adjacency matrix contain nonzero entries in the same cell. Therefore, fast matrix multiplication techniques can be applied to find triangles in time . used fast matrix multiplication to improve the algorithm for finding triangles to . These algorithms based on fast matrix multiplication have also been extended to problems of finding -cliques for larger values of . Listing all maximal cliques By a result of , every -vertex graph has at most maximal cliques. They can be listed by the Bron–Kerbosch algorithm, a recursive backtracking procedure of . The main recursive subroutine of this procedure has three arguments: a partially constructed (non-maximal) clique, a set of candidate vertices that could be added to the clique, and another set of vertices that should not be added (because doing so would lead to a clique that has already been found). The algorithm tries adding the candidate vertices one by one to the partial clique, making a recursive call for each one. After trying each of these vertices, it moves it to the set of vertices that should not be added again. 
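The matrix-based triangle test mentioned above (a graph contains a triangle exactly when some cell is nonzero in both the adjacency matrix and its square) can be sketched in a few lines with NumPy; the example matrix is an arbitrary illustration.

```python
# Triangle detection via the adjacency matrix: (A @ A)[i, j] counts two-step walks
# from i to j, so an edge i-j together with such a walk closes a triangle.
import numpy as np

def has_triangle(adjacency_matrix):
    A = np.asarray(adjacency_matrix)
    A2 = A @ A
    return bool(np.any((A > 0) & (A2 > 0)))

# Triangle on vertices 0, 1, 2, plus a pendant vertex 3 attached to vertex 2.
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
print(has_triangle(A))   # True
```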
Variants of this algorithm can be shown to have worst-case running time , matching the number of cliques that might need to be listed. Therefore, this provides a worst-case-optimal solution to the problem of listing all maximal cliques. Further, the Bron–Kerbosch algorithm has been widely reported as being faster in practice than its alternatives. However, when the number of cliques is significantly smaller than its worst case, other algorithms might be preferable. As showed, it is also possible to list all maximal cliques in a graph in an amount of time that is polynomial per generated clique. An algorithm such as theirs in which the running time depends on the output size is known as an output-sensitive algorithm. Their algorithm is based on the following two observations, relating the maximal cliques of the given graph to the maximal cliques of a graph formed by removing an arbitrary vertex from : For every maximal clique of , either continues to form a maximal clique in , or forms a maximal clique in . Therefore, has at least as many maximal cliques as does. Each maximal clique in that does not contain is a maximal clique in , and each maximal clique in that does contain can be formed from a maximal clique in by adding and removing the non-neighbors of from . Using these observations they can generate all maximal cliques in by a recursive algorithm that chooses a vertex arbitrarily and then, for each maximal clique in , outputs both and the clique formed by adding to and removing the non-neighbors of . However, some cliques of may be generated in this way from more than one parent clique of , so they eliminate duplicates by outputting a clique in only when its parent in is lexicographically maximum among all possible parent cliques. On the basis of this principle, they show that all maximal cliques in may be generated in time per clique, where is the number of edges in and is the number of vertices. improve this to per clique, where is the arboricity of the given graph. provide an alternative output-sensitive algorithm based on fast matrix multiplication. show that it is even possible to list all maximal cliques in lexicographic order with polynomial delay per clique. However, the choice of ordering is important for the efficiency of this algorithm: for the reverse of this order, there is no polynomial-delay algorithm unless P = NP. On the basis of this result, it is possible to list all maximal cliques in polynomial time, for families of graphs in which the number of cliques is polynomially bounded. These families include chordal graphs, complete graphs, triangle-free graphs, interval graphs, graphs of bounded boxicity, and planar graphs. In particular, the planar graphs have cliques, of at most constant size, that can be listed in linear time. The same is true for any family of graphs that is both sparse (having a number of edges at most a constant times the number of vertices) and closed under the operation of taking subgraphs. Finding maximum cliques in arbitrary graphs It is possible to find the maximum clique, or the clique number, of an arbitrary n-vertex graph in time by using one of the algorithms described above to list all maximal cliques in the graph and returning the largest one. However, for this variant of the clique problem better worst-case time bounds are possible. The algorithm of solves this problem in time . 
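To make the recursion described above concrete, here is a minimal sketch of the basic Bron–Kerbosch procedure, without the pivoting or vertex-ordering refinements that the worst-case-optimal variants use; the adjacency-dictionary representation is an assumption of the sketch.

```python
# Basic Bron–Kerbosch: R is the clique under construction, P the candidate
# vertices that could still extend it, X the vertices already excluded.
def bron_kerbosch(R, P, X, adj, out):
    if not P and not X:
        out.append(set(R))            # R cannot be extended: report a maximal clique
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}                   # v has been tried; exclude it from later branches
        X = X | {v}

graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
cliques = []
bron_kerbosch(set(), set(graph), set(), graph, cliques)
print(cliques)   # e.g. [{0, 1, 2}, {0, 2, 3}]
```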
It is a recursive backtracking scheme similar to that of the Bron–Kerbosch algorithm, but is able to eliminate some recursive calls when it can be shown that the cliques found within the call will be suboptimal. improved the time to , and improved it to time, at the expense of greater space usage. Robson's algorithm combines a similar backtracking scheme (with a more complicated case analysis) and a dynamic programming technique in which the optimal solution is precomputed for all small connected subgraphs of the complement graph. These partial solutions are used to shortcut the backtracking recursion. The fastest algorithm known today is a refined version of this method by which runs in time . There has also been extensive research on heuristic algorithms for solving maximum clique problems without worst-case runtime guarantees, based on methods including branch and bound, local search, greedy algorithms, and constraint programming. Non-standard computing methodologies that have been suggested for finding cliques include DNA computing and adiabatic quantum computation. The maximum clique problem was the subject of an implementation challenge sponsored by DIMACS in 1992–1993, and a collection of graphs used as benchmarks for the challenge, which is publicly available. Special classes of graphs Planar graphs, and other families of sparse graphs, have been discussed above: they have linearly many maximal cliques, of bounded size, that can be listed in linear time. In particular, for planar graphs, any clique can have at most four vertices, by Kuratowski's theorem. Perfect graphs are defined by the properties that their clique number equals their chromatic number, and that this equality holds also in each of their induced subgraphs. For perfect graphs, it is possible to find a maximum clique in polynomial time, using an algorithm based on semidefinite programming. However, this method is complex and non-combinatorial, and specialized clique-finding algorithms have been developed for many subclasses of perfect graphs. In the complement graphs of bipartite graphs, Kőnig's theorem allows the maximum clique problem to be solved using techniques for matching. In another class of perfect graphs, the permutation graphs, a maximum clique is a longest decreasing subsequence of the permutation defining the graph and can be found using known algorithms for the longest decreasing subsequence problem. Conversely, every instance of the longest decreasing subsequence problem can be described equivalently as a problem of finding a maximum clique in a permutation graph. provide an alternative quadratic-time algorithm for maximum cliques in comparability graphs, a broader class of perfect graphs that includes the permutation graphs as a special case. In chordal graphs, the maximal cliques can be found by listing the vertices in an elimination ordering, and checking the clique neighborhoods of each vertex in this ordering. In some cases, these algorithms can be extended to other, non-perfect, classes of graphs as well. For instance, in a circle graph, the neighborhood of each vertex is a permutation graph, so a maximum clique in a circle graph can be found by applying the permutation graph algorithm to each neighborhood. Similarly, in a unit disk graph (with a known geometric representation), there is a polynomial time algorithm for maximum cliques based on applying the algorithm for complements of bipartite graphs to shared neighborhoods of pairs of vertices. 
The algorithmic problem of finding a maximum clique in a random graph drawn from the Erdős–Rényi model (in which each edge appears with probability , independently from the other edges) was suggested by . Because the maximum clique in a random graph has logarithmic size with high probability, it can be found by a brute force search in expected time . This is a quasi-polynomial time bound. Although the clique number of such graphs is usually very close to , simple greedy algorithms as well as more sophisticated randomized approximation techniques only find cliques with size , half as big. The number of maximal cliques in such graphs is with high probability exponential in , which prevents methods that list all maximal cliques from running in polynomial time. Because of the difficulty of this problem, several authors have investigated the planted clique problem, the clique problem on random graphs that have been augmented by adding large cliques. While spectral methods and semidefinite programming can detect hidden cliques of size , no polynomial-time algorithms are currently known to detect those of size (expressed using little-o notation). Approximation algorithms Several authors have considered approximation algorithms that attempt to find a clique or independent set that, although not maximum, has size as close to the maximum as can be found in polynomial time. Although much of this work has focused on independent sets in sparse graphs, a case that does not make sense for the complementary clique problem, there has also been work on approximation algorithms that do not use such sparsity assumptions. describes a polynomial time algorithm that finds a clique of size in any graph that has clique number for any constant . By using this algorithm when the clique number of a given input graph is between and , switching to a different algorithm of for graphs with higher clique numbers, and choosing a two-vertex clique if both algorithms fail to find anything, Feige provides an approximation algorithm that finds a clique with a number of vertices within a factor of of the maximum. Although the approximation ratio of this algorithm is weak, it is the best known to date. The results on hardness of approximation described below suggest that there can be no approximation algorithm with an approximation ratio significantly less than linear. Lower bounds NP-completeness The clique decision problem is NP-complete. It was one of Richard Karp's original 21 problems shown NP-complete in his 1972 paper "Reducibility Among Combinatorial Problems". This problem was also mentioned in Stephen Cook's paper introducing the theory of NP-complete problems. Because of the hardness of the decision problem, the problem of finding a maximum clique is also NP-hard. If one could solve it, one could also solve the decision problem, by comparing the size of the maximum clique to the size parameter given as input in the decision problem. Karp's NP-completeness proof is a many-one reduction from the Boolean satisfiability problem. It describes how to translate Boolean formulas in conjunctive normal form (CNF) into equivalent instances of the maximum clique problem. Satisfiability, in turn, was proved NP-complete in the Cook–Levin theorem. From a given CNF formula, Karp forms a graph that has a vertex for every pair , where is a variable or its negation and is a clause in the formula that contains . Two of these vertices are connected by an edge if they represent compatible variable assignments for different clauses. 
That is, there is an edge from to whenever and and are not each other's negations. If denotes the number of clauses in the CNF formula, then the -vertex cliques in this graph represent consistent ways of assigning truth values to some of its variables in order to satisfy the formula. Therefore, the formula is satisfiable if and only if a -vertex clique exists. Some NP-complete problems (such as the travelling salesman problem in planar graphs) may be solved in time that is exponential in a sublinear function of the input size parameter , significantly faster than a brute-force search. However, it is unlikely that such a subexponential time bound is possible for the clique problem in arbitrary graphs, as it would imply similarly subexponential bounds for many other standard NP-complete problems. Circuit complexity The computational difficulty of the clique problem has led it to be used to prove several lower bounds in circuit complexity. The existence of a clique of a given size is a monotone graph property, meaning that, if a clique exists in a given graph, it will exist in any supergraph. Because this property is monotone, there must exist a monotone circuit, using only and gates and or gates, to solve the clique decision problem for a given fixed clique size. However, the size of these circuits can be proven to be a super-polynomial function of the number of vertices and the clique size, exponential in the cube root of the number of vertices. Even if a small number of NOT gates are allowed, the complexity remains superpolynomial. Additionally, the depth of a monotone circuit for the clique problem using gates of bounded fan-in must be at least a polynomial in the clique size. Decision tree complexity The (deterministic) decision tree complexity of determining a graph property is the number of questions of the form "Is there an edge between vertex and vertex ?" that have to be answered in the worst case to determine whether a graph has a particular property. That is, it is the minimum height of a Boolean decision tree for the problem. There are possible questions to be asked. Therefore, any graph property can be determined with at most questions. It is also possible to define random and quantum decision tree complexity of a property, the expected number of questions (for a worst case input) that a randomized or quantum algorithm needs to have answered in order to correctly determine whether the given graph has the property. Because the property of containing a clique is monotone, it is covered by the Aanderaa–Karp–Rosenberg conjecture, which states that the deterministic decision tree complexity of determining any non-trivial monotone graph property is exactly . For arbitrary monotone graph properties, this conjecture remains unproven. However, for deterministic decision trees, and for any in the range , the property of containing a -clique was shown to have decision tree complexity exactly by . Deterministic decision trees also require exponential size to detect cliques, or large polynomial size to detect cliques of bounded size. The Aanderaa–Karp–Rosenberg conjecture also states that the randomized decision tree complexity of non-trivial monotone functions is . The conjecture again remains unproven, but has been resolved for the property of containing a clique for . This property is known to have randomized decision tree complexity . For quantum decision trees, the best known lower bound is , but no matching algorithm is known for the case of . 
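Returning to Karp's reduction described earlier in this section, the construction can be sketched directly: given a CNF formula, build one vertex per (literal, clause) pair and connect compatible pairs from different clauses, so that the formula is satisfiable exactly when the graph has a clique as large as the number of clauses. The encoding of literals as signed integers is an illustrative convention, not part of the original formulation.

```python
# Karp's reduction from CNF satisfiability to clique. A clause is a list of
# integer literals (negative = negated variable); the formula is a list of clauses.
def cnf_to_clique_instance(clauses):
    vertices = [(lit, i) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for a in range(len(vertices)):
        for b in range(a + 1, len(vertices)):
            (lit_a, ca), (lit_b, cb) = vertices[a], vertices[b]
            if ca != cb and lit_a != -lit_b:   # different clauses, non-contradictory literals
                edges.add((a, b))
    return vertices, edges

# (x1 or x2) and (not x1 or x2) is satisfiable, so a 2-clique exists in the graph.
verts, edges = cnf_to_clique_instance([[1, 2], [-1, 2]])
```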
Fixed-parameter intractability Parameterized complexity is the complexity-theoretic study of problems that are naturally equipped with a small integer parameter and for which the problem becomes more difficult as increases, such as finding -cliques in graphs. A problem is said to be fixed-parameter tractable if there is an algorithm for solving it on inputs of size , and a function , such that the algorithm runs in time . That is, it is fixed-parameter tractable if it can be solved in polynomial time for any fixed value of and moreover if the exponent of the polynomial does not depend on . For finding -vertex cliques, the brute force search algorithm has running time . Because the exponent of depends on , this algorithm is not fixed-parameter tractable. Although it can be improved by fast matrix multiplication, the running time still has an exponent that is linear in . Thus, although the running time of known algorithms for the clique problem is polynomial for any fixed , these algorithms do not suffice for fixed-parameter tractability. defined a hierarchy of parametrized problems, the W hierarchy, that they conjectured did not have fixed-parameter tractable algorithms. They proved that independent set (or, equivalently, clique) is hard for the first level of this hierarchy, W[1]. Thus, according to their conjecture, clique has no fixed-parameter tractable algorithm. Moreover, this result provides the basis for proofs of W[1]-hardness of many other problems, and thus serves as an analogue of the Cook–Levin theorem for parameterized complexity. showed that finding -vertex cliques cannot be done in time unless the exponential time hypothesis fails. Again, this provides evidence that no fixed-parameter tractable algorithm is possible. Although the problems of listing maximal cliques or finding maximum cliques are unlikely to be fixed-parameter tractable with the parameter , they may be fixed-parameter tractable for other parameters of instance complexity. For instance, both problems are known to be fixed-parameter tractable when parametrized by the degeneracy of the input graph. Hardness of approximation Weak results hinting that the clique problem might be hard to approximate have been known for a long time. observed that, because the clique number takes on small integer values and is NP-hard to compute, it cannot have a fully polynomial-time approximation scheme, unless P = NP. If too accurate an approximation were available, rounding its value to an integer would give the exact clique number. However, little more was known until the early 1990s, when several authors began to make connections between the approximation of maximum cliques and probabilistically checkable proofs. They used these connections to prove hardness of approximation results for the maximum clique problem. After many improvements to these results it is now known that, for every real number , there can be no polynomial time algorithm that approximates the maximum clique to within a factor better than , unless P = NP. The rough idea of these inapproximability results is to form a graph that represents a probabilistically checkable proof system for an NP-complete problem such as the Boolean satisfiability problem. In a probabilistically checkable proof system, a proof is represented as a sequence of bits. An instance of the satisfiability problem should have a valid proof if and only if it is satisfiable. 
The proof is checked by an algorithm that, after a polynomial-time computation on the input to the satisfiability problem, chooses to examine a small number of randomly chosen positions of the proof string. Depending on what values are found at that sample of bits, the checker will either accept or reject the proof, without looking at the rest of the bits. False negatives are not allowed: a valid proof must always be accepted. However, an invalid proof may sometimes mistakenly be accepted. For every invalid proof, the probability that the checker will accept it must be low. To transform a probabilistically checkable proof system of this type into a clique problem, one forms a graph with a vertex for each possible accepting run of the proof checker. That is, a vertex is defined by one of the possible random choices of sets of positions to examine, and by bit values for those positions that would cause the checker to accept the proof. It can be represented by a partial word with a 0 or 1 at each examined position and a wildcard character at each remaining position. Two vertices are adjacent, in this graph, if the corresponding two accepting runs see the same bit values at every position they both examine. Each (valid or invalid) proof string corresponds to a clique, the set of accepting runs that see that proof string, and all maximal cliques arise in this way. One of these cliques is large if and only if it corresponds to a proof string that many proof checkers accept. If the original satisfiability instance is satisfiable, it will have a valid proof string, one that is accepted by all runs of the checker, and this string will correspond to a large clique in the graph. However, if the original instance is not satisfiable, then all proof strings are invalid, each proof string has only a small number of checker runs that mistakenly accept it, and all cliques are small. Therefore, if one could distinguish in polynomial time between graphs that have large cliques and graphs in which all cliques are small, or if one could accurately approximate the clique problem, then applying this approximation to the graphs generated from satisfiability instances would allow satisfiable instances to be distinguished from unsatisfiable instances. However, this is not possible unless P = NP. Notes References NP-complete problems Computational problems in graph theory
Clique problem
[ "Mathematics" ]
7,055
[ "Computational problems in graph theory", "Computational mathematics", "Graph theory", "Computational problems", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
249,388
https://en.wikipedia.org/wiki/Open%20spectrum
Open spectrum (also known as free spectrum) is a movement to get the Federal Communications Commission to provide more unlicensed radio-frequency spectrum that is available for use by all. Proponents of the "commons model" of open spectrum advocate a future in which all of the spectrum is shared and in which people communicate using Internet protocols and smart devices that automatically find the most effective energy level, frequency, and mechanism. Previous government-imposed limits on who can and cannot operate stations would be removed, and everyone would be given an equal opportunity to use the airwaves for their own radio station or television station, or even to broadcast their own website. A notable advocate for Open Spectrum is Lawrence Lessig. National governments currently allocate bands of spectrum (sometimes based on guidelines from the ITU) for use by anyone so long as they respect certain technical limits, most notably a limit on total transmission power. Unlicensed spectrum is decentralized: there are no license payments or central control for users. However, sharing spectrum between unlicensed equipment requires that mitigation techniques (e.g. power limitation, duty cycle, dynamic frequency selection) are imposed to ensure that these devices operate without interference. Traditional users of unlicensed spectrum include cordless telephones and baby monitors. A collection of newer technologies takes advantage of unlicensed spectrum, including Wi-Fi, Ultra Wideband, spread spectrum, software-defined radio, cognitive radio, and mesh networks. Radio astronomy needs Astronomers use many radio telescopes to observe objects such as pulsars in our own Galaxy and distant radio galaxies out to about half the radius of the observable Universe. The use of radio frequencies for communication creates pollution from the point of view of astronomers, at best adding noise and at worst totally blinding the astronomical community to certain types of observations of very faint objects. As more and more frequencies are used for communication, astronomical observations become more and more difficult. Negotiations to defend the parts of the spectrum most useful for observing the Universe are mostly carried out by the international astronomical community, as a grassroots community effort, coordinated in the Scientific Committee on Frequency Allocations for Radio Astronomy and Space Science. See also Media consolidation Media democracy Pirate Radio Spectrum commons theory References External links Open Spectrum FAQ Open Spectrum UK David P. Reed's Open Spectrum resource page Committee on Radio Astronomy Frequencies - includes list of frequencies useful for looking at the Universe Scientific Committee on Frequency Allocations for Radio Astronomy and Space Science (IUCAF) Wireless networking Radio resource management Radio spectrum
Open spectrum
[ "Physics", "Technology", "Engineering" ]
525
[ "Radio spectrum", "Spectrum (physical sciences)", "Wireless networking", "Computer networks engineering", "Electromagnetic spectrum" ]
249,459
https://en.wikipedia.org/wiki/Cambridge%20Ring%20%28computer%20network%29
The Cambridge Ring was an experimental local area network architecture developed at the Computer Laboratory, University of Cambridge starting in 1974 and continuing into the 1980s. It was a ring network with a theoretical limit of 255 nodes (though such a large number would have badly affected performance), around which cycled a fixed number of packets. Free packets would be "loaded" with data by a sending machine, marked as received by the destination machine, and "unloaded" on return to the sender; thus in principle, there could be as many simultaneous senders as packets. The network ran over twin twisted-pair cabling (plus a fibre-optic section) at a raw data rate of 10 megabits/sec. There are strong similarities between the Cambridge Ring and an earlier ring network developed at Bell Labs based on a design by John R. Pierce. That network used T1 lines at bit rate of 1.544 MHz and accommodating 522 bit messages (data plus address). People associated with the project include Andy Hopper, David Wheeler, Maurice Wilkes, and Roger Needham. A 1980 study by Peter Cowley reported several commercial implementors of elements of the network, ranging from Ferranti (producing gate arrays), Inmos (a semiconductor manufacturer), Linotype Paul, Logica VTS, MDB Systems, and Toltec Data (a design company who manufactured interface boards). In 2002, the Computer Laboratory launched a graduate society called the Cambridge Computer Lab Ring named after the Cambridge Ring. See also Cambridge Distributed Computing System Internet in the United Kingdom § History JANET NPL network Packet switching Token Ring University of London Computer Centre References External links Cambridge Ring Hardware Cambridge Fast Ring Cambridge Backbone Ring Hardware Cambridge Computer Lab Ring 1974 introductions Experimental computer networks History of computing in the United Kingdom Local area networks Network topology University of Cambridge Computer Laboratory
Cambridge Ring (computer network)
[ "Mathematics", "Technology" ]
374
[ "Network topology", "Topology", "History of computing", "History of computing in the United Kingdom" ]
249,527
https://en.wikipedia.org/wiki/List%20of%20decorative%20stones
This is a geographical list of natural stone used for decorative purposes in construction and monumental sculpture produced in various countries. The dimension-stone industry classifies stone based on appearance and hardness as either "granite", "marble" or "slate". The granite of the dimension-stone industry along with truly granitic rock also includes gneiss, gabbro, anorthosite and even some sedimentary rocks. Natural stone is used as architectural stone (construction, flooring, cladding, counter tops, curbing, etc.) and as raw block and monument stone for the funerary trade. Natural stone is also used in custom stone engraving. The engraved stone can be either decorative or functional. Natural memorial stones are used as natural burial markers. Africa Marble Asia India See Stones of India Pakistan Pakistan has more than 300 kinds of marble and natural stone. Iran Iran has more than 250 kinds of marble, travertine, onyx, granite, and limestone. Europe France Limestone Caen stone Pierre de Comblanchien, see also Côte d'Or (escarpment) Pierre d'Euville Pierre de Jaumont Tuffeau stone Greece Marble Verde Antico Italy Carrara marble Peperino Pietra serena Portoro Buono Travertine Belgium, Norway and Poland Poland Sandstone Radków Szczytna Czaple (Heron) Skała (Rock) Limestone Dębnik Kielce Granite Strzegom Strzelin Syenite, Granodiorite Kośmin Przedborowa Serpentinite Nasławice United Kingdom Chalk Clunch Flint Granite Aberdeen granite Limestone Ancaster stone Barnack rag Beer stone Clipsham stone Corallian limestone Cotswold stone (Oolitic limestone) Forest marble Frosterley Marble Ketton Stone Magnesian Limestone Portland stone Portland Admiralty Roach Portland Bowers Basebed Portland Bowers Lynham Whitbed Portland Bowers Saunders Whitbed Portland Grove Whitbed Portland Hard Blue Portland Independent Basebed Portland Independent Bottom Whitbed Portland Independent Top Whitbed Portland New Independent Whitbed Purbeck marble Sandstone Banktop Bearl Blaxter Catcastle Corsehill Corncockle Dunhouse Blue Dunhouse Buff Hall Dale Haslingden Flag Heavitree stone Locharbriggs Ravensworth Yorkstone Slate Welsh Slate Skiddaw Slate Southeast Europe Turkey Elazig Cherry Marble Burdur Beige Marble Emprador Marmara marble Mugla White Noche Travertine Middle East Israel Limestone/Dolomite Jerusalem stone North America Canada Anorthosite Charnockite Diabase Diorite Granite Gabbro Gneiss Limestone Marble Monzonite Sandstone Slate Steatite (Soapstone) Stromatolites Syenite Mesoamerica Tezontle — a volcanic rock used in Pre-Columbian Mesoamerican architecture. Archaeological sites with tezontle structures are located in present day México and northern Central America. United States Brownstone, a type of Triassic sandstone Granite, extensively quarried in Vermont, Georgia and New Hampshire Black granite – a common trade name for gabbro used as architectural material Austin limestone – a marble-like stone widely used as a building stone for interior and exterior wall cladding and interior and exterior paving Oceania New Zealand Oamaru stone — a creamy limestone mined in North Otago used for architecture and sculpture Port Chalmers bluestone (also called Timaru bluestone) — a dark basalt mined in Otago and Canterbury used for architecture South America See also Building stone List of types of limestone List of types of marble List of sandstones NIST Stone test wall — U.S. National Institute of Standards and Technology—NIST. 
List of rock types List of minerals Quarrying Rock (geology) Stonemasonry References Decorative stones
List of decorative stones
[ "Physics", "Engineering" ]
778
[ "Natural materials", "Architecture", "Construction", "Stonemasonry", "Materials", "Design-related lists", "Design", "Matter", "Architecture lists" ]
3,485,752
https://en.wikipedia.org/wiki/Nanoart
NanoArt is a novel art discipline related to science and technology. It depicts natural or synthetic structures with features sized at the nanometer scale, observed with electron or scanning probe microscopy techniques in scientific laboratories. The recorded two- or three-dimensional images and movies are processed for artistic appeal and presented to a general audience. One of the aims of NanoArt is to familiarize people with nanoscale objects and with advances in their synthesis and manipulation. NanoArt has been presented at traditional art exhibitions around the world, and it has also been promoted through exhibitions and online competitions launched in the 2000s, such as the “NANO” 2003 show at the Los Angeles County Museum of Art; “Nanomandala”, the 2004 and 2005 installations in New York and Rome by Victoria Vesna and James Gimzewski; and the regular "Science as Art" section launched at the 2006 Materials Research Society Meeting. A characteristic example of nanoart is A Boy and His Atom, a one-minute stop-motion animated film created in 2012 by IBM Research from 242 images, each spanning 45×25 nm, recorded with a scanning tunneling microscope. The movie tells the story of a boy and a wayward atom who meet and become friends. The film was accepted into the Tribeca Online Film Festival and shown at the New York Tech Meet-up and the World Science Festival. Earlier, in 2007, the book Teeny Ted from Turnip Town was created at Simon Fraser University in Canada using a gallium-ion beam with a diameter of ~7 nanometers. The book contains 30 silicon-based pages, each measuring 0.07×0.10 mm; it was published in 100 copies and has an ISBN. In 2015, Jonty Hurwitz pioneered a new technique for creating nanosculpture using multiphoton lithography and photogrammetry. His work Trust was prepared in collaboration with the Karlsruhe Institute of Technology and set a Guinness World Record as the "Smallest Sculpture of a Human Form". References External links Is NanoArt the New Photography?, The New York Times Extraordinary Beauty of the NanoArt World: Photos, Discovery News Visual arts media Nanotechnology Science in art
Nanoart
[ "Materials_science", "Engineering" ]
437
[ "Nanotechnology", "Materials science" ]
3,487,414
https://en.wikipedia.org/wiki/Fillet%20%28mechanics%29
In mechanical engineering, a fillet (pronounced like "fill it") is a rounding of an interior or exterior corner of a part. A corner that is cut at an angle or bevelled rather than rounded is called a "chamfer". Fillet geometry on an interior corner is concave, whereas a fillet on an exterior corner is convex (in the latter case, fillets are typically referred to as rounds). Fillets commonly appear on welded, soldered, or brazed joints. Depending on the geometric modelling kernel, different CAD software products may provide different fillet functionality. Usually fillets can be quickly designed onto parts in 3D solid modeling software by picking the edges of interest and invoking the function. Smooth edges connecting two simple flat features are generally simple for a computer to create and fast for a human user to specify. Once these features are included in the CAD design of a part, they are often manufactured automatically using computer-numerical control. Applications Stress concentration is a problem for load-bearing mechanical parts; it is reduced by employing fillets on points and lines of expected high stress. The fillets distribute the stress over a broader area and effectively make the parts more durable and capable of bearing larger loads. In aerodynamics, fillets are employed to reduce interference drag where aircraft components such as wings, struts, and other surfaces meet one another. In manufacturing, concave corners are sometimes filleted to allow the use of round-tipped end mills to cut out an area of a material. This has a cycle-time benefit if the round mill is simultaneously being used to mill complex curved surfaces. Radii are used to eliminate sharp edges that can be easily damaged or that can cause injury when the part is handled. Terminology Different design packages use different names for the same operations. Autodesk Inventor, AutoCAD, Rhino3D, CATIA, FreeCAD, Solidworks and Vectorworks refer to both concave and convex rounded edges as fillets, while referring to angled cuts of edges and concave corners as chamfers. CADKEY and Unigraphics refer to concave and convex rounded edges as blends. PTC Creo Elements/Pro (formerly Pro/Engineer) refers to rounded edges simply as rounds. Other 3D solid modeling software programs outside of engineering, such as gameSpace, have similar functions. See also Welding Notes External links Welding fillets Mechanical engineering
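The corner-rounding described above can be illustrated with a small, self-contained geometry sketch. The function below is a hypothetical helper, not part of any CAD package named in the article; it computes the arc centre and tangent points of a fillet of radius r placed at the corner where two straight edges meet, using the standard relations t = r / tan(θ/2) and d = r / sin(θ/2).

```python
import math

def fillet_corner(p, u, v, r):
    """Fillet (round) the corner at point p between two edges.

    p    -- corner point (x, y)
    u, v -- unit vectors pointing from p along each edge
    r    -- fillet radius
    Returns (arc centre, tangent point on u-edge, tangent point on v-edge).
    """
    # Angle between the two edges at the corner.
    cos_t = max(-1.0, min(1.0, u[0] * v[0] + u[1] * v[1]))
    theta = math.acos(cos_t)

    # Distance from the corner to each tangent point, and to the arc centre.
    t = r / math.tan(theta / 2.0)   # along each edge
    d = r / math.sin(theta / 2.0)   # along the angle bisector

    # Unit bisector direction.
    bx, by = u[0] + v[0], u[1] + v[1]
    norm = math.hypot(bx, by)
    bx, by = bx / norm, by / norm

    center = (p[0] + d * bx, p[1] + d * by)
    t_u = (p[0] + t * u[0], p[1] + t * u[1])
    t_v = (p[0] + t * v[0], p[1] + t * v[1])
    return center, t_u, t_v

# Example: a 90-degree corner at the origin with fillet radius 5.
print(fillet_corner((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), 5.0))
# -> centre (5, 5), tangent points (5, 0) and (0, 5), as expected for r = 5.
```

The same relations are what a modelling kernel evaluates, edge pair by edge pair, when a fillet feature is applied to a solid model.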
Fillet (mechanics)
[ "Physics", "Engineering" ]
502
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
3,488,128
https://en.wikipedia.org/wiki/Reynolds%20transport%20theorem
In differential calculus, the Reynolds transport theorem (also known as the Leibniz–Reynolds transport theorem), or simply the Reynolds theorem, named after Osborne Reynolds (1842–1912), is a three-dimensional generalization of the Leibniz integral rule. It is used to recast time derivatives of integrated quantities and is useful in formulating the basic equations of continuum mechanics. Consider integrating over the time-dependent region that has boundary , then taking the derivative with respect to time: If we wish to move the derivative into the integral, there are two issues: the time dependence of , and the introduction of and removal of space from due to its dynamic boundary. Reynolds transport theorem provides the necessary framework. General form Reynolds transport theorem can be expressed as follows: in which is the outward-pointing unit normal vector, is a point in the region and is the variable of integration, and are volume and surface elements at , and is the velocity of the area element (not the flow velocity). The function may be tensor-, vector- or scalar-valued. Note that the integral on the left hand side is a function solely of time, and so the total derivative has been used. Form for a material element In continuum mechanics, this theorem is often used for material elements. These are parcels of fluids or solids which no material enters or leaves. If is a material element then there is a velocity function , and the boundary elements obey This condition may be substituted to obtain: A special case If we take to be constant with respect to time, then and the identity reduces to as expected. (This simplification is not possible if the flow velocity is incorrectly used in place of the velocity of an area element.) Interpretation and reduction to one dimension The theorem is the higher-dimensional extension of differentiation under the integral sign and reduces to that expression in some cases. Suppose is independent of and , and that is a unit square in the -plane and has limits and . Then Reynolds transport theorem reduces to which, up to swapping and , is the standard expression for differentiation under the integral sign. See also References External links Osborne Reynolds, Collected Papers on Mechanical and Physical Subjects, in three volumes, published circa 1903, now fully and freely available in digital format: Volume 1, Volume 2, Volume 3, Aerodynamics Articles containing proofs Chemical engineering Continuum mechanics Eponymous theorems of physics Equations of fluid dynamics Fluid dynamics Fluid mechanics Mechanical engineering
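For reference, a standard statement of the theorem is sketched below in common textbook notation; the symbols (region Ω(t) with boundary ∂Ω(t), integrand f, boundary velocity v_b, outward unit normal n) are an assumed notation, not the article's own, and should be checked against a preferred source.

```latex
% General form of the Reynolds transport theorem (standard notation):
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega(t)} \mathbf{f}\,\mathrm{d}V
  = \int_{\Omega(t)} \frac{\partial \mathbf{f}}{\partial t}\,\mathrm{d}V
  + \int_{\partial\Omega(t)} \left(\mathbf{v}_b\cdot\mathbf{n}\right)\mathbf{f}\,\mathrm{d}A

% Material element: the boundary moves with the flow, so v_b . n = v . n on the boundary.
% Fixed region: v_b = 0 and only the first term on the right-hand side survives:
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega} \mathbf{f}\,\mathrm{d}V
  = \int_{\Omega} \frac{\partial \mathbf{f}}{\partial t}\,\mathrm{d}V
```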
Reynolds transport theorem
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
490
[ "Theorems in mathematical analysis", "Chemical engineering", "Eponymous theorems of physics", "Continuum mechanics", "Aerospace engineering", "Fluid dynamics", "Equations of physics", "Aerodynamics", "Civil engineering", "Mechanical engineering", "Physics theorems", "Equations of fluid dynamic...
3,489,177
https://en.wikipedia.org/wiki/William%20Andrew%20Goddard%20III
William Andrew Goddard III (born March 29, 1937) is the Charles and Mary Ferkel Professor of Chemistry and Applied Physics, and director of the Materials and Process Simulation Center at the California Institute of Technology. Early life and education William A. Goddard III was born in El Centro California and lived his early years in farm towns across California (El Centro, Delano, Indio, Lodi, Oildale, MacFarland, Firebaugh, also Yuma AZ), where his dad made the wooden boxes used to ship agricultural products. He always dreamed of living in Los Angeles. Goddard earned a BS in engineering from the University of California at Los Angeles in 1960 and PhD in engineering science with a minor in physics from Caltech in 1964. He has four children (Bill, Suzy, Cecilia, Lisa) and has been married for 58 years. Career He joined the chemistry faculty at Caltech in November 1964 where he remains today as a professor and researcher. After his Ph.D. he remained at the California Institute of Technology as Arthur Amos Noyes Research Fellow (1964–66), Professor of Theoretical Chemistry (1967–78) and Professor of Chemistry & Applied Physics (1978-). Goddard has made many contributions to theoretical chemistry, such as the generalized valence bond (GVB) method for ab initio electronic structure calculations and the ReaxFF force field for classical molecular dynamics simulations. He is a member of the International Academy of Quantum Molecular Science and the U.S. National Academy of Sciences. In August 2007, the American Chemical Society at its biannual national convention celebrated Goddard's 70th birthday with a 5-day symposium titled, "Bold predictions in theoretical chemistry." As of November 2017, Goddard has published 1160 peer-reviewed articles. References External links Homepage of William A. Goddard III 1937 births Members of the United States National Academy of Sciences Living people people from El Centro, California 21st-century American chemists California Institute of Technology alumni California Institute of Technology faculty Members of the International Academy of Quantum Molecular Science Theoretical chemists Computational chemists
William Andrew Goddard III
[ "Chemistry" ]
422
[ "Theoretical chemists", "American theoretical chemists" ]
113,469
https://en.wikipedia.org/wiki/Joule%E2%80%93Thomson%20effect
In thermodynamics, the Joule–Thomson effect (also known as the Joule–Kelvin effect or Kelvin–Joule effect) describes the temperature change of a real gas or liquid (as differentiated from an ideal gas) when it expands, typically as a result of the pressure drop across a valve or porous plug, while it is kept insulated so that no heat is exchanged with the environment. This procedure is called a throttling process or Joule–Thomson process. The effect is entirely due to deviation from ideality, since an ideal gas shows no Joule–Thomson effect. At room temperature, all gases except hydrogen, helium, and neon cool upon expansion by the Joule–Thomson process when being throttled through an orifice; these three gases rise in temperature when forced through a porous plug at room temperature, but cool when they start at sufficiently low temperatures. Most liquids, such as hydraulic oils, are warmed by the Joule–Thomson throttling process. The temperature at which the effect switches sign is the inversion temperature. The gas-cooling throttling process is commonly exploited in refrigeration, for example in the liquefiers used in industrial air separation. In hydraulics, the warming effect of Joule–Thomson throttling can be used to find internally leaking valves, as these produce heat that can be detected with a thermocouple or a thermal-imaging camera. Throttling is a fundamentally irreversible process. The throttling due to flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance. Since throttling is a constant-enthalpy process, it can be used to measure experimentally the lines of constant enthalpy (isenthalps) on the pressure–temperature diagram of a gas. Combined with the specific heat capacity at constant pressure, this allows the complete measurement of the thermodynamic potential of the gas. History The effect is named after James Prescott Joule and William Thomson, 1st Baron Kelvin, who discovered it in 1852. It followed upon earlier work by Joule on Joule expansion, in which a gas undergoes free expansion in a vacuum and the temperature is unchanged, if the gas is ideal. Description The adiabatic (no heat exchanged) expansion of a gas may be carried out in a number of ways. The change in temperature experienced by the gas during expansion depends not only on the initial and final pressure, but also on the manner in which the expansion is carried out. If the expansion process is reversible, meaning that the gas is in thermodynamic equilibrium at all times, it is called an isentropic expansion. In this scenario, the gas does positive work during the expansion, and its temperature decreases. In a free expansion, on the other hand, the gas does no work and absorbs no heat, so the internal energy is conserved. Expanded in this manner, the temperature of an ideal gas would remain constant, but the temperature of a real gas decreases, except at very high temperature. The method of expansion discussed in this article, in which a gas or liquid at pressure P1 flows into a region of lower pressure P2 without significant change in kinetic energy, is called the Joule–Thomson expansion. The expansion is inherently irreversible. During this expansion, enthalpy remains unchanged (see proof below). Unlike a free expansion, work is done, causing a change in internal energy. 
Whether the internal energy increases or decreases is determined by whether work is done on or by the fluid; that is determined by the initial and final states of the expansion and the properties of the fluid. The temperature change produced during a Joule–Thomson expansion is quantified by the Joule–Thomson coefficient, . This coefficient may be either positive (corresponding to cooling) or negative (heating); the regions where each occurs for molecular nitrogen, N2, are shown in the figure. Note that most conditions in the figure correspond to N2 being a supercritical fluid, where it has some properties of a gas and some of a liquid, but can not be really described as being either. The coefficient is negative at both very high and very low temperatures; at very high pressure it is negative at all temperatures. The maximum inversion temperature (621 K for N2) occurs as zero pressure is approached. For N2 gas at low pressures, is negative at high temperatures and positive at low temperatures. At temperatures below the gas-liquid coexistence curve, N2 condenses to form a liquid and the coefficient again becomes negative. Thus, for N2 gas below 621 K, a Joule–Thomson expansion can be used to cool the gas until liquid N2 forms. Physical mechanism There are two factors that can change the temperature of a fluid during an adiabatic expansion: a change in internal energy or the conversion between potential and kinetic internal energy. Temperature is the measure of thermal kinetic energy (energy associated with molecular motion); so a change in temperature indicates a change in thermal kinetic energy. The internal energy is the sum of thermal kinetic energy and thermal potential energy. Thus, even if the internal energy does not change, the temperature can change due to conversion between kinetic and potential energy; this is what happens in a free expansion and typically produces a decrease in temperature as the fluid expands. If work is done on or by the fluid as it expands, then the total internal energy changes. This is what happens in a Joule–Thomson expansion and can produce larger heating or cooling than observed in a free expansion. In a Joule–Thomson expansion the enthalpy remains constant. The enthalpy, , is defined as where is internal energy, is pressure, and is volume. Under the conditions of a Joule–Thomson expansion, the change in represents the work done by the fluid (see the proof below). If increases, with constant, then must decrease as a result of the fluid doing work on its surroundings. This produces a decrease in temperature and results in a positive Joule–Thomson coefficient. Conversely, a decrease in means that work is done on the fluid and the internal energy increases. If the increase in kinetic energy exceeds the increase in potential energy, there will be an increase in the temperature of the fluid and the Joule–Thomson coefficient will be negative. For an ideal gas, does not change during a Joule–Thomson expansion. As a result, there is no change in internal energy; since there is also no change in thermal potential energy, there can be no change in thermal kinetic energy and, therefore, no change in temperature. In real gases, does change. The ratio of the value of to that expected for an ideal gas at the same temperature is called the compressibility factor, . For a gas, this is typically less than unity at low temperature and greater than unity at high temperature (see the discussion in compressibility factor). 
At low pressure, the value of always moves towards unity as a gas expands. Thus at low temperature, and will increase as the gas expands, resulting in a positive Joule–Thomson coefficient. At high temperature, and decrease as the gas expands; if the decrease is large enough, the Joule–Thomson coefficient will be negative. For liquids, and for supercritical fluids under high pressure, increases as pressure increases. This is due to molecules being forced together, so that the volume can barely decrease due to higher pressure. Under such conditions, the Joule–Thomson coefficient is negative, as seen in the figure above. The physical mechanism associated with the Joule–Thomson effect is closely related to that of a shock wave, although a shock wave differs in that the change in bulk kinetic energy of the gas flow is not negligible. The Joule–Thomson (Kelvin) coefficient The rate of change of temperature with respect to pressure in a Joule–Thomson process (that is, at constant enthalpy ) is the Joule–Thomson (Kelvin) coefficient . This coefficient can be expressed in terms of the gas's specific volume , its heat capacity at constant pressure , and its coefficient of thermal expansion as: See the below for the proof of this relation. The value of is typically expressed in °C/bar (SI units: K/Pa) and depends on the type of gas and on the temperature and pressure of the gas before expansion. Its pressure dependence is usually only a few percent for pressures up to 100 bar. All real gases have an inversion point at which the value of changes sign. The temperature of this point, the Joule–Thomson inversion temperature, depends on the pressure of the gas before expansion. In a gas expansion the pressure decreases, so the sign of is negative by definition. With that in mind, the following table explains when the Joule–Thomson effect cools or warms a real gas: Helium and hydrogen are two gases whose Joule–Thomson inversion temperatures at a pressure of one atmosphere are very low (e.g., about 40 K, −233 °C for helium). Thus, helium and hydrogen warm when expanded at constant enthalpy at typical room temperatures. On the other hand, nitrogen and oxygen, the two most abundant gases in air, have inversion temperatures of 621 K (348 °C) and 764 K (491 °C) respectively: these gases can be cooled from room temperature by the Joule–Thomson effect. For an ideal gas, is always equal to zero: ideal gases neither warm nor cool upon being expanded at constant enthalpy. Theoretical models For a Van der Waals gas, the coefficient iswith inversion temperature . For the Dieterici gas, the reduced inversion temperature is , and the relation between reduced pressure and reduced inversion temperature is . This is plotted on the right. The critical point falls inside the region where the gas cools on expansion. The outside region is where the gas warms on expansion. Applications In practice, the Joule–Thomson effect is achieved by allowing the gas to expand through a throttling device (usually a valve) which must be very well insulated to prevent any heat transfer to or from the gas. No external work is extracted from the gas during the expansion (the gas must not be expanded through a turbine, for example). The cooling produced in the Joule–Thomson expansion makes it a valuable tool in refrigeration. The effect is applied in the Linde technique as a standard process in the petrochemical industry, where the cooling effect is used to liquefy gases, and in many cryogenic applications (e.g. 
for the production of liquid oxygen, nitrogen, and argon). A gas must be below its inversion temperature to be liquefied by the Linde cycle. For this reason, simple Linde cycle liquefiers, starting from ambient temperature, cannot be used to liquefy helium, hydrogen, or neon. They must first be cooled to their inversion temperatures, which are -233 C (helium), -71 C (hydrogen), and -42 C (neon). Proof that the specific enthalpy remains constant In thermodynamics so-called "specific" quantities are quantities per unit mass (kg) and are denoted by lower-case characters. So h, u, and v are the specific enthalpy, specific internal energy, and specific volume (volume per unit mass, or reciprocal density), respectively. In a Joule–Thomson process the specific enthalpy h remains constant. To prove this, the first step is to compute the net work done when a mass m of the gas moves through the plug. This amount of gas has a volume of V1 = m v1 in the region at pressure P1 (region 1) and a volume V2 = m v2 when in the region at pressure P2 (region 2). Then in region 1, the "flow work" done on the amount of gas by the rest of the gas is: W1 = m P1v1. In region 2, the work done by the amount of gas on the rest of the gas is: W2 = m P2v2. So, the total work done on the mass m of gas is The change in internal energy minus the total work done on the amount of gas is, by the first law of thermodynamics, the total heat supplied to the amount of gas. In the Joule–Thomson process, the gas is insulated, so no heat is absorbed. This means that where u1 and u2 denote the specific internal energies of the gas in regions 1 and 2, respectively. Using the definition of the specific enthalpy h = u + Pv, the above equation implies that where h1 and h2 denote the specific enthalpies of the amount of gas in regions 1 and 2, respectively. Throttling in the T-s diagram A convenient way to get a quantitative understanding of the throttling process is by using diagrams such as h-T diagrams, h-P diagrams, and others. Commonly used are the so-called T-s diagrams. Figure 2 shows the T-s diagram of nitrogen as an example. Various points are indicated as follows: As shown before, throttling keeps h constant. E.g. throttling from 200 bar and 300K (point a in fig. 2) follows the isenthalpic (line of constant specific enthalpy) of 430kJ/kg. At 1 bar it results in point b which has a temperature of 270K. So throttling from 200 bar to 1 bar gives a cooling from room temperature to below the freezing point of water. Throttling from 200 bar and an initial temperature of 133K (point c in fig. 2) to 1 bar results in point d, which is in the two-phase region of nitrogen at a temperature of 77.2K. Since the enthalpy is an extensive parameter the enthalpy in d (hd) is equal to the enthalpy in e (he) multiplied with the mass fraction of the liquid in d (xd) plus the enthalpy in f (hf) multiplied with the mass fraction of the gas in d (1 − xd). So With numbers: 150 = xd 28 + (1 − xd) 230 so xd is about 0.40. This means that the mass fraction of the liquid in the liquid–gas mixture leaving the throttling valve is 40%. Derivation of the Joule–Thomson coefficient It is difficult to think physically about what the Joule–Thomson coefficient, , represents. Also, modern determinations of do not use the original method used by Joule and Thomson, but instead measure a different, closely related quantity. Thus, it is useful to derive relationships between and other, more conveniently measured quantities, as described below. 
The first step in obtaining these results is to note that the Joule–Thomson coefficient involves the three variables T, P, and H. A useful result is immediately obtained by applying the cyclic rule; in terms of these three variables that rule may be written Each of the three partial derivatives in this expression has a specific meaning. The first is , the second is the constant pressure heat capacity, , defined by and the third is the inverse of the isothermal Joule–Thomson coefficient, , defined by . This last quantity is more easily measured than . Thus, the expression from the cyclic rule becomes This equation can be used to obtain Joule–Thomson coefficients from the more easily measured isothermal Joule–Thomson coefficient. It is used in the following to obtain a mathematical expression for the Joule–Thomson coefficient in terms of the volumetric properties of a fluid. To proceed further, the starting point is the fundamental equation of thermodynamics in terms of enthalpy; this is Now "dividing through" by dP, while holding temperature constant, yields The partial derivative on the left is the isothermal Joule–Thomson coefficient, , and the one on the right can be expressed in terms of the coefficient of thermal expansion via a Maxwell relation. The appropriate relation is where α is the cubic coefficient of thermal expansion. Replacing these two partial derivatives yields This expression can now replace in the earlier equation for to obtain: This provides an expression for the Joule–Thomson coefficient in terms of the commonly available properties heat capacity, molar volume, and thermal expansion coefficient. It shows that the Joule–Thomson inversion temperature, at which is zero, occurs when the coefficient of thermal expansion is equal to the inverse of the temperature. Since this is true at all temperatures for ideal gases (see expansion in gases), the Joule–Thomson coefficient of an ideal gas is zero at all temperatures. Joule's second law It is easy to verify that for an ideal gas defined by suitable microscopic postulates that αT = 1, so the temperature change of such an ideal gas at a Joule–Thomson expansion is zero. For such an ideal gas, this theoretical result implies that: The internal energy of a fixed mass of an ideal gas depends only on its temperature (not pressure or volume). This rule was originally found by Joule experimentally for real gases and is known as Joule's second law. More refined experiments found important deviations from it. See also Critical point (thermodynamics) Enthalpy and Isenthalpic process Ideal gas Liquefaction of gases MIRI (Mid-Infrared Instrument), a J–T loop is used on one of the instruments of the James Webb Space Telescope Refrigeration Reversible process (thermodynamics) References Bibliography External links Joule–Thomson effect module, University of Notre Dame Thermodynamics Cryogenics Engineering thermodynamics Gases Heating, ventilation, and air conditioning Thomson effect William Thomson, 1st Baron Kelvin
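A short check of the results above: for an ideal gas the thermal-expansion coefficient is 1/T, so the Joule–Thomson coefficient vanishes, while the common low-pressure van der Waals approximation gives a finite coefficient and an inversion temperature of roughly 2a/(Rb). The van der Waals constants and heat capacity for N2 below are generic textbook values inserted only for illustration, and the simple estimate deliberately overshoots the measured maximum inversion temperature (621 K) quoted in the article.

```python
import sympy as sp

# Ideal gas: V = R*T/P  =>  alpha = (1/V) dV/dT = 1/T  =>  mu_JT = (V/Cp)(alpha*T - 1) = 0
R, T, P, Cp = sp.symbols('R T P C_p', positive=True)
V = R * T / P
alpha = sp.diff(V, T) / V
mu_ideal = (V / Cp) * (alpha * T - 1)
print(sp.simplify(mu_ideal))            # -> 0, as stated in the text

# Low-pressure van der Waals estimate: mu_JT ~ (2a/(R*T) - b) / Cp, inversion near T = 2a/(R*b)
a, b = 0.137, 3.87e-5                   # Pa m^6 mol^-2 and m^3 mol^-1, approximate N2 values
Rn, Cpn = 8.314, 29.1                   # J mol^-1 K^-1
T_room = 300.0
mu_vdw = (2 * a / (Rn * T_room) - b) / Cpn
print(f"mu_JT(300 K) ~ {mu_vdw * 1e5:.2f} K/bar")     # ~0.24 K/bar, cooling on expansion
print(f"T_inversion ~ {2 * a / (Rn * b):.0f} K")       # ~850 K; overestimates the measured 621 K
```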
Joule–Thomson effect
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
3,659
[ "Matter", "Applied and interdisciplinary physics", "Engineering thermodynamics", "Phases of matter", "Cryogenics", "Thermodynamics", "Mechanical engineering", "Statistical mechanics", "Gases", "Dynamical systems" ]
2,534,487
https://en.wikipedia.org/wiki/Regelation
Regelation is the phenomenon of ice melting under pressure and refreezing when the pressure is reduced. This can be demonstrated by looping a fine wire around a block of ice, with a heavy weight attached to it. The pressure exerted on the ice slowly melts it locally, permitting the wire to pass through the entire block. The wire's track will refill as soon as the pressure is relieved, so the ice block remains intact even after the wire has passed completely through. This experiment is possible for ice at −10 °C or cooler, and while essentially valid, the details of the process by which the wire passes through the ice are complex. The phenomenon works best with high-thermal-conductivity materials such as copper, since the latent heat released by refreezing on the upper side of the wire must be conducted through the wire to supply the latent heat of melting on the lower side. In short, the phenomenon in which ice converts to liquid under applied pressure and then re-converts to ice once the pressure is removed is called regelation. Regelation was discovered by Michael Faraday. It occurs only for substances, such as ice, that expand upon freezing, because the melting points of those substances decrease with increasing external pressure. The melting point of ice falls by 0.0072 °C for each additional atm of pressure applied. For example, a pressure of about 500 atmospheres is needed for ice to melt at −4 °C. Surface melting For normal crystalline ice far below its melting point, there will be some relaxation of the atoms near the surface. Simulations of ice near its melting point show that there is significant melting of the surface layers rather than a symmetric relaxation of atom positions. Nuclear magnetic resonance provided evidence for a liquid layer on the surface of ice. In 1998, using atomic force microscopy, Astrid Döppenschmidt and Hans-Jürgen Butt measured the thickness of the liquid-like layer on ice to be roughly 32 nm at −1 °C and 11 nm at −10 °C. The surface melting can account for the low coefficient of friction of ice, as experienced by skaters; the ease of compaction of ice; and the high adhesion of ice surfaces. Examples of regelation A glacier can exert sufficient pressure on its lower surface to lower the melting point of its ice. The melting of the ice at the glacier's base allows it to move from a higher elevation to a lower elevation. Liquid water may flow from the base of a glacier at lower elevations when the temperature of the air is above the freezing point of water. Misconceptions At least one 1992 article suggests that it is, at least in part, a misconception to ascribe ice skating to regelation. The problem of matching the (large) magnitude of the water-ice p-V gradient above the triple point boundary with the magnitudes of prevailing temperature and pressure in the ice-skating context applies equally in the context of the classic lab experiment with, say, a 28 swg copper wire cutting through a 10 cm ice block. The misconception is not that these observations fail to be regelation but that regelation can be explained (solely) in terms of the magnitude of the p-V gradient above the triple point. There is much more going on. Regelation is empirical: it is a phenomenon, as was, for example, Brownian motion before, during, and arguably even after Einstein modelled it. It has been so widely observed and described that we generalise to describing it in terms of pressure causing increased surface melting. 
The recognition of this phenomenon in all the mentioned contexts is not in doubt. Car tyres work in snow, even though there is some increased surface melting, because their tread allows the water to be liberated. Ice skating is often given as an example of regelation; however, the pressure required is much greater than that exerted by the weight of a skater. Additionally, regelation does not explain how one can skate on ice well below 0 °C. The compaction of snow into snowballs is another example given in older texts; here, too, the pressure required is far greater than the pressure that can be applied by hand. A counterexample is that cars do not melt the snow they run over. See also Premelting References Thermodynamics
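The pressure–melting numbers quoted in this entry (about 0.0072 °C of melting-point depression per atmosphere, and roughly 500 atm for melting at −4 °C) can be cross-checked with the Clausius–Clapeyron relation. The latent heat and specific volumes below are common handbook values assumed for illustration, not values taken from the article.

```python
# Clausius-Clapeyron slope of the ice-water melting curve at 0 degC:
#   dP/dT = L / (T * (v_water - v_ice))
L = 334e3              # J/kg, latent heat of fusion of ice (textbook value)
T = 273.15             # K
v_water = 1.000e-3     # m^3/kg, specific volume of liquid water at 0 degC
v_ice = 1.091e-3       # m^3/kg, specific volume of ice at 0 degC

dP_dT = L / (T * (v_water - v_ice))            # Pa per K (negative: melting point falls with pressure)
atm = 101325.0
dT_per_atm = atm / dP_dT                       # melting-point change per additional atmosphere
print(f"dT/dP ~ {dT_per_atm:.4f} K per atm")   # about -0.0075 K/atm, consistent with the quoted 0.0072
print(f"pressure for melting at -4 degC: ~ {-4 / dT_per_atm:.0f} atm")  # ~530 atm, same order as 500 atm
```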
Regelation
[ "Physics", "Chemistry", "Mathematics" ]
871
[ "Thermodynamics", "Dynamical systems" ]
2,536,095
https://en.wikipedia.org/wiki/Actin-binding%20protein
Actin-binding proteins (also known as ABPs) are proteins that bind to actin. This may mean ability to bind actin monomers, or polymers, or both. Many actin-binding proteins, including α-actinin, β-spectrin, dystrophin, utrophin and fimbrin, do this through the actin-binding calponin homology domain. This is a list of actin-binding proteins in alphabetical order. 0–9 25kDa 25kDa ABP from aorta 30akDA 30bkDa 34kDA 45kDa 110 kD dimer ABP 110 kD (Drebrin) p53 p58gag p185neu p116rip A a-actinin Abl ABLIM Actin-Interacting MAPKKK Ssk2p ABP120 ABP140 Abp1p ABP280 (Filamin) ABP50 (EF-1a) Acan 125 (Carmil) ActA Actibind Actin Actinfilin Actinogelin Actin-regulating kinases Actin-Related Proteins Actobindin Actolinkin Actopaxin Actophorin Acumentin (= L-plastin) Adducin ADF/Cofilin Adseverin (scinderin) Afadin AFAP-110 Affixin Aginactin AIP1 Aldolase Angiogenin Anillin Annexins Aplyronine Archvillin (isoform of Supervillin) Arginine kinase Arp2/3 complex B Band 4.1 Band 4.9 (Dematin) b-actinin b-Cap73 Bifocal Bistramide A BPAG1 Brevin (Gelsolin) C c-Abl Calpactin (Annexin) CHO1 Cortactin CamKinase II Calponin Chondramide Cortexillin CAP Caltropin CH-ILKBP CPb3 Cap100 Calvasculin Ciboulot Coactosin CAP23 CARMIL Acan125 Cingulin Cytovillin (Ezrin) CapZ/Capping Protein a-Catenin Cofilin CR16 Caldesmon CCT Comitin Calicin Centuarin Coronin D DBP40 Drebrin Dematin (Band 4.9) Dynacortin Destrin (ADF/cofilin) Dystonins Diaphanous Dystroglycan DNase I Dystrophin Doliculide Dolastatins E EAST Endossin EF-1a (ABP50) Eps15 EF-1b EPLIN EF-2 Epsin EGF receptor ERK ENC-1 ERM proteins (ezrin, radixin, moesin, plus merlin) END3p Ezrin (the E of ERM protein family) F F17R Fodrin (spectrin) Fascin Formins Fessilin Frabin FHL3 Fragmin Fhos FLNA (filamin A) Fimbrin (plastin) G GAP43 Glycogenins Gas2 G-proteins Gastrin-Binding Protein Gelactins I-IV Gelsolins Girdin Glucokinase H Harmonin b Hrp36 Hexokinase Hrp65-2 Hectochlorin HS1 (actin binding protein) Helicase II Hsp27 HIP1 (Huntingtin Interacting protein 1) Hsp70 Histactophilin Hsp90 Histidine rich protein II Hsp100 I Inhibitor of apoptosis (IAP) Insertin Interaptin IP3Kinase A (Inositol 1,4,5-trisphosphate 3-kinase A) IQGAP Integrins J Jaspisamide A Jasplakinolide K Kabiramide C Kaptin Kettin Kelch protein L 5-Lipoxygenase Limatin Lim Kinases Lim Proteins L-plastin Lymphocyte Specific Protein 1 (LSP1) M MACF1 MacMARKS Mena Myopodin MAP1A Merlin (related to the ERM proteins) Myosins MAP-1C Metavinculin Moesin (the M of ERM proteins) Myosin light chain kinase MAL Mip-90 Myosin Light Chain A1 MARKS MIM MAYP Mycalolide (a macroglide drug) Mayven Myelin basic protein N Naphthylphthalamic acid binding protein (NPA) N-RAP Nebulin N-WASP Neurabin Nullo Neurexins Neurocalcin Nexillin O OYE2 P Palladin Plastin p30 PAK (p21-activated Kinase) Plectin p47PHOX Parvin (actopaxin) Prefoldin p53 PASK (Proline, Alanine rich Ste20 related Kinase) Presenilin I p58 Phalloidin (not a protein; a small cyclic peptide) Profilin p185neu Ponticulin Protein kinase C Porin P.IB Prk1p (actin regulating kinase) R Radixin (the R of ERM proteins) Rapsyn Rhizopodin RPL45 RTX toxin (Vibrio cholerae) RVS 167 S Sac6 Sla1p Srv2 (CAP) S-adenosyl-L-homocysteine hydrolase, (SAHH) Sla2p Synaptopodin Scinderin (adseverin) Synapsins Scruin Spectrin Severin Spectraplakins SVSII Shot (Short stop) Spire Shroom Smitin (Smooth Musc.Titin) Supervillin SipA Smoothelin Sucrose synthetase SipC Sra-1 Spinophilin Ssk2p Swinholide T Talin protein Toxophilin Twinfilin Tau Trabeculin Twinstar TCP-1 Transgelin Transgelin 
2 Transgelin 3 Tensin Tropomodulin Thymosin Tropomyosin Titin Troponin TOR2 Tubulin bIV U Ulapualide Utrophin Unc-87 Unc-60 (ADF/cofilins) V VASP Vav Verprolin VDAC Vibrio cholerae RTX toxin Villin Vinculin Vitamin D-binding protein W WIP WASp Y Y-box proteins YpkA (YopO) Z Zipper protein Zo-1 Zyxin See also Cytoskeletal drugs References External links The Encyclopaedia of Actin-Binding Proteins (and Drugs)– alphabetical list, sourced profile for each Cell biology Proteins
Actin-binding protein
[ "Chemistry", "Biology" ]
1,435
[ "Biomolecules by chemical classification", "Proteins", "Cell biology", "Molecular biology" ]
2,536,597
https://en.wikipedia.org/wiki/Nanotribology
Nanotribology is the branch of tribology that studies friction, wear, adhesion and lubrication phenomena at the nanoscale, where atomic interactions and quantum effects are not negligible. The aim of this discipline is to characterize and modify surfaces for both scientific and technological purposes. Nanotribological research has historically involved both direct and indirect methodologies. Microscopy techniques, including the Scanning Tunneling Microscope (STM), the Atomic Force Microscope (AFM) and the Surface Forces Apparatus (SFA), have been used to analyze surfaces with extremely high resolution, while indirect methods such as computational modelling and the quartz crystal microbalance (QCM) have also been extensively employed. By changing the topology of surfaces at the nanoscale, friction can be reduced or enhanced more strongly than is possible with macroscopic lubrication and adhesion control; in this way, superlubricity and superadhesion can be achieved. In micro- and nano-mechanical devices, problems of friction and wear, which are critical because of the extremely high surface-to-volume ratio, can be addressed by covering moving parts with superlubricant coatings. Conversely, where adhesion is an issue, nanotribological techniques offer a possibility to overcome such difficulties. History Friction and wear have been technological issues since ancient times. On the one hand, the scientific approach of the last centuries towards understanding the underlying mechanisms was focused on macroscopic aspects of tribology. On the other hand, in nanotribology the systems studied are composed of nanometric structures, where volume forces (such as those related to mass and gravity) can often be considered negligible compared to surface forces. Scientific equipment to study such systems has been developed only since the second half of the 20th century. In 1969, the first method to study the behavior of a molecularly thin liquid film sandwiched between two smooth surfaces, using the SFA, was developed. From this starting point, in the 1980s researchers employed other techniques to investigate solid-state surfaces at the atomic scale. Direct observation of friction and wear at the nanoscale started with the first Scanning Tunneling Microscope (STM), which can obtain three-dimensional images of surfaces with atomic resolution; this instrument was developed by Gerd Binnig and Heinrich Rohrer in 1981. The STM can study only conductive materials, but in 1985, with the invention of the Atomic Force Microscope (AFM) by Binnig and his colleagues, non-conductive surfaces could also be observed. Afterwards, AFMs were modified to obtain data on normal and frictional forces: these modified microscopes are called Friction Force Microscopes (FFM) or Lateral Force Microscopes (LFM). The term "nanotribology" was first used in the titles of publications in 1990 and 1991, in the title of a major review paper published in Nature in 1995, and in the title of a major nanotribology handbook, also in 1995. Since the beginning of the 21st century, computer-based atomic simulation methods have been employed to study the behaviour of single asperities, even those composed of only a few atoms. Thanks to these techniques, the nature of bonds and interactions in materials can be understood with high spatial and temporal resolution. 
Surface analysis Surface forces apparatus The SFA (Surface Forces Apparatus) is an instrument used for measuring physical forces between surfaces, such as adhesion and capillary forces in liquids and vapors, and van der Waals interactions. Since 1969, the year in which the first apparatus of this kind was described, numerous versions of this tool have been developed. SFA 2000, which has fewer components and is easier to use and clean than previous versions of the apparatus, is one of the currently most advanced equipment utilized for nanotribological purposes on thin films, polymers, nanoparticles and polysaccharides. SFA 2000 has one single cantilever which is able to generate mechanically coarse and electrically fine movements in seven orders of magnitude, respectively with coils and with piezoelectric materials. The extra-fine control enables the user to have a positional accuracy lesser than 1 Å. The sample is trapped by two molecularly smooth surfaces of mica in which it perfectly adheres epitaxially. Normal forces can be measured by a simple relation: where is the applied displacement by using one of the control methods mentioned before, is the spring constant and is the actual deformation of the sample measured by MBI. Moreover, if then there is a mechanical instability and therefore the lower surface will jump to a more stable region of the upper surface. And so, the adhesion force is measured with the following formula: . Using the DMT model, the interaction energy per unit area can be calculated: where is the curvature radius and is the force between cylyndrically curved surfaces. Scanning probe microscopy SPM techniques such as AFM and STM are widely used in nanotribology studies. The Scanning Tunneling Microscope is used mostly for morphological topological investigation of a clean conductive sample, because it is able to give an image of its surface with atomic resolution. The Atomic Force Microscope is a powerful tool in order to study tribology at a fundamental level. It provides an ultra-fine surface-tip contact with a high refined control over motion and atomic-level precision of measure. The microscope consists, basically, in a high flexible cantilever with a sharp tip, which is the part in contact with the sample and therefore the crossing section must be ideally atomic-size, but actually nanometric (radius of the section varies from 10 to 100 nm). In nanotribology AFM is commonly used for measuring normal and friction forces with a resolution of pico-Newtons. The tip is brought close to the sample's surface, consequently forces between the last atoms of the tip and the sample's deflect the cantilever proportionally to the intensity of this interactions. Normal forces bend the cantilever vertically up or down of the equilibrium position, depending on the sign of the force. The normal force can be calculated by means of the following equation: where is the spring constant of the cantilever, is the output of the photodetector, which is an electric signal, directly with the displacement of the cantilever and is the optical-lever sensitivity of the AFM. On the other hand, lateral forces can be measured with the FFM, which is fundamentally very similar to the AFM. The main difference resides in the tip motion, that slides perpendicularly to its axis. These lateral forces, i.e. friction forces in this case, result in twisting the cantilever, which is controlled to ensure that only the tip touches the surface and not other parts of the probe. 
At every step the twist is measured and related with the frictional force with this formula: where is the output voltage, is the torsional constant of the cantilever, is the height of the tip plus the cantilever thickness and is the lateral deflection sensitivity. Since the tip is part of a compliant apparatus, the cantilever, the load can be specified and so the measurement is made in load-control mode; but in this way the cantilever has snap-in and snap-out instabilities and so in some regions measurements cannot be completed stably. These instabilities can be avoided with displacement-controlled techniques, one of this is the interfacial force microscopy. The tap can be at contact with the sample in the whole measurement process, and this is called contact mode (or static mode), otherwise it can be oscillated and this is called tapping mode (or dynamic mode). Contact mode is commonly applied on hard sample, on which the tip cannot leave any sign of wear, such as scars and debris. For softer materials tapping mode is used to minimize the effects of friction. In this case the tip is vibrated by a piezo and taps the surface at the resonant frequency of the cantilever, i.e. 70-400 kHz, and with an amplitude of 20-100 nm, high enough to allow the tip to not get stuck to the sample because of the adhesion force. The atomic force microscope can be used as a nanoindenter in order to measure hardness and Young's modulus of the sample. For this application, the tip is made of diamond and it is pressed against the surface for about two seconds, then the procedure is repeated with different loads. The hardness is obtained dividing the maximum load by the residual imprint of the indenter, which can be different from the indenter section because of sink-in or pile-up phenomena. The Young's modulus can be calculated using the Oliver and Pharr method, which allows to obtain a relation between the stiffness of the sample, function of the indentation area, and its Young's and Poisson's moduli. Atomistic simulations Computational methods are particularly useful in nanotribology for studying various phenomena, such as nanoindentation, friction, wear or lubrication. In an atomistic simulation, every single atom's motion and trajectory can be tracked with a very high precision and so this information can be related to experimental results, in order to interpret them, to confirm a theory or to have access to phenomena, that are invisible to a direct study. Moreover, many experimental difficulties do not exist in an atomistic simulation, such as sample preparation and instrument calibration. Theoretically every surface can be created from a flawless one to the most disordered. As well as in the other fields where atomistic simulations are used, the main limitations of these techniques relies on the lack of accurate interatomic potentials and the limited computing power. For this reason, simulation time is very often small (femtoseconds) and the time step is limited to 1 fs for fundamental simulations up to 5 fs for coarse-grained models. It has been demonstrated with an atomistic simulation that the attraction force between the tip and sample's surface in a SPM measurement produces a jump-to-contact effect. This phenomenon has a completely different origin from the snap-in that occurs in load-controlled AFM, because this latter is originated from the finite compliance of the cantilever. 
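The femtosecond time steps mentioned above are easy to motivate with a minimal molecular-dynamics integrator. The sketch below propagates a single atom in a fixed Lennard-Jones potential with velocity Verlet; the argon-like parameters are generic values chosen for illustration, not parameters from any study cited in the article.

```python
# Velocity-Verlet integration of one atom in a fixed Lennard-Jones potential well.
eps = 1.65e-21        # J, well depth (argon-like, illustrative)
sigma = 3.4e-10       # m
mass = 6.63e-26       # kg
dt = 1.0e-15          # s: the ~1 fs step quoted in the text

def lj_force(r):
    """Radial LJ force at separation r (positive = repulsive, pushing outward)."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r

# Start slightly inside the potential minimum (at 2**(1/6) * sigma) so the atom oscillates.
x = 1.05 * sigma
v = 0.0
a = lj_force(x) / mass

for step in range(2000):                 # 2000 steps of 1 fs = 2 ps of simulated time
    x += v * dt + 0.5 * a * dt ** 2
    a_new = lj_force(x) / mass
    v += 0.5 * (a + a_new) * dt
    a = a_new

print(f"separation after 2 ps: {x * 1e10:.2f} angstrom")
```

The oscillation period in this well is of the order of a picosecond, so a 1 fs step resolves it comfortably; a much larger step would make the integration unstable, which is why atomistic simulation times remain short.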
The origin of the atomic resolution of an AFM was discovered and it has been shown that covalent bonds form between the tip and the sample which dominate van der Waals interactions and they are responsible for a such high resolution. Simulating an AFM scansion in contact mode, It has been found that a vacancy or an adatom can be detected only by an atomically sharp tip. Whether in non-contact mode vacancies and adatoms can be distinguished with the so-called frequency modulation technique with a non-atomically sharp tip. In conclusion only in non-contact mode can be achieved atomic resolution with an AFM. Properties Friction Friction, the force opposing to the relative motion, is usually idealized by means of some empirical laws such as Amonton’s First and Second laws and Coulomb's law. At the nanoscale, however, such laws may lose their validity. For instance, Amonton's second law states that friction coefficient is independent from the area of contact. Surfaces, in general, have asperities, that reduce the real area of contact and therefore, minimizing such area can minimize friction. During the scanning process with an AFM or FFM, the tip, sliding on the sample's surface, passes through both low (stable) and high potential energy points, determined, for instance, by atomic positions or, on a larger scale, by surface roughness. Without considering thermal effects, the only force that makes the tip overcome these potential barriers is the spring force given by the support: this causes the stick-slip motion. At the nanoscale, friction coefficient depends on several conditions. For example, with light loading conditions, tend to be lower than those at the macroscale. With higher loading conditions, such coefficient tends to be similar to the macroscopic one. Temperature and relative motion speed can also affect friction. Lubricity and superlubricity at the atomic scale Lubrication is the technique used to reduce friction between two surfaces in mutual contact. Generally, lubricants are fluids introduced between these surfaces in order to reduce friction. However, in micro- or nano-devices, lubrication is often required and traditional lubricants become too viscous when confined in layers of molecular thickness. A more effective technique is based on thin films, commonly produced by Langmuir–Blodgett deposition, or self-assembled monolayers Thin films and self-assembled monolayers are also used to increase adhesion phenomena. Two thin films made of perfluorinated lubricants (PFPE) with different chemical composition were found to have opposite behaviors in humid environment: hydrophobicity increases the adhesive force and decreases lubrication of films with nonpolar end groups; instead, hydrophilicity has the opposite effects with polar end groups. Superlubricity “Superlubricity is a frictionless tribological state sometimes occurring in nanoscale material junctions”. At the nanoscale, friction tends to be non isotropic: if two surfaces sliding against each other have incommensurate surface lattice structures, each atom is subject to different amount of force from different directions . Forces, in this situation, can offset each other, resulting in almost zero friction. The very first proof of this was obtained using a UHV-STM to measure. If lattices are incommensurable, friction was not observed, however, if the surfaces are commensurable, friction force is present. At the atomic level, these tribological properties are directly connected with superlubricity. 
An example of this is given by solid lubricants, such as graphite, MoS2 and Ti3SiC2: this can be explained with the low resistance to shear between layers due to the stratified structure of these solids. Even if at the macroscopic scale friction involves multiple microcontacts with different size and orientation, basing on these experiments one can speculate that a large fraction of contacts will be in superlubric regime. This leads to a great reduction in average friction force, explaining why such solids have a lubricant effect. Other experiments carried out with the LFM shows that the stick-slip regime is not visible if the applied normal load is negative: the sliding of the tip is smooth and the average friction force seems to be zero. Other mechanisms of superlubricity may include: (a) Thermodynamic repulsion due to a layer of free or grafted macromolecules between the bodies so that the entropy of the intermediate layer decreases at small distances due to stronger confinement; (b) Electrical repulsion due to external electrical voltage; (c) Repulsion due to electrical double layer; (d) Repulsion due to thermal fluctuations. Thermolubricity at the atomic scale With the introduction of AFM and FFM, thermal effects on lubricity at the atomic scale could not be considered negligible any more. Thermal excitation can result in multiple jumps of the tip in the direction of the slide and backward. When the sliding velocity is low, the tip takes a long time to move between low potential energy points and thermal motion can cause it to make a lot of spontaneous forward and reverse jumps: therefore, the required lateral force to make the tip follow the slow support motion is small, so the friction force becomes very low. For this situation was introduced the term thermolubricity. Adhesion Adhesion is the tendency of two surfaces to stay attached together. The attention in studying adhesion at the micro- and nanoscale increased with the development of AFM: it can be used in nanoindentation experiments, in order to quantify adhesion forces According to these studies, hardness was found to be constant with film thickness, and it's given by: where is the indentation's area and is the load applied to the indenter. Stiffness, defined as , where is the indentation's depth, can be obtained from , the radius of the indenter-contact line. is the reduced Young's modulus, and are the indenter's Young's modulus and Poisson's ratio and , are the same parameters for the sample. However, can't always be determined from direct observation; it could be deduced from the value of (depth of indentation), but it's possible only if there is no sink-in or pile-up (perfect Sneddon's surface conditions). If there is sink in, for example, and the indenter is conical the situation is described below. From the image, we can see that: and From Oliver and Pharr's study where ε depends on the geometry of the indenter; if it's conical, if it's spherical and if it's a flat cylinder. Oliver and Pharr, therefore, did not consider adhesive force, but only elastic force, so they concluded: Considering adhesive force Introducing as the adhesion energy and as the work of adhesion: obtaining In conclusion: The consequences of the additional term of adhesion is visible in the following graph: During loading, indentation depth is higher when adhesion is not negligible: adhesion forces contributes to the work of indentation; on the other hand, during unloading process, adhesion forces opposes indentation process. 
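The adhesion and indentation quantities discussed above can be collected into a small bookkeeping sketch. The relations used below (the DMT pull-off estimate F_adhesion = 2πRW for a sphere-on-flat contact of radius R, hardness as maximum load over contact area, and the standard reduced-modulus combination) are textbook contact-mechanics results, not reconstructions of the article's own equations, and the numbers are purely illustrative.

```python
import math

def adhesion_energy_per_area(pull_off_force, tip_radius):
    """DMT estimate of the work of adhesion: W = F_pull_off / (2 * pi * R)."""
    return pull_off_force / (2.0 * math.pi * tip_radius)

def hardness(p_max, contact_area):
    """Indentation hardness H = P_max / A_contact."""
    return p_max / contact_area

def reduced_modulus(E_s, nu_s, E_i, nu_i):
    """Reduced modulus: 1/E_r = (1 - nu_s^2)/E_s + (1 - nu_i^2)/E_i."""
    return 1.0 / ((1.0 - nu_s ** 2) / E_s + (1.0 - nu_i ** 2) / E_i)

# Illustrative numbers only.
print(adhesion_energy_per_area(10e-9, 20e-9))            # 10 nN pull-off, 20 nm tip -> ~0.08 J/m^2
print(hardness(1e-3, 0.05e-12) / 1e9)                    # 1 mN over 0.05 um^2 -> 20 GPa
print(reduced_modulus(72e9, 0.17, 1141e9, 0.07) / 1e9)   # silica sample, diamond indenter -> ~70 GPa
```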
Adhesion is also related to capillary forces acting between two surfaces when in presence of humidity. Applications of adhesion studies This phenomenon is very important in thin films, because a mismatch between the film and the surface can cause internal stresses and, consequently interface debonding. When a normal load is applied with an indenter, the film deforms plastically, until the load reaches a critical value: an interfacial fracture starts to develop. The crack propagates radially, until the film is buckled. On the other hand, adhesion was also investigated for its biomimetic applications: several creatures including insects, spiders, lizards and geckos have developed a unique climbing ability that are trying to be replicated in synthetic materials . It was shown that a multi-level hierarchical structure produces adhesion enhancement: a synthetic adhesive replicating gecko feet organization was created using nanofabrication techniques and self-assembly. Wear Wear is related to the removal and the deformation of a material caused by the mechanical actions. At the nanoscale, wear is not uniform. The mechanism of wear generally begins on the surface of material. The relative motion of two surfaces can cause indentations obtained by the removal and deformation of surface material. Continued motion can eventually grow in both width and depth these indentations. At the macro scale wear is measured by quantifying the volume (or mass) of material loss or by measuring the ratio of wear volume per energy dissipated. At the nanoscale, however, measuring such volume can be difficult and therefore, it is possible to use evaluate wear by analyzing modifications in surface topology, generally by means of AFM scanning. See also J. Thomas Dickinson References External links Nanotribology Laboratory for Information Storage and MEMS/NEMS Nanotribology on TRIBONET Nanotribology Lab at the University of Pennsylvania Nanotribology Lab at North Carolina State University Atomic-scale Friction Research and Education Synergy Hub (AFRESH) an Engineering Virtual Organization for the atomic-scale friction community to share, archive, link, and discuss data, knowledge and tools related to atomic-scale friction Nanotechnology Materials science Tribology
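The stick-slip behaviour described in the friction section above is commonly illustrated with the one-dimensional Prandtl-Tomlinson model: a tip dragged through a sinusoidal surface potential by a spring attached to a moving support. The sketch below is a generic, overdamped implementation with made-up parameters, not a model fitted to any experiment in the article.

```python
import numpy as np

# Overdamped 1D Prandtl-Tomlinson model:
#   gamma * dx/dt = -dU/dx + k * (v * t - x),  with  U(x) = -U0 * cos(2*pi*x / a)
U0 = 1.0e-19        # J, corrugation amplitude (illustrative)
a = 2.5e-10         # m, lattice period
k = 1.0             # N/m, effective lateral spring constant
v = 1.0e-6          # m/s, support velocity
gamma = 2.0e-5      # kg/s, damping coefficient (illustrative)

dt = 1.0e-7
steps = 200_000     # support travels v * steps * dt = 20 nm, i.e. ~80 lattice periods
x = 0.0
lateral_force = []

for i in range(steps):
    support = v * i * dt
    f_surface = -U0 * (2 * np.pi / a) * np.sin(2 * np.pi * x / a)  # force from the corrugated potential
    f_spring = k * (support - x)                                   # pulling force from the support spring
    x += dt * (f_surface + f_spring) / gamma                       # explicit overdamped update
    lateral_force.append(f_spring)

# Plotted against support position, lateral_force shows the sawtooth stick-slip pattern;
# its mean is the friction force measured by an FFM in this regime.
print(f"mean lateral force ~ {np.mean(lateral_force) * 1e9:.2f} nN")
```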
Nanotribology
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,064
[ "Tribology", "Applied and interdisciplinary physics", "Materials science", "Surface science", "nan", "Mechanical engineering", "Nanotechnology" ]
2,537,182
https://en.wikipedia.org/wiki/Categories%20for%20the%20Working%20Mathematician
Categories for the Working Mathematician (CWM) is a textbook in category theory written by American mathematician Saunders Mac Lane, who cofounded the subject together with Samuel Eilenberg. It was first published in 1971, and is based on his lectures on the subject given at the University of Chicago, the Australian National University, Bowdoin College, and Tulane University. It is widely regarded as the premier introduction to the subject. Contents The book has twelve chapters, which are: Chapter I. Categories, Functors, and Natural Transformations. Chapter II. Constructions on Categories. Chapter III. Universals and Limits. Chapter IV. Adjoints. Chapter V. Limits. Chapter VI. Monads and Algebras. Chapter VII. Monoids. Chapter VIII. Abelian Categories. Chapter IX. Special Limits. Chapter X. Kan Extensions. Chapter XI. Symmetry and Braiding in Monoidal Categories Chapter XII. Structures in Categories. Chapters XI and XII were added in the 1998 second edition, the first in view of its importance in string theory and quantum field theory, and the second to address higher-dimensional categories that have come into prominence. Although it is the classic reference for category theory, some of the terminology is not standard. In particular, Mac Lane attempted to settle an ambiguity in usage for the terms epimorphism and monomorphism by introducing the terms epic and monic, but the distinction is not in common use. References Notes 1971 non-fiction books Graduate Texts in Mathematics Mathematics textbooks Category theory
Categories for the Working Mathematician
[ "Mathematics" ]
303
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
2,537,223
https://en.wikipedia.org/wiki/Uniform%20polyhedron
In geometry, a uniform polyhedron has regular polygons as faces and is vertex-transitive—there is an isometry mapping any vertex onto any other. It follows that all vertices are congruent. Uniform polyhedra may be regular (if also face- and edge-transitive), quasi-regular (if also edge-transitive but not face-transitive), or semi-regular (if neither edge- nor face-transitive). The faces and vertices don't need to be convex, so many of the uniform polyhedra are also star polyhedra. There are two infinite classes of uniform polyhedra, together with 75 other polyhedra. They are: the 2 infinite classes of prisms and antiprisms; the convex polyhedra, namely the 5 Platonic solids and the 13 Archimedean solids (2 quasiregular and 11 semiregular); and the non-convex star polyhedra, namely the 4 Kepler–Poinsot polyhedra and the 53 uniform star polyhedra (14 quasiregular and 39 semiregular). There are also many degenerate uniform polyhedra with pairs of edges that coincide, including one found by John Skilling called the great disnub dirhombidodecahedron, Skilling's figure. Dual polyhedra to uniform polyhedra are face-transitive (isohedral) and have regular vertex figures, and are generally classified in parallel with their dual (uniform) polyhedron. The dual of a regular polyhedron is regular, while the dual of an Archimedean solid is a Catalan solid. The concept of uniform polyhedron is a special case of the concept of uniform polytope, which also applies to shapes in higher-dimensional (or lower-dimensional) space. Definition Uniform polyhedra are defined as vertex-transitive polyhedra with regular faces. A polyhedron is here taken to be a finite set of polygons such that each side of a polygon is a side of just one other polygon, and such that no non-empty proper subset of the polygons has the same property. By a polygon is implicitly meant a polygon in 3-dimensional Euclidean space; these are allowed to be non-convex and to intersect each other. There are some generalizations of the concept of a uniform polyhedron. If the connectedness assumption is dropped, then we get uniform compounds, which can be split as a union of polyhedra, such as the compound of 5 cubes. If we drop the condition that the realization of the polyhedron is non-degenerate, then we get the so-called degenerate uniform polyhedra. These require a more general definition of polyhedra. Some authors have given a rather complicated definition of a polyhedron, while others have given a simpler and more general one: in the latter terminology, a polyhedron is a 2-dimensional abstract polytope with a non-degenerate 3-dimensional realization. Here an abstract polytope is a poset of its "faces" satisfying various conditions, a realization is a function from its vertices to some space, and the realization is called non-degenerate if any two distinct faces of the abstract polytope have distinct realizations. Some of the ways they can be degenerate are as follows: Hidden faces. Some polyhedra have faces that are hidden, in the sense that no points of their interior can be seen from the outside. These are usually not counted as uniform polyhedra. Degenerate compounds. Some polyhedra have multiple edges and their faces are the faces of two or more polyhedra, though these are not compounds in the previous sense since the polyhedra share edges. Double covers. Some non-orientable polyhedra have double covers satisfying the definition of a uniform polyhedron. These double covers have doubled faces, edges and vertices. They are usually not counted as uniform polyhedra. Double faces.
There are several polyhedra with doubled faces produced by Wythoff's construction. Most authors do not allow doubled faces and remove them as part of the construction. Double edges. Skilling's figure has the property that it has double edges (as in the degenerate uniform polyhedra) but its faces cannot be written as a union of two uniform polyhedra. History Regular convex polyhedra The Platonic solids date back to the classical Greeks and were studied by the Pythagoreans, Plato (c. 424 – 348 BC), Theaetetus (c. 417 BC – 369 BC), Timaeus of Locri (c. 420–380 BC), and Euclid (fl. 300 BC). The Etruscans discovered the regular dodecahedron before 500 BC. Nonregular uniform convex polyhedra The cuboctahedron was known by Plato. Archimedes (287 BC – 212 BC) discovered all of the 13 Archimedean solids. His original book on the subject was lost, but Pappus of Alexandria (c. 290 – c. 350 AD) mentioned that Archimedes listed 13 polyhedra. Piero della Francesca (1415 – 1492) rediscovered the five truncations of the Platonic solids—truncated tetrahedron, truncated octahedron, truncated cube, truncated dodecahedron, and truncated icosahedron—and included illustrations and calculations of their metric properties in his book De quinque corporibus regularibus. He also discussed the cuboctahedron in a different book. Luca Pacioli plagiarized Francesca's work in De divina proportione in 1509, adding the rhombicuboctahedron, calling it an icosihexahedron for its 26 faces, which was drawn by Leonardo da Vinci. Johannes Kepler (1571–1630) was the first to publish the complete list of Archimedean solids, in 1619. He also identified the infinite families of uniform prisms and antiprisms. Regular star polyhedra Kepler (1619) discovered two of the regular Kepler–Poinsot polyhedra, the small stellated dodecahedron and great stellated dodecahedron. Louis Poinsot (1809) discovered the other two, the great dodecahedron and great icosahedron. The set of four was proven complete by Augustin-Louis Cauchy in 1813 and named by Arthur Cayley in 1859. Other 53 nonregular star polyhedra Of the remaining 53, Edmund Hess (1878) discovered 2, Albert Badoureau (1881) discovered 36 more, and Pitsch (1881) independently discovered 18, of which 3 had not previously been discovered. Together these gave 41 polyhedra. The geometer H.S.M. Coxeter discovered the remaining twelve in collaboration with J. C. P. Miller (1930–1932) but did not publish. M.S. Longuet-Higgins and H.C. Longuet-Higgins independently discovered eleven of these. Lesavre and Mercier rediscovered five of them in 1947. The full list of uniform polyhedra was published in 1954, and the conjecture that the list was complete was subsequently proved. In 1974, Magnus Wenninger published his book Polyhedron Models, which lists all 75 nonprismatic uniform polyhedra, with many previously unpublished names given to them by Norman Johnson. John Skilling independently proved the completeness and showed that if the definition of uniform polyhedron is relaxed to allow edges to coincide, then there is just one extra possibility (the great disnub dirhombidodecahedron). In 1987, Edmond Bonan drew all the uniform polyhedra and their duals in 3D with a Turbo Pascal program called Polyca. Most of them were shown during the International Stereoscopic Union Congress held in 1993, at the Congress Theatre, Eastbourne, England; and again in 2005 at the Kursaal of Besançon, France.
In 1993, Zvi Har'El (1949–2008) produced a complete kaleidoscopic construction of the uniform polyhedra and duals with a computer program called Kaleido and summarized it in a paper Uniform Solution for Uniform Polyhedra, counting figures 1–80. Also in 1993, R. Mäder ported this Kaleido solution to Mathematica with a slightly different indexing system. In 2002 Peter W. Messer discovered a minimal set of closed-form expressions for determining the main combinatorial and metrical quantities of any uniform polyhedron (and its dual) given only its Wythoff symbol. Uniform star polyhedra The 57 nonprismatic nonconvex forms, with the exception of the great dirhombicosidodecahedron, are compiled by Wythoff constructions within Schwarz triangles. Convex forms by Wythoff construction The convex uniform polyhedra can be named by Wythoff construction operations on the regular form. In more detail, the convex uniform polyhedra are given below by their Wythoff construction within each symmetry group. Within the Wythoff construction, there are repetitions created by lower symmetry forms. The cube is a regular polyhedron, and a square prism. The octahedron is a regular polyhedron, and a triangular antiprism. The octahedron is also a rectified tetrahedron. Many polyhedra are repeated from different construction sources, and are colored differently. The Wythoff construction applies equally to uniform polyhedra and uniform tilings on the surface of a sphere, so images of both are given. The spherical tilings include the sets of hosohedra and dihedra, which are degenerate polyhedra. These symmetry groups are formed from the reflectional point groups in three dimensions, each represented by a fundamental triangle (p q r), where p > 1, q > 1, r > 1 and 1/p + 1/q + 1/r > 1. Tetrahedral symmetry (3 3 2) – order 24 Octahedral symmetry (4 3 2) – order 48 Icosahedral symmetry (5 3 2) – order 120 Dihedral symmetry (n 2 2), for n = 3,4,5,... – order 4n The remaining nonreflective forms are constructed by alternation operations applied to the polyhedra with an even number of sides. Along with the prisms and their dihedral symmetry, the spherical Wythoff construction process adds two regular classes which become degenerate as polyhedra: the dihedra and the hosohedra, the first having only two faces, and the second only two vertices. The truncation of the regular hosohedra creates the prisms. Below, the convex uniform polyhedra are indexed 1–18 for the nonprismatic forms as they are presented in the tables by symmetry form. For the infinite set of prismatic forms, they are indexed in four families: Hosohedra H2... (only as spherical tilings) Dihedra D2... (only as spherical tilings) Prisms P3... (truncated hosohedra) Antiprisms A3... (snub prisms) Summary tables And a sampling of dihedral symmetries: (The sphere is not cut, only the tiling is cut.) (On a sphere, an edge is the arc of the great circle, the shortest way, between its two vertices. Hence, a digon whose vertices are not polar-opposite is flat: it looks like an edge.) (3 3 2) Td tetrahedral symmetry The tetrahedral symmetry of the sphere generates 5 uniform polyhedra, and a 6th form by a snub operation. The tetrahedral symmetry is represented by a fundamental triangle with one vertex with two mirrors, and two vertices with three mirrors, represented by the symbol (3 3 2). It can also be represented by the Coxeter group A3 or [3,3], as well as a Coxeter diagram: .
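The triangle counts quoted in the following subsections (24, 48, 120, and 4n for the dihedral families) are the orders of the corresponding reflection groups. A standard relation, consistent with the condition 1/p + 1/q + 1/r > 1 above, gives the number of fundamental Schwarz triangles on the sphere as 4/(1/p + 1/q + 1/r − 1); the short Python check below (standard library only, purely illustrative) reproduces the figures cited in the text.

```python
from fractions import Fraction

def triangle_count(p, q, r):
    """Number of fundamental Schwarz triangles (= order of the reflection group)
    for a spherical triangle group (p q r) with 1/p + 1/q + 1/r > 1."""
    excess = Fraction(1, p) + Fraction(1, q) + Fraction(1, r) - 1
    assert excess > 0, "not a spherical triangle group"
    return 4 / excess

groups = {"(3 3 2) tetrahedral": (3, 3, 2),
          "(4 3 2) octahedral":  (4, 3, 2),
          "(5 3 2) icosahedral": (5, 3, 2),
          "(6 2 2) dihedral D6": (6, 2, 2)}
for name, pqr in groups.items():
    print(f"{name}: {triangle_count(*pqr)} triangles")
# Expected output: 24, 48, 120, and 4*6 = 24, matching the counts given below.
```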
There are 24 triangles, visible in the faces of the tetrakis hexahedron, and in the alternately colored triangles on a sphere: (4 3 2) Oh octahedral symmetry The octahedral symmetry of the sphere generates 7 uniform polyhedra, and 7 more by alternation. Six of these forms are repeated from the tetrahedral symmetry table above. The octahedral symmetry is represented by a fundamental triangle (4 3 2) counting the mirrors at each vertex. It can also be represented by the Coxeter group B3 or [4,3], as well as a Coxeter diagram: . There are 48 triangles, visible in the faces of the disdyakis dodecahedron, and in the alternately colored triangles on a sphere: (5 3 2) Ih icosahedral symmetry The icosahedral symmetry of the sphere generates 7 uniform polyhedra, and 1 more by alternation. Only one is repeated from the tetrahedral and octahedral symmetry tables above. The icosahedral symmetry is represented by a fundamental triangle (5 3 2) counting the mirrors at each vertex. It can also be represented by the Coxeter group H3 or [5,3], as well as a Coxeter diagram: . There are 120 triangles, visible in the faces of the disdyakis triacontahedron, and in the alternately colored triangles on a sphere: (p 2 2) Prismatic [p,2], I2(p) family (Dph dihedral symmetry) The dihedral symmetry of the sphere generates two infinite sets of uniform polyhedra, prisms and antiprisms, and two more infinite sets of degenerate polyhedra, the hosohedra and dihedra, which exist as tilings on the sphere. The dihedral symmetry is represented by a fundamental triangle (p 2 2) counting the mirrors at each vertex. It can also be represented by the Coxeter group I2(p) or [p,2], as well as a prismatic Coxeter diagram: . Below are the first five dihedral symmetries: D2 ... D6. The dihedral symmetry Dp has order 4p, represented by the faces of a bipyramid, and on the sphere by an equatorial line of latitude together with p equally-spaced lines of longitude. (2 2 2) Dihedral symmetry There are 8 fundamental triangles, visible in the faces of the square bipyramid (Octahedron) and alternately colored triangles on a sphere: (3 2 2) D3h dihedral symmetry There are 12 fundamental triangles, visible in the faces of the hexagonal bipyramid and alternately colored triangles on a sphere: (4 2 2) D4h dihedral symmetry There are 16 fundamental triangles, visible in the faces of the octagonal bipyramid and alternately colored triangles on a sphere: (5 2 2) D5h dihedral symmetry There are 20 fundamental triangles, visible in the faces of the decagonal bipyramid and alternately colored triangles on a sphere: (6 2 2) D6h dihedral symmetry There are 24 fundamental triangles, visible in the faces of the dodecagonal bipyramid and alternately colored triangles on a sphere. Wythoff construction operators See also Polyhedron Regular polyhedron Quasiregular polyhedron Semiregular polyhedron List of uniform polyhedra List of uniform polyhedra by vertex figure List of uniform polyhedra by Wythoff symbol List of uniform polyhedra by Schwarz triangle List of Johnson solids List of Wenninger polyhedron models Polyhedron model Uniform tiling Uniform tilings in hyperbolic plane Pseudo-uniform polyhedron List of shapes Notes References Brückner, M. Vielecke und Vielflache. Theorie und Geschichte. Leipzig, Germany: Teubner, 1900. External links Uniform Solution for Uniform Polyhedra The Uniform Polyhedra Virtual Polyhedra Uniform Polyhedra Uniform polyhedron gallery Uniform Polyhedron -- from Wolfram MathWorld Has a visual chart of all 75
Uniform polyhedron
[ "Physics" ]
3,320
[ "Uniform polytopes", "Uniform polyhedra", "Symmetry" ]
2,537,711
https://en.wikipedia.org/wiki/Ideal%20chain
An ideal chain (or freely-jointed chain) is the simplest model in polymer chemistry to describe polymers, such as nucleic acids and proteins. It assumes that the monomers in a polymer are located at the steps of a hypothetical random walker that does not remember its previous steps. By neglecting interactions among monomers, this model assumes that two (or more) monomers can occupy the same location. Although it is simple, its generality gives insight about the physics of polymers. In this model, monomers are rigid rods of a fixed length , and their orientation is completely independent of the orientations and positions of neighbouring monomers. In some cases, the monomer has a physical interpretation, such as an amino acid in a polypeptide. In other cases, a monomer is simply a segment of the polymer that can be modeled as behaving as a discrete, freely jointed unit. If so, is the Kuhn length. For example, chromatin is modeled as a polymer in which each monomer is a segment approximately in length. Model N mers form the polymer, whose total unfolded length is: where N is the number of mers. In this very simple approach where no interactions between mers are considered, the energy of the polymer is taken to be independent of its shape, which means that at thermodynamic equilibrium, all of its shape configurations are equally likely to occur as the polymer fluctuates in time, according to the Maxwell–Boltzmann distribution. Let us call the total end to end vector of an ideal chain and the vectors corresponding to individual mers. Those random vectors have components in the three directions of space. Most of the expressions given in this article assume that the number of mers N is large, so that the central limit theorem applies. The figure below shows a sketch of a (short) ideal chain. The two ends of the chain are not coincident, but they fluctuate around each other, so that of course: Throughout the article the brackets will be used to denote the mean (of values taken over time) of a random variable or a random vector, as above. Since are independent, it follows from the central limit theorem that is distributed according to a normal distribution (or gaussian distribution): precisely, in 3D, and are distributed according to a normal distribution of mean 0 and of variance: So that . The end to end vector of the chain is distributed according to the following probability density function: The average end-to-end distance of the polymer is: A quantity frequently used in polymer physics is the radius of gyration: It is worth noting that the above average end-to-end distance, which in the case of this simple model is also the typical amplitude of the system's fluctuations, becomes negligible compared to the total unfolded length of the polymer at the thermodynamic limit. This result is a general property of statistical systems. Mathematical remark: the rigorous demonstration of the expression of the density of probability is not as direct as it appears above: from the application of the usual (1D) central limit theorem one can deduce that , and are distributed according to a centered normal distribution of variance . Then, the expression given above for is not the only one that is compatible with such distribution for , and . However, since the components of the vectors are uncorrelated for the random walk we are considering, it follows that , and are also uncorrelated. This additional condition can only be fulfilled if is distributed according to . 
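The scaling of the mean square end-to-end distance with the number of bonds can also be checked numerically. The sketch below is a minimal Monte Carlo illustration assuming NumPy; the bond length, chain sizes and sample counts are arbitrary choices for the example, not values from the article. It samples freely jointed chains with independent random bond orientations and compares the measured mean square end-to-end distance with the ideal-chain prediction of N times the squared bond length.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_square_end_to_end(N, l=1.0, chains=2000):
    """Mean square end-to-end distance of freely jointed chains of N bonds of length l."""
    # Uniformly random directions on the unit sphere for every bond of every chain.
    v = rng.normal(size=(chains, N, 3))
    v /= np.linalg.norm(v, axis=-1, keepdims=True)
    R = l * v.sum(axis=1)                      # end-to-end vectors of each chain
    return (R ** 2).sum(axis=-1).mean()

for N in (10, 100, 1000):
    print(f"N = {N:4d}:  <R^2> ≈ {mean_square_end_to_end(N):8.1f}   (ideal chain: N*l^2 = {N})")
```

The Gaussian form of the end-to-end distribution itself can be derived analytically, as the surrounding text describes.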
Alternatively, this result can also be demonstrated by applying a multidimensional generalization of the central limit theorem, or through symmetry arguments. Generality of the model While the elementary model described above is totally unadapted to the description of real-world polymers at the microscopic scale, it does show some relevance at the macroscopic scale in the case of a polymer in solution whose monomers form an ideal mix with the solvent (in which case, the interactions between monomer and monomer, solvent molecule and solvent molecule, and between monomer and solvent are identical, and the system's energy can be considered constant, validating the hypotheses of the model). The relevancy of the model is, however, limited, even at the macroscopic scale, by the fact that it does not consider any excluded volume for monomers (or, to speak in chemical terms, that it neglects steric effects). Since the N mers are of a rigid, fixed length, the model also does not consider bond stretching, though it can be extended to do so. Other fluctuating polymer models that consider no interaction between monomers and no excluded volume, like the worm-like chain model, are all asymptotically convergent toward this model at the thermodynamic limit. For purpose of this analogy a Kuhn segment is introduced, corresponding to the equivalent monomer length to be considered in the analogous ideal chain. The number of Kuhn segments to be considered in the analogous ideal chain is equal to the total unfolded length of the polymer divided by the length of a Kuhn segment. Entropic elasticity of an ideal chain If the two free ends of an ideal chain are pulled apart by some sort of device, then the device experiences a force exerted by the polymer. As the ideal chain is stretched, its energy remains constant, and its time-average, or internal energy, also remains constant, which means that this force necessarily stems from a purely entropic effect. This entropic force is very similar to the pressure experienced by the walls of a box containing an ideal gas. The internal energy of an ideal gas depends only on its temperature, and not on the volume of its containing box, so it is not an energy effect that tends to increase the volume of the box like gas pressure does. This implies that the pressure of an ideal gas has a purely entropic origin. What is the microscopic origin of such an entropic force or pressure? The most general answer is that the effect of thermal fluctuations tends to bring a thermodynamic system toward a macroscopic state that corresponds to a maximum in the number of microscopic states (or micro-states) that are compatible with this macroscopic state. In other words, thermal fluctuations tend to bring a system toward its macroscopic state of maximum entropy. What does this mean in the case of the ideal chain? First, for our ideal chain, a microscopic state is characterized by the superposition of the states of each individual monomer (with i varying from 1 to N). In its solvent, the ideal chain is constantly subject to shocks from moving solvent molecules, and each of these shocks sends the system from its current microscopic state to another, very similar microscopic state. For an ideal polymer, as will be shown below, there are more microscopic states compatible with a short end-to-end distance than there are microscopic states compatible with a large end-to-end distance. Thus, for an ideal chain, maximizing its entropy means reducing the distance between its two free ends. 
Consequently, a force that tends to collapse the chain is exerted by the ideal chain between its two free ends. In this section, the mean of this force will be derived. The generality of the expression obtained at the thermodynamic limit will then be discussed. Ideal chain under length constraint The case of an ideal chain whose two ends are attached to fixed points will be considered in this sub-section. The vector joining these two points characterizes the macroscopic state (or macro-state) of the ideal chain. Each macro-state corresponds a certain number of micro-states, that we will call (micro-states are defined in the introduction to this section). Since the ideal chain's energy is constant, each of these micro-states is equally likely to occur. The entropy associated to a macro-state is thus equal to: where is the Boltzmann constant The above expression gives the absolute (quantum) entropy of the system. A precise determination of would require a quantum model for the ideal chain, which is beyond the scope of this article. However, we have already calculated the probability density associated with the end-to-end vector of the unconstrained ideal chain, above. Since all micro-states of the ideal chain are equally likely to occur, is proportional to . This leads to the following expression for the classical (relative) entropy of the ideal chain: where is a fixed constant. Let us call the force exerted by the chain on the point to which its end is attached. From the above expression of the entropy, we can deduce an expression of this force. Suppose that, instead of being fixed, the positions of the two ends of the ideal chain are now controlled by an operator. The operator controls the evolution of the end to end vector . If the operator changes by a tiny amount , then the variation of internal energy of the chain is zero, since the energy of the chain is constant. This condition can be written as: is defined as the elementary amount of mechanical work transferred by the operator to the ideal chain, and is defined as the elementary amount of heat transferred by the solvent to the ideal chain. Now, if we assume that the transformation imposed by the operator on the system is quasistatic (i.e., infinitely slow), then the system's transformation will be time-reversible, and we can assume that during its passage from macro-state to macro-state , the system passes through a series of thermodynamic equilibrium macro-states. This has two consequences: first, the amount of heat received by the system during the transformation can be tied to the variation of its entropy: where is the temperature of the chain. second, in order for the transformation to remain infinitely slow, the mean force exerted by the operator on the end points of the chain must balance the mean force exerted by the chain on its end points. Calling the force exerted by the operator and the force exerted by the chain, we have: We are thus led to: The above equation is the equation of state of the ideal chain. Since the expression depends on the central limit theorem, it is only exact in the limit of polymers containing a large number of monomers (that is, the thermodynamic limit). It is also only valid for small end-to-end distances, relative to the overall polymer contour length, where the behavior is like a hookean spring. Behavior over larger force ranges can be modeled using a canonical ensemble treatment identical to magnetization of paramagnetic spins. 
For the arbitrary forces the extension-force dependence will be given by Langevin function : where the extension is . For the arbitrary extensions the force-extension dependence can be approximated by: where is the inverse Langevin function, is the number of bonds in the molecule (therefore if the molecule has bonds it has monomers making up the molecule.). Finally, the model can be extended to even larger force ranges by inclusion of a stretch modulus along the polymer contour length. That is, by allowing the length of each unit of the chain to respond elastically to the applied force. Ideal polymer exchanging length with a reservoir Throughout this sub-section, as in the previous one, the two ends of the polymer are attached to a micro-manipulation device. This time, however, the device does not maintain the two ends of the ideal chain in a fixed position, but rather it maintains a constant pulling force on the ideal chain. In this case the two ends of the polymer fluctuate around a mean position . The ideal chain reacts with a constant opposite force . For an ideal chain exchanging length with a reservoir, a macro-state of the system is characterized by the vector . The change between an ideal chain of fixed length and an ideal chain in contact with a length reservoir is very much akin to the change between the micro-canonical ensemble and the canonical ensemble (see the Statistical mechanics article about this). The change is from a state where a fixed value is imposed on a certain parameter, to a state where the system is left free to exchange this parameter with the outside. The parameter in question is energy for the microcanonical and canonical descriptions, whereas in the case of the ideal chain the parameter is the length of the ideal chain. As in the micro-canonical and canonical ensembles, the two descriptions of the ideal chain differ only in the way they treat the system's fluctuations. They are thus equivalent at the thermodynamic limit. The equation of state of the ideal chain remains the same, except that is now subject to fluctuations: Ideal chain under a constant force constraint – calculation Consider a freely jointed chain of bonds of length subject to a constant elongational force applied to its ends along the axis and an environment temperature . An example could be a chain with two opposite charges and at its ends in a constant electric field applied along the axis as sketched in the figure on the right. If the direct Coulomb interaction between the charges is ignored, then there is a constant force at the two ends. Different chain conformations are not equally likely, because they correspond to different energy of the chain in the external electric field. Thus, different chain conformation have different statistical Boltzmann factors . The partition function is: Every monomer connection in the chain is characterized by a vector of length and angles in the spherical coordinate system. The end-to-end vector can be represented as: . Therefore: The Gibbs free energy G can be directly calculated from the partition function: The Gibbs free energy is used here because the ensemble of chains corresponds to constant temperature and constant force (analogous to the isothermal–isobaric ensemble, which has constant temperature and pressure). The average end-to-end distance corresponding to a given force can be obtained as the derivative of the free energy: This expression is the Langevin function , also mentioned in previous paragraphs: where . 
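To make this force–extension relation concrete, the sketch below (plain Python; the force values are illustrative) evaluates the Langevin function L(x) = coth(x) − 1/x, which gives the relative extension of the chain as a function of the dimensionless force, and compares it with the linear small-force limit x/3 discussed next.

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x: relative extension of a freely
    jointed chain under a constant dimensionless force x = f*l/(kB*T)."""
    if abs(x) < 1e-4:
        return x / 3.0                      # leading series term, avoids 0/0 at x = 0
    return 1.0 / math.tanh(x) - 1.0 / x

for x in (0.01, 0.1, 1.0, 5.0, 20.0):
    print(f"x = {x:5.2f}:  extension fraction = {langevin(x):.4f}   (linear estimate x/3 = {x/3:.4f})")
```

For small x the two columns agree, which is the Hookean regime; for large x the extension saturates as the chain approaches its full contour length.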
For small relative elongations () the dependence is approximately linear, and follows Hooke's law as shown in previous paragraphs: See also Polymer Worm-like chain, a more complex polymer model Kuhn length Coil–globule transition References Polymer chemistry Polymer physics
Ideal chain
[ "Chemistry", "Materials_science", "Engineering" ]
2,923
[ "Polymer physics", "Materials science", "Polymer chemistry" ]
2,537,958
https://en.wikipedia.org/wiki/Oseledets%20theorem
In mathematics, the multiplicative ergodic theorem, or Oseledets theorem provides the theoretical background for computation of Lyapunov exponents of a nonlinear dynamical system. It was proved by Valery Oseledets (also spelled "Oseledec") in 1965 and reported at the International Mathematical Congress in Moscow in 1966. A conceptually different proof of the multiplicative ergodic theorem was found by M. S. Raghunathan. The theorem has been extended to semisimple Lie groups by V. A. Kaimanovich and further generalized in the works of David Ruelle, Grigory Margulis, Anders Karlsson, and François Ledrappier. Cocycles The multiplicative ergodic theorem is stated in terms of matrix cocycles of a dynamical system. The theorem states conditions for the existence of the defining limits and describes the Lyapunov exponents. It does not address the rate of convergence. A cocycle of an autonomous dynamical system X is a map C : X×T → Rn×n satisfying where X and T (with T = Z⁺ or T = R⁺) are the phase space and the time range, respectively, of the dynamical system, and In is the n-dimensional unit matrix. The dimension n of the matrices C is not related to the phase space X. Examples A prominent example of a cocycle is given by the matrix Jt in the theory of Lyapunov exponents. In this special case, the dimension n of the matrices is the same as the dimension of the manifold X. For any cocycle C, the determinant det C(x, t) is a one-dimensional cocycle. Statement of the theorem Let μ be an ergodic invariant measure on X and C a cocycle of the dynamical system such that for each t ∈ T, the maps and are L1-integrable with respect to μ. Then for μ-almost all x and each non-zero vector u ∈ Rn the limit exists and assumes, depending on u but not on x, up to n different values. These are the Lyapunov exponents. Further, if λ1 > ... > λm are the different limits then there are subspaces Rn = R1 ⊃ ... ⊃ Rm ⊃ Rm+1 = {0}, depending on x, such that the limit is λi for u ∈ Ri \ Ri+1 and i = 1, ..., m. The values of the Lyapunov exponents are invariant with respect to a wide range of coordinate transformations. Suppose that g : X → X is a one-to-one map such that and its inverse exist; then the values of the Lyapunov exponents do not change. Additive versus multiplicative ergodic theorems Verbally, ergodicity means that time and space averages are equal, formally: where the integrals and the limit exist. Space average (right hand side, μ is an ergodic measure on X) is the accumulation of f(x) values weighted by μ(dx). Since addition is commutative, the accumulation of the f(x)μ(dx) values may be done in arbitrary order. In contrast, the time average (left hand side) suggests a specific ordering of the f(x(s)) values along the trajectory. Since matrix multiplication is, in general, not commutative, accumulation of multiplied cocycle values (and limits thereof) according to C(x(t0),tk) = C(x(tk−1),tk − tk−1) ... C(x(t0),t1 − t0) — for tk large and the steps ti − ti−1 small — makes sense only for a prescribed ordering. Thus, the time average may exist (and the theorem states that it actually exists), but there is no space average counterpart. In other words, the Oseledets theorem differs from additive ergodic theorems (such as G. D. Birkhoff's and J. von Neumann's) in that it guarantees the existence of the time average, but makes no claim about the space average. References External links V. I. Oseledets, Oseledets theorem at Scholarpedia Ergodic theory Theorems in dynamical systems
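In practice, the theorem justifies the standard numerical procedure for Lyapunov exponents: multiply the cocycle matrices along a trajectory and track the average logarithmic growth of an orthonormal frame. The sketch below (assuming NumPy; the i.i.d. random 2×2 cocycle is purely an illustration, not an example from the article) estimates the exponents of a product of random matrices using repeated QR re-orthonormalization.

```python
import numpy as np

rng = np.random.default_rng(1)

def lyapunov_exponents(matrix_iter, steps, dim):
    """Estimate the Lyapunov exponents of a matrix cocycle C = A_t ... A_1
    by QR re-orthonormalization, treating each factor as one unit of time."""
    Q = np.eye(dim)
    sums = np.zeros(dim)
    for _ in range(steps):
        A = next(matrix_iter)
        Q, R = np.linalg.qr(A @ Q)
        sums += np.log(np.abs(np.diag(R)))   # growth rates along the orthonormal frame
    return sums / steps

# Illustrative cocycle: i.i.d. random 2x2 matrices driven by an ergodic source.
def random_matrices():
    while True:
        yield rng.normal(size=(2, 2))

exps = lyapunov_exponents(random_matrices(), steps=100_000, dim=2)
print("estimated Lyapunov exponents:", np.sort(exps)[::-1])
```

The same routine applies to the Jacobian cocycle Jt of a smooth dynamical system, with each factor evaluated along the orbit.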
Oseledets theorem
[ "Mathematics" ]
924
[ "Theorems in dynamical systems", "Ergodic theory", "Mathematical problems", "Mathematical theorems", "Dynamical systems" ]
38,824
https://en.wikipedia.org/wiki/Electric%20power%20transmission
Electric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant, to an electrical substation. The interconnected lines that facilitate this movement form a transmission network. This is distinct from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution. The combined transmission and distribution network is part of electricity delivery, known as the electrical grid. Efficient long-distance transmission of electric power requires high voltages. This reduces the losses produced by strong currents. Transmission lines use either alternating current (AC) or direct current (DC). The voltage level is changed with transformers. The voltage is stepped up for transmission, then reduced for local distribution. A wide area synchronous grid, known as an interconnection in North America, directly connects generators delivering AC power with the same relative frequency to many consumers. North America has four major interconnections: Western, Eastern, Quebec and Texas. One grid connects most of continental Europe. Historically, transmission and distribution lines were often owned by the same company, but starting in the 1990s, many countries liberalized the regulation of the electricity market in ways that led to separate companies handling transmission and distribution. System Most North American transmission lines are high-voltage three-phase AC, although single phase AC is sometimes used in railway electrification systems. DC technology is used for greater efficiency over longer distances, typically hundreds of miles. High-voltage direct current (HVDC) technology is also used in submarine power cables (typically longer than 30 miles (50 km)), and in the interchange of power between grids that are not mutually synchronized. HVDC links stabilize power distribution networks where sudden new loads, or blackouts, in one part of a network might otherwise result in synchronization problems and cascading failures. Electricity is transmitted at high voltages to reduce the energy loss due to resistance that occurs over long distances. Power is usually transmitted through overhead power lines. Underground power transmission has a significantly higher installation cost and greater operational limitations, but lowers maintenance costs. Underground transmission is more common in urban areas or environmentally sensitive locations. Electrical energy must typically be generated at the same rate at which it is consumed. A sophisticated control system is required to ensure that power generation closely matches demand. If demand exceeds supply, the imbalance can cause generation plant(s) and transmission equipment to automatically disconnect or shut down to prevent damage. In the worst case, this may lead to a cascading series of shutdowns and a major regional blackout. The US Northeast faced blackouts in 1965, 1977, 2003, and major blackouts in other US regions in 1996 and 2011. Electric transmission networks are interconnected into regional, national, and even continent-wide networks to reduce the risk of such a failure by providing multiple redundant, alternative routes for power to flow should such shutdowns occur. Transmission companies determine the maximum reliable capacity of each line (ordinarily less than its physical or thermal limit) to ensure that spare capacity is available in the event of a failure in another part of the network. 
Overhead High-voltage overhead conductors are not covered by insulation. The conductor material is nearly always an aluminium alloy, formed of several strands and possibly reinforced with steel strands. Copper was sometimes used for overhead transmission, but aluminum is lighter, offers only a marginal reduction in performance and costs much less. Overhead conductors are supplied by several companies. Conductor material and shapes are regularly improved to increase capacity. Conductor sizes range from 12 mm2 (#6 American wire gauge) to 750 mm2 (1,590,000 circular mils area), with varying resistance and current-carrying capacity. For large conductors (more than a few centimetres in diameter), much of the current flow is concentrated near the surface due to the skin effect. The center of the conductor carries little current but contributes weight and cost. Thus, multiple parallel cables (called bundle conductors) are used for higher capacity. Bundle conductors are also used at high voltages to reduce energy loss caused by corona discharge. Today, transmission-level voltages are usually 110 kV and above. Lower voltages, such as 66 kV and 33 kV, are usually considered subtransmission voltages, but are occasionally used on long lines with light loads. Voltages less than 33 kV are usually used for distribution. Voltages above 765 kV are considered extra high voltage and require different designs. Overhead transmission wires depend on air for insulation, requiring that lines maintain minimum clearances. Adverse weather conditions, such as high winds and low temperatures, can interrupt transmission. Wind speeds as low as can permit conductors to encroach on operating clearances, resulting in a flashover and loss of supply. Oscillatory motion of the physical line is termed conductor gallop or flutter depending on the frequency and amplitude of oscillation. Underground Electric power can be transmitted by underground power cables. Underground cables take up no right-of-way, have lower visibility, and are less affected by weather. However, cables must be insulated. Cable and excavation costs are much higher than overhead construction. Faults in buried transmission lines take longer to locate and repair. In some metropolitan areas, cables are enclosed by metal pipe and insulated with dielectric fluid (usually an oil) that is either static or circulated via pumps. If an electric fault damages the pipe and leaks dielectric, liquid nitrogen is used to freeze portions of the pipe to enable draining and repair. This extends the repair period and increases costs. The temperature of the pipe and surroundings are monitored throughout the repair period. Underground lines are limited by their thermal capacity, which permits less overload or re-rating of lines. Long underground AC cables have significant capacitance, which reduces their ability to provide useful power beyond . DC cables are not limited in length by their capacitance. History Commercial electric power was initially transmitted at the same voltage used by lighting and mechanical loads. This restricted the distance between generating plant and loads. In 1882, DC voltage could not easily be increased for long-distance transmission. Different classes of loads (for example, lighting, fixed motors, and traction/railway systems) required different voltages, and so used different generators and circuits. Thus, generators were sited near their loads, a practice that later became known as distributed generation using large numbers of small generators.
Transmission of alternating current (AC) became possible after Lucien Gaulard and John Dixon Gibbs built what they called the secondary generator, an early transformer provided with 1:1 turn ratio and open magnetic circuit, in 1881. The first long distance AC line was long, built for the 1884 International Exhibition of Electricity in Turin, Italy. It was powered by a 2 kV, 130 Hz Siemens & Halske alternator and featured several Gaulard transformers with primary windings connected in series, which fed incandescent lamps. The system proved the feasibility of AC electric power transmission over long distances. The first commercial AC distribution system entered service in 1885 in via dei Cerchi, Rome, Italy, for public lighting. It was powered by two Siemens & Halske alternators rated 30 hp (22 kW), 2 kV at 120 Hz and used 19 km of cables and 200 parallel-connected 2 kV to 20 V step-down transformers provided with a closed magnetic circuit, one for each lamp. A few months later it was followed by the first British AC system, serving Grosvenor Gallery. It also featured Siemens alternators and 2.4 kV to 100 V step-down transformers – one per user – with shunt-connected primaries. Working to improve what he considered an impractical Gaulard-Gibbs design, electrical engineer William Stanley, Jr. developed the first practical series AC transformer in 1885. Working with the support of George Westinghouse, in 1886 he demonstrated a transformer-based AC lighting system in Great Barrington, Massachusetts. It was powered by a steam engine-driven 500 V Siemens generator. Voltage was stepped down to 100 volts using the Stanley transformer to power incandescent lamps at 23 businesses over . This practical demonstration of a transformer and alternating current lighting system led Westinghouse to begin installing AC systems later that year. In 1888 the first designs for an AC motor appeared. These were induction motors running on polyphase current, independently invented by Galileo Ferraris and Nikola Tesla. Westinghouse licensed Tesla's design. Practical three-phase motors were designed by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Widespread use of such motors were delayed many years by development problems and the scarcity of polyphase power systems needed to power them. In the late 1880s and early 1890s smaller electric companies merged into larger corporations such as Ganz and AEG in Europe and General Electric and Westinghouse Electric in the US. These companies developed AC systems, but the technical difference between direct and alternating current systems required a much longer technical merger. Alternating current's economies of scale with large generating plants and long-distance transmission slowly added the ability to link all the loads. These included single phase AC systems, poly-phase AC systems, low voltage incandescent lighting, high-voltage arc lighting, and existing DC motors in factories and street cars. In what became a universal system, these technological differences were temporarily bridged via the rotary converters and motor-generators that allowed the legacy systems to connect to the AC grid. These stopgaps were slowly replaced as older systems were retired or upgraded. The first transmission of single-phase alternating current using high voltage came in Oregon in 1890 when power was delivered from a hydroelectric plant at Willamette Falls to the city of Portland down river. 
The first three-phase alternating current using high voltage took place in 1891 during the international electricity exhibition in Frankfurt. A 15 kV transmission line, approximately 175 km long, connected Lauffen on the Neckar and Frankfurt. Transmission voltages increased throughout the 20th century. By 1914, fifty-five transmission systems operating at more than 70 kV were in service. The highest voltage then used was 150 kV. Interconnecting multiple generating plants over a wide area reduced costs. The most efficient plants could be used to supply varying loads during the day. Reliability was improved and capital costs were reduced, because stand-by generating capacity could be shared over many more customers and a wider area. Remote and low-cost sources of energy, such as hydroelectric power or mine-mouth coal, could be exploited to further lower costs. The 20th century's rapid industrialization made electrical transmission lines and grids critical infrastructure. Interconnection of local generation plants and small distribution networks was spurred by World War I, when large electrical generating plants were built by governments to power munitions factories. Bulk transmission These networks use components such as power lines, cables, circuit breakers, switches and transformers. The transmission network is usually administered on a regional basis by an entity such as a regional transmission organization or transmission system operator. Transmission efficiency is improved at higher voltage and lower current. The reduced current reduces heating losses. Joule's first law states that energy losses are proportional to the square of the current. Thus, reducing the current by a factor of two lowers the energy lost to conductor resistance by a factor of four for any given size of conductor. The optimum size of a conductor for a given voltage and current can be estimated by Kelvin's law for conductor size, which states that size is optimal when the annual cost of energy wasted in resistance is equal to the annual capital charges of providing the conductor. At times of lower interest rates and low commodity costs, Kelvin's law indicates that thicker wires are optimal. Otherwise, thinner conductors are indicated. Since power lines are designed for long-term use, Kelvin's law is used in conjunction with long-term estimates of the price of copper and aluminum as well as interest rates. Higher voltage is achieved in AC circuits by using a step-up transformer. High-voltage direct current (HVDC) systems require relatively costly conversion equipment that may be economically justified for particular projects such as submarine cables and longer distance high capacity point-to-point transmission. HVDC is necessary for sending energy between unsynchronized grids. A transmission grid is a network of power stations, transmission lines, and substations. Energy is usually transmitted within a grid with three-phase AC. Single-phase AC is used only for distribution to end users since it is not usable for large polyphase induction motors. In the 19th century, two-phase transmission was used but required either four wires or three wires with unequal currents. Higher order phase systems require more than three wires, but deliver little or no benefit. While the price of generating capacity is high, energy demand is variable, making it often cheaper to import needed power than to generate it locally. Because loads often rise and fall together across large areas, power often comes from distant sources. 
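Kelvin's law, described earlier in this section, can be illustrated with a small cost sweep. In the sketch below (assuming NumPy; all prices, rates and the aluminium resistivity figure are illustrative assumptions rather than data from the article) the economically optimal cross-section is the one at which the annual capital charge on the conductor roughly equals the annual cost of resistive energy losses.

```python
import numpy as np

def annual_cost(area_mm2, current_a=400.0, length_km=100.0, resistivity=0.0283,
                conductor_cost=30.0, interest_rate=0.10, energy_price=0.05):
    """Annual cost of a conductor: capital charges plus the cost of resistive losses.
    resistivity in ohm*mm^2/m (aluminium), conductor_cost in $ per mm^2 per km,
    energy_price in $ per kWh. All figures are illustrative assumptions."""
    resistance = resistivity * length_km * 1000.0 / area_mm2            # ohms over the line
    loss_cost = current_a ** 2 * resistance / 1000.0 * 8760.0 * energy_price
    capital_charge = conductor_cost * area_mm2 * length_km * interest_rate
    return capital_charge + loss_cost, capital_charge, loss_cost

areas = np.linspace(200, 1600, 141)
totals = np.array([annual_cost(a)[0] for a in areas])
best = areas[int(np.argmin(totals))]
_, capital, losses = annual_cost(best)
print(f"optimal cross-section ≈ {best:.0f} mm^2  "
      f"(capital charge ≈ ${capital:,.0f}/yr, loss cost ≈ ${losses:,.0f}/yr)")
```

At the optimum the two cost terms come out nearly equal, which is precisely the balance Kelvin's law describes.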
Because of the economic benefits of load sharing, wide area transmission grids may span countries and even continents. Interconnections between producers and consumers enables power to flow even if some links are inoperative. The slowly varying portion of demand is known as the base load and is generally served by large facilities with constant operating costs, termed firm power. Such facilities are nuclear, coal or hydroelectric, while other energy sources such as concentrated solar thermal and geothermal power have the potential to provide firm power. Renewable energy sources, such as solar photovoltaics, wind, wave, and tidal, are, due to their intermittency, not considered to be firm. The remaining or peak power demand, is supplied by peaking power plants, which are typically smaller, faster-responding, and higher cost sources, such as combined cycle or combustion turbine plants typically fueled by natural gas. Long-distance transmission (hundreds of kilometers) is cheap and efficient, with costs of US$0.005–0.02 per kWh, compared to annual averaged large producer costs of US$0.01–0.025 per kWh, retail rates upwards of US$0.10 per kWh, and multiples of retail for instantaneous suppliers at unpredicted high demand moments. New York often buys over 1000 MW of low-cost hydropower from Canada. Local sources (even if more expensive and infrequently used) can protect the power supply from weather and other disasters that can disconnect distant suppliers. Hydro and wind sources cannot be moved closer to big cities, and solar costs are lowest in remote areas where local power needs are nominal. Connection costs can determine whether any particular renewable alternative is economically realistic. Costs can be prohibitive for transmission lines, but high capacity, long distance super grid transmission network costs could be recovered with modest usage fees. Grid input At power stations, power is produced at a relatively low voltage between about 2.3 kV and 30 kV, depending on the size of the unit. The voltage is then stepped up by the power station transformer to a higher voltage (115 kV to 765 kV AC) for transmission. In the United States, power transmission is, variously, 230 kV to 500 kV, with less than 230 kV or more than 500 kV as exceptions. The Western Interconnection has two primary interchange voltages: 500 kV AC at 60 Hz, and ±500 kV (1,000 kV net) DC from North to South (Columbia River to Southern California) and Northeast to Southwest (Utah to Southern California). The 287.5 kV (Hoover Dam to Los Angeles line, via Victorville) and 345 kV (Arizona Public Service (APS) line) are local standards, both of which were implemented before 500 kV became practical. Losses Transmitting electricity at high voltage reduces the fraction of energy lost to Joule heating, which varies by conductor type, the current, and the transmission distance. For example, a span at 765 kV carrying 1000 MW of power can have losses of 0.5% to 1.1%. A 345 kV line carrying the same load across the same distance has losses of 4.2%. For a given amount of power, a higher voltage reduces the current and thus the resistive losses. For example, raising the voltage by a factor of 10 reduces the current by a corresponding factor of 10 and therefore the losses by a factor of 100, provided the same sized conductors are used in both cases. Even if the conductor size (cross-sectional area) is decreased ten-fold to match the lower current, the losses are still reduced ten-fold using the higher voltage. 
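A quick numerical check of this scaling (with an illustrative line resistance and power level, not data from the article): for a fixed power delivery the current varies as 1/V, so the Joule loss I²R falls with the square of the transmission voltage.

```python
def loss_fraction(power_mw, voltage_kv, line_resistance_ohm):
    """Fraction of the transmitted power lost to Joule heating (single-conductor model)."""
    current = power_mw * 1e6 / (voltage_kv * 1e3)    # I = P / V
    loss_w = current ** 2 * line_resistance_ohm      # Joule's first law: P_loss = I^2 * R
    return loss_w / (power_mw * 1e6)

for kv in (115, 230, 345, 500, 765):
    print(f"{kv:4d} kV: loss fraction ≈ {loss_fraction(1000, kv, 1.0):.3%}")
# Going from 115 kV to 765 kV (a factor of ~6.7 in voltage) cuts the loss by a factor of ~44.
```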
While power loss can also be reduced by increasing the wire's conductance (by increasing its cross-sectional area), larger conductors are heavier and more expensive. And since conductance is proportional to cross-sectional area, resistive power loss is only reduced proportionally with increasing cross-sectional area, providing a much smaller benefit than the squared reduction provided by multiplying the voltage. Long-distance transmission is typically done with overhead lines at voltages of 115 to 1,200 kV. At higher voltages, where more than 2,000 kV exists between conductor and ground, corona discharge losses are so large that they can offset the lower resistive losses in the line conductors. Measures to reduce corona losses include larger conductor diameter, hollow cores or conductor bundles. Factors that affect resistance and thus loss include temperature, spiraling, and the skin effect. Resistance increases with temperature. Spiraling, which refers to the way stranded conductors spiral about the center, also contributes to increases in conductor resistance. The skin effect causes the effective resistance to increase at higher AC frequencies. Corona and resistive losses can be estimated using a mathematical model. US transmission and distribution losses were estimated at 6.6% in 1997, 6.5% in 2007 and 5% from 2013 to 2019. In general, losses are estimated from the discrepancy between power produced (as reported by power plants) and power sold; the difference constitutes transmission and distribution losses, assuming no utility theft occurs. As of 1980, the longest cost-effective distance for DC transmission was . For AC it was , though US transmission lines are substantially shorter. In any AC line, conductor inductance and capacitance can be significant. Currents that flow solely in reaction to these properties (which together with the resistance define the impedance) constitute reactive power flow, which transmits no power to the load. These reactive currents, however, cause extra heating losses. The ratio of real power transmitted to the load to apparent power (the product of a circuit's voltage and current, without reference to phase angle) is the power factor. As reactive current increases, reactive power increases and power factor decreases. For transmission systems with low power factor, losses are higher than for systems with high power factor. Utilities add capacitor banks, reactors and other components (such as phase-shifters; static VAR compensators; and flexible AC transmission systems, FACTS) throughout the system to help compensate for the reactive power flow, reduce the losses in power transmission and stabilize system voltages. These measures are collectively called 'reactive support'. Transposition Current flowing through transmission lines induces a magnetic field that surrounds the lines of each phase and affects the inductance of the surrounding conductors of other phases. The conductors' mutual inductance is partially dependent on the physical orientation of the lines with respect to each other. Three-phase lines are conventionally strung with phases separated vertically. The mutual inductance seen by a conductor of the phase in the middle of the other two phases is different from the inductance seen on the top/bottom. Unbalanced inductance among the three conductors is problematic because it may force the middle line to carry a disproportionate amount of the total power transmitted.
Similarly, an unbalanced load may occur if one line is consistently closest to the ground and operates at a lower impedance. Because of this phenomenon, conductors must be periodically transposed along the line so that each phase sees equal time in each relative position to balance out the mutual inductance seen by all three phases. To accomplish this, line position is swapped at specially designed transposition towers at regular intervals along the line using various transposition schemes. Subtransmission Subtransmission runs at relatively lower voltages. It is uneconomical to connect all distribution substations to the high main transmission voltage, because that equipment is larger and more expensive. Typically, only larger substations connect with this high voltage. Voltage is stepped down before the current is sent to smaller substations. Subtransmission circuits are usually arranged in loops so that a single line failure does not stop service to many customers for more than a short time. Loops can be normally closed, where loss of one circuit should result in no interruption, or normally open where substations can switch to a backup supply. While subtransmission circuits are usually carried on overhead lines, in urban areas buried cable may be used. The lower-voltage subtransmission lines use less right-of-way and simpler structures; undergrounding is less difficult. No fixed cutoff separates subtransmission and transmission, or subtransmission and distribution. Their voltage ranges overlap. Voltages of 69 kV, 115 kV, and 138 kV are often used for subtransmission in North America. As power systems evolved, voltages formerly used for transmission were used for subtransmission, and subtransmission voltages became distribution voltages. Like transmission, subtransmission moves relatively large amounts of power, and like distribution, subtransmission covers an area instead of just point-to-point. Transmission grid exit Substation transformers reduce the voltage to a lower level for distribution to customers. This distribution is accomplished with a combination of sub-transmission (33 to 138 kV) and distribution (3.3 to 25 kV). Finally, at the point of use, the energy is transformed to end-user voltage (100 to 4160 volts). Advantage of high-voltage transmission High-voltage power transmission allows for lesser resistive losses over long distances. This efficiency delivers a larger proportion of the generated power to the loads. In a simplified model, the grid delivers electricity from an ideal voltage source with voltage , delivering a power ) to a single point of consumption, modelled by a resistance , when the wires are long enough to have a significant resistance . If the resistances are in series with no intervening transformer, the circuit acts as a voltage divider, because the same current runs through the wire resistance and the powered device. As a consequence, the useful power (at the point of consumption) is: Should an ideal transformer convert high-voltage, low-current electricity into low-voltage, high-current electricity with a voltage ratio of (i.e., the voltage is divided by and the current is multiplied by in the secondary branch, compared to the primary branch), then the circuit is again equivalent to a voltage divider, but the wires now have apparent resistance of only . The useful power is then: For (i.e. 
conversion of high voltage to low voltage near the consumption point), a larger fraction of the generator's power is transmitted to the consumption point and a lesser fraction is lost to Joule heating. Modeling The terminal characteristics of the transmission line are the voltage and current at the sending (S) and receiving (R) ends. The transmission line can be modeled as a black box and a 2 by 2 transmission matrix is used to model its behavior, as follows: The line is assumed to be a reciprocal, symmetrical network, meaning that the receiving and sending labels can be switched with no consequence. The transmission matrix T has the properties: The parameters A, B, C, and D differ depending on how the desired model handles the line's resistance (R), inductance (L), capacitance (C), and shunt (parallel, leak) conductance G. The four main models are the short line approximation, the medium line approximation, the long line approximation (with distributed parameters), and the lossless line. In such models, a capital letter such as R refers to the total quantity summed over the line and a lowercase letter such as c refers to the per-unit-length quantity. Lossless line The lossless line approximation is the least accurate; it is typically used on short lines where the inductance is much greater than the resistance. For this approximation, the voltage and current are identical at the sending and receiving ends. The characteristic impedance is purely real (that is, resistive), and it is often called the surge impedance. When a lossless line is terminated by its surge impedance, the voltage does not drop. Though the phase angles of voltage and current are rotated, the magnitudes of voltage and current remain constant along the line. For load > SIL (surge impedance loading), the voltage drops from the sending end and the line consumes VARs. For load < SIL, the voltage increases from the sending end, and the line generates VARs. Short line The short line approximation is normally used for lines shorter than . There, only a series impedance Z is considered, while C and G are ignored. The final result is that A = D = 1 per unit, B = Z ohms, and C = 0. The associated transmission matrix for this approximation is therefore: Medium line The medium line approximation is used for lines running between . The series impedance and the shunt (current leak) conductance are considered, placing half of the shunt conductance at each end of the line. This circuit is often referred to as a nominal π (pi) circuit because of the shape (π) that is taken on when leak conductance is placed on both sides of the circuit diagram. The analysis of the medium line produces: Counterintuitive behaviors of medium-length transmission lines include a voltage rise at no load or small current (the Ferranti effect), and a receiving-end current that can exceed the sending-end current. Long line The long line model is used when a higher degree of accuracy is needed or when the line under consideration is more than long. Series resistance and shunt conductance are considered to be distributed parameters, such that each differential length of the line has a corresponding differential series impedance and shunt admittance. The following result can be applied at any point along the transmission line, where is the propagation constant. To find the voltage and current at the end of the long line, should be replaced with (the line length) in all parameters of the transmission matrix. This model applies the Telegrapher's equations. 
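As a minimal numeric sketch of the two-port (ABCD) description used in this section, the following Python fragment applies the short-line approximation (A = D = 1, B = Z, C = 0) to recover the sending-end voltage and current from assumed receiving-end conditions. The line impedance, length, voltage and load values are illustrative assumptions, not figures taken from this article.

import math, cmath

# Illustrative assumptions (not from the article): a 100 km line with a series
# impedance of (0.05 + 0.40j) ohm/km, feeding a 50 MW, 0.9-power-factor load
# at a receiving-end voltage of 132 kV line-to-line.
Z = (0.05 + 0.40j) * 100.0          # total series impedance, ohms

# Short-line approximation: A = D = 1, B = Z, C = 0.
A, B, C, D = 1.0, Z, 0.0, 1.0

V_r = 132e3 / math.sqrt(3)          # receiving-end phase voltage (V), taken as reference
I_mag = (50e6 / 3) / (V_r * 0.9)    # per-phase current magnitude (A)
I_r = I_mag * cmath.exp(-1j * math.acos(0.9))   # lagging current phasor

# [Vs, Is] = [[A, B], [C, D]] applied to [Vr, Ir]
V_s = A * V_r + B * I_r
I_s = C * V_r + D * I_r

print(f"sending-end voltage {abs(V_s)/1e3:.1f} kV per phase, current {abs(I_s):.0f} A")

With these assumed numbers the sending end must hold roughly 82 kV per phase to keep 76 kV at the receiving end, which is the kind of calculation the medium- and long-line models refine by adding shunt and distributed effects.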
High-voltage direct current High-voltage direct current (HVDC) is used to transmit large amounts of power over long distances or for interconnections between asynchronous grids. When electrical energy is transmitted over very long distances, the power lost in AC transmission becomes appreciable and it is less expensive to use direct current instead. For a long transmission line, these lower losses (and reduced construction cost of a DC line) can offset the cost of the required converter stations at each end. HVDC is used for long submarine cables where AC cannot be used because of cable capacitance. In these cases special high-voltage cables are used. Submarine HVDC systems are often used to interconnect the electricity grids of islands, for example, between Great Britain and continental Europe, between Great Britain and Ireland, between Tasmania and the Australian mainland, between the North and South Islands of New Zealand, between New Jersey and New York City, and between New Jersey and Long Island. Submarine connections up to in length have been deployed. HVDC links can be used to control grid problems. The power transmitted by an AC line increases as the phase angle between source end voltage and destination ends increases, but too large a phase angle allows the systems at either end to fall out of step. Since the power flow in a DC link is controlled independently of the phases of the AC networks that it connects, this phase angle limit does not exist, and a DC link is always able to transfer its full rated power. A DC link therefore stabilizes the AC grid at either end, since power flow and phase angle can then be controlled independently. As an example, to adjust the flow of AC power on a hypothetical line between Seattle and Boston would require adjustment of the relative phase of the two regional electrical grids. This is an everyday occurrence in AC systems, but one that can become disrupted when AC system components fail and place unexpected loads on the grid. With an HVDC line instead, such an interconnection would: Convert AC in Seattle into HVDC; Use HVDC for the of cross-country transmission; and Convert the HVDC to locally synchronized AC in Boston, (and possibly in other cooperating cities along the transmission route). Such a system could be less prone to failure if parts of it were suddenly shut down. One example of a long DC transmission line is the Pacific DC Intertie located in the Western United States. Capacity The amount of power that can be sent over a transmission line varies with the length of the line. The heating of short line conductors due to line losses sets a thermal limit. If too much current is drawn, conductors may sag too close to the ground, or conductors and equipment may overheat. For intermediate-length lines on the order of , the limit is set by the voltage drop in the line. For longer AC lines, system stability becomes the limiting factor. Approximately, the power flowing over an AC line is proportional to the cosine of the phase angle of the voltage and current at the ends. This angle varies depending on system loading. It is undesirable for the angle to approach 90 degrees, as the power flowing decreases while resistive losses remain. The product of line length and maximum load is approximately proportional to the square of the system voltage. Series capacitors or phase-shifting transformers are used on long lines to improve stability. HVDC lines are restricted only by thermal and voltage drop limits, since the phase angle is not material. 
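One common way to quantify the stability limit just described is the textbook power-angle relation for an idealized lossless AC line, in which the transferred power depends on the angle between the sending- and receiving-end voltages. The sketch below uses that standard relation with assumed voltage and reactance figures; it is an illustration under those assumptions, not data from this article.

import math

# Illustrative assumptions: a lossless line with series reactance X = 100 ohm
# linking two 345 kV (line-to-line) systems.  For such a line the transferred
# power is P = Vs * Vr * sin(delta) / X, where delta is the angle between the
# sending- and receiving-end voltages.
V = 345e3
X = 100.0

for delta_deg in (10, 30, 60, 90):
    P = V * V * math.sin(math.radians(delta_deg)) / X
    print(f"delta = {delta_deg:3d} deg  ->  P = {P/1e6:7.0f} MW")

# The transfer rises from about 207 MW at 10 degrees toward a ceiling of about
# 1190 MW at 90 degrees; beyond that angle no more power can be pushed through
# and the two systems risk falling out of step, which is why long AC lines are
# operated well below 90 degrees, while a DC link has no such phase-angle limit.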
Understanding the temperature distribution along the cable route became possible with the introduction of distributed temperature sensing (DTS) systems that measure temperatures all along the cable. Without them maximum current was typically set as a compromise between understanding of operation conditions and risk minimization. This monitoring solution uses passive optical fibers as temperature sensors, either inside a high-voltage cable or externally mounted on the cable insulation. For overhead cables the fiber is integrated into the core of a phase wire. The integrated Dynamic Cable Rating (DCR)/Real Time Thermal Rating (RTTR) solution makes it possible to run the network to its maximum. It allows the operator to predict the behavior of the transmission system to reflect major changes to its initial operating conditions. Reconductoring Some utilities have embraced reconductoring to handle the increase in electricity production. Reconductoring is the replacement-in-place of existing transmission lines with higher-capacity lines. Adding transmission lines is difficult due to cost, permit intervals, and local opposition. Reconductoring has the potential to double the amount of electricity that can travel across a transmission line. A 2024 report found the United States behind countries like Belgium and the Netherlands in adoption of this technique to accommodate electrification and renewable energy. In April 2022, the Biden Administration streamlined environmental reviews for such projects, and in May 2022 announced competitive grants for them funded by the 2021 Bipartisan Infrastructure Law and 2022 Inflation Reduction Act. The rate of transmission expansion needs to double to support ongoing electrification and reach emission reduction targets. As of 2022, more than 10,000 power plant and energy storage projects were awaiting permission to connect to the US grid — 95% were zero-carbon resources. New power lines can take 10 years to plan, permit, and build. Traditional power lines use a steel core surrounded by aluminum strands (Aluminium-conductor steel-reinforced cable). Replacing the steel with a lighter, stronger composite material such as carbon fiber (ACCC conductor) allows lines to operate at higher temperatures, with less sag, and doubled transmission capacity. Lowering line sag at high temperatures can prevent wildfires from starting when power lines touch dry vegetation. Although advanced lines can cost 2-4x more than steel, total reconductoring costs are less than half of a new line, given savings in time, land acquisition, permitting, and construction. A reconductoring project in southeastern Texas upgraded 240 miles of transmission lines at a cost of $900,000 per mile, versus a 3,600-mile greenfield project that averaged $1.9 million per mile. Control To ensure safe and predictable operation, system components are controlled with generators, switches, circuit breakers and loads. The voltage, power, frequency, load factor, and reliability capabilities of the transmission system are designed to provide cost effective performance. Load balancing The transmission system provides for base load and peak load capability, with margins for safety and fault tolerance. Peak load times vary by region largely due to the industry mix. In hot and cold climates home air conditioning and heating loads affect the overall load. They are typically highest in the late afternoon in the hottest part of the year and in mid-mornings and mid-evenings in the coldest part of the year. 
Power requirements vary by season and time of day. Distribution system designs always take the base load and the peak load into consideration. The transmission system usually does not have a large buffering capability to match loads with generation. Thus generation has to be kept matched to the load, to prevent overloading generation equipment. Multiple sources and loads can be connected to the transmission system and they must be controlled to provide orderly transfer of power. In centralized power generation, only local control of generation is necessary. This involves synchronization of the generation units. In distributed power generation the generators are geographically distributed and the process to bring them online and offline must be carefully controlled. The load control signals can either be sent on separate lines or on the power lines themselves. Voltage and frequency can be used as signaling mechanisms to balance the loads. In voltage signaling, voltage is varied to increase generation. The power added by any system increases as the line voltage decreases. This arrangement is stable in principle. Voltage-based regulation is complex to use in mesh networks, since the individual components and setpoints would need to be reconfigured every time a new generator is added to the mesh. In frequency signaling, the generating units match the frequency of the power transmission system. In droop speed control, if the frequency decreases, the power is increased. (The drop in line frequency is an indication that the increased load is causing the generators to slow down.) Wind turbines, vehicle-to-grid, virtual power plants, and other locally distributed storage and generation systems can interact with the grid to improve system operation. Internationally, a slow move from centralized to decentralized power systems has taken place. The main draw of locally distributed generation systems is that they reduce transmission losses by leading to consumption of electricity closer to where it was produced. Failure protection Under excess load conditions, the system can be designed to fail incrementally rather than all at once. Brownouts occur when power supplied drops below the demand. Blackouts occur when the grid fails completely. Rolling blackouts (also called load shedding) are intentionally engineered electrical power outages, used to distribute insufficient power to various loads in turn. Communications Grid operators require reliable communications to manage the grid and associated generation and distribution facilities. Fault-sensing protective relays at each end of the line must communicate to monitor the flow of power so that faulted conductors or equipment can be quickly de-energized and the balance of the system restored. Protection of the transmission line from short circuits and other faults is usually so critical that common carrier telecommunications are insufficiently reliable, while in some remote areas no common carrier is available. Communication systems associated with a transmission project may use: Microwaves Power-line communication Optical fibers Rarely, and for short distances, pilot-wires are strung along the transmission line path. Leased circuits from common carriers are not preferred since availability is not under control of the operator. Transmission lines can be used to carry data: this is called power-line carrier, or power-line communication (PLC). PLC signals can be easily received with a radio in the long wave range. 
Optical fibers can be included in the stranded conductors of a transmission line, in the overhead shield wires. These cables are known as optical ground wire (OPGW). Sometimes a standalone cable is used, all-dielectric self-supporting (ADSS) cable, attached to the transmission line cross arms. Some jurisdictions, such as Minnesota, prohibit energy transmission companies from selling surplus communication bandwidth or acting as a telecommunications common carrier. Where the regulatory structure permits, the utility can sell capacity in extra dark fibers to a common carrier. Market structure Electricity transmission is generally considered to be a natural monopoly, but one that is not inherently linked to generation. Many countries regulate transmission separately from generation. Spain was the first country to establish a regional transmission organization. In that country, transmission operations and electricity markets are separate. The transmission system operator is Red Eléctrica de España (REE) and the wholesale electricity market operator is Operador del Mercado Ibérico de Energía – Polo Español, S.A. (OMEL). Spain's transmission system is interconnected with those of France, Portugal, and Morocco. The establishment of RTOs in the United States was spurred by the FERC's Order 888, Promoting Wholesale Competition Through Open Access Non-discriminatory Transmission Services by Public Utilities; Recovery of Stranded Costs by Public Utilities and Transmitting Utilities, issued in 1996. In the United States and parts of Canada, electric transmission companies operate independently of generation companies, but in the Southern United States vertical integration is intact. In regions of separation, transmission owners and generation owners continue to interact with each other as market participants with voting rights within their RTO. RTOs in the United States are regulated by the Federal Energy Regulatory Commission. Merchant transmission projects in the United States include the Cross Sound Cable from Shoreham, New York to New Haven, Connecticut, Neptune RTS Transmission Line from Sayreville, New Jersey, to New Bridge, New York, and Path 15 in California. Additional projects are in development or have been proposed throughout the United States, including the Lake Erie Connector, an underwater transmission line proposed by ITC Holdings Corp., connecting Ontario to load serving entities in the PJM Interconnection region. Australia has one unregulated or market interconnector – Basslink – between Tasmania and Victoria. Two DC links originally implemented as market interconnectors, Directlink and Murraylink, were converted to regulated interconnectors. A major barrier to wider adoption of merchant transmission is the difficulty in identifying who benefits from the facility so that the beneficiaries pay the toll. Also, it is difficult for a merchant transmission line to compete when the alternative transmission lines are subsidized by utilities with a monopolized and regulated rate base. In the United States, the FERC's Order 1000, issued in 2010, attempted to reduce barriers to third party investment and creation of merchant transmission lines where a public policy need is found. Transmission costs The cost of high voltage transmission is comparatively low, compared to all other costs constituting consumer electricity bills. In the UK, transmission costs are about 0.2 p per kWh compared to a delivered domestic price of around 10 p per kWh. 
The level of capital expenditure in the electric power T&D equipment market was estimated to be $128.9 bn in 2011. Health concerns Mainstream scientific evidence suggests that low-power, low-frequency, electromagnetic radiation associated with household currents and high transmission power lines does not constitute a short- or long-term health hazard. Some studies failed to find any link between living near power lines and developing any sickness or diseases, such as cancer. A 1997 study reported no increased risk of cancer or illness from living near a transmission line. Other studies, however, reported statistical correlations between various diseases and living or working near power lines. No adverse health effects have been substantiated for people not living close to power lines. The New York State Public Service Commission conducted a study to evaluate potential health effects of electric fields. The study measured the electric field strength at the edge of an existing right-of-way on a 765 kV transmission line. The field strength was 1.6 kV/m, and became the interim maximum strength standard for new transmission lines in New York State. The opinion also limited the voltage of new transmission lines built in New York to 345 kV. On September 11, 1990, after a similar study of magnetic field strengths, the NYSPSC issued their Interim Policy Statement on Magnetic Fields. This policy established a magnetic field standard of 200 mG at the edge of the right-of-way using the winter-normal conductor rating. As a comparison with everyday items, a hair dryer or electric blanket produces a 100 mG – 500 mG magnetic field. Applications for a new transmission line typically include an analysis of electric and magnetic field levels at the edge of rights-of-way. Public utility commissions typically do not comment on health impacts. Biological effects have been established for acute high level exposure to magnetic fields above 100 μT (1 G) (1,000 mG). In a residential setting, one study reported "limited evidence of carcinogenicity in humans and less than sufficient evidence for carcinogenicity in experimental animals", in particular, childhood leukemia, associated with average exposure to residential power-frequency magnetic field above 0.3 μT (3 mG) to 0.4 μT (4 mG). These levels exceed average residential power-frequency magnetic fields in homes, which are about 0.07 μT (0.7 mG) in Europe and 0.11 μT (1.1 mG) in North America. The Earth's natural geomagnetic field strength varies over the surface of the planet between 0.035 mT and 0.07 mT (35 μT – 70 μT or 350 mG – 700 mG) while the international standard for continuous exposure is set at 40 mT (400,000 mG or 400 G) for the general public. Tree growth regulators and herbicides may be used in transmission line right of ways, which may have health effects. Specialized transmission Grids for railways In some countries where electric locomotives or electric multiple units run on low frequency AC power, separate single phase traction power networks are operated by the railways. Prime examples are countries such as Austria, Germany and Switzerland that utilize AC technology based on 16 2/3 Hz. Norway and Sweden also use this frequency but use conversion from the 50 Hz public supply; Sweden has a 16 2/3 Hz traction grid but only for part of the system. Superconducting cables High-temperature superconductors (HTS) promise to revolutionize power distribution by providing lossless transmission. 
The development of superconductors with transition temperatures higher than the boiling point of liquid nitrogen has made the concept of superconducting power lines commercially feasible, at least for high-load applications. It has been estimated that waste would be halved using this method, since the necessary refrigeration equipment would consume about half the power saved by the elimination of resistive losses. Companies such as Consolidated Edison and American Superconductor began commercial production of such systems in 2007. Superconducting cables are particularly suited to high load density areas such as the business district of large cities, where purchase of an easement for cables is costly. Single-wire earth return Single-wire earth return (SWER) or single-wire ground return is a single-wire transmission line for supplying single-phase electrical power to remote areas at low cost. It is principally used for rural electrification, but also finds use for larger isolated loads such as water pumps. Single-wire earth return is also used for HVDC over submarine power cables. Wireless power transmission Both Nikola Tesla and Hidetsugu Yagi attempted to devise systems for large scale wireless power transmission in the late 1800s and early 1900s, without commercial success. In November 2009, LaserMotive won the NASA 2009 Power Beaming Challenge by powering a cable climber 1 km vertically using a ground-based laser transmitter. The system produced up to 1 kW of power at the receiver end. In August 2010, NASA contracted with private companies to pursue the design of laser power beaming systems to power low earth orbit satellites and to launch rockets using laser power beams. Wireless power transmission has been studied for transmission of power from solar power satellites to the earth. A high power array of microwave or laser transmitters would beam power to a rectenna. Major engineering and economic challenges face any solar power satellite project. Security The federal government of the United States stated that the American power grid was susceptible to cyber-warfare. The United States Department of Homeland Security works with industry to identify vulnerabilities and to help industry enhance the security of control system networks. In June 2019, Russia conceded that it was "possible" its electrical grid is under cyber-attack by the United States. The New York Times reported that American hackers from the United States Cyber Command planted malware potentially capable of disrupting the Russian electrical grid. Records Highest capacity system: 12 GW Zhundong–Wannan (准东-皖南)±1100 kV HVDC. Highest transmission voltage (AC): planned: 1.20 MV (Ultra-High Voltage) on Wardha-Aurangabad line (India), planned to initially operate at 400 kV. worldwide: 1.15 MV (Ultra-High Voltage) on Ekibastuz-Kokshetau line (Kazakhstan) Largest double-circuit transmission, Kita-Iwaki Powerline (Japan). 
Highest towers: Yangtze River Crossing (China) (height: ) Longest power line: Inga-Shaba (Democratic Republic of Congo) (length: ) Longest span of power line: at Ameralik Span (Greenland, Denmark) Longest submarine cables: North Sea Link, (Norway/United Kingdom) – (length of submarine cable: ) NorNed, North Sea (Norway/Netherlands) – (length of submarine cable: ) Basslink, Bass Strait, (Australia) – (length of submarine cable: , total length: ) Baltic Cable, Baltic Sea (Germany/Sweden) – (length of submarine cable: , HVDC length: , total length: ) Longest underground cables: Murraylink, Riverland/Sunraysia (Australia) – (length of underground cable: ) See also Dynamic demand (electric power) Demand response List of energy storage power plants Traction power network Backfeeding Conductor marking lights Double-circuit transmission line Electromagnetic Transients Program (EMTP) Flexible AC transmission system (FACTS) Geomagnetically induced current, (GIC) Graphene-clad wire Grid-tied electrical system List of high-voltage underground and submarine cables Load profile National Grid (disambiguation) Power-line communications (PLC) Power system simulation Radio frequency power transmission Wheeling (electric power transmission) References Further reading Grigsby, L. L., et al. The Electric Power Engineering Handbook. US: CRC Press. (2001). Hughes, Thomas P., Networks of Power: Electrification in Western Society 1880–1930, The Johns Hopkins University Press, Baltimore 1983 , an excellent overview of development during the first 50 years of commercial electric power Pansini, Anthony J, E.E., P.E. undergrounding electric lines. US: Hayden Book Co, 1978. Westinghouse Electric Corporation, "Electric power transmission patents; Tesla polyphase system". (Transmission of power; polyphase system; Tesla patents) Electrical engineering Monopoly (economics) Electrical safety
Electric power transmission
[ "Engineering" ]
9,800
[ "Electrical engineering" ]
38,829
https://en.wikipedia.org/wiki/Three-phase%20electric%20power
Three-phase electric power (abbreviated 3ϕ) is a common type of alternating current (AC) used in electricity generation, transmission, and distribution. It is a type of polyphase system employing three wires (or four including an optional neutral return wire) and is the most common method used by electrical grids worldwide to transfer power. Three-phase electrical power was developed in the 1880s by several people. In three-phase power, the voltage on each wire is 120 degrees phase shifted relative to each of the other wires. Because it is an AC system, it allows the voltages to be easily stepped up using transformers to high voltage for transmission and back down for distribution, giving high efficiency. A three-wire three-phase circuit is usually more economical than an equivalent two-wire single-phase circuit at the same line-to-ground voltage because it uses less conductor material to transmit a given amount of electrical power. Three-phase power is mainly used directly to power large induction motors, other electric motors and other heavy loads. Small loads often use only a two-wire single-phase circuit, which may be derived from a three-phase system. Terminology The conductors between a voltage source and a load are called lines, and the voltage between any two lines is called line voltage. The voltage measured between any line and neutral is called phase voltage. For example, for a 208/120-volt service, the line voltage is 208 volts, and the phase voltage is 120 volts. History Polyphase power systems were independently invented by Galileo Ferraris, Mikhail Dolivo-Dobrovolsky, Jonas Wenström, John Hopkinson, William Stanley Jr., and Nikola Tesla in the late 1880s. Three phase power evolved out of electric motor development. In 1885, Galileo Ferraris was doing research on rotating magnetic fields. Ferraris experimented with different types of asynchronous electric motors. The research and his studies resulted in the development of an alternator, which may be thought of as an alternating-current motor operating in reverse, so as to convert mechanical (rotating) power into electric power (as alternating current). On 11 March 1888, Ferraris published his research in a paper to the Royal Academy of Sciences in Turin. Two months later Nikola Tesla gained a patent for a three-phase electric motor design, application filed October 12, 1887. Figure 13 of this patent shows that Tesla envisaged his three-phase motor being powered from the generator via six wires. These alternators operated by creating systems of alternating currents displaced from one another in phase by definite amounts, and depended on rotating magnetic fields for their operation. The resulting source of polyphase power soon found widespread acceptance. The invention of the polyphase alternator is key in the history of electrification, as is the power transformer. These inventions enabled power to be transmitted by wires economically over considerable distances. Polyphase power enabled the use of water-power (via hydroelectric generating plants in large dams) in remote places, thereby allowing the mechanical energy of the falling water to be converted to electricity, which then could be fed to an electric motor at any location where mechanical work needed to be done. This versatility sparked the growth of power-transmission network grids on continents around the globe. Mikhail Dolivo-Dobrovolsky developed a three-phase electrical generator and a three-phase electric motor in 1888 and studied star and delta connections. 
His three-phase three-wire transmission system was displayed in 1891 in Germany at the International Electrotechnical Exhibition, where Dolivo-Dobrovolsky used the system to transmit electric power over a distance of 176 km (110 miles) with 75% efficiency. In 1891 he also created a three-phase transformer and short-circuited (squirrel-cage) induction motor. (Gerhard Neidhöfer: Michael von Dolivo-Dobrowolsky und der Drehstrom. Geschichte der Elektrotechnik, VDE-Buchreihe, Volume 9, VDE VERLAG, Berlin Offenbach.) He designed the world's first three-phase hydroelectric power plant in 1891. Inventor Jonas Wenström received in 1890 a Swedish patent on the same three-phase system. The possibility of transferring electrical power from a waterfall at a distance was explored at the Grängesberg mine. A fall at Hällsjön, Smedjebackens kommun, where a small iron work had been located, was selected. In 1893, a three-phase system was used to transmit power over a distance of 15 km (10 miles), becoming the first commercial application. Principle In a symmetric three-phase power supply system, three conductors each carry an alternating current of the same frequency and voltage amplitude relative to a common reference, but with a phase difference of one third of a cycle (i.e., 120 degrees out of phase) between each. The common reference is usually connected to ground and often to a current-carrying conductor called the neutral. Due to the phase difference, the voltage on any conductor reaches its peak at one third of a cycle after one of the other conductors and one third of a cycle before the remaining conductor. This phase delay gives constant power transfer to a balanced linear load. It also makes it possible to produce a rotating magnetic field in an electric motor and generate other phase arrangements using transformers (for instance, a two-phase system using a Scott-T transformer). The amplitude of the voltage difference between two phases is √3 times the amplitude of the voltage of the individual phases. The symmetric three-phase systems described here are simply referred to as three-phase systems because, although it is possible to design and implement asymmetric three-phase power systems (i.e., with unequal voltages or phase shifts), they are not used in practice because they lack the most important advantages of symmetric systems. In a three-phase system feeding a balanced and linear load, the sum of the instantaneous currents of the three conductors is zero. In other words, the current in each conductor is equal in magnitude to the sum of the currents in the other two, but with the opposite sign. The return path for the current in any phase conductor is the other two phase conductors. Constant power transfer is possible with any number of phases greater than one. However, two-phase systems do not have neutral-current cancellation and thus use conductors less efficiently, and more than three phases complicates infrastructure unnecessarily. Additionally, in some practical generators and motors, two phases can result in a less smooth (pulsating) torque. Three-phase systems may have a fourth wire, common in low-voltage distribution. This is the neutral wire. The neutral allows three separate single-phase supplies to be provided at a constant voltage and is commonly used for supplying multiple single-phase loads. The connections are arranged so that, as far as possible in each group, equal power is drawn from each phase. Further up the distribution system, the currents are usually well balanced. 
Transformers may be wired to have a four-wire secondary and a three-wire primary, while allowing unbalanced loads and the associated secondary-side neutral currents. Phase sequence Wiring for three phases is typically identified by colors that vary by country and voltage. The phases must be connected in the correct order to achieve the intended direction of rotation of three-phase motors. For example, pumps and fans do not work as intended in reverse. Maintaining the identity of phases is required if two sources could be connected at the same time. A direct connection between two different phases is a short circuit and leads to flow of unbalanced current. Advantages and disadvantages As compared to a single-phase AC power supply that uses two current-carrying conductors (phase and neutral), a three-phase supply with no neutral and the same phase-to-ground voltage and current capacity per phase can transmit three times as much power by using just 1.5 times as many wires (i.e., three instead of two). Thus, the ratio of capacity to conductor material is doubled. The ratio of capacity to conductor material increases to 3:1 with an ungrounded three-phase and center-grounded single-phase system (or 2.25:1 if both use grounds with the same gauge as the conductors). That leads to higher efficiency, lower weight, and cleaner waveforms. Three-phase supplies have properties that make them desirable in electric power distribution systems: The phase currents tend to cancel out one another, summing to zero in the case of a linear balanced load, which allows a reduction of the size of the neutral conductor because it carries little or no current. With a balanced load, all the phase conductors carry the same current and so can have the same size. Power transfer into a linear balanced load is constant, which, in motor/generator applications, helps to reduce vibrations. Three-phase systems can produce a rotating magnetic field with a specified direction and constant magnitude, which simplifies the design of electric motors, as no starting circuit is required. However, most loads are single-phase. In North America, single-family houses and individual apartments are supplied one phase from the power grid and use a split-phase system to the panelboard from which most branch circuits will carry 120 V. Circuits designed for higher powered devices such as stoves, dryers, or outlets for electric vehicles carry 240 V. In Europe, three-phase power is normally delivered to the panelboard and further to higher powered devices. Generation and distribution At the power station, an electrical generator converts mechanical power into a set of three AC electric currents, one from each coil (or winding) of the generator. The windings are arranged such that the currents are at the same frequency but with the peaks and troughs of their wave forms offset to provide three complementary currents with a phase separation of one-third cycle (120° or radians). The generator frequency is typically 50 or 60 Hz, depending on the country. At the power station, transformers change the voltage from generators to a level suitable for transmission in order to minimize losses. After further voltage conversions in the transmission network, the voltage is finally transformed to the standard utilization before power is supplied to customers. Most automotive alternators generate three-phase AC and rectify it to DC with a diode bridge. 
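The two properties claimed above for a balanced linear load, namely that the instantaneous phase currents sum to zero and that the total power transfer is constant, can be checked numerically. The short sketch below assumes a 230 V (RMS) phase voltage, a 50 Hz supply and a 10 ohm resistive load per phase; these values are illustrative assumptions chosen only for the example.

import math

V_PEAK = 230 * math.sqrt(2)       # assumed 230 V RMS phase voltage
R = 10.0                          # assumed balanced resistive load, ohms per phase
F = 50.0                          # assumed supply frequency, Hz

for k in range(5):                # sample a few instants across one cycle
    t = k / (5 * F)
    voltages = [V_PEAK * math.sin(2 * math.pi * F * t - n * 2 * math.pi / 3)
                for n in range(3)]
    currents = [v / R for v in voltages]
    total_current = sum(currents)                       # ~0 for a balanced load
    total_power = sum(v * i for v, i in zip(voltages, currents))
    print(f"t = {t*1000:5.1f} ms  neutral current = {total_current:8.1e} A"
          f"  total power = {total_power/1000:6.2f} kW")

# The neutral current stays at (numerically) zero at every sampled instant and
# the total power is constant at 3 * V_rms^2 / R = 15.87 kW, illustrating why a
# balanced load needs little or no neutral conductor and produces no pulsating
# torque component in motors and generators.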
Transformer connections A "delta" (Δ) connected transformer winding is connected between phases of a three-phase system. A "wye" (Y) transformer connects each winding from a phase wire to a common neutral point. A single three-phase transformer can be used, or three single-phase transformers. In an "open delta" or "V" system, only two transformers are used. A closed delta made of three single-phase transformers can operate as an open delta if one of the transformers has failed or needs to be removed. In open delta, each transformer must carry current for its respective phases as well as current for the third phase, therefore capacity is reduced to 87%. With one of three transformers missing and the remaining two at 87% efficiency, the capacity is 58% ( of 87%).H. W. Beaty, D. G. Fink (ed.), Standard Handbook for Electrical Engineers. 15th ed., McGraw-Hill, 2007, , pp. 10–11. Where a delta-fed system must be grounded for detection of stray current to ground or protection from surge voltages, a grounding transformer (usually a zigzag transformer) may be connected to allow ground fault currents to return from any phase to ground. Another variation is a "corner grounded" delta system, which is a closed delta that is grounded at one of the junctions of transformers. Three-wire and four-wire circuits There are two basic three-phase configurations: wye (Y) and delta (Δ). As shown in the diagram, a delta configuration requires only three wires for transmission, but a wye (star) configuration may have a fourth wire. The fourth wire, if present, is provided as a neutral and is normally grounded. The three-wire and four-wire designations do not count the ground wire present above many transmission lines, which is solely for fault protection and does not carry current under normal use. A four-wire system with symmetrical voltages between phase and neutral is obtained when the neutral is connected to the "common star point" of all supply windings. In such a system, all three phases will have the same magnitude of voltage relative to the neutral. Other non-symmetrical systems have been used. The four-wire wye system is used when a mixture of single-phase and three-phase loads are to be served, such as mixed lighting and motor loads. An example of application is local distribution in Europe (and elsewhere), where each customer may be only fed from one phase and the neutral (which is common to the three phases). When a group of customers sharing the neutral draw unequal phase currents, the common neutral wire carries the currents resulting from these imbalances. Electrical engineers try to design the three-phase power system for any one location so that the power drawn from each of three phases is the same, as far as possible at that site. Electrical engineers also try to arrange the distribution network so the loads are balanced as much as possible, since the same principles that apply to individual premises also apply to the wide-scale distribution system power. Hence, every effort is made by supply authorities to distribute the power drawn on each of the three phases over a large number of premises so that, on average, as nearly as possible a balanced load is seen at the point of supply. 
For domestic use, some countries such as the UK may supply one phase and neutral at a high current (up to 100 A) to one property, while others such as Germany may supply 3 phases and neutral to each customer, but at a lower fuse rating, typically 40–63 A per phase, and "rotated" to avoid the effect that more load tends to be put on the first phase. Based on wye (Y) and delta (Δ) connection. Generally, there are four different types of three-phase transformer winding connections for transmission and distribution purposes: wye (Y) – wye (Y) is used for small current and high voltage, Delta (Δ) – Delta (Δ) is used for large currents and low voltages, Delta (Δ) – wye (Y) is used for step-up transformers, i.e., at generating stations, wye (Y) – Delta (Δ) is used for step-down transformers, i.e., at the end of the transmission. In North America, a high-leg delta supply is sometimes used where one winding of a delta-connected transformer feeding the load is center-tapped and that center tap is grounded and connected as a neutral as shown in the second diagram. This setup produces three different voltages: If the voltage between the center tap (neutral) and each of the top and bottom taps (phase and anti-phase) is 120 V (100%), the voltage across the phase and anti-phase lines is 240 V (200%), and the neutral to "high leg" voltage is ≈ 208 V (173%). The reason for providing the delta connected supply is usually to power large motors requiring a rotating field. However, the premises concerned will also require the "normal" North American 120 V supplies, two of which are derived (180 degrees "out of phase") between the "neutral" and either of the center-tapped phase points. Balanced circuits In the perfectly balanced case all three lines share equivalent loads. Examining the circuits, we can derive relationships between line voltage and current, and load voltage and current for wye- and delta-connected loads. In a balanced system each line will produce equal voltage magnitudes at phase angles equally spaced from each other. With V1 as our reference and V3 lagging V2 lagging V1, using angle notation, and VLN the voltage between the line and the neutral we have: These voltages feed into either a wye- or delta-connected load. Wye (or, star; Y) The voltage seen by the load will depend on the load connection; for the wye case, connecting each load to a phase (line-to-neutral) voltages gives where Ztotal is the sum of line and load impedances (Ztotal = ZLN + ZY), and θ is the phase of the total impedance (Ztotal). The phase angle difference between voltage and current of each phase is not necessarily 0 and depends on the type of load impedance, Zy. Inductive and capacitive loads will cause current to either lag or lead the voltage. However, the relative phase angle between each pair of lines (1 to 2, 2 to 3, and 3 to 1) will still be −120°. By applying Kirchhoff's current law (KCL) to the neutral node, the three phase currents sum to the total current in the neutral line. In the balanced case: Delta (Δ) In the delta circuit, loads are connected across the lines, and so loads see line-to-line voltages: (Φv1 is the phase shift for the first voltage, commonly taken to be 0°; in this case, Φv2 = −120° and Φv3 = −240° or 120°.) Further: where θ is the phase of delta impedance (ZΔ). Relative angles are preserved, so I31 lags I23 lags I12 by 120°. Calculating line currents by using KCL at each delta node gives and similarly for each other line: where, again, θ is the phase of delta impedance (ZΔ). 
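Since the phasor equations of the balanced-circuit discussion above did not survive formatting, the following sketch restates the key relationships numerically with Python complex numbers: for a balanced wye source the line-to-line voltage is √3 times the phase voltage and leads it by 30°, and a balanced delta load draws a line current √3 times its branch current. The 120 V phase voltage and 12 ohm load are illustrative assumptions.

import cmath, math

def phasor(mag, deg):
    return mag * cmath.exp(1j * math.radians(deg))

# Balanced wye source, 120 V line-to-neutral (assumed), phases 120 degrees apart.
V1, V2, V3 = phasor(120, 0), phasor(120, -120), phasor(120, -240)

V12 = V1 - V2                              # line-to-line voltage between phases 1 and 2
print(abs(V12), math.degrees(cmath.phase(V12)))   # ~207.8 V (= 120 * sqrt(3)) at +30 deg

# Balanced delta load, 12 ohm (assumed) across each pair of lines.
Z = 12.0
I12, I23, I31 = V12 / Z, (V2 - V3) / Z, (V3 - V1) / Z

I1 = I12 - I31                             # KCL at node 1: line current of phase 1
print(abs(I12), abs(I1), abs(I1) / abs(I12))      # branch ~17.3 A, line ~30 A, ratio sqrt(3)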
Inspection of a phasor diagram, or conversion from phasor notation to complex notation, illuminates how the difference between two line-to-neutral voltages yields a line-to-line voltage that is greater by a factor of . As a delta configuration connects a load across phases of a transformer, it delivers the line-to-line voltage difference, which is times greater than the line-to-neutral voltage delivered to a load in the wye configuration. As the power transferred is V2/Z'', the impedance in the delta configuration must be 3 times what it would be in a wye configuration for the same power to be transferred. Single-phase loads Except in a high-leg delta system and a corner-grounded delta system, single-phase loads may be connected across any two phases, or a load can be connected from phase to neutral. Distributing single-phase loads among the phases of a three-phase system balances the load and makes most economical use of conductors and transformers. In a symmetrical three-phase four-wire wye system, the three phase conductors have the same voltage to the system neutral. The voltage between line conductors is times the phase conductor to neutral voltage: The currents returning from the customers' premises to the supply transformer all share the neutral wire. If the loads are evenly distributed on all three phases, the sum of the returning currents in the neutral wire is approximately zero. Any unbalanced phase loading on the secondary side of the transformer will use the transformer capacity inefficiently. If the supply neutral is broken, phase-to-neutral voltage is no longer maintained. Phases with higher relative loading will experience reduced voltage, and phases with lower relative loading will experience elevated voltage, up to the phase-to-phase voltage. A high-leg delta provides phase-to-neutral relationship of , however, LN load is imposed on one phase. A transformer manufacturer's page suggests that LN loading not exceed 5% of transformer capacity. Since ≈ 1.73, defining as 100% gives . If was set as 100%, then . Unbalanced loads When the currents on the three live wires of a three-phase system are not equal or are not at an exact 120° phase angle, the power loss is greater than for a perfectly balanced system. The method of symmetrical components is used to analyze unbalanced systems. Non-linear loads With linear loads, the neutral only carries the current due to imbalance between the phases. Gas-discharge lamps and devices that utilize rectifier-capacitor front-end such as switch-mode power supplies, computers, office equipment and such produce third-order harmonics that are in-phase on all the supply phases. Consequently, such harmonic currents add in the neutral in a wye system (or in the grounded (zigzag) transformer in a delta system), which can cause the neutral current to exceed the phase current. Three-phase loads An important class of three-phase load is the electric motor. A three-phase induction motor has a simple design, inherently high starting torque and high efficiency. Such motors are applied in industry for many applications. A three-phase motor is more compact and less costly than a single-phase motor of the same voltage class and rating, and single-phase AC motors above are uncommon. Three-phase motors also vibrate less and hence last longer than single-phase motors of the same power used under the same conditions. Resistive heating loads such as electric boilers or space heating may be connected to three-phase systems. 
Electric lighting may also be similarly connected. Line frequency flicker in light is detrimental to high-speed cameras used in sports event broadcasting for slow-motion replays. It can be reduced by evenly spreading line frequency operated light sources across the three phases so that the illuminated area is lit from all three phases. This technique was applied successfully at the 2008 Beijing Olympics. Rectifiers may use a three-phase source to produce a six-pulse DC output. The output of such rectifiers is much smoother than rectified single phase and, unlike single-phase, does not drop to zero between pulses. Such rectifiers may be used for battery charging, electrolysis processes such as aluminium production and the electric arc furnace used in steelmaking, and for operation of DC motors. Zigzag transformers may make the equivalent of six-phase full-wave rectification, twelve pulses per cycle, and this method is occasionally employed to reduce the cost of the filtering components, while improving the quality of the resulting DC. In many European countries electric stoves are usually designed for a three-phase feed with permanent connection. Individual heating units are often connected between phase and neutral to allow for connection to a single-phase circuit if three-phase is not available. Other usual three-phase loads in the domestic field are tankless water heating systems and storage heaters. Homes in Europe have standardized on a nominal 230 V ±10% between any phase and ground. Most groups of houses are fed from a three-phase street transformer so that individual premises with above-average demand can be fed with a second or third phase connection. Phase converters Phase converters are used when three-phase equipment needs to be operated on a single-phase power source. They are used when three-phase power is not available or cost is not justifiable. Such converters may also allow the frequency to be varied, allowing speed control. Some railway locomotives use a single-phase source to drive three-phase motors fed through an electronic drive. A rotary phase converter is a three-phase motor with special starting arrangements and power factor correction that produces balanced three-phase voltages. When properly designed, these rotary converters can allow satisfactory operation of a three-phase motor on a single-phase source. In such a device, the energy storage is performed by the inertia (flywheel effect) of the rotating components. An external flywheel is sometimes found on one or both ends of the shaft. A three-phase generator can be driven by a single-phase motor. This motor-generator combination can provide a frequency changer function as well as phase conversion, but requires two machines with all their expenses and losses. The motor-generator method can also form an uninterruptible power supply when used in conjunction with a large flywheel and a battery-powered DC motor; such a combination will deliver nearly constant power compared to the temporary frequency drop experienced with a standby generator set gives until the standby generator kicks in. Capacitors and autotransformers can be used to approximate a three-phase system in a static phase converter, but the voltage and phase angle of the additional phase may only be useful for certain loads. Variable-frequency drives and digital phase converters use power electronic devices to synthesize a balanced three-phase supply from single-phase input power. 
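As an illustration of the rectifier behaviour described earlier in this section, the sketch below compares an idealized six-pulse output fed from three phases with single-phase full-wave rectification. Voltages are normalized to the phase peak and ideal diodes are assumed; this is a simplified model, not a description of any particular rectifier product.

import math

def six_pulse(theta):
    """Idealized three-phase bridge output: highest phase minus lowest phase."""
    phases = [math.sin(theta - n * 2 * math.pi / 3) for n in range(3)]
    return max(phases) - min(phases)

def full_wave_1ph(theta):
    """Idealized single-phase full-wave output."""
    return abs(math.sin(theta))

samples = [k * 2 * math.pi / 1000 for k in range(1000)]
for name, f in (("three-phase six-pulse", six_pulse),
                ("single-phase full-wave", full_wave_1ph)):
    values = [f(t) for t in samples]
    print(f"{name}: min = {min(values):.3f}, max = {max(values):.3f} (per unit of phase peak)")

# The six-pulse output swings only between about 1.5 and 1.73 times the phase
# peak, while the single-phase output drops all the way to zero twice per cycle,
# which is the smoothness advantage mentioned above.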
Testing Verification of the phase sequence in a circuit is of considerable practical importance. Two sources of three-phase power must not be connected in parallel unless they have the same phase sequence, for example, when connecting a generator to an energized distribution network or when connecting two transformers in parallel. Otherwise, the interconnection will behave like a short circuit, and excess current will flow. The direction of rotation of three-phase motors can be reversed by interchanging any two phases; it may be impractical or harmful to test a machine by momentarily energizing the motor to observe its rotation. Phase sequence of two sources can be verified by measuring voltage between pairs of terminals and observing that terminals with very low voltage between them will have the same phase, whereas pairs that show a higher voltage are on different phases. Where the absolute phase identity is not required, phase rotation test instruments can be used to identify the rotation sequence with one observation. The phase rotation test instrument may contain a miniature three-phase motor, whose direction of rotation can be directly observed through the instrument case. Another pattern uses a pair of lamps and an internal phase-shifting network to display the phase rotation. Another type of instrument can be connected to a de-energized three-phase motor and can detect the small voltages induced by residual magnetism, when the motor shaft is rotated by hand. A lamp or other indicator lights to show the sequence of voltages at the terminals for the given direction of shaft rotation. Alternatives to three-phase Split-phase electric power Used when three-phase power is not available and allows double the normal utilization voltage to be supplied for high-power loads. Two-phase electric power Uses two AC voltages, with a 90-electrical-degree phase shift between them. Two-phase circuits may be wired with two pairs of conductors, or two wires may be combined, requiring only three wires for the circuit. Currents in the common conductor add to 1.4 times ( ) the current in the individual phases, so the common conductor must be larger. Two-phase and three-phase systems can be interconnected by a Scott-T transformer, invented by Charles F. Scott. Very early AC machines, notably the first generators at Niagara Falls, used a two-phase system, and some remnant two-phase distribution systems still exist, but three-phase systems have displaced the two-phase system for modern installations. Monocyclic power An asymmetrical modified two-phase power system used by General Electric around 1897, championed by Charles Proteus Steinmetz and Elihu Thomson. This system was devised to avoid patent infringement. In this system, a generator was wound with a full-voltage single-phase winding intended for lighting loads and with a small fraction (usually 1/4 of the line voltage) winding that produced a voltage in quadrature with the main windings. The intention was to use this "power wire" additional winding to provide starting torque for induction motors, with the main winding providing power for lighting loads. After the expiration of the Westinghouse patents on symmetrical two-phase and three-phase power distribution systems, the monocyclic system fell out of use; it was difficult to analyze and did not last long enough for satisfactory energy metering to be developed. High-phase-order systems Have been built and tested for power transmission. Such transmission lines typically would use six or twelve phases. 
High-phase-order transmission lines allow transfer of slightly less than proportionately higher power through a given volume without the expense of a high-voltage direct current (HVDC) converter at each end of the line. However, they require correspondingly more pieces of equipment. DC AC was historically used because it could be easily transformed to higher voltages for long distance transmission. However, modern electronics can raise the voltage of DC with high efficiency, and DC lacks the skin effect, which permits transmission wires to be lighter and cheaper, so high-voltage direct current gives lower losses over long distances. Color codes Conductors of a three-phase system are usually identified by a color code, to facilitate balanced loading and to assure the correct phase rotation for motors. Colors used may adhere to International Standard IEC 60446 (later IEC 60445), older standards or to no standard at all and may vary even within a single installation. For example, in the U.S. and Canada, different color codes are used for grounded (earthed) and ungrounded systems. See also Industrial and multiphase power plugs and sockets International Electrotechnical Exhibition Mathematics of three-phase electric power Rotary phase converter Three-phase AC railway electrification Y-Δ transform Notes References Further reading External links AC Power History and Timeline Electric power Electrical engineering Electrical wiring Inventions by Nikola Tesla AC power
Three-phase electric power
[ "Physics", "Engineering" ]
6,064
[ "Physical quantities", "Electrical systems", "Building engineering", "Physical systems", "Power (physics)", "Electric power", "Electrical engineering", "Electrical wiring" ]
38,873
https://en.wikipedia.org/wiki/Raoult%27s%20law
Raoult's law ( law) is a relation of physical chemistry, with implications in thermodynamics. Proposed by French chemist François-Marie Raoult in 1887, it states that the partial pressure of each component of an ideal mixture of liquids is equal to the vapor pressure of the pure component (liquid or solid) multiplied by its mole fraction in the mixture. In consequence, the relative lowering of vapor pressure of a dilute solution of nonvolatile solute is equal to the mole fraction of solute in the solution. Mathematically, Raoult's law for a single component in an ideal solution is stated as where is the partial pressure of the component in the gaseous mixture above the solution, is the equilibrium vapor pressure of the pure component , and is the mole fraction of the component in the liquid or solid solution. Where two volatile liquids A and B are mixed with each other to form a solution, the vapor phase consists of both components of the solution. Once the components in the solution have reached equilibrium, the total vapor pressure of the solution can be determined by combining Raoult's law with Dalton's law of partial pressures to give In other words, the vapor pressure of the solution is the mole-weighted mean of the individual vapour pressures: If a non-volatile solute B (it has zero vapor pressure, so does not evaporate) is dissolved into a solvent A to form an ideal solution, the vapor pressure of the solution will be lower than that of the solvent. In an ideal solution of a nonvolatile solute, the decrease in vapor pressure is directly proportional to the mole fraction of solute: If the solute associates or dissociates in the solution, the expression of the law includes the van 't Hoff factor as a correction factor. Principles Raoult's law is a phenomenological relation that assumes ideal behavior based on the simple microscopic assumption that intermolecular forces between unlike molecules are equal to those between similar molecules, and that their molar volumes are the same: the conditions of an ideal solution. This is analogous to the ideal gas law, which is a limiting law valid when the interactive forces between molecules approach zero, for example as the concentration approaches zero. Raoult's law is instead valid if the physical properties of the components are identical. The more similar the components are, the more their behavior approaches that described by Raoult's law. For example, if the two components differ only in isotopic content, then Raoult's law is essentially exact. Comparing measured vapor pressures to predicted values from Raoult's law provides information about the true relative strength of intermolecular forces. If the vapor pressure is less than predicted (a negative deviation), fewer molecules of each component than expected have left the solution in the presence of the other component, indicating that the forces between unlike molecules are stronger. The converse is true for positive deviations. For a solution of two liquids A and B, Raoult's law predicts that if no other gases are present, then the total vapor pressure above the solution is equal to the weighted sum of the "pure" vapor pressures and of the two components. Thus the total pressure above the solution of A and B would be Since the sum of the mole fractions is equal to one, This is a linear function of the mole fraction , as shown in the graph. 
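As a worked illustration of the statement above, the following sketch evaluates Raoult's law for a hypothetical ideal binary mixture. The two pure-component vapor pressures (roughly those of benzene and toluene near room temperature) are assumed values chosen for the example and are not taken from this article.

# Raoult's law for an ideal binary liquid mixture of components A and B:
#   p_A = x_A * p_A_pure,   p_B = (1 - x_A) * p_B_pure,   P_total = p_A + p_B
# Assumed pure-component vapor pressures (roughly benzene and toluene near 25 C):
P_A_PURE = 12.7   # kPa
P_B_PURE = 3.8    # kPa

for x_a in (0.0, 0.25, 0.5, 0.75, 1.0):
    p_a = x_a * P_A_PURE
    p_b = (1.0 - x_a) * P_B_PURE
    print(f"x_A = {x_a:4.2f}:  p_A = {p_a:5.2f} kPa, p_B = {p_b:5.2f} kPa, "
          f"total = {p_a + p_b:5.2f} kPa")

# The total pressure varies linearly from 3.8 kPa (pure B) to 12.7 kPa (pure A),
# which is the straight line referred to in the text.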
Thermodynamic considerations Raoult's law was first observed empirically and led François-Marie Raoult to postulate that the vapor pressure above an ideal mixture of liquids is equal to the sum of the vapor pressures of each component multiplied by its mole fraction. Taking compliance with Raoult's law as a defining characteristic of ideality in a solution, it is possible to deduce that the chemical potential of each component i of the liquid is given by μ_i = μ_i* + RT ln x_i, where μ_i* is the chemical potential of component i in the pure state and x_i is the mole fraction of component i in the ideal solution. From this equation, other thermodynamic properties of an ideal solution may be determined. If the assumption that the vapor follows the ideal gas law is added, Raoult's law may be derived as follows. If the system is ideal, then, at equilibrium, the chemical potential of each component i must be the same in the liquid and gas states. That is, μ_i,liq = μ_i,gas. Substituting the formula for the chemical potential in each phase gives μ_i* + RT ln x_i = μ_i,gas° + RT ln (f_i / p°), as the gas-phase chemical potential depends on its fugacity, f_i, expressed as a fraction of the pressure in the reference state, p°. The corresponding equation when the system consists purely of component i in equilibrium with its vapor is μ_i* = μ_i,gas° + RT ln (f_i* / p°). Subtracting these equations and re-arranging leads to the result f_i = x_i f_i*. For the ideal gas, pressure and fugacity are equal, so introducing simple pressures to this result yields Raoult's law: p_i = x_i p_i*. Ideal mixing An ideal solution would follow Raoult's law, but most solutions deviate from ideality. Interactions between gas molecules are typically quite small, especially if the vapor pressures are low. However, the interactions in a liquid are very strong. For a solution to be ideal, the interactions between unlike molecules must be of the same magnitude as those between like molecules. This approximation is only true when the different species are almost chemically identical. One can see that from considering the Gibbs free energy change of mixing: ΔG_mix = nRT (x_A ln x_A + x_B ln x_B). This is always negative, so mixing is spontaneous. However, the expression is, apart from a factor of −T, equal to the entropy of mixing ΔS_mix. This leaves no room at all for an enthalpy effect and implies that ΔH_mix must be equal to zero, and this can only be true if the interactions between the molecules are indifferent. It can be shown using the Gibbs–Duhem equation that if Raoult's law holds over the entire concentration range in a binary solution then, for the second component, the same must also hold. If deviations from the ideal are not too large, Raoult's law is still valid in a narrow concentration range when approaching x = 1 for the majority phase (the solvent). The solute also shows a linear limiting law, but with a different coefficient. This relationship is known as Henry's law. The presence of these limited linear regimes has been experimentally verified in a great number of cases, though large deviations occur in a variety of cases. Consequently, both the law's pedagogical value and its utility have been questioned at the introductory college level. In a perfectly ideal system, where ideal liquid and ideal vapor are assumed, a very useful equation emerges if Raoult's law is combined with Dalton's law: x_i = y_i p / p_i*, where x_i is the mole fraction of component i in the solution, and y_i is its mole fraction in the gas phase. This equation shows that, for an ideal solution where each pure component has a different vapor pressure, the gas phase is enriched in the component with the higher vapor pressure when pure, and the solution is enriched in the component with the lower pure vapor pressure. This phenomenon is the basis for distillation.
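The ideal-mixing argument above can be made concrete with a small numerical sketch (again illustrative rather than drawn from the article's sources): for an ideal solution the Gibbs energy of mixing is entirely entropic, so ΔH_mix comes out identically zero. The temperature used is an arbitrary example value.

```python
import math

R = 8.314  # J/(mol*K), gas constant

def ideal_mixing(x_A, T):
    """Ideal molar entropy, Gibbs energy and enthalpy of mixing for a binary solution."""
    x_B = 1.0 - x_A
    dS_mix = -R * (x_A * math.log(x_A) + x_B * math.log(x_B))  # J/(mol*K)
    dG_mix = -T * dS_mix                                        # J/mol, since dH_mix = 0
    dH_mix = dG_mix + T * dS_mix                                # identically zero for an ideal solution
    return dG_mix, dS_mix, dH_mix

dG, dS, dH = ideal_mixing(x_A=0.5, T=298.15)
print(f"dG_mix = {dG:.0f} J/mol (negative, so mixing is spontaneous)")
print(f"dS_mix = {dS:.2f} J/(mol*K), dH_mix = {dH:.1f} J/mol")
```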
Non-ideal mixing In elementary applications, Raoult's law is generally valid when the liquid phase is either nearly pure or a mixture of similar substances. Raoult's law may be adapted to non-ideal solutions by incorporating two factors that account for the interactions between molecules of different substances. The first factor is a correction for gas non-ideality, or deviations from the ideal-gas law; it is called the fugacity coefficient (φ_i). The second, the activity coefficient γ_i, is a correction for interactions in the liquid phase between the different molecules. This modified or extended Raoult's law is then written as y_i φ_i p = x_i γ_i p_i*. Real solutions In many pairs of liquids, there is no uniformity of attractive forces, i.e., the adhesive (between dissimilar molecules) and cohesive forces (between similar molecules) are not uniform between the two liquids. Therefore, they deviate from Raoult's law, which applies only to ideal solutions. Negative deviation When the adhesion is stronger than the cohesion, fewer liquid particles turn into vapor, thereby lowering the vapor pressure and leading to negative deviation in the graph. For example, the system of chloroform (CHCl3) and acetone (CH3COCH3) has a negative deviation from Raoult's law, indicating an attractive interaction between the two components that has been described as a hydrogen bond. The system HCl–water has a large enough negative deviation to form a minimum in the vapor pressure curve known as a (negative) azeotrope, corresponding to a mixture that evaporates without change of composition. When these two components are mixed, the reaction is exothermic as ion-dipole intermolecular forces of attraction are formed between the resulting ions (H3O+ and Cl–) and the polar water molecules, so that ΔH_mix is negative. Positive deviation When the adhesion is weaker than the cohesion, which is quite common, the liquid particles escape the solution more easily, which increases the vapor pressure and leads to a positive deviation. If the deviation is large, then the vapor pressure curve shows a maximum at a particular composition and forms a positive azeotrope (low-boiling mixture). Some mixtures in which this happens are (1) ethanol and water, (2) benzene and methanol, (3) carbon disulfide and acetone, (4) chloroform and ethanol, and (5) glycine and water. When these pairs of components are mixed, the process is endothermic as weaker intermolecular interactions are formed, so that ΔH_mix is positive. See also Antoine equation Atomic theory Azeotrope Dühring's rule Henry's law Köhler theory Solubility References D. A. McQuarrie and J. D. Simon, Physical Chemistry: A Molecular Approach, chapter 24, University Science Books (1997) E. B. Smith, Basic Chemical Thermodynamics, Clarendon Press, Oxford (1993) Eponymous laws of physics Physical chemistry Equilibrium chemistry Engineering thermodynamics Solutions
Raoult's law
[ "Physics", "Chemistry", "Engineering" ]
2,045
[ "Applied and interdisciplinary physics", "Separation processes", "Engineering thermodynamics", "Equilibrium chemistry", "Homogeneous chemical mixtures", "Distillation", "Thermodynamics", "nan", "Mechanical engineering", "Solutions", "Physical chemistry" ]
39,120
https://en.wikipedia.org/wiki/Nvidia
Nvidia Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware. Founded in 1993 by Jensen Huang (president and CEO), Chris Malachowsky, and Curtis Priem, it is a fabless company which designs and supplies graphics processing units (GPUs), application programming interfaces (APIs) for data science and high-performance computing, and system on a chip units (SoCs) for mobile computing and the automotive market. Nvidia is also the dominant supplier of artificial intelligence (AI) hardware and software. Nvidia outsources the manufacturing of the hardware it designs. Nvidia's professional line of GPUs is used for edge-to-cloud computing and in supercomputers and workstations for applications in fields such as architecture, engineering and construction, media and entertainment, automotive, scientific research, and manufacturing design. Its GeForce line of GPUs is aimed at the consumer market and is used in applications such as video editing, 3D rendering, and PC gaming. With a market share of 80.2% in the second quarter of 2023, Nvidia leads the market for discrete desktop GPUs by a wide margin. The company expanded its presence in the gaming industry with the introduction of the Shield Portable (a handheld game console), Shield Tablet (a gaming tablet), and Shield TV (a digital media player), as well as its cloud gaming service GeForce Now. In addition to GPU design and outsourcing manufacturing, Nvidia provides the CUDA software platform and API that allows the creation of massively parallel programs which utilize GPUs. They are deployed in supercomputing sites around the world. In the late 2000s, Nvidia moved into the mobile computing market, where it produces Tegra mobile processors for smartphones and tablets and vehicle navigation and entertainment systems. Its competitors include AMD, Intel, Qualcomm, and AI accelerator companies such as Cerebras and Graphcore. It also makes AI-powered software for audio and video processing (e.g., Nvidia Maxine). Nvidia's offer to acquire Arm from SoftBank in September 2020 failed to materialize following extended regulatory scrutiny, leading to the termination of the deal in February 2022 in what would have been the largest semiconductor acquisition. In 2023, Nvidia became the seventh public U.S. company to be valued at over $1 trillion, and the company's valuation has increased rapidly since then as the company became a leader in data center chips with AI capabilities in the midst of the AI boom. In June 2024, for one day, Nvidia overtook Microsoft as the world's most valuable publicly traded company, with a market capitalization of over $3.3 trillion. History Founding Nvidia was founded on April 5, 1993, by Jensen Huang (who remains CEO), a Taiwanese-American electrical engineer who was previously the director of CoreWare at LSI Logic and a microprocessor designer at AMD; Chris Malachowsky, an engineer who worked at Sun Microsystems; and Curtis Priem, who was previously a senior staff engineer and graphics chip designer at IBM and Sun Microsystems. The three men agreed to start the company in a meeting at a Denny's roadside diner on Berryessa Road in East San Jose. At the time, Malachowsky and Priem were frustrated with Sun's management and were looking to leave, but Huang was on "firmer ground", in that he was already running his own division at LSI.
The three co-founders discussed a vision of the future which was so compelling that Huang decided to leave LSI and become the chief executive officer of their new startup. In 1993, the three co-founders envisioned that the ideal trajectory for the forthcoming wave of computing would be in the realm of accelerated computing, specifically in graphics-based processing. This path was chosen due to its unique ability to tackle challenges that eluded general-purpose computing methods. As Huang later explained: "We also observed that video games were simultaneously one of the most computationally challenging problems and would have incredibly high sales volume. Those two conditions don’t happen very often. Video games was our killer app — a flywheel to reach large markets funding huge R&D to solve massive computational problems." With $40,000 in the bank, the company was born. The company subsequently received $20 million of venture capital funding from Sequoia Capital, Sutter Hill Ventures and others. During the late 1990s, Nvidia was one of 70 startup companies chasing the idea that graphics acceleration for video games was the path to the future. Only two survived: Nvidia and ATI Technologies, the latter of which merged into AMD. Nvidia initially had no name and the co-founders named all their files NV, as in "next version". The need to incorporate the company prompted the co-founders to review all words with those two letters. At one point, Malachowsky and Priem wanted to call the company NVision, but that name was already taken by a manufacturer of toilet paper. Huang suggested the name Nvidia, from "invidia", the Latin word for "envy". The company's original headquarters office was in Sunnyvale, California. First graphics accelerator Nvidia's first graphics accelerator, the NV1, was designed to process quadrilateral primitives (forward texture mapping), a feature that set it apart from competitors, who preferred triangle primitives. However, when Microsoft introduced the DirectX platform, it chose not to support any other graphics software and announced that its Direct3D API would exclusively support triangles. As a result, the NV1 failed to gain traction in the market. Nvidia had also entered into a partnership with Sega to supply the graphics chip for the Dreamcast console and worked on the project for about a year. However, Nvidia's technology was already lagging behind competitors. This placed the company in a difficult position: continue working on a chip that was likely doomed to fail or abandon the project, risking financial collapse. In a pivotal moment, Sega's president, Shoichiro Irimajiri, visited Huang in person to inform him that Sega had decided to choose another vendor for the Dreamcast. However, Irimajiri believed in Nvidia's potential and persuaded Sega’s management to invest $5 million into the company. Huang later reflected that this funding was all that kept Nvidia afloat, and that Irimajiri's "understanding and generosity gave us six months to live". In 1996, Huang laid off more than half of Nvidia's employees—thereby reducing headcount from 100 to 40—and focused the company's remaining resources on developing a graphics accelerator product optimized for processing triangle primitives: the RIVA 128. By the time the RIVA 128 was released in August 1997, Nvidia had only enough money left for one month’s payroll. 
The sense of impending failure became so pervasive that it gave rise to Nvidia's unofficial company motto: "Our company is thirty days from going out of business." Huang began internal presentations to Nvidia staff with those words for many years. Nvidia sold about a million RIVA 128 units within four months, and used the revenue to fund development of its next generation of products. In 1998, the release of the RIVA TNT helped solidify Nvidia's reputation as a leader in graphics technology. Public company Nvidia went public on January 22, 1999. Investing in Nvidia after it had already failed to deliver on its contract turned out to be Irimajiri's best decision as Sega's president. After Irimajiri left Sega in 2000, Sega sold its Nvidia stock for $15 million. In late 1999, Nvidia released the GeForce 256 (NV10), its first product expressly marketed as a GPU, which was most notable for introducing onboard transformation and lighting (T&L) to consumer-level 3D hardware. Running at 120 MHz and featuring four pixel pipelines, it implemented advanced video acceleration, motion compensation, and hardware sub-picture alpha blending. The GeForce outperformed existing products by a wide margin. Due to the success of its products, Nvidia won the contract to develop the graphics hardware for Microsoft's Xbox game console, which earned Nvidia a $200 million advance. However, the project took many of its best engineers away from other projects. In the short term this did not matter, and the GeForce2 GTS shipped in the summer of 2000. In December 2000, Nvidia reached an agreement to acquire the intellectual assets of its one-time rival 3dfx, a pioneer in consumer 3D graphics technology leading the field from the mid-1990s until 2000. The acquisition process was finalized in April 2002. In 2001, Standard & Poor's selected Nvidia to replace the departing Enron in the S&P 500 stock index, meaning that index funds would need to hold Nvidia shares going forward. In July 2002, Nvidia acquired Exluna for an undisclosed sum. Exluna made software-rendering tools and the personnel were merged into the Cg project. In August 2003, Nvidia acquired MediaQ for approximately US$70 million. It launched GoForce the following year. On April 22, 2004, Nvidia acquired iReady, a provider of high-performance TCP offload engines and iSCSI controllers. In December 2004, it was announced that Nvidia would assist Sony with the design of the graphics processor (RSX) for the PlayStation 3 game console. On December 14, 2005, Nvidia acquired ULI Electronics, which at the time supplied third-party southbridge parts for chipsets to ATI, Nvidia's competitor. In March 2006, Nvidia acquired Hybrid Graphics. In December 2006, Nvidia, along with its main rival in the graphics industry AMD (which had acquired ATI), received subpoenas from the U.S. Department of Justice regarding possible antitrust violations in the graphics card industry. 2007–2014 Forbes named Nvidia its Company of the Year for 2007, citing the accomplishments it made during that year as well as during the previous five years. On January 5, 2007, Nvidia announced that it had completed the acquisition of PortalPlayer, Inc. In February 2008, Nvidia acquired Ageia, developer of PhysX, a physics engine and physics processing unit. Nvidia announced that it planned to integrate the PhysX technology into its future GPU products.
In July 2008, Nvidia took a write-down of approximately $200 million on its first-quarter revenue, after reporting that certain mobile chipsets and GPUs produced by the company had "abnormal failure rates" due to manufacturing defects. Nvidia, however, did not reveal the affected products. In September 2008, Nvidia became the subject of a class action lawsuit over the defects, claiming that the faulty GPUs had been incorporated into certain laptop models manufactured by Apple Inc., Dell, and HP. In September 2010, Nvidia reached a settlement, in which it would reimburse owners of the affected laptops for repairs or, in some cases, replacement. On January 10, 2011, Nvidia signed a six-year, $1.5 billion cross-licensing agreement with Intel, ending all litigation between the two companies. In November 2011, after initially unveiling it at Mobile World Congress, Nvidia released its ARM-based system on a chip for mobile devices, Tegra 3. Nvidia claimed that the chip featured the first-ever quad-core mobile CPU. In May 2011, it was announced that Nvidia had agreed to acquire Icera, a baseband chip making company in the UK, for $367 million. In January 2013, Nvidia unveiled the Tegra 4, as well as the Nvidia Shield, an Android-based handheld game console powered by the new system on a chip. On July 29, 2013, Nvidia announced that they acquired PGI from STMicroelectronics. In February 2013, Nvidia announced its plans to build a new headquarters in the form of two giant triangle-shaped buildings on the other side of San Tomas Expressway (to the west of its existing headquarters complex). The company selected triangles as its design theme. As Huang explained in a blog post, the triangle is "the fundamental building block of computer graphics". In 2014, Nvidia ported the Valve games Portal and Half Life 2 to its Nvidia Shield tablet as Lightspeed Studio. Since 2014, Nvidia has diversified its business focusing on three markets: gaming, automotive electronics, and mobile devices. That same year, Nvidia also prevailed in litigation brought by the trustee of 3dfx's bankruptcy estate to challenge its 2000 acquisition of 3dfx's intellectual assets. On November 6, 2014, in an unpublished memorandum order, the U.S. Court of Appeals for the Ninth Circuit affirmed the "district court's judgment affirming the bankruptcy court's determination that [Nvidia] did not pay less than fair market value for assets purchased from 3dfx shortly before 3dfx filed for bankruptcy". 2016–2018 On May 6, 2016, Nvidia unveiled the first GPUs of the GeForce 10 series, the GTX 1080 and 1070, based on the company's new Pascal microarchitecture. Nvidia claimed that both models outperformed its Maxwell-based Titan X model; the models incorporate GDDR5X and GDDR5 memory respectively, and use a 16 nm manufacturing process. The architecture also supports a new hardware feature known as simultaneous multi-projection (SMP), which is designed to improve the quality of multi-monitor and virtual reality rendering. Laptops that include these GPUs and are sufficiently thin – as of late 2017, under  – have been designated as meeting Nvidia's "Max-Q" design standard. In July 2016, Nvidia agreed to a settlement for a false advertising lawsuit regarding its GTX 970 model, as the models were unable to use all of their advertised 4 GB of VRAM due to limitations brought by the design of its hardware. 
In May 2017, Nvidia announced a partnership with Toyota which will use Nvidia's Drive PX-series artificial intelligence platform for its autonomous vehicles. In July 2017, Nvidia and Chinese search giant Baidu announced a far-reaching AI partnership that includes cloud computing, autonomous driving, consumer devices, and Baidu's open-source AI framework PaddlePaddle. Baidu unveiled that Nvidia's Drive PX 2 AI will be the foundation of its autonomous-vehicle platform. Nvidia officially released the Titan V on December 7, 2017. Nvidia officially released the Nvidia Quadro GV100 on March 27, 2018. Nvidia officially released the RTX 2080 GPUs on September 27, 2018. In 2018, Google announced that Nvidia's Tesla P4 graphic cards would be integrated into Google Cloud service's artificial intelligence. In May 2018, on the Nvidia user forum, a thread was started asking the company to update users when they would release web drivers for its cards installed on legacy Mac Pro machines up to mid-2012 5,1 running the macOS Mojave operating system 10.14. Web drivers are required to enable graphics acceleration and multiple display monitor capabilities of the GPU. On its Mojave update info website, Apple stated that macOS Mojave would run on legacy machines with 'Metal compatible' graphics cards and listed Metal compatible GPUs, including some manufactured by Nvidia. However, this list did not include Metal compatible cards that currently work in macOS High Sierra using Nvidia-developed web drivers. In September, Nvidia responded, "Apple fully controls drivers for macOS. But if Apple allows, our engineers are ready and eager to help Apple deliver great drivers for macOS 10.14 (Mojave)." In October, Nvidia followed this up with another public announcement, "Apple fully controls drivers for macOS. Unfortunately, Nvidia currently cannot release a driver unless it is approved by Apple," suggesting a possible rift between the two companies. By January 2019, with still no sign of the enabling web drivers, Apple Insider weighed into the controversy with a claim that Apple management "doesn't want Nvidia support in macOS". The following month, Apple Insider followed this up with another claim that Nvidia support was abandoned because of "relational issues in the past", and that Apple was developing its own GPU technology. Without Apple-approved Nvidia web drivers, Apple users are faced with replacing their Nvidia cards with a competing supported brand, such as AMD Radeon from the list recommended by Apple. 2019 acquisition of Mellanox Technologies On March 11, 2019, Nvidia announced a deal to buy Mellanox Technologies for $6.9 billion to substantially expand its footprint in the high-performance computing market. In May 2019, Nvidia announced new RTX Studio laptops. The creators say that the new laptop is going to be seven times faster than a top-end MacBook Pro with a Core i9 and AMD's Radeon Pro Vega 20 graphics in apps like Maya and RedCine-X Pro. In August 2019, Nvidia announced Minecraft RTX, an official Nvidia-developed patch for the game Minecraft adding real-time DXR ray tracing exclusively to the Windows 10 version of the game. The whole game is, in Nvidia's words, "refit" with path tracing, which dramatically affects the way light, reflections, and shadows work inside the engine. 2020–2023 In May 2020, Nvidia announced it was acquiring Cumulus Networks. Post acquisition the company was absorbed into Nvidia's networking business unit, along with Mellanox. 
In May 2020, Nvidia developed an open-source ventilator to address the shortage resulting from the global coronavirus pandemic. On May 14, 2020, Nvidia officially announced its Ampere GPU microarchitecture and the Nvidia A100 GPU accelerator. In July 2020, it was reported that Nvidia was in talks with SoftBank to buy Arm, a UK-based chip designer, for $32 billion. On September 1, 2020, Nvidia officially announced the GeForce 30 series based on the company's new Ampere microarchitecture. On September 13, 2020, Nvidia announced that it would buy Arm from SoftBank Group for $40 billion, subject to the usual scrutiny, with the latter retaining a 10% share of Nvidia. In October 2020, Nvidia announced its plan to build the most powerful computer in Cambridge, England. The computer, called Cambridge-1, launched in July 2021 with a $100 million investment and employs AI to support healthcare research. According to Jensen Huang, "The Cambridge-1 supercomputer will serve as a hub of innovation for the UK, and further the groundbreaking work being done by the nation's researchers in critical healthcare and drug discovery." Also in October 2020, along with the release of the Nvidia RTX A6000, Nvidia announced it was retiring its workstation GPU brand Quadro, shifting future products to the Nvidia RTX name and basing them on the Nvidia Ampere architecture. In August 2021, the proposed takeover of Arm was stalled after the UK's Competition and Markets Authority raised "significant competition concerns". In October 2021, the European Commission opened a competition investigation into the takeover. The Commission stated that Nvidia's acquisition could restrict competitors' access to Arm's products and provide Nvidia with too much internal information on its competitors due to their deals with Arm. SoftBank (the parent company of Arm) and Nvidia announced in early February 2022 that they "had agreed not to move forward with the transaction 'because of significant regulatory challenges'". The Commission's investigation had been set to end on March 15, 2022. That same month, Nvidia was reportedly compromised by a cyberattack. In March 2022, Nvidia's CEO Jensen Huang said that the company was open to having Intel manufacture its chips in the future. This was the first time the company indicated that it might work together with Intel's upcoming foundry services. In April 2022, it was reported that Nvidia planned to open a new research center in Yerevan, Armenia. In May 2022, Nvidia opened Voyager, the second of the two giant buildings at its new headquarters complex to the west of the old one. Unlike its smaller and older sibling Endeavor, the triangle theming is used more "sparingly" in Voyager. In September 2022, Nvidia announced its next-generation automotive-grade chip, Drive Thor. In September 2022, Nvidia announced a collaboration with the Broad Institute of MIT and Harvard related to Clara, Nvidia's AI-powered healthcare software suite, which includes Parabricks and MONAI. Following U.S. Department of Commerce regulations which placed an embargo on exports to China of advanced microchips, which went into effect in October 2022, Nvidia saw its data center chip added to the export control list. The next month, the company unveiled a new advanced chip in China, called the A800 GPU, that met the export control rules.
In September 2023, Getty Images announced that it was partnering with Nvidia to launch Generative AI by Getty Images, a new tool that lets people create images using Getty's library of licensed photos. Getty will use Nvidia's Edify model, which is available on Nvidia's generative AI model library Picasso. On September 26, 2023, Denny's CEO Kelli Valade joined Huang in East San Jose to celebrate the founding of Nvidia at the Denny's on Berryessa Road, where a plaque was installed to mark the relevant corner booth as the birthplace of a $1 trillion company. By then, Nvidia's H100 GPUs were in such demand that even other tech giants were beholden to how Nvidia allocated supply. Larry Ellison of Oracle Corporation said that month that during a dinner with Huang at Nobu in Palo Alto, he and Elon Musk of Tesla, Inc. and xAI "were begging" for H100s, "I guess is the best way to describe it. An hour of sushi and begging". In October 2023, it was reported that Nvidia had quietly begun designing ARM-based central processing units (CPUs) for Microsoft's Windows operating system with a target to start selling them in 2025. 2024 In January 2024, Forbes reported that Nvidia had increased its lobbying presence in Washington, D.C. as American lawmakers considered proposals to regulate artificial intelligence. From 2023 to 2024, the company reportedly hired at least four government affairs staffers with professional backgrounds at agencies including the United States Department of State and the Department of the Treasury. It was noted that the $350,000 spent by the company on lobbying in 2023 was small compared to the amounts spent by a number of major tech companies in the artificial intelligence space. As of January 2024, Raymond James Financial analysts estimated that Nvidia was selling the H100 GPU in the price range of $25,000 to $30,000 each, while on eBay, individual H100s cost over $40,000. Tech giants were purchasing tens or hundreds of thousands of GPUs for their data centers to run generative artificial intelligence projects; simple arithmetic implied that they were committing to billions of dollars in capital expenditures, since an order of 100,000 GPUs at roughly $30,000 each comes to about $3 billion. In February 2024, it was reported that Nvidia was the "hot employer" in Silicon Valley because it was offering interesting work and good pay at a time when other tech employers were downsizing. Half of Nvidia's employees earned over $228,000 in 2023. By then, Nvidia GPUs had become so valuable that they needed special security while in transit to data centers. Cisco chief information officer Fletcher Previn explained at a CIO summit: "Those GPUs arrive by armored car". On March 1, 2024, Nvidia became the third company in the history of the United States to close with a market capitalization in excess of $2 trillion. Nvidia needed only 180 days to get to $2 trillion from $1 trillion, while the first two companies, Apple and Microsoft, each took over 500 days. On March 18, Nvidia announced its new AI chip and microarchitecture Blackwell, named after mathematician David Blackwell. In April 2024, Reuters reported that China had allegedly acquired banned Nvidia chips and servers from Supermicro and Dell via tenders. In June 2024, the Federal Trade Commission (FTC) and the Justice Department (DOJ) began antitrust investigations into Nvidia, Microsoft and OpenAI, focusing on their influence in the AI industry. The FTC led the investigations into Microsoft and OpenAI, while the DOJ handled Nvidia. The probes centered on the companies' conduct rather than mergers.
This development followed an open letter from OpenAI employees expressing concerns about the rapid AI advancements and lack of oversight. The company became the world's most valuable, surpassing Microsoft and Apple, on June 18, 2024, after its market capitalization exceeded $3.3 trillion. In June 2024, Trend Micro announced a partnership with Nvidia to develop AI-driven security tools, notably to protect the data centers where AI workloads are processed. This collaboration integrates Nvidia NIM and Nvidia Morpheus with Trend Vision One and its Sovereign and Private Cloud solutions to improve data privacy, real-time analysis, and rapid threat mitigation. Nvidia introduced a family of open-source multimodal large language models in October 2024 called NVLM 1.0, which features a flagship version with 72 billion parameters, designed to improve text-only performance after multimodal training. In November 2024, the company was added to the Dow Jones Industrial Average. In November 2024, Morgan Stanley reported that "the entire 2025 production" of all of Nvidia's Blackwell chips was "already sold out". Also in November 2024, the company made an investment in Nebius Group. Fabless manufacturing Nvidia uses external suppliers for all phases of manufacturing, including wafer fabrication, assembly, testing, and packaging. Nvidia thus avoids most of the investment and production costs and risks associated with chip manufacturing, although it does sometimes directly procure some components and materials used in the production of its products (e.g., memory and substrates). Nvidia focuses its own resources on product design, quality assurance, marketing, and customer support. Corporate affairs Leadership Nvidia's key management as of March 2024 consists of: Jensen Huang, founder, president and chief executive officer Chris Malachowsky, founder and Nvidia fellow Colette Kress, executive vice president and chief financial officer Jay Puri, executive vice president of worldwide field operations Debora Shoquist, executive vice president of operations Tim Teter, executive vice president, general counsel and secretary Board of directors , the company's board consisted of the following directors: Rob Burgess (former chief executive officer of Macromedia Inc.) Tench Coxe (former managing director of Sutter Hill Ventures) John Dabiri (engineer and professor at the California Institute of Technology) Persis Drell (physicist and professor at Stanford University) Jensen Huang (co-founder, CEO and president of Nvidia) Dawn Hudson (former Chief Marketing Officer of the National Football League) Harvey C. Jones (managing partner of Square Wave Ventures) Melissa B. Lora (former president of Taco Bell International) Stephen Neal (lead independent director of Nvidia, former CEO and Chairman Emeritus and Senior Counsel of Cooley LLP) Ellen Ochoa (former director of NASA Johnson Space Center) Brooke Seawell (venture partner at New Enterprise Associates) Aarti Shah (former Senior Vice President & Chief Information and Digital Officer at Eli Lilly and Company) Mark Stevens (managing Partner at S-Cubed Capital) Finances For the fiscal year 2020, Nvidia reported earnings of US$2.796 billion, with an annual revenue of US$10.918 billion, a decline of 6.8% over the previous fiscal cycle. Nvidia's shares traded at over $531 per share, and its market capitalization was valued at over US$328.7 billion in January 2021. As of late Q3 2024, Nvidia's market cap is around US$2.98 trillion. 
For Q2 of 2020, Nvidia reported sales of $3.87 billion, a 50% rise from the same period in 2019. The surge in sales was attributed to the coronavirus pandemic and people's higher demand for computer technology. According to the financial chief of the company, Colette Kress, the effects of the pandemic will "likely reflect this evolution in enterprise workforce trends with a greater focus on technologies, such as Nvidia laptops and virtual workstations, that enable remote work and virtual collaboration." In May 2023, Nvidia crossed $1 trillion in market valuation during trading hours, and grew to $1.2 trillion by the following November. Owing to its strength, size and market capitalization, Nvidia has been counted among Bloomberg's "Magnificent Seven", the seven biggest companies on the stock market by these measures. Ownership The 10 largest shareholders of Nvidia in early 2024 were: The Vanguard Group (8.280%) BlackRock (5.623%) Fidelity Investments (5.161%) State Street Corporation (3.711%) Jensen Huang (3.507%) Geode Capital Management (2.024%) T. Rowe Price (2.013%) JPMorgan Chase (1.417%) BlackRock Life (1.409%) Eaton Vance (1.337%) GPU Technology Conference Nvidia's GPU Technology Conference (GTC) is a series of technical conferences held around the world. It originated in 2009 in San Jose, California, with an initial focus on the potential for solving computing challenges through GPUs. In recent years, the conference's focus has shifted to various applications of artificial intelligence and deep learning, including self-driving cars, healthcare, high-performance computing, and Nvidia Deep Learning Institute (DLI) training. GTC 2018 attracted over 8,400 attendees. GTC 2020 was converted to a digital event and drew roughly 59,000 registrants. After several years of remote-only events, GTC in March 2024 returned to an in-person format in San Jose, California. Product families Nvidia's product families include graphics processing units, wireless communication devices, and automotive hardware and software, such as: GeForce, consumer-oriented graphics processing products RTX, professional visual computing graphics processing products (replacing GTX and Quadro) NVS, a multi-display business graphics processor Tegra, a system on a chip series for mobile devices Tesla, a line of dedicated general-purpose GPUs for high-end image generation applications in professional and scientific fields nForce, a motherboard chipset created by Nvidia for Intel (Celeron, Pentium and Core 2) and AMD (Athlon and Duron) microprocessors GRID, a set of hardware and services by Nvidia for graphics virtualization Shield, a range of gaming hardware including the Shield Portable, Shield Tablet and Shield TV Drive, a range of hardware and software products for designers and manufacturers of autonomous vehicles. The Drive PX-series is a high-performance computer platform aimed at autonomous driving through deep learning, while Driveworks is an operating system for driverless cars.
BlueField, a range of data processing units, initially inherited from its acquisition of Mellanox Technologies Datacenter/server class CPU, codenamed Grace, released in 2023 DGX, an enterprise platform designed for deep learning applications Maxine, a platform providing developers a suite of AI-based conferencing software Open-source software support Until September 23, 2013, Nvidia had not published any documentation for its advanced hardware, meaning that programmers could not write free and open-source device drivers for its products without resorting to reverse engineering. Instead, Nvidia provides its own binary GeForce graphics drivers for X.Org and an open-source library that interfaces with the Linux, FreeBSD or Solaris kernels and the proprietary graphics software. Nvidia also provided but stopped supporting an obfuscated open-source driver that only supports two-dimensional hardware acceleration and ships with the X.Org distribution. The proprietary nature of Nvidia's drivers has generated dissatisfaction within free-software communities. In a 2012 talk, Linus Torvalds, in criticism of Nvidia's approach towards Linux, raised his middle finger and stated "Nvidia, fuck you." Some Linux and BSD users insist on using only open-source drivers and regard Nvidia's insistence on providing nothing more than a binary-only driver as inadequate, given that competing manufacturers such as Intel offer support and documentation for open-source developers and that others (like AMD) release partial documentation and provide some active development. Nvidia only provides x86/x64 and ARMv7-A versions of its proprietary driver; as a result, features like CUDA are unavailable on other platforms. Some users claim that Nvidia's Linux drivers impose artificial restrictions, like limiting the number of monitors that can be used at the same time, but the company has not commented on these accusations. In 2014, with Maxwell GPUs, Nvidia started to require firmware signed by Nvidia to unlock all features of its graphics cards. On May 12, 2022, Nvidia announced that it was open-sourcing its GPU kernel modules. Support for Nvidia's firmware was implemented in nouveau in 2023, which allows proper power management and GPU reclocking for Turing and newer graphics card generations. Deep learning Nvidia GPUs are used in deep learning and accelerated analytics due to Nvidia's CUDA software platform and API, which allow programmers to utilize the large number of cores present in GPUs to parallelize BLAS operations that are used extensively in machine learning algorithms (a brief sketch of this idea is shown below). They were included in many Tesla, Inc. vehicles before Musk announced at Tesla Autonomy Day in 2019 that the company had developed its own SoC and full self-driving computer and would stop using Nvidia hardware for its vehicles. These GPUs are used by researchers, laboratories, tech companies and enterprise companies. In 2009, Nvidia was involved in what was called the "big bang" of deep learning, "as deep-learning neural networks were combined with Nvidia graphics processing units (GPUs)". That year, the Google Brain team used Nvidia GPUs to create deep neural networks capable of machine learning, where Andrew Ng determined that GPUs could increase the speed of deep learning systems by about 100 times. DGX DGX is a line of supercomputers by Nvidia. In April 2016, Nvidia produced the DGX-1 based on an 8-GPU cluster, to improve the ability of users to use deep learning by combining GPUs with integrated deep learning software.
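As context for the CUDA remark above, the following is a minimal, hypothetical sketch rather than Nvidia's own sample code; it assumes a machine with an Nvidia GPU and the third-party CuPy library, which dispatches NumPy-style array operations to CUDA libraries such as cuBLAS.

```python
# Illustrative sketch: the kind of BLAS-style matrix multiply that dominates many
# machine-learning workloads, executed on an Nvidia GPU via CUDA (through CuPy).
import numpy as np
import cupy as cp  # third-party library; requires an Nvidia GPU and CUDA runtime

a_cpu = np.random.rand(2048, 2048).astype(np.float32)
b_cpu = np.random.rand(2048, 2048).astype(np.float32)

a_gpu = cp.asarray(a_cpu)          # copy the matrices to GPU memory
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu              # matrix multiply dispatched to cuBLAS on the GPU
cp.cuda.Stream.null.synchronize()  # wait for the asynchronous GPU work to finish

print(c_gpu.sum())                 # bring a scalar result back to the host
```

The multiply is spread across thousands of GPU cores instead of a handful of CPU cores, which is the source of the speedups exploited by deep-learning frameworks built on CUDA.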
Nvidia gifted its first DGX-1 to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. It also developed Nvidia Tesla K80 and P100 GPU-based virtual machines, available through Google Cloud, which Google installed in November 2016. Microsoft added GPU servers in a preview offering of its N series based on Nvidia's Tesla K80s, each containing 4,992 processing cores. Later that year, AWS's P2 instance was produced using up to 16 Nvidia Tesla K80 GPUs. That month, Nvidia also partnered with IBM to create a software kit that boosts the AI capabilities of Watson, called IBM PowerAI. Nvidia also offers its own Nvidia Deep Learning software development kit. In 2017, the GPUs were also brought online at the Riken Center for Advanced Intelligence Project for Fujitsu. The company's deep learning technology led to a boost in its 2017 earnings. In May 2018, researchers in Nvidia's artificial intelligence department demonstrated that a robot can learn to perform a job simply by observing a person doing the same job, and created a system that, after brief revision and testing, can already be used to control the next generation of universal robots. In addition to GPU manufacturing, Nvidia provides parallel processing capabilities to researchers and scientists that allow them to efficiently run high-performance applications. Robotics In 2020, Nvidia unveiled "Omniverse", a virtual environment designed for engineers. Nvidia also open-sourced Isaac Sim, which makes use of Omniverse to train robots through simulations that mimic the physics of the robots and the real world. In 2024, Huang oriented Nvidia's focus towards humanoid robots and self-driving cars, which he expects to gain widespread adoption. Inception Program Nvidia's Inception Program was created to support startups making exceptional advances in the fields of artificial intelligence and data science. Award winners are announced at Nvidia's GTC Conference. In May 2017, the program had 1,300 companies. As of March 2018, there were 2,800 startups in the Inception Program. As of August 2021, the program had surpassed 8,500 members in 90 countries, with cumulative funding of US$60 billion. Controversies Maxwell advertising dispute GTX 970 hardware specifications Issues with the GeForce GTX 970's specifications were first brought up by users when they found out that the cards, while featuring 4 GB of memory, rarely accessed memory over the 3.5 GB boundary. Further testing and investigation eventually led to Nvidia issuing a statement that the card's initially announced specifications had been altered without notice before the card was made commercially available, and that the card took a performance hit once memory over the 3.5 GB limit was put into use. The card's back-end hardware specifications, initially announced as being identical to those of the GeForce GTX 980, differed in the amount of L2 cache (1.75 MB versus 2 MB in the GeForce GTX 980) and the number of ROPs (56 versus 64 in the 980). Additionally, it was revealed that the card was designed to access its memory as a 3.5 GB section, plus a 0.5 GB one, access to the latter being 7 times slower than the former. The company then went on to promise a specific driver modification to alleviate the performance issues produced by the cutbacks suffered by the card.
However, Nvidia later clarified that the promise had been a miscommunication and there would be no specific driver update for the GTX 970. Nvidia claimed that it would assist customers who wanted refunds in obtaining them. On February 26, 2015, Nvidia CEO Jen-Hsun Huang went on record in Nvidia's official blog to apologize for the incident. In February 2015 a class-action lawsuit alleging false advertising was filed against Nvidia and Gigabyte Technology in the U.S. District Court for Northern California. Nvidia revealed that it is able to disable individual units, each containing 256 KB of L2 cache and 8 ROPs, without disabling whole memory controllers. This comes at the cost of dividing the memory bus into high speed and low speed segments that cannot be accessed at the same time unless one segment is reading while the other segment is writing because the L2/ROP unit managing both of the GDDR5 controllers shares the read return channel and the write data bus between the two GDDR5 controllers and itself. This is used in the GeForce GTX 970, which therefore can be described as having 3.5 GB in its high speed segment on a 224-bit bus and 0.5 GB in a low speed segment on a 32-bit bus. On July 27, 2016, Nvidia agreed to a preliminary settlement of the U.S. class action lawsuit, offering a $30 refund on GTX 970 purchases. The agreed upon refund represents the portion of the cost of the storage and performance capabilities the consumers assumed they were obtaining when they purchased the card. GeForce Partner Program The Nvidia GeForce Partner Program was a marketing program designed to provide partnering companies with benefits such as public relations support, video game bundling, and marketing development funds. The program proved to be controversial, with complaints about it possibly being an anti-competitive practice. First announced in a blog post on March 1, 2018, it was canceled on May 4, 2018. Hardware Unboxed On December 10, 2020, Nvidia told YouTube tech reviewer Steven Walton of Hardware Unboxed that it would no longer supply him with GeForce Founders Edition graphics card review units. In a Twitter message, Hardware Unboxed said, "Nvidia have officially decided to ban us from receiving GeForce Founders Edition GPU review samples. Their reasoning is that we are focusing on rasterization instead of ray tracing. They have said they will revisit this 'should your editorial direction change.'" In emails that were disclosed by Walton from Nvidia Senior PR Manager Bryan Del Rizzo, Nvidia had said:...your GPU reviews and recommendations have continued to focus singularly on rasterization performance, and you have largely discounted all of the other technologies we offer gamers. It is very clear from your community commentary that you do not see things the same way that we, gamers, and the rest of the industry do.TechSpot, partner site of Hardware Unboxed, said, "this and other related incidents raise serious questions around journalistic independence and what they are expecting of reviewers when they are sent products for an unbiased opinion." A number of technology reviewers came out strongly against Nvidia's move. Linus Sebastian, of Linus Tech Tips, titled the episode of his weekly WAN Show, "NVIDIA might ACTUALLY be EVIL..." and was highly critical of the company's move to dictate specific outcomes of technology reviews. 
The review site Gamers Nexus said it was, "Nvidia's latest decision to shoot both its feet: They've now made it so that any reviewers covering RT will become subject to scrutiny from untrusting viewers who will suspect subversion by the company. Shortsighted self-own from NVIDIA." Two days later, Nvidia reversed their stance. Hardware Unboxed sent out a Twitter message, "I just received an email from Nvidia apologizing for the previous email & they've now walked everything back." On December 14, Hardware Unboxed released a video explaining the controversy from their viewpoint. Via Twitter, they also shared a second apology sent by Nvidia's Del Rizzo that said "to withhold samples because I didn't agree with your commentary is simply inexcusable and crossed the line." Improper disclosures about cryptomining In 2018, Nvidia's chips became popular for cryptomining, the process of obtaining crypto rewards in exchange for verifying transactions on distributed ledgers, the U.S. Securities and Exchange Commission (SEC) said. However, the company failed to disclose that it was a "significant element" of its revenue growth from sales of chips designed for gaming, the SEC further added in a statement and charging order. Those omissions misled investors and analysts who were interested in understanding the impact of cryptomining on Nvidia's business, the SEC emphasized. Nvidia, which did not admit or deny the findings, has agreed to pay $5.5 million to settle civil charges, according to a statement made by the SEC in May 2022. See also Fast approximate anti-aliasing General-purpose computing on graphics processing units Huang's law Molecular modeling on GPUs GPU workstations Groq Notes References External links Nvidia Developer website 1993 establishments in California 1999 initial public offerings American companies established in 1993 Companies based in Santa Clara, California Companies in the Dow Jones Global Titans 50 Companies in the Dow Jones Industrial Average Computer companies established in 1993 Computer companies of the United States Computer hardware companies Computer systems companies Electronics companies established in 1993 Fabless semiconductor companies Graphics hardware companies Manufacturing companies based in the San Francisco Bay Area Semiconductor companies of the United States Technology companies based in the San Francisco Bay Area Technology companies established in 1993
Nvidia
[ "Technology" ]
9,405
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
39,136
https://en.wikipedia.org/wiki/Accelerating%20expansion%20of%20the%20universe
Observations show that the expansion of the universe is accelerating, such that the velocity at which a distant galaxy recedes from the observer is continuously increasing with time. The accelerated expansion of the universe was discovered in 1998 by two independent projects, the Supernova Cosmology Project and the High-Z Supernova Search Team, which used distant type Ia supernovae to measure the acceleration. The idea was that as type Ia supernovae have almost the same intrinsic brightness (a standard candle), and since objects that are further away appear dimmer, the observed brightness of these supernovae can be used to measure the distance to them. The distance can then be compared to the supernovae's cosmological redshift, which measures how much the universe has expanded since the supernova occurred; the Hubble law established that the further away an object is, the faster it is receding. The unexpected result was that objects in the universe are moving away from one another at an accelerating rate. Cosmologists at the time expected that recession velocity would always be decelerating, due to the gravitational attraction of the matter in the universe. Three members of these two groups have subsequently been awarded Nobel Prizes for their discovery. Confirmatory evidence has been found in baryon acoustic oscillations, and in analyses of the clustering of galaxies. The accelerated expansion of the universe is thought to have begun when the universe entered its dark-energy-dominated era roughly 5 billion years ago. Within the framework of general relativity, an accelerated expansion can be accounted for by a positive value of the cosmological constant Λ, equivalent to the presence of a positive vacuum energy, dubbed "dark energy". While there are alternative possible explanations, the description assuming dark energy (positive Λ) is used in the standard model of cosmology, which also includes cold dark matter (CDM) and is known as the Lambda-CDM model. Background In the decades since the detection of the cosmic microwave background (CMB) in 1965, the Big Bang model has become the most accepted model explaining the evolution of our universe. The Friedmann equation defines how the energy in the universe drives its expansion: H² = ((da/dt)/a)² = (8πG/3)ρ − kc²/a², where k represents the curvature of the universe, a is the scale factor, ρ is the total energy density of the universe, and H is the Hubble parameter. The critical density is defined as ρ_c = 3H²/(8πG), and the density parameter as Ω = ρ/ρ_c. With the scale factor normalized to a = 1 today, the Hubble parameter can then be rewritten as H² = H_0² (Ω_k a⁻² + Ω_m a⁻³ + Ω_r a⁻⁴ + Ω_Λ), where the four currently hypothesized contributors to the energy density of the universe are curvature (Ω_k), matter (Ω_m), radiation (Ω_r) and dark energy (Ω_Λ). Each of the components decreases with the expansion of the universe (increasing scale factor), except perhaps the dark energy term. It is the values of these cosmological parameters which physicists use to determine the acceleration of the universe. The acceleration equation describes the evolution of the scale factor with time: (d²a/dt²)/a = −(4πG/3)(ρ + 3p/c²), where the pressure p is defined by the cosmological model chosen. (see explanatory models) Physicists at one time were so assured of the deceleration of the universe's expansion that they introduced a so-called deceleration parameter q = −(d²a/dt²) a/(da/dt)². Recent observations indicate this deceleration parameter is negative. Relation to inflation According to the theory of cosmic inflation, the very early universe underwent a period of very rapid, quasi-exponential expansion.
While the time-scale for this period of expansion was far shorter than that of the existing expansion, this was a period of accelerated expansion with some similarities to the current epoch. Technical definition The definition of "accelerating expansion" is that the second time derivative of the cosmic scale factor, d²a/dt², is positive, which is equivalent to the deceleration parameter, q, being negative. However, note this does not imply that the Hubble parameter is increasing with time. Since the Hubble parameter is defined as H = (da/dt)/a, it follows from the definitions that the derivative of the Hubble parameter is given by dH/dt = (d²a/dt²)/a − H², so the Hubble parameter is decreasing with time unless (d²a/dt²)/a ≥ H². Observations prefer q ≈ −0.55, which implies that d²a/dt² is positive but dH/dt is negative. Essentially, this implies that the cosmic recession velocity of any one particular galaxy is increasing with time, but its velocity/distance ratio is still decreasing; thus different galaxies expanding across a sphere of fixed radius cross the sphere more slowly at later times. It is seen from above that the case of "zero acceleration/deceleration" corresponds to a being a linear function of t, d²a/dt² = 0, q = 0, and dH/dt = −H². Evidence for acceleration The rate of expansion of the universe can be analyzed using the magnitude-redshift relationship of astronomical objects using standard candles, or their distance-redshift relationship using standard rulers. The growth of large-scale structure is also a factor; the observed values of the cosmological parameters are best described by models which include an accelerating expansion. Supernova observation In 1998, the first evidence for acceleration came from the observation of Type Ia supernovae, which are exploding white dwarf stars that have exceeded their stability limit. Because they all have similar masses, their intrinsic luminosity can be standardized. Repeated imaging of selected areas of the sky is used to discover the supernovae, then follow-up observations give their peak brightness, which is converted into a quantity known as luminosity distance (see distance measures in cosmology for details). Spectral lines of their light can be used to determine their redshift. For supernovae at redshift less than around 0.1, or light travel time less than 10 percent of the age of the universe, this gives a nearly linear distance–redshift relation due to Hubble's law. At larger distances, since the expansion rate of the universe has changed over time, the distance-redshift relation deviates from linearity, and this deviation depends on how the expansion rate has changed over time. The full calculation requires computer integration of the Friedmann equation, but a simple derivation can be given as follows: the redshift z directly gives the cosmic scale factor at the time the supernova exploded, via a = 1/(1 + z). So a supernova with a measured redshift of z = 0.5 implies the universe was 1/(1 + 0.5) = 2/3 of its present size when the supernova exploded. In the case of accelerated expansion, d²a/dt² is positive; therefore, da/dt was smaller in the past than today. Thus, an accelerating universe took a longer time to expand from 2/3 to 1 times its present size, compared to a non-accelerating universe with constant da/dt and the same present-day value of the Hubble constant. This results in a larger light-travel time, larger distance and fainter supernovae, which corresponds to the actual observations. Adam Riess et al. found that "the distances of the high-redshift SNe Ia were, on average, 10% to 15% further than expected in a low mass density universe without a cosmological constant".
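The "computer integration of the Friedmann equation" mentioned above can be illustrated with a short numerical sketch. The following Python calculation is purely illustrative and is not the analysis pipeline used by the supernova teams: it assumes spatially flat models, round parameter values (H_0 = 70 km/s/Mpc, Ω_m = 0.3, Ω_Λ = 0.7 for the accelerating case), and a simple midpoint integration, and it compares the luminosity distance at z = 0.5 and the present age of the universe for an accelerating and a decelerating matter-only model.

```python
# Minimal sketch: integrate the Friedmann equation numerically to compare an
# accelerating (flat Lambda-CDM) model with a decelerating matter-only model.
# Illustrative parameters only; real supernova analyses are far more involved.
import math

C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s/Mpc (illustrative)
MPC_KM = 3.0857e19     # kilometres per megaparsec
GYR_S = 3.156e16       # seconds per gigayear

def E(z, omega_m, omega_lambda):
    """Dimensionless expansion rate H(z)/H0 for a spatially flat model."""
    return math.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda)

def luminosity_distance(z, omega_m, omega_lambda, steps=10_000):
    """Luminosity distance in Mpc for a flat universe: d_L = (1+z) * (c/H0) * integral_0^z dz'/E(z')."""
    dz = z / steps
    integral = sum(dz / E((i + 0.5) * dz, omega_m, omega_lambda) for i in range(steps))
    return (1.0 + z) * (C_KM_S / H0) * integral

def age_of_universe(omega_m, omega_lambda, steps=10_000):
    """Age in Gyr: t0 = integral_0^1 da / (a H(a)), with H(a) = H0 * sqrt(Om/a^3 + OL)."""
    da = 1.0 / steps
    integral = 0.0
    for i in range(steps):
        a = (i + 0.5) * da
        h = H0 * math.sqrt(omega_m / a ** 3 + omega_lambda)   # km/s/Mpc
        integral += da / (a * h)                               # units: Mpc * s / km
    return integral * MPC_KM / GYR_S                           # convert to gigayears

z = 0.5
d_accel = luminosity_distance(z, omega_m=0.3, omega_lambda=0.7)   # accelerating model
d_matter = luminosity_distance(z, omega_m=1.0, omega_lambda=0.0)  # decelerating (matter-only)
print(f"d_L(z=0.5), flat Lambda-CDM : {d_accel:7.0f} Mpc")
print(f"d_L(z=0.5), flat matter-only: {d_matter:7.0f} Mpc")
print(f"ratio: {d_accel / d_matter:.2f}  -> supernovae appear fainter in the accelerating model")
print(f"age, flat Lambda-CDM : {age_of_universe(0.3, 0.7):.1f} Gyr")
print(f"age, flat matter-only: {age_of_universe(1.0, 0.0):.1f} Gyr")
```

With these illustrative numbers the accelerating model yields distances roughly 20% larger at z = 0.5 (so supernovae appear fainter) and an older universe, consistent with the qualitative argument above; the published 10% to 15% figure refers to a comparison with an open, low-density model rather than the flat matter-only model used in this sketch.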
This means that the measured high-redshift distances were too large, compared to nearby ones, for a decelerating universe. Several researchers have questioned the majority opinion on the acceleration or the assumption of the "cosmological principle" (that the universe is homogeneous and isotropic). For example, a 2019 paper analyzed the Joint Light-curve Analysis catalog of Type Ia supernovae, containing ten times as many supernovae as were used in the 1998 analyses, and concluded that there was little evidence for a "monopole", that is, for an isotropic acceleration in all directions. See also the section on Alternative theories below. Baryon acoustic oscillations In the early universe before recombination and decoupling took place, photons and matter existed in a primordial plasma. Points of higher density in the photon-baryon plasma would contract, being compressed by gravity until the pressure became too large and they expanded again. This contraction and expansion created vibrations in the plasma analogous to sound waves. Since dark matter only interacts gravitationally, it stayed at the centre of the sound wave, the origin of the original overdensity. When decoupling occurred, approximately 380,000 years after the Big Bang, photons separated from matter and were able to stream freely through the universe, creating the cosmic microwave background as we know it. This left shells of baryonic matter at a fixed radius from the overdensities of dark matter, a distance known as the sound horizon. As time passed and the universe expanded, it was at these inhomogeneities of matter density that galaxies started to form. So by looking at the distances at which galaxies at different redshifts tend to cluster, it is possible to determine a standard angular diameter distance and use that to compare to the distances predicted by different cosmological models. Peaks have been found in the correlation function (the probability that two galaxies will be a certain distance apart) at , (where h is the dimensionless Hubble constant) indicating that this is the size of the sound horizon today, and by comparing this to the sound horizon at the time of decoupling (using the CMB), we can confirm the accelerated expansion of the universe. Clusters of galaxies Measuring the mass functions of galaxy clusters, which describe the number density of the clusters above a threshold mass, also provides evidence for dark energy. By comparing these mass functions at high and low redshifts to those predicted by different cosmological models, values for and are obtained which confirm a low matter density and a non-zero amount of dark energy. Age of the universe Given a cosmological model with certain values of the cosmological density parameters, it is possible to integrate the Friedmann equations and derive the age of the universe. By comparing this to actual measured values of the cosmological parameters, we can confirm the validity of a model which is accelerating now, and had a slower expansion in the past. Gravitational waves as standard sirens Recent discoveries of gravitational waves through LIGO and VIRGO not only confirmed Einstein's predictions but also opened a new window into the universe. These gravitational waves can serve as standard sirens to measure the expansion rate of the universe. Abbott et al. (2017) measured the Hubble constant to be approximately 70 kilometres per second per megaparsec.
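The standard-siren idea just described can be illustrated with a minimal sketch. This is not the published measurement: the numbers below are round, illustrative values of the order reported for the GW170817 event, used only to show how a waveform-derived distance and a host-galaxy recession velocity combine into a Hubble constant.

```python
# Hedged illustration of a standard-siren Hubble constant estimate.
# Both input values are assumptions of this example, not the measured GW170817 numbers.
distance_mpc = 40.0        # luminosity distance inferred from the gravitational-wave amplitude
velocity_km_s = 2800.0     # Hubble-flow recession velocity of the host galaxy

H0 = velocity_km_s / distance_mpc   # Hubble's law: v = H0 * d
print(f"H0 ~ {H0:.0f} km/s per Mpc")  # ~70, of the order of the value quoted in the text
```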
The amplitude of the strain h depends on the masses of the objects producing the waves, their distance from the observation point, and the frequencies at which the gravitational waves are detected. The associated distance measures depend on cosmological parameters such as the Hubble constant for nearby objects, and on additional parameters such as the dark energy density and matter density for distant sources. Explanatory models Dark energy The most important property of dark energy is that it has negative pressure (repulsive action) which is distributed relatively homogeneously in space. where is the speed of light and is the energy density. Different theories of dark energy suggest different values of , with for cosmic acceleration (this leads to a positive value of in the acceleration equation above). The simplest explanation for dark energy is that it is a cosmological constant or vacuum energy; in this case . This leads to the Lambda-CDM model, which has generally been known as the Standard Model of Cosmology from 2003 through the present, since it is the simplest model in good agreement with a variety of recent observations. Riess et al. found that their results from supernova observations favoured expanding models with positive cosmological constant () and an accelerated expansion (). Phantom energy These observations allow the possibility of a cosmological model containing a dark energy component with equation of state . This phantom energy density would become infinite in finite time, causing such a huge gravitational repulsion that the universe would lose all structure and end in a Big Rip. For example, for and  =70 km·s−1·Mpc−1, the time remaining before the universe ends in this Big Rip is 22 billion years. Alternative theories There are many alternative explanations for the accelerating universe. Some examples are quintessence, a proposed form of dark energy with a non-constant equation of state, whose density decreases with time. A negative mass cosmology does not assume that the mass density of the universe is positive (as is done in supernova observations), and instead finds a negative cosmological constant. Occam's razor also suggests that this is the 'more parsimonious hypothesis'. Dark fluid is an alternative explanation for accelerating expansion which attempts to unite dark matter and dark energy into a single framework. Alternatively, some authors have argued that the accelerated expansion of the universe could be due to a repulsive gravitational interaction of antimatter or a deviation of the gravitational laws from general relativity, such as massive gravity, meaning that gravitons themselves have mass. The measurement of the speed of gravity with the gravitational wave event GW170817 ruled out many modified gravity theories as alternative explanations to dark energy. Another type of model, the backreaction conjecture, was proposed by cosmologist Syksy Räsänen: the rate of expansion is not homogeneous, but Earth is in a region where expansion is faster than the background. Inhomogeneities in the early universe cause the formation of walls and bubbles, where the inside of a bubble has less matter than on average. According to general relativity, space is less curved than on the walls, and thus appears to have more volume and a higher expansion rate. In the denser regions, the expansion is slowed by a higher gravitational attraction.
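The 22-billion-year Big Rip figure quoted above can be reproduced with a short calculation. This is a hedged sketch using the approximate formula of Caldwell, Kamionkowski and Weinberg (2003); the matter density parameter of 0.3 is an assumption of this example rather than a value given in the text.

```python
# Hedged sketch of the Big Rip time estimate for phantom energy with w < -1.
import math

def big_rip_time_gyr(w=-1.5, H0_km_s_mpc=70.0, omega_m=0.3):
    """Approximate time from today until the Big Rip, in Gyr (Caldwell et al. 2003 estimate)."""
    km_per_mpc = 3.0857e19
    seconds_per_gyr = 3.156e16
    H0_per_gyr = H0_km_s_mpc / km_per_mpc * seconds_per_gyr   # H0 in 1/Gyr
    return (2.0 / 3.0) / (abs(1.0 + w) * H0_per_gyr * math.sqrt(1.0 - omega_m))

print(f"{big_rip_time_gyr():.0f} Gyr")   # roughly 22 Gyr, matching the figure quoted in the text
```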
Therefore, the inward collapse of the denser regions looks the same as an accelerating expansion of the bubbles, leading us to conclude that the universe is undergoing an accelerated expansion. The benefit is that it does not require any new physics such as dark energy. Räsänen does not consider the model likely, but without any falsification, it must remain a possibility. It would require rather large density fluctuations (20%) to work. A final possibility is that dark energy is an illusion caused by some bias in measurements. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration, and making it appear as if we live in a Hubble bubble. Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by the relative motion of us to the rest of the universe, or that the supernova sample size used wasn't large enough. Consequences for the universe As the universe expands, the density of radiation and ordinary dark matter declines more quickly than the density of dark energy (see equation of state) and, eventually, dark energy dominates. Specifically, when the scale of the universe doubles, the density of matter is reduced by a factor of 8, but the density of dark energy is nearly unchanged (it is exactly constant if the dark energy is the cosmological constant). In models where dark energy is the cosmological constant, the universe will expand exponentially with time in the far future, coming closer and closer to a de Sitter universe. This will eventually lead to all evidence for the Big Bang disappearing, as the cosmic microwave background is redshifted to lower intensities and longer wavelengths. Eventually, its frequency will be low enough that it will be absorbed by the interstellar medium, and so be screened from any observer within the galaxy. This will occur when the universe is less than 50 times its existing age, leading to the end of any life as the distant universe turns dark. A constantly expanding universe with a non-zero cosmological constant has mass density decreasing over time. Under such a scenario, it is understood that all matter will ionize and disintegrate into isolated stable particles such as electrons and neutrinos, with all complex structures dissipating. This is called "heat death of the universe" (or the Big Freeze). Alternatives for the ultimate fate of the universe include the Big Rip mentioned above, a Big Bounce, or a Big Crunch. See also Cosmological constant Friedmann–Lemaître–Robertson–Walker metric High-Z Supernova Search Team Lambda-CDM model List of multiple discoveries Expansion of the universe Scale factor (cosmology) Supernova Cosmology Project Hubble constant Notes References Expansion of the universe Big Bang Physical cosmological concepts
Accelerating expansion of the universe
[ "Physics", "Astronomy", "Mathematics" ]
3,345
[ "Physical cosmological concepts", "Cosmogony", "Concepts in astrophysics", "Physical quantities", "Acceleration", "Big Bang", "Quantity", "Wikipedia categories named after physical quantities" ]
39,137
https://en.wikipedia.org/wiki/Quintessence%20%28physics%29
In physics, quintessence is a hypothetical form of dark energy, more precisely a scalar field minimally coupled to gravity, postulated as an explanation of the observation of an accelerating rate of expansion of the universe. The first example of this scenario was proposed by Ratra and Peebles (1988) and Wetterich (1988). The concept was expanded to more general types of time-varying dark energy, and the term "quintessence" was first introduced in a 1998 paper by Robert R. Caldwell, Rahul Dave and Paul Steinhardt. It has been proposed by some physicists to be a fifth fundamental force. Quintessence differs from the cosmological constant explanation of dark energy in that it is dynamic; that is, it changes over time, unlike the cosmological constant which, by definition, does not change. Quintessence can be either attractive or repulsive depending on the ratio of its kinetic and potential energy. Those working with this postulate believe that quintessence became repulsive about ten billion years ago, about 3.5 billion years after the Big Bang. A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable. Terminology The name comes from quinta essentia (fifth element). So called in Latin starting from the Middle Ages, this was the (first) element added by Aristotle to the other four ancient classical elements because he thought it was the essence of the celestial world. Aristotle posited it to be a pure, fine, and primigenial element. Later scholars identified this element with aether. Similarly, modern quintessence would be the fifth known "dynamical, time-dependent, and spatially inhomogeneous" contribution to the overall mass–energy content of the universe. Of course, the other four components are not the ancient Greek classical elements, but rather "baryons, neutrinos, dark matter, [and] radiation." Although neutrinos are sometimes considered radiation, the term "radiation" in this context is only used to refer to massless photons. Spatial curvature of the cosmos (which has not been detected) is excluded because it is non-dynamical and homogeneous; the cosmological constant would not be considered a fifth component in this sense, because it is non-dynamical, homogeneous, and time-independent. Scalar field Quintessence (Q) is a scalar field with an equation of state where wq, the ratio of pressure pq and density q, is given by the potential energy and a kinetic term: Hence, quintessence is dynamic, and generally has a density and wq parameter that varies with time. Specifically, wq parameter can vary within the range [-1,1]. By contrast, a cosmological constant is static, with a fixed energy density and wq = −1. Tracker behavior Many models of quintessence have a tracker behavior, which according to Ratra and Peebles (1988) and Paul Steinhardt et al. (1999) partly solves the cosmological constant problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start having characteristics similar to dark energy, eventually dominating the universe. This naturally sets the low scale of the dark energy. 
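The scalar-field expressions referred to above (under "Scalar field") are not rendered in this extract. For a minimally coupled field φ with potential V(φ), the standard forms are as follows; this is a reconstruction in conventional notation, not the article's own markup.

```latex
\rho_{Q} = \tfrac{1}{2}\dot\phi^{2} + V(\phi), \qquad
p_{Q} = \tfrac{1}{2}\dot\phi^{2} - V(\phi), \qquad
w_{Q} \equiv \frac{p_{Q}}{\rho_{Q}}
     = \frac{\tfrac{1}{2}\dot\phi^{2} - V(\phi)}{\tfrac{1}{2}\dot\phi^{2} + V(\phi)}

% Potential-dominated field: w_Q -> -1 (cosmological-constant-like behaviour);
% kinetic-dominated field:   w_Q -> +1, so w_Q ranges over [-1, 1] as stated in the text.
```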
When comparing the predicted expansion rate of the universe as given by the tracker solutions with cosmological data, a main feature of tracker solutions is that one needs four parameters to properly describe the behavior of their equation of state, whereas it has been shown that at most a two-parameter model can optimally be constrained by mid-term future data (horizon 2015–2020). Specific models Some special cases of quintessence are phantom energy, in which wq < −1, and k-essence (short for kinetic quintessence), which has a non-standard form of kinetic energy. If this type of energy were to exist, it would cause a big rip in the universe due to the growing energy density of dark energy, which would cause the expansion of the universe to increase at a faster-than-exponential rate. Holographic dark energy Holographic dark energy models, compared with cosmological constant models, imply a high degeneracy. It has been suggested that dark energy might originate from quantum fluctuations of spacetime, and is limited by the event horizon of the universe. Studies with quintessence dark energy found that it dominates gravitational collapse in a spacetime simulation, based on the holographic thermalization. These results show that the smaller the state parameter of quintessence is, the harder it is for the plasma to thermalize. Quintom scenario In 2004, when scientists fitted the evolution of dark energy with the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary ( = –1) from above to below. A proven no-go theorem indicates this situation, called the Quintom scenario, requires at least two degrees of freedom for dark energy models involving ideal gases or scalar fields. In 2024, more detailed data from the Dark Energy Spectroscopic Instrument provided evidence suggesting a possible "Quintom-B" scenario, with the equation of state crossing the boundary from below to above. See also Aether (classical element) References Further reading Dark energy
Quintessence (physics)
[ "Physics", "Astronomy" ]
1,138
[ "Unsolved problems in astronomy", "Physical quantities", "Concepts in astronomy", "Unsolved problems in physics", "Energy (physics)", "Dark energy", "Wikipedia categories named after physical quantities" ]
39,140
https://en.wikipedia.org/wiki/Dyson%27s%20eternal%20intelligence
Dyson's eternal intelligence (the Dyson Scenario) is a hypothetical concept, proposed by Freeman Dyson in 1979, by which an immortal society of intelligent beings in an open universe may escape the prospect of the heat death of the universe by performing an infinite number of computations (as defined below) though expending only a finite amount of energy. Bremermann's limit can be invoked to deduce a lower bound on the amount of time required to distinguish two discrete energy levels of a quantum system using a quantum measurement. One can interpret this measurement as a computation on 1 bit for this system; however, Bremermann's limit is difficult to interpret physically, since there exist quantum Hamiltonians for which this interpretation would give arbitrarily fast computation speeds at arbitrarily low energy. Following this interpretation, the upper bound on the number of such measurements that can be performed grows over time. Assuming that the energy in the quantum system on which the measurement is performed is lost (while ignoring energy that is lost due to the measurement apparatus itself), the energy available from the mechanism suggested below slows logarithmically, but never stops. The intelligent beings would begin by storing a finite amount of energy. They then use half (or any fraction) of this energy to power their computation. When the energy is used up, they would enter a state of zero-energy-consumption until the universe cooled. Once the universe had cooled sufficiently, half of the remaining half (one quarter of the original energy) of the intelligent beings' fuel reserves would once again be released, powering a brief period of computation once more. This would continue, with smaller and smaller amounts of energy being released. As the universe cooled, the computations would be slower and slower, but there would still be an infinite number of them. In 1998, it was discovered that the expansion of the universe appears to be accelerating rather than decelerating due to a positive cosmological constant, implying that any two regions of the universe will eventually become permanently separated from one another. Dyson noted that "in an accelerated universe everything is different". However, even if the cosmological constant is , the matter density in an FLRW universe would converge to at rate , suggesting that the stored energy would become unavailable even if it is not used. Legacy Frank J. Tipler has cited Dyson's writings, and specifically his writings on the eternal intelligence, as a major influence on his own highly controversial Omega Point theory. Tipler's theory differs from Dyson's theory on several key points, most notable of which is that Dyson's eternal intelligence presupposes an open universe while Tipler's Omega Point presupposes a closed/contracting universe. Both theories will be invalidated if the observed universal expansion continues to accelerate. See also References Physical cosmology Freeman Dyson
Dyson's eternal intelligence
[ "Physics", "Astronomy" ]
592
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
39,221
https://en.wikipedia.org/wiki/Thermodynamic%20free%20energy
In thermodynamics, the thermodynamic free energy is one of the state functions of a thermodynamic system. The change in the free energy is the maximum amount of work that the system can perform in a process at constant temperature, and its sign indicates whether the process is thermodynamically favorable or forbidden. Since free energy usually contains potential energy, it is not absolute but depends on the choice of a zero point. Therefore, only relative free energy values, or changes in free energy, are physically meaningful. The free energy is the portion of any first-law energy that is available to perform thermodynamic work at constant temperature, i.e., work mediated by thermal energy. Free energy is subject to irreversible loss in the course of such work. Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy. Several free energy functions may be formulated based on system criteria. Free energy functions are Legendre transforms of the internal energy. The Gibbs free energy is given by , where is the enthalpy, is the absolute temperature, and is the entropy. , where is the internal energy, is the pressure, and is the volume. is the most useful for processes involving a system at constant pressure and temperature , because, in addition to subsuming any entropy change due merely to heat, a change in also excludes the work needed to "make space for additional molecules" produced by various processes. Gibbs free energy change therefore equals work not associated with system expansion or compression, at constant temperature and pressure, hence its utility to solution-phase chemists, including biochemists. The historically earlier Helmholtz free energy is defined in contrast as . Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant . Thus its appellation "work content", and the designation (). Since it makes no reference to any quantities involved in work (such as and ), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system at constant temperature, and it can increase at most by the amount of work done on a system isothermally. The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists; and to gas-phase chemists and engineers, who do not want to ignore work.) Historically, the term 'free energy' has been used for either quantity. In physics, free energy most often refers to the Helmholtz free energy, denoted by (or ), while in chemistry, free energy most often refers to the Gibbs free energy. The values of the two free energies are usually quite similar and the intended free energy function is often implicit in manuscripts and presentations. Meaning of "free" The basic definition of "energy" is a measure of a body's (in thermodynamics, the system's) ability to cause change. For example, when a person pushes a heavy box a few metres forward, that person exerts mechanical energy, also known as work, on the box over a distance of a few meters forward. The mathematical definition of this form of energy is the product of the force exerted on the object and the distance by which the box moved (). Because the person changed the stationary position of the box, that person exerted energy on that box. 
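The defining formulas quoted in the opening of this article do not survive in this extract. In standard notation (a reconstruction, not the original markup), the two free energies discussed above are:

```latex
G = H - TS = U + pV - TS   \qquad\text{(Gibbs free energy)}

A = U - TS                 \qquad\text{(Helmholtz free energy)}
```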
The work exerted can also be called "useful energy", because energy was converted from one form into the intended purpose, i.e. mechanical use. For the case of the person pushing the box, the energy in the form of internal (or potential) energy obtained through metabolism was converted into work to push the box. This energy conversion, however, was not straightforward: while some internal energy went into pushing the box, some was diverted away (lost) in the form of heat (transferred thermal energy). For a reversible process, heat is the product of the absolute temperature and the change in entropy of a body (entropy is a measure of disorder in a system). The difference between the change in internal energy, which is , and the energy lost in the form of heat is what is called the "useful energy" of the body, or the work of the body performed on an object. In thermodynamics, this is what is known as "free energy". In other words, free energy is a measure of work (useful energy) a system can perform at constant temperature. Mathematically, free energy is expressed as This expression has commonly been interpreted to mean that work is extracted from the internal energy while represents energy not available to perform work. However, this is incorrect. For instance, in an isothermal expansion of an ideal gas, the internal energy change is and the expansion work is derived exclusively from the term supposedly not available to perform work. But it is noteworthy that the derivative form of the free energy: (for Helmholtz free energy) does indeed indicate that a spontaneous change in a non-reactive system's free energy (NOT the internal energy) comprises the available energy to do work (compression in this case) and the unavailable energy . Similar expression can be written for the Gibbs free energy change. In the 18th and 19th centuries, the theory of heat, i.e., that heat is a form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i.e., that heat is a fluid, and the four element theory, in which heat was the lightest of the four elements. In a similar manner, during these years, heat was beginning to be distinguished into different classification categories, such as "free heat", "combined heat", "radiant heat", specific heat, heat capacity, "absolute heat", "latent caloric", "free" or "perceptible" caloric (calorique sensible), among others. In 1780, for example, Laplace and Lavoisier stated: “In general, one can change the first hypothesis into the second by changing the words ‘free heat, combined heat, and heat released’ into ‘vis viva, loss of vis viva, and increase of vis viva.’" In this manner, the total mass of caloric in a body, called absolute heat, was regarded as a mixture of two components; the free or perceptible caloric could affect a thermometer, whereas the other component, the latent caloric, could not. The use of the words "latent heat" implied a similarity to latent heat in the more usual sense; it was regarded as chemically bound to the molecules of the body. In the adiabatic compression of a gas, the absolute heat remained constant but the observed rise in temperature implied that some latent caloric had become "free" or perceptible. During the early 19th century, the concept of perceptible or free caloric began to be referred to as "free heat" or "heat set free". 
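The isothermal ideal-gas expansion discussed above can be illustrated numerically. This is a hedged example with assumed values (1 mol of ideal gas at 300 K doubling its volume); it shows the point made in the text: the internal energy does not change, yet the maximum work extracted equals TΔS, which is exactly −ΔA.

```python
# Hedged numerical illustration of reversible isothermal expansion of an ideal gas.
import math

R = 8.314                    # gas constant [J/(mol K)]
n, T = 1.0, 300.0            # assumed amount and temperature
V1, V2 = 1.0, 2.0            # arbitrary volume units; only the ratio matters

delta_U = 0.0                              # ideal gas at constant temperature
w_rev = n * R * T * math.log(V2 / V1)      # reversible work done by the gas
delta_S = n * R * math.log(V2 / V1)        # entropy change of the gas
delta_A = delta_U - T * delta_S            # Helmholtz free energy change at constant T

print(f"w_rev = {w_rev:.1f} J, T*dS = {T * delta_S:.1f} J, dA = {delta_A:.1f} J")
# w_rev equals T*dS and equals -dA: the work comes from the 'TS' term, as the text points out.
```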
In 1824, for example, the French physicist Sadi Carnot, in his famous "Reflections on the Motive Power of Fire", speaks of quantities of heat ‘absorbed or set free’ in different transformations. In 1882, the German physicist and physiologist Hermann von Helmholtz coined the phrase ‘free energy’ for the expression , in which the change in A (or G) determines the amount of energy ‘free’ for work under the given conditions, specifically constant temperature. Thus, in traditional use, the term "free" was attached to Gibbs free energy for systems at constant pressure and temperature, or to Helmholtz free energy for systems at constant temperature, to mean ‘available in the form of useful work.’ With reference to the Gibbs free energy, we need to add the qualification that it is the energy free for non-volume work and compositional changes. An increasing number of books and journal articles do not include the attachment "free", referring to G as simply Gibbs energy (and likewise for the Helmholtz energy). This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective ‘free’ was supposedly banished. This standard, however, has not yet been universally adopted, and many published articles and books still include the descriptive ‘free’. Application Just like the general concept of energy, free energy has a few definitions suitable for different conditions. In physics, chemistry, and biology, these conditions are thermodynamic parameters (temperature , volume , pressure , etc.). Scientists have come up with several ways to define free energy. The mathematical expression of Helmholtz free energy is: This definition of free energy is useful for gas-phase reactions or in physics when modeling the behavior of isolated systems kept at a constant volume. For example, if a researcher wanted to perform a combustion reaction in a bomb calorimeter, the volume is kept constant throughout the course of a reaction. Therefore, the heat of the reaction is a direct measure of the free energy change, . In solution chemistry, on the other hand, most chemical reactions are kept at constant pressure. Under this condition, the heat of the reaction is equal to the enthalpy change of the system. Under constant pressure and temperature, the free energy in a reaction is known as Gibbs free energy . These functions have a minimum in chemical equilibrium, as long as certain variables (, and or ) are held constant. In addition, they also have theoretical importance in deriving Maxwell relations. Work other than may be added, e.g., for electrochemical cells, or work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors. In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous "bulk" materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy. is the number of molecules (alternatively, moles) of type in the system. If these quantities do not appear, it is impossible to describe compositional changes. 
The differentials for processes at uniform pressure and temperature are (assuming only work): where μi is the chemical potential for the ith component in the system. The second relation is especially useful at constant and , conditions which are easy to achieve experimentally, and which approximately characterize living creatures. Under these conditions, it simplifies to Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as times a corresponding increase in the entropy of the system and/or its surrounding. An example is surface free energy, the amount of increase of free energy when the area of surface increases by every unit area. The path integral Monte Carlo method is a numerical approach for determining the values of free energies, based on quantum dynamical principles. Work and free energy change For a reversible isothermal process, ΔS = qrev/T and therefore the definition of A results in (at constant temperature) This tells us that the change in free energy equals the reversible or maximum work for a process performed at constant temperature. Under other conditions, free-energy change is not equal to work; for instance, for a reversible adiabatic expansion of an ideal gas, Importantly, for a heat engine, including the Carnot cycle, the free-energy change after a full cycle is zero, while the engine produces nonzero work. It is important to note that for heat engines and other thermal systems, the free energies do not offer convenient characterizations; internal energy and enthalpy are the preferred potentials for characterizing thermal systems. Free energy change and spontaneous processes According to the second law of thermodynamics, for any process that occurs in a closed system, the inequality of Clausius, ΔS > q/Tsurr, applies. For a process at constant temperature and pressure without non-PV work, this inequality transforms into . Similarly, for a process at constant temperature and volume, . Thus, a negative value of the change in free energy is a necessary condition for a process to be spontaneous; this is the most useful form of the second law of thermodynamics in chemistry. In chemical equilibrium at constant T and p without electrical work, dG = 0. History The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in previous years to describe the force that caused chemical reactions. The term affinity, as used in chemical relation, dates back to at least the time of Albertus Magnus. From the 1998 textbook Modern Thermodynamics by Nobel Laureate and chemistry professor Ilya Prigogine we find: "As motion was explained by the Newtonian concept of force, chemists wanted a similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked a clear definition." During the entire 18th century, the dominant view with regard to heat and light was that put forth by Isaac Newton, called the Newtonian hypothesis, which states that light and heat are forms of matter attracted or repelled by other forms of matter, with forces analogous to gravitation or to chemical affinity. In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify affinity using heats of reaction. 
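The differential relations quoted at the start of this passage ("The differentials for processes at uniform pressure and temperature...") are missing from the extract. In standard notation, assuming only pV work plus compositional changes, they read as follows; this is a reconstruction, not the article's own markup.

```latex
dG = -S\,dT + V\,dp + \sum_{i} \mu_{i}\,dN_{i}, \qquad
dA = -S\,dT - p\,dV + \sum_{i} \mu_{i}\,dN_{i}

% At constant T and p the first relation reduces to the simplified form mentioned in the text:
(dG)_{T,p} = \sum_{i} \mu_{i}\,dN_{i}
```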
In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of a system of bodies which liberate heat. In addition to this, in 1780 Antoine Lavoisier and Pierre-Simon Laplace laid the foundations of thermochemistry by showing that the heat given out in a reaction is equal to the heat absorbed in the reverse reaction. They also investigated the specific heat and latent heat of a number of substances, and amounts of heat given out in combustion. In a similar manner, in 1840 Swiss chemist Germain Hess formulated the principle that the evolution of heat in a reaction is the same whether the process is accomplished in one-step process or in a number of stages. This is known as Hess' law. With the advent of the mechanical theory of heat in the early 19th century, Hess's law came to be viewed as a consequence of the law of conservation of energy. Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of a compound as a measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct. In 1847, the English physicist James Joule showed that he could raise the temperature of water by turning a paddle wheel in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i.e., approximately, . This statement came to be known as the mechanical equivalent of heat and was a precursory form of the first law of thermodynamics. By 1865, the German physicist Rudolf Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from a combustion reaction in a coal furnace to boil water, and use this heat to vaporize steam, and then use the enhanced high-pressure energy of the vaporized steam to push a piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i.e., the water molecules in the cylinder, do on each other as they pass or transform from one step of or state of the engine cycle to the next, e.g., from () to (). Clausius originally called this the "transformation content" of the body, and then later changed the name to entropy. Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e.g., to push the piston. Clausius defined this transformation heat as . In 1873, Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. 
In 1876, Gibbs built on this framework by introducing the concept of chemical potential so to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words, to summarize his results in 1873, Gibbs states: In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body. Hence, in 1882, after the introduction of these arguments by Clausius and Gibbs, the German scientist Hermann von Helmholtz stated, in opposition to Berthelot and Thomas' hypothesis that chemical affinity is a measure of the heat of reaction of chemical reaction as based on the principle of maximal work, that affinity is not the heat given out in the formation of a compound but rather it is the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy A at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (Internal energy). Thus, G or A is the amount of energy "free" for work under the given conditions. Up until this point, the general view had been such that: “all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish”. Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world. See also Energy Exergy Merle Randall Second law of thermodynamics Superconductivity References Energy (physics) State functions
Thermodynamic free energy
[ "Physics", "Chemistry", "Mathematics" ]
4,066
[ "State functions", "Thermodynamic properties", "Physical quantities", "Quantity", "Energy (physics)", "Thermodynamic free energy", "Wikipedia categories named after physical quantities" ]
39,406
https://en.wikipedia.org/wiki/Central%20limit%20theorem
In probability theory, the central limit theorem (CLT) states that, under appropriate conditions, the distribution of a normalized version of the sample mean converges to a standard normal distribution. This holds even if the original variables themselves are not normally distributed. There are several versions of the CLT, each applying in the context of different conditions. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions. This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern form it was only precisely stated as late as 1920. In statistics, the CLT can be stated as: let denote a statistical sample of size from a population with expected value (average) and finite positive variance , and let denote the sample mean (which is itself a random variable). Then the limit as of the distribution of is a normal distribution with mean and variance . In other words, suppose that a large sample of observations is obtained, each observation being randomly produced in a way that does not depend on the values of the other observations, and the average (arithmetic mean) of the observed values is computed. If this procedure is performed many times, resulting in a collection of observed averages, the central limit theorem says that if the sample size is large enough, the probability distribution of these averages will closely approximate a normal distribution. The central limit theorem has several variants. In its common form, the random variables must be independent and identically distributed (i.i.d.). This requirement can be weakened; convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations if they comply with certain conditions. The earliest version of this theorem, that the normal distribution may be used as an approximation to the binomial distribution, is the de Moivre–Laplace theorem. Independent sequences Classical CLT Let be a sequence of i.i.d. random variables having a distribution with expected value given by and finite variance given by Suppose we are interested in the sample average By the law of large numbers, the sample average converges almost surely (and therefore also converges in probability) to the expected value as The classical central limit theorem describes the size and the distributional form of the stochastic fluctuations around the deterministic number during this convergence. More precisely, it states that as gets larger, the distribution of the normalized mean , i.e. the difference between the sample average and its limit scaled by the factor , approaches the normal distribution with mean and variance For large enough the distribution of gets arbitrarily close to the normal distribution with mean and variance The usefulness of the theorem is that the distribution of approaches normality regardless of the shape of the distribution of the individual Formally, the theorem can be stated as follows: In the case convergence in distribution means that the cumulative distribution functions of converge pointwise to the cdf of the distribution: for every real number where is the standard normal cdf evaluated at The convergence is uniform in in the sense that where denotes the least upper bound (or supremum) of the set. 
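A small simulation makes the classical statement above concrete. This is a hedged illustration with assumed parameters: standardized sample means of a strongly skewed distribution (the exponential distribution with mean 1 and variance 1) approach the standard normal law as the sample size grows.

```python
# Hedged simulation of the classical CLT: skewed population, normalized sample means.
import numpy as np

rng = np.random.default_rng(seed=0)
mu, sigma = 1.0, 1.0                       # mean and standard deviation of Exp(1)

for n in (2, 10, 100, 1000):
    x = rng.exponential(scale=1.0, size=(10_000, n))       # 10,000 samples of size n
    z = (x.mean(axis=1) - mu) / (sigma / np.sqrt(n))        # standardized sample means
    print(f"n = {n:4d}:  P(Z <= 1) ~ {np.mean(z <= 1.0):.3f}   (standard normal: 0.841)")
```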
Lyapunov CLT In this variant of the central limit theorem the random variables have to be independent, but not necessarily identically distributed. The theorem also requires that random variables have moments of some order and that the rate of growth of these moments is limited by the Lyapunov condition given below. In practice it is usually easiest to check Lyapunov's condition for If a sequence of random variables satisfies Lyapunov's condition, then it also satisfies Lindeberg's condition. The converse implication, however, does not hold. Lindeberg (-Feller) CLT In the same setting and with the same notation as above, the Lyapunov condition can be replaced with the following weaker one (from Lindeberg in 1920). Suppose that for every where is the indicator function. Then the distribution of the standardized sums converges towards the standard normal distribution CLT for the sum of a random number of random variables Rather than summing an integer number of random variables and taking , the sum can be of a random number of random variables, with conditions on . Multidimensional CLT Proofs that use characteristic functions can be extended to cases where each individual is a random vector in with mean vector and covariance matrix (among the components of the vector), and these random vectors are independent and identically distributed. The multidimensional central limit theorem states that when scaled, sums converge to a multivariate normal distribution. Summation of these vectors is done component-wise. For let be independent random vectors. The sum of the random vectors is and their average is Therefore, The multivariate central limit theorem states that where the covariance matrix is equal to The multivariate central limit theorem can be proved using the Cramér–Wold theorem. The rate of convergence is given by the following Berry–Esseen type result: It is unknown whether the factor is necessary. The Generalized Central Limit Theorem The Generalized Central Limit Theorem (GCLT) was an effort of multiple mathematicians (Bernstein, Lindeberg, Lévy, Feller, Kolmogorov, and others) over the period from 1920 to 1937. The first published complete proof of the GCLT was in 1937 by Paul Lévy in French. An English language version of the complete proof of the GCLT is available in the translation of Gnedenko and Kolmogorov's 1954 book. The statement of the GCLT is as follows: A non-degenerate random variable Z is α-stable for some 0 < α ≤ 2 if and only if there is an independent, identically distributed sequence of random variables X1, X2, X3, ... and constants an > 0, bn ∈ ℝ with an (X1 + ... + Xn) − bn → Z. Here → means the sequence of random variable sums converges in distribution; i.e., the corresponding distributions satisfy Fn(y) → F(y) at all continuity points of F. In other words, if sums of independent, identically distributed random variables converge in distribution to some Z, then Z must be a stable distribution. Dependent processes CLT under weak dependence A useful generalization of a sequence of independent, identically distributed random variables is a mixing random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent. Several kinds of mixing are used in ergodic theory and probability theory. See especially strong mixing (also called α-mixing) defined by where is so-called strong mixing coefficient. 
A simplified formulation of the central limit theorem under strong mixing is: In fact, where the series converges absolutely. The assumption cannot be omitted, since the asymptotic normality fails for where are another stationary sequence. There is a stronger version of the theorem: the assumption is replaced with and the assumption is replaced with Existence of such ensures the conclusion. For encyclopedic treatment of limit theorems under mixing conditions see . Martingale difference CLT Remarks Proof of classical CLT The central limit theorem has a proof using characteristic functions. It is similar to the proof of the (weak) law of large numbers. Assume are independent and identically distributed random variables, each with mean and finite variance The sum has mean and variance Consider the random variable where in the last step we defined the new random variables each with zero mean and unit variance The characteristic function of is given by where in the last step we used the fact that all of the are identically distributed. The characteristic function of is, by Taylor's theorem, where is "little notation" for some function of that goes to zero more rapidly than By the limit of the exponential function the characteristic function of equals All of the higher order terms vanish in the limit The right hand side equals the characteristic function of a standard normal distribution , which implies through Lévy's continuity theorem that the distribution of will approach as Therefore, the sample average is such that converges to the normal distribution from which the central limit theorem follows. Convergence to the limit The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails. The convergence in the central limit theorem is uniform because the limiting cumulative distribution function is continuous. If the third central moment exists and is finite, then the speed of convergence is at least on the order of (see Berry–Esseen theorem). Stein's method can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics. The convergence to the normal distribution is monotonic, in the sense that the entropy of increases monotonically to that of the normal distribution. The central limit theorem applies in particular to sums of independent and identically distributed discrete random variables. A sum of discrete random variables is still a discrete random variable, so that we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the normal distribution). This means that if we build a histogram of the realizations of the sum of independent identical discrete variables, the piecewise-linear curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as approaches infinity; this relation is known as de Moivre–Laplace theorem. The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values. 
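The displayed steps of the characteristic-function proof sketched above are missing from this extract. The standard chain of equalities the proof describes is the following; this is a reconstruction in conventional notation, hedged as the textbook argument rather than the article's exact wording.

```latex
Z_{n} = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} Y_{i}, \qquad Y_{i} = \frac{X_{i}-\mu}{\sigma}

\varphi_{Z_{n}}(t)
  = \left[\varphi_{Y_{1}}\!\left(\frac{t}{\sqrt{n}}\right)\right]^{n}
  = \left[1 - \frac{t^{2}}{2n} + o\!\left(\frac{t^{2}}{n}\right)\right]^{n}
  \;\longrightarrow\; e^{-t^{2}/2} \quad (n \to \infty)

% e^{-t^2/2} is the characteristic function of the standard normal distribution,
% so Levy's continuity theorem gives convergence of Z_n to N(0,1) in distribution.
```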
Common misconceptions Studies have shown that the central limit theorem is subject to several common but serious misconceptions, some of which appear in widely used textbooks. These include: The misconceived belief that the theorem applies to random sampling of any variable, rather than to the mean values (or sums) of iid random variables extracted from a population by repeated sampling. That is, the theorem assumes the random sampling produces a sampling distribution formed from different values of means (or sums) of such random variables. The misconceived belief that the theorem ensures that random sampling leads to the emergence of a normal distribution for sufficiently large samples of any random variable, regardless of the population distribution. In reality, such sampling asymptotically reproduces the properties of the population, an intuitive result underpinned by the Glivenko-Cantelli theorem. The misconceived belief that the theorem leads to a good approximation of a normal distribution for sample sizes greater than around 30, allowing reliable inferences regardless of the nature of the population. In reality, this empirical rule of thumb has no valid justification, and can lead to seriously flawed inferences. See Z-test for where the approximation holds. Relation to the law of large numbers The law of large numbers as well as the central limit theorem are partial solutions to a general problem: "What is the limiting behavior of as approaches infinity?" In mathematical analysis, asymptotic series are one of the most popular tools employed to approach such questions. Suppose we have an asymptotic expansion of : Dividing both parts by and taking the limit will produce , the coefficient of the highest-order term in the expansion, which represents the rate at which changes in its leading term. Informally, one can say: " grows approximately as ". Taking the difference between and its approximation and then dividing by the next term in the expansion, we arrive at a more refined statement about : Here one can say that the difference between the function and its approximation grows approximately as . The idea is that dividing the function by appropriate normalizing functions, and looking at the limiting behavior of the result, can tell us much about the limiting behavior of the original function itself. Informally, something along these lines happens when the sum, , of independent identically distributed random variables, , is studied in classical probability theory. If each has finite mean , then by the law of large numbers, . If in addition each has finite variance , then by the central limit theorem, where is distributed as . This provides values of the first two constants in the informal expansion In the case where the do not have finite mean or variance, convergence of the shifted and rescaled sum can also occur with different centering and scaling factors: or informally Distributions which can arise in this way are called stable. Clearly, the normal distribution is stable, but there are also other stable distributions, such as the Cauchy distribution, for which the mean or variance are not defined. The scaling factor may be proportional to , for any ; it may also be multiplied by a slowly varying function of . The law of the iterated logarithm specifies what is happening "in between" the law of large numbers and the central limit theorem. 
Specifically it says that the normalizing function , intermediate in size between of the law of large numbers and of the central limit theorem, provides a non-trivial limiting behavior. Alternative statements of the theorem Density functions The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound. These theorems require stronger hypotheses than the forms of the central limit theorem given above. Theorems of this type are often called local limit theorems. See Petrov for a particular local limit theorem for sums of independent and identically distributed random variables. Characteristic functions Since the characteristic function of a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions becomes close to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above. Specifically, an appropriate scaling factor needs to be applied to the argument of the characteristic function. An equivalent statement can be made about Fourier transforms, since the characteristic function is essentially a Fourier transform. Calculating the variance Let be the sum of random variables. Many central limit theorems provide conditions such that converges in distribution to (the normal distribution with mean 0, variance 1) as . In some cases, it is possible to find a constant and function such that converges in distribution to as . Extensions Products of positive random variables The logarithm of a product is simply the sum of the logarithms of the factors. Therefore, when the logarithm of a product of random variables that take only positive values approaches a normal distribution, the product itself approaches a log-normal distribution. Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different random factors, so they follow a log-normal distribution. This multiplicative version of the central limit theorem is sometimes called Gibrat's law. Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires the corresponding condition that the density function be square-integrable. Beyond the classical framework Asymptotic normality, that is, convergence to the normal distribution after appropriate shift and rescaling, is a phenomenon much more general than the classical framework treated above, namely, sums of independent random variables (or vectors). New frameworks are revealed from time to time; no single unifying framework is available for now. Convex body These two -close distributions have densities (in fact, log-concave densities), thus, the total variance distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence. 
An important example of a log-concave density is a function constant inside a given convex body and vanishing outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies". Another example: where and . If then factorizes into which means are independent. In general, however, they are dependent. The condition ensures that are of zero mean and uncorrelated; still, they need not be independent, nor even pairwise independent. By the way, pairwise independence cannot replace independence in the classical central limit theorem. Here is a Berry–Esseen type result. The distribution of need not be approximately normal (in fact, it can be uniform). However, the distribution of is close to (in the total variation distance) for most vectors according to the uniform distribution on the sphere . Lacunary trigonometric series Gaussian polytopes The same also holds in all dimensions greater than 2. The polytope is called a Gaussian random polytope. A similar result holds for the number of vertices (of the Gaussian polytope), the number of edges, and in fact, faces of all dimensions. Linear functions of orthogonal matrices A linear function of a matrix is a linear combination of its elements (with given coefficients), where is the matrix of the coefficients; see Trace (linear algebra)#Inner product. A random orthogonal matrix is said to be distributed uniformly, if its distribution is the normalized Haar measure on the orthogonal group ; see Rotation matrix#Uniform random rotation matrices. Subsequences Random walk on a crystal lattice The central limit theorem may be established for the simple random walk on a crystal lattice (an infinite-fold abelian covering graph over a finite graph), and is used for design of crystal structures. Applications and examples A simple example of the central limit theorem is rolling many identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution. Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-sample statistics to the normal distribution in controlled experiments. Regression Regression analysis, and in particular ordinary least squares, specifies that a dependent variable depends according to some function upon one or more independent variables, with an additive error term. Various types of statistical inference on the regression assume that the error term is normally distributed. This assumption can be justified by assuming that the error term is actually the sum of many independent error terms; even if the individual error terms are not normally distributed, by the central limit theorem their sum can be well approximated by a normal distribution. Other illustrations Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem. History Dutch mathematician Henk Tijms writes: Sir Francis Galton described the Central Limit Theorem in this way: The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used by George Pólya in 1920 in the title of a paper. Pólya referred to the theorem as "central" due to its importance in probability theory. 
According to Le Cam, the French school of probability interprets the word central in the sense that "it describes the behaviour of the centre of the distribution as opposed to its tails". The abstract of the paper On the central limit theorem of calculus of probability and the problem of moments by Pólya in 1920 translates as follows. A thorough account of the theorem's history, detailing Laplace's foundational work, as well as Cauchy's, Bessel's and Poisson's contributions, is provided by Hald. Two historical accounts, one covering the development from Laplace to Cauchy, the second the contributions by von Mises, Pólya, Lindeberg, Lévy, and Cramér during the 1920s, are given by Hans Fischer. Le Cam describes a period around 1935. Bernstein presents a historical discussion focusing on the work of Pafnuty Chebyshev and his students Andrey Markov and Aleksandr Lyapunov that led to the first proofs of the CLT in a general setting. A curious footnote to the history of the Central Limit Theorem is that a proof of a result similar to the 1922 Lindeberg CLT was the subject of Alan Turing's 1934 Fellowship Dissertation for King's College at the University of Cambridge. Only after submitting the work did Turing learn it had already been proved. Consequently, Turing's dissertation was not published. See also Asymptotic equipartition property Asymptotic distribution Bates distribution Benford's law – result of extension of CLT to product of random variables. Berry–Esseen theorem Central limit theorem for directional statistics – Central limit theorem applied to the case of directional statistics Delta method – to compute the limit distribution of a function of a random variable. Erdős–Kac theorem – connects the number of prime factors of an integer with the normal probability distribution Fisher–Tippett–Gnedenko theorem – limit theorem for extremum values (such as ) Irwin–Hall distribution Markov chain central limit theorem Normal distribution Tweedie convergence theorem – a theorem that can be considered to bridge between the central limit theorem and the Poisson convergence theorem Donsker's theorem Notes References . External links Central Limit Theorem at Khan Academy A music video demonstrating the central limit theorem with a Galton board by Carl McTague Probability theorems Theorems in statistics Articles containing proofs Asymptotic theory (statistics)
Central limit theorem
[ "Mathematics" ]
4,497
[ "Theorems in statistics", "Central limit theorem", "Theorems in probability theory", "Mathematical problems", "Articles containing proofs", "Mathematical theorems" ]
39,407
https://en.wikipedia.org/wiki/Dirac%20equation
In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-1/2 massive particles, called "Dirac particles", such as electrons and quarks for which parity is a symmetry. It is consistent with both the principles of quantum mechanics and the theory of special relativity, and was the first theory to account fully for special relativity in the context of quantum mechanics. It was validated by accounting for the fine structure of the hydrogen spectrum in a completely rigorous way. It has become vital in the building of the Standard Model. The equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed several years later. It also provided a theoretical justification for the introduction of several component wave functions in Pauli's phenomenological theory of spin. The wave functions in the Dirac theory are vectors of four complex numbers (known as bispinors), two of which resemble the Pauli wavefunction in the non-relativistic limit, in contrast to the Schrödinger equation which described wave functions of only one complex value. Moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin- particles. Dirac did not fully appreciate the importance of his results; however, the entailed explanation of spin as a consequence of the union of quantum mechanics and relativity—and the eventual discovery of the positron—represents one of the great triumphs of theoretical physics. This accomplishment has been described as fully on a par with the works of Newton, Maxwell, and Einstein before him. The equation has been deemed by some physicists to be the "real seed of modern physics". The equation has also been described as the "centerpiece of relativistic quantum mechanics", with it also stated that "the equation is perhaps the most important one in all of quantum mechanics". The Dirac equation is inscribed upon a plaque on the floor of Westminster Abbey. Unveiled on 13 November 1995, the plaque commemorates Dirac's life. History The Dirac equation in the form originally proposed by Dirac is: where is the wave function for an electron of rest mass with spacetime coordinates . are the components of the momentum, understood to be the momentum operator in the Schrödinger equation. is the speed of light, and is the reduced Planck constant; these fundamental physical constants reflect special relativity and quantum mechanics, respectively. Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, thus allowing the atom to be treated in a manner consistent with relativity. He hoped that the corrections introduced this way might have a bearing on the problem of atomic spectra. Up until that time, attempts to make the old quantum theory of the atom compatible with the theory of relativity—which were based on discretizing the angular momentum stored in the electron's possibly non-circular orbit of the atomic nucleus—had failed, and the new quantum mechanics of Heisenberg, Pauli, Jordan, Schrödinger, and Dirac himself had not developed sufficiently to treat this problem. 
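For orientation, the form of the equation this paragraph refers to is the standard textbook expression, supplied here as a reference sketch rather than quoted from the text, with the symbols as defined in the paragraph above:

\left(\beta m c^{2} + c \sum_{n=1}^{3} \alpha_{n} p_{n}\right)\psi(\mathbf{x},t) = i\hbar\,\frac{\partial \psi(\mathbf{x},t)}{\partial t}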
Although Dirac's original intentions were satisfied, his equation had far deeper implications for the structure of matter and introduced new mathematical classes of objects that are now essential elements of fundamental physics. The new elements in this equation are the four matrices , , and , and the four-component wave function . There are four components in because the evaluation of it at any given point in configuration space is a bispinor. It is interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron. The matrices and are all Hermitian and are involutory: and they all mutually anti-commute: These matrices and the form of the wave function have a deep mathematical significance. The algebraic structure represented by the gamma matrices had been created some 50 years earlier by the English mathematician W. K. Clifford. In turn, Clifford's ideas had emerged from the mid-19th-century work of German mathematician Hermann Grassmann in his Lineare Ausdehnungslehre (Theory of Linear Expansion). Making the Schrödinger equation relativistic The Dirac equation is superficially similar to the Schrödinger equation for a massive free particle: The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy. Because relativity treats space and time as a whole, a relativistic generalization of this equation requires that space and time derivatives must enter symmetrically as they do in the Maxwell equations that govern the behavior of light — the equations must be differentially of the same order in space and time. In relativity, the momentum and the energies are the space and time parts of a spacetime vector, the four-momentum, and they are related by the relativistically invariant relation which says that the length of this four-vector is proportional to the rest mass . Substituting the operator equivalents of the energy and momentum from the Schrödinger theory produces the Klein–Gordon equation describing the propagation of waves, constructed from relativistically invariant objects, with the wave function being a relativistic scalar: a complex number which has the same numerical value in all frames of reference. Space and time derivatives both enter to second order. This has a telling consequence for the interpretation of the equation. Because the equation is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time-derivative in order to solve definite problems. Since both may be specified more or less arbitrarily, the wave function cannot maintain its former role of determining the probability density of finding the electron in a given state of motion. In the Schrödinger theory, the probability density is given by the positive definite expression and this density is convected according to the probability current vector with the conservation of probability current and density following from the continuity equation: The fact that the density is positive definite and convected according to this continuity equation implies that one may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature. 
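The standard formulas this passage relies on can be collected as a reference sketch (conventional notation supplied here, not quoted from the text): the relativistic energy–momentum relation, the Klein–Gordon equation obtained from it by the substitutions E \to i\hbar\,\partial/\partial t and \mathbf{p} \to -i\hbar\nabla, and the Schrödinger probability density, current and continuity equation:

E^{2} = (pc)^{2} + \left(m c^{2}\right)^{2}

\frac{1}{c^{2}}\frac{\partial^{2}\phi}{\partial t^{2}} - \nabla^{2}\phi + \frac{m^{2}c^{2}}{\hbar^{2}}\phi = 0

\rho = \psi^{*}\psi, \qquad \mathbf{J} = \frac{\hbar}{2mi}\left(\psi^{*}\nabla\psi - \psi\nabla\psi^{*}\right), \qquad \frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{J} = 0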
To maintain the notion of a convected density, one must generalize the Schrödinger expression of the density and current so that space and time derivatives again enter symmetrically in relation to the scalar wave function. The Schrödinger expression can be kept for the current, but the probability density must be replaced by the symmetrically formed expression which now becomes the 4th component of a spacetime vector, and the entire probability 4-current density has the relativistically covariant expression The continuity equation is as before. Everything is compatible with relativity now, but the expression for the density is no longer positive definite; the initial values of both and may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus, one cannot get a simple generalization of the Schrödinger equation under the naive assumption that the wave function is a relativistic scalar, and the equation it satisfies, second order in time. Although it is not a successful relativistic generalization of the Schrödinger equation, this equation is resurrected in the context of quantum field theory, where it is known as the Klein–Gordon equation, and describes a spinless particle field (e.g. pi meson or Higgs boson). Historically, Schrödinger himself arrived at this equation before the one that bears his name but soon discarded it. In the context of quantum field theory, the indefinite density is understood to correspond to the charge density, which can be positive or negative, and not the probability density. Dirac's coup Dirac thus thought to try an equation that was first order in both space and time. He postulated an equation of the form where the operators must be independent of for linearity and independent of for space-time homogeneity. These constraints implied additional dynamical variables that the operators will depend upon; from this requirement Dirac concluded that the operators would depend upon 4x4 matrices, related to the Pauli matrices. One could, for example, formally (i.e. by abuse of notation, since it is not straightforward to take a functional square root of the sum of two differential operators) take the relativistic expression for the energy replace by its operator equivalent, expand the square root in an infinite series of derivative operators, set up an eigenvalue problem, then solve the equation formally by iterations. Most physicists had little faith in such a process, even if it were technically possible. As the story goes, Dirac was staring into the fireplace at Cambridge, pondering this problem, when he hit upon the idea of taking the square root of the wave operator (see also half derivative) thus: On multiplying out the right side it is apparent that, in order to get all the cross-terms such as to vanish, one must assume with Dirac, who had just then been intensely involved with working out the foundations of Heisenberg's matrix mechanics, immediately understood that these conditions could be met if , , and are matrices, with the implication that the wave function has multiple components. This immediately explained the appearance of two-component wave functions in Pauli's phenomenological theory of spin, something that up until then had been regarded as mysterious, even to Pauli himself. However, one needs at least matrices to set up a system with the properties required — so the wave function had four components, not two, as in the Pauli theory, or one, as in the bare Schrödinger theory. 
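The algebraic conditions Dirac required of the coefficients can be verified directly. The sketch below is an illustration, not part of the original text; it uses the Dirac–Pauli (standard) representation, which is one conventional choice, and checks that the three alpha matrices and beta are Hermitian, square to the identity, and mutually anticommute.

import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Dirac-Pauli representation: alpha_i = [[0, sigma_i], [sigma_i, 0]], beta = diag(I, -I)
alphas = [block(Z, s, s, Z) for s in (sx, sy, sz)]
beta = block(I2, Z, Z, -I2)
mats = alphas + [beta]

for m in mats:
    assert np.allclose(m, m.conj().T)        # Hermitian
    assert np.allclose(m @ m, np.eye(4))     # involutory: m squared is the identity

for i, a in enumerate(mats):
    for b in mats[i + 1:]:
        assert np.allclose(a @ b + b @ a, 0) # mutual anticommutation

print("alpha_1..3 and beta satisfy Dirac's conditions")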
The four-component wave function represents a new class of mathematical object in physical theories that makes its first appearance here. Given the factorization in terms of these matrices, one can now write down immediately an equation with to be determined. Applying again the matrix operator on both sides yields Taking shows that all the components of the wave function individually satisfy the relativistic energy–momentum relation. Thus the sought-for equation that is first-order in both space and time is Setting and because , the Dirac equation is produced as written above. Covariant form and relativistic invariance To demonstrate the relativistic invariance of the equation, it is advantageous to cast it into a form in which the space and time derivatives appear on an equal footing. New matrices are introduced as follows: and the equation takes the form (remembering the definition of the covariant components of the 4-gradient and especially that ) where there is an implied summation over the values of the twice-repeated index , and is the 4-gradient. In practice one often writes the gamma matrices in terms of 2 × 2 sub-matrices taken from the Pauli matrices and the 2 × 2 identity matrix. Explicitly the standard representation is The complete system is summarized using the Minkowski metric on spacetime in the form where the bracket expression denotes the anticommutator. These are the defining relations of a Clifford algebra over a pseudo-orthogonal 4-dimensional space with metric signature . The specific Clifford algebra employed in the Dirac equation is known today as the Dirac algebra. Although not recognized as such by Dirac at the time the equation was formulated, in hindsight the introduction of this geometric algebra represents an enormous stride forward in the development of quantum theory. The Dirac equation may now be interpreted as an eigenvalue equation, where the rest mass is proportional to an eigenvalue of the 4-momentum operator, the proportionality constant being the speed of light: Using ( is pronounced "d-slash"), according to Feynman slash notation, the Dirac equation becomes: In practice, physicists often use units of measure such that , known as natural units. The equation then takes the simple form A foundational theorem states that if two distinct sets of matrices are given that both satisfy the Clifford relations, then they are connected to each other by a similarity transform: If in addition the matrices are all unitary, as are the Dirac set, then itself is unitary; The transformation is unique up to a multiplicative factor of absolute value 1. Let us now imagine a Lorentz transformation to have been performed on the space and time coordinates, and on the derivative operators, which form a covariant vector. For the operator to remain invariant, the gammas must transform among themselves as a contravariant vector with respect to their spacetime index. These new gammas will themselves satisfy the Clifford relations, because of the orthogonality of the Lorentz transformation. By the previously mentioned foundational theorem, one may replace the new set by the old set subject to a unitary transformation. 
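Before continuing with the transformation argument, the covariant objects just introduced can be collected as a reference sketch (conventional notation supplied here, not quoted from the text; the metric signature (+,-,-,-) is the assumption used in most treatments):

\gamma^{0} = \beta, \qquad \gamma^{k} = \beta\alpha^{k}\ (k = 1,2,3)

\left(i\hbar\gamma^{\mu}\partial_{\mu} - mc\right)\psi = 0, \qquad \{\gamma^{\mu},\gamma^{\nu}\} = 2\eta^{\mu\nu} I_{4}

with the standard representation

\gamma^{0} = \begin{pmatrix} I_{2} & 0 \\ 0 & -I_{2} \end{pmatrix}, \qquad \gamma^{k} = \begin{pmatrix} 0 & \sigma^{k} \\ -\sigma^{k} & 0 \end{pmatrix}.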
In the new frame, remembering that the rest mass is a relativistic scalar, the Dirac equation will then take the form If the transformed spinor is defined as then the transformed Dirac equation is produced in a way that demonstrates manifest relativistic invariance: Thus, settling on any unitary representation of the gammas is final, provided the spinor is transformed according to the unitary transformation that corresponds to the given Lorentz transformation. The various representations of the Dirac matrices employed will bring into focus particular aspects of the physical content in the Dirac wave function. The representation shown here is known as the standard representation – in it, the wave function's upper two components go over into Pauli's 2 spinor wave function in the limit of low energies and small velocities in comparison to light. The considerations above reveal the origin of the gammas in geometry, hearkening back to Grassmann's original motivation; they represent a fixed basis of unit vectors in spacetime. Similarly, products of the gammas such as represent oriented surface elements, and so on. With this in mind, one can find the form of the unit volume element on spacetime in terms of the gammas as follows. By definition, it is For this to be an invariant, the epsilon symbol must be a tensor, and so must contain a factor of , where is the determinant of the metric tensor. Since this is negative, that factor is imaginary. Thus This matrix is given the special symbol , owing to its importance when one is considering improper transformations of space-time, that is, those that change the orientation of the basis vectors. In the standard representation, it is This matrix will also be found to anticommute with the other four Dirac matrices: It takes a leading role when questions of parity arise because the volume element as a directed magnitude changes sign under a space-time reflection. Taking the positive square root above thus amounts to choosing a handedness convention on spacetime. Comparison with related theories Pauli theory The necessity of introducing half-integer spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong inhomogeneous magnetic field, which then splits into parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two; the ground state therefore could not be integer, because even if the intrinsic angular momentum of the atoms were as small as possible, 1, the beam would be split into three parts, corresponding to atoms with . The conclusion is that silver atoms have net intrinsic angular momentum of . Pauli set up a theory which explained this splitting by introducing a two-component wave function and a corresponding correction term in the Hamiltonian, representing a semi-classical coupling of this wave function to an applied magnetic field, as so in SI units: (Note that bold faced characters imply Euclidean vectors in 3 dimensions, whereas the Minkowski four-vector can be defined as ) Here and represent the components of the electromagnetic four-potential in their standard SI units, and the three sigmas are the Pauli matrices. 
On squaring out the first term, a residual interaction with the magnetic field is found, along with the usual classical Hamiltonian of a charged particle interacting with an applied field in SI units: This Hamiltonian is now a matrix, so the Schrödinger equation based on it must use a two-component wave function. On introducing the external electromagnetic 4-vector potential into the Dirac equation in a similar way, known as minimal coupling, it takes the form: A second application of the Dirac operator will now reproduce the Pauli term exactly as before, because the spatial Dirac matrices multiplied by , have the same squaring and commutation properties as the Pauli matrices. What is more, the value of the gyromagnetic ratio of the electron, standing in front of Pauli's new term, is explained from first principles. This was a major achievement of the Dirac equation and gave physicists great faith in its overall correctness. There is more however. The Pauli theory may be seen as the low energy limit of the Dirac theory in the following manner. First the equation is written in the form of coupled equations for 2-spinors with the SI units restored: so Assuming the field is weak and the motion of the electron non-relativistic, the total energy of the electron is approximately equal to its rest energy, and the momentum going over to the classical value, and so the second equation may be written which is of order Thus, at typical energies and velocities, the bottom components of the Dirac spinor in the standard representation are much suppressed in comparison to the top components. Substituting this expression into the first equation gives after some rearrangement The operator on the left represents the particle's total energy reduced by its rest energy, which is just its classical kinetic energy, so one can recover Pauli's theory upon identifying his 2-spinor with the top components of the Dirac spinor in the non-relativistic approximation. A further approximation gives the Schrödinger equation as the limit of the Pauli theory. Thus, the Schrödinger equation may be seen as the far non-relativistic approximation of the Dirac equation when one may neglect spin and work only at low energies and velocities. This also was a great triumph for the new equation, as it traced the mysterious that appears in it, and the necessity of a complex wave function, back to the geometry of spacetime through the Dirac algebra. It also highlights why the Schrödinger equation, although ostensibly in the form of a diffusion equation, actually represents wave propagation. It should be strongly emphasized that the entire Dirac spinor represents an irreducible whole. The separation, done here, of the Dirac spinor into large and small components depends on the low-energy approximation being valid. The components that were neglected above, to show that the Pauli theory can be recovered by a low-velocity approximation of Dirac's equation, are necessary to produce new phenomena observed in the relativistic regime – among them antimatter, and the creation and annihilation of particles. Weyl theory In the massless case , the Dirac equation reduces to the Weyl equation, which describes relativistic massless spin- particles. The theory acquires a second symmetry: see below. Physical interpretation Identification of observables The critical physical question in a quantum theory is this: what are the physically observable quantities defined by the theory? 
According to the postulates of quantum mechanics, such quantities are defined by self-adjoint operators that act on the Hilbert space of possible states of a system. The eigenvalues of these operators are then the possible results of measuring the corresponding physical quantity. In the Schrödinger theory, the simplest such object is the overall Hamiltonian, which represents the total energy of the system. To maintain this interpretation on passing to the Dirac theory, the Hamiltonian must be taken to be where, as always, there is an implied summation over the twice-repeated index . This looks promising, because one can see by inspection the rest energy of the particle and, in the case of , the energy of a charge placed in an electric potential . What about the term involving the vector potential? In classical electrodynamics, the energy of a charge moving in an applied potential is Thus, the Dirac Hamiltonian is fundamentally distinguished from its classical counterpart, and one must take great care to correctly identify what is observable in this theory. Much of the apparently paradoxical behavior implied by the Dirac equation amounts to a misidentification of these observables. Hole theory The negative solutions to the equation are problematic, for it was assumed that the particle has a positive energy. Mathematically speaking, however, there seems to be no reason for us to reject the negative-energy solutions. Since they exist, they cannot simply be ignored, for once the interaction between the electron and the electromagnetic field is included, any electron placed in a positive-energy eigenstate would decay into negative-energy eigenstates of successively lower energy. Real electrons obviously do not behave in this way, or they would disappear by emitting energy in the form of photons. To cope with this problem, Dirac introduced the hypothesis, known as hole theory, that the vacuum is the many-body quantum state in which all the negative-energy electron eigenstates are occupied. This description of the vacuum as a "sea" of electrons is called the Dirac sea. Since the Pauli exclusion principle forbids electrons from occupying the same state, any additional electron would be forced to occupy a positive-energy eigenstate, and positive-energy electrons would be forbidden from decaying into negative-energy eigenstates. Dirac further reasoned that if the negative-energy eigenstates are incompletely filled, each unoccupied eigenstate – called a hole – would behave like a positively charged particle. The hole possesses a positive energy because energy is required to create a particle–hole pair from the vacuum. As noted above, Dirac initially thought that the hole might be the proton, but Hermann Weyl pointed out that the hole should behave as if it had the same mass as an electron, whereas the proton is over 1800 times heavier. The hole was eventually identified as the positron, experimentally discovered by Carl Anderson in 1932. It is not entirely satisfactory to describe the "vacuum" using an infinite sea of negative-energy electrons. The infinitely negative contributions from the sea of negative-energy electrons have to be canceled by an infinite positive "bare" energy and the contribution to the charge density and current coming from the sea of negative-energy electrons is exactly canceled by an infinite positive "jellium" background so that the net electric charge density of the vacuum is zero. 
In quantum field theory, a Bogoliubov transformation on the creation and annihilation operators (turning an occupied negative-energy electron state into an unoccupied positive energy positron state and an unoccupied negative-energy electron state into an occupied positive energy positron state) allows us to bypass the Dirac sea formalism even though, formally, it is equivalent to it. In certain applications of condensed matter physics, however, the underlying concepts of "hole theory" are valid. The sea of conduction electrons in an electrical conductor, called a Fermi sea, contains electrons with energies up to the chemical potential of the system. An unfilled state in the Fermi sea behaves like a positively charged electron, and although it too is referred to as an "electron hole", it is distinct from a positron. The negative charge of the Fermi sea is balanced by the positively charged ionic lattice of the material. In quantum field theory In quantum field theories such as quantum electrodynamics, the Dirac field is subject to a process of second quantization, which resolves some of the paradoxical features of the equation. Mathematical formulation In its modern formulation for field theory, the Dirac equation is written in terms of a Dirac spinor field taking values in a complex vector space described concretely as , defined on flat spacetime (Minkowski space) . Its expression also contains gamma matrices and a parameter interpreted as the mass, as well as other physical constants. Dirac first obtained his equation through a factorization of Einstein's energy-momentum-mass equivalence relation assuming a scalar product of momentum vectors determined by the metric tensor and quantized the resulting relation by associating momenta to their respective operators. In terms of a field , the Dirac equation is then and in natural units, with Feynman slash notation, The gamma matrices are a set of four complex matrices (elements of ) which satisfy the defining anti-commutation relations: where is the Minkowski metric element, and the indices run over 0,1,2 and 3. These matrices can be realized explicitly under a choice of representation. Two common choices are the Dirac representation and the chiral representation. The Dirac representation is where are the Pauli matrices. For the chiral representation the are the same, but The slash notation is a compact notation for where is a four-vector (often it is the four-vector differential operator ). The summation over the index is implied. Alternatively the four coupled linear first-order partial differential equations for the four quantities that make up the wave function can be written as a vector. In Planck units this becomes: which makes it clearer that it is a set of four partial differential equations with four unknown functions. (Note that the term is not preceded by because is imaginary.) Dirac adjoint and the adjoint equation The Dirac adjoint of the spinor field is defined as Using the property of gamma matrices (which follows straightforwardly from Hermiticity properties of the ) that one can derive the adjoint Dirac equation by taking the Hermitian conjugate of the Dirac equation and multiplying on the right by : where the partial derivative acts from the right on : written in the usual way in terms of a left action of the derivative, we have Klein–Gordon equation Applying to the Dirac equation gives That is, each component of the Dirac spinor field satisfies the Klein–Gordon equation. 
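A short numerical check of the statements in this section follows. It is illustrative only: the chiral representation and the test momentum are choices made here, not prescriptions of the text. It confirms that the gamma matrices satisfy the anticommutation relation with the Minkowski metric, that they have the Hermiticity properties used to derive the adjoint equation, and that the square of the slashed momentum equals p.p times the identity, which is why each component of a Dirac solution obeys the Klein–Gordon relation.

import numpy as np

I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)

# Chiral (Weyl) representation of the gamma matrices, signature (+,-,-,-).
g0 = np.block([[Z, I2], [I2, Z]])
gamma = [g0] + [np.block([[Z, s], [-s, Z]]) for s in sig]
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

# Clifford relation: {gamma^mu, gamma^nu} = 2 eta^{mu nu} I
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# Hermiticity property used for the Dirac adjoint: (gamma^mu)^dagger = gamma^0 gamma^mu gamma^0
for g in gamma:
    assert np.allclose(g.conj().T, g0 @ g @ g0)

# slash(p)^2 = (p.p) I, so each spinor component satisfies the Klein-Gordon relation.
p = np.array([1.7, 0.3, -0.4, 0.9])                      # an arbitrary test 4-momentum
p_lower = eta @ p                                        # lower the index: p_mu = eta_{mu nu} p^nu
slash = sum(gamma[mu] * p_lower[mu] for mu in range(4))  # gamma^mu p_mu
assert np.allclose(slash @ slash, (p @ eta @ p) * np.eye(4))
print("Clifford relation, adjoint Hermiticity and slash(p)^2 = p.p verified")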
Conserved current A conserved current of the theory is Another approach to derive this expression is by variational methods, applying Noether's theorem for the global symmetry to derive the conserved current Solutions Since the Dirac operator acts on 4-tuples of square-integrable functions, its solutions should be members of the same Hilbert space. The fact that the energies of the solutions do not have a lower bound is unexpected. Plane-wave solutions Plane-wave solutions are those arising from an ansatz which models a particle with definite 4-momentum where For this ansatz, the Dirac equation becomes an equation for : After picking a representation for the gamma matrices , solving this is a matter of solving a system of linear equations. It is a representation-free property of gamma matrices that the solution space is two-dimensional (see here). For example, in the chiral representation for , the solution space is parametrised by a vector , with where and is the Hermitian matrix square-root. These plane-wave solutions provide a starting point for canonical quantization. Lagrangian formulation Both the Dirac equation and the Adjoint Dirac equation can be obtained from (varying) the action with a specific Lagrangian density that is given by: If one varies this with respect to one gets the adjoint Dirac equation. Meanwhile, if one varies this with respect to one gets the Dirac equation. In natural units and with the slash notation, the action is then For this action, the conserved current above arises as the conserved current corresponding to the global symmetry through Noether's theorem for field theory. Gauging this field theory by changing the symmetry to a local, spacetime point dependent one gives gauge symmetry (really, gauge redundancy). The resultant theory is quantum electrodynamics or QED. See below for a more detailed discussion. Lorentz invariance The Dirac equation is invariant under Lorentz transformations, that is, under the action of the Lorentz group or strictly , the component connected to the identity. For a Dirac spinor viewed concretely as taking values in , the transformation under a Lorentz transformation is given by a complex matrix . There are some subtleties in defining the corresponding , as well as a standard abuse of notation. Most treatments occur at the Lie algebra level. For a more detailed treatment see here. The Lorentz group of real matrices acting on is generated by a set of six matrices with components When both the indices are raised or lowered, these are simply the 'standard basis' of antisymmetric matrices. These satisfy the Lorentz algebra commutation relations In the article on the Dirac algebra, it is also found that the spin generators satisfy the Lorentz algebra commutation relations. A Lorentz transformation can be written as where the components are antisymmetric in . The corresponding transformation on spin space is This is an abuse of notation, but a standard one. The reason is is not a well-defined function of , since there are two different sets of components (up to equivalence) which give the same but different . In practice we implicitly pick one of these and then is well defined in terms of Under a Lorentz transformation, the Dirac equation becomes Associated to Lorentz invariance is a conserved Noether current, or rather a tensor of conserved Noether currents . 
Similarly, since the equation is invariant under translations, there is a tensor of conserved Noether currents , which can be identified as the stress-energy tensor of the theory. The Lorentz current can be written in terms of the stress-energy tensor in addition to a tensor representing internal angular momentum. Further discussion of Lorentz covariance of the Dirac equation The Dirac equation is Lorentz covariant. Articulating this helps illuminate not only the Dirac equation, but also the Majorana spinor and Elko spinor, which although closely related, have subtle and important differences. Understanding Lorentz covariance is simplified by keeping in mind the geometric character of the process. Let be a single, fixed point in the spacetime manifold. Its location can be expressed in multiple coordinate systems. In the physics literature, these are written as and , with the understanding that both and describe the same point , but in different local frames of reference (a frame of reference over a small extended patch of spacetime). One can imagine as having a fiber of different coordinate frames above it. In geometric terms, one says that spacetime can be characterized as a fiber bundle, and specifically, the frame bundle. The difference between two points and in the same fiber is a combination of rotations and Lorentz boosts. A choice of coordinate frame is a (local) section through that bundle. Coupled to the frame bundle is a second bundle, the spinor bundle. A section through the spinor bundle is just the particle field (the Dirac spinor, in the present case). Different points in the spinor fiber correspond to the same physical object (the fermion) but expressed in different Lorentz frames. Clearly, the frame bundle and the spinor bundle must be tied together in a consistent fashion to get consistent results; formally, one says that the spinor bundle is the associated bundle; it is associated to a principal bundle, which in the present case is the frame bundle. Differences between points on the fiber correspond to the symmetries of the system. The spinor bundle has two distinct generators of its symmetries: the total angular momentum and the intrinsic angular momentum. Both correspond to Lorentz transformations, but in different ways. The presentation here follows that of Itzykson and Zuber. It is very nearly identical to that of Bjorken and Drell. A similar derivation in a general relativistic setting can be found in Weinberg. Here we fix our spacetime to be flat, that is, our spacetime is Minkowski space. Under a Lorentz transformation the Dirac spinor to transform as It can be shown that an explicit expression for is given by where parameterizes the Lorentz transformation, and are the six 4×4 matrices satisfying: This matrix can be interpreted as the intrinsic angular momentum of the Dirac field. That it deserves this interpretation arises by contrasting it to the generator of Lorentz transformations, having the form This can be interpreted as the total angular momentum. It acts on the spinor field as Note the above does not have a prime on it: the above is obtained by transforming obtaining the change to and then returning to the original coordinate system . The geometrical interpretation of the above is that the frame field is affine, having no preferred origin. 
The generator generates the symmetries of this space: it provides a relabelling of a fixed point The generator generates a movement from one point in the fiber to another: a movement from with both and still corresponding to the same spacetime point These perhaps obtuse remarks can be elucidated with explicit algebra. Let be a Lorentz transformation. The Dirac equation is If the Dirac equation is to be covariant, then it should have exactly the same form in all Lorentz frames: The two spinors and should both describe the same physical field, and so should be related by a transformation that does not change any physical observables (charge, current, mass, etc.) The transformation should encode only the change of coordinate frame. It can be shown that such a transformation is a 4×4 unitary matrix. Thus, one may presume that the relation between the two frames can be written as Inserting this into the transformed equation, the result is The coordinates related by Lorentz transformation satisfy: The original Dirac equation is then regained if An explicit expression for (equal to the expression given above) can be obtained by considering a Lorentz transformation of infinitesimal rotation near the identity transformation: where is the metric tensor : and is symmetric while is antisymmetric. After plugging and chugging, one obtains which is the (infinitesimal) form for above and yields the relation . To obtain the affine relabelling, write After properly antisymmetrizing, one obtains the generator of symmetries given earlier. Thus, both and can be said to be the "generators of Lorentz transformations", but with a subtle distinction: the first corresponds to a relabelling of points on the affine frame bundle, which forces a translation along the fiber of the spinor on the spin bundle, while the second corresponds to translations along the fiber of the spin bundle (taken as a movement along the frame bundle, as well as a movement along the fiber of the spin bundle.) Weinberg provides additional arguments for the physical interpretation of these as total and intrinsic angular momentum. Other formulations The Dirac equation can be formulated in a number of other ways. Curved spacetime This article has developed the Dirac equation in flat spacetime according to special relativity. It is possible to formulate the Dirac equation in curved spacetime. The algebra of physical space This article developed the Dirac equation using four-vectors and Schrödinger operators. The Dirac equation in the algebra of physical space uses a Clifford algebra over the real numbers, a type of geometric algebra. Coupled Weyl Spinors As mentioned above, the massless Dirac equation immediately reduces to the homogeneous Weyl equation. By using the chiral representation of the gamma matrices, the nonzero-mass equation can also be decomposed into a pair of coupled inhomogeneous Weyl equations acting on the first and last pairs of indices of the original four-component spinor, i.e. , where and are each two-component Weyl spinors. This is because the skew block form of the chiral gamma matrices means that they swap the and and apply the two-by-two Pauli matrices to each: . So the Dirac equation becomes which in turn is equivalent to a pair of inhomogeneous Weyl equations for massless left- and right-helicity spinors, where the coupling strength is proportional to the mass: . 
This has been proposed as an intuitive explanation of Zitterbewegung, as these massless components would propagate at the speed of light and move in opposite directions, since the helicity is the projection of the spin onto the direction of motion. Here the role of the "mass" is not to make the velocity less than the speed of light, but instead controls the average rate at which these reversals occur; specifically, the reversals can be modeled as a Poisson process. U(1) symmetry Natural units are used in this section. The coupling constant is labelled by convention with : this parameter can also be viewed as modelling the electron charge. Vector symmetry The Dirac equation and action admits a symmetry where the fields transform as This is a global symmetry, known as the vector symmetry (as opposed to the axial symmetry: see below). By Noether's theorem there is a corresponding conserved current: this has been mentioned previously as Gauging the symmetry If we 'promote' the global symmetry, parametrised by the constant , to a local symmetry, parametrised by a function , or equivalently the Dirac equation is no longer invariant: there is a residual derivative of . The fix proceeds as in scalar electrodynamics: the partial derivative is promoted to a covariant derivative The covariant derivative depends on the field being acted on. The newly introduced is the 4-vector potential from electrodynamics, but also can be viewed as a gauge field (which, mathematically, is defined as a connection). The transformation law under gauge transformations for is then the usual but can also be derived by asking that covariant derivatives transform under a gauge transformation as We then obtain a gauge-invariant Dirac action by promoting the partial derivative to a covariant one: The final step needed to write down a gauge-invariant Lagrangian is to add a Maxwell Lagrangian term, Putting these together gives Expanding out the covariant derivative allows the action to be written in a second useful form: Axial symmetry Massless Dirac fermions, that is, fields satisfying the Dirac equation with , admit a second, inequivalent symmetry. This is seen most easily by writing the four-component Dirac fermion as a pair of two-component vector fields, and adopting the chiral representation for the gamma matrices, so that may be written where has components and has components . The Dirac action then takes the form That is, it decouples into a theory of two Weyl spinors or Weyl fermions. The earlier vector symmetry is still present, where and rotate identically. This form of the action makes the second inequivalent symmetry manifest: This can also be expressed at the level of the Dirac fermion as where is the exponential map for matrices. This isn't the only symmetry possible, but it is conventional. Any 'linear combination' of the vector and axial symmetries is also a symmetry. Classically, the axial symmetry admits a well-formulated gauge theory. But at the quantum level, there is an anomaly, that is, an obstruction to gauging. Extension to color symmetry We can extend this discussion from an abelian symmetry to a general non-abelian symmetry under a gauge group , the group of color symmetries for a theory. For concreteness, we fix , the special unitary group of matrices acting on . Before this section, could be viewed as a spinor field on Minkowski space, in other words a function , and its components in are labelled by spin indices, conventionally Greek indices taken from the start of the alphabet . 
Promoting the theory to a gauge theory, informally acquires a part transforming like , and these are labelled by color indices, conventionally Latin indices . In total, has components, given in indices by . The 'spinor' labels only how the field transforms under spacetime transformations. Formally, is valued in a tensor product, that is, it is a function Gauging proceeds similarly to the abelian case, with a few differences. Under a gauge transformation the spinor fields transform as The matrix-valued gauge field or connection transforms as and the covariant derivatives defined transform as Writing down a gauge-invariant action proceeds exactly as with the case, replacing the Maxwell Lagrangian with the Yang–Mills Lagrangian where the Yang–Mills field strength or curvature is defined here as and is the matrix commutator. The action is then Physical applications For physical applications, the case describes the quark sector of the Standard Model which models strong interactions. Quarks are modelled as Dirac spinors; the gauge field is the gluon field. The case describes part of the electroweak sector of the Standard Model. Leptons such as electrons and neutrinos are the Dirac spinors; the gauge field is the gauge boson. Generalisations This expression can be generalised to arbitrary Lie group with connection and a representation , where the colour part of is valued in . Formally, the Dirac field is a function Then transforms under a gauge transformation as and the covariant derivative is defined where here we view as a Lie algebra representation of the Lie algebra associated to . This theory can be generalised to curved spacetime, but there are subtleties which arise in gauge theory on a general spacetime (or more generally still, a manifold) which, on flat spacetime, can be ignored. This is ultimately due to the contractibility of flat spacetime which allows us to view a gauge field and gauge transformations as defined globally on . See also Articles on the Dirac equation Dirac field Dirac spinor Gordon decomposition Klein paradox Nonlinear Dirac equation Other equations Breit equation Dirac–Kähler equation Klein–Gordon equation Rarita–Schwinger equation Two-body Dirac equations Weyl equation Majorana equation Other topics Fermionic field Feynman checkerboard Foldy–Wouthuysen transformation Quantum electrodynamics Quantum chromodynamics ELKO theory References Citations Selected papers Textbooks External links The history of the positron Lecture given by Dirac in 1975 The Dirac Equation at MathPages The Dirac equation for a spin particle 1928 introductions Fermions Partial differential equations Equation Quantum field theory Spinors
Dirac equation
[ "Physics", "Materials_science" ]
8,871
[ "Quantum field theory", "Equations of physics", "Fermions", "Eponymous equations of physics", "Quantum mechanics", "Subatomic particles", "Condensed matter physics", "Dirac equation", "Matter" ]
39,562
https://en.wikipedia.org/wiki/Geochemistry
Geochemistry is the science that uses the tools and principles of chemistry to explain the mechanisms behind major geological systems such as the Earth's crust and its oceans. The realm of geochemistry extends beyond the Earth, encompassing the entire Solar System, and has made important contributions to the understanding of a number of processes including mantle convection, the formation of planets and the origins of granite and basalt. It is an integrated field of chemistry and geology. History The term geochemistry was first used by the Swiss-German chemist Christian Friedrich Schönbein in 1838: "a comparative geochemistry ought to be launched, before geognosy can become geology, and before the mystery of the genesis of our planets and their inorganic matter may be revealed." However, for the rest of the century the more common term was "chemical geology", and there was little contact between geologists and chemists. Geochemistry emerged as a separate discipline after major laboratories were established, starting with the United States Geological Survey (USGS) in 1884, which began systematic surveys of the chemistry of rocks and minerals. The chief USGS chemist, Frank Wigglesworth Clarke, noted that the elements generally decrease in abundance as their atomic weights increase, and summarized the work on elemental abundance in The Data of Geochemistry. The composition of meteorites was investigated and compared to terrestrial rocks as early as 1850. In 1901, Oliver C. Farrington hypothesised that, although there were differences, the relative abundances should still be the same. This was the beginning of the field of cosmochemistry, which has contributed much of what we know about the formation of the Earth and the Solar System. In the early 20th century, Max von Laue and William L. Bragg showed that X-ray scattering could be used to determine the structures of crystals. In the 1920s and 1930s, Victor Goldschmidt and associates at the University of Oslo applied these methods to many common minerals and formulated a set of rules for how elements are grouped. Goldschmidt published this work in the series Geochemische Verteilungsgesetze der Elemente [Geochemical Laws of the Distribution of Elements]. The research of Manfred Schidlowski from the 1960s to around the year 2002 was concerned with the biochemistry of the Early Earth with a focus on isotope-biogeochemistry and the evidence of the earliest life processes in the Precambrian. Subfields Some subfields of geochemistry are: Aqueous geochemistry studies the role of various elements in watersheds, including copper, sulfur, mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions. Biogeochemistry is the field of study focusing on the effect of life on the chemistry of the Earth. Cosmochemistry includes the analysis of the distribution of elements and their isotopes in the cosmos. Isotope geochemistry involves the determination of the relative and absolute concentrations of the elements and their isotopes in the Earth and on Earth's surface. Organic geochemistry, the study of the role of processes and compounds that are derived from living or once-living organisms. Photogeochemistry is the study of light-induced chemical reactions that occur or may occur among natural components of the Earth's surface. Regional geochemistry includes applications to environmental, hydrological and mineral exploration studies. Chemical elements The building blocks of materials are the chemical elements. 
These can be identified by their atomic number Z, which is the number of protons in the nucleus. An element can have more than one value for N, the number of neutrons in the nucleus. The sum of these is the mass number, which is roughly equal to the atomic mass. Atoms with the same atomic number but different neutron numbers are called isotopes. A given isotope is identified by a letter for the element preceded by a superscript for the mass number. For example, two common isotopes of chlorine are 35Cl and 37Cl. There are about 1700 known combinations of Z and N, of which only about 260 are stable. However, most of the unstable isotopes do not occur in nature. In geochemistry, stable isotopes are used to trace chemical pathways and reactions, while radioactive isotopes are primarily used to date samples. The chemical behavior of an atom – its affinity for other elements and the type of bonds it forms – is determined by the arrangement of electrons in orbitals, particularly the outermost (valence) electrons. These arrangements are reflected in the position of elements in the periodic table. Based on position, the elements fall into the broad groups of alkali metals, alkaline earth metals, transition metals, semi-metals (also known as metalloids), halogens, noble gases, lanthanides and actinides. Another useful classification scheme for geochemistry is the Goldschmidt classification, which places the elements into four main groups. Lithophiles combine easily with oxygen. These elements, which include Na, K, Si, Al, Ti, Mg and Ca, dominate in the Earth's crust, forming silicates and other oxides. Siderophile elements (Fe, Co, Ni, Pt, Re, Os) have an affinity for iron and tend to concentrate in the core. Chalcophile elements (Cu, Ag, Zn, Pb, S) form sulfides; and atmophile elements (O, N, H and noble gases) dominate the atmosphere. Within each group, some elements are refractory, remaining stable at high temperatures, while others are volatile, evaporating more easily, so heating can separate them. Differentiation and mixing The chemical composition of the Earth and other bodies is determined by two opposing processes: differentiation and mixing. In the Earth's mantle, differentiation occurs at mid-ocean ridges through partial melting, with more refractory materials remaining at the base of the lithosphere while the remainder rises to form basalt. After an oceanic plate descends into the mantle, convection eventually mixes the two parts together. Erosion differentiates granite, separating it into clay on the ocean floor, sandstone on the edge of the continent, and dissolved minerals in ocean waters. Metamorphism and anatexis (partial melting of crustal rocks) can mix these elements together again. In the ocean, biological organisms can cause chemical differentiation, while dissolution of the organisms and their wastes can mix the materials again. Fractionation A major source of differentiation is fractionation, an unequal distribution of elements and isotopes. This can be the result of chemical reactions, phase changes, kinetic effects, or radioactivity. On the largest scale, planetary differentiation is a physical and chemical separation of a planet into chemically distinct regions. For example, the terrestrial planets formed iron-rich cores and silicate-rich mantles and crusts. In the Earth's mantle, the primary source of chemical differentiation is partial melting, particularly near mid-ocean ridges. 
This can occur when the solid is heterogeneous or a solid solution, and part of the melt is separated from the solid. The process is known as equilibrium or batch melting if the solid and melt remain in equilibrium until the moment that the melt is removed, and fractional or Rayleigh melting if it is removed continuously. Isotopic fractionation can have mass-dependent and mass-independent forms. Molecules with heavier isotopes have lower ground state energies and are therefore more stable. As a result, chemical reactions show a small isotope dependence, with heavier isotopes preferring species or compounds with a higher oxidation state; and in phase changes, heavier isotopes tend to concentrate in the heavier phases. Mass-dependent fractionation is largest in light elements because the difference in masses is a larger fraction of the total mass. Ratios between isotopes are generally compared to a standard. For example, sulfur has four stable isotopes, of which the two most common are 32S and 34S. The ratio of their concentrations, , is reported as where is the same ratio for a standard. Because the differences are small, the ratio is multiplied by 1000 to make it parts per thousand (referred to as parts per mil). This is represented by the symbol . Equilibrium Equilibrium fractionation occurs between chemicals or phases that are in equilibrium with each other. In equilibrium fractionation between phases, heavier phases prefer the heavier isotopes. For two phases A and B, the effect can be represented by the factor In the liquid-vapor phase transition for water, at 20 degrees Celsius is 1.0098 for 18O and 1.084 for 2H. In general, fractionation is greater at lower temperatures. At 0 °C, the factors are 1.0117 and 1.111. Kinetic When there is no equilibrium between phases or chemical compounds, kinetic fractionation can occur. For example, at interfaces between liquid water and air, the forward reaction is enhanced if the humidity of the air is less than 100% or the water vapor is moved by a wind. Kinetic fractionation generally is enhanced compared to equilibrium fractionation and depends on factors such as reaction rate, reaction pathway and bond energy. Since lighter isotopes generally have weaker bonds, they tend to react faster and enrich the reaction products. Biological fractionation is a form of kinetic fractionation since reactions tend to be in one direction. Biological organisms prefer lighter isotopes because there is a lower energy cost in breaking energy bonds. In addition to the previously mentioned factors, the environment and species of the organism can have a large effect on the fractionation. Cycles Through a variety of physical and chemical processes, chemical elements change in concentration and move around in what are called geochemical cycles. An understanding of these changes requires both detailed observation and theoretical models. Each chemical compound, element or isotope has a concentration that is a function of position and time, but it is impractical to model the full variability. Instead, in an approach borrowed from chemical engineering, geochemists average the concentration over regions of the Earth called geochemical reservoirs. The choice of reservoir depends on the problem; for example, the ocean may be a single reservoir or be split into multiple reservoirs. In a type of model called a box model, a reservoir is represented by a box with inputs and outputs. Geochemical models generally involve feedback. 
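Returning to the isotope notation introduced under Fractionation above, the bookkeeping can be made concrete with a short sketch. The sample ratios and the reference ratio below are invented for illustration; the fractionation factor 1.0098 for 18O in the water liquid-vapor transition at 20 °C is the value quoted in the text.

# Isotope-ratio bookkeeping for the quantities discussed above.
# delta values are in per mil relative to a standard ratio R_std;
# alpha_{A-B} = R_A / R_B is the equilibrium fractionation factor between phases A and B.

def delta(r_sample, r_standard):
    """delta value in per mil (parts per thousand)."""
    return (r_sample / r_standard - 1.0) * 1000.0

def ratio_from_delta(d, r_standard):
    return r_standard * (1.0 + d / 1000.0)

# Hypothetical sulfur example (invented numbers, rough reference ratio for illustration only).
R_STD_34S = 1.0 / 22.64
r_sample = 0.0430
print("delta34S of sample: %.1f per mil" % delta(r_sample, R_STD_34S))

# Equilibrium liquid-vapor fractionation of water at 20 C (factor from the text).
ALPHA_18O = 1.0098               # alpha_{liquid-vapor} for 18O/16O
delta_vapor = -12.0              # assumed delta18O of the vapor, per mil
r_vapor = ratio_from_delta(delta_vapor, 1.0)   # work relative to the standard (R_std = 1)
r_liquid = ALPHA_18O * r_vapor                 # the heavier (liquid) phase prefers the heavier isotope
print("delta18O of coexisting liquid: %.1f per mil" % delta(r_liquid, 1.0))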
In the simplest case of a linear cycle, either the input or the output from a reservoir is proportional to the concentration. For example, salt is removed from the ocean by formation of evaporites, and given a constant rate of evaporation in evaporite basins, the rate of removal of salt should be proportional to its concentration. For a given component with reservoir mass M, if the input to a reservoir is a constant Q and the output is kM for some constant k, then the mass balance equation is dM/dt = Q − kM. This expresses the fact that any change in mass must be balanced by changes in the input or output. On a time scale of 1/k, the system approaches a steady state in which Msteady = Q/k. The residence time is defined as τres = Msteady/Q = Msteady/S, where Q and S are the input and output rates (equal at steady state). In the above example, the steady-state input and output rates are both equal to Q, so τres = 1/k. If the input and output rates are nonlinear functions of M, they may still be closely balanced over time scales much greater than the residence time; otherwise, there will be large fluctuations in M. In that case, the system is always close to a steady-state and the lowest order expansion of the mass balance equation will lead to a linear equation like the one above. In most systems, one or both of the input and output depend on M, resulting in feedback that tends to maintain the steady-state. If an external forcing perturbs the system, it will return to the steady-state on a time scale of 1/k. Abundance of elements Solar System The composition of the solar system is similar to that of many other stars, and aside from small anomalies it can be assumed to have formed from a solar nebula that had a uniform composition; the composition of the Sun's photosphere is similar to that of the rest of the Solar System. The composition of the photosphere is determined by fitting the absorption lines in its spectrum to models of the Sun's atmosphere. By far the two most abundant elements by fraction of total mass are hydrogen (74.9%) and helium (23.8%), with all the remaining elements contributing just 1.3%. There is a general trend of exponential decrease in abundance with increasing atomic number, although elements with even atomic number are more common than their odd-numbered neighbors (the Oddo–Harkins rule). Compared to the overall trend, lithium, boron and beryllium are depleted and iron is anomalously enriched. The pattern of elemental abundance is mainly due to two factors. The hydrogen, helium, and some of the lithium were formed within about 20 minutes after the Big Bang, while the rest were created in the interiors of stars. Meteorites Meteorites come in a variety of compositions, but chemical analysis can determine whether they were once in planetesimals that melted or differentiated. Chondrites are undifferentiated and have round mineral inclusions called chondrules. With ages of about 4.56 billion years, they date to the early solar system. A particular kind, the CI chondrite, has a composition that closely matches that of the Sun's photosphere, except for depletion of some volatiles (H, He, C, N, O) and a group of elements (Li, B, Be) that are destroyed by nucleosynthesis in the Sun. Because of the latter group, CI chondrites are considered a better match for the composition of the early Solar System. Moreover, the chemical analysis of CI chondrites is more accurate than for the photosphere, so they are generally used as the source for chemical abundances, despite their rarity (only five have been recovered on Earth). 
Giant planets The planets of the Solar System are divided into two groups: the four inner planets are the terrestrial planets (Mercury, Venus, Earth and Mars), with relatively small sizes and rocky surfaces. The four outer planets are the giant planets, which are dominated by hydrogen and helium and have lower mean densities. These can be further subdivided into the gas giants (Jupiter and Saturn) and the ice giants (Uranus and Neptune) that have large icy cores. Most of our direct information on the composition of the giant planets is from spectroscopy. Since the 1930s, Jupiter was known to contain hydrogen, methane and ammonium. In the 1960s, interferometry greatly increased the resolution and sensitivity of spectral analysis, allowing the identification of a much greater collection of molecules including ethane, acetylene, water and carbon monoxide. However, Earth-based spectroscopy becomes increasingly difficult with more remote planets, since the reflected light of the Sun is much dimmer; and spectroscopic analysis of light from the planets can only be used to detect vibrations of molecules, which are in the infrared frequency range. This constrains the abundances of the elements H, C and N. Two other elements are detected: phosphorus in the gas phosphine (PH3) and germanium in germane (GeH4). The helium atom has vibrations in the ultraviolet range, which is strongly absorbed by the atmospheres of the outer planets and Earth. Thus, despite its abundance, helium was only detected once spacecraft were sent to the outer planets, and then only indirectly through collision-induced absorption in hydrogen molecules. Further information on Jupiter was obtained from the Galileo probe when it was sent into the atmosphere in 1995; and the final mission of the Cassini probe in 2017 was to enter the atmosphere of Saturn. In the atmosphere of Jupiter, He was found to be depleted by a factor of 2 compared to solar composition and Ne by a factor of 10, a surprising result since the other noble gases and the elements C, N and S were enhanced by factors of 2 to 4 (oxygen was also depleted but this was attributed to the unusually dry region that Galileo sampled). Spectroscopic methods only penetrate the atmospheres of Jupiter and Saturn to depths where the pressure is about equal to 1 bar, approximately Earth's atmospheric pressure at sea level. The Galileo probe penetrated to 22 bars. This is a small fraction of the planet, which is expected to reach pressures of over 40 Mbar. To constrain the composition in the interior, thermodynamic models are constructed using the information on temperature from infrared emission spectra and equations of state for the likely compositions. High-pressure experiments predict that hydrogen will be a metallic liquid in the interior of Jupiter and Saturn, while in Uranus and Neptune it remains in the molecular state. Estimates also depend on models for the formation of the planets. Condensation of the presolar nebula would result in a gaseous planet with the same composition as the Sun, but the planets could also have formed when a solid core captured nebular gas. In current models, the four giant planets have cores of rock and ice that are roughly the same size, but the proportion of hydrogen and helium decreases from about 300 Earth masses in Jupiter to 75 in Saturn and just a few in Uranus and Neptune. 
Thus, while the gas giants are primarily composed of hydrogen and helium, the ice giants are primarily composed of heavier elements (O, C, N, S), mostly in the form of water, methane, and ammonia. The surfaces are cold enough for molecular hydrogen to be liquid, so much of each planet is likely a hydrogen ocean overlaying one of heavier compounds. Outside the core, Jupiter has a mantle of liquid metallic hydrogen and an atmosphere of molecular hydrogen and helium. Metallic hydrogen does not mix well with helium, and in Saturn the helium may form a separate layer below the metallic hydrogen. Terrestrial planets Terrestrial planets are believed to have come from the same nebular material as the giant planets, but they have lost most of the lighter elements and have different histories. Planets closer to the Sun might be expected to have a higher fraction of refractory elements, but if their later stages of formation involved collisions of large objects with orbits that sampled different parts of the Solar System, there could be little systematic dependence on position. Direct information on Mars, Venus and Mercury largely comes from spacecraft missions. Using gamma-ray spectrometers, the composition of the crust of Mars has been measured by the Mars Odyssey orbiter, the crust of Venus by some of the Venera missions, and the crust of Mercury by the MESSENGER spacecraft. Additional information on Mars comes from meteorites that have landed on Earth (the Shergottites, Nakhlites, and Chassignites, collectively known as SNC meteorites). Abundances are also constrained by the masses of the planets, while the internal distribution of elements is constrained by their moments of inertia. The planets condensed from the solar nebula, and many of the details of their composition are determined by fractionation as they cooled. The phases that condense fall into five groups. First to condense are materials rich in refractory elements such as Ca and Al. These are followed by nickel and iron, then magnesium silicates. Below about 700 K, FeS and volatile-rich metals and silicates form a fourth group, and in the fifth group FeO enters the magnesium silicates. The compositions of the planets and the Moon are chondritic, meaning that within each group the ratios between elements are the same as in carbonaceous chondrites. The estimates of planetary compositions depend on the model used. In the equilibrium condensation model, each planet was formed from a feeding zone in which the compositions of solids were determined by the temperature in that zone. Thus, Mercury formed at 1400 K, where iron remained in a pure metallic form and there was little magnesium or silicon in solid form; Venus at 900 K, so all the magnesium and silicon condensed; Earth at 600 K, so it contains FeS and silicates; and Mars at 450 K, so FeO was incorporated into magnesium silicates. The greatest problem with this theory is that volatiles would not condense, so the planets would have no atmospheres and the Earth no ocean. In chondritic mixing models, the compositions of chondrites are used to estimate planetary compositions. For example, one model mixes two components, one with the composition of C1 chondrites and one with just the refractory components of C1 chondrites. In another model, the abundances of the five fractionation groups are estimated using an index element for each group. 
For the most refractory group, uranium is used; iron for the second; the ratios of potassium and thallium to uranium for the next two; and the molar ratio FeO/(FeO+MgO) for the last. Using thermal and seismic models along with heat flow and density, Fe can be constrained to within 10 percent on Earth, Venus, and Mercury. U can be constrained within about 30% on Earth, but its abundance on other planets is based on "educated guesses". One difficulty with this model is that there may be significant errors in its prediction of volatile abundances because some volatiles are only partially condensed. Earth's crust The more common rock constituents are nearly all oxides; chlorides, sulfides and fluorides are the only important exceptions to this and their total amount in any rock is usually much less than 1%. By 1911, F. W. Clarke had calculated that a little more than 47% of the Earth's crust consists of oxygen. It occurs principally in combination as oxides, of which the chief are silica, alumina, iron oxides, and various carbonates (calcium carbonate, magnesium carbonate, sodium carbonate, and potassium carbonate). The silica functions principally as an acid, forming silicates, and all the commonest minerals of igneous rocks are of this nature. From a computation based on 1672 analyses of numerous kinds of rocks Clarke arrived at the following as the average percentage composition of the Earth's crust: SiO2=59.71, Al2O3=15.41, Fe2O3=2.63, FeO=3.52, MgO=4.36, CaO=4.90, Na2O=3.55, K2O=2.80, H2O=1.52, TiO2=0.60, P2O5=0.22, (total 99.22%). All the other constituents occur only in very small quantities, usually much less than 1%. These oxides do not combine in a haphazard way. For example, potash (potassium oxide) and soda (sodium oxide) combine with alumina and silica to produce feldspars. In some cases, they may take other forms, such as nepheline, leucite, and muscovite, but in the great majority of instances they are found as feldspar. Phosphoric acid with lime (calcium oxide) forms apatite. Titanium dioxide with ferrous oxide gives rise to ilmenite. Part of the lime forms lime feldspar. Magnesia and iron oxides with silica crystallize as olivine or enstatite, or with alumina and lime form the complex ferromagnesian silicates of which the pyroxenes, amphiboles, and biotites are the chief. Any excess of silica above what is required to neutralize the bases will separate out as quartz; excess of alumina crystallizes as corundum. These must be regarded only as general tendencies. It is possible, by rock analysis, to say approximately what minerals the rock contains, but there are numerous exceptions to any rule. Mineral constitution Quartz is not abundant in igneous rocks except in the acid or siliceous ones containing greater than 66% of silica, known as felsic rocks. In basic rocks (containing 20% of silica or less) quartz is rarely present; these are referred to as mafic rocks. If magnesium and iron are above average while silica is low, olivine may be expected; where silica is present in greater quantity, ferromagnesian minerals such as augite, hornblende, enstatite or biotite occur rather than olivine. Unless potash is high and silica relatively low, leucite will not be present, for leucite does not occur with free quartz. Nepheline, likewise, is usually found in rocks with much soda and comparatively little silica. With high alkalis, soda-bearing pyroxenes and amphiboles may be present. 
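The qualitative rules above can be collected into a small sketch. The Python function below groups a rock by bulk silica content alone; only the 66% felsic boundary comes from the surrounding text, while the 52% and 45% boundaries are commonly cited approximate values assumed here for illustration rather than figures taken from this article.

```python
def broad_igneous_group(sio2_wt_percent):
    """Very rough grouping of an igneous rock by bulk silica (weight percent).
    Only the 66% felsic cutoff is from the surrounding text; the 52% and 45%
    boundaries are assumed, commonly cited approximations."""
    if sio2_wt_percent > 66:
        return "felsic (acid): free quartz expected"
    if sio2_wt_percent > 52:
        return "intermediate: generally neither free quartz nor olivine"
    if sio2_wt_percent > 45:
        return "mafic (basic): quartz rare, olivine likely if Mg and Fe are high"
    return "ultramafic: very low silica, olivine-rich, feldspar-poor"

for sio2 in (75, 60, 48, 40):
    print(f"{sio2}% SiO2 -> {broad_igneous_group(sio2)}")
```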
The lower the percentage of silica and alkalis, the greater is the prevalence of plagioclase feldspar as contrasted with soda or potash feldspar. Earth's crust is composed of about 90% silicate minerals, and their abundance in the crust is as follows: plagioclase feldspar (39%), alkali feldspar (12%), quartz (12%), pyroxene (11%), amphiboles (5%), micas (5%), clay minerals (5%); the remaining silicate minerals make up another 3% of Earth's crust. Only 8% of the crust is composed of non-silicate minerals such as carbonates, oxides, and sulfides. The other determining factor, namely the physical conditions attending consolidation, plays, on the whole, a smaller part, yet is by no means negligible. Certain minerals are practically confined to deep-seated intrusive rocks, e.g., microcline, muscovite, diallage. Leucite is very rare in plutonic masses; many minerals have special peculiarities in microscopic character according to whether they crystallized at depth or near the surface, e.g., hypersthene, orthoclase, quartz. There are some curious instances of rocks having the same chemical composition, but consisting of entirely different minerals, e.g., the hornblendite of Gran, in Norway, which contains only hornblende, has the same composition as some of the camptonites of the same locality that contain feldspar and hornblende of a different variety. In this connection, we may repeat what has been said above about the corrosion of porphyritic minerals in igneous rocks. In rhyolites and trachytes, early crystals of hornblende and biotite may be found in great numbers partially converted into augite and magnetite. Hornblende and biotite were stable under the pressures and other conditions below the surface, but unstable at higher levels. In the ground-mass of these rocks, augite is almost universally present. But the plutonic representatives of the same magma, granite and syenite, contain biotite and hornblende far more commonly than augite. Felsic, intermediate and mafic igneous rocks Those rocks that contain the most silica, and on crystallizing yield free quartz, form a group generally designated the "felsic" rocks. Those again that contain the least silica and most magnesia and iron, so that quartz is absent while olivine is usually abundant, form the "mafic" group. The "intermediate" rocks include those characterized by the general absence of both quartz and olivine. An important subdivision of these contains a very high percentage of alkalis, especially soda, and consequently has minerals such as nepheline and leucite not common in other rocks. It is often separated from the others as the "alkali" or "soda" rocks, and there is a corresponding series of mafic rocks. Lastly, a small sub-group rich in olivine and without feldspar has been called the "ultramafic" rocks. They have very low percentages of silica but much iron and magnesia. Except for these last, practically all rocks contain feldspars or feldspathoid minerals. In the acid rocks, the common feldspars are orthoclase, perthite, microcline, and oligoclase, all having much silica and alkalis. In the mafic rocks labradorite, anorthite, and bytownite prevail, being rich in lime and poor in silica, potash, and soda. Augite is the most common ferromagnesian mineral in mafic rocks, but biotite and hornblende are on the whole more frequent in felsic rocks. Rocks that contain leucite or nepheline, either partly or wholly replacing feldspar, are not included in this classification. They are essentially of intermediate or of mafic character. 
We might in consequence regard them as varieties of syenite, diorite, gabbro, etc., in which feldspathoid minerals occur, and indeed there are many transitions between syenites of ordinary type and nepheline — or leucite — syenite, and between gabbro or dolerite and theralite or essexite. But, as many minerals develop in these "alkali" rocks that are uncommon elsewhere, it is convenient in a purely formal classification like that outlined here to treat the whole assemblage as a distinct series. This classification is based essentially on the mineralogical constitution of the igneous rocks. Any chemical distinctions between the different groups, though implied, are relegated to a subordinate position. It is admittedly artificial, but it has grown up with the growth of the science and is still adopted as the basis on which more minute subdivisions are erected. The subdivisions are by no means of equal value. The syenites, for example, and the peridotites, are far less important than the granites, diorites, and gabbros. Moreover, the effusive andesites do not always correspond to the plutonic diorites but partly also to the gabbros. As the different kinds of rock, regarded as aggregates of minerals, pass gradually into one another, transitional types are very common and are often so important as to receive special names. The quartz-syenites and nordmarkites may be interposed between granite and syenite, the tonalites and adamellites between granite and diorite, the monzonites between syenite and diorite, norites and hyperites between diorite and gabbro, and so on. Trace metals in the ocean Trace metals readily form complexes with major ions in the ocean, including hydroxide, carbonate, and chloride and their chemical speciation changes depending on whether the environment is oxidized or reduced. Benjamin (2002) defines complexes of metals with more than one type of ligand, other than water, as mixed-ligand-complexes. In some cases, a ligand contains more than one donor atom, forming very strong complexes, also called chelates (the ligand is the chelator). One of the most common chelators is EDTA (ethylenediaminetetraacetic acid), which can replace six molecules of water and form strong bonds with metals that have a plus two charge. With stronger complexation, lower activity of the free metal ion is observed. One consequence of the lower reactivity of complexed metals compared to the same concentration of free metal is that the chelation tends to stabilize metals in the aqueous solution instead of in solids. Concentrations of the trace metals cadmium, copper, molybdenum, manganese, rhenium, uranium and vanadium in sediments record the redox history of the oceans. Within aquatic environments, cadmium(II) can either be in the form CdCl+(aq) in oxic waters or CdS(s) in a reduced environment. Thus, higher concentrations of Cd in marine sediments may indicate low redox potential conditions in the past. For copper(II), a prevalent form is CuCl+(aq) within oxic environments and CuS(s) and Cu2S within reduced environments. The reduced seawater environment leads to two possible oxidation states of copper, Cu(I) and Cu(II). Molybdenum is present as the Mo(VI) oxidation state as MoO42−(aq) in oxic environments. Mo(V) and Mo(IV) are present in reduced environments in the forms MoO2+(aq) and MoS2(s). Rhenium is present as the Re(VII) oxidation state as ReO4− within oxic conditions, but is reduced to Re(IV) which may form ReO2 or ReS2. 
Uranium is in oxidation state VI in UO2(CO3)34−(aq) and is found in the reduced form UO2(s). Vanadium in the +5 oxidation state, V(V), occurs in several forms, including HVO42− and H2VO4−; its reduced forms can include VO2+, VO(OH)3−, and V(OH)3. The relative dominance of these species depends on pH. In the water column of the ocean or deep lakes, vertical profiles of dissolved trace metals are characterized as following conservative-type, nutrient-type, or scavenged-type distributions. Across these three distributions, trace metals have different residence times and are used to varying extents by planktonic microorganisms. Trace metals with conservative-type distributions have high concentrations relative to their biological use. One example of a trace metal with a conservative-type distribution is molybdenum. It has a residence time within the oceans of around 800,000 years and is generally present as the molybdate anion (MoO42−). Molybdenum interacts weakly with particles and displays an almost uniform vertical profile in the ocean. Relative to the abundance of molybdenum in the ocean, the amount required as a metal cofactor for enzymes in marine phytoplankton is negligible. Trace metals with nutrient-type distributions are strongly associated with the internal cycles of particulate organic matter, especially the assimilation by plankton. The lowest dissolved concentrations of these metals are at the surface of the ocean, where they are assimilated by plankton. As dissolution and decomposition occur at greater depths, concentrations of these trace metals increase. Residence times of these metals, such as zinc, are several thousand to one hundred thousand years. Finally, an example of a scavenged-type trace metal is aluminium, which has strong interactions with particles as well as a short residence time in the ocean. The residence times of scavenged-type trace metals are around 100 to 1000 years. The concentrations of these metals are highest around bottom sediments, hydrothermal vents, and rivers. For aluminium, atmospheric dust provides the greatest source of external inputs into the ocean. Iron and copper show hybrid distributions in the ocean. They are influenced by recycling and intense scavenging. Iron is a limiting nutrient in vast areas of the oceans and is found in high abundance along with manganese near hydrothermal vents. Here, many iron precipitates are found, mostly in the forms of iron sulfides and oxidized iron oxyhydroxide compounds. Concentrations of iron near hydrothermal vents can be up to one million times the concentrations found in the open ocean. Using electrochemical techniques, it is possible to show that bioactive trace metals (zinc, cobalt, cadmium, iron, and copper) are bound by organic ligands in surface seawater. These ligand complexes serve to lower the bioavailability of trace metals within the ocean. For example, copper, which may be toxic to open ocean phytoplankton and bacteria, can form organic complexes. The formation of these complexes reduces the concentrations of bioavailable inorganic complexes of copper that could be toxic to sea life at high concentrations. Unlike copper, zinc toxicity in marine phytoplankton is low and there is no advantage to increasing the organic binding of Zn2+. In high-nutrient, low-chlorophyll regions, iron is the limiting nutrient, with the dominant species being strong organic complexes of Fe(III). 
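The residence times quoted for these distribution types follow from the same inventory-over-flux reasoning used for the box models earlier in the article. A minimal sketch, with placeholder numbers that are assumptions rather than measured ocean values:

```python
def residence_time_years(ocean_inventory, input_flux_per_year):
    """Residence time = standing ocean inventory / input flux, assuming steady state."""
    return ocean_inventory / input_flux_per_year

# Placeholder values, chosen only to illustrate the calculation (not measured data).
inventory_mol = 1.0e14     # hypothetical total dissolved inventory of a trace metal (mol)
input_mol_yr = 1.0e9       # hypothetical combined river and dust input (mol per year)
print(f"residence time ~ {residence_time_years(inventory_mol, input_mol_yr):.0e} years")
```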
See also Biogeochemical cycle Geochemical cycle Petrology Radiometric dating Tephrochronology Volcanic gas References Further reading External links The Geochemistry of Igneous Rocks (Gunn Interactive Ltd.) Earth sciences
Geochemistry
[ "Chemistry" ]
7,536
[ "nan" ]
39,619
https://en.wikipedia.org/wiki/Electroporation
Electroporation, or electropermeabilization, is a technique in which an electrical field is applied to cells in order to increase the permeability of the cell membrane. This may allow chemicals, drugs, electrode arrays or DNA to be introduced into the cell (also called electrotransfer). In microbiology, the process of electroporation is often used to transform bacteria, yeast, or plant protoplasts by introducing new coding DNA. If bacteria and plasmids are mixed together, the plasmids can be transferred into the bacteria after electroporation, though depending on what is being transferred, cell-penetrating peptides or cell squeeze could also be used. Electroporation works by passing thousands of volts across a few millimeters of suspended cells (fields on the order of 8 kV/cm) in an electroporation cuvette. Afterwards, the cells have to be handled carefully until they have had a chance to divide, producing new cells that contain reproduced plasmids. This process is approximately ten times more effective at increasing the cell membrane's permeability than chemical transformation, although many laboratories lack the specialized equipment needed for electroporation. Electroporation is also highly efficient for the introduction of foreign genes into tissue culture cells, especially mammalian cells. For example, it is used in the process of producing knockout mice, as well as in tumor treatment, gene therapy, and cell-based therapy. The process of introducing foreign DNA into eukaryotic cells is known as transfection. Electroporation is highly effective for transfecting cells in suspension using electroporation cuvettes. Electroporation has proven efficient for use on tissues in vivo, for in utero applications as well as in ovo transfection. Adherent cells can also be transfected using electroporation, providing researchers with an alternative to trypsinizing their cells prior to transfection. One downside to electroporation, however, is that after the process the expression of over 7,000 genes can be affected. This can cause problems in studies where gene expression has to be controlled to ensure accurate and precise results. Although bulk electroporation has many benefits over physical delivery methods such as microinjections and gene guns, it still has limitations, including low cell viability. Miniaturization of electroporation has been studied, leading to microelectroporation and nanotransfection of tissue utilizing electroporation-based techniques via nanochannels to minimally invasively deliver cargo to the cells. Electroporation has also been used as a mechanism to trigger cell fusion. Artificially induced cell fusion can be used to investigate and treat different diseases such as diabetes, to regenerate axons of the central nervous system, and to produce cells with desired properties, such as the cell vaccines used in cancer immunotherapy. However, the first and best-known application of cell fusion is the production of monoclonal antibodies in hybridoma technology, where hybrid cell lines (hybridomas) are formed by fusing specific antibody-producing B lymphocytes with a myeloma (B lymphocyte cancer) cell line. Laboratory practice Electroporation is performed with electroporators, purpose-built appliances that create an electric field in a cell solution. The cell suspension is pipetted into a glass or plastic cuvette which has two aluminium electrodes on its sides. For bacterial electroporation, typically a suspension of around 50 microliters is used. 
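The field strengths mentioned above follow directly from the applied voltage and the electrode gap of the cuvette. A minimal Python sketch, assuming the common commercial gap sizes of 1 mm and 2 mm (the gap sizes and voltages here are illustrative assumptions, not values from this text):

```python
def field_strength_kv_per_cm(voltage_v, gap_mm):
    """Nominal field between the cuvette electrodes, E = V / d, in kV/cm."""
    gap_cm = gap_mm / 10.0
    return (voltage_v / 1000.0) / gap_cm

# Illustrative settings that reproduce a field of roughly 8 kV/cm.
for voltage, gap in [(800, 1.0), (1600, 2.0)]:
    print(f"{voltage} V across a {gap} mm gap -> {field_strength_kv_per_cm(voltage, gap):.1f} kV/cm")
```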
Prior to electroporation, this suspension of bacteria is mixed with the plasmid to be transformed. The mixture is pipetted into the cuvette, the voltage and capacitance are set, and the cuvette is inserted into the electroporator. The process requires direct contact between the electrodes and the suspension. Immediately after electroporation, one milliliter of liquid medium is added to the bacteria (in the cuvette or in an Eppendorf tube), and the tube is incubated at the bacteria's optimal temperature for an hour or more to allow recovery of the cells and expression of the plasmid, followed by bacterial culture on agar plates. The success of the electroporation depends greatly on the purity of the plasmid solution, especially on its salt content. Solutions with high salt concentrations might cause an electrical discharge (known as arcing), which often reduces the viability of the bacteria. For a further detailed investigation of the process, more attention should be paid to the output impedance of the porator device and the input impedance of the cells suspension (e.g. salt content). Since the cell membrane is not able to pass current (except in ion channels), it acts as an electrical capacitor. Subjecting membranes to a high-voltage electric field results in their temporary breakdown, resulting in pores that are large enough to allow macromolecules (such as DNA) to enter or leave the cell. Additionally, electroporation can be used to increase permeability of cells during in Utero injections and surgeries. Particularly, the electroporation allows for a more efficient transfection of DNA, RNA, shRNA, and all nucleic acids into the cells of mice and rats. The success of in vivo electroporation depends greatly on voltage, repetition, pulses, and duration. Developing central nervous systems are most effective for in vivo electroporation due to the visibility of ventricles for injections of nucleic acids, as well as the increased permeability of dividing cells. Electroporation of injected in utero embryos is performed through the uterus wall, often with forceps-type electrodes to limit damage to the embryo. In vitro and animal studies In vivo gene electrotransfer was first described in 1991 and today there are many preclinical studies of gene electrotransfer. The method is used to deliver large variety of therapeutic genes for potential treatment of several diseases, such as: disorders in immune system, tumors, metabolic disorders, monogenetic diseases, cardiovascular diseases, analgesia.... With regards to irreversible electroporation, the first successful treatment of malignant cutaneous tumors implanted in mice was completed in 2007 by a group of scientists who achieved complete tumor ablation in 12 out of 13 mice. They accomplished this by sending 80 pulses of 100 microseconds at 0.3 Hz with an electrical field magnitude of 2500 V/cm to treat the cutaneous tumors. Currently, a number of companies, including AngioDynamics, Inc. and VoltMed, Inc., are continuing to develop and deploy irreversible electroporation-based technologies within clinical environments. The first group to look at electroporation for medical applications was led by Lluis M Mir at the Institute Gustave Roussy. In this case, they looked at the use of reversible electroporation in conjunction with impermeable macromolecules. The first research looking at how nanosecond pulses might be used on human cells was conducted by researchers at Eastern Virginia Medical School and Old Dominion University, and published in 2003. 
Medical applications The first medical application of electroporation was the introduction of poorly permeant anticancer drugs into tumor nodules. Gene electrotransfer also soon became of special interest because of its low cost, ease of implementation and safety; viral vectors, by contrast, can have serious limitations in terms of immunogenicity and pathogenicity when used for DNA transfer. Irreversible electroporation is being used and evaluated as cardiac ablation therapy to kill very small areas of heart muscle. This is done to treat irregularities of heart rhythm. A cardiac catheter delivers trains of high-voltage ultra-rapid electrical pulses that form irreversible pores in cell membranes, resulting in cell death. It is thought to allow better selectivity than the previous techniques, which used heat or cold to kill larger volumes of muscle. Higher-voltage electroporation was found in pigs to irreversibly destroy target cells within a narrow range while leaving neighboring cells unaffected, and thus represents a promising new treatment for cancer, heart disease and other disease states that require removal of tissue. Irreversible electroporation (IRE) has since proven effective in treating human cancer, with surgeons at Johns Hopkins and other institutions now using the technology to treat pancreatic cancer previously thought to be unresectable. The first phase I clinical trial of gene electrotransfer, in patients with metastatic melanoma, has also been reported. Electroporation-mediated delivery of a plasmid encoding interleukin-12 (pIL-12) was performed, and safety, tolerability and therapeutic effect were monitored. The study concluded that gene electrotransfer with pIL-12 is safe and well tolerated. In addition, partial or complete responses were also observed in distant untreated metastases, suggesting a systemic treatment effect. Based on these results, the investigators are planning to move to a phase II clinical study. There are currently several ongoing clinical studies of gene electrotransfer in which the safety, tolerability and effectiveness of immunization with a DNA vaccine administered by electric pulses are monitored. Although the method is strictly local rather than systemic, it is still the most efficient non-viral strategy for gene delivery. N-TIRE A recent technique called non-thermal irreversible electroporation (N-TIRE) has proven successful in treating many different types of tumors and other unwanted tissue. This procedure is done using small electrodes (about 1 mm in diameter), placed either inside or surrounding the target tissue to apply short, repetitive bursts of electricity at a predetermined voltage and frequency. These bursts of electricity increase the resting transmembrane potential (TMP), so that nanopores form in the plasma membrane. When the electricity applied to the tissue is above the electric field threshold of the target tissue, the cells become permanently permeable from the formation of nanopores. As a result, the cells are unable to repair the damage and die due to a loss of homeostasis. N-TIRE is unique among tumor ablation techniques in that it does not create thermal damage to the surrounding tissue. Reversible electroporation In contrast, reversible electroporation occurs when the electricity applied with the electrodes is below the electric field threshold of the target tissue. 
Because the electricity applied is below the cells' threshold, it allows the cells to repair their phospholipid bilayer and continue on with their normal cell functions. Reversible electroporation is typically done with treatments that involve getting a drug or gene (or another molecule to which the cell membrane is not normally permeable) into the cell. Not all tissue has the same electric field threshold; therefore careful calculations need to be made prior to a treatment to ensure safety and efficacy. One major advantage of using N-TIRE is that, when done correctly according to careful calculations, it only affects the target tissue. Proteins, the extracellular matrix, and critical structures such as blood vessels and nerves are all unaffected and left healthy by this treatment. This allows for a quicker recovery, and facilitates a more rapid replacement of dead tumor cells with healthy cells. Before doing the procedure, scientists must carefully calculate exactly what needs to be done and treat each patient on an individual case-by-case basis. To do this, imaging technology such as CT scans and MRIs is commonly used to create a 3D image of the tumor. From this information, they can approximate the volume of the tumor and, using software tools, decide on the best course of action, including the insertion site of the electrodes, the angle at which they are inserted, and the voltage needed. Often, a CT machine will be used to help with the placement of electrodes during the procedure, particularly when the electrodes are being used to treat tumors in the brain. The entire procedure is very quick, typically taking about five minutes. The success rate of these procedures is high, and they are very promising for future treatment in humans. One disadvantage to using N-TIRE is that the electricity delivered from the electrodes can stimulate muscle cells to contract, which could have lethal consequences depending on the situation. Therefore, a paralytic agent must be used when performing the procedure. The paralytic agents that have been used in such research are successful; however, there is always some risk, albeit slight, when using anesthetics. H-FIRE A more recent technique has been developed called high-frequency irreversible electroporation (H-FIRE). This technique uses electrodes to apply bipolar bursts of electricity at a high frequency, as opposed to unipolar bursts of electricity at a low frequency. This type of procedure has the same tumor ablation success as N-TIRE. However, it has one distinct advantage: H-FIRE does not cause muscle contraction in the patient, and therefore there is no need for a paralytic agent. Furthermore, H-FIRE has been demonstrated to produce more predictable ablations due to the smaller difference in the electrical properties of tissues at higher frequencies. Drug and gene delivery Electroporation can also be used to help deliver drugs or genes into the cell by applying short and intense electric pulses that transiently permeabilize the cell membrane, thus allowing transport of molecules that are otherwise not transported across the cell membrane. This procedure is referred to as electrochemotherapy when the molecules to be transported are chemotherapeutic agents, or gene electrotransfer when the molecule to be transported is DNA. Scientists from Karolinska Institute and the University of Oxford use electroporation of exosomes to deliver siRNAs, antisense oligonucleotides, chemotherapeutic agents and proteins specifically to neurons after injecting them systemically (into the blood). 
Because these exosomes are able to cross the blood-brain barrier, this protocol could solve the problem of poor delivery of medications to the central nervous system, and potentially treat Alzheimer's disease, Parkinson's disease, and brain cancer, among other conditions. Bacterial transformation is generally the easiest way to make large amounts of a particular protein needed for biotechnology purposes or in medicine. Since gene electrotransfer is a very simple, rapid and highly effective technique, it quickly became a convenient replacement for other transformation procedures. Recent research has shown that shock waves can be used to pre-treat the cell membrane prior to electroporation. This synergistic strategy has been shown to reduce the external voltage requirement and to create larger pores. The application of shock waves also makes it possible to target a desired membrane site and to control the size of the resulting pore. Physical mechanism Electroporation allows cellular introduction of large, highly charged molecules such as DNA, which would never passively diffuse across the hydrophobic bilayer core. This phenomenon indicates that the mechanism is the creation of nm-scale water-filled holes in the membrane. Electropores were optically imaged in lipid bilayer models like droplet interface bilayers and giant unilamellar vesicles, while the addition of cytoskeletal proteins such as actin networks to giant unilamellar vesicles seems to prevent the formation of visible electropores. Experimental evidence for a role of actin networks in regulating cell membrane permeability has also emerged. Although electroporation and dielectric breakdown both result from application of an electric field, the mechanisms involved are fundamentally different. In dielectric breakdown the barrier material is ionized, creating a conductive pathway. The material alteration is thus chemical in nature. In contrast, during electroporation the lipid molecules are not chemically altered but simply shift position, opening up a pore which acts as the conductive pathway through the bilayer as it is filled with water. Electroporation is a dynamic phenomenon that depends on the local transmembrane voltage at each point on the cell membrane. It is generally accepted that for a given pulse duration and shape, a specific transmembrane voltage threshold exists for the manifestation of the electroporation phenomenon (from 0.5 V to 1 V). This leads to the definition of an electric field magnitude threshold for electroporation (Eth). That is, only the cells within areas where E ≥ Eth are electroporated. If a second threshold (Eir) is reached or surpassed, electroporation will compromise the viability of the cells, i.e., irreversible electroporation (IRE). Electroporation is a multi-step process with several distinct phases. First, a short electrical pulse must be applied. Typical parameters would be 300–400 mV for less than 1 ms across the membrane (note: the voltages used in cell experiments are typically much larger because they are applied across large distances in the bulk solution, so the resulting voltage across the actual membrane is only a small fraction of the applied bias). Upon application of this potential the membrane charges like a capacitor through the migration of ions from the surrounding solution. Once the critical field is achieved there is a rapid localized rearrangement in lipid morphology. 
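A minimal sketch of this capacitor-like charging, assuming a simple exponential approach to a steady-state transmembrane voltage; the 1 microsecond charging time constant and the 1 V steady-state value are illustrative assumptions rather than parameters given in this text:

```python
import math

def transmembrane_voltage(t_us, v_steady=1.0, tau_us=1.0):
    """Capacitor-like membrane charging: V(t) = V_steady * (1 - exp(-t / tau)).
    Both v_steady and tau are assumed, order-of-magnitude values."""
    return v_steady * (1.0 - math.exp(-t_us / tau_us))

threshold = 0.5  # lower end of the 0.5-1 V electroporation threshold quoted above
for t in (0.2, 0.5, 1.0, 2.0, 5.0):
    v = transmembrane_voltage(t)
    status = "above" if v >= threshold else "below"
    print(f"t = {t:3.1f} us   V = {v:.2f} V   ({status} the {threshold} V threshold)")
```

In this toy model the membrane only crosses the threshold once the pulse has lasted a meaningful fraction of the charging time constant, which is one reason pulse duration and shape matter as much as amplitude.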
The resulting structure is believed to be a "pre-pore" since it is not electrically conductive but leads rapidly to the creation of a conductive pore. Evidence for the existence of such pre-pores comes mostly from the "flickering" of pores, which suggests a transition between conductive and insulating states. It has been suggested that these pre-pores are small (~3 Å) hydrophobic defects. If this theory is correct, then the transition to a conductive state could be explained by a rearrangement at the pore edge, in which the lipid heads fold over to create a hydrophilic interface. Finally, these conductive pores can either heal, resealing the bilayer or expand, eventually rupturing it. The resultant fate depends on whether the critical defect size was exceeded which in turn depends on the applied field, local mechanical stress and bilayer edge energy. Gene electroporation Application of electric pulses of sufficient strength to the cell causes an increase in the trans-membrane potential difference, which provokes the membrane destabilization. Cell membrane permeability is increased and otherwise nonpermeant molecules enter the cell. Although the mechanisms of gene electrotransfer are not yet fully understood, it was shown that the introduction of DNA only occurs in the part of the membrane facing the cathode and that several steps are needed for successful transfection: electrophoretic migration of DNA towards the cell, DNA insertion into the membrane, translocation across the membrane, migration of DNA towards the nucleus, transfer of DNA across the nuclear envelope and finally gene expression. There are a number of factors that can influence the efficiency of gene electrotransfer, such as: temperature, parameters of electric pulses, DNA concentration, electroporation buffer used, cell size and the ability of cells to express transfected genes. In in vivo gene electrotransfer, DNA diffusion through extracellular matrix, properties of tissue and overall tissue conductivity are also crucial. History In the 1960s it was known that by applying an external electric field, a large membrane potential at the two pole of a cell can be created. In the 1970s it was discovered that when a membrane potential reached a critical level, the membrane would break down and that it could recover. By the 1980s, this opening was being used to introduce various materials/molecules into the cells. References Biotechnology Microbiology Molecular biology Gene delivery Laboratory techniques
Electroporation
[ "Chemistry", "Biology" ]
4,025
[ "Genetics techniques", "Biological engineering", "Genetic engineering", "Molecular biology techniques", "nan", "Molecular biology", "Gene delivery" ]
39,626
https://en.wikipedia.org/wiki/Symbiosis
Symbiosis (from Ancient Greek symbíōsis: living with, companionship, from sýn: together, and bíōsis: living) is any type of close and long-term biological interaction between two organisms of different species. The two organisms, termed symbionts, can be either in a mutualistic, a commensalistic, or a parasitic relationship. In 1879, Heinrich Anton de Bary defined symbiosis as "the living together of unlike organisms". The term is sometimes more exclusively used in a restricted, mutualistic sense, where both symbionts contribute to each other's subsistence. Symbiosis can be obligatory, which means that one or both of the symbionts depend on each other for survival, or facultative (optional), when they can also subsist independently. Symbiosis is also classified by physical attachment. Symbionts forming a single body live in conjunctive symbiosis, while all other arrangements are called disjunctive symbiosis. When one organism lives on the surface of another, such as head lice on humans, it is called ectosymbiosis; when one partner lives inside the tissues of another, such as Symbiodinium within coral, it is termed endosymbiosis. Definition The definition of symbiosis was a matter of debate for 130 years. In 1877, Albert Bernhard Frank used the term symbiosis to describe the mutualistic relationship in lichens. In 1878, the German mycologist Heinrich Anton de Bary defined it as "the living together of unlike organisms". The definition has varied among scientists, with some advocating that it should only refer to persistent mutualisms, while others thought it should apply to all persistent biological interactions (in other words, to mutualism, commensalism, and parasitism, but excluding brief interactions such as predation). In the 21st century, the latter has become the definition widely accepted by biologists. In 1949, Edward Haskell proposed an integrative approach with a classification of "co-actions", later adopted by biologists as "interactions". Types Obligate versus facultative Relationships can be obligate, meaning that one or both of the symbionts entirely depend on each other for survival. For example, in lichens, which consist of fungal and photosynthetic symbionts, the fungal partners cannot live on their own. The algal or cyanobacterial symbionts in lichens, such as Trentepohlia, can generally live independently, and their part of the relationship is therefore described as facultative (optional), or non-obligate. When one of the participants in a symbiotic relationship is capable of photosynthesis, as with lichens, it is called photosymbiosis. Ectosymbiosis versus endosymbiosis Ectosymbiosis is any symbiotic relationship in which the symbiont lives on the body surface of the host, including the inner surface of the digestive tract or the ducts of exocrine glands. Examples of this include ectoparasites such as lice; commensal ectosymbionts such as the barnacles, which attach themselves to the jaw of baleen whales; and mutualist ectosymbionts such as cleaner fish. In contrast, endosymbiosis is any symbiotic relationship in which one symbiont lives within the tissues of the other, either within the cells or extracellularly. Examples include diverse microbiomes: rhizobia, nitrogen-fixing bacteria that live in root nodules on legume roots; actinomycetes, nitrogen-fixing bacteria such as Frankia, which live in alder root nodules; single-celled algae inside reef-building corals; and bacterial endosymbionts that provide essential nutrients to about 10%–15% of insects. 
In endosymbiosis, the host cell lacks some of the nutrients which the endosymbiont provides. As a result, the host favors endosymbiont's growth processes within itself by producing some specialized cells. These cells affect the genetic composition of the host in order to regulate the increasing population of the endosymbionts and ensure that these genetic changes are passed onto the offspring via vertical transmission (heredity). As the endosymbiont adapts to the host's lifestyle, the endosymbiont changes dramatically. There is a drastic reduction in its genome size, as many genes are lost during the process of metabolism, and DNA repair and recombination, while important genes participating in the DNA-to-RNA transcription, protein translation and DNA/RNA replication are retained. The decrease in genome size is due to loss of protein coding genes and not due to lessening of inter-genic regions or open reading frame (ORF) size. Species that are naturally evolving and contain reduced sizes of genes can be accounted for an increased number of noticeable differences between them, thereby leading to changes in their evolutionary rates. When endosymbiotic bacteria related with insects are passed on to the offspring strictly via vertical genetic transmission, intracellular bacteria go across many hurdles during the process, resulting in the decrease in effective population sizes, as compared to the free-living bacteria. The incapability of the endosymbiotic bacteria to reinstate their wild type phenotype via a recombination process is called Muller's ratchet phenomenon. Muller's ratchet phenomenon, together with less effective population sizes, leads to an accretion of deleterious mutations in the non-essential genes of the intracellular bacteria. This can be due to lack of selection mechanisms prevailing in the relatively "rich" host environment. Competition Competition can be defined as an interaction between organisms or species, in which the fitness of one is lowered by the presence of another. Limited supply of at least one resource (such as food, water, and territory) used by both usually facilitates this type of interaction, although the competition can also be for other resources. Amensalism Amensalism is a non-symbiotic, asymmetric interaction where one species is harmed or killed by the other, and one is unaffected by the other. There are two types of amensalism, competition and antagonism (or antibiosis). Competition is where a larger or stronger organism deprives a smaller or weaker one of a resource. Antagonism occurs when one organism is damaged or killed by another through a chemical secretion. An example of competition is a sapling growing under the shadow of a mature tree. The mature tree can rob the sapling of necessary sunlight and, if the mature tree is very large, it can take up rainwater and deplete soil nutrients. Throughout the process, the mature tree is unaffected by the sapling. Indeed, if the sapling dies, the mature tree gains nutrients from the decaying sapling. An example of antagonism is Juglans nigra (black walnut), secreting juglone, a substance which destroys many herbaceous plants within its root zone. The term amensalism is often used to describe strongly asymmetrical competitive interactions, such as between the Spanish ibex and weevils of the genus Timarcha which feed upon the same type of shrub. 
Whilst the presence of the weevil has almost no influence on food availability, the presence of ibex has an enormous detrimental effect on weevil numbers, as they consume significant quantities of plant matter and incidentally ingest the weevils upon it. Commensalism Commensalism describes a relationship between two living organisms where one benefits and the other is not significantly harmed or helped. It is derived from the English word commensal, used of human social interaction. It derives from a medieval Latin word meaning sharing food, formed from com- (with) and mensa (table). Commensal relationships may involve one organism using another for transportation (phoresy) or for housing (inquilinism), or they may involve one organism using something another created, after its death (metabiosis). Examples of metabiosis are hermit crabs using gastropod shells to protect their bodies, and spiders building their webs on plants. Mutualism Mutualism or interspecies reciprocal altruism is a long-term relationship between individuals of different species where both individuals benefit. Mutualistic relationships may be either obligate for both species, obligate for one but facultative for the other, or facultative for both. Many herbivores have mutualistic gut flora to help them digest plant matter, which is more difficult to digest than animal prey. This gut flora comprises cellulose-digesting protozoans or bacteria living in the herbivores' intestines. Coral reefs result from mutualism between coral organisms and various algae living inside them. Most land plants and land ecosystems rely on mutualism between the plants, which fix carbon from the air, and mycorrhizal fungi, which help in extracting water and minerals from the ground. An example of mutualism is the relationship between ocellaris clownfish and the Ritteri sea anemones among whose tentacles they dwell. The territorial fish protects the anemone from anemone-eating fish, and in turn, the anemone's stinging tentacles protect the clownfish from its predators. A special mucus on the clownfish protects it from the stinging tentacles. A further example is the goby, a fish which sometimes lives together with a shrimp. The shrimp digs and cleans up a burrow in the sand in which both the shrimp and the goby fish live. The shrimp is almost blind, leaving it vulnerable to predators when outside its burrow. In case of danger, the goby touches the shrimp with its tail to warn it, and both quickly retreat into the burrow. Different species of gobies (Elacatinus spp.) also clean up ectoparasites in other fish, possibly another kind of mutualism. A spectacular example of obligate mutualism is the relationship between the siboglinid tube worms and symbiotic bacteria that live at hydrothermal vents and cold seeps. The worm has no digestive tract and is wholly reliant on its internal symbionts for nutrition. The bacteria oxidize either hydrogen sulfide or methane, which the host supplies to them. These worms were discovered in the late 1980s at the hydrothermal vents near the Galapagos Islands and have since been found at deep-sea hydrothermal vents and cold seeps in all of the world's oceans. Mutualism improves both organisms' competitive ability, and individuals with the symbiont will outcompete members of the same species that lack it. A facultative symbiosis is seen in encrusting bryozoans and hermit crabs. 
The bryozoan colony (Acanthodesia commensale) develops a cirumrotatory growth and offers the crab (Pseudopagurus granulimanus) a helicospiral-tubular extension of its living chamber that initially was situated within a gastropod shell. Parasitism In a parasitic relationship, the parasite benefits while the host is harmed. Parasitism takes many forms, from endoparasites that live within the host's body to ectoparasites and parasitic castrators that live on its surface and micropredators like mosquitoes that visit intermittently. Parasitism is an extremely successful mode of life; about 40% of all animal species are parasites, and the average mammal species is host to 4 nematodes, 2 cestodes, and 2 trematodes. Mimicry Mimicry is a form of symbiosis in which a species adopts distinct characteristics of another species to alter its relationship dynamic with the species being mimicked, to its own advantage. Among the many types of mimicry are Batesian and Müllerian, the first involving one-sided exploitation, the second providing mutual benefit. Batesian mimicry is an exploitative three-party interaction where one species, the mimic, has evolved to mimic another, the model, to deceive a third, the dupe. In terms of signalling theory, the mimic and model have evolved to send a signal; the dupe has evolved to receive it from the model. This is to the advantage of the mimic but to the detriment of both the model, whose protective signals are effectively weakened, and of the dupe, which is deprived of an edible prey. For example, a wasp is a strongly-defended model, which signals with its conspicuous black and yellow coloration that it is an unprofitable prey to predators such as birds which hunt by sight; many hoverflies are Batesian mimics of wasps, and any bird that avoids these hoverflies is a dupe. In contrast, Müllerian mimicry is mutually beneficial as all participants are both models and mimics. For example, different species of bumblebee mimic each other, with similar warning coloration in combinations of black, white, red, and yellow, and all of them benefit from the relationship. Cleaning symbiosis Cleaning symbiosis is an association between individuals of two species, where one (the cleaner) removes and eats parasites and other materials from the surface of the other (the client). It is putatively mutually beneficial, but biologists have long debated whether it is mutual selfishness, or simply exploitative. Cleaning symbiosis is well known among marine fish, where some small species of cleaner fish – notably wrasses, but also species in other genera – are specialized to feed almost exclusively by cleaning larger fish and other marine animals. In a supreme situation, the host species (fish or marine life) will display itself at a designated station deemed the "cleaning station". Cleaner fish play an essential role in the reduction of parasitism on marine animals. Some shark species participate in cleaning symbiosis, where cleaner fish remove ectoparasites from the body of the shark. A study by Raymond Keyes addresses the atypical behavior of a few shark species when exposed to cleaner fish. In this experiment, cleaner wrasse (Labroides dimidiatus) and various shark species were placed in a tank together and observed. The different shark species exhibited different responses and behaviors around the wrasse. For example, Atlantic and Pacific lemon sharks consistently react to the wrasse fish in a fascinating way. 
During the interaction, the shark remains passive and the wrasse swims to it. It begins to scan the shark's body, sometimes stopping to inspect specific areas. Commonly, the wrasse inspects the gills, labial regions, and skin. When the wrasse makes its way to the mouth of the shark, the shark often ceases breathing for up to two and a half minutes so that the fish is able to scan the mouth. Then, the fish passes further into the mouth to examine the gills, specifically the buccopharyngeal area, which typically holds the most parasites. When the shark begins to close its mouth, the wrasse finishes its examination and goes elsewhere. Male bull sharks exhibit slightly different behavior at cleaning stations: as the shark swims into a colony of wrasse fish, it drastically slows its speed to allow the cleaners to do their job. After approximately one minute, the shark returns to normal swimming speed. Role in evolution Symbiosis is increasingly recognized as an important selective force behind evolution; many species have a long history of interdependent co-evolution. Although symbiosis was once discounted as an anecdotal evolutionary phenomenon, evidence is now overwhelming that obligate or facultative associations among microorganisms and between microorganisms and multicellular hosts had crucial consequences in many landmark events in evolution and in the generation of phenotypic diversity and complex phenotypes able to colonise new environments. Mutualistic symbiosis can sometimes evolve from parasitism or commensalism. The relationship of fungi to plants, in the form of mycelium, evolved from parasitism and commensalism. Under certain conditions, species of fungi previously in a state of mutualism can turn parasitic on weak or dying plants. Likewise, the symbiotic relationship of clownfish and sea anemones emerged from a commensalist relationship. Hologenome development and evolution In this view, evolutionary change originates in changes to development, with variations within species selected for or against according to the symbionts involved. The hologenome theory treats the genomes of the holobiont (the host) and its symbionts together as a whole. Microbes live everywhere in and on every multicellular organism. Many organisms rely on their symbionts in order to develop properly; this is known as co-development. In cases of co-development, the symbionts send signals to their host which determine developmental processes. Co-development is commonly seen in both arthropods and vertebrates. Symbiogenesis One hypothesis for the origin of the nucleus in eukaryotes (plants, animals, fungi, and protists) is that it developed from a symbiogenesis between bacteria and archaea. It is hypothesized that the symbiosis originated when ancient archaea, similar to modern methanogenic archaea, invaded and lived within bacteria similar to modern myxobacteria, eventually forming the early nucleus. This theory is analogous to the accepted theory for the origin of eukaryotic mitochondria and chloroplasts, which are thought to have developed from a similar endosymbiotic relationship between proto-eukaryotes and aerobic bacteria. Evidence for this includes the fact that mitochondria and chloroplasts divide independently of the cell, and that these organelles have their own genome. The biologist Lynn Margulis, famous for her work on endosymbiosis, contended that symbiosis is a major driving force behind evolution. 
She considered Darwin's notion of evolution, driven by competition, to be incomplete and claimed that evolution is strongly based on co-operation, interaction, and mutual dependence among organisms. According to Margulis and her son Dorion Sagan, "Life did not take over the globe by combat, but by networking." Major examples of co-evolutionary relationships Mycorrhiza About 80% of vascular plants worldwide form symbiotic relationships with fungi, in particular in arbuscular mycorrhizas. Pollination Flowering plants and the animals that pollinate them have co-evolved. Many plants that are pollinated by insects (in entomophily), bats, or birds (in ornithophily) have highly specialized flowers modified to promote pollination by a specific pollinator that is correspondingly adapted. The first flowering plants in the fossil record had relatively simple flowers. Adaptive speciation quickly gave rise to many diverse groups of plants, and, at the same time, corresponding speciation occurred in certain insect groups. Some groups of plants developed nectar and large sticky pollen, while insects evolved more specialized morphologies to access and collect these rich food sources. In some taxa of plants and insects, the relationship has become dependent, where the plant species can only be pollinated by one species of insect. Acacia ants and acacias The acacia ant (Pseudomyrmex ferruginea) is an obligate plant ant that protects at least five species of "Acacia" (Vachellia) from preying insects and from other plants competing for sunlight, and the tree provides nourishment and shelter for the ant and its larvae. Seed dispersal Seed dispersal is the movement, spread or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their propagules, including both abiotic vectors such as the wind and living (biotic) vectors like birds. In order to attract animals, these plants evolved a set of morphological characters such as fruit colour, mass, and persistence correlated to particular seed dispersal agents. For example, plants may evolve conspicuous fruit colours to attract avian frugivores, and birds may learn to associate such colours with a food resource. Rhizobia Lichens See also Notes References Sources External links TED-Education video – Symbiosis: a surprising tale of species cooperation Symbiosis Ecology
Symbiosis
[ "Biology" ]
4,253
[ "Biological interactions", "Behavior", "Symbiosis", "Ecology" ]
39,666
https://en.wikipedia.org/wiki/Supernova%20remnant
A supernova remnant (SNR) is the structure resulting from the explosion of a star in a supernova. The supernova remnant is bounded by an expanding shock wave, and consists of ejected material expanding from the explosion, and the interstellar material it sweeps up and shocks along the way. There are two common routes to a supernova: either a massive star may run out of fuel, ceasing to generate fusion energy in its core, and collapsing inward under the force of its own gravity to form a neutron star or a black hole; or a white dwarf star may accrete material from a companion star until it reaches a critical mass and undergoes a thermonuclear explosion. In either case, the resulting supernova explosion expels much or all of the stellar material with velocities as much as 10% the speed of light (or approximately 30,000 km/s) and a strong shock wave forms ahead of the ejecta. That heats the upstream plasma up to temperatures well above millions of K. The shock continuously slows down over time as it sweeps up the ambient medium, but it can expand over hundreds or thousands of years and over tens of parsecs before its speed falls below the local sound speed. One of the best observed young supernova remnants was formed by SN 1987A, a supernova in the Large Magellanic Cloud that was observed in February 1987. Other well-known supernova remnants include the Crab Nebula; Tycho, the remnant of SN 1572, named after Tycho Brahe who recorded the brightness of its original explosion; and Kepler, the remnant of SN 1604, named after Johannes Kepler. The youngest known remnant in the Milky Way is G1.9+0.3, discovered in the Galactic Center. Stages An SNR passes through the following stages as it expands: Free expansion of the ejecta, until they sweep up their own weight in circumstellar or interstellar medium. This can last tens to a few hundred years depending on the density of the surrounding gas. Sweeping up of a shell of shocked circumstellar and interstellar gas. This begins the Sedov-Taylor phase, which can be well modeled by a self-similar analytic solution (see blast wave). Strong X-ray emission traces the strong shock waves and hot shocked gas. Cooling of the shell, to form a thin (< 1 pc), dense (1 to 100 million atoms per cubic metre) shell surrounding the hot (few million kelvin) interior. This is the pressure-driven snowplow phase. The shell can be clearly seen in optical emission from recombining ionized hydrogen and ionized oxygen atoms. Cooling of the interior. The dense shell continues to expand from its own momentum. This stage is best seen in the radio emission from neutral hydrogen atoms. Merging with the surrounding interstellar medium. When the supernova remnant slows to the speed of the random velocities in the surrounding medium, after roughly 30,000 years, it will merge into the general turbulent flow, contributing its remaining kinetic energy to the turbulence. Types of supernova remnant There are three types of supernova remnant: Shell-like, such as Cassiopeia A Composite, in which a shell contains a central pulsar wind nebula, such as G11.2-0.3 or G21.5-0.9. Mixed-morphology (also called "thermal composite") remnants, in which central thermal X-ray emission is seen, enclosed by a radio shell. The thermal X-rays are primarily from swept-up interstellar material, rather than supernova ejecta. Examples of this class include the SNRs W28 and W44. 
(Confusingly, W44 additionally contains a pulsar and pulsar wind nebula; so it is simultaneously both a "classic" composite and a thermal composite.) Remnants which could only be created by significantly higher ejection energies than a standard supernova are called hypernova remnants, after the high-energy hypernova explosion that is assumed to have created them. Origin of cosmic rays Supernova remnants are considered the major source of galactic cosmic rays. The connection between cosmic rays and supernovas was first suggested by Walter Baade and Fritz Zwicky in 1934. Vitaly Ginzburg and Sergei Syrovatskii in 1964 remarked that if the efficiency of cosmic ray acceleration in supernova remnants is about 10 percent, the cosmic ray losses of the Milky Way are compensated. This hypothesis is supported by a specific mechanism called "shock wave acceleration" based on Enrico Fermi's ideas, which is still under development. In 1949, Fermi proposed a model for the acceleration of cosmic rays through particle collisions with magnetic clouds in the interstellar medium. This process, known as the "Second Order Fermi Mechanism", increases particle energy during head-on collisions, resulting in a steady gain in energy. A later model to produce Fermi Acceleration was generated by a powerful shock front moving through space. Particles that repeatedly cross the front of the shock can gain significant increases in energy. This became known as the "First Order Fermi Mechanism". Supernova remnants can provide the energetic shock fronts required to generate ultra-high energy cosmic rays. Observation of the SN 1006 remnant in the X-ray has shown synchrotron emission consistent with it being a source of cosmic rays. However, for energies higher than about 1018 eV a different mechanism is required as supernova remnants cannot provide sufficient energy. It is still unclear whether supernova remnants accelerate cosmic rays up to PeV energies. The future telescope CTA will help to answer this question. See also References External links List of All Known Galactic and Extragalactic Supernovae on the Open Supernova Catalog (these are not supernova remnants yet) Galactic SNR Catalogue (D. A. Green, University of Cambridge) Chandra observations of supernova remnants: catalog, photo album, selected picks 2MASS images of Supernova Remnants NASA: Introduction to Supernova Remnants NASA's Imagine: Supernova Remnants Afterlife of a Supernova on UniverseToday.com Supernova remnant on arxiv.org Supernova Remnants, SEDS Remnants Nebulae ja:超新星#超新星残骸
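The Sedov-Taylor phase described in the Stages section above can be illustrated with a rough order-of-magnitude estimate. The following minimal sketch is not from the article: it assumes a canonical explosion energy of 10^51 erg, a uniform ambient medium of one hydrogen atom per cubic centimetre, and the standard self-similar blast-wave scaling R ≈ 1.15 (E t^2 / ρ)^(1/5).

# Rough Sedov-Taylor estimate for a supernova remnant's shock radius and speed.
# Illustrative sketch only: the explosion energy, the ambient density and the
# dimensionless constant 1.15 are assumed canonical values, not figures taken
# from the article above.

E = 1.0e51          # explosion energy in erg
n_H = 1.0           # ambient hydrogen number density, atoms per cm^3
m_H = 1.67e-24      # hydrogen mass in g
rho = n_H * m_H     # ambient mass density in g/cm^3
YEAR = 3.156e7      # seconds per year
PARSEC = 3.086e18   # centimetres per parsec

def sedov_radius_cm(t_seconds):
    """Self-similar Sedov-Taylor shock radius R = 1.15 * (E t^2 / rho)^(1/5)."""
    return 1.15 * (E * t_seconds**2 / rho) ** 0.2

def shock_speed_kms(t_seconds):
    """Shock velocity dR/dt = (2/5) R / t, converted to km/s."""
    return 0.4 * sedov_radius_cm(t_seconds) / t_seconds / 1.0e5

for age in (1_000, 10_000):  # years
    t = age * YEAR
    print(f"age {age:>6} yr: R ~ {sedov_radius_cm(t)/PARSEC:4.1f} pc, "
          f"v_shock ~ {shock_speed_kms(t):6.0f} km/s")

With these assumed inputs the shock reaches a few parsecs and several thousand km/s after a thousand years, and roughly ten parsecs at a few hundred km/s after ten thousand years, consistent with the qualitative picture above of a shock that decelerates as it sweeps up the ambient medium yet persists for millennia before slowing to the local sound speed.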
Supernova remnant
[ "Chemistry", "Astronomy" ]
1,287
[ "Supernovae", "Nebulae", "Astronomical events", "Explosions", "Astronomical objects" ]
39,674
https://en.wikipedia.org/wiki/Planetary%20nebula
A planetary nebula is a type of emission nebula consisting of an expanding, glowing shell of ionized gas ejected from red giant stars late in their lives. The term "planetary nebula" is a misnomer because they are unrelated to planets. The term originates from the planet-like round shape of these nebulae observed by astronomers through early telescopes. The first usage may have occurred during the 1780s with the English astronomer William Herschel who described these nebulae as resembling planets; however, as early as January 1779, the French astronomer Antoine Darquier de Pellepoix described in his observations of the Ring Nebula, "very dim but perfectly outlined; it is as large as Jupiter and resembles a fading planet". Though the modern interpretation is different, the old term is still used. All planetary nebulae form at the end of the life of a star of intermediate mass, about 1-8 solar masses. It is expected that the Sun will form a planetary nebula at the end of its life cycle. They are relatively short-lived phenomena, lasting perhaps a few tens of millennia, compared to considerably longer phases of stellar evolution. Once all of the red giant's atmosphere has been dissipated, energetic ultraviolet radiation from the exposed hot luminous core, called a planetary nebula nucleus (P.N.N.), ionizes the ejected material. Absorbed ultraviolet light then energizes the shell of nebulous gas around the central star, causing it to appear as a brightly coloured planetary nebula. Planetary nebulae probably play a crucial role in the chemical evolution of the Milky Way by expelling elements into the interstellar medium from stars where those elements were created. Planetary nebulae are observed in more distant galaxies, yielding useful information about their chemical abundances. Starting from the 1990s, Hubble Space Telescope images revealed that many planetary nebulae have extremely complex and varied morphologies. About one-fifth are roughly spherical, but the majority are not spherically symmetric. The mechanisms that produce such a wide variety of shapes and features are not yet well understood, but binary central stars, stellar winds and magnetic fields may play a role. Observations Discovery The first planetary nebula discovered (though not yet termed as such) was the Dumbbell Nebula in the constellation of Vulpecula. It was observed by Charles Messier on July 12, 1764 and listed as M27 in his catalogue of nebulous objects. To early observers with low-resolution telescopes, M27 and subsequently discovered planetary nebulae resembled the giant planets like Uranus. As early as January 1779, the French astronomer Antoine Darquier de Pellepoix described in his observations of the Ring Nebula, "a very dull nebula, but perfectly outlined; as large as Jupiter and looks like a fading planet". The nature of these objects remained unclear. In 1782, William Herschel, discoverer of Uranus, found the Saturn Nebula (NGC 7009) and described it as "A curious nebula, or what else to call it I do not know". He later described these objects as seeming to be planets "of the starry kind". As noted by Darquier before him, Herschel found that the disk resembled a planet but it was too faint to be one. In 1785, Herschel wrote to Jérôme Lalande: These are celestial bodies of which as yet we have no clear idea and which are perhaps of a type quite different from those that we are familiar with in the heavens. I have already found four that have a visible diameter of between 15 and 30 seconds. 
These bodies appear to have a disk that is rather like a planet, that is to say, of equal brightness all over, round or somewhat oval, and about as well defined in outline as the disk of the planets, of a light strong enough to be visible with an ordinary telescope of only one foot, yet they have only the appearance of a star of about ninth magnitude. He assigned these to Class IV of his catalogue of "nebulae", eventually listing 78 "planetary nebulae", most of which are in fact galaxies. Herschel used the term "planetary nebulae" for these objects. The origin of this term is not known. The label "planetary nebula" became ingrained in the terminology used by astronomers to categorize these types of nebulae, and is still in use by astronomers today. Spectra The nature of planetary nebulae remained unknown until the first spectroscopic observations were made in the mid-19th century. Using a prism to disperse their light, William Huggins was one of the earliest astronomers to study the optical spectra of astronomical objects. On August 29, 1864, Huggins was the first to analyze the spectrum of a planetary nebula when he observed the Cat's Eye Nebula. His observations of stars had shown that their spectra consisted of a continuum of radiation with many dark lines superimposed. He found that many nebulous objects such as the Andromeda Nebula (as it was then known) had spectra that were quite similar. However, when Huggins looked at the Cat's Eye Nebula, he found a very different spectrum. Rather than a strong continuum with absorption lines superimposed, the Cat's Eye Nebula and other similar objects showed a number of emission lines. The brightest of these was at a wavelength of 500.7 nanometres, which did not correspond with a line of any known element. At first, it was hypothesized that the line might be due to an unknown element, which was named nebulium. A similar idea had led to the discovery of helium through analysis of the Sun's spectrum in 1868. While helium was isolated on Earth soon after its discovery in the spectrum of the Sun, "nebulium" was not. In the early 20th century, Henry Norris Russell proposed that, rather than being a new element, the line at 500.7 nm was due to a familiar element in unfamiliar conditions. Physicists showed in the 1920s that in gas at extremely low densities, electrons can occupy excited metastable energy levels in atoms and ions that would otherwise be de-excited by collisions that would occur at higher densities. Electron transitions from these levels in nitrogen and oxygen ions (O+, O2+ (a.k.a. O III), and N+) give rise to the 500.7 nm emission line and others. These spectral lines, which can only be seen in very low-density gases, are called forbidden lines. Spectroscopic observations thus showed that nebulae were made of extremely rarefied gas. Central stars The central stars of planetary nebulae are very hot. Only when a star has exhausted most of its nuclear fuel can it collapse to a small size. Planetary nebulae are understood as a final stage of stellar evolution. Spectroscopic observations show that all planetary nebulae are expanding. This led to the idea that planetary nebulae were caused by a star's outer layers being thrown into space at the end of its life. Modern observations Towards the end of the 20th century, technological improvements helped to further the study of planetary nebulae. Space telescopes allowed astronomers to study light wavelengths outside those that the Earth's atmosphere transmits. 
The first ultraviolet observations of a planetary nebula (IC 2149) were performed from space with the Orion 2 Space Observatory (see Orion 1 and Orion 2 Space Observatories) on board the Soyuz 13 spacecraft in December 1973; in those observations, two-photon emission from nebulae was detected for the first time. Infrared and ultraviolet studies of planetary nebulae allowed much more accurate determinations of nebular temperatures, densities and elemental abundances. Charge-coupled device technology allowed much fainter spectral lines to be measured accurately than had previously been possible. The Hubble Space Telescope also showed that while many nebulae appear to have simple and regular structures when observed from the ground, the very high optical resolution achievable by telescopes above the Earth's atmosphere reveals extremely complex structures. Under the Morgan-Keenan spectral classification scheme, planetary nebulae are classified as Type-P, although this notation is seldom used in practice. Origins Stars greater than 8 solar masses (M⊙) will probably end their lives in dramatic supernova explosions, while planetary nebulae seemingly only occur at the end of the lives of intermediate- and low-mass stars of between 0.8 and 8.0 M⊙. Progenitor stars that form planetary nebulae spend most of their lifetimes converting their hydrogen into helium in their cores by nuclear fusion at about 15 million K. This generates energy in the core, which creates outward pressure that balances the crushing inward pressure of gravity. This state of equilibrium is known as the main sequence, which can last for tens of millions to billions of years, depending on the mass. When the hydrogen in the core starts to run out, nuclear fusion generates less energy and gravity starts compressing the core, causing a rise in temperature to about 100 million K. Such high core temperatures then make the star's cooler outer layers expand to create much larger red giant stars. This end phase causes a dramatic rise in stellar luminosity, where the released energy is distributed over a much larger surface area, which in fact causes the average surface temperature to be lower. In stellar evolution terms, stars undergoing such increases in luminosity are known as asymptotic giant branch (AGB) stars. During this phase, the star can lose 50–70% of its total mass via its stellar wind. For the more massive asymptotic giant branch stars that form planetary nebulae, whose progenitors exceed about 0.6 M⊙, their cores will continue to contract. When temperatures reach about 100 million K, the available helium nuclei fuse into carbon and oxygen, so that the star again resumes radiating energy, temporarily stopping the core's contraction. This new helium-burning phase (fusion of helium nuclei) forms a growing inner core of inert carbon and oxygen. Above it is a thin helium-burning shell, surrounded in turn by a hydrogen-burning shell. However, this new phase lasts only 20,000 years or so, a very short period compared to the entire lifetime of the star. The venting of atmosphere continues unabated into interstellar space, but when the outer surface of the exposed core reaches temperatures exceeding about 30,000 K, there are enough emitted ultraviolet photons to ionize the ejected atmosphere, causing the gas to shine as a planetary nebula. Lifetime After a star passes through the asymptotic giant branch (AGB) phase, the short planetary nebula phase of stellar evolution begins as gases blow away from the central star at speeds of a few kilometers per second. 
The central star is the remnant of its AGB progenitor, an electron-degenerate carbon-oxygen core that has lost most of its hydrogen envelope due to mass loss on the AGB. As the gases expand, the central star undergoes a two-stage evolution, first growing hotter as it continues to contract and hydrogen fusion reactions occur in the shell around the core, and then slowly cooling when the hydrogen shell is exhausted through fusion and mass loss. In the second phase, it radiates away its energy and fusion reactions cease, as the central star is not heavy enough to generate the core temperatures required for carbon and oxygen to fuse. During the first phase, the central star maintains constant luminosity, while at the same time it grows ever hotter, eventually reaching temperatures around 100,000 K. In the second phase, it cools so much that it does not give off enough ultraviolet radiation to ionize the increasingly distant gas cloud. The star becomes a white dwarf, and the expanding gas cloud becomes invisible to us, ending the planetary nebula phase of evolution. For a typical planetary nebula, about 10,000 years pass between its formation and recombination of the resulting plasma. Role in galactic enrichment Planetary nebulae may play a very important role in galactic evolution. Newly born stars consist almost entirely of hydrogen and helium, but as stars evolve through the asymptotic giant branch phase, they create heavier elements via nuclear fusion which are eventually expelled by strong stellar winds. Planetary nebulae usually contain larger proportions of elements such as carbon, nitrogen and oxygen, and these are recycled into the interstellar medium via these powerful winds. In this way, planetary nebulae greatly enrich the Milky Way and its nebulae with these heavier elements – collectively known by astronomers as metals and specifically referred to by the metallicity parameter Z. Subsequent generations of stars formed from such nebulae also tend to have higher metallicities. Although these metals are present in stars in relatively tiny amounts, they have marked effects on stellar evolution and fusion reactions. When stars formed earlier in the universe, they theoretically contained smaller quantities of heavier elements. Known examples are the metal-poor Population II stars. (See Stellar population.) Identification of stellar metallicity content is found by spectroscopy. Characteristics Physical characteristics A typical planetary nebula is roughly one light year across, and consists of extremely rarefied gas, with a density generally from 100 to 10,000 particles per cubic centimetre. (The Earth's atmosphere, by comparison, contains about 2.5 × 10^19 particles per cubic centimetre.) Young planetary nebulae have the highest densities, sometimes as high as 10^6 particles per cubic centimetre. As nebulae age, their expansion causes their density to decrease. The masses of planetary nebulae range from 0.1 to 1 solar masses. Radiation from the central star heats the gases to temperatures of about 10,000 K. The gas temperature in central regions is usually much higher than at the periphery, reaching 16,000–25,000 K. The volume in the vicinity of the central star is often filled with a very hot (coronal) gas having a temperature of about 1,000,000 K. This gas originates from the surface of the central star in the form of the fast stellar wind. Nebulae may be described as matter bounded or radiation bounded. In the former case, there is not enough matter in the nebula to absorb all the UV photons emitted by the star, and the visible nebula is fully ionized. 
In the latter case, there are not enough UV photons being emitted by the central star to ionize all the surrounding gas, and an ionization front propagates outward into the circumstellar envelope of neutral atoms. Numbers and distribution About 3000 planetary nebulae are now known to exist in our galaxy, out of 200 billion stars. Their very short lifetime compared to total stellar lifetime accounts for their rarity. They are found mostly near the plane of the Milky Way, with the greatest concentration near the Galactic Center. Morphology Only about 20% of planetary nebulae are spherically symmetric (for example, see Abell 39). A wide variety of shapes exist with some very complex forms seen. Planetary nebulae are classified by different authors into: stellar, disk, ring, irregular, helical, bipolar, quadrupolar, and other types, although the majority of them belong to just three types: spherical, elliptical and bipolar. Bipolar nebulae are concentrated in the galactic plane, probably produced by relatively young massive progenitor stars; and bipolars in the galactic bulge appear to prefer orienting their orbital axes parallel to the galactic plane. On the other hand, spherical nebulae are probably produced by old stars similar to the Sun. The huge variety of the shapes is partially the projection effect—the same nebula when viewed under different angles will appear different. Nevertheless, the reason for the huge variety of physical shapes is not fully understood. Gravitational interactions with companion stars if the central stars are binary stars may be one cause. Another possibility is that planets disrupt the flow of material away from the star as the nebula forms. It has been determined that the more massive stars produce more irregularly shaped nebulae. In January 2005, astronomers announced the first detection of magnetic fields around the central stars of two planetary nebulae, and hypothesized that the fields might be partly or wholly responsible for their remarkable shapes. Membership in clusters Planetary nebulae have been detected as members in four Galactic globular clusters: Messier 15, Messier 22, NGC 6441 and Palomar 6. Evidence also points to the potential discovery of planetary nebulae in globular clusters in the galaxy M31. However, there is currently only one case of a planetary nebula discovered in an open cluster that is agreed upon by independent researchers. That case pertains to the planetary nebula PHR 1315-6555 and the open cluster Andrews-Lindsay 1. Indeed, through cluster membership, PHR 1315-6555 possesses among the most precise distances established for a planetary nebula (i.e., a 4% distance solution). The cases of NGC 2818 and NGC 2348 in Messier 46, exhibit mismatched velocities between the planetary nebulae and the clusters, which indicates they are line-of-sight coincidences. A subsample of tentative cases that may potentially be cluster/PN pairs includes Abell 8 and Bica 6, and He 2-86 and NGC 4463. Theoretical models predict that planetary nebulae can form from main-sequence stars of between one and eight solar masses, which puts the progenitor star's age at greater than 40 million years. Although there are a few hundred known open clusters within that age range, a variety of reasons limit the chances of finding a planetary nebula within. For one reason, the planetary nebula phase for more massive stars is on the order of millennia, which is a blink of the eye in astronomic terms. 
Also, partly because of their small total mass, open clusters have relatively poor gravitational cohesion and tend to disperse after a relatively short time, typically from 100 to 600 million years. Current issues in planetary nebula studies The distances to planetary nebulae are generally poorly determined, but the Gaia mission is now measuring direct parallactic distances between their central stars and neighboring stars. It is also possible to determine distances to nearby planetary nebula by measuring their expansion rates. High resolution observations taken several years apart will show the expansion of the nebula perpendicular to the line of sight, while spectroscopic observations of the Doppler shift will reveal the velocity of expansion in the line of sight. Comparing the angular expansion with the derived velocity of expansion will reveal the distance to the nebula. The issue of how such a diverse range of nebular shapes can be produced is a debatable topic. It is theorised that interactions between material moving away from the star at different speeds gives rise to most observed shapes. However, some astronomers postulate that close binary central stars might be responsible for the more complex and extreme planetary nebulae. Several have been shown to exhibit strong magnetic fields, and their interactions with ionized gas could explain some planetary nebulae shapes. There are two main methods of determining metal abundances in nebulae. These rely on recombination lines and collisionally excited lines. Large discrepancies are sometimes seen between the results derived from the two methods. This may be explained by the presence of small temperature fluctuations within planetary nebulae. The discrepancies may be too large to be caused by temperature effects, and some hypothesize the existence of cold knots containing very little hydrogen to explain the observations. However, such knots have yet to be observed. See also Asymptotic giant branch Cosmic distance ladder Fast Low-Ionization Emission Region Nova remnant PG 1159 star (predegenerates) Protoplanetary nebula Supernova remnant White dwarf List of planetary nebulae References Citations Cited sources (Chapter 1 can be downloaded here.) Further reading External links Entry in the Encyclopedia of Astrobiology, Astronomy, and Spaceflight Press release on recent observations of the Cat's Eye Nebula Planetary Nebulae, SEDS Messier Pages The first detection of magnetic fields in the central stars of four planetary nebulae Planetary Nebulae—Information and amateur observations Planetary nebula on arxiv.org Stellar evolution Articles containing video clips
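The expansion-parallax method sketched in the distances paragraph above (comparing the angular growth of the nebula on the sky with the spectroscopic expansion velocity along the line of sight) amounts to a one-line calculation. The sketch below is not from the article: the measured values are invented for illustration, and it assumes the nebula expands at the same speed across and along the line of sight.

# Expansion-parallax distance to a planetary nebula: the angular radius grows
# by d_theta (arcseconds) over baseline_years, while the Doppler shift gives
# the expansion velocity v_exp along the line of sight. For roughly spherical
# expansion, distance d = v_exp / (angular expansion rate).
# All input numbers below are invented for illustration only.

KM_PER_PARSEC = 3.086e13
ARCSEC_PER_RADIAN = 206265.0
SECONDS_PER_YEAR = 3.156e7

v_exp_km_s = 25.0        # expansion velocity from Doppler-shifted lines, km/s
d_theta_arcsec = 0.050   # measured growth of the angular radius, arcseconds
baseline_years = 10.0    # time between the two high-resolution images

angular_rate = d_theta_arcsec / baseline_years / ARCSEC_PER_RADIAN  # rad/yr
transverse_km_per_year = v_exp_km_s * SECONDS_PER_YEAR              # km/yr
distance_pc = transverse_km_per_year / angular_rate / KM_PER_PARSEC

print(f"estimated distance ~ {distance_pc:.0f} pc")

With these made-up inputs the estimate comes out near one kiloparsec; the main systematic uncertainty is the assumption that the tangential expansion seen in the images equals the line-of-sight velocity from the spectra.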
Planetary nebula
[ "Physics" ]
4,037
[ "Astrophysics", "Stellar evolution" ]
39,739
https://en.wikipedia.org/wiki/Confocal
In geometry, confocal means having the same foci: confocal conic sections. For an optical cavity consisting of two mirrors, confocal means that they share their foci. If they are identical mirrors, their radius of curvature, Rmirror, equals L, where L is the distance between the mirrors. In conic sections, it is said of two ellipses, two hyperbolas, or an ellipse and a hyperbola which share both foci with each other. If an ellipse and a hyperbola are confocal, they are perpendicular to each other. In optics, it means that one focus or image point of one lens is the same as one focus of the next lens. See also Confocal laser scanning microscopy Confocal microscopy Elementary geometry Optics
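As a small worked illustration of the cavity condition stated above (not part of the original article), the focal length of a spherical mirror is R/2, so two identical mirrors separated by L = Rmirror place both foci at the cavity midpoint; in the standard resonator description this corresponds to stability parameters g = 1 − L/R equal to zero, a detail assumed here from general optics rather than taken from the text.

# Confocal two-mirror cavity: identical mirrors with radius of curvature R,
# separated by L = R. Each mirror's focal length is f = R/2, so both foci
# coincide at the cavity midpoint, and the resonator stability parameters
# g1 = g2 = 1 - L/R are exactly zero (a marginally stable geometry).
# Purely illustrative numbers; nothing here is taken from the article.

R_mirror = 0.5           # radius of curvature of each mirror, metres
L = R_mirror             # confocal condition: mirror spacing equals R

f = R_mirror / 2.0                   # focal length of a spherical mirror
focus_1 = f                          # focus of mirror 1, measured from mirror 1
focus_2 = L - f                      # focus of mirror 2, measured from mirror 1
g1 = g2 = 1.0 - L / R_mirror         # resonator stability parameters

print(f"foci coincide at {focus_1} m = {focus_2} m from mirror 1")
print(f"g1 * g2 = {g1 * g2}")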
Confocal
[ "Physics", "Chemistry", "Mathematics" ]
170
[ "Applied and interdisciplinary physics", "Optics", "Elementary mathematics", "Elementary geometry", " molecular", "Atomic", " and optical physics" ]
39,774
https://en.wikipedia.org/wiki/Optical%20rotation
Optical rotation, also known as polarization rotation or circular birefringence, is the rotation of the orientation of the plane of polarization about the optical axis of linearly polarized light as it travels through certain materials. Circular birefringence and circular dichroism are the manifestations of optical activity. Optical activity occurs only in chiral materials, those lacking microscopic mirror symmetry. Unlike other sources of birefringence which alter a beam's state of polarization, optical activity can be observed in fluids. This can include gases or solutions of chiral molecules such as sugars, molecules with helical secondary structure such as some proteins, and also chiral liquid crystals. It can also be observed in chiral solids such as certain crystals with a rotation between adjacent crystal planes (such as quartz) or metamaterials. When looking at the source of light, the rotation of the plane of polarization may be either to the right (dextrorotatory or dextrorotary — d-rotary, represented by (+), clockwise), or to the left (levorotatory or levorotary — l-rotary, represented by (−), counter-clockwise) depending on which stereoisomer is dominant. For instance, sucrose and camphor are d-rotary whereas cholesterol is l-rotary. For a given substance, the angle by which the polarization of light of a specified wavelength is rotated is proportional to the path length through the material and (for a solution) proportional to its concentration. Optical activity is measured using a polarized source and polarimeter. This is a tool particularly used in the sugar industry to measure the sugar concentration of syrup, and generally in chemistry to measure the concentration or enantiomeric ratio of chiral molecules in solution. Modulation of a liquid crystal's optical activity, viewed between two sheet polarizers, is the principle of operation of liquid-crystal displays (used in most modern televisions and computer monitors). Forms Dextrorotation and laevorotation (also spelled levorotation) in chemistry and physics are the optical rotation of plane-polarized light. From the point of view of the observer, dextrorotation refers to clockwise or right-handed rotation, and laevorotation refers to counterclockwise or left-handed rotation. A chemical compound that causes dextrorotation is dextrorotatory or dextrorotary, while a compound that causes laevorotation is laevorotatory or laevorotary. Compounds with these properties consist of chiral molecules and are said to have optical activity. If a chiral molecule is dextrorotary, its enantiomer (geometric mirror image) will be laevorotary, and vice versa. Enantiomers rotate plane-polarized light the same number of degrees, but in opposite directions. Chirality prefixes A compound may be labeled as dextrorotary by using the "(+)-" or "d-" prefix. Likewise, a levorotary compound may be labeled using the "(−)-" or "l-" prefix. The lowercase "d-" and "l-" prefixes are obsolete, and are distinct from the SMALL CAPS "D-" and "L-" prefixes. The "D-" and "L-" prefixes are used to specify the enantiomer of chiral organic compounds in biochemistry and are based on the compound's absolute configuration relative to (+)-glyceraldehyde, which is the D-form by definition. The prefix used to indicate absolute configuration is not directly related to the (+) or (−) prefix used to indicate optical rotation in the same molecule. 
For example, nine of the nineteen L-amino acids naturally occurring in proteins are, despite the L- prefix, actually dextrorotary (at a wavelength of 589 nm), and D-fructose is sometimes called "levulose" because it is levorotary. The D- and L- prefixes describe the molecule as a whole, as do the (+) and (−) prefixes for optical rotation. In contrast, the (R)- and (S)- prefixes from the Cahn–Ingold–Prelog priority rules characterize the absolute configuration of each specific chiral stereocenter with the molecule, rather than a property of the molecule as a whole. A molecule having exactly one chiral stereocenter (usually an asymmetric carbon atom) can be labeled (R) or (S), but a molecule having multiple stereocenters needs more than one label. For example, the essential amino acid L-threonine contains two chiral stereocenters and is written (2S,3S)-threonine. There is no strict relationship between the R/S, the D/L, and (+)/(−) designations, although some correlations exist. For example, of the naturally occurring amino acids, all are L, and most are (S). For some molecules the (R)-enantiomer is the dextrorotary (+) enantiomer, and in other cases it is the levorotary (−) enantiomer. The relationship must be determined on a case-by-case basis with experimental measurements or detailed computer modeling. History The rotation of the orientation of linearly polarized light was first observed in 1811 in quartz by French physicist François Arago. In 1820, the English astronomer Sir John F.W. Herschel discovered that different individual quartz crystals, whose crystalline structures are mirror images of each other (see illustration), rotate linear polarization by equal amounts but in opposite directions. Jean Baptiste Biot also observed the rotation of the axis of polarization in certain liquids and vapors of organic substances such as turpentine. In 1822, Augustin-Jean Fresnel found that optical rotation could be explained as a species of birefringence: whereas previously known cases of birefringence were due to the different speeds of light polarized in two perpendicular planes, optical rotation was due to the different speeds of right-hand and left-hand circularly polarized light. Simple polarimeters have been used since this time to measure the concentrations of simple sugars, such as glucose, in solution. In fact one name for D-glucose (the biological isomer), is dextrose, referring to the fact that it causes linearly polarized light to rotate to the right or dexter side. In a similar manner, levulose, more commonly known as fructose, causes the plane of polarization to rotate to the left. Fructose is even more strongly levorotatory than glucose is dextrorotatory. Invert sugar syrup, commercially formed by the hydrolysis of sucrose syrup to a mixture of the component simple sugars, fructose, and glucose, gets its name from the fact that the conversion causes the direction of rotation to "invert" from right to left. In 1849, Louis Pasteur resolved a problem concerning the nature of tartaric acid. A solution of this compound derived from living things (to be specific, wine lees) rotates the plane of polarization of light passing through it, but tartaric acid derived by chemical synthesis has no such effect, even though its reactions are identical and its elemental composition is the same. Pasteur noticed that crystals of this compound come in two asymmetric forms that are mirror images of one another. 
Sorting the crystals by hand gave two forms of the compound: Solutions of one form rotate polarized light clockwise, while the other form rotate light counterclockwise. An equal mix of the two has no polarizing effect on light. Pasteur deduced that the molecule in question is asymmetric and could exist in two different forms that resemble one another as would left- and right-hand gloves, and that the organic form of the compound consists of purely the one type. In 1874, Jacobus Henricus van 't Hoff and Joseph Achille Le Bel independently proposed that this phenomenon of optical activity in carbon compounds could be explained by assuming that the 4 saturated chemical bonds between carbon atoms and their neighbors are directed towards the corners of a regular tetrahedron. If the 4 neighbors are all different, then there are two possible orderings of the neighbors around the tetrahedron, which will be mirror images of each other. This led to a better understanding of the three-dimensional nature of molecules. In 1898, Jagadish Chandra Bose described the ability of twisted artificial structures to rotate the polarization of microwaves. In 1914, Karl F. Lindman showed the same effect for an artificial composite consisting of randomly-dispersed left- or right-handed wire helices in cotton. Since the early 21st century, the development of artificial materials has led to the prediction and realization of chiral metamaterials with optical activity exceeding that of natural media by orders of magnitude in the optical part of the spectrum. Extrinsic chirality associated with oblique illumination of metasurfaces lacking two-fold rotational symmetry has been observed to lead to large linear optical activity in transmission and reflection, as well as nonlinear optical activity exceeding that of lithium iodate by 30 million times. In 1945, Charles William Bunn predicted optical activity of achiral structures, if the wave's propagation direction and the achiral structure form an experimental arrangement that is different from its mirror image. Such optical activity due to extrinsic chirality was observed in the 1960s in liquid crystals. In 1950, Sergey Vavilov predicted optical activity that depends on the intensity of light and the effect of nonlinear optical activity was observed in 1979 in lithium iodate crystals. Optical activity is normally observed for transmitted light. However, in 1988, M. P. Silverman discovered that polarization rotation can also occur for light reflected from chiral substances. Shortly after, it was observed that chiral media can also reflect left-handed and right-handed circularly polarized waves with different efficiencies. These phenomena of specular circular birefringence and specular circular dichroism are jointly known as specular optical activity. Specular optical activity is very weak in natural materials. Theory Optical activity occurs due to molecules dissolved in a fluid or due to the fluid itself only if the molecules are one of two (or more) stereoisomers; this is known as an enantiomer. The structure of such a molecule is such that it is not identical to its mirror image (which would be that of a different stereoisomer, or the "opposite enantiomer"). In mathematics, this property is also known as chirality. For instance, a metal rod is not chiral, since its appearance in a mirror is not distinct from itself. 
However a screw or light bulb base (or any sort of helix) is chiral; an ordinary right-handed screw thread, viewed in a mirror, would appear as a left-handed screw (very uncommon) which could not possibly screw into an ordinary (right-handed) nut. A human viewed in a mirror would have their heart on the right side, clear evidence of chirality, whereas the mirror reflection of a doll might well be indistinguishable from the doll itself. In order to display optical activity, a fluid must contain only one, or a preponderance of one, stereoisomer. If two enantiomers are present in equal proportions, then their effects cancel out and no optical activity is observed; this is termed a racemic mixture. But when there is an enantiomeric excess, more of one enantiomer than the other, the cancellation is incomplete and optical activity is observed. Many naturally occurring molecules are present as only one enantiomer (such as many sugars). Chiral molecules produced within the fields of organic chemistry or inorganic chemistry are racemic unless a chiral reagent was employed in the same reaction. At the fundamental level, polarization rotation in an optically active medium is caused by circular birefringence, and can best be understood in that way. Whereas linear birefringence in a crystal involves a small difference in the phase velocity of light of two different linear polarizations, circular birefringence implies a small difference in the velocities between right and left-handed circular polarizations. Think of one enantiomer in a solution as a large number of little helices (or screws), all right-handed, but in random orientations. Birefringence of this sort is possible even in a fluid because the handedness of the helices is not dependent on their orientation: even when the direction of one helix is reversed, it still appears right handed. And circularly polarized light itself is chiral: as the wave proceeds in one direction the electric (and magnetic) fields composing it are rotating clockwise (or counterclockwise for the opposite circular polarization), tracing out a right (or left) handed screw pattern in space. In addition to the bulk refractive index which substantially lowers the phase velocity of light in any dielectric (transparent) material compared to the speed of light (in vacuum), there is an additional interaction between the chirality of the wave and the chirality of the molecules. Where their chiralities are the same, there will be a small additional effect on the wave's velocity, but the opposite circular polarization will experience an opposite small effect as its chirality is opposite that of the molecules. Unlike linear birefringence, however, natural optical rotation (in the absence of a magnetic field) cannot be explained in terms of a local material permittivity tensor (i.e., a charge response that only depends on the local electric field vector), as symmetry considerations forbid this. Rather, circular birefringence only appears when considering nonlocality of the material response, a phenomenon known as spatial dispersion. Nonlocality means that electric fields in one location of the material drive currents in another location of the material. Light travels at a finite speed, and even though it is much faster than the electrons, it makes a difference whether the charge response naturally wants to travel along with the electromagnetic wavefront, or opposite to it. 
Spatial dispersion means that light travelling in different directions (different wavevectors) sees a slightly different permittivity tensor. Natural optical rotation requires a special material, but it also relies on the fact that the wavevector of light is nonzero, and a nonzero wavevector bypasses the symmetry restrictions on the local (zero-wavevector) response. However, there is still reversal symmetry, which is why the direction of natural optical rotation must be 'reversed' when the direction of the light is reversed, in contrast to magnetic Faraday rotation. All optical phenomena have some nonlocality/wavevector influence but it is usually negligible; natural optical rotation, rather uniquely, absolutely requires it. The phase velocity of light in a medium is commonly expressed using the index of refraction n, defined as the speed of light (in free space) divided by its speed in the medium. The difference in the refractive indices between the two circular polarizations quantifies the strength of the circular birefringence (polarization rotation),

\Delta n = n_{RHC} - n_{LHC}.

While Δn is small in natural materials, examples of giant circular birefringence resulting in a negative refractive index for one circular polarization have been reported for chiral metamaterials. The familiar rotation of the axis of linear polarization relies on the understanding that a linearly polarized wave can as well be described as the superposition (addition) of a left and right circularly polarized wave in equal proportion. The phase difference between these two waves is dependent on the orientation of the linear polarization, which we'll call θ, and their electric fields have a relative phase difference of 2θ, which then add to produce linear polarization:

\mathbf{E}_{\theta} = \frac{1}{\sqrt{2}} \left( \mathbf{E}_{R} \, e^{i\theta} + \mathbf{E}_{L} \, e^{-i\theta} \right),

where E_θ is the electric field of the net wave, while E_R and E_L are the two circularly polarized basis functions (having zero phase difference). Assuming propagation in the +z direction, we could write E_R and E_L in terms of their x and y components as follows:

\mathbf{E}_{R} = \frac{1}{\sqrt{2}} \left( \hat{\mathbf{x}} - i \hat{\mathbf{y}} \right), \qquad \mathbf{E}_{L} = \frac{1}{\sqrt{2}} \left( \hat{\mathbf{x}} + i \hat{\mathbf{y}} \right),

where x̂ and ŷ are unit vectors, and i is the imaginary unit, in this case representing the 90-degree phase shift between the x and y components that we have decomposed each circular polarization into. As usual when dealing with phasor notation, it is understood that such quantities are to be multiplied by e^{-iωt} and then the actual electric field at any instant is given by the real part of that product. Substituting these expressions for E_R and E_L into the equation for E_θ we obtain

\mathbf{E}_{\theta} = \frac{1}{2} \left[ \left( e^{i\theta} + e^{-i\theta} \right) \hat{\mathbf{x}} - i \left( e^{i\theta} - e^{-i\theta} \right) \hat{\mathbf{y}} \right] = \cos\theta \, \hat{\mathbf{x}} + \sin\theta \, \hat{\mathbf{y}}.

The last equation shows that the resulting vector has the x and y components in phase and oriented exactly in the θ direction, as we had intended, justifying the representation of any linearly polarized state at angle θ as the superposition of right and left circularly polarized components with a relative phase difference of 2θ. Now let us assume transmission through an optically active material which induces an additional phase difference between the right and left circularly polarized waves of Δφ. Let us call E_out the result of passing the original wave linearly polarized at angle θ through this medium. This will apply additional phase factors of e^{-iΔφ/2} and e^{+iΔφ/2} to the right and left circularly polarized components of E_θ:

\mathbf{E}_{out} = \frac{1}{\sqrt{2}} \left( \mathbf{E}_{R} \, e^{i\theta} e^{-i\Delta\phi/2} + \mathbf{E}_{L} \, e^{-i\theta} e^{+i\Delta\phi/2} \right).

Using similar math as above, we find

\mathbf{E}_{out} = \cos\!\left( \theta - \frac{\Delta\phi}{2} \right) \hat{\mathbf{x}} + \sin\!\left( \theta - \frac{\Delta\phi}{2} \right) \hat{\mathbf{y}},

describing a wave linearly polarized at angle θ − Δφ/2, thus rotated by Δφ/2 relative to the incoming wave. We defined above the difference in the refractive indices for right and left circularly polarized waves of Δn. Considering propagation through a length L in such a material, there will be an additional phase difference induced between them of Δφ (as we used above) given by

\Delta\phi = \frac{2\pi \, \Delta n \, L}{\lambda},

where λ is the wavelength of the light (in vacuum). This will cause a rotation of the linear axis of polarization by Δθ = π Δn L / λ, as we have shown. In general, the refractive index depends on wavelength (see dispersion) and the differential refractive index will also be wavelength dependent. The resulting variation in rotation with the wavelength of the light is called optical rotatory dispersion (ORD). ORD spectra and circular dichroism spectra are related through the Kramers–Kronig relations. Complete knowledge of one spectrum allows the calculation of the other. So we find that the degree of rotation depends on the color of the light (the yellow sodium D line near 589 nm wavelength is commonly used for measurements) and is directly proportional to the path length through the substance and the amount of circular birefringence of the material which, for a solution, may be computed from the substance's specific rotation and its concentration in solution. Although optical activity is normally thought of as a property of fluids, particularly aqueous solutions, it has also been observed in crystals such as quartz (SiO2). Although quartz has a substantial linear birefringence, that effect is cancelled when propagation is along the optic axis. In that case, rotation of the plane of polarization is observed due to the relative rotation between crystal planes, thus making the crystal formally chiral as we have defined it above. The rotation of the crystal planes can be right or left-handed, again producing opposite optical activities. On the other hand, amorphous forms of silica such as fused quartz, like a racemic mixture of chiral molecules, have no net optical activity since one or the other crystal structure does not dominate the substance's internal molecular structure. Applications For a pure substance in solution, if the color and path length are fixed and the specific rotation is known, the observed rotation can be used to calculate the concentration. This usage makes a polarimeter a tool of great importance to those trading in or using sugar syrups in bulk. Comparison to the Faraday effect Rotation of light's plane of polarization may also occur through the Faraday effect, which involves a static magnetic field. However, this is a distinct phenomenon and is not classified as "optical activity". Optical activity is reciprocal, i.e. it is the same for opposite directions of wave propagation through an optically active medium, for example, clockwise polarization rotation from the point of view of an observer. In the case of optically active isotropic media, the rotation is the same for any direction of wave propagation. In contrast, the Faraday effect is non-reciprocal, i.e. opposite directions of wave propagation through a Faraday medium will result in clockwise and anti-clockwise polarization rotation from the point of view of an observer. Faraday rotation depends on the propagation direction relative to that of the applied magnetic field. All compounds can exhibit polarization rotation in the presence of an applied magnetic field, provided that (a component of) the magnetic field is oriented in the direction of light propagation. The Faraday effect is one of the first discoveries of the relationship between light and electromagnetic effects. 
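As a short numerical illustration of the relations above (the rotation produced by circular birefringence, and the polarimetric concentration measurement mentioned under Applications), the sketch below uses assumed example values: the refractive-index difference, path lengths and the measured rotation are invented, and the specific rotation of sucrose (+66.5 deg·mL·g⁻¹·dm⁻¹) is a commonly quoted literature figure rather than a number from this article.

import math

# 1) Rotation angle from circular birefringence:
#    delta_theta = pi * delta_n * L / lambda  (in radians), as derived above.
delta_n = 1.0e-7          # assumed refractive-index difference n_RHC - n_LHC
path_m = 0.10             # 10 cm sample path length
wavelength_m = 589e-9     # sodium D line
rotation_rad = math.pi * delta_n * path_m / wavelength_m
print(f"rotation = {math.degrees(rotation_rad):.1f} degrees")

# 2) Polarimetry (Biot's law): observed rotation alpha = [alpha] * l * c,
#    so concentration c = alpha / ([alpha] * l).
#    [alpha] is the specific rotation in deg*mL/(g*dm), l the path in dm,
#    and c the concentration in g/mL. Values below are illustrative.
specific_rotation = 66.5  # commonly quoted value for sucrose at 589 nm
path_dm = 1.0             # 1 dm (10 cm) polarimeter tube
alpha_observed = 13.3     # measured rotation in degrees (made-up reading)
concentration_g_per_ml = alpha_observed / (specific_rotation * path_dm)
print(f"sucrose concentration ~ {concentration_g_per_ml:.3f} g/mL")

The first calculation shows how even a refractive-index difference of one part in ten million produces a rotation of a few degrees over a 10 cm path; the second is the routine inversion used in the sugar-industry application described above.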
See also Cryptochirality Specific rotation Circular dichroism Birefringence Geometric phase Polarization Chirality (chemistry) Chirality (electromagnetism) Polarization rotator Hyper–Rayleigh scattering Raman optical activity (ROA) References Further reading Eugene Hecht, Optics, 3rd ed., Addison-Wesley, 1998, Akhlesh Lakhtakia, Beltrami Fields in Chiral Media , World Scientific, Singapore, 1994 A step by step tutorial on Optical Rotation Morrison. Robert. T, and Boyd. Robert. N, Organic Chemistry (6th ed.), Prentice-Hall Inc (1992). Polarization (waves) Stereochemistry
Optical rotation
[ "Physics", "Chemistry" ]
4,467
[ "Stereochemistry", "Astrophysics", "Space", "nan", "Spacetime", "Polarization (waves)" ]
39,846
https://en.wikipedia.org/wiki/Outboard%20motor
An outboard motor is a propulsion system for boats, consisting of a self-contained unit that includes engine, gearbox and propeller or jet drive, designed to be affixed to the outside of the transom. They are the most common motorised method of propelling small watercraft. As well as providing propulsion, outboards provide steering control, as they are designed to pivot over their mountings and thus control the direction of thrust. The skeg also acts as a rudder when the engine is not running. Unlike inboard motors, outboard motors can be easily removed for storage or repairs. In order to eliminate the chances of hitting bottom with an outboard motor, the motor can be tilted up to an elevated position either electronically or manually. This helps when traveling through shallow waters where there may be debris that could potentially damage the motor as well as the propeller. If the electric motor required to move the pistons which raise or lower the engine is malfunctioning, every outboard motor is equipped with a manual piston release which will allow the operator to drop the motor down to its lowest setting. Advantages and disadvantages Large ships, boats and yachts will inevitably have inboard engines. Medium size vessels may have either inboards or outboards, and small vessels rarely have inboard motors. If one has a choice, these factors should be noted: Inboard engines are almost invariable diesel, allowing ruggedness, reliability and fuel economy. The very few outboards that are diesels tend to be large heavy items, suitable for workboats and very large RIBs. Diesel outboards are rarely found on leisure craft. Outboards may be easily removed from the vessel for safe-keeping and servicing. They are also vulnerable to theft (a risk rarely suffered by inboard engines). Outboards are cheaper and lighter than inboards. They are often fitted to cruising yachts. Cruising catamarans up to around 10 metres LOA frequently have a petrol longshaft engine with a propeller that is larger and slower-turning than other types. Catamarans that have an engine for each hull (to aid manoeverability) tend to have twin inboards, as twin outboards might interfere with rudder arrangements. While inboards may be mounted in a optimum position for balance, outboards must be mounted on (or shortly ahead of) the transom. This means that a significant weight is at the aft end of the boat, and this must be taken into consideration. General use Large outboards Large outboards are affixed to the transom using clamps and are either tiller steered or controlled from the helm. Generally motors of 100 hp plus are linked to controls at the helm. These range from 2-, 3-, and 4-cylinder models generating suitable for hulls up to in length to powerful V6 and V8 cylinder blocks rated up to ., with sufficient power to be used on boats of or longer. Portable Small outboard motors, up to or so, are easily portable. They are affixed to the boat via clamps and thus easily moved from boat to boat. These motors typically use a manual start system, with throttle and gearshift controls mounted on the body of the motor, and a tiller for steering. The smallest of these weigh as little as , have integral fuel tanks, and provide sufficient power to move a small dinghy at around This type of motor is typically used: to power small craft such as jon boats, dinghies, canoes, etc to provide auxiliary power for sailboats for trolling aboard larger craft, as small outboards are typically more efficient at trolling speeds. 
In this application, the motor is frequently installed on the transom alongside and connected to the primary outboard to enable helm steering. In addition, many small motor manufacturers have begun offering variants with power trim/tilt and electric starting functions so that they may be completely controlled remotely. Electric-powered Electric outboard motors are self-contained propulsion units for boats, first invented in 1973 by Morton Ray of Ray Electric Outboards. These are not to be confused with trolling motors, which are not designed as a primary source of power. Most electric outboard motors have 0.5- to 4-kilowatt direct-current (DC) electric motors, operated at 12 to 60 volts DC. Recently developed outboard motors are powered with an alternating current (AC) or DC electric motor in the power head, like a conventional petrol engine. With this setup, a motor can produce 10 kW output or more and is able to replace a petrol engine of 15 hp or more. The advantage of the induction or asynchronous motor is the power transfer to the rotor by means of electromagnetic induction. As these engines do not use permanent magnets, they require less maintenance and develop more torque at lower propeller speeds. Pump-jet Pump-jet propulsion is available as an option on most outboard motors. Although less efficient than an open propeller, pump-jets are particularly useful in applications where the ability to operate in very shallow water is important. They also eliminate the laceration dangers of an open propeller. Propane Propane outboard motors are available from several manufacturers. These products have several advantages such as lower emissions, absence of ethanol-related issues, and no need for choke once the system is pressurized. Lehr is regarded by Popular Mechanics and other boating publications as the first manufacturer to have brought a propane-powered outboard motor to market. History and developments The first known outboard motor was a small 11 pound (5 kg) electric unit designed around 1870 by Gustave Trouvé, and patented in May 1880 (Patent N° 136,560). Later, about 25 petrol-powered outboards may have been produced in 1896 by American Motors Co, but neither of these two pioneering efforts appears to have had much impact. The Waterman outboard engine appears to be the first gasoline-powered outboard offered for sale in significant numbers. It was developed from 1903 in Grosse Ile, Michigan, with a patent application filed in 1905. Starting in 1906, the company went on to make thousands of its "Porto-Motor" units, claiming 25,000 sales by 1914. The inboard boat motor firm of Caille Motor Company of Detroit was instrumental in making the cylinders and engines. The most successful early outboard motor was created by Norwegian-American inventor Ole Evinrude in 1909. Historically, a majority of outboards have been two-stroke powerheads fitted with a carburetor due to the design's inherent simplicity, reliability, low cost and light weight. Drawbacks include increased pollution, due to the high volume of unburned gasoline and oil in their exhaust, and louder noise. Four-stroke outboards Four-stroke outboards have been sold since the late 1920s, such as the Roness and Sharland. In 1962, Homelite introduced a four-stroke outboard motor based on the four-cylinder Crosley automobile engine. This outboard was called the Bearcat and was later purchased by Fisher-Pierce, the makers of Boston Whaler, for use in their boats because of its advantages over two-stroke engines. In 1964, Honda Motor Co. 
introduced its first four-stroke powerhead. In 1984, Yamaha introduced their first four-stroke outboards, which were only available in the low-power range. In 1990, Honda released 35 hp and 45 hp four-stroke models. They continued to lead in the development of four-stroke engines throughout the 1990s as US and European exhaust emissions regulations such as CARB (California Air Resources Board) led to the proliferation of four-stroke outboards. At first, North American manufacturers such as Mercury and OMC used engine technology from Japanese manufacturers such as Yamaha and Suzuki until they were able to develop their own four-stroke engines. The inherent advantages of four-stroke motors include lower pollution (especially oil in the water), noise reduction, increased fuel economy, and increased torque at low engine speeds. Honda Marine Group, Mercury Marine, Mercury Racing, Nissan Marine, Suzuki Marine, Tohatsu Outboards, Yamaha Marine, and China's Oshen-Hyfong Marine have all developed new four-stroke engines. Some, usually the smaller engines, are carburetted; the balance are electronically fuel-injected. Depending on the manufacturer, newer engines benefit from advanced technology such as multiple valves per cylinder, variable camshaft timing (Honda's VTEC), boosted low-end torque (Honda's BLAST), 3-way cooling systems, and closed-loop fuel injection. Mercury Verado four-strokes are unique in that they are supercharged. Mercury Marine, Mercury Racing, Tohatsu, Yamaha Marine, Nissan and Evinrude each developed computer-controlled direct-injected two-stroke engines. Each brand boasts a different method of DI. Fuel economy on both direct-injected and four-stroke outboards measures from a 10 percent to 80 percent improvement compared with conventional two-strokes. However, the gap between two-stroke and four-stroke outboard fuel economy is beginning to narrow. Two-stroke outboard motor manufacturers have recently introduced technologies that help to improve two-stroke fuel economy. LPG outboards In 2012, Lehr Inc. introduced some small (<5 hp) outboards based on modified Chinese petrol engines to run on propane gas. Tohatsu currently also produces propane-powered models, all rated 5 hp. Conversion of larger outboards to run on liquefied petroleum gas is considered unusual and exotic, although some hobbyists continue to experiment. Outboard motor selection It is important to select a motor that is a good match for the hull in terms of power and shaft length. Power requirements Whether using a displacement or planing vessel, one should select an appropriate power level; too much power is wasteful (adding unnecessary weight) and may often be dangerous. Boats built in the US have Coast Guard Rating Plates, which specify the maximum recommended engine powers for the hulls. In the United Kingdom, boats have CE plates on the transoms which specify maximum engine power, shaft length, maximum engine weight and maximum number of persons or maximum load. Shaft length Outboard motor shaft lengths are standardized to fit 15-, 20- and 25-inch (38-, 51- and 64-centimeter) transoms. If the shaft is too long, it will extend farther into the water than necessary, creating drag, which will impair performance and fuel economy. If the shaft is too short, the motor will be prone to ventilation. Even worse, if the water intake ports on the lower unit are not sufficiently submerged, engine overheating is likely, which can result in severe damage. 
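The shaft-length guidance above can be expressed as a small check. The sketch below is purely illustrative: the function name, the one-inch tolerance and the example transom height are assumptions made for the example, not figures from any manufacturer; only the 15-, 20- and 25-inch standard lengths and the too-long/too-short consequences come from the text above.

```python
# Illustrative sketch only: compares a motor's shaft length with the transom
# height, flagging the problems described above. The one-inch tolerance is an
# assumed value for the example, not a manufacturer specification.
STANDARD_SHAFT_LENGTHS_IN = (15, 20, 25)

def check_shaft_fit(shaft_in: float, transom_in: float, tolerance_in: float = 1.0) -> str:
    """Return a rough assessment of how a shaft length suits a transom height (inches)."""
    if shaft_in > transom_in + tolerance_in:
        return "Too long: the lower unit sits deeper than necessary, adding drag and hurting fuel economy."
    if shaft_in < transom_in - tolerance_in:
        return "Too short: prone to ventilation, and to overheating if the water intakes are not submerged."
    return "Reasonable match."

if __name__ == "__main__":
    transom = 21.0  # example measurement, top of transom to bottom of hull
    for shaft in STANDARD_SHAFT_LENGTHS_IN:
        print(f"{shaft}-inch shaft on a {transom}-inch transom: {check_shaft_fit(shaft, transom)}")
```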
General dimensions Different outboard engine brands require different transom dimensions and sizes. This affects performance and trim. Operational considerations Motor mounting height Motor height on the transom is an important factor in achieving optimal performance. The motor should be as high as possible without ventilating or loss of water pressure. This minimizes the effect of hydrodynamic drag while underway, allowing for greater speed. Generally, the antiventilation plate should be about the same height as, or up to two inches higher than, the keel, with the motor in neutral trim. Trim Trim is the angle of the motor in relation to the hull, as illustrated below. The ideal trim angle is the one in which the boat rides level, with most of the hull on the surface instead of plowing through the water. If the motor is trimmed out too far, the bow will ride too high in the water. With too little trim, the bow rides too low. The optimal trim setting will vary depending on many factors including speed, hull design, weight and balance, and conditions on the water (wind and waves). Many large outboards are equipped with power trim, an electric motor on the mounting bracket, with a switch at the helm that enables the operator to adjust the trim angle on the fly. In this case, the motor should be trimmed fully in to start, and trimmed out (with an eye on the tachometer) as the boat gains momentum, until it reaches the point just before ventilation begins or further trim adjustment results in an increase in engine speed with no increase in travel speed. Motors not equipped with power trim are manually adjustable using a pin called a topper tilt lock. Ventilation Ventilation is a phenomenon that occurs when surface air or exhaust gas (in the case of motors equipped with through-hub exhaust) is drawn into the spinning propeller blades. With the propeller pushing mostly air instead of water, the load on the engine is greatly reduced, causing the engine to race and the propeller to spin fast enough to result in cavitation, at which point little thrust is generated at all. The condition continues until the prop slows enough for the air bubbles to rise to the surface. The primary causes of ventilation are: motor mounted too high, motor trimmed out excessively, damage to the antiventilation plate, damage to propeller, foreign object lodged in the diffuser ring. Safety If the helmsman goes overboard, the boat may continue under power but uncontrolled, risking serious or fatal injuries to the helmsman and others in the water. A safety measure is a "kill cord" attached to the boat and helmsman, which cuts the motor if the helmsman falls overboard. Cooling system The most common type of cooling used on outboards of all eras use a rubber impeller to pump water from below the waterline up into the engine. This design has remained the standard due mainly to the efficiency and simplicity of its design. One disadvantage to this system is that if the impeller is run dry for a length of time (such as leaving the engine running when pulling the boat out of the water or in some cases tilting the engine out of the water while running), the impeller is likely to be ruined in the process. Air-cooled outboards Air-cooled outboard engines are currently produced by some manufacturers. These tend to be small engines of less than . Outboard engines made by Briggs & Stratton are air-cooled. Closed-loop cooling Outboards manufactured by Seven Marine use a closed-loop cooling system with a heat exchanger. 
This means saltwater is not pumped through the engine block, as is the case with most outboard motors, but instead engine coolant and outside water are pumped through (opposite sides of) the heat exchanger. Engine Stalls An outboard engine may stall if it does not receive the correct inputs. Common problems that lead to stalling are electrical issues, low-quality gas, or a clogged fuel filter. Other issues may include a damaged carburetor or oil switch. Use in long-tail boats In Vietnam and other parts of Southeast Asia, long-tail boats use outboard motors altered to extend their propellers far from the rest of the motor. In Vietnam these outboards are called (shrimp tail motor); they are smallish air-cooled or water-cooled gasoline, diesel or even modified automotive engines bolted to a welded steel tube frame, with another long steel tube up to 3 m long to hold an extended drive shaft driving a conventional propeller. The frame that holds the motor has a short, swiveling steel pin/tube approximately 15 cm long underneath, to be inserted into a corresponding hole on the transom, or a solid block of wood purposely built in for it. This drop-in arrangement enables extremely quick transfer of the motor to another boat or for storage: all that is needed is to lift it out. The pivoting design allows the outboard motor to be swiveled by the operator in almost all directions: sideways for direction, up and down to change the thrust line according to speed or bow lift, elevated completely out of the water for easy starting, with the drive shaft and propeller placed forward along the side of the boat for reverse, or brought inside the boat for propeller replacement, which can be a regular occurrence with the cheap cast aluminum propellers on the often debris-prone inland waterways. Manufacturers Aquawatt Electric Outboard Motor Bolinder Briggs & Stratton - USA - Up to 5 hp Cimco Marine AB DBD Marine EP Carry ePropulsion - Hong Kong Halmarine Outboards Hidea - China Honda Marine Group - Japan - Up to 350 hp KARVIN Motors - electric outboards Kohler Company Jarvis Marine Lehr Maxus outboards Mud-skipper Longtail outboard Mercury/Mariner/Mercury Racing - USA - Up to 600 hp Nissan Marine (now Tohatsu) Oshen-Hyfong Marine Photon Marine - electric outboard motors for commercial boat fleets Propel part of Saietta Group - Electric outboards Parsun - China Selva Marine - Italy - Up to 250 hp Suzuki Marine - Japan - Up to 350 hp Tohatsu - Japan Torqeedo part of Deutz AG - Electric outboards Zomair Ul'yanovsk Motor Plant West Bend Yamaha Outboards - Japan - Up to 425 hp Yanmar Diesel Electric outboard manufacturers Minn Kota Torqeedo ThrustMe Flux Marine Rim Drive Technology TEMO ePropulsion Former manufacturers British Anzani British Seagull (defunct) Chrysler Homelite Johnson Outboards (folded into Evinrude Outboard Motors) ELTO Evinrude, a division of Bombardier Recreational Products - USA - Up to 300 hp McCulloch Seven Marine - USA - 3 models rated up to 627 hp utilising a General Motors-sourced V8 supercharged power-plant Tomos Volvo Penta Oliver See also References External links "How A Kicker Works", June 1951, Popular Science by George W. 
Waltz Jr - basic article on outboard motors with many drawings and illustrations The Antique Outboard Motor Club International The Super-Elto Outboard Motor (1927) Smithsonian Institution Libraries Early Outboard Motor Boat Racing in British Columbia Yamaha Outboards Resources Collection Patents - Marine propulsion mechanism - Canoe and other small craft Marine propulsion Marine engines French inventions
Outboard motor
[ "Technology", "Engineering" ]
3,653
[ "Marine engines", "Marine propulsion", "Engines", "Marine engineering" ]
40,114
https://en.wikipedia.org/wiki/Escherichia%20coli
Escherichia coli is a gram-negative, facultatively anaerobic, rod-shaped, coliform bacterium of the genus Escherichia that is commonly found in the lower intestine of warm-blooded organisms. Most E. coli strains are part of the normal microbiota of the gut, where they constitute about 0.1%, along with other facultative anaerobes. These bacteria are mostly harmless or even beneficial to humans. For example, some strains of E. coli benefit their hosts by producing vitamin K2 or by preventing the colonization of the intestine by harmful pathogenic bacteria. This relationship between E. coli and humans is a type of mutualism, in which both species benefit. E. coli is expelled into the environment within fecal matter. The bacterium grows massively in fresh fecal matter under aerobic conditions for three days, but its numbers decline slowly afterwards. Some serotypes, such as EPEC and ETEC, are pathogenic, causing serious food poisoning in their hosts. Fecal–oral transmission is the major route through which pathogenic strains of the bacterium cause disease. This transmission method is occasionally responsible for food contamination incidents that prompt product recalls. Cells are able to survive outside the body for a limited amount of time, which makes them potential indicator organisms to test environmental samples for fecal contamination. A growing body of research, though, has examined environmentally persistent E. coli which can survive for many days and grow outside a host. The bacterium can be grown and cultured easily and inexpensively in a laboratory setting, and has been intensively investigated for over 60 years. E. coli is a chemoheterotroph whose chemically defined medium must include a source of carbon and energy. E. coli is the most widely studied prokaryotic model organism, and an important species in the fields of biotechnology and microbiology, where it has served as the host organism for the majority of work with recombinant DNA. Under favourable conditions, it takes as little as 20 minutes to reproduce. Biology and biochemistry Type and morphology E. coli is a gram-negative, facultatively anaerobic, nonsporulating coliform bacterium. Cells are typically rod-shaped, and are about 2.0 μm long and 0.25–1.0 μm in diameter, with a cell volume of 0.6–0.7 μm³. E. coli stains gram-negative because its cell wall is composed of a thin peptidoglycan layer and an outer membrane. During the staining process, E. coli picks up the color of the counterstain safranin and stains pink. The outer membrane surrounding the cell wall provides a barrier to certain antibiotics, such that E. coli is not damaged by penicillin. The flagella which allow the bacteria to swim have a peritrichous arrangement. It also attaches to, and effaces, the microvilli of the intestines via an adhesion molecule known as intimin. Metabolism E. coli can live on a wide variety of substrates and uses mixed acid fermentation in anaerobic conditions, producing lactate, succinate, ethanol, acetate, and carbon dioxide. Since many pathways in mixed-acid fermentation produce hydrogen gas, these pathways require the levels of hydrogen to be low, as is the case when E. coli lives together with hydrogen-consuming organisms, such as methanogens or sulphate-reducing bacteria. In addition, E. coli's metabolism can be rewired to solely use CO2 as the source of carbon for biomass production. 
In other words, this obligate heterotroph's metabolism can be altered to display autotrophic capabilities by heterologously expressing carbon fixation genes as well as formate dehydrogenase and conducting laboratory evolution experiments. This may be done by using formate to reduce electron carriers and supply the ATP required in anabolic pathways inside of these synthetic autotrophs. E. coli has three native glycolytic pathways: EMPP, EDP, and OPPP. The EMPP employs ten enzymatic steps to yield two pyruvates, two ATP, and two NADH per glucose molecule while OPPP serves as an oxidation route for NADPH synthesis. Although the EDP is the more thermodynamically favourable of the three pathways, E. coli do not use the EDP for glucose metabolism, relying mainly on the EMPP and the OPPP. The EDP mainly remains inactive except for during growth with gluconate. Catabolite repression When growing in the presence of a mixture of sugars, bacteria will often consume the sugars sequentially through a process known as catabolite repression. By repressing the expression of the genes involved in metabolizing the less preferred sugars, cells will usually first consume the sugar yielding the highest growth rate, followed by the sugar yielding the next highest growth rate, and so on. In doing so the cells ensure that their limited metabolic resources are being used to maximize the rate of growth. The well-used example of this with E. coli involves the growth of the bacterium on glucose and lactose, where E. coli will consume glucose before lactose. Catabolite repression has also been observed in E. coli in the presence of other non-glucose sugars, such as arabinose and xylose, sorbitol, rhamnose, and ribose. In E. coli, glucose catabolite repression is regulated by the phosphotransferase system, a multi-protein phosphorylation cascade that couples glucose uptake and metabolism. Culture growth Optimum growth of E. coli occurs at , but some laboratory strains can multiply at temperatures up to . E. coli grows in a variety of defined laboratory media, such as lysogeny broth, or any medium that contains glucose, ammonium phosphate monobasic, sodium chloride, magnesium sulfate, potassium phosphate dibasic, and water. Growth can be driven by aerobic or anaerobic respiration, using a large variety of redox pairs, including the oxidation of pyruvic acid, formic acid, hydrogen, and amino acids, and the reduction of substrates such as oxygen, nitrate, fumarate, dimethyl sulfoxide, and trimethylamine N-oxide. E. coli is classified as a facultative anaerobe. It uses oxygen when it is present and available. It can, however, continue to grow in the absence of oxygen using fermentation or anaerobic respiration. Respiration type is managed in part by the arc system. The ability to continue growing in the absence of oxygen is an advantage to bacteria because their survival is increased in environments where water predominates. Cell cycle The bacterial cell cycle is divided into three stages. The B period occurs between the completion of cell division and the beginning of DNA replication. The C period encompasses the time it takes to replicate the chromosomal DNA. The D period refers to the stage between the conclusion of DNA replication and the end of cell division. The doubling rate of E. coli is higher when more nutrients are available. However, the length of the C and D periods do not change, even when the doubling time becomes less than the sum of the C and D periods. 
At the fastest growth rates, replication begins before the previous round of replication has completed, resulting in multiple replication forks along the DNA and overlapping cell cycles. The number of replication forks in fast-growing E. coli typically follows 2^n (n = 1, 2 or 3). This only happens if replication is initiated simultaneously from all origins of replication, and is referred to as synchronous replication. However, not all cells in a culture replicate synchronously. In this case, cells do not have multiples of two replication forks. Replication initiation is then referred to as asynchronous. Such asynchrony can be caused by mutations to, for instance, DnaA or the DnaA initiator-associating protein DiaA. Although E. coli reproduces by binary fission, the two supposedly identical cells produced by cell division are functionally asymmetric, with the old pole cell acting as an aging parent that repeatedly produces rejuvenated offspring. When exposed to an elevated stress level, damage accumulation in an old E. coli lineage may surpass its immortality threshold so that it arrests division and becomes mortal. Cellular aging is a general process, affecting prokaryotes and eukaryotes alike. Genetic adaptation E. coli and related bacteria possess the ability to transfer DNA via bacterial conjugation or transduction, which allows genetic material to spread horizontally through an existing population. Transduction, which uses the bacterial virus called a bacteriophage, is how the gene encoding Shiga toxin spread from Shigella bacteria to E. coli, helping to produce E. coli O157:H7, the Shiga toxin-producing strain of E. coli. Diversity E. coli encompasses an enormous population of bacteria that exhibit a very high degree of both genetic and phenotypic diversity. Genome sequencing of many isolates of E. coli and related bacteria shows that a taxonomic reclassification would be desirable. However, this has not been done, largely due to its medical importance, and E. coli remains one of the most diverse bacterial species: only 20% of the genes in a typical E. coli genome are shared among all strains. In fact, from a more constructive point of view, the members of the genus Shigella (S. dysenteriae, S. flexneri, S. boydii, and S. sonnei) should be classified as E. coli strains, a phenomenon termed taxa in disguise. Similarly, other strains of E. coli (e.g. the K-12 strain commonly used in recombinant DNA work) are sufficiently different that they would merit reclassification. A strain is a subgroup within the species that has unique characteristics that distinguish it from other strains. These differences are often detectable only at the molecular level; however, they may result in changes to the physiology or lifecycle of the bacterium. For example, a strain may gain pathogenic capacity, the ability to use a unique carbon source, the ability to occupy a particular ecological niche, or the ability to resist antimicrobial agents. Different strains of E. coli are often host-specific, making it possible to determine the source of fecal contamination in environmental samples. For example, knowing which E. coli strains are present in a water sample allows researchers to make assumptions about whether the contamination originated from a human, another mammal, or a bird. Serotypes A common subdivision system of E. 
coli, but not based on evolutionary relatedness, is by serotype, which is based on major surface antigens (O antigen: part of lipopolysaccharide layer; H: flagellin; K antigen: capsule), e.g. O157:H7. It is, however, common to cite only the serogroup, i.e. the O-antigen. At present, about 190 serogroups are known. The common laboratory strain has a mutation that prevents the formation of an O-antigen and is thus not typeable. Genome plasticity and evolution Like all lifeforms, new strains of E. coli evolve through the natural biological processes of mutation, gene duplication, and horizontal gene transfer; in particular, 18% of the genome of the laboratory strain MG1655 was horizontally acquired since the divergence from Salmonella. E. coli K-12 and E. coli B strains are the most frequently used varieties for laboratory purposes. Some strains develop traits that can be harmful to a host animal. These virulent strains typically cause a bout of diarrhea that is often self-limiting in healthy adults but is frequently lethal to children in the developing world. More virulent strains, such as O157:H7, cause serious illness or death in the elderly, the very young, or the immunocompromised. The genera Escherichia and Salmonella diverged around 102 million years ago (credibility interval: 57–176 mya), an event unrelated to the much earlier (see Synapsid) divergence of their hosts: the former being found in mammals and the latter in birds and reptiles. This was followed by a split of an Escherichia ancestor into five species (E. albertii, E. coli, E. fergusonii, E. hermannii, and E. vulneris). The last E. coli ancestor split between 20 and 30 million years ago. The long-term evolution experiments using E. coli, begun by Richard Lenski in 1988, have allowed direct observation of genome evolution over more than 65,000 generations in the laboratory. For instance, E. coli typically do not have the ability to grow aerobically with citrate as a carbon source, which is used as a diagnostic criterion with which to differentiate E. coli from other, closely related bacteria such as Salmonella. In this experiment, one population of E. coli unexpectedly evolved the ability to aerobically metabolize citrate, a major evolutionary shift with some hallmarks of microbial speciation. In the microbial world, a relationship of predation can be established similar to that observed in the animal world. For example, it has been seen that E. coli is the prey of multiple generalist predators, such as Myxococcus xanthus. In this predator-prey relationship, a parallel evolution of both species is observed through genomic and phenotypic modifications; in the case of E. coli, the modifications affect two aspects involved in its virulence: mucoid production (excessive production of exoplasmic acid alginate) and the suppression of the OmpT gene. This produces in future generations a better adaptation of one of the species that is counteracted by the evolution of the other, following a co-evolutionary model demonstrated by the Red Queen hypothesis. Neotype strain E. coli is the type species of the genus (Escherichia) and in turn Escherichia is the type genus of the family Enterobacteriaceae, where the family name does not stem from the genus Enterobacter + "i" (sic.) + "aceae", but from "enterobacterium" + "aceae" (enterobacterium being not a genus, but an alternative trivial name to enteric bacterium). 
The original strain described by Escherich is believed to be lost, consequently a new type strain (neotype) was chosen as a representative: the neotype strain is U5/41T, also known under the deposit names DSM 30083, ATCC 11775, and NCTC 9001, which is pathogenic to chickens and has an O1:K1:H7 serotype. However, in most studies, either O157:H7, K-12 MG1655, or K-12 W3110 were used as a representative E. coli. The genome of the type strain has only lately been sequenced. Phylogeny of E. coli strains Many strains belonging to this species have been isolated and characterised. In addition to serotype (vide supra), they can be classified according to their phylogeny, i.e. the inferred evolutionary history, as shown below where the species is divided into six groups as of 2014. Particularly the use of whole genome sequences yields highly supported phylogenies. The phylogroup structure remains robust to newer methods and sequences, which sometimes adds newer groups, giving 8 or 14 as of 2023. The link between phylogenetic distance ("relatedness") and pathology is small, e.g. the O157:H7 serotype strains, which form a clade ("an exclusive group")—group E below—are all enterohaemorragic strains (EHEC), but not all EHEC strains are closely related. In fact, four different species of Shigella are nested among E. coli strains (vide supra), while E. albertii and E. fergusonii are outside this group. Indeed, all Shigella species were placed within a single subspecies of E. coli in a phylogenomic study that included the type strain. All commonly used research strains of E. coli belong to group A and are derived mainly from Clifton's K-12 strain (λ+ F+; O16) and to a lesser degree from d'Herelle's "Bacillus coli" strain (B strain; O7). There have been multiple proposals to revise the taxonomy to match phylogeny. However, all these proposals need to face the fact that Shigella remains a widely used name in medicine and find ways to reduce any confusion that can stem from renaming. Genomics The first complete DNA sequence of an E. coli genome (laboratory strain K-12 derivative MG1655) was published in 1997. It is a circular DNA molecule 4.6 million base pairs in length, containing 4288 annotated protein-coding genes (organized into 2584 operons), seven ribosomal RNA (rRNA) operons, and 86 transfer RNA (tRNA) genes. Despite having been the subject of intensive genetic analysis for about 40 years, many of these genes were previously unknown. The coding density was found to be very high, with a mean distance between genes of only 118 base pairs. The genome was observed to contain a significant number of transposable genetic elements, repeat elements, cryptic prophages, and bacteriophage remnants. Most genes have only a single copy. More than three hundred complete genomic sequences of Escherichia and Shigella species are known. The genome sequence of the type strain of E. coli was added to this collection before 2014. Comparison of these sequences shows a remarkable amount of diversity; only about 20% of each genome represents sequences present in every one of the isolates, while around 80% of each genome can vary among isolates. Each individual genome contains between 4,000 and 5,500 genes, but the total number of different genes among all of the sequenced E. coli strains (the pangenome) exceeds 16,000. This very large variety of component genes has been interpreted to mean that two-thirds of the E. coli pangenome originated in other species and arrived through the process of horizontal gene transfer. 
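As a quick sanity check on the figures quoted above, the back-of-the-envelope sketch below recomputes two of them: the average amount of chromosome per protein-coding gene (consistent with the dense coding and the 118 bp mean intergenic distance) and the fraction of the >16,000-gene pangenome carried by any single strain. All inputs are values stated in the text; the arithmetic itself is only illustrative and is not taken from the cited genome papers.

```python
# Back-of-the-envelope arithmetic on the E. coli genome figures quoted above.
genome_bp = 4_600_000         # K-12 MG1655 chromosome length, ~4.6 million base pairs
protein_coding_genes = 4_288  # annotated protein-coding genes in the 1997 sequence
pangenome_genes = 16_000      # approximate distinct genes across all sequenced strains
genes_per_strain = (4_000, 5_500)

bp_per_gene = genome_bp / protein_coding_genes
print(f"~{bp_per_gene:.0f} bp of chromosome per protein-coding gene")  # ~1,073 bp

low, high = (n / pangenome_genes for n in genes_per_strain)
print(f"a single strain carries roughly {low:.0%}-{high:.0%} of the pangenome")  # ~25%-34%
```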
Gene nomenclature Genes in E. coli are usually named in accordance with the uniform nomenclature proposed by Demerec et al. Gene names are 3-letter acronyms that derive from their function (when known) or mutant phenotype and are italicized. When multiple genes have the same acronym, the different genes are designated by a capital letter that follows the acronym and is also italicized. For instance, recA is named after its role in homologous recombination plus the letter A. Functionally related genes are named recB, recC, recD, etc. The proteins are named by uppercase acronyms, e.g. RecA, RecB, etc. When the genome of E. coli strain K-12 substr. MG1655 was sequenced, all known or predicted protein-coding genes were numbered (more or less) in their order on the genome and abbreviated by b numbers, such as b2819 (= recD). The "b" names were created after Fred Blattner, who led the genome sequencing effort. Another numbering system was introduced with the sequence of another E. coli K-12 substrain, W3110, which was sequenced in Japan and hence uses numbers starting with JW... (Japanese W3110), e.g. JW2787 (= recD). Hence, recD = b2819 = JW2787. Note, however, that most databases have their own numbering system, e.g. the EcoGene database uses EG10826 for recD. Finally, ECK numbers are specifically used for alleles in the MG1655 strain of E. coli K-12. Complete lists of genes and their synonyms can be obtained from databases such as EcoGene or Uniprot. Proteomics Proteome The genome sequence of E. coli predicts 4288 protein-coding genes, of which 38 percent initially had no attributed function. Comparison with five other sequenced microbes reveals ubiquitous as well as narrowly distributed gene families; many families of similar genes within E. coli are also evident. The largest family of paralogous proteins contains 80 ABC transporters. The genome as a whole is strikingly organized with respect to the local direction of replication; guanines, oligonucleotides possibly related to replication and recombination, and most genes are so oriented. The genome also contains insertion sequence (IS) elements, phage remnants, and many other patches of unusual composition indicating genome plasticity through horizontal transfer. Several studies have experimentally investigated the proteome of E. coli. By 2006, 1,627 (38%) of the predicted proteins (open reading frames, ORFs) had been identified experimentally. Mateus et al. (2020) detected 2,586 proteins with at least 2 peptides (60% of all proteins). Post-translational modifications (PTMs) Although far fewer bacterial proteins seem to have post-translational modifications (PTMs) compared to eukaryotic proteins, a substantial number of proteins are modified in E. coli. For instance, Potel et al. (2018) found 227 phosphoproteins, of which 173 were phosphorylated on histidine. The majority of phosphorylated amino acids were serine (1,220 sites), with only 246 sites on histidine, 501 phosphorylated threonines and 162 tyrosines. Interactome The interactome of E. coli has been studied by affinity purification and mass spectrometry (AP/MS) and by analyzing the binary interactions among its proteins. Protein complexes. A 2006 study purified 4,339 proteins from cultures of strain K-12 and found interacting partners for 2,667 proteins, many of which had unknown functions at the time. A 2009 study found 5,993 interactions between proteins of the same E. coli strain, though these data showed little overlap with those of the 2006 publication. Binary interactions. 
Rajagopala et al. (2014) have carried out systematic yeast two-hybrid screens with most E. coli proteins, and found a total of 2,234 protein-protein interactions. This study also integrated genetic interactions and protein structures and mapped 458 interactions within 227 protein complexes. Normal microbiota E. coli belongs to a group of bacteria informally known as coliforms that are found in the gastrointestinal tract of warm-blooded animals. E. coli normally colonizes an infant's gastrointestinal tract within 40 hours of birth, arriving with food or water or from the individuals handling the child. In the bowel, E. coli adheres to the mucus of the large intestine. It is the primary facultative anaerobe of the human gastrointestinal tract. (Facultative anaerobes are organisms that can grow in either the presence or absence of oxygen.) As long as these bacteria do not acquire genetic elements encoding virulence factors, they remain benign commensals. Therapeutic use Due to the low cost and speed with which it can be grown and modified in laboratory settings, E. coli is a popular expression platform for the production of recombinant proteins used in therapeutics. One advantage of using E. coli over other expression platforms is that E. coli naturally does not export many proteins into the periplasm, making it easier to recover a protein of interest without cross-contamination. The E. coli K-12 strains and their derivatives (DH1, DH5α, MG1655, RV308 and W3110) are the strains most widely used by the biotechnology industry. The nonpathogenic E. coli strain Nissle 1917 (EcN, Mutaflor) and E. coli O83:K24:H31 (Colinfant) are used as probiotic agents in medicine, mainly for the treatment of various gastrointestinal diseases, including inflammatory bowel disease. It is thought that the EcN strain might impede the growth of opportunistic pathogens, including Salmonella and other coliform enteropathogens, through the production of microcin proteins and of siderophores. Role in disease Most E. coli strains do not cause disease, naturally living in the gut, but virulent strains can cause gastroenteritis, urinary tract infections, neonatal meningitis, hemorrhagic colitis, and Crohn's disease. Common signs and symptoms include severe abdominal cramps, diarrhea, hemorrhagic colitis, vomiting, and sometimes fever. In rarer cases, virulent strains are also responsible for bowel necrosis (tissue death) and perforation without progressing to hemolytic-uremic syndrome, peritonitis, mastitis, sepsis, and gram-negative pneumonia. Very young children are more susceptible to developing severe illness, such as hemolytic uremic syndrome; however, healthy individuals of all ages are at risk of the severe consequences that may arise as a result of being infected with E. coli. Some strains of E. coli, for example O157:H7, can produce Shiga toxin. The Shiga toxin causes inflammatory responses in target cells of the gut, leaving behind lesions which result in the bloody diarrhea that is a symptom of a Shiga toxin-producing E. coli (STEC) infection. This toxin further causes premature destruction of the red blood cells, which then clog the body's filtering system, the kidneys, in some rare cases (usually in children and the elderly) causing hemolytic-uremic syndrome (HUS), which may lead to kidney failure and even death. Signs of hemolytic uremic syndrome include decreased frequency of urination, lethargy, and paleness of cheeks and inside the lower eyelids. 
In 25% of HUS patients, complications of the nervous system occur, which in turn can cause strokes. In addition, this strain causes the buildup of fluid (since the kidneys do not work), leading to edema around the lungs, legs, and arms. This increase in fluid buildup, especially around the lungs, impedes the functioning of the heart, causing an increase in blood pressure. Uropathogenic E. coli (UPEC) is one of the main causes of urinary tract infections. It is part of the normal microbiota in the gut and can be introduced in many ways. In particular for females, the direction of wiping after defecation (wiping back to front) can lead to fecal contamination of the urogenital orifices. Anal intercourse can also introduce this bacterium into the male urethra, and in switching from anal to vaginal intercourse, the male can also introduce UPEC to the female urogenital system. Enterotoxigenic E. coli (ETEC) is the most common cause of traveler's diarrhea, with as many as 840 million cases worldwide in developing countries each year. The bacterium, typically transmitted through contaminated food or drinking water, adheres to the intestinal lining, where it secretes either of two types of enterotoxins, leading to watery diarrhea. The rate and severity of infections are higher among children under the age of five, with as many as 380,000 deaths annually. In May 2011, one E. coli strain, O104:H4, was the subject of a bacterial outbreak that began in Germany. Certain strains of E. coli are a major cause of foodborne illness. The outbreak started when several people in Germany were infected with enterohemorrhagic E. coli (EHEC) bacteria, leading to hemolytic-uremic syndrome (HUS), a medical emergency that requires urgent treatment. The outbreak concerned not only Germany but also 15 other countries, including regions in North America. On 30 June 2011, the German Bundesinstitut für Risikobewertung (BfR) (Federal Institute for Risk Assessment, a federal institute within the German Federal Ministry of Food, Agriculture and Consumer Protection) announced that seeds of fenugreek from Egypt were likely the cause of the EHEC outbreak. Some studies have demonstrated an absence of E. coli in the gut flora of subjects with the metabolic disorder phenylketonuria. It is hypothesized that the absence of these normal bacteria impairs the production of the key vitamins B2 (riboflavin) and K2 (menaquinone) – vitamins which are implicated in many physiological roles in humans such as cellular and bone metabolism – and so contributes to the disorder. Carbapenem-resistant E. coli (carbapenemase-producing E. coli) are resistant to the carbapenem class of antibiotics, which are considered the drugs of last resort for such infections. They are resistant because they produce an enzyme called a carbapenemase that disables the drug molecule. Incubation period The time between ingesting the STEC bacteria and feeling sick is called the "incubation period". The incubation period is usually 3–4 days after the exposure, but may be as short as 1 day or as long as 10 days. The symptoms often begin slowly with mild belly pain or non-bloody diarrhea that worsens over several days. HUS, if it occurs, develops an average of 7 days after the first symptoms, when the diarrhea is improving. Diagnosis Diagnosis of infectious diarrhea and identification of antimicrobial resistance are performed using a stool culture with subsequent antibiotic sensitivity testing. 
It requires a minimum of 2 days and maximum of several weeks to culture gastrointestinal pathogens. The sensitivity (true positive) and specificity (true negative) rates for stool culture vary by pathogen, although a number of human pathogens can not be cultured. For culture-positive samples, antimicrobial resistance testing takes an additional 12–24 hours to perform. Current point of care molecular diagnostic tests can identify E. coli and antimicrobial resistance in the identified strains much faster than culture and sensitivity testing. Microarray-based platforms can identify specific pathogenic strains of E. coli and E. coli-specific AMR genes in two hours or less with high sensitivity and specificity, but the size of the test panel (i.e., total pathogens and antimicrobial resistance genes) is limited. Newer metagenomics-based infectious disease diagnostic platforms are currently being developed to overcome the various limitations of culture and all currently available molecular diagnostic technologies. Treatment The mainstay of treatment is the assessment of dehydration and replacement of fluid and electrolytes. Administration of antibiotics has been shown to shorten the course of illness and duration of excretion of enterotoxigenic E. coli (ETEC) in adults in endemic areas and in traveller's diarrhea, though the rate of resistance to commonly used antibiotics is increasing and they are generally not recommended. The antibiotic used depends upon susceptibility patterns in the particular geographical region. Currently, the antibiotics of choice are fluoroquinolones or azithromycin, with an emerging role for rifaximin. Rifaximin, a semisynthetic rifamycin derivative, is an effective and well-tolerated antibacterial for the management of adults with non-invasive traveller's diarrhea. Rifaximin was significantly more effective than placebo and no less effective than ciprofloxacin in reducing the duration of diarrhea. While rifaximin is effective in patients with E. coli-predominant traveller's diarrhea, it appears ineffective in patients infected with inflammatory or invasive enteropathogens. Prevention ETEC is the type of E. coli that most vaccine development efforts are focused on. Antibodies against the LT and major CFs of ETEC provide protection against LT-producing, ETEC-expressing homologous CFs. Oral inactivated vaccines consisting of toxin antigen and whole cells, i.e. the licensed recombinant cholera B subunit (rCTB)-WC cholera vaccine Dukoral, have been developed. There are currently no licensed vaccines for ETEC, though several are in various stages of development. In different trials, the rCTB-WC cholera vaccine provided high (85–100%) short-term protection. An oral ETEC vaccine candidate consisting of rCTB and formalin inactivated E. coli bacteria expressing major CFs has been shown in clinical trials to be safe, immunogenic, and effective against severe diarrhoea in American travelers but not against ETEC diarrhoea in young children in Egypt. A modified ETEC vaccine consisting of recombinant E. coli strains over-expressing the major CFs and a more LT-like hybrid toxoid called LCTBA, are undergoing clinical testing. Other proven prevention methods for E. coli transmission include handwashing and improved sanitation and drinking water, as transmission occurs through fecal contamination of food and water supplies. Additionally, thoroughly cooking meat and avoiding consumption of raw, unpasteurized beverages, such as juices and milk are other proven methods for preventing E. 
coli. Lastly, cross-contamination of utensils and work spaces should be avoided when preparing food. Model organism in life science research Because of its long history of laboratory culture and ease of manipulation, E. coli plays an important role in modern biological engineering and industrial microbiology. The work of Stanley Norman Cohen and Herbert Boyer in E. coli, using plasmids and restriction enzymes to create recombinant DNA, became a foundation of biotechnology. E. coli is a very versatile host for the production of heterologous proteins, and various protein expression systems have been developed which allow the production of recombinant proteins in E. coli. Researchers can introduce genes into the microbes using plasmids which permit high level expression of protein, and such protein may be mass-produced in industrial fermentation processes. One of the first useful applications of recombinant DNA technology was the manipulation of E. coli to produce human insulin. Many proteins previously thought difficult or impossible to be expressed in E. coli in folded form have been successfully expressed in E. coli. For example, proteins with multiple disulphide bonds may be produced in the periplasmic space or in the cytoplasm of mutants rendered sufficiently oxidizing to allow disulphide-bonds to form, while proteins requiring post-translational modification such as glycosylation for stability or function have been expressed using the N-linked glycosylation system of Campylobacter jejuni engineered into E. coli. Modified E. coli cells have been used in vaccine development, bioremediation, production of biofuels, lighting, and production of immobilised enzymes. Strain K-12 is a mutant form of E. coli that over-expresses the enzyme Alkaline phosphatase (ALP). The mutation arises due to a defect in the gene that constantly codes for the enzyme. A gene that is producing a product without any inhibition is said to have constitutive activity. This particular mutant form is used to isolate and purify the aforementioned enzyme. Strain OP50 of Escherichia coli is used for maintenance of Caenorhabditis elegans cultures. Strain JM109 is a mutant form of E. coli that is recA and endA deficient. The strain can be utilized for blue/white screening when the cells carry the fertility factor episome. Lack of recA decreases the possibility of unwanted restriction of the DNA of interest and lack of endA inhibit plasmid DNA decomposition. Thus, JM109 is useful for cloning and expression systems. Model organism E. coli is frequently used as a model organism in microbiology studies. Cultivated strains (e.g. E. coli K12) are well-adapted to the laboratory environment, and, unlike wild-type strains, have lost their ability to thrive in the intestine. Many laboratory strains lose their ability to form biofilms. These features protect wild-type strains from antibodies and other chemical attacks, but require a large expenditure of energy and material resources. E. coli is often used as a representative microorganism in the research of novel water treatment and sterilisation methods, including photocatalysis. By standard plate count methods, following sequential dilutions, and growth on agar gel plates, the concentration of viable organisms or CFUs (Colony Forming Units), in a known volume of treated water can be evaluated, allowing the comparative assessment of materials performance. In 1946, Joshua Lederberg and Edward Tatum first described the phenomenon known as bacterial conjugation using E. 
coli as a model bacterium, and it remains the primary model to study conjugation. E. coli was an integral part of the first experiments to understand phage genetics, and early researchers, such as Seymour Benzer, used E. coli and phage T4 to understand the topography of gene structure. Prior to Benzer's research, it was not known whether the gene was a linear structure, or if it had a branching pattern. E. coli was one of the first organisms to have its genome sequenced; the complete genome of E. coli K12 was published in Science in 1997. From 2002 to 2010, a team at the Hungarian Academy of Sciences created a strain of Escherichia coli called MDS42, which is now sold by Scarab Genomics of Madison, WI under the name of "Clean Genome E. coli", in which 15% of the genome of the parental strain (E. coli K-12 MG1655) was removed to aid in molecular biology efficiency, removing IS elements, pseudogenes and phages, resulting in better maintenance of plasmid-encoded toxic genes, which are often inactivated by transposons. Biochemistry and replication machinery were not altered. By evaluating the possible combination of nanotechnologies with landscape ecology, complex habitat landscapes can be generated with details at the nanoscale. On such synthetic ecosystems, evolutionary experiments with E. coli have been performed to study the spatial biophysics of adaptation in an island biogeography on-chip. In other studies, non-pathogenic E. coli has been used as a model microorganism for understanding the effects of simulated microgravity (on Earth) on it. Uses in biological computing Since 1961, scientists have proposed the idea of genetic circuits used for computational tasks. Collaboration between biologists and computing scientists has allowed the design of digital logic gates based on the metabolism of E. coli. As the lac operon is a two-stage process, genetic regulation in the bacteria is used to realize computing functions. The process is controlled at the transcription stage of DNA into messenger RNA. Studies are being performed that attempt to program E. coli to solve complicated mathematics problems, such as the Hamiltonian path problem. A computer to control protein production of E. coli within yeast cells has been developed. A method has also been developed to use bacteria to behave as an LCD screen. In July 2017, separate experiments with E. coli published in Nature showed the potential of using living cells for computing tasks and storing information. A team formed with collaborators of the Biodesign Institute at Arizona State University and Harvard's Wyss Institute for Biologically Inspired Engineering developed a biological computer inside E. coli that responded to a dozen inputs. The team called the computer a "ribocomputer", as it was composed of ribonucleic acid. Meanwhile, Harvard researchers showed that it is possible to store information in bacteria after successfully archiving images and movies in the DNA of living E. coli cells. In 2021, a team led by biophysicist Sangram Bagh carried out a study with E. coli to solve 2 × 2 maze problems, probing the principle of distributed computing among cells. History In 1885, the German-Austrian pediatrician Theodor Escherich discovered this organism in the feces of healthy individuals. He called it Bacterium coli commune because it is found in the colon. Early classifications of prokaryotes placed these in a handful of genera based on their shape and motility (at that time Ernst Haeckel's classification of bacteria in the kingdom Monera was in place). 
Bacterium coli was the type species of the now invalid genus Bacterium when it was revealed that the former type species ("Bacterium triloculare") was missing. Following a revision of Bacterium, it was reclassified as Bacillus coli by Migula in 1895 and later reclassified in the newly created genus Escherichia, named after its original discoverer, by Aldo Castellani and Albert John Chalmers. In 1996, an outbreak of E. coli food poisoning occurred in Wishaw, Scotland, killing 21 people. This death toll was exceeded in 2011, when the 2011 Germany E. coli O104:H4 outbreak, linked to organic fenugreek sprouts, killed 53 people. In 2024, an outbreak of E. coli food poisoning that occurred across the U.S. was linked to U.S.-grown organic carrots, causing one fatality and dozens of illnesses. Uses E. coli has several practical uses besides its use as a vector for genetic experiments and processes. For example, E. coli can be used to generate synthetic propane and recombinant human growth hormone. See also BolA-like protein family Carbon monoxide-releasing molecules Contamination control Dam dcm strain Eijkman test Fecal coliform International Code of Nomenclature of Bacteria List of strains of Escherichia coli Mannan oligosaccharide-based nutritional supplements Overflow metabolism T4 rII system References External links E. coli on Protein Data Bank Gut flora bacteria Tropical diseases Model organisms Bacteria described in 1919
Escherichia coli
[ "Biology" ]
8,835
[ "Biological models", "Gut flora bacteria", "Model organisms", "Bacteria", "Escherichia coli" ]
40,172
https://en.wikipedia.org/wiki/Botulinum%20toxin
Botulinum toxin, or botulinum neurotoxin (commonly called botox), is a neurotoxic protein produced by the bacterium Clostridium botulinum and related species. It prevents the release of the neurotransmitter acetylcholine from axon endings at the neuromuscular junction, thus causing flaccid paralysis. The toxin causes the disease botulism. The toxin is also used commercially for medical and cosmetic purposes. Botulinum toxin is an acetylcholine release inhibitor and a neuromuscular blocking agent. The seven main types of botulinum toxin are named types A to G (A, B, C1, C2, D, E, F and G). New types are occasionally found. Types A and B are capable of causing disease in humans, and are also used commercially and medically. Types C–G are less common; types E and F can cause disease in humans, while the other types cause disease in other animals. Botulinum toxins are among the most potent toxins known to science. Intoxication can occur naturally as a result of either wound or intestinal infection or by ingesting formed toxin in food. The estimated human median lethal dose of type A toxin is 1.3–2.1ng/kg intravenously or intramuscularly, 10–13ng/kg when inhaled, or 1000ng/kg when taken by mouth. Medical uses Botulinum toxin is used to treat a number of therapeutic indications, many of which are not part of the approved drug label. Muscle spasticity Botulinum toxin is used to treat a number of disorders characterized by overactive muscle movement, including cerebral palsy, post-stroke spasticity, post-spinal cord injury spasticity, spasms of the head and neck, eyelid, vagina, limbs, jaw, and vocal cords. Similarly, botulinum toxin is used to relax the clenching of muscles, including those of the esophagus, jaw, lower urinary tract and bladder, or clenching of the anus which can exacerbate anal fissure. Botulinum toxin appears to be effective for refractory overactive bladder. Other muscle disorders Strabismus, otherwise known as improper eye alignment, is caused by imbalances in the actions of muscles that rotate the eyes. This condition can sometimes be relieved by weakening a muscle that pulls too strongly, or pulls against one that has been weakened by disease or trauma. Muscles weakened by toxin injection recover from paralysis after several months, so injection might seem to need to be repeated, but muscles adapt to the lengths at which they are chronically held, so that if a paralyzed muscle is stretched by its antagonist, it grows longer, while the antagonist shortens, yielding a permanent effect. In January 2014, botulinum toxin was approved by UK's Medicines and Healthcare products Regulatory Agency for treatment of restricted ankle motion due to lower-limb spasticity associated with stroke in adults. In July 2016, the US Food and Drug Administration (FDA) approved abobotulinumtoxinA (Dysport) for injection for the treatment of lower-limb spasticity in pediatric patients two years of age and older. AbobotulinumtoxinA is the first and only FDA-approved botulinum toxin for the treatment of pediatric lower limb spasticity. In the US, the FDA approves the text of the labels of prescription medicines and for which medical conditions the drug manufacturer may sell the drug. However, prescribers may freely prescribe them for any condition they wish, also known as off-label use. Botulinum toxins have been used off-label for several pediatric conditions, including infantile esotropia. 
Excessive sweating AbobotulinumtoxinA has been approved for the treatment of axillary hyperhidrosis, which cannot be managed by topical agents. Migraine In 2010, the FDA approved intramuscular botulinum toxin injections for prophylactic treatment of chronic migraine headache. However, the use of botulinum toxin injections for episodic migraine has not been approved by the FDA. Cosmetic uses In cosmetic applications, botulinum toxin is considered relatively safe and effective for reduction of facial wrinkles, especially in the uppermost third of the face. Commercial forms are marketed under the brand names Botox Cosmetic/Vistabel from Allergan, Dysport/Azzalure from Galderma and Ipsen, Xeomin/Bocouture from Merz, Jeuveau/Nuceiva from Evolus, manufactured by Daewoong in South Korea. The effects of botulinum toxin injections for glabellar lines ('11's lines' between the eyes) typically last two to four months and in some cases, product-dependent, with some patients experiencing a longer duration of effect of up to six months or longer. Injection of botulinum toxin into the muscles under facial wrinkles causes relaxation of those muscles, resulting in the smoothing of the overlying skin. Smoothing of wrinkles is usually visible three to five days after injection, with maximum effect typically a week following injection. Muscles can be treated repeatedly to maintain the smoothed appearance. DaxibotulinumtoxinA (Daxxify) was approved for medical use in the United States in September 2022. It is indicated for the temporary improvement in the appearance of moderate to severe glabellar lines (wrinkles between the eyebrows). DaxibotulinumtoxinA is an acetylcholine release inhibitor and neuromuscular blocking agent. The FDA approved daxibotulinumtoxinA based on evidence from two clinical trials (Studies GL-1 and GL-2), of 609 adults with moderate to severe glabellar lines. The trials were conducted at 30 sites in the United States and Canada. Both trials enrolled participants 18 to 75 years old with moderate to severe glabellar lines. Participants received a single intramuscular injection of daxibotulinumtoxinA or placebo at five sites within the muscles between the eyebrows. The most common side effects of daxibotulinumtoxinA are headache, drooping eyelids, and weakness of facial muscles. LetibotulinumtoxinA (Letybo) was approved for medical use in the United States in February 2024. It is indicated to temporarily improve the appearance of moderate-to-severe glabellar lines. The FDA approved letibotulinumtoxinA based on evidence from three clinical trials (BLESS I [NCT02677298], BLESS II [NCT02677805], and BLESS III [NCT03985982]) of 1,271 participants with moderate to severe wrinkles between the eyebrows for efficacy and safety assessment. These trials were conducted at 31 sites in the United States and the European Union. All three trials enrolled participants 18 to 75 years old with moderate to severe glabellar lines (wrinkles between the eyebrows). Participants received a single intramuscular injection of letibotulinumtoxinA or placebo at five sites within the muscles between the eyebrows. The most common side effects of letibotulinumtoxinA are headache, drooping of eyelid and brow, and twitching of eyelid. Other Botulinum toxin is also used to treat disorders of hyperactive nerves including excessive sweating, neuropathic pain, and some allergy symptoms. In addition to these uses, botulinum toxin is being evaluated for use in treating chronic pain. 
Studies show that botulinum toxin may be injected into arthritic shoulder joints to reduce chronic pain and improve range of motion. The use of botulinum toxin A in children with cerebral palsy is safe in the upper and lower limb muscles. Side effects While botulinum toxin is generally considered safe in a clinical setting, serious side effects from its use can occur. Most commonly, botulinum toxin can be injected into the wrong muscle group or may spread over time from the injection site, causing temporary paralysis of unintended muscles. In at least three cases, temporary diplopia was reported after subcutaneous injections for cosmetic purposes. Side effects from cosmetic use generally result from unintended paralysis of facial muscles. These include partial facial paralysis, muscle weakness, and trouble swallowing. Side effects are not limited to direct paralysis, however, and can also include headaches, flu-like symptoms, and allergic reactions. Just as cosmetic treatments last only a number of months, paralysis side effects can have similar durations. At least in some cases, these effects are reported to dissipate in the weeks after treatment. Bruising at the site of injection is not a side effect of the toxin but rather of the mode of administration, and is reported as preventable if the clinician applies pressure to the injection site; when it occurs, it is reported in specific cases to last 7–11 days. When injecting the masseter muscle of the jaw, loss of muscle function can result in a loss or reduction of the power to chew solid foods. With continued high doses, the muscles can atrophy or lose strength; research has shown that those muscles rebuild after a break from Botox. Side effects from therapeutic use can be much more varied, depending on the location of injection and the dose of toxin injected. In general, side effects from therapeutic use can be more serious than those that arise during cosmetic use. These can arise from paralysis of critical muscle groups and can include arrhythmia, heart attack, and in some cases, seizures, respiratory arrest, and death. Additionally, side effects common in cosmetic use are also common in therapeutic use, including trouble swallowing, muscle weakness, allergic reactions, and flu-like syndromes. In response to the occurrence of these side effects, in 2008 the FDA notified the public of the potential dangers of botulinum toxin as a therapeutic. Namely, the toxin can spread to areas distant from the site of injection and paralyze unintended muscle groups, especially when used for treating muscle spasticity in children with cerebral palsy. In 2009, the FDA announced that boxed warnings would be added to available botulinum toxin products, warning of their ability to spread from the injection site. However, the clinical use of botulinum toxin A in children with cerebral palsy has been shown to be safe with minimal side effects. Additionally, the FDA announced name changes to several botulinum toxin products to emphasize that the products are not interchangeable and require different doses for proper use. Botox and Botox Cosmetic were given the generic name onabotulinumtoxinA, Myobloc became rimabotulinumtoxinB, and Dysport retained its generic name of abobotulinumtoxinA. In conjunction with this, the FDA issued a communication to health care professionals reiterating the new drug names and the approved uses for each. 
A similar warning was issued by Health Canada in 2009, warning that botulinum toxin products can spread to other parts of the body. Role in disease Botulinum toxin produced by Clostridium botulinum (an anaerobic, gram-positive bacterium) is the cause of botulism. Humans most commonly ingest the toxin from eating improperly canned foods in which C. botulinum has grown. However, the toxin can also be introduced through an infected wound. In infants, the bacteria can sometimes grow in the intestines and produce botulinum toxin within the intestine and can cause a condition known as floppy baby syndrome. In all cases, the toxin can then spread, blocking nerves and muscle function. In severe cases, the toxin can block nerves controlling the respiratory system or heart, resulting in death. Botulism can be difficult to diagnose, as it may appear similar to diseases such as Guillain–Barré syndrome, myasthenia gravis, and stroke. Other tests, such as brain scan and spinal fluid examination, may help to rule out other causes. If the symptoms of botulism are diagnosed early, various treatments can be administered. In an effort to remove contaminated food that remains in the gut, enemas or induced vomiting may be used. For wound infections, infected material may be removed surgically. Botulinum antitoxin is available and may be used to prevent the worsening of symptoms, though it will not reverse existing nerve damage. In severe cases, mechanical respiration may be used to support people with respiratory failure. The nerve damage heals over time, generally over weeks to months. With proper treatment, the case fatality rate for botulinum poisoning can be greatly reduced. Two preparations of botulinum antitoxins are available for treatment of botulism. Trivalent (serotypes A, B, E) botulinum antitoxin is derived from equine sources using whole antibodies. The second antitoxin is heptavalent botulinum antitoxin (serotypes A, B, C, D, E, F, G), which is derived from equine antibodies that have been altered to make them less immunogenic. This antitoxin is effective against all main strains of botulism. Mechanism of action Botulinum toxin exerts its effect by cleaving key proteins required for nerve activation. First, the toxin binds specifically to presynaptic surface of neurons that use the neurotransmitter acetylcholine. Once bound to the nerve terminal, the neuron takes up the toxin into a vesicle by receptor-mediated endocytosis. As the vesicle moves farther into the cell, it acidifies, activating a portion of the toxin that triggers it to push across the vesicle membrane and into the cell cytoplasm. Botulinum neurotoxins recognize distinct classes of receptors simultaneously (gangliosides, synaptotagmin and SV2). Once inside the cytoplasm, the toxin cleaves SNARE proteins (proteins that mediate vesicle fusion, with their target membrane bound compartments) meaning that the acetylcholine vesicles cannot bind to the intracellular cell membrane, preventing the cell from releasing vesicles of neurotransmitter. This stops nerve signaling, leading to flaccid paralysis. The toxin itself is released from the bacterium as a single chain, then becomes activated when cleaved by its own proteases. The active form consists of a two-chain protein composed of a 100-kDa heavy chain polypeptide joined via disulfide bond to a 50-kDa light chain polypeptide. 
The heavy chain contains domains with several functions; it has the domain responsible for binding specifically to presynaptic nerve terminals, as well as the domain responsible for mediating translocation of the light chain into the cell cytoplasm as the vacuole acidifies. The light chain is a M27-family zinc metalloprotease and is the active part of the toxin. It is translocated into the host cell cytoplasm where it cleaves the host protein SNAP-25, a member of the SNARE protein family, which is responsible for fusion. The cleaved SNAP-25 cannot mediate fusion of vesicles with the host cell membrane, thus preventing the release of the neurotransmitter acetylcholine from axon endings. This blockage is slowly reversed as the toxin loses activity and the SNARE proteins are slowly regenerated by the affected cell. The seven toxin serotypes (A–G) are traditionally separated by their antigenicity. They have different tertiary structures and sequence differences. While the different toxin types all target members of the SNARE family, different toxin types target different SNARE family members. The A, B, and E serotypes cause human botulism, with the activities of types A and B enduring longest in vivo (from several weeks to months). Existing toxin types can recombine to create "hybrid" (mosaic, chimeric) types. Examples include BoNT/CD, BoNT/DC, and BoNT/FA, with the first letter indicating the light chain type and the latter indicating the heavy chain type. BoNT/FA received considerable attention under the name "BoNT/H", as it was mistakenly thought it could not be neutralized by any existing antitoxin. Botulinum toxins are closely related to tetanus toxin. The two are collectively known as Clostridium neurotoxins and the light chain is classified by MEROPS as family M27. Clostridium neurotoxins belong in the wider family of AB toxins, which also includes Anthrax toxin and Diphtheria toxin. Nonclassical types include BoNT/X (), which is toxic in mice and possibly in humans; a BoNT/J () found in cow Enterococcus; and a BoNT/Wo () found in the rice-colonizing Weissella oryzae. History Initial descriptions and discovery of Clostridium botulinum One of the earliest recorded outbreaks of foodborne botulism occurred in 1793 in the village of Wildbad in what is now Baden-Württemberg, Germany. Thirteen people became sick and six died after eating pork stomach filled with blood sausage, a local delicacy. Additional cases of fatal food poisoning in Württemberg led the authorities to issue a public warning against consuming smoked blood sausages in 1802 and to collect case reports of "sausage poisoning". Between 1817 and 1822, the German physician Justinus Kerner published the first complete description of the symptoms of botulism, based on extensive clinical observations and animal experiments. He concluded that the toxin develops in bad sausages under anaerobic conditions, is a biological substance, acts on the nervous system, and is lethal even in small amounts. Kerner hypothesized that this "sausage toxin" could be used to treat a variety of diseases caused by an overactive nervous system, making him the first to suggest that it could be used therapeutically. In 1870, the German physician Müller coined the term botulism to describe the disease caused by sausage poisoning, from the Latin word , meaning 'sausage'. In 1895 Émile van Ermengem, a Belgian microbiologist, discovered what is now called Clostridium botulinum and confirmed that a toxin produced by the bacteria causes botulism. 
On 14 December 1895, there was a large outbreak of botulism in the Belgian village of Ellezelles that occurred at a funeral where people ate pickled and smoked ham; three of them died. By examining the contaminated ham and performing autopsies on the people who died after eating it, van Ermengem isolated an anaerobic microorganism that he called Bacillus botulinus. He also performed experiments on animals with ham extracts, isolated bacterial cultures, and toxins extracts from the bacteria. From these he concluded that the bacteria themselves do not cause foodborne botulism, but rather produce a toxin that causes the disease after it is ingested. As a result of Kerner's and van Ermengem's research, it was thought that only contaminated meat or fish could cause botulism. This idea was refuted in 1904 when a botulism outbreak occurred in Darmstadt, Germany, because of canned white beans. In 1910, the German microbiologist J. Leuchs published a paper showing that the outbreaks in Ellezelles and Darmstadt were caused by different strains of Bacillus botulinus and that the toxins were serologically distinct. In 1917, Bacillus botulinus was renamed Clostridium botulinum, as it was decided that term Bacillus should only refer to a group of aerobic microorganisms, while Clostridium would be only used to describe a group of anaerobic microorganisms. In 1919, Georgina Burke used toxin-antitoxin reactions to identify two strains of Clostridium botulinum, which she designated A and B. Food canning Over the next three decades, 1895–1925, as food canning was approaching a billion-dollar-a-year industry, botulism was becoming a public health hazard. Karl Friedrich Meyer, a Swiss-American veterinary scientist, created a center at the Hooper Foundation in San Francisco, where he developed techniques for growing the organism and extracting the toxin, and conversely, for preventing organism growth and toxin production, and inactivating the toxin by heating. The California canning industry was thereby preserved. World War II With the outbreak of World War II, weaponization of botulinum toxin was investigated at Fort Detrick in Maryland. Carl Lamanna and James Duff developed the concentration and crystallization techniques that Edward J. Schantz used to create the first clinical product. When the Army's Chemical Corps was disbanded, Schantz moved to the Food Research Institute in Wisconsin, where he manufactured toxin for experimental use and provided it to the academic community. The mechanism of botulinum toxin action – blocking the release of the neurotransmitter acetylcholine from nerve endings – was elucidated in the mid-20th century, and remains an important research topic. Nearly all toxin treatments are based on this effect in various body tissues. Strabismus Ophthalmologists specializing in eye muscle disorders (strabismus) had developed the method of EMG-guided injection (using the electromyogram, the electrical signal from an activated muscle, to guide injection) of local anesthetics as a diagnostic technique for evaluating an individual muscle's contribution to an eye movement. Because strabismus surgery frequently needed repeating, a search was undertaken for non-surgical, injection treatments using various anesthetics, alcohols, enzymes, enzyme blockers, and snake neurotoxins. Finally, inspired by Daniel B. Drachman's work with chicks at Johns Hopkins, Alan B. Scott and colleagues injected botulinum toxin into monkey extraocular muscles. 
The result was remarkable; a few picograms induced paralysis that was confined to the target muscle, long in duration, and without side effects. After working out techniques for freeze-drying, buffering with albumin, and assuring sterility, potency, and safety, Scott applied to the FDA for investigational drug use, and began manufacturing botulinum type A neurotoxin in his San Francisco lab. He injected the first strabismus patients in 1977, reported its clinical utility in 1980, and had soon trained hundreds of ophthalmologists in EMG-guided injection of the drug he named Oculinum ("eye aligner"). In 1986, Oculinum Inc, Scott's micromanufacturer and distributor of botulinum toxin, was unable to obtain product liability insurance, and could no longer supply the drug. As supplies became exhausted, people who had come to rely on periodic injections became desperate. For four months, as liability issues were resolved, American blepharospasm patients traveled to Canadian eye centers for their injections. Based on data from thousands of people collected by 240 investigators, Oculinum Inc (which was soon acquired by Allergan) received FDA approval in 1989 to market Oculinum for clinical use in the United States to treat adult strabismus and blepharospasm. Allergan then began using the trademark Botox. This original approval was granted under the 1983 US Orphan Drug Act. Cosmetics The effect of botulinum toxin type-A on reducing and eliminating forehead wrinkles was first described and published by Richard Clark, MD, a plastic surgeon from Sacramento, California. In 1987 Clark was challenged with eliminating the disfigurement caused by only the right side of the forehead muscles functioning after the left side of the forehead was paralyzed during a facelift procedure. This patient had desired to look better from her facelift, but was experiencing bizarre unilateral right forehead eyebrow elevation while the left eyebrow drooped, and she constantly demonstrated deep expressive right forehead wrinkles while the left side was perfectly smooth due to the paralysis. Clark was aware that Botulinum toxin was safely being used to treat babies with strabismus and he requested and was granted FDA approval to experiment with Botulinum toxin to paralyze the moving and wrinkling normal functioning right forehead muscles to make both sides of the forehead appear the same. This study and case report of the cosmetic use of Botulinum toxin to treat a cosmetic complication of a cosmetic surgery was the first report on the specific treatment of wrinkles and was published in the journal Plastic and Reconstructive Surgery in 1989. Editors of the journal of the American Society of Plastic Surgeons have clearly stated "the first described use of the toxin in aesthetic circumstances was by Clark and Berris in 1989." Also in 1987, Jean and Alastair Carruthers, both doctors in Vancouver, British Columbia, observed that blepharospasm patients who received injections around the eyes and upper face also enjoyed diminished facial glabellar lines ("frown lines" between the eyebrows). Alastair Carruthers reported that others at the time also noticed these effects and discussed the cosmetic potential of botulinum toxin. Unlike other investigators, the Carruthers did more than just talk about the possibility of using botulinum toxin cosmetically. They conducted a clinical study on otherwise normal individuals whose only concern was their eyebrow furrow. 
They performed their study between 1987 and 1989 and presented their results at the 1990 annual meeting of the American Society for Dermatologic Surgery. Their findings were subsequently published in 1992. Chronic pain William J. Binder reported in 2000 that people who had cosmetic injections around the face reported relief from chronic headache. This was initially thought to be an indirect effect of reduced muscle tension, but the toxin is now known to inhibit the release of peripheral nociceptive neurotransmitters, suppressing the central pain processing systems responsible for migraine headache. Society and culture Economics Botulinum toxin injections are the most common cosmetic operation, with 7.4 million procedures in the United States, according to the American Society of Plastic Surgeons. The global market for botulinum toxin products, driven by their cosmetic applications, was forecast to reach $2.9 billion by 2018. The facial aesthetics market, of which they are a component, was forecast to reach $4.7 billion ($2 billion in the US) in the same timeframe. US market In 2020, 4,401,536 botulinum toxin Type A procedures were administered. In 2019, the botulinum toxin market was worth US$3.19 billion. Botox cost Botox cost is generally determined by the number of units administered (avg. $10–30 per unit) or by the area ($200–1000) and depends on the expertise of the physician, clinic location, number of units, and treatment complexity. Insurance In the US, botox for medical purposes is usually covered by insurance if deemed medically necessary by a doctor, and covers a plethora of medical problems including overactive bladder (OAB), urinary incontinence due to neurologic conditions, headaches and migraines, TMJ, spasticity in adults, cervical dystonia in adults, severe axillary hyperhidrosis (or other areas of the body), blepharospasm, and upper or lower limb spasticity. Hyperhidrosis Botox for excessive sweating is FDA approved. Cosmetic Standard areas for aesthetic botox injections include facial and other areas that can form fine lines and wrinkles due to everyday muscle contractions and/or facial expressions such as smiling, frowning, squinting, and raising eyebrows. These areas include the glabellar region between the eyebrows, horizontal lines on the forehead, crow's feet around the eyes, and even circular bands that form around the neck secondary to platysmal hyperactivity. Bioterrorism Botulinum toxin has been recognized as a potential agent for use in bioterrorism. It can be absorbed through the eyes, mucous membranes, respiratory tract, and non-intact skin. The effects of botulinum toxin differ from those of nerve agents in that botulism symptoms develop relatively slowly (over several days), while nerve agent effects are generally much more rapid. Evidence suggests that nerve agent exposure (simulated by injection of atropine and pralidoxime) will increase mortality by enhancing botulinum toxin's mechanism of toxicity. With regard to detection, protocols using NBC detection equipment (such as M-8 paper or the ICAM) will not indicate a "positive" when samples containing botulinum toxin are tested. To confirm a diagnosis of botulinum toxin poisoning therapeutically, or to provide evidence in death investigations, botulinum toxin may be quantitated by immunoassay of human biological fluids; serum levels of 12–24 mouse LD50 units per milliliter have been detected in poisoned people. 
During the early 1980s, German and French newspapers reported that the police had raided a Baader-Meinhof gang safe house in Paris and had found a makeshift laboratory that contained flasks full of Clostridium botulinum, which makes botulinum toxin. Their reports were later found to be incorrect; no such lab was ever found. Brand names Commercial forms are marketed under the brand names Botox (onabotulinumtoxinA), Dysport/Azzalure (abobotulinumtoxinA), Letybo (letibotulinumtoxinA), Myobloc (rimabotulinumtoxinB), Xeomin/Bocouture (incobotulinumtoxinA), and Jeuveau (prabotulinumtoxinA). Botulinum toxin A is sold under the brand names Jeuveau, Botox, and Xeomin. Botulinum toxin B is sold under the brand name Myobloc. In the United States, botulinum toxin products are manufactured by a variety of companies, for both therapeutic and cosmetic use. A US supplier reported in its company materials in 2011 that it could "supply the world's requirements for 25 indications approved by Government agencies around the world" with less than one gram of raw botulinum toxin. Myobloc or Neurobloc, a botulinum toxin type B product, is produced by Solstice Neurosciences, a subsidiary of US WorldMeds. AbobotulinumtoxinA (Dysport), a therapeutic formulation of the type A toxin manufactured by Galderma in the United Kingdom, is licensed for the treatment of focal dystonias and certain cosmetic uses in the US and other countries. LetibotulinumtoxinA (Letybo) was approved for medical use in the United States in February 2024. Besides the three primary US manufacturers, numerous other botulinum toxin producers are known. Xeomin, manufactured in Germany by Merz, is also available for both therapeutic and cosmetic use in the US. Lanzhou Institute of Biological Products in China manufactures a botulinum toxin type-A product; as of 2014, it was the only botulinum toxin type-A approved in China. Botulinum toxin type-A is also sold as Lantox and Prosigne on the global market. Neuronox, a botulinum toxin type-A product, was introduced by Medy-Tox of South Korea in 2009. Toxin production Botulinum toxins are produced by bacteria of the genus Clostridium, namely C. botulinum, C. butyricum, C. baratii and C. argentinense, which are widely distributed, including in soil and dust. The bacteria can also be found inside homes on floors, carpets, and countertops even after cleaning. Complicating the problem is that the taxonomy of C. botulinum remains chaotic. The toxin has likely been horizontally transferred across lineages, contributing to the multi-species pattern seen today. Food-borne botulism results, indirectly, from ingestion of food contaminated with Clostridium spores, where exposure to an anaerobic environment allows the spores to germinate, after which the bacteria can multiply and produce toxin. Critically, it is ingestion of toxin rather than of spores or vegetative bacteria that causes botulism. Botulism is nevertheless known to be transmitted through canned foods not cooked correctly before canning or after can opening, and so is preventable. Infant botulism arising from consumption of honey or any other food that can carry these spores can be prevented by eliminating these foods from the diets of children less than 12 months old. Organism and toxin susceptibilities Proper refrigeration at low temperatures slows the growth of C. botulinum. The organism is also susceptible to high salt, high oxygen, and low pH levels. The toxin itself is rapidly destroyed by heat, such as in thorough cooking. 
The spores that produce the toxin are heat-tolerant and will survive boiling water for an extended period of time. The botulinum toxin itself is denatured, and thus deactivated, at temperatures greater than about 85 °C for five minutes. As a zinc metalloprotease (see below), the toxin's activity is also susceptible, post-exposure, to inhibition by protease inhibitors, e.g., zinc-coordinating hydroxamates. Research Blepharospasm and strabismus University-based ophthalmologists in the US and Canada further refined the use of botulinum toxin as a therapeutic agent. By 1985, a scientific protocol of injection sites and dosage had been empirically determined for treatment of blepharospasm and strabismus. Side effects in treatment of these conditions were deemed to be rare, mild and treatable. The beneficial effects of the injection lasted only four to six months. Thus, blepharospasm patients required re-injection two or three times a year. In 1986, Scott's micromanufacturer and distributor of Botox was no longer able to supply the drug because of an inability to obtain product liability insurance. People became desperate, as supplies of Botox were gradually consumed, forcing him to abandon people who would have been due for their next injection. For a period of four months, American blepharospasm patients had to arrange to have their injections performed by participating doctors at Canadian eye centers until the liability issues could be resolved. In December 1989, Botox was approved by the US FDA for the treatment of strabismus, blepharospasm, and hemifacial spasm in people over 12 years old. In the case of treatment of infantile esotropia in people younger than 12 years of age, several studies have yielded differing results. Cosmetic The effect of botulinum toxin type-A on reducing and eliminating forehead wrinkles was first described and published by Richard Clark, MD, a plastic surgeon from Sacramento, California. In 1987 Clark was challenged with eliminating the disfigurement caused by only the right side of the forehead muscles functioning after the left side of the forehead was paralyzed during a facelift procedure. This patient had desired to look better from her facelift, but was experiencing bizarre unilateral right forehead eyebrow elevation while the left eyebrow drooped, and she emoted with deep expressive right forehead wrinkles while the left side was perfectly smooth due to the paralysis. Clark was aware that botulinum toxin was safely being used to treat babies with strabismus, and he requested and was granted FDA approval to experiment with botulinum toxin to paralyze the moving and wrinkling, normally functioning right forehead muscles to make both sides of the forehead appear the same. This study and case report on the cosmetic use of botulinum toxin to treat a cosmetic complication of a cosmetic surgery was the first report on the specific treatment of wrinkles and was published in the journal Plastic and Reconstructive Surgery in 1989. Editors of the journal of the American Society of Plastic Surgeons have clearly stated "the first described use of the toxin in aesthetic circumstances was by Clark and Berris in 1989." J. D. and J. A. Carruthers also studied and reported in 1992 the use of botulinum toxin type-A as a cosmetic treatment. They conducted a study of participants whose only concern was their glabellar forehead wrinkle or furrow. Study participants were otherwise normal. Sixteen of seventeen participants available for follow-up demonstrated a cosmetic improvement. 
This study was reported at a meeting in 1991. The study for the treatment of glabellar frown lines was published in 1992. This result was subsequently confirmed by other groups (Brin, and the Columbia University group under Monte Keen). The FDA announced regulatory approval of botulinum toxin type A (Botox Cosmetic) to temporarily improve the appearance of moderate-to-severe frown lines between the eyebrows (glabellar lines) in 2002 after extensive clinical trials. Well before this, the cosmetic use of botulinum toxin type A became widespread. The results of Botox Cosmetic can last up to four months and may vary with each patient. The US Food and Drug Administration (FDA) approved an alternative product-safety testing method in response to increasing public concern that LD50 testing was required for each batch sold in the market. Botulinum toxin type-A has also been used in the treatment of gummy smiles; the material is injected into the hyperactive muscles of upper lip, which causes a reduction in the upward movement of lip thus resulting in a smile with a less exposure of gingiva. Botox is usually injected in the three lip elevator muscles that converge on the lateral side of the ala of the nose; the levator labii superioris (LLS), the levator labii superioris alaeque nasi muscle (LLSAN), and the zygomaticus minor (ZMi). Upper motor neuron syndrome Botulinum toxin type-A is now a common treatment for muscles affected by the upper motor neuron syndrome (UMNS), such as cerebral palsy, for muscles with an impaired ability to effectively lengthen. Muscles affected by UMNS frequently are limited by weakness, loss of reciprocal inhibition, decreased movement control, and hypertonicity (including spasticity). In January 2014, Botulinum toxin was approved by UK's Medicines and Healthcare products Regulatory Agency (MHRA) for the treatment of ankle disability due to lower limb spasticity associated with stroke in adults. Joint motion may be restricted by severe muscle imbalance related to the syndrome, when some muscles are markedly hypertonic, and lack effective active lengthening. Injecting an overactive muscle to decrease its level of contraction can allow improved reciprocal motion, so improved ability to move and exercise. Sialorrhea Sialorrhea is a condition where oral secretions are unable to be eliminated, causing pooling of saliva in the mouth. This condition can be caused by various neurological syndromes such as Bell's palsy, intellectual disability, and cerebral palsy. Injection of botulinum toxin type-A into salivary glands is useful in reducing the secretions. Cervical dystonia Botulinum toxin type-A is used to treat cervical dystonia, but it can become ineffective after a time. Botulinum toxin type B received FDA approval for treatment of cervical dystonia in December 2000. Brand names for botulinum toxin type-B include Myobloc in the United States and Neurobloc in the European Union. Chronic migraine Onabotulinumtoxin A (trade name Botox) received FDA approval for treatment of chronic migraines on 15 October 2010. The toxin is injected into the head and neck to treat these chronic headaches. Approval followed evidence presented to the agency from two studies funded by Allergan showing a very slight improvement in incidence of chronic migraines for those with migraines undergoing the Botox treatment. 
Since then, several randomized controlled trials have shown botulinum toxin type A to improve headache symptoms and quality of life when used prophylactically for participants with chronic migraine who exhibit headache characteristics consistent with: pressure perceived from an outside source, shorter total duration of chronic migraines (<30 years), "detoxification" of participants with coexisting chronic daily headache due to medication overuse, and no current history of other preventive headache medications. Depression A few small trials have found benefits in people with depression. A 2021 meta-analysis supports the usefulness of botox in unipolar depression, but finds significant heterogeneity among the findings. The main hypothesis for its action is based on the facial feedback hypothesis. Another hypothesis involves a connection between the facial muscles and specific brain regions in animals, but additional evidence is required to support or disprove this theory. Premature ejaculation Botulinum toxin for the treatment of premature ejaculation has been under development since August 2013, and is in Phase II trials. References Further reading External links Drugs developed by AbbVie Acetylcholine release inhibitors Bacterial toxins Protein toxins Botulism EC 3.4.24 Muscle relaxants Neurotoxins Peripherally selective drugs Plastic surgery Protein domains
Botulinum toxin
[ "Chemistry", "Biology" ]
8,608
[ "Toxins by chemical classification", "Protein toxins", "Protein classification", "Protein domains", "Neurochemistry", "Neurotoxins" ]
40,180
https://en.wikipedia.org/wiki/Bessemer%20process
The Bessemer process was the first inexpensive industrial process for the mass production of steel from molten pig iron before the development of the open hearth furnace. The key principle is removal of impurities from the iron by oxidation with air being blown through the molten iron. The oxidation also raises the temperature of the iron mass and keeps it molten. The modern process is named after its inventor, the Englishman Henry Bessemer, who took out a patent on the process in 1856. The process was said to be independently discovered in 1851 by the American inventor William Kelly, though the claim is controversial. The process using a basic refractory lining is known as the "basic Bessemer process" or Gilchrist–Thomas process after the English discoverers Percy Gilchrist and Sidney Gilchrist Thomas. History Patent In the early to mid-1850s, the American inventor William Kelly experimented with a method similar to the Bessemer process. Wagner writes that Kelly may have been inspired by techniques introduced by Chinese ironworkers hired by Kelly in 1854. The claim that both Kelly and Bessemer invented the same process remains controversial. When Bessemer's patent for the process was reported by Scientific American, Kelly responded by writing a letter to the magazine. In the letter, Kelly states that he had previously experimented with the process and claimed that Bessemer knew of Kelly's discovery. He wrote that "I have reason to believe my discovery was known in England three or four years ago, as a number of English puddlers visited this place to see my new process. Several of them have since returned to England and may have spoken of my invention there." It is suggested that Kelly's process was less developed and less successful than Bessemer's process. Sir Henry Bessemer described the origin of his invention in his autobiography written in 1890. During the outbreak of the Crimean War, many English industrialists and inventors became interested in military technology. According to Bessemer, his invention was inspired by a conversation with Napoleon III in 1854 pertaining to the steel required for better artillery. Bessemer claimed that it "was the spark which kindled one of the greatest revolutions that the present century had to record, for during my solitary ride in a cab that night from Vincennes to Paris, I made up my mind to try what I could to improve the quality of iron in the manufacture of guns." At the time, steel was used to make only small items like cutlery and tools, but was too expensive for cannons. Starting in January 1855, he began working on a way to produce steel in the massive quantities required for artillery, and by October he had filed his first patent related to the Bessemer process. He patented the method a year later, in 1856. William Kelly was awarded a priority patent in 1857. Bessemer licensed the patent for his process to four ironmasters, for a total of £27,000, but the licensees failed to produce the quality of steel he had promised—it was "rotten hot and rotten cold", according to his friend, William Clay—and he later bought the licenses back for £32,500. His plan had been to offer the licenses to one company in each of several geographic areas, at a royalty price per ton that included a lower rate on a proportion of their output in order to encourage production, but not so large a proportion that they might decide to reduce their selling prices. By this method he hoped to cause the new process to gain in standing and market share. 
He realised that the technical problem was due to impurities in the iron and concluded that the solution lay in knowing when to turn off the flow of air in his process so that the impurities were burned off but just the right amount of carbon remained. However, despite spending tens of thousands of pounds on experiments, he could not find the answer. Certain grades of steel are sensitive to the 78% nitrogen which was part of the air blast passing through the steel. The solution was first discovered by English metallurgist Robert Forester Mushet, who had carried out thousands of experiments in the Forest of Dean. His method was to first burn off, as far as possible, all the impurities and carbon, then reintroduce carbon and manganese by adding an exact amount of spiegeleisen, an alloy of iron and manganese with trace amounts of carbon and silicon. This had the effect of improving the quality of the finished product, increasing its malleability—its ability to withstand rolling and forging at high temperatures—and making it more suitable for a vast array of uses. Mushet's patent ultimately lapsed due to his inability to pay the patent fees, and it was acquired by Bessemer. Bessemer earned over 5 million dollars in royalties from the patents. The first company to license the process was the Manchester firm of W & J Galloway, and they did so before Bessemer announced it at Cheltenham in 1856. They are not included in his list of the four to whom he refunded the license fees. However, they subsequently rescinded their license in 1858 in return for the opportunity to invest in a partnership with Bessemer and others. This partnership began to manufacture steel in Sheffield from 1858, initially using imported charcoal pig iron from Sweden. This was the first commercial production. A 20% share in the Bessemer patent was also purchased for use in Sweden and Norway by the Swedish trader and consul Göran Fredrik Göransson during a visit to London in 1857. During the first half of 1858, Göransson, together with a small group of engineers, experimented with the Bessemer process at Edsken near Hofors, Sweden, before he finally succeeded. Later in 1858 he again met with Henry Bessemer in London, managed to convince him of his success with the process, and negotiated the right to sell his steel in England. Production continued in Edsken, but it was far too small for the industrial-scale production needed. In 1862 Göransson built a new factory for his Högbo Iron and Steel Works company on the shore of Lake Storsjön, where the town of Sandviken was founded. The company was renamed Sandviken's Ironworks; it continued to grow and eventually became Sandvik in the 1970s. Industrial revolution Alexander Lyman Holley contributed significantly to the success of Bessemer steel in the United States. His A Treatise on Ordnance and Armor is an important work on contemporary weapons manufacturing and steel-making practices. In 1862, he visited Bessemer's Sheffield works and became interested in licensing the process for use in the US. Upon returning to the US, Holley met with two iron producers from Troy, New York, John F. Winslow and John Augustus Griswold, who asked him to return to the United Kingdom and negotiate with Bessemer on their behalf. Holley secured a license for Griswold and Winslow to use Bessemer's patented processes and returned to the United States in late 1863. The trio began setting up a mill in Troy, New York in 1865. 
The factory contained a number of Holley's innovations that greatly improved productivity over Bessemer's factory in Sheffield, and the owners gave a successful public exhibition in 1867. The Troy factory attracted the attention of the Pennsylvania Railroad, which wanted to use the new process to manufacture steel rail. It funded Holley's second mill as part of its Pennsylvania Steel subsidiary. Between 1866 and 1877, the partners were able to license a total of 11 Bessemer steel mills. One of the investors they attracted was Andrew Carnegie, who saw great promise in the new steel technology after a visit to Bessemer in 1872, and saw it as a useful adjunct to his existing businesses, the Keystone Bridge Company and the Union Iron Works. Holley built the new steel mill for Carnegie, and continued to improve and refine the process. The new mill, known as the Edgar Thomson Steel Works, opened in 1875, and started the growth of the United States as a major world steel producer. Using the Bessemer process, Carnegie Steel was able to reduce the cost of steel railroad rails from $100 per ton to $50 per ton between 1873 and 1875. The price of steel continued to fall until Carnegie was selling rails for $18 per ton by the 1890s. Prior to the opening of Carnegie's Thomson Works, steel output in the United States totaled around 157,000 tons per year. By 1910, American companies were producing 26 million tons of steel annually. William Walker Scranton, manager and owner of the Lackawanna Iron & Coal Company in Scranton, Pennsylvania, had also investigated the process in Europe. He built a mill in 1876 using the Bessemer process for steel rails and quadrupled his production. Bessemer steel was used in the United States primarily for railroad rails. During the construction of the Brooklyn Bridge, a major dispute arose over whether crucible steel should be used instead of the cheaper Bessemer steel. In 1877, Abram Hewitt wrote a letter urging against the use of Bessemer steel. Bids had been submitted for both crucible steel and Bessemer steel; John A. Roebling's Sons submitted the lowest bid for Bessemer steel, but at Hewitt's direction, the contract was awarded to J. Lloyd Haigh Co. Technical details Using the Bessemer process, it took between 10 and 20 minutes to convert three to five tons of iron into steel; it would previously have taken at least a full day of heating, stirring and reheating to achieve this. Oxidation Blowing air through the molten pig iron introduces oxygen into the melt. This oxidizes impurities such as silicon, manganese, and carbon. These oxides either escape as gas or form a solid slag. The refractory lining of the converter plays a role in the conversion — clay linings may be used when little phosphorus is present in the raw material; Bessemer himself used ganister, a form of sandstone, in this acid Bessemer process. When the phosphorus content is high, dolomite or magnesite linings are used instead, in the basic Bessemer (limestone) process. Materials such as spiegeleisen (a ferromanganese alloy) can then be added to the molten steel to establish specific properties. Process When the required steel has formed, it is poured into ladles and then transferred into moulds while the (lighter) slag is left behind. The conversion process, called the "blow", initially took approximately 20 minutes. During this interval, the progress of the oxidation of the impurities is judged by the appearance of the flame in the mouth of the converter. 
The human eye was later replaced by photoelectric methods of monitoring the flame, increasing ultimate precision. After the blow, carbon is readded to the liquid metal and other alloying materials are added. A Bessemer converter could treat a "heat" (batch of hot metal) of 5 to 30 tons at a time. They were usually operated in pairs: one was blown while the other was filled or tapped. "Basic" vs. acidic Bessemer process Industrial chemist Sidney Gilchrist Thomas tackled the problem of phosphorus in iron, which resulted in the production of low grade steel. Believing that he had discovered a solution, he contacted his cousin, Percy Gilchrist, who was a chemist at the Blaenavon Ironworks. The manager there, Edward Martin, offered Thomas test equipment and helped him draw up a patent issued in May 1878. Thomas's invention consisted of using dolomite or limestone linings for the Bessemer converter rather than clay, and it became known as the 'basic' Bessemer rather than the 'acid' Bessemer process. An additional advantage was that the processes formed more slag in the converter, and this could be recovered and used profitably as fertilizer. Importance In 1898, Scientific American published an article called Bessemer Steel and its Effect on the World explaining the significant economic effects of the increased supply in cheap steel. They noted that the expansion of railroads into previously sparsely inhabited regions of the country had led to settlement in those regions, and had made the trade of certain goods profitable, which had previously been too costly to transport. The Bessemer process revolutionized steel manufacture by decreasing its cost, from £40 per long ton to £6–7 per long ton, along with greatly increasing the scale and speed of production of this vital raw material. The process also decreased the labor requirements for steel-making. Before it was introduced, steel was far too expensive to make bridges or the framework for buildings and thus wrought iron had been used throughout the Industrial Revolution. After the introduction of the Bessemer process, steel and wrought iron became similarly priced, and some users, primarily railroads, turned to steel. Quality problems, such as brittleness caused by nitrogen in the blowing air, prevented Bessemer steel from being used for many structural applications. Open-hearth steel was suitable for structural applications. Steel greatly improved the productivity of railroads. Steel rails lasted ten times longer than iron rails. Steel rails, which became heavier as prices fell, could carry heavier locomotives, which could pull longer trains. Steel rail cars were longer and were able to increase the freight to car weight from 1:1 to 2:1. Obsolescence As early as 1895 in the UK it was being noted that the heyday of the Bessemer process was over and that the open hearth method predominated. The Iron and Coal Trades Review said that it was "in a semi-moribund condition. Year after year, it has not only ceased to make progress, but it has absolutely declined." It has been suggested, both at that time and more recently, that the cause of this was the lack of trained personnel and investment in technology rather than anything intrinsic to the process itself. For example, one of the major causes of the decline of the giant ironmaking company Bolckow Vaughan of Middlesbrough was its failure to upgrade its technology. 
The basic process, the Thomas-Gilchrist process, remained in use longer, especially in Continental Europe, where iron ores were of high phosphorus content and the open-hearth process was not able to remove all of the phosphorus; almost all inexpensive construction steel in Germany was produced with this method in the 1950s and 1960s. It was eventually superseded by basic oxygen steelmaking. In the U.S., commercial steel production using this method stopped in 1968. It was replaced by processes such as the basic oxygen (Linz–Donawitz) process, which offered better control of the final chemistry. The Bessemer process was so fast (10–20 minutes for a heat) that it allowed little time for chemical analysis or adjustment of the alloying elements in the steel. Bessemer converters did not remove phosphorus efficiently from the molten steel; as low-phosphorus ores became more expensive, conversion costs increased. The process permitted only a limited amount of scrap steel to be charged, further increasing costs, especially when scrap was inexpensive. Use of electric arc furnace technology competed favourably with the Bessemer process, resulting in its obsolescence. Basic oxygen steelmaking is essentially an improved version of the Bessemer process (decarburization by blowing oxygen as gas into the heat rather than burning the excess carbon away by adding oxygen-carrying substances into the heat). The advantages of a pure oxygen blast over an air blast were known to Henry Bessemer, but 19th-century technology was not advanced enough to allow for the production of the large quantities of pure oxygen necessary to make it economical. See also Cementation (metallurgy) process Methods of crucible steel production References Bibliography External links English inventions Steelmaking Metallurgical processes Economic history of the United States Economic history of the United Kingdom
Bessemer process
[ "Chemistry", "Materials_science" ]
3,206
[ "Metallurgical processes", "Steelmaking", "Metallurgy" ]
40,239
https://en.wikipedia.org/wiki/Geosynchronous%20orbit
A geosynchronous orbit (sometimes abbreviated GSO) is an Earth-centered orbit with an orbital period that matches Earth's rotation on its axis, 23 hours, 56 minutes, and 4 seconds (one sidereal day). The synchronization of rotation and orbital period means that, for an observer on Earth's surface, an object in geosynchronous orbit returns to exactly the same position in the sky after a period of one sidereal day. Over the course of a day, the object's position in the sky may remain still or trace out a path, typically in a figure-8 form, whose precise characteristics depend on the orbit's inclination and eccentricity. A circular geosynchronous orbit has a constant altitude of 35,786 km (22,236 mi). A special case of geosynchronous orbit is the geostationary orbit (often abbreviated GEO), which is a circular geosynchronous orbit in Earth's equatorial plane with both inclination and eccentricity equal to 0. A satellite in a geostationary orbit remains in the same position in the sky to observers on the surface. Communications satellites are often given geostationary or close-to-geostationary orbits, so that the satellite antennas that communicate with them do not have to move but can be pointed permanently at the fixed location in the sky where the satellite appears. History In 1929, Herman Potočnik described both geosynchronous orbits in general and the special case of the geostationary Earth orbit in particular as useful orbits for space stations. The first appearance of a geosynchronous orbit in popular literature was in October 1942, in the first Venus Equilateral story by George O. Smith, but Smith did not go into details. British science fiction author Arthur C. Clarke popularised and expanded the concept in a 1945 paper entitled Extra-Terrestrial Relays – Can Rocket Stations Give Worldwide Radio Coverage?, published in Wireless World magazine. Clarke acknowledged the connection in his introduction to The Complete Venus Equilateral. The orbit, which Clarke first described as useful for broadcast and relay communications satellites, is sometimes called the Clarke Orbit. Similarly, the collection of artificial satellites in this orbit is known as the Clarke Belt. In technical terminology, geosynchronous orbits are often referred to as geostationary if they are roughly over the equator, but the terms are used somewhat interchangeably. Specifically, geosynchronous Earth orbit (GEO) may be a synonym for geosynchronous equatorial orbit, or geostationary Earth orbit. The first geosynchronous satellite was designed by Harold Rosen while he was working at Hughes Aircraft in 1959. Inspired by Sputnik 1, he wanted to use a geostationary (geosynchronous equatorial) satellite to globalise communications. Telecommunications between the US and Europe was then possible for just 136 people at a time, and reliant on high frequency radios and an undersea cable. Conventional wisdom at the time was that it would require too much rocket power to place a satellite in a geosynchronous orbit and that it would not survive long enough to justify the expense, so early efforts were put towards constellations of satellites in low or medium Earth orbit. The first of these were the passive Echo balloon satellites in 1960, followed by Telstar 1 in 1962. Although these projects had difficulties with signal strength and tracking that could be solved through geosynchronous satellites, the concept was seen as impractical, so Hughes often withheld funds and support. 
By 1961, Rosen and his team had produced a compact cylindrical prototype that was light and small enough to be placed into orbit by then-available rocketry; it was spin stabilised and used dipole antennas producing a pancake-shaped waveform. In August 1961, they were contracted to begin building the working satellite. They lost Syncom 1 to electronics failure, but Syncom 2 was successfully placed into a geosynchronous orbit in 1963. Although its inclined orbit still required moving antennas, it was able to relay TV transmissions, and allowed US President John F. Kennedy to phone Nigerian prime minister Abubakar Tafawa Balewa from a ship on August 23, 1963. Today there are hundreds of geosynchronous satellites providing remote sensing, navigation and communications. Most populated land locations on the planet now have terrestrial communications facilities (microwave, fiber-optic), which often have latency and bandwidth advantages, with telephone access covering 96% of the population and internet access 90% as of 2018; even so, some rural and remote areas in developed countries are still reliant on satellite communications. Types Geostationary orbit A geostationary equatorial orbit (GEO) is a circular geosynchronous orbit in the plane of the Earth's equator with a radius of approximately 42,164 km (measured from the center of the Earth). A satellite in such an orbit is at an altitude of approximately 35,786 km above mean sea level. It maintains the same position relative to the Earth's surface. If one could see a satellite in geostationary orbit, it would appear to hover at the same point in the sky, i.e., not exhibit diurnal motion, while the Sun, Moon, and stars would traverse the skies behind it. Such orbits are useful for telecommunications satellites. A perfectly stable geostationary orbit is an ideal that can only be approximated. In practice the satellite drifts out of this orbit because of perturbations such as the solar wind, radiation pressure, variations in the Earth's gravitational field, and the gravitational effect of the Moon and Sun, and thrusters are used to maintain the orbit in a process known as station-keeping. Eventually, without the use of thrusters, the orbit will become inclined, oscillating between 0° and 15° every 55 years. At the end of the satellite's lifetime, when fuel approaches depletion, satellite operators may decide to omit these expensive manoeuvres to correct inclination and only control eccentricity. This prolongs the lifetime of the satellite, as it consumes less fuel over time, but the satellite can then only be used by ground antennas capable of following the N-S movement. Geostationary satellites will also tend to drift around one of two stable longitudes of 75° and 255° without station keeping. Elliptical and inclined geosynchronous orbits Many objects in geosynchronous orbits have eccentric and/or inclined orbits. Eccentricity makes the orbit elliptical and appear to oscillate E-W in the sky from the viewpoint of a ground station, while inclination tilts the orbit compared to the equator and makes it appear to oscillate N-S from a ground station. These effects combine to form an analemma (figure-8). Satellites in elliptical/eccentric orbits must be tracked by steerable ground stations. Tundra orbit The Tundra orbit is an eccentric geosynchronous orbit, which allows the satellite to spend most of its time dwelling over one high-latitude location. 
It has an inclination of 63.4°, a frozen orbit, which reduces the need for station-keeping. At least two satellites are needed to provide continuous coverage over an area. It was used by Sirius XM Satellite Radio to improve signal strength in the northern US and Canada. Quasi-zenith orbit The Quasi-Zenith Satellite System (QZSS) is a four-satellite system that operates in a geosynchronous orbit at an inclination of 42° and a 0.075 eccentricity. Each satellite dwells over Japan, allowing signals to reach receivers in urban canyons, and then passes quickly over Australia. Launch Geosynchronous satellites are launched to the east into a prograde orbit that matches the rotation rate of the equator. The smallest inclination that a satellite can be launched into is that of the launch site's latitude, so launching the satellite from close to the equator limits the amount of inclination change needed later. Additionally, launching from close to the equator allows the speed of the Earth's rotation to give the satellite a boost. A launch site should have water or deserts to the east, so any failed rockets do not fall on a populated area. Most launch vehicles place geosynchronous satellites directly into a geosynchronous transfer orbit (GTO), an elliptical orbit with an apogee at GSO height and a low perigee. On-board satellite propulsion is then used to raise the perigee, circularise the orbit and reach GSO. Once in a viable geostationary orbit, spacecraft can change their longitudinal position by adjusting their semi-major axis such that the new period is shorter or longer than a sidereal day, in order to effect an apparent "drift" eastward or westward, respectively. Once at the desired longitude, the spacecraft's period is restored to geosynchronous. Proposed orbits Statite proposal A statite is a hypothetical satellite that uses radiation pressure from the Sun against a solar sail to modify its orbit. It would hold its location over the dark side of the Earth at a latitude of approximately 30 degrees. It would return to the same spot in the sky every 24 hours from an Earth-based viewer's perspective, so it would be functionally similar to a geosynchronous orbit. Space elevator A further form of geosynchronous orbit is the theoretical space elevator. If a mass above the geostationary belt is tethered to the Earth's surface and forced to orbit with a period of one sidereal day, it moves faster than a free orbit at that altitude would allow, so it requires more downward (centripetal) force than gravity alone supplies. The tether becomes tensioned as it provides this extra force, and that tension is available to hoist objects up the tether structure. Retired satellites Geosynchronous satellites require some station-keeping in order to remain in position, and once they run out of thruster fuel and are no longer useful they are moved into a higher graveyard orbit. It is not feasible to deorbit geosynchronous satellites, as doing so would take far more fuel than slightly raising the orbit, and atmospheric drag is negligible, giving GSOs lifetimes of thousands of years. The retirement process is becoming increasingly regulated and satellites must have a 90% chance of moving over 200 km above the geostationary belt at end of life.
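The drift manoeuvre just described, and the relationship between period and semi-major axis given in the Properties section below, can be illustrated numerically. The following Python sketch is illustrative only: Earth's standard gravitational parameter (about 3.986×10^14 m³/s²) is a standard reference value rather than a figure from this article, and the 10 km semi-major-axis offset is a hypothetical example.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2 (standard reference value)
SIDEREAL_DAY = 86164.1      # seconds (23 h 56 min 4 s)

def orbital_period(a_m: float, mu: float = MU_EARTH) -> float:
    """Kepler's third law: T = 2*pi*sqrt(a^3 / mu)."""
    return 2.0 * math.pi * math.sqrt(a_m**3 / mu)

def semi_major_axis(period_s: float, mu: float = MU_EARTH) -> float:
    """Kepler's third law solved for a: a = (mu * T^2 / (4*pi^2))**(1/3)."""
    return (mu * period_s**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

def drift_rate_deg_per_day(delta_a_m: float) -> float:
    """Approximate longitude drift (degrees per sidereal day) from a small semi-major-axis offset.

    A satellite slightly above the geosynchronous radius has a longer period, so it falls
    behind the rotating Earth and appears to drift westward (negative values here).
    """
    a_geo = semi_major_axis(SIDEREAL_DAY)
    period_excess = orbital_period(a_geo + delta_a_m) - SIDEREAL_DAY
    return -360.0 * period_excess / SIDEREAL_DAY

if __name__ == "__main__":
    a_geo = semi_major_axis(SIDEREAL_DAY)
    print(f"geosynchronous semi-major axis: {a_geo / 1e3:,.0f} km")                  # ~42,164 km
    print(f"drift for a +10 km offset: {drift_rate_deg_per_day(10e3):+.3f} deg/day")  # slow westward drift
```

Raising the semi-major axis by 10 km lengthens the period by roughly half a minute, which accumulates as an apparent drift of a bit more than a tenth of a degree of longitude per day in this sketch.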
Space debris Space debris in geosynchronous orbits typically has a lower collision speed than at LEO since most GSO satellites orbit in the same plane, altitude and speed; however, the presence of satellites in eccentric orbits allows for collisions at up to 4 km/s. Although a collision is comparatively unlikely, GSO satellites have a limited ability to avoid any debris. Debris less than 10 cm in diameter cannot be seen from the Earth, making it difficult to assess its prevalence. Despite efforts to reduce risk, spacecraft collisions have occurred. The European Space Agency telecom satellite Olympus-1 was struck by a meteoroid on August 11, 1993, and eventually moved to a graveyard orbit, and in 2006 the Russian Express-AM11 communications satellite was struck by an unknown object and rendered inoperable, although its engineers had enough contact time with the satellite to send it into a graveyard orbit. In 2017 both AMC-9 and Telkom-1 broke apart from an unknown cause. Properties A geosynchronous orbit has the following properties: Period: 1436 minutes (one sidereal day) Semi-major axis: 42,164 km Period All geosynchronous orbits have an orbital period equal to exactly one sidereal day. This means that the satellite will return to the same point above the Earth's surface every (sidereal) day, regardless of other orbital properties. This orbital period, T, is directly related to the semi-major axis of the orbit through the formula: T = 2π√(a³/μ) where: a is the length of the orbit's semi-major axis μ is the standard gravitational parameter of the central body Inclination A geosynchronous orbit can have any inclination. Satellites commonly have an inclination of zero, ensuring that the orbit remains over the equator at all times, making it stationary with respect to latitude from the point of view of a ground observer (and in the ECEF reference frame). Another popular inclination is 63.4° for a Tundra orbit, which ensures that the orbit's argument of perigee does not change over time. Ground track In the special case of a geostationary orbit, the ground track of a satellite is a single point on the equator. In the general case of a geosynchronous orbit with a non-zero inclination or eccentricity, the ground track is a more or less distorted figure-eight, returning to the same places once per sidereal day. See also Geostationary orbit Geosynchronous satellite Graveyard orbit High Earth orbit List of orbits List of satellites in geosynchronous orbit Low Earth orbit Medium Earth orbit Molniya orbit Subsynchronous orbit Supersynchronous orbit Synchronous orbit References External links Satellites currently in Geosynchronous Orbit, list updated daily Science@NASA – Geosynchronous Orbit NASA – Planetary Orbits Science Presse data on Geosynchronous Orbits (including historical data and launch statistics) Orbital Mechanics (Rocket and Space Technology) Earth orbits Satellite broadcasting
Geosynchronous orbit
[ "Engineering" ]
2,757
[ "Telecommunications engineering", "Satellite broadcasting" ]
40,250
https://en.wikipedia.org/wiki/Specific%20impulse
Specific impulse (usually abbreviated ) is a measure of how efficiently a reaction mass engine, such as a rocket using propellant or a jet engine using fuel, generates thrust. In general, this is a ratio of the impulse, i.e. change in momentum, per mass of propellant. This is equivalent to "thrust per massflow". The resulting unit is equivalent to velocity, although it doesn't represent any physical velocity (see below); it is more properly thought of in terms of momentum per mass, since this represents a physical momentum and physical mass. The practical meaning of the measurement varies with different types of engines. Car engines consume onboard fuel, breathe environmental air to burn the fuel, and react (through the tires) against the ground beneath them. In this case, the only sensible interpretation is momentum per fuel burned. Chemical rocket engines, by contrast, carry aboard all of their combustion ingredients and reaction mass, so the only practical measure is momentum per reaction mass. Airplane engines are in the middle, as they only react against airflow through the engine, but some of this reaction mass (and combustion ingredients) is breathed rather than carried aboard. As such, "specific impulse" could be taken to mean either "per reaction mass", as with a rocket, or "per fuel burned" as with cars. The latter is the traditional and common choice. In sum, specific impulse isn't practically comparable between different types of engines. In any case, specific impulse can be taken as a measure of efficiency. In cars and planes, it typically corresponds with fuel mileage; in rocketry, it corresponds to the achievable delta-v, which is the typical way to measure changes between orbits. Rocketry traditionally uses a "bizarre" choice of units: rather than speaking of momentum-per-mass, or velocity, the rocket industry typically converts units of velocity to units of time by dividing by a standard reference acceleration, that being standard gravity g. This is a historical result of competing units, imperial units vs metric units. They shared a common unit of time (seconds) but not common units of distance or mass, so this conversion by reference to g became a standard way to make international comparisons. This choice of reference conversion is arbitrary and the resulting units of time have no physical meaning. The only physical quantities are the momentum change and the mass used to achieve it. Propulsion systems Rockets For any chemical rocket engine, the momentum transfer efficiency depends heavily on the effectiveness of the nozzle; the nozzle is the primary means of converting reactant energy (e.g. thermal or pressure energy) into a flow of momentum all directed the same way. Therefore, nozzle shape and effectiveness has a great impact on total momentum transfer from the reaction mass to the rocket. Efficiency of conversion of input energy to reactant energy also matters; be that thermal energy in combustion engines or electrical energy in ion engines, the engineering involved in converting such energy to outbound momentum can have high impact on specific impulse. Specific impulse in turn has deep impacts on the achievable delta-v and associated orbits achievable, and (by the rocket equation) mass fraction required to achieve a given delta-v. Optimizing the tradeoffs between mass fraction and specific impulse is one of the fundamental engineering challenges in rocketry. 
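To make the mass-fraction trade-off mentioned above concrete, the sketch below evaluates the Tsiolkovsky rocket equation for a fixed delta-v at a few specific-impulse values. It is a minimal illustration under assumed numbers: the 9.4 km/s delta-v (roughly an ascent-to-orbit budget) and the listed Isp values are example inputs, not figures from this article.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass_fraction(delta_v: float, isp_seconds: float) -> float:
    """Tsiolkovsky rocket equation rearranged for the propellant mass fraction.

    delta_v = ve * ln(m0 / mf) with ve = g0 * Isp, so
    m_propellant / m0 = 1 - exp(-delta_v / ve).
    """
    ve = G0 * isp_seconds
    return 1.0 - math.exp(-delta_v / ve)

if __name__ == "__main__":
    delta_v = 9400.0  # m/s, assumed example budget
    for isp in (250, 350, 450, 3000):  # seconds; the last value is in electric-propulsion territory
        frac = propellant_mass_fraction(delta_v, isp)
        print(f"Isp = {isp:4d} s -> propellant must be {frac:6.1%} of the initial mass")
```

The same delta-v that demands roughly 98% propellant at 250 s drops to under 90% at 450 s, which is why later stages are pushed toward higher specific impulse even at the cost of thrust.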
Although the specific impulse has units equivalent to velocity, it almost never corresponds to any physical velocity. In chemical and cold gas rockets, the shape of the nozzle has a high impact on the energy-to-momentum conversion, and is never perfect, and there are other sources of losses and inefficiencies (e.g. the details of the combustion in such engines). As such, the physical exhaust velocity is higher than the "effective exhaust velocity", i.e. that "velocity" suggested by the specific impulse. In any case, the momentum exchanged and the mass used to generate it are physically real measurements. Typically, rocket nozzles work better when the ambient pressure is lower, i.e. better in space than in atmosphere. Ion engines operate without a nozzle, although they have other sources of losses such that the momentum transferred is lower than the physical exhaust velocity. Cars Although the car industry almost never uses specific impulse on any practical level, the measure can be defined, and makes a good contrast against other engine types. Car engines breathe external air to combust their fuel, and (via the wheels) react against the ground. As such, the only meaningful way to interpret "specific impulse" is as "thrust per fuel flow", although one must also specify whether the force is measured at the crankshaft or at the wheels, since there are transmission losses. Such a measure corresponds to fuel mileage. Airplanes In an aerodynamic context, there are similarities to both cars and rockets. Like cars, airplane engines breathe outside air; unlike cars they react only against fluids flowing through the engine (including the propellers as applicable). As such, there are several possible ways to interpret "specific impulse": as thrust per fuel flow, as thrust per breathing-flow, or as thrust per "turbine-flow" (i.e. excluding air through the propeller/bypass fan). Since the air breathed isn't a direct cost, with wide engineering leeway on how much to breathe, the industry traditionally chooses the "thrust per fuel flow" interpretation with its focus on cost efficiency. In this interpretation, the resulting specific impulse numbers are much higher than for rocket engines, although this comparison is essentially meaningless since the interpretations (with or without reaction mass) are so different. As with all kinds of engines, there are many engineering choices and tradeoffs that affect specific impulse. Nonlinear air resistance and the engine's inability to keep a high specific impulse at a fast burn rate are limiting factors to the fuel consumption rate. As with rocket engines, the interpretation of specific impulse as a "velocity" has no physical meaning. Since the usual interpretation excludes much of the reaction mass, the physical velocity of the reactants downstream is much lower than the Isp "velocity". General considerations Specific impulse should not be confused with energy efficiency, which can decrease as specific impulse increases, since propulsion systems that give high specific impulse require high energy to do so. Specific impulse should not be confused with total thrust. Thrust is the force supplied by the engine and depends on the propellant mass flow through the engine. Specific impulse measures the thrust per propellant mass flow. Thrust and specific impulse are related by the design and propellants of the engine in question, but this relationship is tenuous: in most cases, high thrust and high specific impulse are mutually exclusive engineering goals.
For example, LH2/LOX bipropellant produces a higher specific impulse (due to higher chemical energy and lower exhaust molecular mass) but lower thrust than RP-1/LOX (due to higher density and propellant flow). In many cases, propulsion systems with very high specific impulse—some ion thrusters reach 25x-35x better than chemical engines—produce correspondingly low thrust. When calculating specific impulse, only propellant carried with the vehicle before use is counted, in the standard interpretation. This usage best corresponds to the cost of operating the vehicle. For a chemical rocket, unlike a plane or car, the propellant mass therefore would include both fuel and oxidizer. For any vehicle, optimising for specific impulse is generally not the same as optimising for total performance or total cost. In rocketry, a heavier engine with a higher specific impulse may not be as effective in gaining altitude, distance, or velocity as a lighter engine with a lower specific impulse, especially if the latter engine possesses a higher thrust-to-weight ratio. This is a significant reason for most rocket designs having multiple stages. The first stage can be optimised for high thrust to effectively fight gravity drag and air drag, while the later stages, operating strictly in orbit and in vacuum, can be optimised much more easily for higher specific impulse, especially for high delta-v orbits. Propellant quantity units The amount of propellant could be defined either in units of mass or weight. If mass is used, specific impulse is an impulse per unit of mass, which dimensional analysis shows to be equivalent to units of speed; this interpretation is commonly labeled the effective exhaust velocity. If a force-based unit system is used, impulse is divided by propellant weight (weight is a measure of force), resulting in units of time. The problem with weight, as a measure of quantity, is that it depends on the acceleration applied to the propellant, which is arbitrary with no relation to the design of the engine. Historically, standard gravity was the reference conversion between weight and mass. But technology has progressed to the point where Earth gravity's variation across the surface can be measured, and such differences can matter in practical engineering projects (not to mention science projects on other solar bodies), so modern science and engineering focus on mass as the measure of quantity, removing the acceleration dependence. As such, measuring specific impulse by propellant mass gives it the same meaning for a car at sea level, an airplane at cruising altitude, or a helicopter on Mars. No matter the choice of mass or weight, the resulting quotient of "velocity" or "time" has no physical meaning. Due to various losses in real engines, the actual exhaust velocity is different from the Isp "velocity" (and for cars there isn't even a sensible definition of "actual exhaust velocity"). Rather, the specific impulse is just that: a physical momentum from a physical quantity of propellant (be that in mass or weight). The particular habit in rocketry of measuring Isp in seconds results from the above historical circumstances. Since metric and imperial units had in common only the unit of time, this was the most convenient way to make international comparisons. However, the choice of the reference acceleration conversion (g) is arbitrary, and as above, the interpretation in terms of time or speed has no physical meaning.
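A minimal sketch of the two bookkeeping conventions described above, mass-based versus weight-based, assuming a standard gravity of 9.80665 m/s²; the total impulse and propellant mass are made-up example numbers, not figures from this article.

```python
G0 = 9.80665  # standard (reference) gravity used for the weight-based convention, m/s^2

def specific_impulse_per_mass(total_impulse_ns: float, propellant_kg: float) -> float:
    """Impulse per unit mass of propellant, N*s/kg (numerically equal to m/s)."""
    return total_impulse_ns / propellant_kg

def specific_impulse_per_weight(total_impulse_ns: float, propellant_kg: float) -> float:
    """Impulse per unit weight-on-Earth of propellant, which comes out in seconds."""
    return total_impulse_ns / (propellant_kg * G0)

if __name__ == "__main__":
    # Hypothetical burn: 3.0e6 N*s of total impulse delivered by 1000 kg of propellant.
    impulse, mass = 3.0e6, 1000.0
    print(f"per mass  : {specific_impulse_per_mass(impulse, mass):.0f} m/s")    # 3000 m/s
    print(f"per weight: {specific_impulse_per_weight(impulse, mass):.1f} s")    # ~305.9 s
```

The two numbers describe the same burn; dividing by the arbitrary reference acceleration g0 is the only difference between them.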
Units The most common unit for specific impulse is the second, as values are identical regardless of whether the calculations are done in SI, imperial, or US customary units. Nearly all manufacturers quote their engine performance in seconds, and the unit is also useful for specifying aircraft engine performance. The use of metres per second to specify effective exhaust velocity is also reasonably common. The unit is intuitive when describing rocket engines, although the effective exhaust speed of the engines may be significantly different from the actual exhaust speed, especially in gas-generator cycle engines. For airbreathing jet engines, the effective exhaust velocity is not physically meaningful, although it can be used for comparison purposes. Metres per second are numerically equivalent to newton-seconds per kg (N·s/kg), and SI measurements of specific impulse can be written in terms of either unit interchangeably. This unit highlights the definition of specific impulse as impulse per unit mass of propellant. Specific fuel consumption is inversely proportional to specific impulse and has units of g/(kN·s) or lb/(lbf·h). Specific fuel consumption is used extensively for describing the performance of air-breathing jet engines. Specific impulse in seconds Specific impulse, measured in seconds, can be thought of as how many seconds one kilogram of fuel can produce one kilogram-force of thrust. Or, more precisely, how many seconds a given propellant, when paired with a given engine, can accelerate its own initial mass at 1 g. The longer it can accelerate its own mass, the more delta-V it delivers to the whole system. In other words, given a particular engine and a mass of a particular propellant, specific impulse measures for how long a time that engine can exert a continuous force (thrust) until fully burning that mass of propellant. A given mass of a more energy-dense propellant can burn for a longer duration than some less energy-dense propellant made to exert the same force while burning in an engine. Different engine designs burning the same propellant may not be equally efficient at directing their propellant's energy into effective thrust. For all vehicles, specific impulse (impulse per unit weight-on-Earth of propellant) in seconds can be defined by the following equation: F = g0 · Isp · ṁ where: F is the thrust obtained from the engine (newtons or pounds force), g0 is the standard gravity, which is nominally the gravity at Earth's surface (m/s2 or ft/s2), Isp is the specific impulse measured (seconds), ṁ is the mass flow rate of the expended propellant (kg/s or slugs/s). Isp in seconds is the amount of time a rocket engine can generate thrust, given a quantity of propellant whose weight is equal to the engine's thrust. The advantage of this formulation is that it may be used for rockets, where all the reaction mass is carried on board, as well as airplanes, where most of the reaction mass is taken from the atmosphere. In addition, giving the result as a unit of time makes the result easily comparable between calculations in SI units, imperial units, US customary units or other unit frameworks. Imperial units conversion The English unit pound mass is more commonly used than the slug, and when using pounds per second for mass flow rate, it is more convenient to express standard gravity as 1 pound-force per pound-mass. Note that this is equivalent to 32.17405 ft/s2, but expressed in more convenient units.
This gives: Isp = F [lbf] / ṁ [lb/s], with the result in seconds. Rocketry In rocketry, the only reaction mass is the propellant, so the specific impulse is calculated using an alternative method, giving results with units of seconds. Specific impulse is defined as the thrust integrated over time per unit weight-on-Earth of the propellant: Isp = ve / g0 where Isp is the specific impulse measured in seconds, ve is the average exhaust speed along the axis of the engine (in m/s or ft/s), g0 is the standard gravity (in m/s2 or ft/s2). In rockets, due to atmospheric effects, the specific impulse varies with altitude, reaching a maximum in a vacuum. This is because the exhaust velocity isn't simply a function of the chamber pressure, but is a function of the pressure difference between the interior and exterior of the combustion chamber. Values are usually given for operation at sea level ("sl") or in a vacuum ("vac"). Specific impulse as effective exhaust velocity Because of the geocentric factor of g0 in the equation for specific impulse, many prefer an alternative definition. The specific impulse of a rocket can be defined in terms of thrust per unit mass flow of propellant. This is an equally valid (and in some ways somewhat simpler) way of defining the effectiveness of a rocket propellant. For a rocket, the specific impulse defined in this way is simply the effective exhaust velocity relative to the rocket, ve. "In actual rocket nozzles, the exhaust velocity is not really uniform over the entire exit cross section and such velocity profiles are difficult to measure accurately. A uniform axial velocity, ve, is assumed for all calculations which employ one-dimensional problem descriptions. This effective exhaust velocity represents an average or mass equivalent velocity at which propellant is being ejected from the rocket vehicle." The two definitions of specific impulse are proportional to one another, and related to each other by: ve = g0 · Isp where Isp is the specific impulse in seconds, ve is the specific impulse measured in m/s, which is the same as the effective exhaust velocity measured in m/s (or ft/s if g is in ft/s2), g0 is the standard gravity, 9.80665 m/s2 (in United States customary units 32.174 ft/s2). This equation is also valid for air-breathing jet engines, but is rarely used in practice. (Note that different symbols are sometimes used; for example, c is also sometimes seen for exhaust velocity. While the symbol might logically be used for specific impulse in units of (N·s)/(m·kg); to avoid confusion, it is desirable to reserve this for specific impulse measured in seconds.) It is related to the thrust, or forward force on the rocket, by the equation: F = ṁ · ve where ṁ is the propellant mass flow rate, which is the rate of decrease of the vehicle's mass. A rocket must carry all its propellant with it, so the mass of the unburned propellant must be accelerated along with the rocket itself. Minimizing the mass of propellant required to achieve a given change in velocity is crucial to building effective rockets. The Tsiolkovsky rocket equation shows that for a rocket with a given empty mass and a given amount of propellant, the total change in velocity it can accomplish is proportional to the effective exhaust velocity. A spacecraft without propulsion follows an orbit determined by its trajectory and any gravitational field. Deviations from the corresponding velocity pattern (these are called Δv) are achieved by sending exhaust mass in the direction opposite to that of the desired velocity change.
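The relations above (Isp in seconds, effective exhaust velocity, and thrust per mass flow) can be tied together in a few lines. This is a minimal sketch with assumed round numbers for thrust and mass flow rate; they do not describe any particular engine.

```python
G0 = 9.80665  # standard gravity, m/s^2

def isp_seconds(thrust_n: float, mdot_kg_s: float) -> float:
    """Specific impulse in seconds: Isp = F / (g0 * mdot)."""
    return thrust_n / (G0 * mdot_kg_s)

def effective_exhaust_velocity(isp_s: float) -> float:
    """Effective exhaust velocity in m/s: ve = g0 * Isp."""
    return G0 * isp_s

def thrust(mdot_kg_s: float, ve_m_s: float) -> float:
    """Thrust from mass flow rate and effective exhaust velocity: F = mdot * ve."""
    return mdot_kg_s * ve_m_s

if __name__ == "__main__":
    # Hypothetical engine: 1000 kN of thrust at a propellant flow of 300 kg/s.
    F, mdot = 1.0e6, 300.0
    isp = isp_seconds(F, mdot)
    ve = effective_exhaust_velocity(isp)
    print(f"Isp = {isp:.0f} s")                               # ~340 s
    print(f"ve  = {ve:.0f} m/s")                              # ~3333 m/s
    print(f"F   = {thrust(mdot, ve) / 1e3:.0f} kN (round trip check)")
```

The round-trip check at the end simply recovers the assumed thrust, illustrating that the three quantities differ only by the fixed factor g0 and the mass flow rate.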
Actual exhaust speed versus effective exhaust speed When an engine is run within the atmosphere, the exhaust velocity is reduced by atmospheric pressure, in turn reducing specific impulse. This is a reduction in the effective exhaust velocity, versus the actual exhaust velocity achieved in vacuum conditions. In the case of gas-generator cycle rocket engines, more than one exhaust gas stream is present as turbopump exhaust gas exits through a separate nozzle. Calculating the effective exhaust velocity requires averaging the two mass flows as well as accounting for any atmospheric pressure. For air-breathing jet engines, particularly turbofans, the actual exhaust velocity and the effective exhaust velocity are different by orders of magnitude. This happens for several reasons. First, a good deal of additional momentum is obtained by using air as reaction mass, such that combustion products in the exhaust have more mass than the burned fuel. Next, inert gases in the atmosphere absorb heat from combustion, and through the resulting expansion provide additional thrust. Lastly, for turbofans and other designs there is even more thrust created by pushing against intake air which never sees combustion directly. These all combine to allow a better match between the airspeed and the exhaust speed, which saves energy/propellant and enormously increases the effective exhaust velocity while reducing the actual exhaust velocity. Again, this is because the mass of the air is not counted in the specific impulse calculation, thus attributing all of the thrust momentum to the mass of the fuel component of the exhaust, and omitting the reaction mass, inert gas, and effect of driven fans on overall engine efficiency from consideration. Essentially, the momentum of engine exhaust includes a lot more than just fuel, but specific impulse calculation ignores everything but the fuel. Even though the effective exhaust velocity for an air-breathing engine seems nonsensical in the context of actual exhaust velocity, this is still useful for comparing absolute fuel efficiency of different engines. Density specific impulse A related measure, the density specific impulse, sometimes also referred to as Density Impulse and usually abbreviated as is the product of the average specific gravity of a given propellant mixture and the specific impulse. While less important than the specific impulse, it is an important measure in launch vehicle design, as a low specific impulse implies that bigger tanks will be required to store the propellant, which in turn will have a detrimental effect on the launch vehicle's mass ratio. Specific fuel consumption Specific impulse is inversely proportional to specific fuel consumption (SFC) by the relationship for SFC in kg/(N·s) and for SFC in lb/(lbf·hr). Examples An example of a specific impulse measured in time is 453 seconds, which is equivalent to an effective exhaust velocity of , for the RS-25 engines when operating in a vacuum. An air-breathing jet engine typically has a much larger specific impulse than a rocket; for example a turbofan jet engine may have a specific impulse of 6,000 seconds or more at sea level whereas a rocket would be between 200 and 400 seconds. 
An air-breathing engine is thus much more propellant efficient than a rocket engine, because the air serves as reaction mass and oxidizer for combustion which does not have to be carried as propellant, and the actual exhaust speed is much lower, so the kinetic energy the exhaust carries away is lower and thus the jet engine uses far less energy to generate thrust. While the actual exhaust velocity is lower for air-breathing engines, the effective exhaust velocity is very high for jet engines. This is because the effective exhaust velocity calculation assumes that the carried propellant is providing all the reaction mass and all the thrust. Hence effective exhaust velocity is not physically meaningful for air-breathing engines; nevertheless, it is useful for comparison with other types of engines. The highest specific impulse for a chemical propellant ever test-fired in a rocket engine was with a tripropellant of lithium, fluorine, and hydrogen. However, this combination is impractical. Lithium and fluorine are both extremely corrosive, lithium ignites on contact with air, fluorine ignites on contact with most fuels, and hydrogen, while not hypergolic, is an explosive hazard. Fluorine and the hydrogen fluoride (HF) in the exhaust are very toxic, which damages the environment, makes work around the launch pad difficult, and makes getting a launch license that much more difficult. The rocket exhaust is also ionized, which would interfere with radio communication with the rocket. Nuclear thermal rocket engines differ from conventional rocket engines in that energy is supplied to the propellants by an external nuclear heat source instead of the heat of combustion. The nuclear rocket typically operates by passing liquid hydrogen gas through an operating nuclear reactor. Testing in the 1960s yielded specific impulses of about 850 seconds (8,340 m/s), about twice that of the Space Shuttle engines. A variety of other rocket propulsion methods, such as ion thrusters, give much higher specific impulse but with much lower thrust; for example the Hall-effect thruster on the SMART-1 satellite has a specific impulse of but a maximum thrust of only . The variable specific impulse magnetoplasma rocket (VASIMR) engine currently in development will theoretically yield , and a maximum thrust of . See also Jet engine Impulse Tsiolkovsky rocket equation System-specific impulse Specific energy Standard gravity Thrust specific fuel consumption—fuel consumption per unit thrust Specific thrust—thrust per unit of air for a duct engine Heating value Energy density Delta-v (physics) Rocket propellant Liquid rocket propellants Notes References External links RPA - Design Tool for Liquid Rocket Engine Analysis List of Specific Impulses of various rocket fuels Rocket propulsion Spacecraft propulsion Physical quantities Classical mechanics Engine technology
Specific impulse
[ "Physics", "Mathematics", "Technology" ]
4,622
[ "Physical phenomena", "Physical quantities", "Engines", "Quantity", "Classical mechanics", "Engine technology", "Mechanics", "Physical properties" ]
40,310
https://en.wikipedia.org/wiki/Magnetohydrodynamics
In physics and engineering, magnetohydrodynamics (MHD; also called magneto-fluid dynamics or hydromagnetics) is a model of electrically conducting fluids that treats all interpenetrating particle species together as a single continuous medium. It is primarily concerned with the low-frequency, large-scale, magnetic behavior in plasmas and liquid metals and has applications in multiple fields including space physics, geophysics, astrophysics, and engineering. The word magnetohydrodynamics is derived from magneto- meaning magnetic field, hydro- meaning water, and dynamics meaning movement. The field of MHD was initiated by Hannes Alfvén, for which he received the Nobel Prize in Physics in 1970. History The MHD description of electrically conducting fluids was first developed by Hannes Alfvén in a 1942 paper published in Nature titled "Existence of Electromagnetic–Hydrodynamic Waves" which outlined his discovery of what are now referred to as Alfvén waves. Alfvén initially referred to these waves as "electromagnetic–hydrodynamic waves"; however, in a later paper he noted, "As the term 'electromagnetic–hydrodynamic waves' is somewhat complicated, it may be convenient to call this phenomenon 'magneto–hydrodynamic' waves." Equations In MHD, motion in the fluid is described using linear combinations of the mean motions of the individual species: the current density J and the center of mass velocity v. In a given fluid, each species s has a number density n_s, mass m_s, electric charge q_s, and a mean velocity v_s. The fluid's total mass density is then ρ = Σ_s m_s n_s, and the motion of the fluid can be described by the current density expressed as J = Σ_s q_s n_s v_s and the center of mass velocity expressed as: v = (1/ρ) Σ_s m_s n_s v_s. MHD can be described by a set of equations consisting of a continuity equation, an equation of motion, an equation of state, Ampère's Law, Faraday's law, and Ohm's law. As with any fluid description of a kinetic system, a closure approximation must be applied to the highest moment of the particle distribution equation. This is often accomplished with approximations to the heat flux through a condition of adiabaticity or isothermality. In the adiabatic limit, that is, the assumption of an isotropic pressure and isotropic temperature, a fluid with an adiabatic index γ, electrical resistivity η, magnetic field B, and electric field E can be described by the continuity equation ∂ρ/∂t + ∇·(ρv) = 0, the equation of state d/dt (p/ρ^γ) = 0, the equation of motion ρ dv/dt = J × B − ∇p, the low-frequency Ampère's law μ0 J = ∇ × B, Faraday's law ∂B/∂t = −∇ × E, and Ohm's law E + v × B = ηJ. Taking the curl of this equation and using Ampère's law and Faraday's law results in the induction equation ∂B/∂t = ∇ × (v × B) + (η/μ0) ∇²B, where η/μ0 is the magnetic diffusivity. In the equation of motion, the Lorentz force term J × B can be expanded using Ampère's law and a vector calculus identity to give J × B = (B·∇)B/μ0 − ∇(B²/2μ0), where the first term on the right hand side is the magnetic tension force and the second term is the magnetic pressure force. Ideal MHD The simplest form of MHD, ideal MHD, assumes that the resistive term ηJ in Ohm's law is small relative to the other terms such that it can be taken to be equal to zero. This occurs in the limit of large magnetic Reynolds numbers during which magnetic induction dominates over magnetic diffusion at the velocity and length scales under consideration. Consequently, processes in ideal MHD that convert magnetic energy into kinetic energy, referred to as ideal processes, cannot generate heat and raise entropy.
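As a rough numerical illustration of the large-magnetic-Reynolds-number criterion above, the sketch below computes the magnetic diffusivity η/μ0, the diffusion timescale across a region, and the magnetic Reynolds number for an assumed conductor. The conductivity, length scale, and velocity are hypothetical inputs chosen only to show orders of magnitude; they are not values taken from this article.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def magnetic_diffusivity(sigma: float) -> float:
    """Magnetic diffusivity eta_m = 1 / (mu_0 * sigma), in m^2/s."""
    return 1.0 / (MU_0 * sigma)

def magnetic_reynolds_number(velocity: float, length: float, sigma: float) -> float:
    """Rm = U * L / eta_m; ideal (frozen-in) MHD requires Rm >> 1."""
    return velocity * length / magnetic_diffusivity(sigma)

def diffusion_time(length: float, sigma: float) -> float:
    """Timescale for the field to diffuse across a region of size L: tau = L^2 / eta_m."""
    return length**2 / magnetic_diffusivity(sigma)

if __name__ == "__main__":
    # Hypothetical liquid-metal flow: sigma ~ 1e6 S/m, L ~ 1 m, U ~ 1 m/s (assumed values).
    sigma, L, U = 1.0e6, 1.0, 1.0
    print(f"eta_m = {magnetic_diffusivity(sigma):.2f} m^2/s")
    print(f"Rm    = {magnetic_reynolds_number(U, L, sigma):.2f}")
    print(f"tau   = {diffusion_time(L, sigma):.2f} s")
```

For these assumed laboratory-scale numbers Rm is of order one, so resistive diffusion matters; the enormous length scales of astrophysical systems are what push Rm high enough for the ideal-MHD approximation discussed in the following sections.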
A fundamental concept underlying ideal MHD is the frozen-in flux theorem which states that the bulk fluid and embedded magnetic field are constrained to move together such that one can be said to be "tied" or "frozen" to the other. Therefore, any two points that move with the bulk fluid velocity and lie on the same magnetic field line will continue to lie on the same field line even as the points are advected by fluid flows in the system. The connection between the fluid and magnetic field fixes the topology of the magnetic field in the fluid—for example, if a set of magnetic field lines are tied into a knot, then they will remain so as long as the fluid has negligible resistivity. This difficulty in reconnecting magnetic field lines makes it possible to store energy by moving the fluid or the source of the magnetic field. The energy can then become available if the conditions for ideal MHD break down, allowing magnetic reconnection that releases the stored energy from the magnetic field. Ideal MHD equations In ideal MHD, the resistive term vanishes in Ohm's law giving the ideal Ohm's law, Similarly, the magnetic diffusion term in the induction equation vanishes giving the ideal induction equation, Applicability of ideal MHD to plasmas Ideal MHD is only strictly applicable when: The plasma is strongly collisional, so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are therefore close to Maxwellian. The resistivity due to these collisions is small. In particular, the typical magnetic diffusion times over any scale length present in the system must be longer than any time scale of interest. Interest in length scales much longer than the ion skin depth and Larmor radius perpendicular to the field, long enough along the field to ignore Landau damping, and time scales much longer than the ion gyration time (system is smooth and slowly evolving). Importance of resistivity In an imperfectly conducting fluid the magnetic field can generally move through the fluid following a diffusion law with the resistivity of the plasma serving as a diffusion constant. This means that solutions to the ideal MHD equations are only applicable for a limited time for a region of a given size before diffusion becomes too important to ignore. One can estimate the diffusion time across a solar active region (from collisional resistivity) to be hundreds to thousands of years, much longer than the actual lifetime of a sunspot—so it would seem reasonable to ignore the resistivity. By contrast, a meter-sized volume of seawater has a magnetic diffusion time measured in milliseconds. Even in physical systems—which are large and conductive enough that simple estimates of the Lundquist number suggest that the resistivity can be ignored—resistivity may still be important: many instabilities exist that can increase the effective resistivity of the plasma by factors of more than 109. The enhanced resistivity is usually the result of the formation of small scale structure like current sheets or fine scale magnetic turbulence, introducing small spatial scales into the system over which ideal MHD is broken and magnetic diffusion can occur quickly. When this happens, magnetic reconnection may occur in the plasma to release stored magnetic energy as waves, bulk mechanical acceleration of material, particle acceleration, and heat. 
Magnetic reconnection in highly conductive systems is important because it concentrates energy in time and space, so that gentle forces applied to a plasma for long periods of time can cause violent explosions and bursts of radiation. When the fluid cannot be considered as completely conductive, but the other conditions for ideal MHD are satisfied, it is possible to use an extended model called resistive MHD. This includes an extra term in Ohm's Law which models the collisional resistivity. Generally MHD computer simulations are at least somewhat resistive because their computational grid introduces a numerical resistivity. Structures in MHD systems In many MHD systems most of the electric current is compressed into thin nearly-two-dimensional ribbons termed current sheets. These can divide the fluid into magnetic domains, inside of which the currents are relatively weak. Current sheets in the solar corona are thought to be between a few meters and a few kilometers in thickness, which is quite thin compared to the magnetic domains (which are thousands to hundreds of thousands of kilometers across). Another example is in the Earth's magnetosphere, where current sheets separate topologically distinct domains, isolating most of the Earth's ionosphere from the solar wind. Waves The wave modes derived using the MHD equations are called magnetohydrodynamic waves or MHD waves. There are three MHD wave modes that can be derived from the linearized ideal-MHD equations for a fluid with a uniform and constant magnetic field: Alfvén waves Slow magnetosonic waves Fast magnetosonic waves These modes have phase velocities that are independent of the magnitude of the wavevector, so they experience no dispersion. The phase velocity depends on the angle between the wave vector and the magnetic field . An MHD wave propagating at an arbitrary angle with respect to the time independent or bulk field will satisfy the dispersion relation where is the Alfvén speed. This branch corresponds to the shear Alfvén mode. Additionally the dispersion equation gives where is the ideal gas speed of sound. The plus branch corresponds to the fast-MHD wave mode and the minus branch corresponds to the slow-MHD wave mode. A summary of the properties of these waves is provided: The MHD oscillations will be damped if the fluid is not perfectly conducting but has a finite conductivity, or if viscous effects are present. MHD waves and oscillations are a popular tool for the remote diagnostics of laboratory and astrophysical plasmas, for example, the corona of the Sun (Coronal seismology). Extensions Resistive Resistive MHD describes magnetized fluids with finite electron diffusivity (). This diffusivity leads to a breaking in the magnetic topology; magnetic field lines can 'reconnect' when they collide. Usually this term is small and reconnections can be handled by thinking of them as not dissimilar to shocks; this process has been shown to be important in the Earth-Solar magnetic interactions. Extended Extended MHD describes a class of phenomena in plasmas that are higher order than resistive MHD, but which can adequately be treated with a single fluid description. These include the effects of Hall physics, electron pressure gradients, finite Larmor Radii in the particle gyromotion, and electron inertia. Two-fluid Two-fluid MHD describes plasmas that include a non-negligible Hall electric field. As a result, the electron and ion momenta must be treated separately. 
This description is more closely tied to Maxwell's equations as an evolution equation for the electric field exists. Hall In 1960, M. J. Lighthill criticized the applicability of ideal or resistive MHD theory for plasmas. It concerned the neglect of the "Hall current term" in Ohm's law, a frequent simplification made in magnetic fusion theory. Hall-magnetohydrodynamics (HMHD) takes into account this electric field description of magnetohydrodynamics, and Ohm's law takes the form where is the electron number density and is the elementary charge. The most important difference is that in the absence of field line breaking, the magnetic field is tied to the electrons and not to the bulk fluid. Electron MHD Electron Magnetohydrodynamics (EMHD) describes small scales plasmas when electron motion is much faster than the ion one. The main effects are changes in conservation laws, additional resistivity, importance of electron inertia. Many effects of Electron MHD are similar to effects of the Two fluid MHD and the Hall MHD. EMHD is especially important for z-pinch, magnetic reconnection, ion thrusters, neutron stars, and plasma switches. Collisionless MHD is also often used for collisionless plasmas. In that case the MHD equations are derived from the Vlasov equation. Reduced By using a multiscale analysis the (resistive) MHD equations can be reduced to a set of four closed scalar equations. This allows for, amongst other things, more efficient numerical calculations. Limitations Importance of kinetic effects Another limitation of MHD (and fluid theories in general) is that they depend on the assumption that the plasma is strongly collisional (this is the first criterion listed above), so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are Maxwellian. This is usually not the case in fusion, space and astrophysical plasmas. When this is not the case, or the interest is in smaller spatial scales, it may be necessary to use a kinetic model which properly accounts for the non-Maxwellian shape of the distribution function. However, because MHD is relatively simple and captures many of the important properties of plasma dynamics it is often qualitatively accurate and is therefore often the first model tried. Effects which are essentially kinetic and not captured by fluid models include double layers, Landau damping, a wide range of instabilities, chemical separation in space plasmas and electron runaway. In the case of ultra-high intensity laser interactions, the incredibly short timescales of energy deposition mean that hydrodynamic codes fail to capture the essential physics. Applications Geophysics Beneath the Earth's mantle lies the core, which is made up of two parts: the solid inner core and liquid outer core. Both have significant quantities of iron. The liquid outer core moves in the presence of the magnetic field and eddies are set up into the same due to the Coriolis effect. These eddies develop a magnetic field which boosts Earth's original magnetic field—a process which is self-sustaining and is called the geomagnetic dynamo. Based on the MHD equations, Glatzmaier and Paul Roberts have made a supercomputer model of the Earth's interior. After running the simulations for thousands of years in virtual time, the changes in Earth's magnetic field can be studied. 
The simulation results are in good agreement with the observations as the simulations have correctly predicted that the Earth's magnetic field flips every few hundred thousand years. During the flips, the magnetic field does not vanish altogether—it just gets more complex. Earthquakes Some monitoring stations have reported that earthquakes are sometimes preceded by a spike in ultra low frequency (ULF) activity. A remarkable example of this occurred before the 1989 Loma Prieta earthquake in California, although a subsequent study indicates that this was little more than a sensor malfunction. On December 9, 2010, geoscientists announced that the DEMETER satellite observed a dramatic increase in ULF radio waves over Haiti in the month before the magnitude 7.0 Mw 2010 earthquake. Researchers are attempting to learn more about this correlation to find out whether this method can be used as part of an early warning system for earthquakes. Space Physics The study of space plasmas near Earth and throughout the Solar System is known as space physics. Areas researched within space physics encompass a large number of topics, ranging from the ionosphere to auroras, Earth's magnetosphere, the Solar wind, and coronal mass ejections. MHD forms the framework for understanding how populations of plasma interact within the local geospace environment. Researchers have developed global models using MHD to simulate phenomena within Earth's magnetosphere, such as the location of Earth's magnetopause (the boundary between the Earth's magnetic field and the solar wind), the formation of the ring current, auroral electrojets, and geomagnetically induced currents. One prominent use of global MHD models is in space weather forecasting. Intense solar storms have the potential to cause extensive damage to satellites and infrastructure, thus it is crucial that such events are detected early. The Space Weather Prediction Center (SWPC) runs MHD models to predict the arrival and impacts of space weather events at Earth. Astrophysics MHD applies to astrophysics, including stars, the interplanetary medium (space between the planets), and possibly within the interstellar medium (space between the stars) and jets. Most astrophysical systems are not in local thermal equilibrium, and therefore require an additional kinematic treatment to describe all the phenomena within the system (see Astrophysical plasma). Sunspots are caused by the Sun's magnetic fields, as Joseph Larmor theorized in 1919. The solar wind is also governed by MHD. The differential solar rotation may be the long-term effect of magnetic drag at the poles of the Sun, an MHD phenomenon due to the Parker spiral shape assumed by the extended magnetic field of the Sun. Previously, theories describing the formation of the Sun and planets could not explain how the Sun has 99.87% of the mass, yet only 0.54% of the angular momentum in the Solar System. In a closed system such as the cloud of gas and dust from which the Sun was formed, mass and angular momentum are both conserved. That conservation would imply that as the mass concentrated in the center of the cloud to form the Sun, it would spin faster, much like a skater pulling their arms in. The high speed of rotation predicted by early theories would have flung the proto-Sun apart before it could have formed. However, magnetohydrodynamic effects transfer the Sun's angular momentum into the outer solar system, slowing its rotation. 
Breakdown of ideal MHD (in the form of magnetic reconnection) is known to be the likely cause of solar flares. The magnetic field in a solar active region over a sunspot can store energy that is released suddenly as a burst of motion, X-rays, and radiation when the main current sheet collapses, reconnecting the field. Magnetic confinement fusion MHD describes a wide range of physical phenomena occurring in fusion plasmas in devices such as tokamaks or stellarators. The Grad-Shafranov equation derived from ideal MHD describes the equilibrium of axisymmetric toroidal plasma in a tokamak. In tokamak experiments, the equilibrium during each discharge is routinely calculated and reconstructed, which provides information on the shape and position of the plasma controlled by currents in external coils. MHD stability theory is known to govern the operational limits of tokamaks. For example, the ideal MHD kink modes provide hard limits on the achievable plasma beta (Troyon limit) and plasma current (set by the requirement of the safety factor). Sensors Magnetohydrodynamic sensors are used for precision measurements of angular velocities in inertial navigation systems such as in aerospace engineering. Accuracy improves with the size of the sensor. The sensor is capable of surviving in harsh environments. Engineering MHD is related to engineering problems such as plasma confinement, liquid-metal cooling of nuclear reactors, and electromagnetic casting (among others). A magnetohydrodynamic drive or MHD propulsor is a method for propelling seagoing vessels using only electric and magnetic fields with no moving parts, using magnetohydrodynamics. The working principle involves electrification of the propellant (gas or water) which can then be directed by a magnetic field, pushing the vehicle in the opposite direction. Although some working prototypes exist, MHD drives remain impractical. The first prototype of this kind of propulsion was built and tested in 1965 by Steward Way, a professor of mechanical engineering at the University of California, Santa Barbara. Way, on leave from his job at Westinghouse Electric, assigned his senior-year undergraduate students to develop a submarine with this new propulsion system. In the early 1990s, a foundation in Japan (Ship & Ocean Foundation (Minato-ku, Tokyo)) built an experimental boat, the Yamato-1, which used a magnetohydrodynamic drive incorporating a superconductor cooled by liquid helium, and could travel at 15 km/h. MHD power generation fueled by potassium-seeded coal combustion gas showed potential for more efficient energy conversion (the absence of solid moving parts allows operation at higher temperatures), but failed due to cost-prohibitive technical difficulties. One major engineering problem was the failure of the wall of the primary-coal combustion chamber due to abrasion. In microfluidics, MHD is studied as a fluid pump for producing a continuous, nonpulsating flow in a complex microchannel design. MHD can be implemented in the continuous casting process of metals to suppress instabilities and control the flow. Industrial MHD problems can be modeled using the open-source software EOF-Library. Two simulation examples are 3D MHD with a free surface for electromagnetic levitation melting, and liquid metal stirring by rotating permanent magnets. Magnetic drug targeting An important task in cancer research is developing more precise methods for delivery of medicine to affected areas. 
One method involves the binding of medicine to biologically compatible magnetic particles (such as ferrofluids), which are guided to the target via careful placement of permanent magnets on the external body. Magnetohydrodynamic equations and finite element analysis are used to study the interaction between the magnetic fluid particles in the bloodstream and the external magnetic field. See also Computational magnetohydrodynamics Electrohydrodynamics Electromagnetic pump Ferrofluid Ion wind Lorentz force velocity meter Magnetic flow meter Magnetohydrodynamic generator Magnetohydrodynamic turbulence Molten salt Plasma stability Shocks and discontinuities (magnetohydrodynamics) List of textbooks in electromagnetism Further reading References Plasma theory and modeling
Magnetohydrodynamics
[ "Physics", "Chemistry" ]
4,342
[ "Magnetohydrodynamics", "Fluid dynamics", "Plasma theory and modeling", "Plasma physics" ]
40,318
https://en.wikipedia.org/wiki/Portland%20cement
Portland cement is the most common type of cement in general use around the world as a basic ingredient of concrete, mortar, stucco, and non-specialty grout. It was developed from other types of hydraulic lime in England in the early 19th century by Joseph Aspdin, and is usually made from limestone. It is a fine powder, produced by heating limestone and clay minerals in a kiln to form clinker, and then grinding the clinker with the addition of several percent (often around 5%) gypsum. Several types of portland cement are available. The most common, historically called ordinary portland cement (OPC), is grey, but white portland cement is also available. Its name is derived from its resemblance to Portland stone which is quarried on the Isle of Portland in Dorset, England. It was named by Joseph Aspdin who obtained a patent for it in 1824. His son William Aspdin is regarded as the inventor of "modern" portland cement due to his developments in the 1840s. The low cost and widespread availability of the limestone, shales, and other naturally occurring materials used in portland cement make it a relatively cheap building material. Its most common use is in the production of concrete, a composite material consisting of aggregate (gravel and sand), cement, and water. History Portland cement was developed from natural cements made in Britain beginning in the middle of the 18th century. Its name is derived from its similarity to Portland stone, a type of building stone quarried on the Isle of Portland in Dorset, England. The development of modern portland cement (sometimes called ordinary or normal portland cement) began in 1756, when John Smeaton experimented with combinations of different limestones and additives, including trass and pozzolanas, intended for the construction of a lighthouse, now known as Smeaton's Tower. In the late 18th century, Roman cement was developed and patented in 1796 by James Parker. Roman cement quickly became popular, but was largely replaced by portland cement in the 1850s. In 1811, James Frost produced a cement he called British cement. James Frost is reported to have erected a manufactory for making of an artificial cement in 1826. In 1811 Edgar Dobbs of Southwark patented a cement of the kind invented 7 years later by the French engineer Louis Vicat. Vicat's cement is an artificial hydraulic lime, and is considered the "principal forerunner" of portland cement. The name portland cement is recorded in a directory published in 1823 being associated with a William Lockwood and possibly others. In his 1824 cement patent, Joseph Aspdin called his invention "portland cement" because of its resemblance to Portland stone. Aspdin's cement was nothing like modern portland cement, but a first step in the development of modern portland cement, and has been called a "proto-portland cement". William Aspdin had left his father's company, to form his own cement manufactury. In the 1840s William Aspdin, apparently accidentally, produced calcium silicates which are a middle step in the development of portland cement. In 1848, William Aspdin further improved his cement. Then, in 1853, he moved to Germany, where he was involved in cement making. William Aspdin made what could be called "meso-portland cement" (a mix of portland cement and hydraulic lime). Isaac Charles Johnson further refined the production of "meso-portland cement" (middle stage of development), and claimed to be the real father of portland cement. 
In 1859, John Grant of the Metropolitan Board of Works, set out requirements for cement to be used in the London sewer project. This became a specification for portland cement. The next development in the manufacture of portland cement was the introduction of the rotary kiln, patented by Frederick Ransome in 1885 (U.K.) and 1886 (U.S.); which allowed a stronger, more homogeneous mixture and a continuous manufacturing process. The Hoffmann "endless" kiln which was said to give "perfect control over combustion" was tested in 1860 and shown to produce a superior grade of cement. This cement was made at the Portland Cementfabrik Stern at Stettin, which was the first to use a Hoffmann kiln. The Association of German Cement Manufacturers issued a standard on portland cement in 1878. Portland cement had been imported into the United States from Germany and England, and in the 1870s and 1880s, it was being produced by Eagle Portland cement near Kalamazoo, Michigan. In 1875, the first portland cement was produced in the Coplay Cement Company Kilns under the direction of David O. Saylor in Coplay, Pennsylvania. By the early 20th century, American-made portland cement had displaced most of the imported portland cement. Composition ASTM C150 defines portland cement as: The European Standard EN 197-1 uses the following definition: (The last two requirements were already set out in the German Standard, issued in 1909). Clinkers make up more than 90% of the cement, along with a limited amount of calcium sulphate (CaSO4, which controls the set time), and up to 5% minor constituents (fillers) as allowed by various standards. Clinkers are nodules (diameters, ) of a sintered material that is produced when a raw mixture of predetermined composition is heated to high temperature. The key chemical reaction distinguishing portland cement from other hydraulic limes occurs at these high temperatures (>) as belite (Ca2SiO4) combines with calcium oxide (CaO) to form alite (Ca3SiO5). Manufacturing Portland cement clinker is made by heating, in a cement kiln, a mixture of raw materials to a calcining temperature of above and then a fusion temperature, which is about for modern cements, to sinter the materials into clinker. The materials in cement clinker are alite, belite, tricalcium aluminate, and tetracalcium alumino ferrite. The aluminium, iron, and magnesium oxides are present as a flux allowing the calcium silicates to form at a lower temperature, and contribute little to the strength. For special cements, such as low heat (LH) and sulphate resistant (SR) types, it is necessary to limit the amount of tricalcium aluminate (3 CaO·Al2O3) formed. The major raw material for the clinker-making is usually limestone (CaCO3) mixed with a second material containing clay as source of alumino-silicate. Normally, an impure limestone which contains clay or SiO2 is used. The CaCO3 content of these limestones can be as low as 80%. Secondary raw materials (materials in the raw mix other than limestone) depend on the purity of the limestone. Some of the materials used are clay, shale, sand, iron ore, bauxite, fly ash, and slag. When a cement kiln is fired by coal, the ash of the coal acts as a secondary raw material. Cement grinding To achieve the desired setting qualities in the finished product, a quantity (2–8%, but typically 5%) of calcium sulphate (usually gypsum or anhydrite) is added to the clinker, and the mixture is finely ground to form the finished cement powder. This is achieved in a cement mill. 
The grinding process is controlled to obtain a powder with a broad particle size range, in which typically 15% by mass consists of particles below 5 μm diameter, and 5% of particles above 45 μm. The measure of fineness usually used is the 'specific surface area', which is the total particle surface area of a unit mass of cement. The rate of initial reaction (up to 24 hours) of the cement on addition of water is directly proportional to the specific surface area. Typical values are 320–380 m2·kg−1 for general purpose cements, and 450–650 m2·kg−1 for 'rapid hardening' cements. The cement is conveyed by belt or powder pump to a silo for storage. Cement plants normally have sufficient silo space for one to 20 weeks of production, depending upon local demand cycles. The cement is delivered to end users either in bags, or as bulk powder blown from a pressure vehicle into the customer's silo. In industrial countries, 80% or more of cement is delivered in bulk. Setting and hardening Cement sets when mixed with water by way of a complex series of chemical reactions still only partly understood. The different constituents slowly crystallise, and the interlocking of their crystals gives cement its strength. Carbon dioxide is slowly absorbed to convert the portlandite (Ca(OH)2) into insoluble calcium carbonate. After the initial setting, immersion in warm water will speed up setting. Gypsum is added as an inhibitor to prevent flash (or quick) setting. Use The most common use for portland cement is in the production of concrete. Concrete is a composite material consisting of aggregate (gravel and sand), cement, and water. As a construction material, concrete can be cast in almost any shape desired, and once hardened, can become a structural (load bearing) element. Concrete can be used in the construction of structural elements like panels, beams, and street furniture, or may be cast-in situ for superstructures like roads and dams. These may be supplied with concrete mixed on site, or may be provided with 'ready-mixed' concrete made at permanent mixing sites. Portland cement is also used in mortars (with sand and water only), for plasters and screeds, and in grouts (cement/water mixes squeezed into gaps to consolidate foundations, road-beds, etc.). When water is mixed with portland cement, the product sets in a few hours and hardens over a period of weeks. These processes can vary widely, depending upon the mix used and the conditions of curing of the product, but a typical concrete sets in about 6 hours and develops a compressive strength of 8 MPa in 24 hours. The strength rises to 15 MPa at 3 days, 23 MPa at 1 week, 35 MPa at 4 weeks, and 41 MPa at 3 months. In principle, the strength continues to rise slowly as long as water is available for continued hydration, but concrete is usually allowed to dry out after a few weeks and this causes strength growth to stop. Types General ASTM C150 Five types of portland cements exist, with variations of the first three according to ASTM C150. Type I portland cement is known as common or general-purpose cement. It is generally assumed unless another type is specified. It is commonly used for general construction, especially when making precast, and precast-prestressed concrete that is not to be in contact with soils or ground water. The typical compound compositions of this type are: 55% (C3S), 19% (C2S), 10% (C3A), 7% (C4AF), 2.8% MgO, 2.9% (SO3), 1.0% ignition loss, and 1.0% free CaO (utilizing cement chemist notation). 
A limitation on the composition is that the (C3A) shall not exceed 15%. Type II provides moderate sulphate resistance, and gives off less heat during hydration. This type of cement costs about the same as type I. Its typical compound composition is: 51% (C3S), 24% (C2S), 6% (C3A), 11% (C4AF), 2.9% MgO, 2.5% (SO3), 0.8% ignition loss, and 1.0% free CaO. A limitation on the composition is that the (C3A) shall not exceed 8%, which reduces its vulnerability to sulphates. This type is for general construction exposed to moderate sulphate attack, and is meant for use when concrete is in contact with soils and ground water, especially in the western United States due to the high sulphur content of the soils. Because of similar price to that of type I, type II is much used as a general purpose cement, and the majority of portland cement sold in North America meets this specification. Note: Cement meeting (among others) the specifications for types I and II has become commonly available on the world market. Type III has relatively high early strength. Its typical compound composition is: 57% (C3S), 19% (C2S), 10% (C3A), 7% (C4AF), 3.0% MgO, 3.1% (SO3), 0.9% ignition loss, and 1.3% free CaO. This cement is similar to type I, but ground finer. Some manufacturers make a separate clinker with higher C3S and/or C3A content, but this is increasingly rare, and the general purpose clinker is usually used, ground to a specific surface area typically 50–80% higher. The gypsum level may also be increased a small amount. This gives the concrete using this type of cement a three-day compressive strength equal to the seven-day compressive strength of types I and II. Its seven-day compressive strength is almost equal to 28-day compressive strengths of types I and II. The only downside is that the six-month strength of type III is the same or slightly less than that of types I and II. Therefore, the long-term strength is sacrificed. It is usually used for precast concrete manufacture, where high one-day strength allows fast turnover of molds. It may also be used in emergency construction and repairs, and construction of machine bases and gate installations. Type IV portland cement is generally known for its low heat of hydration. Its typical compound composition is: 28% (C3S), 49% (C2S), 4% (C3A), 12% (C4AF), 1.8% MgO, 1.9% (SO3), 0.9% ignition loss, and 0.8% free CaO. The percentages of (C2S) and (C4AF) are relatively high and (C3S) and (C3A) are relatively low. A limitation on this type is that the maximum percentage of (C3A) is seven, and the maximum percentage of (C3S) is thirty-five. This causes the heat given off by the hydration reaction to develop at a slower rate. Consequently, the strength of the concrete develops slowly. After one or two years the strength is higher than the other types after full curing. This cement is used for very large concrete structures, such as dams, which have a low surface to volume ratio. This type of cement is generally not stocked by manufacturers, but some might consider a large special order. This type of cement has not been made for many years, because portland-pozzolan cements and ground granulated blast furnace slag addition offer a cheaper and more reliable alternative. Type V is used where sulphate resistance is important. Its typical compound composition is: 38% (C3S), 43% (C2S), 4% (C3A), 9% (C4AF), 1.9% MgO, 1.8% (SO3), 0.9% ignition loss, and 0.8% free CaO. This cement has a very low (C3A) composition which accounts for its high sulphate resistance. 
The maximum content of (C3A) allowed is 5% for type V portland cement. Another limitation is that the (C4AF) + 2(C3A) composition cannot exceed 20%. This type is used in concrete to be exposed to alkali soil and ground water sulphates which react with (C3A) causing disruptive expansion. It is unavailable in many places, although its use is common in the western United States and Canada. As with type IV, type V portland cement has mainly been supplanted by the use of ordinary cement with added ground granulated blast furnace slag or tertiary blended cements containing slag and fly ash. Types Ia, IIa, and IIIa have the same composition as types I, II, and III. The only difference is that in Ia, IIa, and IIIa, an air-entraining agent is ground into the mix. The air-entrainment must meet the minimum and maximum optional specification found in the ASTM manual. These types are only available in the eastern United States and Canada, only on a limited basis. They are a poor approach to air-entrainment which improves resistance to freezing under low temperatures. Types II(MH) and II(MH)a have a similar composition as types II and IIa, but with a mild heat. EN 197 norm The European norm EN 197-1 defines five classes of common cement that comprise portland cement as a main constituent. These classes differ from the ASTM classes. Constituents that are permitted in portland-composite cements are artificial pozzolans (blast furnace slag (in fact a latent hydraulic binder), silica fume, and fly ashes), or natural pozzolans (siliceous or siliceous aluminous materials such as volcanic ash glasses, calcined clays and shale). CSA A3000-08 The Canadian standards describe six main classes of cement, four of which can also be supplied as a blend containing ground limestone (where a suffix L is present in the class names). White portland cement White portland cement or white ordinary portland cement (WOPC) is similar to ordinary gray portland cement in all respects, except for its high degree of whiteness. Obtaining this colour requires high purity raw materials (low Fe2O3 content), and some modification to the method of manufacture, among others a higher kiln temperature required to sinter the clinker in the absence of ferric oxides acting as a flux in normal clinker. As Fe2O3 contributes to decrease the melting point of the clinker (normally 1450 °C), the white cement requires a higher sintering temperature (around 1600 °C). Because of this, it is somewhat more expensive than the grey product. The main requirement is to have a low iron content which should be less than 0.5 wt.% expressed as Fe2O3 for white cement, and less than 0.9 wt.% for off-white cement. It also helps to have the iron oxide as ferrous oxide (FeO) which is obtained via slightly reducing conditions in the kiln, i.e., operating with zero excess oxygen at the kiln exit. This gives the clinker and cement a green tinge. Other metallic oxides such as Cr2O3 (green), MnO (pink), TiO2 (white), etc., in trace content, can also give colour tinges, so for a given project it is best to use cement from a single batch. Safety issues Bags of cement routinely have health and safety warnings printed on them, because not only is cement highly alkaline, but the setting process is also exothermic. As a result, wet cement is strongly caustic and can easily cause severe skin burns if not promptly washed off with water. Similarly, dry cement powder in contact with mucous membranes can cause severe eye or respiratory irritation. 
The reaction of cement dust with moisture in the sinuses and lungs can also cause a chemical burn, as well as headaches, fatigue, and lung cancer. The production of comparatively low-alkalinity cements (pH<11) is an area of ongoing investigation. In Scandinavia, France, and the United Kingdom, the level of chromium(VI), which is considered to be toxic and a major skin irritant, may not exceed 2 parts per million (ppm). In the US, the Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for portland cement exposure in the workplace as 50 mppcf (million particles per cubic foot) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. At levels of 5000 mg/m3, portland cement is immediately dangerous to life and health. Environmental effects Portland cement manufacture can cause environmental impacts at all stages of the process. These include emissions of airborne pollution in the form of dust; gases; noise and vibration when operating machinery and during blasting in quarries; consumption of large quantities of fuel during manufacture; release of CO2 from the raw materials during manufacture; and damage to countryside from quarrying. Equipment to reduce dust emissions during quarrying and manufacture of cement is widely used, and equipment to trap and separate exhaust gases is coming into increased use. Environmental protection also includes the re-integration of quarries into the countryside after they have been closed down, by returning them to nature or re-cultivating them. Portland cement is caustic, so it can cause chemical burns. The powder can cause irritation or, with severe exposure, lung cancer, and can contain a number of hazardous components, including crystalline silica and hexavalent chromium. Environmental concerns are the high energy consumption required to mine, manufacture, and transport the cement, and the related air pollution, including the release of the greenhouse gas carbon dioxide, dioxin, , , and particulates. Production of portland cement contributes about 10% of world carbon dioxide emissions. The International Energy Agency has estimated that cement production will increase by between 12 and 23% by 2050 to meet the needs of the world's growing population. There are several ongoing research efforts targeting a suitable replacement of portland cement by supplementary cementitious materials. Epidemiologic Notes and Reports Sulfur Dioxide Exposure in Portland Cement Plants, from the Centers for Disease Control, states: An independent research effort of AEA Technology to identify critical issues for the cement industry today concluded that the most important environment, health and safety performance issues facing the cement industry are atmospheric releases (including greenhouse gas emissions, dioxin, , , and particulates), accidents, and worker exposure to dust. The CO2 associated with portland cement manufacture comes mainly from four sources: Overall, with nuclear or hydroelectric power, and efficient manufacturing, CO2 generation can be reduced to per kg cement, but can be twice as high. The thrust of innovation for the future is to reduce sources 1 and 2 by modification of the chemistry of cement, by the use of wastes, and by adopting more efficient processes. 
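A rough sense of why the raw materials themselves are such a large CO2 source comes from the calcination of limestone (CaCO3 → CaO + CO2). The sketch below is a back-of-envelope estimate only: the 65% CaO content of clinker is an assumed typical value, and fuel-related emissions are not included.

```python
# Back-of-envelope estimate of process CO2 released by calcination alone.
# The clinker CaO content is an assumed typical value, used only for
# illustration; CO2 from burning fuel in the kiln is not counted here.

M_CaO = 56.08   # g/mol, molar mass of calcium oxide
M_CO2 = 44.01   # g/mol, molar mass of carbon dioxide

cao_fraction = 0.65                                # kg CaO per kg clinker (assumed)
co2_per_kg_clinker = cao_fraction * M_CO2 / M_CaO  # one CO2 released per CaO formed

print(f"Process CO2 from calcination: ~{co2_per_kg_clinker:.2f} kg per kg clinker")
```

On these assumptions roughly half a kilogram of CO2 is released per kilogram of clinker before any fuel is burned, which is why changes to the chemistry of the cement and substitution by other materials feature so prominently among the mitigation options.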
Although cement manufacturing is clearly a very large CO2 emitter, concrete (of which cement makes up about 15%) compares quite favourably with other modern building systems in this regard. Traditional materials such as lime-based mortars as well as timber and earth-based construction methods emit significantly less CO2. Cement plants used for waste disposal or processing Due to the high temperatures inside cement kilns, combined with the oxidising (oxygen-rich) atmosphere and long residence times, cement kilns are used as a processing option for various types of waste streams; indeed, they efficiently destroy many hazardous organic compounds. The waste streams also often contain combustible materials which allow the substitution of part of the fossil fuel normally used in the process. Waste materials used in cement kilns as a fuel supplement: Car and truck tires – steel belts are easily tolerated in the kilns Paint sludge from automobile industries Waste solvents and lubricants Meat and bone meal – slaughterhouse waste due to bovine spongiform encephalopathy contamination concerns Waste plastics Sewage sludge Rice hulls Sugarcane waste Used wooden railroad ties (railway sleepers) Spent cell liner from the aluminium smelting industry (also called spent pot liner) Portland cement manufacture also has the potential to benefit from using industrial byproducts from the waste stream. These include in particular: Slag Fly ash (from power plants) Silica fume (from steel mills) Synthetic gypsum (from desulphurisation) See also References External links World Production of Hydraulic Cement, by Country Alpha The Guaranteed Portland Cement Company: 1917 Trade Literature from Smithsonian Institution Libraries A cracking alternative to cement Aerial views of the world's largest concentration of cement manufacturing capacity, Saraburi Province, Thailand, at CDC – NIOSH Pocket Guide to Chemical Hazards Further reading Cement Concrete English inventions Limestone Cement Building materials 19th-century inventions de:Zement#Portlandzement
Portland cement
[ "Physics", "Engineering" ]
5,103
[ "Structural engineering", "Building engineering", "Construction", "Materials", "Building materials", "Concrete", "Matter", "Architecture" ]
40,364
https://en.wikipedia.org/wiki/Electrometer
An electrometer is an electrical instrument for measuring electric charge or electrical potential difference. There are many different types, ranging from historical handmade mechanical instruments to high-precision electronic devices. Modern electrometers based on vacuum tube or solid-state technology can be used to make voltage and charge measurements with very low leakage currents, down to 1 femtoampere. A simpler but related instrument, the electroscope, works on similar principles but only indicates the relative magnitudes of voltages or charges. Historical electrometers Gold-leaf electroscope The gold-leaf electroscope was one of the instruments used to indicate electric charge. It is still used for science demonstrations but has been superseded in most applications by electronic measuring instruments. The instrument consists of two thin leaves of gold foil suspended from an electrode. When the electrode is charged by induction or by contact, the leaves acquire similar electric charges and repel each other due to the Coulomb force. Their separation is a direct indication of the net charge stored on them. On the glass opposite the leaves, pieces of tin foil may be pasted, so that when the leaves diverge fully they may discharge into the ground. The leaves may be enclosed in a glass envelope to protect them from drafts, and the envelope may be evacuated to minimize charge leakage. This principle has been used to detect ionizing radiation, as seen in the quartz fibre electrometer and Kearny fallout meter. This type of electroscope usually acts as an indicator and not a measuring device, although it can be calibrated. A calibrated electrometer with a more robust aluminium indicator was invented by Ferdinand Braun and first described in 1887. According to Braun, the standard gold-leaf electrometer is good up to about with a resolution of using an ocular micrometer. For larger voltages up to Braun's instrument can achieve a resolution of . The instrument was developed in the 18th century by several researchers, among them Abraham Bennet (1787) and Alessandro Volta. Early quadrant electrometer While the term "quadrant electrometer" eventually referred to Kelvin's version, this term was first used to describe a simpler device. Its body consists of an upright stem of wood affixed to a semicircle of ivory with angle markings. A light cork ball hangs by a string from a pivot at the center of the semicircle and makes contact with the stem. When the instrument is placed upon a charged body, the stem and ball become charged and repel each other. The amount of repulsion is quantified by reading the angle between the string and the stem off the semicircle, though the measured angle is not in direct proportion to the charge. Early inventors included William Henley (1770) and Horace-Bénédict de Saussure. Coulomb's electrometer Torsion is used to give a measurement more sensitive than repulsion of gold leaves or cork-balls. It consists of a glass cylinder with a glass tube on top. In the axes of the tube is a glass thread, the lower end of this holds a bar of gum lac, with a gilt pith ball at each extremity. Through another aperture on the cylinder, another gum lac rod with gilt balls may be introduced. This is called the carrier rod. If the lower ball of the carrier rod is charged when it is entered into the aperture, this will repel one of the movable balls inside. An index and scale (not pictured) is attached to the top of the twistable glass rod. 
The number of degrees twisted to bring the balls back together is in exact proportion of the amount of charge of the ball of the carrier rod. Francis Ronalds, the inaugural Director of the Kew Observatory, made important improvements to the Coulomb torsion balance around 1844 and the modified instrument was sold by London instrument-makers. Ronalds used a thin suspended needle rather than the gum lac bar and replaced the carrier rod with a fixed piece in the plane of the needle. Both were metal, as was the suspending line and its surrounding tube, so that the needle and the fixed piece could be charged directly through wire connections. Ronalds also employed a Faraday cage and trialled photography to record the readings continuously. It was the forerunner of Kelvin's quadrant electrometer (described below). Peltier electrometer Developed by Peltier, this uses a form of magnetic compass to measure deflection by balancing the electrostatic force with a magnetic needle. Bohnenberger electrometer The Bohnenberger electrometer, developed by J. G. F. von Bohnenberger from an invention by T. G. B. Behrens, consists of a single gold leaf suspended vertically between the anode and cathode of a dry pile. Any charge imparted to the gold leaf causes it to move toward one or the other pole; thus, the sign of the charge as well as its approximate magnitude may be gauged. Attraction electrometer Also known as "attracted disk electrometers", attraction electrometers are sensitive balances measuring the attraction between charged disks. William Snow Harris is credited with the invention of this instrument, which was further improved by Lord Kelvin. Kelvin's quadrant electrometer Developed by Lord Kelvin, this is the most sensitive and accurate of all the mechanical electrometers. The original design uses a light aluminum sector suspended inside a drum cut into four segments. The segments are insulated and connected diagonally in pairs. The charged aluminum sector is attracted to one pair of segments and repelled from the other. The deflection is observed by a beam of light reflected from a small mirror attached to the sector, just as in a galvanometer. The engraving on the right shows a slightly different form of this electrometer, using four flat plates rather than closed segments. The plates can be connected externally in the conventional diagonal way (as shown), or in a different order for specific applications. A more sensitive form of quadrant electrometer was developed by Frederick Lindemann. It employs a metal-coated quartz fiber instead of an aluminum sector. The deflection is measured by observing the movement of the fiber under a microscope. Initially used for measuring star light, it was employed for the infrared detection of airplanes in the early stages of World War II. Some mechanical electrometers were housed inside a cage often referred to as a “bird cage”. This is a form of Faraday Cage that protected the instrument from external electrostatic charges. Electrograph Electricity readings may be recorded continuously with a device known as an electrograph. Francis Ronalds created an early electrograph around 1814 in which the changing electricity made a pattern in a rotating resin-coated plate. It was employed at Kew Observatory and the Royal Observatory, Greenwich in the 1840s to create records of variations in atmospheric electricity. In 1845, Ronalds invented photographic means of registering the atmospheric electricity. 
The photosensitive surface was pulled slowly past the aperture diaphragm of the camera box, which also housed an electrometer, and captured ongoing movements of the electrometer indices as a trace. Kelvin used similar photographic means for his quadrant electrometer (see above) in the 1860s. Modern electrometers A modern electrometer is a highly sensitive electronic voltmeter whose input impedance is so high that the current flowing into it can be considered, for most practical purposes, to be zero. The actual value of input resistance for modern electronic electrometers is around 10^14 Ω, compared to around 10^10 Ω for nanovoltmeters. Owing to the extremely high input impedance, special design considerations (such as driven shields and special insulation materials) must be applied to avoid leakage current. Among other applications, electrometers are used in nuclear physics experiments as they are able to measure the tiny charges left in matter by the passage of ionizing radiation. The most common use for modern electrometers is the measurement of radiation with ionization chambers, in instruments such as Geiger counters. Vibrating reed electrometers Vibrating reed electrometers use a variable capacitor formed between a moving electrode (in the form of a vibrating reed) and a fixed input electrode. As the distance between the two electrodes varies, the capacitance also varies and electric charge is forced in and out of the capacitor. The alternating current signal produced by the flow of this charge is amplified and used as an analogue for the DC voltage applied to the capacitor. The DC input resistance of the electrometer is determined solely by the leakage resistance of the capacitor, and is typically extremely high (although its AC input impedance is lower). For convenience of use, the vibrating reed assembly is often attached by a cable to the rest of the electrometer. This allows for a relatively small unit to be located near the charge to be measured while the much larger reed-driver and amplifier unit can be located wherever it is convenient for the operator. Valve electrometers Valve electrometers use a specialized vacuum tube (thermionic valve) with a very high gain (transconductance) and input resistance. The input current is allowed to flow into the high impedance grid, and the voltage so generated is vastly amplified in the anode (plate) circuit. Valves designed for electrometer use have leakage currents as low as a few femtoamperes (10^-15 amperes). Such valves must be handled with gloved hands as the salts left on the glass envelope can provide leakage paths for these tiny currents. In a specialized circuit called an inverted triode, the roles of anode and grid are reversed. This places the control electrode at a maximum distance from the space-charge region surrounding the filament, minimizing the number of electrons collected by the control electrode, and thus minimizing the input current. Solid-state electrometers The most modern electrometers consist of a solid state amplifier using one or more field-effect transistors, connections for external measurement devices, and usually a display and/or data-logging connections. The amplifier amplifies small currents so that they are more easily measured. The external connections are usually of a co-axial or tri-axial design, and allow attachment of diodes or ionization chambers for ionising radiation measurement. The display or data-logging connections allow the user to see the data or record it for later analysis. 
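The arithmetic behind these specifications can be sketched in a few lines. The component values below are assumptions chosen only to make the numbers concrete, not the ratings of any particular instrument; they illustrate why an input resistance of the order of 10^14 Ω matters for femtoampere currents, and how a charge is inferred from the voltage developed on a known input capacitance.

```python
# Illustrative electrometer arithmetic; all component values are assumed.

R_input = 1e14      # ohm, assumed input resistance
C_input = 20e-12    # farad, assumed capacitance of the input terminal
V_reading = 0.5     # volt, hypothetical voltage developed on that capacitance

# Charge inferred from the known input capacitance (Q = C * V):
Q = C_input * V_reading
print(f"stored charge: {Q:.2e} C (about {Q / 1e-12:.0f} pC)")

# A 1 fA current only develops a measurable steady voltage because the
# input resistance is enormous (V = I * R):
I_small = 1e-15     # ampere (1 femtoampere)
print(f"steady voltage from 1 fA across R_input: {I_small * R_input:.2f} V")

# The same current integrated on the input capacitance (I = C * dV/dt)
# gives a slow, measurable voltage drift:
drift = I_small / C_input
print(f"drift rate from 1 fA on C_input: {drift * 1000:.3f} mV/s")
```

The Q = C * V relation is what the charge-measuring mode described below relies on.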
Electrometers designed for use with ionization chambers may include a high-voltage power supply, which is used to bias the ionization chamber. Solid-state electrometers are often multipurpose devices that can measure voltage, charge, resistance and current. They measure voltage by means of "voltage balancing", in which the input voltage is compared with an internal reference voltage source using an electronic circuit with a very high input impedance (of the order of 10^14 Ω). A similar circuit modified to act as a current-to-voltage converter enables the instrument to measure currents as small as a few femtoamperes. Combined with an internal voltage source, the current measuring mode can be adapted to measure very high resistances, of the order of 10^17 Ω. Finally, by calculation from the known capacitance of the electrometer's input terminal, the instrument can measure very small electric charges, down to a small fraction of a picocoulomb. See also Electrical measurements Electron Electroscope Radiation Faraday cup electrometer References Dr. J. Frick, Physical Technics; Or Practical Instructions for Making Experiments in Physics, translated by John D. Easter, Ph.D. - J. B. Lippincott & Co., Philadelphia 1862 Robert Mfurgeson Ph.D., Electricity - William and Robert Chambers, London and Edinburgh 1866 Silvanus P. Thompson, Elementary Lessons in Electricity and Magnetism - Macmillan and Co. Limited, London 1905 Jones, R. V., Instruments and Experiences - John Wiley and Sons, London 1988 External links Build this FET electrometer - A very simple circuit - 2 components Simple FET electrometer - A simple bridged circuit An op-amp electrometer Early electrometers Charging an Electroscope by Induction Using a Negatively-Charged Balloon from Multimedia Physics Studios Voltmeters Electrical meters Measuring instruments Electricity
Electrometer
[ "Physics", "Technology", "Engineering" ]
2,499
[ "Voltmeters", "Physical quantities", "Measuring instruments", "Voltage", "Electrical meters" ]
6,144,801
https://en.wikipedia.org/wiki/Critical%20speed
In solid mechanics, in the field of rotordynamics, the critical speed is the theoretical angular velocity that excites the natural frequency of a rotating object, such as a shaft, propeller, leadscrew, or gear. As the speed of rotation approaches the object's natural frequency, the object begins to resonate, which dramatically increases system vibration. The resulting resonance occurs regardless of orientation. When the rotational speed is equal to the natural frequency, then that speed is referred to as a critical speed. Critical speed of shafts All rotating shafts, even in the absence of external load, will deflect during rotation. The unbalanced mass of the rotating object causes deflection that will create resonant vibration at certain speeds, known as the critical speeds. The magnitude of deflection depends upon the following: Stiffness of the shaft and its support Total mass of shaft and attached parts Unbalance of the mass with respect to the axis of rotation The amount of damping in the system In general, it is necessary to calculate the critical speed of a rotating shaft, such as a fan shaft, in order to avoid issues with noise and vibration. Critical speed equation Like vibrating strings and other elastic structures, shafts and beams can vibrate in different mode shapes, with corresponding natural frequencies. The first vibrational mode corresponds to the lowest natural frequency. Higher modes of vibration correspond to higher natural frequencies. Often when considering rotating shafts, only the first natural frequency is needed. There are two main methods used to calculate critical speed—the Rayleigh–Ritz method and Dunkerley's method. Both calculate an approximation of the first natural frequency of vibration, which is assumed to be nearly equal to the critical speed of rotation. The Rayleigh–Ritz method is discussed here. For a shaft that is divided into n segments, the first natural frequency for a given beam, in rad/s, can be approximated as ω1 ≈ sqrt( g (w1 δ1 + w2 δ2 + ... + wn δn) / (w1 δ1^2 + w2 δ2^2 + ... + wn δn^2) ), where g is the acceleration of gravity, the weights w1 ... wn are the weights of each segment, and the δ1 ... δn are the static deflections (under gravitational loading only) of the center of each segment. Generally speaking, if n is 2 or higher, this method tends to slightly overestimate the first natural frequency, with the estimate becoming better the higher n is. If n is only 1, this method tends to underestimate the first natural frequency, but the equation simplifies to ω1 ≈ sqrt(g / δ), where δ is the max static deflection of the shaft. These speeds are in rad/s, but can be converted to RPM by multiplying by 60/(2π) ≈ 9.55. If a beam has multiple types of loading, deflections can be found for each, and then summed. The static deflection expresses the relationship between rigidity of the shaft and inertial forces; it includes all the loads applied to the shaft when placed horizontally. However, the relationship is valid no matter what the orientation of the shaft is. A system's critical speeds depend upon the magnitude, location, and relative phase of shaft unbalance, the shaft's geometry and mechanical properties, and the stiffness and mass properties of the support structure. Many practical applications suggest as good practice that the maximum operating speed should not exceed 75% of the critical speed; however, some systems operate above the first critical speed, or supercritically. 
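As a concrete illustration of the Rayleigh–Ritz estimate above, the short sketch below evaluates the two sums for a hypothetical three-segment shaft; the weights and deflections are invented round numbers, not measurements of any real machine.

```python
import math

def critical_speed_rad_s(weights, deflections, g=9.81):
    """Rayleigh-Ritz estimate of the first natural frequency in rad/s.

    weights     -- weight of each shaft segment in newtons
    deflections -- static deflection at each segment centre in metres
    """
    num = sum(w * d for w, d in zip(weights, deflections))
    den = sum(w * d * d for w, d in zip(weights, deflections))
    return math.sqrt(g * num / den)

# Hypothetical three-segment shaft (illustrative values only).
weights = [120.0, 200.0, 150.0]          # N
deflections = [0.4e-3, 0.9e-3, 0.5e-3]   # m

omega = critical_speed_rad_s(weights, deflections)
rpm = omega * 60 / (2 * math.pi)         # rad/s -> revolutions per minute

print(f"First critical speed: about {omega:.0f} rad/s ({rpm:.0f} RPM)")
print(f"75% operating guideline: stay below about {0.75 * rpm:.0f} RPM")
```

Shafts that are intended to run supercritically still have to pass through this first critical speed during run-up and coast-down.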
In such cases, it is important to accelerate the shaft through the first natural frequency quickly so that large deflections don't develop. See also Damping ratio Oscillate Natural frequency Resonance Campbell diagram Vibration References Mechanical engineering hu:Kritikus fordulatszám it:Velocità critica flessionale
Critical speed
[ "Physics", "Engineering" ]
762
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
20,689,609
https://en.wikipedia.org/wiki/Atomic%20recoil
In nuclear physics, atomic recoil is the result of the interaction of an atom with an energetic elementary particle, when the momentum of the interacting particle is transferred to the atom as a whole without altering non-translational degrees of freedom of the atom. It is a purely quantum phenomenon. Atomic recoil was discovered by Harriet Brooks, Canada's first female nuclear physicist, in 1904, but interpreted wrongly. Otto Hahn reworked, explained and demonstrated it in 1908/09. The physicist Walther Gerlach described radioactive recoil as "a profoundly significant discovery in physics with far-reaching consequences". If the transferred momentum of atomic recoil is enough to disrupt the crystal lattice of the material, a vacancy defect is formed; therefore a phonon is generated. Closely related to atomic recoil are electron recoil (see photoexcitation and photoionization) and nuclear recoil, in which momentum transfers to the atomic nucleus as a whole. Nuclear recoil can cause the nucleus to be displaced from its normal position in the crystal lattice, which can result in the daughter atom being more susceptible to dissolution. This leads for example to an increase in the ratio of 234U to 238U in certain cases, which can be exploited in dating (see Uranium–thorium dating). In some cases, quantum effects can forbid momentum transfer to an individual nucleus, and momentum is transferred to the crystal lattice as a whole (see Mössbauer effect). Mathematical treatment Let us consider an atom or nucleus of rest mass M that emits a particle (a proton, neutron, alpha particle, neutrino, or gamma ray). In the simplest situation, the nucleus recoils with the same momentum p as the particle has. The total energy of the "daughter" nucleus afterwards is E_D = sqrt(m_D^2 c^4 + p^2 c^2), whereas that of the emitted particle is E_p = sqrt(m^2 c^4 + p^2 c^2), where m_D and m are the rest masses of the daughter nucleus and the particle respectively. The sum of these must equal the rest energy of the original nucleus: sqrt(m_D^2 c^4 + p^2 c^2) + sqrt(m^2 c^4 + p^2 c^2) = M c^2, or sqrt(m_D^2 c^4 + p^2 c^2) = M c^2 - sqrt(m^2 c^4 + p^2 c^2). Squaring both sides gives: m_D^2 c^4 + p^2 c^2 = M^2 c^4 - 2 M c^2 sqrt(m^2 c^4 + p^2 c^2) + m^2 c^4 + p^2 c^2, or sqrt(m^2 c^4 + p^2 c^2) = (M^2 + m^2 - m_D^2) c^2 / (2M). Again squaring both sides gives: m^2 c^4 + p^2 c^2 = (M^2 + m^2 - m_D^2)^2 c^4 / (4M^2), or p^2 c^2 = ((M^2 + m^2 - m_D^2)^2 - 4 M^2 m^2) c^4 / (4M^2). Note that (M - m_D - m) c^2 is the energy released by the decay, which we may designate Δ. For the total energy of the particle we have: E_p = (M^2 + m^2 - m_D^2) c^2 / (2M). So the kinetic energy imparted to the particle is: T_p = E_p - m c^2 = Δ (M + m_D - m) / (2M). Similarly, the kinetic energy imparted to the daughter nucleus is: T_D = Δ (M + m - m_D) / (2M). When the emitted particle is a proton, neutron, or alpha particle the fraction of the decay energy going to the particle is approximately m_D / (m_D + m), and the fraction going to the daughter nucleus approximately m / (m_D + m). For neutrinos and gamma rays, the departing particle gets almost all the energy, the fraction going to the daughter nucleus being only Δ / (2 M c^2). The speed of the emitted particle is given by p c^2 divided by the total energy: v_p = p c^2 / E_p. Similarly, the speed of the recoiling nucleus is: v_D = p c^2 / E_D. If we take m = 0 for neutrinos and gamma rays, this simplifies to: v_D = c (M^2 - m_D^2) / (M^2 + m_D^2). For similar decay energies, the recoil from emitting an alpha ray will be much greater than the recoil from emitting a neutrino (upon electron capture) or a gamma ray. For decays that produce two particles as well as the daughter nuclide, the above formulas can be used to find the maximum energy, momentum, or speed of any of the three, by assuming that the lighter of the other two ends up with a speed of zero. 
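A short numerical sketch of the two-body formulas above makes the size of the recoil concrete. The masses and the 5 MeV decay energy below are invented round numbers rather than data for any real nuclide.

```python
U = 931.494  # MeV per atomic mass unit (approximate conversion)

def two_body_split(M, m_D, m):
    """Exact kinetic energies (in MeV) of the emitted particle and the
    recoiling daughter in a two-body decay; all masses in MeV/c^2."""
    delta = M - m_D - m                          # decay energy
    T_particle = delta * (M + m_D - m) / (2 * M)
    T_daughter = delta * (M + m - m_D) / (2 * M)
    return T_particle, T_daughter

# Hypothetical alpha decay: daughter of 220 u, alpha of 4.0026 u,
# parent mass chosen so that the decay energy is exactly 5 MeV.
m_alpha = 4.0026 * U
m_daughter = 220.0 * U
M_parent = m_daughter + m_alpha + 5.0

T_alpha, T_recoil = two_body_split(M_parent, m_daughter, m_alpha)
print(f"alpha particle: {T_alpha:.3f} MeV, daughter recoil: {T_recoil:.3f} MeV")
# Close to the approximate split m_D/(m_D + m): roughly 98% of the 5 MeV
# goes to the alpha particle and roughly 2% to the recoiling daughter.

# Gamma emission of 1 MeV from the same daughter (massless "particle"):
# recoil energy ~ delta^2 / (2 M c^2), only a couple of electronvolts.
E_gamma = 1.0
print(f"gamma-emission recoil: {E_gamma**2 / (2 * m_daughter) * 1e6:.1f} eV")
```

Treating a decay with three bodies in the final state the same way, with the lighter of the other two products assumed to stay at rest, turns the search for a maximum into the same two-body calculation.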
For example, the maximum energy of the neutrino, if we assume its rest mass to be zero, is found by using the formula as though only the daughter and the neutrino are involved: Note that here is not the mass of the neutral daughter isotope, but that minus the electron mass: With beta decay, the maximum recoil energy of the daughter nuclide, as a fraction of the decay energy, is greater than either of the approximations given above, and The first ignores the decay energy, and the second ignores the mass of the beta particle, but with beta decay these two are often comparable and neither can be ignored (see Beta decay#Energy release). References Bibliography Further reading nuclear recoil Britannica Online Encyclopedia Atomic physics
Atomic recoil
[ "Physics", "Chemistry" ]
797
[ "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
20,693,354
https://en.wikipedia.org/wiki/Deployment%20cost%E2%80%93benefit%20selection%20in%20physiology
Deployment cost–benefit selection in physiology concerns the costs and benefits of physiological processes that can be deployed and selected in regard to whether or not they will increase an animal's survival and biological fitness. Variably deployable physiological processes relate mostly to processes that defend or clear infections, as these are optional while also having high costs and circumstance-linked benefits. They include immune system responses, fever, antioxidants and the plasma level of iron. Notable determining factors are life history stage and resource availability. Immunity Activating the immune system has the present and future benefit of clearing infections, but it is also expensive, both in present high metabolic energy consumption and in the risk of resulting in a future immune-related disorder. Therefore, an adaptive advantage exists if an animal can control its deployment in regard to actuary-like evaluations of future benefits and costs as to its biological fitness. In many circumstances, such trade-off calculations explain why immune responses are suppressed and infections are tolerated. Circumstances where immunity is not activated due to lack of an actuarial benefit include: Malnutrition Old age Hibernation Parasitism (low or high risk) Sexually transmitted diseases (low or high risk) Light patterns associated with winter (probable resource shortage) Fever Similar cost–benefit trade-offs apply to the antibacterial and antiviral effects of fever (increased body temperature). Fever has the future benefit of clearing infections since it reduces the replication of bacteria and viruses. But it also has great present metabolic (BMR) cost, and the risk of hyperpyrexia. Where it is achieved internally, each degree rise in blood temperature raises BMR by 10–15%. 90% of the total cost of fighting pneumonia goes, for example, on energy devoted to raising body temperature. During sepsis, the resulting fever can raise BMR by 55% and cause a 15% to 30% loss of body mass. Circumstances in which fever deployment is not selected or is reduced include: Aged individuals: the burden of tolerating infection will exist for a short time, which reduces the actuarial future benefits of clearing an infection compared to the costs of its removal. This change favors reduced or no deployment of fever. When internal resources are limited (such as in winter), and the ability to afford high expenditure on increased metabolism is reduced. This increases the risks of activating fever relative to its potential benefit, and animals are less likely to use fever to fight infections. Late pregnancy Antioxidants Antioxidants such as carotenoids, vitamin C, vitamin E, and enzymes such as superoxide dismutase (SOD) and glutathione peroxidase (GPx) can protect against reactive oxygen species that damage DNA, proteins and lipids, and result in cell senescence and death. A cost exists in creating or obtaining these antioxidants. This creates a conflict between the biological fitness benefits of future survival and the use of these antioxidants to support present reproductive success. In some birds, antioxidants are diverted from maintaining the body to reproduction for this reason, with the result that they have accelerated senescence. Related to this, birds can show their biological capacity to afford the cost of diverting antioxidants (such as carotenoids) in the form of pigments into plumage as a costly signal. 
Hypoferremia Iron is vital to biological processes, not only of a host, but also of bacteria infecting the host. A biological fitness advantage can exist for hosts to reduce the availability of iron within themselves to such bacteria (hypoferremia), even though this happens at the cost of the host impairing itself with anemia. The potential benefits of such self-impairment are illustrated by the paradoxical effect that providing iron supplements to those with iron deficiency (which interferes with this antibacterial action) can result in an individual being cured of anemia but having increased bacterial illness. See also Adaptation Cost–benefit analysis Evolutionary medicine Notes Evolutionary biology Physiology
Deployment cost–benefit selection in physiology
[ "Biology" ]
831
[ "Evolutionary biology", "Physiology" ]
20,694,764
https://en.wikipedia.org/wiki/Solar%20radiation%20modification
Solar radiation modification (SRM) (or solar radiation management or solar geoengineering) is a group of large-scale approaches to limit global warming by increasing the amount of sunlight (solar radiation) that is reflected away from Earth and back to space. Among the potential approaches, stratospheric aerosol injection (SAI) is the most-studied, followed by marine cloud brightening (MCB); others, such as ground- and space-based approaches, show less potential or feasibility and receive less attention. SRM could be a supplement to climate change mitigation and adaptation measures, but would not be a substitute for reducing greenhouse gas emissions. SRM is a form of climate engineering or geoengineering. Scientific studies, based on evidence from climate models, have consistently shown that some forms of SRM could reduce global warming and many effects of climate change. However, because warming from greenhouse gases and cooling from SRM would operate differently across latitudes and seasons, a world where global warming would be offset by SRM would have a different climate from one where this warming did not occur in the first place. SRM would therefore pose environmental risks, as would a warmed world without SRM. Confidence in the current projections of how SRM would affect regional climate and ecosystems is low. Furthermore, a suboptimal implementation of SRM, such as starting or stopping suddenly or intervening too strongly in the Earth's energy balance, would increase environmental risks. SRM presents political, social and ethical challenges. A common concern is that attention to it would lessen efforts to reduce greenhouse gas emissions. Because some SRM approaches appear to be technically feasible and have relatively low direct financial costs, some countries could be capable of deploying it on their own, raising questions of international relations. Although some existing governance instruments and institutions are applicable, there is currently no formal international framework designed to regulate SRM. Issues of governance and effectiveness are intertwined, as poorly governed use of SRM might lead to its suboptimal implementation. For these reasons and more, SRM is often a contested topic among environmentalists. In the face of ongoing global warming and insufficient reductions in greenhouse gas emissions, SRM receives increasing attention. Climate scientists and other experts from around the world research and publish academic articles, while more nongovernmental and intergovernmental organizations, as well as national governments, are examining and developing views. Context The context for the interest in solar radiation modification (SRM) options is continued high global emissions of greenhouse gases, rising global temperatures, and worsening climate impacts. Human greenhouse gas emissions have disrupted the Earth's energy budget. Due to elevated atmospheric greenhouse gas concentrations, the net difference between the amount of sunlight absorbed by the Earth and the amount of energy radiated back to space has risen from 1.7 W/m2 in 1980 to 3.1 W/m2 in 2019. This imbalance, or "radiative forcing," means that the Earth absorbs more energy than it emits, causing global temperatures to rise, which will, in turn, have negative impacts on humans and nature. In principle, net emissions could be reduced and even eliminated through a combination of emission cuts and carbon dioxide removal (together called "mitigation"). 
However, emissions have persisted, consistently exceeding targets, and experts have raised serious questions regarding the feasibility of large-scale removals. The 2023 Emissions Gap Report from the UN Environment Programme estimated that even the most optimistic assumptions regarding countries' current conditional emissions policies and pledges give only a 14% chance of limiting global warming to 1.5 °C. SRM would increase Earth's reflection of sunlight by increasing the albedo of the atmosphere or the surface. An increase in planetary albedo of 1% would reduce radiative forcing by 2.35 W/m2, eliminating most of the global warming from current anthropogenically elevated greenhouse gas concentrations, while a 2% albedo increase would negate the warming effect of doubling the atmospheric carbon dioxide concentration. SRM could theoretically buy time by slowing the rate of climate change or eliminating the worst climate impacts until net negative emissions reduce atmospheric greenhouse gas concentrations sufficiently. This is because SRM could, unlike the other responses, cool the planet within months after deployment. SRM is generally intended to complement, not replace, emissions reduction and carbon dioxide removal. For example, the IPCC Sixth Assessment Report says: "There is high agreement in the literature that for addressing climate change risks SRM cannot be the main policy response to climate change and is, at best, a supplement to achieving sustained net zero or net negative emission levels globally". Major reports that have investigated advantages and disadvantages of SRM (sometimes grouped with carbon dioxide removal under the title of climate engineering) include those by the Royal Society (2009), the US National Academies (2015 and 2021), the UN Environment Programme (2023), and the European Union's Scientific Advice Mechanism (2024). History In 1965, during the administration of U.S. President Lyndon B. Johnson, the President's Science Advisory Committee delivered Restoring the Quality of Our Environment, the first report which warned of the harmful effects of carbon dioxide emissions from fossil fuels. To counteract global warming, the report mentioned "deliberately bringing about countervailing climatic changes", including "raising the albedo, or reflectivity, of the Earth". In 1974, Russian climatologist Mikhail Budyko suggested that if global warming ever became a serious threat, it could be countered with airplane flights in the stratosphere, burning sulfur to make aerosols that would reflect sunlight away. Along with carbon dioxide removal, SRM was discussed jointly as geoengineering in a 1992 climate change report from the US National Academies. David Keith, an American physicist, has worked on solar geoengineering since 1992, when he and Hadi Dowlatabadi published one of the first assessments of the technology and its policy implications, introducing a structured comparison of cost and risk. Keith has consistently argued that geoengineering needs a "systematic research program" to determine whether or not its approaches are feasible. He has also appealed for international standards of governance and oversight for how such research might proceed. The first modeled results of SRM were published in 2000. 
In 2006 Nobel Laureate Paul Crutzen published an influential scholarly paper where he said, "Given the grossly disappointing international political response to the required greenhouse gas emissions, and further considering some drastic results of recent studies, research on the feasibility and environmental consequences of climate engineering [...] should not be tabooed." Atmospheric methods The atmospheric methods for SRM include stratospheric aerosol injection (SAI), marine cloud brightening (MCB) and cirrus cloud thinning (CCT). Stratospheric aerosol injection (SAI) For stratospheric aerosol injection (SAI) small particles would be injected into the upper atmosphere to cool the planet with both global dimming and increased albedo. Of all the proposed SRM methods, SAI has received the most sustained attention: The IPCC concluded in 2018 that SAI "is the most-researched SRM method, with high agreement that it could limit warming to below 1.5 °C." This technique would mimic a cooling phenomenon that occurs naturally by the eruption of volcanoes. Sulfates are the most commonly proposed aerosol, since there is a natural analogue with (and evidence from) volcanic eruptions. Alternative materials such as photophoretic particles, titanium dioxide, and diamond have been proposed. Delivery by custom aircraft appears most feasible, with artillery and balloons sometimes discussed. This technique could give much more than 3.7 W/m2 of globally averaged negative forcing, which is sufficient to entirely offset the warming caused by a doubling of carbon dioxide. The most recent Scientific Assessment of Ozone Depletion report in 2022 from the World Meteorological Organization concluded "Stratospheric Aerosol Injection (SAI) has the potential to limit the rise in global surface temperatures by increasing the concentrations of particles in the stratosphere... . However, SAI comes with significant risks and can cause unintended consequences." A potential disadvantage of SAI is that it could delay the regeneration of the stratospheric ozone layer (depending on assumptions about which aerosols would be used to do the cooling). Marine cloud brightening (MCB) Marine cloud brightening (MCB) would involve spraying fine sea water to whiten clouds and thus increase cloud reflectivity. The extra condensation nuclei created by the spray would change the size distribution of the drops in existing clouds to make them whiter. The sprayers would use fleets of unmanned rotor ships known as Flettner vessels to spray mist created from seawater into the air to thicken clouds and thus reflect more radiation from the Earth. The whitening effect is created by using very small cloud condensation nuclei, which whiten the clouds due to the Twomey effect. This technique can give more than 3.7 W/m2 of globally averaged negative forcing, which is sufficient to reverse the warming effect of a doubling of atmospheric carbon dioxide concentration. Cirrus cloud thinning (CCT) Cirrus cloud thinning (CCT) involves "seeding to promote nucleation, reducing optical thickness and cloud lifetime, to allow more outgoing longwave radiation to escape into space." Natural cirrus clouds are believed to have a net warming effect. These could be dispersed by the injection of various materials. 
Strictly speaking, this method is not SRM, as it increases outgoing longwave radiation instead of decreasing incoming shortwave radiation. However, because it shares some of the physical and especially governance characteristics of the other SRM methods, it is often included. Other methods Ground-based albedo modification The IPCC describes ground-based albedo modification as "whitening roofs, changes in land use management (e.g., no-till farming), change of albedo at a larger scale (covering glaciers or deserts with reflective sheeting and changes in ocean albedo)." It is a method of enhancing Earth's albedo, i.e. the ability to reflect the visible, infrared, and ultraviolet wavelengths of the Sun, reducing heat transfer to the surface. Space-based Space-based approaches could be advantageous compared to stratospheric aerosol injection because they do not interfere directly with the biosphere and ecosystems. However, space-based approaches would cost about 1000 times more than their terrestrial alternatives. In 2022, the IPCC Sixth Assessment Report discussed SAI, MCB, CCT and even attempts to alter albedo on the ground or in the ocean but did not address space-based approaches. There has been a range of proposals to reflect or deflect solar radiation from space, before it even reaches the atmosphere, commonly described as a space sunshade. The most straightforward is to have mirrors orbiting around the Earth, an idea first suggested even before the wider awareness of climate change: rocketry pioneer Hermann Oberth considered it a way to facilitate terraforming projects in 1923, and this was followed by other books in 1929, 1957 and 1978. By 1992, the U.S. National Academy of Sciences described a plan to suspend 55,000 mirrors with an individual area of 100 square meters in a Low Earth orbit. Another contemporary plan was to use space dust to replicate the rings of Saturn around the equator, although a large number of satellites would have been necessary to prevent it from dissipating. A 2006 variation on this idea suggested relying entirely on a ring of satellites electromagnetically tethered in the same location. In all cases, sunlight exerts pressure which can displace these reflectors from orbit over time, unless they are stabilized by enough mass. Yet higher mass immediately drives up launch costs. When summarizing these space-based options in 2009, the Royal Society concluded that their deployment times are measured in decades and costs in the trillions of USD, meaning that they are "not realistic potential contributors to short-term, temporary measures for avoiding dangerous climate change", and may only be competitive with the other geoengineering approaches when viewed from a genuinely long (a century or more) perspective, as the long lifetime of L1-based approaches could make them cheaper than the need to continually renew atmospheric-based measures over that timeframe. Costs Cost estimates for SAI Technical problem areas Aspects of regional scales and seasonal timescales Modelling studies have consistently concluded that moderate SRM use would significantly reduce many of the impacts of global warming, for example average and extreme temperatures, water availability, and cyclone intensity. Furthermore, SRM's effect would occur rapidly, unlike those of other responses to climate change. However, even under optimal implementation, some climatic anomalies, especially regarding precipitation, would persist, although mostly at lesser magnitudes than without SRM. 
However, SRM has significant potential risks and uncertainties. The IPCC Sixth Assessment Report explains some of the risks and uncertainties as follows: "[...] SRM could offset some of the effects of increasing GHGs on global and regional climate, including the carbon and water cycles. However, there would be substantial residual or overcompensating climate change at the regional scales and seasonal time scales, and large uncertainties associated with aerosol–cloud–radiation interactions persist. The cooling caused by SRM would increase the global land and ocean sinks, but this would not stop from increasing in the atmosphere or affect the resulting ocean acidification under continued anthropogenic emissions." Likewise, a 2023 report from the UN Environment Programme stated, "Climate model results indicate that an operational SRM deployment could fully or partially offset the global mean warming caused by anthropogenic GHG emissions and reduce some climate change hazards in most regions. There could be substantial residual or possible overcompensating climate change at regional scales and seasonal timescales." The report also said: "An operational SRM deployment would introduce new risks to people and ecosystems". SRM would imperfectly compensate for anthropogenic climate changes. Greenhouse gases warm throughout the globe and year, whereas SRM reflects light more effectively at low latitudes and in the hemispheric summer (due to the sunlight's angle of incidence) and only during daytime. Deployment regimes could compensate for this heterogeneity by changing and optimizing injection rates by latitude and season. Impacts on precipitation Models indicate that SRM would reverse warming-induced changes to precipitation more rapidly than changes to temperature. Therefore, using SRM to fully return global mean temperature to a preindustrial level would overcorrect for precipitation changes. This has led to claims that it would dry the planet or even cause drought, but this would depend on the intensity (i.e. radiative forcing) of SRM. Furthermore, soil moisture is more important for plants than average annual precipitation. Because SRM would reduce evaporation, it more precisely compensates for changes to soil moisture than for average annual precipitation. Likewise, the intensity of tropical monsoons is increased by climate change and decreased by SRM. A net reduction in tropical monsoon intensity might manifest at moderate use of SRM, although to some degree the effect of this on humans and ecosystems would be mitigated by greater net precipitation outside of the monsoon system. This has led to misleading claims that SRM "would disrupt the Asian and African summer monsoons", something that has been repeatedly challenged by climate scientists who study SRM. Ultimately the impact would depend on the particular implementation regime. Deployment length A modeling study in 2023 showed that the range of possible deployment timescales is vast even if pathways start at a similar point at the beginning of SRM deployment. This is because the evolution of mitigation under SRM, the availability of carbon removal technologies and the effects of climate reversibility are not precisely known. Since these effects will be mostly uncertain at the time of SRM initialization, a precedent prediction of deployment length seems unlikely, with possibilities ranging from decades to multiple centuries. This is a knowledge gap that must be considered before any SRM proposal is seriously considered. 
For all realizations that follow current NDC (nationally determined contributions) median 2100 warming projections (2.4 ∘C), none deploy SRM for a shorter period than 100 years. The direct climatic effects of SRM are reversible within short timescales. Models project that SRM interventions would take effect rapidly, but would also quickly fade out if not sustained. Termination shock If SRM masked significant warming, stopped abruptly, and was not resumed within a year or so, the climate would rapidly warm towards levels which would have existed without the use of SRM, sometimes known as termination shock. The rapid rise in temperature might lead to more severe consequences than a gradual rise of the same magnitude. However, some scholars have argued that this risk might be manageable because it would be in states' interest to resume any terminated deployment, and maintaining back-up SRM infrastructure would increase the resilience of an SRM system. Failure to reduce ocean acidification SRM does not directly influence atmospheric carbon dioxide concentration and thus does not reduce ocean acidification. While not a risk of SRM per se, this points to the limitations of relying on it to the exclusion of emissions reduction. Effect on sky and clouds Managing solar radiation using aerosols or cloud cover would involve changing the ratio between direct and indirect solar radiation. This would affect plant life and solar energy. Visible light, useful for photosynthesis, is reduced proportionally more than is the infrared portion of the solar spectrum due to the mechanism of Mie scattering. As a result, deployment of atmospheric SRM would affect the growth rates of phytoplankton, trees, and crops between now and the end of the century. Uniformly reduced net shortwave radiation would affect solar photovoltaics, but the real-world impact is complex and is affected by temperature and cloud fraction, and interacts with demand-side factors (especially heating and cooling load). Uncertainty regarding effects Much uncertainty remains about SRM's likely effects. Most of the evidence regarding SRM's expected effects comes from climate models and volcanic eruptions. Some uncertainties in climate models (such as aerosol microphysics, stratospheric dynamics, and sub-grid scale mixing) are particularly relevant to SRM and are a target for future research. Volcanoes are an imperfect analogue as they release the material in the stratosphere in a single pulse, as opposed to sustained injection. Climate change has various effects on agriculture. One of them is the CO2 fertilization effect which affects different crops in different ways. A net increase in agricultural productivity from SRM (in combination with raised carbon dioxide levels) has been predicted by some studies due to the combination of more diffuse light and carbon dioxide's fertilization effect. Other studies suggest that SRM would have little net effect on agriculture. There have also been proposals to focus SRM at the poles, in order to combat sea level rise or regional marine cloud brightening (MCB) in order to protect coral reefs from bleaching. However, there is low confidence about the ability to control geographical boundaries of the effect. SRM might be used in ways that are not optimal. In particular, SRM's climatic effects would be rapid and reversible, which would bring the disadvantage of sudden warming if it were to be stopped suddenly. Similarly, if SRM was very heterogenous, then the climatic responses could be severe and uncertain. 
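As background to the magnitudes discussed in these sections, the insolation reduction required to offset a given greenhouse forcing can be estimated from a simple global energy-balance relation. The sketch below is a back-of-envelope illustration only, using standard round values (solar constant, planetary albedo, the canonical forcing for a doubling of CO2); it is not drawn from the models or reports cited in this article.

```python
# Back-of-envelope: fraction of incoming sunlight that SRM would need to
# reflect to offset a given greenhouse forcing, from a one-line energy balance.
S0 = 1361.0      # solar constant, W/m^2 (round value)
albedo = 0.30    # planetary albedo (round value)
absorbed = S0 / 4 * (1 - albedo)   # ~238 W/m^2 of absorbed solar, global mean

delta_F = 3.7    # W/m^2, canonical forcing for a doubling of CO2
fraction = delta_F / absorbed
print(f"Required reduction in absorbed sunlight: {fraction:.1%}")  # roughly 1.5%
```

The result, on the order of 1–2%, is the rough figure often quoted for offsetting a doubling of CO2; it says nothing about the regional and seasonal residuals discussed above.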
Governance and policy risks Global governance issues The potential use of SRM poses several governance challenges because of its high leverage, low apparent direct costs, and technical feasibility as well as issues of power and jurisdiction. Because international law is generally consensual, this creates a challenge of widespread participation being required. Key issues include who will have control over the deployment of SRM and under what governance regime the deployment can be monitored and supervised. A governance framework for SRM must be sustainable enough to contain a multilateral commitment over a long period of time and yet be flexible as information is acquired, the techniques evolve, and interests change through time. Some political scientists have argued that the current international political system is inadequate for the fair and inclusive governance of SRM deployment on a global scale. Other researchers have suggested that building a global agreement on SRM deployment would be very difficult, and speculated whether power blocs might emerge. However, there may be significant incentives for states to cooperate in choosing a specific SRM policy, which make unilateral deployment unlikely. Other relevant aspects of the governance of SRM include supporting research, ensuring that it is conducted responsibly, regulating the roles of the private sector and (if any) the military, public engagement, setting and coordinating research priorities, undertaking trusted scientific assessment, building trust, and compensating for possible harms. Although climate models of SRM generally simulate consistent implementation, leaders of countries and other actors may disagree as to whether, how, and to what degree SRM be used. This could result in suboptimal deployments and exacerbate international tensions. Likewise, blame for actual or perceived local negative impacts from SRM could be a source of international tensions. There is a risk that countries may start using SRM without proper research and evaluation. SRM, at least by stratospheric aerosol injection, appears to have low direct implementation costs relative to its potential impact, and many countries have the financial and technical resources to undertake SRM. Some have suggested that SRM could be within reach of a lone "Greenfinger", a wealthy individual who takes it upon him or herself to be the "self-appointed protector of the planet". Others argue that states will insist on maintaining control of SRM. Lessened climate change mitigation A common concern is that the use of SRM, or even the idea, might reduce the political and social impetus for climate change mitigation. This has often been called a potential "moral hazard", although such language is not precise. However, some engagement work has suggested that SRM may in fact increase the likelihood of emissions reduction because the pursuit of such a risky approach underlines the seriousness of global warming. Support for SRM research Support for SRM research has come from scientists, NGOs, international organisations, and governments. The leading argument in support of SRM research is that there are large and immediate risks from climate change, and SRM is the only known way to quickly stop (or reverse) warming. Leading this effort have been some climate scientists (such as James Hansen), some of whom have endorsed one or both public letters that support further SRM research. 
Scientific and other large organizations that have called for further research on SRM include: In the UK in 2009: the Royal Society, the Institution of Mechanical Engineers (UK) In Australia in 2012: Australia's Office of the Chief Scientist In the Netherlands in 2013: Netherlands' scientific assessment institute In the United States from 2015 to 2022: the US National Academies, the American Geophysical Union, the American Meteorological Society, the U.S. Global Change Research Program, the Council on Foreign Relations Global organizations from 2023 to 2024: the World Climate Research Programme and reports from the UN Environment Programme and the UN Educational, Scientific and Cultural Organization In the European Union: the Group of Chief Scientific Advisors (the report from 2024 specifically examines "how the EU can address the risks and opportunities associated with research on solar radiation modification and with its potential deployment".) Two sign-on letters in 2023 from scientists and other experts have called for expanded "responsible SRM research". One wants to "objectively evaluate the potential for SRM to reduce climate risks and impacts, to understand and minimize the risks of SRM approaches, and to identify the information required for governance". It was endorsed by "more than 110 physical and biological scientists studying climate and climate impacts about the role of physical sciences research." Another called for "balance in research and assessment of solar radiation modification" and was endorsed by about 150 experts, mostly scientists. Some nongovernmental organizations actively support SRM research and governance dialogues. The Degrees Initiative is a UK registered charity, established to build capacity in developing countries to evaluate SRM. It works toward "changing the global environment in which SRM is evaluated, ensuring informed and confident representation from developing countries." Operaatio Arktis is a Finnish youth climate organisation that supports research into solar radiation modification alongside mitigation and carbon sequestration as a potential means to preserve polar ice caps and prevent tipping points. SilverLining is an American organization that advances SRM research as part of "climate interventions to reduce near-term climate risks and impacts." It is funded by "philanthropic foundations and individual donors focused on climate change". One of their funders is Quadrature Climate Foundation which "plans to provide $40 million for work in this field over the next three years" (as of 2024). The Alliance for Just Deliberation on Solar Geoengineering advances "just and inclusive deliberation" regarding SRM, in particular by engaging civil society organisations in the Global South and supporting a broader conversation on SRM governance. The Carnegie Climate Governance Initiative catalyzed governance of SRM and carbon dioxide removal, although it ended operations in 2023. The Climate Overshoot Commission is a group of global, eminent, and independent figures. It investigated and developed a comprehensive strategy to reduce climate risks. The Commission recommended additional research on SRM alongside a moratorium on deployment and large-scale outdoor experiments. It also concluded that "governance of SRM research should be expanded". Campaigners have claimed that the fossil fuels lobby advocates for SRM research. However, researchers have pointed out the lack of evidence in support of this claim. 
Opposition to deployment and research Opposition to SRM has come from various academics and NGOs. Common concerns are that SRM could lessen climate change mitigation efforts, that SRM is ultimately ungovernable, and that SRM would cause tensions, or even conflict, between nations. Opponents of SRM research often emphasize that reductions of greenhouse gas emissions would also bring co-benefits (for example reduced air pollution) and that consideration of SRM could prevent these outcomes. The ETC Group, an environmental justice organization, has been a pioneer in opposing SRM research. It was later joined by the Heinrich Böll Foundation (affiliated with the German Green Party) and the Center for International Environmental Law. In 2021, researchers at Harvard put plans for an SRM-related field experiment on hold after Indigenous Sámi people objected to the test taking place in their homeland. Although the test would not have involved any atmospheric experiments, members of the Saami Council spoke out against the lack of consultation and SRM more broadly. Speaking at a panel organized by the Center for International Environmental Law and other groups, Saami Council Vice President Åsa Larsson Blind said, "This goes against our worldview that we as humans should live and adapt to nature." In 2022, the scientific journal Wiley Interdisciplinary Reviews: Climate Change published "Solar geoengineering: The case for an international non-use agreement". The authors argued that geoengineering cannot be used in a responsible manner under the current system of international relations, so the only option is for as many governments as possible to make a commitment that they would not deploy such technologies, fund research into them, grant intellectual property rights, or host such experiments conducted by third parties. The paper launched a campaign which, as of December 2024, had been endorsed by nearly 540 academics and 60 advocacy organizations. In 2024, the same journal published a commentary from a different group of scientists, which criticized the proposed non-use agreement and argued for a more permissive research framework. By 2024, U.S. government agencies were allegedly operating an airborne early warning system for detecting small concentrations of aerosols to determine where other countries might be carrying out geoengineering attempts, thought to have unpredictable effects on climate. Research funding As of 2018, total research funding worldwide remained modest, at less than 10 million US dollars annually. Almost all research into SRM has to date consisted of computer modeling or laboratory tests, and there are calls for more research funding as the science is poorly understood. A study from 2022 investigated where the funding for SRM research came from globally. The authors concluded "the primary funders of [SRM] research do not emanate from fossil capital" and that there are "close ties to mostly US financial and technological capital as well as a number of billionaire philanthropists". Under the World Climate Research Programme there is a Lighthouse Activity called Research on Climate Intervention as of 2024. 
This will include research on all possible climate interventions (another term for climate engineering): "large-scale Carbon Dioxide Removal (CDR; also known as Greenhouse Gas Removal, or Negative Emissions Technologies) and Solar Radiation Modification (SRM; also known as Solar Reflection Modification, Albedo Modification, or Radiative Forcing Management)". Government funding Few countries have an explicit governmental position on SRM. Those that do, such as the United Kingdom and Germany, support some SRM research even if they do not see it as a current climate policy option. For example, the German Federal Government does have an explicit position on SRM and stated in 2023 in a strategy document climate foreign policy: "Due to the uncertainties, implications and risks, the German Government is not currently considering solar radiation management (SRM) as a climate policy option". The document also stated: "Nonetheless, in accordance with the precautionary principle we will continue to analyse and assess the extensive scientific, technological, political, social and ethical risks and implications of SRM, in the context of technology-neutral basic research as distinguished from technology development for use at scale". Some countries, such as the U.S., U.K., Argentina, Germany, China, Finland, Norway, and Japan, as well as the European Union, have funded SRM research. NOAA in the United States has spent $22 million USD from 2019 to 2022, with only a few outdoor tests carried out to date. As of 2024, NOAA provides about $11 million USD a year through their solar geoengineering research program. In 2021, the National Academies of Sciences, Engineering, and Medicine released their consensus study report Recommendations for Solar Geoengineering Research and Research Governance. The report recommended an initial investment into SRM research of $100–200 million over five years. In late 2024, the Advanced Research and Invention Agency, a British funding agency, announced that research funds totaling 57 million pounds (about $75 million USD) will be made available to support projects which explore "Climate Cooling". This includes outdoor experiments: "This programme aims to answer fundamental questions as to the practicality, measurability, controllability and possible (side-)effects of such approaches through indoor and (where necessary) small, controlled, outdoor experiments." Successful applicants will be announced in 2025. Non-profits and philanthropic support for research There are also research activities on SRM that are funded by philanthropy. According to Bloomberg News, as of 2024 several American billionaires are funding research into SRM: "A growing number of Silicon Valley founders and investors are backing research into blocking the sun by spraying reflective particles high in the atmosphere or making clouds brighter." The article listed the following billionaires as being notable geoengineering research supporters: Mike Schroepfer, Sam Altman, Matt Cohler, Rachel Pritzker, Bill Gates, Dustin Moskovitz. SRM research initiatives, or non-profit knowledge hubs, include for example SRM360 which is "supporting an informed, evidence-based discussion of sunlight reflection methods (SRM)". Funding comes from the LAD Climate Fund. David Keith, a long-term proponent of SRM research, is one of the members of the advisory board. Another example is Reflective, which is "a philanthropically-funded initiative focused on sunlight reflection research and technology development". 
Their funding is "entirely by grants or donations from a number of leading philanthropies focused on addressing climate change": Outlier Projects, Navigation Fund, Astera Institute, Open Philanthropy, Crankstart, Matt Cohler, Richard and Sabine Wood. Deployment activities Make Sunsets At least one startup in the private sector has tried to sell "cooling credits" for SRM activities. Make Sunsets launches balloons containing helium and sulfur dioxide. The company sells cooling credits, making the contested claim that each US$10 credit would offset the warming effect of one ton of carbon dioxide for a year. Based in California, Make Sunsets conducted some of its first activities in Mexico. In response to these activities, which were conducted without prior notification or consent, the Mexican government announced measures to prohibit SRM experiments within its borders, although it is unclear whether this became actual policy. Even people who advocate for more research into SRM have criticized Make Sunsets' undertaking. Society and culture Studies into opinions about SRM have found low levels of awareness, uneasiness with the implementation of SRM, cautious support of research, and a preference for greenhouse gas emissions reduction. Although most public opinion studies have polled residents of developed countries, those that have examined residents of developing countries—which tend to be more vulnerable to climate change impacts—find slightly greater levels of support there. The largest assessment of public opinion and perception of SRM, which had over 30,000 respondents in 30 countries, found that "Global South publics are significantly more favorable about potential benefits and express greater support for climate-intervention technologies." The assessment also found, however, that Global South publics had greater concern that the technologies could undermine climate mitigation. See also References Climate change policy Planetary engineering Climate engineering Atmospheric radiation
Solar radiation modification
[ "Engineering" ]
6,995
[ "Planetary engineering", "Geoengineering" ]
20,695,220
https://en.wikipedia.org/wiki/AMOLED
AMOLED (active-matrix organic light-emitting diode; ) is a type of OLED display device technology. OLED describes a specific type of thin-film-display technology in which organic compounds form the electroluminescent material, and active matrix refers to the technology behind the addressing of pixels. Since 2007, AMOLED technology has been used in mobile phones, media players, TVs and digital cameras, and it has continued to make progress toward low-power, low-cost, high resolution and large size (for example, 88-inch and 8K resolution) applications. Design An AMOLED display consists of an active matrix of OLED pixels generating light (luminescence) upon electrical activation that have been deposited or integrated onto a thin-film transistor (TFT) array, which functions as a series of switches to control the current flowing to each individual pixel. Typically, this continuous current flow is controlled by at least two TFTs at each pixel (to trigger the luminescence), with one TFT to start and stop the charging of a storage capacitor and the second to provide a voltage source at the level needed to create a constant current to the pixel, thereby eliminating the need for the very high currents required for passive-matrix OLED operation. TFT backplane technology is crucial in the fabrication of AMOLED displays. In AMOLEDs, the two primary TFT backplane technologies, polycrystalline silicon (poly-Si) and amorphous silicon (a-Si), are currently used offering the potential for directly fabricating the active-matrix backplanes at low temperatures (below 150 °C) onto flexible plastic substrates for producing flexible AMOLED displays. History AMOLED was developed in 2006. Samsung SDI was one of the main investors in the technology, and many other display companies were also developing it. One of the earliest consumer electronics products with an AMOLED display was the BenQ-Siemens S88 mobile handset and, in 2007, the iriver Clix 2 portable media player. In 2008 it appeared on the Nokia N85 followed by the Samsung i7110 - both Nokia and Samsung Electronics were early adopters of this technology on their smartphones. Future development Manufacturers have developed in-cell touch panels, integrating the production of capacitive sensor arrays in the AMOLED module fabrication process. In-cell sensor AMOLED fabricators include AU Optronics and Samsung. Samsung has marketed its version of this technology as "Super AMOLED". Researchers at DuPont used computational fluid dynamics (CFD) software to optimize coating processes for a new solution-coated AMOLED display technology that is competitive in cost and performance with existing chemical vapor deposition (CVD) technology. Using custom modeling and analytic approaches, Samsung has developed short and long-range film-thickness control and uniformity that is commercially viable at large glass sizes. Comparison to other display technologies Compared to other display technologies, AMOLED screens have several advantages and disadvantages. AMOLED displays can provide higher refresh rates than passive-matrix, often have response times less than a millisecond, and they consume significantly less power. This advantage makes active-matrix OLEDs well-suited for portable electronics, where power consumption is critical to battery life. The amount of power the display consumes varies significantly depending on the color and brightness shown. 
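The content dependence of AMOLED power draw described above can be approximated with a very simple model in which each pixel draws power roughly in proportion to its luminance, whereas an LCD backlight draws roughly constant power regardless of content. The sketch below is purely illustrative; the wattage coefficients are hypothetical placeholders, not measurements of any particular panel.

```python
# Illustrative model of content-dependent AMOLED power draw vs. a constant LCD backlight.
# All coefficients are hypothetical; real panels vary widely.
def amoled_power_w(avg_luminance, p_idle=0.05, p_full_white=0.70):
    """avg_luminance: mean normalized frame luminance (0 = all black, 1 = all white)."""
    return p_idle + (p_full_white - p_idle) * avg_luminance

def lcd_power_w(avg_luminance, p_backlight=0.35):
    """A conventional LCD backlight burns roughly constant power regardless of content."""
    return p_backlight

for frame, lum in [("white text on black", 0.1), ("black text on white", 0.9)]:
    print(f"{frame}: AMOLED ~ {amoled_power_w(lum):.2f} W, LCD ~ {lcd_power_w(lum):.2f} W")
```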
As an example, one old QVGA OLED display consumes 0.3 watts while showing white text on a black background, but more than 0.7 watts showing black text on a white background, while an LCD may consume only a constant 0.35 watts regardless of what is being shown on screen. A newer FHD+ or WQHD+ display will consume much more. Because the black pixels turn completely off, AMOLED also has contrast ratios that are significantly higher than those of LCDs. AMOLED displays may be difficult to view in direct sunlight compared with LCDs because of their reduced maximum brightness. Samsung's Super AMOLED technology addresses this issue by reducing the size of gaps between layers of the screen. Additionally, PenTile technology is often used for a higher resolution display while requiring fewer subpixels than would otherwise be needed, sometimes resulting in a display that is less sharp and more grainy than a non-PenTile display with the same resolution. The organic materials used in AMOLED displays are very prone to degradation over a relatively short period of time, resulting in color shifts as one color fades faster than another, image persistence, or burn-in. Flagship smartphones sold in 2020 and 2021 used Super AMOLED displays. Super AMOLED displays, such as those on the Samsung Galaxy S21+ / S21 Ultra and Samsung Galaxy Note 20 Ultra, have often been compared to IPS LCDs, found in phones such as the Xiaomi Mi 10T, Huawei Nova 5T, and Samsung Galaxy A20e. For example, according to ABI Research, the AMOLED display found in the Motorola Moto X draws just 92 mA during bright conditions and 68 mA while dim. On the other hand, compared with IPS LCDs, the yield rate of AMOLED is lower and the cost is higher. Marketing terms Super AMOLED "Super AMOLED" is a marketing term created by Samsung for an AMOLED display with an integrated touch screen digitizer: the layer that detects touch is integrated into the display, rather than overlaid on top of it, and cannot be separated from the display itself. Super AMOLED is a more advanced version that integrates the touch sensors and the actual screen in a single layer. Compared with a regular LCD, an AMOLED display consumes less power, provides more vivid picture quality, and renders faster motion response. Super AMOLED improves on this further, with a 20% brighter screen, 20% lower power consumption and 80% less sunlight reflection. According to Samsung, Super AMOLED reflects one-fifth as much sunlight as the first generation AMOLED. The generic term for this technology is One Glass Solution (OGS). Comparison Below is a mapping table of marketing terms versus resolutions and sub-pixel types. Note how the pixel density relates to choices of sub-pixel type. Future Future displays exhibited from 2011 to 2013 by Samsung have shown flexible, 3D, transparent Super AMOLED Plus displays using very high resolutions and in varying sizes for phones. These unreleased prototypes use a polymer as a substrate, removing the need for a glass cover, a metal backing, and a touch matrix, combining them into one integrated layer. So far, Samsung plans on branding the newer displays as Youm, or y-octa. Also planned for the future are 3D stereoscopic displays that use eye-tracking (via stereoscopic front-facing cameras) to provide full-resolution 3D visuals. See also List of flat panel display manufacturers microLED OLED References External links Mobile phones Conductive polymers Display technology Molecular electronics Optical diodes Organic electronics
AMOLED
[ "Chemistry", "Materials_science", "Engineering" ]
1,447
[ "Molecular physics", "Molecular electronics", "Electronic engineering", "Display technology", "Nanotechnology", "Conductive polymers" ]
20,695,378
https://en.wikipedia.org/wiki/OpenScientist
OpenScientist is an integration of open source products working together to do scientific visualization and data analysis, in particular for high energy physics (HEP). Among other things, it contains a light C++ AIDA implementation that can be used to run the histogramming part of Geant4 examples. External links Data analysis software Experimental particle physics Free plotting software Physics software Plotting software
OpenScientist
[ "Physics" ]
82
[ "Computational physics", "Experimental physics", "Particle physics", "Experimental particle physics", "Particle physics stubs", "Computational physics stubs", "Physics software" ]
452,160
https://en.wikipedia.org/wiki/Abdul%20Qadeer%20Khan
Abdul Qadeer Khan, ( ; ; 1 April 1936 – 10 October 2021), known as A. Q. Khan, was a Pakistani nuclear physicist and metallurgical engineer who is colloquially known as the "father of Pakistan's atomic weapons program". An émigré (Muhajir) from India who migrated to Pakistan in 1952, Khan was educated in the metallurgical engineering departments of Western European technical universities where he pioneered studies in phase transitions of metallic alloys, uranium metallurgy, and isotope separation based on gas centrifuges. After learning of India's "Smiling Buddha" nuclear test in 1974, Khan joined his nation's clandestine efforts to develop atomic weapons when he founded the Khan Research Laboratories (KRL) in 1976 and was both its chief scientist and director for many years. In January 2004, Khan was subjected to a debriefing by the Musharraf administration over evidence of nuclear proliferation handed to them by the Bush administration of the United States. Khan admitted his role in running a nuclear proliferation network – only to retract his statements in later years when he leveled accusations at the former administration of Pakistan's Prime Minister Benazir Bhutto in 1990, and also directed allegations at President Musharraf over the controversy in 2008. Khan was accused of selling nuclear secrets illegally and was put under house arrest in 2004. After years of house arrest, Khan successfully filed a lawsuit against the Federal Government of Pakistan at the Islamabad High Court whose verdict declared his debriefing unconstitutional and freed him on 6 February 2009. The United States reacted negatively to the verdict and the Obama administration issued an official statement warning that Khan still remained a "serious proliferation risk". On account of the knowledge of nuclear espionage by Khan and his contribution to nuclear proliferation throughout the world post 1970s, and the renewed fear of weapons of mass destruction in the hands of terrorists after the September 11 attacks, former CIA Director George Tenet described Khan as "at least as dangerous as Osama bin Laden". After his death on 10 October 2021, he was given a state funeral at Faisal Mosque before being buried at the H-8 graveyard in Islamabad. Early life and education Abdul Qadeer Khan was born on 1 April 1936, in Bhopal, a city then in the erstwhile British Indian princely state of Bhopal State, and now the capital city of Madhya Pradesh. He was a Muhajir of Urdu-speaking Pashtun origin. His maternal ancestors hailed from the Tirah Valley (now in the Khyber District of Khyber Pakhtunkhwa, Pakistan) while from his paternal side he descended from an Uzbek soldier who came to India with Muhammad of Ghor, the 12th century conqueror, the reason why years later he’d name his ballistic missiles as Ghauri. His father, Abdul Ghafoor, was a schoolteacher who once worked for the Ministry of Education, and his mother, Zulekha, was a housewife with a very religious mindset. His older siblings, along with other family members, had emigrated to Pakistan during the partition of India in 1947, who would often write to Khan's parents about the new life they had found in Pakistan. After his matriculation from a local school in Bhopal, in 1952 Khan emigrated from India to Pakistan on the Sind Mail train, partly due to the reservation politics at that time, and religious violence in India during his youth had left an indelible impression on his world view. Upon settling in Karachi with his family, Khan briefly attended the D. J. 
Science College before transferring to the University of Karachi, where he graduated in 1956 with a Bachelor of Science (BSc) in physics with a concentration on solid-state physics. From 1956 to 1959, Khan was employed by the Karachi Metropolitan Corporation (city government) as an Inspector of weights and measures, and applied for a scholarship that allowed him to study in West Germany. In 1961, Khan departed for West Germany to study material science at the Technical University in West Berlin, where he academically excelled in courses in metallurgy, but left West Berlin when he switched to the Delft University of Technology in the Netherlands in 1965. In 1962, while on vacation in The Hague, he met Hendrina "Henny" Reternik, a British passport holder who had been born in South Africa to Dutch expatriates. She spoke Dutch and had spent her childhood in Africa before returning with her parents to the Netherlands where she lived as a registered foreigner. In 1963, he married Henny in a modest Muslim ceremony at Pakistan's embassy in The Hague. Khan and Henny together had two daughters, Dina Khan - who is a doctor, and Ayesha Khan. In 1967, Khan obtained an engineer's degree in materials technology – an equivalent to a Master of Science (MS) offered in English-speaking nations such as Pakistan – and joined the doctoral program in metallurgical engineering at the Katholieke Universiteit Leuven in Belgium. He worked under Belgian professor Martin J. Brabers at Leuven University, who supervised his doctoral thesis which Khan successfully defended, and graduated with a DEng in metallurgical engineering in 1972. His thesis included fundamental work on martensite and its extended industrial applications in the field of graphene morphology. Career in Europe In 1972, Khan joined the Physics Dynamics Research Laboratory (or in Dutch: FDO), an engineering firm subsidiary of Verenigde Machinefabrieken (VMF) based in Amsterdam, from Brabers's recommendation. The FDO was a subcontractor for Ultra-Centrifuge Nederland of the, British-German-Dutch uranium enrichment consortium, URENCO which was operating a uranium enrichment plant in Almelo and employed gaseous centrifuge method to assure a supply of nuclear fuel for nuclear power plants in the Netherlands. Soon after, Khan left FDO when URENCO offered him a senior technical position, initially conducting studies on uranium metallurgy. Uranium enrichment is an extremely difficult process because uranium in its natural state is composed of just 0.71% of uranium-235 (U235), which is a fissile material, 99.3% of uranium-238 (U238), which is non-fissile, and 0.0055% of uranium-234 (U234), a daughter product which is also a non-fissile. The URENCO Group utilised the Zippe-type of centrifugal method to electromagnetically separate the isotopes U234, U235, and U238 from sublimed raw uranium by rotating the uranium hexafluoride (UF6) gas at up to ~100,000 revolutions per minute (rpm). Khan, whose work was based on physical metallurgy of the uranium metal, eventually dedicated his investigations to improving the efficiency of the centrifuges by 1973–74. Frits Veerman, Khan's colleague at FDO, uncovered nuclear espionage at Almelo where Khan had stolen designs of the centrifuges from URENCO for the nuclear weapons programme of Pakistan. Veerman became aware of the espionage when Khan had taken classified URENCO documents home to be copied and translated by his Dutch-speaking wife and had asked Veerman to photograph some of them. 
In 1975, Khan was transferred to a less sensitive section when URENCO became suspicious, and he subsequently returned to Pakistan with his wife and two daughters. Khan was sentenced in absentia to four years in prison in 1983 by the Netherlands for espionage, but the conviction was later overturned due to a legal technicality. Ruud Lubbers, Prime Minister of the Netherlands at the time, later said that the General Intelligence and Security Service (BVD) was aware of Khan's espionage activities but that he was allowed to continue due to pressure from the CIA, with the US backing Pakistan during the Cold War. This was also highlighted when, despite Archie Pervez (Khan's associate for nuclear procurement in the US) being convicted in 1988, no action was taken against Khan or his proliferation network by the US government, which needed the support of Pakistan during the Soviet–Afghan War. Henk Slebos, a Dutch engineer and businessman who had studied metallurgy with Khan at the Delft University of Technology, continued providing goods needed for enriching uranium to Khan in Pakistan through his company Slebos Research. Slebos was sentenced in 1985 to one year in prison, but the sentence was reduced on appeal in 1986 to six months of probation and a fine of 20,000 guilders. Slebos nevertheless continued to export goods to Pakistan and was again sentenced to one year in prison, and a further fine was imposed on his company. Ernst Piffl was convicted and sentenced to three and a half years in prison by Germany in 1998 for supplying nuclear centrifuge parts through his company Team GmbH to Khan's Khan Research Laboratories in Kahuta. Asher Karni, a Hungarian-South African businessman, was sentenced to three years in prison in the US for the sale of restricted nuclear equipment to Pakistan through Humayun Khan (an associate of A. Q. Khan) and his Pakland PME Corporation. Scientific career in Pakistan Smiling Buddha and initiation Upon learning of India's surprise nuclear test, 'Smiling Buddha', in May 1974, Khan wanted to contribute to efforts to build an atomic bomb and met with officials at the Pakistani Embassy in The Hague, who dissuaded him by saying it was "hard to find" a job in PAEC as a "metallurgist". In August 1974, Khan wrote a letter which went unnoticed, but he directed another letter through the Pakistani ambassador to the Prime Minister's Secretariat in September 1974. Unbeknownst to Khan, his nation's scientists had already been working towards the feasibility of an atomic bomb under a secretive crash weapons program since 20 January 1972, directed by Munir Ahmad Khan, a reactor physicist, which calls into question his "father-of" claim. After reading his letter, Prime Minister Zulfikar Ali Bhutto had his military secretary run a security check on Khan, who was unknown at that time, for verification, and asked PAEC to dispatch a team under Bashiruddin Mahmood, which met Khan at his family home in Almelo and delivered Bhutto's letter inviting him to meet in Islamabad. Upon arriving in December 1974, Khan took a taxi straight to the Prime Minister's Secretariat. He met with Prime Minister Bhutto in the presence of Ghulam Ishaq Khan, Agha Shahi, and Mubashir Hassan, where he explained the significance of highly enriched uranium, with the meeting ending with Bhutto's remark: "He seems to make sense." 
The next day, Khan met with Munir Ahmad and other senior scientists, where he focused the discussion on the production of highly enriched uranium (HEU) as against weapon-grade plutonium, and explained to Bhutto why he thought the idea of "plutonium" would not work. Later, Khan was advised by several officials in the Bhutto administration to remain in the Netherlands to learn more about centrifuge technology but to continue to provide consultation on the Project-706 enrichment program led by Mahmood. By December 1975, Khan was given a transfer to a less sensitive section when URENCO became suspicious of his indiscreet open sessions with Mahmood to instruct him on centrifuge technology. Khan began to fear for his safety in the Netherlands, ultimately insisting on returning home. Khan Research Laboratories and atomic bomb program In April 1976, Khan joined the atomic bomb program and became part of the enrichment division, initially collaborating with Khalil Qureshi – a physical chemist. Calculations performed by him were valuable contributions to centrifuges and a vital link to nuclear weapon research, but he continued to push for his ideas on the feasibility of weapon-grade uranium even though it had a low priority, with most efforts still aimed at producing military-grade plutonium. Because of his interest in uranium metallurgy and his frustration at having been passed over for director of the uranium division (the job was instead given to Bashiruddin Mahmood), Khan refused to engage in further calculations and caused tensions with other researchers. Khan became highly unsatisfied and bored with the research led by Mahmood – finally, he submitted a critical report to Bhutto, in which he explained that the "enrichment program" was nowhere near success. Upon reviewing the report, Bhutto sensed a great danger as the scientists were split between military-grade uranium and plutonium, and told Khan to take over the enrichment division from Mahmood; the program was then separated from PAEC with the founding of the Engineering Research Laboratories (ERL). The ERL functioned directly under the Army's Corps of Engineers, with Khan as its chief scientist, and the army engineers located the national site on isolated lands in Kahuta for the enrichment program, an ideal site for preventing accidents. The PAEC did not forgo its electromagnetic isotope separation program, and a parallel program was led by G. D. Alam at the Air Research Laboratories (ARL) located at Chaklala Air Force Base, even though Alam had not seen a centrifuge and had only a rudimentary knowledge of the Manhattan Project. During this time, Alam accomplished a great feat by perfectly balancing the rotation of the first generation of centrifuges at ~30,000 rpm and was immediately dispatched to ERL, which was suffering from many setbacks in setting up its own program under Khan's direction based on centrifuge technology dependent on URENCO's methods. Khan eventually committed to work on problems involving the differential equations concerning rotation around a fixed axis to perfectly balance the machine under the influence of gravity, and the first generation of centrifuge designs became functional after Khan and Alam succeeded in separating the 235U and 238U isotopes from raw natural uranium. In military circles, Khan's scientific ability was well recognised, he was often known by his moniker "Centrifuge Khan", and the national laboratory was renamed after him upon the visit of President Muhammad Zia-ul-Haq in 1983. 
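The rotation figures mentioned in this section (roughly 30,000 rpm for the early machines, and the ~100,000 rpm cited earlier for URENCO-type designs) translate into rotor peripheral speeds, which in turn set the single-machine separation a gas centrifuge can achieve. The sketch below is a rough illustration using the standard equilibrium separation-factor expression for UF6; the rotor radius and operating temperature are assumed values, not specifications of any actual machine.

```python
import math

# Peripheral speed and equilibrium separation factor alpha0 = exp(dM * v^2 / (2 R T))
# for the two UF6 isotopologues. Radius and temperature are assumed, illustrative values.
R = 8.314       # gas constant, J/(mol*K)
T = 300.0       # operating temperature, K (assumed)
dM = 0.003      # kg/mol, mass difference between 238UF6 and 235UF6
radius = 0.05   # rotor radius in metres (assumed)

for rpm in (30_000, 100_000):
    v = 2 * math.pi * radius * rpm / 60          # peripheral speed, m/s
    alpha0 = math.exp(dM * v**2 / (2 * R * T))   # single-machine equilibrium separation factor
    print(f"{rpm} rpm -> v = {v:.0f} m/s, alpha0 = {alpha0:.3f}")
```

Because the separation achievable per machine is so small, many machines connected in a cascade are needed to move from natural uranium to weapon-grade assays, which is why the balancing and reliability problems described above mattered so much.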
In spite of his role, Khan was never in charge of the actual designs of the nuclear devices, their calculations, and the eventual weapons testing, which remained under the directorship of Munir Ahmad Khan and the PAEC. The PAEC's senior scientists who worked with him and under him remember him as "an egomaniacal lightweight" given to exaggerating his scientific achievements in centrifuges. At one point, Munir Khan said that "most of the scientists who work on the development of atomic bomb projects were extremely 'serious'. They were sobered by the weight of what they don't know; Abdul Qadeer Khan is a showman." During the timeline of the bomb program, Khan published papers on the analytical mechanics of balancing rotating masses and on thermodynamics with mathematical rigour to compete, but still failed to impress his fellow theorists at PAEC and, more generally, in the physics community. In later years, Khan became a staunch critic of Munir Khan's research in physics, and on many occasions tried unsuccessfully to belittle Munir Khan's role in the atomic bomb projects. Their scientific rivalry became public and widely popular in the physics community and in seminars held in the country over the years. Nuclear tests: Chagai-I Many of his theorists were unsure that military-grade uranium would be feasible on time without the centrifuges, since Alam had notified PAEC that the "blueprints were incomplete" and "lacked the scientific information needed even for the basic gas-centrifuges." Calculations by Tasneem Shah, confirmed by Alam, showed that Khan's earlier estimate of the quantity of uranium needing enrichment for the production of weapon-grade uranium was achievable, even with the small number of centrifuges deployed. Khan produced the designs of the centrifuges from URENCO. However, they were riddled with serious technical errors, and while he bought some components for analysis, they were broken pieces, making them useless for quick assembly of a centrifuge. Their separative work unit (SWU) rate was extremely low, so that a machine would have to be rotated at thousands of RPM at the cost of millions in taxpayers' money, Alam maintained. Though Khan's knowledge of copper metallurgy greatly aided the effort, it was the calculations and validation that came from his team of fellow theorists, including mathematician Tasneem Shah and Alam, who solved the differential equations concerning rotation around a fixed axis under the influence of gravity, which led Khan to come up with the innovative centrifuge designs. Scientists have said that Khan would never have got any closer to success without the assistance of Alam and others. The issue is controversial; Khan maintained to his biographer that when it came to defending the "centrifuge approach" and really putting work into it, both Shah and Alam refused. Khan was also very critical of PAEC's concentrated efforts towards developing plutonium 'implosion-type' nuclear devices and provided strong advocacy for the relatively simple 'gun-type' device that only had to work with highly enriched uranium – a gun-type design concept he eventually submitted to the Ministry of Energy (MoE) and the Ministry of Defense (MoD). Khan downplayed the importance of plutonium despite many of the theorists maintaining that "plutonium and the fuel cycle has its significance", and he insisted on the uranium route to the Bhutto administration when France's offer for an extraction plant was in the offing. 
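The separative work unit (SWU) mentioned above can be made concrete with the standard cascade material-balance and value-function formulas. The numbers in the sketch below are generic textbook values chosen only for illustration; they are not figures from, or claims about, any particular programme.

```python
import math

def value(x):
    """Value function used in separative-work calculations."""
    return (2 * x - 1) * math.log(x / (1 - x))

# Illustrative assays: natural uranium feed, 90% product, 0.3% tails.
xf, xp, xw = 0.00711, 0.90, 0.003
P = 25.0                              # kg of product, illustrative only

F = P * (xp - xw) / (xf - xw)         # feed required, from the material balance
W = F - P                             # tails
swu = P * value(xp) + W * value(xw) - F * value(xf)
print(f"feed = {F:.0f} kg, separative work = {swu:.0f} kg-SWU")
```

Dividing such a separative-work total by the annual output of a single machine gives a rough sense of how many machine-years a given quantity of product requires.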
Though he had helped to come up with the centrifuge designs, and had been a long-time proponent of the concept, Khan was not chosen to head the development project to test his nation's first nuclear-weapons (his reputation of a thorny personality likely played a role in this) after India conducted its series of nuclear tests, 'Pokhran-II' in 1998. Intervention by the Chairman Joint Chiefs, General Jehangir Karamat, allowed Khan to be a participant and eye-witness his nation's first nuclear test, 'Chagai-I' in 1998. At a news conference, Khan confirmed the testing of the boosted fission devices while stating that it was KRL's highly enriched uranium (HEU) that was used in the detonation of Pakistan's first nuclear devices on 28 May 1998. Many of Khan's colleagues were irritated that he seemed to enjoy taking full credit for something he had only a small part in, and in response, he authored an article, "Torch-Bearers", which appeared in The News International, emphasising that he was not alone in the weapon's development. He made an attempt to work on the Teller–Ulam design for the hydrogen bomb, but the military strategists had objected to the idea as it went against the government's policy of minimum credible deterrence. Khan often got engrossed in projects which were theoretically interesting but practically unfeasible. Proliferation controversy In the 1970s, Khan had been very vocal about establishing a network to acquire imported electronic materials from the Dutch firms and had very little trust of PAEC's domestic manufacturing of materials, despite the government accepting PAEC's arguments for the long term sustainability of the nuclear weapons program. At one point, Khan reached out to the People's Republic of China for acquiring the uranium hexafluoride (UF6) when he attended a conference there – the Pakistani Government sent it back to the People's Republic of China, asking KRL to use the UF6 supplied by PAEC. In an investigative report published by Nuclear Threat Initiative, Chinese scientists were reportedly present at Khan Research Laboratories (KRL) in Kahuta in the early 1980s. In 1996, the U.S. intelligence community maintained that China provided magnetic rings for special suspension bearings mounted at the top of rotating centrifuge cylinders. In 2005, it was revealed that President Zia-ul-Haq's military government had KRL run a HEU programme in the Chinese nuclear weapons program. Khan said that "KRL has built a centrifuge facility for China in Hanzhong city". China also exported some of DF-11's ballistic missile technology to Pakistan, where Pakistan's Ghaznavi and Shaheen-II borrowed from DF-11 technology. In 1982, an unnamed Arab country reached out to Khan for the sale of centrifuge technology. Khan was very receptive to the financial offer, but one scientist alerted the Zia administration which investigated the matter, only for Khan to vehemently deny such an offer was made to him. The Zia administration tasked Major-General Ali Nawab, an engineering officer, to keep surveillance on Khan, which he did until 1983 when he retired from his military service, and Khan's activities went undetected for several years after. Court controversy and U.S. objections In 1979, the Dutch government eventually probed Khan on suspicion of nuclear espionage but he was not prosecuted due to lack of evidence, though it did file a criminal complaint against him in a local court in Amsterdam, which sentenced him in absentia in 1985 to four years in prison. 
Upon learning of the sentence, Khan filed an appeal through his attorney, S. M. Zafar, who teamed up with the administration of Leuven University, and successfully argued that the technical information requested by Khan was commonly found and taught in undergraduate and doctoral physics at the university – the court exonerated Khan by overturning his sentence on a legal technicality. Reacting to the suspicions of espionage, Khan stressed that: "I had requested for it as we had no library of our own at KRL, at that time. All the research work [at Kahuta] was the result of our innovation and struggle. We did not receive any technical 'know-how' from abroad, but we cannot reject the use of books, magazines, and research papers in this connection." In 1979, the Zia administration, which was making an effort to keep their nuclear capability discreet to avoid pressure from the Reagan administration of the United States (U.S.), nearly lost its patience with Khan when he reportedly attempted to meet with a local journalist to announce the existence of the enrichment program. During the Indian Operation Brasstacks military exercise in 1987, Khan gave another interview to local press and stated: "the Americans had been well aware of the success of the atomic quest of Pakistan", allegedly confirming the speculation of technology export. At both instances, the Zia administration sharply denied Khan's statement and a furious President Zia met with Khan and used a "tough tone", promising Khan severe repercussions had he not retracted all of his statements, which Khan immediately did by contacting several news correspondents. In 1996, Khan again appeared on his country's news channels and maintained that "at no stage was the program of producing 90% weapons-grade enriched uranium ever stopped", despite Benazir Bhutto's administration reaching an understanding with the United States Clinton administration to cap the program to 3% enrichment in 1990. North Korea, Iran, and Libya The innovation and improved designs of centrifuges were marked as classified for export restriction by the Pakistan government, though Khan was still in possession of earlier designs of centrifuges from when he worked for URENCO in the 1970s. In 1990, the United States alleged that highly sensitive information was being exported to North Korea in exchange for rocket engines. Pakistan's Ghauri missile was based entirely on North Korea's Rodong-1 as reflected in its technology. The project was supported by Benazir Bhutto who consulted for the project with North Korea and facilitated the technology transfer to Khan Research Laboratories in 1993. On multiple occasions, Khan levelled accusations against Benazir Bhutto's administration of providing secret enrichment information, on a compact disc (CD), to North Korea; these accusations were denied by Benazir Bhutto's staff and military personnel. Between 1987 and 1989, Khan secretly leaked knowledge of centrifuges to Iran without notifying the Pakistan Government, although this issue is a subject of political controversy. In 2003, the European Union pressured Iran to accept tougher inspections of its nuclear program and the International Atomic Energy Agency (IAEA) revealed an enrichment facility in the city of Natanz, Iran, utilising gas centrifuges based on the designs and methods used by URENCO. 
The IAEA inspectors quickly identified the centrifuges as P-1 types, which had been obtained "from a foreign intermediary in 1989", and the Iranian negotiators turned over the names of their suppliers, which identified Khan as one of them. Heinz Mebus, a German engineer and businessman and college friend of Khan, was named as one of the suppliers, acting as a middleman for Khan. In May 1998, Newsweek reported that Khan had sent Iraq centrifuge designs, which were apparently confiscated by UNMOVIC officials. Iraqi officials said the documents were authentic but that they had not agreed to work with A. Q. Khan, fearing an ISI sting operation, due to strained relations between the two countries. On June 7, 1998, 10 days after Pakistan's first underground nuclear test, there was yet another incident according to Foreign Policy. Kim Sa Nae, wife of a midlevel North Korean "diplomat", who had been invited by Khan as part of a 20-member delegation, was shot to death a few yards from Khan's official residence after the ISI suspected her of being a spy for the United States and subsequently informed the North Korean authorities. Privately, some Pakistani intelligence sources leaked this information to the Los Angeles Times. Three days after Kim's death, both P-1 and P-2 centrifuges, warheads, and technical data, along with Kim's body, were flown to North Korea in the same American-made C-130 cargo plane that was making rounds between Pakistan and North Korea from 1997 to 2002. In 2003, the merchant vessel BBC China was caught carrying nuclear centrifuges from Malaysia to Libya; the Scomi Group and Khan Research Laboratories were supplying nuclear parts to Libya through Khan's Dubai-based Sri Lankan associate Buhary Syed Abu Tahir. This was further revealed in the Scomi Precision Engineering nuclear scandal surrounding Scomi CEO Shah Hakim Zain and a son of former Malaysian Prime Minister Abdullah Ahmad Badawi. Libya negotiated with the United States to roll back its nuclear program in order to have the economic sanctions, effected by the Iran and Libya Sanctions Act, lifted, and shipped centrifuges to the United States that were identified as P-1 models by the American inspectors. Ultimately, the Bush administration launched its investigation of Khan, focusing on his personal role, when Libya handed over a list of its suppliers. Friedrich Tinner, a nuclear engineer and friend of Khan since their days at Leuven University, was one of the heads of Libya's nuclear programme and worked in nuclear enrichment for Libya and Pakistan. In 2008, the German nuclear engineer Gotthard Lerch was convicted and sentenced to five years and six months in prison for procuring centrifuges for Libya from Khan; Lerch also acted as Khan's middleman for Iran. Alfred Hempel, a German businessman, arranged the shipment of gas centrifuge parts from Khan in Pakistan to Libya and Iran via Dubai. The "A.Q. Khan network" involved numerous shell companies set up by Khan in Dubai to obtain equipment necessary for nuclear enrichment. From 1999 onwards, Khan traveled to Dubai 41 times according to the Pakistan government. Khan also kept a penthouse on posh al-Maktoum Road. The shell companies, consisting of "a fax machine and an empty office", would be used to facilitate shipments and shut down immediately after the deals. A member of Iran's Defense Industries Organization was involved in nuclear proliferation for Iran and North Korea through China. Parts needed for nuclear enrichment in Pakistan were also imported by Khan from several Japanese companies. 
Security hearings, pardon, and aftermath Starting in 2001, Khan served as an adviser on science and technology in the Musharraf administration and had become a public figure who enjoyed much support from his country's political conservative sphere. In 2003, the Bush administration reportedly turned over evidence of a nuclear proliferation network that implicated Khan's role to the Musharraf administration. Khan was dismissed from his post on 31 January 2004. On 4 February 2004, Khan appeared on Pakistan Television (PTV) and confessed to running a proliferation ring, and transferring technology to Iran between 1989 and 1991, and to North Korea and Libya between 1991 and 1997. The Musharraf administration avoided arresting Khan but launched security hearings on Khan who confessed to the military investigators that former Chief of Army Staff General Mirza Aslam Beg had given authorisation for technology transfer to Iran. On 5 February 2004, President Pervez Musharraf issued a pardon to Khan as he feared that the issue would be politicised by his political rivals. Despite the pardon, Khan, who had strong conservative support, had badly damaged the political credibility of the Musharraf administration and the image of the United States who was attempting to win hearts and minds of local populations during the height of the Insurgency in Khyber Pakhtunkhwa. While the local television news media aired sympathetic documentaries on Khan, the opposition parties in the country protested so strongly that the U.S. Embassy in Islamabad had pointed out to the Bush administration that the successor to Musharraf could be less friendly towards the United States. This restrained the Bush administration from applying further direct pressure on Musharraf due to a strategic calculation that it might cause the loss of Musharraf as an ally. In December 2006, the Weapons of Mass Destruction Commission (WMDC), headed by Hans Blix, stated that Khan could not have acted alone "without the awareness of the Pakistan Government". Blix's statement was also reciprocated by the United States government, with one anonymous American government intelligence official quoted by independent journalist and author Seymour Hersh: "Suppose if Edward Teller had suddenly decided to spread nuclear technology around the world. Could he really do that without the American government knowing?". In 2007, the U.S. and European Commission politicians as well as IAEA officials had made several strong calls to have Khan interrogated by IAEA investigators, given the lingering scepticism about the disclosures made by Pakistan, but Prime Minister Shaukat Aziz, who remained supportive of Khan and spoke highly of him, strongly dismissed the calls by terming it as "case closed". In 2008, the security hearings were officially terminated by Chairman joint chiefs General Tariq Majid who marked the details of debriefings as "classified". In 2008, in an interview, Khan laid the whole blame on former President Pervez Musharraf, and labelled Musharraf as the "Big Boss" for proliferation deals. In 2012, Khan also implicated Benazir Bhutto's administration in proliferation matters, pointing to the fact as she had issued "clear directions in thi[s] regard." Khan also said that he was persecuted because he was a Muhajir. 
Government work, academia, and political advocacy Khan's strong advocacy for nuclear sharing of technology eventually led to his ostracisation by much of the scientific community, but Khan was still quite welcome in his country's political and military circles. After leaving the directorship of the Khan Research Laboratories in 2001, Khan briefly joined the Musharraf administration as a policy adviser on science and technology at the request of President Musharraf. In this capacity, Khan promoted increased defence spending on his nation's missile program to counter the perceived threats from the Indian missile program and advised the Musharraf administration on space policy. He presented the idea of using the Ghauri missile system as an expendable launch system to launch satellites into space. At the height of the proliferation controversy in 2007, Prime Minister Shaukat Aziz paid tribute to Khan on state television, stressing in the last part of his speech: "The services of [nuclear] scientist ... Dr. [Abdul] Qadeer Khan are 'unforgettable' for the country". In the 1990s, Khan secured a fellowship with the Pakistan Academy of Sciences; he served as its president in 1996–97. Khan published two books on materials science and started publishing his articles from KRL in the 1980s. Gopal S. Upadhyaya, an Indian metallurgist who attended Khan's conference and met him along with Kuldip Nayar, reportedly described him as a proud Pakistani who wanted to show the world that scientists from Pakistan are inferior to no one. Khan also served as project director of the Ghulam Ishaq Khan Institute of Engineering Sciences and Technology and briefly served as a professor of physics before joining the faculty of Hamdard University, where he remained on the university's board of directors up until his death in 2021. Later, Khan helped establish the A. Q. Khan Institute of Biotechnology and Genetic Engineering at Karachi University. In 2012, Khan announced the formation of a conservative political advocacy group, Tehreek-e-Tahaffuz-e-Pakistan ('Movement for the Protection of Pakistan'). It was subsequently dissolved in 2013. Illness and death In August 2021, Khan was admitted to Khan Research Laboratories Hospital after testing positive for COVID-19. Khan died on 10 October 2021, at the age of 85, after being transferred to a hospital in Islamabad with lung problems. He was given a state funeral at the Faisal Mosque before being buried at the H-8 graveyard in Islamabad. The Prime Minister of Pakistan, Imran Khan, expressed grief over his death in a tweet, adding that "for the people of Pakistan he was a national icon". President of Pakistan Arif Alvi also expressed sadness, adding that "a grateful nation will never forget his services". Legacy During his time in the atomic bomb project, Khan pioneered research in thermal quantum field theory and condensed matter physics, and co-authored articles on the chemical reactions of highly unstable isotopes in controlled physical systems. He maintained his stance on the use of controversial technological solutions to both military and civilian problems, including the use of military technologies for civilian welfare. Khan also remained a vigorous advocate for a nuclear testing program and for defence strength through nuclear weapons. He justified Pakistan's nuclear deterrence program as sparing his country the fate of Iraq or Libya. 
In an interview in 2011, Khan maintained his stance on peace through strength and vigorously defended the nuclear weapons program as part of the deterrence policy: During his work on the nuclear weapons program and afterwards, Khan faced heated and intense criticism from his fellow theorists, most notably Pervez Hoodbhoy, who contested his scientific understanding of quantum physics. In addition, Khan's false claims that he had been the "father" of the atomic bomb project since its inception, and his personal attacks on Munir Ahmad Khan, caused even greater animosity among his fellow theorists and, most particularly, within the general physics community, such as the Pakistan Physics Society. Various motivations have been cited for Khan's role in the proliferation of nuclear weapons. According to the editor-in-chief of Foreign Policy, Moisés Naím, although his actions may have had an ideological or political character, Khan's motives were essentially financial. This is evidenced, according to him, by his commercial maneuvers, his involvement in trade with North Korea, as well as his real estate ownerships. For instance, Khan owned the Hendrina Khan Hotel in Timbuktu, named after his wife. It was one of dozens of his commercial enterprises. To build his hotel in Timbuktu, he reportedly used a Pakistan Air Force C-130 transport aircraft in the early 2000s to transport carved wooden furniture. The plane landed at Tripoli Airport in Libya and the cargo was then taken to Timbuktu by road, as the aircraft was unable to land in Mali. Khan himself accompanied the furniture from Islamabad.<ref>Khan built hotel in Timbuktu. Times of India. 2004.</ref> His wife, two daughters and brother Abdul Quyuim Khan were all named in the Panama Papers in 2016 as owners of Wahdat Ltd., an offshore company registered in the Bahamas. Bruno Tertrais, a senior researcher at the Foundation for Strategic Research, states: "Khan's motivations were complex and evolving (...) The primary motivation seems to have been to ensure the legitimacy of his role in building Pakistan's nuclear force (...) The second motivation, which has become more important over time, is personal enrichment. Finally, the third important element of varying importance depending on the hypothesis: Khan's more or less diffuse desire to see other Muslim countries access nuclear power." In spite of the proliferation controversy and his volatile personality, Khan remained a popular public figure and has been seen as a symbol of national pride by many in Pakistan, who regard him as a national hero. While Khan was bestowed with many medals and honours by the federal government and universities in Pakistan, he also remains the only citizen of Pakistan to have been honoured twice with the Nishan-e-Imtiaz.Shabbir, Usman, Remembering Unsung Heroes: Munir Ahmed Khan, Defence Journal, 27 June 2004 Nishan-e-Imtiaz (1999) Nishan-e-Imtiaz (1996) Hilal-e-Imtiaz (1989) Sir Syed University of Engineering and Technology 60 gold medals from universities in the country. University of Karachi Baqai Medical University Hamdard University Gomal University University of Engineering and Technology, Lahore Publications Selected research papers and patents Nuclear and material physics Dilation investigation of metallic phase transformation in 18% Ni maraging steels, Proceedings of the International Conf. on Martensitic Transformations (1986), The Japan Institute of Metals, pp. 560–565. The spread of Nuclear weapons among nations: Militarization or Development, pp. 417–430. (Ref. 
Nuclear War, Nuclear Proliferation and their Consequences, "Proceedings of the 5th International Colloquium organised by the Group De Bellerive, Geneva, 27–29 June 1985", edited by Sadruddin Aga Khan, published by Clarendon Press, Oxford, 1986). Flow-induced vibrations in Gas-tube assembly of centrifuges. Journal of Nuclear Science and Technology, 23(9) (September 1986), pp. 819–827. Dimensional anisotropy in 18% Ni maraging steel, Seventh National Symposium on Frontiers in Physics, written with Anwar-ul-Haq, Mohammad Farooq, S. Qaisar, published by the Pakistan Physics Society (1998). Thermodynamics of Non-equilibrium phases in Electron-beam rapid solidification, Proceedings of the Second National Symposium on Frontiers in Physics, written with A. Tauqeer, Fakhar Hashmi, published by the Pakistan Physics Society (1988). Books See also Dr. A. Q. Khan Institute of Computer Sciences and Information Technology Dr. A. Q. Khan Research Laboratories Pakistani missile research and development program Conservatism in Pakistan Nuclear espionage Nuclear arms race Nuclear Secrets, 2007 documentary series about the nuclear race and proliferation including Khan's role therein Anwar Ali (physicist), Pakistani physicist charged with nuclear proliferation Peter Finke, German physicist in the nuclear weapons programme of Pakistan Notes References Citations Bibliography Burr, William. "The 'Labors of Atlas, Sisyphus, or Hercules'? US Gas-Centrifuge Policy and Diplomacy, 1954–60." The International History Review 37.3 (2015): 431–457. Web links Annotated bibliography for A.Q. Khan The Physics war of Munir Khan and A.Q. Khan "Kahuta Research Laboratories", Federation of American Scientists. British Reporter Adrian Levy: The United States Secretly Helped Pakistan Build Its Nuclear Arsenal External links Abdul Quadeer Khan at the Pakistan Academy of Sciences Prof. Abdul Qadeer Khan at the Islamic Academy of Sciences Why He Went Nuclear by Douglas Frantz and Catherine Collins Written by Abdul Qadeer Khan Heart disease Random thoughts 11 June 2012 More on thalassemia 4 June 2012 Memorable Karachi 28 May 2012 Great expectations 14 May 2012 Mass graves 30 April 2012 I saved my country 1 November 2012 Online books 1936 births 2021 deaths Deaths from the COVID-19 pandemic in Islamabad People from Bhopal State Indian emigrants to Pakistan Muhajir people D. J. 
Sindh Government Science College alumni University of Karachi alumni Pakistani physicists Engineers from Karachi 20th-century Pakistani engineers Pakistani expatriates in Germany Pakistani expatriates in the Netherlands Delft University of Technology alumni Academic staff of the Delft University of Technology Pakistani expatriates in Belgium Catholic University of Leuven alumni Theoretical chemists Pakistani metallurgists Project-706 people Pakistani nuclear physicists Weapons scientists and engineers Materials scientists and engineers Pakistani inventors Pakistani spies Scientists from Islamabad Founders of Pakistani schools and colleges Academic staff of Hamdard University Fellows of Pakistan Academy of Sciences Theoretical physicists Academic staff of Ghulam Ishaq Khan Institute of Engineering Sciences and Technology Recipients of Hilal-i-Imtiaz Recipients of Nishan-e-Imtiaz Space and Upper Atmosphere Research Commission people Recipients of Pakistani presidential pardons Pakistani memoirists Pakistani technology writers Pakistani textbook writers Pakistani columnists Members of the Pakistan Philosophical Congress Deaths from the COVID-19 pandemic in Pakistan Nuclear proliferation Nuclear weapons scientists and engineers Presidents of the Pakistan Academy of Sciences People from Karachi
Abdul Qadeer Khan
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
8,600
[ "Quantum chemistry", "Physical chemists", "Theoretical physics", "Materials science", "Theoretical chemistry", "Theoretical chemists", "Materials scientists and engineers", "Theoretical physicists" ]
452,493
https://en.wikipedia.org/wiki/De%20Laval%20nozzle
A de Laval nozzle (or convergent-divergent nozzle, CD nozzle or con-di nozzle) is a tube which is pinched in the middle, with a rapid convergence and gradual divergence. It is used to accelerate a compressible fluid to supersonic speeds in the axial (thrust) direction, by converting the thermal energy of the flow into kinetic energy. De Laval nozzles are widely used in some types of steam turbines and rocket engine nozzles. They also see use in supersonic jet engines. Similar flow properties have been applied to jet streams within astrophysics. History Giovanni Battista Venturi designed converging-diverging tubes known as Venturi tubes for experiments on fluid pressure reduction effects when fluid flows through chokes (Venturi effect). German engineer and inventor Ernst Körting supposedly switched to a converging-diverging nozzle in his steam jet pumps by 1878 after using convergent nozzles, but these nozzles remained a company secret. Later, Swedish engineer Gustaf de Laval applied his own converging-diverging nozzle design for use on his impulse turbine in 1888. Laval's convergent-divergent nozzle was first applied in a rocket engine by Robert Goddard. Most modern rocket engines that employ hot gas combustion use de Laval nozzles. Operation Its operation relies on the different properties of gases flowing at subsonic, sonic, and supersonic speeds. The speed of a subsonic flow of gas will increase if the pipe carrying it narrows because the mass flow rate is constant. The gas flow through a de Laval nozzle is isentropic (gas entropy is nearly constant). In a subsonic flow, sound will propagate through the gas. At the "throat", where the cross-sectional area is at its minimum, the gas velocity locally becomes sonic (Mach number = 1.0), a condition called choked flow. As the nozzle cross-sectional area increases, the gas begins to expand, and the flow increases to supersonic velocities, where a sound wave will not propagate backward through the gas as viewed in the frame of reference of the nozzle (Mach number > 1.0). Conditions for operation A de Laval nozzle will choke at the throat only if the pressure and mass flow through the nozzle are sufficient to reach sonic speeds; otherwise no supersonic flow is achieved, and it will act as a Venturi tube. This requires the entry pressure to the nozzle to be significantly above ambient at all times (equivalently, the stagnation pressure of the jet must be above ambient). In addition, the pressure of the gas at the exit of the expansion portion of the exhaust of a nozzle must not be too low. Because pressure cannot travel upstream through the supersonic flow, the exit pressure can be significantly below the ambient pressure into which it exhausts, but if it is too far below ambient, then the flow will cease to be supersonic, or the flow will separate within the expansion portion of the nozzle, forming an unstable jet that may "flop" around within the nozzle, producing a lateral thrust and possibly damaging it. In practice, ambient pressure must be no higher than roughly 2–3 times the pressure in the supersonic gas at the exit for supersonic flow to leave the nozzle. Analysis of gas flow in de Laval nozzles The analysis of gas flow through de Laval nozzles involves a number of concepts and assumptions: For simplicity, the gas is assumed to be an ideal gas. The gas flow is isentropic (i.e., at constant entropy). 
As a result, the flow is reversible (frictionless and no dissipative losses), and adiabatic (i.e., no heat enters or leaves the system). The gas flow is constant (i.e., in steady state) during the period of the propellant burn. The gas flow is along a straight line from gas inlet to exhaust gas exit (i.e., along the nozzle's axis of symmetry). The gas flow behaviour is compressible since the flow is at very high velocities (Mach number > 0.3). Exhaust gas velocity As the gas enters a nozzle, it is moving at subsonic velocities. As the cross-sectional area contracts, the gas is forced to accelerate until the axial velocity becomes sonic at the nozzle throat, where the cross-sectional area is the smallest. From the throat, the cross-sectional area then increases, allowing the gas to expand and the axial velocity to become progressively more supersonic. The linear velocity of the exiting exhaust gases can be calculated using the following equation: v_e = \sqrt{\frac{T\,R}{M}\cdot\frac{2\gamma}{\gamma-1}\cdot\left[1-\left(\frac{p_e}{p}\right)^{(\gamma-1)/\gamma}\right]}, where v_e is the exhaust velocity at the nozzle exit, T is the absolute temperature of the inlet gas, R is the universal gas law constant, M is the molar mass of the gas, γ is the isentropic expansion factor (the ratio of specific heats), p_e is the absolute pressure of the exhaust gas at the nozzle exit, and p is the absolute pressure of the inlet gas. Some typical values of the exhaust gas velocity ve for rocket engines burning various propellants are: 1,700 to 2,900 m/s (3,800 to 6,500 mph) for liquid monopropellants, 2,900 to 4,500 m/s (6,500 to 10,100 mph) for liquid bipropellants, 2,100 to 3,200 m/s (4,700 to 7,200 mph) for solid propellants. As a note of interest, ve is sometimes referred to as the ideal exhaust gas velocity because it is based on the assumption that the exhaust gas behaves as an ideal gas. As an example calculation using the above equation, assume that the propellant combustion gases are: at an absolute pressure entering the nozzle p = 7.0 MPa and exit the rocket exhaust at an absolute pressure pe = 0.1 MPa; at an absolute temperature of T = 3500 K; with an isentropic expansion factor γ = 1.22 and a molar mass M = 22 kg/kmol. Using those values in the above equation yields an exhaust velocity ve = 2802 m/s, or 2.80 km/s, which is consistent with the above typical values. Technical literature often interchanges without note the universal gas law constant R, which applies to any ideal gas, with the gas law constant Rs, which applies only to a specific individual gas of molar mass M. The relationship between the two constants is Rs = R/M. Mass flow rate In accordance with conservation of mass, the mass flow rate of the gas throughout the nozzle is the same regardless of the cross-sectional area. When the throat is at sonic speed (Ma = 1), the mass flow rate is choked and is given by \dot{m} = A^* p_0 \sqrt{\frac{\gamma}{R_s T_0}} \left(\frac{\gamma+1}{2}\right)^{-\frac{\gamma+1}{2(\gamma-1)}}, where A^* is the throat area and p_0 and T_0 are the stagnation pressure and temperature. By Newton's third law of motion the mass flow rate can be used to determine the force exerted by the expelled gas: F = \dot{m}\,v_e. In aerodynamics, the force exerted by the nozzle is defined as the thrust. See also History of the internal combustion engine Spacecraft propulsion Twister supersonic separator Isentropic nozzle flow Daniel Bernoulli References External links Exhaust gas velocity calculator Other applications of nozzle theory Flow of gases and steam through nozzles Nozzles Rocket propulsion Jet engines Astrophysics
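For readers who want to check the worked example above numerically, the following short Python sketch evaluates the exhaust-velocity relation with the quoted inlet conditions; the function name and the numerical value used for the universal gas constant are illustrative choices, not part of the original article.

```python
import math

R_UNIVERSAL = 8314.46  # J/(kmol*K), universal gas law constant

def exhaust_velocity(p_inlet, p_exit, T_inlet, gamma, molar_mass):
    """Ideal exhaust velocity (m/s) from the isentropic nozzle relation above."""
    specific_gas_constant = R_UNIVERSAL / molar_mass              # Rs = R/M, J/(kg*K)
    pressure_term = 1.0 - (p_exit / p_inlet) ** ((gamma - 1.0) / gamma)
    return math.sqrt(2.0 * gamma / (gamma - 1.0)
                     * specific_gas_constant * T_inlet * pressure_term)

# Worked example from the text: p = 7.0 MPa, pe = 0.1 MPa, T = 3500 K,
# gamma = 1.22, M = 22 kg/kmol
ve = exhaust_velocity(7.0e6, 0.1e6, 3500.0, 1.22, 22.0)
print(f"exhaust velocity: {ve:.0f} m/s")   # prints ~2802 m/s, matching the article
```

Running the sketch reproduces the ve = 2802 m/s figure quoted in the example above.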
De Laval nozzle
[ "Physics", "Astronomy", "Technology" ]
1,452
[ "Jet engines", "Astronomical sub-disciplines", "Engines", "Astrophysics" ]
452,577
https://en.wikipedia.org/wiki/Free%20body%20diagram
In physics and engineering, a free body diagram (FBD; also called a force diagram) is a graphical illustration used to visualize the applied forces, moments, and resulting reactions on a free body in a given condition. It depicts a body or connected bodies with all the applied forces and moments, and reactions, which act on the body(ies). The body may consist of multiple internal members (such as a truss), or be a compact body (such as a beam). A series of free bodies and other diagrams may be necessary to solve complex problems. Sometimes, in order to calculate the resultant force graphically, the applied forces are arranged as the edges of a polygon of forces or force polygon (see below). Free body A body is said to be "free" when it is singled out from other bodies for the purposes of dynamic or static analysis. The object does not have to be "free" in the sense of being unforced, and it may or may not be in a state of equilibrium; rather, it is not fixed in place and is thus "free" to move in response to forces and torques it may experience. Figure 1 shows, on the left, green, red, and blue widgets stacked on top of each other, and for some reason the red cylinder happens to be the body of interest. (It may be necessary to calculate the stress to which it is subjected, for example.) On the right, the red cylinder has become the free body. In figure 2, the interest has shifted to just the left half of the red cylinder and so now it is the free body on the right. The example illustrates the context sensitivity of the term "free body". A cylinder can be part of a free body, it can be a free body by itself, and, as it is composed of parts, any of those parts may be a free body in itself. Figures 1 and 2 are not yet free body diagrams. In a completed free body diagram, the free body would be shown with forces acting on it. Purpose Free body diagrams are used to visualize forces and moments applied to a body and to calculate reactions in mechanics problems. These diagrams are frequently used both to determine the loading of individual structural components and to calculate internal forces within a structure. They are used by most engineering disciplines from biomechanics to structural engineering. In the educational environment, a free body diagram is an important step in understanding certain topics, such as statics, dynamics and other forms of classical mechanics. Features A free body diagram is not a scaled drawing; it is a diagram. The symbols used in a free body diagram depend upon how a body is modeled. Free body diagrams consist of: A simplified version of the body (often a dot or a box) Forces shown as straight arrows pointing in the direction they act on the body Moments shown as curves with an arrowhead or as a vector with two arrowheads pointing in the direction they act on the body One or more reference coordinate systems By convention, reactions to applied forces are shown with hash marks through the stem of the vector The number of forces and moments shown depends upon the specific problem and the assumptions made. Common assumptions are neglecting air resistance and friction and assuming rigid body action. In statics all forces and moments must balance to zero; the physical interpretation is that if they do not, the body is accelerating and the principles of statics do not apply. In dynamics the resultant forces and moments can be non-zero. Free body diagrams may not represent an entire physical body. Portions of a body can be selected for analysis. 
This technique allows the calculation of internal forces by making them appear external, so that the selected portion can be analyzed. This can be used multiple times to calculate internal forces at different locations within a physical body. For example, a gymnast performing the iron cross: modeling the ropes and person allows calculation of overall forces (body weight, neglecting rope weight, breezes, buoyancy, electrostatics, relativity, rotation of the earth, etc.). Then remove the person and show only one rope; this gives the direction of the force in that rope. Then, looking only at the person, the forces on the hands can be calculated. Now look only at the arm to calculate the forces and moments at the shoulders, and so on, until the component you need to analyze can be calculated. Modeling the body A body may be modeled in three ways: a particle. This model may be used when any rotational effects are zero or of no interest even though the body itself may be extended. The body may be represented by a small symbolic blob and the diagram reduces to a set of concurrent arrows. A force on a particle is a bound vector. rigid extended. Stresses and strains are of no interest but rotational effects are. A force arrow should lie along the line of force, but where along the line is irrelevant. A force on an extended rigid body is a sliding vector. non-rigid extended. The point of application of a force becomes crucial and has to be indicated on the diagram. A force on a non-rigid body is a bound vector. Some use the tail of the arrow to indicate the point of application. Others use the tip. What is included An FBD represents the body of interest and the external forces acting on it. The body: This is usually a schematic depending on the body—particle/extended, rigid/non-rigid—and on what questions are to be answered. Thus if rotation of the body and torque are under consideration, an indication of the size and shape of the body is needed. For example, the brake dive of a motorcycle cannot be found from a single point, and a sketch with finite dimensions is required. The external forces: These are indicated by labelled arrows. In a fully solved problem, a force arrow is capable of indicating the direction and the line of action, the magnitude, the point of application, and whether it is a reaction (as opposed to an applied force), shown by a hash through the stem of the arrow. Often a provisional free body is drawn before everything is known. The purpose of the diagram is to help to determine magnitude, direction, and point of application of external loads. When a force is originally drawn, its length may not indicate the magnitude. Its line may not correspond to the exact line of action. Even its orientation may not be correct. External forces known to have negligible effect on the analysis may be omitted after careful consideration (e.g. buoyancy forces of the air in the analysis of a chair, or atmospheric pressure on the analysis of a frying pan). External forces acting on an object may include friction, gravity, normal force, drag, tension, or a human force due to pushing or pulling. When in a non-inertial reference frame (see coordinate system, below), fictitious forces, such as the centrifugal pseudoforce, are appropriate. At least one coordinate system is always included, and chosen for convenience. Judicious selection of a coordinate system can make defining the vectors simpler when writing the equations of motion or statics. The x direction may be chosen to point down the ramp in an inclined plane problem, for example. 
In that case the friction force only has an x component, and the normal force only has a y component. The force of gravity would then have components in both the x and y directions: mg sin(θ) in the x direction and mg cos(θ) in the y direction, where θ is the angle between the ramp and the horizontal. Exclusions A free body diagram should not show: Bodies other than the free body. Constraints. (The body is not free from constraints; the constraints have just been replaced by the forces and moments exerted on the body.) Forces exerted by the free body. (A diagram showing the forces exerted both on and by a body is likely to be confusing since all the forces will cancel out. By Newton's 3rd law, if body A exerts a force on body B then B exerts an equal and opposite force on A. This should not be confused with the equal and opposite forces that are necessary to hold a body in equilibrium.) Internal forces. (For example, if an entire truss is being analyzed, the forces between the individual truss members are not included.) Velocity or acceleration vectors. Analysis In an analysis, a free body diagram is used by summing all forces and moments (often accomplished along or about each of the axes). When the sum of all forces and moments is zero, the body is at rest or moving and/or rotating at a constant velocity, by Newton's first law. If the sum is not zero, then the body is accelerating in a direction or about an axis according to Newton's second law. Forces not aligned to an axis Determining the sum of the forces and moments is straightforward if they are aligned with coordinate axes, but it is more complex if some are not. It is convenient to use the components of the forces, in which case the symbols ΣFx and ΣFy are used instead of ΣF (the variable M is used for moments). Forces and moments that are at an angle to a coordinate axis can be rewritten as two vectors that are equivalent to the original (or three, for three-dimensional problems), each vector directed along one of the axes (Fx and Fy). Example: A block on an inclined plane A simple free-body diagram, shown above, of a block on a ramp illustrates this. All external supports and structures have been replaced by the forces they generate. These include: mg: the product of the mass of the block and the constant of gravitational acceleration: its weight. N: the normal force of the ramp. Ff: the friction force of the ramp. The force vectors show the direction and point of application and are labelled with their magnitude. It contains a coordinate system that can be used when describing the vectors. Some care is needed in interpreting the diagram. The normal force is shown acting at the midpoint of the base, but if the block is in static equilibrium its true location is directly below the centre of mass, where the weight acts, because that is necessary to compensate for the moment of the friction force. Unlike the weight and normal force, which are expected to act at the tip of the arrow, the friction force is a sliding vector and thus the point of application is not relevant, and the friction acts along the whole base. Polygon of forces In the case of two applied forces, their sum (resultant force) can be found graphically using a parallelogram of forces. To graphically determine the resultant force of multiple forces, the acting forces can be arranged as edges of a polygon by attaching the beginning of one force vector to the end of another in an arbitrary order. 
Then the vector value of the resultant force would be determined by the missing edge of the polygon. In the diagram, the forces P1 to P6 are applied to the point O. The polygon is constructed starting with P1 and P2 using the parallelogram of forces (vertex a). The process is repeated (adding P3 yields the vertex b, etc.). The remaining edge of the polygon O-e represents the resultant force R. Kinetic diagram In dynamics a kinetic diagram is a pictorial device used in analyzing mechanics problems when there is determined to be a net force and/or moment acting on a body. They are related to and often used with free body diagrams, but depict only the net force and moment rather than all of the forces being considered. Kinetic diagrams are not required to solve dynamics problems; their use in teaching dynamics is argued against by some in favor of other methods that they view as simpler. They appear in some dynamics texts but are absent in others. See also Classical mechanics Force field analysis – applications of force diagram in social science Kinematic diagram Physics Shear and moment diagrams Strength of materials References Sources Notes External links Mechanics Diagrams Structural analysis
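As a small numerical companion to the inclined-plane example above, the Python sketch below resolves the weight of a block into components along and normal to the ramp and reports the forces needed for static equilibrium; the 10 kg mass and 30° angle are hypothetical values chosen purely for illustration, not taken from the article.

```python
import math

# Hypothetical block on a ramp: values chosen only for illustration.
m = 10.0                       # kg, mass of the block
theta = math.radians(30.0)     # ramp angle
g = 9.81                       # m/s^2, gravitational acceleration

# With the x axis pointing down the ramp and the y axis normal to it,
# the weight mg resolves into the two components used in the text.
weight = m * g
w_x = weight * math.sin(theta)   # component along the ramp
w_y = weight * math.cos(theta)   # component pressing into the ramp

# For static equilibrium the normal force balances w_y and friction balances w_x.
normal_force = w_y
friction_needed = w_x
mu_min = math.tan(theta)         # minimum friction coefficient to hold the block

print(f"weight            : {weight:6.1f} N")
print(f"along ramp   (x)  : {w_x:6.1f} N  -> friction needed : {friction_needed:.1f} N")
print(f"into ramp    (y)  : {w_y:6.1f} N  -> normal force    : {normal_force:.1f} N")
print(f"minimum static friction coefficient: {mu_min:.2f}")
```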
Free body diagram
[ "Physics", "Engineering" ]
2,430
[ "Structural engineering", "Structural analysis", "Mechanics", "Mechanical engineering", "Aerospace engineering" ]
452,667
https://en.wikipedia.org/wiki/Wave%20packet
In physics, a wave packet (also known as a wave train or wave group) is a short burst of localized wave action that travels as a unit, outlined by an envelope. A wave packet can be analyzed into, or can be synthesized from, a potentially-infinite set of component sinusoidal waves of different wavenumbers, with phases and amplitudes such that they interfere constructively only over a small region of space, and destructively elsewhere. Any signal of a limited width in time or space requires many frequency components around a center frequency within a bandwidth inversely proportional to that width; even a gaussian function is considered a wave packet because its Fourier transform is a "packet" of waves of frequencies clustered around a central frequency. Each component wave function, and hence the wave packet, are solutions of a wave equation. Depending on the wave equation, the wave packet's profile may remain constant (no dispersion) or it may change (dispersion) while propagating. Historical background Ideas related to wave packets – modulation, carrier waves, phase velocity, and group velocity – date from the mid-1800s. The idea of a group velocity distinct from a wave's phase velocity was first proposed by W.R. Hamilton in 1839, and the first full treatment was by Rayleigh in his "Theory of Sound" in 1877. Erwin Schrödinger introduced the idea of wave packets just after publishing his famous wave equation. He solved his wave equation for a quantum harmonic oscillator, introduced the superposition principle, and used it to show that a compact state could persist. While this work did result in the important concept of coherent states, the wave packet concept did not endure. The year after Schrödinger's paper, Werner Heisenberg published his paper on the uncertainty principle, showing in the process that Schrödinger's results only applied to quantum harmonic oscillators, and not, for example, to the Coulomb potential characteristic of atoms. The following year, 1927, Charles Galton Darwin explored Schrödinger's equation for an unbound electron in free space, assuming an initial Gaussian wave packet. Darwin showed that at time later the position of the packet traveling at velocity would be where is the uncertainty in the initial position. Later in 1927, Paul Ehrenfest showed that the time, for a matter wave packet of width and mass to spread by a factor of 2 was . Since is so small, wave packets on the scale of macroscopic objects, with large width and mass, double only at cosmic time scales. Significance in quantum mechanics Quantum mechanics describes the nature of atomic and subatomic systems using Schrödinger's wave equation. The classical limit of quantum mechanics and many formulations of quantum scattering use wave packets formed from various solutions to this equation. Quantum wave packet profiles change while propagating; they show dispersion. Physicists have concluded that "wave packets would not do as representations of subatomic particles". Wave packets and the classical limit Schrödinger developed wave packets in hopes of interpreting quantum wave solutions as locally compact wave groups. Such packets trade off position localization against spreading in momentum. In the coordinate representation of the wave (such as the Cartesian coordinate system), the position of the particle's localized probability is specified by the position of the packet solution. 
The narrower the spatial wave packet, and therefore the better localized the position of the wave packet, the larger the spread in the momentum of the wave. This trade-off between spread in position and spread in momentum is a characteristic feature of the Heisenberg uncertainty principle. One kind of optimal tradeoff minimizes the product of position uncertainty and momentum uncertainty . If we place such a packet at rest it stays at rest: the average value of the position and momentum match a classical particle. However it spreads out in all directions with a velocity given by the optimal momentum uncertainty . The spread is so fast that in the distance of once around an atom the wave packet is unrecognizable. Wave packets and quantum scattering Particle interactions are called scattering in physics; wave packet mathematics play an important role in quantum scattering approaches. A monochromatic (single momentum) source produces convergence difficulties in the scattering models. Scattering problems also have classical limits. Whenever the scattering target (for example an atom) has a size much smaller than wave packet, the center of the wave packet follows scattering classical trajectories. In other cases, the wave packet distorts and scatters as it interacts with the target. Basic behaviors Non-dispersive Without dispersion the wave packet maintains its shape as it propagates. As an example of propagation without dispersion, consider wave solutions to the following wave equation from classical physics where is the speed of the wave's propagation in a given medium. Using the physics time convention, , the wave equation has plane-wave solutions where the relation between the angular frequency and angular wave vector is given by the dispersion relation: such that . This relation should be valid so that the plane wave is a solution to the wave equation. As the relation is linear, the wave equation is said to be non-dispersive. To simplify, consider the one-dimensional wave equation with . Then the general solution is where the first and second term represent a wave propagating in the positive respectively negative . A wave packet is a localized disturbance that results from the sum of many different wave forms. If the packet is strongly localized, more frequencies are needed to allow the constructive superposition in the region of localization and destructive superposition outside the region. From the basic one-dimensional plane-wave solutions, a general form of a wave packet can be expressed as where the amplitude , containing the coefficients of the wave superposition, follows from taking the inverse Fourier transform of a "sufficiently nice" initial wave evaluated at : and comes from Fourier transform conventions. For example, choosing we obtain and finally The nondispersive propagation of the real or imaginary part of this wave packet is presented in the above animation. Dispersive By contrast, in the case of dispersion, a wave changes shape during propagation. For example, the free Schrödinger equation , has plane-wave solutions of the form: where is a constant and the dispersion relation satisfies with the subscripts denoting unit vector notation. As the dispersion relation is non-linear, the free Schrödinger equation is dispersive. In this case, the wave packet is given by: where once again is simply the Fourier transform of . If (and therefore ) is a Gaussian function, the wave packet is called a Gaussian wave packet. 
For example, the solution to the one-dimensional free Schrödinger equation (with , , and ħ set equal to one) satisfying the initial condition representing a wave packet localized in space at the origin as a Gaussian function, is seen to be An impression of the dispersive behavior of this wave packet is obtained by looking at the probability density: It is evident that this dispersive wave packet, while moving with constant group velocity , is delocalizing rapidly: it has a width increasing with time as , so eventually it diffuses to an unlimited region of space. Gaussian wave packets in quantum mechanics The above dispersive Gaussian wave packet, unnormalized and just centered at the origin, instead, at =0, can now be written in 3D, now in standard units: The Fourier transform is also a Gaussian in terms of the wavenumber, the k-vector, With and its inverse adhering to the uncertainty relation such that can be considered the square of the width of the wave packet, whereas its inverse can be written as Each separate wave only phase-rotates in time, so that the time dependent Fourier-transformed solution is The inverse Fourier transform is still a Gaussian, but now the parameter has become complex, and there is an overall normalization factor. The integral of over all space is invariant, because it is the inner product of with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy eigenstate , the inner product, only changes in time in a simple way: its phase rotates with a frequency determined by the energy of . When has zero energy, like the infinite wavelength wave, it doesn't change at all. For a given , the phase of the wave function varies with position as . It varies quadratically with position, which means that it is different from multiplication by a linear phase factor as is the case of imparting a constant momentum to the wave packet. In general, the phase of a gaussian wave packet has both a linear term and a quadratic term. The coefficient of the quadratic term begins by increasing from towards as the gaussian wave packet becomes sharper, then at the moment of maximum sharpness, the phase of the wave function varies linearly with position. Then the coefficient of the quadratic term increases from towards , as the gaussian wave packet spreads out again. The integral is also invariant, which is a statement of the conservation of probability. Explicitly, where is the distance from the origin, the speed of the particle is zero, and width given by which is at (arbitrarily chosen) time while eventually growing linearly in time, as , indicating wave-packet spreading. For example, if an electron wave packet is initially localized in a region of atomic dimensions (i.e., m) then the width of the packet doubles in about s. Clearly, particle wave packets spread out very rapidly indeed (in free space): For instance, after ms, the width will have grown to about a kilometer. This linear growth is a reflection of the (time-invariant) momentum uncertainty: the wave packet is confined to a narrow , and so has a momentum which is uncertain (according to the uncertainty principle) by the amount , a spread in velocity of , and thus in the future position by . The uncertainty relation is then a strict inequality, very far from saturation, indeed! The initial uncertainty has now increased by a factor of (for large ). 
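The spreading described above can be checked with a few lines of Python. The sketch below assumes the width form w(t) = Δx0·sqrt(1 + (ħt/(m·Δx0²))²), which reproduces the long-time linear growth ħt/(m·Δx0) quoted in the text (other conventions for the "width" of a Gaussian differ by factors of 2), and evaluates it for an electron packet initially confined to atomic dimensions.

```python
import math

hbar = 1.054_571_817e-34    # J*s, reduced Planck constant
m_e  = 9.109_383_7015e-31   # kg, electron mass

def packet_width(t, a0, m):
    """Width of a freely spreading Gaussian packet, w(t) = a0*sqrt(1 + (hbar*t/(m*a0**2))**2).

    This form matches the long-time linear growth hbar*t/(m*a0) described in the text;
    definitions of "width" differing by factors of 2 give the same orders of magnitude.
    """
    return a0 * math.sqrt(1.0 + (hbar * t / (m * a0 ** 2)) ** 2)

a0 = 1e-10   # m, electron packet localized to atomic dimensions

# Time for the width to double: hbar*t/(m*a0**2) = sqrt(3)
t_double = math.sqrt(3.0) * m_e * a0 ** 2 / hbar
print(f"width doubles after ~{t_double:.1e} s")                     # ~1.5e-16 s

# Width after one millisecond of free spreading
print(f"width after 1 ms  : ~{packet_width(1e-3, a0, m_e):.0f} m")  # ~1.2e3 m, about a kilometer
```

The two printed numbers illustrate the claims in this section: doubling on the order of 10^-16 s, and growth to roughly a kilometer after a millisecond.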
The 2D case A gaussian 2D quantum wave function: where The Airy wave train In contrast to the above Gaussian wave packet, which moves at constant group velocity and always disperses, there exists a wave function based on Airy functions that propagates freely without envelope dispersion, maintaining its shape, and accelerates in free space: where, for simplicity (and nondimensionalization), choosing , , and B an arbitrary constant results in There is no dissonance with Ehrenfest's theorem in this force-free situation, because the state is both non-normalizable and has an undefined (infinite) for all times. (To the extent that it could be defined, for all times, despite the apparent acceleration of the front.) The Airy wave train is the only dispersionless wave in one-dimensional free space. In higher dimensions, other dispersionless waves are possible. In phase space, this is evident in the pure state Wigner quasiprobability distribution of this wavetrain, whose shape in x and p is invariant as time progresses, but whose features accelerate to the right, in accelerating parabolas. The Wigner function satisfies: The three equalities demonstrate three facts: Time-evolution is equivalent to a translation in phase-space by . The contour lines of the Wigner function are parabolas of form . Time-evolution is equivalent to a shearing in phase space along the -direction at speed . Note the momentum distribution obtained by integrating over all is constant. Since this is the probability density in momentum space, it is evident that the wave function itself is not normalizable. Free propagator The narrow-width limit of the Gaussian wave packet solution discussed is the free propagator kernel . For other differential equations, this is usually called the Green's function, but in quantum mechanics it is traditional to reserve the name Green's function for the time Fourier transform of . Returning to one dimension for simplicity, with m and ħ set equal to one, when is the infinitesimal quantity , the Gaussian initial condition, rescaled so that its integral is one, becomes a delta function, , so that its time evolution, yields the propagator. Note that a very narrow initial wave packet instantly becomes infinitely wide, but with a phase which is more rapidly oscillatory at large values of x. This might seem strange—the solution goes from being localized at one point to being "everywhere" at all later times, but it is a reflection of the enormous momentum uncertainty of a localized particle, as explained above. Further note that the norm of the wave function is infinite, which is also correct, since the square of a delta function is divergent in the same way. The factor involving is an infinitesimal quantity which is there to make sure that integrals over are well defined. In the limit that , becomes purely oscillatory, and integrals of are not absolutely convergent. In the remainder of this section, it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit ε→0 is to be taken only after the final state is calculated. The propagator is the amplitude for reaching point x at time t, when starting at the origin, x=0. 
By translation invariance, the amplitude for reaching a point x when starting at point y is the same function, only now translated, In the limit when t is small, the propagator goes to a delta function but only in the sense of distributions: The integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero. To see this, note that the integral over all space of equals 1 at all times, since this integral is the inner-product of K with the uniform wave function. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit ε→0 is taken at the very end. So the propagation kernel is the (future) time evolution of a delta function, and it is continuous, in a sense: it goes to the initial delta function at small times. If the initial wave function is an infinitely narrow spike at position , it becomes the oscillatory wave, Now, since every function can be written as a weighted sum of such narrow spikes, the time evolution of every function 0 is determined by this propagation kernel , Thus, this is a formal way to express the fundamental solution or general solution. The interpretation of this expression is that the amplitude for a particle to be found at point at time is the amplitude that it started at , times the amplitude that it went from to , summed over all the possible starting points. In other words, it is a convolution of the kernel with the arbitrary initial condition , Since the amplitude to travel from to after a time +' can be considered in two steps, the propagator obeys the composition identity, which can be interpreted as follows: the amplitude to travel from to in time +' is the sum of the amplitude to travel from to in time , multiplied by the amplitude to travel from to in time ', summed over all possible intermediate states y. This is a property of an arbitrary quantum system, and by subdividing the time into many segments, it allows the time evolution to be expressed as a path integral. Analytic continuation to diffusion The spreading of wave packets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is randomly walking, the probability density function satisfies the diffusion equation where the factor of 2, which can be removed by rescaling either time or space, is only for convenience. A solution of this equation is the time-varying Gaussian function which is a form of the heat kernel. Since the integral of ρt is constant while the width is becoming narrow at small times, this function approaches a delta function at t=0, again only in the sense of distributions, so that for any test function . The time-varying Gaussian is the propagation kernel for the diffusion equation and it obeys the convolution identity, which allows diffusion to be expressed as a path integral. The propagator is the exponential of an operator , which is the infinitesimal diffusion operator, A matrix has two indices, which in continuous space makes it a function of and '. 
In this case, because of translation invariance, the matrix element only depend on the difference of the position, and a convenient abuse of notation is to refer to the operator, the matrix elements, and the function of the difference by the same name: Translation invariance means that continuous matrix multiplication, is essentially convolution, The exponential can be defined over a range of ts which include complex values, so long as integrals over the propagation kernel stay convergent, As long as the real part of is positive, for large values of , is exponentially decreasing, and integrals over are indeed absolutely convergent. The limit of this expression for approaching the pure imaginary axis is the above Schrödinger propagator encountered, which illustrates the above time evolution of Gaussians. From the fundamental identity of exponentiation, or path integration, holds for all complex z values, where the integrals are absolutely convergent so that the operators are well defined. Thus, quantum evolution of a Gaussian, which is the complex diffusion kernel K, amounts to the time-evolved state, This illustrates the above diffusive form of the complex Gaussian solutions, See also Wave Wave propagation Fourier analysis Group velocity Phase velocity Free particle Coherent states Waveform Wavelet Matter wave Pulse (signal processing) Pulse (physics) Schrödinger equation Introduction to quantum mechanics Soliton Notes References External links 1d Wave packet plot in Google 1d Wave train and probability density plot in Google 2d Wave packet plot in Google 2d Wave train plot in Google 2d probability density plot in Google Quantum physics online : Interactive simulation of a free wavepacket Web-Schrödinger: Interactive 2D wave packet dynamics simulation A simulation of a wave package in 2D (According to FOURIER-Synthesis in 2D) Wave mechanics Quantum mechanics
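As a closing illustration of the synthesis idea described at the start of the article (building a localized packet by superposing sinusoidal components clustered around a central wavenumber), the following Python sketch sums plane waves weighted by Gaussian amplitudes and measures the width of the resulting envelope; the particular values of k0 and dk, and the grid sizes, are arbitrary illustration choices.

```python
import numpy as np

k0, dk = 50.0, 5.0                                # arbitrary central wavenumber and spectral width
k = np.linspace(k0 - 5 * dk, k0 + 5 * dk, 801)    # component wavenumbers
A = np.exp(-((k - k0) ** 2) / (2.0 * dk ** 2))    # Gaussian amplitudes A(k)

x = np.linspace(-2.0, 2.0, 2001)
# Superpose the plane waves exp(i k x) with weights A(k), a discretized
# inverse Fourier transform, and examine the envelope |psi(x)|.
psi = (np.exp(1j * np.outer(x, k)) * A).sum(axis=1)
envelope = np.abs(psi) / np.abs(psi).max()

# Half width at half maximum of the envelope: of order 1/dk, as expected
# from the inverse relation between spatial width and bandwidth.
half_width = np.ptp(x[envelope > 0.5]) / 2.0
print(f"envelope half-width ~ {half_width:.3f}  (compare 1/dk = {1 / dk:.3f})")
```

The printed half-width comes out on the order of 1/dk, a numerical restatement of the bandwidth relation quoted in the opening paragraph of this article.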
Wave packet
[ "Physics" ]
3,810
[ "Physical phenomena", "Theoretical physics", "Quantum mechanics", "Classical mechanics", "Waves", "Wave mechanics" ]
453,420
https://en.wikipedia.org/wiki/Directed-energy%20weapon
A directed-energy weapon (DEW) is a ranged weapon that damages its target with highly focused energy without a solid projectile, including lasers, microwaves, particle beams, and sound beams. Potential applications of this technology include weapons that target personnel, missiles, vehicles, and optical devices. In the United States, the Pentagon, DARPA, the Air Force Research Laboratory, United States Army Armament Research Development and Engineering Center, and the Naval Research Laboratory are researching directed-energy weapons to counter ballistic missiles, hypersonic cruise missiles, and hypersonic glide vehicles. These systems of missile defense are expected to come online no sooner than the mid to late-2020s. China, France, Germany, the United Kingdom, Russia, India, Israel, and Pakistan are also developing military-grade directed-energy weapons, while Iran and Turkey claim to have them in active service. The first use of directed-energy weapons in combat between military forces was claimed to have occurred in Libya in August 2019 by Turkey, which claimed to use the ALKA directed-energy weapon. After decades of research and development, most directed-energy weapons are still at the experimental stage and it remains to be seen if or when they will be deployed as practical, high-performance military weapons. Operational advantages Directed energy weapons could have several main advantages over conventional weaponry: Directed-energy weapons can be used discreetly; radiation does not generate sound and is invisible if outside the visible spectrum.<ref name="Defence IQ Press">"Defence IQ talks to Dr Palíšek about Directed Energy Weapon systems", Defence iQ', Nov. 20, 2012</ref> Light is, for practical purposes, unaffected by gravity, windage and Coriolis force, giving it an almost perfectly flat trajectory. This makes aim much more precise and extends the range to line-of-sight, limited only by beam diffraction and spread (which dilute the power and weaken the effect), and absorption or scattering by intervening atmospheric contents. Lasers travel at light-speed and have long range, making them suitable for use in space warfare. Laser weapons potentially eliminate many logistical problems in terms of ammunition supply, as long as there is enough energy to power them. Depending on several operational factors, directed-energy weapons may be cheaper to operate than conventional weapons in certain contexts. Use of high-powered microwave weapons, which are typically used to degrade and damage electronics such as drones, can be hard to attribute to a particular actor. Types Microwave Some devices are described as microwave weapons; the microwave frequency is commonly defined as being between 300 MHz and 300 GHz (wavelengths of 1 meter to 1 millimeter), which is within the radiofrequency (RF) range. Some examples of weapons which have been publicized by the military are as follows: Active Denial System Active Denial System is a millimeter wave source that heats the water in a human target's skin and thus causes incapacitating pain. It was developed by the U.S. Air Force Research Laboratory and Raytheon for riot-control duty. Though intended to cause severe pain while leaving no lasting damage, concern has been voiced as to whether the system could cause irreversible damage to the eyes. There has yet to be testing for long-term side effects of exposure to the microwave beam. It can also destroy unshielded electronics. 
Vigilant Eagle Vigilant Eagle is a ground-based airport defense system that directs high-frequency microwaves towards any projectile that is fired at an aircraft. It was announced by Raytheon in 2005, and its waveforms were reported, in field tests, to be highly effective in defeating MANPADS missiles. The system consists of a missile-detecting and tracking subsystem (MDT), a command and control system, and a scanning array. The MDT is a fixed grid of passive infrared (IR) cameras. The command and control system determines the missile launch point. The scanning array projects microwaves that disrupt the surface-to-air missile's guidance system, deflecting it from the aircraft. Vigilant Eagle was not mentioned on Raytheon's Web site in 2022. Bofors HPM Blackout Bofors HPM Blackout is a high-powered microwave weapon that is said to be able to destroy at short distance a wide variety of commercial off-the-shelf (COTS) electronic equipment and is purportedly non-lethal. Magnus Karlsson (2009). "Bofors HPM Blackout". Artilleri-Tidskrift (2–2009): s. 12–15. Retrieved 2010-01-04. EL/M-2080 Green Pine The effective radiated power (ERP) of the EL/M-2080 Green Pine radar makes it a hypothetical candidate for conversion into a directed-energy weapon, by focusing pulses of radar energy on target missiles. The energy spikes are tailored to enter missiles through antennas or sensor apertures where they can fool guidance systems, scramble computer memories or even burn out sensitive electronic components. Active electronically scanned array AESA radars mounted on fighter aircraft have been slated as directed energy weapons against missiles; however, a senior US Air Force officer noted: "they aren't particularly suited to create weapons effects on missiles because of limited antenna size, power and field of view". Potentially lethal effects are produced only inside 100 meters range, and disruptive effects at distances on the order of one kilometer. Moreover, cheap countermeasures can be applied to existing missiles. Anti-drone rifle A weapon often described as an "anti-drone rifle" or "anti-drone gun" is a battery-powered electromagnetic pulse weapon held to an operator's shoulder, pointed at a flying target in a way similar to a rifle, and operated. While not a rifle or gun, it is so nicknamed as it is handled in the same way as a personal rifle. The device emits separate electromagnetic pulses to suppress navigation and transmission channels used to operate an aerial drone, terminating the drone's contact with its operator; the out-of-control drone then crashes. The Russian Stupor is reported to have a range of two kilometers, covering a 20-degree sector; it also suppresses the drone's cameras. Stupor is reported to have been used by Russian forces during the Russian military intervention in the Syrian civil war. Both Russia and Ukraine are reported to use these devices during the 2022 Russian invasion of Ukraine. The Ukrainian army is reported to use the Ukrainian KVS G-6, with a 3.5 km range and the ability to operate continuously for 30 minutes. The manufacturer states that the weapon can disrupt remote control, the transmission of video at 2.4 and 5 GHz, and GPS and Glonass satellite navigation signals. Ukraine has also used the EDM4S anti-drone rifle to shoot down Russian Eleron-3 drones. Due to the threat posed by drones in regard to terrorism, several police forces have carried anti-drone guns as part of their equipment. 
For example, during the policing of the Commonwealth Games in 2018, the Australian Queensland Police Service carried anti-drone guns with an effective range of . In Myanmar, police have been equipped with anti-drone guns "ostensibly to defend VIPs". Counter-electronics High Power Microwave Advanced Missile Project THOR/Mjolnir Radio Frequency Directed Energy Weapon (RFDEW) This UK-developed system was unveiled in May 2024 and uses radio waves to fry the electronic components of its targets, rendering them inoperable. It is capable of engaging multiple targets, including drone swarms, and reportedly costs less than 10 pence (13 cents) per shot, making it a cheaper alternative to traditional missile-based air defense systems. Laser A laser weapon is a directed-energy weapon based on lasers. DragonFire An example of a laser directed-energy weapon is the DragonFire currently being developed by the United Kingdom. It is reportedly in the 50 kW class and is capable of engaging any target within line-of-sight at a currently classified range. It has been tested against drones and mortar rounds and is expected to equip ships, aircraft and ground vehicles from 2027. Particle-beam Particle-beam weapons can use charged or neutral particles, and can be either endoatmospheric or exoatmospheric. Particle beams as beam weapons are theoretically possible, but practical weapons have not been demonstrated yet. Certain types of particle beams have the advantage of being self-focusing in the atmosphere. Blooming is also a problem in particle-beam weapons. Energy that would otherwise be focused on the target spreads out and the beam becomes less effective: Thermal blooming occurs in both charged and neutral particle beams, and occurs when particles bump into one another under the effects of thermal vibration, or bump into air molecules. Electrical blooming occurs only in charged particle beams, as ions of like charge repel one another. Plasma Plasma weapons fire a beam, bolt, or stream of plasma, which is an excited state of matter consisting of atomic electrons and nuclei, and free electrons if ionized, or other particles if pinched. The MARAUDER (Magnetically Accelerated Ring to Achieve Ultra-high Directed-Energy and Radiation) used the Shiva Star project (a high energy capacitor bank which provided the means to test weapons and other devices requiring brief and extremely large amounts of energy) to accelerate a toroid of plasma at a significant percentage of the speed of light. Additionally, the Russian Federation claims to be developing various plasma weapons. Sonic Long Range Acoustic Device (LRAD) The Long Range Acoustic Device (LRAD) is an acoustic hailing device developed by Genasys (formerly LRAD Corporation) to send messages and warning tones over longer distances or at higher volume than normal loudspeakers, and as a non-lethal directed-acoustic-energy weapon. LRAD systems are used for long-range communications in a variety of applications and as a means of non-lethal, non-projectile crowd control. They are also used on ships as an anti-piracy measure. According to the manufacturer's specifications, the systems weigh from and can emit sound in a 30°- 60° beam at 2.5 kHz. They range in size from small, portable handheld units which can be strapped to a person's chest, to larger models which require a mount. 
The power of the sound beam which LRADs produce is sufficient to penetrate vehicles and buildings while retaining a high degree of fidelity, so that verbal messages can be conveyed clearly in some situations. History Ancient Mirrors of Archimedes According to legend, Archimedes created a mirror with an adjustable focal length (or, more likely, a series of mirrors focused on a common point) to focus sunlight on ships of the Roman fleet as they invaded Syracuse, setting them on fire. Historians point out that the earliest accounts of the battle did not mention a "burning mirror", but merely stated that Archimedes's ingenuity combined with a way to hurl fire was relevant to the victory. Some attempts to replicate this feat have been partially successful; in particular, an experiment by students at MIT showed that a mirror-based weapon was at least possible, if not necessarily practical. The hosts of MythBusters tackled the Mirrors of Archimedes three times (in episodes 19, 57 and 172) and were never able to make the target ship catch fire, declaring the myth busted three separate times. 20th Century Robert Watson-Watt In 1935, the British Air Ministry asked Robert Watson-Watt of the Radio Research Station whether a "death ray" was possible. He and colleague Arnold Wilkins quickly concluded that it was not feasible, but as a consequence suggested using radio for the detection of aircraft, and this started the development of radar in Britain. The fictional "engine-stopping ray" Stories in the 1930s and World War II gave rise to the idea of an "engine-stopping ray". They seem to have arisen from the testing of the television transmitter in Feldberg, Germany. Because electrical noise from car engines would interfere with field strength measurements, sentries would stop all traffic in the vicinity for the twenty minutes or so needed for a test. Reversing the order of events in retelling the story created a tale in which a tourist's car engine stopped first, after which the tourists were approached by a German soldier who told them that they had to wait. The soldier returned a short time later to say that the engine would now work, and the tourists drove off. Such stories were circulating in Britain around 1938, and during the war British Intelligence relaunched the myth as a "British engine-stopping ray", trying to spoof the Germans into researching what the British had supposedly invented in an attempt to tie up German scientific resources. German World War II experimental weapons During the early 1940s Axis engineers developed a sonic cannon that could cause fatal vibrations in its target body. A methane gas combustion chamber leading to two parabolic dishes pulse-detonated at roughly 44 Hz. This sound, magnified by the dish reflectors, caused vertigo and nausea at by vibrating the middle ear bones and shaking the cochlear fluid within the inner ear. At distances of , the sound waves could act on organ tissues and fluids by repeatedly compressing and releasing compression-resistant organs such as the kidneys, spleen, and liver. (It had little detectable effect on malleable organs such as the heart, stomach and intestines.) Lung tissue was affected at only the closest ranges, as atmospheric air is highly compressible and only the blood-rich alveoli resist compression. In practice, the weapon was highly vulnerable to enemy fire. Rifle, bazooka and mortar rounds easily deformed the parabolic reflectors, rendering the wave amplification ineffective. 
In the later phases of World War II, Nazi Germany increasingly put its hopes on research into technologically revolutionary secret weapons, the Wunderwaffe. Among the directed-energy weapons the Nazis investigated were X-ray beam weapons developed under Heinz Schmellenmeier, Richard Gans and Fritz Houtermans. They built an electron accelerator called Rheotron to generate hard X-ray synchrotron beams for the Reichsluftfahrtministerium (RLM). Invented by Max Steenbeck at Siemens-Schuckert in the 1930s, these were later called betatrons by the Americans. The intent was to pre-ionize ignition in aircraft engines and hence serve as an anti-aircraft DEW, bringing planes down into the reach of the flak. The Rheotron was captured by the Americans in Burggrub on April 14, 1945. Another approach was Ernst Schiebold's 'Röntgenkanone', developed from 1943 in Großostheim near Aschaffenburg; Richert Seifert & Co from Hamburg delivered parts. Reported use in Sino-Soviet conflicts The Central Intelligence Agency informed Secretary Henry Kissinger that it had twelve reports of Soviet forces using laser weapons against Chinese forces during the 1969 Sino-Soviet border clashes, though William Colby doubted that they had actually been employed. Northern Ireland "squawk box" field trials In 1973, New Scientist magazine reported that a sonic weapon known as a "squawk box" underwent successful field trials in Northern Ireland, using soldiers as guinea pigs. The device combined two slightly different frequencies, which the ear perceives as the sum of the two frequencies (ultrasonic) and the difference between them (infrasonic); for example, two directional speakers emitting 16,000 Hz and 16,002 Hz would produce in the ear two frequencies of 32,002 Hz and 2 Hz. The article states: "The squawk box is highly directional which gives it its appeal. Its effective beam width is so small that it can be directed at individuals in a riot. Other members of a crowd are unaffected, except by panic when they see people fainting, being sick, or running from the scene with their hands over their ears. The virtual inaudibility of the equipment is said to produce a 'spooky' psychological effect." The UK's Ministry of Defence denied the existence of such a device. It stated that it did have, however, an "ultra-loud public address system which [...] could be 'used for verbal communication over two miles, or put out a sustained or modulated sound blanket to make conversation, and thus crowd organisation, impossible.'" East German "decomposition" methods In East Germany in the 1960s, the state security service, the Stasi, attempted alternative methods of repression which could paralyze people without imprisoning them, in an effort to avoid international condemnation for arresting and interrogating people who held politically incorrect views or performed actions deemed hostile by the state. One such alternative method was called decomposition (German: Zersetzung). In the 1970s and 1980s it became the primary method of repressing domestic "hostile-negative" forces. Some of the victims of this method suffered from cancer and claimed that they had also been targeted with directed X-rays. In addition, when the East German state collapsed, powerful X-ray equipment was found in prisons without there being any apparent reason to justify its presence. 
In 1999, the modern German state was investigating the possibility that this X-ray equipment was being used as weaponry and that it was a deliberate policy of the Stasi to attempt to give prisoners radiation poisoning, and thereby cancer, through the use of directed X-rays. The negative effects of the radiation poisoning and cancer would extend past the period of incarceration. In this manner someone could be debilitated even though they were no longer imprisoned. The historian Mary Fulbrook states, Strategic Defense Initiative In the 1980s, U.S. President Ronald Reagan proposed the Strategic Defense Initiative (SDI) program, which was nicknamed Star Wars. It suggested that lasers, perhaps space-based X-ray lasers, could destroy ICBMs in flight. Panel discussions on the role of high-power lasers in SDI took place at various laser conferences, during the 1980s, with the participation of noted physicists including Edward Teller.Duarte, F. J. (Ed.), Proceedings of the International Conference on Lasers '87 (STS, McLean, Va, 1988). A notable example of a directed energy system which came out of the SDI program is the Neutral Particle Beam Accelerator developed by Los Alamos National Laboratory. This system is officially described (on the Smithsonian Air and Space Museum website) as a low power neutral particle beam (NPB) accelerator, which was among several directed energy weapons examined by the Strategic Defense Initiative Organization for potential use in missile defense. In July 1989, the accelerator was launched from White Sands Missile Range as part of the Beam Experiment Aboard Rocket (BEAR) project, reaching an altitude of 200 kilometers (124 miles) and operating successfully in space before being recovered intact after reentry. The primary objectives of the test were to assess NPB propagation characteristics in space and gauge the effects on spacecraft components. Despite continued research into NPBs, no known weapon system utilizing this technology has been deployed. Though the strategic missile defense concept has continued to the present under the Missile Defense Agency, most of the directed-energy weapon concepts were shelved. However, Boeing has been somewhat successful with the Boeing YAL-1 and Boeing NC-135, the first of which destroyed two missiles in February 2010. Funding has been cut to both of the programs. Iraq War During the Iraq War, electromagnetic weapons, including high power microwaves, were used by the U.S. military to disrupt and destroy Iraqi electronic systems and may have been used for crowd control. Types and magnitudes of exposure to electromagnetic fields are unknown. Alleged tracking of Space Shuttle Challenger The Soviet Union invested some effort in the development of ruby and carbon dioxide lasers as anti-ballistic missile systems, and later as a tracking and anti-satellite system. There are reports that the Terra-3 complex at Sary Shagan was used on several occasions to temporarily "blind" US spy satellites in the IR range. It has been claimed that the USSR made use of the lasers at the Terra-3 site to target the Space Shuttle Challenger in 1984. At the time, the Soviet Union was concerned that the shuttle was being used as a reconnaissance platform. On 10 October 1984 (STS-41-G), the Terra-3 tracking laser was allegedly aimed at Challenger as it passed over the facility. 
Early reports claimed that this was responsible for causing "malfunctions on the space shuttle and distress to the crew", and that the United States filed a diplomatic protest about the incident. However, this story is comprehensively denied by the crew members of STS-41-G and knowledgeable members of the US intelligence community. After the end of the Cold War, the Terra-3 facility was found to be a low-power laser testing site with limited satellite tracking capabilities, which is now abandoned and partially disassembled. Modern 21st-century use Havana syndrome Havana syndrome is a disputed medical condition reported by US personnel in Havana, Cuba and other locations, originally suspected to be caused by microwave radiation. In January 2022, the Central Intelligence Agency issued an interim assessment concluding that the syndrome is not the result of "a sustained global campaign by a hostile power." Foreign involvement was ruled out in 976 cases of the 1,000 reviewed. In February 2022, the State Department released a report by the JASON Advisory Group, which stated that it was highly unlikely that a directed-energy attack had caused the health incidents. The cause of Havana syndrome remains unknown and controversial. Anti-piracy measures LRADs are often fitted on commercial and military ships. They have been used on several occasions to repel pirate attacks by sending warnings and by producing intolerable levels of sound. For example, in 2005 the cruise liner Seabourn Spirit used a sonic weapon to defend itself from Somali pirates in the Indian ocean. A few years later, the cruise liner Spirit of Adventure also defended itself from Somali pirates by using its LRAD to force them to retreat. Non-lethal weapon capability The TECOM Technology Symposium in 1997 concluded on non-lethal weapons, "determining the target effects on personnel is the greatest challenge to the testing community", primarily because "the potential of injury and death severely limits human tests". Also, "directed-energy weapons that target the central nervous system and cause neurophysiological disorders may violate the Certain Conventional Weapons Convention of 1980. Weapons that go beyond non-lethal intentions and cause 'superfluous injury or unnecessary suffering' may also violate the Protocol I to the Geneva Conventions of 1977." Some common bio-effects of non-lethal electromagnetic weapons include: Difficulty breathing Disorientation Nausea Pain Vertigo Other systemic discomfort Interference with breathing poses the most significant, potentially lethal results. Light and repetitive visual signals can induce epileptic seizures. Vection and motion sickness can also occur. Russia has reportedly been using blinding laser weapons during the Russo-Ukrainian War. See also Electronic warfare Electromagnetic pulse Ivan's hammer L3Harris Technologies Laser applications MEDUSA (weapon) Notes References The E-Bomb: How America's New Directed Energy Weapons Will Change the Way Future Wars Will Be Fought. Doug Beason (2005). . US claims that China has used high-energy lasers to interfere with US satellites: Jane's Defence Weekly, 18 October 2006 China jamming test sparks U.S. satellite concerns: USA Today Beijing secretly fires lasers to disable US satellites: The Daily Telegraph China Attempted To Blind U.S. 
Satellites With Laser: Defense News China Has Not Attacked US Satellites Says DoD: United Press International "China Has Not Attacked US Satellites Says DoD": Space Daily'' External links Airpower Australia Applied Energetics – Photonic and high-voltage energetics (formerly Ionatron) Wired (AP) article on weapons deployment in Iraq, Active Denial System and Stunstrike, July 10, 2005 Boeing Tests Laser-Mounted Humvee as IED Hunter, November 13, 2007 WSTIAC Quarterly, Vol. 7, No. 1 – "Directed Energy Weapons" Ogonek Report on '21st Century Weapons' "How 'Revolutionary' Is CHAMP, New Air Force Microwave Weapon?", November 28, 2012 by David Axe Electromagnetic radiation Non-lethal weapons
Directed-energy weapon
[ "Physics" ]
4,975
[ "Electromagnetic radiation", "Physical phenomena", "Radiation" ]
454,450
https://en.wikipedia.org/wiki/Analytical%20mechanics
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion, particularly accelerations, momenta, forces, of the constituents of the system; it can also be called vectorial mechanics. A scalar is a quantity, whereas a vector is represented by quantity and direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems. Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in the context as generalized coordinates. The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up, thus analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics. Two dominant branches of analytical mechanics are Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and with some modifications, quantum mechanics and quantum field theory. Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory. The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom. The definitions and equations have a close analogy with those of mechanics. Motivation for analytical mechanics The goal of mechanical theory is to solve mechanical problems, such as arise in physics and engineering. 
Starting from a physical system, such as a mechanism or a star system, a mathematical model is developed in the form of a differential equation. The model can be solved numerically or analytically to determine the motion of the system. Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, and acceleration. These quantities characterise the motion of a body idealised as a "mass point" or a "particle", understood as a single point to which a mass is attached. Newton's method has been successfully applied to a wide range of physical problems, including the motion of a particle in Earth's gravitational field and the motion of planets around the Sun. In this approach, Newton's laws describe the motion by a differential equation, and the problem is then reduced to solving that equation. When a mechanical system contains many particles, however (such as a complex mechanism or a fluid), Newton's approach is difficult to apply. Using a Newtonian approach is possible, under proper precautions, namely isolating each single particle from the others and determining all the forces acting on it. Such analysis is cumbersome even in relatively simple systems. Newton thought that his third law "action equals reaction" would take care of all complications. This is false even for such a simple system as the rotation of a solid body. In more complicated systems, the vectorial approach cannot give an adequate description. The analytical approach simplifies problems by treating mechanical systems as ensembles of particles that interact with each other, rather than considering each particle as an isolated unit. In the vectorial approach, forces must be determined individually for each particle, whereas in the analytical approach it is enough to know one single function which contains implicitly all the forces acting on and in the system. Such simplification is often done using certain kinematic conditions which are stated a priori. However, the analytical treatment does not require the knowledge of these forces and takes these kinematic conditions for granted. Still, deriving the equations of motion of a complicated mechanical system requires a unifying basis from which they follow. This is provided by various variational principles: behind each set of equations there is a principle that expresses the meaning of the entire set. Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations. The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion. It is not altogether clear what is meant by 'solving' a set of differential equations. A problem is regarded as solved when the particles' coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (an elementary function), as in the time of Newton, but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions. 
If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because given the initial conditions and t determine the coordinates at t. This is a fact especially at present with the modern methods of computer modelling which provide arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations. Still, though lacking precise definitions, it is obvious that the two-body problem has a simple solution, whereas the three-body problem has not. The two-body problem is solved by formulas involving parameters; their values can be changed to study the class of all solutions, that is, the mathematical structure of the problem. Moreover, an accurate mental or drawn picture can be made for the motion of two bodies, and it can be as real and accurate as the real bodies moving and interacting. In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values or a collection of such solutions does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves. Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable and that include a very wide range of problems indeed. Development of analytical mechanics has two objectives: (i) increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) understand the mathematical structure of mechanics. In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed. Intrinsic motion Generalized coordinates and constraints In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or other 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded, as the constraints determine the evolving relations among the coordinates, which relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted qi (i = 1, 2, 3...). Difference between curvillinear and generalized coordinates Generalized coordinates incorporate constraints on the system. There is one generalized coordinate qi for each degree of freedom (for convenience labelled by an index i = 1, 2...N), i.e. each way the system can change its configuration; as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates. 
The number of curvilinear coordinates equals the dimension of the position space in question (usually 3 for 3d space), while the number of generalized coordinates is not necessarily equal to this dimension; constraints can reduce the number of degrees of freedom (hence the number of generalized coordinates required to define the configuration of the system), following the general rule: For a system with N degrees of freedom, the generalized coordinates can be collected into an N-tuple: and the time derivative (here denoted by an overdot) of this tuple give the generalized velocities: D'Alembert's principle of virtual work D'Alembert's principle states that infinitesimal virtual work done by a force across reversible displacements is zero, which is the work done by a force consistent with ideal constraints of the system. The idea of a constraint is useful – since this limits what the system can do, and can provide steps to solving for the motion of the system. The equation for D'Alembert's principle is: where are the generalized forces (script Q instead of ordinary Q is used here to prevent conflict with canonical transformations below) and are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics: where T is the total kinetic energy of the system, and the notation is a useful shorthand (see matrix calculus for this notation). Constraints If the curvilinear coordinate system is defined by the standard position vector , and if the position vector can be written in terms of the generalized coordinates and time in the form: and this relation holds for all times , then are called holonomic constraints. Vector is explicitly dependent on in cases when the constraints vary with time, not just because of . For time-independent situations, the constraints are also called scleronomic, for time-dependent cases they are called rheonomic. Lagrangian mechanics The introduction of generalized coordinates and the fundamental Lagrangian function: where T is the total kinetic energy and V is the total potential energy of the entire system, then either following the calculus of variations or using the above formula – lead to the Euler–Lagrange equations; which are a set of N second-order ordinary differential equations, one for each qi(t). This formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of kinetic energy is least, assuming the total energy to be fixed, and imposing no conditions on the time of transit. The Lagrangian formulation uses the configuration space of the system, the set of all possible generalized coordinates: where is N-dimensional real space (see also set-builder notation). The particular solution to the Euler–Lagrange equations is called a (configuration) path or trajectory, i.e. one particular q(t) subject to the required initial conditions. The general solutions form a set of possible configurations as functions of time: The configuration space can be defined more generally, and indeed more deeply, in terms of topological manifolds and the tangent bundle. 
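In standard notation, a minimal sketch of the formulas this passage describes (using the symbols T, V, L, qi and the generalized forces defined above; these are the conventional textbook forms, given here because the surrounding text states them only in words) is:

$$\mathcal{Q}_i = \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial T}{\partial \dot{q}_i}\right) - \frac{\partial T}{\partial q_i}, \qquad L(\mathbf{q}, \dot{\mathbf{q}}, t) = T - V, \qquad \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = 0, \quad i = 1, \dots, N.$$

As a simple worked example (a plane pendulum, assumed here rather than drawn from the article's own figures): a bob of mass m on a rigid rod of length l obeys the constraint x² + y² = l², so a single generalized coordinate θ suffices; with L = ½ml²θ̇² + mgl cos θ, the Euler–Lagrange equation gives ml²θ̈ + mgl sin θ = 0, i.e. θ̈ = −(g/l) sin θ.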
Hamiltonian mechanics The Legendre transformation of the Lagrangian replaces the generalized coordinates and velocities (q, q̇) with (q, p); the generalized coordinates and the generalized momenta conjugate to the generalized coordinates: and introduces the Hamiltonian (which is in terms of generalized coordinates and momenta): where denotes the dot product, also leading to Hamilton's equations: which are now a set of 2N first-order ordinary differential equations, one for each qi(t) and pi(t). Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian: which is often considered one of Hamilton's equations of motion additionally to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law: Analogous to the configuration space, the set of all momenta is the generalized momentum space: ("Momentum space" also refers to "k-space"; the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and theory of waves) The set of all positions and momenta form the phase space: that is, the Cartesian product of the configuration space and generalized momentum space. A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait: The Poisson bracket All dynamical variables can be derived from position q, momentum p, and time t, and written as a function of these: A = A(q, p, t). If A(q, p, t) and B(q, p, t) are two scalar valued dynamical variables, the Poisson bracket is defined by the generalized coordinates and momenta: Calculating the total derivative of one of these, say A, and substituting Hamilton's equations into the result leads to the time evolution of A: This equation in A is closely related to the equation of motion in the Heisenberg picture of quantum mechanics, in which classical dynamical variables become quantum operators (indicated by hats (^)), and the Poisson bracket is replaced by the commutator of operators via Dirac's canonical quantization: Properties of the Lagrangian and the Hamiltonian Following are overlapping properties between the Lagrangian and Hamiltonian functions. All the individual generalized coordinates qi(t), velocities q̇i(t) and momenta pi(t) for every degree of freedom are mutually independent. Explicit time-dependence of a function means the function actually includes time t as a variable in addition to the q(t), p(t), not simply as a parameter through q(t) and p(t), which would mean explicit time-independence. The Lagrangian is invariant under addition of the total time derivative of any function of q and t, that is: so each Lagrangian L and L describe exactly the same motion. In other words, the Lagrangian of a system is not unique. Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of q, p and t, that is: (K is a frequently used letter in this case). This property is used in canonical transformations (see below). If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved, this immediately follows from Lagrange's equations: Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates. 
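A minimal sketch of the Hamiltonian-mechanics relations just described, written with the standard conventions (same symbols as the surrounding text; the forms below are the usual textbook ones):

$$p_i = \frac{\partial L}{\partial \dot{q}_i}, \qquad H(\mathbf{q}, \mathbf{p}, t) = \sum_{i=1}^{N} p_i \dot{q}_i - L, \qquad \dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i}, \qquad \frac{\partial H}{\partial t} = -\frac{\partial L}{\partial t},$$

$$\{A, B\} = \sum_{i=1}^{N}\left(\frac{\partial A}{\partial q_i}\frac{\partial B}{\partial p_i} - \frac{\partial A}{\partial p_i}\frac{\partial B}{\partial q_i}\right), \qquad \frac{\mathrm{d}A}{\mathrm{d}t} = \{A, H\} + \frac{\partial A}{\partial t}.$$

In particular, if L does not depend on some coordinate qi, the Euler–Lagrange equation gives ṗi = ∂L/∂qi = 0, so the conjugate momentum pi is a constant of the motion; this is the cyclic-coordinate property stated above.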
If the Lagrangian is time-independent, the Hamiltonian is also time-independent (i.e. both are constant in time). If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities (that is, T(λq̇) = λ²T(q̇), where λ is a constant), and the Lagrangian is explicitly time-independent, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system: This is the basis of the Schrödinger equation, which is obtained by inserting quantum operators directly. Principle of least action Action is another quantity in analytical mechanics, defined as a functional of the Lagrangian: A general way to find the equations of motion from the action is the principle of least action: where the departure t1 and arrival t2 times are fixed. The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space, in other words q(t) tracing out a path in the configuration space. The path for which the action is least is the path taken by the system. From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), underlies the path integral formulation of quantum mechanics (Quantum Field Theory, D. McMahon, McGraw-Hill (US), 2008), and is used for calculating geodesic motion in general relativity. Hamilton–Jacobi mechanics Canonical transformations The invariance of the Hamiltonian (under addition of the partial time derivative of an arbitrary function of p, q, and t) allows the Hamiltonian in one set of coordinates q and momenta p to be transformed into a new set Q = Q(q, p, t) and P = P(q, p, t), in four possible ways: With the restriction on P and Q such that the transformed Hamiltonian system is: the above transformations are called canonical transformations; each function Gn is called a generating function of the "nth kind" or "type-n". The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem. The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation. One simple criterion for a transformation q → Q and p → P to be canonical is that the Poisson bracket be unity, for all i = 1, 2, ..., N. If this does not hold, then the transformation is not canonical. The Hamilton–Jacobi equation By setting the canonically transformed Hamiltonian K = 0, and the type-2 generating function equal to Hamilton's principal function (also called the action) plus an arbitrary constant C: the generalized momenta become: and P is constant, then the Hamilton–Jacobi equation (HJE) can be derived from the type-2 canonical transformation: where H is the Hamiltonian as before: Another related function is Hamilton's characteristic function, used to solve the HJE by additive separation of variables for a time-independent Hamiltonian H. The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields. Routhian mechanics Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ... qs with conjugate momenta p = p1, p2, ...
ps, with the rest of the coordinates non-cyclic and denoted ζ = ζ1, ζ1, ..., ζN − s, they can be removed by introducing the Routhian: which leads to a set of 2s Hamiltonian equations for the cyclic coordinates q, and N − s Lagrangian equations in the non cyclic coordinates ζ. Set up in this way, although the Routhian has the form of the Hamiltonian, it can be thought of a Lagrangian with N − s degrees of freedom. The coordinates q do not have to be cyclic, the partition between which coordinates enter the Hamiltonian equations and those which enter the Lagrangian equations is arbitrary. It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non cyclic coordinates to the Lagrangian equations of motion. Appellian mechanics Appell's equation of motion involve generalized accelerations, the second time derivatives of the generalized coordinates: as well as generalized forces mentioned above in D'Alembert's principle. The equations are where is the acceleration of the k particle, the second time derivative of its position vector. Each acceleration ak is expressed in terms of the generalized accelerations αr, likewise each rk are expressed in terms the generalized coordinates qr. Classical field theory Lagrangian field theory Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves: and the Euler–Lagrange equations have an analogue for fields: where ∂μ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second order partial differential equations in the fields, which in general will be coupled and nonlinear. This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields. The Lagrangian is the volume integral of the Lagrangian density:Gravitation, J.A. Wheeler, C. Misner, K.S. Thorne, W.H. Freeman & Co, 1973, Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations: such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation. Hamiltonian field theory The corresponding "momentum" field densities conjugate to the N scalar fields φi(r, t) are: where in this context the overdot denotes a partial time derivative, not a total time derivative. The Hamiltonian density is defined by analogy with mechanics: The equations of motion are: where the variational derivative must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear. Again, the volume integral of the Hamiltonian density is the Hamiltonian Symmetry, conservation, and Noether's theorem Symmetry transformations in classical space and time Each transformation can be described by an operator (i.e. function acting on the position r or momentum p variables to change them). The following are the cases when the operator does not change r or p, i.e. symmetries. where R(n̂, θ) is the rotation matrix about an axis defined by the unit vector n̂''' and angle θ. 
Noether's theorem Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law, i.e. the action (and hence the Lagrangian) does not change under a transformation parameterized by s: the Lagrangian describes the same motion independent of s, which can be a length, an angle of rotation, or a time. The momenta corresponding to q will be conserved. See also Lagrangian mechanics Hamiltonian mechanics Theoretical mechanics Classical mechanics Hamilton–Jacobi equation Hamilton's principle Kinematics Kinetics (physics) Non-autonomous mechanics Udwadia–Kalaba equation References and notes Mathematical physics Dynamical systems
Analytical mechanics
[ "Physics", "Mathematics" ]
5,000
[ "Applied mathematics", "Theoretical physics", "Mechanics", "Mathematical physics", "Dynamical systems" ]
454,896
https://en.wikipedia.org/wiki/Grignard%20reaction
The Grignard reaction () is an organometallic chemical reaction in which, according to the classical definition, alkyl, allyl, vinyl, or aryl magnesium halides (Grignard reagents) are added to the carbonyl groups of either an aldehyde or ketone under anhydrous conditions. This reaction is important for the formation of carbon–carbon bonds. History and definitions Grignard reactions and reagents were discovered by and are named after the French chemist François Auguste Victor Grignard (University of Nancy, France), who described them in 1900. He was awarded the 1912 Nobel Prize in Chemistry for this work. The reaction of an organic halide with magnesium is not a Grignard reaction, but provides a Grignard reagent. Classically, the Grignard reaction refers to the reaction of a ketone or aldehyde group with a Grignard reagent to form a tertiary or secondary alcohol, respectively. However, some chemists understand the definition to mean all reactions of any electrophiles with Grignard reagents, so there is some dispute about the modern definition of the Grignard reaction. In the Merck Index, published online by the Royal Society of Chemistry, the classical definition is acknowledged, followed by "A more modern interpretation extends the scope of the reaction to include the addition of Grignard reagents to a wide variety of electrophilic substrates." This variety of definitions illustrates the dispute within the chemistry community about the definition of a Grignard reaction. Shown below are some reactions involving Grignard reagents, but they themselves are not classically understood as Grignard reactions. Reaction mechanism Because carbon is more electronegative than magnesium, the carbon attached to magnesium acts as a nucleophile and attacks the electrophilic carbon atom in the polar bond of a carbonyl group. The addition of the Grignard reagent to the carbonyl group typically proceeds through a six-membered ring transition state, as shown below. Based on the detection of radical coupling side products, an alternative single electron transfer (SET) mechanism that involves the initial formation of a ketyl radical intermediate has also been proposed. A recent computational study suggests that the operative mechanism (polar vs. radical) is substrate-dependent, with the reduction potential of the carbonyl compound serving as a key parameter. Conditions The Grignard reaction is conducted under anhydrous conditions. Otherwise, the reaction will fail because the Grignard reagent will act as a base rather than a nucleophile and pick up a labile proton rather than attacking the electrophilic site. This will result in no formation of the desired product, as the R-group of the Grignard reagent will become protonated while the MgX portion will stabilize the deprotonated species. To prevent this, Grignard reactions are carried out under an inert atmosphere, with all water excluded from the reaction flask, to ensure that the desired product is formed. Additionally, if there are acidic protons in the starting material, as shown in the figure on the right, one can overcome this by protecting the acidic site of the reactant by turning it into an ether or a silyl ether, eliminating the labile proton from the solution prior to the Grignard reaction. Variants Other variations of the Grignard reagent have been discovered to improve the chemoselectivity of the Grignard reaction; these include, but are not limited to, Turbo-Grignards, organocerium reagents, and organocuprate (Gilman) reagents. 
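To make the classical outcomes described above concrete, a simplified, generic product sketch (hedged: R, R′ and R″ stand for arbitrary organic groups, and the aqueous work-up step is an assumption of this sketch rather than something stated in the text) is:

R−MgX + HCHO → R−CH2−OMgX → (aqueous work-up) → R−CH2−OH (primary alcohol)
R−MgX + R′CHO → R′−CH(R)−OMgX → (aqueous work-up) → R′−CH(R)−OH (secondary alcohol)
R−MgX + R′−CO−R″ → R′R″C(R)−OMgX → (aqueous work-up) → R′R″C(R)−OH (tertiary alcohol)

The Grignard addition itself must still be run under the anhydrous, inert-atmosphere conditions discussed above; water is introduced only at the work-up stage, after the carbon–carbon bond has formed.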
Turbo-Grignards Turbo-Grignards are Grignard reagents modified with lithium chloride. Compared to conventional Grignard reagents, Turbo-Grignards are more chemoselective; esters, amides, and nitriles do not react with the Turbo-Grignard reagent. Heterometal-modified Grignard reagents The behavior of Grignard reagents can be usefully modified in the presence of other metals. Copper(I) salts give organocuprates that preferentially effect 1,4-addition. Cerium trichloride allows selective 1,2-additions to the same substrates. Nickel and palladium halides catalyze cross-coupling reactions. See also Grignard reagent Wittig reaction Horner–Wadsworth–Emmons reaction Barbier reaction Bodroux–Chichibabin aldehyde synthesis Fujimoto–Belleau reaction Organolithium reagents Sakurai reaction Indium-mediated allylation Alkynylation References Organometallic chemistry Carbon-carbon bond forming reactions Magnesium Chemical tests Name reactions
Grignard reaction
[ "Chemistry" ]
991
[ "Carbon-carbon bond forming reactions", "Coupling reactions", "Chemical tests", "Organic reactions", "Name reactions", "Organometallic chemistry" ]
454,925
https://en.wikipedia.org/wiki/Slipstream
A slipstream is a region behind a moving object in which a wake of fluid (typically air or water) is moving at velocities comparable to that of the moving object, relative to the ambient fluid through which the object is moving. The term slipstream also applies to the similar region adjacent to an object with a fluid moving around it. "Slipstreaming" or "drafting" works because of the relative motion of the fluid in the slipstream. Overview A slipstream created by turbulent flow has a slightly lower pressure than the ambient fluid around the object. When the flow is laminar, the pressure behind the object is higher than the surrounding fluid. The shape of an object determines how strong the effect is. In general, the more aerodynamic an object is, the smaller and weaker its slipstream will be. For example, a box-like front (relative to the object's motion) will collide with the medium's particles at a high rate, transferring more momentum from the object to the fluid than a more aerodynamic object. A bullet-like profile will cause less turbulence and create a more laminar flow. A tapered rear will permit the particles of the medium to rejoin more easily and quickly than a truncated rear. This reduces lower-pressure effect in the slipstream, but also increases skin friction (in engineering designs, these effects must be balanced). Slipstreaming The term "slipstreaming" describes an object travelling inside the slipstream of another object (most often objects moving through the air though not necessarily flying). If an object is following another object, moving at the same speed, the rear object will require less power to maintain its speed than if it were moving independently. This technique, also called drafting, can be used by bicyclists. Following in the slipstream of another motor vehicle, or "drafting", allows for significantly improved fuel efficiency due to reduced atmospheric drag. Truck convoys are a common example, travelling highways in a single-file queue several vehicles long. In tests, this has been shown to produce significant fuel savings. Auto racing drivers also draft in order to conserve fuel, the better to gain competitive advantage by reducing the frequency of fuel stops or, more often, to reach a higher speed before pulling out to attempt to overtake another driver for example, a driver tries to overtake the leading driver so he follows the rear of the leading driver, the rear driver will gain slipstream causing the whole vehicle to gain more speed than the leading driver. A related effect used for lift rather than drag reduction is vortex surfing for airborne objects. The extended formations (V formation) or "skeins" in which many migratory birds (especially geese) fly enable the birds (except, of course, the bird at the front) to use vortex surfing to take advantage of one another's vortices. Other birds (for example cormorants) that typically fly in close formation, even on short journeys, are probably also exploiting this effect. Using wingtip devices to reduce induced drag caused by wingtip vortices has been tested for aircraft, and could save 10%–29% fuel. Spiral slipstream Spiral slipstream, also known as propwash, prop wash, or spiraling slipstream, is a spiral-shaped slipstream formed behind a rotating propeller on an aircraft. The most noticeable effect resulting from the formation of a spiral slipstream is the tendency to yaw nose-left at low speed and full throttle (in centerline tractor aircraft with a clockwise-rotating propeller.) 
This effect is caused by the slipstream acting upon the tail fin of the aircraft: The slipstream causes the air to rotate around the longitudinal axis of the aircraft, and this air flow exerts a force on the tail fin, pushing it to the right. To counteract this, some aircraft have the front of the fin (vertical stabilizer) slightly offset from the centreline so as to provide an opposing force that cancels out the one produced by the slipstream, albeit only at one particular (usually cruising) speed, an example being the Hawker Hurricane fighter from World War II. Propeller slipstream causes increased lift by increasing the airspeed over part of the wings. It also reduces the stall speed of the aircraft by energizing the flow over the wings. See also Drafting or slipstreaming as used in sports such as cycling and motor racing Peloton References Specific references General references Centennial of Flight Commission: diagram of the spiral slipstream Forces and Moments: Spiral Slipstream Aerodynamics Articles containing video clips
Slipstream
[ "Chemistry", "Engineering" ]
926
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
455,092
https://en.wikipedia.org/wiki/Sensitivity%20%28electronics%29
The sensitivity of an electronic device, such as a communications system receiver, or detection device, such as a PIN diode, is the minimum magnitude of input signal required to produce a specified output signal having a specified signal-to-noise ratio, or other specified criteria. In general, it is the signal level required for a particular quality of received information. In signal processing, sensitivity also relates to bandwidth and noise floor as is explained in more detail below. In the field of electronics different definitions are used for sensitivity. The IEEE dictionary states: "Definitions of sensitivity fall into two contrasting categories." It also provides multiple definitions relevant to sensors among which 1: "(measuring devices) The ratio of the magnitude of its response to the magnitude of the quantity measured.” and 2: "(radio receiver or similar device) Taken as the minimum input signal required to produce a specified output signal having a specified signal-to-noise ratio.”. The first of these definitions is similar to the definition of responsivity and as a consequence sensitivity is sometimes considered to be improperly used as a synonym for responsivity, and it is argued that the second definition, which is closely related to the detection limit, is a better indicator of the performance of a measuring system. To summarize, two contrasting definitions of sensitivity are used in the field of electronics Sensitivity first definition: the ratio between output and input signal, or the slope of the output versus input response curve of a transducer, microphone or sensor. An example is given in the section below on electroacoustics. Sensitivity second definition: the minimum magnitude of input signal required to produce an output signal with a specified signal-to-noise ratio of an instrument or sensor. Examples of the use of this definition are given in the sections below on receivers and electronic sensors. Electroacoustics The sensitivity of a microphone is usually expressed as the sound field strength in decibels (dB) relative to 1 V/Pa (Pa = N/m2) or as the transfer factor in millivolts per pascal (mV/Pa) into an open circuit or into a 1 kiloohm load. The sensitivity of a loudspeaker is usually expressed as dB / 2.83 VRMS at 1 metre. This is not the same as the electrical efficiency; see Efficiency vs sensitivity. The sensitivity of a hydrophone is usually expressed as dB relative to 1 V/μPa. This is an example where sensitivity is defined as the ratio of the sensor's response to the quantity measured. One should realize that when using this definition to compare sensors, the sensitivity of the sensor might depend on components like output voltage amplifiers, that can increase the sensor response such that the sensitivity is not a pure figure of merit of the sensor alone, but of the combination of all components in the signal path from input to response. Receivers Sensitivity in a receiver, such a radio receiver, indicates its capability to extract information from a weak signal, quantified as the lowest signal level that can be useful. It is mathematically defined as the minimum input signal required to produce a specified signal-to-noise S/N ratio at the output port of the receiver and is defined as the mean noise power at the input port of the receiver times the minimum required signal-to-noise ratio at the output of the receiver: where = sensitivity [W] = Boltzmann constant = equivalent noise temperature in [K] of the source (e.g. 
antenna) at the input of the receiver = equivalent noise temperature in [K] of the receiver referred to the input of the receiver = bandwidth [Hz] = required SNR at output [-] The same formula can also be expressed in terms of the noise factor of the receiver as where = noise factor = input noise power = required SNR at output. Because receiver sensitivity indicates how faint an input signal can be to be successfully received by the receiver, the lower the power level, the better. Lower power for a given S/N ratio means better sensitivity, since the receiver's contribution is smaller. When the power is expressed in dBm, the larger the absolute value of the negative number, the better the receive sensitivity. For example, a receiver sensitivity of −98 dBm is better than a receive sensitivity of −95 dBm by 3 dB, or a factor of two. In other words, at a specified data rate, a receiver with a −98 dBm sensitivity can hear signals that are half the power of those heard by a receiver with a −95 dBm receiver sensitivity. Electronic Sensors For electronic sensors the input signal can be of many types, like position, force, acceleration, pressure, or magnetic field. The output signal for an electronic analog sensor is usually a voltage or a current signal. The responsivity of an ideal linear sensor in the absence of noise is defined as the ratio of the output signal to the input signal, whereas for nonlinear sensors it is defined as the local slope of the output-versus-input curve. In the absence of noise and signals at the input, the sensor is assumed to generate a constant intrinsic output noise. To reach a specified signal-to-noise ratio at the output, one combines these equations and obtains the following idealized expression for the sensitivity, which is equal to the value of the input signal that results in the specified signal-to-noise ratio at the output: the intrinsic output noise times the required signal-to-noise ratio, divided by the responsivity. This equation shows that sensor sensitivity can be decreased (i.e., improved) either by reducing the intrinsic noise of the sensor or by increasing its responsivity. This is an example of a case where sensitivity is defined as the minimum input signal required to produce a specified output signal having a specified signal-to-noise ratio. This definition has the advantage that the sensitivity is closely related to the detection limit of a sensor if the minimum detectable SNRo is specified (SNR). The choice of the SNRo used in the definition of sensitivity depends on the required confidence level for a signal to be reliably detected (confidence (statistics)), and typically lies between 1 and 10. The sensitivity depends on parameters like bandwidth BW or integration time τ=1/(2BW) (as explained under noise-equivalent power, NEP), because the noise level can be reduced by signal averaging, usually resulting in a reduction of the noise amplitude with the square root of the integration time over which the signal is averaged. A measure of sensitivity independent of bandwidth can be provided by using the amplitude or power spectral density of the noise and/or signals in the definition, with units like m/√Hz, N/√Hz, W/Hz or V/√Hz. For a white noise signal over the sensor bandwidth, its power spectral density can be determined by dividing the total noise power (over the full bandwidth) by the bandwidth. Its amplitude spectral density is the square root of this value. Note that in signal processing the words energy and power are also used for quantities that do not have the unit watt (Energy (signal processing)). In some instruments, like spectrum analyzers, an SNRo of 1 at a specified bandwidth of 1 Hz is assumed by default when defining their sensitivity. 
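As a hedged, concrete illustration of the receiver-sensitivity relation described above (sensitivity = k·(Ta + Tr)·B·SNR, with the symbols as defined in the text), the short Python sketch below computes a sensitivity in watts and converts it to dBm. The numerical values are illustrative assumptions, not figures taken from this article.

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def receiver_sensitivity_dbm(t_antenna_k, t_receiver_k, bandwidth_hz, snr_required):
    """Minimum input signal power (in dBm) needed to reach the required
    output SNR, using S = k * (Ta + Tr) * B * SNR as described above."""
    s_watts = K_BOLTZMANN * (t_antenna_k + t_receiver_k) * bandwidth_hz * snr_required
    return 10.0 * math.log10(s_watts / 1e-3)  # convert watts to dBm

# Illustrative example: 290 K antenna and receiver noise temperatures,
# 1 MHz bandwidth, required output SNR of 10 (10 dB).
print(round(receiver_sensitivity_dbm(290.0, 290.0, 1e6, 10.0), 1))  # ≈ -101.0 dBm
```

The same pattern applies to the electronic-sensor definition above: under the idealized linear model, the sensitivity is the intrinsic output noise multiplied by the required signal-to-noise ratio and divided by the responsivity, so halving the noise or doubling the responsivity halves (improves) the sensitivity.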
For instruments that measure power, which also includes photodetectors, this results in the sensitivity becoming equal to the noise-equivalent power and for other instruments it becomes equal to the noise-equivalent-input . A lower value of the sensitivity corresponds to better performance (smaller signals can be detected), which seems contrary to the common use of the word sensitivity where higher sensitivity corresponds to better performance. It has therefore been argued that it is preferable to use detectivity, which is the reciprocal of the noise-equivalent input, as a metric for the performance of detectors . As an example, consider a piezoresistive force sensor through which a constant current runs, such that it has a responsivity . The Johnson noise of the resistor generates a noise amplitude spectral density of . For a specified SNRo of 1, this results in a sensitivity and noise-equivalent input of and a detectivity of , such that an input signal of 10 nN generates the same output voltage as the noise does over a bandwidth of 1 Hz. References External links Microphone sensitivity conversion from dB at 1 V/Pa to transfer factor in mV/Pa Electrical parameters Microphone technology
Sensitivity (electronics)
[ "Engineering" ]
1,681
[ "Electrical engineering", "Electrical parameters" ]
455,626
https://en.wikipedia.org/wiki/Ratchet%20%28device%29
A ratchet (occasionally spelled rachet) is a mechanical device that allows continuous linear or rotary motion in only one direction while preventing motion in the opposite direction. Ratchets are widely used in machinery and tools. The word ratchet is also used informally to refer to a ratcheting socket wrench. Theory of operation A ratchet consists of a round gear or a linear rack with teeth, and a pivoting, spring-loaded finger called a pawl (or click, in clocks and watches) that engages the teeth. The teeth are uniform but are usually asymmetrical, with each tooth having a moderate slope on one edge and a much steeper slope on the other edge. When the teeth are moving in the unrestricted (i.e. forward) direction, the pawl easily slides up and over the gently sloped edges of the teeth, with a spring forcing it (often with an audible 'click') into the depression between the teeth as it passes the tip of each tooth. When the teeth move in the opposite (backward) direction, however, the pawl will catch against the steeply sloped edge of the first tooth it encounters, thereby locking it against the tooth and preventing any further motion in that direction. Backlash Because the ratchet can only stop backward motion at discrete points (i.e., at tooth boundaries), a ratchet does allow a limited amount of backward motion. This backward motion—which is limited to a maximum distance equal to the spacing between the teeth—is called backlash. In cases where backlash must be minimized, a smooth, toothless ratchet with a high friction surface such as rubber is sometimes used. The pawl bears against the surface at an angle so that any backward motion will cause the pawl to jam against the surface and thus prevent any further backward motion. Since the backward travel distance is primarily a function of the compressibility of the high friction surface, this mechanism can result in significantly reduced backlash. Uses Ratchet mechanisms are used in a wide variety of applications, including these: Cable ties Capstans Caulking guns Clocks Computer keyboards Freewheel (overrunning clutch) Grease guns Handcuffs Jacks Anti-rollback devices used in roller coasters Looms Slacklines Socket wrenches Tie down straps Turnstiles Typewriters Gallery See also Brownian ratchet Sprag clutch Check valve, a device that allows fluids to flow in only one direction Diode, a device that allows electric current to flow in only one direction References Mechanisms (engineering)
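A small worked example of the backlash statement above: for a rotary ratchet the maximum backward travel is one tooth spacing, i.e. 360° divided by the number of teeth. The Python sketch below performs that arithmetic; the tooth counts are hypothetical, with 72 teeth being typical of fine-tooth socket wrenches.

```python
# Maximum backlash of a rotary ratchet: one tooth spacing, i.e. 360 degrees
# divided by the tooth count. Tooth counts below are illustrative.

def max_backlash_degrees(tooth_count: int) -> float:
    return 360.0 / tooth_count

for teeth in (24, 36, 72):
    print(f"{teeth:3d} teeth -> up to {max_backlash_degrees(teeth):.1f} degrees of backlash")
```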
Ratchet (device)
[ "Engineering" ]
515
[ "Mechanical engineering", "Mechanisms (engineering)" ]
18,481,252
https://en.wikipedia.org/wiki/Protactinium%28V%29%20chloride
Protactinium(V) chloride is the chemical compound composed of protactinium and chlorine with the formula PaCl5. It forms yellow monoclinic crystals and has a unique structure composed of chains of 7 coordinate, pentagonal bipyramidal, protactinium atoms sharing edges. Protactinium(V) chloride can react with boron tribromide at high temperatures to form protactinium(V) bromide. It also reacts with fluorine to form protactinium(V) fluoride at high temperatures. See also Protactinium(IV) chloride References Protactinium(V) compounds Chlorides Actinide halides
Protactinium(V) chloride
[ "Chemistry" ]
146
[ "Chlorides", "Inorganic compounds", "Inorganic compound stubs", "Salts" ]
18,482,434
https://en.wikipedia.org/wiki/Fleuron%20%28architecture%29
A fleuron is a flower-shaped ornament, and in architecture may have a number of meanings: It is a collective noun for the ornamental termination at the ridge of a roof, such as a crop, finial or épi. It is also a form of stylised Late Gothic decoration in the form of a four-leafed square, often seen on crockets and cavetto mouldings. It can be the ornament in the middle of each concave face of a Corinthian abacus. Finally, it can be a form of anthemion, a Greek floral ornament. Gallery See also Flamboyant References Architectural elements Ornaments (architecture)
Fleuron (architecture)
[ "Technology", "Engineering" ]
144
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
18,482,489
https://en.wikipedia.org/wiki/Absorption%20edge
In physics, an absorption edge (also known as an absorption discontinuity or absorption limit) is a sharp discontinuity in the absorption spectrum of a substance. These discontinuities occur at wavelengths where the energy of an absorbed photon corresponds to an electronic transition or ionization potential. When the quantum energy of the incident radiation becomes smaller than the work required to eject an electron from one or other of the quantum states in the constituent absorbing atom, the incident radiation ceases to be absorbed by that state. For example, incident radiation whose wavelength corresponds to an energy just below the binding energy of the K-shell electron in an atom cannot eject that atom's K-shell electron. Siegbahn notation is used to label absorption edges. In compound semiconductors, the bonding between atoms of different species forms a set of dipoles. These dipoles can absorb energy from an electromagnetic field, achieving maximum coupling to the radiation when the frequency of the radiation equals a vibrational mode of the dipole. When this happens, the absorption coefficient peaks, yielding the fundamental edge. This occurs in the far infrared region of the spectrum. See also K-edge Siegbahn notation References Electromagnetic radiation Radiation
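Since an edge occurs where the photon energy matches a binding energy, the corresponding wavelength follows from E = hc/λ. The short Python sketch below performs this conversion; the 88 keV binding energy used is only an illustrative figure, of the order of the K-shell binding energy of a heavy element.

```python
# Converting a core-level binding energy to the corresponding absorption-edge
# wavelength via E = h*c/lambda. The binding energy below is illustrative only.

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

def edge_wavelength_m(binding_energy_eV: float) -> float:
    """Wavelength whose photon energy equals the given binding energy."""
    return h * c / (binding_energy_eV * eV)

E_K = 88.0e3  # assumed K-shell binding energy, eV
lam = edge_wavelength_m(E_K)
print(f"edge near {lam * 1e12:.1f} pm; longer-wavelength photons cannot eject this electron")
```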
Absorption edge
[ "Physics", "Chemistry" ]
248
[ "Transport phenomena", "Physical phenomena", "Electromagnetic radiation", "Waves", "Radiation", "Nuclear and atomic physics stubs", "Nuclear physics" ]
18,484,291
https://en.wikipedia.org/wiki/Resource%20intensity
Resource intensity is a measure of the resources (e.g. water, energy, materials) needed for the production, processing and disposal of a unit of good or service, or for the completion of a process or activity; it is therefore a measure of the efficiency of resource use. It is often expressed as the quantity of resource embodied in unit cost e.g. litres of water per $1 spent on product. In national economic and sustainability accounting it can be calculated as units of resource expended per unit of GDP. When applied to a single person it is expressed as the resource use of that person per unit of consumption. Relatively high resource intensities indicate a high price or environmental cost of converting resource into GDP; low resource intensity indicates a lower price or environmental cost of converting resource into GDP. Resource productivity and resource intensity are key concepts used in sustainability measurement as they measure attempts to decouple the connection between resource use and environmental degradation. Their strength is that they can be used as a metric for both economic and environmental cost. Although these concepts are two sides of the same coin, in practice they involve very different approaches and can be viewed as reflecting, on the one hand, the efficiency of resource production as outcome per unit of resource use (resource productivity) and, on the other hand, the efficiency of resource consumption as resource use per unit outcome (resource intensity). The sustainability objective is to maximize resource productivity while minimizing resource intensity. See also Bioeconomics Econophysics Energy and Environment Energy intensity Environmental economics Energy Accounting Ecodynamics Ecological Economics Industrial ecology Population dynamics Resource productivity Sustainability accounting Sustainable development Systems ecology Thermoeconomics References Sustainability metrics and indices Natural resource management Resource economics Thermodynamics Energy economics
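Because resource intensity and resource productivity are reciprocals of one another, a single pair of figures determines both. The sketch below, with all numbers invented, makes that relationship explicit.

```python
# Resource intensity (resource per unit of output) and resource productivity
# (output per unit of resource) are reciprocals. All figures are invented.

water_used_litres = 5.0e9     # water used by a sector, litres (assumed)
value_added_usd = 2.0e9       # corresponding contribution to GDP, $ (assumed)

intensity = water_used_litres / value_added_usd       # litres per $
productivity = value_added_usd / water_used_litres    # $ per litre

print(f"resource intensity   : {intensity:.2f} L/$")
print(f"resource productivity: {productivity:.2f} $/L")
assert abs(intensity * productivity - 1.0) < 1e-12    # reciprocal by construction
```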
Resource intensity
[ "Physics", "Chemistry", "Mathematics", "Environmental_science" ]
353
[ "Energy economics", "Environmental social science stubs", "Thermodynamics", "Environmental social science", "Dynamical systems" ]
18,484,667
https://en.wikipedia.org/wiki/Alpha%20blocker
Alpha blockers, also known as α-blockers or α-adrenoreceptor antagonists, are a class of pharmacological agents that act as antagonists on α-adrenergic receptors (α-adrenoceptors). Historically, alpha-blockers were used as a tool for pharmacologic research to develop a greater understanding of the autonomic nervous system. Using alpha blockers, scientists began characterizing arterial blood pressure and central vasomotor control in the autonomic nervous system. Today, they can be used as clinical treatments for a limited number of diseases. Alpha blockers can treat a small range of diseases such as hypertension, Raynaud's disease, benign prostatic hyperplasia (BPH) and erectile dysfunction. Generally speaking, these treatments function by binding an α-blocker to α receptors in the arteries and smooth muscle. Ultimately, depending on the type of alpha receptor, this relaxes the smooth muscle or blood vessels, which increases fluid flow in these entities. Classification α1-blockers act on α1-adrenoceptors α2-blockers act on α2-adrenoceptors When the term "alpha blocker" is used without further qualification, it can refer to an α1 blocker, an α2 blocker, a nonselective blocker (both α1 and α2 activity), or an α blocker with some β activity. However, the most common type of alpha blocker is usually an α1 blocker. Non-selective α-adrenergic receptor antagonists include: Phenoxybenzamine Phentolamine Tolazoline Trazodone Selective α1-adrenergic receptor antagonists include: Alfuzosin Doxazosin Prazosin (inverse agonist) Tamsulosin Terazosin Silodosin Selective α2-adrenergic receptor antagonists include: Atipamezole Idazoxan Mirtazapine Yohimbine Finally, the agents carvedilol and labetalol are both α and β-blockers. Below are some of the most common drugs used in the clinic. Medical uses While there are limited clinical α-blocker uses, in which most α-blockers are used for hypertension or benign prostatic hyperplasia, α-blockers can be used to treat a few other diseases, such as Raynaud's disease, congestive heart failure (CHF), pheochromocytoma, and erectile dysfunction. Furthermore, α-blockers can occasionally be used to treat anxiety and panic disorders, such as posttraumatic stress disorder (PTSD) induced nightmares. Studies have also had great medical interest in testing alpha blockers, specifically α2 blockers, to treat type II diabetes and psychiatric depression. Hypertension Hypertension is due to an increase in vascular resistance and vasoconstriction. Using α1 selective antagonists, such as prazosin, has been efficacious in treating mild to moderate hypertension. This is because they can decrease vascular resistance and decrease pressure. However, while these drugs are generally well tolerated, they have the potential to produce side effects such as orthostatic hypotension and dizziness. However, unlike other treatments for hypertension such as ACE inhibitors, ARBs, calcium channel blockers, thiazide diuretics or beta blockers, alpha blockers have not demonstrated the same mortality and morbidity benefits, and are therefore not generally used as first or even second line agents. Another treatment for hypertension is using drugs that have both α1 blocking activity, as well as nonselective β activity, such as Labetalol or carvedilol. In low doses, labetalol and carvedilol can decrease the peripheral resistance and block the effects of isoprenaline to reduce hypertensive symptoms. Pheochromocytoma Pheochromocytoma is a disease in which a catecholamine secreting tumor develops. 
Specifically, norepinephrine and epinephrine are secreted by these tumors, either continuously or intermittently. The excess release of these catecholamines increases central nervous system stimulation, thus causing blood vessels to increase in vascular resistance, and ultimately giving rise to hypertension. In addition, patients with these rare tumors are often subject to headaches, heart palpitations, and increased sweating. Phenoxybenzamine, a nonselective α1 and α2 blocker, has been used to treat pheochromocytoma. This drug blocks the activity of epinephrine and norepinephrine by antagonizing the alpha receptors, thus decreasing vascular resistance, increasing vasodilation, and decreasing blood pressure overall. Congestive heart failure Blockers that have both the ability to block both α and β receptors, such as carvedilol, bucindolol, and labetalol, have the ability to mitigate the symptoms in congestive heart failure. By binding to both the α and β receptors, these drugs can decrease the cardiac output and stimulate the dilation of blood vessels to promote a reduction in blood pressure. Erectile dysfunction Yohimbine, an α2 blocker derived from the bark of the Pausinystalia johimbe tree, has been tested to increase libido and treat erectile dysfunction. The proposed mechanism for yohimbine is blockade of the adrenergic receptors that are associated with neurotransmitters inhibition, including dopamine and nitric oxide, and thus aiding with penile erection and libido. By doing so, they can alter the blood flow in the penis to aid in achieving an erection. However, some side effects can occur, such as palpitation, tremor, elevated blood pressure, and anxiety. Yohimbe bark contains both α1 and α2 adrenergic receptors blocking alkaloids. Phentolamine, a non-selective alpha blocker, has also been tested to treat erectile dysfunction. By reducing vasoconstriction in the penis, there appears to be increased blood flow that aids in penile erection. Side effects associated with phentolamine include headache, flushing, and nasal congestion. Phenoxybenzamine, a non-competitive α1 and α2 blocker was used by Dr. Giles Brindley in the first intracavernosal pharmacotherapy for erectile dysfunction. Benign prostatic hyperplasia In benign prostatic hyperplasia (BPH), men experience urinary obstruction and are unable to urinate, thus leading to urinary retention. α1 specific blockers have been used to relax the smooth muscle in the bladder and enlarged prostate. Prazosin, doxazosin, and terazosin have been particularly useful for patients with BPH, especially in patients with hypertension. In such patients, these drugs can treat both conditions at the same time. In patients without hypertension, tamsulosin can be used, as it has the ability to relax the bladder and prostate smooth muscle without causing major changes in blood pressure. Raynaud's disease Both α1 blockers and α2 blockers have been examined to treat Raynaud's disease. Although α1 blockers, such as prazosin, have appeared to give slight improvement for the sclerotic symptoms of Raynaud's disease, there are many side effects that occur while taking this drug. Conversely, α2 blockers, such as yohimbine, appear to provide significant improvement of the sclerotic symptoms in Raynaud's Disease without excessive side effects. Post traumatic stress disorder Patients with posttraumatic stress disorder (PTSD) have often continued to be symptomatic despite being treated with PTSD-specific drugs. 
In addition, PTSD patients often have debilitating nightmares that continue, despite their treatments. High doses of the α1 blocker, prazosin, have been efficacious in treating patients with PTSD induced nightmares due to its ability to block the effects of norepinephrine. Adverse effects of prazosin to treat PTSD nightmares include dizziness, first dose effect (a sudden loss of consciousness), weakness, nausea, and fatigue. Adverse effects Although alpha blockers have the ability to reduce some disease pathology, there are some side effects that come with these alpha blockers. However, because there are several structural compositions that make each alpha blocker different, the side effects are different for each drug. Side effects that arise when taking alpha blockers can include the first dose effect, cardiovascular side effects, genitourinary side effects, as well as other side effects. First dose effect One of the most common side effects with alpha blockers is the first dose effect. This is a phenomenon in which patients with hypertension take an alpha blocker for the first time, and suddenly experience an intense decrease in blood pressure. Ultimately, this gives rise to orthostatic hypotension, dizziness, and a sudden loss of consciousness due to the drastic drop in blood pressure. Alpha blockers that possess these side effects include prazosin, doxazosin, and terazosin. Cardiovascular side effects There are some alpha blockers that can give rise to changes in the cardiovascular system, such as the induction of reflex tachycardia, orthostatic hypotension, or heart palpitations via alterations of the QT interval. Alpha blockers that may have these side effects include yohimbine, phenoxybenzamine, and phentolamine. Genitourinary side effects When alpha blockers are used to treat BPH, it causes vasodilation of blood vessels on the bladder and the prostate, thus increasing urination in general. However, these alpha blockers can produce the exact opposite side effect, in which edema, or abnormal fluid retention, occurs. In addition, due to the relaxation of the prostate smooth muscle, another side effect that arises in men being treated for BPH is impotence, as well as the inability to ejaculate. However, if any ejaculation activity does occur, oftentimes, it results in a phenomenon called retrograde ejaculation, in which semen flows into the urinary bladder instead of exiting through the urethra. Drugs that may produce such side effects include prazosin, terazosin, tamsulosin, and doxazosin. Other side effects Finally, there are other general side effects that can be caused by most alpha blockers (however, more frequently in alpha-1 blockers). Such side effects include dizziness, drowsiness, weakness, fatigue, psychiatric depression, and dry mouth. Priapism, an unwanted, painful long term erection not brought on by sexual arousal and lasting several hours has been associated with alpha blocker use. While this is extremely rare, particularly with tamsulosin, it can cause permanent impotence if not treated in a hospital setting. Male patients should be made aware of this as it can result from a single dose or develop over time. Contraindications There is only one compelling indication for alpha blockers, which is for benign prostatic hyperplasia. Patients who need alpha blockers for BPH, but have a history of hypotension or postural heart failure, should use these drugs with caution, as it may result in an even greater decrease in blood pressure or make heart failure even worse. 
The most compelling contraindication is urinary incontinence and overall fluid retention. To combat such fluid retention, patients can take a diuretic in combination with the alpha-blocker. In the absence of compelling indications or contraindications, patients should take alpha blockers as a step 4 therapy to reduce blood pressure, but only if the use of ACE inhibitors, angiotensin-II receptor blockers, calcium channel blockers, or thazide diuretics (in full dose or in combinations) have not been efficacious. Drug interactions As with any drug, there are drug interactions that can occur with alpha blockers. For instance, alpha blockers that are used for the reduction of blood pressure, such as phenoxybenzamine or phentolamine can have synergy with other drugs that affect smooth muscle, blood vessels, or drugs used for erectile dysfunction (i.e. sildenafil, tamsulosin, etc.). This stimulates exaggerated hypotension. Alternative alpha blockers, such as prazosin, tamsulosin, doxazosin, or terazosin can have adverse interactions with beta blockers, erectile dysfunction drugs, anxiolytics, and antihistamines. Again, these interactions can cause dangerous hypotension. Furthermore, in rare cases, drug interactions can cause irregular, rapid heartbeats or an increase blood pressure. Yohimbine can interact with stimulants, hypertension drugs, naloxone, and clonidine. Interactions with such drugs can cause either an unintended increase in blood pressure or potentiate an increase in blood pressure. Finally, in drugs with both alpha and beta blocking properties, such as carvedilol and labetalol, interactions with other alpha or beta blockers can exaggerate a decrease in blood pressure. Conversely, there are also drug interactions with carvedilol or labetalol in which blood pressure is increased unintentionally (such as with cough and cold medications). Finally, there may also be some alpha/beta blocker drug interactions that can worsen previous heart failure. Mechanism of action Alpha blockers work by blocking the effect of nerves in the sympathetic nervous system. This is done by binding to the alpha receptors in smooth muscle or blood vessels. α-blockers can bind both reversibly and irreversibly. There are several α receptors throughout the body where these drugs can bind. Specifically, α1 receptors can be found in most vascular smooth muscle, the pupillary dilator muscle, the heart, the prostate, and pilomotor smooth muscle. On the other hand, α2 receptors can be found in platelets, cholinergic nerve terminals, some vascular smooth muscle, postsynaptic CNS neurons, and fat cells. The structure of α receptors is a classic G protein–coupled receptors (GPCRs) consisting of 7 transmembrane domains, which form three intracellular loops and three extracellular loops. These receptors couple to heterotrimeric G proteins composed of α, β, and γ subunits. Although both of the α receptors are GPCRs, there are large differences in their mechanism of action. Specifically, α1 receptors are characterized as Gq GPCRs, signaling through Phospholipase C to increase IP3 and DAG, thus increasing the release of calcium. Meanwhile, α2 receptors are labeled as Gi GPCRs, which signal through adenylyl cyclase to decrease cAMP. Because the α1 and α2 receptors have different mechanisms of action, their antagonists also have different effects. α1 blockers can inhibit the release of IP3 and DAG to decrease calcium release, thus, decreasing overall signaling. 
On the other hand, α2 blockers prevent the reduction of cAMP, thus leading to an increase in overall signaling. See also Beta blocker Adrenergic receptor Adrenergic antagonist Sympathetic nervous system References
Alpha blocker
[ "Chemistry" ]
3,230
[ "Pharmacology", "Alpha blockers" ]
18,485,004
https://en.wikipedia.org/wiki/Efaroxan
Efaroxan is an α2-adrenergic receptor antagonist and antagonist of the imidazoline receptor. Synthesis The Darzens reaction between 2-fluorobenzaldehyde [57848-46-1] (1) and Ethyl 2-bromobutyrate [533-68-6] (2) gives ethyl 2-ethyl-3-(2-fluorophenyl)oxirane-2-carboxylate, CID:100942311 (3). A catalytic hydrogenation over Pd/C would give ethyl 2-[(2-fluorophenyl)methyl]-2-hydroxybutanoate, CID:77591056 (4). Saponification of the ester then gives 2-[(2-Fluorophenyl)methyl]-2-hydroxybutanoic acid, CID:53869347 (5). Treatment with 2 molar equivalents of sodium hydride apparently gives 2-Ethyl-2,3-dihydrobenzofuran-2-carboxylic acid [111080-50-3] (6). Treatment of the carboxylic acid with thionyl chloride then gives the acid chloride and subsequent treatment of this with ethylenediamine in the presence of trimethylaluminium completed the synthesis of (8). See also Fluparoxan Idazoxan References External links Benzofurans Imidazolines Alpha-2 blockers
Efaroxan
[ "Chemistry" ]
329
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
18,487,326
https://en.wikipedia.org/wiki/Germline%20mosaicism
Germline mosaicism, also called gonadal mosaicism, is a type of genetic mosaicism where more than one set of genetic information is found specifically within the gamete cells; conversely, somatic mosaicism is a type of genetic mosaicism found in somatic cells. Germline mosaicism can be present at the same time as somatic mosaicism or individually, depending on when the conditions occur. Pure germline mosaicism refers to mosaicism found exclusively in the gametes and not in any somatic cells. Germline mosaicism can be caused either by a mutation that occurs after conception, or by epigenetic regulation, alterations to DNA such as methylation that do not involve changes in the DNA coding sequence. A mutation in an allele acquired by a somatic cell early in its development can be passed on to its daughter cells, including those that later specialize to gametes. With such mutation within the gamete cells, a pair of medically typical individuals may have repeated succession of children who suffer from certain genetic disorders such as Duchenne muscular dystrophy and osteogenesis imperfecta because of germline mosaicism. It is possible for parents unaffected by germline mutations to produce an offspring with an autosomal dominant (AD) disorder due to a random new mutation within one’s gamete cells known as sporadic mutation; however, if these parents produce more than one child with an AD disorder, germline mosaicism is more likely the cause than a sporadic mutation. In the first documented case of its kind, two offspring of a French woman who had no phenotypic expression of the AD disorder hypertrophic cardiomyopathy, inherited the disease. Inheritance Germline mosaicism disorders are usually inherited in a pattern that suggests that the condition is dominant in either or both of the parents. That said, diverging from Mendelian gene inheritance patterns, a parent with a recessive allele can produce offspring expressing the phenotype as dominant through germline mosaicism. A situation may also arise in which the parents have milder phenotypic expression of a mutation yet produce offspring with more expressive phenotypic variance and a more frequent sibling recurrences of the mutation. Diseases caused by germline mosaicism can be difficult to diagnose as genetically-inherited because the mutant alleles are not likely to be present in the somatic cells. Somatic cells are more commonly used for genetic analysis because they are easier to obtain than gametes. If the disease is a result of pure germline mosaicism, then the disease causing mutant allele would never be present in the somatic cells. This is a source of uncertainty for genetic counselling. An individual may still be a carrier for a certain disease even if the disease causing mutant allele is not present in the cells that were analyzed because the causative mutation could still exist in some of the individual's gametes. Germline mosaicism may contribute to the inheritance of many genetic conditions. Conditions that are inherited by means of germline mosaicism are often mistaken as being the result of de novo mutations. Various diseases are now being re-examined for presence of mutant alleles in the germline of the parents in order to further our understanding of how they can be passed on. The frequency of germline mosaicism is not known due to the sporadic nature of the mutations causing it and the difficulty in obtaining the gametes that must be tested to diagnose it. 
Diagnosis Autosomal dominant or X-linked familial disorders often prompt prenatal testing for germline mosaicism. This diagnosis may involve minimally invasive procedures, such as blood sampling or amniotic fluid sampling. Collected samples can be sequenced via common DNA testing methods, such as Sanger Sequencing, MLPA, or Southern Blot analysis, to look for variations on relevant genes connected to the disorder. Recurrence rate The recurrence rate of conditions caused by germline mosaicism varies greatly between subjects. Recurrence is proportional to the number of gamete cells that carry the particular mutation with the condition. If the mutation occurred earlier on in the development of the gamete cells, then the recurrence rate would be higher because a greater number of cells would carry the mutant allele. Case studies A Moroccan family consisting of two healthy unrelated parents and three offspring—including two with Noonan syndrome, a rare autosomal dominant disorder with varying expression and genetic heterogeneity—underwent genetic testing revealing that both of the siblings with NS share the same PTPN11 haplotype from both parents, while a distinct paternal and maternal haplotype was inherited by the unaffected sibling. In the paper Germline and somatic mosaicism in transgenic mice published in 1986, Thomas M.Wilkie, Ralph L.Brinster, and Richard D.Palmiter analyzed a germline mosaicism experiment done on 262 transgenic mice and concluded that 30% of founder transgenic mice are mosaic in the germline. Notes Genetics
Germline mosaicism
[ "Biology" ]
1,038
[ "Genetics" ]
10,444,018
https://en.wikipedia.org/wiki/Depolymerization
Depolymerization (or depolymerisation) is the process of converting a polymer into a monomer or a mixture of monomers. This process is driven by an increase in entropy. Ceiling temperature The tendency of polymers to depolymerize is indicated by their ceiling temperature. At this temperature, the enthalpy of polymerization matches the entropy gained by converting a large molecule into monomers. Above the ceiling temperature, the rate of depolymerization is greater than the rate of polymerization, which inhibits the formation of the given polymer. Applications Depolymerization is a very common process. Digestion of food involves depolymerization of macromolecules, such as proteins. It is relevant to polymer recycling. Sometimes the depolymerization is well behaved, and clean monomers can be reclaimed and reused for making new plastic. In other cases, such as polyethylene, depolymerization gives a mixture of products. These products are, for polyethylene, ethylene, propylene, isobutylene, 1-hexene and heptane. Out of these, only ethylene can be used for polyethylene production, so other gases must be turned into ethylene, sold, or otherwise be destroyed or be disposed of by turning them into other products. Depolymerization is also related to production of chemicals and fuels from biomass. In this case, reagents are typically required. A simple case is the hydrolysis of celluloses to glucose by the action of water. Generally this process requires an acid catalyst: H(C6H10O5)nOH + (n - 1) H2O → n C6H12O6 See also Thermal depolymerization Polymer degradation Polymerisation Ceiling temperature Chain scission References External links Polymers Plastic recycling
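The ceiling-temperature statement above amounts to Tc = ΔHp/ΔSp, the temperature at which the free-energy change of polymerization vanishes. The sketch below evaluates this for assumed, order-of-magnitude values of the polymerization enthalpy and entropy.

```python
# Ceiling temperature Tc = dHp / dSp: the temperature at which the enthalpy
# released by polymerization is balanced by the entropy lost. The values of
# dHp and dSp below are assumed, order-of-magnitude figures.

dH_p = -70.0e3    # enthalpy of polymerization, J per mol of monomer (assumed)
dS_p = -105.0     # entropy of polymerization, J/(mol K) (assumed)

T_c = dH_p / dS_p
print(f"ceiling temperature ~ {T_c:.0f} K ({T_c - 273.15:.0f} C)")
# Above T_c depolymerization outpaces polymerization, so the polymer cannot grow.
```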
Depolymerization
[ "Chemistry", "Materials_science" ]
381
[ "Polymer stubs", "Polymers", "Polymer chemistry", "Organic chemistry stubs" ]
10,446,157
https://en.wikipedia.org/wiki/COCO%20simulator
The COCO Simulator is a free-of-charge, non-commercial, graphical, modular and CAPE-OPEN compliant, steady-state, sequential simulation process modeling environment. It was originally intended as a test environment for CAPE-OPEN modeling tools but now provides free chemical process simulation for students. It is an open flowsheet modeling environment allowing anyone to add new unit operations or thermodynamics packages. The COCO Simulator uses a graphical representation, the Process Flow Diagram (PFD), for defining the process to be simulated. Clicking on a unit operation with the mouse allows the user to edit the unit operation parameters it defines via the CAPE-OPEN standard or to open the unit operation's own user interface, when available. This interoperability of process modeling software was enabled by the advent of the CAPE-OPEN standard. COCO thermodynamic library "TEA" and its chemical compound data bank are based on ChemSep LITE, a free equilibrium column simulator for distillation columns and liquid-liquid extractors. COCO's thermodynamic library exports more than 100 property calculation methods with their analytical or numerical derivatives. COCO includes a LITE version of COSMOtherm, an activity coefficient model based on Ab initio quantum chemistry methods. The simulator entails a set of unit-operations such as stream splitters/mixers, heat-exchangers, compressors, pumps and reactors. COCO features a reaction numerics package to power its simple conversion, equilibrium, CSTR, Gibbs minimization and plug flow reactor models. See also Process design (chemical engineering) List of Chemical Process Simulators Thermodynamic and thermophysical data Standard temperature and pressure References External links Homepage of the COCO Simulator CO-LaN - the CAPE-OPEN Laboratories Network is a neutral industry and academic association promoting open interface standards in process simulation software. CO-LaN members are committed to making Computer Aided Process Engineering easier, faster and less expensive by achieving complete interoperability of compliant commercial CAPE software tools. CO-LaN supports and maintains the CAPE-OPEN interface standards. Chemical engineering software
COCO simulator
[ "Chemistry", "Engineering" ]
434
[ "Chemical engineering software", "Chemical engineering" ]
10,451,618
https://en.wikipedia.org/wiki/Integrated%20catchment%20management
Integrated catchment management (ICM) is a subset of environmental planning which approaches sustainable resource management from a catchment perspective, in contrast to a piecemeal approach that artificially separates land management from water management. Details Integrated catchment management recognizes the existence of ecosystems and their role in supporting flora and fauna, providing services to human societies, and regulating the human environment. Integrated catchment management seeks to take into account complex relationships within those ecosystems: between flora and fauna, between geology, between soils and the biosphere, and between the biosphere and the atmosphere. Integrated catchment management recognizes the cyclic nature of processes within an ecosystem, and values scientific and technical information for understanding and analysing the natural world. See also Catchment Management Authority (New South Wales) Catchment Management Authority (Victoria) Motueka River List of drainage basins by area References External links Landcare Research - Integrated Catchment Management ABC catchment fact sheet Natural resource management Urban planning Hydrology Rivers
Integrated catchment management
[ "Chemistry", "Engineering", "Environmental_science" ]
188
[ "Hydrology", "Architecture", "Urban planning", "Environmental engineering" ]
10,451,852
https://en.wikipedia.org/wiki/Nahm%20equations
In differential geometry and gauge theory, the Nahm equations are a system of ordinary differential equations introduced by Werner Nahm in the context of the Nahm transform – an alternative to Ward's twistor construction of monopoles. The Nahm equations are formally analogous to the algebraic equations in the ADHM construction of instantons, where finite order matrices are replaced by differential operators. Deep study of the Nahm equations was carried out by Nigel Hitchin and Simon Donaldson. Conceptually, the equations arise in the process of infinite-dimensional hyperkähler reduction. They can also be viewed as a dimensional reduction of the anti-self-dual Yang-Mills equations . Among their many applications we can mention: Hitchin's construction of monopoles, where this approach is critical for establishing nonsingularity of monopole solutions; Donaldson's description of the moduli space of monopoles; and the existence of hyperkähler structure on coadjoint orbits of complex semisimple Lie groups, proved by , , and . Equations Let be three matrix-valued meromorphic functions of a complex variable . The Nahm equations are a system of matrix differential equations together with certain analyticity properties, reality conditions, and boundary conditions. The three equations can be written concisely using the Levi-Civita symbol, in the form More generally, instead of considering by matrices, one can consider Nahm's equations with values in a Lie algebra . Additional conditions The variable is restricted to the open interval , and the following conditions are imposed: can be continued to a meromorphic function of in a neighborhood of the closed interval , analytic outside of and , and with simple poles at and ; and At the poles, the residues of form an irreducible representation of the group SU(2). Nahm–Hitchin description of monopoles There is a natural equivalence between the monopoles of charge for the group , modulo gauge transformations, and the solutions of Nahm equations satisfying the additional conditions above, modulo the simultaneous conjugation of by the group . Lax representation The Nahm equations can be written in the Lax form as follows. Set then the system of Nahm equations is equivalent to the Lax equation As an immediate corollary, we obtain that the spectrum of the matrix does not depend on . Therefore, the characteristic equation which determines the so-called spectral curve in the twistor space is invariant under the flow in . See also Bogomolny equation Yang–Mills–Higgs equations References External links Islands project – a wiki about the Nahm equations and related topics Differential equations Eponymous equations of physics Mathematical physics Integrable systems
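In the convention most often quoted, the Nahm equations read dTi/ds = ½ Σj,k εijk [Tj, Tk], i.e. dT1/ds = [T2, T3] and cyclic permutations. The Python sketch below integrates these numerically for random 2 × 2 anti-Hermitian matrices and checks that the eigenvalues of the combination T1 + iT2 stay constant along the flow, illustrating the invariance of the spectrum mentioned above; treating T1 + iT2 as the relevant Lax-type matrix is an assumption of this sketch rather than something stated in the article.

```python
# Numerical sketch of the Nahm equations dTi/ds = (1/2) eps_ijk [Tj, Tk]
# (i.e. dT1/ds = [T2, T3] and cyclic permutations) for random 2x2
# anti-Hermitian matrices. The eigenvalues of A = T1 + i*T2 are checked to
# stay constant along the flow; this choice of Lax-type matrix is an
# assumption of the sketch.

import numpy as np

rng = np.random.default_rng(0)

def random_anti_hermitian(n=2, scale=0.5):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return scale * (m - m.conj().T) / 2

def nahm_rhs(T):
    T1, T2, T3 = T
    comm = lambda a, b: a @ b - b @ a
    return np.array([comm(T2, T3), comm(T3, T1), comm(T1, T2)])

def rk4_step(T, h):
    k1 = nahm_rhs(T)
    k2 = nahm_rhs(T + 0.5 * h * k1)
    k3 = nahm_rhs(T + 0.5 * h * k2)
    k4 = nahm_rhs(T + h * k3)
    return T + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

T = np.array([random_anti_hermitian() for _ in range(3)])
spec_start = np.sort_complex(np.linalg.eigvals(T[0] + 1j * T[1]))

for _ in range(400):          # integrate from s = 0 to s = 0.4
    T = rk4_step(T, 1e-3)

spec_end = np.sort_complex(np.linalg.eigvals(T[0] + 1j * T[1]))
print("spectrum at start:", spec_start)
print("spectrum at end:  ", spec_end)   # agrees to numerical accuracy
```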
Nahm equations
[ "Physics", "Mathematics" ]
538
[ "Equations of physics", "Integrable systems", "Applied mathematics", "Theoretical physics", "Eponymous equations of physics", "Mathematical objects", "Equations", "Differential equations", "Mathematical physics" ]
10,452,186
https://en.wikipedia.org/wiki/Inverse%20problem%20for%20Lagrangian%20mechanics
In mathematics, the inverse problem for Lagrangian mechanics is the problem of determining whether a given system of ordinary differential equations can arise as the Euler–Lagrange equations for some Lagrangian function. There has been a great deal of activity in the study of this problem since the early 20th century. A notable advance in this field was a 1941 paper by the American mathematician Jesse Douglas, in which he provided necessary and sufficient conditions for the problem to have a solution; these conditions are now known as the Helmholtz conditions, after the German physicist Hermann von Helmholtz. Background and statement of the problem The usual set-up of Lagrangian mechanics on n-dimensional Euclidean space Rn is as follows. Consider a differentiable path u : [0, T] → Rn. The action of the path u, denoted S(u), is given by where L is a function of time, position and velocity known as the Lagrangian. The principle of least action states that, given an initial state x0 and a final state x1 in Rn, the trajectory that the system determined by L will actually follow must be a minimizer of the action functional S satisfying the boundary conditions u(0) = x0, u(T) = x1. Furthermore, the critical points (and hence minimizers) of S must satisfy the Euler–Lagrange equations for S: where the upper indices i denote the components of u = (u1, ..., un). In the classical case the Euler–Lagrange equations are the second-order ordinary differential equations better known as Newton's laws of motion: The inverse problem of Lagrangian mechanics is as follows: given a system of second-order ordinary differential equations that holds for times 0 ≤ t ≤ T, does there exist a Lagrangian L : [0, T] × Rn × Rn → R for which these ordinary differential equations (E) are the Euler–Lagrange equations? In general, this problem is posed not on Euclidean space Rn, but on an n-dimensional manifold M, and the Lagrangian is a function L : [0, T] × TM → R, where TM denotes the tangent bundle of M. Douglas' theorem and the Helmholtz conditions To simplify the notation, let and define a collection of n2 functions Φji by Theorem. (Douglas 1941) There exists a Lagrangian L : [0, T] × TM → R such that the equations (E) are its Euler–Lagrange equations if and only if there exists a non-singular symmetric matrix g with entries gij depending on both u and v satisfying the following three Helmholtz conditions: (The Einstein summation convention is in use for the repeated indices.) Applying Douglas' theorem At first glance, solving the Helmholtz equations (H1)–(H3) seems to be an extremely difficult task. Condition (H1) is the easiest to solve: it is always possible to find a g that satisfies (H1), and it alone will not imply that the Lagrangian is singular. Equation (H2) is a system of ordinary differential equations: the usual theorems on the existence and uniqueness of solutions to ordinary differential equations imply that it is, in principle, possible to solve (H2). Integration does not yield additional constants but instead first integrals of the system (E), so this step becomes difficult in practice unless (E) has enough explicit first integrals. In certain well-behaved cases (e.g. the geodesic flow for the canonical connection on a Lie group), this condition is satisfied. The final and most difficult step is to solve equation (H3), called the closure conditions since (H3) is the condition that the differential 1-form gi is a closed form for each i. 
The reason why this is so daunting is that (H3) constitutes a large system of coupled partial differential equations: for n degrees of freedom, (H3) constitutes a system of partial differential equations in the 2n independent variables that are the components gij of g, where denotes the binomial coefficient. In order to construct the most general possible Lagrangian, one must solve this huge system! Fortunately, there are some auxiliary conditions that can be imposed in order to help in solving the Helmholtz conditions. First, (H1) is a purely algebraic condition on the unknown matrix g. Auxiliary algebraic conditions on g can be given as follows: define functions Ψjki by The auxiliary condition on g is then In fact, the equations (H2) and (A) are just the first in an infinite hierarchy of similar algebraic conditions. In the case of a parallel connection (such as the canonical connection on a Lie group), the higher order conditions are always satisfied, so only (H2) and (A) are of interest. Note that (A) comprises conditions whereas (H1) comprises conditions. Thus, it is possible that (H1) and (A) together imply that the Lagrangian function is singular. As of 2006, there is no general theorem to circumvent this difficulty in arbitrary dimension, although certain special cases have been resolved. A second avenue of attack is to see whether the system (E) admits a submersion onto a lower-dimensional system and to try to "lift" a Lagrangian for the lower-dimensional system up to the higher-dimensional one. This is not really an attempt to solve the Helmholtz conditions so much as it is an attempt to construct a Lagrangian and then show that its Euler–Lagrange equations are indeed the system (E). References Calculus of variations Lagrangian mechanics Inverse problems
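The forward direction of the problem, passing from a given Lagrangian to its Euler–Lagrange equations, is straightforward and can be checked symbolically. The sketch below does this for the standard Newtonian Lagrangian L = ½ m v² − V(u), recovering m ü = −V′(u); it does not attempt the genuinely hard inverse step of verifying the Helmholtz conditions.

```python
# Forward check only: derive the Euler-Lagrange equation of the Newtonian
# Lagrangian L = (1/2) m v^2 - V(u) symbolically. The output is equivalent
# to m*u'' = -dV/du, i.e. Newton's law; verifying the Helmholtz conditions
# for a general system (the inverse problem) is not attempted here.

import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols("t")
m = sp.symbols("m", positive=True)
u = sp.Function("u")
V = sp.Function("V")

L = sp.Rational(1, 2) * m * u(t).diff(t) ** 2 - V(u(t))

(eq,) = euler_equations(L, [u(t)], [t])
print(eq)   # an equation equivalent to m*u''(t) = -V'(u(t))
```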
Inverse problem for Lagrangian mechanics
[ "Physics", "Mathematics" ]
1,206
[ "Applied mathematics", "Lagrangian mechanics", "Classical mechanics", "Inverse problems", "Dynamical systems" ]
12,744,141
https://en.wikipedia.org/wiki/Lagrange%20bracket
Lagrange brackets are certain expressions closely related to Poisson brackets that were introduced by Joseph Louis Lagrange in 1808–1810 for the purposes of mathematical formulation of classical mechanics, but unlike the Poisson brackets, have fallen out of use. Definition Suppose that (q1, ..., qn, p1, ..., pn) is a system of canonical coordinates on a phase space. If each of them is expressed as a function of two variables, u and v, then the Lagrange bracket of u and v is defined by the formula Properties Lagrange brackets do not depend on the system of canonical coordinates (q, p). If (Q,P) = (Q1, ..., Qn, P1, ..., Pn) is another system of canonical coordinates, so that is a canonical transformation, then the Lagrange bracket is an invariant of the transformation, in the sense that Therefore, the subscripts indicating the canonical coordinates are often omitted. If Ω is the symplectic form on the 2n-dimensional phase space W and u1,...,u2n form a system of coordinates on W, the symplectic form can be written as where the matrix represents the components of , viewed as a tensor, in the coordinates u. This matrix is the inverse of the matrix formed by the Poisson brackets of the coordinates u. As a corollary of the preceding properties, coordinates (Q1, ..., Qn, P1, ..., Pn) on a phase space are canonical if and only if the Lagrange brackets between them have the form Lagrange matrix in canonical transformations The concept of Lagrange brackets can be expanded to that of matrices by defining the Lagrange matrix. Consider the following canonical transformation: Defining , the Lagrange matrix is defined as , where is the symplectic matrix under the same conventions used to order the set of coordinates. It follows from the definition that: The Lagrange matrix satisfies the following known properties:where the is known as a Poisson matrix and whose elements correspond to Poisson brackets. The last identity can also be stated as the following:Note that the summation here involves generalized coordinates as well as generalized momentum. The invariance of Lagrange bracket can be expressed as: , which directly leads to the symplectic condition: . See also Lagrangian mechanics Hamiltonian mechanics References Cornelius Lanczos, The Variational Principles of Mechanics, Dover (1986), . Iglesias, Patrick, Les origines du calcul symplectique chez Lagrange [The origins of symplectic calculus in Lagrange's work], L'Enseign. Math. (2) 44 (1998), no. 3-4, 257–277. External links Bilinear maps Hamiltonian mechanics
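Using the standard definition [u, v]q,p = Σi (∂qi/∂u ∂pi/∂v − ∂pi/∂u ∂qi/∂v), the canonical form of the brackets can be verified symbolically for a simple canonical transformation. The sketch below does so for a rotation of the (q, p) plane, a transformation chosen here purely for illustration.

```python
# The Lagrange bracket [u, v] = sum_i (dq_i/du * dp_i/dv - dp_i/du * dq_i/dv),
# evaluated for a rotation of the (q, p) plane. Since the rotation is a
# canonical transformation, the brackets come out in canonical form:
# [Q, Q] = [P, P] = 0 and [Q, P] = 1.

import sympy as sp

Q, P, a = sp.symbols("Q P a", real=True)

# old canonical coordinates expressed through the new ones
q = Q * sp.cos(a) - P * sp.sin(a)
p = Q * sp.sin(a) + P * sp.cos(a)

def lagrange_bracket(u, v):
    return sp.simplify(sp.diff(q, u) * sp.diff(p, v) - sp.diff(p, u) * sp.diff(q, v))

print(lagrange_bracket(Q, Q), lagrange_bracket(P, P), lagrange_bracket(Q, P))  # 0 0 1
```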
Lagrange bracket
[ "Physics", "Mathematics" ]
602
[ "Hamiltonian mechanics", "Theoretical physics", "Classical mechanics", "Dynamical systems" ]
12,746,615
https://en.wikipedia.org/wiki/Cocompact%20group%20action
In mathematics, an action of a group G on a topological space X is cocompact if the quotient space X/G is a compact space. If X is locally compact, then an equivalent condition is that there is a compact subset K of X such that the image of K under the action of G covers X. It is sometimes referred to as mpact, a tongue-in-cheek reference to dual notions where prefixing with "co-" twice would "cancel out". References Group actions (mathematics)
Cocompact group action
[ "Physics", "Mathematics" ]
108
[ "Topology stubs", "Topology", "Group actions", "Symmetry" ]
12,747,246
https://en.wikipedia.org/wiki/Outer%20space%20%28mathematics%29
In the mathematical subject of geometric group theory, the Culler–Vogtmann Outer space or just Outer space of a free group Fn is a topological space consisting of the so-called "marked metric graph structures" of volume 1 on Fn. The Outer space, denoted Xn or CVn, comes equipped with a natural action of the group of outer automorphisms Out(Fn) of Fn. The Outer space was introduced in a 1986 paper of Marc Culler and Karen Vogtmann, and it serves as a free group analog of the Teichmüller space of a hyperbolic surface. Outer space is used to study homology and cohomology groups of Out(Fn) and to obtain information about algebraic, geometric and dynamical properties of Out(Fn), of its subgroups and individual outer automorphisms of Fn. The space Xn can also be thought of as the set of isometry types of minimal free discrete isometric actions of Fn on R-trees T such that the quotient metric graph T/Fn has volume 1. History The Outer space was introduced in a 1986 paper of Marc Culler and Karen Vogtmann, inspired by analogy with the Teichmüller space of a hyperbolic surface. They showed that the natural action of on is properly discontinuous, and that is contractible. In the same paper Culler and Vogtmann constructed an embedding, via the translation length functions discussed below, of into the infinite-dimensional projective space , where is the set of nontrivial conjugacy classes of elements of . They also proved that the closure of in is compact. Later a combination of the results of Cohen and Lustig and of Bestvina and Feighn identified (see Section 1.3 of ) the space with the space of projective classes of "very small" minimal isometric actions of on -trees. Formal definition Marked metric graphs Let n ≥ 2. For the free group Fn fix a "rose" Rn, that is a wedge, of n circles wedged at a vertex v, and fix an isomorphism between Fn and the fundamental group 1(Rn, v) of Rn. From this point on we identify Fn and 1(Rn, v) via this isomorphism. A marking on Fn consists of a homotopy equivalence f : Rn → Γ where Γ is a finite connected graph without degree-one and degree-two vertices. Up to a (free) homotopy, f is uniquely determined by the isomorphism f# : , that is by an isomorphism A metric graph is a finite connected graph together with the assignment to every topological edge e of Γ of a positive real number L(e) called the length of e. The volume of a metric graph is the sum of the lengths of its topological edges. A marked metric graph structure on Fn consists of a marking f : Rn → Γ together with a metric graph structure L on Γ. Two marked metric graph structures f1 : Rn → Γ1 and f2 : Rn → Γ2 are equivalent if there exists an isometry θ : Γ1 → Γ2 such that, up to free homotopy, we have θ o f1 = f2. The Outer space Xn consists of equivalence classes of all the volume-one marked metric graph structures on Fn. Weak topology on the Outer space Open simplices Let f : Rn → Γ where Γ is a marking and let k be the number of topological edges in Γ. We order the edges of Γ as e1, ..., ek. Let be the standard (k − 1)-dimensional open simplex in Rk. Given f, there is a natural map j : Δk → Xn, where for x = (x1, ..., xk) ∈ Δk, the point j(x) of Xn is given by the marking f together with the metric graph structure L on Γ such that L(ei) = xi for i = 1, ..., k. One can show that j is in fact an injective map, that is, distinct points of Δk correspond to non-equivalent marked metric graph structures on Fn. The set j(Δk) is called open simplex in Xn corresponding to f and is denoted S(f). 
By construction, Xn is the union of open simplices corresponding to all markings on Fn. Note that two open simplices in Xn either are disjoint or coincide. Closed simplices Let f : Rn → Γ where Γ is a marking and let k be the number of topological edges in Γ. As before, we order the edges of Γ as e1, ..., ek. Define Δk′ ⊆ Rk as the set of all x = (x1, ..., xk) ∈ Rk, such that , such that each xi ≥ 0 and such that the set of all edges ei in with xi = 0 is a subforest in Γ. The map j : Δk → Xn extends to a map h : Δk′ → Xn as follows. For x in Δk put h(x) = j(x). For x ∈ Δk′ − Δk the point h(x) of Xn is obtained by taking the marking f, contracting all edges ei of with xi = 0 to obtain a new marking f1 : Rn → Γ1 and then assigning to each surviving edge ei of Γ1 length xi > 0. It can be shown that for every marking f the map h : Δk′ → Xn is still injective. The image of h is called the closed simplex in Xn corresponding to f and is denoted by S′(f). Every point in Xn belongs to only finitely many closed simplices and a point of Xn represented by a marking f : Rn → Γ where the graph Γ is tri-valent belongs to a unique closed simplex in Xn, namely S′(f). The weak topology on the Outer space Xn is defined by saying that a subset C of Xn is closed if and only if for every marking f : Rn → Γ the set h−1(C) is closed in Δk′. In particular, the map h : Δk′ → Xn is a topological embedding. Points of Outer space as actions on trees Let x be a point in Xn given by a marking f : Rn → Γ with a volume-one metric graph structure L on Γ. Let T be the universal cover of Γ. Thus T is a simply connected graph, that is T is a topological tree. We can also lift the metric structure L to T by giving every edge of T the same length as the length of its image in Γ. This turns T into a metric space (T, d) which is a real tree. The fundamental group 1(Γ) acts on T by covering transformations which are also isometries of (T, d), with the quotient space T/1(Γ) = Γ. Since the induced homomorphism f# is an isomorphism between Fn = 1(Rn) and 1(Γ), we also obtain an isometric action of Fn on T with T/Fn = Γ. This action is free and discrete. Since Γ is a finite connected graph with no degree-one vertices, this action is also minimal, meaning that T has no proper Fn-invariant subtrees. Moreover, every minimal free and discrete isometric action of Fn on a real tree with the quotient being a metric graph of volume one arises in this fashion from some point x of Xn. This defines a bijective correspondence between Xn and the set of equivalence classes of minimal free and discrete isometric actions of Fn on a real trees with volume-one quotients. Here two such actions of Fn on real trees T1 and T2 are equivalent if there exists an Fn-equivariant isometry between T1 and T2. Length functions Give an action of Fn on a real tree T as above, one can define the translation length function associate with this action: For g ≠ 1 there is a (unique) isometrically embedded copy of R in T, called the axis of g, such that g acts on this axis by a translation of magnitude . For this reason is called the translation length of g. For any g, u in Fn we have , that is the function is constant on each conjugacy class in G. In the marked metric graph model of Outer space translation length functions can be interpreted as follows. Let T in Xn be represented by a marking f : Rn → Γ with a volume-one metric graph structure L on Γ. Let g ∈ Fn = 1(Rn). 
First push g forward via f# to get a closed loop in Γ and then tighten this loop to an immersed circuit in Γ. The L-length of this circuit is the translation length of g. A basic general fact from the theory of group actions on real trees says that a point of the Outer space is uniquely determined by its translation length function. Namely if two trees with minimal free isometric actions of Fn define equal translation length functions on Fn then the two trees are Fn-equivariantly isometric. Hence the map from Xn to the set of R-valued functions on Fn is injective. One defines the length function topology or axes topology on Xn as follows. For every T in Xn, every finite subset K of Fn and every ε > 0 let In the length function topology for every T in Xn a basis of neighborhoods of T in Xn is given by the family VT(K, ε) where K is a finite subset of Fn and where ε > 0. Convergence of sequences in the length function topology can be characterized as follows. For T in Xn and a sequence Ti in Xn we have if and only if for every g in Fn we have Gromov topology Another topology on is the so-called Gromov topology or the equivariant Gromov–Hausdorff convergence topology, which provides a version of Gromov–Hausdorff convergence adapted to the setting of an isometric group action. When defining the Gromov topology, one should think of points of as actions of on -trees. Informally, given a tree , another tree is "close" to in the Gromov topology, if for some large finite subtrees of and a large finite subset there exists an "almost isometry" between and with respect to which the (partial) actions of on and almost agree. For the formal definition of the Gromov topology see. Coincidence of the weak, the length function and Gromov topologies An important basic result states that the Gromov topology, the weak topology and the length function topology on Xn coincide. Action of Out(Fn) on Outer space The group Out(Fn) admits a natural right action by homeomorphisms on Xn. First we define the action of the automorphism group Aut(Fn) on Xn. Let α ∈ Aut(Fn) be an automorphism of Fn. Let x be a point of Xn given by a marking f : Rn → Γ with a volume-one metric graph structure L on Γ. Let τ : Rn → Rn be a homotopy equivalence whose induced homomorphism at the fundamental group level is the automorphism α of Fn = 1(Rn). The element xα of Xn is given by the marking f ∘ τ : Rn → Γ with the metric structure L on Γ. That is, to get xα from x we simply precompose the marking defining x with τ. In the real tree model this action can be described as follows. Let T in Xn be a real tree with a minimal free and discrete co-volume-one isometric action of Fn. Let α ∈ Aut(Fn). As a metric space, Tα is equal to T. The action of Fn is twisted by α. Namely, for any t in T and g in Fn we have: At the level of translation length functions the tree Tα is given as: One then checks that for the above action of Aut(Fn) on Outer space Xn the subgroup of inner automorphisms Inn(Fn) is contained in the kernel of this action, that is every inner automorphism acts trivially on Xn. It follows that the action of Aut(Fn) on Xn quotients through to an action of Out(Fn) = Aut(Fn)/Inn(Fn) on Xn. namely, if φ ∈ Out(Fn) is an outer automorphism of Fn and if α in Aut(Fn) is an actual automorphism representing φ then for any x in Xn we have xφ = xα. The right action of Out(Fn) on Xn can be turned into a left action via a standard conversion procedure. Namely, for φ ∈ Out(Fn) and x in Xn set φx = xφ−1. 
This left action of Out(Fn) on Xn is also sometimes considered in the literature although most sources work with the right action. Moduli space The quotient space Mn = Xn/Out(Fn) is the moduli space which consists of isometry types of finite connected graphs Γ without degree-one and degree-two vertices, with fundamental groups isomorphic to Fn (that is, with the first Betti number equal to n), equipped with volume-one metric structures. The quotient topology on Mn is the same as that given by the Gromov–Hausdorff distance between metric graphs representing points of Mn. The moduli space Mn is not compact and the "cusps" in Mn arise from the lengths of edges of homotopically nontrivial subgraphs (e.g. an essential circuit) of a metric graph Γ decreasing towards zero. Basic properties and facts about Outer space Outer space Xn is contractible and the action of Out(Fn) on Xn is properly discontinuous, as was proved by Culler and Vogtmann in their original 1986 paper where Outer space was introduced. The space Xn has topological dimension 3n − 4. The reason is that if Γ is a finite connected graph without degree-one and degree-two vertices with fundamental group isomorphic to Fn, then Γ has at most 3n − 3 edges and it has exactly 3n − 3 edges when Γ is trivalent. Hence the top-dimensional open simplex in Xn has dimension 3n − 4. Outer space Xn contains a specific deformation retract Kn of Xn, called the spine of Outer space. The spine Kn has dimension 2n − 3, is Out(Fn)-invariant and has compact quotient under the action of Out(Fn). Unprojectivized Outer space The unprojectivized Outer space cvn consists of equivalence classes of all marked metric graph structures on Fn where the volume of the metric graph in the marking is allowed to be any positive real number. The space cvn can also be thought of as the set of all free minimal discrete isometric actions of Fn on R-trees, considered up to Fn-equivariant isometry. The unprojectivized Outer space inherits the same structures that Xn has, including the coincidence of the three topologies (Gromov, axes, weak), and an Out(Fn)-action. In addition, there is a natural action of the multiplicative group of positive reals R>0 on cvn by scalar multiplication. Topologically, cvn is homeomorphic to Xn × (0, ∞). In particular, cvn is also contractible. Projectivized Outer space The projectivized Outer space is the quotient space CVn = cvn/R>0 under the action of R>0 on cvn by scalar multiplication. The space CVn is equipped with the quotient topology. For a tree T in cvn its projective equivalence class is denoted [T]. The action of Out(Fn) on cvn naturally quotients through to an action of Out(Fn) on CVn. Namely, for φ ∈ Out(Fn) and T ∈ cvn put [T]φ = [Tφ]. A key observation is that the map Xn → CVn, T ↦ [T], is an Out(Fn)-equivariant homeomorphism. For this reason the spaces Xn and CVn are often identified. Lipschitz distance The Lipschitz distance, named for Rudolf Lipschitz, for Outer space corresponds to the Thurston metric in Teichmüller space. For two points x, y in Xn the (right) Lipschitz distance dR(x, y) is defined via the (natural) logarithm of the maximally stretched closed path from x to y: ΛR(x, y) = sup over nontrivial g in Fn of ℓy(g)/ℓx(g), and dR(x, y) = log ΛR(x, y). This is an asymmetric metric (also sometimes called a quasimetric), i.e. it only fails symmetry: in general dR(x, y) ≠ dR(y, x). The symmetric Lipschitz metric is normally defined as: d(x, y) = dR(x, y) + dR(y, x). The supremum is always attained and can be calculated over a finite set, the so-called candidates of x: ΛR(x, y) = max over g in cand(x) of ℓy(g)/ℓx(g), where cand(x) is the finite set of conjugacy classes in Fn which correspond to embeddings of a simple loop, a figure of eight, or a barbell into the marked graph underlying x via the marking. The stretching factor ΛR(x, y) also equals the minimal Lipschitz constant of a homotopy equivalence carrying over the marking, i.e. ΛR(x, y) = min over h of Lip(h), 
where h ranges over the continuous maps h : Γx → Γy such that, for the marking fx : Rn → Γx defining x, the composition h ∘ fx is freely homotopic to the marking fy : Rn → Γy defining y, and Lip(h) denotes the Lipschitz constant of h. The induced topology is the same as the weak topology, and the isometry group is Out(Fn) for both the symmetric and the asymmetric Lipschitz distance. Applications and generalizations The closure of cvn in the length function topology is known to consist of (Fn-equivariant isometry classes of) all very small minimal isometric actions of Fn on R-trees. Here the closure is taken in the space of all minimal isometric "irreducible" actions of Fn on R-trees, considered up to equivariant isometry. It is known that the Gromov topology and the axes topology on the space of irreducible actions coincide, so the closure can be understood in either sense. The projectivization of this closure with respect to multiplication by positive scalars gives a compact space which is the length function compactification of CVn and of Xn, analogous to Thurston's compactification of the Teichmüller space. Analogs and generalizations of the Outer space have been developed for free products, for right-angled Artin groups, for the so-called deformation spaces of group actions and in some other contexts. A base-pointed version of Outer space, called Auter space, for marked metric graphs with base-points, was constructed by Hatcher and Vogtmann in 1998. The Auter space shares many properties in common with the Outer space, but only comes with an action of Aut(Fn). See also Geometric group theory Mapping class group Train track map Out(Fn) References Further reading Mladen Bestvina, The topology of Out(Fn). Proceedings of the International Congress of Mathematicians, Vol. II (Beijing, 2002), pp. 373–384, Higher Education Press, Beijing, 2002. Karen Vogtmann, On the geometry of outer space. Bulletin of the American Mathematical Society 52 (2015), no. 1, 27–46. Geometric group theory Geometric topology
Outer space (mathematics)
[ "Physics", "Mathematics" ]
3,996
[ "Geometric group theory", "Group actions", "Geometric topology", "Topology", "Symmetry" ]
12,750,298
https://en.wikipedia.org/wiki/Reaction%20dynamics
Reaction dynamics is a field within physical chemistry, studying why chemical reactions occur, how to predict their behavior, and how to control them. It is closely related to chemical kinetics, but is concerned with individual chemical events on atomic length scales and over very brief time periods. It considers state-to-state kinetics between reactant and product molecules in specific quantum states, and how energy is distributed between translational, vibrational, rotational, and electronic modes. Experimental methods of reaction dynamics probe the chemical physics associated with molecular collisions. They include crossed molecular beam and infrared chemiluminescence experiments, both recognized by the 1986 Nobel Prize in Chemistry awarded to Dudley Herschbach, Yuan T. Lee, and John C. Polanyi "for their contributions concerning the dynamics of chemical elementary processes". In the crossed beam method used by Herschbach and Lee, narrow beams of reactant molecules in selected quantum states are allowed to react in order to determine the reaction probability as a function of such variables as the translational, vibrational and rotational energy of the reactant molecules and their angle of approach. In contrast, the method of Polanyi measures the vibrational energy of the products by detecting the infrared chemiluminescence emitted by vibrationally excited molecules, in some cases for reactants in defined energy states. Spectroscopic observation of reaction dynamics on the shortest time scales is known as femtochemistry, since the typical times studied are of the order of 1 femtosecond = 10−15 s. This subject was recognized by the award of the 1999 Nobel Prize in Chemistry to Ahmed Zewail. In addition, theoretical studies of reaction dynamics involve calculating the potential energy surface for a reaction as a function of nuclear positions, and then calculating the trajectory of a point on this surface representing the state of the system. A correction can be applied to include the effect of quantum tunnelling through the activation energy barrier, especially for the movement of hydrogen atoms. References Further reading Steinfeld J.I., Francisco J.S. and Hase W.L. Chemical Kinetics and Dynamics (2nd ed., Prentice-Hall 1999) chaps. 6–13 Physical chemistry
Reaction dynamics
[ "Physics", "Chemistry" ]
444
[ "Physical chemistry", "Applied and interdisciplinary physics", "nan" ]
12,752,120
https://en.wikipedia.org/wiki/Island%20of%20inversion
An island of inversion is a region of the chart of nuclides where isotopes have enhanced stability in a sea of mostly very unstable nuclei at the edge of the nuclear map. Each island contains isotopes with a non-standard ordering of single particle levels in the nuclear shell model. Such an area was first described in 1975 by French physicists carrying out spectroscopic mass measurements of exotic isotopes of lithium and sodium. Since then further studies have shown that at least five such regions exist. These are centered on five neutron-rich nuclides: Li, C, Na, Si, and Cr. Because there are five known islands of inversion, physicists have suggested renaming the phenomenon "archipelago of islands of shell breaking". Studies with the purpose of defining the edges of this region are still ongoing. See also Table of nuclides Periodic table and Extended periodic table Island of stability References External links Abstract and references for the original paper Article on archipelago of shell-breaking with map of nuclide table showing the 5 known islands. From Physical Review Letters: New neutron-rich nuclei support "island of inversion" theory at the National Superconducting Cyclotron Laboratory website. Nuclear physics
Island of inversion
[ "Physics" ]
243
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
12,752,899
https://en.wikipedia.org/wiki/Alloenzyme
Alloenzymes (or also called allozymes) are variant forms of an enzyme which differ structurally but not functionally from other allozymes coded for by different alleles at the same locus. These are opposed to isozymes, which are enzymes that perform the same function, but which are coded by genes located at different loci. Alloenzymes are common biological enzymes that exhibit high levels of functional evolutionary conservation throughout specific phyla and kingdoms. They are used by phylogeneticists as molecular markers to gauge evolutionary histories and relationships between different species. This can be done because allozymes do not have the same structure. They can be separated by capillary electrophoresis. However, some species are monomorphic for many of their allozymes which would make it difficult for phylogeneticists to assess the evolutionary histories of these species. In these instances, phylogeneticists would have to use another method to determine the evolutionary history of a species. These enzymes generally perform very basic functions found commonly throughout all lifeforms, such as DNA polymerase, the enzyme that repairs and copies DNA. Significant changes in this enzyme reflect significant events in evolutionary history of organisms. As expected DNA polymerase shows relatively small differences in its amino acid sequence between phyla and even kingdoms. The key to choosing which alloenzyme to use in a comparison between multiple species is to choose one that is as variable as possible while still being present in all the organisms. By comparing the amino acid sequence of the enzyme in the species, more amino acid similarities should be seen in species that are more closely related, and fewer between those that are more distantly related. The less well conserved the enzyme is, the more amino acid differences will be present in even closely related species. See also Comparative genomics Phylogenetics Molecular phylogeny Molecular evolution Homology (biology) References Enzymes Molecular biology Evolutionary biology Genomics Phylogenetics
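The sequence comparison described above, in which more closely related species show fewer amino acid differences in a conserved enzyme, can be sketched as a simple mismatch count. This is only an illustrative toy example and is not taken from the article: the sequences and species labels below are invented, and real phylogenetic work uses proper alignment and substitution models rather than a raw Hamming distance.

```python
def count_differences(seq_a, seq_b):
    """Count position-by-position amino acid differences between two
    pre-aligned sequences of equal length (a simple Hamming distance)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(1 for x, y in zip(seq_a, seq_b) if x != y)

# Invented, pre-aligned fragments of a hypothetical conserved enzyme.
species = {
    "species_A": "MKTAYIAKQR",
    "species_B": "MKTAYVAKQR",   # 1 difference from A: presumably a close relative
    "species_C": "MRTGYVAQQR",   # 4 differences from A: more distantly related
}
reference = species["species_A"]
for name, sequence in species.items():
    print(name, count_differences(reference, sequence))
```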
Alloenzyme
[ "Chemistry", "Biology" ]
389
[ "Evolutionary biology", "Taxonomy (biology)", "Bioinformatics", "Molecular biology", "Biochemistry", "Phylogenetics" ]
19,498,558
https://en.wikipedia.org/wiki/Nucleon%20pair%20breaking%20in%20fission
Nucleon pair breaking in fission has been an important topic in nuclear physics for decades. "Nucleon pair" refers to nucleon pairing effects which strongly influence the nuclear properties of a nuclide. The most measured quantities in research on nuclear fission are the charge and mass fragment yields for uranium-235 and other fissile nuclides. In this sense, experimental results on charge distribution for low-energy fission of actinides show a preference for even-Z fragments, which is called the odd-even effect on charge yield. These distributions are important because they result from the rearrangement of nucleons during the fission process due to the interplay between collective variables and individual particle levels; they therefore make it possible to understand several aspects of the dynamics of the fission process. During the process from the saddle point (when the nucleus begins its irreversible evolution towards fragmentation) to the scission point (when the fragments are formed and the nuclear interaction between them vanishes), the shape of the fissioning system changes, which also promotes nucleons to excited particle levels. Because, for even-Z (proton number) and even-N (neutron number) nuclei, there is a gap from the ground state to the first excited particle state (a state reached by nucleon pair breaking), fragments with even Z are expected to have a higher probability of being produced than those with odd Z. The preference for even-Z, even-N divisions is interpreted as the preservation of superfluidity during the descent from saddle to scission. The absence of an odd-even effect means that the process is rather viscous. Contrary to what is observed for charge distributions, no odd-even effect on fragment mass number (A) is observed. This result is interpreted by the hypothesis that in the fission process there will always be nucleon pair breaking, which may be proton pair or neutron pair breaking, in the low-energy fission of uranium-234, uranium-236, and plutonium-240 studied by Modesto Montoya. References Fission products Nuclear fission
Nucleon pair breaking in fission
[ "Physics", "Chemistry" ]
396
[ "Nuclear fission", "Fission products", "Nuclear fallout", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Nuclear physics" ]
19,503,111
https://en.wikipedia.org/wiki/Neutron%20number
The neutron number (symbol N) is the number of neutrons in a nuclide. Atomic number (proton number) plus neutron number equals mass number: Z + N = A. The difference between the neutron number and the atomic number is known as the neutron excess: D = N − Z = A − 2Z. Neutron number is not written explicitly in nuclide symbol notation, but can be inferred as it is the difference between the two left-hand numbers (mass number and atomic number). Nuclides that have the same neutron number but different proton numbers are called isotones. This word was formed by replacing the p in isotope with n for neutron. Nuclides that have the same mass number are called isobars. Nuclides that have the same neutron excess are called isodiaphers. Chemical properties are primarily determined by proton number, which determines which chemical element the nuclide is a member of; neutron number has only a slight influence. Neutron number is primarily of interest for nuclear properties. For example, actinides with odd neutron number are usually fissile (fissionable with slow neutrons) while actinides with even neutron number are usually not fissile (but are fissionable with fast neutrons). Only 58 stable nuclides have an odd neutron number, compared to 194 with an even neutron number. No odd-neutron-number isotope is the most naturally abundant isotope in its element, except for beryllium-9 (which is the only stable beryllium isotope), nitrogen-14, and platinum-195. No stable nuclides have a neutron number of 19, 21, 35, 39, 45, 61, 89, 115, 123, or ≥ 127. There are 6 stable nuclides and one radioactive primordial nuclide with neutron number 82 (82 is the neutron number with the most stable nuclides, since it is a magic number): barium-138, lanthanum-139, cerium-140, praseodymium-141, neodymium-142, and samarium-144, as well as the radioactive primordial nuclide xenon-136, which decays by a very slow double beta process. Except for 20, 50 and 82 (all three of these are magic numbers), all other neutron numbers have at most 4 stable nuclides (in the case of 20, there are 5 stable nuclides: 36S, 37Cl, 38Ar, 39K, and 40Ca, and in the case of 50, there are 5 stable nuclides: 86Kr, 88Sr, 89Y, 90Zr, and 92Mo, and 1 radioactive primordial nuclide, 87Rb). Most odd neutron numbers have at most one stable nuclide (the exceptions are 1 (2H and 3He), 5 (9Be and 10B), 7 (13C and 14N), 55 (97Mo and 99Ru) and 107 (179Hf and 180mTa)). However, some even neutron numbers also have only one stable nuclide; these numbers are 0 (1H), 2 (4He), 4 (7Li), 84 (142Ce), 86 (146Nd) and 126 (208Pb). The case of 84 is special, since 142Ce is theoretically unstable to double beta decay, and the nuclides with 84 neutrons which are theoretically stable to both beta decay and double beta decay are 144Nd and 146Sm, but both of these nuclides are observed to alpha decay. (In theory, no stable nuclides have neutron number 19, 21, 35, 39, 45, 61, 71, 83–91, 95, 96, and ≥ 99.) In addition, no nuclides with neutron number 19, 21, 35, 39, 45, 61, 71, 89, 115, 123, 147, ... are stable to beta decay (see Beta-decay stable isobars). Only two stable nuclides have fewer neutrons than protons: hydrogen-1 and helium-3. Hydrogen-1 has the smallest neutron number, 0. References Nuclear physics
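The two relations at the start of this article (N = A − Z and the neutron excess D = N − Z) are simple enough to check in a few lines. The following is a minimal sketch; the example nuclides are chosen here purely for illustration.

```python
def neutron_number(mass_number, atomic_number):
    """N = A - Z."""
    return mass_number - atomic_number

def neutron_excess(mass_number, atomic_number):
    """D = N - Z = A - 2Z."""
    return mass_number - 2 * atomic_number

# (symbol, A, Z)
for symbol, a, z in [("H-1", 1, 1), ("He-3", 3, 2), ("Ba-138", 138, 56), ("Pb-208", 208, 82)]:
    print(symbol, "N =", neutron_number(a, z), "excess =", neutron_excess(a, z))
```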
Neutron number
[ "Physics", "Chemistry" ]
851
[ "Nuclear chemistry stubs", "Nuclear physics" ]
19,505,063
https://en.wikipedia.org/wiki/Matrix%20clock
A matrix clock is a mechanism for capturing chronological and causal relationships in a distributed system. Matrix clocks are a generalization of the notion of vector clocks. A matrix clock maintains a vector of the vector clocks for each communicating host. Every time a message is exchanged, the sending host sends not only what it knows about the global state of time, but also the state of time that it received from other hosts. This allows establishing a lower bound on what other hosts know, and is useful in applications such as checkpointing and garbage collection. References See also Lamport timestamps Vector clock Version vector Logical clock algorithms
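As a concrete sketch of the mechanism described above, the following keeps, at each process, a matrix whose rows are that process's best knowledge of every host's vector clock, piggybacks the whole matrix on each message, and merges by component-wise maximum on receipt. This is a minimal illustrative sketch assuming a fixed set of process ids; the class and method names are invented here and do not come from any particular library.

```python
class MatrixClock:
    """Row i of the matrix is this process's best knowledge of process i's
    vector clock; entry [pid][pid] counts this process's own local events."""

    def __init__(self, process_id, n):
        self.pid = process_id
        self.m = [[0] * n for _ in range(n)]

    def local_event(self):
        self.m[self.pid][self.pid] += 1

    def send(self):
        # Sending is itself an event; piggyback a copy of the whole matrix.
        self.local_event()
        return [row[:] for row in self.m]

    def receive(self, sender_id, sender_matrix):
        # Merge knowledge: component-wise maximum of the two matrices.
        for i, row in enumerate(sender_matrix):
            for j, value in enumerate(row):
                self.m[i][j] = max(self.m[i][j], value)
        # Our own vector clock also absorbs what the sender knew directly.
        for j, value in enumerate(sender_matrix[sender_id]):
            self.m[self.pid][j] = max(self.m[self.pid][j], value)
        self.local_event()

    def lower_bound(self, j):
        """Every tracked process is known to have seen at least this many
        events of process j -- the bound used for checkpointing/garbage collection."""
        return min(row[j] for row in self.m)

# Two processes exchanging one message.
p0, p1 = MatrixClock(0, 2), MatrixClock(1, 2)
message = p0.send()
p1.receive(0, message)
print(p1.m, p1.lower_bound(0))
```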
Matrix clock
[ "Physics", "Technology" ]
123
[ "Physical quantities", "Time", "Computer science stubs", "Computer science", "Computing stubs", "Spacetime", "Logical clock algorithms" ]
17,302,858
https://en.wikipedia.org/wiki/Fructolysis
Fructolysis refers to the metabolism of fructose from dietary sources. Though the metabolism of glucose through glycolysis uses many of the same enzymes and intermediate structures as those in fructolysis, the two sugars have very different metabolic fates in human metabolism. Under one percent of ingested fructose is directly converted to plasma triglyceride. 29% - 54% of fructose is converted in liver to glucose, and about a quarter of fructose is converted to lactate. 15% - 18% is converted to glycogen. Glucose and lactate are then used normally as energy to fuel cells all over the body. Fructose is a dietary monosaccharide present naturally in fruits and vegetables, either as free fructose or as part of the disaccharide sucrose, and as its polymer inulin. It is also present in the form of refined sugars including granulated sugars (white crystalline table sugar, brown sugar, confectioner's sugar, and turbinado sugar), refined crystalline fructose, as high fructose corn syrups as well as in honey. About 10% of the calories contained in the Western diet are supplied by fructose (approximately 55 g/day). Unlike glucose, fructose is not an insulin secretagogue, and can in fact lower circulating insulin. In addition to the liver, fructose is metabolized in the intestines, testis, kidney, skeletal muscle, fat tissue and brain, but it is not transported into cells via insulin-sensitive pathways (insulin regulated transporters GLUT1 and GLUT4). Instead, fructose is taken in by GLUT5. Fructose in muscles and adipose tissue is phosphorylated by hexokinase. Fructolysis and glycolysis are independent pathways Although the metabolism of fructose and glucose share many of the same intermediate structures, they have very different metabolic fates in human metabolism. Fructose is metabolized almost completely in the liver in humans, and is directed toward replenishment of liver glycogen and triglyceride synthesis, while much of dietary glucose passes through the liver and goes to skeletal muscle, where it is metabolized to CO2, H2O and ATP, and to fat cells where it is metabolized primarily to glycerol phosphate for triglyceride synthesis as well as energy production. The products of fructose metabolism are liver glycogen and de novo lipogenesis of fatty acids and eventual synthesis of endogenous triglyceride. This synthesis can be divided into two main phases: The first phase is the synthesis of the trioses, dihydroxyacetone (DHAP) and glyceraldehyde; the second phase is the subsequent metabolism of these trioses either in the gluconeogenic pathway for glycogen replenishment and/or the complete metabolism in the fructolytic pathway to pyruvate, which enters the Krebs cycle, is converted to citrate and subsequently directed toward de novo synthesis of the free fatty acid palmitate. The metabolism of fructose to DHAP and glyceraldehyde The first step in the metabolism of fructose is the phosphorylation of fructose to fructose 1-phosphate by fructokinase (Km = 0.5 mM, ≈ 9 mg/100 ml), thus trapping fructose for metabolism in the liver. Hexokinase IV (Glucokinase), also occurs in the liver and would be capable of phosphorylating fructose to fructose 6-phosphate (an intermediate in the gluconeogenic pathway); however, it has a relatively high Km (12 mM) for fructose and, therefore, essentially all of the fructose is converted to fructose-1-phosphate in the human liver. 
Much of the glucose, on the other hand, is not phosphorylated (Km of hepatic glucokinase (hexokinase IV) = 10 mM), passes through the liver directed toward peripheral tissues, and is taken up by the insulin-dependent glucose transporter, GLUT 4, present on adipose tissue and skeletal muscle. Fructose-1-phosphate then undergoes hydrolysis by fructose-1-phosphate aldolase (aldolase B) to form dihydroxyacetone phosphate (DHAP) and glyceraldehyde; DHAP can either be isomerized to glyceraldehyde 3-phosphate by triosephosphate isomerase or undergo reduction to glycerol 3-phosphate by glycerol 3-phosphate dehydrogenase. The glyceraldehyde produced may also be converted to glyceraldehyde 3-phosphate by glyceraldehyde kinase or converted to glycerol 3-phosphate by glyceraldehyde 3-phosphate dehydrogenase. The metabolism of fructose at this point yields intermediates in gluconeogenic pathway leading to glycogen synthesis, or can be oxidized to pyruvate and reduced to lactate, or be decarboxylated to acetyl CoA in the mitochondria and directed toward the synthesis of free fatty acid, resulting finally in triglyceride synthesis. Synthesis of glycogen from DHAP and glyceraldehyde-3-phosphate The synthesis of glycogen in the liver following a fructose-containing meal proceeds from gluconeogenic precursors. Fructose is initially converted to DHAP and glyceraldehyde by fructokinase and aldolase B. The resultant glyceraldehyde then undergoes phosphorylation to glyceraldehyde-3-phosphate. Increased concentrations of DHAP and glyceraldehyde-3-phosphate in the liver drive the gluconeogenic pathway toward glucose-6-phosphate, glucose-1-phosphate and glycogen formation. It appears that fructose is a better substrate for glycogen synthesis than glucose and that glycogen replenishment takes precedence over triglyceride formation. Once liver glycogen is replenished, the intermediates of fructose metabolism are primarily directed toward triglyceride synthesis. Synthesis of triglyceride from DHAP and glyceraldehyde-3-phosphate Carbons from dietary fructose are found in both the FFA and glycerol moieties of plasma triglycerides (TG). Excess dietary fructose can be converted to pyruvate, enter the Krebs cycle and emerges as citrate directed toward free fatty acid synthesis in the cytosol of hepatocytes. The DHAP formed during fructolysis can also be converted to glycerol and then glycerol 3-phosphate for TG synthesis. Thus, fructose can provide trioses for both the glycerol 3-phosphate backbone, as well as the free fatty acids in TG synthesis. Indeed, fructose may provide the bulk of the carbohydrate directed toward de novo TG synthesis in humans. Fructose induces hepatic lipogenic enzymes Fructose consumption results in the insulin-independent induction of several important hepatic lipogenic enzymes including pyruvate kinase, NADP+-dependent malate dehydrogenase, citrate lyase, acetyl CoA carboxylase, fatty acid synthase, as well as pyruvate dehydrogenase. Although not a consistent finding among metabolic feeding studies, diets high in refined fructose have been shown to lead to hypertriglyceridemia in a wide range of populations including individuals with normal glucose metabolism as well as individuals with impaired glucose tolerance, diabetes, hypertriglyceridemia, and hypertension. The hypertriglyceridemic effects observed are a hallmark of increased dietary carbohydrate, and fructose appears to be dependent on a number of factors including the amount of dietary fructose consumed and degree of insulin resistance. 
‡ = Mean ± SEM activity in nmol/min per mg protein § = 12 rats/group * = Significantly different from control at p < 0.05 Abnormalities in fructose metabolism The lack of two important enzymes in fructose metabolism results in the development of two inborn errors in carbohydrate metabolism – essential fructosuria and hereditary fructose intolerance. In addition, reduced phosphorylation potential within hepatocytes can occur with intravenous infusion of fructose. Inborn errors in fructose metabolism Essential fructosuria The absence of fructokinase results in the inability to phosphorylate fructose to fructose-1-phosphate within the cell. As a result, fructose is neither trapped within the cell nor directed toward its metabolism. Free fructose concentrations in the liver increase and fructose is free to leave the cell and enter plasma. This results in an increase in plasma concentration of fructose, eventually exceeding the kidneys' threshold for fructose reabsorption resulting in the appearance of fructose in the urine. Essential fructosuria is a benign asymptomatic condition. Hereditary fructose intolerance The absence of fructose-1-phosphate aldolase (aldolase B) results in the accumulation of fructose 1 phosphate in hepatocytes, kidney and small intestines. An accumulation of fructose-1-phosphate following fructose ingestion inhibits glycogenolysis (breakdown of glycogen) and gluconeogenesis, resulting in severe hypoglycemia. It is symptomatic resulting in severe hypoglycemia, abdominal pain, vomiting, hemorrhage, jaundice, hepatomegaly, and hyperuricemia eventually leading to liver and/or kidney failure and death. The incidence varies throughout the world, but it is estimated at 1:55,000 (range 1:10,000 to 1:100,000) live births. Reduced phosphorylation potential Intravenous (i.v.) infusion of fructose has been shown to lower phosphorylation potential in liver cells by trapping inorganic phosphate (Pi) as fructose 1-phosphate. The fructokinase reaction occurs quite rapidly in hepatocytes trapping fructose in cells by phosphorylation. On the other hand, the splitting of fructose 1 phosphate to DHAP and glyceraldehyde by Aldolase B is relatively slow. Therefore, fructose-1-phosphate accumulates with the corresponding reduction of intracellular Pi available for phosphorylation reactions in the cell. This is why fructose is contraindicated for total parenteral nutrition (TPN) solutions and is never given intravenously as a source of carbohydrate. It has been suggested that excessive dietary intake of fructose may also result in reduced phosphorylation potential. However, this is still a contentious issue. Dietary fructose is not well absorbed and increased dietary intake often results in malabsorption. Whether or not sufficient amounts of dietary fructose could be absorbed to cause a significant reduction in phosphorylating potential in liver cells remains questionable and there are no clear examples of this in the literature. References External links The Entry of Fructose and Galactose into Glycolysis, Chapter 16.1.11. Biochemistry, 5th edition, Jeremy M Berg, John L Tymoczko, and Lubert Stryer, New York: W H Freeman; 2002. Biochemistry Carbohydrate metabolism Cellular respiration Metabolic pathways
Fructolysis
[ "Chemistry", "Biology" ]
2,529
[ "Carbohydrate metabolism", "Cellular respiration", "Biochemistry", "Carbohydrate chemistry", "nan", "Metabolic pathways", "Metabolism" ]
17,303,780
https://en.wikipedia.org/wiki/Aerodynamic%20potential-flow%20code
In fluid dynamics, aerodynamic potential flow codes or panel codes are used to determine the fluid velocity, and subsequently the pressure distribution, on an object. This may be a simple two-dimensional object, such as a circle or wing, or it may be a three-dimensional vehicle. A series of singularities as sources, sinks, vortex points and doublets are used to model the panels and wakes. These codes may be valid at subsonic and supersonic speeds. History Early panel codes were developed in the late 1960s to early 1970s. Advanced panel codes, such as Panair (developed by Boeing), were first introduced in the late 1970s, and gained popularity as computing speed increased. Over time, panel codes were replaced with higher order panel methods and subsequently CFD (Computational Fluid Dynamics). However, panel codes are still used for preliminary aerodynamic analysis as the time required for an analysis run is significantly less due to a decreased number of elements. Assumptions These are the various assumptions that go into developing potential flow panel methods: Inviscid Incompressible Irrotational Steady However, the incompressible flow assumption may be removed from the potential flow derivation leaving: Potential flow (inviscid, irrotational, steady) Derivation of panel method solution to potential flow problem From Small Disturbances (subsonic) From Divergence Theorem Let Velocity U be a twice continuously differentiable function in a region of volume V in space. This function is the stream function . Let P be a point in the volume V Let S be the surface boundary of the volume V. Let Q be a point on the surface S, and . As Q goes from inside V to the surface of V, Therefore: For :, where the surface normal points inwards. This equation can be broken down into both a source term and a doublet term. The Source Strength at an arbitrary point Q is: The Doublet Strength at an arbitrary point Q is: The simplified potential flow equation is: With this equation, along with applicable boundary conditions, the potential flow problem may be solved. Required boundary conditions The velocity potential on the internal surface and all points inside V (or on the lower surface S) is 0. The Doublet Strength is: The velocity potential on the outer surface is normal to the surface and is equal to the freestream velocity. These basic equations are satisfied when the geometry is a 'watertight' geometry. If it is watertight, it is a well-posed problem. If it is not, it is an ill-posed problem. Discretization of potential flow equation The potential flow equation with well-posed boundary conditions applied is: Note that the integration term is evaluated only on the upper surface, while th integral term is evaluated on the upper and lower surfaces. The continuous surface S may now be discretized into discrete panels. These panels will approximate the shape of the actual surface. This value of the various source and doublet terms may be evaluated at a convenient point (such as the centroid of the panel). Some assumed distribution of the source and doublet strengths (typically constant or linear) are used at points other than the centroid. A single source term s of unknown strength and a single doublet term m of unknown strength are defined at a given point. where: These terms can be used to create a system of linear equations which can be solved for all the unknown values of . 
Methods for discretizing panels constant strength - simple, large number of panels required linear varying strength - reasonable answer, little difficulty in creating well-posed problems quadratic varying strength - accurate, more difficult to create a well-posed problem Some techniques are commonly used to model surfaces. Body Thickness by line sources Body Lift by line doublets Wing Thickness by constant source panels Wing Lift by constant pressure panels Wing-Body Interface by constant pressure panels Methods of determining pressure Once the Velocity at every point is determined, the pressure can be determined by using one of the following formulas. All various Pressure coefficient methods produce results that are similar and are commonly used to identify regions where the results are invalid. Pressure Coefficient is defined as: The Isentropic Pressure Coefficient is: The Incompressible Pressure Coefficient is: The Second Order Pressure Coefficient is: The Slender Body Theory Pressure Coefficient is: The Linear Theory Pressure Coefficient is: The Reduced Second Order Pressure Coefficient is: What panel methods cannot do Panel methods are inviscid solutions. You will not capture viscous effects except via user "modeling" by changing the geometry. Solutions are invalid as soon as the flow changes locally from subsonic to supersonic (i.e. the critical Mach number has been exceeded) or vice versa. Potential flow software See also Stream function Conformal mapping Velocity potential Divergence theorem Joukowsky transform Potential flow Circulation Biot–Savart law Notes References Public Domain Aerodynamic Software, A Panair Distribution Source, Ralph Carmichael Panair Volume I, Theory Manual, Version 3.0, Michael Epton, Alfred Magnus, 1990 Boeing Panair Volume II, Theory Manual, Version 3.0, Michael Epton, Alfred Magnus, 1990 Boeing Panair Volume III, Case Manual, Version 1.0, Michael Epton, Kenneth Sidewell, Alfred Magnus, 1981 Boeing Panair Volume IV, Maintenance Document, Version 3.0, Michael Epton, Kenneth Sidewell, Alfred Magnus, 1991 Boeing Recent Experience in Using Finite Element Methods For The Solution Of Problems In Aerodynamic Interference, Ralph Carmichael, 1971 NASA Ames Research Center Fluid dynamics
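As a small illustration of the pressure-determination step described above, the sketch below evaluates the incompressible pressure coefficient Cp = 1 − (V/V∞)² and, for comparison, a standard isentropic (compressible) coefficient, once surface velocities are available from a panel solution. This is a hedged sketch: the isentropic expression written here is the usual compressible relation in terms of the freestream Mach number, not necessarily the exact variant used in any particular panel code, and the velocity values are placeholders rather than output of a real analysis.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def cp_incompressible(v_local, v_inf):
    """Incompressible pressure coefficient: Cp = 1 - (V/Vinf)^2."""
    return 1.0 - (v_local / v_inf) ** 2

def cp_isentropic(v_local, v_inf, mach_inf):
    """Isentropic (compressible) pressure coefficient from the local velocity
    ratio and the freestream Mach number."""
    ratio = 1.0 + 0.5 * (GAMMA - 1.0) * mach_inf**2 * (1.0 - (v_local / v_inf) ** 2)
    return 2.0 / (GAMMA * mach_inf**2) * (ratio ** (GAMMA / (GAMMA - 1.0)) - 1.0)

# Placeholder surface velocities (e.g. at panel control points), in m/s.
v_inf = 50.0
v_surface = np.array([0.0, 30.0, 55.0, 65.0])  # stagnation point, then suction side
print(cp_incompressible(v_surface, v_inf))
print(cp_isentropic(v_surface, v_inf, mach_inf=0.15))
```

At the stagnation point the incompressible value is exactly 1, while the compressible value comes out slightly above 1, which is the expected behaviour at low subsonic Mach numbers.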
Aerodynamic potential-flow code
[ "Chemistry", "Engineering" ]
1,111
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
17,303,960
https://en.wikipedia.org/wiki/Plastic%20pipework
Plastic pipe is a tubular section, or hollow cylinder, made of plastic. It is usually, but not necessarily, of circular cross-section, used mainly to convey substances which can flow—liquids and gases (fluids), slurries, powders and masses of small solids. It can also be used for structural applications; hollow pipes are far stiffer per unit weight than solid members. Plastic pipework is used for the conveyance of drinking water, waste water, chemicals, heating fluid and cooling fluids, foodstuffs, ultra-pure liquids, slurries, gases, compressed air, irrigation, plastic pressure pipe systems, and vacuum system applications. Types There are three basic types of plastic pipes: Solid wall pipe Extruded pipes consisting of one layer of a homogeneous matrix of thermoplastic material which is ready for use in a pipeline. Structured wall pipe Structured-wall pipes and fittings are products which have an optimized design with regard to material usage to achieve the physical, mechanical and performance requirements. Structured Wall Pipes are tailor made solutions of piping systems, for a variety of applications and in most cases developed in cooperation with users. Barrier pipe Pipe incorporating a flexible metallic layer as the middle of three bonded layers. Barrier pipe is used, for example, to provide additional protection for the contents passing through the pipe (particularly drinking water) from aggressive chemicals or other pollution when laid in ground contaminated by previous use. Most plastic pipe systems are made from thermoplastic materials. The production method involves melting the material, shaping and then cooling. Pipes are normally produced by extrusion. Standards Plastic pipe systems fulfil a variety of service requirements. Product standards for plastics pipe systems are prepared within the CEN/TC155 standards committee. These requirements are described in a set of European Product Standards for each application alongside their specific characteristics, for example: Conveyance of drinking water: Hygienic requirements Conveyance of gas: Highest Safety requirements Plastic pipes for radiant heating and floor heating: Temperature resistance over decades Sewer applications: High chemical resistance Plastic pipes are capable of fulfilling the specific requirement for each application. They do so over a long lifetime and with reliability and safety. The key success factor is achieved by maintaining consistently high quality levels. For plastic pipe products, these levels are defined by the different standards. Two aspects are fundamentally important for the performance of plastic pipes: flexibility and long lifetime. Materials used ABS (acrylonitrile butadiene styrene) CPVC (chlorinated polyvinyl chloride) HDPE (high-density polyethylene) PB-1 (polybutylene) PE (polyethylene) of various densities, also abbreviated to LDPE, MDPE and HDPE (low, medium and high density polyethylene; the medium density version is at times referred to as "black alkathene" in the UK) PE-RT (polyethylene of raised temperature (RT)) PEX (cross-linked polyethylene) PP (polypropylene) PVDF (polyvinylidene difluoride) UPVC (unplasticized polyvinyl chloride) Material characteristics ABS (acrylonitrile butadiene styrene) Acrylonitrile butadiene styrene (ABS) is used for the conveyance of potable water, slurries and chemicals. Most commonly used for DWV (drain-waste-vent) applications. It has a wide temperature range, from -40 °C to +60 °C. 
ABS is a thermoplastic material and was originally developed in the early 1950s for use in oil fields and the chemical industry. The variability of the material and its relative cost effectiveness has made it a popular engineering plastic. It can be tailored to a range of applications by modifying the ratio of the individual chemical components. They are used mainly in industrial applications where high impact strength and rigidity are essential. This material is also used in non-pressure piping systems for soil and waste. CPVC (chlorinated polyvinyl chloride) Chlorinated polyvinyl chloride (CPVC) is resistant to many acids, bases, salts, paraffinic hydrocarbons, halogens and alcohols. It is not resistant to solvents, aromatics and some chlorinated hydrocarbons. It can carry higher temperature liquids than uPVC with a max operating temperature reaching . Due to its greater temperature threshold and chemical resistance, CPVC is one of the main recommended material choices in residential, commercial, and industrial water and liquid transport. HDPE (high-density polyethylene) High-density polyethylene (HDPE) - HDPE pipe is strong, flexible and light weight. It has a zero leak rate when fused together. PB-1 (polybutylene) PB-1 is used in pressure piping systems for hot and cold potable water, pre-insulated district heating networks, and surface heating and cooling systems. Key properties are weldability, temperature resistance, flexibility and high hydrostatic pressure resistance. One standard type, PB 125, has a minimum required strength (MRS) of 12.5 MPa. It also has low noise transmission, low linear thermal expansion, no corrosion and calcification. PB-1 piping systems are no longer sold in North America. Market share in Europe and Asia is small but steadily growing. In some markets, e.g. Kuwait, UK, Korea and Spain, PB-1 has a strong position. PE (polyethylene) Polyethylene has been successfully used for the safe conveyance of potable and waste water, hazardous waste, and compressed gases for many years. Two variants are HDPE pipe (high-density polyethylene) and the more heat resistant PEX (cross-linked polyethylene, also XLPE). PE has been used for pipes since the early 1950s. PE pipes are made by extrusion in a variety of sizes dimensions. PE is lightweight, flexible and easy to weld. Its smooth interior finish ensures good flow characteristics. Continuous development of the material has enhanced its performance, leading to rapidly increasing usage by major water and gas utility companies throughout the world. The pipes are also used in lining and trench-less technologies, the so-called no-dig applications where the pipes are installed without digging trenches without any disruption above ground. Here the pipes may be used to line old pipe systems to reduce leakage and improve water quality. These solutions are therefore helping engineers to rehabilitate antiquated pipe systems. Excavation is minimal and the process is carried out quickly below ground. Also for PE pipe material, several studies demonstrated the long track record with expected lifetime of more than 50 years. Cross-linked polyethylene is commonly referred to as XLPE or PEX. It is a thermoplastic material that can be made in three different ways depending how the cross-linking of the polymer chains is being made. PEX was developed in the 1950s. It has been used for pipes in Europe since the early 1970s and has been gaining rapid popularity over the last few decades. 
Often supplied in coils, it is flexible and can therefore be led around structures without fittings. Its strength at temperatures ranging from below freezing up to almost boiling makes it an ideal pipe material for hot and cold water installations, radiator and under floor heating, de-icing and ceiling cooling applications PE-RT Polyethylene of Raised Temperature (RT) or PE-RT expands the traditional properties of polyethylene. Enhanced strength at high temperatures are thus made possible through special molecular design and manufacturing process control. Its resistance to low or high temperatures makes PE-RT ideal for a broad range of hot and cold water pipe applications. PP (polypropylene) Polypropylene is suitable for use with foodstuffs, potable and ultra pure waters, as well as within the pharmaceutical and chemical industries. PP is a thermoplastic polymer made from polypropylene. It was first invented in the 1950s and has been used for pipes since the 1970s. Due to the high impact resistance combined with good stiffness and high chemical resistance makes this material suitable for sewer applications. A good performance at operating temperature range from up to (continuous) makes this material suitable for in-house discharge systems for soil & waste. A special PP grade with high temperature behaviour up to (short-term) makes that material a good choice for in-house warm water supply. PVDF (polyvinylidene difluoride) Polyvinylidene difluoride (PVDF) is a fairly non-reactive, thermoplastic fluoropolymer with excellent chemical and thermal resistance for plastic pipework uses. PVDF resin is produced through polymerization of the vinylidene fluoride monomer. The PVDF resin is then used to make PVDF pipe as well as many other products. Industries and applications select PVDF pipe due to its inert, durable qualities. PVDF piping is used most in the chemical process industry due to its ability to plumb aggressive, corrosive solutions. PVDF pipe also sees common use in high purity applications, semi-conductor fabrication, electronics / electricity, pharmaceutical developments, and nuclear waste processing. PVDF piping specifications and performance characteristics approve PVDF pipe up to under pressurized system conditions. The pipe does not support fungus growth according to military test standard method 508, 81-0B. Dissimilar from other common thermoplastic pipes, (uPVC, CPVC, PE, PP), PVDF does not exhibit sensitivity to UV light or ozone oxidative damage, approving it for long term outdoor uses. uPVC (unplasticized polyvinyl chloride) uPVC or PVC-U, is a thermoplastic material derived from common salt and fossil fuels. The pipe material has the longest track record of all plastic materials. The first uPVC pipes were made in the 1930s. Beginning in the 1950s, uPVC pipes were used to replace corroded metal pipes and thus bring fresh drinking water to a growing rural and later urban population. uPVC pipes are certified safe for drinking water per NSF Standard 61 and used extensively for water distribution and transmission pipelines throughout North America and around the world. uPVC is allowed for waste lines in homes and is the most often used pipe for sanitary sewers. Further pressure and non-pressure applications in the field of sewers, soil and waste, gas (low pressure) and cable protection soon followed. The material's contribution to public health, hygiene and well-being has therefore been significant. 
Polyvinyl chloride or uPVC (unplasticized polyvinyl chloride) pipes are not well suited for hot water lines and have been restricted from inside water supply line use in the US for homes since 2006. Code IRC P2904.5 uPVC Not listed. uPVC has high chemical resistance across its operating temperature range, with a broad band of operating pressures. Max operating temperature is reported at , and max working pressure: . Due to its long-term strength characteristics, high stiffness and cost effectiveness, uPVC systems account for a large proportion of plastic piping installations and some estimations put it that greater than of uPVC pipe are currently in service across applications. uPVC variants Based on the standard polyvinyl chloride material, three other variants are in use. One variant called OPVC, or PVCO, represents an important landmark in the history of plastic pipe technology. This molecular-oriented bi-axial high performance version combines higher strength with extra impact resistance. A ductile variant is the MPVC, polyvinyl chloride modified with acrylics or chlorinated PE. This more ductile material with high fracture resistance is used in higher-demand applications where resistance against cracking and stress corrosion is important. In several studies the long track record of uPVC pipes has been investigated. Recent investigations at the German KRV and the Dutch TNO have confirmed that uPVC water pressure pipes, when installed correctly have a useful life span of over 100 years. Characteristics Longevity of plastic piping systems Plastic pipes have been used in service for over 50 years. The predicted lifetime of plastic piping systems exceeds 100 years. Several industry studies have demonstrated this prognosis. Plastic pipe materials have always been classified on the basis of long-term pressure testing. The measured failure times as a function of the stresses in the pipe wall has been demonstrated in so-called Regression Curves. An extrapolation based on measured failure times has been calculated to reach 50 years. The predicted failure stress at 50 years was taken as a basis for the classification. This value is called MRS, Minimum Required Stress, at 50 years. Pipe system failure Some reasons why plastic piping systems may fail are poor product bonding/gluing during installation and naturally-occurring physical damage, such as from tree root infiltration. Plastic pipes were also found to fail more often during dry, hot summers. Flexibility Plastic Pipes are classified by their ring stiffness. The preferred stiffness classes as described in several product standards are: SN2, SN4, SN8 and SN16, where SN is Nominal Stiffness (kN/m2). Stiffness of pipes is important if they are to withstand external loadings during installation. The higher the figure, the stiffer the pipe. After correct installation, pipe deflection remains limited but it will continue to some extent for a while. In relation to the soil in which it is embedded, the plastic pipe behaves in a 'flexible' way. This means that further deflection in time depends on the settlement of the soil around the pipe. Basically, the pipe follows the soil movement or settlement of the backfill, as technicians call it. This means that good installation of pipes will result in good soil settlement. Further deflection will remain limited. For flexible pipes, the soil loading is distributed and supported by the surrounding soil. Stresses and strains caused by the deflection of the pipe will occur within the pipe wall. 
However, the induced stresses will never exceed the allowed limit values. The thermoplastic behavior of the pipe material is such that the induced stresses are relaxing to a low level. Induced strains are far below the allowable levels. This flexible behaviour means that the pipe will not fail. It will exhibit only more deflection while keeping its function without breaking. However, rigid pipes by their very nature are not flexible and will not follow ground movements. They will bear all the ground loadings, whatever the soil settlement. This means that when a rigid pipe is subject to excessive loading, it will reach the limit for stress values more quickly and break. It can therefore be concluded that the flexibility of plastic pipes offers an extra dimension of safety. Buried Pipes need flexibility. Components of plastic pressure pipe systems Pipes, fittings, valves, and accessories make up a plastic pressure pipe system. The range of pipe diameters for each pipe system does vary. However, the size ranges from and . Pipes are extruded and are generally available in: , , , and straight lengths and , , , and coils for LDPE and HDPE. Pipe fittings are moulded and come in many sizes: tee 90° equal (straight and reducing), tee 45°, cross equal, elbow 90° (straight and reducing), elbow 45°, short radius bend 90° socket/coupler (straight and reducing), union, end caps, reducing bush, and stub, full face, and blanking flanges. Valves are moulded and also come in many types: ball valves (also multiport valve), butterfly valves, spring-, ball-, and swing-check non-return valves, diaphragm valves, knife gate valve, globe valves and pressure relief/reduction valves. Accessories are solvents, cleaners, glues, clips, backing rings, and gaskets. See also HDPE Pipe Pipe support Piping Reinforced thermoplastic pipe References External links ISO Technical Committee TC 138 - Plastics pipes, fittings and valves for the transport of fluids ASTM Plastics Pipe Standards Plastics Pipe Institute (PPI) Plastics Pipes and Fittings Association (PPFA) The European Plastic Pipes and Fittings Association (TEPPFA) Piping Plumbing Pipe manufacture
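The stiffness classes quoted above (SN, in kN/m²) can be roughly estimated for a given pipe wall from the commonly used approximation S ≈ E·I/Dm³, with I = e³/12 per unit length of pipe, where E is the material modulus, e the wall thickness and Dm the mean diameter. This is a hedged, order-of-magnitude sketch only: the normative ring stiffness value comes from the test method in the relevant standard, and the modulus and dimensions below are generic placeholder figures, not data from this article.

```python
def ring_stiffness_kn_per_m2(e_modulus_mpa, wall_mm, mean_diameter_mm):
    """Approximate ring stiffness S = E*I/Dm^3 in kN/m^2,
    taking I = e^3/12 per metre of pipe length."""
    e_pa = e_modulus_mpa * 1e6            # MPa -> Pa
    wall_m = wall_mm / 1000.0
    dm_m = mean_diameter_mm / 1000.0
    inertia = wall_m ** 3 / 12.0          # m^4 per m of pipe
    return e_pa * inertia / dm_m ** 3 / 1000.0   # Pa -> kN/m^2

# Placeholder example: a uPVC sewer pipe wall, short-term E ~ 3000 MPa
# (long-term moduli are considerably lower).
print(round(ring_stiffness_kn_per_m2(3000, 12.3, 488), 1))  # roughly in the SN 4 class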
Plastic pipework
[ "Chemistry", "Engineering" ]
3,361
[ "Building engineering", "Chemical engineering", "Plumbing", "Construction", "Pipe manufacture", "Mechanical engineering", "Piping" ]
22,173,048
https://en.wikipedia.org/wiki/Volume%20conjecture
In the branch of mathematics called knot theory, the volume conjecture is an open problem that relates quantum invariants of knots to the hyperbolic geometry of their complements. Statement Let O denote the unknot. For any knot K, let ⟨K⟩N be the Kashaev invariant of K, which may be defined as ⟨K⟩N = lim (as q → e2πi/N) of JK,N(q)/JO,N(q), where JK,N(q) is the N-colored Jones polynomial of K. The volume conjecture states that lim (as N → ∞) of 2π log|⟨K⟩N| / N = vol(K), where vol(K) is the simplicial volume of the complement of K in the 3-sphere, defined as follows. By the JSJ decomposition, the complement may be uniquely decomposed along a system of tori into pieces, each of which is either hyperbolic or Seifert-fibered. The simplicial volume vol(K) is then defined as the sum of the hyperbolic volumes of the hyperbolic pieces of this decomposition. As a special case, if K is a hyperbolic knot, then the JSJ decomposition consists of a single hyperbolic piece, and by definition the simplicial volume agrees with the hyperbolic volume. History The Kashaev invariant was first introduced by Rinat M. Kashaev in 1994 and 1995 for hyperbolic links as a state sum using the theory of quantum dilogarithms. Kashaev stated the formula of the volume conjecture in the case of hyperbolic knots in 1997. Hitoshi Murakami and Jun Murakami pointed out that the Kashaev invariant is related to the colored Jones polynomial by replacing the variable q with the root of unity e2πi/N. They used an R-matrix as the discrete Fourier transform for the equivalence of these two descriptions. This paper was the first to state the volume conjecture in its modern form using the simplicial volume. They also proved that the volume conjecture implies the following conjecture of Victor Vasiliev: If all Vassiliev invariants of a knot agree with those of the unknot, then the knot is the unknot. The key observation in their proof is that if every Vassiliev invariant of a knot is trivial, then its colored Jones polynomials coincide with those of the unknot for every N. Status The volume conjecture is open for general knots, and it is known to be false for arbitrary links. The volume conjecture has been verified in many special cases, including: The figure-eight knot (Tobias Ekholm), The three-twist knot (Rinat Kashaev and Yoshiyuki Yokota), The Borromean rings (Stavros Garoufalidis and Thang Le), Torus knots (Rinat Kashaev and Olav Tirkkonen), All knots and links with volume zero (Roland van der Veen), Twisted Whitehead links (Hao Zheng), Whitehead doubles of a family of nontrivial torus knots (Hao Zheng). Relation to Chern-Simons theory Using complexification, it has been proved that for a hyperbolic knot K the limit lim (as N → ∞) of 2π log⟨K⟩N / N, now taken without absolute values, equals vol(K) + i CS(K) for a suitable normalization of the Chern–Simons invariant CS(K). This work established a relationship between the complexified colored Jones polynomial and Chern–Simons theory. References Notes Sources Knot theory Conjectures Unsolved problems in geometry
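For the figure-eight knot, the first verified case listed above, Kashaev's invariant has a well-known closed form, ⟨4₁⟩N = Σ (j = 0 to N−1) of |(q; q)j|² with q = e2πi/N, where (q; q)j is the q-Pochhammer symbol, and the conjectured limit is the hyperbolic volume of the figure-eight complement, approximately 2.02988. The sketch below evaluates 2π log⟨4₁⟩N / N numerically; the closed form and the numerical volume are standard in the literature but are quoted here from memory rather than from this article, so treat the snippet as illustrative only.

```python
import cmath
import math

def kashaev_figure_eight(n):
    """Kashaev invariant of the figure-eight knot:
    <4_1>_N = sum_{j=0}^{N-1} |(q; q)_j|^2 with q = exp(2*pi*i/N)."""
    q = cmath.exp(2j * math.pi / n)
    total, pochhammer_sq = 0.0, 1.0      # |(q; q)_0|^2 = 1 (empty product)
    for j in range(n):
        total += pochhammer_sq
        pochhammer_sq *= abs(1 - q ** (j + 1)) ** 2
    return total

for n in (50, 200, 1000):
    value = kashaev_figure_eight(n)
    print(n, 2 * math.pi * math.log(value) / n)
# The printed values decrease slowly towards ~2.0299, the hyperbolic volume of
# the figure-eight knot complement, as the volume conjecture predicts.
```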
Volume conjecture
[ "Mathematics" ]
601
[ "Geometry problems", "Unsolved problems in mathematics", "Unsolved problems in geometry", "Conjectures", "Mathematical problems" ]
22,174,129
https://en.wikipedia.org/wiki/Community%20matrix
In mathematical biology, the community matrix is the linearization of a generalized Lotka–Volterra equation at an equilibrium point. The eigenvalues of the community matrix determine the stability of the equilibrium point. For example, the Lotka–Volterra predator–prey model is dx/dt = x(α − βy), dy/dt = −y(γ − δx), where x(t) denotes the number of prey, y(t) the number of predators, and α, β, γ and δ are constants. By the Hartman–Grobman theorem the non-linear system is topologically equivalent to a linearization of the system about an equilibrium point (x*, y*), which has the form d/dt (u, v) = A (u, v), where u = x − x* and v = y − y*. In mathematical biology, the Jacobian matrix A = [[α − βy*, −βx*], [δy*, δx* − γ]] evaluated at the equilibrium point (x*, y*) is called the community matrix. By the stable manifold theorem, if one or both eigenvalues of A have positive real part then the equilibrium is unstable, but if all eigenvalues have negative real part then it is stable. See also Paradox of enrichment References Mathematical and theoretical biology Population ecology Dynamical systems Matrices
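A small numerical illustration of the stability test described above, using the coexistence equilibrium x* = γ/δ, y* = α/β of the predator–prey system; the parameter values are arbitrary placeholders. For this particular equilibrium the eigenvalues come out purely imaginary, so the linearization is the borderline, neutrally stable case rather than asymptotically stable or unstable.

```python
import numpy as np

alpha, beta, gamma, delta = 1.0, 0.5, 0.8, 0.2

# Coexistence equilibrium of dx/dt = x(alpha - beta*y), dy/dt = -y(gamma - delta*x).
x_star, y_star = gamma / delta, alpha / beta

# Community matrix = Jacobian of the vector field evaluated at (x*, y*).
community = np.array([
    [alpha - beta * y_star, -beta * x_star],
    [delta * y_star,         delta * x_star - gamma],
])

eigenvalues = np.linalg.eigvals(community)
print(eigenvalues)
print("stable" if np.all(eigenvalues.real < 0) else "not asymptotically stable")
```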
Community matrix
[ "Physics", "Mathematics" ]
231
[ "Mathematical and theoretical biology", "Applied mathematics", "Mathematical objects", "Matrices (mathematics)", "Mechanics", "Applied mathematics stubs", "Dynamical systems" ]
760,994
https://en.wikipedia.org/wiki/Electron%20mobility
In solid-state physics, the electron mobility characterises how quickly an electron can move through a metal or semiconductor when pushed or pulled by an electric field. There is an analogous quantity for holes, called hole mobility. The term carrier mobility refers in general to both electron and hole mobility. Electron and hole mobility are special cases of electrical mobility of charged particles in a fluid under an applied electric field. When an electric field E is applied across a piece of material, the electrons respond by moving with an average velocity called the drift velocity, vd. Then the electron mobility μ is defined as μ = vd / E. Electron mobility is almost always specified in units of cm2/(V⋅s). This is different from the SI unit of mobility, m2/(V⋅s). They are related by 1 m2/(V⋅s) = 104 cm2/(V⋅s). Conductivity is proportional to the product of mobility and carrier concentration. For example, the same conductivity could come from a small number of electrons with high mobility for each, or a large number of electrons with a small mobility for each. For semiconductors, the behavior of transistors and other devices can be very different depending on whether there are many electrons with low mobility or few electrons with high mobility. Therefore mobility is a very important parameter for semiconductor materials. Almost always, higher mobility leads to better device performance, with other things equal. Semiconductor mobility depends on the impurity concentrations (including donor and acceptor concentrations), defect concentration, temperature, and electron and hole concentrations. It also depends on the electric field, particularly at high fields when velocity saturation occurs. It can be determined by the Hall effect, or inferred from transistor behavior. Introduction Drift velocity in an electric field Without any applied electric field, in a solid, electrons and holes move around randomly. Therefore, on average there will be no overall motion of charge carriers in any particular direction over time. However, when an electric field is applied, each electron or hole is accelerated by the electric field. If the electron were in a vacuum, it would be accelerated to ever-increasing velocity (called ballistic transport). However, in a solid, the electron repeatedly scatters off crystal defects, phonons, impurities, etc., so that it loses some energy and changes direction. The final result is that the electron moves with a finite average velocity, called the drift velocity. This net electron motion is usually much slower than the normally occurring random motion. The two charge carriers, electrons and holes, will typically have different drift velocities for the same electric field. Quasi-ballistic transport is possible in solids if the electrons are accelerated across a very small distance (as small as the mean free path), or for a very short time (as short as the mean free time). In these cases, drift velocity and mobility are not meaningful. Definition and units The electron mobility is defined by the equation: μe = vd / E, where: E is the magnitude of the electric field applied to a material, vd is the magnitude of the electron drift velocity (in other words, the electron drift speed) caused by the electric field, and μe is the electron mobility. The hole mobility is defined by a similar equation: μh = vd / E. Both electron and hole mobilities are positive by definition. 
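To make the definition and the unit conversion above concrete, the short sketch below computes a drift velocity from a mobility and a field, converting the mobility from the customary cm2/(V·s) into SI units first. The numerical values are typical textbook figures used here purely as placeholders, not data taken from this article.

```python
def drift_velocity(mobility_cm2_per_vs, field_v_per_m):
    """v_d = mu * E, with the mobility given in cm^2/(V*s) and E in V/m.
    Returns the drift speed in m/s."""
    mobility_si = mobility_cm2_per_vs * 1e-4   # cm^2/(V*s) -> m^2/(V*s)
    return mobility_si * field_v_per_m

# Electrons in silicon, mu ~ 1400 cm^2/(V*s), in a field of 1 kV/cm = 1e5 V/m.
print(drift_velocity(1400, 1e5))   # ~1.4e4 m/s, well below the saturation velocity
```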
Usually, the electron drift velocity in a material is directly proportional to the electric field, which means that the electron mobility is a constant (independent of the electric field). When this is not true (for example, in very large electric fields), mobility depends on the electric field. The SI unit of velocity is m/s, and the SI unit of electric field is V/m. Therefore the SI unit of mobility is (m/s)/(V/m) = m2/(V⋅s). However, mobility is much more commonly expressed in cm2/(V⋅s) = 10−4 m2/(V⋅s). Mobility is usually a strong function of material impurities and temperature, and is determined empirically. Mobility values are typically presented in table or chart form. Mobility is also different for electrons and holes in a given material. Derivation Starting with Newton's second law: where: a is the acceleration between collisions. F is the electric force exerted by the electric field, and is the effective mass of an electron. Since the force on the electron is −eE: This is the acceleration on the electron between collisions. The drift velocity is therefore: where is the mean free time Since we only care about how the drift velocity changes with the electric field, we lump the loose terms together to get where Similarly, for holes we have where Note that both electron mobility and hole mobility are positive. A minus sign is added for electron drift velocity to account for the minus charge. Relation to current density The drift current density resulting from an electric field can be calculated from the drift velocity. Consider a sample with cross-sectional area A, length l and an electron concentration of n. The current carried by each electron must be , so that the total current density due to electrons is given by: Using the expression for gives A similar set of equations applies to the holes, (noting that the charge on a hole is positive). Therefore the current density due to holes is given by where p is the hole concentration and the hole mobility. The total current density is the sum of the electron and hole components: Relation to conductivity We have previously derived the relationship between electron mobility and current density Now Ohm's law can be written in the form where is defined as the conductivity. Therefore we can write down: which can be factorised to Relation to electron diffusion In a region where n and p vary with distance, a diffusion current is superimposed on that due to conductivity. This diffusion current is governed by Fick's law: where: F is flux. De is the diffusion coefficient or diffusivity is the concentration gradient of electrons The diffusion coefficient for a charge carrier is related to its mobility by the Einstein relation. For a classical system (e.g. Boltzmann gas), it reads: where: kB is the Boltzmann constant T is the absolute temperature e is the electric charge of an electron For a metal, described by a Fermi gas (Fermi liquid), quantum version of the Einstein relation should be used. Typically, temperature is much smaller than the Fermi energy, in this case one should use the following formula: where: EF is the Fermi energy Examples Typical electron mobility at room temperature (300 K) in metals like gold, copper and silver is 30–50 cm2/(V⋅s). Carrier mobility in semiconductors is doping dependent. In silicon (Si) the electron mobility is of the order of 1,000, in germanium around 4,000, and in gallium arsenide up to 10,000 cm2/(V⋅s). 
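The relations collected above (mobility from the scattering time, conductivity from mobility and carrier concentration, and the classical Einstein relation) can be illustrated numerically. The following minimal Python sketch assumes round, textbook-style values for an n-type sample; the effective mass, scattering time and carrier concentration are assumptions, not figures from this article.

# Conductivity and diffusivity for an n-type sample (illustrative values).
q   = 1.602e-19        # elementary charge, C
k_B = 1.381e-23        # Boltzmann constant, J/K
m0  = 9.109e-31        # free-electron mass, kg
T   = 300.0            # temperature, K

m_eff = 0.26 * m0      # assumed electron effective mass
tau   = 2.0e-13        # assumed mean free time between collisions, s
n     = 1.0e22         # assumed electron concentration, m^-3 (1e16 cm^-3)

mu_n  = q * tau / m_eff        # mobility from the scattering-time relation
sigma = q * n * mu_n           # conductivity (hole contribution neglected here)
D_n   = mu_n * k_B * T / q     # classical Einstein relation

print(f"mu_n  = {mu_n*1e4:.0f} cm^2/(V.s)")
print(f"sigma = {sigma:.1f} S/m")
print(f"D_n   = {D_n*1e4:.1f} cm^2/s")

With these assumed values the mobility comes out around 1,350 cm2/(V⋅s), of the same order as the figure quoted above for silicon.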
Hole mobilities are generally lower and range from around 100 cm2/(V⋅s) in gallium arsenide, to 450 in silicon, and 2,000 in germanium. Very high mobility has been found in several ultrapure low-dimensional systems, such as two-dimensional electron gases (2DEG) (35,000,000 cm2/(V⋅s) at low temperature), carbon nanotubes (100,000 cm2/(V⋅s) at room temperature) and freestanding graphene (200,000 cm2/(V⋅s) at low temperature). Organic semiconductors (polymer, oligomer) developed thus far have carrier mobilities below 50 cm2/(V⋅s), and typically below 1, with well performing materials measured below 10. Electric field dependence and velocity saturation At low fields, the drift velocity vd is proportional to the electric field E, so mobility μ is constant. This value of μ is called the low-field mobility. As the electric field is increased, however, the carrier velocity increases sublinearly and asymptotically towards a maximum possible value, called the saturation velocity vsat. For example, the value of vsat is on the order of 1×107 cm/s for both electrons and holes in Si. It is on the order of 6×106 cm/s for Ge. This velocity is a characteristic of the material and a strong function of doping or impurity levels and temperature. It is one of the key material and semiconductor device properties that determine a device such as a transistor's ultimate limit of speed of response and frequency. This velocity saturation phenomenon results from a process called optical phonon scattering. At high fields, carriers are accelerated enough to gain sufficient kinetic energy between collisions to emit an optical phonon, and they do so very quickly, before being accelerated once again. The velocity that the electron reaches before emitting a phonon is: where ωphonon(opt.) is the optical-phonon angular frequency and m* the carrier effective mass in the direction of the electric field. The value of Ephonon (opt.) is 0.063 eV for Si and 0.034 eV for GaAs and Ge. The saturation velocity is only one-half of vemit, because the electron starts at zero velocity and accelerates up to vemit in each cycle. (This is a somewhat oversimplified description.) Velocity saturation is not the only possible high-field behavior. Another is the Gunn effect, where a sufficiently high electric field can cause intervalley electron transfer, which reduces drift velocity. This is unusual; increasing the electric field almost always increases the drift velocity, or else leaves it unchanged. The result is negative differential resistance. In the regime of velocity saturation (or other high-field effects), mobility is a strong function of electric field. This means that mobility is a somewhat less useful concept, compared to simply discussing drift velocity directly. Relation between scattering and mobility Recall that by definition, mobility is dependent on the drift velocity. The main factor determining drift velocity (other than effective mass) is scattering time, i.e. how long the carrier is ballistically accelerated by the electric field until it scatters (collides) with something that changes its direction and/or energy. The most important sources of scattering in typical semiconductor materials, discussed below, are ionized impurity scattering and acoustic phonon scattering (also called lattice scattering). In some cases other sources of scattering may be important, such as neutral impurity scattering, optical phonon scattering, surface scattering, and defect scattering. 
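The emission-velocity argument above can be checked numerically. In the sketch below, the optical-phonon energy is the 0.063 eV silicon value quoted in the text, while the effective mass of 0.26 m0 is an assumed, typical value for electrons in silicon rather than something stated here.

import math

m0       = 9.109e-31               # free-electron mass, kg
eV       = 1.602e-19               # J per eV
E_phonon = 0.063 * eV              # Si optical-phonon energy (from the text)
m_eff    = 0.26 * m0               # assumed electron effective mass in Si

# Kinetic energy gained between collisions equals the phonon energy:
#   (1/2) * m_eff * v_emit**2 = E_phonon
v_emit = math.sqrt(2.0 * E_phonon / m_eff)
v_sat  = 0.5 * v_emit              # simplified picture: velocity cycles between 0 and v_emit

print(f"v_emit ~ {v_emit:.2e} m/s, v_sat ~ {v_sat:.2e} m/s")

The resulting saturation velocity, roughly 1.5×10^7 cm/s, is consistent with the order-of-10^7 cm/s figure quoted above for silicon.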
Elastic scattering means that energy is (almost) conserved during the scattering event. Some elastic scattering processes are scattering from acoustic phonons, impurity scattering, piezoelectric scattering, etc. In acoustic phonon scattering, electrons scatter from state k to k', while emitting or absorbing a phonon of wave vector q. This phenomenon is usually modeled by assuming that lattice vibrations cause small shifts in energy bands. The additional potential causing the scattering process is generated by the deviations of bands due to these small transitions from frozen lattice positions. Ionized impurity scattering Semiconductors are doped with donors and/or acceptors, which are typically ionized, and are thus charged. The Coulombic forces will deflect an electron or hole approaching the ionized impurity. This is known as ionized impurity scattering. The amount of deflection depends on the speed of the carrier and its proximity to the ion. The more heavily a material is doped, the higher the probability that a carrier will collide with an ion in a given time, and the smaller the mean free time between collisions, and the smaller the mobility. When determining the strength of these interactions due to the long-range nature of the Coulomb potential, other impurities and free carriers cause the range of interaction with the carriers to reduce significantly compared to bare Coulomb interaction. If these scatterers are near the interface, the complexity of the problem increases due to the existence of crystal defects and disorders. Charge trapping centers that scatter free carriers form in many cases due to defects associated with dangling bonds. Scattering happens because after trapping a charge, the defect becomes charged and therefore starts interacting with free carriers. If scattered carriers are in the inversion layer at the interface, the reduced dimensionality of the carriers makes the case differ from the case of bulk impurity scattering as carriers move only in two dimensions. Interfacial roughness also causes short-range scattering limiting the mobility of quasi-two-dimensional electrons at the interface. Lattice (phonon) scattering At any temperature above absolute zero, the vibrating atoms create pressure (acoustic) waves in the crystal, which are termed phonons. Like electrons, phonons can be considered to be particles. A phonon can interact (collide) with an electron (or hole) and scatter it. At higher temperature, there are more phonons, and thus increased electron scattering, which tends to reduce mobility. Piezoelectric scattering Piezoelectric effect can occur only in compound semiconductor due to their polar nature. It is small in most semiconductors but may lead to local electric fields that cause scattering of carriers by deflecting them, this effect is important mainly at low temperatures where other scattering mechanisms are weak. These electric fields arise from the distortion of the basic unit cell as strain is applied in certain directions in the lattice. Surface roughness scattering Surface roughness scattering caused by interfacial disorder is short range scattering limiting the mobility of quasi-two-dimensional electrons at the interface. From high-resolution transmission electron micrographs, it has been determined that the interface is not abrupt on the atomic level, but actual position of the interfacial plane varies one or two atomic layers along the surface. 
These variations are random and cause fluctuations of the energy levels at the interface, which then causes scattering. Alloy scattering In compound (alloy) semiconductors, which many thermoelectric materials are, scattering caused by the perturbation of crystal potential due to the random positioning of substituting atom species in a relevant sublattice is known as alloy scattering. This can only happen in ternary or higher alloys as their crystal structure forms by randomly replacing some atoms in one of the sublattices (sublattice) of the crystal structure. Generally, this phenomenon is quite weak but in certain materials or circumstances, it can become dominant effect limiting conductivity. In bulk materials, interface scattering is usually ignored. Inelastic scattering During inelastic scattering processes, significant energy exchange happens. As with elastic phonon scattering also in the inelastic case, the potential arises from energy band deformations caused by atomic vibrations. Optical phonons causing inelastic scattering usually have the energy in the range 30-50 meV, for comparison energies of acoustic phonon are typically less than 1 meV but some might have energy in order of 10 meV. There is significant change in carrier energy during the scattering process. Optical or high-energy acoustic phonons can also cause intervalley or interband scattering, which means that scattering is not limited within single valley. Electron–electron scattering Due to the Pauli exclusion principle, electrons can be considered as non-interacting if their density does not exceed the value 1016~1017 cm−3 or electric field value 103 V/cm. However, significantly above these limits electron–electron scattering starts to dominate. Long range and nonlinearity of the Coulomb potential governing interactions between electrons make these interactions difficult to deal with. Relation between mobility and scattering time A simple model gives the approximate relation between scattering time (average time between scattering events) and mobility. It is assumed that after each scattering event, the carrier's motion is randomized, so it has zero average velocity. After that, it accelerates uniformly in the electric field, until it scatters again. The resulting average drift mobility is: where q is the elementary charge, m* is the carrier effective mass, and is the average scattering time. If the effective mass is anisotropic (direction-dependent), m* is the effective mass in the direction of the electric field. Matthiessen's rule Normally, more than one source of scattering is present, for example both impurities and lattice phonons. It is normally a very good approximation to combine their influences using "Matthiessen's Rule" (developed from work by Augustus Matthiessen in 1864): where μ is the actual mobility, is the mobility that the material would have if there was impurity scattering but no other source of scattering, and is the mobility that the material would have if there was lattice phonon scattering but no other source of scattering. Other terms may be added for other scattering sources, for example Matthiessen's rule can also be stated in terms of the scattering time: where τ is the true average scattering time and τimpurities is the scattering time if there was impurity scattering but no other source of scattering, etc. Matthiessen's rule is an approximation and is not universally valid. 
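A minimal Python sketch of Matthiessen's rule: two hypothetical component mobilities (impurity-limited and lattice-limited) are combined reciprocally, and the corresponding overall scattering time is recovered from the mobility–scattering-time relation given above. The numbers and the effective mass are assumptions for illustration only.

# Matthiessen's rule: 1/mu = 1/mu_impurity + 1/mu_lattice (illustrative values).
q     = 1.602e-19
m0    = 9.109e-31
m_eff = 0.26 * m0                  # assumed carrier effective mass

mu_impurity = 0.20                 # m^2/(V.s), hypothetical impurity-limited mobility
mu_lattice  = 0.14                 # m^2/(V.s), hypothetical lattice-limited mobility

mu_total  = 1.0 / (1.0 / mu_impurity + 1.0 / mu_lattice)
tau_total = mu_total * m_eff / q   # equivalent statement in terms of scattering time

print(f"combined mobility : {mu_total*1e4:.0f} cm^2/(V.s)")
print(f"scattering time   : {tau_total:.2e} s")

Note that the combined mobility (about 820 cm2/(V⋅s) here) is always smaller than either component, which is the content of the rule.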
This rule is not valid if the factors affecting the mobility depend on each other, because individual scattering probabilities cannot be summed unless they are independent of each other. The average free time of flight of a carrier and therefore the relaxation time is inversely proportional to the scattering probability. For example, lattice scattering alters the average electron velocity (in the electric-field direction), which in turn alters the tendency to scatter off impurities. There are more complicated formulas that attempt to take these effects into account. Temperature dependence of mobility With increasing temperature, phonon concentration increases and causes increased scattering. Thus lattice scattering lowers the carrier mobility more and more at higher temperature. Theoretical calculations reveal that the mobility in non-polar semiconductors, such as silicon and germanium, is dominated by acoustic phonon interaction. The resulting mobility is expected to be proportional to T −3/2, while the mobility due to optical phonon scattering only is expected to be proportional to T −1/2. Experimentally, values of the temperature dependence of the mobility in Si, Ge and GaAs are listed in table. As , where is the scattering cross section for electrons and holes at a scattering center and is a thermal average (Boltzmann statistics) over all electron or hole velocities in the lower conduction band or upper valence band, temperature dependence of the mobility can be determined. In here, the following definition for the scattering cross section is used: number of particles scattered into solid angle dΩ per unit time divided by number of particles per area per time (incident intensity), which comes from classical mechanics. As Boltzmann statistics are valid for semiconductors . For scattering from acoustic phonons, for temperatures well above Debye temperature, the estimated cross section Σph is determined from the square of the average vibrational amplitude of a phonon to be proportional to T. The scattering from charged defects (ionized donors or acceptors) leads to the cross section . This formula is the scattering cross section for "Rutherford scattering", where a point charge (carrier) moves past another point charge (defect) experiencing Coulomb interaction. The temperature dependencies of these two scattering mechanism in semiconductors can be determined by combining formulas for τ, Σ and , to be for scattering from acoustic phonons and from charged defects . The effect of ionized impurity scattering, however, decreases with increasing temperature because the average thermal speeds of the carriers are increased. Thus, the carriers spend less time near an ionized impurity as they pass and the scattering effect of the ions is thus reduced. These two effects operate simultaneously on the carriers through Matthiessen's rule. At lower temperatures, ionized impurity scattering dominates, while at higher temperatures, phonon scattering dominates, and the actual mobility reaches a maximum at an intermediate temperature. Disordered Semiconductors While in crystalline materials electrons can be described by wavefunctions extended over the entire solid, this is not the case in systems with appreciable structural disorder, such as polycrystalline or amorphous semiconductors. Anderson suggested that beyond a critical value of structural disorder, electron states would be localized. 
Localized states are described as being confined to a finite region of real space, normalizable, and not contributing to transport. Extended states are spread over the extent of the material, not normalizable, and contribute to transport. Unlike crystalline semiconductors, mobility generally increases with temperature in disordered semiconductors. Multiple trapping and release Mott later developed the concept of a mobility edge. This is an energy above which electrons undergo a transition from localized to delocalized states. In this description, termed multiple trapping and release, electrons are only able to travel when in extended states, and are constantly being trapped in, and re-released from, the lower energy localized states. Because the probability of an electron being released from a trap depends on its thermal energy, mobility can be described by an Arrhenius relationship in such a system: where is a mobility prefactor, is activation energy, is the Boltzmann constant, and is temperature. The activation energy is typically evaluated by measuring mobility as a function of temperature. The Urbach Energy can be used as a proxy for activation energy in some systems. Variable Range Hopping At low temperature, or in a system with a large degree of structural disorder (such as fully amorphous systems), electrons cannot access delocalized states. In such a system, electrons can only travel by tunnelling from one site to another, in a process called variable range hopping. In the original theory of variable range hopping, as developed by Mott and Davis, the probability of an electron hopping from one site to another site depends on their separation in space and their separation in energy. Here is a prefactor associated with the phonon frequency in the material, and is the wavefunction overlap parameter. The mobility in a system governed by variable range hopping can be shown to be: where is a mobility prefactor, is a parameter (with dimensions of temperature) that quantifies the width of localized states, and is the dimensionality of the system. Measurement of semiconductor mobility Hall mobility Carrier mobility is most commonly measured using the Hall effect. The result of the measurement is called the "Hall mobility" (meaning "mobility inferred from a Hall-effect measurement"). Consider a semiconductor sample with a rectangular cross section, with a current flowing in the x-direction and a magnetic field applied in the z-direction. The resulting Lorentz force will accelerate the electrons (n-type materials) or holes (p-type materials) in the (−y) direction, according to the right hand rule, and set up an electric field ξy. As a result there is a voltage across the sample, which can be measured with a high-impedance voltmeter. This voltage, VH, is called the Hall voltage. VH is negative for n-type material and positive for p-type material. Mathematically, the magnetic part of the Lorentz force acting on a charge q is F = qv × B; for electrons q = −e and for holes q = +e. In steady state this force is balanced by the force set up by the Hall voltage, so that there is no net force on the carriers in the y direction; the balance gives a Hall field of magnitude ξy = vxBz. For electrons, the field points in the −y direction, and for holes, it points in the +y direction. The electron current I is given in magnitude by I = nevxA, where A is the cross-sectional area of the sample.
Sub vx into the expression for ξy, where RHn is the Hall coefficient for electron, and is defined as Since Similarly, for holes From the Hall coefficient, we can obtain the carrier mobility as follows: Similarly, Here the value of VHp (Hall voltage), t (sample thickness), I (current) and B (magnetic field) can be measured directly, and the conductivities σn or σp are either known or can be obtained from measuring the resistivity. Field-effect mobility The mobility can also be measured using a field-effect transistor (FET). The result of the measurement is called the "field-effect mobility" (meaning "mobility inferred from a field-effect measurement"). The measurement can work in two ways: From saturation-mode measurements, or linear-region measurements. (See MOSFET for a description of the different modes or regions of operation.) Using saturation mode In this technique, for each fixed gate voltage VGS, the drain-source voltage VDS is increased until the current ID saturates. Next, the square root of this saturated current is plotted against the gate voltage, and the slope msat is measured. Then the mobility is: where L and W are the length and width of the channel and Ci is the gate insulator capacitance per unit area. This equation comes from the approximate equation for a MOSFET in saturation mode: where Vth is the threshold voltage. This approximation ignores the Early effect (channel length modulation), among other things. In practice, this technique may underestimate the true mobility. Using the linear region In this technique, the transistor is operated in the linear region (or "ohmic mode"), where VDS is small and with slope mlin. Then the mobility is: This equation comes from the approximate equation for a MOSFET in the linear region: In practice, this technique may overestimate the true mobility, because if VDS is not small enough and VG is not large enough, the MOSFET may not stay in the linear region. Optical mobility Electron mobility may be determined from non-contact laser photo-reflectance technique measurements. A series of photo-reflectance measurements are made as the sample is stepped through focus. The electron diffusion length and recombination time are determined by a regressive fit to the data. Then the Einstein relation is used to calculate the mobility. Terahertz mobility Electron mobility can be calculated from time-resolved terahertz probe measurement. Femtosecond laser pulses excite the semiconductor and the resulting photoconductivity is measured using a terahertz probe, which detects changes in the terahertz electric field. Time resolved microwave conductivity (TRMC) A proxy for charge carrier mobility can be evaluated using time-resolved microwave conductivity (TRMC). A pulsed optical laser is used to create electrons and holes in a semiconductor, which are then detected as an increase in photoconductance. With knowledge of the sample absorbance, dimensions, and incident laser fluence, the parameter can be evaluated, where is the carrier generation yield (between 0 and 1), is the electron mobility and is the hole mobility. has the same dimensions as mobility, but carrier type (electron or hole) is obscured. Doping concentration dependence in heavily-doped silicon The charge carriers in semiconductors are electrons and holes. Their numbers are controlled by the concentrations of impurity elements, i.e. doping concentration. Thus doping concentration has great influence on carrier mobility. 
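Returning to the field-effect measurements described above: the standard textbook extraction expressions are μ = 2L·msat²/(W·Ci) in saturation and μ = mlin·L/(W·Ci·VDS) in the linear region. The Python sketch below evaluates both for a hypothetical device; the geometry, the gate capacitance and the measured slopes are all assumptions, not values taken from this article.

# Field-effect mobility extraction (hypothetical device and measured slopes).
L_ch  = 10e-6          # channel length, m (assumed)
W_ch  = 100e-6         # channel width, m (assumed)
C_i   = 1.15e-3        # gate capacitance per unit area, F/m^2 (~30 nm SiO2, assumed)
V_DS  = 0.1            # drain-source bias used for the linear-region sweep, V

m_sat = 6.0e-3         # slope of sqrt(I_D) vs V_GS in saturation, A**0.5/V (assumed)
m_lin = 7.2e-6         # slope of I_D vs V_GS in the linear region, A/V (assumed)

mu_sat = 2.0 * L_ch * m_sat**2 / (W_ch * C_i)
mu_lin = m_lin * L_ch / (W_ch * C_i * V_DS)

print(f"saturation-mode mobility : {mu_sat*1e4:.0f} cm^2/(V.s)")
print(f"linear-region mobility   : {mu_lin*1e4:.0f} cm^2/(V.s)")

For a well-behaved device the two extractions should roughly agree, as they do for these assumed slopes; a large discrepancy usually points to contact resistance or to the device leaving the intended operating region.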
While there is considerable scatter in the experimental data, for noncompensated material (no counter doping) for heavily doped substrates (i.e. and up), the mobility in silicon is often characterized by the empirical relationship: where N is the doping concentration (either ND or NA), and Nref and α are fitting parameters. At room temperature, the above equation becomes: Majority carriers: Minority carriers: These equations apply only to silicon, and only under low field. See also Speed of electricity References External links semiconductor glossary entry for electron mobility Resistivity and Mobility Calculator from the BYU Cleanroom Online lecture- Mobility from an atomistic point of view Physical quantities Charge carriers Materials science Semiconductors Electric and magnetic fields in matter MOSFETs
Electron mobility
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
5,742
[ "Physical phenomena", "Matter", "Applied and interdisciplinary physics", "Physical quantities", "Charge carriers", "Semiconductors", "Quantity", "Electric and magnetic fields in matter", "Materials science", "Materials", "Electrical phenomena", "Electronic engineering", "Condensed matter phy...
761,336
https://en.wikipedia.org/wiki/Rijke%20tube
The Rijke tube is a cylindrical tube with both ends open, inside of which a heat source is placed that turns heat into sound, by creating a self-amplifying standing wave, due to thermo-acoustic instability. It is an entertaining phenomenon in acoustics and is an excellent example of resonance. Discovery P. L. Rijke was a professor of physics at the Leiden University in the Netherlands when, in 1859, he discovered a way of using heat to sustain a sound in a cylindrical tube open at both ends. He used a glass tube, about 0.8 m long and 3.5 cm in diameter. Inside it, about 20 cm from one end, he placed a disc of wire gauze as shown in the figure on right. Friction with the walls of the tube is sufficient to keep the gauze in position. With the tube vertical and the gauze in the lower half, he heated the gauze with a flame until it was glowing red hot. Upon removing the flame, he obtained a loud sound from the tube which lasted until the gauze cooled down (about 10s). It is safer in modern reproductions of this experiment to use a borosilicate glass tube or, better still, one made of metal. Instead of heating the gauze with a flame, Rijke also tried electrical heating. Making the gauze with electrical resistance wire causes it to glow red when a sufficiently large current is passed. With the heat being continuously supplied, the sound is also continuous and rather loud. Rijke seems to have received complaints from his university colleagues because he reports that the sound could be easily heard three rooms away from his laboratory. The electrical power required to achieve this is about 1 kW. Lord Rayleigh, who wrote the definitive textbook on sound in 1877, recommends this as a very effective lecture demonstration. He used a cast iron pipe 1.5 m long and 12 cm diameter with two layers of gauze made from iron wire inserted about quarter of the way up the tube. The extra gauze is to retain more heat, which makes the sound longer lasting. He reports in his book that the sound rises to such intensity as to shake the room! A "reverse" Rijke effect — namely, that a Rijke tube will also produce audio oscillations if hot air flows through a cold screen — was first observed by Rijke's assistant Johannes Bosscha and subsequently investigated by German physicist Peter Theophil Rieß. Mechanism The sound comes from a standing wave whose wavelength is about twice the length of the tube, giving the fundamental frequency. Lord Rayleigh, in his book, gave the correct explanation of how the sound is stimulated. The flow of air past the gauze is a combination of two motions. There is a uniform upwards motion of the air due to a convection current resulting from the gauze heating up the air. Superimposed on this is the motion due to the sound wave. For half the vibration cycle, the air flows into the tube from both ends until the pressure reaches a maximum. During the other half cycle, the flow of air is outwards until the minimum pressure is reached. All air flowing past the gauze is heated to the temperature of the gauze and any transfer of heat to the air will increase its pressure according to the ideal gas law. As the air flows upwards past the gauze most of it will already be hot because it has just come downwards past the gauze during the previous half cycle. However, just before the pressure maximum, a small quantity of cool air comes into contact with the gauze and its pressure is suddenly increased. This increases the pressure maximum, so reinforcing the vibration. 
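As a brief aside on the pitch: since the standing wave's wavelength is roughly twice the tube length, the fundamental frequency is approximately f ≈ c/(2L), with c the speed of sound in the air column. The minimal Python sketch below uses the 0.8 m length of Rijke's tube; the effective air temperatures are assumptions, since the column is strongly heated by the gauze, and end corrections are ignored.

import math

def speed_of_sound(T_celsius):
    # Ideal-gas estimate for air: c = sqrt(gamma * R * T / M)
    gamma, R, M = 1.4, 8.314, 0.02896
    return math.sqrt(gamma * R * (T_celsius + 273.15) / M)

L = 0.8                                    # Rijke's tube length, m
for T in (20, 200):                        # room-temperature air vs. assumed heated column
    c = speed_of_sound(T)
    print(f"T = {T:3d} C : c ~ {c:.0f} m/s, fundamental ~ {c / (2 * L):.0f} Hz")

For these assumptions the pitch falls in the low hundreds of hertz. The remainder of the cycle, described next, explains why the gauze must sit in the lower half of the tube.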
During the other half cycle, when the pressure is decreasing, the air above the gauze is forced downwards past the gauze again. Since it is already hot, no pressure change due to the gauze takes place, since there is no transfer of heat. The sound wave is therefore reinforced once every vibration cycle, and it quickly builds up to a very large amplitude. This explains why there is no sound when the flame is heating the gauze: all air flowing through the tube is heated by the flame, so when it reaches the gauze, it is already hot and no pressure increase takes place. When the gauze is in the upper half of the tube, there is no sound. In this case, the cool air brought in from the bottom by the convection current reaches the gauze towards the end of the outward vibration movement. This is immediately before the pressure minimum, so a sudden increase in pressure due to the heat transfer tends to cancel out the sound wave instead of reinforcing it. The position of the gauze in the tube is not critical as long as it is in the lower half. To work out its best position, there are two things to consider. Most heat will be transferred to the air where the displacement of the wave is a maximum, i.e. at the end of the tube. However, the effect of increasing the pressure is greatest where there is the greatest pressure variation, i.e. in the middle of the tube. Placing the gauze midway between these two positions (one quarter of the way in from the bottom end) is a simple way to come close to the optimal placement. The Rijke tube is considered to be a standing wave form of thermoacoustic devices known as "heat engines" or "prime movers". Sondhauss tube The Rijke tube operates with both ends open. However, a tube with one end closed will also generate sound from heat, if the closed end is very hot. Such a device is called a "Sondhauss tube". The phenomenon was first observed by glassblowers and was first described in 1850 by the German physicist Karl Friedrich Julius Sondhauss (1815–1886). Lord Rayleigh first explained the operation of the Sondhauss tube. The Sondhauss tube operates in a way that is basically similar to the Rijke tube: Initially, air moves towards the hot, closed end of the tube, where it's heated, so that the pressure at that end increases. The hot, higher-pressure air then flows from the closed end towards the cooler, open end of the tube. The air transfers its heat to the tube and cools. The air surges slightly beyond the open end of the tube, briefly compressing the atmosphere; the compression propagates through the atmosphere as a sound wave. The atmosphere then pushes the air back into the tube, and the cycle repeats. Unlike the Rijke tube, the Sondhauss tube does not require a steady flow of air through it, and whereas the Rijke tube acts as a half-wave resonator, the Sondhauss tube acts as a quarter-wave resonator. Like the Rijke tube, it was discovered that placing a porous heater — as well as a "stack" (a "plug" that is porous) — in the tube greatly increased the power and efficiency of the Sondhauss tube. (In demonstration models, the tube can be heated externally and steel wool can serve as a stack.) See also Pyrophone References Further information Rijke-Rohr (Rijke tube) at: Wundersames Sammelsurium (Wondrous Collection) (in German) Includes original articles by early investigators of thermoacoustics (Rijke, Reiss, etc.). Julius Sumner Miller, "Sounding Pipes" on YouTube Demonstrations of Rijke tubes. Acoustics Hot air engines Plasmaphones Toy instruments and noisemakers
Rijke tube
[ "Physics" ]
1,565
[ "Classical mechanics", "Acoustics" ]
762,604
https://en.wikipedia.org/wiki/Giulio%20Natta
Giulio Natta (26 February 1903 – 2 May 1979) was an Italian chemical engineer and Nobel laureate. He won the Nobel Prize in Chemistry in 1963, together with Karl Ziegler, for work on high polymers. He also received the Lomonosov Gold Medal in 1969. Biography Early years Natta was born in Imperia, Italy. He earned his degree in chemical engineering from the Politecnico di Milano university in Milan in 1924. In 1927 he passed the examinations to become a professor there. From 1929 to 1933, he was also in charge of physical chemistry at the Faculty of Sciences of the University of Milan. In 1933 he became a full professor and the director of the Institute of General Chemistry of Pavia University, where he stayed until 1935. During this time he began using crystallography to elucidate the structures of a wide variety of molecules including phosphine, arsine and others. In that year he was appointed full professor in physical chemistry at the University of Rome. Career From 1936 to 1938 he served as a full professor and director of the Institute of Industrial Chemistry at the Polytechnic Institute of Turin. In 1938 he took over as the head of the Department of Chemical Engineering at the Politecnico di Milano university, in a somewhat controversial manner, when his predecessor Mario Giacomo Levi was forced to step down because of racial laws against Jews being introduced in Fascist Italy. Natta's work at Politecnico di Milano led to the improvement of earlier work by Karl Ziegler and to the development of the Ziegler–Natta catalyst. He received the Nobel Prize in Chemistry in 1963 with Karl Ziegler for their research in high polymers. Personal life In 1935 Natta married Rosita Beati, a graduate in literature, who coined the terms "isotactic", "atactic" and "syndiotactic" for polymers discovered by her husband. They had two children, Giuseppe and Franca. Rosita died in 1968. Natta was diagnosed with Parkinson's disease in 1956. By 1963, his condition had progressed to the point that he required the assistance of his son and four colleagues to present his speech at the Nobel ceremonies in Stockholm. Natta died in Bergamo, Italy, at age 76. See also Polypropylene References Further reading External links including the Nobel Lecture, December 12, 1963 From the Stereospecific Polymerization to the Asymmetric Autocatalytic Synthesis of Macromolecules 1903 births 1979 deaths People from Imperia Polytechnic University of Milan alumni Neurological disease deaths in Lombardy Deaths from Parkinson's disease in Italy Italian chemists Italian chemical engineers 20th-century Italian inventors Italian Nobel laureates Nobel laureates in Chemistry Members of the French Academy of Sciences Foreign members of the USSR Academy of Sciences Polymer scientists and engineers Academic staff of the Polytechnic University of Milan Recipients of the Lomonosov Gold Medal Academic staff of the University of Milan
Giulio Natta
[ "Chemistry", "Materials_science", "Technology" ]
596
[ "Physical chemists", "Recipients of the Lomonosov Gold Medal", "Polymer chemistry", "Science and technology awards", "Polymer scientists and engineers" ]
762,691
https://en.wikipedia.org/wiki/Supercritical%20fluid
A supercritical fluid (SCF) is a substance at a temperature and pressure above its critical point, where distinct liquid and gas phases do not exist, but below the pressure required to compress it into a solid. It can effuse through porous solids like a gas, overcoming the mass transfer limitations that slow liquid transport through such materials. SCFs are superior to gases in their ability to dissolve materials like liquids or solids. Near the critical point, small changes in pressure or temperature result in large changes in density, allowing many properties of a supercritical fluid to be "fine-tuned". Supercritical fluids occur in the atmospheres of the gas giants Jupiter and Saturn, the terrestrial planet Venus, and probably in those of the ice giants Uranus and Neptune. Supercritical water is found on Earth, such as the water issuing from black smokers, a type of hydrothermal vent. SCFs are used as a substitute for organic solvents in a range of industrial and laboratory processes, most commonly carbon dioxide for decaffeination and water for steam boilers for power generation. Some substances are soluble in the supercritical state of a solvent (e.g. carbon dioxide) but insoluble in the gaseous or liquid state—or vice versa. This can be used to extract a substance and transport it elsewhere in solution before depositing it in the desired place by allowing or inducing a phase transition in the solvent. Properties Supercritical fluids generally have properties between those of a gas and a liquid. In Table 1, the critical properties are shown for some substances that are commonly used as supercritical fluids. †Source: International Association for Properties of Water and Steam (IAPWS) Table 2 shows density, diffusivity and viscosity for typical liquids, gases and supercritical fluids. Also, there is no surface tension in a supercritical fluid, as there is no liquid/gas phase boundary. By changing the pressure and temperature of the fluid, the properties can be "tuned" to be more liquid-like or more gas-like. One of the most important properties is the solubility of material in the fluid. Solubility in a supercritical fluid tends to increase with density of the fluid (at constant temperature). Since density increases with pressure, solubility tends to increase with pressure. The relationship with temperature is a little more complicated. At constant density, solubility will increase with temperature. However, close to the critical point, the density can drop sharply with a slight increase in temperature. Therefore, close to the critical temperature, solubility often drops with increasing temperature, then rises again. Mixtures Typically, supercritical fluids are completely miscible with each other, so that a binary mixture forms a single gaseous phase if the critical point of the mixture is exceeded. However, exceptions are known in systems where one component is much more volatile than the other, which in some cases form two immiscible gas phases at high pressure and temperatures above the component critical points. This behavior has been found for example in the systems N2-NH3, NH3-CH4, SO2-N2 and n-butane-H2O. The critical point of a binary mixture can be estimated as the arithmetic mean of the critical temperatures and pressures of the two components, where χi denotes the mole fraction of component i. For greater accuracy, the critical point can be calculated using equations of state, such as the Peng–Robinson, or group-contribution methods. 
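The mole-fraction-weighted estimate described above can be written out directly. The short Python sketch below applies it to a hypothetical CO2–ethane mixture; the ethane critical constants are standard handbook values, and, as noted in the text, an equation of state such as Peng–Robinson should be used when accuracy matters.

# Rough pseudo-critical point of a binary mixture: mole-fraction-weighted means.
def pseudo_critical(x1, Tc1_K, pc1_MPa, Tc2_K, pc2_MPa):
    x2 = 1.0 - x1
    return x1 * Tc1_K + x2 * Tc2_K, x1 * pc1_MPa + x2 * pc2_MPa

# Component 1: CO2 (Tc = 304.1 K, pc = 7.38 MPa); component 2: ethane (about 305.3 K, 4.87 MPa)
Tc_mix, pc_mix = pseudo_critical(0.6, 304.1, 7.38, 305.3, 4.87)
print(f"estimated mixture critical point: {Tc_mix:.1f} K, {pc_mix:.2f} MPa")

Above roughly this temperature and pressure the 60:40 mixture would be expected to form a single supercritical phase, subject to the caveats about immiscible high-pressure systems mentioned above.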
Other properties, such as density, can also be calculated using equations of state. Phase diagram Figures 1 and 2 show two-dimensional projections of a phase diagram. In the pressure-temperature phase diagram (Fig. 1) the boiling curve separates the gas and liquid region and ends in the critical point, where the liquid and gas phases disappear to become a single supercritical phase. The appearance of a single phase can also be observed in the density-pressure phase diagram for carbon dioxide (Fig. 2). At well below the critical temperature, e.g., 280 K, as the pressure increases, the gas compresses and eventually (at just over 40 bar) condenses into a much denser liquid, resulting in the discontinuity in the line (vertical dotted line). The system consists of 2 phases in equilibrium, a dense liquid and a low density gas. As the critical temperature is approached (300 K), the density of the gas at equilibrium becomes higher, and that of the liquid lower. At the critical point, (304.1 K and 7.38 MPa (73.8 bar)), there is no difference in density, and the 2 phases become one fluid phase. Thus, above the critical temperature a gas cannot be liquefied by pressure. At slightly above the critical temperature (310 K), in the vicinity of the critical pressure, the line is almost vertical. A small increase in pressure causes a large increase in the density of the supercritical phase. Many other physical properties also show large gradients with pressure near the critical point, e.g. viscosity, the relative permittivity and the solvent strength, which are all closely related to the density. At higher temperatures, the fluid starts to behave more like an ideal gas, with a more linear density/pressure relationship, as can be seen in Figure 2. For carbon dioxide at 400 K, the density increases almost linearly with pressure. Many pressurized gases are actually supercritical fluids. For example, nitrogen has a critical point of 126.2 K (−147 °C) and 3.4 MPa (34 bar). Therefore, nitrogen (or compressed air) in a gas cylinder above this pressure is actually a supercritical fluid. These are more often known as permanent gases. At room temperature, they are well above their critical temperature, and therefore behave as a nearly ideal gas, similar to CO2 at 400 K above. However, they cannot be liquified by mechanical pressure unless cooled below their critical temperature, requiring gravitational pressure such as within gas giants to produce a liquid or solid at high temperatures. Above the critical temperature, elevated pressures can increase the density enough that the SCF exhibits liquid-like density and behaviour. At very high pressures, an SCF can be compressed into a solid because the melting curve extends to the right of the critical point in the P/T phase diagram. While the pressure required to compress supercritical CO2 into a solid can be, depending on the temperature, as low as 570 MPa, that required to solidify supercritical water is 14,000 MPa. The Fisher–Widom line, the Widom line, or the Frenkel line are thermodynamic concepts that allow to distinguish liquid-like and gas-like states within the supercritical fluid. History In 1822, Baron Charles Cagniard de la Tour discovered the critical point of a substance in his famous cannon barrel experiments. Listening to discontinuities in the sound of a rolling flint ball in a sealed cannon filled with fluids at various temperatures, he observed the critical temperature. 
Above this temperature, the densities of the liquid and gas phases become equal and the distinction between them disappears, resulting in a single supercritical fluid phase. In recent years, a significant effort has been devoted to investigation of various properties of supercritical fluids. Supercritical fluids have found application in a variety of fields, ranging from the extraction of floral fragrance from flowers to applications in food science such as creating decaffeinated coffee, functional food ingredients, pharmaceuticals, cosmetics, polymers, powders, bio- and functional materials, nano-systems, natural products, biotechnology, fossil and bio-fuels, microelectronics, energy and environment. Much of the excitement and interest of the past decade is due to the enormous progress made in increasing the power of relevant experimental tools. The development of new experimental methods and improvement of existing ones continues to play an important role in this field, with recent research focusing on dynamic properties of fluids. Natural occurrence Hydrothermal circulation Hydrothermal circulation occurs within the Earth's crust wherever fluid becomes heated and begins to convect. These fluids are thought to reach supercritical conditions under a number of different settings, such as in the formation of porphyry copper deposits or high temperature circulation of seawater in the sea floor. At mid-ocean ridges, this circulation is most evident by the appearance of hydrothermal vents known as "black smokers". These are large (metres high) chimneys of sulfide and sulfate minerals which vent fluids up to 400 °C. The fluids appear like great black billowing clouds of smoke due to the precipitation of dissolved metals in the fluid. It is likely that at that depth many of these vent sites reach supercritical conditions, but most cool sufficiently by the time they reach the sea floor to be subcritical. One particular vent site, Turtle Pits, has displayed a brief period of supercriticality at the vent site. A further site, Beebe, in the Cayman Trough, is thought to display sustained supercriticality at the vent orifice. Planetary atmospheres The atmosphere of Venus is 96.5% carbon dioxide and 3.5% nitrogen. The surface pressure is and the surface temperature is , above the critical points of both major constituents and making the surface atmosphere a supercritical fluid. The interior atmospheres of the Solar System's four giant planets are composed mainly of hydrogen and helium at temperatures well above their critical points. The gaseous outer atmospheres of the gas giants Jupiter and Saturn transition smoothly into the dense liquid interior, while the nature of the transition zones of the ice giants Neptune and Uranus is unknown. Theoretical models of extrasolar planet Gliese 876 d have posited an ocean of pressurized, supercritical fluid water with a sheet of solid high pressure water ice at the bottom. Applications Supercritical fluid extraction The advantages of supercritical fluid extraction (compared with liquid extraction) are that it is relatively rapid because of the low viscosities and high diffusivities associated with supercritical fluids. Alternative solvents to supercritical fluids may be poisonous, flammable or an environmental hazard to a much larger extent than water or carbon dioxide are. 
The extraction can be selective to some extent by controlling the density of the medium, and the extracted material is easily recovered by simply depressurizing, allowing the supercritical fluid to return to gas phase and evaporate leaving little or no solvent residues. Carbon dioxide is the most common supercritical solvent. It is used on a large scale for the decaffeination of green coffee beans, the extraction of hops for beer production, and the production of essential oils and pharmaceutical products from plants. A few laboratory test methods include the use of supercritical fluid extraction as an extraction method instead of using traditional solvents. Supercritical fluid decomposition Supercritical water can be used to decompose biomass via Supercritical Water Gasification of biomass. This type of biomass gasification can be used to produce hydrocarbon fuels for use in an efficient combustion device or to produce hydrogen for use in a fuel cell. In the latter case, hydrogen yield can be much higher than the hydrogen content of the biomass due to steam reforming where water is a hydrogen-providing participant in the overall reaction. Dry-cleaning Supercritical carbon dioxide (SCD) can be used instead of PERC (perchloroethylene) or other undesirable solvents for dry-cleaning. Supercritical carbon dioxide sometimes intercalates into buttons, and, when the SCD is depressurized, the buttons pop, or break apart. Detergents that are soluble in carbon dioxide improve the solvating power of the solvent. CO2-based dry cleaning equipment uses liquid CO2, not supercritical CO2, to avoid damage to the buttons. Supercritical fluid chromatography Supercritical fluid chromatography (SFC) can be used on an analytical scale, where it combines many of the advantages of high performance liquid chromatography (HPLC) and gas chromatography (GC). It can be used with non-volatile and thermally labile analytes (unlike GC) and can be used with the universal flame ionization detector (unlike HPLC), as well as producing narrower peaks due to rapid diffusion. In practice, the advantages offered by SFC have not been sufficient to displace the widely used HPLC and GC, except in a few cases such as chiral separations and analysis of high-molecular-weight hydrocarbons. For manufacturing, efficient preparative simulated moving bed units are available. The purity of the final products is very high, but the cost makes it suitable only for very high-value materials such as pharmaceuticals. Chemical reactions Changing the conditions of the reaction solvent can allow separation of phases for product removal, or single phase for reaction. Rapid diffusion accelerates diffusion controlled reactions. Temperature and pressure can tune the reaction down preferred pathways, e.g., to improve yield of a particular chiral isomer. There are also significant environmental benefits over conventional organic solvents. Industrial syntheses that are performed at supercritical conditions include those of polyethylene from supercritical ethene, isopropyl alcohol from supercritical propene, 2-butanol from supercritical butene, and ammonia from a supercritical mix of nitrogen and hydrogen. Other reactions were, in the past, performed industrially in supercritical conditions, including the synthesis of methanol and thermal (non-catalytic) oil cracking. Because of the development of effective catalysts, the required temperatures of those two processes have been reduced and are no longer supercritical. 
Impregnation and dyeing Impregnation is, in essence, the converse of extraction. A substance is dissolved in the supercritical fluid, the solution flowed past a solid substrate, and is deposited on or dissolves in the substrate. Dyeing, which is readily carried out on polymer fibres such as polyester using disperse (non-ionic) dyes, is a special case of this. Carbon dioxide also dissolves in many polymers, considerably swelling and plasticising them and further accelerating the diffusion process. Nano and micro particle formation The formation of small particles of a substance with a narrow size distribution is an important process in the pharmaceutical and other industries. Supercritical fluids provide a number of ways of achieving this by rapidly exceeding the saturation point of a solute by dilution, depressurization or a combination of these. These processes occur faster in supercritical fluids than in liquids, promoting nucleation or spinodal decomposition over crystal growth and yielding very small and regularly sized particles. Recent supercritical fluids have shown the capability to reduce particles up to a range of 5-2000 nm. Generation of pharmaceutical cocrystals Supercritical fluids act as a new medium for the generation of novel crystalline forms of APIs (Active Pharmaceutical Ingredients) named as pharmaceutical cocrystals. Supercritical fluid technology offers a new platform that allows a single-step generation of particles that are difficult or even impossible to obtain by traditional techniques. The generation of pure and dried new cocrystals (crystalline molecular complexes comprising the API and one or more conformers in the crystal lattice) can be achieved due to unique properties of SCFs by using different supercritical fluid properties: supercritical CO2 solvent power, anti-solvent effect and its atomization enhancement. Supercritical drying Supercritical drying is a method of removing solvent without surface tension effects. As a liquid dries, the surface tension drags on small structures within a solid, causing distortion and shrinkage. Under supercritical conditions there is no surface tension, and the supercritical fluid can be removed without distortion. Supercritical drying is used in the manufacturing process of aerogels and drying of delicate materials such as archaeological samples and biological samples for electron microscopy. Supercritical water electrolysis Electrolysis of water in a supercritical state, reduces the overpotentials found in other electrolysers, thereby improving the electrical efficiency of the production of oxygen and hydrogen. Increased temperature reduces thermodynamic barriers and increases kinetics. No bubbles of oxygen or hydrogen are formed on the electrodes, therefore no insulating layer is formed between catalyst and water, reducing the ohmic losses. The gas-like properties provide rapid mass transfer. Supercritical water oxidation Supercritical water oxidation uses supercritical water as a medium in which to oxidize hazardous waste, eliminating production of toxic combustion products that burning can produce. The waste product to be oxidised is dissolved in the supercritical water along with molecular oxygen (or an oxidising agent that gives up oxygen upon decomposition, e.g. hydrogen peroxide) at which point the oxidation reaction occurs. 
Supercritical water hydrolysis Supercritical hydrolysis is a method of converting all biomass polysaccharides as well the associated lignin into low molecular compounds by contacting with water alone under supercritical conditions. The supercritical water, acts as a solvent, a supplier of bond-breaking thermal energy, a heat transfer agent and as a source of hydrogen atoms. All polysaccharides are converted into simple sugars in near-quantitative yield in a second or less. The aliphatic inter-ring linkages of lignin are also readily cleaved into free radicals that are stabilized by hydrogen originating from the water. The aromatic rings of the lignin are unaffected under short reaction times so that the lignin-derived products are low molecular weight mixed phenols. To take advantage of the very short reaction times needed for cleavage a continuous reaction system must be devised. The amount of water heated to a supercritical state is thereby minimized. Supercritical water gasification Supercritical water gasification is a process of exploiting the beneficial effect of supercritical water to convert aqueous biomass streams into clean water and gases like H2, CH4, CO2, CO etc. Supercritical fluid in power generation The efficiency of a heat engine is ultimately dependent on the temperature difference between heat source and sink (Carnot cycle). To improve efficiency of power stations the operating temperature must be raised. Using water as the working fluid, this takes it into supercritical conditions. Efficiencies can be raised from about 39% for subcritical operation to about 45% using current technology. Many coal-fired supercritical steam generators are operational all over the world. Supercritical carbon dioxide is also proposed as a working fluid, which would have the advantage of lower critical pressure than water, but issues with corrosion are not yet fully solved. One proposed application is the Allam cycle. Supercritical water reactors (SCWRs) are proposed advanced nuclear systems that offer similar thermal efficiency gains. Biodiesel production Conversion of vegetable oil to biodiesel is via a transesterification reaction, where a triglyceride is converted to the methyl esters (of the fatty acids) plus glycerol. This is usually done using methanol and caustic or acid catalysts, but can be achieved using supercritical methanol without a catalyst. The method of using supercritical methanol for biodiesel production was first studied by Saka and his coworkers. This has the advantage of allowing a greater range and water content of feedstocks (in particular, used cooking oil), the product does not need to be washed to remove catalyst, and is easier to design as a continuous process. Enhanced oil recovery and carbon capture and storage Supercritical carbon dioxide is used to enhance oil recovery in mature oil fields. At the same time, there is the possibility of using "clean coal technology" to combine enhanced recovery methods with carbon sequestration. The CO2 is separated from other flue gases, compressed to the supercritical state, and injected into geological storage, possibly into existing oil fields to improve yields. At present, only schemes isolating fossil CO2 from natural gas actually use carbon storage, (e.g., Sleipner gas field), but there are many plans for future CCS schemes involving pre- or post- combustion CO2. There is also the possibility to reduce the amount of CO2 in the atmosphere by using biomass to generate power and sequestering the CO2 produced. 
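Returning to the power-generation figures above: the quoted efficiency gain comes from raising the peak steam temperature, since the ideal (Carnot) limit is 1 − Tcold/Thot. The sketch below compares the Carnot bound for assumed subcritical and supercritical turbine-inlet temperatures; real plant efficiencies (about 39% and 45% as quoted) sit well below these bounds.

# Carnot limit for two assumed steam turbine-inlet temperatures (condenser at 30 C).
def carnot_limit(T_hot_C, T_cold_C=30.0):
    T_hot, T_cold = T_hot_C + 273.15, T_cold_C + 273.15
    return 1.0 - T_cold / T_hot

for label, T_hot in (("subcritical steam, ~540 C", 540.0), ("supercritical steam, ~600 C", 600.0)):
    print(f"{label}: Carnot limit ~ {carnot_limit(T_hot):.1%}")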
Enhanced geothermal system The use of supercritical carbon dioxide, instead of water, has been examined as a geothermal working fluid. Refrigeration Supercritical carbon dioxide is also emerging as a useful high-temperature refrigerant, being used in new, CFC/HFC-free domestic heat pumps making use of the transcritical cycle. These systems are undergoing continuous development with supercritical carbon dioxide heat pumps already being successfully marketed in Asia. The EcoCute systems from Japan are some of the first commercially successful high-temperature domestic water heat pumps. Supercritical fluid deposition Supercritical fluids can be used to deposit functional nanostructured films and nanometer-size particles of metals onto surfaces. The high diffusivities and concentrations of precursor in the fluid as compared to the vacuum systems used in chemical vapour deposition allow deposition to occur in a surface reaction rate limited regime, providing stable and uniform interfacial growth. This is crucial in developing more powerful electronic components, and metal particles deposited in this way are also powerful catalysts for chemical synthesis and electrochemical reactions. Additionally, due to the high rates of precursor transport in solution, it is possible to coat high surface area particles which under chemical vapour deposition would exhibit depletion near the outlet of the system and also be likely to result in unstable interfacial growth features such as dendrites. The result is very thin and uniform films deposited at rates much faster than atomic layer deposition, the best other tool for particle coating at this size scale. Antimicrobial properties CO2 at high pressures has antimicrobial properties. While its effectiveness has been shown for various applications, the mechanisms of inactivation have not been fully understood although they have been investigated for more than 60 years. See also Supercritical adsorption Transcritical cycle Critical point (thermodynamics) Iceland Deep Drilling Project References Further reading External links Handy calculator for density, enthalpy, entropy and other thermodynamic data of supercritical / water and others videos to present supercritical fluid critical point and solubility in supercritical fluid NewScientist Environment FOUND:The hottest water on Earth Critical phenomena Phases of matter Gases
Supercritical fluid
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
4,678
[ "Matter", "Physical phenomena", "Phases of matter", "Critical phenomena", "Condensed matter physics", "Statistical mechanics", "Gases", "Dynamical systems" ]
762,970
https://en.wikipedia.org/wiki/Morse%20potential
The Morse potential, named after physicist Philip M. Morse, is a convenient interatomic interaction model for the potential energy of a diatomic molecule. It is a better approximation for the vibrational structure of the molecule than the quantum harmonic oscillator because it explicitly includes the effects of bond breaking, such as the existence of unbound states. It also accounts for the anharmonicity of real bonds and the non-zero transition probability for overtone and combination bands. The Morse potential can also be used to model other interactions such as the interaction between an atom and a surface. Due to its simplicity (only three fitting parameters), it is not used in modern spectroscopy. However, its mathematical form inspired the MLR (Morse/Long-range) potential, which is the most popular potential energy function used for fitting spectroscopic data.
Potential energy function The Morse potential energy function is of the form $V(r) = D_e \left(1 - e^{-a(r - r_e)}\right)^2$. Here $r$ is the distance between the atoms, $r_e$ is the equilibrium bond distance, $D_e$ is the well depth (defined relative to the dissociated atoms), and $a$ controls the 'width' of the potential (the smaller $a$ is, the larger the well). The dissociation energy of the bond can be calculated by subtracting the zero point energy $E_0$ from the depth of the well. The force constant (stiffness) of the bond can be found by Taylor expansion of $V(r)$ around $r = r_e$ to the second derivative of the potential energy function, from which it can be shown that the parameter $a$ is $a = \sqrt{k_e / (2 D_e)}$, where $k_e$ is the force constant at the minimum of the well. Since the zero of potential energy is arbitrary, the equation for the Morse potential can be rewritten any number of ways by adding or subtracting a constant value. When it is used to model the atom-surface interaction, the energy zero can be redefined so that the Morse potential becomes $V(r) = D_e \left[\left(1 - e^{-a(r - r_e)}\right)^2 - 1\right]$, which is usually written as $V(r) = D_e \left(e^{-2a(r - r_e)} - 2 e^{-a(r - r_e)}\right)$, where $r$ is now the coordinate perpendicular to the surface. This form approaches zero at infinite $r$ and equals $-D_e$ at its minimum, i.e. at $r = r_e$. It clearly shows that the Morse potential is the combination of a short-range repulsion term (the former) and a long-range attractive term (the latter), analogous to the Lennard-Jones potential.
Vibrational states and energies Like the quantum harmonic oscillator, the energies and eigenstates of the Morse potential can be found using operator methods. One approach involves applying the factorization method to the Hamiltonian. To write the stationary states on the Morse potential, i.e. solutions $\Psi_n(r)$ and $E_n$ of the following Schrödinger equation: $\left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial r^2} + V(r)\right)\Psi_n(r) = E_n\Psi_n(r)$, it is convenient to introduce the new variables: $x = a r$, $x_e = a r_e$, $\lambda = \frac{\sqrt{2 m D_e}}{a \hbar}$, $\varepsilon_n = \frac{2 m}{a^2 \hbar^2} E_n$. Then, the Schrödinger equation takes the simplified form: $\left(-\frac{1}{2}\frac{\partial^2}{\partial x^2} + \lambda^2\left(1 - e^{-(x - x_e)}\right)^2\right)\Psi_n(x) = \varepsilon_n\Psi_n(x)$. Its eigenvalues (in these reduced units) and eigenstates can be written as: $\varepsilon_n = \lambda^2 - \left(\lambda - n - \tfrac{1}{2}\right)^2$, where $n = 0, 1, \ldots, \lfloor\lambda - \tfrac{1}{2}\rfloor$, with $\lfloor x\rfloor$ denoting the largest integer smaller than $x$, and $\Psi_n(z) = N_n z^{\lambda - n - 1/2} e^{-z/2} L_n^{(2\lambda - 2n - 1)}(z)$, where $z = 2\lambda e^{-(x - x_e)}$ and $N_n = \left[\frac{n!\,(2\lambda - 2n - 1)}{\Gamma(2\lambda - n)}\right]^{1/2}$, which satisfies the normalization condition, and where $L_n^{(\alpha)}(z)$ is a generalized Laguerre polynomial: $L_n^{(\alpha)}(z) = \frac{z^{-\alpha} e^{z}}{n!}\frac{d^{n}}{dz^{n}}\left(z^{n+\alpha} e^{-z}\right)$. An analytical expression also exists for the matrix elements of the coordinate operator between different Morse eigenstates. The eigenenergies in the initial variables have the form: $E_n = h\nu_0\left(n + \tfrac{1}{2}\right) - \frac{\left[h\nu_0\left(n + \tfrac{1}{2}\right)\right]^2}{4 D_e}$, where $n$ is the vibrational quantum number and $\nu_0$ has units of frequency. The latter is mathematically related to the particle mass $m$ and the Morse constants via $\nu_0 = \frac{a}{2\pi}\sqrt{2 D_e / m}$. Whereas the energy spacing between vibrational levels in the quantum harmonic oscillator is constant at $h\nu_0$, the energy between adjacent levels decreases with increasing $n$ in the Morse oscillator. Mathematically, the spacing of Morse levels is $E_{n+1} - E_n = h\nu_0 - (n + 1)\frac{(h\nu_0)^2}{2 D_e}$. This trend matches the anharmonicity found in real molecules.
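As a quick numerical illustration of the level structure just described, the sketch below evaluates $E_n$, the shrinking level spacing, and the highest bound level for Morse parameters roughly appropriate to H2. The specific values of $D_e$, $a$ and the reduced mass are assumptions chosen only for illustration, not fitted spectroscopic constants from this article.

```python
# Minimal sketch of the Morse vibrational levels E_n and their spacing.
# The parameters below are rough, assumed values for H2, chosen only to
# illustrate the formulas above.
import math

eV = 1.602176634e-19           # joules per electronvolt
h = 6.62607015e-34             # Planck constant, J*s

D_e = 4.75 * eV                # assumed well depth (~4.75 eV)
a = 1.94e10                    # assumed width parameter, 1/m (~1.94 per angstrom)
m = 0.5 * 1.6735575e-27        # reduced mass of H2, kg (about m_H / 2)

# Characteristic frequency: nu_0 = (a / 2*pi) * sqrt(2 * D_e / m)
nu0 = (a / (2.0 * math.pi)) * math.sqrt(2.0 * D_e / m)

def morse_level(n: int) -> float:
    """E_n = h*nu0*(n + 1/2) - [h*nu0*(n + 1/2)]^2 / (4*D_e), in joules."""
    x = h * nu0 * (n + 0.5)
    return x - x * x / (4.0 * D_e)

# Highest bound level: the spacing stays positive only up to
# n_m = (2*D_e - h*nu0) / (h*nu0), integer part.
n_m = int((2.0 * D_e - h * nu0) / (h * nu0))

for n in range(4):
    spacing = morse_level(n + 1) - morse_level(n)
    print(f"n={n}: E_n = {morse_level(n) / eV:.3f} eV, "
          f"spacing to next level = {spacing / eV:.3f} eV")
print(f"highest bound level n_m = {n_m}")
```

The printed spacings shrink with $n$, as the anharmonic term predicts, and the estimated $n_m$ of around sixteen bound levels is in the right range for a light diatomic; with genuinely fitted constants, the same few lines could be compared against published vibrational term values.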
However, this equation fails above some value of $n_m$ where $E(n_m + 1) - E(n_m)$ is calculated to be zero or negative. Specifically, $n_m = \frac{2 D_e - h\nu_0}{h\nu_0}$ (integer part only). This failure is due to the finite number of bound levels in the Morse potential, and to the existence of some maximum $n_m$ that remains bound. For energies above $E(n_m)$, all the possible energy levels are allowed and the equation for $E_n$ is no longer valid. Below $n_m$, $E_n$ is a good approximation for the true vibrational structure in non-rotating diatomic molecules. In fact, real molecular spectra are generally fit to the form $\frac{E_n}{hc} = \omega_e\left(n + \tfrac{1}{2}\right) - \omega_e\chi_e\left(n + \tfrac{1}{2}\right)^2$, in which the constants $\omega_e$ and $\omega_e\chi_e$ can be directly related to the parameters of the Morse potential. Specifically, $\omega_e = \frac{a}{2\pi c}\sqrt{2 D_e / m}$ and $\omega_e\chi_e = \frac{\hbar a^2}{4\pi m c}$. Note that if $a$ is given in m⁻¹ and $D_e$ in joules, while $c$ is in cm/s (not m/s), $m$ is in kg, and $\hbar$ is in J·s, then $\omega_e$ and $\omega_e\chi_e$ will both be in cm⁻¹. As is clear from dimensional analysis, for historical reasons the last equation uses spectroscopic notation in which $\omega_e$ represents a wavenumber obeying $E = h c \omega$, and not an angular frequency given by $E = \hbar\omega$.
Morse/Long-range potential An extension of the Morse potential that made the Morse form useful for modern (high-resolution) spectroscopy is the MLR (Morse/Long-range) potential. The MLR potential is used as a standard for representing spectroscopic and/or virial data of diatomic molecules by a potential energy curve. It has been used on N2, Ca2, KLi, MgH, several electronic states of Li2, Cs2, Sr2, ArXe, LiCa, LiNa, Br2, Mg2, HF, HCl, HBr, HI, MgD, Be2, BeH, and NaH. More sophisticated versions are used for polyatomic molecules.
See also Lennard-Jones potential Molecular mechanics
References CRC Handbook of Chemistry and Physics, ed. David R. Lide, 87th ed., Section 9, "Spectroscopic Constants of Diatomic Molecules", pp. 9–82. Khordad, R.; Edet, C. O.; Ikot, A. N. (2022). "Application of Morse potential and improved deformed exponential-type potential (IDEP) model to predict thermodynamics properties of diatomic molecules", International Journal of Modern Physics C 33 (08): 2250106, doi:10.1142/S0129183122501066. Varshni, Yatendra Pal (1957). "Comparative Study of Potential Energy Functions for Diatomic Molecules", Rev. Mod. Phys. 29: 664, doi:10.1103/RevModPhys.29.664. Kaplan, I. G. (2003). Handbook of Molecular Physics and Quantum Chemistry, Wiley, p. 207. Haynes, W. M.; David, R.; Lide, T. J. B. (eds.) (2017). CRC Handbook of Chemistry and Physics, Boca Raton, FL: CRC Press.
Chemical bonding Quantum chemistry Quantum models Quantum mechanical potentials
Morse potential
[ "Physics", "Chemistry", "Materials_science" ]
1,312
[ "Quantum chemistry", "Quantum mechanics", "Quantum models", "Quantum mechanical potentials", "Theoretical chemistry", "Condensed matter physics", " molecular", "nan", "Atomic", "Chemical bonding", " and optical physics" ]
763,256
https://en.wikipedia.org/wiki/Pier%20Luigi%20Nervi
Pier Luigi Nervi (21 June 1891 – 9 January 1979) was an Italian engineer and architect. He studied at the University of Bologna, graduating in 1913. Nervi taught as a professor of engineering at Rome University from 1946 to 1961 and is known worldwide as a structural engineer and architect, particularly for his innovative use of reinforced concrete in numerous notable thin-shell structures.
Biography Nervi was born in Sondrio and attended the Civil Engineering School of Bologna, from which he graduated in 1913; his formal education was quite similar to that experienced today by Italian civil engineering students. After graduating he joined the Society for Concrete Construction and, during World War I from 1915 to 1918, he served in the Corps of Engineering of the Italian Army. From 1961 to 1962 he was the Norton professor at Harvard University.
Civil engineering works Nervi began practicing civil engineering after 1923. His projects in the 1930s included several airplane hangars that were important for his development as an engineer. A set of hangars in Orvieto (1935) were built entirely out of reinforced concrete, and a second set in Orbetello and Torre del Lago (1939) improved the design by using a lighter roof, precast ribs, and a modular construction method. During the 1940s he developed ideas for reinforced concrete that helped in the rebuilding of many buildings and factories throughout Western Europe, and he even designed and built a reinforced concrete boat hull as a promotion for the Italian government. Nervi also stressed that intuition should be used as much as mathematics in design, especially with thin-shell structures. He borrowed from both Roman and Renaissance architecture while applying ribbing and vaulting to improve strength and eliminate columns. He combined simple geometry and prefabrication to innovate design solutions.
Engineer and architect Nervi was educated and practised in Italy as an ingegnere edile ("building engineer"). At the time (and, to a lesser degree, also today), a building engineer might also be considered an architect. After 1932, his aesthetically pleasing designs were used for major projects. This was due to the boom in European construction projects of the time that used concrete and steel, in which architecture took a step back in favour of the possibilities of engineering. Nervi successfully made reinforced concrete the main structural material of the day. Nervi expounded his ideas on building in four books (see below) and many learned papers. Archaeological excavations suggested that he may bear some responsibility for the Flaminio stadium foundations passing through ancient Roman tombs. His work was also part of the architecture event in the art competition at the 1936 Summer Olympics.
International projects Most of his built structures are in his native Italy, but he also worked on projects abroad. Nervi's first project in the United States was the George Washington Bridge Bus Station, for which he designed the roof, consisting of triangular pieces that were cast in place. This building is still used today by over 700 buses and their passengers.
Noted works Stadio Artemio Franchi, Florence (1931) Ugolino Golf House, Impruneta, Italy (1934) (collaborating with Gherardo Bosio) Torino Esposizioni, Turin, Italy (1949).
UNESCO headquarters, Paris (1950) (collaborating with Marcel Breuer and Bernard Zehrfuss) The Pirelli Tower, Milan (1950) (collaborating with Gio Ponti) Palazzo dello sport EUR (now PalaLottomatica), Rome (1956) Palazzetto dello sport, Rome (1958) Stadio Flaminio, Rome (1957) , Turin (1961) Palazzetto dello sport, Turin (1961) Australia Square tower building, Sydney (1961 - 1967) Sacro Cuore (Bell Tower), Florence (1962) Paper Mill, Mantua, Italy (1962) George Washington Bridge Bus Station, New York City (1963) Australia Square tower, Sydney (1964) Architect: Harry Seidler & Associates Tour de la Bourse, Montreal (1964) (collaborating with Luigi Moretti) Leverone Field House at Dartmouth College Sede Centrale della Banca del Monte di Parma, Parma (1968, collaboration with Gio Ponti, Antonio Fornaroli, and ) Edmund Barton Building (also published as Trade Group Offices), Canberra (1970), Australia. Architect: Harry Seidler & Associates MLC Centre, Sydney (1973) Architect: Harry Seidler & Associates Thompson Arena at Dartmouth College (1973 - 1974) Cathedral of Saint Mary of the Assumption, San Francisco, California (1967) (collaborating with Pietro Belluschi) Paul VI Audience Hall, Vatican City (1971) Chrysler Hall and Norfolk Scope Arena in Norfolk, Virginia (1971) Australian Embassy, Paris (1973) Consulting engineer; architect: Harry Seidler & Associates Good Hope Centre, Cape Town (1976) by Studio Nervi, an exhibition hall and conference centre, with the exhibition hall comprising an arch with tie-beam on each of the four vertical facades and two diagonal arches supporting two intersecting barrel-like roofs, which in turn were constructed from pre-cast concrete triangular coffers with in-situ concrete beams on the edges.
Awards Pier Luigi Nervi was awarded Gold Medals by the Institution of Structural Engineers in the UK, the American Institute of Architects (AIA Gold Medal 1964) and the RIBA. In 1957, he received the Frank P. Brown Medal of The Franklin Institute and the Wilhelm Exner Medal.
Publications Scienza o arte del costruire? Bussola, Rome, 1945. Costruire correttamente, Hoepli, Milan, 1954. Structures, Dodge, New York, 1958. Aesthetics and Technology in Building (The Charles Eliot Norton Lectures, 1961-62). Cambridge, Massachusetts, Harvard University Press, 1965.
See also Thin-shell structure
References
External links Ing. Nervi Pier Luigi. Fascismo - Architettura - Arte / Arte fascista web site Pierluigi Nervi e l'arte di costruire, Fausto Giovannardi, Borgo San Lorenzo (Florence) Italy 2008 NerViLab at Sapienza University, Rome Pier Luigi Nervi Project http://www.silvanaeditoriale.it/catalogo/prodotto.asp?id=3015, catalogue to the international travelling exhibition "Pier Luigi Nervi Architecture as Challenge", edited by Cristiana Chiorino and Carlo Olmo, Milan, 2010
1891 births 1979 deaths People from Sondrio IStructE Gold Medal winners Chartered designers 20th-century Italian architects Italian civil engineers Structural engineers Modernist architects from Italy Concrete shell structures University of Bologna alumni Harvard University faculty Recipients of the Royal Gold Medal Honorary members of the Royal Academy 20th-century Italian engineers Olympic competitors in art competitions Italian military personnel of World War I Recipients of the AIA Gold Medal Honorary Fellows of the American Institute of Architects
Pier Luigi Nervi
[ "Engineering" ]
1,425
[ "Structural engineering", "Structural engineers" ]
763,490
https://en.wikipedia.org/wiki/Fold%20%28geology%29
In structural geology, a fold is a stack of originally planar surfaces, such as sedimentary strata, that are bent or curved ("folded") during permanent deformation. Folds in rocks vary in size from microscopic crinkles to mountain-sized folds. They occur as single isolated folds or in periodic sets (known as fold trains). Synsedimentary folds are those formed during sedimentary deposition. Folds form under varied conditions of stress, pore pressure, and temperature gradient, as evidenced by their presence in soft sediments, the full spectrum of metamorphic rocks, and even as primary flow structures in some igneous rocks. A set of folds distributed on a regional scale constitutes a fold belt, a common feature of orogenic zones. Folds are commonly formed by shortening of existing layers, but may also be formed as a result of displacement on a non-planar fault (fault bend fold), at the tip of a propagating fault (fault propagation fold), by differential compaction or due to the effects of a high-level igneous intrusion, e.g. above a laccolith.
Fold terminology The fold hinge is the line joining points of maximum curvature on a folded surface. This line may be either straight or curved. The term hinge line has also been used for this feature. A fold surface seen perpendicular to its shortening direction can be divided into hinge and limb portions; the limbs are the flanks of the fold, and they converge at the hinge zone. Within the hinge zone lies the hinge point, which is the point of minimum radius of curvature (maximum curvature) of the fold. The crest of the fold represents the highest point of the fold surface, whereas the trough is the lowest point. The inflection point of a fold is the point on a limb at which the concavity reverses; on regular folds, this is the midpoint of the limb. The axial surface is defined as a plane connecting all the hinge lines of stacked folded surfaces. If the axial surface is planar, it is called an axial plane and can be described in terms of strike and dip. Folds can have a fold axis. A fold axis "is the closest approximation to a straight line that, when moved parallel to itself, generates the form of the fold" (Ramsay 1967). A fold that can be generated by a fold axis is called a cylindrical fold. This term has been broadened to include near-cylindrical folds. Often, the fold axis is the same as the hinge line.
Descriptive features Fold size Minor folds are quite frequently seen in outcrop; major folds seldom are, except in the more arid countries. Minor folds can, however, often provide the key to the major folds they are related to. They reflect the same shape and style and the direction in which the closures of the major folds lie, and their cleavage indicates the attitude of the axial planes of the major folds and their direction of overturning.
Fold shape A fold can be shaped like a chevron, with planar limbs meeting at an angular axis, as cuspate with curved limbs, as circular with a curved axis, or as elliptical with unequal wavelengths.
Fold tightness Fold tightness is defined by the size of the angle between the fold's limbs (as measured tangential to the folded surface at the inflection line of each limb), called the interlimb angle. Gentle folds have an interlimb angle of between 180° and 120°, open folds range from 120° to 70°, close folds from 70° to 30°, and tight folds from 30° to 0°. Isoclines, or isoclinal folds, have an interlimb angle of between 10° and zero, with essentially parallel limbs.
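The interlimb-angle ranges just listed map naturally onto a small classification helper. The sketch below is only an illustration; the function name and the choice to let the isoclinal range take precedence over the overlapping tight range are assumptions, not part of the standard terminology.

```python
def classify_fold_tightness(interlimb_angle_deg: float) -> str:
    """Classify a fold from its interlimb angle in degrees, following the
    ranges given above: gentle 180-120, open 120-70, close 70-30,
    tight 30-0, and isoclinal for essentially parallel limbs (about 10 or less)."""
    a = interlimb_angle_deg
    if not 0.0 <= a <= 180.0:
        raise ValueError("interlimb angle must lie between 0 and 180 degrees")
    if a <= 10.0:
        return "isoclinal"
    if a <= 30.0:
        return "tight"
    if a <= 70.0:
        return "close"
    if a <= 120.0:
        return "open"
    return "gentle"

# Example usage: limbs meeting at 150 degrees give a gentle fold,
# limbs meeting at 45 degrees give a close fold.
print(classify_fold_tightness(150.0))  # gentle
print(classify_fold_tightness(45.0))   # close
```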
Fold symmetry Not all folds are equal on both sides of the axis of the fold. Those with limbs of relatively equal length are termed symmetrical, and those with highly unequal limbs are asymmetrical. Asymmetrical folds generally have an axis at an angle to the original unfolded surface they formed on.
Facing and vergence Vergence is calculated in a direction perpendicular to the fold axis.
Deformation style classes Folds that maintain uniform layer thickness are classed as concentric folds. Those that do not are called similar folds. Similar folds tend to display thinning of the limbs and thickening of the hinge zone. Concentric folds are caused by warping from active buckling of the layers, whereas similar folds usually form by some form of shear flow where the layers are not mechanically active. Ramsay has proposed a classification scheme for folds that is often used to describe folds in profile, based upon the curvature of the inner and outer lines of a fold and the behavior of dip isogons, that is, lines connecting points of equal dip on adjacent folded surfaces.
Types of fold Linear Anticline: linear, strata normally dip away from the axial center, oldest strata in center irrespective of orientation. Syncline: linear, strata normally dip toward the axial center, youngest strata in center irrespective of orientation. Antiform: linear, strata dip away from the axial center, age unknown, or inverted. Synform: linear, strata dip toward the axial center, age unknown, or inverted. Monocline: linear, strata dip in one direction between horizontal layers on each side. Recumbent: linear, fold axial plane oriented at a low angle resulting in overturned strata in one limb of the fold. Other Dome: nonlinear, strata dip away from center in all directions, oldest strata in center. Basin: nonlinear, strata dip toward center in all directions, youngest strata in center. Chevron: angular fold with straight limbs and small hinges. Slump: typically monoclinal, the result of differential compaction or dissolution during sedimentation and lithification. Ptygmatic: folds are chaotic, random and disconnected; typical of sedimentary slump folding, migmatites and decollement detachment zones. Parasitic: short-wavelength folds formed within a larger-wavelength fold structure, normally associated with differences in bed thickness. Disharmonic: folds in adjacent layers with different wavelengths and shapes. (A homocline involves strata dipping in the same direction, though not necessarily any folding.)
Causes of folding Folds appear on all scales, in all rock types, at all levels in the crust. They arise from a variety of causes.
Layer-parallel shortening When a sequence of layered rocks is shortened parallel to its layering, this deformation may be accommodated in a number of ways: homogeneous shortening, reverse faulting or folding. The response depends on the thickness of the mechanical layering and the contrast in properties between the layers. If the layering does begin to fold, the fold style is also dependent on these properties. Isolated thick competent layers in a less competent matrix control the folding and typically generate classic rounded buckle folds accommodated by deformation in the matrix. In the case of regular alternations of layers of contrasting properties, such as sandstone-shale sequences, kink-bands, box-folds and chevron folds are normally produced.
Fault-related folding Many folds are directly related to faults, associated with their propagation, displacement and the accommodation of strains between neighboring faults. Fault bend folding Fault-bend folds are caused by displacement along a non-planar fault. In non-vertical faults, the hanging-wall deforms to accommodate the mismatch across the fault as displacement progresses. Fault bend folds occur in both extensional and thrust faulting. In extension, listric faults form rollover anticlines in their hanging walls. In thrusting, ramp anticlines form whenever a thrust fault cuts up section from one detachment level to another. Displacement over this higher-angle ramp generates the folding. Fault propagation folding Fault propagation folds or tip-line folds are caused when displacement occurs on an existing fault without further propagation. In both reverse and normal faults this leads to folding of the overlying sequence, often in the form of a monocline. Detachment folding When a thrust fault continues to displace above a planar detachment without further fault propagation, detachment folds may form, typically of box-fold style. These generally occur above a good detachment such as in the Jura Mountains, where the detachment occurs on middle Triassic evaporites. Folding in shear zones Shear zones that approximate to simple shear typically contain minor asymmetric folds, with the direction of overturning consistent with the overall shear sense. Some of these folds have highly curved hinge-lines and are referred to as sheath folds. Folds in shear zones can be inherited, formed due to the orientation of pre-shearing layering or formed due to instability within the shear flow. Folding in sediments Recently deposited sediments are normally mechanically weak and prone to remobilization before they become lithified, leading to folding. To distinguish them from folds of tectonic origin, such structures are called synsedimentary (formed during sedimentation). Slump folding: When slumps form in poorly consolidated sediments, they commonly undergo folding, particularly at their leading edges, during their emplacement. The asymmetry of the slump folds can be used to determine paleoslope directions in sequences of sedimentary rocks. Dewatering: Rapid dewatering of sandy sediments, possibly triggered by seismic activity, can cause convolute bedding. Compaction: Folds can be generated in a younger sequence by differential compaction over older structures such as fault blocks and reefs. Igneous intrusion The emplacement of igneous intrusions tends to deform the surrounding country rock. In the case of high-level intrusions, near the Earth's surface, this deformation is concentrated above the intrusion and often takes the form of folding, as with the upper surface of a laccolith. Flow folding The compliance of rock layers is referred to as competence: a competent layer or bed of rock can withstand an applied load without collapsing and is relatively strong, while an incompetent layer is relatively weak. When rock behaves as a fluid, as in the case of very weak rock such as rock salt, or any rock that is buried deeply enough, it typically shows flow folding (also called passive folding, because little resistance is offered): the strata appear shifted undistorted, assuming any shape impressed upon them by surrounding more rigid rocks. The strata simply serve as markers of the folding. Such folding is also a feature of many igneous intrusions and glacier ice. 
Folding mechanisms Folding of rocks must balance the deformation of layers with the conservation of volume in a rock mass. This occurs by several mechanisms.
Flexural slip Flexural slip allows folding by creating layer-parallel slip between the layers of the folded strata, which together accommodate the deformation. A good analogy is bending a phone book, where volume preservation is accommodated by slip between the pages of the book. The fold formed by the compression of competent rock beds is called a "flexure fold".
Buckling Typically, folding is thought to occur by simple buckling of a planar surface and its confining volume. The volume change is accommodated by layer-parallel shortening of the volume, which grows in thickness. Folding under this mechanism is typical of a similar fold style, as thinned limbs are shortened horizontally and thickened hinges are shortened vertically.
Mass displacement If the folding deformation cannot be accommodated by flexural slip or volume-change shortening (buckling), the rocks are generally removed from the path of the stress. This is achieved by pressure dissolution, a form of metamorphic process, in which rocks shorten by dissolving constituents in areas of high strain and redepositing them in areas of lower strain. Folds generated in this way include examples in migmatites and areas with a strong axial planar cleavage.
Mechanics of folding Folds in rock are formed by the stress field in which the rocks are located and by the rheology, or mode of response to stress, of the rock at the time at which the stress is applied. The rheology of the layers being folded determines characteristic features of the folds that are measured in the field. Rocks that deform more easily form many short-wavelength, high-amplitude folds. Rocks that do not deform as easily form long-wavelength, low-amplitude folds.
Economic implications Mining industry Layers of rock that fold into a hinge need to accommodate large deformations in the hinge zone. This results in voids between the layers. These voids, and especially the fact that the water pressure is lower in the voids than outside of them, act as triggers for the deposition of minerals. Over millions of years, this process is capable of gathering large quantities of trace minerals from large expanses of rock and depositing them at very concentrated sites. This may be one of the mechanisms responsible for mineral veins. To summarize, when searching for veins of valuable minerals, it might be wise to look for highly folded rock, and this is why the mining industry is very interested in the theory of geological folding.
Oil industry Anticlinal traps are formed by folding of rock. For example, if a porous sandstone unit covered with low-permeability shale is folded into an anticline, it may form a hydrocarbon trap, with oil accumulating in the crest of the fold. Most anticlinal traps are produced as a result of sideways pressure, folding the layers of rock, but can also occur from sediments being compacted.
See also 3D fold evolution Orogeny Mountain building Rock mechanics Thrust fault
Notes
Further reading Ramsay, J.G., 1967, Folding and fracturing of rocks: McGraw-Hill Book Company, New York, 560 pp., ISBN 193066589X
External links Mark Peletier Oil and gas traps
Structural geology Geological processes Deformation (mechanics)
Fold (geology)
[ "Materials_science", "Engineering" ]
2,806
[ "Deformation (mechanics)", "Materials science" ]