Dataset columns: id (int64, 39–79M); url (string, 32–168 chars); text (string, 7–145k chars); source (string, 2–105 chars); categories (list, 1–6 items); token_count (int64, 3–32.2k); subcategories (list, 0–27 items).
972,711
https://en.wikipedia.org/wiki/Single-photon%20avalanche%20diode
A single-photon avalanche diode (SPAD), also called a Geiger-mode avalanche photodiode (G-APD or GM-APD), is a solid-state photodetector within the same family as photodiodes and avalanche photodiodes (APDs), while also being fundamentally linked with basic diode behaviours. As with photodiodes and APDs, a SPAD is based around a semiconductor p-n junction that can be illuminated with ionizing radiation such as gamma rays, x-rays, and beta and alpha particles, along with a wide portion of the electromagnetic spectrum from ultraviolet (UV) through the visible wavelengths and into the infrared (IR). In a photodiode, with a low reverse bias voltage, the leakage current changes linearly with the absorption of photons, i.e. the liberation of current carriers (electrons and/or holes) due to the internal photoelectric effect. However, in a SPAD the reverse bias is so high that a phenomenon called impact ionisation occurs, which is able to cause an avalanche current to develop. Simply put, a photo-generated carrier is accelerated by the electric field in the device to a kinetic energy sufficient to overcome the ionisation energy of the bulk material, knocking electrons out of an atom. A large avalanche of current carriers grows exponentially and can be triggered from as few as a single photon-initiated carrier. A SPAD is able to detect single photons, providing short-duration trigger pulses that can be counted. However, they can also be used to obtain the time of arrival of the incident photon, owing to the high speed with which the avalanche builds up and the device's low timing jitter. The fundamental difference between SPADs and APDs or photodiodes is that a SPAD is biased well above its reverse-bias breakdown voltage and has a structure that allows operation without damage or undue noise. While an APD is able to act as a linear amplifier, the level of impact ionisation and avalanche within the SPAD has prompted researchers to liken the device to a Geiger counter, in which output pulses indicate a trigger or "click" event. The diode bias region that gives rise to this "click"-type behaviour is therefore called the "Geiger-mode" region. As with photodiodes, the wavelength region in which a SPAD is most sensitive is a product of its material properties, in particular the energy bandgap within the semiconductor. Many materials including silicon, germanium, germanium on silicon and III-V compounds such as InGaAs/InP have been used to fabricate SPADs for the large variety of applications that now utilise the run-away avalanche process. There is much research in this topic, with activity implementing SPAD-based systems in CMOS fabrication technologies, and investigation and use of III-V material combinations and Ge on Si for single-photon detection at short-wave infrared wavelengths suitable for telecommunications applications. Applications Since the 1970s, the applications of SPADs have increased significantly. Recent examples of their use include LIDAR, time of flight (ToF) 3D imaging, PET scanning, single-photon experimentation within physics, fluorescence lifetime microscopy, and optical communications (particularly quantum key distribution). Operation Structures SPADs are semiconductor devices based on a p–n junction that is reverse-biased at an operating voltage that exceeds the junction's breakdown voltage (Figure 1). "At this bias, the electric field is so high [higher than 3×10⁵ V/cm] that a single charge carrier injected into the depletion layer can trigger a self-sustaining avalanche.
The current rises swiftly [sub-nanosecond rise-time] to a macroscopic steady level in the milliampere range. If the primary carrier is photo-generated, the leading edge of the avalanche pulse marks [with picosecond time jitter] the arrival time of the detected photon." The current continues until the avalanche is quenched by lowering the bias voltage down to or below the breakdown voltage: the lower electric field is no longer able to accelerate carriers to impact-ionize with lattice atoms, therefore the current ceases. In order to be able to detect another photon, the bias voltage must be raised again above breakdown. "This operation requires a suitable circuit, which has to: Sense the leading edge of the avalanche current. Generate a standard output pulse synchronous with the avalanche build-up. Quench the avalanche by lowering the bias down to the breakdown voltage. Restore the photodiode to the operative level. This circuit is usually referred to as a quenching circuit." Biasing regions and current-voltage characteristic A semiconductor p-n junction can be biased in several operating regions depending on the applied voltage. For normal uni-directional diode operation, the forward biasing region and the forward voltage are used during conduction, while the reverse bias region prevents conduction. When operated with a low reverse bias voltage, the p-n junction can operate as a unity-gain photodiode. As the reverse bias increases, some internal gain through carrier multiplication can occur, allowing the photodiode to operate as an avalanche photodiode (APD) with a stable gain and a linear response to the optical input signal. However, as the bias voltage continues to increase, the p-n junction breaks down when the electric field strength across the junction reaches a critical level. As this electric field is induced by the bias voltage over the junction, the voltage at which this occurs is denoted the breakdown voltage, VBD. A SPAD is reverse biased with an excess bias voltage, Vex, above the breakdown voltage, but below a second, higher breakdown voltage associated with the SPAD's guard ring. The total bias (VBD + Vex) therefore exceeds the breakdown voltage to such a degree that "At this bias, the electric field is so high [higher than 3×10⁵ V/cm] that a single charge carrier injected into the depletion layer can trigger a self-sustaining avalanche. The current rises swiftly [sub-nanosecond rise-time] to a macroscopic steady level in the milliampere range. If the primary carrier is photo-generated, the leading edge of the avalanche pulse marks [with picosecond time jitter] the arrival time of the detected photon". As the current vs voltage (I-V) characteristic of a p-n junction gives information about the conduction behaviour of the diode, it is often measured using an analogue curve-tracer, which sweeps the bias voltage in fine steps under tightly controlled laboratory conditions. For a SPAD, without photon arrivals or thermally generated carriers, the I-V characteristic is similar to the reverse characteristic of a standard semiconductor diode, i.e. an almost total blockage of charge flow (current) over the junction other than a small leakage current (nano-amperes). This condition can be described as an "off-branch" of the characteristic. However, when this experiment is conducted, a "flickering" effect and a second I-V characteristic can be observed beyond breakdown.
This occurs when the SPAD has experienced a triggering event (photon arrival or thermally generated carrier) during the voltage sweeps that are applied to the device. The SPAD, during these sweeps, sustains an avalanche current which is described as the "on-branch" of the I-V characteristic. As the curve tracer increases the magnitude of the bias voltage over time, there are times that the SPAD is triggered during the voltage sweep above breakdown. In this case a transition occurs from the off-branch to the on-branch, with an appreciable current starting to flow. This leads to the flickering of the I-V characteristic that is observed and was denoted by early researchers in the field as "bifurcation" (def: the division of something into two branches or parts). To detect single-photons successfully, the p-n junction must have very low levels of the internal generation and recombination processes. To reduce thermal generation, devices are often cooled, while phenomena such as tunnelling across the p-n junctions also need to be reduced through careful design of semi-conductor dopants and implant steps. Finally, to reduce noise mechanisms being exacerbated by trapping centres within the p-n junction's band gap structure the diode needs to have a "clean" process free of erroneous dopants. Passive quenching circuits The simplest quenching circuit is commonly called passive quenching circuit and comprises a single resistor in series with the SPAD. This experimental setup has been employed since the early studies on the avalanche breakdown in junctions. The avalanche current self-quenches simply because it develops a voltage drop across a high-value ballast load RL (about 100 kΩ or more). After the quenching of the avalanche current, the SPAD bias slowly recovers to the operating bias, and therefore the detector is ready to be ignited again. This circuit mode is therefore called passive quenching passive reset (PQPR), although an active circuit element can be used for reset forming a passive quench active reset (PQAR) circuit mode. A detailed description of the quenching process is reported by Zappa et al. Active quenching circuits A more advanced quenching, which was explored from the 1970s onwards, is a scheme called active quenching. In this case a fast discriminator senses the steep onset of the avalanche current across a 50 Ω resistor (or integrated transistor) and provides a digital (CMOS, TTL, ECL, NIM) output pulse, synchronous with the photon arrival time. The circuit then quickly reduces the bias voltage to below breakdown (active quenching), then relatively quickly returns bias to above the breakdown voltage ready to sense the next photon. This mode is called active quench active reset (AQAR), however depending on circuit requirements, active quenching passive reset (AQPR) may be more suitable. AQAR circuits often allow lower dead times, and significantly reduced dead time variation. Photon counting and saturation The intensity of the input signal can be obtained by counting (photon counting) the number of output pulses within a measurement time period. This is useful for applications such as low light imaging, PET scanning and fluorescence lifetime microscopy. However, while the avalanche recovery circuit is quenching the avalanche and restoring bias, the SPAD cannot detect further photon arrivals. Any photons, (or dark counts or after-pulses), that reach the detector during this brief period are not counted. 
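To make the effect of this recovery period on counting concrete, here is a minimal sketch comparing the two standard idealised dead-time models: a non-paralyzable detector (a reasonable approximation for active quenching) and a paralyzable one (closer to the passive case discussed below, where arrivals during recharge extend the dead time). The 50 ns dead time and the photon rates are illustrative assumptions, not values from the text.

```python
import math

def measured_rate_nonparalyzable(true_rate, dead_time):
    """Ideal non-paralyzable model: counts are lost only while the detector is dead."""
    return true_rate / (1.0 + true_rate * dead_time)

def measured_rate_paralyzable(true_rate, dead_time):
    """Ideal paralyzable model: arrivals during the dead time restart it."""
    return true_rate * math.exp(-true_rate * dead_time)

dead_time = 50e-9  # assumed avalanche recovery time, 50 ns (illustrative)

for true_rate in (1e5, 1e6, 1e7, 1e8):
    np_rate = measured_rate_nonparalyzable(true_rate, dead_time)
    p_rate = measured_rate_paralyzable(true_rate, dead_time)
    print(f"true {true_rate:9.0f} cps -> non-paralyzable {np_rate:12.0f} cps, "
          f"paralyzable {p_rate:12.0f} cps")

# Maximum count rates implied by the two models:
#   non-paralyzable: 1/dead_time        (set purely by the recovery time)
#   paralyzable:     1/(e * dead_time)  (the 1/e reduction noted in the text)
print("max non-paralyzable:", 1 / dead_time, "cps")
print("max paralyzable:    ", 1 / (math.e * dead_time), "cps")
```

In the paralyzable model the measured rate peaks and then falls as the true rate keeps rising, which is the saturation and paralysis behaviour described below; the non-paralyzable model simply saturates at 1/dead_time.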
As the number of photons increases such that the (statistical) time interval between photons gets within a factor of ten or so of the avalanche recovery time, missing counts become statistically significant and the count rate begins to depart from a linear relationship with detected light level. At this point the SPAD begins to saturate. If the light level were to increase further, ultimately to the point where the SPAD immediately avalanches the moment the avalanche recovery circuit restores bias, the count rate reaches a maximum defined purely by the avalanche recovery time in the case of active quenching (hundred million counts per second or more). This can be harmful to the SPAD as it will be experiencing avalanche current nearly continuously. In the passive case, saturation may lead to the count rate decreasing once the maximum is reached. This is called paralysis, whereby a photon arriving as the SPAD is passively recharging, has a lower detection probability, but can extend the dead time. It is worth noting that passive quenching, while simpler to implement in terms of circuitry, incurs a 1/e reduction in maximum counting rates. Dark count rate (DCR) Besides photon-generated carriers, thermally-generated carriers (through generation-recombination processes within the semiconductor) can also fire the avalanche process. Therefore, it is possible to observe output pulses when the SPAD is in complete darkness. The resulting average number of counts per second is called dark count rate (DCR) and is the key parameter in defining the detector noise. It is worth noting that the reciprocal of the dark count rate defines the mean time that the SPAD remains biased above breakdown before being triggered by an undesired thermal generation. Therefore, in order to work as a single-photon detector, the SPAD must be able to remain biased above breakdown for a sufficiently long time (e.g., a few milliseconds, corresponding to a count rate well under a thousand counts per second, cps). Afterpulsing noise One other effect that can trigger an avalanche is known as afterpulsing. When an avalanche occurs, the PN junction is flooded with charge carriers and trap levels between the valence and conduction band become occupied to a degree that is much greater than that expected in a thermal-equilibrium distribution of charge carriers. After the SPAD has been quenched, there is some probability that a charge carrier in a trap level receives enough energy to free it from the trap and promote it to the conduction band, which triggers a new avalanche. Thus, depending on the quality of the process and exact layers and implants that were used to fabricate the SPAD, a significant number of extra pulses can be developed from a single originating thermal or photo-generation event. The degree of afterpulsing can be quantified by measuring the autocorrelation of the times of arrival between avalanches when a dark count measurement is set up. Thermal generation produces Poissonian statistics with an impulse function autocorrelation, and afterpulsing produces non-Poissonian statistics. Photon timing and jitter The leading edge of a SPAD's avalanche breakdown is particularly useful for timing the arrival of photons. This method is useful for 3D imaging, LIDAR and is used heavily in physical measurements relying on time-correlated single photon counting (TCSPC). However, to enable such functionality dedicated circuits such as time-to-digital converters (TDCs) and time-to-analogue (TAC) circuits are required. 
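The dark-count and afterpulsing statistics described above can be illustrated with a toy Monte Carlo simulation. The sketch below generates thermal dark counts as a Poisson process and injects correlated afterpulses; rather than computing the full autocorrelation, it simply inspects inter-arrival times, where an excess of short gaps over the exponential (Poissonian) expectation reveals afterpulsing. The dark count rate, afterpulse probability and delay are illustrative assumptions.

```python
import random

random.seed(0)

DCR = 1000.0             # assumed dark count rate, counts per second (illustrative)
AFTERPULSE_PROB = 0.1    # assumed probability that a count spawns one afterpulse
AFTERPULSE_DELAY = 1e-6  # assumed mean afterpulse delay, 1 microsecond (illustrative)
N_PRIMARY = 100_000

# Primary (thermal) events: a Poisson process, i.e. exponential inter-arrival
# times with mean 1/DCR.
t = 0.0
events = []
for _ in range(N_PRIMARY):
    t += random.expovariate(DCR)
    events.append(t)
    # Afterpulsing: with some probability the avalanche releases a trapped
    # carrier a short time later, producing a correlated extra count.
    if random.random() < AFTERPULSE_PROB:
        events.append(t + random.expovariate(1.0 / AFTERPULSE_DELAY))

events.sort()
gaps = [b - a for a, b in zip(events, events[1:])]

mean_gap = sum(gaps) / len(gaps)
short = sum(g < 10 * AFTERPULSE_DELAY for g in gaps) / len(gaps)
print(f"mean inter-arrival time: {mean_gap*1e3:.3f} ms "
      f"(pure Poisson at {DCR:.0f} cps would give {1e3/DCR:.3f} ms)")
print(f"fraction of gaps shorter than 10 us: {short:.3%} "
      f"(an excess over the Poisson expectation indicates afterpulsing)")
```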
The measurement of a photon's arrival is complicated by two general processes. The first is the statistical fluctuation in the arrival time of the photon itself, which is a fundamental property of light. The second is the statistical variation in the detection mechanism within the SPAD due to a) the depth of photon absorption, b) the diffusion time to the active p-n junction, c) the build-up statistics of the avalanche and d) the jitter of the detection and timing circuitry. Optical fill factor For a single SPAD, the ratio of its optically sensitive area, Aact, to its total area, Atot, is called the fill factor, FF = Aact/Atot. As SPADs require a guard ring to prevent premature edge breakdown, the optical fill factor becomes a product of the diode's shape and size in relation to its guard ring. If the active area is large and the outer guard ring is thin, the device will have a high fill factor. With a single device, the most efficient method to ensure full utilisation of the area and maximum sensitivity is to focus the incoming optical signal to be within the device's active area, i.e. all incident photons are absorbed within the planar area of the p-n junction such that any photon within this area can trigger an avalanche. Fill factor is more applicable when we consider arrays of SPAD devices. Here the diode active area may be small or commensurate with the guard ring's area. Likewise, the fabrication process of the SPAD array may put constraints on the separation of one guard ring from another, i.e. the minimum separation of SPADs. This leads to the situation where the area of the array becomes dominated by guard ring and separation regions rather than optically receptive p-n junctions. The fill factor is made worse when circuitry must be included within the array, as this adds further separation between optically receptive regions. One method to mitigate this issue is to increase the active area of each SPAD in the array such that guard rings and separation are no longer dominant; however, for CMOS-integrated SPADs the rate of erroneous detections caused by dark counts increases as the diode size increases. Geometric improvements One of the first methods to increase fill factors in arrays of circular SPADs was to offset the alignment of alternate rows such that the curve of one SPAD partially uses the area between the two SPADs on an adjacent row. This was effective but complicated the routing and layout of the array. To address fill factor limitations within SPAD arrays formed of circular SPADs, other shapes are utilised, as these are known to have higher maximum area values within a typically square pixel area and have higher packing ratios. A square SPAD within a square pixel achieves the highest fill factor; however, the sharp corners of this geometry are known to cause premature breakdown of the device despite a guard ring, and consequently produce SPADs with high dark count rates. To compromise, square SPADs with sufficiently rounded corners have been fabricated. These are termed Fermat-shaped SPADs, while the shape itself is a super-ellipse or a Lamé curve. This nomenclature is common in the SPAD literature; however, the Fermat curve refers to a special case of the super-ellipse that puts restrictions on the ratio of the shape's length, "a", and width, "b" (they must be equal, i.e. a/b = 1), and restricts the degree of the curve, "n", to even integers (2, 4, 6, 8, etc.). The degree "n" controls the curvature of the shape's corners.
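As a rough illustration of the corner-rounding trade-off, the sketch below evaluates the fill factor of a super-ellipse (Lamé curve) active area, |x/a|^n + |y/b|^n = 1, inside a square pixel, using the closed-form area 4ab·Γ(1+1/n)²/Γ(1+2/n). The pixel pitch, guard-ring margin and the particular values of n are illustrative assumptions rather than values from any reported device.

```python
from math import gamma

def superellipse_area(a, b, n):
    """Area enclosed by |x/a|^n + |y/b|^n = 1 (n = 2 gives an ellipse, large n tends to a rectangle)."""
    return 4.0 * a * b * gamma(1.0 + 1.0 / n) ** 2 / gamma(1.0 + 2.0 / n)

pitch = 10.0   # assumed pixel pitch in micrometres (illustrative)
margin = 1.0   # assumed guard-ring / separation margin per side (illustrative)
a = b = (pitch - 2.0 * margin) / 2.0  # half-width of the active region

for n in (2, 4, 8, 16):
    area = superellipse_area(a, b, n)
    print(f"n = {n:2d}: active area {area:6.2f} um^2, fill factor {area / pitch**2:.1%}")

# n = 2 is the circular/elliptical SPAD; increasing n rounds the shape towards
# a square, raising the fill factor at the cost of sharper (noisier) corners.
```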
Ideally, to optimise the shape of the diode for both low noise and a high fill factor, the shape's parameters should be free of these restrictions. To minimise the spacing between SPAD active areas, researchers have removed all active circuitry from the arrays and have also explored the use of NMOS-only CMOS SPAD arrays to remove SPAD guard-ring to PMOS n-well spacing rules. This is of benefit but is limited by routing distances and congestion of routing to the central SPADs for larger arrays. The concept has been extended to develop arrays that use clusters of SPADs in so-called mini-SiPM arrangements, whereby a smaller array is provided with its active circuitry at one edge, allowing a second small array to be abutted on a different edge. This reduced the routing difficulties by keeping the number of diodes in the cluster manageable and creating the required number of SPADs in total from collections of those clusters. A significant jump in fill factor and array pixel pitch was achieved by sharing the deep n-well of the SPADs in CMOS processes, and more recently also sharing portions of the guard-ring structure. This removed one of the major guard-ring to guard-ring separation rules and allowed the fill factor to increase towards 60 or 70%. The n-well and guard-ring sharing idea has been crucial in efforts towards lowering pixel pitch and increasing the total number of diodes in the array. Recently, SPAD pitches have been reduced to 3.0 μm and 2.2 μm. Porting a concept from photodiodes and APDs, researchers have also investigated the use of drift electric fields within the CMOS substrate to attract photo-generated carriers towards a SPAD's active p-n junction. By doing so, a large optical collection area can be achieved with a smaller SPAD region. Another concept ported from CMOS image sensor technologies is the exploration of stacked p-n junctions similar to Foveon sensors. The idea is that higher-energy (blue) photons tend to be absorbed at a shallow absorption depth, i.e. near the silicon surface. Red and infra-red photons (lower energy) travel deeper into the silicon. If there is a junction at that depth, red and IR sensitivity can be improved. IC fabrication improvements With the advancement of 3D IC technologies, i.e. stacking of integrated circuits, the fill factor could be enhanced further by allowing the top die to be optimised for a high fill-factor SPAD array, and the lower die for readout circuits and signal processing. As small-dimension, high-speed transistor processes may require different optimisations than optically sensitive diodes, 3D ICs allow the layers to be optimised separately. Pixel-level optical improvements As with CMOS image sensors, micro-lenses can be fabricated on the SPAD pixel array to focus light into the centre of the SPAD. As with a single SPAD, this allows light to hit only the sensitive regions and avoid both the guard ring and any routing that is needed within the array. This has also recently included Fresnel-type lenses. Pixel pitch The above fill-factor enhancement methods, mostly concentrating on SPAD geometry along with other advancements, have recently led SPAD arrays to push the 1-megapixel barrier. While this lags CMOS image sensors (with pitches now below 0.8 μm), it is a product of both the youth of the research field (with CMOS SPADs introduced in 2003) and the complications of high voltages, avalanche multiplication within the silicon and the required spacing rules.
Comparison with APDs While both APDs and SPADs are semiconductor p-n junctions that are heavily reverse biased, the principal difference in their properties derives from their different biasing points on the reverse I-V characteristic, i.e. the reverse voltage applied to their junction. An APD, in comparison to a SPAD, is not biased above its breakdown voltage. This is because the multiplication of charge carriers is known to occur before the breakdown of the device, and this is utilised to achieve a stable gain that varies with the applied voltage. For optical detection applications, the resulting avalanche and subsequent current in its biasing circuit is linearly related to the optical signal intensity. The APD is therefore useful for achieving moderate up-front amplification of low-intensity optical signals, but is often combined with a trans-impedance amplifier (TIA), as the APD's output is a current rather than the voltage of a typical amplifier. The resultant signal is a non-distorted, amplified version of the input, allowing for the measurement of complex processes that modulate the amplitude of the incident light. The internal multiplication gain factors for APDs vary by application; however, typical values are of the order of a few hundred. The avalanche of carriers is not divergent in this operating region, while the avalanche present in SPADs quickly builds into a run-away (divergent) condition. In comparison, SPADs operate at a bias voltage above the breakdown voltage. This is such a highly unstable above-breakdown regime that a single photon or a single dark-current electron can trigger a significant avalanche of carriers. The semiconductor p-n junction breaks down completely, and a significant current is developed. A single photon can trigger a current spike equivalent to billions of billions of electrons per second (this being dependent on the physical size of the device and its bias voltage). This allows subsequent electronic circuits to easily count such trigger events. As the device produces a trigger event, the concept of gain is not strictly applicable. However, as the photon detection efficiency (PDE) of SPADs varies with the reverse bias voltage, gain, in a general conceptual sense, can be used to distinguish devices that are heavily biased and therefore highly sensitive from those that are lightly biased and therefore of lower sensitivity. While APDs can amplify an input signal preserving any changes in amplitude, SPADs distort the signal into a series of trigger or pulse events. The output can still be treated as proportional to the input signal intensity; however, it is now transformed into the frequency of trigger events, i.e. pulse frequency modulation (PFM). Pulses can be counted, giving an indication of the input signal's optical intensity, while pulses can trigger timing circuits to provide accurate time-of-arrival measurements. One crucial issue present in APDs is multiplication noise induced by the statistical variation of the avalanche multiplication process. This leads to a corresponding noise factor on the output amplified photocurrent. Statistical variation in the avalanche is also present in SPAD devices; however, due to the runaway process it often manifests as timing jitter on the detection event.
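A back-of-the-envelope check of the carrier numbers mentioned above: the sketch below converts an avalanche current into an equivalent electron flux and into the charge delivered in one pulse. The 1 mA current and 10 ns pulse duration are illustrative assumptions; as the text notes, the actual figure depends on device size and bias.

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

avalanche_current = 1e-3    # assumed steady avalanche current, 1 mA (illustrative)
pulse_duration = 10e-9      # assumed pulse length before quenching, 10 ns (illustrative)

electrons_per_second = avalanche_current / E_CHARGE
electrons_per_pulse = electrons_per_second * pulse_duration

print(f"electron flux:    {electrons_per_second:.2e} electrons/s")
print(f"charge per pulse: {avalanche_current * pulse_duration:.1e} C "
      f"({electrons_per_pulse:.1e} electrons)")
# A milliampere-scale avalanche corresponds to roughly 6e15 electrons per second,
# so the per-second figure scales directly with the assumed current and device size.
```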
Along with their bias region, there are also structural differences between APDs and SPADs, principally due to the increased reverse bias voltages required and the need for SPADs to have a long quiescent period between noise trigger events to be suitable for the single-photon level signals to be measured. History, development and early pioneers The history and development of SPADs and APDs shares a number of important points with the development of solid-state technologies such as diodes and early p–n junction transistors (particularly war-time efforts at Bell Labs). John Townsend in 1901 and 1903 investigated the ionisation of trace gases within vacuum tubes, finding that as the electric potential increased, gaseous atoms and molecules could become ionised by the kinetic energy of free electrons accelerated through the electric field. The newly liberated electrons were then themselves accelerated by the field, producing new ionisations once their kinetic energy had reached sufficient levels. This theory was later instrumental in the development of the thyratron and the Geiger-Mueller tube. The Townsend discharge was also instrumental as a base theory for electron multiplication phenomena (both DC and AC) within both silicon and germanium. However, the major advances in the early discovery and utilisation of the avalanche gain mechanism were a product of the study of Zener breakdown, related (avalanche) breakdown mechanisms and structural defects in early silicon and germanium transistor and p–n junction devices. These defects were called 'microplasmas' and are critical in the history of APDs and SPADs. Likewise, investigation of the light detection properties of p–n junctions is crucial, especially the early 1940s findings of Russell Ohl. Light detection in semiconductors and solids through the internal photoelectric effect is older, with Foster Nix pointing to the work of Gudden and Pohl in the 1920s, who used the phrases "primary" and "secondary" to distinguish the internal and external photoelectric effects respectively. In the 1950s and 1960s, significant effort was made to reduce the number of microplasma breakdown and noise sources, with artificial microplasmas being fabricated for study. It became clear that the avalanche mechanism could be useful for signal amplification within the diode itself, as both light and alpha particles were used for the study of these devices and breakdown mechanisms. Since the early 2000s, SPADs have been implemented within CMOS processes. This has radically increased their performance (dark count rate, jitter, array pixel pitch, etc.) and has leveraged the analog and digital circuits that can be implemented alongside these devices. Notable circuits include photon counting using fast digital counters, photon timing using both time-to-digital converters (TDCs) and time-to-analog converters (TACs), passive quenching circuits using either NMOS or PMOS transistors in place of poly-silicon resistors, active quenching and reset circuits for high counting rates, and many on-chip digital signal processing blocks. Such devices, reaching optical fill factors of >70%, with >1024 SPADs, DCRs < 10 Hz and jitter values in the 50 ps region, are now available with dead times of 1–2 ns. Recent devices have leveraged 3D-IC technologies such as through-silicon vias (TSVs) to present a high-fill-factor, SPAD-optimised top CMOS layer (90 nm or 65 nm node) with a dedicated signal processing and readout CMOS layer (45 nm node).
Significant advancements in the noise terms for SPADs have been obtained by silicon process modelling tools such as TCAD, where guard rings, junction depths and device structures and shapes can be optimised prior to validation by experimental SPAD structures. See also Avalanche photodiode (APD) Oversampled binary image sensor p–n junction Silicon photomultiplier (SiPM) References Optical devices Optical diodes Particle detectors Photodetectors Single-photon detectors
Single-photon avalanche diode
[ "Materials_science", "Technology", "Engineering" ]
5,830
[ "Glass engineering and science", "Particle detectors", "Optical devices", "Measuring instruments" ]
972,800
https://en.wikipedia.org/wiki/Abyssal%20plain
An abyssal plain is an underwater plain on the deep ocean floor, usually found at depths between 3,000 and 6,000 metres. Lying generally between the foot of a continental rise and a mid-ocean ridge, abyssal plains cover more than 50% of the Earth's surface. They are among the flattest, smoothest, and least explored regions on Earth. Abyssal plains are key geologic elements of oceanic basins (the other elements being an elevated mid-ocean ridge and flanking abyssal hills). The creation of the abyssal plain is the result of the spreading of the seafloor (plate tectonics) and the melting of the lower oceanic crust. Magma rises from above the asthenosphere (a layer of the upper mantle), and as this basaltic material reaches the surface at mid-ocean ridges, it forms new oceanic crust, which is constantly pulled sideways by spreading of the seafloor. Abyssal plains result from the blanketing of an originally uneven surface of oceanic crust by fine-grained sediments, mainly clay and silt. Much of this sediment is deposited by turbidity currents that have been channelled from the continental margins along submarine canyons into deeper water. The rest is composed chiefly of pelagic sediments. Metallic nodules are common in some areas of the plains, with varying concentrations of metals, including manganese, iron, nickel, cobalt, and copper. There are also small amounts of carbon, nitrogen, phosphorus and silicon, derived from material that sinks from the surface waters and decomposes. Owing in part to their vast size, abyssal plains are believed to be major reservoirs of biodiversity. They also exert significant influence upon ocean carbon cycling, dissolution of calcium carbonate, and atmospheric CO2 concentrations over time scales of a hundred to a thousand years. The structure of abyssal ecosystems is strongly influenced by the rate of flux of food to the seafloor and the composition of the material that settles. Factors such as climate change, fishing practices, and ocean fertilization have a substantial effect on patterns of primary production in the euphotic zone. Animals absorb dissolved oxygen from the oxygen-poor waters. Much of the dissolved oxygen in abyssal waters entered the ocean at the surface in polar regions and sank with the cold, dense water long ago. Due to the scarcity of oxygen, abyssal plains are inhospitable for organisms that would flourish in the oxygen-enriched waters above. Deep-sea coral reefs are mainly found at depths of 3,000 meters and deeper, in the abyssal and hadal zones. Abyssal plains were not recognized as distinct physiographic features of the sea floor until the late 1940s and, until recently, none had been studied on a systematic basis. They are poorly preserved in the sedimentary record, because they tend to be consumed by the subduction process. Due to darkness and a water pressure that can reach about 750 times atmospheric pressure (76 megapascals), abyssal plains are not well explored. Oceanic zones The ocean can be conceptualized as zones, depending on depth, and presence or absence of sunlight. Nearly all life forms in the ocean depend on the photosynthetic activities of phytoplankton and other marine plants to convert carbon dioxide into organic carbon, which is the basic building block of organic matter. Photosynthesis in turn requires energy from sunlight to drive the chemical reactions that produce organic carbon. The stratum of the water column nearest the surface of the ocean (sea level) is referred to as the photic zone. The photic zone can be subdivided into two different vertical regions.
The uppermost portion of the photic zone, where there is adequate light to support photosynthesis by phytoplankton and plants, is referred to as the euphotic zone (also referred to as the epipelagic zone, or surface zone). The lower portion of the photic zone, where the light intensity is insufficient for photosynthesis, is called the dysphotic zone (dysphotic means "poorly lit" in Greek). The dysphotic zone is also referred to as the mesopelagic zone, or the twilight zone. Its lowermost boundary is at a thermocline which, in the tropics, generally lies between 200 and 1,000 metres. The euphotic zone is somewhat arbitrarily defined as extending from the surface to the depth where the light intensity is approximately 0.1–1% of surface sunlight irradiance, depending on season, latitude and degree of water turbidity. In the clearest ocean water, the euphotic zone may extend to a depth of about 150 metres, or rarely, up to 200 metres. Dissolved substances and solid particles absorb and scatter light, and in coastal regions the high concentration of these substances causes light to be attenuated rapidly with depth. In such areas the euphotic zone may be only a few tens of metres deep or less. The dysphotic zone, where light intensity is considerably less than 1% of surface irradiance, extends from the base of the euphotic zone to about 1,000 metres. Extending from the bottom of the photic zone down to the seabed is the aphotic zone, a region of perpetual darkness. Since the average depth of the ocean is about 4,300 metres, the photic zone represents only a tiny fraction of the ocean's total volume. However, due to its capacity for photosynthesis, the photic zone has the greatest biodiversity and biomass of all oceanic zones. Nearly all primary production in the ocean occurs here. Life forms which inhabit the aphotic zone are often capable of movement upwards through the water column into the photic zone for feeding. Otherwise, they must rely on material sinking from above, or find another source of energy and nutrition, such as occurs in chemosynthetic archaea found near hydrothermal vents and cold seeps. The aphotic zone can be subdivided into three different vertical regions, based on depth and temperature. First is the bathyal zone, extending from a depth of 1,000 metres down to 3,000 metres, with water temperature decreasing as depth increases. Next is the abyssal zone, extending from a depth of 3,000 metres down to 6,000 metres. The final zone includes the deep oceanic trenches, and is known as the hadal zone. This, the deepest oceanic zone, extends from a depth of 6,000 metres down to approximately 11,034 metres, at the very bottom of the Mariana Trench, the deepest point on planet Earth. Abyssal plains are typically in the abyssal zone, at depths from 3,000 to 6,000 metres. Formation Oceanic crust, which forms the bedrock of abyssal plains, is continuously being created at mid-ocean ridges (a type of divergent boundary) by a process known as decompression melting. Plume-related decompression melting of solid mantle is responsible for creating ocean islands like the Hawaiian islands, as well as the ocean crust at mid-ocean ridges. This phenomenon is also the most common explanation for flood basalts and oceanic plateaus (two types of large igneous provinces). Decompression melting occurs when the upper mantle is partially melted into magma as it moves upwards under mid-ocean ridges.
This upwelling magma then cools and solidifies by conduction and convection of heat to form new oceanic crust. Accretion occurs as mantle is added to the growing edges of a tectonic plate, usually associated with seafloor spreading. The age of oceanic crust is therefore a function of distance from the mid-ocean ridge. The youngest oceanic crust is at the mid-ocean ridges, and it becomes progressively older, cooler and denser as it migrates outwards from the mid-ocean ridges as part of the process called mantle convection. The lithosphere, which rides atop the asthenosphere, is divided into a number of tectonic plates that are continuously being created and consumed at their opposite plate boundaries. Oceanic crust and tectonic plates are formed and move apart at mid-ocean ridges. Abyssal hills are formed by stretching of the oceanic lithosphere. Consumption or destruction of the oceanic lithosphere occurs at oceanic trenches (a type of convergent boundary, also known as a destructive plate boundary) by a process known as subduction. Oceanic trenches are found at places where the oceanic lithospheric slabs of two different plates meet, and the denser (older) slab begins to descend back into the mantle. At the consumption edge of the plate (the oceanic trench), the oceanic lithosphere has thermally contracted to become quite dense, and it sinks under its own weight in the process of subduction. The subduction process consumes older oceanic lithosphere, so oceanic crust is seldom more than 200 million years old. The overall process of repeated cycles of creation and destruction of oceanic crust is known as the Supercontinent cycle, first proposed by Canadian geophysicist and geologist John Tuzo Wilson. New oceanic crust, closest to the mid-oceanic ridges, is mostly basalt at shallow levels and has a rugged topography. The roughness of this topography is a function of the rate at which the mid-ocean ridge is spreading (the spreading rate). Magnitudes of spreading rates vary quite significantly. Typical values for fast-spreading ridges are greater than 100 mm/yr, while slow-spreading ridges are typically less than 20 mm/yr. Studies have shown that the slower the spreading rate, the rougher the new oceanic crust will be, and vice versa. It is thought this phenomenon is due to faulting at the mid-ocean ridge when the new oceanic crust was formed. These faults pervading the oceanic crust, along with their bounding abyssal hills, are the most common tectonic and topographic features on the surface of the Earth. The process of seafloor spreading helps to explain the concept of continental drift in the theory of plate tectonics. The flat appearance of mature abyssal plains results from the blanketing of this originally uneven surface of oceanic crust by fine-grained sediments, mainly clay and silt. Much of this sediment is deposited from turbidity currents that have been channeled from the continental margins along submarine canyons down into deeper water. The remainder of the sediment comprises chiefly dust (clay particles) blown out to sea from land, and the remains of small marine plants and animals which sink from the upper layer of the ocean, known as pelagic sediments. The total sediment deposition rate in remote areas is estimated at two to three centimeters per thousand years. Sediment-covered abyssal plains are less common in the Pacific Ocean than in other major ocean basins because sediments from turbidity currents are trapped in oceanic trenches that border the Pacific Ocean. 
Abyssal plains are typically covered by deep sea, but during parts of the Messinian salinity crisis much of the Mediterranean Sea's abyssal plain was exposed to air as an empty deep hot dry salt-floored sink. Discovery The landmark scientific expedition (December 1872 – May 1876) of the British Royal Navy survey ship HMS Challenger yielded a tremendous amount of bathymetric data, much of which has been confirmed by subsequent researchers. Bathymetric data obtained during the course of the Challenger expedition enabled scientists to draw maps, which provided a rough outline of certain major submarine terrain features, such as the edge of the continental shelves and the Mid-Atlantic Ridge. This discontinuous set of data points was obtained by the simple technique of taking soundings by lowering long lines from the ship to the seabed. The Challenger expedition was followed by the 1879–1881 expedition of the Jeannette, led by United States Navy Lieutenant George Washington DeLong. The team sailed across the Chukchi Sea and recorded meteorological and astronomical data in addition to taking soundings of the seabed. The ship became trapped in the ice pack near Wrangel Island in September 1879, and was ultimately crushed and sunk in June 1881. The Jeannette expedition was followed by the 1893–1896 Arctic expedition of Norwegian explorer Fridtjof Nansen aboard the Fram, which proved that the Arctic Ocean was a deep oceanic basin, uninterrupted by any significant land masses north of the Eurasian continent. Beginning in 1916, Canadian physicist Robert William Boyle and other scientists of the Anti-Submarine Detection Investigation Committee (ASDIC) undertook research which ultimately led to the development of sonar technology. Acoustic sounding equipment was developed which could be operated much more rapidly than the sounding lines, thus enabling the German Meteor expedition aboard the German research vessel Meteor (1925–27) to take frequent soundings on east-west Atlantic transects. Maps produced from these techniques show the major Atlantic basins, but the depth precision of these early instruments was not sufficient to reveal the flat featureless abyssal plains. As technology improved, measurement of depth, latitude and longitude became more precise and it became possible to collect more or less continuous sets of data points. This allowed researchers to draw accurate and detailed maps of large areas of the ocean floor. Use of a continuously recording fathometer enabled Tolstoy & Ewing in the summer of 1947 to identify and describe the first abyssal plain. This plain, south of Newfoundland, is now known as the Sohm Abyssal Plain. Following this discovery many other examples were found in all the oceans. The Challenger Deep is the deepest surveyed point of all of Earth's oceans; it is at the south end of the Mariana Trench near the Mariana Islands group. The depression is named after HMS Challenger, whose researchers made the first recordings of its depth on 23 March 1875 at station 225. The reported depth was 4,475 fathoms (8184 meters) based on two separate soundings. On 1 June 2009, sonar mapping of the Challenger Deep by the Simrad EM120 multibeam sonar bathymetry system aboard the R/V Kilo Moana indicated a maximum depth of 10971 meters (6.82 miles). The sonar system uses phase and amplitude bottom detection, with an accuracy of better than 0.2% of water depth (this is an error of about 22 meters at this depth). 
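Two of the figures quoted in this article lend themselves to a quick numerical check: the abyssal-plain pressures mentioned in the lead (about 750 atmospheres, or 76 MPa) and the roughly 22-metre error implied by the sonar system's 0.2%-of-depth accuracy. The sketch below uses the rough hydrostatic approximation P ≈ ρgh with a constant seawater density, which slightly understates the pressure at the greatest depths; the density value and the chosen depths are illustrative assumptions.

```python
RHO_SEAWATER = 1025.0   # assumed mean seawater density, kg/m^3 (illustrative)
G = 9.81                # gravitational acceleration, m/s^2
ATM = 101_325.0         # one standard atmosphere in pascals

def hydrostatic_pressure(depth_m):
    """Rough pressure at depth: hydrostatic column plus one atmosphere at the surface."""
    return RHO_SEAWATER * G * depth_m + ATM

for depth in (3_000, 6_000, 7_500, 10_971):
    p = hydrostatic_pressure(depth)
    print(f"{depth:6d} m: {p/1e6:6.1f} MPa  (~{p/ATM:5.0f} atm)")

# Depth uncertainty of a multibeam sonar accurate to 0.2% of water depth,
# evaluated at the Challenger Deep sounding quoted above:
print("sonar error at 10,971 m:", 0.002 * 10_971, "m")
```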
Terrain features Hydrothermal vents A rare but important terrain feature found in the bathyal, abyssal and hadal zones is the hydrothermal vent. In contrast to the approximately 2 °C ambient water temperature at these depths, water emerges from these vents at temperatures ranging from 60 °C up to as high as 464 °C. Due to the high barometric pressure at these depths, water may exist in either its liquid form or as a supercritical fluid at such temperatures. At a barometric pressure of 218 atmospheres, the critical point of water is 375 °C. At a depth of 3,000 meters, the barometric pressure of sea water is more than 300 atmospheres (as salt water is denser than fresh water). At this depth and pressure, seawater becomes supercritical at a temperature of 407 °C. However, the increase in salinity at this depth pushes the water closer to its critical point. Thus, water emerging from the hottest parts of some hydrothermal vents, black smokers and submarine volcanoes can be a supercritical fluid, possessing physical properties between those of a gas and those of a liquid. Sister Peak (Comfortless Cove Hydrothermal Field, elevation −2996 m), Shrimp Farm and Mephisto (Red Lion Hydrothermal Field, elevation −3047 m) are three hydrothermal vents of the black smoker category on the Mid-Atlantic Ridge near Ascension Island. They are presumed to have been active since an earthquake shook the region in 2002. These vents have been observed to vent phase-separated, vapor-type fluids. In 2008, sustained exit temperatures of up to 407 °C were recorded at one of these vents, with a peak recorded temperature of up to 464 °C. These thermodynamic conditions exceed the critical point of seawater, and are the highest temperatures recorded to date from the seafloor. This is the first reported evidence for direct magmatic-hydrothermal interaction on a slow-spreading mid-ocean ridge. The initial stages of a vent chimney begin with the deposition of the mineral anhydrite. Sulfides of copper, iron, and zinc then precipitate in the chimney gaps, making it less porous over the course of time. Vent growths on the order of 30 cm (1 ft) per day have been recorded. An April 2007 exploration of the deep-sea vents off the coast of Fiji found those vents to be a significant source of dissolved iron (see iron cycle). Hydrothermal vents in the deep ocean typically form along the mid-ocean ridges, such as the East Pacific Rise and the Mid-Atlantic Ridge. These are locations where two tectonic plates are diverging and new crust is being formed. Cold seeps Another unusual feature found in the abyssal and hadal zones is the cold seep, sometimes called a cold vent. This is an area of the seabed where seepage of hydrogen sulfide, methane and other hydrocarbon-rich fluids occurs, often in the form of a deep-sea brine pool. The first cold seeps were discovered in 1983, at a depth of 3200 meters in the Gulf of Mexico. Since then, cold seeps have been discovered in many other areas of the World Ocean, including the Monterey Submarine Canyon just off Monterey Bay, California, the Sea of Japan, off the Pacific coast of Costa Rica, off the Atlantic coast of Africa, off the coast of Alaska, and under an ice shelf in Antarctica. Biodiversity Though the plains were once assumed to be vast, desert-like habitats, research over the past decade or so shows that they teem with a wide variety of microbial life.
However, ecosystem structure and function at the deep seafloor have historically been poorly studied because of the size and remoteness of the abyss. Recent oceanographic expeditions conducted by an international group of scientists from the Census of Diversity of Abyssal Marine Life (CeDAMar) have found an extremely high level of biodiversity on abyssal plains, with up to 2000 species of bacteria, 250 species of protozoans, and 500 species of invertebrates (worms, crustaceans and molluscs) typically found at single abyssal sites. New species make up more than 80% of the thousands of seafloor invertebrate species collected at any abyssal station, highlighting our heretofore poor understanding of abyssal diversity and evolution. Richer biodiversity is associated with areas of known phytodetritus input and higher organic carbon flux. Abyssobrotula galatheae, a species of cusk eel in the family Ophidiidae, is among the deepest-living species of fish. In 1970, one specimen was trawled from a depth of 8370 meters in the Puerto Rico Trench. The animal was dead, however, upon arrival at the surface. In 2008, the hadal snailfish (Pseudoliparis amblystomopsis) was observed and recorded at a depth of 7700 meters in the Japan Trench. In December 2014, a type of snailfish was filmed at a depth of 8145 meters, followed in May 2017 by another snailfish filmed at 8178 meters. These are, to date, the deepest living fish ever recorded. Other fish of the abyssal zone include the fishes of the family Ipnopidae, which includes the abyssal spiderfish (Bathypterois longipes), tripodfish (Bathypterois grallator), feeler fish (Bathypterois longifilis), and the black lizardfish (Bathysauropsis gracilis). Some members of this family have been recorded from depths of more than 6000 meters. CeDAMar scientists have demonstrated that some abyssal and hadal species have a cosmopolitan distribution. One example is the protozoan foraminiferans, certain species of which are distributed from the Arctic to the Antarctic. Other faunal groups, such as the polychaete worms and isopod crustaceans, appear to be endemic to certain specific plains and basins. Many apparently unique taxa of nematode worms have also been recently discovered on abyssal plains. This suggests that the deep ocean has fostered adaptive radiations. The taxonomic composition of the nematode fauna in the abyssal Pacific is similar, but not identical, to that of the North Atlantic. Eleven of the 31 described species of Monoplacophora (a class of mollusks) live below 2000 meters. Of these 11 species, two live exclusively in the hadal zone. The greatest number of monoplacophorans are from the eastern Pacific Ocean along the oceanic trenches. However, no abyssal monoplacophorans have yet been found in the Western Pacific and only one abyssal species has been identified in the Indian Ocean. Of the 922 known species of chitons (from the Polyplacophora class of mollusks), 22 species (2.4%) are reported to live below 2000 meters and two of them are restricted to the abyssal plain. Although genetic studies are lacking, at least six of these species are thought to be eurybathic (capable of living in a wide range of depths), having been reported as occurring from the sublittoral to abyssal depths.
A large number of the polyplacophorans from great depths are herbivorous or xylophagous, which could explain the difference between the distribution of monoplacophorans and polyplacophorans in the world's oceans. Peracarid crustaceans, including isopods, are known to form a significant part of the macrobenthic community that is responsible for scavenging on large food falls onto the sea floor. In 2000, scientists of the Diversity of the deep Atlantic benthos (DIVA 1) expedition (cruise M48/1 of the German research vessel RV Meteor III) discovered and collected three new species of the Asellota suborder of benthic isopods from the abyssal plains of the Angola Basin in the South Atlantic Ocean. In 2003, De Broyer et al. collected some 68,000 peracarid crustaceans from 62 species from baited traps deployed in the Weddell Sea, Scotia Sea, and off the South Shetland Islands. They found that about 98% of the specimens belonged to the amphipod superfamily Lysianassoidea, and 2% to the isopod family Cirolanidae. Half of these species were collected from depths of greater than 1000 meters. In 2005, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) remotely operated vehicle, KAIKO, collected sediment core from the Challenger Deep. 432 living specimens of soft-walled foraminifera were identified in the sediment samples. Foraminifera are single-celled protists that construct shells. There are an estimated 4,000 species of living foraminifera. Out of the 432 organisms collected, the overwhelming majority of the sample consisted of simple, soft-shelled foraminifera, with others representing species of the complex, multi-chambered genera Leptohalysis and Reophax. Overall, 85% of the specimens consisted of soft-shelled allogromiids. This is unusual compared to samples of sediment-dwelling organisms from other deep-sea environments, where the percentage of organic-walled foraminifera ranges from 5% to 20% of the total. Small organisms with hard calciferous shells have trouble growing at extreme depths because the water at that depth is severely lacking in calcium carbonate. The giant (5–20 cm) foraminifera known as xenophyophores are only found at depths of 500–10,000 metres, where they can occur in great numbers and greatly increase animal diversity due to their bioturbation and provision of living habitat for small animals. While similar lifeforms have been known to exist in shallower oceanic trenches (>7,000 m) and on the abyssal plain, the lifeforms discovered in the Challenger Deep may represent independent taxa from those shallower ecosystems. This preponderance of soft-shelled organisms at the Challenger Deep may be a result of selection pressure. Millions of years ago, the Challenger Deep was shallower than it is now. Over the past six to nine million years, as the Challenger Deep grew to its present depth, many of the species present in the sediment of that ancient biosphere were unable to adapt to the increasing water pressure and changing environment. Those species that were able to adapt may have been the ancestors of the organisms currently endemic to the Challenger Deep. Polychaetes occur throughout the Earth's oceans at all depths, from forms that live as plankton near the surface, to the deepest oceanic trenches. The robot ocean probe Nereus observed a 2–3 cm specimen (still unclassified) of polychaete at the bottom of the Challenger Deep on 31 May 2009. There are more than 10,000 described species of polychaetes; they can be found in nearly every marine environment. 
Some species live in the coldest ocean temperatures of the hadal zone, while others can be found in the extremely hot waters adjacent to hydrothermal vents. Within the abyssal and hadal zones, the areas around submarine hydrothermal vents and cold seeps have by far the greatest biomass and biodiversity per unit area. Fueled by the chemicals dissolved in the vent fluids, these areas are often home to large and diverse communities of thermophilic, halophilic and other extremophilic prokaryotic microorganisms (such as those of the sulfide-oxidizing genus Beggiatoa), often arranged in large bacterial mats near cold seeps. In these locations, chemosynthetic archaea and bacteria typically form the base of the food chain. Although the process of chemosynthesis is entirely microbial, these chemosynthetic microorganisms often support vast ecosystems consisting of complex multicellular organisms through symbiosis. These communities are characterized by species such as vesicomyid clams, mytilid mussels, limpets, isopods, giant tube worms, soft corals, eelpouts, galatheid crabs, and alvinocarid shrimp. The deepest seep community discovered thus far is in the Japan Trench, at a depth of 7700 meters. Probably the most important ecological characteristic of abyssal ecosystems is energy limitation. Abyssal seafloor communities are considered to be food limited because benthic production depends on the input of detrital organic material produced in the euphotic zone, thousands of meters above. Most of the organic flux arrives as an attenuated rain of small particles (typically, only 0.5–2% of net primary production in the euphotic zone), which decreases inversely with water depth. The small particle flux can be augmented by the fall of larger carcasses and downslope transport of organic material near continental margins. Exploitation of resources In addition to their high biodiversity, abyssal plains are of great current and future commercial and strategic interest. For example, they may be used for the legal and illegal disposal of large structures such as ships and oil rigs, radioactive waste and other hazardous waste, such as munitions. They may also be attractive sites for deep-sea fishing, and extraction of oil and gas and other minerals. Future deep-sea waste disposal activities that could be significant by 2025 include emplacement of sewage and sludge, carbon sequestration, and disposal of dredge spoils. As fish stocks dwindle in the upper ocean, deep-sea fisheries are increasingly being targeted for exploitation. Because deep sea fish are long-lived and slow growing, these deep-sea fisheries are not thought to be sustainable in the long term given current management practices. Changes in primary production in the photic zone are expected to alter the standing stocks in the food-limited aphotic zone. Hydrocarbon exploration in deep water occasionally results in significant environmental degradation resulting mainly from accumulation of contaminated drill cuttings, but also from oil spills. While the oil blowout involved in the Deepwater Horizon oil spill in the Gulf of Mexico originates from a wellhead only 1500 meters below the ocean surface, it nevertheless illustrates the kind of environmental disaster that can result from mishaps related to offshore drilling for oil and gas. Sediments of certain abyssal plains contain abundant mineral resources, notably polymetallic nodules. 
These potato-sized concretions of manganese, iron, nickel, cobalt, and copper, distributed on the seafloor at depths of greater than 4000 meters, are of significant commercial interest. The area of maximum commercial interest for polymetallic nodule mining (called the Pacific nodule province) lies in international waters of the Pacific Ocean, stretching from 118°–157°, and from 9°–16°N, an area of more than 3 million km2. The abyssal Clarion-Clipperton fracture zone (CCFZ) is an area within the Pacific nodule province that is currently under exploration for its mineral potential. Eight commercial contractors are currently licensed by the International Seabed Authority (an intergovernmental organization established to organize and control all mineral-related activities in the international seabed area beyond the limits of national jurisdiction) to explore nodule resources and to test mining techniques in eight claim areas, each covering 150,000 km2. When mining ultimately begins, each mining operation is projected to directly disrupt 300–800 km2 of seafloor per year and disturb the benthic fauna over an area 5–10 times that size due to redeposition of suspended sediments. Thus, over the 15-year projected duration of a single mining operation, nodule mining might severely damage abyssal seafloor communities over areas of 20,000 to 45,000 km2 (a zone at least the size of Massachusetts). Limited knowledge of the taxonomy, biogeography and natural history of deep sea communities prevents accurate assessment of the risk of species extinctions from large-scale mining. Data acquired from the abyssal North Pacific and North Atlantic suggest that deep-sea ecosystems may be adversely affected by mining operations on decadal time scales. In 1978, a dredge aboard the Hughes Glomar Explorer, operated by the American mining consortium Ocean Minerals Company (OMCO), made a mining track at a depth of 5000 meters in the nodule fields of the CCFZ. In 2004, the French Research Institute for Exploitation of the Sea (IFREMER) conducted the Nodinaut expedition to this mining track (which is still visible on the seabed) to study the long-term effects of this physical disturbance on the sediment and its benthic fauna. Samples taken of the superficial sediment revealed that its physical and chemical properties had not shown any recovery since the disturbance made 26 years earlier. On the other hand, the biological activity measured in the track by instruments aboard the crewed submersible bathyscaphe Nautile did not differ from a nearby unperturbed site. This data suggests that the benthic fauna and nutrient fluxes at the water–sediment interface has fully recovered. List of abyssal plains See also List of oceanic landforms List of submarine topographical features Oceanic ridge Physical oceanography References Bibliography External links Coastal and oceanic landforms Submarine topography Oceanic plateaus Oceanographical terminology Physical oceanography Aquatic ecology
Abyssal plain
[ "Physics", "Biology" ]
6,499
[ "Aquatic ecology", "Ecosystems", "Applied and interdisciplinary physics", "Physical oceanography" ]
973,239
https://en.wikipedia.org/wiki/Ethion
Ethion (C9H22O4P2S4) is an organophosphate insecticide. It is known to affect the neural enzyme acetylcholinesterase and disrupt its function. History Ethion was first registered in the US as an insecticide in the 1950s. Annual usage of ethion since then has varied depending on overall crop yields and weather conditions. For example, 1999 was a very dry year; since the drought reduced yields, the use of ethion was less economically rewarding. Since 1998, risk assessment studies have been conducted by (among others) the EPA (United States Environmental Protection Agency). Risk assessments for ethion were presented at a July 14, 1999 briefing with stakeholders in Florida, which was followed by an opportunity for public comment on risk management for this pesticide. Regulatory review Ethion was one of many substances approved for use based on data from Industrial Bio-Test Laboratories, which was later discovered to have engaged in extensive scientific misconduct and fraud, prompting the Food and Agriculture Organization and World Health Organization to recommend ethion's reanalysis in 1982. Synthesis Ethion is produced under controlled pH conditions by reacting dibromomethane with diethyl dithiophosphoric acid in ethanol. Other methods of synthesis include the reaction of methylene bromide and sodium diethyl phosphorodithioate or the reaction of diethyl dithiophosphoric acid and formaldehyde. Reactivity and mechanism Ethion is a small lipophilic molecule. This promotes rapid passive absorption across cell membranes. Thus absorption through skin, lungs, and the gut into the blood occurs via passive diffusion. Ethion is metabolized in the liver via desulfurization, producing the metabolite ethion monoxon. This transformation leads to liver damage. Ethion monoxon is an inhibitor of the neuroenzyme cholinesterase (ChE), which normally facilitates nerve impulse transmission; secondary damage thus occurs in the brain. Because the chemical structure of ethion monoxon is similar to that of other organophosphates, its mechanism of poisoning is thought to be the same. Inhibition of cholinesterase by ethion monoxon can be described as a two-step process. A hydroxyl group (OH) from a serine residue in the active site of ChE is phosphorylated by the organophosphate, causing enzyme inhibition and preventing the serine hydroxyl group from participating in the hydrolysis of the neurotransmitter acetylcholine (ACh). The phosphorylated form of the enzyme is highly stable, and depending on the R and R′ groups attached to phosphorus, this inhibition can be either reversible or irreversible. Metabolism Goats exposed to ethion showed clear distinctions in excretion, absorption half-life and bioavailability. These differences depend on the method of administration. Intravenous injection resulted in a half-life of 2 hours, while oral administration resulted in a half-life of 10 hours. Dermal administration led to a half-life of 85 hours. These differences in half-life can be correlated with differences in bioavailability. The bioavailability of ethion via oral administration was less than 5%, whereas the bioavailability via dermal administration was 20%. In a study conducted among rats, it was found that ethion is readily metabolized after oral administration. Rat urine samples contained four to six polar water-soluble ethion metabolites. A study among chickens revealed more about ethion distribution in the body. 
In a representative study, liver, muscle, and fat tissues were examined after 10 days of ethion exposure. In all three cases, ethion or ethion derivatives were present, indicating that it is widely spread in the body. Chicken eggs were also investigated, and it was found that the egg white reaches a steady ethion derivative concentration after four days, while the concentration in yolk was still rising after ten days. In the investigated chickens, about six polar water-soluble metabolites were also found to be present. In a study performed on goats, heart and kidney tissues were investigated after a period of ethion exposure, and in these tissues, ethion-derivatives were found. This study indicates that the highest levels were found in the liver and kidneys, and the lowest levels in fat. Derivatives were also detected in the goats' milk. Biotransformation Biotransformation of ethion occurs in the liver, where it undergoes desulfurisation to form the active metabolite ethion monoxon. The enzyme cytochrome P-450 catalyzes this step. Because it contains an active oxygen, ethion monoxon is an inhibitor of the neuroenzyme cholinesterase (ChE). ChE can dephosphorylate organophosphate, so in the next step of the biotransformation, ethion monoxon is dephosphorylated and ChE is phosphorylated. The subsequent step in the biotransformation process is not yet completely known, yet it is understood that this happens via esterases in the blood and liver (1). Besides the dephosphorylation of ethion monoxon by ChE, it is likely that the ethion monoxon is partially oxidized toward ethion dioxon. After solvent partitioning of urine from rats that had been fed ethion, it became clear that the metabolites found in the urine were 99% dissolved in the aqueous phase. This means that only non-significant levels (<1 %) were present in the organic phase and that the metabolites are very hydrophilic. In a parallel study in goats, radioactive labeled ethion with incorporated 14C was used. After identification of the 14C residues in organs of the goats, such as the liver, heart, kidneys, muscles and fat tissue, it appeared that 0.03 ppm or less of the 14C compounds present was non-metabolized ethion. The metabolites ethion monoxon and ethion dioxon were also not detected in any samples with a substantial threshold (0.005-0.01 ppm). In total, 64% to 75% of the metabolites from the tissues were soluble in methanol. After the addition of a protease, another 17% to 32% were solubilized. In the aqueous phase, at least four different radioactive metabolites were found. However, characterization of these compounds was repeatedly unsuccessful due to their high volatility. One compound was trapped in the kidney and was identified as formaldehyde. This is an indication that the 14C of ethion is used in the formation of natural products. Toxicity Summary of toxicity Exposure to ethion can happen by ingestion, absorption via the skin, and inhalation. Exposure can lead to vomiting, diarrhea, headache, sweating, and confusion. Severe poisoning might lead to fatigue, involuntary muscle contractions, loss of reflexes and slurred speech. In even more severe cases, death will be the result of respiratory failure or cardiac arrest. When being exposed through skin contact, the lowest dose to kill a rat was found to be 150 mg/kg for males and 50 mg/kg for females. The minimum survival time was 6 hours for female rats and 3 hours for male rats, and the maximum time of death was 3 days for females and 7 days for males. 
The LD50 was 245 mg/kg for male rats and 62 mg/kg for female rats. When exposure was through ingestion, doses of 10 mg/kg/day and 2 mg/kg/day showed no histopathological effect on the respiratory tract of rats, nor did 13-week testing on dogs (8.25 mg/kg/day). The oral LD50 for pure ethion in rats is 208 mg/kg, and for technical-grade ethion 21 to 191 mg/kg. Other reported oral LD50 values are 40 mg/kg in mice and guinea pigs. Furthermore, ethion is very toxic by inhalation: one study of technical-grade ethion found an LC50 of 2.31 mg/m3 in male rats and 0.45 mg/m3 in female rats. Other data reported a 4-hour LC50 in rats of 0.864 mg/L. Acute toxicity Ethion causes toxic effects following absorption via the skin, ingestion, and inhalation, and may cause burns when skin is exposed to it. According to Extoxnet, any form of exposure could result in the following symptoms, which may develop within 12 hours: pallor, nausea, vomiting, diarrhea, abdominal cramps, headache, dizziness, eye pain, blurred vision, constriction or dilation of the pupils, tears, salivation, sweating, and confusion. Severe poisoning may result in impaired coordination, loss of reflexes, slurred speech, fatigue and weakness, tremors of the tongue and eyelids, and involuntary muscle contractions, and can also lead to paralysis and respiratory problems. In more severe cases, ethion poisoning can lead to involuntary discharge of urine or feces, irregular heartbeats, psychosis, loss of consciousness, and, in some cases, coma or death. Death is the result of respiratory failure or cardiac arrest. Hypothermia, AV heart block and arrhythmias are also possible consequences of ethion poisoning. Ethion may also lead to the delayed symptoms seen with other organophosphates. Skin exposure In rabbits receiving 250 mg/kg of technical-grade ethion for 21 days, dermal exposure led to increased erythema and desquamation. It also led to inhibition of brain acetylcholinesterase at 1 mg/kg/day, and the NOAEL was determined to be 0.8 mg/kg/day. In guinea pigs, ethion also led to a slight erythema that cleared in 48 hours, and it was determined that the compound was not a skin sensitizer. In a study determining the LD50 of ethion, 80 male and 60 female adult rats were dermally exposed to ethion dissolved in xylene. The lowest dose to kill a rat was found to be 150 mg/kg for males and 50 mg/kg for females. The minimum survival time was 6 hours for females and 3 hours for males, while the maximum time of death was 3 days for females and 7 days for males. The LD50 was 245 mg/kg for males and 62 mg/kg for females. Skin contact with organophosphates, in general, may cause localized sweating and involuntary muscle contractions. Other studies found the dermal LD50 to be 915 mg/kg in guinea pigs and 890 mg/kg in rabbits. Ethion can also cause slight redness and inflammation of the eye and skin that clears within 48 hours. It is also known to cause blurred vision, pupil constriction and pain. Ingestion A six-month-old boy experienced shallow respiratory excursions and intercostal retractions after accidentally ingesting 15.7 mg/kg ethion. The symptoms started one hour after ingestion and were treated. Five hours after ingestion, respiratory arrest occurred and mechanical ventilation was needed for three hours. Follow-up examinations after one week, one month and one year suggested that a full recovery was made. 
The same boy also showed tachycardia, frothy saliva (1 hour after ingestion), watery bowel movements (90 minutes after ingestion), increased white blood cell counts in urine, inability to control his head and limbs, occasional twitching, pupils non-reactive to light, purposeless eye movements, and a palpable liver and spleen, and there were some symptoms of paralysis. Testing on rats with 10 mg/kg/day and 2 mg/kg/day showed no histopathological effect on the respiratory tract, nor did 13-week testing on dogs (8.25 mg/kg/day). Oral LD50 values for pure ethion in rats are 208 mg/kg, and for technical-grade ethion 21 to 191 mg/kg. Other reported oral LD50 values (for the technical product) are 40 mg/kg in mice and guinea pigs. In a group of six male volunteers, no differences in blood pressure or pulse rate were noted; nor were such differences seen in mice or dogs. Diarrhea did occur in mice orally exposed to ethion, and severe signs of neurotoxicity were also present. The effects were consistent with cholinergic overstimulation of the gastrointestinal tract. No hematological effects were reported in an experiment with six male volunteers, nor in rats or dogs. The volunteers did not show differences in muscle tone after intermediate-duration oral exposure, nor did test animals at various exposures. It is, however, known that ethion can result in muscle tremors and fasciculations. The animal-testing studies on rats and dogs showed no effect on the kidneys and liver, but a different study showed an increased incidence of orange-colored urine. The animal-testing studies on rats and dogs also did not show dermal or ocular effects. Rabbits receiving 2.5 mg/kg/day of ethion showed a decrease in body weight, but no effects were seen at 0.6 mg/kg/day. The decrease in body weight, combined with reduced food consumption, was also observed in rabbits receiving 9.6 mg/kg/day. Male and female dogs receiving 0.71 mg/kg/day did not show a change in body weight, but dogs receiving 6.9 and 8.25 mg/kg/day showed reduced food consumption and reduced body weight. In a study with human volunteers, a decrease of plasma cholinesterase was observed during 0.075 mg/kg/day (16% decrease), 0.1 mg/kg/day (23% decrease) and 0.15 mg/kg/day (31% decrease) treatment periods. This partially recovered after 7 days, and fully recovered after 12 days. No effect on erythrocyte acetylcholinesterase was observed, nor were there signs of adverse neurological effects. Another study showed severe neurological effects after a single oral exposure in rats. For male rats, salivation, tremors, nose bleeding, urination, diarrhea, and convulsions occurred at 100 mg/kg, and for female rats, at 10 mg/kg. In a study with albino rats, it was observed that brain acetylcholinesterase was inhibited by 22%, erythrocyte acetylcholinesterase by 87%, and plasma cholinesterase by 100% in male rats after being fed 9 mg/kg/day of ethion for 93 days. After 14 days of recovery, plasma cholinesterase recovered completely, and erythrocyte acetylcholinesterase recovered 63%. There were no observed effects at 1 mg/kg/day. In another study involving rats, researchers observed no effects on erythrocyte acetylcholinesterase at 0, 0.1, 0.2, and 2 mg/kg/day of ethion. In a 90-day study on dogs, in which the males received 6.9 mg/kg/day and the females 8.25 mg/kg/day, ataxia, emesis, miosis, and tremors were observed. Brain and erythrocyte acetylcholinesterase were inhibited (61–64% and 93–94%, respectively). At 0.71 mg/kg/day in male dogs, the reduction in brain acetylcholinesterase was 23%. 
There were no observed effects at 0.06 and 0.01 mg/kg/day. Based on these findings, a minimal risk level of 0.002 mg/kg/day for oral exposure of acute and intermediate duration was established. Researchers also calculated a chronic-duration minimal risk level of 0.0004 mg/kg/day. In one study, in which rats received a maximum of 1.25 mg/kg/day, no effects on reproduction were observed. In a study on pregnant rats receiving 2.5 mg/kg/day, the fetuses showed an increased incidence of delayed ossification of the pubic bones. Another study found that the fetuses of pregnant rabbits receiving 9.6 mg/kg/day had an increased incidence of fused sternal centers. Inhalation Ethion is highly toxic, and can be lethal, via inhalation. One study of technical-grade ethion found an LC50 of 2.31 mg/m3 in male rats and 0.45 mg/m3 in female rats. Other data reported a 4-hour LC50 in rats of 0.864 mg/L. As stated earlier, ethion can also lead to pupillary constriction, muscle cramps, excessive salivation, sweating, nausea, dizziness, labored breathing, convulsions, and unconsciousness. A sensation of tightness in the chest and rhinorrhea are also very common after inhalation. Carcinogenic effects There are no indications that ethion is carcinogenic in rats and mice. When rats and mice were fed ethion for two years, the animals did not develop cancer any faster than the control group of animals that were not given ethion. Ethion has not been classified for carcinogenicity by the United States Department of Health and Human Services (DHHS), the International Agency for Research on Cancer (IARC) or the EPA. Treatment After oral exposure, gastric lavage shortly after exposure can be used to reduce peak absorption. It is also suspected that treatment with activated charcoal could be effective in reducing peak absorption. Safety guidelines also encourage inducing vomiting to reduce oral exposure, if the victim is still conscious. In case of skin exposure, washing and rinsing with plenty of water and soap is advised to reduce exposure. In case of inhalation, fresh air is advised to reduce exposure. Ethion exposure itself is treated in the same way as exposure to other organophosphates. The main danger lies in respiratory problems: if symptoms are present, artificial respiration with an endotracheal tube is used as a treatment. The effect of ethion on muscles and nerves is counteracted with atropine. Pralidoxime can be used to act against organophosphate poisoning; it must be given as soon as possible after ethion poisoning, because its efficacy declines as the ethion–enzyme complex changes chemically in the body over time. Effects on animals Ethion affects the environment because it is persistent and can therefore accumulate through plants and animals. Ethion is very toxic to songbirds; the LD50 in red-winged blackbirds is 45 mg/kg. However, it is moderately toxic to birds like the bobwhite quail (LD50 of 128.8 mg/kg) and starlings (LD50 greater than 304 mg/kg), which would be classified as medium-sized birds. For larger upland game birds (like the ring-necked pheasant) and waterfowl (like the mallard duck), ethion ranges from barely toxic to nontoxic. Ethion, however, is very toxic to aquatic organisms such as freshwater and marine fish, and is extremely toxic to freshwater invertebrates, with LC50 values ranging from 0.056 μg/L to 0.0077 mg/L. The LC50 for marine and estuarine invertebrates is 0.04 to 0.05 mg/L. 
In a chronic toxicity study, rats were fed 0, 0.1, 0.2 or 2 mg/kg/day ethion for 18 months, and no severe toxic effects were observed. The only significant change was a decrease of cholinesterase levels in the group with the highest dose. Therefore, the NOEL of this study was 0.2 mg/kg/day. The oral LD50 for pure ethion in rats is 208 mg/kg. The dermal LD50 in rats is 62 mg/kg, 890 mg/kg in rabbits, and 915 mg/kg in guinea pigs. For rats, the 4-hour inhalation LC50 is 0.864 mg/L ethion. Detection methods Insecticides such as ethion can be detected using a variety of chemical analysis methods. Some analysis methods, however, are not specific for this substance. In a recently introduced method, the interaction of silver nanoparticles (AgNPs) with ethion results in quenching of the resonance Rayleigh scattering (RRS) intensity. The change in RRS was shown to be linearly correlated with the concentration of ethion (range: 10.0–900 mg/L). Another advantage of this method over general detection methods is that ethion can be measured in just 3 minutes with no requirement for pretreatment of the sample. Interference tests showed that this method achieves good selectivity for ethion. The limit of detection (LOD) was 3.7 mg/L and the limit of quantification (LOQ) was 11.0 mg/L. Relative standard deviations (RSDs) for samples containing 15.0 and 60.0 mg/L of ethion in water were 4.1% and 0.2%, respectively. Microbial degradation Ethion remains a major environmental contaminant in Australia, among other locations, because of its former usage in agriculture. However, some microbes can convert ethion into less toxic compounds. Some Pseudomonas and Azospirillum bacteria were shown to degrade ethion when cultivated in minimal salts medium in which ethion was the only source of carbon. Analysis of the compounds present in the medium after bacterial digestion of ethion demonstrated that no abiotic hydrolytic degradation products of ethion (e.g., ethion dioxon or ethion monoxon) were present. The biodigestion of ethion is likely used to support rapid growth of these bacteria. References External links Acetylcholinesterase inhibitors Organophosphate insecticides Phosphorodithioates Ethyl esters
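As a rough illustration of how a linear calibration of the kind described for the AgNP/RRS method is applied in practice, the sketch below fits signal against concentration, inverts the fit to estimate an unknown sample, and derives detection limits from blank variability using the common 3.3·σ/slope and 10·σ/slope rules. All numbers in the code are invented for demonstration and are not data from the cited study.

```python
import numpy as np

# Hypothetical calibration points (ethion concentration in mg/L vs. RRS quenching);
# invented for illustration, not data from the study described above.
conc = np.array([10.0, 50.0, 100.0, 300.0, 600.0, 900.0])
quench = np.array([0.8, 4.1, 8.0, 24.3, 48.1, 72.6])

# Least-squares linear calibration: quench = slope * conc + intercept
slope, intercept = np.polyfit(conc, quench, 1)

def estimate_concentration(signal):
    """Invert the calibration line to estimate concentration from a measured signal."""
    return (signal - intercept) / slope

# Detection and quantification limits from the standard deviation of blank readings
blank_sd = 0.1  # assumed standard deviation of blank RRS measurements
lod = 3.3 * blank_sd / slope   # limit of detection
loq = 10.0 * blank_sd / slope  # limit of quantification

print(f"Estimated concentration for signal 12.0: {estimate_concentration(12.0):.1f} mg/L")
print(f"LOD ~ {lod:.1f} mg/L, LOQ ~ {loq:.1f} mg/L")
```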
Ethion
[ "Chemistry" ]
4,764
[ "Functional groups", "Phosphorodithioates" ]
973,296
https://en.wikipedia.org/wiki/Tong%20Dizhou
Tong Dizhou (May 28, 1902 – March 30, 1979) was a Chinese embryologist known for his contributions to the field of cloning. He was a vice president of the Chinese Academy of Sciences. Biography Born in Yinxian, Zhejiang province, Tong graduated from Fudan University in 1924 with a degree in biology, and received a PhD in zoology in 1930 from the Free University of Brussels. In 1963, Tong inserted DNA of a male carp into the egg of a female carp and became the first to successfully clone a fish. He is regarded as the "father of Chinese cloning". Tong was also an academician at the Chinese Academy of Sciences and the first director of its Institute of Oceanology from its founding in 1950 until 1978. Tong died on 30 March 1979 at Beijing Hospital in Beijing. References 1902 births 1979 deaths 20th-century Chinese biologists 20th-century Chinese scientists Biologists from Zhejiang Cloning Educators from Ningbo Free University of Brussels (1834–1969) alumni Fudan University alumni Academic staff of Fudan University Members of Academia Sinica Members of the Chinese Academy of Sciences Academic staff of the National Central University Scientists from Ningbo Academic staff of Tongji University Vice Chairpersons of the National Committee of the Chinese People's Political Consultative Conference
Tong Dizhou
[ "Engineering", "Biology" ]
257
[ "Cloning", "Genetic engineering" ]
973,372
https://en.wikipedia.org/wiki/IBM%20Future%20Systems%20project
The Future Systems project (FS) was a research and development project undertaken in IBM in the early 1970s to develop a revolutionary line of computer products, including new software models which would simplify software development by exploiting modern powerful hardware. The new systems were intended to replace the System/370 in the market some time in the late 1970s. There were two key components to FS. The first was the use of a single-level store that allows data stored on secondary storage like disk drives to be referred to within a program as if it was data stored in main memory; variables in the code could point to objects in storage and they would invisibly be loaded into memory, eliminating the need to write code for file handling. The second was to include instructions corresponding to the statements in high-level programming languages, allowing the system to directly run programs without the need for a compiler to convert from the language to machine code. One could, for instance, write a program in a text editor and the machine would be able to run that directly. Combining the two concepts in a single system in a single step proved to be an impossible task. This concern was pointed out from the start by the engineers, but it was ignored by management and project leaders for many reasons. Officially started in the fall of 1971, by 1974 the project was moribund, and formally cancelled in February 1975. The single-level store was implemented in the System/38 and moved to other systems in the lineup after that, but the concept of a machine that directly ran high-level languages has never appeared in an IBM product. History 370 The System/360 was announced in April 1964. Only six months later, IBM began a study project on what trends were taking place in the market and how these should be used in a series of machines that would replace the 360 in the future. One significant change was the introduction of useful integrated circuits (ICs), which would allow the many individual components of the 360 to be replaced with a smaller number of ICs. This would allow a more powerful machine to be built for the same price as existing models. By the mid-1960s, the 360 had become a massive best-seller. This influenced the design of the new machines, as it led to demands that the machines have complete backward compatibility with the 360 series. When the machines were announced in 1970, now known as the System/370, they were essentially 360s using small-scale ICs for logic, much larger amounts of internal memory and other relatively minor changes. A few new instructions were added and others cleaned up, but the system was largely identical from the programmer's point of view. The recession of 1969–1970 led to slowing sales in the 1970-71 time period and much smaller orders for the 370 compared to the rapid uptake of the 360 five years earlier. For the first time in decades, IBM's growth stalled. While some in the company began efforts to introduce useful improvements to the 370 as soon as possible to make them more attractive, others felt nothing short of a complete reimagining of the system would work in the long term. Replacing the 370 Two months before the announcement of the 370s, the company once again started considering changes in the market and how that would influence future designs. In 1965, Gordon Moore predicted that integrated circuits would see exponential growth in the number of circuits they supported, today known as Moore's Law. IBM's Jerrier A. 
Haddad wrote a memo on the topic, suggesting that the cost of logic and memory was going to zero faster than it could be measured. An internal Corporate Technology Committee (CTC) study concluded a 30-fold reduction in the price of memory would take place in the next five years, and another 30 in the five after that. If IBM was going to maintain its sales figures, it was going to have to sell 30 times as much memory in five years, and 900 times as much five years later. Similarly, hard disk cost was expected to fall ten times in the next ten years. To maintain their traditional 15% year-over-year growth, by 1980 they would have to be selling 40 times as much disk space and 3600 times as much memory. In terms of the computer itself, if one followed the progression from the 360 to the 370 and onto some hypothetical System/380, the new machines would be based on large-scale integration and would be dramatically reduced in complexity and cost. There was no way they could sell such a machine at their current pricing, if they tried, another company would introduce far less expensive systems. They could instead produce much more powerful machines at the same price points, but their customers were already underutilizing their existing systems. To provide a reasonable argument to buy a new high-end machine, IBM had to come up with reasons for their customers to need this extra power. Another strategic issue was that while the cost of computing was steadily going down, the costs of programming and operations, being made of personnel costs, were steadily going up. Therefore, the part of the customer's IT budget available for hardware vendors would be significantly reduced in the coming years, and with it the base for IBM revenue. It was imperative that IBM, by addressing the cost of application development and operations in its future products, would at the same time reduce the total cost of IT to the customers and capture a larger portion of that cost. AFS In 1969, Bob O. Evans, president of the IBM System Development Division which developed their largest mainframes, asked Erich Bloch of the IBM Poughkeepsie Lab to consider how the company might use these much cheaper components to build machines that would still retain the company's profits. Bloch, in turn, asked Carl Conti to outline such systems. Having seen the term "future systems" being used, Evans referred to the group as Advanced Future Systems. The group met roughly biweekly. Among the many developments initially studied under AFS, one concept stood out. At the time, the first systems with virtual memory (VM) were emerging, and the seminal Multics project had expanded on this concept as the basis for a single-level store. In this concept, all data in the system is treated as if it is in main memory, and if the data is physically located on secondary storage, the VM system automatically loads it into memory when a program calls for it. Instead of writing code to read and write data in files, the programmer simply told the operating system they would be using certain data, which then appeared as objects in the program's memory and could be manipulated like any other variable. The VM system would ensure that the data was synchronized with storage when needed. This was seen as a particularly useful concept at the time, as the emergence of bubble memory suggested that future systems would not have separate core memory and disk drives, instead everything would be stored in a large amount of bubble memory. 
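As an aside on the 40-fold and 3600-fold figures quoted earlier in this passage: they follow from simple compounding of the stated assumptions, namely roughly 15% annual revenue growth over the decade to 1980, a 900-fold drop in memory price, and a 10-fold drop in disk price. The sketch below reproduces that arithmetic; the exact time base IBM used is an assumption here, not something the text specifies.

```python
# Reconstruction of the CTC-style arithmetic under the stated assumptions:
# 15% year-over-year growth for ten years, memory 30x cheaper every five years
# (900x over ten), disk 10x cheaper over ten years.
revenue_multiple = 1.15 ** 10        # ~4.05x more revenue needed by 1980
memory_price_drop = 30 * 30          # 900x cheaper per unit of memory
disk_price_drop = 10                 # 10x cheaper per unit of disk

memory_volume = revenue_multiple * memory_price_drop   # ~3600x as much memory
disk_volume = revenue_multiple * disk_price_drop        # ~40x as much disk space

print(f"revenue multiple: {revenue_multiple:.2f}x")
print(f"memory volume needed: ~{memory_volume:.0f}x")
print(f"disk volume needed: ~{disk_volume:.0f}x")
```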
Physically, systems would be single-level stores, so the idea of having another layer for "files" which represented separate storage made no sense, and having pointers into a single large memory would not only mean one could simply refer to any data as it if were local, but also eliminate the need for separate application programming interfaces (APIs) for the same data depending on whether it was loaded or not. HLS Evans also asked John McPherson at IBM's Armonk headquarters to chair another group to consider how IBM would offer these new designs across their many divisions. A group of twelve participants spread across three divisions produced the "Higher Level System Report", or HLS, which was delivered on 25 February 1970. A key component of HLS was the idea that programming was more expensive than hardware. If a system could greatly reduce the cost of development, then that system could be sold for more money, as the overall cost of operation would still be lower than the competition. The basic concept of the System/360 series was that a single instruction set architecture (ISA) would be defined that offered every possible instruction the assembly language programmer might desire. Whereas previous systems might be dedicated to scientific programming or currency calculations and had instructions for that sort of data, the 360 offered instructions for both of these and practically every other task. Individual machines were then designed that targeted particular workloads and ran those instructions directly in hardware and implemented the others in microcode. This meant any machine in the 360 family could run programs from any other, just faster or slower depending on the task. This proved enormously successful, as a customer could buy a low-end machine and always upgrade to a faster one in the future, knowing all their applications would continue to run. Although the 360's instruction set was large, those instructions were still low-level, representing single operations that the central processing unit (CPU) would perform, like "add two numbers" or "compare this number to zero". Programming languages and their links to the operating system allowed users to type in programs using high-level concepts like "open file" or "add these arrays". The compilers would convert these higher-level abstractions into a series of machine code instructions. For HLS, the instructions would instead represent those higher-level tasks directly. That is, there would be instructions in the machine code for "open file". If a program called this instruction, there was no need to convert this into lower-level code, the machine would do this internally in microcode or even a direct hardware implementation. This worked hand-in-hand with the single-level store; to implement HLS, every bit of data in the system was paired with a descriptor, a record that contained the type of the data, its location in memory, and its precision and size. As descriptors could point to arrays and record structures as well, this allowed the machine language to process these as atomic objects. By representing these much higher-level objects directly in the system, user programs would be much smaller and simpler. For instance, to add two arrays of numbers held in files in traditional languages, one would generally open the two files, read one item from each, add them, and then store the value to a third file. In the HLS approach, one would simply open the files and call add. 
The underlying operating system would map these into memory, create descriptors showing them both to be arrays and then the add instruction would see they were arrays and add all the values together. Assigning that value into a newly created array would have the effect of writing it back to storage. A program that might take a page or so of code was now reduced to a few lines. Moreover, as this was the natural language of the machine, the command shell was itself programmable in the same way, there would be no need to "write a program" for a simple task like this, it could be entered as a command. The report concluded: Compatible concerns Until the end of the 1960s, IBM had been making most of its profit on hardware, bundling support software and services along with its systems to make them more attractive. Only hardware carried a price tag, but those prices included an allocation for software and services. Other manufacturers had started to market compatible hardware, mainly peripherals such as tape and disk drives, at a price significantly lower than IBM, thus shrinking the possible base for recovering the cost of software and services. IBM responded by refusing to service machines with these third-party add-ons, which led almost immediately to sweeping anti-trust investigations and many subsequent legal remedies. In 1969, the company was forced to end its bundling arrangements and announced they would sell software products separately. Gene Amdahl saw an opportunity to sell compatible machines without software; the customer could purchase a machine from Amdahl and the operating system and other software from IBM. If IBM refused to sell it to them, they would be breaching their legal obligations. In early 1970, Amdahl quit IBM and announced his intention to introduce System/370 compatible machines that would be faster than IBM's high-end offerings but cost less to purchase and operate. At first, IBM was unconcerned. They made most of their money on software and support, and that money would still be going to them. But to be sure, in early 1971 an internal IBM task force, Project Counterpoint, was formed to study the concept. They concluded that the compatible mainframe business was indeed viable and that the basis for charging for software and services as part of the hardware price would quickly vanish. These events created a desire within the company to find some solution that would once again force the customers to purchase everything from IBM but in a way that would not violate antitrust laws. If IBM followed the suggestions of the HLS report, this would mean that other vendors would have to copy the microcode implementing the huge number of instructions. As this was software, if they did, those companies would be subject to copyright violations. At this point, the AFS/HLS concepts gained new currency within the company. Future Systems In May–June 1971, an international task force convened in Armonk under John Opel, then a vice-president of IBM. Its assignment was to investigate the feasibility of a new line of computers which would take advantage of IBM's technological advantages in order to render obsolete all previous computers - compatible offerings but also IBM's own products. The task force concluded that the project was worth pursuing, but that the key to acceptance in the marketplace was an order-of-magnitude reduction in the costs of developing, operating and maintaining application software. 
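HLS, as described above, was a machine-level instruction set rather than a high-level language, so no short snippet can reproduce it; the sketch below is only an analogy in Python, using numpy arrays (whose dtype and shape metadata loosely stand in for HLS descriptors) to contrast the explicit open/read/add/write style with a single high-level "add" over whole arrays. File names and formats are made up for illustration.

```python
import numpy as np

# Traditional style: explicit file handling, element-by-element work.
def add_files_traditional(path_a, path_b, path_out):
    with open(path_a) as fa, open(path_b) as fb:
        a = [float(line) for line in fa]
        b = [float(line) for line in fb]
    with open(path_out, "w") as fo:
        fo.writelines(f"{x + y}\n" for x, y in zip(a, b))

# Descriptor-flavoured style: the data is described as typed arrays and the
# addition is one high-level operation over the whole structures.
def add_files_descriptor_style(path_a, path_b, path_out):
    a = np.loadtxt(path_a)       # array plus metadata (shape, element type)
    b = np.loadtxt(path_b)
    np.savetxt(path_out, a + b)  # a single whole-array "add"
```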
The major objectives of the FS project were consequently stated as follows: make obsolete all existing computing equipment, including IBM's, by fully exploiting the newest technologies, diminish greatly the costs and efforts involved in application development and operations, provide a technically sound basis for re-bundling as much as possible of IBM's offerings (hardware, software and services) It was hoped that a new architecture making heavier use of hardware resources, the cost of which was going down, could significantly simplify software development and reduce costs for both IBM and customers. Technology Data access One design principle of FS was a "single-level store" which extended the idea of virtual memory (VM) to cover persistent data. In traditional designs, programs allocate memory to hold values that represent data. This data would normally disappear if the machine is turned off, or the user logs out. In order to have this data available in the future, additional code is needed to write it to permanent storage like a hard drive, and then read it back in the future. To ease these common operations, a number of database engines emerged in the 1960s that allowed programs to hand data to the engine which would then save it and retrieve it again on demand. Another emerging technology at the time was the concept of virtual memory. In early systems, the amount of memory available to a program to allocate for data was limited by the amount of main memory in the system, which might vary based on such factors as it is moved from one machine to another, or if other programs were allocating memory of their own. Virtual memory systems addressed this problem by defining a maximum amount of memory available to all programs, typically some very large number, much more than the physical memory in the machine. In the case that a program asks to allocate memory that is not physically available, a block of main memory is written out to disk, and that space is used for the new allocation. If the program requests data from that offloaded ("paged" or "spooled") memory area, it is invisibly loaded back into main memory again. A single-level store is essentially an expansion of virtual memory to all memory, internal or external. VM systems invisibly write memory to a disk, which is the same task as the file system, so there is no reason it cannot be used as the file system. Instead of programs allocating memory from "main memory" which is then perhaps sent to some other backing store by VM, all memory is immediately allocated by the VM. This means there is no need to save and load data, simply allocating it in memory will have that effect as the VM system writes it out. When the user logs back in, that data, and the programs that were running it as they are also in the same unified memory, are immediately available in the same state they were before. The entire concept of loading and saving is removed, programs, and entire systems, pick up where they were even after a machine restart. This concept had been explored in the Multics system but proved to be very slow, but that was a side-effect of available hardware where the main memory was implemented in core with a far slower backing store in the form of a hard drive or drum. With the introduction of new forms of non-volatile memory, most notably bubble memory, that worked at speeds similar to core but had memory density similar to a hard disk, it appeared a single-level store would no longer have any performance downside. 
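As a toy illustration (not IBM's actual mechanism) of why a single-level store removes the explicit load/save step described above, the sketch below memory-maps a file in Python: writes to what looks like ordinary memory land in persistent storage, and the data is simply there again after the process restarts. The file name is hypothetical.

```python
import mmap
import os

PATH = "persistent.bin"  # hypothetical backing file standing in for the single-level store
SIZE = 4096

# Ensure the backing file exists and has a fixed size.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)

# "Allocate" memory by mapping the file; writing to it is writing to storage.
with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    mem[0:5] = b"hello"   # looks like a memory write, persists without an explicit save
    mem.flush()
    mem.close()

# In a later run, the data is simply still there.
with open(PATH, "rb") as f:
    assert f.read(5) == b"hello"
```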
Future Systems planned on making the single-level store the key concept in its new operating systems. Instead of having a separate database engine that programmers would call, there would simply be calls in the system's application programming interface (API) to retrieve memory. And those API calls would be based on particular hardware or microcode implementations, which would only be available on IBM systems, thereby achieving IBM's goal of tightly tying the hardware to the programs that ran on it. Processor Another principle was the use of very high-level complex instructions to be implemented in microcode. As an example, one of the instructions, CreateEncapsulatedModule, was a complete linkage editor. Other instructions were designed to support the internal data structures and operations of programming languages such as FORTRAN, COBOL, and PL/I. In effect, FS was designed to be the ultimate complex instruction set computer (CISC). Another way of presenting the same concept was that the entire collection of functions previously implemented as hardware, operating system software, data base software and more would now be considered as making up one integrated system, with each and every elementary function implemented in one of many layers including circuitry, microcode, and conventional software. More than one layer of microcode and code were contemplated, sometimes referred to as picocode or millicode. Depending on the people one was talking to, the very notion of a "machine" therefore ranged between those functions which were implemented as circuitry (for the hardware specialists) to the complete set of functions offered to users, irrespective of their implementation (for the systems architects). The overall design also called for a "universal controller" to handle primarily input-output operations outside of the main processor. That universal controller would have a very limited instruction set, restricted to those operations required for I/O, pioneering the concept of a reduced instruction set computer (RISC). Meanwhile, John Cocke, one of the chief designers of early IBM computers, began a research project to design the first reduced instruction set computer (RISC). In the long run, the IBM 801 RISC architecture, which eventually evolved into IBM's POWER, PowerPC, and Power architectures, proved to be vastly cheaper to implement and capable of achieving much higher clock rate. Development Project start The FS project was officially started in September 1971, following the recommendations of a special task force assembled in the second quarter of 1971. In the course of time, several other research projects in various IBM locations merged into the FS project or became associated with it. Project management During its entire life, the FS project was conducted under tight security provisions. The project was broken down into many subprojects assigned to different teams. The documentation was similarly broken down into many pieces, and access to each document was subject to verification of the need-to-know by the project office. Documents were tracked and could be called back at any time. In Sowa's memo (see External Links, below) he noted The avowed aim of all this red tape is to prevent anyone from understanding the whole system; this goal has certainly been achieved. As a consequence, most people working on the project had an extremely limited view of it, restricted to what they needed to know in order to produce their expected contribution. 
Some teams were even working on FS without knowing. This explains why, when asked to define FS, most people give a very partial answer, limited to the intersection of FS with their field of competence. Planned product lines Three implementations of the FS architecture were planned: the top-of-line model was being designed in Poughkeepsie, NY, where IBM's largest and fastest computers were built; the next model down was being designed in Endicott, NY, which had responsibility for the mid-range computers; the model below that was being designed in Böblingen, Germany, and the smallest model was being designed in Hursley, UK. A continuous range of performance could be offered by varying the number of processors in a system at each of the four implementation levels. Early 1973, overall project management and the teams responsible for the more "outside" layers common to all implementations were consolidated in the Mohansic ASDD laboratory (halfway between the Armonk/White Plains headquarters and Poughkeepsie). Project end The FS project was terminated in 1975. The reasons given for terminating the project depend on the person asked, each of whom puts forward the issues related to the domain with which they were familiar. In reality, the success of the project was dependent on a large number of breakthroughs in all areas from circuit design and manufacturing to marketing and maintenance. Although each single issue, taken in isolation, might have been resolved, the probability that they could all be resolved in time and in mutually compatible ways was practically zero. One symptom was the poor performance of its largest implementation, but the project was also marred by protracted internal arguments about various technical aspects, including internal IBM debates about the merits of RISC vs. CISC designs. The complexity of the instruction set was another obstacle; it was considered "incomprehensible" by IBM's own engineers and there were strong indications that the system wide single-level store could not be backed up in part, foretelling the IBM AS/400's partitioning of the System/38's single-level store. Moreover, simulations showed that the execution of native FS instructions on the high-end machine was slower than the System/370 emulator on the same machine. The FS project was finally terminated when IBM realized that customer acceptance would be much more limited than originally predicted because there was no reasonable application migration path for 360 architecture customers. In order to leave maximum freedom to design a truly revolutionary system, ease of application migration was not one of the primary design goals for the FS project, but was to be addressed by software migration aids taking the new architecture as a given. In the end, it appeared that the cost of migrating the mass of user investments in COBOL and assembly language based applications to FS was in many cases likely to be greater than the cost of acquiring a new system. Results Although the FS project as a whole was terminated, a simplified version of the architecture for the smallest of the three machines continued to be developed in Rochester. It was finally released as the IBM System/38, which proved to be a good design for ease of programming, but it was woefully underpowered. The AS/400 inherited the same architecture, but with performance improvements. 
In both machines, the high-level instruction set generated by compilers is not interpreted, but translated into a lower-level machine instruction set and executed; the original lower-level instruction set was a CISC instruction set with some similarities to the System/360 instruction set. In later machines the lower-level instruction set was an extended version of the PowerPC instruction set, which evolved from John Cocke's RISC machine. The dedicated hardware platform was replaced in 2008 by the IBM Power Systems platform running the IBM i operating system. Besides System/38 and the AS/400, which inherited much of the FS architecture, bits and pieces of Future Systems technology were incorporated in the following parts of IBM's product line: the IBM 3081 mainframe computer, which was essentially the top-of-the line machine designed in Poughkeepsie, using the System/370 emulator microcode, and with the FS microcode removed and used the 3800 laser printer, and some machines that would lead to the IBM 3279 terminal and GDDM the IBM 3850 automatic magnetic tape library the IBM 8100 mid-range computer, which was based on a CPU called the Universal Controller, which had been intended for FS input/output processing network enhancements concerning VTAM and NCP References Citations Bibliography External links An internal memo by John F. Sowa. This outlines the technical and organizational problems of the FS project in late 1974. Overview of IBM Future Systems Computing platforms Future Systems project Information technology projects
IBM Future Systems project
[ "Technology", "Engineering" ]
5,135
[ "Information technology", "Computing platforms", "Information technology projects" ]
973,479
https://en.wikipedia.org/wiki/Pseudo-differential%20operator
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space. History The study of pseudo-differential operators began in the mid 1960s with the work of Kohn, Nirenberg, Hörmander, Unterberger and Bokobza. They played an influential role in the second proof of the Atiyah–Singer index theorem via K-theory. Atiyah and Singer thanked Hörmander for assistance with understanding the theory of pseudo-differential operators. Motivation Linear differential operators with constant coefficients Consider a linear differential operator with constant coefficients, which acts on smooth functions with compact support in Rn. This operator can be written as a composition of a Fourier transform, a simple multiplication by the polynomial function (called the symbol) and an inverse Fourier transform, in the form: Here, is a multi-index, are complex numbers, and is an iterated partial derivative, where ∂j means differentiation with respect to the j-th variable. We introduce the constants to facilitate the calculation of Fourier transforms. Derivation of formula () The Fourier transform of a smooth function u, compactly supported in Rn, is and Fourier's inversion formula gives By applying P(D) to this representation of u and using one obtains formula (). Representation of solutions to partial differential equations To solve the partial differential equation we (formally) apply the Fourier transform on both sides and obtain the algebraic equation If the symbol P(ξ) is never zero when ξ ∈ Rn, then it is possible to divide by P(ξ): By Fourier's inversion formula, a solution is Here it is assumed that: P(D) is a linear differential operator with constant coefficients, its symbol P(ξ) is never zero, both u and ƒ have a well defined Fourier transform. The last assumption can be weakened by using the theory of distributions. The first two assumptions can be weakened as follows. In the last formula, write out the Fourier transform of ƒ to obtain This is similar to formula (), except that 1/P(ξ) is not a polynomial function, but a function of a more general kind. Definition of pseudo-differential operators Here we view pseudo-differential operators as a generalization of differential operators. We extend formula (1) as follows. A pseudo-differential operator P(x,D) on Rn is an operator whose value on the function u(x) is the function of x: where is the Fourier transform of u and the symbol P(x,ξ) in the integrand belongs to a certain symbol class. For instance, if P(x,ξ) is an infinitely differentiable function on Rn × Rn with the property for all x,ξ ∈Rn, all multiindices α,β, some constants Cα, β and some real number m, then P belongs to the symbol class of Hörmander. The corresponding operator P(x,D) is called a pseudo-differential operator of order m and belongs to the class Properties Linear differential operators of order m with smooth bounded coefficients are pseudo-differential operators of order m. The composition PQ of two pseudo-differential operators P, Q is again a pseudo-differential operator and the symbol of PQ can be calculated by using the symbols of P and Q. The adjoint and transpose of a pseudo-differential operator is a pseudo-differential operator. 
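The displayed formulas referred to in this passage did not survive extraction. The following are the standard textbook forms they correspond to, written in one common normalization (some authors place the factors of 2π differently); they are offered as a reconstruction, not as the article's original notation.

```latex
% Constant-coefficient operator as Fourier multiplication by its symbol P(\xi):
P(D)u(x) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} e^{\,i x\cdot\xi}\, P(\xi)\, \hat{u}(\xi)\, d\xi,
\qquad
P(\xi) = \sum_{|\alpha|\le m} a_\alpha\, \xi^\alpha .

% General pseudo-differential operator with symbol P(x,\xi):
P(x,D)u(x) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} e^{\,i x\cdot\xi}\, P(x,\xi)\, \hat{u}(\xi)\, d\xi .

% H\"ormander symbol class S^m_{1,0}: for all multi-indices \alpha,\beta,
\bigl|\, \partial_\xi^\alpha \partial_x^\beta P(x,\xi) \,\bigr|
\;\le\; C_{\alpha\beta}\,\bigl(1+|\xi|\bigr)^{\,m-|\alpha|} .

% Symbol of a composition PQ (asymptotic expansion), with D_x = -i\,\partial_x:
\sigma_{PQ}(x,\xi) \;\sim\; \sum_{\alpha}\frac{1}{\alpha!}\,
  \partial_\xi^\alpha \sigma_P(x,\xi)\; D_x^\alpha \sigma_Q(x,\xi) .
```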
If a differential operator of order m is (uniformly) elliptic (of order m) and invertible, then its inverse is a pseudo-differential operator of order −m, and its symbol can be calculated. This means that one can solve linear elliptic differential equations more or less explicitly by using the theory of pseudo-differential operators. Differential operators are local in the sense that one only needs the value of a function in a neighbourhood of a point to determine the effect of the operator. Pseudo-differential operators are pseudo-local, which means informally that when applied to a distribution they do not create a singularity at points where the distribution was already smooth. Just as a differential operator can be expressed in terms of D = −id/dx in the form for a polynomial p in D (which is called the symbol), a pseudo-differential operator has a symbol in a more general class of functions. Often one can reduce a problem in analysis of pseudo-differential operators to a sequence of algebraic problems involving their symbols, and this is the essence of microlocal analysis. Kernel of pseudo-differential operator Pseudo-differential operators can be represented by kernels. The singularity of the kernel on the diagonal depends on the degree of the corresponding operator. In fact, if the symbol satisfies the above differential inequalities with m ≤ 0, it can be shown that the kernel is a singular integral kernel. See also Differential algebra for a definition of pseudo-differential operators in the context of differential algebras and differential rings. Fourier transform Fourier integral operator Oscillatory integral operator Sato's fundamental theorem Operational calculus Footnotes References . Further reading Nicolas Lerner, Metrics on the phase space and non-selfadjoint pseudo-differential operators. Pseudo-Differential Operators. Theory and Applications, 3. Birkhäuser Verlag, Basel, 2010. Michael E. Taylor, Pseudodifferential Operators, Princeton Univ. Press 1981. M. A. Shubin, Pseudodifferential Operators and Spectral Theory, Springer-Verlag 2001. Francois Treves, Introduction to Pseudo Differential and Fourier Integral Operators, (University Series in Mathematics), Plenum Publ. Co. 1981. F. G. Friedlander and M. Joshi, Introduction to the Theory of Distributions, Cambridge University Press 1999. André Unterberger, Pseudo-differential operators and applications: an introduction. Lecture Notes Series, 46. Aarhus Universitet, Matematisk Institut, Aarhus, 1976. External links Lectures on Pseudo-differential Operators by Mark S. Joshi on arxiv.org. Differential operators Microlocal analysis Functional analysis Harmonic analysis Generalized functions Partial differential equations
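For the kernel representation mentioned above, the standard form (again a reconstruction in textbook notation, not the article's own display) writes the Schwartz kernel of P(x,D) as an oscillatory integral:

```latex
% Schwartz kernel of a pseudo-differential operator, as an oscillatory integral:
K(x,y) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} e^{\,i (x-y)\cdot\xi}\, P(x,\xi)\, d\xi,
\qquad
(Pu)(x) = \int K(x,y)\, u(y)\, dy .
% K is smooth away from the diagonal x = y; for symbols of order m \le 0 the
% diagonal singularity is that of a singular integral kernel.
```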
Pseudo-differential operator
[ "Mathematics" ]
1,267
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Differential operators" ]
973,828
https://en.wikipedia.org/wiki/Relativistic%20Euler%20equations
In fluid mechanics and astrophysics, the relativistic Euler equations are a generalization of the Euler equations that account for the effects of general relativity. They have applications in high-energy astrophysics and numerical relativity, where they are commonly used for describing phenomena such as gamma-ray bursts, accretion phenomena, and neutron stars, often with the addition of a magnetic field. Note: for consistency with the literature, this article makes use of natural units, namely the speed of light and the Einstein summation convention. Motivation For most fluids observable on Earth, traditional fluid mechanics based on Newtonian mechanics is sufficient. However, as the fluid velocity approaches the speed of light or moves through strong gravitational fields, or the pressure approaches the energy density (), these equations are no longer valid. Such situations occur frequently in astrophysical applications. For example, gamma-ray bursts often feature speeds only less than the speed of light, and neutron stars feature gravitational fields that are more than times stronger than the Earth's. Under these extreme circumstances, only a relativistic treatment of fluids will suffice. Introduction The equations of motion are contained in the continuity equation of the stress–energy tensor : where is the covariant derivative. For a perfect fluid, Here is the total mass-energy density (including both rest mass and internal energy density) of the fluid, is the fluid pressure, is the four-velocity of the fluid, and is the metric tensor. To the above equations, a statement of conservation is usually added, usually conservation of baryon number. If is the number density of baryons this may be stated These equations reduce to the classical Euler equations if the fluid three-velocity is much less than the speed of light, the pressure is much less than the energy density, and the latter is dominated by the rest mass density. To close this system, an equation of state, such as an ideal gas or a Fermi gas, is also added. Equations of motion in flat space In the case of flat space, that is and using a metric signature of , the equations of motion are, Where is the energy density of the system, with being the pressure, and being the four-velocity of the system. Expanding out the sums and equations, we have, (using as the material derivative) Then, picking to observe the behavior of the velocity itself, we see that the equations of motion become Note that taking the non-relativistic limit, we have . This says that the energy of the fluid is dominated by its rest energy. In this limit, we have and , and can see that we return the Euler Equation of . Derivation In order to determine the equations of motion, we take advantage of the following spatial projection tensor condition: We prove this by looking at and then multiplying each side by . Upon doing this, and noting that , we have . Relabeling the indices as shows that the two completely cancel. This cancellation is the expected result of contracting a temporal tensor with a spatial tensor. Now, when we note that where we have implicitly defined that , we can calculate that and thus Then, let's note the fact that and . Note that the second identity follows from the first. Under these simplifications, we find that and thus by , we have We have two cancellations, and are thus left with See also Relativistic heat conduction Equation of state (cosmology) References Euler equations Equations of fluid dynamics
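The displayed equations in this article were lost in extraction. In the conventions the text states (natural units with c = 1 and, for the flat-space section, signature (−,+,+,+)), the standard forms are as follows; this is a reconstruction of textbook formulas, not the article's original rendering.

```latex
% Conservation of the stress-energy tensor and of baryon number:
\nabla_\mu T^{\mu\nu} = 0, \qquad \nabla_\mu (n\, u^\mu) = 0 .

% Perfect-fluid stress-energy tensor (e = total mass-energy density, p = pressure):
T^{\mu\nu} = (e + p)\, u^\mu u^\nu + p\, g^{\mu\nu} .

% In flat space (g_{\mu\nu} = \eta_{\mu\nu}), projecting \partial_\mu T^{\mu\nu} = 0
% along and orthogonal to the four-velocity gives
u^\mu \partial_\mu e + (e + p)\, \partial_\mu u^\mu = 0,
\qquad
(e + p)\, u^\mu \partial_\mu u^\nu = -\bigl(\eta^{\mu\nu} + u^\mu u^\nu\bigr)\, \partial_\mu p .
```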
Relativistic Euler equations
[ "Physics", "Chemistry" ]
715
[ "Equations of fluid dynamics", "Equations of physics", "Special relativity", "Relativity stubs", "Theory of relativity", "Fluid dynamics" ]
974,084
https://en.wikipedia.org/wiki/List%20of%20manifolds
This is a list of particular manifolds, by Wikipedia page. See also list of geometric topology topics. For categorical listings see :Category:Manifolds and its subcategories. Generic families of manifolds Euclidean space, Rn n-sphere, Sn n-torus, Tn Real projective space, RPn Complex projective space, CPn Quaternionic projective space, HPn Flag manifold Grassmann manifold Stiefel manifold Lie groups provide several interesting families. See Table of Lie groups for examples. See also: List of simple Lie groups and List of Lie group topics. Manifolds of a specific dimension 1-manifolds Circle, S1 Long line Real line, R Real projective line, RP1 ≅ S1 2-manifolds Cylinder, S1 × R Klein bottle, RP2 # RP2 Klein quartic (a genus 3 surface) Möbius strip Real projective plane, RP2 Sphere, S2 Surface of genus g Torus Double torus 3-manifolds 3-sphere, S3 3-torus, T3 Poincaré homology sphere SO(3) ≅ RP3 Solid Klein bottle Solid torus Whitehead manifold Meyerhoff manifold Weeks manifold For more examples see 3-manifold. 4-manifolds Complex projective plane Del Pezzo surface E8 manifold Enriques surface Exotic R4 Hirzebruch surface K3 surface For more examples see 4-manifold. Special types of manifolds Manifolds related to spheres Brieskorn manifold Exotic sphere Homology sphere Homotopy sphere Lens space Spherical 3-manifold Special classes of Riemannian manifolds Einstein manifold Ricci-flat manifold G2 manifold Kähler manifold Calabi–Yau manifold Hyperkähler manifold Quaternionic Kähler manifold Riemannian symmetric space Spin(7) manifold Categories of manifolds Manifolds definable by a particular choice of atlas Affine manifold Analytic manifold Complex manifold Differentiable (smooth) manifold Piecewise linear manifold Lipschitz manifold Topological manifold Manifolds with additional structure Almost complex manifold Almost symplectic manifold Calibrated manifold Complex manifold Contact manifold CR manifold Finsler manifold Hermitian manifold Hyperkähler manifold Kähler manifold Lie group Pseudo-Riemannian manifold Riemannian manifold Sasakian manifold Spin manifold Symplectic manifold Infinite-dimensional manifolds Banach manifold Fréchet manifold Hilbert manifold See also References Manifolds
List of manifolds
[ "Mathematics" ]
488
[ "Topological spaces", "Topology", "Manifolds", "Space (mathematics)" ]
974,169
https://en.wikipedia.org/wiki/Algebraic%20function
In mathematics, an algebraic function is a function that can be defined as the root of an irreducible polynomial equation. Algebraic functions are often algebraic expressions using a finite number of terms, involving only the algebraic operations addition, subtraction, multiplication, division, and raising to a fractional power. Examples of such functions are: Some algebraic functions, however, cannot be expressed by such finite expressions (this is the Abel–Ruffini theorem). This is the case, for example, for the Bring radical, which is the function implicitly defined by . In more precise terms, an algebraic function of degree in one variable is a function that is continuous in its domain and satisfies a polynomial equation of positive degree where the coefficients are polynomial functions of , with integer coefficients. It can be shown that the same class of functions is obtained if algebraic numbers are accepted for the coefficients of the 's. If transcendental numbers occur in the coefficients the function is, in general, not algebraic, but it is algebraic over the field generated by these coefficients. The value of an algebraic function at a rational number, and more generally, at an algebraic number is always an algebraic number. Sometimes, coefficients that are polynomial over a ring are considered, and one then talks about "functions algebraic over ". A function which is not algebraic is called a transcendental function, as it is for example the case of . A composition of transcendental functions can give an algebraic function: . As a polynomial equation of degree n has up to n roots (and exactly n roots over an algebraically closed field, such as the complex numbers), a polynomial equation does not implicitly define a single function, but up to n functions, sometimes also called branches. Consider for example the equation of the unit circle: This determines y, except only up to an overall sign; accordingly, it has two branches: An algebraic function in m variables is similarly defined as a function which solves a polynomial equation in m + 1 variables: It is normally assumed that p should be an irreducible polynomial. The existence of an algebraic function is then guaranteed by the implicit function theorem. Formally, an algebraic function in m variables over the field K is an element of the algebraic closure of the field of rational functions K(x1, ..., xm). Algebraic functions in one variable Introduction and overview The informal definition of an algebraic function provides a number of clues about their properties. To gain an intuitive understanding, it may be helpful to regard algebraic functions as functions which can be formed by the usual algebraic operations: addition, multiplication, division, and taking an nth root. This is something of an oversimplification; because of the fundamental theorem of Galois theory, algebraic functions need not be expressible by radicals. First, note that any polynomial function is an algebraic function, since it is simply the solution y to the equation More generally, any rational function is algebraic, being the solution to Moreover, the nth root of any polynomial is an algebraic function, solving the equation Surprisingly, the inverse function of an algebraic function is an algebraic function. For supposing that y is a solution to for each value of x, then x is also a solution of this equation for each value of y. 
Indeed, interchanging the roles of x and y and gathering terms, Writing x as a function of y gives the inverse function, also an algebraic function. However, not every function has an inverse. For example, y = x2 fails the horizontal line test: it fails to be one-to-one. The inverse is the algebraic "function" . Another way to understand this, is that the set of branches of the polynomial equation defining our algebraic function is the graph of an algebraic curve. The role of complex numbers From an algebraic perspective, complex numbers enter quite naturally into the study of algebraic functions. First of all, by the fundamental theorem of algebra, the complex numbers are an algebraically closed field. Hence any polynomial relation p(y, x) = 0 is guaranteed to have at least one solution (and in general a number of solutions not exceeding the degree of p in y) for y at each point x, provided we allow y to assume complex as well as real values. Thus, problems to do with the domain of an algebraic function can safely be minimized. Furthermore, even if one is ultimately interested in real algebraic functions, there may be no means to express the function in terms of addition, multiplication, division and taking nth roots without resorting to complex numbers (see casus irreducibilis). For example, consider the algebraic function determined by the equation Using the cubic formula, we get For the square root is real and the cubic root is thus well defined, providing the unique real root. On the other hand, for the square root is not real, and one has to choose, for the square root, either non-real square root. Thus the cubic root has to be chosen among three non-real numbers. If the same choices are done in the two terms of the formula, the three choices for the cubic root provide the three branches shown, in the accompanying image. It may be proven that there is no way to express this function in terms of nth roots using real numbers only, even though the resulting function is real-valued on the domain of the graph shown. On a more significant theoretical level, using complex numbers allows one to use the powerful techniques of complex analysis to discuss algebraic functions. In particular, the argument principle can be used to show that any algebraic function is in fact an analytic function, at least in the multiple-valued sense. Formally, let p(x, y) be a complex polynomial in the complex variables x and y. Suppose that x0 ∈ C is such that the polynomial p(x0, y) of y has n distinct zeros. We shall show that the algebraic function is analytic in a neighborhood of x0. Choose a system of n non-overlapping discs Δi containing each of these zeros. Then by the argument principle By continuity, this also holds for all x in a neighborhood of x0. In particular, p(x, y) has only one root in Δi, given by the residue theorem: which is an analytic function. Monodromy Note that the foregoing proof of analyticity derived an expression for a system of n different function elements fi(x), provided that x is not a critical point of p(x, y). A critical point is a point where the number of distinct zeros is smaller than the degree of p, and this occurs only where the highest degree term of p or the discriminant vanish. Hence there are only finitely many such points c1, ..., cm. A close analysis of the properties of the function elements fi near the critical points can be used to show that the monodromy cover is ramified over the critical points (and possibly the point at infinity). 
Thus the holomorphic extension of the fi has at worst algebraic poles and ordinary algebraic branchings over the critical points. Note that, away from the critical points, we have since the fi are by definition the distinct zeros of p. The monodromy group acts by permuting the factors, and thus forms the monodromy representation of the Galois group of p. (The monodromy action on the universal covering space is related but different notion in the theory of Riemann surfaces.) History The ideas surrounding algebraic functions go back at least as far as René Descartes. The first discussion of algebraic functions appears to have been in Edward Waring's 1794 An Essay on the Principles of Human Knowledge in which he writes: let a quantity denoting the ordinate, be an algebraic function of the abscissa x, by the common methods of division and extraction of roots, reduce it into an infinite series ascending or descending according to the dimensions of x, and then find the integral of each of the resulting terms. See also Algebraic expression Analytic function Complex function Elementary function Function (mathematics) Generalized function List of special functions and eponyms List of types of functions Polynomial Rational function Special functions Transcendental function References External links Definition of "Algebraic function" in the Encyclopedia of Math Definition of "Algebraic function" in David J. Darling's Internet Encyclopedia of Science Analytic functions Functions and mappings Meromorphic functions Special functions Types of functions Polynomials Algebraic number theory
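To make the branch discussion concrete, here is the unit-circle example written out in standard notation (a sketch, not a verbatim restoration of the article's own displayed equations):

```latex
% The unit circle defines a degree-2 algebraic function with two branches.
\begin{align*}
  p(x, y) &= y^{2} + x^{2} - 1 = 0,\\
  y_{+}(x) &= \sqrt{1 - x^{2}}, \qquad
  y_{-}(x) = -\sqrt{1 - x^{2}}, \qquad -1 \le x \le 1.
\end{align*}
```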
Algebraic function
[ "Mathematics" ]
1,721
[ "Functions and mappings", "Mathematical analysis", "Special functions", "Algebra", "Polynomials", "Mathematical objects", "Combinatorics", "Mathematical relations", "Algebraic number theory", "Types of functions", "Number theory" ]
974,191
https://en.wikipedia.org/wiki/Bulkhead%20%28partition%29
A bulkhead is an upright wall within the hull of a ship, within the fuselage of an airplane, or a car. Other kinds of partition elements within a ship are decks and deckheads. Etymology The word bulki meant "cargo" in Old Norse. During the 15th century sailors and builders in Europe realized that walls within a vessel would prevent cargo from shifting during passage. In shipbuilding, any vertical panel was called a head. So walls installed abeam (side-to-side) in a vessel's hull were called "bulkheads". Now, the term bulkhead applies to every vertical panel aboard a ship, except for the hull itself. History Bulkheads were known to the ancient Greeks, who employed bulkheads in triremes to support the back of rams. By the Athenian trireme era (500 BC), the hull was strengthened by enclosing the bow behind the ram, forming a bulkhead compartment. Instead of using bulkheads to protect ships against rams, Greeks preferred to reinforce the hull with extra timber along the waterline, making larger ships almost resistant to ramming by smaller ones. Bulkhead partitions are considered to have been a feature of Chinese junks, a type of ship. Song dynasty author Zhu Yu (fl. 12th century) wrote in his book of 1119 that the hulls of Chinese ships had a bulkhead build. The 5th-century book Garden of Strange Things by Liu Jingshu mentioned that a ship could allow water to enter the bottom without sinking. Archaeological evidence of bulkhead partitions has been found on a 24 m (78 ft) long Song dynasty ship dredged from the waters off the southern coast of China in 1973, the hull of the ship divided into twelve walled compartmental sections built watertight, dated to about 1277. Texts written by writers such as Marco Polo (1254–1324), Ibn Battuta (1304–1369), Niccolò Da Conti (1395–1469), and Benjamin Franklin (1706–1790) describe the bulkhead partitions of East Asian shipbuilding. An account of the early fifteenth century describes Indian ships as being built in compartments so that even if one part was damaged, the rest remained intact—a forerunner of the modern day watertight compartments using bulkheads. As wood began to be replaced by iron in European ships in the 18th century, new structures, like bulkheads, started to become prevalent. Bulkhead partitions became widespread in Western shipbuilding during the early 19th century. Benjamin Franklin wrote in a 1787 letter that "as these vessels are not to be laden with goods, their holds may without inconvenience be divided into separate apartments, after the Chinese manner, and each of these apartments caulked tight so as to keep out water." A 19th-century book on shipbuilding attributes the introduction of watertight bulkheads to Charles Wye Williams, known for his steamships. Purpose Bulkheads in a ship serve several purposes: increase the structural rigidity of the vessel, divide functional areas into rooms and create watertight compartments that can contain water in the case of a hull breach or other leak. some bulkheads and decks are fire-resistance rated to achieve compartmentalisation, a passive fire protection measure; see firewall (construction). Not all bulkheads are intended to be watertight, in modern ships the bottom floor is supported against the hull by transverse walls(bulkheads) and longitudinal walls, being common to use bulkheads with lightening holes. On an aircraft, bulkheads divide the cabin into multiple areas. On passenger aircraft a common application is for physically dividing cabins used for different classes of service (e.g. 
economy and business.) On combination cargo/passenger, or "combi" aircraft, bulkhead walls are inserted to divide areas intended for passenger seating and cargo storage. Requirements of bulkheads Fire-resistance Openings in fire-resistance rated bulkheads and decks must be firestopped to restore the fire-resistance ratings that would otherwise be compromised if the openings were left unsealed. The authority having jurisdiction for such measures varies depending upon the flag of the ship. Merchant vessels are typically subject to the regulations and inspections of the coast guards of the flag country. Combat ships are subject to the regulations set out by the navy of the country that owns the ship. Prevention of electromagnetic damage Bulkheads and decks of warships may be fully electrically grounded as a countermeasure against damage from electromagnetic interference and electromagnetic pulse due to nearby nuclear or electromagnetic bomb detonations, which could severely damage the vital electronic systems on a ship. In the case of firestops, cable jacketing is usually removed within the seal and firestop rubber modules are internally fitted with copper shields, which contact the cables' armour to ground the seal. Automotive Most passenger vehicles and some freight vehicles will have a bulkhead which separates the engine compartment from the passenger compartment or cab; the automotive use is analogous to the nautical term in that the bulkhead is an internal wall which separates different parts of the vehicle. Some passenger vehicles (particularly sedan/saloon-type vehicles) will also have a rear bulkhead, which separates the passenger compartment from the trunk/boot. Other uses of the term The term was later applied to other vehicles, such as railroad cars, hopper cars, trams, automobiles, aircraft or spacecraft, as well as to containers, intermediate bulk containers and fuel tanks. In some of these cases bulkheads are airtight to prevent air leakage or the spread of a fire. The term may also be used for the "end walls" of bulkhead flatcars. Mechanically, a partition or panel through which connectors pass, or a connector designed to pass through a partition. In architecture the term is frequently used to denote any boxed in beam or other downstand from a ceiling and by extension even the vertical downstand face of an area of lower ceiling beyond. This usage presumably derives from experience on boats where to maintain the structural function personnel openings through bulkheads always retain a portion of the bulkhead crossing the head of the opening. Head strikes on these downstand elements are commonplace, hence in architecture any overhead downstand element comes to be referred to as a bulkhead. Bulkhead also refers to a moveable structure often found in an Olympic-size swimming pool, as a means to set the pool into a "double-ended short course" configuration, or long-course, depending on the type of event being run. Pool bulkheads are usually air-fillable, but power driven solutions do exist. The term is also used to refer to large retroactively installed pressure barriers for temporary or permanent use, often during maintenance or construction activities. 
See also References External links Britannica definition Merriam-Webster definition WIPO Bulkhead for motor vehicle Canadian Armed Forces Glossary, see Fire Zone, page 5 of 14 Det Norske Veritas Type Approval for a fire damper inside and A60 bulkhead Subject-related patent by Free Patents Online An example treatise on the use of A60 bulkheads onboard tankers. Shipbuilding Nautical terminology Chinese inventions Ship compartments
Bulkhead (partition)
[ "Engineering" ]
1,439
[ "Shipbuilding", "Marine engineering" ]
974,207
https://en.wikipedia.org/wiki/Ring%20circuit
In electricity supply design, a ring circuit is an electrical wiring technique in which sockets and the distribution point are connected in a ring. It is contrasted with the usual radial circuit, in which sockets and the distribution point are connected in a line with the distribution point at one end. Ring circuits are also known as ring final circuits and often incorrectly as ring mains, a term used historically, or informally simply as rings. It is used primarily in the United Kingdom, where it was developed, and to a lesser extent in Ireland and Hong Kong. This design enables the use of smaller-diameter wire than would be used in a radial circuit of equivalent total current capacity. The reduced diameter conductors in the flexible cords connecting an appliance to the plug intended for use with sockets on a ring circuit are individually protected by a fuse in the plug. Its advantages over radial circuits are therefore reduced quantity of copper used, and greater flexibility of appliances and equipment that can be connected. Ideally, the ring circuit acts like two radial circuits proceeding in opposite directions around the ring, the dividing point between them dependent on the distribution of load in the ring. If the load is evenly split across the two directions, the current in each direction is half of the total, allowing the use of wire with half the total current-carrying capacity. In practice, the load does not always split evenly, so thicker wire is used. Description The ring starts at the consumer unit (also known as fuse box, distribution board, or breaker box), visits each socket in turn, and then returns to the consumer unit. The ring is fed from a fuse or circuit breaker in the consumer unit. Ring circuits are commonly used in British wiring with socket-outlets taking fused plugs to BS 1363. Because the breaker rating is much higher than that of any one socket outlet, the system can only be used with fused plugs or fused appliance outlets. They are generally wired with 2.5 mm2 cable and protected by a 30 A fuse, an older 30 A circuit breaker, or a European harmonised 32 A circuit breaker. Sometimes 4 mm2 cable is used if very long cable runs (to help reduce voltage drop) or derating factors such as very thick thermal insulation are involved. 1.5 mm2 mineral-insulated copper-clad cable (known as pyro) may also be used (as mineral insulated cable can withstand heat more effectively than normal PVC) though more care must be taken with regard to voltage drop on longer runs. The protection devices for the fixed wiring need to be rated higher than would protect flexible appliance cords, so BS 1363 requires that all plugs and connection units incorporate fuses appropriate to the appliance cord. History and use The ring circuit and the associated BS 1363 plug and socket system were developed in Britain during 1942–1947. They are commonly used in the United Kingdom and to a lesser extent in the Republic of Ireland. They are also found in the United Arab Emirates, Singapore, Hong Kong, Beijing, Indonesia and many places where the UK had a strong influence, including for example Cyprus and Uganda. Pre-World War II practice was to use various sizes of plugs and sockets to suit the current requirement of the appliance, and these were connected to suitably fused radial circuits; the ratings of those fuses were appropriate to protect both the fixed wiring and the flexible cord attached to the plug. 
The Electrical Installations Committee which was convened in 1942 as part of the Post War Building Studies programme determined, amongst other things, that the ring final circuit offered a more efficient and lower cost system which would safely support a greater number of sockets. The scheme was specified to use 13 A socket-outlets and fused plugs; several designs for the plugs and sockets were considered. The design chosen as the British Standard was the flat pin system now known as BS 1363. Other designs of 13 A fused plugs and socket-outlets, notably the Wylex and Dorman & Smith systems, which did not conform to the chosen standard, were used into the 1950s, but by the 1960s BS 1363 had become the single standard for new installations. The committee mandated the ring circuit both to increase consumer safety and to combat the anticipated post-war copper shortage. The committee estimated that using ring-circuit and single-pole fusing would reduce raw materials requirements by approximately 25% compared with pre-war regulations. The ring circuit is still the most common mains wiring configuration in the UK, although both 20 A and 30 A radial circuits are also permitted by the Wiring Regulations, with a recommendation based on the floor area served (20 A for area up to 25 m2, 30 A for up to 100 m2). Installation rules Rules for ring circuits provide that the cable rating must be no less than two thirds of the rating of the protective device. This means that the risk of sustained overloading of the cable can be considered minimal. In practice, however, it is extremely uncommon to encounter a ring with a protective device other than a 30 A fuse, 30 A breaker, or 32 A breaker, and a cable size other than those mentioned above. Because the BS 1363 plug contains a fuse not exceeding 13A, the load at any one point on the ring is limited. The IET Wiring Regulations (BS 7671) permit an unlimited number of 13A socket outlets (at any point unfused single or double, or any number fused) to be installed on a ring circuit, provided that the floor area served does not exceed 100 m2. In practice, most small and medium houses have one ring circuit per storey, with larger premises having more. An installation designer may determine if additional circuits are required for areas of high demand. For example, it is common practice to put kitchens on their own ring circuit or sometimes a ring circuit shared with a utility room to avoid putting a heavy load at one point on the main downstairs ring circuit. Since any load on a ring is fed by the ring conductors on either side of it, it is desirable to avoid a concentrated load placed very near the consumer unit, since the shorter conductors will have less resistance and carry a disproportionate share of the load. Unfused spurs from a ring wired in the same cable as the ring are allowed to run one socket (single or double) or one fused connection unit (FCU). Before 1970 the use of two single sockets on one spur was allowed, but has since been disallowed because of their conversion to double sockets. Spurs may either start from a socket or be joined to the ring cable with a junction box or other approved method of joining cables. BS 1363 compliant triple and larger sockets are always fused at 13A and therefore can also be placed on a spur. Since 1970 it is permitted to have more spurs than sockets on the ring, but it is considered poor practice by many electricians to have too many unfused spurs in a new installation. 
Where loads other than BS 1363 sockets are connected to a ring circuit or it is desired to place more than one socket for low power equipment on a spur, a BS 1363 fused connection unit (FCU) is used. In the case of fixed appliances this will be a switched fused connection unit (SFCU) to provide a point of isolation for the appliance, but in other cases such as feeding multiple lighting points (putting lighting on a ring though is generally considered bad practice in new installation but is often done when adding lights to an existing property) or multiple sockets, an unswitched one is often preferable. Fixed appliances with a power rating of 3 kW or more (for example, water heaters and some electric cookers) or with a non-trivial power demand for long periods (for example, immersion heaters) may be connected to a ring circuit, but it is strongly recommended that instead they are connected to their own dedicated circuit. However, there are plenty of older installations with such loads on a ring circuit. Advantages Proponents of the ring circuit point out that, when correctly installed, there are also a number of advantages to be considered. Area served For rooms that are square or circular, a ring circuit can deliver more power per unit of floor area for a given cable size than a simple radial circuit, and the source impedance and therefore voltage drop to the furthest point is lower. Alternatively, to deliver the same power to the same building with radial circuits would require more final circuits or a heavier cable. High integrity earthing As all fittings on the ring are earthed from both sides, two independent faults are needed to create an 'off earth' fault. Continuous continuity verification from any point The continuity of each conductor right round all the points on the ring can be verified from any point, and if this needs to be done as part of live installation monitoring, it can be verified by current clamp injection with the system energised. Criticism The ring final circuit concept has been criticized in a number of ways compared to radials, and some of these concerns could explain the lack of widespread adoption outside the United Kingdom. Fault conditions are not apparent when in use Ring circuits may continue to operate without the user being aware of any problem if there are certain types of fault condition or installation errors. This gives both robustness against failure and a potential for danger. Safety tests are complex At least one author claims that testing ring circuits may take 5–6 times longer than testing radial circuits. The installation tests required for the safe operation of a ring circuit are more time-consuming than those for a radial circuit, and DIY installers or electricians qualified in other countries may not be familiar with them. Load balance required Regulation 433-02-04 of BS 7671 requires that the installed load must be distributed around the ring such that no part of the cable exceeds its rated capacity. In some cases this requirement is difficult to guarantee, and may be largely ignored in practice, as loads are often co-located (e.g., washing machine, tumble dryer, dish washer all next to kitchen sink) at a point not necessarily near the centre of the ring. However, the fact that the cable rating is 67% that of the circuit breaker, not 50%, means that a ring has to be significantly out of balance to cause a problem. 
In a ring circuit, if any poor joint causes a high resistance on one branch of the ring, current will be unevenly distributed, possibly overloading the remaining conductor of the ring. See also Electrical wiring in the United Kingdom References External links Ring Circuit wiring guide Electrical wiring Electricity supply circuits
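The warning about concentrated loads near the origin of the ring can be illustrated with a simple current-divider model: each leg of the ring carries a share of the load current inversely proportional to its resistance, hence to its length. The sketch below uses invented figures and ignores spurs, diversity and cable derating; it is illustrative only, not a BS 7671 design calculation.

```python
# Illustrative current-divider model of a ring final circuit (assumed figures).
# A single load at distance x metres around a ring of total length ring_len
# is fed from both directions; each leg carries current inversely
# proportional to its resistance, i.e. to its length.

def leg_currents(load_current_a: float, x: float, ring_len: float) -> tuple[float, float]:
    """Return (current in short leg, current in long leg) in amperes."""
    if not 0 < x < ring_len:
        raise ValueError("load must sit strictly between the two ends of the ring")
    short_len, long_len = sorted((x, ring_len - x))
    i_short = load_current_a * long_len / ring_len   # shorter leg carries more
    i_long = load_current_a * short_len / ring_len
    return i_short, i_long

if __name__ == "__main__":
    # Hypothetical example: a 26 A load placed 5 m around a 50 m ring.
    i_near, i_far = leg_currents(26.0, 5.0, 50.0)
    print(f"short leg: {i_near:.1f} A, long leg: {i_far:.1f} A")
    # short leg: 23.4 A, long leg: 2.6 A -> a load close to the consumer unit
    # pushes most of the current down one leg, which is why concentrated loads
    # near the origin of the ring are discouraged.
```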
Ring circuit
[ "Physics", "Engineering" ]
2,151
[ "Electrical systems", "Building engineering", "Physical systems", "Electricity supply circuits", "Electrical engineering", "Electrical wiring" ]
11,726,298
https://en.wikipedia.org/wiki/Linear%20extension
In order theory, a branch of mathematics, a linear extension of a partial order is a total order (or linear order) that is compatible with the partial order. As a classic example, the lexicographic order of totally ordered sets is a linear extension of their product order. Definitions Linear extension of a partial order A partial order is a reflexive, transitive and antisymmetric relation. Given any partial orders and on a set is a linear extension of exactly when is a total order, and For every if then It is that second property that leads mathematicians to describe as extending Alternatively, a linear extension may be viewed as an order-preserving bijection from a partially ordered set to a chain on the same ground set. Linear extension of a preorder A preorder is a reflexive and transitive relation. The difference between a preorder and a partial-order is that a preorder allows two different items to be considered "equivalent", that is, both and hold, while a partial-order allows this only when . A relation is called a linear extension of a preorder if: is a total preorder, and For every if then , and For every if then . Here, means " and not ". The difference between these definitions is only in condition 3. When the extension is a partial order, condition 3 need not be stated explicitly, since it follows from condition 2. Proof: suppose that and not . By condition 2, . By reflexivity, "not " implies that . Since is a partial order, and imply "not ". Therefore, . However, for general preorders, condition 3 is needed to rule out trivial extensions. Without this condition, the preorder by which all elements are equivalent ( and hold for all pairs x,y) would be an extension of every preorder. Order-extension principle The statement that every partial order can be extended to a total order is known as the order-extension principle. A proof using the axiom of choice was first published by Edward Marczewski (Szpilrajin) in 1930. Marczewski writes that the theorem had previously been proven by Stefan Banach, Kazimierz Kuratowski, and Alfred Tarski, again using the axiom of choice, but that the proofs had not been published. There is an analogous statement for preorders: every preorder can be extended to a total preorder. This statement was proved by Hansson. In modern axiomatic set theory the order-extension principle is itself taken as an axiom, of comparable ontological status to the axiom of choice. The order-extension principle is implied by the Boolean prime ideal theorem or the equivalent compactness theorem, but the reverse implication doesn't hold. Applying the order-extension principle to a partial order in which every two elements are incomparable shows that (under this principle) every set can be linearly ordered. This assertion that every set can be linearly ordered is known as the ordering principle, OP, and is a weakening of the well-ordering theorem. However, there are models of set theory in which the ordering principle holds while the order-extension principle does not. Related results The order extension principle is constructively provable for sets using topological sorting algorithms, where the partial order is represented by a directed acyclic graph with the set's elements as its vertices. Several algorithms can find an extension in linear time. 
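As an illustration of the constructive result just mentioned, the sketch below uses Kahn's topological-sort algorithm to produce one linear extension of a finite partial order supplied as a list of ordered pairs; the example relation (divisibility on {1, 2, 3, 6}) is chosen only for demonstration.

```python
from collections import defaultdict, deque

def linear_extension(elements, relations):
    """Return one linear extension of the partial order generated by
    `relations` (pairs (a, b) meaning a < b), via Kahn's algorithm.
    Raises ValueError if the relation contains a cycle."""
    succ = defaultdict(set)
    indeg = {x: 0 for x in elements}
    for a, b in relations:
        if b not in succ[a]:
            succ[a].add(b)
            indeg[b] += 1
    queue = deque(x for x in elements if indeg[x] == 0)
    order = []
    while queue:
        x = queue.popleft()
        order.append(x)
        for y in succ[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                queue.append(y)
    if len(order) != len(indeg):
        raise ValueError("relation is not a partial order (cycle detected)")
    return order

# Hypothetical example: the divisibility order on {1, 2, 3, 6}.
print(linear_extension([1, 2, 3, 6], [(1, 2), (1, 3), (2, 6), (3, 6)]))
# e.g. [1, 2, 3, 6] -- a total order compatible with every given pair
```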
Despite the ease of finding a single linear extension, the problem of counting all linear extensions of a finite partial order is #P-complete; however, it may be estimated by a fully polynomial-time randomized approximation scheme. Among all partial orders with a fixed number of elements and a fixed number of comparable pairs, the partial orders that have the largest number of linear extensions are semiorders. The order dimension of a partial order is the minimum cardinality of a set of linear extensions whose intersection is the given partial order; equivalently, it is the minimum number of linear extensions needed to ensure that each critical pair of the partial order is reversed in at least one of the extensions. Antimatroids may be viewed as generalizing partial orders; in this view, the structures corresponding to the linear extensions of a partial order are the basic words of the antimatroid. This area also includes one of order theory's most famous open problems, the 1/3–2/3 conjecture, which states that in any finite partially ordered set P that is not totally ordered there exists a pair of elements x and y for which the linear extensions of P in which x precedes y number between 1/3 and 2/3 of the total number of linear extensions of P. An equivalent way of stating the conjecture is that, if one chooses a linear extension of P uniformly at random, there is a pair (x, y) which has probability between 1/3 and 2/3 of being ordered as x before y. However, for certain infinite partially ordered sets, with a canonical probability defined on their linear extensions as a limit of the probabilities for finite partial orders that cover the infinite partial order, the 1/3–2/3 conjecture does not hold. Algebraic combinatorics Counting the number of linear extensions of a finite poset is a common problem in algebraic combinatorics. This number is given by the leading coefficient of the order polynomial multiplied by the factorial of the number of elements. Young tableaux can be considered as linear extensions of a finite order-ideal in the infinite poset of pairs of positive integers ordered componentwise, and they are counted by the hook length formula. References Order theory
Linear extension
[ "Mathematics" ]
1,119
[ "Order theory" ]
11,727,583
https://en.wikipedia.org/wiki/Macropod%20hybrid
Macropod hybrids are hybrids of animals within the family Macropodidae, which includes kangaroos and wallabies. Several macropod hybrids have been experimentally bred. Some hybrids between similar species have been achieved by housing males of one species and females of the other together to limit the choice of a mate. To create a "natural" macropod hybrid, young animals of one species have been transferred to the pouch of another so that they imprint on the other species. In vitro fertilization has also been used, with the fertilized egg implanted into a female of either species. References Macropods Mammal hybrids Intergeneric hybrids
Macropod hybrid
[ "Biology" ]
132
[ "Intergeneric hybrids", "Hybrid organisms" ]
11,730,924
https://en.wikipedia.org/wiki/Merit%20order
The merit order is a way of ranking available sources of energy, especially electrical generation, based on ascending order of price (which may reflect the order of their short-run marginal costs of production) and sometimes pollution, together with the amount of energy that will be generated. In a centralized management scheme, the ranking is such that those with the lowest marginal costs are the first sources to be brought online to meet demand, and the plants with the highest marginal costs are the last to be brought on line. Dispatching power generation in this way, known as economic dispatch, minimizes the cost of production of electricity. Sometimes generating units must be started out of merit order, due to transmission congestion, system reliability or other reasons. In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss. The effect of renewable energy on merit order The high demand for electricity during peak demand pushes up the bidding price for electricity, and the often relatively inexpensive baseload power supply mix is supplemented by 'peaking power plants', which produce electrical power at higher cost, and therefore are priced higher for their electrical output. Increasing the supply of renewable energy tends to lower the average price per unit of electricity because wind energy and solar energy have very low marginal costs: they do not have to pay for fuel, and the sole contributor to their marginal cost is operations and maintenance. With cost often reduced by feed-in tariff revenue, their electricity is, as a result, less costly on the spot market than that from coal or natural gas, and transmission companies typically buy from them first. Solar and wind electricity therefore substantially reduce the amount of highly priced peak electricity that transmission companies need to buy, during the times when solar/wind power is available, reducing the overall cost. A study by the Fraunhofer Institute ISI found that this "merit order effect" had allowed solar power to reduce the price of electricity on the German energy exchange by 10% on average, and by as much as 40% in the early afternoon, in 2007; as more solar electricity is fed into the grid, peak prices may come down even further. By 2006, the "merit order effect" indicated that the savings in electricity costs to German consumers, on average, more than offset the support payments paid by customers for renewable electricity generation. A 2013 study estimated the merit order effect of both wind and photovoltaic electricity generation in Germany between the years 2008 and 2012. For each additional GWh of renewables fed into the grid, the price of electricity in the day-ahead market was reduced by 0.11–0.13¢/kWh. The total merit order effect of wind and photovoltaics ranged from 0.5¢/kWh in 2010 to more than 1.1¢/kWh in 2012. The near-zero marginal cost of wind and solar energy does not, however, translate into zero marginal cost of peak load electricity in a competitive open electricity market system, as wind and solar supply alone often cannot be dispatched to meet peak demand without incurring marginal transmission costs and potentially the costs of batteries.
The purpose of the merit order dispatching paradigm was to enable the lowest net cost electricity to be dispatched first thus minimising overall electricity system costs to consumers. Intermittent wind and solar is sometimes able to supply this economic function. If peak wind (or solar) supply and peak demand both coincide in time and quantity, the price reduction is larger. On the other hand, solar energy tends to be most abundant at noon, whereas peak demand is late afternoon in warm climates, leading to the so-called duck curve. A 2008 study by the Fraunhofer Institute ISI in Karlsruhe, Germany found that windpower saves German consumers €5billion a year. It is estimated to have lowered prices in European countries with high wind generation by between 3 and 23€/MWh. On the other hand, renewable energy in Germany increased the price for electricity, consumers there now pay 52.8 €/MWh more only for renewable energy (see German Renewable Energy Sources Act), average price for electricity in Germany now is increased to 26¢/kWh. Increasing electrical grid costs for new transmission, market trading and storage associated with wind and solar are not included in the marginal cost of power sources, instead grid costs are combined with source costs at the consumer end. Economic dispatch Economic dispatch is the short-term determination of the optimal output of a number of electricity generation facilities, to meet the system load, at the lowest possible cost, subject to transmission and operational constraints. The Economic Dispatch Problem can be solved by specialized computer software which should satisfy the operational and system constraints of the available resources and corresponding transmission capabilities. In the US Energy Policy Act of 2005, the term is defined as "the operation of generation facilities to produce energy at the lowest cost to reliably serve consumers, recognising any operational limits of generation and transmission facilities". The main idea is that, in order to satisfy the load at a minimum total cost, the set of generators with the lowest marginal costs must be used first, with the marginal cost of the final generator needed to meet load setting the system marginal cost. This is the cost of delivering one additional MWh of energy onto the system. Due to transmission constraints, this cost can vary at different locations within the power grid - these different cost levels are identified as "locational marginal prices" (LMPs). The historic methodology for economic dispatch was developed to manage fossil fuel burning power plants, relying on calculations involving the input/output characteristics of power stations. Basic mathematical formulation The following is based on an analytical methodology following Biggar and Hesamzadeh (2014) and Kirschen (2010). The economic dispatch problem can be thought of as maximising the economic welfare of a power network whilst meeting system constraints. For a network with buses (nodes), suppose that is the rate of generation, and is the rate of consumption at bus . Suppose, further, that is the cost function of producing power (i.e., the rate at which the generator incurs costs when producing at rate ), and is the rate at which the load receives value or benefits (expressed in currency units) when consuming at rate . 
The total welfare is then The economic dispatch task is to find the combination of rates of production and consumption () which maximise this expression subject to a number of constraints: The first constraint, which is necessary to interpret the constraints that follow, is that the net injection at each bus is equal to the total production at that bus less the total consumption: The power balance constraint requires that the sum of the net injections at all buses must be equal to the power losses in the branches of the network: The power losses depend on the flows in the branches and thus on the net injections as shown in the above equation. However it cannot depend on the injections on all the buses as this would give an over-determined system. Thus one bus is chosen as the Slack bus and is omitted from the variables of the function . The choice of Slack bus is entirely arbitrary, here bus is chosen. The second constraint involves capacity constraints on the flow on network lines. For a system with lines this constraint is modeled as: where is the flow on branch , and is the maximum value that this flow is allowed to take. Note that the net injection at the slack bus is not included in this equation for the same reasons as above. These equations can now be combined to build the Lagrangian of the optimization problem: where π and μ are the Lagrangian multipliers of the constraints. The conditions for optimality are then: where the last condition is needed to handle the inequality constraint on line capacity. Solving these equations is computationally difficult as they are nonlinear and implicitly involve the solution of the power flow equations. The analysis can be simplified using a linearised model called a DC power flow. There is a special case which is found in much of the literature. This is the case in which demand is assumed to be perfectly inelastic (i.e., unresponsive to price). This is equivalent to assuming that for some very large value of and inelastic demand . Under this assumption, the total economic welfare is maximised by choosing . The economic dispatch task reduces to: Subject to the constraint that and the other constraints set out above. Environmental dispatch In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss. Due to the added complexity, a number of algorithms have been employed to optimize this environmental/economic dispatch problem. Notably, a modified bees algorithm implementing chaotic modeling principles was successfully applied not only in silico, but also on a physical model system of generators. Other methods used to address the economic emission dispatch problem include Particle Swarm Optimization (PSO) and neural networks Another notable algorithm combination is used in a real-time emissions tool called Locational Emissions Estimation Methodology (LEEM) that links electric power consumption and the resulting pollutant emissions. The LEEM estimates changes in emissions associated with incremental changes in power demand derived from the locational marginal price (LMP) information from the independent system operators (ISOs) and emissions data from the US Environmental Protection Agency (EPA). 
LEEM was developed at Wayne State University as part of a project aimed at optimizing water transmission systems in Detroit, MI starting in 2010 and has since found a wider application as a load profile management tool that can help reduce generation costs and emissions. References External links Economic Dispatch: Concepts, Practices and Issues See also Electricity market Bid-based, security-constrained, economic dispatch with nodal prices Unit commitment problem in electrical power production German Renewable Energy Sources Act Electric power Energy production Energy in the United Kingdom Energy economics
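To make the single-bus special case concrete, here is a deliberately simplified merit-order dispatch sketch: generators are ranked by marginal cost and loaded in order until (inelastic) demand is met, with the last unit loaded setting the system marginal price. The fleet data are invented, the network is lossless and unconstrained, and none of the unit-commitment or transmission constraints discussed above are modelled.

```python
from dataclasses import dataclass

@dataclass
class Generator:
    name: str
    capacity_mw: float
    marginal_cost: float   # currency units per MWh

def merit_order_dispatch(generators, demand_mw):
    """Lossless single-bus dispatch: fill demand from the cheapest units first.
    Returns (schedule dict, system marginal price)."""
    schedule, remaining, price = {}, demand_mw, 0.0
    for g in sorted(generators, key=lambda g: g.marginal_cost):
        if remaining <= 0:
            break
        dispatch = min(g.capacity_mw, remaining)
        schedule[g.name] = dispatch
        remaining -= dispatch
        price = g.marginal_cost          # last loaded unit sets the price
    if remaining > 1e-9:
        raise ValueError("insufficient capacity to meet demand")
    return schedule, price

# Hypothetical fleet (capacities and costs are made up for illustration):
fleet = [
    Generator("wind",    400, 0.0),
    Generator("nuclear", 800, 10.0),
    Generator("coal",    600, 35.0),
    Generator("ccgt",    500, 55.0),
    Generator("ocgt",    200, 120.0),
]
sched, smp = merit_order_dispatch(fleet, demand_mw=1600)
print(sched)   # {'wind': 400, 'nuclear': 800, 'coal': 400}
print(smp)     # 35.0 -- cheap renewables displace the marginal plant,
               # which is the "merit order effect" on the spot price
```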
Merit order
[ "Physics", "Engineering", "Environmental_science" ]
2,061
[ "Government by algorithm", "Physical quantities", "Energy economics", "Automation", "Power (physics)", "Electric power", "Electrical engineering", "Environmental social science" ]
11,733,139
https://en.wikipedia.org/wiki/Friction%20torque
In mechanics, friction torque is the torque caused by the frictional force that occurs when two objects in contact move. Like all torques, it is a rotational force that may be measured in newton meters or pounds-feet. Engineering Friction torque can be disruptive in engineering. There are a variety of measures engineers may choose to take to eliminate these disruptions. Ball bearings are an example of an attempt to minimize the friction torque. Friction torque can also be an asset in engineering. Bolts and nuts, or screws are often designed to be fastened with a given amount of torque, where the friction is adequate during use or operation for the bolt, nut, or screw to remain safely fastened. This is true with such applications as lug nuts retaining wheels to vehicles, or equipment subjected to vibration with sufficiently well-attached bolts, nuts, or screws to prevent the vibration from shaking them loose. Examples When a cyclist applies the brake to the forward wheel, the bicycle tips forward due to the frictional torque between the wheel and the ground. When a golf ball hits the ground it begins to spin in part because of the friction torque applied to the golf ball from the friction between the golf ball and the ground. References See also Torque Force Engineering Mechanics Moment (physics)
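The bolt-fastening application mentioned above is commonly estimated with the short-form torque–tension relation T ≈ K·F·d, where K is an empirical "nut factor" that lumps together thread and under-head friction. The sketch below uses illustrative values only and is not a fastener specification:

```python
def tightening_torque(preload_n: float, nominal_dia_m: float, nut_factor: float = 0.2) -> float:
    """Short-form torque-tension estimate T = K * F * d, in newton metres.
    K ~ 0.2 is a common rule-of-thumb nut factor for dry steel fasteners;
    the real value depends strongly on lubrication and surface finish."""
    return nut_factor * preload_n * nominal_dia_m

# Illustrative example: an M10 bolt (d = 10 mm) preloaded to 20 kN.
print(f"{tightening_torque(20_000, 0.010):.1f} N·m")   # ~40.0 N·m
```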
Friction torque
[ "Physics", "Mathematics", "Engineering" ]
255
[ "Physical quantities", "Quantity", "Mechanics", "Mechanical engineering", "Moment (physics)" ]
11,733,219
https://en.wikipedia.org/wiki/Tin%28II%29%20iodide
Tin(II) iodide, also known as stannous iodide, is an ionic tin salt of iodine with the formula SnI2. It has a formula weight of 372.519 g/mol. It is a red to red-orange solid. Its melting point is 320 °C, and its boiling point is 714 °C. Tin(II) iodide can be synthesised by heating metallic tin with iodine in 2 M hydrochloric acid. Sn + I2 → SnI2 References Tin(II) compounds Iodides Metal halides Reducing agents
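The quoted formula weight can be checked quickly from standard atomic weights; a short arithmetic sketch:

```python
# Check of the SnI2 formula weight from standard atomic weights (g/mol).
SN, I = 118.710, 126.90447          # IUPAC standard atomic weights of Sn and I
print(round(SN + 2 * I, 3))         # 372.519, matching the value quoted above
```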
Tin(II) iodide
[ "Chemistry" ]
126
[ "Redox", "Inorganic compounds", "Salts", "Reducing agents", "Metal halides" ]
11,733,308
https://en.wikipedia.org/wiki/Friedel%27s%20law
Friedel's law, named after Georges Friedel, is a property of Fourier transforms of real functions. Given a real function f(x), its Fourier transform F(k) has the following properties: F(k) = F*(−k), where F* is the complex conjugate of F. The centrosymmetric points k and −k are called Friedel's pairs. The squared amplitude |F|² is centrosymmetric: |F(k)|² = |F(−k)|². The phase of F is antisymmetric: φ(k) = −φ(−k). Friedel's law is used in X-ray diffraction, crystallography and scattering from a real potential within the Born approximation. Note that a twin operation (French: opération de maclage) is equivalent to an inversion centre and the intensities from the individuals are equivalent under Friedel's law. References Fourier analysis Crystallography
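The symmetry is easy to check numerically; the sketch below applies NumPy's FFT to an arbitrary real-valued test signal and verifies that opposite-frequency components form complex-conjugate Friedel pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=64)              # any real-valued signal
F = np.fft.fft(f)

# Friedel pairs: F(-k) equals the complex conjugate of F(k)
mirror = F[(-np.arange(64)) % 64]    # reorder F(k) -> F(-k)
assert np.allclose(F, np.conj(mirror))
# hence the squared amplitude is centrosymmetric (phase antisymmetric)
assert np.allclose(np.abs(F), np.abs(mirror))
```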
Friedel's law
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
148
[ "Materials science stubs", "Materials science", "Crystallography stubs", "Crystallography", "Condensed matter physics" ]
2,970,491
https://en.wikipedia.org/wiki/Johnjoe%20McFadden
Johnjoe McFadden (born 17 May 1956) is an Anglo-Irish scientist, academic and writer. He is Professor of Molecular Genetics at the University of Surrey, United Kingdom. Life McFadden was born in Donegal, Ireland but raised in the UK. He holds joint British and Irish Nationality. He obtained his BSc in Biochemistry University of London in 1977 and his PhD at Imperial College London in 1982. He went on to work on human genetic diseases and then infectious diseases, at St Mary's Hospital Medical School, London (1982–84) and St George's Hospital Medical School, London (1984–88) and then at the University of Surrey in Guildford, UK. For more than a decade, McFadden has researched the genetics of microbes such as the agents of tuberculosis and meningitis and invented a test for the diagnosis of meningitis. He has published more than 100 articles in scientific journals on subjects as wide-ranging as bacterial genetics, tuberculosis, idiopathic diseases and computer modelling of evolution. He has contributed to more than a dozen books and has edited a book on the genetics of mycobacteria. He produced a widely reported artificial life computer model which modelled evolution in organisms. McFadden has lectured extensively in the UK, Europe, the US and Japan and his work has been featured on radio, television and national newspaper articles particularly for the Guardian. His present post, which he has held since 2001, is Professor of Molecular Genetics at the University of Surrey. Living in London, he is married and has one son. Quantum evolution McFadden wrote the popular science book, Quantum Evolution. The book examines the role of quantum mechanics in life, evolution and consciousness. The book has been described as offering an alternative evolutionary mechanism, beyond the neo-Darwinian framework. The book received positive reviews by Kirkus Reviews and Publishers Weekly. It was negatively reviewed in the journal Heredity by evolutionary biologist Wallace Arthur. Writing In 2006 McFadden co-edited the book, Human Nature: Fact and Fiction on the insights of both science and literature on human nature, with contributions from Ian McEwan, Philip Pullman, Steven Pinker, A.C. Grayling and others. in 2014 McFadden co-wrote the popular science book, Life on the Edge: The Coming Age of Quantum Biology, in which he and Jim Al-Khalili further explore quantum biology and particularly recent findings in photosynthesis, enzyme catalysis, avian navigation, olfaction, mutation and neurobiology. The book received positive reviews, for example: "'Life on the Edge’ gives the clearest account I’ve ever read of the possible ways in which the very small events of the quantum world can affect the world of middle-sized living creatures like us. With great vividness and clarity it shows how our world is tinged, even saturated, with the weirdness of the quantum." (Philip Pullman) "Hugely ambitious ... the skill of the writing provides the uplift to keep us aloft as we fly through the strange and spectacular terra incognita of genuinely new science." (Tom Whipple The Times) McFadden regularly writes articles for The Guardian newspaper on topics as varied as quantum mechanics, evolution and genetically modified crops, and has reviewed books there. The Washington Post and Frankfurter Allgemeine Sonntagszeitung have also published his articles. 
Life Is Simple: How Occam’s Razor Set Science Free and Unlocked the Universe (Basic Books, 384pp) ISBN 9781529364934 See also Electromagnetic theories of consciousness Mind's eye Quantum Aspects of Life References External links - Johnjoe McFadden's Homepage Johnjoe McFadden's Machines Like Us interview - Johnjoe McFadden's homepage at the University of Surrey, UK. Quantum Evolution - Explore the role of quantum mechanics in life, evolution and consciousness. - Life on the Edge: The Coming of Age of Quantum Biology. Johnjoe McFadden and Jim Al-Khalili (2014) Living people 1956 births Alumni of Imperial College London Academics of the University of Surrey British science writers British biologists Evolutionary biologists Extended evolutionary synthesis Quantum biology Writers from County Donegal Scientists from County Donegal 21st-century Irish biologists
Johnjoe McFadden
[ "Physics", "Biology" ]
880
[ "Quantum mechanics", "nan", "Quantum biology" ]
2,970,534
https://en.wikipedia.org/wiki/Lufenuron
Lufenuron is the active ingredient in the veterinary flea control medication Program, and one of the two active ingredients in the flea, heartworm, and anthelmintic medicine milbemycin oxime/lufenuron (Sentinel). Lufenuron is stored in the animal's body fat and transferred to adult fleas through the host's blood when they feed. Adult fleas transfer it to their growing eggs through their blood, and to hatched larvae feeding on their excrement. It does not kill adult fleas. Lufenuron, a benzoylurea pesticide, inhibits the production of chitin in insects. Without chitin, a larval flea will never develop a hard outer shell (exoskeleton). With its inner organs exposed to air, the insect dies from dehydration soon after hatching or molting (shedding its old, smaller shell). Lufenuron is also used to fight fungal infections, since fungus cell walls are about one third chitin. Lufenuron is also sold as an agricultural pesticide for use against lepidopterans, eriophyid mites, and western flower thrips. It is an effective antifungal in plants. References External links Veterinary drugs Insecticides Ureas Antifungals Chloroarenes Organofluorides Phenol ethers Benzamides Dog medications Fungicides Fluoroarenes Trifluoromethyl compounds
Lufenuron
[ "Chemistry", "Biology" ]
305
[ "Organic compounds", "Fungicides", "Biocides", "Ureas" ]
2,970,774
https://en.wikipedia.org/wiki/Radiant%20flux
In radiometry, radiant flux or radiant power is the radiant energy emitted, reflected, transmitted, or received per unit time, and spectral flux or spectral power is the radiant flux per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant flux is the watt (W), one joule per second (), while that of spectral flux in frequency is the watt per hertz () and that of spectral flux in wavelength is the watt per metre ()—commonly the watt per nanometre (). Mathematical definitions Radiant flux Radiant flux, denoted ('e' for "energetic", to avoid confusion with photometric quantities), is defined as where is the time; is the radiant energy passing out of a closed surface ; is the Poynting vector, representing the current density of radiant energy; is the normal vector of a point on ; represents the area of ; represents the time period. The rate of energy flow through the surface fluctuates at the frequency of the radiation, but radiation detectors only respond to the average rate of flow. This is represented by replacing the Poynting vector with the time average of its norm, giving where is the time average, and is the angle between and Spectral flux Spectral flux in frequency, denoted Φe,ν, is defined as where is the frequency. Spectral flux in wavelength, denoted , is defined as where is the wavelength. SI radiometry units See also Luminous flux Heat flux Power (physics) Radiosity (heat transfer) References Further reading Power (physics) Physical quantities Radiometry Temporal rates
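The definitions referred to above can be summarised compactly. The sketch below restates the usual textbook forms in standard notation (Q_e radiant energy, t time, ν frequency, λ wavelength) rather than reproducing the article's own displayed equations:

```latex
% Standard radiometric definitions (sketch).
\begin{align*}
  \Phi_{\mathrm{e}} &= \frac{\mathrm{d}Q_{\mathrm{e}}}{\mathrm{d}t}
      && \text{radiant flux: radiant energy per unit time, in W}\\
  \Phi_{\mathrm{e},\nu} &= \frac{\partial \Phi_{\mathrm{e}}}{\partial \nu}
      && \text{spectral flux in frequency, in W\,Hz$^{-1}$}\\
  \Phi_{\mathrm{e},\lambda} &= \frac{\partial \Phi_{\mathrm{e}}}{\partial \lambda}
      && \text{spectral flux in wavelength, in W\,m$^{-1}$}
\end{align*}
```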
Radiant flux
[ "Physics", "Mathematics", "Engineering" ]
330
[ "Temporal quantities", "Physical phenomena", "Force", "Telecommunications engineering", "Physical quantities", "Quantity", "Temporal rates", "Power (physics)", "Energy (physics)", "Wikipedia categories named after physical quantities", "Physical properties", "Radiometry" ]
2,971,012
https://en.wikipedia.org/wiki/Manufacturing%20process%20management
Manufacturing process management (MPM) is a collection of technologies and methods used to define how products are to be manufactured. MPM differs from ERP/MRP, which is used to plan the ordering of materials and other resources, set manufacturing schedules, and compile cost data. A cornerstone of MPM is a central repository that integrates all of these tools and activities; it aids in the exploration of alternative production line scenarios, making assembly lines more efficient with the aim of reduced lead time to product launch, shorter product times, and reduced work in progress (WIP) inventories, as well as allowing rapid response to product or process changes.

Production process planning
Manufacturing concept planning
Factory layout planning and analysis work flow simulation. walk-path assembly planning plant design optimization Mixed model line balancing. Workloads on multiple stations. Process simulation tools e.g. die press lines, manufacturing lines Ergonomic simulation and assessment of production assembly tasks

Resource planning
Computer-aided manufacturing (CAM) Numerical control CNC Direct numerical control (DNC) Tooling/equipment/fixtures development Tooling and Robot work-cell setup and offline programming (OLP) Generation of shop floor work instructions Time and cost estimates ABC – Manufacturing activity-based costing Outline of industrial organization

Quality
computer-aided quality assurance (CAQ) Failure mode and effects analysis (FMEA) Statistical process control (SPC) Computer aided inspection with coordinate-measuring machine (CMM) Tolerance stack-up analysis using PMI models.

Success measurements
Overall equipment effectiveness (OEE),

Communication with other systems
Enterprise resource planning (ERP) Manufacturing operations management (MOM) Product data management (PDM) SCADA (supervisory control and data acquisition) real time process monitoring and control Human–machine interface (HMI) (or man-machine interface (MMI)) Distributed control system (DCS)

See also
List of production topics Process management Quality management system processes Operations Management Industrial Management Industrial technology Industrial Engineering

References

Further reading
Materials and Manufacturing Processes, (electronic) (paper), Taylor & Francis

Product lifecycle management Engineering management Manufacturing
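The success-measurement item above mentions overall equipment effectiveness (OEE) without defining it. The short Python sketch below uses the conventional industry definition, OEE = availability x performance x quality; that definition and the example shift figures are assumptions for illustration, not values from the article.

def oee(availability: float, performance: float, quality: float) -> float:
    """Overall equipment effectiveness as the product of its three conventional factors."""
    return availability * performance * quality

# Example shift: 7.5 h of run time out of 8 h planned, 90% of ideal rate, 2% scrap.
availability = 7.5 / 8.0
performance = 0.90
quality = 0.98
print(f"OEE = {oee(availability, performance, quality):.1%}")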
Manufacturing process management
[ "Engineering" ]
418
[ "Engineering economics", "Engineering management", "Manufacturing", "Mechanical engineering" ]
2,971,205
https://en.wikipedia.org/wiki/Van%20Deemter%20equation
The van Deemter equation in chromatography, named for Jan van Deemter, relates the variance per unit length of a separation column to the linear mobile phase velocity by considering physical, kinetic, and thermodynamic properties of a separation. These properties include pathways within the column, diffusion (axial and longitudinal), and mass transfer kinetics between stationary and mobile phases. In liquid chromatography, the mobile phase velocity is taken as the exit velocity, that is, the ratio of the flow rate in ml/second to the cross-sectional area of the ‘column-exit flow path.’ For a packed column, the cross-sectional area of the column exit flow path is usually taken as 0.6 times the cross-sectional area of the column. Alternatively, the linear velocity can be taken as the ratio of the column length to the dead time. If the mobile phase is a gas, then the pressure correction must be applied. The variance per unit length of the column is taken as the ratio of the column length to the column efficiency in theoretical plates. The van Deemter equation is a hyperbolic function that predicts that there is an optimum velocity at which there will be the minimum variance per unit column length and, thence, a maximum efficiency. The van Deemter equation was the result of the first application of rate theory to the chromatography elution process.

Van Deemter equation
The van Deemter equation relates the height equivalent to a theoretical plate (HETP) of a chromatographic column to the various flow and kinetic parameters which cause peak broadening, as follows:

HETP = A + B/u + C·u

Where
HETP = a measure of the resolving power of the column [m]
A = Eddy-diffusion parameter, related to channeling through a non-ideal packing [m]
B = diffusion coefficient of the eluting particles in the longitudinal direction, resulting in dispersion [m² s⁻¹]
C = Resistance to mass transfer coefficient of the analyte between mobile and stationary phase [s]
u = speed [m s⁻¹]

In open tubular capillaries, the A term will be zero as the lack of packing means channeling does not occur. In packed columns, however, multiple distinct routes ("channels") exist through the column packing, which results in band spreading. In the latter case, A will not be zero.

The form of the Van Deemter equation is such that HETP achieves a minimum value at a particular flow velocity. At this flow rate, the resolving power of the column is maximized, although in practice, the elution time is likely to be impractical. Differentiating the van Deemter equation with respect to velocity, setting the resulting expression equal to zero, and solving for the optimum velocity yields the following:

u_opt = √(B/C)

Plate count
The plate height is given as:

H = L/N

with L the column length and N the number of theoretical plates. The plate count can be estimated from a chromatogram by analysis of the retention time t_R for each component and its standard deviation σ as a measure for peak width, provided that the elution curve represents a Gaussian curve. In this case the plate count is given by:

N = (t_R/σ)²

By using the more practical peak width at half height W_1/2 the equation is:

N = 5.54 (t_R/W_1/2)²

or with the width at the base of the peak W_b:

N = 16 (t_R/W_b)²

Expanded van Deemter
The Van Deemter equation can be further expanded to:

Where:
H is plate height
λ is particle shape (with regard to the packing)
dp is particle diameter
γ, ω, and R are constants
Dm is the diffusion coefficient of the mobile phase
dc is the capillary diameter
df is the film thickness
Ds is the diffusion coefficient of the stationary phase.
u is the linear velocity

Rodrigues equation
The Rodrigues equation, named for Alírio Rodrigues, is an extension of the Van Deemter equation used to describe the efficiency of a bed of permeable (large-pore) particles. The equation is: where and is the intraparticular Péclet number.

See also
Resolution (chromatography)
Jan van Deemter

References

Chromatography Equations
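A minimal Python sketch of the relations above: it evaluates H(u) = A + B/u + C·u, the optimum velocity u_opt = √(B/C), and the plate count from retention time and half-height peak width. The coefficient values and the example peak are illustrative assumptions, not measured data.

import math

def plate_height(u, A, B, C):
    """Van Deemter plate height H (m) at linear velocity u (m/s)."""
    return A + B / u + C * u

def optimum_velocity(B, C):
    """Velocity minimising H, from dH/du = -B/u**2 + C = 0."""
    return math.sqrt(B / C)

def plate_count_half_height(t_r, w_half):
    """Plate count from retention time and peak width at half height."""
    return 5.54 * (t_r / w_half) ** 2

# Illustrative coefficients (assumed, roughly liquid-chromatography orders of magnitude).
A, B, C = 1.0e-5, 1.2e-9, 1.0e-3   # m, m^2/s, s
u_opt = optimum_velocity(B, C)
H_min = plate_height(u_opt, A, B, C)
print(f"u_opt = {u_opt*1000:.2f} mm/s, H_min = {H_min*1e6:.1f} um")

# Column efficiency for an assumed 0.25 m column, t_R = 120 s, W_1/2 = 2.4 s.
L = 0.25
N = plate_count_half_height(120.0, 2.4)
print(f"N = {N:.0f} plates, H = L/N = {L/N*1e6:.1f} um")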
Van Deemter equation
[ "Chemistry", "Mathematics" ]
834
[ "Chromatography", "Mathematical objects", "Equations", "Separation processes" ]
2,971,730
https://en.wikipedia.org/wiki/Static%20mixer
A static mixer is a device for the continuous mixing of fluid materials, without moving components. Normally the fluids to be mixed are liquid, but static mixers can also be used to mix gas streams, disperse gas into liquid or blend immiscible liquids. The energy needed for mixing comes from a loss in pressure as fluids flow through the static mixer. One design of static mixer is the plate-type mixer and another common device type consists of mixer elements contained in a cylindrical (tube) or squared housing. Mixer size can vary from about 6 mm to 6 meters diameter. Typical construction materials for static mixer components include stainless steel, polypropylene, Teflon, PVDF, PVC, CPVC and polyacetal. The latest designs involve static mixing elements made of glass-lined steel. Design Plate type In the plate type design mixing is accomplished through intense turbulence in the flow. Housed-elements design In the housed-elements design the static mixer elements consist of a series of baffles made of metal or a variety of plastics. Similarly, the mixer housing can be made of metal or plastic. The housed-elements design incorporates a method for delivering two streams of fluids into the static mixer. As the streams move through the mixer, the non-moving elements continuously blend the materials. Complete mixing depends on many variables including the fluids' properties, tube inner diameter, number of elements and their design. The housed-elements mixer's fixed, typically helical elements can simultaneously produce patterns of flow division and radial mixing: Flow division: In laminar flow, a processed material divides at the leading edge of each element of the mixer and follows the channels created by the element shape. At each succeeding element, the two channels are further divided, resulting in an exponential increase in stratification. The number of striations produced is 2n where 'n' is the number of elements in the mixer. Radial mixing: In either turbulent flow or laminar flow, rotational circulation of a processed material around its own hydraulic center in each channel of the mixer causes radial mixing of the material. Processed material is intermixed to reduce or eliminate radial gradients in temperature, velocity and material composition. Applications A common application is mixing nozzles for two-component adhesives (e.g., epoxy) and sealants (see Resin casting). Other applications include wastewater treatment and chemical processing. Static mixers can be used in the refinery and oil and gas markets as well, for example in bitumen processing or for desalting crude oil. In polymer production, static mixers can be used to facilitate polymerization reactions or for the admixing of liquid additives. History The static mixer traces its origins to an invention for a mixing device filed on Nov. 29, 1965 by the Arthur D. Little Company. This device was the housed-elements type and was licensed to the Kenics Corporation and marketed as the Kenics Motionless Mixer. Today, the Kenics brand is owned by National Oilwell Varco. The plate type static mixer patent was issued on November 24, 1998, to Robert W. Glanville of Westfall Manufacturing. See also Thermal cleaning References Laboratory equipment Turbulence Piping
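The flow-division mechanism described above doubles the number of striations at each helical element, so n elements produce 2^n layers, which is the exponential growth the text refers to. A minimal sketch, with purely illustrative element counts:

def striations(n_elements: int) -> int:
    """Number of layers produced by flow division in a helical static mixer (2 per element)."""
    return 2 ** n_elements

for n in (1, 6, 12, 24):
    print(f"{n:2d} elements -> {striations(n):,} striations")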
Static mixer
[ "Chemistry", "Engineering" ]
653
[ "Turbulence", "Building engineering", "Chemical engineering", "Mechanical engineering", "Piping", "Fluid dynamics" ]
2,973,866
https://en.wikipedia.org/wiki/Momentum%20transfer
In particle physics, wave mechanics, and optics, momentum transfer is the amount of momentum that one particle gives to another particle. It is also called the scattering vector as it describes the transfer of wavevector in wave mechanics. In the simplest example of scattering of two colliding particles with initial momenta , resulting in final momenta , the momentum transfer is given by where the last identity expresses momentum conservation. Momentum transfer is an important quantity because is a better measure for the typical distance resolution of the reaction than the momenta themselves. Wave mechanics and optics A wave has a momentum and is a vectorial quantity. The difference of the momentum of the scattered wave to the incident wave is called momentum transfer. The wave number k is the absolute of the wave vector and is related to the wavelength . Momentum transfer is given in wavenumber units in reciprocal space Diffraction The momentum transfer plays an important role in the evaluation of neutron, X-ray, and electron diffraction for the investigation of condensed matter. Laue-Bragg diffraction occurs on the atomic crystal lattice, conserves the wave energy and thus is called elastic scattering, where the wave numbers final and incident particles, and , respectively, are equal and just the direction changes by a reciprocal lattice vector with the relation to the lattice spacing . As momentum is conserved, the transfer of momentum occurs to crystal momentum. The presentation in reciprocal space is generic and does not depend on the type of radiation and wavelength used but only on the sample system, which allows to compare results obtained from many different methods. Some established communities such as powder diffraction employ the diffraction angle as the independent variable, which worked fine in the early years when only a few characteristic wavelengths such as Cu-K were available. The relationship to -space is with and basically states that larger corresponds to larger . See also References diffraction momentum neutron-related techniques synchrotron-related techniques
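For elastic scattering the momentum-transfer magnitude follows from the wave vectors alone: with k = 2π/λ and a diffraction angle of 2θ, |Q| = 4π·sin(θ)/λ. The sketch below assumes this common Bragg-angle convention and uses Cu Kα radiation purely as an example; both choices are assumptions for illustration.

import math

def momentum_transfer(wavelength_nm: float, two_theta_deg: float) -> float:
    """|Q| in reciprocal nanometres for elastic scattering at diffraction angle 2*theta."""
    theta = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta) / wavelength_nm

# Cu K-alpha X-rays (wavelength ~0.154 nm) scattered through 2*theta = 30 degrees.
q = momentum_transfer(0.154, 30.0)
print(f"Q = {q:.2f} nm^-1, corresponding d-spacing = {2*math.pi/q:.3f} nm")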
Momentum transfer
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
391
[ "Spectrum (physical sciences)", "Physical quantities", "Quantity", "Diffraction", "Crystallography", "Spectroscopy", "Momentum", "Moment (physics)" ]
2,973,881
https://en.wikipedia.org/wiki/Tunnel%20injection
Tunnel injection is a field electron emission effect; specifically a quantum process called Fowler–Nordheim tunneling, whereby charge carriers are injected into an electric conductor through a thin layer of an electric insulator. It is used to program NAND flash memory. The process used for erasing is called tunnel release. This injection is achieved by creating a large voltage difference between the gate and the body of the MOSFET. When VGB >> 0, electrons are injected into the floating gate. When VGB << 0, electrons are forced out of the floating gate. An alternative to tunnel injection is spin injection.

See also
Hot carrier injection

References

Quantum mechanics Semiconductors
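Fowler–Nordheim tunneling current rises very steeply with the oxide field, following the characteristic form J = a·E²·exp(-b/E). The Python sketch below is only a qualitative illustration: the constants a and b, the oxide thickness, and the use of the gate-to-body voltage divided by oxide thickness as the field are all simplifying assumptions, not device data.

import math

def fowler_nordheim_current_density(E_field_V_per_m, a=1.0e-6, b=2.0e10):
    """Characteristic Fowler-Nordheim form J = a * E^2 * exp(-b / E); a and b are illustrative constants."""
    return a * E_field_V_per_m ** 2 * math.exp(-b / E_field_V_per_m)

oxide_thickness_m = 8e-9          # assumed tunnel-oxide thickness
for v_gb in (6.0, 8.0, 10.0):     # illustrative programming voltages
    E = v_gb / oxide_thickness_m  # simplified oxide field, ignoring band bending
    J = fowler_nordheim_current_density(E)
    print(f"V_GB = {v_gb:4.1f} V -> E = {E:.2e} V/m -> J = {J:.3e} A/m^2")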
Tunnel injection
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
136
[ "Electrical resistance and conductance", "Matter", "Physical quantities", "Semiconductors", "Theoretical physics", "Quantum mechanics", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Quantum physics stubs" ]
2,973,937
https://en.wikipedia.org/wiki/Split%20supersymmetry
In particle physics, split supersymmetry is a proposal for physics beyond the Standard Model. History It was proposed separately in three papers. The first by James Wells in June 2003 in a more modest form that mildly relaxed the assumption about naturalness in the Higgs potential. In May 2004 Nima Arkani-Hamed and Savas Dimopoulos argued that naturalness in the Higgs sector may not be an accurate guide to propose new physics beyond the Standard Model and argued that supersymmetry may be realized in a different fashion that preserved gauge coupling unification and has a dark matter candidate. In June 2004 Gian Giudice and Andrea Romanino argued from a general point of view that if one wants gauge coupling unification and a dark matter candidate, that split supersymmetry is one amongst a few theories that exists. Overview The new light (~TeV) particles in Split Supersymmetry (beyond the Standard Models particles) are The Lagrangian for Split Supersymmetry is constrained from the existence of high energy supersymmetry. There are five couplings in Split Supersymmetry: the Higgs quartic coupling and four Yukawa couplings between the Higgsinos, Higgs and gauginos. The couplings are set by one parameter, , at the scale where the supersymmetric scalars decouple. Beneath the supersymmetry breaking scale, these five couplings evolve through the renormalization group equation down to the TeV scale. At a future Linear collider, these couplings could be measured at the 1% level and then renormalization group evolved up to high energies to show that the theory is supersymmetric at an exceedingly high scale. Long lived Gluinos The striking feature of split supersymmetry is that the gluino becomes a quasi-stable particle with a lifetime that could be up to 100 seconds long. A gluino that lived longer than this would disrupt Big Bang nucleosynthesis or would have been observed as an additional source of cosmic gamma rays. The gluino is long lived because it can only decay into a squark and a quark and because the squarks are so heavy and these decays are highly suppressed. Thus, the decay rate of the gluino can roughly be estimated, in natural units, as where is the gluino rest mass and the squark rest mass. For gluino mass of the order of 1 TeV, the cosmological bound mentioned above sets an upper bound of about GeV on squarks masses. The potentially long lifetime of the gluino leads to different collider signatures at the Tevatron and the Large Hadron Collider. There are three ways to see these particles: Measuring the ratio of momentum to energy or velocity in tracking chambers (dE/dx in the inner tracking chamber or p/v in the outer muon tracking chamber) Looking for excess singlet jet events that arise from initial or final state radiation. Looking for gluinos that have come to rest inside the detector and later decay. Such an event may occur if the gluino hadronize to form an exotic hadron which strongly interacts with a nucleon in the detector to create an exotic charged hadron. The latter will decelerate by electromagnetic interaction inside the detector and will eventually stop. Advantages and drawbacks Split supersymmetry allows gauge coupling unification as supersymmetry does, because the particles which have masses way beyond the TeV scale play no major role in the unification. These particles are the gravitino - which has a small coupling (of order of the gravitational interaction) to the other particles, and the scalar partners to the standard model fermions - namely, squarks and sleptons. 
The latter move the beta functions of all gauge couplings together, and do not influence their unification, because in the grand unification theory they form a full SU(5) multiplet, just like a complete generation of particles. Split supersymmetry also solves the gravitino cosmological problem, because the gravitino mass is much higher than TeV. The upper bounds on proton decay rate can also be satisfied because the squarks are very heavy as well. On the other hand, unlike conventional supersymmetry, split supersymmetry does not solve the hierarchy problem which has been a primary motivation for proposals for new physics beyond the Standard Model since 1979. One proposal is that the hierarchy problem is "solved" by assuming fine-tuning due to anthropic reasons. History The initial attitude of some of the high energy physics community towards split supersymmetry was illustrated by a parody called supersplit supersymmetry. Often when a new notion in physics is proposed there is a knee-jerk backlash. When naturalness in the Higgs sector was initially proposed as a motivation for new physics, the notion was not taken seriously. After the supersymmetric Standard Model was proposed, Sheldon Glashow quipped that 'half of the particles have already been discovered.' After 25 years, the notion of naturalness had become so ingrained in the community that proposing a theory that did not use naturalness as the primary motivation was ridiculed. Split supersymmetry makes predictions that are distinct from both the Standard Model and the Minimal Supersymmetric Standard Model and the ultimate nature of the naturalness in the Higgs sector will hopefully be determined at future colliders. Many of the original proponents of naturalness no longer believe that it should be an exclusive constraint on new physics. Kenneth Wilson originally advocated for it, but has recently called it one of his biggest mistakes during his career. Steven Weinberg relaxed the notion of naturalness in the cosmological constant and argued for an environmental explanation for it in 1987. Leonard Susskind, who initially proposed technicolor, is a firm advocate of the notion of a landscape and non-naturalness. Savas Dimopoulos, who initially proposed the supersymmetric Standard Model, proposed split supersymmetry. See also Minimal Supersymmetric Standard Model Supersplit supersymmetry Supersymmetry External links Implications of Supersymmetry Breaking with a Little Hierarchy between Gauginos and Scalars by James D. Wells Supersymmetric Unification Without Low Energy Supersymmetry And Signatures for Fine-Tuning at the LHC by Nima Arkani-Hamed and Savas Dimopoulos Split Supersymmetry by G.F. Giudice and A. Romanino Authority Articles on Split supersymmetry Supersymmetric quantum field theory Particle physics Physics beyond the Standard Model
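The decay-rate estimate quoted above (in natural units, roughly the fifth power of the gluino mass divided by the fourth power of the squark mass, on dimensional grounds) translates into a lifetime via τ = ħ/Γ. The Python sketch below applies that rough scaling for a 1 TeV gluino; the scaling form and the neglect of couplings and numerical prefactors are assumptions, so the numbers are order-of-magnitude illustrations only.

HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV*s

def gluino_lifetime_s(m_gluino_gev, m_squark_gev):
    """Rough lifetime from the dimensional estimate Gamma ~ m_gluino^5 / m_squark^4 (natural units)."""
    gamma_gev = m_gluino_gev ** 5 / m_squark_gev ** 4
    return HBAR_GEV_S / gamma_gev

m_gluino = 1.0e3  # 1 TeV gluino
for m_squark in (1.0e6, 1.0e9, 1.0e10, 1.0e12):
    tau = gluino_lifetime_s(m_gluino, m_squark)
    print(f"m_squark = {m_squark:.0e} GeV  ->  tau ~ {tau:.1e} s")

On this rough scaling, lifetimes near the 100-second cosmological bound arise for squark masses in the vicinity of 10^10 GeV.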
Split supersymmetry
[ "Physics" ]
1,379
[ "Supersymmetric quantum field theory", "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model", "Supersymmetry", "Symmetry" ]
2,974,045
https://en.wikipedia.org/wiki/Cosmological%20natural%20selection
Cosmological natural selection, also called the fecund universes, is a hypothesis proposed by Lee Smolin intended as a scientific alternative to the anthropic principle. It addresses why our universe has the particular properties that allow for complexity and life. The hypothesis suggests that a process analogous to biological natural selection applies at the grandest of scales. Smolin first proposed the idea in 1992 and summarized it in a book aimed at a lay audience called The Life of the Cosmos, published in 1997. Hypothesis Black holes have a role in natural selection. In fecund theory a collapsing black hole causes the emergence of a new universe on the "other side", whose fundamental constant parameters (masses of elementary particles, Planck constant, elementary charge, and so forth) may differ slightly from those of the universe where the black hole collapsed. Each universe thus gives rise to as many new universes as it has black holes. The theory contains the evolutionary ideas of "reproduction" and "mutation" of universes, and so is formally analogous to models of population biology. Alternatively, black holes play a role in cosmological natural selection by reshuffling only some matter affecting the distribution of elementary quark universes. The resulting population of universes can be represented as a distribution of a landscape of parameters where the height of the landscape is proportional to the numbers of black holes that a universe with those parameters will have. Applying reasoning borrowed from the study of fitness landscapes in population biology, one can conclude that the population is dominated by universes whose parameters drive the production of black holes to a local peak in the landscape. This was the first use of the notion of a landscape of parameters in physics. Leonard Susskind, who later promoted a similar string theory landscape, stated: I'm not sure why Smolin's idea didn't attract much attention. I actually think it deserved far more than it got. However, Susskind also argued that, since Smolin's theory relies on information transfer from the parent universe to the baby universe through a black hole, it ultimately makes no sense as a theory of cosmological natural selection. According to Susskind and many other physicists, the last decade of black hole physics has shown us that no information that goes into a black hole can be lost. Even Stephen Hawking, who was the largest proponent of the idea that information is lost in a black hole, later reversed his position. The implication is that information transfer from the parent universe into the baby universe through a black hole is not conceivable. Smolin has noted that the string theory landscape is not Popper-falsifiable if other universes are not observable. This is the subject of the Smolin–Susskind debate concerning Smolin's argument: "[The] Anthropic Principle cannot yield any falsifiable predictions, and therefore cannot be a part of science." There are then only two ways out: traversable wormholes connecting the different parallel universes, and "signal nonlocality", as described by Antony Valentini, a scientist at the Perimeter Institute. In a critical review of The Life of the Cosmos, astrophysicist Joe Silk suggested that our universe falls short by about four orders of magnitude from being maximal for the production of black holes. 
In his book Questions of Truth, particle physicist John Polkinghorne puts forward another difficulty with Smolin's thesis: one cannot impose the consistent multiversal time required to make the evolutionary dynamics work, since short-lived universes with few descendants would then dominate long-lived universes with many descendants. Smolin responded to these criticisms in Life of the Cosmos and later scientific papers. When Smolin published the theory in 1992, he proposed as a prediction of his theory that no neutron star should exist with a mass of more than 1.6 times the mass of the sun. Later this figure was raised to two solar masses following more precise modeling of neutron star interiors by nuclear astrophysicists. If a more massive neutron star was ever observed, it would show that our universe's natural laws were not tuned for maximal black hole production, because the mass of the strange quark could be retuned to lower the mass threshold for production of a black hole. A 1.97-solar-mass pulsar was discovered in 2010. In 2019, neutron star PSR J0740+6620 was discovered with a solar-mass of 2.08 ±.07. In 1992 Smolin also predicted that inflation, if true, must only be in its simplest form, governed by a single field and parameter. This idea was further studied by Nikodem Poplawski. See also Black hole cosmology Biocosm Anthropic principle Quantum gravity General relativity Quantum mechanics Lee Smolin Fine-tuned universe References External links Cosmological Natural Selection— Underscores the coincidence of the constants being tuned for biological life as well as for black holes. Challenges the notion of "coincidence" in this context. Scientific Alternatives to the Anthropic Principle Cosmic natural selection - Leonard Susskind's criticism of this idea Physical cosmology
Cosmological natural selection
[ "Physics", "Astronomy" ]
1,056
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
2,974,121
https://en.wikipedia.org/wiki/Higgs%20phase
In theoretical physics, it is often important to consider gauge theory that admits many physical phenomena and "phases", connected by phase transitions, in which the vacuum may be found. Global symmetries in a gauge theory may be broken by the Higgs mechanism. In more general theories such as those relevant in string theory, there are often many Higgs fields that transform in different representations of the gauge group. If they transform in the adjoint representation or a similar representation, the original gauge symmetry is typically broken to a product of U(1) factors. Because U(1) describes electromagnetism including the Coulomb field, the corresponding phase is called a Coulomb phase. If the Higgs fields that induce the spontaneous symmetry breaking transform in other representations, the Higgs mechanism often breaks the gauge group completely and no U(1) factors are left. In this case, the corresponding vacuum expectation values describe a Higgs phase. Using the representation of a gauge theory in terms of a D-brane, for example D4-brane combined with D0-branes, the Coulomb phase describes D0-branes that have left the D4-branes and carry their own independent U(1) symmetries. The Higgs phase describes D0-branes dissolved in the D4-branes as instantons. References Gauge theories
Higgs phase
[ "Physics" ]
282
[ "Quantum mechanics", "Quantum physics stubs" ]
2,974,577
https://en.wikipedia.org/wiki/Added%20mass
In fluid mechanics, added mass or virtual mass is the inertia added to a system because an accelerating or decelerating body must move (or deflect) some volume of surrounding fluid as it moves through it. Added mass is a common issue because the object and surrounding fluid cannot occupy the same physical space simultaneously. For simplicity this can be modeled as some volume of fluid moving with the object, though in reality "all" the fluid will be accelerated, to various degrees. The dimensionless added mass coefficient is the added mass divided by the displaced fluid mass – i.e. divided by the fluid density times the volume of the body. In general, the added mass is a second-order tensor, relating the fluid acceleration vector to the resulting force vector on the body. Background Friedrich Wilhelm Bessel proposed the concept of added mass in 1828 to describe the motion of a pendulum in a fluid. The period of such a pendulum increased relative to its period in a vacuum (even after accounting for buoyancy effects), indicating that the surrounding fluid increased the effective mass of the system. The concept of added mass is arguably the first example of renormalization in physics. The concept can also be thought of as a classical physics analogue of the quantum mechanical concept of quasiparticles. It is, however, not to be confused with relativistic mass increase. It is often erroneously stated that the added mass is determined by the momentum of the fluid. That this is not the case, it becomes clear when considering the case of the fluid in a large box, where the fluid momentum is exactly zero at every moment of time. The added mass is actually determined by the quasi-momentum: the added mass times the body acceleration is equal to the time derivative of the fluid quasi-momentum. Virtual mass force Unsteady forces due to a change of the relative velocity of a body submerged in a fluid can be divided into two parts: the virtual mass effect and the Basset force. The origin of the force is that the fluid will gain kinetic energy at the expense of the work done by an accelerating submerged body. It can be shown that the virtual mass force, for a spherical particle submerged in an inviscid, incompressible fluid is where bold symbols denote vectors, is the fluid flow velocity, is the spherical particle velocity, is the mass density of the fluid (continuous phase), is the volume of the particle, and D/Dt denotes the material derivative. The origin of the notion "virtual mass" becomes evident when we take a look at the momentum equation for the particle. where is the sum of all other force terms on the particle, such as gravity, pressure gradient, drag, lift, Basset force, etc. Moving the derivative of the particle velocity from the right hand side of the equation to the left we get so the particle is accelerated as if it had an added mass of half the fluid it displaces, and there is also an additional force contribution on the right hand side due to acceleration of the fluid. Applications The added mass can be incorporated into most physics equations by considering an effective mass as the sum of the mass and added mass. This sum is commonly known as the "virtual mass". A simple formulation of the added mass for a spherical body permits Newton's classical second law to be written in the form becomes One can show that the added mass for a sphere (of radius ) is , which is half the volume of the sphere times the density of the fluid. 
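A quick numerical check of the spherical result just stated, using round-number densities for air and water (assumed values, not from the article):

import math

def sphere_added_mass(radius_m: float, fluid_density: float) -> float:
    """Added mass of a sphere: half the displaced fluid mass, 0.5 * rho_fluid * (4/3) * pi * r^3."""
    volume = 4.0 / 3.0 * math.pi * radius_m ** 3
    return 0.5 * fluid_density * volume

# An assumed 1 mm radius air bubble rising in water.
rho_air, rho_water = 1.2, 1000.0
r = 1.0e-3
bubble_mass = rho_air * 4.0 / 3.0 * math.pi * r ** 3
added = sphere_added_mass(r, rho_water)
print(f"bubble mass = {bubble_mass:.2e} kg, added mass = {added:.2e} kg")
print(f"added mass / bubble mass = {added / bubble_mass:.0f}")

The ratio comes out near 400, consistent with the air-bubble example in the following paragraph.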
For a general body, the added mass becomes a tensor (referred to as the induced mass tensor), with components depending on the direction of motion of the body. Not all elements in the added mass tensor will have dimension mass, some will be mass × length and some will be mass × length2. All bodies accelerating in a fluid will be affected by added mass, but since the added mass is dependent on the density of the fluid, the effect is often neglected for dense bodies falling in much less dense fluids. For situations where the density of the fluid is comparable to or greater than the density of the body, the added mass can often be greater than the mass of the body and neglecting it can introduce significant errors into a calculation. For example, a spherical air bubble rising in water has a mass of but an added mass of Since water is approximately 800 times denser than air (at RTP), the added mass in this case is approximately 400 times the mass of the bubble. Naval architecture These principles also apply to ships, submarines, and offshore platforms. In the marine industry, added mass is referred to as hydrodynamic added mass. In ship design, the energy required to accelerate the added mass must be taken into account when performing a sea keeping analysis. For ships, the added mass can easily reach one fourth or one third of the mass of the ship and therefore represents a significant inertia, in addition to frictional and wavemaking drag forces. For certain geometries freely sinking through a column of water, hydrodynamic added mass associated with the sinking body can be much larger than the mass of the object. This situation can occur, for instance, when the sinking body has a large flat surface with its normal vector pointed in the direction of motion (downward). A substantial amount of kinetic energy is released when such an object is abruptly decelerated (e.g., due to an impact with the seabed). In the offshore industry hydrodynamic added mass of different geometries are the subject of considerable investigation. These studies typically are required as input to subsea dropped object risk assessments (studies focused on quantifying risk of dropped object impacts to subsea infrastructure). As hydrodynamic added mass can make up a significant proportion of a sinking object's total mass at the instant of impact, it significantly influences the design resistance considered for subsea protection structures. Proximity to a boundary (or another object) can influence the quantity of hydrodynamic added mass. This means that added mass depends on both the object geometry and its proximity to a boundary. For floating bodies (e.g., ships/vessels) this means that the response of the floating body (i.e., due to wave action) is altered in finite water depths (the effect is virtually nonexistent in deep water). The specific depth (or proximity to a boundary) at which the hydrodynamic added mass is affected depends on the body's geometry and location and shape of a boundary (e.g., a dock, seawall, bulkhead, or the seabed). The hydrodynamic added mass associated with a freely sinking object near a boundary is similar to that of a floating body. In general, hydrodynamic added mass increases as the distance between a boundary and a body decreases. This characteristic is important when planning subsea installations or predicting the motion of a floating body in shallow water conditions. 
Aeronautics In aircraft (other than lighter-than-air balloons and blimps), the added mass is not usually taken into account because the density of the air is so small. Hydraulic structures Hydraulic structures like weirs or locks often contain moveable steel structures like valves or gates, which are submerged under water. These steel structures are often constructed with thin steel plates mounted on girders. When the steel structures are accelerated or decelerated, substantial amounts of water are moved, too. This added mass must e.g. be taken into account when designing the drive systems for these steel structures. See also Basset force for describing the effect of the body's relative motion history on the viscous forces in a Stokes flow Basset–Boussinesq–Oseen equation for the description of the motion of – and forces on – a particle moving in an unsteady flow at low Reynolds numbers Darwin drift for the relation between added mass and the Darwin drift volume Keulegan–Carpenter number for a dimensionless parameter giving the relative importance of the drag force to inertia in wave loading Morison equation for an empirical force model in wave loading, involving added mass and drag Response Amplitude Operator for the use of added mass in ship design References External links MIT OpenCourse Ware Naval Civil Engineering Laboratory Det Norske Veritas DNV-RP-H103 Modelling And Analysis Of Marine Operations Fluid dynamics
Added mass
[ "Chemistry", "Engineering" ]
1,704
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
2,975,044
https://en.wikipedia.org/wiki/Polish%20units%20of%20measurement
The traditional Polish units of measurement included two uniform yet distinct systems of weights and measures, as well as a number of related systems borrowed from neighbouring states. The first attempt at standardisation came with the introduction of the Old Polish measurement [system], also dubbed the Warsaw system, introduced by a royal decree of December 6, 1764. The system was later replaced by the New Polish measurement [system] introduced on January 1, 1819. The traditional Polish systems of weights and measures were later replaced with those of surrounding nations (due to the Partitions of Poland), only to be replaced with metric system by the end of the 19th century (between 1872 and 1876). History Historic weights and measures The first recorded weights and measures used in Poland were related to dimensions of human body, hence the most basic measures in use were sążeń (fathom), łokieć (ell), piędź (span), stopa (foot) and skok (jump). With time trade relations with the neighbouring nations brought to use additional units, with names often borrowed from German, Arabic or Czech. From Middle Ages until the 18th century, there was no single system of measurement used in all of Poland. Traditional units like stopa (foot) or łokieć (ell) were used throughout the country, but their meaning differed from region to region. Most major cities in the area used their own systems of measurement, which were used in the surrounding areas as well. Among the commonly used systems were Austrian, Galician, Danzig, Kraków, Prussian, Russian and Breslau. The matter was further complicated by the fact that Austrian or German systems were hardly uniform either and differed from town to town. Furthermore, the systems tended to evolve over time: in the 13th century the Kraków's ell was equivalent to 64.66 centimetres, a century later it was equivalent to 62.5 cm, then in the 16th century it shrunk to 58.6 cm and finally was equalled to standard "old Polish ell" of 59.6 cm only in 1836. To add to the confusion, various goods were traditionally measured with different units, often incompatible or difficult to convert. For instance, beer was sold in units named achtel (0.5 of barrel, that is 62 Kraków gallons of 2.75 litres each). However honey and mead were recorded for tax purposes in units named rączka (slightly more than 10 Kraków gallons). As the weights and measures were important in everyday life of merchants, in 1420 the royal decree allowed each voivode to create and maintain a single system used in his voivodeship. This law was later confirmed by a Sejm act of 1565. Steel or copper rods used as local standard of ell (basic unit of length) were created in a voivode's capital and then dispatched to all nearby towns, where they were further duplicated for everyday use. One bar was to be stored in the town hall for comparison, while additional rods were stored in the gatehouses or toll points to be borrowed by merchants as needed. Damaging or losing a rod was punishable by law. Measuring time Outside of this set of systems was the measurement of time. As clock towers only started to appear in late Middle Ages, and their usability was limited to within a small radius, some basic substitutes for modern minutes and hours were developed, based on Christian prayers. The pacierz (or paternoster) was a non-standard unit of time comprising some 25 seconds, that is enough time to recite the Lord's Prayer. 
Similarly, zdrowaśka (from Zdrowaś Mario, the first words of the Hail Mary) was used, as was the Rosary (różaniec) that is the time needed to recite Hail Mary 50 times (roughly 16 minutes). Those units were never strictly defined, but is used in rural areas of Poland even today. Early attempts at standardisation While this system introduced some level of standardisation throughout the country, the systems used in various voivodeships still differed from one another. To counter this problem the Kraków ell and Poznań ell were made equal in 1507. The same applied to ells used in Lwów and Lublin, which however were different from those in Kraków and Poznań. In 1532 the Płock ell was aligned with the Kraków ell, which in 1565 was declared an official ell to be used in all of the Crown of Poland. The system used by Warsaw was adopted in Płock and all of Masovia in 1569. In 1613 additional systems were created for Vilnius and Kaunas. The standardisation of other units of measurement also made some progress since the 15th century, but at a different pace. In the end this created even more confusion, as two towns could use the same units of length, but two different units of weight, although using the same terms. 1764 reform - the Old Polish system As until then not only different units varied from town to town but also their relation to one another, in 1764 a major overhaul of the measurement system was prepared. By a royal decree of December 6, 1764 all units of measurement were to be converted to a new system, common to all of Poland and its dependencies. The system relied on previously used units, but introduced a common, unified system of relations between them. It had no official name and it was not until the 19th century when it started to be called the Old Polish system (miary staropolskie, or Old-Polish measures), in contrast to the new system introduced then. The basic unit of length - the ell or łokieć in Polish - was set to 0.5955 metres. For trade and everyday use it was further subdivided into the foot (stopa, ≈29.78 centimetres); sztych (≈19.86cm); quarter (ćwierć, ≈14.89cm); palm (dłoń, ≈7.44cm); and inch (cal, ≈2.48 centimetres), or gathered into the fathom (sążeń, 3 ells or 1.787 metres in length), such that:1 ell = 2 feet = 3 sztychs = 4 quarters = 8 palms = 24 inches ( = ⅓ of a fathom ).A different system of units, although complementary and interchangeable, was used in measuring lengths for agrarian purposes. The basic unit was a step (krok), equalling 3.75 of standard ell, or 2.2333 metres. Two steps made a rod (pręt, 4.4665 metres), 2 rods made a stick (laska), and five sticks were equal to a cable (sznur of 44.665 metres). Finally 3 cables made up a furlong (staje) of roughly 134 metres. In measuring the distance between cities, the basic unit was staje, although it was different from the staje mentioned before and had the length of roughly 893 metres. Eight staje made up a Polish mile of 7144 metres. The weights were based on the (funt of 0.4052 kg) composed of two grzywnas, each in turn comprising 16 lots (łut of 0.0127 kg). For heavier goods the basic units were a stone (kamień, 32 pounds or 12.976 kg) and Hundredweight (cetnar, five stones or 64.80 kg). There were two sets of units of volume: one for fluids and the other for dry goods. Both used the gallon (garniec) of 3.7689 litres as the basic unit. This was subdivided into 4 quarts () of 0.9422 L or 16 . 
For dry goods four gallons comprised a measure (), 2 measures comprised a quarter (), 4 quarters comprised a bushel () of 120.6 L, and 30 bushels comprised a last () of 3618 L. For fluids, 5 gallons comprised a konew of 18.8445 L and 14.4 konew made up a barrel of 271.36 L. Current use Though the traditional systems were officially abandoned in the 19th century, traces of their use, especially in rural areas, were found by ethnographers as late as 1969. Length Krok (:pl:Krok (miara)) Ławka (:pl:Ławka (jednostka długości)) Łokieć (:pl:Łokieć (miara)) Piędź (:pl:Piędź) Staje (:pl:Staje) Stopa (:pl:Stopa (miara)) Area Łan (:pl:Łan (miara powierzchni)) Morga (:pl:Morga) Staje (:pl:Staje) Włóka (:pl:Włóka (miara powierzchni)) Źreb (:pl:Źreb) Volume Garniec (:pl:Garniec) Korzec (:pl:Korzec) Łaszt (:pl:Łaszt) Mass and monetary units Grzywna (:pl:Grzywna (ekonomia); :pl:Grzywna (jednostka miar)) Kamień (:pl:Kamień (miara)) Kwarta (:pl:Kwarta (jednostka wagowa)) Kwartnik (:pl:Kwartnik) Łut (:pl:Łut) Skojec (:pl:Skojec) Wiardunek (:pl:Wiardunek) Time Pacierz (:pl:Pacierz) Zdrowaśka (:pl:Zdrowaśka) References Polish 1760s establishments in the Polish–Lithuanian Commonwealth Science and technology in Poland Culture of Poland Polish Poland 1810s establishments in Poland
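The 1764 length relations above (1 ell = 0.5955 m; 1 ell = 2 feet = 3 sztychs = 4 quarters = 8 palms = 24 inches; 1 fathom = 3 ells) lend themselves to a small conversion table. The Python sketch below encodes only the values stated in the text; it is an illustrative aid, not an authoritative metrological table.

# Old Polish (1764) units of length expressed in metres, from the relations in the text.
ELL_M = 0.5955
LENGTH_UNITS_M = {
    "łokieć (ell)": ELL_M,
    "stopa (foot)": ELL_M / 2,       # 2 feet per ell
    "sztych": ELL_M / 3,             # 3 sztychs per ell
    "ćwierć (quarter)": ELL_M / 4,   # 4 quarters per ell
    "dłoń (palm)": ELL_M / 8,        # 8 palms per ell
    "cal (inch)": ELL_M / 24,        # 24 inches per ell
    "sążeń (fathom)": 3 * ELL_M,     # 3 ells per fathom
}

def to_metres(value: float, unit: str) -> float:
    """Convert a length given in an Old Polish unit to metres."""
    return value * LENGTH_UNITS_M[unit]

for unit, metres in LENGTH_UNITS_M.items():
    print(f"1 {unit:18s} = {metres:.4f} m")
print(f"10 fathoms = {to_metres(10, 'sążeń (fathom)'):.3f} m")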
Polish units of measurement
[ "Mathematics" ]
2,069
[ "Obsolete units of measurement", "Systems of units", "Units of measurement by country", "Quantity", "Units of measurement" ]
6,950,441
https://en.wikipedia.org/wiki/Hydroxamic%20acid
In organic chemistry, hydroxamic acids are a class of organic compounds having a general formula bearing the functional group , where R and R' are typically organyl groups (e.g., alkyl or aryl) or hydrogen. They are amides () wherein the nitrogen atom has a hydroxyl () substituent. They are often used as metal chelators. Common example of hydroxamic acid is aceto-N-methylhydroxamic acid (). Some uncommon examples of hydroxamic acids are formo-N-chlorohydroxamic acid () and chloroformo-N-methylhydroxamic acid (). Synthesis and reactions Hydroxamic acids are usually prepared from either esters or acid chlorides by a reaction with hydroxylamine salts. For the synthesis of benzohydroxamic acid ( or , where Ph is phenyl group), the overall equation is: Hydroxamic acids can also be synthesized from aldehydes and N-sulfonylhydroxylamine via the Angeli-Rimini reaction. Alternatively, molybdenum oxide diperoxide oxidizes trimethylsilated amides to hydroxamic acids, although yields are only about 50%. In a variation on the Nef reaction, primary nitro compounds kept in an acidic solution (to minimize the nitronate tautomer) hydrolyze to a hydroxamic acid. A well-known reaction of hydroxamic acid esters is the Lossen rearrangement. Coordination chemistry and biochemistry The conjugate base of hydroxamic acids forms is called a hydroxamate. Deprotonation occurs at the group, with the hydrogen atom being removed, resulting in a hydroxamate anion . The resulting conjugate base presents the metal with an anionic, conjugated O,O chelating ligand. Many hydroxamic acids and many iron hydroxamates have been isolated from natural sources. They function as ligands, usually for iron. Nature has evolved families of hydroxamic acids to function as iron-binding compounds (siderophores) in bacteria. They extract iron(III) from otherwise insoluble sources (rust, minerals, etc.). The resulting complexes are transported into the cell, where the iron is extracted and utilized metabolically. Ligands derived from hydroxamic acid and thiohydroxamic acid (a hydroxamic acid where one or both oxygens in the functional group are replaced by sulfur) also form strong complexes with lead(II). Other uses and occurrences Hydroxamic acids are used extensively in flotation of rare earth minerals during the concentration and extraction of ores to be subjected to further processing. Some hydroxamic acids (e.g. vorinostat, belinostat, panobinostat, and trichostatin A) are HDAC inhibitors with anti-cancer properties. Fosmidomycin is a natural hydroxamic acid inhibitor of 1-deoxy-D-xylulose-5-phosphate reductoisomerase (DXP reductoisomerase). Hydroxamic acids have also been investigated for reprocessing of irradiated fuel. References Further reading Functional groups
Hydroxamic acid
[ "Chemistry" ]
694
[ "Organic compounds", "Functional groups", "Hydroxamic acids" ]
6,953,458
https://en.wikipedia.org/wiki/Hadley%20cell
The Hadley cell, also known as the Hadley circulation, is a global-scale tropical atmospheric circulation that features air rising near the equator, flowing poleward near the tropopause at a height of above the Earth's surface, cooling and descending in the subtropics at around 25 degrees latitude, and then returning equatorward near the surface. It is a thermally direct circulation within the troposphere that emerges due to differences in insolation and heating between the tropics and the subtropics. On a yearly average, the circulation is characterized by a circulation cell on each side of the equator. The Southern Hemisphere Hadley cell is slightly stronger on average than its northern counterpart, extending slightly beyond the equator into the Northern Hemisphere. During the summer and winter months, the Hadley circulation is dominated by a single, cross-equatorial cell with air rising in the summer hemisphere and sinking in the winter hemisphere. Analogous circulations may occur in extraterrestrial atmospheres, such as on Venus and Mars. Global climate is greatly influenced by the structure and behavior of the Hadley circulation. The prevailing trade winds are a manifestation of the lower branches of the Hadley circulation, converging air and moisture in the tropics to form the Intertropical Convergence Zone (ITCZ) where the Earth's heaviest rains are located. Shifts in the ITCZ associated with the seasonal variability of the Hadley circulation cause monsoons. The sinking branches of the Hadley cells give rise to the oceanic subtropical ridges and suppress rainfall; many of the Earth's deserts and arid regions are located in the subtropics coincident with the position of the sinking branches. The Hadley circulation is also a key mechanism for the meridional transport of heat, angular momentum, and moisture, contributing to the subtropical jet stream, the moist tropics, and maintaining a global thermal equilibrium. The Hadley circulation is named after George Hadley, who in 1735 postulated the existence of hemisphere-spanning circulation cells driven by differences in heating to explain the trade winds. Other scientists later developed similar arguments or critiqued Hadley's qualitative theory, providing more rigorous explanations and formalism. The existence of a broad meridional circulation of the type suggested by Hadley was confirmed in the mid-20th century once routine observations of the upper troposphere became available via radiosondes. Observations and climate modelling indicate that the Hadley circulation has expanded poleward since at least the 1980s as a result of climate change, with an accompanying but less certain intensification of the circulation; these changes have been associated with trends in regional weather patterns. Model projections suggest that the circulation will widen and weaken throughout the 21st century due to climate change. Mechanism and characteristics The Hadley circulation describes the broad, thermally direct, and meridional overturning of air within the troposphere over the low latitudes. Within the global atmospheric circulation, the meridional flow of air averaged along lines of latitude are organized into circulations of rising and sinking motions coupled with the equatorward or poleward movement of air called meridional cells. These include the prominent "Hadley cells" centered over the tropics and the weaker "Ferrell cells" centered over the mid-latitudes. 
The Hadley cells result from the contrast of insolation between the warm equatorial regions and the cooler subtropical regions. The uneven heating of Earth's surface results in regions of rising and descending air. Over the course of a year, the equatorial regions absorb more radiation from the Sun than they radiate away. At higher latitudes, the Earth emits more radiation than it receives from the Sun. Without a mechanism to exchange heat meridionally, the equatorial regions would warm and the higher latitudes would cool progressively in disequilibrium. The broad ascent and descent of air results in a pressure gradient force that drives the Hadley circulation and other large-scale flows in both the atmosphere and the ocean, distributing heat and maintaining a global long-term and subseasonal thermal equilibrium. The Hadley circulation covers almost half of the Earth's surface area, spanning from roughly the Tropic of Cancer to the Tropic of Capricorn. Vertically, the circulation occupies the entire depth of the troposphere. The Hadley cells comprising the circulation consist of air carried equatorward by the trade winds in the lower troposphere that ascends when heated near the equator, along with air moving poleward in the upper troposphere. Air that is moved into the subtropics cools and then sinks before returning equatorward to the tropics; the position of the sinking air associated with the Hadley cell is often used as a measure of the meridional width of the global tropics. The equatorward return of air and the strong influence of heating make the Hadley cell a thermally-driven and enclosed circulation. Due to the buoyant rise of air near the equator and the sinking of air at higher latitudes, a pressure gradient develops near the surface with lower pressures near the equator and higher pressures in the subtropics; this provides the motive force for the equatorward flow in the lower troposphere. However, the release of latent heat associated with condensation in the tropics also relaxes the decrease in pressure with height, resulting in higher pressures aloft in the tropics compared to the subtropics for a given height in the upper troposphere; this pressure gradient is stronger than its near-surface counterpart and provides the motive force for the poleward flow in the upper troposphere. Hadley cells are most commonly identified using the mass-weighted, zonally-averaged stream function of meridional winds, but they can also be identified by other measurable or derivable physical parameters such as velocity potential or the vertical component of wind at a particular pressure level. Given the latitude and the pressure level , the Stokes stream function characterizing the Hadley circulation is given by where is the radius of Earth, is the acceleration due to the gravity of Earth, and is the zonally averaged meridional wind at the prescribed latitude and pressure level. The value of gives the integrated meridional mass flux between the specified pressure level and the top of the Earth's atmosphere, with positive values indicating northward mass transport. The strength of the Hadley cells can be quantified based on including the maximum and minimum values or averages of the stream function both overall and at various pressure levels. Hadley cell intensity can also be assessed using other physical quantities such as the velocity potential, vertical component of wind, transport of water vapor, or total energy of the circulation. 
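Hadley cell strength is commonly diagnosed from the stream function just described. The Python sketch below assumes the standard mass-weighted form, Ψ(φ, p) = (2πa·cos φ / g) ∫ [v] dp′ integrated downward from the top of the atmosphere, and applies it to a made-up meridional wind profile; neither the profile nor the exact integration limits are taken from the article.

import numpy as np

A_EARTH = 6.371e6   # Earth radius, m
G = 9.81            # gravitational acceleration, m/s^2

def stream_function(lat_deg, pressure_pa, v_zonal_mean):
    """Psi(lat, p) = (2*pi*a*cos(lat)/g) * integral of [v] dp' from the top level down (kg/s).

    pressure_pa: 1-D increasing array of pressure levels (Pa), top of atmosphere to surface.
    v_zonal_mean: zonally averaged meridional wind [v] (m/s) on those levels.
    """
    # Trapezoidal cumulative integral of [v] with respect to pressure.
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (v_zonal_mean[1:] + v_zonal_mean[:-1]) * np.diff(pressure_pa))))
    return 2.0 * np.pi * A_EARTH * np.cos(np.radians(lat_deg)) / G * integral

# Illustrative profile at 15 N: poleward flow aloft, equatorward flow near the surface.
p = np.linspace(10000.0, 100000.0, 10)          # 100 hPa to 1000 hPa
v = np.array([2.5, 2.0, 1.0, 0.3, 0.0, -0.3, -1.0, -1.5, -2.0, -2.5])
psi = stream_function(15.0, p, v)
print(f"max |Psi| = {np.max(np.abs(psi)):.2e} kg/s")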
Structure and components The structure of the Hadley circulation and its components can be inferred by graphing zonal and temporal averages of global winds throughout the troposphere. At shorter timescales, individual weather systems perturb wind flow. Although the structure of the Hadley circulation varies seasonally, when winds are averaged annually (from an Eulerian perspective) the Hadley circulation is roughly symmetric and composed of two similar Hadley cells with one in each of the northern and southern hemispheres, sharing a common region of ascending air near the equator; however, the Southern Hemisphere Hadley cell is stronger. The winds associated with the annually-averaged Hadley circulation are on the order of . However, when averaging the motions of air parcels as opposed to the winds at fixed locations (a Lagrangian perspective), the Hadley circulation manifests as a broader circulation that extends farther poleward. Each Hadley cell can be described by four primary branches of airflow within the tropics: An equatorward, lower branch within the planetary boundary layer An ascending branch near the equator A poleward, upper branch in the upper troposphere A descending branch in the subtropics The trade winds in the low-latitudes of both Earth's northern and southern hemispheres converge air towards the equator, producing a belt of low atmospheric pressure exhibiting abundant storms and heavy rainfall known as the Intertropical Convergence Zone (ITCZ). This equatorward movement of air near the Earth's surface constitutes the lower branch of the Hadley cell. The position of the ITCZ is influenced by the warmth of sea surface temperatures (SST) near the equator and the strength of cross-equatorial pressure gradients. In general, the ITCZ is located near the equator or is offset towards the summer hemisphere where the warmest SSTs are located. On an annual average, the rising branch of the Hadley circulation is slightly offset towards the Northern Hemisphere, away from the equator. Due to the Coriolis force, the trade winds deflect opposite the direction of Earth's rotation, blowing partially westward rather than directly equatorward in both hemispheres. The lower branch accrues moisture resulting from evaporation across Earth's tropical oceans. A warmer environment and converging winds force the moistened air to ascend near the equator, resulting in the rising branch of the Hadley cell. The upward motion is further enhanced by the release of latent heat as the uplift of moist air results in an equatorial band of condensation and precipitation. The Hadley circulation's upward branch largely occurs in thunderstorms occupying only around one percent of the surface area of the tropics. The transport of heat in the Hadley circulation's ascending branch is accomplished most efficiently by hot towerscumulonimbus clouds bearing strong updrafts that do not mix in drier air commonly found in the middle troposphere and thus allow the movement of air from the highly moist tropical lower troposphere into the upper troposphere. Approximately 1,500–5,000 hot towers daily near the ITCZ region are required to sustain the vertical heat transport exhibited by the Hadley circulation. The ascent of air rises into the upper troposphere to a height of , after which air diverges outward from the ITCZ and towards the poles. The top of the Hadley cell is set by the height of the tropopause as the stable stratosphere above prevents the continued ascent of air. 
Air arising from the low latitudes has higher absolute angular momentum about Earth's axis of rotation. The distance between the atmosphere and Earth's axis decreases poleward; to conserve angular momentum, poleward-moving air parcels must accelerate eastward. The Coriolis effect limits the poleward extent of the Hadley circulation, accelerating air in the direction of the Earth's rotation and forming a jet stream directed zonally rather than continuing the poleward flow of air at each Hadley cell's poleward boundary. Considering only the conservation of angular momentum, a parcel of air at rest along the equator would accelerate to a zonal speed of by the time it reached 30° latitude. However, small-scale turbulence along the parcel's poleward trek and large-scale eddies in the mid-latitudes dissipate angular momentum. The jet associated with the Southern Hemisphere Hadley cell is stronger than its northern counterpart due to the stronger intensity of the Southern Hemisphere cell. The cooler conditions at higher latitudes lead to the cooling of air parcels, which causes the poleward-moving air to eventually descend. When the movement of air is averaged annually, the descending branch of the Hadley cell is located roughly over the 25th parallel north and the 25th parallel south. The moisture in the subtropics is then partly advected poleward by eddies and partly advected equatorward by the lower branch of the Hadley cell, where it is later brought towards the ITCZ. Although the zonally-averaged Hadley cell is organized into four main branches, these branches are aggregations of more concentrated air flows and regions of mass transport. Several theories and physical models have attempted to explain the latitudinal width of the Hadley cell. The Held–Hou model provides one theoretical constraint on the meridional extent of the Hadley cells. By assuming a simplified atmosphere composed of a lower layer subject to friction from the Earth's surface and an upper layer free from friction, the model predicts that the Hadley circulation would be restricted to within of the equator if parcels do not have any net heating within the circulation. According to the Held–Hou model, the latitude φ_H of the Hadley cell's poleward edge scales as φ_H ∝ (g H Δθ / (Ω² a² θ₀))^(1/2), where Δθ is the difference in potential temperature between the equator and the pole in radiative equilibrium, H is the height of the tropopause, Ω is the Earth's rotation rate, and θ₀ is a reference potential temperature (with g and a the gravitational acceleration and radius of Earth as before). Other compatible models posit that the width of the Hadley cell may scale with other physical parameters such as the vertically-averaged Brunt–Väisälä frequency in the troposphere or the growth rate of baroclinic waves shed by the cell. Seasonality and variability The Hadley circulation varies considerably with seasonal changes. Around the equinox during the spring and autumn for either the northern or southern hemisphere, the Hadley circulation takes the form of two relatively weaker Hadley cells in both hemispheres, sharing a common region of ascent over the ITCZ and moving air aloft towards each cell's respective hemisphere. However, closer to the solstices, the Hadley circulation transitions into a more singular and stronger cross-equatorial Hadley cell with air rising in the summer hemisphere and broadly descending in the winter hemisphere. The transition between the two-cell and single-cell configuration is abrupt, and during most of the year the Hadley circulation is characterized by a single dominant Hadley cell that transports air across the equator.
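The two quantitative statements above (the zonal wind an angular-momentum-conserving parcel would acquire away from the equator, and the Held–Hou scaling for the cell's poleward edge) can be evaluated numerically. The sketch below is illustrative only; the 5/3 prefactor and all parameter values are assumptions made for demonstration rather than figures taken from this article.

```python
# Illustrative sketch: (1) zonal wind from angular-momentum conservation for a parcel
# starting at rest on the equator, and (2) a Held-Hou-type estimate of the Hadley
# cell's poleward edge. The prefactor 5/3 and all numbers below are assumptions.
import numpy as np

omega = 7.292e-5   # Earth's rotation rate (s^-1)
a = 6.371e6        # Earth's radius (m)
g = 9.81           # gravitational acceleration (m s^-2)

def zonal_wind_am(lat_deg):
    """u(phi) = omega * a * sin(phi)^2 / cos(phi), from conservation of absolute angular momentum."""
    phi = np.deg2rad(lat_deg)
    return omega * a * np.sin(phi) ** 2 / np.cos(phi)

def held_hou_edge(delta_theta=40.0, H=15e3, theta0=300.0):
    """Poleward edge (rad): sqrt(5*g*H*delta_theta / (3*omega^2*a^2*theta0)); values are illustrative."""
    return np.sqrt(5.0 * g * H * delta_theta / (3.0 * omega ** 2 * a ** 2 * theta0))

print(f"Angular-momentum-conserving wind at 30 deg: {zonal_wind_am(30.0):.0f} m/s")
print(f"Held-Hou poleward edge estimate: {np.rad2deg(held_hou_edge()):.0f} deg latitude")
```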
In this configuration, the ascending branch is located in the tropical latitudes of the warmer summer hemisphere and the descending branch is positioned in the subtropics of the cooler winter hemisphere. Two cells are still present, one in each hemisphere, though the winter hemisphere's cell becomes much more prominent while the summer hemisphere's cell becomes displaced poleward. The intensification of the winter hemisphere's cell is associated with a steepening of gradients in geopotential height, leading to an acceleration of trade winds and stronger meridional flows. The presence of continents relaxes temperature gradients in the summer hemisphere, accentuating the contrast between the hemispheric Hadley cells. Reanalysis data from 1979–2001 indicated that the dominant Hadley cell in boreal summer extended from 13°S to 31°N on average. In both boreal and austral winters, the Indian Ocean and the western Pacific Ocean contribute most to the rising and sinking motions in the zonally-averaged Hadley circulation. However, vertical flows over Africa and the Americas are more marked in boreal winter. At longer interannual timescales, variations in the Hadley circulation are associated with variations in the El Niño–Southern Oscillation (ENSO), which impacts the positioning of the ascending branch; the response of the circulation to ENSO is non-linear, with a more marked response to El Niño events than La Niña events. During El Niño, the Hadley circulation strengthens due to the increased warmth of the upper troposphere over the tropical Pacific and the resultant intensification of poleward flow. However, these changes are not zonally uniform; during the same events, the Hadley cells over the western Pacific and the Atlantic are weakened. During the Atlantic Niño, the circulation over the Atlantic is intensified. The Atlantic circulation is also enhanced during periods when the North Atlantic oscillation is strongly positive. The variation in the seasonally-averaged and annually-averaged Hadley circulation from year to year is largely accounted for by two juxtaposed modes of oscillation: an equatorially asymmetric mode characterized by a single cell straddling the equator and an equatorially symmetric mode characterized by two cells on either side of the equator. Energetics and transport The Hadley cell is an important mechanism by which moisture and energy are transported both between the tropics and subtropics and between the northern and southern hemispheres. However, it is not an efficient transporter of energy due to the opposing flows of the lower and upper branch, with the lower branch transporting sensible and latent heat equatorward and the upper branch transporting potential energy poleward. The resulting net energy transport poleward represents around 10 percent of the overall energy transport involved in the Hadley cell. The descending branch of the Hadley cell generates clear skies and a surplus of evaporation relative to precipitation in the subtropics. The lower branch of the Hadley circulation accomplishes most of the transport of the excess water vapor accumulated in the subtropical atmosphere towards the equatorial region. The strong Southern Hemisphere Hadley cell relative to its northern counterpart leads to a small net energy transport from the northern to the southern hemisphere; as a result, the transport of energy at the equator is directed southward on average, with an annual net transport of around 0.1 PW.
In contrast to the higher latitudes where eddies are the dominant mechanism for transporting energy poleward, the meridional flows imposed by the Hadley circulation are the primary mechanism for poleward energy transport in the tropics. As a thermally direct circulation, the Hadley circulation converts available potential energy to the kinetic energy of horizontal winds. Based on data from January 1979 to December 2010, the Hadley circulation has an average power output of 198 TW, with maxima in January and August and minima in May and October. Although the stability of the tropopause largely limits the movement of air from the troposphere to the stratosphere, some tropospheric air penetrates into the stratosphere via the Hadley cells. The Hadley circulation may be idealized as a heat engine converting heat energy into mechanical energy. As air moves towards the equator near the Earth's surface, it accumulates entropy from the surface either by direct heating or the flux of sensible or latent heat. In the ascending branch of a Hadley cell, the ascent of air is approximately an adiabatic process with respect to the surrounding environment. However, as parcels of air move poleward in the cell's upper branch, they lose entropy by radiating heat to space at infrared wavelengths and descend in response. This radiative cooling occurs at a rate of at least 60 W m−2 and may exceed 100 W m−2 in winter. The heat accumulated during the equatorward branch of the circulation is greater than the heat lost in the upper poleward branch; the excess heat is converted into the mechanical energy that drives the movement of air. This difference in heating also results in the Hadley circulation transporting heat poleward as the air supplying the Hadley cell's upper branch has greater moist static energy than the air supplying the cell's lower branch. Within the Earth's atmosphere, the timescale at which air parcels lose heat due to radiative cooling and the timescale at which air moves along the Hadley circulation are at similar orders of magnitude, allowing the Hadley circulation to transport heat despite cooling in the circulation's upper branch. Air with high potential temperature is ultimately moved poleward in the upper troposphere while air with lower potential temperature is brought equatorward near the surface. As a result, the Hadley circulation is one mechanism by which the disequilibrium produced by uneven heating of the Earth is brought towards equilibrium. When considered as a heat engine, the thermodynamic efficiency of the Hadley circulation averaged around 2.6 percent between 1979 and 2010, with small seasonal variability. The Hadley circulation also transports planetary angular momentum poleward due to Earth's rotation. Because the trade winds are directed opposite the Earth's rotation, eastward angular momentum is transferred to the atmosphere via frictional interaction between the winds and topography. The Hadley cell then transfers this angular momentum through its upward and poleward branches. The poleward branch accelerates and is deflected east in both the northern and southern hemispheres due to the Coriolis force and the conservation of angular momentum, resulting in a zonal jet stream above the descending branch of the Hadley cell. The formation of such a jet implies the existence of a thermal wind balance supported by the amplification of temperature gradients in the jet's vicinity resulting from the Hadley circulation's poleward heat advection.
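Treating the circulation as a heat engine, the power and efficiency figures quoted above imply a rough rate of heat intake. The arithmetic below is only an illustrative cross-check using the article's own numbers; it is not an independently sourced estimate.

```python
# Back-of-the-envelope check: if the Hadley circulation produces roughly 198 TW of
# mechanical power at an average thermodynamic efficiency near 2.6 percent, the
# implied heat throughput of the "engine" follows directly.
power_output_W = 198e12     # average power output quoted above, in watts
efficiency = 0.026          # average thermodynamic efficiency quoted above
heat_intake_W = power_output_W / efficiency
print(f"Implied heat intake: {heat_intake_W / 1e15:.1f} PW")   # about 7.6 PW
```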
The subtropical jet in the upper troposphere coincides with where the Hadley cell meets the Ferrel cell. The strong wind shear accompanying the jet presents a significant source of baroclinic instability from which waves grow; the growth of these waves transfers heat and momentum polewards. Atmospheric eddies extract westerly angular momentum from the Hadley cell and transport it downward, resulting in the mid-latitude westerly winds. Formulation and discovery The broad structure and mechanism of the Hadley circulation, comprising convective cells moving air due to temperature differences in a manner influenced by the Earth's rotation, were first proposed by Edmund Halley in 1685 and George Hadley in 1735. Hadley had sought to explain the physical mechanism for the trade winds and the westerlies; the Hadley circulation and the Hadley cells are named in honor of his pioneering work. Although Hadley's ideas invoked physical concepts that would not be formalized until well after his death, his model was largely qualitative and without mathematical rigor. Hadley's formulation was later recognized by most meteorologists by the 1920s to be a simplification of more complicated atmospheric processes. The Hadley circulation may have been the first attempt to explain the global distribution of winds in Earth's atmosphere using physical processes. However, Hadley's hypothesis could not be verified without observations of winds in the upper atmosphere. Data collected by routine radiosondes beginning in the mid-20th century confirmed the existence of the Hadley circulation. Early explanations of the trade winds In the 15th and 16th centuries, observations of maritime weather conditions were of considerable importance to maritime transport. Compilations of these observations showed consistent weather conditions from year to year and significant seasonal variability. The prevalence of dry conditions and weak winds at around 30° latitude and the equatorward trade winds closer to the equator, mirrored in the northern and southern hemispheres, was apparent by 1600. Early efforts by scientists to explain aspects of global wind patterns often focused on the trade winds, as the steadiness of the winds was assumed to portend a simple physical mechanism. Galileo Galilei proposed that the trade winds resulted from the atmosphere lagging behind the Earth's faster tangential rotation speed in the low latitudes, resulting in the westward trades directed opposite of Earth's rotation. In 1685, English polymath Edmund Halley proposed at a debate organized by the Royal Society that the trade winds resulted from east to west temperature differences produced over the course of a day within the tropics. In Halley's model, as the Earth rotated, the location of maximum heating from the Sun moved west across the Earth's surface. This would cause air to rise, and by conservation of mass, Halley argued that air would be moved to the region of evacuated air, generating the trade winds. Halley's hypothesis was criticized by his friends, who noted that his model would lead to changing wind directions throughout the course of a day rather than the steady trade winds. Halley conceded in personal correspondence with John Wallis that "Your questioning my hypothesis for solving the Trade Winds makes me less confident of the truth thereof". Nonetheless, Halley's formulation was incorporated into Chambers's Encyclopaedia and La Grande Encyclopédie, becoming the most widely-known explanation for the trade winds until the early 19th century.
Though his explanation of the trade winds was incorrect, Halley correctly predicted that the surface trade winds should be accompanied by an opposing flow aloft following mass conservation. George Hadley's explanation Unsatisfied with preceding explanations for the trade winds, George Hadley proposed an alternate mechanism in 1735. Hadley's hypothesis was published in the paper "On the Cause of the General Trade Winds" in Philosophical Transactions of the Royal Society. Like Halley, Hadley's explanation viewed the trade winds as a manifestation of air moving to take the place of rising warm air. However, the region of rising air prompting this flow lay along the lower latitudes. Understanding that the tangential rotation speed of the Earth was fastest at the equator and slowed farther poleward, Hadley conjectured that as air with lower momentum from higher latitudes moved equatorward to replace the rising air, it would conserve its momentum and thus curve west. By the same token, the rising air with higher momentum would spread poleward, curving east and then sinking as it cooled to produce westerlies in the mid-latitudes. Hadley's explanation implied the existence of hemisphere-spanning circulation cells in the northern and southern hemispheres extending from the equator to the poles, though he relied on an idealization of Earth's atmosphere that lacked seasonality or the asymmetries of the oceans and continents. His model also predicted rapid easterly trade winds of around , though he argued that the action of surface friction over the course of a few days slowed the air to the observed wind speeds. Colin Maclaurin extended Hadley's model to the ocean in 1740, asserting that meridional ocean currents were subject to similar westward or eastward deflections. Hadley was not widely associated with his theory due to conflation with his older brother, John Hadley, and Halley; his theory failed to gain much traction in the scientific community for over a century due to its unintuitive explanation and the lack of validating observations. Several other natural philosophers independently forwarded explanations for the global distribution of winds soon after Hadley's 1735 proposal. In 1746, Jean le Rond d'Alembert provided a mathematical formulation for global winds, but disregarded solar heating and attributed the winds to the gravitational effects of the Sun and Moon. Immanuel Kant, also unsatisfied with Halley's explanation for the trade winds, published an explanation for the trade winds and westerlies in 1756 with similar reasoning as Hadley. In the latter part of the 18th century, Pierre-Simon Laplace developed a set of equations establishing a direct influence of Earth's rotation on wind direction. Swiss scientist Jean-André Deluc published an explanation of the trade winds in 1787 similar to Hadley's hypothesis, connecting differential heating and the Earth's rotation with the direction of the winds. English chemist John Dalton was the first to clearly credit Hadley's explanation of the trade winds to George Hadley, mentioning Hadley's work in his 1793 book Meteorological Observations and Essays. In 1837, Philosophical Magazine published a new theory of wind currents developed by Heinrich Wilhelm Dove without reference to Hadley but similarly explaining the direction of the trade winds as being influenced by the Earth's rotation. In response, Dalton later wrote a letter to the editor to the journal promoting Hadley's work. 
Dove subsequently credited Hadley so frequently that the overarching theory became known as the "Hadley–Dove principle", popularizing Hadley's explanation for the trade winds in Germany and Great Britain. Critique of Hadley's explanation The work of Gustave Coriolis, William Ferrel, Jean Bernard Foucault, and Henrik Mohn in the 19th century helped establish the Coriolis force as the mechanism for the deflection of winds due to Earth's rotation, emphasizing the conservation of angular momentum in directing flows rather than the conservation of linear momentum as Hadley suggested; Hadley's assumption led to an underestimation of the deflection by a factor of two. The acceptance of the Coriolis force in shaping global winds led to debate among German atmospheric scientists beginning in the 1870s over the completeness and validity of Hadley's explanation, which narrowly explained the behavior of initially meridional motions. Hadley's use of surface friction to explain why the trade winds were much slower than his theory would predict was seen as a key weakness in his ideas. The southwesterly motions observed in cirrus clouds at around 30°N further discounted Hadley's theory as their movement was far slower than the theory would predict when accounting for the conservation of angular momentum. In 1899, William Morris Davis, a professor of physical geography at Harvard University, gave a speech at the Royal Meteorological Society criticizing Hadley's theory for its failure to account for the transition of an initially unbalanced flow to geostrophic balance. Davis and other meteorologists in the 20th century recognized that the movement of air parcels along Hadley's envisaged circulation was sustained by a constant interplay between the pressure gradient and Coriolis forces rather than the conservation of angular momentum alone. Ultimately, while the atmospheric science community considered the general ideas of Hadley's principle valid, his explanation was viewed as a simplification of more complex physical processes. Hadley's model of the global atmospheric circulation being characterized by hemisphere-wide circulation cells was also challenged by weather observations showing a zone of high pressure in the subtropics and a belt of low pressure at around 60° latitude. This pressure distribution would imply a poleward flow near the surface in the mid-latitudes rather than an equatorward flow implied by Hadley's envisioned cells. Ferrel and James Thomson later reconciled the pressure pattern with Hadley's model by proposing a circulation cell limited to lower altitudes in the mid-latitudes and nestled within the broader, hemisphere-wide Hadley cells. Carl-Gustaf Rossby proposed in 1947 that the Hadley circulation was limited to the tropics, forming one part of a dynamically-driven and multi-celled meridional flow. Rossby's model resembled that of a similar three-celled model developed by Ferrel in 1860. Direct observation The three-celled model of the global atmospheric circulation, with Hadley's conceived circulation forming its tropical component, had been widely accepted by the meteorological community by the early 20th century. However, the Hadley cell's existence was only validated by weather observations near the surface, and its predictions of winds in the upper troposphere remained untested. The routine sampling of the upper troposphere by radiosondes that emerged in the mid-20th century confirmed the existence of meridional overturning cells in the atmosphere.
Influence on climate The Hadley circulation is one of the most important influences on global climate and planetary habitability, as well as an important transporter of angular momentum, heat, and water vapor. Hadley cells flatten the temperature gradient between the equator and the poles, making the extratropics milder. The global precipitation pattern of high precipitation in the tropics and a lack of precipitation at higher latitudes is a consequence of the positioning of the rising and sinking branches of Hadley cells, respectively. Near the equator, the ascent of humid air results in the heaviest precipitation on Earth. The periodic movement of the ITCZ and thus the seasonal variation of the Hadley circulation's rising branches produces the world's monsoons. The descending motion of air associated with the sinking branch produces surface divergence consistent with the prominence of subtropical high-pressure areas. These semipermanent regions of high pressure lie primarily over the ocean between 20° and 40° latitude. Arid conditions are associated with the descending branches of the Hadley circulation, with many of the Earth's deserts and semiarid or arid regions underlying the sinking branches of the Hadley circulation. The cloudy marine boundary layer common in the subtropics may be seeded by cloud condensation nuclei exported out of the tropics by the Hadley circulation. Effects of climate change Natural variability Paleoclimate reconstructions of trade winds and rainfall patterns suggest that the Hadley circulation changed in response to natural climate variability. During Heinrich events within the last 100,000 years, the Northern Hemisphere Hadley cell strengthened while the Southern Hemisphere Hadley cell weakened. Variation in insolation during the mid- to late-Holocene resulted in a southward migration of the Northern Hemisphere Hadley cell's ascending and descending branches closer to their present-day positions. Tree rings from the mid-latitudes of the Northern Hemisphere suggest that the historical position of the Hadley cell branches has also shifted in response to shorter oscillations, with the Northern Hemisphere descending branch moving southward during positive phases of the El Niño–Southern Oscillation and Pacific decadal oscillation and northward during the corresponding negative phases. The Hadley cells were displaced southward between 1400 and 1850, concurrent with drought in parts of the Northern Hemisphere. Hadley cell expansion and intensity changes Observed trends According to the IPCC Sixth Assessment Report (AR6), the Hadley circulation has likely expanded since at least the 1980s in response to climate change, with medium confidence in an accompanying intensification of the circulation. An expansion of the overall circulation poleward by about 0.1°–0.5° latitude per decade since the 1980s is largely accounted for by the poleward shift of the Northern Hemisphere Hadley cell, which in atmospheric reanalysis has shown a more marked expansion since 1992. However, the AR6 also reported medium confidence in the expansion of the Northern Hemisphere Hadley cell being within the range of internal variability. In contrast, the AR6 assessed that it was likely that the Southern Hemisphere Hadley cell's poleward expansion was due to anthropogenic influence; this finding was based on CMIP5 and CMIP6 climate models.
Studies have produced a large range of estimates for the rate of widening of the tropics due to the use of different metrics; estimates based on upper-tropospheric properties tend to yield a wider range of values. The degree to which the circulation has expanded varies by season, with trends in summer and autumn being larger and statistically significant in both hemispheres. The widening of the Hadley circulation has also resulted in a likely widening of the ITCZ since the 1970s. Reanalyses also suggest that the summer and autumn Hadley cells in both hemispheres have widened and that the global Hadley circulation has intensified since 1979, with a more pronounced intensification in the Northern Hemisphere. Between 1979 and 2010, the power generated by the global Hadley circulation increased by an average of 0.54 TW per year, consistent with an increased input of energy into the circulation by warming SSTs over the tropical oceans. (For comparison, the Hadley circulation's overall power ranges from 0.5 TW to 218 TW throughout the year in the Northern Hemisphere and from 32 to 204 TW in the Southern.) In contrast to reanalyses, CMIP5 climate models depict a weakening of the Hadley circulation since 1979. The magnitude of long-term changes in the circulation strength is thus uncertain due to the influence of large interannual variability and the poor representation of the distribution of latent heat release in reanalyses. The expansion of the Hadley circulation due to climate change is consistent with the Held–Hou model, which predicts that the latitudinal extent of the circulation is proportional to the square root of the height of the tropopause. Warming of the troposphere raises the tropopause height, enabling the upper poleward branch of the Hadley cells to extend farther and leading to an expansion of the cells. Results from climate models suggest that the impacts of internal variability (such as from the Pacific decadal oscillation) and of anthropogenic influence on the expansion of the Hadley circulation since the 1980s have been comparable. Human influence is most evident in the expansion of the Southern Hemisphere Hadley cell; the AR6 assessed medium confidence in associating the expansion of the Hadley circulation in both hemispheres with the added radiative forcing of greenhouse gases. Physical mechanisms and projected changes The physical processes by which the Hadley circulation expands under human influence are unclear but may be linked to the increased warming of the subtropics relative to other latitudes in both the Northern and Southern hemispheres. The enhanced subtropical warmth could enable expansion of the circulation poleward by displacing the subtropical jet and baroclinic eddies poleward. Poleward expansion of the Southern Hemisphere Hadley cell in the austral summer was attributed by the IPCC Fifth Assessment Report (AR5) to stratospheric ozone depletion based on CMIP5 model simulations, while CMIP6 simulations have not shown as clear a signal. Ozone depletion could plausibly affect the Hadley circulation through the increase of radiative cooling in the lower stratosphere; this would increase the phase speed of baroclinic eddies and displace them poleward, leading to expansion of Hadley cells. Other eddy-driven mechanisms for expanding Hadley cells have been proposed, involving changes in baroclinicity, wave breaking, and other releases of instability.
In the extratropics of the Northern Hemisphere, increasing concentrations of black carbon and tropospheric ozone may be a major forcing on that hemisphere's Hadley cell expansion in boreal summer. Projections from climate models indicate that a continued increase in the concentration of greenhouse gases would result in continued widening of the Hadley circulation. However, simulations using historical data suggest that forcing from greenhouse gases may account for about 0.1° per decade of expansion of the tropics. Although the widening of the Hadley cells due to climate change has occurred concurrent with an increase in their intensity based on atmospheric reanalyses, climate model projections generally depict a weakening circulation in tandem with a widening circulation by the end of the 21st century. A longer term increase in the concentration of carbon dioxide may lead to a weakening of the Hadley circulation as a result of the reduction of radiative cooling in the troposphere near the circulation's sinking branches. However, changes in the oceanic circulation within the tropics may attenuate changes in the intensity and width of the Hadley cells by reducing thermal contrasts. Changes to weather patterns The expansion of the Hadley circulation due to climate change is connected to changes in regional and global weather patterns. A widening of the tropics could displace the tropical rain belt, expand subtropical deserts, and exacerbate wildfires and drought. The documented shift and expansion of subtropical ridges are associated with changes in the Hadley circulation, including a westward extension of the subtropical high over the northwestern Pacific, changes in the intensity and position of the Azores High, and the poleward displacement and intensification of the subtropical high pressure belt in the Southern Hemisphere. These changes have influenced regional precipitation amounts and variability, including drying trends over southern Australia, northeastern China, and northern South Asia. The AR6 assessed limited evidence that the expansion of the Northern Hemisphere Hadley cell may have led in part to drier conditions in the subtropics and a poleward expansion of aridity during boreal summer. Precipitation changes induced by Hadley circulation changes may lead to changes in regional soil moisture, with modelling showing the most significant declines in the Mediterranean, South Africa, and the Southwestern United States. However, the concurrent effects of changing surface temperature patterns over land lead to uncertainties over the influence of Hadley cell broadening on drying over subtropical land areas. Climate modelling suggests that the shift in the position of the subtropical highs induced by Hadley cell broadening may reduce oceanic upwelling at low latitudes and enhance oceanic upwelling at high latitudes. The expansion of subtropical highs in tandem with the circulation's expansion may also entail a widening of oceanic regions of high salinity and low marine primary production. A decline in extratropical cyclones in the storm track regions in model projections is partly influenced by Hadley cell expansion. Poleward shifts in the Hadley circulation are associated with shifts in the paths of tropical cyclones in the Northern and Southern hemispheres, including a poleward trend in the locations where storms attained their peak intensity.
Extraterrestrial Hadley circulations Outside of Earth, any thermally direct circulation that circulates air meridionally across planetary-scale gradients of insolation may be described as a Hadley circulation. A terrestrial atmosphere subject to excess equatorial heating tends to maintain an axisymmetric Hadley circulation with rising motions near the equator and sinking at higher latitudes. Differential heating is hypothesized to result in Hadley circulations analogous to Earth's on other atmospheres in the Solar System, such as on Venus, Mars, and Titan. As with Earth's atmosphere, the Hadley circulation would be the dominant meridional circulation for these extraterrestrial atmospheres. Though less understood, Hadley circulations may also be present on the gas giants of the Solar System and should in principle materialize on exoplanetary atmospheres. The spatial extent of a Hadley cell on any atmosphere may be dependent on the rotation rate of the planet or moon, with a faster rotation rate leading to more contracted Hadley cells (with a more restrictive poleward extent) and a more cellular global meridional circulation. A slower rotation rate reduces the Coriolis effect, thus reducing the meridional temperature gradient needed to sustain a jet at the Hadley cell's poleward boundary and thus allowing the Hadley cell to extend farther poleward. Venus, which rotates slowly, may have Hadley cells that extend farther poleward than Earth's, spanning from the equator to high latitudes in each of the northern and southern hemispheres. Its broad Hadley circulation would efficiently maintain the nearly isothermal temperature distribution between the planet's pole and equator and vertical velocities of around . Observations of chemical tracers such as carbon monoxide provide indirect evidence for the existence of the Venusian Hadley circulation. The presence of poleward winds with speeds up to around at an altitude of is typically understood to be associated with the upper branch of a Hadley cell, which may be located above the Venusian surface. The slow vertical velocities associated with the Hadley circulation have not been measured, though they may have contributed to the vertical velocities measured by the Vega and Venera missions. The Hadley cells may extend to around 60° latitude, equatorward of a mid-latitude jet stream demarcating the boundary between the hypothesized Hadley cell and the polar vortex. The planet's atmosphere may exhibit two Hadley circulations, with one near the surface and the other at the level of the upper cloud deck. The Venusian Hadley circulation may contribute to the superrotation of the planet's atmosphere. Simulations of the Martian atmosphere suggest that a Hadley circulation is present there as well, exhibiting a stronger seasonality compared to Earth's Hadley circulation. This greater seasonality results from diminished thermal inertia resulting from the lack of an ocean and the planet's thinner atmosphere. Additionally, Mars' orbital eccentricity leads to a stronger and wider Hadley cell during its northern winter compared to its southern winter. During most of the Martian year, when a single Hadley cell prevails, its rising and sinking branches are located at 30° and 60° latitude, respectively, in global climate modelling. The tops of the Hadley cells on Mars may reach higher (to around altitude) and be less defined compared to on Earth due to the lack of a strong tropopause on Mars.
While latent heating from phase changes associated with water drives much of the ascending motion in Earth's Hadley circulation, ascent in Mars' Hadley circulation may be driven by radiative heating of lofted dust and intensified by the condensation of carbon dioxide near the polar ice cap of Mars' wintertime hemisphere, steepening pressure gradients. Over the course of the Martian year, the mass flux of the Hadley circulation ranges between 10⁹ kg s⁻¹ during the equinoxes and 10¹⁰ kg s⁻¹ at the solstices. A Hadley circulation may also be present in the atmosphere of Saturn's moon Titan. Like Venus, the slow rotation rate of Titan may support a spatially broad Hadley circulation. General circulation modeling of Titan's atmosphere suggests the presence of a cross-equatorial Hadley cell. This configuration is consistent with the meridional winds observed by the Huygens spacecraft when it landed near Titan's equator. During Titan's solstices, its Hadley circulation may take the form of a single Hadley cell that extends from pole to pole, with warm gas rising in the summer hemisphere and sinking in the winter hemisphere. A two-celled configuration with ascent near the equator is present in modelling during a limited transitional period near the equinoxes. The distribution of convective methane clouds on Titan and observations from the Huygens spacecraft suggest that the rising branch of its Hadley circulation occurs in the mid-latitudes of its summer hemisphere. Frequent cloud formation occurs at 40° latitude in Titan's summer hemisphere from ascent analogous to Earth's ITCZ. See also Polar vortex – a broad semi-permanent region of cold, cyclonically-rotating air encircling Earth's poles Brewer–Dobson circulation – a circulation between the tropical troposphere and the stratosphere Atlantic meridional overturning circulation – a broad oceanic circulation important for energy exchange across a wide range of latitudes
Hadley cell
[ "Physics", "Environmental_science" ]
9,117
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
13,245,649
https://en.wikipedia.org/wiki/Bubble%20point
In thermodynamics, the bubble point is the temperature (at a given pressure) where the first bubble of vapor is formed when heating a liquid consisting of two or more components. Given that vapor will probably have a different composition from the liquid, the bubble point (along with the dew point) at different compositions are useful data when designing distillation systems. For a single component the bubble point and the dew point are the same and are referred to as the boiling point. Calculating the bubble point At the bubble point, the following relationship holds: Σ y_i = Σ K_i x_i = 1, where K_i = y_i / x_i. K is the distribution coefficient or K factor, defined as the ratio of the mole fraction in the vapor phase (y_i) to the mole fraction in the liquid phase (x_i) at equilibrium. When Raoult's law and Dalton's law hold for the mixture, the K factor is defined as the ratio of the vapor pressure of the component to the total pressure of the system: K_i = P_i(sat) / P. Given either of x_i or y_i and either the temperature or pressure of a two-component system, calculations can be performed to determine the unknown information. See also Phase diagram Azeotrope Dew point
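As an illustration of the calculation just described, the sketch below estimates a bubble-point temperature for a hypothetical binary mixture under Raoult's law, using the Antoine equation for the pure-component vapor pressures. The Antoine coefficients, compositions, and pressure are illustrative placeholders rather than data from this article.

```python
# Minimal sketch of a bubble-point temperature calculation for a binary mixture,
# assuming Raoult's law so that K_i = Psat_i(T) / P. Coefficients are placeholders.

def psat_mmHg(T_C, A, B, C):
    """Antoine equation: log10(Psat) = A - B / (C + T), T in deg C, Psat in mmHg."""
    return 10 ** (A - B / (C + T_C))

# Hypothetical Antoine coefficients for components "1" and "2"
antoine = {"1": (6.90, 1211.0, 220.8), "2": (6.95, 1344.8, 219.5)}
x = {"1": 0.4, "2": 0.6}      # liquid mole fractions
P_total = 760.0               # total pressure, mmHg

def bubble_residual(T_C):
    """sum_i K_i * x_i - 1; the bubble point is the temperature where this equals zero."""
    return sum(x[i] * psat_mmHg(T_C, *antoine[i]) / P_total for i in x) - 1.0

# Simple bisection between two bracketing temperatures
lo, hi = 50.0, 150.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if bubble_residual(lo) * bubble_residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(f"Estimated bubble-point temperature: {0.5 * (lo + hi):.2f} deg C")
```

Bisection on the residual Σ K_i x_i − 1 is used here only for simplicity; any root-finding method would serve equally well.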
Bubble point
[ "Physics", "Chemistry" ]
224
[ "Scalar physical quantities", "Temperature", "Physical phenomena", "Phase transitions", "Physical quantities", "Gases", "Thermodynamic properties", "SI base quantities", "Intensive quantities", "Phases of matter", "Critical phenomena", "Thermodynamics", "Statistical mechanics", "Wikipedia ...
42,975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Hubble's law, also known as the Hubble–Lemaître law, is the observation in physical cosmology that galaxies are moving away from Earth at speeds proportional to their distance. In other words, the farther a galaxy is from the Earth, the faster it moves away. A galaxy's recessional velocity is typically determined by measuring its redshift, a shift in the frequency of light emitted by the galaxy. The discovery of Hubble's law is attributed to work published by Edwin Hubble in 1929, but the notion of the universe expanding at a calculable rate was first derived from general relativity equations in 1922 by Alexander Friedmann. The Friedmann equations showed the universe might be expanding, and presented the expansion speed if that were the case. Before Hubble, astronomer Carl Wilhelm Wirtz had, in 1922 and 1924, deduced with his own data that galaxies that appeared smaller and dimmer had larger redshifts and thus that more distant galaxies recede faster from the observer. In 1927, Georges Lemaître concluded that the universe might be expanding by noting the proportionality of the recessional velocity of distant bodies to their respective distances. He estimated a value for this ratio, which—after Hubble confirmed cosmic expansion and determined a more precise value for it two years later—became known as the Hubble constant. Hubble inferred the recession velocity of the objects from their redshifts, many of which were earlier measured and related to velocity by Vesto Slipher in 1917. Combining Slipher's velocities with Henrietta Swan Leavitt's intergalactic distance calculations and methodology allowed Hubble to better calculate an expansion rate for the universe. Hubble's law is considered the first observational basis for the expansion of the universe, and is one of the pieces of evidence most often cited in support of the Big Bang model. The motion of astronomical objects due solely to this expansion is known as the Hubble flow. It is described by the equation v = H0D, with H0 the constant of proportionality—the Hubble constant—between the "proper distance" D to a galaxy (which can change over time, unlike the comoving distance) and its speed of separation v, i.e. the derivative of proper distance with respect to the cosmic time coordinate. Though the Hubble constant is constant at any given moment in time, the Hubble parameter H(t), of which the Hubble constant is the current value, varies with time, so the term constant is sometimes thought of as somewhat of a misnomer. The Hubble constant is most frequently quoted in km/s/Mpc, which gives the speed of a galaxy one megaparsec away directly in km/s. Simplifying the units of the generalized form reveals that H0 specifies a frequency (SI unit: s−1), leading the reciprocal of H0 to be known as the Hubble time (14.4 billion years). The Hubble constant can also be stated as a relative rate of expansion. In this form H0 = 7%/Gyr, meaning that, at the current rate of expansion, it takes one billion years for an unbound structure to grow by 7%. Discovery A decade before Hubble made his observations, a number of physicists and mathematicians had established a consistent theory of an expanding universe by using Einstein field equations of general relativity. Applying the most general principles to the nature of the universe yielded a dynamic solution that conflicted with the then-prevalent notion of a static universe. Slipher's observations In 1912, Vesto M. 
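A small numeric illustration of the unit bookkeeping described above is shown below; the value H0 = 70 km/s/Mpc is an assumed round number used purely for demonstration, not a value adopted by this article.

```python
# Converting a Hubble constant in km/s/Mpc into a frequency, a Hubble time,
# and the "percent per gigayear" expansion rate quoted above. H0 is assumed.
H0_km_s_Mpc = 70.0                    # assumed round value for illustration
Mpc_in_km = 3.0857e19                 # kilometres per megaparsec
sec_per_Gyr = 3.156e16                # seconds per billion years

H0_per_s = H0_km_s_Mpc / Mpc_in_km    # H0 expressed as a frequency, s^-1
hubble_time_Gyr = 1.0 / H0_per_s / sec_per_Gyr
rate_percent_per_Gyr = H0_per_s * sec_per_Gyr * 100.0

print(f"H0 = {H0_per_s:.3e} s^-1")
print(f"Hubble time ~ {hubble_time_Gyr:.1f} Gyr")            # roughly 14 Gyr
print(f"Expansion rate ~ {rate_percent_per_Gyr:.1f} %/Gyr")  # roughly 7 %/Gyr
```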
Slipher measured the first Doppler shift of a "spiral nebula" (the obsolete term for spiral galaxies) and soon discovered that almost all such objects were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside the Milky Way galaxy. FLRW equations In 1922, Alexander Friedmann derived his Friedmann equations from Einstein field equations, showing that the universe might expand at a rate calculable by the equations. The parameter used by Friedmann is known today as the scale factor and can be considered as a scale invariant form of the proportionality constant of Hubble's law. Georges Lemaître independently found a similar solution in his 1927 paper discussed in the following section. The Friedmann equations are derived by inserting the metric for a homogeneous and isotropic universe into Einstein's field equations for a fluid with a given density and pressure. This idea of an expanding spacetime would eventually lead to the Big Bang and Steady State theories of cosmology. Lemaître's equation In 1927, two years before Hubble published his own article, the Belgian priest and astronomer Georges Lemaître was the first to publish research deriving what is now known as Hubble's law. According to the Canadian astronomer Sidney van den Bergh, "the 1927 discovery of the expansion of the universe by Lemaître was published in French in a low-impact journal. In the 1931 high-impact English translation of this article, a critical equation was changed by omitting reference to what is now known as the Hubble constant." It is now known that the alterations in the translated paper were carried out by Lemaître himself. Shape of the universe Before the advent of modern cosmology, there was considerable talk about the size and shape of the universe. In 1920, the Shapley–Curtis debate took place between Harlow Shapley and Heber D. Curtis over this issue. Shapley argued for a small universe the size of the Milky Way galaxy, and Curtis argued that the universe was much larger. The issue was resolved in the coming decade with Hubble's improved observations. Cepheid variable stars outside the Milky Way Edwin Hubble did most of his professional astronomical observing work at Mount Wilson Observatory, home to the world's most powerful telescope at the time. His observations of Cepheid variable stars in "spiral nebulae" enabled him to calculate the distances to these objects. Surprisingly, these objects were discovered to be at distances which placed them well outside the Milky Way. They continued to be called nebulae, and it was only gradually that the term galaxies replaced it. Combining redshifts with distance measurements The velocities and distances that appear in Hubble's law are not directly measured. The velocities are inferred from the redshift of radiation and distance is inferred from brightness. Hubble sought to correlate brightness with parameter . Combining his measurements of galaxy distances with Vesto Slipher and Milton Humason's measurements of the redshifts associated with the galaxies, Hubble discovered a rough proportionality between redshift of an object and its distance. 
Though there was considerable scatter (now known to be caused by peculiar velocities—the 'Hubble flow' is used to refer to the region of space far enough out that the recession velocity is larger than local peculiar velocities), Hubble was able to plot a trend line from the 46 galaxies he studied and obtain a value for the Hubble constant of 500 (km/s)/Mpc (much higher than the currently accepted value due to errors in his distance calibrations; see cosmic distance ladder for details). Hubble diagram Hubble's law can be easily depicted in a "Hubble diagram" in which the velocity (assumed approximately proportional to the redshift) of an object is plotted with respect to its distance from the observer. A straight line of positive slope on this diagram is the visual depiction of Hubble's law. Cosmological constant abandoned After Hubble's discovery was published, Albert Einstein abandoned his work on the cosmological constant, a term he had inserted into his equations of general relativity to coerce them into producing the static solution he previously considered the correct state of the universe. The Einstein equations in their simplest form model either an expanding or contracting universe, so Einstein introduced the constant to counter expansion or contraction and lead to a static and flat universe. After Hubble's discovery that the universe was, in fact, expanding, Einstein called his faulty assumption that the universe is static his "greatest mistake". On its own, general relativity could predict the expansion of the universe, which (through observations such as the bending of light by large masses, or the precession of the orbit of Mercury) could be experimentally observed and compared to his theoretical calculations using particular solutions of the equations he had originally formulated. In 1931, Einstein went to Mount Wilson Observatory to thank Hubble for providing the observational basis for modern cosmology. The cosmological constant has regained attention in recent decades as a hypothetical explanation for dark energy. Interpretation The discovery of the linear relationship between redshift and distance, coupled with a supposed linear relation between recessional velocity and redshift, yields a straightforward mathematical expression for Hubble's law as follows: v = H0D, where v is the recessional velocity, typically expressed in km/s. H0 is Hubble's constant and corresponds to the value of H (often termed the Hubble parameter, which is a value that is time dependent and which can be expressed in terms of the scale factor) in the Friedmann equations taken at the time of observation denoted by the subscript 0. This value is the same throughout the universe for a given comoving time. D is the proper distance (which can change over time, unlike the comoving distance, which is constant) from the galaxy to the observer, measured in megaparsecs (Mpc), in the 3-space defined by given cosmological time. (Recession velocity is just v = dD/dt.) Hubble's law is considered a fundamental relation between recessional velocity and distance. However, the relation between recessional velocity and redshift depends on the cosmological model adopted and is not established except for small redshifts.
For distances D larger than the radius of the Hubble sphere r_HS = c/H0, objects recede at a rate faster than the speed of light (See Uses of the proper distance for a discussion of the significance of this). Since the Hubble "constant" is a constant only in space, not in time, the radius of the Hubble sphere may increase or decrease over various time intervals. The subscript '0' indicates the value of the Hubble constant today. Current evidence suggests that the expansion of the universe is accelerating (see Accelerating universe), meaning that for any given galaxy, the recession velocity is increasing over time as the galaxy moves to greater and greater distances; however, the Hubble parameter is actually thought to be decreasing with time, meaning that if we were to look at some distance and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones. Redshift velocity and recessional velocity Redshift can be measured by determining the wavelength of a known transition, such as hydrogen α-lines for distant quasars, and finding the fractional shift compared to a stationary reference. Thus, redshift is a quantity unambiguously acquired from observation. Care is required, however, in translating these to recessional velocities: for small redshift values, a linear relation of redshift to recessional velocity applies, but more generally the redshift–distance law is nonlinear, meaning the relation must be derived specifically for each given model and epoch. Redshift velocity The redshift is often described as a redshift velocity, which is the recessional velocity that would produce the same redshift if it were caused by a linear Doppler effect (which, however, is not the case, as the velocities involved are too large to use a non-relativistic formula for Doppler shift). This redshift velocity can easily exceed the speed of light. In other words, to determine the redshift velocity v_rs, the relation v_rs ≡ cz is used. That is, there is no fundamental difference between redshift velocity and redshift: they are rigidly proportional, and not related by any theoretical reasoning. The motivation behind the "redshift velocity" terminology is that the redshift velocity agrees with the velocity from a low-velocity simplification of the so-called Fizeau–Doppler formula z = λo/λe − 1 = √((1 + v/c)/(1 − v/c)) − 1 ≈ v/c. Here, λo and λe are the observed and emitted wavelengths respectively. The "redshift velocity" is not so simply related to real velocity at larger velocities, however, and this terminology leads to confusion if interpreted as a real velocity. Next, the connection between redshift or redshift velocity and recessional velocity is discussed. Recessional velocity Suppose a(t) is called the scale factor of the universe, and increases as the universe expands in a manner that depends upon the cosmological model selected. Its meaning is that all measured proper distances D(t) between co-moving points increase proportionally to a(t). (The co-moving points are not moving relative to their local environments.) In other words, D(t) = (a(t)/a(t0)) D(t0), where t0 is some reference time. If light is emitted from a galaxy at time t_e and received by us at t0, it is redshifted due to the expansion of the universe, and this redshift is simply z = a(t0)/a(t_e) − 1. Suppose a galaxy is at distance D, and this distance changes with time at a rate dD/dt. 
We call this rate of recession the "recession velocity" v_r = dD/dt. We now define the Hubble constant as H ≡ (da/dt)/a and discover the Hubble law v_r = H D. From this perspective, Hubble's law is a fundamental relation between (i) the recessional velocity associated with the expansion of the universe and (ii) the distance to an object; the connection between redshift and distance is a crutch used to connect Hubble's law with observations. This law can be related to redshift approximately by making a Taylor series expansion: z ≈ H(t0)(t0 − t_e). If the distance is not too large, all other complications of the model become small corrections, and the time interval is simply the distance divided by the speed of light: t0 − t_e ≈ D/c, so z ≈ H(t0) D/c, or cz ≈ H(t0) D. According to this approach, the relation cz = v_r is an approximation valid at low redshifts, to be replaced by a relation at large redshifts that is model-dependent. See velocity-redshift figure. Observability of parameters Strictly speaking, neither v nor D in the formula are directly observable, because they are properties of a galaxy, whereas our observations refer to the galaxy in the past, at the time that the light we currently see left it. For relatively nearby galaxies (redshift z much less than one), v and D will not have changed much, and v can be estimated using the formula v = cz, where c is the speed of light. This gives the empirical relation found by Hubble. For distant galaxies, v (or D) cannot be calculated from z without specifying a detailed model for how H changes with time. The redshift is not even directly related to the recession velocity at the time the light set out, but it does have a simple interpretation: 1 + z is the factor by which the universe has expanded while the photon was traveling towards the observer. Expansion velocity vs. peculiar velocity In using Hubble's law to determine distances, only the velocity due to the expansion of the universe can be used. Since gravitationally interacting galaxies move relative to each other independent of the expansion of the universe, these relative velocities, called peculiar velocities, need to be accounted for in the application of Hubble's law. Such peculiar velocities give rise to redshift-space distortions. Time-dependence of Hubble parameter The parameter H is commonly called the "Hubble constant", but that is a misnomer since it is constant in space only at a fixed time; it varies with time in nearly all cosmological models, and all observations of far distant objects are also observations into the distant past, when the "constant" had a different value. "Hubble parameter" is a more correct term, with H0 denoting the present-day value. Another common source of confusion is that the accelerating universe does not imply that the Hubble parameter is actually increasing with time; since H = (da/dt)/a, in most accelerating models a increases relatively faster than da/dt, so H decreases with time. (The recession velocity of one chosen galaxy does increase, but different galaxies passing a sphere of fixed radius cross the sphere more slowly at later times.) On defining the dimensionless deceleration parameter q ≡ −(d²a/dt²) a / (da/dt)², it follows that dH/dt = −H²(1 + q). From this it is seen that the Hubble parameter is decreasing with time unless q < −1; the latter can only occur if the universe contains phantom energy, regarded as theoretically somewhat improbable. 
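The low-redshift approximation discussed above can be illustrated numerically. The sketch below uses an assumed H0 and compares the linear relation z ≈ H0 D/c with the special-relativistic Doppler redshift for the same line-of-sight speed; it is a demonstration of the approximation only, not a cosmological distance calculation.

```python
# Numerical illustration (assumed values) of the low-redshift approximation cz ~ H0 * D,
# compared with the relativistic Doppler relation mentioned in the previous section.
c = 299_792.458            # speed of light, km/s
H0 = 70.0                  # assumed Hubble constant, km/s/Mpc

for D_Mpc in (10, 100, 1000):
    v = H0 * D_Mpc                          # Hubble's law, km/s
    z_linear = v / c                        # low-redshift approximation z ~ v/c
    z_doppler = ((1 + v / c) / (1 - v / c)) ** 0.5 - 1   # relativistic Doppler for the same speed
    print(f"D = {D_Mpc:5d} Mpc  v = {v:8.0f} km/s  z_lin = {z_linear:.4f}  z_dop = {z_doppler:.4f}")
```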
However, in the standard Lambda cold dark matter model (Lambda-CDM or ΛCDM model), q will tend to −1 from above in the distant future as the cosmological constant becomes increasingly dominant over matter; this implies that H will approach from above a constant value of ≈ 57 (km/s)/Mpc, and the scale factor of the universe will then grow exponentially in time. Idealized Hubble's law The mathematical derivation of an idealized Hubble's law for a uniformly expanding universe is a fairly elementary theorem of geometry in 3-dimensional Cartesian/Newtonian coordinate space, which, considered as a metric space, is entirely homogeneous and isotropic (properties do not vary with location or direction). Simply stated, the theorem is this: in a uniformly expanding space, any two points recede from each other at a velocity proportional to the distance between them. In fact, this applies to non-Cartesian spaces as long as they are locally homogeneous and isotropic, specifically to the negatively and positively curved spaces frequently considered as cosmological models (see shape of the universe). An observation stemming from this theorem is that seeing objects recede from us on Earth is not an indication that Earth is near to a center from which the expansion is occurring, but rather that every observer in an expanding universe will see objects receding from them. Ultimate fate and age of the universe The value of the Hubble parameter changes over time, either increasing or decreasing depending on the value of the so-called deceleration parameter q, which is defined by q = −(d²a/dt²) a / (da/dt)². In a universe with a deceleration parameter equal to zero, it follows that H = 1/t, where t is the time since the Big Bang. A non-zero, time-dependent value of q simply requires integration of the Friedmann equations backwards from the present time to the time when the comoving horizon size was zero. It was long thought that q was positive, indicating that the expansion is slowing down due to gravitational attraction. This would imply an age of the universe less than 1/H (which is about 14 billion years). For instance, a value for q of 1/2 (once favoured by most theorists) would give the age of the universe as 2/(3H). The discovery in 1998 that q is apparently negative means that the universe could actually be older than 1/H. However, estimates of the age of the universe are very close to 1/H. Olbers' paradox The expansion of space summarized by the Big Bang interpretation of Hubble's law is relevant to the old conundrum known as Olbers' paradox: If the universe were infinite in size, static, and filled with a uniform distribution of stars, then every line of sight in the sky would end on a star, and the sky would be as bright as the surface of a star. However, the night sky is largely dark. Since the 17th century, astronomers and other thinkers have proposed many possible ways to resolve this paradox, but the currently accepted resolution depends in part on the Big Bang theory, and in part on the Hubble expansion: in a universe that existed for a finite amount of time, only the light of a finite number of stars has had enough time to reach us, and the paradox is resolved. Additionally, in an expanding universe, distant objects recede from us, which causes the light emanated from them to be redshifted and diminished in brightness by the time we see it. Dimensionless Hubble constant Instead of working with Hubble's constant, a common practice is to introduce the dimensionless Hubble constant, usually denoted by h and commonly referred to as "little h", then to write Hubble's constant H0 as 100 h km s−1 Mpc−1, all the relative uncertainty of the true value of H0 being then relegated to h. 
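The ≈ 57 (km/s)/Mpc asymptote mentioned above can be cross-checked with a one-line calculation, under the standard flat ΛCDM assumption (not spelled out in the text) that the late-time Hubble rate tends to H0 times the square root of the dark-energy density parameter; the parameter values below are assumptions for illustration only.

```python
# Illustrative check (assumed relation and parameter values, not figures from the article):
# in a flat Lambda-CDM universe the Hubble parameter tends to H0 * sqrt(Omega_Lambda)
# at late times, since only the cosmological-constant term in the Friedmann equation survives.
H0 = 67.7            # assumed present-day Hubble constant, km/s/Mpc
Omega_Lambda = 0.69  # assumed dark-energy density parameter
H_infinity = H0 * Omega_Lambda ** 0.5
print(f"Late-time Hubble rate ~ {H_infinity:.0f} km/s/Mpc")   # roughly 56-57
```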
The dimensionless Hubble constant is often used when giving distances that are calculated from redshift using the formula . Since is not precisely known, the distance is expressed as: In other words, one calculates 2998 × and one gives the units as Mpc  or  Mpc. Occasionally a reference value other than 100 may be chosen, in which case a subscript is presented after to avoid confusion; e.g. denotes  , which implies . This should not be confused with the dimensionless value of Hubble's constant, usually expressed in terms of Planck units, obtained by multiplying by (from definitions of parsec and ), for example for , a Planck unit version of is obtained. Acceleration of the expansion A value for measured from standard candle observations of Type Ia supernovae, which was determined in 1998 to be negative, surprised many astronomers with the implication that the expansion of the universe is currently "accelerating" (although the Hubble factor is still decreasing with time, as mentioned above in the Interpretation section; see the articles on dark energy and the ΛCDM model). Derivation of the Hubble parameter Start with the Friedmann equation: where is the Hubble parameter, is the scale factor, is the gravitational constant, is the normalised spatial curvature of the universe and equal to −1, 0, or 1, and is the cosmological constant. Matter-dominated universe (with a cosmological constant) If the universe is matter-dominated, then the mass density of the universe can be taken to include just matter so where is the density of matter today. From the Friedmann equation and thermodynamic principles we know for non-relativistic particles that their mass density decreases proportional to the inverse volume of the universe, so the equation above must be true. We can also define (see density parameter for ) therefore: Also, by definition, where the subscript refers to the values today, and . Substituting all of this into the Friedmann equation at the start of this section and replacing with gives Matter- and dark energy-dominated universe If the universe is both matter-dominated and dark energy-dominated, then the above equation for the Hubble parameter will also be a function of the equation of state of dark energy. So now: where is the mass density of the dark energy. By definition, an equation of state in cosmology is , and if this is substituted into the fluid equation, which describes how the mass density of the universe evolves with time, then If is constant, then implying: Therefore, for dark energy with a constant equation of state , If this is substituted into the Friedman equation in a similar way as before, but this time set , which assumes a spatially flat universe, then (see shape of the universe) If the dark energy derives from a cosmological constant such as that introduced by Einstein, it can be shown that . The equation then reduces to the last equation in the matter-dominated universe section, with set to zero. In that case the initial dark energy density is given by If dark energy does not have a constant equation-of-state , then and to solve this, must be parametrized, for example if , giving Other ingredients have been formulated. Units derived from the Hubble constant Hubble time The Hubble constant has units of inverse time; the Hubble time is simply defined as the inverse of the Hubble constant, i.e. This is slightly different from the age of the universe, which is approximately 13.8 billion years. 
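The conversion from a Hubble constant quoted in (km/s)/Mpc to a Hubble time in years is a unit exercise. The Python sketch below runs it for a few example values of H0 and gives roughly 14 billion years, close to (but, as explained in the next paragraph, not the same as) the age of the universe; the conversion constants are standard approximate values.

```python
# Hubble time t_H = 1/H0, converted from (km/s)/Mpc to years.
# The H0 values looped over are illustrative examples only.

KM_PER_MPC = 3.0857e19          # kilometres in a megaparsec (approx.)
SECONDS_PER_YEAR = 3.156e7      # seconds in a year (approx.)

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Hubble time in billions of years for H0 given in (km/s)/Mpc."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC
    return 1.0 / h0_per_second / SECONDS_PER_YEAR / 1e9

if __name__ == "__main__":
    for h0 in (67.4, 70.0, 73.0):
        print(f"H0 = {h0:5.1f} (km/s)/Mpc  ->  Hubble time ~ {hubble_time_gyr(h0):5.2f} Gyr")
```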
The Hubble time is the age it would have had if the expansion had been linear, and it is different from the real age of the universe because the expansion is not linear; it depends on the energy content of the universe (see ). We currently appear to be approaching a period where the expansion of the universe is exponential due to the increasing dominance of vacuum energy. In this regime, the Hubble parameter is constant, and the universe grows by a factor each Hubble time: Likewise, the generally accepted value of 2.27 Es−1 means that (at the current rate) the universe would grow by a factor of in one exasecond. Over long periods of time, the dynamics are complicated by general relativity, dark energy, inflation, etc., as explained above. Hubble length The Hubble length or Hubble distance is a unit of distance in cosmology, defined as — the speed of light multiplied by the Hubble time. It is equivalent to 4,420 million parsecs or 14.4 billion light years. (The numerical value of the Hubble length in light years is, by definition, equal to that of the Hubble time in years.) Substituting into the equation for Hubble's law, reveals that the Hubble distance specifies the distance from our location to those galaxies which are receding from us at the speed of light Hubble volume The Hubble volume is sometimes defined as a volume of the universe with a comoving size of . The exact definition varies: it is sometimes defined as the volume of a sphere with radius , or alternatively, a cube of side . Some cosmologists even use the term Hubble volume to refer to the volume of the observable universe, although this has a radius approximately three times larger. Determining the Hubble constant The value of the Hubble constant, , cannot be measured directly, but is derived from a combination of astronomical observations and model-dependent assumptions. Increasingly accurate observations and new models over many decades have led to two sets of highly precise values which do not agree. This difference is known as the "Hubble tension". Earlier measurements For the original 1929 estimate of the constant now bearing his name, Hubble used observations of Cepheid variable stars as "standard candles" to measure distance. The result he obtained was , much larger than the value astronomers currently calculate. Later observations by astronomer Walter Baade led him to realize that there were distinct "populations" for stars (Population I and Population II) in a galaxy. The same observations led him to discover that there are two types of Cepheid variable stars with different luminosities. Using this discovery, he recalculated Hubble constant and the size of the known universe, doubling the previous calculation made by Hubble in 1929. He announced this finding to considerable astonishment at the 1952 meeting of the International Astronomical Union in Rome. For most of the second half of the 20th century, the value of was estimated to be between . The value of the Hubble constant was the topic of a long and rather bitter controversy between Gérard de Vaucouleurs, who claimed the value was around 100, and Allan Sandage, who claimed the value was near 50. In one demonstration of vitriol shared between the parties, when Sandage and Gustav Andreas Tammann (Sandage's research colleague) formally acknowledged the shortcomings of confirming the systematic error of their method in 1975, Vaucouleurs responded "It is unfortunate that this sober warning was so soon forgotten and ignored by most astronomers and textbook writers". 
In 1996, a debate moderated by John Bahcall between Sidney van den Bergh and Gustav Tammann was held in similar fashion to the earlier Shapley–Curtis debate over these two competing values. This previously wide variance in estimates was partially resolved with the introduction of the ΛCDM model of the universe in the late 1990s. Incorporating the ΛCDM model, observations of high-redshift clusters at X-ray and microwave wavelengths using the Sunyaev–Zel'dovich effect, measurements of anisotropies in the cosmic microwave background radiation, and optical surveys all gave a value of around 50–70 km/s/Mpc for the constant. Precision cosmology and the Hubble tension By the late 1990s, advances in ideas and technology allowed higher precision measurements. However, two major categories of methods, each with high precision, fail to agree. "Late universe" measurements using calibrated distance ladder techniques have converged on a value of approximately . Since 2000, "early universe" techniques based on measurements of the cosmic microwave background have become available, and these agree on a value near . (This accounts for the change in the expansion rate since the early universe, so is comparable to the first number.) Initially, this discrepancy was within the estimated measurement uncertainties and thus no cause for concern. However, as techniques have improved, the estimated measurement uncertainties have shrunk, but the discrepancies have not, to the point that the disagreement is now highly statistically significant. This discrepancy is called the Hubble tension. An example of an "early" measurement, the Planck mission published in 2018 gives a value for of . In the "late" camp is the higher value of determined by the Hubble Space Telescope and confirmed by the James Webb Space Telescope in 2023. The "early" and "late" measurements disagree at the >5 σ level, beyond a plausible level of chance. The resolution to this disagreement is an ongoing area of active research. Reducing systematic errors Since 2013 much effort has gone in to new measurements to check for possible systematic errors and improved reproducibility. The "late universe" or distance ladder measurements typically employ three stages or "rungs". In the first rung distances to Cepheids are determined while trying to reduce luminosity errors from dust and correlations of metallicity with luminosity. The second rung uses Type Ia supernova, explosions of almost constant amount of mass and thus very similar amounts of light; the primary source of systematic error is the limited number of objects that can be observed. The third rung of the distance ladder measures the red-shift of supernova to extract the Hubble flow and from that the constant. At this rung corrections due to motion other than expansion are applied. As an example of the kind of work needed to reduce systematic errors, photometry on observations from the James Webb Space Telescope of extra-galactic Cepheids confirm the findings from the HST. The higher resolution avoided confusion from crowding of stars in the field of view but came to the same value for H0. The "early universe" or inverse distance ladder measures the observable consequences of spherical sound waves on primordial plasma density. These pressure waves – called baryon acoustic oscillations (BAO) – cease once the universe cooled enough for electrons to stay bound to nuclei, ending the plasma and allowing the photons trapped by interaction with the plasma to escape. 
The pressure waves then become very small perturbations in density imprinted on the cosmic microwave background and on the large-scale density of galaxies across the sky. Detailed structure in high-precision measurements of the CMB can be matched to physics models of the oscillations. These models depend upon the Hubble constant, such that a match reveals a value for the constant. Similarly, the BAO affects the statistical distribution of matter, observed as distant galaxies across the sky. These two independent kinds of measurements produce similar values for the constant from the current models, giving strong evidence that systematic errors in the measurements themselves do not affect the result. Other kinds of measurements In addition to measurements based on calibrated distance ladder techniques or measurements of the CMB, other methods have been used to determine the Hubble constant. In October 2018, scientists used information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817) as a way of determining the Hubble constant. In July 2019, astronomers reported that a new method to determine the Hubble constant, and resolve the discrepancy of earlier methods, had been proposed based on the mergers of pairs of neutron stars, following the detection of the neutron star merger GW170817, an event known as a dark siren. Their measurement of the Hubble constant is (km/s)/Mpc. Also in July 2019, astronomers reported another new method, using data from the Hubble Space Telescope and based on distances to red giant stars calculated using the tip of the red-giant branch (TRGB) distance indicator. Their measurement of the Hubble constant is . In February 2020, the Megamaser Cosmology Project published independent results based on astrophysical masers visible at cosmological distances, which do not require multi-step calibration. That work confirmed the distance ladder results and differed from the early-universe results at a statistical significance level of 95%. In July 2020, measurements of the cosmic background radiation by the Atacama Cosmology Telescope predicted that the Universe should be expanding more slowly than is currently observed. In July 2023, an independent estimate of the Hubble constant was derived from a kilonova, the optical afterglow of a neutron star merger. Due to the blackbody nature of early kilonova spectra, such systems provide strongly constraining estimators of cosmic distance. Using the kilonova AT2017gfo (the aftermath of, once again, GW170817), these measurements indicate a local estimate of the Hubble constant of . Possible resolutions of the Hubble tension The cause of the Hubble tension is unknown, and there are many possible proposed solutions. The most conservative is that there is an unknown systematic error affecting either early-universe or late-universe observations. Although intuitively appealing, this explanation requires multiple unrelated effects regardless of whether early-universe or late-universe observations are incorrect, and there are no obvious candidates. Furthermore, any such systematic error would need to affect multiple different instruments, since both the early-universe and late-universe observations come from several different telescopes. Alternatively, it could be that the observations are correct, but some unaccounted-for effect is causing the discrepancy. 
If the cosmological principle fails (see ), then the existing interpretations of the Hubble constant and the Hubble tension have to be revised, which might resolve the Hubble tension. In particular, we would need to be located within a very large void, extending out to about a redshift of 0.5, for such an explanation to work, and a void of that size would conflict with supernova and baryon acoustic oscillation observations. Yet another possibility is that the uncertainties in the measurements could have been underestimated, but given the internal agreements this is neither likely nor sufficient to resolve the overall tension. Finally, another possibility is new physics beyond the currently accepted cosmological model of the universe, the ΛCDM model. There are very many theories in this category, for example, replacing general relativity with a modified theory of gravity could potentially resolve the tension, as could a dark energy component in the early universe, dark energy with a time-varying equation of state, or dark matter that decays into dark radiation. A problem faced by all these theories is that both early-universe and late-universe measurements rely on multiple independent lines of physics, and it is difficult to modify any of those lines while preserving their successes elsewhere. The scale of the challenge can be seen from how some authors have argued that new early-universe physics alone is not sufficient, while other authors argue that new late-universe physics alone is also not sufficient. Nonetheless, astronomers are trying, with interest in the Hubble tension growing strongly since the mid-2010s. Measurements of the Hubble constant See also S8 tension – a similar problem from another parameter of the ΛCDM model. Notes References Bibliography External links NASA's WMAP Big Bang Expansion: the Hubble Constant The Hubble Key Project The Hubble Diagram Project Coming to terms with different Hubble Constants (Forbes; 3 May 2019) Law Eponymous laws of physics Large-scale structure of the cosmos Physical cosmology Equations of astronomy
Hubble's law
[ "Physics", "Astronomy" ]
7,331
[ "Astronomical sub-disciplines", "Concepts in astronomy", "Theoretical physics", "Astrophysics", "Equations of astronomy", "Physical cosmology" ]
42,986
https://en.wikipedia.org/wiki/Alternating%20current
Alternating current (AC) is an electric current that periodically reverses direction and changes its magnitude continuously with time, in contrast to direct current (DC), which flows only in one direction. Alternating current is the form in which electric power is delivered to businesses and residences, and it is the form of electrical energy that consumers typically use when they plug kitchen appliances, televisions, fans and electric lamps into a wall socket. The abbreviations AC and DC are often used to mean simply alternating and direct, respectively, as when they modify current or voltage. The usual waveform of alternating current in most electric power circuits is a sine wave, whose positive half-period corresponds with positive direction of the current and vice versa (the full period is called a cycle). "Alternating current" most commonly refers to power distribution, but a wide range of other applications are technically alternating current although it is less common to describe them by that term. In many applications, like guitar amplifiers, different waveforms are used, such as triangular waves or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. These types of alternating current carry information such as sound (audio) or images (video) sometimes carried by modulation of an AC carrier signal. These currents typically alternate at higher frequencies than those used in power transmission. Transmission, distribution, and domestic power supply Electrical energy is distributed as alternating current because AC voltage may be increased or decreased with a transformer. This allows the power to be transmitted through power lines efficiently at high voltage, which reduces the energy lost as heat due to resistance of the wire, and transformed to a lower, safer voltage for use. Use of a higher voltage leads to significantly more efficient transmission of power. The power losses () in the wire are a product of the square of the current ( I ) and the resistance (R) of the wire, described by the formula: This means that when transmitting a fixed power on a given wire, if the current is halved (i.e. the voltage is doubled), the power loss due to the wire's resistance will be reduced to one quarter. The power transmitted is equal to the product of the current and the voltage (assuming no phase difference); that is, Consequently, power transmitted at a higher voltage requires less loss-producing current than for the same power at a lower voltage. Power is often transmitted at hundreds of kilovolts on pylons, and transformed down to tens of kilovolts to be transmitted on lower level lines, and finally transformed down to 100 V – 240 V for domestic use. High voltages have disadvantages, such as the increased insulation required, and generally increased difficulty in their safe handling. In a power plant, energy is generated at a convenient voltage for the design of a generator, and then stepped up to a high voltage for transmission. Near the loads, the transmission voltage is stepped down to the voltages used by equipment. Consumer voltages vary somewhat depending on the country and size of load, but generally motors and lighting are built to use up to a few hundred volts between phases. The voltage delivered to equipment such as lighting and motor loads is standardized, with an allowable range of voltage over which equipment is expected to operate. 
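The claim above that halving the current cuts the resistive loss to one quarter can be checked with a few lines of arithmetic. The Python sketch below compares the I²R loss for the same delivered power over the same line at two transmission voltages; the power, voltages, and line resistance are arbitrary illustrative figures, and the phase angle is assumed to be zero.

```python
# Resistive line loss P_loss = I^2 * R for a fixed delivered power P = V * I.
# All numbers below are arbitrary illustrative values.

def line_loss(delivered_power_w: float, voltage_v: float, line_resistance_ohm: float) -> float:
    """I^2 R loss in watts, assuming no phase difference between V and I."""
    current = delivered_power_w / voltage_v
    return current ** 2 * line_resistance_ohm

if __name__ == "__main__":
    power = 1_000_000.0      # 1 MW delivered
    resistance = 5.0         # ohms of line resistance
    for voltage in (11_000.0, 22_000.0):
        loss = line_loss(power, voltage, resistance)
        print(f"{voltage / 1000:5.0f} kV: loss ~ {loss / 1000:6.1f} kW")
```

Doubling the voltage from 11 kV to 22 kV halves the current and, as the output shows, reduces the loss to a quarter of its former value.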
Standard power utilization voltages and percentage tolerance vary in the different mains power systems found in the world. High-voltage direct-current (HVDC) electric power transmission systems have become more viable as technology has provided efficient means of changing the voltage of DC power. Transmission with high voltage direct current was not feasible in the early days of electric power transmission, as there was then no economically viable way to step the voltage of DC down for end user applications such as lighting incandescent bulbs. Three-phase electrical generation is very common. The simplest way is to use three separate coils in the generator stator, physically offset by an angle of 120° (one-third of a complete 360° phase) to each other. Three current waveforms are produced that are equal in magnitude and 120° out of phase to each other. If coils are added opposite to these (60° spacing), they generate the same phases with reverse polarity and so can be simply wired together. In practice, higher pole orders are commonly used. For example, a 12-pole machine would have 36 coils (10° spacing). The advantage is that lower rotational speeds can be used to generate the same frequency. For example, a 2-pole machine running at 3600 rpm and a 12-pole machine running at 600 rpm produce the same frequency; the lower speed is preferable for larger machines. If the load on a three-phase system is balanced equally among the phases, no current flows through the neutral point. Even in the worst-case unbalanced (linear) load, the neutral current will not exceed the highest of the phase currents. Non-linear loads (e.g. the switch-mode power supplies widely used) may require an oversized neutral bus and neutral conductor in the upstream distribution panel to handle harmonics. Harmonics can cause neutral conductor current levels to exceed that of one or all phase conductors. For three-phase at utilization voltages a four-wire system is often used. When stepping down three-phase, a transformer with a Delta (3-wire) primary and a Star (4-wire, center-earthed) secondary is often used so there is no need for a neutral on the supply side. For smaller customers (just how small varies by country and age of the installation) only a single phase and neutral, or two phases and neutral, are taken to the property. For larger installations, all three phases and neutral are taken to the main distribution panel. From the three-phase main panel, both single and three-phase circuits may lead off. Three-wire single-phase systems, with a single center-tapped transformer giving two live conductors, is a common distribution scheme for residential and small commercial buildings in North America. This arrangement is sometimes incorrectly referred to as two phase. A similar method is used for a different reason on construction sites in the UK. Small power tools and lighting are supposed to be supplied by a local center-tapped transformer with a voltage of 55 V between each power conductor and earth. This significantly reduces the risk of electric shock in the event that one of the live conductors becomes exposed through an equipment fault whilst still allowing a reasonable voltage of 110 V between the two conductors for running the tools. An additional wire, called the bond (or earth) wire, is often connected between non-current-carrying metal enclosures and earth ground. 
This conductor provides protection from electric shock due to accidental contact of circuit conductors with the metal chassis of portable appliances and tools. Bonding all non-current-carrying metal parts into one complete system ensures there is always a low electrical impedance path to ground sufficient to carry any fault current for as long as it takes for the system to clear the fault. This low impedance path allows the maximum amount of fault current, causing the overcurrent protection device (breakers, fuses) to trip or burn out as quickly as possible, bringing the electrical system to a safe state. All bond wires are bonded to ground at the main service panel, as is the neutral/identified conductor if present. AC power supply frequencies The frequency of the electrical system varies by country and sometimes within a country; most electric power is generated at either 50 or 60 Hertz. Some countries have a mixture of 50 Hz and 60 Hz supplies, notably electricity power transmission in Japan. Low frequency A low frequency eases the design of electric motors, particularly for hoisting, crushing and rolling applications, and commutator-type traction motors for applications such as railways. However, low frequency also causes noticeable flicker in arc lamps and incandescent light bulbs. The use of lower frequencies also provided the advantage of lower transmission losses, which are proportional to frequency. The original Niagara Falls generators were built to produce 25 Hz power, as a compromise between low frequency for traction and heavy induction motors, while still allowing incandescent lighting to operate (although with noticeable flicker). Most of the 25 Hz residential and commercial customers for Niagara Falls power were converted to 60 Hz by the late 1950s, although some 25 Hz industrial customers still existed as of the start of the 21st century. 16.7 Hz power (formerly 16 2/3 Hz) is still used in some European rail systems, such as in Austria, Germany, Norway, Sweden and Switzerland. High frequency Off-shore, military, textile industry, marine, aircraft, and spacecraft applications sometimes use 400 Hz, for benefits of reduced weight of apparatus or higher motor speeds. Computer mainframe systems were often powered by 400 Hz or 415 Hz for benefits of ripple reduction while using smaller internal AC to DC conversion units. Effects at high frequencies A direct current flows uniformly throughout the cross-section of a homogeneous electrically conducting wire. An alternating current of any frequency is forced away from the wire's center, toward its outer surface. This is because an alternating current (which is the result of the acceleration of electric charge) creates electromagnetic waves (a phenomenon known as electromagnetic radiation). Electric conductors are not conducive to electromagnetic waves (a perfect electric conductor prohibits all electromagnetic waves within its boundary), so a wire that is made of a non-perfect conductor (a conductor with finite, rather than infinite, electrical conductivity) pushes the alternating current, along with their associated electromagnetic fields, away from the wire's center. The phenomenon of alternating current being pushed away from the center of the conductor is called skin effect, and a direct current does not exhibit this effect, since a direct current does not create electromagnetic waves. 
At very high frequencies, the current no longer flows in the wire, but effectively flows on the surface of the wire, within a thickness of a few skin depths. The skin depth is the thickness at which the current density is reduced by 63%. Even at relatively low frequencies used for power transmission (50 Hz – 60 Hz), non-uniform distribution of current still occurs in sufficiently thick conductors. For example, the skin depth of a copper conductor is approximately 8.57 mm at 60 Hz, so high-current conductors are usually hollow to reduce their mass and cost. This tendency of alternating current to flow predominantly in the periphery of conductors reduces the effective cross-section of the conductor. This increases the effective AC resistance of the conductor since resistance is inversely proportional to the cross-sectional area. A conductor's AC resistance is higher than its DC resistance, causing a higher energy loss due to ohmic heating (also called I2R loss). Techniques for reducing AC resistance For low to medium frequencies, conductors can be divided into stranded wires, each insulated from the others, with the relative positions of individual strands specially arranged within the conductor bundle. Wire constructed using this technique is called Litz wire. This measure helps to partially mitigate skin effect by forcing more equal current throughout the total cross section of the stranded conductors. Litz wire is used for making high-Q inductors, reducing losses in flexible conductors carrying very high currents at lower frequencies, and in the windings of devices carrying higher radio frequency current (up to hundreds of kilohertz), such as switch-mode power supplies and radio frequency transformers. Techniques for reducing radiation loss As written above, an alternating current is made of electric charge under periodic acceleration, which causes radiation of electromagnetic waves. Energy that is radiated is lost. Depending on the frequency, different techniques are used to minimize the loss due to radiation. Twisted pairs At frequencies up to about 1 GHz, pairs of wires are twisted together in a cable, forming a twisted pair. This reduces losses from electromagnetic radiation and inductive coupling. A twisted pair must be used with a balanced signaling system so that the two wires carry equal but opposite currents. Each wire in a twisted pair radiates a signal, but it is effectively canceled by radiation from the other wire, resulting in almost no radiation loss. Coaxial cables Coaxial cables are commonly used at audio frequencies and above for convenience. A coaxial cable has a conductive wire inside a conductive tube, separated by a dielectric layer. The current flowing on the surface of the inner conductor is equal and opposite to the current flowing on the inner surface of the outer tube. The electromagnetic field is thus completely contained within the tube, and (ideally) no energy is lost to radiation or coupling outside the tube. Coaxial cables have acceptably small losses for frequencies up to about 5 GHz. For microwave frequencies greater than 5 GHz, the losses (due mainly to the dielectric separating the inner and outer tubes being a non-ideal insulator) become too large, making waveguides a more efficient medium for transmitting energy. Coaxial cables often use a perforated dielectric layer to separate the inner and outer conductors in order to minimize the power dissipated by the dielectric. 
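The copper skin-depth figure quoted above (about 8.57 mm at 60 Hz) can be reproduced approximately from the standard formula δ = √(2ρ/(ωμ)), which the article does not state explicitly; the resistivity and permeability values in the sketch below are nominal textbook numbers for copper and are assumptions of this example.

```python
import math

# Skin depth delta = sqrt(2 * rho / (omega * mu)) for a good conductor.
# Material constants below are nominal textbook values for copper (assumed).

RHO_COPPER = 1.68e-8        # resistivity, ohm*m
MU_0 = 4.0e-7 * math.pi     # permeability of free space, H/m
MU_R_COPPER = 1.0           # copper is essentially non-magnetic

def skin_depth_m(frequency_hz: float) -> float:
    """Depth at which current density falls to 1/e of its surface value."""
    omega = 2.0 * math.pi * frequency_hz
    return math.sqrt(2.0 * RHO_COPPER / (omega * MU_0 * MU_R_COPPER))

if __name__ == "__main__":
    for f in (50.0, 60.0, 10_000.0, 1e6):
        print(f"{f:>10.0f} Hz: skin depth ~ {skin_depth_m(f) * 1000:7.3f} mm")
```

At higher frequencies the depth shrinks as 1/√f, which is why Litz wire and hollow conductors become worthwhile.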
Waveguides Waveguides are similar to coaxial cables, as both consist of tubes, with the biggest difference being that waveguides have no inner conductor. Waveguides can have any arbitrary cross section, but rectangular cross sections are the most common. Because waveguides do not have an inner conductor to carry a return current, waveguides cannot deliver energy by means of an electric current, but rather by means of a guided electromagnetic field. Although surface currents do flow on the inner walls of the waveguides, those surface currents do not carry power. Power is carried by the guided electromagnetic fields. The surface currents are set up by the guided electromagnetic fields and have the effect of keeping the fields inside the waveguide and preventing leakage of the fields to the space outside the waveguide. Waveguides have dimensions comparable to the wavelength of the alternating current to be transmitted, so they are feasible only at microwave frequencies. In addition to this mechanical feasibility, electrical resistance of the non-ideal metals forming the walls of the waveguide causes dissipation of power (surface currents flowing on lossy conductors dissipate power). At higher frequencies, the power lost to this dissipation becomes unacceptably large. Fiber optics At frequencies greater than 200 GHz, waveguide dimensions become impractically small, and the ohmic losses in the waveguide walls become large. Instead, fiber optics, which are a form of dielectric waveguides, can be used. For such frequencies, the concepts of voltages and currents are no longer used. Formulation Alternating currents are accompanied (or caused) by alternating voltages. An AC voltage v can be described mathematically as a function of time by the following equation: , where is the peak voltage (unit: volt), is the angular frequency (unit: radians per second). The angular frequency is related to the physical frequency, (unit: hertz), which represents the number of cycles per second, by the equation . is the time (unit: second). The peak-to-peak value of an AC voltage is defined as the difference between its positive peak and its negative peak. Since the maximum value of is +1 and the minimum value is −1, an AC voltage swings between and . The peak-to-peak voltage, usually written as or , is therefore . Root mean square voltage Below an AC waveform (with no DC component) is assumed. The RMS voltage is the square root of the mean over one cycle of the square of the instantaneous voltage. Power The relationship between voltage and the power delivered is: , where represents a load resistance. Rather than using instantaneous power, , it is more practical to use a time-averaged power (where the averaging is performed over any integer number of cycles). Therefore, AC voltage is often expressed as a root mean square (RMS) value, written as , because Power oscillation For this reason, AC power's waveform becomes Full-wave rectified sine, and its fundamental frequency is double of the one of the voltage's. Examples of alternating current To illustrate these concepts, consider a 230 V AC mains supply used in many countries around the world. It is so called because its root mean square value is 230 V. This means that the time-averaged power delivered is equivalent to the power delivered by a DC voltage of 230 V. To determine the peak voltage (amplitude), we can rearrange the above equation to: For 230 V AC, the peak voltage is therefore , which is about 325 V, and the peak power is , that is 460 RW. 
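The 230 V example can be checked directly: for a sine wave the RMS value is the peak divided by √2, so a 230 V RMS supply swings to about ±325 V. The Python sketch below computes the peak and peak-to-peak values and verifies the RMS figure numerically by averaging the squared waveform over one cycle; the 50 Hz frequency is an assumed mains value.

```python
import math

# Peak and RMS values of a sinusoidal AC voltage. 230 V RMS is the mains
# example used in the text; the RMS check is a simple average of v(t)^2
# over one full cycle.

V_RMS = 230.0
FREQUENCY_HZ = 50.0          # assumed mains frequency

v_peak = V_RMS * math.sqrt(2.0)
v_peak_to_peak = 2.0 * v_peak

# Numerical check of the RMS value over one cycle.
samples = 10_000
mean_square = sum(
    (v_peak * math.sin(2.0 * math.pi * i / samples)) ** 2
    for i in range(samples)
) / samples
v_rms_numeric = math.sqrt(mean_square)

print(f"Peak voltage    ~ {v_peak:6.1f} V")
print(f"Peak-to-peak    ~ {v_peak_to_peak:6.1f} V")
print(f"RMS (numerical) ~ {v_rms_numeric:6.1f} V")
```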
During the course of one voltage cycle (which corresponds to two cycles of the power waveform), the voltage rises from zero to 325 V and the power rises from zero to its peak value, and both fall back through zero. The voltage then reverses direction, descending to −325 V, while the power rises again to the same peak, and both return to zero. Information transmission Alternating current is used to transmit information, as in the cases of telephone and cable television. Information signals are carried over a wide range of AC frequencies. POTS telephone signals have a frequency of about 3 kHz, close to the baseband audio frequency. Cable television and other cable-transmitted information currents may alternate at frequencies of tens to thousands of megahertz. These frequencies are similar to the electromagnetic wave frequencies often used to transmit the same types of information over the air. History The first alternator to produce alternating current was an electric generator based on Michael Faraday's principles constructed by the French instrument maker Hippolyte Pixii in 1832. Pixii later added a commutator to his device to produce the (then) more commonly used direct current. The earliest recorded practical application of alternating current is by Guillaume Duchenne, inventor and developer of electrotherapy. In 1855, he announced that AC was superior to direct current for electrotherapeutic triggering of muscle contractions. Alternating current technology was developed further by the Hungarian Ganz Works company (1870s), and in the 1880s by Sebastian Ziani de Ferranti, Lucien Gaulard, and Galileo Ferraris. In 1876, Russian engineer Pavel Yablochkov invented a lighting system where sets of induction coils were installed along a high-voltage AC line. Instead of changing voltage, the primary windings transferred power to the secondary windings, which were connected to one or several electric candles (arc lamps) of his own design, used to keep the failure of one lamp from disabling the entire circuit. In 1878, the Ganz factory, Budapest, Hungary, began manufacturing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment. Transformers The development of the alternating current transformer, which could change voltage from low to high level and back, allowed generation and consumption at low voltages and transmission, over great distances, at high voltage, with savings in the cost of conductors and energy losses. A bipolar open-core power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. An AC system powering arc and incandescent lights was installed along five railway stations of the Metropolitan Railway in London, and a single-phase multiple-user AC distribution system was exhibited in Turin in 1884. These early induction coils with open magnetic circuits were inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high-voltage supply to a low-voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. 
Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil. The direct current systems did not have these drawbacks, giving it significant advantages over early AC systems. In the UK, Sebastian de Ferranti, who had been developing AC generators and transformers in London since 1882, redesigned the AC system at the Grosvenor Gallery power station in 1886 for the London Electric Supply Corporation (LESCo) including alternators of his own design and open core transformer designs with serial connections for utilization loads - similar to Gaulard and Gibbs. In 1890, he designed their power station at Deptford and converted the Grosvenor Gallery station across the Thames into an electrical substation, showing the way to integrate older plants into a universal AC supply system. In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three engineers associated with the Ganz Works of Budapest, determined that open-core devices were impractical, as they were incapable of reliably regulating voltage. Bláthy had suggested the use of closed cores, Zipernowsky had suggested the use of parallel shunt connections, and Déri had performed the experiments; In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either wound around a ring core of iron wires or else surrounded by a core of iron wires. In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see toroidal cores). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs. The Ganz factory in 1884 shipped the world's first five high-efficiency AC transformers. This first unit had been manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form. The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 140 to 2000 V) than the voltage of utilization loads (100 V initially preferred). When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. The other essential milestone was the introduction of 'voltage source, voltage intensive' (VSVI) systems' by the invention of constant voltage generators in 1885. In early 1885, the three engineers also eliminated the problem of eddy current losses with the invention of the lamination of electromagnetic cores. Ottó Bláthy also invented the first AC electricity meter. Adoption The AC power system was developed and adopted rapidly after 1886. In March of that year, Westinghouse engineer William Stanley, designing a system based on the Gaulard and Gibbs transformer, demonstrated a lighting system in Great Barrington: A Siemens generator's voltage of 500 volts was converted into 3000 volts, and then the voltage was stepped down to 500 volts by six Westinghouse transformers. 
With this setup, the Westinghouse company successfully powered thirty 100-volt incandescent bulbs in twenty shops along the main street of Great Barrington. By the fall of that year Ganz engineers installed a ZBD transformer power system with AC generators in Rome. Based on Stanley's success, the new Westinghouse Electric went on to develop alternating current (AC) electric infrastructure throughout the United States. The spread of Westinghouse and other AC systems triggered a push back in late 1887 by Thomas Edison (a proponent of direct current), who attempted to discredit alternating current as too dangerous in a public campaign called the "war of the currents". In 1888, alternating current systems gained further viability with the introduction of a functional AC motor, something these systems had lacked up till then. The design, an induction motor, was independently invented by Galileo Ferraris and Nikola Tesla (with Tesla's design being licensed by Westinghouse in the US). This design was independently further developed into the modern practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown in Germany on one side, and Jonas Wenström in Sweden on the other, though Brown favored the two-phase system. The Ames Hydroelectric Generating Plant, constructed in 1890, was among the first hydroelectric alternating current power plants. A long-distance transmission of single-phase electricity from a hydroelectric generating plant in Oregon at Willamette Falls sent power fourteen miles downriver to downtown Portland for street lighting in 1890. In 1891, another transmission system was installed in Telluride Colorado. The first three-phase system was established in 1891 in Frankfurt, Germany. The Tivoli–Rome transmission was completed in 1892. The San Antonio Canyon Generator was the third commercial single-phase hydroelectric AC power plant in the United States to provide long-distance electricity. It was completed on December 31, 1892, by Almarian William Decker to provide power to the city of Pomona, California, which was 14 miles away. Meanwhile, the possibility of transferring electrical power from a waterfall at a distance was explored at the Grängesberg mine in Sweden. A fall at Hällsjön, Smedjebackens kommun, where a small iron work had been located, was selected. In 1893, a three-phase system was used to transfer 400 horsepower a distance of , becoming the first commercial application. In 1893, Westinghouse built an alternating current system for the Chicago World Exposition. In 1893, Decker designed the first American commercial three-phase power plant using alternating current—the hydroelectric Mill Creek No. 1 Hydroelectric Plant near Redlands, California. Decker's design incorporated 10 kV three-phase transmission and established the standards for the complete system of generation, transmission and motors used in USA today. The original Niagara Falls Adams Power Plant with three two-phase generators was put into operation in August 1895, but was connected to the remote transmission system only in 1896. The Jaruga Hydroelectric Power Plant in Croatia was set in operation two days later, on 28 August 1895. Its generator (42 Hz, 240 kW) was made and installed by the Hungarian company Ganz, while the transmission line from the power plant to the City of Šibenik was long, and the municipal distribution grid 3000 V/110 V included six transforming stations. 
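The transformer figures quoted in this section — the first Ganz unit's 120:72 V at 11.6:19.4 A with a ratio of 1.67:1, and the Great Barrington demonstration's 500 V stepped up to 3,000 V and back down to 500 V — are consistent with the ideal-transformer relations Vs/Vp = Ns/Np and Vp·Ip = Vs·Is. Those relations are not spelled out in the text, so the Python sketch below is only a consistency check of the quoted numbers under that standard idealization, with losses ignored.

```python
# Ideal-transformer consistency check of figures quoted in the text.
# Assumes the standard ideal relations V_s/V_p = N_s/N_p and V_p*I_p = V_s*I_s;
# real transformers have losses, so this is only an approximate check.

# First Ganz unit: 120:72 V, 11.6:19.4 A, quoted ratio 1.67:1, about 1,400 W.
V_P, V_S = 120.0, 72.0
I_P, I_S = 11.6, 19.4

print(f"Voltage (turns) ratio   : {V_P / V_S:.2f}   (quoted as 1.67:1)")
print(f"Current ratio           : {I_S / I_P:.2f}   (should match the turns ratio)")
print(f"Primary apparent power  : {V_P * I_P:6.0f} VA")
print(f"Secondary apparent power: {V_S * I_S:6.0f} VA   (quoted rating ~1,400 W)")

# Great Barrington demonstration: 500 V stepped up to 3,000 V and back down.
print(f"Great Barrington step-up ratio: {3000 / 500:.0f}:1")
```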
Alternating current circuit theory developed rapidly in the latter part of the 19th and early 20th century. Notable contributors to the theoretical basis of alternating current calculations include Charles Steinmetz, Oliver Heaviside, and many others. Calculations in unbalanced three-phase systems were simplified by the symmetrical components methods discussed by Charles LeGeyt Fortescue in 1918. See also AC power Electrical wiring Heavy-duty power plugs Hertz Leading and lagging current Mains electricity by country AC power plugs and sockets Utility frequency War of the currents AC/DC receiver design References Further reading Willam A. Meyers, History and Reflections on the Way Things Were: Mill Creek Power Plant – Making History with AC, IEEE Power Engineering Review, February 1997, pp. 22–24 External links "AC/DC: What's the Difference?". Edison's Miracle of Light, American Experience. (PBS) "AC/DC: Inside the AC Generator ". Edison's Miracle of Light, American Experience. (PBS) Professor Mark Csele's tour of the 25 Hz Rankine generating station Blalock, Thomas J., "The Frequency Changer Era: Interconnecting Systems of Varying Cycles". The history of various frequencies and interconversion schemes in the US at the beginning of the 20th century AC Power History and Timeline Electrical engineering Electric current Electric power AC power
Alternating current
[ "Physics", "Engineering" ]
5,720
[ "Physical quantities", "Electrical engineering", "Power (physics)", "Electric power", "Electric current", "Wikipedia categories named after physical quantities" ]
43,024
https://en.wikipedia.org/wiki/Levee
A levee ( or ), dike (American English), dyke (British English; see spelling differences), embankment, floodbank, or stop bank is an elevated ridge, natural or artificial, alongside the banks of a river, often intended to protect against flooding of the area adjoining the river. It is usually earthen and often runs parallel to the course of a river in its floodplain or along low-lying coastlines. Naturally occurring levees form on river floodplains following flooding, where sediment and alluvium is deposited and settles, forming a ridge and increasing the river channel's capacity. Alternatively, levees can be artificially constructed from fill, designed to regulate water levels. In some circumstances, artificial levees can be environmentally damaging. Ancient civilizations in the Indus Valley, ancient Egypt, Mesopotamia and China all built levees. Today, levees can be found around the world, and failures of levees due to erosion or other causes can be major disasters, such as the catastrophic 2005 levee failures in Greater New Orleans that occurred as a result of Hurricane Katrina. Etymology Speakers of American English use the word levee, from the French word (from the feminine past participle of the French verb , 'to raise'). It originated in New Orleans a few years after the city's founding in 1718 and was later adopted by English speakers. The name derives from the trait of the levee's ridges being raised higher than both the channel and the surrounding floodplains. The modern word dike or dyke most likely derives from the Dutch word , with the construction of dikes well attested as early as the 11th century. The Westfriese Omringdijk, completed by 1250, was formed by connecting existing older dikes. The Roman chronicler Tacitus mentions that the rebellious Batavi pierced dikes to flood their land and to protect their retreat (70 CE). The word originally indicated both the trench and the bank. It closely parallels the English verb to dig. In Anglo-Saxon, the word already existed and was pronounced as dick in northern England and as ditch in the south. Similar to Dutch, the English origins of the word lie in digging a trench and forming the upcast soil into a bank alongside it. This practice has meant that the name may be given to either the excavation or to the bank. Thus Offa's Dyke is a combined structure and Car Dyke is a trench – though it once had raised banks as well. In the English Midlands and East Anglia, and in the United States, a dike is what a ditch is in the south of England, a property-boundary marker or drainage channel. Where it carries a stream, it may be called a running dike as in Rippingale Running Dike, which leads water from the catchwater drain, Car Dyke, to the South Forty Foot Drain in Lincolnshire (TF1427). The Weir Dike is a soak dike in Bourne North Fen, near Twenty and alongside the River Glen, Lincolnshire. In the Norfolk and Suffolk Broads, a dyke may be a drainage ditch or a narrow artificial channel off a river or broad for access or mooring, some longer dykes being named, e.g., Candle Dyke. In parts of Britain, particularly Scotland and Northern England, a dyke may be a field wall, generally made with dry stone. Uses The main purpose of artificial levees is to prevent flooding of the adjoining countryside and to slow natural course changes in a waterway to provide reliable shipping lanes for maritime commerce over time; they also confine the flow of the river, resulting in higher and faster water flow. 
Levees can be mainly found along the sea, where dunes are not strong enough, along rivers for protection against high floods, along lakes or along polders. Furthermore, levees have been built for the purpose of impoldering, or as a boundary for an inundation area. The latter can be a controlled inundation by the military or a measure to prevent inundation of a larger area surrounded by levees. Levees have also been built as field boundaries and as military defences. More on this type of levee can be found in the article on dry-stone walls. Levees can be permanent earthworks or emergency constructions (often of sandbags) built hastily in a flood emergency. Some of the earliest levees were constructed by the Indus Valley civilization (in Pakistan and North India from ) on which the agrarian life of the Harappan peoples depended. Levees were also constructed over 3,000 years ago in ancient Egypt, where a system of levees was built along the left bank of the River Nile for more than , stretching from modern Aswan to the Nile Delta on the shores of the Mediterranean. The Mesopotamian civilizations and ancient China also built large levee systems. Because a levee is only as strong as its weakest point, the height and standards of construction have to be consistent along its length. Some authorities have argued that this requires a strong governing authority to guide the work and may have been a catalyst for the development of systems of governance in early civilizations. However, others point to evidence of large-scale water-control earthen works such as canals and/or levees dating from before King Scorpion in Predynastic Egypt, during which governance was far less centralized. Another example of a historical levee that protected the growing city-state of Mēxihco-Tenōchtitlan and the neighboring city of Tlatelōlco, was constructed during the early 1400s, under the supervision of the tlahtoani of the altepetl Texcoco, Nezahualcoyotl. Its function was to separate the brackish waters of Lake Texcoco (ideal for the agricultural technique Chināmitls) from the fresh potable water supplied to the settlements. However, after the Europeans destroyed Tenochtitlan, the levee was also destroyed and flooding became a major problem, which resulted in the majority of The Lake being drained in the 17th century. Levees are usually built by piling earth on a cleared, level surface. Broad at the base, they taper to a level top, where temporary embankments or sandbags can be placed. Because flood discharge intensity increases in levees on both river banks, and because silt deposits raise the level of riverbeds, planning and auxiliary measures are vital. Sections are often set back from the river to form a wider channel, and flood valley basins are divided by multiple levees to prevent a single breach from flooding a large area. A levee made from stones laid in horizontal rows with a bed of thin turf between each of them is known as a spetchel. Artificial levees require substantial engineering. Their surface must be protected from erosion, so they are planted with vegetation such as Bermuda grass in order to bind the earth together. On the land side of high levees, a low terrace of earth known as a banquette is usually added as another anti-erosion measure. On the river side, erosion from strong waves or currents presents an even greater threat to the integrity of the levee. The effects of erosion are countered by planting suitable vegetation or installing stones, boulders, weighted matting, or concrete revetments. 
Separate ditches or drainage tiles are constructed to ensure that the foundation does not become waterlogged. River flood prevention Prominent levee systems have been built along the Mississippi River and Sacramento River in the United States, and the Po, Rhine, Meuse River, Rhône, Loire, Vistula, the delta formed by the Rhine, Maas/Meuse and Scheldt in the Netherlands and the Danube in Europe. During the Chinese Warring States period, the Dujiangyan irrigation system was built by the Qin as a water conservation and flood control project. The system's infrastructure is located on the Min River, which is the longest tributary of the Yangtze River, in Sichuan, China. The Mississippi levee system represents one of the largest such systems found anywhere in the world. It comprises over of levees extending some along the Mississippi, stretching from Cape Girardeau, Missouri, to the Mississippi delta. They were begun by French settlers in Louisiana in the 18th century to protect the city of New Orleans. The first Louisiana levees were about high and covered a distance of about along the riverside. The U.S. Army Corps of Engineers, in conjunction with the Mississippi River Commission, extended the levee system beginning in 1882 to cover the riverbanks from Cairo, Illinois to the mouth of the Mississippi delta in Louisiana. By the mid-1980s, they had reached their present extent and averaged in height; some Mississippi levees are as high as . The Mississippi levees also include some of the longest continuous individual levees in the world. One such levee extends southwards from Pine Bluff, Arkansas, for a distance of some . The scope and scale of the Mississippi levees has often been compared to the Great Wall of China. The United States Army Corps of Engineers (USACE) recommends and supports cellular confinement technology (geocells) as a best management practice. Particular attention is given to the matter of surface erosion, overtopping prevention and protection of levee crest and downstream slope. Reinforcement with geocells provides tensile force to the soil to better resist instability. Artificial levees can lead to an elevation of the natural riverbed over time; whether this happens or not and how fast, depends on different factors, one of them being the amount and type of the bed load of a river. Alluvial rivers with intense accumulations of sediment tend to this behavior. Examples of rivers where artificial levees led to an elevation of the riverbed, even up to a point where the riverbed is higher than the adjacent ground surface behind the levees, are found for the Yellow River in China and the Mississippi in the United States. Coastal flood prevention Levees are very common on the marshlands bordering the Bay of Fundy in New Brunswick and Nova Scotia, Canada. The Acadians who settled the area can be credited with the original construction of many of the levees in the area, created for the purpose of farming the fertile tidal marshlands. These levees are referred to as dykes. They are constructed with hinged sluice gates that open on the falling tide to drain freshwater from the agricultural marshlands and close on the rising tide to prevent seawater from entering behind the dyke. These sluice gates are called "aboiteaux". In the Lower Mainland around the city of Vancouver, British Columbia, there are levees (known locally as dikes, and also referred to as "the sea wall") to protect low-lying land in the Fraser River delta, particularly the city of Richmond on Lulu Island. 
There are also dikes to protect other locations which have flooded in the past, such as the Pitt Polder, land adjacent to the Pitt River, and other tributary rivers. Coastal flood prevention levees are also common along the inland coastline behind the Wadden Sea, an area devastated by many historic floods. Thus the peoples and governments have erected increasingly large and complex flood protection levee systems to stop the sea even during storm floods. The biggest of these are the huge levees in the Netherlands, which have gone beyond just defending against floods, as they have aggressively taken back land that is below mean sea level. Spur dykes or groynes These typically man-made hydraulic structures are situated to protect against erosion. They are typically placed in alluvial rivers perpendicular, or at an angle, to the bank of the channel or the revetment, and are used widely along coastlines. There are two common types of spur dyke, permeable and impermeable, depending on the materials used to construct them. Natural examples Natural levees commonly form around lowland rivers and creeks without human intervention. They are elongated ridges of mud and/or silt that form on the river floodplains immediately adjacent to the cut banks. Like artificial levees, they act to reduce the likelihood of floodplain inundation. Deposition of levees is a natural consequence of the flooding of meandering rivers which carry high proportions of suspended sediment in the form of fine sands, silts, and muds. Because the carrying capacity of a river depends in part on its depth, the sediment in the water which is over the flooded banks of the channel is no longer capable of keeping the same number of fine sediments in suspension as the main thalweg. The extra fine sediments thus settle out quickly on the parts of the floodplain nearest to the channel. Over a significant number of floods, this will eventually result in the building up of ridges in these positions and reducing the likelihood of further floods and episodes of levee building. If aggradation continues to occur in the main channel, this will make levee overtopping more likely again, and the levees can continue to build up. In some cases, this can result in the channel bed eventually rising above the surrounding floodplains, penned in only by the levees around it; an example is the Yellow River in China near the sea, where oceangoing ships appear to sail high above the plain on the elevated river. Levees are common in any river with a high suspended sediment fraction and thus are intimately associated with meandering channels, which also are more likely to occur where a river carries large fractions of suspended sediment. For similar reasons, they are also common in tidal creeks, where tides bring in large amounts of coastal silts and muds. High spring tides will cause flooding, and result in the building up of levees. Failures and breaches Both natural and man-made levees can fail in a number of ways. Factors that cause levee failure include overtopping, erosion, structural failures, and levee saturation. The most frequent (and dangerous) is a levee breach. Here, a part of the levee actually breaks or is eroded away, leaving a large opening for water to flood land otherwise protected by the levee. A breach can be a sudden or gradual failure, caused either by surface erosion or by subsurface weakness in the levee. A breach can leave a fan-shaped deposit of sediment radiating away from the breach, described as a crevasse splay. 
In natural levees, once a breach has occurred, the gap in the levee will remain until it is again filled in by levee building processes. This increases the chances of future breaches occurring in the same location. Breaches can be the location of meander cutoffs if the river flow direction is permanently diverted through the gap. Sometimes levees are said to fail when water overtops the crest of the levee. This will cause flooding on the floodplains, but because it does not damage the levee, it has fewer consequences for future flooding. Among the various failure mechanisms that cause levee breaches, soil erosion is found to be one of the most important factors. Predicting soil erosion and scour generation when overtopping happens is important in order to design stable levees and floodwalls. There have been numerous studies to investigate the erodibility of soils. Briaud et al. (2008) used the Erosion Function Apparatus (EFA) test to measure the erodibility of soils; numerical simulations were then performed with Chen 3D software to determine the velocity vectors in the overtopping water and the scour generated when the overtopping water impinges on the levee. By analyzing the results of the EFA tests, an erosion chart categorizing the erodibility of soils was developed. Hughes and Nadal (2009) studied the combined effect of wave overtopping and storm surge overflow on erosion and scour generation in levees. The study analyzed scour development in terms of hydraulic parameters and flow characteristics such as flow thickness, wave intervals, and surge level above the levee crown. From the laboratory tests, empirical correlations related to the average overtopping discharge were derived to analyze a levee's resistance against erosion. These equations strictly fit only conditions similar to those of the experimental tests, although they can give a reasonable estimate when applied to other conditions. Osouli et al. (2014) and Karimpour et al. (2015) conducted laboratory-scale physical modeling of levees to evaluate the scour characterization of different levees due to floodwall overtopping. Another approach applied to prevent levee failures is electrical resistivity tomography (ERT). This non-destructive geophysical method can detect critical saturation areas in embankments in advance. ERT can thus be used to monitor seepage phenomena in earth structures and act as an early warning system, e.g., in critical parts of levees or embankments. Negative impacts Large scale structures designed to modify natural processes inevitably have some drawbacks or negative impacts. Ecological impact Levees interrupt floodplain ecosystems that developed under conditions of seasonal flooding. In many cases, the impact is two-fold, as reduced recurrence of flooding also facilitates land-use change from forested floodplain to farms. Increased height In a natural watershed, floodwaters spread over a landscape and slowly return to the river. Downstream, the delivery of water from the area of flooding is spread out in time. If levees keep the floodwaters inside a narrow channel, the water is delivered downstream over a shorter time period. The same volume of water over a shorter time interval means a higher river stage (height). As more levees are built upstream, the recurrence interval for high-water events in the river increases, often requiring increases in levee height. Levee breaches produce high-energy flooding During natural flooding, water spilling over banks rises slowly. 
When a levee fails, a wall of water held back by the levee suddenly pours out over the landscape, much like a dam break. Impacted areas far from a breach may experience flooding similar to a natural event, while damage near a breach can be catastrophic, including carving out deep holes and channels in the nearby landscape. Prolonged flooding after levee failure Under natural conditions, floodwaters return quickly to the river channel as water-levels drop. During a levee breach, water pours out into the floodplain and moves down-slope where it is blocked from return to the river. Flooding is prolonged over such areas, waiting for floodwater to slowly infiltrate and evaporate. Subsidence and seawater intrusion Natural flooding adds a layer of sediment to the floodplain. The added weight of such layers over many centuries makes the crust sink deeper into the mantle, much like a floating block of wood is pushed deeper into the water if another board is added on top. The momentum of downward movement does not immediately stop when new sediment layers stop being added, resulting in subsidence (sinking of land surface). In coastal areas, this results in land dipping below sea level, the ocean migrating inland, and salt-water intruding into freshwater aquifers. Coastal sediment loss Where a large river spills out into the ocean, the velocity of the water suddenly slows and its ability to transport sand and silt decreases. Sediments begin to settle out, eventually forming a delta and extending to the coastline seaward. During subsequent flood events, water spilling out of the channel will find a shorter route to the ocean and begin building a new delta. Wave action and ocean currents redistribute some of the sediment to build beaches along the coast. When levees are constructed all the way to the ocean, sediments from flooding events are cut off, the river never migrates, and elevated river velocity delivers sediment to deep water where wave action and ocean currents cannot redistribute. Instead of a natural wedge shaped delta forming, a "birds-foot delta" extends far out into the ocean. The results for surrounding land include beach depletion, subsidence, salt-water intrusion, and land loss. See also Lava channel Notes References External links "Well Diggers Trick", June 1951, Popular Science article on how flood control engineers were using an old method to protect flood levees along rivers from seepage undermining the levee "Design and Construction of Levees" US Army Engineer Manual EM-1110-2-1913 The International Levee Handbook Flood control Fluvial landforms Riparian zone
Levee
[ "Chemistry", "Engineering", "Environmental_science" ]
4,165
[ "Flood control", "Riparian zone", "Hydrology", "Environmental engineering" ]
43,050
https://en.wikipedia.org/wiki/Neo-Darwinism
Neo-Darwinism is generally used to describe any integration of Charles Darwin's theory of evolution by natural selection with Gregor Mendel's theory of genetics. It mostly refers to evolutionary theory from either 1895 (for the combinations of Darwin's and August Weismann's theories of evolution) or 1942 ("modern synthesis"), but it can mean any new Darwinian- and Mendelian-based theory, such as the current evolutionary theory. Original use Darwin's theory of evolution by natural selection, as published in 1859, provided a selection mechanism for evolution, but not a trait transfer mechanism. Lamarckism was still a very popular candidate for this. August Weismann and Alfred Russel Wallace rejected the Lamarckian idea of inheritance of acquired characteristics that Darwin had accepted and later expanded upon in his writings on heredity. The basis for the complete rejection of Lamarckism was Weismann's germ plasm theory. Weismann realised that the cells that produce the germ plasm, or gametes (such as sperm and eggs in animals), separate from the somatic cells that go on to make other body tissues at an early stage in development. Since he could see no obvious means of communication between the two, he asserted that the inheritance of acquired characteristics was therefore impossible; a conclusion now known as the Weismann barrier. It is, however, usually George Romanes who is credited with the first use of the word in a scientific context. Romanes used the term to describe the combination of natural selection and Weismann's germ plasm theory that evolution occurs solely through natural selection, and not by the inheritance of acquired characteristics resulting from use or disuse, thus using the word to mean "Darwinism without Lamarckism." Following the development, from about 1918 to 1947, of the modern synthesis of evolutionary biology, the term neo-Darwinian started to be used to refer to that contemporary evolutionary theory. Current meaning Biologists, however, have not limited their application of the term neo-Darwinism to the historical synthesis. For example, Ernst Mayr wrote in 1984 that: The term neo-Darwinism for the synthetic theory [of the early 20th century] is sometimes considered wrong, because the term neo-Darwinism was coined by Romanes in 1895 as a designation of Weismann's theory. Publications such as Encyclopædia Britannica use neo-Darwinism to refer to current-consensus evolutionary theory, not the version prevalent during the early 20th century. Similarly, Richard Dawkins and Stephen Jay Gould have used neo-Darwinism in their writings and lectures to denote the forms of evolutionary biology that were contemporary when they were writing. See also History of evolutionary thought References Evolutionary biology
Neo-Darwinism
[ "Biology" ]
568
[ "Evolutionary biology" ]
43,052
https://en.wikipedia.org/wiki/Quantum%20evolution
Quantum evolution is a component of George Gaylord Simpson's multi-tempoed theory of evolution proposed to explain the rapid emergence of higher taxonomic groups in the fossil record. According to Simpson, evolutionary rates differ from group to group and even among closely related lineages. These different rates of evolutionary change were designated by Simpson as bradytelic (slow tempo), horotelic (medium tempo), and tachytelic (rapid tempo). Quantum evolution differed from these styles of change in that it involved a drastic shift in the adaptive zones of certain classes of animals. The word "quantum" therefore refers to an "all-or-none reaction", where transitional forms are particularly unstable, and thereby perish rapidly and completely. Although quantum evolution may happen at any taxonomic level, it plays a much larger role in "the origin taxonomic units of relatively high rank, such as families, orders, and classes." Quantum evolution in plants Usage of the phrase "quantum evolution" in plants was apparently first articulated by Verne Grant in 1963 (pp. 458-459). He cited an earlier 1958 paper by Harlan Lewis and Peter H. Raven, wherein Grant asserted that Lewis and Raven gave a "parallel" definition of quantum evolution as defined by Simpson. Lewis and Raven postulated that species in the Genus Clarkia had a mode of speciation that resulted ...as a consequence of a rapid reorganization of the chromosomes due to the presence, at some time, of a genotype conducive to extensive chromosome breakage. A similar mode of origin by rapid reorganization of the chromosomes is suggested for the derivation of other species of Clarkia. In all of these examples the derivative populations grow adjacent to the parental species, which they resemble closely in morphology, but from which they are reproductively isolated because of multiple structural differences in their chromosomes. The spatial relationship of each parental species and its derivative suggests that differentiation has been recent. The repeated occurrence of the same pattern of differentiation in Clarkia suggests that a rapid reorganization of chromosomes has been an important mode of evolution in the genus. This rapid reorganization of the chromosomes is comparable to the systemic mutations proposed by Goldschmidt as a mechanism of macroevolution. In Clarkia, we have not observed marked changes in physiology and pattern of development that could be described as macroevolution. Reorganization of the genomes may, however, set the stage for subsequent evolution along a very different course from that of the ancestral populations Harlan Lewis refined this concept in a 1962 paper where he coined the term "Catastrophic Speciation" to describe this mode of speciation, since he theorized that the reductions in population size and consequent inbreeding that led to chromosomal rearrangements occurred in small populations that were subject to severe drought. Leslie D. Gottlieb in his 2003 summary of the subject in plants stated we can define quantum speciation as the budding off of a new and very different daughter species from a semi-isolated peripheral population of the ancestral species in a cross-fertilizing organism...as compared with geographical speciation, which is a gradual and conservative process, quantum speciation is rapid and radical in its phenotypic or genotypic effects or both. 
Gottlieb did not believe that sympatric speciation required disruptive selection to form a reproductive isolating barrier, as defined by Grant, and in fact Gottlieb stated that requiring disruptive selection was "unnecessarily restrictive" in identifying cases of sympatric speciation. In this 2003 paper Gottlieb summarized instances of quantum evolution in the plant species Clarkia, Layia, and Stephanomeria. Mechanisms According to Simpson (1944), quantum evolution resulted from Sewall Wright's model of random genetic drift. Simpson believed that major evolutionary transitions would arise when small populations, that were isolated and limited from gene flow, would fixate upon unusual gene combinations. This "inadaptive phase" (caused by genetic drift) would then (by natural selection) drive a deme population from one stable adaptive peak to another on the adaptive fitness landscape. However, in his Major Features of Evolution (1953) Simpson wrote that this mechanism was still controversial: "whether prospective adaptation as prelude to quantum evolution arises adaptively or inadaptively. It was concluded above that it usually arises adaptively . . . . The precise role of, say, genetic drift in this process thus is largely speculative at present. It may have an essential part or none. It surely is not involved in all cases of quantum evolution, but there is a strong possibility that it is often involved. If or when it is involved, it is an initiating mechanism. Drift can only rarely, and only for lower categories, have completed the transition to a new adaptive zone." This preference for adaptive over inadaptive forces led Stephen Jay Gould to call attention to the "hardening of the Modern Synthesis", a trend in the 1950s where adaptationism took precedence over the pluralism of mechanisms common in the 1930s and 40s. Simpson considered quantum evolution his crowning achievement, being "perhaps the most important outcome of [my] investigation, but also the most controversial and hypothetical." See also Environmental niche modelling Mutationism Punctuated equilibrium Quantum speciation Rapid modes of evolution Shifting balance theory Sympatric speciation References Sources Eldredge, Niles (1995). Reinventing Darwin. New York: John Wiley & Sons. pp. 20-26. Gould, S. J. (1994). "Tempo and mode in the macroevolutionary reconstruction on Darwinism" PNAS USA 91(15): 6764-71. Gould S.J. (2002). The Structure of Evolutionary Theory Cambridge MA: Harvard Univ. Press. pp. 529-31. Mayr, Ernst (1976). Evolution and the Diversity of Life. Cambridge MA: Belknap Press. p. 206. Mayr, Ernst (1982). The Growth of Biological Thought. Cambridge MA: Belknap Press. pp. 555, 609-10. External links George Gaylord Simpson - Biographical sketch. Tempo and Mode in Evolution: Genetics and Paleontology 50 Years After Simpson Evolutionary biology Modern synthesis (20th century) Rate of evolution
Quantum evolution
[ "Biology" ]
1,286
[ "Evolutionary biology" ]
43,093
https://en.wikipedia.org/wiki/Flagellum
A flagellum (; : flagella) (Latin for 'whip' or 'scourge') is a hair-like appendage that protrudes from certain plant and animal sperm cells, from fungal spores (zoospores), and from a wide range of microorganisms to provide motility. Many protists with flagella are known as flagellates. A microorganism may have from one to many flagella. A gram-negative bacterium Helicobacter pylori, for example, uses its flagella to propel itself through the stomach to reach the mucous lining where it may colonise the epithelium and potentially cause gastritis, and ulcers – a risk factor for stomach cancer. In some swarming bacteria, the flagellum can also function as a sensory organelle, being sensitive to wetness outside the cell. Across the three domains of Bacteria, Archaea, and Eukaryota, the flagellum has a different structure, protein composition, and mechanism of propulsion but shares the same function of providing motility. The Latin word means "whip" to describe its lash-like swimming motion. The flagellum in archaea is called the archaellum to note its difference from the bacterial flagellum. Eukaryotic flagella and cilia are identical in structure but have different lengths and functions. Prokaryotic fimbriae and pili are smaller, and thinner appendages, with different functions. Cilia are attached to the surface of flagella and are used to swim or move fluid from one region to another. Types The three types of flagella are bacterial, archaeal, and eukaryotic. The flagella in eukaryotes have dynein and microtubules that move with a bending mechanism. Bacteria and archaea do not have dynein or microtubules in their flagella, and they move using a rotary mechanism. Other differences among these three types are: Bacterial flagella are helical filaments, each with a rotary motor at its base which can turn clockwise or counterclockwise. They provide two of several kinds of bacterial motility. Archaeal flagella (archaella) are superficially similar to bacterial flagella in that it also has a rotary motor, but are different in many details and considered non-homologous. Eukaryotic flagella—those of animal, plant, and protist cells—are complex cellular projections that lash back and forth. Eukaryotic flagella and motile cilia are identical in structure, but have different lengths, waveforms, and functions. Primary cilia are immotile, and have a structurally different 9+0 axoneme rather than the 9+2 axoneme found in both flagella and motile cilia. Bacterial flagella Structure and composition The bacterial flagellum is made up of protein subunits of flagellin. Its shape is a 20-nanometer-thick hollow tube. It is helical and has a sharp bend just outside the outer membrane; this "hook" allows the axis of the helix to point directly away from the cell. A shaft runs between the hook and the basal body, passing through protein rings in the cell's membrane that act as bearings. Gram-positive organisms have two of these basal body rings, one in the peptidoglycan layer and one in the plasma membrane. Gram-negative organisms have four such rings: the L ring associates with the lipopolysaccharides, the P ring associates with peptidoglycan layer, the M ring is embedded in the plasma membrane, and the S ring is directly attached to the cytoplasm. The filament ends with a capping protein. The flagellar filament is the long, helical screw that propels the bacterium when rotated by the motor, through the hook. 
In most bacteria that have been studied, including the gram-negative Escherichia coli, Salmonella typhimurium, Caulobacter crescentus, and Vibrio alginolyticus, the filament is made up of 11 protofilaments approximately parallel to the filament axis. Each protofilament is a series of tandem protein chains. However, Campylobacter jejuni has seven protofilaments. The basal body has several traits in common with some types of secretory pores, such as the hollow, rod-like "plug" in their centers extending out through the plasma membrane. The similarities between bacterial flagella and bacterial secretory system structures and proteins provide scientific evidence supporting the theory that bacterial flagella evolved from the type-three secretion system (TTSS). The atomic structure of both bacterial flagella as well as the TTSS injectisome have been elucidated in great detail, especially with the development of cryo-electron microscopy. The best understood parts are the parts between the inner and outer membrane, that is, the scaffolding rings of the inner membrane (IM), the scaffolding pairs of the outer membrane (OM), and the rod/needle (injectisome) or rod/hook (flagellum) sections. Motor The bacterial flagellum is driven by a rotary engine (Mot complex) made up of protein, located at the flagellum's anchor point on the inner cell membrane. The engine is powered by proton-motive force, i.e., by the flow of protons (hydrogen ions) across the bacterial cell membrane due to a concentration gradient set up by the cell's metabolism (Vibrio species have two kinds of flagella, lateral and polar, and some are driven by a sodium ion pump rather than a proton pump). The rotor transports protons across the membrane, and is turned in the process. The rotor alone can operate at 6,000 to 100,000 rpm, but with the flagellar filament attached usually only reaches 200 to 1000 rpm. The direction of rotation can be changed by the flagellar motor switch almost instantaneously, caused by a slight change in the position of a protein, FliG, in the rotor. The torque is transferred from the MotAB to the torque helix on FliG's D5 domain and with the increase in the requirement of the torque or speed more MotAB are employed. Because the flagellar motor has no on-off switch, the protein epsE is used as a mechanical clutch to disengage the motor from the rotor, thus stopping the flagellum and allowing the bacterium to remain in one place. The production and rotation of a flagellum can take up to 10% of an Escherichia coli cell's energy budget and has been described as an "energy-guzzling machine". Its operation generates reactive oxygen species that elevate mutation rates. The cylindrical shape of flagella is suited to locomotion of microscopic organisms; these organisms operate at a low Reynolds number, where the viscosity of the surrounding water is much more important than its mass or inertia. The rotational speed of flagella varies in response to the intensity of the proton-motive force, thereby permitting certain forms of speed control, and also permitting some types of bacteria to attain remarkable speeds in proportion to their size; some achieve roughly 60 cell lengths per second. At such a speed, a bacterium would take about 245 days to cover 1 km; although that may seem slow, the perspective changes when the concept of scale is introduced. In comparison to macroscopic life forms, it is very fast indeed when expressed in terms of number of body lengths per second. 
A cheetah, for example, only achieves about 25 body lengths per second. Through use of their flagella, bacteria are able to move rapidly towards attractants and away from repellents, by means of a biased random walk, with runs and tumbles brought about by rotating its flagellum counterclockwise and clockwise, respectively. The two directions of rotation are not identical (with respect to flagellum movement) and are selected by a molecular switch. Clockwise rotation is called the traction mode with the body following the flagella. Counterclockwise rotation is called the thruster mode with the flagella lagging behind the body. Assembly During flagellar assembly, components of the flagellum pass through the hollow cores of the basal body and the nascent filament. During assembly, protein components are added at the flagellar tip rather than at the base. In vitro, flagellar filaments assemble spontaneously in a solution containing purified flagellin as the sole protein. Evolution At least 10 protein components of the bacterial flagellum share homologous proteins with the type three secretion system (T3SS) found in many gram-negative bacteria, hence one likely evolved from the other. Because the T3SS has a similar number of components as a flagellar apparatus (about 25 proteins), which one evolved first is difficult to determine. However, the flagellar system appears to involve more proteins overall, including various regulators and chaperones, hence it has been argued that flagella evolved from a T3SS. However, it has also been suggested that the flagellum may have evolved first or the two structures evolved in parallel. Early single-cell organisms' need for motility (mobility) support that the more mobile flagella would be selected by evolution first, but the T3SS evolving from the flagellum can be seen as 'reductive evolution', and receives no topological support from the phylogenetic trees. The hypothesis that the two structures evolved separately from a common ancestor accounts for the protein similarities between the two structures, as well as their functional diversity. Flagella and the intelligent design debate Some authors have argued that flagella cannot have evolved, assuming that they can only function properly when all proteins are in place. In other words, the flagellar apparatus is "irreducibly complex". However, many proteins can be deleted or mutated and the flagellum still works, though sometimes at reduced efficiency. Moreover, with many proteins unique to some number across species, diversity of bacterial flagella composition was higher than expected. Hence, the flagellar apparatus is clearly very flexible in evolutionary terms and perfectly able to lose or gain protein components. For instance, a number of mutations have been found that increase the motility of E. coli. Additional evidence for the evolution of bacterial flagella includes the existence of vestigial flagella, intermediate forms of flagella and patterns of similarities among flagellar protein sequences, including the observation that almost all of the core flagellar proteins have known homologies with non-flagellar proteins. Furthermore, several processes have been identified as playing important roles in flagellar evolution, including self-assembly of simple repeating subunits, gene duplication with subsequent divergence, recruitment of elements from other systems ('molecular bricolage') and recombination. 
Flagellar arrangements Different species of bacteria have different numbers and arrangements of flagella, named using the term tricho, from the Greek trichos meaning hair. Monotrichous bacteria such as Vibrio cholerae have a single polar flagellum. Amphitrichous bacteria have a single flagellum on each of two opposite ends (e.g., Campylobacter jejuni or Alcaligenes faecalis)—both flagella rotate but coordinate to produce coherent thrust. Lophotrichous bacteria (lopho Greek combining term meaning crest or tuft) have multiple flagella located at the same spot on the bacterial surface such as Helicobacter pylori, which act in concert to drive the bacteria in a single direction. In many cases, the bases of multiple flagella are surrounded by a specialized region of the cell membrane, called the polar organelle. Peritrichous bacteria have flagella projecting in all directions (e.g., E. coli). Counterclockwise rotation of a monotrichous polar flagellum pushes the cell forward with the flagellum trailing behind, much like a corkscrew moving inside cork. Water on the microscopic scale is highly viscous, unlike usual water. Spirochetes, in contrast, have flagella called endoflagella arising from opposite poles of the cell, and are located within the periplasmic space as shown by breaking the outer-membrane and also by electron cryotomography microscopy. The rotation of the filaments relative to the cell body causes the entire bacterium to move forward in a corkscrew-like motion, even through material viscous enough to prevent the passage of normally flagellated bacteria. In certain large forms of Selenomonas, more than 30 individual flagella are organized outside the cell body, helically twining about each other to form a thick structure (easily visible with the light microscope) called a "fascicle". In some Vibrio spp. (particularly Vibrio parahaemolyticus) and related bacteria such as Aeromonas, two flagellar systems co-exist, using different sets of genes and different ion gradients for energy. The polar flagella are constitutively expressed and provide motility in bulk fluid, while the lateral flagella are expressed when the polar flagella meet too much resistance to turn. These provide swarming motility on surfaces or in viscous fluids. Bundling Bundling is an event that can happen in multi-flagellated cells, bundling the flagella together and causing them to rotate in a coordinated manner. Flagella are left-handed helices, and when rotated counter-clockwise by their rotors, they can bundle and rotate together. When the rotors reverse direction, thus rotating clockwise, the flagellum unwinds from the bundle. This may cause the cell to stop its forward motion and instead start twitching in place, referred to as tumbling. Tumbling results in a stochastic reorientation of the cell, causing it to change the direction of its forward swimming. It is not known which stimuli drive the switch between bundling and tumbling, but the motor is highly adaptive to different signals. In the model describing chemotaxis ("movement on purpose") the clockwise rotation of a flagellum is suppressed by chemical compounds favorable to the cell (e.g. food). 
When moving in a favorable direction, the concentration of such chemical attractants increases and therefore tumbles are continually suppressed, allowing forward motion; likewise, when the cell's direction of motion is unfavorable (e.g., away from a chemical attractant), tumbles are no longer suppressed and occur much more often, with the chance that the cell will be thus reoriented in the correct direction. Even if all flagella would rotate clockwise, however, they often cannot form a bundle due to geometrical and hydrodynamic reasons. Eukaryotic flagella Terminology Aiming to emphasize the distinction between the bacterial flagella and the eukaryotic cilia and flagella, some authors attempted to replace the name of these two eukaryotic structures with "undulipodia" (e.g., all papers by Margulis since the 1970s) or "cilia" for both (e.g., Hülsmann, 1992; Adl et al., 2012; most papers of Cavalier-Smith), preserving "flagella" for the bacterial structure. However, the discriminative usage of the terms "cilia" and "flagella" for eukaryotes adopted in this article (see below) is still common (e.g., Andersen et al., 1991; Leadbeater et al., 2000). Internal structure The core of a eukaryotic flagellum, known as the axoneme is a bundle of nine fused pairs of microtubules known as doublets surrounding two central single microtubules (singlets). This 9+2 axoneme is characteristic of the eukaryotic flagellum. At the base of a eukaryotic flagellum is a basal body, "blepharoplast" or kinetosome, which is the microtubule organizing center for flagellar microtubules and is about 500 nanometers long. Basal bodies are structurally identical to centrioles. The flagellum is encased within the cell's plasma membrane, so that the interior of the flagellum is accessible to the cell's cytoplasm. Besides the axoneme and basal body, relatively constant in morphology, other internal structures of the flagellar apparatus are the transition zone (where the axoneme and basal body meet) and the root system (microtubular or fibrilar structures that extend from the basal bodies into the cytoplasm), more variable and useful as indicators of phylogenetic relationships of eukaryotes. Other structures, more uncommon, are the paraflagellar (or paraxial, paraxonemal) rod, the R fiber, and the S fiber. For surface structures, see below. Mechanism Each of the outer 9 doublet microtubules extends a pair of dynein arms (an "inner" and an "outer" arm) to the adjacent microtubule; these produce force through ATP hydrolysis. The flagellar axoneme also contains radial spokes, polypeptide complexes extending from each of the outer nine microtubule doublets towards the central pair, with the "head" of the spoke facing inwards. The radial spoke is thought to be involved in the regulation of flagellar motion, although its exact function and method of action are not yet understood. Flagella versus cilia The regular beat patterns of eukaryotic cilia and flagella generate motion on a cellular level. Examples range from the propulsion of single cells such as the swimming of spermatozoa to the transport of fluid along a stationary layer of cells such as in the respiratory tract. Although eukaryotic cilia and flagella are ultimately the same, they are sometimes classed by their pattern of movement, a tradition from before their structures have been known. In the case of flagella, the motion is often planar and wave-like, whereas the motile cilia often perform a more complicated three-dimensional motion with a power and recovery stroke. 
Yet another traditional form of distinction is by the number of 9+2 organelles on the cell. Intraflagellar transport Intraflagellar transport, the process by which axonemal subunits, transmembrane receptors, and other proteins are moved up and down the length of the flagellum, is essential for proper functioning of the flagellum, in both motility and signal transduction. Evolution and occurrence Eukaryotic flagella or cilia, probably an ancestral characteristic, are widespread in almost all groups of eukaryotes, as a relatively perennial condition, or as a flagellated life cycle stage (e.g., zoids, gametes, zoospores, which may be produced continually or not). The first situation is found either in specialized cells of multicellular organisms (e.g., the choanocytes of sponges, or the ciliated epithelia of metazoans), as in ciliates and many eukaryotes with a "flagellate condition" (or "monadoid level of organization", see Flagellata, an artificial group). Flagellated lifecycle stages are found in many groups, e.g., many green algae (zoospores and male gametes), bryophytes (male gametes), pteridophytes (male gametes), some gymnosperms (cycads and Ginkgo, as male gametes), centric diatoms (male gametes), brown algae (zoospores and gametes), oomycetes (assexual zoospores and gametes), hyphochytrids (zoospores), labyrinthulomycetes (zoospores), some apicomplexans (gametes), some radiolarians (probably gametes), foraminiferans (gametes), plasmodiophoromycetes (zoospores and gametes), myxogastrids (zoospores), metazoans (male gametes), and chytrid fungi (zoospores and gametes). Flagella or cilia are completely absent in some groups, probably due to a loss rather than being a primitive condition. The loss of cilia occurred in red algae, some green algae (Zygnematophyceae), the gymnosperms except cycads and Ginkgo, angiosperms, pennate diatoms, some apicomplexans, some amoebozoans, in the sperm of some metazoans, and in fungi (except chytrids). Typology A number of terms related to flagella or cilia are used to characterize eukaryotes. According to surface structures present, flagella may be: whiplash flagella (= smooth, acronematic flagella): without hairs, e.g., in Opisthokonta hairy flagella (= tinsel, flimmer, pleuronematic flagella): with hairs (= mastigonemes sensu lato), divided in: with fine hairs (= non-tubular, or simple hairs): occurs in Euglenophyceae, Dinoflagellata, some Haptophyceae (Pavlovales) with stiff hairs (= tubular hairs, retronemes, mastigonemes sensu stricto), divided in: bipartite hairs: with two regions. Occurs in Cryptophyceae, Prasinophyceae, and some Heterokonta tripartite (= straminipilous) hairs: with three regions (a base, a tubular shaft, and one or more terminal hairs). 
Occurs in most Heterokonta stichonematic flagella: with a single row of hairs pantonematic flagella: with two rows of hairs acronematic: flagella with a single, terminal mastigoneme or flagellar hair (e.g., bodonids); some authors use the term as synonym of whiplash with scales: e.g., Prasinophyceae with spines: e.g., some brown algae with undulating membrane: e.g., some kinetoplastids, some parabasalids with proboscis (trunk-like protrusion of the cell): e.g., apusomonads, some bodonids According to the number of flagella, cells may be: (remembering that some authors use "ciliated" instead of "flagellated") uniflagellated: e.g., most Opisthokonta biflagellated: e.g., all Dinoflagellata, the gametes of Charophyceae, of most bryophytes and of some metazoans triflagellated: e.g., the gametes of some Foraminifera quadriflagellated: e.g., some Prasinophyceae, Collodictyonidae octoflagellated: e.g., some Diplomonada, some Prasinophyceae multiflagellated: e.g., Opalinata, Ciliophora, Stephanopogon, Parabasalida, Hemimastigophora, Caryoblastea, Multicilia, the gametes (or zoids) of Oedogoniales (Chlorophyta), some pteridophytes and some gymnosperms According to the place of insertion of the flagella: opisthokont: cells with flagella inserted posteriorly, e.g., in Opisthokonta (Vischer, 1945). In Haptophyceae, flagella are laterally to terminally inserted, but are directed posteriorly during rapid swimming. akrokont: cells with flagella inserted apically subakrokont: cells with flagella inserted subapically pleurokont: cells with flagella inserted laterally According to the beating pattern: gliding: a flagellum that trails on the substrate heterodynamic: flagella with different beating patterns (usually with one flagellum functioning in food capture and the other functioning in gliding, anchorage, propulsion or "steering") isodynamic: flagella beating with the same patterns Other terms related to the flagellar type: isokont: cells with flagella of equal length. It was also formerly used to refer to the Chlorophyta anisokont: cells with flagella of unequal length, e.g., some Euglenophyceae and Prasinophyceae heterokont: term introduced by Luther (1899) to refer to the Xanthophyceae, due to the pair of flagella of unequal length. It has taken on a specific meaning in referring to cells with an anterior straminipilous flagellum (with tripartite mastigonemes, in one or two rows) and a posterior usually smooth flagellum. It is also used to refer to the taxon Heterokonta stephanokont: cells with a crown of flagella near its anterior end, e.g., the gametes and spores of Oedogoniales, the spores of some Bryopsidales. Term introduced by Blackman & Tansley (1902) to refer to the Oedogoniales akont: cells without flagella. It was also used to refer to taxonomic groups, as Aconta or Akonta: the Zygnematophyceae and Bacillariophyceae (Oltmanns, 1904), or the Rhodophyceae (Christensen, 1962) Archaeal flagella The archaellum possessed by some species of Archaea is superficially similar to the bacterial flagellum; in the 1980s, they were thought to be homologous on the basis of gross morphology and behavior. Both flagella and archaella consist of filaments extending outside the cell, and rotate to propel the cell. Archaeal flagella have a unique structure which lacks a central channel. Similar to bacterial type IV pilins, the archaeal proteins (archaellins) are made with class 3 signal peptides and they are processed by a type IV prepilin peptidase-like enzyme. 
The archaellins are typically modified by the addition of N-linked glycans which are necessary for proper assembly or function. Discoveries in the 1990s revealed numerous detailed differences between the archaeal and bacterial flagella. These include: Bacterial flagella rotation is powered by the proton motive force – a flow of H+ ions or occasionally by the sodium-motive force – a flow of Na+ ions; archaeal flagella rotation is powered by ATP. While bacterial cells often have many flagellar filaments, each of which rotates independently, the archaeal flagellum is composed of a bundle of many filaments that rotates as a single assembly. Bacterial flagella grow by the addition of flagellin subunits at the tip; archaeal flagella grow by the addition of subunits to the base. Bacterial flagella are thicker than archaella, and the bacterial filament has a large enough hollow "tube" inside that the flagellin subunits can flow up the inside of the filament and get added at the tip; the archaellum is too thin (12-15 nm) to allow this. Many components of bacterial flagella share sequence similarity to components of the type III secretion systems, but the components of bacterial flagella and archaella share no sequence similarity. Instead, some components of archaella share sequence and morphological similarity with components of type IV pili, which are assembled through the action of type II secretion systems (the nomenclature of pili and protein secretion systems is not consistent). These differences support the theory that the bacterial flagella and archaella are a classic case of biological analogy, or convergent evolution, rather than homology. Research into the structure of archaella made significant progress beginning in the early 2010s, with the first atomic resolution structure of an archaella protein, the discovery of additional functions of archaella, and the first reports of archaella in Nanoarchaeota and Thaumarchaeota. Fungal The only fungi to have a single flagellum on their spores are the chytrids. In Batrachochytrium dendrobatidis the flagellum is 19–20 μm long. A nonfunctioning centriole lies adjacent to the kinetosome. Nine interconnected props attach the kinetosome to the plasmalemma, and a terminal plate is present in the transitional zone. An inner ring-like structure attached to the tubules of the flagellar doublets within the transitional zone has been observed in transverse section. Additional images See also Ciliopathy RpoF References Further reading External links Cell Image Library - Flagella Cell movement Organelles Protein complexes Bacteria
Flagellum
[ "Biology" ]
6,128
[ "Prokaryotes", "Microorganisms", "Bacteria" ]
43,218
https://en.wikipedia.org/wiki/Zipf%27s%20law
Zipf's law is an empirical law stating that when a list of measured values is sorted in decreasing order, the value of the n-th entry is often approximately inversely proportional to n. The best known instance of Zipf's law applies to the frequency table of words in a text or corpus of natural language: It is usually found that the most common word occurs approximately twice as often as the next most common one, three times as often as the third most common, and so on. For example, in the Brown Corpus of American English text, the word "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's law, the second-place word "of" accounts for slightly over 3.5% of words (36,411 occurrences), followed by "and" (28,852). It is often used in the following form, called the Zipf–Mandelbrot law:

$$\text{frequency} \propto \frac{1}{(\text{rank} + b)^{a}},$$

where a and b are fitted parameters, with a ≈ 1 and b ≈ 2.7. This law is named after the American linguist George Kingsley Zipf, and is still an important concept in quantitative linguistics. It has been found to apply to many other types of data studied in the physical and social sciences. In mathematical statistics, the concept has been formalized as the Zipfian distribution: a family of related discrete probability distributions whose rank-frequency distribution is an inverse power law relation. They are related to Benford's law and the Pareto distribution. Some sets of time-dependent empirical data deviate somewhat from Zipf's law. Such empirical distributions are said to be quasi-Zipfian. History In 1913, the German physicist Felix Auerbach observed an inverse proportionality between the population sizes of cities and their ranks when sorted in decreasing order of that variable. Zipf's law had been discovered before Zipf, first by the French stenographer Jean-Baptiste Estoup in 1916, and also by G. Dewey in 1923 and by E. Condon in 1928. The same relation for frequencies of words in natural language texts was observed by George Zipf in 1932, but he never claimed to have originated it. In fact, Zipf did not like mathematics. In his 1932 publication, the author speaks with disdain about mathematical involvement in linguistics (ibidem, p. 21): ... let me say here for the sake of any mathematician who may plan to formulate the ensuing data more exactly, the ability of the highly intense positive to become the highly intense negative, in my opinion, introduces the devil into the formula in the form of √(−1). The only mathematical expression Zipf used looks like ab² = constant, which he "borrowed" from Alfred J. Lotka's 1926 publication. The same relationship was found to occur in many other contexts, and for other variables besides frequency. For example, when corporations are ranked by decreasing size, their sizes are found to be inversely proportional to the rank. The same relation is found for personal incomes (where it is called the Pareto principle), the number of people watching the same TV channel, notes in music, cells' transcriptomes, and more. In 1992 bioinformatician Wentian Li published a short paper showing that Zipf's law emerges even in randomly generated texts. It included proof that the power-law form of Zipf's law is a byproduct of ordering words by rank. 
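As a quick illustration of the rank-frequency pattern described above, the short Python sketch below counts word frequencies in a plain-text file and compares the most frequent words with the ideal Zipf prediction (frequency of the rank-r word ≈ frequency of the top word divided by r). This is a minimal, illustrative sketch: the file name, the crude tokenizer, and the choice of the top ten words are assumptions made for the example, not part of the original Brown Corpus analysis.

```python
# Minimal sketch: compare observed word frequencies with the ideal Zipf
# (s = 1) prediction f(r) ≈ f(1) / r. "sample.txt" is a placeholder path.
import re
from collections import Counter

def rank_frequency(text, top=10):
    words = re.findall(r"[a-z']+", text.lower())        # crude tokenizer
    counts = Counter(words).most_common(top)
    f1 = counts[0][1]                                    # frequency of the most common word
    for rank, (word, freq) in enumerate(counts, start=1):
        predicted = f1 / rank                            # ideal Zipf prediction for this rank
        print(f"{rank:2d}  {word:12s} observed={freq:8d}  predicted={predicted:10.1f}")

with open("sample.txt", encoding="utf-8") as fh:
    rank_frequency(fh.read())
```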
Formal definition Formally, the Zipf distribution on N elements assigns to the element of rank k (counting from 1) the probability

$$f(k; N) = \frac{1}{H_N}\,\frac{1}{k},$$

where H_N is a normalization constant, the N-th harmonic number:

$$H_N = \sum_{k=1}^{N} \frac{1}{k}.$$

The distribution is sometimes generalized to an inverse power law with exponent s instead of 1. Namely,

$$f(k; N, s) = \frac{1}{H_{N,s}}\,\frac{1}{k^{s}},$$

where H_{N,s} is a generalized harmonic number,

$$H_{N,s} = \sum_{k=1}^{N} \frac{1}{k^{s}}.$$

The generalized Zipf distribution can be extended to infinitely many items (N = ∞) only if the exponent s exceeds 1. In that case, the normalization constant H_{N,s} becomes Riemann's zeta function,

$$\zeta(s) = \sum_{k=1}^{\infty} \frac{1}{k^{s}} < \infty.$$

The infinite-item case is characterized by the zeta distribution and is called Lotka's law. If the exponent s is 1 or less, the normalization constant H_{N,s} diverges as N tends to infinity. Empirical testing Empirically, a data set can be tested to see whether Zipf's law applies by checking the goodness of fit of an empirical distribution to the hypothesized power law distribution with a Kolmogorov–Smirnov test, and then comparing the (log) likelihood ratio of the power law distribution to alternative distributions like an exponential distribution or lognormal distribution. Zipf's law can be visualized by plotting the item frequency data on a log-log graph, with the axes being the logarithm of rank order and the logarithm of frequency. The data conform to Zipf's law with exponent s to the extent that the plot approximates a linear (more precisely, affine) function with slope −s. For exponent s = 1, one can also plot the reciprocal of the frequency (mean interword interval) against rank, or the reciprocal of rank against frequency, and compare the result with the line through the origin with slope 1. Statistical explanations Although Zipf's law holds for most natural languages, and even certain artificial ones such as Esperanto and Toki Pona, the reason is still not well understood. Recent reviews of generative processes for Zipf's law include Mitzenmacher, "A Brief History of Generative Models for Power Law and Lognormal Distributions", and Simkin, "Re-inventing Willis". However, it may be partly explained by statistical analysis of randomly generated texts. Wentian Li has shown that in a document in which each character has been chosen randomly from a uniform distribution of all letters (plus a space character), the "words" of different lengths follow the macro-trend of Zipf's law (the more probable words are the shortest and have equal probability). In 1959, Vitold Belevitch observed that if any of a large class of well-behaved statistical distributions (not only the normal distribution) is expressed in terms of rank and expanded into a Taylor series, the first-order truncation of the series results in Zipf's law. Further, a second-order truncation of the Taylor series results in Mandelbrot's law. The principle of least effort is another possible explanation: Zipf himself proposed that neither speakers nor hearers using a given language want to work any harder than necessary to reach understanding, and the process that results in approximately equal distribution of effort leads to the observed Zipf distribution. A minimal explanation assumes that words are generated by monkeys typing randomly. If language is generated by a single monkey typing randomly, with fixed and nonzero probability of hitting each letter key or white space, then the words (letter strings separated by white spaces) produced by the monkey follow Zipf's law. 
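A small sketch can make the definitions and explanations above concrete. The code below is a hedged illustration rather than a reference implementation: it computes the finite Zipf(s, N) probabilities from the generalized harmonic number, generates "monkey-typed" text in the spirit of Li's experiment (uniform random letters plus a space), and estimates the slope of the rank-frequency relation on a log-log scale. The alphabet size and text length are arbitrary choices made so that words repeat often enough in a short run.

```python
# Sketch of two ideas from this section: the finite Zipf(s, N) distribution,
# and Li's "monkey typing" experiment, whose rank-frequency curve follows an
# approximate power law. Alphabet and text length are illustrative assumptions.
import math
import random
from collections import Counter

def zipf_pmf(N, s=1.0):
    """Probabilities f(k) = 1 / (H_{N,s} * k^s) for ranks k = 1..N."""
    H = sum(1.0 / k**s for k in range(1, N + 1))        # generalized harmonic number
    return [1.0 / (H * k**s) for k in range(1, N + 1)]

def monkey_text(n_chars=200_000, alphabet="abcde"):
    """Characters drawn uniformly from the alphabet plus a space character."""
    symbols = alphabet + " "
    return "".join(random.choice(symbols) for _ in range(n_chars))

def loglog_slope(freqs):
    """Least-squares slope of log(frequency) versus log(rank)."""
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print("Zipf(s=1, N=5) probabilities:", [round(p, 3) for p in zipf_pmf(5)])

counts = Counter(monkey_text().split())
freqs = [c for _, c in counts.most_common(200)]
print("log-log slope of monkey text:", round(loglog_slope(freqs), 2))  # roughly -1: Zipf-like macro-trend
```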
Another possible cause for the Zipf distribution is a preferential attachment process, in which the value of an item tends to grow at a rate proportional to its current value (intuitively, "the rich get richer" or "success breeds success"); a minimal simulation of this process is sketched below. Such a growth process results in the Yule–Simon distribution, which has been shown to fit word frequency versus rank in language and population versus city rank better than Zipf's law. It was originally derived to explain population versus rank in species by Yule, and applied to cities by Simon. A similar explanation is based on atlas models, systems of exchangeable positive-valued diffusion processes with drift and variance parameters that depend only on the rank of the process. It has been shown mathematically that Zipf's law holds for Atlas models that satisfy certain natural regularity conditions. Related laws A generalization of Zipf's law is the Zipf–Mandelbrot law, proposed by Benoit Mandelbrot, whose frequencies are

$$f(k; N, q, s) = \frac{1}{C}\,\frac{1}{(k+q)^{s}}.$$

The normalizing constant C is the Hurwitz zeta function evaluated at s. Zipfian distributions can be obtained from Pareto distributions by an exchange of variables. The Zipf distribution is sometimes called the discrete Pareto distribution because it is analogous to the continuous Pareto distribution in the same way that the discrete uniform distribution is analogous to the continuous uniform distribution. The tail frequencies of the Yule–Simon distribution are approximately

$$f(k; \rho) \approx \frac{\text{constant}}{k^{\rho + 1}}$$

for any choice of the parameter ρ. In the parabolic fractal distribution, the logarithm of the frequency is a quadratic polynomial of the logarithm of the rank. This can markedly improve the fit over a simple power-law relationship. Like fractal dimension, it is possible to calculate Zipf dimension, which is a useful parameter in the analysis of texts. It has been argued that Benford's law is a special bounded case of Zipf's law, with the connection between these two laws being explained by their both originating from scale invariant functional relations from statistical physics and critical phenomena. The ratios of probabilities in Benford's law are not constant. The leading digits of data satisfying Zipf's law with s = 1 satisfy Benford's law. Occurrences City sizes Following Auerbach's 1913 observation, there has been substantial examination of Zipf's law for city sizes. However, more recent empirical and theoretical studies have challenged the relevance of Zipf's law for cities. Word frequencies in natural languages In many texts in human languages, word frequencies approximately follow a Zipf distribution with exponent s close to 1; that is, the most common word occurs about n times as often as the n-th most common one. The actual rank-frequency plot of a natural language text deviates to some extent from the ideal Zipf distribution, especially at the two ends of the range. The deviations may depend on the language, on the topic of the text, on the author, on whether the text was translated from another language, and on the spelling rules used. Some deviation is inevitable because of sampling error. At the low-frequency end, where the rank approaches the number of distinct words N, the plot takes a staircase shape, because each word can occur only an integer number of times. In some Romance languages, the frequencies of the dozen or so most frequent words deviate significantly from the ideal Zipf distribution, because those words include articles inflected for grammatical gender and number. 
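The preferential attachment process mentioned earlier in this section can be made concrete with a minimal, hedged simulation of a Simon-style "rich get richer" scheme: with some probability a brand-new word enters the text, otherwise an existing word recurs with probability proportional to its current count. The parameter values below are arbitrary illustration choices; the point is only that the resulting word counts are heavy-tailed, as the Yule–Simon discussion suggests.

```python
# Hedged sketch of a Simon-style preferential attachment process.
# ALPHA and N_TOKENS are arbitrary illustration values, not canonical ones.
import random

ALPHA = 0.1        # probability of introducing a brand-new word
N_TOKENS = 50_000  # length of the simulated text

def simulate_simon(alpha, n_tokens):
    tokens = [0]          # the simulated text, as a list of word ids
    counts = {0: 1}       # word id -> current count
    next_id = 1
    for _ in range(n_tokens - 1):
        if random.random() < alpha:
            word = next_id                 # introduce a new word
            next_id += 1
        else:
            word = random.choice(tokens)   # repeat a word, chosen with probability ∝ its count
        tokens.append(word)
        counts[word] = counts.get(word, 0) + 1
    return counts

counts = simulate_simon(ALPHA, N_TOKENS)
largest = sorted(counts.values(), reverse=True)[:10]
print("ten largest simulated word counts:", largest)   # heavy-tailed: a few words dominate
```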
In many East Asian languages, such as Chinese, Tibetan, and Vietnamese, each morpheme (word or word piece) consists of a single syllable; a word of English is often translated to a compound of two such syllables. The rank-frequency table for those morphemes deviates significantly from the ideal Zipf law, at both ends of the range. Even in English, the deviations from the ideal Zipf's law become more apparent as one examines large collections of texts. Analysis of a corpus of 30,000 English texts showed that only about 15% of the texts in it have a good fit to Zipf's law. Slight changes in the definition of Zipf's law can increase this percentage up to close to 50%. In these cases, the observed frequency-rank relation can be modeled more accurately by separate Zipf–Mandelbrot distributions for different subsets or subtypes of words. This is the case for the frequency-rank plot of the first 10 million words of the English Wikipedia. In particular, the frequencies of the closed class of function words in English are better described with an exponent s lower than 1, while open-ended vocabulary growth with document size and corpus size requires an exponent s greater than 1 for convergence of the generalized harmonic series. When a text is encrypted in such a way that every occurrence of each distinct plaintext word is always mapped to the same encrypted word (as in the case of simple substitution ciphers, like the Caesar ciphers, or simple codebook ciphers), the frequency-rank distribution is not affected. On the other hand, if separate occurrences of the same word may be mapped to two or more different words (as happens with the Vigenère cipher), the Zipf distribution will typically have a flat part at the high-frequency end. Applications Zipf's law has been used for extraction of parallel fragments of texts out of comparable corpora. Laurance Doyle and others have suggested the application of Zipf's law for detection of alien language in the search for extraterrestrial intelligence. The frequency-rank word distribution is often characteristic of the author and changes little over time. This feature has been used in the analysis of texts for authorship attribution. The word-like sign groups of the 15th-century codex Voynich Manuscript have been found to satisfy Zipf's law, suggesting that the text is most likely not a hoax but rather written in an obscure language or cipher. See also Letter frequency Most common words in English Notes References Further reading External links An article on Zipf's law applied to city populations Seeing Around Corners (Artificial societies turn up Zipf's law) PlanetMath article on Zipf's law Distributions de type "fractal parabolique" dans la Nature (French, with English summary) An analysis of income distribution Zipf List of French words Zipf list for English, French, Spanish, Italian, Swedish, Icelandic, Latin, Portuguese and Finnish from Gutenberg Project and online calculator to rank words in texts Citations and the Zipf–Mandelbrot's law Zipf's Law examples and modelling (1985) Complex systems: Unzipping Zipf's law (2011) Benford's law, Zipf's law, and the Pareto distribution by Terence Tao. Discrete distributions Computational linguistics Power laws Statistical laws Empirical laws Eponyms Tails of probability distributions Quantitative linguistics Bibliometrics Corpus linguistics 1949 introductions
Zipf's law
[ "Mathematics", "Technology" ]
2,801
[ "Metrics", "Bibliometrics", "Quantity", "Science and technology studies", "Computational linguistics", "Natural language and computing" ]
43,476
https://en.wikipedia.org/wiki/Operations%20research
Operations research (U.S. Air Force Specialty Code: Operations Analysis), often shortened to the initialism OR, is a branch of applied mathematics that deals with the development and application of analytical methods to improve decision-making. Although the term management science is sometimes used similarly, the two fields differ in their scope and emphasis. Employing techniques from other mathematical sciences, such as modeling, statistics, and optimization, operations research arrives at optimal or near-optimal solutions to decision-making problems. Because of its emphasis on practical applications, operations research has overlapped with many other disciplines, notably industrial engineering. Operations research is often concerned with determining the extreme values of some real-world objective: the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost). Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries. Overview Operations research (OR) encompasses the development and the use of a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, ordinal priority approach, neural networks, expert systems, decision analysis, and the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power, or develop a new technique specific to the problem at hand (and, afterwards, to that type of problem). The major sub-disciplines in modern operational research, as identified by the journal Operations Research and The Journal of the Operational Research Society, include (but are not limited to): Computing and information technologies Financial engineering Manufacturing, service sciences, and supply chain management Policy modeling and public sector work Revenue management Simulation Stochastic models Transportation theory Game theory Linear programming Nonlinear programming Integer programming, including the NP-complete case of 0-1 (binary) integer linear programming Dynamic programming, used in aerospace engineering and economics Information theory, used in cryptography and quantum computing Quadratic programming, for optimizing quadratic objective functions History In the decades after the two world wars, the tools of operations research were more widely applied to problems in business, industry, and society. Since that time, operational research has expanded into a field widely used in industries ranging from petrochemicals to airlines, finance, logistics, and government, moving to a focus on the development of mathematical models that can be used to analyse and optimize sometimes complex systems, and has become an area of active academic and industrial research. 
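As a concrete illustration of one technique named in the overview above, linear programming, the snippet below solves a tiny, made-up product-mix problem with SciPy's linprog solver. The profit coefficients and resource limits are invented purely for the example and do not come from the article; the sketch only shows the general shape of an OR optimization model: decision variables, a linear objective, and linear constraints.

```python
# Illustrative linear programming sketch (invented product-mix numbers).
# Maximize profit 40*x1 + 30*x2 subject to resource limits:
#   2*x1 + 1*x2 <= 100   (machine hours)
#   1*x1 + 2*x2 <= 80    (labour hours)
from scipy.optimize import linprog

c = [-40, -30]                 # linprog minimizes, so negate profits to maximize
A_ub = [[2, 1],
        [1, 2]]
b_ub = [100, 80]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal production plan:", result.x)     # expected: about [40, 20]
print("maximum profit:", -result.fun)           # expected: about 2200
```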
Historical origins In the 17th century, mathematicians Blaise Pascal and Christiaan Huygens solved problems involving sometimes complex decisions (problem of points) by using game-theoretic ideas and expected values; others, such as Pierre de Fermat and Jacob Bernoulli, solved these types of problems using combinatorial reasoning instead. Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge. Beginning in the 20th century, study of inventory management could be considered the origin of modern operations research with economic order quantity developed by Ford W. Harris in 1913. Operational research may have originated in the efforts of military planners during World War I (convoy theory and Lanchester's laws). Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these to the social sciences. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 as the result of an initiative of the station's superintendent, A. P. Rowe and Robert Watson-Watt. Rowe conceived the idea as a means to analyse and improve the working of the UK's early-warning radar system, code-named "Chain Home" (CH). Initially, Rowe analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken. Scientists in the United Kingdom (including Patrick Blackett (later Lord Blackett OM PRS), Cecil Gordon, Solly Zuckerman, (later Baron Zuckerman OM, KCB, FRS), C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson), and in the United States (George Dantzig) looked for ways to make better decisions in such areas as logistics and training schedules. Second World War The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control". Other names for it included operational analysis (UK Ministry of Defence from 1962) and quantitative management. During the Second World War close to 1,000 men and women in Britain were engaged in operational research. About 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. Early in the war while working for the Royal Aircraft Establishment (RAE) he set up a team known as the "Circus" which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft from an average of over 20,000 at the start of the Battle of Britain to 4,000 in 1941. In 1941, Blackett moved from the RAE to the Navy, after first working with RAF Coastal Command, in 1941 and then early in 1942 to the Admiralty. Blackett's team at Coastal Command's Operational Research Section (CC-ORS) included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields. They undertook a number of crucial analyses that aided the war effort. 
Britain introduced the convoy system to reduce shipping losses, but while the principle of using warships to accompany merchant ships was generally accepted, it was unclear whether it was better for convoys to be small or large. Convoys travel at the speed of the slowest member, so small convoys can travel faster. It was also argued that small convoys would be harder for German U-boats to detect. On the other hand, large convoys could deploy more warships against an attacker. Blackett's staff showed that the losses suffered by convoys depended largely on the number of escort vessels present, rather than the size of the convoy. Their conclusion was that a few large convoys are more defensible than many small ones. While performing an analysis of the methods used by RAF Coastal Command to hunt and destroy submarines, one of the analysts asked what colour the aircraft were. As most of them were from Bomber Command they were painted black for night-time operations. At the suggestion of CC-ORS a test was run to see if that was the best colour to camouflage the aircraft for daytime operations in the grey North Atlantic skies. Tests showed that aircraft painted white were on average not spotted until they were 20% closer than those painted black. This change indicated that 30% more submarines would be attacked and sunk for the same number of sightings. As a result of these findings Coastal Command changed their aircraft to using white undersurfaces. Other work by the CC-ORS indicated that on average if the trigger depth of aerial-delivered depth charges were changed from 100 to 25 feet, the kill ratios would go up. The reason was that if a U-boat saw an aircraft only shortly before it arrived over the target then at 100 feet the charges would do no damage (because the U-boat wouldn't have had time to descend as far as 100 feet), and if it saw the aircraft a long way from the target it had time to alter course under water so the chances of it being within the 20-foot kill zone of the charges was small. It was more efficient to attack those submarines close to the surface when the targets' locations were better known than to attempt their destruction at greater depths when their positions could only be guessed. Before the change of settings from 100 to 25 feet, 1% of submerged U-boats were sunk and 14% damaged. After the change, 7% were sunk and 11% damaged; if submarines were caught on the surface but had time to submerge just before being attacked, the numbers rose to 11% sunk and 15% damaged. Blackett observed "there can be few cases where such a great operational gain had been obtained by such a small and simple change of tactics". Bomber Command's Operational Research Section (BC-ORS), analyzed a report of a survey carried out by RAF Bomber Command. For the survey, Bomber Command inspected all bombers returning from bombing raids over Germany over a particular period. All damage inflicted by German air defenses was noted and the recommendation was given that armor be added in the most heavily damaged areas. This recommendation was not adopted because the fact that the aircraft were able to return with these areas damaged indicated the areas were not vital, and adding armor to non-vital areas where damage is acceptable reduces aircraft performance. Their suggestion to remove some of the crew so that an aircraft loss would result in fewer personnel losses, was also rejected by RAF command. 
Blackett's team made the logical recommendation that the armor be placed in the areas which were completely untouched by damage in the bombers who returned. They reasoned that the survey was biased, since it only included aircraft that returned to Britain. The areas untouched in returning aircraft were probably vital areas, which, if hit, would result in the loss of the aircraft. This story has been disputed, with a similar damage assessment study completed in the US by the Statistical Research Group at Columbia University, the result of work done by Abraham Wald. When Germany organized its air defences into the Kammhuber Line, it was realized by the British that if the RAF bombers were to fly in a bomber stream they could overwhelm the night fighters who flew in individual cells directed to their targets by ground controllers. It was then a matter of calculating the statistical loss from collisions against the statistical loss from night fighters to calculate how close the bombers should fly to minimize RAF losses. The "exchange rate" ratio of output to input was a characteristic feature of operational research. By comparing the number of flying hours put in by Allied aircraft to the number of U-boat sightings in a given area, it was possible to redistribute aircraft to more productive patrol areas. Comparison of exchange rates established "effectiveness ratios" useful in planning. The ratio of 60 mines laid per ship sunk was common to several campaigns: German mines in British ports, British mines on German routes, and United States mines in Japanese routes. Operational research doubled the on-target bomb rate of B-29s bombing Japan from the Marianas Islands by increasing the training ratio from 4 to 10 percent of flying hours; revealed that wolf-packs of three United States submarines were the most effective number to enable all members of the pack to engage targets discovered on their individual patrol stations; revealed that glossy enamel paint was more effective camouflage for night fighters than conventional dull camouflage paint finish, and a smooth paint finish increased airspeed by reducing skin friction. On land, the operational research sections of the Army Operational Research Group (AORG) of the Ministry of Supply (MoS) were landed in Normandy in 1944, and they followed British forces in the advance across Europe. They analyzed, among other topics, the effectiveness of artillery, aerial bombing and anti-tank shooting. After World War II In 1947, under the auspices of the British Association, a symposium was organized in Dundee. In his opening address, Watson-Watt offered a definition of the aims of OR: "To examine quantitatively whether the user organization is getting from the operation of its equipment the best attainable contribution to its overall objective." With expanded techniques and growing awareness of the field at the close of the war, operational research was no longer limited to only operational, but was extended to encompass equipment procurement, training, logistics and infrastructure. Operations research also grew in many areas other than the military once scientists learned to apply its principles to the civilian sector. The development of the simplex algorithm for linear programming was in 1947. 
In the 1950s, the term Operations Research was used to describe heterogeneous mathematical methods such as game theory, dynamic programming, linear programming, warehousing, spare parts theory, queue theory, simulation and production control, which were used primarily in civilian industry. Scientific societies and journals on the subject of operations research were founded in the 1950s, such as the Operations Research Society of America (ORSA) in 1952 and The Institute of Management Sciences (TIMS) in 1953. Philip Morse, the head of the Weapons Systems Evaluation Group of the Pentagon, became the first president of ORSA and attracted the companies of the military-industrial complex to ORSA, which soon had more than 500 members. In the 1960s, ORSA reached 8000 members. Consulting companies also founded OR groups. In 1953, Abraham Charnes and William Cooper published the first textbook on linear programming. In the 1950s and 1960s, chairs of operations research were established in the U.S. and the United Kingdom (from 1964 in Lancaster) in the management faculties of universities. Further influences from the U.S. on the development of operations research in Western Europe can be traced to this period. The authoritative OR textbooks from the U.S. were published in Germany in German and in France in French (but not in Italian), such as the book by George Dantzig, "Linear Programming" (1963), and the book by C. West Churchman et al., "Introduction to Operations Research" (1957). The latter was also published in Spanish in 1973, opening Operations Research to Latin American readers at the same time. NATO gave important impetus to the spread of Operations Research in Western Europe; NATO headquarters (SHAPE) organised four conferences on OR in the 1950s – the one in 1956 with 120 participants – bringing OR to mainland Europe. Within NATO, OR was also known as "Scientific Advisory" (SA) and was grouped together in the Advisory Group for Aeronautical Research and Development (AGARD). SHAPE and AGARD organized an OR conference in April 1957 in Paris. When France withdrew from the NATO military command structure, the transfer of NATO headquarters from France to Belgium led to the institutionalization of OR in Belgium, where Jacques Drèze founded CORE, the Center for Operations Research and Econometrics, at the Catholic University of Leuven in 1966. With the development of computers over the following three decades, Operations Research became able to solve problems with hundreds of thousands of variables and constraints; moreover, the large volumes of data required for such problems can be stored and manipulated very efficiently. Much of operations research (now often marketed as 'analytics') relies upon stochastic variables and therefore upon access to truly random numbers. Fortunately, the cybernetics field also required the same level of randomness. The development of increasingly better random number generators has been a boon to both disciplines. Modern applications of operations research include city planning, football strategies, emergency planning, optimizing all facets of industry and economy, and, in all likelihood, terrorist attack planning as well as counterterrorist planning. More recently, the research approach of operations research, which dates back to the 1950s, has been criticized for producing collections of mathematical models that lack an empirical basis of data collection for applications. How to collect data is not presented in the textbooks.
Because of the lack of data, there are also no computer applications in the textbooks. Problems addressed Critical path analysis or project planning: identifying those processes in a multiple-dependency project which affect the overall duration of the project Floorplanning: designing the layout of equipment in a factory or components on a computer chip to reduce manufacturing time (therefore reducing cost) Network optimization: for instance, setup of telecommunications or power system networks to maintain quality of service during outages Resource allocation problems Facility location Assignment problems: the assignment problem, the generalized assignment problem, the quadratic assignment problem, and the weapon target assignment problem (a small worked example of the basic assignment problem is sketched below) Bayesian search theory: looking for a target Optimal search Routing, such as determining the routes of buses so that as few buses as possible are needed Supply chain management: managing the flow of raw materials and products based on uncertain demand for the finished products Project production activities: managing the flow of work activities in a capital project in response to system variability through operations research tools for variability reduction and buffer allocation using a combination of allocation of capacity, inventory and time Efficient messaging and customer response tactics Automation: automating or integrating robotic systems in human-driven operations processes Globalization: globalizing operations processes in order to take advantage of cheaper materials, labor, land or other productivity inputs Transportation: managing freight transportation and delivery systems (examples: LTL shipping, intermodal freight transport, the travelling salesman problem, the driver scheduling problem) Scheduling: personnel staffing, manufacturing steps, project tasks, network data traffic (these are known as queueing models or queueing systems), and sports events and their television coverage Blending of raw materials in oil refineries Determining optimal prices, in many retail and B2B settings, within the disciplines of pricing science Cutting stock problem: cutting small items out of bigger ones Finding the optimal parameter (weights) setting of an algorithm that generates the realisation of a figured bass in Baroque compositions (classical music) by using weighted local cost and transition cost rules Operational research is also used extensively in government where evidence-based policy is used. Management science The field of management science (MS) applies operations research models and methods to business problems; Stafford Beer characterized the field in these terms in 1967. Like operational research itself, management science is an interdisciplinary branch of applied mathematics devoted to optimal decision planning, with strong links with economics, business, engineering, and other sciences. It uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near-optimal solutions to sometimes complex decision problems. Management scientists help businesses to achieve their goals using the scientific methods of operational research. The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds.
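The assignment problems named in the list above have an especially compact computational form: choose a one-to-one pairing of agents and tasks that minimizes total cost. A rough, self-contained sketch follows; the cost matrix is invented, and SciPy's linear_sum_assignment is just one of several available solvers.

```python
# Toy assignment problem: pair 3 workers with 3 tasks at minimum total cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])   # cost[i, j] = cost of worker i doing task j

rows, cols = linear_sum_assignment(cost)              # optimal one-to-one assignment
print(list(zip(rows, cols)), cost[rows, cols].sum())  # chosen pairs and total cost 5
```

The generalized and quadratic variants mentioned above relax or complicate this basic structure and typically call for integer-programming or heuristic methods instead.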
Of course, the techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups. Management science is concerned with developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems, as well as designing and developing new and better models of organizational excellence. Related fields Some of the fields that have considerable overlap with Operations Research and Management Science include: Artificial Intelligence Business analytics Computer science Data mining/Data science/Big data Decision analysis Decision intelligence Engineering Financial engineering Forecasting Game theory Geography/Geographic information science Graph theory Industrial engineering Inventory control Logistics Mathematical modeling Mathematical optimization Probability and statistics Project management Policy analysis Queueing theory Simulation Social network/Transportation forecasting models Stochastic processes Supply chain management Systems engineering Applications Applications are abundant in areas such as airlines, manufacturing companies, service organizations, military branches, and government. The range of problems and issues to which operations research has contributed insights and solutions is vast. It includes: Scheduling (of airlines, trains, buses etc.) Assignment (assigning crew to flights, trains or buses; employees to projects; commitment and dispatch of power generation facilities) Facility location (deciding the most appropriate locations for new facilities such as warehouses, factories or fire stations) Hydraulics & Piping Engineering (managing flow of water from reservoirs) Health Services (information and supply chain management) Game Theory (identifying, understanding and developing strategies adopted by companies) Urban Design Computer Network Engineering (packet routing; timing; analysis) Telecom & Data Communication Engineering (packet routing; timing; analysis) Management is also concerned with so-called soft operational analysis, which covers methods for strategic planning, strategic decision support, and problem structuring. In dealing with these sorts of challenges, mathematical modeling and simulation may not be appropriate or may not suffice. Therefore, during the past 30 years, a number of non-quantified modeling methods have been developed. These include: stakeholder-based approaches including metagame analysis and drama theory; morphological analysis and various forms of influence diagrams; cognitive mapping; strategic choice; and robustness analysis.
Other important operational research organizations are the Simulation Interoperability Standards Organization (SISO) and the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC). In 2004, the US-based organization INFORMS began an initiative to market the OR profession better, including a website entitled The Science of Better, which provides an introduction to OR and examples of successful applications of OR to industrial problems. This initiative has been adopted by the Operational Research Society in the UK, including a website entitled Learn About OR. Journals of INFORMS The Institute for Operations Research and the Management Sciences (INFORMS) publishes thirteen scholarly journals about operations research, including the top two journals in their class, according to 2005 Journal Citation Reports. They are: Decision Analysis Information Systems Research INFORMS Journal on Computing INFORMS Transactions on Education (an open access journal) Interfaces Management Science Manufacturing & Service Operations Management Marketing Science Mathematics of Operations Research Operations Research Organization Science Service Science Transportation Science Other journals These are listed in alphabetical order of their titles. 4OR – A Quarterly Journal of Operations Research: jointly published by the Belgian, French and Italian Operations Research Societies (Springer); Decision Sciences: published by Wiley-Blackwell on behalf of the Decision Sciences Institute; European Journal of Operational Research (EJOR): founded in 1975, it is presently by far the largest operational research journal in the world, publishing around 9,000 pages of papers per year. In 2004, its total number of citations was the second largest amongst Operational Research and Management Science journals; INFOR Journal: published and sponsored by the Canadian Operational Research Society; Journal of Defense Modeling and Simulation (JDMS): Applications, Methodology, Technology: a quarterly journal devoted to advancing the science of modeling and simulation as it relates to the military and defense; Journal of the Operational Research Society (JORS): an official journal of The OR Society; this is the oldest continuously published journal of OR in the world, published by Taylor & Francis; Military Operations Research (MOR): published by the Military Operations Research Society; Omega - The International Journal of Management Science; Operations Research Letters; Opsearch: official journal of the Operational Research Society of India; OR Insight: a quarterly journal of The OR Society published by Palgrave; Pesquisa Operacional: the official journal of the Brazilian Operations Research Society; Production and Operations Management: the official journal of the Production and Operations Management Society; TOP: the official journal of the Spanish Statistics and Operations Research Society. See also Operations research topics Black box analysis Dynamic programming Inventory theory Optimal maintenance Real options valuation Artificial intelligence Operations researchers Operations researchers (category) George Dantzig Leonid Kantorovich Tjalling Koopmans Russell L. Ackoff Stafford Beer Alfred Blumstein C. West Churchman William W. Cooper Robert Dorfman Richard M. Karp Ramayya Krishnan Frederick W. Lanchester Thomas L. Magnanti Alvin E.
Roth Peter Whittle Related fields Behavioral operations research Big data Business engineering Business process management Database normalization Engineering management Geographic information systems Industrial engineering Industrial organization Managerial economics Military simulation Operational level of war Power system simulation Project production management Reliability engineering Scientific management Search-based software engineering Simulation modeling Strategic management Supply chain engineering System safety Wargaming References Further reading Classic books and articles R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton, 1957 Abraham Charnes, William W. Cooper, Management Models and Industrial Applications of Linear Programming, Volumes I and II, New York, John Wiley & Sons, 1961 Abraham Charnes, William W. Cooper, A. Henderson, An Introduction to Linear Programming, New York, John Wiley & Sons, 1953 C. West Churchman, Russell L. Ackoff & E. L. Arnoff, Introduction to Operations Research, New York: J. Wiley and Sons, 1957 George B. Dantzig, Linear Programming and Extensions, Princeton, Princeton University Press, 1963 Lester K. Ford, Jr., D. Ray Fulkerson, Flows in Networks, Princeton, Princeton University Press, 1962 Jay W. Forrester, Industrial Dynamics, Cambridge, MIT Press, 1961 L. V. Kantorovich, "Mathematical Methods of Organizing and Planning Production" Management Science, 4, 1960, 266–422 Ralph Keeney, Howard Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, New York, John Wiley & Sons, 1976 H. W. Kuhn, "The Hungarian Method for the Assignment Problem," Naval Research Logistics Quarterly, 1–2, 1955, 83–97 H. W. Kuhn, A. W. Tucker, "Nonlinear Programming," pp. 481–492 in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability B. O. Koopman, Search and Screening: General Principles and Historical Applications, New York, Pergamon Press, 1980 Tjalling C. Koopmans, editor, Activity Analysis of Production and Allocation, New York, John Wiley & Sons, 1951 Charles C. Holt, Franco Modigliani, John F. Muth, Herbert A. Simon, Planning Production, Inventories, and Work Force, Englewood Cliffs, NJ, Prentice-Hall, 1960 Philip M. Morse, George E. Kimball, Methods of Operations Research, New York, MIT Press and John Wiley & Sons, 1951 Robert O. Schlaifer, Howard Raiffa, Applied Statistical Decision Theory, Cambridge, Division of Research, Harvard Business School, 1961 Classic textbooks Taha, Hamdy A., "Operations Research: An Introduction", Pearson, 10th Edition, 2016 Frederick S. Hillier & Gerald J. Lieberman, Introduction to Operations Research, McGraw-Hill: Boston MA; 10th Edition, 2014 Robert J. Thierauf & Richard A. Grosse, "Decision Making Through Operations Research", John Wiley & Sons, INC, 1970 Harvey M. Wagner, Principles of Operations Research, Englewood Cliffs, Prentice-Hall, 1969 Wentzel (Ventsel), E. S. Introduction to Operations Research, Moscow: Soviet Radio Publishing House, 1964. History Saul I. Gass, Arjang A. Assad, An Annotated Timeline of Operations Research: An Informal History. New York, Kluwer Academic Publishers, 2005. Saul I. Gass (Editor), Arjang A. Assad (Editor), Profiles in Operations Research: Pioneers and Innovators. Springer, 2011 Maurice W. Kirby (Operational Research Society (Great Britain)). Operational Research in War and Peace: The British Experience from the 1930s to 1970, Imperial College Press, 2003. , J. K. Lenstra, A. H. G. Rinnooy Kan, A. 
Schrijver (editors) History of Mathematical Programming: A Collection of Personal Reminiscences, North-Holland, 1991 Charles W. McArthur, Operations Analysis in the U.S. Army Eighth Air Force in World War II, History of Mathematics, Vol. 4, Providence, American Mathematical Society, 1990 C. H. Waddington, O. R. in World War 2: Operational Research Against the U-boat, London, Elek Science, 1973. Richard Vahrenkamp: Mathematical Management – Operations Research in the United States and Western Europe, 1945 – 1990, in: Management Revue – Socio-Economic Studies, vol. 34 (2023), issue 1, pp. 69–91. External links What is Operations Research? International Federation of Operational Research Societies The Institute for Operations Research and the Management Sciences (INFORMS) Occupational Outlook Handbook, U.S. Department of Labor Bureau of Labor Statistics Industrial engineering Mathematical optimization in business Applied statistics Engineering disciplines Mathematical and quantitative methods (economics) Mathematical economics Decision-making
Operations research
[ "Mathematics", "Engineering" ]
5,917
[ "Applied mathematics", "Industrial engineering", "Operations research", "nan", "Mathematical economics", "Applied statistics" ]
43,589
https://en.wikipedia.org/wiki/Fluorite
Fluorite (also called fluorspar) is the mineral form of calcium fluoride, CaF2. It belongs to the halide minerals. It crystallizes in isometric cubic habit, although octahedral and more complex isometric forms are not uncommon. The Mohs scale of mineral hardness, based on scratch hardness comparison, defines value 4 as fluorite. Pure fluorite is colourless and transparent, both in visible and ultraviolet light, but impurities usually make it a colorful mineral and the stone has ornamental and lapidary uses. Industrially, fluorite is used as a flux for smelting, and in the production of certain glasses and enamels. The purest grades of fluorite are a source of fluoride for hydrofluoric acid manufacture, which is the intermediate source of most fluorine-containing fine chemicals. Optically clear transparent fluorite has anomalous partial dispersion, that is, its refractive index varies with the wavelength of light in a manner that differs from that of commonly used glasses, so fluorite is useful in making apochromatic lenses, and particularly valuable in photographic optics. Fluorite optics are also usable in the far-ultraviolet and mid-infrared ranges, where conventional glasses are too opaque for use. Fluorite also has low dispersion, and a high refractive index for its density. History and etymology The word fluorite is derived from the Latin verb fluere, meaning to flow. The mineral is used as a flux in iron smelting to decrease the viscosity of slag. The term flux comes from the Latin adjective fluxus, meaning flowing, loose, slack. The mineral fluorite was originally termed fluorspar and was first discussed in print in a 1530 work Bermannvs sive de re metallica dialogus [Bermannus; or dialogue about the nature of metals], by Georgius Agricola, as a mineral noted for its usefulness as a flux. Agricola, a German scientist with expertise in philology, mining, and metallurgy, named fluorspar as a Neo-Latinization of the German Flussspat from Fluss (stream, river) and Spat (meaning a nonmetallic mineral akin to gypsum, spærstān, spear stone, referring to its crystalline projections). In 1852, fluorite gave its name to the phenomenon of fluorescence, which is prominent in fluorites from certain locations, due to certain impurities in the crystal. Fluorite also gave the name to its constitutive element fluorine. Currently, the word "fluorspar" is most commonly used for fluorite as an industrial and chemical commodity, while "fluorite" is used mineralogically and in most other senses. In archeology, gemmology, classical studies, and Egyptology, the Latin terms murrina and myrrhina refer to fluorite. In book 37 of his Naturalis Historia, Pliny the Elder describes it as a precious stone with purple and white mottling, and noted that the Romans prized objects carved from it. Structure Fluorite crystallizes in a cubic motif. Crystal twinning is common and adds complexity to the observed crystal habits. Fluorite has four perfect cleavage planes that help produce octahedral fragments. The structural motif adopted by fluorite is so common that the motif is called the fluorite structure. Element substitution for the calcium cation often includes strontium and certain rare-earth elements (REE), such as yttrium and cerium. Occurrence and mining Fluorite forms as a late-crystallizing mineral in felsic igneous rocks typically through hydrothermal activity. It is particularly common in granitic pegmatites. It may occur as a vein deposit formed through hydrothermal activity particularly in limestones. 
In such vein deposits it can be associated with galena, sphalerite, barite, quartz, and calcite. Fluorite can also be found as a constituent of sedimentary rocks, either as grains or as the cementing material in sandstone. It is a common mineral mainly distributed in South Africa, China, Mexico, Mongolia, the United Kingdom, the United States, Canada, Tanzania, Rwanda and Argentina. The world reserves of fluorite are estimated at 230 million tonnes (Mt), with the largest deposits being in South Africa (about 41 Mt), Mexico (32 Mt) and China (24 Mt). China leads world production with about 3 Mt annually (in 2010), followed by Mexico (1.0 Mt), Mongolia (0.45 Mt), Russia (0.22 Mt), South Africa (0.13 Mt), Spain (0.12 Mt) and Namibia (0.11 Mt). One of the largest deposits of fluorspar in North America is located on the Burin Peninsula, Newfoundland, Canada. The first official recognition of fluorspar in the area was recorded by geologist J.B. Jukes in 1843. He noted an occurrence of "galena" or lead ore and fluoride of lime on the west side of St. Lawrence harbour. It is recorded that interest in the commercial mining of fluorspar began in 1928, with the first ore being extracted in 1933. Eventually, at Iron Springs Mine, the shafts reached considerable depths. In the St. Lawrence area, the veins are persistent for great lengths and several of them have wide lenses; veins of known workable size extend over a substantial area. In 2018, Canada Fluorspar Inc. commenced mine production again in St. Lawrence; in spring 2019, the company planned to develop a new shipping port on the west side of the Burin Peninsula as a more affordable means of moving its product to markets, and it successfully sent the first shipload of ore from the new port on July 31, 2021. This marked the first time in 30 years that ore had been shipped directly out of St. Lawrence. Cubic crystals up to 20 cm across have been found at Dalnegorsk, Russia. The largest documented single crystal of fluorite was a cube 2.12 meters in size and weighing approximately 16 tonnes. In Asturias (Spain) there are several fluorite deposits known internationally for the quality of the specimens they have yielded. In the area of Berbes, Ribadesella, fluorite appears as cubic crystals, sometimes with dodecahedral modifications, which can reach up to 10 cm on edge, with internal colour zoning, almost always violet in colour. It is associated with quartz and foliated aggregates of baryte. In the Emilio mine, in Loroñe, Colunga, the fluorite crystals, cubes with minor modifications of other forms, are colourless and transparent; they can reach 10 cm on edge. In the Moscona mine, in Villabona, the fluorite crystals, cubic without modifications of other forms, are yellow, up to 3 cm on edge. They are associated with large crystals of calcite and barite. "Blue John" One of the most famous of the older-known localities of fluorite is Castleton in Derbyshire, England, where, under the name of "Derbyshire Blue John", purple-blue fluorite was extracted from several mines or caves. During the 19th century, this attractive fluorite was mined for its ornamental value. The mineral Blue John is now scarce, and only a few hundred kilograms are mined each year for ornamental and lapidary use. Mining still takes place in Blue John Cavern and Treak Cliff Cavern. Recently discovered deposits in China have produced fluorite with coloring and banding similar to the classic Blue John stone.
Fluorescence George Gabriel Stokes named the phenomenon of fluorescence after fluorite in 1852. Many samples of fluorite exhibit fluorescence under ultraviolet light, a property that takes its name from fluorite. Many minerals, as well as other substances, fluoresce. Fluorescence involves the elevation of electron energy levels by quanta of ultraviolet light, followed by the progressive falling back of the electrons into their previous energy state, releasing quanta of visible light in the process. In fluorite, the visible light emitted is most commonly blue, but red, purple, yellow, green, and white also occur. The fluorescence of fluorite may be due to mineral impurities, such as yttrium and ytterbium, or organic matter, such as volatile hydrocarbons in the crystal lattice. In particular, the blue fluorescence seen in fluorites from certain parts of Great Britain, which was responsible for the naming of the phenomenon of fluorescence itself, has been attributed to the presence of inclusions of divalent europium in the crystal. Natural samples containing rare earth impurities such as erbium have also been observed to display upconversion fluorescence, in which infrared light stimulates emission of visible light, a phenomenon usually only reported in synthetic materials. One fluorescent variety of fluorite is chlorophane, which is reddish or purple in color and fluoresces brightly in emerald green when heated (thermoluminescence), or when illuminated with ultraviolet light. The color of visible light emitted when a sample of fluorite is fluorescing depends on where the original specimen was collected, since different impurities have been incorporated into the crystal lattice in different places. Not all fluorite fluoresces equally brightly, even within the same locality. Therefore, ultraviolet light is not a reliable tool for the identification of specimens, nor for quantifying the mineral in mixtures. For example, among British fluorites, those from Northumberland, County Durham, and eastern Cumbria are the most consistently fluorescent, whereas fluorites from Yorkshire, Derbyshire, and Cornwall, if they fluoresce at all, are generally only feebly fluorescent. Fluorite also exhibits the property of thermoluminescence. Color Fluorite is allochromatic, meaning that it can be tinted with elemental impurities. Fluorite comes in a wide range of colors and has consequently been dubbed "the most colorful mineral in the world". Every color of the rainbow in various shades is represented by fluorite samples, along with white, black, and clear crystals. The most common colors are purple, blue, green, yellow, or colorless. Less common are pink, red, white, brown, and black. Color zoning or banding is commonly present. The color of the fluorite is determined by factors including impurities, exposure to radiation, and the presence or absence of color centers. Uses Source of fluorine and fluoride Fluorite is a major source of hydrogen fluoride, a commodity chemical used to produce a wide range of materials. Hydrogen fluoride is liberated from the mineral by the action of concentrated sulfuric acid: CaF2(s) + H2SO4 → CaSO4(s) + 2 HF(g) The resulting HF is converted into fluorine, fluorocarbons, and diverse fluoride materials. As of the late 1990s, five billion kilograms were mined annually. There are three principal types of industrial use for natural fluorite, commonly referred to as "fluorspar" in these industries, corresponding to different grades of purity.
Metallurgical grade fluorite (60–85% CaF2), the lowest of the three grades, has traditionally been used as a flux to lower the melting point of raw materials in steel production to aid the removal of impurities, and later in the production of aluminium. Ceramic grade fluorite (85–95% CaF2) is used in the manufacture of opalescent glass, enamels, and cooking utensils. The highest grade, "acid grade fluorite" (97% or more CaF2), accounts for about 95% of fluorite consumption in the US where it is used to make hydrogen fluoride and hydrofluoric acid by reacting the fluorite with sulfuric acid. Internationally, acid-grade fluorite is also used in the production of AlF3 and cryolite (Na3AlF6), which are the main fluorine compounds used in aluminium smelting. Alumina is dissolved in a bath that consists primarily of molten Na3AlF6, AlF3, and fluorite (CaF2) to allow electrolytic recovery of aluminium. Fluorine losses are replaced entirely by the addition of AlF3, the majority of which react with excess sodium from the alumina to form Na3AlF6. Niche uses Lapidary uses Natural fluorite mineral has ornamental and lapidary uses. Fluorite may be drilled into beads and used in jewelry, although due to its relative softness it is not widely used as a semiprecious stone. It is also used for ornamental carvings, with expert carvings taking advantage of the stone's zonation. Optics In the laboratory, calcium fluoride is commonly used as a window material for both infrared and ultraviolet wavelengths, since it is transparent in these regions (about 0.15 μm to 9 μm) and exhibits an extremely low change in refractive index with wavelength. Furthermore, the material is attacked by few reagents. At wavelengths as short as 157 nm, a common wavelength used for semiconductor stepper manufacture for integrated circuit lithography, the refractive index of calcium fluoride shows some non-linearity at high power densities, which has inhibited its use for this purpose. In the early years of the 21st century, the stepper market for calcium fluoride collapsed, and many large manufacturing facilities have been closed. Canon and other manufacturers have used synthetically grown crystals of calcium fluoride components in lenses to aid apochromatic design, and to reduce light dispersion. This use has largely been superseded by newer glasses and computer-aided design. As an infrared optical material, calcium fluoride is widely available and was sometimes known by the Eastman Kodak trademarked name "Irtran-3", although this designation is obsolete. Fluorite should not be confused with fluoro-crown (or fluorine crown) glass, a type of low-dispersion glass that has special optical properties approaching fluorite. True fluorite is not a glass but a crystalline material. Lenses or optical groups made using this low dispersion glass as one or more elements exhibit less chromatic aberration than those utilizing conventional, less expensive crown glass and flint glass elements to make an achromatic lens. Optical groups employ a combination of different types of glass; each type of glass refracts light in a different way. By using combinations of different types of glass, lens manufacturers are able to cancel out or significantly reduce unwanted characteristics; chromatic aberration being the most important. The best of such lens designs are often called apochromatic (see above). 
Fluoro-crown glass (such as Schott FK51), usually in combination with an appropriate "flint" glass (such as Schott KzFSN 2), can give very high performance in telescope objective lenses, as well as microscope objectives and camera telephoto lenses. Fluorite elements are similarly paired with complementary "flint" elements (such as Schott LaK 10). The refractive qualities of fluorite and of certain flint elements provide a lower and more uniform dispersion across the spectrum of visible light, thereby keeping colors focused more closely together. Lenses made with fluorite are superior to fluoro-crown based lenses, at least for doublet telescope objectives, but are more difficult to produce and more costly. The use of fluorite for prisms and lenses was studied and promoted by Victor Schumann near the end of the 19th century. Naturally occurring fluorite crystals without optical defects were only large enough to produce microscope objectives. With the advent of synthetically grown fluorite crystals in the 1950s and 1960s, it could be used instead of glass in some high-performance optical telescope and camera lens elements. In telescopes, fluorite elements allow high-resolution images of astronomical objects at high magnifications. Canon Inc. produces synthetic fluorite crystals that are used in its better telephoto lenses. The use of fluorite for telescope lenses has declined since the 1990s, as newer designs using fluoro-crown glass, including triplets, have offered comparable performance at lower prices. Fluorite and various combinations of fluoride compounds can be made into synthetic crystals which have applications in lasers and special optics for UV and infrared. Exposure tools for the semiconductor industry make use of fluorite optical elements for ultraviolet light at wavelengths of about 157 nanometers. Fluorite has a uniquely high transparency at this wavelength. Fluorite objective lenses are manufactured by the larger microscope firms (Nikon, Olympus, Carl Zeiss and Leica). Their transparency to ultraviolet light enables them to be used for fluorescence microscopy. The fluorite also serves to correct optical aberrations in these lenses. Nikon has previously manufactured at least one camera lens with fluorite and synthetic quartz elements (105 mm f/4.5 UV) for the production of ultraviolet images. Konica produced a fluorite lens for its SLR cameras – the Hexanon 300 mm f/6.3. Source of fluorine gas in nature In 2012, the first source of naturally occurring fluorine gas was found in fluorite mines in Bavaria, Germany. It was previously thought that fluorine gas did not occur naturally, because it is so reactive and would rapidly react with other chemicals. Fluorite is normally colorless, but some varied forms found nearby look black, and are known as 'fetid fluorite' or antozonite. The minerals, containing small amounts of uranium and its daughter products, release radiation sufficiently energetic to induce oxidation of fluoride anions within the structure to fluorine, which becomes trapped inside the mineral. The color of fetid fluorite is predominantly due to the calcium atoms remaining. Solid-state fluorine-19 NMR carried out on the gas contained in the antozonite revealed a peak at 425 ppm, which is consistent with F2.
Gallery See also List of countries by fluorite production List of minerals Magnesium fluoride – also used in UV optics References External links Educational article about the different colors of fluorites crystals from Asturias, Spain An educational tour of Weardale Fluorite Illinois State Geologic Survey Illinois state mineral Barber Cup and Crawford Cup, related Roman cups at British Museum Cubic minerals Minerals in space group 225 Evaporite Fluorine minerals Luminescent minerals Industrial minerals Symbols of Illinois
Fluorite
[ "Chemistry" ]
3,878
[ "Luminescence", "Luminescent minerals" ]
43,590
https://en.wikipedia.org/wiki/Flux
Flux describes any effect that appears to pass or travel (whether it actually moves or not) through a surface or substance. Flux is a concept in applied mathematics and vector calculus which has many applications in physics. For transport phenomena, flux is a vector quantity, describing the magnitude and direction of the flow of a substance or property. In vector calculus flux is a scalar quantity, defined as the surface integral of the perpendicular component of a vector field over a surface. Terminology The word flux comes from Latin: fluxus means "flow", and fluere is "to flow". As fluxion, this term was introduced into differential calculus by Isaac Newton. The concept of heat flux was a key contribution of Joseph Fourier, in the analysis of heat transfer phenomena. His seminal treatise Théorie analytique de la chaleur (The Analytical Theory of Heat), defines fluxion as a central quantity and proceeds to derive the now well-known expressions of flux in terms of temperature differences across a slab, and then more generally in terms of temperature gradients or differentials of temperature, across other geometries. One could argue, based on the work of James Clerk Maxwell, that the transport definition precedes the definition of flux used in electromagnetism. The specific quote from Maxwell is: According to the transport definition, flux may be a single vector, or it may be a vector field / function of position. In the latter case flux can readily be integrated over a surface. By contrast, according to the electromagnetism definition, flux is the integral over a surface; it makes no sense to integrate a second-definition flux for one would be integrating over a surface twice. Thus, Maxwell's quote only makes sense if "flux" is being used according to the transport definition (and furthermore is a vector field rather than single vector). This is ironic because Maxwell was one of the major developers of what we now call "electric flux" and "magnetic flux" according to the electromagnetism definition. Their names in accordance with the quote (and transport definition) would be "surface integral of electric flux" and "surface integral of magnetic flux", in which case "electric flux" would instead be defined as "electric field" and "magnetic flux" defined as "magnetic field". This implies that Maxwell conceived of these fields as flows/fluxes of some sort. Given a flux according to the electromagnetism definition, the corresponding flux density, if that term is used, refers to its derivative along the surface that was integrated. By the Fundamental theorem of calculus, the corresponding flux density is a flux according to the transport definition. Given a current such as electric current—charge per time, current density would also be a flux according to the transport definition—charge per time per area. Due to the conflicting definitions of flux, and the interchangeability of flux, flow, and current in nontechnical English, all of the terms used in this paragraph are sometimes used interchangeably and ambiguously. Concrete fluxes in the rest of this article will be used in accordance to their broad acceptance in the literature, regardless of which definition of flux the term corresponds to. Flux as flow rate per unit area In transport phenomena (heat transfer, mass transfer and fluid dynamics), flux is defined as the rate of flow of a property per unit area, which has the dimensions [quantity]·[time]−1·[area]−1. The area is of the surface the property is flowing "through" or "across". 
For example, the amount of water that flows through a cross section of a river each second divided by the area of that cross section, or the amount of sunlight energy that lands on a patch of ground each second divided by the area of the patch, are kinds of flux. General mathematical definition (transport) Here are three definitions in increasing order of complexity. Each is a special case of the following. In all cases the symbol j (or J) is used for flux, q for the physical quantity that flows, t for time, and A for area. These identifiers will be written in bold when and only when they are vectors. First, flux as a (single) scalar: j = q/(tA). In this case the surface in which flux is being measured is fixed and has area A. The surface is assumed to be flat, and the flow is assumed to be everywhere constant with respect to position and perpendicular to the surface. Second, flux as a scalar field defined along a surface, i.e. a function of points p on the surface. As before, the surface is assumed to be flat, and the flow is assumed to be everywhere perpendicular to it. However, the flow need not be constant. q is now a function of p, a point on the surface, and A, an area. Rather than measure the total flow through the surface, q measures the flow through the disk with area A centered at p along the surface. Finally, flux as a vector field. In this case, there is no fixed surface we are measuring over. q is a function of a point, an area, and a direction (given by a unit vector), and measures the flow through the disk of area A perpendicular to that unit vector. The vector flux j is defined by picking the unit vector that maximizes the flow around the point, because the true flow is maximized across the disk that is perpendicular to it. The unit vector thus uniquely maximizes the function when it points in the "true direction" of the flow. (Strictly speaking, this is an abuse of notation because the "argmax" cannot directly compare vectors; we take the vector with the biggest norm instead.) Properties These direct definitions, especially the last, are rather unwieldy. For example, the argmax construction is artificial from the perspective of empirical measurements, when with a weathervane or similar one can easily deduce the direction of flux at a point. Rather than defining the vector flux directly, it is often more intuitive to state some properties about it. Furthermore, from these properties the flux can be determined uniquely anyway. If the flux j passes through the area at an angle θ to the area normal n̂, then the dot product is j · n̂ = j cos θ. That is, the component of flux passing through the surface (i.e. normal to it) is j cos θ, while the component of flux passing tangential to the area is j sin θ, but there is no flux actually passing through the area in the tangential direction. The only component of flux passing normal to the area is the cosine component. For vector flux, the surface integral of j over a surface S gives the amount of the property flowing per unit of time through the surface: dq/dt = ∬S j · n̂ dA = ∬S j · dA, where A (and its infinitesimal dA) is the vector area, combining the magnitude of the area through which the property passes with a unit vector normal to the area. Unlike in the second definition, the surface here need not be flat.
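As a small numerical check of the cosine dependence just described (a sketch with made-up numbers, assuming NumPy is available): a uniform flow j crossing a flat surface whose normal is tilted by an angle θ contributes only its normal component, so the dot-product form and the |j| A cos θ form agree.

```python
# Flux of a uniform flow through a flat, tilted surface of area A.
import numpy as np

j = np.array([0.0, 0.0, 3.0])      # uniform flow vector along z (arbitrary units)
A = 2.0                            # surface area
theta = np.radians(60.0)           # angle between the flow and the surface normal

n_hat = np.array([np.sin(theta), 0.0, np.cos(theta)])   # unit normal of the surface

flux_dot = np.dot(j, n_hat) * A                   # (j . n_hat) * A
flux_cos = np.linalg.norm(j) * A * np.cos(theta)  # |j| * A * cos(theta)
print(flux_dot, flux_cos)                         # both give 3.0
```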
Finally, we can integrate again over the time duration t1 to t2, getting the total amount of the property flowing through the surface in that time (t2 − t1). Transport fluxes Eight of the most common forms of flux from the transport phenomena literature are defined as follows: Momentum flux, the rate of transfer of momentum across a unit area (N·s·m−2·s−1). (Newton's law of viscosity) Heat flux, the rate of heat flow across a unit area (J·m−2·s−1). (Fourier's law of conduction) (This definition of heat flux fits Maxwell's original definition.) Diffusion flux, the rate of movement of molecules across a unit area (mol·m−2·s−1). (Fick's law of diffusion) Volumetric flux, the rate of volume flow across a unit area (m3·m−2·s−1). (Darcy's law of groundwater flow) Mass flux, the rate of mass flow across a unit area (kg·m−2·s−1). (Either an alternate form of Fick's law that includes the molecular mass, or an alternate form of Darcy's law that includes the density.) Radiative flux, the amount of energy transferred in the form of photons at a certain distance from the source per unit area per second (J·m−2·s−1). Used in astronomy to determine the magnitude and spectral class of a star. Also acts as a generalization of heat flux, which is equal to the radiative flux when restricted to the electromagnetic spectrum. Energy flux, the rate of transfer of energy through a unit area (J·m−2·s−1). The radiative flux and heat flux are specific cases of energy flux. Particle flux, the rate of transfer of particles through a unit area ([number of particles] m−2·s−1). These fluxes are vectors at each point in space, and have a definite magnitude and direction. Also, one can take the divergence of any of these fluxes to determine the accumulation rate of the quantity in a control volume around a given point in space. For incompressible flow, the divergence of the volume flux is zero. Chemical diffusion As mentioned above, the chemical molar flux of a component A in an isothermal, isobaric system is defined in Fick's law of diffusion as JA = −DAB ∇cA, where the nabla symbol ∇ denotes the gradient operator, DAB is the diffusion coefficient (m2·s−1) of component A diffusing through component B, and cA is the concentration (mol/m3) of component A. This flux has units of mol·m−2·s−1, and fits Maxwell's original definition of flux. For dilute gases, kinetic molecular theory relates the diffusion coefficient D to the particle density n = N/V, the molecular mass m, the collision cross section, and the absolute temperature T through an expression whose second factor is the mean free path and whose square-root factor (containing the Boltzmann constant k) is the mean velocity of the particles. In turbulent flows, the transport by eddy motion can be expressed as a grossly increased diffusion coefficient. Quantum mechanics In quantum mechanics, particles of mass m in the quantum state ψ(r, t) have a probability density defined as ρ = ψ*ψ = |ψ|2. So the probability of finding a particle in a differential volume element d3r is |ψ(r, t)|2 d3r. Then the number of particles passing perpendicularly through unit area of a cross-section per unit time is the probability flux, j = (ħ/2mi)(ψ* ∇ψ − ψ ∇ψ*). This is sometimes referred to as the probability current or current density, or probability flux density. Flux as a surface integral General mathematical definition (surface integral) As a mathematical concept, flux is represented by the surface integral of a vector field, ΦF = ∬A F · dA, where F is a vector field, and dA is the vector area of the surface A, directed as the surface normal.
When the integrand is written with an explicit unit normal, n is the outward-pointing unit normal vector to the surface. The surface has to be orientable, i.e. two sides can be distinguished: the surface does not fold back onto itself. Also, the surface has to be actually oriented, i.e. we use a convention as to which direction of flow is counted positive; flowing backward is then counted negative. The surface normal is usually directed by the right-hand rule. Conversely, one can consider the flux the more fundamental quantity and call the vector field the flux density. Often a vector field is drawn by curves (field lines) following the "flow"; the magnitude of the vector field is then the line density, and the flux through a surface is the number of lines. Lines originate from areas of positive divergence (sources) and end at areas of negative divergence (sinks). See also the image at right: the number of red arrows passing through a unit area is the flux density, the curve encircling the red arrows denotes the boundary of the surface, and the orientation of the arrows with respect to the surface denotes the sign of the inner product of the vector field with the surface normals. If the surface encloses a 3D region, usually the surface is oriented such that the influx is counted positive; the opposite is the outflux. The divergence theorem states that the net outflux through a closed surface, in other words the net outflux from a 3D region, is found by adding the local net outflow from each point in the region (which is expressed by the divergence). If the surface is not closed, it has an oriented curve as boundary. Stokes' theorem states that the flux of the curl of a vector field is the line integral of the vector field over this boundary. This path integral is also called circulation, especially in fluid dynamics. Thus the curl is the circulation density. We can apply the flux and these theorems to many disciplines in which we see currents, forces, etc., applied through areas. Electromagnetism Electric flux An electric "charge", such as a single proton in space, has a magnitude defined in coulombs. Such a charge has an electric field surrounding it. In pictorial form, the electric field from a positive point charge can be visualized as a dot radiating electric field lines (sometimes also called "lines of force"). Conceptually, electric flux can be thought of as "the number of field lines" passing through a given area. Mathematically, electric flux is the integral of the normal component of the electric field over a given area. Hence, units of electric flux are, in the MKS system, newtons per coulomb times meters squared, or N m2/C. (Electric flux density is the electric flux per unit area, and is a measure of the strength of the normal component of the electric field averaged over the area of integration. Its units are N/C, the same as the electric field in MKS units.) Two forms of electric flux are used, one for the E-field, ΦE = ∬A E · dA, and one for the D-field (called the electric displacement), ΦD = ∬A D · dA. This quantity arises in Gauss's law, which states that the flux of the electric field E out of a closed surface is proportional to the electric charge QA enclosed in the surface (independent of how that charge is distributed); in integral form, ∮S E · dA = QA/ε0, where ε0 is the permittivity of free space.
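Gauss's law as stated above can be checked numerically. The sketch below (assuming NumPy; the charge and radius values are arbitrary) integrates the radial field of a point charge over a concentric sphere and compares the result with QA/ε0; the radius cancels out, illustrating that the enclosed charge alone fixes the flux.

```python
# Numerical check of Gauss's law for a point charge at the centre of a sphere.
import numpy as np

q = 1e-9                        # enclosed charge in coulombs (arbitrary)
eps0 = 8.8541878128e-12         # permittivity of free space, F/m
R = 0.5                         # sphere radius in metres (arbitrary)
k = 1 / (4 * np.pi * eps0)

# On the sphere, E is radial with magnitude k*q/R^2, and the area element is
# R^2 * sin(theta) dtheta dphi, so E . dA is integrated over theta and phi.
theta, dth = np.linspace(0.0, np.pi, 2000, retstep=True)
flux = (k * q / R**2) * R**2 * np.sum(np.sin(theta)) * dth * (2 * np.pi)

print(flux, q / eps0)           # the two values agree to within the grid error
```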
If one considers the flux of the electric field vector, E, for a tube near a point charge in the field of the charge but not containing it with sides formed by lines tangent to the field, the flux for the sides is zero and there is an equal and opposite flux at both ends of the tube. This is a consequence of Gauss's Law applied to an inverse square field. The flux for any cross-sectional surface of the tube will be the same. The total flux for any surface surrounding a charge q is q/ε0. In free space the electric displacement is given by the constitutive relation D = ε0 E, so for any bounding surface the D-field flux equals the charge QA within it. Here the expression "flux of" indicates a mathematical operation and, as can be seen, the result is not necessarily a "flow", since nothing actually flows along electric field lines. Magnetic flux The magnetic flux density (magnetic field) having the unit Wb/m2 (Tesla) is denoted by B, and magnetic flux is defined analogously: with the same notation above. The quantity arises in Faraday's law of induction, where the magnetic flux is time-dependent either because the boundary is time-dependent or magnetic field is time-dependent. In integral form: where d is an infinitesimal vector line element of the closed curve , with magnitude equal to the length of the infinitesimal line element, and direction given by the tangent to the curve , with the sign determined by the integration direction. The time-rate of change of the magnetic flux through a loop of wire is minus the electromotive force created in that wire. The direction is such that if current is allowed to pass through the wire, the electromotive force will cause a current which "opposes" the change in magnetic field by itself producing a magnetic field opposite to the change. This is the basis for inductors and many electric generators. Poynting flux Using this definition, the flux of the Poynting vector S over a specified surface is the rate at which electromagnetic energy flows through that surface, defined like before: The flux of the Poynting vector through a surface is the electromagnetic power, or energy per unit time, passing through that surface. This is commonly used in analysis of electromagnetic radiation, but has application to other electromagnetic systems as well. Confusingly, the Poynting vector is sometimes called the power flux, which is an example of the first usage of flux, above. It has units of watts per square metre (W/m2). SI radiometry units See also AB magnitude Explosively pumped flux compression generator Eddy covariance flux (aka, eddy correlation, eddy flux) Fast Flux Test Facility Fluence (flux of the first sort for particle beams) Fluid dynamics Flux footprint Flux pinning Flux quantization Gauss's law Inverse-square law Jansky (non SI unit of spectral flux density) Latent heat flux Luminous flux Magnetic flux Magnetic flux quantum Neutron flux Poynting flux Poynting theorem Radiant flux Rapid single flux quantum Sound energy flux Volumetric flux (flux of the first sort for fluids) Volumetric flow rate (flux of the second sort for fluids) Notes Further reading External links Physical quantities Vector calculus Rates
Flux
[ "Physics", "Mathematics" ]
3,596
[ "Physical phenomena", "Quantity", "Physical properties", "Physical quantities" ]
43,972
https://en.wikipedia.org/wiki/Partial%20pressure
In a mixture of gases, each constituent gas has a partial pressure which is the notional pressure of that constituent gas as if it alone occupied the entire volume of the original mixture at the same temperature. The total pressure of an ideal gas mixture is the sum of the partial pressures of the gases in the mixture (Dalton's Law). The partial pressure of a gas is a measure of thermodynamic activity of the gas's molecules. Gases dissolve, diffuse, and react according to their partial pressures but not according to their concentrations in gas mixtures or liquids. This general property of gases is also true in chemical reactions of gases in biology. For example, the necessary amount of oxygen for human respiration, and the amount that is toxic, is set by the partial pressure of oxygen alone. This is true across a very wide range of different concentrations of oxygen present in various inhaled breathing gases or dissolved in blood; consequently, mixture ratios, like that of breathable 20% oxygen and 80% Nitrogen, are determined by volume instead of by weight or mass. Furthermore, the partial pressures of oxygen and carbon dioxide are important parameters in tests of arterial blood gases. That said, these pressures can also be measured in, for example, cerebrospinal fluid. Symbol The symbol for pressure is usually or which may use a subscript to identify the pressure, and gas species are also referred to by subscript. When combined, these subscripts are applied recursively. Examples: or = pressure at time 1 or = partial pressure of hydrogen or or PaO2 = arterial partial pressure of oxygen or or PvO2 = venous partial pressure of oxygen Dalton's law of partial pressures Dalton's law expresses the fact that the total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the individual gases in the mixture. This equality arises from the fact that in an ideal gas, the molecules are so far apart that they do not interact with each other. Most actual real-world gases come very close to this ideal. For example, given an ideal gas mixture of nitrogen (N2), hydrogen (H2) and ammonia (NH3): where: = total pressure of the gas mixture = partial pressure of nitrogen (N2) = partial pressure of hydrogen (H2) = partial pressure of ammonia (NH3) Ideal gas mixtures Ideally the ratio of partial pressures equals the ratio of the number of molecules. That is, the mole fraction of an individual gas component in an ideal gas mixture can be expressed in terms of the component's partial pressure or the moles of the component: and the partial pressure of an individual gas component in an ideal gas can be obtained using this expression: The mole fraction of a gas component in a gas mixture is equal to the volumetric fraction of that component in a gas mixture. The ratio of partial pressures relies on the following isotherm relation: VX is the partial volume of any individual gas component (X) Vtot is the total volume of the gas mixture pX is the partial pressure of gas X ptot is the total pressure of the gas mixture nX is the amount of substance of gas (X) ntot is the total amount of substance in gas mixture Partial volume (Amagat's law of additive volume) The partial volume of a particular gas in a mixture is the volume of one component of the gas mixture. It is useful in gas mixtures, e.g. air, to focus on one particular gas component, e.g. oxygen. 
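The isotherm relation referred to above can be summarized compactly using only the symbols just defined; writing x_X for the mole fraction of component X (a symbol chosen here for brevity), it also yields the expression for an individual partial pressure:

```latex
\frac{V_X}{V_{tot}} = \frac{p_X}{p_{tot}} = \frac{n_X}{n_{tot}} = x_X,
\qquad
p_X = x_X\, p_{tot}
```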
It can be approximated both from partial pressure and molar fraction: VX is the partial volume of an individual gas component X in the mixture Vtot is the total volume of the gas mixture pX is the partial pressure of gas X ptot is the total pressure of the gas mixture nX is the amount of substance of gas X ntot is the total amount of substance in the gas mixture Vapor pressure Vapor pressure is the pressure of a vapor in equilibrium with its non-vapor phases (i.e., liquid or solid). Most often the term is used to describe a liquid's tendency to evaporate. It is a measure of the tendency of molecules and atoms to escape from a liquid or a solid. A liquid's atmospheric pressure boiling point corresponds to the temperature at which its vapor pressure is equal to the surrounding atmospheric pressure and it is often called the normal boiling point. The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point of the liquid. The vapor pressure chart displayed has graphs of the vapor pressures versus temperatures for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points. For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. At higher altitudes, the atmospheric pressure is less than that at sea level, so boiling points of liquids are reduced. At the top of Mount Everest, the atmospheric pressure is approximately 0.333 atm, so by using the graph, the boiling point of diethyl ether would be approximately 7.5 °C versus 34.6 °C at sea level (1 atm). Equilibrium constants of reactions involving gas mixtures It is possible to work out the equilibrium constant for a chemical reaction involving a mixture of gases given the partial pressure of each gas and the overall reaction formula. For a reversible reaction involving gas reactants and gas products, such as: {\mathit{a}A} + {\mathit{b}B} <=> {\mathit{c}C} + {\mathit{d}D} the equilibrium constant of the reaction would be: For reversible reactions, changes in the total pressure, temperature or reactant concentrations will shift the equilibrium so as to favor either the right or left side of the reaction in accordance with Le Chatelier's Principle. However, the reaction kinetics may either oppose or enhance the equilibrium shift. In some cases, the reaction kinetics may be the overriding factor to consider. Henry's law and the solubility of gases Gases will dissolve in liquids to an extent that is determined by the equilibrium between the undissolved gas and the gas that has dissolved in the liquid (called the solvent). The equilibrium constant for that equilibrium is: where: =  the equilibrium constant for the solvation process =  partial pressure of gas in equilibrium with a solution containing some of the gas =  the concentration of gas in the liquid solution The form of the equilibrium constant shows that the concentration of a solute gas in a solution is directly proportional to the partial pressure of that gas above the solution. This statement is known as Henry's law and the equilibrium constant is quite often referred to as the Henry's law constant. Henry's law is sometimes written as: where is also referred to as the Henry's law constant. 
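In one common convention the two versions may be written as follows, with k and k' standing for the two forms of the Henry's law constant (symbols chosen here for illustration, not those of the original equations); one constant is simply the reciprocal of the other:

```latex
p_{gas} = k\, c_{gas}
\qquad\text{or}\qquad
c_{gas} = k'\, p_{gas},
\qquad k' = \frac{1}{k}
```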
As can be seen by comparing equations () and () above, is the reciprocal of . Since both may be referred to as the Henry's law constant, readers of the technical literature must be quite careful to note which version of the Henry's law equation is being used. Henry's law is an approximation that only applies for dilute, ideal solutions and for solutions where the liquid solvent does not react chemically with the gas being dissolved. In diving breathing gases In underwater diving the physiological effects of individual component gases of breathing gases are a function of partial pressure. Using diving terms, partial pressure is calculated as: partial pressure = (total absolute pressure) × (volume fraction of gas component) For the component gas "i": pi = P × Fi For example, at underwater, the total absolute pressure is (i.e., 1 bar of atmospheric pressure + 5 bar of water pressure) and the partial pressures of the main components of air, oxygen 21% by volume and nitrogen approximately 79% by volume are: pN2 = 6 bar × 0.79 = 4.7 bar absolute pO2 = 6 bar × 0.21 = 1.3 bar absolute The minimum safe lower limit for the partial pressures of oxygen in a breathing gas mixture for diving is absolute. Hypoxia and sudden unconsciousness can become a problem with an oxygen partial pressure of less than 0.16 bar absolute. Oxygen toxicity, involving convulsions, becomes a problem when oxygen partial pressure is too high. The NOAA Diving Manual recommends a maximum single exposure of 45 minutes at 1.6 bar absolute, of 120 minutes at 1.5 bar absolute, of 150 minutes at 1.4 bar absolute, of 180 minutes at 1.3 bar absolute and of 210 minutes at 1.2 bar absolute. Oxygen toxicity becomes a risk when these oxygen partial pressures and exposures are exceeded. The partial pressure of oxygen also determines the maximum operating depth of a gas mixture. Narcosis is a problem when breathing gases at high pressure. Typically, the maximum total partial pressure of narcotic gases used when planning for technical diving may be around 4.5 bar absolute, based on an equivalent narcotic depth of . The effect of a toxic contaminant such as carbon monoxide in breathing gas is also related to the partial pressure when breathed. A mixture which may be relatively safe at the surface could be dangerously toxic at the maximum depth of a dive, or a tolerable level of carbon dioxide in the breathing loop of a diving rebreather may become intolerable within seconds during descent when the partial pressure rapidly increases, and could lead to panic or incapacitation of the diver. In medicine The partial pressures of particularly oxygen () and carbon dioxide () are important parameters in tests of arterial blood gases, but can also be measured in, for example, cerebrospinal fluid. See also References Engineering thermodynamics Equilibrium chemistry Gas laws Gases Physical chemistry Pressure Underwater diving physics Distillation
Partial pressure
[ "Physics", "Chemistry", "Engineering" ]
2,077
[ "Physical quantities", "Engineering thermodynamics", "Phases of matter", "Pressure", "Thermodynamics", "Statistical mechanics", "Physical chemistry", "Gases", "Mechanical quantities", "Equilibrium chemistry", "Distillation", "Wikipedia categories named after physical quantities", "Scalar phy...
44,044
https://en.wikipedia.org/wiki/Oceanography
Oceanography (), also known as oceanology, sea science, ocean science, and marine science, is the scientific study of the ocean, including its physics, chemistry, biology, and geology. It is an Earth science, which covers a wide range of topics, including ocean currents, waves, and geophysical fluid dynamics; fluxes of various chemical substances and physical properties within the ocean and across its boundaries; ecosystem dynamics; and plate tectonics and seabed geology. Oceanographers draw upon a wide range of disciplines to deepen their understanding of the world’s oceans, incorporating insights from astronomy, biology, chemistry, geography, geology, hydrology, meteorology and physics. History Early history Humans first acquired knowledge of the waves and currents of the seas and oceans in pre-historic times. Observations on tides were recorded by Aristotle and Strabo in 384–322 BC. Early exploration of the oceans was primarily for cartography and mainly limited to its surfaces and of the animals that fishermen brought up in nets, though depth soundings by lead line were taken. The Portuguese campaign of Atlantic navigation is the earliest example of a systematic scientific large project, sustained over many decades, studying the currents and winds of the Atlantic. The work of Pedro Nunes (1502–1578) is remembered in the navigation context for the determination of the loxodromic curve: the shortest course between two points on the surface of a sphere represented onto a two-dimensional map. When he published his "Treatise of the Sphere" (1537), mostly a commentated translation of earlier work by others, he included a treatise on geometrical and astronomic methods of navigation. There he states clearly that Portuguese navigations were not an adventurous endeavour: "nam se fezeram indo a acertar: mas partiam os nossos mareantes muy ensinados e prouidos de estromentos e regras de astrologia e geometria que sam as cousas que os cosmographos ham dadar apercebidas (...) e leuaua cartas muy particularmente rumadas e na ja as de que os antigos vsauam" (were not done by chance: but our seafarers departed well taught and provided with instruments and rules of astrology (astronomy) and geometry which were matters the cosmographers would provide (...) and they took charts with exact routes and no longer those used by the ancient). His credibility rests on being personally involved in the instruction of pilots and senior seafarers from 1527 onwards by Royal appointment, along with his recognized competence as mathematician and astronomer. The main problem in navigating back from the south of the Canary Islands (or south of Boujdour) by sail alone, is due to the change in the regime of winds and currents: the North Atlantic gyre and the Equatorial counter current will push south along the northwest bulge of Africa, while the uncertain winds where the Northeast trades meet the Southeast trades (the doldrums) leave a sailing ship to the mercy of the currents. Together, prevalent current and wind make northwards progress very difficult or impossible. It was to overcome this problem and clear the passage to India around Africa as a viable maritime trade route, that a systematic plan of exploration was devised by the Portuguese. The return route from regions south of the Canaries became the 'volta do largo' or 'volta do mar'. 
The 'rediscovery' of the Azores islands in 1427 is merely a reflection of the heightened strategic importance of the islands, now sitting on the return route from the western coast of Africa (sequentially called 'volta de Guiné' and 'volta da Mina'); and the references to the Sargasso Sea (also called at the time 'Mar da Baga'), to the west of the Azores, in 1436, reveals the western extent of the return route. This is necessary, under sail, to make use of the southeasterly and northeasterly winds away from the western coast of Africa, up to the northern latitudes where the westerly winds will bring the seafarers towards the western coasts of Europe. The secrecy involving the Portuguese navigations, with the death penalty for the leaking of maps and routes, concentrated all sensitive records in the Royal Archives, completely destroyed by the Lisbon earthquake of 1775. However, the systematic nature of the Portuguese campaign, mapping the currents and winds of the Atlantic, is demonstrated by the understanding of the seasonal variations, with expeditions setting sail at different times of the year taking different routes to take account of seasonal predominate winds. This happens from as early as late 15th century and early 16th: Bartolomeu Dias followed the African coast on his way south in August 1487, while Vasco da Gama would take an open sea route from the latitude of Sierra Leone, spending three months in the open sea of the South Atlantic to profit from the southwards deflection of the southwesterly on the Brazilian side (and the Brazilian current going southward - Gama departed in July 1497); and Pedro Álvares Cabral (departing March 1500) took an even larger arch to the west, from the latitude of Cape Verde, thus avoiding the summer monsoon (which would have blocked the route taken by Gama at the time he set sail). Furthermore, there were systematic expeditions pushing into the western Northern Atlantic (Teive, 1454; Vogado, 1462; Teles, 1474; Ulmo, 1486). The documents relating to the supplying of ships, and the ordering of sun declination tables for the southern Atlantic for as early as 1493–1496, all suggest a well-planned and systematic activity happening during the decade long period between Bartolomeu Dias finding the southern tip of Africa, and Gama's departure; additionally, there are indications of further travels by Bartolomeu Dias in the area. The most significant consequence of this systematic knowledge was the negotiation of the Treaty of Tordesillas in 1494, moving the line of demarcation 270 leagues to the west (from 100 to 370 leagues west of the Azores), bringing what is now Brazil into the Portuguese area of domination. The knowledge gathered from open sea exploration allowed for the well-documented extended periods of sail without sight of land, not by accident but as pre-determined planned route; for example, 30 days for Bartolomeu Dias culminating on Mossel Bay, the three months Gama spent in the South Atlantic to use the Brazil current (southward), or the 29 days Cabral took from Cape Verde up to landing in Monte Pascoal, Brazil. The Danish expedition to Arabia 1761–67 can be said to be the world's first oceanographic expedition, as the ship Grønland had on board a group of scientists, including naturalist Peter Forsskål, who was assigned an explicit task by the king, Frederik V, to study and describe the marine life in the open sea, including finding the cause of mareel, or milky seas. 
For this purpose, the expedition was equipped with nets and scrapers, specifically designed to collect samples from the open waters and the bottom at great depth. Although Juan Ponce de León in 1513 first identified the Gulf Stream, and the current was well known to mariners, Benjamin Franklin made the first scientific study of it and gave it its name. Franklin measured water temperatures during several Atlantic crossings and correctly explained the Gulf Stream's cause. Franklin and Timothy Folger printed the first map of the Gulf Stream in 1769–1770. Information on the currents of the Pacific Ocean was gathered by explorers of the late 18th century, including James Cook and Louis Antoine de Bougainville. James Rennell wrote the first scientific textbooks on oceanography, detailing the current flows of the Atlantic and Indian oceans. During a voyage around the Cape of Good Hope in 1777, he mapped "the banks and currents at the Lagullas". He was also the first to understand the nature of the intermittent current near the Isles of Scilly, (now known as Rennell's Current). The tides and currents of the ocean are distinct. Tides are the rise and fall of sea levels created by the combination of the gravitational forces of the Moon along with the Sun (the Sun just in a much lesser extent) and are also caused by the Earth and Moon orbiting each other. An ocean current is a continuous, directed movement of seawater generated by a number of forces acting upon the water, including wind, the Coriolis effect, breaking waves, cabbeling, and temperature and salinity differences. Sir James Clark Ross took the first modern sounding in deep sea in 1840, and Charles Darwin published a paper on reefs and the formation of atolls as a result of the second voyage of HMS Beagle in 1831–1836. Robert FitzRoy published a four-volume report of Beagles three voyages. In 1841–1842 Edward Forbes undertook dredging in the Aegean Sea that founded marine ecology. The first superintendent of the United States Naval Observatory (1842–1861), Matthew Fontaine Maury devoted his time to the study of marine meteorology, navigation, and charting prevailing winds and currents. His 1855 textbook Physical Geography of the Sea was one of the first comprehensive oceanography studies. Many nations sent oceanographic observations to Maury at the Naval Observatory, where he and his colleagues evaluated the information and distributed the results worldwide. Modern oceanography Knowledge of the oceans remained confined to the topmost few fathoms of the water and a small amount of the bottom, mainly in shallow areas. Almost nothing was known of the ocean depths. The British Royal Navy's efforts to chart all of the world's coastlines in the mid-19th century reinforced the vague idea that most of the ocean was very deep, although little more was known. As exploration ignited both popular and scientific interest in the polar regions and Africa, so too did the mysteries of the unexplored oceans. The seminal event in the founding of the modern science of oceanography was the 1872–1876 Challenger expedition. As the first true oceanographic cruise, this expedition laid the groundwork for an entire academic and research discipline. In response to a recommendation from the Royal Society, the British Government announced in 1871 an expedition to explore world's oceans and conduct appropriate scientific investigation. Charles Wyville Thomson and Sir John Murray launched the Challenger expedition. 
, leased from the Royal Navy, was modified for scientific work and equipped with separate laboratories for natural history and chemistry. Under the scientific supervision of Thomson, Challenger travelled nearly surveying and exploring. On her journey circumnavigating the globe, 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations were taken. Around 4,700 new species of marine life were discovered. The result was the Report Of The Scientific Results of the Exploring Voyage of H.M.S. Challenger during the years 1873–76. Murray, who supervised the publication, described the report as "the greatest advance in the knowledge of our planet since the celebrated discoveries of the fifteenth and sixteenth centuries". He went on to found the academic discipline of oceanography at the University of Edinburgh, which remained the centre for oceanographic research well into the 20th century. Murray was the first to study marine trenches and in particular the Mid-Atlantic Ridge, and map the sedimentary deposits in the oceans. He tried to map out the world's ocean currents based on salinity and temperature observations, and was the first to correctly understand the nature of coral reef development. In the late 19th century, other Western nations also sent out scientific expeditions (as did private individuals and institutions). The first purpose-built oceanographic ship, Albatros, was built in 1882. In 1893, Fridtjof Nansen allowed his ship, Fram, to be frozen in the Arctic ice. This enabled him to obtain oceanographic, meteorological and astronomical data at a stationary spot over an extended period. In 1881 the geographer John Francon Williams published a seminal book, Geography of the Oceans. Between 1907 and 1911 Otto Krümmel published the Handbuch der Ozeanographie, which became influential in awakening public interest in oceanography. The four-month 1910 North Atlantic expedition headed by John Murray and Johan Hjort was the most ambitious research oceanographic and marine zoological project ever mounted until then, and led to the classic 1912 book The Depths of the Ocean. The first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the "Meteor" expedition gathered 70,000 ocean depth measurements using an echo sounder, surveying the Mid-Atlantic Ridge. In 1934, Easter Ellen Cupp, the first woman to have earned a PhD (at Scripps) in the United States, completed a major work on diatoms that remained the standard taxonomy in the field until well after her death in 1999. In 1940, Cupp was let go from her position at Scripps. Sverdrup specifically commended Cupp as a conscientious and industrious worker and commented that his decision was no reflection on her ability as a scientist. Sverdrup used the instructor billet vacated by Cupp to employ Marston Sargent, a biologist studying marine algae, which was not a new research program at Scripps. Financial pressures did not prevent Sverdrup from retaining the services of two other young post-doctoral students, Walter Munk and Roger Revelle. Cupp's partner, Dorothy Rosenbury, found her a position teaching high school, where she remained for the rest of her career. (Russell, 2000) Sverdrup, Johnson and Fleming published The Oceans in 1942, which was a major landmark. The Sea (in three volumes, covering physical oceanography, seawater and geology) edited by M.N. Hill was published in 1962, while Rhodes Fairbridge's Encyclopedia of Oceanography was published in 1966. 
The Great Global Rift, running along the Mid Atlantic Ridge, was discovered by Maurice Ewing and Bruce Heezen in 1953 and mapped by Heezen and Marie Tharp using bathymetric data; in 1954 a mountain range under the Arctic Ocean was found by the Arctic Institute of the USSR. The theory of seafloor spreading was developed in 1960 by Harry Hammond Hess. The Ocean Drilling Program started in 1966. Deep-sea vents were discovered in 1977 by Jack Corliss and Robert Ballard in the submersible . In the 1950s, Auguste Piccard invented the bathyscaphe and used the bathyscaphe to investigate the ocean's depths. The United States nuclear submarine made the first journey under the ice to the North Pole in 1958. In 1962 the FLIP (Floating Instrument Platform), a spar buoy, was first deployed. In 1968, Tanya Atwater led the first all-woman oceanographic expedition. Until that time, gender policies restricted women oceanographers from participating in voyages to a significant extent. From the 1970s, there has been much emphasis on the application of large scale computers to oceanography to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction. Early techniques included analog computers (such as the Ishiguro Storm Surge Computer) generally now replaced by numerical methods (e.g. SLOSH.) An oceanographic buoy array was established in the Pacific to allow prediction of El Niño events. 1990 saw the start of the World Ocean Circulation Experiment (WOCE) which continued until 2002. Geosat seafloor mapping data became available in 1995. Study of the oceans is critical to understanding shifts in Earth's energy balance along with related global and regional changes in climate, the biosphere and biogeochemistry. The atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux (and solar insolation). Recent studies have advanced knowledge on ocean acidification, ocean heat content, ocean currents, sea level rise, the oceanic carbon cycle, the water cycle, Arctic sea ice decline, coral bleaching, marine heatwaves, extreme weather, coastal erosion and many other phenomena in regards to ongoing climate change and climate feedbacks. In general, understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of Earth's resources. The Intergovernmental Oceanographic Commission reports that 1.7% of the total national research expenditure of its members is focused on ocean science. Branches The study of oceanography is divided into these five branches: Biological oceanography Biological oceanography investigates the ecology and biology of marine organisms in the context of the physical, chemical and geological characteristics of their ocean environment. Chemical oceanography Chemical oceanography is the study of the chemistry of the ocean. Whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes, ocean chemistry focuses primarily on the geochemical cycles. The following is a central topic investigated by chemical oceanography. Ocean acidification Ocean acidification describes the decrease in ocean pH that is caused by anthropogenic carbon dioxide () emissions into the atmosphere. Seawater is slightly alkaline and had a preindustrial pH of about 8.2. 
More recently, anthropogenic activities have steadily increased the carbon dioxide content of the atmosphere; about 30–40% of the added CO2 is absorbed by the oceans, forming carbonic acid and lowering the pH (now below 8.1) through ocean acidification. The pH is expected to reach 7.7 by the year 2100. An important element for the skeletons of marine animals is calcium, but calcium carbonate becomes more soluble with pressure, so carbonate shells and skeletons dissolve below the carbonate compensation depth. Calcium carbonate becomes more soluble at lower pH, so ocean acidification is likely to affect marine organisms with calcareous shells, such as oysters, clams, sea urchins and corals, and the carbonate compensation depth will rise closer to the sea surface. Affected planktonic organisms will include pteropods, coccolithophorids and foraminifera, all important in the food chain. In tropical regions, corals are likely to be severely affected as they become less able to build their calcium carbonate skeletons, in turn adversely impacting other reef dwellers. The current rate of ocean chemistry change seems to be unprecedented in Earth's geological history, making it unclear how well marine ecosystems will adapt to the shifting conditions of the near future. Of particular concern is the manner in which the combination of acidification with the expected additional stressors of higher ocean temperatures and lower oxygen levels will impact the seas. Geological oceanography Geological oceanography is the study of the geology of the ocean floor including plate tectonics and paleoceanography. Physical oceanography Physical oceanography studies the ocean's physical attributes including temperature-salinity structure, mixing, surface waves, internal waves, surface tides, internal tides, and currents. The following are central topics investigated by physical oceanography. Seismic Oceanography Ocean currents Since the early ocean expeditions in oceanography, a major interest was the study of ocean currents and temperature measurements. The tides, the Coriolis effect, changes in direction and strength of wind, salinity, and temperature are the main factors determining ocean currents. The thermohaline circulation (THC) (thermo- referring to temperature and -haline referring to salt content) connects the ocean basins and is primarily dependent on the density of sea water. It is becoming more common to refer to this system as the 'meridional overturning circulation' because it more accurately accounts for other driving factors beyond temperature and salinity. Examples of sustained currents are the Gulf Stream and the Kuroshio Current which are wind-driven western boundary currents. Ocean heat content Oceanic heat content (OHC) refers to the extra heat stored in the ocean from changes in Earth's energy balance. The increase in the ocean heat play an important role in sea level rise, because of thermal expansion. Ocean warming accounts for 90% of the energy accumulation associated with global warming since 1971. Paleoceanography Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity. Paleoceanographic studies using environment models and different proxies enable the scientific community to assess the role of the oceanic processes in the global climate by the reconstruction of past climate at various intervals. 
Paleoceanographic research is also intimately tied to palaeoclimatology. Oceanographic institutions The earliest international organizations of oceanography were founded at the turn of the 20th century, starting with the International Council for the Exploration of the Sea created in 1902, followed in 1919 by the Mediterranean Science Commission. Marine research institutes were already in existence, starting with the Stazione Zoologica Anton Dohrn in Naples, Italy (1872), the Biological Station of Roscoff, France (1876), the Arago Laboratory in Banyuls-sur-mer, France (1882), the Laboratory of the Marine Biological Association in Plymouth, UK (1884), the Norwegian Institute for Marine Research in Bergen, Norway (1900), the Laboratory für internationale Meeresforschung, Kiel, Germany (1902). On the other side of the Atlantic, the Scripps Institution of Oceanography was founded in 1903, followed by the Woods Hole Oceanographic Institution in 1930, the Virginia Institute of Marine Science in 1938, the Lamont–Doherty Earth Observatory at Columbia University in 1949, and later the School of Oceanography at University of Washington. In Australia, the Australian Institute of Marine Science (AIMS), established in 1972 soon became a key player in marine tropical research. In 1921 the International Hydrographic Bureau, called since 1970 the International Hydrographic Organization, was established to develop hydrographic and nautical charting standards. Related disciplines See also List of seas Ocean optics Ocean color Ocean chemistry References Sources and further reading Boling Guo, Daiwen Huang. Infinite-Dimensional Dynamical Systems in Atmospheric and Oceanic Science, 2014, World Scientific Publishing, . Sample Chapter Hamblin, Jacob Darwin (2005) Oceanographers and the Cold War: Disciples of Marine Science. University of Washington Press. Lang, Michael A., Ian G. Macintyre, and Klaus Rützler, eds. Proceedings of the Smithsonian Marine Science Symposium. Smithsonian Contributions to the Marine Sciences, no. 38. Washington, D.C.: Smithsonian Institution Scholarly Press (2009) Roorda, Eric Paul, ed. The Ocean Reader: History, Culture, Politics (Duke University Press, 2020) 523 pp. online review Steele, J., K. Turekian and S. Thorpe. (2001). Encyclopedia of Ocean Sciences. San Diego: Academic Press. (6 vols.) Sverdrup, Keith A., Duxbury, Alyn C., Duxbury, Alison B. (2006). Fundamentals of Oceanography, McGraw-Hill, Russell, Joellen Louise. Easter Ellen Cupp, 2000, Regents of the University of California. External links NASA Jet Propulsion Laboratory – Physical Oceanography Distributed Active Archive Center (PO.DAAC). A data centre responsible for archiving and distributing data about the physical state of the ocean. Scripps Institution of Oceanography. One of the world's oldest, largest, and most important centres for ocean and Earth science research, education, and public service. Woods Hole Oceanographic Institution (WHOI). One of the world's largest private, non-profit ocean research, engineering and education organizations. British Oceanographic Data Centre. A source of oceanographic data and information. NOAA Ocean and Weather Data Navigator. Plot and download ocean data. Freeview Video 'Voyage to the Bottom of the Deep Deep Sea' Oceanography Programme by the Vega Science Trust and the BBC/Open University. Atlas of Spanish Oceanography by InvestigAdHoc. Glossary of Physical Oceanography and Related Disciplines by Steven K. 
Baum, Department of Oceanography, Texas A&M University Barcelona-Ocean.com. Inspiring Education in Marine Sciences. CFOO: Sea Atlas. A source of oceanographic live data (buoy monitoring) and education for South African coasts. Memorial website for USNS Bowditch, USNS Dutton, USNS Michelson and USNS H. H. Hess Applied and interdisciplinary physics Earth sciences Hydrology Physical geography Articles containing video clips
Oceanography
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
5,004
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "Environmental engineering" ]
44,058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
In physical cosmology, Big Bang nucleosynthesis (also known as primordial nucleosynthesis, and abbreviated as BBN) is the production of nuclei other than those of the lightest isotope of hydrogen (hydrogen-1, 1H, having a single proton as a nucleus) during the early phases of the universe. This type of nucleosynthesis is thought by most cosmologists to have occurred from 10 seconds to 20 minutes after the Big Bang. It is thought to be responsible for the formation of most of the universe's helium (as isotope helium-4 (4He)), along with small fractions of the hydrogen isotope deuterium (2H or D), the helium isotope helium-3 (3He), and a very small fraction of the lithium isotope lithium-7 (7Li). In addition to these stable nuclei, two unstable or radioactive isotopes were produced: the heavy hydrogen isotope tritium (3H or T) and the beryllium isotope beryllium-7 (7Be). These unstable isotopes later decayed into 3He and 7Li, respectively, as above. Elements heavier than lithium are thought to have been created later in the life of the Universe by stellar nucleosynthesis, through the formation, evolution and death of stars. Characteristics There are several important characteristics of Big Bang nucleosynthesis (BBN): The initial conditions (neutron–proton ratio) were set in the first second after the Big Bang. The universe was very close to homogeneous at this time, and strongly radiation-dominated. The fusion of nuclei occurred between roughly 10 seconds to 20 minutes after the Big Bang; this corresponds to the temperature range when the universe was cool enough for deuterium to survive, but hot and dense enough for fusion reactions to occur at a significant rate. It was widespread, encompassing the entire observable universe. The key parameter which allows one to calculate the effects of Big Bang nucleosynthesis is the baryon/photon number ratio, which is a small number of order 6 × 10−10. This parameter corresponds to the baryon density and controls the rate at which nucleons collide and react; from this it is possible to calculate element abundances after nucleosynthesis ends. Although the baryon per photon ratio is important in determining element abundances, the precise value makes little difference to the overall picture. Without major changes to the Big Bang theory itself, BBN will result in mass abundances of about 75% of hydrogen-1, about 25% helium-4, about 0.01% of deuterium and helium-3, trace amounts (on the order of 10−10) of lithium, and negligible heavier elements. That the observed abundances in the universe are generally consistent with these abundance numbers is considered strong evidence for the Big Bang theory. In this field, for historical reasons it is customary to quote the helium-4 fraction by mass, symbol Y, so that 25% helium-4 means that helium-4 atoms account for 25% of the mass, but less than 8% of the nuclei would be helium-4 nuclei. Other (trace) nuclei are usually expressed as number ratios to hydrogen. The first detailed calculations of the primordial isotopic abundances came in 1966 and have been refined over the years using updated estimates of the input nuclear reaction rates. The first systematic Monte Carlo study of how nuclear reaction rate uncertainties impact isotope predictions, over the relevant temperature range, was carried out in 1993. 
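The relation between the 25% mass fraction and the roughly 8% number fraction quoted above follows from simple counting, since a helium-4 nucleus is about four times as massive as a hydrogen nucleus; neglecting the trace species:

```latex
\frac{n_{\mathrm{He}}}{n_{\mathrm{He}} + n_{\mathrm{H}}}
= \frac{Y/4}{Y/4 + (1 - Y)}
= \frac{0.0625}{0.8125} \approx 0.077
```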
Important parameters The creation of light elements during BBN was dependent on a number of parameters; among those was the neutron–proton ratio (calculable from Standard Model physics) and the baryon-photon ratio. Neutron–proton ratio The neutron–proton ratio was set by Standard Model physics before the nucleosynthesis era, essentially within the first 1-second after the Big Bang. Neutrons can react with positrons or electron neutrinos to create protons and other products in one of the following reactions: n \ + e+ <=> \overline{\nu}_e + p n \ + \nu_{e} <=> p + e- At times much earlier than 1 sec, these reactions were fast and maintained the n/p ratio close to 1:1. As the temperature dropped, the equilibrium shifted in favour of protons due to their slightly lower mass, and the n/p ratio smoothly decreased. These reactions continued until the decreasing temperature and density caused the reactions to become too slow, which occurred at about T = 0.7 MeV (time around 1 second) and is called the freeze out temperature. At freeze out, the neutron–proton ratio was about 1/6. However, free neutrons are unstable with a mean life of 880 sec; some neutrons decayed in the next few minutes before fusing into any nucleus, so the ratio of total neutrons to protons after nucleosynthesis ends is about 1/7. Almost all neutrons that fused instead of decaying ended up combined into helium-4, due to the fact that helium-4 has the highest binding energy per nucleon among light elements. This predicts that about 8% of all atoms should be helium-4, leading to a mass fraction of helium-4 of about 25%, which is in line with observations. Small traces of deuterium and helium-3 remained as there was insufficient time and density for them to react and form helium-4. Baryon–photon ratio The baryon–photon ratio, η, is the key parameter determining the abundances of light elements after nucleosynthesis ends. Baryons and light elements can fuse in the following main reactions: along with some other low-probability reactions leading to 7Li or 7Be. (An important feature is that there are no stable nuclei with mass 5 or 8, which implies that reactions adding one baryon to 4He, or fusing two 4He, do not occur). Most fusion chains during BBN ultimately terminate in 4He (helium-4), while "incomplete" reaction chains lead to small amounts of left-over 2H or 3He; the amount of these decreases with increasing baryon-photon ratio. That is, the larger the baryon-photon ratio the more reactions there will be and the more efficiently deuterium will be eventually transformed into helium-4. This result makes deuterium a very useful tool in measuring the baryon-to-photon ratio. Sequence Big Bang nucleosynthesis began roughly about 20 seconds after the big bang, when the universe had cooled sufficiently to allow deuterium nuclei to survive disruption by high-energy photons. (Note that the neutron–proton freeze-out time was earlier). This time is essentially independent of dark matter content, since the universe was highly radiation dominated until much later, and this dominant component controls the temperature/time relation. At this time there were about six protons for every neutron, but a small fraction of the neutrons decay before fusing in the next few hundred seconds, so at the end of nucleosynthesis there are about seven protons to every neutron, and almost all the neutrons are in Helium-4 nuclei. 
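Assuming essentially all surviving neutrons are bound into helium-4, the quoted mass fraction follows directly from the final neutron-to-proton ratio of about 1/7:

```latex
Y_p \approx \frac{2\,(n/p)}{1 + (n/p)}
= \frac{2 \times \tfrac{1}{7}}{1 + \tfrac{1}{7}}
= \frac{2}{8} = 0.25
```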
One feature of BBN is that the physical laws and constants that govern the behavior of matter at these energies are very well understood, and hence BBN lacks some of the speculative uncertainties that characterize earlier periods in the life of the universe. Another feature is that the process of nucleosynthesis is determined by conditions at the start of this phase of the life of the universe, and proceeds independently of what happened before. As the universe expands, it cools. Free neutrons are less stable than helium nuclei, and the protons and neutrons have a strong tendency to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Before nucleosynthesis began, the temperature was high enough for many photons to have energy greater than the binding energy of deuterium; therefore any deuterium that was formed was immediately destroyed (a situation known as the "deuterium bottleneck"). Hence, the formation of helium-4 was delayed until the universe became cool enough for deuterium to survive (at about T = 0.1 MeV); after which there was a sudden burst of element formation. However, very shortly thereafter, around twenty minutes after the Big Bang, the temperature and density became too low for any significant fusion to occur. At this point, the elemental abundances were nearly fixed, and the only changes were the result of the radioactive decay of the two major unstable products of BBN, tritium and beryllium-7. History of theory The history of Big Bang nucleosynthesis began with the calculations of Ralph Alpher in the 1940s. Alpher published the Alpher–Bethe–Gamow paper that outlined the theory of light-element production in the early universe. Heavy elements Big Bang nucleosynthesis produced very few nuclei of elements heavier than lithium due to a bottleneck: the absence of a stable nucleus with 8 or 5 nucleons. This deficit of larger atoms also limited the amounts of lithium-7 produced during BBN. In stars, the bottleneck is passed by triple collisions of helium-4 nuclei, producing carbon (the triple-alpha process). However, this process is very slow and requires much higher densities, taking tens of thousands of years to convert a significant amount of helium to carbon in stars, and therefore it made a negligible contribution in the minutes following the Big Bang. The predicted abundance of CNO isotopes produced in Big Bang nucleosynthesis is expected to be on the order of 10−15 that of H, making them essentially undetectable and negligible. Indeed, none of these primordial isotopes of the elements from beryllium to oxygen have yet been detected, although those of beryllium and boron may be able to be detected in the future. So far, the only stable nuclides known experimentally to have been made during Big Bang nucleosynthesis are protium, deuterium, helium-3, helium-4, and lithium-7. Helium-4 Big Bang nucleosynthesis predicts a primordial abundance of about 25% helium-4 by mass, irrespective of the initial conditions of the universe. As long as the universe was hot enough for protons and neutrons to transform into each other easily, their ratio, determined solely by their relative masses, was about 1 neutron to 7 protons (allowing for some decay of neutrons into protons). Once it was cool enough, the neutrons quickly bound with an equal number of protons to form first deuterium, then helium-4. 
Helium-4 is very stable and is nearly the end of this chain if it runs for only a short time, since helium neither decays nor combines easily to form heavier nuclei (since there are no stable nuclei with mass numbers of 5 or 8, helium does not combine easily with either protons, or with itself). Once temperatures are lowered, out of every 16 nucleons (2 neutrons and 14 protons), 4 of these (25% of the total particles and total mass) combine quickly into one helium-4 nucleus. This produces one helium for every 12 hydrogens, resulting in a universe that is a little over 8% helium by number of atoms, and 25% helium by mass. One analogy is to think of helium-4 as ash, and the amount of ash that one forms when one completely burns a piece of wood is insensitive to how one burns it. The resort to the BBN theory of the helium-4 abundance is necessary as there is far more helium-4 in the universe than can be explained by stellar nucleosynthesis. In addition, it provides an important test for the Big Bang theory. If the observed helium abundance is significantly different from 25%, then this would pose a serious challenge to the theory. This would particularly be the case if the early helium-4 abundance was much smaller than 25% because it is hard to destroy helium-4. For a few years during the mid-1990s, observations suggested that this might be the case, causing astrophysicists to talk about a Big Bang nucleosynthetic crisis, but further observations were consistent with the Big Bang theory. Deuterium Deuterium is in some ways the opposite of helium-4, in that while helium-4 is very stable and difficult to destroy, deuterium is only marginally stable and easy to destroy. The temperatures, time, and densities were sufficient to combine a substantial fraction of the deuterium nuclei to form helium-4 but insufficient to carry the process further using helium-4 in the next fusion step. BBN did not convert all of the deuterium in the universe to helium-4 due to the expansion that cooled the universe and reduced the density, and so cut that conversion short before it could proceed any further. One consequence of this is that, unlike helium-4, the amount of deuterium is very sensitive to initial conditions. The denser the initial universe was, the more deuterium would be converted to helium-4 before time ran out, and the less deuterium would remain. There are no known post-Big Bang processes which can produce significant amounts of deuterium. Hence observations about deuterium abundance suggest that the universe is not infinitely old, which is in accordance with the Big Bang theory. During the 1970s, there were major efforts to find processes that could produce deuterium, but those revealed ways of producing isotopes other than deuterium. The problem was that while the concentration of deuterium in the universe is consistent with the Big Bang model as a whole, it is too high to be consistent with a model that presumes that most of the universe is composed of protons and neutrons. If one assumes that all of the universe consists of protons and neutrons, the density of the universe is such that much of the currently observed deuterium would have been burned into helium-4. The standard explanation now used for the abundance of deuterium is that the universe does not consist mostly of baryons, but that non-baryonic matter (also known as dark matter) makes up most of the mass of the universe. 
This explanation is also consistent with calculations that show that a universe made mostly of protons and neutrons would be far more clumpy than is observed. It is very hard to come up with another process that would produce deuterium other than by nuclear fusion. Such a process would require that the temperature be hot enough to produce deuterium, but not hot enough to produce helium-4, and that this process should immediately cool to non-nuclear temperatures after no more than a few minutes. It would also be necessary for the deuterium to be swept away before it reoccurs. Producing deuterium by fission is also difficult. The problem here again is that deuterium is very unlikely due to nuclear processes, and that collisions between atomic nuclei are likely to result either in the fusion of the nuclei, or in the release of free neutrons or alpha particles. During the 1970s, cosmic ray spallation was proposed as a source of deuterium. That theory failed to account for the abundance of deuterium, but led to explanations of the source of other light elements. Lithium Lithium-7 and lithium-6 produced in the Big Bang are on the order of: lithium-7 to be 10−9 of all primordial nuclides; and lithium-6 around 10−13. Measurements and status of theory The theory of BBN gives a detailed mathematical description of the production of the light "elements" deuterium, helium-3, helium-4, and lithium-7. Specifically, the theory yields precise quantitative predictions for the mixture of these elements, that is, the primordial abundances at the end of the big-bang. In order to test these predictions, it is necessary to reconstruct the primordial abundances as faithfully as possible, for instance by observing astronomical objects in which very little stellar nucleosynthesis has taken place (such as certain dwarf galaxies) or by observing objects that are very far away, and thus can be seen in a very early stage of their evolution (such as distant quasars). As noted above, in the standard picture of BBN, all of the light element abundances depend on the amount of ordinary matter (baryons) relative to radiation (photons). Since the universe is presumed to be homogeneous, it has one unique value of the baryon-to-photon ratio. For a long time, this meant that to test BBN theory against observations one had to ask: can all of the light element observations be explained with a single value of the baryon-to-photon ratio? Or more precisely, allowing for the finite precision of both the predictions and the observations, one asks: is there some range of baryon-to-photon values which can account for all of the observations? More recently, the question has changed: Precision observations of the cosmic microwave background radiation with the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck give an independent value for the baryon-to-photon ratio. Using this value, are the BBN predictions for the abundances of light elements in agreement with the observations? The present measurement of helium-4 indicates good agreement, and yet better agreement for helium-3. But for lithium-7, there is a significant discrepancy between BBN and WMAP/Planck, and the abundance derived from Population II stars. The discrepancy is a factor of 2.4―4.3 below the theoretically predicted value. 
This discrepancy, called the "cosmological lithium problem", is considered a problem for the original models, that have resulted in revised calculations of the standard BBN based on new nuclear data, and to various reevaluation proposals for primordial proton–proton nuclear reactions, especially the abundances of , versus . Non-standard scenarios In addition to the standard BBN scenario there are numerous non-standard BBN scenarios. These should not be confused with non-standard cosmology: a non-standard BBN scenario assumes that the Big Bang occurred, but inserts additional physics in order to see how this affects elemental abundances. These pieces of additional physics include relaxing or removing the assumption of homogeneity, or inserting new particles such as massive neutrinos. There have been, and continue to be, various reasons for researching non-standard BBN. The first, which is largely of historical interest, is to resolve inconsistencies between BBN predictions and observations. This has proved to be of limited usefulness in that the inconsistencies were resolved by better observations, and in most cases trying to change BBN resulted in abundances that were more inconsistent with observations rather than less. The second reason for researching non-standard BBN, and largely the focus of non-standard BBN in the early 21st century, is to use BBN to place limits on unknown or speculative physics. For example, standard BBN assumes that no exotic hypothetical particles were involved in BBN. One can insert a hypothetical particle (such as a massive neutrino) and see what has to happen before BBN predicts abundances that are very different from observations. This has been done to put limits on the mass of a stable tau neutrino. See also Big Bang Chronology of the universe Nucleosynthesis Relic abundance Stellar nucleosynthesis Ultimate fate of the universe References External links For a general audience White, Martin: Overview of BBN Wright, Ned: BBN (cosmology tutorial) Big Bang nucleosynthesis on arxiv.org Academic articles Report-no: FERMILAB-Pub-00-239-A Jedamzik, Karsten, "Non-Standard Big Bang Nucleosynthesis Scenarios". Max-Planck-Institut für Astrophysik, Garching. Steigman, Gary, Primordial Nucleosynthesis: Successes And Challenges ; Forensic Cosmology: Probing Baryons and Neutrinos With BBN and the CBR ; and Big Bang Nucleosynthesis: Probing the First 20 Minutes R. A. Alpher, H. A. Bethe, G. Gamow, The Origin of Chemical Elements , Physical Review 73 (1948), 803. The so-called αβγ paper, in which Alpher and Gamow suggested that the light elements were created by hydrogen ions capturing neutrons in the hot, dense early universe. Bethe's name was added for symmetry These two 1948 papers of Gamow laid the foundation for our present understanding of big-bang nucleosynthesis R. A. Alpher and R. Herman, "On the Relative Abundance of the Elements," Physical Review 74 (1948), 1577. This paper contains the first estimate of the present temperature of the universe Java Big Bang element abundance calculator C. Pitrou, A. Coc, J.-P. Uzan, E. Vangioni, Precision big bang nucleosynthesis with improved Helium-4 predictions ; Nucleosynthesis Physical cosmological concepts Big Bang
Big Bang nucleosynthesis
[ "Physics", "Chemistry", "Astronomy" ]
4,459
[ "Physical cosmological concepts", "Nuclear fission", "Cosmogony", "Concepts in astrophysics", "Big Bang", "Astrophysics", "Nucleosynthesis", "Nuclear physics", "Nuclear fusion" ]
44,145
https://en.wikipedia.org/wiki/Interquartile%20mean
The interquartile mean (IQM) (or midmean) is a statistical measure of central tendency based on the truncated mean of the interquartile range. The IQM is very similar to the scoring method used in sports that are evaluated by a panel of judges: discard the lowest and the highest scores; calculate the mean value of the remaining scores. Calculation In the calculation of the IQM, only the data between the first and third quartiles is used: the lowest 25% and the highest 25% of the data are discarded, and the arithmetic mean of the remaining values is taken. Assuming the values have been ordered, xIQM = (2/n) Σ xi, with the sum taken over i = n/4 + 1 to 3n/4. Examples Dataset size divisible by four The method is best explained with an example. Consider the following dataset: 5, 8, 4, 38, 8, 6, 9, 7, 7, 3, 1, 6 First sort the list from lowest-to-highest: 1, 3, 4, 5, 6, 6, 7, 7, 8, 8, 9, 38 There are 12 observations (datapoints) in the dataset, thus we have 4 quartiles of 3 numbers. Discard the lowest and the highest 3 values: (1, 3, 4), 5, 6, 6, 7, 7, 8, (8, 9, 38) We now have 6 of the 12 observations remaining; next, we calculate the arithmetic mean of these numbers: xIQM = (5 + 6 + 6 + 7 + 7 + 8) / 6 = 6.5 This is the interquartile mean. For comparison, the arithmetic mean of the original dataset is (5 + 8 + 4 + 38 + 8 + 6 + 9 + 7 + 7 + 3 + 1 + 6) / 12 = 8.5 due to the strong influence of the outlier, 38. Dataset size not divisible by four The above example consisted of 12 observations in the dataset, which made the determination of the quartiles very easy. Of course, not all datasets have a number of observations that is divisible by 4. We can adjust the method of calculating the IQM to accommodate this. Ideally, we want to have the IQM equal to the mean for symmetric distributions, e.g.: 1, 2, 3, 4, 5 has a mean value xmean = 3, and since it is a symmetric distribution, xIQM = 3 would be desired. We can solve this by using a weighted average of the quartiles and the interquartile dataset: Consider the following dataset of 9 observations: 1, 3, 5, 7, 9, 11, 13, 15, 17 There are 9/4 = 2.25 observations in each quartile, and 4.5 observations in the interquartile range. Truncate the fractional quartile size, and remove this number from the 1st and 4th quartiles (2.25 observations in each quartile, thus the lowest 2 and the highest 2 are removed). 1, 3, (5), 7, 9, 11, (13), 15, 17 Thus, there are 3 full observations in the interquartile range with a weight of 1 for each full observation, and 2 fractional observations with each observation having a weight of 0.75 (1 − 0.25 = 0.75). Thus we have a total of 4.5 observations in the interquartile range, (3×1 + 2×0.75 = 4.5 observations). The IQM is now calculated as follows: xIQM = {(7 + 9 + 11) + 0.75 × (5 + 13)} / 4.5 = 9 In the above example, the mean has a value xmean = 9, the same as the IQM, as was expected. The method of calculating the IQM for any number of observations is analogous; the fractional contributions to the IQM can be either 0, 0.25, 0.50, or 0.75. Comparison with mean and median The interquartile mean shares some properties of both the mean and the median: Like the median, the IQM is insensitive to outliers; in the example given, the highest value (38) was an obvious outlier of the dataset, but its value is not used in the calculation of the IQM. On the other hand, the common average (the arithmetic mean) is sensitive to these outliers: xmean = 8.5. Like the mean, the IQM is a distinct parameter, based on a large number of observations from the dataset. 
The median is always equal to one of the observations in the dataset (assuming an odd number of observations). The mean can be equal to any value between the lowest and highest observation, depending on the value of all the other observations. The IQM can be equal to any value between the first and third quartiles, depending on all the observations in the interquartile range. See also Related statistics Interquartile range Mid-hinge Trimean Applications London Interbank Offered Rate estimated a reference interest rate as the interquartile mean of the rates offered by several banks. (SOFR, Libor's primary US replacement, uses a volume-weighted average price which is not robust.) Everything2 uses the interquartile mean of the reputations of a user's writeups to determine the quality of the user's contribution. References Means Robust statistics
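A short, self-contained sketch of the weighted IQM calculation described in the Calculation and Examples sections above. The function name and the use of plain Python lists are illustrative choices, not part of the original method description:

```python
def interquartile_mean(values):
    """Interquartile mean with fractional weighting for datasets
    whose size is not divisible by four."""
    xs = sorted(values)
    n = len(xs)
    q = n / 4.0                # size of one quartile (may be fractional)
    whole = int(q)             # observations removed entirely from each end
    frac = 1.0 - (q - whole)   # weight of the first/last partially kept value

    total = 0.0
    weight_sum = 0.0
    for i, x in enumerate(xs):
        if i < whole or i >= n - whole:
            continue           # fully discarded observation
        # boundary observations get the fractional weight when n is not divisible by 4
        w = frac if (i == whole or i == n - whole - 1) and q != whole else 1.0
        total += w * x
        weight_sum += w
    return total / weight_sum

print(interquartile_mean([5, 8, 4, 38, 8, 6, 9, 7, 7, 3, 1, 6]))  # 6.5
print(interquartile_mean([1, 3, 5, 7, 9, 11, 13, 15, 17]))        # 9.0
```

Both calls reproduce the worked examples above: 6.5 for the 12-element dataset and 9.0 for the 9-element dataset with weights of 0.75 on the boundary values.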
Interquartile mean
[ "Physics", "Mathematics" ]
1,144
[ "Means", "Mathematical analysis", "Point (geometry)", "Geometric centers", "Symmetry" ]
44,158
https://en.wikipedia.org/wiki/Conservative%20force
In physics, a conservative force is a force with the property that the total work done by the force in moving a particle between two points is independent of the path taken. Equivalently, if a particle travels in a closed loop, the total work done (the sum of the force acting along the path multiplied by the displacement) by a conservative force is zero. A conservative force depends only on the position of the object. If a force is conservative, it is possible to assign a numerical value for the potential at any point and conversely, when an object moves from one location to another, the force changes the potential energy of the object by an amount that does not depend on the path taken, contributing to the mechanical energy and the overall conservation of energy. If the force is not conservative, then defining a scalar potential is not possible, because taking different paths would lead to conflicting potential differences between the start and end points. Gravitational force is an example of a conservative force, while frictional force is an example of a non-conservative force. Other examples of conservative forces are: the force in an elastic spring, electrostatic force between two electric charges, and magnetic force between two magnetic poles. The last two forces are called central forces as they act along the line joining the centres of two charged/magnetized bodies. A central force is conservative if and only if it is spherically symmetric. For conservative forces, F = −∇U, where F is the conservative force, U is the potential energy, and the gradient is taken with respect to the position. Informal definition Informally, a conservative force can be thought of as a force that conserves mechanical energy. Suppose a particle starts at point A, and there is a force F acting on it. Then the particle is moved around by other forces, and eventually ends up at A again. Though the particle may still be moving, at that instant when it passes point A again, it has traveled a closed path. If the net work done by F at this point is 0, then F passes the closed path test. Any force that passes the closed path test for all possible closed paths is classified as a conservative force. The gravitational force, spring force, magnetic force (according to some definitions, see below) and electric force (at least in a time-independent magnetic field, see Faraday's law of induction for details) are examples of conservative forces, while friction and air drag are classical examples of non-conservative forces. For non-conservative forces, the mechanical energy that is lost (not conserved) has to go somewhere else, by conservation of energy. Usually the energy is turned into heat, for example the heat generated by friction. In addition to heat, friction also often produces some sound energy. The water drag on a moving boat converts the boat's mechanical energy into not only heat and sound energy, but also wave energy at the edges of its wake. These and other energy losses are irreversible because of the second law of thermodynamics. Path independence A direct consequence of the closed path test is that the work done by a conservative force on a particle moving between any two points does not depend on the path taken by the particle. For example, the work done by the gravitational force on an object depends only on its change in height because the gravitational force is conservative. The work done by a conservative force is equal to the negative of change in potential energy during that process. 
For a proof, imagine two paths 1 and 2, both going from point A to point B. The variation of energy for the particle, taking path 1 from A to B and then path 2 backwards from B to A, is 0; thus, the work is the same in path 1 and 2, i.e., the work is independent of the path followed, as long as it goes from A to B. For example, if a child slides down a frictionless slide, the work done by the gravitational force on the child from the start of the slide to the end is independent of the shape of the slide; it only depends on the vertical displacement of the child. Mathematical description A force field F, defined everywhere in space (or within a simply-connected volume of space), is called a conservative force or conservative vector field if it meets any of these three equivalent conditions: The curl of F is the zero vector: ∇ × F = 0, where in two dimensions this reduces to ∂Fy/∂x − ∂Fx/∂y = 0. There is zero net work (W) done by the force when moving a particle through a trajectory that starts and ends in the same place: W = ∮C F · dr = 0. The force can be written as the negative gradient of a potential Φ: F = −∇Φ. The term conservative force comes from the fact that when a conservative force exists, it conserves mechanical energy. The most familiar conservative forces are gravity, the electric force (in a time-independent magnetic field, see Faraday's law), and spring force. Many forces (particularly those that depend on velocity) are not force fields. In these cases, the above three conditions are not mathematically equivalent. For example, the magnetic force satisfies condition 2 (since the work done by a magnetic field on a charged particle is always zero), but does not satisfy condition 3, and condition 1 is not even defined (the force is not a vector field, so one cannot evaluate its curl). Accordingly, some authors classify the magnetic force as conservative, while others do not. The magnetic force is an unusual case; most velocity-dependent forces, such as friction, do not satisfy any of the three conditions, and therefore are unambiguously nonconservative. Non-conservative force Despite conservation of total energy, non-conservative forces can arise in classical physics due to neglected degrees of freedom or from time-dependent potentials. Many non-conservative forces may be perceived as macroscopic effects of small-scale conservative forces. For instance, friction may be treated without violating conservation of energy by considering the motion of individual molecules; however, that means every molecule's motion must be considered rather than handling it through statistical methods. For macroscopic systems the non-conservative approximation is far easier to deal with than millions of degrees of freedom. Examples of non-conservative forces are friction and non-elastic material stress. Friction has the effect of transferring some of the energy from the large-scale motion of the bodies to small-scale movements in their interior, and therefore appears non-conservative on a large scale. General relativity is non-conservative, as seen in the anomalous precession of Mercury's orbit. However, general relativity does conserve a stress–energy–momentum pseudotensor. See also Conservative vector field Conservative system References Force
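As a rough illustration of conditions 1 and 3 in the mathematical description above, the following sketch uses SymPy to check that a spring-like force field has zero curl and to recover a potential for it. The specific force field and all variable names are illustrative assumptions, not taken from the article:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
k = sp.symbols('k', positive=True)

# A linear restoring ("spring-like") force field: F = -k*(x, y, z)
Fx, Fy, Fz = -k*x, -k*y, -k*z

# Condition 1: the curl of F is the zero vector
curl = (sp.diff(Fz, y) - sp.diff(Fy, z),
        sp.diff(Fx, z) - sp.diff(Fz, x),
        sp.diff(Fy, x) - sp.diff(Fx, y))
print(curl)  # (0, 0, 0)

# Condition 3: F = -grad(Phi) for some scalar potential Phi.
# This simple construction works here because each force component
# depends only on its own coordinate.
Phi = -(sp.integrate(Fx, x) + sp.integrate(Fy, y) + sp.integrate(Fz, z))
print(sp.simplify(Phi))  # k*x**2/2 + k*y**2/2 + k*z**2/2

# Verify that -grad(Phi) reproduces F
print([sp.simplify(f + sp.diff(Phi, v))
       for f, v in zip((Fx, Fy, Fz), (x, y, z))])  # [0, 0, 0]
```

The same checks fail for a velocity-independent but rotational field such as (−y, x, 0), whose curl is (0, 0, 2), consistent with the discussion of non-conservative forces above.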
Conservative force
[ "Physics", "Mathematics" ]
1,339
[ "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Wikipedia categories named after physical quantities", "Matter" ]
44,284
https://en.wikipedia.org/wiki/Non-coding%20DNA
Non-coding DNA (ncDNA) sequences are components of an organism's DNA that do not encode protein sequences. Some non-coding DNA is transcribed into functional non-coding RNA molecules (e.g. transfer RNA, microRNA, piRNA, ribosomal RNA, and regulatory RNAs). Other functional regions of the non-coding DNA fraction include regulatory sequences that control gene expression; scaffold attachment regions; origins of DNA replication; centromeres; and telomeres. Some non-coding regions appear to be mostly nonfunctional, such as introns, pseudogenes, intergenic DNA, and fragments of transposons and viruses. Regions that are completely nonfunctional are called junk DNA. Fraction of non-coding genomic DNA In bacteria, the coding regions typically take up 88% of the genome. The remaining 12% does not encode proteins, but much of it still has biological function through genes where the RNA transcript is functional (non-coding genes) and regulatory sequences, which means that almost all of the bacterial genome has a function. The amount of coding DNA in eukaryotes is usually a much smaller fraction of the genome because eukaryotic genomes contain large amounts of repetitive DNA not found in prokaryotes. The human genome contains somewhere between 1–2% coding DNA. The exact number is not known because there are disputes over the number of functional coding exons and over the total size of the human genome. This means that 98–99% of the human genome consists of non-coding DNA and this includes many functional elements such as non-coding genes and regulatory sequences. Genome size in eukaryotes can vary over a wide range, even between closely related species. This puzzling observation was originally known as the C-value Paradox where "C" refers to the haploid genome size. The paradox was resolved with the discovery that most of the differences were due to the expansion and contraction of repetitive DNA and not the number of genes. Some researchers speculated that this repetitive DNA was mostly junk DNA. The reasons for the changes in genome size are still being worked out and this problem is called the C-value Enigma. This led to the observation that the number of genes does not seem to correlate with perceived notions of complexity because the number of genes seems to be relatively constant, an issue termed the G-value Paradox. For example, the genome of the unicellular Polychaos dubium (formerly known as Amoeba dubia) has been reported to contain more than 200 times the amount of DNA in humans (i.e. more than 600 billion pairs of bases vs a bit more than 3 billion in humans). The pufferfish Takifugu rubripes genome is only about one eighth the size of the human genome, yet seems to have a comparable number of genes. Genes take up about 30% of the pufferfish genome and the coding DNA is about 10%. (Non-coding DNA = 90%.) The reduced size of the pufferfish genome is due to a reduction in the length of introns and less repetitive DNA. Utricularia gibba, a bladderwort plant, has a very small nuclear genome (100.7 Mb) compared to most plants. It likely evolved from an ancestral genome that was 1,500 Mb in size. The bladderwort genome has roughly the same number of genes as other plants but the total amount of coding DNA comes to about 30% of the genome. The remainder of the genome (70% non-coding DNA) consists of promoters and regulatory sequences that are shorter than those in other plant species. 
The genes contain introns but there are fewer of them and they are smaller than the introns in other plant genomes. There are noncoding genes, including many copies of ribosomal RNA genes. The genome also contains telomere sequences and centromeres as expected. Much of the repetitive DNA seen in other eukaryotes has been deleted from the bladderwort genome since that lineage split from those of other plants. About 59% of the bladderwort genome consists of transposon-related sequences but since the genome is so much smaller than other genomes, this represents a considerable reduction in the amount of this DNA. The authors of the original 2013 article note that claims of additional functional elements in the non-coding DNA of animals do not seem to apply to plant genomes. According to a New York Times article, during the evolution of this species, "... genetic junk that didn't serve a purpose was expunged, and the necessary stuff was kept." According to Victor Albert of the University of Buffalo, the plant is able to expunge its so-called junk DNA and "have a perfectly good multicellular plant with lots of different cells, organs, tissue types and flowers, and you can do it without the junk. Junk is not needed." Types of non-coding DNA sequences Noncoding genes There are two types of genes: protein coding genes and noncoding genes. Noncoding genes are an important part of non-coding DNA and they include genes for transfer RNA and ribosomal RNA. These genes were discovered in the 1960s. Prokaryotic genomes contain genes for a number of other noncoding RNAs but noncoding RNA genes are much more common in eukaryotes. Typical classes of noncoding genes in eukaryotes include genes for small nuclear RNAs (snRNAs), small nucleolar RNAs (sno RNAs), microRNAs (miRNAs), short interfering RNAs (siRNAs), PIWI-interacting RNAs (piRNAs), and long noncoding RNAs (lncRNAs). In addition, there are a number of unique RNA genes that produce catalytic RNAs. Noncoding genes account for only a few percent of prokaryotic genomes but they can represent a vastly higher fraction in eukaryotic genomes. In humans, the noncoding genes take up at least 6% of the genome, largely because there are hundreds of copies of ribosomal RNA genes. Protein-coding genes occupy about 38% of the genome; a fraction that is much higher than the coding region because genes contain large introns. The total number of noncoding genes in the human genome is controversial. Some scientists think that there are only about 5,000 noncoding genes while others believe that there may be more than 100,000 (see the article on Non-coding RNA). The difference is largely due to debate over the number of lncRNA genes. Promoters and regulatory elements Promoters are DNA segments near the 5' end of the gene where transcription begins. They are the sites where RNA polymerase binds to initiate RNA synthesis. Every gene has a noncoding promoter. Regulatory elements are sites that control the transcription of a nearby gene. They are almost always sequences where transcription factors bind to DNA and these transcription factors can either activate transcription (activators) or repress transcription (repressors). Regulatory elements were discovered in the 1960s and their general characteristics were worked out in the 1970s by studying specific transcription factors in bacteria and bacteriophage. 
Promoters and regulatory sequences represent an abundant class of noncoding DNA but they mostly consist of a collection of relatively short sequences so they do not take up a very large fraction of the genome. The exact amount of regulatory DNA in mammalian genome is unclear because it is difficult to distinguish between spurious transcription factor binding sites and those that are functional. The binding characteristics of typical DNA-binding proteins were characterized in the 1970s and the biochemical properties of transcription factors predict that in cells with large genomes, the majority of binding sites will not be biologically functional. Many regulatory sequences occur near promoters, usually upstream of the transcription start site of the gene. Some occur within a gene and a few are located downstream of the transcription termination site. In eukaryotes, there are some regulatory sequences that are located at a considerable distance from the promoter region. These distant regulatory sequences are often called enhancers but there is no rigorous definition of enhancer that distinguishes it from other transcription factor binding sites. Introns Introns are the parts of a gene that are transcribed into the precursor RNA sequence, but ultimately removed by RNA splicing during the processing to mature RNA. Introns are found in both types of genes: protein-coding genes and noncoding genes. They are present in prokaryotes but they are much more common in eukaryotic genomes. Group I and group II introns take up only a small percentage of the genome when they are present. Spliceosomal introns (see Figure) are only found in eukaryotes and they can represent a substantial proportion of the genome. In humans, for example, introns in protein-coding genes cover 37% of the genome. Combining that with about 1% coding sequences means that protein-coding genes occupy about 38% of the human genome. The calculations for noncoding genes are more complicated because there is considerable dispute over the total number of noncoding genes but taking only the well-defined examples means that noncoding genes occupy at least 6% of the genome. Untranslated regions The standard biochemistry and molecular biology textbooks describe non-coding nucleotides in mRNA located between the 5' end of the gene and the translation initiation codon. These regions are called 5'-untranslated regions or 5'-UTRs. Similar regions called 3'-untranslated regions (3'-UTRs) are found at the end of the gene. The 5'-UTRs and 3'UTRs are very short in bacteria but they can be several hundred nucleotides in length in eukaryotes. They contain short elements that control the initiation of translation (5'-UTRs) and transcription termination (3'-UTRs) as well as regulatory elements that may control mRNA stability, processing, and targeting to different regions of the cell. Origins of replication DNA synthesis begins at specific sites called origins of replication. These are regions of the genome where the DNA replication machinery is assembled and the DNA is unwound to begin DNA synthesis. In most cases, replication proceeds in both directions from the replication origin. The main features of replication origins are sequences where specific initiation proteins are bound. A typical replication origin covers about 100-200 base pairs of DNA. Prokaryotes have one origin of replication per chromosome or plasmid but there are usually multiple origins in eukaryotic chromosomes. 
The human genome contains about 100,000 origins of replication representing about 0.3% of the genome. Centromeres Centromeres are the sites where spindle fibers attach to newly replicated chromosomes in order to segregate them into daughter cells when the cell divides. Each eukaryotic chromosome has a single functional centromere that is seen as a constricted region in a condensed metaphase chromosome. Centromeric DNA consists of a number of repetitive DNA sequences that often take up a significant fraction of the genome because each centromere can be millions of base pairs in length. In humans, for example, the sequences of all 24 centromeres have been determined and they account for about 6% of the genome. However, it is unlikely that all of this noncoding DNA is essential since there is considerable variation in the total amount of centromeric DNA in different individuals. Centromeres are another example of functional noncoding DNA sequences that have been known for almost half a century and it is likely that they are more abundant than coding DNA. Telomeres Telomeres are regions of repetitive DNA at the end of a chromosome, which provide protection from chromosomal deterioration during DNA replication. Recent studies have shown that telomeres function to aid in their own stability. Telomeric repeat-containing RNA (TERRA) are transcripts derived from telomeres. TERRA has been shown to maintain telomerase activity and lengthen the ends of chromosomes. Scaffold attachment regions Both prokaryotic and eukaryotic genomes are organized into large loops of protein-bound DNA. In eukaryotes, the bases of the loops are called scaffold attachment regions (SARs) and they consist of stretches of DNA that bind an RNA/protein complex to stabilize the loop. There are about 100,000 loops in the human genome and each SAR consists of about 100 bp of DNA, so the total amount of DNA devoted to SARs accounts for about 0.3% of the human genome. Pseudogenes Pseudogenes are mostly former genes that have become non-functional due to mutation, but the term also refers to inactive DNA sequences that are derived from RNAs produced by functional genes (processed pseudogenes). Pseudogenes are only a small fraction of noncoding DNA in prokaryotic genomes because they are eliminated by negative selection. In some eukaryotes, however, pseudogenes can accumulate because selection is not powerful enough to eliminate them (see Nearly neutral theory of molecular evolution). The human genome contains about 15,000 pseudogenes derived from protein-coding genes and an unknown number derived from noncoding genes. They may cover a substantial fraction of the genome (~5%) since many of them contain former intron sequences. Pseudogenes are junk DNA by definition and they evolve at the neutral rate as expected for junk DNA. Some former pseudogenes have secondarily acquired a function and this leads some scientists to speculate that most pseudogenes are not junk because they have a yet-to-be-discovered function. Repeat sequences, transposons and viral elements Transposons and retrotransposons are mobile genetic elements. Retrotransposon repeated sequences, which include long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs), account for a large proportion of the genomic sequences in many species. Alu sequences, classified as a short interspersed nuclear element, are the most abundant mobile elements in the human genome. 
Some examples have been found of SINEs exerting transcriptional control of some protein-encoding genes. Endogenous retrovirus sequences are the product of reverse transcription of retrovirus genomes into the genomes of germ cells. Mutation within these retro-transcribed sequences can inactivate the viral genome. Over 8% of the human genome is made up of (mostly decayed) endogenous retrovirus sequences, as part of the over 42% fraction that is recognizably derived of retrotransposons, while another 3% can be identified to be the remains of DNA transposons. Much of the remaining half of the genome that is currently without an explained origin is expected to have found its origin in transposable elements that were active so long ago (> 200 million years) that random mutations have rendered them unrecognizable. Genome size variation in at least two kinds of plants is mostly the result of retrotransposon sequences. Highly repetitive DNA Highly repetitive DNA consists of short stretches of DNA that are repeated many times in tandem (one after the other). The repeat segments are usually between 2 bp and 10 bp but longer ones are known. Highly repetitive DNA is rare in prokaryotes but common in eukaryotes, especially those with large genomes. It is sometimes called satellite DNA. Most of the highly repetitive DNA is found in centromeres and telomeres (see above) and most of it is functional although some might be redundant. The other significant fraction resides in short tandem repeats (STRs; also called microsatellites) consisting of short stretches of a simple repeat such as ATC. There are about 350,000 STRs in the human genome and they are scattered throughout the genome with an average length of about 25 repeats. Variations in the number of STR repeats can cause genetic diseases when they lie within a gene but most of these regions appear to be non-functional junk DNA where the number of repeats can vary considerably from individual to individual. This is why these length differences are used extensively in DNA fingerprinting. Junk DNA Junk DNA is DNA that has no biologically relevant function such as pseudogenes and fragments of once active transposons. Bacteria and viral genomes have very little junk DNA but some eukaryotic genomes may have a substantial amount of junk DNA. The exact amount of nonfunctional DNA in humans and other species with large genomes has not been determined and there is considerable controversy in the scientific literature. The nonfunctional DNA in bacterial genomes is mostly located in the intergenic fraction of non-coding DNA but in eukaryotic genomes it may also be found within introns. There are many examples of functional DNA elements in non-coding DNA, and it is erroneous to equate non-coding DNA with junk DNA. Genome-wide association studies (GWAS) and non-coding DNA Genome-wide association studies (GWAS) identify linkages between alleles and observable traits such as phenotypes and diseases. Most of the associations are between single-nucleotide polymorphisms (SNPs) and the trait being examined and most of these SNPs are located in non-functional DNA. The association establishes a linkage that helps map the DNA region responsible for the trait but it does not necessarily identify the mutations causing the disease or phenotypic difference. SNPs that are tightly linked to traits are the ones most likely to identify a causal mutation. (The association is referred to as tight linkage disequilibrium.) 
About 12% of these polymorphisms are found in coding regions; about 40% are located in introns; and most of the rest are found in intergenic regions, including regulatory sequences. See also Conserved non-coding sequence Eukaryotic chromosome fine structure Gene-centered view of evolution Gene regulatory network Intergenic region Intragenomic conflict Phylogenetic footprinting Transcriptome Non-coding RNA Gene desert The Onion Test References Further reading External links Plant DNA C-values Database at Royal Botanic Gardens, Kew Fungal Genome Size Database at Estonian Institute of Zoology and Botany ENCODE: The human encyclopaedia at Nature ENCODE DNA Gene expression
Non-coding DNA
[ "Chemistry", "Biology" ]
3,761
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
44,363
https://en.wikipedia.org/wiki/Wien%27s%20displacement%20law
In physics, Wien's displacement law states that the black-body radiation curve for different temperatures will peak at different wavelengths that are inversely proportional to the temperature. The shift of that peak is a direct consequence of the Planck radiation law, which describes the spectral brightness or intensity of black-body radiation as a function of wavelength at any given temperature. However, it had been discovered by German physicist Wilhelm Wien several years before Max Planck developed that more general equation, and describes the entire shift of the spectrum of black-body radiation toward shorter wavelengths as temperature increases. Formally, the wavelength version of Wien's displacement law states that the spectral radiance of black-body radiation per unit wavelength peaks at the wavelength λpeak given by: λpeak = b / T, where T is the absolute temperature and b is a constant of proportionality called Wien's displacement constant, equal to 2.897771955 × 10^−3 m⋅K, or approximately 2898 μm⋅K. This is an inverse relationship between wavelength and temperature. So the higher the temperature, the shorter or smaller the wavelength of the thermal radiation. The lower the temperature, the longer or larger the wavelength of the thermal radiation. For visible radiation, hot objects emit bluer light than cool objects. If one is considering the peak of black body emission per unit frequency or per proportional bandwidth, one must use a different proportionality constant. However, the form of the law remains the same: the peak wavelength is inversely proportional to temperature, and the peak frequency is directly proportional to temperature. There are other formulations of Wien's displacement law, which are parameterized relative to other quantities. For these alternate formulations, the form of the relationship is similar, but the proportionality constant differs. Wien's displacement law may be referred to as "Wien's law", a term which is also used for the Wien approximation. In "Wien's displacement law", the word displacement refers to how the intensity-wavelength graphs appear shifted (displaced) for different temperatures. Examples Wien's displacement law is relevant to some everyday experiences: A piece of metal heated by a blow torch first becomes "red hot" as the very longest visible wavelengths appear red, then becomes more orange-red as the temperature is increased, and at very high temperatures would be described as "white hot" as shorter and shorter wavelengths come to predominate the black body emission spectrum. Before it had even reached the red hot temperature, the thermal emission was mainly at longer infrared wavelengths, which are not visible; nevertheless, that radiation could be felt as it warms one's nearby skin. One easily observes changes in the color of an incandescent light bulb (which produces light through thermal radiation) as the temperature of its filament is varied by a light dimmer. As the light is dimmed and the filament temperature decreases, the distribution of color shifts toward longer wavelengths and the light appears redder, as well as dimmer. A wood fire at 1500 K puts out peak radiation at about 2000 nanometers. 98% of its radiation is at wavelengths longer than 1000 nm, and only a tiny proportion at visible wavelengths (390–700 nanometers). Consequently, a campfire can keep one warm but is a poor source of visible light. The effective temperature of the Sun is 5778 K. 
Using Wien's law, one finds a peak emission per nanometer (of wavelength) at a wavelength of about 500 nm, in the green portion of the spectrum near the peak sensitivity of the human eye. On the other hand, in terms of power per unit optical frequency, the Sun's peak emission is at 343 THz or a wavelength of 883 nm in the near infrared. In terms of power per percentage bandwidth, the peak is at about 635 nm, a red wavelength. About half of the Sun's radiation is at wavelengths shorter than 710 nm, about the limit of the human vision. Of that, about 12% is at wavelengths shorter than 400 nm, ultraviolet wavelengths, which is invisible to an unaided human eye. A large amount of the Sun's radiation falls in the fairly small visible spectrum and passes through the atmosphere. The preponderance of emission in the visible range, however, is not the case in most stars. The hot supergiant Rigel emits 60% of its light in the ultraviolet, while the cool supergiant Betelgeuse emits 85% of its light at infrared wavelengths. With both stars prominent in the constellation of Orion, one can easily appreciate the color difference between the blue-white Rigel (T = 12100 K) and the red Betelgeuse (T ≈ 3800 K). While few stars are as hot as Rigel, stars cooler than the Sun or even as cool as Betelgeuse are very commonplace. Mammals with a skin temperature of about 300 K emit peak radiation at around 10 μm in the far infrared. This is therefore the range of infrared wavelengths that pit viper snakes and passive IR cameras must sense. When comparing the apparent color of lighting sources (including fluorescent lights, LED lighting, computer monitors, and photoflash), it is customary to cite the color temperature. Although the spectra of such lights are not accurately described by the black-body radiation curve, a color temperature (the correlated color temperature) is quoted for which black-body radiation would most closely match the subjective color of that source. For instance, the blue-white fluorescent light sometimes used in an office may have a color temperature of 6500 K, whereas the reddish tint of a dimmed incandescent light may have a color temperature (and an actual filament temperature) of 2000 K. Note that the informal description of the former (bluish) color as "cool" and the latter (reddish) as "warm" is exactly opposite the actual temperature change involved in black-body radiation. Discovery The law is named for Wilhelm Wien, who derived it in 1893 based on a thermodynamic argument. Wien considered adiabatic expansion of a cavity containing waves of light in thermal equilibrium. Using Doppler's principle, he showed that, under slow expansion or contraction, the energy of light reflecting off the walls changes in exactly the same way as the frequency. A general principle of thermodynamics is that a thermal equilibrium state, when expanded very slowly, stays in thermal equilibrium. Wien himself deduced this law theoretically in 1893, following Boltzmann's thermodynamic reasoning. It had previously been observed, at least semi-quantitatively, by an American astronomer, Langley. This upward shift in with is familiar to everyone—when an iron is heated in a fire, the first visible radiation (at around 900 K) is deep red, the lowest frequency visible light. Further increase in causes the color to change to orange then yellow, and finally blue at very high temperatures (10,000 K or more) for which the peak in radiation intensity has moved beyond the visible into the ultraviolet. 
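A minimal numerical sketch of the peak-wavelength form of the law, applied to the temperatures quoted in the examples above. The constant is the standard reference value; the script itself and its labels are assumptions of this sketch, not text from the article:

```python
# Wien's displacement law, wavelength form: lambda_peak = b / T
b = 2.897771955e-3  # Wien's displacement constant, m*K

for label, T in [("wood fire", 1500.0),
                 ("Sun (effective temperature)", 5778.0),
                 ("mammalian skin", 300.0)]:
    lam = b / T  # peak wavelength in metres
    print(f"{label}: T = {T:.0f} K -> lambda_peak ~ {lam * 1e6:.2f} micrometres")

# Approximate output:
#   wood fire: ~1.93 micrometres (near infrared, consistent with "about 2000 nanometers")
#   Sun: ~0.50 micrometres (green visible light)
#   mammalian skin: ~9.66 micrometres (far infrared, roughly 10 micrometres)
```

These values match the campfire, solar and pit-viper/IR-camera examples given earlier in the article.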
The adiabatic principle allowed Wien to conclude that for each mode, the adiabatic invariant energy/frequency is only a function of the other adiabatic invariant, the frequency/temperature. From this, he derived the "strong version" of Wien's displacement law: the statement that the blackbody spectral radiance is proportional to for some function of a single variable. A modern variant of Wien's derivation can be found in the textbook by Wannier and in a paper by E. Buckingham The consequence is that the shape of the black-body radiation function (which was not yet understood) would shift proportionally in frequency (or inversely proportionally in wavelength) with temperature. When Max Planck later formulated the correct black-body radiation function it did not explicitly include Wien's constant . Rather, the Planck constant was created and introduced into his new formula. From the Planck constant and the Boltzmann constant , Wien's constant can be obtained. Peak differs according to parameterization The results in the tables above summarize results from other sections of this article. Percentiles are percentiles of the Planck blackbody spectrum. Only 25 percent of the energy in the black-body spectrum is associated with wavelengths shorter than the value given by the peak-wavelength version of Wien's law. Notice that for a given temperature, different parameterizations imply different maximal wavelengths. In particular, the curve of intensity per unit frequency peaks at a different wavelength than the curve of intensity per unit wavelength. For example, using and parameterization by wavelength, the wavelength for maximal spectral radiance is with corresponding frequency . For the same temperature, but parameterizing by frequency, the frequency for maximal spectral radiance is with corresponding wavelength . These functions are radiance density functions, which are probability density functions scaled to give units of radiance. The density function has different shapes for different parameterizations, depending on relative stretching or compression of the abscissa, which measures the change in probability density relative to a linear change in a given parameter. Since wavelength and frequency have a reciprocal relation, they represent significantly non-linear shifts in probability density relative to one another. The total radiance is the integral of the distribution over all positive values, and that is invariant for a given temperature under any parameterization. Additionally, for a given temperature the radiance consisting of all photons between two wavelengths must be the same regardless of which distribution you use. That is to say, integrating the wavelength distribution from to will result in the same value as integrating the frequency distribution between the two frequencies that correspond to and , namely from to . However, the distribution shape depends on the parameterization, and for a different parameterization the distribution will typically have a different peak density, as these calculations demonstrate. The important point of Wien's law, however, is that any such wavelength marker, including the median wavelength (or, alternatively, the wavelength below which any specified percentage of the emission occurs) is proportional to the reciprocal of temperature. 
That is, the shape of the distribution for a given parameterization scales with and translates according to temperature, and can be calculated once for a canonical temperature, then appropriately shifted and scaled to obtain the distribution for another temperature. This is a consequence of the strong statement of Wien's law. Frequency-dependent formulation For spectral flux considered per unit frequency (in hertz), Wien's displacement law describes a peak emission at the optical frequency given by: νpeak = x k T / h ≈ (5.879 × 10^10 Hz/K) · T, or equivalently h νpeak = x k T, where x ≈ 2.821 is a constant resulting from the maximization equation, k is the Boltzmann constant, h is the Planck constant, and T is the absolute temperature. With the emission now considered per unit frequency, this peak now corresponds to a wavelength about 76% longer than the peak considered per unit wavelength. The relevant math is detailed in the next section. Derivation from Planck's law Parameterization by wavelength Planck's law for the spectrum of black-body radiation predicts the Wien displacement law and may be used to numerically evaluate the constant relating temperature and the peak parameter value for any particular parameterization. Commonly a wavelength parameterization is used and in that case the black body spectral radiance (power per emitting area per solid angle) is: u(λ, T) = (2hc^2/λ^5) · 1/(e^(hc/λkT) − 1). Differentiating u(λ, T) with respect to λ and setting the derivative equal to zero gives an equation which can be simplified to: (hc/λkT) · e^(hc/λkT)/(e^(hc/λkT) − 1) = 5. By defining x = hc/λkT, the equation becomes one in the single variable x: x e^x/(e^x − 1) = 5, which is equivalent to: x = 5(1 − e^(−x)). This equation is solved by x = 5 + W0(−5e^(−5)), where W0 is the principal branch of the Lambert W function, and gives x = 4.965114... . Solving for the wavelength in millimetres, and using kelvins for the temperature yields: λpeak = hc/(x k T) ≈ (2.898 mm⋅K)/T. Parameterization by frequency Another common parameterization is by frequency. The derivation yielding peak parameter value is similar, but starts with the form of Planck's law as a function of frequency ν: u(ν, T) = (2hν^3/c^2) · 1/(e^(hν/kT) − 1). The preceding process using this equation yields: (hν/kT) · e^(hν/kT)/(e^(hν/kT) − 1) = 3. The net result is: x = 3(1 − e^(−x)), with x = hν/kT. This is similarly solved with the Lambert W function: x = 3 + W0(−3e^(−3)), giving x = 2.821439... . Solving for ν produces: νpeak = x k T / h ≈ (5.879 × 10^10 Hz/K) · T. Parameterization by the logarithm of wavelength or frequency Using the implicit equation x = 4(1 − e^(−x)) yields the peak in the spectral radiance density function expressed in the parameter radiance per proportional bandwidth. (That is, the density of irradiance per frequency bandwidth proportional to the frequency itself, which can be calculated by considering infinitesimal intervals of ln ν (or equivalently ln λ) rather than of frequency itself.) This is perhaps a more intuitive way of presenting "wavelength of peak emission". That yields x = 3.920690... . Mean photon energy as an alternate characterization Another way of characterizing the radiance distribution is via the mean photon energy ⟨E⟩ = [π^4/(30 ζ(3))] kT ≈ 2.701 kT, where ζ is the Riemann zeta function. The wavelength corresponding to the mean photon energy is given by λ⟨E⟩ ≈ (5.33 mm⋅K)/T. Criticism Marr and Wilkin (2012) contend that the widespread teaching of Wien's displacement law in introductory courses is undesirable, and it would be better replaced by alternate material. They argue that teaching the law is problematic because: the Planck curve is too broad for the peak to stand out or be regarded as significant; the location of the peak depends on the parameterization, and they cite several sources as concurring that "the designation of any peak of the function is not meaningful and should, therefore, be de-emphasized"; the law is not used for determining temperatures in actual practice, direct use of the Planck function being relied upon instead. 
They suggest that the average photon energy be presented in place of Wien's displacement law, as being a more physically meaningful indicator of changes that occur with changing temperature. In connection with this, they recommend that the average number of photons per second be discussed in connection with the Stefan–Boltzmann law. They recommend that the Planck spectrum be plotted as a "spectral energy density per fractional bandwidth distribution," using a logarithmic scale for the wavelength or frequency. See also Wien approximation Emissivity Sakuma–Hattori equation Stefan–Boltzmann law Thermometer Ultraviolet catastrophe References Further reading External links Eric Weisstein's World of Physics Eponymous laws of physics Statistical mechanics Foundational quantum physics Light 1893 in science 1893 in Germany
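The Lambert W results quoted in the derivation above can be checked numerically. This is a sketch assuming SciPy's lambertw and zeta functions and standard CODATA constants; none of the names below come from the article itself:

```python
import numpy as np
from scipy.special import lambertw, zeta

h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s
k = 1.380649e-23    # Boltzmann constant, J/K

# x solves x = n*(1 - exp(-x)) for n = 5 (per wavelength), 3 (per frequency), 4 (per log)
def wien_x(n):
    return n + lambertw(-n * np.exp(-n)).real

x_lam, x_nu, x_log = wien_x(5), wien_x(3), wien_x(4)
print(x_lam, x_nu, x_log)          # ~4.965114, ~2.821439, ~3.920690

print(h * c / (x_lam * k))         # Wien displacement constant b ~ 2.898e-3 m*K
print(x_nu * k / h)                # ~5.879e10 Hz per kelvin
print(np.pi**4 / (30 * zeta(3)))   # mean photon energy factor ~ 2.701 (in units of kT)
```

The printed values reproduce the constants used throughout the article: the 2.898 mm⋅K wavelength constant, the 58.79 GHz/K frequency constant, and the 2.701 kT mean photon energy.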
Wien's displacement law
[ "Physics" ]
2,831
[ "Physical phenomena", "Spectrum (physical sciences)", "Foundational quantum physics", "Electromagnetic spectrum", "Quantum mechanics", "Waves", "Light", "Statistical mechanics" ]
44,401
https://en.wikipedia.org/wiki/Brown%20dwarf
Brown dwarfs are substellar objects that have more mass than the biggest gas giant planets, but less than the least massive main-sequence stars. Their mass is approximately 13 to 80 times that of Jupiter: not big enough to sustain nuclear fusion of ordinary hydrogen (1H) into helium in their cores, but massive enough to emit some light and heat from the fusion of deuterium (2H). The most massive ones can fuse lithium (7Li). Astronomers classify self-luminous objects by spectral type, a distinction intimately tied to the surface temperature, and brown dwarfs occupy types M, L, T, and Y. As brown dwarfs do not undergo stable hydrogen fusion, they cool down over time, progressively passing through later spectral types as they age. Their name comes not from the color of light they emit but from their falling between white dwarf stars and "dark" planets in size. To the naked eye, brown dwarfs would appear in different colors depending on their temperature. The warmest ones are possibly orange or red, while cooler brown dwarfs would likely appear magenta or black to the human eye. Brown dwarfs may be fully convective, with no layers or chemical differentiation by depth. Though their existence was initially theorized in the 1960s, it was not until the mid-1990s that the first unambiguous brown dwarfs were discovered. As brown dwarfs have relatively low surface temperatures, they are not very bright at visible wavelengths, emitting most of their light in the infrared. However, with the advent of more capable infrared detecting devices, thousands of brown dwarfs have been identified. The nearest known brown dwarfs are located in the Luhman 16 system, a binary of L- and T-type brown dwarfs about 6.5 light-years from the Sun. Luhman 16 is the third closest system to the Sun after Alpha Centauri and Barnard's Star. History Early theorizing The objects now called "brown dwarfs" were theorized by Shiv S. Kumar in the 1960s to exist and were originally called black dwarfs, a classification for dark substellar objects floating freely in space that were not massive enough to sustain hydrogen fusion. However, (a) the term black dwarf was already in use to refer to a cold white dwarf; (b) red dwarfs fuse hydrogen; and (c) these objects may be luminous at visible wavelengths early in their lives. Because of this, alternative names for these objects were proposed, including substar. In 1975, Jill Tarter suggested the term "brown dwarf", using "brown" as an approximate color. The term "black dwarf" still refers to a white dwarf that has cooled to the point that it no longer emits significant amounts of light. However, the time required for even the lowest-mass white dwarf to cool to this temperature is calculated to be longer than the current age of the universe; hence such objects are expected to not yet exist. Early theories concerning the nature of the lowest-mass stars and the hydrogen-burning limit suggested that a population I object with a mass less than 0.07 solar masses or a population II object below a somewhat higher mass limit would never go through normal stellar evolution and would become a completely degenerate star. The resulting brown dwarf star is sometimes called a failed star. The first self-consistent calculation of the hydrogen-burning minimum mass confirmed a value between 0.07 and 0.08 solar masses for population I objects. 
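A small back-of-the-envelope check relating the Jupiter-mass limits quoted above to the solar-mass values used later in this section. The constants are standard reference masses; the script itself is only an illustrative sketch:

```python
M_SUN = 1.989e30  # kg
M_JUP = 1.898e27  # kg

ratio = M_SUN / M_JUP
print(f"1 solar mass ~ {ratio:.0f} Jupiter masses")  # ~1048

# Hydrogen-burning limit of ~0.075 solar masses, expressed in Jupiter masses
print(f"0.075 M_sun ~ {0.075 * ratio:.0f} M_Jup")    # ~79, close to the 80 M_Jup upper bound

# Deuterium-burning limit of ~13 Jupiter masses, expressed in solar masses
print(f"13 M_Jup ~ {13 / ratio:.4f} M_sun")          # ~0.0124
```

This shows why the 13–80 Jupiter-mass range and the 0.07–0.08 solar-mass hydrogen-burning limit describe essentially the same upper boundary.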
Deuterium fusion The discovery of deuterium burning down to () and the impact of dust formation in the cool outer atmospheres of brown dwarfs in the late 1980s brought these theories into question. However, such objects were hard to find because they emit almost no visible light. Their strongest emissions are in the infrared (IR) spectrum, and ground-based IR detectors were too imprecise at that time to readily identify any brown dwarfs. Since then, numerous searches by various methods have sought these objects. These methods included multi-color imaging surveys around field stars, imaging surveys for faint companions of main-sequence dwarfs and white dwarfs, surveys of young star clusters, and radial velocity monitoring for close companions. GD 165B and class L For many years, efforts to discover brown dwarfs were fruitless. In 1988, however, a faint companion to the white dwarf star GD 165 was found in an infrared search of white dwarfs. The spectrum of the companion GD 165B was very red and enigmatic, showing none of the features expected of a low-mass red dwarf. It became clear that GD 165B would need to be classified as a much cooler object than the latest M dwarfs then known. GD 165B remained unique for almost a decade until the advent of the Two Micron All-Sky Survey (2MASS) in 1997, which discovered many objects with similar colors and spectral features. Today, GD 165B is recognized as the prototype of a class of objects now called "L dwarfs". Although the discovery of the coolest dwarf was highly significant at the time, it was debated whether GD 165B would be classified as a brown dwarf or simply a very-low-mass star, because observationally it is very difficult to distinguish between the two. Soon after the discovery of GD 165B, other brown-dwarf candidates were reported. Most failed to live up to their candidacy, however, because the absence of lithium showed them to be stellar objects. True stars burn their lithium within a little over 100 Myr, whereas brown dwarfs (which can, confusingly, have temperatures and luminosities similar to true stars) will not. Hence, the detection of lithium in the atmosphere of an object older than 100 Myr ensures that it is a brown dwarf. Gliese 229B and class T The first class "T" brown dwarf was discovered in 1994 by Caltech astronomers Shrinivas Kulkarni, Tadashi Nakajima, Keith Matthews and Rebecca Oppenheimer, and Johns Hopkins scientists Samuel T. Durrance and David Golimowski. It was confirmed in 1995 as a substellar companion to Gliese 229. Gliese 229b is one of the first two instances of clear evidence for a brown dwarf, along with Teide 1. Confirmed in 1995, both were identified by the presence of the 670.8 nm lithium line. The latter was found to have a temperature and luminosity well below the stellar range. Its near-infrared spectrum clearly exhibited a methane absorption band at 2 micrometres, a feature that had previously only been observed in the atmospheres of giant planets and that of Saturn's moon Titan. Methane absorption is not expected at any temperature of a main-sequence star. This discovery helped to establish yet another spectral class even cooler than L dwarfs, known as "T dwarfs", for which Gliese 229B is the prototype. Teide 1 and class M The first confirmed class "M" brown dwarf was discovered by Spanish astrophysicists Rafael Rebolo (head of the team), María Rosa Zapatero-Osorio, and Eduardo L. Martín in 1994. This object, found in the Pleiades open cluster, received the name Teide 1. 
The discovery article was submitted to Nature in May 1995, and published on 14 September 1995. Nature highlighted "Brown dwarfs discovered, official" on the front page of that issue. Teide 1 was discovered in images collected by the IAC team on 6 January 1994 using the 80 cm telescope (IAC 80) at Teide Observatory, and its spectrum was first recorded in December 1994 using the 4.2 m William Herschel Telescope at Roque de los Muchachos Observatory (La Palma). The distance, chemical composition, and age of Teide 1 could be established because of its membership in the young Pleiades star cluster. Using the most advanced stellar and substellar evolution models at that moment, the team estimated for Teide 1 a mass of , which is below the stellar-mass limit. The object became a reference in subsequent young brown dwarf related works. In theory, a brown dwarf below is unable to burn lithium by thermonuclear fusion at any time during its evolution. This fact is one of the lithium test principles used to judge the substellar nature of low-luminosity and low-surface-temperature astronomical bodies. High-quality spectral data acquired by the Keck 1 telescope in November 1995 showed that Teide 1 still had the initial lithium abundance of the original molecular cloud from which Pleiades stars formed, proving the lack of thermonuclear fusion in its core. These observations fully confirmed that Teide 1 is a brown dwarf, as well as the efficiency of the spectroscopic lithium test. For some time, Teide 1 was the smallest known object outside the Solar System that had been identified by direct observation. Since then, over 1,800 brown dwarfs have been identified, even some very close to Earth, like Epsilon Indi Ba and Bb, a pair of brown dwarfs gravitationally bound to a Sun-like star 12 light-years from the Sun, and Luhman 16, a binary system of brown dwarfs at 6.5 light-years from the Sun. Theory The standard mechanism for star birth is through the gravitational collapse of a cold interstellar cloud of gas and dust. As the cloud contracts, it heats due to the Kelvin–Helmholtz mechanism. Early in the process the contracting gas quickly radiates away much of the energy, allowing the collapse to continue. Eventually, the central region becomes sufficiently dense to trap radiation. Consequently, the central temperature and density of the collapsed cloud increase dramatically with time, slowing the contraction, until the conditions are hot and dense enough for thermonuclear reactions to occur in the core of the protostar. For a typical star, gas and radiation pressure generated by the thermonuclear fusion reactions within its core will support it against any further gravitational contraction. Hydrostatic equilibrium is reached, and the star will spend most of its lifetime fusing hydrogen into helium as a main-sequence star. If, however, the initial mass of the protostar is less than about , normal hydrogen thermonuclear fusion reactions will not ignite in the core. Gravitational contraction does not heat the small protostar very effectively, and before the temperature in the core can increase enough to trigger fusion, the density reaches the point where electrons become closely packed enough to create quantum electron degeneracy pressure. According to the brown dwarf interior models, typical conditions in the core for density, temperature and pressure are expected to be the following: This means that the protostar is not massive or dense enough ever to reach the conditions needed to sustain hydrogen fusion. 
The infalling matter is prevented, by electron degeneracy pressure, from reaching the densities and pressures needed. Further gravitational contraction is prevented and the result is a brown dwarf that simply cools off by radiating away its internal thermal energy. Note that, in principle, it is possible for a brown dwarf to slowly accrete mass above the hydrogen burning limit without initiating hydrogen fusion. This could happen via mass transfer in a binary brown dwarf system. High-mass brown dwarfs versus low-mass stars Lithium is generally present in brown dwarfs and not in low-mass stars. Stars, which reach the high temperature necessary for fusing hydrogen, rapidly deplete their lithium. Fusion of lithium-7 and a proton occurs, producing two helium-4 nuclei. The temperature necessary for this reaction is just below that necessary for hydrogen fusion. Convection in low-mass stars ensures that lithium in the whole volume of the star is eventually depleted. Therefore, the presence of the lithium spectral line in a candidate brown dwarf is a strong indicator that it is indeed a substellar object. Lithium test The use of lithium to distinguish candidate brown dwarfs from low-mass stars is commonly referred to as the lithium test, and was pioneered by Rafael Rebolo, Eduardo Martín and Antonio Magazzu. However, lithium is also seen in very young stars, which have not yet had enough time to burn it all. Heavier stars, like the Sun, can also retain lithium in their outer layers, which never get hot enough to fuse lithium, and whose convective layer does not mix with the core where the lithium would be rapidly depleted. Those larger stars are easily distinguishable from brown dwarfs by their size and luminosity. Conversely, brown dwarfs at the high end of their mass range can be hot enough to deplete their lithium when they are young. Dwarfs of mass greater than can burn their lithium by the time they are half a billion years old; thus the lithium test is not perfect. Atmospheric methane Unlike stars, older brown dwarfs are sometimes cool enough that, over very long periods of time, their atmospheres can gather observable quantities of methane, which cannot form in hotter objects. Dwarfs confirmed in this fashion include Gliese 229B. Iron, silicate and sulfide clouds Main-sequence stars cool, but eventually reach a minimum bolometric luminosity that they can sustain through steady fusion. This luminosity varies from star to star, but is generally at least 0.01% that of the Sun. Brown dwarfs cool and darken steadily over their lifetimes; sufficiently old brown dwarfs will be too faint to be detectable. Clouds are used to explain the weakening of the iron hydride (FeH) spectral line in late L-dwarfs. Iron clouds deplete FeH in the upper atmosphere, and the cloud layer blocks the view to lower layers still containing FeH. The later strengthening of this chemical compound at cooler temperatures of mid- to late T-dwarfs is explained by disturbed clouds that allow a telescope to look into the deeper layers of the atmosphere that still contain FeH. Young L/T-dwarfs (L2-T4) show high variability, which could be explained with clouds, hot spots, magnetically driven aurorae or thermochemical instabilities. The clouds of these brown dwarfs are explained as either iron clouds with varying thickness or a lower thick iron cloud layer and an upper silicate cloud layer. This upper silicate cloud layer can consist of quartz, enstatite, corundum and/or forsterite. 
It is however not clear if silicate clouds are always necessary for young objects. Silicate absorption can be directly observed in the mid-infrared at 8 to 12 μm. Observations with Spitzer IRS have shown that silicate absorption is common, but not ubiquitous, for L2-L8 dwarfs. Additionally, MIRI has observed silicate absorption in the planetary-mass companion VHS 1256b. Iron rain as part of atmospheric convection processes is possible only in brown dwarfs, and not in small stars. The spectroscopy research into iron rain is still ongoing, but not all brown dwarfs will always have this atmospheric anomaly. In 2013, a heterogeneous iron-containing atmosphere was imaged around the B component in the nearby Luhman 16 system. For late T-type brown dwarfs only a few variable searches were carried out. Thin cloud layers are predicted to form in late T-dwarfs from chromium and potassium chloride, as well as several sulfides. These sulfides are manganese sulfide, sodium sulfide and zinc sulfide. The variable T7 dwarf 2M0050–3322 is explained to have a top layer of potassium chloride clouds, a mid layer of sodium sulfide clouds and a lower layer of manganese sulfide clouds. Patchy clouds of the top two cloud layers could explain why the methane and water vapor bands are variable. At the lowest temperatures of the Y-dwarf WISE 0855-0714 patchy cloud layers of sulfide and water ice clouds could cover 50% of the surface. Low-mass brown dwarfs versus high-mass planets Like stars, brown dwarfs form independently, but, unlike stars, they lack sufficient mass to "ignite" hydrogen fusion. Like all stars, they can occur singly or in close proximity to other stars. Some orbit stars and can, like planets, have eccentric orbits. Size and fuel-burning ambiguities Brown dwarfs are all roughly the same radius as Jupiter. At the high end of their mass range (), the volume of a brown dwarf is governed primarily by electron-degeneracy pressure, as it is in white dwarfs; at the low end of the range (), their volume is governed primarily by Coulomb pressure, as it is in planets. The net result is that the radii of brown dwarfs vary by only 10–15% over the range of possible masses. Moreover, the mass–radius relationship shows no change from about one Saturn mass to the onset of hydrogen burning (), suggesting that from this perspective brown dwarfs are simply high-mass Jovian planets. This can make distinguishing them from planets difficult. In addition, many brown dwarfs undergo no fusion; even those at the high end of the mass range (over ) cool quickly enough that after 10 million years they no longer undergo fusion. Heat spectrum X-ray and infrared spectra are telltale signs of brown dwarfs. Some emit X-rays; and all "warm" dwarfs continue to glow tellingly in the red and infrared spectra until they cool to planet-like temperatures (under ). Gas giants have some of the characteristics of brown dwarfs. Like the Sun, Jupiter and Saturn are both made primarily of hydrogen and helium. Saturn is nearly as large as Jupiter, despite having only 30% the mass. Three of the giant planets in the Solar System (Jupiter, Saturn, and Neptune) emit much more (up to about twice) heat than they receive from the Sun. All four giant planets have their own "planetary" systems, in the form of extensive moon systems. 
Current IAU standard Currently, the International Astronomical Union considers an object above (the limiting mass for thermonuclear fusion of deuterium) to be a brown dwarf, whereas an object under that mass (and orbiting a star or stellar remnant) is considered a planet. The minimum mass required to trigger sustained hydrogen burning (about ) forms the upper limit of the definition. It is also debated whether brown dwarfs would be better defined by their formation process rather than by theoretical mass limits based on nuclear fusion reactions. Under this interpretation brown dwarfs are those objects that represent the lowest-mass products of the star formation process, while planets are objects formed in an accretion disk surrounding a star. The coolest free-floating objects discovered, such as WISE 0855, as well as the lowest-mass young objects known, like PSO J318.5−22, are thought to have masses below , and as a result are sometimes referred to as planetary-mass objects due to the ambiguity of whether they should be regarded as rogue planets or brown dwarfs. There are planetary-mass objects known to orbit brown dwarfs, such as 2M1207b,2MASS J044144b and Oph 98 B. The 13-Jupiter-mass cutoff is a rule of thumb rather than a quantity with precise physical significance. Larger objects will burn most of their deuterium and smaller ones will burn only a little, and the 13Jupiter-mass value is somewhere in between. The amount of deuterium burnt also depends to some extent on the composition of the object, specifically on the amount of helium and deuterium present and on the fraction of heavier elements, which determines the atmospheric opacity and thus the radiative cooling rate. As of 2011 the Extrasolar Planets Encyclopaedia included objects up to 25 Jupiter masses, saying, "The fact that there is no special feature around in the observed mass spectrum reinforces the choice to forget this mass limit". As of 2016, this limit was increased to 60 Jupiter masses, based on a study of mass–density relationships. The Exoplanet Data Explorer includes objects up to 24 Jupiter masses with the advisory: "The 13 Jupiter-mass distinction by the IAU Working Group is physically unmotivated for planets with rocky cores, and observationally problematic due to the sin i ambiguity." The NASA Exoplanet Archive includes objects with a mass (or minimum mass) equal to or less than 30 Jupiter masses. Sub-brown dwarf Objects below , called sub-brown dwarfs or planetary-mass brown dwarfs, form in the same manner as stars and brown dwarfs (i.e. through the collapse of a gas cloud) but have a mass below the limiting mass for thermonuclear fusion of deuterium. Some researchers call them free-floating planets, whereas others call them planetary-mass brown dwarfs. Role of other physical properties in the mass estimate While spectroscopic features can help to distinguish between low-mass stars and brown dwarfs, it is often necessary to estimate the mass to come to a conclusion. The theory behind the mass estimate is that brown dwarfs with a similar mass form in a similar way and are hot when they form. Some have spectral types that are similar to low-mass stars, such as 2M1101AB. As they cool down the brown dwarfs should retain a range of luminosities depending on the mass. Without the age and luminosity, a mass estimate is difficult; for example, an L-type brown dwarf could be an old brown dwarf with a high mass (possibly a low-mass star) or a young brown dwarf with a very low mass. 
For Y dwarfs this is less of a problem, as they remain low-mass objects near the sub-brown dwarf limit, even for relatively high age estimates. For L and T dwarfs it is still useful to have an accurate age estimate. The luminosity is here the less concerning property, as this can be estimated from the spectral energy distribution. The age estimate can be done in two ways. Either the brown dwarf is young and still has spectral features that are associated with youth, or the brown dwarf co-moves with a star or stellar group (star cluster or association), where age estimates are easier to obtain. A very young brown dwarf that was further studied with this method is 2M1207 and the companion 2M1207b. Based on the location, proper motion and spectral signature, this object was determined to belong to the ~8-million-year-old TW Hydrae association, and the mass of the secondary was determined to be 8 ± 2 , below the deuterium burning limit. An example of a very old age obtained by the co-movement method is the brown dwarf + white dwarf binary COCONUTS-1, with the white dwarf estimated to be billion years old. In this case the mass was not estimated with the derived age, but the co-movement provided an accurate distance estimate, using Gaia parallax. Using this measurement the authors estimated the radius, which was then used to estimate the mass for the brown dwarf as . Observations Classification of brown dwarfs Spectral class M These are brown dwarfs with a spectral class of M5.5 or later; they are also called late-M dwarfs. Some scientists regard them as red dwarfs. All brown dwarfs with spectral type M are young objects, such as Teide 1, which is the first M-type brown dwarf discovered, and LP 944-20, the closest M-type brown dwarf. Spectral class L The defining characteristic of spectral class M, the coolest type in the long-standing classical stellar sequence, is an optical spectrum dominated by absorption bands of titanium(II) oxide (TiO) and vanadium(II) oxide (VO) molecules. However, GD 165B, the cool companion to the white dwarf GD 165, had none of the hallmark TiO features of M dwarfs. The subsequent identification of many objects like GD 165B ultimately led to the definition of a new spectral class, the L dwarfs, defined in the red optical region of the spectrum not by metal-oxide absorption bands (TiO, VO), but by metal hydride emission bands (FeH, CrH, MgH, CaH) and prominent atomic lines of alkali metals (Na, K, Rb, Cs). , over 900 L dwarfs had been identified, most by wide-field surveys: the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the Sloan Digital Sky Survey (SDSS). This spectral class also contains the coolest main-sequence stars (> 80 MJ), which have spectral classes L2 to L6. Spectral class T As GD 165B is the prototype of the L dwarfs, Gliese 229B is the prototype of a second new spectral class, the T dwarfs. T dwarfs are pinkish-magenta. Whereas near-infrared (NIR) spectra of L dwarfs show strong absorption bands of H2O and carbon monoxide (CO), the NIR spectrum of Gliese 229B is dominated by absorption bands from methane (CH4), a feature which in the Solar System is found only in the giant planets and Titan. CH4, H2O, and molecular hydrogen (H2) collision-induced absorption (CIA) give Gliese 229B blue near-infrared colors. 
Its steeply sloped red optical spectrum also lacks the FeH and CrH bands that characterize L dwarfs and instead is influenced by exceptionally broad absorption features from the alkali metals Na and K. These differences led J. Davy Kirkpatrick to propose the T spectral class for objects exhibiting H- and K-band CH4 absorption. , 355 T dwarfs were known. NIR classification schemes for T dwarfs have recently been developed by Adam Burgasser and Tom Geballe. Theory suggests that L dwarfs are a mixture of very-low-mass stars and sub-stellar objects (brown dwarfs), whereas the T dwarf class is composed entirely of brown dwarfs. Because of the absorption of sodium and potassium in the green part of the spectrum of T dwarfs, the actual appearance of T dwarfs to human visual perception is estimated to be not brown, but magenta. Early observations limited how distant T-dwarfs could be observed. T-class brown dwarfs, such as WISE 0316+4307, have been detected more than 100 light-years from the Sun. Observations with JWST have detected T-dwarfs such as UNCOVER-BD-1 up to 4500 parsec distant from the sun. Spectral class Y In 2009, the coolest-known brown dwarfs had estimated effective temperatures between , and have been assigned the spectral class T9. Three examples are the brown dwarfs CFBDS J005910.90–011401.3, ULAS J133553.45+113005.2 and ULAS J003402.77−005206.7. The spectra of these objects have absorption peaks around 1.55 micrometres. Delorme et al. have suggested that this feature is due to absorption from ammonia and that this should be taken as indicating the T–Y transition, making these objects of type Y0. However, the feature is difficult to distinguish from absorption by water and methane, and other authors have stated that the assignment of class Y0 is premature. The first JWST spectral energy distribution of a Y-dwarf was able to observe several bands of molecules in the atmosphere of the Y0-dwarf WISE 0359−5401. The observations covered spectroscopy from 1 to 12 μm and photometry at 15, 18 and 21 μm. The molecules water (H2O), methane (CH4), carbon monoxide (CO), carbon dioxide (CO2) and ammonia (NH3) were detected in WISE 0359−5401. Many of these features have been observed before in this Y-dwarf and warmer T-dwarfs by other observatories, but JWST was able to observe them in a single spectrum. Methane is the main reservoir of carbon in the atmosphere of WISE 0359−5401, but there is still enough carbon left to form detectable carbon monoxide (at 4.5–5.0 μm) and carbon dioxide (at 4.2–4.35 μm) in the Y-dwarf. Ammonia was difficult to detect before JWST, as it blends in with the absorption feature of water in the near-infrared, as well at 5.5–7.1 μm. At longer wavelengths of 8.5–12 μm the spectrum of WISE 0359−5401 is dominated by the absorption of ammonia. At 3 μm there is an additional newly detected ammonia feature. Role of vertical mixing In the hydrogen-dominated atmosphere of brown dwarfs a chemical equilibrium between carbon monoxide and methane exists. Carbon monoxide reacts with hydrogen molecules and forms methane and hydroxyl in this reaction. The hydroxyl radical might later react with hydrogen and form water molecules. In the other direction of the reaction, methane reacts with hydroxyl and forms carbon monoxide and hydrogen. The chemical reaction is tilted towards carbon monoxide at higher temperatures (L-dwarfs) and lower pressure. At lower temperatures (T-dwarfs) and higher pressure the reaction is tilted towards methane, and methane predominates at the T/Y-boundary. 
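The balance described above is commonly summarized by the net reaction below (a standard simplification that omits the intermediate hydroxyl step; the notation is ours, not the source's):

```latex
\mathrm{CO} \;+\; 3\,\mathrm{H_2} \;\rightleftharpoons\; \mathrm{CH_4} \;+\; \mathrm{H_2O}
```

The forward direction releases heat and reduces the number of gas molecules, so low temperatures and high pressures favour methane while high temperatures and low pressures favour carbon monoxide, consistent with the trend from L-dwarfs to T- and Y-dwarfs described in the text.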
However, vertical mixing of the atmosphere can cause methane to sink into lower layers of the atmosphere and carbon monoxide to rise from these lower and hotter layers. The carbon monoxide is slow to react back into methane because of an energy barrier that prevents the breakdown of the C-O bonds. This forces the observable atmosphere of a brown dwarf to be in a chemical disequilibrium. The L/T transition is mainly defined with the transition from a carbon-monoxide-dominated atmosphere in L-dwarfs to a methane-dominated atmosphere in T-dwarfs. The amount of vertical mixing can therefore push the L/T-transition to lower or higher temperatures. This becomes important for objects with modest surface gravity and extended atmospheres, such as giant exoplanets. This pushes the L/T transition to lower temperatures for giant exoplanets. For brown dwarfs this transition occurs at around 1200 K. The exoplanet HR 8799c, on the other hand, does not show any methane, while having a temperature of 1100K. The transition between T- and Y-dwarfs is often defined as 500 K because of the lack of spectral observations of these cold and faint objects. Future observations with JWST and the ELTs might improve the sample of Y-dwarfs with observed spectra. Y-dwarfs are dominated by deep spectral features of methane, water vapor and possibly absorption features of ammonia and water ice. Vertical mixing, clouds, metallicity, photochemistry, lightning, impact shocks and metallic catalysts might influence the temperature at which the L/T and T/Y transition occurs. Secondary features Young brown dwarfs have low surface gravities because they have larger radii and lower masses than the field stars of similar spectral type. These sources are noted by a letter beta (β) for intermediate surface gravity or gamma (γ) for low surface gravity. Indicators of low surface gravity include weak CaH, K I and Na I lines, as well as a strong VO line. Alpha (α) denotes normal surface gravity and is usually dropped. Sometimes an extremely low surface gravity is denoted by a delta (δ). The suffix "pec" stands for "peculiar"; this suffix is still used for other features that are unusual, and summarizes different properties, indicating low surface gravity, subdwarfs and unresolved binaries. The prefix sd stands for subdwarf and only includes cool subdwarfs. This prefix indicates a low metallicity and kinematic properties that are more similar to halo stars than to disk stars. Subdwarfs appear bluer than disk objects. The red suffix describes objects with red color, but an older age. This is not interpreted as low surface gravity, but as a high dust content. The blue suffix describes objects with blue near-infrared colors that cannot be explained with low metallicity. Some are explained as L+T binaries, others are not binaries, such as 2MASS J11263991−5003550 and are explained with thin and/or large-grained clouds. Spectral and atmospheric properties of brown dwarfs The majority of flux emitted by L and T dwarfs is in the 1- to 2.5-micrometre near-infrared range. Low and decreasing temperatures through the late-M, -L, and -T dwarf sequence result in a rich near-infrared spectrum containing a wide variety of features, from relatively narrow lines of neutral atomic species to broad molecular bands, all of which have different dependencies on temperature, gravity, and metallicity. Furthermore, these low temperature conditions favor condensation out of the gas state and the formation of grains. 
Typical atmospheres of known brown dwarfs range in temperature from 2200 down to . Compared to stars, which warm themselves with steady internal fusion, brown dwarfs cool quickly over time; more massive dwarfs cool more slowly than less massive ones. There is some evidence that the cooling of brown dwarfs slows down at the transition between spectral classes L and T (about 1000 K). Observations of known brown dwarf candidates have revealed a pattern of brightening and dimming of infrared emissions that suggests relatively cool, opaque cloud patterns obscuring a hot interior that is stirred by extreme winds. The weather on such bodies is thought to be extremely violent, comparable to but far exceeding Jupiter's famous storms. On January 8, 2013, astronomers using NASA's Hubble and Spitzer space telescopes probed the stormy atmosphere of a brown dwarf named 2MASS J22282889–4310262, creating the most detailed "weather map" of a brown dwarf thus far. It shows wind-driven, planet-sized clouds. The new research is a stepping stone toward a better understanding not only of brown dwarfs, but also of the atmospheres of planets beyond the Solar System. In April 2020, scientists reported measuring wind speeds of (up to 1,450 miles per hour) on the nearby brown dwarf 2MASS J10475385+2124234. To calculate the measurements, scientists compared the rotational movement of atmospheric features, as ascertained by brightness changes, against the rotation of the radio emission generated by the brown dwarf's interior. The results confirmed previous predictions that brown dwarfs would have high winds. Scientists are hopeful that this comparison method can be used to explore the atmospheric dynamics of other brown dwarfs and extrasolar planets. Observational techniques Coronagraphs have recently been used to detect faint objects orbiting bright visible stars, including Gliese 229B. Sensitive telescopes equipped with charge-coupled devices (CCDs) have been used to search distant star clusters for faint objects, including Teide 1. Wide-field searches have identified individual faint objects, such as Kelu-1 (30 light-years away). Brown dwarfs are often found in surveys designed to discover exoplanets; methods of detecting exoplanets work for brown dwarfs as well, although brown dwarfs are much easier to detect. Brown dwarfs can be powerful emitters of radio emission due to their strong magnetic fields. Observing programs at the Arecibo Observatory and the Very Large Array have detected over a dozen such objects, which are also called ultracool dwarfs because they share common magnetic properties with other objects in this class. The detection of radio emission from brown dwarfs permits their magnetic field strengths to be measured directly. Milestones 1995: First brown dwarf verified. Teide 1, an M8 object in the Pleiades cluster, is picked out with a CCD in the Spanish Observatory of Roque de los Muchachos of the Instituto de Astrofísica de Canarias. First methane brown dwarf verified. Gliese 229B is discovered orbiting red dwarf Gliese 229A (20 ly away) using an adaptive optics coronagraph to sharpen images from the reflecting telescope at Palomar Observatory on Southern California's Mount Palomar; follow-up infrared spectroscopy made with the Hale Telescope shows an abundance of methane. 1998: First X-ray-emitting brown dwarf found. Cha Halpha 1, an M8 object in the Chamaeleon I dark cloud, is determined to be an X-ray source, similar to convective late-type stars. 
15 December 1999: First X-ray flare detected from a brown dwarf. A team at the University of California monitoring LP 944-20 (, 16 ly away) via the Chandra X-ray Observatory catches a 2-hour flare. 27 July 2000: First radio emission (in flare and quiescence) detected from a brown dwarf. A team of students at the Very Large Array detected emission from LP 944–20. 30 April 2004: First detection of a candidate exoplanet around a brown dwarf: 2M1207b discovered with the VLT and the first directly imaged exoplanet. 20 March 2013: Discovery of the closest brown dwarf system: Luhman 16. 25 April 2014: Coldest-known brown dwarf discovered. WISE 0855−0714 is 7.2 light-years away (seventh-closest system to the Sun) and has a temperature between −48 and −13 °C. Brown dwarfs as X-ray sources X-ray flares detected from brown dwarfs since 1999 suggest changing magnetic fields within them, similar to those in very-low-mass stars. Although they do not fuse hydrogen into helium in their cores like stars, energy from the fusion of deuterium and from gravitational contraction keeps their interiors warm and generates strong magnetic fields. The interior of a brown dwarf is in a rapidly boiling, or convective, state. When combined with the rapid rotation that most brown dwarfs exhibit, convection sets up conditions for the development of a strong, tangled magnetic field near the surface. The magnetic field that generated the flare observed by Chandra from LP 944-20 has its origin in the turbulent magnetized plasma beneath the brown dwarf's "surface". Using NASA's Chandra X-ray Observatory, scientists have detected X-rays from a low-mass brown dwarf in a multiple star system. This is the first time that a brown dwarf this close to its parent star(s) (Sun-like stars TWA 5A) has been resolved in X-rays. "Our Chandra data show that the X-rays originate from the brown dwarf's coronal plasma which is some 3 million degrees Celsius", said Yohko Tsuboi of Chuo University in Tokyo. "This brown dwarf is as bright as the Sun today in X-ray light, while it is fifty times less massive than the Sun", said Tsuboi. "This observation, thus, raises the possibility that even massive planets might emit X-rays by themselves during their youth!" Brown dwarfs as radio sources The first brown dwarf that was discovered to emit radio signals was LP 944-20, which was observed since it is also a source of X-ray emission, and both types of emission are signatures of coronae. Approximately 5–10% of brown dwarfs appear to have strong magnetic fields and emit radio waves, and there may be as many as 40 magnetic brown dwarfs within 25 pc of the Sun based on Monte Carlo modeling and their average spatial density. The power of the radio emissions of brown dwarfs is roughly constant despite variations in their temperatures. Brown dwarfs may maintain magnetic fields of up to 6 kG in strength. Astronomers have estimated brown dwarf magnetospheres to span an altitude of approximately 10⁷ m given the properties of their radio emissions. It is unknown whether the radio emissions from brown dwarfs more closely resemble those from planets or stars. Some brown dwarfs emit regular radio pulses, which are sometimes interpreted as radio emission beamed from the poles but may also be beamed from active regions. The regular, periodic reversal of radio wave orientation may indicate that brown dwarf magnetic fields periodically reverse polarity. These reversals may be the result of a brown dwarf magnetic activity cycle, similar to the solar cycle. 
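A hedged sketch of how such radio detections translate into field strengths: electron cyclotron (maser) emission peaks near 2.8 MHz per gauss, so a detection near 4.75 GHz already requires roughly 1.7 kG, and a 6 kG field would radiate near 17 GHz, in line with the figures quoted nearby. The example frequencies below are illustrative choices, not the actual observing bands of the surveys mentioned.

```python
# Electron cyclotron relation used to turn a radio detection into a lower
# limit on the magnetic field: f_ce ≈ 2.8 MHz per gauss.
# The example frequencies are illustrative, not actual survey bands.
MHZ_PER_GAUSS = 2.8

def min_field_kG(observed_GHz):
    """Minimum field (kilogauss) for cyclotron emission at the observed frequency."""
    return observed_GHz * 1e3 / MHZ_PER_GAUSS / 1e3

for f_GHz in (4.75, 8.4, 16.8):
    print(f"emission at {f_GHz:5.2f} GHz implies B >= {min_field_kG(f_GHz):.2f} kG")
```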
The first brown dwarf of spectral class M found to emit radio waves was LP 944-20, detected in 2001. The first brown dwarf of spectral class L found to emit radio waves was 2MASS J0036159+182110, detected in 2008. The first brown dwarf of spectral class T found to emit radio waves was 2MASS J10475385+2124234. This last discovery was significant since it revealed that brown dwarfs with temperatures similar to exoplanets could host strong (>1.7 kG) magnetic fields. Although a sensitive search for radio emission from Y dwarfs was conducted at the Arecibo Observatory in 2010, no emission was detected. Recent developments Estimates of brown dwarf populations in the solar neighbourhood suggest that there may be as many as six stars for every brown dwarf. A more recent estimate from 2017 using the young massive star cluster RCW 38 concluded that the Milky Way galaxy contains between 25 and 100 billion brown dwarfs. (Compare these numbers to the estimated number of stars in the Milky Way: 100 to 400 billion.) In a study published in August 2017, NASA's Spitzer Space Telescope monitored infrared brightness variations in brown dwarfs caused by cloud cover of variable thickness. The observations revealed large-scale waves propagating in the atmospheres of brown dwarfs (similar to those in the atmospheres of Neptune and other Solar System giant planets). These atmospheric waves modulate the thickness of the clouds and propagate with different velocities (probably due to differential rotation). In August 2020, astronomers discovered 95 brown dwarfs near the Sun through the project Backyard Worlds: Planet 9. In 2024, the James Webb Space Telescope provided the most detailed weather report yet on two brown dwarfs, revealing "stormy" conditions. These brown dwarfs, part of a binary system named Luhman 16 discovered in 2013, are only 6.5 light-years away from Earth and are the closest brown dwarfs to the Sun. Researchers discovered that they have turbulent clouds, likely made of silicate grains, with temperatures ranging from to . This indicates that hot sand is being blown by winds on the brown dwarfs. Additionally, absorption signatures of carbon monoxide, methane, and water vapor were detected. Binary brown dwarfs Brown dwarf–brown dwarf binaries Brown dwarf binaries of type M, L, and T become less common the lower the mass of the primary. L-dwarfs have a binary fraction of about % and the binary fraction for late T, early Y-dwarfs (T5-Y0) is about . The companion-to-host mass ratio q tends to be higher for lower-mass binaries. Binaries with an M-type star as a primary, for example, have a broad distribution of q with a preference for q ≥ 0.4, whereas brown dwarfs show a strong preference for q ≥ 0.7. The separation decreases with mass: M-type stars have a separation peaking at 3–30 astronomical units (au), M-L-type brown dwarfs have a projected separation peaking at 5–8 au and T5–Y0 objects have a projected separation that follows a lognormal distribution with a peak separation of about 2.9 au. An example is the closest brown dwarf binary, Luhman 16 AB, with an L7.5 primary, a separation of 3.5 au and q = 0.85. The separation is on the lower end of the expected range for M-L-type brown dwarfs, but the mass ratio is typical. It is not known if the same trend continues with Y-dwarfs, because their sample size is so small. The Y+Y dwarf binaries should have a high mass ratio q and a low separation, reaching scales of less than one au. 
In 2023, the Y+Y dwarf WISE J0336-0143 was confirmed as a binary with JWST, with a mass ratio of q=0.62±0.05 and a separation of 0.97 astronomical units. The researchers point out that the sample size of low-mass binary brown dwarfs is too small to determine if WISE J0336-0143 is a typical representative of low-mass binaries or a peculiar system. Observations of the orbit of binary systems containing brown dwarfs can be used to measure the mass of the brown dwarf. In the case of 2MASSW J0746425+2000321, the secondary has 6% of the Sun's mass. This measurement is called a dynamical mass. The brown dwarf system closest to the Solar System is the binary Luhman 16. A search for planets around this system was attempted with a similar method, but none were found. Unusual brown dwarf binaries The wide binary system 2M1101AB was the first binary with a separation greater than . The discovery of the system gave definitive insights into the formation of brown dwarfs. It was previously thought that wide binary brown dwarfs are not formed or at least are disrupted at ages of 1–10 Myr. The existence of this system is also inconsistent with the ejection hypothesis, which proposed that brown dwarfs form in a multiple system but are ejected before they gain enough mass to burn hydrogen. More recently the wide binary W2150AB was discovered. It has a similar mass ratio and binding energy to 2M1101AB, but a greater age, and is located in a different region of the galaxy. While 2M1101AB is in a closely crowded region, the binary W2150AB is in a sparsely populated field. It must have survived any dynamical interactions in its natal star cluster. The binary is also one of the few L+T binaries that can be easily resolved by ground-based observatories; the other two are SDSS J1416+13AB and Luhman 16. There are other interesting binary systems such as the eclipsing binary brown dwarf system 2MASS J05352184–0546085. Photometric studies of this system have revealed that the less massive brown dwarf in the system is hotter than its higher-mass companion. Brown dwarfs around stars Brown dwarfs and massive planets in a close orbit (less than 5 au) around stars are rare, and this is sometimes described as the brown dwarf desert. Less than 1% of stars with the mass of the Sun have a brown dwarf within 3–5 au. An example of a star–brown dwarf binary is the first discovered T-dwarf, Gliese 229 B, which orbits the main-sequence star Gliese 229 A, a red dwarf. Brown dwarfs orbiting subgiants are also known, such as TOI-1994b, which orbits its star every 4.03 days. There is also disagreement over whether some low-mass brown dwarfs should be considered planets. The NASA Exoplanet Archive counts brown dwarfs with a minimum mass less than or equal to 30 Jupiter masses as planets, as long as other criteria are fulfilled (e.g. orbiting a star). The Working Group on Extrasolar Planets (WGESP) of the IAU, on the other hand, only considers planets with a mass below 13 Jupiter masses. White dwarf–brown dwarf binaries Brown dwarfs around white dwarfs are quite rare. GD 165 B, the prototype of the L dwarfs, is one such system. Such systems can be useful in determining the age of the system and the mass of the brown dwarf. Other white dwarf–brown dwarf binaries are COCONUTS-1 AB (7 billion years old), LSPM J0055+5948 AB (10 billion years old), SDSS J22255+0016 AB (2 billion years old), and WD 0806−661 AB (1.5–2.7 billion years old). 
Systems with close, tidally locked brown dwarfs orbiting around white dwarfs belong to the post common envelope binaries or PCEBs. Only eight confirmed PCEBs containing a white dwarf with a brown dwarf companion are known, including WD 0137-349 AB. In the past history of these close white dwarf–brown dwarf binaries, the brown dwarf is engulfed by the star in the red giant phase. Brown dwarfs with a mass lower than 20 Jupiter masses would evaporate during the engulfment. The dearth of brown dwarfs orbiting close to white dwarfs can be compared with similar observations of brown dwarfs around main-sequence stars, described as the brown-dwarf desert. The PCEB might evolve into a cataclysmic variable star (CV*) with the brown dwarf as the donor. Simulations have shown that highly evolved CV* are mostly associated with substellar donors (up to 80%). A type of CV*, called WZ Sge-type dwarf nova often show donors with a mass near the borderline of low-mass stars and brown dwarfs. The binary BW Sculptoris is such a dwarf nova with a brown dwarf donor. This brown dwarf likely formed when a donor star lost enough mass to become a brown dwarf. The mass loss comes with a loss of the orbital period until it reaches a minimum of 70–80 minutes at which the period increases again. This gives this evolutionary stage the name period bouncer. There could also exist brown dwarfs that merged with white dwarfs. The nova CK Vulpeculae might be a result of such a white dwarf–brown dwarf merger. Formation and evolution The earliest stage of brown dwarf formation is called proto- or pre-brown dwarf. Proto-brown dwarfs are low-mass equivalents of protostars (class 0/I objects). Additionally Very Low Luminosity Objects (VeLLOs) that have Lint ≤0.1-0.2 are often proto-brown dwarfs. They are found in nearby star-forming clouds. Around 67 promising proto-brown dwarfs and 26 pre-brown dwarfs are known as of 2024. As of 2017 there is only one known proto-brown dwarf that is connected with a large Herbig–Haro object. This is the brown dwarf Mayrit 1701117, which is surrounded by a pseudo-disk and a Keplerian disk. Mayrit 1701117 launches the 0.7-light-year-long jet HH 1165, mostly seen in ionized sulfur. Brown dwarfs form similarly to stars and are surrounded by protoplanetary disks, such as Cha 110913−773444. Disks around brown dwarfs have been found to have many of the same features as disks around stars; therefore, it is expected that there will be accretion-formed planets around brown dwarfs. Given the small mass of brown dwarf disks, most planets will be terrestrial planets rather than gas giants. If a giant planet orbits a brown dwarf across our line of sight, then, because they have approximately the same diameter, this would give a large signal for detection by transit. The accretion zone for planets around a brown dwarf is very close to the brown dwarf itself, so tidal forces would have a strong effect. In 2020, the closest brown dwarf with an associated primordial disk (class II disk)—WISEA J120037.79-784508.3 (W1200-7845)—was discovered by the Disk Detective project when classification volunteers noted its infrared excess. It was vetted and analyzed by the science team who found that W1200-7845 had a 99.8% probability of being a member of the ε Chamaeleontis (ε Cha) young moving group association. Its parallax (using Gaia DR2 data) puts it at a distance of 102 parsecs (or 333 lightyears) from Earth—which is within the local Solar neighborhood. 
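As a small numerical aside on the distance quoted above, assuming only the standard conversions (distance in parsecs = 1 / parallax in arcseconds, and 1 pc ≈ 3.2616 light-years); the implied parallax is derived here for illustration and is not a value taken from the text:

```python
# Quick check of the distance figures quoted above for W1200-7845.
LY_PER_PC = 3.2616          # light-years per parsec

distance_pc = 102.0         # distance quoted in the text
distance_ly = distance_pc * LY_PER_PC
implied_parallax_mas = 1000.0 / distance_pc   # parallax such a distance implies

print(f"{distance_pc:.0f} pc ≈ {distance_ly:.0f} light-years")   # ≈ 333 ly, as stated
print(f"implied parallax ≈ {implied_parallax_mas:.1f} mas")
```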
A paper from 2021 studied circumstellar discs around brown dwarfs in stellar associations that are a few million years old and 140 to 200 parsecs away. The researchers found that these disks are not massive enough to form planets in the future. There is evidence in these disks that might indicate that planet formation begins at earlier stages and that planets are already present. The evidence for disk evolution includes a decreasing disk mass over time, dust grain growth and dust settling. Two brown dwarf disks were also found in absorption, and at least four disks are photoevaporating from external UV radiation in the Orion Nebula. Such objects are also called proplyds. Proplyd 181−247, which is a brown dwarf or low-mass star, is surrounded by a disk with a radius of 30 astronomical units, and the disk has a mass of 6.2±1.0 . Disks around brown dwarfs usually have a radius smaller than 40 astronomical units, but three disks in the more distant Taurus molecular cloud have a radius larger than 70 au and were resolved with ALMA. These larger disks are able to form rocky planets with a mass >1 . There are also brown dwarfs with disks in associations older than a few million years, which might be evidence that disks around brown dwarfs need more time to dissipate. Especially old disks (>20 Myr) are sometimes called Peter Pan disks. Currently, 2MASS J02265658-5327032 is the only known brown dwarf that has a Peter Pan disk. The brown dwarf Cha 110913−773444, located 500 light-years away in the constellation Chamaeleon, may be in the process of forming a miniature planetary system. Astronomers from Pennsylvania State University have detected what they believe to be a disk of gas and dust similar to the one hypothesized to have formed the Solar System. Cha 110913−773444 is the smallest brown dwarf found to date (), and if it formed a planetary system, it would be the smallest-known object to have one. Planets around brown dwarfs According to the IAU working definition (from August 2018) an exoplanet can orbit a brown dwarf. It requires a mass below 13 and a mass ratio of M/Mcentral < 2/(25 + √621). This means that an object with a mass up to 3.2 around a brown dwarf with a mass of 80 is considered a planet. It also means that an object with a mass up to 0.52 around a brown dwarf with a mass of 13 is considered a planet. The super-Jupiter planetary-mass objects 2M1207b, 2MASS J044144 and Oph 98 B that are orbiting brown dwarfs at large orbital distances may have formed by cloud collapse rather than accretion and so may be sub-brown dwarfs rather than planets, a conclusion inferred from their relatively large masses and large orbits. The first discovery of a low-mass companion orbiting a brown dwarf (ChaHα8) at a small orbital distance using the radial velocity technique paved the way for the detection of planets around brown dwarfs on orbits of a few AU or smaller. However, with a mass ratio between the companion and primary in ChaHα8 of about 0.3, this system more closely resembles a binary star. Then, in 2008, the first planetary-mass companion in a relatively small orbit (MOA-2007-BLG-192Lb) was discovered orbiting a brown dwarf. Planets around brown dwarfs are likely to be carbon planets depleted of water. A 2017 study, based upon observations with Spitzer, estimates that 175 brown dwarfs need to be monitored in order to guarantee (at 95% confidence) at least one detection of a planet smaller than Earth via the transit method. JWST could potentially detect smaller planets. 
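The mass-ratio criterion quoted above can be checked with a few lines of arithmetic. This is only a numerical illustration of the formula as stated in the text; masses are in Jupiter masses and the host values of 80 and 13 are the text's own examples:

```python
import math

# IAU working-definition mass-ratio criterion: a companion counts as a
# planet if M / M_central < 2 / (25 + sqrt(621)).
q_max = 2 / (25 + math.sqrt(621))
print(f"maximum planet-to-host mass ratio ≈ {q_max:.4f}")

for m_central in (80.0, 13.0):          # host masses in Jupiter masses
    print(f"host of {m_central:.0f} M_Jup -> planet limit ≈ {q_max * m_central:.2f} M_Jup")
```

The limits come out at about 3.2 and 0.52 Jupiter masses, matching the figures given above.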
The orbits of planets and moons in the solar system often align with the orientation of the host star/planet they orbit. Assuming the orbit of a planet is aligned with the rotational axis of a brown dwarf or planetary-mass object, the geometric transit probability of an object similar to Io can be calculated with the formula cos(79.5°)/cos(inclination). The inclination was estimated for several brown dwarfs and planetary-mass objects. SIMP 0136 for example has an estimated inclination of 80°±12. Assuming the lower bound of i≥68° for SIMP 0136, this results in a transit probability of ≥48.6% for close-in planets. It is however not known how common close-in planets are around brown dwarfs and they might be more common for lower-mass objects, as disk sizes seem to decrease with mass. Habitability Habitability for hypothetical planets orbiting brown dwarfs has been studied. Computer models suggesting conditions for these bodies to have habitable planets are very stringent, the habitable zone being narrow, close (T dwarf 0.005 au) and decreasing with time, due to the cooling of the brown dwarf (they fuse for at most 10 million years). The orbits there would have to be of extremely low eccentricity (on the order of 10 to the minus 6) to avoid strong tidal forces that would trigger a runaway greenhouse effect on the planets, rendering them uninhabitable. There would also be no moons. Superlative brown dwarfs In 1984, it was postulated by some astronomers that the Sun may be orbited by an undetected brown dwarf (sometimes referred to as Nemesis) that could interact with the Oort cloud just as passing stars can. However, this hypothesis has fallen out of favor. Table of firsts Table of extremes See also Fusor (astronomy) Stellification WD 0032-317 b List of brown dwarfs List of Y-dwarfs Footnotes References External links HubbleSite newscenter – Weather patterns on a brown dwarf History Kumar, Shiv S.; Low-Luminosity Stars. Gordon and Breach, London, 1969—an early overview paper on brown dwarfs The Columbia Encyclopedia: "Brown Dwarfs" Details A current list of L and T dwarfs A geological definition of brown dwarfs, contrasted with stars and planets (via Berkeley) I. Neill Reid's pages at the Space Telescope Science Institute: On spectral analysis of M dwarfs, L dwarfs, and T dwarfs Temperature and mass characteristics of low-temperature dwarfs First X-ray from brown dwarf observed, Spaceref.com, 2000 Montes, David; "Brown Dwarfs and ultracool dwarfs (late-M, L, T)", UCM Wild Weather: Iron Rain on Failed Stars—scientists are investigating astonishing weather patterns on brown dwarfs, Space.com, 2006 NASA Brown dwarf detectives —Detailed information in a simplified sense Brown Dwarfs—Website with general information about brown dwarfs (has many detailed and colorful artist's impressions) Stars Cha Halpha 1 stats and history "A census of observed brown dwarfs" (not all confirmed), 1998 Michaud, Peter; Heyer, Inge; Leggett, Sandy K.; and Adamson, Andy; "Discovery Narrows the Gap Between Planets and Brown Dwarfs", Gemini and Joint Astronomy Centre, 2007 Definition of planet Star types Stellar phenomena Substellar objects Types of planet
Brown dwarf
[ "Physics", "Astronomy" ]
12,088
[ "Definition of planet", "Physical phenomena", "Astronomical controversies", "Astronomical classification systems", "Substellar objects", "Stellar phenomena", "Astronomical objects", "Star types" ]
44,495
https://en.wikipedia.org/wiki/Linear%20motor
A linear motor is an electric motor that has had its stator and rotor "unrolled"; thus, instead of producing a torque (rotation), it produces a linear force along its length. However, linear motors are not necessarily straight. Characteristically, a linear motor's active section has ends, whereas more conventional motors are arranged as a continuous loop. A typical mode of operation is as a Lorentz-type actuator, in which the applied force is linearly proportional to the current and the magnetic field. Linear motors are most commonly found in high-accuracy engineering applications. Many designs have been put forward for linear motors, falling into two major categories: low-acceleration and high-acceleration linear motors. Low-acceleration linear motors are suitable for maglev trains and other ground-based transportation applications. High-acceleration linear motors are normally rather short, and are designed to accelerate an object to a very high speed; for example, see the coilgun. High-acceleration linear motors are typically used in studies of hypervelocity collisions, as weapons, or as mass drivers for spacecraft propulsion. They are usually of the AC linear induction motor (LIM) design, with an active three-phase winding on one side of the air-gap and a passive conductor plate on the other side. However, the direct current homopolar linear motor railgun is another high-acceleration linear motor design. The low-acceleration, high-speed and high-power motors are usually of the linear synchronous motor (LSM) design, with an active winding on one side of the air-gap and an array of alternate-pole magnets on the other side. These magnets can be permanent magnets or electromagnets. The motor for the Shanghai maglev train, for instance, is an LSM. Types Brushless Brushless linear motors are members of the synchronous motor family. They are typically used in standard linear stages or integrated into custom, high-performance positioning systems. They were invented in the late 1980s by Anwar Chitayat at Anorad Corporation, now Rockwell Automation, and helped improve the throughput and quality of industrial manufacturing processes. Brush Brushed linear motors were used in industrial automation applications prior to the invention of brushless linear motors. Compared with the three-phase brushless motors typically used today, brush motors operate on a single phase. Brush linear motors have a lower cost since they do not need moving cables or three-phase servo drives. However, they require higher maintenance since their brushes wear out. Synchronous In this design, the rate of movement of the magnetic field is controlled, usually electronically, to track the motion of the rotor. For cost reasons synchronous linear motors rarely use commutators, so the rotor often contains permanent magnets, or soft iron. Examples include coilguns and the motors used on some maglev systems, as well as many other linear motors. In high-precision industrial automation, linear motors are typically configured with a magnet stator and a moving coil. A Hall effect sensor is attached to the rotor to track the magnetic flux of the stator. The electric current is typically provided from a stationary servo drive to the moving coil by a moving cable inside a cable carrier. Induction In this design, the force is produced by a moving linear magnetic field acting on conductors in the field. 
Any conductor, be it a loop, a coil or simply a piece of plate metal, that is placed in this field will have eddy currents induced in it, thus creating an opposing magnetic field, in accordance with Lenz's law. The two opposing fields will repel each other, thus creating motion as the magnetic field sweeps through the metal. Homopolar In this design, a large current is passed through a metal sabot across sliding contacts that are fed by two rails. The magnetic field this generates causes the metal to be projected along the rails. Tubular An efficient and compact design applicable to the replacement of pneumatic cylinders. Piezoelectric Piezoelectric drive is often used to drive small linear motors. History Low acceleration The history of linear electric motors can be traced back at least as far as the 1840s, to the work of Charles Wheatstone at King's College London, but Wheatstone's model was too inefficient to be practical. A feasible linear induction motor for driving trains or lifts is described in a 1905 patent by the inventor Alfred Zehden of Frankfurt-am-Main. The German engineer Hermann Kemper built a working model in 1935. In the late 1940s, Dr. Eric Laithwaite of Manchester University, later Professor of Heavy Electrical Engineering at Imperial College in London, developed the first full-size working model. In a single-sided version, the magnetic repulsion forces the conductor away from the stator, levitating it, and carrying it along in the direction of the moving magnetic field. He called the later versions of it the magnetic river. The technology would later be applied in 1984 to the Air-Rail Link shuttle between Birmingham's airport and an adjacent train station. Because of these properties, linear motors are often used in maglev propulsion, as in the Japanese Linimo magnetic levitation train line near Nagoya. However, linear motors have been used independently of magnetic levitation, as in the Bombardier Innovia Metro systems worldwide and a number of modern Japanese subways, including Tokyo's Toei Ōedo Line. Similar technology is also used in some roller coasters with modifications but, at present, is still impractical on street-running trams, although this, in theory, could be done by burying it in a slotted conduit. Outside of public transportation, vertical linear motors have been proposed as lifting mechanisms in deep mines, and the use of linear motors is growing in motion control applications. They are also often used on sliding doors, such as those of low-floor trams such as the Alstom Citadis and the Socimi Eurotram. Dual-axis linear motors also exist. These specialized devices have been used to provide direct X-Y motion for precision laser cutting of cloth and sheet metal, automated drafting, and cable forming. Most linear motors in use are LIMs (linear induction motors) or LSMs (linear synchronous motors). Linear DC motors are not used due to their higher cost, and the linear SRM suffers from poor thrust. So for long runs in traction the LIM is mostly preferred, and for short runs the LSM is mostly preferred. High acceleration High-acceleration linear motors have been suggested for a number of uses. They have been considered for use as weapons, since current armour-piercing ammunition tends to consist of small rounds with very high kinetic energy, for which just such motors are suitable. Many launched roller coasters in amusement parks now use linear induction motors to propel the train at a high speed, as an alternative to using a lift hill. 
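As a rough illustration of what such a launch demands, the back-of-the-envelope sketch below uses assumed numbers chosen only to show the arithmetic, not figures for any particular coaster:

```python
# Back-of-the-envelope mechanics for an LIM coaster launch.
# All inputs are illustrative assumptions, not data for a real installation.
train_mass_kg = 10_000.0    # loaded train (assumed)
launch_speed_ms = 30.0      # ~108 km/h at the end of the launch (assumed)
launch_length_m = 100.0     # length of the powered track section (assumed)

kinetic_energy_J = 0.5 * train_mass_kg * launch_speed_ms**2
avg_force_N = kinetic_energy_J / launch_length_m    # work-energy theorem
peak_power_W = avg_force_N * launch_speed_ms        # thrust x speed at the end of the launch

print(f"kinetic energy ≈ {kinetic_energy_J / 1e6:.1f} MJ")
print(f"average thrust ≈ {avg_force_N / 1e3:.0f} kN")
print(f"peak power ≈ {peak_power_W / 1e6:.2f} MW (ignoring losses)")
```

Even this modest example implies megawatt-scale peak electrical power, which is one reason launched coasters typically energize a row of short stator segments in sequence rather than a single long winding.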
The United States Navy is also using linear induction motors in the Electromagnetic Aircraft Launch System that will replace traditional steam catapults on future aircraft carriers. They have also been suggested for use in spacecraft propulsion. In this context they are usually called mass drivers. The simplest way to use mass drivers for spacecraft propulsion would be to build a large mass driver that can accelerate cargo up to escape velocity, though RLV launch assist like StarTram to low Earth orbit has also been investigated. High-acceleration linear motors are difficult to design for a number of reasons. They require large amounts of energy in very short periods of time. One rocket launcher design calls for 300 GJ for each launch in the space of less than a second. Normal electrical generators are not designed for this kind of load, but short-term electrical energy storage methods can be used. Capacitors are bulky and expensive but can supply large amounts of energy quickly. Homopolar generators can be used to convert the kinetic energy of a flywheel into electric energy very rapidly. High-acceleration linear motors also require very strong magnetic fields; in fact, the magnetic fields are often too strong to permit the use of superconductors. However, with careful design, this need not be a major problem. Two different basic designs have been invented for high-acceleration linear motors: railguns and coilguns. Usage Linear motors are commonly used for actuating high performance industrial automation equipment. Their advantage, unlike any other commonly used actuator, such as a ball screw, timing belt, or rack and pinion, is that they provide any combination of high precision, high velocity, high force and long travel. Linear motors are widely used. One of the major uses of linear motors is for propelling the shuttle in looms. A linear motor has been used for sliding doors and various similar actuators. They have been used for baggage handling and even large-scale bulk materials transport. Linear motors are sometimes used to create rotary motion. For example, they have been used at observatories to deal with the large radius of curvature. Linear motors may also be used as an alternative to conventional chain-run lift hills for roller coasters. The coaster Maverick at Cedar Point uses one such linear motor in place of a chain lift. A linear motor has been used to accelerate cars for crash tests. Industrial automation The combination of high precision, high velocity, high force, and long travel makes brushless linear motors attractive for driving industrial automations equipment. They serve industries and applications such as semiconductor steppers, electronics surface-mount technology, automotive cartesian coordinate robots, aerospace chemical milling, optics electron microscope, healthcare laboratory automation, food and beverage pick and place. Machine tools Synchronous linear motor actuators, used in machine tools, provide high force, high velocity, high precision and high dynamic stiffness, resulting in high smoothness of motion and low settling time. They may reach velocities of 2 m/s and micron-level accuracies, with short cycle times and a smooth surface finish. Train propulsion Conventional rails All of the following applications are in rapid transit and have the active part of the motor in the cars. Bombardier Innovia Metro Originally developed in the late 1970s by UTDC in Canada as the Intermediate Capacity Transit System (ICTS). 
A test track was constructed in Millhaven, Ontario, for extensive testing of prototype cars, after which three lines were constructed: Line 3 Scarborough in Toronto (opened 1985; closed 2023), the Expo Line of the Vancouver SkyTrain (opened 1985 and extended in 1994), and the Detroit People Mover in Detroit (opened 1987). ICTS was sold to Bombardier Transportation in 1991 and was later known as Advanced Rapid Transit (ART) before adopting its current branding in 2011. Since then, several more installations have been made: the Kelana Jaya Line in Kuala Lumpur (opened 1998 and extended in 2016), the Millennium Line of the Vancouver SkyTrain (opened 2002 and extended in 2016), AirTrain JFK in New York (opened 2003), the Airport Express of the Beijing Subway (opened 2008), and the Everline in Yongin, South Korea (opened 2013). All Innovia Metro systems use third rail electrification. Japanese Linear Metro One of the biggest challenges faced by Japanese railway engineers from the 1970s to the 1980s was the ever-increasing construction cost of subways. In response, the Japan Subway Association began studying the feasibility of the "mini-metro" for meeting urban traffic demand in 1979. In 1981, the Japan Railway Engineering Association studied the use of linear induction motors for such small-profile subways, and by 1984 was investigating the practical applications of linear motors for urban rail with the Japanese Ministry of Land, Infrastructure, Transport and Tourism. In 1988, a successful demonstration was made with the Limtrain at Saitama, which influenced the eventual adoption of the linear motor for the Nagahori Tsurumi-ryokuchi Line in Osaka and Toei Line 12 (present-day Toei Oedo Line) in Tokyo. To date, the following subway lines in Japan use linear motors and overhead lines for power collection: two Osaka Metro lines in Osaka, the Nagahori Tsurumi-ryokuchi Line (opened 1990) and the Imazatosuji Line (opened 2006); the Toei Ōedo Line in Tokyo (opened 2000); the Kaigan Line of the Kobe Municipal Subway (opened 2001); the Nanakuma Line of the Fukuoka City Subway (opened 2005); the Yokohama Municipal Subway Green Line (opened 2008); and the Sendai Subway Tōzai Line (opened 2015). In addition, Kawasaki Heavy Industries has also exported the Linear Metro to the Guangzhou Metro in China; all of the Linear Metro lines in Guangzhou use third rail electrification: Line 4 (opened 2005), Line 5 (opened 2009) and Line 6 (opened 2013). Monorail There is at least one known monorail system which is not magnetically levitated, but nonetheless uses linear motors. This is the Moscow Monorail. Originally, traditional motors and wheels were to be used. However, it was discovered during test runs that the proposed motors and wheels would fail to provide adequate traction under some conditions, for example, when ice appeared on the rail. Hence, wheels are still used, but the trains use linear motors to accelerate and slow down. This is possibly the only use of such a combination, due to the lack of such requirements for other train systems. The TELMAGV is a prototype of a monorail system that is also not magnetically levitated but uses linear motors. 
Magnetic levitation High-speed trains: Transrapid, with its first commercial use in Shanghai (opened in 2004), and SCMaglev, under construction in Japan (the fastest train in the world, planned to open by 2027). Rapid transit: Birmingham Airport, UK (opened 1984, closed 1995), the M-Bahn in Berlin, Germany (opened in 1989, closed in 1991), and the Daejeon EXPO maglev, Korea (ran only in 1993). HSST: the Linimo line in Aichi Prefecture, Japan (opened 2005), the Incheon Airport Maglev (opened July 2014), the Changsha Maglev Express (opened 2016) and the S1 line of the Beijing Subway (opened 2017). Amusement rides There are many roller coasters throughout the world that use LIMs to accelerate the ride vehicles. The first was Flight of Fear at Kings Island and Kings Dominion, both opening in 1996. Battlestar Galactica: Human VS Cylon and Revenge of the Mummy at Universal Studios Singapore opened in 2010; both use LIMs to accelerate the vehicles at certain points in the rides. Revenge of the Mummy is also located at Universal Studios Hollywood and Universal Studios Florida. The Incredible Hulk Coaster and VelociCoaster at Universal Islands of Adventure also use linear motors. At Walt Disney World, Rock 'n' Roller Coaster Starring Aerosmith at Disney's Hollywood Studios and Guardians of the Galaxy: Cosmic Rewind at Epcot both use LSMs to launch their ride vehicles into their indoor ride enclosures. In 2023, the hydraulically launched roller coaster Top Thrill Dragster at Cedar Point in Ohio, USA, was renovated and its hydraulic launch replaced with a weaker LSM-based multi-launch system that produces less g-force. Aircraft launching Electromagnetic Aircraft Launch System Proposed and research Launch loop – A proposed system for launching vehicles into space using a linear-motor-powered loop StarTram – A concept for a linear motor on an extreme scale Tether cable catapult system Aérotrain S44 – A suburban commuter hovertrain prototype Research Test Vehicle 31 – A hovercraft-type vehicle guided by a track Hyperloop – a conceptual high-speed transportation system put forward by entrepreneur Elon Musk Elevator Lift Magway – a UK freight delivery system under research and development that aims to deliver goods in pods via 90 cm diameter pipework under and over ground. See also Linear actuator Linear induction motor Linear motion Maglev Online Electric Vehicle Reciprocating electric motor Sawyer motor Tubular linear motor References External links Design equations, spreadsheet, and drawings Motor torque calculation Overview of Electromagnetic Guns Electric motors English inventions Linear motion
Linear motor
[ "Physics", "Technology", "Engineering" ]
3,216
[ "Physical phenomena", "Engines", "Electric motors", "Motion (physics)", "Electrical engineering", "Linear motion" ]
44,708
https://en.wikipedia.org/wiki/Ferroelectricity
In physics and materials science, ferroelectricity is a characteristic of certain materials that have a spontaneous electric polarization that can be reversed by the application of an external electric field. All ferroelectrics are also piezoelectric and pyroelectric, with the additional property that their natural electrical polarization is reversible. The term is used in analogy to ferromagnetism, in which a material exhibits a permanent magnetic moment. Ferromagnetism was already known when ferroelectricity was discovered in 1920 in Rochelle salt by American physicist Joseph Valasek. Thus, the prefix ferro, meaning iron, was used to describe the property despite the fact that most ferroelectric materials do not contain iron. Materials that are both ferroelectric and ferromagnetic are known as multiferroics. Polarization When most materials are electrically polarized, the polarization induced, P, is almost exactly proportional to the applied external electric field E; so the polarization is a linear function. This is called linear dielectric polarization (see figure). Some materials, known as paraelectric materials, show a more enhanced nonlinear polarization (see figure). The electric permittivity, corresponding to the slope of the polarization curve, is not constant as in linear dielectrics but is a function of the external electric field. In addition to being nonlinear, ferroelectric materials demonstrate a spontaneous nonzero polarization (after entrainment, see figure) even when the applied field E is zero. The distinguishing feature of ferroelectrics is that the spontaneous polarization can be reversed by a suitably strong applied electric field in the opposite direction; the polarization is therefore dependent not only on the current electric field but also on its history, yielding a hysteresis loop. They are called ferroelectrics by analogy to ferromagnetic materials, which have spontaneous magnetization and exhibit similar hysteresis loops. Typically, materials demonstrate ferroelectricity only below a certain phase transition temperature, called the Curie temperature (TC) and are paraelectric above this temperature: the spontaneous polarization vanishes, and the ferroelectric crystal transforms into the paraelectric state. Many ferroelectrics lose their pyroelectric properties above TC completely, because their paraelectric phase has a centrosymmetric crystal structure. Applications The nonlinear nature of ferroelectric materials can be used to make capacitors with adjustable capacitance. Typically, a ferroelectric capacitor simply consists of a pair of electrodes sandwiching a layer of ferroelectric material. The permittivity of ferroelectrics is not only adjustable but commonly also very high, especially when close to the phase transition temperature. Because of this, ferroelectric capacitors are small in physical size compared to dielectric (non-tunable) capacitors of similar capacitance. The spontaneous polarization of ferroelectric materials implies a hysteresis effect which can be used as a memory function, and ferroelectric capacitors are indeed used to make ferroelectric RAM for computers and RFID cards. In these applications thin films of ferroelectric materials are typically used, as this allows the field required to switch the polarization to be achieved with a moderate voltage. However, when using thin films a great deal of attention needs to be paid to the interfaces, electrodes and sample quality for devices to work reliably. 
Ferroelectric materials are required by symmetry considerations to be also piezoelectric and pyroelectric. The combined properties of memory, piezoelectricity, and pyroelectricity make ferroelectric capacitors very useful, e.g. for sensor applications. Ferroelectric capacitors are used in medical ultrasound machines (the capacitors generate and then listen for the ultrasound ping used to image the internal organs of a body), high quality infrared cameras (the infrared image is projected onto a two dimensional array of ferroelectric capacitors capable of detecting temperature differences as small as millionths of a degree Celsius), fire sensors, sonar, vibration sensors, and even fuel injectors on diesel engines. Another idea of recent interest is the ferroelectric tunnel junction (FTJ) in which a contact is made up by nanometer-thick ferroelectric film placed between metal electrodes. The thickness of the ferroelectric layer is small enough to allow tunneling of electrons. The piezoelectric and interface effects as well as the depolarization field may lead to a giant electroresistance (GER) switching effect. Yet another burgeoning application is multiferroics, where researchers are looking for ways to couple magnetic and ferroelectric ordering within a material or heterostructure; there are several recent reviews on this topic. Catalytic properties of ferroelectrics have been studied since 1952 when Parravano observed anomalies in CO oxidation rates over ferroelectric sodium and potassium niobates near the Curie temperature of these materials. Surface-perpendicular component of the ferroelectric polarization can dope polarization-dependent charges on surfaces of ferroelectric materials, changing their chemistry. This opens the possibility of performing catalysis beyond the limits of the Sabatier principle. Sabatier principle states that the surface-adsorbates interaction has to be an optimal amount: not too weak to be inert toward the reactants and not too strong to poison the surface and avoid desorption of the products: a compromise situation. This set of optimum interactions is usually referred to as "top of the volcano" in activity volcano plots. On the other hand, ferroelectric polarization-dependent chemistry can offer the possibility of switching the surface—adsorbates interaction from strong adsorption to strong desorption, thus a compromise between desorption and adsorption is no longer needed. Ferroelectric polarization can also act as an energy harvester. Polarization can help the separation of photo-generated electron-hole pairs, leading to enhanced photocatalysis. Also, due to pyroelectric and piezoelectric effects under varying temperature (heating/cooling cycles) or varying strain (vibrations) conditions extra charges can appear on the surface and drive various (electro)chemical reactions forward. Photoferroelectric imaging is a technique to record optical information on pieces of ferroelectric material. The images are nonvolatile and selectively erasable. Materials The internal electric dipoles of a ferroelectric material are coupled to the material lattice so anything that changes the lattice will change the strength of the dipoles (in other words, a change in the spontaneous polarization). The change in the spontaneous polarization results in a change in the surface charge. This can cause current flow in the case of a ferroelectric capacitor even without the presence of an external voltage across the capacitor. 
Two stimuli that will change the lattice dimensions of a material are force and temperature. The generation of a surface charge in response to the application of an external stress to a material is called piezoelectricity. A change in the spontaneous polarization of a material in response to a change in temperature is called pyroelectricity. Generally, there are 230 space groups among which 32 crystalline classes can be found in crystals. There are 21 non-centrosymmetric classes, within which 20 are piezoelectric. Among the piezoelectric classes, 10 have a spontaneous electric polarization which varies with temperature; thus they are pyroelectric. Ferroelectricity is a subset of pyroelectricity, which brings spontaneous electronic polarization to the material. Ferroelectric phase transitions are often characterized as either displacive (such as BaTiO3) or order-disorder (such as NaNO2), though often phase transitions will demonstrate elements of both behaviors. In barium titanate, a typical ferroelectric of the displacive type, the transition can be understood in terms of a polarization catastrophe, in which, if an ion is displaced from equilibrium slightly, the force from the local electric fields due to the ions in the crystal increases faster than the elastic-restoring forces. This leads to an asymmetrical shift in the equilibrium ion positions and hence to a permanent dipole moment. The ionic displacement in barium titanate concerns the relative position of the titanium ion within the oxygen octahedral cage. In lead titanate, another key ferroelectric material, although the structure is rather similar to barium titanate the driving force for ferroelectricity is more complex with interactions between the lead and oxygen ions also playing an important role. In an order-disorder ferroelectric, there is a dipole moment in each unit cell, but at high temperatures they are pointing in random directions. Upon lowering the temperature and going through the phase transition, the dipoles order, all pointing in the same direction within a domain. An important ferroelectric material for applications is lead zirconate titanate (PZT), which is part of the solid solution formed between ferroelectric lead titanate and anti-ferroelectric lead zirconate. Different compositions are used for different applications; for memory applications, PZT closer in composition to lead titanate is preferred, whereas piezoelectric applications make use of the diverging piezoelectric coefficients associated with the morphotropic phase boundary that is found close to the 50/50 composition. Ferroelectric crystals often show several transition temperatures and domain structure hysteresis, much as do ferromagnetic crystals. The nature of the phase transition in some ferroelectric crystals is still not well understood. In 1974 R.B. Meyer used symmetry arguments to predict ferroelectric liquid crystals, and the prediction could immediately be verified by several observations of behavior connected to ferroelectricity in smectic liquid-crystal phases that are chiral and tilted. The technology allows the building of flat-screen monitors. Mass production between 1994 and 1999 was carried out by Canon. Ferroelectric liquid crystals are used in production of reflective LCoS. In 2010 David Field found that prosaic films of chemicals such as nitrous oxide or propane exhibited ferroelectric properties. 
This new class of ferroelectric materials exhibits "spontelectric" properties, and may have wide-ranging applications in device and nano-technology and also influence the electrical nature of dust in the interstellar medium. Other ferroelectric materials used include triglycine sulfate, polyvinylidene fluoride (PVDF) and lithium tantalate. A single-atom-thick ferroelectric monolayer can be created using pure bismuth. It should be possible to produce materials which combine both ferroelectric and metallic properties simultaneously, at room temperature. According to research published in 2018 in Nature Communications, scientists were able to produce a two-dimensional sheet of material which was both ferroelectric (had a polar crystal structure) and which conducted electricity.

Theory
Based on Ginzburg–Landau theory, the free energy of a ferroelectric material, in the absence of an electric field and applied stress, may be written as a Taylor expansion in terms of the order parameter, the polarization $\mathbf{P}$. If a sixth order expansion is used (i.e. 8th order and higher terms truncated), the free energy is given by:
$\Delta G = \frac{1}{2}\alpha_0\left(T-T_0\right)\left(P_x^2+P_y^2+P_z^2\right) + \frac{1}{4}\alpha_{11}\left(P_x^4+P_y^4+P_z^4\right) + \frac{1}{2}\alpha_{12}\left(P_x^2P_y^2+P_y^2P_z^2+P_z^2P_x^2\right) + \frac{1}{6}\alpha_{111}\left(P_x^6+P_y^6+P_z^6\right) + \frac{1}{2}\alpha_{112}\left[P_x^4\left(P_y^2+P_z^2\right)+P_y^4\left(P_z^2+P_x^2\right)+P_z^4\left(P_x^2+P_y^2\right)\right] + \frac{1}{2}\alpha_{123}\,P_x^2P_y^2P_z^2$
where $P_x$, $P_y$ and $P_z$ are the components of the polarization vector in the $x$, $y$ and $z$ directions respectively, and the coefficients $\alpha_0$, $\alpha_{ij}$, $\alpha_{ijk}$ must be consistent with the crystal symmetry. To investigate domain formation and other phenomena in ferroelectrics, these equations are often used in the context of a phase field model. Typically, this involves adding a gradient term, an electrostatic term and an elastic term to the free energy. The equations are then discretized onto a grid using the finite difference method or finite element method and solved subject to the constraints of Gauss's law and linear elasticity. In all known ferroelectrics, $\alpha_0 > 0$ and $\alpha_{111} > 0$. These coefficients may be obtained experimentally or from ab-initio simulations. For ferroelectrics with a first order phase transition, $\alpha_{11} < 0$, whereas $\alpha_{11} > 0$ for a second order phase transition. The spontaneous polarization, $P_s$, of a ferroelectric for a cubic to tetragonal phase transition may be obtained by considering the 1D expression of the free energy, which is:
$\Delta G = \frac{1}{2}\alpha_0\left(T-T_0\right)P_x^2 + \frac{1}{4}\alpha_{11}P_x^4 + \frac{1}{6}\alpha_{111}P_x^6$
This free energy has the shape of a double well potential with two free energy minima at $P_x = \pm P_s$, the spontaneous polarization. We find the derivative of the free energy, and set it equal to zero in order to solve for $P_s$:
$\frac{\partial \Delta G}{\partial P_x} = \alpha_0\left(T-T_0\right)P_x + \alpha_{11}P_x^3 + \alpha_{111}P_x^5 = 0$
$P_x\left[\alpha_0\left(T-T_0\right) + \alpha_{11}P_x^2 + \alpha_{111}P_x^4\right] = 0$
Since the $P_x = 0$ solution of this equation corresponds to a free energy maximum in the ferroelectric phase, the desired solutions for $P_s$ correspond to setting the remaining factor to zero:
$\alpha_0\left(T-T_0\right) + \alpha_{11}P_x^2 + \alpha_{111}P_x^4 = 0$
whose solution is:
$P_x^2 = \frac{-\alpha_{11} \pm \sqrt{\alpha_{11}^2 - 4\alpha_0\alpha_{111}\left(T-T_0\right)}}{2\alpha_{111}}$
and eliminating solutions which take the square root of a negative number (for either the first or second order phase transitions) gives:
$P_s = \sqrt{\frac{-\alpha_{11} + \sqrt{\alpha_{11}^2 - 4\alpha_0\alpha_{111}\left(T-T_0\right)}}{2\alpha_{111}}}$
In the limit $\alpha_{111} \to 0$, the solution for the spontaneous polarization reduces to:
$P_s = \sqrt{\frac{-\alpha_0\left(T-T_0\right)}{\alpha_{11}}}$
The hysteresis loop ($P_x$ versus $E_x$) may be obtained from the free energy expansion by including the term $-E_x P_x$ corresponding to the energy due to an external electric field $E_x$ interacting with the polarization, as follows:
$\Delta G = \frac{1}{2}\alpha_0\left(T-T_0\right)P_x^2 + \frac{1}{4}\alpha_{11}P_x^4 + \frac{1}{6}\alpha_{111}P_x^6 - E_x P_x$
We find the stable polarization values of $P_x$ under the influence of the external field $E_x$ again by setting the derivative of the energy with respect to $P_x$ to zero:
$\frac{\partial \Delta G}{\partial P_x} = \alpha_0\left(T-T_0\right)P_x + \alpha_{11}P_x^3 + \alpha_{111}P_x^5 - E_x = 0$
so that
$E_x = \alpha_0\left(T-T_0\right)P_x + \alpha_{11}P_x^3 + \alpha_{111}P_x^5$
Plotting $E_x$ (on the X axis) as a function of $P_x$ (with $P_x$ on the Y axis) gives an S-shaped curve which is multi-valued in $P_x$ for some values of $E_x$. The central part of the 'S' corresponds to a free energy local maximum (since $\partial^2 \Delta G / \partial P_x^2 < 0$).
Elimination of this region, and connection of the top and bottom portions of the 'S' curve by vertical lines at the discontinuities, gives the hysteresis loop of internal polarization due to an external electric field.

Sliding ferroelectricity
Sliding ferroelectricity occurs widely, but only in two-dimensional (2D) van der Waals stacked layers. The vertical electric polarization is switched by in-plane interlayer sliding.
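The free-energy treatment above lends itself to a quick numerical check. The following Python sketch is illustrative only: the coefficients a0, a11 and a111 are arbitrary values chosen to produce a double-well potential below the transition temperature, not constants fitted to any real material. It locates the spontaneous polarization by brute-force minimization of the one-dimensional free energy, compares the result with the closed-form expression derived above, and evaluates the field–polarization relation whose multi-valued inverse is the S-shaped curve behind the hysteresis loop.

```python
import numpy as np

# Illustrative 1D Landau free energy; coefficients are arbitrary, not material data.
a0, a11, a111 = 1.0, -1.0, 1.0   # a11 < 0 mimics a first-order ferroelectric
T, T0 = 0.5, 1.0                 # temperature below the transition, so a0*(T - T0) < 0

def free_energy(P, E=0.0):
    """Free energy density for polarization P in an external field E."""
    return (0.5 * a0 * (T - T0) * P**2
            + 0.25 * a11 * P**4
            + (1.0 / 6.0) * a111 * P**6
            - E * P)

# Spontaneous polarization from a brute-force minimization at E = 0 ...
P = np.linspace(-2.0, 2.0, 40001)
Ps_numeric = abs(P[np.argmin(free_energy(P))])

# ... compared with the closed-form solution derived in the text.
Ps_analytic = np.sqrt((-a11 + np.sqrt(a11**2 - 4 * a0 * a111 * (T - T0)))
                      / (2 * a111))
print(f"Ps (numeric)  = {Ps_numeric:.4f}")
print(f"Ps (analytic) = {Ps_analytic:.4f}")

# Field needed to hold each polarization value: E(P) = dG/dP.  Plotting P
# against E_of_P reproduces the S-shaped curve discussed above.
E_of_P = a0 * (T - T0) * P + a11 * P**3 + a111 * P**5
```

With these arbitrary coefficients both routes give |Ps| of roughly 1.17 in the reduced units of the sketch.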
Ferroelectricity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,014
[ "Physical phenomena", "Ferroelectric materials", "Phases of matter", "Electric and magnetic fields in matter", "Materials science", "Materials", "Electrical phenomena", "Condensed matter physics", "Hysteresis", "Matter" ]
44,726
https://en.wikipedia.org/wiki/Magnetoresistance
Magnetoresistance is the tendency of a material (often ferromagnetic) to change the value of its electrical resistance in an externally-applied magnetic field. There are a variety of effects that can be called magnetoresistance. Some occur in bulk non-magnetic metals and semiconductors, such as geometrical magnetoresistance, Shubnikov–de Haas oscillations, or the common positive magnetoresistance in metals. Other effects occur in magnetic metals, such as negative magnetoresistance in ferromagnets or anisotropic magnetoresistance (AMR). Finally, in multicomponent or multilayer systems (e.g. magnetic tunnel junctions), giant magnetoresistance (GMR), tunnel magnetoresistance (TMR), colossal magnetoresistance (CMR), and extraordinary magnetoresistance (EMR) can be observed. The first magnetoresistive effect was discovered in 1856 by William Thomson, better known as Lord Kelvin, but he was unable to lower the electrical resistance of anything by more than 5%. Today, systems including semimetals and concentric ring EMR structures are known. In these, a magnetic field can adjust the resistance by orders of magnitude. Since different mechanisms can alter the resistance, it is useful to separately consider situations where it depends on a magnetic field directly (e.g. geometric magnetoresistance and multiband magnetoresistance) and those where it does so indirectly through magnetization (e.g. AMR and TMR).

Discovery
William Thomson (Lord Kelvin) first discovered ordinary magnetoresistance in 1856. He experimented with pieces of iron and discovered that the resistance increases when the current is in the same direction as the magnetic force and decreases when the current is at 90° to the magnetic force. He then did the same experiment with nickel and found that it was affected in the same way but the magnitude of the effect was greater. This effect is referred to as anisotropic magnetoresistance (AMR). In 2007, Albert Fert and Peter Grünberg were jointly awarded the Nobel Prize for the discovery of giant magnetoresistance.

Geometrical magnetoresistance
An example of magnetoresistance due to the direct action of a magnetic field on an electric current can be studied on a Corbino disc. It consists of a conducting annulus with perfectly conducting rims. Without a magnetic field, the battery drives a radial current between the rims. When a magnetic field perpendicular to the plane of the annulus is applied (either into or out of the page), a circular component of current flows as well, due to the Lorentz force. Initial interest in this problem began with Boltzmann in 1886, and it was independently re-examined by Corbino in 1911. In a simple model, supposing the response to the Lorentz force is the same as for an electric field, the carrier velocity is given by
$\mathbf{v} = \mu\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)$,
where $\mu$ is the carrier mobility. Solving for the velocity, we find
$\mathbf{v} = \frac{\mu}{1+(\mu B)^2}\left(\mathbf{E} + \mu\,\mathbf{E}\times\mathbf{B} + \mu^2(\mathbf{B}\cdot\mathbf{E})\,\mathbf{B}\right)$,
where the effective reduction in mobility due to the $B$-field (for motion perpendicular to this field) is apparent. Electric current (proportional to the radial component of velocity) will decrease with increasing magnetic field and hence the resistance of the device will increase. Critically, this magnetoresistive scenario depends sensitively on the device geometry and current lines and it does not rely on magnetic materials. In a semiconductor with a single carrier type, the magnetoresistance is proportional to $(\mu B)^2$, where $\mu$ is the semiconductor mobility (units m²·V⁻¹·s⁻¹, equivalently m²·Wb⁻¹, or T⁻¹) and $B$ is the magnetic field (units teslas).
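As a rough numerical illustration of this $(\mu B)^2$ scaling, the short Python sketch below evaluates the fractional resistance increase for a hypothetical high-mobility carrier; the mobility value is a placeholder chosen for illustration, not data for any particular semiconductor.

```python
# Geometrical magnetoresistance of a single-carrier semiconductor scales as (mu*B)^2.
# The mobility below is a placeholder value for a high-mobility material.
mu = 7.0  # carrier mobility, m^2 V^-1 s^-1 (equivalently 1/T)

for B in (0.05, 0.10, 0.20, 0.50):        # magnetic field in tesla
    delta_R_over_R = (mu * B) ** 2        # fractional resistance increase
    print(f"B = {B:4.2f} T  ->  delta R / R ~ {delta_R_over_R:.1%}")
```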
Indium antimonide, an example of a high mobility semiconductor, could have an electron mobility above at . So in a field, for example the magnetoresistance increase would be 100%.

Anisotropic magnetoresistance (AMR)
Thomson's experiments are an example of AMR, a property of a material in which a dependence of electrical resistance on the angle between the direction of electric current and direction of magnetization is observed. The effect arises in most cases from the simultaneous action of magnetization and spin–orbit interaction (exceptions related to non-collinear magnetic order notwithstanding) and its detailed mechanism depends on the material. It can be for example due to a larger probability of s-d scattering of electrons in the direction of magnetization (which is controlled by the applied magnetic field). The net effect (in most materials) is that the electrical resistance has its maximum value when the direction of current is parallel to the applied magnetic field. AMR of new materials is being investigated and magnitudes up to 50% have been observed in some uranium (but otherwise quite conventional) ferromagnetic compounds. Materials with extreme AMR have been identified, driven by unconventional mechanisms such as a metal-insulator transition triggered by rotating the magnetic moments (while for some directions of magnetic moments the system is semimetallic, for other directions a gap opens). In polycrystalline ferromagnetic materials, the AMR can only depend on the angle $\theta$ between the magnetization and the current direction and (as long as the resistivity of the material can be described by a rank-two tensor) it must follow
$\rho(\theta) = \rho_\perp + (\rho_\parallel - \rho_\perp)\cos^2\theta$,
where $\rho$ is the (longitudinal) resistivity of the film and $\rho_\parallel$ and $\rho_\perp$ are the resistivities for $\theta = 0^\circ$ and $\theta = 90^\circ$, respectively. Associated with longitudinal resistivity, there is also transversal resistivity dubbed (somewhat confusingly) the planar Hall effect. In monocrystals, the resistivity also depends on the orientations of the magnetization and the current relative to the crystal axes individually.
To compensate for the non-linear characteristics and inability to detect the polarity of a magnetic field, the following structure is used for sensors. It consists of stripes of aluminum or gold placed on a thin film of permalloy (a ferromagnetic material exhibiting the AMR effect) inclined at an angle of 45°. This structure forces the current not to flow along the "easy axes" of the thin film, but at an angle of 45°. The dependence of resistance now has a permanent offset which is linear around the null point. Because of its appearance, this sensor type is called a 'barber pole'. The AMR effect is used in a wide array of sensors for measurement of Earth's magnetic field (electronic compass), for electric current measurement (by measuring the magnetic field created around the conductor), for traffic detection and for linear position and angle sensing. The biggest AMR sensor manufacturers are Honeywell, NXP Semiconductors, STMicroelectronics, and Sensitec GmbH.
On the theoretical side, I. A. Campbell, A. Fert, and O. Jaoul derived an expression for the AMR ratio of Ni-based alloys using the two-current model with s-s and s-d scattering processes, where 's' is a conduction electron and 'd' denotes the 3d states with the spin–orbit interaction. The AMR ratio is expressed as
$\frac{\Delta\rho}{\rho} = \gamma\left(\alpha - 1\right)$, with $\gamma = \frac{3}{4}\left(\frac{A}{H}\right)^2$ and $\alpha = \rho_\downarrow/\rho_\uparrow$,
where $A$, $H$, and $\rho_\sigma$ are a spin–orbit coupling constant, an exchange field, and a resistivity for spin $\sigma$, respectively. More recently, Satoshi Kokado et al. have obtained a general expression of the AMR ratio for 3d transition-metal ferromagnets by extending the theory to a more general one.
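As an aside, a short numerical sketch of the polycrystalline angular dependence given above; the resistivity values are arbitrary illustrative numbers (roughly a 3% AMR ratio), and the 45° bias point is the one exploited by the 'barber pole' geometry.

```python
import numpy as np

# AMR in a polycrystalline ferromagnet:
#   rho(theta) = rho_perp + (rho_par - rho_perp) * cos(theta)^2
# The two resistivities below are arbitrary illustrative values.
rho_par, rho_perp = 1.03, 1.00          # arbitrary units

def rho(theta_deg):
    t = np.deg2rad(theta_deg)
    return rho_perp + (rho_par - rho_perp) * np.cos(t) ** 2

for theta in (0, 45, 90):
    print(f"theta = {theta:2d} deg  ->  rho = {rho(theta):.4f}")

# Around 45 degrees the response is, to first order, linear in small angle
# changes, which is why barber-pole sensors are biased at that angle.
slope = (rho(46) - rho(44)) / 2.0       # numerical d(rho)/d(theta) near 45 deg
print(f"d(rho)/d(theta) near 45 deg ~ {slope:.5f} per degree")
```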
The general expression can also be applied to half-metals.

See also: Giant magnetoresistance, Tunnel magnetoresistance, Colossal magnetoresistance, Extraordinary magnetoresistance, Magnetoresistive random-access memory.
Magnetoresistance
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,585
[ "Magnetoresistance", "Physical quantities", "Spintronics", "Electric and magnetic fields in matter", "Materials science", "Magnetic ordering", "Condensed matter physics", "Electrical resistance and conductance" ]
44,790
https://en.wikipedia.org/wiki/Luminosity
Luminosity is an absolute measure of radiated electromagnetic energy per unit time, and is synonymous with the radiant power emitted by a light-emitting object. In astronomy, luminosity is the total amount of electromagnetic energy emitted per unit of time by a star, galaxy, or other astronomical object. In SI units, luminosity is measured in joules per second, or watts. In astronomy, values for luminosity are often given in terms of the luminosity of the Sun, L⊙. Luminosity can also be given in terms of the astronomical magnitude system: the absolute bolometric magnitude (Mbol) of an object is a logarithmic measure of its total energy emission rate, while absolute magnitude is a logarithmic measure of the luminosity within some specific wavelength range or filter band. In contrast, the term brightness in astronomy is generally used to refer to an object's apparent brightness: that is, how bright an object appears to an observer. Apparent brightness depends on both the luminosity of the object and the distance between the object and observer, and also on any absorption of light along the path from object to observer. Apparent magnitude is a logarithmic measure of apparent brightness. The distance determined by luminosity measures can be somewhat ambiguous, and is thus sometimes called the luminosity distance.

Measurement
When not qualified, the term "luminosity" means bolometric luminosity, which is measured either in the SI units, watts, or in terms of solar luminosities (L⊙). A bolometer is the instrument used to measure radiant energy over a wide band by absorption and measurement of heating. A star also radiates neutrinos, which carry off some energy (about 2% in the case of the Sun), contributing to the star's total luminosity. The IAU has defined a nominal solar luminosity of 3.828×10²⁶ W to promote publication of consistent and comparable values in units of the solar luminosity. While bolometers do exist, they cannot be used to measure even the apparent brightness of a star because they are insufficiently sensitive across the electromagnetic spectrum and because most wavelengths do not reach the surface of the Earth. In practice bolometric magnitudes are measured by taking measurements at certain wavelengths and constructing a model of the total spectrum that is most likely to match those measurements. In some cases, the process of estimation is extreme, with luminosities being calculated when less than 1% of the energy output is observed, for example with a hot Wolf-Rayet star observed only in the infrared. Bolometric luminosities can also be calculated using a bolometric correction to a luminosity in a particular passband. The term luminosity is also used in relation to particular passbands, such as a visual luminosity or K-band luminosity. These are not generally luminosities in the strict sense of an absolute measure of radiated power, but absolute magnitudes defined for a given filter in a photometric system. Several different photometric systems exist. Some, such as the UBV or Johnson system, are defined against photometric standard stars, while others, such as the AB system, are defined in terms of a spectral flux density.

Stellar luminosity
A star's luminosity can be determined from two stellar characteristics: size and effective temperature. The former is typically represented in terms of solar radii, R⊙, while the latter is represented in kelvins, but in most cases neither can be measured directly.
To determine a star's radius, two other metrics are needed: the star's angular diameter and its distance from Earth. Both can be measured with great accuracy in certain cases, with cool supergiants often having large angular diameters, and some cool evolved stars having masers in their atmospheres that can be used to measure the parallax using VLBI. However, for most stars the angular diameter or parallax, or both, are far below our ability to measure with any certainty. Since the effective temperature is merely a number that represents the temperature of a black body that would reproduce the luminosity, it obviously cannot be measured directly, but it can be estimated from the spectrum. An alternative way to measure stellar luminosity is to measure the star's apparent brightness and distance. A third component needed to derive the luminosity is the degree of interstellar extinction that is present, a condition that usually arises because of gas and dust present in the interstellar medium (ISM), the Earth's atmosphere, and circumstellar matter. Consequently, one of astronomy's central challenges in determining a star's luminosity is to derive accurate measurements for each of these components, without which an accurate luminosity figure remains elusive. Extinction can only be measured directly if the actual and observed luminosities are both known, but it can be estimated from the observed colour of a star, using models of the expected level of reddening from the interstellar medium. In the current system of stellar classification, stars are grouped according to temperature, with the massive, very young and energetic Class O stars boasting temperatures in excess of 30,000 K while the less massive, typically older Class M stars exhibit temperatures less than 3,500 K. Because luminosity is proportional to temperature to the fourth power, the large variation in stellar temperatures produces an even vaster variation in stellar luminosity. Because the luminosity depends on a high power of the stellar mass, high mass luminous stars have much shorter lifetimes. The most luminous stars are always young stars, no more than a few million years for the most extreme. In the Hertzsprung–Russell diagram, the x-axis represents temperature or spectral type while the y-axis represents luminosity or magnitude. The vast majority of stars are found along the main sequence with blue Class O stars found at the top left of the chart while red Class M stars fall to the bottom right. Certain stars like Deneb and Betelgeuse are found above and to the right of the main sequence, more luminous or cooler than their equivalents on the main sequence. Increased luminosity at the same temperature, or alternatively cooler temperature at the same luminosity, indicates that these stars are larger than those on the main sequence and they are called giants or supergiants. Blue and white supergiants are high luminosity stars somewhat cooler than the most luminous main sequence stars. A star like Deneb, for example, has a luminosity around 200,000 L⊙, a spectral type of A2, and an effective temperature around 8,500 K, meaning it has a radius around . For comparison, the red supergiant Betelgeuse has a luminosity around 100,000 L⊙, a spectral type of M2, and a temperature around 3,500 K, meaning its radius is about . 
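The radius figures implied above follow from the black-body relation $L = 4\pi R^2 \sigma T^4$ (given in the next section). A quick Python sketch, using the luminosities and temperatures quoted for Deneb and Betelgeuse and the nominal solar effective temperature of 5772 K; the printed values are rough estimates under those assumptions, not the article's quoted radii.

```python
# Estimate stellar radii (in solar radii) from luminosity and effective temperature,
# using L = 4*pi*R^2*sigma*T^4 written as a ratio to the Sun.
T_SUN = 5772.0  # K, nominal solar effective temperature

def radius_solar(L_over_Lsun, T_eff_K):
    """R / R_sun = sqrt(L / L_sun) * (T_sun / T_eff)^2 for a black body."""
    return L_over_Lsun ** 0.5 * (T_SUN / T_eff_K) ** 2

print(f"Deneb-like star (200,000 Lsun, 8,500 K):      R ~ {radius_solar(2.0e5, 8500):.0f} Rsun")
print(f"Betelgeuse-like star (100,000 Lsun, 3,500 K): R ~ {radius_solar(1.0e5, 3500):.0f} Rsun")
```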
Red supergiants are the largest type of star, but the most luminous are much smaller and hotter, with temperatures up to 50,000 K and more and luminosities of several million L⊙, meaning their radii are just a few tens of R⊙. For example, R136a1 has a temperature over 46,000 K and a luminosity of more than 6,100,000 L⊙ (mostly in the UV); it is only .

Radio luminosity
The luminosity of a radio source is measured in W Hz⁻¹, to avoid having to specify a bandwidth over which it is measured. The observed strength, or flux density, of a radio source is measured in janskys (Jy), where 1 Jy = 10⁻²⁶ W m⁻² Hz⁻¹. For example, consider a 10 W transmitter at a distance of 1 million metres, radiating over a bandwidth of 1 MHz. By the time that power has reached the observer, the power is spread over the surface of a sphere with area $4\pi r^2$, or about 1.26×10¹³ m², so its flux density is about 8×10⁻¹⁹ W m⁻² Hz⁻¹, i.e. roughly 8×10⁷ Jy. More generally, for sources at cosmological distances, a k-correction must be made for the spectral index α of the source, and a relativistic correction must be made for the fact that the frequency scale in the emitted rest frame is different from that in the observer's rest frame. So the full expression for radio luminosity, assuming isotropic emission, is
$L_\nu = \frac{S_{\mathrm{obs}}\,4\pi D_L^2}{(1+z)^{1+\alpha}}$,
where $L_\nu$ is the luminosity in W Hz⁻¹, $S_{\mathrm{obs}}$ is the observed flux density in W m⁻² Hz⁻¹, $D_L$ is the luminosity distance in metres, $z$ is the redshift, and $\alpha$ is the spectral index (in the sense $S_\nu \propto \nu^{\alpha}$; in radio astronomy, assuming thermal emission, the spectral index is typically equal to 2). For example, consider a 1 Jy signal from a radio source at a redshift of 1, at a frequency of 1.4 GHz. Ned Wright's cosmology calculator calculates a luminosity distance for a redshift of 1 to be 6701 Mpc = 2×10²⁶ m, giving a radio luminosity (for α = 2) of about 6×10²⁶ W Hz⁻¹. To calculate the total radio power, this luminosity must be integrated over the bandwidth of the emission. A common assumption is to set the bandwidth to the observing frequency, which effectively assumes the power radiated has uniform intensity from zero frequency up to the observing frequency. In the case above, the total power is then about 9×10³⁵ W. This is sometimes expressed in terms of the total (i.e. integrated over all wavelengths) luminosity of the Sun, which is 3.828×10²⁶ W, giving a radio power of roughly 2×10⁹ L⊙.

Luminosity formulae
The Stefan–Boltzmann equation applied to a black body gives the value of the luminosity for a black body, an idealized object which is perfectly opaque and non-reflecting:
$L = \sigma A T^4$,
where $A$ is the surface area, $T$ is the temperature (in kelvins) and $\sigma$ is the Stefan–Boltzmann constant, with a value of 5.670×10⁻⁸ W m⁻² K⁻⁴.
Imagine a point source of light of luminosity $L$ that radiates equally in all directions. A hollow sphere centered on the point would have its entire interior surface illuminated. As the radius increases, the surface area will also increase, and the constant luminosity has more surface area to illuminate, leading to a decrease in observed brightness.
$L = A \cdot F$,
where $A$ is the area of the illuminated surface and $F$ is the flux density of the illuminated surface. The surface area of a sphere with radius $r$ is $A = 4\pi r^2$, so for stars and other point sources of light:
$F = \frac{L}{4\pi r^2}$,
where $r$ is the distance from the observer to the light source. For stars on the main sequence, luminosity is also related to mass approximately as below:
$\frac{L}{L_\odot} \approx \left(\frac{M}{M_\odot}\right)^{3.5}$

Relationship to magnitude
Luminosity is an intrinsic measurable property of a star independent of distance. The concept of magnitude, on the other hand, incorporates distance. The apparent magnitude is a measure of the diminishing flux of light as a result of distance according to the inverse-square law.
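Returning to the radio example above, the arithmetic can be sketched as follows. The spectral index is assumed to be α = 2 (the thermal value mentioned in the text) and the nominal solar luminosity of 3.828×10²⁶ W is used, so the printed numbers are estimates under those assumptions.

```python
import math

# Radio luminosity of the 1 Jy, z = 1, 1.4 GHz example, assuming isotropic
# emission and a thermal spectral index alpha = 2 (an assumption, see text).
JY = 1e-26                # 1 jansky in W m^-2 Hz^-1
L_SUN = 3.828e26          # nominal solar luminosity in W

S_obs = 1.0 * JY          # observed flux density
D_L = 2e26                # luminosity distance for z = 1, in metres (6701 Mpc)
z, alpha = 1.0, 2.0
nu_obs = 1.4e9            # observing frequency in Hz

L_nu = S_obs * 4.0 * math.pi * D_L**2 / (1.0 + z) ** (1.0 + alpha)  # W Hz^-1
P_total = L_nu * nu_obs   # crude total power: flat spectrum from 0 up to nu_obs

print(f"L_nu    ~ {L_nu:.1e} W/Hz")
print(f"P_total ~ {P_total:.1e} W  ~ {P_total / L_SUN:.1e} L_sun")
```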
The Pogson logarithmic scale is used to measure both apparent and absolute magnitudes, the latter corresponding to the brightness of a star or other celestial body as seen if it were located at an interstellar distance of 10 parsecs. In addition to this brightness decrease from increased distance, there is an extra decrease of brightness due to extinction from intervening interstellar dust. By measuring the width of certain absorption lines in the stellar spectrum, it is often possible to assign a certain luminosity class to a star without knowing its distance. Thus a fair measure of its absolute magnitude can be determined without knowing either its distance or the interstellar extinction. In measuring star brightnesses, absolute magnitude, apparent magnitude, and distance are interrelated parameters—if two are known, the third can be determined. Since the Sun's luminosity is the standard, comparing these parameters with the Sun's apparent magnitude and distance is the easiest way to remember how to convert between them, although officially, zero point values are defined by the IAU. The magnitude of a star, a unitless measure, is a logarithmic scale of observed visible brightness. The apparent magnitude is the observed visible brightness from Earth, which depends on the distance of the object. The absolute magnitude is the apparent magnitude at a distance of 10 pc; therefore, the bolometric absolute magnitude is a logarithmic measure of the bolometric luminosity. The difference in bolometric magnitude between two objects is related to their luminosity ratio according to:
$M_{\mathrm{bol1}} - M_{\mathrm{bol2}} = -2.5\log_{10}\left(\frac{L_1}{L_2}\right)$,
where $M_{\mathrm{bol1}}$ is the bolometric magnitude of the first object, $M_{\mathrm{bol2}}$ is the bolometric magnitude of the second object, $L_1$ is the first object's bolometric luminosity and $L_2$ is the second object's bolometric luminosity. The zero point of the absolute magnitude scale is actually defined as a fixed luminosity of 3.0128×10²⁸ W. Therefore, the absolute magnitude can be calculated from a luminosity in watts:
$M_{\mathrm{bol}} = -2.5\log_{10}\frac{L_\ast}{L_0}$,
where $L_0$ is the zero point luminosity 3.0128×10²⁸ W, and the luminosity in watts can be calculated from an absolute magnitude (although absolute magnitudes are often not measured relative to an absolute flux):
$L_\ast = L_0 \times 10^{-0.4\,M_{\mathrm{bol}}}$

See also: Glossary of astronomy, List of brightest stars, List of most luminous stars, Orders of magnitude (power), Solar luminosity.
External links: Luminosity calculator; Ned Wright's cosmology calculator.
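As a closing illustration of the magnitude–luminosity conversion above, a small Python sketch using the IAU zero-point luminosity; the solar value is included only as a familiar check.

```python
import math

# Convert between bolometric luminosity (watts) and absolute bolometric magnitude
# using the IAU zero-point luminosity L0.
L0 = 3.0128e28     # W, zero point of the absolute bolometric magnitude scale
L_SUN = 3.828e26   # W, nominal solar luminosity

def M_bol(L_watts):
    """Absolute bolometric magnitude for a luminosity in watts."""
    return -2.5 * math.log10(L_watts / L0)

def L_watts(M_bol_value):
    """Luminosity in watts for an absolute bolometric magnitude."""
    return L0 * 10.0 ** (-0.4 * M_bol_value)

print(f"Sun: M_bol ~ {M_bol(L_SUN):+.2f}")          # close to +4.74
print(f"M_bol = 0 corresponds to L ~ {L_watts(0.0):.4e} W")
```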
Luminosity
[ "Physics", "Mathematics" ]
2,688
[ "Physical phenomena", "Concepts in astrophysics", "Physical quantities", "Quantity", "Astrophysics", "Physical properties" ]
44,883
https://en.wikipedia.org/wiki/Welding
Welding is a fabrication process that joins materials, usually metals or thermoplastics, primarily by using high temperature to melt the parts together and allow them to cool, causing fusion. Common alternative methods include solvent welding (of thermoplastics) using chemicals to melt materials being bonded without heat, and solid-state welding processes which bond without melting, such as pressure, cold welding, and diffusion bonding. Metal welding is distinct from lower temperature bonding techniques such as brazing and soldering, which do not melt the base metal (parent metal) and instead require flowing a filler metal to solidify their bonds. In addition to melting the base metal in welding, a filler material is typically added to the joint to form a pool of molten material (the weld pool) that cools to form a joint that can be stronger than the base material. Welding also requires a form of shield to protect the filler metals or melted metals from being contaminated or oxidized. Many different energy sources can be used for welding, including a gas flame (chemical), an electric arc (electrical), a laser, an electron beam, friction, and ultrasound. While often an industrial process, welding may be performed in many different environments, including in open air, under water, and in outer space. Welding is a hazardous undertaking and precautions are required to avoid burns, electric shock, vision damage, inhalation of poisonous gases and fumes, and exposure to intense ultraviolet radiation. Until the end of the 19th century, the only welding process was forge welding, which blacksmiths had used for millennia to join iron and steel by heating and hammering. Arc welding and oxy-fuel welding were among the first processes to develop late in the century, and electric resistance welding followed soon after. Welding technology advanced quickly during the early 20th century, as world wars drove the demand for reliable and inexpensive joining methods. Following the wars, several modern welding techniques were developed, including manual methods like shielded metal arc welding, now one of the most popular welding methods, as well as semi-automatic and automatic processes such as gas metal arc welding, submerged arc welding, flux-cored arc welding and electroslag welding. Developments continued with the invention of laser beam welding, electron beam welding, magnetic pulse welding, and friction stir welding in the latter half of the century. Today, as the science continues to advance, robot welding is commonplace in industrial settings, and researchers continue to develop new welding methods and gain greater understanding of weld quality. Etymology The term weld is derived from the Middle English verb well (; plural/present tense: ) or welling (), meaning 'to heat' (to the maximum temperature possible); 'to bring to a boil'. The modern word was probably derived from the past-tense participle welled (), with the addition of d for this purpose being common in the Germanic languages of the Angles and Saxons. It was first recorded in English in 1590. A fourteenth century translation of the Christian Bible into English by John Wycliffe translates Isaiah 2:4 as "" (they shall beat together their swords into plowshares). In the 1590 version this was changed to "" (they shall weld together their swords into plowshares), suggesting this particular use of the word probably became popular in English sometime between these periods. 
The Old English word for welding iron was ('to bring together') or ('to bring together hot'). The word is related to the Old Swedish word , meaning 'to boil', which could refer to joining metals, as in (literally 'to boil iron'). Sweden was a large exporter of iron during the Middle Ages, so the word may have entered English from the Swedish iron trade, or may have been imported with the thousands of Viking settlements that arrived in England before and during the Viking Age, as more than half of the most common English words in everyday use are Scandinavian in origin. History The history of joining metals goes back several millennia. The earliest examples of this come from the Bronze and Iron Ages in Europe and the Middle East. The ancient Greek historian Herodotus states in The Histories of the 5th century BC that Glaucus of Chios "was the man who single-handedly invented iron welding". Forge welding was used in the construction of the Iron pillar of Delhi, erected in Delhi, India about 310 AD and weighing 5.4 metric tons. The Middle Ages brought advances in forge welding, in which blacksmiths pounded heated metal repeatedly until bonding occurred. In 1540, Vannoccio Biringuccio published De la pirotechnia, which includes descriptions of the forging operation. Renaissance craftsmen were skilled in the process, and the industry continued to grow during the following centuries. In 1800, Sir Humphry Davy discovered the short-pulse electrical arc and presented his results in 1801. In 1802, Russian scientist Vasily Petrov created the continuous electric arc, and subsequently published "News of Galvanic-Voltaic Experiments" in 1803, in which he described experiments carried out in 1802. Of great importance in this work was the description of a stable arc discharge and the indication of its possible use for many applications, one being melting metals. In 1808, Davy, who was unaware of Petrov's work, rediscovered the continuous electric arc. In 1881–82 inventors Nikolai Benardos (Russian) and Stanisław Olszewski (Polish) created the first electric arc welding method known as carbon arc welding using carbon electrodes. The advances in arc welding continued with the invention of metal electrodes in the late 1800s by a Russian, Nikolai Slavyanov (1888), and an American, C. L. Coffin (1890). Around 1900, A. P. Strohmenger released a coated metal electrode in Britain, which gave a more stable arc. In 1905, Russian scientist Vladimir Mitkevich proposed using a three-phase electric arc for welding. Alternating current welding was invented by C. J. Holslag in 1919, but did not become popular for another decade. Resistance welding was also developed during the final decades of the 19th century, with the first patents going to Elihu Thomson in 1885, who produced further advances over the next 15 years. Thermite welding was invented in 1893, and around that time another process, oxyfuel welding, became well established. Acetylene was discovered in 1836 by Edmund Davy, but its use was not practical in welding until about 1900, when a suitable torch was developed. At first, oxyfuel welding was one of the more popular welding methods due to its portability and relatively low cost. As the 20th century progressed, however, it fell out of favor for industrial applications. It was largely replaced with arc welding, as advances in metal coverings (known as flux) were made. 
Flux covering the electrode primarily shields the base material from impurities, but also stabilizes the arc and can add alloying components to the weld metal. World War I caused a major surge in the use of welding, with the various military powers attempting to determine which of the several new welding processes would be best. The British primarily used arc welding, even constructing a ship, the "Fullagar" with an entirely welded hull. Arc welding was first applied to aircraft during the war as well, as some German airplane fuselages were constructed using the process. Also noteworthy is the first welded road bridge in the world, the Maurzyce Bridge in Poland (1928). During the 1920s, significant advances were made in welding technology, including the introduction of automatic welding in 1920, in which electrode wire was fed continuously. Shielding gas became a subject receiving much attention, as scientists attempted to protect welds from the effects of oxygen and nitrogen in the atmosphere. Porosity and brittleness were the primary problems, and the solutions that developed included the use of hydrogen, argon, and helium as welding atmospheres. During the following decade, further advances allowed for the welding of reactive metals like aluminum and magnesium. This in conjunction with developments in automatic welding, alternating current, and fluxes fed a major expansion of arc welding during the 1930s and then during World War II. In 1930, the first all-welded merchant vessel, M/S Carolinian, was launched. During the middle of the century, many new welding methods were invented. In 1930, Kyle Taylor was responsible for the release of stud welding, which soon became popular in shipbuilding and construction. Submerged arc welding was invented the same year and continues to be popular today. In 1932 a Russian, Konstantin Khrenov eventually implemented the first underwater electric arc welding. Gas tungsten arc welding, after decades of development, was finally perfected in 1941, and gas metal arc welding followed in 1948, allowing for fast welding of non-ferrous materials but requiring expensive shielding gases. Shielded metal arc welding was developed during the 1950s, using a flux-coated consumable electrode, and it quickly became the most popular metal arc welding process. In 1957, the flux-cored arc welding process debuted, in which the self-shielded wire electrode could be used with automatic equipment, resulting in greatly increased welding speeds, and that same year, plasma arc welding was invented by Robert Gage. Electroslag welding was introduced in 1958, and it was followed by its cousin, electrogas welding, in 1961. In 1953, the Soviet scientist N. F. Kazakov proposed the diffusion bonding method. Other recent developments in welding include the 1958 breakthrough of electron beam welding, making deep and narrow welding possible through the concentrated heat source. Following the invention of the laser in 1960, laser beam welding debuted several decades later, and has proved to be especially useful in high-speed, automated welding. Magnetic pulse welding (MPW) has been industrially used since 1967. Friction stir welding was invented in 1991 by Wayne Thomas at The Welding Institute (TWI, UK) and found high-quality applications all over the world. All of these four new processes continue to be quite expensive due to the high cost of the necessary equipment, and this has limited their applications. Processes Welding joins two pieces of metal using heat, pressure, or both. 
The most common modern welding methods use heat sufficient to melt the base metals to be joined and the filler metal. This includes gas welding and all forms of arc welding. The area where the base and filler metals melt is called the weld pool or puddle. Most welding methods involve pushing the puddle along a joint to create a weld bead. Overlapping pieces of metal can be joined by forming the weld pool within a hole made in the topmost piece of base metal. This is called a plug weld. Overlapping base metals are commonly joined using electric resistance welding, a process that combines heat and pressure and does not require a filler metal. Solid-state welding processes join two pieces of metal using pressure. Gas welding The most common gas welding process is oxyfuel welding, also known as oxyacetylene welding. It is one of the oldest and most versatile welding processes, but in recent years it has become less popular in industrial applications. It is still widely used for welding pipes and tubes, as well as repair work. The equipment is relatively inexpensive and simple, generally employing the combustion of acetylene in oxygen to produce a welding flame temperature of about 3100 °C (5600 °F). The flame, since it is less concentrated than an electric arc, causes slower weld cooling, which can lead to greater residual stresses and weld distortion, though it eases the welding of high alloy steels. A similar process, generally called oxyfuel cutting, is used to cut metals. Arc welding These processes use a welding power supply to create and maintain an electric arc between an electrode and the base material to melt metals at the welding point. They can use either direct current (DC) or alternating current (AC), and consumable or non-consumable electrodes. The welding region is sometimes protected by some type of inert or semi-inert gas, known as a shielding gas, and filler material is sometimes used as well. Arc welding processes One of the most common types of arc welding is shielded metal arc welding (SMAW); it is also known as manual metal arc welding (MMAW) or stick welding. Electric current is used to strike an arc between the base material and consumable electrode rod, which is made of filler material (typical steel) and is covered with a flux that protects the weld area from oxidation and contamination by producing carbon dioxide (CO2) gas during the welding process. The electrode core itself acts as filler material, making a separate filler unnecessary. The process is versatile and can be performed with relatively inexpensive equipment, making it well suited to shop jobs and field work. An operator can become reasonably proficient with a modest amount of training and can achieve mastery with experience. Weld times are rather slow, since the consumable electrodes must be frequently replaced and because slag, the residue from the flux, must be chipped away after welding. Furthermore, the process is generally limited to welding ferrous materials, though special electrodes have made possible the welding of cast iron, stainless steel, aluminum, and other metals. Gas metal arc welding (GMAW), also known as metal inert gas or MIG welding, is a semi-automatic or automatic process that uses a continuous wire feed as an electrode and an inert or semi-inert gas mixture to protect the weld from contamination. Since the electrode is continuous, welding speeds are greater for GMAW than for SMAW. 
A related process, flux-cored arc welding (FCAW), uses similar equipment but uses wire consisting of a steel electrode surrounding a powder fill material. This cored wire is more expensive than the standard solid wire and can generate fumes and/or slag, but it permits even higher welding speed and greater metal penetration. Gas tungsten arc welding (GTAW), or tungsten inert gas (TIG) welding, is a manual welding process that uses a non-consumable tungsten electrode, an inert or semi-inert gas mixture, and a separate filler material. Especially useful for welding thin materials, this method is characterized by a stable arc and high-quality welds, but it requires significant operator skill and can only be accomplished at relatively low speeds. GTAW can be used on nearly all weldable metals, though it is most often applied to stainless steel and light metals. It is often used when quality welds are extremely important, such as in bicycle, aircraft and naval applications. A related process, plasma arc welding, also uses a tungsten electrode but uses plasma gas to make the arc. The arc is more concentrated than the GTAW arc, making transverse control more critical and thus generally restricting the technique to a mechanized process. Because of its stable current, the method can be used on a wider range of material thicknesses than can the GTAW process and it is much faster. It can be applied to all of the same materials as GTAW except magnesium, and automated welding of stainless steel is one important application of the process. A variation of the process is plasma cutting, an efficient steel cutting process. Submerged arc welding (SAW) is a high-productivity welding method in which the arc is struck beneath a covering layer of flux. This increases arc quality since contaminants in the atmosphere are blocked by the flux. The slag that forms on the weld generally comes off by itself, and combined with the use of a continuous wire feed, the weld deposition rate is high. Working conditions are much improved over other arc welding processes, since the flux hides the arc and almost no smoke is produced. The process is commonly used in industry, especially for large products and in the manufacture of welded pressure vessels. Other arc welding processes include atomic hydrogen welding, electroslag welding (ESW), electrogas welding, and stud arc welding. ESW is a highly productive, single-pass welding process for thicker materials between 1 inch (25 mm) and 12 inches (300 mm) in a vertical or close to vertical position. Arc welding power supplies To supply the electrical power necessary for arc welding processes, a variety of different power supplies can be used. The most common welding power supplies are constant current power supplies and constant voltage power supplies. In arc welding, the length of the arc is directly related to the voltage, and the amount of heat input is related to the current. Constant current power supplies are most often used for manual welding processes such as gas tungsten arc welding and shielded metal arc welding, because they maintain a relatively constant current even as the voltage varies. This is important because in manual welding, it can be difficult to hold the electrode perfectly steady, and as a result, the arc length and thus voltage tend to fluctuate. 
Constant voltage power supplies hold the voltage constant and vary the current, and as a result, are most often used for automated welding processes such as gas metal arc welding, flux-cored arc welding, and submerged arc welding. In these processes, arc length is kept constant, since any fluctuation in the distance between the wire and the base material is quickly rectified by a large change in current. For example, if the wire and the base material get too close, the current will rapidly increase, which in turn causes the heat to increase and the tip of the wire to melt, returning it to its original separation distance. The type of current used plays an important role in arc welding. Consumable electrode processes such as shielded metal arc welding and gas metal arc welding generally use direct current, but the electrode can be charged either positively or negatively. In welding, the positively charged anode will have a greater heat concentration, and as a result, changing the polarity of the electrode affects weld properties. If the electrode is positively charged, the base metal will be hotter, increasing weld penetration and welding speed. Alternatively, a negatively charged electrode results in more shallow welds. Non-consumable electrode processes, such as gas tungsten arc welding, can use either type of direct current, as well as alternating current. However, with direct current, because the electrode only creates the arc and does not provide filler material, a positively charged electrode causes shallow welds, while a negatively charged electrode makes deeper welds. Alternating current rapidly moves between these two, resulting in medium-penetration welds. One disadvantage of AC, the fact that the arc must be re-ignited after every zero crossings, has been addressed with the invention of special power units that produce a square wave pattern instead of the normal sine wave, making rapid zero crossings possible and minimizing the effects of the problem. Resistance welding Resistance welding involves the generation of heat by passing current through the resistance caused by the contact between two or more metal surfaces. Small pools of molten metal are formed at the weld area as high current (1,000–100,000 A) is passed through the metal. In general, resistance welding methods are efficient and cause little pollution, but their applications are somewhat limited and the equipment cost can be high. Resistance spot welding is a popular method used to join overlapping metal sheets of up to 3 mm thick. Two electrodes are simultaneously used to clamp the metal sheets together and to pass current through the sheets. The advantages of the method include efficient energy use, limited workpiece deformation, high production rates, easy automation, and no required filler materials. Weld strength is significantly lower than with other welding methods, making the process suitable for only certain applications. It is used extensively in the automotive industry—ordinary cars can have several thousand spot welds made by industrial robots. A specialized process called shot welding, can be used to spot weld stainless steel. Seam welding also relies on two electrodes to apply pressure and current to join metal sheets. However, instead of pointed electrodes, wheel-shaped electrodes roll along and often feed the workpiece, making it possible to make long continuous welds. In the past, this process was used in the manufacture of beverage cans, but now its uses are more limited. 
Other resistance welding methods include butt welding, flash welding, projection welding, and upset welding. Energy beam welding Energy beam welding methods, namely laser beam welding and electron beam welding, are relatively new processes that have become quite popular in high production applications. The two processes are quite similar, differing most notably in their source of power. Laser beam welding employs a highly focused laser beam, while electron beam welding is done in a vacuum and uses an electron beam. Both have a very high energy density, making deep weld penetration possible and minimizing the size of the weld area. Both processes are extremely fast, and are easily automated, making them highly productive. The primary disadvantages are their very high equipment costs (though these are decreasing) and a susceptibility to thermal cracking. Developments in this area include laser-hybrid welding, which uses principles from both laser beam welding and arc welding for even better weld properties, laser cladding, and x-ray welding. Solid-state welding Like forge welding (the earliest welding process discovered), some modern welding methods do not involve the melting of the materials being joined. One of the most popular, ultrasonic welding, is used to connect thin sheets or wires made of metal or thermoplastic by vibrating them at high frequency and under high pressure. The equipment and methods involved are similar to that of resistance welding, but instead of electric current, vibration provides energy input. When welding metals, the vibrations are introduced horizontally, and the materials are not melted; with plastics, which should have similar melting temperatures, vertically. Ultrasonic welding is commonly used for making electrical connections out of aluminum or copper, and it is also a very common polymer welding process. Another common process, explosion welding, involves the joining of materials by pushing them together under extremely high pressure. The energy from the impact plasticizes the materials, forming a weld, even though only a limited amount of heat is generated. The process is commonly used for welding dissimilar materials, including bonding aluminum to carbon steel in ship hulls and stainless steel or titanium to carbon steel in petrochemical pressure vessels. Other solid-state welding processes include friction welding (including friction stir welding and friction stir spot welding), magnetic pulse welding, co-extrusion welding, cold welding, diffusion bonding, exothermic welding, high frequency welding, hot pressure welding, induction welding, and roll bonding. Geometry Welds can be geometrically prepared in many different ways. The five basic types of weld joints are the butt joint, lap joint, corner joint, edge joint, and T-joint (a variant of this last is the cruciform joint). Other variations exist as well—for example, double-V preparation joints are characterized by the two pieces of material each tapering to a single center point at one-half their height. Single-U and double-U preparation joints are also fairly common—instead of having straight edges like the single-V and double-V preparation joints, they are curved, forming the shape of a U. Lap joints are also commonly more than two pieces thick—depending on the process used and the thickness of the material, many pieces can be welded together in a lap joint geometry. 
Many welding processes require the use of a particular joint design; for example, resistance spot welding, laser beam welding, and electron beam welding are most frequently performed on lap joints. Other welding methods, like shielded metal arc welding, are extremely versatile and can weld virtually any type of joint. Some processes can also be used to make multipass welds, in which one weld is allowed to cool, and then another weld is performed on top of it. This allows for the welding of thick sections arranged in a single-V preparation joint, for example. After welding, a number of distinct regions can be identified in the weld area. The weld itself is called the fusion zone—more specifically, it is where the filler metal was laid during the welding process. The properties of the fusion zone depend primarily on the filler metal used, and its compatibility with the base materials. It is surrounded by the heat-affected zone, the area that had its microstructure and properties altered by the weld. These properties depend on the base material's behavior when subjected to heat. The metal in this area is often weaker than both the base material and the fusion zone, and is also where residual stresses are found. Quality Many distinct factors influence the strength of welds and the material around them, including the welding method, the amount and concentration of energy input, the weldability of the base material, filler material, and flux material, the design of the joint, and the interactions between all these factors. For example, because welding position influences weld quality, welding codes and specifications may require that both welding procedures and welders be tested in specified welding positions: 1G (flat), 2G (horizontal), 3G (vertical), 4G (overhead), 5G (horizontal fixed pipe), or 6G (inclined fixed pipe). To test the quality of a weld, either destructive or nondestructive testing methods are commonly used to verify that welds are free of defects, have acceptable levels of residual stresses and distortion, and have acceptable heat-affected zone (HAZ) properties. Types of welding defects include cracks, distortion, gas inclusions (porosity), non-metallic inclusions, lack of fusion, incomplete penetration, lamellar tearing, and undercutting. The metalworking industry has instituted codes and specifications to guide welders, weld inspectors, engineers, managers, and property owners in proper welding technique, design of welds, how to judge the quality of a welding procedure specification, how to judge the skill of the person performing the weld, and how to ensure the quality of a welding job. Methods such as visual inspection, radiography, ultrasonic testing, phased-array ultrasonics, dye penetrant inspection, magnetic particle inspection, or industrial computed tomography can help with detection and analysis of certain defects. Heat-affected zone The heat-affected zone (HAZ) is a ring surrounding the weld in which the temperature of the welding process, combined with the stresses of uneven heating and cooling, alters the heat-treatment properties of the alloy. The effects of welding on the material surrounding the weld can be detrimental—depending on the materials used and the heat input of the welding process used, the HAZ can be of varying size and strength. The thermal diffusivity of the base material plays a large role—if the diffusivity is high, the material cooling rate is high and the HAZ is relatively small. 
Conversely, a low diffusivity leads to slower cooling and a larger HAZ. The amount of heat injected by the welding process plays an important role as well: processes like oxyacetylene welding have an unconcentrated heat input and increase the size of the HAZ, while processes like laser beam welding give a highly concentrated, limited amount of heat, resulting in a small HAZ. Arc welding falls between these two extremes, with the individual processes varying somewhat in heat input. To calculate the heat input for arc welding procedures, the following formula can be used: Q = (60 × V × I) / (1,000 × S) × η, where Q = heat input (kJ/mm), V = voltage (V), I = current (A), S = welding speed (mm/min), and η = the process efficiency. The efficiency is dependent on the welding process used, with shielded metal arc welding having a value of 0.75, gas metal arc welding and submerged arc welding, 0.9, and gas tungsten arc welding, 0.8. Methods of alleviating the stresses and brittleness created in the HAZ include stress relieving and tempering. One major defect concerning the HAZ is cracking at the weld toes: because of the rapid expansion (heating) and contraction (cooling), the material may not be able to withstand the resulting stresses and can crack. One method of controlling these stresses is to control the heating and cooling rates, for example by pre-heating and post-heating. Lifetime extension with after treatment methods The durability and life of dynamically loaded, welded steel structures are determined in many cases by the welds, in particular the weld transitions. Through selective treatment of the transitions by grinding (abrasive cutting), shot peening, high-frequency impact treatment, ultrasonic impact treatment, etc., the durability of many designs increases significantly. Metallurgy Most solids used are engineering materials consisting of crystalline solids in which the atoms or ions are arranged in a repetitive geometric pattern known as a lattice structure. The only exceptions are materials made from glass, which is a supercooled liquid, and polymers, which are aggregates of large organic molecules. Crystalline solid cohesion is obtained by a metallic or chemical bond that is formed between the constituent atoms. Chemical bonds can be grouped into two types: ionic and covalent. To form an ionic bond, either a valence or bonding electron separates from one atom and becomes attached to another atom to form oppositely charged ions. The bonding in the static position is when the ions occupy an equilibrium position where the resulting force between them is zero. When the ions are placed under tensile force, the inter-ionic spacing increases, creating an electrostatic attractive force, while under compressive force the repulsion between the atomic nuclei is dominant. Covalent bonding takes place when the constituent atoms share one or more electrons, resulting in an electron cloud that is shared by the molecule as a whole. In both ionic and covalent bonding the locations of the ions and electrons are constrained relative to each other, thereby resulting in the bond being characteristically brittle. Metallic bonding can be classified as a type of covalent bonding for which the constituent atoms are of the same type and do not combine with one another to form a chemical bond. Atoms lose one or more electrons, forming an array of positive ions. 
These electrons are shared by the lattice, which makes the electron cloud mobile, as the electrons are free to move, as are the ions. This is what gives metals their relatively high thermal and electrical conductivity as well as their characteristic ductility. Three of the most commonly used crystal lattice structures in metals are the body-centred cubic, face-centred cubic and close-packed hexagonal. Ferritic steel has a body-centred cubic structure, while austenitic steel and non-ferrous metals like aluminium, copper and nickel have the face-centred cubic structure. Ductility is an important factor in ensuring the integrity of structures by enabling them to sustain local stress concentrations without fracture. In addition, structures are required to be of an acceptable strength, which is related to a material's yield strength. In general, as the yield strength of a material increases, there is a corresponding reduction in fracture toughness. A reduction in fracture toughness may also be attributed to the embrittlement effect of impurities, or, for body-centred cubic metals, to a reduction in temperature. Metals, and in particular steels, have a transitional temperature range: above this range the metal has acceptable notch-ductility, while below it the material becomes brittle. Within the range, the material's behavior is unpredictable. The reduction in fracture toughness is accompanied by a change in the fracture appearance. Above the transition, the fracture is primarily due to micro-void coalescence, which results in the fracture appearing fibrous. When the temperature falls, the fracture will show signs of cleavage facets. These two appearances are visible to the naked eye. Brittle fracture in steel plates may appear as chevron markings under the microscope. These arrow-like ridges on the crack surface point towards the origin of the fracture. Fracture toughness is measured using a notched and pre-cracked rectangular specimen, whose dimensions are specified in standards, for example ASTM E23. There are other means of estimating or measuring fracture toughness, including the following: the Charpy impact test per ASTM A370; the crack-tip opening displacement (CTOD) test per BS 7448–1; the J integral test per ASTM E1820; and the Pellini drop-weight test per ASTM E208. Unusual conditions While many welding applications are done in controlled environments such as factories and repair shops, some welding processes are commonly used in a wide variety of conditions, such as open air, underwater, and vacuums (such as space). In open-air applications, such as construction and outdoor repair, shielded metal arc welding is the most common process. Processes that employ inert gases to protect the weld cannot be readily used in such situations, because unpredictable atmospheric movements can result in a faulty weld. Shielded metal arc welding is also often used in underwater welding in the construction and repair of ships, offshore platforms, and pipelines, but others, such as flux cored arc welding and gas tungsten arc welding, are also common. Welding in space is also possible—it was first attempted in 1969 by Soviet cosmonauts during the Soyuz 6 mission, when they performed experiments to test shielded metal arc welding, plasma arc welding, and electron beam welding in a depressurized environment. 
Further testing of these methods was done in the following decades, and today researchers continue to develop methods for using other welding processes in space, such as laser beam welding, resistance welding, and friction welding. Advances in these areas may be useful for future endeavours similar to the construction of the International Space Station, which could rely on welding for joining in space the parts that were manufactured on Earth. Safety issues Welding can be dangerous and unhealthy if the proper precautions are not taken. However, using new technology and proper protection greatly reduces risks of injury and death associated with welding. Since many common welding procedures involve an open electric arc or flame, the risk of burns and fire is significant; this is why it is classified as a hot work process. To prevent injury, welders wear personal protective equipment in the form of heavy leather gloves and protective long-sleeve jackets to avoid exposure to extreme heat and flames. Synthetic clothing such as polyester should not be worn since it may burn, causing injury. Additionally, the brightness of the weld area leads to a condition called arc eye or flash burns in which ultraviolet light causes inflammation of the cornea and can burn the retinas of the eyes. Goggles and welding helmets with dark UV-filtering face plates are worn to prevent this exposure. Since the 2000s, some helmets have included a face plate which instantly darkens upon exposure to the intense UV light. To protect bystanders, the welding area is often surrounded with translucent welding curtains. These curtains, made of a polyvinyl chloride plastic film, shield people outside the welding area from the UV light of the electric arc, but cannot replace the filter glass used in helmets. Depending on the type of material, welding varieties, and other factors, welding can produce over 100 dB(A) of noise. Long term or continuous exposure to higher decibels can lead to noise-induced hearing loss. Welders are often exposed to dangerous gases and particulate matter. Processes like flux-cored arc welding and shielded metal arc welding produce smoke containing particles of various types of oxides. The size of the particles in question tends to influence the toxicity of the fumes, with smaller particles presenting a greater danger. This is because smaller particles have the ability to cross the blood–brain barrier. Fumes and gases, such as carbon dioxide, ozone, and fumes containing heavy metals, can be dangerous to welders lacking proper ventilation and training. Exposure to manganese welding fumes, for example, even at low levels (<0.2 mg/m3), may lead to neurological problems or to damage to the lungs, liver, kidneys, or central nervous system. Nano particles can become trapped in the alveolar macrophages of the lungs and induce pulmonary fibrosis. The use of compressed gases and flames in many, welding processes poses an explosion and fire risk. Some common precautions include limiting the amount of oxygen in the air, and keeping combustible materials away from the workplace. Costs and trends As an industrial process, the cost of welding plays a crucial role in manufacturing decisions. Many different variables affect the total cost, including equipment cost, labor cost, material cost, and energy cost. 
Depending on the process, equipment cost can vary, from inexpensive for methods like shielded metal arc welding and oxyfuel welding, to extremely expensive for methods like laser beam welding and electron beam welding. Because of their high cost, they are only used in high production operations. Similarly, because automation and robots increase equipment costs, they are only implemented when high production is necessary. Labor cost depends on the deposition rate (the rate of welding), the hourly wage, and the total operation time, including time spent fitting, welding, and handling the part. The cost of materials includes the cost of the base and filler material, and the cost of shielding gases. Finally, energy cost depends on arc time and welding power demand. For manual welding methods, labor costs generally make up the vast majority of the total cost. As a result, many cost-saving measures are focused on minimizing operation time. To do this, welding procedures with high deposition rates can be selected, and weld parameters can be fine-tuned to increase welding speed. Mechanization and automation are often implemented to reduce labor costs, but this frequently increases the cost of equipment and creates additional setup time. Material costs tend to increase when special properties are necessary, and energy costs normally do not amount to more than several percent of the total welding cost. In recent years, in order to minimize labor costs in high production manufacturing, industrial welding has become increasingly more automated, most notably with the use of robots in resistance spot welding (especially in the automotive industry) and in arc welding. In robot welding, mechanized devices both hold the material and perform the weld and at first, spot welding was its most common application, but robotic arc welding increases in popularity as technology advances. Other key areas of research and development include the welding of dissimilar materials (such as steel and aluminum, for example) and new welding processes, such as friction stir, magnetic pulse, conductive heat seam, and laser-hybrid welding. Furthermore, progress is desired in making more specialized methods like laser beam welding practical for more applications, such as in the aerospace and automotive industries. Researchers also hope to better understand the often unpredictable properties of welds, especially microstructure, residual stresses, and a weld's tendency to crack or deform. The trend of accelerating the speed at which welds are performed in the steel erection industry comes at a risk to the integrity of the connection. Without proper fusion to the base materials provided by sufficient arc time on the weld, a project inspector cannot ensure the effective diameter of the puddle weld therefore he or she cannot guarantee the published load capacities unless they witness the actual installation. This method of puddle welding is common in the United States and Canada for attaching steel sheets to bar joist and structural steel members. Regional agencies are responsible for ensuring the proper installation of puddle welding on steel construction sites. Currently there is no standard or weld procedure which can ensure the published holding capacity of any unwitnessed connection, but this is under review by the American Welding Society. Glass and plastic welding Glasses and certain types of plastics are commonly welded materials. 
Unlike metals, which have a specific melting point, glasses and plastics have a melting range, called the glass transition. When heating the solid material past the glass-transition temperature (Tg) into this range, it will generally become softer and more pliable. When it crosses through the range, above the glass-melting temperature (Tm), it will become a very thick, sluggish, viscous liquid, slowly decreasing in viscosity as temperature increases. Typically, this viscous liquid will have very little surface tension compared to metals, becoming a sticky, taffy to honey-like consistency, so welding can usually take place by simply pressing two melted surfaces together. The two liquids will generally mix and join at first contact. Upon cooling through the glass transition, the welded piece will solidify as one solid piece of amorphous material. Glass welding Glass welding is a common practice during glassblowing. It is used very often in the construction of lighting, neon signs, flashtubes, scientific equipment, and the manufacture of dishes and other glassware. It is also used during glass casting for joining the halves of glass molds, making items such as bottles and jars. Welding glass is accomplished by heating the glass through the glass transition, turning it into a thick, formable, liquid mass. Heating is usually done with a gas or oxy-gas torch, or a furnace, because the temperatures for melting glass are often quite high. This temperature may vary, depending on the type of glass. For example, lead glass becomes a weldable liquid at around , and can be welded with a simple propane torch. On the other hand, quartz glass (fused silica) must be heated to over , but quickly loses its viscosity and formability if overheated, so an oxyhydrogen torch must be used. Sometimes a tube may be attached to the glass, allowing it to be blown into various shapes, such as bulbs, bottles, or tubes. When two pieces of liquid glass are pressed together, they will usually weld very readily. Welding a handle onto a pitcher can usually be done with relative ease. However, when welding a tube to another tube, a combination of blowing and suction, and pressing and pulling is used to ensure a good seal, to shape the glass, and to keep the surface tension from closing the tube in on itself. Sometimes a filler rod may be used, but usually not. Because glass is very brittle in its solid state, it is often prone to cracking upon heating and cooling, especially if the heating and cooling are uneven. This is because the brittleness of glass does not allow for uneven thermal expansion. Glass that has been welded will usually need to be cooled very slowly and evenly through the glass transition, in a process called annealing, to relieve any internal stresses created by a temperature gradient. There are many types of glass, and it is most common to weld using the same types. Different glasses often have different rates of thermal expansion, which can cause them to crack upon cooling when they contract differently. For instance, quartz has very low thermal expansion, while soda-lime glass has very high thermal expansion. When welding different glasses to each other, it is usually important to closely match their coefficients of thermal expansion, to ensure that cracking does not occur. Also, some glasses will simply not mix with others, so welding between certain types may not be possible. 
Glass can also be welded to metals and ceramics, although with metals the process is usually more adhesion to the surface of the metal rather than a commingling of the two materials. However, certain glasses will typically bond only to certain metals. For example, lead glass bonds readily to copper or molybdenum, but not to aluminum. Tungsten electrodes are often used in lighting but will not bond to quartz glass, so the tungsten is often wetted with molten borosilicate glass, which bonds to both tungsten and quartz. However, care must be taken to ensure that all materials have similar coefficients of thermal expansion to prevent cracking both when the object cools and when it is heated again. Special alloys are often used for this purpose, ensuring that the coefficients of expansion match, and sometimes thin, metallic coatings may be applied to a metal to create a good bond with the glass. Plastic welding Plastics are generally divided into two categories, which are "thermosets" and "thermoplastics." A thermoset is a plastic in which a chemical reaction sets the molecular bonds after first forming the plastic, and then the bonds cannot be broken again without degrading the plastic. Thermosets cannot be melted, therefore, once a thermoset has set it is impossible to weld it. Examples of thermosets include epoxies, silicone, vulcanized rubber, polyester, and polyurethane. Thermoplastics, by contrast, form long molecular chains, which are often coiled or intertwined, forming an amorphous structure without any long-range, crystalline order. Some thermoplastics may be fully amorphous, while others have a partially crystalline/partially amorphous structure. Both amorphous and semicrystalline thermoplastics have a glass transition, above which welding can occur, but semicrystallines also have a specific melting point which is above the glass transition. Above this melting point, the viscous liquid will become a free-flowing liquid (see rheological weldability for thermoplastics). Examples of thermoplastics include polyethylene, polypropylene, polystyrene, polyvinylchloride (PVC), and fluoroplastics like Teflon and Spectralon. Welding thermoplastic with heat is very similar to welding glass. The plastic first must be cleaned and then heated through the glass transition, turning the weld-interface into a thick, viscous liquid. Two heated interfaces can then be pressed together, allowing the molecules to mix through intermolecular diffusion, joining them as one. Then the plastic is cooled through the glass transition, allowing the weld to solidify. A filler rod may often be used for certain types of joints. The main differences between welding glass and plastic are the types of heating methods, the much lower melting temperatures, and the fact that plastics will burn if overheated. Many different methods have been devised for heating plastic to a weldable temperature without burning it. Ovens or electric heating tools can be used to melt the plastic. Ultrasonic, laser, or friction heating are other methods. Resistive metals may be implanted in the plastic, which respond to induction heating. Some plastics will begin to burn at temperatures lower than their glass transition, so welding can be performed by blowing a heated, inert gas onto the plastic, melting it while, at the same time, shielding it from oxygen. Solvent welding Many thermoplastics can also be welded using chemical solvents. 
When placed in contact with the plastic, the solvent will begin to soften it, bringing the surface into a thick, liquid solution. When two melted surfaces are pressed together, the molecules in the solution mix, joining them as one. Because the solvent can permeate the plastic, the solvent evaporates out through the surface of the plastic, causing the weld to drop out of solution and solidify. A common use for solvent welding is for joining PVC (polyvinyl chloride) or ABS (acrylonitrile butadiene styrene) pipes during plumbing, or for welding styrene and polystyrene plastics in the construction of models. Solvent welding is especially effective on plastics like PVC which burn at or below their glass transition, but may be ineffective on plastics like Teflon or polyethylene that are resistant to chemical decomposition. See also Aluminium joining Fasteners List of welding codes List of welding processes Welding Procedure Specification Welder certification Welded sculpture Welding table References Sources External links Pipes Joint Welding Welding Process Welding Ventilation at CCOHS IARC Group 1 carcinogens Articles containing video clips Joining Mechanical engineering
Welding
[ "Physics", "Engineering" ]
9,833
[ "Welding", "Applied and interdisciplinary physics", "Mechanical engineering" ]
318,051
https://en.wikipedia.org/wiki/Law%20of%20mass%20action
In chemistry, the law of mass action is the proposition that the rate of a chemical reaction is directly proportional to the product of the activities or concentrations of the reactants. It explains and predicts behaviors of solutions in dynamic equilibrium. Specifically, it implies that for a chemical reaction mixture that is in equilibrium, the ratio between the concentration of reactants and products is constant. Two aspects are involved in the initial formulation of the law: 1) the equilibrium aspect, concerning the composition of a reaction mixture at equilibrium and 2) the kinetic aspect concerning the rate equations for elementary reactions. Both aspects stem from the research performed by Cato M. Guldberg and Peter Waage between 1864 and 1879 in which equilibrium constants were derived by using kinetic data and the rate equation which they had proposed. Guldberg and Waage also recognized that chemical equilibrium is a dynamic process in which rates of reaction for the forward and backward reactions must be equal at chemical equilibrium. In order to derive the expression of the equilibrium constant appealing to kinetics, the expression of the rate equation must be used. The expression of the rate equations was rediscovered independently by Jacobus Henricus van 't Hoff. The law is a statement about equilibrium and gives an expression for the equilibrium constant, a quantity characterizing chemical equilibrium. In modern chemistry this is derived using equilibrium thermodynamics. It can also be derived with the concept of chemical potential. History Two chemists generally expressed the composition of a mixture in terms of numerical values relating the amount of product to the amount of reactant in order to describe the equilibrium state. Cato Maximilian Guldberg and Peter Waage, building on Claude Louis Berthollet's ideas about reversible chemical reactions, proposed the law of mass action in 1864. These papers, in Danish, went largely unnoticed, as did the later publication (in French) of 1867 which contained a modified law and the experimental data on which that law was based. In 1877 van 't Hoff independently came to similar conclusions, but was unaware of the earlier work, which prompted Guldberg and Waage to give a fuller and further developed account of their work, in German, in 1879. Van 't Hoff then accepted their priority. 1864 The equilibrium state (composition) In their first paper, Guldberg and Waage suggested that in a reaction such as A + B <=> A' + B' the "chemical affinity" or "reaction force" between A and B did not just depend on the chemical nature of the reactants, as had previously been supposed, but also depended on the amount of each reactant in a reaction mixture. Thus the law of mass action was first stated as follows: When two reactants, A and B, react together at a given temperature in a "substitution reaction," the affinity, or chemical force between them, is proportional to the active masses, [A] and [B], each raised to a particular power, that is, affinity = α[A]^a[B]^b. In this context a substitution reaction was one such as {alcohol} + acid <=> {ester} + water. Active mass was defined in the 1879 paper as "the amount of substance in the sphere of action". For species in solution active mass is equal to concentration. For solids, active mass is taken as a constant. α, a and b were regarded as empirical constants, to be determined by experiment. At equilibrium, the chemical force driving the forward reaction must be equal to the chemical force driving the reverse reaction. 
Writing the initial active masses of A, B, A' and B' as p, q, p' and q' and the dissociated active mass at equilibrium as ξ, this equality is represented by α(p − ξ)^a (q − ξ)^b = α'(p' + ξ)^a' (q' + ξ)^b', where ξ represents the amount of reagents A and B that has been converted into A' and B'. Calculations based on this equation are reported in the second paper. Dynamic approach to the equilibrium state The third paper of 1864 was concerned with the kinetics of the same equilibrium system. Writing the dissociated active mass at some point in time as x, the rate of the forward reaction was given as k(p − x)^a (q − x)^b. Likewise the reverse reaction of A' with B' proceeded at a rate given by k'(p' + x)^a' (q' + x)^b'. The overall rate of conversion is the difference between these rates, so at equilibrium (when the composition stops changing) the two rates of reaction must be equal. Hence, at equilibrium, k(p − ξ)^a (q − ξ)^b = k'(p' + ξ)^a' (q' + ξ)^b'. 1867 The rate expressions given in Guldberg and Waage's 1864 paper could not be differentiated, so they were simplified as follows. The chemical force was assumed to be directly proportional to the product of the active masses of the reactants. This is equivalent to setting the exponents a and b of the earlier theory to one. The proportionality constant was called an affinity constant, k. The equilibrium condition for an "ideal" reaction was thus given the simplified form k [A]eq [B]eq = k' [A']eq [B']eq, where [A]eq, [B]eq etc. are the active masses at equilibrium. In terms of the initial amounts of reagents p, q etc. this becomes k(p − ξ)(q − ξ) = k'(p' + ξ)(q' + ξ). The ratio of the affinity coefficients, k'/k, can be recognized as an equilibrium constant. Turning to the kinetic aspect, it was suggested that the velocity of reaction, v, is proportional to the sum of chemical affinities (forces). In its simplest form this results in the expression v = ψ(k[A][B] − k'[A'][B']), where ψ is the proportionality constant. Actually, Guldberg and Waage used a more complicated expression which allowed for interaction between A and A', etc. By making certain simplifying approximations to those more complicated expressions, the rate equation could be integrated and hence the equilibrium quantity ξ could be calculated. The extensive calculations in the 1867 paper gave support to the simplified concept, namely: the rate of a reaction is proportional to the product of the active masses of the reagents involved. This is an alternative statement of the law of mass action. 1879 In the 1879 paper the assumption that reaction rate was proportional to the product of concentrations was justified microscopically in terms of the frequency of independent collisions, as had been developed for gas kinetics by Boltzmann in 1872 (Boltzmann equation). It was also proposed that the original theory of the equilibrium condition could be generalised to apply to any arbitrary chemical equilibrium, with the affinity of the forward reaction proportional to [A]^α [B]^β ... and that of the reverse reaction proportional to [A']^α' [B']^β' .... The exponents α, β etc. are explicitly identified for the first time as the stoichiometric coefficients for the reaction. Modern statement of the law The affinity constants, k+ and k−, of the 1879 paper can now be recognised as rate constants. The equilibrium constant, K, was derived by setting the rates of forward and backward reactions to be equal, which gives K = k+/k− = ([A']^α' [B']^β' ...)/([A]^α [B]^β ...) at equilibrium. This also meant that the chemical affinities for the forward and backward reactions are equal. The resultant expression is correct even from the modern perspective, apart from the use of concentrations instead of activities (the concept of chemical activity was developed by Josiah Willard Gibbs, in the 1870s, but was not widely known in Europe until the 1890s). The derivation from the reaction rate expressions is no longer considered to be valid. 
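A small numerical sketch of the kinetic picture just described: for an elementary reaction A + B <=> A' + B' with simple mass-action rates (exponents set to one, as in the 1867 simplification), integrating the net rate until the composition stops changing reproduces the equilibrium ratio k/k'. The rate constants and initial active masses below are arbitrary illustrative values, not data from Guldberg and Waage.

```python
# Mass-action kinetics for A + B <=> A' + B':
# forward rate = k*[A][B], reverse rate = k_rev*[A'][B'].
k, k_rev = 2.0, 0.5                   # illustrative affinity (rate) constants
A, B, Ap, Bp = 1.0, 1.0, 0.0, 0.0     # initial active masses (arbitrary units)
dt = 1e-3
for _ in range(200_000):              # crude forward-Euler integration to steady state
    net = k * A * B - k_rev * Ap * Bp
    A, B = A - net * dt, B - net * dt
    Ap, Bp = Ap + net * dt, Bp + net * dt

print(f"[A'][B'] / ([A][B]) = {Ap * Bp / (A * B):.3f}")  # ~4.000
print(f"k / k_rev           = {k / k_rev:.3f}")          # 4.000, the equilibrium constant
```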
Nevertheless, Guldberg and Waage were on the right track when they suggested that the driving force for both forward and backward reactions is equal when the mixture is at equilibrium. The term they used for this force was chemical affinity. Today the expression for the equilibrium constant is derived by setting the chemical potential of forward and backward reactions to be equal. The generalisation of the law of mass action, in terms of affinity, to equilibria of arbitrary stoichiometry was a bold and correct conjecture. The hypothesis that reaction rate is proportional to reactant concentrations is, strictly speaking, only true for elementary reactions (reactions with a single mechanistic step), but the empirical rate expression is also applicable to second order reactions that may not be concerted reactions. Guldberg and Waage were fortunate in that reactions such as ester formation and hydrolysis, on which they originally based their theory, do indeed follow this rate expression. In general many reactions occur with the formation of reactive intermediates, and/or through parallel reaction pathways. However, all reactions can be represented as a series of elementary reactions and, if the mechanism is known in detail, the rate equation for each individual step is given by the expression so that the overall rate equation can be derived from the individual steps. When this is done the equilibrium constant is obtained correctly from the rate equations for forward and backward reaction rates. In biochemistry, there has been significant interest in the appropriate mathematical model for chemical reactions occurring in the intracellular medium. This is in contrast to the initial work done on chemical kinetics, which was in simplified systems where reactants were in a relatively dilute, pH-buffered, aqueous solution. In more complex environments, where bound particles may be prevented from disassociation by their surroundings, or diffusion is slow or anomalous, the model of mass action does not always describe the behavior of the reaction kinetics accurately. Several attempts have been made to modify the mass action model, but consensus has yet to be reached. Popular modifications replace the rate constants with functions of time and concentration. As an alternative to these mathematical constructs, one school of thought is that the mass action model can be valid in intracellular environments under certain conditions, but with different rates than would be found in a dilute, simple environment . The fact that Guldberg and Waage developed their concepts in steps from 1864 to 1867 and 1879 has resulted in much confusion in the literature as to which equation the law of mass action refers. It has been a source of some textbook errors. Thus, today the "law of mass action" sometimes refers to the (correct) equilibrium constant formula, and at other times to the (usually incorrect) rate formula. Applications to other fields In semiconductor physics The law of mass action also has implications in semiconductor physics. Regardless of doping, the product of electron and hole densities is a constant at equilibrium. This constant depends on the thermal energy of the system (i.e. the product of the Boltzmann constant, , and temperature, ), as well as the band gap (the energy separation between conduction and valence bands, ) and effective density of states in the valence and conduction bands. 
When the equilibrium electron and hole densities are equal, their common value is called the intrinsic carrier density, as this would be the value of n and p in a perfect crystal. Note that the final product is independent of the Fermi level: n p = Nc Nv exp(−Eg / kB T) = ni^2. Diffusion in condensed matter Yakov Frenkel represented the diffusion process in condensed matter as an ensemble of elementary jumps and quasichemical interactions of particles and defects. Henry Eyring applied his theory of absolute reaction rates to this quasichemical representation of diffusion. The mass action law for diffusion leads to various nonlinear versions of Fick's law. In mathematical ecology The Lotka–Volterra equations describe the dynamics of predator-prey systems. The rate of predation upon the prey is assumed to be proportional to the rate at which the predators and the prey meet; this rate is evaluated as xy, where x is the number of prey and y is the number of predators. This is a typical example of the law of mass action. In mathematical epidemiology The law of mass action forms the basis of the compartmental model of disease spread in mathematical epidemiology, in which a population of humans, animals or other individuals is divided into categories of susceptible, infected, and recovered (immune). The principle of mass action is at the heart of the transmission term of compartmental models in epidemiology, which provide a useful abstraction of disease dynamics. The law of mass action formulation of the SIR model corresponds to the following "quasichemical" system of elementary reactions: the list of components is S (susceptible individuals), I (infected individuals), and R (removed individuals, or just recovered ones if we neglect lethality); the list of elementary reactions is S + I -> 2I and I -> R. If immunity is unstable, then the transition from R to S that closes the cycle should be added (SIRS model): R -> S. A rich system of law of mass action models was developed in mathematical epidemiology by adding components and elementary reactions. Individuals in human or animal populations, unlike molecules in an ideal solution, do not mix homogeneously. For some diseases this non-homogeneity is great enough that the outputs of the classical SIR model, and of its simple generalizations like SIS or SEIR, are invalid. For these situations, more sophisticated compartmental models or distributed reaction-diffusion models may be useful. See also Chemical equilibrium Chemical potential Disequilibrium ratio Equilibrium constant Reaction quotient References Further reading Studies Concerning Affinity. P. Waage and C.M. Guldberg; Henry I. Abrash, Translator. "Guldberg and Waage and the Law of Mass Action", E.W. Lund, J. Chem. Ed., (1965), 42, 548-550. A simple explanation of the mass action law. H. Motulsky. The Thermodynamic Equilibrium Constant History of chemistry Equilibrium chemistry Chemical kinetics Jacobus Henricus van 't Hoff
Law of mass action
[ "Chemistry" ]
2,674
[ "Equilibrium chemistry", "Chemical kinetics", "Chemical reaction engineering" ]
318,370
https://en.wikipedia.org/wiki/RuBisCO
Ribulose-1,5-bisphosphate carboxylase/oxygenase, commonly known by the abbreviations RuBisCo, rubisco, RuBPCase, or RuBPco, is an enzyme involved in the light-independent (or "dark") part of photosynthesis, including the carbon fixation by which atmospheric carbon dioxide is converted by plants and other photosynthetic organisms to energy-rich molecules such as glucose. It emerged approximately four billion years ago in primordial metabolism prior to the presence of oxygen on Earth. It is probably the most abundant enzyme on Earth. In chemical terms, it catalyzes the carboxylation of ribulose-1,5-bisphosphate (also known as RuBP). Alternative carbon fixation pathways RuBisCO is important biologically because it catalyzes the primary chemical reaction by which inorganic carbon enters the biosphere. While many autotrophic bacteria and archaea fix carbon via the reductive acetyl CoA pathway, the 3-hydroxypropionate cycle, or the reverse Krebs cycle, these pathways are relatively small contributors to global carbon fixation compared to that catalyzed by RuBisCO. Phosphoenolpyruvate carboxylase, unlike RuBisCO, only temporarily fixes carbon. Reflecting its importance, RuBisCO is the most abundant protein in leaves, accounting for 50% of soluble leaf protein in C3 plants (20–30% of total leaf nitrogen) and 30% of soluble leaf protein in C4 plants (5–9% of total leaf nitrogen). Given its important role in the biosphere, the genetic engineering of RuBisCO in crops is of continuing interest (see below). Structure In plants, algae, cyanobacteria, and phototrophic and chemoautotrophic Pseudomonadota (formerly proteobacteria), the enzyme usually consists of two types of protein subunit, called the large chain (L, about 55,000 Da) and the small chain (S, about 13,000 Da). The large-chain gene (rbcL) is encoded by the chloroplast DNA in plants. There are typically several related small-chain genes in the nucleus of plant cells, and the small chains are imported to the stromal compartment of chloroplasts from the cytosol by crossing the outer chloroplast membrane. The enzymatically active substrate (ribulose 1,5-bisphosphate) binding sites are located in the large chains that form dimers in which amino acids from each large chain contribute to the binding sites. A total of eight large chains (= four dimers) and eight small chains assemble into a larger complex of about 540,000 Da. In some Pseudomonadota and dinoflagellates, enzymes consisting of only large subunits have been found. Magnesium ions (Mg2+) are needed for enzymatic activity. Correct positioning of Mg2+ in the active site of the enzyme involves addition of an "activating" carbon dioxide molecule (CO2) to a lysine in the active site (forming a carbamate). Mg2+ operates by driving deprotonation of the Lys210 residue, causing the Lys residue to rotate by 120 degrees to the trans conformer, decreasing the distance between the nitrogen of Lys and the carbon of CO2. The close proximity allows for the formation of a covalent bond, resulting in the carbamate. Mg2+ is first enabled to bind to the active site by the rotation of His335 to an alternate conformation. Mg2+ is then coordinated by the His residues of the active site (His300, His302, His335), and is partially neutralized by the coordination of three water molecules and their conversion to −OH. This coordination results in an unstable complex, but produces a favorable environment for the binding of CO2. Formation of the carbamate is favored by an alkaline pH. 
The pH and the concentration of magnesium ions in the fluid compartment (in plants, the stroma of the chloroplast) increase in the light. The role of changing pH and magnesium ion levels in the regulation of RuBisCO enzyme activity is discussed below. Once the carbamate is formed, His335 finalizes the activation by returning to its initial position through thermal fluctuation. Enzymatic activity RuBisCO is one of many enzymes in the Calvin cycle. When RuBisCO facilitates the attack of CO2 at the C2 carbon of RuBP and subsequent bond cleavage between the C3 and C2 carbon, two molecules of glycerate-3-phosphate are formed. The conversion involves these steps: enolisation, carboxylation, hydration, C-C bond cleavage, and protonation. Substrates Substrates for RuBisCO are ribulose-1,5-bisphosphate and carbon dioxide (distinct from the "activating" carbon dioxide). RuBisCO also catalyses a reaction of ribulose-1,5-bisphosphate and molecular oxygen (O2) instead of carbon dioxide (CO2). Discriminating between the substrates CO2 and O2 is attributed to the differing interactions of the substrates' quadrupole moments and a high electrostatic field gradient. This gradient is established by the dimer form of the minimally active RuBisCO, which with its two components provides a combination of oppositely charged domains required for the enzyme's interaction with O2 and CO2. These conditions help explain the low turnover rate found in RuBisCO: In order to increase the strength of the electric field necessary for sufficient interaction with the substrates' quadrupole moments, the C- and N- terminal segments of the enzyme must be closed off, allowing the active site to be isolated from the solvent and lowering the dielectric constant. This isolation has a significant entropic cost, and results in the poor turnover rate. Binding RuBP Carbamylation of the ε-amino group of Lys210 is stabilized by coordination with the Mg2+. This reaction involves binding of the carboxylate termini of Asp203 and Glu204 to the Mg2+ ion. The substrate RuBP binds, displacing two of the three aquo ligands. Enolisation Enolisation of RuBP is the conversion of the keto tautomer of RuBP to an enediol(ate). Enolisation is initiated by deprotonation at C3. The enzyme base in this step has been debated, but the steric constraints observed in crystal structures have made Lys210 the most likely candidate. Specifically, the carbamate oxygen on Lys210 that is not coordinated with the Mg ion deprotonates the C3 carbon of RuBP to form a 2,3-enediolate. Carboxylation Carboxylation of the 2,3-enediolate results in the intermediate 3-keto-2-carboxyarabinitol-1,5-bisphosphate, and Lys334 is positioned to facilitate the addition of the CO2 substrate as it replaces the third Mg2+-coordinated water molecule and adds directly to the enediol. No Michaelis complex is formed in this process. Hydration of this ketone results in an additional hydroxy group on C3, forming a gem-diol intermediate. Carboxylation and hydration have been proposed as either a single concerted step or as two sequential steps. A concerted mechanism is supported by the proximity of the water molecule to C3 of RuBP in multiple crystal structures. Within the spinach structure, other residues are well placed to aid in the hydration step, as they are within hydrogen bonding distance of the water molecule. C-C bond cleavage The gem-diol intermediate cleaves at the C2-C3 bond to form one molecule of glycerate-3-phosphate and a negatively charged carboxylate. 
Stereo specific protonation of C2 of this carbanion results in another molecule of glycerate-3-phosphate. This step is thought to be facilitated by Lys175 or potentially the carbamylated Lys210. Products When carbon dioxide is the substrate, the product of the carboxylase reaction is an unstable six-carbon phosphorylated intermediate known as 3-keto-2-carboxyarabinitol-1,5-bisphosphate, which decays rapidly into two molecules of glycerate-3-phosphate. This product, also known as 3-phosphoglycerate, can be used to produce larger molecules such as glucose. When molecular oxygen is the substrate, the products of the oxygenase reaction are phosphoglycolate and 3-phosphoglycerate. Phosphoglycolate is recycled through a sequence of reactions called photorespiration, which involves enzymes and cytochromes located in the mitochondria and peroxisomes (this is a case of metabolite repair). In this process, two molecules of phosphoglycolate are converted to one molecule of carbon dioxide and one molecule of 3-phosphoglycerate, which can reenter the Calvin cycle. Some of the phosphoglycolate entering this pathway can be retained by plants to produce other molecules such as glycine. At ambient levels of carbon dioxide and oxygen, the ratio of the reactions is about 4 to 1, which results in a net carbon dioxide fixation of only 3.5. Thus, the inability of the enzyme to prevent the reaction with oxygen greatly reduces the photosynthetic capacity of many plants. Some plants, many algae, and photosynthetic bacteria have overcome this limitation by devising means to increase the concentration of carbon dioxide around the enzyme, including carbon fixation, crassulacean acid metabolism, and the use of pyrenoid. Rubisco side activities can lead to useless or inhibitory by-products. Important inhibitory by-products include xylulose 1,5-bisphosphate and glycero-2,3-pentodiulose 1,5-bisphosphate, both caused by "misfires" halfway in the enolisation-carboxylation reaction. In higher plants, this process causes RuBisCO self-inhibition, which can be triggered by saturating and RuBP concentrations and solved by Rubisco activase (see below). Rate of enzymatic activity Some enzymes can carry out thousands of chemical reactions each second. However, RuBisCO is slow, fixing only 3–10 carbon dioxide molecules each second per molecule of enzyme. The reaction catalyzed by RuBisCO is, thus, the primary rate-limiting factor of the Calvin cycle during the day. Nevertheless, under most conditions, and when light is not otherwise limiting photosynthesis, the speed of RuBisCO responds positively to increasing carbon dioxide concentration. RuBisCO is usually only active during the day, as ribulose 1,5-bisphosphate is not regenerated in the dark. This is due to the regulation of several other enzymes in the Calvin cycle. In addition, the activity of RuBisCO is coordinated with that of the other enzymes of the Calvin cycle in several other ways: By ions Upon illumination of the chloroplasts, the pH of the stroma rises from 7.0 to 8.0 because of the proton (hydrogen ion, ) gradient created across the thylakoid membrane. The movement of protons into thylakoids is driven by light and is fundamental to ATP synthesis in chloroplasts (Further reading: Photosynthetic reaction centre; Light-dependent reactions). To balance ion potential across the membrane, magnesium ions () move out of the thylakoids in response, increasing the concentration of magnesium in the stroma of the chloroplasts. 
RuBisCO has a high optimal pH (can be >9.0, depending on the magnesium ion concentration) and, thus, becomes "activated" by the introduction of carbon dioxide and magnesium to the active sites as described above. By RuBisCO activase In plants and some algae, another enzyme, RuBisCO activase (Rca, , ), is required to allow the rapid formation of the critical carbamate in the active site of RuBisCO. This is required because ribulose 1,5-bisphosphate (RuBP) binds more strongly to the active sites of RuBisCO when excess carbamate is present, preventing processes from moving forward. In the light, RuBisCO activase promotes the release of the inhibitory (or — in some views — storage) RuBP from the catalytic sites of RuBisCO. Activase is also required in some plants (e.g., tobacco and many beans) because, in darkness, RuBisCO is inhibited (or protected from hydrolysis) by a competitive inhibitor synthesized by these plants, a substrate analog 2-carboxy-D-arabitinol 1-phosphate (CA1P). CA1P binds tightly to the active site of carbamylated RuBisCO and inhibits catalytic activity to an even greater extent. CA1P has also been shown to keep RuBisCO in a conformation that is protected from proteolysis. In the light, RuBisCO activase also promotes the release of CA1P from the catalytic sites. After the CA1P is released from RuBisCO, it is rapidly converted to a non-inhibitory form by a light-activated CA1P-phosphatase. Even without these strong inhibitors, once every several hundred reactions, the normal reactions with carbon dioxide or oxygen are not completed; other inhibitory substrate analogs are still formed in the active site. Once again, RuBisCO activase can promote the release of these analogs from the catalytic sites and maintain the enzyme in a catalytically active form. However, at high temperatures, RuBisCO activase aggregates and can no longer activate RuBisCO. This contributes to the decreased carboxylating capacity observed during heat stress. By activase The removal of the inhibitory RuBP, CA1P, and the other inhibitory substrate analogs by activase requires the consumption of ATP. This reaction is inhibited by the presence of ADP, and, thus, activase activity depends on the ratio of these compounds in the chloroplast stroma. Furthermore, in most plants, the sensitivity of activase to the ratio of ATP/ADP is modified by the stromal reduction/oxidation (redox) state through another small regulatory protein, thioredoxin. In this manner, the activity of activase and the activation state of RuBisCO can be modulated in response to light intensity and, thus, the rate of formation of the ribulose 1,5-bisphosphate substrate. By phosphate In cyanobacteria, inorganic phosphate (Pi) also participates in the co-ordinated regulation of photosynthesis: Pi binds to the RuBisCO active site and to another site on the large chain where it can influence transitions between activated and less active conformations of the enzyme. In this way, activation of bacterial RuBisCO might be particularly sensitive to Pi levels, which might cause it to act in a similar way to how RuBisCO activase functions in higher plants. By carbon dioxide Since carbon dioxide and oxygen compete at the active site of RuBisCO, carbon fixation by RuBisCO can be enhanced by increasing the carbon dioxide level in the compartment containing RuBisCO (chloroplast stroma). Several times during the evolution of plants, mechanisms have evolved for increasing the level of carbon dioxide in the stroma (see carbon fixation). 
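Because CO2 and O2 compete at the same active site, the partitioning between carboxylation and oxygenation is commonly described by a specificity factor S, with vc/vo = S × [CO2]/[O2]; raising the local CO2 level therefore raises the carboxylation fraction in direct proportion. The sketch below assumes a specificity factor and dissolved gas concentrations that are only illustrative round numbers, not values taken from this article.

```python
# Partitioning of RuBisCO catalysis between carboxylation (vc) and oxygenation (vo):
# vc / vo = S * [CO2] / [O2], where S is the CO2/O2 specificity factor.
def carbox_to_oxy_ratio(specificity: float, co2_um: float, o2_um: float) -> float:
    """Ratio of carboxylation to oxygenation rates; concentrations in the same units (e.g. uM)."""
    return specificity * co2_um / o2_um

# Assumed values: S ~ 100, ~10 uM dissolved CO2, ~250 uM dissolved O2 (air-equilibrated water).
print(f"carboxylations per oxygenation: {carbox_to_oxy_ratio(100.0, 10.0, 250.0):.1f}")   # ~4.0
# Doubling the local CO2 concentration, as CO2-concentrating mechanisms do, doubles the ratio.
print(f"with CO2 doubled:               {carbox_to_oxy_ratio(100.0, 20.0, 250.0):.1f}")   # ~8.0
```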
The use of oxygen as a substrate appears to be a puzzling process, since it seems to throw away captured energy. However, it may be a mechanism for preventing carbohydrate overload during periods of high light flux. This weakness in the enzyme is the cause of photorespiration, such that healthy leaves in bright light may have zero net carbon fixation when the ratio of O2 to CO2 available to RuBisCO shifts too far towards oxygen. This phenomenon is primarily temperature-dependent: high temperatures can decrease the concentration of CO2 dissolved in the moisture of leaf tissues. This phenomenon is also related to water stress: since plant leaves are evaporatively cooled, limited water causes high leaf temperatures. C4 plants use the enzyme PEP carboxylase initially, which has a higher affinity for CO2. The process first makes a 4-carbon intermediate compound, hence the name C4 plants, which is shuttled into a site of C3 photosynthesis and then decarboxylated, releasing CO2 to boost the local CO2 concentration. Crassulacean acid metabolism (CAM) plants keep their stomata closed during the day, which conserves water but prevents the light-independent reactions (a.k.a. the Calvin Cycle) from taking place, since these reactions require CO2 to pass by gas exchange through these openings. Evaporation through the upper side of a leaf is prevented by a layer of wax. Genetic engineering Since RuBisCO is often rate-limiting for photosynthesis in plants, it may be possible to improve photosynthetic efficiency by modifying RuBisCO genes in plants to increase catalytic activity and/or decrease oxygenation rates. This could improve sequestration of CO2 and be a strategy to increase crop yields. Approaches under investigation include transferring RuBisCO genes from one organism into another organism, engineering RuBisCO activase from thermophilic cyanobacteria into temperature-sensitive plants, increasing the level of expression of RuBisCO subunits, expressing RuBisCO small chains from the chloroplast DNA, and altering RuBisCO genes to increase specificity for carbon dioxide or otherwise increase the rate of carbon fixation. 
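As a rough illustration of why specificity matters for yield, the sketch below assumes that each oxygenation event ultimately releases about half a molecule of CO2 through photorespiration, so the roughly 4-to-1 carboxylation-to-oxygenation ratio quoted earlier gives the net fixation of about 3.5 also quoted earlier. The turnover number used is an assumed mid-range value from the 3–10 per second figure given above.

```python
# Net CO2 fixed per RuBisCO active site per second, given a carboxylation:oxygenation ratio.
def net_fixation_rate(turnover_per_s: float, carbox_to_oxy_ratio: float) -> float:
    """Each oxygenation is assumed to release ~0.5 CO2 via photorespiration."""
    carboxylations = turnover_per_s * carbox_to_oxy_ratio / (carbox_to_oxy_ratio + 1)
    oxygenations = turnover_per_s - carboxylations
    return carboxylations - 0.5 * oxygenations

# Assumed turnover of 5 reactions per second and the ~4:1 ratio quoted for ambient air.
print(f"{net_fixation_rate(5.0, 4.0):.2f} CO2 fixed per second (net)")   # 3.50
# If the ratio fell to 2:1, the same site would fix only 2.50 CO2 per second net.
print(f"{net_fixation_rate(5.0, 2.0):.2f} CO2 fixed per second (net)")
```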
A recent theory explores the trade-off between the relative specificity (i.e., ability to favour fixation over O2 incorporation, which leads to the energy-wasteful process of photorespiration) and the rate at which product is formed. The authors conclude that RuBisCO may actually have evolved to reach a point of 'near-perfection' in many plants (with widely varying substrate availabilities and environmental conditions), reaching a compromise between specificity and reaction rate. It has been also suggested that the oxygenase reaction of RuBisCO prevents depletion near its active sites and provides the maintenance of the chloroplast redox state. Since photosynthesis is the single most effective natural regulator of carbon dioxide in the Earth's atmosphere, a biochemical model of RuBisCO reaction is used as the core module of climate change models. Thus, a correct model of this reaction is essential to the basic understanding of the relations and interactions of environmental models. Expression in bacterial hosts There currently are very few effective methods for expressing functional plant Rubisco in bacterial hosts for genetic manipulation studies. This is largely due to Rubisco's requirement of complex cellular machinery for its biogenesis and metabolic maintenance including the nuclear-encoded RbcS subunits, which are typically imported into chloroplasts as unfolded proteins. Furthermore, sufficient expression and interaction with Rubisco activase are major challenges as well. One successful method for expression of Rubisco in E. coli involves the co-expression of multiple chloroplast chaperones, though this has only been shown for Arabidopsis thaliana Rubisco. Depletion in proteomic studies Due to its high abundance in plants (generally 40% of the total protein content), RuBisCO often impedes analysis of important signaling proteins such as transcription factors, kinases, and regulatory proteins found in lower abundance (10-100 molecules per cell) within plants. For example, using mass spectrometry on plant protein mixtures would result in multiple intense RuBisCO subunit peaks that interfere and hide those of other proteins. Recently, one efficient method for precipitating out RuBisCO involves the usage of protamine sulfate solution. Other existing methods for depleting RuBisCO and studying lower abundance proteins include fractionation techniques with calcium and phytate, gel electrophoresis with polyethylene glycol, affinity chromatography, and aggregation using DTT, though these methods are more time-consuming and less efficient when compared to protamine sulfate precipitation. Evolution of RuBisCO Phylogenetic studies The chloroplast gene rbcL, which codes for the large subunit of RuBisCO has been widely used as an appropriate locus for analysis of phylogenetics in plant taxonomy. Origin Non-carbon-fixing proteins similar to RuBisCO, termed RuBisCO-like proteins (RLPs), are also found in the wild in organisms as common as Bacillus subtilis. This bacterium has a rbcL-like protein with a 2,3-diketo-5-methylthiopentyl-1-phosphate enolase function, part of the methionine salvage pathway. Later identifications found functionally divergent examples dispersed all over bacteria and archaea, as well as transitionary enzymes performing both RLP-type enolase and RuBisCO functions. 
It is now believed that the current RuBisCO evolved from a dimeric RLP ancestor, acquiring its carboxylase function first before further oligomerizing and then recruiting the small subunit to form the familiar modern enzyme. The small subunit probably first evolved in anaerobic and thermophilic organisms, where it enabled RuBisCO to catalyze its reaction at higher temperatures. In addition to its effect on stabilizing catalysis, it enabled the evolution of higher specificities for CO2 over O2 by modulating the effect that substitutions within RuBisCO have on enzymatic function. Substitutions that do not have an effect without the small subunit suddenly become beneficial when it is bound. Furthermore, the small subunit enabled the accumulation of substitutions that are only tolerated in its presence. Accumulation of such substitutions leads to a strict dependence on the small subunit, which is observed in extant Rubiscos that bind a small subunit. C4 With the mass convergent evolution of the C4-fixation pathway in a diversity of plant lineages, ancestral C3-type RuBisCO evolved to have faster turnover of CO2 in exchange for lower specificity, as a result of the greater localization of CO2 from the mesophyll cells into the bundle sheath cells. This was achieved through enhancement of the conformational flexibility of the "open–closed" transition in the Calvin cycle. Laboratory-based phylogenetic studies have shown that this evolution was constrained by the trade-off between stability and activity brought about by the series of mutations necessary for C4 RuBisCO. Moreover, in order to sustain the destabilizing mutations, the evolution to C4 RuBisCO was preceded by a period in which mutations granted the enzyme increased stability, establishing a buffer to sustain and maintain the mutations required for C4 RuBisCO. To assist with this buffering process, the newly evolved enzyme was found to have further developed a series of stabilizing mutations. While RuBisCO has always been accumulating new mutations, most of the mutations that have survived have not had significant effects on protein stability. The destabilizing C4 mutations on RuBisCO have been sustained by environmental pressures such as low CO2 concentrations, requiring a sacrifice of stability for new adaptive functions. History of the term The term "RuBisCO" was coined humorously in 1979 by David Eisenberg at a seminar honouring the retirement of the early, prominent RuBisCO researcher Sam Wildman, and also alluded to the snack food trade name "Nabisco" in reference to Wildman's attempts to create an edible protein supplement from tobacco leaves. The capitalization of the name has long been debated. It can be capitalized for each letter of the full name (Ribulose-1,5-bisphosphate carboxylase/oxygenase), but it has also been argued that it should all be in lower case (rubisco), similar to other terms like scuba or laser. See also Carbon cycle Photorespiration Pyrenoid C3 carbon fixation C4 carbon fixation Crassulacean acid metabolism/CAM photosynthesis Carboxysome References Further reading External links Photosynthesis EC 4.1.1
RuBisCO
[ "Chemistry", "Biology" ]
5,409
[ "Biochemistry", "Photosynthesis" ]
318,577
https://en.wikipedia.org/wiki/Cavendish%20experiment
The Cavendish experiment, performed in 1797–1798 by English scientist Henry Cavendish, was the first experiment to measure the force of gravity between masses in the laboratory and the first to yield accurate values for the gravitational constant. Because of the unit conventions then in use, the gravitational constant does not appear explicitly in Cavendish's work. Instead, the result was originally expressed as the relative density of Earth, or equivalently the mass of Earth. His experiment gave the first accurate values for these geophysical constants. The experiment was devised sometime before 1783 by geologist John Michell, who constructed a torsion balance apparatus for it. However, Michell died in 1793 without completing the work. After his death the apparatus passed to Francis John Hyde Wollaston and then to Cavendish, who rebuilt the apparatus but kept close to Michell's original plan. Cavendish then carried out a series of measurements with the equipment and reported his results in the Philosophical Transactions of the Royal Society in 1798. The experiment The apparatus consisted of a torsion balance made of a wooden rod horizontally suspended from a wire, with a small lead sphere attached to each end. Two much more massive lead balls, suspended separately, could be positioned away from or to either side of the smaller balls. The experiment measured the faint gravitational attraction between the small and large balls, which deflected the torsion balance rod by about 0.16" (or only 0.03" with a stiffer suspending wire). The mutual attraction between the large and small balls caused the arm to rotate, twisting the suspension wire. The arm rotated until it reached an angle where the twisting force of the wire balanced the combined gravitational force of attraction between the large and small lead spheres. By measuring the angle of the rod and knowing the twisting force (torque) of the wire for a given angle, Cavendish was able to determine the force between the pairs of masses. Since the gravitational force of the Earth on the small ball could be measured directly by weighing it, the ratio of the two forces allowed the relative density of the Earth to be calculated, using Newton's law of gravitation. Cavendish found that the Earth's density was 5.448 times that of water (although due to a simple arithmetic error, found in 1821 by Francis Baily, the erroneous value 5.48 appears in his paper). The current accepted value is 5.514 g/cm3. To find the wire's torsion coefficient, the torque exerted by the wire for a given angle of twist, Cavendish timed the natural oscillation period of the balance rod as it rotated slowly clockwise and counterclockwise against the twisting of the wire. For the first 3 experiments the period was about 15 minutes, and for the next 14 experiments the period was about half of that, roughly 7.5 minutes. The period changed because after the third experiment Cavendish put in a stiffer wire. The torsion coefficient could be calculated from this period and the mass and dimensions of the balance. In fact, the rod was never at rest; Cavendish had to measure the deflection angle of the rod while it was oscillating. Cavendish's equipment was remarkably sensitive for its time. The force involved in twisting the torsion balance was very small — equivalent to the weight of only 0.0177 milligrams — a minuscule fraction of the weight of the small balls themselves.
To prevent air currents and temperature changes from interfering with the measurements, Cavendish placed the entire apparatus in a mahogany box about 1.98 meters wide, 1.27 meters tall, and 14 cm thick, all in a closed shed on his estate. Through two holes in the walls of the shed, Cavendish used telescopes to observe the movement of the torsion balance's horizontal rod. The key observable was the deflection of the torsion balance rod, which Cavendish measured to be about 0.16" (or only 0.03" for the stiffer wire used in most of the trials). Cavendish was able to measure this small deflection precisely using vernier scales on the ends of the rod. The accuracy of Cavendish's result was not exceeded until C. V. Boys' experiment in 1895. In time, Michell's torsion balance became the dominant technique for measuring the gravitational constant (G), and most contemporary measurements still use variations of it. Cavendish's result provided additional evidence for a planetary core made of metal, an idea first proposed by Charles Hutton based on his analysis of the 1774 Schiehallion experiment. Cavendish's result of 5.4 g·cm−3, 23% bigger than Hutton's, is close to 80% of the density of liquid iron, and 80% higher than the density of the Earth's outer crust, suggesting the existence of a dense iron core. Reformulation of Cavendish's result to G The formulation of Newtonian gravity in terms of a gravitational constant did not become standard until long after Cavendish's time. Indeed, one of the first references to G is in 1873, 75 years after Cavendish's work. Cavendish expressed his result in terms of the density of the Earth. He referred to his experiment in correspondence as 'weighing the world'. Later authors reformulated his results in modern terms. After converting to SI units, Cavendish's value for the Earth's density, 5.448 g cm−3, gives G = 6.74×10−11 m3 kg−1 s−2, which differs by only 1% from the 2014 CODATA value of 6.674×10−11 m3 kg−1 s−2. Today, physicists often use units where the gravitational constant takes a different form. The Gaussian gravitational constant used in space dynamics is a defined constant, and the Cavendish experiment can be considered as a measurement of this constant. In Cavendish's time, physicists used the same units for mass and weight, in effect taking g as a standard acceleration. Then, since the Earth's radius R was known, the density ρ played the role of an inverse gravitational constant. The density of the Earth was hence a much sought-after quantity at the time, and there had been earlier attempts to measure it, such as the Schiehallion experiment in 1774. Derivation of G and the Earth's mass The following is not the method Cavendish used, but describes how modern physicists would calculate the results from his experiment. From Hooke's law, the torque on the torsion wire is proportional to the deflection angle θ of the balance. The torque is κθ, where κ is the torsion coefficient of the wire. However, a torque in the opposite direction is also generated by the gravitational pull of the masses. It can be written as a product of the attractive force of a large ball on a small ball and the distance L/2 from the small ball to the suspension wire, where L is the length of the balance rod. Since there are two balls, each experiencing force F at a distance L/2 from the axis of the balance, the torque due to the gravitational force is LF. At equilibrium (when the balance has been stabilized at an angle θ), the total amount of torque must be zero, as these two sources of torque balance out.
Thus, we can equate the two torque magnitudes given by the formulas above, which gives the following: κθ = LF.  (1) For F, Newton's law of universal gravitation is used to express the attractive force between a large and a small ball: F = GmM/r², where m is the mass of a small ball, M the mass of a large ball, and r the distance between their centres. Substituting F into equation (1) gives κθ = LGmM/r². To find the torsion coefficient (κ) of the wire, Cavendish measured the natural resonant oscillation period T of the torsion balance: T = 2π√(I/κ). Assuming the mass of the torsion beam itself is negligible, the moment of inertia of the balance is just due to the small balls. Treating them as point masses, each at L/2 from the axis, gives I = 2m(L/2)² = mL²/2, and so T = 2π√(mL²/(2κ)). Solving this for κ, substituting into (1), and rearranging for G, the result is G = 2π²Lθr²/(MT²). Once G has been found, the attraction of an object at the Earth's surface to the Earth itself can be used to calculate the Earth's mass and density: mg = GmM_earth/R_earth², so that M_earth = gR_earth²/G and ρ_earth = 3g/(4πGR_earth). References Sources Establishes that Cavendish didn't determine G. Discusses Michell's contributions, and whether Cavendish determined G. Review of gravity measurements since 1740. External links Cavendish’s experiment in the Feynman Lectures on Physics Sideways Gravity in the Basement, The Citizen Scientist, July 1, 2005. Homebrew Cavendish experiment, showing calculation of results and precautions necessary to eliminate wind and electrostatic errors. "Big 'G'", Physics Central, retrieved Dec. 8, 2013. Experiment at Univ. of Washington to measure the gravitational constant using variation of Cavendish method. Discusses current state of measurements of G. Model of Cavendish's torsion balance, retrieved Aug. 28, 2007, at Science Museum, London. Physics experiments 1790s in science 1797 in science 1798 in science Geodesy Gravity Royal Society
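The algebra above condenses into two one-line calculations: G = 2π²Lθr²/(MT²) from the torsion-balance quantities, and M_earth = gR_earth²/G once G is known (equivalently, G = 3g/(4πR_earth ρ) directly from the measured mean density). The sketch below evaluates both; the apparatus values are made-up but plausible numbers chosen only to show the mechanics of the calculation, not a reconstruction of Cavendish's actual data.

```python
import math

def big_g_from_torsion_balance(L, theta, r, M, T):
    """G = 2*pi^2 * L * theta * r^2 / (M * T^2), as derived above."""
    return 2.0 * math.pi**2 * L * theta * r**2 / (M * T**2)

def big_g_from_density(rho, g=9.81, R=6.371e6):
    """G from the Earth's mean density: g = (4/3)*pi*G*R*rho  =>  G = 3g/(4*pi*R*rho)."""
    return 3.0 * g / (4.0 * math.pi * R * rho)

def earth_mass(G, g=9.81, R=6.371e6):
    """Earth's mass from surface gravity: M_earth = g * R^2 / G."""
    return g * R**2 / G

# Illustrative (not historical) apparatus values:
G1 = big_g_from_torsion_balance(L=1.8,         # length of the balance rod, m
                                theta=5.0e-3,  # equilibrium deflection angle, rad
                                r=0.22,        # centre-to-centre ball separation, m
                                M=158.0,       # mass of each large ball, kg
                                T=900.0)       # oscillation period, s
G2 = big_g_from_density(rho=5448.0)            # Cavendish's density, 5.448 g/cm^3
print(f"G (torsion balance) ~ {G1:.2e} m^3 kg^-1 s^-2")
print(f"G (from density)    ~ {G2:.2e} m^3 kg^-1 s^-2")
print(f"Earth mass          ~ {earth_mass(G2):.2e} kg")
```

Both routes land within a few percent of the modern value of G, and the implied Earth mass is about 6×10^24 kg.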
Cavendish experiment
[ "Physics", "Mathematics" ]
1,801
[ "Applied mathematics", "Geodesy", "Experimental physics", "Physics experiments" ]
318,742
https://en.wikipedia.org/wiki/Hyperbolic%20space
In mathematics, hyperbolic space of dimension n is the unique simply connected, n-dimensional Riemannian manifold of constant sectional curvature equal to −1. It is homogeneous, and satisfies the stronger property of being a symmetric space. There are many ways to construct it as an open subset of R^n with an explicitly written Riemannian metric; such constructions are referred to as models. Hyperbolic 2-space, H2, which was the first instance studied, is also called the hyperbolic plane. It is also sometimes referred to as Lobachevsky space or Bolyai–Lobachevsky space after the names of the mathematicians who first published on the topic of hyperbolic geometry. Sometimes the qualificative "real" is added to distinguish it from complex hyperbolic spaces. Hyperbolic space serves as the prototype of a Gromov hyperbolic space, which is a far-reaching notion including differential-geometric as well as more combinatorial spaces via a synthetic approach to negative curvature. Another generalisation is the notion of a CAT(−1) space. Formal definition and models Definition The n-dimensional hyperbolic space or hyperbolic n-space, usually denoted Hn, is the unique simply connected, n-dimensional complete Riemannian manifold with a constant negative sectional curvature equal to −1. The uniqueness means that any two Riemannian manifolds that satisfy these properties are isometric to each other. It is a consequence of the Killing–Hopf theorem. Models of hyperbolic space To prove the existence of such a space as described above one can explicitly construct it, for example as an open subset of R^n with a Riemannian metric given by a simple formula. There are many such constructions or models of hyperbolic space, each suited to different aspects of its study. They are isometric to each other according to the previous paragraph, and in each case an explicit isometry can be explicitly given. Here is a list of the better-known models which are described in more detail in their namesake articles: Poincaré half-space model: this is the upper half-space {(x1, ..., xn) : xn > 0} with the metric ds² = (dx1² + ... + dxn²)/xn². Poincaré disc model: this is the unit ball of R^n with the metric ds² = 4(dx1² + ... + dxn²)/(1 − |x|²)². The isometry to the half-space model can be realised by a homography sending a point of the unit sphere to infinity. Hyperboloid model: In contrast with the previous two models this realises hyperbolic n-space as isometrically embedded inside the (n+1)-dimensional Minkowski space (which is not a Riemannian but rather a Lorentzian manifold). More precisely, looking at the quadratic form q(x) = x1² + ... + xn² − x(n+1)² on R^(n+1), its restriction to the tangent spaces of the upper sheet of the hyperboloid given by q(x) = −1, x(n+1) > 0 is positive definite, hence it endows the sheet with a Riemannian metric that turns out to be of constant curvature −1. The isometry to the previous models can be realised by stereographic projection from the hyperboloid to the plane {x(n+1) = 0}, taking the vertex from which to project to be (0, ..., 0, −1) for the ball and a point at infinity in the cone {q(x) = 0} inside projective space for the half-space. Beltrami–Klein model: This is another model realised on the unit ball of R^n; rather than being given as an explicit metric it is usually presented as obtained by using stereographic projection from the hyperboloid model in Minkowski space to its horizontal tangent plane (that is, {x(n+1) = 1}) from the origin. Symmetric space: Hyperbolic n-space can be realised as the symmetric space of the simple Lie group SO(n, 1) (the group of isometries of the quadratic form q with positive determinant); as a set it can be identified with the coset space SO(n, 1)°/SO(n), where SO(n, 1)° denotes the identity component.
The isometry to the hyperboloid model is immediate through the action of the connected component of SO(n, 1) on the hyperboloid. Geometric properties Parallel lines Hyperbolic space, developed independently by Nikolai Lobachevsky, János Bolyai and Carl Friedrich Gauss, is a geometric space analogous to Euclidean space, but such that Euclid's parallel postulate is no longer assumed to hold. Instead, the parallel postulate is replaced by the following alternative (in two dimensions): Given any line L and point P not on L, there are at least two distinct lines passing through P that do not intersect L. It is then a theorem that there are infinitely many such lines through P. This axiom still does not uniquely characterize the hyperbolic plane up to isometry; there is an extra constant, the curvature K < 0, that must be specified. However, it does uniquely characterize it up to homothety, meaning up to bijections that only change the notion of distance by an overall constant. By choosing an appropriate length scale, one can thus assume, without loss of generality, that K = −1. Euclidean embeddings The hyperbolic plane cannot be isometrically embedded into Euclidean 3-space by Hilbert's theorem. On the other hand the Nash embedding theorem implies that hyperbolic n-space can be isometrically embedded into some Euclidean space of larger dimension (5 for the hyperbolic plane by the Nash embedding theorem). When isometrically embedded to a Euclidean space every point of a hyperbolic space is a saddle point. Volume growth and isoperimetric inequality The volume of balls in hyperbolic space increases exponentially with respect to the radius of the ball rather than polynomially as in Euclidean space. Namely, if B(r) is any ball of radius r in Hn then: Vol(B(r)) = Ω(n−1) ∫0^r sinh^(n−1)(t) dt, where Ω(n−1) is the total volume of the Euclidean (n−1)-sphere of radius 1. The hyperbolic space also satisfies a linear isoperimetric inequality, that is there exists a constant C such that any embedded disk whose boundary has length l has area at most C·l. This is to be contrasted with Euclidean space where the isoperimetric inequality is quadratic. Other metric properties There are many more metric properties of hyperbolic space that differentiate it from Euclidean space. Some can be generalised to the setting of Gromov-hyperbolic spaces, which is a generalisation of the notion of negative curvature to general metric spaces using only the large-scale properties. A finer notion is that of a CAT(−1) space. Hyperbolic manifolds Every complete, connected, simply connected manifold of constant negative curvature −1 is isometric to the real hyperbolic space Hn. As a result, the universal cover of any closed manifold M of constant negative curvature −1, which is to say, a hyperbolic manifold, is Hn. Thus, every such M can be written as Hn/Γ, where Γ is a torsion-free discrete group of isometries of Hn. That is, Γ is a lattice in SO(n, 1). Riemann surfaces Two-dimensional hyperbolic surfaces can also be understood according to the language of Riemann surfaces. According to the uniformization theorem, every Riemann surface is either elliptic, parabolic or hyperbolic. Most hyperbolic surfaces have a non-trivial fundamental group π1 = Γ; the groups that arise this way are known as Fuchsian groups. The quotient space H2/Γ of the upper half-plane modulo the fundamental group is known as the Fuchsian model of the hyperbolic surface. The Poincaré half plane is also hyperbolic, but is simply connected and noncompact. It is the universal cover of the other hyperbolic surfaces.
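Two of the facts above are easy to check numerically. In the Poincaré ball model the distance between points u and v has the standard closed form d(u, v) = arcosh(1 + 2|u − v|²/((1 − |u|²)(1 − |v|²))), and the ball-volume formula Vol(B(r)) = Ω(n−1) ∫0^r sinh^(n−1)(t) dt can be integrated numerically to see the exponential growth. The sketch below (Python) is only an illustration of these formulas, not a general hyperbolic-geometry library.

```python
import math

def poincare_ball_distance(u, v):
    """Hyperbolic distance between points u, v inside the unit ball (Poincare model)."""
    diff2 = sum((a - b) ** 2 for a, b in zip(u, v))
    nu2 = sum(a * a for a in u)
    nv2 = sum(b * b for b in v)
    return math.acosh(1.0 + 2.0 * diff2 / ((1.0 - nu2) * (1.0 - nv2)))

def unit_sphere_area(n):
    """Surface area of the unit (n-1)-sphere in R^n: 2*pi^(n/2) / Gamma(n/2)."""
    return 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)

def hyperbolic_ball_volume(n, r, steps=20000):
    """Vol(B(r)) in H^n via midpoint integration of sinh(t)^(n-1)."""
    h = r / steps
    integral = sum(math.sinh((i + 0.5) * h) ** (n - 1) for i in range(steps)) * h
    return unit_sphere_area(n) * integral

def euclidean_ball_volume(n, r):
    return unit_sphere_area(n) / n * r ** n

print(poincare_ball_distance((0.0, 0.0), (0.5, 0.0)))   # ~1.0986 = log 3
print(poincare_ball_distance((0.0, 0.0), (0.9, 0.0)))   # grows quickly near the boundary
for r in (1, 2, 4, 8):                                   # exponential vs. polynomial growth, n = 3
    print(r, round(hyperbolic_ball_volume(3, r), 2), round(euclidean_ball_volume(3, r), 2))
```

The printed table makes the contrast of the isoperimetric discussion concrete: by radius 8 the hyperbolic ball volume dwarfs the Euclidean one by several orders of magnitude.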
The analogous construction for three-dimensional hyperbolic surfaces is the Kleinian model. See also Dini's surface Hyperbolic 3-manifold Ideal polyhedron Mostow rigidity theorem Murakami–Yano formula Pseudosphere References Footnotes Bibliography Ratcliffe, John G., Foundations of hyperbolic manifolds, New York, Berlin. Springer-Verlag, 1994. Reynolds, William F. (1993) "Hyperbolic Geometry on a Hyperboloid", American Mathematical Monthly 100:442–455. Wolf, Joseph A. Spaces of constant curvature, 1967. See page 67. Homogeneous spaces Hyperbolic geometry Topological spaces
Hyperbolic space
[ "Physics", "Mathematics" ]
1,605
[ "Mathematical structures", "Group actions", "Homogeneous spaces", "Space (mathematics)", "Topological spaces", "Topology", "Geometry", "Symmetry" ]
318,920
https://en.wikipedia.org/wiki/Dosimetry
Radiation dosimetry in the fields of health physics and radiation protection is the measurement, calculation and assessment of the ionizing radiation dose absorbed by an object, usually the human body. This applies both internally, due to ingested or inhaled radioactive substances, or externally due to irradiation by sources of radiation. Internal dosimetry assessment relies on a variety of monitoring, bio-assay or radiation imaging techniques, whilst external dosimetry is based on measurements with a dosimeter, or inferred from measurements made by other radiological protection instruments. Radiation dosimetry is extensively used for radiation protection; routinely applied to monitor occupational radiation workers, where irradiation is expected, or where radiation is unexpected, such as in the contained aftermath of the Three Mile Island, Chernobyl or Fukushima radiological release incidents. The public dose take-up is measured and calculated from a variety of indicators such as ambient measurements of gamma radiation, radioactive particulate monitoring, and the measurement of levels of radioactive contamination. Other significant radiation dosimetry areas are medical, where the required treatment absorbed dose and any collateral absorbed dose is monitored, and environmental, such as radon monitoring in buildings. Measuring radiation dose External dose There are several ways of measuring absorbed doses from ionizing radiation. People in occupational contact with radioactive substances, or who may be exposed to radiation, routinely carry personal dosimeters. These are specifically designed to record and indicate the dose received. Traditionally, these were lockets fastened to the external clothing of the monitored person, which contained photographic film known as film badge dosimeters. These have been largely replaced with other devices such as Thermoluminescent dosimetry(TLD), optically stimulated luminescence(OSL), or Fluorescent Nuclear Tract Detector(FNTD) badges. The International Committee on Radiation Protection (ICRP) guidance states that if a personal dosimeter is worn on a position on the body representative of its exposure, assuming whole-body exposure, the value of Personal Dose Equivalent Hp(10), is sufficient to estimate an effective dose value suitable for radiological protection. Personal Dose Equivalent is a radiation quantity specifically designed to be used for radiation measurements by personal dosimeters. Dosimeters are known as "legal dosimeters" if they have been approved for use in recording personnel dose for regulatory purposes. In cases of non-uniform irradiation such personal dosimeters may not be representative of certain specific areas of the body, where additional dosimeters are used in the area of concern. A number of electronic devices known as Electronic Personal Dosimeters (EPDs) have come into general use using semiconductor detection and programmable processor technology. These are worn as badges but can give an indication of instantaneous dose rate and an audible and visual alarm if a dose rate or a total integrated dose is exceeded. A good deal of information can be made immediately available to the wearer of the recorded dose and current dose rate via a local display. They can be used as the main stand-alone dosimeter, or as a supplement to other devices. EPD's are particularly useful for real-time monitoring of dose where a high dose rate is expected which will time-limit the wearer's exposure. 
In certain circumstances, a dose can be inferred from readings taken by fixed instrumentation in an area in which the person concerned has been working. This would generally only be used if personal dosimetry had not been issued, or a personal dosimeter has been damaged or lost. Such calculations would take a pessimistic view of the likely received dose. Internal dose Internal dosimetry is used to evaluate the committed dose due to the intake of radionuclides into the human body. Medical dosimetry Medical dosimetry is the calculation of absorbed dose and optimization of dose delivery in radiation therapy. It is often performed by a professional health physicist with specialized training in that field. In order to plan the delivery of radiation therapy, the radiation produced by the sources is usually characterized with percentage depth dose curves and dose profiles measured by a medical physicist. In radiation therapy, three-dimensional dose distributions are often evaluated using a technique known as gel dosimetry. Environmental dosimetry Environmental dosimetry is used where it is likely that the environment will generate a significant radiation dose. An example of this is radon monitoring. The largest single source of radiation exposure to the general public is naturally occurring radon gas, which comprises approximately 55% of the annual background dose. It is estimated that radon is responsible for 10% of lung cancers in the United States. Radon is a radioactive gas generated by the decay of uranium, which is present in varying amounts in the Earth's crust. Certain geographic areas, due to the underlying geology, continually generate radon which permeates its way to the Earth's surface. In some cases the dose can be significant in buildings where the gas can accumulate. A number of specialised dosimetry techniques are used to evaluate the dose that a building's occupants may receive. Radiation exposure monitoring Records of legal dosimetry results are usually kept for a set period of time, depending upon the legal requirements of the nation in which they are used. Medical radiation exposure monitoring is the practice of collecting dose information from radiology equipment and using the data to help identify opportunities to reduce unnecessary dose in medical situations. Measures of dose To enable consideration of stochastic health risk, calculations are performed to convert the physical quantity absorbed dose into equivalent and effective doses, the details of which depend on the radiation type and biological context. For applications in radiation protection and dosimetry assessment the (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) have published recommendations and data which are used to calculate these. Units of measure There are a number of different measures of radiation dose, including absorbed dose (D) measured in: gray (Gy) energy absorbed per unit of mass (J·kg−1) Equivalent dose (H) measured in sieverts (Sv) Effective dose (E) measured in sieverts Kerma (K) measured in grays dose area product (DAP) measured in gray centimeters2 dose length product (DLP) measured in gray centimeters rads a deprecated unit of absorbed radiation dose, defined as 1 rad = 0.01 Gy = 0.01 J/kg Roentgen a legacy unit of measurement for the exposure of X-rays Each measure is often simply described as ‘dose’, which can lead to confusion. Non-SI units are still used, particularly in the USA, where dose is often reported in rads and dose equivalent in rems. 
By definition, 1 Gy = 100 rad and 1 Sv = 100 rem. The fundamental quantity is the absorbed dose (D), which is defined as the mean energy imparted [by ionising radiation] (dE) per unit mass (dm) of material (D = dE/dm) The SI unit of absorbed dose is the gray (Gy) defined as one joule per kilogram. Absorbed dose, as a point measurement, is suitable for describing localised (i.e. partial organ) exposures such as tumour dose in radiotherapy. It may be used to estimate stochastic risk provided the amount and type of tissue involved is stated. Localised diagnostic dose levels are typically in the 0–50 mGy range. At a dose of 1 milligray (mGy) of photon radiation, each cell nucleus is crossed by an average of 1 liberated electron track. Equivalent dose The absorbed dose required to produce a certain biological effect varies between different types of radiation, such as photons, neutrons or alpha particles. This is taken into account by the equivalent dose (H), which is defined as the mean dose to organ T by radiation type R (DT,R), multiplied by a weighting factor WR . This designed to take into account the biological effectiveness (RBE) of the radiation type, For instance, for the same absorbed dose in Gy, alpha particles are 20 times as biologically potent as X or gamma rays. The measure of ‘dose equivalent’ is not organ averaged and now only used for "operational quantities". Equivalent dose is designed for estimation of stochastic risks from radiation exposures. Stochastic effect is defined for radiation dose assessment as the probability of cancer induction and genetic damage. As dose is averaged over the whole organ; equivalent dose is rarely suitable for evaluation of acute radiation effects or tumour dose in radiotherapy. In the case of estimation of stochastic effects, assuming a linear dose response, this averaging out should make no difference as the total energy imparted remains the same. Effective dose Effective dose is the central dose quantity for radiological protection used to specify exposure limits to ensure that the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided. It is difficult to compare the stochastic risk from localised exposures of different parts of the body (e.g. a chest x-ray compared to a CT scan of the head), or to compare exposures of the same body part but with different exposure patterns (e.g. a cardiac CT scan with a cardiac nuclear medicine scan). One way to avoid this problem is to simply average out a localised dose over the whole body. The problem of this approach is that the stochastic risk of cancer induction varies from one tissue to another. The effective dose E is designed to account for this variation by the application of specific weighting factors for each tissue (WT). Effective dose provides the equivalent whole body dose that gives the same risk as the localised exposure. It is defined as the sum of equivalent doses to each organ (HT), each multiplied by its respective tissue weighting factor (WT). Weighting factors are calculated by the International Commission for Radiological Protection (ICRP), based on the risk of cancer induction for each organ and adjusted for associated lethality, quality of life and years of life lost. Organs that are remote from the site of irradiation will only receive a small equivalent dose (mainly due to scattering) and therefore contribute little to the effective dose, even if the weighting factor for that organ is high. 
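The two weighted sums just defined (H_T = Σ_R w_R·D_{T,R} and E = Σ_T w_T·H_T) are straightforward to evaluate. The sketch below (Python) uses radiation and tissue weighting factors of the kind recommended by the ICRP purely for illustration; for any real assessment the factors should be taken from the current ICRP publications, and the exposure numbers here are invented.

```python
# Sketch of the equivalent-dose and effective-dose sums described above:
#   H_T = sum over radiation types R of w_R * D_{T,R}   (equivalent dose, Sv)
#   E   = sum over tissues T of w_T * H_T               (effective dose, Sv)
# Weighting factors below are representative ICRP-style values, included only
# for illustration; consult current ICRP recommendations for authoritative figures.

RADIATION_WEIGHT = {"photon": 1.0, "electron": 1.0, "alpha": 20.0}

def equivalent_dose(absorbed_dose_by_radiation):
    """absorbed_dose_by_radiation: {radiation type: absorbed dose in Gy} -> Sv."""
    return sum(RADIATION_WEIGHT[r] * d for r, d in absorbed_dose_by_radiation.items())

def effective_dose(equivalent_dose_by_tissue, tissue_weights):
    """equivalent_dose_by_tissue: {tissue: H_T in Sv} -> effective dose in Sv."""
    return sum(tissue_weights[t] * h for t, h in equivalent_dose_by_tissue.items())

h_lung = equivalent_dose({"photon": 0.001, "alpha": 0.0002})  # 1 mGy gamma + 0.2 mGy alpha
tissue_weights = {"lung": 0.12, "remainder": 0.12}            # illustrative values only
e = effective_dose({"lung": h_lung, "remainder": 0.0005}, tissue_weights)
print(f"H_lung = {h_lung * 1000:.1f} mSv, E = {e * 1000:.2f} mSv")
```

The example shows why a small absorbed dose of alpha radiation dominates the equivalent dose, and why a localised exposure contributes only a weighted fraction to the whole-body effective dose.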
Effective dose is used to estimate stochastic risks for a ‘reference’ person, which is an average of the population. It is not suitable for estimating stochastic risk for individual medical exposures, and is not used to assess acute radiation effects. Dose versus source or field strength Radiation dose refers to the amount of energy deposited in matter and/or biological effects of radiation, and should not be confused with the unit of radioactive activity (becquerel, Bq) of the source of radiation, or the strength of the radiation field (fluence). The article on the sievert gives an overview of dose types and how they are calculated. Exposure to a source of radiation will give a dose which is dependent on many factors, such as the activity, duration of exposure, energy of the radiation emitted, distance from the source and amount of shielding. Background radiation The worldwide average background dose for a human being is about 3.5 mSv per year , mostly from cosmic radiation and natural isotopes in the earth. The largest single source of radiation exposure to the general public is naturally occurring radon gas, which comprises approximately 55% of the annual background dose. It is estimated that radon is responsible for 10% of lung cancers in the United States. Calibration standards for measuring instruments Because the human body is approximately 70% water and has an overall density close to 1 g/cm3, dose measurement is usually calculated and calibrated as dose to water. National standards laboratories such as the National Physical Laboratory, UK (NPL) provide calibration factors for ionization chambers and other measurement devices to convert from the instrument's readout to absorbed dose. The standards laboratories operates as a primary standard, which is normally calibrated by absolute calorimetry (the warming of substances when they absorb energy). A user sends their secondary standard to the laboratory, where it is exposed to a known amount of radiation (derived from the primary standard) and a factor is issued to convert the instrument's reading to that dose. The user may then use their secondary standard to derive calibration factors for other instruments they use, which then become tertiary standards, or field instruments. The NPL operates a graphite-calorimeter for absolute photon dosimetry. Graphite is used instead of water as its specific heat capacity is one-sixth that of water and therefore the temperature increase in graphite is 6 times higher than the equivalent in water and measurements are more accurate. Significant problems exist in insulating the graphite from the surrounding environment in order to measure the tiny temperature changes. A lethal dose of radiation to a human is approximately 10–20 Gy. This is 10–20 joules per kilogram. A 1 cm3 piece of graphite weighing 2 grams would therefore absorb around 20–40 mJ. With a specific heat capacity of around 700 J·kg−1·K−1, this equates to a temperature rise of just 20 mK. Dosimeters in radiotherapy (linear particle accelerator in external beam therapy) are routinely calibrated using ionization chambers or diode technology or gel dosimeters. Radiation-related quantities The following table shows radiation quantities in SI and non-SI units. Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union European units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985. 
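The calorimetry arithmetic quoted above (a lethal-level dose deposited in a 2 g graphite sample) reduces to ΔT = E/(m·c) = D/c, where D is the absorbed dose in Gy and c the specific heat capacity. A quick check with the numbers given in the text:

```python
# Reproducing the graphite-calorimeter estimate from the text: energy absorbed
# by a 2 g graphite sample at a lethal-level dose, and the temperature rise.
specific_heat_graphite = 700.0   # J kg^-1 K^-1 (approximate, as quoted above)
mass = 0.002                     # 2 g sample, in kg

for dose_gy in (10.0, 20.0):     # lethal dose range quoted above; 1 Gy = 1 J/kg
    energy = dose_gy * mass                              # joules absorbed
    delta_t = energy / (mass * specific_heat_graphite)   # equals dose / c
    print(f"{dose_gy:4.0f} Gy -> {energy * 1000:4.0f} mJ absorbed, dT ~ {delta_t * 1000:.0f} mK")
```

The result is of the order of tens of millikelvin, consistent with the figure quoted in the text and illustrating why thermal insulation of the graphite is the hard part of absolute calorimetry.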
See also Computational human phantom Health effects of radon Radiation dose reconstruction Notes References External links Ionization chamber – "The confusing world of radiation dosimetry" – M.A. Boyd, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems. Tim Stephens and Keith Pantridge, 'Dosimetry, Personal Monitoring Film' (a short article on Dosimetry from the point of view of its relation to photography, in Philosophy of Photography, volume 2, number 2, 2011, pp. 153–158.) Radiobiology Radiation therapy Nuclear physics Medical physics Radiation protection
Dosimetry
[ "Physics", "Chemistry", "Biology" ]
2,933
[ "Applied and interdisciplinary physics", "Radiobiology", "Medical physics", "Nuclear physics", "Radioactivity" ]
319,341
https://en.wikipedia.org/wiki/Guidance%20system
A guidance system is a virtual or physical device, or a group of devices implementing a controlling the movement of a ship, aircraft, missile, rocket, satellite, or any other moving object. Guidance is the process of calculating the changes in position, velocity, altitude, and/or rotation rates of a moving object required to follow a certain trajectory and/or altitude profile based on information about the object's state of motion. A guidance system is usually part of a Guidance, navigation and control system, whereas navigation refers to the systems necessary to calculate the current position and orientation based on sensor data like those from compasses, GPS receivers, Loran-C, star trackers, inertial measurement units, altimeters, etc. The output of the navigation system, the navigation solution, is an input for the guidance system, among others like the environmental conditions (wind, water, temperature, etc.) and the vehicle's characteristics (i.e. mass, control system availability, control systems correlation to vector change, etc.). In general, the guidance system computes the instructions for the control system, which comprises the object's actuators (e.g., thrusters, reaction wheels, body flaps, etc.), which are able to manipulate the path and orientation of the object without direct or continuous human control. One of the earliest examples of a true guidance system is that used in the German V-1 during World War II. The navigation system consisted of a simple gyroscope, an airspeed sensor, and an altimeter. The guidance instructions were target altitude, target velocity, cruise time, and engine cut off time. A guidance system has three major sub-sections: Inputs, Processing, and Outputs. The input section includes sensors, course data, radio and satellite links, and other information sources. The processing section, composed of one or more CPUs, integrates this data and determines what actions, if any, are necessary to maintain or achieve a proper heading. This is then fed to the outputs which can directly affect the system's course. The outputs may control speed by interacting with devices such as turbines, and fuel pumps, or they may more directly alter course by actuating ailerons, rudders, or other devices. History Inertial guidance systems were originally developed for rockets. American rocket pioneer Robert Goddard experimented with rudimentary gyroscopic systems. Dr. Goddard's systems were of great interest to contemporary German pioneers including Wernher von Braun. The systems entered more widespread use with the advent of spacecraft, guided missiles, and commercial airliners. US guidance history centers around 2 distinct communities. One driven out of Caltech and NASA Jet Propulsion Laboratory, the other from the German scientists that developed the early V2 rocket guidance and MIT. The GN&C system for V2 provided many innovations and was the most sophisticated military weapon in 1942 using self-contained closed loop guidance. Early V2s leveraged 2 gyroscopes and lateral accelerometer with a simple analog computer to adjust the azimuth for the rocket in flight. Analog computer signals were used to drive 4 external rudders on the tail fins for flight control. Von Braun engineered the surrender of 500 of his top rocket scientists, along with plans and test vehicles, to the Americans. They arrived in Fort Bliss, Texas in 1945 and were subsequently moved to Huntsville, Alabama, in 1950 (aka Redstone arsenal). Von Braun's passion was interplanetary space flight. 
However his tremendous leadership skills and experience with the V-2 program made him invaluable to the US military. In 1955 the Redstone team was selected to put America's first satellite into orbit putting this group at the center of both military and commercial space. The Jet Propulsion Laboratory traces its history from the 1930s, when Caltech professor Theodore von Karman conducted pioneering work in rocket propulsion. Funded by Army Ordnance in 1942, JPL's early efforts would eventually involve technologies beyond those of aerodynamics and propellant chemistry. The result of the Army Ordnance effort was JPL's answer to the German V-2 missile, named MGM-5 Corporal, first launched in May 1947. On December 3, 1958, two months after the National Aeronautics and Space Administration (NASA) was created by Congress, JPL was transferred from Army jurisdiction to that of this new civilian space agency. This shift was due to the creation of a military focused group derived from the German V2 team. Hence, beginning in 1958, NASA JPL and the Caltech crew became focused primarily on unmanned flight and shifted away from military applications with a few exceptions. The community surrounding JPL drove tremendous innovation in telecommunication, interplanetary exploration and earth monitoring (among other areas). In the early 1950s, the US government wanted to insulate itself against over dependency on the German team for military applications. Among the areas that were domestically "developed" was missile guidance. In the early 1950s the MIT Instrumentation Laboratory (later to become the Charles Stark Draper Laboratory, Inc.) was chosen by the Air Force Western Development Division to provide a self-contained guidance system backup to Convair in San Diego for the new Atlas intercontinental ballistic missile. The technical monitor for the MIT task was a young engineer named Jim Fletcher who later served as the NASA Administrator. The Atlas guidance system was to be a combination of an on-board autonomous system, and a ground-based tracking and command system. This was the beginning of a philosophic controversy, which, in some areas, remains unresolved. The self-contained system finally prevailed in ballistic missile applications for obvious reasons. In space exploration, a mixture of the two remains. In the summer of 1952, Dr. Richard Battin and Dr. J. Halcombe ("Hal") Laning Jr., researched computational based solutions to guidance as computing began to step out of the analog approach. As computers of that time were very slow (and missiles very fast) it was extremely important to develop programs that were very efficient. Dr. J. Halcombe Laning, with the help of Phil Hankins and Charlie Werner, initiated work on MAC, an algebraic programming language for the IBM 650, which was completed by early spring of 1958. MAC became the work-horse of the MIT lab. MAC is an extremely readable language having a three-line format, vector-matrix notations and mnemonic and indexed subscripts. Today's Space Shuttle (STS) language called HAL, (developed by Intermetrics, Inc.) is a direct offshoot of MAC. Since the principal architect of HAL was Jim Miller, who co-authored with Hal Laning a report on the MAC system, it is a reasonable speculation that the space shuttle language is named for Jim's old mentor, and not, as some have suggested, for the electronic superstar of the Arthur Clarke movie "2001-A Space Odyssey." 
(Richard Battin, AIAA 82–4075, April 1982) Hal Laning and Richard Battin undertook the initial analytical work on the Atlas inertial guidance in 1954. Other key figures at Convair were Charlie Bossart, the Chief Engineer, and Walter Schweidetzky, head of the guidance group. Walter had worked with Wernher von Braun at Peenemuende during World War II. The initial "Delta" guidance system assessed the difference in position from a reference trajectory. A velocity to be gained (VGO) calculation is made to correct the current trajectory with the objective of driving VGO to Zero. The mathematics of this approach were fundamentally valid, but dropped because of the challenges in accurate inertial navigation (e.g. IMU Accuracy) and analog computing power. The challenges faced by the "Delta" efforts were overcome by the "Q system" of guidance. The "Q" system's revolution was to bind the challenges of missile guidance (and associated equations of motion) in the matrix Q. The Q matrix represents the partial derivatives of the velocity with respect to the position vector. A key feature of this approach allowed for the components of the vector cross product (v, xdv,/dt) to be used as the basic autopilot rate signals-a technique that became known as "cross-product steering." The Q-system was presented at the first Technical Symposium on Ballistic Missiles held at the Ramo-Wooldridge Corporation in Los Angeles on June 21 and 22, 1956. The "Q System" was classified information through the 1960s. Derivations of this guidance are used for today's military missiles. The CSDL team remains a leader in the military guidance and is involved in projects for most divisions of the US military. On August 10 of 1961 NASA awarded MIT a contract for preliminary design study of a guidance and navigation system for Apollo program. (see Apollo on-board guidance, navigation, and control system, Dave Hoag, International Space Hall of Fame Dedication Conference in Alamogordo, N.M., October 1976 ). Today's space shuttle guidance is named PEG4 (Powered Explicit Guidance). It takes into account both the Q system and the predictor-corrector attributes of the original "Delta" System (PEG Guidance). Although many updates to the shuttles navigation system have taken place over the last 30 years (ex. GPS in the OI-22 build), the guidance core of today's Shuttle GN&C system has evolved little. Within a manned system, there is a human interface needed for the guidance system. As Astronauts are the customer for the system, many new teams are formed that touch GN&C as it is a primary interface to "fly" the vehicle. For the Apollo and STS (Shuttle system) CSDL "designed" the guidance, McDonnell Douglas wrote the requirements and IBM programmed the requirements. Much system complexity within manned systems is driven by "redundancy management" and the support of multiple "abort" scenarios that provide for crew safety. Manned US Lunar and Interplanetary guidance systems leverage many of the same guidance innovations (described above) developed in the 1950s. So while the core mathematical construct of guidance has remained fairly constant, the facilities surrounding GN&C continue to evolve to support new vehicles, new missions and new hardware. The center of excellence for the manned guidance remains at MIT (CSDL) as well as the former McDonnell Douglas Space Systems (in Houston). 
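The "cross-product steering" idea mentioned above — using the cross product of a velocity vector (commonly presented in terms of the velocity-to-be-gained, V_go) and its time derivative as the basic autopilot rate signal — can be sketched in a few lines. The snippet below is a conceptual illustration only: it is not drawn from any published guidance code, and the gain and vectors are placeholders.

```python
import numpy as np

def velocity_to_be_gained(v_required, v_current):
    """V_go: the velocity change still needed to reach the required cutoff velocity."""
    return v_required - v_current

def cross_product_steering_command(v_go, v_go_dot, gain=1.0):
    """Conceptual sketch: command a body rotation rate proportional to V_go x dV_go/dt.
    Driving this cross product towards zero keeps the change of V_go aligned with V_go,
    which is the intuition behind cross-product steering."""
    return gain * np.cross(v_go, v_go_dot)

v_go = velocity_to_be_gained(np.array([7600.0, 0.0, 0.0]),    # placeholder required velocity, m/s
                             np.array([5200.0, 150.0, 0.0]))  # placeholder current velocity, m/s
v_go_dot = np.array([-30.0, -2.0, 0.0])                       # placeholder rate of change of V_go
print(cross_product_steering_command(v_go, v_go_dot))
```

This is meant only to make the vector relationship concrete; the historical Q-system additionally supplies dV_go/dt from the Q matrix of partial derivatives described above.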
See also Automotive navigation system Autopilot Guide rail List of missiles Robotic navigation Precision-guided munition Guided bomb Missile Missile guidance Terminal guidance Proximity sensor Artillery fuze Magnetic proximity fuze Proximity fuze References Further reading An Introduction to the Mathematics and Methods of Astrodynamics, Revised Edition (AIAA Education Series) Richard Battin, May 1991 Space Guidance Evolution-A Personal Narrative, Richard Battin, AIAA 82–4075, April 1982 Military technology Uncrewed vehicles Applications of control engineering NASA spin-off technologies de:Navigationssystem stq:Autonavigation
Guidance system
[ "Engineering" ]
2,246
[ "Control engineering", "Applications of control engineering" ]
319,484
https://en.wikipedia.org/wiki/Law%20of%20tangents
In trigonometry, the law of tangents or tangent rule is a statement about the relationship between the tangents of two angles of a triangle and the lengths of the opposing sides. In Figure 1, a, b, and c are the lengths of the three sides of the triangle, and α, β, and γ are the angles opposite those three respective sides. The law of tangents states that (a − b)/(a + b) = tan((α − β)/2) / tan((α + β)/2). The law of tangents, although not as commonly known as the law of sines or the law of cosines, is equivalent to the law of sines, and can be used in any case where two sides and the included angle, or two angles and a side, are known. Proof To prove the law of tangents one can start with the law of sines: a/sin α = b/sin β = d, where d is the diameter of the circumcircle, so that a = d sin α and b = d sin β. It follows that (a − b)/(a + b) = (sin α − sin β)/(sin α + sin β). Using the trigonometric identity, the factor formula for sines specifically sin α ± sin β = 2 sin((α ± β)/2) cos((α ∓ β)/2), we get (a − b)/(a + b) = [sin((α − β)/2) cos((α + β)/2)] / [sin((α + β)/2) cos((α − β)/2)] = tan((α − β)/2) / tan((α + β)/2). As an alternative to using the identity for the sum or difference of two sines, one may cite the trigonometric identity tan((α ± β)/2) = (sin α ± sin β)/(cos α + cos β) (see tangent half-angle formula). Application The law of tangents can be used to compute the angles of a triangle in which two sides a and b and the enclosed angle γ are given. From tan((α − β)/2) = ((a − b)/(a + b)) tan((α + β)/2), where α + β = 180° − γ, compute the angle difference α − β; use that to calculate α and then β. Once an angle opposite a known side is computed, the remaining side c can be computed using the law of sines. In the time before electronic calculators were available, this method was preferable to an application of the law of cosines c = √(a² + b² − 2ab cos γ), as this latter law necessitated an additional lookup in a logarithm table, in order to compute the square root. In modern times the law of tangents may have better numerical properties than the law of cosines: If γ is small, and a and b are almost equal, then an application of the law of cosines leads to a subtraction of almost equal values, incurring catastrophic cancellation. Spherical version On a sphere of unit radius, the sides of the triangle are arcs of great circles. Accordingly, their lengths can be expressed in radians or any other units of angular measure. Let A, B, C be the angles at the three vertices of the triangle and let a, b, c be the respective lengths of the opposite sides. The spherical law of tangents says tan((A − B)/2) / tan((A + B)/2) = tan((a − b)/2) / tan((a + b)/2). History The law of tangents was discovered by Arab mathematician Abu al-Wafa in the 10th century. Ibn Muʿādh al-Jayyānī also described the law of tangents for planar triangles in the 11th century. The law of tangents for spherical triangles was described in the 13th century by Persian mathematician Nasir al-Din al-Tusi (1201–1274), who also presented the law of sines for plane triangles in his five-volume work Treatise on the Quadrilateral. Cyclic quadrilateral A generalization of the law of tangents holds for a cyclic quadrilateral, relating its side lengths and angle measures. It reduces to the law of tangents for a triangle when two adjacent vertices coincide, so that one side length shrinks to zero. See also Law of sines Law of cosines Law of cotangents Mollweide's formula Half-side formula Tangent half-angle formula Notes Trigonometry Articles containing proofs Theorems about triangles Angle
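The two-sides-and-included-angle procedure described in the Application section translates directly into a short calculation: obtain (α − β)/2 from the law of tangents, combine it with (α + β)/2 = (180° − γ)/2, and finish with the law of sines. The sketch below (Python) is a minimal illustration with an arbitrary example triangle.

```python
import math

def solve_sas_with_law_of_tangents(a, b, gamma_deg):
    """Given sides a, b and the included angle gamma (degrees),
    return the remaining angles alpha, beta (degrees) and side c."""
    gamma = math.radians(gamma_deg)
    half_sum = (math.pi - gamma) / 2.0                              # (alpha + beta) / 2
    half_diff = math.atan((a - b) / (a + b) * math.tan(half_sum))   # law of tangents
    alpha = half_sum + half_diff
    beta = half_sum - half_diff
    c = a * math.sin(gamma) / math.sin(alpha)                       # law of sines
    return math.degrees(alpha), math.degrees(beta), c

# Example: a = 8, b = 5, included angle gamma = 77 degrees.
alpha, beta, c = solve_sas_with_law_of_tangents(8.0, 5.0, 77.0)
print(f"alpha ~ {alpha:.2f} deg, beta ~ {beta:.2f} deg, c ~ {c:.3f}")
```

The same three angles and side are obtained from the law of cosines; the law-of-tangents route simply avoids the square root, which was the point of the logarithm-table remark above.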
Law of tangents
[ "Physics", "Mathematics" ]
670
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Articles containing proofs", "Wikipedia categories named after physical quantities", "Angle" ]
319,506
https://en.wikipedia.org/wiki/Transient-voltage-suppression%20diode
A transient-voltage-suppression (TVS) diode, also transil, transorb or thyrector, is an electronic component used to protect electronics from voltage spikes induced on connected wires. Description The device operates by shunting excess current when the induced voltage exceeds the avalanche breakdown potential. It is a clamping device, suppressing all overvoltages above its breakdown voltage. It automatically resets when the overvoltage goes away, but absorbs much more of the transient energy internally than a similarly rated crowbar device. A transient-voltage-suppression diode may be either unidirectional or bidirectional. A unidirectional device operates as a rectifier in the forward direction like any other avalanche diode, but is made and tested to handle very large peak currents. A bidirectional transient-voltage-suppression diode can be represented by two mutually opposing avalanche diodes in series with one another and connected in parallel with the circuit to be protected. While this representation is schematically accurate, physically the devices are now manufactured as a single component. A transient-voltage-suppression diode can respond to over-voltages faster than other common over-voltage protection components such as varistors or gas discharge tubes. The actual clamping occurs in roughly one picosecond, but in a practical circuit the inductance of the wires leading to the device imposes a higher limit. This makes transient-voltage-suppression diodes useful for protection against very fast and often damaging voltage transients. These fast over-voltage transients are present on all distribution networks and can be caused by either internal or external events, such as lightning or motor arcing. Transient voltage suppressors will fail if they are subjected to voltages or conditions beyond those that the particular product was designed to accommodate. There are three key modes in which the TVS will fail: short, open, and degraded device. TVS diodes are sometimes referred to as transorbs, from the Vishay trademark TransZorb. Characterization A TVS diode is characterized by: Leakage current: the amount of current conducted when voltage applied is below the maximum reverse standoff voltage. Maximum reverse standoff voltage: the voltage below which no significant conduction occurs. Breakdown voltage: the voltage at which some specified and significant conduction occurs. Clamping voltage: the voltage at which the device will conduct its fully rated current (hundreds to thousands of amperes). Parasitic capacitance: The nonconducting diode behaves like a capacitor, which can distort and corrupt high-speed signals. Lower capacitance is generally preferred. Parasitic inductance: Because the actual over voltage switching is so fast, the package inductance is the limiting factor for response speed. Amount of energy it can absorb: Because the transients are so brief, all of the energy is initially stored internally as heat; a heat sink only affects the time to cool down afterwards. Thus, a high-energy TVS must be physically large. If this capacity is too small, the over voltage will possibly destroy the device and leave the circuit unprotected. See also Surge protector Trisil Zener diode References Further reading TVS/Zener Theory and Design Considerations; ON Semiconductor; 127 pages; 2005; HBD854/D. 
(Free PDF download) External links What are TVS diodes, Semtech Application Note SI96-01 Transient Suppression Devices and Principles, Littelfuse Application Note AN9768 Transil™ / Trisil™ Comparison, ST application note AN574 Transient Protection Solutions: Transil™ diode versus Varistor, ST application note AN1826 Diodes Electric power systems components Voltage stability
Transient-voltage-suppression diode
[ "Physics" ]
779
[ "Voltage", "Voltage stability", "Physical quantities" ]
319,515
https://en.wikipedia.org/wiki/Silicon%20controlled%20rectifier
A silicon controlled rectifier or semiconductor controlled rectifier is a four-layer solid-state current-controlling device. The name "silicon controlled rectifier" is General Electric's trade name for a type of thyristor. The principle of four-layer p–n–p–n switching was developed by Moll, Tanenbaum, Goldey, and Holonyak of Bell Laboratories in 1956. The practical demonstration of silicon controlled switching and detailed theoretical behavior of a device in agreement with the experimental results was presented by Dr Ian M. Mackintosh of Bell Laboratories in January 1958. The SCR was developed by a team of power engineers led by Gordon Hall and commercialized by Frank W. "Bill" Gutzwiller in 1957. Some sources define silicon-controlled rectifiers and thyristors as synonymous while other sources define silicon-controlled rectifiers as a proper subset of the set of thyristors; the latter being devices with at least four layers of alternating n- and p-type material. According to Bill Gutzwiller, the terms "SCR" and "controlled rectifier" were earlier, and "thyristor" was applied later, as usage of the device spread internationally. SCRs are unidirectional devices (i.e. can conduct current only in one direction) as opposed to TRIACs, which are bidirectional (i.e. charge carriers can flow through them in either direction). SCRs can be triggered normally only by a positive current going into the gate as opposed to TRIACs, which can be triggered normally by either a positive or a negative current applied to its gate electrode. Modes of operation There are three modes of operation for an SCR depending upon the biasing given to it: Forward blocking mode (off state) Forward conduction mode (on state) Reverse blocking mode (off state) Forward blocking mode In this mode of operation, the anode (+, p-doped side) is given a positive voltage while the cathode (−, n-doped side) is given a negative voltage, keeping the gate at zero (0) potential i.e. disconnected. In this case junction J1and J3 are forward-biased, while J2 is reverse-biased, allowing only a small leakage current from the anode to the cathode. When the applied voltage reaches the breakover value for J2, then J2 undergoes avalanche breakdown. At this breakover voltage J2 starts conducting, but below breakover voltage J2 offers very high resistance to the current and the SCR is said to be in the off state. Forward conduction mode An SCR can be brought from blocking mode to conduction mode in two ways: Either by increasing the voltage between anode and cathode beyond the breakover voltage, or by applying a positive pulse at the gate. Once the SCR starts conducting, no more gate voltage is required to maintain it in the ON state. The minimum current necessary to maintain the SCR in the ON state on removal of the gate voltage is called the latching current. There are two ways to turn it off: Reduce the current through it below a minimum value called the holding current, or With the gate turned off, short-circuit the anode and cathode momentarily with a push-button switch or transistor across the junction. Reverse blocking mode When a negative voltage is applied to the anode and a positive voltage to the cathode, the SCR is in reverse blocking mode, making J1 and J3 reverse biased and J2 forward biased. The device behaves as two diodes connected in series. A small leakage current flows. This is the reverse blocking mode. 
If the reverse voltage is increased, then at critical breakdown level, called the reverse breakdown voltage (VBR), an avalanche occurs at J1 and J3 and the reverse current increases rapidly. SCRs are available with reverse blocking capability, which adds to the forward voltage drop because of the need to have a long, low-doped P1 region. Usually, the reverse blocking voltage rating and forward blocking voltage rating are the same. The typical application for a reverse blocking SCR is in current-source inverters. An SCR incapable of blocking reverse voltage is known as an asymmetrical SCR, abbreviated ASCR. It typically has a reverse breakdown rating in the tens of volts. ASCRs are used where either a reverse conducting diode is applied in parallel (for example, in voltage-source inverters) or where reverse voltage would never occur (for example, in switching power supplies or DC traction choppers). Asymmetrical SCRs can be fabricated with a reverse conducting diode in the same package. These are known as RCTs, for reverse conducting thyristors. Thyristor turn-on methods forward-voltage triggering gate triggering dv/dt triggering thermal triggering light triggering Forward-voltage triggering occurs when the anode–cathode forward voltage is increased with the gate circuit opened. This is known as avalanche breakdown, during which junction J2 will break down. At sufficient voltages, the thyristor changes to its on state with low voltage drop and large forward current. In this case, J1 and J3 are already forward-biased. In order for gate triggering to occur, the thyristor should be in the forward blocking state where the applied voltage is less than the breakdown voltage, otherwise forward-voltage triggering may occur. A single small positive voltage pulse can then be applied between the gate and the cathode. This supplies a single gate current pulse that turns the thyristor onto its on state. In practice, this is the most common method used to trigger a thyristor. Temperature triggering occurs when the width of depletion region decreases as the temperature is increased. When the SCR is near VPO a very small increase in temperature causes junction J2 to be removed which triggers the device. Simple SCR circuit A simple SCR circuit can be illustrated using an AC voltage source connected to a SCR with a resistive load. Without an applied current pulse to the gate of the SCR, the SCR is left in its forward blocking state. This makes the start of conduction of the SCR controllable. The delay angle α, which is the instant the gate current pulse is applied with respect to the instant of natural conduction (ωt = 0), controls the start of conduction. Once the SCR conducts, the SCR does not turn off until the current through the SCR, is, becomes negative. is stays zero until another gate current pulse is applied and SCR once again begins conducting. Applications SCRs are mainly used in devices where the control of high power, possibly coupled with high voltage, is demanded. Their operation makes them suitable for use in medium- to high-voltage AC power control applications, such as lamp dimming, power regulators and motor control. SCRs and similar devices are used for rectification of high-power AC in high-voltage direct current power transmission. They are also used in the control of welding machines, mainly gas tungsten arc welding and similar processes. It is used as an electronic switch in various devices. 
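For the resistive-load phase-control circuit described above (and used in applications such as lamp dimming), delaying the gate pulse by a firing angle α removes the first part of each positive half-cycle, so the average output of a single-SCR half-wave controlled rectifier is V_avg = (V_m/2π)(1 + cos α). The snippet below is a small numerical illustration with made-up supply values.

```python
import math

def half_wave_scr_average_voltage(v_peak, firing_angle_deg):
    """Average DC output of a single-SCR half-wave controlled rectifier
    with a resistive load: Vavg = (Vm / 2*pi) * (1 + cos(alpha))."""
    alpha = math.radians(firing_angle_deg)
    return v_peak / (2.0 * math.pi) * (1.0 + math.cos(alpha))

v_peak = 325.0   # e.g. peak of a 230 V RMS supply (illustrative)
for alpha in (0, 45, 90, 135, 180):
    v_avg = half_wave_scr_average_voltage(v_peak, alpha)
    print(f"alpha = {alpha:3d} deg -> Vavg = {v_avg:6.1f} V")
```

Sweeping α from 0° to 180° walks the average output from its maximum down to zero, which is exactly the control authority exploited in dimmers and simple motor-speed controllers.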
Early solid-state pinball machines made use of SCRs to control lights, solenoids, and other functions electronically instead of mechanically, hence the name solid-state. Other applications include power switching circuits, controlled rectifiers, speed control of DC shunt motors, SCR crowbars, computer logic circuits, timing circuits, and inverters.

Comparison with SCS
A silicon-controlled switch (SCS) behaves nearly the same way as an SCR, but with a few differences. Unlike an SCR, an SCS switches off when a positive voltage/input current is applied to a second, anode gate lead. Also unlike an SCR, an SCS can be triggered into conduction when a negative voltage/output current is applied to that same lead. SCSs are useful in practically all circuits that need a switch that turns on and off through two distinct control pulses. This includes power-switching circuits, logic circuits, lamp drivers, and counters.

Compared to TRIACs
A TRIAC resembles an SCR in that both act as electrically controlled switches. Unlike an SCR, a TRIAC can pass current in either direction, so TRIACs are particularly useful for AC applications. TRIACs have three leads: a gate lead and two conducting leads, referred to as MT1 and MT2. If no current or voltage is applied to the gate lead, the TRIAC switches off. On the other hand, if the trigger voltage is applied to the gate lead, the TRIAC switches on. TRIACs are suitable for light-dimming circuits, phase-control circuits, AC power-switching circuits, AC motor control circuits, etc.

See also
High-voltage direct current
Gate turn-off thyristor
Insulated-gate bipolar transistor
Integrated gate-commutated thyristor
Voltage regulator
Snubber
Crowbar (circuit)
DIAC
BJT

External links
SCR at AllAboutCircuits
SCR Circuit Design
Silicon controlled rectifier
[ "Engineering" ]
1,934
[ "Electronic engineering", "Power electronics" ]
319,536
https://en.wikipedia.org/wiki/7400-series%20integrated%20circuits
The 7400 series is a popular logic family of transistor–transistor logic (TTL) integrated circuits (ICs). In 1964, Texas Instruments introduced the SN5400 series of logic chips in a ceramic semiconductor package. A low-cost plastic-package SN7400 series was introduced in 1966; it quickly gained over 50% of the logic chip market, and its devices eventually became de facto standard electronic components. Since the introduction of the original bipolar-transistor TTL parts, pin-compatible parts have been introduced with such features as low-power CMOS technology and lower supply voltages. Surface-mount packages exist for several popular logic family functions.

Overview
The 7400 series contains hundreds of devices that provide everything from basic logic gates, flip-flops, and counters, to special-purpose bus transceivers and arithmetic logic units (ALU). Specific functions are described in a list of 7400-series integrated circuits. Some TTL logic parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number. The less-common 64 and 84 prefixes on Texas Instruments parts indicated an industrial temperature range.

Since the 1970s, new product families have been released to replace the original 7400 series. More recent TTL-compatible logic families were manufactured using CMOS or BiCMOS technology rather than TTL. Today, surface-mounted CMOS versions of the 7400 series are used in various applications in electronics and for glue logic in computers and industrial electronics. The original through-hole devices in dual in-line packages (DIP/DIL) were the mainstay of the industry for many decades. They are useful for rapid breadboard prototyping and for education and remain available from most manufacturers. The fastest types and very-low-voltage versions are typically surface-mount only, however.

The first part number in the series, the 7400, is a 14-pin IC containing four two-input NAND gates. Each gate uses two input pins and one output pin, with the remaining two pins being power (+5 V) and ground. This part was made in various through-hole and surface-mount packages, including flat pack and plastic/ceramic dual in-line. Additional characters in a part number identify the package and other variations.

Unlike the older resistor–transistor logic integrated circuits, bipolar TTL gates were unsuitable for use as analog devices, providing low gain, poor stability, and low input impedance. Special-purpose TTL devices were used to provide interface functions such as Schmitt triggers or monostable multivibrator timing circuits. Inverting gates could be cascaded as a ring oscillator, useful for purposes where high stability was not required.

History
Although the 7400 series was the first de facto industry-standard TTL logic family (i.e. second-sourced by several semiconductor companies), there were earlier TTL logic families such as:
Sylvania Universal High-level Logic in 1963
Motorola MC4000 MTTL
National Semiconductor DM8000
Fairchild 9300 series
Signetics 8200 and 8T00

The 7400 quad 2-input NAND gate was the first product in the series, introduced by Texas Instruments in a military-grade metal flat package (5400W) in October 1964. The pin assignment of this early series differed from the de facto standard set by the later series in DIP packages (in particular, ground was connected to pin 11 and the power supply to pin 4, compared to pins 7 and 14 for DIP packages).
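Since the 7400 itself is simply four independent two-input NAND gates, its logical behaviour is easy to model in software. The sketch below mirrors the standard 7400 DIP pinout just described (pin 7 = GND, pin 14 = Vcc); the class and method names are invented for the example, not any industry API.

```python
# Illustrative software model of the 7400 quad 2-input NAND gate.
# Gate-to-pin mapping follows the standard 7400 DIP pinout.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

class IC7400:
    """Four independent 2-input NAND gates in one 14-pin package."""
    # (input_a, input_b, output) pin numbers for gates 1-4 on the DIP package
    GATE_PINS = [(1, 2, 3), (4, 5, 6), (9, 10, 8), (12, 13, 11)]

    def evaluate(self, inputs: dict) -> dict:
        """Map input-pin logic levels to output-pin logic levels."""
        return {out: nand(inputs[a], inputs[b]) for a, b, out in self.GATE_PINS}

ic = IC7400()
levels = {1: True, 2: True, 4: True, 5: False,
          9: False, 10: False, 12: True, 13: True}
print(ic.evaluate(levels))  # {3: False, 6: True, 8: True, 11: False}
```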
The extremely popular commercial-grade plastic DIP (7400N) followed in the third quarter of 1966. The 5400 and 7400 series were used in many popular minicomputers in the 1970s and early 1980s. Some models of the DEC PDP-series 'minis' used the 74181 ALU as the main computing element in the CPU. Other examples were the Data General Nova series and the Hewlett-Packard 21MX, 1000, and 3000 series. In 1965, typical quantity-one pricing for the SN5400 (military grade, in a ceramic welded flat pack) was around 22 USD. As of 2007, individual commercial-grade chips in molded epoxy (plastic) packages could be purchased for approximately US$0.25 each, depending on the particular chip.

Families
7400-series parts were constructed using bipolar junction transistors (BJT), forming what is referred to as transistor–transistor logic, or TTL. Newer series, more or less compatible in function and logic level with the original parts, use CMOS technology or a combination of the two (BiCMOS). Originally the bipolar circuits provided higher speed but consumed more power than the competing 4000 series of CMOS devices. Bipolar devices are also limited to a fixed power-supply voltage, typically 5 V, while CMOS parts often support a range of supply voltages.

Milspec-rated devices for use in extended temperature conditions are available as the 5400 series. Texas Instruments also manufactured radiation-hardened devices with the prefix RSN, and the company offered beam-lead bare dies for integration into hybrid circuits with a BL prefix designation. Regular-speed TTL parts were also available for a time in the 6400 series; these had an extended industrial temperature range of −40 °C to +85 °C. While companies such as Mullard listed 6400-series-compatible parts in 1970 data sheets, by 1973 there was no mention of the 6400 family in the Texas Instruments TTL Data Book. Texas Instruments brought back the 6400 series in 1989 for the SN64BCT540. The SN64BCTxxx series is still in production as of 2023. Some companies have also offered industrial extended-temperature-range variants using the regular 7400-series part numbers with a prefix or suffix to indicate the temperature grade.

As integrated circuits in the 7400 series were made in different technologies, compatibility was usually retained with the original TTL logic levels and power-supply voltages. An integrated circuit made in CMOS is not a TTL chip, since it uses field-effect transistors (FETs) and not bipolar junction transistors (BJTs), but similar part numbers are retained to identify similar logic functions and electrical (power and I/O voltage) compatibility in the different subfamilies. Over 40 different logic subfamilies use this standardized part number scheme.

The headings in the following table are:
Vcc: power-supply voltage
tpd: maximum gate delay
IOL: maximum output current at low level
IOH: maximum output current at high level
tpd, IOL, and IOH apply to most gates in a given family; driver or buffer gates have higher output currents.

Many parts in the CMOS HC, AC, AHC, and VHC families are also offered in "T" versions (HCT, ACT, AHCT and VHCT) which have input thresholds that are compatible with both TTL and 3.3 V CMOS signals. The non-T parts have conventional CMOS input thresholds, which are more restrictive than TTL thresholds. Typically, CMOS input thresholds require high-level signals to be at least 70% of Vcc and low-level signals to be at most 30% of Vcc. (TTL has the input high level above 2.0 V and the input low level below 0.8 V, so a TTL high-level signal could fall in the forbidden middle range for 5 V CMOS.)
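As a rough illustration of why these input thresholds matter, the sketch below checks whether a driver's worst-case output levels satisfy a receiver's input thresholds, using the nominal TTL limits quoted above (VIH ≥ 2.0 V, VIL ≤ 0.8 V) and the typical 70%/30%-of-Vcc rule for plain CMOS inputs. The specific driver output figures are illustrative assumptions, not data-sheet values.

```python
def input_thresholds(family: str, vcc: float) -> tuple:
    """Return (VIH_min, VIL_max) a receiver requires at its input.
    'ttl' covers TTL and TTL-compatible ("T") CMOS inputs; 'cmos'
    uses the typical 70% / 30% of Vcc rule mentioned in the text."""
    if family == "ttl":
        return 2.0, 0.8
    if family == "cmos":
        return 0.7 * vcc, 0.3 * vcc
    raise ValueError(f"unknown family: {family}")

def can_drive(voh_min: float, vol_max: float, receiver: str, vcc: float = 5.0) -> bool:
    """True if the driver's worst-case output levels meet the receiver's thresholds."""
    vih_min, vil_max = input_thresholds(receiver, vcc)
    return voh_min >= vih_min and vol_max <= vil_max

# Illustrative worst-case TTL output levels: VOH >= 2.7 V, VOL <= 0.5 V.
print(can_drive(2.7, 0.5, receiver="ttl"))   # True: TTL output drives TTL/HCT inputs
print(can_drive(2.7, 0.5, receiver="cmos"))  # False: 2.7 V < the 3.5 V a 5 V HC input needs
```

The second result is exactly the "forbidden middle range" problem: a valid TTL high of 2.7 V is not a valid high for a plain 5 V CMOS (HC) input, which is what the "T"-threshold subfamilies exist to fix.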
The 74H family is the same basic design as the 7400 family, with resistor values reduced. This reduced the typical propagation delay from 9 ns to 6 ns but increased the power consumption. The 74H family provided a number of unique devices for CPU designs in the 1970s. Many designers of military and aerospace equipment used this family over a long period, and as they need exact replacements, this family is still produced by Lansdale Semiconductor.

The 74S family, using Schottky circuitry, uses more power than the 74, but is faster. The 74LS family of ICs is a lower-power version of the 74S family, with slightly higher speed but lower power dissipation than the original 74 family; it became the most popular variant once it was widely available. Many 74LS ICs can be found in microcomputers and digital consumer electronics manufactured in the 1980s and early 1990s. The 74F family was introduced by Fairchild Semiconductor and adopted by other manufacturers; it is faster than the 74, 74LS and 74S families. Through the late 1980s and 1990s, newer versions of these families were introduced to support the lower operating voltages used in newer CPU devices.

Part numbering
Part number schemes varied by manufacturer. The part numbers for 7400-series logic devices often use the following designators:

Often first, a two- or three-letter prefix denoting the manufacturer and flow class of the device. These codes are no longer closely associated with a single manufacturer; for example, Fairchild Semiconductor manufactures parts with MM and DM prefixes, and with no prefix. Examples:
SN: Texas Instruments, commercial processing
SNV: Texas Instruments, military processing
M: ST Microelectronics
DM: National Semiconductor
UT: Cobham PLC
SG: Sylvania

Two digits for temperature range. Examples:
54: military temperature range
64: short-lived historical series with intermediate "industrial" temperature range
74: commercial temperature range

Zero to four letters denoting the logic subfamily. Examples:
zero letters: basic bipolar TTL
LS: low-power Schottky
HCT: high-speed CMOS compatible with TTL

Two or more arbitrarily assigned digits that identify the function of the device. There are hundreds of different devices in each family.

Additional suffix letters and numbers may be appended to denote the package type, quality grade, or other information, but this varies widely by manufacturer. For example, "SN5400N" signifies that the part is a 7400-series IC probably manufactured by Texas Instruments ("SN" originally meaning "Semiconductor Network") using commercial processing, is of the military temperature rating ("54"), and is of the basic TTL family (absence of a family designator), its function being the quad 2-input NAND gate ("00") implemented in a plastic through-hole DIP package ("N"). (A short parsing sketch of this designator scheme appears at the end of this article.)

Many logic families maintain a consistent use of the device numbers as an aid to designers. Often a part from a different 74x00 subfamily could be substituted ("drop-in replacement") in a circuit, with the same function and pin-out yet more appropriate characteristics for an application (perhaps speed or power consumption), which was a large part of the appeal of the 74C00 series over the competing CD4000B series, for example. But there are a few exceptions where incompatibilities (mainly in pin-out) across the subfamilies occurred, such as:
some flat-pack devices (e.g. 7400W) and surface-mount devices;
some of the faster CMOS series (for example 74AC);
a few low-power TTL devices (e.g. 74L86, 74L9 and 74L95), which have a different pin-out than the regular (or even 74LS) series part;
five versions of the 74x54 (4-wide AND-OR-INVERT gate IC), namely 7454(N), 7454W, 74H54, 74L54W and 74L54N/74LS54, which differ from each other in pin-out and/or function.

Second sources from Europe and the Eastern Bloc
Some manufacturers, such as Mullard and Siemens, had pin-compatible TTL parts, but with a completely different numbering scheme; however, data sheets identified the 7400-compatible number as an aid to recognition. At the time the 7400 series was being made, some European manufacturers that traditionally followed the Pro Electron naming convention, such as Philips/Mullard, produced a series of TTL integrated circuits with part names beginning with FJ. Some examples of the FJ series are:
FJH101 (= 7430): single 8-input NAND gate
FJH131 (= 7400): quadruple 2-input NAND gate
FJH181 (= 7454N or J): 2+2+2+2-input AND-OR-NOT gate

The Soviet Union started manufacturing TTL ICs with the 7400-series pinout in the late 1960s and early 1970s, such as the K155ЛА3, which was pin-compatible with the 7400 part available in the United States, except for using a metric spacing of 2.5 mm between pins instead of the 2.54 mm (0.1 inch) pin-to-pin spacing used in the West. Another peculiarity of the Soviet-made 7400 series was the packaging material used in the 1970s–1980s. Instead of the ubiquitous black resin, the packages had a brownish-green body colour with subtle swirl marks created during the moulding process. It was jokingly referred to in the Eastern Bloc electronics industry as the "elephant-dung packaging", due to its appearance.

The Soviet integrated circuit designation is different from the Western series: the technology modifications were considered different series and were identified by different numbered prefixes – the К155 series is equivalent to plain 74, К555 is 74LS, К1533 is 74ALS, etc. The function of the unit is described with a two-letter code followed by a number:
the first letter represents the functional group – logic, triggers (flip-flops), counters, multiplexers, etc.;
the second letter shows the functional subgroup, making the distinction between logical NAND and NOR, D- and JK-triggers, decimal and binary counters, etc.;
the number distinguishes variants with a different number of inputs or a different number of elements within a die – ЛА1/ЛА2/ЛА3 (LA1/LA2/LA3) are 2 four-input / 1 eight-input / 4 two-input NAND elements respectively (equivalent to 7420/7430/7400).
Before July 1974 the two letters of the functional description were inserted after the first digit of the series. Examples: К1ЛБ551 and К155ЛА1 (7420), К1ТМ552 and К155ТМ2 (7474) are the same ICs made at different times.

Clones of the 7400 series were also made in other Eastern Bloc countries:
Bulgaria (Mikroelektronika Botevgrad) used a designation somewhat similar to that of the Soviet Union, e.g. 1ЛБ00ШМ (1LB00ShM) for a 74LS00. Some of the two-letter functional groups were borrowed from the Soviet designation, while others differed. Unlike the Soviet scheme, the two- or three-digit number after the functional group matched the Western counterpart, and the series designator followed at the end (i.e. ШМ for LS). Only the LS series is known to have been manufactured in Bulgaria.
Czechoslovakia (TESLA) used the 7400 numbering scheme with the manufacturer prefix MH. Example: MH7400.
Tesla also produced industrial-grade (8400, −25 °C to 85 °C) and military-grade (5400, −55 °C to 125 °C) versions.
Poland (Unitra CEMI) used the 7400 numbering scheme with the manufacturer prefixes UCA for the 5400 and 6400 series, as well as UCY for the 7400 series. Examples: UCA6400, UCY7400. Note that ICs with the prefix MCY74 correspond to the 4000 series (e.g. MCY74002 corresponds to the 4002 and not to the 7402).
Hungary (Tungsram, later Mikroelektronikai Vállalat / MEV) also used the 7400 numbering scheme, but with a manufacturer suffix – the 7400 is marked as 7400APC.
Romania (I.P.R.S.) used a trimmed 7400 numbering with the manufacturer prefix CDB (example: CDB4123E corresponds to the 74123) for the 74 and 74H series, where the suffix H indicated the 74H series. For the later 74LS series, the standard numbering was used.
East Germany (HFO) also used trimmed 7400 numbering without a manufacturer prefix or suffix. The prefix D (or E) designates a digital IC, not the manufacturer. Example: D174 is the 7474. 74LS clones were designated by the prefix DL; e.g. DL000 = 74LS00. In later years, East German-made clones were also available with standard 74* numbers, usually for export.

A number of different technologies were available from the Soviet Union, Czechoslovakia, Poland, and East Germany. The 8400 series indicates an industrial temperature range from −25 °C to +85 °C (as opposed to −40 °C to +85 °C for the 6400 series). Around 1990 the production of standard logic ceased in all Eastern European countries except the Soviet Union and later Russia and Belarus. As of 2016, the series 133, К155, 1533, КР1533, 1554, 1594, and 5584 were in production at "Integral" in Belarus, as well as the series 130 and 530 at "NZPP-KBR", 134 and 5574 at "VZPP", 533 at "Svetlana", 1564, К1564, КР1564 at "NZPP", 1564, К1564 at "Voshod", 1564 at "Exiton", and 133, 530, 533, 1533 at "Mikron" in Russia. The Russian company Angstrem manufactures 54HC circuits as the 5514БЦ1 series, 54AC as the 5514БЦ2 series, and 54LVC as the 5524БЦ2 series.

See also
Electronic component
Logic gate, Logic family
List of 7400-series integrated circuits
4000-series integrated circuits
List of 4000-series integrated circuits
Linear integrated circuit
List of linear integrated circuits
List of LM-series integrated circuits
Push–pull output
Open-collector/drain output
Three-state output
Schmitt trigger input
Programmable logic device
Pin compatibility

Further reading
Books:
50 Circuits Using 7400 Series IC's; 1st ed.; R. N. Soar; Bernard Babani Publishing; 76 pages; 1979.
TTL Cookbook; 1st ed.; Don Lancaster; Sams Publishing; 412 pages; 1974.
Designing with TTL Integrated Circuits; 1st ed.; Robert Morris, John Miller; Texas Instruments and McGraw-Hill; 322 pages; 1971.
App notes:
Understanding and Interpreting Standard-Logic Data Sheets; Stephen Nolan, Jose Soltero, Shreyas Rao; Texas Instruments; 60 pages; 2016.
Comparison of 74HC / 74S / 74LS / 74ALS Logic; Fairchild; 6 pages; 1983.
Interfacing to 74HC Logic; Fairchild; 10 pages; 1998.
74AHC / 74AHCT Designer's Guide; TI; 53 pages; 1998. Compares 74HC / 74AHC / 74AC (CMOS I/O) and 74HCT / 74AHCT / 74ACT (TTL I/O).
Fairchild Semiconductor / ON Semiconductor: historical data books TTL (1978, 752 pages) and FAST (1981, 349 pages); Logic Selection Guide (2008, 12 pages).
Nexperia / NXP Semiconductor: Logic Selection Guide (2020, 234 pages); Logic Application Handbook; Design Engineer's Guide (2021, 157 pages); Logic Translators (2021, 62 pages).
Texas Instruments / National Semiconductor: historical catalog (1967, 375 pages); historical databooks TTL Vol. 1 (1984, 339 pages), TTL Vol. 2 (1985, 1402 pages), TTL Vol. 3 (1984, 793 pages), TTL Vol. 4 (1986, 445 pages); Digital Logic Pocket Data Book (2007, 794 pages); Logic Reference Guide (2004, 8 pages); Logic Selection Guide (1998, 215 pages); Little Logic Guide (2018, 25 pages); Little Logic Selection Guide (2004, 24 pages).
Toshiba: General-Purpose Logic ICs (2012, 55 pages).

External links
Understanding 7400-series digital logic ICs – Nuts and Volts magazine
Thorough list of 7400-series ICs – Electronics Club
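To tie together the part-numbering scheme described above, here is a small illustrative parser for the common designator layout (prefix, temperature digits, subfamily letters, function digits, package suffix). The prefix and subfamily tables are deliberately tiny samples, and real part numbers vary more than this sketch handles.

```python
import re

# Illustrative parser for the common 7400-series designator layout:
# [prefix][54/64/74][subfamily letters][function digits][suffix].
# The lookup tables are small samples, not a complete registry.
PREFIXES = {"SN": "Texas Instruments", "DM": "National Semiconductor",
            "M": "ST Microelectronics", "MH": "TESLA", "UCY": "Unitra CEMI"}
TEMPERATURE = {"54": "military", "64": "industrial (historical)", "74": "commercial"}
SUBFAMILIES = {"": "basic bipolar TTL", "LS": "low-power Schottky",
               "HCT": "high-speed CMOS, TTL-compatible", "F": "fast"}

PART_RE = re.compile(r"^(?P<prefix>[A-Z]{0,3}?)(?P<temp>54|64|74)"
                     r"(?P<family>[A-Z]{0,4}?)(?P<function>\d{2,})(?P<suffix>[A-Z]*)$")

def parse(part: str) -> dict:
    m = PART_RE.match(part.upper())
    if not m:
        raise ValueError(f"does not look like a 7400-series part: {part}")
    d = m.groupdict()
    d["manufacturer"] = PREFIXES.get(d["prefix"], "unknown")
    d["temperature_range"] = TEMPERATURE[d["temp"]]
    d["subfamily"] = SUBFAMILIES.get(d["family"], d["family"])
    return d

print(parse("SN74LS00N"))  # TI, commercial, low-power Schottky, function 00, suffix N
print(parse("MH7400"))     # TESLA second-source of the basic 7400
```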
7400-series integrated circuits
[ "Technology", "Engineering" ]
4,276
[ "Electronic engineering", "Computer engineering", "Integrated circuits", "Digital electronics" ]
319,613
https://en.wikipedia.org/wiki/Voltage%20spike
In electrical engineering, spikes are fast, short-duration electrical transients in voltage (voltage spikes), current (current spikes), or transferred energy (energy spikes) in an electrical circuit.

Fast, short-duration electrical transients (overvoltages) in the electric potential of a circuit are typically caused by:
lightning strikes
power outages
tripped circuit breakers
short circuits
power transitions in other large equipment on the same power line
malfunctions caused by the power company
electromagnetic pulses (EMP), with electromagnetic energy typically distributed up to the 100 kHz to 1 MHz frequency range
inductive spikes

In the design of critical infrastructure and military hardware, one concern is pulses produced by nuclear explosions, whose nuclear electromagnetic pulses distribute large energies at frequencies from 1 kHz into the gigahertz range through the atmosphere.

The effect of a voltage spike is to produce a corresponding increase in current (a current spike). However, some voltage spikes may be created by current sources: the voltage rises as necessary so that a constant current continues to flow. Current from a discharging inductor is one example (a short numerical sketch follows at the end of this article). For sensitive electronics, excessive current can flow if this voltage spike exceeds a material's breakdown voltage, or if it causes avalanche breakdown. In semiconductor junctions, excessive electric current may destroy or severely weaken the device. An avalanche diode, transient-voltage-suppression diode, varistor, overvoltage crowbar, or a range of other overvoltage protective devices can divert (shunt) this transient current, thereby minimizing the voltage.

Voltage spikes, also known as surges, may be created by a rapid buildup or decay of a magnetic field, which may induce energy into the associated circuit. However, voltage spikes can also have more mundane causes, such as a fault in a transformer, or higher-voltage (primary circuit) power wires falling onto lower-voltage (secondary circuit) power wires as a result of accident or storm damage.

Voltage spikes may be longitudinal (common) mode or metallic (normal or differential) mode. Some equipment damage from surges and spikes can be prevented by the use of surge protection equipment. Each type of spike requires selective use of protective equipment. For example, a common-mode voltage spike may not even be detected by a protector installed for normal-mode transients.

Power increases or decreases which last multiple cycles are called swells or sags, respectively. An uninterrupted voltage increase that lasts more than a minute is called an overvoltage. These are usually caused by malfunctions of the electric power distribution system.

See also
Flyback diode – a device to channel inductive spikes back through the coil producing them
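As a numerical illustration of the inductive-spike mechanism mentioned above, the sketch below estimates the voltage developed when the current through an inductor is interrupted quickly, using v = L·di/dt. All component values are illustrative assumptions, not figures from the text.

```python
# Estimate the voltage spike from interrupting an inductive load: v = L * di/dt.
# All values below are illustrative assumptions.

def inductive_spike_voltage(inductance_h: float, delta_i_a: float, delta_t_s: float) -> float:
    """Magnitude of v = L * di/dt for a current change delta_i over time delta_t."""
    return inductance_h * delta_i_a / delta_t_s

L = 0.1   # H: a relay coil, say
I = 0.5   # A: steady coil current being interrupted
for dt in (1e-3, 1e-5, 1e-7):  # slower to faster interruption
    v = inductive_spike_voltage(L, I, dt)
    print(f"break in {dt:.0e} s -> spike ~ {v:,.0f} V")
# Faster interruption means a larger spike, which is why a flyback diode or
# snubber is placed across the coil to give the current a path to decay.
```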
Voltage spike
[ "Physics", "Engineering" ]
542
[ "Physical quantities", "Electronic engineering", "Voltage", "Voltage stability", "Power electronics" ]
319,834
https://en.wikipedia.org/wiki/Rutherford%20scattering%20experiments
The Rutherford scattering experiments were a landmark series of experiments by which scientists learned that every atom has a nucleus where all of its positive charge and most of its mass is concentrated. They deduced this after measuring how an alpha particle beam is scattered when it strikes a thin metal foil. The experiments were performed between 1906 and 1913 by Hans Geiger and Ernest Marsden under the direction of Ernest Rutherford at the Physical Laboratories of the University of Manchester. The physical phenomenon was explained by Rutherford in a classic 1911 paper that eventually led to the widespread use of scattering in particle physics to study subatomic matter. Rutherford scattering, or Coulomb scattering, is the elastic scattering of charged particles by the Coulomb interaction. The paper also initiated the development of the planetary Rutherford model of the atom and eventually the Bohr model. Rutherford scattering is now exploited by the materials science community in an analytical technique called Rutherford backscattering.

Summary

Thomson's model of the atom
The prevailing model of atomic structure before Rutherford's experiments was devised by J. J. Thomson. Thomson had discovered the electron through his work on cathode rays and proposed that they existed within atoms, with an electric current being electrons hopping from one atom to an adjacent one in a series. There logically had to be a commensurate amount of positive charge to balance the negative charge of the electrons and hold those electrons together. Having no idea what the source of this positive charge was, he tentatively proposed that the positive charge was everywhere in the atom, adopting a spherical shape for simplicity. Thomson imagined that the balance of electrostatic forces would distribute the electrons throughout this sphere in a more or less even manner. Thomson also believed the electrons could move around in this sphere, and in that regard he likened the substance of the sphere to a liquid. In fact, the positive sphere was more of an abstraction than anything material: he did not propose a positively charged subatomic particle as a counterpart to the electron. Thomson was never able to develop a complete and stable model that could predict any of the other known properties of the atom, such as emission spectra and valencies. The Japanese scientist Hantaro Nagaoka rejected Thomson's model on the grounds that opposing charges cannot penetrate each other. He proposed instead that electrons orbit the positive charge like the rings around Saturn. However, this model was also known to be unstable.

Alpha particles and the Thomson atom
An alpha particle is a positively charged particle of matter that is spontaneously emitted from certain radioactive elements. Alpha particles are so tiny as to be invisible, but they can be detected with the use of phosphorescent screens, photographic plates, or electrodes. Rutherford discovered them in 1899. In 1906, by studying how alpha particle beams are deflected by magnetic and electric fields, he deduced that they were essentially helium atoms stripped of two electrons. Thomson and Rutherford knew nothing about the internal structure of alpha particles. Prior to 1911 they were thought to have a diameter similar to helium atoms and to contain ten or so electrons. Thomson's model was consistent with the experimental evidence available at the time.
Thomson studied beta particle scattering, which showed small-angle deflections modelled as interactions of the particle with many atoms in succession. Each interaction of the particle with the electrons of the atom and the positive background sphere would lead to a tiny deflection, but many such collisions could add up. The scattering of alpha particles was expected to be similar. Rutherford's team would show that the multiple scattering model was not needed: single scattering from a compact charge at the centre of the atom would account for all of the scattering data.

Rutherford, Geiger, and Marsden
Ernest Rutherford was Langworthy Professor of Physics at the Victoria University of Manchester (now the University of Manchester). He had already received numerous honours for his studies of radiation. He had discovered the existence of alpha rays, beta rays, and gamma rays, and had proved that these were the consequence of the disintegration of atoms. In 1906, he received a visit from the German physicist Hans Geiger, and was so impressed that he asked Geiger to stay and help him with his research. Ernest Marsden was a physics undergraduate student studying under Geiger.

In 1908, Rutherford sought to independently determine the charge and mass of alpha particles. To do this, he wanted to count the number of alpha particles and measure their total charge; the ratio would give the charge of a single alpha particle. Alpha particles are too tiny to see, but Rutherford knew about Townsend discharge, a cascade effect from ionisation leading to a pulse of electric current. On this principle, Rutherford and Geiger designed a simple counting device which consisted of two electrodes in a glass tube (see the 1908 experiment below). Every alpha particle that passed through the tube would create a pulse of electricity that could be counted. It was an early version of the Geiger counter.

The counter that Geiger and Rutherford built proved unreliable because the alpha particles were being too strongly deflected by their collisions with the molecules of air within the detection chamber. The highly variable trajectories of the alpha particles meant that they did not all generate the same number of ions as they passed through the gas, thus producing erratic readings. This puzzled Rutherford because he had thought that alpha particles were too heavy to be deflected so strongly. Rutherford asked Geiger to investigate how far matter could scatter alpha rays.

The experiments they designed involved bombarding a metal foil with a beam of alpha particles to observe how the foil scattered them in relation to its thickness and material. They used a phosphorescent screen to measure the trajectories of the particles. Each impact of an alpha particle on the screen produced a tiny flash of light. Geiger worked in a darkened lab for hours on end, counting these tiny scintillations using a microscope. For the metal foil, they tested a variety of metals, but favoured gold because they could make the foil very thin, as gold is the most malleable metal. As a source of alpha particles, Rutherford's substance of choice was radium, which is thousands of times more radioactive than uranium.

Scattering theory and the new atomic model
In a 1909 experiment, Geiger and Marsden discovered that the metal foils could scatter some alpha particles in all directions, sometimes more than 90°. This should have been impossible according to Thomson's model, in which all the alpha particles should have gone straight through.
In Thomson's model of the atom, the sphere of positive charge that fills the atom and encapsulates the electrons is permeable; the electrons can move around in it, after all. Therefore, an alpha particle should be able to pass through this sphere if the electrostatic forces within permit it. Thomson himself did not study how an alpha particle might be scattered in such a collision with an atom, but he did study beta particle scattering. He calculated that a beta particle would experience only a very small deflection when passing through an atom, and even after passing through many atoms in a row, the total deflection should still be less than 1°. Alpha particles typically have much more momentum than beta particles and therefore should likewise experience only the slightest deflection.

The extreme scattering observed forced Rutherford to revise the model of the atom. The issue in Thomson's model was that the charges were too diffuse to produce a sufficiently strong electrostatic force to cause such repulsion; therefore they had to be more concentrated. In Rutherford's new model, the positive charge does not fill the entire volume of the atom but instead constitutes a tiny nucleus at least 10,000 times smaller than the atom as a whole. All that positive charge concentrated in a much smaller volume produces a much stronger electric field near its surface. The nucleus also carries most of the atom's mass. This means that it can deflect alpha particles by up to 180°, depending on how close they pass. The electrons surround this nucleus, spread throughout the atom's volume. Because their negative charge is diffuse and their combined mass is low, they have a negligible effect on the alpha particle.

To verify his model, Rutherford developed a mathematical model to predict the intensity of alpha particles at the different angles they scattered coming out of the gold foil, assuming all of the positive charge was concentrated at the centre of the atom. This model was validated in an experiment performed in 1913. It explained both the beta scattering results of Thomson and the alpha scattering results of Geiger and Marsden.

Legacy
There was little reaction to Rutherford's now-famous 1911 paper in its first years. The paper was primarily about alpha particle scattering, in an era before particle scattering was a primary tool for physics. The probability techniques he used and the confusing collection of observations involved were not immediately compelling.

Nuclear physics
The first impact was to encourage a new focus on scattering experiments. For example, the first results from a cloud chamber, by C. T. R. Wilson, show alpha particle scattering and also appeared in 1911. Over time, particle scattering became a major aspect of theoretical and experimental physics; Rutherford's concept of a "cross-section" now dominates the descriptions of experimental particle physics. The historian Silvan S. Schweber suggests that Rutherford's approach marked the shift to viewing all interactions and measurements in physics as scattering processes. After the nucleus (a term Rutherford introduced in 1912) became the accepted model for the core of atoms, Rutherford's analysis of the scattering of alpha particles created a new branch of physics: nuclear physics.

Atomic model
Rutherford's new atomic model caused no stir. Rutherford explicitly ignores the electrons, only mentioning Hantaro Nagaoka's Saturnian model of electrons orbiting a tiny "sun", a model that had previously been rejected as mechanically unstable.
By ignoring the electrons, Rutherford also ignores any potential implications for atomic spectroscopy and for chemistry. Rutherford himself did not press the case for his atomic model: his own 1913 book on "Radioactive substances and their radiations" only mentions the atom twice; other books by other authors around this time focus on Thomson's model. The impact of Rutherford's nuclear model came after Niels Bohr arrived as a post-doctoral student in Manchester at Rutherford's invitation. Bohr dropped his work on the Thomson model in favour of Rutherford's nuclear model, developing the Rutherford–Bohr model over the next several years. Eventually Bohr incorporated early ideas of quantum mechanics into the model of the atom, allowing prediction of electronic spectra and concepts of chemistry.

Hantaro Nagaoka, who had proposed a Saturnian model of the atom, wrote to Rutherford from Tokyo in 1911: "I have been struck with the simpleness of the apparatus you employ and the brilliant results you obtain." The astronomer Arthur Eddington called Rutherford's discovery the most important scientific achievement since Democritus proposed the atom ages earlier. Rutherford has since been hailed as "the father of nuclear physics".

In a lecture delivered on 15 October 1936 at Cambridge University, Rutherford described his shock at the results of the 1909 experiment: "It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you." Rutherford's claim of surprise makes a good story, but by the time of the Geiger–Marsden experiment the result confirmed suspicions Rutherford had developed from his many previous experiments.

Experiments

Alpha particle scattering: 1906 and 1908 experiments
Rutherford's first steps towards his discovery of the nature of the atom came from his work to understand alpha particles. In 1906, Rutherford noticed that alpha particles passing through sheets of mica were deflected by the sheets by as much as 2 degrees. Rutherford placed a radioactive source in a sealed tube ending with a narrow slit, followed by a photographic plate. Half of the slit was covered by a thin layer of mica. A magnetic field around the tube was altered every 10 minutes to reject the effect of beta rays, known to be sensitive to magnetic fields. The tube was evacuated to different amounts and a series of images recorded. At the lowest pressure the image of the open slit was clear, while the images of the mica-covered slit, or of the open slit at higher pressures, were fuzzy. Rutherford explained these results as alpha-particle scattering in a paper published in 1906. He already understood the implications of the observation for models of atoms: "such a result brings out clearly the fact that the atoms of matter must be the seat of very intense electrical forces".

A 1908 paper by Geiger, On the Scattering of α-Particles by Matter, describes the following experiment. He constructed a glass tube nearly two metres long. At one end of the tube was a quantity of "radium emanation" (R) as a source of alpha particles. The opposite end of the tube was covered with a phosphorescent screen (Z). In the middle of the tube was a 0.9 mm-wide slit. The alpha particles from R passed through the slit and created a glowing patch of light on the screen. A microscope (M) was used to count the scintillations on the screen and measure their spread. Geiger pumped all the air out of the tube so that the alpha particles would be unobstructed, and they left a neat and tight image on the screen that corresponded to the shape of the slit.
Geiger then allowed some air into the tube, and the glowing patch became more diffuse. Geiger then pumped out the air and placed one or two gold foils over the slit at AA. This too caused the patch of light on the screen to become more spread out, with the larger spread for two layers. This experiment demonstrated that both air and solid matter could markedly scatter alpha particles.

Alpha particle reflection: the 1909 experiment
The results of the initial alpha particle scattering experiments were confusing. The angular spread of the particles on the screen varied greatly with the shape of the apparatus and its internal pressure. Rutherford suggested that Ernest Marsden, a physics undergraduate student studying under Geiger, should look for diffusely reflected or back-scattered alpha particles, even though these were not expected. Marsden's first crude reflector got results, so Geiger helped him create a more sophisticated apparatus. They were able to demonstrate that 1 in 8000 alpha particle collisions were diffuse reflections. Although this fraction was small, it was much larger than the Thomson model of the atom could explain.

These results were published in a 1909 paper, On a Diffuse Reflection of the α-Particles, where Geiger and Marsden described the experiment by which they proved that alpha particles can indeed be scattered by more than 90°. In their experiment, they prepared a small conical glass tube (AB) containing "radium emanation" (radon), "radium A" (polonium-218), and "radium C" (bismuth-214); its open end was sealed with mica. This was their alpha particle emitter. They then set up a lead plate (P), behind which they placed a fluorescent screen (S). The tube was held on the opposite side of the plate, such that the alpha particles it emitted could not directly strike the screen. They noticed a few scintillations on the screen because some alpha particles got around the plate by bouncing off air molecules. They then placed a metal foil (R) to the side of the lead plate. They tested with lead, gold, tin, aluminium, copper, silver, iron, and platinum. They pointed the tube at the foil to see if the alpha particles would bounce off it and strike the screen on the other side of the plate, and observed an increase in the number of scintillations on the screen. Counting the scintillations, they observed that metals with higher atomic mass, such as gold, reflected more alpha particles than lighter ones such as aluminium.

Geiger and Marsden then wanted to estimate the total number of alpha particles that were reflected. The previous setup was unsuitable for doing this because the tube contained several radioactive substances (radium plus its decay products) and thus the alpha particles emitted had varying ranges, and because it was difficult for them to ascertain at what rate the tube was emitting alpha particles. This time, they placed a small quantity of radium C (bismuth-214) on the lead plate; its alpha particles bounced off a platinum reflector (R) and onto the screen. They concluded that approximately 1 in 8,000 of the alpha particles that struck the reflector bounced onto the screen. By measuring the reflection from thin foils, they showed that the effect was a volume effect and not a surface effect. When contrasted with the vast number of alpha particles that pass unhindered through a metal foil, this small number of large-angle reflections was a strange result that meant very large forces were involved.
Dependence on foil material and thickness: the 1910 experiment
A 1910 paper by Geiger, The Scattering of the α-Particles by Matter, describes an experiment to measure how the most probable angle through which an alpha particle is deflected varies with the material it passes through, the thickness of the material, and the velocity of the alpha particles. He constructed an airtight glass tube from which the air was pumped out. At one end was a bulb (B) containing "radium emanation" (radon-222). By means of mercury, the radon in B was pumped up a narrow glass pipe whose end at A was plugged with mica. At the other end of the tube was a fluorescent zinc sulfide screen (S). The microscope which he used to count the scintillations on the screen was affixed to a vertical millimetre scale with a vernier, which allowed Geiger to precisely measure where the flashes of light appeared on the screen and thus calculate the particles' angles of deflection. The alpha particles emitted from A were narrowed to a beam by a small circular hole at D. Geiger placed a metal foil in the path of the rays at D and E to observe how the zone of flashes changed. He tested gold, tin, silver, copper, and aluminium. He could also vary the velocity of the alpha particles by placing extra sheets of mica or aluminium at A. From the measurements he took, Geiger came to the following conclusions:
the most probable angle of deflection increases with the thickness of the material
the most probable angle of deflection is proportional to the atomic mass of the substance
the most probable angle of deflection decreases with the velocity of the alpha particles

Rutherford's Structure of the Atom paper (1911)
Considering the results of these experiments, Rutherford published a landmark paper in 1911 titled "The Scattering of α and β Particles by Matter and the Structure of the Atom", wherein he showed that single scattering from a very small and intense electric charge predicts primarily small-angle scattering, with small but measurable amounts of backscattering. For the purpose of his mathematical calculations he assumed this central charge was positive, but he admitted he could not prove this and that he had to wait for other experiments to develop his theory. Rutherford developed a mathematical equation that modelled how the foil should scatter the alpha particles if all the positive charge and most of the atomic mass was concentrated in a single point at the centre of an atom. From the scattering data, Rutherford estimated the central charge qn to be about +100 units.

Rutherford's paper does not discuss any electron arrangement beyond discussions of the scattering from Thomson's plum pudding model and Nagaoka's Saturnian model. He shows that the scattering results predicted by Thomson's model are also explained by single scattering, but that Thomson's model does not explain large-angle scattering. He says that Nagaoka's model, having a compact charge, would agree with the scattering data. The Saturnian model had previously been rejected on other grounds. The so-called Rutherford model of the atom with orbiting electrons was not proposed by Rutherford in the 1911 paper.

Confirming the scattering theory: the 1913 experiment
In a 1913 paper, The Laws of Deflexion of α Particles through Large Angles, Geiger and Marsden describe a series of experiments by which they sought to experimentally verify Rutherford's equation.
Rutherford's equation predicted that the number of scintillations per minute s observed at a given angle Φ should be proportional to:
csc⁴(Φ/2)
the thickness of the foil t
the square of the magnitude of the central charge Qn
the inverse fourth power of the alpha particles' velocity, 1/v⁴

Their 1913 paper describes four experiments by which they proved each of these four relationships.

To test how the scattering varied with the angle of deflection (i.e. if s ∝ csc⁴(Φ/2)), Geiger and Marsden built an apparatus that consisted of a hollow metal cylinder mounted on a turntable. Inside the cylinder was a metal foil (F) and a radiation source containing radon (R), mounted on a detached column (T) which allowed the cylinder to rotate independently. The column was also a tube by which air was pumped out of the cylinder. A microscope (M), with its objective lens covered by a fluorescent zinc sulfide screen (S), penetrated the wall of the cylinder and pointed at the metal foil. They tested with silver and gold foils. By turning the table, the microscope could be moved a full circle around the foil, allowing Geiger to observe and count alpha particles deflected by up to 150°. Correcting for experimental error, Geiger and Marsden found that the number of alpha particles deflected by a given angle Φ is indeed proportional to csc⁴(Φ/2).

Geiger and Marsden then tested how the scattering varied with the thickness of the foil (i.e. if s ∝ t). They constructed a disc (S) with six holes drilled in it. The holes were covered with metal foil (F) of varying thickness, or left open for control. This disc was then sealed in a brass ring (A) between two glass plates (B and C). The disc could be rotated by means of a rod (P) to bring each window in front of the alpha particle source (R). On the rear glass pane was a zinc sulfide screen (Z). Geiger and Marsden found that the number of scintillations that appeared on the screen was indeed proportional to the thickness, as long as the thickness was small.

Geiger and Marsden reused the apparatus to measure how the scattering pattern varied with the square of the nuclear charge (i.e. if s ∝ Qn²). Geiger and Marsden did not know what the positive charge of the nucleus of their metals was (they had only just discovered the nucleus existed at all), but they assumed it was proportional to the atomic weight, so they tested whether the scattering was proportional to the atomic weight squared. Geiger and Marsden covered the holes of the disc with foils of gold, tin, silver, copper, and aluminium. They measured each foil's stopping power by equating it to an equivalent thickness of air. They counted the number of scintillations per minute that each foil produced on the screen. They divided the number of scintillations per minute by the respective foil's air equivalent, then divided again by the square root of the atomic weight (Geiger and Marsden knew that for foils of equal stopping power, the number of atoms per unit area is proportional to the square root of the atomic weight). Thus, for each metal, Geiger and Marsden obtained the number of scintillations that a fixed number of atoms produce. For each metal, they then divided this number by the square of the atomic weight, and found that the ratios were about the same. Thus they proved that s ∝ Qn².

Finally, Geiger and Marsden tested how the scattering varied with the velocity of the alpha particles (i.e. if s ∝ 1/v⁴). Using the same apparatus, they slowed the alpha particles by placing extra sheets of mica in front of the alpha particle source.
They found that, within the range of experimental error, the number of scintillations was indeed proportional to 1/v⁴.

Positive charge on the nucleus: 1913
In his 1911 paper (see above), Rutherford assumed that the central charge of the atom was positive, but a negative charge would have fitted his scattering model just as well. In a 1913 paper, Rutherford declared that the "nucleus" (as he now called it) was indeed positively charged, based on the result of experiments exploring the scattering of alpha particles in various gases.

In 1917, Rutherford and his assistant William Kay began exploring the passage of alpha particles through gases such as hydrogen and nitrogen. In this experiment, they shot a beam of alpha particles through hydrogen, and they carefully placed their detector—a zinc sulfide screen—just beyond the range of the alpha particles, which were absorbed by the gas. They nonetheless picked up charged particles of some sort causing scintillations on the screen. Rutherford interpreted this as alpha particles knocking the hydrogen nuclei forwards in the direction of the beam, not backwards.

Rutherford's scattering model
Rutherford begins his 1911 paper with a discussion of Thomson's results on the scattering of beta particles, a form of radioactivity that results in high-velocity electrons. Thomson's model had electrons circulating inside a sphere of positive charge. Rutherford highlights the need for compound or multiple scattering events: the deflections predicted for each collision are much less than one degree. He then proposes a model which will produce large deflections on a single encounter: place all of the positive charge at the centre of the sphere and ignore the electron scattering as insignificant. The concentrated charge will explain why most alpha particles do not scatter to any measurable degree – they fly past too far from the charge – and yet particles that do pass very close to the centre scatter through large angles.

Maximum nuclear size estimate
Rutherford begins his analysis by considering a head-on collision between the alpha particle and the atom. This establishes the minimum distance between them, a value which is used throughout his calculations. Assuming there are no external forces and that initially the alpha particle is far from the nucleus, the inverse-square law between the charges on the alpha particle and nucleus gives the potential energy gained by the particle as it approaches the nucleus. For a head-on collision between the alpha particle and the nucleus, all the kinetic energy of the alpha particle is turned into potential energy and the particle stops and turns back. Where the particle stops at a distance r_min from the centre, the potential energy matches the original kinetic energy:

  (1/2) m v² = k q_a q_g / r_min

where k is the Coulomb constant, q_a the charge of the alpha particle and q_g the charge of the nucleus. Rearranging:

  r_min = 2 k q_a q_g / (m v²)

For an alpha particle:
m (mass) ≈ 6.6×10⁻²⁷ kg
q_a = 2 × 1.6×10⁻¹⁹ C (for the alpha particle)
q_g = 79 × 1.6×10⁻¹⁹ C (for gold)
v (initial velocity) ≈ 2.0×10⁷ m/s (for this example)

The distance from the alpha particle to the centre of the nucleus (r_min) at this point is an upper limit for the nuclear radius. Substituting these values in gives about 2.7×10⁻¹⁴ m, or 27 fm. (The true radius is about 7.3 fm.) The true radius of the nucleus is not recovered in these experiments because the alphas do not have enough energy to penetrate to within 27 fm of the nuclear centre, as noted, when the actual radius of gold is 7.3 fm.
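The closest-approach estimate above is easy to reproduce numerically; the sketch below plugs in the same illustrative values used in the text.

```python
# Distance of closest approach for a head-on alpha-gold collision:
# (1/2) m v^2 = k qa qg / r_min  ->  r_min = 2 k qa qg / (m v^2)
K = 8.99e9          # N m^2 / C^2, Coulomb constant
E_CHARGE = 1.6e-19  # C, elementary charge

m = 6.64e-27        # kg, alpha particle mass
qa = 2 * E_CHARGE   # alpha particle charge (Z = 2)
qg = 79 * E_CHARGE  # gold nucleus charge (Z = 79)
v = 2.0e7           # m/s, initial speed used in the text's example

r_min = 2 * K * qa * qg / (m * v**2)
print(f"r_min = {r_min:.2e} m = {r_min * 1e15:.0f} fm")  # ~2.7e-14 m, i.e. ~27 fm
```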
Rutherford's 1911 paper started with a slightly different formula, suitable for a head-on collision with a sphere of positive charge:

  (1/2) m u² = N e E (1/b − 3/(2R) + b²/(2R³))

In Rutherford's notation, e is the elementary charge, N is the charge number of the nucleus (now also known as the atomic number), and E is the charge of an alpha particle. The convention in Rutherford's time was to measure charge in electrostatic units, distance in centimetres, force in dynes, and energy in ergs. The modern convention is to measure charge in coulombs, distance in metres, force in newtons, and energy in joules. Using coulombs requires using the Coulomb constant (k) in the equation. Rutherford used b as the turning point distance (called r_min above), and R is the radius of the atom. The first term is the Coulomb repulsion used above. This form assumes the alpha particle could penetrate the positive charge. At the time of Rutherford's paper, Thomson's plum pudding model proposed a positive charge with the radius of an atom, thousands of times larger than the r_min found above. Figure 1 shows how concentrated this potential is compared to the size of the atom. Many of Rutherford's results are expressed in terms of this turning point distance r_min, simplifying the results and limiting the need for units to this calculation of the turning point.

Single scattering by a heavy nucleus
From his results for a head-on collision, Rutherford knows that alpha particle scattering occurs close to the centre of an atom, at a radius 10,000 times smaller than the atom. The electrons have negligible effect. He begins by assuming no energy loss in the collision, that is, he ignores the recoil of the target atom. He will revisit each of these issues later in his paper. Under these conditions, the alpha particle and atom interact through a central force, a physical problem studied first by Isaac Newton. A central force only acts along a line between the particles, and when the force varies with the inverse square, like the Coulomb force in this case, a detailed theory was developed under the name of the Kepler problem. The well-known solutions to the Kepler problem are called orbits, and unbound orbits are hyperbolas. Thus Rutherford proposed that the alpha particle will take a hyperbolic trajectory in the repulsive force near the centre of the atom, as shown in Figure 2.

To apply the hyperbolic trajectory solutions to the alpha particle problem, Rutherford expresses the parameters of the hyperbola in terms of the scattering geometry and energies. He starts with conservation of angular momentum. When the particle of mass m and initial velocity v₀ is far from the atom, its angular momentum around the centre of the atom will be m v₀ b, where b is the impact parameter, which is the lateral distance between the alpha particle's path and the atom. At the point of closest approach, labelled A in Figure 2, the velocity is perpendicular to the radius, so the angular momentum will be m v_A r_A. Therefore

  m v₀ b = m v_A r_A

Rutherford also applies the law of conservation of energy between the same two points:

  (1/2) m v₀² = (1/2) m v_A² + k q_a q_g / r_A

The left-hand side and the first term on the right-hand side are the kinetic energies of the particle at the two points; the last term is the potential energy due to the Coulomb force between the alpha particle and atom at the point of closest approach (A). q_a is the charge of the alpha particle, q_g is the charge of the nucleus, and k is the Coulomb constant.
The energy equation can then be rearranged thus:

  v_A² = v₀² (1 − 2 k q_a q_g / (m v₀² r_A))

For convenience, the non-geometric physical variables in this equation can be contained in a variable r_min, which is the point of closest approach in the head-on collision scenario explored in a previous section of this article:

  r_min = 2 k q_a q_g / (m v₀²)

This allows Rutherford to simplify the energy equation to:

  v_A² = v₀² (1 − r_min / r_A)

This leaves two simultaneous equations for r_A, the first derived from the conservation of momentum equation and the second from the conservation of energy equation. Eliminating v_A and solving gives a new formula for r_A:

  r_A = (1/2) (r_min + √(r_min² + 4b²))

The next step is to find a formula for r_A from the geometry of the hyperbola. From Figure 2, r_A is the sum of two distances related to the hyperbola, SO and OA. Using the following logic, these distances can be expressed in terms of the angle β and the impact parameter b. The eccentricity of a hyperbola is a value that describes the hyperbola's shape. It can be calculated by dividing the focal distance by the length of the semi-major axis, which per Figure 2 is SO/OA. As can be seen in Figure 3, the eccentricity is also equal to sec β, where β is the angle between the major axis and the asymptote. Therefore:

  ε = SO/OA = sec β

As can be deduced from Figure 2, the focal distance SO is

  SO = b / sin β

and therefore

  OA = SO / ε = b cos β / sin β

With these formulas for SO and OA, the distance r_A can be written in terms of β and b, and simplified using a trigonometric identity known as a half-angle formula:

  r_A = SO + OA = (b / sin β)(1 + cos β) = b cot(β/2)

Applying the trigonometric identity known as the cotangent double-angle formula, cot β = (cot²(β/2) − 1) / (2 cot(β/2)), and equating the two expressions for r_A gives a simpler relationship between the physical and geometric variables:

  r_min = 2b cot β

The scattering angle of the particle is θ = π − 2β, and therefore β = (π − θ)/2. With the help of a trigonometric identity known as a reflection formula, cot((π − θ)/2) = tan(θ/2), the relationship between θ and b can be resolved to:

  tan(θ/2) = r_min / (2b)

which can be rearranged to give

  θ = 2 arctan(r_min / (2b))

Rutherford's paper tabulates some illustrative values relating the impact parameter to the deflection angle. Rutherford's approach to this scattering problem remains a standard treatment in textbooks on classical mechanics.

Intensity vs angle
To compare to experiments, the relationship between impact parameter and scattering angle needs to be converted to probability versus angle. The scattering cross section gives the relative intensity by angles. In classical mechanics, the scattering angle θ is uniquely determined by the initial kinetic energy of the incoming particles and the impact parameter b. Therefore, the number of particles scattered into an angle between θ and θ + dθ must be the same as the number of particles with associated impact parameters between b and b + db. For an incident intensity I, this implies:

  2π I b |db| = 2π I σ(θ) sin θ |dθ|

Thus the cross section depends on scattering angle as:

  σ(θ) = (b / sin θ) |db/dθ|

Using the impact parameter as a function of angle, b(θ) = (r_min/2) cot(θ/2), from the single scattering result above produces the Rutherford scattering cross section:

  s = (X n t / 16 r²) · csc⁴(Φ/2) · (2 k q_n q_a / (m v²))²

s = the number of alpha particles falling on unit area at an angle of deflection Φ
r = distance from the point of incidence of the α rays on the scattering material
X = total number of particles falling on the scattering material
n = number of atoms in a unit volume of the material
t = thickness of the foil
q_n = positive charge of the atomic nucleus
q_a = positive charge of the alpha particles
m = mass of an alpha particle
v = velocity of the alpha particle

This formula predicted the results that Geiger measured in the coming year. The scattering probability into small angles greatly exceeds the probability into larger angles, reflecting the tiny nucleus surrounded by empty space. However, for rare close encounters, large-angle scattering occurs with just a single target.
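A short numerical sketch of the two key results above: the deflection-angle relation θ = 2·arctan(r_min/2b), and the csc⁴(θ/2) shape of the cross section. It reuses the illustrative alpha-on-gold r_min from the earlier head-on collision example.

```python
import math

R_MIN = 2.74e-14   # m: illustrative alpha-on-gold closest approach from above

def scattering_angle(b: float) -> float:
    """Deflection angle (radians) for impact parameter b: theta = 2*atan(r_min / 2b)."""
    return 2 * math.atan(R_MIN / (2 * b))

def relative_cross_section(theta: float) -> float:
    """Angular shape of the Rutherford cross section, proportional to csc^4(theta/2)."""
    return 1.0 / math.sin(theta / 2) ** 4

# Most impact parameters give tiny deflections; only b comparable to r_min
# produces large angles, the signature of a compact nucleus.
for b in (1e-11, 1e-12, 1e-13, R_MIN / 2, 1e-15):
    print(f"b = {b:.0e} m -> theta = {math.degrees(scattering_angle(b)):7.3f} deg")

# csc^4 falloff: scattering at 10 deg is a few thousand times more likely than at 90 deg.
print(relative_cross_section(math.radians(10)) / relative_cross_section(math.radians(90)))
```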
At the end of his development of the cross section formula, Rutherford emphasises that the results apply to single scattering and thus require measurements with thin foils. For thin foils the degree of scattering is proportional to the foil thickness, in agreement with Geiger's measurements. Comparison to J. J. Thomson's results At the time of Rutherford's paper, J. J. Thomson was the "undisputed world master in the design of atoms". Rutherford needed to compare his new approach to Thomson's. Thomson's model, presented in 1910, modelled the electron collisions with hyperbolic orbits from his 1906 paper combined with a factor for the positive sphere. Multiple resulting small deflections compounded using a random walk. In his paper Rutherford emphasised that single scattering alone could account for Thomson's results if the positive charge were concentrated in the centre. Rutherford computes the probability of single scattering from a compact charge and demonstrates that it is 3 times larger than Thomson's multiple scattering probability. Rutherford completes his analysis including the effects of density and foil thickness, then concludes that thin foils are governed by single scattering, not multiple scattering. Later analysis showed Thomson's scattering model could not account for large-angle scattering. The maximum angular deflection from electron scattering or from the positive sphere each comes to less than 0.02°; even many such scattering events compounded would result in less than a one-degree average deflection and a probability of scattering through 90° of less than one in 10^3500. Target recoil Rutherford's analysis assumed that alpha particle trajectories turned at the centre of the atom but the exit velocity was not reduced. This is equivalent to assuming that the concentrated charge at the centre had infinite mass or was anchored in place. Rutherford discusses the limitations of this assumption by comparing scattering from lighter atoms like aluminium with heavier atoms like gold. If the concentrated charge is lighter it will recoil from the interaction, gaining momentum while the alpha particle loses momentum and consequently slows down. Modern treatments analyze this type of Coulomb scattering in the centre of mass reference frame. The six coordinates of the two particles (also called "bodies") are converted into three relative coordinates between the two particles and three centre-of-mass coordinates moving in space (called the lab frame). The interaction only occurs in the relative coordinates, giving an equivalent one-body problem just as Rutherford solved, but with different interpretations for the mass and scattering angle. Rather than the mass of the alpha particle, the more accurate formula including recoil uses the reduced mass: For Rutherford's alpha particle scattering from gold, with mass of 197, the reduced mass is very close to the mass of the alpha particle: For lighter aluminium, with mass 27, the effect is greater: a 13% difference in mass. Rutherford notes this difference and suggests experiments be performed with lighter atoms. The second effect is a change in scattering angle. The angle in the relative coordinate system or centre of mass frame needs to be converted to an angle in the lab frame. In the lab frame, denoted by a subscript L, the scattering angle for a general central potential is For a heavy particle like gold used by Rutherford, the factor can be neglected at almost all angles. Then the lab and relative angles are the same, . 
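The reduced-mass expression itself did not survive in this copy, but the sizes quoted above are easy to verify. Assuming the standard definition μ = mM/(m + M) and masses in atomic mass units, a short check in Python reproduces the roughly 2% correction for gold and 13% for aluminium:

```python
# Reduced mass mu = m*M/(m + M); masses in atomic mass units (assumed values).
m_alpha = 4.0
for name, M in (("gold", 197.0), ("aluminium", 27.0)):
    mu = m_alpha * M / (m_alpha + M)
    print(f"{name}: mu = {mu:.2f} u ({(1 - mu / m_alpha) * 100:.0f}% below the alpha mass)")
```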
The change in scattering angle alters the formula for differential cross-section needed for comparison to experiment. For any central potential, the differential cross-section in the lab frame is related to that in the centre-of-mass frame by where Limitations to Rutherford's scattering formula Very light nuclei and higher energies In 1919 Rutherford analyzed alpha particle scattering from hydrogen atoms, showing the limits of the 1911 formula even with corrections for reduced mass. Similar issues with smaller deviations for helium, magnesium, aluminium lead to the conclusion that the alpha particle was penetrating the nucleus in these cases. This allowed the first estimates of the size of atomic nuclei. Later experiments based on cyclotron acceleration of alpha particles striking heavier nuclei provided data for analysis of interaction between the alpha particle and the nuclear surface. However at energies that push the alpha particles deeper they are strongly absorbed by the nuclei, a more complex interaction. Quantum mechanics Rutherford's treatment of alpha particle scattering seems to rely on classical mechanics and yet the particles are of sub-atomic dimensions. However the critical aspects of the theory ultimately rely on conservation of momentum and energy. These concepts apply equally in classical and quantum regimes: the scattering ideas developed by Rutherford apply to subatomic elastic scattering problems like neutron-proton scattering. An alternative method to find the scattering angle This section presents an alternative method to find the relation between the impact parameter and deflection angle in a single-atom encounter, using a force-centric approach as opposed to the energy-centric one that Rutherford used. The scattering geometry is shown in this diagram The impact parameter b is the distance between the alpha particle's initial trajectory and a parallel line that goes through the nucleus. Smaller values of b bring the particle closer to the atom so it feels more deflection force resulting in a larger deflection angle θ. The goal is to find the relationship between b and the deflection angle. The alpha particle's path is a hyperbola and the net change in momentum runs along the axis of symmetry. From the geometry in the diagram and the magnitude of the initial and final momentum vectors, , the magnitude of can be related to the deflection angle: A second formula for involving b will give the relationship to the deflection angle. The net change in momentum can also be found by adding small increments to momentum all along the trajectory using the integral where is the distance between the alpha particle and the centre of the nucleus and is its angle from the axis of symmetry. These two are the polar coordinates of the alpha particle at time . Here the Coulomb force exerted along the line between the alpha particle and the atom is and the factor gives that part of the force causing deflection. The polar coordinates r and φ depend on t in the integral, but they must be related to each other as they both vary as the particle moves. Changing the variable and limits of integration from t to φ makes this connection explicit: The factor is the reciprocal of the angular velocity the particle. 
Since the force is only along the line between the particle and the atom, the angular momentum, which is proportional to the angular velocity, is constant: This law of conservation of angular momentum gives a formula for : Replacing in the integral for ΔP simultaneously eliminates the dependence on r: Applying the trigonometric identities and to simplify this result gives the second formula for : Solving for θ as a function of b gives the final result Why the plum pudding model was wrong J. J. Thomson himself didn't study alpha particle scattering, but he did study beta particle scattering. In his 1910 paper "On the Scattering of rapidly moving Electrified Particles", Thomson presented equations that modelled how beta particles scatter in a collision with an atom. Rutherford adapted those equations to alpha particle scattering in his 1911 paper "The Scattering of α and β Particles by Matter and the Structure of the Atom". Deflection by the positive sphere In Thomson's 1910 paper "On the Scattering of rapidly moving Electrified Particles", Thomson presented the following equation (in this article's notation) that isolates the effect of the positive sphere in the plum pudding model on an incoming beta particle. Thomson did not explain how he arrived at this equation, but this section provides an educated guess and at the same time adapts the equation to alpha particle scattering. Consider an alpha particle passing by a positive sphere of pure positive charge (no electrons) with a radius R and mass equal to those of a gold atom. The alpha particle passes just close enough to graze the edge of the sphere, which is where the electric field of the sphere is strongest. An earlier section of this article presented an equation which models how an incoming charged particle is deflected by another charged particle at a fixed position. This equation can be used to calculate the deflection angle in the special case in Figure 4 by setting the impact parameter b to the same value as the radius of the sphere R. So long as the alpha particle does not penetrate the sphere, there is no practical difference between a sphere of charge and a point charge. qg = positive charge of the gold atom = = qa = charge of the alpha particle = = R = radius of the gold atom = v = speed of the alpha particle = m = mass of the alpha particle = k = Coulomb constant = This shows that the largest possible deflection will be very small, to the point that the path of the alpha particle passing through the positive sphere of a gold atom is almost a straight line. Therefore in computing the average deflection, which will be smaller still, we will treat the particle's path through the sphere as a chord of length L. Inside a sphere of uniformly distributed positive charge, the force exerted on the alpha particle at any point along its path through the sphere is The lateral component of this force is The lateral change in momentum py is therefore The deflection angle is given by where px is the average horizontal momentum, which is first reduced then restored as horizontal force changes direction as the alpha particle goes across the sphere. Since the deflection is very small, can be treated as equal to . The chord length , per Pythagorean theorem. The average deflection angle sums the angle for values of b and L across the entire sphere and divides by the cross-section of the sphere: This matches Thomson's formula in his 1910 paper. 
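The numerical values listed above were lost in extraction, so the following sketch (Python) uses assumed typical values rather than Rutherford's or Thomson's own figures. It evaluates the small-angle grazing deflection θ ≈ 2k·qa·qg/(m·v²·R) for an alpha particle skimming the positive sphere, confirming that the sphere alone bends the path by only a few hundredths of a degree, so treating the path as a straight chord is justified:

```python
import math

# Assumed typical values (the article's own numbers were lost), SI units.
k = 8.9875e9              # Coulomb constant
e = 1.602e-19             # elementary charge
q_a, q_g = 2 * e, 79 * e  # alpha particle and gold-atom charges
R = 1.44e-10              # assumed radius of a gold atom, m
m = 6.64e-27              # alpha particle mass, kg
v = 1.6e7                 # assumed alpha particle speed, m/s

# Small-angle deflection when grazing the sphere (impact parameter b = R):
# theta ~ 2 k q_a q_g / (m v^2 R)
theta = 2 * k * q_a * q_g / (m * v**2 * R)
print(f"maximum deflection by the positive sphere: {math.degrees(theta):.3f} degrees")
```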
Deflection by the electrons Consider an alpha particle passing through an atom of radius R along a path of length L. The effect of the positive sphere is ignored so as to isolate the effect of the atomic electrons. As with the positive sphere, deflection by the electrons is expected to be very small, to the point that the path is practically a straight line. For the electrons within an arbitrary distance s of the alpha particle's path, their mean distance will be s. Therefore, the average deflection per electron will be where qe is the elementary charge. The average net deflection by all the electrons within this arbitrary cylinder of effect around the alpha particle's path is where N0 is the number of electrons per unit volume and is the volume of this cylinder. Treating L as a straight line, where b is the distance of this line from the centre. The mean of is therefore To obtain the mean deflection , replace in the equation for : where N is the number of electrons in the atom, equal to . Cumulative effect Applying Thomson's equations described above to an alpha particle colliding with a gold atom, using the following values: qg = positive charge of the gold atom = = qa = charge of the alpha particle = = qe = elementary charge = R = radius of the gold atom = v = speed of the alpha particle = m = mass of the alpha particle = k = Coulomb constant = N = number of electrons in the gold atom = 79 gives the average angle by which the alpha particle should be deflected by the atomic electrons as: The average angle by which an alpha particle should be deflected by the positive sphere is: The net deflection for a single atomic collision is: On average the positive sphere and the electrons alike provide very little deflection in a single collision. Thomson's model combined many single-scattering events from the atom's electrons and a positive sphere. Each collision may increase or decrease the total scattering angle. Only very rarely would a series of collisions all line up in the same direction. The result is similar to the standard statistical problem called a random walk. If the average deflection angle of the alpha particle in a single collision with an atom is , then the average deflection after n collisions is The probability that an alpha particle will be deflected by a total of more than 90° after n deflections is given by: where e is Euler's number (≈2.71828...). A gold foil with a thickness of 1.5 micrometers would be about 10,000 atoms thick. If the average deflection per atom is 0.008°, the average deflection after 10,000 collisions would be 0.8°. The probability of an alpha particle being deflected by more than 90° will be While in Thomson's plum pudding model it is mathematically possible that an alpha particle could be deflected by more than 90° after 10,000 collisions, the probability of such an event is so low as to be undetectable. This extremely small number shows that Thomson's model cannot explain the results of the Geiger–Marsden experiment of 1909. Notes on historical measurements Rutherford assumed the radius of atoms in general to be on the order of 10^−10 m and the positive charge of a gold atom to be about 100 times that of hydrogen (). The atomic weight of gold was known to be around 197 since early in the 19th century. From an experiment in 1906, Rutherford measured alpha particles to have a charge of and an atomic weight of 4, and alpha particles emitted by radon to have velocity of . 
Rutherford deduced that alpha particles are essentially helium atoms stripped of two electrons, but at the time scientists only had a rough idea of how many electrons atoms have and so the alpha particle was thought to have up to 10 electrons left. In 1906, J. J. Thomson measured the elementary charge to be about (). In 1909 Robert A. Millikan provided a more accurate measurement of , only 0.6% off the current accepted measurement. Jean Perrin in 1909 measured the mass of hydrogen to be , and if alpha particles are four times as heavy as that, they would have an absolute mass of . The convention in Rutherford's time was to measure charge in electrostatic units, distance in centimeters, force in dynes, and energy in ergs. The modern convention is to measure charge in coulombs, distance in meters, force in newtons, and energy in joules. Using coulombs requires using the Coulomb constant (k) in certain equations. In this article, Rutherford and Thomson's equations have been rewritten to fit modern notation conventions. See also Atomic theory Rutherford backscattering spectroscopy List of scattering experiments References Bibliography Chapter 4 Central forces External links Description of the experiment, from cambridgephysics.org Foundational quantum physics Physics experiments 1909 in science Ernest Rutherford Fixed-target experiments
Rutherford scattering experiments
[ "Physics" ]
9,803
[ "Quantum mechanics", "Foundational quantum physics", "Experimental physics", "Physics experiments" ]
320,056
https://en.wikipedia.org/wiki/Pyrex
Pyrex (trademarked as PYREX and pyrex) is a brand introduced by Corning Inc. in 1915, initially for a line of clear, low-thermal-expansion borosilicate glass used for laboratory glassware and kitchenware. It was later expanded in the 1930s to include kitchenware products made of soda–lime glass and other materials. Its name has become famous for making rectangular glass roasters. In 1998, the kitchenware division of Corning Inc. responsible for the development of Pyrex spun off from its parent company as Corning Consumer Products Company, subsequently renamed Corelle Brands. Corning Inc. no longer manufactures or markets consumer products, only industrial ones. History Borosilicate glass was first made by German chemist and glass technologist Otto Schott, founder of Schott AG in 1893, 22 years before Corning produced the Pyrex brand. Schott AG sells the product under the name "Duran". In 1908, Eugene Sullivan, director of research at Corning Glass Works, developed Nonex, a borosilicate low-expansion glass, to reduce breakage in shock-resistant lantern globes and battery jars. Sullivan had learned about Schott's borosilicate glass as a doctoral student in Leipzig, Germany. Jesse Littleton of Corning discovered the cooking potential of borosilicate glass by giving his wife Bessie Littleton a casserole dish made from a cut-down Nonex battery jar. Corning removed the lead from Nonex and developed it as a consumer product. Pyrex made its public debut in 1915 during World War I, positioned as an American-produced alternative to Duran. A Corning executive gave the following account of the etymology of the name "Pyrex": Corning purchased the Macbeth-Evans Glass Company in 1936 and their Charleroi, PA plant was used to produce Pyrex opal ware bowls and bakeware made of tempered soda–lime glass. In 1958 an internal design department was started by John B. Ward. He redesigned the Pyrex ovenware and Flameware. Over the years, designers such as Penny Sparke, Betty Baugh, Smart Design, TEAMS Design, and others have contributed to the design of the line. Corning divested itself of the Corning Consumer Products Company (now known as Corelle Brands) in 1998 and production of consumer Pyrex products went with it. Its previous licensing of the name to Newell Cookware Europe remained in effect. France-based cookware maker Arc International acquired Newell's European business in early 2006 to own rights to the brand in Europe, the Middle East and Africa. In 2007, Arc closed the Pyrex soda–lime factory in Sunderland, UK moving all European production to France. The Sunderland factory had first started making Pyrex in 1922. In 2014, Arc International sold off its Arc International Cookware division which operated the Pyrex business to Aurora Capital for its Resurgence Fund II. The division was renamed the International Cookware group. London-based private equity firm Kartesia purchased International Cookware in 2020. In 2021, Pyrex rival Duralex was acquired by International Cookware group for €3.5 million (US$4.2m). In March 2019, Corelle Brands, the makers of Pyrex in the United States, merged with Instant Brands, the makers of the Instant Pot. On June 12, 2023, Instant Brands filed for Chapter 11 bankruptcy after high interest rates and waning access to credit hit its cash position and made its debts unsustainable. The company emerged from bankruptcy on February 27, 2024 under the previous Corelle Brands moniker, after having sold off its appliance business ("Instant" branded products). 
Trademark In Europe, Africa, and the Middle East, a variation of the PYREX (all uppercase) trademark is licensed by International Cookware for bakeware that has been made of numerous materials including borosilicate and soda–lime glass, stoneware, metal, plus vitroceramic cookware. The pyrex (all lowercase, introduced in 1975) trademark is now used for kitchenware sold in the United States, South America, and Asia. In the past, the brand name has also been used for kitchen utensils and bakeware by other companies in regions such as Japan and Australia. It is a common misconception that the logo style alone indicates the type of glass used to manufacture the bakeware. Additionally, Corning's introduction of soda-lime-glass-based Pyrex in the 1940s predates the introduction of the all lowercase logo by nearly 30 years. Composition Older clear-glass Pyrex manufactured by Corning, Arc International's Pyrex products, and Pyrex laboratory glassware are made of borosilicate glass. According to the National Institute of Standards and Technology, borosilicate Pyrex is composed of (as percentage of weight): 4.0% boron, 54.0% oxygen, 2.8% sodium, 1.1% aluminum, 37.7% silicon, and 0.3% potassium. According to glass supplier Pulles and Hannique, borosilicate Pyrex is made of Corning 7740 glass and is equivalent in formulation to Schott Glass 8330 glass sold under the "Duran" brand name. The composition of both Corning 7740 and Schott 8330 is given as 80.6% , 12.6% , 4.2% , 2.2% , 0.1% , 0.1% , 0.05% , and 0.04% . In the late 1930s and 1940s, Corning also introduced new product lines under the Pyrex brand using different types of glass. Opaque tempered soda–lime glass was used to create decorated opal ware bowls and bakeware, and aluminosilicate glass was used for Pyrex Flameware stovetop cookware. The latter product had a bluish tint caused by the addition of alumino-sulfate. Beginning in the 1980s, production of clear Pyrex glass products manufactured in the USA by Corning was also shifted to tempered soda–lime glass, like their popular opal bakeware. This change was justified by stating that soda–lime glass has higher mechanical strength than borosilicate, making it more resistant to physical damage when dropped, which is believed to be the most common cause of breakage in glass bakeware. The glass is also cheaper to produce and more environmentally friendly. Its thermal shock resistance is lower than borosilicate's, leading to potential breakage from heat stress if used contrary to recommendations. Since the closure of the soda–lime plant in England in 2007, European Pyrex has been made solely from borosilicate. The differences between Pyrex-branded glass products have also led to controversy regarding safety issues. In 2008, the U.S. Consumer Product Safety Commission reported it had received 66 complaints by users reporting that their Pyrex glassware had shattered over the prior ten years, yet concluded that Pyrex glass bakeware does not present a safety concern. The consumer affairs magazine Consumer Reports investigated the issue and released test results, in January 2011, confirming that borosilicate glass bakeware was less susceptible to thermal shock breakage than tempered soda lime bakeware. They admitted their testing conditions were "contrary to instructions" provided by the manufacturer. 
STATS analyzed the data available and found that the most common way that users were injured by glassware was via mechanical breakage, being hit or dropped, and that "the change to soda lime represents a greater net safety benefit." Use in telescopes Because of its low expansion characteristics, borosilicate glass is often the material of choice for reflective optics in astronomy applications. In 1932, George Ellery Hale approached Corning with the challenge of fabricating the telescope mirror for the California Institute of Technology's Palomar Observatory project. A previous effort to fabricate the optic from fused quartz had failed, with the cast blank having voids. The mirror was cast by Corning during 1934–1936 out of borosilicate glass. After a year of cooling, during which it was almost lost to a flood, the blank was completed in 1935. The first blank now resides in the Corning Museum of Glass. See also Jena glass Citations General and cited references External links Pyrex Love, a vintage Pyrex reference site American brands Boron compounds Corning Inc. Glass trademarks and brands Kitchenware brands Kitchenware Low-expansion glass Products introduced in 1915 Companies that filed for Chapter 11 bankruptcy in 2023 Transparent materials
Pyrex
[ "Physics" ]
1,788
[ "Physical phenomena", "Optical phenomena", "Materials", "Transparent materials", "Matter" ]
1,505,909
https://en.wikipedia.org/wiki/Radio%20silence
In telecommunications, radio silence or emissions control (EMCON) is a status in which all fixed or mobile radio stations in an area are asked to stop transmitting for safety or security reasons. The term "radio station" may include anything capable of transmitting a radio signal. A single ship, aircraft, or spacecraft, or a group of them, may also maintain radio silence. Amateur radio Wilderness Protocol The Wilderness Protocol recommends that those stations able to do so should monitor the primary (and secondary, if possible) frequency every three hours starting at 7 AM, local time, for 5 minutes starting at the top of every hour, or even continuously. The Wilderness Protocol is now included in both the ARRL ARES Field Resources Manual and the ARES Emergency Resources Manual. Per the manual, the protocol is: The Wilderness protocol (see page 101, August 1995 QST) calls for hams in the wilderness to announce their presence on, and to monitor, the national calling frequencies for five minutes beginning at the top of the hour, every three hours from 7 AM to 7 PM while in the back country. A ham in a remote location may be able to relay emergency information through another wilderness ham who has better access to a repeater. National calling frequencies: 52.525, 146.52, 223.50, 446.00, 1294.50 MHz. Priority transmissions should begin with the LITZ (Long Interval Tone Zero or Long Time Zero) DTMF signal for at least 5 seconds. CQ like calls (to see who is out there) should not take place until after 4 minutes after the hour. Maritime mobile service Distress calls Radio silence can be used in nautical and aeronautical communications to allow faint distress calls to be heard (see Mayday). In the latter case, the controlling station can order other stations to stop transmitting with the proword "Seelonce Seelonce Seelonce". (The word uses an approximation of the French pronunciation of the word silence, "See-LAWNCE."). Once the need for radio silence is finished, the controlling station lifts radio silence by the prowords "Seelonce FINI." Disobeying a Seelonce Mayday order constitutes a serious criminal offence in most countries. The aviation equivalent of Seelonce Mayday is the phrase or command "Stop Transmitting - Distress (or Mayday)". "Distress traffic ended" is the phrase used when the emergency is over. Again, disobeying such an order is extremely dangerous and is therefore a criminal offence in most countries. Silent periods Up until the procedure was replaced by the Global Maritime Distress and Safety System (August 1, 2013 in the U.S.), maritime radio stations were required to observe radio silence on 500 kHz (radiotelegraph) for the three minutes between 15 and 18 minutes past the top of each hour, and for the three minutes between 45 and 48 minutes past the top of the hour; and were also required to observe radio silence on 2182 kHz (upper-sideband radiotelephony) for the first three minutes of each hour (H+00 to H+03) and for the three minutes following the bottom of the hour (H+30 to H+33). For 2182 kHz, this is still a legal requirement, according to 47 CFR 80.304 - Watch requirement during silence periods. Military An order for Radio silence is generally issued by the military where any radio transmission may reveal troop positions, either audibly from the sound of talking, or by radio direction finding. In extreme scenarios Electronic Silence ('Emissions Control' or EMCON) may also be put into place as a defence against interception. 
In the British Army, the imposition and lifting of radio silence will be given in orders or ordered by control using 'Battle Code' (BATCO). Control is the only authority to impose or lift radio silence either fully or selectively. The lifting of radio silence can only be ordered on the authority of the HQ that imposed it in the first place. During periods of radio silence a station may, with justifiable cause, transmit a message. This is known as Breaking Radio Silence. The necessary replies are permitted but radio silence is automatically re-imposed afterwards. The breaking station transmits its message using BATCO to break radio silence. The command for imposing radio silence is: Hello all stations, this is 0. Impose radio silence. Over. Other countermeasures are also applied to protect secrets against enemy signals intelligence. Electronic emissions can be used to plot a line of bearing to an intercepted signal, and if more than one receiver detects it, triangulation can estimate its location. Radio direction finding (RDF) was critically important during the Battle of Britain and reached a high state of maturity in early 1943 with the aid of United States institutions aiding British Research and Development under the pressures of the continuing Battle of the Atlantic during World War II when locating U-boats. One key breakthrough was marrying MIT/Raytheon developed CRT technology with pairs of RDF antennas giving a differentially derived instant bearing useful in tactical situations, enabling escorts to run down the bearing to an intercept. The U-boat command of Wolfpacks required a minimum once daily communications check-in, allowing new Hunter-Killer groups to localize U-boats tactically from April on, leading to dramatic swings in the fortunes of war in the battles between March, when the U-boats sank over 300 allied ships and "Black May" when the allies sank at least 44 U-boats—each without orders to exercise EMCON/radio silence. Other uses Radio silence can be maintained for other purposes, such as for highly sensitive radio astronomy. Radio silence can also occur for spacecraft whose antenna is temporarily pointed away from Earth in order to perform observations, or there is insufficient power to operate the radio transmitter, or during re-entry when the hot plasma surrounding the spacecraft blocks radio signals. In the USA, CONELRAD and EBS (which are now discontinued), and EAS (which is currently active) are also ways of maintaining radio silence, mainly in broadcasting, in the event of an attack. Examples of radio silence orders Radio silencing helped hide the Japanese attack on Pearl Harbor in World War II. The attackers had used AM radio station KGU in Honolulu as a homing signal. On June 2, 1942, during World War II, a nine-minute air-raid alert, including at 9:22 pm a radio silence order applied to all radio stations from Mexico to Canada. In January 1965, Syrian Armed Forces observed a period of radio silence which successfully detected Mossad spy Eli Cohen who was transmitting espionage work to Israel. See also Dead air Guard band Mapimí Silent Zone Radio quiet zone CONELRAD References Military communications Radio communications Spacecraft communication Emergency Alert System Civil defense Silence
Radio silence
[ "Engineering" ]
1,386
[ "Telecommunications engineering", "Spacecraft communication", "Radio communications", "Military communications", "Aerospace engineering" ]
1,507,559
https://en.wikipedia.org/wiki/Interdecadal%20Pacific%20oscillation
The Interdecadal Pacific oscillation (IPO) is an oceanographic/meteorological phenomenon similar to the Pacific decadal oscillation (PDO), but occurring in a wider area of the Pacific. While the PDO occurs in mid-latitudes of the Pacific Ocean in the northern hemisphere, the IPO stretches from the southern hemisphere into the northern hemisphere. The period of oscillation is roughly 15–30 years. Positive phases of the IPO are characterized by a warmer than average tropical Pacific and cooler than average northern Pacific. Negative phases are characterized by an inversion of this pattern, with cool tropics and warm northern regions. The IPO had positive phases (southeastern tropical Pacific warm) from 1922 to 1946 and 1978 to 1998, and a negative phase between 1947 and 1976. References Physical oceanography Regional climate effects Pacific Ocean Climate oscillations
Interdecadal Pacific oscillation
[ "Physics" ]
179
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
1,508,507
https://en.wikipedia.org/wiki/Vector%20projection
The vector projection (also known as the vector component or vector resolution) of a vector on (or onto) a nonzero vector is the orthogonal projection of onto a straight line parallel to . The projection of onto is often written as or . The vector component or vector resolute of perpendicular to , sometimes also called the vector rejection of from (denoted or ), is the orthogonal projection of onto the plane (or, in general, hyperplane) that is orthogonal to . Since both and are vectors, and their sum is equal to , the rejection of from is given by: To simplify notation, this article defines and Thus, the vector is parallel to the vector is orthogonal to and The projection of onto can be decomposed into a direction and a scalar magnitude by writing it as where is a scalar, called the scalar projection of onto , and is the unit vector in the direction of . The scalar projection is defined as where the operator ⋅ denotes a dot product, ‖a‖ is the length of , and θ is the angle between and . The scalar projection is equal in absolute value to the length of the vector projection, with a minus sign if the direction of the projection is opposite to the direction of , that is, if the angle between the vectors is more than 90 degrees. The vector projection can be calculated using the dot product of and as: Notation This article uses the convention that vectors are denoted in a bold font (e.g. ), and scalars are written in normal font (e.g. a1). The dot product of vectors and is written as , the norm of is written ‖a‖, the angle between and is denoted θ. Definitions based on angle θ Scalar projection The scalar projection of on is a scalar equal to where θ is the angle between and . A scalar projection can be used as a scale factor to compute the corresponding vector projection. Vector projection The vector projection of on is a vector whose magnitude is the scalar projection of on with the same direction as . Namely, it is defined as where is the corresponding scalar projection, as defined above, and is the unit vector with the same direction as : Vector rejection By definition, the vector rejection of on is: Hence, Definitions in terms of a and b When is not known, the cosine of can be computed in terms of and , by the following property of the dot product Scalar projection By the above-mentioned property of the dot product, the definition of the scalar projection becomes: In two dimensions, this becomes Vector projection Similarly, the definition of the vector projection of onto becomes: which is equivalent to either or Scalar rejection In two dimensions, the scalar rejection is equivalent to the projection of onto , which is rotated 90° to the left. Hence, Such a dot product is called the "perp dot product." Vector rejection By definition, Hence, By using the Scalar rejection using the perp dot product this gives Properties Scalar projection The scalar projection on is a scalar which has a negative sign if 90 degrees < θ ≤ 180 degrees. It coincides with the length of the vector projection if the angle is smaller than 90°. More exactly: if , if . Vector projection The vector projection of on is a vector which is either null or parallel to . More exactly: if , and have the same direction if , and have opposite directions if . Vector rejection The vector rejection of on is a vector which is either null or orthogonal to . More exactly: if or , is orthogonal to if , Matrix representation The orthogonal projection can be represented by a projection matrix. 
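Many of the symbols in the formulas above were lost when this text was extracted, so as an illustrative sketch only (Python with NumPy, not part of the original article), the projection a1 = (a·b / b·b) b, the rejection a2 = a − a1, and the rank-one projection matrix described next can be computed and checked as follows:

```python
import numpy as np

def project(a, b):
    """Vector projection of a onto b: (a.b / b.b) * b."""
    b = np.asarray(b, dtype=float)
    return (np.dot(a, b) / np.dot(b, b)) * b

def reject(a, b):
    """Vector rejection of a from b: the component of a orthogonal to b."""
    a = np.asarray(a, dtype=float)
    return a - project(a, b)

a = np.array([3.0, 4.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
a1 = project(a, b)          # [3, 0, 0]
a2 = reject(a, b)           # [0, 4, 0]
assert np.allclose(a1 + a2, a)          # projection + rejection = original vector
assert np.isclose(np.dot(a2, b), 0.0)   # rejection is orthogonal to b

# Projection matrix onto the unit vector b_hat: P = outer(b_hat, b_hat),
# so that P @ a equals the vector projection of a onto b.
b_hat = b / np.linalg.norm(b)
P = np.outer(b_hat, b_hat)
assert np.allclose(P @ a, a1)
```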
To project a vector onto the unit vector , it would need to be multiplied with this projection matrix: Uses The vector projection is an important operation in the Gram–Schmidt orthonormalization of vector space bases. It is also used in the separating axis theorem to detect whether two convex shapes intersect. Generalizations Since the notions of vector length and angle between vectors can be generalized to any n-dimensional inner product space, this is also true for the notions of orthogonal projection of a vector, projection of a vector onto another, and rejection of a vector from another. In some cases, the inner product coincides with the dot product. Whenever they don't coincide, the inner product is used instead of the dot product in the formal definitions of projection and rejection. For a three-dimensional inner product space, the notions of projection of a vector onto another and rejection of a vector from another can be generalized to the notions of projection of a vector onto a plane, and rejection of a vector from a plane. The projection of a vector on a plane is its orthogonal projection on that plane. The rejection of a vector from a plane is its orthogonal projection on a straight line which is orthogonal to that plane. Both are vectors. The first is parallel to the plane, the second is orthogonal. For a given vector and plane, the sum of projection and rejection is equal to the original vector. Similarly, for inner product spaces with more than three dimensions, the notions of projection onto a vector and rejection from a vector can be generalized to the notions of projection onto a hyperplane, and rejection from a hyperplane. In geometric algebra, they can be further generalized to the notions of projection and rejection of a general multivector onto/from any invertible k-blade. See also Scalar projection Vector notation References External links Projection of a vector onto a plane projection Transformation (function) Functions and mappings
Vector projection
[ "Mathematics" ]
1,118
[ "Functions and mappings", "Mathematical analysis", "Transformation (function)", "Mathematical objects", "Mathematical relations", "Geometry" ]
1,508,678
https://en.wikipedia.org/wiki/Kansei%20engineering
Kansei engineering (Japanese: 感性工学 kansei kougaku, emotional or affective engineering) aims at the development or improvement of products and services by translating the customer's psychological feelings and needs into the domain of product design (i.e. parameters). It was founded by Mitsuo Nagamachi, professor emeritus of Hiroshima University (also former Dean of Hiroshima International University and CEO of International Kansei Design Institute). Kansei engineering parametrically links the customer's emotional responses (i.e. physical and psychological) to the properties and characteristics of a product or service. In consequence, products can be designed to bring forward the intended feeling. It has been adopted as one of the topics for professional development by the Royal Statistical Society. Introduction Product design has become increasingly complex as products contain more functions and have to meet increasing demands such as user-friendliness, manufacturability and ecological considerations. With a shortened product lifecycle, development costs are likely to increase. Since errors in the estimations of market trends can be very expensive, companies therefore perform benchmarking studies that compare with competitors on strategic, process, marketing, and product levels. However, success in a certain market segment not only requires knowledge about the competitors and the performance of competing products, but also about the impressions which a product leaves to the customer. The latter requirement becomes much more important as products and companies are becoming mature. Customers purchase products based on subjective terms such as brand image, reputation, design, impression etc.. A large number of manufacturers have started to consider such subjective properties and develop their products in a way that conveys the company image. A reliable instrument is therefore needed: an instrument which can predict the reception of a product on the market before the development costs become too large. This demand has triggered the research dealing with the translation of the customer's subjective, hidden needs into concrete products. Research is done foremost in Asia, including Japan and Korea. In Europe, a network has been forged under the 6th EU framework. This network refers to the new research field as "emotional design" or "affective engineering". History People want to use products that are functional at the physical level, usable at the psychological level and attractive at the emotional level. Affective engineering is the study of the interactions between the customer and the product at that third level. It focuses on the relationships between the physical traits of a product and its affective influence on the user. Thanks to this field of research, it is possible to gain knowledge on how to design more attractive products and make the customers satisfied. Methods in affective engineering (or Kansei engineering) is one of the major areas of ergonomics (human factor engineering). The study of integrating affective values in artifacts is not new at all. Already in the 18th century philosophers such as Baumgarten and Kant established the area of aesthetics. In addition to pure practical values, artifacts always also had an affective component. One example is jewellery found in excavations from the Stone Ages. The period of Renaissance is also a good example. In the middle of the 20th century, the idea of aesthetics was deployed in scientific contexts. Charles E. 
Osgood developed his semantic differential method in which he quantified the peoples' perceptions of artifacts. Some years later, in 1960, Professors Shigeru Mizuno and Yoji Akao developed an engineering approach in order to connect peoples' needs to product properties. This method was called quality function deployment (QFD). Another method, the Kano model, was developed in the field of quality in the early 1980s by Professor Noriaki Kano, of Tokyo University. Kano's model is used to establish the importance of individual product features for the customer's satisfaction and hence it creates the optimal requirement for process oriented product development activities. A pure marketing technique is conjoint analysis. Conjoint analysis estimates the relative importance of a product's attributes by analysing the consumer's overall judgment of a product or service. A more artistic method is called Semantic description of environments. It is mainly a tool for examining how a single person or a group of persons experience a certain (architectural) environment. Although all of these methods are concerned with subjective impact, none of them can translate this impact to design parameters sufficiently. This can, however, be accomplished by Kansei engineering. Kansei engineering (KE) has been used as a tool for affective engineering. It was developed in the early 70s in Japan and is now widely spread among Japanese companies. In the middle of the 90s, the method spread to the United States, but cultural differences may have prevented the method to enfold its whole potential. Procedure As mentioned above, Kansei engineering can be considered as a methodology within the research field of 'affective engineering'. Some researchers have identified the content of the methodology. Shimizu et al. state that 'Kansei Engineering is used as a tool for product development and the basic principles behind it are the following: identification of product properties and correlation between those properties and the design characteristics'. According to Nagasawa, one of the forerunners of Kansei engineering, there are three focal points in the method: How to accurately understand consumer Kansei How to reflect and translate Kansei understanding into product design How to create a system and organization for Kansei orientated design A model on methodology Different types of Kansei engineering are identified and applied in various contexts. Schütte examined different types of Kansei engineering and developed a general model covering the contents of Kansei engineering. Choice of Domain Domain in this context describes the overall idea behind an assembly of products, i.e. the product type in general. Choosing the domain includes the definition of the intended target group and user type, market-niche and type, and the product group in question. Choosing and defining the domain are carried out on existing products, concepts and on design solutions yet unknown. From this, a domain description is formulated, serving as the basis for further evaluation. The process is necessary and has been described by Schütte in detail in a couple of publications. Span the Semantic Space The expression Semantic space was addressed for the first time by Osgood et al.. He posed that every artifact can be described in a certain vector space defined by semantic expressions (words). This is done by collecting a large number of words that describe the domain. 
Suitable sources are pertinent literature, commercials, manuals, specification list, experts etc. The number of the words gathered varies according to the product, typically between 100 and 1000 words. In a second step the words are grouped using manual (e.g. Affinity diagram) or mathematical methods (e.g. factor and/or cluster analysis). Finally a few representing words are selected from this spanning the Semantic Space. These words are called "Kansei words" or "Kansei Engineering words". Span the Space of Properties The next step is to span the Space of Product Properties, which is similar to the Semantic Space. The Space of Product Properties collects products representing the domain, identifies key features and selects product properties for further evaluation. The collection of products representing the domain is done from different sources such as existing products, customer suggestions, possible technical solutions and design concepts etc. The key features are found using specification lists for the products in question. To select properties for further evaluation, a Pareto-diagram can assist the decision between important and less important features. Synthesis In the synthesis step, the Semantic Space and the Space of Properties are linked together, as displayed in Figure 3. Compared to other methods in Affective Engineering, Kansei engineering is the only method that can establish and quantify connections between abstract feelings and technical specifications. For every Kansei word a number of product properties are found, affecting the Kansei word. Synthesis The research into constructing these links has been a core part of Nagamachi's work with Kansei engineering in the last few years. Nowadays, a number of different tools is available. Some of the most common tools are : Category Identification Regression Analysis /Quantification Theory Type I Rough Sets Theory Genetic Algorithm Fuzzy Sets Theory Model building and Test of Validity After doing the necessary stages, the final step of validation remains. This is done in order to check if the prediction model is reliable and realistic. However, in case of prediction model failure, it is necessary to update the Space of Properties and the Semantic Space, and consequently refine the model. The process of refinement is difficult due to the shortage of methods. This shows the need of new tools to be integrated. The existing tools can partially be found in the previously mentioned methods for the synthesis. Software tools Kansei engineering has always been a statistically and mathematically advanced methodology. Most types require good expert knowledge and a reasonable amount of experience to carry out the studies sufficiently. This has also been the major obstacle for a widespread application of Kansei engineering. In order to facilitate application some software packages have been developed in the recent years, most of them in Japan. There are two different types of software packages available: User consoles and data collection and analysis tools. User consoles are software programs that calculate and propose a product design based on the users' subjective preferences (Kanseis). However, such software requires a database that quantifies the connections between Kanseis and the combination of product attributes. For building such databases, data collection and analysis tools can be used. This part of the paper demonstrates some of the tools. 
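To make the regression-based synthesis step more concrete, the following is a minimal illustrative sketch (Python with NumPy) using invented data; it is not taken from any of the software packages named here, and real Kansei studies use more elaborate techniques such as Quantification Theory Type I for categorical design attributes:

```python
import numpy as np

# Illustrative only: link Kansei-word ratings to design parameters with
# ordinary least squares. All data below are invented for this sketch.
rng = np.random.default_rng(0)

# 20 product samples described by three numerically coded design properties
# (e.g. curvature, size, glossiness).
X = rng.uniform(0.0, 1.0, size=(20, 3))

# Average semantic-differential rating for one Kansei word (e.g. "elegant"),
# simulated here as depending mostly on the first and third property.
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0.0, 0.1, size=20)

# Least-squares fit: which properties drive the Kansei response?
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and property weights:", np.round(coef, 2))
```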
There are many more tools used in companies and universities, which might not be available to the public. User consoles Software As described above, Kansei data collection and analysis is often complex and connected with statistical analysis. Depending on which synthesis method is used, different computer software is used. Kansei Engineering Software (KESo) uses QT1 for linear analysis. The concept of Kansei Engineering Software (KESo) was developed at Linköping University in Sweden. The software generates online questionnaires for the collection of Kansei raw data. Another software package (Kn6) was developed at the Polytechnic University of Valencia in Spain. Both software packages improve the collection and evaluation of Kansei data. In this way even users with no specialist competence in advanced statistics can use Kansei engineering. See also Affective computing Gandhian engineering – for low cost, frugal, large distribution product design. Fahrvergnügen Japanese quality References External links KANSEI Innovation (Hiroshima, JAPAN) European Kansei Engineering group Ph.D thesis on Kansei Engineering (europe) Ph.D thesis on Website Emotional UX and Kansei Engineering The Japan Society of Kansei Engineering The Malaysian Research Intensive Group for Kansei/Affective Engineering International Conference on Kansei Engineering & Intelligent Systems KEIS QFD Institute KESoft Engineering disciplines
Kansei engineering
[ "Engineering" ]
2,192
[ "nan" ]
1,509,289
https://en.wikipedia.org/wiki/Magnetostatics
Magnetostatics is the study of magnetic fields in systems where the currents are steady (not changing with time). It is the magnetic analogue of electrostatics, where the charges are stationary. The magnetization need not be static; the equations of magnetostatics can be used to predict fast magnetic switching events that occur on time scales of nanoseconds or less. Magnetostatics is even a good approximation when the currents are not static – as long as the currents do not alternate rapidly. Magnetostatics is widely used in applications of micromagnetics such as models of magnetic storage devices as in computer memory. Applications Magnetostatics as a special case of Maxwell's equations Starting from Maxwell's equations and assuming that charges are either fixed or move as a steady current , the equations separate into two equations for the electric field (see electrostatics) and two for the magnetic field. The fields are independent of time and each other. The magnetostatic equations, in both differential and integral forms, are shown in the table below. Where ∇ with the dot denotes divergence, and B is the magnetic flux density, the first integral is over a surface with oriented surface element . Where ∇ with the cross denotes curl, J is the current density and is the magnetic field intensity, the second integral is a line integral around a closed loop with line element . The current going through the loop is . The quality of this approximation may be guessed by comparing the above equations with the full version of Maxwell's equations and considering the importance of the terms that have been removed. Of particular significance is the comparison of the term against the term. If the term is substantially larger, then the smaller term may be ignored without significant loss of accuracy. Re-introducing Faraday's law A common technique is to solve a series of magnetostatic problems at incremental time steps and then use these solutions to approximate the term . Plugging this result into Faraday's Law finds a value for (which had previously been ignored). This method is not a true solution of Maxwell's equations but can provide a good approximation for slowly changing fields. Solving for the magnetic field Current sources If all currents in a system are known (i.e., if a complete description of the current density is available) then the magnetic field can be determined, at a position r, from the currents by the Biot–Savart equation: This technique works well for problems where the medium is a vacuum or air or some similar material with a relative permeability of 1. This includes air-core inductors and air-core transformers. One advantage of this technique is that, if a coil has a complex geometry, it can be divided into sections and the integral evaluated for each section. Since this equation is primarily used to solve linear problems, the contributions can be added. For a very difficult geometry, numerical integration may be used. For problems where the dominant magnetic material is a highly permeable magnetic core with relatively small air gaps, a magnetic circuit approach is useful. When the air gaps are large in comparison to the magnetic circuit length, fringing becomes significant and usually requires a finite element calculation. The finite element calculation uses a modified form of the magnetostatic equations above in order to calculate magnetic potential. The value of can be found from the magnetic potential. The magnetic field can be derived from the vector potential. 
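As a concrete illustration of the numerical-integration approach mentioned above for the Biot–Savart law, the following sketch (Python with NumPy; the loop geometry and values are assumptions, not from the article) sums the discretized integrand for a circular current loop and checks the result against the analytic on-axis field B = μ0·I/(2R) at the centre:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def biot_savart_loop(r, radius=0.05, current=1.0, segments=500):
    """Magnetic flux density at point r from a circular loop in the xy-plane,
    centred at the origin, by summing the discretized Biot-Savart integrand."""
    phi = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    pts = radius * np.column_stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)])
    dl = np.diff(np.vstack([pts, pts[:1]]), axis=0)   # current-segment vectors
    rel = np.asarray(r, dtype=float) - pts            # from each segment to r
    dist = np.linalg.norm(rel, axis=1, keepdims=True)
    dB = MU0 * current / (4.0 * np.pi) * np.cross(dl, rel) / dist**3
    return dB.sum(axis=0)

# Field at the centre of the loop; the analytic value is mu0*I/(2*R).
B = biot_savart_loop([0.0, 0.0, 0.0])
print(B[2], MU0 * 1.0 / (2 * 0.05))   # both about 1.26e-5 T
```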
Since the divergence of the magnetic flux density is always zero, and the relation of the vector potential to current is: Magnetization Strongly magnetic materials (i.e., ferromagnetic, ferrimagnetic or paramagnetic) have a magnetization that is primarily due to electron spin. In such materials the magnetization must be explicitly included using the relation Except in the case of conductors, electric currents can be ignored. Then Ampère's law is simply This has the general solution where is a scalar potential. Substituting this in Gauss's law gives Thus, the divergence of the magnetization, has a role analogous to the electric charge in electrostatics and is often referred to as an effective charge density . The vector potential method can also be employed with an effective current density See also Darwin Lagrangian References External links Electric and magnetic fields in matter Potentials
Magnetostatics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
873
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
1,510,452
https://en.wikipedia.org/wiki/Roll%20center
The roll center of a vehicle is the notional point at which the cornering forces in the suspension are reacted to the vehicle body. There are two definitions of roll center. The most commonly used is the geometric (or kinematic) roll center, whereas the Society of Automotive Engineers uses a force-based definition. Definition Geometric roll center is solely dictated by the suspension geometry, and can be found using principles of the instant center of rotation. Force based roll center, according to the US Society of Automotive Engineers, is "The point in the transverse vertical plane through any pair of wheel centers at which lateral forces may be applied to the sprung mass without producing suspension roll". The lateral location of the roll center is typically at the center-line of the vehicle when the suspension on the left and right sides of the car are mirror images of each other. The significance of the roll center can only be appreciated when the vehicle's center of mass is also considered. If there is a difference between the position of the center of mass and the roll center a moment arm is created. When the vehicle experiences angular velocity due to cornering, the length of the moment arm, combined with the stiffness of the springs and possibly anti-roll bars (also called 'anti-sway bar'), defines how much the vehicle will roll. This has other effects too, such as dynamic load transfer. Application When the vehicle rolls the roll centers migrate. The roll center height has been shown to affect behavior at the initiation of turns such as nimbleness and initial roll control. Testing methods Current methods of analyzing individual wheel instant centers have yielded more intuitive results of the effects of non-rolling weight transfer effects. This type of analysis is better known as the lateral-anti method. This is where one takes the individual instant center locations of each corner of the car and then calculates the resultant vertical reaction vector due to lateral force. This value then is taken into account in the calculation of a jacking force and lateral weight transfer. This method works particularly well in circumstances where there are asymmetries in left to right suspension geometry. The practical equivalent of the above is to push laterally at the tire contact patch and measure the ratio of the change in vertical load to the horizontal force. See also Weight distribution Vehicle metrics References Classical mechanics Geometric centers Vehicle technology
Roll center
[ "Physics", "Mathematics", "Engineering" ]
472
[ "Point (geometry)", "Geometric centers", "Classical mechanics", "Vehicle technology", "Mechanics", "Mechanical engineering by discipline", "Symmetry" ]
1,511,304
https://en.wikipedia.org/wiki/Wetting%20layer
A wetting layer is a monolayer of atoms that is epitaxially grown on a flat surface. The atoms forming the wetting layer can be semimetallic elements/compounds or metallic alloys (for thin films). Wetting layers form when depositing a lattice-mismatched material on a crystalline substrate. This article refers to the wetting layer connected to the growth of self-assembled quantum dots (e.g. InAs on GaAs). These quantum dots form on top of the wetting layer. The wetting layer can influence the states of the quantum dot for applications in quantum information processing and quantum computation. Process The wetting layer is epitaxially grown on a surface using molecular beam epitaxy (MBE). The temperatures required for wetting layer growth typically range from 400 to 500 degrees Celsius. When a material A is deposited on a surface of a lattice-mismatched material B, the first atomic layer of material A often adopts the lattice constant of B. This monolayer of material A is called the wetting layer. When the thickness of layer A increases further, it becomes energetically unfavorable for material A to keep the lattice constant of B. Due to the high strain of layer A, additional atoms group together once a certain critical thickness of layer A is reached. This island formation reduces the elastic energy. Overgrown with material B, the wetting layer forms a quantum well if material A has a lower bandgap than B. In this case, the formed islands are quantum dots. Further annealing can be used to modify the physical properties of the wetting layer/quantum dot. Properties The wetting layer is a nearly mono-atomic layer with a thickness of typically 0.5 nanometers. The electronic properties of the quantum dot can change as a result of the wetting layer. Also, the strain of the quantum dot can change due to the wetting layer. Notes External links Wetting layer on arxiv.org group website of M. Dähne Quantum electronics Thin film deposition
Wetting layer
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
427
[ "Quantum electronics", "Thin film deposition", "Coatings", "Thin films", "Quantum mechanics", "Condensed matter physics", "Nanotechnology", "Planes (geometry)", "Solid state engineering" ]
14,407,845
https://en.wikipedia.org/wiki/Sextuple%20bond
A sextuple bond is a type of covalent bond involving 12 bonding electrons and in which the bond order is 6. The only known molecules with true sextuple bonds are the diatomic dimolybdenum (Mo2) and ditungsten (W2), which exist in the gaseous phase and have boiling points of and respectively. Theoretical analysis Roos et al. argue that no stable element can form bonds of higher order than a sextuple bond, because the latter corresponds to a hybrid of the s orbital and all five d orbitals, and f orbitals contract too close to the nucleus to bond in the lanthanides. Indeed, quantum mechanical calculations have revealed that the dimolybdenum bond is formed by a combination of two σ bonds, two π bonds and two δ bonds. (Also, the σ and π bonds contribute much more significantly to the sextuple bond than the δ bonds.) Although no φ bonding has been reported for transition metal dimers, it is predicted that if any sextuply-bonded actinides were to exist, at least one of the bonds would likely be a φ bond, as in quintuply-bonded diuranium and dineptunium. No sextuple bond has been observed in lanthanides or actinides. For the majority of elements, even the possibility of a sextuple bond is foreclosed, because the d electrons couple ferromagnetically instead of bonding. The only known exceptions are dimolybdenum and ditungsten. Quantum-mechanical treatment The formal bond order (FBO) of a molecule is half the number of bonding electrons surplus to antibonding electrons; for a typical molecule, it attains exclusively integer values. A full quantum treatment requires a more nuanced picture, in which electrons may exist in a superposition, contributing fractionally to both bonding and antibonding orbitals. In a formal sextuple bond, there would be six different electron pairs; an effective sextuple bond would then have all six contributing almost entirely to bonding orbitals. In Roos et al.'s calculations, the effective bond order (EBO) could be determined by the formula where is the proportion of formal bonding orbital occupation for an electron pair , is the proportion of the formal antibonding orbital occupation, and is a correction factor accounting for deviations from equilibrium geometry. Several metal-metal bonds' EBOs are given in the table at right, compared to their formal bond orders. Dimolybdenum and ditungsten are the only molecules with effective bond orders above 5, with a quintuple bond and a partially formed sixth covalent bond. Dichromium, while formally described as having a sextuple bond, is best described as a pair of chromium atoms with all electron spins exchange-coupled to each other. While diuranium is also formally described as having a sextuple bond, relativistic quantum mechanical calculations have determined it to be a quadruple bond with four electrons ferromagnetically coupled to each other rather than in two formal bonds. Previous calculations on diuranium did not treat the electronic molecular Hamiltonian relativistically and produced higher bond orders of 4.2 with two ferromagnetically coupled electrons. Known instances: dimolybdenum and ditungsten Laser evaporation of a molybdenum sheet at low temperatures (7 K) produces gaseous dimolybdenum (Mo2). The resulting molecules can then be imaged with, for instance, near-infrared spectroscopy or UV spectroscopy. Both ditungsten and dimolybdenum have very short bond lengths compared to neighboring metal dimers. For example, sextuply-bonded dimolybdenum has an equilibrium bond length of 1.93 Å.
This equilibrium internuclear distance is significantly lower than in the dimer of any neighboring 4d transition metal, and suggestive of higher bond orders. However, the bond dissociation energies of ditungsten and dimolybdenum are rather low, because the short internuclear distance introduces geometric strain. One empirical technique to determine bond order is spectroscopic examination of bond force constants. Linus Pauling investigated the relationships between bonding atoms and developed a formula that predicts that bond order is roughly proportional to the force constant; that is, where is the bond order, is the force constant of the interatomic interaction and is the force constant of a single bond between the atoms. The table at right shows some select force constants for metal-metal dimers compared to their EBOs; consistent with a sextuple bond, molybdenum's summed force constant is substantially more than quintuple the single-bond force constant. Like dichromium, dimolybdenum and ditungsten are expected to exhibit a 1Σg+ singlet ground state. However, in tungsten, this ground state arises from a hybrid of either two 5D0 ground states or two 7S3 excited states. Only the latter corresponds to the formation of a stable, sextuply-bonded ditungsten dimer. Ligand effects Although sextuple bonding in homodimers is rare, it remains a possibility in larger molecules. Aromatics Theoretical computations suggest that bent dimetallocenes have a higher bond order than their linear counterparts. For this reason, the Schaefer lab has investigated dimetallocenes for natural sextuple bonds. However, such compounds tend to exhibit Jahn-Teller distortion, rather than a true sextuple bond. For example, dirhenocene is bent. Calculating its frontier molecular orbitals suggests the existence of relatively stable singlet and triplet states, with a sextuple bond in the singlet state. But that state is the excited one; the triplet ground state should exhibit a formal quintuple bond. Similarly, for the dibenzene complexes Cr2(C6H6)2, Mo2(C6H6)2, and W2(C6H6)2, molecular bonding orbitals for the triplet states with symmetries D6h and D6d indicate the possibility of an intermetallic sextuple bond. Quantum chemistry calculations reveal, however, that the corresponding D2h singlet geometry is stabler than the D6h triplet state by , depending on the central metal. Oxo ligands Both quantum mechanical calculations and photoelectron spectroscopy of the tungsten oxide clusters W2On (n = 1-6) indicate that increased oxidation state reduces the bond order in ditungsten. At first, the weak δ bonds break to yield a quadruply-bonded W2O; further oxidation generates the ditungsten complex W2O6 with two bridging oxo ligands and no direct W-W bonds. References Further reading Chemical bonding
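The Pauling force-constant argument described above (bond order roughly proportional to the force constant, normalised by the single-bond value) can be written out explicitly. The LaTeX below is a hedged restatement of that proportionality as given in the text, not the exact fitted relation from Pauling's work:

```latex
% Hedged restatement of the proportionality described in the text: bond order n
% taken as roughly proportional to the stretching force constant k_n,
% normalised by the single-bond force constant k_1 for the same pair of atoms.
\[
  n \;\approx\; \frac{k_n}{k_1}
\]
% For Mo2 the text states that the summed force constant exceeds five times the
% single-bond value, i.e. k_n / k_1 > 5, consistent with a bond order between
% five and six.
```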
Sextuple bond
[ "Physics", "Chemistry", "Materials_science" ]
1,437
[ "Chemical bonding", "Condensed matter physics", "nan" ]
14,411,227
https://en.wikipedia.org/wiki/Critical%20state%20soil%20mechanics
Critical state soil mechanics is the area of soil mechanics that encompasses the conceptual models representing the mechanical behavior of saturated remoulded soils based on the critical state concept. At the critical state, the relationship between forces applied in the soil (stress), and the resulting deformation resulting from this stress (strain) becomes constant. The soil will continue to deform, but the stress will no longer increase. Forces are applied to soils in a number of ways, for example when they are loaded by foundations, or unloaded by excavations. The critical state concept is used to predict the behaviour of soils under various loading conditions, and geotechnical engineers use the critical state model to estimate how soil will behave under different stresses. The basic concept is that soil and other granular materials, if continuously distorted until they flow as a frictional fluid, will come into a well-defined critical state. In practical terms, the critical state can be considered a failure condition for the soil. It's the point at which the soil cannot sustain any additional load without undergoing continuous deformation, in a manner similar to the behaviour of fluids. Certain properties of the soil, like porosity, shear strength, and volume, reach characteristic values. These properties are intrinsic to the type of soil and its initial conditions. Formulation The Critical State concept is an idealization of the observed behavior of saturated remoulded clays in triaxial compression tests, and it is assumed to apply to undisturbed soils. It states that soils and other granular materials, if continuously distorted (sheared) until they flow as a frictional fluid, will come into a well-defined critical state. At the onset of the critical state, shear distortions occur without any further changes in mean effective stress , deviatoric stress (or yield stress, , in uniaxial tension according to the von Mises yielding criterion), or specific volume : where, However, for triaxial conditions . Thus, All critical states, for a given soil, form a unique line called the Critical State Line (CSL) defined by the following equations in the space : where , , and are soil constants. The first equation determines the magnitude of the deviatoric stress needed to keep the soil flowing continuously as the product of a frictional constant (capital ) and the mean effective stress . The second equation states that the specific volume occupied by unit volume of flowing particles will decrease as the logarithm of the mean effective stress increases. History In an attempt to advance soil testing techniques, Kenneth Harry Roscoe of Cambridge University, in the late forties and early fifties, developed a simple shear apparatus in which his successive students attempted to study the changes in conditions in the shear zone both in sand and in clay soils. In 1958 a study of the yielding of soil based on some Cambridge data of the simple shear apparatus tests, and on much more extensive data of triaxial tests at Imperial College London from research led by Professor Sir Alec Skempton at Imperial College, led to the publication of the critical state concept . Roscoe obtained his undergraduate degree in mechanical engineering and his experiences trying to create tunnels to escape when held as a prisoner of war by the Nazis during WWII introduced him to soil mechanics. Subsequent to this 1958 paper, concepts of plasticity were introduced by Schofield and published in his textbook. 
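Before continuing the history, it is worth restating the critical state line introduced in the Formulation section above. Since the equations themselves do not appear in the text, the following LaTeX block gives them in one standard notation (q for deviatoric stress, p' for mean effective stress, v for specific volume); the symbol choices are an assumption, not the article's own.

```latex
% Triaxial stress invariants in one standard notation (stated as an assumption,
% since the article's own symbols are not reproduced in the text):
\[
  p' = \tfrac{1}{3}\left(\sigma'_{a} + 2\sigma'_{r}\right), \qquad
  q = \sigma'_{a} - \sigma'_{r}
\]
% Critical state line (CSL), matching the verbal description above: deviatoric
% stress proportional to mean effective stress through the frictional constant M,
% and specific volume decreasing with the logarithm of mean effective stress.
\[
  q = M\,p', \qquad
  v = \Gamma - \lambda \ln p'
\]
```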
Schofield was taught at Cambridge by Prof. John Baker, a structural engineer who was a strong believer in designing structures that would fail "plastically". Prof. Baker's theories strongly influenced Schofield's thinking on soil shear. Prof. Baker's views were developed from his pre-war work on steel structures and further informed by his wartime experiences assessing blast-damaged structures and with the design of the "Morrison Shelter", an air-raid shelter which could be located indoors . Original Cam-Clay Model The name cam clay asserts that the plastic volume change typical of clay soil behaviour is due to mechanical stability of an aggregate of small, rough, frictional, interlocking hard particles. The Original Cam-Clay model is based on the assumption that the soil is isotropic, elasto-plastic, deforms as a continuum, and it is not affected by creep. The yield surface of the Cam clay model is described by the equation where is the equivalent stress, is the pressure, is the pre-consolidation pressure, and is the slope of the critical state line in space. The pre-consolidation pressure evolves as the void ratio () (and therefore the specific volume ) of the soil changes. A commonly used relation is where is the virgin compression index of the soil. A limitation of this model is the possibility of negative specific volumes at realistic values of stress. An improvement to the above model for is the bilogarithmic form where is the appropriate compressibility index of the soil. Modified Cam-Clay Model Professor John Burland of Imperial College who worked with Professor Roscoe is credited with the development of the modified version of the original model. The difference between the Cam Clay and the Modified Cam Clay (MCC) is that the yield surface of the MCC is described by an ellipse and therefore the plastic strain increment vector (which is perpendicular to the yield surface) for the largest value of the mean effective stress is horizontal, and hence no incremental deviatoric plastic strain takes place for a change in mean effective stress (for purely hydrostatic states of stress). This is very convenient for constitutive modelling in numerical analysis, especially finite element analysis, where numerical stability issues are important (as a curve needs to be continuous in order to be differentiable). The yield surface of the modified Cam-clay model has the form where is the pressure, is the equivalent stress, is the pre-consolidation pressure, and is the slope of the critical state line. Critique The basic concepts of the elasto-plastic approach were first proposed by two mathematicians Daniel C. Drucker and William Prager (Drucker and Prager, 1952) in a short eight page note. In their note, Drucker and Prager also demonstrated how to use their approach to calculate the critical height of a vertical bank using either a plane or a log spiral failure surface. Their yield criterion is today called the Drucker-Prager yield criterion. Their approach was subsequently extended by Kenneth H. Roscoe and others in the soil mechanics department of Cambridge University. Critical state and elasto-plastic soil mechanics have been the subject of criticism ever since they were first introduced. The key factor driving the criticism is primarily the implicit assumption that soils are made of isotropic point particles. Real soils are composed of finite size particles with anisotropic properties that strongly determine observed behavior. 
Consequently, models based on a metals-based theory of plasticity are not able to model behavior of soils that is a result of anisotropic particle properties, one example of which is the drop in shear strength after peak strength, i.e., strain-softening behavior. Because of this, elasto-plastic soil models are only able to model "simple stress-strain curves" such as those from isotropic, normally or lightly overconsolidated "fat" clays, i.e., CL-ML type soils composed of very fine-grained particles. Also, in general, volume change is assumed to be governed by considerations from elasticity; this assumption is largely untrue for real soils and results in very poor matches of these models to observed volume changes or pore pressure changes. Further, elasto-plastic models describe the entire element as a whole and not specifically the conditions directly on the failure plane, as a consequence of which they do not model the stress-strain curve post failure, particularly for soils that exhibit strain-softening past the peak. Finally, most models separate out the effects of hydrostatic stress and shear stress, with each assumed to cause only volume change and shear change respectively. In reality, soil structure, being analogous to a "house of cards," shows both shear deformations on the application of pure compression, and volume changes on the application of pure shear. Additional criticisms are that the theory is "only descriptive," i.e., it only describes known behavior and lacks the ability to either explain or predict standard soil behaviors such as why the void ratio in a one-dimensional compression test varies linearly with the logarithm of the vertical effective stress; critical state soil mechanics simply assumes this behavior as a given. For these reasons, critical-state and elasto-plastic soil mechanics have been subject to charges of scholasticism; the tests used to demonstrate their validity are usually "confirmation tests" in which only simple stress-strain curves are shown to be modeled satisfactorily. The critical state and the concepts surrounding it have a long history of being "scholastic": Sir Alec Skempton, the “founding father” of British soil mechanics, attributed the scholastic nature of CSSM to Roscoe, of whom he said: “…he did little field work and was, I believe, never involved in a practical engineering job.” In the 1960s and 1970s, Prof. Alan Bishop at Imperial College used to routinely demonstrate the inability of these theories to match the stress-strain curves of real soils. Joseph (2013) has suggested that critical-state and elasto-plastic soil mechanics meet the criterion of a “degenerate research program”, a concept proposed by the philosopher of science Imre Lakatos for theories where excuses are used to justify an inability of theory to match empirical data. Response The claims that critical state soil mechanics is only descriptive and meets the criterion of a degenerate research program have not been settled. Andrew Jenike used a logarithmic-logarithmic relation to describe the compression test in his theory of critical state and admitted decreases in stress during converging flow and increases in stress during diverging flow. Chris Szalwinski has defined a critical state as a multi-phase state at which the specific volume is the same in both solid and fluid phases. Under his definition, the linear-logarithmic relation of the original theory and Jenike's logarithmic-logarithmic relation are special cases of a more general physical phenomenon.
Stress tensor formulations The plane-stress/plane-strain states of stress and the triaxial state of stress, under both drained and undrained conditions, can be written in matrix form and separated into distortional and volumetric parts; volumetric change appears only in the drained case. Example solution in matrix form The following data were obtained from a conventional triaxial compression test on a saturated (B=1), normally consolidated simple clay (Ladd, 1964); the cell pressure was held constant at 10 kPa, while the axial stress was increased to failure (axial compression test). Notes References Soil mechanics
Critical state soil mechanics
[ "Physics" ]
2,287
[ "Soil mechanics", "Applied and interdisciplinary physics" ]
14,411,733
https://en.wikipedia.org/wiki/Madelung%20equations
In theoretical physics, the Madelung equations, or the equations of quantum hydrodynamics, are Erwin Madelung's equivalent alternative formulation of the Schrödinger equation for a spinless non relativistic particle, written in terms of hydrodynamical variables, similar to the Navier–Stokes equations of fluid dynamics. The derivation of the Madelung equations is similar to the de Broglie–Bohm formulation, which represents the Schrödinger equation as a quantum Hamilton–Jacobi equation. History In the fall of 1926, Erwin Madelung reformulated Schrödinger's quantum equation in a more classical and visualizable form resembling hydrodynamics. His paper was one of numerous early attempts at different approaches to quantum mechanics, including those of Louis de Broglie and Earle Hesse Kennard. The most influential of these theories was ultimately de Broglie's through the 1952 work of David Bohm now called Bohmian mechanics Equations The Madelung equations are quantum Euler equations: where is the flow velocity, is the mass density, is the Bohm quantum potential, is the potential from the Schrödinger equation. The Madelung equations answer the question whether obeys the continuity equations of hydrodynamics and, subsequently, what plays the role of the stress tensor. The circulation of the flow velocity field along any closed path obeys the auxiliary quantization condition for all integers . Derivation The Madelung equations are derived by first writing the wavefunction in polar form with and both real and the associated probability density. Substituting this form into the probability current gives: where the flow velocity is expressed as However, the interpretation of as a "velocity" should not be taken too literal, because a simultaneous exact measurement of position and velocity would necessarily violate the uncertainty principle. Next, substituting the polar form into the Schrödinger equation and performing the appropriate differentiations, dividing the equation by and separating the real and imaginary parts, one obtains a system of two coupled partial differential equations: The first equation corresponds to the imaginary part of Schrödinger equation and can be interpreted as the continuity equation. The second equation corresponds to the real part and is also referred to as the quantum Hamilton-Jacobi equation. Multiplying the first equation by and calculating the gradient of the second equation results in the Madelung equations: with quantum potential Alternatively, the quantum Hamilton-Jacobi equation can be written in a form similar to the Cauchy momentum equation: with an external force defined as and a quantum pressure tensor The integral energy stored in the quantum pressure tensor is proportional to the Fisher information, which accounts for the quality of measurements. Thus, according to the Cramér–Rao bound, the Heisenberg uncertainty principle is equivalent to a standard inequality for the efficiency of measurements. Quantum energies The thermodynamic definition of the quantum chemical potential follows from the hydrostatic force balance above: According to thermodynamics, at equilibrium the chemical potential is constant everywhere, which corresponds straightforwardly to the stationary Schrödinger equation. Therefore, the eigenvalues of the Schrödinger equation are free energies, which differ from the internal energies of the system. The particle internal energy is calculated as and is related to the local Carl Friedrich von Weizsäcker correction. 
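Because the equations themselves do not appear above, the following LaTeX block is a reconstruction of the Madelung system in its commonly quoted form, using the polar decomposition sketched in the Derivation section; the notation (mass m, density ρ, velocity u, potentials V and Q) follows the variable descriptions in the text rather than the original paper.

```latex
% Polar decomposition of the wavefunction (as in the derivation above):
\[
  \psi = \sqrt{\rho}\, e^{iS/\hbar}, \qquad
  \mathbf{u} = \frac{\nabla S}{m}
\]
% Madelung (quantum Euler) equations: a continuity equation plus a momentum
% equation driven by the external potential V and the Bohm quantum potential Q.
\[
  \frac{\partial \rho}{\partial t} + \nabla\!\cdot(\rho\,\mathbf{u}) = 0
\]
\[
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\!\cdot\!\nabla)\,\mathbf{u}
  = -\frac{1}{m}\,\nabla\!\left(V + Q\right),
  \qquad
  Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}}
\]
```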
See also Quantum potential Quantum hydrodynamics Bohmian quantum mechanics Pilot wave theory Notes References Partial differential equations Quantum mechanics
Madelung equations
[ "Physics" ]
707
[ "Theoretical physics", "Quantum mechanics" ]
4,086,059
https://en.wikipedia.org/wiki/Thermoplastic%20olefin
Thermoplastic olefin, thermoplastic polyolefin (TPO), or olefinic thermoplastic elastomers refer to polymer/filler blends usually consisting of some fraction of a thermoplastic, an elastomer or rubber, and usually a filler. Outdoor applications such as roofing frequently contain TPO because it does not degrade under solar UV radiation, a common problem with nylons. TPO is used extensively in the automotive industry. Materials Thermoplastics Thermoplastics may include polypropylene (PP), polyethylene (PE), block copolymer polypropylene (BCPP), and others. Fillers Common fillers include, though are not restricted to talc, fiberglass, carbon fiber, wollastonite, and MOS (Metal Oxy Sulfate). Elastomers Common elastomers include ethylene propylene rubber (EPR), EPDM (EP-diene rubber), ethylene-octene (EO), ethylbenzene (EB), and styrene ethylene butadiene styrene (SEBS). Currently there are a great variety of commercially available rubbers and BCPP's. They are produced using regioselective and stereoselective catalysts known as metallocenes. The metallocene catalyst becomes embedded in the polymer and cannot be recovered. Creation Components for TPO are blended together at 210 - 270 °C under high shear. A twin screw extruder or a continuous mixer may be employed to achieve a continuous stream, or a Banbury compounder may be employed for batch production. A higher degree of mixing and dispersion is achieved in the batch process, but the superheat batch must immediately be processed through an extruder to be pelletized into a transportable intermediate. Thus batch production essentially adds an additional cost step. Structure The geometry of the metallocene catalyst will determine the sequence of chirality in the chain, as in, atactic, syndiotactic, isotactic, as well as average block length, molecular weight and distribution. These characteristics will in turn govern the microstructure of the blend. As in metal alloys the properties of a TPO product depend greatly upon controlling the size and distribution of the microstructure. PP and PE form lamellar crystallites separated by amorphous regions that can grow into a variety of microstructures ranging from single crystals from dilute solution crystallization to fiberous crystals and shish-kabob structures. Thin films from quiescent melts can form spherulitic impinging structures that display cylindrically symmetric birefringence. The PP and PE components of a blend constitute the "crystalline phase", and the rubber and branched PE chains and PE/PP end groups gives the "amorphous phase". If PP and PE are the dominant component of a TPO blend then the rubber fraction will be dispersed into a continuous matrix of "crystalline" polypropylene. If the fraction of rubber is greater than 40% phase inversion may be possible when the blend cools, resulting in an amorphous continuous phase, and a crystalline dispersed phase. This type of material is non-rigid, and is sometimes called TPR for ThermoPlastic Rubber. To increase the rigidity of a TPO blend, fillers exploit a surface tension phenomena. By selecting a filler with a higher surface area per weight, a higher flexural modulus can be achieved. Specific density of TPO blends range from 0.92 to 1.1. Application TPO is easily processed by injection molding, profile extrusion, and thermoforming. However, TPO cannot be blown, or sustain a film thickness less than 1/4 mil (about 6 micrometers). References Thermoplastic elastomers Materials science Polymer physics
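As a rough illustration of how composition relates to the 0.92 to 1.1 specific-density range quoted above, the following Python sketch applies a simple inverse rule of mixtures; the component densities and the example recipe are assumed handbook-style values, not data from the article.

```python
# Back-of-envelope density estimate for a TPO blend using an inverse rule of
# mixtures on mass fractions. Component densities are typical values assumed
# purely for illustration, not taken from the article.

def blend_density(mass_fractions, densities):
    """mass_fractions and densities are dicts keyed by component name (g/cm^3)."""
    assert abs(sum(mass_fractions.values()) - 1.0) < 1e-9
    specific_volume = sum(w / densities[name] for name, w in mass_fractions.items())
    return 1.0 / specific_volume

composition = {"PP": 0.60, "EPDM": 0.25, "talc": 0.15}   # hypothetical recipe
density = {"PP": 0.905, "EPDM": 0.86, "talc": 2.75}       # assumed g/cm^3
print(round(blend_density(composition, density), 3))       # ~0.99, inside 0.92-1.1
```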
Thermoplastic olefin
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
830
[ "Polymer physics", "Applied and interdisciplinary physics", "Materials science", "nan", "Polymer chemistry" ]
4,086,824
https://en.wikipedia.org/wiki/Selective%20soldering
Selective soldering is the process of selectively soldering components to printed circuit boards and molded modules that could be damaged by the heat of a reflow oven or wave soldering in a traditional surface-mount technology (SMT) or through-hole technology assembly processes. This usually follows an SMT oven reflow process; parts to be selectively soldered are usually surrounded by parts that have been previously soldered in a surface-mount reflow process, and the selective-solder process must be sufficiently precise to avoid damaging them. Processes Assembly processes used in selective soldering include: Selective aperture tooling over wave solder: These tools mask off areas previously soldered in the SMT reflow soldering process, exposing only those areas to be selectively soldered in the tool's aperture or window. The tool and printed circuit board (PCB) assembly are then passed over wave soldering equipment to complete the process. Each tool is specific to a PCB assembly. Mass selective dip solder fountain: A variant of selective-aperture soldering in which specialized tooling (with apertures to allow solder to be pumped through it) represent the areas to be soldered. The PCB is then presented over the selective-solder fountain; all selective soldering of the PCB is soldered simultaneously as the board is lowered into the solder fountain. Each tool is specific to a PCB assembly. Miniature wave selective solder : This typically uses a round miniature pumped solder wave, similar to the end of a pencil or crayon, to sequentially solder the PCB. The process is slower than the two previous methods, but more accurate. The PCB may be fixed, and the wave solder pot moved underneath the PCB; alternately, the PCB may be articulated over a fixed wave or solder bath to undergo the selective-soldering process. Unlike the first two examples, this process is toolless. Laser Selective Soldering System: A new system, able to import CAD-based board layouts and use that data to position a laser to directly solder any point on the board. Its advantages are the elimination of thermal stress, its non-contact quality, consistent high-quality solder joints and flexibility. Soldering time averages one second per joint; stencils and solder masks may be eliminated from the circuit board to reduce manufacturing costs. Less-common selective soldering processes include: Hot-iron solder with wire-solder feed Induction solder with paste-solder, solder-laden pads or preforms and hot gas (including hydrogen), with a number of methods of presenting the solder Other selective soldering applications are non-electronic, such as lead-frame attachment to ceramic substrates, coil-lead attachment, SMT attachment (such as LEDs to PCBs) and fire sprinklers (where the fuse is low-temperature solder alloys). Regardless of the selective soldering equipment used, there are two types of selective flux applicators: spray and dropjet fluxers. The spray fluxer applies atomized flux to a specific area, while the dropjet fluxer is more precise; the choice depends on the circumstances surrounding the soldering application. Miniature wave selective solder fountain The miniature wave selective solder fountain type is widely used, yielding good results if the PCB design and manufacturing process are optimized. 
Key requirements for selective fountain type soldering are: Process Nozzle diameter selection according to solder-joint geometry, nearby component clearance, component lead height and wettable or non-wettable nozzle Solder temperature: Set value or actual value on plated through-hole part Contact time Preheating Flux type: No-clean, organic-based; method of fluxing (spray or dropjet) Soldering: Drag, dip or angle method Design Temperature requirement (for soldered part) and component selection Nearby SMD through-hole component clearance Ratio of component pin diameter to plated through-hole Component lead length Thermal decoupling Solder masking (green masking) distance from component pad Drop-Jet The Drop-Jet is an Electromechanical device which is capable of depositing a droplet of flux on demand onto a surface such as a Printed Circuit Board and or component pin. Thermal profiling The thermal profile of the selective process is critical as with other common automated soldering techniques. Topside temperature measurements within the pre-heat stage must be verified as with conventional flow solder machine, additionally flux activation must be verified as sufficient. As number of miniature profiling dataloggers are now available to make the process more simple such as the Solderstar Pro units. Selective solder optimization A number of fixtures are available to allow daily checking of the selective solder process, these instruments allow the verification of machine parameters to be performed on a periodic basis. Parameters such as contact time, X/Y speeds, nozzle wave height and profile temperature can all be measured. Use of nitrogen atmosphere Selective soldering is normally undertaken in a nitrogen atmosphere. This prevents oxidation of the fountain surface and results in better wetting. Less flux is needed with less left-over residue. The use of nitrogen results in clean, shiny joints without the need for PCB cleaning or brushing. References Printed circuit board manufacturing Soldering
Selective soldering
[ "Engineering" ]
1,082
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
4,087,965
https://en.wikipedia.org/wiki/Genetic%20analysis
Genetic analysis is the overall process of studying and researching in fields of science that involve genetics and molecular biology. There are a number of applications that are developed from this research, and these are also considered parts of the process. The base system of analysis revolves around general genetics. Basic studies include identification of genes and inherited disorders. This research has been conducted for centuries on both a large-scale physical observation basis and on a more microscopic scale. Genetic analysis can be used generally to describe methods both used in and resulting from the sciences of genetics and molecular biology, or to applications resulting from this research. Genetic analysis may be done to identify genetic/inherited disorders and also to make a differential diagnosis in certain somatic diseases such as cancer. Genetic analyses of cancer include detection of mutations, fusion genes, and DNA copy number changes. History Much of the research that set the foundation of genetic analysis began in prehistoric times. Early humans found that they could practice selective breeding to improve crops and animals. They also identified inherited traits in humans that were eliminated over the years. The many genetic analyses gradually evolved over time. Mendelian research Modern genetic analysis began in the mid-1800s with research conducted by Gregor Mendel. Mendel, who is known as the "father of modern genetics", was inspired to study variation in plants. Between 1856 and 1863, Mendel cultivated and tested some 29,000 pea plants (i.e., Pisum sativum). This study showed that one in four pea plants had purebred recessive alleles, two out of four were hybrid and one out of four were purebred dominant. His experiments led him to make two generalizations, the Law of Segregation and the Law of Independent Assortment, which later became known as Mendel's Laws of Inheritance. Lacking the basic understanding of heredity, Mendel observed various organisms and first utilized genetic analysis to find that traits were inherited from parents and those traits could vary between children. Later, it was found that units within each cell are responsible for these traits. These units are called genes. Each gene is defined by a series of amino acids that create proteins responsible for genetic traits. Types Genetic analyses include molecular technologies such as PCR, RT-PCR, DNA sequencing, and DNA microarrays, and cytogenetic methods such as karyotyping and fluorescence in situ hybridisation. DNA sequencing DNA sequencing is essential to the applications of genetic analysis. This process is used to determine the order of nucleotide bases. Each molecule of DNA is made from adenine, guanine, cytosine and thymine, which determine what function the genes will possess. This was first discovered during the 1970s. DNA sequencing encompasses biochemical methods for determining the order of the nucleotide bases, adenine, guanine, cytosine, and thymine, in a DNA oligonucleotide. By generating a DNA sequence for a particular organism, you are determining the patterns that make up genetic traits and in some cases behaviors. Sequencing methods have evolved from relatively laborious gel-based procedures to modern automated protocols based on dye labelling and detection in capillary electrophoresis that permit rapid large-scale sequencing of genomes and transcriptomes. 
Knowledge of DNA sequences of genes and other parts of the genome of organisms has become indispensable for basic research studying biological processes, as well as in applied fields such as diagnostic or forensic research. The advent of DNA sequencing has significantly accelerated biological research and discovery. Cytogenetics Cytogenetics is a branch of genetics that is concerned with the study of the structure and function of the cell, especially the chromosomes. Polymerase chain reaction studies the amplification of DNA. Because of the close analysis of chromosomes in cytogenetics, abnormalities are more readily seen and diagnosed. Karyotyping A karyotype is the number and appearance of chromosomes in the nucleus of a eukaryotic cell. The term is also used for the complete set of chromosomes in a species, or an individual organism. Karyotypes describe the number of chromosomes, and what they look like under a light microscope. Attention is paid to their length, the position of the centromeres, banding pattern, any differences between the sex chromosomes, and any other physical characteristics. Karyotyping uses a system of studying chromosomes to identify genetic abnormalities and evolutionary changes in the past. DNA microarrays A DNA microarray is a collection of microscopic DNA spots attached to a solid surface. Scientists use DNA microarrays to measure the expression levels of large numbers of genes simultaneously or to genotype multiple regions of a genome. When a gene is expressed in a cell, it generates messenger RNA (mRNA). Overexpressed genes generate more mRNA than underexpressed genes. This can be detected on the microarray. Since an array can contain tens of thousands of probes, a microarray experiment can accomplish many genetic tests in parallel. Therefore, arrays have dramatically accelerated many types of investigations. PCR The polymerase chain reaction (PCR) is a biochemical technology in molecular biology to amplify a single or a few copies of a piece of DNA across several orders of magnitude, generating thousands to millions of copies of a particular DNA sequence. PCR is now a common and often indispensable technique used in medical and biological research labs for a variety of applications. These include DNA cloning for sequencing, DNA-based phylogeny, or functional analysis of genes; the diagnosis of hereditary diseases; the identification of genetic fingerprints (used in forensic sciences and paternity testing); and the detection and diagnosis of infectious diseases. Applications Cancer Numerous practical advancements have been made in the field of genetics and molecular biology through the processes of genetic analysis. One of the most prevalent advancements during the late 20th and early 21st centuries is a greater understanding of cancer's link to genetics. By identifying which genes in the cancer cells are working abnormally, doctors can better diagnose and treat cancers. Research Research has been able to identify the concepts of genetic mutations, fusion genes and changes in DNA copy numbers, and advances are made in the field every day. Much of these applications have led to new types of sciences that use the foundations of genetic analysis. Reverse genetics uses the methods to determine what is missing in a genetic code or what can be added to change that code. Genetic linkage studies analyze the spatial arrangements of genes and chromosomes. 
There have also been studies to determine the legal, social, and moral effects of the increased use of genetic analysis. References Analysis
Genetic analysis
[ "Biology" ]
1,392
[ "Genetics" ]
4,088,091
https://en.wikipedia.org/wiki/Main%20battery
A main battery is the primary weapon or group of weapons around which a warship is designed. As such, a main battery was historically a naval gun or group of guns used in volleys, as in the broadsides of cannon on a ship of the line. Later, this came to be turreted groups of similar large-caliber naval rifles. With the evolution of technology the term has come to encompass guided missiles and torpedoes as a warship's principal offensive weaponry, deployed both on surface ships and submarines. A main battery features common parts, munition and fire control system across the weapons which it comprises. Description In the age of cannon at sea, the main battery was the principal group of weapons around which a ship was designed, usually its heavies. With the coming of naval rifles and subsequent revolving gun turrets, the main battery became the principal group of heaviest guns, regardless of how many turrets they were placed in. As missiles displaced guns both above and below the water their principal group became a vessel's main battery. Between the age of sail and its cannons and the dreadnought era of large iron warships fighting ships' weapons deployments lacked standardization, with a variety of naval rifles of mixed breach and caliber scattered throughout vessels. Dreadnoughts resolved this in favor of a main battery of large guns, supported by largely defensive secondary batteries of smaller guns of standardized form, further augmented on large warships such as battleships and cruisers with smaller yet tertiary batteries. As air superiority became all-important early in World War II, weight of broadside fell by the wayside as a vessel's principal fighting asset. Anti-aircraft batteries of scores of small-caliber rapid-fire weapons came to supplant big guns even on large warships assigned to protect vital fast carrier task forces. At sea, ships such as small, fast destroyers assigned to convoy protection, essential in the transport of the enormous stock of materials required for land war particularly in the European Theater, came to rely more on depth charge projectors. The terms main battery and secondary battery fell out of favor as ships were designed to carry surface-to-air missiles and anti-ship missiles with greater range and heavier warheads than their guns. Such ships often referred to their remaining guns as simply the gun battery and to the missiles as the missile battery. Ships with more than one type of missile might refer to the batteries by the name of the missile. had a Talos battery and a Tartar battery. Examples The German battleship , carried a main battery of eight 15 inch (380mm) guns, along with a secondary battery of twelve 5.9 inch (150mm) guns for defense against destroyers and torpedo boats, and an anti-aircraft battery of various guns ranging in caliber from 4.1 inch (105mm) to 20mm guns. Many later ships during World War II used dual-purpose guns to combine the secondary battery and the heavier guns of the anti-aircraft battery for increased flexibility and economy. The United States Navy battleship had a main battery of nine guns arranged in three turrets, two forward and one aft. The secondary battery was 5-inch dual purpose guns, allowing use against other ships and aircraft. A dedicated anti-aircraft battery was composed of light Bofors 40 mm guns and Oerlikon 20 mm cannon. References Notes Weapons platforms Shipbuilding Naval warfare Naval artillery
Main battery
[ "Engineering" ]
678
[ "Shipbuilding", "Marine engineering" ]
4,088,449
https://en.wikipedia.org/wiki/Refractometer
A refractometer is a laboratory or field device for the measurement of an index of refraction (refractometry). The index of refraction is calculated from the observed refraction angle using Snell's law. For mixtures, the index of refraction then allows the concentration to be determined using mixing rules such as the Gladstone–Dale relation and Lorentz–Lorenz equation. Refractometry Standard refractometers measure the extent of light refraction (as part of a refractive index) of transparent substances in either a liquid or solid state; this is then used to identify a sample, analyze the sample's purity, and determine the amount or concentration of dissolved substances within the sample. As light passes from the air into the liquid it slows down and appears to ‘bend’; the severity of the ‘bend’ depends on the amount of substance dissolved in the liquid, for example the amount of sugar in a glass of water. Types There are four main types of refractometers: traditional handheld refractometers, digital handheld refractometers, laboratory or Abbe refractometers (named for the instrument's inventor and based on Ernst Abbe's original design of the 'critical angle') and inline process refractometers. There is also the Rayleigh refractometer, typically used for measuring the refractive indices of gases. In laboratory medicine, a refractometer is used to measure the total plasma protein in a blood sample and urine specific gravity in a urine sample. In drug diagnostics, a refractometer is used to measure the specific gravity of human urine. In gemology, the gemstone refractometer is one of the fundamental pieces of equipment used in a gemological laboratory. Gemstones are transparent minerals and can therefore be examined using optical methods. Refractive index is a material constant, dependent on the chemical composition of a substance. The refractometer is used to help identify gem materials by measuring their refractive index, one of the principal properties used in determining the type of a gemstone. Due to the dependence of the refractive index on the wavelength of the light used (i.e. dispersion), the measurement is normally taken at the wavelength of the sodium D line (NaD) of ~589 nm. This is either filtered out from daylight or generated with a monochromatic light-emitting diode (LED). Certain stones such as rubies, sapphires, tourmalines and topaz are optically anisotropic. They demonstrate birefringence based on the polarisation plane of the light. The two different refractive indices are distinguished using a polarisation filter. Gemstone refractometers are available both as classic optical instruments and as electronic measurement devices with a digital display. In marine aquarium keeping, a refractometer is used to measure the salinity and specific gravity of the water. In the automobile industry, a refractometer is used to measure the coolant concentration. In the machine industry, a refractometer is used to measure the amount of coolant concentrate that has been added to the water-based coolant for the machining process. In homebrewing, a brewing refractometer is used to measure the specific gravity before fermentation to determine the amount of fermentable sugars which will potentially be converted to alcohol. Brix refractometers are often used by hobbyists for making preserves including jams, marmalades and honey. In beekeeping, a Brix refractometer is used to measure the amount of water in honey.
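The Snell's-law step mentioned at the start of the article can be made concrete. The Python sketch below assumes a critical-angle (Abbe-type) measurement; the prism index and angle are hypothetical values for illustration, not the calibration of any particular instrument.

```python
import math

# Illustration of the Snell's-law step: in a critical-angle (Abbe-type)
# measurement, total internal reflection at the prism/sample interface begins
# where sin(theta_c) = n_sample / n_prism, so the sample's refractive index
# follows directly from the observed critical angle. All numbers are hypothetical.

def sample_index(n_prism, critical_angle_deg):
    return n_prism * math.sin(math.radians(critical_angle_deg))

n_prism = 1.72     # assumed high-index measuring prism
theta_c = 50.9     # observed critical angle in degrees (assumed reading)
print(round(sample_index(n_prism, theta_c), 4))   # ~1.335, a water-like sample
```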
Automatic Automatic refractometers automatically measure the refractive index of a sample. The automatic measurement of the refractive index of the sample is based on the determination of the critical angle of total reflection. A light source, usually a long-life LED, is focused onto a prism surface via a lens system. An interference filter guarantees the specified wavelength. Due to focusing light to a spot at the prism surface, a wide range of different angles is covered. As shown in the figure "Schematic setup of an automatic refractometer" the measured sample is in direct contact with the measuring prism. Depending on its refractive index, the incoming light below the critical angle of total reflection is partly transmitted into the sample, whereas for higher angles of incidence the light is totally reflected. This dependence of the reflected light intensity from the incident angle is measured with a high-resolution sensor array. From the video signal taken with the CCD sensor the refractive index of the sample can be calculated. This method of detecting the angle of total reflection is independent on the sample properties. It is even possible to measure the refractive index of optically dense strongly absorbing samples or samples containing air bubbles or solid particles . Furthermore, only a few microliters are required and the sample can be recovered. This determination of the refraction angle is independent of vibrations and other environmental disturbances. Influence of wavelength The refractive index of a given sample varies with wavelength for all materials. This dispersion relation is nonlinear and is characteristic for every material. In the visible range, a decrease of the refractive index comes with increasing wavelength. In glass prisms very little absorption is observable. In the infrared wavelength range several absorption maxima and fluctuations in the refractive index appear. To guarantee a high quality measurement with an accuracy of up to 0.00002 in the refractive index the wavelength has to be determined correctly. Therefore, in modern refractometers the wavelength is tuned to a bandwidth of +/-0.2 nm to ensure correct results for samples with different dispersions. Influence of temperature Temperature has a very important influence on the refractive index measurement. Therefore, the temperature of the prism and the temperature of the sample have to be controlled with high precision. There are several subtly-different designs for controlling the temperature; but there are some key factors common to all, such as high-precision temperature sensors and Peltier devices to control the temperature of the sample and the prism. The temperature control of these devices should be designed so that the variation in sample temperature is small enough that it will not cause a detectable refractive-index change. External water baths were used in the past but are no longer needed. Extended possibilities of automatic refractometers Automatic refractometers are microprocessor-controlled electronic devices. This means they can have a high degree of automation and also be combined with other measuring devices Flow cells There are different types of sample cells available, ranging from a flow cell for a few microliters to sample cells with a filling funnel for fast sample exchange without cleaning the measuring prism in between. The sample cells can also be used for the measurement of poisonous and toxic samples with minimum exposure to the sample. 
Micro cells require only a few microliters volume, assure good recovery of expensive samples and prevent evaporation of volatile samples or solvents. They can also be used in automated systems for automatic filling of the sample onto the refractometer prism. For convenient filling of the sample through a funnel, flow cells with a filling funnel are available. These are used for fast sample exchange in quality control applications. Automatic sample feeding Once an automatic refractometer is equipped with a flow cell, the sample can either be filled by means of a syringe or by using a peristaltic pump. Modern refractometers have the option of a built-in peristaltic pump. This is controlled via the instrument's software menu. A peristaltic pump opens the way to monitor batch processes in the laboratory or perform multiple measurements on one sample without any user interaction. This eliminates human error and assures a high sample throughput. If an automated measurement of a large number of samples is required, modern automatic refractometers can be combined with an automatic sample changer. The sample changer is controlled by the refractometer and assures fully automated measurements of the samples placed in the vials of the sample changer for measurements. Multiparameter measurements Today's laboratories do not only want to measure the refractive index of samples, but several additional parameters like density or viscosity to perform efficient quality control. Due to the microprocessor control and a number of interfaces, automatic refractometers are able to communicate with computers or other measuring devices, e.g. density meters, pH meters or viscosity meters, to store refractive index data and density data (and other parameters) into one database. Software features Automatic refractometers do not only measure the refractive index, but offer a lot of additional software features, like Instrument settings and configuration via software menu Automatic data recording into a database User-configurable data output Export of measuring data Statistical functions Predefined methods for different kinds of applications Automatic checks and adjustments Check if sufficient amount of sample is on the prism Data recording only if the results are plausible Pharma documentation and validation Refractometers are often used in pharmaceutical applications for quality control of raw intermediate and final products. The manufacturers of pharmaceuticals have to follow several international regulations like FDA 21 CFR Part 11, GMP, Gamp 5, USP<1058>, which require a lot of documentation work. The manufacturers of automatic refractometers support these users providing instrument software fulfills the requirements of 21 CFR Part 11, with user levels, electronic signature and audit trail. Furthermore, Pharma Validation and Qualification Packages are available containing Qualification Plan (QP) Design Qualification (DQ) Risk Analysis Installation Qualification (IQ) Operational Qualification (OQ) Check List 21 CFR Part 11 / SOP Performance Qualification (PQ) Scales typically used Brix Oechsle scale Plato scale Baumé scale See also Ernst Abbe Refractive index Gemology Must weight Winemaking Harvest (wine) Gravity (beer) High-fructose corn syrup Cutting fluid German inventors and discoverers High refractive index polymers References Further reading External links Refractometer – Gemstone Buzz uses, procedure & limitations. 
Rayleigh Refractometer: Operational Principles Refractometers and refractometry explains how refractometers work. Measuring instruments Scales Beekeeping tools Food analysis
Refractometer
[ "Chemistry", "Technology", "Engineering" ]
2,132
[ "Refractometers", "Food analysis", "Food chemistry", "Measuring instruments" ]
4,090,318
https://en.wikipedia.org/wiki/Vela%20Supernova%20Remnant
The Vela supernova remnant is a supernova remnant in the southern constellation Vela. Its source Type II supernova exploded approximately 11,000 years ago (and was about 900 light-years away). The association of the Vela supernova remnant with the Vela pulsar, made by astronomers at the University of Sydney in 1968, was direct observational evidence that supernovae form neutron stars. The Vela supernova remnant includes NGC 2736. Viewed from Earth, the Vela supernova remnant overlaps the Puppis A supernova remnant, which is four times more distant. Both the Puppis and Vela remnants are among the largest and brightest features in the X-ray sky. The Vela supernova remnant is one of the closest known to us. The Geminga pulsar is closer (and also resulted from a supernova), and in 1998 another near-Earth supernova remnant was discovered, RX J0852.0-4622, which from our point of view appears to be contained in the southeastern part of the Vela remnant. This remnant was not seen earlier because when viewed in most wavelengths, it is lost in the Vela remnant. See also CG 4 List of supernova remnants List of supernovae References External links Gum Nebula (annotated) Bill Blair's Vela Supernova Remnant page Gum Nebula Supernova remnants Vela (constellation)
Vela Supernova Remnant
[ "Astronomy" ]
290
[ "Vela (constellation)", "Constellations" ]
4,093,697
https://en.wikipedia.org/wiki/Monte%20Carlo%20localization
Monte Carlo localization (MCL), also known as particle filter localization, is an algorithm for robots to localize using a particle filter. Given a map of the environment, the algorithm estimates the position and orientation of a robot as it moves and senses the environment. The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is. The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space. Whenever the robot moves, it shifts the particles to predict its new state after the movement. Whenever the robot senses something, the particles are resampled based on recursive Bayesian estimation, i.e., how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual position of the robot. Basic description Consider a robot with an internal map of its environment. When the robot moves around, it needs to know where it is within this map. Determining its location and rotation (more generally, the pose) by using its sensor observations is known as robot localization. Because the robot may not always behave in a perfectly predictable way, it generates many random guesses of where it is going to be next. These guesses are known as particles. Each particle contains a full description of a possible future state. When the robot observes the environment, it discards particles inconsistent with this observation, and generates more particles close to those that appear consistent. In the end, hopefully most particles converge to where the robot actually is. State representation The state of the robot depends on the application and design. For example, the state of a typical 2D robot may consist of a tuple for position and orientation . For a robotic arm with 10 joints, it may be a tuple containing the angle at each joint: . The belief, which is the robot's estimate of its current state, is a probability density function distributed over the state space. In the MCL algorithm, the belief at a time is represented by a set of particles . Each particle contains a state, and can thus be considered a hypothesis of the robot's state. Regions in the state space with many particles correspond to a greater probability that the robot will be there—and regions with few particles are unlikely to be where the robot is. The algorithm assumes the Markov property that the current state's probability distribution depends only on the previous state (and not any ones before that), i.e., depends only on . This only works if the environment is static and does not change with time. Typically, on start up, the robot has no information on its current pose so the particles are uniformly distributed over the configuration space. Overview Given a map of the environment, the goal of the algorithm is for the robot to determine its pose within the environment. At every time the algorithm takes as input the previous belief , an actuation command , and data received from sensors ; and the algorithm outputs the new belief . 
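As a concrete illustration of the overview above, here is a minimal, self-contained Python sketch of the MCL update cycle for the one-dimensional corridor-with-doors example discussed below. The noise models, map values, and helper names are hypothetical simplifications, not the algorithm as published.

```python
import random

# Minimal 1D Monte Carlo localization sketch (hypothetical noise models and
# map values). The corridor is circular with length 10.0 and has doors at
# three known positions; the sensor reports whether the robot faces a door.

CORRIDOR = 10.0
DOORS = [1.0, 4.0, 7.0]

def near_door(x, tol=0.3):
    # Circular distance from x to the nearest door, compared against a tolerance.
    return any(abs((x - d + CORRIDOR / 2) % CORRIDOR - CORRIDOR / 2) < tol
               for d in DOORS)

def motion_update(particles, u, noise=0.1):
    # Shift every particle by the commanded motion u, plus actuation noise.
    return [(x + u + random.gauss(0.0, noise)) % CORRIDOR for x in particles]

def sensor_update(particles, z, p_hit=0.9):
    # Weight particles by how well the door/no-door reading z matches them.
    return [p_hit if near_door(x) == z else 1.0 - p_hit for x in particles]

def resample(particles, weights):
    # Draw a new particle set with probability proportional to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

def mcl_step(particles, u, z):
    particles = motion_update(particles, u)
    weights = sensor_update(particles, z)
    return resample(particles, weights)

# Start from a uniform belief, then move 1.0 and sense a door three times.
particles = [random.uniform(0.0, CORRIDOR) for _ in range(1000)]
for _ in range(3):
    particles = mcl_step(particles, u=1.0, z=True)
print(min(particles), max(particles))  # particles should begin to cluster
```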
Algorithm MCL(X_{t-1}, u_t, z_t):
    X'_t = X_t = empty set
    for i = 1 to M:
        x_t[i] = motion_update(u_t, x_{t-1}[i])
        w_t[i] = sensor_update(z_t, x_t[i])
        X'_t = X'_t + {(x_t[i], w_t[i])}
    endfor
    for i = 1 to M:
        draw x_t[j] from X'_t with probability proportional to w_t[j]
        X_t = X_t + {x_t[j]}
    endfor
    return X_t
Example for 1D robot Consider a robot in a one-dimensional circular corridor with three identical doors, using a sensor that returns either true or false depending on whether there is a door. After a few iterations of motion and sensing, most of the particles converge on the actual position of the robot, as desired. Motion update During the motion update, the robot predicts its new location based on the actuation command given, by applying the simulated motion to each of the particles. For example, if a robot moves forward, all particles move forward in their own directions no matter which way they point. If a robot rotates 90 degrees clockwise, all particles rotate 90 degrees clockwise, regardless of where they are. However, in the real world, no actuator is perfect: they may overshoot or undershoot the desired amount of motion. When a robot tries to drive in a straight line, it inevitably curves to one side or the other due to minute differences in wheel radius. Hence, the motion model must compensate for noise. Inevitably, the particles diverge during the motion update as a consequence. This is expected since a robot becomes less sure of its position if it moves blindly without sensing the environment. Sensor update When the robot senses its environment, it updates its particles to more accurately reflect where it is. For each particle, the robot computes the probability that, had it been at the state of the particle, it would perceive what its sensors have actually sensed. It assigns a weight w_t[i] to each particle, proportional to the said probability. Then, it randomly draws new particles from the previous belief, with probability proportional to w_t[i]. Particles consistent with sensor readings are more likely to be chosen (possibly more than once) and particles inconsistent with sensor readings are rarely picked. As such, particles converge towards a better estimate of the robot's state. This is expected since a robot becomes increasingly sure of its position as it senses its environment. Properties Non-parametricity The particle filter central to MCL can approximate multiple different kinds of probability distributions, since it is a non-parametric representation. Some other Bayesian localization algorithms, such as the Kalman filter (and variants, the extended Kalman filter and the unscented Kalman filter), assume the belief of the robot is close to being a Gaussian distribution and do not perform well for situations where the belief is multimodal. For example, a robot in a long corridor with many similar-looking doors may arrive at a belief that has a peak for each door, but the robot is unable to distinguish which door it is at. In such situations, the particle filter can give better performance than parametric filters. Another non-parametric approach to Markov localization is the grid-based localization, which uses a histogram to represent the belief distribution. Compared with the grid-based approach, the Monte Carlo localization is more accurate because the state represented in samples is not discretized. Computational requirements The particle filter's time complexity is linear with respect to the number of particles. Naturally, the more particles, the better the accuracy, so there is a compromise between speed and accuracy and it is desired to find an optimal value of M. 
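The following is a minimal, self-contained Python sketch of the loop above, applied to the one-dimensional circular corridor with three identical doors from the example. The corridor length, door positions, noise levels and particle count are illustrative assumptions rather than values from the article; a real implementation would use measured motion and sensor models.

import random

CORRIDOR = 100.0                  # corridor length; positions wrap around
DOORS = [20.0, 45.0, 80.0]        # assumed positions of the three identical doors
M = 1000                          # number of particles

def near_door(x, tol=2.0):
    # Sensor model: True if x is within tol of any door (circular distance).
    return any(abs((x - d + CORRIDOR / 2) % CORRIDOR - CORRIDOR / 2) < tol for d in DOORS)

def motion_update(particles, u, noise=0.5):
    # Shift every particle by the commanded motion u plus actuation noise.
    return [(x + u + random.gauss(0, noise)) % CORRIDOR for x in particles]

def sensor_update(particles, z, hit=0.9, miss=0.1):
    # Weight each particle by how well it explains the door/no-door reading z.
    return [hit if near_door(x) == z else miss for x in particles]

# Start with a uniform belief: the robot could be anywhere in the corridor.
particles = [random.uniform(0, CORRIDOR) for _ in range(M)]
true_pos = 10.0

for _ in range(20):                          # a few motion/sensing cycles
    true_pos = (true_pos + 3.0) % CORRIDOR   # simulated robot motion
    particles = motion_update(particles, 3.0)
    z = near_door(true_pos)                  # simulated sensor reading
    weights = sensor_update(particles, z)
    particles = random.choices(particles, weights=weights, k=M)  # resampling

close = sum(abs((x - true_pos + CORRIDOR / 2) % CORRIDOR - CORRIDOR / 2) < 5.0
            for x in particles) / M
print(f"true position {true_pos:.1f}; {close:.0%} of particles within 5 units")

Because the three doors look identical, the belief can stay multimodal for several cycles; the particles collapse onto the true position only once the sequence of door observations becomes unambiguous.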
One strategy to select M is to continuously generate additional particles until the next pair of command and sensor reading has arrived. This way, the greatest possible number of particles is obtained while not impeding the function of the rest of the robot. As such, the implementation is adaptive to available computational resources: the faster the processor, the more particles can be generated and therefore the more accurate the algorithm is. Compared to grid-based Markov localization, Monte Carlo localization has reduced memory usage since memory usage depends only on the number of particles and does not scale with the size of the map, and can integrate measurements at a much higher frequency. The algorithm can be improved using KLD sampling, as described below, which adapts the number of particles to use based on how sure the robot is of its position. Particle deprivation A drawback of the naive implementation of Monte Carlo localization occurs in a scenario where a robot sits at one spot and repeatedly senses the environment without moving. Suppose that the particles all converge towards an erroneous state, or that an occult hand picks up the robot and moves it to a new location after the particles have already converged. As particles far away from the converged state are rarely selected for the next iteration, they become scarcer on each iteration until they disappear altogether. At this point, the algorithm is unable to recover. This problem is more likely to occur for a small number of particles, e.g., , and when the particles are spread over a large state space. In fact, any particle filter algorithm may accidentally discard all particles near the correct state during the resampling step. One way to mitigate this issue is to randomly add extra particles on every iteration. This is equivalent to assuming that, at any point in time, the robot has some small probability of being kidnapped to a random position in the map, thus causing a fraction of random states in the motion model. By guaranteeing that no area in the map is totally deprived of particles, the algorithm is now robust against particle deprivation. Variants The original Monte Carlo localization algorithm is fairly simple. Several variants of the algorithm have been proposed, which address its shortcomings or adapt it to be more effective in certain situations. KLD sampling Monte Carlo localization may be improved by sampling the particles in an adaptive manner based on an error estimate using the Kullback–Leibler divergence (KLD). Initially, it is necessary to use a large M due to the need to cover the entire map with a uniformly random distribution of particles. However, when the particles have converged around the same location, maintaining such a large sample size is computationally wasteful. KLD–sampling is a variant of Monte Carlo localization where at each iteration, a sample size M_χ is calculated. The sample size is calculated such that, with probability 1 − δ, the error between the true posterior and the sample-based approximation is less than ε. The variables ε and δ are fixed parameters. The main idea is to create a grid (a histogram) overlaid on the state space. Each bin in the histogram is initially empty. At each iteration, a new particle is drawn from the previous (weighted) particle set with probability proportional to its weight. 
Instead of the resampling done in classic MCL, the KLD–sampling algorithm draws particles from the previous, weighted, particle set and applies the motion and sensor updates before placing the particle into its bin. The algorithm keeps track of the number of non-empty bins, k. If a particle is inserted in a previously empty bin, the value of M_χ is recalculated, which increases mostly linearly in k. This is repeated until the sample size is the same as M_χ. It is easy to see KLD–sampling culls redundant particles from the particle set, by only increasing M_χ when a new location (bin) has been filled. In practice, KLD–sampling consistently outperforms and converges faster than classic MCL. References Robot navigation Monte Carlo methods
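As a rough illustration of the bound described above, the sketch below computes the KLD-based sample size from the number of non-empty bins k, using the Wilson-Hilferty approximation of the chi-square quantile as in Fox's formulation of KLD-sampling; the particular values of ε and δ are illustrative assumptions.

from statistics import NormalDist

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    # Particle count needed so that, with probability 1 - delta, the KL
    # divergence between the sample-based belief and the true posterior
    # stays below epsilon, given k occupied histogram bins.
    if k <= 1:
        return 1  # the bound is only defined for more than one occupied bin
    z = NormalDist().inv_cdf(1.0 - delta)   # upper (1 - delta) normal quantile
    a = 2.0 / (9.0 * (k - 1))
    return int((k - 1) / (2.0 * epsilon) * (1.0 - a + (a ** 0.5) * z) ** 3)

for k in (2, 10, 100, 1000):
    print(k, kld_sample_size(k))

The printed values grow roughly linearly in k, which is why the particle set can shrink dramatically once the belief has collapsed onto a few bins.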
Monte Carlo localization
[ "Physics" ]
2,113
[ "Monte Carlo methods", "Computational physics" ]
4,093,822
https://en.wikipedia.org/wiki/System%20in%20a%20package
A system in a package (SiP) or system-in-package is a number of integrated circuits (ICs) enclosed in one chip carrier package or encompassing an IC package substrate that may include passive components and perform the functions of an entire system. The ICs may be stacked using package on package, placed side by side, and/or embedded in the substrate. The SiP performs all or most of the functions of an electronic system, and is typically used when designing components for mobile phones, digital music players, etc. Dies containing integrated circuits may be stacked vertically on the package substrate. They are internally connected by fine wires that are bonded to the package substrate. Alternatively, with a flip chip technology, solder bumps are used to join stacked chips together and to the package substrate, or even both techniques can be used in a single package. SiPs are like systems on a chip (SoCs) but less tightly integrated and not on a single semiconductor die. SIPs can be used either to reduce the size of a system, improve performance or to reduce costs. The technology evolved from multi chip module (MCM) technology, the difference being that SiPs also use die stacking, which stacks several chips or dies on top of each other. Technology SiP dies can be stacked vertically or tiled horizontally, with techniques like chiplets or quilt packaging. SiPs connect the dies with standard off-chip wire bonds or solder bumps, unlike slightly denser three-dimensional integrated circuits which connect stacked silicon dies with conductors running through the die using through-silicon vias. Many different 3D packaging techniques have been developed for stacking many fairly standard chip dies into a compact area. SiPs can contain several chips or dies—such as a specialized processor, DRAM, flash memory—combined with passive components—resistors and capacitors—all mounted on the same substrate. This means that a complete functional unit can be built in a single package, so that few external components need to be added to make it work. This is particularly valuable in space constrained environments like MP3 players and mobile phones as it reduces the complexity of the printed circuit board and overall design. Despite its benefits, this technique decreases the yield of fabrication since any defective chip in the package will result in a non-functional packaged integrated circuit, even if all other modules in that same package are functional. SiPs are in contrast to the common system on a chip (SoC) integrated circuit architecture which integrates components based on function into a single circuit die. An SoC will typically integrate a CPU, graphics and memory interfaces, hard-disk and USB connectivity, random-access and read-only memories, and secondary storage and/or their controllers on a single die. In comparison an SiP would connect these modules as discrete components in one or more chip packages or dies. An SiP resembles the common traditional motherboard-based PC architecture, as it separates components based on function and connects them through a central interfacing circuit board. An SiP has a lower grade of integration in comparison to an SoC. 
Hybrid integrated circuits (HICs) are somewhat similar to SiPs; however, HICs tend to handle analog signals whereas SiPs usually handle digital signals. Because of this, HICs use older or less advanced technology: they tend to use single-layer circuit boards or substrates, do not use die stacking, do not use flip chip or BGA for connecting components or dies, use only wire bonding for connecting dies or small-outline integrated circuit packages, and use dual in-line packages or single in-line packages for interfacing outside the hybrid IC instead of BGA. SiP technology is primarily being driven by early market trends in wearables, mobile devices and the internet of things, which do not demand production volumes as high as those of the established consumer and business SoC market. As the internet of things becomes more of a reality and less of a vision, there is innovation going on at the system on a chip and SiP level so that microelectromechanical (MEMS) sensors can be integrated on a separate die and control the connectivity. SiP solutions may require multiple packaging technologies, such as flip chip, wire bonding, wafer-level packaging, through-silicon vias (TSVs), chiplets and more. Suppliers Advanced Micro Devices Amkor Technology Atmel AMPAK Technology Inc. NANIUM, S.A. ASE Group CeraMicro ChipSiP Technology Cypress Semiconductor STATS ChipPAC Ltd Toshiba Renesas SanDisk Samsung Silicon Labs Octavo Systems Nordic Semiconductor JCET Desay Sip Universal Scientific Industrial (USI) See also Advanced packaging (semiconductors) Multi-chip module System on a chip (SoC) Hybrid integrated circuit (HIC) References Packaging (microfabrication) Integrated circuits Electronic design Microtechnology Computer systems
System in a package
[ "Materials_science", "Technology", "Engineering" ]
971
[ "Computer engineering", "Packaging (microfabrication)", "Microtechnology", "Electronic design", "Materials science", "Computer systems", "Computer science", "Electronic engineering", "Design", "Computers", "Integrated circuits" ]
9,257,264
https://en.wikipedia.org/wiki/FLUXNET
FLUXNET is a global network of micrometeorological tower sites that use eddy covariance methods to measure the exchanges of carbon dioxide, water vapor, and energy between the biosphere and atmosphere. FLUXNET is a global 'network of regional networks' that serves to provide an infrastructure to compile, archive and distribute data for the scientific community. The most recent FLUXNET data product, FLUXNET2015, is hosted by the Lawrence Berkeley National Laboratory (USA) and is publicly available for download.  Currently there are over 1000 active and historic flux measurement sites. FLUXNET works to ensure that different flux networks are calibrated to facilitate comparison between sites, and it provides a forum for the distribution of knowledge and data between scientists. Researchers also collect data on site vegetation, soil, trace gas fluxes, hydrology, and meteorological characteristics at the tower sites. History and Background FLUXNET started in 1997 and has grown from a handful of sites in North America and Europe to a current population exceeding 260 registered sites world-wide.  Today, FLUXNET consists of regional networks in North America (AmeriFlux, Fluxnet-Canada, NEON), South America (LBA), Europe (CarboEuroFlux, ICOS), Australasia (OzFlux), Asia (China Flux, and Asia Flux) and Africa (AfriFlux).   At each tower site, the eddy covariance flux measurements are made every 30 minutes and are integrated on daily, monthly and annual time scales.  The spatial scale of the footprint at each tower site reaches between 200 m and a kilometer. An overarching intent of FLUXNET, and its regional partners, is to provide data that can be used to validate terrestrial carbon fluxes derived from sensors on NASA satellites, such as TERRA and AQUA, and from biogeochemical models.   To achieve this overarching goal, the objectives and priorities of FLUXNET have evolved as the network has grown and matured.  During the initial stages of FLUXNET, the priority of our research was to develop value-added products, such as gap-filled data sets of net ecosystem productivity, NEP, evaporation, energy exchange and meteorology.  The rationales for this undertaking were: 1) to compute daily, monthly and annual sums of net carbon, water and energy exchange; and 2) to produce continuous datasets for the execution and testing of a variety of biogeochemical/biophysical/ecosystem dynamic models and satellite-based remote sensing algorithms.   During the second stage of FLUXNET the research priority involved the decomposition of NEE measurements into component fluxes such as GPP and ecosystem respiration, Reco.  This step is required for FLUXNET to be a successful tool for validating MODIS-based estimate of terrestrial carbon exchange; algorithms driven by satellite-based remote sensing instruments are unable to assess NEE directly, and instead compute GPP or NPP. In the intervening years, FLUXNET scientists have used the flux-component datasets (GPP, Reco) to assess how canopy photosynthesis and ecosystem respiration vary as a function of: 1) season; 2) plant functional type; and 3) environmental drivers. While these initial studies have contributed significantly towards understanding the physiology of whole ecosystems, they only represent an initial step towards the future evolution and productivity of FLUXNET.   For example, the majority of the early work was produced with a subset of field sites, which was heavily biased towards coniferous and deciduous forests.   
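Since the half-hourly fluxes mentioned above are computed with the eddy covariance method, a minimal sketch of that calculation may help: the flux is the time-averaged covariance between fluctuations of vertical wind speed and of the scalar of interest. The sampling rate, window length and synthetic data below are illustrative assumptions; operational FLUXNET processing also applies coordinate rotation, despiking, density (WPL) corrections, quality control and gap-filling.

import numpy as np

rng = np.random.default_rng(0)
hz, minutes = 10, 30                    # 10 Hz sampling over a 30-minute window
n = hz * minutes * 60

w = rng.normal(0.0, 0.3, n)             # vertical wind speed, m s^-1 (synthetic)
c = 400.0 + 0.5 * w + rng.normal(0.0, 1.0, n)   # scalar (e.g. CO2), partly correlated with w

w_prime = w - w.mean()                  # fluctuations about the window mean
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)       # eddy covariance flux (arbitrary units here)

print(f"30-minute covariance flux: {flux:.3f}")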
With the continued growth and extended duration of the network, many new opportunities, relating to the spatial/temporal aspects of carbon dioxide exchange, remain to be explored.  First, FLUXNET has expanded to include broader representation of vegetation types and climates.  The network now includes numerous tower sites over tropical and alpine forests, savanna, chaparral, tundra, grasslands, wetlands and an assortment of agricultural crops. Second, the scope of many studies over deciduous and conifer forests has expanded. Several contributing research groups are conducting chronosequence studies associated with disturbance by fire and logging.   From this work, scientists are learning that information on disturbance needs to be incorporated into model schemes that rely on climate drivers and plant functional type to upscale of tower fluxes to landscapes and regions—adding another level of complexity. Third, FLUXNET is partnering with other groups that are measuring the changes in phenology with networks of digital cameras, soil moisture and methane fluxes. Today, with many datasets extending beyond two decades, FLUXNET has the opportunity to provide data that is necessary to assess the impacts of climate and ecosystem factors on inter-annual variations and trends of carbon dioxide and water vapor fluxes. The sharing of data has also been instrumental in developing techniques that use machine learning methods and combine data streams from FLUXNET, remote sensing and gridded data products to produce maps of carbon and water fluxes. References Further reading Pastorello, G., D. Papale, H. Chu, C. Trotta, D. Agarwal, E. C. Canfora, D. Baldocchi, and M. Torn (2016), The FLUXNET2015 Dataset: The longest record of global carbon, water, and energy fluxes is updated, Eos Trans. AGU. Pastorello, G., et al. (2020), The FLUXNET2015 dataset and the ONEFlux processing pipeline for eddy covariance data, Scientific Data, 7(1), 225, doi:10.1038/s41597-020-0534-3. External links FLUXNET FLUXNET2015 Dataset (2015) FLUXNET LaThuile Dataset (2007) FLUXNET Marconi Dataset (2000) Historical Interactive Map of Fluxnet Sites Historical FLUXNET at ORNL Historical Fluxdata.org Fluxnet on NOSA Regional FLUXNET websites AmeriFlux AsiaFlux CarboEurope Chinaflux European Fluxes Database Fluxnet-Canada KoFlux OzFlux Urban Flux Network Applied and interdisciplinary physics Meteorological data and networks
FLUXNET
[ "Physics" ]
1,241
[ "Applied and interdisciplinary physics" ]
9,258,361
https://en.wikipedia.org/wiki/Ruppeiner%20geometry
Ruppeiner geometry is thermodynamic geometry (a type of information geometry) using the language of Riemannian geometry to study thermodynamics. George Ruppeiner proposed it in 1979. He claimed that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model. This geometrical model is based on the inclusion of the theory of fluctuations into the axioms of equilibrium thermodynamics, namely, there exist equilibrium states which can be represented by points on a two-dimensional surface (manifold) and the distance between these equilibrium states is related to the fluctuation between them. This concept is associated with probabilities, i.e. the less probable a fluctuation between states, the further apart they are. This can be recognized if one considers the metric tensor g_ij in the distance formula (line element) between two equilibrium states, ds^2 = g_ij dx^i dx^j, where the matrix of coefficients g_ij is the symmetric metric tensor which is called a Ruppeiner metric, defined as a negative Hessian of the entropy function, g^R_ij = −∂^2 S/∂x^i ∂x^j with x = (U, N^a), where U is the internal energy (mass) of the system and N^a refers to the extensive parameters of the system. Mathematically, the Ruppeiner geometry is one particular type of information geometry and it is similar to the Fisher–Rao metric used in mathematical statistics. The Ruppeiner metric can be understood as the thermodynamic limit (large systems limit) of the more general Fisher information metric. For small systems (systems where fluctuations are large), the Ruppeiner metric may not exist, as second derivatives of the entropy are not guaranteed to be non-negative. The Ruppeiner metric is conformally related to the Weinhold metric via g^R_ij = (1/T) g^W_ij, where T is the temperature of the system under consideration. Proof of the conformal relation can be easily done when one writes down the first law of thermodynamics (dU = TdS + ...) in differential form with a few manipulations. The Weinhold geometry is also considered as a thermodynamic geometry. It is defined as a Hessian of the internal energy with respect to entropy and the other extensive parameters, g^W_ij = ∂^2 U/∂y^i ∂y^j with y = (S, N^a). It has long been observed that the Ruppeiner metric is flat for systems with noninteracting underlying statistical mechanics such as the ideal gas. Curvature singularities signal critical behaviors. In addition, it has been applied to a number of statistical systems including the Van der Waals gas. Recently the anyon gas has been studied using this approach. Application to black hole systems This geometry has been applied to black hole thermodynamics, with some physically relevant results. The most physically significant case is for the Kerr black hole in higher dimensions, where the curvature singularity signals thermodynamic instability, as found earlier by conventional methods. The entropy of a black hole is given by the well-known Bekenstein–Hawking formula S = k_B c^3 A / (4 ħ G), where k_B is the Boltzmann constant, c is the speed of light, ħ is the reduced Planck constant, G is the Newtonian constant of gravitation and A is the area of the event horizon of the black hole. Calculating the Ruppeiner geometry of the black hole's entropy is, in principle, straightforward, but it is important that the entropy should be written in terms of extensive parameters, S = S(M, N^a), where M is the ADM mass of the black hole, N^a are the conserved charges and a runs from 1 to n. The signature of the metric reflects the sign of the hole's specific heat. 
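As a concrete illustration of the definition above, the following symbolic sketch computes the Ruppeiner metric as the negative Hessian of the entropy for a monatomic ideal gas at fixed particle number. The entropy expression (additive constants dropped) and the choice of (U, V) as the fluctuating variables are simplifying assumptions made only for illustration.

import sympy as sp

U, V, N, k = sp.symbols("U V N k", positive=True)
# Ideal-gas entropy at fixed N, additive constants dropped (illustrative form).
S = N * k * (sp.Rational(3, 2) * sp.log(U) + sp.log(V))

x = (U, V)
g = -sp.hessian(S, x)     # Ruppeiner metric: minus the Hessian of the entropy
print(g)                  # Matrix([[3*N*k/(2*U**2), 0], [0, N*k/V**2]])

# Both diagonal entries are positive and the determinant is nonzero, so the
# metric is positive definite and non-degenerate for this system; the article
# notes that for the ideal gas the resulting geometry is flat.
print(sp.simplify(g.det()))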
For a Reissner–Nordström black hole, the Ruppeiner metric has a Lorentzian signature which corresponds to the negative heat capacity it possesses, while for the BTZ black hole, we have a Euclidean signature. This calculation cannot be done for the Schwarzschild black hole, because its entropy is a function of the mass alone, S ∝ M^2, which renders the metric degenerate. References Riemannian geometry Thermodynamics New College of Florida faculty Mathematical physics
Ruppeiner geometry
[ "Physics", "Chemistry", "Mathematics" ]
795
[ "Applied mathematics", "Theoretical physics", "Thermodynamics", "Mathematical physics", "Dynamical systems" ]
9,263,122
https://en.wikipedia.org/wiki/Lead%20scandium%20tantalate
Lead scandium tantalate (PST) is a mixed oxide of lead, scandium, and tantalum. It has the formula Pb(Sc0.5Ta0.5)O3. It is a ceramic material with a perovskite structure, where the Sc and Ta atoms at the B site have an arrangement that is intermediate between ordered and disordered configurations, and can be fine-tuned with thermal treatment. It is ferroelectric at temperatures below , and is also piezoelectric. Like structurally similar lead zirconate titanate and barium strontium titanate, PST can be used for manufacture of uncooled focal plane array infrared imaging sensors for thermal cameras. References Lead(II) compounds Scandium compounds Tantalates Ceramic materials Ferroelectric materials Piezoelectric materials Infrared sensor materials Perovskites
Lead scandium tantalate
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
182
[ "Physical phenomena", "Inorganic compounds", "Ferroelectric materials", "Tantalates", "Inorganic compound stubs", "Salts", "Materials", "Electrical phenomena", "Ceramic materials", "Ceramic engineering", "Piezoelectric materials", "Hysteresis", "Matter" ]
7,120,484
https://en.wikipedia.org/wiki/Glass%20ionomer%20cement
A glass ionomer cement (GIC) is a dental restorative material used in dentistry as a filling material and luting cement, including for orthodontic bracket attachment. Glass-ionomer cements are based on the reaction of silicate glass-powder (calciumaluminofluorosilicate glass) and polyacrylic acid, an ionomer. Occasionally water is used instead of an acid, altering the properties of the material and its uses. This reaction produces a powdered cement of glass particles surrounded by matrix of fluoride elements and is known chemically as glass polyalkenoate. There are other forms of similar reactions which can take place, for example, when using an aqueous solution of acrylic/itaconic copolymer with tartaric acid, this results in a glass-ionomer in liquid form. An aqueous solution of maleic acid polymer or maleic/acrylic copolymer with tartaric acid can also be used to form a glass-ionomer in liquid form. Tartaric acid plays a significant part in controlling the setting characteristics of the material. Glass-ionomer based hybrids incorporate another dental material, for example resin-modified glass ionomer cements (RMGIC) and compomers (or modified composites). Non-destructive neutron scattering has evidenced GIC setting reactions to be non-monotonic, with eventual fracture toughness dictated by changing atomic cohesion, fluctuating interfacial configurations and interfacial terahertz (THz) dynamics. It is on the World Health Organization's List of Essential Medicines. Background Glass ionomer cement is primarily used in the prevention of dental caries. This dental material has good adhesive bond properties to tooth structure, allowing it to form a tight seal between the internal structures of the tooth and the surrounding environment. Dental caries are caused by bacterial production of acid during their metabolic actions. The acid produced from this metabolism results in the breakdown of tooth enamel and subsequent inner structures of the tooth, if the disease is not intervened by a dental professional, or if the carious lesion does not arrest and/or the enamel re-mineralises by itself. Glass ionomer cements act as sealants when pits and fissures in the tooth occur and release fluoride to prevent further enamel demineralisation and promote remineralisation. Fluoride can also hinder bacterial growth, by inhibiting their metabolism of ingested sugars in the diet. It does this by inhibiting various metabolic enzymes within the bacteria. This leads to a reduction in the acid produced during the bacteria's digestion of food, preventing a further drop in pH and therefore preventing caries. There is evidence that when using sealants, only 6% of people develop tooth decay over a 2-year period, in comparison to 40% of people when not using a sealant. However, it is recommended that the use of fluoride varnish alongside glass ionomer sealants should be applied in practice to further reduce the risk of secondary dental caries. Resin-modified glass ionomers The addition of resin to glass ionomers improves them significantly, allowing them to be more easily mixed and placed. Resin-modified glass ionomers allow equal or higher fluoride release and there is evidence of higher retention, higher strength and lower solubility. Resin-based glass ionomers have two setting reactions: an acid-base setting and a free-radical polymerisation. The free-radical polymerisation is the predominant mode of setting, as it occurs more rapidly than the acid-base mode. 
Only the material properly activated by light will be optimally cured. The presence of resin protects the cement from water contamination. Due to the shortened working time, it is recommended that placement and shaping of the material occurs as soon as possible after mixing. History Dental sealants were first introduced as part of the preventative programme, in the late 1960s, in response to increasing cases of pits and fissures on occlusal surfaces due to caries. This led to glass ionomer cements to be introduced in 1972 by Wilson and Kent as derivative of the silicate cements and the polycarboxylate cements. The glass ionomer cements incorporated the fluoride releasing properties of the silicate cements with the adhesive qualities of polycarboxylate cements. This incorporation allowed the material to be stronger, less soluble and more translucent (and therefore more aesthetic) than its predecessors. Glass ionomer cements were initially intended to be used for the aesthetic restoration of anterior teeth and were recommended for restoring Class III and Class V cavity preparations. There have now been further developments in the material's composition to improve properties. For example, the addition of metal or resin particles into the sealant is favoured due to the longer working time and the material being less sensitive to moisture during setting. When glass ionomer cements were first used, they were mainly used for the restoration of abrasion/erosion lesions and as a luting agent for crown and bridge reconstructions. However, this has now been extended to occlusal restorations in deciduous dentition, restoration of proximal lesions and cavity bases and liners. This is made possible by the ever-increasing new formulations of glass ionomer cements. One of the early commercially successful GICs, employing G338 glass and developed by Wilson and Kent, served purpose as non-load bearing restorative materials. However, this glass resulted in a cement too brittle for use in load-bearing applications such as in molar teeth. The properties of G338 being shown to be related to its phase-composition, specifically the interplay between its three amorphous phases Ca/Na-Al-Si-O, Ca-Al-F and Ca-P-O-F, as characterised by mechanical testing, differential scanning calorimetry (DSC) and X-ray diffraction (XRD), as well as quantum chemical modelling and ab initio molecular dynamics simulations. Glass ionomer versus resin-based sealants When the two dental sealants are compared, there has always been a contradiction as to which materials is more effective in caries reduction. Therefore, there are claims against replacing resin-based sealants, the current gold standard, with glass ionomer. Advantages Glass ionomer sealants are thought to prevent caries through a steady fluoride release over a prolonged period and the fissures are more resistant to demineralization, even after the visible loss of sealant material, however, a systemic review found no difference in caries development when GICs was used as a fissure sealing material compared to the conventional resin based sealants, in addition, it has less retention to the tooth structure than the resin based sealants. These sealants have hydrophilic properties, allowing them to be an alternative of the hydrophobic resin in the generally wet oral cavity. Resin-based sealants are easily destroyed by saliva contamination. 
They chemically bond with both enamel and dentin and do not necessarily require preparation/mechanical retention and can therefore be applied without harming existing tooth structure. This makes them ideal in many situations when tooth preservation is foremost and with minimally invasive techniques, particularly Class V fillings where there is a larger area of exposed dentin with only a thin ring of enamel. This often results in longer retention and service life than resin Class V fillings. They chemically bond to enamel and dentin leaving a smaller gap for bacteria to enter. Particularly when paired with silver diamine fluoride this can arrest caries and harden active caries and prevent further damage. They can be placed and cured outside of clinical settings and do not require a curing light. Chemically curable glass ionomer cements are considered safe from allergic reactions but a few have been reported with resin-based materials. Nevertheless, allergic reactions are very rarely associated with both sealants. Disadvantages The main disadvantage of glass ionomer sealants or cements has been inadequate retention or simply lack of strength, toughness, and limited wear resistance. For instance, due to its poor retention rate, periodic recalls are necessary, even after 6 months, to eventually replace the lost sealant. Different methods have been used to address the physical shortcomings of the glass ionomer cements such as thermo-light curing (polymerization), or addition of the zirconia, hydroxyapatite, N-vinyl pyrrolidone, N-vinyl caprolactam, and fluoroapatite to reinforce the glass ionomer cements. Clinical applications Glass ionomers are widely used due to their versatile properties and ease of use. Prior to procedures, starter materials for glass ionomers are supplied as a powder and liquid or as a powder mixed with water. These materials can be mixed and encapsulated. Preparation of the material should involve following manufacture instructions. A paper pad or cool dry glass slab may be used for mixing the raw materials though it is important to note that the use of the glass slab will retard the reaction and hence increase the working time. The raw materials in liquid and powder form should not be dispensed onto the chosen surface until the mixture is required in the clinical procedure the glass ionomer is being used for, as a prolonged exposure to the atmosphere could interfere with the ratio of chemicals in the liquid. At the stage of mixing, a spatula should be used to rapidly incorporate the powder into the liquid for a duration of 45–60 seconds depending on manufacture instructions and the individual products. Once mixed together to form a paste, an acid-base reaction occurs which allows the glass ionomer complex to set over a certain period of time and this reaction involves four overlapping stages: Dissolution Gelation Hardening (3–6 min) Maturation (24 hr – 1 yr) It is important to note that glass ionomers have a long setting time and need protection from the oral environment in order to minimize interference with dissolution and prevent contamination. The type of application for glass ionomers depends on the cement consistency as varying levels of viscosity from very high viscosity to low viscosity, can determine whether the cement is used as luting agents, orthodontic bracket adhesives, pit and fissure sealants, liners and bases, core build-ups, or intermediate restorations. 
Clinical uses The different clinical uses of glass ionomer compounds as restorative materials include; Cermets, which are essentially metal reinforced, glass ionomer cements, used to aid in restoring tooth loss as a result of decay or cavities to the tooth surfaces near the gingival margin, or the tooth roots, though cermets can be incorporated at other sites on various teeth, depending on the function required. They maintain adhesion to enamel and dentine and have an identical setting reaction to other glass ionomers. The development of cermets is an attempt to improve the mechanical properties of glass ionomers, particularly brittleness and abrasion resistance by incorporating metals such as silver, tin, gold and titanium. The use of these materials with glass ionomers appears to increase the value of compressive strength and fatigue limit as compared to conventional glass ionomer, however there is no marked difference in the flexural strength and resistance to abrasive wear as compared to glass ionomers. Dentine surface treatment, which can be performed with glass ionomer cements as the cement has adhesive characteristics which may be useful when placed in undercut cavities. The surfaces on which the glass cement ionomers are placed would be adequately prepared by removing the precipitated salivary proteins, present from saliva as this would greatly reduce the receptiveness of the glass ionomer cement and dentine surface, to bond formation. A number of different substances can be used to remove this element, such as citric acid, however the most effective substance seems to be polyacrylic acid, which is applied to the tooth surface for 30 seconds before it is washed off. The tooth is then dried to ensure the surface is receptive to bond formation but care is taken to ensure desiccation does not occur. Matrix techniques with glass ionomers, which are used to aid in proximal cavity restorations of anterior teeth. Between the teeth that are adjacent to the cavity, the matrix is inserted, commonly before any dentine surface conditioning. Once the material is inserted in excess, the matrix is placed around the tooth root and kept in place with the help of firm digital pressure while the material sets. Once set, the matrix can be carefully removed using a sharp probe or excavator. Fissure sealants, which involve the use of glass ionomers as the materials can be mixed to achieve a certain fluid consistency and viscosity that allows the cement to sink into fissures and pits located in posterior teeth and fill these spaces which pose as a site for caries risk, thereby reducing the risk of caries manifesting. Orthodontic brackets, which can involve the use of glass ionomer cements as an adhesive cement that forms strong chemical bonds between the enamel and the many metals which are used in orthodontic brackets such as stainless steel. Fluoride varnishes have been combined with sealant application in the prevention of dental caries. There is low certainty evidence that the combined usage of both increases the overall effectiveness as compared to using fluoride varnish alone. Chemistry and setting reaction All GICs contain a basic glass and an acidic polymer liquid, which set by an acid-base reaction. The polymer is an ionomer, containing a small proportion – some 5 to 10% – of substituted ionic groups. These allow it to be acid decomposable and clinically set readily. 
The glass filler is generally a calcium alumino fluorosilicate powder, which upon reaction with a polyalkenoic acid gives a glass polyalkenoate-glass residue set in an ionised, polycarboxylate matrix. The acid base setting reaction begins with the mixing of the components. The first phase of the reaction involves dissolution. The acid begins to attack the surface of the glass particles, as well as the adjacent tooth substrate, thus precipitating their outer layers but also neutralising itself. As the pH of the aqueous solution rises, the polyacrylic acid begins to ionise, and becoming negatively charged it sets up a diffusion gradient and helps draw cations out of the glass and dentine. The alkalinity also induces the polymers to dissociate, increasing the viscosity of the aqueous solution. The second phase is gelation, where as the pH continues to rise and the concentration of the ions in solution to increase, a critical point is reached and insoluble polyacrylates begin to precipitate. These polyanions have carboxylate groups whereby cations bind them, especially Ca2+ in this early phase, as it is the most readily available ion, crosslinking into calcium polyacrylate chains that begin to form a gel matrix, resulting in the initial hard set, within five minutes. Crosslinking, H bonds and physical entanglement of the chains are responsible for gelation. During this phase, the GIC is still vulnerable and must be protected from moisture. If contamination occurs, the chains will degrade and the GIC lose its strength and optical properties. Conversely, dehydration early on will crack the cement and make the surface porous. Over the next twenty four hours maturation occurs. The less stable calcium polyacrylate chains are progressively replaced by aluminium polyacrylate, allowing the calcium to join the fluoride and phosphate and diffuse into the tooth substrate, forming polysalts, which progressively hydrate to yield a physically stronger matrix. The incorporation of fluoride delays the reaction, increasing the working time. Other factors are the temperature of the cement, and the powder to liquid ratio – more powder or heat speeding up the reaction. GICs have good adhesive relations with tooth substrates, uniquely chemically bonding to dentine and, to a lesser extend, to enamel. During initial dissolution, both the glass particles and the hydroxyapatite structure are affected, and thus as the acid is buffered the matrix reforms, chemically welded together at the interface into a calcium phosphate polyalkenoate bond. In addition, the polymer chains are incorporated into both, weaving cross links, and in dentine the collagen fibres also contribute, both linking physically and H-bonding to the GIC salt precipitates. There is also microretention from porosities occurring in the hydroxyapatite. Works employing non-destructive neutron scattering and terahertz (THz) spectroscopy have evidenced that GIC's developing fracture toughness during setting is related to interfacial THz dynamics, changing atomic cohesion and fluctuating interfacial configurations. Setting of GICs is non-monotonic, characterised by abrupt features, including a glass–polymer coupling point, an early setting point, where decreasing toughness unexpectedly recovers, followed by stress-induced weakening of interfaces. Subsequently, toughness declines asymptotically to long-term fracture test values. 
Glass ionomer cement as a permanent material Fluoride release and remineralisation The pattern of fluoride release from glass ionomer cement is characterised by an initial rapid release of appreciable amounts of fluoride, followed by a taper in the release rate over time.  An initial fluoride “burst” effect is desirable to reduce the viability of remaining bacteria in the inner carious dentin, hence, inducing enamel or dentin remineralization.  The constant fluoride release during the following days are attributed to the fluoride ability to diffuse through cement pores and fractures. Thus, continuous small amounts of fluoride surrounding the teeth reduces demineralization of the tooth tissues. A study by Chau et al. shows a negative correlation between acidogenicity of the biofilm and the fluoride release by GIC, suggestive that enough fluoride release may decrease the virulence of cariogenic biofilms.  In addition, Ngo et al. (2006) studied the interaction between demineralised dentine and Fuji IX GP which includes a strontium – containing glass as opposed to the more conventional calcium-based glass in other GICs. A substantial amount of both strontium and fluoride ions was found to cross the interface into the partially demineralised dentine affected by caries. This promoted mineral depositions in these areas where calcium ion levels were low. Hence, this study supports the idea of glass ionomers contributing directly to remineralisation of carious dentine, provided that good seal is achieved with intimate contact between the GIC and partly demineralised dentine. This, then raises a question, “Is glass ionomer cement a suitable material for permanent restorations?” due to the desirable effects of fluoride release by glass ionomer cement. Glass Ionomer Cement in Primary Teeth Numerous studies and reviews have been published with respect to GIC used in primary teeth restorations. Findings of a systematic review and meta-analysis suggested that conventional glass ionomers were not recommended for Class II restorations in primary molars.  This material showed poor anatomical form and marginal integrity, and composite restorations were shown to be more successful than GIC when good moisture control could be achieved.  Resin modified glass ionomer cements (RMGIC) were developed to overcome the limitations of the conventional glass ionomer as a restorative material. A systematic review supports the use of RMGIC in small to moderate sized class II cavities, as they are able to withstand the occlusal forces on primary molars for at least one year.  With their desirable fluoride releasing effect, RMGIC may be considered for Class I and Class II restorations of primary molars in high caries risk population. Glass Ionomer Cement in Permanent Teeth With regard to permanent teeth, there is insufficient evidence to support the use of RMGIC as long term restorations in permanent teeth. Despite the low number of randomised control trials, a meta- analysis review by Bezerra et al. [2009] reported significantly fewer carious lesions on the margins of glass ionomer restorations in permanent teeth after six years as compared to amalgam restorations.  In addition, adhesive ability and longevity of GIC from a clinical standpoint can be best studied with restoration of non- carious cervical lesions. A systematic review shows GIC has higher retention rates than resin composite in follow up periods of up to 5 years. 
Unfortunately, reviews for Class II restorations in permanent teeth with glass ionomer cement are scarce with high bias or short study periods. However, a study  [2003] of the compressive strength and the fluoride release was done on 15 commercial fluoride- releasing restorative materials. A negative linear correlation was found between the compressive strength and fluoride release (r2=0.7741), i.e., restorative materials with high fluoride release have lower mechanical properties. References Further reading Dental materials Glass chemistry World Health Organization essential medicines
Glass ionomer cement
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,437
[ "Glass engineering and science", "Glass chemistry", "Dental materials", "Materials", "Matter" ]
7,121,345
https://en.wikipedia.org/wiki/Alloyant
Metallurgy
Alloyant
[ "Chemistry", "Materials_science", "Engineering" ]
5
[ "Metallurgy", "Materials science", "nan" ]
7,125,022
https://en.wikipedia.org/wiki/Multibeam%20echosounder
A multibeam echosounder (MBES) is a type of sonar that is used to map the seabed. It emits acoustic waves in a fan shape beneath its transceiver. The time it takes for the sound waves to reflect off the seabed and return to the receiver is used to calculate the water depth. Unlike other sonars and echo sounders, MBES uses beamforming to extract directional information from the returning soundwaves, producing a swathe of depth soundings from a single ping. History and progression Multibeam sonar sounding systems, also known as swathe (British English) or swath (American English) , originated for military applications. The concept originated in a radar system that was intended for the Lockheed U-2 high altitude reconnaissance aircraft, but the project was derailed when the aircraft flown by Gary Powers was brought down by a Soviet missile in May 1960. A proposal for using the "Mills Cross" beamforming technique adapted for use with bottom mapping sonar was made to the US Navy. Data from each ping of the sonar would be automatically processed, making corrections for ship motion and transducer depth sound velocity and refraction effects, but at the time there was insufficient digital data storage capacity, so the data would be converted into a depth contour strip map and stored on continuous film. The Sonar Array Sounding System (SASS) was developed in the early 1960s by the US Navy, in conjunction with General Instrument to map large swathes of the ocean floor to assist the underwater navigation of its submarine force. SASS was tested aboard the USS Compass Island (AG-153). The final array system, composed of sixty-one one degree beams with a swathe width of approximately 1.15 times water depth, was then installed on the USNS Bowditch (T-AGS-21), USNS Dutton (T-AGS-22) and USNS Michelson (T-AGS-23). At the same time, a Narrow Beam Echo Sounder (NBES) using 16 narrow beams was also developed by Harris ASW and installed on the Survey Ships Surveyor, Discoverer and Researcher. This technology would eventually become Sea Beam Only the vertical centre beam data was recorded during surveying operations. Starting in the 1970s, companies such as General Instrument (now SeaBeam Instruments, part of L3 Klein) in the United States, Krupp Atlas (now Atlas Hydrographic) and Elac Nautik (now part of the Wärtsilä Corporation) in Germany, Simrad (now Kongsberg Discovery) in Norway and RESON now Teledyne RESON A/S in Denmark developed systems that could be mounted to the hull of large ships, as well as on small boats (as technology improved, multibeam echosounders became more compact and lighter, and operating frequencies increased). The first commercial multibeam is now known as the SeaBeam Classic and was put in service in May 1977 on the Australian survey vessel HMAS Cook. This system produced up to 16 beams across a 45-degree arc. The (retronym) term "SeaBeam Classic" was coined after the manufacturer developed newer systems such as the SeaBeam 2000 and the SeaBeam 2112 in the late 1980s. The second SeaBeam Classic installation was on the French Research Vessel Jean Charcot. The SB Classic arrays on the Charcot were damaged in a grounding and the SeaBeam was replaced with an EM120 in 1991. Although it seems that the original SeaBeam Classic installation was not used much, the others were widely used, and subsequent installations were made on many vessels. 
SeaBeam Classic systems were subsequently installed on the US academic research vessels (Scripps Institution of Oceanography, University of California), the (Lamont–Doherty Earth Observatory of Columbia University) and the (Woods Hole Oceanographic Institution). As technology improved in the 1980s and 1990s, higher-frequency systems which provided higher resolution mapping in shallow water were developed, and today such systems are widely used for shallow-water hydrographic surveying in support of navigational charting. Multibeam echosounders are also commonly used for geological and oceanographic research, and since the 1990s for offshore oil and gas exploration and seafloor cable routing. More recently, multibeam echosounders are also used in the renewable energy sector, for applications such as offshore windfarms. In 1989, Atlas Electronics (Bremen, Germany) installed a second-generation deep-sea multibeam called Hydrosweep DS on the German research vessel Meteor. The Hydrosweep DS (HS-DS) produced up to 59 beams across a 90-degree swath, which was a vast improvement and was inherently ice-strengthened. Early HS-DS systems were installed on the (Germany), the (Germany), the (US) and the (India) in 1989 and 1990 and subsequently on a number of other vessels including the (US) and (Japan). As multibeam acoustic frequencies have increased and the cost of components has decreased, the worldwide number of multibeam swathe systems in operation has increased significantly. The required physical size of an acoustic transducer used to develop multiple high-resolution beams decreases as the multibeam acoustic frequency increases. Consequently, increases in the operating frequencies of multibeam sonars have resulted in significant decreases in their weight, size and volume characteristics. The older and larger, lower-frequency multibeam sonar systems, which required considerable time and effort to mount onto a ship's hull, used conventional tonpilz-type transducer elements, which provided a usable bandwidth of approximately 1/3 octave. The newer and smaller, higher-frequency multibeam sonar systems can easily be attached to a survey launch or to a tender vessel. Shallow water multibeam echosounders, like those from Teledyne Odom, R2Sonic and Norbit, which can incorporate sensors for measuring transducer motion and sound speed local to the transducer, are allowing many smaller hydrographic survey companies to move from traditional single beam echosounders to multibeam echosounders. Small low-power multibeam swathe systems are also now suitable for mounting on an Autonomous Underwater Vehicle (AUV) and on an Autonomous Surface Vessel (ASV). Multibeam echosounder data may include bathymetry, acoustic backscatter, and water column data. (Gas plumes now commonly identified in midwater multibeam data are termed flares.) Type 1-3 piezo-composite transducer elements are being employed in a multispectral multibeam echosounder to provide a usable bandwidth that is in excess of 3 octaves. Consequently, multispectral multibeam echosounder surveys are possible with a single sonar system, which, during every ping cycle, collects multispectral bathymetry data, multispectral backscatter data, and multispectral water column data in each swathe. Theory of operation A multibeam echosounder is a device typically used by hydrographic surveyors to determine the depth of water and the nature of the seabed. 
Most modern systems work by transmitting a broad acoustic fan shaped pulse from a specially designed transducer across the full swathe acrosstrack with a narrow alongtrack then forming multiple receive beams (beamforming) that are much narrower in the acrosstrack (around 1 degree depending on the system). From this narrow beam, a two way travel time of the acoustic pulse is then established utilizing a bottom detection algorithm. If the speed of sound in water is known for the full water column profile, the depth and position of the return signal can be determined from the receive angle and the two-way travel time. In order to determine the transmit and receive angle of each beam, a multibeam echosounder requires accurate measurement of the motion of the sonar relative to a cartesian coordinate system. The measured values are typically heave, pitch, roll, yaw, and heading. To compensate for signal loss due to spreading and absorption a time-varied gain circuit is designed into the receiver. For deep water systems, a steerable transmit beam is required to compensate for pitch. This can also be accomplished with beamforming. References Further reading Louay M.A. Jalloul and Sam. P. Alex, "Evaluation Methodology and Performance of an IEEE 802.16e System", Presented to the IEEE Communications and Signal Processing Society, Orange County Joint Chapter (ComSig), December 7, 2006. Available at: https://web.archive.org/web/20110414143801/http://chapters.comsoc.org/comsig/meet.html B. D. V. Veen and K. M. Buckley. Beamforming: A versatile approach to spatial filtering. IEEE ASSP Magazine, pages 4–24, Apr. 1988. H. L. Van Trees, Optimum Array Processing, Wiley, NY, 2002. "A Primer on Digital Beamforming" by Toby Haynes, March 26, 1998 "What Is Beamforming?" by Greg Allen. "Two Decades of Array Signal Processing Research" by Hamid Krim and Mats Viberg in IEEE Signal Processing Magazine, July 1996 External links A Note on Fifty Years of Multi-beam Sounding Pole to Sea Beam (NOAA History) MB-System open source software for processing multibeam data News and application articles of multibeam equipment on Hydro International Memorial website for USNS Bowditch, USNS Dutton and USNS Michelson {First application of Multibeam} Oceanography Sonar
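As a rough sketch of the geometry just described, the snippet below converts a beam's two-way travel time and receive angle into a depth and across-track distance. It assumes a single constant sound speed for the whole water column, whereas real systems ray-trace through the measured sound velocity profile and correct for vessel motion; the numbers are illustrative.

import math

def beam_sounding(two_way_time_s, beam_angle_deg, sound_speed_ms=1500.0):
    # Return (depth, across-track distance) in metres for one beam.
    slant_range = sound_speed_ms * two_way_time_s / 2.0   # one-way range
    theta = math.radians(beam_angle_deg)                  # angle from vertical
    depth = slant_range * math.cos(theta)
    across_track = slant_range * math.sin(theta)
    return depth, across_track

# Example swathe: the same flat seabed at 100 m depth seen by three beams.
for angle in (0.0, 30.0, 60.0):
    t = 2 * 100.0 / (1500.0 * math.cos(math.radians(angle)))  # synthetic echo time
    depth, across = beam_sounding(t, angle)
    print(f"beam at {angle:4.1f} deg: depth {depth:6.1f} m, across-track {across:6.1f} m")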
Multibeam echosounder
[ "Physics", "Environmental_science" ]
1,957
[ "Hydrography", "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
5,452,697
https://en.wikipedia.org/wiki/Dember%20effect
In physics, the Dember effect is when the electron current from a cathode subjected to both illumination and a simultaneous electron bombardment is greater than the sum of the photoelectric current and the secondary emission current . History Discovered by Harry Dember (1882–1943) in 1925, this effect is due to the sum of the excitations of an electron by two means: photonic illumination and electron bombardment (i.e. the sum of the two excitations extracts the electron). In Dember’s initial study, he referred only to metals; however, more complex materials have been analyzed since then. Photoelectric effect The photoelectric effect due to the illumination of the metallic surface extracts electrons (if the energy of the photon is greater than the extraction work) and excites the electrons which the photons don’t have the energy to extract. In a similar process, the electron bombardment of the metal both extracts and excites electrons inside the metal. If one considers a constant and increases , it can be observed that has a maximum of about 150 times . On the other hand, considering a constant and increasing the intensity of the illumination the , supplementary current, tends to saturate. This is due to the usage in the photoelectric effect of all the electrons excited (sufficiently) by the primary electrons of . See also Anomalous photovoltaic effect Photo-Dember References Further reading External links :de:Harry Dember Electrical phenomena
Dember effect
[ "Physics" ]
301
[ "Physical phenomena", "Electrical phenomena" ]
5,452,760
https://en.wikipedia.org/wiki/Protein%20precursor
A protein precursor, also called a pro-protein or pro-peptide, is an inactive protein (or peptide) that can be turned into an active form by post-translational modification, such as breaking off a piece of the molecule or adding on another molecule. The name of the precursor for a protein is often prefixed by pro-. Examples include proinsulin and proopiomelanocortin, which are both prohormones. Protein precursors are often used by an organism when the subsequent protein is potentially harmful, but needs to be available on short notice and/or in large quantities. Enzyme precursors are called zymogens or proenzymes. Examples are enzymes of the digestive tract in humans. Some protein precursors are secreted from the cell. Many of these are synthesized with an N-terminal signal peptide that targets them for secretion. Like other proteins that contain a signal peptide, their name is prefixed by pre. They are thus called pre-pro-proteins or pre-pro-peptides. The signal peptide is cleaved off in the endoplasmic reticulum. An example is preproinsulin. Pro-sequences are areas in the protein that are essential for its correct folding, usually in the transition of a protein from an inactive to an active state. Pro-sequences may also be involved in pro-protein transport and secretion. Pro-domain (or prodomain) is the domain of a proprotein. References External links
Protein precursor
[ "Chemistry" ]
309
[ "Biochemistry stubs", "Protein stubs" ]
5,452,870
https://en.wikipedia.org/wiki/Microbial%20fuel%20cell
A microbial fuel cell (MFC), also known as a micro fuel cell, is a type of bioelectrochemical fuel cell system that generates electric current by diverting electrons, produced by the microbial oxidation of reduced compounds (the fuel or electron donor) at the anode, to oxidized compounds such as oxygen (the oxidizing agent or electron acceptor) at the cathode through an external electrical circuit. MFCs produce electricity by using the electrons derived from biochemical reactions catalyzed by bacteria. MFCs can be grouped into two general categories: mediated and unmediated. The first MFCs, demonstrated in the early 20th century, used a mediator: a chemical that transfers electrons from the bacteria in the cell to the anode. Unmediated MFCs emerged in the 1970s; in this type of MFC the bacteria typically have electrochemically active redox proteins such as cytochromes on their outer membrane that can transfer electrons directly to the anode. In the 21st century MFCs have started to find commercial use in wastewater treatment. History The idea of using microbes to produce electricity was conceived in the early twentieth century. Michael Cressé Potter initiated the subject in 1911. Potter managed to generate electricity from Saccharomyces cerevisiae, but the work received little coverage. In 1931, Barnett Cohen created microbial half fuel cells that, when connected in series, were capable of producing over 35 volts with only a current of 2 milliamps. A study by DelDuca et al. used hydrogen produced by the fermentation of glucose by Clostridium butyricum as the reactant at the anode of a hydrogen and air fuel cell. Though the cell functioned, it was unreliable owing to the unstable nature of hydrogen production by the micro-organisms. This issue was resolved by Suzuki et al. in 1976, who produced a successful MFC design a year later. In the late 1970s, little was understood about how microbial fuel cells functioned. The concept was studied by Robin M. Allen and later by H. Peter Bennetto. People saw the fuel cell as a possible method for the generation of electricity for developing countries. Bennetto's work, starting in the early 1980s, helped build an understanding of how fuel cells operate and he was seen by many as the topic's foremost authority. In May 2007, the University of Queensland, Australia completed a prototype MFC as a cooperative effort with Foster's Brewing. The prototype, a 10 L design, converted brewery wastewater into carbon dioxide, clean water and electricity. The group had plans to create a pilot-scale model for an upcoming international bio-energy conference. Definition A microbial fuel cell (MFC) is a device that converts chemical energy to electrical energy by the action of microorganisms. These electrochemical cells are constructed using a bioanode and/or a biocathode. Most MFCs contain a membrane to separate the compartments of the anode (where oxidation takes place) and the cathode (where reduction takes place). The electrons produced during oxidation are transferred directly to an electrode or to a redox mediator species. The electron flux is moved to the cathode. The charge balance of the system is maintained by ionic movement inside the cell, usually across an ionic membrane. Most MFCs use an organic electron donor that is oxidized to produce CO2, protons, and electrons. Other electron donors have been reported, such as sulfur compounds or hydrogen. 
The cathode reaction uses a variety of electron acceptors, most often oxygen (O2). Other electron acceptors studied include metal recovery by reduction, water to hydrogen, nitrate reduction, and sulfate reduction. Applications Power generation MFCs are attractive for power generation applications that require only low power, but where replacing batteries may be impractical, such as wireless sensor networks. Wireless sensors powered by microbial fuel cells can, for example, be used for remote monitoring (conservation). Virtually any organic material could be used to feed the fuel cell, including coupling cells to wastewater treatment plants. Chemical process wastewater and synthetic wastewater have been used to produce bioelectricity in dual- and single-chamber mediator-less MFCs (uncoated graphite electrodes). Higher power production was observed with a biofilm-covered graphite anode. Fuel cell emissions are well under regulatory limits. MFCs convert energy more efficiently than standard internal combustion engines, which are limited by the Carnot efficiency. In theory, an MFC is capable of energy efficiency far beyond 50%. Rozendal produced hydrogen with 8 times less energy input than conventional hydrogen production technologies. Moreover, MFCs can also work at a smaller scale. Electrodes in some cases need only be 7 μm thick by 2 cm long, such that an MFC can replace a battery. It provides a renewable form of energy and does not need to be recharged. MFCs operate well in mild conditions, 20 °C to 40 °C and at a pH of around 7, but lack the stability required for long-term medical applications such as in pacemakers. Power stations can be based on aquatic plants such as algae. If sited adjacent to an existing power system, the MFC system can share its electricity lines. Education Soil-based microbial fuel cells serve as educational tools, as they encompass multiple scientific disciplines (microbiology, geochemistry, electrical engineering, etc.) and can be made using commonly available materials, such as soils and items from the refrigerator. Kits for home science projects and classrooms are available. One example of microbial fuel cells being used in the classroom is in the IBET (Integrated Biology, English, and Technology) curriculum for Thomas Jefferson High School for Science and Technology. Several educational videos and articles are also available from the International Society for Microbial Electrochemistry and Technology (ISMET Society). Biosensor The current generated from a microbial fuel cell is directly proportional to the organic-matter content of wastewater used as the fuel. MFCs can measure the solute concentration of wastewater (i.e., act as a biosensor). Wastewater is commonly assessed for its biochemical oxygen demand (BOD) values. BOD values are determined by incubating samples for 5 days with a proper source of microbes, usually activated sludge collected from wastewater plants. An MFC-type BOD sensor can provide real-time BOD values. Oxygen and nitrate are preferred electron acceptors over the anode and so interfere, reducing current generation from an MFC. Therefore, MFC BOD sensors underestimate BOD values in the presence of these electron acceptors. This can be avoided by inhibiting aerobic and nitrate respiration in the MFC using terminal oxidase inhibitors such as cyanide and azide. Such BOD sensors are commercially available. The United States Navy is considering microbial fuel cells for environmental sensors. 
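The biosensor relationship described above (current roughly proportional to the organic load of the wastewater) can be sketched as a simple linear calibration. The function name and calibration constants below are hypothetical; a real MFC-type sensor would be calibrated against conventional 5-day BOD tests and corrected for interfering electron acceptors such as oxygen and nitrate.

```python
def estimate_bod_mg_per_l(measured_current_ma, slope_ma_per_mg_l, offset_ma=0.0):
    """Estimate BOD from MFC current using a linear calibration.

    Assumes current rises roughly linearly with organic-matter content over
    the sensor's working range; the slope and offset must come from
    calibration against standard 5-day BOD measurements (hypothetical here).
    """
    if slope_ma_per_mg_l <= 0:
        raise ValueError("calibration slope must be positive")
    return max(0.0, (measured_current_ma - offset_ma) / slope_ma_per_mg_l)

# Example with made-up calibration: 0.02 mA per mg/L of BOD, 0.1 mA baseline
print(estimate_bod_mg_per_l(2.1, slope_ma_per_mg_l=0.02, offset_ma=0.1))  # 100.0 mg/L
```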
The use of microbial fuel cells to power environmental sensors could provide power for longer periods and enable the collection and retrieval of undersea data without a wired infrastructure. The energy created by these fuel cells is enough to sustain the sensors after an initial startup time. Due to undersea conditions (high salt concentrations, fluctuating temperatures and limited nutrient supply), the Navy may deploy MFCs with a mixture of salt-tolerant microorganisms that would allow for a more complete utilization of available nutrients. Shewanella oneidensis is their primary candidate, but other heat- and cold-tolerant Shewanella spp may also be included. A first self-powered and autonomous BOD/COD biosensor has been developed and enables detection of organic contaminants in freshwater. The sensor relies only on power produced by MFCs and operates continuously without maintenance. It turns on the alarm to inform about contamination level: the increased frequency of the signal warns about a higher contamination level, while a low frequency informs about a low contamination level. Biorecovery In 2010, A. ter Heijne et al. constructed a device capable of producing electricity and reducing Cu2+ ions to copper metal. Microbial electrolysis cells have been demonstrated to produce hydrogen. Wastewater treatment MFCs are used in water treatment to harvest energy utilizing anaerobic digestion. The process can also reduce pathogens. However, it requires temperatures upwards of 30 degrees C and requires an extra step in order to convert biogas to electricity. Spiral spacers may be used to increase electricity generation by creating a helical flow in the MFC. Scaling MFCs is a challenge because of the power output challenges of a larger surface area. Types Mediated Most microbial cells are electrochemically inactive. Electron transfer from microbial cells to the electrode is facilitated by mediators such as thionine, pyocyanin, methyl viologen, methyl blue, humic acid, and neutral red. Most available mediators are expensive and toxic. Mediator-free Mediator-free microbial fuel cells use electrochemically active bacteria such as Shewanella putrefaciens and Aeromonas hydrophila to transfer electrons directly from the bacterial respiratory enzyme to the electrode. Some bacteria are able to transfer their electron production via the pili on their external membrane. Mediator-free MFCs are less well characterized, such as the strain of bacteria used in the system, type of ion-exchange membrane and system conditions (temperature, pH, etc.) Mediator-free microbial fuel cells can run on wastewater and derive energy directly from certain plants and O2. This configuration is known as a plant microbial fuel cell. Possible plants include reed sweetgrass, cordgrass, rice, tomatoes, lupines and algae. Given that the power is obtained using living plants (in situ-energy production), this variant can provide ecological advantages. Microbial electrolysis One variation of the mediator-less MFC is the microbial electrolysis cell (MEC). While MFCs produce electric current by the bacterial decomposition of organic compounds in water, MECs partially reverse the process to generate hydrogen or methane by applying a voltage to bacteria. This supplements the voltage generated by the microbial decomposition of organics, leading to the electrolysis of water or methane production. 
A complete reversal of the MFC principle is found in microbial electrosynthesis, in which carbon dioxide is reduced by bacteria using an external electric current to form multi-carbon organic compounds. Soil-based Soil-based microbial fuel cells adhere to the basic MFC principles, whereby soil acts as the nutrient-rich anodic media, the inoculum and the proton exchange membrane (PEM). The anode is placed at a particular depth within the soil, while the cathode rests on top of the soil and is exposed to air. Soils naturally teem with diverse microbes, including the electrogenic bacteria needed for MFCs, and are full of complex sugars and other nutrients that have accumulated from plant and animal material decay. Moreover, the aerobic (oxygen consuming) microbes present in the soil act as an oxygen filter, much like the expensive PEM materials used in laboratory MFC systems, which cause the redox potential of the soil to decrease with greater depth. Soil-based MFCs are becoming popular educational tools for science classrooms. Sediment microbial fuel cells (SMFCs) have been applied for wastewater treatment. Simple SMFCs can generate energy while decontaminating wastewater. Most such SMFCs contain plants to mimic constructed wetlands. By 2015 SMFC tests had reached more than 150 L. In 2015 researchers announced an SMFC application that extracts energy and charges a battery. Salts dissociate into positively and negatively charged ions in water and move and adhere to the respective negative and positive electrodes, charging the battery and making it possible to remove the salt, effecting microbial capacitive desalination. The microbes produce more energy than is required for the desalination process. In 2020, a European research project achieved the treatment of seawater into fresh water for human consumption with an energy consumption around 0.5 kWh/m3, which represents an 85% reduction in energy consumption with respect to state-of-the-art desalination technologies. Furthermore, the biological process from which the energy is obtained simultaneously purifies residual water for its discharge into the environment or reuse in agricultural/industrial applications. This has been achieved in the desalination innovation center that Aqualia opened in Denia, Spain, in early 2020. Phototrophic biofilm Phototrophic biofilm MFCs (PBMFCs) use a phototrophic biofilm anode containing photosynthetic microorganisms such as chlorophyta and cyanophyta. They carry out photosynthesis and thus produce organic metabolites and donate electrons. One study found that PBMFCs display a power density sufficient for practical applications. The sub-category of phototrophic MFCs that use purely oxygenic photosynthetic material at the anode is sometimes called biological photovoltaic systems. Nanoporous membrane The United States Naval Research Laboratory developed nanoporous membrane microbial fuel cells that use a non-PEM to generate passive diffusion within the cell. The membrane is a nonporous polymer filter (nylon, cellulose, or polycarbonate). It offers comparable power densities to Nafion (a well-known PEM) with greater durability. Porous membranes allow passive diffusion thereby reducing the necessary power supplied to the MFC in order to keep the PEM active and increasing the total energy output. MFCs that do not use a membrane can deploy anaerobic bacteria in aerobic environments. However, membrane-less MFCs experience cathode contamination by the indigenous bacteria and the power-supplying microbe. 
The novel passive diffusion of nanoporous membranes can achieve the benefits of a membrane-less MFC without worry of cathode contamination. Nanoporous membranes are also 11 times cheaper than Nafion (Nafion-117, $0.22/cm2 vs. polycarbonate, <$0.02/cm2). Ceramic membrane PEM membranes can be replaced with ceramic materials. Ceramic membrane costs can be as low as $5.66/m2. The macroporous structure of ceramic membranes allows for good transport of ionic species. The materials that have been successfully employed in ceramic MFCs are earthenware, alumina, mullite, pyrophyllite, and terracotta. Generation process When microorganisms consume a substance such as sugar in aerobic conditions, they produce carbon dioxide and water. However, when oxygen is not present, they may produce carbon dioxide, hydrons (hydrogen ions), and electrons, as described below for sucrose: C12H22O11 + 13H2O → 12CO2 + 48H+ + 48e−. Microbial fuel cells use inorganic mediators to tap into the electron transport chain of cells and channel the electrons produced. The mediator crosses the outer cell lipid membranes and bacterial outer membrane; then, it begins to liberate electrons from the electron transport chain that normally would be taken up by oxygen or other intermediates. The now-reduced mediator exits the cell laden with electrons that it transfers to an electrode; this electrode becomes the anode. The release of the electrons recycles the mediator to its original oxidized state, ready to repeat the process. This can happen only under anaerobic conditions; if oxygen is present, it will collect the electrons, as it has more free energy to release. Certain bacteria can circumvent the use of inorganic mediators by making use of special electron transport pathways known collectively as extracellular electron transfer (EET). EET pathways allow the microbe to directly reduce compounds outside of the cell, and can be used to enable direct electrochemical communication with the anode. In MFC operation, the anode is the terminal electron acceptor recognized by bacteria in the anodic chamber. Therefore, the microbial activity is strongly dependent on the anode's redox potential. A Michaelis–Menten curve was obtained between the anodic potential and the power output of an acetate-driven MFC. A critical anodic potential seems to provide maximum power output. Potential mediators include neutral red, methylene blue, thionine, and resorufin. Organisms capable of producing an electric current are termed exoelectrogens. In order to turn this current into usable electricity, exoelectrogens have to be accommodated in a fuel cell. The mediator and a micro-organism such as yeast are mixed together in a solution to which is added a substrate such as glucose. This mixture is placed in a sealed chamber to prevent oxygen from entering, thus forcing the micro-organism to undertake anaerobic respiration. An electrode is placed in the solution to act as the anode. In the second chamber of the MFC is another solution and the positively charged cathode. It is the equivalent of the oxygen sink at the end of the electron transport chain, external to the biological cell. The solution is an oxidizing agent that picks up the electrons at the cathode. As with the electron chain in the yeast cell, this could be a variety of molecules such as oxygen, although a more convenient option is a solid oxidizing agent, which requires less volume. Connecting the two electrodes is a wire (or other electrically conductive path). 
Completing the circuit and connecting the two chambers is a salt bridge or ion-exchange membrane. This last feature allows the protons produced, as described in the sucrose reaction above, to pass from the anode chamber to the cathode chamber. The reduced mediator carries electrons from the cell to the electrode. Here the mediator is oxidized as it deposits the electrons. These then flow across the wire to the second electrode, which acts as an electron sink. From here they pass to an oxidizing material. The hydrogen ions/protons are also moved from the anode to the cathode via a proton exchange membrane such as Nafion. They move down the concentration gradient and combine with oxygen, but to do this they need an electron. This generates current, and the consumption of hydrogen sustains the concentration gradient. Algal biomass has been observed to give high energy when used as the substrate in a microbial fuel cell. Applications in Environmental Remediation Microbial fuel cells (MFCs) have emerged as promising tools for environmental remediation due to their unique ability to utilize the metabolic activities of microorganisms for both electricity generation and pollutant degradation. MFCs find applications across diverse contexts in environmental remediation. One primary application is in bioremediation, where the electroactive microorganisms on the MFC anode actively participate in the breakdown of organic pollutants, providing a sustainable and efficient method for pollutant removal. Moreover, MFCs play a significant role in wastewater treatment by simultaneously generating electricity and enhancing water quality through the microbial degradation of contaminants. These fuel cells can be deployed in situ, allowing for continuous and autonomous remediation in contaminated sites. Furthermore, their versatility extends to sediment microbial fuel cells (SMFCs), which are capable of removing heavy metals and nutrients from sediments. By integrating MFCs with sensors, they enable remote environmental monitoring in challenging locations. The applications of microbial fuel cells in environmental remediation highlight their potential to convert pollutants into a renewable energy source while actively contributing to the restoration and preservation of ecosystems. Challenges and advances Microbial fuel cells (MFCs) offer significant potential as sustainable and innovative technologies, but they are not without their challenges. One major obstacle lies in the optimization of MFC performance, which remains a complex task due to various factors including microbial diversity, electrode materials, and reactor design. The development of cost-effective and long-lasting electrode materials presents another hurdle, as it directly affects the economic viability of MFCs on a larger scale. Furthermore, the scaling up of MFCs for practical applications poses engineering and logistical challenges. Nonetheless, ongoing research in microbial fuel cell technology continues to address these obstacles. Scientists are actively exploring new electrode materials, enhancing microbial communities to improve efficiency, and optimizing reactor configurations. Moreover, advancements in synthetic biology and genetic engineering have opened up possibilities for designing custom microbes with enhanced electron transfer capabilities, pushing the boundaries of MFC performance. 
Collaborative efforts between multidisciplinary fields are also contributing to a deeper understanding of MFC mechanisms and expanding their potential applications in areas such as wastewater treatment, environmental remediation, and sustainable energy production. See also Biobattery Cable bacteria Dark fermentation Electrohydrogenesis Electromethanogenesis Fermentative hydrogen production Glossary of fuel cell terms Hydrogen hypothesis Hydrogen technologies Photofermentation Bacterial nanowires References Yue P.L. and Lowther K. (1986). Enzymatic Oxidation of C1 compounds in a Biochemical Fuel Cell. The Chemical Engineering Journal, 33B, p 69-77 Further reading External links DIY MFC Kit BioFuel from Microalgae Sustainable and efficient biohydrogen production via electrohydrogenesis – November 2007 Microbial Fuel Cell blog A research-type blog on common techniques used in MFC research. Microbial Fuel Cells This website is originating from a few of the research groups currently active in the MFC research domain. Microbial Fuel Cells from Rhodopherax Ferrireducens An overview from the Science Creative Quarterly. Building a Two-Chamber Microbial Fuel Cell Discussion group on Microbial Fuel Cells Innovation company developing MFC technology Bioelectrochemistry Fuel cells Hydrogen biology Renewable energy
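The sucrose half-reaction given in the Generation process section above fixes the theoretical charge that a given amount of fuel can deliver. The short sketch below computes that ideal figure; it assumes complete oxidation and 100% Coulombic efficiency (every liberated electron reaching the anode), which real MFCs do not achieve, so it is an upper bound rather than a performance prediction.

```python
# Theoretical charge from complete microbial oxidation of sucrose, per the
# half-reaction C12H22O11 + 13 H2O -> 12 CO2 + 48 H+ + 48 e- given above.
# Idealized: assumes every liberated electron is collected at the anode.

FARADAY_C_PER_MOL = 96485.0      # charge of one mole of electrons
SUCROSE_G_PER_MOL = 342.3        # molar mass of C12H22O11
ELECTRONS_PER_SUCROSE = 48       # from the half-reaction above

def theoretical_charge_coulombs(sucrose_grams):
    moles = sucrose_grams / SUCROSE_G_PER_MOL
    return moles * ELECTRONS_PER_SUCROSE * FARADAY_C_PER_MOL

# 1 g of sucrose corresponds to roughly 13.5 kC of charge; at a steady
# 1 mA that would sustain the current for about 157 days in the ideal case.
charge = theoretical_charge_coulombs(1.0)
print(charge, charge / 1e-3 / 86400)
```

The ratio of the charge actually recovered to this theoretical value is what the MFC literature reports as Coulombic efficiency.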
Microbial fuel cell
[ "Chemistry" ]
4,475
[ "Electrochemistry", "Bioelectrochemistry" ]
5,453,292
https://en.wikipedia.org/wiki/Threaded%20pipe
A threaded pipe is a pipe with screw-threaded ends for assembly. Tapered threads The threaded pipes used in some plumbing installations for the delivery of gases or liquids under pressure have a tapered thread that is slightly conical (in contrast to the parallel sided cylindrical section commonly found on bolts and leadscrews). The seal provided by a threaded pipe joint depends upon multiple factors: the labyrinth seal created by the threads; a positive seal between the threads created by thread deformation when they are tightened to the proper torque; and sometimes on the presence of a sealing coating, such as thread seal tape or a liquid or paste pipe sealant such as pipe dope. Tapered thread joints typically do not include a gasket. Especially precise threads are known as "dry fit" or "dry seal" and require no sealant for a gas-tight seal. Such threads are needed where the sealant would contaminate or react with the media inside the piping, e.g., oxygen service. Tapered threaded fittings are sometimes used on plastic piping. Due to the wedging effect of the tapered thread, extreme care must be used to avoid overtightening the joint. The overstressed female fitting may split days, weeks, or even years after initial installation. Therefore many municipal plumbing codes restrict the use of threaded plastic pipe fittings. Both British standard and National pipe thread standards specify a thread taper of 1:16; the change in diameter is one sixteenth the distance travelled along the thread. The nominal diameter is achieved some small distance (the "gauge length") from the end of the pipe. Straight threads Pipes may also be threaded with cylindrical threaded sections, in which case the threads do not themselves provide any sealing function other than some labyrinth seal effect, which may not be enough to satisfy either functional or code requirements. Instead, an O-ring seated between the shoulder of the male pipe section and an interior surface on the female, provides the seal. See also AN thread British Standard Pipe thread (BSP) Buttress thread Fire hose thread Garden hose thread National pipe thread (NPT) Nipple (plumbing) O-ring boss seal Panzergewinde (steel conduit thread) Piping Plumbing Screw thread Tap and die Thread angle United States Standard thread External links NPT Vs. NPTF Taper Pipe Threads Newman Tools Inc. and J.W. WINCO, INC. show the Whitworth form BSP or ISO pipe thread. Piping Plumbing
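The 1:16 taper mentioned above implies a simple relation between how far a fitting is advanced along the thread axis and how much the thread diameter changes, which is what produces the wedging action. The sketch below illustrates that relation only; the function name and example values are illustrative, and the governing standards should be consulted for gauge lengths and tolerances.

```python
def taper_diameter_change(engagement_length_mm, taper_ratio=1/16):
    """Change in thread diameter over a given axial length of a tapered pipe thread.

    Both British Standard and National pipe thread standards specify a 1:16
    taper: the diameter changes by one sixteenth of the distance travelled
    along the thread axis. Illustrative only, not a substitute for the
    BSP/NPT tables.
    """
    return engagement_length_mm * taper_ratio

# Example: advancing a fitting a further 8 mm along the thread changes the
# diameter by 8/16 = 0.5 mm, which is what wedges the joint tight.
print(taper_diameter_change(8.0))  # 0.5
```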
Threaded pipe
[ "Chemistry", "Engineering" ]
504
[ "Building engineering", "Chemical engineering", "Plumbing", "Construction", "Mechanical engineering", "Piping" ]
5,453,466
https://en.wikipedia.org/wiki/Daylighting%20%28streams%29
Daylighting is the opening up and restoration of a previously buried watercourse, one which had at some point been diverted below ground. Typically, the rationale behind returning the riparian environment of a stream, wash, or river to a more natural above-ground state is to reduce runoff, create habitat for species in need of it, or improve an area's aesthetics. In the United Kingdom, the practice is also known as deculverting. In addition to its use in urban design and planning the term also refers to the public process of advancing such projects. According to the Planning and Development Department of the City of Berkeley, "A general consensus has developed that protecting and restoring natural creeks' functions is achievable over time in an urban environment while recognizing the importance of property rights." Systems Natural drainage systems Natural drainage systems help manage stormwater by infiltrating and slowing the flow of stormwater, filtering and bioremediating pollutants by soils and plants, reducing impervious surfaces, using porous paving, increasing vegetation, and improving related pedestrian amenities. Natural features—open, vegetated swales, stormwater cascades, and small wetland ponds—mimic the functions of nature lost to urbanization. At the heart are plants, trees, and the deep, healthy soils that support them. All three combine to form a "living infrastructure" that, unlike pipes and vaults, increase in functional value over time. Some efforts to blend urban development with natural systems use innovative drainage design and landscaping instead of traditional curbs and gutters, pipes and vaults. One such demonstration project in the Pipers Creek watershed reduced imperviousness by more than 18 percent. The project built bioswales, landscape elements intended to remove silt and pollution from surface runoff water and planted 100 evergreen trees and 1,100 shrubs. From 2001 to 2003, the project reduced the volume of stormwater leaving the street in a two-year storm event by 98%. Such a reduction can reduce storm damage to water quality and habitats for species such as the iconic salmon. Unfortunately, the engineering alternatives have a relatively expensive initial price, since they are usually replacing existing structures, albeit life-limited ones. Further, conventional systems generally do not consider full cost accounting. The natural drainage system alternatives can also provide returns on investment by improving urban environments. The street edge alternatives street breaks most of the conventions of 150 years of standard American street design. Narrow, curved streets, open drainage swales, and an abundance of diverse plants and trees welcome pedestrians as well as diverse species. Adjacent residents maintain city infrastructure in the form of street "gardens" in front of their homes, visually integrating the neighborhood along the street. The natural drainage system united the community visually, environmentally, and socially. The 110th Cascades SEA (2002–2003) are a creek-like cascade of stair-stepped natural, seasonal pools that intercept, infiltrate, slow and filter over of stormwater draining through the project. Example projects Viable, daylighted streams exist only where neighbourhoods are intimately connected to restoration and stewardship values in their watersheds, since the health of an urban stream can not long survive carelessness or neglect. 
With impervious surfaces having replaced most of the natural ground cover in urban environments, habitat for wildlife is dramatically reduced compared to historic baselines. Hydrologic changes have resulted, and impervious waterways directly carry non-point pollution through urban creeks. One effective solution is to restore streams and riparian habitat. This improves the entire urban watershed, far beyond the riparian channel itself. Wild et al 2011 described the first known online map and database of urban river daylighting projects. Wild et al 2019 published a geo-spatial database of all schemes. The University of Waterloo documented a very similar list featuring many of the same stream daylighting projects around the globe. Switzerland Zürich The City of Zürich’s stream daylighting policy has long received the attention of researchers and is considered by some to be unique in the world. It has been in place since 1986 and has resulted in the daylighting of nearly 21 kilometers of Zürich’s buried streams thus far. The positive impact on the quality of water and biodiversity has been significant. There are also benefits for stormwater management, and even socio-cultural benefits such as an enhanced public realm and educational opportunities. Canada Vancouver, British Columbia In the 1880s there were over 50 wild salmon streams in Vancouver alone. However, as Vancouver grew, these streams were lost to urbanization. They were covered by roads, homes, and businesses. They were also lost when they were buried beneath sewers or culverts. The City of Vancouver and its residents are now making an effort to uncover these lost streams and restore them back to their natural state. Hastings Creek The Hastings Creek Stream Daylighting Project was originally proposed in 1994 as a way to manage storm water and for aesthetic purposes. The idea was to bring the stream back to its once natural formation, which would improve the surrounding habitat for wildlife as well as serve the originally proposed purposes. This project's plan was finalized in 1997, and work began the same year. The stream had existed in Hastings Park until 1935, when the Park became focused on entertainment rather than its original purpose as a retreat for those with a passion for the outdoors, the purpose for which it was given to the city in 1889. As the Pacific National Exhibition (PNE) grounds continued to expand there was a continued loss of natural woodlands, greenery and waterways. It was not until the 1980s that the surrounding community began to look at continuing to uphold its original purpose. The daylighting project made major progress in 2013 in the area located in Creekway Park, which was originally a parking lot. The daylighted stream will one day connect the Sanctuary in Hastings Park to the Burrard Inlet. The progress made in Creekway Park is a major step towards this goal. This daylighting project also improved pedestrian and bikeway transit. This stream is now able to receive the stormwater from the surrounding area, which reduces the load that is felt by the municipality's storm sewers. It is the storms in early autumn which provide the water flow for the creek, meaning that there is variable flow throughout the year. During the late summer months the moist soil is relied upon to maintain the vegetation of the area. This variation in flow does not allow for salmon migration through the creek; however, it does house trout as well as vegetation which aid in the filtration of the storm water entering the creek. 
Spanish Banks Located upstream from Spanish Banks waterfront, one of the highest profile creeks in Vancouver Metro became open to salmon in 2000. In a collaborative project between Spanish Banks Streamkeepers Association and the Department of Fisheries and Oceans Canada, barriers to fish passage were removed and habitat structure was added. Spanish Banks Creek was previously diverted through a culvert underneath a parking lot, but the lower reaches of this creek have been revitalized. The banks were stabilized with riprap, large woody debris was added for habitat cover, and spawning gravels were added in appropriate areas. Rigorous effectiveness monitoring has not been performed, but a few dozen coho and chum salmon are known to spawn there annually in a sustaining population. Maintenance to the creek is provided by Spanish Banks Streamkeepers Association, a local volunteer stewardship group. St. George Rainway The East Vancouver neighborhood of Mount Pleasant has officially incorporated into its community plan a project to restore St. George Creek, a tributary to the False Creek watershed. St. George street is the site of this former stream, which now flows through the sewers and a culvert. This paved street will be converted into a shared-use path, riparian habitat, and urban greenspace. St. George Creek once spawned salmon and trout, and hosted a diverse riparian ecosystem. The restoration of this habitat using the rainway proposal would allow for salmon spawning, recreational and educational opportunities, and improve the community's access to nature and transportation alternatives. The proposal would pass the following community centres: Great Northern Way Campus, St. Francis Xavier School, Mt. Pleasant Elementary, Florence N. Elementary, Kivan Boys and Girls Club, Robson Park Family Centre. Detailed landscape designs have been produced, and incorporated into the community plan of Mount Pleasant neighborhood. Project leaders from the False Creek watershed Society and Vancouver Society of Storytelling have collaborated with Mount Pleasant Elementary students to create a street mural drawing attention to the belowground stream. To date, the mural is the only physical progress on the project. Tatlow Creek This is a future project aiming to ultimately connect the gap in the Seaside Greenway in order to link it to the Burrard Bridge. The beginning of this project has been started by the City of Vancouver in 2013, after its approval on July 29 of the same year. Volunteer Park is located in Kitsilano at the corner of Point Grey Road and Macdonald Street. This is where the main daylighting project for this area is planned to occur. Phase one is currently in progress. Point Grey Road is currently closed to through motor traffic in order to turn the street into a greenway for cycling and walking. This part of the project is expected to be complete by summer 2014. Phase two of this project is looking to include the daylighting of Tatlow Creek which is located in Volunteer Park. This phase must go through the City Council and the Park Board capital planning process for the 2015-2017 Capital Plan before any plans can be finalized. Tatlow Creek had been scheduled to be daylighted in 1996, and the project to start in 1997. The project was deemed feasible and the storm water was to be diverted back into the natural creek bed and tunneled under Point Grey Road. When it was not done, the project was proposed again by a UBC masters' student as the Tatlow Creek Revitalization Project. 
If this project is completed as phase 2 of the new Park Board Project, it would allow for salmon and trout spawning. Caledon, Ontario Credit River: East Credit Tributary Credit Valley Conservation (CVC) worked with a private landowner to daylight 500 m of coldwater stream on their Caledon family farm. The project emerged from a decision to replace a failing tile drain on the farm property with a stream. The stream was buried in an agricultural tile in the early 1980s to facilitate agricultural operations. CVC worked collaboratively with the landowners to design and construct a new stream, stream-side grassland and wetland in 2017. The project improved biodiversity and ecosystem health. Nine species of fish have been recorded in the stream, and Bobolink and Eastern Meadowlark (both threatened bird species) use the planted riparian grassland. Frogs and toads are also thriving in the new wetland. In addition to the newly created stream, CVC removed a perched culvert downstream that was preventing fish passage, to allow downstream fish populations to reach the new stream. In January 2018, the landowners received the Ontario Heritage Trust Lieutenant Governor's Award for Conservation Excellence in recognition of the project's contribution to conservation. The project was funded by Fisheries and Oceans Canada, the Peel Rural Water Quality Program and the Species at Risk Farm Incentive Program. France Ile de France La Bièvre river Partial reopening of sections and re-naturalisation of the La Bièvre river, in the region Ile de France (from the south to Paris, where it joins La Seine) 600 metre section in Fresnes in 2003 900 metre section in Verrieres-le-Buisson/Massy in 2006 600 metre section in L’Haÿ-les-Roses in 2016 600 metre section between Arcueil and Gentilly in 2021 Re-naturalisation in 2020 of a section from Bievres to Igny from a relatively straight caisson-reinforced embankment to a meandering stream (excess flow diverted into a pipe). United States California Codornices Creek and Strawberry Creek, Berkeley Islais Creek, San Francisco Maryland Since the 1990s there have been several plans to daylight the Jones Falls along much of its route through downtown Baltimore. Massachusetts Part of Island End River flowing through Everett, Massachusetts was daylighted in 2021. New York (State) Yonkers, New York, the third largest city in the state, broke ground on December 15, 2010, on a project to daylight the Saw Mill River as it runs through its downtown, called Getty Square. The daylighting project is the cornerstone of a large redevelopment effort in the downtown. An additional two sections of the Saw Mill River are planned to be daylighted as well. The first phase of the Yonkers daylighting was portrayed in the documentary Lost Rivers. The second phase, where the river runs under the Mill Street Courtyard, broke ground on March 19, 2014. Salt Lake City, Utah City Creek In a public-private partnership, Salt Lake City and the Church of Jesus Christ of Latter-day Saints exchanged ownership of a surface parking lot at 110 N State Street in Salt Lake City for development rights to an underground parking garage. In 1995, a donation by the church allowed Salt Lake City to daylight a creek channel through the newly created City Creek Park. Three Creeks Confluence Red Butte, Emigration, and Parleys Creeks flow into the Jordan River at 1300 South and 900 West in Salt Lake City, UT. The site was previously paved over with a dead-end segment of 1300 South. 
A dilapidated, vacant home existed to the north of 1300 South on the site. The area was in a neglected condition, impacted by noxious weeds, dumping, and encroachments from private property. Approximately $3 million was secured for the construction of the Three Creeks Confluence, a partnership between Salt Lake City and the Seven Canyons Trust. Red Butte, Emigration, and Parleys Creeks were daylighted 200 feet in a newly restored channel up to 900 West. The site includes a Jordan River Trail connection, fishing bridge, and plaza space. In 2017, an Achievement Award from the Utah Chapter of the American Planning Association was received for the innovative project design and creative community engagement process. Seattle, Washington Pipers Creek Pipers Creek in the central to north Greenwood area is joined by Venema and Mohlendorph Creeks in Carkeek Park on Puget Sound. Pipers is one of the four largest streams in urban Seattle, together with Longfellow, Taylor, and Thornton creeks. Pipers Creek drains a watershed into Puget Sound, from a residential upper plateau that is most of the watershed, through the steep ravines of Carkeek Park. The headwaters begin in the north Greenwood neighborhood. As a result of project efforts, salmon were brought back to Pipers Creek, Venema, and Mohlendorph creeks in the mid-2000s after a fifty-year absence. The latter is named for the late Ted Mohlendorph, a biologist who spearheaded efforts to restore the watershed as salmon habitat. Though augmented by hatchery fish, anywhere from 200 to 600 chum salmon return each November, along with a few coho in the fall and fewer occasional winter steelhead. Inspirationally, several hundred small resident coastal cutthroat trout live in the watershed, believed to be native fish that survived decades of urban assault. An environmental learning center and programs are part of comprehensive restoration. More than four miles (6 km) of trail are maintained by neighborhood volunteers, who put in 4,000 hours of work in 2003, for example. The creek waters are pretty in their impressively restored settings, but the watershed is the surrounding neighborhoods and streets, laced with petrochemicals, pesticides, fertilizers, wandering pets, and such. Along with the high flow volumes during storm runoff and the resulting turbidity, water quality is the remaining big issue in restoring salmon. The north fork of Pipers Creek is the site for the 110th Cascades, a street edge alternatives street demonstration project (see above). The 110th Cascades are a creek-like cascade of stair-stepped natural, seasonal pools that intercept, infiltrate, slow and filter the stormwater draining through the project. The cascades are a part of a natural drainage systems project; together these united the community visually, environmentally, and socially, toward integrating the neighborhood as a community. Taylor Creek Taylor Creek flows from Deadhorse Canyon (west of Rainier Avenue S at 68th Avenue S and northwest of Skyway Park), through Lakeridge Park to Lake Washington. With volunteer effort and some city matching grants, restoration has been underway since 1971. Volunteers have planted thousands of indigenous trees and plants, removed tons of garbage, removed invasive plants, and had city help removing fish-blocking culverts and improving trails. A deer has been spotted, and sightings of raccoons, opossum and birds are common. By about 2050, the area will be looking like a young version of what it looked like before being disrupted. 
Taylor is one of the four largest streams in urban Seattle. Fauntleroy Creek Fauntleroy Creek in the Fauntleroy neighborhood of West Seattle flows about a mile (1.6 km) from as far east as 38th Avenue SW in the modest 33 acre (130,000 m2) Fauntleroy Park at SW Barton Street, through a fish ladder at its outlet near the Fauntleroy ferry terminal (the creek drops a moderately steep 300 ft (91 m) in that one mile). Coho salmon and cutthroat trout returned as soon as barriers were removed, after concerted effort and pressure by citizen groups of activist neighbors (1989–1998). A further culvert blocks fish passage to Kilbourne Park and so on up to the headwaters in Fauntleroy Park. The 98 acre (400,000 m2) watershed is about two-thirds residential development, from 1900s summer colony to post-World War II urban, with the rest natural space, primarily Fauntleroy Park. Longfellow Creek Longfellow Creek is one of the four largest in urban Seattle. It flows north from Roxhill Park for several miles along the valley of the Delridge neighborhood of West Seattle, turning east to reach the Duwamish Waterway via a 3,300 ft (1000 m) pipe beneath the Bethlehem Steel plant (now Nucor). Salmon returned without intervention as soon as toxic input was ended and barriers were removed, after having been extinguished for 60 years. Construction of a fish ladder at the north end of the West Seattle Golf Course will allow spawning salmon up along the fairways. Farther upstream the city has been enlarging and building more storm-detention ponds, recreation areas, and an outdoor-education center at Camp Long. An area of open upland, wetland and wooded space just east of Chief Sealth High School in Westwood is the first daylighted reach of Longfellow Creek. It has been the location of some plant and tree restoration since 1997. After more than a decade of preparation by hundreds of neighborhood volunteers, a restoration and 4.2 mile (6.7 km) legacy trail was completed in 2004. Further improvement by removal of invasive vegetation is ongoing as native species retake hold. Blue heron and coyote can be seen. The creek first emerges at the 10,000-year-old Roxhill Bog, south of the Westwood Village shopping center. Madrona Creek Citizens of Madrona neighborhoods initiated a daylighting project in 2001, extending from above 38th Avenue down to Lake Washington. Daylighting will return the creek to a new bed and replace the sloping lawn between Lake Washington Boulevard and Lake Washington with native plantings, with the mouth of the creek at a restored wetland cove on the lake. New culverts under 38th, the boulevard, and under a permeable pedestrian path will allow fish passage. Native plantings will restore about 1.5 acres (6,100 m2), with plantings three to four feet in height at three key view corridors. Planning continued through 2004, followed by design (2005) and construction (2006). The completion celebration is scheduled for spring 2007. The $450,000 cost is funded by community-initiated grants and private donations. Citizen stewards of the creek and woods are represented by the Friends of Madrona Woods (1996). The urban forest encompasses about 9 acres (36,000 m2), largely in a couple of ravines. The park area was built in 1891–1893 and has not been officially maintained since the 1930s, with the demise of streetcars and pedestrian lifestyles. 
Persistent efforts began in 1995 with the informal removal of ivy smothering trees, then of invasive species like holly, laurel and blackberries, and the realization that effective restoration would require comprehensive stewardship. With a Department of Neighborhoods grant, the neighborhood started a formal effort. Neighborhood groups, planning with naturalists and landscape architects, took an effective early step: rebuilding trails, promoting access and building a constituency. Further priorities were protection for habitat, restoration of stream beds, rehabilitation as a natural area using native plants, and using the Madrona Woods as a setting for environmental education programs at local schools. A hired landscape architect became a team member, and experimental plots were set up to test different methods for revegetating with native plants. (Plants adapt to microclimates; experimentation is required to jumpstart the otherwise very long natural processes.) Friends of Madrona Woods earned a much larger Department of Neighborhoods matching grant in 2000, funding the creation of a master action plan and major trail restoration work. The community match for the grant was nearly 2500 hours of volunteer labor by community members and school children from St. Therese and Epiphany schools. After many decades of urban use without formal maintenance, substantial trail engineering was required. EarthCorps was contracted to do the actual construction, which included 86 steps, two landings and a bridge. In the process of clearing, volunteers found substantial erosion in the wetland hillside, leading to a grant from a Parks Department fund to stabilize it with a water cascade of natural materials. Neighbors did a little trail-building of their own with Volunteers for Outdoor Washington and an all-day trail building workshop (February 2000). Work parties continue monthly through much of the year. Schmitz Creek Schmitz Creek in the Alki neighborhood of West Seattle flows to Puget Sound from Schmitz Park, SW 55th Avenue at SW Admiral Way. Apart from the paved entrance and a parking lot at the northwest corner, the park has remained essentially unchanged since its 53 acres (210,000 m2) were protected 1908-1912 from complete logging. Fragmentary old growth forest remains. Daylighting and drainage rebuilding to handle seasonal and storm flow was done 2001-2003. United Kingdom Porter Brook, Sheffield, Yorkshire The Porter Brook flows from the west of Sheffield on the edge of the Peak District and flows into the River Sheaf at Sheaf Street near Sheffield Railway Station. The Porter Brook is one of Sheffield's five well-known rivers, along with the Don, Sheaf, Loxley and Rivelin. The Porter has been deculverted at Matilda Street near the BBC Radio Sheffield studios. A feasibility study for the scheme was undertaken for South Yorkshire Forest Partnership by Sheffield City Council in 2013 with funding from the Environment Agency and the EU via the Interreg North Sea Region Programme. The project was completed by Sheffield City Council with funding from the Environment Agency in 2016. The Porter Brook daylighting scheme featured in a 2016 BBC Radio 4 documentary entitled A River of Steel, produced by sound recordist Chris Watson, ex-member of Cabaret Voltaire. It was also discussed in an article in The Guardian in 2017. River Roch, Rochdale, Greater Manchester The River Roch, which runs through the town of Rochdale, has recently been uncovered, revealing the medieval bridge in place. 
It was covered in 1904 to accommodate a tram network that has since closed. South Korea In Seoul, which buried the Cheonggyecheon creek during the city's 1960s boom, an artificial waterway and adjoining parks have been built atop it. Mayor Lee Myung Bak, formerly a construction magnate with the Hyundai chaebol that helped bury the river, ran for office promising to daylight it, and achieved in 2005 a greenspace in a city without very many parks or playgrounds. The new park is hugely popular, alleviating fears that opening the river would cause nearby businesses to lose customers. See also Stream restoration Subterranean river Water resources Notes and references Further reading Overview of the geography of metro Seattle watersheds, Map of the landscape carved by the Vashon Glacier some 14,000 years ago. Homewaters Project, Thornton Creek Watershed Longfellow Creek Home Page City of Seattle Urban Creeks Legacy What is in urban stormwater runoff External links https://uwaterloo.ca/stream-daylighting/interactive-map https://web.archive.org/web/20071008041448/http://groundworkhudsonvalley.org/ http://www.SawMillRiverCoalition.org https://web.archive.org/web/20121109013431/http://riverwiki.restorerivers.eu/ Water streams Ecological restoration Hydrology Hydraulic engineering Riparian zone Habitat Water and the environment Subterranean rivers
Daylighting (streams)
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
5,543
[ "Hydrology", "Ecological restoration", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Riparian zone", "Hydraulic engineering" ]
5,454,132
https://en.wikipedia.org/wiki/Problem%20of%20future%20contingents
Future contingent propositions (or simply, future contingents) are statements about states of affairs in the future that are contingent: neither necessarily true nor necessarily false. The problem of future contingents seems to have been first discussed by Aristotle in chapter 9 of his On Interpretation (De Interpretatione), using the famous sea-battle example. Roughly a generation later, Diodorus Cronus from the Megarian school of philosophy stated a version of the problem in his notorious master argument. The problem was later discussed by Leibniz. The problem can be expressed as follows. Suppose that a sea-battle will not be fought tomorrow. Then it was also true yesterday (and the week before, and last year) that it will not be fought, since any true statement about what will be the case in the future was also true in the past. But all past truths are now necessary truths; therefore it is now necessarily true in the past, prior and up to the original statement "A sea battle will not be fought tomorrow", that the battle will not be fought, and thus the statement that it will be fought is necessarily false. Therefore, it is not possible that the battle will be fought. In general, if something will not be the case, it is not possible for it to be the case. "For a man may predict an event ten thousand years beforehand, and another may predict the reverse; that which was truly predicted at the moment in the past will of necessity take place in the fullness of time" (De Int. 18b35). This conflicts with the idea of our own free choice: that we have the power to determine or control the course of events in the future, which seems impossible if what happens, or does not happen, is necessarily going to happen, or not happen. As Aristotle says, if so there would be no need "to deliberate or to take trouble, on the supposition that if we should adopt a certain course, a certain result would follow, while, if we did not, the result would not follow". Aristotle's solution Aristotle solved the problem by asserting that the principle of bivalence found its exception in this paradox of the sea battles: in this specific case, what is impossible is that both alternatives can be possible at the same time: either there will be a battle, or there won't. Both options can't be simultaneously taken. Today, they are neither true nor false; but if one is true, then the other becomes false. According to Aristotle, it is impossible to say today if the proposition is correct: we must wait for the contingent realization (or not) of the battle, logic realizes itself afterwards: One of the two propositions in such instances must be true and the other false, but we cannot say determinately that this or that is false, but must leave the alternative undecided. One may indeed be more likely to be true than the other, but it cannot be either actually true or actually false. It is therefore plain that it is not necessary that of an affirmation and a denial, one should be true and the other false. For in the case of that which exists potentially, but not actually, the rule which applies to that which exists actually does not hold good. (§9) For Diodorus, the future battle was either impossible or necessary. Aristotle added a third term, contingency, which saves logic while in the same time leaving place for indetermination in reality. 
What is necessary is not that there will, or that there will not, be a battle tomorrow; rather, the dichotomy itself is necessary: A sea-fight must either take place tomorrow or not, but it is not necessary that it should take place tomorrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place tomorrow. (De Interpretatione, 9, 19 a 30.) Islamic philosophy What exactly al-Farabi posited on the question of future contingents is contentious. Nicholas Rescher argues that al-Farabi's position is that the truth value of future contingents is already distributed in an "indefinite way", whereas Fritz Zimmermann argues that al-Farabi endorsed Aristotle's solution that the truth value of future contingents has not been distributed yet. Peter Adamson claims they are both correct, as al-Farabi endorses both perspectives at different points in his writing, depending on how far he is engaging with the question of divine foreknowledge. Al-Farabi's argument about "indefinite" truth values centers on the idea that "from premises that are contingently true, a contingently true conclusion necessarily follows". This means that even though a future contingent will occur, it may not have done so according to present contingent facts; as such, the truth value of a proposition concerning that future contingent is true, but true in a contingent way. Al-Farabi uses the following example: if we argue truly that Zayd will take a trip tomorrow, then he will, but crucially: "There is in Zayd the possibility that he stays home ... if we grant that Zayd is capable of staying home or of making the trip, then these two antithetical outcomes are equally possible." Al-Farabi's argument deals with the dilemma of future contingents by denying that the proposition P, "it is true now that Zayd will travel tomorrow", and the proposition Q, "it is true tomorrow that Zayd travels", would lead us to conclude that, necessarily, if P then necessarily Q. He denies this by arguing that "the truth of the present statement about Zayd's journey does not exclude the possibility of Zayd's staying at home: it just excludes that this possibility will be realized". Leibniz Leibniz gave another response to the paradox in §6 of Discourse on Metaphysics: "That God does nothing which is not orderly, and that it is not even possible to conceive of events which are not regular." Thus, even a miracle, the Event par excellence, does not break the regular order of things. What is seen as irregular is only a defect of perspective: it does not appear irregular in relation to the universal order, and thus possibility exceeds human logic. Leibniz encounters this paradox because, according to him: Thus the quality of king, which belonged to Alexander the Great, an abstraction from the subject, is not sufficiently determined to constitute an individual, and does not contain the other qualities of the same subject, nor everything which the idea of this prince includes. God, however, seeing the individual concept, or haecceity, of Alexander, sees there at the same time the basis and the reason of all the predicates which can be truly uttered regarding him; for instance that he will conquer Darius and Porus, even to the point of knowing a priori (and not by experience) whether he died a natural death or by poison – facts which we can learn only through history.
When we carefully consider the connection of things we see also the possibility of saying that there was always in the soul of Alexander marks of all that had happened to him and evidences of all that would happen to him and traces even of everything which occurs in the universe, although God alone could recognize them all. (§8) If everything that happens to Alexander derives from the haecceity of Alexander, then fatalism threatens Leibniz's construction: We have said that the concept of an individual substance includes once for all everything which can ever happen to it and that in considering this concept one will be able to see everything which can truly be said concerning the individual, just as we are able to see in the nature of a circle all the properties which can be derived from it. But does it not seem that in this way the difference between contingent and necessary truths will be destroyed, that there will be no place for human liberty, and that an absolute fatality will rule as well over all our actions as over all the rest of the events of the world? To this I reply that a distinction must be made between that which is certain and that which is necessary. (§13) Against Aristotle's separation between the subject and the predicate, Leibniz states: "Thus the content of the subject must always include that of the predicate in such a way that if one understands perfectly the concept of the subject, he will know that the predicate appertains to it also." (§8) The predicate (what happens to Alexander) must be completely included in the subject (Alexander) "if one understands perfectly the concept of the subject". Leibniz therefore distinguishes two types of necessity: necessary necessity and contingent necessity, or universal necessity versus singular necessity. Universal necessity concerns universal truths, while singular necessity concerns something that is necessary yet could have failed to be (it is thus a "contingent necessity"). Leibniz hereby uses the concept of compossible worlds. According to Leibniz, contingent acts such as "Caesar crossing the Rubicon" or "Adam eating the apple" are necessary: that is, they are singular necessities, contingent and accidental, yet grounded in the principle of sufficient reason. Furthermore, this leads Leibniz to conceive of the subject not as a universal, but as a singular: it is true that "Caesar crosses the Rubicon", but it is true only of this Caesar at this time, not of any dictator nor of Caesar at any time (§8, 9, 13). Thus Leibniz conceives of substance as plural: there is a plurality of singular substances, which he calls monads. Leibniz hence creates a concept of the individual as such, and attributes events to it. There is a universal necessity, which is universally applicable, and a singular necessity, which applies to each singular substance, or event. There is one proper noun for each singular event: Leibniz creates a logic of singularity, which Aristotle thought impossible (he considered that there could only be knowledge of the general). 20th century One of the early motivations for the study of many-valued logics has been precisely this issue. In the early 20th century, the Polish formal logician Jan Łukasiewicz proposed three truth-values: the true, the false and the as-yet-undetermined. This approach was later developed by Arend Heyting and L. E. J. Brouwer; see Łukasiewicz logic.
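A minimal sketch of Łukasiewicz's three-valued connectives, assuming the standard Ł3 definitions (negation 1 − x, conjunction min, disjunction max, implication min(1, 1 − x + y)) and using 0 for false, 1/2 for the as-yet-undetermined and 1 for true; the function names and the proposition p are illustrative, not part of any standard presentation.

# Lukasiewicz three-valued logic (L3): 0 = false, 1/2 = as-yet-undetermined, 1 = true.
from fractions import Fraction

UNDETERMINED = Fraction(1, 2)

def neg(x):
    return 1 - x

def conj(x, y):
    return min(x, y)

def disj(x, y):
    return max(x, y)

def implies(x, y):
    return min(Fraction(1), 1 - x + y)

# p = "a sea battle will be fought tomorrow", as yet undetermined.
p = UNDETERMINED
print(disj(p, neg(p)))   # 1/2 -- the excluded middle is not a tautology in L3
print(implies(p, p))     # 1   -- but p -> p still always holds

With p undetermined, p or not-p evaluates to 1/2 rather than 1, which is exactly the departure from bivalence that the sea-battle example motivates.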
Issues such as this have also been addressed in various temporal logics, where one can assert that "Eventually, either there will be a sea battle tomorrow, or there won't be" (which is true provided "tomorrow" eventually occurs). The modal fallacy By asserting "A sea-fight must either take place tomorrow or not, but it is not necessary that it should take place tomorrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place tomorrow", Aristotle is simply claiming "necessarily (a or not-a)", which is correct. However, if we then conclude "If a is the case, then necessarily, a is the case", we commit what is known as the modal fallacy. Expressed in another way: (i) if a proposition is true, then it cannot be false; (ii) if a proposition cannot be false, then it is necessarily true; (iii) therefore, if a proposition is true, it is necessarily true. That is, there are no contingent propositions. Every proposition is either necessarily true or necessarily false. The fallacy arises in the ambiguity of the first premise. If we interpret it close to the English, we get: (iv) P → ¬◇¬P; (v) ¬◇¬P → □P; hence (vi) P → □P. However, if we recognize that the English expression (i) is potentially misleading, that it assigns a necessity to what is simply nothing more than a necessary condition, then we get instead as our premises: (vii) □(P → ¬¬P), where the necessity attaches to the whole conditional; (viii) ¬◇¬P → □P. From these latter two premises, one cannot validly infer the conclusion: (ix) P → □P. A toy possible-worlds check of this scope distinction is sketched below. See also Logical determinism Free will Principle of distributivity Principle of plenitude Truth-value link In Borges' The Garden of Forking Paths, both alternatives happen, thus leading to what Deleuze calls "incompossible worlds" Notes Further reading attempts to reconstruct both Aristotle's and Diodorus' arguments in propositional modal logic. Dorothea Frede (1985), "The Sea Battle Reconsidered: A defense of the traditional interpretation", Oxford Studies in Ancient Philosophy 3, 31–87. John MacFarlane (2003), "Future Contingents and Relative Truth", The Philosophical Quarterly 53, 321–36. Jules Vuillemin, "Le chapitre IX du De Interpretatione d'Aristote - Vers une réhabilitation de l'opinion comme connaissance probable des choses contingentes", Philosophiques, vol. X, n°1, April 1983. External links Aristotle's De Interpretatione: Semantics and Philosophy of Language, with an extensive bibliography of recent studies on the future sea battle. Selected Bibliography on the Master Argument, Diodorus Chronus, Philo the Dialectician, with a bibliography on Diodorus and the problem of future contingents. Modal logic Philosophical logic Paradoxes Future Contingents Future Ancient Greek logic
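The possible-worlds sketch referred to under "The modal fallacy" above: a minimal check, assuming a toy reading on which "necessarily" means "true in every world under consideration"; the two worlds and the single proposition p ("a sea battle is fought") are illustrative assumptions, not part of any standard treatment.

# Toy possible-worlds check of the scope distinction behind the modal fallacy.
# "Necessarily phi" is read here as "phi holds in every world under consideration";
# the two worlds and the proposition p are illustrative placeholders.
worlds = [
    {"p": True},    # a world in which the sea battle is fought
    {"p": False},   # a world in which it is not
]

def necessarily(formula):
    return all(formula(w) for w in worlds)

# Necessarily (p or not-p): true -- the dichotomy itself is necessary.
print(necessarily(lambda w: w["p"] or not w["p"]))                          # True

# Necessarily p, or necessarily not-p: false -- neither disjunct is necessary.
print(necessarily(lambda w: w["p"]) or necessarily(lambda w: not w["p"]))   # False

The first check corresponds to Aristotle's "necessarily (a or not-a)"; the second shows why that claim does not license "necessarily a" or "necessarily not-a" for either disjunct.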
Problem of future contingents
[ "Physics", "Mathematics" ]
2,694
[ "Physical quantities", "Time", "Future", "Mathematical logic", "Spacetime", "Modal logic" ]
5,455,427
https://en.wikipedia.org/wiki/Axilrod%E2%80%93Teller%20potential
The Axilrod–Teller potential, in molecular physics, is a three-body potential that results from a third-order perturbation correction to the attractive London dispersion interactions (instantaneous induced dipole–induced dipole):
E_{ijk} = E_0 \frac{1 + 3\cos\gamma_i \cos\gamma_j \cos\gamma_k}{(r_{ij}\, r_{jk}\, r_{ik})^3}
where r_ij is the distance between atoms i and j, and γ_i is the angle at atom i between the vectors r_ij and r_ik. The coefficient E_0 is positive and of the order Vα³, where V is the ionization energy and α is the mean atomic polarizability; the exact value of E_0 depends on the magnitudes of the dipole matrix elements and on the energies of the orbitals. References Chemical bonding Quantum mechanical potentials
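A minimal numerical sketch of the triple-dipole term above, assuming the three atoms are given as Cartesian coordinates and taking E_0 = 1 as an illustrative placeholder rather than a fitted coefficient; the function name and test geometry are likewise illustrative.

# Axilrod-Teller triple-dipole energy for three atoms i, j, k.
# E0 = 1 and the coordinates below are illustrative placeholders, not fitted values.
import numpy as np

def axilrod_teller(r_i, r_j, r_k, E0=1.0):
    # E_ijk = E0 * (1 + 3 cos(g_i) cos(g_j) cos(g_k)) / (r_ij * r_jk * r_ik)**3
    r_i, r_j, r_k = map(np.asarray, (r_i, r_j, r_k))
    d_ij = np.linalg.norm(r_j - r_i)
    d_jk = np.linalg.norm(r_k - r_j)
    d_ik = np.linalg.norm(r_k - r_i)
    # Cosines of the interior angles at each atom, from the law of cosines.
    cos_i = (d_ij**2 + d_ik**2 - d_jk**2) / (2 * d_ij * d_ik)
    cos_j = (d_ij**2 + d_jk**2 - d_ik**2) / (2 * d_ij * d_jk)
    cos_k = (d_ik**2 + d_jk**2 - d_ij**2) / (2 * d_ik * d_jk)
    return E0 * (1 + 3 * cos_i * cos_j * cos_k) / (d_ij * d_jk * d_ik) ** 3

# Equilateral triangle of unit side: each cosine is 1/2, so E = E0 * (1 + 3/8) = 1.375.
print(axilrod_teller([0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0]))

For an equilateral arrangement the term is positive (repulsive for E_0 > 0), while for three nearly collinear atoms the product of cosines is negative and the term becomes attractive, which illustrates the geometry dependence of this correction.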
Axilrod–Teller potential
[ "Physics", "Chemistry", "Materials_science" ]
121
[ "Materials science stubs", "Quantum mechanics", "Quantum mechanical potentials", "Condensed matter physics", "nan", "Chemical bonding", "Electromagnetism stubs", "Quantum physics stubs" ]