Dataset fields: id (int64, 39 to 79M), url (string, 32 to 168 characters), text (string, 7 to 145k characters), source (string, 2 to 105 characters), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
1,825,388
https://en.wikipedia.org/wiki/Orifice%20plate
An orifice plate is a device used for measuring flow rate, for reducing pressure or for restricting flow (in the latter two cases it is often called a restriction plate).

Description

An orifice plate is a thin plate with a hole in it, which is usually placed in a pipe. When a fluid (whether liquid or gaseous) passes through the orifice, its pressure builds up slightly upstream of the orifice, but as the fluid is forced to converge to pass through the hole, the velocity increases and the fluid pressure decreases. A little downstream of the orifice the flow reaches its point of maximum convergence, the vena contracta, where the velocity reaches its maximum and the pressure reaches its minimum. Beyond that, the flow expands, the velocity falls and the pressure increases. By measuring the difference in fluid pressure across tappings upstream and downstream of the plate, the flow rate can be obtained from Bernoulli's equation using coefficients established from extensive research. In general, the mass flow rate, measured in kg/s, across an orifice can be described as

$$\dot{m} = \frac{C}{\sqrt{1-\beta^4}}\,\varepsilon\,\frac{\pi}{4}\,d^2\,\sqrt{2\,\rho_1\,\Delta p}$$

where
$C$ = coefficient of discharge, dimensionless, typically between 0.6 and 0.85, depending on the orifice geometry and tappings,
$\beta$ = diameter ratio of orifice diameter $d$ to pipe diameter $D$, dimensionless,
$\varepsilon$ = expansibility factor, 1 for incompressible fluids including most liquids, and decreasing with the pressure ratio across the orifice, dimensionless,
$d$ = internal orifice diameter under operating conditions, m,
$\rho_1$ = fluid density in the plane of the upstream tapping, kg/m3,
$\Delta p$ = differential pressure measured across the orifice, Pa.

The volume flow rate, measured in m3/s, is

$$Q = \frac{\dot{m}}{\rho_1}.$$

The overall pressure loss in the pipe due to an orifice plate is lower than the measured differential pressure, typically by a factor that depends on the diameter ratio and the coefficient of discharge (see Overall pressure loss below). A short numerical sketch of these flow relationships is given after the Application overview below.

Application

Orifice plates are most commonly used to measure flow rates in pipes, when the fluid is single-phase (rather than being a mixture of gases and liquids, or of liquids and solids) and well-mixed, the flow is continuous rather than pulsating, the fluid occupies the entire pipe (precluding silt or trapped gas), the flow profile is even and well-developed, and the fluid and flow rate meet certain other conditions. Under these circumstances and when the orifice plate is constructed and installed according to appropriate standards, the flow rate can easily be determined using published formulae based on substantial research and published in industry, national and international standards. An orifice plate is called a calibrated orifice if it has been calibrated with an appropriate fluid flow and a traceable flow measurement device. Plates are commonly made with sharp-edged circular orifices and installed concentric with the pipe and with pressure tappings at one of three standard pairs of distances upstream and downstream of the plate; these types are covered by ISO 5167 and other major standards. There are many other possibilities. The edges may be rounded or conical, the plate may have an orifice the same size as the pipe except for a segment at top or bottom which is obstructed, the orifice may be installed eccentric to the pipe, and the pressure tappings may be at other positions. Variations on these possibilities are covered in various standards and handbooks.
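The flow relationship given under Description above is straightforward to evaluate numerically. The following is a minimal sketch, not taken from any standard or library, that computes mass and volume flow rate from the quantities defined there; the pipe and orifice sizes, the discharge coefficient of 0.61 and the expansibility factor of 1.0 are illustrative assumptions, not values prescribed by ISO 5167 for a particular installation.

```python
import math

def orifice_mass_flow(C, beta, eps, d, rho1, dp):
    """Mass flow rate (kg/s) through an orifice plate.

    C    : coefficient of discharge (dimensionless)
    beta : diameter ratio d/D (dimensionless)
    eps  : expansibility factor (1.0 for liquids)
    d    : orifice bore diameter under operating conditions (m)
    rho1 : fluid density at the upstream tapping plane (kg/m^3)
    dp   : differential pressure across the tappings (Pa)
    """
    return (C / math.sqrt(1.0 - beta**4)) * eps * (math.pi / 4.0) * d**2 \
        * math.sqrt(2.0 * rho1 * dp)

# Illustrative example (assumed values): water through a 50 mm bore
# in a 100 mm pipe, C = 0.61, eps = 1.0, differential pressure 25 kPa.
d, D, rho1 = 0.050, 0.100, 998.0
m_dot = orifice_mass_flow(C=0.61, beta=d / D, eps=1.0, d=d, rho1=rho1, dp=25e3)
Q = m_dot / rho1  # volume flow rate, m^3/s
print(f"mass flow = {m_dot:.3f} kg/s, volume flow = {Q * 3600:.1f} m^3/h")
```

In a real metering calculation the discharge coefficient would come from the standard's correlation for the chosen tapping arrangement rather than being assumed, as the article discusses below.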
Each combination gives rise to different coefficients of discharge which can be predicted so long as various conditions are met, conditions which differ from one type to another. Once the orifice plate is designed and installed, the flow rate can often be indicated with an acceptably low uncertainty simply by taking the square root of the differential pressure across the orifice's pressure tappings and applying an appropriate constant. Orifice plates are also used to reduce pressure or restrict flow, in which case they are often called restriction plates.

Pressure tappings

There are three standard positions for pressure tappings (also called taps), commonly named as follows:
- corner taps, placed immediately upstream and downstream of the plate; convenient when the plate is provided with an orifice carrier incorporating tappings
- D and D/2 taps, placed one pipe diameter upstream and half a pipe diameter downstream of the plate; these can be installed by welding bosses to the pipe
- flange taps, placed 25.4 mm (1 inch) upstream and downstream of the plate, normally within specialised pipe flanges.

These types are covered by ISO 5167 and other major standards. Other types include:
- pipe taps, placed 2.5 pipe diameters upstream and 8 diameters downstream, at which point the measured differential is equal to the unrecoverable pressure loss caused by the orifice
- vena contracta taps, placed one pipe diameter upstream and at a position 0.3 to 0.9 diameters downstream, depending on the orifice type and size relative to the pipe, in the plane of minimum fluid pressure.

The measured differential pressure differs for each combination and so the coefficient of discharge used in flow calculations depends partly on the tapping positions. The simplest installations use single tappings upstream and downstream, but in some circumstances these may be unreliable; they might be blocked by solids or gas bubbles, or the flow profile might be uneven so that the pressures at the tappings are higher or lower than the average in those planes. In these situations multiple tappings can be used, arranged circumferentially around the pipe and joined by a piezometer ring, or (in the case of corner taps) annular slots running completely round the internal circumference of the orifice carrier.

Plate

Standards and handbooks are mainly concerned with thin, sharp-edged plates. In these, the leading edge is sharp and free of burrs and the cylindrical section of the orifice is short, either because the entire plate is thin or because the downstream edge of the plate is bevelled. Exceptions include the quarter-circle (quadrant-edge) orifice, which has a fully rounded leading edge and no cylindrical section, and the conical-entrance plate, which has a bevelled leading edge and a very short cylindrical section. The orifices are normally concentric with the pipe (the eccentric orifice is a specific exception) and circular (except in the specific case of the segmental orifice, in which the plate obstructs just a segment of the pipe). Standards and handbooks stipulate that the upstream surface of the plate is particularly flat and smooth. Sometimes a small drain or vent hole is drilled through the plate where it meets the pipe, to allow condensate or gas bubbles to pass along the pipe.

Pipe

Standards and handbooks stipulate a well-developed flow profile; velocities will be lower at the pipe wall than in the centre but not eccentric or jetting. Similarly the flow downstream of the plate must be unobstructed, otherwise the downstream pressure will be affected. To achieve this, the pipe must be acceptably circular, smooth and straight for stipulated distances.
Sometimes when it is impossible to provide enough straight pipe, flow conditioners such as tube bundles or plates with multiple holes are inserted into the pipe to straighten and develop the flow profile, but even these require a further length of straight pipe before the orifice itself. Some standards and handbooks also provide for flows from or into large spaces rather than pipes, stipulating that the region before or after the plate is free of obstruction and abnormalities in the flow.

Theory

Incompressible flow

By assuming steady-state, incompressible (constant fluid density), inviscid, laminar flow in a horizontal pipe (no change in elevation) with negligible frictional losses, Bernoulli's equation (which expresses the conservation of energy of an incompressible fluid parcel as it moves between two points on the same streamline) can be rewritten without the gravitational potential energy term and reduced to:

$$P_1 + \tfrac{1}{2}\rho V_1^2 = P_2 + \tfrac{1}{2}\rho V_2^2$$

or:

$$P_1 - P_2 = \tfrac{1}{2}\rho V_2^2 - \tfrac{1}{2}\rho V_1^2.$$

By the continuity equation:

$$Q = A_1 V_1 = A_2 V_2 \quad\text{or}\quad V_1 = \frac{Q}{A_1} \ \text{and}\ V_2 = \frac{Q}{A_2}:$$

$$P_1 - P_2 = \tfrac{1}{2}\rho\left(\frac{Q}{A_2}\right)^2 - \tfrac{1}{2}\rho\left(\frac{Q}{A_1}\right)^2.$$

Solving for $Q$:

$$Q = A_2\,\sqrt{\frac{2\,(P_1-P_2)/\rho}{1-(A_2/A_1)^2}}$$

and:

$$Q = A_2\,\sqrt{\frac{2\,(P_1-P_2)/\rho}{1-(d_2/d_1)^4}}.$$

The above expression for $Q$ gives the theoretical volume flow rate. Introducing the beta factor $\beta = d_2/d_1$ as well as the coefficient of discharge $C_d$:

$$Q = C_d\,A_2\,\sqrt{\frac{2\,(P_1-P_2)/\rho}{1-\beta^4}}.$$

And finally introducing the meter coefficient $C_f$, which is defined as

$$C_f = \frac{C_d}{\sqrt{1-\beta^4}},$$

we obtain the final equation for the volumetric flow of the fluid through the orifice, which accounts for irreversible losses:

$$Q = C_f\,A_2\,\sqrt{\frac{2\,(P_1-P_2)}{\rho}}.$$

Multiplying by the density of the fluid gives the equation for the mass flow rate at any section in the pipe:

$$\dot{m} = \rho\,Q = C_f\,A_2\,\sqrt{2\,\rho\,(P_1-P_2)}.$$

Deriving the above equations used the cross-section of the orifice opening and is not as realistic as using the minimum cross-section at the vena contracta. In addition, frictional losses may not be negligible, and viscosity and turbulence effects may be present. For that reason, the coefficient of discharge $C_d$ is introduced. Methods exist for determining the coefficient of discharge as a function of the Reynolds number. The parameter $1/\sqrt{1-\beta^4}$ is often referred to as the velocity of approach factor, and multiplying the coefficient of discharge by that parameter (as was done above) produces the flow coefficient $C_f$. Methods also exist for determining the flow coefficient as a function of the beta ratio and the location of the downstream pressure sensing tap. For rough approximations, the flow coefficient may be assumed to be between 0.60 and 0.75. For a first approximation, a flow coefficient of 0.62 can be used as this approximates to fully developed flow. An orifice only works well when supplied with a fully developed flow profile. This is achieved by a long upstream length (20 to 40 pipe diameters, depending on Reynolds number) or the use of a flow conditioner. Orifice plates are small and inexpensive but do not recover the pressure drop as well as a venturi, nozzle, or venturi-nozzle does. Venturis also require much less straight pipe upstream. A venturi meter is more efficient, but usually more expensive and less accurate (unless calibrated in a laboratory) than an orifice plate.

Compressible flow

In general, the volumetric flow equation above is applicable only for incompressible flows. It can be modified by introducing the expansibility factor $\varepsilon$ (also called the expansion factor) to account for the compressibility of gases. $\varepsilon$ is 1.0 for incompressible fluids and it can be calculated for compressible gases using empirically determined formulae, as shown below under Computation. For smaller values of β (such as restriction plates with β less than 0.25 and discharge from tanks), if the fluid is compressible, the rate of flow depends on whether the flow has become choked.
If it is, then the flow may be calculated as shown at choked flow (although the flow of real gases through thin-plate orifices never becomes fully choked By using a mechanical energy balance, compressible fluid flow in un-choked conditions may be calculated as: and Under choked flow conditions, the fluid flow rate becomes: or Computation according to ISO 5167 Flow rates through an orifice plate can be calculated without specifically calibrating the individual flowmeter so long as the construction and installation of the device complies with the stipulations of the relevant standard or handbook. The calculation takes account of the fluid and fluid conditions, the pipe size, the orifice size and the measured differential pressure; it also takes account of the coefficient of discharge of the orifice plate, which depends upon the orifice type and the positions of the pressure tappings. With local pressure tappings (corner, flange and D+D/2), sharp-edged orifices have coefficients around 0.6 to 0.63, while the coefficients for conical entrance plates are in the range 0.73 to 0.734 and for quarter-circle plates 0.77 to 0.85. The coefficients of sharp-edged orifices vary more with fluids and flow rates than the coefficients of conical-entrance and quarter-circle plates, especially at low flows and high viscosities. For compressible flows such as flows of gases or steam, an expansibility factor or expansion factor is also calculated. This factor is primarily a function of the ratio of the measured differential pressure to the fluid pressure and so can vary significantly as the flow rate varies, especially at high differential pressures and low static pressures. The equations provided in American and European national and industry standards and the various coefficients used to differ from each other even to the extent of using different combinations of correction factors, but many are now closely aligned and give identical results; in particular, they use the same Reader-Harris/Gallagher (1998) equation for the coefficient of discharge for sharp-edged orifice plates. The equations below largely follow the notation of the international standard ISO 5167 and use SI units. Volume flow rate: Mass flow rate: Coefficient of discharge Coefficient of discharge for sharp-edged orifice plates with corner, flange or D and D/2 tappings and no drain or vent hole (Reader-Harris/Gallagher equation): and if D < 71.2mm in which case this further term is added to C: In the equation for C, and only the three following pairs of values for L1 and L'2 are valid: corner tappings: flange tappings: D and D/2 tappings: Expansibility factor Expansibility factor, also called expansion factor, for sharp-edged orifice plates with corner, flange or D and D/2 tappings: if (at least - standards vary) but for incompressible fluids, including most liquids Overall pressure loss The overall pressure loss caused by an orifice plate is less than the differential pressure measured across tappings near the plate. For sharp-edged plates such as corner, flange or D and D/2 tappings, it can be approximated by the equation or See also Accidental release source terms Choked flow De Laval nozzle Flowmeter Pitot tube Restrictive flow orifice Rocket engine nozzle Torricelli's law Venturi effect Advantages and disadvantages of orifice meter and venturi meter References Notes Citations Sources Online Tools Online Orifice Plate Calculator Fluid dynamics Chemical engineering Mechanical engineering Control devices Piping
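Referring back to the "Computation according to ISO 5167" section of the orifice-plate article above: because the discharge coefficient depends on the pipe Reynolds number, which in turn depends on the flow being sought, the standard's calculation is usually organised as a fixed-point iteration. The sketch below shows only that structure; the `discharge_coefficient` function is a deliberately crude placeholder (a constant typical of sharp-edged plates, as quoted above), not the Reader-Harris/Gallagher equation, the expansibility factor is taken as 1 for a liquid, and all input values are assumed for illustration.

```python
import math

def discharge_coefficient(beta, Re_D):
    # Placeholder only: a typical sharp-edged-orifice value.
    # A real ISO 5167 computation would evaluate the
    # Reader-Harris/Gallagher equation here, which uses beta and Re_D.
    return 0.61

def iso5167_style_mass_flow(d, D, dp, rho1, mu, eps=1.0, tol=1e-9):
    """Iteratively solve for mass flow when C depends on the Reynolds number."""
    beta = d / D
    m_dot = 1.0  # initial guess, kg/s
    for _ in range(50):
        Re_D = 4.0 * m_dot / (math.pi * mu * D)  # pipe Reynolds number
        C = discharge_coefficient(beta, Re_D)
        m_new = (C / math.sqrt(1.0 - beta**4)) * eps * (math.pi / 4.0) * d**2 \
            * math.sqrt(2.0 * rho1 * dp)
        if abs(m_new - m_dot) < tol:
            break
        m_dot = m_new
    return m_dot

# Assumed example: water, 100 mm pipe, 50 mm orifice, 25 kPa differential.
print(iso5167_style_mass_flow(d=0.050, D=0.100, dp=25e3, rho1=998.0, mu=1.0e-3))
```

With a constant placeholder coefficient the loop converges immediately; with a Reynolds-number-dependent correlation it typically converges in a few iterations.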
Orifice plate
[ "Physics", "Chemistry", "Engineering" ]
2,892
[ "Applied and interdisciplinary physics", "Building engineering", "Chemical engineering", "Control devices", "Control engineering", "nan", "Mechanical engineering", "Piping", "Fluid dynamics" ]
18,446,530
https://en.wikipedia.org/wiki/Microplasma
A microplasma is a plasma of small dimensions, ranging from tens to thousands of micrometers. Microplasmas can be generated at a variety of temperatures and pressures, existing as either thermal or non-thermal plasmas. Non-thermal microplasmas that can maintain their state at standard temperatures and pressures are readily available and accessible to scientists as they can be easily sustained and manipulated under standard conditions. Therefore, they can be employed for commercial, industrial, and medical applications, giving rise to the evolving field of microplasmas.

What is a microplasma?

There are four states of matter: solid, liquid, gas, and plasma. Plasmas make up more than 99% of the visible universe. In general, when energy is applied to a gas, electrons bound in the gas molecules (or atoms) are excited and move up to higher energy levels. If the applied energy is high enough, the outermost electron(s) can even be stripped off the molecules (atoms), forming ions. Electrons, molecules (atoms), excited species and ions form a soup of species which involves many interactions between species and demonstrates collective behavior under the influence of external electric and magnetic fields. Light always accompanies plasmas: as the excited species relax and move to lower energy levels, energy is released in the form of light. A microplasma is a subdivision of plasma in which the dimensions of the plasma can range between tens, hundreds, or even thousands of micrometers in size. The majority of microplasmas that are employed in commercial applications are cold plasmas. In a cold plasma, electrons have much higher energy than the accompanying ions and neutrals. Microplasmas are typically generated at elevated pressures, up to atmospheric pressure or higher. Successful ignition of microplasmas is governed by Paschen's law, which describes the breakdown voltage (the voltage at which the plasma begins to arc) as a function of the product of electrode distance and pressure:

$$V_b = \frac{B\,pd}{\ln(A\,pd) - \ln\!\left[\ln\!\left(1 + \frac{1}{\gamma_{se}}\right)\right]}$$

where $pd$ is the product of pressure and distance, $A$ and $B$ are the gas constants used for calculating Townsend's first ionization coefficient, and $\gamma_{se}$ is the secondary emission coefficient of the material. As the pressure increases, the distance between the electrodes must decrease to achieve the same breakdown voltage. This law has been shown to be valid at inter-electrode distances as small as tens of micrometers and at pressures higher than atmospheric. However, its validity at even smaller scales (approaching the Debye length) is still under investigation. A brief numerical illustration of this relation is sketched below, after the discussion of confinement to small spaces.

Generating microplasmas

While microplasma devices have been studied experimentally for more than a decade, understanding has been spurred in the past few years as the result of modelling and computational investigations of microplasmas.

Confinement to small spaces

When the pressure of the gas medium in which the microplasma is generated increases, the distance between the electrodes must decrease to maintain the same breakdown voltage. In such microhollow cathode discharges, the product of pressure and distance ranges from fractions of a Torr·cm to about 10 Torr·cm. At values below 5 Torr·cm, the discharges are called "pre-discharges" and are low-intensity glow discharges. Above 10 Torr·cm the discharge can become uncontrollable and extend from the anode to random locations within the cavity. Further research by David Staack provided a graph of ideal electrode distances, voltages, and carrier gases tested for microplasma generation.
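The following is a short numerical sketch of the Paschen relation quoted above. The constants used (A ≈ 15 cm⁻¹·Torr⁻¹, B ≈ 365 V·cm⁻¹·Torr⁻¹, γse ≈ 0.01) are commonly quoted textbook approximations for air and are assumptions for illustration, not values taken from this article.

```python
import math

# Commonly quoted textbook constants for air (assumed, not from this article):
A = 15.0         # 1/(cm*Torr), constant in Townsend's first ionization coefficient
B = 365.0        # V/(cm*Torr)
gamma_se = 0.01  # secondary electron emission coefficient of the cathode material

def breakdown_voltage(pd):
    """Paschen breakdown voltage (V) as a function of pd in Torr*cm."""
    denom = math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma_se))
    if denom <= 0.0:
        return float("inf")  # left of the asymptote: no breakdown at any voltage
    return B * pd / denom

# Scan pd to trace the Paschen curve and locate its minimum.
points = [(pd, breakdown_voltage(pd)) for pd in (0.05 * i for i in range(2, 200))]
pd_min, v_min = min(points, key=lambda p: p[1])
print(f"Paschen minimum near pd = {pd_min:.2f} Torr*cm, V_b = {v_min:.0f} V")
```

The curve's minimum illustrates the trade-off described in the text: as pressure rises, the electrode gap must shrink to keep the breakdown voltage attainable, which is why microplasmas can be ignited at atmospheric pressure in micrometer-scale gaps.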
Dielectric materials

Dielectrics are poor electrical conductors, but they support electrostatic fields and electric polarization. Dielectric barrier discharge (DBD) microplasmas are typically created between metal plates which are covered by a thin layer of dielectric or highly resistive material. The dielectric layer plays an important role in suppressing the current: while the positive half-cycle of the applied AC voltage is present, the dielectric over the cathode/anode is charged by incoming positive ions/electrons, which reduces the electric field and hinders charge transport towards the electrode. A DBD also has a large surface-to-volume ratio, which promotes diffusion losses and maintains a low gas temperature. When the negative half-cycle of the AC is applied, the electrons are repelled off of the anode and are ready to collide with other particles. Frequencies of 1000 Hz or more are required to move the electrons fast enough to create a microplasma, but excessive frequencies (around 50 kHz) can damage the electrode. Although dielectric barrier discharges come in various shapes and dimensions, each individual discharge is on the micrometer scale.

Pulsed power

AC and high-frequency power are often used to excite dielectrics, in place of DC. Taking AC as an example, there are positive and negative cycles in each period. During the positive cycle, electrons accumulate on the dielectric surface. The negative cycle then repels the accumulated electrons, causing collisions in the gas and creating plasma. During the switch from the negative to positive cycles, the above-mentioned frequency range of 1000 Hz to 50,000 Hz is needed in order for a microplasma to be generated. Because of their small mass, the electrons are able to absorb the sudden switch in energy and become excited; the larger particles (atoms, molecules, and ions), however, are not able to follow the fast switching, which keeps the gas temperature low.

Radio frequency and microwave signals

Low-power RF (radio frequency) and microwave sources based on transistor amplifiers are used to generate microplasmas. Most of these solutions operate at 2.45 GHz. Technology has also been developed that provides both the ignition (high-voltage generation) and highly efficient operation (matching of the plasma and the waveguide impedance) with the same electronics and impedance-transformer network.

Laser induced

With the use of lasers, solid substrates can be converted directly into microplasmas. Solid targets are struck by high-energy lasers, usually gas lasers, which are pulsed over time periods from picoseconds to femtoseconds (mode-locking). Successful experiments have used Ti:Sm, KrF, and YAG lasers, which can be applied to a variety of substrates such as lithium, germanium, plastics, and glass.

History

In 1857, Werner von Siemens, a German scientist, originated ozone generation using a dielectric barrier discharge apparatus for biological decontamination. His observations were explained without the knowledge of "microplasmas", but were later recognized as the first use of microplasmas to date. The early electrical engineers, such as Edison and Tesla, were actually trying to prevent the generation of such "micro-discharges", and used dielectrics to insulate the first electrical infrastructures. A subsequent article published in 1916 identified the Paschen breakdown curve as the prime cause of microplasma generation.
Subsequent articles during the course of the 20th century have described the various conditions and specifications that lead to the generation of microplasmas. After Siemens' interactions with microplasma, Ulrich Kogelschatz was the first to identify these "micro-discharges" and define their fundamental properties. Kogelschatz also realized that microplasmas could be used for excimer formation. His experiments spurred the rapid development of the microplasma field. In February 2003, Kunihide Tachibana, a professor of Kyoto University held the first international workshop on microplasmas (IWM) in Hyogo, Japan. The workshop, titled “The New World of Microplasmas”, opened a new era of microplasma research. Tachibana is recognized as one of the founding fathers as he coined the term “microplasma”. The Second IWM was organized in October 2004 by Professors K.H. Becker, J.G. Eden, and K.H. Schoenbach at Stevens Institute of Technology in Hoboken, New Jersey. The third international workshop was coordinated by the Institute of Low Temperature Plasma Physics alongside the Institute of Physics of Ernst-Moitz-Arndt-University in Greifswald, Germany, May 2006 (K.D. Weltmann). Topics discussed were inspiring scientific and arising technological opportunities of microplasmas. The fourth IWM was held in Taiwan in October 2007 (C.C. Chao and J.E. Chang), the fifth in San Diego, California in March 2009 (J.G. Eden and S.-J. Park), and the sixth in Paris, France in April 2011 (V. Puech). The next (seventh) workshop was held in China in approximately May 2013 (Y.K. Pu). Applications The rapid growth of applications of microplasmas renders it impossible to name all of them within a short space, but some selected applications are listed here. Plasma displays Artificially generated microplasmas are found on the flat panel screen of a plasma display. The technology utilizes small cells and contains electrically charged ionized gases. Across this plasma display panel, there are a millions of tiny cells called pixels that are confined to form a visual image. In the plasma display panels, X and Y grid of electrodes, separated by a MgO dielectric layer and surrounded by a mixture of inert gases - such as argon, neon or xenon, the individual picture elements are addressed. They work on the principle that passing a high voltage through a low-pressure gas generates light. Essentially, a PDP can be viewed as a matrix of tiny fluorescent tubes which are controlled in a sophisticated fashion. Each pixel comprises a small capacitor with three electrodes, one for each primary color (some newer displays include an electrode for yellow). An electrical discharge across the electrodes causes the rare gases sealed in the cell to be converted to plasma form as it ionizes. Being electrically neutral, it contains equal quantities of electrons and ions and is, by definition, a good conductor. Once energized, the plasma cells release ultraviolet (UV) light which then strikes and excites red, green and blue phosphors along the face of each pixel, causing them to glow. Illumination (Light Source) The team of Gary Eden and Sung-Jin Park are pioneering the use of microplasmas for general illumination (as well as UV source). Their apparatus uses many microplasma generators in a large array, which emit light through a clear, transparent window. 
Unlike fluorescent lamps, which require the electrodes to be far apart in a cylindrical cavity and vacuum conditions, microplasma light sources can be put into many different shapes and configurations, and generate heat. This is opposed to the more commonly used fluorescent lamps which require a noble gas atmosphere (usually argon), where excimer formation and resulting radiative decomposition strikes a phosphor coating to create light. Excimer light sources are also being produced and researched. The stable, non-equilibrium condition of microplasmas favors three-body collisions which can lead to excimer formation. The excimer, an unstable molecule produced by collisions of excited atoms, is very short lived due to its rapid dissociation. Upon their decomposition, excimers release different kinds of radiation when electrons fall to lower energy levels. One application, which has been pursued by the Hyundai Display Advanced Technology R&D Research Center and the University of Illinois, is to use excimer light sources in flat panel displays. The technology moved further to the technology of a compact, flat source of UV and vacuum UV wavelengths for various applications. Destruction of volatile organic compounds (VOC's) Microplasma are used to destroy volatile organic compounds. For example, capillary plasma electrode (CPE) discharge was used to effectively destroy volatile organic compounds such as benzene, toluene, ethylbenzene, xylene, ethylene, heptane, octane, and ammonia in the surrounding air for use in advanced life support systems designed for enclosed environments. Destruction efficiencies were determined as a function of plasma energy density, initial contaminant concentration, residence time in plasma volume, reactor volume, and the number of contaminants in the gas flow stream. Complete destruction of VOC's can be achieved in the annular reactor for specific energies of 3 J cm−3 and above. Furthermore, specific energies approaching 10 J cm−3 are required to achieve a comparable destruction efficiency in the cross-flow reactor. This indicates that optimization of the reactor geometry is a critical aspect of achieving maximum destruction efficiencies. Koutsospyros et al. (2004, 2005) and Yin et al. (2003) reported results regarding studies of VOC destruction using CPE plasma reactors. All compounds studied reached maximum VOC destruction efficiencies between 95% and 100%. The VOC destruction efficiency increased initially with the specific energy, but remained at values of the specific energy that are compound-dependent. A similar observation was made for the dependence of the VOC destruction efficiency on the residence time. The destruction efficiency increased with rising initial contaminant concentration. For chemically similar compounds, the maximum destruction efficiency was found to be inversely related to the ionization energy of the compound and directly related to the degree of chemical substitution. This may suggest that chemical substitution sites offer the highest plasma-induced chemical activity. Environmental sensors The small size and modest power required for microplasma devices employ a variety of environmental sensing applications and detect trace concentrations of hazardous species. Microplasmas are sensitive enough to act as detectors, which can distinguish between excessive quantities of complex molecules. C.M. Herring and his colleagues at Caviton Inc. have simulated this system by coupling a microplasma device with a commercial gas chromatography column (GC). 
The microplasma device is situated at the exit of the GC column, which records the relative fluorescence intensity of specific atomic and molecular dissociation fragments. This apparatus possesses the ability to detect minute concentrations of toxic and environmentally hazardous molecules. It can also detect a wide range of wavelengths and the temporal signature of chromatograms, which identifies the species of interest. For the detection of less complex species, the temporal sorting done by the GC column is not necessary since the direct observation of fluorescence produced in the microplasma is sufficient. Ozone generation for water purification Microplasmas are being used for the formation of ozone from atmospheric oxygen. Ozone (O3) has been shown to be a good disinfectant and water treatment that can cause breakdown of organic and inorganic materials. Ozone is not potable and reverts to diatomic oxygen, with a half-life of about 3 days in air room temperature (about 20 0C). In water, however, ozone has a half-life of only 20 minutes at the same temperature of 20 (0C) . Degremont Technologies (Switzerland) produces microplasma arrays for commercial and industrial production of ozone for water treatment. By passing molecular oxygen through a series of dielectric barriers, using what Degremont calls the Intelligent Gap System (IGS), an increasing concentration of ozone is produced by altering the gap size and coatings used on the electrodes farther down the system. The ozone is then directly bubbled into the water to be made potable (suitable for drinking). Unlike chlorine, which is still used in many water purification systems to treat water, ozone does not remain in the water for extended periods. Because ozone decomposes with a half-life of 20 minutes in water at room temperature, there are no lasting effects that may cause harm. Current research Fuel cells Microplasmas serve as energetic sources of ions and radicals, which are desirable for activating chemical reactions. Microplasmas are used as flow reactors that allow molecular gases to flow through the microplasma inducing chemical modifications by molecular decomposition. The high energy electrons of microplasmas accommodate chemical modification and reformation of liquid hydrocarbon fuels to produce fuel for fuel cells. Becker and his co-workers used a single flow-through dc-excited microplasma reactor to generate hydrogen from an atmospheric pressure mixture of ammonia and argon for use in small, portable fuel cells. Lindner and Besser experimented with reforming model hydrocarbons such as methane, methanol, and butane into hydrogen for fuel cell feed. Their novel microplasma reactor was a microhollow cathode discharge with a microfluidic channel. Mass and energy balances on these experiments revealed conversions up to nearly 50%, but the conversion of electrical power input to chemical reaction enthalpy was only on the order of 1%. Although through modeling the reforming reaction it was found that the amount of input electrical power to chemical conversion could increase by improving the device as well as the system parameters. Nanomaterial synthesis and deposition The use of microplasmas is being looked into for the synthesis of complex macromolecules, as well as the addition of functional groups to the surfaces of other substrates. An article by Klages et al. describes the addition of amino groups to the surfaces of polymers after treatment with a pulsed DC discharge apparatus using nitrogen containing gases. 
It was found that ammonia gas microplasmas add on an average of 2.4 amino groups per square nanometer of a nitrocellulose membrane, and increase the strength at which the layers of the substrate can bind. The treatment can also provide a reactive surface for biomedicine, as amino groups are extremely electron-rich and energetic. Mohan Sankaran has done work on the synthesis of nanoparticles using a pulsed DC discharge. His research team has found that by applying a microplasma jet to an electrolytic solution which has either a gold or silver anode is submerged produces the relevant cations. These cations can then capture electrons supplied by the microplasma jet and results in the formation of nanoparticles. The research shows that more nanoparticles of gold and silver are shown in the solution than there are of the resulting salts that form from the acid conducting solution. Cosmetics Microplasma uses in research are being considered. The plasma skin regeneration (PSR) device consists of an ultra–high-radiofrequency generator that excites a tuned resonator and imparts energy to a flow of inert nitrogen gas within the handpiece. The plasma generated has an optical emission spectrum with peaks in the visible range (mainly indigo and violet) and near-infrared range. Nitrogen is used as the gaseous source because it is able to purge oxygen from the surface of the skin, minimizing the risk of unpredictable hot spots, charring, and scar formation. As the plasma hits the skin, energy is rapidly transferred to the skin surface, causing instantaneous heating in a controlled uniform manner, without an explosive effect on tissue or epidermal removal. In pretreatment samples, the zone of collagen shows a dense accumulation of elastin, but in posttreatment samples, this zone contains less dense elastin with significant, interlocking new collagen. Repeated low-energy PSR treatment is an effective modality for improving dyspigmentation, smoothness, and skin laxity associated with photoaging. Histologic analysis of posttreatment samples confirms the production of new collagen and remodeling of dermal architecture. Changes consist of erythema and superficial epidermal peeling without complete removal, generally complete by 4 to 5 days. Microsputtering Thin Film Deposition Active research into microplasma sputtering for conductive interconnect thin film deposition poses a potential additive manufacturing alternative to costly semiconductor industry production standards. Novel microputterers, operating with a continuously fed cathodic wire, employ print head reactors consisting of the wire terminus, two positively biased electrodes, and two opposing negatively charged focus electrodes to generate a microplasma environment within a sub-millimeter target-to-substrate separation space. As in traditional sputtering, the incited plasma bombards the exposed target surface, ejecting individual atoms which are then incident on the substrate surface, forming a conductive thin film. Contrasted with traditional applications, microplasma sputtering offers numerous advantages, including limited to no post-processing requirements, as controlled positioning of the substrate can produce precise patterning without the need for subsequent photolithographic masking and etching, and versatility of substrate form, in that microsputterers are not constrained to planar deposition. 
Additionally, atmospheric conditions permitted by this method eliminate the substantial cost barrier presented by the necessity for the expensive, complex vacuum systems in which contemporary sputtering operations are performed. To date this technique has failed to achieve the resolution of industry standard microelectronics, with pinnacle pathway width results of approx. 9μm, but noted potential for improvements to process gas flow and possible post-processing enhancements stand to assist in closing the gap. Given the method’s relatively low cost and its broad versatility, attaining production quality on par with modern industry standards could potentially stand to spur a revolution in mass-customizable electronics. Plasma medicine Dental treatments Scientists found that microplasmas are capable of inactivating bacteria that causes tooth decay and periodontal diseases. By directing low temperature microplasma beams at the calcified tissue structure beneath the tooth enamel coating called dentin, it severely reduces the amount of dental bacteria and in turn reduces infection. This aspect of microplasma could allow dentists to use microplasma technology to destroy bacteria in tooth cavities instead of using mechanical means. Developers claim that microplasma devices will enable dentists to effectively treat oral-borne diseases with little pain to their patients. Recent studies show that microplasmas can be a very effective method of controlling oral biofilms. Biofilms (also known as slime) are highly organized, three-dimensional bacterial communities. Dental plaque is a common example of oral biofilms. It is the main cause of both tooth decay and periodontal diseases such as Gingivitis and Periodontitis. At the University of Southern California, Parish Sedghizadeh, Director of the USC Center for Biofilms and Chunqi Jiang, assistant research professor in the Ming Hsieh Department of Electrical Engineering-Electrophysics, work with researchers from Viterbi School of Engineering searching for new ways to fight off these bacterial infections. Sedghizadeh explained that the biofilms’ slimy matrix acts as extra protection against traditional antibiotics. However, the centers’ study confirms that biofilms cultivated in the root canal of extracted human teeth can be easily destroyed by the application of microplasma. The plasma emission microscopy obtained during each experiment suggests that the atomic oxygen produced by the microplasma is responsible for the inactivation of bacteria. Sedghizadeh then suggested that the oxygen free radicals could disrupt the biofilms cellular membrane and cause them to break down. According to their ongoing research at USC, Sedghizadeh and Jiang have found that microplasma is not harmful to surrounding healthy tissues and they are confident that microplasma technology will soon become a groundbreaking tool in the medical industry.J.K. Lee along with other scientists in this field have found that microplasma can also be used for teeth bleaching. This reactive species can effectively bleach teeth along with saline or whitening gels that consist of hydrogen peroxide. Lee and his colleagues experimented with this method, examining how microplasma along with hydrogen peroxide effects blood stained human teeth. These scientists took forty extracted single-root, blood stained human teeth and randomly divided them into two groups of twenty. 
Group one received 30% hydrogen peroxide activated by microplasma for thirty minutes in a pulp chamber, while group two received 30% hydrogen peroxide alone for thirty minutes in the pulp chamber and the temperature was maintained at thirty seven degrees Celsius for both groups. After the tests had been performed, they found that microplasma treatment with 30% hydrogen peroxide had a significant effect on the whiteness of the teeth in group one. Lee and his associates concluded that the application of microplasma along with hydrogen peroxide is an efficient method in the bleaching of stained teeth due to its ability to remove proteins on the surface of teeth and the increased production of hydroxide. Wound care Microplasma that is sustained near room temperature can destroy bacteria, viruses, and fungi deposited on the surfaces of surgical instruments and medical devices. Researchers discovered that bacteria cannot survive in the harsh environment created by microplasmas. They consist of chemically reactive species such as hydroxyl (OH) and atomic oxygen (O) that can kill harmful bacteria through oxidation. Oxidation of the lipids and proteins that compose a cell's membrane can lead to the breakdown of the membrane and deactivate the bacteria. Microplasma can contact skin without harming it, making it ideal for disinfecting wounds. “Medical plasmas are said to be in the ‘Goldilocks’ range—hot enough to produce and be an effective treatment, but cold enough to leave tissues unharmed” (Larousi, Kong 1). Researchers have found that microplasmas can be applied directly to living tissues to deactivate pathogens. Scientists have also discovered that microplasmas stop bleeding without damaging healthy tissue, disinfect wounds, accelerate wound healing, and selectively kill some types of cancer cells. At moderate doses, microplasmas can destroy pathogens. At low doses, they can accelerate the replication of cells—an important step in the wound healing process. The ability of microplasma to kill bacteria cells and accelerate the replication of healthy tissue cells is known as the “plasma kill/plasma heal” process, this led scientists to further experiment with the use of microplasmas for wound care. Preliminary tests have also demonstrated successful treatments of some types of chronic wounds. Cancer treatments Since microplasmas deactivate bacteria they may have the ability to destroy cancer cells. Jean Michel Pouvesle has been working at the University of Orléans in France, in the Group for Research and Studies on Mediators of Inflammation (GREMI), experimenting with the effects of microplasma on cancer cells. Pouvesle along with other scientists has created a dielectric barrier discharge and plasma gun for cancer treatment, in which microplasma will be applied to both in vitro and in vivo experiments. This application will reveal the role of ROS (Reactive Oxygen Species), DNA damage, cell cycle modification, and apoptosis induction. Studies show that microplasma treatments are able to induce programmed death (apoptosis) among cancer cells—stopping the rapid reproduction of cancerous cells, with little damage to living human tissues. GREMI performs many experiments with microplasmas in cancerology, their first experiment applies microplasma to mice tumors growing beneath the skin's surface. During this experiment, scientists found no changes or burns on the surface of the skin. 
After a five-day microplasma treatment, the results displayed a significant decrease in the growth of U87 glioma cancer (brain tumor), compared to the control group where microplasma was not applied. GREMI performed further in vitro studies regarding U87 gliomal cancer (brain tumors) and HCT116 (colon tumor) cell lines where microplasma was applied. This microplasma treatment was proven to be an efficient method in destroying cancer cells after being applied over periods of a few tens of seconds. Further studies are being conducted on the effects of microplasma treatment in oncology; this application of microplasma will impact the medical field significantly. References External links The Center for Microplasma Science and Technology (CMST) Atmospheric microwave microplasma sources at Ferdinand-Braun-Institut (FBH) Laboratory for Optical Physics and Engineering (LOPE) The Group for Research and Studies on Mediators in Inflammation Powerpoint on plasma medicine Another powerpoint on plasma medicine The Laser and Plasma Engineering Institute at Old Dominion University The AJ Drexel Plasma Institute Plasma types
Microplasma
[ "Physics" ]
5,832
[ "Plasma types", "Plasma physics" ]
18,450,034
https://en.wikipedia.org/wiki/Multicritical%20point
Multicritical points are special points in the parameter space of thermodynamic or other systems with a continuous phase transition. At least two thermodynamic or other parameters must be adjusted to reach a multicritical point. At a multicritical point the system belongs to a universality class different from the "normal" universality class. A more detailed definition requires concepts from the theory of critical phenomena. Definition The union of all the points of the parameter space for which the system is critical is called a critical manifold. As an example consider a substance ferromagnetic below a transition temperature , and paramagnetic above . The parameter space here is the temperature axis, and the critical manifold consists of the point . Now add hydrostatic pressure to the parameter space. Under hydrostatic pressure the substance normally still becomes ferromagnetic below a temperature (). This leads to a critical curve in the () plane - a -dimensional critical manifold. Also taking into account shear stress as a thermodynamic parameter leads to a critical surface () in the () parameter space - a -dimensional critical manifold. Critical manifolds of dimension and may have physically reachable borders of dimension which in turn may have borders of dimension . The system still is critical at these borders. However, criticality terminates for good reason, and the points on the borders normally belong to another universality class than the universality class realized within the critical manifold. All the points on the border of a critical manifold are multicritical points. Instead of terminating somewhere critical manifolds also may branch or intersect. The points on the intersections or branch lines also are multicritical points. At least two parameters must be adjusted to reach a multicritical point. A -dimensional critical manifold may have two -dimensional borders intersecting at a point. Two parameters must be adjusted to reach such a border, three parameters must be adjusted to reach the intersection of the two borders. A system of this type represents up to four universality classes: one within the critical manifold, two on the borders and one on the intersection of the borders. The gas-liquid critical point is not multicritical, because the phase transition at the vapour pressure curve () is discontinuous and the critical manifold thus consists of a single point. Examples Tricritical Point and Multicritical Points of Higher Order To reach a tricritical point the parameters must be tuned in such a way that the renormalized counterpart of the -term of the Hamiltonian vanishes. A well-known experimental realization is found in the mixture of Helium-3 and Helium-4. Lifshitz Point To reach a Lifshitz point the parameters must be tuned in such a way that the renormalized counterpart of the -term of the Hamiltonian vanishes. Consequently, at the Lifshitz point phases of uniform and modulated order meet the disordered phase. An experimental example is the magnet MnP. A Lifshitz point is realized in a prototypical way in the ANNNI model. The Lifshitz point has been introduced by R.M. Hornreich, S. Shtrikman and M. Luban in 1975, honoring the research of Evgeny Lifshitz. Lifshitz Tricritical Point This multicritical point is simultaneously tricritical and Lifshitz. Three parameters must be adjusted to reach a Lifshitz tricritical point. Such a point has been discussed to occur in non-stoichiometric ferroelectrics. 
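The conditions quoted above, in which "the renormalized counterpart of the … term of the Hamiltonian vanishes", can be made concrete with a schematic Landau–Ginzburg free-energy functional. The form below is a standard textbook expression written here as a hedged illustration, not a reproduction of this article's own (missing) formulas.

```latex
F[\phi] \;=\; \int \! d^{d}x \,\Big[
    \tfrac{r}{2}\,\phi^{2} \;+\; \tfrac{u}{4}\,\phi^{4} \;+\; \tfrac{v}{6}\,\phi^{6}
    \;+\; \tfrac{c}{2}\,(\nabla\phi)^{2} \;+\; \tfrac{e}{2}\,(\nabla^{2}\phi)^{2}
\Big]
% ordinary critical point:     r = 0            (with u > 0)
% tricritical point:           r = u = 0        (stabilised by the \phi^6 term)
% Lifshitz point:              r = c = 0        (stabilised by the (\nabla^2\phi)^2 term;
%                              uniform, modulated and disordered phases meet)
% Lifshitz tricritical point:  r = u = c = 0    (three parameters must be tuned)
```

Counting the parameters that must vanish simultaneously recovers the statement in the text that at least two parameters must be adjusted to reach a multicritical point, and three for a Lifshitz tricritical point.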
Lee-Yang edge singularity The critical manifold of an Ising model with zero external magnetic field consists of the point at the critical temperature on the temperature axis . In a purely imaginary external magnetic field this critical manifold ramifies into the two branches of the Lee-Yang type, belonging to a different universality class. The Ising critical point plays the role of a multicritical point in this situation (there are no imaginary magnetic fields, but there are equivalent physical situations). References Renormalization group
Multicritical point
[ "Physics", "Materials_science", "Mathematics" ]
831
[ "Physical phenomena", "Critical phenomena", "Renormalization group", "Condensed matter physics", "Statistical mechanics", "Dynamical systems" ]
20,643,804
https://en.wikipedia.org/wiki/Farrell%E2%80%93Jones%20conjecture
In mathematics, the Farrell–Jones conjecture, named after F. Thomas Farrell and Lowell E. Jones, states that certain assembly maps are isomorphisms. These maps are given as certain homomorphisms. The motivation is the interest in the target of the assembly maps; this may be, for instance, the algebraic K-theory of a group ring or the L-theory of a group ring , where G is some group. The sources of the assembly maps are equivariant homology theory evaluated on the classifying space of G with respect to the family of virtually cyclic subgroups of G. So assuming the Farrell–Jones conjecture is true, it is possible to restrict computations to virtually cyclic subgroups to get information on complicated objects such as or . The Baum–Connes conjecture formulates a similar statement, for the topological K-theory of reduced group -algebras . Formulation One can find for any ring equivariant homology theories satisfying respectively Here denotes the group ring. The K-theoretic Farrell–Jones conjecture for a group G states that the map induces an isomorphism on homology Here denotes the classifying space of the group G with respect to the family of virtually cyclic subgroups, i.e. a G-CW-complex whose isotropy groups are virtually cyclic and for any virtually cyclic subgroup of G the fixed point set is contractible. The L-theoretic Farrell–Jones conjecture is analogous. Computational aspects The computation of the algebraic K-groups and the L-groups of a group ring is motivated by obstructions living in those groups (see for example Wall's finiteness obstruction, surgery obstruction, Whitehead torsion). So suppose a group satisfies the Farrell–Jones conjecture for algebraic K-theory. Suppose furthermore we have already found a model for the classifying space for virtually cyclic subgroups: Choose -pushouts and apply the Mayer-Vietoris sequence to them: This sequence simplifies to: This means that if any group satisfies a certain isomorphism conjecture one can compute its algebraic K-theory (L-theory) only by knowing the algebraic K-Theory (L-Theory) of virtually cyclic groups and by knowing a suitable model for . Why the family of virtually cyclic subgroups ? One might also try to take for example the family of finite subgroups into account. This family is much easier to handle. Consider the infinite cyclic group . A model for is given by the real line , on which acts freely by translations. Using the properties of equivariant K-theory we get The Bass-Heller-Swan decomposition gives Indeed one checks that the assembly map is given by the canonical inclusion. So it is an isomorphism if and only if , which is the case if is a regular ring. So in this case one can really use the family of finite subgroups. On the other hand this shows that the isomorphism conjecture for algebraic K-Theory and the family of finite subgroups is not true. One has to extend the conjecture to a larger family of subgroups which contains all the counterexamples. Currently no counterexamples for the Farrell–Jones conjecture are known. If there is a counterexample, one has to enlarge the family of subgroups to a larger family which contains that counterexample. 
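Several displayed formulas in the section above appear to have been lost in extraction. The block below gives one standard way of writing the statements being described; it is offered as a hedged reconstruction rather than a verbatim restoration of the article's own notation.

```latex
% K-theoretic Farrell-Jones conjecture: the assembly map
\operatorname{asmb}\colon\;
H_{n}^{G}\bigl(E_{\mathcal{VC}yc}(G);\,\mathbf{K}_{R}\bigr)
\;\longrightarrow\;
H_{n}^{G}\bigl(\mathrm{pt};\,\mathbf{K}_{R}\bigr)\;\cong\;K_{n}(RG)
% is an isomorphism for all n; the L-theoretic statement is analogous with L in place of K.

% Example discussed in the text: G = \mathbb{Z}, family of finite subgroups,
% with E_{\mathcal{F}in}(\mathbb{Z}) = \mathbb{R} acted on freely by translations:
H_{n}^{\mathbb{Z}}\bigl(\mathbb{R};\,\mathbf{K}_{R}\bigr)\;\cong\;K_{n}(R)\oplus K_{n-1}(R),
\qquad
K_{n}\bigl(R[\mathbb{Z}]\bigr)\;\cong\;K_{n}(R)\oplus K_{n-1}(R)\oplus NK_{n}(R)\oplus NK_{n}(R)
% (Bass-Heller-Swan). The assembly map is the canonical inclusion, so it is an
% isomorphism iff the Nil terms vanish, e.g. when R is a regular ring.
```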
Inheritances of isomorphism conjectures The class of groups which satisfies the fibered Farrell–Jones conjecture contain the following groups virtually cyclic groups (definition) hyperbolic groups (see ) CAT(0)-groups (see ) solvable groups (see ) mapping class groups (see ) Furthermore the class has the following inheritance properties: Closed under finite products of groups. Closed under taking subgroups. Meta-conjecture and fibered isomorphism conjectures Fix an equivariant homology theory . One could say, that a group G satisfies the isomorphism conjecture for a family of subgroups, if and only if the map induced by the projection induces an isomorphism on homology: The group G satisfies the fibered isomorphism conjecture for the family of subgroups F if and only if for any group homomorphism the group H satisfies the isomorphism conjecture for the family . One gets immediately that in this situation also satisfies the fibered isomorphism conjecture for the family . Transitivity principle The transitivity principle is a tool to change the family of subgroups to consider. Given two families of subgroups of . Suppose every group satisfies the (fibered) isomorphism conjecture with respect to the family . Then the group satisfies the fibered isomorphism conjecture with respect to the family if and only if it satisfies the (fibered) isomorphism conjecture with respect to the family . Isomorphism conjectures and group homomorphisms Given any group homomorphism and suppose that G"' satisfies the fibered isomorphism conjecture for a family F of subgroups. Then also H"' satisfies the fibered isomorphism conjecture for the family . For example if has finite kernel the family agrees with the family of virtually cyclic subgroups of H. For suitable one can use the transitivity principle to reduce the family again. Connections to other conjectures Novikov conjecture There are also connections from the Farrell–Jones conjecture to the Novikov conjecture. It is known that if one of the following maps is rationally injective, then the Novikov-conjecture holds for . See for example,. Bost conjecture The Bost conjecture (named for Jean-Benoît Bost) states that the assembly map is an isomorphism. The ring homomorphism induces maps in K-theory . Composing the upper assembly map with this homomorphism one gets exactly the assembly map occurring in the Baum–Connes conjecture. Kaplansky conjecture The Kaplansky conjecture predicts that for an integral domain and a torsion-free group the only idempotents in are . Each such idempotent gives a projective module by taking the image of the right multiplication with . Hence there seems to be a connection between the Kaplansky conjecture and the vanishing of . There are theorems relating the Kaplansky conjecture to the Farrell–Jones conjecture (compare ). References Surgery theory K-theory Conjectures Unsolved problems in mathematics
Farrell–Jones conjecture
[ "Mathematics" ]
1,283
[ "Unsolved problems in mathematics", "Mathematical problems", "Conjectures" ]
20,644,107
https://en.wikipedia.org/wiki/International%20Award%20of%20Merit%20in%20Structural%20Engineering
The International Award of Merit in Structural Engineering is presented by the International Association for Bridge and Structural Engineering to people for outstanding contributions in the field of structural engineering, with special reference to usefulness for society. Fields of endeavour may include: planning, design, construction, materials, equipment, education, research, government, management. The first Award was presented in 1976.

Awardees

Source: IABSE

2020: Ahsan Kareem, USA
2019: Niels Jørgen Gimsing, Denmark
2018: Tristram Carfrae, UK
2017: Juan José Arenas, Spain
2016: no award
2015: Jose Calavera, Spain
2014: William F. Baker, USA
2013: Theodossios Tassios, Greece
2012: Hai-Fan Xiang, China
2011: Leslie E. Robertson, USA
2010: Man-Chung Tang, USA
2009: Christian Menn, Switzerland
2008: Tom Paulay, New Zealand
2007: Manabu Ito, Japan and Spain
2006: Javier Manterola, Spain
2005: Jean-Marie Cremer, Belgium
2004: Chander Alimchandani, India
2003: Michel Virlogeux, France
2002: Ian Liddell, UK
2001: John W. Fisher, USA
2000: John E. Breen, USA
1998: Peter Head, UK
1997: Bruno Thürlimann, Switzerland
1996: Alan G. Davenport, Canada
1995: Mamoru Kawaguchi, Japan
1994: T. N. Subbarao, India
1993: Jean Muller, France
1992: Leo Finzi, Italy
1991: Jörg Schlaich, Germany

See also List of engineering awards
References
External links IABSE webpage
Structural engineering awards International awards
International Award of Merit in Structural Engineering
[ "Technology", "Engineering" ]
344
[ "Science and technology awards", "Structural engineering", "International science and technology awards", "Structural engineering awards" ]
20,644,699
https://en.wikipedia.org/wiki/Low%20plasticity%20burnishing
Low plasticity burnishing (LPB) cold compresses metal to provide deep, stable surface residual stresses to improve damage tolerance and extend metal fatigue life; mitigating surface damage, including fretting, corrosion pitting, stress corrosion cracking (SCC), and foreign object damage (FOD). Improved fretting fatigue and stress corrosion performance has been documented, even at elevated temperatures where the compression from other metal improvement processes: low stress grinding (LSG) etc. relax. The resulting deep layer of compressive residual stress has also been shown to improve high cycle fatigue (HCF), low cycle fatigue (LCF), and stress corrosion cracking (SCC) performance. LPB is the only known metal improvement method applied under continuous closed-loop process control and has been successfully applied to turbine engines, piston engines, propellers, aging aircraft structures, landing gear, nuclear waste material containers, biomedical implants, armaments, fitness equipment and welded joints. Typical applications involve titanium, iron, nickel and steel-based components which showed improved damage tolerance as well as HCF and LCF performance by an order of magnitude over existing metal improvement processes. History LPB, unlike traditional burnishing tools, consists of a hard wheel or fixed lubricated ball pressed into the surface of an asymmetrical work piece with sufficient force to deform the surface layers, usually in a lathe. Multiple passes are made over the work piece, usually under increasing load, to improve surface finish and deliberately cold work the surface. Roller and ball burnishing have been studied in Russia and Japan, and were applied most extensively in the USSR in the 1970’s and Eastern Europe. LPB was further developed and patented by Lambda Technologies in Cincinnati, Ohio in 1996. How it works The basic LPB tool is a ball, wheel or other similar tip supported in a spherical hydrostatic bearing held in a CNC machine or industrial robot, depending on the application. Continuous coolant flow pressurizes the LPB tool bearing to support the ball. The ball does not contact the mechanical bearing seat, even under load. The ball is loaded at a normal state to the surface of a component with a hydraulic cylinder that is in the body of the tool. LPB can be performed in conjunction with chip forming machining operations in the same CNC machining tool. The ball rolls across the surface of a component in a pattern defined in the CNC code, as in any machining operation. The tool path and normal pressure applied are designed to create a distribution of compressive residual stress. The form of the distribution is designed to counter applied stresses and optimize fatigue and stress corrosion performance. Since there is no shear being applied to the ball, it is free to roll in any direction. As the ball rolls over the component, the pressure from the ball causes plastic deformation to occur in the surface of the material under the ball. Since the bulk of the material constrains the deformed area, the deformed zone is left in compression after the ball passes. Benefits With this practice of customization along with the closed-loop process control system, LPB has been shown to produce a maximum compression of 12mm, although the average is around 1-7+mm. 
LPB has also been shown to produce through-thickness compression in blades and vanes, increasing their damage tolerance more than 10-fold, effectively mitigating most FOD and reducing inspection requirements. No material is removed during the process, even when correcting corrosion damage. LPB smooths surface asperities during machining, leaving an almost mirror-like surface finish that is smoother and better protected than that of a newly manufactured component. Cold working The amount of cold work produced by this process is typically minimal; it is similar to the cold work produced by laser peening, but a great deal less than that from shot peening, gravity peening or deep rolling. Cold work is particularly important because the more highly cold worked the surface of a component is, the more vulnerable that component will be to elevated temperatures and mechanical overload, and the more readily the beneficial surface residual compression will relax, rendering the treatment pointless. In other words, a highly cold worked component will not hold compression if it is exposed to extreme heat, such as in an engine, and will then be just as vulnerable to damage as if it had never been treated. Therefore, LPB and laser peening stand out in the surface enhancement industry because the compression each produces is thermally stable at high temperatures. LPB produces such low percentages of cold work because of the aforementioned closed-loop process control. Conventional shot peening processes involve some guesswork about complete component coverage and are inexact, so the procedure is performed multiple times on one component to ensure adequate coverage and cold work. For example, to make sure every spot on the component is treated, shot peening typically specifies coverage of between 200% (2T) and 400% (4T). This means that at 200% coverage (2T), 5 or more impacts occur at 84% of locations, and at 400% coverage (4T) the figure is significantly higher (a simple coverage model is sketched after the references below). One problem is that one area may be hit several times while the area next to it is hit fewer times, leaving uneven compression at the surface and making the whole treatment less stable and more easily "undone", as mentioned above. LPB requires only one pass with the tool and leaves a deep, even, stable compressive stress. The LPB process can be performed on-site in the shop or in situ using robots, making it easy to incorporate into everyday maintenance and manufacturing procedures. The method is applied under continuous closed-loop process control (CLPC), achieving accuracy within 0.1% and alerting the operator and QA immediately if the processing bounds are exceeded. One issue with the process is that different CNC processing codes need to be developed for each application, just as for other machining tasks. Another potential issue is that, because of dimensional restrictions, it may not be possible to create the tools necessary to work on certain geometries, although this has yet to become a problem in practice. See also Corrosion fatigue Damage tolerance FOD Fretting High Frequency Impact Treatment aftertreatment of weld transitions Laser peening Metal fatigue Peening Residual stress Shot peening Stress corrosion cracking Ultrasonic impact treatment References Beres, W. "Ch. 5- FOD/HCF Resistant Surface Treatments". Nato/Otan. Retrieved 11 December 2008 from ftp://ftp.rta.nato.int/PubFullText/RTO/TR/RTO-TR-AVT-094/TR-AVT-094-05.pdf. This contains an excellent comparison of several surface treatments. Exactech. "Low Plasticity Burnishing." 
Retrieved 11 December 2008 from http://www.exac.com/products/hip/emerging-technologies/low-plasticity-burnishing. Giummara, C., Zonker, H. "Improving the Fatigue Response of Aerospace Structural Joints." Alcoa Inc., Alcoa Technical Center, Pittsburgh, PA. Presented at ICAF 2005 Proceedings in Hamburg, Germany. Jayaraman, N., Prevey, P. "Case Studies of Mitigation of FOD, Fretting Fatigue, Corrosion Fatigue and SCC Damage by Low Plasticity Burnishing in Aircraft Structural Alloys." Presented for the USAF Structural Integrity Program. Memphis, TN. 2005. Lambda Technologies. “LPB Application Note: Aging Aircraft.” Retrieved 20 October 2008 from http://www.lambdatechs.com/html/documents/Aa_pp.pdf. Migala, T., Jacobs, T. "Low Plasticity Burnishing: An Affordable, Effective Means Of Surface Enhancement." Retrieved 11 December 2008 from http://www.surfaceenhancement.com/techpapers/729.pdf. NASA. “Improved Method Being Developed for Surface Enhancement of Metallic Materials.” Retrieved 29 October 2008 from . NASA: John Glenn Research Center. "Fatigue life and resistance to damage are increased at relatively low cost." Retrieved 11 December 2008 from http://www.techbriefs.com/index.php?option=com_staticxt&staticfile=Briefs/Aug02/LEW17188.html. Prevey, P., Ravindranath, R., Shepard, M., Gabb, T. "Case Studies of Fatigue Life Improvement Using Low Plasticity Burnishing in Gas Turbine Engine Applications." Presented June 2003 at the ASME Turbo Expo. Atlanta, GA. Corrosion Metalworking
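The shot-peening coverage figures quoted in the cold working discussion above (for example, five or more impacts at 84% of locations for 200% coverage) are the kind of statistics commonly described with a Poisson model of randomly placed impacts. The Python sketch below is only an illustration of such a model; the assumption that 100% (1T) coverage means 98% of the surface has been hit at least once is a common convention adopted here for the example, not a figure taken from the sources cited above, so the exact percentages it prints differ slightly from those in the text.

```python
import math

def prob_at_least(k, mean_impacts):
    """P(a given point receives at least k impacts) for a Poisson distribution."""
    return 1.0 - sum(
        math.exp(-mean_impacts) * mean_impacts ** i / math.factorial(i)
        for i in range(k)
    )

# Mean impacts per point when 98% of the surface has been hit at least once (1T):
lam_1t = -math.log(1.0 - 0.98)   # roughly 3.9 impacts per point

for multiple, label in [(2, "200% (2T)"), (4, "400% (4T)")]:
    lam = multiple * lam_1t
    print(f"{label}: P(at least 5 impacts) = {prob_at_least(5, lam):.2f}")
```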
Low plasticity burnishing
[ "Chemistry", "Materials_science" ]
1,781
[ "Materials degradation", "Electrochemistry", "Metallurgy", "Corrosion" ]
20,645,003
https://en.wikipedia.org/wiki/Allylmagnesium%20bromide
Allylmagnesium bromide is a Grignard reagent used for introducing the allyl group. It is commonly available as a solution in diethyl ether. It may be synthesized by treatment of magnesium with allyl bromide while maintaining the reaction temperature below 0 °C to suppress the formation of 1,5-hexadiene by coupling of the allyl groups. Allyl chloride can also be used in place of the bromide to give allylmagnesium chloride. These reagents are used to prepare metal allyl complexes. References Further reading Organomagnesium compounds Allyl compounds Bromides
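A minimal sketch of the chemistry described above, written as LaTeX with the mhchem package: the Grignard formation and the competing allyl–allyl coupling that the low reaction temperature is meant to suppress. The equations are standard Grignard chemistry rather than details given in this article.

```latex
% Requires \usepackage[version=4]{mhchem}
% Grignard formation (the desired reaction, run below 0 degrees C in diethyl ether):
\[ \ce{CH2=CHCH2Br + Mg -> CH2=CHCH2MgBr} \]
% Competing Wurtz-type coupling, which the low temperature suppresses:
\[ \ce{CH2=CHCH2MgBr + CH2=CHCH2Br -> CH2=CHCH2CH2CH=CH2 + MgBr2} \]
```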
Allylmagnesium bromide
[ "Chemistry" ]
117
[ "Bromides", "Reagents for organic chemistry", "Organomagnesium compounds", "Salts" ]
20,646,034
https://en.wikipedia.org/wiki/Continuum%20%28measurement%29
Continuum (plural: continua or continuums) theories or models explain variation as involving gradual quantitative transitions without abrupt changes or discontinuities. In contrast, categorical theories or models explain variation using qualitatively different states. In physics In physics, for example, the space-time continuum model describes space and time as part of the same continuum rather than as separate entities. A spectrum in physics, such as the electromagnetic spectrum, is often described as either continuous (with energy at all wavelengths) or discrete (energy at only certain wavelengths). In contrast, quantum mechanics uses quanta, certain defined amounts (i.e. categorical amounts) which are distinguished from continuous amounts. In mathematics and philosophy A good introduction to the philosophical issues involved is John Lane Bell's essay in the Stanford Encyclopedia of Philosophy. A significant divide is provided by the law of excluded middle. It determines the divide between intuitionistic continua such as Brouwer's and Lawvere's, and classical ones such as Stevin's and Robinson's. Bell isolates two distinct historical conceptions of infinitesimal, one by Leibniz and one by Nieuwentijdt, and argues that Leibniz's conception was implemented in Robinson's hyperreal continuum, whereas Nieuwentijdt's was implemented in Lawvere's smooth infinitesimal analysis, characterized by the presence of nilsquare infinitesimals: "It may be said that Leibniz recognized the need for the first, but not the second type of infinitesimal and Nieuwentijdt, vice versa. It is of interest to note that Leibnizian infinitesimals (differentials) are realized in nonstandard analysis, and nilsquare infinitesimals in smooth infinitesimal analysis". In social sciences, psychology and psychiatry In social sciences in general, psychology and psychiatry included, data about differences between individuals, like any data, can be collected and measured using different levels of measurement. Those levels include dichotomous (a person either has a personality trait or not) and non-dichotomous approaches. While the non-dichotomous approach allows for understanding that everyone lies somewhere on a particular personality dimension, the dichotomous (nominal categorical and ordinal) approaches only seek to confirm that a particular person either has or does not have a particular mental disorder. Expert witnesses in particular are trained to help courts in translating the data into the legal (e.g. 'guilty' vs. 'not guilty') dichotomy, which applies to law, sociology and ethics. In linguistics In linguistics, the range of dialects spoken over a geographical area that differ slightly between neighboring areas is known as a dialect continuum. A language continuum is a similar description for the merging of neighboring languages without a clearly defined boundary. Examples of dialect or language continua include the varieties of Italian or German, the Romance languages, the Arabic varieties, and the Bantu languages. References External links Continuity and infinitesimals, John Bell, Stanford Encyclopedia of Philosophy Concepts in metaphysics Concepts in physics Concepts in the philosophy of science Mathematical concepts
Continuum (measurement)
[ "Physics", "Mathematics" ]
648
[ "nan" ]
20,646,064
https://en.wikipedia.org/wiki/Quantum
In physics, a quantum (plural: quanta) is the minimum amount of any physical entity (physical property) involved in an interaction. A quantum of electromagnetic radiation, for example, is a discrete quantity of energy proportional in magnitude to the frequency of the radiation it represents. The fundamental notion that a property can be "quantized" is referred to as "the hypothesis of quantization". This means that the magnitude of the physical property can take on only discrete values consisting of integer multiples of one quantum. For example, a photon is a single quantum of light of a specific frequency (or of any other form of electromagnetic radiation). Similarly, the energy of an electron bound within an atom is quantized and can exist only in certain discrete values. Atoms and matter in general are stable because electrons can exist only at discrete energy levels within an atom. Quantization is one of the foundations of the much broader physics of quantum mechanics. Quantization of energy and its influence on how energy and matter interact (quantum electrodynamics) is part of the fundamental framework for understanding and describing nature. Etymology and discovery The word quantum is the neuter singular of the Latin interrogative adjective quantus, meaning "how much". "Quanta", the neuter plural, short for "quanta of electricity" (electrons), was used in a 1902 article on the photoelectric effect by Philipp Lenard, who credited Hermann von Helmholtz for using the word in the area of electricity. However, the word quantum in general was well known before 1900, e.g. quantum was used in E. A. Poe's Loss of Breath. It was often used by physicians, such as in the term quantum satis, "the amount which is enough". Both Helmholtz and Julius von Mayer were physicians as well as physicists. Helmholtz used quantum with reference to heat in his article on Mayer's work, and the word quantum can be found in the formulation of the first law of thermodynamics by Mayer in his letter dated July 24, 1841. In 1901, Max Planck used quanta to mean "quanta of matter and electricity", gas, and heat. In 1905, in response to Planck's work and the experimental work of Lenard (who explained his results by using the term quanta of electricity), Albert Einstein suggested that radiation existed in spatially localized packets which he called "quanta of light" ("Lichtquanta"). The concept of quantization of radiation was discovered in 1900 by Max Planck, who had been trying to understand the emission of radiation from heated objects, known as black-body radiation. By assuming that energy can be absorbed or released only in tiny, differential, discrete packets (which he called "bundles", or "energy elements"), Planck accounted for certain objects changing color when heated. On December 14, 1900, Planck reported his findings to the German Physical Society, and introduced the idea of quantization for the first time as a part of his research on black-body radiation. As a result of his experiments, Planck deduced the numerical value of h, known as the Planck constant, and reported more precise values for the unit of electrical charge and the Avogadro–Loschmidt number, the number of real molecules in a mole, to the German Physical Society. After his theory was validated, Planck was awarded the Nobel Prize in Physics for his discovery in 1918. Quantization While quantization was first discovered in electromagnetic radiation, it describes a fundamental aspect of energy not just restricted to photons. 
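For illustration, the quantization idea sketched above can be summarised in two standard relations (added here as plain LaTeX; they are textbook formulas rather than equations quoted from this article):

```latex
% Energy of a single quantum (photon) of radiation with frequency \nu:
\[ E = h\nu \]
% Planck's quantization hypothesis: the energy of an oscillator of frequency \nu
% can take only integer multiples of one quantum,
\[ E_n = n\,h\nu, \qquad n = 1, 2, 3, \dots \]
% where h \approx 6.626 \times 10^{-34}\ \mathrm{J\,s} is the Planck constant.
```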
In the attempt to bring theory into agreement with experiment, Max Planck postulated that electromagnetic energy is absorbed or emitted in discrete packets, or quanta. See also Graviton Introduction to quantum mechanics Magnetic flux quantum Particle Elementary particle Subatomic particle Photon polarization Qubit Quantum cellular automata Quantum channel Quantum chromodynamics Quantum cognition Quantum coherence Quantum computer Quantum cryptography Quantum dot Quantum electronics Quantum entanglement Quantum fiction Quantum field theory Quantum lithography Quantum mechanics Quantum mind Quantum mysticism Quantum number Quantum optics Quantum sensor Quantum state Quantum suicide and immortality Quantum teleportation References Further reading M. Planck, A Survey of Physical Theory, transl. by R. Jones and D.H. Williams, Methuen & Co., Limited., London 1925 (Dover edition 17 May 2003, ISBN 978-0486678672) including the Nobel lecture. Rodney, Brooks (14 December 2010) Fields of Color: The theory that escaped Einstein. Allegra Print & Imaging. ISBN 979-8373308427 Quantum mechanics
Quantum
[ "Physics" ]
948
[ "Theoretical physics", "Quantum mechanics" ]
20,646,400
https://en.wikipedia.org/wiki/Strange%20matter
Strange matter (or strange quark matter) is quark matter containing strange quarks. In extreme environments, strange matter is hypothesized to occur in the core of neutron stars, or, more speculatively, as isolated droplets that may vary in size from femtometers (strangelets) to kilometers, as in the hypothetical strange stars. At high enough density, strange matter is expected to be color superconducting. Ordinary matter, also referred to as atomic matter, is composed of atoms, with nearly all matter concentrated in the atomic nuclei. Nuclear matter is a liquid composed of neutrons and protons, and they are themselves composed of up and down quarks. Quark matter is a condensed form of matter composed entirely of quarks. When quark matter does not contain strange quarks, it is sometimes referred to as non-strange quark matter. Context In particle physics and astrophysics, the term 'strange matter' is used in two different contexts, one broader and the other more specific and hypothetical: In the broader context, our current understanding of the laws of nature predicts that strange matter could be created when nuclear matter (made of protons and neutrons) is compressed beyond a critical density. At this critical pressure and density, the protons and neutrons dissociate into quarks, yielding quark matter and potentially strange matter. A more specific hypothesis is that quark matter is the true ground state of all matter, and thus more stable than ordinary nuclear matter. This idea is known as the "strange matter hypothesis", or the Bodmer–Witten assumption. Under this hypothesis, the nuclei of the atoms we see around us are only metastable, even when the external critical pressure is zero, and given enough time (or the right stimulus) the nuclei would decay into stable droplets of strange matter. Droplets of strange matter are also referred to as strangelets. Stability of strange matter only at high pressure In the general context, strange matter might occur inside neutron stars, if the gravitational compression at their core raises the pressure above the critical pressure. At the sort of densities and high pressures we expect in the center of a neutron star, the quark matter would probably be strange matter. It could conceivably be non-strange quark matter, if the effective mass of the strange quark were too high. Charm quarks and heavier quarks would only occur at much higher densities. Strange matter comes about as a way to relieve degeneracy pressure. The Pauli exclusion principle forbids fermions such as quarks from occupying the same position and energy level. When the particle density is high enough that all energy levels below the available thermal energy are already occupied, increasing the density further requires raising some particles to higher, unoccupied energy levels. This need for energy to cause compression manifests as a pressure. Neutrons consist of twice as many down quarks (charge −1/3 e) as up quarks (charge +2/3 e), so the degeneracy pressure of down quarks usually dominates in electrically neutral quark matter. However, when the required energy level is high enough, an alternative becomes available: half of the down quarks can be transmuted to strange quarks (charge −1/3 e). The higher rest mass of the strange quark costs some energy, but by opening up an additional set of energy levels, the average energy per particle can be lower, making strange matter more stable than non-strange quark matter. 
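As a short check of the charge bookkeeping in the paragraph above (the quark charges are standard values, written out here in LaTeX purely for illustration): converting half of the down quarks to strange quarks leaves electrically neutral matter neutral, because both flavours carry the same charge.

```latex
% Quark charges: q_u = +2/3 e, q_d = q_s = -1/3 e.
% Neutral u-d quark matter needs twice as many down quarks as up quarks:
\[ n_u\left(+\tfrac{2}{3}e\right) + 2\,n_u\left(-\tfrac{1}{3}e\right) = 0 \]
% Replacing half of the down quarks with strange quarks keeps the total charge zero:
\[ n_u\left(+\tfrac{2}{3}e\right) + n_u\left(-\tfrac{1}{3}e\right) + n_u\left(-\tfrac{1}{3}e\right) = 0 \]
```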
A neutron star with a quark matter core is often called a hybrid star. However, it is difficult to know whether hybrid stars really exist in nature because physicists currently have little idea of the likely value of the critical pressure or density. It seems plausible that the transition to quark matter will already have occurred when the separation between the nucleons becomes much smaller than their size, so the critical density must be less than about 100 times nuclear saturation density. But a more precise estimate is not yet available, because the strong interaction that governs the behavior of quarks is mathematically intractable, and numerical calculations using lattice QCD are currently blocked by the fermion sign problem. One major area of activity in neutron star physics is the attempt to find observable signatures by which we could tell whether neutron stars have quark matter (probably strange matter) in their core. During the merger of two neutron stars, strange matter may be ejected into the space around the stars, which may allow strange matter to be studied. However, the rate at which strange matter decays is unknown, and there are very few binary pairs of neutron stars near the Solar System, which could make a definitive discovery of strange matter very difficult. Stability of strange matter at zero pressure If the "strange matter hypothesis" is true, then nuclear matter is metastable against decaying into strange matter. The lifetime for spontaneous decay is very long, so we do not see this decay process happening around us. However, under this hypothesis there should be strange matter in the universe: Quark stars (often called "strange stars") consist of quark matter from their core to their surface. They would be several kilometers across, and may have a very thin crust of nuclear matter. Strangelets are small pieces of strange matter, perhaps as small as nuclei. They would be produced when strange stars are formed or collide, or when a nucleus decays. See also Strangeness and quark–gluon plasma – subatomic signature References Exotic matter Phases of matter Quark matter Unsolved problems in physics Strange quark
Strange matter
[ "Physics", "Chemistry" ]
1,166
[ "Quark matter", "Unsolved problems in physics", "Phases of matter", "Astrophysics", "Exotic matter", "Nuclear physics", "Matter" ]
20,646,679
https://en.wikipedia.org/wiki/Lightning%20rod
A lightning rod or lightning conductor (British English) is a metal rod mounted on a structure and intended to protect the structure from a lightning strike. If lightning hits the structure, it is most likely to strike the rod and be conducted to ground through a wire, rather than passing through the structure, where it could start a fire or even cause electrocution. Lightning rods are also called finials, air terminals, or strike termination devices. In a lightning protection system, a lightning rod is a single component of the system. The lightning rod requires a connection to the earth to perform its protective function. Lightning rods come in many different forms, including hollow, solid, pointed, rounded, flat strips, or even bristle brush-like. The main attribute common to all lightning rods is that they are all made of conductive materials, such as copper and aluminum. Copper and its alloys are the most common materials used in lightning protection. History The first proper lightning rod was invented by Father Prokop Diviš, a Czech priest and scientist, who erected a grounded lightning rod in 1754. Diviš's design involved a vertical iron rod topped with a grounded wire, intended to attract lightning strikes and safely conduct them to the ground. His experimental apparatus, known as the "weather machine” predated Benjamin Franklin's more widely recognized experiments. Franklin, unaware of Diviš's work, independently developed and popularized his own lightning rod design, which became widely adopted across Europe and North America. Franklin's contribution significantly advanced the understanding and application of lightning protection systems, although Diviš's earlier conceptual work remains an important milestone in the history of electrical safety engineering. British Empire In what later became the United States, the pointed lightning rod conductor (not grounded), also called a lightning attractor or Franklin rod, was invented by Benjamin Franklin in 1752 as part of his groundbreaking exploration of electricity. Although not the first to suggest a correlation between electricity and lightning, Franklin was the first to propose a workable system for testing his hypothesis. Franklin speculated that, with an iron rod sharpened to a point, "The electrical fire would, I think, be drawn out of a cloud silently, before it could come near enough to strike." Franklin speculated about lightning rods for several years before his reported kite experiment. In the 19th century, the lightning rod became a decorative motif. Lightning rods were embellished with ornamental glass balls (now prized by collectors). The ornamental appeal of these glass balls has been used in weather vanes. The main purpose of these balls, however, is to provide evidence of a lightning strike by shattering or falling off. If after a storm a ball is discovered missing or broken, the property owner should then check the building, rod, and grounding wire for damage. Balls of solid glass occasionally were used in a method purported to prevent lightning strikes to ships and other objects. The idea was that glass objects, being non-conductors, are seldom struck by lightning. Therefore, goes the theory, there must be something about glass that repels lightning. Hence the best method for preventing a lightning strike to a wooden ship was to bury a small solid glass ball in the tip of the highest mast. 
The random behavior of lightning combined with observers' confirmation bias ensured that the method gained a good bit of credence even after the development of the marine lightning rod soon after Franklin's initial work. The first lightning conductors on ships were supposed to be hoisted when lightning was anticipated, and had a low success rate. In 1820 William Snow Harris invented a successful system for fitting lightning protection to the wooden sailing ships of the day, but despite successful trials which began in 1830, the British Royal Navy did not adopt the system until 1842, by which time the Imperial Russian Navy had already adopted the system. In the 1990s, the 'lightning points' were replaced as originally constructed when the Statue of Freedom atop the United States Capitol building in Washington, D.C. was restored. The statue was designed with multiple devices that are tipped with platinum. The Washington Monument also was equipped with multiple lightning points, and the Statue of Liberty in New York Harbor gets hit by lightning, which is shunted to ground. Lightning protection system A lightning protection system is designed to protect a structure from damage due to lightning strikes by intercepting such strikes and safely passing their extremely high currents to ground. A lightning protection system includes a network of air terminals, bonding conductors, and ground electrodes designed to provide a low impedance path to ground for potential strikes. Lightning protection systems are used to prevent lightning strike damage to structures. Lightning protection systems mitigate the fire hazard which lightning strikes pose to structures. A lightning protection system provides a low-impedance path for the lightning current to lessen the heating effect of current flowing through flammable structural materials. If lightning travels through porous and water-saturated materials, these materials may explode if their water content is flashed to steam by heat produced from the high current. This is why trees are often shattered by lightning strikes. Because of the high energy and current levels associated with lightning (currents can be in excess of 150,000 A), and the very rapid rise time of a lightning strike, no protection system can guarantee absolute safety from lightning. Lightning current will divide to follow every conductive path to ground, and even the divided current can cause damage. Secondary "side-flashes" can be enough to ignite a fire, blow apart brick, stone, or concrete, or injure occupants within a structure or building. However, the benefits of basic lightning protection systems have been evident for well over a century. Laboratory-scale measurements of the effects of [any lightning investigation research] do not scale to applications involving natural lightning. Field applications have mainly been derived from trial and error based on the best intended laboratory research of a highly complex and variable phenomenon. The parts of a lightning protection system are air terminals (lightning rods or strike termination devices), bonding conductors, ground terminals (ground or "earthing" rods, plates, or mesh), and all of the connectors and supports to complete the system. The air terminals are typically arranged at or along the upper points of a roof structure, and are electrically bonded together by bonding conductors (called "down conductors" or "downleads"), which are connected by the most direct route to one or more grounding or earthing terminals. 
Connections to the earth electrodes must not only have low resistance, but must have low self-inductance. An example of a structure vulnerable to lightning is a wooden barn. When lightning strikes the barn, the wooden structure and its contents may be ignited by the heat generated by lightning current conducted through parts of the structure. A basic lightning protection system would provide a conductive path between an air terminal and earth, so that most of the lightning's current will follow the path of the lightning protection system, with substantially less current traveling through flammable materials. Originally, scientists believed that such a lightning protection system of air terminals and "downleads" directed the current of the lightning down into the earth to be "dissipated". However, high speed photography has clearly demonstrated that lightning is actually composed of both a cloud component and an oppositely charged ground component. During "cloud-to-ground" lightning, these oppositely charged components usually "meet" somewhere in the atmosphere well above the earth to equalize previously unbalanced charges. The heat generated as this electric current flows through flammable materials is the hazard which lightning protection systems attempt to mitigate by providing a low-resistance path for the lightning circuit. No lightning protection system can be relied upon to "contain" or "control" lightning completely (nor thus far, to prevent lightning strikes entirely), but they do seem to help immensely on most occasions of lightning strikes. Steel framed structures can bond the structural members to earth to provide lightning protection. A metal flagpole with its foundation in the earth is its own extremely simple lightning protection system. However, the flag(s) flying from the pole during a lightning strike may be completely incinerated. The majority of lightning protection systems in use today are of the traditional Franklin design. The fundamental principle used in Franklin-type lightning protections systems is to provide a sufficiently low impedance path for the lightning to travel through to reach ground without damaging the building. This is accomplished by surrounding the building in a kind of Faraday cage. A system of lightning protection conductors and lightning rods are installed on the roof of the building to intercept any lightning before it strikes the building. Russia A lightning conductor may have been intentionally used in the Leaning Tower of Nevyansk. The spire of the tower is crowned with a metallic rod in the shape of a gilded sphere with spikes. This lightning rod is grounded through the rebar carcass, which pierces the entire building. The Nevyansk Tower was built between 1721 and 1745, on the orders of industrialist Akinfiy Demidov. The Nevyansk Tower was built 28 years before Benjamin Franklin's experiment and scientific explanation. However, the true intent behind the metal rooftop and rebars remains unknown. Europe The church tower of many European cities, which was usually the highest structure in the city, was likely to be hit by lightning. Peter Ahlwardts ("Reasonable and Theological Considerations about Thunder and Lightning", 1745) advised individuals seeking cover from lightning to go anywhere except in or around a church. 
There is an ongoing debate over whether a "meteorological machine", invented by the Premonstratensian priest Prokop Diviš and erected in Brenditz (now Přímětice, part of Znojmo), Moravia (now in the Czech Republic) in June 1754, counts as an independent invention of the lightning rod. Diviš's apparatus was, according to his private theories, aimed at preventing thunderstorms altogether by constantly depriving the air of its superfluous electricity. The apparatus was, however, mounted on a free-standing pole and probably better grounded than Franklin's lightning rods at that time, so it served the purpose of a lightning rod. After local protests, Diviš had to cease his weather experiments around 1760. Structure protectors Lightning arrester A lightning arrester is a device, essentially an air gap between an electric wire and ground, used on electric power systems and telecommunication systems to protect the insulation and conductors of the system from the damaging effects of lightning. The typical lightning arrester has a high-voltage terminal and a ground terminal. In telegraphy and telephony, a lightning arrester is a device placed where wires enter a structure, in order to prevent damage to electronic instruments within and to ensure the safety of individuals near the structure. Smaller versions of lightning arresters, also called surge protectors, are devices that are connected between each electrical conductor in a power or communications system and the ground. They help prevent the flow of the normal power or signal currents to ground, but provide a path over which high-voltage lightning current flows, bypassing the connected equipment. Arresters are used to limit the rise in voltage when a communications or power line is struck by lightning or is near a lightning strike. Protection of electric distribution systems In overhead electric transmission systems, one or two lighter ground wires, not themselves used to send electricity through the grid, may be mounted at the top of the pylons, poles, or towers. These conductors, often referred to as "static", "pilot" or "shield" wires, are designed to be the point of lightning termination instead of the high-voltage lines themselves. These conductors are intended to protect the primary power conductors from lightning strikes. These conductors are bonded to earth either through the metal structure of a pole or tower, or by additional ground electrodes installed at regular intervals along the line. As a general rule, overhead power lines with voltages below 50 kV do not have a "static" conductor, but most lines carrying more than 50 kV do. The ground conductor cable may also support fibre optic cables for data transmission. Older lines may use surge arresters which insulate the conducting lines from direct bonding with earth and which allow the shield wires to be used as low voltage communication lines. If the voltage exceeds a certain threshold, such as during a lightning termination to the conductor, it "jumps" the insulators and passes to earth. Protection of electrical substations is as varied as lightning rods themselves, and is often proprietary to the electric company. Lightning protection of mast radiators Radio mast radiators may be insulated from the ground by a spark gap at the base. When lightning hits the mast, it jumps this gap. A small inductance in the feed line between the mast and the tuning unit (usually one winding) limits the voltage increase, protecting the transmitter from dangerously high voltages. 
The transmitter must be equipped with a device to monitor the antenna's electrical properties. This is very important, as a charge could remain after a lightning strike, damaging the gap or the insulators. The monitoring device switches off the transmitter when the antenna shows incorrect behavior, e.g. as a result of undesired electrical charge. When the transmitter is switched off, these charges dissipate. The monitoring device makes several attempts to switch back on. If after several attempts the antenna continues to show improper behavior, possibly as result of structural damage, the transmitter remains switched off. Lightning conductors and grounding precautions Ideally, the underground part of the assembly should reside in an area of high ground conductivity. If the underground cable is able to resist corrosion well, it can be covered in salt to improve its electrical connection with the ground. While the electrical resistance of the lightning conductor between the air terminal and the Earth is of significant concern, the inductive reactance of the conductor could be more important. For this reason, the down conductor route is kept short, and any curves have a large radius. If these measures are not taken, lightning current may arc over a resistive or reactive obstruction that it encounters in the conductor. At the very least, the arc current will damage the lightning conductor and can easily find another conductive path, such as building wiring or plumbing, and cause fires or other disasters. Grounding systems without low resistivity to the ground can still be effective in protecting a structure from lightning damage. When ground soil has poor conductivity, is very shallow, or non-existent, a grounding system can be augmented by adding ground rods, counterpoise (ground ring) conductor, cable radials projecting away from the building, or a concrete building's reinforcing bars can be used for a ground conductor (Ufer ground). These additions, while still not reducing the resistance of the system in some instances, will allow the [dispersion] of the lightning into the earth without damage to the structure. Additional precautions must be taken to prevent side-flashes between conductive objects on or in the structure and the lightning protection system. The surge of lightning current through a lightning protection conductor will create a voltage difference between it and any conductive objects that are near it. This voltage difference can be large enough to cause a dangerous side-flash (spark) between the two that can cause significant damage, especially on structures housing flammable or explosive materials. The most effective way to prevent this potential damage is to ensure the electrical continuity between the lightning protection system and any objects susceptible to a side-flash. Effective bonding will allow the voltage potential of the two objects to rise and fall simultaneously, thereby eliminating any risk of a side-flash. Lightning protection system design Considerable material is used to make up lightning protection systems, so it is prudent to consider carefully where an air terminal will provide the greatest protection. Historical understanding of lightning, from statements made by Ben Franklin, assumed that each lightning rod protected a cone of 45 degrees. This has been found to be unsatisfactory for protecting taller structures, as it is possible for lightning to strike the side of a building. 
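The remark above that the inductive reactance of a down conductor can matter more than its resistance lends itself to a rough order-of-magnitude estimate. The Python sketch below is illustrative only: the per-metre inductance, per-metre resistance, and current rise rate are typical textbook orders of magnitude assumed for the example, not values stated in this article.

```python
# Rough order-of-magnitude estimate of the voltage developed along a down
# conductor by a fast-rising lightning current, V = L * di/dt.
# The numbers below are illustrative assumptions, not values from the article.
L_PER_METRE = 1e-6        # henries per metre of straight conductor (~1 uH/m)
DI_DT = 100e3 / 1e-6      # current rise: ~100 kA in 1 microsecond (A/s)
R_PER_METRE = 5e-4        # ohms per metre for a heavy copper conductor
I_PEAK = 100e3            # peak current, amperes

length = 10.0             # metres of down conductor
v_inductive = L_PER_METRE * length * DI_DT
v_resistive = R_PER_METRE * length * I_PEAK

print(f"Inductive voltage over {length:.0f} m: {v_inductive / 1e3:.0f} kV")
print(f"Resistive voltage over {length:.0f} m: {v_resistive / 1e3:.1f} kV")
# The inductive term dominates, which is why down conductors are kept short
# and bends are given a large radius (sharp bends add inductance).
```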
A modeling system based on a better understanding of the termination targeting of lightning, called the Rolling Sphere Method, was developed by Dr Tibor Horváth. It has become the standard by which traditional Franklin rod systems are installed. To understand this requires knowledge of how lightning 'moves'. As the step leader of a lightning bolt jumps toward the ground, it steps toward the grounded objects nearest its path. The maximum distance that each step may travel is called the critical distance and is proportional to the electric current. Objects are likely to be struck if they are nearer to the leader than this critical distance. It is standard practice to approximate the sphere's radius as 46 m near the ground. An object outside the critical distance is unlikely to be struck by the leader if there is a solidly grounded object within the critical distance. Locations that are considered safe from lightning can be determined by imagining a leader's potential paths as a sphere that travels from the cloud to the ground. For lightning protection, it suffices to consider all possible spheres as they touch potential strike points. To determine strike points, consider a sphere rolling over the terrain. At each point, a potential leader position is simulated. Lightning is most likely to strike where the sphere touches the ground. Points that the sphere cannot roll across and touch are safest from lightning. Lightning protectors should be placed where they will prevent the sphere from touching a structure. A weak point in most lightning diversion systems, though, is in transporting the captured discharge from the lightning rod to the ground. Lightning rods are typically installed around the perimeter of flat roofs, or along the peaks of sloped roofs, at intervals of 6.1 m or 7.6 m, depending on the height of the rod. When a flat roof has dimensions greater than 15 m by 15 m, additional air terminals will be installed in the middle of the roof at intervals of 15 m or less in a rectangular grid pattern. Rounded vis-à-vis pointed ends The optimal shape for the tip of a lightning rod has been controversial since the 18th century. During the period of political confrontation between Britain and its American colonies, British scientists maintained that a lightning rod should have a ball on its end, while American scientists maintained that there should be a point. The controversy has never been completely resolved. It is difficult to resolve because proper controlled experiments are nearly impossible, but work performed by Charles B. Moore et al. in 2000 shed some light on the issue, finding that moderately rounded or blunt-tipped lightning rods act as marginally better strike receptors. As a result, round-tipped rods are installed on most new systems in the United States, though most existing systems still have pointed rods. Additionally, the height of the lightning protector relative to the structure, and indeed to the Earth itself, has an effect. Charge transfer theory The charge transfer theory states that a lightning strike to a protected structure can be prevented by reducing the electrical potential between the protected structure and the thundercloud. This is done by transferring electric charge (such as from the nearby Earth to the sky or vice versa). Transferring electric charge from the Earth to the sky is done by installing engineered products composed of many points above the structure. 
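Returning to the Rolling Sphere Method described above, the protected ground radius around a single vertical rod follows from simple geometry; the formula below is a standard consequence of that construction (the 10 m rod height in the comment is only an illustrative assumption).

```latex
% Rolling-sphere geometry: a sphere of radius r rolled against a vertical rod of
% height h <= r touches the ground at a horizontal distance d from the rod, so
% points on the ground closer than d are shielded from the descending leader:
\[ d = \sqrt{r^{2} - (r - h)^{2}} = \sqrt{h\,(2r - h)} \]
% Example: with the r = 46 m radius quoted above and a 10 m rod,
% d = sqrt(10 * (92 - 10)) \approx 29 m.
```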
It is noted that pointed objects will indeed transfer charge to the surrounding atmosphere and that a considerable electric current can be measured through the conductors as ionization occurs at the point when an electric field is present, such as happens when thunderclouds are overhead. In the United States, the National Fire Protection Association (NFPA) does not currently endorse a device that can prevent or reduce lightning strikes. The NFPA Standards Council, following a request for a project to address Dissipation Array[tm] Systems and Charge Transfer Systems, denied the request to begin forming standards on such technology (though the Council did not foreclose on future standards development after reliable sources demonstrating the validity of the basic technology and science were submitted). Early streamer emission (ESE) theory The theory of early streamer emission proposes that if a lightning rod has a mechanism producing ionization near its tip, then its lightning capture area is greatly increased. At first, small quantities of radioactive isotopes (radium-226 or americium-241) were used as sources of ionization between 1930 and 1980, later replaced with various electrical and electronic devices. According to an early patent, since most lightning protectors' ground potentials are elevated, the path distance from the source to the elevated ground point will be shorter, creating a stronger field (measured in volts per unit distance) and that structure will be more prone to ionization and breakdown. AFNOR, the French national standardization body, issued a standard, NF C 17-102, covering this technology. The NFPA also investigated the subject and there was a proposal to issue a similar standard in the USA. Initially, an NFPA independent third party panel stated that "the [Early Streamer Emission] lightning protection technology appears to be technically sound" and that there was an "adequate theoretical basis for the [Early Streamer Emission] air terminal concept and design from a physical viewpoint".) The same panel also concluded that "the recommended [NFPA 781 standard] lightning protection system has never been scientifically or technically validated and the Franklin rod air terminals have not been validated in field tests under thunderstorm conditions". In response, the American Geophysical Union concluded that "[t]he Bryan Panel reviewed essentially none of the studies and literature on the effectiveness and scientific basis of traditional lightning protection systems and was erroneous in its conclusion that there was no basis for the Standard". AGU did not attempt to assess the effectiveness of any proposed modifications to traditional systems in its report. The NFPA withdrew its proposed draft edition of standard 781 due to a lack of evidence of increased effectiveness of Early Streamer Emission-based protection systems over conventional air terminals. Members of the Scientific Committee of the International Conference on Lightning Protection (ICLP) have issued a joint statement stating their opposition to Early Streamer Emission technology. ICLP maintained a web page with information related to ESE and related technologies until 2016. Still, the number of buildings and structures equipped with ESE lightning protection systems is growing as well as the number of manufacturers of ESE air terminals from Europe, Americas, Middle East, Russia, China, South Korea, ASEAN countries, and Australia. 
Analysis of strikes Lightning strikes to a metallic structure can vary from leaving no evidence (except, perhaps, a small pit in the metal) to the complete destruction of the structure. When there is no evidence, analyzing the strikes is difficult. This means that a strike on an uninstrumented structure must be visually confirmed, and the random behavior of lightning renders such observations difficult. Inventors have patented lightning rockets. Whilst controlled experiments might eventually become feasible, very good contemporaneous data is obtained via specialized radio receivers which record the characteristic electrical "signature" of lightning strikes. Through extremely accurate timing and triangulation techniques, lightning strikes can be located with great precision, such that strikes on specific objects can often be pinpointed with a high degree of confidence. The energy in a lightning strike is typically in the range of 1 to 10 billion joules. This energy is usually released in a small number of separate strokes, each with a duration of a few tens of microseconds (typically 30 to 50 microseconds), over a period of approximately a fifth of a second. The vast majority of the energy is dissipated as heat, light and sound in the atmosphere, with a minority conducted to ground (in both senses of "ground"). Aircraft protectors Aircraft are protected by devices mounted to the aircraft structure and by the design of internal systems. Lightning usually enters and exits an aircraft through the outer surface of its airframe or through static wicks. The lightning protection system provides safe conductive paths between the entry and exit points to prevent damage to electronic equipment and to protect flammable fuel or cargo from sparks. These paths are constructed of conductive materials. Electrical insulators are only effective in combination with a conductive path because blocked lightning can easily exceed the breakdown voltage of insulators. Composite materials are constructed with layers of wire mesh to make them sufficiently conductive, and structural joints are protected by making an electrical connection across the joint. Shielded cable and conductive enclosures provide the majority of protection to electronic systems. The lightning current emits a magnetic pulse which induces current through any loops formed by the cables. The current induced in the shield of a loop creates magnetic flux through the loop in the opposite direction. This decreases the total flux through the loop and the induced voltage around it. The lightning-conductive path and conductive shielding carry the majority of current. The remainder is bypassed around sensitive electronics using transient voltage suppressors, and blocked using electronic filters once the let-through voltage is low enough. Filters, like insulators, are only effective when lightning and surge currents are able to flow through an alternate path. Watercraft protectors A lightning protection installation on a watercraft comprises a lightning protector mounted on the top of a mast or superstructure, and a grounding conductor in contact with the water. Electrical conductors attach to the protector and run down to the conductor. For a vessel with a conducting (iron or steel) hull, the grounding conductor is the hull. For a vessel with a non-conducting hull, the grounding conductor may be retractable, attached to the hull, or attached to a centerboard. 
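The energy and timing figures quoted in the strike-analysis paragraph above imply very different instantaneous and averaged power levels; the short sketch below only does that arithmetic, using mid-range values from the text and an assumed (illustrative) number of return strokes.

```python
# Arithmetic on the figures quoted above: a strike of a few gigajoules delivered
# in a handful of ~40-microsecond strokes spread over roughly a fifth of a second.
energy_total = 5e9        # joules (middle of the 1-10 billion joule range quoted)
strokes = 4               # assumed number of return strokes (illustrative only)
stroke_duration = 40e-6   # seconds per stroke (the "few tens of microseconds")
flash_duration = 0.2      # seconds for the whole flash (about a fifth of a second)

power_during_strokes = energy_total / (strokes * stroke_duration)
power_over_flash = energy_total / flash_duration

print(f"Power while a stroke is flowing : {power_during_strokes:.2e} W")  # ~3e13 W
print(f"Power averaged over the flash   : {power_over_flash:.2e} W")      # ~2.5e10 W
```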
Risk assessment Some structures are inherently more or less at risk of being struck by lightning. The risk for a structure is a function of the size (area) of the structure, its height, and the number of lightning strikes per year per square mile for the region. For example, a small building will be less likely to be struck than a large one, and a building in an area with a high density of lightning strikes will be more likely to be struck than one in an area with a low density of lightning strikes. The National Fire Protection Association provides a risk assessment worksheet in its lightning protection standard. The International Electrotechnical Commission (IEC) lightning risk assessment comprises four parts: loss of living beings, loss of service to the public, loss of cultural heritage, and loss of economic value. Loss of living beings is rated as the most important and is the only loss taken into consideration for many nonessential industrial and commercial applications. Standards The introduction of lightning protection systems into standards allowed various manufacturers to develop protection systems to a multitude of specifications. There are multiple international, national, corporate and military lightning protection standards. NFPA-780: "Standard for the Installation of Lightning Protection Systems" (2014) M440.1-1, Electrical Storms and Lightning Protection, Department of Energy AFI 32-1065 – Grounding Systems, U. S. Air Force Space Command FAA STD 019e, Lightning and Surge Protection, Grounding, Bonding and Shielding Requirements for Facilities and Electronic Equipment UL standards for lightning protection UL 96: "Standard of Lightning Protection Components" (5th Edition, 2005) UL 96A: "Standard for Installation Requirements for Lightning Protection Systems" (Twelfth Edition, 2007) UL 1449: "Standard for Surge Protective Devices" (Fourth Edition, 2014) IEC standards EN 61000-4-5/IEC 61000-4-5: "Electromagnetic compatibility (EMC) – Part 4-5: Testing and measurement techniques – Surge immunity test" EN 62305/IEC 62305: "Protection against lightning" EN 62561/IEC 62561: "Lightning Protection System Components (LPSC)" ITU-T K Series recommendations: "Protection against interference" IEEE standards for grounding IEEE SA-142-2007: "IEEE Recommended Practice for Grounding of Industrial and Commercial Power Systems". (2007) IEEE SA-1100-2005: "IEEE Recommended Practice for Powering and Grounding Electronic Equipment" (2005) AFNOR NF C 17-102 : "Lightning protection – Protection of structures and open areas against lightning using early streamer emission air terminals" (1995) GB 50057-2010 Design Code for Lightning Protection of Buildings AS / NZS 1768:2007: "Lightning protection" See also Apollo 12 – A Saturn V rocket that was struck by lightning shortly after liftoff. James Otis Jr. – Contemporary of Ben Franklin, killed at doorway by lightning in Andover, Massachusetts on May 23, 1783. Ground (electricity) Grounding kit Lightning Lightning rod fashion The Autobiography of Benjamin Franklin Václav Prokop Diviš (1698–1765) – Constructor of the first grounded lightning rod, in Přímětice u Znojma during 1750–1754. References Citations Sources Vladimir A. Rakov and Martin A. Uman, Lightning: physics and effects. Cambridge University Press, 2003. 698 pages. J. L. Bryan, R. G. Biermann and G. A. Erickson, "Report of the Third-Party Independent Evaluation Panel on the Early Streamer Emission Lightning Protection Technology". National Fire Protection Association, Quincy, Mass., 1999. Kithil, Rich. 
"More on lightning rods...", Lightning Safety Home Page, Message #402. May 8, 2000. (Response to C. B. Moore) Originally at: https://portishead-plumbing.co.uk/ Carpenter Jr., Roy B. "Preventing Direct Strikes ". External links "Researchers find that blunt lightning rods work best". USA Today, June 10, 2002. Federal Aviation Administration, "FAA-STD-019d, Lightning and surge protection, grounding, bonding and shielding requirements for facilities and electronic equipment ". National Transportation Library, August 9, 2002. Kithil, Richard, "Lightning Rods: Recent Investigations ". National Lightning Safety Institute, September 26, 2005. Kithil, Richard, "Should Lightning Rods be Installed? ". National Lightning Safety Institute, September 26, 2005. Kithil, Richard, "Fundamentals of Lightning Protection ". National Lightning Safety Institute, September 26, 2005. Nailen, Richard L., "Lightning controversy goes on", The Electrical Apparatus, February 2001. Lightning Safety Alliance education page John Scoffern, Orr's Circle of the Sciences, Atmospheric Electricity—Theory of Lightning-rods W. S. Orr 1855. February 1919 Popular Science article about Lightning Arresters and how they were used in early AC and DC power distribution systems, "Electrical Devices and How They Work, Part 14: Lightning Arresters", Popular Science monthly, February 1919, 5 unnumbered pages, Scanned by Google Books: https://books.google.com/books?id=7igDAAAAMBAJ&pg=PT17 "Do lightning rods really work?", The Straight Dope, August 24, 2001 Scientific American, "Protection From Lightning", 06-Aug-1881, pp.88 Lightning Electrical safety Roofs Inventions by Benjamin Franklin Czech inventions Safety equipment Electric arcs Electrodes
Lightning rod
[ "Physics", "Chemistry", "Technology", "Engineering" ]
6,321
[ "Structural engineering", "Electric arcs", "Physical phenomena", "Plasma phenomena", "Electrodes", "Structural system", "Electrochemistry", "Electrical phenomena", "Lightning", "Roofs" ]
20,646,704
https://en.wikipedia.org/wiki/Sewage
Sewage (or domestic sewage, domestic wastewater, municipal wastewater) is a type of wastewater that is produced by a community of people. It is typically transported through a sewer system. Sewage consists of wastewater discharged from residences and from commercial, institutional and public facilities that exist in the locality. Sub-types of sewage are greywater (from sinks, bathtubs, showers, dishwashers, and clothes washers) and blackwater (the water used to flush toilets, combined with the human waste that it flushes away). Sewage also contains soaps and detergents. Food waste may be present from dishwashing, and food quantities may be increased where garbage disposal units are used. In regions where toilet paper is used rather than bidets, that paper is also added to the sewage. Sewage contains macro-pollutants and micro-pollutants, and may also incorporate some municipal solid waste and pollutants from industrial wastewater. Sewage usually travels from a building's plumbing either into a sewer, which will carry it elsewhere, or into an onsite sewage facility. Collection of sewage from several households together usually takes place in either sanitary sewers or combined sewers. The former is designed to exclude stormwater flows whereas the latter is designed to also take stormwater. The production of sewage generally corresponds to the water consumption. A range of factors influence water consumption and hence the sewage flowrates per person. These include: water availability (the opposite of water scarcity), water supply options, climate (warmer climates may lead to greater water consumption), community size, economic level of the community, level of industrialization, metering of household consumption, water cost and water pressure. The main parameters in sewage that are measured to assess the sewage strength or quality, as well as treatment options, include: solids, indicators of organic matter, nitrogen, phosphorus, and indicators of fecal contamination. These can be considered to be the main macro-pollutants in sewage. Sewage contains pathogens which stem from fecal matter. The following four types of pathogens are found in sewage: pathogenic bacteria, viruses, protozoa (in the form of cysts or oocysts) and helminths (in the form of eggs). In order to quantify the organic matter, indirect methods are commonly used: mainly the Biochemical Oxygen Demand (BOD) and the Chemical Oxygen Demand (COD). Management of sewage includes collection and transport for release into the environment, after a treatment level that is compatible with the local requirements for discharge into water bodies, onto soil or for reuse applications. Disposal options include dilution (self-purification of water bodies, making use of their assimilative capacity if possible), marine outfalls, land disposal and sewage farms. All disposal options may run risks of causing water pollution. Terminology Sewage and wastewater Sewage (or domestic wastewater) consists of wastewater discharged from residences and from commercial, institutional and public facilities that exist in the locality. Sewage is a mixture of water (from the community's water supply), human excreta (feces and urine), used water from bathrooms, food preparation wastes, laundry wastewater, and other waste products of normal living. Sewage from municipalities contains wastewater from commercial activities and institutions, e.g. 
wastewater discharged from restaurants, laundries, hospitals, schools, prisons, offices, stores and establishments serving the local area of larger communities. Sewage can be distinguished into "untreated sewage" (also called "raw sewage") and "treated sewage" (also called "effluent" from a sewage treatment plant). The term "sewage" is nowadays often used interchangeably with "wastewater" – implying "municipal wastewater" – in many textbooks, policy documents and the literature. To be precise, wastewater is a broader term, because it refers to any water after it has been used in a variety of applications. Thus it may also refer to "industrial wastewater", agricultural wastewater and other flows that are not related to household activities. Blackwater Greywater Overall appearance The overall appearance of sewage is as follows: The temperature tends to be slightly higher than in drinking water but is more stable than the ambient temperature. The color of fresh sewage is slightly grey, whereas older sewage (also called "septic sewage") is dark grey or black. The odor of fresh sewage is "oily" and relatively unpleasant, whereas older sewage has an unpleasant foul odor due to hydrogen sulfide gas and other decomposition by-products. Sewage can have high turbidity from suspended solids. The pH value of sewage is usually near neutral, and can be in the range of 6.7–8.0. Pollutants Sewage consists primarily of water and usually contains less than one part of solid matter per thousand parts of water. In other words, one can say that sewage is composed of around 99.9% pure water, and the remaining 0.1% are solids, which can be in the form of either dissolved solids or suspended solids. The thousand-to-one ratio is an order of magnitude estimate rather than an exact percentage because, aside from variation caused by dilution, solids may be defined differently depending upon the mechanism used to separate those solids from the liquid fraction. Sludges of settleable solids removed by settling or suspended solids removed by filtration may contain significant amounts of entrained water, while dried solid material remaining after evaporation eliminates most of that water but includes dissolved minerals not captured by filtration or gravitational separation. The suspended and dissolved solids include organic and inorganic matter plus microorganisms. About one-third of this solid matter is suspended by turbulence, while the remainder is dissolved or colloidal. For the situation in the United States in the 1950s it was estimated that the waste contained in domestic sewage is about half organic and half inorganic. Organic matter The organic matter in sewage can be classified in terms of form and size: Suspended (particulate) or dissolved (soluble). Secondly, it can be classified in terms of biodegradability: either inert or biodegradable. The organic matter in sewage consists of protein compounds (about 40%), carbohydrates (about 25–50%), oils and grease (about 10%) and urea, surfactants, phenols, pesticides and others (lower quantity). In order to quantify the organic matter content, it is common to use "indirect methods" which are based on the consumption of oxygen to oxidize the organic matter: mainly the Biochemical Oxygen Demand (BOD) and the Chemical Oxygen Demand (COD). 
These indirect methods are associated with the major impact of the discharge of organic matter into water bodies: the organic matter serves as food for microorganisms, whose populations grow and consume oxygen, which may then affect aquatic organisms. The mass load of organic content is calculated as the sewage flowrate multiplied by the concentration of the organic matter in the sewage. Typical values for physical–chemical characteristics of raw sewage are provided further below. Nutrients Apart from organic matter, sewage also contains nutrients. The major nutrients of interest are nitrogen and phosphorus. If sewage is discharged untreated, its nitrogen and phosphorus content can lead to pollution of lakes and reservoirs via a process called eutrophication. In raw sewage, nitrogen exists in two forms: organic nitrogen and ammonia. The ammonia stems from the urea in urine. Urea is rapidly hydrolyzed and therefore not usually found in raw sewage. Total phosphorus is mostly present in sewage in the form of phosphates. These are either inorganic (polyphosphates and orthophosphates), whose main source is detergents and other household chemical products, or organic phosphorus, whose source is the organic compounds to which it is bound. Pathogens Human feces in sewage may contain pathogens capable of transmitting diseases. The following four types of pathogens are found in sewage: Bacteria like Salmonella, Shigella, Campylobacter, or Vibrio cholerae; Viruses like hepatitis A, rotavirus, coronavirus, enteroviruses; Protozoa like Entamoeba histolytica, Giardia lamblia, Cryptosporidium parvum; and Helminths and their eggs including Ascaris (roundworm), Ancylostoma (hookworm), and Trichuris (whipworm). In most practical cases, pathogenic organisms are not directly investigated in laboratory analyses. An easier way to assess the presence of fecal contamination is by assessing the most probable numbers of fecal coliforms (called thermotolerant coliforms), especially Escherichia coli. Escherichia coli are intestinal bacteria excreted by all warm-blooded animals, including human beings, and thus tracking their presence in sewage is easy because of their very high concentrations (around 10 to 100 million per 100 mL). Solid waste The ability of a flush toilet to make things "disappear" is soon recognized by young children who may experiment with virtually anything they can carry to the toilet. Adults may be tempted to dispose of toilet paper, wet wipes, diapers, sanitary napkins, tampons, tampon applicators, condoms, and expired medications, even at the risk of causing blockages. The privacy of a toilet offers a clandestine means of removing embarrassing evidence by flushing such things as drug paraphernalia, pregnancy test kits, combined oral contraceptive pill dispensers, and the packaging for those devices. There may be reluctance to retrieve items like children's toys or toothbrushes which accidentally fall into toilets, and items of clothing may be found in sewage from prisons or other locations where occupants may be careless. Trash and garbage in streets may be carried to combined sewers by stormwater runoff. Micro-pollutants Sewage contains environmental persistent pharmaceutical pollutants. Trihalomethanes can also be present as a result of past disinfection. 
Sewage may contain microplastics such as polyethylene and polypropylene beads, or polyester and polyamide fragments from synthetic clothing and bedding fabrics abraded by wear and laundering, or from plastic packaging and plastic-coated paper products disintegrated by lift station pumps. Pharmaceuticals, endocrine disrupting compounds, and hormones may be excreted in urine or feces if not catabolized within the human body. Some residential users tend to pour unwanted liquids like used cooking oil, lubricants, adhesives, paint, solvents, detergents, and disinfectants into their sewer connections. This behavior can cause problems for treatment plant operation and is thus discouraged. Typical sewage composition Factors that determine composition The composition of sewage varies with climate, social and economic situation and population habits. In regions where water use is low, the sewage strength (i.e. the pollutant concentrations) is much higher than in the United States, where water use per person is high. Household income and diet also play a role: for example, in Brazil it has been found that the higher the household income, the higher the BOD load per person and the lower the BOD concentration. Concentrations and loads Typical values for physical–chemical characteristics of raw sewage in developing countries have been published as follows: 180 g/person/d for total solids (or 1100 mg/L when expressed as a concentration), 50 g/person/d for BOD (300 mg/L), 100 g/person/d for COD (600 mg/L), 8 g/person/d for total nitrogen (45 mg/L), 4.5 g/person/d for ammonia-N (25 mg/L) and 1.0 g/person/d for total phosphorus (7 mg/L). The typical ranges for these values are: 120–220 g/person/d for total solids (or 700–1350 mg/L when expressed as a concentration), 40–60 g/person/d for BOD (250–400 mg/L), 80–120 g/person/d for COD (450–800 mg/L), 6–10 g/person/d for total nitrogen (35–60 mg/L), 3.5–6 g/person/d for ammonia-N (20–35 mg/L) and 0.7–2.5 g/person/d for total phosphorus (4–15 mg/L). For high income countries, the "per person organic matter load" has been found to be approximately 60 grams of BOD per person per day. This is called the population equivalent (PE) and is also used as a comparison parameter to express the strength of industrial wastewater relative to sewage. Values for households in the United States have been published as follows, where the estimates assume that 25% of the homes have kitchen waste-food grinders (sewage from such households contains more waste): 95 g/person/d for total suspended solids (503 mg/L concentration), 85 g/person/d for BOD (450 mg/L), 198 g/person/d for COD (1050 mg/L), 13.3 g/person/d for the sum of organic nitrogen and ammonia nitrogen (70.4 mg/L), 7.8 g/person/d for ammonia-N (41.2 mg/L) and 3.28 g/person/d for total phosphorus (17.3 mg/L). The concentration values given here are based on a flowrate of 190 L per person per day. A United States source published in 1972 estimated the daily dry weight of solid wastes per capita in sewage as follows: in feces, of dissolved solids in urine, of toilet paper, of greywater solids, of food solids (if garbage disposal units are used), and varying amounts of dissolved minerals depending upon the salinity of local water supplies, the volume of water use per capita, and the extent of water softener use. Sewage contains urine and feces. The mass of feces varies with dietary fiber intake. 
An average person produces 128 grams of wet feces per day, or a median dry mass of 29 g/person/day. The median urine generation rate is about 1.42 L/person/day, as was determined by a global literature review. Flowrates The volume of domestic sewage produced per person (or "per capita", abbreviated as "cap") varies with the water consumption in the respective locality. A range of factors influence water consumption and hence the sewage flowrates per person. These include: Water availability (the opposite of water scarcity), water supply options, climate (warmer climates may lead to greater water consumption), community size, economic level of the community, level of industrialization, metering of household consumption, water cost and water pressure. The production of sewage generally corresponds to the water consumption. However, water used for landscape irrigation will not enter the sewer system, while groundwater and stormwater may enter the sewer system in addition to sewage. There are usually two peak flowrates of sewage arriving at a treatment plant per day: one peak at the beginning of the morning and another peak at the beginning of the evening. With regard to water consumption, a design figure that can be regarded as a "world average" is 35–90 L per person per day (data from 1992). The same publication listed the water consumption in China as 80 L per person per day, Africa as 15–35 L per person per day, Eastern Mediterranean in Europe as 40–85 L per person per day and Latin America and the Caribbean as 70–190 L per person per day. Even inside a country, there may be large variations from one region to another due to the various factors that determine the water consumption as listed above. A flowrate value of 200 liters of sewage per person per day is often used as an estimate in high income countries, and is used for example in the design of sewage treatment plants. For comparison, typical sewage flowrates from urban residential sources in the United States are estimated as follows: 365 L/person/day (for one person households), 288 L/person/day (two person households), 200 L/person/day (four person households), 189 L/person/day (six person households). This means the overall range for this example would be 189–365 L per person per day. Analytical methods General quality indicators Specific organisms and substances Sewage can be monitored for both disease-causing and benign organisms with a variety of techniques. Traditional techniques involve filtering, staining, and examining samples under a microscope. Much more sensitive and specific testing can be accomplished with DNA sequencing, such as when looking for rare organisms, attempting eradication, testing specifically for drug-resistant strains, or discovering new species. Sequencing DNA from an environmental sample is known as metagenomics. Sewage has also been analyzed to determine relative rates of use of prescription and illegal drugs among municipal populations. General socioeconomic demographics may be inferred as well. Collection Sewage is commonly collected and transported in gravity sewers, either in a sanitary sewer or in a combined sewer. The latter also conveys urban runoff (stormwater), which means the sewage gets diluted during rain events. Sanitary sewer Combined sewer Dilution in the sewer Infiltration of groundwater into the sewerage system Infiltration is groundwater entering sewer pipes through defective pipes, connections, joints or manholes. Contaminated or saline groundwater may introduce additional pollutants to the sewage. 
The amount of such infiltrated water depends on several parameters, such as the length of the collection network, pipeline diameters, drainage area, soil type, water table depth, topography and number of connections per unit area. Infiltration is increased by poor construction procedures, and tends to increase with the age of the sewer. The amount of infiltration varies with the depth of the sewer in comparison to the local groundwater table. Older sewer systems that are in need of rehabilitation may also exfiltrate sewage into groundwater from the leaking sewer joints and service connections. This can lead to groundwater pollution. Stormwater Combined sewers are designed to transport sewage and stormwater together. This means that sewage becomes diluted during rain events. There are other types of inflow that also dilute sewage, e.g. "water discharged from cellar and foundation drains, cooling-water discharges, and any direct stormwater runoff connections to the sanitary collection system". The "direct inflows" can result in peak sewage flowrates similar to combined sewers during wet weather events. Industrial wastewater Sewage from communities with industrial facilities may include some industrial wastewater, generated by industrial processes such as the production or manufacture of goods. Volumes of industrial wastewater vary widely with the type of industry. Industrial wastewater may contain very different pollutants at much higher concentrations than are typically found in sewage. Pollutants may be toxic or non-biodegradable waste including pharmaceuticals, biocides, heavy metals, radionuclides, or thermal pollution. An industry may treat its wastewater and discharge it into the environment (or even use the treated wastewater for specific applications), or, if it is located in an urban area, it may discharge the wastewater into the public sewerage system. In the latter case, industrial wastewater may receive pre-treatment at the factories to reduce the pollutant load. Mixing industrial wastewater with sewage does nothing to reduce the mass of pollutants to be treated, but the volume of sewage lowers the concentration of pollutants unique to industrial wastewater, and the volume of industrial wastewater lowers the concentration of pollutants unique to sewage. Disposal and dilution Assimilative capacity of receiving water bodies or land When wastewater is discharged into a water body (river, lake, sea) or onto land, its relative impact will depend on the assimilative capacity of the water body or ecosystem. Water bodies have a self-purification capacity, so that the concentration of a pollutant may decrease with distance from the discharge point. Furthermore, water bodies dilute the discharged pollutants, lowering their concentrations although not their mass. In principle, the higher the dilution capacity (the ratio of the volume or flow of the receiving water to the volume or flow of sewage discharged), the lower the concentration of pollutants in the receiving water, and probably the lower the negative impacts. But if the water body is already heavily polluted at the point of discharge, the dilution will be of limited value. In several cases, a community may only partially treat its sewage and still rely on the assimilative capacity of the water body. 
However, this needs to be analyzed very carefully, taking into account the quality of the water in the receiving body before it receives the discharge of sewage, the resulting water quality after the discharge, and the impact on the intended water uses after discharge. There are also specific legal requirements in each country. Different countries have different regulations specifying the quality of the sewage that may be discharged and the quality to be maintained in the receiving water body. The combination of treatment and disposal must comply with existing local regulations. The assimilative capacity depends – among several factors – on the ability of the receiving water to sustain the dissolved oxygen concentrations necessary to support organisms catabolizing organic waste. For example, fish may die if dissolved oxygen levels are depressed below 5 mg/L. Application of sewage to land can be considered a form of final disposal, of treatment, or both. Land disposal alternatives require consideration of land availability, groundwater quality, and possible soil deterioration. Other disposal methods Sewage may be discharged to an evaporation or infiltration basin. Groundwater recharge is used to reduce saltwater intrusion, or to replenish aquifers used for agricultural irrigation. Treatment is usually required to sustain the percolation capacity of infiltration basins, and more extensive treatment may be required for aquifers used as drinking water supplies. Marine outfall Global situation Treatment Sewage treatment is beneficial in reducing environmental pollution. Bar screens can remove large solid debris from sewage, and primary treatment can remove floating and settleable matter. Primary treated sewage usually contains less than half of the original solids content and approximately two-thirds of the BOD in the form of colloids and dissolved organic compounds. Secondary treatment can reduce the BOD of organic waste in undiluted sewage, but is less effective for dilute sewage. Water disinfection may be attempted to kill pathogens prior to disposal, and is increasingly effective after more elements of the foregoing treatment sequence have been completed. Reuse and reclamation An alternative to discharge into the environment is to reuse the sewage in a productive way (for agricultural, urban or industrial uses), in compliance with local regulations and requirements for each specific reuse application. Public health risks of sewage reuse in agriculture can be minimized by following a "multiple barrier approach" according to guidelines by the World Health Organization. There is also the possibility of resource recovery, which could make agriculture more sustainable by using carbon, nitrogen, phosphorus, water and energy recovered from sewage. Sewage farm Regulations Sewage management includes collection and transport for release into the environment after a treatment level that is compatible with the local requirements for discharge into water bodies, onto soil, or for reuse applications. In most countries, uncontrolled discharges of wastewater to the environment are not permitted under law, and strict water quality requirements are to be met. For requirements in the United States, see Clean Water Act. Sewage management regulations are often part of a country's broader sanitation policies. These may also include the management of human excreta (from non-sewered collection systems), solid waste and stormwater. 
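The load–concentration relationship used throughout this article – mass load equals sewage flowrate multiplied by concentration – and the effect of dilution in a receiving water body can be made concrete with a short calculation. The following Python sketch is illustrative only: the per-capita loads are the typical developing-country figures quoted above, the per-capita flowrate is an assumed value chosen so that the quoted concentrations are approximately reproduced, the river and sewage flows in the dilution step are invented, and the function name is hypothetical.

    # Illustrative sketch: converting per-capita pollutant loads and a per-capita
    # sewage flowrate into concentrations, then diluting into a receiving river.
    def concentration_mg_per_l(load_g_per_person_day, flow_l_per_person_day):
        # concentration (mg/L) = load (g/person/d) * 1000 (mg/g) / flow (L/person/d)
        return load_g_per_person_day * 1000.0 / flow_l_per_person_day

    flow = 167.0  # L/person/day (assumed; gives ~300 mg/L for 50 g BOD/person/day)
    loads = {"BOD": 50.0, "COD": 100.0, "total nitrogen": 8.0, "total phosphorus": 1.0}
    for name, load in loads.items():
        print(f"{name}: {concentration_mg_per_l(load, flow):.0f} mg/L")

    # Dilution in a receiving river (complete-mix assumption): mixing lowers the
    # concentration but does not reduce the mass of pollutants discharged.
    river_flow, sewage_flow = 2000.0, 100.0    # L/s (invented values)
    river_bod, sewage_bod = 2.0, 300.0         # mg/L
    mixed_bod = (river_flow * river_bod + sewage_flow * sewage_bod) / (river_flow + sewage_flow)
    print(f"BOD downstream of the discharge: {mixed_bod:.1f} mg/L")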
See also Fecal sludge management History of water supply and sanitation Reuse of human excreta Urban Waste Water Treatment Directive Wastewater-based epidemiology References External links Sanitation and Wastewater Atlas of Africa Anaerobic digestion Sanitation Sewerage Waste management Water pollution
Sewage
[ "Chemistry", "Engineering", "Environmental_science" ]
4,925
[ "Water pollution", "Sewerage", "Anaerobic digestion", "Environmental engineering", "Water technology" ]
20,646,772
https://en.wikipedia.org/wiki/Vibration
Vibration is a mechanical phenomenon whereby oscillations occur about an equilibrium point. Vibration may be deterministic if the oscillations can be characterised precisely (e.g. the periodic motion of a pendulum), or random if the oscillations can only be analysed statistically (e.g. the movement of a tire on a gravel road). Vibration can be desirable: for example, the motion of a tuning fork, the reed in a woodwind instrument or harmonica, a mobile phone, or the cone of a loudspeaker. In many cases, however, vibration is undesirable, wasting energy and creating unwanted sound. For example, the vibrational motions of engines, electric motors, or any mechanical device in operation are typically unwanted. Such vibrations could be caused by imbalances in the rotating parts, uneven friction, or the meshing of gear teeth. Careful designs usually minimize unwanted vibrations. The studies of sound and vibration are closely related (both fall under acoustics). Sound, or pressure waves, are generated by vibrating structures (e.g. vocal cords); these pressure waves can also induce the vibration of structures (e.g. ear drum). Hence, attempts to reduce noise are often related to issues of vibration. Machining vibrations are common in the process of subtractive manufacturing. Types Free vibration or natural vibration occurs when a mechanical system is set in motion with an initial input and allowed to vibrate freely. Examples of this type of vibration are pulling a child back on a swing and letting it go, or hitting a tuning fork and letting it ring. The mechanical system vibrates at one or more of its natural frequencies and damps down to motionlessness. Forced vibration is when a time-varying disturbance (load, displacement, velocity, or acceleration) is applied to a mechanical system. The disturbance can be a periodic and steady-state input, a transient input, or a random input. The periodic input can be a harmonic or a non-harmonic disturbance. Examples of these types of vibration include a washing machine shaking due to an imbalance, transportation vibration caused by an engine or uneven road, or the vibration of a building during an earthquake. For linear systems, the frequency of the steady-state vibration response resulting from the application of a periodic, harmonic input is equal to the frequency of the applied force or motion, with the response magnitude being dependent on the actual mechanical system. Damped vibration: When the energy of a vibrating system is gradually dissipated by friction and other resistances, the vibrations are said to be damped. The vibrations gradually reduce or change in frequency or intensity or cease, and the system rests in its equilibrium position. An example of this type of vibration is the vehicle suspension damped by the shock absorber. Isolation Testing Vibration testing is accomplished by introducing a forcing function into a structure, usually with some type of shaker. Alternatively, a DUT (device under test) is attached to the "table" of a shaker. Vibration testing is performed to examine the response of a device under test (DUT) to a defined vibration environment. The measured response may be the ability to function in the vibration environment, fatigue life, resonant frequencies or squeak and rattle sound output (NVH). Squeak and rattle testing is performed with a special type of quiet shaker that produces very low sound levels while under operation. 
For relatively low frequency forcing (typically less than 100 Hz), servohydraulic (electrohydraulic) shakers are used. For higher frequencies (typically 5 Hz to 2000 Hz), electrodynamic shakers are used. Generally, one or more "input" or "control" points located on the DUT-side of a vibration fixture are kept at a specified acceleration. Other "response" points may experience higher vibration levels (resonance) or lower vibration levels (anti-resonance or damping) than the control point(s). It is often desirable to achieve anti-resonance to keep a system from becoming too noisy, or to reduce strain on certain parts due to vibration modes caused by specific vibration frequencies. The most common types of vibration testing services conducted by vibration test labs are sinusoidal and random. Sine (one-frequency-at-a-time) tests are performed to survey the structural response of the device under test (DUT). During the early history of vibration testing, vibration machine controllers were limited to controlling only sine motion, so only sine testing was performed. Later, more sophisticated analog and then digital controllers were able to provide random control (all frequencies at once). A random (all frequencies at once) test is generally considered to more closely replicate a real world environment, such as road inputs to a moving automobile. Most vibration testing is conducted along a single DUT axis at a time, even though most real-world vibration occurs in various axes simultaneously. MIL-STD-810G, released in late 2008, Test Method 527, calls for multiple exciter testing. The vibration test fixture used to attach the DUT to the shaker table must be designed for the frequency range of the vibration test spectrum. It is difficult to design a vibration test fixture which duplicates the dynamic response (mechanical impedance) of the actual in-use mounting. For this reason, to ensure repeatability between vibration tests, vibration fixtures are designed to be resonance free within the test frequency range. Generally, for smaller fixtures and lower frequency ranges, the designer can target a fixture design that is free of resonances in the test frequency range. This becomes more difficult as the DUT gets larger and as the test frequency increases. In these cases, multi-point control strategies can mitigate some of the resonances that may be present in the future. Some vibration test methods limit the amount of crosstalk (movement of a response point in a direction mutually perpendicular to the axis under test) permitted to be exhibited by the vibration test fixture. Devices specifically designed to trace or record vibrations are called vibroscopes. Analysis Vibration analysis (VA), applied in an industrial or maintenance environment, aims to reduce maintenance costs and equipment downtime by detecting equipment faults. VA is a key component of a condition monitoring (CM) program, and is often referred to as predictive maintenance (PdM). Most commonly VA is used to detect faults in rotating equipment (fans, motors, pumps, gearboxes, etc.) such as imbalance, misalignment, rolling element bearing faults and resonance conditions. VA can use the units of displacement, velocity and acceleration displayed as a time waveform (TWF), but most commonly the spectrum is used, derived from a fast Fourier transform of the TWF. The vibration spectrum provides important frequency information that can pinpoint the faulty component. 
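As a rough illustration of how a spectrum is derived from a time waveform, the short sketch below (in Python, assuming NumPy is available; the sampling rate, fault frequency and noise level are invented for the example) synthesizes a TWF containing a 25 Hz component – such as imbalance on a shaft turning at 1500 rpm – and locates the dominant frequency with a fast Fourier transform.

    import numpy as np

    # Hypothetical time waveform (TWF): a 25 Hz vibration component buried in
    # noise, sampled at 1 kHz for 2 seconds.
    fs = 1000.0                               # sampling rate, Hz
    t = np.arange(0.0, 2.0, 1.0 / fs)
    twf = np.sin(2 * np.pi * 25.0 * t) + 0.3 * np.random.randn(t.size)

    # Single-sided amplitude spectrum via the fast Fourier transform of the TWF.
    spectrum = 2.0 * np.abs(np.fft.rfft(twf)) / t.size
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the 0 Hz (DC) bin
    print(f"Dominant vibration frequency: {peak:.1f} Hz")  # ~25.0 Hz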
The fundamentals of vibration analysis can be understood by studying the simple mass–spring–damper model. Indeed, even a complex structure such as an automobile body can be modeled as a "summation" of simple mass–spring–damper models. The mass–spring–damper model is an example of a simple harmonic oscillator. The mathematics used to describe its behavior is identical to that of other simple harmonic oscillators such as the RLC circuit. Note: This article does not include the step-by-step mathematical derivations, but focuses on major vibration analysis equations and concepts. Please refer to the references at the end of the article for detailed derivations. Free vibration without damping To start the investigation of the mass–spring–damper, assume the damping is negligible and that there is no external force applied to the mass (i.e. free vibration). The force applied to the mass by the spring is proportional to the amount the spring is stretched "x" (assuming the spring is already compressed due to the weight of the mass). The proportionality constant, k, is the stiffness of the spring and has units of force/distance (e.g. lbf/in or N/m). The negative sign indicates that the force is always opposing the motion of the mass attached to it: The force generated by the mass is proportional to the acceleration of the mass as given by Newton's second law of motion: The sum of the forces on the mass then generates this ordinary differential equation: Assuming that the initiation of vibration begins by stretching the spring by the distance of A and releasing, the solution to the above equation that describes the motion of the mass is: This solution says that the mass will oscillate with simple harmonic motion that has an amplitude of A and a frequency of fn. The number fn is called the undamped natural frequency. For the simple mass–spring system, fn is defined as: Note: angular frequency ω (ω = 2πf), with units of radians per second, is often used in equations because it simplifies the equations, but is normally converted to ordinary frequency (units of Hz or, equivalently, cycles per second) when stating the frequency of a system. If the mass and stiffness of the system are known, the formula above can determine the frequency at which the system vibrates once set in motion by an initial disturbance. Every vibrating system has one or more natural frequencies at which it vibrates once disturbed. This simple relation can be used to understand in general what happens to a more complex system once we add mass or stiffness. For example, the above formula explains why, when a car or truck is fully loaded, the suspension feels "softer" than unloaded—the mass has increased, reducing the natural frequency of the system. What causes the system to vibrate: from the conservation of energy point of view Vibrational motion can be understood in terms of conservation of energy. In the above example the spring has been extended by a value of x and therefore some potential energy () is stored in the spring. Once released, the spring tends to return to its un-stretched state (which is the minimum potential energy state) and in the process accelerates the mass. At the point where the spring has reached its un-stretched state all the potential energy that we supplied by stretching it has been transformed into kinetic energy (). The mass then begins to decelerate because it is now compressing the spring and in the process transferring the kinetic energy back to potential energy. 
Thus the oscillation of the spring amounts to the transfer of energy back and forth between kinetic and potential forms. In this simple model the mass continues to oscillate forever at the same magnitude—but in a real system, damping always dissipates the energy, eventually bringing the spring to rest. Free vibration with damping When a "viscous" damper is added to the model, it applies a force that is proportional to the velocity of the mass. The damping is called viscous because it models the effects of a fluid within an object. The proportionality constant c is called the damping coefficient and has units of force per velocity (lbf⋅s/in or N⋅s/m). Summing the forces on the mass results in the following ordinary differential equation: The solution to this equation depends on the amount of damping. If the damping is small enough, the system still vibrates—but eventually, over time, stops vibrating. This case is called underdamping, which is important in vibration analysis. If damping is increased just to the point where the system no longer oscillates, the system has reached the point of critical damping. If the damping is increased past critical damping, the system is overdamped. The value that the damping coefficient must reach for critical damping in the mass-spring-damper model is: To characterize the amount of damping in a system, a ratio called the damping ratio (also known as damping factor and % critical damping) is used. This damping ratio is just a ratio of the actual damping over the amount of damping required to reach critical damping. The formula for the damping ratio () of the mass-spring-damper model is: For example, metal structures (e.g., airplane fuselages, engine crankshafts) have damping factors less than 0.05, while automotive suspensions are in the range of 0.2–0.3. The solution to the underdamped system for the mass-spring-damper model is the following: The value of X, the initial magnitude, and the phase shift are determined by the amount the spring is stretched. The formulas for these values can be found in the references. Damped and undamped natural frequencies The major points to note from the solution are the exponential term and the cosine function. The exponential term defines how quickly the system “damps” down – the larger the damping ratio, the quicker it damps to zero. The cosine function is the oscillating portion of the solution, but the frequency of the oscillations is different from the undamped case. The frequency in this case is called the "damped natural frequency", and is related to the undamped natural frequency by the following formula: The damped natural frequency is less than the undamped natural frequency, but for many practical cases the damping ratio is relatively small and hence the difference is negligible. Therefore, the damped and undamped descriptions are often dropped when stating the natural frequency (e.g. with a 0.1 damping ratio, the damped natural frequency is less than 1% lower than the undamped). The plots to the side present how 0.1 and 0.3 damping ratios affect how the system “rings” down over time. What is often done in practice is to experimentally measure the free vibration after an impact (for example by a hammer) and then determine the natural frequency of the system by measuring the rate of oscillation, as well as the damping ratio by measuring the rate of decay. The natural frequency and damping ratio are not only important in free vibration, but also characterize how a system behaves under forced vibration. 
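To put numbers to the quantities just described, the following sketch (plain Python; the mass, stiffness and damping values are chosen only for illustration) evaluates the undamped natural frequency, critical damping coefficient, damping ratio and damped natural frequency of a single mass–spring–damper, and shows that for a damping ratio near 0.1 the damped natural frequency is within about 1% of the undamped value.

    import math

    # Illustrative single degree of freedom mass-spring-damper (assumed values).
    m = 2.0      # mass, kg
    k = 8000.0   # spring stiffness, N/m
    c = 25.0     # damping coefficient, N*s/m

    fn = math.sqrt(k / m) / (2.0 * math.pi)   # undamped natural frequency, Hz
    cc = 2.0 * math.sqrt(k * m)               # critical damping coefficient, N*s/m
    zeta = c / cc                             # damping ratio (fraction of critical)
    fd = fn * math.sqrt(1.0 - zeta ** 2)      # damped natural frequency, Hz

    print(f"fn = {fn:.2f} Hz, zeta = {zeta:.3f}, fd = {fd:.2f} Hz")
    print(f"fd lies {100.0 * (1.0 - fd / fn):.2f}% below fn")  # well under 1% here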
Forced vibration with damping The behavior of the spring–mass–damper model varies with the addition of a harmonic force. A force of this type could, for example, be generated by a rotating imbalance. Summing the forces on the mass results in the following ordinary differential equation: The steady state solution of this problem can be written as: The result states that the mass will oscillate at the same frequency, f, as the applied force, but with a phase shift. The amplitude of the vibration "X" is defined by the following formula, where "r" is defined as the ratio of the harmonic force frequency to the undamped natural frequency of the mass–spring–damper model. The phase shift is defined by the following formula. The plot of these functions, called "the frequency response of the system", presents one of the most important features in forced vibration. In a lightly damped system, when the forcing frequency nears the natural frequency () the amplitude of the vibration can become extremely high. This phenomenon is called resonance (consequently, the natural frequency of a system is often referred to as the resonant frequency). In rotor bearing systems any rotational speed that excites a resonant frequency is referred to as a critical speed. If resonance occurs in a mechanical system it can be very harmful – leading to eventual failure of the system. Consequently, one of the major reasons for vibration analysis is to predict when this type of resonance may occur and then to determine what steps to take to prevent it from occurring. As the amplitude plot shows, adding damping can significantly reduce the magnitude of the vibration. Also, the magnitude can be reduced if the natural frequency can be shifted away from the forcing frequency by changing the stiffness or mass of the system. If the system cannot be changed, perhaps the forcing frequency can be shifted (for example, changing the speed of the machine generating the force). The following are some other points with regard to the forced vibration shown in the frequency response plots. At a given frequency ratio, the amplitude of the vibration, X, is directly proportional to the amplitude of the force (e.g. if you double the force, the vibration doubles). With little or no damping, the vibration is in phase with the forcing frequency when the frequency ratio r < 1, and 180 degrees out of phase when the frequency ratio r > 1. When r ≪ 1 the amplitude is just the deflection of the spring under the static force. This deflection is called the static deflection. Hence, when r ≪ 1 the effects of the damper and the mass are minimal. When r ≫ 1 the amplitude of the vibration is actually less than the static deflection. In this region the force generated by the mass (F = ma) is dominating because the acceleration seen by the mass increases with the frequency. Since the deflection seen in the spring, X, is reduced in this region, the force transmitted by the spring (F = kx) to the base is reduced. Therefore, the mass–spring–damper system is isolating the harmonic force from the mounting base – referred to as vibration isolation. More damping actually reduces the effects of vibration isolation when r ≫ 1 because the damping force (F = cv) is also transmitted to the base. Whatever the damping is, the vibration is 90 degrees out of phase with the forcing frequency when the frequency ratio r = 1, which is very helpful when it comes to determining the natural frequency of the system. 
Whatever the damping is, when r ≫ 1, the vibration is 180 degrees out of phase with the forcing frequency. Whatever the damping is, when r ≪ 1, the vibration is in phase with the forcing frequency. Resonance causes Resonance is simple to understand if the spring and mass are viewed as energy storage elements – with the mass storing kinetic energy and the spring storing potential energy. As discussed earlier, when the mass and spring have no external force acting on them they transfer energy back and forth at a rate equal to the natural frequency. In other words, efficiently pumping energy into both mass and spring requires that the energy source feed the energy in at a rate equal to the natural frequency. Applying a force to the mass and spring is similar to pushing a child on a swing: a push is needed at the correct moment to make the swing get higher and higher. As in the case of the swing, the force applied need not be high to get large motions, but must just add energy to the system. The damper, instead of storing energy, dissipates energy. Since the damping force is proportional to the velocity, the more the motion, the more the damper dissipates the energy. Therefore, there is a point when the energy dissipated by the damper equals the energy added by the force. At this point, the system has reached its maximum amplitude and will continue to vibrate at this level as long as the force applied stays the same. If no damping exists, there is nothing to dissipate the energy and, theoretically, the motion will continue to grow without bound. Applying "complex" forces to the mass–spring–damper model In a previous section only a simple harmonic force was applied to the model, but this can be extended considerably using two powerful mathematical tools. The first is the Fourier transform, which takes a signal as a function of time (time domain) and breaks it down into its harmonic components as a function of frequency (frequency domain). For example, consider applying to the mass–spring–damper model a force that repeats the following cycle – a force equal to 1 newton for 0.5 second and then no force for 0.5 second. This type of force has the shape of a 1 Hz square wave. The Fourier transform of the square wave generates a frequency spectrum that presents the magnitude of the harmonics that make up the square wave (the phase is also generated, but is typically of less concern and therefore is often not plotted). The Fourier transform can also be used to analyze non-periodic functions such as transients (e.g. impulses) and random functions. The Fourier transform is almost always computed using the fast Fourier transform (FFT) computer algorithm in combination with a window function. In the case of our square wave force, the first component is actually a constant force of 0.5 newton and is represented by a value at 0 Hz in the frequency spectrum. The next component is a 1 Hz sine wave with an amplitude of 0.64. This is shown by the line at 1 Hz. The remaining components are at odd frequencies and it takes an infinite number of sine waves to generate the perfect square wave. Hence, the Fourier transform allows the force to be interpreted as a sum of sinusoidal forces being applied instead of a more "complex" force (e.g. a square wave). In the previous section, the vibration solution was given for a single harmonic force, but the Fourier transform in general gives multiple harmonic forces. 
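A small numerical sketch can tie the two ideas together: the Fourier decomposition of the 1 Hz square-wave force described above, and the steady-state amplitude of a mass–spring–damper driven by each harmonic. It is written in Python and assumes NumPy; the stiffness, natural frequency and damping ratio in the second part are illustrative values, not taken from this article, and the amplitude expression is the standard X = (F/k) / sqrt((1 − r²)² + (2ζr)²) form referred to earlier.

    import numpy as np

    # Fourier decomposition of the square-wave force described above:
    # 1 N for 0.5 s, then 0 N for 0.5 s, repeating at 1 Hz.
    fs = 1000.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    force = np.where(t < 0.5, 1.0, 0.0)

    coeffs = np.fft.rfft(force) / t.size
    amps = 2.0 * np.abs(coeffs)
    amps[0] /= 2.0                      # the 0 Hz (mean) component is not doubled
    print(f"0 Hz component: {amps[0]:.2f} N")   # ~0.50 N constant force
    print(f"1 Hz component: {amps[1]:.2f} N")   # ~0.64 N (2/pi)

    # Steady-state amplitude of one harmonic applied to a mass-spring-damper.
    def steady_state_amplitude(F, f, k, fn, zeta):
        r = f / fn
        return (F / k) / np.sqrt((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)

    # Illustrative system (assumed): k = 1000 N/m, fn = 5 Hz, damping ratio 0.05.
    for harmonic in (1, 3, 5):          # the square wave has only odd harmonics
        X = steady_state_amplitude(amps[harmonic], float(harmonic), 1000.0, 5.0, 0.05)
        print(f"{harmonic} Hz harmonic -> amplitude {X * 1000.0:.2f} mm")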
The second mathematical tool, the superposition principle, allows the summation of the solutions from multiple forces if the system is linear. In the case of the spring–mass–damper model, the system is linear if the spring force is proportional to the displacement and the damping is proportional to the velocity over the range of motion of interest. Hence, the solution to the problem with a square wave is the sum of the predicted vibrations from each one of the harmonic forces found in the frequency spectrum of the square wave. Frequency response model The solution of a vibration problem can be viewed as an input/output relation – where the force is the input and the output is the vibration. Representing the force and vibration in the frequency domain (magnitude and phase) allows the following relation: is called the frequency response function (also referred to as the transfer function, although this is technically not as accurate) and has both a magnitude and phase component (if represented as a complex number, a real and imaginary component). The magnitude of the frequency response function (FRF) was presented earlier for the mass–spring–damper system. The phase of the FRF was also presented earlier as: For example, consider calculating the FRF for a mass–spring–damper system with a mass of 1 kg, a spring stiffness of 1.93 N/mm and a damping ratio of 0.1. The values of the spring and mass give a natural frequency of 7 Hz for this specific system. Applying the 1 Hz square wave from earlier allows the calculation of the predicted vibration of the mass. The figure illustrates the resulting vibration. It happens in this example that the fourth harmonic of the square wave falls at 7 Hz. The frequency response of the mass–spring–damper therefore outputs a high 7 Hz vibration even though the input force had a relatively low 7 Hz harmonic. This example highlights that the resulting vibration is dependent on both the forcing function and the system that the force is applied to. The figure also shows the time domain representation of the resulting vibration. This is done by performing an inverse Fourier transform that converts frequency domain data to the time domain. In practice, this is rarely done because the frequency spectrum provides all the necessary information. The frequency response function (FRF) does not necessarily have to be calculated from the knowledge of the mass, damping, and stiffness of the system—but can be measured experimentally. For example, if a known force over a range of frequencies is applied, and if the associated vibrations are measured, the frequency response function can be calculated, thereby characterizing the system. This technique is used in the field of experimental modal analysis to determine the vibration characteristics of a structure. Multiple degrees of freedom systems and mode shapes The simple mass–spring–damper model is the foundation of vibration analysis. The model described above is called a single degree of freedom (SDOF) model since the mass is assumed to only move up and down. In more complex systems, the system must be discretized into more masses that move in more than one direction, adding degrees of freedom. The major concepts of multiple degrees of freedom (MDOF) can be understood by looking at just a two degree of freedom model as shown in the figure. 
The equations of motion of the 2DOF system are found to be: This can be rewritten in matrix format: A more compact form of this matrix equation can be written as: where and are symmetric matrices referred to, respectively, as the mass, damping, and stiffness matrices. The matrices are N×N square matrices where N is the number of degrees of freedom of the system. The following analysis involves the case where there is no damping and no applied forces (i.e. free vibration). The solution of a viscously damped system is somewhat more complicated. This differential equation can be solved by assuming the following type of solution: Note: Using the exponential solution of is a mathematical trick used to solve linear differential equations. Using Euler's formula and taking only the real part of the solution, it is the same cosine solution as for the 1 DOF system. The exponential solution is only used because it is easier to manipulate mathematically. The equation then becomes: Since cannot equal zero, the equation reduces to the following. Eigenvalue problem This is referred to as an eigenvalue problem in mathematics and can be put in the standard format by pre-multiplying the equation by and if: and The solution to the problem results in N eigenvalues (i.e. ), where N corresponds to the number of degrees of freedom. The eigenvalues provide the natural frequencies of the system. When these eigenvalues are substituted back into the original set of equations, the values of that correspond to each eigenvalue are called the eigenvectors. These eigenvectors represent the mode shapes of the system. The solution of an eigenvalue problem can be quite cumbersome (especially for problems with many degrees of freedom), but fortunately most math analysis programs have eigenvalue routines. The eigenvalues and eigenvectors are often written in the following matrix format and describe the modal model of the system: A simple example using the 2 DOF model can help illustrate the concepts. Let both masses have a mass of 1 kg and the stiffness of all three springs equal 1000 N/m. The mass and stiffness matrices for this problem are then: and Then The eigenvalues for this problem given by an eigenvalue routine are: The natural frequencies in the units of hertz are then (remembering ) and The two mode shapes for the respective natural frequencies are given as: Since the system is a 2 DOF system, there are two modes with their respective natural frequencies and shapes. The mode shape vectors are not the absolute motion, but just describe the relative motion of the degrees of freedom. In our case, the first mode shape vector says that the masses are moving together in phase, since they have the same value and sign. In the case of the second mode shape vector, the masses move in opposite directions at the same rate. Illustration of a multiple DOF problem When there are many degrees of freedom, one method of visualizing the mode shapes is by animating them using structural analysis software such as Femap, ANSYS or VA One by ESI Group. An example of animating mode shapes is shown in the figure below for a cantilevered I-beam, as demonstrated using modal analysis on ANSYS. In this case, the finite element method was used to generate an approximation of the mass and stiffness matrices by meshing the object of interest in order to solve a discrete eigenvalue problem. 
Note that, in this case, the finite element method provides an approximation of the meshed surface (for which there exists an infinite number of vibration modes and frequencies). Therefore, this relatively simple model that has over 100 degrees of freedom and hence as many natural frequencies and mode shapes, provides a good approximation for the first natural frequencies and modes. Generally, only the first few modes are important for practical applications. Note that when performing a numerical approximation of any mathematical model, convergence of the parameters of interest must be ascertained. Multiple DOF problem converted to a single DOF problem The eigenvectors have very important properties called orthogonality properties. These properties can be used to greatly simplify the solution of multi-degree of freedom models. It can be shown that the eigenvectors have the following properties: and are diagonal matrices that contain the modal mass and stiffness values for each one of the modes. (Note: Since the eigenvectors (mode shapes) can be arbitrarily scaled, the orthogonality properties are often used to scale the eigenvectors so the modal mass value for each mode is equal to 1. The modal mass matrix is therefore an identity matrix) These properties can be used to greatly simplify the solution of multi-degree of freedom models by making the following coordinate transformation. Using this coordinate transformation in the original free vibration differential equation results in the following equation. Taking advantage of the orthogonality properties by premultiplying this equation by The orthogonality properties then simplify this equation to: This equation is the foundation of vibration analysis for multiple degree of freedom systems. A similar type of result can be derived for damped systems. The key is that the modal mass and stiffness matrices are diagonal matrices and therefore the equations have been "decoupled". In other words, the problem has been transformed from a large unwieldy multiple degree of freedom problem into many single degree of freedom problems that can be solved using the same methods outlined above. Solving for x is replaced by solving for q, referred to as the modal coordinates or modal participation factors. It may be clearer to understand if is written as: Written in this form it can be seen that the vibration at each of the degrees of freedom is just a linear sum of the mode shapes. Furthermore, how much each mode "participates" in the final vibration is defined by q, its modal participation factor. Rigid-body mode An unrestrained multi-degree of freedom system experiences both rigid-body translation and/or rotation and vibration. The existence of a rigid-body mode results in a zero natural frequency. The corresponding mode shape is called the rigid-body mode. 
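As a numerical check of the 2 DOF example discussed above (two 1 kg masses and three 1000 N/m springs in the standard ground–mass–mass–ground arrangement shown in the figure), the following Python sketch, assuming NumPy is available, builds the mass and stiffness matrices and solves the undamped free-vibration eigenvalue problem to recover the two natural frequencies and mode shapes.

    import numpy as np

    # 2 DOF example from the text: two 1 kg masses, three 1000 N/m springs,
    # no damping, no applied force (free vibration).
    M = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
    K = np.array([[2000.0, -1000.0],
                  [-1000.0, 2000.0]])

    # Generalized eigenvalue problem K*phi = omega^2 * M*phi, solved via M^-1 K.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(M, K))
    order = np.argsort(eigvals.real)
    omega = np.sqrt(eigvals.real[order])       # natural frequencies, rad/s
    freqs = omega / (2.0 * np.pi)              # natural frequencies, Hz

    print("Natural frequencies:", np.round(freqs, 2), "Hz")   # ~[5.03, 8.72]
    for i, idx in enumerate(order, start=1):
        shape = eigvecs[:, idx].real
        shape = shape / np.max(np.abs(shape))  # scale for readability
        print(f"Mode {i} shape: {np.round(shape, 2)}")
    # Mode 1: both masses move together in phase; mode 2: they move in opposition.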
See also Acoustic engineering Anti-vibration compound Balancing machine Base isolation Cushioning Critical speed Damping ratio Dunkerley's method Earthquake engineering Elastic pendulum Fast Fourier transform Mechanical engineering Mechanical resonance Modal analysis Mode shape Noise and vibration on maritime vessels Noise, vibration, and harshness Pallesthesia Passive heave compensation Pendulum Quantum vibration Random vibration Ride quality Rayleigh's quotient in vibrations analysis Shaker (testing device) Shock Shock and vibration data logger Simple harmonic oscillator Sound Structural acoustics Structural dynamics Tire balance Torsional vibration Tuned mass damper Vibration calibrator Vibration control Vibration isolation Wave Whole body vibration References Further reading Tongue, Benson, Principles of Vibration, Oxford University Press, 2001, Inman, Daniel J., Engineering Vibration, Prentice Hall, 2001, Thompson, W.T., Theory of Vibrations, Nelson Thornes Ltd, 1996, Hartog, Den, Mechanical Vibrations, Dover Publications, 1985, Institute for Occupational Safety and Health of the German Social Accident Insurance: Whole-body and hand-arm vibration Manarikkal, I., Elsaha, F., Mba, D. and Laila, D. Dynamic Modelling of Planetary Gearboxes with Cracked Tooth Using Vibrational Analysis, (2019) Advances in Condition Monitoring of Machinery in Non-Stationary Operations, p 240–250, Springer, Switzerland; External links Free Excel sheets to estimate modal parameters Vibration Analysis Reference – Mobius Institute Condition Monitoring and Machinery Protection – Siemens AG V
Vibration
[ "Physics", "Mathematics", "Engineering" ]
6,539
[ "Structural engineering", "Applied mathematics", "Mechanics", "Mechanical vibrations" ]
20,647,050
https://en.wikipedia.org/wiki/Temperature
Temperature is a physical quantity that quantitatively expresses the attribute of hotness or coldness. Temperature is measured with a thermometer. It reflects the average kinetic energy of the vibrating and colliding atoms making up a substance. Thermometers are calibrated in various temperature scales that historically have relied on various reference points and thermometric substances for definition. The most common scales are the Celsius scale with the unit symbol °C (formerly called centigrade), the Fahrenheit scale (°F), and the Kelvin scale (K), with the third being used predominantly for scientific purposes. The kelvin is one of the seven base units in the International System of Units (SI). Absolute zero, i.e., zero kelvin or −273.15 °C, is the lowest point in the thermodynamic temperature scale. Experimentally, it can be approached very closely but not actually reached, as recognized in the third law of thermodynamics. It would be impossible to extract energy as heat from a body at that temperature. Temperature is important in all fields of natural science, including physics, chemistry, Earth science, astronomy, medicine, biology, ecology, material science, metallurgy, mechanical engineering and geography, as well as most aspects of daily life. Effects Many physical processes are related to temperature; some of them are given below: the physical properties of materials, including the phase (solid, liquid, gaseous or plasma), density, solubility, vapor pressure, electrical conductivity, hardness, wear resistance, thermal conductivity, corrosion resistance and strength; the rate and extent to which chemical reactions occur; the amount and properties of thermal radiation emitted from the surface of an object; the effect of air temperature on all living organisms; and the speed of sound, which in a gas is proportional to the square root of the absolute temperature. Scales Temperature scales need two values for definition: the point chosen as zero degrees and the magnitude of the incremental unit of temperature. The Celsius scale (°C) is used for common temperature measurements in most of the world. It is an empirical scale that developed historically, which led to its zero point, 0 °C, being defined as the freezing point of water, and 100 °C as the boiling point of water, both at atmospheric pressure at sea level. It was called a centigrade scale because of the 100-degree interval. Since the standardization of the kelvin in the International System of Units, it has subsequently been redefined in terms of the equivalent fixing points on the Kelvin scale, so that a temperature increment of one degree Celsius is the same as an increment of one kelvin, though numerically the scales differ by an exact offset of 273.15. The Fahrenheit scale is in common use in the United States. Water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure. Absolute zero At the absolute zero of temperature, no energy can be removed from matter as heat, a fact expressed in the third law of thermodynamics. At this temperature, matter contains no macroscopic thermal energy, but still has quantum-mechanical zero-point energy as predicted by the uncertainty principle, although this does not enter into the definition of absolute temperature. Experimentally, absolute zero can be approached only very closely; it can never be reached (the lowest temperature attained by experiment is 38 pK). 
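The fixed relationships between the Celsius, Kelvin and Fahrenheit scales described above can be written as simple conversion functions. The Python sketch below is purely illustrative and the function names are arbitrary.

    def celsius_to_kelvin(t_c):
        # The Celsius and Kelvin scales differ by an exact offset of 273.15.
        return t_c + 273.15

    def celsius_to_fahrenheit(t_c):
        # Water freezes at 32 F and boils at 212 F, so 100 C spans 180 F.
        return t_c * 9.0 / 5.0 + 32.0

    for t_c in (-273.15, 0.0, 100.0):
        print(f"{t_c:8.2f} C = {celsius_to_kelvin(t_c):7.2f} K = "
              f"{celsius_to_fahrenheit(t_c):8.2f} F")
    # -273.15 C =    0.00 K = -459.67 F  (absolute zero)
    #    0.00 C =  273.15 K =   32.00 F  (freezing point of water)
    #  100.00 C =  373.15 K =  212.00 F  (boiling point of water)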
Theoretically, in a body at a temperature of absolute zero, all classical motion of its particles has ceased and they are at complete rest in this classical sense. Absolute zero, defined as , is exactly equal to , or . Absolute scales Referring to the Boltzmann constant, to the Maxwell–Boltzmann distribution, and to the Boltzmann statistical mechanical definition of entropy, as distinct from the Gibbs definition, for independently moving microscopic particles, disregarding interparticle potential energy, by international agreement, a temperature scale is defined and said to be absolute because it is independent of the characteristics of particular thermometric substances and thermometer mechanisms. Apart from absolute zero, it does not have a reference temperature. It is known as the Kelvin scale, widely used in science and technology. The kelvin (the unit name is spelled with a lower-case 'k') is the unit of temperature in the International System of Units (SI). The temperature of a body in a state of thermodynamic equilibrium is always positive relative to absolute zero. Besides the internationally agreed Kelvin scale, there is also a thermodynamic temperature scale, invented by Lord Kelvin, also with its numerical zero at the absolute zero of temperature, but directly relating to purely macroscopic thermodynamic concepts, including the macroscopic entropy, though microscopically referable to the Gibbs statistical mechanical definition of entropy for the canonical ensemble, that takes interparticle potential energy into account, as well as independent particle motion so that it can account for measurements of temperatures near absolute zero. This scale has a reference temperature at the triple point of water, the numerical value of which is defined by measurements using the aforementioned internationally agreed Kelvin scale. Kelvin scale Many scientific measurements use the Kelvin temperature scale (unit symbol: K), named in honor of the physicist who first defined it. It is an absolute scale. Its numerical zero point, , is at the absolute zero of temperature. Since May 2019, the kelvin has been defined through particle kinetic theory, and statistical mechanics. In the International System of Units (SI), the magnitude of the kelvin is defined in terms of the Boltzmann constant, the value of which is defined as fixed by international convention. Statistical mechanical versus thermodynamic temperature scales Since May 2019, the magnitude of the kelvin is defined in relation to microscopic phenomena, characterized in terms of statistical mechanics. Previously, but since 1954, the International System of Units defined a scale and unit for the kelvin as a thermodynamic temperature, by using the reliably reproducible temperature of the triple point of water as a second reference point, the first reference point being at absolute zero. Historically, the temperature of the triple point of water was defined as exactly 273.16 K. Today it is an empirically measured quantity. The freezing point of water at sea-level atmospheric pressure occurs at very close to (). Classification of scales There are various kinds of temperature scale. It may be convenient to classify them as empirically and theoretically based. Empirical temperature scales are historically older, while theoretically based scales arose in the middle of the nineteenth century. Empirical scales Empirically based temperature scales rely directly on measurements of simple macroscopic physical properties of materials. 
For example, the length of a column of mercury, confined in a glass-walled capillary tube, is dependent largely on temperature and is the basis of the very useful mercury-in-glass thermometer. Such scales are valid only within convenient ranges of temperature. For example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable. Most materials expand with temperature increase, but some materials, such as water, contract with temperature increase over some specific range, and then they are hardly useful as thermometric materials. A material is of no use as a thermometer near one of its phase-change temperatures, for example, its boiling-point. In spite of these limitations, most generally used practical thermometers are of the empirically based kind. Especially, it was used for calorimetry, which contributed greatly to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Empirically based thermometers, beyond their base as simple direct measurements of ordinary physical properties of thermometric materials, can be re-calibrated, by use of theoretical physical reasoning, and this can extend their range of adequacy. Theoretical scales Theoretically based temperature scales are based directly on theoretical arguments, especially those of kinetic theory and thermodynamics. They are more or less ideally realized in practically feasible physical devices and materials. Theoretically based temperature scales are used to provide calibrating standards for practical empirically based thermometers. Microscopic statistical mechanical scale In physics, the internationally agreed conventional temperature scale is called the Kelvin scale. It is calibrated through the internationally agreed and prescribed value of the Boltzmann constant, referring to motions of microscopic particles, such as atoms, molecules, and electrons, constituent in the body whose temperature is to be measured. In contrast with the thermodynamic temperature scale invented by Kelvin, the presently conventional Kelvin temperature is not defined through comparison with the temperature of a reference state of a standard body, nor in terms of macroscopic thermodynamics. Apart from the absolute zero of temperature, the Kelvin temperature of a body in a state of internal thermodynamic equilibrium is defined by measurements of suitably chosen of its physical properties, such as have precisely known theoretical explanations in terms of the Boltzmann constant. That constant refers to chosen kinds of motion of microscopic particles in the constitution of the body. In those kinds of motion, the particles move individually, without mutual interaction. Such motions are typically interrupted by inter-particle collisions, but for temperature measurement, the motions are chosen so that, between collisions, the non-interactive segments of their trajectories are known to be accessible to accurate measurement. For this purpose, interparticle potential energy is disregarded. In an ideal gas, and in other theoretically understood bodies, the Kelvin temperature is defined to be proportional to the average kinetic energy of non-interactively moving microscopic particles, which can be measured by suitable techniques. The proportionality constant is a simple multiple of the Boltzmann constant. 
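As a rough numerical illustration of that proportionality, the sketch below (our own example; the molecular mass used is the approximate mass of one N2 molecule) evaluates the mean translational kinetic energy (3/2)·kB·T and the corresponding root-mean-square speed at a few temperatures:

```python
import math

K_B  = 1.380649e-23   # Boltzmann constant, J/K
M_N2 = 4.65e-26       # approximate mass of one N2 molecule, kg

def mean_translational_ke(temperature_k: float) -> float:
    """Average translational kinetic energy per particle: (3/2) k_B T."""
    return 1.5 * K_B * temperature_k

def rms_speed(temperature_k: float, particle_mass_kg: float) -> float:
    """From (1/2) m v_rms^2 = (3/2) k_B T."""
    return math.sqrt(3.0 * K_B * temperature_k / particle_mass_kg)

for t in (77.0, 300.0, 1000.0):
    print(f"T = {t:6.1f} K: <E_trans> = {mean_translational_ke(t):.3e} J, "
          f"v_rms(N2) = {rms_speed(t, M_N2):6.1f} m/s")
```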
If molecules, atoms, or electrons are emitted from material and their velocities are measured, the spectrum of their velocities often nearly obeys a theoretical law called the Maxwell–Boltzmann distribution, which gives a well-founded measurement of temperatures for which the law holds. There have not yet been successful experiments of this same kind that directly use the Fermi–Dirac distribution for thermometry, but perhaps that will be achieved in the future. The speed of sound in a gas can be calculated theoretically from the gas's molecular character, temperature, pressure, and the Boltzmann constant. For a gas of known molecular character and pressure, this provides a relation between temperature and the Boltzmann constant. Those quantities can be known or measured more precisely than can the thermodynamic variables that define the state of a sample of water at its triple point. Consequently, taking the value of the Boltzmann constant as a primarily defined reference of exactly defined value, a measurement of the speed of sound can provide a more precise measurement of the temperature of the gas. Measurement of the spectrum of electromagnetic radiation from an ideal three-dimensional black body can provide an accurate temperature measurement because the frequency of maximum spectral radiance of black-body radiation is directly proportional to the temperature of the black body; this is known as Wien's displacement law and has a theoretical explanation in Planck's law and the Bose–Einstein law. Measurement of the spectrum of noise-power produced by an electrical resistor can also provide accurate temperature measurement. The resistor has two terminals and is in effect a one-dimensional body. The Bose-Einstein law for this case indicates that the noise-power is directly proportional to the temperature of the resistor and to the value of its resistance and to the noise bandwidth. In a given frequency band, the noise-power has equal contributions from every frequency and is called Johnson noise. If the value of the resistance is known then the temperature can be found. Macroscopic thermodynamic scale Historically, till May 2019, the definition of the Kelvin scale was that invented by Kelvin, based on a ratio of quantities of energy in processes in an ideal Carnot engine, entirely in terms of macroscopic thermodynamics. That Carnot engine was to work between two temperatures, that of the body whose temperature was to be measured, and a reference, that of a body at the temperature of the triple point of water. Then the reference temperature, that of the triple point, was defined to be exactly . Since May 2019, that value has not been fixed by definition but is to be measured through microscopic phenomena, involving the Boltzmann constant, as described above. The microscopic statistical mechanical definition does not have a reference temperature. Ideal gas A material on which a macroscopically defined temperature scale may be based is the ideal gas. The pressure exerted by a fixed volume and mass of an ideal gas is directly proportional to its temperature. Some natural gases show so nearly ideal properties over suitable temperature range that they can be used for thermometry; this was important during the development of thermodynamics and is still of practical importance today. The ideal gas thermometer is, however, not theoretically perfect for thermodynamics. 
This is because the entropy of an ideal gas at its absolute zero of temperature is not a positive semi-definite quantity, which puts the gas in violation of the third law of thermodynamics. In contrast to real materials, the ideal gas does not liquefy or solidify, no matter how cold it is. Alternatively thinking, the ideal gas law, refers to the limit of infinitely high temperature and zero pressure; these conditions guarantee non-interactive motions of the constituent molecules. Kinetic theory approach The magnitude of the kelvin is now defined in terms of kinetic theory, derived from the value of the Boltzmann constant. Kinetic theory provides a microscopic account of temperature for some bodies of material, especially gases, based on macroscopic systems' being composed of many microscopic particles, such as molecules and ions of various species, the particles of a species being all alike. It explains macroscopic phenomena through the classical mechanics of the microscopic particles. The equipartition theorem of kinetic theory asserts that each classical degree of freedom of a freely moving particle has an average kinetic energy of where denotes the Boltzmann constant. The translational motion of the particle has three degrees of freedom, so that, except at very low temperatures where quantum effects predominate, the average translational kinetic energy of a freely moving particle in a system with temperature will be . Molecules, such as oxygen (O2), have more degrees of freedom than single spherical atoms: they undergo rotational and vibrational motions as well as translations. Heating results in an increase of temperature due to an increase in the average translational kinetic energy of the molecules. Heating will also cause, through equipartitioning, the energy associated with vibrational and rotational modes to increase. Thus a diatomic gas will require more energy input to increase its temperature by a certain amount, i.e. it will have a greater heat capacity than a monatomic gas. As noted above, the speed of sound in a gas can be calculated from the gas's molecular character, temperature, pressure, and the Boltzmann constant. Taking the value of the Boltzmann constant as a primarily defined reference of exactly defined value, a measurement of the speed of sound can provide a more precise measurement of the temperature of the gas. It is possible to measure the average kinetic energy of constituent microscopic particles if they are allowed to escape from the bulk of the system, through a small hole in the containing wall. The spectrum of velocities has to be measured, and the average calculated from that. It is not necessarily the case that the particles that escape and are measured have the same velocity distribution as the particles that remain in the bulk of the system, but sometimes a good sample is possible. Thermodynamic approach Temperature is one of the principal quantities in the study of thermodynamics. Formerly, the magnitude of the kelvin was defined in thermodynamic terms, but nowadays, as mentioned above, it is defined in terms of kinetic theory. The thermodynamic temperature is said to be absolute for two reasons. One is that its formal character is independent of the properties of particular materials. 
The other reason is that its zero is, in a sense, absolute, in that it indicates absence of microscopic classical motion of the constituent particles of matter, so that they have a limiting specific heat of zero for zero temperature, according to the third law of thermodynamics. Nevertheless, a thermodynamic temperature does in fact have a definite numerical value that has been arbitrarily chosen by tradition and is dependent on the property of particular materials; it is simply less arbitrary than relative "degrees" scales such as Celsius and Fahrenheit. Being an absolute scale with one fixed point (zero), there is only one degree of freedom left to arbitrary choice, rather than two as in relative scales. For the Kelvin scale since May 2019, by international convention, the choice has been made to use knowledge of modes of operation of various thermometric devices, relying on microscopic kinetic theories about molecular motion. The numerical scale is settled by a conventional definition of the value of the Boltzmann constant, which relates macroscopic temperature to average microscopic kinetic energy of particles such as molecules. Its numerical value is arbitrary, and an alternate, less widely used absolute temperature scale exists called the Rankine scale, made to be aligned with the Fahrenheit scale as Kelvin is with Celsius. The thermodynamic definition of temperature is due to Kelvin. It is framed in terms of an idealized device called a Carnot engine, imagined to run in a fictive continuous cycle of successive processes that traverse a cycle of states of its working body. The engine takes in a quantity of heat from a hot reservoir and passes out a lesser quantity of waste heat to a cold reservoir. The net heat energy absorbed by the working body is passed, as thermodynamic work, to a work reservoir, and is considered to be the output of the engine. The cycle is imagined to run so slowly that at each point of the cycle the working body is in a state of thermodynamic equilibrium. The successive processes of the cycle are thus imagined to run reversibly with no entropy production. Then the quantity of entropy taken in from the hot reservoir when the working body is heated is equal to that passed to the cold reservoir when the working body is cooled. Then the absolute or thermodynamic temperatures, and , of the reservoirs are defined such that The zeroth law of thermodynamics allows this definition to be used to measure the absolute or thermodynamic temperature of an arbitrary body of interest, by making the other heat reservoir have the same temperature as the body of interest. Kelvin's original work postulating absolute temperature was published in 1848. It was based on the work of Carnot, before the formulation of the first law of thermodynamics. Carnot had no sound understanding of heat and no specific concept of entropy. He wrote of 'caloric' and said that all the caloric that passed from the hot reservoir was passed into the cold reservoir. Kelvin wrote in his 1848 paper that his scale was absolute in the sense that it was defined "independently of the properties of any particular kind of matter". His definitive publication, which sets out the definition just stated, was printed in 1853, a paper read in 1851. Numerical details were formerly settled by making one of the heat reservoirs a cell at the triple point of water, which was defined to have an absolute temperature of 273.16 K. 
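In principle, that ratio definition can be turned into a thermometer directly: with the triple point of water fixed at 273.16 K, an unknown temperature follows from the measured ratio of the heats a reversible engine exchanges with the body and with the reference cell. A minimal sketch with hypothetical numbers (the function name is our own):

```python
T_TRIPLE_POINT = 273.16   # K, the former defining reference temperature

def thermodynamic_temperature(q_body: float, q_reference: float) -> float:
    """For a reversible engine, T_body / T_ref equals the ratio of the
    magnitudes of the heats exchanged with the body and the reference cell."""
    if q_body <= 0.0 or q_reference <= 0.0:
        raise ValueError("heat magnitudes must be positive")
    return T_TRIPLE_POINT * q_body / q_reference

# Hypothetical measurement: 1.37 J taken from the body for every 1.00 J
# rejected to the triple-point cell.
print(f"{thermodynamic_temperature(1.37, 1.00):.1f} K")   # ~374.2 K
```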
Nowadays, the numerical value is instead obtained from measurement through the microscopic statistical mechanical international definition, as above. Intensive variability In thermodynamic terms, temperature is an intensive variable because it is equal to a differential coefficient of one extensive variable with respect to another, for a given body. It thus has the dimensions of a ratio of two extensive variables. In thermodynamics, two bodies are often considered as connected by contact with a common wall, which has some specific permeability properties. Such specific permeability can be referred to a specific intensive variable. An example is a diathermic wall that is permeable only to heat; the intensive variable for this case is temperature. When the two bodies have been connected through the specifically permeable wall for a very long time, and have settled to a permanent steady state, the relevant intensive variables are equal in the two bodies; for a diathermal wall, this statement is sometimes called the zeroth law of thermodynamics. In particular, when the body is described by stating its internal energy , an extensive variable, as a function of its entropy , also an extensive variable, and other state variables , with ), then the temperature is equal to the partial derivative of the internal energy with respect to the entropy: Likewise, when the body is described by stating its entropy as a function of its internal energy , and other state variables , with , then the reciprocal of the temperature is equal to the partial derivative of the entropy with respect to the internal energy: The above definition, equation (1), of the absolute temperature, is due to Kelvin. It refers to systems closed to the transfer of matter and has a special emphasis on directly experimental procedures. A presentation of thermodynamics by Gibbs starts at a more abstract level and deals with systems open to the transfer of matter; in this development of thermodynamics, the equations (2) and (3) above are actually alternative definitions of temperature. Local thermodynamic equilibrium Real-world bodies are often not in thermodynamic equilibrium and not homogeneous. For the study by methods of classical irreversible thermodynamics, a body is usually spatially and temporally divided conceptually into 'cells' of small size. If classical thermodynamic equilibrium conditions for matter are fulfilled to good approximation in such a 'cell', then it is homogeneous and a temperature exists for it. If this is so for every 'cell' of the body, then local thermodynamic equilibrium is said to prevail throughout the body. It makes good sense, for example, to say of the extensive variable , or of the extensive variable , that it has a density per unit volume or a quantity per unit mass of the system, but it makes no sense to speak of the density of temperature per unit volume or quantity of temperature per unit mass of the system. On the other hand, it makes no sense to speak of the internal energy at a point, while when local thermodynamic equilibrium prevails, it makes good sense to speak of the temperature at a point. Consequently, the temperature can vary from point to point in a medium that is not in global thermodynamic equilibrium, but in which there is local thermodynamic equilibrium. Thus, when local thermodynamic equilibrium prevails in a body, the temperature can be regarded as a spatially varying local property in that body, and this is because the temperature is an intensive variable. 
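As an illustrative check of the second of these relations (the reciprocal of temperature as the partial derivative of entropy with respect to internal energy at fixed volume and particle number), the short symbolic sketch below differentiates the standard Sackur–Tetrode entropy of a monatomic ideal gas; that closed form is our own addition, not part of the text above:

```python
import sympy as sp

U, V, N, kB, m, h = sp.symbols("U V N k_B m h", positive=True)

# Sackur-Tetrode entropy S(U, V, N) of a monatomic ideal gas.
S = N * kB * (sp.log((V / N) * (4 * sp.pi * m * U / (3 * N * h**2)) ** sp.Rational(3, 2))
              + sp.Rational(5, 2))

inv_T = sp.diff(S, U)          # 1/T = (dS/dU) at constant V and N
T = sp.simplify(1 / inv_T)
print(T)                       # 2*U/(3*N*k_B), i.e. U = (3/2) N kB T
```

Differentiating recovers the familiar relation between internal energy and temperature for this gas, which is one way of seeing that the abstract definition reproduces the kinetic-theory result.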
Basic theory Temperature is a measure of a quality of a state of a material. The quality may be regarded as a more abstract entity than any particular temperature scale that measures it, and is called hotness by some writers. The quality of hotness refers to the state of material only in a particular locality, and in general, apart from bodies held in a steady state of thermodynamic equilibrium, hotness varies from place to place. It is not necessarily the case that a material in a particular place is in a state that is steady and nearly homogeneous enough to allow it to have a well-defined hotness or temperature. Hotness may be represented abstractly as a one-dimensional manifold. Every valid temperature scale has its own one-to-one map into the hotness manifold. When two systems in thermal contact are at the same temperature no heat transfers between them. When a temperature difference does exist heat flows spontaneously from the warmer system to the colder system until they are in thermal equilibrium. Such heat transfer occurs by conduction or by thermal radiation. Experimental physicists, for example Galileo and Newton, found that there are indefinitely many empirical temperature scales. Nevertheless, the zeroth law of thermodynamics says that they all measure the same quality. This means that for a body in its own state of internal thermodynamic equilibrium, every correctly calibrated thermometer, of whatever kind, that measures the temperature of the body, records one and the same temperature. For a body that is not in its own state of internal thermodynamic equilibrium, different thermometers can record different temperatures, depending respectively on the mechanisms of operation of the thermometers. Bodies in thermodynamic equilibrium For experimental physics, hotness means that, when comparing any two given bodies in their respective separate thermodynamic equilibria, any two suitably given empirical thermometers with numerical scale readings will agree as to which is the hotter of the two given bodies, or that they have the same temperature. This does not require the two thermometers to have a linear relation between their numerical scale readings, but it does require that the relation between their numerical readings shall be strictly monotonic. A definite sense of greater hotness can be had, independently of calorimetry, of thermodynamics, and of properties of particular materials, from Wien's displacement law of thermal radiation: the temperature of a bath of thermal radiation is proportional, by a universal constant, to the frequency of the maximum of its frequency spectrum; this frequency is always positive, but can have values that tend to zero. Thermal radiation is initially defined for a cavity in thermodynamic equilibrium. These physical facts justify a mathematical statement that hotness exists on an ordered one-dimensional manifold. This is a fundamental character of temperature and thermometers for bodies in their own thermodynamic equilibrium. Except for a system undergoing a first-order phase change such as the melting of ice, as a closed system receives heat, without a change in its volume and without a change in external force fields acting on it, its temperature rises. For a system undergoing such a phase change so slowly that departure from thermodynamic equilibrium can be neglected, its temperature remains constant as the system is supplied with latent heat. 
Conversely, a loss of heat from a closed system, without phase change, without change of volume, and without a change in external force fields acting on it, decreases its temperature. Bodies in a steady state but not in thermodynamic equilibrium While for bodies in their own thermodynamic equilibrium states, the notion of temperature requires that all empirical thermometers must agree as to which of two bodies is the hotter or that they are at the same temperature, this requirement is not safe for bodies that are in steady states though not in thermodynamic equilibrium. It can then well be that different empirical thermometers disagree about which is hotter, and if this is so, then at least one of the bodies does not have a well-defined absolute thermodynamic temperature. Nevertheless, any one given body and any one suitable empirical thermometer can still support notions of empirical, non-absolute, hotness, and temperature, for a suitable range of processes. This is a matter for study in non-equilibrium thermodynamics. Bodies not in a steady state When a body is not in a steady-state, then the notion of temperature becomes even less safe than for a body in a steady state not in thermodynamic equilibrium. This is also a matter for study in non-equilibrium thermodynamics. Thermodynamic equilibrium axiomatics For the axiomatic treatment of thermodynamic equilibrium, since the 1930s, it has become customary to refer to a zeroth law of thermodynamics. The customarily stated minimalist version of such a law postulates only that all bodies, which when thermally connected would be in thermal equilibrium, should be said to have the same temperature by definition, but by itself does not establish temperature as a quantity expressed as a real number on a scale. A more physically informative version of such a law views empirical temperature as a chart on a hotness manifold. While the zeroth law permits the definitions of many different empirical scales of temperature, the second law of thermodynamics selects the definition of a single preferred, absolute temperature, unique up to an arbitrary scale factor, whence called the thermodynamic temperature. If internal energy is considered as a function of the volume and entropy of a homogeneous system in thermodynamic equilibrium, thermodynamic absolute temperature appears as the partial derivative of internal energy with respect the entropy at constant volume. Its natural, intrinsic origin or null point is absolute zero at which the entropy of any system is at a minimum. Although this is the lowest absolute temperature described by the model, the third law of thermodynamics postulates that absolute zero cannot be attained by any physical system. Heat capacity When an energy transfer to or from a body is only as heat, the state of the body changes. Depending on the surroundings and the walls separating them from the body, various changes are possible in the body. They include chemical reactions, increase of pressure, increase of temperature and phase change. For each kind of change under specified conditions, the heat capacity is the ratio of the quantity of heat transferred to the magnitude of the change. For example, if the change is an increase in temperature at constant volume, with no phase change and no chemical change, then the temperature of the body rises and its pressure increases. 
The quantity of heat transferred, ΔQ, divided by the observed temperature change, ΔT, is the body's heat capacity at constant volume:

C_V = ΔQ / ΔT.

If heat capacity is measured for a well-defined amount of substance, the specific heat is the measure of the heat required to increase the temperature of such a unit quantity by one unit of temperature. For example, raising the temperature of water by one kelvin (equal to one degree Celsius) requires 4186 joules per kilogram (J/kg).

Measurement

Temperature measurement using modern scientific thermometers and temperature scales goes back at least as far as the early 18th century, when Daniel Gabriel Fahrenheit adapted a thermometer (switching to mercury) and a scale both developed by Ole Christensen Rømer. Fahrenheit's scale is still in use in the United States for non-scientific applications. Temperature is measured with thermometers that may be calibrated to a variety of temperature scales. In most of the world (except for Belize, Myanmar, Liberia and the United States), the Celsius scale is used for most temperature measuring purposes. Most scientists measure temperature using the Celsius scale and thermodynamic temperature using the Kelvin scale, which is the Celsius scale offset so that its null point is 0 K = −273.15 °C, or absolute zero. Many engineering fields in the US, notably high-tech and US federal specifications (civil and military), also use the Kelvin and Celsius scales. Other engineering fields in the US also rely upon the Rankine scale (a shifted Fahrenheit scale) when working in thermodynamic-related disciplines such as combustion.

Units

The basic unit of temperature in the International System of Units (SI) is the kelvin. It has the symbol K. For everyday applications, it is often convenient to use the Celsius scale, in which 0 °C corresponds very closely to the freezing point of water and 100 °C is its boiling point at sea level. Because liquid droplets commonly exist in clouds at sub-zero temperatures, 0 °C is better defined as the melting point of ice. In this scale, a temperature difference of 1 degree Celsius is the same as a 1 kelvin increment, but the scale is offset by the temperature at which ice melts (273.15 K). By international agreement, until May 2019, the Kelvin and Celsius scales were defined by two fixing points: absolute zero and the triple point of Vienna Standard Mean Ocean Water, which is water specially prepared with a specified blend of hydrogen and oxygen isotopes. Absolute zero was defined as precisely 0 K and −273.15 °C. It is the temperature at which all classical translational motion of the particles comprising matter ceases and they are at complete rest in the classical model. Quantum-mechanically, however, zero-point motion remains and has an associated energy, the zero-point energy. Matter is in its ground state, and contains no thermal energy. The temperatures 273.16 K and 0.01 °C were defined as those of the triple point of water. This definition served the following purposes: it fixed the magnitude of the kelvin as being precisely 1 part in 273.16 parts of the difference between absolute zero and the triple point of water; it established that one kelvin has precisely the same magnitude as one degree on the Celsius scale; and it established the difference between the null points of these scales as being 273.15 K (0 K = −273.15 °C and 273.16 K = 0.01 °C). Since 2019, there has been a new definition based on the Boltzmann constant, but the scales are scarcely changed. In the United States, the Fahrenheit scale is the most widely used. On this scale the freezing point of water corresponds to 32 °F and the boiling point to 212 °F.
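Returning to the heat-capacity relation quoted earlier in this section, here is a small worked example using the rounded specific heat of liquid water, 4186 J/(kg·K); the numbers are illustrative only:

```python
SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K), rounded value for liquid water

def heat_required(mass_kg: float, delta_t_k: float) -> float:
    """Q = m * c * delta_T for a temperature change with no phase change."""
    return mass_kg * SPECIFIC_HEAT_WATER * delta_t_k

# Warming 1.5 kg of water from 20 deg C to 100 deg C (an 80 K rise):
print(f"{heat_required(1.5, 80.0) / 1000:.0f} kJ")   # about 502 kJ
```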
The Rankine scale, still used in fields of chemical engineering in the US, is an absolute scale based on the Fahrenheit increment.

Historical scales

The following temperature scales are in use or have historically been used for measuring temperature:
Kelvin scale
Celsius scale
Fahrenheit scale
Rankine scale
Delisle scale
Newton scale
Réaumur scale
Rømer scale

Plasma physics

The field of plasma physics deals with phenomena of electromagnetic nature that involve very high temperatures. It is customary to express temperature as energy in a unit related to the electronvolt or kiloelectronvolt (eV/kB or keV/kB). The corresponding energy, which is dimensionally distinct from temperature, is then calculated as the product of the Boltzmann constant and temperature, E = kBT. Then, 1 eV/kB is approximately 11,605 K. In the study of QCD matter one routinely encounters temperatures of the order of a few hundred MeV/kB, equivalent to about 10^12 K.

Continuous or discrete

When one measures the variation of temperature across a region of space or time, do the temperature measurements turn out to be continuous or discrete? There is a widely held misconception that such temperature measurements must always be continuous. This misconception partly originates from the historical view associated with the continuity of classical physical quantities, which states that physical quantities must assume every intermediate value between a starting value and a final value. However, this classical picture holds only when temperature is measured in a system that is in equilibrium; outside those conditions, temperature need not be continuous. For systems outside equilibrium, such as at interfaces between materials (e.g., a metal/non-metal interface or a liquid–vapour interface), temperature measurements may show steep discontinuities in time and space. For instance, Fang and Ward were some of the first authors to successfully report temperature discontinuities of as much as 7.8 K at the surface of evaporating water droplets. This was reported at inter-molecular scales, or at the scale of the mean free path of molecules, which is typically of the order of a few micrometers in gases at room temperature. Generally speaking, temperature discontinuities are considered to be the norm rather than the exception in cases of interfacial heat transfer. This is due to the abrupt change in the vibrational or thermal properties of the materials across such interfaces, which prevents instantaneous transfer of heat and the establishment of thermal equilibrium (a prerequisite for a uniform equilibrium temperature across the interface). Further, temperature measurements at the macro-scale (the typical observational scale) may be too coarse-grained, since they average the microscopic thermal information over the representative sample volume of the control system, so temperature discontinuities at the micro-scale are likely to be overlooked in such averages. Such averaging may even produce incorrect or misleading results in many cases of temperature measurement, even at macro-scales, so it is prudent to examine the micro-physical information carefully before averaging or smoothing out any potential temperature discontinuities in a system, as such discontinuities cannot always be averaged or smoothed out. Temperature discontinuities, rather than merely being anomalies, have substantially improved our understanding and predictive abilities pertaining to heat transfer at small scales.
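Referring back to the plasma-physics convention above, the eV-to-kelvin conversion is a fixed ratio of two exactly defined SI constants, so it can be computed directly; a minimal sketch (the function name is our own):

```python
K_B     = 1.380649e-23      # Boltzmann constant, J/K (exact SI value)
EV_IN_J = 1.602176634e-19   # one electronvolt in joules (exact SI value)

def ev_to_kelvin(energy_ev: float) -> float:
    """Temperature whose characteristic thermal energy k_B*T equals the given energy."""
    return energy_ev * EV_IN_J / K_B

print(f"1 eV/kB    -> {ev_to_kelvin(1.0):,.0f} K")     # ~11,605 K
print(f"200 MeV/kB -> {ev_to_kelvin(200e6):.2e} K")    # ~2.3e12 K, the QCD-matter scale
```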
Theoretical foundation Historically, there are several scientific approaches to the explanation of temperature: the classical thermodynamic description based on macroscopic empirical variables that can be measured in a laboratory; the kinetic theory of gases which relates the macroscopic description to the probability distribution of the energy of motion of gas particles; and a microscopic explanation based on statistical physics and quantum mechanics. In addition, rigorous and purely mathematical treatments have provided an axiomatic approach to classical thermodynamics and temperature. Statistical physics provides a deeper understanding by describing the atomic behavior of matter and derives macroscopic properties from statistical averages of microscopic states, including both classical and quantum states. In the fundamental physical description, the temperature may be measured directly in units of energy. However, in the practical systems of measurement for science, technology, and commerce, such as the modern metric system of units, the macroscopic and the microscopic descriptions are interrelated by the Boltzmann constant, a proportionality factor that scales temperature to the microscopic mean kinetic energy. The microscopic description in statistical mechanics is based on a model that analyzes a system into its fundamental particles of matter or into a set of classical or quantum-mechanical oscillators and considers the system as a statistical ensemble of microstates. As a collection of classical material particles, the temperature is a measure of the mean energy of motion, called translational kinetic energy, of the particles, whether in solids, liquids, gases, or plasmas. The kinetic energy, a concept of classical mechanics, is half the mass of a particle times its speed squared. In this mechanical interpretation of thermal motion, the kinetic energies of material particles may reside in the velocity of the particles of their translational or vibrational motion or in the inertia of their rotational modes. In monatomic perfect gases and, approximately, in most gas and in simple metals, the temperature is a measure of the mean particle translational kinetic energy, 3/2 kBT. It also determines the probability distribution function of energy. In condensed matter, and particularly in solids, this purely mechanical description is often less useful and the oscillator model provides a better description to account for quantum mechanical phenomena. Temperature determines the statistical occupation of the microstates of the ensemble. The microscopic definition of temperature is only meaningful in the thermodynamic limit, meaning for large ensembles of states or particles, to fulfill the requirements of the statistical model. Kinetic energy is also considered as a component of thermal energy. The thermal energy may be partitioned into independent components attributed to the degrees of freedom of the particles or to the modes of oscillators in a thermodynamic system. In general, the number of these degrees of freedom that are available for the equipartitioning of energy depends on the temperature, i.e. the energy region of the interactions under consideration. For solids, the thermal energy is associated primarily with the vibrations of its atoms or molecules about their equilibrium position. In an ideal monatomic gas, the kinetic energy is found exclusively in the purely translational motions of the particles. 
In other systems, vibrational and rotational motions also contribute degrees of freedom. Kinetic theory of gases Maxwell and Boltzmann developed a kinetic theory that yields a fundamental understanding of temperature in gases. This theory also explains the ideal gas law and the observed heat capacity of monatomic (or 'noble') gases. The ideal gas law is based on observed empirical relationships between pressure (p), volume (V), and temperature (T), and was recognized long before the kinetic theory of gases was developed (see Boyle's and Charles's laws). The ideal gas law states: where n is the number of moles of gas and is the gas constant. This relationship gives us our first hint that there is an absolute zero on the temperature scale, because it only holds if the temperature is measured on an absolute scale such as Kelvin's. The ideal gas law allows one to measure temperature on this absolute scale using the gas thermometer. The temperature in kelvins can be defined as the pressure in pascals of one mole of gas in a container of one cubic meter, divided by the gas constant. Although it is not a particularly convenient device, the gas thermometer provides an essential theoretical basis by which all thermometers can be calibrated. As a practical matter, it is not possible to use a gas thermometer to measure absolute zero temperature since the gases condense into a liquid long before the temperature reaches zero. It is possible, however, to extrapolate to absolute zero by using the ideal gas law, as shown in the figure. The kinetic theory assumes that pressure is caused by the force associated with individual atoms striking the walls, and that all energy is translational kinetic energy. Using a sophisticated symmetry argument, Boltzmann deduced what is now called the Maxwell–Boltzmann probability distribution function for the velocity of particles in an ideal gas. From that probability distribution function, the average kinetic energy (per particle) of a monatomic ideal gas is where the Boltzmann constant is the ideal gas constant divided by the Avogadro number, and is the root-mean-square speed. This direct proportionality between temperature and mean molecular kinetic energy is a special case of the equipartition theorem, and holds only in the classical limit of a perfect gas. It does not hold exactly for most substances. Zeroth law of thermodynamics When two otherwise isolated bodies are connected together by a rigid physical path impermeable to matter, there is the spontaneous transfer of energy as heat from the hotter to the colder of them. Eventually, they reach a state of mutual thermal equilibrium, in which heat transfer has ceased, and the bodies' respective state variables have settled to become unchanging. One statement of the zeroth law of thermodynamics is that if two systems are each in thermal equilibrium with a third system, then they are also in thermal equilibrium with each other. This statement helps to define temperature but it does not, by itself, complete the definition. An empirical temperature is a numerical scale for the hotness of a thermodynamic system. Such hotness may be defined as existing on a one-dimensional manifold, stretching between hot and cold. Sometimes the zeroth law is stated to include the existence of a unique universal hotness manifold, and of numerical scales on it, so as to provide a complete definition of empirical temperature. 
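The gas-thermometer extrapolation mentioned above can be sketched numerically. In the example below the constant-volume pressure readings are made up for illustration (generated from ideal-gas behaviour); a straight-line fit of pressure against Celsius temperature intercepts zero pressure near −273 °C:

```python
import numpy as np

# Hypothetical constant-volume gas-thermometer data: temperature (deg C), pressure (kPa).
t_c   = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
p_kpa = np.array([101.3, 110.6, 119.9, 129.1, 138.4])

# Pressure is linear in temperature at fixed volume, so fit a line
# and find where the pressure would vanish.
slope, intercept = np.polyfit(t_c, p_kpa, 1)
absolute_zero_c = -intercept / slope
print(f"Extrapolated absolute zero: {absolute_zero_c:.1f} deg C")   # about -273 deg C
```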
To be suitable for empirical thermometry, a material must have a monotonic relation between hotness and some easily measured state variable, such as pressure or volume, when all other relevant coordinates are fixed. An exceptionally suitable system is the ideal gas, which can provide a temperature scale that matches the absolute Kelvin scale. The Kelvin scale is defined on the basis of the second law of thermodynamics.

Second law of thermodynamics

As an alternative to considering or defining the zeroth law of thermodynamics, it was the historical development in thermodynamics to define temperature in terms of the second law of thermodynamics, which deals with entropy. The second law states that any process will result in either no change or a net increase in the entropy of the universe. This can be understood in terms of probability. For example, in a series of coin tosses, a perfectly ordered system would be one in which either every toss comes up heads or every toss comes up tails, so that every toss has the same outcome. In contrast, many mixed (disordered) outcomes are possible, and their number increases with each toss. Eventually, the combinations of ~50% heads and ~50% tails dominate, and obtaining an outcome significantly different from 50/50 becomes increasingly unlikely. Thus the system naturally progresses to a state of maximum disorder or entropy. As temperature governs the transfer of heat between two systems and the universe tends to progress toward a maximum of entropy, it is expected that there is some relationship between temperature and entropy. A heat engine is a device for converting thermal energy into mechanical energy, resulting in the performance of work. An analysis of the Carnot heat engine provides the necessary relationships. According to energy conservation and energy being a state function that does not change over a full cycle, the work from a heat engine over a full cycle is equal to the net heat, i.e. the sum of the heat put into the system at high temperature, qH > 0, and the waste heat given off at the low temperature, qC < 0. The efficiency is the work divided by the heat input:

(4)   efficiency = wcy / qH = (qH + qC) / qH = 1 + qC/qH

where wcy is the work done per cycle. The efficiency depends only on |qC|/qH. Because qC and qH correspond to heat transfer at the temperatures TC and TH, respectively, |qC|/qH should be some function of these temperatures:

(5)   |qC|/qH = f(TH, TC)

Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, a heat engine operating between T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and T2, and the second between T2 and T3. This can only be the case if, writing q1, q2, q3 for the magnitudes of the heats exchanged at T1, T2, and T3,

q3/q1 = (q2/q1)(q3/q2),

which implies

f(T1, T3) = f(T1, T2) f(T2, T3).

Since the first function is independent of T2, this temperature must cancel on the right side, meaning f(T1, T3) is of the form g(T1)/g(T3) (i.e. f(T1, T3) = f(T1, T2) f(T2, T3) = [g(T1)/g(T2)][g(T2)/g(T3)] = g(T1)/g(T3)), where g is a function of a single temperature. A temperature scale can now be chosen with the property that

(6)   |qC|/qH = TC/TH

Substituting (6) back into (4) gives a relationship for the efficiency in terms of temperature:

(7)   efficiency = 1 + qC/qH = 1 − TC/TH

For TC = 0 K the efficiency is 100%, and the efficiency would become greater than 100% below 0 K. Since an efficiency greater than 100% violates the first law of thermodynamics, this implies that 0 K is the minimum possible temperature. In fact, the lowest temperature ever obtained in a macroscopic system was 20 nK, which was achieved in 1995 at NIST.
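A direct numerical illustration of the efficiency relation (7) just derived, with arbitrary example reservoir temperatures:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum (reversible) engine efficiency between two reservoirs: 1 - TC/TH."""
    if t_cold_k < 0.0 or t_hot_k <= t_cold_k:
        raise ValueError("require 0 <= T_C < T_H on the Kelvin scale")
    return 1.0 - t_cold_k / t_hot_k

# Example: an 850 K heat source rejecting to a 300 K environment.
print(f"{carnot_efficiency(850.0, 300.0):.1%}")   # ~64.7%
# As the cold reservoir approaches 0 K, the limiting efficiency approaches 100%.
print(f"{carnot_efficiency(850.0, 0.0):.1%}")     # 100.0%
```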
Combining (6) with (4) and rearranging gives

qH/TH + qC/TC = 0

where the negative sign indicates heat ejected from the system. This relationship suggests the existence of a state function, S, whose change characteristically vanishes for a complete cycle if it is defined by

(8)   dS = dqrev / T

where the subscript indicates a reversible process. This function corresponds to the entropy of the system, which was described previously. Rearranging (8) gives a formula for temperature in terms of fictive infinitesimal quasi-reversible elements of entropy and heat:

(9)   T = dqrev / dS

For a constant-volume system where entropy S(E) is a function of its energy E, dE = dqrev and (9) gives

(10)   1/T = dS/dE

i.e. the reciprocal of the temperature is the rate of increase of entropy with respect to energy at constant volume.

Definition from statistical mechanics

Statistical mechanics defines temperature based on a system's fundamental degrees of freedom. Eq. (10) is the defining relation of temperature, where the entropy is defined (up to a constant) by the logarithm of the number of microstates of the system in the given macrostate (as specified in the microcanonical ensemble):

S = kB ln W

where kB is the Boltzmann constant and W is the number of microstates with the energy E of the system (degeneracy). When two systems with different temperatures are put into purely thermal connection, heat will flow from the higher temperature system to the lower temperature one; thermodynamically this is understood by the second law of thermodynamics: the total change in entropy following a transfer of energy ΔE from system 1 to system 2 is

ΔS = ΔS1 + ΔS2 = −ΔE/T1 + ΔE/T2

and is thus positive if T1 > T2. From the point of view of statistical mechanics, the total number of microstates in the combined system 1 + system 2 is W1·W2, the logarithm of which (times the Boltzmann constant) is the sum of their entropies; thus a flow of heat from high to low temperature, which brings an increase in total entropy, is more likely than any other scenario (normally it is much more likely), as there are more microstates in the resulting macrostate.

Generalized temperature from single-particle statistics

It is possible to extend the definition of temperature even to systems of few particles, like in a quantum dot. The generalized temperature is obtained by considering time ensembles instead of the configuration-space ensembles given in statistical mechanics, in the case of thermal and particle exchange between a small system of fermions (N even less than 10) and a single/double-occupancy system. The finite quantum grand canonical ensemble, obtained under the hypothesis of ergodicity and orthodicity, allows expressing the generalized temperature from the ratio of the average occupation times of the single-occupancy and double-occupancy configurations; the Fermi energy EF enters the resulting expression as a parameter. This generalized temperature tends to the ordinary temperature when N goes to infinity.

Negative temperature

On the empirical temperature scales that are not referenced to absolute zero, a negative temperature is one below the zero point of the scale used. For example, dry ice has a sublimation temperature of −78.5 °C, which is equivalent to −109.3 °F. On the absolute Kelvin scale this temperature is 194.6 K. No body of matter can be brought to exactly 0 K (the temperature of the ideally coldest possible body) by any finite practicable process; this is a consequence of the third law of thermodynamics. The internal kinetic theory states that the temperature of a body of matter cannot take negative values. The thermodynamic temperature scale, however, is not so constrained.
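The two ideas just introduced — the statistical definition S = kB ln W and the possibility of negative thermodynamic temperature in a subsystem whose energy is bounded above — can be illustrated with a toy model of N independent two-level spins (our own example, not from the text). The script counts microstates exactly and estimates 1/T = dS/dE by finite differences; T is positive while entropy still rises with energy and becomes negative past the entropy maximum at half filling:

```python
from math import comb, log

K_B = 1.380649e-23   # Boltzmann constant, J/K
EPS = 1.0e-22        # energy of one excited spin, J (arbitrary toy value)
N = 100              # number of two-level spins

def entropy(n_up: int) -> float:
    """S = k_B ln W, with W = C(N, n_up) microstates at energy E = n_up * EPS."""
    return K_B * log(comb(N, n_up))

# 1/T = dS/dE estimated by a centred finite difference.
# At n_up = N/2 the entropy is maximal, dS/dE passes through zero,
# and the temperature flips from +infinity to -infinity.
for n_up in (10, 40, 60, 90):
    inv_T = (entropy(n_up + 1) - entropy(n_up - 1)) / (2 * EPS)
    print(f"n_up = {n_up:3d}:  T = {1.0 / inv_T:+.2e} K")
```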
A body of matter can sometimes be conceptually defined in terms of microscopic degrees of freedom, namely particle spins, a subsystem with a temperature other than that of the whole body. When the body is in its state of internal thermodynamic equilibrium, the temperatures of the entire body and the subsystem must be the same. The two temperatures can differ when, by work through externally imposed force fields, energy can be transferred to and from the subsystem, separately from the rest of the body; then, the whole body is not in its own state of internal thermodynamic equilibrium. There is an upper limit of energy such a spin subsystem can attain. Considering the subsystem to be in a temporary state of virtual thermodynamic equilibrium, obtaining a negative temperature on the thermodynamic scale is possible. Thermodynamic temperature is the inverse of the derivative of the subsystem's entropy for its internal energy. As the subsystem's internal energy increases, the entropy increases for some range but eventually attains a maximum value and then begins to decrease as the highest energy states begin to fill. At the point of maximum entropy, the temperature function shows the behavior of a singularity because the slope of the entropy as a function of energy decreases to zero and then turns negative. As the subsystem's entropy reaches its maximum, its thermodynamic temperature goes to positive infinity, switching to negative infinity as the slope turns negative. Such negative temperatures are hotter than any positive temperature. Over time, when the subsystem is exposed to the rest of the body, which has a positive temperature, energy is transferred as heat from the negative temperature subsystem to the positive temperature system. The kinetic theory temperature is not defined for such subsystems. Examples See also (thermoregulation) List of cities by average temperature Notes and references Notes Citations Bibliography of cited references Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (1st edition 1968), third edition 1983, Cambridge University Press, Cambridge UK, . Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, Cambridge. Jaynes, E.T. (1965). Gibbs vs Boltzmann entropies, American Journal of Physics, 33(5), 391–398. Middleton, W.E.K. (1966). A History of the Thermometer and its Use in Metrology, Johns Hopkins Press, Baltimore. Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green & Co., London, pp. 175–177. Pippard, A.B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK. Quinn, T.J. (1983). Temperature, Academic Press, London, . Schooley, J.F. (1986). Thermometry, CRC Press, Boca Raton, . Roberts, J.K., Miller, A.R. (1928/1960). Heat and Thermodynamics, (first edition 1928), fifth edition, Blackie & Son Limited, Glasgow. Truesdell, C.A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York, . Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, . Further reading Chang, Hasok (2004). Inventing Temperature: Measurement and Scientific Progress. Oxford: Oxford University Press. . Zemansky, Mark Waldo (1964). Temperatures Very Low and Very High. Princeton, NJ: Van Nostrand. Bíró, Tamás Sándor (2011). Is There a Temperature? 
Conceptual Challenges at High Energy, Acceleration and Complexity. Springer, ISBN 978-1-4419-8040-3. External links Current map of global surface temperatures
Temperature
[ "Physics", "Chemistry" ]
11,159
[ "Scalar physical quantities", "Thermodynamic properties", "Temperature", "Physical quantities", "SI base quantities", "Intensive quantities", "Thermodynamics", "Wikipedia categories named after physical quantities" ]
20,647,505
https://en.wikipedia.org/wiki/Strangelet
A strangelet (pronounced ) is a hypothetical particle consisting of a bound state of roughly equal numbers of up, down, and strange quarks. An equivalent description is that a strangelet is a small fragment of strange matter, small enough to be considered a particle. The size of an object composed of strange matter could, theoretically, range from a few femtometers across (with the mass of a light nucleus) to arbitrarily large. Once the size becomes macroscopic (on the order of metres across), such an object is usually called a strange star. The term "strangelet" originates with Edward Farhi and Robert Jaffe in 1984. It has been theorized that strangelets can convert matter to strange matter on contact. Strangelets have also been suggested as a dark matter candidate. Theoretical possibility Strange matter hypothesis The known particles with strange quarks are unstable. Because the strange quark is heavier than the up and down quarks, it can spontaneously decay, via the weak interaction, into an up quark. Consequently, particles containing strange quarks, such as the lambda particle, always lose their strangeness, by decaying into lighter particles containing only up and down quarks. However, condensed states with a larger number of quarks might not suffer from this instability. That possible stability against decay is the "strange matter hypothesis", proposed separately by Arnold Bodmer and Edward Witten. According to this hypothesis, when a large enough number of quarks are concentrated together, the lowest energy state is one which has roughly equal numbers of up, down, and strange quarks, namely a strangelet. This stability would occur because of the Pauli exclusion principle; having three types of quarks, rather than two as in normal nuclear matter, allows more quarks to be placed in lower energy levels. Relationship with nuclei A nucleus is a collection of a number of up and down quarks (in some nuclei a fairly large number), confined into triplets (neutrons and protons). According to the strange matter hypothesis, strangelets are more stable than nuclei, so nuclei are expected to decay into strangelets. But this process may be extremely slow because there is a large energy barrier to overcome: as the weak interaction starts making a nucleus into a strangelet, the first few strange quarks form strange baryons, such as the Lambda, which are heavy. Only if many conversions occur almost simultaneously will the number of strange quarks reach the critical proportion required to achieve a lower energy state. This is very unlikely to happen, so even if the strange matter hypothesis were correct, nuclei would never be seen to decay to strangelets because their lifetime would be longer than the age of the universe. Size The stability of strangelets depends on their size, because of surface tension at the interface between quark matter and vacuum (which affects small strangelets more than big ones). The surface tension of strange matter is unknown. If it is smaller than a critical value (a few MeV per square femtometer) then large strangelets are unstable and will tend to fission into smaller strangelets (strange stars would still be stabilized by gravity). If it is larger than the critical value, then strangelets become more stable as they get bigger. screening of charges, which allows small strangelets to be charged, with a neutralizing cloud of electrons/positrons around them, but requires large strangelets, like any large piece of matter, to be electrically neutral in their interior. 
The charge screening distance tends to be of the order of a few femtometers, so only the outer few femtometers of a strangelet can carry charge. Natural or artificial occurrence Although nuclei do not decay to strangelets, there are other ways to create strangelets, so if the strange matter hypothesis is correct there should be strangelets in the universe. There are at least three ways they might be created in nature: Cosmogonically, i.e. in the early universe when the QCD confinement phase transition occurred. It is possible that strangelets were created along with the neutrons and protons that form ordinary matter. High-energy processes. The universe is full of very high-energy particles (cosmic rays). It is possible that when these collide with each other or with neutron stars they may provide enough energy to overcome the energy barrier and create strangelets from nuclear matter. Some identified exotic cosmic ray events, such as "Price's event"—i.e., those with very low charge-to-mass ratios (as the s-quark itself possesses charge commensurate with the more-familiar d-quark, but is much more massive)—could have already registered strangelets. Cosmic ray impacts. In addition to head-on collisions of cosmic rays, ultra high energy cosmic rays impacting on Earth's atmosphere may create strangelets. These scenarios offer possibilities for observing strangelets. If strangelets can be produced in high-energy collisions, then they might be produced by heavy-ion colliders. Similarly, if there are strangelets flying around the universe, then occasionally a strangelet should hit Earth, where it may appear as an exotic type of cosmic ray; alternatively, a stable strangelet could end up incorporated into the bulk of the Earth's matter, acquiring an electron shell proportional to its charge and hence appearing as an anomalously heavy isotope of the appropriate element—though searches for such anomalous "isotopes" have, so far, been unsuccessful. Accelerator production At heavy ion accelerators like the Relativistic Heavy Ion Collider (RHIC), nuclei are collided at relativistic speeds, creating strange and antistrange quarks that could conceivably lead to strangelet production. The experimental signature of a strangelet would be its very high ratio of mass to charge, which would cause its trajectory in a magnetic field to be very nearly, but not quite, straight. The STAR collaboration has searched for strangelets produced at the RHIC, but none were found. The Large Hadron Collider (LHC) is even less likely to produce strangelets, but searches are planned for the LHC ALICE detector. Space-based detection The Alpha Magnetic Spectrometer (AMS), an instrument that is mounted on the International Space Station, could detect strangelets. Possible seismic detection In May 2002, a group of researchers at Southern Methodist University reported the possibility that strangelets may have been responsible for seismic events recorded on October 22 and November 24 in 1993. The authors later retracted their claim, after finding that the clock of one of the seismic stations had a large error during the relevant period. It has been suggested that the International Monitoring System be set up to verify the Comprehensive Nuclear Test Ban Treaty (CTBT) after entry into force may be useful as a sort of "strangelet observatory" using the entire Earth as its detector. 
The IMS will be designed to detect anomalous seismic disturbances down to energy release or less, and could be able to track strangelets passing through Earth in real time if properly exploited. Impacts on Solar System bodies It has been suggested that strangelets of subplanetary (i.e. heavy meteorite) mass would puncture planets and other Solar System objects, leading to impact craters which show characteristic features. Potential propagation If the strange matter hypothesis is correct, and if a stable negatively-charged strangelet with a surface tension larger than the aforementioned critical value exists, then a larger strangelet would be more stable than a smaller one. One speculation that has resulted from the idea is that a strangelet coming into contact with a lump of ordinary matter could over time convert the ordinary matter to strange matter. This is not a concern for strangelets in cosmic rays because they are produced far from Earth and have had time to decay to their ground state, which is predicted by most models to be positively charged, so they are electrostatically repelled by nuclei, and would rarely merge with them. On the other hand, high-energy collisions could produce negatively charged strangelet states, which could live long enough to interact with the nuclei of ordinary matter. The danger of catalyzed conversion by strangelets produced in heavy-ion colliders has received some media attention, and concerns of this type were raised at the commencement of the RHIC experiment at Brookhaven, which could potentially have created strangelets. A detailed analysis concluded that the RHIC collisions were comparable to ones which naturally occur as cosmic rays traverse the Solar System, so we would already have seen such a disaster if it were possible. RHIC has been operating since 2000 without incident. Similar concerns have been raised about the operation of the LHC at CERN but such fears are dismissed as far-fetched by scientists. In the case of a neutron star, the conversion scenario may be more plausible. A neutron star is in a sense a giant nucleus (20 km across), held together by gravity, but it is electrically neutral and would not electrostatically repel strangelets. If a strangelet hit a neutron star, it might catalyze quarks near its surface to form into more strange matter, potentially continuing until the entire star became a strange star. Debate about the strange matter hypothesis The strange matter hypothesis remains unproven. No direct search for strangelets in cosmic rays or particle accelerators has yet confirmed a strangelet. If any of the objects such as neutron stars could be shown to have a surface made of strange matter, this would indicate that strange matter is stable at zero pressure, which would vindicate the strange matter hypothesis. However, there is no strong evidence for strange matter surfaces on neutron stars. Another argument against the hypothesis is that if it were true, essentially all neutron stars should be made of strange matter, and otherwise none should be. Even if there were only a few strange stars initially, violent events such as collisions would soon create many fragments of strange matter flying around the universe. Because collision with a single strangelet would convert a neutron star to strange matter, all but a few of the most recently formed neutron stars should by now have already been converted to strange matter. 
This argument is still debated, but if it is correct then showing that one old neutron star has a conventional nuclear matter crust would disprove the strange matter hypothesis. Because of its importance for the strange matter hypothesis, there is an ongoing effort to determine whether the surfaces of neutron stars are made of strange matter or nuclear matter. The evidence currently favors nuclear matter. This comes from the phenomenology of X-ray bursts, which is well explained in terms of a nuclear matter crust, and from measurement of seismic vibrations in magnetars. In fiction An episode of Odyssey 5 featured an attempt to destroy the planet by intentionally creating negatively charged strangelets in a particle accelerator. The BBC docudrama End Day features a scenario where a particle accelerator in New York City explodes, creating a strangelet and starting a catastrophic chain reaction which destroys Earth. The story A Matter most Strange in the collection Indistinguishable from Magic by Robert L. Forward deals with the making of a strangelet in a particle accelerator. Impact, published in 2010 and written by Douglas Preston, deals with an alien machine that creates strangelets. The machine's strangelets impact the Earth and Moon and pass through. The novel Phobos, published in 2011 and written by Steve Alten as the third and final part of his Domain trilogy, presents a fictional story where strangelets are unintentionally created at the LHC and escape from it to destroy the Earth. In the 1992 black-comedy novel Humans by Donald E. Westlake, an irritated God sends an angel to Earth to bring about Armageddon by means of using a strangelet created in a particle accelerator to convert the Earth into a quark star. In the 2010 film Quantum Apocalypse, a strangelet approaches the Earth from space. In the novel The Quantum Thief by Hannu Rajaniemi and the rest of the trilogy, strangelets are mostly used as weapons, but during an early project to terraform Mars, one was used to convert Phobos into an additional "sun". See also Grey goo Ice-nine Hyperon Further reading References External links Concepts in astrophysics Celestial mechanics Doomsday scenarios Exotic matter History of astronomy History of physics Hypothetical composite particles Hypothetical objects Nuclear physics Quantum chromodynamics Quantum mechanics Quark matter Strange quark Unsolved problems in astronomy Unsolved problems in physics de:Seltsame Materie
Strangelet
[ "Physics", "Astronomy" ]
2,562
[ "Unsolved problems in astronomy", "Hypotheses in physics", "Concepts in astrophysics", "Quark matter", "Concepts in astronomy", "History of astronomy", "Theoretical physics", "Unsolved problems in physics", "Classical mechanics", "Astrophysics", "Quantum mechanics", "Exotic matter", "Astrono...
20,650,838
https://en.wikipedia.org/wiki/Superglass
A superglass is a phase of matter which is characterized by superfluidity and a frozen amorphous structure at the same time. J.C. Séamus Davis theorised that frozen helium-4 (at 0.2 K and 50 atm) may be a superglass. Notes References Superglass could be new state of matter (subscription required) A new quantum glass phase: the superglass Phys. Rev. Lett. Vol.101, 8th Aug 2008 Superfluidity Phases of matter Glass physics
Superglass
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
107
[ "Glass engineering and science", "Physical phenomena", "Phase transitions", "Phases of matter", "Superfluidity", "Glass physics", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
3,470,540
https://en.wikipedia.org/wiki/Zener%20pinning
Zener pinning is the influence of a dispersion of fine particles on the movement of low- and high-angle grain boundaries through a polycrystalline material. Small particles act to prevent the motion of such boundaries by exerting a pinning pressure which counteracts the driving force pushing the boundaries. Zener pinning is very important in materials processing as it has a strong influence on recovery, recrystallization and grain growth. Origin of the pinning force A boundary is an imperfection in the crystal structure and as such is associated with a certain quantity of energy. When a boundary passes through an incoherent particle, the portion of boundary that would be inside the particle essentially ceases to exist. In order to move past the particle some new boundary must be created, and this is energetically unfavourable. While the region of boundary near the particle is pinned, the rest of the boundary continues trying to move forward under its own driving force. This results in the boundary becoming bowed between those points where it is anchored to the particles. Mathematical description The figure illustrates a boundary of interfacial energy γ intersecting an incoherent particle of radius r, meeting the particle surface at an angle θ. The pinning force acts along the line of contact between the boundary and the particle, i.e., a circle of diameter 2r cos θ. The force per unit length of boundary in contact is γ sin θ, where γ is the interfacial energy. Hence, the total force acting on the particle–boundary interface is F = 2πr cos θ · γ sin θ = πrγ sin 2θ. The maximum restraining force occurs when θ = 45°, so F_max = πrγ. In order to determine the pinning force resulting from a given dispersion of particles, Clarence Zener made several important assumptions: The particles are spherical. The passage of the boundary does not alter the particle–boundary interaction. Each particle exerts the maximum pinning force on the boundary, regardless of contact position. The contacts between particles and boundaries are completely random. The number density of particles on the boundary is that expected for a random distribution of particles. For a volume fraction F_v of randomly distributed spherical particles of radius r, the number of particles per unit volume (number density) is given by N_v = 3F_v / (4πr³). From this total number density, only those particles that lie within one particle radius of the boundary will be able to interact with it. If the boundary is essentially planar, then the number of interacting particles per unit area of boundary is n = 2r N_v = 3F_v / (2πr²). Given the assumption that all particles apply the maximum pinning force, F_max, the total pinning pressure exerted by the particle distribution per unit area of the boundary is P_z = n F_max = 3F_v γ / (2r). This is referred to as the Zener pinning pressure. It follows that large pinning pressures are produced by: Increasing the volume fraction of particles Reducing the particle size The Zener pinning pressure is orientation dependent, which means that the exact pinning pressure depends on the degree of coherence at the grain boundaries. A short numerical illustration of these relations is given after the notes below. Computer simulation Particle pinning has been studied extensively with computer simulations, such as Monte Carlo and phase-field methods. These methods can capture interfaces with complex shapes and provide better approximations of the pinning force. Notes According to Current issues in recrystallization: a review, R.D. Doherty et al., Materials Science and Engineering A238 (1997), pp. 219–274. For information on Zener pinning modeling see: - "Contribution à l'étude de la dynamique du Zener pinning: simulations numériques par éléments finis", thesis in French (2003), by G. Couturier. 
- "3D finite element simulation of the inhibition of normal grain growth by particles". Acta Materialia, 53, pp. 977–989, (2005). by G. Couturier, R. Doherty, Cl. Maurice, R. Fortunier. - "3D finite element simulation of Zener pinning dynamics". Philosophical Magazine, vol 83, n° 30, pp. 3387–3405, (2003). by G. Couturier, Cl. Maurice, R. Fortunier. Materials science
Zener pinning
[ "Physics", "Materials_science", "Engineering" ]
789
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
3,471,089
https://en.wikipedia.org/wiki/Recrystallization%20%28metallurgy%29
In materials science, recrystallization is a process by which deformed grains are replaced by a new set of defect-free grains that nucleate and grow until the original grains have been entirely consumed. Recrystallization is usually accompanied by a reduction in the strength and hardness of a material and a simultaneous increase in the ductility. Thus, the process may be introduced as a deliberate step in metals processing or may be an undesirable byproduct of another processing step. The most important industrial uses are softening of metals previously hardened or rendered brittle by cold work, and control of the grain structure in the final product. The recrystallization temperature is typically 0.3–0.4 times the absolute melting temperature for pure metals and about 0.5 times for alloys. Definition Recrystallization is defined as the process in which the deformed grains of a crystal structure are replaced by grains with a new structure or a new crystal shape. A precise definition of recrystallization is difficult to state as the process is strongly related to several other processes, most notably recovery and grain growth. In some cases it is difficult to precisely define the point at which one process begins and another ends. Doherty et al. defined recrystallization as: "... the formation of a new grain structure in a deformed material by the formation and migration of high angle grain boundaries driven by the stored energy of deformation. High angle boundaries are those with greater than a 10–15° misorientation." Thus the process can be differentiated from recovery (where high angle grain boundaries do not migrate) and grain growth (where the driving force is only due to the reduction in boundary area). Recrystallization may occur during or after deformation (during cooling or subsequent heat treatment, for example). The former is termed dynamic while the latter is termed static. In addition, recrystallization may occur in a discontinuous manner, where distinct new grains form and grow, or in a continuous manner, where the microstructure gradually evolves into a recrystallized microstructure. The different mechanisms by which recrystallization and recovery occur are complex and in many cases remain controversial. The following description is primarily applicable to static discontinuous recrystallization, which is the most classical variety and probably the best understood. Additional mechanisms include (geometric) dynamic recrystallization and strain-induced boundary migration. Secondary recrystallization occurs when a certain very small number of {110}<001> (Goss) grains grow selectively, about one in 10^6 primary grains, at the expense of many other primary recrystallized grains. This results in abnormal grain growth, which may be beneficial or detrimental for product material properties. The prerequisite for secondary recrystallization is a small and uniform primary grain size, achieved through the inhibition of normal grain growth by fine precipitates called inhibitors. Goss grains are named in honor of Norman P. Goss, the inventor of grain-oriented electrical steel circa 1934. Laws of recrystallization There are several, largely empirical laws of recrystallization: Thermally activated. The rate of the microscopic mechanisms controlling the nucleation and growth of recrystallized grains depends on the annealing temperature; Arrhenius-type equations indicate an exponential relationship. Critical temperature. Following from the previous rule, it is found that recrystallization requires a minimum temperature for the necessary atomic mechanisms to occur. 
This recrystallization temperature decreases with annealing time. Critical deformation. The prior deformation applied to the material must be adequate to provide nuclei and sufficient stored energy to drive their growth. Deformation affects the critical temperature. Increasing the magnitude of prior deformation, or reducing the deformation temperature, will increase the stored energy and the number of potential nuclei. As a result, the recrystallization temperature will decrease with increasing deformation. Initial grain size affects the critical temperature. Grain boundaries are good sites for nuclei to form. Since an increase in grain size results in fewer boundaries, this results in a decrease in the nucleation rate and hence an increase in the recrystallization temperature. Deformation affects the final grain size. Increasing the deformation, or reducing the deformation temperature, increases the rate of nucleation faster than it increases the rate of growth. As a result, the final grain size is reduced by increased deformation. Driving force During plastic deformation the work performed is the integral of the stress and strain over the plastic deformation regime. Although the majority of this work is converted to heat, some fraction (~1–5%) is retained in the material as defects—particularly dislocations. The rearrangement or elimination of these dislocations will reduce the internal energy of the system and so there is a thermodynamic driving force for such processes. At moderate to high temperatures, particularly in materials with a high stacking fault energy such as aluminium and nickel, recovery occurs readily and free dislocations will rearrange themselves into subgrains surrounded by low-angle grain boundaries. The driving force is the difference in energy between the deformed and recrystallized states, ΔE, which can be estimated either from the dislocation density or from the subgrain size and boundary energy (Doherty, 2005): ΔE ≈ ρGb²/2 or ΔE ≈ 3γs/ds, where ρ is the dislocation density, G is the shear modulus, b is the Burgers vector of the dislocations, γs is the subgrain boundary energy and ds is the subgrain size. Nucleation Historically it was assumed that the nucleation rate of new recrystallized grains would be determined by the thermal fluctuation model successfully used for solidification and precipitation phenomena. In this theory it is assumed that, as a result of the natural movement of atoms (which increases with temperature), small nuclei spontaneously arise in the matrix. The formation of these nuclei would be associated with an energy requirement due to the formation of a new interface and an energy liberation due to the formation of a new volume of lower-energy material. If a nucleus were larger than some critical radius then it would be thermodynamically stable and could start to grow. The main problem with this theory is that the stored energy due to dislocations is very low (0.1–1 J m−3) while the energy of a grain boundary is quite high (~0.5 J m−2). Calculations based on these values found that the observed nucleation rate was greater than the calculated one by some impossibly large factor (~10^50). As a result, the alternative theory proposed by Cahn in 1949 is now universally accepted. The recrystallized grains do not nucleate in the classical fashion but rather grow from pre-existing sub-grains and cells. 
The 'incubation time' is then a period of recovery where sub-grains with low-angle boundaries (<1–2°) begin to accumulate dislocations and become increasingly misoriented with respect to their neighbors. The increase in misorientation increases the mobility of the boundary and so the rate of growth of the sub-grain increases. If one sub-grain in a local area happens to have an advantage over its neighbors (such as locally high dislocation densities, a greater size or a favorable orientation) then this sub-grain will be able to grow more rapidly than its competitors. As it grows its boundary becomes increasingly misoriented with respect to the surrounding material until it can be recognized as an entirely new strain-free grain. Kinetics Recrystallization kinetics are commonly observed to follow a sigmoidal profile when the recrystallized fraction is plotted against time. There is an initial 'nucleation period' t0 in which the nuclei form; they then begin to grow at a constant rate, consuming the deformed matrix. Although the process does not strictly follow classical nucleation theory, it is often found that such mathematical descriptions provide at least a close approximation. For an array of spherical grains the mean radius R at a time t is (Humphreys and Hatherly 2004) R = G(t − t0), where t0 is the nucleation time and G is the growth rate dR/dt. If nuclei continue to form at a rate Ṅ per unit volume and the grains are assumed to be spherical, then summing the contribution of the nuclei formed in each time increment gives a recrystallized volume fraction of approximately f = (π/3)ṄG³t⁴. This expression is valid in the early stages of recrystallization, when f<<1 and the growing grains are not impinging on each other. Once the grains come into contact the rate of growth slows and is related to the fraction of untransformed material (1−f) by the Johnson–Mehl equation: f = 1 − exp(−(π/3)ṄG³t⁴). While this equation provides a better description of the process, it still assumes that the grains are spherical, the nucleation and growth rates are constant, the nuclei are randomly distributed and the nucleation time t0 is small. In practice few of these assumptions are actually valid and alternative models need to be used. It is generally acknowledged that any useful model must not only account for the initial condition of the material but also for the constantly changing relationship between the growing grains, the deformed matrix and any second phases or other microstructural factors. The situation is further complicated in dynamic systems, where deformation and recrystallization occur simultaneously. As a result, it has generally proven impossible to produce an accurate predictive model for industrial processes without resorting to extensive empirical testing. Since this may require the use of industrial equipment that has not actually been built, there are clear difficulties with this approach. Factors influencing the rate The annealing temperature has a dramatic influence on the rate of recrystallization, which is reflected in the above equations. However, for a given temperature there are several additional factors that will influence the rate. The rate of recrystallization is heavily influenced by the amount of deformation and, to a lesser extent, the manner in which it is applied. Heavily deformed materials will recrystallize more rapidly than those deformed to a lesser extent. Indeed, below a certain deformation recrystallization may never occur. Deformation at higher temperatures will allow concurrent recovery, and so such materials will recrystallize more slowly than those deformed at room temperature (contrast hot and cold rolling). 
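As a numerical illustration of the driving force and Johnson–Mehl kinetics described in the sections above, the following sketch evaluates the stored-energy driving pressure ΔE ≈ ρGb²/2 and the recrystallized fraction f = 1 − exp(−(π/3)ṄG³t⁴). All numerical values (dislocation density, shear modulus, Burgers vector, nucleation and growth rates) are illustrative assumptions chosen only to show plausible orders of magnitude; they are not taken from this article.

```python
import math

def stored_energy_pressure(rho, shear_modulus, burgers):
    """Approximate stored-energy driving pressure, dE ~ 0.5 * rho * G * b^2 (J/m^3 = Pa)."""
    return 0.5 * rho * shear_modulus * burgers ** 2

def johnson_mehl_fraction(t, n_dot, growth_rate, t0=0.0):
    """Recrystallized volume fraction f = 1 - exp(-(pi/3) * N_dot * G^3 * (t - t0)^4)."""
    if t <= t0:
        return 0.0
    return 1.0 - math.exp(-(math.pi / 3.0) * n_dot * growth_rate ** 3 * (t - t0) ** 4)

# Illustrative values for a heavily cold-worked metal (assumed, not from the text)
rho = 1e15            # dislocation density, m^-2
shear_modulus = 26e9  # shear modulus in Pa (roughly that of aluminium)
burgers = 2.86e-10    # Burgers vector in m

print(f"Driving pressure ~ {stored_energy_pressure(rho, shear_modulus, burgers) / 1e6:.1f} MPa")

n_dot = 1e14          # nucleation rate, nuclei m^-3 s^-1 (assumed)
growth_rate = 1e-8    # boundary migration rate, m/s (assumed)
for t in (0, 100, 200, 300, 400, 600):  # annealing time in seconds
    print(f"t = {t:3d} s  f = {johnson_mehl_fraction(t, n_dot, growth_rate):.3f}")
```

Run with these assumed inputs, the driving pressure comes out around 1 MPa and the recrystallized fraction traces out the characteristic sigmoid, staying near zero for the first hundred seconds and approaching unity after several hundred.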
In certain cases deformation may be unusually homogeneous or occur only on specific crystallographic planes. The absence of orientation gradients and other heterogeneities may prevent the formation of viable nuclei. Experiments in the 1970s found that molybdenum deformed to a true strain of 0.3 recrystallized most rapidly when deformed in tension, and at decreasing rates for wire drawing, rolling and compression (Barto & Ebert 1971). The orientation of a grain, and how that orientation changes during deformation, influences the accumulation of stored energy and hence the rate of recrystallization. The mobility of the grain boundaries is influenced by their orientation, and so some crystallographic textures will result in faster growth than others. Solute atoms, both deliberate additions and impurities, have a profound influence on the recrystallization kinetics. Even minor concentrations may have a substantial influence; e.g., 0.004% Fe increases the recrystallization temperature by around 100 °C (Humphreys and Hatherly 2004). It is currently unknown whether this effect is primarily due to the retardation of nucleation or to the reduction in the mobility of grain boundaries, i.e. growth. Influence of second phases Many alloys of industrial significance have some volume fraction of second phase particles, either as a result of impurities or from deliberate alloying additions. Depending on their size and distribution, such particles may act to either encourage or retard recrystallization. Small particles Recrystallization is prevented or significantly slowed by a dispersion of small, closely spaced particles due to Zener pinning on both low- and high-angle grain boundaries. This pinning pressure directly opposes the driving force arising from the dislocation density and will influence both the nucleation and growth kinetics. The effect can be rationalized with respect to the particle dispersion level Fv/r, where Fv is the volume fraction of the second phase and r is the particle radius. At low Fv/r the grain size is determined by the number of nuclei, and so may initially be very small. However, the grains are unstable with respect to grain growth and so will grow during annealing until the particles exert sufficient pinning pressure to halt them. At moderate Fv/r the grain size is still determined by the number of nuclei, but now the grains are stable with respect to normal growth (while abnormal growth is still possible). At high Fv/r the unrecrystallized deformed structure is stable and recrystallization is suppressed. Large particles The deformation fields around large (over 1 μm) non-deformable particles are characterised by high dislocation densities and large orientation gradients and so are ideal sites for the development of recrystallization nuclei. This phenomenon, called particle stimulated nucleation (PSN), is notable as it provides one of the few ways to control recrystallization by controlling the particle distribution. The size and misorientation of the deformed zone are related to the particle size, and so there is a minimum particle size required to initiate nucleation. Increasing the extent of deformation will reduce the minimum particle size, leading to a PSN regime in size-deformation space. If the efficiency of PSN is one (i.e. each particle stimulates one nucleus), then the final grain size will be determined simply by the number of particles. Occasionally the efficiency can be greater than one if multiple nuclei form at each particle, but this is uncommon. 
The efficiency will be less than one if the particles are close to the critical size, and large fractions of small particles will actually prevent recrystallization rather than initiating it (see above). Bimodal particle distributions The recrystallization behavior of materials containing a wide distribution of particle sizes can be difficult to predict. This is compounded in alloys where the particles are thermally unstable and may grow or dissolve with time. In various systems, abnormal grain growth may occur, giving rise to unusually large crystallites growing at the expense of smaller ones. The situation is simpler in bimodal alloys, which have two distinct particle populations. An example is Al-Si alloys, where it has been shown that even in the presence of very large (<5 μm) particles the recrystallization behavior is dominated by the small particles (Chan & Humphreys 1984). In such cases the resulting microstructure tends to resemble that of an alloy with only small particles. Recrystallization temperature The recrystallization temperature is the temperature at which recrystallization can occur for a given material and processing conditions. It is not a fixed temperature and depends upon factors including the following: Increasing annealing time decreases the recrystallization temperature Alloys have higher recrystallization temperatures than pure metals Increasing the amount of cold work decreases the recrystallization temperature Smaller cold-worked grain sizes decrease the recrystallization temperature See also Phase diagram References Metallurgy Phase transitions
Recrystallization (metallurgy)
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,066
[ "Physical phenomena", "Phase transitions", "Metallurgy", "Phases of matter", "Critical phenomena", "Materials science", "nan", "Statistical mechanics", "Matter" ]
3,474,896
https://en.wikipedia.org/wiki/Magnesium%20transporter
Magnesium transporters are proteins that transport magnesium across the cell membrane. All forms of life require magnesium, yet the molecular mechanisms of Mg2+ uptake from the environment and the distribution of this vital element within the organism are only slowly being elucidated. The ATPase function of MgtA is highly cardiolipin dependent and has been shown to detect free magnesium in the μM range In bacteria, Mg2+ is probably mainly supplied by the CorA protein and, where the CorA protein is absent, by the MgtE protein. In yeast the initial uptake is via the Alr1p and Alr2p proteins, but at this stage the only internal Mg2+ distributing protein identified is Mrs2p. Within the protozoa only one Mg2+ transporter (XntAp) has been identified. In metazoa, Mrs2p and MgtE homologues have been identified, along with two novel Mg2+ transport systems TRPM6/TRPM7 and PCLN-1. Finally, in plants, a family of Mrs2p homologues has been identified along with another novel protein, AtMHX. Evolution The evolution of Mg2+ transport appears to have been rather complicated. Proteins apparently based on MgtE are present in bacteria and metazoa, but are missing in fungi and plants, whilst proteins apparently related to CorA are present in all of these groups. The two active transport transporters present in bacteria, MgtA and MgtB, do not appear to have any homologies in higher organisms. There are also Mg2+ transport systems that are found only in the higher organisms. Types There are a large number of proteins yet to be identified that transport Mg2+. Even in the best studied eukaryote, yeast, Borrelly has reported a Mg2+/H+ exchanger without an associated protein, which is probably localised to the Golgi. At least one other major Mg2+ transporter in yeast is still unaccounted for, the one affecting Mg2+ transport in and out of the yeast vacuole. In higher, multicellular organisms, it seems that many Mg2+ transporting proteins await discovery. The CorA-domain-containing Mg2+ transporters (CorA, Alr-like and Mrs2-like) have a similar but not identical array of affinities for divalent cations. In fact, this observation can be extended to all of the Mg2+ transporters identified so far. This similarity suggests that the basic properties of Mg2+ strongly influence the possible mechanisms of recognition and transport. However, this observation also suggests that using other metal ions as tracers for Mg2+ uptake will not necessarily produce results comparable to the transporter's ability to transport Mg2+. Ideally, Mg2+ should be measured directly. Since 28Mg2+ is practically unobtainable, much of the old data will need to be reinterpreted with new tools for measuring Mg2+ transport, if different transporters are to be compared directly. The pioneering work of Kolisek and Froschauer using mag-fura 2 has shown that free Mg2+ can be reliably measured in vivo in some systems. By returning to the analysis of CorA with this new tool, we have gained an important baseline for the analysis of new Mg2+ transport systems as they are discovered. However, it is important that the amount of transporter present in the membrane is accurately determined if comparisons of transport capability are to be made. This bacterial system might also be able to provide some utility for the analysis of eukaryotic Mg2+ transport proteins, but differences in biological systems of prokaryotes and eukaryotes will have to be considered in any experiment. 
Function Comparing the functions of the characterised Mg2+ transport proteins is currently almost impossible, even though the proteins have been investigated in different biological systems using different methodologies and technologies. Finding a system where all the proteins can be compared directly would be a major advance. If the proteins could be shown to be functional in bacteria (S. typhimurium), then a combination of the techniques of mag-fura 2, quantification of protein in the envelope membrane, and structure of the proteins (X-ray crystal or cryo-TEM) might allow the determination of the basic mechanisms involved in the recognition and transport of the Mg2+ ion. However, perhaps the best advance would be the development of methods allowing the measurement of the protein's function in the patch-clamp system using artificial membranes. Bacteria Early research In 1968, Lusk described the limitation of bacterial (Escherichia coli) growth on Mg2+-poor media, suggesting that bacteria required Mg2+ and were likely to actively take this ion from the environment. The following year, the same group and another group, Silver, independently described the uptake and efflux of Mg2+ in metabolically active E. coli cells using 28Mg2+. By the end of 1971, two papers had been published describing the interference of Co2+, Ni2+ and Mn2+ on the transport of Mg2+ in E. coli and in Aerobacter aerogenes and Bacillus megaterium. In the last major development before the cloning of the genes encoding the transporters, it was discovered that there was a second Mg2+ uptake system that showed similar affinity and transport kinetics to the first system, but had a different range of sensitivities to interfering cations. This system was also repressible by high extracellular concentrations of Mg2+ . CorA The CorA gene and its corresponding protein are the most exhaustively studied Mg2+ transport system in any organism. Most of the published literature on the CorA gene comes from the laboratory of M. E. Maguire. Recently the group of R. J. Schweyen made a significant impact on the understanding of Mg2+ transport by CorA. The gene was originally named after the cobalt-resistant phenotype in E. coli that was caused by the gene's inactivation. The gene was genetically identified in E. coli by Park et al., but wasn't cloned until Hmiel et al. isolated the Salmonella enterica serovar Typhimurium (S. typhimurium) homologue. Later it would be shown by Smith and Maguire that the CorA gene was present in 17 gram-negative bacteria. With the large number of complete genome sequences now available for prokaryotes, CorA has been shown to be virtually ubiquitous among the Eubacteria, as well as being widely distributed among the Archaea. The CorA locus in E. coli contains a single open reading frame of 948 nucleotides, producing a protein of 316 amino acids. This protein is well conserved amongst the Eubacteria and Archaea. Between E. coli and S. typhimurium, the proteins are 98% identical, but in more distantly related species, the similarity falls to between 15 and 20%. In the more distantly related genes, the similarity is often restricted to the C-terminal part of the protein, and a short amino acid motif GMN within this region is very highly conserved. The CorA domain, also known as PF01544 in the pFAM conserved protein domain database (http://webarchive.loc.gov/all/20110506030957/http%3A//pfam.sanger.ac.uk/), is additionally present in a wide range of higher organisms, and these transporters will be reviewed below. 
The CorA gene is constitutively expressed in S. typhimurium under a wide range of external Mg2+ concentrations. However, recent evidence suggests that the activity of the protein may be regulated by the PhoPQ two-component regulatory system. This sensor responds to low external Mg2+ concentrations during the infection process of S. typhimurium in humans. In low external Mg2+ conditions, the PhoPQ system was reported to suppress the function of CorA and it has been previously shown that the transcription of the alternative Mg2+ transporters MgtA and MgtB is activated in these conditions. Chamnongpol and Groisman suggest that this allows the bacteria to escape metal ion toxicity caused by the transport of other ions, particularly Fe(II), by CorA in the absence of Mg2+. Papp and Maguire offer a conflicting report on the source of the toxicity. The figure (not to scale) shows the originally published transmembrane (TM) domain topology of the S. typhimurium CorA protein, which was said to have three membrane-spanning regions in the C-terminal part of the protein (shown in blue), as determined by Smith et al.. Evidence for CorA acting as a homotetramer was published by Warren et al. in 2004. In December 2005 the crystal structure of the CorA channel was posted to the RSCB protein structure database. The results showed that the protein has two TM domains and exists as a homopentamer, in direct conflict with the earlier reports. Follow this link to see the structure in 3D. The soluble intracellular parts of the protein are highly charged, containing 31 positively charged and 53 negatively charged residues. Conversely, the TM domains contain only one charged amino acid, which has been shown to be unimportant in the activity of the transporter. From mutagenesis experiments, it appears that the chemistry of the Mg2+ transport relies on the hydroxyl groups lining the inside of the transport pore; there is also an absolute requirement for the GMN motif (shown in red). Before the activity of CorA could be studied in vivo, any other Mg2+ transport systems in the bacterial host had to be identified and inactivated or deleted (see below). A strain of S. typhimurium containing a functional CorA gene but lacking MgtA and MgtB was constructed(also see below), and the uptake kinetics of the transporter were analysed. This strain showed nearly normal growth rates on standard media (50 μM Mg2+), but the removal of all three genes created a bacterial strain requiring 100 mM external Mg2+ for normal growth. Mg2+ is transported into cells containing only the CorA transport system with similar kinetics and cation sensitivities as the Mg2+ uptake described in the earlier papers, and has additionally been quantified(see table). The uptake of Mg2+ was seen to plateau as in earlier studies, and although no actual mechanism for the decrease in transport has been determined, so it has been assumed that the protein is inactivated. Co2+ and Ni2+ are toxic to S. typhimurium cells containing a functional CorA protein and this toxicity stems from the blocking of Mg2+ uptake (competitive inhibition) and the accumulation of these ions inside the cell. Co2+ and Ni2+ have been shown to be transported by CorA by using radioactive tracer analysis, although with lower affinities (km) and velocities (Vmax) than for Mg2+ (see table). 
The km values for Co2+ and Ni2+ are significantly above those expected to be encountered by the cells in their normal environment, so it is unlikely that the CorA transport system mediates the uptake of these ions under natural conditions. To date, the evidence for Mn2+ transport by CorA is limited to E. coli. The table lists the transport kinetics of the CorA Mg2+ transport system. This table has been compiled from the publications of Snavely et al. (1989b), Gibson et al. (1991) and Smith et al. (1998a) and summarises the kinetic data for the CorA transport protein expressed from the wild type promoter in bacteria lacking MgtA and MgtB. km and Vmax were determined at 20 °C as the uptake of Mg2+ at 37 °C was too rapid to measure accurately. Recently the Mg2+-dependent fluorescence of mag-fura 2 was used to measure the free Mg2+ content of S. typhimurium cells in response to external Mg2+, which showed that CorA is the major uptake system for Mg2+ in bacteria. The authors also showed for the first time that the changes in the electric potential (ΔΨ) across the plasma membrane of the cell affected both the rate of Mg2+ uptake and the free Mg2+ content of the cell; depolarisation suppressed transport, while hyperpolarisation increased transport. The kinetics of transport were defined only by the rate of change of free Mg2+ inside the cells (250 μM s−1). Because no quantification of the amount of CorA protein in the membrane was made, this value cannot be compared with other experiments on Mg2+ transporters. The efflux of Mg2+ from bacterial cells was first observed by Lusk and Kennedy (1969) and is mediated by the CorA Mg2+ transport system in the presence of high extracellular concentrations of Mg2+. The efflux can also be triggered by Co2+, Mn2+ and Ni2+, although not to the same degree as Mg2+. No Co2+ efflux through the CorA transport system was observed. The process of Mg2+ efflux additionally requires one of the CorB, CorC or CorD genes. The mutation of any single one of these genes leads to a Co2+ resistance a little less than half of that provided by a CorA mutant. This effect may be due to the inhibition of Mg2+ loss that would otherwise occur in the presence of high levels of Co2+. It is currently unknown whether Mg2+ is more toxic when the CorBCD genes are deleted. It has been speculated that the Mg2+ ion will initially interact with any transport protein through its hydration shell. Cobalt (III) hexaammine, Co(III)Hex, is a covalently bound (non-labile) analog for the first shell of hydration for several divalent cations, including Mg2+. The radius of the Co(III)Hex molecule is 244 pm, very similar to the 250 pm radius of the first hydration shell of Mg2+. This analog is a potent inhibitor of the CorA transport system, more so than Mg2+, Co2+ or Ni2+. The additional strength of the Co(III)Hex inhibition might come from the blocking of the transport pore due to the inability of the protein to ‘dehydrate’ the substrate. It was also shown that Co(III)Hex was not transported into the cells, suggesting that at least partial dehydration would be required for the transport of the normal substrate (Mg2+). Nickel (II) hexaammine, with a radius of 255 pm, did not inhibit the CorA transport system, suggesting a maximum size limit exists for the binding of the CorA substrate ion. These results suggest that the important property involved in the recognition of Mg2+ by CorA is the size of the ion with its first shell of hydration. 
Hence, the volume change generally quoted for the bare to hydrated Mg2+ ion of greater than 500-fold, including the second sphere of hydration, may not be biologically relevant, and may be a reason for the first sphere volume change of 56-fold to be more commonly used. MgtA and MgtB The presence of these two genes was first suspected when Nelson and Kennedy (1972) showed that there were Mg2+-repressible and non-repressible Mg2+ uptake systems in E. coli. The non-repressible uptake of Mg2+ is mediated by the CorA protein. In S. typhimurium the repressible Mg2+ uptake was eventually shown to be via the MgtA and MgtB proteins. Both MgtA and MgtB are regulated by the PhoPQ system and are actively transcribed during the process of infection of human patients by S. typhimurium. Although neither gene is required for pathogenicity, the MgtB protein does enhance the long-term survival of the pathogen in the cell. The genes are also upregulated in vitro when the Mg2+ concentration falls below 50 μM (Snavely et al., 1991a). Although the proteins have km values similar to CorA and transport rates approximately 10 times less, the genes may be part of a Mg2+ scavenging system. Chamnongpol and Groisman (2002) presents evidence that the role of these proteins may be to compensate for the inactivation of the CorA protein by the PhoPQ regulon. The authors suggest that the CorA protein is inactivated to allow the avoidance of metal toxicity via the protein in the low Mg2+ environments S. typhimurium is subjected to by cells after infection. The proteins are both P-type ATPases and neither gene shows any similarity to CorA. The MgtA and MgtB proteins are 75% similar (50% identical), although it seems that MgtB may have been acquired by horizontal gene transfer as part of Salmonella Pathogenicity Island 3. The TM topology of the MgtB protein has been experimentally determined, showing that the protein has ten TM-spanning helices with the termini of the protein in the cytoplasm (see figure ). MgtA is present in widely divergent bacteria, but is not nearly as common as CorA, while MgtB appears to have a quite restricted distribution. No hypotheses for the unusual distribution have been suggested. The figure, adapted from Smith et al. (1993b), shows the experimentally determined membrane topology of the MgtB protein in S. typhimurium. The TM domains are shown in light blue and the orientation in the membrane and the positions of the N- and C-termini are indicated. The figure is not drawn to scale. While the MgtA and MgtB proteins are very similar, they do show some minor differences in activity. MgtB is very sensitive to temperature, losing all activity (with regard to Mg2+ transport) at a temperature of 20 °C. Additionally, MgtB and MgtA are inhibited by different ranges of cations (Table A10.1). The table lists cation transport characteristics of the MgtA and MgtB proteins in S. typhimurium as well as the kinetic data for the MgtA and MgtB transport proteins at 37 °C. The Vmax numbers listed in parentheses are those for uptake at 20 °C. The inhibition of Mg2+ transport by Mn2+ via MgtA showed unusual kinetics (see Figure 1 of Snavely et al., 1989b) The MgtA and MgtB proteins are ATPases, using one molecule of ATP per transport cycle, whereas the Mg2+ uptake via CorA is simply electrochemically favourable. Chamnongpol and Groisman (2002) have suggested that the MgtA and MgtB proteins form part of a metal toxicity avoidance system. 
Alternatively, as most P-type ATPases function as efflux mediating transporters, it has been suggested that the MgtA and MgtB proteins act as efflux proteins for a currently unidentified cation, and Mg2+ transport is either non-specific or exchanged to maintain the electro-neutrality of the transport process. Further experiments will be required to define the physiological function of these proteins. MgtE Two papers describe MgtE, a fourth Mg2+ uptake protein in bacteria unrelated to MgtA/B or CorA. This gene has been sequenced and the protein, 312 amino acids in size, is predicted to contain either four or five TM spanning domains that are closely arranged in the C-terminal part of the protein (see figure). This region of the protein has been identified in the Pfam database as a conserved protein domain (PF01769) and species containing proteins that have this protein domain are roughly equally distributed throughout the Eubacteria and Archaea, although it is quite rare in comparison with the distribution of CorA. However, the diversity of the proteins containing the domain is significantly larger than that of the CorA domain. The Pfam database lists seven distinct groups of MgtE domain containing proteins, of which six contain an archaic or eubacterial member. The expression of MgtE is frequently controlled by a conserved RNA structure, YkoK leader or M-box. The figure (right), adapted from Smith et al. (1995) and the PFAM database entry, shows the computer-predicted membrane topology of the MgtE protein in Bacillus firmus OF4. The TM domains are shown in light blue. The CBS domains, named for the protein they were identified in, cystathionine-beta synthase, shown in orange, are identified in the Pfam database as regulatory domains, but the mechanism of action has not yet been described. They are found in several voltage-gated chloride channels. The orientation in the membrane and the positions of the N- and C-termini are indicated. This figure is not drawn to scale. This transporter has recently had its structure solved by x-ray crystallography. The MgtE gene was first identified by Smith et al. (1995) during a screen for CorA-like proteins in bacteria and complements the Mg2+-uptake-deficient S. typhimurium strain MM281 (corA mgtA mgtB), restoring wild type growth on standard media. The kinetics of Mg2+ transport for the protein were not determined, as 28Mg2+ was unavailable. As a substitute, the uptake of 57Co2+ was measured and was shown to have a km of 82 μM and a Vmax of 354 pmol min−1 108 cells−1. Mg2+ was a competitive inhibitor with a Ki of 50 μM—the Ki of Mg2+ inhibition of 60Co2+ uptake via CorA is 10 μM. A comparison of the available kinetic data for MgtA and CorA is shown in the table. Clearly, MgtE does not transport Co2+ to the same degree as CorA, and the inhibition of transport by Mg2+ is also less efficient, which suggests that the affinity of MgtE for Mg2+ is lower than that of CorA. The strongest inhibitor of Co2+ uptake was Zn2+, with a Ki of 20 μM. The transport of Zn2+ by this protein may be as important as that of Mg2+. The table shows a comparison of the transport kinetics of MgtE and CorA, and key kinetic parameter values for them are listed. As shown, the data has been generated at differing incubation temperatures. km and Ki are not significantly altered by the differing incubation temperature. Conversely, Vmax shows a strong positive correlation with temperature, hence the value of Co2+ Vmax for MgtE is not directly comparable with the values for CorA. 
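The km, Vmax and Ki values quoted in this section can be related through the usual Michaelis–Menten expression for a transporter with a competitive inhibitor, v = Vmax·[S] / (km·(1 + [I]/Ki) + [S]). The sketch below applies that expression to the MgtE 57Co2+-uptake constants quoted above; treating Mg2+ as a purely competitive inhibitor in this simple form is an illustrative assumption for the purpose of the calculation, not a result reported here.

```python
def transport_rate(s, vmax, km, inhibitor=0.0, ki=float("inf")):
    """Michaelis-Menten transport rate with optional competitive inhibition.
    s, km, inhibitor and ki in uM; vmax in pmol min^-1 per 10^8 cells."""
    return vmax * s / (km * (1.0 + inhibitor / ki) + s)

VMAX_MGTE = 354.0   # pmol min^-1 per 10^8 cells, 57Co2+ uptake via MgtE (from the text)
KM_MGTE   = 82.0    # uM, 57Co2+ uptake via MgtE (from the text)
KI_MG     = 50.0    # uM, Mg2+ acting as a competitive inhibitor (from the text)

for mg in (0.0, 50.0, 500.0):
    v = transport_rate(s=50.0, vmax=VMAX_MGTE, km=KM_MGTE, inhibitor=mg, ki=KI_MG)
    print(f"[Co2+] = 50 uM, [Mg2+] = {mg:5.0f} uM -> v = {v:5.1f} pmol min^-1 per 10^8 cells")
```

With these constants, 50 μM Mg2+ roughly halves the Co2+ uptake rate and a large excess almost abolishes it, which illustrates why Mg2+ is described above as an effective, though not especially high-affinity, inhibitor of Co2+ transport through MgtE.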
Yeast Early research The earliest research showing that yeast takes up Mg2+ appears to be done by Schmidt et al. (1949). However, these authors only showed altered yeast Mg2+ content in a table within the paper, and the report's conclusions dealt entirely with the metabolism of phosphate. A series of experiments by Rothstein shifted the focus more towards the uptake of the metal cations, showing that yeast take up cations with the following affinity series; Mg2+, Co2+, Zn2+ > Mn2+ > Ni2+ > Ca2+ > Sr2+. Additionally, it was suggested that the transport of the different cations is mediated by the same transport system — a situation very much like that in bacteria. In 1998, MacDiarmid and Gardner finally identified the proteins responsible for the observed cation transport phenotype in Saccharomyces cerevisiae. The genes involved in this system and a second mitochondrial Mg2+ transport system, functionally identified significantly after the gene was cloned, are described in the sections below. ALR1 and ALR2 Two genes, ALR1 and ALR2, were isolated in a screen for Al3+ tolerance (resistance) in yeast. Over-expression constructs containing yeast genomic DNA were introduced into wild type yeast and the transformants were screened for growth on toxic levels of Al3+. ALR1 and ALR2 containing plasmids allowed the growth of yeast in these conditions. The Alr1p and Alr2p proteins consist of 859 and 858 amino acids respectively and are 70% identical. In a region in the C-terminal, half of these proteins are weakly similar to the full CorA protein. The computer-predicted TM topology of Alr1p is shown in the figure. The presence of a third TM domain was suggested by MacDiarmid and Gardner (1998), on the strength on sequence homology, and more recently by Lee and Gardner (2006), on the strength of mutagenesis studies, making the TM topology of these proteins more like that of CorA (see figure). Also, Alr1p contains the conserved GMN motif at the outside end of TM 2 (TM 2') and the mutation of the methionine (M) in this motif to a leucine (L) led to the loss of transport capability. The figure shows the two possible TM topologies of Alr1p. Part A of the figure shows the computer-predicted membrane topology of the Alr1p protein in yeast and part B shows the topology of Alr1p based on the experimental results of Lee and Gardner (2006). The GMN motif location is indicated in red and the TM domains in light blue. The orientation in the membrane and the positions of the N- and C-termini are indicated, the various sizes of the soluble domains are given in amino acids (AA), and TM domains are numbered by their similarity to CorA. Where any TM domain is missing, the remaining domains are numbered with primes. The figure is not drawn to scale. A third ALR-like gene is present in S. cerevisiae and there are two homologous genes in both Schizosaccharomyces pombe and Neurospora crassa. These proteins contain a GMN motif like that of CorA, with the exception of the second N. crassa gene. No ALR-like genes have been identified in species outside of the fungi. Membrane fractionation and green fluorescent protein (GFP) fusion studies established that Alr1p is localised to the plasma membrane. The localisation of the Alr1p was observed to be internalised and degraded in the vacuole in response to extracellular cations. 
Mg2+, at very low extracellular concentrations (100 μM; < 10% of the standard media Mg2+ content), and Co2+ and Mn2+ at relatively high concentrations (> 20× standard media), induced the change in Alr1p protein localisation, and the effect was dependent on functional ubiquitination, endocytosis and vacuolar degradation. This mechanism was proposed to allow the regulation of Mg2+ uptake by yeast. However, a recent report indicates that several of the observations made by Stadler et al. were not reproducible. For example, regulation of ALR1 mRNA accumulation by Mg2+ supply was not observed, and the stability of the Alr1 protein was not reduced by exposure to excess Mg2+. The original observation of Mg-dependent accumulation of the Alr1 protein under steady-state low-Mg conditions was replicated, but this effect was shown to be an artifact caused by the addition of a small peptide (epitope) to the protein to allow its detection. Despite these problems, Alr1 activity was demonstrated to respond to Mg supply, suggesting that the activity of the protein is regulated directly, as was observed for some bacterial CorA proteins. A functional Alr1p (wild type) or Alr2p (overexpressed) is required for S. cerevisiae growth in standard conditions (4 mM Mg2+), and Alr1p can support normal growth at Mg2+ concentrations as low as 30 μM. 57Co2+ is taken up into yeast via the Alr1p protein with a km of 77 – 105 μM (; C. MacDiarmid and R. C. Gardner, unpublished data), but the Ki for Mg2+ inhibition of this transport is currently unknown. The transport of other cations by the Alr1p protein was assayed by the inhibition of yeast growth. The overexpression of Alr1p led to increased sensitivity to Ca2+, Co2+, Cu2+, La3+, Mn2+, Ni2+ and Zn2+, an array of cations similar to those shown to be transported into yeast by a CorA-like transport system. The increased toxicity of the cations in the presence of the transporter is assumed to be due to the increased accumulation of the cation inside the cell. The evidence that Alr1p is primarily a Mg2+ transporter is that the loss of Alr1p leads to a decreased total cell content of Mg2+, but not of other cations. Additionally, two electrophysiological studies where Alr1p was produced in yeast or Xenopus oocytes showed a Mg2+-dependent current in the presence of the protein; Salih et al., in prep. The kinetics of Mg2+ uptake by Alr1p have been investigated by electrophysiology techniques on whole yeast cells. The results suggested that Alr1p is very likely to act as an ion-selective channel. In the same paper, the authors reported that Mg2+ transport by Alr1p varied from 200 pA to 1500 pA, with a mean current of 264 pA. No quantification of the amount of protein producing the current was presented, so the results lack comparability with the bacterial Mg2+ transport proteins. The alternative techniques of 28Mg2+ radiotracer analysis and mag-fura 2 to measure Mg2+ uptake have not yet been used with Alr1p. 28Mg2+ is currently not available and the mag-fura 2 system is unlikely to provide simple uptake data in yeast. The yeast cell maintains a heterogeneous distribution of Mg2+ suggesting that multiple systems inside the yeast are transporting Mg2+ into storage compartments. This internal transport will very likely mask the uptake process. The expression of ALR1 in S. typhimurium without Mg2+ uptake genes may be an alternative, but, as stated earlier, the effects of a heterologous expression system would need to be taken into account. 
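The whole-cell currents quoted above can be converted into an approximate transport rate in ions per second, since a current I carried by an ion of charge number z corresponds to I/(z·e) ions per second. The sketch below applies this to the reported Alr1p currents; it assumes the measured current is carried entirely by Mg2+, which is an idealisation.

```python
ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def ions_per_second(current_pa, charge_number=2):
    """Number of ions transported per second by a current given in picoamperes."""
    return (current_pa * 1e-12) / (charge_number * ELEMENTARY_CHARGE)

# Whole-cell Alr1p current values quoted in the text (pA)
for current in (200.0, 264.0, 1500.0):
    print(f"{current:6.0f} pA  ->  {ions_per_second(current):.1e} Mg2+ ions per second")
```

The mean reported current of 264 pA corresponds to roughly 10^9 Mg2+ ions per second per cell, a figure that underlines why, without knowing how many Alr1p molecules contribute, the electrophysiological data cannot be compared directly with the per-transporter kinetics of the bacterial proteins.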
MNR2 The MNR2 gene encodes a protein closely related to the Alr proteins, but includes conserved features that define a distinct subgroup of CorA proteins in fungal genomes, suggesting a distinct role in Mg2+ homeostasis. Like an alr1 mutant, growth of an mnr2 mutant was sensitive to Mg2+-deficient conditions, but the mnr2 mutant was observed to accumulate more Mg2+ than a wild-type strain under these conditions. These phenotypes suggested that Mnr2 may regulate Mg2+ storage within an intracellular compartment. Consistent with this interpretation, the Mnr2 protein was localized to the membrane of the vacuole, an internal compartment implicated in the storage of excess mineral nutrients by yeast. A direct role of Mnr2 in Mg2+ transport was suggested by the observation that increased Mnr2 expression, which redirected some Mnr2 protein to the cell surface, also suppressed the Mg2+-requirement of an alr1 alr2 double mutant strain. The mnr2 mutation also altered accumulation of other divalent cations, suggesting this mutation may increase Alr gene expression or protein activity. Recent work supported this model, by showing that Alr1 activity was increased in an mnr2 mutant strain, and that the mutation was associated with induction of Alr1 activity at a higher external Mg concentration than was observed for an Mnr2 wild-type strain. These effects were observed without any change in Alr1 protein accumulation, again indicating that Alr1 activity may be regulated directly by the Mg concentration within the cell. MRS2 and Lpe10 Like the ALR genes, the MRS2 gene was cloned and sequenced before it was identified as a Mg2+ transporter. The MRS2 gene was identified in the nuclear genome of yeast in a screen for suppressors of a mitochondrial gene RNA splicing mutation, and was cloned and sequenced by Wiesenberger et al. (1992). Mrs2p was not identified as a putative Mg2+ transporter until Bui et al. (1999). Gregan et al. (2001a) identified LPE10 by homology to MRS2 and showed that both LPE10 and MRS2 mutants altered the Mg2+ content of yeast mitochondria and affected RNA splicing activity in the organelle. Mg2+ transport has been shown to be directly mediated by Mrs2p, but not for Lpe10p. The Mrs2p and Lpe10p proteins are 470 and 413 amino acid residues in size, respectively, and a 250–300 amino acid region in the middle of the proteins shows a weak similarity to the full CorA protein. The TM topologies of the Mrs2p and Lpe10p proteins have been assessed using a protease protection assay and are shown in the figure. TM 1 and 2 correspond to TM 2 and 3 in the CorA protein. The conserved GMN motif is at the outside end of the first TM domain, and when the glycine (G) in this motif was mutated to a cysteine (C) in Mrs2p, Mg2+ transport was strongly reduced. The figure shows the experimentally determined topology of Mrs2p and Lpe10p as adapted from Bui et al. (1999) and Gregan et al. (2001a). The GMN motif location is indicated in red and the TM domains in light blue. The orientation in the membrane and the positions of the N- and C-termini are indicated. The various sizes of the soluble domains are given in amino acids (AA), TM domains are numbered, and the figure is not drawn to scale. Mrs2p has been localised to the mitochondrial inner membrane by subcellular fractionation and immunodetection and Lpe10p to the mitochondria. Mitochondria lacking Mrs2p do not show a fast Mg2+ uptake, only a slow ‘leak’, and overaccumulation of Mrs2p leads to an increase in the initial rate of uptake. 
Additionally, CorA, when fused to the mitochondrial leader sequence of Mrs2p, can partially complement the mitochondrial defect conferred by the loss of either Mrs2p or Lpe10p. Hence, Mrs2p and/or Lpe10p may be the major Mg2+ uptake system for mitochondria. A possibility is that the proteins form heterodimers, as neither protein (when overexpressed) can fully complement the loss of the other. The characteristics of Mg2+ uptake in isolated mitochondria by Mrs2p were quantified using mag-fura 2. The uptake of Mg2+ by Mrs2p shared a number of attributes with CorA. First, Mg2+ uptake was directly dependent on the electric potential (ΔΨ) across the boundary membrane. Second, the uptake is saturated far below that which the ΔΨ theoretically permits, so the transport of Mg2+ by Mrs2p is likely to be regulated in a similar manner to CorA, possibly by the inactivation of the protein. Third, Mg2+ efflux was observed via Mrs2p upon the artificial depolarisation of the mitochondrial membrane by valinomycin. Finally, the Mg2+ fluxes through Mrs2p are inhibited by cobalt (III) hexaammine. The kinetics of Mg2+ uptake by Mrs2p were determined in the Froschauer et al. (2004) paper on CorA in bacteria. The initial change in free Mg2+ concentration was 150 μM s-1 for wild type and 750 μM s-1 for mitochondria from yeast overexpressing MRS2. No attempt was made to scale the observed transport to the amount of transporter present. Protozoan (Paramecium) The transport of Mg2+ into Paramecium has been characterised largely by R. R. Preston and his coworkers. Electrophysiological techniques on whole Paramecium were used to identify and characterise Mg2+ currents in a series of papers before the gene was cloned by Haynes et al. (2002). The open reading frame for the XNTA gene is 1707 bp in size, contains two introns and produces a predicted protein of 550 amino acids. The protein has been predicted to contain 11 TM domains and also contains the α1 and α2 motifs (see figure) of the SLC8 (Na+/Ca2+ exchanger) and SLC24 (K+ dependent Na+/Ca2+ exchanger) human solute transport proteins. The XntAp is equally similar to the SLC8 and SLC24 protein families by amino acid sequence, but the predicted TM topology is more like that of SLC24, but the similarity is at best weak and the relationship is very distant. The AtMHX protein from plants also shares a distant relationship with the SLC8 proteins. The figure shows the predicted TM topology of XntAp. Adapted from Haynes et al. (2002), this figure shows the computer predicted membrane topology of XntAp in Paramecium. The orientation in the membrane was determined using HMMTOP. The TM domains are shown in light blue, the α1 and α2 domains are shown in green. The orientation in the membrane and the positions of the N- and C-termini are indicated and the figure is not drawn to scale. The Mg2+-dependent currents carried by XntAp are kinetically like that of a channel protein and have an ion selectivity order of Mg2+ > Co2+, Mn2+ > Ca2+ — a series again very similar to that of CorA. Unlike the other transport proteins reported so far, XntAp is dependent on intracellular Ca2+. The transport is also dependent on ΔΨ, but again Mg2+ is not transported to equilibrium, being limited to approximately 0.4 mM free Mg2+ in the cytoplasm. The existence of an intracellular compartment with a much higher free concentration of Mg2+ (8 mM) was supported by the results. Animals The investigation of Mg2+ in animals, including humans, has lagged behind that in bacteria and yeast. 
This is largely because of the complexity of the systems involved, but also because of the impression within the field that Mg2+ was maintained at high levels in all cells and was unchanged by external influences. Only in the last 25 years has a series of reports begun to challenge this view, with new methodologies finding that free Mg2+ content is maintained at levels where changes might influence cellular metabolism. MRS2 A bioinformatic search of the sequence databases identified one homologue of the MRS2 gene of yeast in a range of metazoans. The protein has a very similar sequence and predicted TM topology to the yeast protein, and the GMN motif is intact at the end of the first TM domain. The human protein, hsaMrs2p, has been localised to the mitochondrial membrane in mouse cells using a GFP fusion protein. Very little is known about the Mg2+ transport characteristics of the protein in mammals, but Zsurka et al. (2001) has shown that the human Mrs2p complements the mrs2 mutants in the yeast mitochondrial Mg2+ uptake system. SLC41 (MgtE) The identification of this gene family in the metazoa began with a signal sequence trap method for isolating secreted and membrane proteins. Much of the identification has come from bioinformatic analyses. Three genes were eventually identified in humans, another three in mouse and three in Caenorhabditis elegans, with a single gene in Anopheles gambiae. The pFAM database lists the MgtE domain as pFAM01769 and additionally identifies a MgtE domain-containing protein in Drosophila melanogaster. The proteins containing the MgtE domain can be divided into seven classes, as defined by pFAM using the type and organisation of the identifiable domains in each protein. Metazoan proteins are present in three of the seven groups. All of the metazoa proteins contain two MgtE domains, but some of these have been predicted only by context recognition (Coin, Bateman and Durbin, unpublished. See the pFAM website for further details). The human SLC41A1 protein contains two MgtE domains with 52% and 46% respective similarity to the PF01769 consensus sequence and is predicted to contain ten TM domains, five in each MgtE domain (see figure), which suggests that the MgtE protein of bacteria may work as a dimer. Adapted from Wabakken et al. (2003) and the pFAM database, the figure shows the computer predicted membrane topology of MgtE in H. sapiens. The TM domains are shown in light blue, the orientation in the membrane and the positions of the N- and C-termini are indicated, and the figure is not drawn to scale. Wabakken et al. (2003) found that the transcript of the SLC41A1 gene was expressed in all human tissues tested, but at varying levels, with the heart and testis having the highest expression of the gene. No explanation of the expression pattern has been suggested with regard to Mg2+-related physiology. It has not been shown whether the SLC41 proteins transport Mg2+ or complement a Mg2+ transport mutation in any experimental system. However, it has been suggested that as MgtE proteins have no other known function, they are likely to be Mg2+ transporters in the metazoa as they are in the bacteria. This will need to be verified using one of the now standard experiment systems for examining Mg2+ transport. TRPM6/ TRPM7 The investigation of the TRPM genes and proteins in human cells is an area of intense recent study and, at times, debate. Montell et al. 
(2002) have reviewed the research into the TRP genes, and a second review by Montell (2003) has reviewed the research into the TRPM genes. The TRPM family of ion channels has members throughout the metazoa. The TRPM6 and TRPM7 proteins are highly unusual, containing both an ion channel domain and a kinase domain (see figure), the role of which is the subject of the most heated debate. The activity of these two proteins has been very difficult to quantify. TRPM7 by itself appears to be a Ca2+ channel, but in the presence of TRPM6 the affinity series of transported cations places Mg2+ above Ca2+. The differences in reported conductance were caused by the expression patterns of these genes. TRPM7 is expressed in all cell types tested so far, while TRPM6 shows a more restricted pattern of expression. An unfortunate choice of experimental system by Voets et al. (2004) led to the conclusion that TRPM6 is a functional Mg2+ transporter. However, later work by Chubanov et al. (2004) clearly showed that TRPM7 is required for TRPM6 activity and that the results of Voets et al. are explained by the expression of TRPM7 in the experimental cell line they used. Whether TRPM6 is functional by itself is yet to be determined. The predicted TM topology of the TRPM6 and TRPM7 proteins, adapted from Nadler et al. (2001), Runnels et al. (2001) and Montell et al. (2002), is shown in the figure, which gives the computer-predicted membrane topology of the two proteins in Homo sapiens. At this time, the topology shown should be considered a tentative hypothesis. The TM domains are shown in light blue, the pore loop in purple, the TRP motif in red and the kinase domain in green. The orientation in the membrane and the positions of the N- and C-termini are indicated and the figure is not drawn to scale. The conclusions of the Voets et al. (2004) paper are probably incorrect in attributing the Mg2+-dependent currents to TRPM7 alone, and their kinetic data are likely to reflect the combined TRPM7/TRPM6 channel. The report presents a robust collection of data consistent with a channel-like activity passing Mg2+, based on both electrophysiological techniques and mag-fura 2 measurements of changes in cytoplasmic free Mg2+. Paracellular transport Claudins allow for Mg2+ transport via the paracellular pathway; that is, they mediate the transport of the ion through the tight junctions between cells that form an epithelial cell layer. In particular, Claudin-16 allows the selective reuptake of Mg2+ in the human kidney. Some patients with mutations in the CLDN19 gene also have altered magnesium transport. The Claudin-16 gene was cloned by Simon et al. (1999), but only after a series of reports described the Mg2+ flux itself with no gene or protein. The expression pattern of the gene was determined by RT-PCR, and was shown to be very tightly confined to a continuous region of the kidney tubule running from the medullary thick ascending limb to the distal convoluted tubule. This localisation was consistent with the earlier reports for the location of Mg2+ re-uptake by the kidney. Following the cloning, mutations in the gene were identified in patients with familial hypomagnesaemia with hypercalciuria and nephrocalcinosis, strengthening the links between the gene and the uptake of Mg2+. Plants The current knowledge of the molecular mechanisms for Mg2+ transport in plants is very limited, with only three publications reporting a molecular basis for Mg2+ transport in plants. 
However, the importance of Mg2+ to plants has been well described, and physiological and ecophysiological studies about the effects of Mg2+ are numerous. This section will summarise the knowledge of a gene family identified in plants that is distantly related to CorA. Another gene, a Mg2+/H+ exchanger (AtMHX) unrelated both to this gene family and to CorA, has also been identified; it is localised to the vacuolar membrane and is described last. The AtMRS2 gene family Schock et al. (2000) identified and named the family AtMRS2 based on the similarity of the genes to the MRS2 gene of yeast. The authors also showed that the AtMRS2-1 gene could complement a Δmrs2 yeast mutant phenotype. Independently, Li et al. (2001) published a report identifying the family and showing that two additional members could complement Mg2+ transport-deficient mutants, one in S. typhimurium and the other in S. cerevisiae. The three genes that have been shown to transport Mg2+ are AtMRS2-1, AtMRS2-10 and AtMRS2-11, and these genes produce proteins 442, 443 and 459 amino acids in size, respectively. Each of the proteins shows significant similarity to Mrs2p of yeast and a weak similarity to CorA of bacteria, contains the conserved GMN amino acid motif at the outside end of the first TM domain, and is predicted to have two TM domains. The AtMRS2-1 gene, when expressed in yeast from the MRS2 promoter and fused C-terminally to the first 95 amino acids of the Mrs2p protein, was directed to the mitochondria, where it complemented a Δmrs2 mutant both phenotypically (mitochondrial RNA splicing was restored) and with respect to the Mg2+ content of the organelle. No data on the kinetics of the transport were presented. The AtMRS2-11 gene was analysed in yeast (in the alr1 alr2 strain), where it was shown that expression of the gene significantly increased the rate of Mg2+ uptake into starved cells over the control, as measured using flame atomic absorption spectroscopy of total cellular Mg2+ content. However, Alr1p was shown to be significantly more effective at transporting Mg2+ at low extracellular concentrations, suggesting that the affinity of AtMRS2-11 for Mg2+ is lower than that of Alr1p. An electrophysiological (voltage clamp) analysis of the AtMRS2-11 protein in Xenopus oocytes also showed a Mg2+-dependent current at membrane potentials (ΔΨ) of –100 to –150 mV (inside negative). These values are physiologically significant, as several membranes in plants maintain ΔΨ in this range. However, the author had difficulty reproducing these results due to an apparent "death" of oocytes containing the AtMRS2-11 protein, and therefore these results should be viewed with caution. The AtMRS2-10 transporter has been analysed using radioactive tracer uptake analysis. 63Ni2+ was used as the substitute ion and Mg2+ was shown to inhibit the uptake of 63Ni2+ with a Ki of 20 μM. Uptake was also inhibited by Co(III)Hex and by other divalent cations. Only Co2+ and Cu2+ inhibited transport with Ki values less than 1 mM. The AtMRS2-10 protein was fused to GFP, and was shown to be localised to the plasma membrane. A similar experiment was attempted in the Schock et al. (2000) paper, but the observed localisation was not significantly different from that seen with unfused GFP. The most likely reason for the lack of a definitive localisation of AtMRS2-1 in the Schock et al. paper is that the authors removed the TM domains from the protein, thereby precluding its insertion into a membrane. 
The exact physiological significance of the AtMRS2-1 and AtMRS2-10 proteins in plants has yet to be clarified. The AtMRS2-11 gene has been overexpressed (from the CaMV 35S promoter) in A. thaliana. The transgenic line has been shown to accumulate high levels of the AtMRS2-11 transcript. A strong Mg2+ deficiency phenotype (necrotic spots on the leaves) was recorded during the screening process (in both the T1 and T2 generations) for a homozygote line, but this phenotype was lost in the T3 generation and could not be reproduced when the earlier generations were screened a second time. The author suggested that environmental effects were the most likely cause of the inconsistent phenotype. AtMHX The first magnesium transporter isolated in any multicellular organism, AtMHX shows no similarity to any previously isolated Mg2+ transport protein. The gene was initially identified in the A. thaliana genomic DNA sequence database, by its similarity to the SLC8 family of Na+/Ca2+ exchanger genes in humans. The cDNA sequence of 1990 bp is predicted to produce a 539-amino acid protein. AtMHX is quite closely related to the SLC8 family at the amino acid level and shares a topology with eleven predicted TM domains (see figure). There is one major difference in the sequence, in that the long non-membranal loop is 148 amino acids in the AtMHX protein but 500 amino acids in the SLC8 proteins. However, this loop is not well conserved and is not required for transport function in the SLC8 family. The AtMHX gene is expressed throughout the plant but most strongly in the vascular tissue. The authors suggest that the physiological role of the protein is to store Mg2+ in these tissues for later release when needed. The protein's localisation to the vacuolar membrane supports this suggestion. The protein transports Mg2+ into the vacuolar space and H+ out, as demonstrated by electrophysiological techniques. The transport is driven by the ΔpH maintained between the vacuolar space (pH 4.5–5.9) and the cytoplasm (pH 7.3–7.6) by an H+-ATPase. How the transport of Mg2+ by the protein is regulated was not determined. Currents were observed to pass through the protein in both directions, but the outward Mg2+ current required a ‘cytoplasmic’ pH of 5.5, a condition not found in plant cells under normal circumstances. In addition to the transport of Mg2+, Shaul et al. (1999) also showed that the protein could transport Zn2+ and Fe2+, but did not report on the capacity of the protein to transport other divalent cations (e.g. Co2+ and Ni2+) or its susceptibility to inhibition by cobalt (III) hexaammine. The detailed kinetics of Mg2+ transport have not been determined for AtMHX. However, physiological effects have been demonstrated. When A. thaliana plants were transformed with overexpression constructs of the AtMHX gene driven by the CaMV 35S promoter, the plants over-accumulated the protein and showed a phenotype of necrotic lesions in the leaves, which the authors suggest is caused by a disruption in the normal function of the vacuole, given their observation that the total Mg2+ (or Zn2+) content of the plants was not altered in the transgenic plants. Adapted from Shaul et al. (1999) and Quednau et al. (2004) and combined with an analysis using HMMTOP, the figure shows the computer-predicted membrane topology of the AtMHX protein in Arabidopsis thaliana. At this time the topology shown should be considered a tentative hypothesis. 
The TM domains are shown in light blue, the orientation in the membrane and the positions of the N- and C-termini are indicated, and the figure is not drawn to scale. The α1 and α2 domains, shown in green, are both quite hydrophobic and may both be inserted into the membrane. References Biology and pharmacology of chemical elements Ion channels Physiology Magnesium Membrane biology
Magnesium transporter
[ "Chemistry", "Biology" ]
11,531
[ "Pharmacology", "Properties of chemical elements", "Physiology", "Biology and pharmacology of chemical elements", "Membrane biology", "Molecular biology", "Biochemistry", "Neurochemistry", "Ion channels" ]
3,474,975
https://en.wikipedia.org/wiki/AMPAC
AMPAC is a general-purpose semiempirical quantum chemistry program. It is marketed by Semichem, Inc. and was developed originally by Michael Dewar and his group. The first version of AMPAC (2.1) was made available in 1985 through the Quantum Chemistry Program Exchange (QCPE). Subsequent versions were released through the same source, representing minor updates and optimized versions for other platforms. In 1992, Semichem, Inc. was formed at Professor Dewar's urging to maintain and market the program. AMPAC 4.0, with a graphical user interface, was released in August of that year. Semichem's current version of AMPAC is 10. AMPAC currently implements the SAM1, AM1, MNDO, MNDO/d, PM3, MNDOC, MINDO/3, RM1 and PM6 semi-empirical methods and the AMSOL and COSMO solvation models. See also Quantum chemistry computer programs References Computational chemistry software
AMPAC
[ "Chemistry" ]
201
[ "Quantum chemistry stubs", "Quantum chemistry", "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Computational chemistry stubs", "Computational chemistry", "Physical chemistry stubs" ]
6,119,001
https://en.wikipedia.org/wiki/Systems%20Engineering%20and%20Technical%20Assistance
Systems Engineering and Technical Advisory (SETA) contractors are government contractors who are contracted to assist United States Department of Defense (DoD) components and acquisition programs. (In some areas of DoD, the acronym SETA refers to "Systems Engineering and Technical Assessment" contractors; it also refers to "Systems Engineering and Technical Advisors.") SETA contractors provide analysis and engineering services in a consulting capacity, working closely with the government's own engineering staff members. SETA contractors provide the flexibility and quick availability of expertise without the expense and commitment of sustaining the staff long-term. SETA is an industry term which the DoD has used since at least 1995, for example in the Software Engineering Institute; the Defense Acquisition Deskbook, "S"; An Acronym List for the Information Age (Armed Forces Communications and Electronics Association); and the DoD Guide to Integrated Product and Process Development. Contracting The government often needs to supplement its internal Systems Engineering and Technical Advisory capability in order to meet its frequently changing needs and demands. Through a formal Request for Information (RFI)/Request for Proposal (RFP) process the government is able to contract with a commercial organization to provide certain services. SETA contractors work alongside government employees, often within the same workspace. SETA contractors may participate in government contracting actions and may assist in managing other contracts. A SETA contractor cannot be the Contracting Officer's Technical Representative (COTR) or Assistant Contracting Officer Representative (ACOR), but they may function as the Technical Point of Contact (TPOC). Since SETA contractors may have access to procurement-sensitive information, there is a risk of conflict of interest (CoI), which is mitigated through Non-Disclosure Agreements (NDAs) and firewalls restricting communications within corporations. The SETA support rate in DARPA's total R&D expenditures has been evaluated at 7.4–9.9%. Policy The policy related to SETA contractors can be found in the Federal Acquisition Regulation (FAR), Defense Federal Acquisition Regulation (DFAR) and DoD Instructions. FAR Part 37 is the starting point for guidance for these types of contracts. Subpart 37.2 defines advisory and assistance services and provides that the use of such services is a legitimate way to improve the prospects for program or systems success. FAR Part 37.201(c) defines engineering and technical services used in support of a program office during the acquisition cycle. FAR 16.505(c) provides that the ordering period of an advisory and assistance services task order contract, including all options or modifications, may not exceed five years unless a longer period is specifically authorized in a law that is applicable to such a contract. DFARS Part 237 provides information for advisory and assistance contracts. FAR Subpart 9.5 addresses the potential for organizational and consultant conflicts of interest. Use of SETA contracts and NDAs allows for services involving systems and data and providing assistance and advice. This excludes Inherently Governmental Functions (IGF), as defined in statute by Public Law 105-270, the Federal Activities Inventory Reform Act (the "FAIR Act") of 1998, and in regulation by OMB Circular A-76. 
See also Contracting Officer's Technical Representative Systems Engineering References External links Federal Acquisition Regulation Defense Federal Acquisition Regulation Department of Defense Issuances DARPA SETA Support FY2010 / FY2015 Database DOI:10.6084/m9.figshare.4759186.v2 Public Law 105 - 270 - Federal Activities Inventory Reform Act of 1998 OMB Circular A-76 United States Department of Defense United States defense procurement Systems engineering
Systems Engineering and Technical Assistance
[ "Engineering" ]
736
[ "Systems engineering" ]
6,119,178
https://en.wikipedia.org/wiki/American%20Machinists%27%20Handbook
American Machinists' Handbook was a McGraw-Hill reference book similar to Industrial Press's Machinery's Handbook. (The latter title, still in print and regularly revised, is the one that machinists today are usually referring to when they speak imprecisely of "the machinist's handbook" or "the machinists' handbook".) The somewhat generic sound of the title American Machinists' Handbook no doubt contributed to the confounding of the two books' titles and identities. The title capitalized on readers' familiarity with American Machinist, McGraw-Hill's popular trade journal. But the usage could have benefited from more branding discipline, because there was some confusion over whether the title was properly "American Machinist's Handbook" or "American Machinists' Handbook". ("American Machinist's Handbook" would be parallel to the construction of the title "Machinery's Handbook".) McGraw-Hill's American Machinists' Handbook appeared first (1908). It is doubtful that Industrial Press's Machinery's Handbook (1914) was a mere me-too conceived afterwards in response. The eager market for such reference works had probably been obvious for at least a decade before either work was compiled; perhaps the appearance of the McGraw-Hill title merely prodded Industrial Press to finally get moving on a handbook of its own. American Machinists' Handbook, co-edited by Fred H. Colvin and Frank A. Stanley, went through eight editions between 1908 and 1945. In 1955, McGraw-Hill published The new American machinist's handbook, based upon earlier editions of American Machinists' Handbook, but perhaps the book did not compete well enough with Machinery's Handbook; no subsequent editions were produced. List of the editions of American Machinists' Handbook Renewal data from Rutgers. All works after 1923 with renewed copyright are presumably still protected. 1908 non-fiction books Mechanical engineering Metallurgical industry of the United States Handbooks and manuals McGraw-Hill books
American Machinists' Handbook
[ "Physics", "Chemistry", "Engineering" ]
423
[ "Applied and interdisciplinary physics", "Metallurgical industry of the United States", "Mechanical engineering", "Metallurgical industry by country" ]
19,453,961
https://en.wikipedia.org/wiki/Mathematical%20object
A mathematical object is an abstract concept arising in mathematics. Typically, a mathematical object can be a value that can be assigned to a symbol, and therefore can be involved in formulas. Commonly encountered mathematical objects include numbers, expressions, shapes, functions, and sets. Mathematical objects can be very complex; for example, theorems, proofs, and even formal theories are considered as mathematical objects in proof theory. In the philosophy of mathematics, the concept of "mathematical objects" touches on topics of existence, identity, and the nature of reality. In metaphysics, objects are often considered entities that possess properties and can stand in various relations to one another. Philosophers debate whether mathematical objects have an independent existence outside of human thought (realism), or if their existence is dependent on mental constructs or language (idealism and nominalism). Objects can range from the concrete, such as physical objects usually studied in applied mathematics, to the abstract, studied in pure mathematics. What constitutes an "object" is foundational to many areas of philosophy, from ontology (the study of being) to epistemology (the study of knowledge). In mathematics, objects are often seen as entities that exist independently of the physical world, raising questions about their ontological status. There are varying schools of thought which offer different perspectives on the matter, and many famous mathematicians and philosophers each have differing opinions on which is more correct. In philosophy of mathematics Quine-Putnam indispensability Quine-Putnam indispensability is an argument for the existence of mathematical objects based on their unreasonable effectiveness in the natural sciences. Every branch of science relies on large and often vastly different areas of mathematics. From physics' use of Hilbert spaces in quantum mechanics and differential geometry in general relativity to biology's use of chaos theory and combinatorics (see mathematical biology), not only does mathematics help with predictions, it allows these areas to have an elegant language to express these ideas. Moreover, it is hard to imagine how areas like quantum mechanics and general relativity could have developed without the assistance of mathematics, and therefore, one could argue that mathematics is indispensable to these theories. It is because of this unreasonable effectiveness and indispensability of mathematics that philosophers Willard Quine and Hilary Putnam argue that we should believe the mathematical objects on which these theories depend actually exist, that is, we ought to have an ontological commitment to them. The argument is described by the following syllogism: (Premise 1) We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories. (Premise 2) Mathematical entities are indispensable to our best scientific theories. (Conclusion) We ought to have ontological commitment to mathematical entities. This argument resonates with a philosophy in applied mathematics called Naturalism (or sometimes Predicativism) which states that the only authoritative standards on existence are those of science. Schools of thought Platonism Platonism asserts that mathematical objects are seen as real, abstract entities that exist independently of human thought, often in some Platonic realm. Just as physical objects like electrons and planets exist, so do numbers and sets. 
And just as statements about electrons and planets are true or false because these objects have perfectly objective properties, so are statements about numbers and sets. Mathematicians discover these objects rather than invent them. (See also: Mathematical Platonism) Some notable platonists include: Plato: The ancient Greek philosopher who, though not a mathematician, laid the groundwork for Platonism by positing the existence of an abstract realm of perfect forms or ideas, which influenced later thinkers in mathematics. Kurt Gödel: A 20th-century logician and mathematician, Gödel was a strong proponent of mathematical Platonism, and his work in model theory was a major influence on modern platonism. Roger Penrose: A contemporary mathematical physicist, Penrose has argued for a Platonic view of mathematics, suggesting that mathematical truths exist in a realm of abstract reality that we discover. Nominalism Nominalism denies the independent existence of mathematical objects. Instead, it suggests that they are merely convenient fictions or shorthand for describing relationships and structures within our language and theories. Under this view, mathematical objects do not have an existence beyond the symbols and concepts we use. Some notable nominalists include: Nelson Goodman: A philosopher known for his work in the philosophy of science and nominalism. He argued against the existence of abstract objects, proposing instead that mathematical objects are merely a product of our linguistic and symbolic conventions. Hartry Field: A contemporary philosopher who has developed the form of nominalism called "fictionalism," which argues that mathematical statements are useful fictions that do not correspond to any actual abstract objects. Logicism Logicism asserts that all mathematical truths can be reduced to logical truths, and all objects forming the subject matter of those branches of mathematics are logical objects. In other words, mathematics is fundamentally a branch of logic, and all mathematical concepts, theorems, and truths can be derived from purely logical principles and definitions. Logicism faced challenges, particularly with the Russellian axioms, the Multiplicative axiom (now called the Axiom of Choice) and the Axiom of Infinity, and later with the discovery of Gödel's incompleteness theorems, which showed that any sufficiently powerful formal system (like those used to express arithmetic) cannot be both complete and consistent. This meant that not all mathematical truths could be derived purely from a logical system, undermining the logicist program. Some notable logicists include: Gottlob Frege: Frege is often regarded as the founder of logicism. In his work, Grundgesetze der Arithmetik (Basic Laws of Arithmetic), Frege attempted to show that arithmetic could be derived from logical axioms. He developed a formal system that aimed to express all of arithmetic in terms of logic. Frege's work laid the groundwork for much of modern logic and was highly influential, though it encountered difficulties, most notably Russell's paradox, which revealed inconsistencies in Frege's system. Bertrand Russell: Russell, along with Alfred North Whitehead, further developed logicism in their monumental work Principia Mathematica. They attempted to derive all of mathematics from a set of logical axioms, using a type theory to avoid the paradoxes that Frege's system encountered. 
Although Principia Mathematica was enormously influential, the effort to reduce all of mathematics to logic was ultimately seen as incomplete. However, it did advance the development of mathematical logic and analytic philosophy. Formalism Mathematical formalism treats objects as symbols within a formal system. The focus is on the manipulation of these symbols according to specified rules, rather than on the objects themselves. One common understanding of formalism takes mathematics as not a body of propositions representing an abstract piece of reality but much more akin to a game, bringing with it no more ontological commitment of objects or properties than playing ludo or chess. In this view, mathematics is about the consistency of formal systems rather than the discovery of pre-existing objects. Some philosophers consider logicism to be a type of formalism. Some notable formalists include: David Hilbert: A leading mathematician of the early 20th century, Hilbert is one of the most prominent advocates of formalism as a foundation of mathematics (see Hilbert's program). He believed that mathematics is a system of formal rules and that its truth lies in the consistency of these rules rather than any connection to an abstract reality. Hermann Weyl: German mathematician and philosopher who, while not strictly a formalist, contributed to formalist ideas, particularly in his work on the foundations of mathematics. Freeman Dyson wrote that Weyl alone bore comparison with the "last great universal mathematicians of the nineteenth century", Henri Poincaré and David Hilbert. Constructivism Mathematical constructivism asserts that it is necessary to find (or "construct") a specific example of a mathematical object in order to prove that an example exists. Contrastingly, in classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. Such a proof by contradiction might be called non-constructive, and a constructivist might reject it. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation. There are many forms of constructivism. These include Brouwer's program of intuitionism, the finitism of Hilbert and Bernays, the constructive recursive mathematics of mathematicians Shanin and Markov, and Bishop's program of constructive analysis. Constructivism also includes the study of constructive set theories such as Constructive Zermelo–Fraenkel and the study of the philosophy of constructive mathematics. Some notable constructivists include: L. E. J. Brouwer: Dutch mathematician and philosopher regarded as one of the greatest mathematicians of the 20th century, known for (among other things) pioneering the intuitionist movement in mathematical logic, and for his opposition to David Hilbert's formalism movement (see: Brouwer–Hilbert controversy). Errett Bishop: American mathematician known for his work on analysis. He is best known for developing constructive analysis in his 1967 Foundations of Constructive Analysis, where he proved most of the important theorems in real analysis using constructivist methods. Structuralism Structuralism suggests that mathematical objects are defined by their place within a structure or system. The nature of a number, for example, is not tied to any particular thing, but to its role within the system of arithmetic. 
In a sense, the thesis is that mathematical objects (if there are such objects) simply have no intrinsic nature. Some notable structuralists include: Paul Benacerraf: A philosopher known for his work in the philosophy of mathematics, particularly his paper "What Numbers Could Not Be," which argues for a structuralist view of mathematical objects. Stewart Shapiro: Another prominent philosopher who has developed and defended structuralism, especially in his book Philosophy of Mathematics: Structure and Ontology. Objects versus mappings Frege famously distinguished between functions and objects. According to his view, a function is a kind of ‘incomplete’ entity that maps arguments to values, and is denoted by an incomplete expression, whereas an object is a ‘complete’ entity and can be denoted by a singular term. Frege reduced properties and relations to functions and so these entities are not included among the objects. Some authors make use of Frege's notion of ‘object’ when discussing abstract objects. But though Frege's sense of ‘object’ is important, it is not the only way to use the term. Other philosophers include properties and relations among the abstract objects. And when the background context for discussing objects is type theory, properties and relations of higher type (e.g., properties of properties, and properties of relations) may all be considered ‘objects’. This latter use of ‘object’ is interchangeable with ‘entity.’ It is this broader interpretation that mathematicians mean when they use the term 'object'. See also Abstract object Exceptional object Impossible object List of mathematical objects List of mathematical shapes List of shapes List of surfaces List of two-dimensional geometric shapes Mathematical structure Notes References Citations Further reading Azzouni, J., 1994. Metaphysical Myths, Mathematical Practice. Cambridge University Press. Burgess, John, and Rosen, Gideon, 1997. A Subject with No Object. Oxford Univ. Press. Davis, Philip and Reuben Hersh, 1999 [1981]. The Mathematical Experience. Mariner Books: 156–62. Gold, Bonnie, and Simons, Roger A., 2011. Proof and Other Dilemmas: Mathematics and Philosophy. Mathematical Association of America. Hersh, Reuben, 1997. What is Mathematics, Really? Oxford University Press. Sfard, A., 2000, "Symbolizing mathematical reality into being, Or how mathematical discourse and mathematical objects create each other," in Cobb, P., et al., Symbolizing and communicating in mathematics classrooms: Perspectives on discourse, tools and instructional design. Lawrence Erlbaum. Stewart Shapiro, 2000. Thinking about mathematics: The philosophy of mathematics. Oxford University Press. External links Stanford Encyclopedia of Philosophy: "Abstract Objects"—by Gideon Rosen. Wells, Charles. "Mathematical Objects". AMOF: The Amazing Mathematical Object Factory Mathematical Object Exhibit Philosophical concepts Category theory Mathematical concepts Platonism
Mathematical object
[ "Mathematics" ]
2,542
[ "Mathematical Platonism", "Mathematical structures", "Functions and mappings", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Mathematical relations", "nan" ]
19,454,208
https://en.wikipedia.org/wiki/Dynamic%20topography
The term dynamic topography is used in geodynamics to refer to elevation differences caused by flow within Earth's mantle. Definition In geodynamics, dynamic topography refers to topography generated by the motion of zones of differing degrees of buoyancy (convection) in Earth's mantle. It is also seen as the residual topography obtained by removing the isostatic contribution from the observed topography (i.e., the topography that cannot be explained by an isostatic equilibrium of the crust or the lithosphere resting on a fluid mantle) and all observed topography due to post-glacial rebound. Elevation differences due to dynamic topography are frequently on the order of a few hundred meters to a couple of kilometers. Large scale surface features due to dynamic topography are mid-ocean ridges and oceanic trenches. Other prominent examples include areas overlying mantle plumes such as the African superswell. For a recent review of observational and modelling constraints on dynamic topography, see Davies et al. (2023). The mid-ocean ridges are high due to dynamic topography because the upwelling hot material underneath them pushes them up above the surrounding seafloor. This provides an important driving force in plate tectonics called ridge push: the increased gravitational potential energy of the mid-ocean ridge due to its dynamic uplift causes it to extend and push the surrounding lithosphere away from the ridge axis. Dynamic topography and mantle density variations can explain 90% of the long-wavelength geoid after the hydrostatic ellipsoid is subtracted out. Dynamic topography is the reason why the geoid is high over regions of low-density mantle. If the mantle were static, these low-density regions would be geoid lows. However, these low-density regions move upwards in a mobile, convecting mantle, elevating density interfaces such as the core-mantle boundary, the 410 and 670 kilometer discontinuities, and the Earth's surface. Since both the density and the dynamic topography provide approximately the same magnitude of change in the geoid, the resultant geoid is a relatively small value (being the difference between large but similar numbers). Examples The geological history of the Colorado Plateau during the last 30 million years has been considerably affected by dynamic topography. At first, between 30 and 15 million years ago, the plateau was greatly uplifted. Then, in a second phase, between 15 and 5 million years ago the plateau was tilted to the east. Finally, in the last 5 million years the western part of the plateau has been tilted to the west. The plateau would have reached its high elevation of 1,400 m.a.s.l. due to dynamic topography. In Patagonia a Miocene transgression has been attributed to a down-dragging effect of mantle convection. A subsequent regression in the Late Miocene and Pliocene and further Quaternary uplift on the eastern coast of Patagonia may in turn have been caused by a decrease in this convection. The Miocene dynamic topography that developed in Patagonia advanced as a wave from south to north following the northward shift of the Chile Triple Junction and the asthenospheric window associated with it. See also Epeirogenic movement Mountain formation Orogeny References External links Discussion on the definition of dynamic topography. Geodynamics Oceanography
Dynamic topography
[ "Physics", "Environmental_science" ]
673
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
19,454,733
https://en.wikipedia.org/wiki/Constant-resistance%20network
A constant-resistance network in electrical engineering is a network whose input resistance does not change with frequency when correctly terminated. Examples of constant resistance networks include: Zobel network Lattice phase equaliser Boucherot cell Bridged T delay equaliser Electrical engineering Physics-related lists
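As a numerical illustration of the constant-resistance idea (this sketch is not taken from the article, and the component values are arbitrary assumptions), consider one simple constant-resistance arrangement of the kind used in Zobel-style impedance compensation: an inductive branch R0 + jωL in parallel with a capacitive branch R0 + 1/(jωC). When C = L/R0², the combined input impedance equals R0 at every frequency, which the short Python check below confirms.

```python
import numpy as np

R0 = 8.0         # design (termination) resistance in ohms -- arbitrary example value
L = 1e-3         # branch inductance in henries -- arbitrary example value
C = L / R0**2    # constant-resistance condition: the two reactances satisfy Z_L * Z_C = R0^2

f = np.logspace(1, 5, 9)                # 10 Hz to 100 kHz
w = 2 * np.pi * f
Z_ind = R0 + 1j * w * L                 # branch 1: R0 in series with the inductor
Z_cap = R0 + 1.0 / (1j * w * C)         # branch 2: R0 in series with the capacitor
Z_in = Z_ind * Z_cap / (Z_ind + Z_cap)  # the two branches in parallel

for fi, zi in zip(f, Z_in):
    print(f"{fi:9.1f} Hz   Re(Zin) = {zi.real:6.3f} ohm   Im(Zin) = {zi.imag:+.1e} ohm")
# The real part stays at R0 = 8 ohm and the imaginary part is numerically ~0 at every frequency.
```

The same cancellation underlies the Zobel, lattice and bridged-T structures listed above: each pairs two frequency-dependent impedances whose product is held at R0², so the frequency dependence of one branch is exactly compensated by the other.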
Constant-resistance network
[ "Engineering" ]
56
[ "Electrical engineering" ]
19,455,228
https://en.wikipedia.org/wiki/Motor%20capacitor
A motor capacitor is an electrical capacitor that alters the current to one or more windings of a single-phase alternating-current induction motor to create a rotating magnetic field. There are two common types of motor capacitor, the start capacitor and the run capacitor (including the dual run capacitor). Motor capacitors are used with single-phase electric motors that are in turn used to drive air conditioners, hot tub/jacuzzi spa pumps, powered gates, large fans or forced-air heat furnaces, for example. A "dual run capacitor" is used in some air conditioner compressor units, to boost both the fan and compressor motors. Permanent-split capacitor (PSC) motors use a motor capacitor that is not disconnected from the motor. Start capacitors A start capacitor shifts the phase of the current in the auxiliary (start) winding relative to the main winding, creating the rotating magnetic field needed to start the motor. Without the start capacitor there is no phase shift: the north and south magnetic fields simply line up, the motor hums, and it will only start spinning if physically turned. A start capacitor stays in the circuit long enough to rapidly bring the motor up to a predetermined speed, which is usually about 75% of the full speed, and is then taken out of the circuit, often by a centrifugal switch that releases at that speed. Afterward the motor works more efficiently with a run capacitor, which brings the power factor closer to one. Start capacitors usually have ratings above 70μF, with four major voltage classifications: 125V, 165V, 250V, and 330V. Start capacitors above 20μF are always non-polarized aluminium electrolytic capacitors with non-solid electrolyte, and are therefore only suitable for the short motor-starting period. The motor will not work properly if the centrifugal switch is broken. If the switch is always open, the start capacitor is not part of the circuit, so the motor does not start. If the switch is always closed, the start capacitor is always in the circuit, so the motor windings will likely burn out. If a motor does not start, the capacitor is far more likely to be the problem than the switch. Run capacitors Some single-phase AC electric motors require a "run capacitor" to energize the second-phase winding (auxiliary coil) to create a rotating magnetic field while the motor is running. Run capacitors are designed for continuous duty while the motor is powered, which is why electrolytic capacitors are avoided and low-loss polymer capacitors are used. Run capacitors are mostly polypropylene film capacitors (historically: metallised paper capacitors) and are energized the entire time the motor is running. Run capacitors are rated in a range of 1.5 to 100 μF, with volt classifications of 250, 370 and 440 V. If a wrong capacitance value is installed, it will cause an uneven magnetic field around the rotor. This causes the rotor to hesitate at the uneven spots, resulting in irregular rotation, especially under load. This hesitation can cause the motor to become noisy, increase energy consumption, cause performance to drop and the motor to overheat. Dual run capacitors A dual run capacitor supports two electric motors, such as both a fan motor and a compressor motor. It saves space by combining two physical capacitors into one case. The dual capacitor has three terminals, labeled C for common, FAN, and HERM for hermetically-sealed compressor. Dual capacitors come in a variety of sizes, depending on the capacitance (measured in microfarads, μF), such as 40 plus 5μF, and also on the voltage. 
A 440-volt capacitor can be used in place of a 370-volt, but not a 370-volt in place of a 440-volt. The capacitance must remain within 5% of its original value. Round cylinder-shaped dual run capacitors are commonly used for air conditioning, to help in the starting of the compressor and the condenser fan motor. An oval dual run capacitor could be used instead of a round capacitor; in either case the mounting strap needs to fit the shape used. Labeling The units of capacitance are labeled in microfarads (μF). Older capacitors may be labeled with the obsolete terms "mfd" or "MFD"; although these markings could be read as millifarad, in this context they too mean microfarad (a millifarad is 1000 microfarads and is not a value usually seen on motors). Failure modes A faulty run capacitor often becomes swollen, with the sides or ends bowed or bulged out further than usual; it is then clear that the capacitor has failed, because it is visibly swollen or even blown apart, allowing the capacitor's electrolyte to leak out. Some capacitors have a "pressure-sensitive interrupter" design that causes them to fail before internal pressures can cause serious injury. One such design causes the top of the capacitor to expand and break internal wiring. Over many years of use the capacitance of the capacitor drops; this is known as a "weak capacitor". As a result, the motor may fail to start or to run at full power. When a motor is running during a lightning strike on the power grid, the run capacitor may be damaged or weakened by a voltage spike, thus requiring replacement. IEC/EN 60252-1 2011 specifies the following levels of protection for motor run capacitors: S0 – no protection; S1, S2 – fail open-circuit or short-circuit; S3 – fail open-circuit only. Safety issues Heat A motor capacitor that is a component of a hot tub circulating pump can overheat if defective. This poses a fire hazard, and the U.S. Consumer Product Safety Commission (CPSC) has received more than 100 reports of incidents of overheating of the motor capacitor, with some fires started. Toxic Motor capacitors manufactured before 1978 likely contain polychlorinated biphenyls (PCBs). These are extremely toxic and persistent chemicals with many long-lasting negative human and wildlife health effects. Capacitors were required to be labeled in the U.S. with "No PCBs" or similar language. Capacitors without these labels are suspect. PCB-containing capacitors must be replaced and disposed of in consultation with local environmental authorities. If these capacitors leak or fail, PCBs can be released into the environment and will very likely result in extremely expensive environmental cleanups and, potentially, lawsuits. There are extensive toxicological reports on PCBs in the research community. The U.S. EPA has much information on the correct way to test and manage these toxic chemicals in older capacitors. References External links http://highperformancehvac.com/run-start-capacitors-hvac-motors-1/ http://www.capacitorformotor.com/motor_capacitor.html http://www.wikihow.com/Check-a-Start-Capacitor http://electronics.stackexchange.com/questions/101408/what-is-the-purpose-of-the-motor-run-capacitor https://www.epa.gov/sites/production/files/documents/pcbidmgmt.pdf Electric motors Capacitors Heating, ventilation, and air conditioning
Motor capacitor
[ "Physics", "Technology", "Engineering" ]
1,626
[ "Physical quantities", "Engines", "Electric motors", "Capacitors", "Capacitance", "Electrical engineering" ]
19,456,533
https://en.wikipedia.org/wiki/Minimum-distance%20estimation
Minimum-distance estimation (MDE) is a conceptual method for fitting a statistical model to data, usually the empirical distribution. Often-used estimators such as ordinary least squares can be thought of as special cases of minimum-distance estimation. While consistent and asymptotically normal, minimum-distance estimators are generally not statistically efficient when compared to maximum likelihood estimators, because they omit the Jacobian usually present in the likelihood function. This, however, substantially reduces the computational complexity of the optimization problem. Definition Let X1, ..., Xn be an independent and identically distributed (iid) random sample from a population with distribution F(x; θ) and θ ∈ Θ. Let Fn(x) be the empirical distribution function based on the sample. Let θ̂ be an estimator for θ. Then F(x; θ̂) is an estimator for F(x; θ). Let d[·,·] be a functional returning some measure of "distance" between its two arguments. The functional d is also called the criterion function. If there exists a θ̂ ∈ Θ such that d[F(x; θ̂), Fn(x)] = inf{d[F(x; θ), Fn(x)]; θ ∈ Θ}, then θ̂ is called the minimum-distance estimate of θ. Statistics used in estimation Most theoretical studies of minimum-distance estimation, and most applications, make use of "distance" measures which underlie already-established goodness of fit tests: the test statistic used in one of these tests is used as the distance measure to be minimised. Below are some examples of statistical tests that have been used for minimum-distance estimation. Chi-square criterion The chi-square test uses as its criterion the sum, over predefined groups, of the squared difference between the increases of the empirical distribution and the estimated distribution, weighted by the increase in the estimate for that group. Cramér–von Mises criterion The Cramér–von Mises criterion uses the integral of the squared difference between the empirical and the estimated distribution functions, Fn(x) and F(x; θ). Kolmogorov–Smirnov criterion The Kolmogorov–Smirnov test uses the supremum of the absolute difference between the empirical and the estimated distribution functions, Fn(x) and F(x; θ). Anderson–Darling criterion The Anderson–Darling test is similar to the Cramér–von Mises criterion except that the integral is of a weighted version of the squared difference, where the weighting relates to the variance of the empirical distribution function Fn(x). Theoretical results The theory of minimum-distance estimation is related to that for the asymptotic distribution of the corresponding statistical goodness of fit tests. Often the cases of the Cramér–von Mises criterion, the Kolmogorov–Smirnov test and the Anderson–Darling test are treated simultaneously by treating them as special cases of a more general formulation of a distance measure. Examples of the theoretical results that are available are: consistency of the parameter estimates; the asymptotic covariance matrices of the parameter estimates. See also Maximum likelihood estimation Maximum spacing estimation References Estimation methods Statistical distance Mathematical modeling
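As a concrete illustration of the procedure described above (this sketch is not part of the original article), the code below fits the two parameters of a normal distribution by minimising the Cramér–von Mises criterion between the model CDF and the empirical distribution function. The simulated sample, the starting values and the choice of optimiser are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.sort(rng.normal(loc=5.0, scale=2.0, size=500))   # iid sample; true mu = 5, sigma = 2
n = len(x)

def cvm_criterion(params):
    """Cramer-von Mises distance between the model CDF F(x; theta) and the ECDF."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf                        # keep the optimiser inside the parameter space
    F = norm.cdf(x, loc=mu, scale=sigma)     # model CDF evaluated at the order statistics
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((F - (2 * i - 1) / (2 * n)) ** 2)

# The minimum-distance estimate is the parameter value that minimises the criterion.
result = minimize(cvm_criterion, x0=[x.mean(), x.std()], method="Nelder-Mead")
mu_hat, sigma_hat = result.x
print(f"minimum-distance estimates: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```

Replacing cvm_criterion with the Kolmogorov–Smirnov distance (the maximum absolute difference between F and the ECDF) or an Anderson–Darling weighted sum gives the other minimum-distance estimators discussed above.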
Minimum-distance estimation
[ "Physics", "Mathematics" ]
559
[ "Mathematical modeling", "Physical quantities", "Statistical distance", "Distance", "Applied mathematics" ]
19,461,794
https://en.wikipedia.org/wiki/Porosity
Porosity or void fraction is a measure of the void (i.e. "empty") spaces in a material, and is a fraction of the volume of voids over the total volume, between 0 and 1, or as a percentage between 0% and 100%. Strictly speaking, some tests measure the "accessible void", the total amount of void space accessible from the surface (cf. closed-cell foam). There are many ways to test porosity in a substance or part, such as industrial CT scanning. The term porosity is used in multiple fields including pharmaceutics, ceramics, metallurgy, materials, manufacturing, petrophysics, hydrology, earth sciences, soil mechanics, rock mechanics, and engineering. Void fraction in two-phase flow In gas-liquid two-phase flow, the void fraction is defined as the fraction of the flow-channel volume that is occupied by the gas phase or, alternatively, as the fraction of the cross-sectional area of the channel that is occupied by the gas phase. Void fraction usually varies from location to location in the flow channel (depending on the two-phase flow pattern). It fluctuates with time and its value is usually time averaged. In separated (i.e., non-homogeneous) flow, it is related to volumetric flow rates of the gas and the liquid phase, and to the ratio of the velocity of the two phases (called slip ratio). Porosity in earth sciences and construction Used in geology, hydrogeology, soil science, and building science, the porosity of a porous medium (such as rock or sediment) describes the fraction of void space in the material, where the void may contain, for example, air or water. It is defined by the ratio φ = VV / VT, where VV is the volume of void-space (such as fluids) and VT is the total or bulk volume of material, including the solid and void components. Both the mathematical symbols φ and n are used to denote porosity. Porosity is a fraction between 0 and 1, typically ranging from less than 0.005 for solid granite to more than 0.5 for peat and clay. The porosity of a rock, or sedimentary layer, is an important consideration when attempting to evaluate the potential volume of water or hydrocarbons it may contain. Sedimentary porosity is a complicated function of many factors, including but not limited to: rate of burial, depth of burial, the nature of the connate fluids, the nature of overlying sediments (which may impede fluid expulsion). One commonly used relationship between porosity and depth is the decreasing exponential function given by the Athy (1930) equation: φ(z) = φ0 e^(−kz), where φ(z) is the porosity of the sediment at a given depth z (m), φ0 is the initial porosity of the sediment at the surface of the soil (before its burial), and k is the compaction coefficient (m−1). The letter e with a negative exponent denotes the decreasing exponential function. The porosity of the sediment exponentially decreases with depth, as a function of its compaction. A value for porosity can alternatively be calculated from the bulk density ρbulk, the saturating fluid density ρfluid and the particle density ρparticle: φ = (ρparticle − ρbulk) / (ρparticle − ρfluid). If the void space is filled with air, the following simpler form may be used: φ = 1 − ρbulk / ρparticle. A mean normal particle density can be taken as approximately 2.65 g/cm3 (silica, siliceous sediments or aggregates), or 2.70 g/cm3 (calcite, carbonate sediments or aggregates), although a better estimation can be obtained by examining the lithology of the particles. 
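A short numerical sketch of the relations just given may help; the sample values below are illustrative assumptions and are not taken from the article. It computes porosity from a dry bulk density and a particle density, and then applies Athy's exponential decline of porosity with burial depth.

```python
import math

# Porosity from dry (air-filled) bulk density: phi = 1 - rho_bulk / rho_particle
rho_particle = 2.65     # g/cm^3, typical value for silica/quartz grains
rho_bulk = 1.60         # g/cm^3, an illustrative dry bulk density for a sandy soil
phi = 1.0 - rho_bulk / rho_particle
print(f"porosity from bulk density: {phi:.3f}")     # about 0.40

# Athy (1930): phi(z) = phi0 * exp(-k * z)
phi0 = 0.50             # illustrative surface porosity
k = 0.0005              # illustrative compaction coefficient, 1/m
for z in (0, 500, 1000, 2000):                      # burial depth in metres
    print(f"depth {z:5d} m  ->  porosity {phi0 * math.exp(-k * z):.3f}")
```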
Porosity and hydraulic conductivity Porosity can be proportional to hydraulic conductivity; for two similar sandy aquifers, the one with a higher porosity will typically have a higher hydraulic conductivity (more open area for the flow of water), but there are many complications to this relationship. The principal complication is that there is not a direct proportionality between porosity and hydraulic conductivity but rather an inferred proportionality. There is a clear proportionality between pore throat radii and hydraulic conductivity. Also, there tends to be a proportionality between pore throat radii and pore volume. If the proportionality between pore throat radii and porosity exists then a proportionality between porosity and hydraulic conductivity may exist. However, as grain size or sorting decreases the proportionality between pore throat radii and porosity begins to fail and therefore so does the proportionality between porosity and hydraulic conductivity. For example: clays typically have very low hydraulic conductivity (due to their small pore throat radii) but also have very high porosities (due to the structured nature of clay minerals), which means clays can hold a large volume of water per volume of bulk material, but they do not release water rapidly and therefore have low hydraulic conductivity. Sorting and porosity Well sorted (grains of approximately all one size) materials have higher porosity than similarly sized poorly sorted materials (where smaller particles fill the gaps between larger particles). The graphic illustrates how some smaller grains can effectively fill the pores (where all water flow takes place), drastically reducing porosity and hydraulic conductivity, while only being a small fraction of the total volume of the material. For tables of common porosity values for earth materials, see the "further reading" section in the Hydrogeology article. Porosity of rocks Consolidated rocks (e.g., sandstone, shale, granite or limestone) potentially have more complex "dual" porosities, as compared with alluvial sediment. This can be split into connected and unconnected porosity. Connected porosity is more easily measured through the volume of gas or liquid that can flow into the rock, whereas fluids cannot access unconnected pores. Porosity is the ratio of pore volume to its total volume. Porosity is controlled by: rock type, pore distribution, cementation, diagenetic history and composition. Porosity is not controlled by grain size, as the volume of between-grain space is related only to the method of grain packing. Rocks normally decrease in porosity with age and depth of burial. Tertiary age Gulf Coast sandstones are in general more porous than Cambrian age sandstones. There are exceptions to this rule, usually because of the depth of burial and thermal history. Porosity of soil Porosity of surface soil typically decreases as particle size increases. This is due to soil aggregate formation in finer textured surface soils when subject to soil biological processes. Aggregation involves particulate adhesion and higher resistance to compaction. Typical bulk density of sandy soil is between 1.5 and 1.7 g/cm3. This calculates to a porosity between 0.43 and 0.36. Typical bulk density of clay soil is between 1.1 and 1.3 g/cm3. This calculates to a porosity between 0.58 and 0.51. This seems counterintuitive because clay soils are termed heavy, implying lower porosity. 
Heavy apparently refers to a gravitational moisture content effect in combination with terminology that harkens back to the relative force required to pull a tillage implement through the clayey soil at field moisture content as compared to sand. Porosity of subsurface soil is lower than in surface soil due to compaction by gravity. Porosity of 0.20 is considered normal for unsorted gravel size material at depths below the biomantle. Porosity in finer material below the aggregating influence of pedogenesis can be expected to approximate this value. Soil porosity is complex. Traditional models regard porosity as continuous. This fails to account for anomalous features and produces only approximate results. Furthermore, it cannot help model the influence of environmental factors which affect pore geometry. A number of more complex models have been proposed, including fractals, bubble theory, cracking theory, Boolean grain process, packed sphere, and numerous other models. The characterisation of pore space in soil is an associated concept. Types of geologic porosities Primary porosity The main or original porosity system in a rock or unconfined alluvial deposit. Secondary porosity A subsequent or separate porosity system in a rock, often enhancing overall porosity of a rock. This can be a result of chemical leaching of minerals or the generation of a fracture system. This can replace the primary porosity or coexist with it (see dual porosity below). Fracture porosity This is porosity associated with a fracture system or faulting. This can create secondary porosity in rocks that otherwise would not be reservoirs for hydrocarbons due to their primary porosity being destroyed (for example due to depth of burial) or of a rock type not normally considered a reservoir (for example igneous intrusions or metasediments). Vuggy porosity This is secondary porosity generated by dissolution of large features (such as macrofossils) in carbonate rocks leaving large holes, vugs, or even caves. Effective porosity (also called open porosity) Refers to the fraction of the total volume in which fluid flow is effectively taking place and includes catenary and dead-end (as these pores cannot be flushed, but they can cause fluid movement by release of pressure like gas expansion) pores and excludes closed pores (or non-connected cavities). This is very important for groundwater and petroleum flow, as well as for solute transport. Ineffective porosity (also called closed porosity) Refers to the fraction of the total volume in which fluids or gases are present but in which fluid flow can not effectively take place and includes the closed pores. Understanding the morphology of the porosity is thus very important for groundwater and petroleum flow. Dual porosity Refers to the conceptual idea that there are two overlapping reservoirs which interact. In fractured rock aquifers, the rock mass and fractures are often simulated as being two overlapping but distinct bodies. Delayed yield, and leaky aquifer flow solutions are both mathematically similar solutions to that obtained for dual porosity; in all three cases water comes from two mathematically different reservoirs (whether or not they are physically different). Macroporosity In solids (i.e. excluding aggregated materials such as soils), the term 'macroporosity' refers to pores greater than 50 nm in diameter. Flow through macropores is described by bulk diffusion. Mesoporosity In solids (i.e. 
excluding aggregated materials such as soils), the term 'mesoporosity' refers to pores greater than 2 nm and less than 50 nm in diameter. Flow through mesopores is described by Knudsen diffusion. Microporosity In solids (i.e. excluding aggregated materials such as soils), the term 'microporosity' refers to pores smaller than 2 nm in diameter. Movement in micropores is activated by diffusion. Porosity of fabric or aerodynamic porosity The ratio of holes to solid that the wind "sees". Aerodynamic porosity is less than visual porosity, by an amount that depends on the constriction of holes. Die casting porosity Casting porosity is a consequence of one or more of the following: gasification of contaminants at molten-metal temperatures; shrinkage that takes place as molten metal solidifies; and unexpected or uncontrolled changes in temperature or humidity. While porosity is inherent in die casting manufacturing, its presence may lead to component failure where pressure integrity is a critical characteristic. Porosity may take on several forms, from interconnected micro-porosity, folds, and inclusions to macro-porosity visible on the part surface. The end result of porosity is the creation of a leak path through the walls of a casting that prevents the part from holding pressure. Porosity may also lead to out-gassing during the painting process, leaching of plating acids, and tool chatter in machining pressed metal components. Measuring porosity Several methods can be employed to measure porosity: Direct methods (determining the bulk volume of the porous sample, and then determining the volume of the skeletal material with no pores; pore volume = total volume − material volume). Optical methods (e.g., determining the area of the material versus the area of the pores visible under the microscope). The "areal" and "volumetric" porosities are equal for porous media with random structure. Computed tomography method (using industrial CT scanning to create a 3D rendering of external and internal geometry, including voids, then implementing a defect analysis utilizing computer software). Imbibition methods, i.e., immersion of the porous sample, under vacuum, in a fluid that preferentially wets the pores. Water saturation method (pore volume = total volume of water − volume of water left after soaking). Water evaporation method (pore volume = (weight of saturated sample − weight of dried sample)/density of water). Mercury intrusion porosimetry (several non-mercury intrusion techniques have been developed due to toxicological concerns, and the fact that mercury tends to form amalgams with several metals and alloys). Gas expansion method. A sample of known bulk volume is enclosed in a container of known volume. It is connected to another container with a known volume which is evacuated (i.e., near vacuum pressure). When a valve connecting the two containers is opened, gas passes from the first container to the second until a uniform pressure distribution is attained. Using the ideal gas law, the volume of the pores is calculated as VV = VT − Va − Vb·P2/(P2 − P1), where VV is the effective volume of the pores, VT is the bulk volume of the sample, Va is the volume of the container containing the sample, Vb is the volume of the evacuated container, P1 is the initial pressure in volume Va and VV, and P2 is the final pressure present in the entire system. The porosity then follows straightforwardly from its definition, φ = VV / VT. Note that this method assumes that gas communicates between the pores and the surrounding volume.
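As a rough illustration of the gas expansion method just described, the following Python sketch evaluates the reconstructed formula for a made-up set of chamber volumes and pressures; all numbers are hypothetical and chosen only so that the arithmetic is easy to follow.

```python
def gas_expansion_pore_volume(v_bulk, v_sample_chamber, v_reference_chamber,
                              p_initial, p_final):
    """Pore volume from the gas-expansion (Boyle's law) method:
    V_V = V_T - V_a - V_b * P2 / (P2 - P1)
    where V_T is the sample bulk volume, V_a the chamber holding the sample,
    V_b the initially evacuated chamber, P1 the starting pressure and P2 the
    equilibrium pressure after the valve is opened (ideal-gas behaviour and
    open, connected pores are assumed)."""
    return v_bulk - v_sample_chamber - v_reference_chamber * p_final / (p_final - p_initial)

# Hypothetical bench numbers: volumes in cm^3, pressures in kPa.
v_pore = gas_expansion_pore_volume(v_bulk=10.0, v_sample_chamber=25.0,
                                   v_reference_chamber=25.0,
                                   p_initial=200.0, p_final=83.7)
print(round(v_pore, 2))          # ~3.0 cm^3 of open pore space
print(round(v_pore / 10.0, 2))   # porosity ~0.30
```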
In practice, this means that the pores must not be closed cavities. Thermoporosimetry and cryoporometry. A small crystal of a liquid melts at a lower temperature than the bulk liquid, as given by the Gibbs-Thomson equation. Thus if a liquid is imbibed into a porous material, and frozen, the melting temperature will provide information on the pore-size distribution. The detection of the melting can be done by sensing the transient heat flows during phase-changes using differential scanning calorimetry – (DSC thermoporometry), measuring the quantity of mobile liquid using nuclear magnetic resonance – (NMR cryoporometry) or measuring the amplitude of neutron scattering from the imbibed crystalline or liquid phases – (ND cryoporometry). See also Void ratio Petroleum geology Poromechanics Bulk density Particle density (packed density) Packing density Void (composites) Coherent diffraction imaging References Footnotes External links Absolute Porosity & Effective Porosity Calculations Geology Buzz: Porosity Aquifers Hydrogeology Porous media Soil physics Soil mechanics
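To illustrate the cryoporometry idea numerically, the sketch below inverts a simplified Gibbs-Thomson relation, delta_T ≈ K_GT / d, to turn an observed melting-point depression into an approximate pore diameter. The constant K_GT depends on the imbibed liquid and the pore geometry; the value used here is purely an assumed placeholder, not a figure from the article.

```python
def pore_diameter_from_melting_depression(delta_t_kelvin, k_gt=100.0):
    """Estimate pore diameter (nm) from melting-point depression (K) using a
    simplified Gibbs-Thomson form: delta_T = K_GT / d  =>  d = K_GT / delta_T.
    K_GT (K*nm) is liquid- and geometry-dependent; 100 K*nm is an assumed
    placeholder for illustration only."""
    return k_gt / delta_t_kelvin

for depression in (0.5, 2.0, 10.0):
    d = pore_diameter_from_melting_depression(depression)
    print(f"{depression} K depression -> pore diameter ~{d:.0f} nm")
```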
Porosity
[ "Physics", "Materials_science", "Mathematics", "Engineering", "Environmental_science" ]
3,128
[ "Physical phenomena", "Hydrology", "Applied and interdisciplinary physics", "Physical quantities", "Porous media", "Quantity", "Soil physics", "Soil mechanics", "Materials science", "Aquifers", "Physical properties", "Hydrogeology" ]
19,463,014
https://en.wikipedia.org/wiki/Atomic%20emission%20spectroscopy
Atomic emission spectroscopy (AES) is a method of chemical analysis that uses the intensity of light emitted from a flame, plasma, arc, or spark at a particular wavelength to determine the quantity of an element in a sample. The wavelength of the atomic spectral line in the emission spectrum gives the identity of the element while the intensity of the emitted light is proportional to the number of atoms of the element. The sample may be excited by various methods. Atomic emission spectroscopy allows the measurement of interactions between electromagnetic radiation and atoms and molecules. This interaction is measured in the form of electromagnetic waves representing the changes in energy between atomic energy levels. When elements are burned in a flame, they emit electromagnetic radiation that can be recorded in the form of spectral lines. Each element has its own unique set of spectral lines because each element has a different arrangement of electronic energy levels, so this method is an important tool for identifying the makeup of materials. Robert Bunsen and Gustav Kirchhoff were the first to establish atomic emission spectroscopy as a tool in chemistry. When an element is burned in a flame, its atoms move from the ground electronic state to an excited electronic state. As atoms in the excited state move back down into the ground state, they emit light. The Boltzmann expression is used to relate temperature to the number of atoms in the excited state, where higher temperatures indicate a larger population of excited atoms. This relationship is written as nupper/nlower = (gupper/glower)·exp(−(εupper − εlower)/kT), where nupper and nlower are the numbers of atoms in the higher and lower energy levels, gupper and glower are the degeneracies of the higher and lower energy levels, εupper and εlower are the energies of the higher and lower energy levels, k is the Boltzmann constant and T is the absolute temperature. The wavelengths of this light can be dispersed and measured by a monochromator, and the intensity of the light can be leveraged to determine the number of excited-state electrons present. For atomic emission spectroscopy, the radiation emitted by atoms in the excited state is measured specifically after they have already been excited. Much information can be obtained from the use of atomic emission spectroscopy by interpreting the spectral lines produced from exciting an atom. The width of spectral lines can provide information about an atom's kinetic temperature and electron density. Looking at the different intensities of spectral lines is useful for determining the chemical makeup of mixtures and materials. Atomic emission spectroscopy is mainly used for determining the makeup of mixtures because each element has its own unique spectrum. Flame The sample of a material (analyte) is brought into the flame as a gas, sprayed solution, or directly inserted into the flame by use of a small loop of wire, usually platinum. The heat from the flame evaporates the solvent and breaks intramolecular bonds to create free atoms. The thermal energy also excites the atoms into excited electronic states that subsequently emit light when they return to the ground electronic state. Each element emits light at a characteristic wavelength, which is dispersed by a grating or prism and detected in the spectrometer. A frequent application of flame emission measurement is the determination of alkali metals for pharmaceutical analytics.
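A short Python sketch of the Boltzmann expression above shows how strongly the excited-state population depends on flame temperature. The transition energy and degeneracy ratio are illustrative values loosely based on the sodium 3s-3p transition, not data from the article.

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def boltzmann_ratio(delta_e_ev, g_upper, g_lower, temperature_k):
    """n_upper / n_lower = (g_upper / g_lower) * exp(-(E_upper - E_lower) / (k_B * T))."""
    return (g_upper / g_lower) * math.exp(-delta_e_ev / (K_B_EV * temperature_k))

# Illustrative two-level atom roughly like sodium (~2.1 eV gap, degeneracy ratio ~3):
# only a tiny fraction of the atoms is excited at typical flame temperatures.
for flame_t in (2000.0, 2500.0, 3000.0):
    ratio = boltzmann_ratio(2.1, g_upper=6, g_lower=2, temperature_k=flame_t)
    print(f"T = {flame_t:.0f} K -> n_upper/n_lower ~ {ratio:.2e}")
```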
Inductively coupled plasma Inductively coupled plasma atomic emission spectroscopy (ICP-AES) uses an inductively coupled plasma to produce excited atoms and ions that emit electromagnetic radiation at wavelengths characteristic of a particular element. Advantages of ICP-AES are its excellent limit of detection and linear dynamic range, multi-element capability, low chemical interference and a stable and reproducible signal. Disadvantages are spectral interferences (many emission lines), cost and operating expense, and the fact that samples typically must be in a liquid solution. The inductively coupled plasma (ICP) source of the emission consists of an induction coil and a plasma. An induction coil is a coil of wire that has an alternating current flowing through it. This current induces a magnetic field inside the coil, coupling a great deal of energy to the plasma contained in a quartz tube inside the coil. A plasma is a collection of charged particles (cations and electrons) capable, by virtue of their charge, of interacting with a magnetic field. The plasmas used in atomic emission are formed by ionizing a flowing stream of argon gas. The plasma's high temperature results from resistive heating as the charged particles move through the gas. Because plasmas operate at much higher temperatures than flames, they provide better atomization and a higher population of excited states. The predominant form of sample matrix in ICP-AES today is a liquid sample: acidified water or solids digested into aqueous forms. Liquid samples are pumped into the nebulizer and sample chamber via a peristaltic pump. Then the samples pass through a nebulizer that creates a fine mist of liquid particles. Larger water droplets condense on the sides of the spray chamber and are removed via the drain, while finer water droplets move with the argon flow and enter the plasma. With plasma emission, it is also possible to analyze solid samples directly. These procedures include incorporating electrothermal vaporization, laser and spark ablation, and glow-discharge vaporization. Spark and arc Spark or arc atomic emission spectroscopy is used for the analysis of metallic elements in solid samples. For non-conductive materials, the sample is ground with graphite powder to make it conductive. In traditional arc spectroscopy methods, a sample of the solid was commonly ground up and destroyed during analysis. An electric arc or spark is passed through the sample, heating it to a high temperature to excite the atoms within it. The excited analyte atoms emit light at characteristic wavelengths that can be dispersed with a monochromator and detected. Because the spark or arc conditions were typically not well controlled in the past, the analysis for the elements in the sample was qualitative. However, modern spark sources with controlled discharges can be considered quantitative. Both qualitative and quantitative spark analysis are widely used for production quality control in foundry and metal casting facilities. See also Atomic absorption spectroscopy Atomic spectroscopy Inductively coupled plasma atomic emission spectroscopy Laser-induced breakdown spectroscopy References Bibliography External links Emission spectroscopy Scientific techniques Analytical chemistry
Atomic emission spectroscopy
[ "Physics", "Chemistry" ]
1,245
[ "Emission spectroscopy", "Spectroscopy", "Spectrum (physical sciences)", "nan" ]
19,464,175
https://en.wikipedia.org/wiki/Cold%20fission
Cold fission or cold nuclear fission is defined as involving fission events for which the fission fragments have such low excitation energy that no neutrons or gammas are emitted. Cold fission events have so low a probability of occurrence that it is necessary to use a high-flux nuclear reactor to study them. According to research first published in 1981, the first observation of cold fission events was in experiments on thermal-neutron-induced fission of uranium-233, uranium-235, and plutonium-239 using the high-flux reactor at the Institut Laue-Langevin in Grenoble, France. Other experiments on cold fission were also done involving curium-248 and californium-252. A unified approach of cluster decay, alpha decay and cold fission was developed by Dorin N. Poenaru et al. A phenomenological interpretation was proposed by Gönnenwein and Duarte et al. The importance of cold fission phenomena lies in the fact that fragments reaching the detectors have the same mass that they obtained at the "scission" configuration, just before the attractive but short-range nuclear force becomes null and only the Coulomb interaction acts between the fragments. After this, the Coulomb potential energy is converted into kinetic energy of the fragments, which, added to the pre-scission kinetic energy, is measured by the detectors. The fact that cold fission preserves nuclear mass until the fission fragments reach the detectors permits the experimenter to better determine the fission dynamics, especially the aspects related to Coulomb and shell effects in low-energy fission and nucleon pair breaking. Adopting several theoretical assumptions about the scission configuration, one can calculate the maximal value of kinetic energy as a function of the charge and mass of the fragments and compare it to experimental results. See also Cold fusion References Nuclear chemistry Nuclear physics Nuclear fission
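The conversion of Coulomb potential energy into fragment kinetic energy can be illustrated with a crude point-charge estimate. The Python sketch below uses a hypothetical fragment split and an assumed scission distance; it is meant only to show that the Coulomb term is of the same order as measured fragment kinetic energies, not to reproduce any experimental value.

```python
COULOMB_CONST_MEV_FM = 1.44  # e^2 / (4*pi*eps0), in MeV*fm

def coulomb_energy_mev(z1, z2, separation_fm):
    """Coulomb potential energy between two point-like fragments a given distance apart."""
    return COULOMB_CONST_MEV_FM * z1 * z2 / separation_fm

def touching_distance_fm(a1, a2, r0=1.2, gap_fm=1.0):
    """Rough centre-to-centre distance of two just-separated spherical fragments."""
    return r0 * (a1 ** (1 / 3) + a2 ** (1 / 3)) + gap_fm

# Hypothetical split of a uranium-like compound nucleus into (Z=40, A=100) and
# (Z=52, A=136) fragments; r0 and the gap are illustrative assumptions.
d = touching_distance_fm(100, 136)
e_c = coulomb_energy_mev(40, 52, d)
# ~12.7 fm and ~235 MeV: the right order of magnitude for total fragment kinetic
# energy (point charges overestimate somewhat because real fragments are deformed).
print(round(d, 1), "fm ->", round(e_c), "MeV")
```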
Cold fission
[ "Physics", "Chemistry" ]
358
[ "Nuclear chemistry", "Nuclear fission", "nan", "Nuclear physics" ]
19,465,941
https://en.wikipedia.org/wiki/Poisson%20limit%20theorem
In probability theory, the law of rare events or Poisson limit theorem states that the Poisson distribution may be used as an approximation to the binomial distribution, under certain conditions. The theorem was named after Siméon Denis Poisson (1781–1840). A generalization of this theorem is Le Cam's theorem. Theorem Let p_n be a sequence of real numbers in [0, 1] such that the sequence n·p_n converges to a finite limit λ. Then: lim_{n→∞} C(n, k) p_n^k (1 − p_n)^(n−k) = e^(−λ) λ^k / k!, where C(n, k) denotes the binomial coefficient. First proof Assume λ > 0 (the case λ = 0 is easier). Then C(n, k) p_n^k (1 − p_n)^(n−k) = [n(n−1)⋯(n−k+1)/n^k] · ((n·p_n)^k / k!) · (1 − n·p_n/n)^n · (1 − p_n)^(−k). Since n·p_n → λ, the factors n(n−1)⋯(n−k+1)/n^k and (1 − p_n)^(−k) both tend to 1 and (1 − n·p_n/n)^n → e^(−λ); this leaves lim_{n→∞} C(n, k) p_n^k (1 − p_n)^(n−k) = e^(−λ) λ^k / k!. Alternative proof Using Stirling's approximation n! ≈ √(2πn)·(n/e)^n, it can be written: C(n, k) p^k (1 − p)^(n−k) ≈ [√(2πn)·(n/e)^n / (k!·√(2π(n−k))·((n−k)/e)^(n−k))] · p^k (1 − p)^(n−k). Letting p = λ/n and simplifying: this becomes (λ^k / k!) · √(n/(n−k)) · (n/(n−k))^(n−k) · e^(−k) · (1 − λ/n)^(n−k). As n → ∞, √(n/(n−k)) → 1, (n/(n−k))^(n−k) → e^k and (1 − λ/n)^(n−k) → e^(−λ), so: C(n, k) p^k (1 − p)^(n−k) → e^(−λ) λ^k / k!. Ordinary generating functions It is also possible to demonstrate the theorem through the use of ordinary generating functions of the binomial distribution: G_bin(x; p, n) = Σ_{k=0}^{n} C(n, k) (p x)^k (1 − p)^(n−k) = [1 + (x − 1) p]^n by virtue of the binomial theorem. Taking the limit n → ∞ while keeping the product n·p = λ constant, it can be seen: lim_{n→∞} G_bin(x; λ/n, n) = lim_{n→∞} [1 + λ(x − 1)/n]^n = e^(λ(x−1)), which is the OGF for the Poisson distribution. (The second equality holds due to the definition of the exponential function.) See also De Moivre–Laplace theorem Le Cam's theorem References Articles containing proofs Probability theorems
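The statement of the theorem can be checked numerically: with n·p held fixed at λ, the binomial probabilities approach the Poisson probabilities as n grows. A small Python sketch using only the standard library:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Binomial probability C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """Poisson probability e^(-lambda) * lambda^k / k!."""
    return exp(-lam) * lam ** k / factorial(k)

lam, k = 4.0, 2
print("Poisson limit:", round(poisson_pmf(k, lam), 6))
for n in (10, 100, 1000, 10000):
    p = lam / n  # keep n * p fixed at lambda
    print(f"n = {n:5d}: binomial = {binom_pmf(k, n, p):.6f}")
```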
Poisson limit theorem
[ "Mathematics" ]
219
[ "Theorems in probability theory", "Articles containing proofs", "Mathematical theorems", "Mathematical problems" ]
19,467,352
https://en.wikipedia.org/wiki/Richter%20scale
The Richter scale, also called the Richter magnitude scale, Richter's magnitude scale, and the Gutenberg–Richter scale, is a measure of the strength of earthquakes, developed by Charles Richter in collaboration with Beno Gutenberg, and presented in Richter's landmark 1935 paper, where he called it the "magnitude scale". This was later revised and renamed the local magnitude scale, denoted as ML. Because of various shortcomings of the original scale, most seismological authorities now use other similar scales such as the moment magnitude scale (Mw) to report earthquake magnitudes, but much of the news media still erroneously refers to these as "Richter" magnitudes. All magnitude scales retain the logarithmic character of the original and are scaled to have roughly comparable numeric values (typically in the middle of the scale). Because earthquakes vary so widely in size, the Richter scale uses common (base-10) logarithms simply to make the measurements manageable (i.e., a magnitude 3 quake corresponds to a factor of 10³ while a magnitude 5 quake corresponds to a factor of 10⁵ and has seismometer readings 100 times larger). Richter magnitudes The Richter magnitude of an earthquake is determined from the logarithm of the amplitude of waves recorded by seismographs. Adjustments are included to compensate for the variation in the distance between the various seismographs and the epicenter of the earthquake. The original formula is ML = log10(A) − log10(A0(δ)) = log10[A / A0(δ)], where A is the maximum excursion of the Wood–Anderson seismograph and the empirical function A0 depends only on the epicentral distance of the station, δ. In practice, readings from all observing stations are averaged after adjustment with station-specific corrections to obtain the ML value. Because of the logarithmic basis of the scale, each whole number increase in magnitude represents a tenfold increase in measured amplitude. In terms of energy, each whole number increase corresponds to an increase of about 31.6 times the amount of energy released, and each increase of 0.2 corresponds to approximately a doubling of the energy released. Events with magnitudes greater than 4.5 are strong enough to be recorded by a seismograph anywhere in the world, so long as its sensors are not located in the earthquake's shadow. The following describes the typical effects of earthquakes of various magnitudes near the epicenter. The values are typical and may not be exact in a future event because intensity and ground effects depend not only on the magnitude but also on (1) the distance to the epicenter, (2) the depth of the earthquake's focus beneath the epicenter, (3) the location of the epicenter, and (4) geological conditions. (Based on U.S. Geological Survey documents.) The intensity and death toll depend on several factors (earthquake depth, epicenter location, and population density, to name a few) and can vary widely. Millions of minor earthquakes occur every year worldwide, equating to hundreds every hour every day. On the other hand, earthquakes of magnitude ≥8.0 occur about once a year, on average. The largest recorded earthquake was the Great Chilean earthquake of May 22, 1960, which had a magnitude of 9.5 on the moment magnitude scale. Seismologist Susan Hough has suggested that a magnitude 10 quake may represent a very approximate upper limit for what the Earth's tectonic zones are capable of, which would be the result of the largest known continuous belt of faults rupturing together (along the Pacific coast of the Americas).
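The logarithmic statements above translate directly into numbers. The short Python sketch below computes the amplitude and approximate energy factors implied by a magnitude difference, using the tenfold-amplitude rule and the roughly 10^(1.5·ΔM) energy scaling described in the text.

```python
def amplitude_factor(delta_magnitude):
    """Each whole unit of magnitude corresponds to a tenfold increase in measured amplitude."""
    return 10 ** delta_magnitude

def energy_factor(delta_magnitude):
    """Radiated energy grows roughly as 10**(1.5 * delta_M), i.e. ~31.6x per whole unit."""
    return 10 ** (1.5 * delta_magnitude)

for dm in (0.2, 1.0, 2.0):
    print(f"dM = {dm}: amplitude x{amplitude_factor(dm):.1f}, energy x{energy_factor(dm):.1f}")
# dM = 0.2 roughly doubles the energy; dM = 1.0 gives ~31.6x; dM = 2.0 gives ~1000x.
```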
A study at Tohoku University in Japan found that a magnitude 10 earthquake was theoretically possible if a sufficiently long combined stretch of faults from the Japan Trench to the Kuril–Kamchatka Trench ruptured together and moved by a sufficiently large displacement (or if a similar large-scale rupture occurred elsewhere). Such an earthquake would cause ground motions for up to an hour, with tsunamis hitting shores while the ground is still shaking, and if this kind of earthquake occurred, it would probably be a 1-in-10,000-year event. Development Prior to the development of the magnitude scale, the only measure of an earthquake's strength or "size" was a subjective assessment of the intensity of shaking observed near the epicenter of the earthquake, categorized by various seismic intensity scales such as the Rossi–Forel scale. ("Size" is used in the sense of the quantity of energy released, not the size of the area affected by shaking, though higher-energy earthquakes do tend to affect a wider area, depending on the local geology.) In 1883, John Milne surmised that the shaking of large earthquakes might generate waves detectable around the globe, and in 1899 Ernst von Rebeur-Paschwitz observed in Germany seismic waves attributable to an earthquake in Tokyo. In the 1920s, Harry O. Wood and John A. Anderson developed the Wood–Anderson seismograph, one of the first practical instruments for recording seismic waves. Wood then built, under the auspices of the California Institute of Technology and the Carnegie Institute, a network of seismographs stretching across Southern California. He also recruited the young and unknown Charles Richter to measure the seismograms and locate the earthquakes generating the seismic waves. In 1931, Kiyoo Wadati showed how he had measured, for several strong earthquakes in Japan, the amplitude of the shaking observed at various distances from the epicenter. He then plotted the logarithm of the amplitude against the distance and found a series of curves that showed a rough correlation with the estimated magnitudes of the earthquakes. Richter resolved some difficulties with this method and then, using data collected by his colleague Beno Gutenberg, he produced similar curves, confirming that they could be used to compare the relative magnitudes of different earthquakes. Additional developments were required to produce a practical method of assigning an absolute measure of magnitude. First, to span the wide range of possible values, Richter adopted Gutenberg's suggestion of a logarithmic scale, where each step represents a tenfold increase of magnitude, similar to the magnitude scale used by astronomers for star brightness. Second, he wanted a magnitude of zero to be around the limit of human perceptibility. Third, he specified the Wood–Anderson seismograph as the standard instrument for producing seismograms. Magnitude was then defined as "the logarithm of the maximum trace amplitude, expressed in microns", measured at a distance of 100 km. The scale was calibrated by defining a magnitude 0 shock as one that produces (at a distance of 100 km) a maximum amplitude of 1 micron (1 μm, or 0.001 millimeters) on a seismogram recorded by a Wood–Anderson torsion seismometer. Finally, Richter calculated a table of distance corrections, given that for distances less than 200 kilometers the attenuation is strongly affected by the structure and properties of the regional geology. When Richter presented the resulting scale in 1935, he called it (at the suggestion of Harry Wood) simply a "magnitude" scale.
"Richter magnitude" appears to have originated when Perry Byerly told the press that the scale was Richter's and "should be referred to as such." In 1956, Gutenberg and Richter, while still referring to "magnitude scale", labelled it "local magnitude", with the symbol , to distinguish it from two other scales they had developed, the surface-wave magnitude (MS) and body wave magnitude (MB) scales. Details The Richter scale was defined in 1935 for particular circumstances and instruments; the particular circumstances refer to it being defined for Southern California and "implicitly incorporates the attenuative properties of Southern California crust and mantle." The particular instrument used would become saturated by strong earthquakes and unable to record high values. The scale was replaced in the 1970s by the moment magnitude scale (MMS, symbol ); for earthquakes adequately measured by the Richter scale, numerical values are approximately the same. Although values measured for earthquakes now are , they are frequently reported by the press as Richter values, even for earthquakes of magnitude over 8, when the Richter scale becomes meaningless. The Richter and MMS scales measure the energy released by an earthquake; another scale, the Mercalli intensity scale, classifies earthquakes by their effects, from detectable by instruments but not noticeable, to catastrophic. The energy and effects are not necessarily strongly correlated; a shallow earthquake in a populated area with soil of certain types can be far more intense in impact than a much more energetic deep earthquake in an isolated area. Several scales have been historically described as the "Richter scale",, especially the local magnitude and the surface wave scale. In addition, the body wave magnitude, , and the moment magnitude, , abbreviated MMS, have been widely used for decades. A couple of new techniques to measure magnitude are in the development stage by seismologists. All magnitude scales have been designed to give numerically similar results. This goal has been achieved well for , , and . The scale gives somewhat different values than the other scales. The reason for so many different ways to measure the same thing is that at different distances, for different hypocentral depths, and for different earthquake sizes, the amplitudes of different types of elastic waves must be measured. is the scale used for the majority of earthquakes reported (tens of thousands) by local and regional seismological observatories. For large earthquakes worldwide, the moment magnitude scale (MMS) is most common, although is also reported frequently. The seismic moment, , is proportional to the area of the rupture times the average slip that took place in the earthquake, thus it measures the physical size of the event. is derived from it empirically as a quantity without units, just a number designed to conform to the scale. A spectral analysis is required to obtain . In contrast, the other magnitudes are derived from a simple measurement of the amplitude of a precisely defined wave. All scales, except , saturate for large earthquakes, meaning they are based on the amplitudes of waves that have a wavelength shorter than the rupture length of the earthquakes. These short waves (high-frequency waves) are too short a yardstick to measure the extent of the event. The resulting effective upper limit of measurement for is about 7 and about 8.5 for . 
New techniques to avoid the saturation problem and to measure magnitudes rapidly for very large earthquakes are being developed. One of these is based on the long-period P-wave; the other is based on a recently discovered channel wave. The energy release of an earthquake, which closely correlates to its destructive power, scales with the 3/2 power of the shaking amplitude (see Moment magnitude scale for an explanation). Thus, a difference in magnitude of 1.0 is equivalent to a factor of 31.6 (= 10^1.5) in the energy released; a difference in magnitude of 2.0 is equivalent to a factor of 1000 (= 10^3.0) in the energy released. The elastic energy radiated is best derived from an integration of the radiated spectrum, but an estimate can be based on MB because most energy is carried by the high-frequency waves. Magnitude empirical formulae These formulae for Richter magnitude are alternatives to using Richter correlation tables based on the Richter standard seismic event. In the formulas below, the epicentral distance is expressed either in kilometers or, where noted, as sea-level great-circle degrees. The Lillie empirical formula expresses the magnitude in terms of the amplitude (maximum ground displacement) of the P wave, in micrometers (μm), measured at 0.8 Hz, and the epicentral distance. Lahr's empirical formula proposal uses the seismograph signal amplitude in mm and the distance in km, with one expression for distances under 200 km and another for distances between 200 km and 600 km. The Bisztricsany empirical formula (1958), for epicentre distances between 4° and 160°, uses the duration of the surface wave in seconds and the distance in degrees; it applies mainly to magnitudes between 5 and 8. The Tsumura empirical formula uses the total duration of oscillation in seconds and applies mainly to magnitudes between 3 and 5. The Tsuboi (University of Tokyo) empirical formula uses the amplitude in μm. See also 1935 in science Seismic intensity scales Seismic magnitude scales Timeline of United States inventions (1890–1945) Notes Sources NUREG/CR-1457 External links Seismic Monitor – IRIS Consortium USGS Earthquake Magnitude Policy (implemented on January 18, 2002) – USGS Perspective: a graphical comparison of earthquake energy release – Pacific Tsunami Warning Center 1935 in science 1935 introductions California Institute of Technology Seismic magnitude scales Logarithmic scales of measurement American inventions
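The text above notes that Mw is derived empirically from the seismic moment M0 but does not give the formula. A commonly quoted form of the moment magnitude definition is Mw = (2/3)·(log10 M0 − 9.1) with M0 in newton-metres; the Python sketch below applies it. The constant 9.1 and the example moments are standard literature values assumed here, not figures stated in this article.

```python
import math

def moment_magnitude(seismic_moment_nm):
    """Mw = (2/3) * (log10(M0) - 9.1), with the seismic moment M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(seismic_moment_nm) - 9.1)

# Illustrative moments; ~2e23 N*m is roughly the value often quoted for the
# 1960 Chilean earthquake mentioned above (an approximate literature figure).
for m0 in (1e17, 1e20, 2.0e23):
    print(f"M0 = {m0:.1e} N*m -> Mw ~ {moment_magnitude(m0):.1f}")
```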
Richter scale
[ "Physics", "Mathematics" ]
2,599
[ "Quantity", "Logarithmic scales of measurement", "Physical quantities" ]
1,215,732
https://en.wikipedia.org/wiki/Radiant%20intensity
In radiometry, radiant intensity is the radiant flux emitted, reflected, transmitted or received, per unit solid angle, and spectral intensity is the radiant intensity per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. These are directional quantities. The SI unit of radiant intensity is the watt per steradian (W/sr), while that of spectral intensity in frequency is the watt per steradian per hertz (W·sr−1·Hz−1) and that of spectral intensity in wavelength is the watt per steradian per metre (W·sr−1·m−1)—commonly the watt per steradian per nanometre (W·sr−1·nm−1). Radiant intensity is distinct from irradiance and radiant exitance, which are often called intensity in branches of physics other than radiometry. In radio-frequency engineering, radiant intensity is sometimes called radiation intensity. Mathematical definitions Radiant intensity Radiant intensity, denoted Ie,Ω ("e" for "energetic", to avoid confusion with photometric quantities, and "Ω" to indicate this is a directional quantity), is defined as Ie,Ω = ∂Φe/∂Ω, where ∂ is the partial derivative symbol, Φe is the radiant flux emitted, reflected, transmitted or received, and Ω is the solid angle. In general, Ie,Ω is a function of viewing angle θ and potentially azimuth angle. For the special case of a Lambertian surface, Ie,Ω follows Lambert's cosine law Ie,Ω = I0 cos θ. When calculating the radiant intensity emitted by a source, Ω refers to the solid angle into which the light is emitted. When calculating radiance received by a detector, Ω refers to the solid angle subtended by the source as viewed from that detector. Spectral intensity Spectral intensity in frequency, denoted Ie,Ω,ν, is defined as Ie,Ω,ν = ∂Ie,Ω/∂ν, where ν is the frequency. Spectral intensity in wavelength, denoted Ie,Ω,λ, is defined as Ie,Ω,λ = ∂Ie,Ω/∂λ, where λ is the wavelength. Radio-frequency engineering Radiant intensity is used to characterize the emission of radiation by an antenna: Ie,Ω = Ee·r², where Ee is the irradiance of the antenna and r is the distance from the antenna. Unlike power density, radiant intensity does not depend on distance: because radiant intensity is defined as the power through a solid angle, the decreasing power density over distance due to the inverse-square law is offset by the increase in area with distance. SI radiometry units See also Candela Luminous intensity References External links Radiation: Activity and Intensity NDE/NDT Resource Center Physical quantities Radiometry
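A minimal Python sketch of the two relations reconstructed above: intensity as flux per unit solid angle (for an isotropic source) and the antenna relation Ie,Ω = Ee·r². The 100 W source and the 2 m distance are arbitrary illustrative values.

```python
import math

def intensity_isotropic(total_flux_w):
    """Radiant intensity of an isotropic source: flux spread evenly over 4*pi steradians."""
    return total_flux_w / (4.0 * math.pi)

def intensity_from_irradiance(irradiance_w_per_m2, distance_m):
    """I_e,Omega = E_e * r^2: the inverse-square fall-off of irradiance cancels the
    growth of area with distance, so the result does not depend on r."""
    return irradiance_w_per_m2 * distance_m ** 2

# A hypothetical 100 W isotropic emitter:
i0 = intensity_isotropic(100.0)
print(round(i0, 3), "W/sr")                    # ~7.958 W/sr
# The irradiance it produces at 2 m is I/r^2; converting back recovers the same intensity.
e_at_2m = i0 / 2.0 ** 2
print(round(intensity_from_irradiance(e_at_2m, 2.0), 3), "W/sr")
```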
Radiant intensity
[ "Physics", "Mathematics", "Engineering" ]
503
[ "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Quantity", "Physical properties", "Radiometry" ]
1,216,060
https://en.wikipedia.org/wiki/Nitride
In chemistry, a nitride is a chemical compound of nitrogen. Nitrides can be inorganic or organic, ionic or covalent. The nitride anion, the N3− ion, is very elusive, but compounds of nitride are numerous, although rarely naturally occurring. Some nitrides have found applications, such as wear-resistant coatings (e.g., titanium nitride, TiN), hard ceramic materials (e.g., silicon nitride, Si3N4), and semiconductors (e.g., gallium nitride, GaN). The development of GaN-based light emitting diodes was recognized by the 2014 Nobel Prize in Physics. Metal nitrido complexes are also common. Synthesis of inorganic metal nitrides is challenging because nitrogen gas (N2) is not very reactive at low temperatures, but it becomes more reactive at higher temperatures. Therefore, a balance must be achieved between the low reactivity of nitrogen gas at low temperatures and the entropy-driven formation of N2 at high temperatures. However, synthetic methods for nitrides are growing more sophisticated and the materials are of increasing technological relevance. Uses of nitrides Like carbides, nitrides are often refractory materials owing to their high lattice energy, which reflects the strong bonding of "N3−" to the metal cation(s). Thus, cubic boron nitride, titanium nitride, and silicon nitride are used as cutting materials and hard coatings. Hexagonal boron nitride, which adopts a layered structure, is a useful high-temperature lubricant akin to molybdenum disulfide. Nitride compounds often have large band gaps, thus nitrides are usually insulators or wide-bandgap semiconductors; examples include boron nitride and silicon nitride. The wide-bandgap material gallium nitride is prized for emitting blue light in LEDs. Like some oxides, nitrides can absorb hydrogen and have been discussed in the context of hydrogen storage, e.g. lithium nitride. Examples Classification of such a varied group of compounds is somewhat arbitrary. Compounds where nitrogen is not assigned the −3 oxidation state are not included, such as nitrogen trichloride where the oxidation state is +3; nor are ammonia and its many organic derivatives. Nitrides of the s-block elements Only one alkali metal nitride is stable, the purple-reddish lithium nitride (Li3N), which forms when lithium burns in an atmosphere of N2. Sodium nitride and potassium nitride have been generated, but remain laboratory curiosities. The nitrides of the alkaline earth metals, which have the formula M3N2, are however numerous. Examples include beryllium nitride (Be3N2), magnesium nitride (Mg3N2), calcium nitride (Ca3N2), and strontium nitride (Sr3N2). The nitrides of electropositive metals (including Li, Zn, and the alkaline earth metals) readily hydrolyze upon contact with water, including the moisture in the air; lithium nitride, for example, gives lithium hydroxide and ammonia: Li3N + 3 H2O → 3 LiOH + NH3. Nitrides of the p-block elements Boron nitride exists as several forms (polymorphs). Nitrides of silicon and phosphorus are also known, but only the former is commercially important. The nitrides of aluminium, gallium, and indium adopt the hexagonal wurtzite structure in which each atom occupies tetrahedral sites. For example, in aluminium nitride, each aluminium atom has four neighboring nitrogen atoms at the corners of a tetrahedron and similarly each nitrogen atom has four neighboring aluminium atoms at the corners of a tetrahedron. This structure is like hexagonal diamond (lonsdaleite) where every carbon atom occupies a tetrahedral site (however wurtzite differs from sphalerite and diamond in the relative orientation of the tetrahedra).
Thallium(I) nitride (Tl3N) is known, but thallium(III) nitride (TlN) is not. Transition metal nitrides Most metal-rich transition metal nitrides adopt a relatively ordered face-centered cubic or hexagonal close-packed crystal structure, with octahedral coordination. Sometimes these materials are called "interstitial nitrides". They are essential for industrial metallurgy, because they are typically much harder and less ductile than their parent metal, and resist air-oxidation. For the group 3 metals, ScN and YN are both known. Group 4, 5, and 6 transition metals (the titanium, vanadium and chromium groups) all form chemically stable, refractory nitrides with high melting points. Thin films of titanium nitride, zirconium nitride, and tantalum nitride protect many industrial surfaces. Nitrides of the group 7 and 8 transition metals tend to be nitrogen-poor, and decompose readily at elevated temperatures. For example, iron nitride decomposes at 200 °C. Platinum nitride and osmium nitride may contain N2 units, and as such should not be called nitrides. Nitrides of the heavier members of groups 11 and 12 are less stable than copper nitride (Cu3N) and zinc nitride (Zn3N2): dry silver nitride (Ag3N) is a contact explosive which may detonate from the slightest touch, even a falling water droplet. Nitrides of the lanthanides and actinides Nitride-containing species of the lanthanides and actinides are of scientific interest as they can provide a useful handle for determining the covalency of bonding. Nuclear magnetic resonance (NMR) spectroscopy along with quantum chemical analysis has often been used to determine the degree to which metal-nitride bonds are ionic or covalent in character. One example, a uranium nitride, has the highest known nitrogen-15 chemical shift. Molecular nitrides Many metals form molecular nitrido complexes, as discussed in the specialized article. The main group elements also form some molecular nitrides. Cyanogen ((CN)2) and tetrasulfur tetranitride (S4N4) are rare examples of molecular binary (containing one element aside from nitrogen) nitrides. They dissolve in nonpolar solvents. Both undergo polymerization. S4N4 is also unstable with respect to the elements, but less so than the isostructural Se4N4. Heating S4N4 gives a polymer, and a variety of molecular sulfur nitride anions and cations are also known. Related to but distinct from nitride are the pernitride diatomic anion and the azide triatomic anion (N3−). References Nitride Anions Nitrides
Nitride
[ "Physics", "Chemistry" ]
1,411
[ "Ions", "Matter", "Anions" ]
1,216,077
https://en.wikipedia.org/wiki/Electrophilic%20addition
In organic chemistry, an electrophilic addition (AE) reaction is an addition reaction where a chemical compound containing a double or triple bond has a π bond broken, with the formation of two new σ bonds. The driving force for this reaction is the formation of an electrophile X+ that forms a covalent bond with an electron-rich, unsaturated C=C bond. The positive charge on X is transferred to the carbon-carbon bond, forming a carbocation during the formation of the C-X bond. In the second step of an electrophilic addition, the positively charged intermediate combines with an electron-rich species (a nucleophile) to form the second covalent bond. The second step is the same nucleophilic attack process found in an SN1 reaction. The exact nature of the electrophile and the nature of the positively charged intermediate are not always clear and depend on the reactants and reaction conditions. In all asymmetric addition reactions to carbon, regioselectivity is important and is often determined by Markovnikov's rule. Organoborane compounds give anti-Markovnikov additions. Electrophilic attack on an aromatic system results in electrophilic aromatic substitution rather than an addition reaction. Typical electrophilic additions Typical electrophilic additions to alkenes, with the corresponding reagents, are: Halogen addition reactions: X2 Hydrohalogenations: HX Hydration reactions: H2O Hydrogenations: H2 Oxymercuration reactions: mercuric acetate, water Hydroboration-oxidation reactions: diborane the Prins reaction: formaldehyde, water References Reaction mechanisms
Electrophilic addition
[ "Chemistry" ]
336
[ "Reaction mechanisms", "Chemical kinetics", "Physical organic chemistry" ]
1,216,247
https://en.wikipedia.org/wiki/Wolff%E2%80%93Kishner%20reduction
The Wolff–Kishner reduction is a reaction used in organic chemistry to convert carbonyl functionalities into methylene groups. In the context of complex molecule synthesis, it is most frequently employed to remove a carbonyl group after it has served its synthetic purpose of activating an intermediate in a preceding step. As such, there is no obvious retron for this reaction. The reaction was reported by Nikolai Kischner in 1911 and Ludwig Wolff in 1912. In general, the reaction mechanism first involves the in situ generation of a hydrazone by condensation of hydrazine with the ketone or aldehyde substrate. Sometimes it is however advantageous to use a pre-formed hydrazone as substrate (see modifications). The rate determining step of the reaction is de-protonation of the hydrazone by an alkoxide base to form a diimide anion by a concerted, solvent mediated protonation/de-protonation step. Collapse of this alkyldiimide with loss of N2 leads to formation of an alkylanion which can be protonated by solvent to give the desired product. Because the Wolff–Kishner reduction requires highly basic conditions, it is unsuitable for base-sensitive substrates. In some cases, formation of the required hydrazone will not occur at sterically hindered carbonyl groups, preventing the reaction. However, this method can be superior to the related Clemmensen reduction for compounds containing acid-sensitive functional groups such as pyrroles and for high-molecular weight compounds. History The Wolff–Kishner reduction was discovered independently by N. Kishner in 1911 and Ludwig Wolff in 1912. Kishner found that addition of pre-formed hydrazone to hot potassium hydroxide containing crushed platinized porous plate led to formation of the corresponding hydrocarbon. A review titled “Disability, Despotism, Deoxygenation—From Exile to Academy Member: Nikolai Matveevich Kizhner” describing the life and work of Kishner was published in 2013. Wolff later accomplished the same result by heating an ethanol solution of semicarbazones or hydrazones in a sealed tube to 180 °C in the presence of sodium ethoxide. The method developed by Kishner has the advantage of avoiding the requirement of a sealed tube, but both methodologies suffered from unreliability when applied to many hindered substrates. These disadvantages promoted the development of Wolff’s procedure, wherein the use of high-boiling solvents such as ethylene glycol and triethylene glycol were implemented to allow for the high temperatures required for the reaction while avoiding the need of a sealed tube. These initial modifications were followed by many other improvements as described below. Mechanism The mechanism of the Wolff–Kishner reduction has been studied by Szmant and coworkers. According to Szmant's research, the first step in this reaction is the formation of a hydrazone anion 1 by deprotonation of the terminal nitrogen by MOH. If semicarbazones are used as substrates, initial conversion into the corresponding hydrazone is followed by deprotonation. A range of mechanistic data suggests that the rate-determining step involves formation of a new carbon–hydrogen bond at the carbon terminal in the delocalized hydrazone anion. This proton capture takes place in a concerted fashion with a solvent-induced abstraction of the second proton at the nitrogen terminal. Szmant’s finding that this reaction is first order in both hydroxide ion and ketone hydrazone supports this mechanistic proposal. 
Several molecules of solvent have to be involved in this process in order to allow for a concerted process. A detailed Hammett analysis of aryl aldehydes, methyl aryl ketones and diaryl ketones showed a non-linear relationship which the authors attribute to the complexity of the rate-determining step. Mildly electron-withdrawing substituents favor carbon-hydrogen bond formation, but highly electron-withdrawing substituents will decrease the negative charge at the terminal nitrogen and in turn favor a bigger and harder solvation shell that will render breaking of the N-H bond more difficult. The exceptionally high negative entropy of activation values observed can be explained by the high degree of organization in the proposed transition state. It was furthermore found that the rate of the reaction depends on the concentration of the hydroxylic solvent and on the cation in the alkoxide catalyst. The presence of crown ether in the reaction medium can increase the reactivity of the hydrazone anion 1 by dissociating the ion pair and therefore enhance the reaction rate. The final step of the Wolff–Kishner reduction is the collapse of the dimide anion 2 in the presence of a proton source to give the hydrocarbon via loss of dinitrogen to afford an alkyl anion 3, which undergoes rapid and irreversible acid-base reaction with solvent to give the alkane. Evidence for this high-energy intermediate was obtained by Taber via intramolecular trapping. The stereochemical outcome of this experiment was more consistent with an alkyl anion intermediate than the alternative possibility of an alkyl radical. The overall driving force of the reaction is the evolution of nitrogen gas from the reaction mixture. Modifications Many of the efforts devoted to improve the Wolff–Kishner reduction have focused on more efficient formation of the hydrazone intermediate by removal of water and a faster rate of hydrazone decomposition by increasing the reaction temperature. Some of the newer modifications provide more significant advances and allow for reactions under considerably milder conditions. The table shows a summary of some of the modifications that have been developed since the initial discovery. Huang Minlon modification In 1946, Huang Minlon reported a modified procedure for the Wolff–Kishner reduction of ketones in which excess hydrazine and water were removed by distillation after hydrazone formation. The temperature-lowering effect of water that was produced in hydrazone formation usually resulted in long reaction times and harsh reaction conditions even if anhydrous hydrazine was used in the formation of the hydrazone. The modified procedure consists of refluxing the carbonyl compound in 85% hydrazine hydrate with three equivalents of sodium hydroxide followed by distillation of water and excess hydrazine and elevation of the temperature to 200 °C. Significantly reduced reaction times and improved yields can be obtained using this modification. Minlon's original report described the reduction of β-(p-phenoxybenzoyl)propionic acid to γ-(p-phenoxyphenyl)butyric acid in 95% yield compared to 48% yield obtained by the traditional procedure. Barton modification Nine years after Huang Minlon’s first modification, Barton developed a method for the reduction of sterically hindered carbonyl groups. This method features rigorous exclusion of water, higher temperatures, and longer reaction times as well as sodium in diethylene glycol instead of alkoxide base. 
Under these conditions, some of the problems that normally arise with hindered ketones can be alleviated—for example, the C11-carbonyl group in the steroidal compound shown below was successfully reduced under Barton's conditions while Huang–Minlon conditions failed to effect this transformation. Cram modification Slow addition of preformed hydrazones to potassium tert-butoxide in DMSO as reaction medium instead of glycols allows hydrocarbon formation to be conducted successfully at temperatures as low as 23 °C. Cram attributed the higher reactivity in DMSO as solvent to the higher base strength of potassium tert-butoxide in this medium. This modification has not been exploited to a great extent in organic synthesis due to the necessity to isolate preformed hydrazone substrates and to add the hydrazone over several hours to the reaction mixture. Henbest modification Henbest extended Cram's procedure by refluxing carbonyl hydrazones and potassium tert-butoxide in dry toluene. Slow addition of the hydrazone is not necessary and it was found that this procedure is better suited for carbonyl compounds prone to base-induced side reactions than Cram's modification. It has for example been found that double bond migration in α,β-unsaturated enones and functional group elimination of certain α-substituted ketones are less likely to occur under Henbest's conditions. Caglioti reaction Treatment of tosylhydrazones with hydride-donor reagents to obtain the corresponding alkanes is known as the Caglioti reaction. The initially reported reaction conditions have been modified and hydride donors such as sodium cyanoborohydride, sodium triacetoxyborohydride, or catecholborane can reduce tosylhydrazones to hydrocarbons. The reaction proceeds under relatively mild conditions and can therefore tolerate a wider array of functional groups than the original procedure. Reductions with sodium cyanoborohydride as reducing agent can be conducted in the presence of esters, amides, cyano-, nitro- and chloro-substituents. Primary bromo- and iodo-substituents are displaced by nucleophilic hydride under these conditions. The reduction pathway is sensitive to the pH, the reducing agent, and the substrate. One possibility, occurring under acidic conditions, includes direct hydride attack of the iminium ion 1 following prior protonation of the tosylhydrazone. The resulting tosylhydrazine derivative 2 subsequently undergoes elimination of p-toluenesulfinic acid and decomposes via a diimine intermediate 3 to the corresponding hydrocarbon. A slight variation of this mechanism occurs when tautomerization to the azohydrazone is facilitated by inductive effects. The transient azohydrazine 4 can then be reduced to the tosylhydrazine derivative 2 and furnish the decarbonylated product analogously to the first possibility. This mechanism operates when relatively weak hydride donors are used, such as sodium cyanoborohydride. It is known that sodium cyanoborohydride is not strong enough to reduce imines, but it can reduce iminium ions. When stronger hydride donors are used, a different mechanism is operational, which avoids the use of acidic conditions. Hydride delivery occurs to give intermediate 5, followed by elimination of the metal sulfinate to give azo intermediate 6. This intermediate then decomposes, with loss of nitrogen gas, to give the reduced compound. When strongly basic hydride donors such as lithium aluminium hydride are used, deprotonation of the tosylhydrazone can occur before hydride delivery.
Intermediate anion 7 can undergo hydride attack, eliminating a metal sulfinate to give azo anion 8. This readily decomposes to carbanion 9, which is protonated to give the reduced product. As with the parent Wolff–Kishner reduction, the decarbonylation reaction can often fail due to unsuccessful formation of the corresponding tosylhydrazone. This is common for sterically hindered ketones, as was the case for the cyclic amino ketone shown below. Alternative methods of reduction can be employed when formation of the hydrazone fails, including thioketal reduction with Raney nickel or reaction with sodium triethylborohydride. Deoxygenation of α,β-unsaturated carbonyl compounds α,β-Unsaturated carbonyl tosylhydrazones can be converted into the corresponding alkenes with migration of the double bond. The reduction proceeds stereoselectively to furnish the E geometric isomer. A very mild method uses one equivalent of catecholborane to reduce α,β-unsaturated tosylhydrazones. The mechanism of NaBH3CN reduction of α,β-unsaturated tosylhydrazones has been examined using deuterium labeling. Alkene formation is initiated by hydride reduction of the iminium ion followed by double bond migration and nitrogen extrusion, which occur in a concerted manner. Allylic diazene rearrangement as the final step in the reductive 1,3-transposition of α,β-unsaturated tosylhydrazones to the reduced alkenes can also be used to establish sp3-stereocenters from allylic diazenes containing prochiral stereocenters. The influence of the alkoxy stereocenter results in diastereoselective reduction of the α,β-unsaturated tosylhydrazone. The authors predicted that diastereoselective transfer of the diazene hydrogen to one face of the prochiral alkene could be enforced during the suprafacial rearrangement. Myers modification In 2004, Myers and coworkers developed a method for the preparation of N-tert-butyldimethylsilylhydrazones from carbonyl-containing compounds. These products can be used as a superior alternative to hydrazones in the transformation of ketones into alkanes. The advantages of this procedure are considerably milder reaction conditions and higher efficiency as well as operational convenience. The condensation of 1,2-bis(tert-butyldimethylsilyl)-hydrazine with aldehydes and ketones with Sc(OTf)3 as catalyst is rapid and efficient at ambient temperature. Formation and reduction of N-tert-butyldimethylsilylhydrazones can be conducted in a one-pot procedure in high yield. The newly developed method was compared directly to the standard Huang–Minlon Wolff–Kishner reduction conditions (hydrazine hydrate, potassium hydroxide, diethylene glycol, 195 °C) for the steroidal ketone shown above. The product was obtained in 79% yield compared to 91% obtained from the reduction via an intermediate N-tert-butyldimethylsilylhydrazone. Side reactions The Wolff–Kishner reduction is not suitable for base-sensitive substrates and can under certain conditions be hampered by steric hindrance surrounding the carbonyl group. Some of the more common side-reactions are listed below. Azine formation A commonly encountered side-reaction in Wolff–Kishner reductions involves azine formation by reaction of the hydrazone with the carbonyl compound. Formation of the ketone can be suppressed by vigorous exclusion of water during the reaction. Several of the presented procedures require isolation of the hydrazone compound prior to reduction.
This can be complicated by further transformation of the product hydrazone to the corresponding hydrazine during product purification. Cram found that azine formation is favored by rapid addition of preformed hydrazones to potassium tert-butoxide in anhydrous dimethylsulfoxide. Reduction of ketones to alcohols by sodium ethoxide The second principal side reaction is the reduction of the ketone or aldehyde to the corresponding alcohol. After initial hydrolysis of the hydrazone, the free carbonyl derivative is reduced by alkoxide to the carbinol. In 1924, Eisenlohr reported that substantial amounts of hydroxydecalin were observed during the attempted Wolff–Kishner reduction of trans-β-decalone. In general, alcohol formation may be repressed by exclusion of water or by addition of excess hydrazine. Kishner–Leonard elimination Kishner noted during his initial investigations that in some instances, α-substitution of a carbonyl group can lead to elimination affording unsaturated hydrocarbons under typical reaction conditions. Leonard later further developed this reaction and investigated the influence of different α-substituents on the reaction outcome. He found that the amount of elimination increases with increasing steric bulk of the leaving group. Furthermore, α-dialkylamino-substituted ketones generally gave a mixture of reduction and elimination product whereas less basic leaving groups resulted in exclusive formation of the alkene product. The fragmentation of α,β-epoxy ketones to allylic alcohols has been extended to a synthetically useful process and is known as the Wharton reaction. Cleavage or rearrangement of strained rings adjacent to the carbonyl group Grob rearrangement of strained rings adjacent to the carbonyl group has been observed by Erman and coworkers. During an attempted Wolff–Kishner reduction of trans-π-bromocamphor under Cram’s conditions, limonene was isolated as the only product. Similarly, cleavage of strained rings adjacent to the carbonyl group can occur. When 9β,19-cyclo-5α-pregnane-3,11,20-trione 3,20-diethylene ketal was subjected to Huang–Minlon conditions, ring-enlargement was observed instead of formation of the 11-deoxo-compound. Applications in total synthesis The Wolff–Kishner reduction has been applied to the total synthesis of scopadulcic acid B, aspidospermidine and dysidiolide. The Huang Minlon modification of the Wolff–Kishner reduction is one of the final steps in their synthesis of (±)-aspidospermidine. The carbonyl group that was reduced in the Wolff–Kishner reduction was essential for preceding steps in the synthesis. The tertiary amide was stable to the reaction conditions and reduced subsequently by lithium aluminum hydride. Amides are usually not suitable substrates for the Wolff–Kishner reduction as demonstrated by the example above. Coe and coworkers found however that a twisted amide can be efficiently reduced under Wolff–Kishner conditions. The authors explain this observation with the stereoelectronic bias of the substrate which prevents “anti–Bredt” iminium ion formation and therefore favors ejection of alcohol and hydrazone formation. The amide functionality in this strained substrate can be considered as isolated amine and ketone functionalities as resonance stabilization is prevented due to torsional restrictions. The product was obtained in 68% overall yield in a two step procedure. A tricyclic carbonyl compound was reduced using the Huang Minlon modification of the Wolff–Kishner reduction. 
Several attempts towards decarbonylation of the tricyclic, allylic acetate-containing ketone failed, and the acetate functionality had to be removed to allow Wolff–Kishner reduction. Finally, the allylic alcohol was installed via oxyplumbation. The Wolff–Kishner reduction has also been used on kilogram scale for the synthesis of a functionalized imidazole substrate. Several alternative reduction methods were investigated, but all of the tested conditions remained unsuccessful. Safety concerns for a large-scale Wolff–Kishner reduction were addressed, and a highly optimized procedure afforded the product in good yield. An allylic diazene rearrangement was used in the synthesis of the C21–C34 fragment of antascomicin B. The hydrazone was reduced selectively with catecholborane, and the excess reducing agent was decomposed with sodium thiosulfate. The crude reaction product was then treated with sodium acetate to give the 1,4-syn isomer. See also Clemmensen reduction Wharton reaction Shapiro reaction References Further reading Todd, D. The Wolff-Kishner Reduction. In Org. React. (eds. Adams, E.); John Wiley & Sons, Inc.: London, 1948, 4, 378 Hutchins, R. O. Reduction of C=X to CH2 by Wolff-Kishner and Other Hydrazone Methods. In Comp. Org. Synth. (eds. Trost, B. M., Fleming, I.); Pergamon: Oxford, 1991, 8, 327 Lewis, D. E. The Wolff-Kishner Reduction and Related Reactions. Discovery and Development; Elsevier: Amsterdam, 2019. Organic reduction reactions Name reactions
Wolff–Kishner reduction
[ "Chemistry" ]
4,173
[ "Name reactions", "Organic redox reactions", "Organic reactions" ]
1,217,104
https://en.wikipedia.org/wiki/Stokes%20shift
Stokes shift is the difference (in energy, wavenumber or frequency units) between positions of the band maxima of the absorption and emission spectra (fluorescence and Raman being two examples) of the same electronic transition. It is named after Irish physicist George Gabriel Stokes. When a system (be it a molecule or atom) absorbs a photon, it gains energy and enters an excited state. The system can relax by emitting a photon. The Stokes shift occurs when the energy of the emitted photon is lower than that of the absorbed photon, representing the difference in energy of the two photons. The Stokes shift is primarily the result of two phenomena: vibrational relaxation or dissipation and solvent reorganization. A fluorophore is a part of a molecule with a dipole moment that exhibits fluorescence. When a fluorophore enters an excited state, its dipole moment changes, but surrounding solvent molecules cannot adjust so quickly. Only after vibrational relaxation do their dipole moments realign. Stokes shifts are given in wavelength units, but this is less meaningful than energy, wavenumber or frequency units because it depends on the absorption wavelength. For instance, a 50 nm Stokes shift from absorption at 300 nm is larger in terms of energy than a 50 nm Stokes shift from absorption at 600 nm. Stokes fluorescence Stokes fluorescence is the emission of a longer-wavelength photon (lower frequency or energy) by a molecule that has absorbed a photon of shorter wavelength (higher frequency or energy). Both absorption and radiation (emission) of energy are distinctive for a particular molecular structure. If a material has a direct bandgap in the range of visible light, the light shining on it is absorbed, which excites electrons to a higher-energy state. The electrons remain in the excited state for about 10⁻⁸ seconds. This number varies over several orders of magnitude, depending on the sample, and is known as the fluorescence lifetime of the sample. After losing a small amount of energy through vibrational relaxation, the molecule returns to the ground state, and energy is emitted. 
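To make the comparison above concrete, the following short Python sketch (not part of the original article; the function name and example wavelengths are illustrative) converts absorption and emission peak wavelengths into a Stokes shift in wavenumber units, showing that a 50 nm shift at 300 nm corresponds to a much larger energetic shift than the same 50 nm at 600 nm.

```python
# Illustrative sketch: express a Stokes shift in wavenumbers (cm^-1) instead of nm,
# since the energetic size of a fixed wavelength shift depends on the absolute wavelength.

def stokes_shift_wavenumber(absorption_nm: float, emission_nm: float) -> float:
    """Stokes shift in cm^-1 from absorption/emission band maxima given in nm."""
    return 1e7 / absorption_nm - 1e7 / emission_nm   # 1 cm = 1e7 nm

print(stokes_shift_wavenumber(300.0, 350.0))  # ~4762 cm^-1 (50 nm shift at 300 nm)
print(stokes_shift_wavenumber(600.0, 650.0))  # ~1282 cm^-1 (50 nm shift at 600 nm)
```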
By harnessing anti-Stokes fluorescence, this pigment enables the creation of vibrant and durable inks, coatings, and materials with enhanced visibility and authentication capabilities. Photon upconversion Photon upconversion is an anti-Stokes process where lower-energy photons are converted into higher-energy photons. An example of this latter process is demonstrated by upconverting nanoparticles. Anti-Stokes emission is more commonly observed in Raman spectroscopy, where it can be used to determine the temperature of a material. Optoelectronic devices In direct-bandgap thin-film semiconducting layers, Stokes-shifted emission can originate from three main sources: doping, strain, and disorder. Each of these factors can introduce variations in the energy levels of the semiconductor material, leading to a shift in the emitted light towards longer wavelengths compared to the incident light. This phenomenon is particularly relevant in optoelectronic devices where controlling these factors can be crucial for optimizing device performance. See also Jablonski diagram Kasha's rule References Fluorescence Raman spectroscopy
Stokes shift
[ "Chemistry" ]
925
[ "Luminescence", "Fluorescence" ]
1,217,124
https://en.wikipedia.org/wiki/CERN%20Axion%20Solar%20Telescope
The CERN Axion Solar Telescope (CAST) is an experiment in astroparticle physics to search for axions originating from the Sun. The experiment, sited at CERN in Switzerland, was commissioned in 1999 and came online in 2002, with the first data-taking run starting in May 2003. The successful detection of solar axions would constitute a major discovery in particle physics, and would also open up a brand new window on the astrophysics of the solar core. CAST is currently the most sensitive axion helioscope. Theory and operation If axions exist, they may be produced in the Sun's core when X-rays scatter off electrons and protons in the presence of strong electric fields. The experimental setup is built around a 9.26 m long decommissioned test magnet for the LHC capable of producing a field of up to . This strong magnetic field is expected to convert solar axions back into X-rays for subsequent detection by X-ray detectors. The telescope observes the Sun for about 1.5 hours at sunrise and another 1.5 hours at sunset each day. The remaining 21 hours, with the instrument pointing away from the Sun, are spent measuring background levels. CAST began operation in 2003 searching for axions up to . In 2005, helium-4 was added to the magnet, extending sensitivity to masses up to 0.39 eV; helium-3 was then used during 2008–2011 for masses up to 1.15 eV. CAST then ran with vacuum again, searching for axions below 0.02 eV. As of 2014, CAST has not turned up definitive evidence for solar axions. It has considerably narrowed down the range of parameters where these elusive particles may exist. CAST has set significant limits on axion coupling to electrons and photons. A 2017 paper using data from the 2013–2015 run reported a new best limit on the axion-photon coupling of 0.66×10⁻¹⁰ GeV⁻¹. Building upon the experience of CAST, a much larger, new-generation axion helioscope, the International Axion Observatory (IAXO), has been proposed and is now under preparation. Detectors CAST searches for solar axions using a helioscope, which is a 9.26 m superconducting LHC prototype dipole magnet. The superconducting magnet is kept at a constant temperature of 1.8 kelvin using superfluid helium. There are two magnet bores of 43 mm diameter and 9.26 m length, with X-ray detectors placed at both ends. These detectors are sensitive to photons from inverse Primakoff conversion of solar axions. The two X-ray telescopes of CAST measure both signal and background simultaneously with the same detector, which reduces the systematic uncertainties. From 2003 to 2013, the following three detectors were attached to the ends of the dipole magnet, all based on the inverse Primakoff effect, to detect the photons converted from the solar axions. Conventional time projection chamber detectors (TPC). MICROMEsh GAseous Structure detectors (MICROMEGAS). X-ray telescope with a charge-coupled device (CCD). After 2013 several new detectors, such as RADES, GridPix, and KWISP, were installed, with modified goals and newly enhanced technologies. Conventional time projection chamber detectors (TPC) The TPC is a gas-filled drift-chamber type of detector, designed to detect the low-intensity X-ray signals at CAST. The interactions in this detector take place in a very large gaseous chamber and produce ionizing electrons. These electrons travel towards the multiwire proportional chamber (MWPC), where the signal is then amplified through the avalanche process. 
MICROMEsh GAseous Structure detectors (MICROMEGAS) This detector operated during the period of 2002 to 2004. It is a gaseous detector and was primarily employed to detect X-rays in the energy range of 1–10 keV. The detector itself was made of low-radioactivity materials. The choice of materials was mainly based on reducing the background noise, and Micromegas achieved a significantly low background level even without any shielding. X-ray telescope with a charge-coupled device (CCD) This detector has a pn-CCD chip located at the focal plane of the X-ray telescope. The X-ray telescope is based on the popular Wolter-I mirror optics concept. This technique is widely used in almost all X-ray astronomy telescopes. Its mirror is made up of 27 gold-coated nickel shells. These parabolic and hyperbolic shells are confocally arranged to optimize the resolution. The largest shell is 163 mm in diameter, while the smallest is 76 mm. The overall mirror system has a focal length of 1.6 m. This detector achieved a remarkably good signal-to-noise ratio by focusing the X-rays produced by axion conversion inside the magnetic field region onto a small focal spot of only a few square millimetres. GridPix detector In 2016, the GridPix detector was installed to detect the soft X-rays (energy range of 200 eV to 10 keV) generated by solar chameleons through the Primakoff effect. During the search period of 2014 to 2015 the detected signal-to-noise ratio was below the required levels. InGrid Based X-ray detector The sole aim of this detector is to enhance the sensitivity of CAST to energies around the 1 keV range. This improved, more sensitive detector was set up in 2014 behind the X-ray telescope for the search for solar chameleons, which have low threshold energies. The InGrid detector and its granular Timepix pad readout, with a low energy threshold of 0.1 keV for photon detection, hunt for solar chameleons in this range. Relic Axion Dark Matter Exploratory Setup (RADES) RADES started searching for axion-like dark matter in 2018, and the first results from this detector were published in early 2021. Although no significant axion signal was detected above the noise background during the 2018 to 2021 period, RADES became the first detector to search for axions above . The CAST helioscope (which points at the Sun) was converted into a haloscope (which samples the galactic halo) in late 2017. The RADES detector attached to this haloscope has a 1 m long alternating-iris stainless-steel cavity able to search for dark matter axions around . Further improvements to the detector system, such as superconducting cavities and ferromagnetic tuning, are being investigated. KWISP detector KWISP at CAST is designed to detect the coupling of solar chameleons with matter particles. It uses a very sensitive optomechanical force sensor, capable of detecting a displacement in a thin membrane caused by the mechanical effects of solar chameleon interactions. CAST-CAPP This detector has a delicate tuning mechanism, made of 2 parallel sapphire plates and activated by a piezoelectric motor. The maximum tuning corresponds to axion masses between 21 and 23 μeV. The CAST-CAPP detector is also sensitive to dark matter axion tidal or cosmological streams and to theorized axion miniclusters. A newer and improved version is being developed at the Center for Axion and Precision Physics Research (CAPP) in South Korea. Results The CAST experiment began with the goal of devising new methods and implementing novel technologies for the detection of solar axions. 
Owing to the interdisciplinary and interrelated fields of axion studies, dark matter, dark energy, and axion-like exotic particles, the new collaborations at CAST have broadened their research into the wide field of astroparticle physics. Results from these different domains are described below. Constraints on axions During the initial years, axion detection was the primary goal of CAST. Although the CAST experiment has not yet observed axions directly, it has constrained the search parameters. The mass and the coupling constant of an axion are the primary aspects of its detectability. Over almost 20 years of operation, CAST has placed very significant constraints on the properties of solar axions and axion-like particles. In the initial run period, the first three CAST detectors put an upper limit on (the parameter for axion-photon coupling) with a 95% confidence limit (CL) for low axion masses. For the axion mass range between and , RADES constrained the axion-photon coupling constant with about 5% error. The most recent results, in 2017, set an upper limit on the coupling (with 95% CL) for all axions with masses below 0.02 eV. CAST has thus improved the previous astrophysical limits and has probed numerous relevant axion models of sub-electron-volt mass. Search for dark matter CAST was able to constrain the axion-photon coupling constant from very low masses up to the hot dark matter sector, and the current search range overlaps with the present cosmic hot dark matter bound on the axion mass, . The new detectors at CAST are also looking for proposed dark matter candidates such as solar chameleons and paraphotons, as well as relic axions from the Big Bang and inflation. In late 2017, the CAST helioscope, which originally searched for solar axions and ALPs, was converted into a haloscope to hunt for the dark matter wind in the Milky Way's galactic halo as it crosses the Earth. This idea of a streaming dark wind is thought to affect and cause the random and anisotropic orientation of solar flares, for which the CAST haloscope will serve as a testbed. Search for dark energy In the dark energy domain CAST is currently looking for signatures of a chameleon, which is hypothesized to be a particle produced when dark energy interacts with photons. This area is currently in its beginning stages, wherein possible ways of dark energy particles coupling with normal matter are being theorized. Using the GridPix detector, the upper bound on the chameleon-photon coupling constant was determined for values of (the chameleon-matter coupling constant) in the range of 1 to . The KWISP detector obtained an upper limit, in piconewtons, on the force acting on its membrane due to chameleons, which corresponds to a specific exclusion zone in the - plane and complements the results obtained by GridPix. References External links "CAST experiment constrains solar axions". cerncourier.com. 19 May 2017. Experiments for dark matter search High energy particle telescopes CERN experiments Solar telescopes CERN facilities Particle physics facilities
CERN Axion Solar Telescope
[ "Physics" ]
2,142
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
12,718,563
https://en.wikipedia.org/wiki/Gravity%20turn
A gravity turn or zero-lift turn is a maneuver used in launching a spacecraft into, or descending from, an orbit around a celestial body such as a planet or a moon. It is a trajectory optimization that uses gravity to steer the vehicle onto its desired trajectory. It offers two main advantages over a trajectory controlled solely through the vehicle's own thrust. First, the thrust is not used to change the spacecraft's direction, so more of it is used to accelerate the vehicle into orbit. Second, and more importantly, during the initial ascent phase the vehicle can maintain low or even zero angle of attack. This minimizes transverse aerodynamic stress on the launch vehicle, allowing for a lighter launch vehicle. The term gravity turn can also refer to the use of a planet's gravity to change a spacecraft's direction in situations other than entering or leaving the orbit. When used in this context, it is similar to a gravitational slingshot; the difference is that a gravitational slingshot often increases or decreases spacecraft velocity and changes direction, while the gravity turn only changes direction. Launch procedure Vertical climb A gravity turn is commonly used with rocket powered vehicles that launch vertically, like the Space Shuttle. The rocket begins by flying straight up, gaining both vertical speed and altitude. During this portion of the launch, gravity acts directly against the thrust of the rocket, lowering its vertical acceleration. Losses associated with this slowing are known as gravity drag, and can be minimized by executing the next phase of the launch, the pitchover maneuver or roll program, as soon as possible. The pitchover should also be carried out while the vertical velocity is small to avoid large aerodynamic loads on the vehicle during the maneuver. The pitchover maneuver consists of the rocket gimbaling its engine slightly to direct some of its thrust to one side. This force creates a net torque on the ship, turning it so that it no longer points vertically. The pitchover angle varies with the launch vehicle and is included in the rocket's inertial guidance system. For some vehicles it is only a few degrees, while other vehicles use relatively large angles (a few tens of degrees). After the pitchover is complete, the engines are reset to point straight down the axis of the rocket again. This small steering maneuver is the only time during an ideal gravity turn ascent that thrust must be used for purposes of steering. The pitchover maneuver serves two purposes. First, it turns the rocket slightly so that its flight path is no longer vertical, and second, it places the rocket on the correct heading for its ascent to orbit. After the pitchover, the rocket's angle of attack is adjusted to zero for the remainder of its climb to orbit. This zeroing of the angle of attack reduces lateral aerodynamic loads and produces negligible lift force during the ascent. Downrange acceleration After the pitchover, the rocket's flight path is no longer completely vertical, so gravity acts to turn the flight path back towards the ground. If the rocket were not producing thrust, the flight path would be a simple ellipse like a thrown ball (it is a common mistake to think it is a parabola: this is only true if it is assumed that the Earth is flat, and gravity always points in the same direction, which is a good approximation for short distances), leveling off and then falling back to the ground. 
The rocket is producing thrust though, and rather than leveling off and then descending again, by the time the rocket levels off, it has gained sufficient altitude and velocity to place it in a stable orbit. If the rocket is a multi-stage system where stages fire sequentially, the rocket's ascent burn may not be continuous. Some time must be allowed for stage separation and engine ignition between each successive stage, but some rocket designs call for extra free-flight time between stages. This is particularly useful in very high thrust rockets, where if the engines were fired continuously, the rocket would run out of fuel before leveling off and reaching a stable orbit above the atmosphere. The technique is also useful when launching from a planet with a thick atmosphere, such as the Earth. Because gravity turns the flight path during free flight, the rocket can use a smaller initial pitchover angle, giving it higher vertical velocity, and taking it out of the atmosphere more quickly. This reduces both aerodynamic drag as well as aerodynamic stress during launch. Then later during the flight the rocket coasts between stage firings, allowing it to level off above the atmosphere, so when the engine fires again, at zero angle of attack, the thrust accelerates the ship horizontally, inserting it into orbit. Descent and landing procedure Because heat shields and parachutes cannot be used to land on an airless body such as the Moon, a powered descent with a gravity turn is a good alternative. The Apollo Lunar Module used a slightly modified gravity turn to land from lunar orbit. This was essentially a launch in reverse except that a landing spacecraft is lightest at the surface while a spacecraft being launched is heaviest at the surface. A computer program called Lander that simulated gravity turn landings applied this concept by simulating a gravity turn launch with a negative mass flow rate, i.e. the propellant tanks filled during the rocket burn. The idea of using a gravity turn maneuver to land a vehicle was originally developed for the Lunar Surveyor landings, although Surveyor made a direct approach to the surface without first going into lunar orbit. Deorbit and entry The vehicle begins by orienting for a retrograde burn to reduce its orbital velocity, lowering its point of periapsis to near the surface of the body to be landed on. If the craft is landing on a planet with an atmosphere such as Mars the deorbit burn will only lower periapsis into the upper layers of the atmosphere, rather than just above the surface as on an airless body. After the deorbit burn is complete the vehicle can either coast until it is nearer to its landing site or continue firing its engine while maintaining zero angle of attack. For a planet with an atmosphere the coast portion of the trip includes entry through the atmosphere as well. After the coast and possible entry, the vehicle jettisons any no longer necessary heat shields and/or parachutes in preparation for the final landing burn. If the atmosphere is thick enough it can be used to slow the vehicle a considerable amount, thus saving on fuel. In this case a gravity turn is not the optimal entry trajectory but it does allow for approximation of the true delta-v required. In the case where there is no atmosphere however, the landing vehicle must provide the full delta-v necessary to land safely on the surface. 
Landing If it is not already properly oriented, the vehicle lines up its engines to fire directly opposite its current surface velocity vector, which at this point is either parallel to the ground or only slightly vertical, as shown to the left. The vehicle then fires its landing engine to slow down for landing. As the vehicle loses horizontal velocity the gravity of the body to be landed on will begin pulling the trajectory closer and closer to a vertical descent. In an ideal maneuver on a perfectly spherical body the vehicle could reach zero horizontal velocity, zero vertical velocity, and zero altitude all at the same moment, landing safely on the surface (if the body is not rotating; else the horizontal velocity shall be made equal to the one of the body at the considered latitude). However, due to rocks and uneven surface terrain the vehicle usually picks up a few degrees of angle of attack near the end of the maneuver to zero its horizontal velocity just above the surface. This process is the mirror image of the pitch over maneuver used in the launch procedure and allows the vehicle to hover straight down, landing gently on the surface. Guidance and control The steering of a rocket's course during its flight is divided into two separate components; control, the ability to point the rocket in a desired direction, and guidance, the determination of what direction a rocket should be pointed to reach a given target. The desired target can either be a location on the ground, as in the case of a ballistic missile, or a particular orbit, as in the case of a launch vehicle. Launch The gravity turn trajectory is most commonly used during early ascent. The guidance program is a precalculated lookup table of pitch vs time. Control is done with engine gimballing and/or aerodynamic control surfaces. The pitch program maintains a zero angle of attack (the definition of a gravity turn) until the vacuum of space is reached, thus minimizing lateral aerodynamic loads on the vehicle. (Excessive aerodynamic loads can quickly destroy the vehicle.) Although the preprogrammed pitch schedule is adequate for some applications, an adaptive inertial guidance system that determines location, orientation and velocity with accelerometers and gyroscopes, is almost always employed on modern rockets. The British satellite launcher Black Arrow was an example of a rocket that flew a preprogrammed pitch schedule, making no attempt to correct for errors in its trajectory, while the Apollo-Saturn rockets used "closed loop" inertial guidance after the gravity turn through the atmosphere. The initial pitch program is an open-loop system subject to errors from winds, thrust variations, etc. To maintain zero angle of attack during atmospheric flight, these errors are not corrected until reaching space. Then a more sophisticated closed-loop guidance program can take over to correct trajectory deviations and attain the desired orbit. In the Apollo missions, the transition to closed-loop guidance took place early in second stage flight after maintaining a fixed inertial attitude while jettisoning the first stage and interstage ring. Because the upper stages of a rocket operate in a near vacuum, fins are ineffective. Steering relies entirely on engine gimballing and a reaction control system. Landing To serve as an example of how the gravity turn can be used for a powered landing, an Apollo type lander on an airless body will be assumed. The lander begins in a circular orbit docked to the command module. 
After separation from the command module the lander performs a retrograde burn to lower its periapsis to just above the surface. It then coasts to periapsis where the engine is restarted to perform the gravity turn descent. It has been shown that in this situation guidance can be achieved by maintaining a constant angle between the thrust vector and the line of sight to the orbiting command module. This simple guidance algorithm builds on a previous study which investigated the use of various visual guidance cues including the uprange horizon, the downrange horizon, the desired landing site, and the orbiting command module. The study concluded that using the command module provides the best visual reference, as it maintains a near constant visual separation from an ideal gravity turn until the landing is almost complete. Because the vehicle is landing in a vacuum, aerodynamic control surfaces are useless. Therefore, a system such as a gimballing main engine, a reaction control system, or possibly a control moment gyroscope must be used for attitude control. Limitations Although gravity turn trajectories use minimal steering thrust they are not always the most efficient possible launch or landing procedure. Several things can affect the gravity turn procedure making it less efficient or even impossible due to the design limitations of the launch vehicle. A brief summary of factors affecting the turn is given below. Atmosphere — In order to minimize gravity drag the vehicle should begin gaining horizontal speed as soon as possible. On an airless body such as the Moon this presents no problem, however on a planet with a dense atmosphere this is not possible. A trade-off exists between flying higher before starting downrange acceleration, thus increasing gravity drag losses; or starting downrange acceleration earlier, reducing gravity drag but increasing the aerodynamic drag experienced during launch. Maximum dynamic pressure — Another effect related to the planet's atmosphere is the maximum dynamic pressure exerted on the launch vehicle during the launch. Dynamic pressure is related to both the atmospheric density and the vehicle's speed through the atmosphere. Just after liftoff the vehicle is gaining speed and increasing dynamic pressure faster than the reduction in atmospheric density can decrease the dynamic pressure. This causes the dynamic pressure exerted on the vehicle to increase until the two rates are equal. This is known as the point of maximum dynamic pressure (abbreviated "max Q"), and the launch vehicle must be built to withstand this amount of stress during launch. As before a trade off exists between gravity drag from flying higher first to avoid the thicker atmosphere when accelerating; or accelerating more at lower altitude, resulting in a heavier launch vehicle because of a higher maximum dynamic pressure experienced on launch. Maximum engine thrust — The maximum thrust the rocket engine can produce affects several aspects of the gravity turn procedure. Firstly, before the pitch over maneuver the vehicle must be capable of not only overcoming the force of gravity but accelerating upwards. The more acceleration the vehicle has beyond the acceleration of gravity the quicker vertical speed can be obtained allowing for lower gravity drag in the initial launch phase. When the pitch over is executed the vehicle begins its downrange acceleration phase; engine thrust affects this phase as well. Higher thrust allows for a faster acceleration to orbital velocity as well. 
By reducing this time the rocket can level off sooner; further reducing gravity drag losses. Although higher thrust can make the launch more efficient, accelerating too much low in the atmosphere increases the maximum dynamic pressure. This can be alleviated by throttling the engines back during the beginning of downrange acceleration until the vehicle has climbed higher. However, with solid fuel rockets this may not be possible. Maximum tolerable payload acceleration — Another limitation related to engine thrust is the maximum acceleration that can be safely sustained by the crew and/or the payload. Near main engine cut off (MECO), when the launch vehicle has consumed most of its fuel, the vehicle will be much lighter than it was at launch. If the engines are still producing the same amount of thrust, the acceleration will grow as a result of the decreasing vehicle mass. If this acceleration is not kept in check by throttling back the engines, injury to the crew or damage to the payload could occur. This forces the vehicle to spend more time gaining horizontal velocity, increasing gravity drag. Use in orbital redirection For spacecraft missions where large changes in the direction of flight are necessary, direct propulsion by the spacecraft may not be feasible due to the large delta-v requirement. In these cases it may be possible to perform a flyby of a nearby planet or moon, using its gravitational attraction to alter the ship's direction of flight. Although this maneuver is very similar to the gravitational slingshot it differs in that a slingshot often implies a change in both speed and direction whereas the gravity turn only changes the direction of flight. A variant of this maneuver, the free return trajectory allows the spacecraft to depart from a planet, circle another planet once, and return to the starting planet using propulsion only during the initial departure burn. Although in theory it is possible to execute a perfect free return trajectory, in practice small correction burns are often necessary during the flight. Even though it does not require a burn for the return trip, other return trajectory types, such as an aerodynamic turn, can result in a lower total delta-v for the mission. Use in spaceflight Many spaceflight missions have utilized the gravity turn, either directly or in a modified form, to carry out their missions. What follows is a short list of various mission that have used this procedure. Surveyor program — A precursor to the Apollo Program, the Surveyor Program's primary mission objective was to develop the ability to perform soft landings on the surface of the Moon, through the use of an automated descent and landing program built into the lander. Although the landing procedure can be classified as a gravity turn descent, it differs from the technique most commonly employed in that it was shot from the Earth directly to the lunar surface, rather than first orbiting the Moon as the Apollo landers did. Because of this the descent path was nearly vertical, although some "turning" was done by gravity during the landing. Apollo program — Launches of the Saturn V rocket during the Apollo program were carried out using a gravity turn in order to minimize lateral stress on the rocket. At the other end of their journey, the lunar landers utilized a gravity turn landing and ascent from the Moon. 
Mathematical description The simplest case of the gravity turn trajectory is that which describes a point-mass vehicle in a uniform gravitational field, neglecting air resistance. The thrust force $\vec{F}$ is a vector whose magnitude is a function of time and whose direction can be varied at will. Under these assumptions the differential equation of motion is given by $m\,\frac{d\vec{v}}{dt} = \vec{F} - mg\,\hat{k}$. Here $\hat{k}$ is a unit vector in the vertical direction and $m$ is the instantaneous vehicle mass. By constraining the thrust vector to point parallel to the velocity and separating the equation of motion into components parallel to $\vec{v}$ and those perpendicular to $\vec{v}$, we arrive at the following system: $\dot{v} = g\,(n - \cos\beta)$, $v\,\dot{\beta} = g\,\sin\beta$. Here the current thrust-to-weight ratio has been denoted by $n = F/(mg)$ and the current angle between the velocity vector and the vertical by $\beta$, where $0 \le \beta \le \pi$. This results in a coupled system of equations which can be integrated to obtain the trajectory. However, for all but the simplest case of constant $n$ over the entire flight, the equations cannot be solved analytically and must be integrated numerically. References Rocketry Spaceflight concepts
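As a rough illustration of the numerical integration mentioned above, the following Python sketch (illustrative only; the constant thrust-to-weight ratio, the initial speed and the pitchover angle are assumed values, not taken from the article) integrates the planar gravity-turn equations with simple Euler steps.

```python
# Minimal sketch: integrate  dv/dt = g(n - cos(beta)),  d(beta)/dt = (g/v) sin(beta)
# for a constant thrust-to-weight ratio n, starting just after the pitchover maneuver.
import math

g = 9.81                      # m/s^2, uniform gravitational field
n = 2.0                       # thrust-to-weight ratio (assumed constant)
v = 50.0                      # m/s, speed just after pitchover (assumed)
beta = math.radians(2.0)      # initial angle from the vertical set by the pitchover (assumed)
dt = 0.1                      # s, integration step

for _ in range(int(120 / dt)):             # simulate two minutes of flight
    v += g * (n - math.cos(beta)) * dt     # thrust minus the gravity component along the velocity
    beta += (g / v) * math.sin(beta) * dt  # gravity slowly rotates the velocity away from vertical
print(f"speed = {v:.0f} m/s, angle from vertical = {math.degrees(beta):.1f} deg")
```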
Gravity turn
[ "Engineering" ]
3,497
[ "Rocketry", "Aerospace engineering" ]
8,065,677
https://en.wikipedia.org/wiki/Distance%20measure
Distance measures are used in physical cosmology to give a natural notion of the distance between two objects or events in the universe. They are often used to tie some observable quantity (such as the luminosity of a distant quasar, the redshift of a distant galaxy, or the angular size of the acoustic peaks in the cosmic microwave background (CMB) power spectrum) to another quantity that is not directly observable, but is more convenient for calculations (such as the comoving coordinates of the quasar, galaxy, etc.). The distance measures discussed here all reduce to the common notion of Euclidean distance at low redshift. In accord with our present understanding of cosmology, these measures are calculated within the context of general relativity, where the Friedmann–Lemaître–Robertson–Walker solution is used to describe the universe. Overview There are a few different definitions of "distance" in cosmology which are all asymptotic one to another for small redshifts. The expressions for these distances are most practical when written as functions of redshift , since redshift is always the observable. They can also be written as functions of scale factor In the remainder of this article, the peculiar velocity is assumed to be negligible unless specified otherwise. We first give formulas for several distance measures, and then describe them in more detail further down. Defining the "Hubble distance" as where is the speed of light, is the Hubble parameter today, and is the dimensionless Hubble constant, all the distances are asymptotic to for small . According to the Friedmann equations, we also define a dimensionless Hubble parameter: Here, and are normalized values of the present radiation energy density, matter density, and "dark energy density", respectively (the latter representing the cosmological constant), and determines the curvature. The Hubble parameter at a given redshift is then . The formula for comoving distance, which serves as the basis for most of the other formulas, involves an integral. Although for some limited choices of parameters (see below) the comoving distance integral has a closed analytic form, in general—and specifically for the parameters of our universe—we can only find a solution numerically. Cosmologists commonly use the following measures for distances from the observer to an object at redshift along the line of sight (LOS): Comoving distance: Transverse comoving distance: Angular diameter distance: Luminosity distance: Light-travel distance: Alternative terminology Peebles calls the transverse comoving distance the "angular size distance", which is not to be mistaken for the angular diameter distance. Occasionally, the symbols or are used to denote both the comoving and the angular diameter distance. Sometimes, the light-travel distance is also called the "lookback distance" and/or "lookback time". Details Peculiar velocity In real observations, the movement of the Earth with respect to the Hubble flow has an effect on the observed redshift. There are actually two notions of redshift. One is the redshift that would be observed if both the Earth and the object were not moving with respect to the "comoving" surroundings (the Hubble flow), defined by the cosmic microwave background. The other is the actual redshift measured, which depends both on the peculiar velocity of the object observed and on their peculiar velocity. 
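As an illustration of how these distance measures are evaluated in practice, the following Python sketch (not from the article; the cosmological parameters are assumed round values) computes the line-of-sight comoving distance by numerical integration for a flat ΛCDM model and derives the angular diameter and luminosity distances from it.

```python
# Illustrative sketch, assuming a flat universe with round parameters
# (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7).
import math

H0 = 70.0                    # km/s/Mpc (assumed)
c = 299792.458               # km/s
omega_m, omega_lambda = 0.3, 0.7
d_H = c / H0                 # Hubble distance in Mpc

def E(z):
    """Dimensionless Hubble parameter (radiation and curvature neglected)."""
    return math.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda)

def comoving_distance(z, steps=10_000):
    """Line-of-sight comoving distance d_C = d_H * integral_0^z dz'/E(z'), midpoint rule."""
    dz = z / steps
    return d_H * sum(dz / E((i + 0.5) * dz) for i in range(steps))

z = 1.0
d_C = comoving_distance(z)   # equals the transverse comoving distance in a flat universe
d_A = d_C / (1.0 + z)        # angular diameter distance
d_L = (1.0 + z) * d_C        # luminosity distance; note d_L = (1 + z)^2 * d_A
print(f"d_C = {d_C:.0f} Mpc, d_A = {d_A:.0f} Mpc, d_L = {d_L:.0f} Mpc")
```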
Since the Solar System is moving at around 370 km/s in a direction between Leo and Crater, this decreases for distant objects in that direction by a factor of about 1.0012 and increases it by the same factor for distant objects in the opposite direction. (The speed of the motion of the Earth around the Sun is only 30 km/s.) Comoving distance The comoving distance between fundamental observers, i.e. observers that are both moving with the Hubble flow, does not change with time, as comoving distance accounts for the expansion of the universe. Comoving distance is obtained by integrating the proper distances of nearby fundamental observers along the line of sight (LOS), whereas the proper distance is what a measurement at constant cosmic time would yield. In standard cosmology, comoving distance and proper distance are two closely related distance measures used by cosmologists to measure distances between objects; the comoving distance is the proper distance at the present time. The comoving distance (with a small correction for our own motion) is the distance that would be obtained from parallax, because the parallax in degrees equals the ratio of an astronomical unit to the circumference of a circle at the present time going through the sun and centred on the distant object, multiplied by 360°. However, objects beyond a megaparsec have parallax too small to be measured (the Gaia space telescope measures the parallax of the brightest stars with a precision of 7 microarcseconds), so the parallax of galaxies outside our Local Group is too small to be measured. There is a closed-form expression for the integral in the definition of the comoving distance if or, by substituting the scale factor for , if . Our universe now seems to be closely represented by In this case, we have: where The comoving distance should be calculated using the value of that would pertain if neither the object nor we had a peculiar velocity. Together with the scale factor it gives the proper distance of the object when the light we see now was emitted by the it, and set off on its journey to us: Proper distance Proper distance roughly corresponds to where a distant object would be at a specific moment of cosmological time, which can change over time due to the expansion of the universe. Comoving distance factors out the expansion of the universe, which gives a distance that does not change in time due to the expansion of space (though this may change due to other, local factors, such as the motion of a galaxy within a cluster); the comoving distance is the proper distance at the present time. Transverse comoving distance Two comoving objects at constant redshift that are separated by an angle on the sky are said to have the distance , where the transverse comoving distance is defined appropriately. Angular diameter distance An object of size at redshift that appears to have angular size has the angular diameter distance of . This is commonly used to observe so called standard rulers, for example in the context of baryon acoustic oscillations. When accounting for the earth's peculiar velocity, the redshift that would pertain in that case should be used but should be corrected for the motion of the solar system by a factor between 0.99867 and 1.00133, depending on the direction. 
(If one starts to move with velocity towards an object, at any distance, the angular diameter of that object decreases by a factor of ) Luminosity distance If the intrinsic luminosity of a distant object is known, we can calculate its luminosity distance by measuring the flux and determine , which turns out to be equivalent to the expression above for . This quantity is important for measurements of standard candles like type Ia supernovae, which were first used to discover the acceleration of the expansion of the universe. When accounting for the earth's peculiar velocity, the redshift that would pertain in that case should be used for but the factor should use the measured redshift, and another correction should be made for the peculiar velocity of the object by multiplying by where now is the component of the object's peculiar velocity away from us. In this way, the luminosity distance will be equal to the angular diameter distance multiplied by where is the measured redshift, in accordance with Etherington's reciprocity theorem (see below). Light-travel distance (also known as "lookback time" or "lookback distance") This distance is the time that it took light to reach the observer from the object multiplied by the speed of light. For instance, the radius of the observable universe in this distance measure becomes the age of the universe multiplied by the speed of light (1 light year/year), which turns out to be approximately 13.8 billion light years. There is a closed-form solution of the light-travel distance if involving the inverse hyperbolic functions or (or involving inverse trigonometric functions if the cosmological constant has the other sign). If then there is a closed-form solution for but not for Note that the comoving distance is recovered from the transverse comoving distance by taking the limit , such that the two distance measures are equivalent in a flat universe. There are websites for calculating light-travel distance from redshift. The age of the universe then becomes , and the time elapsed since redshift until now is: Etherington's distance duality The Etherington's distance-duality equation is the relationship between the luminosity distance of standard candles and the angular-diameter distance. It is expressed as follows: See also Big Bang Comoving and proper distances Friedmann equations Parsec Physical cosmology Cosmic distance ladder Friedmann–Lemaître–Robertson–Walker metric Subatomic scale References Scott Dodelson, Modern Cosmology. Academic Press (2003). External links 'The Distance Scale of the Universe' compares different cosmological distance measures. 'Distance measures in cosmology' explains in detail how to calculate the different distance measures as a function of world model and redshift. iCosmos: Cosmology Calculator (With Graph Generation ) calculates the different distance measures as a function of cosmological model and redshift, and generates plots for the model from redshift 0 to 20. Physical cosmology Physical quantities Length, distance, or range measuring devices
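A similar numerical sketch (again with assumed round parameters; the conversion factor from km/s/Mpc to Gyr⁻¹ is approximate and not from the article) can be used for the lookback time and, in the high-redshift limit, the age of the universe.

```python
# Illustrative sketch: lookback time t(z) = integral_0^z dz' / [(1+z') H(z')]
# for an assumed flat matter-plus-Lambda model (radiation neglected).
import math

H0 = 70.0 * 1.0227e-3        # 70 km/s/Mpc expressed in Gyr^-1 (approximate conversion)
omega_m, omega_lambda = 0.3, 0.7

def H(z):
    return H0 * math.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda)

def lookback_time(z, steps=100_000):
    dz = z / steps
    return sum(dz / ((1.0 + zp) * H(zp)) for zp in ((i + 0.5) * dz for i in range(steps)))

print(f"lookback time to z = 1: {lookback_time(1.0):.2f} Gyr")
print(f"approximate age of the universe (z = 3000): {lookback_time(3000.0):.2f} Gyr")
```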
Distance measure
[ "Physics", "Astronomy", "Mathematics" ]
2,020
[ "Physical phenomena", "Astronomical sub-disciplines", "Physical quantities", "Quantity", "Theoretical physics", "Astrophysics", "Physical properties", "Physical cosmology" ]
8,067,200
https://en.wikipedia.org/wiki/Spectral%20flatness
Spectral flatness or tonality coefficient, also known as Wiener entropy, is a measure used in digital signal processing to characterize an audio spectrum. Spectral flatness is typically measured in decibels, and provides a way to quantify how much a sound resembles a pure tone, as opposed to being noise-like. Interpretation The meaning of tonal in this context is in the sense of the amount of peaks or resonant structure in a power spectrum, as opposed to the flat spectrum of white noise. A high spectral flatness (approaching 1.0 for white noise) indicates that the spectrum has a similar amount of power in all spectral bands — this would sound similar to white noise, and the graph of the spectrum would appear relatively flat and smooth. A low spectral flatness (approaching 0.0 for a pure tone) indicates that the spectral power is concentrated in a relatively small number of bands — this would typically sound like a mixture of sine waves, and the spectrum would appear "spiky". Dubnov has shown that spectral flatness is equivalent to the information-theoretic concept of mutual information known as dual total correlation. Formulation The spectral flatness is calculated by dividing the geometric mean of the power spectrum by the arithmetic mean of the power spectrum, i.e. $\text{Flatness} = \frac{\left(\prod_{n=0}^{N-1} x(n)\right)^{1/N}}{\frac{1}{N}\sum_{n=0}^{N-1} x(n)} = \frac{\exp\left(\frac{1}{N}\sum_{n=0}^{N-1}\ln x(n)\right)}{\frac{1}{N}\sum_{n=0}^{N-1} x(n)}$, where x(n) represents the magnitude of bin number n. Note that a single (or more) empty bin yields a flatness of 0, so this measure is most useful when bins are generally not empty. The ratio produced by this calculation is often converted to a decibel scale for reporting, with a maximum of 0 dB and a minimum of −∞ dB. The spectral flatness can also be measured within a specified sub-band, rather than across the whole band. Applications This measurement is one of the many audio descriptors used in the MPEG-7 standard, in which it is labelled "AudioSpectralFlatness". In birdsong research, it has been used as one of the features measured on birdsong audio, when testing similarity between two excerpts. Spectral flatness has also been used in the analysis of electroencephalography (EEG) diagnostics and research, and psychoacoustics in humans. References Digital signal processing Spectrum (physical sciences)
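A minimal implementation of this definition might look as follows (an illustrative sketch, not taken from the MPEG-7 standard or the article; the synthetic test signals are arbitrary). The small constant added to the tone spectrum only guards against taking the logarithm of zero in otherwise empty bins, reflecting the caveat above.

```python
# Illustrative sketch: spectral flatness as geometric mean / arithmetic mean of a power spectrum.
import numpy as np

def spectral_flatness(power_spectrum):
    x = np.asarray(power_spectrum, dtype=float)
    geometric_mean = np.exp(np.mean(np.log(x)))   # assumes no exactly-zero bins
    arithmetic_mean = np.mean(x)
    return geometric_mean / arithmetic_mean

rng = np.random.default_rng(0)
noise = np.abs(np.fft.rfft(rng.standard_normal(4096))) ** 2                                 # noise-like
tone = np.abs(np.fft.rfft(np.sin(2 * np.pi * 200 * np.arange(4096) / 4096))) ** 2 + 1e-12   # tone-like
for name, spectrum in [("noise", noise), ("tone", tone)]:
    sf = spectral_flatness(spectrum)
    print(f"{name}: flatness = {sf:.4f} ({10 * np.log10(sf):.1f} dB)")
```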
Spectral flatness
[ "Physics" ]
467
[ "Waves", "Physical phenomena", "Spectrum (physical sciences)" ]
8,069,949
https://en.wikipedia.org/wiki/Best%20management%20practice%20for%20water%20pollution
Best management practices (BMPs) is a term used in the United States and Canada to describe a type of water pollution control. Historically the term has referred to auxiliary pollution controls in the fields of industrial wastewater control and municipal sewage control, while in stormwater management (both urban and rural) and wetland management, BMPs may refer to a principal control or treatment technique as well. Terminology Beginning in the 20th century, designers of industrial and municipal sewage pollution controls typically utilized engineered systems (e.g. filters, clarifiers, biological reactors) to provide the central components of pollution control systems, and used the term "BMPs" to describe the supporting functions for these systems, such as operator training and equipment maintenance. Stormwater management, as a specialized area within the field of environmental engineering, emerged later in the 20th century, and some practitioners have used the term BMP to describe both structural or engineered control devices and systems (e.g. retention ponds) to treat polluted stormwater, as well as operational or procedural practices (e.g. minimizing use of chemical fertilizers and pesticides). Other practitioners prefer to use the term Stormwater control measure, due to the varied definitions of the term "BMP" and its use in non-stormwater practice. U.S. Clean Water Act References to "BMP" Congress referred to BMP in several sections of the U.S. Clean Water Act (CWA) but did not define the term. The 1977 CWA used the term in describing the areawide waste treatment planning program and in procedures for controlling toxic pollutants associated with industrial discharges. The "Section 404" program, which covers dredge and fill permits, refers to BMPs in one of the enforcement exemptions. References to stormwater BMPs first appear in the 1987 amendment to the CWA in describing the Nonpoint Source Management Demonstration Program. Another stormwater BMP reference was added in 2001 with the authorization for a Wet Weather Watershed Pilot Project program. EPA definitions In implementing the CWA, the U.S. Environmental Protection Agency (EPA) defined BMP in the federal wastewater permit regulations, initially to refer to auxiliary procedures for industrial wastewater controls. ...schedules of activities, prohibitions of practices, maintenance procedures, and other management practices to prevent or reduce the pollution of waters of the United States, BMPs also include treatment requirements, operating procedures, and practices to control plant site runoff, spillage or leaks, sludge or waste disposal, or drainage from raw material storage. Later the Agency added a reference to stormwater management BMPs. ...each NPDES permit shall include conditions meeting the following requirements when applicable... (k) Best management practices (BMPs) to control or abate the discharge of pollutants when: ... (2) Authorized under section 402(p) of the CWA for the control of storm water discharges... Industrial wastewater BMPs Industrial wastewater BMPs are considered an adjunct to engineered treatment systems. Typical BMPs include operator training, maintenance practices, and spill control procedures for treatment chemicals. There are also many BMPs available which are specific to particular industrial processes, for example: source reduction practices in metal finishing industries (e.g. 
substituting less toxic solvents or using water-based cleaners); in the chemical industry, capturing equipment washdown waters for recycle/reuse at various process stages; in the paper industry, using process control monitoring to optimize bleaching processes, and reduce the overall amount of bleach used. Stormwater management BMPs Stormwater management BMPs are control measures taken to mitigate changes to both quantity and quality of urban runoff caused through changes to land use. Generally BMPs focus on water quality problems caused by increased impervious surfaces from land development. BMPs are designed to reduce stormwater volume, peak flows, and/or nonpoint source pollution through evapotranspiration, infiltration, detention, and filtration or biological and chemical actions. BMPs also can improve receiving-water quality by extending the duration of outflows in comparison to inflow duration (known as hydrograph extension), which dilutes the stormwater discharged into a larger volume of upstream flow. Although structural BMPs can be effective for reducing stormwater loads to receiving waters, studies indicate that they cannot improve in-stream water quality where receiving-water quality is poor. Stormwater BMPs can be classified as "structural" (i.e., devices installed or constructed on a site such as silt fences, rock filter dams, fiber rolls (also called erosion control logs or excelsior wattles), sediment traps and numerous other proprietary products) or "non-structural" (procedures, such as modified landscaping practices, soil disturbing activity scheduling, or street sweeping). There are a variety of BMPs available; selection typically depends on site characteristics and pollutant removal objectives. EPA has published a series of stormwater BMP fact sheets for use by local governments, builders and property owners. Stormwater management BMPs can be also categorized into four basic types: Storage practices: ponds; recovery; green infrastructure design. Vegetative practices: buffers; channels; green roofs; wetlands; functional art; stormwater wetland park design; wetland park engineering & design. Filtration/Infiltration practices: filtering; infiltration; rain gardens; porous pavement; civic infrastructure and design; functional stormwater design. Water sensitive development: better site design (revisions of local land development rules); open space site design; low impact development. See also Industrial wastewater treatment Low Impact Development (LID) Nationwide Urban Runoff Program (NURP) - EPA research program on stormwater BMPs Stochastic Empirical Loading and Dilution Model for modeling the performance of BMPs Sustainable urban drainage systems Blue-Green cities - Incorporating many of the same concepts into a holistic approach for city design. References External links International Stormwater BMP Database – Performance Data on Urban Stormwater BMPs California Stormwater BMP Handbooks - California Stormwater Quality Association Technology Assessment Protocol – Ecology (TAPE) - Washington State Department of Ecology's process for evaluating and approving emerging stormwater treatment BMPs Technology Acceptance Reciprocity Partnership (TARP) - Multi-state protocol for stormwater BMP demonstrations Pennsylvania Stormwater BMP Manual (2006) Environmental engineering Hydrology and urban planning Waste management concepts Waste treatment technology Water pollution Environmental design
Best management practice for water pollution
[ "Chemistry", "Engineering", "Environmental_science" ]
1,322
[ "Environmental design", "Hydrology", "Water treatment", "Chemical engineering", "Water pollution", "Civil engineering", "Hydrology and urban planning", "Environmental engineering", "Waste treatment technology", "Design" ]
8,071,905
https://en.wikipedia.org/wiki/C4H10O2
The molecular formula C4H10O2 may refer to: Butanediols 1,2-Butanediol 1,3-Butanediol 1,4-Butanediol 2,3-Butanediol tert-Butyl hydroperoxide Dimethoxyethane 2-Ethoxyethanol 1-Methoxy-2-propanol Diethyl peroxide
C4H10O2
[ "Chemistry" ]
88
[ "Isomerism", "Set index articles on molecular formulas" ]
10,414,062
https://en.wikipedia.org/wiki/Moisture%20advection
Moisture advection is the horizontal transport of water vapor by the wind. Measurement and knowledge of atmospheric water vapor, or "moisture", is crucial in the prediction of all weather elements, especially clouds, fog, temperature, humidity, thermal comfort indices and precipitation. Regions of moisture advection are often co-located with regions of warm advection. Definition Using the classical definition of advection, moisture advection is defined as $\mathrm{Adv}(\rho_v) = -\mathbf{V} \cdot \nabla \rho_v$, in which $\mathbf{V}$ is the horizontal wind vector and $\rho_v$ is the density of water vapor. However, water vapor content is usually measured in terms of mixing ratio (mass fraction) in reanalyses or dew point (temperature to partial vapor pressure saturation, i.e. relative humidity to 100%) in operational forecasting. The advection of dew point itself can be thought of as moisture advection: $\mathrm{Adv}(T_d) = -\mathbf{V} \cdot \nabla T_d$. Moisture flux In terms of mixing ratio, horizontal transport/advection can be represented in terms of the moisture flux $\mathbf{f} = q\mathbf{V}$, in which q is the mixing ratio. The value can be integrated throughout the atmosphere to give the total transport of moisture through the vertical, $\mathbf{F} = \int_0^\infty \rho\, q\, \mathbf{V}\, dz = \frac{1}{g}\int_0^P q\, \mathbf{V}\, dp$, where $\rho$ is the density of air and P is the pressure at the ground surface. For the far-right definition, we have used the hydrostatic equilibrium approximation. Its divergence (convergence) implies net evapotranspiration (precipitation) as adding (removing) moisture from the column: $\nabla \cdot \mathbf{F} = E - P - \frac{\partial}{\partial t}\left(\frac{1}{g}\int q\, dp\right)$, where P, E, and the integral term are precipitation, evapotranspiration, and the time rate of change of precipitable water, all represented in terms of mass/(unit area × unit time). One can convert to more typical units of length, such as mm, by dividing by the density of liquid water and applying the correct length unit conversion factor. See also Haar (fog) Water cycle Positive vorticity advection References External links Moisture Advection Description Using Moisture Advection to Predict Weather Synoptic meteorology and weather Atmospheric dynamics
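As an illustration of the dew-point form of this definition, the following Python sketch (synthetic wind and dew-point fields with an assumed grid spacing; not from the article) evaluates the advection −V·∇Td with centred finite differences on a regular grid.

```python
# Illustrative sketch: moisture (dew point) advection  -V . grad(Td)  on a synthetic grid.
import numpy as np

dx = dy = 25_000.0                       # grid spacing in metres (assumed)
ny, nx = 50, 60
y, x = np.meshgrid(np.arange(ny) * dy, np.arange(nx) * dx, indexing="ij")

Td = 15.0 - 1e-5 * y + 2e-6 * x          # dew point (deg C): drier to the north, moister to the east
u = np.full((ny, nx), 10.0)              # westerly wind component, m/s
v = np.full((ny, nx), 5.0)               # southerly wind component, m/s

dTd_dy, dTd_dx = np.gradient(Td, dy, dx)   # gradients along y (axis 0) and x (axis 1)
advection = -(u * dTd_dx + v * dTd_dy)     # deg C per second; positive values mean moistening
print(f"maximum dew point advection: {advection.max() * 3600:.2f} deg C per hour")
```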
Moisture advection
[ "Chemistry" ]
397
[ "Atmospheric dynamics", "Fluid dynamics" ]
10,415,613
https://en.wikipedia.org/wiki/Schwinger%20variational%20principle
Schwinger variational principle is a variational principle which expresses the scattering T-matrix as a functional depending on two unknown wave functions. The functional attains stationary value equal to actual scattering T-matrix. The functional is stationary if and only if the two functions satisfy the Lippmann-Schwinger equation. The development of the variational formulation of the scattering theory can be traced to works of L. Hultén and J. Schwinger in 1940s. Linear form of the functional The T-matrix expressed in the form of stationary value of the functional reads where and are the initial and the final states respectively, is the interaction potential and is the retarded Green's operator for collision energy . The condition for the stationary value of the functional is that the functions and satisfy the Lippmann-Schwinger equation and Fractional form of the functional Different form of the stationary principle for T-matrix reads The wave functions and must satisfy the same Lippmann-Schwinger equations to get the stationary value. Application of the principle The principle may be used for the calculation of the scattering amplitude in the similar way like the variational principle for bound states, i.e. the form of the wave functions is guessed, with some free parameters, that are determined from the condition of stationarity of the functional. See also Lippmann-Schwinger equation Quantum scattering theory T-matrix method Green's operator References Bibliography Scattering
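For reference, the two stationary functionals described above are commonly written as follows in textbook notation; the symbols (φ for the free channel states, ψ± for scattering states, G0+ for the retarded free Green's operator) and the overall sign convention are assumptions of this sketch and vary between sources:

\[ [T_{fi}] = \langle\phi_f|V|\psi_i^{+}\rangle + \langle\psi_f^{-}|V|\phi_i\rangle - \langle\psi_f^{-}|\,V - V G_0^{+}(E)\,V\,|\psi_i^{+}\rangle \]
\[ [T_{fi}] = \frac{\langle\phi_f|V|\psi_i^{+}\rangle\,\langle\psi_f^{-}|V|\phi_i\rangle}{\langle\psi_f^{-}|\,V - V G_0^{+}(E)\,V\,|\psi_i^{+}\rangle} \]

Both expressions are stationary under independent variations of ψi+ and ψf−, and the stationarity conditions reproduce the Lippmann–Schwinger equations, e.g. |ψi+⟩ = |φi⟩ + G0+(E) V |ψi+⟩.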
Schwinger variational principle
[ "Physics", "Chemistry", "Materials_science" ]
300
[ "Scattering stubs", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics" ]
10,416,232
https://en.wikipedia.org/wiki/Phase%20space%20method
In applied mathematics, the phase space method is a technique for constructing and analyzing solutions of dynamical systems, that is, solving time-dependent differential equations. The method consists of first rewriting the equations as a system of differential equations that are first-order in time, by introducing additional variables. The original and the new variables form a vector in the phase space. The solution then becomes a curve in the phase space, parametrized by time. The curve is usually called a trajectory or an orbit. The (vector) differential equation is reformulated as a geometrical description of the curve, that is, as a differential equation in terms of the phase space variables only, without the original time parametrization. Finally, a solution in the phase space is transformed back into the original setting. The phase space method is used widely in physics. It can be applied, for example, to find traveling wave solutions of reaction–diffusion systems. See also Reaction–diffusion system Fisher's equation References Partial differential equations Dynamical systems
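A minimal sketch of the method in Python: the traveling-wave reduction of Fisher's equation, U'' + c U' + U(1 − U) = 0, is rewritten as a first-order system in the phase plane (U, V = U') and integrated numerically. The wave speed c = 2.5 and the starting point just off the fixed point (U, V) = (1, 0) are illustrative choices of this sketch, not values from the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

c = 2.5  # illustrative wave speed (monotone fronts exist for c >= 2)

def rhs(z, y):
    # Phase-space form of U'' + c U' + U(1 - U) = 0:
    # introduce V = U', giving a first-order system in (U, V).
    U, V = y
    return [V, -c * V - U * (1.0 - U)]

# Start just off the saddle point (U, V) = (1, 0) and integrate in z;
# the orbit runs down to (0, 0), tracing out the traveling-wave profile.
sol = solve_ivp(rhs, (0.0, 60.0), [0.999, 0.0],
                t_eval=np.linspace(0.0, 60.0, 601))

U = sol.y[0]
print(f"U starts near {U[0]:.3f} and decays to {U[-1]:.3e}")
```

The curve (U(z), V(z)) is the orbit in phase space; transforming back to the original variables, U(x − ct) is the traveling-wave solution of the reaction–diffusion equation.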
Phase space method
[ "Physics", "Mathematics" ]
207
[ "Mechanics", "Dynamical systems" ]
10,416,796
https://en.wikipedia.org/wiki/Pharmaceutical%20lobby
The pharmaceutical lobby refers to the representatives of pharmaceutical drug and biomedicine companies who engage in lobbying in favour of pharmaceutical companies and their products. Political influence in the United States The largest pharmaceutical companies and their two trade groups, Pharmaceutical Research and Manufacturers of America (PhRMA) and Biotechnology Innovation Organization, lobbied on at least 1,600 pieces of legislation between 1998 and 2004. According to the non-partisan OpenSecrets, pharmaceutical companies spent $900 million on lobbying between 1998 and 2005, more than any other industry. During the same period, they donated $89.9 million to federal candidates and political parties, giving approximately three times as much to Republicans as to Democrats. According to the Center for Public Integrity, from January 2005 through June 2006 alone, the pharmaceutical industry spent approximately $182 million on federal lobbying in the United States. In 2005, the industry had 1,274 registered lobbyists in Washington, D.C. A 2020 study found that, from 1999 to 2018, the pharmaceutical industry and health product industry together spent $4.7 billion lobbying the United States federal government, an average of $233 million per year. Controversy in the U.S. Prescription drug costs in the U.S. Critics of the pharmaceutical lobby argue that the drug industry's influence allows it to promote legislation friendly to drug manufacturers at the expense of patients. The lobby's influence in securing the passage of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 was considered a major and controversial victory for the industry, as it prevents the government from directly negotiating prices with drug companies who provide those prescription drugs covered by Medicare. Price negotiations are instead conducted between manufacturers and the pharmacy benefit managers providing Medicare Part D benefits under contract with Medicare. In 2010 the Congressional Budget Office estimated the average discount negotiated by pharmacy benefit managers at 14%. The high price of U.S. prescription drugs has been a source of ongoing controversy. Pharmaceutical companies state that the high costs are the result of pricey research and development programs. Critics point to the development of drugs having only small incremental benefit. According to Marcia Angell, former editor-in-chief of the New England Journal of Medicine, "The United States is the only advanced country that permits the pharmaceutical industry to charge exactly what the market will bear." In contrast, the RAND Corporation and authors from the National Bureau of Economic Research have argued that price controls stifle innovation and are economically counterproductive in the long term. International operations In 2021, during the height of COVID-19, vaccine makers increased lobbying and public-relations efforts to oppose a proposal that would temporarily waive their patents in Germany, Japan and other countries. This proposal would allow COVID-19 vaccine patents to be licensed to international vaccine makers or otherwise sold entirely. The Biden presidential administration in the U.S. supported the waiver proposal; however, pharmaceutical industry trade groups supported Germany, Japan, and other countries that expressed opposition. 
Pharmaceutical industry representatives have been lobbying members of Congress to pressure the Biden administration to reverse its support of the waiver, arguing that the patents protect its innovations. However, proponents of the proposal see the patent as giving companies a monopoly over sales of vaccines during a world crisis. See also Bad Pharma (2012) by Ben Goldacre Big Pharma (2006) by Jacky Law Big Pharma conspiracy theory Ethics in pharmaceutical sales List of pharmaceutical companies Lists about the pharmaceutical industry Pharmaceutical marketing References External links Pharmaceutical lobbying totals at Opensecrets Corporations and Health Watch PhRMA's home page PBS series on the pharmaceutical industry Big Bucks, Big Pharma, Amy Goodman, 68 minutes Conflict of interest Lobbying in the United States Lobbying organizations Lobby
Pharmaceutical lobby
[ "Chemistry", "Biology" ]
750
[ "Pharmaceutical industry", "Pharmacology", "Life sciences industry" ]
10,419,626
https://en.wikipedia.org/wiki/Electrostatic%20deflection%20%28molecular%20physics/nanotechnology%29
In molecular physics/nanotechnology, electrostatic deflection is the deformation of a beam-like structure/element bent by an electric field. It can be due to interaction between electrostatic fields and net charge, or to electric polarization effects. The beam-like structure/element is generally cantilevered (fixed at one of its ends). In nanomaterials, carbon nanotubes (CNTs) are typical examples used for electrostatic deflection. The mechanism of electric deflection due to electric polarization can be understood as follows: when a material is brought into an electric field (E), the field tends to shift the positive and negative charges in opposite directions, so induced dipoles are created. For a beam-like structure/element in an electric field, the interaction between the molecular dipole moment and the electric field results in an induced torque (T). This torque tends to align the beam toward the direction of the field; in the case of a cantilevered CNT, the tube is bent toward the field direction. Meanwhile, the electrically induced torque and the stiffness of the CNT compete against each other. This deformation has been observed in experiments. The effect makes CNTs promising for nanoelectromechanical systems applications, as well as for their fabrication, separation and electromanipulation. Recently, several nanoelectromechanical systems based on cantilevered CNTs have been reported, such as nanorelays, nanoswitches, nanotweezers and feedback devices designed for memory, sensing or actuation uses. Furthermore, theoretical studies have been carried out to reach a fuller understanding of the electric deflection of carbon nanotubes. References Molecular physics
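In standard notation (the symbols here are this sketch's, not the article's), the polarization mechanism described above can be summarized as

\[ \mathbf{p} = \boldsymbol{\alpha}\,\mathbf{E}, \qquad \boldsymbol{\tau} = \mathbf{p}\times\mathbf{E}, \]

where α is the polarizability, which for an elongated object such as a CNT is a tensor with a larger axial than transverse component, so the induced moment p is generally not parallel to E and a net aligning torque τ results. The static deflection is then set by the balance between this torque, integrated along the tube, and the elastic restoring moment of the cantilever.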
Electrostatic deflection (molecular physics/nanotechnology)
[ "Physics", "Chemistry", "Materials_science" ]
373
[ "Materials science stubs", "Molecular physics", "Nanotechnology stubs", " molecular", "nan", "Atomic", "Nanotechnology", "Molecular physics stubs", " and optical physics" ]
15,544,038
https://en.wikipedia.org/wiki/Hamaker%20constant
In molecular physics, the Hamaker constant (denoted A; named for H. C. Hamaker) is a physical constant that can be defined for a van der Waals (vdW) body–body interaction as A = π²Cρ₁ρ₂, where ρ₁ and ρ₂ are the number densities of the two interacting kinds of particles, and C is the London coefficient in the particle–particle pair interaction. The magnitude of this constant reflects the strength of the vdW force between two particles, or between a particle and a substrate. The Hamaker constant provides the means to determine the interaction parameter C from the vdW pair potential w(r) = −C/r⁶. Hamaker's method and the associated Hamaker constant ignore the influence of an intervening medium between the two particles of interaction. In 1956 Lifshitz developed a description of the vdW energy that takes into consideration the dielectric properties of this intervening medium (often a continuous phase). The van der Waals forces are effective only up to several hundred angstroms. When the interacting bodies are farther apart than this, the dispersion potential decays faster than the non-retarded 1/r⁶ form; this is called the retarded regime, and the result is a Casimir–Polder force. See also Hamaker theory Intermolecular forces van der Waals Forces References Physical chemistry Intermolecular forces
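Two standard non-retarded results of Hamaker's pairwise-summation approach show how A enters practical calculations; they are quoted here from the usual colloid-science treatments, and the geometry-specific prefactors should be checked against a reference before use:

\[ W(D) = -\frac{A}{12\pi D^{2}} \qquad \text{(energy per unit area, two parallel half-spaces a distance } D \text{ apart)} \]
\[ W(D) = -\frac{A\,R_1 R_2}{6 D (R_1 + R_2)} \qquad \text{(two spheres of radii } R_1, R_2 \text{ at close separation } D\text{)} \]

With A typically of order 10⁻²⁰ to 10⁻¹⁹ J, these expressions indicate why van der Waals adhesion becomes dominant at separations of a few nanometres.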
Hamaker constant
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
258
[ "Molecular physics", "Applied and interdisciplinary physics", "Materials science", "Intermolecular forces", "nan", "Molecular physics stubs", "Physical chemistry" ]
15,544,777
https://en.wikipedia.org/wiki/Ridley%E2%80%93Watkins%E2%80%93Hilsum%20theory
In solid state physics the Ridley–Watkins–Hilsum theory (RWH) explains the mechanism by which differential negative resistance is developed in a bulk solid state semiconductor material when a voltage is applied to the terminals of the sample. It is the theory behind the operation of the Gunn diode as well as several other microwave semiconductor devices, which are used practically in electronic oscillators to produce microwave power. It is named for British physicists Brian Ridley, Tom Watkins and Cyril Hilsum who wrote theoretical papers on the effect in 1961. Negative resistance oscillations in bulk semiconductors had been observed in the laboratory by J. B. Gunn in 1962, and were thus named the "Gunn effect", but physicist Herbert Kroemer pointed out in 1964 that Gunn's observations could be explained by the RWH theory. In essence, RWH mechanism is the transfer of conduction electrons in a semiconductor from a high mobility valley to lower-mobility, higher-energy satellite valleys. This phenomenon can only be observed in materials that have such energy band structures. Normally, in a conductor, increasing electric field causes higher charge carrier (usually electron) speeds and results in higher current consistent with Ohm's law. In a multi-valley semiconductor, though, higher energy may push the carriers into a higher energy state where they actually have higher effective mass and thus slow down. In effect, carrier velocities and current drop as the voltage is increased. While this transfer occurs, the material exhibits a decrease in current – that is, a negative differential resistance. At higher voltages, the normal increase of current with voltage relation resumes once the bulk of the carriers are kicked into the higher energy-mass valley. Therefore the negative resistance only occurs over a limited range of voltages. Of the type of semiconducting materials satisfying these conditions, gallium arsenide (GaAs) is the most widely understood and used. However RWH mechanisms can also be observed in indium phosphide (InP), cadmium telluride (CdTe), zinc selenide (ZnSe) and indium arsenide (InAs) under hydrostatic or uniaxial pressure. See also Gunn diode References Other sources Electronic engineering de:Gunn-Effekt
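The velocity–field behaviour described above (a mobility-limited rise, a peak, then a falling region as electrons transfer into the low-mobility satellite valleys) is often summarized by an empirical fit for GaAs. The sketch below uses one such commonly quoted fit; the functional form and the parameter values (low-field mobility, saturation velocity, critical field) are illustrative assumptions, not material from this article.

```python
import numpy as np

mu0 = 0.85      # low-field mobility, m^2/(V*s)  (illustrative GaAs-like value)
v_sat = 1.0e5   # high-field (satellite-valley) drift velocity, m/s
E_c = 4.0e5     # critical field, V/m

def drift_velocity(E):
    """Empirical two-valley velocity-field characteristic (illustrative)."""
    x = (E / E_c) ** 4
    return (mu0 * E + v_sat * x) / (1.0 + x)

E = np.linspace(1e4, 2e6, 2000)
v = drift_velocity(E)
dv_dE = np.gradient(v, E)

ndr = dv_dE < 0  # region of negative differential mobility
print(f"peak velocity {v.max():.2e} m/s at about {E[np.argmax(v)]:.2e} V/m")
print(f"negative differential mobility for roughly "
      f"{E[ndr].min():.2e} to {E[ndr].max():.2e} V/m")
```

The field range where dv/dE < 0 is the negative differential resistance region exploited in Gunn-effect oscillators.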
Ridley–Watkins–Hilsum theory
[ "Physics", "Materials_science", "Technology", "Engineering" ]
473
[ "Materials science stubs", "Computer engineering", "Electronic engineering", "Condensed matter physics", "Electrical engineering", "Condensed matter stubs" ]
18,457,721
https://en.wikipedia.org/wiki/Arthropod%20exoskeleton
Arthropods are covered with a tough, resilient integument, cuticle or exoskeleton of chitin. Generally the exoskeleton will have thickened areas in which the chitin is reinforced or stiffened by materials such as minerals or hardened proteins. This happens in parts of the body where there is a need for rigidity or elasticity. Typically the mineral crystals, mainly calcium carbonate, are deposited among the chitin and protein molecules in a process called biomineralization. The crystals and fibres interpenetrate and reinforce each other, the minerals supplying the hardness and resistance to compression, while the chitin supplies the tensile strength. Biomineralization occurs mainly in crustaceans. In insects and arachnids, the main reinforcing materials are various proteins hardened by linking the fibres in processes called sclerotisation and the hardened proteins are called sclerotin. The dorsal tergum, ventral sternum, and the lateral pleura form the hardened plates or sclerites of a typical body segment. In either case, in contrast to the carapace of a tortoise or the cranium of a vertebrate, the exoskeleton has little ability to grow or change its form once it has matured. Except in special cases, whenever the animal needs to grow, it moults, shedding the old skin after growing a new skin from beneath. Microscopic structure A typical arthropod exoskeleton is a multi-layered structure with four functional regions: epicuticle, procuticle, epidermis and basement membrane. Of these, the epicuticle is a multi-layered external barrier that, especially in terrestrial arthropods, acts as a barrier against desiccation. The strength of the exoskeleton is provided by the underlying procuticle, which is in turn secreted by the epithelial cells in the epidermis, which begins as a tough, flexible layer of chitin. Arthropod cuticle is a biological composite material, consisting of two main portions: fibrous chains of alpha-chitin within a matrix of silk-like and globular proteins, of which the best-known is the rubbery protein called resilin. The relative abundance of these two main components varies from approximately 50/50 to 80/20 chitin protein, with softer parts of the exoskeleton having a higher proportion of chitin. The cuticle is soft when first secreted, but it soon hardens as required, in a process of sclerotization. The process is poorly understood, but it involves forms of tanning in which phenolic chemicals crosslink protein molecules or anchor them to surrounding molecules such as chitins. Part of the effect is to make the tanned material hydrophobic. By varying the types of interaction between the proteins and chitins, the insect metabolism produces regions of exoskeleton that differ in their wet and dry behaviour, their colour and their mechanical properties. The chitinous procuticle is formed of an outer exocuticle and the inner endocuticle, and between the exocuticle and endocuticle there may be another layer called mesocuticle which has distinctive staining properties. The tough and flexible endocuticle is a laminated structure of layers of interwoven fibrous chitin and protein molecules, while the exocuticle is the layer in which any major thickening, armouring and biomineralization occurs. Biomineralization with calcite is particularly common in Crustacea, whereas sclerotization particularly occurs in insects. The exocuticle is greatly reduced in many soft-bodied insects, especially in the larval stages such as caterpillars and the larvae of parasitoidal Hymenoptera. 
In addition to the chitinous-proteinaceous composite of the cuticle, many crustaceans, some myriapods and the extinct trilobites further impregnate the cuticle with mineral salts, above all calcium carbonate, which can make up to 40% of the cuticle. The armoured product commonly has great mechanical strength. Mechanical properties The two layers of the cuticle have different properties. The outer layer is where most of the thickening, biomineralization and sclerotisation takes place, and its material tends to be strong under compressive stresses, though weaker under tension. When a rigid region fails under stress, it does so by cracking. The inner layer is not as highly sclerotised, and is correspondingly softer but tougher; it resists tensile stresses but is liable to failure under compression. This combination is especially effective in resisting predation, as predators tend to exert compression on the outer layer, and tension on the inner. Its degree of sclerotisation or mineralisation determines how the cuticle responds to deformation. Below a certain degree of deformation changes of shape or dimension of the cuticle are elastic and the original shape returns after the stress is removed. Beyond that level of deformation, non-reversible, plastic deformation occurs until finally the cuticle cracks or splits. Generally, the less sclerotised the cuticle, the greater the deformation required to damage the cuticle irreversibly. On the other hand, the more heavily the cuticle is armoured, the greater the stress required to deform it harmfully. Segmentation Hardened plates in the exoskeleton are called sclerites. Sclerites may be simple protective armour, but also may form mechanical components of the exoskeleton, such as in the legs, joints, fins or wings. In the typical body segment of an insect or many other Arthropoda, there are four principal regions. The dorsal region is the tergum; if the tergum bears any sclerites, those are called tergites. The ventral region is called the sternum, which commonly bears sternites. The two lateral regions are called the pleura (singular pleurum) and any sclerites they bear are called pleurites. The arthropod exoskeleton is divided into different functional units, each comprising a series of grouped segments; such a group is called a tagma, and the tagmata are adapted to different functions in a given arthropod body. For example, tagmata of insects include the head, which is a fused capsule, the thorax as nearly a fixed capsule, and the abdomen usually divided into a series of articulating segments. Each segment has sclerites according to its requirements for external rigidity; for example, in the larva of some flies, there are none at all and the exoskeleton is effectively all membranous; the abdomen of an adult fly is covered with light sclerites connected by joints of membranous cuticle. In some beetles most of the joints are so tightly connected, that the body is practically in an armoured, rigid box. However, in most Arthropoda the bodily tagmata are so connected and jointed with flexible cuticle and muscles that they have at least some freedom of movement, and many such animals, such as the Chilopoda or the larvae of mosquitoes are very mobile indeed. In addition, the limbs of arthropods are jointed, so characteristically that the very name "Arthropoda" literally means "jointed legs" in reflection of the fact. 
The internal surface of the exoskeleton is often infolded, forming a set of structures called apodemes that serve for the attachment of muscles, and functionally amounting to endoskeletal components. They are highly complex in some groups, particularly in Crustacea. Within entomology, the term glabrous is used to refer to those parts of an insect's body lacking in setae (bristles) or scales. Chemical composition Chemically, chitin is a long-chain polymer of N-acetylglucosamine, which is a derivative of glucose. The polymer bonds between the glucose units are β(1→4) links, the same as in cellulose. In its unmodified form, chitin is translucent, pliable, resilient and tough. In arthropods and other organisms, however, it generally is a component of a complex matrix of materials. It practically always is associated with protein molecules that often are in a more or less sclerotised state, stiffened or hardened by cross-linking and by linkage to other molecules in the matrix. In some groups of animals, most conspicuously the Crustacea, the matrix is greatly enriched with, or even dominated by, hard minerals, usually calcite or similar carbonates that form much of the exoskeleton. In some organisms the mineral content may exceed 95%. The role of the chitin and proteins in such structures is more than just holding the crystals together; the crystal structure itself is so affected as to prevent the propagation of cracks under stress, leading to remarkable strength. The process of formation of such mineral-rich matrices is called biomineralization. The difference between the unmodified and modified forms of chitinous arthropodan exoskeletons can be seen by comparing the body wall of, say, a bee larva, in which modification is minimal, to any armoured species of beetle, or the fangs of a spider. In both those examples there is heavy modification by sclerotisation. Again, contrasting strongly with both unmodified organic material such as largely pure chitin, and with sclerotised chitin and proteins, consider the integument of a heavily armoured crab, in which there is a very high degree of modification by biomineralization. Moulting The chemical and physical nature of the arthropod exoskeleton limits its ability to stretch or change shape as the animal grows. Apart from some special cases, such as the greatly distensible abdomens of termite queens and honeypot ants, this means that continuous growth of arthropods is not possible. Therefore, growth is periodic and concentrated into a period of time when the exoskeleton is shed, called moulting or ecdysis, which is under the control of a hormone called ecdysone. Moulting is a complex process that is invariably dangerous for the arthropod involved. Before the old exoskeleton is shed, the cuticle separates from the epidermis through a process called apolysis. Early in the process of apolysis the epithelial cells release enzymatic moulting fluid between the old cuticle and the epidermis. The enzymes partly digest the endocuticle and the epidermis absorbs the digested material for the animal to assimilate. Much of that digested material is re-used to build the new cuticle. Once the new cuticle has formed sufficiently, the animal splits the remaining parts of the old integument along built-in lines of weakness and sheds them in the visible process of ecdysis, generally shedding and discarding the epicuticle and the reduced exocuticle, though some species carry them along for camouflage or protection. The shed portions are called the exuviae.
After the old cuticle is shed, the arthropod typically pumps up its body (for example, by air or water intake) to allow the new cuticle to expand to a larger size: the process of hardening by dehydration of the cuticle then takes place. The new integument still is soft and usually is pale, and it is said to be teneral or callow. It then undergoes a hardening and pigmentation process that might take anything from several minutes to several days, depending on the nature of the animal and the circumstances. Although the process of ecdysis is metabolically risky and expensive, it does have some advantages. For one thing it permits a complex development cycle of metamorphosis in which young animals may be totally different from older phases, such as the nauplius larvae of crustaceans, the nymphs of say, the Odonata, or the larvae of Endopterygota, such as maggots of flies. Such larval stages commonly have ecological and life cycle roles totally different from those of the mature animals. Secondly, often a major injury in one phase, such as the loss of a leg from an insect nymph, or a claw from a young crab, can be repaired after one or two stages of ecdysis. Similarly, delicate parts that need periodic replacement, such as the outer surfaces of the eye lenses of spiders, or the urticating hairs of caterpillars, can be shed, making way for new structures. See also Glossary of arthropod cuticle References Exoskeleton Biomechanics Skeletal system
Arthropod exoskeleton
[ "Physics" ]
2,687
[ "Biomechanics", "Mechanics" ]
18,458,673
https://en.wikipedia.org/wiki/Fieller%27s%20theorem
In statistics, Fieller's theorem allows the calculation of a confidence interval for the ratio of two means. Approximate confidence interval Variables a and b may be measured in different units, so there is no way to directly combine the standard errors as they may also be in different units. The most complete discussion of this is given by Fieller (1954). Fieller showed that if a and b are (possibly correlated) means of two samples with expectations and , and variances and and covariance , and if are all known, then a (1 − α) confidence interval (mL, mU) for is given by where Here is an unbiased estimator of based on r degrees of freedom, and is the -level deviate from the Student's t-distribution based on r degrees of freedom. Three features of this formula are important in this context: a) The expression inside the square root has to be positive, or else the resulting interval will be imaginary. b) When g is very close to 1, the confidence interval is infinite. c) When g is greater than 1, the overall divisor outside the square brackets is negative and the confidence interval is exclusive. Other methods One problem is that, when g is not small, the confidence interval can blow up when using Fieller's theorem. Andy Grieve has provided a Bayesian solution where the CIs are still sensible, albeit wide. Bootstrapping provides another alternative that does not require the assumption of normality. History Edgar C. Fieller (1907–1960) first started working on this problem while in Karl Pearson's group at University College London, where he was employed for five years after graduating in Mathematics from King's College, Cambridge. He then worked for the Boots Pure Drug Company as a statistician and operational researcher before becoming deputy head of operational research at RAF Fighter Command during the Second World War, after which he was appointed the first head of the Statistics Section at the National Physical Laboratory. See also Gaussian ratio distribution Notes Further reading Fieller, EC. (1940) "The biological standardisation of insulin". Journal of the Royal Statistical Society (Supplement). 1:1–54. Motulsky, Harvey (1995) Intuitive Biostatistics. Oxford University Press. Senn, Steven (2007) Statistical Issues in Drug Development. Second Edition. Wiley. Theorems in statistics Statistical approximations Normal distribution
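The interval formulas themselves were lost in extraction. The function below is a hedged sketch of the usual textbook form of Fieller's limits for the ratio a/b; the parameter names (v11 for var(a), v22 for var(b), v12 for cov(a,b), t for the Student-t quantile on r degrees of freedom) are this sketch's own, and the cases g ≥ 1 (infinite or exclusive intervals, as in points (b) and (c) above) are only flagged, not handled.

```python
import math

def fieller_ci(a, b, v11, v22, v12, t):
    """Approximate (1 - alpha) confidence limits for the ratio a/b
    using the standard single-ratio form of Fieller's theorem."""
    g = t**2 * v22 / b**2
    if g >= 1.0:
        raise ValueError("g >= 1: the Fieller interval is unbounded or exclusive")
    ratio = a / b
    centre = ratio - g * v12 / v22
    disc = v11 - 2.0 * ratio * v12 + ratio**2 * v22 - g * (v11 - v12**2 / v22)
    half = (t / b) * math.sqrt(disc)      # requires disc > 0, see point (a) above
    lower = (centre - half) / (1.0 - g)
    upper = (centre + half) / (1.0 - g)
    return min(lower, upper), max(lower, upper)

# Example with made-up numbers: means 10 and 4, small variances, t = 2.0
print(fieller_ci(10.0, 4.0, 0.25, 0.16, 0.05, 2.0))
```

When g is small (precise denominator), the interval approaches the simple delta-method interval; as g approaches 1 it widens without bound, which is the blow-up referred to in the text.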
Fieller's theorem
[ "Mathematics" ]
495
[ "Theorems in statistics", "Mathematical relations", "Statistical approximations", "Mathematical problems", "Mathematical theorems", "Approximations" ]
18,460,129
https://en.wikipedia.org/wiki/Seratrodast
Seratrodast (development name, AA-2414; marketed originally as Bronica) is a thromboxane A2 (TXA2) receptor (TP receptor) antagonist used primarily in the treatment of asthma. It was the first TP receptor antagonist that was developed as an anti-asthmatic drug and received marketing approval in Japan in 1997. As of 2017 seratrodast was marketed as Bronica in Japan, and as Changnuo, Mai Xu Jia, Quan Kang Nuo in China. Unlike thromboxane synthase inhibitors such as ozagrel, seratrodast does not affect thrombus formation, time to occlusion and bleeding time. Seratrodast has no effect on prothrombin time and activated partial thromboplastin time, thus ruling out any action on blood coagulation cascade. Medical uses Seratrodast is used to treat asthma. There are no adequate and well-controlled studies of seratrodast in pregnant women. The drug should be used in pregnancy only if the potential benefits justify the risk to the fetus. Seratrodast should not be used during lactation. The safety and efficacy of seratrodast has not been established in children (<18 years of age). Contraindications and interactions Seratrodast should not be used in people with liver disease. Use with paracetamol or with cephem antibiotics increases the risk of liver damage. Use with aspirin increases the bioavailability of seratrodast. Adverse effects The most frequently observed (0.1 to 5%) adverse reactions include elevated transaminases, nausea, loss of appetite, stomach discomfort, abdominal pain, diarrhea, constipation, dry mouth, taste disturbance, drowsiness, headache, dizziness, palpitations and malaise. Less than 0.1% of patients experienced vomiting, thrombocytopenia, epistaxis, bleeding tendency, insomnia, tremor, numbness, hot flushes and edema. All the adverse reactions reported were of mild to moderate severity, and resolved when the drug was discontinued. Pharmacology Thromboxane A2 (TXA2) is generated in the lungs of people with asthma, and when it signals through the thromboxane receptor it causes bronchoconstriction, vasoconstriction, mucous secretion, and airway hyper-responsiveness. Seratrodast inhibits the activity of the thromboxane receptor, blocking the effects of TXA2. Pharmacokinetics The pharmacokinetics of seratrodast have been studied in Japanese and Caucasian, including Indian, healthy volunteers. The plasma concentrations of seratrodast increase with increasing doses. The absorption of seratrodast is relatively rapid with maximum plasma concentrations of 4.6–6 μg/ml obtained in 3 to 4 hours. Steady state plasma concentrations of seratrodast are reached within 4–5 days. Seratrodast is slowly cleared, mainly by hepatic biotransformation. The drug shows biexponential decay in plasma profiles with a mean elimination half-life of 22 hours. Approximately 20% of the administered dose is recovered in the urine, with 60% of the urinary recovery being in the form of conjugates Chemistry Seratrodast can be prepared in five steps starting from pimelic acid monoester. History Seratrodast was the first thromboxane receptor antagonist to reach the market as a treatment for asthma; it was approved in Japan in 1997. Society and culture As of 2017 seratrodast was marketed as Bronica in Japan, Changnuo, Mai Xu Jia, Quan Kang Nuo in China and as Seretra & Seradair in India. Research Seratrodast was studied in perennial allergic rhinitis, chronic bronchitis and chronic pulmonary emphysema but efforts to bring the drug to market in those indications was abandoned around 2000. 
References 1,4-Benzoquinones Carboxylic acids
Seratrodast
[ "Chemistry" ]
850
[ "Carboxylic acids", "Functional groups" ]
18,463,064
https://en.wikipedia.org/wiki/Infrared%20and%20thermal%20testing
Infrared and thermal testing refer to passive thermographic inspection techniques, a class of nondestructive testing designated by the American Society for Nondestructive Testing (ASNT). Infrared thermography is the science of measuring and mapping surface temperatures. "Infrared thermography, a nondestructive, remote sensing technique, has proved to be an effective, convenient, and economical method of testing concrete. It can detect internal voids, delaminations, and cracks in concrete structures such as bridge decks, highway pavements, garage floors, parking lot pavements, and building walls. As a testing technique, some of its most important qualities are that (1) it is accurate; (2) it is repeatable; (3) it need not inconvenience the public; and (4) it is economical." Principles There are three ways of transferring thermal energy: conduction convection radiation All objects emit electromagnetic radiation of a wavelength dependent on the object's temperature. The wavelength of the radiation is inversely proportional to the temperature. According to thermodynamics, emitted energy will flow from warmer to cooler areas, and the rate of energy transfer will vary according to the efficiency of the heat transfer processes and the insulating effects of the material through which energy is flowing. In principle, a targeted object or feature will have different thermal properties than its surroundings; for instance, a buried metallic pipe conducts heat more readily than the surrounding soil, so if the fluid it is carrying is at a different temperature than the ambient conditions, the pipe will be visible to a thermal imaging sensor without having to perform an excavation to locate the pipe. Various types of construction materials have different insulating abilities. In addition, differing types of pipeline defects have different insulating values and/or vary in the magnitude of energy supplied. Because of the potential heterogeneities in the surrounding pipe (i.e., different types of soils), it can be difficult to distinguish targeted objects from background noise. Sensitivity An infrared thermographic scanning system can measure and view temperature patterns based upon temperature differences as small as a few hundredths of a degree Celsius. Infrared thermographic testing may be performed during day or night, depending on environmental conditions and the desired results. In practice In infrared thermography, thermal radiation is detected and measured with infrared imagers, also known as thermographic cameras or radiometers. The imagers contain an infrared detector that converts the emitted radiation into electrical signals that are displayed on a color or black and white computer display monitor. After the thermal data is processed, it can be displayed on a monitor in multiple shades of gray scale or color. The colors displayed on the thermogram are arbitrarily set by the Thermographer to best illustrate the infrared data being analyzed. Sample applications A typical application for regularly available IR Thermographic equipment is looking for "hot spots" in electrical equipment, which illustrates high resistance areas in electrical circuits. These "hot spots" are usually measured in the range of above ambient temperatures. 
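Two general blackbody relations quantify the statements above about temperature-dependent emission; they are standard radiometry rather than anything specific to the inspection standards listed later:

\[ M = \varepsilon\,\sigma\,T^{4}, \qquad \sigma \approx 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}, \]
\[ \lambda_{\max} = \frac{b}{T}, \qquad b \approx 2898\ \mu\mathrm{m\,K}, \]

where M is the radiant exitance of a surface with emissivity ε at absolute temperature T, and λmax is the wavelength of peak emission (Wien's displacement law), roughly 10 μm for surfaces near room temperature; this is why most inspection cameras operate in the long-wave infrared.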
When engineers use proprietary systems to locate subsurface targets such as underground storage tanks (USTs), pipelines, pipeline leaks and their plumes, and hidden tunnels, their locations are identified by temperature patterns typically in the range of 0.01 °C to 1 °C above or below ambient temperatures. Roofing In this roofing investigation application, infrared thermographic data was collected during daytime hours, on both sunny and rainy days. This data collection time allowed for solar heating of the roof, and any entrapped water within the roofing system, during the daylight hours. IR data was observed until the roof had sufficiently warmed to allow detection of the entrapped wet areas because of their ability to collect and store more heat than the dry insulated areas. The wet areas would also transfer the heat at a faster rate than the dry insulated roof areas. At this point in time, the wet areas showed up as warmer roof surface temperatures than the surrounding dry background areas of the roof. During the rainy day, with minimum solar loading, any entrapped leak plumes would become evident because of their cooler temperature as compared to the dry roof areas Pipeline testing An infrared thermographic scanning system measures surface temperatures only. But the surface temperatures that are measured on the surface of the ground, above a buried pipeline, are, to a great extent, dependent upon the subsurface conditions. Good solid backfill should have the least resistance to conduction of energy and the convection gas radiation effects should be negligible. The various types of problems associated with soil erosion and poor backfill surrounding buried pipelines increase the insulating ability of the soil, by reducing the energy conduction properties, without substantially increasing the convection effects. This is because dead air spaces do not allow the formation of convection currents. In order to have an energy flow, there must be an energy source. Since buried pipeline testing can involve large areas, the heat source has to be low cost and able to give the ground surface above the pipeline an even distribution of heat. The sun fulfills both of these requirements. The ground surface reacts, storing or transmitting the energy received. For pipelines carrying fluids at temperatures above or below the ambient ground temperatures (i.e., steam, oil, liquefied gases, or chemicals), an alternative is to use the heat sinking ability of the earth to draw heat from the pipeline under test. The crucial point to remember is that the energy must be flowing through the ground and fluids. Ground cover must be evaluated for temperature differentials (i.e., anomalies such as high grass or surface debris), as to how it may affect the surface condition of the test area. Of the three methods of energy transfer, radiation is the method that has the most profound effect upon the ability of the surface to transfer energy. The ability of a material to radiate energy is measured by the emissivity of the material. This is defined as the ability of the material to release energy as compared to a perfect blackbody radiator. This is strictly a surface property. It normally exhibits itself in higher values for rough surfaces and lower values for smooth surfaces. For example, rough concrete may have an emissivity of 0.95 while a shiny piece of tinfoil may have an emissivity of only 0.05. 
In practical terms, this means that when looking at large areas of ground cover, the engineer in charge of testing must be aware of differing surface textures caused by such things as broom roughed spots, tire rubber tracks, oil spots, loose sand and dirt on the surface and the height of grassy areas. Standards International Organization for Standardization (ISO) ISO 6781, Thermal insulation - Qualitative detection of thermal irregularities in building envelopes - Infrared method ISO 18434-1, Condition monitoring and diagnostics of machines - Thermography - Part 1: General procedures ISO 18436-7, Condition monitoring and diagnostics of machines - Requirements for qualification and assessment of personnel - Part 7: Thermography References Infrared imaging Nondestructive testing
Infrared and thermal testing
[ "Materials_science" ]
1,447
[ "Nondestructive testing", "Materials testing" ]
22,133,704
https://en.wikipedia.org/wiki/National%20Institute%20of%20Genetics
The National Institute of Genetics ("Japanese Institute of Genetics") is a Japanese institution founded in 1949. It hosts the DNA Data Bank of Japan. Notes and references External links Genetic engineering in Japan Genomics Bioinformatics organizations Biological databases Medical research institutes in Japan Research institutes established in 1949 1949 establishments in Japan
National Institute of Genetics
[ "Biology" ]
64
[ "Bioinformatics", "Biological databases", "Bioinformatics organizations" ]
22,134,777
https://en.wikipedia.org/wiki/List%20of%20human%20hormones
The following is a list of hormones found in Humans. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin. Hormones listing Steroid References Human hormones Cell signaling Signal transduction human hormones
List of human hormones
[ "Chemistry", "Biology" ]
78
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
22,136,672
https://en.wikipedia.org/wiki/George%20Oenslager
George Oenslager (September 25, 1873 – February 5, 1956) was a Goodrich chemist who discovered that a derivative of aniline accelerated the vulcanization of rubber with sulfur. He first introduced carbon black as a rubber reinforcing agent in 1912. Biography Oenslager attended Harrisburg & Phillips Exeter Academies, AB 1894, AM 1896. He first worked for the Warren Paper Co. in Maine from 1896 until 1905. He then worked for the Diamond & B.F. Goodrich Rubber Companies from 1905 until 1940. In 1912, Oenslager was working with David Spence at Diamond Rubber on additives to improve the vulcanization process. Building on Oenslager's aniline additives, Spence discovered that p-aminodimethylaniline was a far superior accelerator, vastly improving the tensile strength of the rubber. para-Aminodimethylaniline was adopted as the accelerator of choice by the Diamond Rubber Company in 1912. During World War I Oenslager inflated the first hydrogen balloon in the US. Oenslager received his Ph.D. from Harvard under Prof. Theodore William Richards. He was awarded the Perkin Medal in 1933 for his discovery of organic accelerators, specifically thiocarbanilide. This development was crucial to the commercialization of both natural and synthetic rubber. Oenslager was awarded the Charles Goodyear Medal in 1948. He was married to Ruth Alderfer Oenslager. References American chemists 1873 births 1956 deaths Polymer scientists and engineers Harvard University alumni
George Oenslager
[ "Chemistry", "Materials_science" ]
317
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
22,137,635
https://en.wikipedia.org/wiki/Drug-induced%20thrombocytopenic%20purpura
Drug-induced thrombocytopenic purpura is a skin condition resulting from a low platelet count due to drug-induced anti-platelet antibodies, caused by drugs such as heparin, sulfonamides, digoxin, quinine, and quinidine. See also Idiopathic thrombocytopenic purpura Skin lesion References Vascular-related cutaneous conditions Drug-induced diseases
Drug-induced thrombocytopenic purpura
[ "Chemistry" ]
91
[ "Drug-induced diseases", "Drug safety" ]
22,143,100
https://en.wikipedia.org/wiki/Shearography
Shearography or Speckle pattern shearing interferometry is a measuring and testing method similar to holographic interferometry. It uses coherent light or coherent soundwaves to provide information about the quality of different materials in nondestructive testing, strain measurement, and vibration analysis. Shearography is extensively used in production and development in aerospace, wind rotor blades, automotive, and materials research areas. Advantages of shearography are the large area testing capabilities (up to 1 m2 per minute), non-contact properties, its relative insensitivity to environmental disturbances, and its good performance on honeycomb materials, which is a big challenge for traditional nondestructive testing methods. Shearing function When a surface area is illuminated with a highly coherent laser light, a stochastical interference pattern is created. This interference pattern is called a speckle, and is projected on a rigid camera's CCD chip. Analogous with Electronic speckle pattern interferometry (ESPI), to obtain results from the speckle we need to compare it with a known reference light. Shearography uses the test object itself as the known reference; it shears the image so a double image is created. The superposition of the two images, a shear image, represents the surface of the test object at this unloaded state. This makes the method much less sensitive to external vibrations and noise. By applying a small load, the material will deform. A nonuniform material quality will generate a nonuniform movement of the surface of the test object. A new shearing image is recorded at the loaded state and is compared with the sheared image before load. If a flaw is present, it will be seen. Phase-shift technology To increase the sensitivity of the measurement method, a real-time phase shift process is used in the sensor. This contains a stepping mirror that shifts the reference beam, which is then processed with a best fit-algorithm and presents the information in real time. Applications The main applications are in composite nondestructive testing, where typical flaws are: Disbonds, Delaminations, Wrinkles, Porosity, Foreign objects, and Impact damages. Industries where Shearography is used are: Aerospace, Space, Boats, Wind power, Automotive, Tires, and Art conservation. Inspection standards The methodology of shearography is standardized by ASTM International: ASTM E2581-07, "Standard Practice for Shearography on Polymer Matrix Composites, Sandwich Core Materials and Filament Wound Pressure Vessel’s in Aerospace Applications" The following NDT personnel certification documents contain references to shearography: BS EN 4179:2009 NAS 410, 2008 Rev 3 ASNT SNT-TC-1A, 2006 edition ASNT CP-105, 2006 edition References Nondestructive testing
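The article does not specify the proprietary "best fit" processing used in the phase-shift step, but a common way to recover phase from temporally phase-shifted speckle (or shearing) interferograms is the four-step algorithm sketched below; the 90-degree step size, the array names and the wrapping convention are assumptions of this sketch, not details from the article.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Phase map from four intensity frames taken with 0, 90, 180 and 270
    degree reference shifts (generic four-bucket algorithm)."""
    return np.arctan2(I4 - I2, I1 - I3)

def wrapped_difference(phase_loaded, phase_unloaded):
    """Fringe pattern of the load-induced phase change, wrapped to (-pi, pi]."""
    d = phase_loaded - phase_unloaded
    return np.angle(np.exp(1j * d))

# Usage sketch: the frames would come from the camera before and after loading.
# delta = wrapped_difference(four_step_phase(L1, L2, L3, L4),
#                            four_step_phase(U1, U2, U3, U4))
```

Local anomalies in the wrapped phase-difference map correspond to the nonuniform surface displacements caused by subsurface flaws.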
Shearography
[ "Materials_science" ]
573
[ "Nondestructive testing", "Materials testing" ]
22,145,851
https://en.wikipedia.org/wiki/L.%20Gary%20Leal
Leslie Gary Leal (born March 18, 1943) is the Warren & Katharine Schlinger Professor of Chemical Engineering at the University of California, Santa Barbara, United States. He is known for his research work in the dynamics of complex fluids. Leal was elected a member of the National Academy of Engineering in 1987 for fundamental contributions to the understanding of the fluid mechanics of particulate systems, polymer solutions, and suspensions. Career Leal received his B.S. degree from the University of Washington in 1965, M.S. degree from the Stanford University in 1968, and Ph.D. degree from the Stanford University in 1969; all in chemical engineering. His Ph.D. thesis advisor was Andreas Acrivos. Leal started his academic career in 1970 as an assistant professor in chemical engineering at California Institute of Technology. He became full professor in 1978. During 1986–1989, he was Chevron Distinguished Professor of Chemical Engineering. In 1989, Leal joined University of California, Santa Barbara as professor and chair in the department of chemical engineering. He is currently the Warren and Katharine Schlinger Professor of Chemical Engineering at UCSB. Research Leal's research covers a wide range of topics in fluid dynamics, including the dynamics of complex fluids, such as polymeric liquids, emulsions, polymer blends, and liquid crystalline polymers. He also works on large-scale computer simulation of complex fluid flows. Leal and his coworkers made pioneering contributions to the study of drop deformation under different flow conditions. They have developed a scheme based on a finite difference approximation of the equations of motion, applied on a boundary-fitted orthogonal curvilinear coordinate system, inside and outside the drop. Leal has published more than 250 papers on fluid dynamics. He has directed 55 Ph.D. thesis in various topics in fluid dynamics. Several of his students have gone on to become professors at prestigious universities including Howard Stone who is currently at Princeton and Gerald Fuller at Stanford. Leal comes from a long line of researchers that can be traced back from mentor to mentor all the way to Sir Isaac Newton. Editorships From 1998–2015 he served as co-editor-in-chief of Physics of Fluids with John Kim. Honors and awards Distinguished Scholar Lecturer, Mechanical and Aerospace Engineering, Arizona State University, October 2006 Fluid Dynamics Prize, American Physical Society, 2002 Highly Cited Researchers, Original Member, 100 Most Highly Cited Researchers in Engineering, ISI Thompson Scientific, 2001. Bingham Medal, The Society of Rheology, 2001 John Simon Guggenheim Foundation Fellow, 1976. Allan Colburn Memorial Lectureship, department of chemical engineering, University of Delaware, 1978. Allan Colburn Award - National AIChE, 1978. Fellow of the American Physical Society, 1984. Chevron Distinguished Professor of Chem. Engineering, Caltech, April 1986-July 1989. Member of the National Academy of Engineering (elected 1987). Stanley Corrsin Lectureship in Fluid Mechanics, Dept. of Chem. Eng., The Johns Hopkins University, 1990. Stanley Katz Memorial Lectureship in Chemical Engineering, Dept. of Chem. Eng., City College of the City University of New York, 1991. Reilly Memorial Lectureship in Chemical Engineering, University of Notre Dame, April 1992. William H. Walker Award for Excellence in Contributions to Chemical Engineering Literature, AIChE 1993. Robert Pigford Lecturer, University of Delaware, April 1994. Julian C. 
Smith Lecturer, School of Chemical Engineering Cornell University April 1996. Co-Editor-in-Chief, Physics of Fluids, 1998-2015. NASA Group Achievement Award for MSL-1 Project Team to L.G. Leal, June 30, 1999 Rutgers Collaboratus X Lecturer, Dept. of Chem. and Biochemical Eng., Rutgers University, April, 2000. George K. Batchelor Lecturer in Fluid Mechanics, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK, May, 2000. NCE Cullimore Memorial Lecturer, NJIT, October 2001 David M. Mason Lecturer, Dept. Chem. Eng., Stanford University, May 2004. Books L. G. Leal, Laminar Flow and Convective Transport Processes, Butterworth-Heinemann, Stoneham, Massachusetts, 740 pages (1992). L. G. Leal, Advanced Transport Phenomena: Fluid Mechanics and Convective Transport Processes, Cambridge University Press, New York (2007). References External links Curriculum Vitae of L. Gary Leal Fluid dynamicists Stanford University alumni University of California, Santa Barbara faculty Rheologists 1943 births Living people Fellows of the American Physical Society Members of the United States National Academy of Engineering American chemical engineers Physics of Fluids editors
L. Gary Leal
[ "Chemistry" ]
955
[ "Fluid dynamicists", "Fluid dynamics" ]
22,146,315
https://en.wikipedia.org/wiki/Pentacarbon%20dioxide
Pentacarbon dioxide, officially penta-1,2,3,4-tetraene-1,5-dione, is an oxide of carbon (an oxocarbon) with formula C5O2 or O=C=C=C=C=C=O. The compound was described in 1988 by Günter Maier and others, who obtained it by pyrolysis of 2,4,6-tris(diazo)cyclohexane-1,3,5-trione (C6N6O3). Diazo transfer can produce the latter compound from phloroglucinol. It is stable at room temperature in solution. The pure compound is stable up to −90 °C, at which point it polymerizes. References See also Ethylene dione (C2O2) Carbon suboxide (C3O2) Oxocarbons Heterocumulenes Ketenes Diketones Substances discovered in the 1980s
Pentacarbon dioxide
[ "Chemistry" ]
205
[ "Functional groups", "Ketenes" ]
4,680,068
https://en.wikipedia.org/wiki/Multiplicity%20function%20for%20N%20noninteracting%20spins
The multiplicity function for a two-state paramagnet, W(n,N), is the number of spin states such that n of the N spins point in the z-direction. This function is given by the combinatoric function C(N,n). That is: W(n,N) = C(N,n) = N! / (n! (N − n)!). It is primarily used in introductory statistical mechanics and thermodynamics textbooks to explain the microscopic definition of entropy to students. If the spins are non-interacting, then the multiplicity function counts the number of states which have the same energy in an external magnetic field. By definition, the entropy S is then given by the natural logarithm of this number: S = k ln W(n,N), where k is the Boltzmann constant. References Thermodynamics
Multiplicity function for N noninteracting spins
[ "Physics", "Chemistry", "Mathematics" ]
150
[ "Thermodynamics stubs", "Physical chemistry stubs", "Thermodynamics", "Dynamical systems" ]
4,680,330
https://en.wikipedia.org/wiki/Doubly%20labeled%20water
Doubly labeled water is water in which both the hydrogen and the oxygen have been partly or completely replaced (i.e. labeled) with an uncommon isotope of these elements for tracing purposes. In practice, for both practical and safety reasons, almost all recent applications of the "doubly labeled water" (DLW) method use water labeled with heavy but non-radioactive forms of each element (deuterium, ²H; and oxygen-18, ¹⁸O). In theory, radioactive heavy isotopes of the elements could be used for such labeling; this was the case in many early applications of the method. In particular, DLW can be used for a method to measure the average daily metabolic rate of an organism over a period of time (often also called the Field metabolic rate, or FMR, in non-human animals). This is done by administering a dose of DLW, then measuring the elimination rates of ²H and ¹⁸O in the subject over time (through regular sampling of heavy isotope concentrations in body water, by sampling saliva, urine, or blood). At least two samples are required: an initial sample (after the isotopes have reached equilibrium in the body), and a second sample some time later. The time between these samples depends on the size of the animal. In small animals, the period may be as short as 24 hours; in larger animals (such as adult humans), the period may be as long as 14 days. The method was invented in the 1950s by Nathan Lifson and colleagues at the University of Minnesota. However, its use was restricted to small animals until the 1980s because of the high cost of oxygen-18. Advances in mass spectrometry during the 1970s and early 1980s reduced the amount of isotope required, which made it feasible to apply the method to larger animals, including humans. The first application to humans was in 1982, by Dale Schoeller, over 25 years after the method was initially discovered. A complete summary of the technique is provided in a book by British biologist John Speakman. Mechanism of the test The technique measures a subject's carbon dioxide production during the interval between first and last body water samples. The method depends on the details of carbon metabolism in our bodies. When cellular respiration breaks down carbon-containing molecules to release energy, carbon dioxide is released as a byproduct. The oxygen atoms in CO₂ exist in equilibrium with the isotopes of oxygen in body water. Therefore if the oxygen in water is labeled with ¹⁸O, then CO₂ produced by respiration will contain labeled oxygen. In addition, as CO₂ travels from the site of respiration through the cytoplasm of a cell, through the interstitial fluids, into the bloodstream and then to the lungs, some of it is reversibly converted to bicarbonate. So, after consuming water labeled with ¹⁸O, the ¹⁸O equilibrates with the body's bicarbonate and dissolved carbon dioxide pool (through the action of the enzyme carbonic anhydrase). As carbon dioxide is exhaled, ¹⁸O is lost from the body. This was discovered by Lifson in 1949. However, ¹⁸O is also lost through body water loss (such as urine and evaporation of fluids). However, deuterium (the second label in the doubly labeled water) is lost only when body water is lost. Thus, the loss of deuterium in body water over time can be used to mathematically compensate for the loss of ¹⁸O by the water-loss route. This leaves only the remaining net loss of ¹⁸O in carbon dioxide. This measurement of the amount of carbon dioxide lost is an excellent estimate for total carbon dioxide production.
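A minimal numerical sketch of the logic just described, ignoring the isotope-fractionation and pool-size corrections a real analysis would include: deuterium turnover measures water loss alone, while ¹⁸O turnover measures water loss plus two oxygen atoms per molecule of CO₂, so the difference between the two elimination rates gives CO₂ production. The example numbers and the conversion factor of roughly 21 kJ per litre of CO₂ (appropriate near a respiratory quotient of 0.8) are illustrative assumptions of this sketch.

```python
# Body water pool and isotope elimination rates (illustrative adult-human values)
N = 2200.0     # body water pool, mol (~40 L)
k_O = 0.12     # fractional 18O elimination rate, per day
k_D = 0.10     # fractional 2H (deuterium) elimination rate, per day

r_water = k_D * N                 # water output, mol/day
r_co2 = N * (k_O - k_D) / 2.0     # CO2 production, mol/day (2 O atoms per CO2)

litres_co2 = r_co2 * 22.4         # approximate gas volume at STP, L/day
energy_kj = litres_co2 * 21.0     # ~21 kJ per litre CO2 at RQ ~ 0.8 (assumed)

print(f"CO2 production ~ {r_co2:.1f} mol/day ({litres_co2:.0f} L/day)")
print(f"estimated energy expenditure ~ {energy_kj / 1000:.1f} MJ/day")
```

With these example rates the estimate comes out around 10 MJ/day, a plausible order of magnitude for an adult, which is all the sketch is meant to show.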
Once this is known, the total metabolic rate may be estimated from simplifying assumptions regarding the ratio of oxygen used in metabolism (and therefore heat generated), to carbon dioxide eliminated (see respiratory quotient). This quotient can be measured in other ways, and almost always has a value between 0.7 and 1.0, and for a mixed diet is usually about 0.8. In lay terms: Metabolism can be calculated from oxygen-in/CO-out. DLW ('tagged' water) is traceable hydrogen (deuterium), and traceable oxygen (O). The O leaves the body in two ways: (i) exhaled CO, and (ii) water loss in (mostly) urine, sweat, and breath. But the deuterium leaves only in the second way (water loss). From deuterium loss, we know how much of the tagged water left the body as water. And, since the concentration of O in the body's water is measured after the labeling dose is given, we also know how much of the tagged oxygen left the body in the water. (A simpler view is that the ratio of deuterium to O in body water is fixed, so total loss-rate of deuterium from the body multiplied by this ratio, immediately gives the loss rate of O in water.) Measurement of O dilution with time gives the total loss of this isotope by all routes (by water and respiration). Since the ratio of O to total water oxygen in the body is measured, we can convert O loss in respiration to total oxygen lost from the body's water pool via conversion to carbon dioxide. How much oxygen left the body as CO is the same as the CO produced by metabolism, since the body only produces CO by this route. The CO loss tells us the energy produced, if we know or can estimate the respiratory quotient (ratio of CO produced to oxygen used). Practical isotope administration DLW may be administered by injection, or orally (the usual route in humans). Since the isotopes will be diluted in body water, there is no need to administer them in a state of high isotopic purity, no need to employ water in which all or even most atoms are heavy atoms, or even to begin with water which is doubly labeled. It is also unnecessary to administer exactly one atom of O for every two atoms of deuterium. This matter in practice is governed by the economics of buying O enriched water, and the sensitivity of the mass-spectrographic equipment available. In practice, doses of doubly labeled water for metabolic work are prepared by simply mixing a dose of deuterium oxide (heavy water) (90 to 99%) with a second dose of HO, which is water which has been separately enriched with O (though usually not to a high level, since doing this would be expensive, and unnecessary for this use), but otherwise contains normal hydrogen. The mixed water sample then contains both types of heavy atoms, in a far higher degree than normal water, and is now "doubly labeled." The free interchange of hydrogens between water molecules (via normal ionization) in liquid water ensures that the pools of oxygen and hydrogen in any sample of water (including the body's pool of water) will be separately equilibrated in a short time with any dose of added heavy isotope(s). Applications The doubly labeled water method is particularly useful for measuring average metabolic rate (field metabolic rate) over relatively long periods of time (a few days or weeks), in subjects for which other types of direct or indirect calorimetric measurements of metabolic rate would be difficult or impossible. 
For example, the technique can measure the metabolism of animals in the wild, with the technical problems being related mainly to how to administer the dose of isotope and how to collect several samples of body water at later times to check for differential isotope elimination. Most animal studies involve capturing the subject animals and injecting them, then holding them for a variable period before the first blood sample is collected. This period depends on the size of the animal involved and varies from 30 minutes for very small animals to 6 hours for much larger animals.

In both animals and humans, the test is made more accurate if a single determination of respiratory quotient has been made for the organism eating the standard diet at the time of measurement, since this value changes relatively little (and more slowly) compared with the much larger metabolic rate changes related to thermoregulation and activity.

Because the heavy hydrogen and oxygen isotopes used in the standard DLW measurement are non-radioactive, and also non-toxic in the doses used (see heavy water), the DLW measurement of mean metabolic rate has been used extensively in human volunteers, even in infants and pregnant women. The technique has been used on over 200 species of wild animals (mostly birds, mammals and some reptiles). Applications of the method to animals have been reviewed. A paper in 2021 summarized the results of over 6400 measurements using the technique in humans aged between 8 days and 96 years.

DLW (²H₂¹⁸O) can also be used as a source of unusually warm ice and unusually dense water, as it has a higher melting point and a greater density than either light water or what is normally meant by "heavy water" (²H₂O). ²H₂¹⁸O melts at 4.00–4.04 °C (39.2–39.27 °F), and the liquid reaches its maximum density of 1.21684–1.21699 g/cm³ at 11.43–11.49 °C (52.57–52.68 °F).

See also
Tritiated water
Tritium

References

Calorimetry Metabolism Water chemistry Water Isotopes Deuterated compounds
Doubly labeled water
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
1,939
[ "Hydrology", "Water", "Isotopes", "Cellular processes", "nan", "Nuclear physics", "Biochemistry", "Metabolism" ]
4,682,019
https://en.wikipedia.org/wiki/Computational%20chemical%20methods%20in%20solid-state%20physics
Computational chemical methods in solid-state physics follow the same approach as they do for molecules, but with two differences. First, the translational symmetry of the solid has to be utilised, and second, it is possible to use completely delocalised basis functions such as plane waves as an alternative to the molecular atom-centered basis functions. The electronic structure of a crystal is in general described by a band structure, which defines the energies of the electron orbitals at each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a single molecule, it is even more time-consuming to calculate energies for the entire list of points in the Brillouin zone. Calculations can use the Hartree–Fock method, some post-Hartree–Fock methods (particularly Møller–Plesset perturbation theory to second order, MP2), and density functional theory (DFT).

See also
List of quantum chemistry and solid-state physics software

References
Computational Chemistry, David Young, Wiley-Interscience, 2001. Chapter 41, p. 318. The extensive references in that chapter provide further reading on this topic.
Computational Chemistry and Molecular Modeling: Principles and Applications, K. I. Ramachandran, G. Deepa and Krishnan Namboori P. K., Springer-Verlag GmbH.

Computational chemistry Theoretical chemistry Computational science Condensed matter physics Computational physics
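As a toy illustration of what a band structure assigns to each point of the Brillouin zone, here is a one-dimensional nearest-neighbour tight-binding sketch. It is purely illustrative, not one of the ab initio or DFT methods discussed above, and the parameter values are arbitrary.

import numpy as np

# One-dimensional chain with one orbital per site: E(k) = eps0 - 2*t*cos(k*a).
eps0, t, a = 0.0, 1.0, 1.0                 # on-site energy, hopping, lattice constant (arbitrary units)

# Sample the first Brillouin zone, k in [-pi/a, pi/a]
k = np.linspace(-np.pi / a, np.pi / a, 201)
energy = eps0 - 2.0 * t * np.cos(k * a)    # one band: an orbital energy for every k-point

# A first-principles calculation repeats an expensive orbital-energy solve at each
# of these k-points (and for every band), which is why Brillouin-zone sampling
# multiplies the cost relative to a single molecule.
print(energy.min(), energy.max())          # band edges at -2t and +2t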
Computational chemical methods in solid-state physics
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
328
[ "Materials science stubs", "Computational physics stubs", "Theoretical chemistry stubs", "Applied mathematics", "Phases of matter", "Materials science", "Computational physics", "Computational science", "Theoretical chemistry", "Computational chemistry stubs", "Computational chemistry", "Conde...
4,682,498
https://en.wikipedia.org/wiki/Semi-empirical%20quantum%20chemistry%20method
Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They are very important in computational chemistry for treating large molecules where the full Hartree–Fock method without the approximations is too expensive. The use of empirical parameters appears to allow some inclusion of electron correlation effects into the methods. Within the framework of Hartree–Fock calculations, some pieces of information (such as two-electron integrals) are sometimes approximated or completely omitted. In order to correct for this loss, semi-empirical methods are parametrized, that is their results are fitted by a set of parameters, normally in such a way as to produce results that best agree with experimental data, but sometimes to agree with ab initio results. Type of simplifications used Semi-empirical methods follow what are often called empirical methods where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel. For all valence electron systems, the extended Hückel method was proposed by Roald Hoffmann. Semi-empirical calculations are much faster than their ab initio counterparts, mostly due to the use of the zero differential overlap approximation. Their results, however, can be very wrong if the molecule being computed is not similar enough to the molecules in the database used to parametrize the method. Preferred application domains Methods restricted to π-electrons These methods exist for the calculation of electronically excited states of polyenes, both cyclic and linear. These methods, such as the Pariser–Parr–Pople method (PPP), can provide good estimates of the π-electronic excited states, when parameterized well. For many years, the PPP method outperformed ab initio excited state calculations. Methods restricted to all valence electrons. These methods can be grouped into several groups: Methods such as CNDO/2, INDO and NDDO that were introduced by John Pople. The implementations aimed to fit, not experiment, but ab initio minimum basis set results. These methods are now rarely used but the methodology is often the basis of later methods. Methods that are in the MOPAC, AMPAC, SPARTAN and/or CP2K computer programs originally from the group of Michael Dewar. These are MINDO, MNDO, AM1, PM3, PM6, PM7 and SAM1. Here the objective is to use parameters to fit experimental heats of formation, dipole moments, ionization potentials, and geometries. This is by far the largest group of semiempirical methods. Methods whose primary aim is to calculate excited states and hence predict electronic spectra. These include ZINDO and SINDO. The OMx (x=1,2,3) methods can also be viewed as belonging to this class, although they are also suitable for ground-state applications; in particular, the combination of OM2 and MRCI is an important tool for excited state molecular dynamics. Tight-binding methods, e.g. a large family of methods known as DFTB, are sometimes classified as semiempirical methods as well. More recent examples include the semiempirical quantum mechanical methods GFNn-xTB (n=0,1,2), which are particularly suited for the geometry, vibrational frequencies, and non-covalent interactions of large molecules. 
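To illustrate the simplest of these simplifications, the π-electron-only Hückel method mentioned above, here is a small sketch for butadiene. Only one-electron terms and nearest-neighbour connectivity enter; α (Coulomb) and β (resonance) are empirical parameters, and the values below are just placeholders in units of |β|.

import numpy as np

# Hückel matrix for the four pi centres of butadiene (a simple chain of 4 atoms).
alpha, beta = 0.0, -1.0
H = np.array([[alpha, beta, 0.0, 0.0],
              [beta, alpha, beta, 0.0],
              [0.0, beta, alpha, beta],
              [0.0, 0.0, beta, alpha]])

orbital_energies = np.linalg.eigvalsh(H)
# Expected, in ascending order: alpha + 1.618*beta, alpha + 0.618*beta,
# alpha - 0.618*beta, alpha - 1.618*beta (with beta negative).
print(np.round(orbital_energies, 3))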
The NOTCH method includes many new, physically-motivated terms compared to the NDDO family of methods, is much less empirical than the other semi-empirical methods (almost all of its parameters are determined non-empirically), provides robust accuracy for bonds between uncommon element combinations, and is applicable to ground and excited states. See also List of quantum chemistry and solid-state physics software References
Semi-empirical quantum chemistry method
[ "Chemistry" ]
794
[ "Computational chemistry", "Quantum chemistry", "Semiempirical quantum chemistry methods" ]
4,683,709
https://en.wikipedia.org/wiki/Tak%20%28function%29
In computer science, the Tak function is a recursive function, named after Ikuo Takeuchi. It is defined as follows:

def tak(x, y, z):
    if y < x:
        return tak(tak(x - 1, y, z),
                   tak(y - 1, z, x),
                   tak(z - 1, x, y))
    else:
        return z

This function is often used as a benchmark for languages with optimization for recursion.

tak() vs. tarai()
The original definition by Takeuchi was as follows:

def tarai(x, y, z):
    if y < x:
        return tarai(tarai(x - 1, y, z),
                     tarai(y - 1, z, x),
                     tarai(z - 1, x, y))
    else:
        return y  # not z!

tarai is short for tarai mawashi ("to pass around") in Japanese. John McCarthy named this function tak() after Takeuchi. However, in certain later references, the y somehow got turned into the z. This is a small but significant difference, because the original version benefits significantly from lazy evaluation. Though written in exactly the same manner as the others, the Haskell code below runs much faster:

tarai :: Int -> Int -> Int -> Int
tarai x y z
    | x <= y    = y
    | otherwise = tarai (tarai (x-1) y z)
                        (tarai (y-1) z x)
                        (tarai (z-1) x y)

One can easily accelerate this function via memoization, yet lazy evaluation still wins. The best known way to optimize tarai is to use a mutually recursive helper function, in which the third argument is kept unevaluated, as the triple of arguments that would produce it, until it is actually needed:

def laziest_tarai(x, y, zx, zy, zz):
    # (zx, zy, zz) stands for the not-yet-computed third argument tarai(zx, zy, zz)
    if not y < x:
        return y
    else:
        z = tarai(zx, zy, zz)   # force the third argument only when it is needed
        return laziest_tarai(tarai(x - 1, y, z),
                             tarai(y - 1, z, x),
                             z - 1, x, y)

def tarai(x, y, z):
    if not y < x:
        return y
    else:
        return laziest_tarai(tarai(x - 1, y, z),
                             tarai(y - 1, z, x),
                             z - 1, x, y)

Here is an efficient implementation of tarai() in C:

int tarai(int x, int y, int z)
{
    while (x > y) {
        int oldx = x, oldy = y;
        x = tarai(x - 1, y, z);
        y = tarai(y - 1, z, oldx);
        if (x <= y)
            break;
        z = tarai(z - 1, oldx, oldy);
    }
    return y;
}

Note the additional check for (x <= y) before z (the third argument) is evaluated, avoiding unnecessary recursive evaluation.

References

External links
TAK Function

Functions and mappings Special functions
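As a concrete illustration of the memoization mentioned above, here is a sketch using Python's standard functools.lru_cache; the argument values in the example call are arbitrary.

from functools import lru_cache

@lru_cache(maxsize=None)
def tak_memo(x, y, z):
    # Same recurrence as tak() above, but every distinct (x, y, z) is computed once.
    if y < x:
        return tak_memo(tak_memo(x - 1, y, z),
                        tak_memo(y - 1, z, x),
                        tak_memo(z - 1, x, y))
    return z

print(tak_memo(18, 12, 6))   # finishes almost instantly; the plain version makes a huge number of calls

Caching collapses the repeated sub-computations, although, as noted above, the lazily evaluated tarai still does less work in principle.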
Tak (function)
[ "Mathematics" ]
644
[ "Mathematical analysis", "Functions and mappings", "Special functions", "Mathematical objects", "Combinatorics", "Mathematical relations" ]
449,744
https://en.wikipedia.org/wiki/Storm%20surge
A storm surge, storm flood, tidal surge, or storm tide is a coastal flood or tsunami-like phenomenon of rising water commonly associated with low-pressure weather systems, such as cyclones. It is measured as the rise in water level above the normal tidal level, and does not include waves. The main meteorological factor contributing to a storm surge is high-speed wind pushing water towards the coast over a long fetch. Other factors affecting storm surge severity include the shallowness and orientation of the water body in the storm path, the timing of tides, and the atmospheric pressure drop due to the storm. As extreme weather becomes more intense and the sea level rises due to climate change, storm surges are expected to cause more risk to coastal populations. Communities and governments can adapt by building hard infrastructure, like surge barriers, soft infrastructure, like coastal dunes or mangroves, improving coastal construction practices and building social strategies such as early warning, education and evacuation plans. Mechanics At least five processes can be involved in altering tide levels during storms. Direct wind effect Wind stresses cause a phenomenon referred to as wind setup, which is the tendency for water levels to increase at the downwind shore and to decrease at the upwind shore. Intuitively, this is caused by the storm blowing the water toward one side of the basin in the direction of its winds. Strong surface winds cause surface currents at a 45° angle to the wind direction, by an effect known as the Ekman spiral. Because the Ekman spiral effects spread vertically through the water, the effect is proportional to depth. The surge will be driven into bays in the same way as the astronomical tide. Atmospheric pressure effect The pressure effects of a tropical cyclone will cause the water level in the open ocean to rise in regions of low atmospheric pressure and fall in regions of high atmospheric pressure. The rising water level will counteract the low atmospheric pressure such that the total pressure at some plane beneath the water surface remains constant. This effect is estimated at a increase in sea level for every millibar (hPa) drop in atmospheric pressure. For example, a major storm with a 100 millibar pressure drop would be expected to have a water level rise from the pressure effect. Effect of the Earth's rotation The Earth's rotation causes the Coriolis effect, which bends currents to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. When this bend brings the currents into more perpendicular contact with the shore, it can amplify the surge, and when it bends the current away from the shore it has the effect of lessening the surge. Effect of waves The effect of waves, while directly powered by the wind, is distinct from a storm's wind-powered currents. Powerful wind whips up large, strong waves in the direction of its movement. Although these surface waves are responsible for very little water transport in open water, they may be responsible for significant transport near the shore. When waves are breaking on a line more or less parallel to the beach, they carry considerable water shoreward. As they break, the water moving toward the shore has considerable momentum and may run up a sloping beach to an elevation above the mean water line, which may exceed twice the wave height before breaking. Rainfall effect The rainfall effect is experienced predominantly in estuaries. 
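The size of the atmospheric pressure effect described above can be sketched with the standard "inverse barometer" balance, a rise of roughly Δp/(ρg) for a pressure drop Δp. The snippet below is an order-of-magnitude illustration, not output from an operational surge model.

rho = 1025.0    # typical seawater density, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2

def inverse_barometer_rise_m(pressure_drop_hpa):
    """Approximate sea level rise (m) for a given atmospheric pressure drop (hPa)."""
    dp = pressure_drop_hpa * 100.0      # 1 hPa (1 mbar) = 100 Pa
    return dp / (rho * g)

print(inverse_barometer_rise_m(1))      # about 0.01 m, i.e. roughly 1 cm per millibar
print(inverse_barometer_rise_m(100))    # about 1 m for a 100 mbar pressure drop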
Hurricanes may dump as much as of rainfall in 24 hours over large areas and higher rainfall densities in localized areas. As a result, surface runoff can quickly flood streams and rivers. This can increase the water level near the head of tidal estuaries as storm-driven waters surging in from the ocean meet rainfall flowing downstream into the estuary. Sea depth and topography In addition to the above processes, storm surge and wave heights on shore are also affected by the flow of water over the underlying topography, i.e. the shape and depth of the ocean floor and coastal area. A narrow shelf, with deep water relatively close to the shoreline, tends to produce a lower surge but higher and more powerful waves. A wide shelf, with shallower water, tends to produce a higher storm surge with relatively smaller waves. For example, in Palm Beach on the southeast coast of Florida, the water depth reaches offshore, and out. This is relatively steep and deep; storm surge is not as great but the waves are larger compared to the west coast of Florida. Conversely, on the Gulf side of Florida, the edge of the Floridian Plateau can lie more than offshore. Florida Bay, lying between the Florida Keys and the mainland, is very shallow with depths between and . These shallow areas are subject to higher storm surges with smaller waves. Other shallow areas include much of the Gulf of Mexico coast, and the Bay of Bengal. The difference is due to how much flow area the storm surge can dissipate to. In deeper water, there is more area and a surge can be dispersed down and away from the hurricane. On a shallow, gently sloping shelf, the surge has less room to disperse and is driven ashore by the wind forces of the hurricane. The topography of the land surface is another important element in storm surge extent. Areas, where the land lies less than a few meters above sea level, are at particular risk from storm surge inundation. Storm size The size of the storm also affects the surge height; this is due to the storm's area not being proportional to its perimeter. If a storm doubles in diameter, its perimeter also doubles, but its area quadruples. As there is proportionally less perimeter for the surge to dissipate to, the surge height ends up being higher. Extratropical storms Similar to tropical cyclones, extratropical cyclones cause an offshore rise of water. However, unlike most tropical cyclone storm surges, extratropical cyclones can cause higher water levels across a large area for longer periods of time, depending on the system. In North America, extratropical storm surges may occur on the Pacific and Alaska coasts, and north of 31°N on the Atlantic Coast. Coasts with sea ice may experience an "ice tsunami" causing significant damage inland. Extratropical storm surges may be possible further south for the Gulf coast mostly during the wintertime, when extratropical cyclones affect the coast, such as in the 1993 Storm of the Century. November 9–13, 2009, marked a significant extratropical storm surge event on the United States east coast when the remnants of Hurricane Ida developed into a nor'easter off the southeast U.S. coast. During the event, winds from the east were present along the northern periphery of the low-pressure center for a number of days, forcing water into locations such as Chesapeake Bay. 
Water levels rose significantly and remained as high as above normal in numerous locations throughout the Chesapeake for a number of days as water was continually built-up inside the estuary from the onshore winds and freshwater rains flowing into the bay. In many locations, water levels were shy of records by only . Measuring surge Surge can be measured directly at coastal tidal stations as the difference between the forecast tide and the observed rise of water. Another method of measuring surge is by the deployment of pressure transducers along the coastline just ahead of an approaching tropical cyclone. This was first tested for Hurricane Rita in 2005. These types of sensors can be placed in locations that will be submerged and can accurately measure the height of water above them. After surge from a cyclone has receded, teams of surveyors map high-water marks (HWM) on land, in a rigorous and detailed process that includes photographs and written descriptions of the marks. HWMs denote the location and elevation of floodwaters from a storm event. When HWMs are analyzed, if the various components of the water height can be broken out so that the portion attributable to surge can be identified, then that mark can be classified as storm surge. Otherwise, it is classified as storm tide. HWMs on land are referenced to a vertical datum (a reference coordinate system). During the evaluation, HWMs are divided into four categories based on the confidence in the mark; in the U.S., only HWMs evaluated as "excellent" are used by the National Hurricane Center in the post-storm analysis of the surge. Two different measures are used for storm tide and storm surge measurements. Storm tide is measured using a geodetic vertical datum (NGVD 29 or NAVD 88). Since storm surge is defined as the rise of water beyond what would be expected by the normal movement caused by tides, storm surge is measured using tidal predictions, with the assumption that the tide prediction is well-known and only slowly varying in the region subject to the surge. Since tides are a localized phenomenon, storm surge can only be measured in relationship to a nearby tidal station. Tidal benchmark information at a station provides a translation from the geodetic vertical datum to mean sea level (MSL) at that location, then subtracting the tidal prediction yields a surge height above the normal water height. SLOSH The U.S. National Hurricane Center forecasts storm surge using the SLOSH model, which is an abbreviation for Sea, Lake and Overland Surges from Hurricanes. The model is accurate to within 20 percent. SLOSH inputs include the central pressure of a tropical cyclone, storm size, the cyclone's forward motion, its track, and maximum sustained winds. Local topography, bay and river orientation, depth of the sea bottom, astronomical tides, as well as other physical features, are taken into account in a predefined grid referred to as a SLOSH basin. Overlapping SLOSH basins are defined for the southern and eastern coastline of the continental U.S. Some storm simulations use more than one SLOSH basin; for instance, Hurricane Katrina SLOSH model runs used both the Lake Pontchartrain / New Orleans basin, and the Mississippi Sound basin, for the northern Gulf of Mexico landfall. The final output from the model run will display the maximum envelope of water, or MEOW, that occurred at each location. 
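The relationship between the observed storm tide, the predicted astronomical tide, and the storm surge described above can be made concrete with a small sketch; the hourly water levels below are invented for illustration.

# Hourly water levels at a hypothetical tide station, metres above the station datum.
observed = [1.2, 1.6, 2.1, 2.9, 3.4, 3.1, 2.4]    # what the gauge records (the storm tide)
predicted = [1.0, 1.3, 1.5, 1.6, 1.5, 1.3, 1.0]   # predicted astronomical tide

surge = [obs - pred for obs, pred in zip(observed, predicted)]   # storm surge, hour by hour
print(f"peak storm tide: {max(observed):.1f} m, peak storm surge: {max(surge):.1f} m")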
To allow for track or forecast uncertainties, usually several model runs with varying input parameters are generated to create a map of MOMs or Maximum of Maximums. For hurricane evacuation studies, a family of storms with representative tracks for the region, and varying intensity, eye diameter, and speed are modeled to produce worst-case water heights for any tropical cyclone occurrence. The results of these studies are typically generated from several thousand SLOSH runs. These studies have been completed by the United States Army Corps of Engineers, under contract to the Federal Emergency Management Agency (FEMA), for several states and are available on their Hurricane Evacuation Studies (HES) website. They include coastal county maps, shaded to identify the minimum category of hurricane that will result in flooding, in each area of the county. Impacts Storm surge is responsible for significant property damage and loss of life as part of cyclones. Storm surge both destroys built infrastructure, like roads, and undermines foundations and building structures. Unexpected flooding in estuaries and coastal areas can catch populations unprepared, causing loss of life. The deadliest storm surge on record was the 1970 Bhola cyclone. Additionally, storm surge can cause or transform human-utilized land through other processes, hurting soil fertility, increasing saltwater intrusion, hurting wildlife habitat, and spreading chemical or other contaminants from human storage. Mitigation Although meteorological surveys alert about hurricanes or severe storms, in the areas where the risk of coastal flooding is particularly high, there are specific storm surge warnings. These have been implemented, for instance, in the Netherlands, Spain, the United States, and the United Kingdom. Similarly educating coastal communities and developing local evacuation plans can reduce the relative impact on people. A prophylactic method introduced after the North Sea flood of 1953 is the construction of dams and storm-surge barriers (flood barriers). They are open and allow free passage, but close when the land is under threat of a storm surge. Major storm surge barriers are the Oosterscheldekering and Maeslantkering in the Netherlands, which are part of the Delta Works project; the Thames Barrier protecting London; and the Saint Petersburg Dam in Russia. Another modern development (in use in the Netherlands) is the creation of housing communities at the edges of wetlands with floating structures, restrained in position by vertical pylons. Such wetlands can then be used to accommodate runoff and surges without causing damage to the structures while also protecting conventional structures at somewhat higher low-lying elevations, provided that dikes prevent major surge intrusion. Other soft adaptation methods can include changing structures so that they are elevated to avoid flooding directly, or increasing natural protections like mangroves or dunes. For mainland areas, storm surge is more of a threat when the storm strikes land from seaward, rather than approaching from landward. Reverse storm surge Water can also be sucked away from shore prior to a storm surge. This was the case on the western Florida coast in 2017, just before Hurricane Irma made landfall, uncovering land usually underwater. This phenomenon is known as a reverse storm surge, or a negative storm surge. 
Historic storm surges The deadliest storm surge on record was the 1970 Bhola cyclone, which killed up to 500,000 people in the area of the Bay of Bengal. The low-lying coast of the Bay of Bengal is particularly vulnerable to surges caused by tropical cyclones. The deadliest storm surge in the twenty-first century was caused by Cyclone Nargis, which killed more than 138,000 people in Myanmar in May 2008. The next deadliest in this century was caused by Typhoon Haiyan (Yolanda), which killed more than 6,000 people in the central Philippines in 2013. and resulted in economic losses estimated at $14 billion (USD). The 1900 Galveston hurricane, a Category 4 hurricane that struck Galveston, Texas, drove a devastating surge ashore; between 6,000 and 12,000 people died, making it the deadliest natural disaster ever to strike the United States. The highest storm tide noted in historical accounts was produced by the 1899 Cyclone Mahina, estimated at almost at Bathurst Bay, Australia, but research published in 2000 concluded that the majority of this likely was wave run-up because of the steep coastal topography. However, much of this storm surge was likely due to Mahina's extreme intensity, as computer modeling required an intensity of (the same intensity as the lowest recorded pressure from the storm) to produce the recorded storm surge. In the United States, one of the greatest recorded storm surges was generated by Hurricane Katrina on August 29, 2005, which produced a maximum storm surge of more than in southern Mississippi, with a storm surge height of in Pass Christian. Another record storm surge occurred in this same area from Hurricane Camille in 1969, with a storm tide of , also at Pass Christian. A storm surge of occurred in New York City during Hurricane Sandy in October 2012. See also Coastal flooding Ishiguro Storm Surge Computer Meteotsunami Rogue wave Tsunami-proof building Notes References External links European Space Agency storm Surge Project home pages from NIRAPAD disaster response organisation. NOAA NWS National Hurricane Center storm surge page DeltaWorks.Org North Sea Flood of 1953, includes images, video, and animations. UK storm surge model outputs and real-time tide gauge information from the National Tidal and Sea Level Facility Flood Water waves Tropical cyclone meteorology Severe weather and convection
Storm surge
[ "Physics", "Chemistry", "Environmental_science" ]
3,127
[ "Physical phenomena", "Hydrology", "Water waves", "Flood", "Waves", "Fluid dynamics" ]
449,756
https://en.wikipedia.org/wiki/Epitaxy
Epitaxy (the prefix epi- means "on top of") refers to a type of crystal growth or material deposition in which new crystalline layers are formed with one or more well-defined orientations with respect to the crystalline seed layer. The deposited crystalline film is called an epitaxial film or epitaxial layer. The relative orientation(s) of the epitaxial layer to the seed layer is defined in terms of the orientation of the crystal lattice of each material. For most epitaxial growths, the new layer is crystalline and each crystallographic domain of the overlayer must have a well-defined orientation relative to the substrate crystal structure. Epitaxy can involve single-crystal structures, although grain-to-grain epitaxy has been observed in granular films. For most technological applications, single-domain epitaxy, which is the growth of an overlayer crystal with one well-defined orientation with respect to the substrate crystal, is preferred. Epitaxy can also play an important role in the growth of superlattice structures. The term epitaxy comes from the Greek roots epi (ἐπί), meaning "above", and taxis (τάξις), meaning "an ordered manner".

One of the main commercial applications of epitaxial growth is in the semiconductor industry, where semiconductor films are grown epitaxially on semiconductor substrate wafers. For the case of epitaxial growth of a planar film atop a substrate wafer, the epitaxial film's lattice will have a specific orientation relative to the substrate wafer's crystalline lattice, such as the [001] Miller index of the film aligning with the [001] index of the substrate. In the simplest case, the epitaxial layer can be a continuation of the same semiconductor compound as the substrate; this is referred to as homoepitaxy. Otherwise, the epitaxial layer will be composed of a different compound; this is referred to as heteroepitaxy.

Types
Homoepitaxy is a kind of epitaxy performed with only one material, in which a crystalline film is grown on a substrate or film of the same material. This technology is often used to grow a more pure film than the substrate and to fabricate layers with different doping levels. In academic literature, homoepitaxy is often abbreviated to "homoepi".

Homotopotaxy is a process similar to homoepitaxy except that the thin-film growth is not limited to two-dimensional growth. Here the substrate is the thin-film material.

Heteroepitaxy is a kind of epitaxy performed with materials that are different from each other. In heteroepitaxy, a crystalline film grows on a crystalline substrate or film of a different material. This technology is often used to grow crystalline films of materials for which crystals cannot otherwise be obtained and to fabricate integrated crystalline layers of different materials. Examples include silicon on sapphire, gallium nitride (GaN) on sapphire, aluminium gallium indium phosphide (AlGaInP) on gallium arsenide (GaAs) or diamond or iridium, and graphene on hexagonal boron nitride (hBN). Heteroepitaxy occurs when a film of different composition and/or crystal structure is grown on a substrate. In this case, the amount of strain in the film is determined by the lattice mismatch ε = (a_s − a_f) / a_f, where a_f and a_s are the lattice constants of the film and the substrate (sign conventions vary). The film and substrate could have similar lattice spacings but also different thermal expansion coefficients. If a film is grown at a high temperature, it can experience large strains upon cooling to room temperature. In reality, a sufficiently small mismatch is necessary for obtaining epitaxy.
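As a quick numerical illustration of this mismatch, using the same (a_s − a_f)/a_f convention as above; the lattice constants are approximate room-temperature values.

# Approximate room-temperature lattice constants, in angstroms.
a_substrate_si = 5.431    # silicon substrate
a_film_ge = 5.658         # germanium film

mismatch = (a_substrate_si - a_film_ge) / a_film_ge    # (a_s - a_f) / a_f
print(f"Ge film on Si substrate: {mismatch:+.1%}")      # about -4%, so the Ge layer is compressed in plane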
If the mismatch is too large, the film experiences a volumetric strain that builds with each layer until a critical thickness. With increased thickness, the elastic strain in the film is relieved by the formation of dislocations, which can become scattering centers that damage the quality of the structure. Heteroepitaxy is commonly used to create so-called bandgap systems thanks to the additional energy caused by the deformation. Silicon-germanium epitaxial layers are heavily used in CMOS microelectronics and silicon photonics.

Heterotopotaxy is a process similar to heteroepitaxy except that thin-film growth is not limited to two-dimensional growth; the substrate is similar only in structure to the thin-film material.

Pendeo-epitaxy is a process in which the heteroepitaxial film grows vertically and laterally simultaneously. In 2D crystal heterostructures, graphene nanoribbons embedded in hexagonal boron nitride give an example of pendeo-epitaxy.

Grain-to-grain epitaxy involves epitaxial growth between the grains of a multicrystalline epitaxial and seed layer. This can usually occur when the seed layer only has an out-of-plane texture but no in-plane texture. In such a case, the seed layer consists of grains with different in-plane textures. The epitaxial overlayer then creates specific textures along each grain of the seed layer, due to lattice matching. This kind of epitaxial growth doesn't involve single-crystal films.

Epitaxy is used in silicon-based manufacturing processes for bipolar junction transistors (BJTs) and modern complementary metal–oxide–semiconductors (CMOS), but it is particularly important for compound semiconductors such as gallium arsenide. Manufacturing issues include control of the amount and uniformity of the deposition's resistivity and thickness, the cleanliness and purity of the surface and the chamber atmosphere, preventing dopant from the typically much more highly doped substrate wafer from diffusing into the new layers, imperfections of the growth process, and protecting the surfaces during manufacture and handling.

Mechanism
Heteroepitaxial growth is classified into three primary growth modes: Volmer–Weber (VW), Frank–van der Merwe (FM) and Stranski–Krastanov (SK). In the VW growth regime, the epitaxial film grows out of 3D nuclei on the growth surface. In this mode, the adsorbate-adsorbate interactions are stronger than adsorbate-surface interactions, leading to island formation by local nucleation, and the epitaxial layer is formed when the islands join. In the FM growth mode, adsorbate-surface and adsorbate-adsorbate interactions are balanced, which promotes 2D layer-by-layer or step-flow epitaxial growth. The SK mode is a combination of the VW and FM modes. In this mechanism, the growth initiates in the FM mode, forming 2D layers, but after reaching a critical thickness, it enters a VW-like 3D island growth regime. Practical epitaxial growth, however, takes place in a high supersaturation regime, away from thermodynamic equilibrium. In that case, the epitaxial growth is governed by adatom kinetics rather than thermodynamics, and 2D step-flow growth becomes dominant.

Methods
Vapor-phase
Homoepitaxial growth of semiconductor thin films is generally done by chemical or physical vapor deposition methods that deliver the precursors to the substrate in the gaseous state.
For example, silicon is most commonly deposited from silicon tetrachloride (or germanium tetrachloride) and hydrogen at approximately 1200 to 1250 °C: SiCl4(g) + 2H2(g) ↔ Si(s) + 4HCl(g) where (g) and (s) represent gas and solid phases, respectively. This reaction is reversible, and the growth rate depends strongly upon the proportion of the two source gases. Growth rates above 2 micrometres per minute produce polycrystalline silicon, and negative growth rates (etching) may occur if too much hydrogen chloride byproduct is present. (Hydrogen chloride may be intentionally added to etch the wafer.) An additional etching reaction competes with the deposition reaction: SiCl4(g) + Si(s) ↔ 2SiCl2(g) Silicon VPE may also use silane, dichlorosilane, and trichlorosilane source gases. For instance, the silane reaction occurs at 650 °C in this way: SiH4 → Si + 2H2 VPE is sometimes classified by the chemistry of the source gases, such as hydride VPE (HVPE) and metalorganic VPE (MOVPE or MOCVD). The reaction chamber where this process takes place may be heated by lamps located outside the chamber. A common technique used in compound semiconductor growth is molecular beam epitaxy (MBE). In this method, a source material is heated to produce an evaporated beam of particles, which travel through a very high vacuum (10−8 Pa; practically free space) to the substrate and start epitaxial growth. Chemical beam epitaxy, on the other hand, is an ultra-high vacuum process that uses gas phase precursors to generate the molecular beam. Another widely used technique in microelectronics and nanotechnology is atomic layer epitaxy, in which precursor gases are alternatively pulsed into a chamber, leading to atomic monolayer growth by surface saturation and chemisorption. Liquid-phase Liquid-phase epitaxy (LPE) is a method to grow semiconductor crystal layers from the melt on solid substrates. This happens at temperatures well below the melting point of the deposited semiconductor. The semiconductor is dissolved in the melt of another material. At conditions that are close to the equilibrium between dissolution and deposition, the deposition of the semiconductor crystal on the substrate is relatively fast and uniform. The most used substrate is indium phosphide (InP). Other substrates like glass or ceramic can be applied for special applications. To facilitate nucleation, and to avoid tension in the grown layer the thermal expansion coefficient of substrate and grown layer should be similar. Centrifugal liquid-phase epitaxy is used commercially to make thin layers of silicon, germanium, and gallium arsenide. Centrifugally formed film growth is a process used to form thin layers of materials by using a centrifuge. The process has been used to create silicon for thin-film solar cells and far-infrared photodetectors. Temperature and centrifuge spin rate are used to control layer growth. Centrifugal LPE has the capability to create dopant concentration gradients while the solution is held at constant temperature. Solid-phase Solid-phase epitaxy (SPE) is a transition between the amorphous and crystalline phases of a material. It is usually produced by depositing a film of amorphous material on a crystalline substrate, then heating it to crystallize the film. The single-crystal substrate serves as a template for crystal growth. The annealing step used to recrystallize or heal silicon layers amorphized during ion implantation is also considered to be a type of solid phase epitaxy. 
The impurity segregation and redistribution at the growing crystal-amorphous layer interface during this process is used to incorporate low-solubility dopants in metals and silicon. Doping An epitaxial layer can be doped during deposition by adding impurities to the source gas, such as arsine, phosphine, or diborane. Dopants in the source gas, liberated by evaporation or wet etching of the surface, may also diffuse into the epitaxial layer and cause autodoping. The concentration of impurity in the gas phase determines its concentration in the deposited film. Doping can also be achieved by a site-competition technique, where the growth precursor ratios are tuned to enhance the incorporation of vacancies, specific dopant species or vacant-dopant clusters into the lattice. Additionally, the high temperatures at which epitaxy is performed may allow dopants to diffuse into the growing layer from other layers in the wafer (out-diffusion). Minerals In mineralogy, epitaxy is the overgrowth of one mineral on another in an orderly way, such that certain crystal directions of the two minerals are aligned. This occurs when some planes in the lattices of the overgrowth and the substrate have similar spacings between atoms. If the crystals of both minerals are well formed so that the directions of the crystallographic axes are clear then the epitaxic relationship can be deduced just by a visual inspection. Sometimes many separate crystals form the overgrowth on a single substrate, and then if there is epitaxy all the overgrowth crystals will have a similar orientation. The reverse, however, is not necessarily true. If the overgrowth crystals have a similar orientation there is probably an epitaxic relationship, but it is not certain. Some authors consider that overgrowths of a second generation of the same mineral species should also be considered as epitaxy, and this is common terminology for semiconductor scientists who induce epitaxic growth of a film with a different doping level on a semiconductor substrate of the same material. For naturally produced minerals, however, the International Mineralogical Association (IMA) definition requires that the two minerals be of different species. Another man-made application of epitaxy is the making of artificial snow using silver iodide, which is possible because hexagonal silver iodide and ice have similar cell dimensions. Isomorphic minerals Minerals that have the same structure (isomorphic minerals) may have epitaxic relations. An example is albite on microcline . Both these minerals are triclinic, with space group , and with similar unit cell parameters, a = 8.16 Å, b = 12.87 Å, c = 7.11 Å, α = 93.45°, β = 116.4°, γ = 90.28° for albite and a = 8.5784 Å, b = 12.96 Å, c = 7.2112 Å, α = 90.3°, β = 116.05°, γ = 89° for microcline. Polymorphic minerals Minerals that have the same composition but different structures (polymorphic minerals) may also have epitaxic relations. Examples are pyrite and marcasite, both FeS2, and sphalerite and wurtzite, both ZnS. Rutile on hematite Some pairs of minerals that are not related structurally or compositionally may also exhibit epitaxy. A common example is rutile TiO2 on hematite Fe2O3. Rutile is tetragonal and hematite is trigonal, but there are directions of similar spacing between the atoms in the (100) plane of rutile (perpendicular to the a axis) and the (001) plane of hematite (perpendicular to the c axis). 
In epitaxy these directions tend to line up with each other, resulting in the axis of the rutile overgrowth being parallel to the c axis of hematite, and the c axis of rutile being parallel to one of the axes of hematite. Hematite on magnetite Another example is hematite on magnetite . The magnetite structure is based on close-packed oxygen anions stacked in an ABC-ABC sequence. In this packing the close-packed layers are parallel to (111) (a plane that symmetrically "cuts off" a corner of a cube). The hematite structure is based on close-packed oxygen anions stacked in an AB-AB sequence, which results in a crystal with hexagonal symmetry. If the cations were small enough to fit into a truly close-packed structure of oxygen anions then the spacing between the nearest neighbour oxygen sites would be the same for both species. The radius of the oxygen ion, however, is only 1.36 Å and the Fe cations are big enough to cause some variations. The Fe radii vary from 0.49 Å to 0.92 Å, depending on the charge (2+ or 3+) and the coordination number (4 or 8). Nevertheless, the O spacings are similar for the two minerals hence hematite can readily grow on the (111) faces of magnetite, with hematite (001) parallel to magnetite (111). Applications Epitaxy is used in nanotechnology and in semiconductor fabrication. Indeed, epitaxy is the only affordable method of high quality crystal growth for many semiconductor materials. In surface science, epitaxy is used to create and study monolayer and multilayer films of adsorbed organic molecules on single crystalline surfaces via scanning tunnelling microscopy. See also Heterojunction Island growth Nano-RAM Quantum cascade laser Selective area epitaxy Silicon on sapphire Single event upset Thermal laser epitaxy Thin film Vertical-cavity surface-emitting laser Wake Shield Facility Zhores Alferov References Bibliography External links epitaxy.net : a central forum for the epitaxy-communities Deposition processes CrystalXE.com: a specialized software in epitaxy Thin film deposition Semiconductor device fabrication Crystallography Methods of crystal growth
Epitaxy
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
3,609
[ "Thin film deposition", "Microtechnology", "Methods of crystal growth", "Coatings", "Thin films", "Materials science", "Semiconductor device fabrication", "Crystallography", "Condensed matter physics", "Planes (geometry)", "Solid state engineering" ]
450,004
https://en.wikipedia.org/wiki/Jacobi%20elliptic%20functions
In mathematics, the Jacobi elliptic functions are a set of basic elliptic functions. They are found in the description of the motion of a pendulum, as well as in the design of electronic elliptic filters. While trigonometric functions are defined with reference to a circle, the Jacobi elliptic functions are a generalization which refer to other conic sections, the ellipse in particular. The relation to trigonometric functions is contained in the notation, for example, by the matching notation for . The Jacobi elliptic functions are used more often in practical problems than the Weierstrass elliptic functions as they do not require notions of complex analysis to be defined and/or understood. They were introduced by . Carl Friedrich Gauss had already studied special Jacobi elliptic functions in 1797, the lemniscate elliptic functions in particular, but his work was published much later. Overview There are twelve Jacobi elliptic functions denoted by , where and are any of the letters , , , and . (Functions of the form are trivially set to unity for notational completeness.) is the argument, and is the parameter, both of which may be complex. In fact, the Jacobi elliptic functions are meromorphic in both and . The distribution of the zeros and poles in the -plane is well-known. However, questions of the distribution of the zeros and poles in the -plane remain to be investigated. In the complex plane of the argument , the twelve functions form a repeating lattice of simple poles and zeroes. Depending on the function, one repeating parallelogram, or unit cell, will have sides of length or on the real axis, and or on the imaginary axis, where and are known as the quarter periods with being the elliptic integral of the first kind. The nature of the unit cell can be determined by inspecting the "auxiliary rectangle" (generally a parallelogram), which is a rectangle formed by the origin at one corner, and as the diagonally opposite corner. As in the diagram, the four corners of the auxiliary rectangle are named , , , and , going counter-clockwise from the origin. The function will have a zero at the corner and a pole at the corner. The twelve functions correspond to the twelve ways of arranging these poles and zeroes in the corners of the rectangle. When the argument and parameter are real, with , and will be real and the auxiliary parallelogram will in fact be a rectangle, and the Jacobi elliptic functions will all be real valued on the real line. Since the Jacobian elliptic functions are doubly periodic in , they factor through a torus – in effect, their domain can be taken to be a torus, just as cosine and sine are in effect defined on a circle. Instead of having only one circle, we now have the product of two circles, one real and the other imaginary. The complex plane can be replaced by a complex torus. The circumference of the first circle is and the second , where and are the quarter periods. Each function has two zeroes and two poles at opposite positions on the torus. Among the points there is one zero and one pole. The Jacobian elliptic functions are then doubly periodic, meromorphic functions satisfying the following properties: There is a simple zero at the corner , and a simple pole at the corner . The complex number is equal to half the period of the function ; that is, the function is periodic in the direction , with the period being . The function is also periodic in the other two directions and , with periods such that and are quarter periods. 
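The quarter periods mentioned above are easy to compute numerically. The sketch below uses SciPy, whose ellipk takes the parameter m rather than the modulus k.

from scipy.special import ellipk

m = 0.7                      # the parameter m (not the modulus k)
K = ellipk(m)                # quarter period K(m), about 2.0754 here
K_prime = ellipk(1.0 - m)    # complementary quarter period K'(m) = K(1 - m), about 1.7139 here

# For real m in (0, 1), the repeating unit cell of sn(u, m), for example,
# has sides 4K along the real axis and 2K' along the imaginary axis.
print(K, K_prime)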
Notation The elliptic functions can be given in a variety of notations, which can make the subject unnecessarily confusing. Elliptic functions are functions of two variables. The first variable might be given in terms of the amplitude , or more commonly, in terms of given below. The second variable might be given in terms of the parameter , or as the elliptic modulus , where , or in terms of the modular angle , where . The complements of and are defined as and . These four terms are used below without comment to simplify various expressions. The twelve Jacobi elliptic functions are generally written as where and are any of the letters , , , and . Functions of the form are trivially set to unity for notational completeness. The “major” functions are generally taken to be , and from which all other functions can be derived and expressions are often written solely in terms of these three functions, however, various symmetries and generalizations are often most conveniently expressed using the full set. (This notation is due to Gudermann and Glaisher and is not Jacobi's original notation.) Throughout this article, . The functions are notationally related to each other by the multiplication rule: (arguments suppressed) from which other commonly used relationships can be derived: The multiplication rule follows immediately from the identification of the elliptic functions with the Neville theta functions Also note that: Definition in terms of inverses of elliptic integrals There is a definition, relating the elliptic functions to the inverse of the incomplete elliptic integral of the first kind . These functions take the parameters and as inputs. The that satisfies is called the Jacobi amplitude: In this framework, the elliptic sine sn u (Latin: sinus amplitudinis) is given by and the elliptic cosine cn u (Latin: cosinus amplitudinis) is given by and the delta amplitude dn u (Latin: delta amplitudinis) In the above, the value is a free parameter, usually taken to be real such that (but can be complex in general), and so the elliptic functions can be thought of as being given by two variables, and the parameter . The remaining nine elliptic functions are easily built from the above three (, , ), and are given in a section below. Note that when , that then equals the quarter period . In the most general setting, is a multivalued function (in ) with infinitely many logarithmic branch points (the branches differ by integer multiples of ), namely the points and where . This multivalued function can be made single-valued by cutting the complex plane along the line segments joining these branch points (the cutting can be done in non-equivalent ways, giving non-equivalent single-valued functions), thus making analytic everywhere except on the branch cuts. In contrast, and other elliptic functions have no branch points, give consistent values for every branch of , and are meromorphic in the whole complex plane. Since every elliptic function is meromorphic in the whole complex plane (by definition), (when considered as a single-valued function) is not an elliptic function. However, a particular cutting for can be made in the -plane by line segments from to with ; then it only remains to define at the branch cuts by continuity from some direction. Then becomes single-valued and singly-periodic in with the minimal period and it has singularities at the logarithmic branch points mentioned above. If and , is continuous in on the real line. 
When , the branch cuts of in the -plane cross the real line at for ; therefore for , is not continuous in on the real line and jumps by on the discontinuities. But defining this way gives rise to very complicated branch cuts in the -plane (not the -plane); they have not been fully described as of yet. Let be the incomplete elliptic integral of the second kind with parameter . Then the Jacobi epsilon function can be defined as for and and by analytic continuation in each of the variables otherwise: the Jacobi epsilon function is meromorphic in the whole complex plane (in both and ). Alternatively, throughout both the -plane and -plane, is well-defined in this way because all residues of are zero, so the integral is path-independent. So the Jacobi epsilon relates the incomplete elliptic integral of the first kind to the incomplete elliptic integral of the second kind: The Jacobi epsilon function is not an elliptic function, but it appears when differentiating the Jacobi elliptic functions with respect to the parameter. The Jacobi zn function is defined by It is a singly periodic function which is meromorphic in , but not in (due to the branch cuts of and ). Its minimal period in is . It is related to the Jacobi zeta function by Historically, the Jacobi elliptic functions were first defined by using the amplitude. In more modern texts on elliptic functions, the Jacobi elliptic functions are defined by other means, for example by ratios of theta functions (see below), and the amplitude is ignored. In modern terms, the relation to elliptic integrals would be expressed by (or ) instead of . Definition as trigonometry: the Jacobi ellipse are defined on the unit circle, with radius r = 1 and angle arc length of the unit circle measured from the positive x-axis. Similarly, Jacobi elliptic functions are defined on the unit ellipse, with a = 1. Let then: For each angle the parameter (the incomplete elliptic integral of the first kind) is computed. On the unit circle (), would be an arc length. However, the relation of to the arc length of an ellipse is more complicated. Let be a point on the ellipse, and let be the point where the unit circle intersects the line between and the origin . Then the familiar relations from the unit circle: read for the ellipse: So the projections of the intersection point of the line with the unit circle on the x- and y-axes are simply and . These projections may be interpreted as 'definition as trigonometry'. In short: For the and value of the point with and parameter we get, after inserting the relation: into: that: The latter relations for the x- and y-coordinates of points on the unit ellipse may be considered as generalization of the relations for the coordinates of points on the unit circle. The following table summarizes the expressions for all Jacobi elliptic functions pq(u,m) in the variables (x,y,r) and (φ,dn) with Definition in terms of the Jacobi theta functions Using elliptic integrals Equivalently, Jacobi's elliptic functions can be defined in terms of the theta functions. With such that , let and , , . Then with , , and , The Jacobi zn function can be expressed by theta functions as well: where denotes the partial derivative with respect to the first variable. 
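Numerically, the inverse-integral definitions of sn, cn and dn given earlier are easy to check. The sketch below uses SciPy's ellipj, which returns sn, cn, dn and the amplitude for a given argument u and parameter m.

import numpy as np
from scipy.special import ellipj, ellipkinc

u, m = 0.8, 0.5
sn, cn, dn, am = ellipj(u, m)    # ellipj returns sn, cn, dn and the amplitude am(u, m)

assert np.isclose(sn, np.sin(am))                          # sn u = sin(am(u, m))
assert np.isclose(cn, np.cos(am))                          # cn u = cos(am(u, m))
assert np.isclose(dn, np.sqrt(1.0 - m * np.sin(am) ** 2))  # dn u = sqrt(1 - m sin^2(am))
assert np.isclose(u, ellipkinc(am, m))                     # u = F(am(u, m) | m), the inverse-integral definition
print(sn, cn, dn, am)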
Using modular inversion In fact, the definition of the Jacobi elliptic functions in Whittaker & Watson is stated a little bit differently than the one given above (but it's equivalent to it) and relies on modular inversion: The function , defined by assumes every value in once and only once in where is the upper half-plane in the complex plane, is the boundary of and In this way, each can be associated with one and only one . Then Whittaker & Watson define the Jacobi elliptic functions by where . In the book, they place an additional restriction on (that ), but it is in fact not a necessary restriction (see the Cox reference). Also, if or , the Jacobi elliptic functions degenerate to non-elliptic functions which is described below. Definition in terms of Neville theta functions The Jacobi elliptic functions can be defined very simply using the Neville theta functions: Simplifications of complicated products of the Jacobi elliptic functions are often made easier using these identities. Jacobi transformations The Jacobi imaginary transformations The Jacobi imaginary transformations relate various functions of the imaginary variable i u or, equivalently, relations between various values of the m parameter. In terms of the major functions: Using the multiplication rule, all other functions may be expressed in terms of the above three. The transformations may be generally written as . The following table gives the for the specified pq(u,m). (The arguments are suppressed) {| class="wikitable" style="text-align:center" |+ Jacobi Imaginary transformations !colspan="2" rowspan="2"| !colspan="4"|q |- ! c ! s ! n ! d |- !rowspan="6"|p |- ! c | 1 || i ns || nc || nd |- ! s | −i sn || 1 || −i sc || −i sd |- ! n | cn || i cs || 1 || cd |- ! d | dn || i ds || dc || 1 |} Since the hyperbolic trigonometric functions are proportional to the circular trigonometric functions with imaginary arguments, it follows that the Jacobi functions will yield the hyperbolic functions for m=1. In the figure, the Jacobi curve has degenerated to two vertical lines at x = 1 and x = −1. The Jacobi real transformations The Jacobi real transformations yield expressions for the elliptic functions in terms with alternate values of m. The transformations may be generally written as . The following table gives the for the specified pq(u,m). (The arguments are suppressed) {| class="wikitable" style="text-align:center" |+ Jacobi real transformations !colspan="2" rowspan="2"| !colspan="4"|q |- ! c ! s ! n ! d |- !rowspan="6"|p |- ! c | || || || |- ! s | || || || |- ! n | || || || |- ! d | || || || |} Other Jacobi transformations Jacobi's real and imaginary transformations can be combined in various ways to yield three more simple transformations . The real and imaginary transformations are two transformations in a group (D3 or anharmonic group) of six transformations. If is the transformation for the m parameter in the real transformation, and is the transformation of m in the imaginary transformation, then the other transformations can be built up by successive application of these two basic transformations, yielding only three more possibilities: These five transformations, along with the identity transformation (μU(m) = m) yield the six-element group. 
With regard to the Jacobi elliptic functions, the general transformation can be expressed using just three functions: where i = U, I, IR, R, RI, or RIR, identifying the transformation, γi is a multiplication factor common to these three functions, and the prime indicates the transformed function. The other nine transformed functions can be built up from the above three. The reason the cs, ns, ds functions were chosen to represent the transformation is that the other functions will be ratios of these three (except for their inverses) and the multiplication factors will cancel. The following table lists the multiplication factors for the three ps functions, the transformed ms, and the transformed function names for each of the six transformations. (As usual, k2 = m, 1 − k2 = k12 = m′ and the arguments () are suppressed) {| class="wikitable" style="text-align:center" |+ Parameters for the six transformations !Transformation i||||||cs'||ns'||ds' |- ! U | 1 || m || cs || ns || ds |- ! I | i || m' || ns || cs || ds |- ! IR | i k || −m'/m || ds || cs || ns |- ! R | k || 1/m || ds || ns || cs |- ! RI |i k1|| 1/m' || ns || ds || cs |- ! RIR | k1 || −m/m' || cs || ds || ns |- |} Thus, for example, we may build the following table for the RIR transformation. The transformation is generally written (The arguments are suppressed) {| class="wikitable" style="text-align:center" |+ The RIR transformation !colspan="2" rowspan="2"| !colspan="4"|q |- ! c ! s ! n ! d |- !rowspan="6"|p |- ! c |1|| k' cs || cd || cn |- ! s | sc|| 1 || sd || sn |- ! n | dc || ds || 1 || dn |- ! d | nc || ns || nd || 1 |} The value of the Jacobi transformations is that any set of Jacobi elliptic functions with any real-valued parameter m can be converted into another set for which and, for real values of u, the function values will be real. Amplitude transformations In the following, the second variable is suppressed and is equal to : where both identities are valid for all such that both sides are well-defined. With we have where all the identities are valid for all such that both sides are well-defined. The Jacobi hyperbola Introducing complex numbers, our ellipse has an associated hyperbola: from applying Jacobi's imaginary transformation to the elliptic functions in the above equation for x and y. It follows that we can put . So our ellipse has a dual ellipse with m replaced by 1-m. This leads to the complex torus mentioned in the Introduction. Generally, m may be a complex number, but when m is real and m<0, the curve is an ellipse with major axis in the x direction. At m=0 the curve is a circle, and for 0<m<1, the curve is an ellipse with major axis in the y direction. At m = 1, the curve degenerates into two vertical lines at x = ±1. For m > 1, the curve is a hyperbola. When m is complex but not real, x or y or both are complex and the curve cannot be described on a real x-y diagram. Minor functions Reversing the order of the two letters of the function name results in the reciprocals of the three functions above: Similarly, the ratios of the three primary functions correspond to the first letter of the numerator followed by the first letter of the denominator: More compactly, we have where p and q are any of the letters s, c, d. Periodicity, poles, and residues In the complex plane of the argument u, the Jacobi elliptic functions form a repeating pattern of poles (and zeroes). The residues of the poles all have the same absolute value, differing only in sign. 
Each function pq(u,m) has an "inverse function" (in the multiplicative sense) qp(u,m) in which the positions of the poles and zeroes are exchanged. The periods of repetition are generally different in the real and imaginary directions, hence the use of the term "doubly periodic" to describe them. For the Jacobi amplitude and the Jacobi epsilon function: where is the complete elliptic integral of the second kind with parameter . The double periodicity of the Jacobi elliptic functions may be expressed as: where α and β are any pair of integers. K(⋅) is the complete elliptic integral of the first kind, also known as the quarter period. The power of negative unity (γ) is given in the following table: {| class="wikitable" style="text-align:center" |+ !colspan="2" rowspan="2"| !colspan="4"|q |- ! c ! s ! n ! d |- !rowspan="6"|p |- ! c |0||β || α + β || α |- ! s |β || 0 || α || α + β |- ! n | α + β || α || 0 || β |- ! d | α || α + β || β || 0 |} When the factor (−1)γ is equal to −1, the equation expresses quasi-periodicity. When it is equal to unity, it expresses full periodicity. It can be seen, for example, that for the entries containing only α when α is even, full periodicity is expressed by the above equation, and the function has full periods of 4K(m) and 2iK(1 − m). Likewise, functions with entries containing only β have full periods of 2K(m) and 4iK(1 − m), while those with α + β have full periods of 4K(m) and 4iK(1 − m). In the diagram on the right, which plots one repeating unit for each function, indicating phase along with the location of poles and zeroes, a number of regularities can be noted: The inverse of each function is opposite the diagonal, and has the same size unit cell, with poles and zeroes exchanged. The pole and zero arrangement in the auxiliary rectangle formed by (0,0), (K,0), (0,K′) and (K,K′) are in accordance with the description of the pole and zero placement described in the introduction above. Also, the size of the white ovals indicating poles are a rough measure of the absolute value of the residue for that pole. The residues of the poles closest to the origin in the figure (i.e. in the auxiliary rectangle) are listed in the following table: {| class="wikitable" style="text-align:center; width:200px”" |+ Residues of Jacobi Elliptic Functions !colspan="2" rowspan="2"| !colspan="4"|q |- ! width="40pt"|c ! width="40pt"|s ! width="40pt"|n ! width="40pt"|d |- !rowspan="6"|p |- ! height="40pt" |c | ||1|||| |- ! height="40pt" |s | || |||| |- ! height="40pt" |n |||1|| || |- ! height="40pt" | d | -1 || 1 || || |- |} When applicable, poles displaced above by 2K or displaced to the right by 2K′ have the same value but with signs reversed, while those diagonally opposite have the same value. Note that poles and zeroes on the left and lower edges are considered part of the unit cell, while those on the upper and right edges are not. The information about poles can in fact be used to characterize the Jacobi elliptic functions: The function is the unique elliptic function having simple poles at (with ) with residues taking the value at . The function is the unique elliptic function having simple poles at (with ) with residues taking the value at . The function is the unique elliptic function having simple poles at (with ) with residues taking the value at . Special values Setting gives the lemniscate elliptic functions and : When or , the Jacobi elliptic functions are reduced to non-elliptic functions: For the Jacobi amplitude, and where is the Gudermannian function. 
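As a quick numerical illustration of the degenerate cases just mentioned, the following sketch checks that the circular and hyperbolic limits come out as stated and that the amplitude reduces to the Gudermannian function at m = 1. It assumes SciPy is available and uses scipy.special.ellipj, which takes the parameter m (not the modulus k); the check itself is a sanity test, not part of the original text.

```python
import numpy as np
from scipy.special import ellipj

u = np.linspace(-2.0, 2.0, 9)

# m = 0: sn -> sin, cn -> cos, dn -> 1, am(u, 0) = u
sn0, cn0, dn0, ph0 = ellipj(u, 0.0)
assert np.allclose(sn0, np.sin(u))
assert np.allclose(cn0, np.cos(u))
assert np.allclose(dn0, 1.0)
assert np.allclose(ph0, u)

# m = 1: sn -> tanh, cn = dn -> sech, am(u, 1) = gd(u) (Gudermannian)
sn1, cn1, dn1, ph1 = ellipj(u, 1.0)
assert np.allclose(sn1, np.tanh(u))
assert np.allclose(cn1, 1.0 / np.cosh(u))
assert np.allclose(dn1, 1.0 / np.cosh(u))
assert np.allclose(ph1, np.arcsin(np.tanh(u)))  # gd(u) = arcsin(tanh u)

print("m = 0 and m = 1 degenerate limits verified")
```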
In general if neither of p,q is d then . Identities Half angle formula K formulas Half K formula Third K formula To get x3, we take the tangent of twice the arctangent of the modulus. Also this equation leads to the sn-value of the third of K: These equations lead to the other values of the Jacobi-Functions: Fifth K formula Following equation has following solution: To get the sn-values, we put the solution x into following expressions: Relations between squares of the functions Relations between squares of the functions can be derived from two basic relationships (Arguments (u,m) suppressed): where m + m' = 1. Multiplying by any function of the form nq yields more general equations: With q = d, these correspond trigonometrically to the equations for the unit circle () and the unit ellipse (), with x = cd, y = sd and r = nd. Using the multiplication rule, other relationships may be derived. For example: Addition theorems The functions satisfy the two square relations (dependence on m suppressed) From this we see that (cn, sn, dn) parametrizes an elliptic curve which is the intersection of the two quadrics defined by the above two equations. We now may define a group law for points on this curve by the addition formulas for the Jacobi functions The Jacobi epsilon and zn functions satisfy a quasi-addition theorem: Double angle formulae can be easily derived from the above equations by setting x = y. Half angle formulae are all of the form: where: Jacobi elliptic functions as solutions of nonlinear ordinary differential equations Derivatives with respect to the first variable The derivatives of the three basic Jacobi elliptic functions (with respect to the first variable, with fixed) are: These can be used to derive the derivatives of all other functions as shown in the table below (arguments (u,m) suppressed): Also With the addition theorems above and for a given m with 0 < m < 1 the major functions are therefore solutions to the following nonlinear ordinary differential equations: solves the differential equations and (for not on a branch cut) solves the differential equations and solves the differential equations and solves the differential equations and The function which exactly solves the pendulum differential equation, with initial angle and zero initial angular velocity is where , and . Derivatives with respect to the second variable With the first argument fixed, the derivatives with respect to the second variable are as follows: Expansion in terms of the nome Let the nome be , , and let . Then the functions have expansions as Lambert series when Bivariate power series expansions have been published by Schett. Fast computation The theta function ratios provide an efficient way of computing the Jacobi elliptic functions. There is an alternative method, based on the arithmetic-geometric mean and Landen's transformations: Initialize where . Define where . Then define for and a fixed . If for , then as . This is notable for its rapid convergence. It is then trivial to compute all Jacobi elliptic functions from the Jacobi amplitude on the real line. In conjunction with the addition theorems for elliptic functions (which hold for complex numbers in general) and the Jacobi transformations, the method of computation described above can be used to compute all Jacobi elliptic functions in the whole complex plane. Another method of fast computation of the Jacobi elliptic functions via the arithmetic–geometric mean, avoiding the computation of the Jacobi amplitude, is due to Herbert E. 
Salzer: Let Set Then as . Yet, another method for a rapidly converging fast computation of the Jacobi elliptic sine function found in the literature is shown below. Let: Then set: Then: . Approximation in terms of hyperbolic functions The Jacobi elliptic functions can be expanded in terms of the hyperbolic functions. When is close to unity, such that and higher powers of can be neglected, we have: sn(u): cn(u): dn(u): For the Jacobi amplitude, Continued fractions Assuming real numbers with and the nome , with elliptic modulus . If , where is the complete elliptic integral of the first kind, then holds the following continued fraction expansion Known continued fractions involving and with elliptic modulus are For , : pg. 374 For , : pg. 375 For , : pg. 220 For , : pg. 374 For , : pg. 375 Inverse functions The inverses of the Jacobi elliptic functions can be defined similarly to the inverse trigonometric functions; if , . They can be represented as elliptic integrals, and power series representations have been found. Map projection The Peirce quincuncial projection is a map projection based on Jacobian elliptic functions. See also Elliptic curve Schwarz–Christoffel mapping Carlson symmetric form Jacobi theta function Ramanujan theta function Dixon elliptic functions Abel elliptic functions Weierstrass elliptic function Lemniscate elliptic functions Notes Citations References N. I. Akhiezer, Elements of the Theory of Elliptic Functions (1970) Moscow, translated into English as AMS Translations of Mathematical Monographs Volume 79 (1990) AMS, Rhode Island A. C. Dixon The elementary properties of the elliptic functions, with examples (Macmillan, 1894) Alfred George Greenhill The applications of elliptic functions (London, New York, Macmillan, 1892) Edmund T. Whittaker, George Neville Watson: A Course in Modern Analysis. 4th ed. Cambridge, England: Cambridge University Press, 1990. S. 469–470. H. Hancock Lectures on the theory of elliptic functions (New York, J. Wiley & sons, 1910) P. Appell and E. Lacour Principes de la théorie des fonctions elliptiques et applications (Paris, Gauthier Villars, 1897) G. H. Halphen Traité des fonctions elliptiques et de leurs applications (vol. 1) (Paris, Gauthier-Villars, 1886–1891) G. H. Halphen Traité des fonctions elliptiques et de leurs applications (vol. 2) (Paris, Gauthier-Villars, 1886–1891) G. H. Halphen Traité des fonctions elliptiques et de leurs applications (vol. 3) (Paris, Gauthier-Villars, 1886–1891) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome I, Introduction. Calcul différentiel. Ire partie (Paris : Gauthier-Villars et fils, 1893) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome II, Calcul différentiel. IIe partie (Paris : Gauthier-Villars et fils, 1893) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome III, Calcul intégral. Ire partie, Théorèmes généraux. Inversion (Paris : Gauthier-Villars et fils, 1893) J. Tannery and J. Molk Eléments de la théorie des fonctions elliptiques. Tome IV, Calcul intégral. IIe partie, Applications (Paris : Gauthier-Villars et fils, 1893) C. Briot and J. C. Bouquet Théorie des fonctions elliptiques ( Paris : Gauthier-Villars, 1875) Toshio Fukushima: Fast Computation of Complete Elliptic Integrals and Jacobian Elliptic Functions. 2012, National Astronomical Observatory of Japan (国立天文台) Lowan, Blanch und Horenstein: On the Inversion of the q-Series Associated with Jacobian Elliptic Functions. Bull. Amer. Math. Soc. 48, 1942 H. 
Ferguson, D. E. Nielsen, G. Cook: A partition formula for the integer coefficients of the theta function nome. Mathematics of Computation, Volume 29, Number 131, July 1975 J. D. Fenton and R. S. Gardiner-Garden: Rapidly-convergent methods for evaluating elliptic integrals and theta and elliptic functions. J. Austral. Math. Soc. (Series B) 24, 1982, p. 57 Adolf Kneser: Neue Untersuchung einer Reihe aus der Theorie der elliptischen Funktionen. J. reine u. angew. Math. 157, 1927, pages 209–218 External links Elliptic functions Special functions
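The arithmetic–geometric-mean approach described in the fast-computation section above lends itself to a compact implementation. The sketch below implements the textbook descending-Landen/AGM recipe for the Jacobi amplitude; it is written from the standard formulation of that method, not from Salzer's or Fenton and Gardiner-Garden's exact recurrences (whose explicit formulas were lost in this extraction), and the function names are ad hoc. It then uses the square relations and the sn addition theorem quoted earlier as internal consistency checks.

```python
import math

def jacobi_am(u, m, tol=1e-15):
    """Jacobi amplitude am(u, m), 0 <= m < 1, via the arithmetic-geometric mean."""
    a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    a_seq, c_seq = [a], [c]
    while abs(c_seq[-1]) > tol and len(a_seq) < 60:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        a_seq.append(a)
        c_seq.append(c)
    n = len(a_seq) - 1
    phi = (2.0 ** n) * a_seq[n] * u          # innermost angle of the ladder
    for k in range(n, 0, -1):                # descend the Landen ladder
        s = max(-1.0, min(1.0, c_seq[k] / a_seq[k] * math.sin(phi)))
        phi = 0.5 * (phi + math.asin(s))
    return phi

def sn_cn_dn(u, m):
    phi = jacobi_am(u, m)
    sn, cn = math.sin(phi), math.cos(phi)
    return sn, cn, math.sqrt(1.0 - m * sn * sn)

# Consistency checks: the square relations and the addition theorem for sn.
m, u, v = 0.6, 0.9, -0.35
sn_u, cn_u, dn_u = sn_cn_dn(u, m)
sn_v, cn_v, dn_v = sn_cn_dn(v, m)
assert abs(sn_u**2 + cn_u**2 - 1.0) < 1e-12
assert abs(dn_u**2 + m * sn_u**2 - 1.0) < 1e-12

lhs = sn_cn_dn(u + v, m)[0]
rhs = (sn_u * cn_v * dn_v + sn_v * cn_u * dn_u) / (1.0 - m * sn_u**2 * sn_v**2)
assert abs(lhs - rhs) < 1e-12
print("AGM evaluation consistent with the square relations and the sn addition theorem")
```

Because each AGM step converges quadratically, a handful of iterations already gives near machine precision on the real line, which is the rapid convergence noted above.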
Jacobi elliptic functions
[ "Mathematics" ]
6,663
[ "Special functions", "Combinatorics" ]
450,543
https://en.wikipedia.org/wiki/Femtochemistry
Femtochemistry is the area of physical chemistry that studies chemical reactions on extremely short timescales (approximately 10−15 seconds or one femtosecond, hence the name) in order to study the very act of atoms within molecules (reactants) rearranging themselves to form new molecules (products). In a 1988 issue of the journal Science, Ahmed Hassan Zewail published an article using this term for the first time, stating "Real-time femtochemistry, that is, chemistry on the femtosecond timescale...". Later in 1999, Zewail received the Nobel Prize in Chemistry for his pioneering work in this field showing that it is possible to see how atoms in a molecule move during a chemical reaction with flashes of laser light. Application of femtochemistry in biological studies has also helped to elucidate the conformational dynamics of stem-loop RNA structures. Many publications have discussed the possibility of controlling chemical reactions by this method, but this remains controversial. The steps in some reactions occur in the femtosecond timescale and sometimes in attosecond timescales, and will sometimes form intermediate products. These reaction intermediates cannot always be deduced from observing the start and end products. Pump–probe spectroscopy The simplest approach and still one of the most common techniques is known as pump–probe spectroscopy. In this method, two or more optical pulses with variable time delay between them are used to investigate the processes happening during a chemical reaction. The first pulse (pump) initiates the reaction, by breaking a bond or exciting one of the reactants. The second pulse (probe) is then used to interrogate the progress of the reaction a certain period of time after initiation. As the reaction progresses, the response of the reacting system to the probe pulse will change. By continually scanning the time delay between pump and probe pulses and observing the response, workers can reconstruct the progress of the reaction as a function of time. Examples Bromine dissociation Femtochemistry has been used to show the time-resolved electronic stages of bromine dissociation. When dissociated by a 400 nm laser pulse, electrons completely localize onto individual atoms after 140 fs, with Br atoms separated by 6.0 Å after 160 fs. See also Attosecond physics (1 attosecond = 10−18 s) Femtotechnology Ultrafast laser spectroscopy Ultrashort pulse Flash photolysis References Further reading Femtochemistry: Ultrafast Dynamics of the Chemical Bond, Ahmed H Zewail, World Scientific, 1994 External links Controlling and probing atoms and molecules with ultrafast laser pulses, PhD Thesis Physical chemistry Ultrafast spectroscopy Articles containing video clips Photochemistry
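To make the pump–probe idea concrete, here is a small, purely illustrative numerical sketch rather than a model of any specific experiment: the pump creates an excited-state population that decays exponentially, and the signal recorded as a function of pump–probe delay is that decay smeared by a Gaussian instrument response. The lifetime and response width are invented for illustration only.

```python
import numpy as np

# Toy pump-probe transient (all numbers are hypothetical).
tau_fs = 200.0       # assumed excited-state lifetime, femtoseconds
irf_fwhm_fs = 80.0   # assumed pump/probe cross-correlation width (FWHM), femtoseconds

delays = np.linspace(-300.0, 1500.0, 601)                   # pump-probe delay axis, fs
decay = np.where(delays >= 0.0, np.exp(-delays / tau_fs), 0.0)

# Gaussian instrument response, normalised to unit area.
sigma = irf_fwhm_fs / (2.0 * np.sqrt(2.0 * np.log(2.0)))
t_irf = np.arange(-5 * sigma, 5 * sigma, delays[1] - delays[0])
irf = np.exp(-0.5 * (t_irf / sigma) ** 2)
irf /= irf.sum()

# What the probe would measure while the delay is scanned.
signal = np.convolve(decay, irf, mode="same")
print(f"signal maximum near delay = {delays[signal.argmax()]:.0f} fs")
```

Scanning the delay and fitting such traces is how lifetimes of reaction intermediates on the femtosecond scale are extracted in practice.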
Femtochemistry
[ "Physics", "Chemistry" ]
571
[ "Physical chemistry", "Applied and interdisciplinary physics", "nan" ]
451,285
https://en.wikipedia.org/wiki/Undeniable%20signature
An undeniable signature is a digital signature scheme which allows the signer to be selective to whom they allow to verify signatures. The scheme adds explicit signature repudiation, preventing a signer later refusing to verify a signature by omission; a situation that would devalue the signature in the eyes of the verifier. It was invented by David Chaum and Hans van Antwerpen in 1989. Overview In this scheme, a signer possessing a private key can publish a signature of a message. However, the signature reveals nothing to a recipient/verifier of the message and signature without taking part in either of two interactive protocols: Confirmation protocol, which confirms that a candidate is a valid signature of the message issued by the signer, identified by the public key. Disavowal protocol, which confirms that a candidate is not a valid signature of the message issued by the signer. The motivation for the scheme is to allow the signer to choose to whom signatures are verified. However, that the signer might claim the signature is invalid at any later point, by refusing to take part in verification, would devalue signatures to verifiers. The disavowal protocol distinguishes these cases removing the signer's plausible deniability. It is important that the confirmation and disavowal exchanges are not transferable. They achieve this by having the property of zero-knowledge; both parties can create transcripts of both confirmation and disavowal that are indistinguishable, to a third-party, of correct exchanges. The designated verifier signature scheme improves upon deniable signatures by allowing, for each signature, the interactive portion of the scheme to be offloaded onto another party, a designated verifier, reducing the burden on the signer. Zero-knowledge protocol The following protocol was suggested by David Chaum. A group, G, is chosen in which the discrete logarithm problem is intractable, and all operation in the scheme take place in this group. Commonly, this will be the finite cyclic group of order p contained in Z/nZ, with p being a large prime number; this group is equipped with the group operation of integer multiplication modulo n. An arbitrary primitive element (or generator), g, of G is chosen; computed powers of g then combine obeying fixed axioms. Alice generates a key pair, randomly chooses a private key, x, and then derives and publishes the public key, y = gx. Message signing Alice signs the message, m, by computing and publishing the signature, z = mx. Confirmation (i.e., avowal) protocol Bob wishes to verify the signature, z, of m by Alice under the key, y. Bob picks two random numbers: a and b, and uses them to blind the message, sending to Alice: Alice picks a random number, q, uses it to blind, c, and then signing this using her private key, x, sending to Bob: Note that {{defn|s1x (cgq) (magb)gqx (m)(g) zy.}} Bob reveals a and b. Alice verifies that a and b are the correct blind values, then, if so, reveals q. Revealing these blinds makes the exchange zero knowledge. Bob verifies s1 = cgq, proving q has not been chosen dishonestly, and proving z is valid signature issued by Alice's key. Note that Alice can cheat at step 2 by attempting to randomly guess s2. Disavowal protocol Alice wishes to convince Bob that z is not a valid signature of m under the key, gx; i.e., z ≠ mx. Alice and Bob have agreed an integer, k, which sets the computational burden on Alice and the likelihood that she should succeed by chance. 
Bob picks random values, s ∈ {0, 1, ..., k} and a, and sends: where exponentiating by a is used to blind the sent values. Note that Alice, using her private key, computes v and then the quotient, Thus, vv = 1, unless z ≠ m. Alice then tests vv for equality against the values: which are calculated by repeated multiplication of mz (rather than exponentiating for each i). If the test succeeds, Alice conjectures the relevant i to be s; otherwise, she conjectures random value. Where z = m, (mz) = vxv = 1 for all i, s is unrecoverable. Alice commits to i: she picks a random r and sends hash(r, i) to Bob. Bob reveals a. Alice confirms that a is the correct blind (i.e., v and v can be generated using it), then, if so, reveals r. Revealing these blinds makes the exchange zero knowledge. Bob checks hash(r, i) = hash(r, s), proving Alice knows s, hence z ≠ m. If Alice attempts to cheat at step 3 by guessing s at random, the probability of succeeding is 1/(k + 1). So, if k = 1023'' and the protocol is conducted ten times, her chances are 1 to 2100. See also Non-repudiation Designated verifier signature Topics in cryptography References Cryptography Digital signature schemes
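The confirmation (avowal) protocol described above is straightforward to trace through in code. The sketch below follows those steps over a toy safe-prime group: the parameters are far too small to be secure, the message is mapped into the prime-order subgroup by squaring purely for illustration, and none of the encoding, hashing, or side-channel concerns of a real implementation are addressed.

```python
import secrets

# Toy safe-prime group: n = 2p + 1 with the quadratic residues forming a
# subgroup of prime order p. These sizes are illustrative only.
p = 1019
n = 2 * p + 1            # 2039, prime
g = 4                    # 2^2 mod n generates the order-p subgroup

def to_group(msg_int):
    """Map an integer into the order-p subgroup by squaring (illustrative only)."""
    return pow(msg_int, 2, n)

# --- Key generation (Alice) ---
x = secrets.randbelow(p - 1) + 1     # private key
y = pow(g, x, n)                     # public key y = g^x

# --- Signing ---
m = to_group(31337)
z = pow(m, x, n)                     # undeniable signature z = m^x

# --- Confirmation protocol ---
# 1. Bob blinds the message with random a, b.
a = secrets.randbelow(p - 1) + 1
b = secrets.randbelow(p - 1) + 1
c = (pow(m, a, n) * pow(g, b, n)) % n

# 2. Alice blinds again with q and applies her private key.
q = secrets.randbelow(p - 1) + 1
s1 = (c * pow(g, q, n)) % n
s2 = pow(s1, x, n)

# 3. Bob reveals (a, b); Alice checks c was formed honestly, then reveals q.
assert c == (pow(m, a, n) * pow(g, b, n)) % n

# 4. Bob verifies both equations; success convinces him that z = m^x.
assert s1 == (c * pow(g, q, n)) % n
assert s2 == (pow(z, a, n) * pow(y, b + q, n)) % n
print("confirmation protocol accepted the signature")
```

The final check works because s2 = (m^a g^(b+q))^x = (m^x)^a (g^x)^(b+q) = z^a y^(b+q), which is exactly the relation the verifier tests.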
Undeniable signature
[ "Mathematics", "Engineering" ]
1,118
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
451,445
https://en.wikipedia.org/wiki/Cube%20%28algebra%29
In arithmetic and algebra, the cube of a number is its third power, that is, the result of multiplying three instances of together. The cube of a number is denoted , using a superscript 3, for example . The cube operation can also be defined for any other mathematical expression, for example . The cube is also the number multiplied by its square: . The cube function is the function (often denoted ) that maps a number to its cube. It is an odd function, as . The volume of a geometric cube is the cube of its side length, giving rise to the name. The inverse operation that consists of finding a number whose cube is is called extracting the cube root of . It determines the side of the cube of a given volume. It is also raised to the one-third power. The graph of the cube function is known as the cubic parabola. Because the cube function is an odd function, this curve has a center of symmetry at the origin, but no axis of symmetry. In integers A cube number, or a perfect cube, or sometimes just a cube, is a number which is the cube of an integer. The non-negative perfect cubes up to 603 are : Geometrically speaking, a positive integer is a perfect cube if and only if one can arrange solid unit cubes into a larger, solid cube. For example, 27 small cubes can be arranged into one larger one with the appearance of a Rubik's Cube, since . The difference between the cubes of consecutive integers can be expressed as follows: . or . There is no minimum perfect cube, since the cube of a negative integer is negative. For example, . Base ten Unlike perfect squares, perfect cubes do not have a small number of possibilities for the last two digits. Except for cubes divisible by 5, where only 25, 75 and 00 can be the last two digits, any pair of digits with the last digit odd can occur as the last digits of a perfect cube. With even cubes, there is considerable restriction, for only 00, 2, 4, 6 and 8 can be the last two digits of a perfect cube (where stands for any odd digit and for any even digit). Some cube numbers are also square numbers; for example, 64 is a square number and a cube number . This happens if and only if the number is a perfect sixth power (in this case 2). The last digits of each 3rd power are: It is, however, easy to show that most numbers are not perfect cubes because all perfect cubes must have digital root 1, 8 or 9. That is their values modulo 9 may be only 0, 1, and 8. Moreover, the digital root of any number's cube can be determined by the remainder the number gives when divided by 3: If the number x is divisible by 3, its cube has digital root 9; that is, If it has a remainder of 1 when divided by 3, its cube has digital root 1; that is, If it has a remainder of 2 when divided by 3, its cube has digital root 8; that is, Sums of two cubes Sums of three cubes It is conjectured that every integer (positive or negative) not congruent to modulo can be written as a sum of three (positive or negative) cubes with infinitely many ways. For example, . Integers congruent to modulo are excluded because they cannot be written as the sum of three cubes. The smallest such integer for which such a sum is not known is 114. In September 2019, the previous smallest such integer with no known 3-cube sum, 42, was found to satisfy this equation: One solution to is given in the table below for , and not congruent to or modulo . 
The selected solution is the one that is primitive (), is not of the form or (since they are infinite families of solutions), satisfies , and has minimal values for and (tested in this order). Only primitive solutions are selected since the non-primitive ones can be trivially deduced from solutions for a smaller value of . For example, for , the solution results from the solution by multiplying everything by Therefore, this is another solution that is selected. Similarly, for , the solution is excluded, and this is the solution that is selected. Fermat's Last Theorem for cubes The equation has no non-trivial (i.e. ) solutions in integers. In fact, it has none in Eisenstein integers. Both of these statements are also true for the equation . Sum of first n cubes The sum of the first cubes is the th triangle number squared: Proofs. gives a particularly simple derivation, by expanding each cube in the sum into a set of consecutive odd numbers. He begins by giving the identity That identity is related to triangular numbers in the following way: and thus the summands forming start off just after those forming all previous values up to . Applying this property, along with another well-known identity: we obtain the following derivation: In the more recent mathematical literature, uses the rectangle-counting interpretation of these numbers to form a geometric proof of the identity (see also ); he observes that it may also be proved easily (but uninformatively) by induction, and states that provides "an interesting old Arabic proof". provides a purely visual proof, provide two additional proofs, and gives seven geometric proofs. For example, the sum of the first 5 cubes is the square of the 5th triangular number, A similar result can be given for the sum of the first odd cubes, but , must satisfy the negative Pell equation . For example, for and , then, and so on. Also, every even perfect number, except the lowest, is the sum of the first odd cubes (p = 3, 5, 7, ...): Sum of cubes of numbers in arithmetic progression There are examples of cubes of numbers in arithmetic progression whose sum is a cube: with the first one sometimes identified as the mysterious Plato's number. The formula for finding the sum of cubes of numbers in arithmetic progression with common difference and initial cube , is given by A parametric solution to is known for the special case of , or consecutive cubes, as found by Pagliani in 1829. Cubes as sums of successive odd integers In the sequence of odd integers 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, ..., the first one is a cube (); the sum of the next two is the next cube (); the sum of the next three is the next cube (); and so forth. Waring's problem for cubes Every positive integer can be written as the sum of nine (or fewer) positive cubes. This upper limit of nine cubes cannot be reduced because, for example, 23 cannot be written as the sum of fewer than nine positive cubes: 23 = 23 + 23 + 13 + 13 + 13 + 13 + 13 + 13 + 13. In rational numbers Every positive rational number is the sum of three positive rational cubes, and there are rationals that are not the sum of two rational cubes. In real numbers, other fields, and rings In real numbers, the cube function preserves the order: larger numbers have larger cubes. In other words, cubes (strictly) monotonically increase. Also, its codomain is the entire real line: the function is a surjection (takes all possible values). Only three numbers are equal to their own cubes: , , and . If or , then . If or , then . 
All aforementioned properties pertain also to any higher odd power (, , ...) of real numbers. Equalities and inequalities are also true in any ordered ring. Volumes of similar Euclidean solids are related as cubes of their linear sizes. In complex numbers, the cube of a purely imaginary number is also purely imaginary. For example, . The derivative of equals . Cubes occasionally have the surjective property in other fields, such as in for such prime that , but not necessarily: see the counterexample with rationals above. Also in only three elements 0, ±1 are perfect cubes, of seven total. −1, 0, and 1 are perfect cubes anywhere and the only elements of a field equal to their own cubes: . History Determination of the cubes of large numbers was very common in many ancient civilizations. Mesopotamian mathematicians created cuneiform tablets with tables for calculating cubes and cube roots by the Old Babylonian period (20th to 16th centuries BC). Cubic equations were known to the ancient Greek mathematician Diophantus. Hero of Alexandria devised a method for calculating cube roots in the 1st century CE. Methods for solving cubic equations and extracting cube roots appear in The Nine Chapters on the Mathematical Art, a Chinese mathematical text compiled around the 2nd century BCE and commented on by Liu Hui in the 3rd century CE. See also Cabtaxi number Cubic equation Doubling the cube Eighth power Euler's sum of powers conjecture Fifth power Fourth power Kepler's laws of planetary motion#Third law Monkey saddle Perfect power Seventh power Sixth power Square Taxicab number Notes References Sources Elementary arithmetic Figurate numbers Integer sequences Integers Number theory Unary operations
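Several of the elementary facts above are easy to spot-check numerically. The short script below verifies, over a small range, the mod-9 (digital root) behaviour of cubes, the identity that the sum of the first n cubes is the square of the nth triangular number, and that 23 indeed needs nine positive cubes; it is a sanity check, not a proof, and the helper function name is ad hoc.

```python
import math

# (1) Cube residues modulo 9: always 0, 1 or 8, determined by n mod 3.
for n in range(-50, 51):
    r = n**3 % 9
    assert r in (0, 1, 8)
    assert r == {0: 0, 1: 1, 2: 8}[n % 3]

# (2) Nicomachus's identity: 1^3 + 2^3 + ... + n^3 = (n(n+1)/2)^2.
for n in range(1, 201):
    assert sum(k**3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2

# (3) Waring's problem for cubes: 23 cannot be written with fewer than nine positive cubes.
def min_positive_cubes(target):
    cubes = [k**3 for k in range(1, int(round(target ** (1 / 3))) + 2) if k**3 <= target]
    best = [0] + [math.inf] * target
    for t in range(1, target + 1):
        best[t] = 1 + min(best[t - c] for c in cubes if c <= t)
    return best[target]

assert min_positive_cubes(23) == 9
print("mod-9 residues, Nicomachus's identity and the nine-cubes bound for 23 all check out")
```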
Cube (algebra)
[ "Mathematics" ]
1,928
[ "Sequences and series", "Figurate numbers", "Functions and mappings", "Integer sequences", "Elementary arithmetic", "Unary operations", "Mathematical structures", "Discrete mathematics", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Elementary mathematics", "Arithmeti...
6,126,703
https://en.wikipedia.org/wiki/CD11
In cell biology, CD11 is the α (alpha) component of various integrins, especially those in which the β (beta) component is CD18 (β2) and which mediate leukocyte adhesion. Examples include LFA-1 (CD11a/CD18), short for Lymphocyte Function-associated Antigen 1, also called αLβ2 integrin; Mac-1 (CD11b/CD18), present on macrophages and also called Macrophage-1 antigen (CR3) or αMβ2 integrin; and CD11c/CD18, also called complement receptor 4 (CR4) or αXβ2 integrin. References Clusters of differentiation
CD11
[ "Chemistry" ]
152
[ "Biochemistry stubs", "Protein stubs" ]
6,128,820
https://en.wikipedia.org/wiki/Backup%20battery
A backup battery provides power to a system when the primary source of power is unavailable. Backup batteries range from small single cells to retain clock time and date in computers, up to large battery room facilities that power uninterruptible power supply systems for large data centers. Small backup batteries may be primary cells; rechargeable backup batteries are kept charged by the prime power supply. Examples Aircraft emergency batteries Backup batteries in aircraft keep essential instruments and devices running in the event of an engine power failure. Each aircraft has enough power in the backup batteries to facilitate a safe landing. The batteries keeping navigation, ELUs (emergency lighting units), emergency pressure or oxygen systems running at altitude, and radio equipment operational. Larger aircraft have control surfaces that run on these backups as well. Aircraft batteries are either nickel-cadmium or valve-regulated lead acid type. The battery keeps all necessary items running for between 30 minutes and 3 hours. Large aircraft may have a ram air turbine to provide additional power during engine failures. Burglar alarms Backup batteries are almost always used in burglar alarms. The backup battery prevents the burglar from disabling the alarm by turning off power to the building. Additionally these batteries power the remote cellular phone systems that thwart phone line snipping as well. The backup battery usually has a lifespan of 3-10 years depending on the make and model, and so if the battery runs flat, there is only one main source of power to the whole system which is the mains power. Should this fail as well (for example, a power cut), it usually triggers a third backup battery located in the bellboxes on the outside of the building which simply triggers the bell or siren. This however means that the alarm cannot be stopped in any way apart from physically going outside to the bellbox and disabling the siren. It is also why if there is a power outage in the area, most burglar alarms do start ringing and cannot be realistically stopped until the main power is restored. Computers Modern personal computer motherboards have a backup battery to run the real-time clock circuit and retain configuration memory while the system is turned off. This is often called the CMOS battery or BIOS battery. The original IBM AT through to the PS/2 range, used a relatively large primary lithium battery, compared to later models, to retain the clock and configuration memory. These early machines required the backup battery to be replaced periodically due to the relatively large power consumption. Some manufacturers of clone machines used a rechargeable battery to avoid the problems that could be created by a failing battery. Modern systems use a coin style primary battery. In these later machines, the current draw is almost negligible and the primary batteries usually outlast the system that they support. It is rare to find rechargeable batteries in such systems. Backup batteries are used in uninterruptible power supplies (UPS), and provide power to the computers they supply for a variable period after a power failure, usually long enough to at least allow the computer to be shut down gracefully. These batteries are often large valve regulated lead-acid batteries in smaller or portable systems. Data center UPS backup batteries may be wet cell lead-acid or nickel cadmium batteries, with lithium ion cells available in some ratings. 
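As a rough illustration of how long a UPS battery of the kind discussed above might carry a load, here is a back-of-envelope estimate. It is deliberately crude: it ignores Peukert's law, temperature, battery ageing and inverter behaviour at low load, and every number in the example is hypothetical rather than taken from any product.

```python
def ups_runtime_minutes(capacity_ah, battery_voltage, load_watts,
                        inverter_efficiency=0.9, usable_fraction=0.8):
    """Very rough backup-runtime estimate for a UPS battery.

    Treat the result as an order-of-magnitude figure, not a specification:
    Peukert's law, temperature and ageing are all ignored.
    """
    stored_wh = capacity_ah * battery_voltage * usable_fraction
    draw_w = load_watts / inverter_efficiency
    return 60.0 * stored_wh / draw_w

# Example: a hypothetical 12 V, 9 Ah VRLA battery carrying a 150 W load.
print(f"{ups_runtime_minutes(9, 12, 150):.0f} minutes (rough estimate)")
```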
Server-grade disk array controllers often contain onboard disk buffer, and provide an option for a "backup battery unit" (BBU) to maintain the contents of this cache after power loss. If this battery is present, disk writes can be considered completed when they reach the cache, thus speeding up I/O throughput by not waiting for the hard drive. This operation mode is called "write-back caching". Telephony A local backup battery unit is necessary in some telephony and combined telephony/data applications built with use of digital passive optical networks. In such networks there are active units on telephone exchange side and on the user side, but nodes between them are all passive in the meaning of electrical power usage. So, if a building (such as an apartment house) loses power, the network continues to function. The user side must have standby power since operating power isn't transferred over data optical line. Telecommunications networks and data centers A valve-regulated lead-acid battery (VRLA) is a battery type that is popular in telecommunications network environments as a reliable backup power source. VRLA batteries are used in the outside plant at locations such as Controlled Environmental Vaults (CEVs), Electronic Equipment Enclosures (EEEs), and huts, and in uncontrolled structures such as cabinets. VRLA Battery String Certification Levels Based on Requirements for Safety and Performance, is a new industry-approved set of VRLA requirements that provides a three-level compliance system. The compliance system provides a common framework for evaluating and qualifying various valve-regulated lead-acid battery technologies. The framework intends to alleviate the complexities associated with product introduction and qualification. For a VRLA, the quality system employed by the manufacturer is an important key to the overall reliability of it. The manufacturing processes, test and inspection procedures, and quality program used by a manufacturer should be adequate to ensure that the final product meets the needs of the end user, the application, and industry-accepted standards and processes (i.e., ANSI/IEC, TL9000, and Generic Requirements for the Physical Design and Manufacture of Telecommunications Products and Equipment. Video game cartridges Cartridge-based video games sometimes contain a battery which is used to preserve the contents of a small RAM chip on which saved games and/or high scores are recorded. Hospitals Power failure in a hospital would result in life-threatening conditions for patients. Patients undergoing surgery or on life support are reliant on a consistent power supply. Backup generators or batteries supply power to critical equipment until main power can be restored. Power Stations Power failure in a power station that produces electricity would result in a blackout situation that would cause irreparable damage to equipment such as the turbine-generator. The safety of power station employees is a major concern during an unscheduled power outage at a power plant. A bank of large station backup batteries are used to power uninterruptible power supplies as well as directly power emergency oil pumps for up to 8 hours while normal power is being restored to the power station. Tesla, Inc installed the world's largest lithium ion battery pack for the government of South Australia in 2017; to help alleviate energy (electricity) blackouts in the state. Tesla met the guarantee by Elon Musk of installation in 100 days or it would be free. 
See also List of battery types Reserve battery References Battery applications Electric power
Backup battery
[ "Physics", "Engineering" ]
1,367
[ "Physical quantities", "Reliability engineering", "Backup", "Power (physics)", "Electric power", "Electrical engineering" ]
6,129,627
https://en.wikipedia.org/wiki/Womersley%20number
The Womersley number ( or ) is a dimensionless number in biofluid mechanics and biofluid dynamics. It is a dimensionless expression of the pulsatile flow frequency in relation to viscous effects. It is named after John R. Womersley (1907–1958) for his work with blood flow in arteries. The Womersley number is important in keeping dynamic similarity when scaling an experiment. An example of this is scaling up the vascular system for experimental study. The Womersley number is also important in determining the thickness of the boundary layer to see if entrance effects can be ignored. The square root of this number is also referred to as Stokes number, , due to the pioneering work done by Sir George Stokes on the Stokes second problem. Derivation The Womersley number, usually denoted , is defined by the relation where is an appropriate length scale (for example the radius of a pipe), is the angular frequency of the oscillations, and , , are the kinematic viscosity, density, and dynamic viscosity of the fluid, respectively. The Womersley number is normally written in the powerless form In the cardiovascular system, the pulsation frequency, density, and dynamic viscosity are constant, however the Characteristic length, which in the case of blood flow is the vessel diameter, changes by three orders of magnitudes (OoM) between the aorta and fine capillaries. The Womersley number thus changes due to the variations in vessel size across the vasculature system. The Womersley number of human blood flow can be estimated as follows: Below is a list of estimated Womersley numbers in different human blood vessels: It can also be written in terms of the dimensionless Reynolds number (Re) and Strouhal number (St): The Womersley number arises in the solution of the linearized Navier–Stokes equations for oscillatory flow (presumed to be laminar and incompressible) in a tube. It expresses the ratio of the transient or oscillatory inertia force to the shear force. When is small (1 or less), it means the frequency of pulsations is sufficiently low that a parabolic velocity profile has time to develop during each cycle, and the flow will be very nearly in phase with the pressure gradient, and will be given to a good approximation by Poiseuille's law, using the instantaneous pressure gradient. When is large (10 or more), it means the frequency of pulsations is sufficiently large that the velocity profile is relatively flat or plug-like, and the mean flow lags the pressure gradient by about 90 degrees. Along with the Reynolds number, the Womersley number governs dynamic similarity. The boundary layer thickness that is associated with the transient acceleration is inversely related to the Womersley number. This can be seen by recognizing the Stokes number as the square root of the Womersley number. where is a characteristic length. Biofluid mechanics In a flow distribution network that progresses from a large tube to many small tubes (e.g. a blood vessel network), the frequency, density, and dynamic viscosity are (usually) the same throughout the network, but the tube radii change. Therefore, the Womersley number is large in large vessels and small in small vessels. As the vessel diameter decreases with each division the Womersley number soon becomes quite small. The Womersley numbers tend to 1 at the level of the terminal arteries. In the arterioles, capillaries, and venules the Womersley numbers are less than one. 
In these regions the inertia force becomes less important and the flow is determined by the balance of viscous stresses and the pressure gradient. This is called microcirculation. Some typical values for the Womersley number in the cardiovascular system for a canine at a heart rate of 2 Hz are: Ascending aorta – 13.2 Descending aorta – 11.5 Abdominal aorta – 8 Femoral artery – 3.5 Carotid artery – 4.4 Arterioles – 0.04 Capillaries – 0.005 Venules – 0.035 Inferior vena cava – 8.8 Main pulmonary artery – 15 It has been argued that universal biological scaling laws (power-law relationships that describe variation of quantities such as metabolic rate, lifespan, length, etc., with body mass) are a consequence of the need for energy minimization, the fractal nature of vascular networks, and the crossover from high to low Womersley number flow as one progresses from large to small vessels. References Biomechanics Dimensionless numbers of fluid mechanics Fluid dynamics
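The defining relation is simple enough to evaluate directly. The sketch below computes α = L·sqrt(ωρ/μ) with the vessel radius as the length scale; the blood property values and radii are rough illustrative figures (not the measurements behind the canine table above), so the output should only be read as an order-of-magnitude confirmation that α is large in the aorta and far below one in the capillaries.

```python
import math

def womersley(radius_m, frequency_hz, density_kg_m3, dynamic_viscosity_pa_s):
    """Womersley number alpha = L * sqrt(omega * rho / mu), with L the vessel radius."""
    omega = 2.0 * math.pi * frequency_hz
    return radius_m * math.sqrt(omega * density_kg_m3 / dynamic_viscosity_pa_s)

# Assumed, approximate human values: blood density ~1060 kg/m^3,
# dynamic viscosity ~3.5e-3 Pa*s, heart rate 60 beats/min (1 Hz).
for name, radius in [("aorta", 1.2e-2), ("femoral artery", 2.0e-3), ("capillary", 4.0e-6)]:
    print(f"{name:15s} alpha ~ {womersley(radius, 1.0, 1060.0, 3.5e-3):.3g}")
```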
Womersley number
[ "Physics", "Chemistry", "Engineering" ]
973
[ "Biomechanics", "Chemical engineering", "Mechanics", "Piping", "Fluid dynamics" ]
6,129,873
https://en.wikipedia.org/wiki/Schouten%20tensor
In Riemannian geometry the Schouten tensor is a second-order tensor introduced by Jan Arnoldus Schouten, defined for n ≥ 3 by: P = \frac{1}{n-2}\left(\mathrm{Ric} - \frac{R}{2(n-1)}\,g\right), where Ric is the Ricci tensor (defined by contracting the first and third indices of the Riemann tensor), R is the scalar curvature, g is the Riemannian metric, \operatorname{tr} P = \frac{R}{2(n-1)} is the trace of P and n is the dimension of the manifold. The Weyl tensor equals the Riemann curvature tensor minus the Kulkarni–Nomizu product of the Schouten tensor with the metric. In an index notation this reads P_{ij} = \frac{1}{n-2}\left(R_{ij} - \frac{R}{2(n-1)}\,g_{ij}\right). The Schouten tensor often appears in conformal geometry because of its relatively simple conformal transformation law: under the conformal change \hat{g} = e^{2\varphi} g, \hat{P}_{ij} = P_{ij} - \nabla_i \nabla_j \varphi + \nabla_i \varphi\, \nabla_j \varphi - \tfrac{1}{2}\,\lvert \nabla \varphi \rvert^2\, g_{ij}, where the covariant derivatives and the norm are taken with respect to g. Further reading Arthur L. Besse, Einstein Manifolds. Springer-Verlag, 2007. See Ch.1 §J "Conformal Changes of Riemannian Metrics." Spyros Alexakis, The Decomposition of Global Conformal Invariants. Princeton University Press, 2012. Ch.2, noting in a footnote that the Schouten tensor is a "trace-adjusted Ricci tensor" and may be considered as "essentially the Ricci tensor." Wolfgang Kuhnel and Hans-Bert Rademacher, "Conformal diffeomorphisms preserving the Ricci tensor", Proc. Amer. Math. Soc. 123 (1995), no. 9, 2841–2848. Online eprint (pdf). T. Bailey, M.G. Eastwood and A.R. Gover, "Thomas's Structure Bundle for Conformal, Projective and Related Structures", Rocky Mountain Journal of Mathematics, vol. 24, Number 4, 1191–1217. See also Weyl–Schouten theorem Cotton tensor Curvature tensors Riemannian geometry Tensors in general relativity
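As a small sanity check on the definition quoted above, and on the reading of P as a trace-adjusted Ricci tensor, the snippet below evaluates P in an orthonormal frame for the unit round sphere, where Ric = (n−1)g and R = n(n−1). Working through the displayed formula gives P = g/2 in every dimension n ≥ 3; the code simply confirms that arithmetic and is not taken from the original article.

```python
import numpy as np

def schouten(ric, scal, g, n):
    """Schouten tensor P = (Ric - R/(2(n-1)) g) / (n - 2), as matrices in an orthonormal frame."""
    return (ric - scal / (2.0 * (n - 1)) * g) / (n - 2)

# Unit round sphere S^n: Ric = (n-1) g, R = n(n-1), expected P = g/2 for n >= 3.
for n in range(3, 8):
    g = np.eye(n)
    ric = (n - 1) * g
    scal = n * (n - 1)
    assert np.allclose(schouten(ric, scal, g, n), 0.5 * g)
print("P = g/2 on the unit sphere, as the definition predicts")
```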
Schouten tensor
[ "Physics", "Engineering" ]
375
[ "Tensors", "Physical quantities", "Tensor physical quantities", "Curvature tensors", "Tensors in general relativity", "Relativity stubs", "Theory of relativity" ]
17,264,004
https://en.wikipedia.org/wiki/X-ray%20laser
An X-ray laser can be created by several methods either in hot, dense plasmas or as a free-electron laser in an accelerator. This article describes the x-ray lasers in plasmas, only. The plasma x-ray lasers rely on stimulated emission to generate or amplify coherent, directional, high-brightness electromagnetic radiation in the near X-ray or extreme ultraviolet region of the spectrum, that is, usually from ~3 nanometers to several tens of nanometers (nm) wavelength. Because of high gain in the lasing medium and short upper-state lifetimes (1–100 ps), X-ray lasers usually operate without mirrors; the beam of X-rays is generated by a single pass through the gain medium. The emitted radiation, based on amplified spontaneous emission, has relatively low spatial coherence. The line is mostly Doppler broadened, which depends on the ions' temperature. As the common visible-light laser transitions between electronic or vibrational states correspond to energies up to only about 10 eV, different active media are needed for X-ray lasers. Between 1978 and 1988 in Project Excalibur the U.S. military attempted to develop a nuclear explosion-pumped X-ray laser for ballistic missile defense as part of the "Star Wars" Strategic Defense Initiative (SDI). Active media The most often used media include highly ionized plasmas, created in a capillary discharge or when a linearly focused optical pulse hits a solid target. In accordance with the Saha ionization equation, the most stable electron configurations are neon-like with 10 electrons remaining and nickel-like with 28 electrons remaining. The electron transitions in highly ionized plasmas usually correspond to energies on the order of hundreds of electron volts (eV). Common methods for creating plasma X-ray lasers include: Capillary plasma-discharge media: In this setup, a several centimeters long capillary made of resistant material (e.g., alumina) confines a high-current, submicrosecond electrical pulse in a low-pressure gas. The Lorentz force causes further compression of the plasma discharge (see pinch). In addition, a pre-ionization electric or optical pulse is often used. An example is the capillary neon-like Ar8+ laser, generating radiation at 47 nm, which was first demonstrated in 1994. Solid-slab target media: After being hit by an ultra-intense optical (laser) pulse, the metal target evaporates and emits highly excited plasma. Again, a pair of pulses is usually used in the so-called "transient pumping" scheme: (1) a longer pulse on the order of nanoseconds (sometimes preceded by one or several smaller "pre-pulses") is often used for plasma creation and (2) a second, shorter (on the order of hundreds of femtoseconds or a picosecond) and more energetic pulse is used for further excitation in the plasma volume. For short lifetimes a so-called "travelling wave" has been developed, where the plasma is heated just before the passage of the x-ray photons (so-called "guillotine principle" geometry). In order to increase the efficiency of energy transfer from the heating laser pulse into the active medium (plasma), a sheared excitation pulse is sometimes employed, so-called GRIP - grazing incidence pump geometry. The gradient in the refractive index of the plasma causes the amplified pulse to bend from the target surface, because at the frequencies above resonance the refractive index decreases with matter density. This can be compensated for by using curved targets or multiple targets in series. 
Plasma excited by an optical field: At optical field intensities high enough to cause effective electron tunnelling, or even to suppress the potential barrier entirely (> 10^16 W/cm^2), it is possible to ionize a gas to high charge states without contact with any capillary or target. A collinear setup is usually used, enabling the synchronization of pump and signal pulses. An alternative amplifying medium is the relativistic electron beam in a free-electron laser, which, strictly speaking, uses stimulated Compton scattering instead of stimulated emission. Other approaches to optically induced coherent X-ray generation are: high-harmonic generation, stimulated Thomson scattering, and betatron radiation. Applications Applications of coherent X-ray radiation include coherent diffraction imaging, research into dense plasmas (which are not transparent to visible radiation), X-ray microscopy, phase-resolved medical imaging, material surface research, and weaponry. A soft X-ray laser can perform ablative laser propulsion. See also European x-ray free electron laser Industrial CT scanning LCLS X-ray Free Electron Laser at SLAC Strategic Defense Initiative X-ray laser and Project Excalibur References Laser Laser types
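For orientation, the wavelengths quoted in this article translate directly into photon energies through E = hc/λ. The snippet below does the conversion for the ~3 nm short end of the stated range and the 47 nm neon-like argon capillary line mentioned above, using the rounded constant hc ≈ 1239.84 eV·nm; it is a convenience calculation, not part of the original text.

```python
HC_EV_NM = 1239.84  # h*c in eV*nm (rounded)

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength given in nanometres."""
    return HC_EV_NM / wavelength_nm

for lam in (3.0, 47.0):
    print(f"{lam:4.0f} nm -> {photon_energy_ev(lam):5.1f} eV")
```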
X-ray laser
[ "Physics" ]
989
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
17,265,021
https://en.wikipedia.org/wiki/Fizzle%20%28nuclear%20explosion%29
A fizzle occurs when the detonation of a device for creating a nuclear explosion (such as a nuclear weapon) grossly fails to meet its expected yield. The bombs still detonate, but the detonation is much weaker than anticipated. The cause(s) for the failure might be linked to improper design, poor construction, or lack of expertise. All countries that have had a nuclear weapons testing program have experienced some fizzles. A fizzle can spread radioactive material throughout the surrounding area, involve a partial fission reaction of the fissile material, or both. For practical purposes, a fizzle can still have considerable explosive yield when compared to conventional weapons. In multistage fission-fusion weapons, full yield of the fission primary that fails to initiate fusion ignition in the fusion secondary (or produces only a small degree of fusion) is also considered a "fizzle", as the weapon failed to reach its design yield despite the fission primary working correctly. Such fizzles can have very high yields, as in the case of Castle Koon, where the secondary stage of a device with a 1 megaton design fizzled, but its primary still generated a yield of 100 kilotons, and even the fizzled secondary still contributed another 10 kilotons, for a total yield of 110 kT. Fusion boosting If a deuterium-tritium mixture is placed at the center of the device to be compressed and heated by the fission explosion, a fission yield of 250 tons is sufficient to cause D–T fusion releasing high-energy fusion neutrons which will then fission much of the remaining fission fuel. This is known as a boosted fission weapon. If a fission device designed for boosting is tested without the boost gas, a yield in the sub-kiloton range may indicate a successful test that the device's implosion and primary fission stages are working as designed, though this does not test the boosting process itself. Nuclear fission tests considered to be fizzles Buster Able Considered to be the first known failure of any nuclear device. Upshot–Knothole Ruth Testing a uranium hydride bomb. The test failed to declassify the site (erase evidence) as it left the bottom third of the shot tower still standing. Upshot–Knothole Ray Similar test conducted the following month. Allegedly a shorter tower was chosen, to ensure that the tower would be completely destroyed. Gerboise Verte The nuclear test should have had a power of between 6 and 18 kt, but in the end the test will have a power of less than one kiloton. North Korean nuclear test in 2006 Russia claimed to have measured 5–15 kt yield, whereas the United States, France, and South Korea measured less than 1 kt yield. This North Korean debut test was weaker than all other countries' initial tests by a factor of 20, and the smallest initial test in history. Nuclear fusion tests that fizzled Castle Koon A thermonuclear device whose fusion secondary did not successfully ignite, with only low-level fusion burning taking place. Short Granite Dropped by the United Kingdom over Malden Island in the Pacific on May 15, 1957, during Operation Grapple 1, this bomb had an expected yield of over 1 megaton, but only exploded with a force of a quarter of the anticipated yield. The test was still considered successful, as thermonuclear ignition occurred and contributed substantially to the bomb's yield. Another bomb dropped during Grapple 1, Purple Granite, was hoped to give an improved yield over Short Granite, but the yield was even lower. 
Terrorist concerns One month after the September 11, 2001 attacks, a CIA informant known as "Dragonfire" reported that al-Qaeda had smuggled a low-yield nuclear weapon into New York City. Although the report was found to be false, concerns were expressed that even a "fizzle bomb" capable of yielding a fraction of the known 10-kiloton weapons could cause "horrific" consequences. A detonation in New York City would mean thousands of civilian casualties. In popular culture The nuclear weapon which detonates in Tom Clancy's The Sum of all Fears results in a fizzle, caused by tritium poisoning, which causes the secondary core to fail to ignite. See also List of nuclear tests Lists of nuclear disasters and radioactive incidents Uranium hydride bomb Dirty bomb References External links Not a bomb or a dud but a fizzle Ian Hoffman, Oakland Tribune, October 9, 2006. Nuclear Weapons, howthingswork.virginia.edu Nuclear weapons Nuclear weapons testing Nuclear accidents and incidents
Fizzle (nuclear explosion)
[ "Chemistry", "Technology" ]
943
[ "Nuclear accidents and incidents", "Environmental impact of nuclear power", "Nuclear weapons testing", "Radioactivity" ]
17,275,189
https://en.wikipedia.org/wiki/Posterior%20cortical%20atrophy
Posterior cortical atrophy (PCA), also called Benson's syndrome, is a rare form of dementia which is considered a visual variant or an atypical variant of Alzheimer's disease (AD). The disease causes atrophy of the posterior part of the cerebral cortex, resulting in the progressive disruption of complex visual processing. PCA was first described by D. Frank Benson in 1988. PCA usually affects people at an earlier age than typical cases of Alzheimer's disease, with initial symptoms often experienced in people in their mid-fifties or early sixties. This was the case with writer Terry Pratchett (1948–2015), who went public in 2007 about being diagnosed with PCA. In rare cases, PCA can be caused by dementia with Lewy bodies and Creutzfeldt–Jakob disease. Symptoms The main symptom resulting from posterior cortical atrophy is a decrease in visuospatial and visuoperceptual capabilities, since the area of atrophy involves the occipital lobe responsible for visual processing. The atrophy is progressive; early symptoms include difficulty reading, blurred vision, light sensitivity, issues with depth perception, and trouble navigating through space. Additional symptoms include apraxia, a disorder of movement planning, alexia, an impaired ability to read, and visual agnosia, an object recognition disorder. In the two-streams hypothesis, damage to the ventral, or "what" stream, of the visual system, located in the temporal lobe, leads to the symptoms related to general vision and object recognition deficits; damage to the dorsal, or "where/how" stream, located in the parietal lobe, leads to PCA symptoms related to impaired movements in response to visual stimuli, such as navigation and apraxia. As neurodegeneration spreads, more severe symptoms emerge, including the inability to recognize familiar people and objects, trouble navigating familiar places, and sometimes visual hallucinations. In addition, difficulty may be experienced in making guiding movements towards objects, and a decline in literacy skills including reading, writing, and spelling may develop. Furthermore, if neural death spreads into other anterior cortical regions, symptoms similar to Alzheimer's disease, such as memory loss, may result. In PCA where there is significant atrophy in one hemisphere of the brain hemispatial neglect may result – the inability to see stimuli on one half of the visual field. Anxiety and depression are also common symptoms. Connection to Alzheimer's disease Studies have shown that PCA may be a variant of Alzheimer's disease (AD), with an emphasis on visual deficits. Although in primarily different, but sometimes overlapping, brain regions, both involve progressive neural degeneration, as shown by the loss of neurons and synapses, and the presence of neurofibrillary tangles and senile plaques in affected brain regions; this eventually leads to dementia in both diseases. In PCA there is more cortical damage and gray matter (cell body) loss in posterior regions, especially in the occipital, parietal, and temporal lobes, whereas in Alzheimer's there is typically more damage in the prefrontal cortex and hippocampus. PCA tends to impair visuospatial working memory, while leaving episodic memory intact, whereas in AD there is typically impaired episodic memory, suggesting some differences still lie in the primary areas of cortical damage. 
Over time, however, atrophy in PCA may spread to regions that are commonly damaged in AD, leading to shared AD symptoms such as deficits in memory, language, learning, and cognition. Although PCA has an earlier onset, a diagnosis with Alzheimer's is often made, suggesting that the degeneration has simply migrated anteriorly to other cortical brain regions. There is no standard definition of PCA and no established diagnostic criteria, so it is not possible to know how many people have the condition. Some studies have found that about 5 percent of people diagnosed with Alzheimer's disease have PCA. However, because PCA often goes unrecognized, the true percentage may be as high as 15 percent. Researchers and physicians are working to establish a standard definition and diagnostic criteria for PCA. PCA may also be correlated with Lewy body disease, Creutzfeldt–Jakob disease, Bálint's syndrome, and Gerstmann syndrome. In addition, PCA may result in part from mutations in the presenilin 1 gene (PSEN1). Diagnosis The cause of PCA is unknown, and there are no fully accepted diagnostic criteria for the disease. This is partially due to the gradual onset of PCA symptoms, their variety, the rare nature of the disease, and the younger age of onset typically 50–60 years. In 2012, the first international conference on PCA was held in Vancouver, Canada. Continued research and testing will hopefully result in accepted and standardized criteria for diagnosis. PCA is often initially misdiagnosed as an anxiety disorder or depression. It has been suggested that depression or anxiety may result from the symptoms of decreased visual function, and the progressive nature of the disease. Early visual impairments have often led to a referral to an ophthalmologist, which can result in unnecessary cataract surgery. Due to the lack of biomarkers for PCA, neuropsychological examinations are advised. Neuroimaging can also assist in the diagnosis of PCA. For PCA and AD neuroimaging is carried out using MRI scans, single-photon emission computed tomography, and positron emission tomography (PET scans). Neuroimages are often compared to those of people with AD to assist diagnosis. Due to the early onset of PCA in comparison to AD, images taken at the early stages of the disease will vary from brain images in AD. At this early stage brain atrophy will be shown to be more centrally located in the right posterior lobe and occipital gyrus, while AD brain images show the majority of atrophy in the medial temporal cortex. This variation within the images will assist in early diagnosis of PCA; however, as the years go on the images will become increasingly similar, due to the majority of PCA also developing to AD later in life because of continued brain atrophy. A key aspect found through brain imaging of PCA patients is a loss of grey matter (collections of neuronal cell bodies) in the posterior and occipital temporal cortices within the right hemisphere. For some people with PCA, neuroimaging may not give a clear diagnosis; therefore, careful observation in relation to PCA symptoms can also assist in the diagnosis. The variation and lack of organized clinical testing has led to continued difficulties and delays in the diagnosis of PCA. Treatment Specific and accepted treatment for PCA has yet to be discovered; this may be due to the rarity and variations of the disease. At times people with PCA are treated with AD treatments, such as cholinesterase inhibitors: donepezil, rivastigmine, galantamine, and also memantine. 
Antidepressant drugs have also provided some positive effects. Other treatments, such as occupational therapy or help with adapting to visual changes, may help. Visual and tactile cues, adaptive equipment and digital aids to manipulate text may all be beneficial. People with PCA and their caregivers are likely to have different needs than those with more typical cases of Alzheimer's disease, and may benefit from specialized support groups or other groups for young people with dementia. No study to date has provided a definitive, generally accepted analysis of treatment options. References External links Posterior Cortical Atrophy support from the Dementia Research Centre Central nervous system disorders Aging-associated diseases Ailments of unknown cause Unsolved problems in neuroscience 1988 in science
Posterior cortical atrophy
[ "Biology" ]
1,638
[ "Senescence", "Aging-associated diseases" ]
2,527,405
https://en.wikipedia.org/wiki/Solid-phase%20extraction
Solid-phase extraction (SPE) is a solid-liquid extractive technique, by which compounds that are dissolved or suspended in a liquid mixture are separated, isolated or purified, from other compounds in this mixture, according to their physical and chemical properties. Analytical laboratories use solid phase extraction to concentrate and purify samples for analysis. Solid phase extraction can be used to isolate analytes of interest from a wide variety of matrices, including urine, blood, water, beverages, soil, and animal tissue. SPE uses the affinity of solutes, dissolved or suspended in a liquid (known as the mobile phase), to a solid packing inside a small column, through which the sample is passed (known as the stationary phase), to separate a mixture into desired and undesired components. The result is that either the desired analytes of interest or undesired impurities in the sample are retained on the stationary phase. The portion that passes through the stationary phase is collected or discarded, depending on whether it contains the desired analytes or undesired impurities. If the portion retained on the stationary phase includes the desired analytes, they can then be removed from the stationary phase for collection in an additional step, in which the stationary phase is rinsed with an appropriate eluent. It is possible to have an incomplete recovery of the analytes by SPE caused by incomplete extraction or elution. In the case of an incomplete extraction, the analytes do not have enough affinity for the stationary phase and part of them will remain in the permeate. In an incomplete elution, part of the analytes remain in the sorbent because the eluent used does not have a strong enough affinity. Many of the adsorbents/materials are the same as in chromatographic methods, but SPE is distinctive, with aims separate from chromatography, and so has a unique niche in modern chemical science. SPE and chromatography SPE is in fact a method of chromatography, in the sense of having a mobile phase, carrying mixtures through a stationary phase, packed inside a column. The chromatographic process is harnessed to create a solid-liquid extractive technique—allowing separation of a mixture of components by taking advantage of large differences between the solid and liquid phase Keq, or equilibrium constant, for each component in the mixture. The chemical considerations for the selection of stationary and mobile phases are similar to those for liquid column chromatography and many of the adsorbents/materials used are the same. The theory, procedures, and aims are different, however, and as an extractive technique it has a unique niche in modern chemical science. Normal phase SPE procedure A typical solid phase extraction involves five basic steps. First, the cartridge is equilibrated with a non-polar or slightly polar solvent, which wets the surface and penetrates the bonded phase. Then water, or buffer of the same composition as the sample, is typically washed through the column to wet the silica surface. The sample is then added to the cartridge. As the sample passes through the stationary phase, the polar analytes in the sample will interact and retain on the polar sorbent while the solvent, and other non-polar impurities pass through the cartridge. After the sample is loaded, the cartridge is washed with a non-polar solvent to remove further impurities. Then, the analyte is eluted with a polar solvent or a buffer of the appropriate pH. 
A stationary phase of polar functionally bonded silicas with short carbon chains frequently makes up the solid phase. This stationary phase will adsorb polar molecules which can be collected with a more polar solvent. Reversed phase SPE Reversed phase SPE separates analytes based on their polarity. The stationary phase of a reversed phase SPE cartridge is derivatized with hydrocarbon chains, which retain compounds of mid to low polarity due to the hydrophobic effect. The analyte can be eluted by washing the cartridge with a non-polar solvent, which disrupts the interaction of the analyte and the stationary phase. A stationary phase of silica with bonded carbon chains is commonly used. Relying on mainly non-polar, hydrophobic interactions, only non-polar or very weakly polar compounds will adsorb to the surface. Ion exchange SPE Ion exchange sorbents separate analytes based on electrostatic interactions between the analyte of interest and the positively or negatively charged groups on the stationary phase. For ion exchange to occur, both the stationary phase and sample must be at a pH where both are charged. Anion exchange Anion exchange sorbents are derivatized with positively charged functional groups that interact and retain negatively charged anions, such as acids. Strong anion exchange sorbents contain quaternary ammonium groups that have a permanent positive charge in aqueous solutions, and weak anion exchange sorbents use amine groups which are charged when the pH is below about 9. Strong anion exchange sorbents are useful because any strongly acidic impurities in the sample will bind to the sorbent and usually will not be eluted with the analyte of interest; to recover a strong acid a weak anion exchange cartridge should be used. To elute the analyte from either the strong or weak sorbent, the stationary phase is washed with a solvent that neutralizes the charge of either the analyte, the stationary phase, or both. Once the charge is neutralized, the electrostatic interaction between the analyte and the stationary phase no longer exists and the analyte will elute from the cartridge. Cation exchange Cation exchange sorbents are derivatized with functional groups that interact and retain positively charged cations, such as bases. Strong cation exchange sorbents contain aliphatic sulfonic acid groups that are always negatively charged in aqueous solution, and weak cation exchange sorbents contain aliphatic carboxylic acids, which are charged when the pH is above about 5. Strong cation exchange sorbents are useful because any strongly basic impurities in the sample will bind to the sorbent and usually will not be eluted with the analyte of interest; to recover a strong base a weak cation exchange cartridge should be used. To elute the analyte from either the strong or weak sorbent, the stationary phase is washed with a solvent that neutralizes ionic interaction between the analyte and the stationary phase. Cartridges The stationary phase comes in the form of a packed syringe-shaped cartridge, a 96 well plate, a 47- or 90-mm flat disk, or a microextraction by packed sorbent (MEPS) device, a SPE method that uses a packed sorbent material in a liquid handling syringe. These can be mounted on its specific type of extraction manifold. The manifold allows multiple samples to be processed by holding several SPE media in place and allowing for an equal number of samples to pass through them simultaneously. 
In a standard cartridge SPE manifold up to 24 cartridges can be mounted in parallel, while a typical disk SPE manifold can accommodate 6 disks. Most SPE manifolds are equipped with a vacuum port, where vacuum can be applied to speed up the extraction process by pulling the liquid sample through the stationary phase. The analytes are collected in sample tubes inside or below the manifold after they pass through the stationary phase. Solid phase extraction cartridges and disks can be purchased with several stationary phases, each of which separates analytes depending on different chemical properties. The basis of most stationary phases is silica that has been bonded to a specific functional group. Some of these functional groups include hydrophobic alkyl or aryl chains of variable length (for reversed phase), quaternary ammonium or amino groups (for anion exchange), and aliphatic sulfonic acid or carboxyl groups (for cation exchange). Solid-phase microextraction Solid-phase microextraction (SPME) is a solid phase extraction technique that involves the use of a fiber coated with an extracting phase, which can be a liquid (polymer) or a solid (sorbent), and which extracts different kinds of analytes (including both volatile and non-volatile) from different kinds of media, which can be in liquid or gas phase. The quantity of analyte extracted by the fibre is proportional to its concentration in the sample as long as equilibrium is reached or, in the case of short-time pre-equilibrium sampling, with the help of convection or agitation. References Further reading E. M. Thurman, M. S. Mills, Solid-Phase Extraction: Principles and Practice, Wiley-Interscience, 1998, Nigel J.K. Simpson, Solid-Phase Extraction: Principles, Techniques, and Applications, CRC, 2000, James S. Fritz, Analytical Solid-Phase Extraction, Wiley-VCH, 1999, Analytical chemistry Extraction (chemistry)
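The article notes above that SPE is used both to concentrate analytes and that recovery can be incomplete when either the extraction or the elution step is imperfect. As a rough illustration of how those two ideas combine, the sketch below (Python, with purely illustrative volumes and efficiencies that are not taken from the article) estimates the overall recovery and the enrichment factor obtained when a large sample volume is eluted into a small eluent volume.

```python
def spe_enrichment(sample_volume_ml, eluate_volume_ml,
                   extraction_efficiency, elution_efficiency):
    """Rough SPE mass-balance sketch.

    extraction_efficiency: fraction of analyte retained on the sorbent
    elution_efficiency:    fraction of retained analyte recovered by the eluent
    Returns (overall_recovery, enrichment_factor).
    """
    overall_recovery = extraction_efficiency * elution_efficiency
    # Enrichment factor: ratio of analyte concentration in the eluate
    # to its concentration in the original sample.
    enrichment_factor = overall_recovery * sample_volume_ml / eluate_volume_ml
    return overall_recovery, enrichment_factor

# Illustrative numbers only: 500 mL of water eluted into 2 mL of solvent,
# with 95 % retention on the sorbent and 90 % elution.
recovery, ef = spe_enrichment(500, 2, 0.95, 0.90)
print(f"overall recovery ≈ {recovery:.0%}, enrichment factor ≈ {ef:.0f}×")
```

With these assumed values the analyte ends up roughly 200 times more concentrated in the eluate than in the original sample, which is why SPE is routinely used as a pre-concentration step before analysis.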
Solid-phase extraction
[ "Chemistry" ]
1,852
[ "Extraction (chemistry)", "nan", "Separation processes" ]
2,528,589
https://en.wikipedia.org/wiki/Volatility%20%28chemistry%29
In chemistry, volatility is a material quality which describes how readily a substance vaporizes. At a given temperature and pressure, a substance with high volatility is more likely to exist as a vapour, while a substance with low volatility is more likely to be a liquid or solid. Volatility can also describe the tendency of a vapor to condense into a liquid or solid; less volatile substances will more readily condense from a vapor than highly volatile ones. Differences in volatility can be observed by comparing how fast substances within a group evaporate (or sublimate in the case of solids) when exposed to the atmosphere. A highly volatile substance such as rubbing alcohol (isopropyl alcohol) will quickly evaporate, while a substance with low volatility such as vegetable oil will remain condensed. In general, solids are much less volatile than liquids, but there are some exceptions. Solids that sublimate (change directly from solid to vapor) such as dry ice (solid carbon dioxide) or iodine can vaporize at a similar rate as some liquids under standard conditions. Description Volatility itself has no defined numerical value, but it is often described using vapor pressures or boiling points (for liquids). High vapor pressures indicate a high volatility, while high boiling points indicate low volatility. Vapor pressures and boiling points are often presented in tables and charts that can be used to compare chemicals of interest. Volatility data is typically found through experimentation over a range of temperatures and pressures. Vapor pressure Vapor pressure is a measurement of how readily a condensed phase forms a vapor at a given temperature. A substance enclosed in a sealed vessel initially at vacuum (no air inside) will quickly fill any empty space with vapor. After the system reaches equilibrium and the rate of evaporation matches the rate of condensation, the vapor pressure can be measured. Increasing the temperature increases the amount of vapor that is formed and thus the vapor pressure. In a mixture, each substance contributes to the overall vapor pressure of the mixture, with more volatile compounds making a larger contribution. Boiling point Boiling point is the temperature at which the vapor pressure of a liquid is equal to the surrounding pressure, causing the liquid to rapidly evaporate, or boil. It is closely related to vapor pressure, but is dependent on pressure. The normal boiling point is the boiling point at atmospheric pressure, but it can also be reported at higher and lower pressures. Contributing factors Intermolecular forces An important factor influencing a substance's volatility is the strength of the interactions between its molecules. Attractive forces between molecules are what holds materials together, and materials with stronger intermolecular forces, such as most solids, are typically not very volatile. Ethanol and dimethyl ether, two chemicals with the same formula (C2H6O), have different volatilities due to the different interactions that occur between their molecules in the liquid phase: ethanol molecules are capable of hydrogen bonding while dimethyl ether molecules are not. The result is an overall stronger attractive force between the ethanol molecules, making it the less volatile substance of the two. Molecular weight In general, volatility tends to decrease with increasing molecular mass because larger molecules can participate in more intermolecular bonding, although other factors such as structure and polarity play a significant role. 
The effect of molecular mass can be partially isolated by comparing chemicals of similar structure (e.g. esters, alkanes, etc.). For instance, linear alkanes exhibit decreasing volatility as the number of carbons in the chain increases. Applications Distillation Knowledge of volatility is often useful in the separation of components from a mixture. When a mixture of condensed substances contains multiple substances with different levels of volatility, its temperature and pressure can be manipulated such that the more volatile components change to a vapor while the less volatile substances remain in the liquid or solid phase. The newly formed vapor can then be discarded or condensed into a separate container. When the vapors are collected, this process is known as distillation. The process of petroleum refinement utilizes a technique known as fractional distillation, which allows several chemicals of varying volatility to be separated in a single step. Crude oil entering a refinery is composed of many useful chemicals that need to be separated. The crude oil flows into a distillation tower and is heated up, which allows the more volatile components such as butane and kerosene to vaporize. These vapors move up the tower and eventually come in contact with cold surfaces, which causes them to condense and be collected. The most volatile chemicals condense at the top of the column, while the least volatile of the chemicals to vaporize condense in the lowest portion. The difference in volatility between water and ethanol has long been used to produce concentrated alcoholic beverages (many of these are referred to as "liquors"). In order to increase the concentration of ethanol in the product, beverage makers would heat the initial alcohol mixture to a temperature where most of the ethanol vaporizes while most of the water remains liquid. The ethanol vapor is then collected and condensed in a separate container, resulting in a much more concentrated product. Perfume Volatility is an important consideration when crafting perfumes. Humans detect odors when aromatic vapors come in contact with receptors in the nose. Ingredients that vaporize quickly after being applied will produce fragrant vapors for a short time before the oils evaporate. Slow-evaporating ingredients can stay on the skin for weeks or even months, but may not produce enough vapors to produce a strong aroma. To prevent these problems, perfume designers carefully consider the volatility of essential oils and other ingredients in their perfumes. Appropriate evaporation rates are achieved by modifying the amount of highly volatile and non-volatile ingredients used. See also References External links Volatility from ilpi.com Physical chemistry Chemical properties Thermodynamic properties Engineering thermodynamics Phase transitions Gases
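As a numerical illustration of the statement above that each component of a mixture contributes to the total vapor pressure in proportion to its volatility, the sketch below applies Raoult's law to an idealized ethanol–water mixture. The pure-component vapor pressures are approximate literature values near 25 °C, and the ideal-solution assumption is a simplification (real ethanol–water mixtures deviate positively from Raoult's law); the numbers are for illustration only.

```python
# Ideal-solution (Raoult's law) estimate of a mixture's vapor pressure.
# Pure-component vapor pressures near 25 °C are approximate literature values.
pure_vapor_pressure_kpa = {"ethanol": 7.9, "water": 3.2}

def mixture_vapor_pressure(mole_fractions):
    """Total vapor pressure of an ideal mixture: P = sum(x_i * P_i_pure)."""
    return sum(x * pure_vapor_pressure_kpa[name]
               for name, x in mole_fractions.items())

# Equimolar ethanol-water: the more volatile ethanol dominates the vapor.
mix = {"ethanol": 0.5, "water": 0.5}
total = mixture_vapor_pressure(mix)
ethanol_share = 0.5 * pure_vapor_pressure_kpa["ethanol"] / total
print(f"total ≈ {total:.1f} kPa, ethanol fraction of the vapor ≈ {ethanol_share:.0%}")
```

Under these assumptions the vapor above an equimolar liquid is roughly 70 % ethanol, which is the principle exploited in the distillation of alcoholic beverages described above.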
Volatility (chemistry)
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
1,227
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Thermodynamic properties", "Physical quantities", "Gases", "Engineering thermodynamics", "Quantity", "Phases of matter", "Critical phenomena", "Thermodynamics", "nan", "Mechanical engineering", "Statistica...
2,528,751
https://en.wikipedia.org/wiki/Papercrete
Papercrete is a building material that consists of re-pulped paper fiber combined with Portland cement or clay, as well as other soils. First patented in 1928, it was revived during the 1980s by Eric Patterson and Mike McCain, who independently rediscovered it and gave it the names "padobe" and "fibrous cement". It is generally perceived as an environmentally friendly material due to the significant recycled content, although this is offset by the presence of cement, which emits CO2 during manufacture. The material also lacks standardisation, and proper use therefore requires care and experience. However, both have contributed considerably to research into developing the necessary machinery to make it, as well as methods of using it for construction. Manufacture The paper used in papercrete can come from a variety of sources, including newspaper, junk mail, magazines, books. A mixer is used to pulp the mix before this is combined with cement or clay. Depending on the type of mixer, the paper may need to be soaked in water beforehand. Properties The name papercrete arises from the fact that most formulas use a mixture of water and cement with cellulose fiber. The mixture has the appearance and texture of oatmeal and is poured into forms and dried in the sun, much like the process for making adobe. Dried papercrete has very low strength, but fails by slow compression (due to the large air content and hence compressibility) rather than in a brittle manner. Concrete and wood (though dry soft wood can be as high as R-2 per inch, high moisture content reduces this value markedly) are not known for their insulating qualities; however, papercrete provides good insulation. Papercrete's R-value is reported to be between 2.0 and 3.0 per inch (2.54 cm); papercrete walls are typically thick. Unlike concrete or adobe, papercrete blocks are lightweight, less than a third of the weight of a comparably-sized adobe brick. Papercrete is generally mold resistant and has utility as a sound-proofing material; however, mold can develop if the material remains warm and moist for too long. Dried, ready-to-use papercrete has a rough surface. This increases its surface area and provides a very strong bond from one block to the next. Additionally, papercrete's ready moldability provides considerable flexibility in the designed shape of structures, for example domed ceilings/roofs can be easily constructed using this material. When properly mixed and dried, the papercrete wall can be left exposed to the elements. In its natural state, it is a grey, fibrous-looking wall. For a more conventional look, stucco can be applied directly to it. Papercrete construction has the advantage of being relatively low-cost, as materials are cheap and widely available. Furthermore, machinery suitable for small-scale construction is simple to design and construct. Standardization and commercial acceptance As of 2007, papercrete lacks approval from the International Code Council. This limits its use within cities in the United States where building codes apply; for example, in such areas it cannot be used as a load-bearing wall. However, its strength in model structures has been proven, meaning that some homes and small commercial buildings have been constructed using it. In these small building projects, papercrete is being used as an in-fill wall in conjunction with structural steel beams or other load-bearing elements. However, there is little or no evidence of its long-term durability. 
Research tests into papercrete were carried out by Barry Fuller in Arizona and Zach Rabon in Texas. Fuller directs government-funded research on papercrete through the Arizona State University Ira A. Fulton School of Engineering. He is also head of a subcommittee for the American Society for Testing and Materials, and it is his goal to set standards that will lead to acceptance of the product within the architectural community and its commercialization, especially for affordable housing. Structural tests have been completed on several papercrete formulas and Fuller claims the compressive strength of papercrete to be in the range, while others like Kelly Hart claim . For comparison, the compressive strength of concrete ranges from depending on the application. A more useful measure of papercrete's properties is its stiffness, that is, the extent to which it compresses under load. Its stiffness is many times less than that of concrete, but sufficient for the support of roof loads in some low-height buildings. Papercrete was also tested for its tensile strength. Fuller noted that a papercrete block was the equivalent of hundreds of pages of paper - almost like a catalog. Papercrete has very good shear strength as a block. Lateral load involves sideways force - the wind load on the entire area of an outside wall for example. Because papercrete walls are usually a minimum of thick, and usually pinned with rebar, they may be strong laterally. Zach Rabon founded Mason Greenstar in Mason, Texas for the purpose of producing and selling a commercially viable papercrete block. His product, Blox Building System, is the only mass-produced commercial papercrete block in the market. He has built several residential structures with it. The Mason Greenstar block had its genesis in a journey Rabon's father, Kent Rabon, made to Marathon, Texas. The elder Rabon made the acquaintance of Clyde T. Curry, the proprietor of Eve's Garden Organic Bed & Breakfast and Ecology Resource Center. Curry was an early proponent of papercrete and benefited from the lack of building regulations in the small mountain community of Marathon. Curry built four of the rooms at the bed and breakfast either partially or entirely out of papercrete and is in the process of building two more, in addition to a library and reception area, entirely out of papercrete. Along with Fuller's work at Arizona State University, Curry's establishment has become a resource center for people interested in papercrete, and workshops are intermittently held there. The Rabons had prior experience as homebuilders and are owners of a ready-mix cement plant in Mason. They invested in research and testing on their product for several years. However, they consider their product a proprietary formula. They filed for a separate patent even though a patent for papercrete had already been filed in 1928. The block developed by Mason Greenstar is known for its uniform shrinkage (all papercrete blocks go through a lengthy dry-time that involves some shrinkage), giving it a sharper edge. Fuller has remarked that in tests he has performed on the Rabon block, it has outperformed other formulas. A study model home made of papercrete has been built at the Lyle Center for Regenerative Studies. This study model is a sample of homes to be built for a sustainable community in Tijuana by students of California State Polytechnic University, Pomona. Similar materials Different earth-paper mixes are promoted under different names. 
A mix that uses clay as a binder instead of Portland cement is often referred to as "Hybrid Adobe", "Fidobe", or "Padobe". See also Asbestos cement Papier mache Pykrete Fiber cement Concrete cloth Hypertufa Hempcrete References External links Papercrete Guide Wiki Concrete Fibre-reinforced cementitious materials Engineered wood Recycled building materials
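To put the per-inch insulation figure quoted above in context, a rough worked example: taking an R-value of about 2.5 per inch (the middle of the reported 2.0–3.0 range) and assuming, purely for illustration, a 12-inch-thick wall,

\[
R_{\text{wall}} \approx 2.5\ \tfrac{\text{R-value}}{\text{inch}} \times 12\ \text{inches} = 30
\]

in the US customary units of ft²·°F·h/BTU. The wall thickness here is a hypothetical example value, not a figure taken from the article.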
Papercrete
[ "Engineering" ]
1,513
[ "Structural engineering", "Concrete" ]
2,529,724
https://en.wikipedia.org/wiki/Grinder%20pump
A grinder pump (also called a macerator pump) is a wastewater conveyance device. Waste from water-using household appliances (toilets, bathtubs, washing machines, etc.) flows through the home’s pipes into the grinder pump’s holding tank. Once the wastewater inside the tank reaches a specific level, the pump will turn on, grind the waste into a fine slurry, and pump it to the central sewer system or septic tank. Grinder pumps can be installed in the basement or in the yard. If installed in the yard, the holding tank must be buried deep enough that the pump and sewage pipes are below the frost line. A grinder pump is different from a sump pump or effluent pump. There are two types of grinder pumps, semi-positive displacement (SPD) and centrifugal. Components The grinder pump “station” consists of the pump, a tank, and an alarm panel. A pump for household use is usually 1 hp, 1.5 hp or 2 hp. A cutting mechanism macerates waste and grinds items that are not normally found in sewage, but may get flushed down the toilet. The pump has a level sensor either built into the pump, called “sensing bells,” or attached externally to the pump, typically a float switch. (The level sensing devices vary among grinder pump manufacturers.) If the pump malfunctions and the waste level in the holding tank rises above a certain level, the alarm panel should alert the homeowner that the pump is experiencing problems. The alarm panel should have both a buzzer and an indicator light. The holding tank, likely constructed of fiberglass, high-density polyethylene (HDPE) or fiberglass-reinforced polyester (FRP), has an inlet opening and a discharge opening. The pipes from the home are connected to the inlet; the pipe that leads to the sewer main is connected to the discharge. Often, more than one home or restroom (in a park, for example) can be connected to one grinder pump station. In this case, more than one inlet can be installed. It is a good idea to consult the manufacturer or factory representative before purchasing a grinder pump station to ensure that more than one inlet hole can be drilled. The tank has a lid made from heavy-duty plastic or metal that is bolted and/or padlocked shut to prevent entry by unauthorized persons. Maintenance Grinder pumps should not require preventive maintenance. However, grinder pumps that use floats to sense the level in the holding tank are prone to grease buildup that may turn the pump on unnecessarily, or not turn on the pump at all, causing the tank to fill up and sewage to possibly back up into the home or yard. To prevent this, grinder pumps that use float switches to sense the level in the tank are often hosed down to remove the grease from the floats. Homeowners are never free to dump everything down their drains, even if their home has a grinder pump. Feminine hygiene products, diapers, kitty litter, paint, oil (both motor oil and cooking oils), etc. should not be flushed or poured down any drain, whether the home is connected to a gravity sewer system, septic tank, grinder pump, or cesspool. Disposable wipes that are made by cleaning companies for personal use, cleaning toilets, etc. are causing problems in communities around the United States. Not only do people clog their household plumbing, they are causing problems with household grinder pumps, lift stations, and sewage treatment plants. Some wipe companies say "flush one at a time," some say "not for pump systems," some say "safe for sewers". 
As recommended by Consumer Reports, wipes should be put into a garbage can instead of the toilet. The National Association of Clean Water Agencies has compiled a list of articles and municipal documents regarding wipes. In large sewage pump stations, clogging problems are often avoided by installing a chopper pump in the tank. A chopper pump is able to handle larger/tougher solids than a residential grinder pump, and may be able to process hair balls, diapers, sanitary napkins, clothing, etc. See also Chopper pump Sewage pumping Sewer system References Additional Links Submersible Wastewater Pump Association Only flush the 3 Ps: pee, poop, and toilet paper! Waste treatment technology
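The level-sensing behaviour described in the Components section above — the pump switching on when the holding tank reaches a set level, and an alarm tripping if the level keeps rising — is essentially a simple hysteresis control loop. The sketch below is a minimal, hypothetical illustration of that logic in Python; the threshold values and function names are invented for the example and are not taken from any manufacturer's documentation.

```python
# Minimal sketch of grinder-pump level control with a high-level alarm.
# All thresholds are illustrative; real stations use manufacturer-set values.
PUMP_ON_LEVEL = 0.60    # fraction of tank depth that starts the pump
PUMP_OFF_LEVEL = 0.20   # level at which the pump stops
ALARM_LEVEL = 0.85      # high-level alarm (pump failed or cannot keep up)

def control_step(level, pump_running):
    """Return (pump_running, alarm) for one polling cycle of the level sensor."""
    alarm = level >= ALARM_LEVEL
    if not pump_running and level >= PUMP_ON_LEVEL:
        pump_running = True          # start grinding and pumping
    elif pump_running and level <= PUMP_OFF_LEVEL:
        pump_running = False         # tank drawn down, stop the pump
    return pump_running, alarm

# Simulated inflow: the level rises, the pump kicks in, the level falls again.
level, running = 0.0, False
for inflow in [0.3, 0.3, 0.2, 0.0, 0.0]:
    level = min(1.0, level + inflow)
    running, alarm = control_step(level, running)
    if running:
        level = max(0.0, level - 0.5)   # crude model of pump-down per cycle
    print(f"level={level:.2f} pump={'on' if running else 'off'} alarm={alarm}")
```

The gap between the on and off thresholds (hysteresis) is what prevents the pump from rapidly cycling around a single set point.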
Grinder pump
[ "Chemistry", "Engineering" ]
912
[ "Water treatment", "Waste treatment technology", "Environmental engineering" ]
2,529,895
https://en.wikipedia.org/wiki/Forward%20scattering%20alignment
The Forward Scattering Alignment (FSA) is a coordinate system used in coherent electromagnetic scattering. The coordinate system is defined from the viewpoint of the electromagnetic wave, before and after scattering. The FSA is most commonly used in optics, specifically when working with Jones Calculus because the electromagnetic wave is typically followed through a series of optical components that represent separate scattering events. FSA gives rise to regular eigenvalue equations. The general alternative coordinate system in electromagnetic scattering is the Back Scattering Alignment (BSA) which is primarily used in radar. Both coordinate systems contain essentially the same information and meaning, and thus a scattering matrix can be transformed from one to the other by use of the matrix, See also Forward scatter Electromagnetic radiation Scattering, absorption and radiative transfer (optics)
Forward scattering alignment
[ "Physics", "Chemistry", "Materials_science" ]
157
[ "Physical phenomena", " absorption and radiative transfer (optics)", "Materials science stubs", "Electromagnetic radiation", "Scattering stubs", "Scattering", "Radiation", "Electromagnetism stubs" ]
2,529,967
https://en.wikipedia.org/wiki/Back%20scattering%20alignment
The Back Scattering Alignment (BSA) is a coordinate system used in coherent electromagnetic scattering. The coordinate system is defined from the viewpoint of the wave source, before and after scattering. The BSA is most commonly used in radar, specifically when working with a Sinclair Matrix because the monostatic radar detector and source are physically coaligned. BSA gives rise to conjugate eigenvalue equations. The alternative coordinate system in electromagnetic scattering is the Forward Scattering Alignment (FSA) which is primarily used in optics. Both coordinate systems contain essentially the same information and meaning, and thus a scattering matrix can be transformed from one to the other by use of the matrix, References See also Backscatter Electromagnetic radiation Electromagnetic radiation
Back scattering alignment
[ "Physics", "Materials_science" ]
146
[ "Physical phenomena", "Materials science stubs", "Electromagnetic radiation", "Radiation", "Electromagnetism stubs" ]
2,531,152
https://en.wikipedia.org/wiki/Diazonium%20compound
Diazonium compounds or diazonium salts are a group of organic compounds sharing a common functional group where R can be any organic group, such as an alkyl or an aryl, and X is an inorganic or organic anion, such as a halide. The parent compound, where R is hydrogen, is diazenylium. Structure and general properties Arenediazonium cations and related species According to X-ray crystallography the linkage is linear in typical diazonium salts. The bond distance in benzenediazonium tetrafluoroborate is 1.083(3) Å, which is almost identical to that of the dinitrogen molecule (N≡N). The linear free energy constants σm and σp indicate that the diazonium group is strongly electron-withdrawing. Thus, the diazonio-substituted phenols and benzoic acids have greatly reduced pKa values compared to their unsubstituted counterparts. The pKa of the phenolic proton of 4-hydroxybenzenediazonium is 3.4, versus 9.9 for phenol itself. In other words, the diazonium group raises the ionization constant Ka (enhances the acidity) by a million-fold. This also causes arenediazonium salts to have decreased reactivity when electron-donating groups are present on the aromatic ring. The stability of arenediazonium salts is highly sensitive to the counterion. Phenyldiazonium chloride is dangerously explosive, but benzenediazonium tetrafluoroborate is easily handled on the bench. Alkanediazonium cations and related species Alkanediazonium salts are synthetically unimportant due to their extreme and uncontrolled reactivity toward SN2/SN1/E1 substitution. These cations are however of theoretical interest. Furthermore, methyldiazonium carboxylate is believed to be an intermediate in the methylation of carboxylic acids by diazomethane, a common transformation. Loss of N2 is both enthalpically and entropically favorable: , ΔH = −43 kcal/mol , ΔH = −11 kcal/mol For secondary and tertiary alkanediazonium species, the enthalpic change is calculated to be close to zero or negative, with minimal activation barrier. Hence, secondary and (especially) tertiary alkanediazonium species are either unbound, nonexistent species or, at best, extremely fleeting intermediates. The aqueous pKa of methyldiazonium () is estimated to be <10. Preparation The process of forming diazonium compounds is called "diazotation", "diazoniation", or "diazotization". The reaction was first reported by Peter Griess in 1858, who subsequently discovered several reactions of this new class of compounds. Most commonly, diazonium salts are prepared by treatment of aromatic amines with nitrous acid and additional acid. Usually the nitrous acid is generated in situ (in the same flask) from sodium nitrite and the excess mineral acid (usually aqueous HCl, , p-, or ): Chloride salts of the diazonium cation, traditionally prepared from the aniline, sodium nitrite, and hydrochloric acid, are unstable at room temperature and are classically prepared at 0 – 5 °C. However, one can isolate diazonium compounds as tetrafluoroborate or tosylate salts, which are stable solids at room temperature. It is often preferred that diazonium salts remain in solution, but they do tend to supersaturate. Operators have been injured or even killed by an unexpected crystallization of the salt followed by its detonation. Due to these hazards, diazonium compounds are often not isolated; instead, they are used in situ. 
This approach is illustrated in the preparation of an arenesulfonyl compound: Reactions Diazo coupling reactions The first and still main use of diazonium salts is azo coupling, which is exploited in the production of azo dyes. In some cases, water-fast dyeing is achieved simply by immersing the fabric in an aqueous solution of the diazonium compound, followed by immersion in a solution of the coupler (the electron-rich ring that undergoes electrophilic substitution). In this process, the diazonium compound is attacked by, i.e., coupled to, electron-rich substrates. When the coupling partners are arenes such as anilines and phenols, the process is an example of electrophilic aromatic substitution: The deep colors of the dyes reflect their extended conjugation. A popular azo dye is aniline yellow, produced from aniline. Naphthalen-2-ol (beta-naphthol) gives an intensely orange-red dye. Methyl orange is an example of an azo dye that is used in the laboratory as a pH indicator. Another commercially important class of coupling partners is the acetoacetic amides, as illustrated by the preparation of Pigment Yellow 12, a diarylide pigment. Displacement of the group Arenediazonium cations undergo several reactions in which the group is replaced by another group or ion. Sandmeyer reaction Benzenediazonium chloride heated with cuprous chloride dissolved in HCl, or with cuprous bromide dissolved in HBr, yields chlorobenzene or bromobenzene, respectively. In the Gattermann reaction (there are other "Gattermann reactions"), benzenediazonium chloride is warmed with copper powder and HCl or HBr to produce chlorobenzene and bromobenzene respectively. Replacement by iodide Arenediazonium cations react with potassium iodide to give the aryl iodide: Replacement by fluoride Fluorobenzene is produced by thermal decomposition of benzenediazonium tetrafluoroborate. The conversion is called the Balz–Schiemann reaction. The traditional Balz–Schiemann reaction has been the subject of many modifications, e.g. using hexafluorophosphate(V) () and hexafluoroantimonate(V) () in place of tetrafluoroborate (). The diazotization can be effected with nitrosonium salts such as nitrosonium hexafluoroantimonate(V) . Biaryl coupling A pair of diazonium cations can be coupled to give biaryls. This conversion is illustrated by the coupling of the diazonium salt derived from anthranilic acid to give diphenic acid (). In a related reaction, the same diazonium salt undergoes loss of and to give benzyne. Replacement by hydrogen Arenediazonium cations reduced by hypophosphorous acid, ethanol, sodium stannite or alkaline sodium thiosulphate give benzene: An alternative way to replace the diazo group with H, suggested by Baeyer & Pfitzinger, is first to convert it into the hydrazine by treating with , then to oxidize it into the hydrocarbon by boiling with cupric sulphate solution. Replacement by a hydroxyl group Phenols are produced by heating aqueous solutions of arenediazonium salts: This reaction goes by the German name Phenolverkochung ("cooking down to yield phenols"). The phenol formed may react with the diazonium salt and hence the reaction is carried out in the presence of an acid which suppresses this further reaction. A Sandmeyer-type hydroxylation is also possible using and in water. Replacement by a nitro group Nitrobenzene can be obtained by treating benzenediazonium fluoroborate with sodium nitrite in the presence of copper. 
Alternatively, the diazotisation of the aniline can be conducted in the presence of cuprous oxide, which generates cuprous nitrite in situ: Replacement by a cyano group The cyano group usually cannot be introduced by nucleophilic substitution of haloarenes, but such compounds can be easily prepared from diazonium salts. Illustrative is the preparation of benzonitrile using the reagent cuprous cyanide: This reaction is a special type of Sandmeyer reaction. Replacement by a trifluoromethyl group Two research groups reported trifluoromethylations of diazonium salts in 2013. Goossen reported the preparation of a complex from CuSCN, , and . In contrast, Fu reported the trifluoromethylation using Umemoto's reagent (S-trifluoromethyldibenzothiophenium tetrafluoroborate) and Cu powder (Gattermann-type conditions). They can be described by the following equation: The bracket indicates that other ligands on copper are likely present but are omitted. Replacement by a thiol group Diazonium salts can be converted to thiols in a two-step procedure. Treatment of benzenediazonium chloride with potassium ethylxanthate followed by hydrolysis of the intermediate xanthate ester gives thiophenol: Replacement by an aryl group The aryl group can be coupled to another using arenediazonium salts. For example, treatment of benzenediazonium chloride with benzene (an aromatic compound) in the presence of sodium hydroxide gives diphenyl: This reaction is known as the Gomberg–Bachmann reaction. A similar conversion is also achieved by treating benzenediazonium chloride with ethanol and copper powder. Replacement by boronate ester group A Bpin (pinacolatoboron) group, of use in Suzuki-Miyaura cross coupling reactions, can be installed by reaction of a diazonium salt with bis(pinacolato)diboron in the presence of benzoyl peroxide (2 mol %) as an initiator. Alternatively, similar borylation can be achieved using transition metal carbonyl complexes including dimanganese decacarbonyl. Replacement by formyl group A formyl group, –CHO, can be introduced by treating the aryl diazonium salt with formaldoxime (), followed by hydrolysis of the aryl aldoxime to give the aryl aldehyde. This reaction is known as the Beech reaction. Other dediazotizations include: organic reduction at an electrode; reduction by mild reducing agents such as ascorbic acid (vitamin C); reduction by gamma radiation from solvated electrons generated in water; photoinduced electron transfer reduction; reduction by metal cations, most commonly a cuprous salt; anion-induced dediazoniation, in which a counterion such as iodide transfers an electron to the diazonium cation, forming the aryl radical and an iodine radical; and solvent-induced dediazoniation, with the solvent serving as electron donor. Meerwein reaction Benzenediazonium chloride reacts with compounds containing activated double bonds to produce phenylated products. The reaction is called the Meerwein arylation: Metal complexation In their reactions with metal complexes, diazonium cations behave similarly to . For example, low-valent metal complexes add with diazonium salts. Illustrative complexes are and the chiral-at-metal complex . Grafting reactions In a potential application in nanotechnology, the diazonium salt 4-chlorobenzenediazonium tetrafluoroborate very efficiently functionalizes single-wall nanotubes. In order to exfoliate the nanotubes, they are mixed with an ionic liquid in a mortar and pestle. 
The diazonium salt is added together with potassium carbonate, and after grinding the mixture at room temperature the surfaces of the nanotubes are covered with chlorophenyl groups with an efficiency of 1 in 44 carbon atoms. These added substituents prevent the tubes from forming intimate bundles due to large cohesive forces between them, which is a recurring problem in nanotube technology. It is also possible to functionalize silicon wafers with diazonium salts forming an aryl monolayer. In one study, the silicon surface is washed with ammonium hydrogen fluoride leaving it covered with silicon–hydrogen bonds (hydride passivation). The reaction of the surface with a solution of diazonium salt in acetonitrile for 2 hours in the dark is a spontaneous process through a free radical mechanism: So far, grafting of diazonium salts on metals has been accomplished on iron, cobalt, nickel, platinum, palladium, zinc, copper and gold surfaces. Grafting to diamond surfaces has also been reported. One interesting question raised is the actual positioning of the aryl group on the surface. An in silico study demonstrates that in the period 4 elements from titanium to copper the binding energy decreases from left to right because the number of d-electrons increases. For the metals to the left of iron the grafted aryl group is tilted towards, or lies flat on, the surface, favoring metal-to-carbon pi bond formation, while for those to the right of iron it adopts an upright position, favoring metal-to-carbon sigma bond formation. This also explains why diazonium salt grafting thus far has been possible with those metals to the right of iron in the periodic table. Reduction to a hydrazine group Diazonium salts can be reduced with stannous chloride () to the corresponding hydrazine derivatives. This reaction is particularly useful in the Fischer indole synthesis of triptan compounds and indometacin. The use of sodium dithionite is an improvement over stannous chloride since it is a cheaper reducing agent with fewer environmental problems. Biochemistry Alkanediazonium ions, otherwise rarely encountered in organic chemistry, are implicated as the causative agents of certain carcinogens. Specifically, nitrosamines are thought to undergo metabolic activation to produce alkanediazonium species. Safety Solid diazonium halides are often dangerously explosive, and fatalities and injuries have been reported. The nature of the anion affects the stability of the salt. Arenediazonium perchlorates, such as nitrobenzenediazonium perchlorate, have been used to initiate explosives. See also Diazo Diazo printing process Benzenediazonium chloride Triazene cleavage Dinitrogen complex References External links Organic compounds Carbon-heteroatom bond forming reactions Functional groups Organonitrogen compounds
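For reference, the preparation described in the Preparation section — treating an aromatic amine with sodium nitrite and excess mineral acid at 0–5 °C — corresponds, for the classic case of aniline and hydrochloric acid, to the overall stoichiometry below. This is a standard textbook equation given here only as an illustration of that description.

\[
\mathrm{C_6H_5NH_2} + \mathrm{NaNO_2} + 2\,\mathrm{HCl}
\;\longrightarrow\;
\mathrm{C_6H_5N_2^{+}Cl^{-}} + \mathrm{NaCl} + 2\,\mathrm{H_2O}
\qquad (0\text{–}5\ ^\circ\mathrm{C})
\]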
Diazonium compound
[ "Chemistry" ]
3,030
[ "Functional groups", "Organic reactions", "Organic compounds", "Carbon-heteroatom bond forming reactions", "Organonitrogen compounds" ]
2,531,983
https://en.wikipedia.org/wiki/Gibbons%E2%80%93Hawking%E2%80%93York%20boundary%20term
In general relativity, the Gibbons–Hawking–York boundary term is a term that needs to be added to the Einstein–Hilbert action when the underlying spacetime manifold has a boundary. The Einstein–Hilbert action is the basis for the most elementary variational principle from which the field equations of general relativity can be defined. However, the use of the Einstein–Hilbert action is appropriate only when the underlying spacetime manifold is closed, i.e., a manifold which is both compact and without boundary. In the event that the manifold has a boundary , the action should be supplemented by a boundary term so that the variational principle is well-defined. The necessity of such a boundary term was first realised by James W. York and later refined in a minor way by Gary Gibbons and Stephen Hawking. For a manifold that is not closed, the appropriate action is where is the Einstein–Hilbert action, is the Gibbons–Hawking–York boundary term, is the induced metric (see section below on definitions) on the boundary, its determinant, is the trace of the second fundamental form, is equal to where the normal to is spacelike and where the normal to is timelike, and are the coordinates on the boundary. Varying the action with respect to the metric , subject to the condition gives the Einstein equations; the addition of the boundary term means that in performing the variation, the geometry of the boundary encoded in the transverse metric is fixed (see section below). There remains ambiguity in the action up to an arbitrary functional of the induced metric . That a boundary term is needed in the gravitational case is because , the gravitational Lagrangian density, contains second derivatives of the metric tensor. This is a non-typical feature of field theories, which are usually formulated in terms of Lagrangians that involve first derivatives of fields to be varied over only. The GHY term is desirable, as it possesses a number of other key features. When passing to the Hamiltonian formalism, it is necessary to include the GHY term in order to reproduce the correct Arnowitt–Deser–Misner energy (ADM energy). The term is required to ensure the path integral (a la Hawking) for quantum gravity has the correct composition properties. When calculating black hole entropy using the Euclidean semiclassical approach, the entire contribution comes from the GHY term. This term has had more recent applications in loop quantum gravity in calculating transition amplitudes and background-independent scattering amplitudes. In order to determine a finite value for the action, one may have to subtract off a surface term for flat spacetime: where is the extrinsic curvature of the boundary imbedded flat spacetime. As is invariant under variations of , this addition term does not affect the field equations; as such, this is referred to as the non-dynamical term. Introduction to hyper-surfaces Defining hyper-surfaces In a four-dimensional spacetime manifold, a hypersurface is a three-dimensional submanifold that can be either timelike, spacelike, or null. A particular hyper-surface can be selected either by imposing a constraint on the coordinates or by giving parametric equations, where are coordinates intrinsic to the hyper-surface. For example, a two-sphere in three-dimensional Euclidean space can be described either by where is the radius of the sphere, or by where and are intrinsic coordinates. Hyper-surface orthogonal vector fields We take the metric convention (-,+,...,+). 
We start with the family of hyper-surfaces given by where different members of the family correspond to different values of the constant . Consider two neighbouring points and with coordinates and , respectively, lying in the same hyper-surface. We then have to first order Subtracting off from this equation gives at . This implies that is normal to the hyper-surface. A unit normal can be introduced in the case where the hyper-surface is not null. This is defined by and we require that point in the direction of increasing . It can then easily be checked that is given by if the hyper-surface either spacelike or timelike. Induced and transverse metric The three vectors are tangential to the hyper-surface. The induced metric is the three-tensor defined by This acts as a metric tensor on the hyper-surface in the coordinates. For displacements confined to the hyper-surface (so that ) Because the three vectors are tangential to the hyper-surface, where is the unit vector () normal to the hyper-surface. We introduce what is called the transverse metric It isolates the part of the metric that is transverse to the normal . It is easily seen that this four-tensor projects out the part of a four-vector transverse to the normal as We have If we define to be the inverse of , it is easy to check where Note that variation subject to the condition implies that , the induced metric on , is held fixed during the variation. See also for clarification on and etc. On proving the main result In the following subsections we will first compute the variation of the Einstein–Hilbert term and then the variation of the boundary term, and show that their sum results in where is the Einstein tensor, which produces the correct left-hand side to the Einstein field equations, without the cosmological term, which however is trivial to include by replacing with where is the cosmological constant. In the third subsection we elaborate on the meaning of the non-dynamical term. Variation of the Einstein–Hilbert term We will use the identity and the Palatini identity: which are both obtained in the article Einstein–Hilbert action. We consider the variation of the Einstein–Hilbert term: The first term gives us what we need for the left-hand side of the Einstein field equations. We must account for the second term. By the Palatini identity We will need Stokes theorem in the form: where is the unit normal to and , and are coordinates on the boundary. And where where , is an invariant three-dimensional volume element on the hyper-surface. In our particular case we take . We now evaluate on the boundary , keeping in mind that on . Taking this into account we have It is useful to note that where in the second line we have swapped around and and used that the metric is symmetric. It is then not difficult to work out . So now where in the second line we used the identity , and in the third line we have used the anti-symmetry in and . As vanishes everywhere on the boundary its tangential derivatives must also vanish: . It follows that . So finally we have Gathering the results we obtain We next show that the above boundary term will be cancelled by the variation of . Variation of the boundary term We now turn to the variation of the term. Because the induced metric is fixed on the only quantity to be varied is is the trace of the extrinsic curvature. 
We have where we have used that implies So the variation of is where we have used the fact that the tangential derivatives of vanish on We have obtained which cancels the second integral on the right-hand side of Eq. 1. The total variation of the gravitational action is: This produces the correct left-hand side of the Einstein equations. This proves the main result. This result was generalised to fourth-order theories of gravity on manifolds with boundaries in 1983 and published in 1985. The non-dynamical term We elaborate on the role of in the gravitational action. As already mentioned above, because this term only depends on , its variation with respect to gives zero and so does not effect the field equations, its purpose is to change the numerical value of the action. As such we will refer to it as the non-dynamical term. Let us assume that is a solution of the vacuum field equations, in which case the Ricci scalar vanishes. The numerical value of the gravitational action is then where we are ignoring the non-dynamical term for the moment. Let us evaluate this for flat spacetime. Choose the boundary to consist of two hyper-surfaces of constant time value and a large three-cylinder at (that is, the product of a finite interval and a three-sphere of radius ). We have on the hyper-surfaces of constant time. On the three cylinder, in coordinates intrinsic to the hyper-surface, the line element is meaning the induced metric is so that . The unit normal is , so . Then and diverges as , that is, when the spatial boundary is pushed to infinity, even when the is bounded by two hyper-surfaces of constant time. One would expect the same problem for curved spacetimes that are asymptotically flat (there is no problem if the spacetime is compact). This problem is remedied by the non-dynamical term. The difference will be well defined in the limit . Variation of modified gravity terms There are many theories which attempt to modify General Relativity in different ways, for example f(R) gravity replaces R, the Ricci scalar in the Einstein–Hilbert action with a function f(R). Guarnizo et al. found the boundary term for a general f(R) theory. They found that the "modified action in the metric formalism of f(R) gravity plus a Gibbons–York–Hawking like boundary term must be written as:" where . By using the ADM decomposition and introducing extra auxiliary fields, in 2009 Deruelle et al. found a method to find the boundary term for "gravity theories whose Lagrangian is an arbitrary function of the Riemann tensor." This method can be used to find the GHY boundary terms for Infinite derivative gravity. A path-integral approach to quantum gravity As mentioned at the beginning, the GHY term is required to ensure the path integral (a la Hawking et al.) for quantum gravity has the correct composition properties. This older approach to path-integral quantum gravity had a number of difficulties and unsolved problems. The starting point in this approach is Feynman's idea that one can represent the amplitude to go from the state with metric and matter fields on a surface to a state with metric and matter fields on a surface , as a sum over all field configurations and which take the boundary values of the fields on the surfaces and . We write where is a measure on the space of all field configurations and , is the action of the fields, and the integral is taken over all fields which have the given values on and . It is argued that one need only specify the three-dimensional induced metric on the boundary. 
Now consider the situation where one makes the transition from metric , on a surface , to a metric , on a surface and then on to a metric on a later surface One would like to have the usual composition rule expressing that the amplitude to go from the initial to final state to be obtained by summing over all states on the intermediate surface . Let be the metric between and and be the metric between and . Although the induced metric of and will agree on , the normal derivative of at will not in general be equal to that of at . Taking the implications of this into account, it can then be shown that the composition rule will hold if and only if we include the GHY boundary term. In the next section it is demonstrated how this path integral approach to quantum gravity leads to the concept of black hole temperature and intrinsic quantum mechanical entropy. Calculating black-hole entropy using the Euclidean semi-classical approach Application in loop quantum gravity Transition amplitudes and the Hamilton's principal function In the quantum theory, the object that corresponds to the Hamilton's principal function is the transition amplitude. Consider gravity defined on a compact region of spacetime, with the topology of a four dimensional ball. The boundary of this region is a three-dimensional space with the topology of a three-sphere, which we call . In pure gravity without cosmological constant, since the Ricci scalar vanishes on solutions of Einstein's equations, the bulk action vanishes and the Hamilton's principal function is given entirely in terms of the boundary term, where is the extrinsic curvature of the boundary, is the three-metric induced on the boundary, and are coordinates on the boundary. The functional is a highly non-trivial functional to compute; this is because the extrinsic curvature is determined by the bulk solution singled out by the boundary intrinsic geometry. As such is non-local. Knowing the general dependence of from is equivalent to knowing the general solution of the Einstein equations. Background-independent scattering amplitudes Loop quantum gravity is formulated in a background-independent language. No spacetime is assumed a priori, but rather it is built up by the states of theory themselves however scattering amplitudes are derived from -point functions (Correlation function (quantum field theory)) and these, formulated in conventional quantum field theory, are functions of points of a background space-time. The relation between the background-independent formalism and the conventional formalism of quantum field theory on a given spacetime is far from obvious, and it is far from obvious how to recover low-energy quantities from the full background-independent theory. One would like to derive the -point functions of the theory from the background-independent formalism, in order to compare them with the standard perturbative expansion of quantum general relativity and therefore check that loop quantum gravity yields the correct low-energy limit. A strategy for addressing this problem has been suggested; the idea is to study the boundary amplitude, or transition amplitude of a compact region of spacetime, namely a path integral over a finite space-time region, seen as a function of the boundary value of the field. In conventional quantum field theory, this boundary amplitude is well-defined and codes the physical information of the theory; it does so in quantum gravity as well, but in a fully background-independent manner. 
A generally covariant definition of n-point functions can then be based on the idea that the distance between physical points (the arguments of the n-point function) is determined by the state of the gravitational field on the boundary of the spacetime region considered. The key observation is that in gravity the boundary data include the gravitational field, hence the geometry of the boundary, hence all relevant relative distances and time separations. In other words, the boundary formulation realizes very elegantly in the quantum context the complete identification between spacetime geometry and dynamical fields. See also Borde–Guth–Vilenkin theorem Notes References External links Variational formalism of general relativity General relativity Lagrangian mechanics Stephen Hawking
Gibbons–Hawking–York boundary term
[ "Physics", "Mathematics" ]
2,972
[ "Lagrangian mechanics", "Classical mechanics", "General relativity", "Theory of relativity", "Dynamical systems" ]
2,532,789
https://en.wikipedia.org/wiki/Standard%20gravity
The standard acceleration of gravity or standard acceleration of free fall, often called simply standard gravity and denoted by or , is the nominal gravitational acceleration of an object in a vacuum near the surface of the Earth. It is a constant defined by standard as . This value was established by the third General Conference on Weights and Measures (1901, CR 70) and used to define the standard weight of an object as the product of its mass and this nominal acceleration. The acceleration of a body near the surface of the Earth is due to the combined effects of gravity and centrifugal acceleration from the rotation of the Earth (but the latter is small enough to be negligible for most purposes); the total (the apparent gravity) is about 0.5% greater at the poles than at the Equator. Although the symbol is sometimes used for standard gravity, (without a suffix) can also mean the local acceleration due to local gravity and centrifugal acceleration, which varies depending on one's position on Earth (see Earth's gravity). The symbol should not be confused with , the gravitational constant, or g, the symbol for gram. The is also used as a unit for any form of acceleration, with the value defined as above. The value of defined above is a nominal midrange value on Earth, originally based on the acceleration of a body in free fall at sea level at a geodetic latitude of 45°. Although the actual acceleration of free fall on Earth varies according to location, the above standard figure is always used for metrological purposes. In particular, since it is the ratio of the kilogram-force and the kilogram, its numeric value when expressed in coherent SI units is the ratio of the kilogram-force and the newton, two units of force. History Already in the early days of its existence, the International Committee for Weights and Measures (CIPM) proceeded to define a standard thermometric scale, using the boiling point of water. Since the boiling point varies with the atmospheric pressure, the CIPM needed to define a standard atmospheric pressure. The definition they chose was based on the weight of a column of mercury of 760 mm. But since that weight depends on the local gravity, they now also needed a standard gravity. The 1887 CIPM meeting decided as follows: All that was needed to obtain a numerical value for standard gravity was now to measure the gravitational strength at the International Bureau. This task was given to Gilbert Étienne Defforges of the Geographic Service of the French Army. The value he found, based on measurements taken in March and April 1888, was 9.80991(5) m⋅s−2. This result formed the basis for determining the value still used today for standard gravity. The third General Conference on Weights and Measures, held in 1901, adopted a resolution declaring as follows: The numeric value adopted for was, in accordance with the 1887 CIPM declaration, obtained by dividing Defforges's result – 980.991 cm⋅s−2 in the cgs system then en vogue – by 1.0003322 while not taking more digits than are warranted considering the uncertainty in the result. Conversions See also Gravity of Earth Gravity map Seconds pendulum Theoretical gravity References Physical quantities Gravity Units of acceleration Constants
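As a quick arithmetic check of the History section above, the following sketch (plain Python; the only inputs are the figures quoted in the text, and 9.80665 m/s² is the well-known defined value) reproduces the adopted standard value from Defforges's measurement:

```python
# Sketch: reproduce the standard-gravity value from Defforges's 1888 result,
# using only the figures quoted in the text above.
defforges_cgs = 980.991      # cm/s^2, Defforges's measurement
divisor = 1.0003322          # factor from the 1887 CIPM declaration
standard_cgs = defforges_cgs / divisor   # ~980.665 cm/s^2
standard_si = standard_cgs / 100.0       # cm/s^2 -> m/s^2

print(f"{standard_cgs:.3f} cm/s^2  =  {standard_si:.5f} m/s^2")
# prints: 980.665 cm/s^2  =  9.80665 m/s^2, the defined standard value
```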
Standard gravity
[ "Physics", "Mathematics" ]
665
[ "Physical phenomena", "Physical quantities", "Acceleration", "Quantity", "Units of acceleration", "Physical properties", "Units of measurement" ]
2,533,237
https://en.wikipedia.org/wiki/Catalytic%20triad
A catalytic triad is a set of three coordinated amino acid residues that can be found in the active site of some enzymes. Catalytic triads are most commonly found in hydrolase and transferase enzymes (e.g. proteases, amidases, esterases, acylases, lipases and β-lactamases). An acid-base-nucleophile triad is a common motif for generating a nucleophilic residue for covalent catalysis. The residues form a charge-relay network to polarise and activate the nucleophile, which attacks the substrate, forming a covalent intermediate which is then hydrolysed to release the product and regenerate free enzyme. The nucleophile is most commonly a serine or cysteine, but occasionally threonine or even selenocysteine. The 3D structure of the enzyme brings together the triad residues in a precise orientation, even though they may be far apart in the sequence (primary structure). As well as divergent evolution of function (and even the triad's nucleophile), catalytic triads show some of the best examples of convergent evolution. Chemical constraints on catalysis have led to the same catalytic solution independently evolving in at least 23 separate superfamilies. Their mechanism of action is consequently one of the best studied in biochemistry. History In the 1950s, a serine residue was identified as the catalytic nucleophile of trypsin and chymotrypsin (first purified in the 1930s) by diisopropyl fluorophosphate modification. The structure of chymotrypsin was solved by X-ray crystallography in the 1960s, showing the orientation of the catalytic triad in the active site. Other proteases were sequenced and aligned to reveal a family of related proteases, now called the S1 family. Simultaneously, the structures of the evolutionarily unrelated papain and subtilisin proteases were found to contain analogous triads. The 'charge-relay' mechanism for the activation of the nucleophile by the other triad members was proposed in the late 1960s. As more protease structures were solved by X-ray crystallography in the 1970s and 80s, homologous (such as TEV protease) and analogous (such as papain) triads were found. The MEROPS classification system in the 1990s and 2000s began classing proteases into structurally related enzyme superfamilies and so acts as a database of the convergent evolution of triads in over 20 superfamilies. Understanding how chemical constraints on evolution led to the convergence of so many enzyme families on the same triad geometries has developed in the 2010s. Researchers have since conducted increasingly detailed investigations of the triad's exact catalytic mechanism. Of particular contention in the 1990s and 2000s was whether low-barrier hydrogen bonding contributed to catalysis, or whether ordinary hydrogen bonding is sufficient to explain the mechanism. The massive body of work on the charge-relay, covalent catalysis used by catalytic triads has led to the mechanism being the best characterised in all of biochemistry. Function Enzymes that contain a catalytic triad use it for one of two reaction types: either to split a substrate (hydrolases) or to transfer one portion of a substrate over to a second substrate (transferases). Triads are an inter-dependent set of residues in the active site of an enzyme and act in concert with other residues (e.g. binding site and oxyanion hole) to achieve nucleophilic catalysis. These triad residues act together to make the nucleophile member highly reactive, generating a covalent intermediate with the substrate that is then resolved to complete catalysis. 
Mechanism Catalytic triads perform covalent catalysis using a residue as a nucleophile. The reactivity of the nucleophilic residue is increased by the functional groups of the other triad members. The nucleophile is polarised and oriented by the base, which is itself bound and stabilised by the acid. Catalysis is performed in two stages. First, the activated nucleophile attacks the carbonyl carbon and forces the carbonyl oxygen to accept an electron pair, leading to a tetrahedral intermediate. The resulting build-up of negative charge is typically stabilized by an oxyanion hole within the active site. The intermediate then collapses back to a carbonyl, ejecting the first half of the substrate, but leaving the second half still covalently bound to the enzyme as an acyl-enzyme intermediate. Although general-acid catalysis for breakdown of the First and Second tetrahedral intermediate may occur by the path shown in the diagram, evidence supporting such a mechanism with chymotrypsin has been controverted. The second stage of catalysis is the resolution of the acyl-enzyme intermediate by the attack of a second substrate. If the substrate is water then hydrolysis results; if it is an organic molecule then that molecule is transferred onto the first substrate. Attack by the second substrate forms a new tetrahedral intermediate, which resolves by ejecting the enzyme's nucleophile, releasing the second product and regenerating free enzyme. Identity of triad members Nucleophile The side-chain of the nucleophilic residue performs covalent catalysis on the substrate. The lone pair of electrons present on the oxygen or sulfur attacks the electropositive carbonyl carbon. The 20 naturally occurring biological amino acids do not contain any sufficiently nucleophilic functional groups for many difficult catalytic reactions. Embedding the nucleophile in a triad increases its reactivity for efficient catalysis. The most commonly used nucleophiles are the hydroxyl (OH) of serine and the thiol/thiolate ion (SH/S−) of cysteine. Alternatively, threonine proteases use the secondary hydroxyl of threonine, however due to steric hindrance of the side chain's extra methyl group, such proteases use their N-terminal amide as the base rather than a separate amino acid. Use of oxygen or sulfur as the nucleophilic atom causes minor differences in catalysis. Compared to oxygen, sulfur's extra d orbital makes it larger (by 0.4 Å) and softer, allows it to form longer bonds (dC-X and dX-H by 1.3-fold), and gives it a lower pKa (by 5 units). Serine is therefore more dependent than cysteine on optimal orientation of the acid-base triad members to reduce its pKa in order to achieve concerted deprotonation with catalysis. The low pKa of cysteine works to its disadvantage in the resolution of the first tetrahedral intermediate as unproductive reversal of the original nucleophilic attack is the more favourable breakdown product. The triad base is therefore preferentially oriented to protonate the leaving group amide to ensure that it is ejected to leave the enzyme sulfur covalently bound to the substrate N-terminus. Finally, resolution of the acyl-enzyme (to release the substrate C-terminus) requires serine to be re-protonated whereas cysteine can leave as S−. Sterically, the sulfur of cysteine also forms longer bonds and has a bulkier van der Waals radius and if mutated to serine can be trapped in unproductive orientations in the active site. 
Very rarely, the selenium atom of the uncommon amino acid selenocysteine is used as a nucleophile. The deprotonated Se− state is strongly favoured when in a catalytic triad. Base Since no natural amino acids are strongly nucleophilic, the base in a catalytic triad polarises and deprotonates the nucleophile to increase its reactivity. Additionally, it protonates the first product to aid leaving group departure. The base is most commonly histidine since its pKa allows for effective base catalysis, hydrogen bonding to the acid residue, and deprotonation of the nucleophile residue. β-lactamases such as TEM-1 use a lysine residue as the base. Because lysine's pKa is so high (pKa=11), a glutamate and several other residues act as the acid to stabilise its deprotonated state during the catalytic cycle. Threonine proteases use their N-terminal amide as the base, since steric crowding by the catalytic threonine's methyl prevents other residues from being close enough. Acid The acidic triad member forms a hydrogen bond with the basic residue, leading to mutual alignment via restriction of the basic residue's side-chain rotation. The positive charge on the basic residue is simultaneously stabilised, leading to its polarisation. Two amino acids have acidic side chains at physiological pH (aspartate or glutamate) and so are the most common members of the acidic triad residue. Cytomegalovirus protease uses a pair of histidines, one as the base, as usual, and one as the acid. The second histidine is not as effective an acid as the more common aspartate or glutamate, leading to a lower catalytic efficiency. Examples of triads Ser-His-Asp The Serine-Histidine-Aspartate motif is one of the most thoroughly characterised catalytic motifs in biochemistry. The triad is exemplified by chymotrypsin, a model serine protease from the PA superfamily which uses its triad to hydrolyse protein backbones. The aspartate is hydrogen bonded to the histidine, increasing the pKa of its imidazole nitrogen from 7 to around 12. Histidine is thus able to act as a powerful general base, activating the serine nucleophile. The histidine base aids the first leaving group by donating a proton, and also activates the hydrolytic water substrate by abstracting a proton as the remaining OH− attacks the acyl-enzyme intermediate. The same triad has also convergently evolved in α/β hydrolases such as some lipases and esterases, however orientation of the triad members is reversed. Additionally, brain acetyl hydrolase (which has the same fold as a small G-protein) has also been found to have this triad. Cys-His-Asp The second most studied triad is the Cysteine-Histidine-Aspartate motif. Several families of cysteine proteases use this triad set, for example TEV protease and papain. The triad acts similarly to serine protease triads, with a few notable differences. Due to cysteine's low pKa, the importance of the Asp to catalysis varies and several cysteine proteases are effectively Cys-His dyads (e.g. hepatitis A virus protease), whilst in others the cysteine is already deprotonated before catalysis begins (e.g. papain). This triad is also used by some amidases, such as N-glycanase to hydrolyse non-peptide C-N bonds. Ser-His-His The triad of cytomegalovirus protease uses histidine as both the acid and base triad members. Removing the acid histidine results in only a 10-fold activity loss (compared to >10,000-fold when aspartate is removed from chymotrypsin). 
This triad has been interpreted as a possible way of generating a less active enzyme to control cleavage rate. Ser-Glu-Asp An unusual triad is found in sedolisin proteases. The low pKa of the glutamate carboxylate group means that it only acts as a base in the triad at very low pH. The triad is hypothesised to be an adaptation to specific environments like acidic hot springs (e.g. kumamolysin) or cell lysosome (e.g. tripeptidyl peptidase). Cys-His-Ser The endothelial protease vasohibin uses a cysteine as the nucleophile, but a serine to coordinate the histidine base. Despite the serine being a poor acid, it is still effective in orienting the histidine in the catalytic triad. Some homologues alternatively have a threonine instead of serine at the acid location. Thr-Nter, Ser-Nter and Cys-Nter Threonine proteases, such as the proteasome protease subunit and ornithine acyltransferases, use the secondary hydroxyl of threonine in a manner analogous to the use of the serine primary hydroxyl. However, due to the steric interference of the extra methyl group of threonine, the base member of the triad is the N-terminal amide which polarises an ordered water which, in turn, deprotonates the catalytic hydroxyl to increase its reactivity. Similarly, there exist equivalent 'serine only' and 'cysteine only' configurations such as penicillin acylase G and penicillin acylase V which are evolutionarily related to the proteasome proteases. Again, these use their N-terminal amide as a base. Ser-cisSer-Lys This unusual triad occurs only in one superfamily of amidases. Here, the lysine acts to polarise the middle serine. The middle serine then forms two strong hydrogen bonds to the nucleophilic serine to activate it (one with the side chain hydroxyl and the other with the backbone amide). The middle serine is held in an unusual cis orientation to facilitate precise contacts with the other two triad residues. The triad is further unusual in that the lysine and cis-serine both act as the base in activating the catalytic serine, but the same lysine also performs the role of the acid member as well as making key structural contacts. Sec-His-Glu The rare but naturally occurring amino acid selenocysteine (Sec) can also be found as the nucleophile in some catalytic triads. Selenocysteine is similar to cysteine, but contains a selenium atom instead of a sulfur. A selenocysteine residue is found in the active site of thioredoxin reductase, which uses the selenol group for reduction of disulfide in thioredoxin. Engineered triads In addition to naturally occurring types of catalytic triads, protein engineering has been used to create enzyme variants with non-native amino acids, or entirely synthetic amino acids. Catalytic triads have also been inserted into otherwise non-catalytic proteins, or protein mimics. Subtilisin (a serine protease) has had its oxygen nucleophile replaced with each of sulfur, selenium, or tellurium. Cysteine and selenocysteine were inserted by mutagenesis, whereas the non-natural amino acid, tellurocysteine, was inserted using auxotrophic cells fed with synthetic tellurocysteine. These elements are all in the 16th periodic table column (chalcogens), so have similar properties. In each case, changing the nucleophile lowered the enzyme's protease activity, but increased a new activity. A sulfur nucleophile improved the enzyme's transferase activity (sometimes called subtiligase). Selenium and tellurium nucleophiles converted the enzyme into an oxidoreductase.
When the nucleophile of TEV protease was converted from cysteine to serine, its protease activity was strongly reduced, but was able to be restored by directed evolution. Non-catalytic proteins have been used as scaffolds, having catalytic triads inserted into them which were then improved by directed evolution. The Ser-His-Asp triad has been inserted into an antibody, as well as a range of other proteins. Similarly, catalytic triad mimics have been created in small organic molecules like diaryl diselenide, and displayed on larger polymers like Merrifield resins, and self-assembling short peptide nanostructures. Divergent evolution The sophistication of the active site network causes residues involved in catalysis (and residues in contact with these) to be highly evolutionarily conserved. However, many examples of divergent evolution in catalytic triads exist, both in the reaction catalysed, and the residues used in catalysis. The triad remains the core of the active site, but it is evolutionarily adapted to serve different functions. Some proteins, called pseudoenzymes, have non-catalytic functions (e.g. regulation by inhibitory binding) and have accumulated mutations that inactivate their catalytic triad. Reaction changes Catalytic triads perform covalent catalysis via an acyl-enzyme intermediate. If the intermediate is resolved by water, the result is hydrolysis of the substrate. However, if the intermediate is resolved by attack by a second substrate, then the enzyme acts as a transferase. For example, attack by an acyl group results in an acyltransferase reaction. Several families of transferase enzymes have evolved from hydrolases by adaptation to exclude water and favour attack of a second substrate. In different members of the α/β-hydrolase superfamily, the Ser-His-Asp triad is tuned by surrounding residues to perform at least 17 different reactions. Some of these reactions are also achieved with mechanisms that have altered formation, or resolution of the acyl-enzyme intermediate, or that don't proceed via an acyl-enzyme intermediate. Additionally, an alternative transferase mechanism has been evolved by amidophosphoribosyltransferase, which has two active sites. In the first active site, a cysteine triad hydrolyses a glutamine substrate to release free ammonia. The ammonia then diffuses through an internal tunnel in the enzyme to the second active site, where it is transferred to a second substrate. Nucleophile changes Divergent evolution of active site residues is slow, due to strong chemical constraints. Nevertheless, some protease superfamilies have evolved from one nucleophile to another. This can be inferred when a superfamily (with the same fold) contains families that use different nucleophiles. Such nucleophile switches have occurred several times during evolutionary history; however, the mechanisms by which this happens are still unclear. Within protease superfamilies that contain a mixture of nucleophiles (e.g. the PA clan), families are designated by their catalytic nucleophile (C=cysteine proteases, S=serine proteases). Pseudoenzymes A further subclass of catalytic triad variants is pseudoenzymes, which have triad mutations that make them catalytically inactive, but able to function as binding or structural proteins. For example, the heparin-binding protein Azurocidin is a member of the PA clan, but with a glycine in place of the nucleophile and a serine in place of the histidine.
Similarly, RHBDF1 is a homolog of the S54 family rhomboid proteases with an alanine in the place of the nucleophilic serine. In some cases, pseudoenzymes may still have an intact catalytic triad but mutations in the rest of the protein remove catalytic activity. The CA clan contains catalytically inactive members with mutated triads (calpamodulin has lysine in place of its cysteine nucleophile) and with intact triads but inactivating mutations elsewhere (rat testin retains a Cys-His-Asn triad). Convergent evolution The enzymology of proteases provides some of the clearest known examples of convergent evolution at a molecular level. The same geometric arrangement of triad residues occurs in over 20 separate enzyme superfamilies. Each of these superfamilies is the result of convergent evolution for the same triad arrangement within a different structural fold. This is because there are limited productive ways to arrange three triad residues, the enzyme backbone and the substrate. These examples reflect the intrinsic chemical and physical constraints on enzymes, leading evolution to repeatedly and independently converge on equivalent solutions. Cysteine and serine hydrolases The same triad geometries have been converged upon by serine proteases such as the chymotrypsin and subtilisin superfamilies. Similar convergent evolution has occurred with cysteine proteases such as viral C3 protease and papain superfamilies. These triads have converged to almost the same arrangement due to the mechanistic similarities in cysteine and serine proteolysis mechanisms. Families of cysteine proteases Families of serine proteases Threonine proteases Threonine proteases use the amino acid threonine as their catalytic nucleophile. Unlike cysteine and serine, threonine carries a secondary hydroxyl (i.e. the hydroxyl-bearing carbon also has a methyl group). This methyl group greatly restricts the possible orientations of triad and substrate as the methyl clashes with either the enzyme backbone or histidine base. When the nucleophile of a serine protease was mutated to threonine, the methyl occupied a mixture of positions, most of which prevented substrate binding. Consequently, the catalytic residue of a threonine protease is located at its N-terminus. Two evolutionarily independent enzyme superfamilies with different protein folds are known to use the N-terminal residue as a nucleophile: Superfamily PB (proteasomes using the Ntn fold) and Superfamily PE (acetyltransferases using the DOM fold). This commonality of active site structure in completely different protein folds indicates that the active site evolved convergently in those superfamilies. Families of threonine proteases See also Enzyme catalysis Enzyme superfamily References Notes Citations Molecular biology Catalysis Evolution
Catalytic triad
[ "Chemistry", "Biology" ]
4,742
[ "Catalysis", "Chemical kinetics", "Biochemistry", "Molecular biology" ]
20,668,503
https://en.wikipedia.org/wiki/Earthquake%20Engineering%20Research%20Institute
The Earthquake Engineering Research Institute (EERI) is a leading technical society in dissemination of earthquake risk and earthquake engineering research both in the U.S. and globally. EERI members include researchers, geologists, geotechnical engineers, educators, government officials, and building code regulators. Their mission, as stated in their 5-year plan published in 2006, has three points: "Advancing the science and practice of earthquake engineering; Improving understanding of the impact of earthquakes on the physical, social, economic, political, and cultural environment; and Advocating comprehensive and realistic measures for reducing the harmful effects of earthquakes". Goals In the 2006 5-year plan, the EERI has identified four main goals towards fulfilling their mission and planned strategies to carry them out. "Enhance and expand educational materials and technical programs." They will hold two seminars per year on topics intended to interest a wide audience. They will also post many of their publications online, such as their journal Earthquake Spectra. "Outreach and Advocacy" They will continue to release their findings on earthquake risks, including the costs of potential disasters. They hope to influence policymakers to increase funds for preventing these risks. They hope also to include earthquake safety into the "green" building design movement. "Maintain a strong program of international activities." They serve as an inflow point in the U.S. for earthquake research from other countries. They also serve as an outflow, translating their research into languages other than English. "Expand and broaden financial resource base." They wish to raise $1 million in donations by 2010, and increase worldwide membership to 3,000. They wish to expand their programs and partnerships with other organizations with more workshops and seminars. In February 2010, the EERI entered a partnership with the Geo-Institute of American Society of Civil Engineers, increasing their collaboration to reduce earthquake hazards. History The EERI was formed in 1948 as an advising committee of the United States Coast and Geodetic Survey. It quickly became its own independent, nonprofit organization, with the purpose of studying why buildings fail under earthquake disasters, and what methods can prevent these failures. At first they conducted their research in laboratories of different University or Government groups. As the EERI grew, they began to more often send research funds to the Universities, and have the university conduct the research. EERI focused more on identifying and investigating areas in need of research, and policymaking based on the university's lab results. In 1952 the EERI organized the first Conference on Earthquake Engineering, at UCLA. In 1956, in observation of the fiftieth anniversary of the 1906 San Francisco earthquake, they held the first World Conference on Earthquake Engineering at University of California, Berkeley. In 1984, the 8th World Conference was held in San Francisco. This conference brought in scientists from 54 countries. At first, membership to the EERI was limited to invite-only engineers and scientists. In 1973, they began to hire members by application, and increased their membership from 126 to 721 by 1978. In 1991, EERI began receiving funding from the Federal Emergency Management Agency (FEMA), to continue publishing information on how to reduce damage from earthquakes. 
After a number of location changes, the EERI headquarters settled in Oakland, California. Their quarterly journal, Earthquake Spectra, covers current research on earthquake engineering and is available online or by subscription. Its target audience is any geologist, seismologist, or related engineer. EERI also publishes many other types of information, including a monthly newsletter, the Connections oral history series, and field investigation reports. California earthquake assessments EERI performs risk assessments on earthquake potential sites around the world. This is a quick summary of two reports on California cities. In 2006 an engineering firm related to the EERI has projected over $122 billion in damage, if a repeat of the 1906 San Francisco earthquake occurs. This number includes damage to homes and structures, excluding fire damage. The EERI lobbies for government funding to prevent natural disasters. The money is best spent before loss of life and large-scale structural damage, though often it is not seen until afterward, as evidenced by Hurricane Katrina. The EERI and the USGS have identified that a potential large earthquake in Los Angeles would cause more damage than Katrina at New Orleans, with up to $250 billion in total damage and 18,000 deaths. Student involvement EERI has a student chapter in 29 colleges across the U.S. to further promote interest in earthquake engineering. A few representatives from each chapter make up the Student Leadership Council (SLC). Since 2008 the EERI and SLC have held the Undergraduate Seismic Design Competition, which was previously run by the Pacific Earthquake Engineering Research Center (PEER). In this competition a team of undergraduate college students must design and construct a structure made of balsa wood. The structure is limited by many rules, such as a weight limit, the individual heights of each floor, total height limit, and more. The structure is subjected to extra weight and placed on a shake table, which moves to simulate an earthquake. An accelerometer is placed on top of the building to measure how fast the top of the building shakes. Students' structures are judged on a number of criteria, including the height of the structure, number of floors, the accelerometer readings, and whether the structure breaks. Students will want to make a building close to the height limit because the higher floors are worth more points. The annual competition is typically held along with the EERI annual meeting. References External links EERI's Website EERI Student Leadership Council's Website Earthquake Spectra's Website Engineering research institutes Earthquake engineering
Earthquake Engineering Research Institute
[ "Engineering" ]
1,138
[ "Structural engineering", "Earthquake engineering", "Civil engineering", "Engineering research institutes" ]
20,670,019
https://en.wikipedia.org/wiki/Scanning%20SQUID%20microscopy
In condensed matter physics, scanning SQUID microscopy is a technique where a superconducting quantum interference device (SQUID) is used to image surface magnetic field strength with micrometre-scale resolution. A tiny SQUID is mounted onto a tip which is then rastered near the surface of the sample to be measured. As the SQUID is the most sensitive detector of magnetic fields available and can be constructed at submicrometre widths via lithography, the scanning SQUID microscope allows magnetic fields to be measured with unparalleled resolution and sensitivity. The first scanning SQUID microscope was built in 1992 by Black et al. Since then the technique has been used to confirm unconventional superconductivity in several high-temperature superconductors including YBCO and BSCCO compounds. Operating principles The scanning SQUID microscope is based upon the thin-film DC SQUID. A DC SQUID consists of superconducting electrodes in a ring pattern connected by two weak-link Josephson junctions (see figure). Above the critical current of the Josephson junctions, the idealized difference in voltage between the electrodes is given by where R is the resistance between the electrodes, I is the current, I0 is the maximum supercurrent, Ic is the critical current of the Josephson junctions, Φ is the total magnetic flux through the ring, and Φ0 is the magnetic flux quantum. Hence, a DC SQUID can be used as a flux-to-voltage transducer. However, as noted by the figure, the voltage across the electrodes oscillates sinusoidally with respect to the amount of magnetic flux passing through the device. As a result, alone a SQUID can only be used to measure the change in magnetic field from some known value, unless the magnetic field or device size is very small such that Φ < Φ0. To use the DC SQUID to measure standard magnetic fields, one must either count the number of oscillations in the voltage as the field is changed, which is very difficult in practice, or use a separate DC bias magnetic field parallel to the device to maintain a constant voltage and consequently constant magnetic flux through the loop. The strength of the field being measured will then be equal to the strength of the bias magnetic field passing through the SQUID. Although it is possible to read the DC voltage between the two terminals of the SQUID directly, because noise tends to be a problem in DC measurements, an alternating current technique is used. In addition to the DC bias magnetic field, an AC magnetic field of constant amplitude, with field strength generating Φ << Φ0, is also emitted in the bias coil. This AC field produces an AC voltage with amplitude proportional to the DC component in the SQUID. The advantage of this technique is that the frequency of the voltage signal can be chosen to be far away from that of any potential noise sources. By using a lock-in amplifier the device can read only the frequency corresponding to the magnetic field, ignoring many other sources of noise. Instrumentation A Scanning SQUID Microscope is a sensitive near-field imaging system for the measurement of weak magnetic fields by moving a Superconducting Quantum Interference Device (SQUID) across an area. The microscope can map out buried current-carrying wires by measuring the magnetic fields produced by the currents, or can be used to image fields produced by magnetic materials. 
By mapping out the current in an integrated circuit or a package, short circuits can be localized and chip designs can be verified to see that current is flowing where expected. As the SQUID material must be superconducting, measurements must be performed at low temperatures. Typically, experiments are carried out below liquid helium temperature (4.2 K) in a helium-3 refrigerator or dilution refrigerator. However, advances in high-temperature superconductor thin-film growth have allowed relatively inexpensive liquid nitrogen cooling to instead be used. It is even possible to measure room-temperature samples by only cooling a high Tc squid and maintaining thermal separation with the sample. In either case, due to the extreme sensitivity of the SQUID probe to stray magnetic fields, in general some form of magnetic shielding is used. Most common is a shield made of mu-metal, possibly in combination with a superconducting "can" (all superconductors repel magnetic fields via the Meissner effect). The actual SQUID probe is generally made via thin-film deposition with the SQUID area outlined via lithography. A wide variety of superconducting materials can be used, but the two most common are Niobium, due to its relatively good resistance to damage from thermal cycling, and YBCO, for its high Tc > 77 K and relative ease of deposition compared to other high Tc superconductors. In either case, a superconductor with critical temperature higher than that of the operating temperature should be chosen. The SQUID itself can be used as the pickup coil for measuring the magnetic field, in which case the resolution of the device is proportional to the size of the SQUID. However, currents in or near the SQUID generate magnetic fields which are then registered in the coil and can be a source of noise. To reduce this effect it is also possible to make the size of the SQUID itself very small, but attach the device to a larger external superconducting loop located far from the SQUID. The flux through the loop will then be detected and measured, inducing a voltage in the SQUID. The resolution and sensitivity of the device are both proportional to the size of the SQUID. A smaller device will have greater resolution but less sensitivity. The change in voltage induced is proportional to the inductance of the device, and limitations in the control of the bias magnetic field as well as electronics issues prevent a perfectly constant voltage from being maintained at all times. However, in practice, the sensitivity in most scanning SQUID microscopes is sufficient for almost any SQUID size for many applications, and therefore the tendency is to make the SQUID as small as possible to enhance resolution. Via e-beam lithography techniques it is possible to fabricate devices with total area of 1–10 μm2, although devices in the tens to hundreds of square micrometres are more common. The SQUID itself is mounted onto a cantilever and operated either in direct contact with or just above the sample surface. The position of the SQUID is usually controlled by some form of electric stepping motor. Depending on the particular application, different levels of precision may be required in the height of the apparatus. Operating at lower-tip sample distances increases the sensitivity and resolution of the device, but requires more advanced mechanisms in controlling the height of the probe. In addition such devices require extensive vibration dampening if precise height control is to be maintained. 
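To illustrate the flux-modulation and lock-in read-out described under Operating principles above, here is a minimal numerical sketch in Python. The sinusoidal voltage-flux response follows the qualitative statement in the text; the device numbers (modulation depth, frequency, modulation amplitude) are made-up placeholders for illustration, not parameters of any particular instrument:

```python
import numpy as np

# Illustrative, made-up device numbers (placeholders, not real instrument specs)
PHI0 = 2.07e-15          # flux quantum, Wb (value quoted in the article)
DV = 50e-6               # assumed peak-to-peak voltage modulation depth, V
F_MOD = 100e3            # assumed AC flux-modulation frequency, Hz
PHI_AC = 0.05 * PHI0     # small AC modulation amplitude (<< Phi0)

def squid_voltage(phi):
    """Idealized Phi0-periodic (sinusoidal) flux-to-voltage response."""
    return 0.5 * DV * np.cos(2 * np.pi * phi / PHI0)

def lockin_output(phi_signal, n_periods=200, samples=20000):
    """Add a small AC flux to the signal flux and demodulate at F_MOD.

    The averaged product is proportional to the slope dV/dPhi at the working
    point, which is what a flux-locked loop holds constant.
    """
    t = np.linspace(0.0, n_periods / F_MOD, samples, endpoint=False)
    ref = np.sin(2 * np.pi * F_MOD * t)
    v = squid_voltage(phi_signal + PHI_AC * ref)
    return 2.0 * np.mean(v * ref)   # lock-in: multiply by the reference and average

for phi in (0.0, 0.1 * PHI0, 0.25 * PHI0, 0.4 * PHI0):
    print(f"signal flux = {phi / PHI0:4.2f} Phi0 -> lock-in output = {lockin_output(phi):+.2e} V")
# The output follows the local slope of V(Phi): essentially zero at the extrema
# of the sinusoid and largest in magnitude near a quarter flux quantum.
```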
High temperature scanning SQUID microscope A high temperature Scanning SQUID Microscope using a YBCO SQUID is capable of measuring magnetic fields as small as 20 pT (about 2 million times weaker than the Earth's magnetic field). The SQUID sensor is sensitive enough that it can detect a wire even if it is carrying only 10 nA of current at a distance of 100 μm from the SQUID sensor with 1 second averaging. The microscope uses a patented design to allow the sample under investigation to be at room temperature and in air while the SQUID sensor is under vacuum and cooled to less than 80 K using a cryo cooler. No Liquid Nitrogen is used. During non-contact, non-destructive imaging of room temperature samples in air, the system achieves a raw, unprocessed spatial resolution equal to the distance separating the sensor from the current or the effective size of the sensor, whichever is larger. To best locate a wire short in a buried layer, however, a Fast Fourier Transform (FFT) back-evolution technique can be used to transform the magnetic field image into an equivalent map of the current in an integrated circuit or printed wiring board. The resulting current map can then be compared to a circuit diagram to determine the fault location. With this post-processing of a magnetic image and the low noise present in SQUID images, it is possible to enhance the spatial resolution by factors of 5 or more over the near-field limited magnetic image. The system's output is displayed as a false-color image of magnetic field strength or current magnitude (after processing) versus position on the sample. After processing to obtain current magnitude, this microscope has been successful at locating shorts in conductors to within ±16 μm at a sensor-current distance of 150 μm. Operation Operation of a scanning SQUID microscope consists of simply cooling down the probe and sample, and rastering the tip across the area where measurements are desired. As the change in voltage corresponding to the measured magnetic field is quite rapid, the strength of the bias magnetic field is typically controlled by feedback electronics. This field strength is then recorded by a computer system that also keeps track of the position of the probe. An optical camera can also be used to track the position of the SQUID with respect to the sample. As the name implies, SQUIDs are made from superconducting material. As a result, they need to be cooled to cryogenic temperatures of less than 90 K (liquid nitrogen temperatures) for high temperature SQUIDs and less than 9 K (liquid helium temperatures) for low temperature SQUIDs. For magnetic current imaging systems, a small (about 30 μm wide) high temperature SQUID is used. This system has been designed to keep a high temperature SQUID, made from YBa2Cu3O7, cooled below 80K and in vacuum while the device under test is at room temperature and in air. A SQUID consists of two Josephson tunnel junctions that are connected together in a superconducting loop (see Figure 1). A Josephson junction is formed by two superconducting regions that are separated by a thin insulating barrier. Current exists in the junction without any voltage drop, up to a maximum value, called the critical current, Io. When the SQUID is biased with a constant current that exceeds the critical current of the junction, then changes in the magnetic flux, Φ, threading the SQUID loop produce changes in the voltage drop across the SQUID (see Figure 1). 
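For a rough sense of scale, the following back-of-the-envelope sketch (Python; it uses only the flux quantum, the ~30 μm SQUID size and the 20 pT sensitivity quoted in this article, and ignores noise and coupling details) relates the quoted field sensitivity to the flux threading the loop:

```python
# Back-of-the-envelope: how much flux does a 20 pT field put through a ~30 um SQUID?
PHI0 = 2.07e-15            # flux quantum, Wb (value quoted in this article)
area = (30e-6) ** 2        # ~30 um x 30 um loop area, m^2 (size quoted in this article)
B = 20e-12                 # 20 pT, the field sensitivity quoted above

flux = B * area
print(f"field per flux quantum: {PHI0 / area * 1e6:.1f} uT")       # ~2.3 uT per Phi0
print(f"20 pT corresponds to about {flux / PHI0 * 1e6:.0f} micro-Phi0")  # ~9 uPhi0
```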
Figure 2(a) shows the I-V characteristic of a SQUID where ∆V is the modulation depth of the SQUID due to external magnetic fields. The voltage across a SQUID is a nonlinear periodic function of the applied magnetic field, with a periodicity of one flux quantum, Φ0=2.07×10−15 Tm2 (see Figure 2(b)). In order to convert this nonlinear response to a linear response, a negative feedback circuit is used to apply a feedback flux to the SQUID so as to keep the total flux through the SQUID constant. In such a flux locked loop, the magnitude of this feedback flux is proportional to the external magnetic field applied to the SQUID. Further description of the physics of SQUIDs and SQUID microscopy can be found elsewhere. Magnetic field detection using SQUID Magnetic current imaging uses the magnetic fields produced by currents in electronic devices to obtain images of those currents. This is accomplished through the fundamental physics relationship between magnetic fields and current, the Biot-Savart Law: B is the magnetic induction, Idℓ is an element of the current, the constant μ0 is the permeability of free space, and r is the distance between the current and the sensor. As a result, the current can be directly calculated from the magnetic field knowing only the separation between the current and the magnetic field sensor. The details of this mathematical calculation can be found elsewhere, but what is important to know here is that this is a direct calculation that is not influenced by other materials or effects, and that through the use of Fast Fourier Transforms these calculations can be performed very quickly. A magnetic field image can be converted to a current density image in about 1 or 2 seconds. Applications The scanning SQUID microscope was originally developed for an experiment to test the pairing symmetry of the high-temperature cuprate superconductor YBCO. Standard superconductors are isotropic with respect to their superconducting properties, that is, for any direction of electron momentum in the superconductor, the magnitude of the order parameter and consequently the superconducting energy gap will be the same. However, in the high-temperature cuprate superconductors, the order parameter instead follows the equation , meaning that when crossing over any of the [110] directions in momentum space one will observe a sign change in the order parameter. The form of this function is equal to that of the l = 2 spherical harmonic function, giving it the name d-wave superconductivity. As the superconducting electrons are described by a single coherent wavefunction, proportional to exp(-iφ), where φ is known as the phase of the wavefunction, this property can be also interpreted as a phase shift of π under a 90 degree rotation. This property was exploited by Tsuei et al. by manufacturing a series of YBCO ring Josephson junctions which crossed [110] Bragg planes of a single YBCO crystal (figure). In a Josephson junction ring the superconducting electrons form a coherent wave function, just as in a superconductor. As the wavefunction must have only one value at each point, the overall phase factor obtained after traversing the entire Josephson circuit must be an integer multiple of 2π, as otherwise, one would obtain a different value of the probability density depending on the number of times one traversed the ring. In YBCO, upon crossing the [110] planes in momentum (and real) space, the wavefunction will undergo a phase shift of π. 
Hence if one forms a Josephson ring device where this plane is crossed an odd (2n+1) number of times, a phase difference of (2n+1)π will be observed between the two junctions. For 2n, or an even number of crossings, as in B, C, and D, a phase difference of (2n)π will be observed. Compared to the case of standard s-wave junctions, where no phase shift is observed, no anomalous effects were expected in the B, C, and D cases, as the single-valued property is conserved, but for device A, the system must do something for the φ=2nπ condition to be maintained. By the same property that underlies the scanning SQUID microscope, the phase of the wavefunction is also altered by the amount of magnetic flux passing through the junction, following the relationship Δφ = 2πΦ/Φ0. As was predicted by Sigrist and Rice, the phase condition can then be maintained in the junction by a spontaneous flux in the junction of value Φ0/2. Tsuei et al. used a scanning SQUID microscope to measure the local magnetic field at each of the devices in the figure, and observed a field in ring A approximately equal in magnitude to Φ0/2A, where A was the area of the ring. Zero field was observed at devices B, C, and D. The results provided one of the earliest and most direct experimental confirmations of d-wave pairing in YBCO. A scanning SQUID microscope can detect all types of shorts and conductive paths including Resistive Opens (RO) defects such as cracked or voided bumps, Delaminated Vias, Cracked traces/mouse bites and Cracked Plated Through Holes (PTH). It can map power distributions in packages as well as in 3D Integrated Circuits (IC) with Through-Silicon Via (TSV), System in package (SiP), Multi-Chip Module (MCM) and stacked die. SQUID scanning can also isolate defective components in assembled devices or Printed Circuit Boards (PCB). Short Localization in Advanced Wirebond Semiconductor Package Advanced wire-bond packages, unlike traditional Ball Grid Array (BGA) packages, have multiple pad rows on the die and multiple tiers on the substrate. This package technology has brought new challenges to failure analysis. To date, Scanning Acoustic Microscopy (SAM), Time Domain Reflectometry (TDR) analysis, and Real-Time X-ray (RTX) inspection were the non-destructive tools used to detect short faults. Unfortunately, these techniques do not work very well in advanced wire-bond packages. Because of the high density wire bonding in advanced wire-bond packages, it is extremely hard to localize the short with conventional RTX inspection. Without detailed information as to where the short might occur, attempting destructive decapsulation to expose both die surface and bond wires is full of risk. Wet chemical etching to remove mold compound in a large area often results in over-etching. Furthermore, even if the package is successfully decapped, visual inspection of the multi-tiered bond wires is a blind search. The Scanning SQUID Microscopy (SSM) data are current density images and current peak images. The current density images give the magnitude of the current, while the current peak images reveal the current path with a ± 3 μm resolution. Obtaining the SSM data from scanning advanced wire-bond packages is only half the task; fault localization is still necessary. The critical step is to overlay the SSM current images or current path images with CAD files such as bonding diagrams or RTX images to pinpoint the fault location. To make alignment of overlaying possible, an optical two-point reference alignment is made.
The package edge and package fiducial are the most convenient package markings to align to. Based on the data analysis, fault localization by SSM should isolate the short in the die, bond wires or package substrate. After all non-destructive approaches are exhausted, the final step is destructive deprocessing to verify SSM data. Depending on fault isolation, the deprocessing techniques include decapsulation, parallel lapping or cross-section. Short in multi-stacked packages Electric shorts in multi-stacked die packages can be very difficult to isolate non-destructively, especially when a large number of bond wires are somehow shorted. For instance, when an electric short is produced by two bond wires touching each other, X-ray analysis may help to identify potential defect locations; however, defects like metal migration produced at wirebond pads, or bond wires somehow touching any other conductive structures, may be very difficult to catch with non-destructive techniques that are not electrical in nature. Here, the availability of analytical tools that can map out the flow of electric current inside the package provides valuable information to guide the failure analyst to potential defect locations. Figure 1a shows the schematic of our first case study consisting of a triple-stacked die package. The X-ray image of figure 1b is intended to illustrate the challenge that finding the potential short locations presents for failure analysts. In particular, this is one of a set of units that were inconsistently failing and recovering under reliability tests. Time domain reflectometry and X-ray analysis were performed on these units with no success in isolating the defects. Also there was no clear indication of defects that could potentially produce the observed electrical short failure mode. Two of those units were analyzed with SSM. Electrically connecting the failing pin to a ground pin produced the electric current path shown in figure 2. This electrical path strongly suggests that the current is somehow flowing through all the ground nets through a conductive path located very close to the wirebond pads in the top-down view of the package. Based on electrical and layout analysis of the package, it can be inferred that current is either flowing through the wirebond pads or that the wirebonds are somehow touching a conductive structure at the specified location. After obtaining similar SSM results on the two units under test, further destructive analysis focused around the small potential short region, and it showed that the failing pin wirebond is touching the bottom of one of the stacked dice at the specific XY position highlighted by SSM analysis. The cross section view of one of those units is shown in figure 3. A similar defect was found in the second unit. Short between pins in molding compound package The failure in this example was characterized as an eight-ohm short between two adjacent pins. The bond wires to the pins of interest were cut with no effect on the short as measured at the external pins, indicating that the short was present in the package. Initial attempts to identify the failure with conventional radiographic analysis were unsuccessful. Arguably the most difficult part of the procedure is identifying the physical location of the short with a high enough degree of confidence to permit destructive techniques to be used to reveal the shorting material.
Fortunately, two analytical techniques are now available that can significantly increase the effectiveness of the fault localization process. Superconducting Quantum Interference Device (SQUID) Detection One characteristic that all shorts have in common is the movement of electrons from a high potential to a lower one. This physical movement of the electrical charge creates a small magnetic field around the electron. With enough electrons moving, the aggregate magnetic field can be detected by superconducting sensors. Instruments equipped with such sensors can follow the path of a short circuit along its course through a part. The SQUID detector has been used in failure analysis for many years, and is now commercially available for use at the package level. The ability of SQUID to track the flow of current provides a virtual roadmap of the short, including the location in plan view of the shorting material in a package. We used the SQUID facilities at Neocera to investigate the failure in the package of interest, with pins carrying 1.47 milliamps at 2 volts. SQUID analysis of the part revealed a clear current path between the two pins of interest, including the location of the conductive material that bridged the two pins. The SQUID scan of the part is shown in Figure 1. Low-power radiography The second fault location technique will be taken somewhat out of turn, as it was used to characterize this failure after the SQUID analysis, as an evaluation sample for an equipment vendor. The ability to focus and resolve low-power X-rays and detect their presence or absence has improved to the point that radiography can now be used to identify features heretofore impossible to detect. The equipment at Xradia was used to inspect the failure of interest in this analysis. An example of their findings is shown in Figure 2. The feature shown (which is also the material responsible for the failure) is a copper filament approximately three micrometres wide in cross-section, which was impossible to resolve in our in-house radiography equipment. The principal drawback of this technique is that the depth of field is extremely short, requiring many ‘cuts’ on a given specimen to detect very small particles or filaments. At the high magnification required to resolve micrometre-sized features, the technique can become prohibitively expensive in both time and money to perform. In effect, to get the most out of it, the analyst really needs to know already where the failure is located. This makes low-power radiography a useful supplement to SQUID, but not a generally effective replacement for it. It would likely best be used immediately after SQUID to characterize morphology and depth of the shorting material once SQUID had pinpointed its location. Short in a 3D Package Examination of the module shown in Figure 1 in the Failure Analysis Laboratory found no external evidence of the failure. Coordinate axes of the device were chosen as shown in Figure 1. Radiography was performed on the module in three orthogonal views: side, end, and top-down; as shown in Figure 2. For purposes of this paper the top-down X-ray view shows the x-y plane of the module. The side view shows the x-z plane, and the end view shows the y-z plane. No anomalies were noted in the radiographic images. Excellent alignment of components on the mini-boards permitted an uncluttered top-down view of the mini-circuit boards. 
The internal construction of the module was seen to consist of eight stacked mini-boards, each with a single microcircuit and capacitor. The mini-boards connected with the external module pins using the gold-plated exterior of the package. External inspection showed that laser-cut trenches created an external circuit on the device, which is used to enable, read, or write to any of the eight EEPROM devices in the encapsulated vertical stack. Regarding nomenclature, the laser-trenched gold panels on the exterior walls of the package were labeled with the pin numbers. The eight miniboards were labeled TSOP01 through TSOP08, beginning at the bottom of the package near the device pins. Pin-to-pin electrical testing confirmed that Vcc Pins 12, 13, 14, and 15 were electrically common, presumably through the common exterior gold panel on the package wall. Likewise, Vss Pins 24, 25, 26, and 27 were common. Comparison to the X-ray images showed that these four pins funneled into a single wide trace on the mini-boards. All of the Vss pins were shorted to the Vcc pins with a resistance determined by the I-V slope at approximately 1.74 ohms, the low resistance indicating something other than an ESD defect. Similarly, electrical overstress was considered an unlikely cause of failure as the part had not been under power since the time it was qualified at the factory. The three-dimensional geometry of the EEPROM module suggested the use of magnetic current imaging (MCI) on three or more flat sides in order to construct the current path of the short within the module. As noted, the coordinate axes selected for this analysis are shown in Figure 1. Magnetic Current Imaging SQUIDs are the most sensitive magnetic sensors known. This allows one to scan currents of 500 nA at a working distance of about 400 micrometres. As for all near field situations, the resolution is limited by the scanning distance or, ultimately, by the sensor size (typical SQUIDs are about 30 μm wide), although software and data acquisition improvements allow locating currents within 3 micrometres. To operate, the SQUID sensor must be kept cool (about 77 K) and in vacuum, while the sample, at room temperature, is raster-scanned under the sensor at some working distance z, separated from the SQUID enclosure by a thin, transparent diamond window. This allows one to reduce the scanning distance to tens of micrometres from the sensor itself, improving the resolution of the tool. The typical MCI sensor configuration is sensitive to magnetic fields in the perpendicular z direction (i.e., sensitive to the in-plane xy current distribution in the DUT). This does not mean that we are missing vertical information; in the simplest situation, if a current path jumps from one plane to another, getting closer to the sensor in the process, this will be revealed as stronger magnetic field intensity for the section closer to the sensor and also as higher intensity in the current density map. This way, vertical information can be extracted from the current density images. Further details about MCI can be found elsewhere. See also Josephson Effect BCS theory Low-Temperature Physics SQUID Failure analysis Semiconductor References External links John Kirtley, one of the pioneers in scanning SQUID microscopy. Design and applications of a scanning SQUID microscope Center for Superconductivity Research, University of Maryland Neocera LLC Josephson effect Measuring instruments Microscopy Scanning probe microscopy Superconductivity
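As a closing illustration of magnetic current imaging, the sketch below (Python) treats a buried trace as an infinite straight wire, an assumption made purely for simplicity. It reproduces the ~20 pT field scale for the 10 nA at 100 μm case quoted earlier in the article and shows how the out-of-plane field profile broadens as the scan height grows, which is the near-field resolution limit mentioned above:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def bz_infinite_wire(x, z, current):
    """Out-of-plane field component measured at height z above an infinite wire along y.

    The wire lies at x = 0 in the z = 0 plane. B_z is the z-projection of the
    circular field B = mu0*I/(2*pi*rho) around the wire; the overall sign
    depends on the chosen current direction.
    """
    return MU0 * current * x / (2 * np.pi * (x**2 + z**2))

# Order-of-magnitude check: total field magnitude at closest approach for
# the 10 nA / 100 um case quoted earlier in the article.
B_peak = MU0 * 10e-9 / (2 * np.pi * 100e-6)
print(f"10 nA at 100 um -> about {B_peak * 1e12:.0f} pT")   # ~20 pT

# The B_z profile across the wire broadens as the scan height increases,
# which is why the raw spatial resolution is set by the sensor-to-current distance.
x = np.linspace(-500e-6, 500e-6, 1001)
for z in (100e-6, 200e-6, 400e-6):
    profile = bz_infinite_wire(x, z, 10e-9)
    extrema_sep = abs(x[np.argmax(profile)] - x[np.argmin(profile)])
    print(f"scan height {z*1e6:3.0f} um: B_z extrema separated by {extrema_sep*1e6:.0f} um")
```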
Scanning SQUID microscopy
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
5,598
[ "Josephson effect", "Physical quantities", "Superconductivity", "Materials science", "Measuring instruments", "Condensed matter physics", "Scanning probe microscopy", "Microscopy", "Nanotechnology", "Electrical resistance and conductance" ]
20,671,794
https://en.wikipedia.org/wiki/Open-system%20environment%20reference%20model
Open-system environment (OSE) reference model (RM) or OSE reference model (OSE/RM) is a 1990 reference model for enterprise architecture. It provides a framework for describing open system concepts and defining a lexicon of terms that can be agreed upon generally by all interested parties. This reference model is meant as an environment model, complementary to the POSIX architecture for open systems. It offers an extensible framework that allows services, interfaces, protocols, and supporting data formats to be defined in terms of nonproprietary specifications that evolve through open (public), consensus-based forums. This reference model served in the 1990s as a basic building block of several technical reference models and technical architectures. In 1996 this reference model was standardized as ISO/IEC TR 14252, titled "Information technology – Guide to the POSIX Open System Environment (OSE)". History The development of the open-system environment reference model was started in the early 1990s by NIST as a refinement of the POSIX (Portable Operating System Interface) standard. POSIX is a standard for maintaining compatibility between operating systems, and addresses interoperation for communications, computing, and entertainment infrastructure. Its development was started in the late 1980s by the POSIX Working Group 1003.0 of the Institute of Electrical and Electronics Engineers (IEEE). NIST hosts workshops and conducts other support activities to assist users in addressing open systems requirements, preparing for the use of new technology, and identifying the international, national, industry and other open specifications that are available for building open systems frameworks, such as the government's applications portability profile for the open-system environment. NIST sponsors the semiannual Users' Forum on Application Portability Profile (APP) and Open System Environment (OSE) to exchange information and respond to NIST proposals regarding the evaluation and adoption of an integrated set of standards to support the APP and OSE. The quarterly Open Systems Environment Implementors' Workshop (OIW), co-sponsored by NIST and the Institute of Electrical and Electronics Engineers (IEEE) Computer Society, provides a public international technical forum for the timely development of implementation agreements based on emerging OSE standards. OSE/RM topics The open-system environment (OSE) forms an extensible framework that allows services, interfaces, protocols, and supporting data formats to be defined in terms of nonproprietary specifications that evolve through open (public), consensus-based forums. A selected suite of specifications that defines these interfaces, services, protocols, and data formats for a particular class or domain of applications is called a profile. Two types of elements are used in the model: entities consisting of the application software, application platform, and platform external environment; and interfaces including the application program interface and external environment interface. APP service areas The Application Portability Profile (APP) is an OSE profile designed for use by the U.S. Government. It covers a broad range of application software domains of interest to many Federal agencies, but it does not include every domain within the U.S. Government's application inventory. The individual standards and specifications in the APP define data formats, interfaces, protocols, or a mix of these elements. 
The services defined in the APP tend to fall into broad service areas. These service areas are: Operating system services (OS) Human/computer interface services (HCI) Data management services (DM) Data interchange services (DI) Software engineering services (SWE) Graphics services (GS) Network services (NS) Each service area is defined in the following sections. The figure illustrates how each of these service areas relates to the OSE/RM. Assume that software engineering services are applicable in all areas. Each of the APP service areas addresses specific components around which interface, data format, or protocol specifications have been or will be defined. Security and management services are common to all of the service areas and pervade these areas in one or more forms. Classes of interfaces There are two classes of interfaces in the OSE reference model: the application program interface and the external environment interface: Application programming interface (API): The API is the interface between the application software and the application platform. Its primary function is to support portability of application software. An API is categorized in accordance with the types of service accessible via that API. There are four types of API services in the OSE/RM: Human/computer interface services Information interchange services Communication services Internal system services External environment interface (EEI): The EEI is the interface that supports information transfer between the application platform and the external environment, and between applications executing on the same platform. Consisting chiefly of protocols and supporting data formats, the EEI supports interoperability to a large extent. An EEI is categorized in accordance with the type of information transfer services provided. OSE profile A profile consists of a selected list of standards and other specifications that define a complement of services made available to applications in a specific domain. Examples of domains might include a workstation environment, an embedded process control environment, a distributed environment, a transaction processing environment, or an office automation environment, to name a few. Each of these environments has a different cross-section of service requirements that can be specified independently from the others. Each service, however, is defined in a standard form across all environments. An OSE profile is composed of a selected list of open (public), consensus-based standards and specifications that define services in the OSE/RM. Restricting a profile to a specific domain or group of domains that are of interest to an individual organization results in the definition of an organizational profile. OSE reference model entities The three classes of OSE reference model entities are described as follows: Application software: Within the context of the OSE Reference Model, the application software includes data, documentation, and training, as well as programs. Application platform: The application platform is composed of the collection of hardware and software components that provide the generic application and system services. Platform external environment: The platform external environment consists of those system elements that are external to the application software and the application platform (e.g., services provided by other platforms or peripheral devices). Types of information transfer services There are three types of information transfer services. 
These are transfer services to and from: Human users External data stores Other application platforms In its simplest form, the OSE/RM illustrates a straightforward user-supplier relationship: the application software is the user of services and the application platform/external environment entities are the suppliers. The API and EEI define the services that are provided. Applications The open-system environment model is a basic building block of several technical reference models and technical architectures. A technical architecture identifies and describes the types of applications, platforms, and external entities; their interfaces; and their services; as well as the context within which the entities interoperate. A technical architecture is based on: a Technical Reference Model (TRM); and the selected standards that further describe the TRM elements (the profile). The technical architecture is the basis for selecting and implementing the infrastructure to establish the target architecture. A technical reference model can be defined as a taxonomy of services arranged according to a conceptual model, such as the Open System Environment model. The enumerated services are specific to those needed to support the technology computing style (e.g., distributed object computing) and the industry/business application needs (e.g., Human Services, financial). See also Enterprise architecture framework Federal enterprise architecture GERAM TAFIM TOGAF References Further reading Department of Defense (1996). Technical Architecture Framework for Information Management. Vol. 2, Technical Reference Model. Defense Information Systems Agency (2001). DoD Technical Reference Model, Version 2.0, April 9, 2001. Gary Fisher (1993). Application Portability Profile (APP): The U.S. Government's Open System Environment Profile OSE/1 Version 2.0. NIST Special Publication 500-210, June 1993. IEEE P1003.22 Draft Guide for POSIX Open Systems Environment—A Security Framework Reference models Enterprise modelling
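Since a profile is just a selected list of open specifications organized by service area, it can be illustrated as a simple data structure. The sketch below is purely illustrative: the service-area names follow the APP list above, but the specific standards shown are examples chosen here for the illustration, not a reproduction of the actual APP tables.

```python
# Illustrative only: an "organizational profile" as a mapping from APP
# service areas to the open specifications an organization has selected.
workstation_profile = {
    "Operating system services (OS)":          ["ISO/IEC 9945 (POSIX)"],
    "Human/computer interface services (HCI)": ["X Window System"],
    "Data management services (DM)":           ["SQL"],
    "Data interchange services (DI)":          ["SGML", "CGM"],
    "Software engineering services (SWE)":     ["Ada", "C"],
    "Graphics services (GS)":                  ["GKS", "PHIGS"],
    "Network services (NS)":                   ["OSI transport", "TCP/IP"],
}

def service_areas_covered(profile):
    """Return the service areas for which at least one specification is selected."""
    return [area for area, specs in profile.items() if specs]

print(f"{len(service_areas_covered(workstation_profile))} of 7 APP service areas covered")
```

Restricting such a mapping to the domains an organization cares about is, in essence, what the text above calls an organizational profile.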
Open-system environment reference model
[ "Engineering" ]
1,658
[ "Systems engineering", "Enterprise modelling" ]
12,728,056
https://en.wikipedia.org/wiki/Pomeranz%E2%80%93Fritsch%20reaction
The Pomeranz–Fritsch reaction, also named Pomeranz–Fritsch cyclization, is a named reaction in organic chemistry. It is named after Paul Fritsch (1859–1913) and Cäsar Pomeranz (1860–1926). In general it is a synthesis of isoquinoline. General reaction scheme The reaction below shows the acid-promoted synthesis of isoquinoline from benzaldehyde and a 2,2-dialkoxyethylamine. Various alkyl groups, e.g. methyl and ethyl groups, can be used as substituent R. In the archetypical reaction sulfuric acid was used as the proton donor, but Lewis acids such as trifluoroacetic anhydride and lanthanide triflates have been used occasionally. Later, a wide range of diverse isoquinolines were successfully prepared. Reaction mechanism A possible mechanism is depicted below: First the benzalaminoacetal 1 is formed by the condensation of benzaldehyde and a 2,2-dialkoxyethylamine. After the condensation, one of the alkoxy groups is protonated. Subsequently, an alcohol is eliminated, giving compound 2. The compound is then protonated a second time. In the last step a second alcohol is eliminated and the bicyclic system becomes aromatic. Applications The Pomeranz–Fritsch reaction has general application in the preparation of isoquinoline derivatives. Isoquinolines find many applications, including topical anesthetics such as dimethisoquin and vasodilators, a well-known example being papaverine, shown below. See also Bischler–Napieralski reaction Pictet–Spengler reaction References Nitrogen heterocycle forming reactions Heterocycle forming reactions Name reactions Isoquinolines
Pomeranz–Fritsch reaction
[ "Chemistry" ]
402
[ "Name reactions", "Ring forming reactions", "Heterocycle forming reactions", "Organic reactions" ]
22,146,849
https://en.wikipedia.org/wiki/Eccentric%20reducer
An eccentric reducer is a fitting used in piping systems between two pipes of different diameters. The same fitting can be used in reverse as an eccentric increaser or expander. They are used where the diameter of the pipe on the upstream side of the fitting (i.e. where flow is coming from) is larger than the downstream side, and where there is a danger that vapour may accumulate. Unlike a concentric reducer, which resembles a cone, eccentric reducers have an edge that is parallel to the connecting pipe, referred to as the flat side. This parallel edge results in the two pipes having offset center lines. Because eccentric reducers are asymmetrical, they create asymmetrical flow conditions; flow is faster along the angled side, resulting in locally reduced pressure. Horizontal liquid reducers are always eccentric, with the flat side on top, which prevents the build-up of air bubbles in the system, unless they are installed on a control set (e.g. PV, TV, HV, LV) or in a pipe rack. In a pipe rack, the flat side of an eccentric reducer is on the bottom, so that the position of the bottom of the pipe will be constant and supported by the rack. Eccentric reducers are used at the suction side of pumps to ensure air does not accumulate in the pipe. The gradual accumulation of air in a concentric reducer could result in a large bubble that could eventually cause the pump to stall or cause cavitation when drawn into the pump. Eccentric reducers have a design with one side having a larger diameter and the other side being smaller and offset from the centerline. This offset configuration enables the eccentric reducer to maintain a consistent fluid level within the piping system, preventing air or gas accumulation. Horizontal gas reducers are always eccentric, with the flat side on the bottom, which allows condensed water or oil to drain at low points. Reducers in vertical lines are generally concentric unless the layout dictates otherwise. See also Reducer Rotodynamic pump References Pumps Piping
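The velocity increase through a reducer follows directly from continuity, and the corresponding drop in static pressure from Bernoulli's equation when friction and fitting losses are ignored. The sketch below is an idealized estimate only; the pipe sizes, flow rate, and water density are assumed example values, and a real fitting would add a loss term.

```python
import math

def reducer_velocity_pressure(d_large, d_small, flow_rate, density):
    """Ideal velocity and static-pressure change across a reducer.

    d_large, d_small : pipe inside diameters, m
    flow_rate        : volumetric flow rate, m^3/s
    density          : fluid density, kg/m^3
    Returns (v_large, v_small, pressure_drop), where pressure_drop = p1 - p2 in Pa,
    from continuity (v = Q/A) and loss-free Bernoulli.
    """
    a_large = math.pi * d_large**2 / 4
    a_small = math.pi * d_small**2 / 4
    v_large = flow_rate / a_large
    v_small = flow_rate / a_small
    pressure_drop = 0.5 * density * (v_small**2 - v_large**2)
    return v_large, v_small, pressure_drop

# Assumed example: a DN100-to-DN80 reducer carrying 25 m^3/h of water
v1, v2, dp = reducer_velocity_pressure(0.1023, 0.0779, 25 / 3600, 998.0)
print(f"upstream {v1:.2f} m/s, downstream {v2:.2f} m/s, "
      f"ideal static pressure drop {dp:.0f} Pa")
```

The same relation run in reverse (expander) shows pressure recovering as the flow slows, which is why trapped gas and cavitation concerns differ between the suction and discharge sides of a pump.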
Eccentric reducer
[ "Physics", "Chemistry", "Engineering" ]
417
[ "Pumps", "Turbomachinery", "Building engineering", "Chemical engineering", "Physical systems", "Hydraulics", "Mechanical engineering", "Piping" ]
22,148,935
https://en.wikipedia.org/wiki/Manumation
Manumation is the automation of paper-based processes in the public sector and business without any improvement in their efficiency. In the case of manumation, automating an inefficient process does not make the process any better. The term can be seen as a sarcastic description of the digital replication and mimicking of frequently ineffective and even broken paper-based processes during the first phase of societal digitalisation, from 1995 to 2015. Manumation is also a term for automated systems that require more manual work than the original manual process. Definitions Examples Computerized transaction processing is the automation of previously manual transactions. See also Semi-automation References Impact of automation Systems analysis
Manumation
[ "Engineering" ]
132
[ "Impact of automation", "Automation" ]
22,149,789
https://en.wikipedia.org/wiki/Weingarten%20function
In mathematics, Weingarten functions are rational functions indexed by partitions of integers that can be used to calculate integrals of products of matrix coefficients over classical groups. They were first studied by Weingarten, who found their asymptotic behavior, and named by Collins, who evaluated them explicitly for the unitary group. Unitary groups Weingarten functions are used for evaluating integrals over the unitary group Ud of products of matrix coefficients of the form where denotes complex conjugation. Note that where is the conjugate transpose of , so one can interpret the above expression as being for the matrix element of . This integral is equal to where Wg is the Weingarten function, given by where the sum is over all partitions λ of q. Here χλ is the character of Sq corresponding to the partition λ and s is the Schur polynomial of λ, so that sλd(1) is the dimension of the representation of Ud corresponding to λ. The Weingarten functions are rational functions in d. They can have poles for small values of d, which cancel out in the formula above. There is an alternative inequivalent definition of Weingarten functions, where one only sums over partitions with at most d parts. This is no longer a rational function of d, but is finite for all positive integers d. The two sorts of Weingarten functions coincide for d larger than q, and either can be used in the formula for the integral. Values of the Weingarten function for simple permutations The first few Weingarten functions Wg(σ, d) are (the trivial case being q = 0) where permutations σ are denoted by their cycle shapes. There exist computer algebra programs to produce these expressions. Explicit expressions for the integrals in the first cases The explicit expressions for the integrals of first- and second-degree polynomials, obtained via the formula above, are: Asymptotic behavior For large d, the Weingarten function Wg has the asymptotic behavior where the permutation σ is a product of cycles of lengths Ci, cn = (2n)!/(n!(n + 1)!) is a Catalan number, and |σ| is the smallest number of transpositions that σ is a product of. There exists a diagrammatic method to systematically calculate the integrals over the unitary group as a power series in 1/d. Orthogonal and symplectic groups For orthogonal and symplectic groups the Weingarten functions were evaluated by Collins and Śniady. Their theory is similar to the case of the unitary group. They are parameterized by partitions such that all parts have even size. External links References Random matrices Mathematical physics
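The integration formula and the definition of Wg referred to above do not appear to have survived extraction, so the standard statements are restated here as a gloss (the usual form found in the literature, not a quotation of the original article). For the unitary group,

\[
\int_{U(d)} U_{i_1 j_1}\cdots U_{i_q j_q}\,\overline{U_{i'_1 j'_1}}\cdots\overline{U_{i'_q j'_q}}\,dU
= \sum_{\sigma,\tau\in S_q}\delta_{i_1 i'_{\sigma(1)}}\cdots\delta_{i_q i'_{\sigma(q)}}\,
\delta_{j_1 j'_{\tau(1)}}\cdots\delta_{j_q j'_{\tau(q)}}\,\mathrm{Wg}(\tau\sigma^{-1},d),
\]

with

\[
\mathrm{Wg}(\sigma,d)=\frac{1}{(q!)^2}\sum_{\lambda\vdash q}\frac{\chi^{\lambda}(1)^2\,\chi^{\lambda}(\sigma)}{s_{\lambda,d}(1)}.
\]

The simplest cases are \( \mathrm{Wg}([1],d)=1/d \) for \(q=1\), and, for \(q=2\), \( \mathrm{Wg}([1,1],d)=1/(d^2-1) \) and \( \mathrm{Wg}([2],d)=-1/\bigl(d(d^2-1)\bigr) \); in particular \( \int_{U(d)}U_{ij}\overline{U_{kl}}\,dU=\delta_{ik}\delta_{jl}/d \).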
Weingarten function
[ "Physics", "Mathematics" ]
558
[ "Random matrices", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Matrices (mathematics)", "Statistical mechanics", "Mathematical physics" ]
19,474,055
https://en.wikipedia.org/wiki/Gauss%20circle%20problem
In mathematics, the Gauss circle problem is the problem of determining how many integer lattice points there are in a circle centered at the origin and with radius r. This number is approximated by the area of the circle, so the real problem is to accurately bound the error term describing how the number of points differs from the area. The first progress on a solution was made by Carl Friedrich Gauss, hence its name. The problem Consider a circle in the plane with center at the origin and radius r. Gauss's circle problem asks how many points there are inside this circle of the form (m, n) where m and n are both integers. Since the equation of this circle is given in Cartesian coordinates by x^2 + y^2 = r^2, the question is equivalently asking how many pairs of integers m and n there are such that m^2 + n^2 ≤ r^2. If the answer for a given r is denoted by N(r), then the following list shows the first few values of N(r) for r an integer between 0 and 12, followed by the list of values πr^2 rounded to the nearest integer: 1, 5, 13, 29, 49, 81, 113, 149, 197, 253, 317, 377, 441 0, 3, 13, 28, 50, 79, 113, 154, 201, 254, 314, 380, 452 Bounds on a solution and conjecture N(r) is roughly πr^2, the area inside a circle of radius r. This is because on average, each unit square contains one lattice point. Thus, the actual number of lattice points in the circle is approximately equal to its area, πr^2. So it should be expected that N(r) = πr^2 + E(r) for some error term E(r) of relatively small absolute value. Finding a correct upper bound for |E(r)| is thus the form the problem has taken. Note that r does not have to be an integer. At the radii where new lattice points appear, E(r) jumps upward by the number of new points, after which it decreases (at a rate of 2πr) until the next time it increases. Gauss managed to prove that |E(r)| ≤ 2√2 πr. Hardy and, independently, Landau found a lower bound by showing that E(r) = o(r^{1/2} (log r)^{1/4}) is impossible, using the little o-notation. It is conjectured that the correct bound is E(r) = O(r^{1/2+ε}). Writing E(r) = O(r^t), the current bounds on t are 1/2 ≤ t ≤ 131/208, with the lower bound from Hardy and Landau in 1915, and the upper bound proved by Martin Huxley in 2000. Exact forms The value of N(r) can be given by several series. In terms of a sum involving the floor function it can be expressed as: N(r) = 1 + 4 Σ_{i ≥ 0} (⌊r^2/(4i + 1)⌋ − ⌊r^2/(4i + 3)⌋). This is a consequence of Jacobi's two-square theorem, which follows almost immediately from the Jacobi triple product. A much simpler sum appears if the sum of squares function r_2(n) is defined as the number of ways of writing the number n as the sum of two squares. Then N(r) = Σ_{0 ≤ n ≤ r^2} r_2(n). Most recent progress rests on the following identity, which was first discovered by Hardy: Σ_{0 ≤ n ≤ x} r_2(n) = πx + Σ_{n ≥ 1} r_2(n) (x/n)^{1/2} J_1(2π√(nx)) (with the last term of the left-hand sum halved when x is an integer), where J_1 denotes the Bessel function of the first kind with order 1. Generalizations Although the original problem asks for integer lattice points in a circle, there is no reason not to consider other shapes, for example conics; indeed Dirichlet's divisor problem is the equivalent problem where the circle is replaced by the rectangular hyperbola. Similarly one could extend the question from two dimensions to higher dimensions, and ask for integer points within a sphere or other objects. There is an extensive literature on these problems. If one ignores the geometry and merely considers the problem an algebraic one of Diophantine inequalities, then there one could increase the exponents appearing in the problem from squares to cubes, or higher. The dot planimeter is a physical device for estimating the area of shapes based on the same principle. It consists of a square grid of dots, printed on a transparent sheet; the area of a shape can be estimated as the product of the number of dots in the shape with the area of a grid square. 
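The counting function is easy to check numerically. The short sketch below is a quick illustration added here, not part of the original article: it counts lattice points by brute force and also evaluates the floor-function series quoted above, confirming the values 1, 5, 13, 29, ... listed for integer radii.

```python
import math

def lattice_count_brute(r):
    """Number of integer points (m, n) with m^2 + n^2 <= r^2."""
    rf = math.floor(r)
    return sum(1 for m in range(-rf, rf + 1)
                 for n in range(-rf, rf + 1)
                 if m * m + n * n <= r * r)

def lattice_count_series(r):
    """Same count via N(r) = 1 + 4 * sum_i (floor(r^2/(4i+1)) - floor(r^2/(4i+3)))."""
    r2 = r * r
    total, i = 0, 0
    while 4 * i + 1 <= r2:
        total += r2 // (4 * i + 1) - r2 // (4 * i + 3)
        i += 1
    return 1 + 4 * total

for r in range(13):
    assert lattice_count_brute(r) == lattice_count_series(r)
print([lattice_count_series(r) for r in range(13)])
# [1, 5, 13, 29, 49, 81, 113, 149, 197, 253, 317, 377, 441]
```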
The primitive circle problem Another generalization is to calculate the number of coprime integer solutions (m, n) to the inequality m^2 + n^2 ≤ r^2. This problem is known as the primitive circle problem, as it involves searching for primitive solutions to the original circle problem. It can be intuitively understood as the question of how many trees within a distance of r are visible in Euclid's orchard, standing at the origin. If the number of such solutions is denoted V(r), then the values of V(r) for r taking small integer values are 0, 4, 8, 16, 32, 48, 72, 88, 120, 152, 192 … . Using the same ideas as the usual Gauss circle problem and the fact that the probability that two integers are coprime is 6/π^2, it is relatively straightforward to show that the main term of V(r) is 6r^2/π. As with the usual circle problem, the problematic part of the primitive circle problem is reducing the exponent in the error term. At present, the best known exponent is if one assumes the Riemann hypothesis. Without assuming the Riemann hypothesis, the best upper bound currently known is for a positive constant . In particular, no bound on the error term of the form r^(1−ε) for any ε > 0 is currently known that does not assume the Riemann Hypothesis. Notes External links Arithmetic functions Lattice points Unsolved problems in mathematics Circles
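As a quick numerical illustration, added here rather than taken from the article, the primitive count can be computed directly and compared with the leading term 6r^2/π implied by the coprimality density 6/π^2; the helper below is a sketch only.

```python
import math

def primitive_count(r):
    """Number of coprime integer pairs (m, n) with m^2 + n^2 <= r^2."""
    rf = math.floor(r)
    return sum(1 for m in range(-rf, rf + 1)
                 for n in range(-rf, rf + 1)
                 if m * m + n * n <= r * r and math.gcd(abs(m), abs(n)) == 1)

for r in (1, 2, 3, 10, 50):
    print(r, primitive_count(r), round(6 * r * r / math.pi, 1))
# V(1)=4, V(2)=8, V(3)=16, ... and V(r)/(6 r^2 / pi) tends to 1 as r grows
```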
Gauss circle problem
[ "Mathematics" ]
988
[ "Unsolved problems in mathematics", "Lattice points", "Arithmetic functions", "Circles", "Mathematical problems", "Pi", "Number theory" ]
19,479,876
https://en.wikipedia.org/wiki/Conventional%20electrical%20unit
A conventional electrical unit (or conventional unit where there is no risk of ambiguity) is a unit of measurement in the field of electricity which is based on the so-called "conventional values" of the Josephson constant and the von Klitzing constant agreed by the International Committee for Weights and Measures (CIPM) in 1988, as well as ΔνCs, used to define the second. These units are very similar in scale to their corresponding SI units, but are not identical because of the different values used for the constants. They are distinguished from the corresponding SI units by setting the symbol in italic typeface and adding a subscript "90" – e.g., the conventional volt has the symbol V90 – as they came into international use on 1 January 1990. This system was developed to increase the precision of measurements: The Josephson and von Klitzing constants can be realized with great precision, repeatability and ease, and are exactly defined in terms of the universal constants e and h. The conventional electrical units represent a significant step towards using "natural" fundamental physics for practical measurement purposes. They achieved acceptance as an international standard in parallel to the SI system of units and are commonly used outside of the physics community in both engineering and industry. Addition of the constant c would be needed to define units for all dimensions used in physics, as in the SI. The SI system made the transition to equivalent definitions 29 years later but with values of the constants defined to match the old SI units more precisely. Consequently, the conventional electrical units differ slightly from the corresponding SI units, now with exactly defined ratios. Historical development Several significant steps have been taken in the last half century to increase the precision and utility of measurement units: In 1967, the thirteenth General Conference on Weights and Measures (CGPM) defined the second of atomic time in the International System of Units as the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom. In 1983, the seventeenth CGPM redefined the metre in terms of the second and the speed of light, thus fixing the speed of light at exactly 299 792 458 m/s. In 1988, the CIPM recommended adoption of conventional values for the Josephson constant as exactly 483 597.9 GHz/V and for the von Klitzing constant as exactly 25 812.807 Ω as of 1 January 1990. In 1991, the eighteenth CGPM noted the conventional values for the Josephson constant and the von Klitzing constant. In 2000, the CIPM approved the use of the quantum Hall effect, with the conventional value of the von Klitzing constant to be used to establish a reference standard of resistance. In 2018, the twenty-sixth CGPM resolved to abrogate the conventional values of the Josephson and von Klitzing constants with the 2019 revision of the SI. Definition Conventional electrical units are based on defined values of the caesium-133 hyperfine transition frequency, the Josephson constant and the von Klitzing constant, the first two of which allow a very precise practical measurement of time and electromotive force, and the last of which allows a very precise practical measurement of electrical resistance. The conventional volt, V90, is the electromotive force (or electric potential difference) measured against a Josephson effect standard using the defined value of the Josephson constant, KJ–90; that is, by the relation KJ = KJ–90. See Josephson voltage standard. 
The conventional ohm, Ω90, is the electrical resistance measured against a quantum Hall effect standard using the defined value of the von Klitzing constant, RK–90; that is, by the relation RK = RK–90. Other conventional electrical units are defined by the normal relationships between units, paralleling those of SI. Conversion to SI units The 2019 revision of the SI defines all these units in a way that fixes the numeric values of KJ, RK and ΔνCs exactly, albeit with values of the first two that differ slightly from the conventional values. Consequently, these conventional units all have known exact values in terms of the redefined SI units. Because of this, there is no accuracy benefit from maintaining the conventional values. See also ITS-90 References External links History of the electrical units. Electromagnetism Metrology Systems of units
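For reference, the defining numbers referred to above, which seem to have been lost in extraction, are, in the usual notation (restated here from the generally quoted 1990 conventions rather than from the article's own table):

\[
K_{J\text{--}90} = 483\,597.9\ \mathrm{GHz/V}\ \text{(exact)},\qquad
R_{K\text{--}90} = 25\,812.807\ \Omega\ \text{(exact)}.
\]

A Josephson junction array driven at frequency \(f\) on step \(n\) then realizes the conventional volt through \(U_{90}=n f/K_{J\text{--}90}\), and a quantum Hall device on plateau \(i\) realizes the conventional ohm through \(R_{90}=R_{K\text{--}90}/i\); these are the standard realization relations for the 1990 conventions.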
Conventional electrical unit
[ "Physics", "Mathematics" ]
858
[ "Electromagnetism", "Physical phenomena", "Systems of units", "Quantity", "Fundamental interactions", "Units of measurement" ]
19,480,112
https://en.wikipedia.org/wiki/Arctic%20methane%20emissions
Arctic methane emissions contribute to a rise in methane concentrations in the atmosphere. Whilst the Arctic region is one of many natural sources of the greenhouse gas methane, there is nowadays also a human component to this due to the effects of climate change. In the Arctic, the main human-influenced sources of methane are thawing permafrost, Arctic sea ice melting, clathrate breakdown and Greenland ice sheet melting. This methane release results in a positive climate change feedback (meaning one that amplifies warming), as methane is a powerful greenhouse gas. When permafrost thaws due to global warming, large amounts of organic material can become available for methanogenesis and may therefore be released as methane. Since around 2018, there have been consistent increases in global levels of methane in the atmosphere, with the 2020 increase of 15.06 parts per billion breaking the previous record increase of 14.05 ppb set in 1991, and 2021 setting an even larger increase of 18.34 ppb. However, there is currently no evidence connecting the Arctic to this recent acceleration. In fact, a 2021 study indicated that the methane contributions from the Arctic were generally overestimated, while the contributions of tropical regions were underestimated. Nevertheless, the Arctic's role in global methane trends is considered very likely to increase in the future. There is evidence for increasing methane emissions since 2004 from a Siberian permafrost site into the atmosphere linked to warming. Mitigation of CO2 emissions by 2050 (i.e. reaching net zero emissions) is probably not enough to stop the future disappearance of summer Arctic Ocean ice cover. Mitigation of methane emissions is also necessary and has to be carried out over an even shorter period of time. Such mitigation activities need to be carried out in three main sectors: oil and gas, waste and agriculture. Using available measures, this could amount to global reductions of ca. 180 Mt/yr, or about 45% of the current (2021) emissions, by 2030. Observed values and processes NOAA annual records for methane concentrations in the atmosphere have been updated since 1984. They show substantial growth during the 1980s, a slowdown in annual growth during the 1990s, a plateau (including some years of declining atmospheric concentrations) in the early 2000s and another consistent increase beginning in 2007. Since around 2018, there have been consistent annual increases in global levels of methane, with the 2020 increase of 15.06 parts per billion breaking the previous record increase of 14.05 ppb set in 1991, and 2021 setting an even larger increase of 18.34 ppb. Due to the relatively short lifetime of atmospheric methane (7–12 years, compared to hundreds of years for CO2), its global trends are more complex than those of carbon dioxide. These trends alarm climate scientists, with some suggesting that they represent a climate change feedback increasing natural methane emissions well beyond their preindustrial levels. However, there is currently no evidence connecting the Arctic to this recent acceleration. In fact, a 2021 study indicated that the role of the Arctic was typically overestimated in global methane accounting, while the role of tropical regions was consistently underestimated. 
The study suggested that tropical wetland methane emissions were the culprit behind the recent growth trend, and this hypothesis was reinforced by a 2022 paper connecting tropical terrestrial emissions to 80% of the global atmospheric methane trends between 2010 and 2019. Nevertheless, the Arctic's role in global methane trends is considered very likely to increase in the future. There is evidence for increasing methane emissions since 2004 from a Siberian permafrost site into the atmosphere linked to warming. Radiocarbon dating of trace methane in lake bubbles and soil organic carbon concluded that 0.2 to 2.5 Pg of permafrost carbon has been released as methane and carbon dioxide over the last 60 years. The 2020 heat wave may have released significant methane from carbonate deposits in Siberian permafrost. Methane emissions by the permafrost carbon feedback—amplification of surface warming due to enhanced radiative forcing by carbon release from permafrost—could contribute an estimated 205 Gt of carbon emissions, leading up to 0.5 °C (0.9 °F) of additional warming by the end of the 21st century. However, recent research based on the carbon isotopic composition of atmospheric methane trapped in bubbles in Antarctic ice suggests that methane emissions from permafrost and methane hydrates were minor during the last deglaciation, suggesting that future permafrost methane emissions may be lower than previously estimated. Comparison of Arctic and Antarctic atmosphere measurements Atmospheric methane concentrations are 8–10% higher in the Arctic than in the Antarctic atmosphere. During cold glacier epochs, this gradient decreases to insignificant levels. Land ecosystems are thought to be the main sources of this asymmetry, although it has been suggested in 2007 that "the role of the Arctic Ocean is significantly underestimated." Soil temperature and moisture levels are important variables in soil methane fluxes in tundra environments. Sources of methane in the Arctic Large quantities of methane are stored in the Arctic in natural gas deposits, permafrost, and as undersea clathrates. Permafrost and clathrates degrade on warming, thus large releases of methane from these sources may arise as a result of global warming. Other sources of methane include submarine taliks, river transport, ice complex retreat, submarine permafrost and decaying gas hydrate deposits. Permafrost contains almost twice as much carbon as the atmosphere, with ~20 Gt of permafrost-associated methane trapped in methane clathrates. Permafrost thaw results in the formation of thermokarst lakes in ice-rich yedoma deposits. Methane frozen in permafrost is slowly released as permafrost thaws. Thawing permafrost Arctic sea ice melting Clathrate breakdown Greenland ice sheet melting A 2014 study found evidence for methane cycling below the ice sheet of the Russell Glacier, based on subglacial drainage samples which were dominated by Pseudomonadota bacteria. During the study, the most widespread surface melt on record for the past 120 years was observed in Greenland; on 12 July 2012, unfrozen water was present on almost the entire ice sheet surface (98.6%). The findings indicate that methanotrophs could serve as a biological methane sink in the subglacial ecosystem, and the region was, at least during the sample time, a source of atmospheric methane. Scaled dissolved methane flux during the four months of the summer melt season for the Russell Glacier catchment area (1200 km2) was estimated at 990 tonnes CH4. 
Because this catchment area is representative of similar Greenland outlet glaciers, the researchers concluded that the Greenland Ice Sheet may represent a significant global methane source. A study in 2016 concluded that methane clathrates may exist below Greenland's and Antarctica's ice sheets, based on past evidence. Reducing methane emissions More than half of global methane emissions originate from human activities across three main sectors: fossil fuels (35% of human-caused emissions), waste (20%), and agriculture (40%). Within the fossil fuel sector, oil and gas extraction, processing, and distribution contribute 23%, while coal mining accounts for 12% of these emissions. In the waste sector, landfills and wastewater comprise about 20% of global anthropogenic emissions. In agriculture, livestock emissions from manure and enteric fermentation make up roughly 32%, and rice cultivation contributes 8% of global anthropogenic emissions. Mitigation using available measures could reduce these methane emissions by about 180 Mt/yr or about 45% by 2030. Mitigation of CO2 emissions by 2050 (i.e. reaching net zero emissions) is probably not enough to stop the future disappearance of summer Arctic Ocean ice cover. Mitigation of methane emissions is also necessary and this has to be carried out over an even shorter period of time. Flaring methane from oil and gas operations ARPA-E has funded a research project from 2021-2023 to develop a "smart micro-flare fleet" to burn off methane emissions at remote locations. A 2012 review article stated that most existing technologies "operate on confined gas streams of 0.1% methane", and were most suitable for areas where methane is emitted in pockets. If Arctic oil and gas operations use Best Available Technology (BAT) and Best Environmental Practices (BEP) in petroleum gas flaring, this can result in significant methane emissions reductions, according to the Arctic Council. See also References External links Arctic permafrost is thawing fast. That affects us all. National Geographic, 2019 Why the Arctic is smouldering, BBC Future, by Zoe Cormier, 2019 Arctic Ocean Climate change feedbacks Effects of climate change Environment of the Arctic Methane Permafrost Greenhouse gas emissions
Arctic methane emissions
[ "Chemistry" ]
1,814
[ "Greenhouse gases", "Greenhouse gas emissions", "Methane" ]
10,422,205
https://en.wikipedia.org/wiki/Classification%20of%20Fatou%20components
In mathematics, Fatou components are components of the Fatou set. They were named after Pierre Fatou. Rational case If f is a rational function defined in the extended complex plane, and if it is a nonlinear function (degree > 1), then for a periodic component of the Fatou set, exactly one of the following holds: contains an attracting periodic point is parabolic is a Siegel disc: a simply connected Fatou component on which f(z) is analytically conjugate to a Euclidean rotation of the unit disc onto itself by an irrational rotation angle. is a Herman ring: a doubly connected Fatou component (an annulus) on which f(z) is analytically conjugate to a Euclidean rotation of a round annulus, again by an irrational rotation angle. Attracting periodic point The components of the map contain the attracting points that are the solutions to . This is because the map is the one to use for finding solutions to the equation by the Newton–Raphson formula. The solutions must naturally be attracting fixed points. Herman ring The map and t = 0.6151732... will produce a Herman ring. It is shown by Shishikura that the degree of such a map must be at least 3, as in this example. More than one type of component If the degree d is greater than 2 then there is more than one critical point and there can be more than one type of component Transcendental case Baker domain In the case of transcendental functions there is another type of periodic Fatou component, called a Baker domain: these are "domains on which the iterates tend to an essential singularity (not possible for polynomials and rational functions)" one example of such a function is: Wandering domain Transcendental maps may have wandering domains: these are Fatou components that are not eventually periodic. See also No-wandering-domain theorem Montel's theorem John Domains Basins of attraction References Lennart Carleson and Theodore W. Gamelin, Complex Dynamics, Springer 1993. Alan F. Beardon Iteration of Rational Functions, Springer 1991. Fractals Limit sets Theorems in complex analysis Complex dynamics Theorems in dynamical systems Mathematical classification systems
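The Newton's-method example can be made concrete with a few lines of code. The sketch below is an illustration under the assumption that the map in question is the Newton map for z^3 − 1, as is usual for this example (the article's own formula did not survive extraction). It iterates the map on a small grid and reports which cube root of unity each starting point is attracted to; the three basins are the Fatou components containing the attracting fixed points.

```python
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of unity

def newton_basin(z, max_iter=60, tol=1e-9):
    """Index of the root of z^3 = 1 that the Newton iteration converges to, or None."""
    for _ in range(max_iter):
        if abs(z) < tol:                      # derivative vanishes at 0; iteration undefined
            return None
        z = z - (z**3 - 1) / (3 * z**2)       # Newton map N(z) = z - f(z)/f'(z)
        for k, root in enumerate(ROOTS):
            if abs(z - root) < tol:
                return k
    return None                               # points near the Julia set may not settle

# Sample a coarse grid in the square [-2, 2] x [-2, 2]
grid = [[newton_basin(complex(x, y) / 10) for x in range(-20, 21)]
        for y in range(-20, 21)]
counts = [sum(row.count(k) for row in grid) for k in range(3)]
print("points attracted to each root:", counts)
```

Plotting the basin index over a finer grid produces the familiar three-coloured Newton fractal, whose common boundary is the Julia set of the map.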
Classification of Fatou components
[ "Mathematics" ]
443
[ "Theorems in dynamical systems", "Limit sets", "Functions and mappings", "Theorems in mathematical analysis", "Complex dynamics", "Mathematical analysis", "Theorems in complex analysis", "Mathematical objects", "Fractals", "Topology", "Mathematical relations", "nan", "Mathematical problems",...