Dataset columns: id (int64, 39 to 79M) · url (string, 32–168 chars) · text (string, 7–145k chars) · source (string, 2–105 chars) · categories (list, 1–6 items) · token_count (int64, 3–32.2k) · subcategories (list, 0–27 items)
12,994,189
https://en.wikipedia.org/wiki/XStream%20Systems
XStream Systems Inc. was a US-based company which developed X-ray based identification equipment for research and pharmaceutical industry applications. The company was sold to Veracity Network, Inc. in 2011. Company history This company was incorporated in May 2004. The technology used in XStream Systems' products was first developed at Rutgers University. The company was the first in the industry to deploy counterfeit detection equipment along the pharmaceutical distribution supply chain that could look inside any packaging and perform forensic analysis on the drug products inside. All of XStream Systems' technology, products, and services were acquired in 2011 by Veracity Network, Inc. Areas of expertise XStream Systems' products were based on Energy Dispersive X-Ray Diffraction (EDXRD), a technology also used in synchrotrons. XStream Systems' equipment verified the molecular crystal structure of materials and authenticated a pharmaceutical's composition. External links Veracity Network, Inc. home page National Defense directory listing US Department of Homeland Security Stakeholder's Conference X-Ray Safety Academy Pharmaceutical Processing Magazine appearance Pharmaceutical Technology Magazine appearance XStream Systems' XT250 System featured as Improving Security in Drug Topics Magazine Florida Venture Capital Conference Presenting Company XStream Systems Presents At Middle East Counterfeit Medication Conference XStream Systems offers leasing option for authentication system References Diffraction Technology companies of the United States X-ray equipment manufacturers
XStream Systems
[ "Physics", "Chemistry", "Materials_science" ]
280
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
4,866,437
https://en.wikipedia.org/wiki/Propulsive%20efficiency
In aerospace engineering, concerning aircraft, rocket and spacecraft design, overall propulsion system efficiency $\eta$ is the efficiency with which the energy contained in a vehicle's fuel is converted into kinetic energy of the vehicle, to accelerate it, or to replace losses due to aerodynamic drag or gravity. Mathematically, it is represented as $\eta = \eta_c \eta_p$, where $\eta_c$ is the cycle efficiency and $\eta_p$ is the propulsive efficiency. The cycle efficiency is expressed as the percentage of the heat energy in the fuel that is converted to mechanical energy in the engine, and the propulsive efficiency is expressed as the proportion of the mechanical energy actually used to propel the aircraft. The propulsive efficiency is always less than one, because conservation of momentum requires that the exhaust have some of the kinetic energy, and the propulsive mechanism (whether propeller, jet exhaust, or ducted fan) is never perfectly efficient. It is greatly dependent on exhaust expulsion velocity and airspeed. Cycle efficiency Most aerospace vehicles are propelled by heat engines of some kind, usually an internal combustion engine. The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input. From the laws of thermodynamics: $\partial W = \partial Q_h + \partial Q_c$, where $\partial W$ is the work extracted from the engine. (It is negative because work is done by the engine.) $\partial Q_h$ is the heat energy taken from the high-temperature system (heat source). (It is negative because heat is extracted from the source, hence $-\partial Q_h$ is positive.) $\partial Q_c$ is the heat energy delivered to the low-temperature system (heat sink). (It is positive because heat is added to the sink.) In other words, a heat engine absorbs heat from some heat source, converting part of it to useful work, and delivering the rest to a heat sink at lower temperature. In an engine, efficiency is defined as the ratio of useful work done to energy expended. The theoretical maximum efficiency of a heat engine, the Carnot efficiency, depends only on its operating temperatures. Mathematically, this is because in reversible processes, the cold reservoir would gain the same amount of entropy as that lost by the hot reservoir (i.e., $dS_c = -dS_h$), for no change in entropy. Thus: $\eta_c = 1 - \frac{T_c}{T_h}$, where $T_h$ is the absolute temperature of the hot source and $T_c$ that of the cold sink, usually measured in kelvins. Note that $dS_c$ is positive while $dS_h$ is negative; in any reversible work-extracting process, entropy is overall not increased, but rather is moved from a hot (high-entropy) system to a cold (low-entropy one), decreasing the entropy of the heat source and increasing that of the heat sink. Propulsive efficiency Propulsive efficiency is defined as the ratio of propulsive power (i.e. thrust times velocity of the vehicle) to work done on the fluid. In generic terms, the propulsive power can be calculated as follows: $P_p = F v_0$, where $F$ represents thrust and $v_0$ the flight speed. The thrust can be computed from intake and exhaust massflows ($\dot m_0$ and $\dot m_e$) and velocities ($v_0$ and $v_e$): $F = \dot m_e v_e - \dot m_0 v_0$. The work done by the engine to the flow, on the other hand, is the change in kinetic energy per time. This does not take into account the efficiency of the engine used to generate the power, nor of the propeller, fan or other mechanism used to accelerate air. 
It merely refers to the work done to the flow, by any means, and can be expressed as the difference between exhausted kinetic energy flux and incoming kinetic energy flux: $\dot W = \tfrac{1}{2}\dot m_e v_e^2 - \tfrac{1}{2}\dot m_0 v_0^2$. The propulsive efficiency can therefore be computed as: $\eta_p = \frac{F v_0}{\tfrac{1}{2}\dot m_e v_e^2 - \tfrac{1}{2}\dot m_0 v_0^2}$. Depending on the type of propulsion used, this equation can be simplified in different ways, demonstrating some of the peculiarities of different engine types. The general equation already shows, however, that propulsive efficiency improves when using large massflows and small velocities compared to small massflows and large velocities, since the squared terms in the denominator grow faster than the non-squared terms. The losses modelled by propulsive efficiency are explained by the fact that any mode of aero propulsion leaves behind a jet moving in the opposite direction of the vehicle. The kinetic energy flux in this jet is $\tfrac{1}{2}\dot m (v_e - v_0)^2$ for the case that $\dot m_0 = \dot m_e = \dot m$. Jet engines The propulsive efficiency formula for air-breathing engines is given below. It can be derived by setting $\dot m_0 = \dot m_e = \dot m$ in the general equation, and assuming that the intake velocity equals the flight velocity $v_0$. This cancels out the massflow and leads to: $\eta_p = \frac{2}{1 + v_e/v_0}$, where $v_e$ is the exhaust expulsion velocity and $v_0$ is both the airspeed at the inlet and the flight velocity. For pure jet engines, particularly with afterburner, a small amount of accuracy can be gained by not assuming the intake and exhaust massflow to be equal, since the exhaust gas also contains the added mass of the fuel injected. For turbofan engines, the exhaust massflow may be marginally smaller than the intake massflow because the engine supplies "bleed air" from the compressor to the aircraft. In most circumstances, this is not taken into account, as it makes no significant difference to the computed propulsive efficiency. By computing the exhaust velocity from the equation for thrust (while still assuming $\dot m_0 = \dot m_e = \dot m$), we can also obtain the propulsive efficiency as a function of specific thrust ($F_s = F/\dot m$): $\eta_p = \frac{2}{2 + F_s/v_0}$. A corollary of this is that, particularly in air breathing engines, it is more energy efficient to accelerate a large amount of air by a small amount, than it is to accelerate a small amount of air by a large amount, even though the thrust is the same. This is why turbofan engines are more efficient than simple jet engines at subsonic speeds. Rocket engines A rocket engine's $\eta_c$ is usually high due to the high combustion temperatures and pressures, and the long converging-diverging nozzle used. It varies slightly with altitude due to changing atmospheric pressure, but can be up to 70%. Most of the remainder is lost as heat in the exhaust. Rocket engines have a slightly different propulsive efficiency ($\eta_p$) than air-breathing jet engines, as the lack of intake air changes the form of the equation. This also allows rockets to exceed their exhaust's velocity. Similarly to jet engines, matching the exhaust speed and the vehicle speed gives optimum efficiency, in theory. However, in practice, this results in a very low specific impulse, causing much greater losses due to the need for exponentially larger masses of propellant. Unlike ducted engines, rockets give thrust even when the two speeds are equal. In 1903, Konstantin Tsiolkovsky discussed the average propulsive efficiency of a rocket, which he called the utilization (utilizatsiya), the "portion of the total work of the explosive material transferred to the rocket" as opposed to the exhaust gas. 
Propeller engines The calculation is somewhat different for reciprocating and turboprop engines which rely on a propeller for propulsion since their output is typically expressed in terms of power rather than thrust. The equation for heat added per unit time, $Q$, can be adapted as follows: $\eta_c = \frac{550\,P_e}{Q}$, with $Q = \frac{J\,H\,h}{3600}$ in ft·lb/s, where $H$ = calorific value of the fuel in BTU/lb, $h$ = fuel consumption rate in lb/hr and $J$ = mechanical equivalent of heat = 778.24 ft·lb/BTU, and where $P_e$ is engine output in horsepower, converted to foot-pounds/second by multiplication by 550. Given that specific fuel consumption is $C_p = h/P_e$ and $H$ = 20 052 BTU/lb for gasoline, the equation is simplified to: $\eta_c = \frac{12.69}{C_p}$ (with $C_p$ in lb/(hp·hr)), expressed as a percentage. Assuming a typical propeller efficiency of 86% (for the optimal airspeed and air density conditions for the given propeller design), maximum overall propulsion efficiency is estimated as: $\eta \approx \frac{10.9}{C_p}$, also expressed as a percentage. See also References Notes Aerodynamics
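A minimal numerical sketch of the propulsive-efficiency relations above. The rocket formula used is the standard textbook form $\eta_p = 2(v_0/v_e)/(1+(v_0/v_e)^2)$, which the article only alludes to; all function names and numbers are illustrative:

```python
# Sketch of the propulsive-efficiency relations above (names are illustrative).
def eta_p_jet(v0: float, ve: float) -> float:
    """Air-breathing engine, assuming intake massflow equals exhaust massflow."""
    return 2.0 / (1.0 + ve / v0)

def eta_p_rocket(v: float, ve: float) -> float:
    """Rocket engine (standard textbook form; the article elides the formula)."""
    r = v / ve
    return 2.0 * r / (1.0 + r * r)

# Lower exhaust velocity relative to flight speed is more efficient,
# which is why high-bypass turbofans beat pure turbojets at subsonic speeds.
print(eta_p_jet(v0=250.0, ve=400.0))      # ~0.77, turbofan-like
print(eta_p_jet(v0=250.0, ve=900.0))      # ~0.43, high ve wastes jet kinetic energy
print(eta_p_rocket(v=3000.0, ve=3000.0))  # 1.0 at matched speeds
```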
Propulsive efficiency
[ "Chemistry", "Engineering" ]
1,524
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
4,866,818
https://en.wikipedia.org/wiki/NSMB%20%28mathematics%29
NSMB is a computer system for solving the Navier–Stokes equations using the finite volume method. It supports meshes built of several blocks (multi-blocks) as well as parallelisation. The name stands for "Navier–Stokes multi-block". It was developed by a consortium of European scientific institutions and companies between 1992 and 2003. References Numerical software
NSMB (mathematics)
[ "Mathematics" ]
74
[ "Applied mathematics", "Applied mathematics stubs", "Numerical software", "Mathematical software" ]
4,868,258
https://en.wikipedia.org/wiki/Hermite%27s%20identity
In mathematics, Hermite's identity, named after Charles Hermite, gives the value of a summation involving the floor function. It states that for every real number x and for every positive integer n the following identity holds: $\lfloor nx\rfloor=\sum_{k=0}^{n-1}\left\lfloor x+\frac{k}{n}\right\rfloor.$ Proofs Proof by algebraic manipulation Split $x$ into its integer part and fractional part, $x=\lfloor x\rfloor+\{x\}$. There is exactly one $k'\in\{1,\ldots,n\}$ with $\lfloor x\rfloor=\left\lfloor x+\frac{k'-1}{n}\right\rfloor\le x<\left\lfloor x+\frac{k'}{n}\right\rfloor=\lfloor x\rfloor+1.$ By subtracting the same integer $\lfloor x\rfloor$ from inside the floor operations on the left and right sides of this inequality, it may be rewritten as $0=\left\lfloor\{x\}+\frac{k'-1}{n}\right\rfloor\le\{x\}<\left\lfloor\{x\}+\frac{k'}{n}\right\rfloor=1.$ Therefore, $1-\frac{k'}{n}\le\{x\}<1-\frac{k'-1}{n},$ and multiplying both sides by $n$ gives $n-k'\le n\{x\}<n-k'+1.$ Now if the summation from Hermite's identity is split into two parts at index $k=k'$, it becomes $\sum_{k=0}^{n-1}\left\lfloor x+\frac{k}{n}\right\rfloor=\sum_{k=0}^{k'-1}\lfloor x\rfloor+\sum_{k=k'}^{n-1}\left(\lfloor x\rfloor+1\right)=n\lfloor x\rfloor+n-k'=n\lfloor x\rfloor+\lfloor n\{x\}\rfloor=\lfloor n\lfloor x\rfloor+n\{x\}\rfloor=\lfloor nx\rfloor.$ Proof using functions Consider the function $f(x)=\lfloor nx\rfloor-\sum_{k=0}^{n-1}\left\lfloor x+\frac{k}{n}\right\rfloor.$ Then the identity is clearly equivalent to the statement $f(x)=0$ for all real $x$. But then we find, $f\left(x+\frac{1}{n}\right)=\lfloor nx+1\rfloor-\sum_{k=0}^{n-1}\left\lfloor x+\frac{k+1}{n}\right\rfloor=1+\lfloor nx\rfloor-\sum_{k=1}^{n}\left\lfloor x+\frac{k}{n}\right\rfloor=f(x),$ where in the last equality we use the fact that $\lfloor x+m\rfloor=\lfloor x\rfloor+m$ for all integers $m$. But then $f$ has period $1/n$. It then suffices to prove that $f(x)=0$ for all $x\in[0,1/n)$. But in this case, the integer part of each summand in $f$ is equal to 0. We deduce that the function is indeed 0 for all real inputs $x$. References Mathematical identities Articles containing proofs
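The identity is easy to spot-check numerically. A minimal sketch using exact rational arithmetic, so that no floating-point rounding can blur the floor function (sample ranges are arbitrary):

```python
import math
import random
from fractions import Fraction

# Spot-check Hermite's identity: floor(n*x) == sum_{k=0}^{n-1} floor(x + k/n).
def hermite_holds(x: Fraction, n: int) -> bool:
    lhs = math.floor(n * x)
    rhs = sum(math.floor(x + Fraction(k, n)) for k in range(n))
    return lhs == rhs

random.seed(0)
for _ in range(1000):
    x = Fraction(random.randint(-10**6, 10**6), random.randint(1, 1000))
    assert hermite_holds(x, random.randint(1, 30))
print("Hermite's identity verified on 1000 random rationals")
```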
Hermite's identity
[ "Mathematics" ]
224
[ "Mathematical problems", "Articles containing proofs", "Mathematical identities", "Mathematical theorems", "Algebra" ]
4,869,700
https://en.wikipedia.org/wiki/Dimensional%20reduction
Dimensional reduction is the limit of a compactified theory where the size of the compact dimension goes to zero. In physics, a theory in D spacetime dimensions can be redefined in a lower number of dimensions d, by taking all the fields to be independent of the location in the extra D − d dimensions. For example, consider a periodic compact dimension with period L. Let x be the coordinate along this dimension. Any field $\phi$ can be described as a sum of terms of the form $\phi_n = A_n e^{2\pi i n x/L}$, with $A_n$ a constant. According to quantum mechanics, such a term has momentum nh/L along x, where h is the Planck constant. Therefore, as L goes to zero, the momentum goes to infinity, and so does the energy, unless n = 0. However n = 0 gives a field which is constant with respect to x. So at this limit, and at finite energy, $\phi$ will not depend on x. This argument generalizes. The compact dimension imposes specific boundary conditions on all fields, for example periodic boundary conditions in the case of a periodic dimension, and typically Neumann or Dirichlet boundary conditions in other cases. Now suppose the size of the compact dimension is L; then the possible eigenvalues under gradient along this dimension are integer or half-integer multiples of 1/L (depending on the precise boundary conditions). In quantum mechanics this eigenvalue is the momentum of the field, and is therefore related to its energy. As L → 0 all eigenvalues except zero go to infinity, and so does the energy. Therefore, at this limit, with finite energy, zero is the only possible eigenvalue under gradient along the compact dimension, meaning that nothing depends on this dimension. Dimensional reduction also refers to a specific cancellation of divergences in Feynman diagrams. It was put forward by Amnon Aharony, Yoseph Imry, and Shang-keng Ma who proved in 1976 that "to all orders in perturbation expansion, the critical exponents in a d-dimensional (4 < d < 6) system with short-range exchange and a random quenched field are the same as those of a (d − 2)-dimensional pure system". Their arguments indicated that the "Feynman diagrams which give the leading singular behavior for the random case are identically equal, apart from combinatorial factors, to the corresponding Feynman diagrams for the pure case in two fewer dimensions." This dimensional reduction was investigated further in the context of the supersymmetric theory of Langevin stochastic differential equations by Giorgio Parisi and Nicolas Sourlas, who "observed that the most infrared divergent diagrams are those with the maximum number of random source insertions, and, if the other diagrams are neglected, one is left with a diagrammatic expansion for a classical field theory in the presence of random sources ... Parisi and Sourlas explained this dimensional reduction by a hidden supersymmetry." See also Compactification (physics) Kaluza–Klein theory Supergravity Quantum gravity Supersymmetric theory of stochastic dynamics References String theory Quantum field theory Supersymmetry
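A toy sketch of the mode-counting argument above: on a circle of period L each Fourier mode carries momentum nh/L, so every n ≠ 0 mode becomes infinitely energetic as L → 0 and only the x-independent mode survives (all values are illustrative):

```python
# Toy illustration: a Kaluza-Klein mode n on a compact circle of period L
# carries momentum p_n = n*h/L, so every n != 0 mode decouples (infinite
# energy) as L -> 0 and only the x-independent (n = 0) mode survives.
h = 6.62607015e-34  # Planck constant in J*s

def mode_momentum(n: int, L: float) -> float:
    """Momentum along the compact dimension for Fourier mode n."""
    return n * h / L

for L in (1e-9, 1e-12, 1e-15):  # shrinking compactification sizes, in metres
    print(f"L = {L:.0e} m:", [mode_momentum(n, L) for n in (0, 1, 2)])
```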
Dimensional reduction
[ "Physics", "Astronomy" ]
631
[ "Quantum field theory", "Astronomical hypotheses", "Unsolved problems in physics", "Quantum mechanics", "Physics beyond the Standard Model", "String theory", "Supersymmetry", "Symmetry" ]
4,871,070
https://en.wikipedia.org/wiki/Flow%20separation
In fluid dynamics, flow separation or boundary layer separation is the detachment of a boundary layer from a surface into a wake. A boundary layer exists whenever there is relative movement between a fluid and a solid surface with viscous forces present in the layer of fluid close to the surface. The flow can be external, around a body, or internal, in an enclosed passage. Boundary layers can be either laminar or turbulent. A reasonable assessment of whether the boundary layer will be laminar or turbulent can be made by calculating the Reynolds number of the local flow conditions. Separation occurs in flow that is slowing down, with pressure increasing, after passing the thickest part of a streamline body or passing through a widening passage, for example. Flowing against an increasing pressure is known as flowing in an adverse pressure gradient. The boundary layer separates when it has travelled far enough in an adverse pressure gradient that the speed of the boundary layer relative to the surface has stopped and reversed direction. The flow becomes detached from the surface, and instead takes the form of eddies and vortices. The fluid exerts a constant pressure on the surface once it has separated, instead of the continually increasing pressure it would exert if still attached. In aerodynamics, flow separation results in reduced lift and increased pressure drag, caused by the pressure differential between the front and rear surfaces of the object. It causes buffeting of aircraft structures and control surfaces. In internal passages separation causes stalling and vibrations in machinery blading and increased losses (lower efficiency) in inlets and compressors. Much effort and research have gone into the design of aerodynamic and hydrodynamic surface contours and added features which delay flow separation and keep the flow attached for as long as possible. Examples include the fur on a tennis ball, the dimples on a golf ball and turbulators on a glider, which induce an early transition to turbulent flow, as well as vortex generators on aircraft. Adverse pressure gradient The flow reversal is primarily caused by the adverse pressure gradient imposed on the boundary layer by the outer potential flow. The streamwise momentum equation inside the boundary layer is approximately stated as $u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=-\frac{1}{\rho}\frac{dp}{dx}+\nu\frac{\partial^{2}u}{\partial y^{2}},$ where $x$ and $y$ are the streamwise and normal coordinates. An adverse pressure gradient is when $dp/dx>0$, which then can be seen to cause the velocity $u$ to decrease along $x$ and possibly go to zero if the adverse pressure gradient is strong enough. Influencing parameters The tendency of a boundary layer to separate primarily depends on the distribution of the adverse or negative edge velocity gradient $du_e/dx<0$ along the surface, which in turn is directly related to the pressure and its gradient by the differential form of the Bernoulli relation, which is the same as the momentum equation for the outer inviscid flow: $\frac{dp}{dx}=-\rho u_e\frac{du_e}{dx}.$ But the general magnitudes of $du_e/dx$ required for separation are much greater for turbulent than for laminar flow, the former being able to tolerate nearly an order of magnitude stronger flow deceleration. A secondary influence is the Reynolds number. For a given adverse $u_e(x)$ distribution, the separation resistance of a turbulent boundary layer increases slightly with increasing Reynolds number. In contrast, the separation resistance of a laminar boundary layer is independent of Reynolds number — a somewhat counterintuitive fact. Internal separation Boundary layer separation can occur for internal flows. It can result from causes such as a rapidly expanding duct or pipe. 
Separation occurs due to an adverse pressure gradient encountered as the flow expands, causing an extended region of separated flow. The part of the flow that separates the recirculating flow and the flow through the central region of the duct is called the dividing streamline. The point where the dividing streamline attaches to the wall again is called the reattachment point. As the flow goes farther downstream it eventually achieves an equilibrium state and has no reverse flow. Effects of boundary layer separation When the boundary layer separates, its remnants form a shear layer, and the presence of a separated flow region between the shear layer and surface modifies the outside potential flow and pressure field. In the case of airfoils, the pressure field modification results in an increase in pressure drag, and if severe enough will also result in stall and loss of lift, all of which are undesirable. For internal flows, flow separation produces an increase in flow losses and stall-type phenomena such as compressor surge, both undesirable. Another effect of boundary layer separation is regularly shed vortices, known as a Kármán vortex street. Vortices are shed from the bluff downstream surface of a structure at a frequency that depends on the speed of the flow. Vortex shedding produces an alternating force which can lead to vibrations in the structure. If the shedding frequency coincides with a resonance frequency of the structure, it can cause structural failure. Depending on their origin in adjacent solid or fluid bodies, these vibrations can be established and reflected at different frequencies, and can either damp or amplify the resonance. See also Triple-deck theory Aerodynamics D'Alembert's paradox Magnus effect Footnotes References Anderson, John D. (2004), Introduction to Flight, McGraw-Hill. L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London. External links Aerospaceweb-Golf Ball Dimples & Drag Aerodynamics in Sports Equipment, Recreation and Machines – Golf – Instructor Marie Curie Network on Advances in Numerical and Analytical Tools for Detached Flow Prediction Boundary layers Fluid dynamics
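The article gives no quantitative separation criterion; one classical approximate method for laminar boundary layers is Thwaites' method (not discussed above), sketched here under an assumed decelerating edge-velocity distribution. All parameters are illustrative:

```python
import numpy as np

# Sketch of Thwaites' approximate method for predicting where a laminar
# boundary layer separates under an adverse pressure gradient, given an
# edge-velocity distribution ue(x). Not from the article; a standard estimate.
nu = 1.5e-5                      # kinematic viscosity of air, m^2/s
x = np.linspace(1e-4, 1.0, 2000)
ue = 10.0 * (1.0 - 0.25 * x)     # decelerating edge flow: due/dx < 0 (adverse)

# Thwaites: theta^2 = 0.45 * nu * ue^-6 * integral_0^x ue^5 dx (trapezoid rule)
integral = np.concatenate(([0.0], np.cumsum(0.5 * (ue[1:]**5 + ue[:-1]**5) * np.diff(x))))
theta2 = 0.45 * nu * ue**-6 * integral
lam = theta2 / nu * np.gradient(ue, x)   # pressure-gradient parameter lambda

sep = np.argmax(lam <= -0.09)            # empirical separation threshold
if lam[sep] <= -0.09:
    print(f"laminar separation predicted near x = {x[sep]:.3f} m")
else:
    print("no separation predicted on this interval")
```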
Flow separation
[ "Chemistry", "Engineering" ]
1,075
[ "Piping", "Chemical engineering", "Boundary layers", "Fluid dynamics" ]
4,872,137
https://en.wikipedia.org/wiki/Polycide
Polycide is a silicide formed over polysilicon. It is widely used in DRAMs. In a polycide MOSFET transistor process, the silicide is formed only over the polysilicon film, as formation occurs prior to any polysilicon etch. Polycide processes contrast with salicide processes, in which silicide is formed after the polysilicon etch. Thus, with a salicide process, silicide is formed over both the polysilicon gate and the exposed monocrystalline terminal regions of the transistor in a self-aligned fashion. References Semiconductor device fabrication Silicon
Polycide
[ "Materials_science" ]
128
[ "Semiconductor device fabrication", "Microtechnology" ]
2,646,727
https://en.wikipedia.org/wiki/Glasstron
Glasstron was a series of portable head-mounted displays released by Sony, initially introduced in 1996 with the model PLM-50. The products featured two LCD screens and two earphones for video and audio respectively. The products are no longer manufactured nor supported by Sony. The Glasstron was not the first head-mounted display by Sony; the Visortron was a previously exhibited unit. The Sony HMZ-T1 can be considered a successor to the Glasstron. The head-mounted display developed for Sony during the mid-1990s by Virtual i-o is completely unrelated to the Glasstron. One application of this technology was in the game MechWarrior 2, which permitted users to adopt a visual perspective from inside the cockpit of their craft, seeing the battlefield through their craft's own cockpit. Models Five models were released. Supported video inputs included PC (15-pin VGA interface), composite and S-Video. References Sony products Display technology Eyewear Products introduced in 1996 Computer peripherals Head-mounted displays
Glasstron
[ "Technology", "Engineering" ]
225
[ "Computer peripherals", "Electronic engineering", "Components", "Display technology" ]
2,647,223
https://en.wikipedia.org/wiki/Metabolic%20network
A metabolic network is the complete set of metabolic and physical processes that determine the physiological and biochemical properties of a cell. As such, these networks comprise the chemical reactions of metabolism, the metabolic pathways, as well as the regulatory interactions that guide these reactions. With the sequencing of complete genomes, it is now possible to reconstruct the network of biochemical reactions in many organisms, from bacteria to human. Several of these networks are available online: Kyoto Encyclopedia of Genes and Genomes (KEGG), EcoCyc, BioCyc and metaTIGER. Metabolic networks are powerful tools for studying and modelling metabolism. Uses Metabolic networks can be used to detect comorbidity patterns in diseased patients. Certain diseases, such as obesity and diabetes, can be present in the same individual concurrently, sometimes with one disease being a significant risk factor for the other. The disease phenotypes themselves are normally the consequence of the cell's inability to break down or produce an essential substrate. However, an enzyme defect at one reaction may affect the fluxes of other subsequent reactions. These cascading effects couple the metabolic diseases associated with subsequent reactions, resulting in comorbidity effects. Thus, metabolic disease networks can be used to determine if two disorders are connected due to their correlated reactions. See also Metabolic network modelling Metabolic pathway References Metabolism
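A toy sketch of the comorbidity idea above: represent disorders as nodes and connect two disorders when their underlying reactions share a metabolite, so a flux change in one can cascade into the other. It uses the networkx library, and all disease and metabolite names are invented placeholders:

```python
import networkx as nx

# Toy metabolic disease network: link two disorders when the reactions
# underlying them share a metabolite. Names are illustrative only.
reactions = {
    "disorder_A": {"glucose", "g6p"},
    "disorder_B": {"g6p", "pyruvate"},
    "disorder_C": {"urea", "ornithine"},
}
G = nx.Graph()
G.add_nodes_from(reactions)
for d1 in reactions:
    for d2 in reactions:
        if d1 < d2 and reactions[d1] & reactions[d2]:
            G.add_edge(d1, d2)  # shared metabolite -> candidate comorbidity

print(list(G.edges()))  # [('disorder_A', 'disorder_B')]
```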
Metabolic network
[ "Chemistry", "Biology" ]
270
[ "Molecular biology stubs", "Cellular processes", "Molecular biology", "Biochemistry", "Metabolism" ]
2,647,762
https://en.wikipedia.org/wiki/Modular%20Ocean%20Model
The Modular Ocean Model (MOM) is a three-dimensional ocean circulation model designed primarily for studying the ocean climate system. The model is developed and supported primarily by researchers at the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory (NOAA/GFDL) in Princeton, NJ, USA. Overview MOM has traditionally been a level-coordinate ocean model, in which the ocean is divided into boxes whose bottoms are located at fixed depths. Such a representation makes it easy to solve the momentum equations and to represent the well-mixed, weakly stratified layer known as the ocean mixed layer near the ocean surface. However, level-coordinate models have problems when it comes to the representation of thin bottom boundary layers (Winton et al., 1998) and thick sea ice. Additionally, because mixing in the ocean interior is largely along lines of constant potential density rather than along lines of constant depth, mixing must be rotated relative to the coordinate grid, a process that can be computationally expensive. By contrast, in codes which represent the ocean in terms of constant-density layers (which represent the flow in the ocean interior much more faithfully), representation of the ocean mixed layer becomes a challenge. MOM3, MOM4, and MOM5 are used as a code base for the ocean component of the GFDL coupled models used in the IPCC assessment reports, including the GFDL CM2.X physical climate model series and the ESM2M Earth System Model. Versions of MOM have been used in hundreds of scientific papers by authors around the world. MOM4 is used as the basis for the El Niño prediction system employed by the National Centers for Environmental Prediction. History MOM owes its genesis to work at GFDL in the late 1960s by Kirk Bryan and Michael Cox. This code, along with a version generated at GFDL and UCLA/NCAR by Bert Semtner, is the ancestor of many of the level-coordinate ocean model codes run around the world today. In the late 1980s, Ron Pacanowski, Keith Dixon, and Tony Rosati at GFDL rewrote the Bryan-Cox-Semtner code in a modular form, enabling different options and configurations to be more easily generated and new physical parameterizations to be more easily included. This version, released on December 5, 1990, became known as Modular Ocean Model v1.0 (MOM1). Further development by Pacanowski, aided by Charles Goldberg and encouraged by community feedback, led to the release of v2.0 (MOM2) in 1995. Pacanowski and Stephen Griffies released v3.0 (MOM3) in 1999. Work by Griffies, Matthew Harrison, Rosati and Pacanowski, with considerable input from a scientific community of hundreds of users, resulted in significant evolution of the code, released as v4.0 (MOM4) in 2003. An update, v4.1 (MOM4p1), was released by Griffies in 2009; the latest version, v5.0 (MOM5), was released in 2012. See also Geophysical Fluid Dynamics Laboratory References External links MOM6 project MOM5 community website NOAA/GFDL Modular Ocean Model home page History of MOM MOM5 manual MOM4p1 manual MOM4 manual MOM3 manual MOM2 manual MOM1 manual Cox code technical report Numerical climate and weather models Oceanographical terminology Physical oceanography
Modular Ocean Model
[ "Physics" ]
693
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
2,648,839
https://en.wikipedia.org/wiki/Quantum-cascade%20laser
Quantum-cascade lasers (QCLs) are semiconductor lasers that emit in the mid- to far-infrared portion of the electromagnetic spectrum and were first demonstrated by Jérôme Faist, Federico Capasso, Deborah Sivco, Carlo Sirtori, Albert Hutchinson, and Alfred Cho at Bell Laboratories in 1994. Unlike typical interband semiconductor lasers that emit electromagnetic radiation through the recombination of electron–hole pairs across the material band gap, QCLs are unipolar, and laser emission is achieved through the use of intersubband transitions in a repeated stack of semiconductor multiple quantum well heterostructures, an idea first proposed in the article "Possibility of amplification of electromagnetic waves in a semiconductor with a superlattice" by R. F. Kazarinov and R. A. Suris in 1971. Intersubband vs. interband transitions Within a bulk semiconductor crystal, electrons may occupy states in one of two continuous energy bands — the valence band, which is heavily populated with low energy electrons and the conduction band, which is sparsely populated with high energy electrons. The two energy bands are separated by an energy band gap in which there are no permitted states available for electrons to occupy. Conventional semiconductor laser diodes generate light by a single photon being emitted when a high energy electron in the conduction band recombines with a hole in the valence band. The energy of the photon and hence the emission wavelength of laser diodes is therefore determined by the band gap of the material system used. A QCL however does not use bulk semiconductor materials in its optically active region. Instead, it consists of a periodic series of thin layers of varying material composition forming a superlattice. The superlattice introduces a varying electric potential across the length of the device, meaning that there is a varying probability of electrons occupying different positions over the length of the device. This is referred to as one-dimensional multiple quantum well confinement and leads to the splitting of the band of permitted energies into a number of discrete electronic subbands. By suitable design of the layer thicknesses it is possible to engineer a population inversion between two subbands in the system which is required in order to achieve laser emission. Because the position of the energy levels in the system is primarily determined by the layer thicknesses and not the material, it is possible to tune the emission wavelength of QCLs over a wide range in the same material system. Additionally, in semiconductor laser diodes, electrons and holes are annihilated after recombining across the band gap and can play no further part in photon generation. However, in a unipolar QCL, once an electron has undergone an intersubband transition and emitted a photon in one period of the superlattice, it can tunnel into the next period of the structure where another photon can be emitted. This process of a single electron causing the emission of multiple photons as it traverses through the QCL structure gives rise to the name cascade and makes a quantum efficiency of greater than unity possible which leads to higher output powers than semiconductor laser diodes. Operating principles Rate equations QCLs are typically based upon a three-level system. Assuming the formation of the wavefunctions is a fast process compared to the scattering between states, the time independent solutions to the Schrödinger equation may be applied and the system can be modelled using rate equations. 
Each subband contains a number of electrons $n_i$ (where $i$ is the subband index) which scatter between levels with a lifetime $\tau_{if}$ (reciprocal of the average intersubband scattering rate $W_{if}$), where $i$ and $f$ are the initial and final subband indices. Assuming that no other subbands are populated, the rate equations for the three-level lasers are given by: $\frac{dn_3}{dt}=I_{\mathrm{in}}+\frac{n_2}{\tau_{23}}-\frac{n_3}{\tau_{32}}-\frac{n_3}{\tau_{31}}$, $\frac{dn_2}{dt}=\frac{n_3}{\tau_{32}}+\frac{n_1}{\tau_{12}}-\frac{n_2}{\tau_{21}}-\frac{n_2}{\tau_{23}}$, $\frac{dn_1}{dt}=\frac{n_3}{\tau_{31}}+\frac{n_2}{\tau_{21}}-\frac{n_1}{\tau_{12}}-I_{\mathrm{out}}$. In the steady state, the time derivatives are equal to zero and $I_{\mathrm{in}}=I_{\mathrm{out}}=I$. The general rate equation for electrons in subband i of an N level system is therefore: $\frac{dn_i}{dt}=I\,\delta_{iN}-I\,\delta_{i1}+\sum_{j\neq i}\frac{n_j}{\tau_{ji}}-\sum_{j\neq i}\frac{n_i}{\tau_{ij}}$. Under the assumption that absorption processes can be ignored (i.e. $1/\tau_{12}=1/\tau_{23}=0$, valid at low temperatures), the middle rate equation gives $\frac{n_3}{\tau_{32}}=\frac{n_2}{\tau_{21}}$. Therefore, if $\tau_{32}>\tau_{21}$ (i.e. $1/\tau_{32}<1/\tau_{21}$) then $n_3>n_2$ and a population inversion will exist. The population ratio is defined as $\frac{n_3}{n_2}=\frac{\tau_{32}}{\tau_{21}}$. If all N steady-state rate equations are summed, the right hand side becomes zero, meaning that the system is underdetermined, and it is possible only to find the relative population of each subband. If the total sheet density of carriers in the system, $n_{\mathrm{2D}}$, is also known, then the absolute population of carriers in each subband may be determined using: $n_i=\frac{n_i^{\mathrm{rel}}}{\sum_{j=1}^{N}n_j^{\mathrm{rel}}}\,n_{\mathrm{2D}}$, where $n_i^{\mathrm{rel}}$ is the relative population of subband $i$. As an approximation, it can be assumed that all the carriers in the system are supplied by doping. If the dopant species has a negligible ionisation energy then $n_{\mathrm{2D}}$ is approximately equal to the doping density. Active region designs The scattering rates are tailored by suitable design of the layer thicknesses in the superlattice which determine the electron wave functions of the subbands. The scattering rate between two subbands is heavily dependent upon the overlap of the wave functions and energy spacing between the subbands. The figure shows the wave functions in a three quantum well (3QW) QCL active region and injector. In order to decrease $1/\tau_{32}$, the overlap of the upper and lower laser levels is reduced. This is often achieved through designing the layer thicknesses such that the upper laser level is mostly localised in the left-hand well of the 3QW active region, while the lower laser level wave function is made to mostly reside in the central and right-hand wells. This is known as a diagonal transition. A vertical transition is one in which the upper laser level is localised in mainly the central and right-hand wells. This increases the overlap and hence $1/\tau_{32}$, which reduces the population inversion, but it increases the strength of the radiative transition and therefore the gain. In order to increase $1/\tau_{21}$, the lower laser level and the ground level wave functions are designed such that they have a good overlap, and to increase $1/\tau_{21}$ further, the energy spacing between the subbands is designed such that it is equal to the longitudinal optical (LO) phonon energy (~36 meV in GaAs) so that resonant LO phonon-electron scattering can quickly depopulate the lower laser level. Material systems The first QCL was fabricated in the GaInAs/AlInAs material system lattice-matched to an InP substrate. This particular material system has a conduction band offset (quantum well depth) of 520 meV. These InP-based devices have reached very high levels of performance across the mid-infrared spectral range, achieving high power, above room-temperature, continuous wave emission. In 1998 GaAs/AlGaAs QCLs were demonstrated by Sirtori et al. proving that the QC concept is not restricted to one material system. This material system has a varying quantum well depth depending on the aluminium fraction in the barriers. 
Although GaAs-based QCLs have not matched the performance levels of InP-based QCLs in the mid-infrared, they have proven to be very successful in the terahertz region of the spectrum. The short wavelength limit of QCLs is determined by the depth of the quantum well, and recently QCLs have been developed in material systems with very deep quantum wells in order to achieve short wavelength emission. The InGaAs/AlAsSb material system has quantum wells 1.6 eV deep and has been used to fabricate QCLs emitting at 3.05 μm. InAs/AlSb QCLs have quantum wells 2.1 eV deep and electroluminescence at wavelengths as short as 2.5 μm has been observed. The couple InAs/AlSb is the most recent QCL material family compared to alloys grown on InP and GaAs substrates. The main advantage of the InAs/AlSb material system is the small effective electron mass in quantum wells, which favors a high intersubband gain. This benefit can be better exploited in long-wavelength QCLs where the lasing transition levels are close to the bottom of the conduction band, and the effect of nonparabolicity is weak. InAs-based QCLs have demonstrated room temperature (RT) continuous wave (CW) operation at wavelengths up to with a pulsed threshold current density $J_{th}$ as low as . Low values of $J_{th}$ have also been achieved in InAs-based QCLs emitting in other spectral regions: at , at and at (QCL grown on InAs). Most recently, InAs-based QCLs operating near with $J_{th}$ as low as at room temperature have been demonstrated. The threshold obtained is lower than the $J_{th}$ of the best reported InP-based QCLs to date without facet treatment. QCLs may also allow laser operation in materials traditionally considered to have poor optical emission properties. Indirect bandgap materials such as silicon have minimum electron and hole energies at different momentum values. For interband optical transitions, carriers change momentum through a slow, intermediate scattering process, dramatically reducing the optical emission intensity. Intersubband optical transitions, however, are independent of the relative momentum of conduction band and valence band minima, and theoretical proposals for Si/SiGe quantum cascade emitters have been made. Intersubband electroluminescence from non-polar SiGe heterostructures has been observed for mid-infrared and far-infrared wavelengths, first in the valence band. Higher gain can be achieved by using strain to push parasitic light-hole states above the heavy-hole to heavy-hole transitions. Conduction band designs have now been demonstrated with higher gain than all valence band designs due to removing many of the parasitic channels that reduce gain in valence band designs. Emission wavelengths QCLs currently cover the wavelength range from 2.63 μm to 250 μm (extending to 355 μm with the application of a magnetic field). Optical waveguides The first step in processing quantum cascade gain material to make a useful light-emitting device is to confine the gain medium in an optical waveguide. This makes it possible to direct the emitted light into a collimated beam, and allows a laser resonator to be built such that light can be coupled back into the gain medium. Two types of optical waveguides are in common use. A ridge waveguide is created by etching parallel trenches in the quantum cascade gain material to create an isolated stripe of QC material, typically ~10 μm wide, and several mm long. 
A dielectric material is typically deposited in the trenches to guide injected current into the ridge, then the entire ridge is typically coated with gold to provide electrical contact and to help remove heat from the ridge when it is producing light. Light is emitted from the cleaved ends of the waveguide, with an active area that is typically only a few micrometers in dimension. The second waveguide type is a buried heterostructure. Here, the QC material is also etched to produce an isolated ridge. Now, however, new semiconductor material is grown over the ridge. The change in index of refraction between the QC material and the overgrown material is sufficient to create a waveguide. Dielectric material is also deposited on the overgrown material around QC ridge to guide the injected current into the QC gain medium. Buried heterostructure waveguides are efficient at removing heat from the QC active area when light is being produced. Laser types Although the quantum cascade gain medium can be used to produce incoherent light in a superluminescent configuration, it is most commonly used in combination with an optical cavity to form a laser. Fabry–Perot lasers This is the simplest of the quantum cascade lasers. An optical waveguide is first fabricated out of the quantum cascade material to form the gain medium. The ends of the crystalline semiconductor device are then cleaved to form two parallel mirrors on either end of the waveguide, thus forming a Fabry–Pérot resonator. The residual reflectivity on the cleaved facets from the semiconductor-to-air interface is sufficient to create a resonator. Fabry–Pérot quantum cascade lasers are capable of producing high powers, but are typically multi-mode at higher operating currents. The wavelength can be changed chiefly by changing the temperature of the QC device. Distributed feedback lasers A distributed feedback (DFB) quantum cascade laser is similar to a Fabry–Pérot laser, except for a distributed Bragg reflector (DBR) built on top of the waveguide to prevent it from emitting at other than the desired wavelength. This forces single mode operation of the laser, even at higher operating currents. DFB lasers can be tuned chiefly by changing the temperature, although an interesting variant on tuning can be obtained by pulsing a DFB laser. In this mode, the wavelength of the laser is rapidly "chirped" during the course of the pulse, allowing rapid scanning of a spectral region. External cavity lasers In an external cavity (EC) quantum cascade laser, the quantum cascade device serves as the laser gain medium. One, or both, of the waveguide facets has an anti-reflection coating that defeats the optical cavity action of the cleaved facets. Mirrors are then arranged in a configuration external to the QC device to create the optical cavity. If a frequency-selective element is included in the external cavity, it is possible to reduce the laser emission to a single wavelength, and even tune the radiation. For example, diffraction gratings have been used to create a tunable laser that can tune over 15% of its center wavelength. Extended tuning devices There exists several methods to extend the tuning range of quantum cascade lasers using only monolithically integrated elements. Integrated heaters can extend the tuning range at fixed operation temperature to 0.7% of the central wavelength and superstructure gratings operating through the Vernier effect can extend it to 4% of the central wavelength, compared to <0.1% for a standard DFB device. 
Growth The alternating layers of the two different semiconductors which form the quantum heterostructure may be grown on to a substrate using a variety of methods such as molecular beam epitaxy (MBE) or metalorganic vapour phase epitaxy (MOVPE), also known as metalorganic chemical vapor deposition (MOCVD). Applications Fabry-Perot (FP) quantum cascade lasers were first commercialized in 1998, distributed feedback (DFB) devices were first commercialized in 2004, and broadly-tunable external cavity quantum cascade lasers first commercialized in 2006. The high optical power output, tuning range and room temperature operation make QCLs useful for spectroscopic applications such as remote sensing of environmental gases and pollutants in the atmosphere and security. They may eventually be used for vehicular cruise control in conditions of poor visibility, collision avoidance radar, industrial process control, and medical diagnostics such as breath analyzers. QCLs are also used to study plasma chemistry. When used in multiple-laser systems, intrapulse QCL spectroscopy offers broadband spectral coverage that can potentially be used to identify and quantify complex heavy molecules such as those in toxic chemicals, explosives, and drugs. References External links Bell Labs summary Optipedia: Quantum Cascade Laser American inventions Semiconductor lasers Terahertz technology
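A minimal numerical sketch of the three-level rate-equation model from the "Rate equations" section above, evaluated in the steady state with absorption ignored (1/τ12 = 1/τ23 = 0); the lifetimes are illustrative values, not measured ones:

```python
# Steady-state three-level QCL rate equations with absorption ignored:
#   n3 * (1/tau_32 + 1/tau_31) = I      (upper level fed by injection)
#   n2 / tau_21 = n3 / tau_32           (lower level fed only from level 3)
tau_32, tau_31, tau_21 = 2.0, 5.0, 0.3  # illustrative lifetimes, picoseconds
I = 1.0                                  # injection rate, arbitrary units

n3 = I / (1.0 / tau_32 + 1.0 / tau_31)
n2 = n3 * tau_21 / tau_32

# Population ratio n3/n2 = tau_32/tau_21 > 1 means population inversion.
print(f"n3/n2 = {n3 / n2:.2f} (inversion, since tau_32 > tau_21)")
```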
Quantum-cascade laser
[ "Physics" ]
3,106
[ "Spectrum (physical sciences)", "Electromagnetic spectrum", "Terahertz technology" ]
2,649,192
https://en.wikipedia.org/wiki/Fresnel%20diffraction
In optics, the Fresnel diffraction equation for near-field diffraction is an approximation of the Kirchhoff–Fresnel diffraction that can be applied to the propagation of waves in the near field. It is used to calculate the diffraction pattern created by waves passing through an aperture or around an object, when viewed from relatively close to the object. In contrast, the diffraction pattern in the far field region is given by the Fraunhofer diffraction equation. The near field can be specified by the Fresnel number, $F$, of the optical arrangement. When $F \ll 1$ the diffracted wave is considered to be in the Fraunhofer field. However, the validity of the Fresnel diffraction integral is deduced by the approximations derived below. Specifically, the phase terms of third order and higher must be negligible, a condition that may be written as $\frac{F\theta^2}{4}\ll 1,$ where $\theta$ is the maximal angle described by $\theta\approx a/L$, with $a$ and $L$ the same as in the definition of the Fresnel number. Hence this condition can be approximated as $\frac{a^4}{4L^3\lambda}\ll 1$. The multiple Fresnel diffraction at closely spaced periodical ridges (ridged mirror) causes the specular reflection; this effect can be used for atomic mirrors. Early treatments of this phenomenon Some of the earliest work on what would become known as Fresnel diffraction was carried out by Francesco Maria Grimaldi in Italy in the 17th century. In his monograph entitled "Light", Richard C. MacLaurin explains Fresnel diffraction by asking what happens when light propagates, and how that process is affected when a barrier with a slit or hole in it is interposed in the beam produced by a distant source of light. He uses the Principle of Huygens to investigate, in classical terms, what transpires. The wave front that proceeds from the slit and on to a detection screen some distance away very closely approximates a wave front originating across the area of the gap without regard to any minute interactions with the actual physical edge. The result is that if the gap is very narrow only diffraction patterns with bright centers can occur. If the gap is made progressively wider, then diffraction patterns with dark centers will alternate with diffraction patterns with bright centers. As the gap becomes larger, the differentials between dark and light bands decrease until a diffraction effect can no longer be detected. MacLaurin does not mention the possibility that the center of the series of diffraction rings produced when light is shone through a small hole may be black, but he does point to the inverse situation wherein the shadow produced by a small circular object can paradoxically have a bright center. (p. 219) In his Optics, Francis Weston Sears offers a mathematical approximation suggested by Fresnel that predicts the main features of diffraction patterns and uses only simple mathematics. By considering the perpendicular distance from the hole in a barrier screen to a nearby detection screen along with the wavelength of the incident light, it is possible to compute a number of regions called half-period elements or Fresnel zones. The inner zone is a circle and each succeeding zone will be a concentric annular ring. If the diameter of the circular hole in the screen is sufficient to expose the first or central Fresnel zone, the amplitude of light at the center of the detection screen will be double what it would be if the detection screen were not obstructed. If the diameter of the circular hole in the screen is sufficient to expose two Fresnel zones, then the amplitude at the center is almost zero. 
That means that a Fresnel diffraction pattern can have a dark center. These patterns can be seen and measured, and correspond well to the values calculated for them. The Fresnel diffraction integral According to the Rayleigh–Sommerfeld diffraction theory, the electric-field diffraction pattern at a point $(x, y, z)$ is given by the following solution to the Helmholtz equation: $E(x,y,z)=\frac{z}{i\lambda}\iint E(x',y',0)\,\frac{e^{ikr}}{r^2}\,dx'\,dy',\qquad r=\sqrt{(x-x')^2+(y-y')^2+z^2},$ where $E(x',y',0)$ is the electric field at the aperture, $k=\frac{2\pi}{\lambda}$ is the wavenumber and $i$ is the imaginary unit. The analytical solution of this integral quickly becomes impractically complex for all but the simplest diffraction geometries. Therefore, it is usually calculated numerically. The Fresnel approximation The main problem for solving the integral is the expression of $r$. First, we can simplify the algebra by introducing the substitution $\rho^2=(x-x')^2+(y-y')^2.$ Substituting into the expression for $r$, we find $r=\sqrt{\rho^2+z^2}=z\sqrt{1+\frac{\rho^2}{z^2}}.$ Next, by the binomial expansion, $\sqrt{1+u}=1+\frac{u}{2}-\frac{u^2}{8}+\cdots$ We can express $r$ as $r=z\left(1+\frac{\rho^2}{2z^2}-\frac{\rho^4}{8z^4}+\cdots\right)=z+\frac{\rho^2}{2z}-\frac{\rho^4}{8z^3}+\cdots$ If we consider all the terms of the binomial series, then there is no approximation. Let us substitute this expression in the argument of the exponential within the integral; the key to the Fresnel approximation is to assume that the third term is very small and can be ignored, and henceforth any higher orders. In order to make this possible, it has to contribute to the variation of the exponential for an almost null term. In other words, it has to be much smaller than the period of the complex exponential, i.e., $2\pi$: $k\frac{\rho^4}{8z^3}\ll 2\pi.$ Expressing $k$ in terms of the wavelength, $k=\frac{2\pi}{\lambda},$ we get the following relationship: $\frac{\rho^4}{8z^3\lambda}\ll 1.$ Multiplying both sides by $z^3$, we have $\frac{\rho^4}{8\lambda}\ll z^3,$ or, substituting the earlier expression for $\rho^2$, $z^3\gg\frac{\left[(x-x')^2+(y-y')^2\right]^2}{8\lambda}.$ If this condition holds true for all values of $x$, $x'$, $y$ and $y'$, then we can ignore the third term in the Taylor expression. Furthermore, if the third term is negligible, then all terms of higher order will be even smaller, so we can ignore them as well. For applications involving optical wavelengths, the wavelength $\lambda$ is typically many orders of magnitude smaller than the relevant physical dimensions. In particular, $\lambda\ll z$ and $\lambda\ll\rho.$ Thus, as a practical matter, the required inequality will always hold true as long as $\rho\ll z.$ We can then approximate the expression with only the first two terms: $r\approx z+\frac{\rho^2}{2z}=z+\frac{(x-x')^2+(y-y')^2}{2z}.$ This equation is the Fresnel approximation, and the inequality stated above is a condition for the approximation's validity. Fresnel diffraction The condition for validity is fairly weak, and it allows all length parameters to take comparable values, provided the aperture is small compared to the path length. For the $r$ in the denominator we go one step further and approximate it with only the first term, $r\approx z.$ This is valid in particular if we are interested in the behaviour of the field only in a small area close to the origin, where the values of $x$ and $y$ are much smaller than $z$. In general, Fresnel diffraction is valid if the Fresnel number is approximately 1. For Fresnel diffraction the electric field at point $(x,y,z)$ is then given by $E(x,y,z)=\frac{e^{ikz}}{i\lambda z}\iint E(x',y',0)\,e^{\frac{ik}{2z}\left[(x-x')^2+(y-y')^2\right]}\,dx'\,dy'.$ This is the Fresnel diffraction integral; it means that, if the Fresnel approximation is valid, the propagating field is a spherical wave, originating at the aperture and moving along $z$. The integral modulates the amplitude and phase of the spherical wave. Analytical solution of this expression is still only possible in rare cases. For a further simplified case, valid only for much larger distances from the diffraction source, see Fraunhofer diffraction. Unlike Fraunhofer diffraction, Fresnel diffraction accounts for the curvature of the wavefront, in order to correctly calculate the relative phase of interfering waves. 
Alternative forms Convolution The integral can be expressed in other ways in order to calculate it using some mathematical properties. If we define the function $h(x,y,z)=\frac{e^{ikz}}{i\lambda z}e^{\frac{ik}{2z}(x^2+y^2)},$ then the integral can be expressed in terms of a convolution: $E(x,y,z)=E(x,y,0)*h(x,y,z);$ in other words, we are representing the propagation using a linear-filter modeling. That is why we might call the function $h$ the impulse response of free-space propagation. Fourier transform Another possible way is through the Fourier transform. If in the integral we express $k$ in terms of the wavelength: $k=\frac{2\pi}{\lambda},$ and expand each component of the transverse displacement: $(x-x')^2=x^2+x'^2-2xx',\qquad (y-y')^2=y^2+y'^2-2yy',$ then we can express the integral in terms of the two-dimensional Fourier transform. Let us use the following definition: $G(p,q)=\mathcal{F}\{g(x,y)\}=\iint g(x,y)\,e^{-i2\pi(px+qy)}\,dx\,dy,$ where $p$ and $q$ are spatial frequencies (wavenumbers). The Fresnel integral can be expressed as $E(x,y,z)=\frac{e^{ikz}}{i\lambda z}e^{\frac{ik}{2z}(x^2+y^2)}\left.\mathcal{F}\left\{E(x',y',0)\,e^{\frac{ik}{2z}(x'^2+y'^2)}\right\}\right|_{p=\frac{x}{\lambda z},\,q=\frac{y}{\lambda z}}.$ That is, first multiply the field to be propagated by a complex exponential, calculate its two-dimensional Fourier transform, replace $(p,q)$ with $\left(\frac{x}{\lambda z},\frac{y}{\lambda z}\right)$ and multiply it by another factor. This expression is better than the others when the process leads to a known Fourier transform, and the connection with the Fourier transform is tightened in the linear canonical transformation, discussed below. Linear canonical transformation From the point of view of the linear canonical transformation, Fresnel diffraction can be seen as a shear in the time–frequency domain, corresponding to how the Fourier transform is a rotation in the time–frequency domain. See also Fraunhofer diffraction Fresnel integral Fresnel zone Fresnel number Augustin-Jean Fresnel Ridged mirror Fresnel imager Euler spiral Notes References Diffraction
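The Fourier-transform form above maps directly onto a single-FFT numerical propagator: multiply the aperture field by a quadratic phase, take an FFT, then apply the outer quadratic-phase factor. A minimal sketch, where the grid, wavelength and aperture size are arbitrary choices and sampling issues are ignored:

```python
import numpy as np

# Single-FFT Fresnel propagator, following the Fourier-transform form above.
lam, z, N, dx = 633e-9, 0.5, 1024, 10e-6   # wavelength, distance, grid, pitch
k = 2 * np.pi / lam
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)

E0 = (np.abs(X) < 0.5e-3) & (np.abs(Y) < 0.5e-3)  # square aperture, plane wave

# Inner quadratic phase, then 2-D FFT (dx**2 approximates the area element).
chirp = np.exp(1j * k / (2 * z) * (X**2 + Y**2))
F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E0 * chirp))) * dx**2

# Output grid implied by p = x/(lambda*z): pitch lambda*z/(N*dx).
x_out = (np.arange(N) - N // 2) * lam * z / (N * dx)
Xo, Yo = np.meshgrid(x_out, x_out)
E = np.exp(1j * k * z) / (1j * lam * z) * np.exp(1j * k / (2 * z) * (Xo**2 + Yo**2)) * F
print(np.abs(E).max())  # near-field pattern of the aperture at distance z
```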
Fresnel diffraction
[ "Physics", "Chemistry", "Materials_science" ]
1,774
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
2,649,421
https://en.wikipedia.org/wiki/Integral%20operator
An integral operator is an operator that involves integration. Special instances are: The operator of integration itself, denoted by the integral symbol Integral linear operators, which are linear operators induced by bilinear forms involving integrals Integral transforms, which are maps between two function spaces, which involve integrals Integral calculus
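As a concrete illustration of an integral linear operator, discretizing $(Tf)(x)=\int K(x,y)f(y)\,dy$ on a grid turns it into a matrix-vector product; the kernel and grid below are arbitrary examples:

```python
import numpy as np

# Discretized integral linear operator: (Tf)(x_i) ≈ sum_j K(x_i, y_j) f(y_j) w_j.
n = 200
y = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                       # crude uniform quadrature weights

K = np.exp(-np.abs(y[:, None] - y[None, :]))  # example kernel K(x, y) = e^{-|x-y|}
f = np.sin(2 * np.pi * y)

Tf = K @ (w * f)                              # the operator as a matrix product
print(Tf[:3])
```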
Integral operator
[ "Mathematics" ]
61
[ "Integral calculus", "Calculus" ]
2,650,394
https://en.wikipedia.org/wiki/Respiratory%20rate
The respiratory rate is the rate at which breathing occurs; it is set and controlled by the respiratory center of the brain. A person's respiratory rate is usually measured in breaths per minute. Measurement The respiratory rate in humans is measured by counting the number of breaths for one minute through counting how many times the chest rises. A fibre-optic breath rate sensor can be used for monitoring patients during a magnetic resonance imaging scan. Respiration rates may increase with fever, illness, or other medical conditions. Inaccuracies in respiratory measurement have been reported in the literature. One study compared respiratory rates counted over a 90-second period with those counted over a full minute, and found significant differences between the rates. Another study found that rapid respiratory rates in babies, counted using a stethoscope, were 60–80% higher than those counted from beside the cot without the aid of the stethoscope. Similar results are seen with animals when they are being handled and not being handled—the invasiveness of touch is apparently enough to make significant changes in breathing. Various other methods to measure respiratory rate are commonly used, including impedance pneumography and capnography, which are commonly implemented in patient monitoring. In addition, novel techniques for automatically monitoring respiratory rate using wearable sensors are in development, such as estimation of respiratory rate from the electrocardiogram, photoplethysmogram, or accelerometry signals. Breathing rate is often used interchangeably with the term breathing frequency. However, this should not be considered the frequency of breathing, because a realistic breathing signal is composed of many frequencies. Normal range For humans, the typical respiratory rate for a healthy adult at rest is 12–15 breaths per minute. The respiratory center sets the quiet respiratory rhythm at around two seconds for an inhalation and three seconds for an exhalation. This gives the lower end of the average rate, at 12 breaths per minute. Average resting respiratory rates by age are: birth to 6 weeks: 30–40 breaths per minute 6 months: 25–40 breaths per minute 3 years: 20–30 breaths per minute 6 years: 18–25 breaths per minute 10 years: 17–23 breaths per minute Adults: 15–18 breaths per minute 50 years: 18–25 breaths per minute Elderly ≥ 65 years old: 12–28 breaths per minute. Elderly ≥ 80 years old: 10–30 breaths per minute. Minute volume Respiratory minute volume is the volume of air which is inhaled (inhaled minute volume) or exhaled (exhaled minute volume) from the lungs in one minute. Diagnostic value The value of respiratory rate as an indicator of potential respiratory dysfunction has been investigated but findings suggest it is of limited value. One study found that only 33% of people presenting to an emergency department with an oxygen saturation below 90% had an increased respiratory rate. An evaluation of respiratory rate for the differentiation of the severity of illness in babies under 6 months found it not to be very useful. Approximately half of the babies had a respiratory rate above 50 breaths per minute, thereby questioning the value of having a "cut-off" at 50 breaths per minute as the indicator of serious respiratory illness. It has also been reported that factors such as crying, sleeping, agitation and age have a significant influence on the respiratory rate. As a result of these and similar studies, the value of respiratory rate as an indicator of serious illness is limited. 
Nonetheless, respiratory rate is widely used to monitor the physiology of acutely ill hospital patients. It is measured regularly to facilitate identification of changes in physiology along with other vital signs. This practice has been widely adopted as part of early warning systems. Abnormal respiratory rates See also Subparabrachial nucleus - nucleus in the brain stem that regulates breathing rate Respiratory system Heart rate, pulse, systolic and diastolic blood pressure measurements, and the level of oxygen saturation (some other vital signs) can provide related information about the heart and lungs and the great vessels, since these systems work with one another, are relatively close together in gross (macroscopic) anatomy, and are physiologically very related. References Respiratory physiology Respiratory therapy Temporal rates
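A toy sketch of the signal-processing idea behind deriving respiratory rate from wearable waveforms, mentioned above: take the dominant spectral peak of a respiration-modulated signal within a plausible breathing band. The signal here is synthetic and all parameters are made up; real pipelines need filtering and artifact rejection:

```python
import numpy as np

# Estimate respiratory rate as the dominant frequency of a synthetic
# respiration signal (the basic idea behind ECG/PPG/accelerometer methods).
np.random.seed(0)
fs = 50.0                                # sample rate, Hz
t = np.arange(0, 60, 1 / fs)             # one minute of data
true_rate = 14                           # breaths per minute
sig = np.sin(2 * np.pi * true_rate / 60 * t) + 0.3 * np.random.randn(t.size)

spec = np.abs(np.fft.rfft(sig - sig.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)     # plausible band: 6-42 breaths/min
est = freqs[band][np.argmax(spec[band])] * 60
print(f"estimated rate: {est:.1f} breaths per minute")  # ~14
```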
Respiratory rate
[ "Physics" ]
830
[ "Temporal quantities", "Temporal rates", "Physical quantities" ]
2,650,410
https://en.wikipedia.org/wiki/Highway%20Addressable%20Remote%20Transducer%20Protocol
The HART Communication Protocol (Highway Addressable Remote Transducer) is a hybrid analog+digital industrial automation open protocol. Its most notable advantage is that it can communicate over legacy 4–20 mA analog instrumentation current loops, sharing the pair of wires used by the analog-only host systems. HART is widely used in process and instrumentation systems ranging from small automation applications up to highly sophisticated industrial applications. Based on the OSI model, HART resides at Layer 7, the Application Layer. Layers 3–6 are not used. When sent over 4–20 mA wiring, it uses Bell 202 modulation for Layer 1, but it is often converted to RS-485 or RS-232. According to Emerson, due to the huge installation base of 4–20 mA systems throughout the world, the HART Protocol is one of the most popular industrial protocols today. The HART protocol has served as a good transition protocol for users who wished to keep the legacy 4–20 mA signals but wanted to implement a "smart" protocol. History The protocol was developed by Rosemount Inc., built on the early Bell 202 communications standard, in the mid-1980s as a proprietary digital communication protocol for their smart field instruments. Soon it evolved into HART, and in 1986 it was made an open protocol. Since then, the capabilities of the protocol have been enhanced by successive revisions to the specification. Modes There are two main operational modes of HART instruments: point-to-point (analog/digital) mode, and multi-drop mode. Point to point In point-to-point mode the digital signals are overlaid on the 4–20 mA loop current. Both the 4–20 mA current and the digital signal are valid signalling protocols between the controller and measuring instrument or final control element. The polling address of the instrument is set to "0". Only one instrument can be put on each instrument cable signal pair. One signal, generally chosen by the user, is designated as the 4–20 mA signal. Other signals are sent digitally on top of the 4–20 mA signal. For example, pressure can be sent as 4–20 mA, representing a range of pressures, and temperature can be sent digitally over the same wires. In point-to-point mode, the digital part of the HART protocol can be seen as a kind of digital current loop interface. Multi-drop In multi-drop mode the analog loop current is fixed at 4 mA and it is possible to have more than one instrument on a signal loop. HART revisions 3 through 5 allowed polling addresses of the instruments to be in the range 1–15. HART revision 6 allowed addresses 1 to 63; HART revision 7 allows addresses 0 to 63. Each instrument must have a unique address. Packet structure The request HART packet has the following structure: Preamble Currently all the newer devices implement a five-byte preamble, since anything greater reduces the communication speed. However, masters are responsible for backwards support. Master communication to a new device starts with the maximum preamble length (20 bytes) and is later reduced once the preamble size for the current device is determined. The preamble is: "ff" "ff" "ff" "ff" "ff" (5 times ff) Start delimiter This byte contains the master number and specifies that the communication packet is starting. Bit 7: if high, use the unique (5-byte) address; otherwise use the polling (1-byte) address. Bits 6 and 5: number of expansion bytes; set if the expansion field is used, normally 0. 
Bits 4 and 3: physical layer type, 0 = asynchronous, 1 = synchronous. Bits 2, 1 and 0: frame type, 1 = BACK (burst acknowledge, sent by a burst-mode device), 2 = STX (master to field devices), 6 = slave acknowledge to an STX frame. Address Specifies the destination address as implemented in one of the HART schemes. The original addressing scheme used only four bits to specify the device address, which limited the number of devices to 16 including the master. The newer scheme utilizes 38 bits to specify the device address. This address is requested from the device using either Command 0 or Command 11. Command This is a one-byte numerical value representing which command is to be executed. Command 0 and Command 11 are used to request the device number. Number of data bytes Specifies the number of communication data bytes to follow. Status The status field is absent for the master and is two bytes for the slave. This field is used by the slave to inform the master whether it completed the task and what its current health status is. Data Data contained in this field depends on the command to be executed. Checksum The checksum is the XOR of all the bytes starting from the start byte and ending with the last byte of the data field, inclusive. Manufacturer codes Each manufacturer that participates in the HART convention is assigned an identification number. This number is communicated as part of the basic device identification command used when first connecting to a device. References External links FieldComm Group .NET Open Source project Network protocols Industrial computing Serial buses Industrial automation
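A minimal sketch in Python of the checksum rule described above; the example frame bytes are invented for illustration and are not taken from the HART specification:

```python
def hart_checksum(frame: bytes) -> int:
    """XOR of every byte from the start delimiter through the last data byte.

    `frame` is assumed to already exclude the 0xFF preamble bytes and the
    trailing checksum byte itself, matching the packet structure above.
    """
    check = 0
    for byte in frame:
        check ^= byte
    return check

# Hypothetical short frame: start delimiter, polling address 0, command 0,
# and a zero byte count (values chosen purely for illustration).
frame = bytes([0x02, 0x00, 0x00, 0x00])
print(hex(hart_checksum(frame)))  # -> 0x2
```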
Highway Addressable Remote Transducer Protocol
[ "Technology", "Engineering" ]
1,027
[ "Industrial automation", "Industrial computing", "Automation", "Industrial engineering" ]
2,650,443
https://en.wikipedia.org/wiki/Kobe%20Steel
Kobe Steel, Ltd. (株式会社神戸製鋼所, Kabushiki gaisha Kōbe Seikō-sho), is a major Japanese steel manufacturer headquartered in Chūō-ku, Kobe. KOBELCO is the unified brand name of the Kobe Steel Group. Kobe Steel has the lowest proportion of steel operations of any major steelmaker in Japan and is characterised as a conglomerate comprising the three pillars of the Materials Division, the Machinery Division and the Power Division. The materials division has a high market share in wire rods and aluminium materials for transport equipment, while the machinery division has a high market share in screw compressors. In addition, the power sector has one of the largest wholesale power supply operations in the country. Kobe Steel is a member of the Mizuho keiretsu. It was formerly part of the DKB Group and Sanwa Group keiretsu, which were later subsumed into Mizuho. The company is listed on the Tokyo and Nagoya stock exchanges, where its stock is a component of the Nikkei 225. As of March 31, 2022, Kobe Steel has 201 subsidiaries and 50 affiliated companies across Japan, Asia, Europe, the Middle East and the US. Its main production facilities are Kakogawa Steel Works and Takasago Works. Kobe Steel is also famous as the owner of the rugby team Kobelco Steelers. History In 1905, the general partnership trading company Suzuki Shoten acquired a steel business in Wakinohama, Kobe, called Kobayashi Seikosho, operated by Seiichiro Kobayashi, and changed its name to Kobe Seikosho. Then, in 1911, Suzuki Shoten spun off the company to establish Kobe Steel Works, Ltd. at Wakinohamacho, Kobe. After the Russo-Japanese War, as the Imperial Japanese Navy adopted a policy of fostering private factories, Kobe Steel received technical guidance and orders from the Kure Naval Arsenal and other arsenals in Maizuru and Yokosuka, and expanded its scale. Around 1914, the company started making machinery for naval vessels and began its journey as a machine manufacturer. Its business performance expanded, partly due to the shipbuilding boom during World War I. In 1918, it acquired the rights to manufacture diesel engines from Sulzer of Switzerland, helping to advance the Japanese naval, marine, locomotive and automobile transport sectors. Today, the KOBELCO Group operates a broad range of business fields that cover Steel & Aluminum, Advanced Materials, Welding, Machinery, Engineering, Construction Machinery, and Electric Power. In the Great Hanshin Earthquake of January 1995, the Kobe head office building and company housing collapsed, and the No. 3 Blast Furnace at the Kobe Steel Works was also damaged, resulting in an emergency shutdown and approximately JPY 100 billion in damage, the largest for a private company. The Third Blast Furnace, which restarted only two and a half months after the earthquake, had become a 'symbol of recovery', but was suspended in October 2017 in order to strengthen competitiveness. In recent years, the company has been focusing on fields other than steel, such as aluminium, machinery, and electric power, and is clearly aiming to change from being a 'steelmaker' to a 'manufacturer that also handles steel'. Former prime minister Shinzō Abe worked at Kobe Steel before entering politics. 
Main locations Source: Domestic Locations Kobe Head Office Tokyo Head Office Takasago Works Kobe Corporate Research Laboratories Kakogawa Works Research & Development Laboratory Kobe Wire Rod & Bar Plant Fujisawa Office Ibaraki Plant Saijo Plant Fukuchiyama Plant Moka Works Chofu Works Daian Works Overseas Regional Headquarters and Offices Kobe Steel USA Inc. (U.S. headquarters): 19575 Victor Parkway, Suite 200 Livonia, MI, 48152, USA Kobelco (China) Holding Co., Ltd. (China headquarters, investment company): Room 3701, Hong Kong New World Tower, No.300 Middle Huai Hai Zhong Road, Huangpu District, Shanghai, 200021, People's Republic of China Kobelco (China) Holding Co., Ltd. (Guangzhou Branch): Room 1203, #285 East Linhe Road, Tianhe District, Guangzhou City, Guangdong Province, People's Republic of China Kobelco South East Asia Ltd. (Regional headquarters for Southeast Asia and South Asia): 17th Floor, Sathorn Thani Tower ll, 92/49 North Sathorn Road, Khwaeng Silom, Khet Bangrak, Bangkok, 10500, Kingdom of Thailand Kobelco Europe GmbH (Regional Headquarters for Europe and the Middle East): Luitpoldstrasse 3, 80335 Munich, Germany Business Units & Main Products Source: Steel & Aluminum Steel Sheets Wire Rods and Bars Aluminum Plate Steel Plates Welding Robots and Electric Power Sources Welding Materials Advanced Materials Steel Castings and Forgings Titanium Copper Sheet and Strip Steel Powder Machinery Standard Compressors Rotating Machinery Tire and Rubber Machinery Plastic Processing Machinery Advanced Technology Equipment Rolling Mill・Press Machine Ultra High Pressure Equipment Energy & Chemical Field Engineering Iron Unit Field Advanced Urban Transit System Electric Power Wholesale Power Supply Scandal In October 2017, Kobe Steel admitted to falsifying data on the strength and durability of its aluminium, copper and steel products. The scandal deepened when the company said it found falsified data on its iron ore powder, which caused its shares to fall 18%. By 11 October, shares had fallen by a third. After testing the parts of its bullet trains, the Central Japan Railway Company announced that 310 components were discovered to contain sub-standard parts supplied by Kobe Steel. Following further news in October 2017 that car makers Toyota, Nissan, and General Motors, and train manufacturer Hitachi, were among 200 companies affected by Kobe Steel's mislabelling, which had potential safety implications for their vehicles, the CEO of Kobe Steel conceded that his company now had "zero credibility". Other affected companies include Ford, Boeing and Mitsubishi Heavy Industries. CEO Kawasaki promised to lead an internal investigation. On 13 October 2017, Kobe Steel admitted that the number of companies misled was over 500. Despite the costs of dealing with the scandal, Kobe Steel issued a revised profit forecast in February 2018 announcing that it expects to generate a net profit of ¥45 billion ($421 million) for the full 2017 fiscal year, marking its first net profit in three years. 
Gallery See also Kobeseiko Te-Gō References External links Official global website Kobe Steel Group of Companies History of Kobe Steel Group Kobelco Construction Machinery Europe Steel companies of Japan Crane manufacturers Construction equipment manufacturers of Japan Companies listed on the Tokyo Stock Exchange Companies listed on the Nagoya Stock Exchange Companies listed on the Osaka Exchange Companies in the Nikkei 225 Manufacturing companies based in Kobe Manufacturing companies established in 1905 Japanese companies established in 1905 Defense companies of Japan Japanese brands Midori-kai Industrial machine manufacturers
Kobe Steel
[ "Engineering" ]
1,382
[ "Industrial machine manufacturers", "Industrial machinery" ]
2,650,522
https://en.wikipedia.org/wiki/Canine%20parvovirus
Canine parvovirus (also referred to as CPV, CPV2, or parvo) is a contagious virus mainly affecting dogs and wolves. CPV is highly contagious and is spread from dog to dog by direct or indirect contact with their feces. Vaccines can prevent this infection, but mortality can reach 91% in untreated cases. Treatment often involves veterinary hospitalization. Canine parvovirus often infects other mammals, including foxes, cats, and skunks. Felines (cats) are also susceptible to panleukopenia, a different strain of parvovirus. Signs Dogs that develop the disease show signs of the illness within three to ten days. The signs may include lethargy, vomiting, fever, and diarrhea (usually bloody). Generally, the first sign of CPV is lethargy. Secondary signs are loss of weight and appetite or diarrhea followed by vomiting. Diarrhea and vomiting result in dehydration that upsets the electrolyte balance, and this may affect the dog critically. Secondary infections occur as a result of the weakened immune system. Because the normal intestinal lining is also compromised, blood and protein leak into the intestines, leading to anemia and loss of protein, and endotoxins escape into the bloodstream, causing endotoxemia. Dogs have a distinctive odor in the later stages of the infection. The white blood cell level falls, further weakening the dog. Any or all of these factors can lead to shock and death. Younger animals have worse survival rates. Diagnosis Diagnosis is made through detection of CPV2 in the feces by either an ELISA or a hemagglutination test, or by electron microscopy. PCR has become available to diagnose CPV2, and can be used later in the disease, when potentially less virus is being shed in the feces and may not be detectable by ELISA. Clinically, the intestinal form of the infection can sometimes be confused with coronavirus or other forms of enteritis. Parvovirus, however, is more serious, and the presence of bloody diarrhea, a low white blood cell count, and necrosis of the intestinal lining also point more towards parvovirus, especially in an unvaccinated dog. The cardiac form is typically easier to diagnose because the symptoms are distinct. Treatment Survival rate depends on how quickly CPV is diagnosed, the age of the dog, and how aggressive the treatment is. There is no approved treatment, and the current standard of care is supportive care, involving extensive hospitalization, due to severe dehydration and potential damage to the intestines and bone marrow. A CPV test should be performed as early as possible if CPV is suspected, in order to begin treatment early and increase the survival rate if the disease is found. Supportive care ideally also consists of crystalloid IV fluids and/or colloids (e.g., Hetastarch), antinausea injections (antiemetics) such as maropitant, metoclopramide, dolasetron, ondansetron and prochlorperazine, and broad-spectrum antibiotic injections such as cefazolin/enrofloxacin, ampicillin/enrofloxacin, metronidazole, Timentin, or enrofloxacin. IV fluids are administered and antinausea and antibiotic injections are given subcutaneously, intramuscularly, or intravenously. The fluids are typically a mix of a sterile, balanced electrolyte solution, with an appropriate amount of B-complex vitamins, dextrose, and potassium chloride. Analgesic medications can be used to counteract the intestinal discomfort caused by frequent bouts of diarrhea; however, the use of opioid analgesics can result in secondary ileus and decreased motility. 
In addition to fluids given to achieve adequate rehydration, each time the puppy vomits or has diarrhea in a significant quantity, an equal amount of fluid is administered intravenously. The fluid requirements of a patient are determined by the animal's body weight, weight changes over time, degree of dehydration at presentation, and surface area. A blood plasma transfusion from a donor dog that has already survived CPV is sometimes used to provide passive immunity to the sick dog. Some veterinarians keep these dogs on site, or have frozen serum available. There have been no controlled studies regarding this treatment. Additionally, fresh frozen plasma and human albumin transfusions can help replace the extreme protein losses seen in severe cases and help assure adequate tissue healing. However, this is controversial with the availability of safer colloids such as Hetastarch, as it will also increase the colloid osmotic pressure without the ill effect of predisposing that canine patient to a future transfusion reaction. Once the dog can keep fluids down, the IV fluids are gradually discontinued, and very bland food slowly introduced. Oral antibiotics are administered for a number of days depending on the white blood cell count and the patient's ability to fight off secondary infection. A puppy with minimal symptoms can recover in two or three days if the IV fluids are begun as soon as symptoms are noticed and the CPV test confirms the diagnosis. If more severe, depending on treatment, puppies can remain ill from five days up to two weeks. However, even with hospitalization, there is no guarantee that the dog will be cured and survive. Treatments in development Kindred Biosciences, a biopharmaceutical company, is developing a monoclonal antibody as a prophylactic therapy to prevent clinical signs of parvovirus infection and also as treatment of established parvovirus infection. In 2021, Kindred Biosciences announced the completion of a pivotal efficacy study showing a 100% survival rate for dogs treated with KIND-030 compared to a 41% survival rate for dogs treated with placebo. Preliminary research in kidney cell lines has identified nitazoxanide, closantel sodium, and closantel as the drugs which have the most potential as broad-spectrum antiviral agents against canine parvovirus and its various subspecies, raising the prospect that these drugs may yield potential for future treatments of this disease. In May 2023, the USDA granted Elanco Animal Health conditional approval to develop a Canine Parvovirus Monoclonal Antibody (CPMA) which targets the virus instead of its symptoms. Initial distribution of CPMA to veterinarians began in July 2023. History Parvovirus CPV2 is a relatively new disease that appeared in the late 1970s. It was first recognized in 1978 and spread worldwide in one to two years. The virus is very similar to feline panleukopenia (also a parvovirus); they are 98% identical, differing only in two amino acids in the viral capsid protein VP2. It is also highly similar to mink enteritis virus (MEV), and the parvoviruses of raccoons and foxes. It is possible that CPV2 is a mutant of an unidentified parvovirus (similar to feline parvovirus (FPV)) of some wild carnivore. CPV2 was thought to only cause disease in canines, but newer evidence suggests pathogenicity in cats too. Variants There are two types of canine parvovirus called canine minute virus (CPV1) and CPV2. CPV2 causes the most serious disease and affects domesticated dogs and wild canids. 
There are variants of CPV2 called CPV-2a and CPV-2b, identified in 1979 and 1984 respectively. Most canine parvovirus infections are believed to be caused by these two strains, which have replaced the original strain, and the present-day virus is different from the one originally discovered, although they are indistinguishable by most routine tests. An additional variant is CPV-2c, a Glu-426 mutant, which was discovered in Italy, Vietnam, and Spain. The antigenic patterns of 2a and 2b are quite similar to the original CPV2. Variant 2c, however, has a unique pattern of antigenicity. This has led to claims of ineffective vaccination of dogs, but studies have shown that the existing CPV vaccines based on CPV-2b provide adequate levels of protection against CPV-2c. A strain of CPV-2b (strain FP84) has been shown to cause disease in a small percentage of domestic cats, although vaccination for FPV seems to be protective. With severe disease, dogs can die within 48 to 72 hours without treatment by fluids. In the more common, less severe form, mortality is about 10 percent. Certain breeds, such as Rottweilers, Doberman Pinschers, and Pit bull terriers, as well as other black and tan colored dogs, may be more susceptible to CPV2. Along with age and breed, factors such as a stressful environment and concurrent infections with bacteria, parasites, and canine coronavirus increase a dog's risk of severe infection. Dogs infected with parvovirus usually die from the dehydration it causes or secondary infection rather than the virus itself. The variants of CPV-2 are defined by surface protein (VP capsid) features. This classification does not correlate well with phylogenies built from other parts of the viral genome, such as the NS1 protein. Intestinal form Dogs become infected through oral contact with CPV2 in feces, infected soil, or fomites that carry the virus. Following ingestion, the virus replicates in the lymphoid tissue in the throat, and then spreads to the bloodstream. From there, the virus attacks rapidly dividing cells, notably those in the lymph nodes, intestinal crypts, and the bone marrow. There is depletion of lymphocytes in lymph nodes and necrosis and destruction of the intestinal crypts. Anaerobic bacteria that normally reside in the intestines can then cross into the bloodstream, a process known as translocation, with bacteremia leading to sepsis. The most common bacteria involved in severe cases are Clostridium, Campylobacter and Salmonella species. This can lead to a syndrome known as systemic inflammatory response syndrome (SIRS). SIRS leads to a range of complications such as hypercoagulability of the blood, endotoxaemia and acute respiratory distress syndrome (ARDS). Bacterial myocarditis has also been reported secondarily to sepsis. Dogs with CPV are at risk of intussusception, a condition where part of the intestine prolapses into another part. Three to four days following infection, the virus is shed in the feces for up to three weeks, and the dog may remain an asymptomatic carrier and shed the virus periodically. The virus is usually more deadly if the host is concurrently infested with worms or other intestinal parasites. Cardiac form This form is less common and affects puppies infected in the uterus or shortly after birth until about 8 weeks of age. The virus attacks the heart muscle and the puppy often dies suddenly or after a brief period of breathing difficulty due to pulmonary edema. 
On the microscopic level, there are many points of necrosis of the heart muscle that are associated with mononuclear cellular infiltration. The formation of excess fibrous tissue (fibrosis) is often evident in surviving dogs. Myofibers are the site of viral replication within cells. The disease may or may not be accompanied by the signs and symptoms of the intestinal form. However, this form is now rarely seen due to widespread vaccination of breeding dogs. Even less frequently, the disease may also lead to a generalized infection in neonates, causing lesions and viral replication not only in the gastrointestinal tissues and heart but also in the brain, liver, lungs, kidneys, and adrenal cortex. The lining of the blood vessels is also severely affected, which leads the lesions in this region to hemorrhage. Infection of the fetus This type of infection can occur when a pregnant female dog is infected with CPV2. The adult may develop immunity with little or no clinical signs of disease. The virus may have already crossed the placenta to infect the fetus. This can lead to several abnormalities. In mild to moderate cases the pups can be born with neurological abnormalities such as cerebellar hypoplasia. Virology CPV2 is a non-enveloped single-stranded DNA virus in the Parvoviridae family. The name comes from the Latin parvus, meaning small, as the virus is only 20 to 26 nm in diameter. It has an icosahedral symmetry. The genome is about 5000 nucleotides long. CPV2 continues to evolve, and the success of new strains seems to depend on extending the range of hosts affected and improved binding to its receptor, the canine transferrin receptor. CPV2 has a high rate of evolution, possibly due to a rate of nucleotide substitution that is more like that of RNA viruses such as Influenzavirus A. In contrast, FPV seems to evolve only through random genetic drift. CPV2 affects dogs, wolves, foxes, and other canids. CPV2a and CPV2b have been isolated from a small percentage of symptomatic cats and are more common than feline panleukopenia in big cats. Previously it was thought that the virus did not undergo cross-species infection. However, studies in Vietnam have shown that CPV2 can undergo minor antigenic shift and natural mutation to infect felids. Analyses of feline parvovirus (FPV) isolates in Vietnam and Taiwan revealed that more than 80% of the isolates were of the canine parvovirus type, rather than feline panleukopenia virus (FPLV). CPV2 may spread to cats more easily than to dogs and undergo faster rates of mutation within that species. Prevention and decontamination CPV2 is an extremely virulent and contagious virus; the only reliable way to prevent infection is by vaccination. Puppies are generally vaccinated in a series of doses, extending from the earliest time that the immunity derived from the mother wears off until after that passive immunity is definitely gone. Vaccinations are given starting at 7–8 weeks of age, with a booster given every 2–4 weeks until at least 16 weeks of age. Older puppies (16 weeks or older) are given at least two vaccinations 2 to 4 weeks apart. The duration of immunity of vaccines for CPV2 has been tested for all major vaccine manufacturers in the United States and has been found to be at least three years after the initial puppy series and a booster 1 year later. A dog that successfully recovers from CPV2 generally remains contagious for up to three weeks, but it is possible it may remain contagious for up to six. 
CPV2 is an extremely resilient virus once it has been shed through the feces into the environment. CPV2 has been found to survive indoors for months, and outdoors in moist environments for years. It can survive in extremely low and high temperatures, and is resistant to many chemical disinfectants. References External links Canine Parvovirus Information - Common Symptoms and Treatments Parvovirus Information Center from The Pet Health Library Parvovirus Infection In Your Dog Parvo Virus Enteritis—CPV Parvovirus in dogs Parvovirinae Animal viral diseases Dog diseases Infraspecific virus taxa Vaccine-preventable diseases
Canine parvovirus
[ "Biology" ]
3,266
[ "Vaccination", "Vaccine-preventable diseases" ]
2,651,165
https://en.wikipedia.org/wiki/2%2C4-Dinitrotoluene
2,4-Dinitrotoluene (DNT) or dinitro is an organic compound with the formula C7H6N2O4. This pale yellow crystalline solid is well known as a precursor to trinitrotoluene (TNT) but is mainly produced as a precursor to toluene diisocyanate. Isomers of dinitrotoluene Six positional isomers are possible for dinitrotoluene. The most common one is 2,4-dinitrotoluene. The nitration of toluene gives sequentially mononitrotoluene, DNT, and finally TNT. 2,4-DNT is the principal product from dinitration, the other main product being about 30% 2,6-DNT. The nitration of 4-nitrotoluene gives 2,4-DNT. Applications Most DNT is used in the production of toluene diisocyanate, which is used to produce flexible polyurethane foams. DNT is hydrogenated to produce 2,4-toluenediamine, which in turn is phosgenated to give toluene diisocyanate. In this way, about 1.4 billion kilograms were produced annually as of 1999–2000. Other uses include the explosives industry. It is not used by itself as an explosive, but some of the production is converted to TNT. Dinitrotoluene is frequently used as a plasticizer, deterrent coating, and burn rate modifier in propellants (e.g., smokeless gunpowders). As it is carcinogenic and toxic, modern formulations tend to avoid its use. In this application it is often used together with dibutyl phthalate. Toxicity Dinitrotoluenes are highly toxic, with a threshold limit value (TLV) of 1.5 mg/m3. They convert hemoglobin into methemoglobin. 2,4-Dinitrotoluene is also a listed hazardous waste under 40 CFR 261.24. Its United States Environmental Protection Agency (EPA) Hazardous Waste Number is D030. The maximum concentration at which it does not exhibit the toxicity characteristic is 0.13 mg/L. References External links Explosive chemicals IARC Group 2B carcinogens Nitrotoluenes Plasticizers
2,4-Dinitrotoluene
[ "Chemistry" ]
506
[ "Explosive chemicals" ]
19,833,982
https://en.wikipedia.org/wiki/Antoine%20equation
The Antoine equation is a class of semi-empirical correlations describing the relation between vapor pressure and temperature for pure substances. The Antoine equation is derived from the Clausius–Clapeyron relation. The equation was presented in 1888 by the French engineer Louis Charles Antoine (1825–1897). Equation The Antoine equation is log10(p) = A − B/(C + T), where p is the vapor pressure, T is temperature (in °C or in K according to the value of C) and A, B and C are component-specific constants. The simplified form with C set to zero, log10(p) = A − B/T, is the August equation, after the German physicist Ernst Ferdinand August (1795–1870). The August equation describes a linear relation between the logarithm of the pressure and the reciprocal temperature. This assumes a temperature-independent heat of vaporization. The Antoine equation allows an improved, but still inexact, description of the change of the heat of vaporization with the temperature. The Antoine equation can also be transformed into a temperature-explicit form with simple algebraic manipulations: T = B/(A − log10(p)) − C. Validity range Usually, the Antoine equation cannot be used to describe the entire saturated vapour pressure curve from the triple point to the critical point, because it is not flexible enough. Therefore, multiple parameter sets for a single component are commonly used. A low-pressure parameter set is used to describe the vapour pressure curve up to the normal boiling point and the second set of parameters is used for the range from the normal boiling point to the critical point. Example parameters Example calculation The normal boiling point of ethanol is TB = 78.32 °C (760 mmHg = 101.325 kPa = 1.000 atm = normal pressure). This example shows a severe problem caused by using two different sets of coefficients. The described vapor pressure is not continuous—at the normal boiling point the two sets give different results. This causes severe problems for computational techniques which rely on a continuous vapor pressure curve. Two solutions are possible: The first approach uses a single Antoine parameter set over a larger temperature range and accepts the increased deviation between calculated and real vapor pressures. A variant of this single set approach is using a special parameter set fitted for the examined temperature range. The second solution is switching to another vapor pressure equation with more than three parameters. Commonly used are simple extensions of the Antoine equation (see below) and the equations of DIPPR or Wagner. Units The coefficients of Antoine's equation are normally given in mmHg—even today where the SI is recommended and pascals are preferred. The usage of the pre-SI units has only historic reasons and originates directly from Antoine's original publication. It is, however, easy to convert the parameters to different pressure and temperature units. For switching from degrees Celsius to kelvin it is sufficient to subtract 273.15 from the C parameter. For switching from millimeters of mercury to pascals it is sufficient to add the common logarithm of the factor between both units to the A parameter: APa = AmmHg + log10(133.322) ≈ AmmHg + 2.1249, since 1 mmHg = 133.322 Pa. The parameters for °C and mmHg for ethanol A, 8.20417 B, 1642.89 C, 230.300 are converted for K and Pa to A, 10.32907 B, 1642.89 C, −42.85 The first example calculation with TB = 351.47 K becomes log10(P) = 10.32907 − 1642.89/(351.47 − 42.85) ≈ 5.0057, i.e. P ≈ 101.3 kPa. A similarly simple transformation can be used if the common logarithm should be exchanged by the natural logarithm. It is sufficient to multiply the A and B parameters by ln(10) = 2.302585. 
The example calculation with the converted parameters (for K and Pa): A, 23.7836 B, 3782.89 C, −42.85 becomes ln(P) = 23.7836 − 3782.89/(351.47 − 42.85) ≈ 11.526, i.e. P ≈ 101.3 kPa (the small differences in the results are only caused by the limited precision of the coefficients used). Extension of the Antoine equations To overcome the limits of the Antoine equation, some simple extensions with additional terms are used: The additional parameters increase the flexibility of the equation and allow the description of the entire vapor pressure curve. The extended equation forms can be reduced to the original form by setting the additional parameters D, E and F to 0. A further difference is that the extended equations use e as the base for the exponential function and the natural logarithm. This doesn't affect the equation form. Generalized Antoine Equation with Acentric Factor Lee developed a modified form of the Antoine equation that allows for calculating vapor pressure across the entire temperature range using the acentric factor (ω) of a substance. The fundamental structure of the equation is based on the van der Waals equation and builds upon the findings of Wall and Gutmann et al., who reformulated it into the Antoine equation. The proposed equation demonstrates improved accuracy compared to the Lee–Kesler method. In Lee's form, the coefficients A, B and C are expressed as functions of the acentric factor ω; here ln pv,r denotes the natural logarithm of the reduced vapor pressure and Tr the reduced temperature. Sources for Antoine equation parameters NIST Chemistry WebBook Dortmund Data Bank Directory of reference books and data banks containing Antoine constants Several reference books and publications, e. g. Lange's Handbook of Chemistry, McGraw-Hill Professional Wichterle I., Linek J., "Antoine Vapor Pressure Constants of Pure Compounds" Yaws C. L., Yang H.-C., "To Estimate Vapor Pressure Easily. Antoine Coefficients Relate Vapor Pressure to Temperature for Almost 700 Major Organic Compounds", Hydrocarbon Processing, 68(10), Pages 65–68, 1989 See also Vapour pressure of water Arden Buck equation Lee–Kesler method Goff–Gratch equation Raoult's law Thermodynamic activity References External links Gallica, scanned original paper NIST Chemistry Web Book Calculation of vapor pressures with the Antoine equation Eponymous equations of physics Thermodynamic equations
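As a quick cross-check of the unit conversion, here is a short Python sketch that evaluates both ethanol parameter sets quoted above at the normal boiling point; the function name is ours, while the constants come directly from the text:

```python
def antoine_pressure(T, A, B, C):
    """Vapor pressure from the Antoine equation, log10(p) = A - B/(C + T).

    The units of p and T are fixed by the parameter set passed in.
    """
    return 10 ** (A - B / (C + T))

# Ethanol at its normal boiling point, 78.32 deg C = 351.47 K:
p_mmHg = antoine_pressure(78.32, 8.20417, 1642.89, 230.300)  # deg C / mmHg set
p_Pa = antoine_pressure(351.47, 10.32907, 1642.89, -42.85)   # K / Pa set

print(f"{p_mmHg:.1f} mmHg, {p_Pa:.0f} Pa")  # ~760 mmHg and ~101 kPa
```

Both parameter sets reproduce normal pressure at the normal boiling point, as the conversion rules above require.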
Antoine equation
[ "Physics", "Chemistry" ]
1,170
[ "Thermodynamic equations", "Eponymous equations of physics", "Equations of physics", "Thermodynamics" ]
19,837,606
https://en.wikipedia.org/wiki/Energy%20Technology%20Engineering%20Center
The Energy Technology Engineering Center (ETEC) was a government-owned, contractor-operated complex of industrial facilities located within the Santa Susana Field Laboratory (SSFL), Ventura County, California. The ETEC specialized in non-nuclear testing of components which were designed to transfer heat from a nuclear reactor using liquid metals instead of water or gas. The center operated from 1966 to 1998. The ETEC site has been closed and is now undergoing building removal and environmental remediation by the U.S. Department of Energy. History In 1966, ETEC began as the Liquid Metals Engineering Center (LMEC). The LMEC was created by the U.S. Atomic Energy Commission to provide development and non-nuclear testing of liquid metal reactor components. The Liquid Metals Information Center (LMIC) was established at the same time by the AEC. The LMIC served as a technical information library relating to liquid metals and liquid metal components for the United States government. Both the LMEC and LMIC supported the United States Government's Liquid Metal Fast Breeder Reactor program. The LMEC and the LMIC were established within a western portion of Santa Susana Field Laboratory called Area IV. In 1978, the LMEC charter was expanded to include general energy-related technology and the center was renamed the Energy Technology Engineering Center. Research and development at ETEC primarily involved metallic sodium because the proposed Fast Breeder Reactor required liquid sodium to operate. Sodium was chosen because it has desirable heat transfer properties, a low operating pressure when compared to water, and a relatively low melting point. The liquid metal components tested included steam generators, pumps, valves, flow meters and a variety of instrumentation. Investigation into the metallurgical properties of piping exposed to high temperatures for long periods of time was also performed. The components were designed and fabricated, then installed into a test facility and evaluated under operating conditions, with the overall goal of improving the reliability and safety of the components and, ultimately, the nuclear reactor they would be used in. ETEC personnel operated several unique test facilities to perform nuclear reactor component tests using metallic sodium. One facility, the Sodium Pump Test Facility, capable of circulating up to 55,000 gallons of liquid sodium per minute at high temperatures, was the largest sodium pump test facility in the world. Corporate Organization The LMEC was originally operated by the Atomics International division of North American Aviation and later, by way of corporate merger, by Rockwell International. In 1996, The Boeing Company purchased Rocketdyne and assumed the ETEC contract with the Department of Energy. Two distinct organizations within Atomics International were supported by the DOE at SSFL Area IV: one focused on the development of civilian nuclear power and the other, LMEC/ETEC, was the center of excellence for research and testing of non-nuclear components relating to liquid metals. Although ETEC was operated by Atomics International (and later by Rockwell International), the U.S. Government required the ETEC be operated separately from Atomics International in order to avoid giving the company an unfair advantage through preferential access to government-sponsored research. Thus, the ETEC operated as an autonomous entity within Atomics International. At its height in 1973, ETEC employed four hundred fifty people. 
Parent Atomics International employed some 9,000 people during its height in the late 1970s. The distinction between ETEC and the AI nuclear division is blurred by the demise of Atomics International and the cleanup of radioactive materials under DOE's "ETEC Closure" contract with The Boeing Company. The US Department of Energy has assumed responsibility for the identification and, if necessary, cleanup of impacts to the environment resulting from the sodium- or radioactive material-related activities within SSFL Area IV. Waste Management Practices Components removed from a sodium-related test facility require careful management because the residual sodium within the component reacts violently with water and thus is a hazard to human health and the environment. In some cases, bulk quantities of sodium required disposal. Prior to the establishment of the 1976 Federal Resource Conservation and Recovery Act, which regulates the treatment and disposal of sodium waste, ETEC personnel operated an on-site treatment and disposal site. The site, called the Former Sodium Disposal Facility (FSDF), was located at the extreme western edge of Area IV. The components were cleaned at the FSDF by reacting the sodium inside with steam or by tossing them into a large pool of water. The steam (or water) reacts with the sodium and removes the hazardous residues. In 1978, in compliance with the new Federal Resource Conservation and Recovery Act, ETEC established the Hazardous Waste Management Facility (HWMF), a specialized facility to remove residual sodium from used components. The HWMF operated under the Federal RCRA regulations and closed in 1998. Environmental Impacts The research and development activities at ETEC resulted in contamination to the surrounding environment. While the FSDF was not intended for the disposal of chemicals or radioactive materials, it is clear these materials were present there. The Final Report for the FSDF cleanup prepared by Boeing notes that "a small amount of very low level radioactive waste was inadvertently disposed of at the site…" The impacted soils were removed from the FSDF by Rocketdyne for the DOE in 1992. A video explaining the 1992 FSDF cleanup was produced by Rocketdyne. In 1998, the California Department of Public Health, Radiologic Health Branch determined the site to be clean up to the standards then in effect. Further cleanup to remove traces of mercury and polychlorinated biphenyls from the surrounding site was completed in 1999. Other locations within Area IV (and the remainder of SSFL) have been undergoing an environmental Facility Investigation under the Resource Conservation and Recovery Act since 1994. The investigation is overseen by the California State Department of Toxic Substances Control. A firm estimated completion date for the investigation and subsequent remediation, if any, could not be found. By 2007, all of the sodium-related facilities had been removed from Area IV with the exception of the Sodium Pump Test Facility and the Hazardous Waste Management Facility. All of the metallic sodium has been removed from ETEC. Other Santa Susana Field Laboratory activities Most of the Santa Susana Field Laboratory—SSFL was used for the testing and development of rocket engines by Rocketdyne over a fifty-year period, initially for defensive missiles, and then primarily for the National Aeronautics and Space Administration—NASA space vehicles. That took place at locations in Areas I, II, and III totaling ~ 2,560 acres. 
The ETEC site is ~90 acres of Area IV's 290-acre total. There have been considerable environmental impact investigations underway across SSFL, including at the ETEC sites, since the 1990s to develop cleanup criteria, characterization measurement standards, and methods to use to reach contractual terms of completion. In the interim, some small site-specific cleanups, contaminated surface water flow remediation, and minor habitat restoration efforts have been tried. The cleanup data gathering, and eventual cleanup projects (of chemical and/or radiological toxins), are under the direction of the DTSC—California Department of Toxic Substances Control of CalEPA, with a 2017 completion deadline/goal. Interim remediation means, contaminant characterization studies, and all mandated cleanup work is funded by the R.P.s—Responsible Parties. They are the DOE—U.S. Department of Energy and The Boeing Company for the ETEC site (~90 acres) within Area IV. For the rest of the SSFL property the R.P.s are Boeing and/or NASA, depending on the Area (I, II, and/or III), contaminant types, and physical toxin location (i.e., surface soils, aquifers, deep bedrock, etc.). See also Santa Susana Field Laboratory Index: Simi Hills References External links DOE: ETEC Closure Project — website Homepage — history, characterization and cleanup DTSC Online Document Library: for DEPARTMENT OF ENERGY (DOE) AREA IV — includes ETEC data, reports, and updates related to cleanup. Atomics International Energy infrastructure in California Nuclear research institutes Industrial buildings and structures in California Buildings and structures in Ventura County, California Simi Hills United States Department of Energy facilities Boeing Civilian nuclear power accidents Radioactively contaminated areas Environmental disasters in the United States Disasters in California History of Ventura County, California Nuclear accidents and incidents in the United States 1966 establishments in California 1998 disestablishments in California
Energy Technology Engineering Center
[ "Chemistry", "Technology", "Engineering" ]
1,731
[ "Nuclear research institutes", "Radioactively contaminated areas", "Nuclear organizations", "Radioactive contamination", "Civilian nuclear power accidents", "Soil contamination", "Environmental impact of nuclear power" ]
23,970,820
https://en.wikipedia.org/wiki/Laser-induced%20incandescence
Laser-induced incandescence (LII) is an in situ method of measuring aerosol particle volume fraction, primary particle sizes, and other thermophysical properties in flames, during gas-phase nanoparticle synthesis, and in aerosol streams more broadly. The technique is prominently used to characterize soot. The technique can broadly be separated into applications involving continuous or pulsed laser sources, with the former implemented in the Single Particle Soot Photometer (SP2) and the latter used in time-resolved laser-induced incandescence (TiRe-LII) analyses. References See also Laser-induced fluorescence Planar laser-induced fluorescence Spectroscopy
Laser-induced incandescence
[ "Physics", "Chemistry", "Astronomy" ]
141
[ "Spectroscopy stubs", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Astronomy stubs", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
23,971,781
https://en.wikipedia.org/wiki/Magnetic%20energy
The potential magnetic energy of a magnet or magnetic moment m in a magnetic field B is defined as the mechanical work of the magnetic force on the re-alignment of the vector of the magnetic dipole moment and is equal to: E = −m·B. The mechanical work takes the form of a torque τ = m × B, which will act to "realign" the magnetic dipole with the magnetic field. In an electronic circuit the energy stored in an inductor (of inductance L) when a current I flows through it is given by: E = (1/2)LI². This expression forms the basis for superconducting magnetic energy storage. It can be derived from a time average of the product of current and voltage across an inductor. Energy is also stored in a magnetic field itself. The energy per unit volume in a region of free space with vacuum permeability μ0 containing magnetic field B is: u = B²/(2μ0). More generally, if we assume that the medium is paramagnetic or diamagnetic so that a linear constitutive equation exists that relates the magnetic field H and the magnetization M (for example B = μH, where μ is the magnetic permeability of the material), then it can be shown that the magnetic field stores an energy of E = (1/2)∫H·B dV, where the integral is evaluated over the entire region where the magnetic field exists. For a magnetostatic system of currents in free space, the stored energy can be found by imagining the process of linearly turning on the currents and their generated magnetic field, arriving at a total energy of: E = (1/2)∫J·A dV, where J is the current density field and A is the magnetic vector potential. This is analogous to the electrostatic energy expression (1/2)∫ρφ dV; note that neither of these static expressions applies in the case of time-varying charge or current distributions. References External links Magnetic Energy, Richard Fitzpatrick Professor of Physics The University of Texas at Austin. Forms of energy Magnetism
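A minimal numerical sketch of two of the expressions above, E = (1/2)LI² and u = B²/(2μ0); the component values are arbitrary illustrations:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def inductor_energy(L, I):
    """Energy stored in an inductor: (1/2) * L * I**2, in joules."""
    return 0.5 * L * I**2

def field_energy_density(B):
    """Energy per unit volume of a field B in free space: B**2 / (2*mu0), in J/m^3."""
    return B**2 / (2 * MU0)

print(inductor_energy(10e-3, 2.0))  # a 10 mH inductor at 2 A stores 0.02 J
print(field_energy_density(1.0))    # a 1 T field stores ~3.98e5 J/m^3
```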
Magnetic energy
[ "Physics", "Materials_science" ]
352
[ "Materials science stubs", "Physical quantities", "Forms of energy", "Energy (physics)", "Electromagnetism stubs" ]
23,971,865
https://en.wikipedia.org/wiki/Tauc%20plot
A Tauc plot is used to determine the optical bandgap, or Tauc bandgap, of either disordered or amorphous semiconductors. In his original work, Jan Tauc showed that the optical absorption spectrum of amorphous germanium resembles the spectrum of the indirect transitions in crystalline germanium (plus a tail due to localized states at lower energies), and proposed an extrapolation to find the optical bandgap of these crystalline-like states. Typically, a Tauc plot shows the quantity hν (the photon energy) on the abscissa (x-coordinate) and the quantity (αhν)^(1/2) on the ordinate (y-coordinate), where α is the absorption coefficient of the material. The resulting plot has a distinct linear region denoting the onset of absorption; extrapolating this linear region to the abscissa yields the energy of the optical bandgap of the amorphous material. A similar procedure is adopted to determine the optical bandgap of crystalline semiconductors. In this case, however, the ordinate is given by (αhν)^(1/r), in which the exponent 1/r denotes the nature of the transition: r = 1/2 for direct allowed transitions r = 3/2 for direct forbidden transitions r = 2 for indirect allowed transitions r = 3 for indirect forbidden transitions Again, the resulting plot (quite often, incorrectly identified as a Tauc plot) has a distinct linear region that, extrapolated to the abscissa, yields the energy of the optical bandgap of the material. See also Band gap Urbach energy References Plots (graphics) Thin films Semiconductor analysis
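A small numerical sketch of the extrapolation itself, with synthetic data standing in for a measured absorption edge; the 1.7 eV gap and all other numbers are invented for illustration:

```python
import numpy as np

# Synthetic (alpha*h*nu)^(1/2) values that are linear above a 1.7 eV gap,
# mimicking the linear region of a Tauc plot for an amorphous material.
E = np.linspace(1.8, 2.6, 20)           # photon energies h*nu, in eV
y = 3.5 * (E - 1.7)                     # (alpha*h*nu)^(1/2), arbitrary units

slope, intercept = np.polyfit(E, y, 1)  # fit the linear region
E_gap = -intercept / slope              # x-intercept = optical bandgap
print(f"Tauc gap ~ {E_gap:.2f} eV")     # recovers ~1.70 eV
```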
Tauc plot
[ "Materials_science", "Mathematics", "Engineering" ]
335
[ "Nanotechnology", "Planes (geometry)", "Thin films", "Materials science" ]
23,974,317
https://en.wikipedia.org/wiki/Andr%C3%A9%E2%80%93Quillen%20cohomology
In commutative algebra, André–Quillen cohomology is a theory of cohomology for commutative rings which is closely related to the cotangent complex. The first three cohomology groups were introduced by Lichtenbaum and Schlessinger and are sometimes called Lichtenbaum–Schlessinger functors T0, T1, T2, and the higher groups were defined independently by Michel André and Daniel Quillen using methods of homotopy theory. It comes with a parallel homology theory called André–Quillen homology. Motivation Let A be a commutative ring, B be an A-algebra, and M be a B-module. The André–Quillen cohomology groups are the derived functors of the derivation functor DerA(B, M). Before the general definitions of André and Quillen, it was known for a long time that given morphisms of commutative rings A → B → C and a C-module M, there is a three-term exact sequence of derivation modules: 0 → DerB(C, M) → DerA(C, M) → DerA(B, M). This sequence can be extended to a six-term exact sequence using the functor Exalcomm of extensions of commutative algebras and a nine-term exact sequence using the Lichtenbaum–Schlessinger functors. André–Quillen cohomology extends this exact sequence even further. In the zeroth degree, it is the module of derivations; in the first degree, it is Exalcomm; and in the second degree, it is the second degree Lichtenbaum–Schlessinger functor. Definition Let B be an A-algebra, and let M be a B-module. Let P be a simplicial cofibrant A-algebra resolution of B. André notates the qth cohomology group of B over A with coefficients in M by H^q(A, B, M), while Quillen notates the same group as D^q(B/A, M). The qth André–Quillen cohomology group is: D^q(B/A, M) = H^q(DerA(P, M)). Let LB/A denote the relative cotangent complex of B over A. Then we have the formulas: D^q(B/A, M) = H^q(HomB(LB/A, M)) and Dq(B/A, M) = Hq(LB/A ⊗B M). See also Cotangent complex Deformation Theory Exalcomm References Generalizations André–Quillen cohomology of commutative S-algebras Homology and Cohomology of E-infinity ring spectra Commutative algebra Homotopy theory Cohomology theories
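For orientation, the six-term low-degree exact sequence alluded to above can be written out as follows; this is the standard formulation for ring maps A → B → C and a C-module M, reproduced here from general knowledge of the subject rather than from the sources cited in this entry:

```latex
0 \to \operatorname{Der}_B(C, M) \to \operatorname{Der}_A(C, M) \to \operatorname{Der}_A(B, M)
  \to \operatorname{Exalcomm}_B(C, M) \to \operatorname{Exalcomm}_A(C, M) \to \operatorname{Exalcomm}_A(B, M)
```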
André–Quillen cohomology
[ "Mathematics" ]
468
[ "Fields of abstract algebra", "Commutative algebra" ]
23,974,457
https://en.wikipedia.org/wiki/C7H7Cl
The molecular formula C7H7Cl (molar mass: 126.58 g/mol, exact mass: 126.0236 u) may refer to: Benzyl chloride, or α-chlorotoluene Chlorotoluenes Molecular formulas
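A quick sanity check of the quoted molar mass from conventional standard atomic weights; the snippet and the weight values used are illustrative assumptions, not part of the original entry:

```python
# Conventional standard atomic weights, g/mol.
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "Cl": 35.453}

molar_mass = 7 * ATOMIC_WEIGHT["C"] + 7 * ATOMIC_WEIGHT["H"] + ATOMIC_WEIGHT["Cl"]
print(f"C7H7Cl: {molar_mass:.2f} g/mol")  # ~126.58 g/mol, matching the value above
```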
C7H7Cl
[ "Physics", "Chemistry" ]
68
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
20,954,465
https://en.wikipedia.org/wiki/Flue-gas%20condensation
Flue gas condensation is a process in which flue gas is cooled below its water dew point and the heat released by the resulting condensation of water is recovered as low-temperature heat. Cooling of the flue gas can be performed either directly with a heat exchanger or indirectly via a condensing scrubber. The condensation of water releases more than 2 GJ per ton of condensed water, which can be recovered in the cooler for e.g. district heating purposes. Excess condensed water must continuously be removed from the process. The downstream gas is saturated with water, so even though significant amounts of water may have been removed from the cooled gas, it is likely to leave a visible stack plume of water vapor. If the fuel contains sulfur, the flue gases will contain oxides of sulfur. If the flue gases are cooled below the acid dew-point, the acid vapor (sulfuric acid, H2SO4) will begin to condense. Acid condensation can result in low-temperature corrosion, which can threaten the safety of the plant. Appropriate corrosion-resistant material selection is important. The heat recovery potential of flue gas condensation is highest for fuels with a high moisture content (e.g. biomass and municipal waste), and where heat is useful at the lowest possible temperatures. Thus flue gas condensation is normally implemented at biomass-fired boilers and waste incinerators connected to district heating grids with relatively low return temperatures. Efficiency exceeding 100 % Flue gas condensation may cause the heat recovered to exceed the Lower Heating Value of the input fuel, and thus an efficiency greater than 100%. Since historically most combustion processes have not condensed the combustion products, usual efficiency calculations assume the combustion products are not condensed. This assumption is implicit when basing calculations on the Lower Heating Value. A more rigorous approach would be to base efficiency calculations on the Higher Heating Value, which typically results in efficiencies less than 100%. Should the flue gases be cooled far enough, even efficiencies based on the Higher Heating Value may exceed 100%, since typical heating value definitions assume that all heat is released when combustion products are cooled to a standard reference temperature. See also Condensing boiler Scrubber Wet scrubber Heat exchanger District heating Energy References External links On Flue gas Condensation by Götaverken Miljö AB Scrubbers Industrial processes Heat exchangers
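A back-of-the-envelope sketch of the recoverable condensation heat; the latent-heat figure of roughly 2.26 GJ per tonne is a standard textbook value assumed here, and it varies slightly with the condensing temperature:

```python
LATENT_HEAT_GJ_PER_TONNE = 2.26  # assumed latent heat of condensation of water

def recovered_heat_gj(tonnes_condensed):
    """Heat released by condensing the given mass of water out of the flue gas."""
    return tonnes_condensed * LATENT_HEAT_GJ_PER_TONNE

print(recovered_heat_gj(10.0))  # condensing 10 t of water releases ~22.6 GJ
```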
Flue-gas condensation
[ "Chemistry", "Engineering" ]
487
[ "Chemical equipment", "Scrubbers", "Heat exchangers" ]
20,956,258
https://en.wikipedia.org/wiki/Aircraft%20systems
Aircraft systems are those required to operate an aircraft efficiently and safely. Their complexity varies with the type of aircraft. Aircraft software systems Aircraft software systems control, manage, and apply the subsystems that are engaged with avionics on board an aircraft. Flight control systems Flight control systems can be manually operated or powered. They are designed to move the flight control surfaces or swashplate, allowing the pilot to maintain or change attitude as required. Landing gear system Landing gear systems for larger aircraft are usually hydraulic for powered retraction/extension of the main legs and doors and also for braking. Anti-skid systems are used to provide maximum braking performance. Hydraulic system A hydraulic system is required for high-speed flight and large aircraft to convert the crew's control system movements to surface movements. The hydraulic system is also used to extend and retract landing gear, operate flaps and slats, and operate the wheel brakes and steering systems. Hydraulic systems consist of engine-driven pumps, fluid reservoirs, oil coolers, valves and actuators. Redundancy for safety is often provided by the use of multiple, isolated systems. Electrical system The electrical system generally consists of a battery, generator or alternator, switches, circuit breakers and instruments such as voltmeters and ammeters. Backup electrical supply can be provided by a ram air turbine (RAT) or hydrazine-powered turbines. Engine bleed air system Bleed air is compressed air taken from the compressor stage of a gas turbine engine upstream of its fuel-burning sections. It is used for several purposes which include cabin pressurisation, cabin heating or cooling, boundary layer control (BLC), ice protection and pressurisation of fuel tanks. Avionics Aircraft avionic systems encompass a wide range of electrical and electronic systems that include flight instruments, radios, and navigation systems. Environmental control system or Cabin control system Aircraft environmental control systems (ECS) provide cabin pressurisation and heating while also providing cooling for electronic systems such as radar. Fuel systems An aircraft fuel system is designed to store and deliver aviation fuel to the propulsion system and auxiliary power unit (APU) if equipped. Fuel systems differ greatly due to the different performance requirements of the aircraft in which they are installed. Propulsion systems Propulsion systems encompass engine installations and their controls. Sub-systems include fire detection and protection and thrust reversal. Ice protection systems Aircraft that regularly operate in icing conditions have systems to detect and prevent ice forming (anti-icing) and/or remove the ice accumulation after it has formed (de-icing). This can be achieved by heating the spaces in the internal structure with engine bleed air, chemical treatment, electrical heating and expansion/contraction of the skin using de-icing boots. References Citations Bibliography Rolls-Royce. The jet engine. Second edition, Derby, Rolls-Royce Limited, 1966. Taylor, John W.R. The Lore of Flight, London: Universal Books Ltd., 1990.
Aircraft systems
[ "Engineering" ]
595
[ "Systems engineering", "Aircraft systems" ]
20,957,324
https://en.wikipedia.org/wiki/Bondi%20accretion
In astrophysics, the Bondi accretion (also called Bondi–Hoyle–Lyttleton accretion), named after Hermann Bondi, is spherical accretion onto a compact object traveling through the interstellar medium. It is generally used in the context of neutron star and black hole accretion. To achieve an approximate form of the Bondi accretion rate, accretion is assumed to occur at a rate $\dot{M} \simeq \pi R^2 \rho v$, where: $\rho$ is the ambient density, $v$ is the object's velocity, or the sound speed $c_s$ in the surrounding medium if the object moves more slowly than sound, and $R$ is the Bondi radius, defined as $R = \frac{2GM}{v^2}$. The Bondi radius comes from setting escape velocity equal to the sound speed and solving for radius. It represents the boundary between subsonic and supersonic infall. Substituting the Bondi radius in the above equation yields: $\dot{M} \simeq \frac{4\pi \rho G^2 M^2}{v^3}$. These are only scaling relations rather than rigorous definitions. A more complete solution can be found in Bondi's original work and two other papers. Application to accreting protoplanets When a planet is forming in a protoplanetary disk, it needs the gas in the disk to fall into its Bondi sphere in order for the planet to be able to accrete an atmosphere. For a massive enough planet, the initial accreted gas can quickly fill up the Bondi sphere. At this point, the atmosphere must cool and contract (through the Kelvin–Helmholtz mechanism) for the planet to be able to accrete more of an atmosphere. Bibliography Bondi (1952) MNRAS 112, 195, link Mestel (1954) MNRAS 114, 437, link Hoyle and Lyttleton (1941) MNRAS 101, 227 References Interstellar media Equations of astronomy
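A minimal numerical sketch of the scaling relation above; the mass, density, and speed used are illustrative assumptions, not values from the text:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bondi_accretion_rate(mass_kg, rho_kg_m3, v_m_s):
    """Approximate Bondi rate M_dot ~ pi * R^2 * rho * v with R = 2GM/v^2.

    v is the object's speed through the medium, or the ambient sound speed
    if the object moves subsonically; the result scales as rho*G^2*M^2/v^3.
    """
    r_bondi = 2.0 * G * mass_kg / v_m_s**2           # Bondi radius, m
    return math.pi * r_bondi**2 * rho_kg_m3 * v_m_s  # kg per second

# Assumed example: a 1.4 solar-mass neutron star moving at 10 km/s through
# interstellar gas of density 1e-21 kg/m^3.
M_SUN = 1.989e30
print(f"{bondi_accretion_rate(1.4 * M_SUN, 1e-21, 1e4):.2e} kg/s")
```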
Bondi accretion
[ "Physics", "Astronomy" ]
352
[ "Interstellar media", "Outer space", "Concepts in astronomy", "Astronomy stubs", "Astrophysics", "Stellar astronomy stubs", "Astrophysics stubs", "Equations of astronomy" ]
20,957,502
https://en.wikipedia.org/wiki/Colloidal%20crystal
A colloidal crystal is an ordered array of colloidal particles and fine-grained materials analogous to a standard crystal whose repeating subunits are atoms or molecules. A natural example of this phenomenon can be found in the gem opal, where spheres of silica assume a close-packed locally periodic structure under moderate compression. Bulk properties of a colloidal crystal depend on composition, particle size, packing arrangement, and degree of regularity. Applications include photonics, materials processing, and the study of self-assembly and phase transitions. Introduction A colloidal crystal is a highly ordered array of particles which can be formed over a long range (up to about a centimeter). Arrays such as these appear to be analogous to their atomic or molecular counterparts with proper scaling considerations. A good natural example of this phenomenon can be found in precious opal, where brilliant regions of pure spectral color result from close-packed domains of colloidal spheres of amorphous silicon dioxide, SiO2. The spherical particles precipitate in highly siliceous pools and form highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of spherical particles make similar arrays of interstitial voids, which act as a natural diffraction grating for light waves in photonic crystals, especially when the interstitial spacing is of the same order of magnitude as the incident lightwave. Origins The origins of colloidal crystals go back to the mechanical properties of bentonite sols, and the optical properties of Schiller layers in iron oxide sols. The properties are supposed to be due to the ordering of monodisperse inorganic particles. Monodisperse colloids capable of forming long-range ordered arrays exist in nature. The discovery by W.M. Stanley of the crystalline forms of the tobacco and tomato viruses provided examples of this. Using X-ray diffraction methods, it was subsequently determined that when concentrated by centrifuging from dilute water suspensions, these virus particles often organized themselves into highly ordered arrays. Rod-shaped particles in the tobacco mosaic virus could form a two-dimensional triangular lattice, while a body-centered cubic structure was formed from the almost spherical particles in the tomato Bushy Stunt Virus. In 1957, a letter describing the discovery of "A Crystallizable Insect Virus" was published in the journal Nature. The virus, known as the Tipula iridescent virus, showed both square and triangular arrays on its crystal faces, from which the authors deduced a face-centered cubic close-packing of the virus particles. This type of ordered array has also been observed in cell suspensions, where the symmetry is well adapted to the mode of reproduction of the organism. The limited content of genetic material places a restriction on the size of the protein to be coded by it. The use of a large number of the same proteins to build a protective shell is consistent with the limited length of RNA or DNA content. It has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations with interparticle separation distances often being considerably greater than the individual particle diameter.
In all of the cases in nature, the same iridescence is caused by the diffraction and constructive interference of visible lightwaves that satisfy Bragg's law. Because of their rarity, and the pathological properties of the viruses, neither opal nor the organic viruses have been very popular in scientific laboratories. A large number of experiments exploring the physics and chemistry of these "colloidal crystals" have emerged as a result of the simple methods that evolved over 20 years for preparing synthetic monodisperse colloids, both polymer and mineral, and for implementing and preserving their long-range ordering through various mechanisms. Trends Colloidal crystals are receiving increased attention, largely due to their mechanisms of ordering and self-assembly, cooperative motion, structures similar to those observed in both liquid and solid condensed matter, and structural phase transitions. Phase equilibrium has been considered within the context of their physical similarities, with appropriate scaling, to elastic solids. Observations of the interparticle separation distance have shown a decrease on ordering. This led to a re-evaluation of Langmuir's beliefs about the existence of a long-range attractive component in the interparticle potential. Colloidal crystals have found application in optics as photonic crystals. Photonics is the science of generating, controlling, and detecting photons (packets of light), particularly in the visible and near-infrared, but also extending to the ultraviolet and far-infrared portions of the electromagnetic spectrum. The science of photonics includes the emission, transmission, amplification, detection, modulation, and switching of lightwaves over a broad range of frequencies and wavelengths. Photonic devices include electro-optic components such as lasers (Light Amplification by Stimulated Emission of Radiation) and optical fiber. Applications include telecommunications, information processing, illumination, spectroscopy, holography, medicine (surgery, vision correction, endoscopy), military (guided missile) technology, agriculture and robotics. Polycrystalline colloidal structures have been identified as the basic elements of submicrometre colloidal materials science. Molecular self-assembly has been observed in various biological systems and underlies the formation of a wide variety of complex biological structures. This includes an emerging class of mechanically superior biomaterials based on microstructure features and designs found in nature. The principal mechanical characteristics and structures of biological ceramics, polymer composites, elastomers, and cellular materials are being re-evaluated, with an emphasis on bioinspired materials and structures. Traditional approaches focus on design methods for biological materials using conventional synthetic materials. Uses have been identified in the synthesis of bioinspired materials through processes that are characteristic of biological systems in nature. This includes the nanoscale self-assembly of the components and the development of hierarchical structures. Bulk crystals Aggregation Aggregation in colloidal dispersions (or stable suspensions) has been characterized by the degree of interparticle attraction. For attractions strong relative to the thermal energy (given by kT), Brownian motion produces irreversibly flocculated structures with growth rates limited by the rate of particle diffusion.
This leads to a description using such parameters as the degree of branching, ramification or fractal dimensionality. A reversible growth model has been constructed by modifying the cluster-cluster aggregation model with a finite inter-particle attraction energy. In systems where forces of attraction are buffered to some degree, a balance of forces leads to an equilibrium phase separation; that is, particles coexist with equal chemical potential in two distinct structural phases. The role of the ordered phase as an elastic colloidal solid has been evidenced by the elastic (or reversible) deformation due to the force of gravity. This deformation can be quantified by the distortion of the lattice parameter, or inter-particle spacing. Viscoelasticity Periodic ordered lattices behave as linear viscoelastic solids when subjected to small amplitude mechanical deformations. Okano's group experimentally correlated the shear modulus to the frequency of standing shear modes using mechanical resonance techniques in the ultrasonic range (40 to 70 kHz). In oscillatory experiments at lower frequencies (< 40 Hz), the fundamental mode of vibration as well as several higher frequency partial overtones (or harmonics) have been observed. Structurally, most systems exhibit a clear instability toward the formation of periodic domains of relatively short-range order. Above a critical amplitude of oscillation, plastic deformation is the primary mode of structural rearrangement. Phase transitions Equilibrium phase transitions (e.g. order/disorder), an equation of state, and the kinetics of colloidal crystallization have all been actively studied, leading to the development of several methods to control the self-assembly of the colloidal particles. Examples include colloidal epitaxy and space-based reduced-gravity techniques, as well as the use of temperature gradients to define a density gradient. This is somewhat counterintuitive as temperature does not play a role in determining the hard-sphere phase diagram. However, hard-sphere single crystals (size 3 mm) have been obtained from a sample in a concentration regime that would remain in the liquid state in the absence of a temperature gradient. Phonon dispersion Using a single colloidal crystal, phonon dispersion of the normal modes of vibration was investigated using photon correlation spectroscopy, or dynamic light scattering. This technique relies on the relaxation or decay of concentration (or density) fluctuations. These are often associated with longitudinal modes in the acoustic range. A distinctive increase in the sound wave velocity (and thus the elastic modulus) by a factor of 2.5 has been observed at the structural transition from colloidal liquid to colloidal solid, or point of ordering. Kossel lines Using a single body-centered cubic colloidal crystal, the occurrence of Kossel lines in diffraction patterns was used to monitor the initial nucleation and the subsequent motion-caused distortion of the crystal. Continuous or homogeneous deformations occurring beyond the elastic limit produce a 'flowing crystal', where the nucleation site density increases significantly with increasing particle concentration. Lattice dynamics have been investigated for longitudinal as well as transverse modes. The same technique was used to evaluate the crystallization process near the edge of a glass tube.
The former might be considered analogous to a homogeneous nucleation event, whereas the latter would clearly be considered a heterogeneous nucleation event, being catalyzed by the surface of the glass tube. Growth rates Small-angle laser light scattering has provided information about spatial density fluctuations or the shape of growing crystal grains. In addition, confocal laser scanning microscopy has been used to observe crystal growth near a glass surface. Electro-optic shear waves have been induced by an ac pulse, and monitored by reflection spectroscopy as well as light scattering. Kinetics of colloidal crystallization have been measured quantitatively, with nucleation rates depending on the suspension concentration. Similarly, crystal growth rates have been shown to decrease linearly with increasing reciprocal concentration. Microgravity Experiments performed in microgravity on the Space Shuttle Columbia suggest that the typical face-centered cubic structure may be induced by gravitational stresses. In microgravity, crystals tend to exhibit the random hexagonal close-packed (rhcp) structure alone (random stacking of hexagonally close-packed crystal planes), in contrast with a mixture of rhcp and face-centred cubic packing when allowed sufficient time to reach mechanical equilibrium under gravitational forces on Earth. Glassy (disordered or amorphous) colloidal samples have become fully crystallized in microgravity in less than two weeks. Thin films Two-dimensional (thin film) semi-ordered lattices have been studied with an optical microscope, as have lattices collected at electrode surfaces. Digital video microscopy has revealed the existence of an equilibrium hexatic phase as well as a strongly first-order liquid-to-hexatic and hexatic-to-solid phase transition. These observations are in agreement with the explanation that melting might proceed via the unbinding of pairs of lattice dislocations. Long-range order Long-range order has been observed in thin films of colloidal liquids under oil, with the faceted edge of an emerging single crystal in alignment with the diffuse streaking pattern in the liquid phase. Structural defects have been directly observed in the ordered solid phase as well as at the interface of the solid and liquid phases. Mobile lattice defects have been observed via Bragg reflections, due to the modulation of the light waves in the strain field of the defect and its stored elastic strain energy. Mobile lattice defects All of the experiments have led to at least one common conclusion: colloidal crystals may indeed mimic their atomic counterparts on appropriate scales of length (spatial) and time (temporal). Defects have been reported to flash by in the blink of an eye in thin films of colloidal crystals under oil using a simple optical microscope. But quantitatively measuring the rate of their propagation presents an entirely different challenge; it has been measured at somewhere near the speed of sound. Non-spherical colloid based crystals Crystalline thin-films from non-spherical colloids were produced using convective assembly techniques. Colloid shapes included dumbbell, hemisphere, disc, and sphero-cylinder shapes. Both purely crystalline and plastic crystal phases could be produced, depending on the aspect ratio of the colloidal particle. Low-aspect-ratio non-spherical colloids, such as bulge-, eyeball-, and snowman-like particles, spontaneously self-assembled into photonic crystal arrays with high uniformity.
The particles were crystallized both as 2D (i.e., monolayer) and 3D (i.e., multilayer) structures. The observed lattice and particle orientations experimentally confirmed a body of theoretical work on the condensed phases of non-spherical objects. Assembly of crystals from non-spherical colloids can also be directed via the use of electrical fields. Applications Photonics Technologically, colloidal crystals have found application in the world of optics as photonic band gap (PBG) materials (or photonic crystals). Synthetic opals as well as inverse opal configurations are being formed either by natural sedimentation or applied forces, both achieving similar results: long-range ordered structures which provide a natural diffraction grating for lightwaves of wavelength comparable to the particle size. Novel PBG materials are being formed from opal-semiconductor-polymer composites, typically utilizing the ordered lattice to create an ordered array of holes (or pores) which is left behind after removal or decomposition of the original particles. Residual hollow honeycomb structures provide a relative index of refraction (ratio of matrix to air) sufficient for selective filters. Variable index liquids or liquid crystals injected into the network alter the ratio and band gap. Such frequency-sensitive devices may be ideal for optical switching and frequency selective filters in the ultraviolet, visible, or infrared portions of the spectrum, as well as higher efficiency antennae at microwave and millimeter wave frequencies. Self-assembly Self-assembly is the most common term in use in the modern scientific community to describe the spontaneous aggregation of particles (atoms, molecules, colloids, micelles, etc.) without the influence of any external forces. Large groups of such particles are known to assemble themselves into thermodynamically stable, structurally well-defined arrays, quite reminiscent of the crystal structures found in metallurgy and mineralogy (e.g. face-centered cubic, body-centered cubic). The fundamental difference in equilibrium structure is in the spatial scale of the unit cell (or lattice parameter) in each particular case. Molecular self-assembly is found widely in biological systems and provides the basis of a wide variety of complex biological structures. This includes an emerging class of mechanically superior biomaterials based on microstructural features and designs found in nature. Thus, self-assembly is also emerging as a new strategy in chemical synthesis and nanotechnology. Molecular crystals, liquid crystals, colloids, micelles, emulsions, phase-separated polymers, thin films and self-assembled monolayers all represent examples of the types of highly ordered structures which are obtained using these techniques. The distinguishing feature of these methods is self-organization. See also Crystal growth Crystal structure Ceramic engineering Diffusion-limited aggregation Nanomaterials Nanoparticle Nucleation Photonic crystal Opal Sol-gel References Further reading M.W. Barsoum, Fundamentals of Ceramics, McGraw-Hill Co., Inc., 1997. W.D. Callister, Jr., Materials Science and Engineering: An Introduction, 7th Ed., John Wiley & Sons, Inc., 2006. W.D. Kingery, H.K. Bowen and D.R. Uhlmann, Introduction to Ceramics, John Wiley & Sons, Inc., 1976. M.N. Rahaman, Ceramic Processing and Sintering, 2nd Ed., Marcel Dekker Inc., 2003. J.S. Reed, Introduction to the Principles of Ceramic Processing, John Wiley & Sons, Inc., 1988. D.W.
Richerson, Modern Ceramic Engineering, 2nd Ed., Marcel Dekker Inc., 1992. W.F. Smith, Principles of Materials Science and Engineering, 3rd Ed., McGraw-Hill, Inc., 1996. L.H. VanVlack, Physical Ceramics for Engineers, Addison-Wesley Publishing Co., Inc., 1964. Colloidal Dispersions, Russel, W.B., et al., Eds., Cambridge Univ. Press (1989) Sol-Gel Science: The Physics and Chemistry of Sol-Gel Processing by C. Jeffrey Brinker and George W. Scherer, Academic Press (1990) Sol-Gel Materials: Chemistry and Applications by John D. Wright, Nico A.J.M. Sommerdijk Sol-Gel Technologies for Glass Producers and Users by Michel A. Aegerter and M. Mennig Sol-Gel Optics: Processing and Applications, Lisa Klein, Springer Verlag (1994) External links University of Utrecht Nucleation and Growth Colloidal chemistry Condensed matter physics Soft matter Crystals
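The Bragg-diffraction origin of opal iridescence described in this article can be sketched numerically. The relation used below is the standard Bragg condition with an effective refractive index folded in; the plane spacing and index are illustrative assumptions, not values from the text:

```python
# Bragg condition lambda = 2 * n_eff * d * sin(theta) for an opal-like
# colloidal crystal. d and n_eff below are assumed, illustrative values.
import math

d_nm = 250.0   # assumed spacing of close-packed silica-sphere planes, nm
n_eff = 1.35   # assumed effective refractive index of spheres plus voids

for theta_deg in (90, 60, 45):  # angle between the beam and the planes
    lam = 2 * n_eff * d_nm * math.sin(math.radians(theta_deg))
    print(f"theta = {theta_deg:2d} deg -> diffracted wavelength ~ {lam:.0f} nm")
```

With these assumed values the reflected wavelength sweeps from about 675 nm down to about 480 nm as the viewing angle changes, which is consistent with the way an opal's color shifts across the visible spectrum with orientation.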
Colloidal crystal
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,592
[ "Colloidal chemistry", "Soft matter", "Phases of matter", "Materials science", "Colloids", "Surface science", "Crystallography", "Crystals", "Condensed matter physics", "Matter" ]
20,962,049
https://en.wikipedia.org/wiki/McGehee%20transformation
The McGehee transformation was introduced by Richard McGehee to study the triple collision singularity in the n-body problem. The transformation blows up the single point in phase space where the collision occurs into a collision manifold: the phase-space point is cut out and in its place a smooth manifold is pasted. This allows the phase-space singularity to be studied in detail. What McGehee found was a distorted sphere with four horns pulled out to infinity and the points at their tips deleted. McGehee then went on to study the flow on the collision manifold. References Celestial Encounters, The Origins of Chaos and Stability, Diacu/Holmes, Princeton Science Library Classical mechanics
McGehee transformation
[ "Physics" ]
139
[ "Mechanics", "Classical mechanics" ]
20,962,073
https://en.wikipedia.org/wiki/Fisher%E2%80%93Tippett%E2%80%93Gnedenko%20theorem
In statistics, the Fisher–Tippett–Gnedenko theorem (also the Fisher–Tippett theorem or the extreme value theorem) is a general result in extreme value theory regarding the asymptotic distribution of extreme order statistics. The maximum of a sample of iid random variables after proper renormalization can only converge in distribution to one of three possible distribution families: the Gumbel distribution, the Fréchet distribution, or the Weibull distribution. Credit for the extreme value theorem and its convergence details is given to Fréchet (1927), Fisher and Tippett (1928), Mises (1936), and Gnedenko (1943). The role of the extremal types theorem for maxima is similar to that of the central limit theorem for averages, except that the central limit theorem applies to the average of a sample from any distribution with finite variance, while the Fisher–Tippett–Gnedenko theorem only states that if the distribution of a normalized maximum converges, then the limit has to be one of a particular class of distributions. It does not state that the distribution of the normalized maximum does converge. Statement Let $X_1, X_2, \ldots, X_n$ be an $n$-sized sample of independent and identically-distributed random variables, each of whose cumulative distribution function is $F$. Suppose that there exist two sequences of real numbers $a_n > 0$ and $b_n$ such that the following limits converge to a non-degenerate distribution function: $\lim_{n\to\infty} P\left(\frac{\max\{X_1,\ldots,X_n\} - b_n}{a_n} \le x\right) = G(x)$, or equivalently: $\lim_{n\to\infty} F^n(a_n x + b_n) = G(x)$. In such circumstances, the limiting function $G$ is the cumulative distribution function of a distribution belonging to either the Gumbel, the Fréchet, or the Weibull distribution family. In other words, if the limit above converges, then up to a linear change of coordinates $G$ will assume either the form $G_\gamma(x) = \exp\left(-(1+\gamma x)^{-1/\gamma}\right)$, with the non-zero parameter $\gamma$ also satisfying $1 + \gamma x > 0$ for every value $x$ supported by $F$. Otherwise it has the form $G_0(x) = \exp\left(-e^{-x}\right)$. This is the cumulative distribution function of the generalized extreme value distribution (GEV) with extreme value index $\gamma$. The GEV distribution groups the Gumbel, Fréchet, and Weibull distributions into a single composite form. Conditions of convergence The Fisher–Tippett–Gnedenko theorem is a statement about the convergence of the limiting distribution $G$ above. The study of conditions for convergence of $G$ to particular cases of the generalized extreme value distribution began with Mises (1936) and was further developed by Gnedenko (1943). Let $F$ be the distribution function of $X$, and $X_1, \ldots, X_n$ be some i.i.d. sample thereof. Also let $x^{\max}$ be the population maximum: $x^{\max} = \sup\{x : F(x) < 1\}$. The limiting distribution of the normalized sample maximum, given by $G$ above, will then be: Fréchet distribution For strictly positive $\gamma > 0$, the limiting distribution converges if and only if $x^{\max} = \infty$ and $\lim_{t\to\infty} \frac{1-F(ut)}{1-F(t)} = u^{-1/\gamma}$ for all $u > 0$. In this case, possible sequences that will satisfy the theorem conditions are $b_n = 0$ and $a_n = F^{-1}\left(1 - \frac{1}{n}\right)$. Strictly positive $\gamma$ corresponds to what is called a heavy-tailed distribution. Gumbel distribution For trivial $\gamma = 0$ and with $x^{\max}$ either finite or infinite, the limiting distribution converges if and only if $\lim_{t \to x^{\max}} \frac{1-F(t + u\,g(t))}{1-F(t)} = e^{-u}$ for all $u$, with the auxiliary function $g(t) = \frac{\int_t^{x^{\max}} (1-F(s))\,ds}{1-F(t)}$. Possible sequences here are $b_n = F^{-1}\left(1-\frac{1}{n}\right)$ and $a_n = g\left(F^{-1}\left(1-\frac{1}{n}\right)\right)$. Weibull distribution For strictly negative $\gamma < 0$, the limiting distribution converges if and only if $x^{\max} < \infty$ (is finite) and $\lim_{t\to 0^+} \frac{1-F(x^{\max} - ut)}{1-F(x^{\max} - t)} = u^{-1/\gamma}$ for all $u > 0$. Note that for this case the exponent $-1/\gamma$ is strictly positive, since $\gamma$ is strictly negative. Possible sequences here are $b_n = x^{\max}$ and $a_n = x^{\max} - F^{-1}\left(1 - \frac{1}{n}\right)$. Note that the second formula (the Gumbel distribution) is the limit of the first (the Fréchet distribution) as $\gamma$ goes to zero.
Examples Fréchet distribution The Cauchy distribution's density function is $f(x) = \frac{1}{\pi(1+x^2)}$, and its cumulative distribution function is $F(x) = \frac{1}{2} + \frac{1}{\pi}\arctan(x)$. A little bit of calculus shows that the right tail's cumulative distribution is asymptotic to $\frac{1}{\pi x}$, that is $1 - F(x) \sim \frac{1}{\pi x}$ as $x \to \infty$, so we have $n\left(1 - F\left(\frac{n}{\pi}x\right)\right) \to \frac{1}{x}$. Thus we have $F^n\left(\frac{n}{\pi}x\right) \to \exp\left(-\frac{1}{x}\right)$, and letting $a_n = \frac{n}{\pi}$ and $b_n = 0$ (and skipping some explanation), $\lim_{n\to\infty} P\left(\frac{M_n}{n/\pi} \le x\right) = \exp\left(-\frac{1}{x}\right)$ for any $x > 0$. Gumbel distribution Let us take the normal distribution with cumulative distribution function $\Phi(x)$. We have the tail asymptotics $1 - \Phi(x) \sim \frac{\varphi(x)}{x} = \frac{e^{-x^2/2}}{x\sqrt{2\pi}}$, and thus $\frac{1-\Phi(x + u/x)}{1-\Phi(x)} \to e^{-u}$ as $x \to \infty$. Hence we have $n\left(1-\Phi\left(c_n + \frac{u}{c_n}\right)\right) \to e^{-u}$. If we define $c_n$ as the value that exactly satisfies $n\left(1-\Phi(c_n)\right) = 1$, then around $x = c_n$, $n\left(1-\Phi(x)\right) \approx e^{-c_n(x - c_n)}$. As $n$ increases, this becomes a good approximation for a wider and wider range of $x - c_n$, so letting $b_n = c_n$ and $a_n = \frac{1}{c_n}$ we find that $\lim_{n\to\infty} P\left(\frac{M_n - b_n}{a_n} \le x\right) = \exp\left(-e^{-x}\right)$. Equivalently, $\lim_{n\to\infty} P\left(M_n \le c_n + \frac{x}{c_n}\right) = \exp\left(-e^{-x}\right)$. With this result, we see retrospectively that we need $c_n \sim \sqrt{2\ln n}$, and then $a_n = \frac{1}{c_n} \to 0$, so the maximum is expected to climb toward infinity ever more slowly. Weibull distribution We may take the simplest example, a uniform distribution between 0 and 1, with cumulative distribution function $F(x) = x$ for any value $x$ from 0 to 1. For values of $0 \le x \le 1$ we have $F^n(x) = x^n$. So for $x < 0$ we have $P\left(M_n \le 1 + \frac{x}{n}\right) = F^n\left(1 + \frac{x}{n}\right) = \left(1 + \frac{x}{n}\right)^n \to e^{x}$. Let $a_n = \frac{1}{n}$ and $b_n = 1$ and get $\lim_{n\to\infty} P\left(\frac{M_n - b_n}{a_n} \le x\right) = e^{x}$ for $x < 0$ (and 1 for $x \ge 0$). Close examination of that limit shows that the expected maximum approaches 1 in inverse proportion to $n$. See also Extreme value theory Gumbel distribution Generalized extreme value distribution Generalized Pareto distribution Pickands–Balkema–de Haan theorem References Further reading Theorems in statistics Extreme value data Tails of probability distributions
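A minimal simulation sketch of the uniform example above, assuming NumPy is available: it checks that the normalized maximum $n(M_n - 1)$ of $n$ i.i.d. Uniform(0,1) variables approaches the limiting CDF $e^x$ for $x < 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 100_000

samples = rng.random((trials, n))
m = samples.max(axis=1)       # M_n for each trial
z = n * (m - 1.0)             # normalized maximum; limit CDF is e^x on x < 0

for x in (-3.0, -2.0, -1.0, -0.5):
    empirical = np.mean(z <= x)
    print(f"P(Z <= {x:4}) ~ {empirical:.4f}   (limit e^x = {np.exp(x):.4f})")
```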
Fisher–Tippett–Gnedenko theorem
[ "Mathematics" ]
913
[ "Mathematical theorems", "Mathematical problems", "Theorems in statistics" ]
26,810,695
https://en.wikipedia.org/wiki/Coefficient%20of%20coincidence
In genetics, the coefficient of coincidence (c.o.c.) is a measure of interference in the formation of chromosomal crossovers during meiosis. It is generally the case that, if there is a crossover at one spot on a chromosome, this decreases the likelihood of a crossover in a nearby spot. This is called interference. The coefficient of coincidence is typically calculated from recombination rates between three genes. If there are three genes in the order A B C, then we can determine how closely linked they are by frequency of recombination. Knowing the recombination rate between A and B and the recombination rate between B and C, we would naively expect the double recombination rate to be the product of these two rates. The coefficient of coincidence is calculated by dividing the actual frequency of double recombinants by this expected frequency: c.o.c. = actual double recombinant frequency / expected double recombinant frequency Interference is then defined as follows: interference = 1 − c.o.c. This figure tells us how strongly a crossover in one of the DNA regions (AB or BC) interferes with the formation of a crossover in the other region. Worked example Drosophila females of genotype a+a b+b c+c were crossed with males of genotype aa bb cc. This led to 1000 progeny of the following phenotypes: a+b+c+: 244 (parental genotype, shows no recombination) a+b+c: 81 (recombinant between B and C) a+bc+: 23 (double recombinant) a+bc: 152 (recombinant between A and B) ab+c+: 148 (recombinant between A and B) ab+c: 27 (double recombinant) abc+: 89 (recombinant between B and C) abc: 236 (parental genotype, shows no recombination) From these numbers it is clear that the b+/b locus lies between the a+/a locus and the c+/c locus. There are 23 + 152 + 148 + 27 = 350 progeny showing recombination between genes A and B. And there are 81 + 23 + 27 + 89 = 220 progeny showing recombination between genes B and C. Thus the expected rate of double recombination is (350 / 1000) * (220 / 1000) = 0.077, or 77 per 1000. However, there are actually only 23 + 27 = 50 double recombinants. The coefficient of coincidence is therefore 50 / 77 = 0.65. Interference is 1 − 0.65 = 0.35. High negative interference When three genetic markers, a, b and c, are all nearby (e.g. within the same gene) the coefficient of coincidence (calculated as in the above example) is generally found to be significantly greater than 1. This implies that any individual recombination event tends to be more closely associated with another nearby recombination event than would be expected by chance. This type of association is known as “negative interference”. When the coefficient of coincidence is substantially greater than 1, it is known as “high negative interference". High negative interference has been reported in bacteriophage T4 and in human immunodeficiency virus (HIV) infections. References Genetics
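The worked example above can be recomputed directly; the phenotype labels and counts are taken from the article itself:

```python
# Coefficient of coincidence for the Drosophila three-point cross above.
counts = {
    "a+b+c+": 244, "a+b+c": 81, "a+bc+": 23, "a+bc": 152,
    "ab+c+": 148, "ab+c": 27, "abc+": 89, "abc": 236,
}
total = sum(counts.values())  # 1000 progeny

recomb_ab = counts["a+bc"] + counts["ab+c+"] + counts["a+bc+"] + counts["ab+c"]  # 350
recomb_bc = counts["a+b+c"] + counts["abc+"] + counts["a+bc+"] + counts["ab+c"]  # 220
doubles = counts["a+bc+"] + counts["ab+c"]                                       # 50

expected_doubles = (recomb_ab / total) * (recomb_bc / total) * total  # 77.0
coc = doubles / expected_doubles
print(f"c.o.c. = {coc:.2f}, interference = {1 - coc:.2f}")  # 0.65 and 0.35
```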
Coefficient of coincidence
[ "Biology" ]
722
[ "Genetics" ]
26,818,853
https://en.wikipedia.org/wiki/Galilean%20cannon
A Galilean cannon is a device that demonstrates conservation of linear momentum. It comprises a stack of balls, starting with a large, heavy ball at the base of the stack and progressing up to a small, lightweight ball at the top. The basic idea is that this stack of balls can be dropped to the ground and almost all of the kinetic energy in the lower balls will be transferred to the topmost ball, which will rebound to many times the height from which it was dropped. At first sight, the behavior seems highly counter-intuitive, but in fact it is precisely what conservation of momentum predicts. The principal difficulty is in keeping the configuration of the balls stable during the initial drop. Early descriptions involve some sort of glue/tape, tube, or net to align the balls. A modern version of the Galilean cannon was sold by Edmund Scientific Corporation and is still sold as the "Astro Blaster". In this device, a heavy wire is threaded through all of the balls to keep them accurately aligned, but the principle is the same. The resulting rebound is quite powerful; in fact, eye safety issues became so prevalent that this toy now comes with safety goggles. It is possible to demonstrate the principle more simply with just two balls, such as a basketball and a tennis ball. If an experimenter balances the tennis ball on top of the basketball and drops the pair to the ground, the tennis ball will rebound to many times the height from which it was released. Calculation for two balls Assuming elastic collisions, uniform gravity, no air resistance and the sizes of the balls being negligible compared to the heights from which they are dropped, formulas for conservation of momentum and kinetic energy can be used to calculate the speed and height of rebound of the small ball: $m_1 v_1 + m_2 v_2 = m_1 v_1' + m_2 v_2'$ and $\frac{1}{2} m_1 v_1^2 + \frac{1}{2} m_2 v_2^2 = \frac{1}{2} m_1 v_1'^2 + \frac{1}{2} m_2 v_2'^2$. Solving the simultaneous equations above for $v_2'$, $v_2' = \frac{2 m_1 v_1 + (m_2 - m_1) v_2}{m_1 + m_2}$. Taking velocities upwards as positive, as the balls fall from the same height and the large ball rebounds off the floor with the same speed, $v_1 = -v_2$ (the negative sign denoting the direction reversed). Thus $v_2' = \frac{(m_2 - 3 m_1) v_2}{m_1 + m_2}$, so $|v_2'| \to 3|v_2|$ when $m_1 \gg m_2$. Because a ball launched upward at speed $v$ rises to height $h = \frac{v^2}{2g}$, the rebound height is linearly proportional to the square of the launch speed, and the maximum rebound height for a two-ball cannon is $3^2 = 9$ times the original drop height, when $m_1 \gg m_2$. See also Newton's cradle References Physics experiments Educational toys
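A small sketch of the two-ball result derived above; the masses are illustrative assumptions (roughly a basketball and a tennis ball):

```python
def rebound_height_ratio(m1, m2):
    """(Rebound height of top ball) / (drop height), assuming perfectly
    elastic collisions and taking upward velocities as positive."""
    v1, v2 = 1.0, -1.0  # in units of the impact speed: big ball up, small down
    v2_after = (2 * m1 * v1 + (m2 - m1) * v2) / (m1 + m2)
    return v2_after ** 2  # rebound height scales with the speed squared

print(rebound_height_ratio(0.600, 0.058))  # ~7.0x for basketball + tennis ball
print(rebound_height_ratio(1e6, 1.0))      # -> 9.0 in the m1 >> m2 limit
```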
Galilean cannon
[ "Physics" ]
473
[ "Experimental physics", "Physics experiments" ]
26,821,712
https://en.wikipedia.org/wiki/Tsunamis%20in%20lakes
A tsunami is a series of large water waves caused by the displacement of a large volume within a body of water, often caused by earthquakes or similar events. This may occur in lakes as well as oceans, presenting threats to both fishermen and shoreside inhabitants. Because they are generated by a near-field source region, tsunamis generated in lakes and reservoirs result in a decreased amount of warning time. Causes Inland tsunami hazards can be generated by many different types of earth movement. Some of these include earthquakes in or around lake systems, landslides, debris flow, rock avalanches, and glacier calving. Volcanogenic processes such as gas and mass flow characteristics are discussed in more detail below. Tsunamis in lakes are very uncommon. Earthquakes Tsunamis in lakes can be generated by fault displacement beneath or around lake systems. Faulting shifts the ground in a vertical motion through reverse, normal or oblique-slip faulting processes; this displaces the water above, causing a tsunami. The reason strike-slip faulting does not cause tsunamis is that there is no vertical displacement within the fault movement, only lateral movement, resulting in no displacement of the water. In an enclosed basin such as a lake, the term tsunami refers to the initial wave produced by coseismic displacement from an earthquake, and seiche to the harmonic resonance within the lake. In order for a tsunami to be generated, certain criteria are required: The earthquake needs to occur just below the lake bottom. The earthquake is of moderate or high magnitude, typically over magnitude four. It displaces a large enough volume of water to generate a tsunami. These tsunamis have high damage potential because they are contained within a relatively small body of water and come from a near-field source. Warning time after the event is reduced, and organised emergency evacuations after the generation of the tsunami are difficult. On low-lying shores, even small waves may lead to substantial flooding. Residents should be made aware of emergency evacuation routes in the event of an earthquake. Lake Tahoe Lake Tahoe may be endangered by a tsunami, due to faulting processes. Located in California and Nevada, it lies within an intermountain basin bounded by faults. Most of these faults are at the lake bottom or hidden in glaciofluvial deposits. Lake Tahoe has been affected by prehistoric eruptions, and in studies of the lake bottom, a 10 m high scarp shows that the lake-bottom sediments have been displaced, indicating that the water above was once displaced, generating a tsunami. A tsunami and seiche in Lake Tahoe can be treated as shallow-water long waves, as the maximum water depth is much smaller than the wavelength. This demonstrates the impact that lakes have on tsunami wave characteristics, which differ from ocean tsunami wave characteristics because the ocean is deep and lakes are relatively shallow in comparison. With ocean tsunamis, wave amplitudes only increase when the tsunami gets close to shore; in lake tsunamis, however, waves are generated and contained in a shallow environment. This would have a major impact on the 34,000 permanent residences along the lake, and on tourism in the area. Tsunami run-ups would leave areas near the lake inundated due to permanent ground subsidence attributed to the earthquake, with the highest run-ups and amplitudes being attributed to the seiches rather than the actual tsunami.
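The shallow-water long-wave treatment mentioned above corresponds to the wave speed formula c = sqrt(g·h). A minimal sketch, with the depth an assumed, illustrative value for a deep lake:

```python
import math

g = 9.81       # gravitational acceleration, m/s^2
depth = 500.0  # m; assumed depth, roughly that of a deep lake such as Tahoe

c = math.sqrt(g * depth)  # shallow-water long-wave speed, valid when
                          # the wavelength is much larger than the depth
print(f"wave speed ~ {c:.0f} m/s ({c * 3.6:.0f} km/h)")  # ~70 m/s, ~252 km/h
```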
Seiches cause damage because of resonance within bays, which reflect the waves so that they combine to make larger standing waves. Lake Tahoe also experienced a massive collapse of the western edge of the basin that formed McKinney Bay around 50,000 years ago. This was thought to have generated a very large tsunami/seiche wave. Sub-aerial mass flows Sub-aerial mass flows (landslides or rapid mass wasting) result when a large amount of sediment becomes unstable, as the result of shaking from an earthquake or of saturation of the sediment, which initiates a sliding layer. The volume of sediment then flows into the lake, causing a sudden large displacement of water. Tsunamis generated by sub-aerial mass flows are defined in terms of the initial wave being the tsunami wave, and such tsunamis are characterised in terms of three zones. A splash zone or wave generation zone is the region where landslide and water motion are coupled; it extends as far as the landslide travels. Next is the near-field area, which is based on the characteristics of the tsunami wave, such as amplitude and wavelength, which are crucial for predictive purposes. Then comes the far-field area, where the process is mainly influenced by dispersion characteristics and which is not often used when investigating tsunamis in lakes. Most lake tsunamis are related only to near-field processes. A modern example of a landslide into a reservoir lake, overtopping a dam, occurred in Italy with the Vajont Dam disaster in 1963. Evidence of landslide-triggered lake tsunamis worldwide exists in paleoseismological observations and other sedimentary core-sample proxies of catastrophic rock failures, including in Lake Geneva during AD 563. New Zealand example In the event of the Alpine fault in New Zealand rupturing in the South Island, it is predicted that there would be shaking of approximately Modified Mercalli Intensity 5 in the lake-side towns of Queenstown (Lake Wakatipu) and Wānaka (Lake Wānaka). These could possibly cause sub-aerial mass flows that could generate tsunamis within the lakes. This would have a devastating impact on the 28,224 residents (2013 New Zealand census) who occupy these lake towns, not only in the potential losses of life and property, but also in the damage to the booming tourism industry, which would require years to rebuild. The Otago Regional Council, responsible for the area, has recognised that in such an event, tsunamis could occur in both lakes. Volcanogenic processes Tsunamis may be generated in lakes by volcanogenic processes, in terms of gas build-up causing violent lake overturns, and by other processes such as pyroclastic flows, which require more complex modeling. Lake overturns can be incredibly dangerous and occur when gas, trapped at the bottom of the lake, is heated by rising magma, causing an explosion and release of gas; an example of this is Lake Kivu. Lake Kivu Lake Kivu, one of the African Great Lakes, lies on the border between the Democratic Republic of the Congo and Rwanda, and is part of the East Africa Rift. As part of the rift, it is affected by volcanic activity beneath the lake. This has led to a buildup of methane and carbon dioxide at the bottom of the lake, which can lead to violent limnic eruptions.
Limnic eruptions (also called "lake overturns") are due to volcanic interaction with the gas-rich water at the bottom of the lake; this leads to heating of the lake, and the rapid rise in temperature would spark a methane explosion displacing a large amount of water, followed nearly simultaneously by a release of carbon dioxide. This carbon dioxide would suffocate large numbers of people, with a possible tsunami generated from water displaced by the gas explosion affecting all of the 2 million people who occupy the shores of Lake Kivu. This is especially important as the warning time for an event such as a lake overturn is incredibly short, on the order of minutes, and the event itself may not even be noticed. Education of locals and preparation are crucial in this case, and much research in this area has been done in order to understand what is happening within the lake and to reduce the effects when this phenomenon does happen. A lake turn-over in Lake Kivu may occur from one of two scenarios. Either (1) up to another hundred years of gas accumulation leads to gas saturation in the lake, resulting in a spontaneous outburst of gas originating at the depth at which gas saturation has exceeded 100%, or (2) a volcanic or even seismic event triggers a turn-over. In either case a strong vertical lift of a large body of water results in a plume of gas bubbles and water rising up to and through the water surface. As the bubbling water column draws in fresh gas-laden water, the bubbling water column widens and becomes more energetic as a virtual "chain reaction" occurs which would look like a watery volcano. Very large volumes of water are displaced, vertically at first, then horizontally away from the centre at the surface and horizontally inwards to the bottom of the bubbling water column, feeding in fresh gas-laden water. The speed of the rising column of water increases until it has the potential to rise 25m or more in the centre above lake level. The water column has the potential to widen to well in excess of a kilometre, in a violent disturbance of the whole lake. The watery volcano may take as much as a day to fully develop while it releases upwards of 400 billion cubic metres of gas (~12 tcf). Some of these parameters are uncertain, particularly the time taken to release the gas and the height to which the water column can rise. As a secondary effect, particularly if the water column behaves irregularly with a series of surges, the lake surface will both rise by up to several metres and create a series of tsunamis or waves radiating away from the epicentre of the eruption. Surface waters may simultaneously race away from the epicentre at speeds as high as 20-40 m/second, slowing as distances from the centre increase. The size of the waves created is unpredictable. Wave heights will be highest if the water column surges periodically, resulting in wave heights as great as 10-20 m. This is caused by the ever-shifting pathway that the vertical column takes to the surface. No reliable model exists to predict this overall turnover behaviour. For tsunami precautions it will be necessary for people to move to high ground, at least 20m above lake level. A worse situation may pertain in the Ruzizi River, where a surge in lake level would cause flash-flooding of the steeply sloping river valley dropping 700m to Lake Tanganyika, where it is possible that a wall of water from 20-50m high may race down the gorge.
Water is not the only problem for residents of the Kivu basin; the more than 400 billion cubic metres of gas released creates a denser-than-air cloud which may blanket the whole valley to a depth of 300m or more. The presence of this opaque gas cloud, which would suffocate any living creatures with its mixture of carbon dioxide and methane laced with hydrogen sulphide, would cause the majority of casualties. Residents would be advised to climb to at least 400m above the lake level to ensure their safety. Strangely, the risk of a gas explosion is not great, as the gas cloud is only about 20% methane in carbon dioxide, a mixture that is difficult to ignite. Modern example Askja At 11:24 PM on 21 July 2014, during a period of earthquake swarms related to the upcoming eruption of Bárðarbunga, an 800m-wide section gave way on the slopes of the Icelandic volcano Askja. Beginning 350m above the water level, it caused a tsunami 20–30 metres high across the caldera, and potentially larger at localized points of impact. Thanks to the late hour, no tourists were present; however, search and rescue observed a steam cloud rising from the volcano, apparently geothermal steam released by the landslide. Whether geothermal activity played a role in the landslide is uncertain. A total of 30–50 million cubic metres was involved in the landslide, raising the caldera's water level by 1–2 metres. Spirit Lake On May 18, 1980, Mount St. Helens erupted and Spirit Lake received the full impact of the lateral blast from the volcano. The blast and the debris avalanche associated with this eruption temporarily displaced much of the lake from its bed and forced lake waters in a wave high up the mountain slopes along the north shoreline of the lake. The debris avalanche deposited a large volume of pyrolized trees, other plant material, volcanic ash, and volcanic debris of various origins into Spirit Lake. The deposition of this volcanic material decreased the lake volume considerably. Lahar and pyroclastic-flow deposits from the eruption blocked the lake's natural pre-eruption outlet to the North Fork Toutle River valley, raising the surface elevation of the lake substantially. The surface area of the lake was increased from 1,300 acres to about 2,200 acres and its maximum depth decreased. Hazard mitigation Hazard mitigation for tsunamis in lakes is immensely important in the preservation of life, infrastructure and property. In order for hazard management of tsunamis in lakes to function at full capacity, four aspects need to be balanced and to interact with each other; these are: Readiness (preparedness for a tsunami in the lake) Evacuation plans Making sure equipment and supplies are on standby in case of a tsunami Education of locals on what hazard is posed to them and what they need to do in the event of a tsunami in the lake Response to the tsunami event in the lake Rescue operations Getting aid into the area such as food and medical equipment Providing temporary housing for people who have been displaced. Recovery from the tsunami Re-establishing damaged road networks and infrastructure Re-building and/or relocation for damaged buildings Cleanup of debris and flooded areas of land. Reduction (plans to reduce the effects of the next tsunami) Putting in place land use zoning to provide a buffer for tsunami run-ups, meaning that buildings cannot be built right on the lake shore.
When all these aspects are taken into consideration and continually managed and maintained, the vulnerability of an area to a tsunami within the lake decreases. This is not because the hazard itself has decreased, but because the awareness of the people who would be affected makes them more prepared to deal with the situation when it does occur. This reduces recovery and response times for an area, decreasing the amount of disruption and in turn the effect the disaster has on the community. Future research Investigation into the phenomenon of tsunamis in lakes has been restricted by certain limitations. Internationally there has been a fair amount of research into certain lakes, but not all lakes that can be affected by the phenomenon have been covered. This is especially true for New Zealand, where the possible occurrence of tsunamis in the major lakes is recognised as a hazard, but no further research has been completed. See also List of tsunamis Ice jam Megatsunami (lists several lake incidents) Mount Breakenridge Quick clay Seiche Footnotes References Limnology Lake Water waves Natural hazards
Tsunamis in lakes
[ "Physics", "Chemistry" ]
2,964
[ "Physical phenomena", "Earth phenomena", "Water waves", "Waves", "Natural hazards", "Fluid dynamics" ]
26,822,145
https://en.wikipedia.org/wiki/Structural%20fracture%20mechanics
Structural fracture mechanics is the field of structural engineering concerned with the study of load-carrying structures that include one or several failed or damaged components. It uses methods of analytical solid mechanics, structural engineering, safety engineering, probability theory, and catastrophe theory to calculate the load and stress in the structural components and analyze the safety of a damaged structure. There is a direct analogy between the fracture mechanics of solids and structural fracture mechanics. There are different causes of the first component failure: mechanical overload, material fatigue, an unpredicted scenario, or "human intervention" such as unprofessional behavior or a terrorist attack. There are two typical scenarios: A localized failure does NOT cause immediate collapse of the entire structure. The entire structure fails immediately after one of its components fails. If the structure does not collapse immediately, there is a limited period of time until the catastrophic structural failure of the entire structure. There is a critical number of structural elements that defines whether the system has reserve ability or not. Safety engineers use the failure of the first component as an indicator and try to intervene during the given period of time to avoid the catastrophe of the entire structure. For example, the "Leak-Before-Break" methodology means that a leak will be discovered prior to a catastrophic failure of the entire piping system occurring in service. It has been applied to pressure vessels, nuclear piping, gas and oil pipelines, etc. The methods of structural fracture mechanics are used as checking calculations to estimate the sensitivity of a structure to the failure of its components. The failure of a complex system with parallel redundancy can be estimated based on the probabilistic properties of the system elements. See also References Structural engineering Continuum mechanics Fracture mechanics Solid mechanics
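The last point above admits a one-line calculation: for a system of independent, parallel, redundant load paths, the system fails only if every path fails, so the system failure probability is the product of the element failure probabilities. A minimal sketch with assumed, illustrative probabilities:

```python
def parallel_failure_probability(p_elements):
    """P(system failure) for independent parallel-redundant elements:
    the structure survives unless every redundant element fails."""
    p = 1.0
    for p_i in p_elements:
        p *= p_i
    return p

# Three redundant load paths, each with an assumed 1% failure probability:
print(parallel_failure_probability([0.01, 0.01, 0.01]))  # 1e-06
```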
Structural fracture mechanics
[ "Physics", "Materials_science", "Engineering" ]
344
[ "Structural engineering", "Solid mechanics", "Fracture mechanics", "Continuum mechanics", "Classical mechanics", "Materials science", "Construction", "Civil engineering", "Mechanics", "Materials degradation" ]
25,324,417
https://en.wikipedia.org/wiki/Welding%20defect
In metalworking, a welding defect is any flaw that compromises the usefulness of a weldment. There are many different types of welding defects, which are classified according to ISO 6520, while acceptable limits for welds are specified in ISO 5817 and ISO 10042. Major causes According to the American Society of Mechanical Engineers (ASME), the causes of welding defects can be classified as follows: 41% poor process conditions, 32% operator error, 12% using the wrong technique, 10% incorrect consumables, and 5% bad weld grooves. Hydrogen embrittlement Residual stresses The magnitude of residual stress caused by the heating, and subsequent cooling, from welding can be roughly calculated using: $\sigma = E \alpha \Delta T$, where $E$ is Young's modulus, $\alpha$ is the coefficient of thermal expansion, and $\Delta T$ is the temperature change. For steel, this approximates to roughly 3.5 GPa (taking $E \approx 200$ GPa, $\alpha \approx 12 \times 10^{-6}$ per kelvin, and $\Delta T \approx 1500$ K). Types Cracks Arc strikes An arc strike is a discontinuity resulting from an arc, consisting of any localized remelted metal, heat-affected metal, or change in the surface profile of any metal object. Arc strikes result in localized base metal heating and very rapid cooling. When located outside the intended weld area, they may result in hardening or localized cracking and may serve as potential sites for subsequent fracturing. In statically loaded structures, arc strikes need not be removed unless such removal is required in contract documents. However, in cyclically loaded structures, arc strikes may result in stress concentrations that would be detrimental to the serviceability of such structures, and arc strikes should be ground smooth and visually inspected for cracks. Cold cracking Cold cracking—also known as delayed cracking, hydrogen-assisted cracking (HAC), or hydrogen-induced cracking (HIC)—is a type of defect that often develops after solidification of the weld, when the temperature starts to drop from about 190 °C (375 °F); the phenomenon often arises at room temperature, and it can take up to 24 hours to appear even after complete cooling. Some codes require testing of welded objects 48 hours after the welding process. This type of crack is usually observed in the heat affected zone (HAZ), especially with carbon steel, which has limited hardenability. For other alloy steels, with a high degree of hardenability, cold cracking could occur in both the weld metal and the HAZ. This crack mechanism can also propagate between grains and through grains. Factors that can contribute to the occurrence of cold cracking are: The amount of hydrogen (H2) dissolved in weld metal: Dissolved hydrogen in the weld metal is related to hydrogen embrittlement. Hydrogen content can be reduced by using hydrogen-free consumables. In the case of welding filler (especially in shielded metal arc welding (SMAW)) exposed to the atmosphere, proper electrode baking is recommended to eliminate moisture from the flux. Preheating of the base material is also one of the techniques used to release hydrogen from the working object. Residual tensile stress: Residual tensile stress can cause cracks to propagate without any applied stress. This can be avoided by preheating the base metal, which reduces the temperature gradients that govern the cooling rate of the weld metal. Utilizing low-yield-strength filler metal is also preferable, because the magnitude of residual stresses can be equal to the yield stress of the metal. Therefore, the use of austenitic stainless steel or nickel-base filler could be considered due to its ductile nature.
Also, post weld heat treatment (PWHT) will release any residual stresses on the weld joint. Hardness of weld metal and heat affected zone (HAZ): Hardness is correlated with the brittleness of the material. To reduce excessive hardness, preheating and PWHT can be applied to the object. Hardness values below 350 VHN have less tendency to crack. Structure of weld metal and HAZ: Cold cracking in steels is associated with forming martensite as the weld cools. Hydrogen has very low solubility in martensite, which can lead to the gas being trapped inside the weld if care isn't taken. Slower cooling rates during the welding process help to avoid martensite formation. In addition, a slower cooling rate means a longer time at an elevated temperature, which allows more hydrogen to escape. A slower cooling rate is achieved by using high heat input and maintaining it during the welding. The alloy composition of the base metal also has an essential role in the likelihood of a cold crack occurring, since that composition relates to the hardenability of materials. With high cooling rates, the risk of forming a hard, brittle structure in the weld metal and HAZ is more likely. The hardenability of a material is usually expressed in terms of its carbon content or, when other elements are taken into account, its carbon equivalent (CE) value. CE_{IIW} = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15 (concentrations are shown as percentages of weight; a numerical sketch of this formula is given at the end of the article). Then, depending on the carbon content (with additional elements influencing the carbon equivalent index), steels can be classified into three zones according to their cold cracking behavior, as shown in the Graville diagram. Zone I includes low-carbon and low-alloy steels with a carbon content lower than 0.10%. Materials that lie in this region are considered not crack-sensitive. Zone II includes most carbon steels with a carbon content above 0.10%. Steels in this zone can be prone to cold cracks. In this case, it is preferable to use low-hydrogen filler and to slow the cooling rate during the welding process. Zone III includes alloy steels with a carbon content above 0.10% and a high carbon equivalent index. Materials in this zone are considered hard to weld because martensite formation is unavoidable, even under controlled cooling. Therefore, additional procedures, such as preheating and PWHT, are needed during the welding process. Crater crack Crater cracks occur when a welding arc is broken and the end crater is left unfilled; a sound crater will form only if adequate molten metal is available to fill the arc cavity. Hat crack Hat cracks get their name from the shape of the weld cross-section, because the weld flares out at the face of the weld. The crack starts at the fusion line and extends up through the weld. They are usually caused by too much voltage or not enough speed. Hot cracking Hot cracking, also known as solidification cracking, can occur with all metals, and happens in the fusion zone of a weld. Excess restraint in the use of material should be avoided to diminish the probability of this type of cracking, and a proper filler material should be utilized. Other causes include a too-high welding current, poor joint design that does not diffuse heat, impurities (such as sulfur and phosphorus), preheating, welding speed being too fast, and long arcs. Underbead crack An underbead crack, also known as a heat-affected zone (HAZ) crack, forms a short distance away from the fusion line; it occurs in low alloy and high alloy steel.
The exact causes of this type of crack are not entirely understood, but it is known that dissolved hydrogen must be present. The other factor that affects this type of crack is internal stresses resulting from: unequal contraction between the base metal and the weld metal, restraint of the base metal, stresses from the formation of martensite, and stresses from the precipitation of hydrogen out of the metal. Longitudinal crack Longitudinal cracks run along the length of a weld bead. There are three types: check cracks, root cracks, and full centerline cracks. Check cracks are visible from the surface and extend partially into the weld. They are usually caused by high shrinkage stresses, especially on final passes, or by a hot cracking mechanism. Root cracks start at the root and extend part-way into the weld. They are the most common type of longitudinal crack because of the small size of the first weld bead. If this type of crack is not addressed, it will usually propagate into subsequent weld passes, which is how full cracks (a crack from the root to the surface) usually form. Reheat cracking Reheat cracking is a type of cracking that occurs in HSLA steels—particularly chromium, molybdenum and vanadium steels—during post-heating. The phenomenon has also been observed in austenitic stainless steel. The poor creep ductility of the heat-affected zone causes such cracks. Any existing defects or notches aggravate crack formation. Conditions that help prevent reheat cracking include preliminary heat treating with a low-temperature soak and then with rapid heating to high temperatures, grinding or peening the weld toes, and using a two-layer welding technique to refine the HAZ grain structure. Root and toe cracks A root crack is formed by a short bead at the root (of the edge preparation) at the beginning of the welding, with low current at the beginning and with improper filler material. The primary reason for these types of cracks is hydrogen embrittlement. These defects can be eliminated by using a higher current at the start and a proper filler material. A toe crack occurs due to moisture content in the welded area; it is a surface crack, so it can be easily detected. Preheating and proper joint formation are a must for eliminating these types of defects. Transverse crack Transverse cracks are perpendicular to the direction of the weld. These are generally the result of longitudinal shrinkage stresses acting on weld metal of low ductility. Crater cracks occur in the crater when the welding arc is terminated prematurely. Crater cracks are typically shallow, hot cracks, usually forming single or star cracks. These cracks usually start at a crater pipe and extend longitudinally in the crater. However, they may propagate into longitudinal weld cracks in the rest of the weld. Distortion Welding methods that involve the melting of metal at the site of the joint are necessarily prone to shrinkage as the heated metal cools. Shrinkage then introduces residual stresses and distortion. Distortion can pose a major problem, since the final product is not the desired shape. To alleviate certain types of distortion, the workpieces can be offset so that after welding, the product is the correct shape. Gas inclusion Gas inclusion—gas entrapment within the solidified weld—manifests itself in a wide variety of defects, including porosity, blow holes, and pipes (or wormholes).
Gas formation can be from any of the following causes—high sulphur content in the workpiece or electrode, excessive moisture from the electrode or workpiece, too short an arc, or wrong welding current or polarity. Other inclusions There are two other types of inclusions: linear inclusions and isolated inclusions. Linear inclusions occur when there is slag or flux in the weld. Slag forms from the use of a flux, which is why this type of defect usually occurs in welding processes that use a flux, such as shielded metal arc welding, flux-cored arc welding, and submerged arc welding; but it can also occur in gas metal arc welding. This defect usually occurs in welds that require multiple passes when there is poor overlap between the welds. The poor overlap does not allow the slag from the previous weld to melt out and rise to the top of the new weld bead. It can also occur if the previous weld left an undercut or an uneven surface profile. To prevent slag inclusions, the slag should be cleaned from the weld bead between passes via grinding, wire brushing, or chipping. Isolated inclusions occur when rust or mill scale is present on the base metal. Lack of fusion and incomplete penetration Lack of fusion is the poor adhesion of the weld bead to the base metal. Incomplete penetration is a weld bead that does not start at the root of the weld groove, leaving channels and crevices in the root of the weld. This causes serious issues in pipes because corrosive substances can settle in these areas. These types of defects occur when the welding procedures are not adhered to; possible causes include the current setting, arc length, electrode angle, and electrode manipulation. Defects can be varied and classified as critical or noncritical. Porosity (bubbles) in the weld is never acceptable. Slag inclusions and undercut are usually tolerated up to 1/8" total within a certain length of weld. Some porosity, cracks, and slag inclusions are visible and need further inspection to determine their acceptability. Liquid penetrant testing (dye check) can verify minor defects. Magnetic particle inspection can discover slag inclusions and cracks just below the surface. Deeper defects can be detected using radiographic (X-ray) and/or ultrasonic (sound wave) testing techniques. Lamellar tearing Lamellar tearing is a welding defect that occurs in rolled steel plates that have been welded together in a way that creates shrinkage forces perpendicular to the faces of the plates; it is caused mainly by sulfurous inclusions in the material. Since the 1970s, changes in manufacturing practices, limiting the amount of sulfur used, have greatly reduced the incidence of this problem. Other causes include excess hydrogen in the alloy. This defect can be mitigated by keeping the amount of sulfur in the steel alloy below 0.005%. Adding rare earth elements, zirconium, or calcium to the alloy, to control the configuration of sulfur inclusions throughout the metal lattice, can also mitigate the problem. Modifying the construction process to use cast or forged parts in place of welded parts can eliminate this problem, as lamellar tearing only occurs in welded parts. Undercut Undercutting is when the weld reduces the base metal's cross-sectional thickness, reducing the strength of the weld and workpieces. One reason for this type of defect is excessive current, which causes the edges of the joint to melt and drain into the weld, thus leaving a drain-like impression along the length of the weld.
Another reason is poor technique that does not deposit enough filler metal along the edges of the weld. A third reason is use of an incorrect filler metal, which will create greater temperature gradients between the center of the weld and the edges. Other causes include too small an electrode angle, a damp electrode, excessive arc length, and slow welding speed. References Bibliography External links Understanding Hydrogen Failures Radiograph Interpretation – Welds Welding
Welding defect
[ "Engineering" ]
3,015
[ "Welding", "Mechanical engineering" ]
25,330,915
https://en.wikipedia.org/wiki/PI3K/AKT/mTOR%20pathway
The PI3K/AKT/mTOR pathway is an intracellular signaling pathway important in regulating the cell cycle. Therefore, it is directly related to cellular quiescence, proliferation, cancer, and longevity. PI3K activation leads to the phosphorylation and activation of AKT, localizing it at the plasma membrane. AKT can have a number of downstream effects such as activating CREB, inhibiting p27, localizing FOXO in the cytoplasm, activating PtdIns-3Ps, and activating mTOR, which can affect transcription of p70 or 4EBP1. There are many known factors that enhance the PI3K/AKT pathway, including EGF, Shh, IGF-1, insulin, and calmodulin. Both leptin and insulin recruit PI3K signalling for metabolic regulation. The pathway is antagonized by various factors including PTEN, GSK3B, and HB9. In many cancers, this pathway is overactive, thus reducing apoptosis and allowing proliferation. This pathway is necessary, however, to promote growth and proliferation over differentiation of adult stem cells, neural stem cells specifically. Researchers are trying to determine the appropriate balance between proliferation and differentiation in order to exploit it in the development of various therapies. Additionally, this pathway has been found to be a necessary component in neural long-term potentiation. Proliferation of neural stem cells Response to glucose Neural stem cells (NSCs) in the brain must find a balance between maintaining their multipotency by self-renewing and proliferating, as opposed to differentiating and becoming quiescent. The PI3K/AKT pathway is crucial in this decision-making process. NSCs are able to sense and respond to changes in the brain or throughout the organism. When blood glucose levels are elevated acutely, insulin is released from the pancreas. Activation of insulin receptors activates the PI3K/AKT pathway, which promotes proliferation. In this way, when there is high glucose and abundant energy in the organism, the PI3K/AKT pathway is activated and NSCs tend to proliferate. When there are low amounts of available energy, the PI3K/AKT pathway is less active and cells adopt a quiescent state. This occurs, in part, when AKT phosphorylates FOXO, keeping FOXO in the cytoplasm. FOXO, when dephosphorylated, can enter the nucleus and work as a transcription factor to promote the expression of various tumor suppressors such as p27 and p21. These tumor suppressors push the NSC to enter quiescence. In FOXO knockouts, cells lose the ability to enter a quiescent state and also lose their neural stem cell character, possibly entering a cancer-like state. PTEN The PI3K/AKT pathway has a natural inhibitor called phosphatase and tensin homolog (PTEN) whose function is to limit proliferation in cells, helping to prevent cancer. Knocking out PTEN has been shown to increase the mass of the brain because of the unregulated proliferation that occurs. PTEN works by dephosphorylating PIP3 to PIP2, which limits AKT's ability to bind to the membrane, decreasing its activity. PTEN deficiencies can be compensated downstream to rescue differentiation or quiescence. Knocking out PTEN is not as serious as knocking out FOXO for this reason. CREB The cAMP response element-binding protein CREB is closely related to the cell's decision to proliferate or not. Cells that are forced to overexpress AKT increase the amount of CREB and proliferation compared to wild-type cells. These cells also express fewer glial and neural cell markers such as GFAP or β-tubulin.
This is because CREB is a transcription factor that influences the transcription of cyclin A, which promotes proliferation. For example, adult hippocampal neural progenitor cells must be held in abeyance as stem cells in order to differentiate later. This is regulated by Shh, which works through a slow, protein-synthesis-dependent mechanism that stimulates other cascades working synergistically with the PI3K/AKT pathway to induce proliferation. Then, the other pathway can be turned off, and the effects of the PI3K/AKT pathway alone become insufficient to stop differentiation. The specifics of this pathway are unknown. Roles in cancer Ovarian cancer The PI3K/AKT/mTOR pathway is a central regulator of ovarian cancer. PIM kinases are overexpressed in many types of cancers, and they also contribute to the regulation of ovarian cancer. PIM kinases have been found to activate mTOR and its upstream effectors, such as AKT, both directly and indirectly. In addition, PIM kinases can phosphorylate IRS, which can alter PI3K. This indicates the close interaction of PIM with the PI3K/AKT/mTOR cascade and its components. Similarly, AKT has been reported to phosphorylate BAD in ovarian cancer (OC) cells. Both PIM and the PI3K/AKT/mTOR network can inhibit p21 and p27 expression in OC cells. These data suggest a strong possibility of interaction and relevance of PIM kinases and the PI3K/AKT/mTOR network in the regulation of ovarian cancer. However, targeting this pathway in ovarian cancer has been challenging, with several trials failing to achieve sufficient clinical benefit. Breast cancer In many kinds of breast cancer, aberrations in the PI3K/AKT/mTOR pathway are the most common genomic abnormalities. The most common known aberrations include the PIK3CA gene mutation and the loss-of-function mutations or epigenetic silencing of PTEN. The phosphoinositide 3-kinase (PI3K)/protein kinase B (Akt)/mammalian target of rapamycin (mTOR) pathway is activated in approximately 30–40% of breast cancer (BC) cases. In triple-negative breast cancer (TNBC), oncogenic activation of the PI3K/AKT/mTOR pathway can happen as a function of overexpression of upstream regulators like EGFR, activating mutations of PIK3CA, and loss of function or expression of phosphatase and tensin homolog (PTEN) and the proline-rich inositol polyphosphatase, which are downregulators of PI3K. This is consistent with the hypothesis that PI3K inhibitors can overcome acquired resistance to endocrine therapy. Urothelial cancer PIK3CA frequently carries gain-of-function mutations in urothelial cancer. Similar to PI3Ka, PI3Kb is expressed in many different cells, and it is mainly involved in the activation of platelets and the development of thrombotic diseases. Studies have shown that PI3Kb contributes to tumor proliferation as well. Specifically, it has an important role in tumorigenesis in PTEN-negative cancers. It has been reported that interfering with the gene for PI3Kb might be a therapeutic approach for high-risk bladder cancers with mutant PTEN and E-cadherin loss. Isoform-specific inhibitors of PI3Kb are a potential treatment for PTEN-deficient cancers. Prostate cancer The PI3K pathway is a major source of drug resistance in prostate cancer. This is particularly true in castration-resistant prostate cancer, where tumours become resistant to androgen-deprivation therapy, which blocks the tumours' ability to utilise the hormone androgen to grow. This is due to a complex feedback mechanism which exists between the androgen receptor and the PI3K pathway.
As in other tumour types, mutations in key genes of this pathway, for example PIK3CA, can lead to its hyperactivation. Increases in the copy number of PIK3CA and increased mRNA expression also raise pathway activation in prostate cancers, among others. Gains in the nearby genetic region 3q26.31-32 have been shown to co-occur with a number of nearby PI3K family members including PIK3CA, PIK3CB and PIK3R4, leading to transcriptional changes in PIK3C2G, PIK3CA, PIK3CB, PIK3R4 as well as pathways associated with cell proliferation. These large spanning gains are associated with Gleason grade, tumour stage, lymph node metastasis and other aggressive clinical features. In patients treated with PI3K inhibitors, those with copy number gains in PIK3CB appear to have increased drug susceptibility. Therapies PI3K inhibitor PI3K inhibitors may overcome drug resistance and improve advanced breast cancer (ABC) outcomes. Different PI3K inhibitors exhibit different effects against the various PI3K types. Class IA pan-PI3K inhibitors have been more extensively studied than isoform-specific inhibitors; pictilisib, for example, is a pan-PI3K inhibitor with greater subunit-α inhibitory activity than buparlisib. Idelalisib is the first PI3K inhibitor approved by the US Food and Drug Administration and is utilized in the treatment of relapsed/refractory chronic lymphocytic leukemia/small lymphocytic lymphoma and follicular lymphoma. Copanlisib is approved for relapsed follicular lymphoma in patients who have received at least two prior systemic therapies. Duvelisib is approved for relapsed/refractory chronic lymphocytic leukemia/small lymphocytic lymphoma (CLL/SLL), and relapsed/refractory follicular lymphoma, both indications for patients who have received at least two prior therapies. Akt inhibitor AKT is downstream of PI3K and is inhibited by ipatasertib. Akt is an AGC-family kinase and a central, integral signaling node of the PAM pathway. There are three Akt isozymes: Akt1, Akt2 and Akt3. Small-molecule inhibitors of Akt1 could be especially useful to target tumors with a high prevalence of Akt1 E17K activating mutations, which are observed in 4–6% of breast cancers and 1–2% of colorectal cancers. Research towards Akt inhibition has focused on inhibition of two distinct binding sites: the allosteric pocket of the inactive enzyme, and the ATP binding site. Allosteric Akt inhibitors, highlighted by MK-2206, have been extensively evaluated in a clinical setting; recently, additional allosteric Akt inhibitors have been identified. ARQ-092 is a potent pan-Akt inhibitor which can inhibit tumor growth preclinically and is currently in Phase I clinical studies. mTOR inhibitor There is a significant correlation of phosphorylated mTOR with the survival rate for patients with stages I and II TNBC. A patient-derived xenograft TNBC model testing the mTOR inhibitor rapamycin showed 77–99% tumor-growth inhibition, which is significantly more than has been seen with doxorubicin; protein phosphorylation studies indicated that constitutive activation of the mTOR pathway decreased with treatment. Dual PI3K/AKT/mTOR inhibitors It has been hypothesized that blockade of the PI3K/AKT/mTOR pathway can lead to increased antitumor activity in TNBC. Preclinical data have shown that the combination of compounds targeting different cognate molecules in the PI3K/AKT/mTOR pathway leads to synergistic activity.
On the basis of these findings, new compounds targeting different components of the PI3K/AKT/mTOR pathway simultaneously continue to be developed. For example, gedatolisib inhibits mutant forms of PI3K-α with elevated kinase activity at concentrations equivalent to the IC50 for wild-type PI3K-α. The PI3K-β, -δ and -γ isoforms were inhibited by gedatolisib at concentrations approximately 10-fold higher than those observed for PI3K-α. Another advantage of simultaneously targeting PI3K and mTOR is the ensuing more robust inhibition of the receptor tyrosine kinase positive-feedback loops seen with isolated PI3K inhibition. Gedatolisib is currently under development for the treatment of TNBC, in combination with a PTK7 antibody–drug conjugate. Apitolisib (GDC-0980) is a PI3K inhibitor (subunits α, δ, and γ) that also targets mTOR. PI3K pathway co-targeted therapy There are numerous cell signalling pathways that exhibit cross-talk with the PI3K pathway, potentially allowing cancer cells to escape inhibition of PI3K. As such, inhibition of the PI3K pathway alongside other targets could offer a synergistic response, such as that seen with PI3K and MEK co-targeted inhibition in lung cancer cells. More recently, co-targeting the PI3K pathway with PIM kinases has been suggested, with numerous pre-clinical studies suggesting the potential benefit of this approach. Development of panels of cell lines that are resistant to inhibition of the PI3K pathway may lead to the identification of future co-targets, and better understanding of which pathways may compensate for loss of PI3K signalling following drug treatment. Combining PI3K inhibition with more traditional therapies such as chemotherapy may also offer improved response over inhibition of PI3K alone. Neural stem cells The type of growth factor signaling can affect whether or not NSCs differentiate into motor neurons. Priming the medium with FGF2 lowers the activity of the PI3K/AKT pathway, which activates GSK3β. This increases expression of HB9. Directly inhibiting PI3K in NSCs leads to a population of cells that are purely HB9+ and differentiate with elevated efficiency into motor neurons. Grafting these cells into different parts of rats generates motor neurons regardless of the transplanted cells' microenvironment. Following injury, neural stem cells enter a repair phase and express high levels of PI3K to enhance proliferation. This is better for survival of the neurons as a whole, but it comes at the expense of generating motor neurons. Therefore, it can be difficult for injured motor neurons to recover their function. It is the purpose of modern research to generate neural stem cells that can proliferate but still differentiate into motor neurons. Lowering the effect of the PI3K pathway and increasing the effect of GSK3β and HB9 in NSCs is a potential way of generating these cells. PTEN inhibitors PTEN is a tumor suppressor that inhibits the PI3K/AKT pathway. PTEN inhibitors, such as bisperoxovanadium, can enhance the PI3K/AKT pathway to promote cell migration, survival and proliferation. While there are some concerns over possible cell cycle dysregulation and tumorigenesis, temporary and moderate PTEN inhibition may confer neuroprotection against traumatic brain injury and improve CNS recovery by reestablishing lost connections through axonogenesis. The medicinal value of PTEN inhibitors remains to be determined.
Long-term potentiation In order for long-term potentiation (LTP) to occur, there must be stimulation of NMDA receptors, which causes AMPA receptors to be inserted postsynaptically. PI3K binds to AMPA receptors in a conserved region to orient the receptors in the membrane, specifically at the GluR subunit. PI3K activity increases in response to calcium ions and CaM. Additionally, AKT localizes PtdIns-3Ps in the post synapse, which recruits docking proteins such as tSNARE and Vam7. This directly leads to the docking of AMPA in the post synapse. mTOR activated p70S6K and inactivated 4EBP1 which changes gene expression to allow LTP to occur. Long-term fear conditioning training was affected in rats but there was no effect in short term conditioning. Specifically, amygdala fear conditioning was lost. This is a type of trace conditioning which is a form of learning that requires association of a conditioned stimulus with an unconditioned stimulus. This effect was lost in PI3K knockdowns and increased in PI3K overexpressions. Role in brain growth In addition to its role in synaptic plasticity described above, PI3K-AKT signaling pathway also has an important role in brain growth, which is altered when PI3K signaling is disturbed. For example, intracranial volume is also associated with this pathway, in particular with AKT3 intronic variants. Thyroid hormone was originally identified as the primary regulator of brain growth and cognition, and recent evidence has demonstrated that thyroid hormone produces some of its effects on the maturation and plasticity of synapses through PI3K. See also AKT inhibitor Akt/PKB signaling pathway mTOR inhibitor PI3K inhibitor PTEN References Signal transduction
PI3K/AKT/mTOR pathway
[ "Chemistry", "Biology" ]
3,573
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
8,352,121
https://en.wikipedia.org/wiki/Baffle%20spray%20scrubber
Baffle spray scrubbers are a technology for air pollution control. They are very similar to spray towers in design and operation. However, in addition to using the energy provided by the spray nozzles, baffles are added to allow the gas stream to atomize some liquid as it passes over them. A simple baffle scrubber system is shown in Figure 1. Liquid sprays capture pollutants and also remove collected particles from the baffles. Adding baffles slightly increases the pressure drop of the system. This type of technology is a part of the group of air pollution controls collectively referred to as wet scrubbers. A number of wet-scrubber designs use energy from both the gas stream and liquid stream to collect pollutants. Many of these combination devices are available commercially. A seemingly unending number of scrubber designs have been developed by changing system geometry and incorporating vanes, nozzles, and baffles. Particle collection These devices are used much the same as spray towers: to preclean or remove particles larger than 10 μm in diameter. However, they will tend to plug or corrode if the particle concentration of the exhaust gas stream is high. Gas collection Even though these devices are not specifically used for gas collection, they are capable of a small amount of gas absorption because of their large wetted surface. Summary These devices are most commonly used as precleaners to remove large particles (>10 μm in diameter). The pressure drops across baffle scrubbers are usually low, but so are the collection efficiencies. Maintenance problems are minimal. The main problem is the buildup of solids on the baffles. Table 1 summarizes the operating characteristics of baffle spray scrubbers. Bibliography Bethea, R. M. 1978. Air Pollution Control Technology. New York: Van Nostrand Reinhold. McIlvaine Company. 1974. The Wet Scrubber Handbook. Northbrook, IL: McIlvaine Company. Richards, J. R. 1995. Control of Particulate Emissions (APTI Course 413). U.S. Environmental Protection Agency. Richards, J. R. 1995. Control of Gaseous Emissions (APTI Course 415). U.S. Environmental Protection Agency. U.S. Environmental Protection Agency. 1969. Control Techniques for Particulate Air Pollutants. AP-51. References Institute of Clean Air Companies - national trade association representing emissions control manufacturers Pollution control technologies Air pollution control systems Wet scrubbers Liquid-phase and gas-phase contacting scrubbers
Baffle spray scrubber
[ "Chemistry", "Engineering" ]
524
[ "Scrubbers", "Wet scrubbers", "Pollution control technologies", "Environmental engineering" ]
6,373,591
https://en.wikipedia.org/wiki/Discontinuous%20deformation%20analysis
Discontinuous deformation analysis (DDA) is a type of discrete element method (DEM) originally proposed by Shi in 1988. DDA is somewhat similar to the finite element method for solving stress-displacement problems, but accounts for the interaction of independent particles (blocks) along discontinuities in fractured and jointed rock masses. DDA is typically formulated as a work-energy method, and can be derived using the principle of minimum potential energy or by using Hamilton's principle. Once the equations of motion are discretized, a step-wise linear time marching scheme in the Newmark family is used for the solution of the equations of motion (a minimal numerical sketch of such a scheme is given at the end of this article). The relation between adjacent blocks is governed by equations of contact interpenetration and accounts for friction. DDA adopts a stepwise approach to solve for the large displacements that accompany discontinuous movements between blocks. The blocks are said to be "simply deformable". Since the method accounts for the inertial forces of the blocks' mass, it can be used to solve the full dynamic problem of block motion. Vs DEM Although DDA and DEM are similar in the sense that they both simulate the behavior of interacting discrete bodies, they are quite different theoretically. While DDA is a displacement method, DEM is a force method. While DDA uses displacements as variables in an implicit formulation, with opening-closing iterations within each time step to achieve equilibrium of the blocks under the constraints of contact, DEM employs an explicit time marching scheme to solve the equations of motion directly (Cundall and Hart). The system of equations in DDA is derived from minimizing the total potential energy of the system being analyzed. This guarantees that equilibrium is satisfied at all times and that energy consumption is natural, since it is due to frictional forces. In DEM, unbalanced forces drive the solution process, and damping is used to dissipate energy. If a quasi-static solution is desired in which the intermediate steps are not of interest, the type of damping and the type of relaxation scheme can be selected in DEM to obtain the most efficient solution method (Cundall). The application of damping in DEM for quasi-static problems is somewhat analogous to setting the initial velocities of the blocks to zero in the static analysis of DDA. In dynamic problems, however, the amount and type of damping in DEM, which are very difficult to quantify experimentally, have to be selected very carefully so as not to damp out real vibrations. On the other hand, the energy consumption in DDA is due to the frictional resistance at contacts. By passing the velocities of the blocks at the end of a time step to the next time step, DDA gives a true dynamic solution with correct energy consumption. By using an energy approach, DDA does not require an artificial damping term to dissipate energy as in DEM, and can easily incorporate other mechanisms for energy loss. Strengths and limitations DDA has several strengths recommending it for use in slope stability problems in jointed rock masses, which are balanced by serious limitations that must be accounted for when DDA is used for larger-scale, faster-moving problems. Strengths Very good for problems with small characteristic lengths, as the time marching scheme provides the necessary numerical damping to control resonance interactions within and between particles. Step-wise linear implicit time marching allows so-called quasi-static solutions, where step-wise velocities are never used.
Quasi-static analysis is useful for examining slow or creeping failures. Limitations The most serious limitation of the DDA method is the reduction of numerical damping which occurs as the characteristic length of a problem grows. The numerical damping is a function of the stiffness-to-mass ratio: typically, the stiffness does not vary over more than 1 or 2 orders of magnitude, while the mass grows as the cube of the characteristic length. Modification and improvement Various modifications to the original DDA formulation have been reported in the rock mechanics literature. In the original DDA formulation a first-order polynomial displacement function was assumed, so the stresses and strains within a block in the model were constant. This approximation precludes the application of this algorithm to problems with significant stress variations within the block. However, in cases where the displacement inside the block is high and cannot be ignored, the blocks can be subdivided with a mesh. An example of this approach is the research by Chang et al. and Jing, who resolved this problem by adding finite element meshes in the two-dimensional domain of the blocks so that stress variations within the blocks can be allowed. Higher-order DDA methods for two-dimensional problems have been developed, in both theory and computer codes, by researchers such as Koo and Chern, Ma et al., and Hsiung. Additionally, the DDA contact model, which was originally based on the penalty method, was improved by adopting the Lagrange-type approach reported by Lin et al. Since a blocky system is a highly non-linear system, due to non-linearity both within blocks and between blocks, Chang et al. implemented a material non-linearity model in DDA using strain hardening curves. Ma developed a non-linear contact model for the analysis of progressive slope failure, including strain softening based on the stress–strain curve. Recent progress in the DDA algorithm, reported by Kim et al. and Jing et al., considers the coupling of fluid flow in fractures. The hydro-mechanical coupling across rock fracture surfaces is also taken into account. The program computes water pressure and seepage throughout the rock mass of interest. In its original formulation, a rock bolt was modeled as a line spring connecting two adjacent blocks. Later, Te-Chin Ke suggested an improved bolt model, followed by a rudimentary formulation of the lateral constraint of rock bolting. References Additional references Shi GH. Block system modeling by discontinuous deformation analysis. Computational Mechanics Publications; 1993. Shi GH. Discontinuous deformation analysis technical note. First international forum on discontinuous deformation analysis, June 12–14. Berkeley, California; 1996. Hatzor, Yossef H.; Ma, Gouwei; Shi, Gen-hua. Discontinuous Deformation Analysis in Rock Mechanics Practice. London: CRC Press. 2017. Computational physics Deformation (mechanics)
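The Newmark-family time marching referred to above can be illustrated with a minimal single-degree-of-freedom sketch in Python. The mass, damping, stiffness, and load values are hypothetical, and this is not a DDA implementation (DDA couples many deformable blocks through contact constraints); it only demonstrates the average-acceleration Newmark update (beta = 1/4, gamma = 1/2).

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, beta=0.25, gamma=0.5, u0=0.0, v0=0.0):
    """Average-acceleration Newmark integration for m*u'' + c*u' + k*u = f(t)."""
    n = len(f)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (f[0] - c * v0 - k * u0) / m
    # Effective stiffness is constant when dt is constant
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        # Effective load built from the previous step's state
        f_eff = (f[i + 1]
                 + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                        + (0.5 / beta - 1) * a[i])
                 + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                        + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = f_eff / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt)
                    - (0.5 / beta - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Hypothetical block: 1000 kg mass, 1e6 N/m contact spring, light damping, 1 kN step load
t = np.arange(0.0, 1.0, 1e-3)
u, v, a = newmark_sdof(m=1000.0, c=200.0, k=1e6, f=np.full(t.size, 1e3), dt=1e-3)
print(f"displacement at t = 1 s: {u[-1]:.4e} m (static equilibrium would be 1e-3 m)")
```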
Discontinuous deformation analysis
[ "Physics", "Materials_science", "Engineering" ]
1,277
[ "Deformation (mechanics)", "Materials science", "Computational physics" ]
1,907,770
https://en.wikipedia.org/wiki/Dispersive%20mass%20transfer
Dispersive mass transfer, in fluid dynamics, is the spreading of mass from highly concentrated areas to less concentrated areas. It is one form of mass transfer. Dispersive mass flux is analogous to diffusion, and it can also be described using Fick's first law: J = -E ∂c/∂x, where J is the dispersive mass flux, c is the mass concentration of the species being dispersed, E is the dispersion coefficient, and x is the position in the direction of the concentration gradient. Dispersion can be differentiated from diffusion in that it is caused by non-ideal flow patterns (i.e. deviations from plug flow) and is a macroscopic phenomenon, whereas diffusion is caused by random molecular motions (i.e. Brownian motion) and is a microscopic phenomenon. Dispersion is often more significant than diffusion in convection-diffusion problems. The dispersion coefficient is frequently modeled as the product of the fluid velocity, U, and some characteristic length scale, α: E = αU. References Transport phenomena
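As a numerical illustration of the two relations above, here is a minimal Python sketch; all values (length scale, velocity, gradient) are made up for the example.

```python
# Illustrative only: hypothetical values in SI units
alpha = 0.05           # characteristic length scale (m)
U = 0.2                # mean fluid velocity (m/s)
E = alpha * U          # dispersion coefficient, E = alpha * U (m^2/s)

dc_dx = -4.0           # concentration gradient (kg/m^3 per m)
J = -E * dc_dx         # Fick-type dispersive flux, J = -E dc/dx (kg/(m^2 s))

print(f"E = {E:.3e} m^2/s, J = {J:.3e} kg/(m^2 s)")
```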
Dispersive mass transfer
[ "Physics", "Chemistry", "Engineering" ]
194
[ "Transport phenomena", "Physical phenomena", "Chemical engineering", "Fluid dynamics stubs", "Fluid dynamics" ]
1,908,016
https://en.wikipedia.org/wiki/Einstein%20solid
The Einstein solid is a model of a crystalline solid that contains a large number of independent three-dimensional quantum harmonic oscillators of the same frequency. The independence assumption is relaxed in the Debye model. While the model provides qualitative agreement with experimental data, especially for the high-temperature limit, these oscillations are in fact phonons, or collective modes involving many atoms. Albert Einstein was aware that getting the frequency of the actual oscillations would be difficult, but he nevertheless proposed this theory because it was a particularly clear demonstration that quantum mechanics could solve the specific heat problem in classical mechanics. Historical impact The original theory proposed by Einstein in 1907 has great historical relevance. The heat capacity of solids as predicted by the empirical Dulong–Petit law was required by classical mechanics: the specific heat of solids should be independent of temperature. But experiments at low temperatures showed that the heat capacity changes, going to zero at absolute zero. As the temperature goes up, the specific heat goes up until it approaches the Dulong and Petit prediction at high temperature. By employing Planck's quantization assumption, Einstein's theory accounted for the observed experimental trend for the first time. Together with the photoelectric effect, this became one of the most important pieces of evidence for the need of quantization. Einstein used the levels of the quantum mechanical oscillator many years before the advent of modern quantum mechanics. Heat capacity For a thermodynamic approach, the heat capacity can be derived using different statistical ensembles. All solutions are equivalent at the thermodynamic limit. Microcanonical ensemble The heat capacity of an object at constant volume V is defined through the internal energy U as C_V = (∂U/∂T)_V. T, the temperature of the system, can be found from the entropy: 1/T = ∂S/∂U. To find the entropy consider a solid made of N atoms, each of which has 3 degrees of freedom. So there are 3N quantum harmonic oscillators (hereafter SHOs for "Simple Harmonic Oscillators"). Possible energies of an SHO are given by E_n = ħω(n + 1/2), where n is usually interpreted as the excitation state of the oscillating mass, but here n is interpreted as the number of phonons (bosons) occupying that vibrational mode (frequency). The net effect is that the energy levels are evenly spaced, and one can define a quantum of energy due to a phonon as ε = ħω, which is the smallest and only amount by which the energy of an SHO is increased. Next, we must compute the multiplicity of the system. That is, compute the number of ways to distribute q quanta of energy among the 3N SHOs. This task becomes simpler if one thinks of distributing q pebbles over 3N boxes, or separating stacks of pebbles with 3N − 1 partitions, or arranging q pebbles and 3N − 1 partitions. The last picture is the most telling. The number of arrangements of n objects is n!. So the number of possible arrangements of q pebbles and 3N − 1 partitions is (q + 3N − 1)!. However, if partition #3 and partition #5 trade places, no one would notice. The same argument goes for quanta. To obtain the number of possible distinguishable arrangements one has to divide the total number of arrangements by the number of indistinguishable arrangements. There are q! identical quanta arrangements and (3N − 1)! identical partition arrangements. Therefore, the multiplicity of the system is given by Ω = (q + 3N − 1)! / (q! (3N − 1)!), which, as mentioned before, is the number of ways to deposit q quanta of energy into 3N oscillators.
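A quick numerical check of the multiplicity formula above, as a minimal Python sketch; the function name and the small toy numbers are mine, and the oscillator count plays the role of 3N.

```python
from math import comb

def multiplicity(q, n_osc):
    """Ways to distribute q energy quanta among n_osc oscillators:
    Omega = (q + n_osc - 1)! / (q! * (n_osc - 1)!) = C(q + n_osc - 1, q)."""
    return comb(q + n_osc - 1, q)

# Toy system: 3 quanta shared among 3 oscillators -> 10 distinguishable states
print(multiplicity(3, 3))      # 10
print(multiplicity(100, 300))  # grows astronomically even for modest sizes
```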
Entropy of the system has the form S = k ln Ω = k ln[(q + 3N − 1)! / (q! (3N − 1)!)]. 3N is a huge number—subtracting one from it has no overall effect whatsoever: Ω ≈ (q + 3N)! / (q! (3N)!). With the help of Stirling's approximation, entropy can be simplified: S/k ≈ (q + 3N) ln(q + 3N) − 3N ln(3N) − q ln(q). Total energy of the solid is given by U = 3Nε/2 + qε, since there are q energy quanta in total in the system in addition to the ground state energy of each oscillator. Some authors, such as Schroeder, omit this ground state energy in their definition of the total energy of an Einstein solid. We are now ready to compute the temperature: 1/T = ∂S/∂U = (1/ε) ∂S/∂q = (k/ε) ln(1 + 3N/q). Elimination of q between the two preceding formulas gives for U: U = 3Nε/2 + 3Nε/(e^(ε/kT) − 1). The first term is associated with zero point energy and does not contribute to specific heat. It will therefore be lost in the next step. Differentiating with respect to temperature to find C_V, we obtain: C_V = ∂U/∂T = 3Nk (ε/kT)^2 e^(ε/kT) / (e^(ε/kT) − 1)^2. Although the Einstein model of the solid predicts the heat capacity accurately at high temperatures, in which limit C_V → 3Nk, equivalent to the Dulong–Petit law, the heat capacity noticeably deviates from experimental values at low temperatures. See the Debye model for how to calculate accurate low-temperature heat capacities. Canonical ensemble Heat capacity is obtained through the use of the canonical partition function of a simple quantum harmonic oscillator, Z = Σ_(n=0..∞) e^(−E_n/kT), where E_n = ħω(n + 1/2); substituting this into the partition function sum yields Z = e^(−ħω/2kT) / (1 − e^(−ħω/kT)). This is the partition function of one harmonic oscillator. Because, statistically, heat capacity, energy, and entropy of the solid are equally distributed among its atoms, we can work with this partition function to obtain those quantities and then simply multiply them by 3N to get the total. Next, let's compute the average energy of each oscillator, ⟨E⟩ = −∂(ln Z)/∂β, where β = 1/(kT). Therefore, ⟨E⟩ = ħω/2 + ħω/(e^(ħω/kT) − 1). Heat capacity of one oscillator is then C = ∂⟨E⟩/∂T = k (ħω/kT)^2 e^(ħω/kT) / (e^(ħω/kT) − 1)^2. Up to now, we calculated the heat capacity of a unique degree of freedom, which has been modeled as a quantum harmonic oscillator. The heat capacity of the entire solid is then given by C_V = 3NC, where the total number of degrees of freedom of the solid is three (for the three directional degrees of freedom) times N, the number of atoms in the solid. One thus obtains C_V = 3Nk (ħω/kT)^2 e^(ħω/kT) / (e^(ħω/kT) − 1)^2, which is algebraically identical to the formula derived in the previous section. The quantity T_E = ħω/k has the dimensions of temperature and is a characteristic property of a crystal. It is known as the Einstein temperature. Hence, the Einstein crystal model predicts that the energy and heat capacities of a crystal are universal functions of the dimensionless ratio T/T_E. Similarly, the Debye model predicts a universal function of the ratio T/T_D, where T_D is the Debye temperature. Limitations and succeeding model In Einstein's model, the specific heat approaches zero exponentially fast at low temperatures. This is because all the oscillations have one common frequency. The correct behavior is found by quantizing the normal modes of the solid in the same way that Einstein suggested. Then the frequencies of the waves are not all the same, and the specific heat goes to zero as a T^3 power law, which matches experiment. This modification is called the Debye model, which appeared in 1912. See also Kinetic theory of solids References External links Condensed matter physics Albert Einstein
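To see the two limits just described numerically, here is a small sketch evaluating the Einstein heat capacity per oscillator in units of k as a function of T/T_E; the sampled temperature ratios are arbitrary.

```python
import math

def einstein_cv_per_mode(t_ratio):
    """Heat capacity of one oscillator divided by k, at temperature T = t_ratio * T_E.
    From the formula above: C/k = x^2 * e^x / (e^x - 1)^2 with x = T_E / T."""
    x = 1.0 / t_ratio
    return x**2 * math.exp(x) / math.expm1(x)**2

for r in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"T/T_E = {r:4.1f}  ->  C/k = {einstein_cv_per_mode(r):.4f}")
# C/k -> 0 exponentially as T/T_E -> 0, and approaches the Dulong-Petit
# value of 1 per oscillator as T/T_E -> infinity
```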
Einstein solid
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,310
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
1,908,047
https://en.wikipedia.org/wiki/Panoramagram
The panoramagram is an instrument invented in 1824 and a method of stereoscopic viewing in which the left-eye and right-eye photographs are divided into narrow juxtaposed strips and viewed through a superimposed ruled or lenticular screen in such a way that each of the observer's eyes is able to see only the correct picture. It is also used to obtain the illusion of depth for one or more objects placed on the horizon and reflected on a flat surface. References Sources Further reading Optical devices
Panoramagram
[ "Materials_science", "Engineering" ]
101
[ "Glass engineering and science", "Optical devices" ]
1,908,142
https://en.wikipedia.org/wiki/Finite-difference%20time-domain%20method
Finite-difference time-domain (FDTD) or Yee's method (named after the Chinese American applied mathematician Kane S. Yee, born 1934) is a numerical analysis technique used for modeling computational electrodynamics (finding approximate solutions to the associated system of differential equations). Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way. The FDTD method belongs in the general class of grid-based differential numerical modeling methods (finite difference methods). The time-dependent Maxwell's equations (in partial differential form) are discretized using central-difference approximations to the space and time partial derivatives. The resulting finite-difference equations are solved in either software or hardware in a leapfrog manner: the electric field vector components in a volume of space are solved at a given instant in time; then the magnetic field vector components in the same spatial volume are solved at the next instant in time; and the process is repeated over and over again until the desired transient or steady-state electromagnetic field behavior is fully evolved. History Finite difference schemes for time-dependent partial differential equations (PDEs) have been employed for many years in computational fluid dynamics problems, including the idea of using centered finite difference operators on staggered grids in space and time to achieve second-order accuracy. The novelty of Kane Yee's FDTD scheme, presented in his seminal 1966 paper, was to apply centered finite difference operators on staggered grids in space and time for each electric and magnetic vector field component in Maxwell's curl equations. The descriptor "Finite-difference time-domain" and its corresponding "FDTD" acronym were originated by Allen Taflove in 1980. Since about 1990, FDTD techniques have emerged as primary means to computationally model many scientific and engineering problems dealing with electromagnetic wave interactions with material structures. Current FDTD modeling applications range from near-DC (ultralow-frequency geophysics involving the entire Earth-ionosphere waveguide) through microwaves (radar signature technology, antennas, wireless communications devices, digital interconnects, biomedical imaging/treatment) to visible light (photonic crystals, nanoplasmonics, solitons, and biophotonics). In 2006, an estimated 2,000 FDTD-related publications appeared in the science and engineering literature (see Popularity). As of 2013, there are at least 25 commercial/proprietary FDTD software vendors; 13 free-software/open-source-software FDTD projects; and 2 freeware/closed-source FDTD projects, some not for commercial use (see External links). Development of FDTD and Maxwell's equations An appreciation of the basis, technical development, and possible future of FDTD numerical techniques for Maxwell's equations can be developed by first considering their history. FDTD models and methods When Maxwell's differential equations are examined, it can be seen that the change in the E-field in time (the time derivative) is dependent on the change in the H-field across space (the curl).
This results in the basic FDTD time-stepping relation that, at any point in space, the updated value of the E-field in time is dependent on the stored value of the E-field and the numerical curl of the local distribution of the H-field in space. The H-field is time-stepped in a similar manner. At any point in space, the updated value of the H-field in time is dependent on the stored value of the H-field and the numerical curl of the local distribution of the E-field in space. Iterating the E-field and H-field updates results in a marching-in-time process wherein sampled-data analogs of the continuous electromagnetic waves under consideration propagate in a numerical grid stored in the computer memory. This description holds true for 1-D, 2-D, and 3-D FDTD techniques. When multiple dimensions are considered, calculating the numerical curl can become complicated. Kane Yee's seminal 1966 paper proposed spatially staggering the vector components of the E-field and H-field about rectangular unit cells of a Cartesian computational grid so that each E-field vector component is located midway between a pair of H-field vector components, and conversely. This scheme, now known as a Yee lattice, has proven to be very robust, and remains at the core of many current FDTD software constructs. Furthermore, Yee proposed a leapfrog scheme for marching in time wherein the E-field and H-field updates are staggered so that E-field updates are conducted midway during each time-step between successive H-field updates, and conversely. On the plus side, this explicit time-stepping scheme avoids the need to solve simultaneous equations, and furthermore yields dissipation-free numerical wave propagation. On the minus side, this scheme mandates an upper bound on the time-step to ensure numerical stability. As a result, certain classes of simulations can require many thousands of time-steps for completion. Using the FDTD method To implement an FDTD solution of Maxwell's equations, a computational domain must first be established. The computational domain is simply the physical region over which the simulation will be performed. The E and H fields are determined at every point in space within that computational domain. The material of each cell within the computational domain must be specified. Typically, the material is either free-space (air), metal, or dielectric. Any material can be used as long as the permeability, permittivity, and conductivity are specified. The permittivity of dispersive materials in tabular form cannot be directly substituted into the FDTD scheme. Instead, it can be approximated using multiple Debye, Drude, Lorentz or critical point terms. This approximation can be obtained using open fitting programs and does not necessarily have physical meaning. Once the computational domain and the grid materials are established, a source is specified. The source can be current on a wire, applied electric field or impinging plane wave. In the last case FDTD can be used to simulate light scattering from arbitrary shaped objects, planar periodic structures at various incident angles, and photonic band structure of infinite periodic structures. Since the E and H fields are determined directly, the output of the simulation is usually the E or H field at a point or a series of points within the computational domain. The simulation evolves the E and H fields forward in time. Processing may be done on the E and H fields returned by the simulation. Data processing may also occur while the simulation is ongoing. 
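As an illustration of the Yee leapfrog update described above, here is a minimal one-dimensional vacuum FDTD sketch in Python. The grid size, pulse shape, and source position are arbitrary choices, the ends of the grid act as simple reflecting walls, and real solvers add absorbing boundaries and material handling; the time step is set through the Courant number S = c·dt/dx, which must not exceed 1 (the stability bound mentioned above).

```python
import numpy as np

# 1D FDTD in vacuum with E and H staggered in space and time (Yee scheme),
# using normalized units so the update coefficients reduce to the Courant number.
nx, nt, S = 400, 600, 0.99   # grid points, time steps, Courant number (S <= 1)
ez = np.zeros(nx)            # E-field nodes
hy = np.zeros(nx - 1)        # H-field nodes, offset half a cell from E

for n in range(nt):
    # Update H from the spatial difference (curl) of E
    hy += S * (ez[1:] - ez[:-1])
    # Update interior E from the spatial difference (curl) of H
    ez[1:-1] += S * (hy[1:] - hy[:-1])
    # Soft Gaussian source injected at an arbitrary grid point
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```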
While the FDTD technique computes electromagnetic fields within a compact spatial region, scattered and/or radiated far fields can be obtained via near-to-far-field transformations. Strengths of FDTD modeling Every modeling technique has strengths and weaknesses, and the FDTD method is no different. FDTD is a versatile modeling technique used to solve Maxwell's equations. It is intuitive, so users can easily understand how to use it and know what to expect from a given model. FDTD is a time-domain technique, and when a broadband pulse (such as a Gaussian pulse) is used as the source, the response of the system over a wide range of frequencies can be obtained with a single simulation. This is useful in applications where resonant frequencies are not exactly known, or any time a broadband result is desired. Since FDTD calculates the E and H fields everywhere in the computational domain as they evolve in time, it lends itself to providing animated displays of the electromagnetic field movement through the model. This type of display is useful for understanding what is going on in the model and for helping to ensure that the model is working correctly. The FDTD technique allows the user to specify the material at all points within the computational domain. A wide variety of linear and nonlinear dielectric and magnetic materials can be naturally and easily modeled. FDTD allows the effects of apertures to be determined directly. Shielding effects can be found, and the fields both inside and outside a structure can be found directly or indirectly. FDTD uses the E and H fields directly. Since most EMI/EMC modeling applications are interested in the E and H fields, it is convenient that no conversions must be made after the simulation has run to get these values. Weaknesses of FDTD modeling Since FDTD requires that the entire computational domain be gridded, and the grid spatial discretization must be sufficiently fine to resolve both the smallest electromagnetic wavelength and the smallest geometrical feature in the model, very large computational domains can be developed, which results in very long solution times. Models with long, thin features (like wires) are difficult to model in FDTD because of the excessively large computational domain required. Methods such as eigenmode expansion can offer a more efficient alternative as they do not require a fine grid along the z-direction. There is no way to determine unique values for permittivity and permeability at a material interface. Space and time steps must satisfy the CFL condition, or the leapfrog integration used to solve the partial differential equation is likely to become unstable. FDTD finds the E/H fields directly everywhere in the computational domain. If the field values at some distance are desired, it is likely that this distance will force the computational domain to be excessively large. Far-field extensions are available for FDTD, but require some amount of postprocessing. Since FDTD simulations calculate the E and H fields at all points within the computational domain, the computational domain must be finite to permit its residence in the computer memory. In many cases this is achieved by inserting artificial boundaries into the simulation space. Care must be taken to minimize errors introduced by such boundaries. There are a number of available highly effective absorbing boundary conditions (ABCs) to simulate an infinite unbounded computational domain.
Most modern FDTD implementations instead use a special absorbing "material", called a perfectly matched layer (PML), to implement absorbing boundaries. Because FDTD is solved by propagating the fields forward in the time domain, the electromagnetic time response of the medium must be modeled explicitly. For an arbitrary response, this involves a computationally expensive time convolution, although in most cases the time response of the medium (or dispersion) can be adequately and simply modeled using either the recursive convolution (RC) technique, the auxiliary differential equation (ADE) technique, or the Z-transform technique. An alternative way of solving Maxwell's equations that can treat arbitrary dispersion easily is the pseudo-spectral spatial domain (PSSD) method, which instead propagates the fields forward in space. Grid truncation techniques The most commonly used grid truncation techniques for open-region FDTD modeling problems are the Mur absorbing boundary condition (ABC), the Liao ABC, and various perfectly matched layer (PML) formulations. The Mur and Liao techniques are simpler than PML. However, PML (which is technically an absorbing region rather than a boundary condition per se) can provide orders-of-magnitude lower reflections. The PML concept was introduced by J.-P. Berenger in a seminal 1994 paper in the Journal of Computational Physics. Since 1994, Berenger's original split-field implementation has been modified and extended to the uniaxial PML (UPML), the convolutional PML (CPML), and the higher-order PML. The latter two PML formulations have increased ability to absorb evanescent waves, and therefore can in principle be placed closer to a simulated scattering or radiating structure than Berenger's original formulation. To reduce undesired numerical reflection from the PML, an additional technique of back absorbing layers can be used. Popularity Notwithstanding both the general increase in academic publication throughput during the same period and the overall expansion of interest in all computational electromagnetics (CEM) techniques, there are seven primary reasons for the tremendous expansion of interest in FDTD computational solution approaches for Maxwell's equations: FDTD does not require a matrix inversion. Being a fully explicit computation, FDTD avoids the difficulties with matrix inversions that limit the size of frequency-domain integral-equation and finite-element electromagnetics models to generally fewer than 10^9 electromagnetic field unknowns. FDTD models with as many as 10^9 field unknowns have been run; there is no intrinsic upper bound to this number. FDTD is accurate and robust. The sources of error in FDTD calculations are well understood, and can be bounded to permit accurate models for a very large variety of electromagnetic wave interaction problems. FDTD treats impulsive behavior naturally. Being a time-domain technique, FDTD directly calculates the impulse response of an electromagnetic system. Therefore, a single FDTD simulation can provide either ultrawideband temporal waveforms or the sinusoidal steady-state response at any frequency within the excitation spectrum. FDTD treats nonlinear behavior naturally. Being a time-domain technique, FDTD directly calculates the nonlinear response of an electromagnetic system. This allows natural hybridizing of FDTD with sets of auxiliary differential equations that describe nonlinearities from either the classical or semi-classical standpoint.
One research frontier is the development of hybrid algorithms which join FDTD classical electrodynamics models with phenomena arising from quantum electrodynamics, especially vacuum fluctuations, such as the Casimir effect. FDTD is a systematic approach. With FDTD, specifying a new structure to be modeled is reduced to a problem of mesh generation rather than the potentially complex reformulation of an integral equation. For example, FDTD requires no calculation of structure-dependent Green functions. Parallel-processing computer architectures have come to dominate supercomputing. FDTD scales with high efficiency on parallel-processing CPU-based computers, and extremely well on recently developed GPU-based accelerator technology. Computer visualization capabilities are increasing rapidly. While this trend positively influences all numerical techniques, it is of particular advantage to FDTD methods, which generate time-marched arrays of field quantities suitable for use in color videos to illustrate the field dynamics. Taflove has argued that these factors combine to suggest that FDTD will remain one of the dominant computational electrodynamics techniques (as well as, potentially, for other multiphysics problems). See also Computational electromagnetics Eigenmode expansion Beam propagation method Finite-difference frequency-domain Finite element method Scattering-matrix method Discrete dipole approximation References Further reading Allen Taflove's interview, "Numerical Solution," in the January 2015 focus issue of Nature Photonics honoring the 150th anniversary of the publication of Maxwell's equations. This interview touches on how the development of FDTD ties into the century and one-half history of Maxwell's theory of electrodynamics: Nature Photonics interview External links Free software/Open-source software FDTD projects: FDTD++: advanced, fully featured FDTD software, along with sophisticated material models and predefined fits as well as discussion/support forums and email support openEMS (Fully 3D Cartesian & Cylindrical graded mesh EC-FDTD Solver, written in C++, using a Matlab/Octave-Interface) pFDTD (3D C++ FDTD codes developed by Se-Heon Kim) JFDTD (2D/3D C++ FDTD codes developed for nanophotonics by Jeffrey M. McMahon) WOLFSIM (NCSU) (2-D) Meep (MIT, 2D/3D/cylindrical parallel FDTD) (Geo-) Radar FDTD bigboy (unmaintained, no release files. must get source from cvs) Parallel (MPI&OpenMP) FDTD codes in C++ (developed by Zs. Szabó) FDTD code in Fortran 90 FDTD code in C for 2D EM Wave simulation Angora (3D parallel FDTD software package, maintained by Ilker R. Capoglu) GSvit (3D FDTD solver with graphics card computing support, written in C, graphical user interface XSvit available) gprMax (Open Source (GPLv3), 3D/2D FDTD modelling code in Python/Cython developed for GPR but can be used for general EM modelling.) Freeware/Closed source FDTD projects (some not for commercial use): EMTL (Electromagnetic Template Library) (Free C++ library for electromagnetic simulations. The current version implements mainly the FDTD).
Numerical software Simulation software Electromagnetic radiation Numerical differential equations Computational science Computational electromagnetics Electromagnetism Electrodynamics Scattering, absorption and radiative transfer (optics)
Finite-difference time-domain method
[ "Physics", "Chemistry", "Mathematics" ]
3,570
[ "Physical phenomena", "Computational electromagnetics", "Electromagnetism", " absorption and radiative transfer (optics)", "Electromagnetic radiation", "Applied mathematics", "Mathematical software", "Computational physics", "Computational science", "Scattering", "Radiation", "Fundamental inte...
1,908,302
https://en.wikipedia.org/wiki/Slewing
Slewing is the rotation of an object around an axis, usually the z axis. An example is a radar scanning 360 degrees by slewing around the z axis. This is also common terminology in astronomy: the process of rotating a telescope to observe a different region of the sky is referred to as slewing. The term slewing is also found in motion control applications. Often the slew axis is combined with another axis to form a motion profile. In crane terminology, slewing is the angular movement of a crane boom or crane jib in a horizontal plane. The term is also used in the computer game Microsoft Flight Simulator, wherein the user can press a key to rotate and move the virtual aircraft along all three spatial axes. In modern-day use of CNC programs, slewing is a vital part of the process. Mechanics
Slewing
[ "Physics", "Mathematics", "Engineering" ]
177
[ "Mechanics", "Mechanical engineering", "Geometry", "Geometry stubs" ]
1,908,527
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20spectroscopy
Nuclear magnetic resonance spectroscopy, most commonly known as NMR spectroscopy or magnetic resonance spectroscopy (MRS), is a spectroscopic technique based on the re-orientation of atomic nuclei with non-zero nuclear spins in an external magnetic field. This re-orientation occurs with absorption of electromagnetic radiation in the radio frequency region, from roughly 4 to 900 MHz, which depends on the isotopic nature of the nucleus and increases proportionally with the strength of the external magnetic field. Notably, the resonance frequency of each NMR-active nucleus depends on its chemical environment. As a result, NMR spectra provide information about individual functional groups present in the sample, as well as about connections between nearby nuclei in the same molecule. As the NMR spectra are unique or highly characteristic to individual compounds and functional groups, NMR spectroscopy is one of the most important methods to identify molecular structures, particularly of organic compounds. The principle of NMR usually involves three sequential steps: The alignment (polarization) of the magnetic nuclear spins in an applied, constant magnetic field B0. The perturbation of this alignment of the nuclear spins by a weak oscillating magnetic field, usually referred to as a radio-frequency (RF) pulse. Detection and analysis of the electromagnetic waves emitted by the nuclei of the sample as a result of this perturbation. Similarly, biochemists use NMR to identify proteins and other complex molecules. Besides identification, NMR spectroscopy provides detailed information about the structure, dynamics, reaction state, and chemical environment of molecules. The most common types of NMR are proton and carbon-13 NMR spectroscopy, but it is applicable to any kind of sample that contains nuclei possessing spin. NMR spectra are unique, well-resolved, analytically tractable and often highly predictable for small molecules. Different functional groups are clearly distinguishable, and identical functional groups with differing neighboring substituents still give distinguishable signals. NMR has largely replaced traditional wet chemistry tests such as color reagents or typical chromatography for identification. The most significant drawback of NMR spectroscopy is its poor sensitivity (compared to other analytical methods, such as mass spectrometry). Typically 2–50 mg of a substance is required to record a decent-quality NMR spectrum. The NMR method is non-destructive, thus the substance may be recovered. To obtain high-resolution NMR spectra, solid substances are usually dissolved to make liquid solutions, although solid-state NMR spectroscopy is also possible. The timescale of NMR is relatively long, and thus it is not suitable for observing fast phenomena, producing only an averaged spectrum. Although large amounts of impurities do show on an NMR spectrum, better methods exist for detecting impurities, as NMR is inherently not very sensitive (though sensitivity is higher at higher frequencies). Correlation spectroscopy is a development of ordinary NMR. In two-dimensional NMR, the emission is centered around a single frequency, and correlated resonances are observed. This allows identifying the neighboring substituents of the observed functional group, allowing unambiguous identification of the resonances. There are also more complex 3D and 4D methods and a variety of methods designed to suppress or amplify particular types of resonances.
In nuclear Overhauser effect (NOE) spectroscopy, the relaxation of the resonances is observed. As NOE depends on the proximity of the nuclei, quantifying the NOE for each nucleus allows construction of a three-dimensional model of the molecule. NMR spectrometers are relatively expensive; universities usually have them, but they are less common in private companies. Between 2000 and 2015, an NMR spectrometer cost around 0.5–5 million USD. Modern NMR spectrometers have a very strong, large and expensive liquid-helium-cooled superconducting magnet, because resolution directly depends on magnetic field strength. A higher magnetic field also improves the sensitivity of NMR spectroscopy, which depends on the population difference between the two nuclear levels, which increases with the magnetic field strength. Less expensive machines using permanent magnets and lower resolution are also available, which still give sufficient performance for certain applications such as reaction monitoring and quick checking of samples. There are even benchtop nuclear magnetic resonance spectrometers. NMR spectra of protons (1H nuclei) can be observed even in the Earth's magnetic field. Low-resolution NMR produces broader peaks, which can easily overlap one another, causing issues in resolving complex structures. The use of higher-strength magnetic fields results in better sensitivity and higher resolution of the peaks, and is preferred for research purposes. History Credit for the discovery of NMR goes to Isidor Isaac Rabi, who received the Nobel Prize in Physics in 1944. The Purcell group at Harvard University and the Bloch group at Stanford University independently developed NMR spectroscopy in the late 1940s and early 1950s. Edward Mills Purcell and Felix Bloch shared the 1952 Nobel Prize in Physics for their inventions. NMR-active criteria The key determinant of NMR activity in atomic nuclei is the nuclear spin quantum number (I). This intrinsic quantum property, similar to an atom's "spin", characterizes the angular momentum of the nucleus. To be NMR-active, a nucleus must have a non-zero nuclear spin (I ≠ 0). It is this non-zero spin that enables nuclei to interact with external magnetic fields and show signals in NMR. Atoms with an odd sum of protons and neutrons exhibit half-integer values for the nuclear spin quantum number (I = 1/2, 3/2, 5/2, and so on). These atoms are NMR-active because they possess non-zero nuclear spin. Atoms with an even sum but both an odd number of protons and an odd number of neutrons exhibit integer nuclear spins (I = 1, 2, 3, and so on). Conversely, atoms with an even number of both protons and neutrons have a nuclear spin quantum number of zero (I = 0), and therefore are not NMR-active. NMR-active nuclei, particularly those with a spin quantum number of 1/2, are of great significance in NMR spectroscopy. Examples include 1H, 13C, 15N, and 31P. Some nuclei with very high spin (such as 9/2 for 99Tc) are also extensively studied with NMR spectroscopy. Main aspects of NMR techniques Resonant frequency When placed in a magnetic field, NMR-active nuclei (such as 1H or 13C) absorb electromagnetic radiation at a frequency characteristic of the isotope. The resonant frequency, energy of the radiation absorbed, and the intensity of the signal are proportional to the strength of the magnetic field. For example, in a 21-tesla magnetic field, hydrogen nuclei (protons) resonate at 900 MHz. 
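This proportionality can be made concrete with a short sketch (a minimal Python example; the gyromagnetic ratios γ/2π are standard literature values, and the 21.14 T field is chosen to reproduce the 900 MHz figure above):

```python
# Larmor relation: nu = (gamma / 2*pi) * B0.
# gamma/2pi in MHz per tesla for two common nuclei (literature values).
GAMMA_BAR = {"1H": 42.577, "13C": 10.708}

def resonance_mhz(nucleus: str, b0_tesla: float) -> float:
    """Resonance frequency in MHz for a given nucleus and field strength."""
    return GAMMA_BAR[nucleus] * b0_tesla

for nucleus in GAMMA_BAR:
    print(nucleus, round(resonance_mhz(nucleus, 21.14), 1), "MHz")
# 1H  -> ~900.1 MHz (hence a "900 MHz magnet")
# 13C -> ~226.4 MHz at the same field
```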
It is common to refer to a 21 T magnet as a 900 MHz magnet, since hydrogen is the most common nucleus detected. However, different nuclei will resonate at different frequencies at this field strength, in proportion to their nuclear magnetic moments. Sample handling An NMR spectrometer typically consists of a spinning sample-holder inside a very strong magnet, a radio-frequency emitter, and a receiver with a probe (an antenna assembly) that goes inside the magnet to surround the sample, optionally gradient coils for diffusion measurements, and electronics to control the system. Spinning the sample is usually necessary to average out diffusional motion; however, some experiments call for a stationary sample when solution movement is an important variable. For instance, measurements of diffusion constants (diffusion ordered spectroscopy or DOSY) are done using a stationary sample with spinning off, and flow cells can be used for online analysis of process flows. Deuterated solvents The vast majority of molecules in a solution are solvent molecules, and most regular solvents are hydrocarbons and so contain NMR-active hydrogen-1 nuclei. In order to avoid having the signals from solvent hydrogen atoms overwhelm the experiment and interfere in analysis of the dissolved analyte, deuterated solvents are used, in which >99% of the protons are replaced with deuterium (hydrogen-2). The most widely used deuterated solvent is deuterochloroform (CDCl3), although other solvents may be used for various reasons, such as solubility of a sample, desire to control hydrogen bonding, or melting or boiling points. The chemical shifts of a molecule change slightly between solvents, and therefore the solvent used is almost always reported with chemical shifts. Proton NMR spectra are often calibrated against the known solvent residual proton peak as an internal standard instead of adding tetramethylsilane (TMS), which is conventionally defined as having a chemical shift of zero. Shim and lock To detect the very small frequency shifts due to nuclear magnetic resonance, the applied magnetic field must be extremely uniform throughout the sample volume. High-resolution NMR spectrometers use shims to adjust the homogeneity of the magnetic field to parts per billion (ppb) in a volume of a few cubic centimeters. In order to detect and compensate for inhomogeneity and drift in the magnetic field, the spectrometer maintains a "lock" on the solvent deuterium frequency with a separate lock unit, which is essentially an additional transmitter and RF processor tuned to the lock nucleus (deuterium) rather than the nuclei of the sample of interest. In modern NMR spectrometers shimming is adjusted automatically, though in some cases the operator has to optimize the shim parameters manually to obtain the best possible resolution. Acquisition of spectra Upon excitation of the sample with a radio frequency (60–1000 MHz) pulse, a nuclear magnetic resonance response, a free induction decay (FID), is obtained. It is a very weak signal and requires sensitive radio receivers to pick it up. A Fourier transform is carried out to extract the frequency-domain spectrum from the raw time-domain FID. A spectrum from a single FID has a low signal-to-noise ratio, but it improves readily with averaging of repeated acquisitions. Good 1H NMR spectra can be acquired with 16 repeats, which takes only minutes. However, for elements heavier than hydrogen, the relaxation time is rather long, e.g. around 8 seconds for 13C. 
Thus, acquisition of quantitative heavy-element spectra can be time-consuming, taking tens of minutes to hours. Following the pulse, the nuclei are, on average, excited to a certain angle vs. the spectrometer magnetic field. The extent of excitation can be controlled with the pulse width, typically about 3–8 μs for the optimal 90° pulse. The pulse width can be determined by plotting the (signed) intensity as a function of pulse width. It follows a sine curve and, accordingly, changes sign at pulse widths corresponding to 180° and 360° pulses. Decay times of the excitation, typically measured in seconds, depend on the effectiveness of relaxation, which is faster for lighter nuclei and in solids, slower for heavier nuclei and in solutions, and can be very long in gases. If the second excitation pulse is sent prematurely, before the relaxation is complete, the average magnetization vector has not decayed to the ground state, which affects the strength of the signal in an unpredictable manner. In practice, the peak areas are then not proportional to the stoichiometry; only the presence, but not the amount, of functional groups can be discerned. An inversion recovery experiment can be done to determine the relaxation time and thus the required delay between pulses. A 180° pulse, an adjustable delay, and a 90° pulse are transmitted. When the 90° pulse exactly cancels out the signal, the delay corresponds to the time needed for 90° of relaxation. Inversion recovery is worthwhile for quantitative 13C, 2D and other time-consuming experiments. Spectral interpretation NMR signals are ordinarily characterized by three variables: chemical shift, spin–spin coupling, and relaxation time. Chemical shift The energy difference ΔE between nuclear spin states is proportional to the magnetic field (Zeeman effect). ΔE is also sensitive to the electronic environment of the nucleus, giving rise to what is known as the chemical shift, δ. The simplest types of NMR graphs are plots of the different chemical shifts of the nuclei being studied in the molecule. The value of δ is often expressed in terms of "shielding": shielded nuclei have lower ΔE. The range of δ values is called the dispersion. It is rather small for 1H signals, but much larger for other nuclei. NMR signals are reported relative to a reference signal, usually that of TMS (tetramethylsilane). Additionally, since the distribution of NMR signals is field-dependent, these frequencies are divided by the spectrometer frequency. However, since we are dividing Hz by MHz, the resulting number would be too small, and thus it is multiplied by a million. This operation therefore gives a locator number called the "chemical shift" with units of parts per million. The matching of chemical shifts (and J's, see below) to specific nuclei is called assigning the spectrum. For diamagnetic organic compounds, assignment of 1H and 13C NMR spectra is highly developed because of the large databases and easy computational tools. In general, chemical shifts for protons are highly predictable, since the shifts are primarily determined by shielding effects (electron density). 
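The processing chain just described, FID acquisition, Fourier transformation, and conversion of the frequency axis to ppm, can be sketched in a few lines (a toy simulation; the spectrometer frequency, spectral width, peak positions and decay constant are illustrative assumptions, not values from the text):

```python
import numpy as np

# Toy FID for two spins at 2.0 and 5.0 ppm on a hypothetical 400 MHz
# spectrometer, followed by Fourier transformation to a ppm-scaled spectrum.
spec_mhz = 400.0                 # spectrometer (carrier) frequency, MHz
sw_hz = 6000.0                   # spectral width, Hz
n = 8192
t = np.arange(n) / sw_hz         # dwell time = 1 / spectral width

fid = np.zeros(n, dtype=complex)
for ppm, amplitude in [(2.0, 1.0), (5.0, 0.5)]:
    offset_hz = ppm * spec_mhz / 1e6 * 1e6 / 1e0 if False else ppm * spec_mhz
    # 1 ppm corresponds to spec_mhz hertz; decaying complex sinusoid:
    fid += amplitude * np.exp(2j * np.pi * offset_hz * t) * np.exp(-t / 0.5)

spectrum = np.abs(np.fft.fftshift(np.fft.fft(fid)))
freq_hz = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / sw_hz))
ppm_axis = freq_hz / spec_mhz    # divide Hz by MHz: the ppm scale

# Report the two strongest, well-separated peaks.
found = []
for idx in np.argsort(spectrum)[::-1]:
    if all(abs(ppm_axis[idx] - p) > 0.5 for p in found):
        found.append(ppm_axis[idx])
    if len(found) == 2:
        break
print(sorted(round(p, 2) for p in found))   # -> [2.0, 5.0]
```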
The chemical shifts for many heavier nuclei are more strongly influenced by other factors, including excited states (the "paramagnetic" contribution to the shielding tensor). This paramagnetic contribution (which is unrelated to paramagnetism) not only disrupts trends in chemical shifts, which complicates assignments, but it also gives rise to very large chemical shift ranges. For example, 1H NMR signals for most organic compounds lie within 15 ppm. For 31P NMR, the range is hundreds of ppm. In paramagnetic NMR spectroscopy, the samples are paramagnetic, i.e. they contain unpaired electrons. The paramagnetism gives rise to very diverse chemical shifts. In 1H NMR spectroscopy, the chemical shift range can span up to thousands of ppm. J-coupling Some of the most useful information for structure determination in a one-dimensional NMR spectrum comes from J-coupling, or scalar coupling (a special case of spin–spin coupling), between NMR active nuclei. This coupling arises from the interaction of different spin states through the chemical bonds of a molecule and results in the splitting of NMR signals. For a proton, the local magnetic field is slightly different depending on whether an adjacent nucleus points towards or against the spectrometer magnetic field, which gives rise to two signals per proton instead of one. These splitting patterns can be complex or simple and, likewise, can be straightforwardly interpretable or deceptive. This coupling provides detailed insight into the connectivity of atoms in a molecule. The multiplicity of the splitting depends on the spins of the nuclei that are coupled and the number of such nuclei involved in the coupling. Coupling to n equivalent spin-1/2 nuclei splits the signal into an n + 1 multiplet with intensity ratios following Pascal's triangle. Coupling to additional spins leads to further splittings of each component of the multiplet, e.g. coupling to two different spin-1/2 nuclei with significantly different coupling constants leads to a doublet of doublets (abbreviation: dd). Note that coupling between nuclei that are chemically equivalent (that is, have the same chemical shift) has no effect on the NMR spectra, and couplings between nuclei that are distant (usually more than 3 bonds apart for protons in flexible molecules) are usually too small to cause observable splittings. Long-range couplings over more than three bonds can often be observed in cyclic and aromatic compounds, leading to more complex splitting patterns. For example, in the proton spectrum for ethanol, the CH3 group is split into a triplet with an intensity ratio of 1:2:1 by the two neighboring CH2 protons. Similarly, the CH2 is split into a quartet with an intensity ratio of 1:3:3:1 by the three neighboring CH3 protons. In principle, the two CH2 protons would also be split again into a doublet to form a doublet of quartets by the hydroxyl proton, but intermolecular exchange of the acidic hydroxyl proton often results in a loss of coupling information. Coupling to any spin-1/2 nuclei such as phosphorus-31 or fluorine-19 works in this fashion (although the magnitudes of the coupling constants may be very different). But the splitting patterns differ from those described above for nuclei with spin greater than 1/2, because the spin quantum number has more than two possible values. For instance, coupling to deuterium (a spin-1 nucleus) splits the signal into a 1:1:1 triplet, because spin 1 has three spin states. 
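These multiplicity rules generalize to any spin: coupling to n equivalent spin-I nuclei gives 2nI + 1 lines, with intensities given by the n-fold convolution of a flat stencil of 2I + 1 equally populated states. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def multiplet(n, spin=0.5):
    """Intensity pattern from coupling to n equivalent spin-I nuclei.
    Each neighbor contributes 2I+1 equally populated states, so the
    pattern is the n-fold convolution of a flat stencil."""
    stencil = np.ones(int(round(2 * spin + 1)))
    pattern = np.array([1.0])
    for _ in range(n):
        pattern = np.convolve(pattern, stencil)
    return pattern.astype(int)

print(multiplet(2))           # [1 2 1]    CH3 next to CH2 in ethanol
print(multiplet(3))           # [1 3 3 1]  CH2 next to CH3 in ethanol
print(multiplet(1, spin=1))   # [1 1 1]    coupling to one deuterium
```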
Similarly, a spin-3/2 nucleus such as 35Cl splits a signal into a 1:1:1:1 quartet, and so on. Coupling combined with the chemical shift (and the integration for protons) tells us not only about the chemical environment of the nuclei, but also the number of neighboring NMR active nuclei within the molecule. In more complex spectra with multiple peaks at similar chemical shifts, or in spectra of nuclei other than hydrogen, coupling is often the only way to distinguish different nuclei. The magnitude of the coupling (the coupling constant J) reflects how strongly the nuclei are coupled to each other. For simple cases, this depends on the bonding distance between the nuclei, the magnetic moment of the nuclei, and the dihedral angle between them. Second-order (or strong) coupling The above description assumes that the coupling constant is small in comparison with the difference in NMR frequencies between the inequivalent spins. If the shift separation decreases (or the coupling strength increases), the multiplet intensity patterns are first distorted, and then become more complex and less easily analyzed (especially if more than two spins are involved). Intensification of some peaks in a multiplet is achieved at the expense of the remainder, which sometimes almost disappear in the background noise, although the integrated area under the peaks remains constant. In most high-field NMR, however, the distortions are usually modest, and the characteristic distortions (known as "roofing") can in fact help to identify related peaks. Some of these patterns can be analyzed with the method published by John Pople, though it has limited scope. Second-order effects decrease as the frequency difference between multiplets increases, so that high-field (i.e. high-frequency) NMR spectra display less distortion than lower-frequency spectra. Early spectra at 60 MHz were more prone to distortion than spectra from later machines typically operating at frequencies of 200 MHz or above. Furthermore, J-coupling can be used to identify ortho, meta and para substitution of a ring: ortho coupling is the strongest, at around 15 Hz; meta coupling follows, with an average of 2 Hz; and para coupling is usually insignificant for studies. Magnetic inequivalence More subtle effects can occur if chemically equivalent spins (i.e., nuclei related by symmetry and so having the same NMR frequency) have different coupling relationships to external spins. Spins that are chemically equivalent but are not indistinguishable (based on their coupling relationships) are termed magnetically inequivalent. For example, the 4 H sites of 1,2-dichlorobenzene divide into two chemically equivalent pairs by symmetry, but an individual member of one of the pairs has different couplings to the spins making up the other pair. Magnetic inequivalence can lead to highly complex spectra, which can be analyzed only by computational modeling. Such effects are more common in NMR spectra of aromatic and other non-flexible systems, while conformational averaging about C−C bonds in flexible molecules tends to equalize the couplings between protons on adjacent carbons, reducing problems with magnetic inequivalence. Correlation spectroscopy Correlation spectroscopy is one of several types of two-dimensional nuclear magnetic resonance (NMR) spectroscopy or 2D-NMR. This type of NMR experiment is best known by its acronym, COSY. 
Other types of two-dimensional NMR include J-spectroscopy, exchange spectroscopy (EXSY), nuclear Overhauser effect spectroscopy (NOESY), total correlation spectroscopy (TOCSY), and heteronuclear correlation experiments, such as HSQC, HMQC, and HMBC. In correlation spectroscopy, emission is centered on the peak of an individual nucleus; if its magnetic field is correlated with another nucleus by through-bond (COSY, HSQC, etc.) or through-space (NOE) coupling, a response can also be detected on the frequency of the correlated nucleus. Two-dimensional NMR spectra provide more information about a molecule than one-dimensional NMR spectra and are especially useful in determining the structure of a molecule, particularly for molecules that are too complicated to work with using one-dimensional NMR. The first two-dimensional experiment, COSY, was proposed by Jean Jeener, a professor at Université Libre de Bruxelles, in 1971. This experiment was later implemented by Walter P. Aue, Enrico Bartholdi and Richard R. Ernst, who published their work in 1976. Solid-state nuclear magnetic resonance A variety of physical circumstances do not allow molecules to be studied in solution, while the systems in question also cannot be probed to an atomic level by other spectroscopic techniques. In solid-phase media, such as crystals, microcrystalline powders, gels, anisotropic solutions, etc., it is in particular the dipolar coupling and chemical shift anisotropy that become dominant in the behaviour of the nuclear spin systems. In conventional solution-state NMR spectroscopy, these additional interactions would lead to a significant broadening of spectral lines. A variety of techniques allows high-resolution conditions to be established that can, at least for 13C spectra, be comparable to solution-state NMR spectra. Two important concepts for high-resolution solid-state NMR spectroscopy are the limitation of possible molecular orientation by sample orientation, and the reduction of anisotropic nuclear magnetic interactions by sample spinning. Of the latter approach, fast spinning around the magic angle is a very prominent method when the system comprises spin-1/2 nuclei. Spinning rates of about 20 kHz are used, which demands special equipment. A number of intermediate techniques, with samples of partial alignment or reduced mobility, are currently being used in NMR spectroscopy. Applications in which solid-state NMR effects occur are often related to structure investigations of membrane proteins, protein fibrils or all kinds of polymers, and chemical analysis in inorganic chemistry, but also include "exotic" applications like plant leaves and fuel cells. For example, Rahmani et al. studied the effect of pressure and temperature on the self-assembly of bicellar structures using deuterium NMR spectroscopy. Solid-state NMR is also useful for understanding metal structure in the case of X-ray amorphous metal samples (such as nano-sized refractory metal 99Tc). Biomolecular NMR spectroscopy Proteins Much of the innovation within NMR spectroscopy has been within the field of protein NMR spectroscopy, an important technique in structural biology. A common goal of these investigations is to obtain high resolution 3-dimensional structures of the protein, similar to what can be achieved by X-ray crystallography. In contrast to X-ray crystallography, NMR spectroscopy is usually limited to proteins smaller than 35 kDa, although larger structures have been solved. 
NMR spectroscopy is often the only way to obtain high resolution information on partially or wholly intrinsically unstructured proteins. It is now a common tool for the determination of conformation-activity relationships, where the structure before and after interaction with, for example, a drug candidate is compared to its known biochemical activity. Proteins are orders of magnitude larger than the small organic molecules discussed earlier in this article, but the basic NMR techniques and some of the NMR theory also apply. Because of the much higher number of atoms present in a protein molecule in comparison with a small organic compound, the basic 1D spectra become crowded with overlapping signals to an extent where direct spectral analysis becomes untenable. Therefore, multidimensional (2, 3 or 4D) experiments have been devised to deal with this problem. To facilitate these experiments, it is desirable to isotopically label the protein with 13C and 15N, because the predominant naturally occurring isotope 12C is not NMR-active and the nuclear quadrupole moment of the predominant naturally occurring 14N isotope prevents high resolution information from being obtained from this nitrogen isotope. The most important method used for structure determination of proteins utilizes NOE experiments to measure distances between atoms within the molecule. Subsequently, the distances obtained are used to generate a 3D structure of the molecule by solving a distance geometry problem. NMR can also be used to obtain information on the dynamics and conformational flexibility of different regions of a protein. Nucleic acids Nucleic acid NMR is the use of NMR spectroscopy to obtain information about the structure and dynamics of polynucleic acids, such as DNA or RNA. Nearly half of all known RNA structures have been determined by NMR spectroscopy. Nucleic acid and protein NMR spectroscopy are similar, but differences exist. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR spectroscopy, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. The types of NMR usually done with nucleic acids are 1H or proton NMR, 13C NMR, 15N NMR, and 31P NMR. Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY) and total correlation spectroscopy (TOCSY) to detect through-bond nuclear couplings, and nuclear Overhauser effect spectroscopy (NOESY) to detect couplings between nuclei that are close to each other in space. Parameters taken from the spectrum, mainly NOESY cross-peaks and coupling constants, can be used to determine local structural features such as glycosidic bond angles, dihedral angles (using the Karplus equation), and sugar pucker conformations. For large-scale structure, these local parameters must be supplemented with other structural assumptions or models, because errors add up as the double helix is traversed, and unlike with proteins, the double helix does not have a compact interior and does not fold back upon itself. NMR is also useful for investigating nonstandard geometries such as bent helices, non-Watson–Crick basepairing, and coaxial stacking. It has been especially useful in probing the structure of natural RNA oligonucleotides, which tend to adopt complex conformations such as stem-loops and pseudoknots. 
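The NOE-to-distance step mentioned above can be sketched under the isolated spin-pair approximation, in which cross-peak intensity scales as r⁻⁶ and a proton pair of known separation serves as calibration (the intensities below are illustrative, not measured values):

```python
def noe_distance(i_unknown: float, i_ref: float, r_ref: float) -> float:
    """Distance from a NOESY cross-peak intensity, isolated spin-pair
    approximation: intensity ~ r**-6, calibrated against a reference
    pair of known separation (e.g. geminal or aromatic H-H)."""
    return r_ref * (i_ref / i_unknown) ** (1.0 / 6.0)

# Hypothetical intensities, reference pair fixed at 2.5 angstroms:
print(round(noe_distance(i_unknown=0.2, i_ref=1.0, r_ref=2.5), 2))  # ~3.27 A
```

Because of the sixth-root dependence, even rough intensity estimates translate into usefully tight distance bounds, which is why sets of such restraints suffice to solve the distance geometry problem.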
NMR is also useful for probing the binding of nucleic acid molecules to other molecules, such as proteins or drugs, by seeing which resonances are shifted upon binding of the other molecule. Carbohydrates Carbohydrate NMR spectroscopy addresses questions on the structure and conformation of carbohydrates. The analysis of carbohydrates by 1H NMR is challenging due to the limited variation in functional groups, which leads to 1H resonances concentrated in narrow bands of the NMR spectrum. In other words, there is poor spectral dispersion. The anomeric proton resonances are segregated from the others due to the fact that the anomeric carbons bear two oxygen atoms. For smaller carbohydrates, the dispersion of the anomeric proton resonances facilitates the use of 1D TOCSY experiments to investigate the entire spin systems of individual carbohydrate residues. Drug discovery Knowledge of energy minima and rotational energy barriers of small molecules in solution can be obtained using NMR, e.g. by looking at free ligand conformational preferences and conformational dynamics, respectively. This can be used to guide drug design hypotheses, since experimental and calculated values are comparable. For example, AstraZeneca uses NMR for its oncology research & development. High-pressure NMR spectroscopy One of the first scientific works devoted to the use of pressure as a variable parameter in NMR experiments was the work of J. Jonas published in the journal Annual Review of Biophysics in 1994. The use of high pressures in NMR spectroscopy was primarily driven by the desire to study biochemical systems, where the use of high pressure allows controlled changes in intermolecular interactions without significant perturbations. Of course, attempts have been made to solve scientific problems using high-pressure NMR spectroscopy. However, most of them were difficult to reproduce due to the difficulty of building and maintaining equipment that creates and holds high pressure. The most common types of NMR cells used to realize high-pressure NMR experiments have been described in the literature. High-pressure NMR spectroscopy has been widely used for a variety of applications, mainly related to the characterization of the structure of protein molecules. However, in recent years, software and design solutions have been proposed to characterize the chemical and spatial structures of small molecules in a supercritical fluid environment, using state parameters as a driving force for such changes. See also Quantum mechanics of nuclear magnetic resonance (NMR) spectroscopy Related methods of nuclear spectroscopy: Mössbauer effect Muon spin spectroscopy Perturbed angular correlation References Further reading External links The Basics of NMR - A non-technical overview of NMR theory, equipment, and techniques by Dr. Joseph Hornak, Professor of Chemistry at RIT GAMMA and PyGAMMA Libraries - GAMMA is an open source C++ library written for the simulation of Nuclear Magnetic Resonance Spectroscopy experiments. PyGAMMA is a Python wrapper around GAMMA. relax Software for the analysis of NMR dynamics Vespa - VeSPA (Versatile Simulation, Pulses and Analysis) is a free software suite composed of three Python applications. These GUI based tools are for magnetic resonance (MR) spectral simulation, RF pulse design, and spectral processing and analysis of MR data.
Nuclear magnetic resonance spectroscopy
[ "Physics", "Chemistry" ]
6,455
[ "Nuclear magnetic resonance", "Spectroscopy", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy" ]
1,908,635
https://en.wikipedia.org/wiki/Polarimetry
Polarimetry is the measurement and interpretation of the polarization of transverse waves, most notably electromagnetic waves, such as radio or light waves. Typically polarimetry is done on electromagnetic waves that have traveled through or have been reflected, refracted or diffracted by some material in order to characterize that object. Plane polarized light: According to the wave theory of light, an ordinary ray of light is considered to be vibrating in all planes at right angles to the direction of its propagation. If this ordinary ray of light is passed through a Nicol prism, the emergent ray has its vibration only in one plane. Applications Polarimetry of thin films and surfaces is commonly known as ellipsometry. Polarimetry is used in remote sensing applications, such as planetary science, astronomy, and weather radar. Polarimetry can also be included in computational analysis of waves. For example, radars often consider wave polarization in post-processing to improve the characterization of the targets. In this case, polarimetry can be used to estimate the fine texture of a material, help resolve the orientation of small structures in the target, and, when circularly-polarized antennas are used, resolve the number of bounces of the received signal (the chirality of circularly polarized waves alternates with each reflection). Imaging In 2003, a visible-near IR (VNIR) spectropolarimetric imager with an acousto-optic tunable filter (AOTF) was reported. Such hyperspectral and spectropolarimetric imagers function in radiation regions spanning from ultraviolet (UV) to long-wave infrared (LWIR). In AOTFs a piezoelectric transducer converts a radio frequency (RF) signal into an ultrasonic wave. This wave travels through a crystal attached to the transducer, diffracting light that passes through the crystal, before terminating in an acoustic absorber. The wavelength of the resulting light beams can be modified by altering the initial RF signal. VNIR and LWIR hyperspectral imaging consistently perform better as hyperspectral imagers. This technology was developed at the U.S. Army Research Laboratory. The researchers reported visible near infrared system (VISNIR) data (0.4–0.9 micrometers) which required an RF signal below 1 W power. The reported experimental data indicates that polarimetric signatures are unique to manmade items and are not found in natural objects. The researchers state that a dual system, collecting both hyperspectral and spectropolarimetric information, is an advantage in image production for target tracking. Polarimetric infrared imaging and detection can also highlight and distinguish different features in a scene and give unique signatures of different objects. A nano-plasmonic chirped metal structure for polarimetric detection in the mid-wave and long-wave infrared dual bands can give unique characteristics about the different detected materials, objects, and surfaces. Gemology Gemologists use polariscopes to identify various properties of gems under examination. Proper examination may require the gem to be inspected in various positions and angles. A gemologist's polariscope is a vertically oriented device, usually with two polarizing lenses, one over the other with some space in between. A light source is built into the polariscope underneath the bottom polarizing lens, pointing upwards. A gemstone is placed on top of the lower lens and may be properly examined by looking down at it through the top lens. 
To operate the polariscope, a gemologist may turn the polarizing lenses by hand to observe various characteristics of a gemstone. Polariscopes make use of their polarizing filters to reveal properties of a gem related to how it affects light waves passing through it. A polariscope may first be used to determine the optic character of a gem: whether it is singly refracting (isotropic), anomalously doubly refracting (isotropic), doubly refracting (anisotropic), or aggregate. If the stone is doubly refracting and is not an aggregate, the polariscope may be used to further determine the optic figure of the gemstone, i.e. whether it is uniaxial or biaxial. This step may require use of a loupe, also known as a conoscope. Finally, a polariscope can be used to detect the pleochroism of a gemstone, although a dichroscope may be preferred for this purpose as it may show pleochroic colors side by side for easier identification. Equipment A polarimeter is the basic scientific instrument used to make these measurements, although this term is rarely used to describe a polarimetry process performed by a computer, such as is done in polarimetric synthetic aperture radar. Polarimetry can be used to measure various optical properties of a material, including linear birefringence, circular birefringence (also known as optical rotation or optical rotatory dispersion), linear dichroism, circular dichroism and scattering. To measure these various properties, there have been many designs of polarimeters, some archaic and some in current use. The most sensitive are based on interferometers, while more conventional polarimeters are based on arrangements of polarising filters, wave plates or other devices. Astronomical polarimetry Polarimetry is used in many areas of astronomy to study physical characteristics of sources including active galactic nuclei and blazars, exoplanets, gas and dust in the interstellar medium, supernovae, gamma-ray bursts, stellar rotation, stellar magnetic fields, debris disks, reflection in binary stars and the cosmic microwave background radiation. Astronomical polarimetry observations are carried out as imaging polarimetry, where polarization is measured as a function of position in imaging data; spectropolarimetry, where polarization is measured as a function of wavelength of light; or broad-band aperture polarimetry. Measuring optical rotation Optically active samples, such as solutions of chiral molecules, often exhibit circular birefringence. Circular birefringence causes rotation of the polarization of plane polarized light as it passes through the sample. In ordinary light, the vibrations occur in all planes perpendicular to the direction of propagation. When light passes through a Nicol prism, its vibrations in all directions except the direction of the axis of the prism are cut off. The light emerging from the prism is said to be plane polarised because its vibration is in one direction. If two Nicol prisms are placed with their polarization planes parallel to each other, then the light rays emerging out of the first prism will enter the second prism. As a result, no loss of light is observed. However, if the second prism is rotated by an angle of 90°, the light emerging from the first prism is stopped by the second prism and no light emerges. The first prism is usually called the polarizer and the second prism is called the analyser. A simple polarimeter to measure this rotation consists of a long tube with flat glass ends, into which the sample is placed. 
At each end of the tube is a Nicol prism or other polarizer. Light is shone through the tube, and the prism at the other end, attached to an eye-piece, is rotated to arrive at the region of complete brightness, that of half-dark, half-bright, or that of complete darkness. The angle of rotation is then read from a scale. The same phenomenon is observed after an angle of 180°. The specific rotation of the sample may then be calculated. Temperature can affect the rotation of light, which should be accounted for in the calculations. [α]λT = α / (l × c) where: [α]λT is the specific rotation. T is the temperature. λ is the wavelength of light. α is the angle of rotation. l is the distance the light travels through the sample, the path length. c is the mass concentration of the solution. For example, an observed rotation of +6.65° through a 1 dm tube containing 0.1 g/mL of sucrose solution gives a specific rotation of about +66.5°, the accepted value for sucrose at 589 nm. See also Ellipsometry References External links Polariscope – Gemstone Buzz instrument to measure optical properties. EU Project NanoCharM nanocharm.org Polarization (waves) Optical metrology
Polarimetry
[ "Physics" ]
1,681
[ "Polarization (waves)", "Astrophysics" ]
1,908,650
https://en.wikipedia.org/wiki/Nicol%20prism
A Nicol prism is a type of polarizer. It is an optical device made from calcite crystal used to convert ordinary light into plane polarized light. It is made in such a way that it eliminates one of the rays by total internal reflection: the ordinary ray is eliminated and only the extraordinary ray is transmitted through the prism. It was the first type of polarizing prism, invented in 1828 by William Nicol (1770–1851) of Edinburgh. Mechanism The Nicol prism consists of a rhombohedral crystal of Iceland spar (a variety of calcite) that has been cut at an angle of 68° with respect to the crystal axis, cut again diagonally, and then rejoined, using a layer of transparent Canada balsam as a glue. An unpolarized light ray enters through the side face of the crystal and is split into two orthogonally polarized, differently directed rays by the birefringence property of calcite. The ordinary ray, or o-ray, experiences a refractive index of no = 1.658 in the calcite and undergoes total internal reflection at the calcite–glue interface because its angle of incidence at the glue layer (refractive index n = 1.550) exceeds the critical angle for the interface. It passes out the top side of the upper half of the prism with some refraction. The extraordinary ray, or e-ray, experiences a lower refractive index (ne = 1.486) in the calcite crystal and is not totally reflected at the interface because it strikes the interface at a sub-critical angle. The e-ray merely undergoes a slight refraction, or bending, as it passes through the interface into the lower half of the prism. It finally leaves the prism as a ray of plane-polarized light, undergoing another refraction as it exits the opposite side of the prism. The two exiting rays have polarizations orthogonal (at right angles) to each other, but the lower, or e-ray, is the more commonly used for further experimentation because it is again traveling in the original horizontal direction, assuming that the calcite prism angles have been properly cut. The direction of the upper ray, or o-ray, is quite different from its original direction because it alone suffers total internal reflection at the glue interface, as well as a final refraction on exit from the upper side of the prism. Uses Nicol prisms were once widely used in mineralogical microscopy and polarimetry, and the term "using crossed Nicols" (abbreviated as XN) is still used to refer to the observing of a sample placed between orthogonally oriented polarizers. In most instruments, however, Nicol prisms have been replaced by other types of polarizers such as polaroid sheets and Glan–Thompson prisms. References 1828 in science Microscopy Optical materials Polarization (waves) Prisms (optics) Scottish inventions
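The total internal reflection condition described in the Mechanism section can be checked numerically (a minimal sketch using the refractive indices quoted above; Snell's law yields a critical angle only when the incident medium is optically denser than the balsam):

```python
import math

# Critical angle at the calcite-balsam interface for each ray.
n_balsam = 1.550
for ray, n_calcite in [("o-ray", 1.658), ("e-ray", 1.486)]:
    if n_calcite > n_balsam:
        theta_c = math.degrees(math.asin(n_balsam / n_calcite))
        print(f"{ray}: critical angle = {theta_c:.1f} deg; "
              "total internal reflection possible")
    else:
        print(f"{ray}: no critical angle (n below balsam); transmitted")
# o-ray: ~69.2 deg, so the 68-degree cut geometry sends it into total
# internal reflection, while the e-ray always crosses into the glue layer.
```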
Nicol prism
[ "Physics", "Chemistry" ]
601
[ "Astrophysics", "Optical materials", "Materials", "Microscopy", "Polarization (waves)", "Matter" ]
1,909,881
https://en.wikipedia.org/wiki/Redshift%20quantization
Redshift quantization, also referred to as redshift periodicity, redshift discretization, preferred redshifts and redshift-magnitude bands, is the hypothesis that the redshifts of cosmologically distant objects (in particular galaxies and quasars) tend to cluster around multiples of some particular value. In standard inflationary cosmological models, the redshift of cosmological bodies is ascribed to the expansion of the universe, with greater redshift indicating greater cosmic distance from the Earth (see Hubble's law). This is referred to as cosmological redshift and is one of the main pieces of evidence for the Big Bang. Quantized redshifts of objects would indicate, under Hubble's law, that astronomical objects are arranged in a quantized pattern around the Earth. Among proponents it is more widely posited that the redshift is unrelated to cosmic expansion and is the outcome of some other physical mechanism, referred to as "intrinsic redshift" or "non-cosmological redshift". In 1973, astronomer William G. Tifft was the first to report evidence of this pattern. Subsequent discourse focused upon whether redshift surveys of quasars (QSOs) have produced evidence of quantization in excess of what is expected due to selection effects or galactic clustering. The idea has been on the fringes of astronomy since the mid-1990s and is now discounted by the vast majority of astronomers, but a few scientists who espouse nonstandard cosmological models, including those who reject the Big Bang theory, have referred to evidence of redshift quantization as reason to reject conventional accounts of the origin and evolution of the universe. Original investigation by William G. Tifft György Paál (for QSOs, 1971) and William G. Tifft (for galaxies) were the first to investigate possible redshift quantization, referring to it as "redshift-magnitude banding correlation". In 1973, Tifft wrote: "Using more than 200 redshifts in Coma, Perseus, and A2199, the presence of a distinct band-related periodicity in redshifts is indicated. Finally, a new sample of accurate redshifts of bright Coma galaxies on a single band is presented, which shows a strong redshift periodicity of 220 km s−1. An upper limit of 20 km s−1 is placed on the internal Doppler redshift component of motion in the Coma cluster". Tifft suggested that this observation conflicted with standard cosmological scenarios. He states in summary: "Throughout the development of the program it has seemed increasingly clear that the redshift has properties inconsistent with a simple velocity and/or cosmic scale change interpretation. Various implications have been pointed out from time to time, but basically the work is observationally driven." Early research - focused on galaxies rather than quasars In 1971, on the basis of redshift quantization, G. Paál proposed that the Universe might have a nontrivial topological structure. Studies performed in the 1980s and early 1990s produced confirmatory results: In 1989, Martin R. Croasdale reported finding a quantization of redshifts using a different sample of galaxies in increments of 72 km/s, or Δz = 2.4 × 10−4 (where Δz denotes the shift in frequency expressed as a proportion of the initial frequency). In 1990, Bruce Guthrie and William Napier reported finding a "possible periodicity" of the same magnitude for a slightly larger data set limited to bright spiral galaxies and excluding other types. In 1992, Guthrie and Napier proposed the observation of a different, smaller periodicity in a sample of 89 galaxies. In 1992, Paal et al. 
and Holba et al. concluded that there was an unexplained periodicity of redshifts in a reanalysis of a large sample of galaxies. In 1997, Guthrie and Napier concluded the same: "So far the redshifts of over 250 galaxies with high-precision HI profiles have been used in the study. In consistently selected sub-samples of the datasets of sufficient precision examined so far, the redshift distribution has been found to be strongly quantized in the galactocentric frame of reference. ... The formal confidence levels associated with these results are extremely high." Quasar redshifts Most recent discourse has focused upon whether redshift surveys of quasars (QSOs) produce evidence of quantization beyond that explainable by selection effect. This has been assisted by advances in cataloging in the late 1990s that have increased substantially the sample sizes involved in astronomical measurements. Karlsson's formula Historically, K. G. Karlsson and G. R. Burbidge were first to note that quasar redshifts were quantized in accordance with the empirical formula log10(1 + zn) ≈ 0.089n − 0.063 where: zn refers to the magnitude of redshift (shift in frequency as a proportion of initial frequency); n is an integer with values 1, 2, 3, 4 ... This predicts periodic redshift peaks at zn = 0.061, 0.30, 0.60, 0.96, 1.41, and 1.96, observed originally in a sample of 600 quasars and verified in other early studies. Modern discourse A 2001 study by Burbidge and Napier found the pattern of periodicity predicted by Karlsson's formula to be present at a high confidence level in three new samples of quasars, concluding that their findings are inexplicable by spectroscopic or similar selection effects. In 2002, Hawkins et al. found no evidence for redshift quantization in a sample of 1647 galaxy-quasar pairs from the 2dF Galaxy Redshift Survey: "Given that there are almost eight times as many data points in this sample as in the previous analysis by Burbidge & Napier (2001), we must conclude that the previous detection of a periodic signal arose from the combination of noise and the effects of the window function." In response, Napier and Burbidge (2003) argued that the methods employed by Hawkins et al. to remove noise from their samples amount to "excessive data smoothing" which could hide a true periodicity. They published an alternative methodology that preserves the periodicity observed in earlier studies. In 2005, Tang and Zhang found no evidence for redshift quantization of quasars in samples from the Sloan Digital Sky Survey and 2dF redshift survey. Arp et al. (2005) examined sample areas in the 2dF and SDSS surveys in detail, noting that quasar redshifts: "... fit very closely the long standing Karlsson formula and strongly suggest the existence of preferred values in the distribution of quasar redshifts." A 2006 study of 46,400 quasars in the SDSS by Bell and McDiarmid discovered 6 peaks in the redshift distribution consistent with the decreasing intrinsic redshift (DIR) model. However, Schneider et al. (2007) and Richards et al. (2006) reported that the periodicity reported by Bell and McDiarmid disappears after correcting for selection effects. Bell and Comeau (2010) concur that selection effects give rise to the apparent redshift peaks, but argue that the correction process removes a large fraction of the data. The authors argue that the "filter gap footprint" renders it impossible to verify or falsify the presence of a true redshift peak at Δz = 0.60. A 2006 review by Bajan et al. 
discovered weak effects of redshift periodization in data from the Local Group of galaxies and the Hercules Supercluster. They conclude that "galaxy redshift periodization is an effect which can really exist", but that the evidence is not well established pending study of larger databases. A 2007 absorption spectroscopic analysis of quasars by Ryabinkov et al. observed a pattern of statistically significant alternating peaks and dips in the redshift range z = 0.0–3.7, though they noted no statistical correlation between their findings and Karlsson's formula. References Physical cosmology
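As a numerical check on the reconstruction of Karlsson's formula given above (a short sketch; the constant −0.063 is fixed by requiring n = 1 to give z = 0.061):

```python
# Peaks predicted by the empirical Karlsson formula as reconstructed above.
for n in range(1, 7):
    z = 10 ** (0.089 * n - 0.063) - 1
    print(n, round(z, 2))
# -> 0.06, 0.30, 0.60, 0.96, 1.41, 1.96 (matching the quoted peak values)
```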
Redshift quantization
[ "Physics", "Astronomy" ]
1,723
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
1,910,119
https://en.wikipedia.org/wiki/Vanadium%28V%29%20oxide
Vanadium(V) oxide (vanadia) is the inorganic compound with the formula V2O5. Commonly known as vanadium pentoxide, it is a dark yellow solid, although when freshly precipitated from aqueous solution, its colour is deep orange. Because of its high oxidation state, it is both an amphoteric oxide and an oxidizing agent. From the industrial perspective, it is the most important compound of vanadium, being the principal precursor to alloys of vanadium and a widely used industrial catalyst. The mineral form of this compound, shcherbinaite, is extremely rare, almost always found among fumaroles. A mineral trihydrate, V2O5·3H2O, is also known under the name of navajoite. Chemical properties Reduction to lower oxides Upon heating a mixture of vanadium(V) oxide and vanadium(III) oxide, comproportionation occurs to give vanadium(IV) oxide, a deep-blue solid: V2O5 + V2O3 → 4 VO2 The reduction can also be effected by oxalic acid, carbon monoxide, and sulfur dioxide. Further reduction using hydrogen or excess CO can lead to complex mixtures of oxides such as V4O7 and V5O9 before black V2O3 is reached. Acid-base reactions V2O5 is an amphoteric oxide and, unlike most transition metal oxides, it is slightly water soluble, giving a pale yellow, acidic solution. Thus V2O5 reacts with strong non-reducing acids to form solutions containing pale yellow salts with dioxovanadium(V) centers: V2O5 + 2 HNO3 → 2 VO2(NO3) + H2O It also reacts with strong alkali to form polyoxovanadates, which have a complex structure that depends on pH. If excess aqueous sodium hydroxide is used, the product is a colourless salt, sodium orthovanadate, Na3VO4. If acid is slowly added to a solution of Na3VO4, the colour gradually deepens through orange to red before brown hydrated V2O5 precipitates around pH 2. These solutions contain mainly the ions HVO42− and V2O74− between pH 9 and pH 13, but below pH 9 more exotic species such as V4O124− and HV10O285− (decavanadate) predominate. Upon treatment with thionyl chloride, it converts to the volatile liquid vanadium oxychloride, VOCl3: V2O5 + 3 SOCl2 → 2 VOCl3 + 3 SO2 Other redox reactions Hydrochloric acid and hydrobromic acid are oxidised to the corresponding halogen, e.g., V2O5 + 6 HCl + 7 H2O → 2 [VO(H2O)5]2+ + 4 Cl− + Cl2 Vanadates or vanadyl compounds in acid solution are reduced by zinc amalgam through the colourful pathway [VO2]+ (yellow) → [VO]2+ (blue) → V3+ (green) → V2+ (violet). The ions are all hydrated to varying degrees. Preparation Technical grade V2O5 is produced as a black powder used for the production of vanadium metal and ferrovanadium. A vanadium ore or vanadium-rich residue is treated with sodium carbonate and an ammonium salt to produce sodium metavanadate, NaVO3. This material is then acidified to pH 2–3 using H2SO4 to yield a precipitate of "red cake". The red cake is then melted at 690 °C to produce the crude V2O5. Vanadium(V) oxide is produced when vanadium metal is heated with excess oxygen, but this product is contaminated with other, lower oxides. A more satisfactory laboratory preparation involves the decomposition of ammonium metavanadate at 500–550 °C: 2 NH4VO3 → V2O5 + 2 NH3 + H2O Uses Ferrovanadium production In terms of quantity, the dominant use for vanadium(V) oxide is in the production of ferrovanadium (see above). The oxide is heated with scrap iron and ferrosilicon, with lime added to form a calcium silicate slag. Aluminium may also be used, producing the iron-vanadium alloy along with alumina as a byproduct. 
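The aluminothermic route can be illustrated with a small stoichiometry sketch (a sketch only: the idealized equation 3 V2O5 + 10 Al → 6 V + 5 Al2O3 and complete conversion are assumptions, and industrial charges also contain iron and slag formers):

```python
# Idealized aluminothermic reduction: 3 V2O5 + 10 Al -> 6 V + 5 Al2O3
M_V2O5, M_AL, M_V = 181.88, 26.98, 50.94   # molar masses, g/mol

def charge_for_v2o5(mass_v2o5_g: float) -> tuple[float, float]:
    """Aluminium required and vanadium produced per charge of V2O5,
    assuming ideal stoichiometry and complete reaction."""
    mol = mass_v2o5_g / M_V2O5
    return (10 / 3) * mol * M_AL, 2 * mol * M_V

al_g, v_g = charge_for_v2o5(1000.0)
print(f"per kg V2O5: {al_g:.0f} g Al -> {v_g:.0f} g V")  # ~494 g Al, ~560 g V
```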
Sulfuric acid production Another important use of vanadium(V) oxide is in the manufacture of sulfuric acid, an important industrial chemical with an annual worldwide production of 165 million tonnes in 2001, with an approximate value of US$8 billion. Vanadium(V) oxide serves the crucial purpose of catalysing the mildly exothermic oxidation of sulfur dioxide to sulfur trioxide by air in the contact process: 2 SO2 + O2 ⇌ 2 SO3 The discovery of this simple reaction, for which V2O5 is the most effective catalyst, allowed sulfuric acid to become the cheap commodity chemical it is today. The reaction is performed between 400 and 620 °C; below 400 °C the V2O5 is inactive as a catalyst, and above 620 °C it begins to break down. Since it is known that V2O5 can be reduced to VO2 by SO2, one likely catalytic cycle is as follows: SO2 + V2O5 → SO3 + 2 VO2 followed by 2 VO2 + ½ O2 → V2O5 It is also used as a catalyst in the selective catalytic reduction (SCR) of NOx emissions in some power plants and diesel engines. Due to its effectiveness in converting sulfur dioxide into sulfur trioxide, and thereby sulfuric acid, special care must be taken with the operating temperatures and placement of a power plant's SCR unit when firing sulfur-containing fuels. Other oxidations Maleic anhydride is produced by the V2O5-catalysed oxidation of butane with air: 2 C4H10 + 7 O2 → 2 C2H2(CO)2O + 8 H2O Maleic anhydride is used for the production of polyester resins and alkyd resins. Phthalic anhydride is produced similarly by V2O5-catalysed oxidation of ortho-xylene or naphthalene at 350–400 °C. The equation for the vanadium oxide-catalysed oxidation of o-xylene to phthalic anhydride: C6H4(CH3)2 + 3 O2 → C6H4(CO)2O + 3 H2O The equation for the vanadium oxide-catalysed oxidation of naphthalene to phthalic anhydride: C10H8 + 4½ O2 → C6H4(CO)2O + 2 CO2 + 2 H2O Phthalic anhydride is a precursor to plasticisers, used for conferring pliability to polymers. A variety of other industrial compounds are produced similarly, including adipic acid, acrylic acid, oxalic acid, and anthraquinone. Other applications Due to its high temperature coefficient of resistance, vanadium(V) oxide finds use as a detector material in bolometers and microbolometer arrays for thermal imaging. It also finds application as an ethanol sensor at ppm levels (down to 0.1 ppm). Vanadium redox batteries are a type of flow battery used for energy storage, including large power facilities such as wind farms. Vanadium oxide is also used as a cathode in lithium-ion batteries. Biological activity Vanadium(V) oxide exhibits very modest acute toxicity to humans, with an LD50 of about 470 mg/kg. The greater hazard is with inhalation of the dust, where the LD50 ranges from 4–11 mg/kg for a 14-day exposure. Vanadate (VO43−), formed by hydrolysis of V2O5 at high pH, appears to inhibit enzymes that process phosphate (PO43−). However, the mode of action remains elusive. References Cited sources Further reading External links Vanadium Pentoxide and other Inorganic Vanadium Compounds (Concise International Chemical Assessment Document 29) Vanadium(V) compounds Catalysts Infrared sensor materials IARC Group 2B carcinogens Oxidizing agents Transition metal oxides
Vanadium(V) oxide
[ "Chemistry" ]
1,769
[ "Catalysis", "Catalysts", "Redox", "Oxidizing agents", "Chemical kinetics" ]
1,910,996
https://en.wikipedia.org/wiki/Implant%20%28medicine%29
An implant is a medical device manufactured to replace a missing biological structure, support a damaged biological structure, or enhance an existing biological structure. For example, an implant may be a rod, used to strengthen weak bones. Medical implants are human-made devices, in contrast to a transplant, which is a transplanted biomedical tissue. The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone, or apatite depending on what is the most functional. In 2018, for example, American Elements developed a nickel alloy powder for 3D printing robust, long-lasting, and biocompatible medical implants. In some cases implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents. Applications Implants can roughly be categorized into groups by application: Sensory and neurological Sensory and neurological implants are used for disorders affecting the major senses and the brain, as well as other neurological disorders. They are predominantly used in the treatment of conditions such as cataract, glaucoma, keratoconus, and other visual impairments; otosclerosis and other hearing loss issues, as well as middle ear diseases such as otitis media; and neurological diseases such as epilepsy, Parkinson's disease, and treatment-resistant depression. Examples include the intraocular lens, intrastromal corneal ring segment, cochlear implant, tympanostomy tube, and neurostimulator. Cardiovascular Cardiovascular medical devices are implanted in cases where the heart, its valves, or the rest of the circulatory system is in disorder. They are used to treat conditions such as heart failure, cardiac arrhythmia, ventricular tachycardia, valvular heart disease, angina pectoris, and atherosclerosis. Examples include the artificial heart, artificial heart valve, implantable cardioverter-defibrillator, artificial cardiac pacemaker, and coronary stent. Orthopedic Orthopaedic implants help alleviate issues with the bones and joints of the body. They are used to treat bone fractures, osteoarthritis, scoliosis, spinal stenosis, and chronic pain, as well as in knee and hip replacements. Examples include a wide variety of pins, rods, screws, and plates used to anchor fractured bones while they heal. Metallic glasses based on magnesium with zinc and calcium additions are being tested as potential metallic biomaterials for biodegradable medical implants. Patients with orthopaedic implants sometimes need to be placed in a magnetic resonance imaging (MRI) machine for detailed musculoskeletal study. Therefore, concerns have been raised regarding the loosening and migration of implants, heating of the implant metal, which could cause thermal damage to surrounding tissues, and distortion of the MRI scan that affects the imaging results. A 2005 study of orthopaedic implants showed that the majority of orthopaedic implants do not react with the magnetic fields of a 1.0 tesla MRI scanner, with the exception of external fixator clamps. However, at 7.0 tesla, several orthopaedic implants show significant interaction with the MRI magnetic fields, such as heel and fibular implants. Electric Electrical implants are being used to relieve pain from rheumatoid arthritis. The electric implant is embedded in the neck of patients with rheumatoid arthritis; the implant sends electrical signals to electrodes in the vagus nerve. 
The application of this device is being tested as an alternative to medicating people with rheumatoid arthritis for their lifetime. Contraception Contraceptive implants are primarily used to prevent unintended pregnancy and treat conditions such as non-pathological forms of menorrhagia. Examples include copper- and hormone-based intrauterine devices. Cosmetic Cosmetic implants, often prosthetics, attempt to bring some portion of the body back to an acceptable aesthetic norm. They are used as a follow-up to mastectomy due to breast cancer, for correcting some forms of disfigurement, and for modifying aspects of the body (as in buttock augmentation and chin augmentation). Examples include the breast implant, nose prosthesis, ocular prosthesis, and injectable filler. Other organs and systems Other types of organ dysfunction can occur in the systems of the body, including the gastrointestinal, respiratory, and urological systems. Implants are used in those and other locations to treat conditions such as gastroesophageal reflux disease, gastroparesis, respiratory failure, sleep apnea, urinary and fecal incontinence, and erectile dysfunction. Examples include the LINX, implantable gastric stimulator, diaphragmatic/phrenic nerve stimulator, neurostimulator, surgical mesh, artificial urinary sphincter and penile implant. Classification United States classification Medical devices are classified by the US Food and Drug Administration (FDA) under three different classes depending on the risks the medical device may impose on the user. According to 21 CFR 860.3, Class I devices are considered to pose the least amount of risk to the user and require the least amount of control. Class I devices include simple devices such as arm slings and hand-held surgical instruments. Class II devices are considered to need more regulation than Class I devices and are required to undergo specific requirements before FDA approval. Class II devices include X-ray systems and physiological monitors. Class III devices require the most regulatory controls, since the device supports or sustains human life or may not be well tested. Class III devices include replacement heart valves and implanted cerebellar stimulators. Most implants fall under Class II and Class III. Materials Commonly implanted metals A variety of minimally bioreactive metals are routinely implanted. The most commonly implanted form of stainless steel is 316L. Cobalt-chromium and titanium-based implant alloys are also permanently implanted. All of these are made passive by a thin layer of oxide on their surface. A consideration, however, is that metal ions diffuse outward through the oxide and end up in the surrounding tissue. Bioreaction to metal implants includes the formation of a small envelope of fibrous tissue. The thickness of this layer is determined by the products being dissolved and the extent to which the implant moves around within the enclosing tissue. Pure titanium may have only a minimal fibrous encapsulation. Stainless steel, on the other hand, may elicit encapsulation of as much as 2 mm. 
List of implantable metal alloys Stainless steel: ASTM F138/F139 (316L); ASTM F1314 (22Cr-13Ni-5Mn). Titanium alloys: ASTM F67 (unalloyed, commercially pure titanium); ASTM F136 (Ti-6Al-4V-ELI); ASTM F1295 (Ti-6Al-7Nb); ASTM F1472 (Ti-6Al-4V). Cobalt-chrome alloys: ASTM F90 (Co-20Cr-15W-10Ni); ASTM F562 (Co-35Ni-20Cr-10Mo); ASTM F1537 (Co-28Cr-6Mo). Tantalum: ASTM F560 (unalloyed tantalum). Porosity in Implants Porous implants are characterized by the presence of voids in the metallic or ceramic matrix. Voids can be regular, such as in additively manufactured (AM) lattices, or stochastic, such as in gas-infiltrated production processes. The reduction in the modulus of the implant follows a complex nonlinear relationship dependent on the volume fraction of base material and the morphology of the pores. Experimental models exist to predict the range of modulus that a stochastic porous material may take; above about 10% volume-fraction porosity, the models begin to deviate significantly. Different models, such as the rule of mixtures for low-porosity, two-material matrices, have been developed to describe mechanical properties. AM lattices have more predictable mechanical properties than stochastic porous materials and can be tuned so that they have favorable directional mechanical properties. Variables such as strut diameter, strut shape, and number of cross-beams can have a dramatic effect on the loading characteristics of the lattice. AM can fine-tune the lattice spacing to within a much smaller range than stochastically porous structures, enabling the future cell development of specific cultures in tissue engineering. Porosity in implants serves two primary purposes: 1) The elastic modulus of the implant is decreased, allowing the implant to better match the elastic modulus of the bone. The elastic modulus of cortical bone (~18 GPa) is significantly lower than that of typical solid titanium or steel implants (110 GPa and 210 GPa, respectively), causing the implant to take up a disproportionate amount of the load applied to the appendage and leading to an effect called stress shielding. 2) Porosity enables osteoblastic cells to grow into the pores of implants. Cells can span gaps smaller than 75 microns and grow into pores larger than 200 microns. Bone ingrowth is a favorable effect, as it anchors the cells into the implant, increasing the strength of the bone-implant interface. More load is transferred from the implant to the bone, reducing stress-shielding effects. The density of the bone around the implant is likely to be higher due to the increased load applied to the bone. Bone ingrowth reduces the likelihood of the implant loosening over time because stress shielding and the corresponding bone resorption over extended timescales are avoided. Porosity greater than 40% is favorable to facilitate sufficient anchoring of the osteoblastic cells.
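To make the stress-shielding point concrete, the following minimal sketch estimates the effective stiffness of a porous titanium implant with a simple linear rule of mixtures, in which voids contribute zero stiffness. This is an illustrative upper-bound model only, not the nonlinear relations described above, and the porosity values scanned here are hypothetical.

```python
def modulus_rule_of_mixtures(e_solid_gpa, porosity):
    """Upper-bound effective modulus: pores treated as zero-stiffness voids."""
    return e_solid_gpa * (1.0 - porosity)

E_TITANIUM = 110.0      # GPa, solid titanium (value quoted in the text)
E_CORTICAL_BONE = 18.0  # GPa, cortical bone (value quoted in the text)

# Scan porosity to see where a porous titanium implant approaches bone
# stiffness, the regime in which stress shielding is reduced.
for porosity in (0.0, 0.2, 0.4, 0.6, 0.8):
    e_eff = modulus_rule_of_mixtures(E_TITANIUM, porosity)
    print(f"porosity {porosity:4.0%}: E_eff = {e_eff:6.1f} GPa "
          f"(cortical bone: {E_CORTICAL_BONE} GPa)")
```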
Complications Under ideal conditions, implants should initiate the desired host response. Ideally, the implant should not cause any undesired reaction from neighboring or distant tissues. However, the interaction between the implant and the tissue surrounding the implant can lead to complications. The process of implantation of medical devices is subject to the same complications as other invasive medical procedures, during or after surgery. Common complications include infection, inflammation, and pain. Other complications include the risk of rejection from implant-induced coagulation and allergic foreign body response. Depending on the type of implant, the complications may vary. When the site of an implant becomes infected during or after surgery, the surrounding tissue becomes infected by microorganisms. Three main categories of infection can occur after an operation. Superficial immediate infections are caused by organisms that commonly grow near or on the skin. The infection usually occurs at the surgical opening. Deep immediate infection, the second type, occurs immediately after surgery at the site of the implant. Skin-dwelling and airborne bacteria cause deep immediate infection. These bacteria enter the body by attaching to the implant's surface prior to implantation. Though not common, deep immediate infections can also arise from dormant bacteria from previous infections of the tissue at the implantation site that have been activated by being disturbed during the surgery. The last type, late infection, occurs months to years after the implantation of the implant. Late infections are caused by dormant blood-borne bacteria attached to the implant prior to implantation. The blood-borne bacteria colonize the implant and eventually are released from it. Depending on the type of material used to make the implant, it may be infused with antibiotics to lower the risk of infections during surgery. However, only certain types of materials can be infused with antibiotics; moreover, the use of antibiotic-infused implants runs the risk of rejection by the patient, since the patient may develop a sensitivity to the antibiotic, and the antibiotic may not work on the bacteria. Inflammation, a common occurrence after any surgical procedure, is the body's response to tissue damage as a result of trauma, infection, intrusion of foreign materials, or local cell death, or as a part of an immune response. Inflammation starts with the rapid dilation of local capillaries to supply the local tissue with blood. The inflow of blood causes the tissue to become swollen and may cause cell death. The excess fluid, or edema, can activate pain receptors in the tissue. The site of the inflammation becomes warm from local disturbances of fluid flow and the increased cellular activity to repair the tissue or remove debris from the site. Implant-induced coagulation is similar to the coagulation process in the body that prevents blood loss from damaged blood vessels. However, here the coagulation process is triggered by proteins that become attached to the implant surface and lose their shapes. When this occurs, the protein changes conformation and different activation sites become exposed, which may trigger an immune system response in which the body attempts to attack the implant to remove the foreign material. The trigger of the immune system response can be accompanied by inflammation. The immune system response may lead to chronic inflammation in which the implant is rejected and has to be removed from the body. The immune system may encapsulate the implant in an attempt to remove the foreign material from the site of the tissue by encapsulating the implant in fibrinogen and platelets. The encapsulation of the implant can lead to further complications, since thick layers of fibrous encapsulation may prevent the implant from performing the desired functions. Bacteria may attack the fibrous encapsulation and become embedded in the fibers. Since the layers of fibers are thick, antibiotics may not be able to reach the bacteria, and the bacteria may grow and infect the surrounding tissue. 
In order to remove the bacteria, the implant would have to be removed. Lastly, the immune system may accept the presence of the implant and repair and remodel the surrounding tissue. Similar responses occur when the body initiates an allergic foreign body response; in that case, the implant would have to be removed. Failures The many examples of implant failure include rupture of silicone breast implants, hip replacement joints, and artificial heart valves, such as the Bjork–Shiley valve, all of which have caused FDA intervention. The consequences of implant failure depend on the nature of the implant and its position in the body. Thus, heart valve failure is likely to threaten the life of the individual, while breast implant or hip joint failure is less likely to be life-threatening. Devices implanted directly in the grey matter of the brain produce the highest quality signals, but are prone to scar-tissue build-up, causing the signal to become weaker, or even non-existent, as the body reacts to a foreign object in the brain. In 2018, the Implant Files, an investigation by the ICIJ, revealed that unsafe and inadequately tested medical devices had been implanted in patients' bodies. In the United Kingdom, Prof Derek Alderson, president of the Royal College of Surgeons, concluded: "All implantable devices should be registered and tracked to monitor efficacy and patient safety in the long-term." See also Drug-eluting implant Biofunctionalisation Implantable devices List of orthopedic implants Medical device Prosthesis Microchip implant References External links AAOMS - Dental Implant Surgery ACOG - IUDs and Birth Control Implants: Resource Overview FDA - Implants and Prosthetics International Medical Devices Database – Recalls, Safety Alerts and Field Safety Notices of medical devices – International Consortium of Investigative Journalists Implant-Register Biomedical engineering Prosthetics Tissue engineering
Implant (medicine)
[ "Chemistry", "Engineering", "Biology" ]
3,239
[ "Biological engineering", "Biomedical engineering", "Cloning", "Chemical engineering", "Tissue engineering", "Medical technology" ]
1,911,470
https://en.wikipedia.org/wiki/Sethusamudram%20Shipping%20Canal%20Project
Sethusamudram Shipping Canal Project () is a proposed project to create a shipping route in the shallow straits between India and Sri Lanka. This would provide a continuously navigable sea route around the Indian Peninsula. The channel would be dredged in the Sethusamudram sea between Tamil Nadu and Sri Lanka, passing through the limestone shoals of Rama Sethu. The project involves digging a long deepwater channel linking the shallow Palk Strait with the Gulf of Mannar. Conceived in 1860 by Alfred Dundas Taylor, it received the approval of the Indian government in 2005. The proposed route through the shoals of Ram Setu is opposed by some groups on religious, environmental and economic grounds. Five alternative routes were considered that avoid damage to the shoals. History Because of its shallow waters, Sethusamudram, the sea separating Sri Lanka from India, presents a hindrance to navigation through the Palk Strait. Though trade across the India-Sri Lanka divide has been active since at least the first millennium BCE, it has been limited to small boats and dinghies. Larger oceangoing vessels coming from the West have had to navigate around Sri Lanka to reach India's eastern coast. The eminent British geographer Major James Rennell surveyed the region in the late 18th century; he suggested that a "navigable passage could be maintained by dredging of the Ramisseram [sic]". Little notice was given to his proposal, perhaps because it came from "so young and an unknown officer", and the idea was only revived 60 years later. Efforts were made in 1838 to dredge the canal, but the passage did not remain navigable for any vessels except those with a shallow draft. The project was conceived in 1860 by Commander A. D. Taylor of the Indian Marines and has been reviewed many times without a decision being made. It has been part of the election manifestos of all political parties during elections. The Government of India appointed the Sethu Samudram Project Committee in 1955, headed by Dr. A. Ramasamy Mudaliar, which was charged with examining the desirability of the project. After evaluating the costs and benefits, this committee found the project feasible and viable. However, it strongly recommended an overland passage instead of a channel cutting through Rama's Bridge. A land passage would have several advantages, such as avoiding shifting sandbanks and navigational hazards. Several reviews of the proposals followed until the United Progressive Alliance Government of India headed by Prime Minister Manmohan Singh announced the inauguration of the project on 2 July 2005. In 2008, Prime Minister Manmohan Singh appointed Rajendra K. Pachauri as the head of a six-member committee to look at an alternative alignment avoiding the sensitive Rama Sethu stretch. In 2013, the committee released its report calling the project "unviable both from the economic as well as ecological angles". The Indian government rejected the committee's report and decided to go ahead with the project in its current form. In 2014, the Modi government decided that the project would be implemented by deepening the Pamban pass, which would save the Rama Setu from destruction. The project remains unfinished. In July 2020, parliamentary leader T. R. Baalu presented a letter to Prime Minister Modi urging him to finish the project before 2024. 
In the letter, Baalu cited tensions between India and China over influence in Sri Lanka, claiming that China will gain too strong a diplomatic and economic foothold in Sri Lanka if the Indian government does not continue development in the region. In March 2021, the closure of the project was announced. On 12 January 2023, the Tamil Nadu Government unanimously passed a resolution demanding that the project be revived. Alignments suggested by earlier committees Issues Economic Some naval hydrographers and experts suggest that the project is unlikely to be financially viable or serve ships in any significant way. The time savings for ships sailing from Kanyakumari or Tuticorin are between 10 and 30 hours. Ships from destinations in the Middle East, Africa, Mauritius and Europe would save an average of 8 hours using the canal. At the present tariff rates, ships from Africa and Europe will lose on every voyage, because the savings in time for these ships are considerably lower than what is calculated in the DPR. This loss is significant because 65% of the canal's projected users are from Africa and Europe. If tariffs are lowered to a point where ships from Africa and Europe will not lose money from using the canal, the IRR of the project falls to 2.6%. This is a level at which even public infrastructure projects are rejected by the government. The canal is designed for ships of 30,000 metric tonnes and lighter. Most new ships weighing more than 60,000 tonnes and tankers weighing above 150,000 tonnes cannot use this canal. Costs of project Axis Bank Ltd. was appointed "loan arranger" for the project in 2005. Since its inception in 2004, costs have risen, interest rates have risen, and old loan terms have lapsed. The loan sanctions, valid only for a limited period, lapsed. To secure more money, Sethusamudram Corp. Ltd would have to draw up new reports, sit with parliamentary committees and receive fresh approval. The project cost will grow well beyond the original estimate, a shipping ministry source said. Environmental impact The project would disturb the ecological balance, destroying corals and killing marine life. The area is an important fishing ground for Tamil Nadu, and the Gulf of Mannar Marine National Park is in the vicinity of the proposed project. Opposition to the canal's planned route has come from local fishermen who are demanding alternative channels, which are available. They say the planned route would destroy marine life and corals and would impact the valuable annual trade in conch shells. Deposits of thorium, important for nuclear fuel requirements, would also be affected. Opponents also say that the dumping of dredged material from the Palk Strait and the Gulf of Mannar in deeper waters would "endanger those areas, which are rich reserves containing 400 endangered species, including whales, sea turtles, dugongs and dolphins". Tsunami expert Professor Tad Murty, who advised the Government of India on the tsunami warning system, has said that the planned route may result in increased impact from tsunami waves. He wrote, "During the Indian Ocean tsunami of 26 December 2004, the southern part of Kerala was generally spared from a major tsunami, mainly because the tsunami waves from Sumatra region travelling south of the Sri Lankan island, partially diffracted northward and affected the central part of the Kerala coast. 
Since the tsunami is a long gravity wave (similar to tides and storm surges) during the diffraction process, the rather wide turn it has to take spared the south Kerala coast. On the other hand, deepening the Sethu Canal might provide a more direct route for the tsunami and this could impact south Kerala." On 21 April 2010, the Supreme Court of India decided to delay the project until an environmental impact analysis on the feasibility of a route through Dhanuskodi instead of Rama's Bridge had been carried out. Religion Opposition to the project came from some Hindu groups that want to preserve Rama's Bridge, as they believe it was miraculously created as described in the ancient epic Ramayana. References External links Sethusamudram Corporation Limited Geo-Strategic Implications of Sethusamudram Manitham's Interim Report on Sethusamudram Ship Canal Project Ship canals Canals in India Politics of India Proposed canals International canals Water transport in Sri Lanka Transport in Thoothukudi Gulf of Mannar Palk Strait Macro-engineering Proposed transport infrastructure in Sri Lanka
Sethusamudram Shipping Canal Project
[ "Engineering" ]
1,557
[ "Macro-engineering" ]
1,911,628
https://en.wikipedia.org/wiki/Pseudo-LRU
Pseudo-LRU or PLRU is a family of cache algorithms which improve on the performance of the Least Recently Used (LRU) algorithm by replacing values using approximate measures of age rather than maintaining the exact age of every value in the cache. PLRU usually refers to two cache replacement algorithms: tree-PLRU and bit-PLRU. Tree-PLRU Tree-PLRU is an efficient algorithm to select an item that most likely has not been accessed very recently, given a set of items and a sequence of access events to the items. This technique is used in the CPU cache of the Intel 486 and in many processors in the PowerPC family, such as Freescale's PowerPC G4 used by Apple Computer. The algorithm works as follows: consider a complete binary tree whose leaves are the items in question. Each internal node of the tree has a one-bit flag denoting "go left to find a pseudo-LRU element" or "go right to find a pseudo-LRU element". To find a pseudo-LRU element, traverse the tree according to the values of the flags. To update the tree with an access to an item N, traverse the tree to find N and, during the traversal, set the node flags to denote the direction that is opposite to the direction taken. This algorithm can be sub-optimal since it is an approximation. For example, with the cache lines stored at the leaves in the order A, C, B, D, if the access pattern is C, B, D, A, then on an eviction B is chosen instead of the least recently used line C. This is because A and C are in the same half of the tree, and accessing A directs the algorithm to the other half, which does not contain cache line C. Bit-PLRU Bit-PLRU stores one status bit for each cache line. These bits are called MRU-bits. Every access to a line sets its MRU-bit to 1, indicating that the line was recently used. Whenever the last remaining 0 bit of a set's status bits is set to 1, all other bits are reset to 0. At cache misses, the leftmost line whose MRU-bit is 0 is replaced. See also Cache algorithms References https://people.cs.clemson.edu/~mark/464/p_lru.txt http://www.ipdps.org/ipdps2010/ipdps2010-slides/session-22/2010IPDPS.pdf http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.217.3594&rep=rep1&type=pdf Memory management algorithms
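The following minimal Python sketch, based only on the description above, implements both variants for one cache set (the class and variable names are ours, not taken from any particular hardware or library). The tree bits are kept in heap order, and the usage example reproduces the sub-optimality scenario just described.

```python
class TreePLRU:
    """Tree-PLRU state for one cache set with a power-of-two number of ways.

    The ways are the leaves of a complete binary tree; the ways - 1 internal
    nodes are stored in heap order (node i has children 2i+1 and 2i+2).
    Each node holds one bit: 0 = "the pseudo-LRU way is in my left subtree",
    1 = "it is in my right subtree".
    """

    def __init__(self, ways):
        assert ways >= 2 and ways & (ways - 1) == 0
        self.ways = ways
        self.bits = [0] * (ways - 1)

    def victim(self):
        """Follow the flags from the root down to the pseudo-LRU way."""
        i = 0
        while i < self.ways - 1:
            i = 2 * i + 1 + self.bits[i]   # 0 -> go left, 1 -> go right
        return i - (self.ways - 1)          # heap index -> way number

    def access(self, way):
        """On an access, point every node on the way's path away from it."""
        i = way + (self.ways - 1)           # heap index of the accessed leaf
        while i > 0:
            parent = (i - 1) // 2
            came_from_left = (i == 2 * parent + 1)
            self.bits[parent] = 1 if came_from_left else 0
            i = parent


class BitPLRU:
    """Bit-PLRU: one MRU-bit per line; the victim is the leftmost 0 bit."""

    def __init__(self, ways):
        self.mru = [0] * ways

    def access(self, way):
        self.mru[way] = 1
        if all(self.mru):                   # last remaining 0 bit just set:
            self.mru = [0] * len(self.mru)
            self.mru[way] = 1               # reset all other bits to 0

    def victim(self):
        return self.mru.index(0)


# Reproduce the example above: leaves hold A, C, B, D; accesses C, B, D, A.
names = ["A", "C", "B", "D"]
plru = TreePLRU(4)
for line in "CBDA":
    plru.access(names.index(line))
print(names[plru.victim()])   # -> B, even though C is the true LRU line
```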
Pseudo-LRU
[ "Technology" ]
563
[ "Computing stubs", "Computer science", "Computer science stubs" ]
1,911,761
https://en.wikipedia.org/wiki/Magic%20angle%20spinning
In solid-state NMR spectroscopy, magic-angle spinning (MAS) is a technique routinely used to produce better resolution NMR spectra. MAS NMR consists of spinning the sample (usually at a frequency of 1 to 130 kHz) at the magic angle θm (ca. 54.74°, where cos²θm = 1/3) with respect to the direction of the magnetic field. The three main interactions in solid-state NMR (dipolar, chemical shift anisotropy, quadrupolar) often lead to very broad and featureless NMR lines. However, these three interactions in solids are orientation-dependent and can be averaged to some extent by MAS: The nuclear dipolar interaction has a (3cos²θ − 1) dependence, where θ is the angle between the internuclear axis and the main magnetic field. As a result, the dipolar interaction vanishes at the magic angle θm, and its contribution to the line broadening is removed. Even though all internuclear vectors cannot be set to the magic angle at once, rotating the sample around this axis produces the same averaging effect, provided the spinning frequency is comparable to that of the interaction. In addition, a set of spinning sidebands appears in the spectra: sharp lines separated from the isotropic resonance frequency by a multiple of the spinning rate. The chemical shift anisotropy (CSA) represents the orientation dependence of the chemical shift. Powder patterns generated by the CSA interaction can be averaged by MAS, resulting in a single resonance centred at the isotropic chemical shift (the centre of mass of the powder pattern). The quadrupolar interaction is only partially averaged by MAS, leaving a residual second-order quadrupolar interaction. In solution-state NMR, most of these interactions are averaged out because of the rapid time-averaged molecular motion that occurs due to thermal energy (molecular tumbling). The spinning of the sample is achieved via an impulse air-turbine mechanism, where the sample tube is lifted on a frictionless compressed-gas bearing and spun with a gas drive. Sample tubes are hollow cylinders coming in a variety of outer diameters ranging from 0.70 to 7 mm, mounted with a turbine cap. The rotors are typically made from zirconium oxide, although other ceramic materials (silicon nitride) or polymers (poly(methyl methacrylate) (PMMA), polyoxymethylene (POM)) can be found. Removable caps close the ends of the sample tube. They are made from a range of materials, typically Kel-F, Vespel, or zirconia and boron nitride for an extended temperature range. Magic-angle spinning was first described in 1958 by Edward Raymond Andrew, A. Bradbury, and R. G. Eades and independently in 1959 by I. J. Lowe. The name "magic-angle spinning" was coined in 1960 by Cornelis J. Gorter at the AMPERE congress in Pisa.
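As a quick numerical illustration (our own sketch, not part of the original text): the angular factor governing the dipolar and CSA interactions is the second Legendre polynomial P₂(cos θ) = (3cos²θ − 1)/2, which vanishes exactly at the magic angle.

```python
import math

# The magic angle is the root of P2(cos theta) = (3 cos^2(theta) - 1) / 2.
theta_m = math.degrees(math.acos(1 / math.sqrt(3)))
print(f"magic angle = {theta_m:.4f} degrees")   # 54.7356

def p2(theta_deg):
    """Orientation scaling factor of the dipolar and CSA interactions."""
    c = math.cos(math.radians(theta_deg))
    return (3 * c * c - 1) / 2

print(p2(0.0))       #  1.0  : maximal broadening (axis along the field)
print(p2(90.0))      # -0.5  : perpendicular orientation
print(p2(theta_m))   # ~0.0  : interaction averaged away at the magic angle
```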
Variations High Resolution Magic-Angle Spinning (HR-MAS) HRMAS is usually applied to solutions and gels where dipole-dipole interactions are insufficiently averaged by the intermediate molecular motion. HRMAS can dramatically average out residual dipolar interactions and result in spectra with linewidths similar to solution-state NMR. HRMAS bridges the gap between solution-state and solid-state NMR and enables the use of solution-state experiments. HRMAS and its medical research application were first described in a 1997 study of human brain tissues from a neurodegenerative disorder. Solution Magic Angle Spinning Use of magic angle spinning has been extended from solid-state to liquid (solution) NMR. Magic angle turning The magic-angle-turning (MAT) technique introduced by Gan employs slow (approximately 30 Hz) rotation of a powdered sample at the magic angle, in concert with pulses synchronized to 1/3 of the rotor period, to obtain isotropic-shift information in one dimension of a 2D spectrum. Magic angle spinning spheres Rather than cylindrical rotors, spheres can be spun stably at the magic angle, which can be used to increase the filling factor of the coils and hence improve the sensitivity. Magic angle spinning spheres allow stable MAS at faster spinning rates. Applications There are significant advantages to using MAS NMR in structural biology. Magic angle spinning can be used to characterize large insoluble systems, including biological assemblies and intact viruses, that cannot be studied with other methods. References Scientific techniques Nuclear magnetic resonance
Magic angle spinning
[ "Physics", "Chemistry" ]
916
[ "Nuclear magnetic resonance", "Nuclear physics" ]
1,912,367
https://en.wikipedia.org/wiki/Electromagnetic%20tensor
In electromagnetism, the electromagnetic tensor or electromagnetic field tensor (sometimes called the field strength tensor, Faraday tensor or Maxwell bivector) is a mathematical object that describes the electromagnetic field in spacetime. The field tensor was first used after the four-dimensional tensor formulation of special relativity was introduced by Hermann Minkowski. The tensor allows related physical laws to be written concisely, and allows for the quantization of the electromagnetic field by the Lagrangian formulation described below. Definition The electromagnetic tensor, conventionally labelled F, is defined as the exterior derivative of the electromagnetic four-potential, A, a differential 1-form: F = dA. Therefore, F is a differential 2-form, an antisymmetric rank-2 tensor field, on Minkowski space. In component form, F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, where \partial is the four-gradient and A is the four-potential. SI units for Maxwell's equations and the particle physicist's sign convention for the signature of Minkowski space, (+,-,-,-), will be used throughout this article. Relationship with the classical fields The Faraday differential 2-form is F = \tfrac{1}{2} F_{\mu\nu}\, dx^\mu \wedge dx^\nu; its components involving the time element c\,dt carry the electric field, and its purely spatial components carry the magnetic field. This is the exterior derivative of its 1-form antiderivative A, whose time component is built from the scalar potential \phi (a potential for the irrotational/conservative part of the electric field) and whose spatial components are built from the vector potential \mathbf{A} (a vector potential for the solenoidal magnetic field). Note that dF = 0 and d{\star}F = \mu_0 {\star}J, where d is the exterior derivative, \star is the Hodge star, and J (combining the electric current density and the electric charge density) is the 4-current density 1-form; this is the differential forms version of Maxwell's equations. The electric and magnetic fields can be obtained from the components of the electromagnetic tensor. The relationship is simplest in Cartesian coordinates: E_i = c F_{0i} and B_i = -\tfrac{1}{2}\epsilon_{ijk}F^{jk}, where c is the speed of light and \epsilon_{ijk} is the Levi-Civita tensor. This gives the fields in a particular reference frame; if the reference frame is changed, the components of the electromagnetic tensor will transform covariantly, and the fields in the new frame will be given by the new components. In contravariant matrix form with metric signature (+,-,-,-), F^{\mu\nu} = \begin{pmatrix} 0 & -E_x/c & -E_y/c & -E_z/c \\ E_x/c & 0 & -B_z & B_y \\ E_y/c & B_z & 0 & -B_x \\ E_z/c & -B_y & B_x & 0 \end{pmatrix}. The covariant form is given by index lowering, F_{\mu\nu} = \begin{pmatrix} 0 & E_x/c & E_y/c & E_z/c \\ -E_x/c & 0 & -B_z & B_y \\ -E_y/c & B_z & 0 & -B_x \\ -E_z/c & -B_y & B_x & 0 \end{pmatrix}. The Faraday tensor's Hodge dual is G^{\alpha\beta} = \tfrac{1}{2}\epsilon^{\alpha\beta\gamma\delta}F_{\gamma\delta}, whose matrix is obtained from that of F by the substitutions \mathbf{E}/c \to \mathbf{B} and \mathbf{B} \to -\mathbf{E}/c. From now on in this article, when the electric or magnetic fields are mentioned, a Cartesian coordinate system is assumed, and the electric and magnetic fields are with respect to the coordinate system's reference frame, as in the equations above. Properties The matrix form of the field tensor yields the following properties: Antisymmetry: F^{\mu\nu} = -F^{\nu\mu}. Six independent components: in Cartesian coordinates, these are simply the three spatial components of the electric field (Ex, Ey, Ez) and magnetic field (Bx, By, Bz). Inner product: if one forms an inner product of the field strength tensor, a Lorentz invariant is formed, F_{\mu\nu}F^{\mu\nu} = 2\left(B^2 - \frac{E^2}{c^2}\right), meaning this number does not change from one frame of reference to another. Pseudoscalar invariant: the product of the tensor with its Hodge dual gives a Lorentz invariant, G_{\gamma\delta}F^{\gamma\delta} = \tfrac{1}{2}\epsilon_{\alpha\beta\gamma\delta}F^{\alpha\beta}F^{\gamma\delta} = -\frac{4}{c}\,\mathbf{E}\cdot\mathbf{B}, where \epsilon_{\alpha\beta\gamma\delta} is the rank-4 Levi-Civita symbol. The sign for the above depends on the convention used for the Levi-Civita symbol; the convention used here is \epsilon_{0123} = -1. Determinant: \det(F) = \frac{1}{c^2}\left(\mathbf{E}\cdot\mathbf{B}\right)^2, which is proportional to the square of the above invariant. Trace: F^{\mu}{}_{\mu} = 0, which is equal to zero.
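As a sanity check on the matrix layout and invariants above, here is a small self-contained NumPy sketch (our own illustration; the field values are arbitrary):

```python
import numpy as np

c = 299_792_458.0                     # speed of light, m/s
E = np.array([1.0, 2.0, 3.0])         # arbitrary electric field, V/m
B = np.array([0.5, -0.2, 0.1])        # arbitrary magnetic field, T

def faraday(E, B, c):
    """Contravariant F^{mu nu} in Cartesian coordinates, signature (+,-,-,-)."""
    Ex, Ey, Ez = E / c
    Bx, By, Bz = B
    return np.array([[0.0, -Ex, -Ey, -Ez],
                     [ Ex, 0.0, -Bz,  By],
                     [ Ey,  Bz, 0.0, -Bx],
                     [ Ez, -By,  Bx, 0.0]])

F = faraday(E, B, c)
eta = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric
F_cov = eta @ F @ eta                     # lower both indices

# Antisymmetry and vanishing trace
print(np.allclose(F, -F.T), np.isclose(np.trace(F), 0.0))

# First Lorentz invariant: F_mn F^mn = 2 (B^2 - E^2 / c^2)
inv1 = np.einsum("mn,mn->", F_cov, F)
print(np.isclose(inv1, 2 * (B @ B - (E @ E) / c**2)))

# Determinant: det(F) = (E . B)^2 / c^2
print(np.isclose(np.linalg.det(F), (E @ B) ** 2 / c**2))
```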
Significance This tensor simplifies and reduces Maxwell's equations from four vector calculus equations into two tensor field equations. In electrostatics and electrodynamics, Gauss's law and Ampère's circuital law are respectively \nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0} and \nabla \times \mathbf{B} - \frac{1}{c^2}\frac{\partial \mathbf{E}}{\partial t} = \mu_0 \mathbf{J}, and reduce to the inhomogeneous Maxwell equation \partial_\alpha F^{\alpha\beta} = \mu_0 J^\beta, where J^\alpha = (c\rho, \mathbf{J}) is the four-current. In magnetostatics and magnetodynamics, Gauss's law for magnetism and the Maxwell–Faraday equation are respectively \nabla \cdot \mathbf{B} = 0 and \frac{\partial \mathbf{B}}{\partial t} + \nabla \times \mathbf{E} = 0, which reduce to the Bianchi identity \partial_\gamma F_{\alpha\beta} + \partial_\beta F_{\gamma\alpha} + \partial_\alpha F_{\beta\gamma} = 0, or, using the index notation with square brackets for the antisymmetric part of the tensor, \partial_{[\alpha} F_{\beta\gamma]} = 0. Using the expression relating the Faraday tensor to the four-potential, one can prove that the above antisymmetric quantity turns to zero identically (\partial_{[\alpha}\partial_\beta A_{\gamma]} \equiv 0). This tensor equation reproduces the homogeneous Maxwell's equations. Relativity The field tensor derives its name from the fact that the electromagnetic field is found to obey the tensor transformation law, this general property of physical laws being recognised after the advent of special relativity. This theory stipulated that all the laws of physics should take the same form in all coordinate systems; this led to the introduction of tensors. The tensor formalism also leads to a mathematically simpler presentation of physical laws. The inhomogeneous Maxwell equation leads to the continuity equation \partial_\alpha J^\alpha = 0, implying conservation of charge. Maxwell's laws above can be generalised to curved spacetime by simply replacing partial derivatives with covariant derivatives: F^{\alpha\beta}{}_{;\beta} = \mu_0 J^\alpha and F_{[\alpha\beta;\gamma]} = 0, where the semicolon notation represents a covariant derivative, as opposed to a partial derivative. These equations are sometimes referred to as the curved space Maxwell equations. Again, the second equation implies charge conservation (in curved spacetime): J^\alpha{}_{;\alpha} = 0. Lagrangian formulation of classical electromagnetism Classical electromagnetism and Maxwell's equations can be derived from the action S = \int \left( -\frac{1}{4\mu_0} F_{\mu\nu} F^{\mu\nu} - J^\mu A_\mu \right) \mathrm{d}^4 x, where \mathrm{d}^4 x is over space and time. This means the Lagrangian density is \mathcal{L} = -\frac{1}{4\mu_0} F_{\mu\nu}F^{\mu\nu} - J^\mu A_\mu = -\frac{1}{4\mu_0} \left(\partial_\mu A_\nu - \partial_\nu A_\mu\right)\left(\partial^\mu A^\nu - \partial^\nu A^\mu\right) - J^\mu A_\mu. The two middle terms in the parentheses are the same, as are the two outer terms, so the Lagrangian density is \mathcal{L} = -\frac{1}{2\mu_0} \left(\partial_\mu A_\nu \partial^\mu A^\nu - \partial_\nu A_\mu \partial^\mu A^\nu\right) - J^\mu A_\mu. Substituting this into the Euler–Lagrange equation of motion for a field, \partial_\mu \left( \frac{\partial \mathcal{L}}{\partial(\partial_\mu A_\nu)} \right) - \frac{\partial \mathcal{L}}{\partial A_\nu} = 0, the Euler–Lagrange equation becomes \partial_\mu \left( \partial^\mu A^\nu - \partial^\nu A^\mu \right) = \mu_0 J^\nu. The quantity in parentheses above is just the field tensor, so this finally simplifies to \partial_\mu F^{\mu\nu} = \mu_0 J^\nu. That equation is another way of writing the two inhomogeneous Maxwell's equations (namely, Gauss's law and Ampère's circuital law) using the substitutions \frac{E^i}{c} = -F^{0i} and \epsilon^{ijk} B_k = -F^{ij}, where i, j, k take the values 1, 2, and 3. Hamiltonian form The Hamiltonian density can be obtained with the usual relation \mathcal{H} = \pi_\mu \dot{A}^\mu - \mathcal{L}, with the conjugate momentum density \pi_\mu = \partial\mathcal{L}/\partial \dot{A}^\mu. Quantum electrodynamics and field theory The Lagrangian of quantum electrodynamics extends beyond the classical Lagrangian established in relativity to incorporate the creation and annihilation of photons (and electrons): \mathcal{L} = \bar\psi \left( i\hbar c\, \gamma^\alpha D_\alpha - mc^2 \right)\psi - \frac{1}{4\mu_0} F_{\alpha\beta} F^{\alpha\beta}, where the first part on the right-hand side, containing the Dirac spinor \psi, represents the Dirac field. In quantum field theory it is used as the template for the gauge field strength tensor. By being employed in addition to the local interaction Lagrangian it reprises its usual role in QED. See also Classification of electromagnetic fields Covariant formulation of classical electromagnetism Electromagnetic stress–energy tensor Gluon field strength tensor Ricci calculus Riemann–Silberstein vector Notes References Electromagnetism Minkowski spacetime Theory of relativity Tensor physical quantities Tensors in general relativity
Electromagnetic tensor
[ "Physics", "Mathematics", "Engineering" ]
1,381
[ "Electromagnetism", "Physical phenomena", "Tensors", "Physical quantities", "Quantity", "Tensor physical quantities", "Tensors in general relativity", "Fundamental interactions", "Theory of relativity" ]
1,912,810
https://en.wikipedia.org/wiki/Algar%E2%80%93Flynn%E2%80%93Oyamada%20reaction
The Algar–Flynn–Oyamada reaction is a chemical reaction whereby a chalcone undergoes an oxidative cyclization to form a flavonol. Reaction mechanism Several mechanisms have been proposed for this reaction, but none has been definitively elucidated. What is known is that a two-stage mechanism operates: first, a dihydroflavonol is formed, which subsequently oxidizes to the flavonol. Proposed mechanisms involving epoxidation of the alkene have been disproven. Two probable mechanisms thus remain: The phenoxide attacks the enone at the beta position, and the alkene directly attacks hydrogen peroxide from the alpha position, forming the dihydroflavonol. Alternatively, the phenoxide attacks the enone at the beta position, closing the six-membered ring and forming an enolate intermediate; the enolate then attacks hydrogen peroxide, forming the dihydroflavonol. See also Allan–Robinson reaction Auwers synthesis References Heterocycle forming reactions Organic redox reactions Name reactions
Algar–Flynn–Oyamada reaction
[ "Chemistry" ]
233
[ "Name reactions", "Organic redox reactions", "Heterocycle forming reactions", "Organic reactions" ]
1,914,015
https://en.wikipedia.org/wiki/Ashtamangala
The Ashtamangala (Sanskrit: aṣṭamaṅgala) is the sacred set of Eight Auspicious Signs (Chinese: 八吉祥, bajixiang) featured in a number of Indian religions such as Hinduism, Jainism, and Buddhism. The symbols or "symbolic attributes" are yidam and teaching tools. Not only do these attributes (or energetic signatures) point to qualities of enlightened mindstream, but they are the investiture that ornaments these enlightened "qualities" (Sanskrit: guṇa). Many cultural enumerations and variations of the Ashtamangala are extant. Buddhism Tibetan Buddhists make use of a particular set of eight auspicious symbols, ashtamangala, in household and public art. Some common interpretations are given along with each symbol, although different teachers may give different interpretations: Conch The right-turning white conch shell (Sanskrit: śaṅkha) represents the beautiful, deep, melodious, interpenetrating and pervasive sound of the dharma, which awakens disciples from the deep slumber of ignorance and urges them to accomplish their own welfare for the welfare of others. Endless knot The endless knot (Sanskrit: śrīvatsa) denotes "the auspicious mark represented by a curled noose emblematic of love". It is a symbol of the ultimate unity of everything. Moreover, it represents the intertwining of wisdom and compassion, the mutual dependence of religious doctrine and secular affairs, the union of wisdom and method, the inseparability of śūnyatā ("emptiness") and pratītyasamutpāda ("interdependent origination"), and the union of wisdom and compassion in enlightenment (see: namkha). This knot, net or web metaphor also conveys the Buddhist teaching of interpenetration. It is also an attribute of the god Vishnu, which is said to be engraved on his chest. A similar engraving of the Shrivatsa on the historical Gautama Buddha's chest is mentioned in some lists of the Physical characteristics of the Buddha. Pair of golden fish The two golden fish (Sanskrit: gaurmatsya) symbolise the auspiciousness of all sentient beings in a state of fearlessness, without danger of drowning in saṃsāra. The two golden fishes are linked with the Ganges and Yamuna nadi, prana and carp. Lotus The lotus flower (Sanskrit: padma) represents the primordial purity of body, speech, and mind, floating above the muddy waters of attachment and desire. The lotus symbolizes purity and renunciation. Although the lotus has its roots in the mud at the bottom of a pond, its flower lies immaculate above the water. The Buddhist lotus bloom has 4, 8, 16, 24, 32, 64, 100, or 1,000 petals. The same figures can refer to the body's "internal lotuses", that is to say, its energy centres (chakras). Parasol The jewelled parasol (Sanskrit: chatraratna), which is similar in ritual function to the baldachin or canopy, represents the protection of beings from harmful forces and illness. It represents the canopy or firmament of the sky and therefore the expansiveness and unfolding of space and the element æther. It represents the expansiveness, unfolding and protective quality of the sahasrara: all take refuge in the dharma under the auspiciousness of the parasol. Vase The treasure vase represents health, longevity, wealth, prosperity, wisdom and the phenomenon of space. The treasure vase, or pot, symbolizes the Buddha's infinite quality of teaching the dharma: no matter how many teachings he shared, the treasure never lessened. 
The iconographic representation of the treasure vase is often very similar to the kumbha, one of the few possessions permitted to a bhikkhu or bhikkhuni in Theravada Buddhism. The wisdom urn or treasure vase is used in many Vajrayana empowerments and initiations. Dharmachakra The Dharmachakra, or "Wheel of the Law", represents Gautama Buddha and the Dharma teaching. This symbol is commonly used by Tibetan Buddhists, where it sometimes also includes an inner wheel of the Gankyil. Nepalese Buddhists do not use the Wheel of Law in the eight auspicious symbols. Instead of the Dharmachakra, a fly-whisk may be used as one of the Ashtamangala to symbolize Tantric manifestations. It is made of a yak's tail attached to a silver staff, and used in ritual recitation and during the fanning of deities in pujas. Prayer wheels take the form of the Dharmachakra. Victory banner The dhvaja (Sanskrit for "banner, flag") was a military standard of ancient Indian warfare. The symbol represents the Buddha's victory over the four māras, or hindrances on the path to enlightenment. These hindrances are pride, desire, disturbing emotions, and the fear of death. Within the Tibetan tradition, a list of eleven different forms of the victory banner is given to represent eleven specific methods for overcoming defilement. Many variations of the dhvaja's design can be seen on the roofs of Tibetan monasteries to symbolise the Buddha's victory over the four māras. Banners are placed at the four corners of monastery and temple roofs. The cylindrical banners placed on monastery roofs are often made of beaten copper. Sequences of symbols Different traditions order the eight symbols differently. The sequential order of the Eight Auspicious Symbols of Nepali Buddhism is: endless knot, lotus flower, dhvaja, dharmachakra (fly-whisk in Nepali Buddhism), bumpa, golden fish, parasol, conch. The sequential order for Chinese Buddhism was defined in the Qing dynasty as: dharmachakra, conch, dhvaja, parasol, lotus flower, bumpa, golden fish, endless knot. Hinduism In Indian and Hindu tradition, the Ashtamangala may be used during certain occasions, including pujas, weddings, and coronations. The ashtamangala finds wide mention in the texts associated with Hinduism, Buddhism, and Jainism. They have been depicted in decorative motifs and cultural artifacts. One Hindu tradition lists them as: a lion (raja), a bull (vrishaba), a serpent (naga), a pitcher (kalasha), a necklace (vaijayanti), a kettledrum (bheri), a fan (vyajana), and a lamp (dipa). Another Hindu tradition lists them as: a fly-whisk, a full vase, a mirror, an elephant goad, a drum, a lamp, a flag, and a pair of fish. The list also differs depending on the place, region, and social group. Jainism In Jainism, the Ashtamangala are a set of eight auspicious symbols. There is some variation among different traditions concerning the eight symbols. In the Digambara tradition, the eight symbols are: parasol, dhvaja, kalasha, chamara, mirror, chair, hand fan, and vessel. In the Śvētāmbara tradition, the eight symbols are: swastika, srivatsa, nandavarta, vardhmanaka (food vessel), bhadrasana (seat), kalasha (pot), darpana (mirror), and a pair of fish. See also Dzi bead Eight Treasures (Chinese equivalent) Iconography Mani stone Sandpainting References Citations Sources Beer, Robert (1999). The Encyclopedia of Tibetan Symbols and Motifs (hardcover). Shambhala Publications. Beer, Robert (2003). The Handbook of Tibetan Buddhist Symbols. Shambhala Publications. 
About The Eight Auspicious Symbols Tibetan Buddhist Symbols Buddhist symbols Indian iconography Jain symbols Buddhist philosophical concepts Tantric practices Tibetan Buddhist art and culture Hindu symbols Hindu philosophical concepts Magic items
Ashtamangala
[ "Physics" ]
1,581
[ "Magic items", "Physical objects", "Matter" ]
22,463,899
https://en.wikipedia.org/wiki/List%20of%20climate%20engineering%20topics
Climate engineering (geoengineering) topics related to greenhouse gas remediation include: Solar radiation management Solar radiation management Stratospheric aerosol injection (climate engineering) Marine cloud brightening Cool roof Space sunshade Stratospheric Particle Injection for Climate Engineering Carbon dioxide removal Carbon dioxide removal Biochar Bio-energy with carbon capture and storage Direct air capture Ocean fertilization Enhanced weathering Other greenhouse gas remediation Greenhouse gas removal CFC laser photochemistry Other projects Arctic geoengineering Cirrus Cloud Thinning References Climate change-related lists Outlines of sciences Outlines External links An interactive Geoengineering Map prepared by ETC Group and the Heinrich Boell Foundation
List of climate engineering topics
[ "Engineering" ]
140
[ "Planetary engineering", "Geoengineering" ]
22,468,364
https://en.wikipedia.org/wiki/Volume%20%28thermodynamics%29
In thermodynamics, the volume of a system is an important extensive parameter for describing its thermodynamic state. The specific volume, an intensive property, is the system's volume per unit mass. Volume is a function of state and is interdependent with other thermodynamic properties such as pressure and temperature. For example, volume is related to the pressure and temperature of an ideal gas by the ideal gas law. The physical region covered by a system may or may not coincide with a control volume used to analyze the system. Overview The volume of a thermodynamic system typically refers to the volume of the working fluid, such as, for example, the fluid within a piston. Changes to this volume may be made through an application of work, or may be used to produce work. An isochoric process, however, operates at constant volume, thus no work can be produced. Many other thermodynamic processes will result in a change in volume. A polytropic process, in particular, causes changes to the system so that the quantity pV^n is constant (where p is pressure, V is volume, and n is the polytropic index, a constant). Note that for specific polytropic indexes, a polytropic process will be equivalent to a constant-property process. For instance, for very large values of n approaching infinity, the process becomes constant-volume. Gases are compressible, thus their volumes (and specific volumes) may be subject to change during thermodynamic processes. Liquids, however, are nearly incompressible, thus their volumes can often be taken as constant. In general, compressibility is defined as the relative volume change of a fluid or solid as a response to a pressure, and may be determined for substances in any phase. Similarly, thermal expansion is the tendency of matter to change in volume in response to a change in temperature. Many thermodynamic cycles are made up of varying processes, some which maintain a constant volume and some which do not. A vapor-compression refrigeration cycle, for example, follows a sequence where the refrigerant fluid transitions between the liquid and vapor states of matter. Typical units for volume are m³ (cubic meters), L (liters), and ft³ (cubic feet). Heat and work Mechanical work performed on a working fluid causes a change in the mechanical constraints of the system; in other words, for work to occur, the volume must be altered. Hence, volume is an important parameter in characterizing many thermodynamic processes where an exchange of energy in the form of work is involved. Volume is one of a pair of conjugate variables, the other being pressure. As with all conjugate pairs, the product pV is a form of energy. The product p dV is the energy lost to a system due to mechanical work. This product is one term which makes up enthalpy H: H = U + pV, where U is the internal energy of the system. The second law of thermodynamics describes constraints on the amount of useful work which can be extracted from a thermodynamic system. In thermodynamic systems where the temperature and volume are held constant, the measure of "useful" work attainable is the Helmholtz free energy; and in systems where the volume is not held constant, the measure of useful work attainable is the Gibbs free energy. Similarly, the appropriate value of heat capacity to use in a given process depends on whether the process produces a change in volume. The heat capacity is a function of the amount of heat added to a system. 
In the case of a constant-volume process, all the heat affects the internal energy of the system (i.e., there is no pV-work, and all the heat affects the temperature). However, in a process without a constant volume, the heat addition affects both the internal energy and the work (i.e., the enthalpy); thus the temperature changes by a different amount than in the constant-volume case and a different heat capacity value is required. Specific volume Specific volume (v) is the volume occupied by a unit of mass of a material. In many cases, the specific volume is a useful quantity to determine because, as an intensive property, it can be used to determine the complete state of a system in conjunction with another independent intensive variable. The specific volume also allows systems to be studied without reference to an exact operating volume, which may not be known (nor significant) at some stages of analysis. The specific volume of a substance is equal to the reciprocal of its mass density. Specific volume may be expressed in units such as m³/kg or ft³/lb: v = V/m = 1/ρ, where V is the volume, m is the mass and ρ is the density of the material. For an ideal gas, v = R_s T/p, where R_s is the specific gas constant, T is the temperature and p is the pressure of the gas. Specific volume may also refer to molar volume. Gas volume Dependence on pressure and temperature The volume of a gas increases proportionally to absolute temperature and decreases inversely proportionally to pressure, approximately according to the ideal gas law pV = nRT, where: p is the pressure, V is the volume, n is the amount of substance of gas (moles), R is the gas constant (8.314 J·K⁻¹·mol⁻¹), and T is the absolute temperature. To simplify, a volume of gas may be expressed as the volume it would have in standard conditions for temperature and pressure, which are 0 °C (273.15 K) and 100 kPa. Humidity exclusion In contrast to other gas components, water content in air, or humidity, depends to a higher degree on vaporization and condensation from or into water, which, in turn, mainly depends on temperature. Therefore, when applying more pressure to a gas saturated with water, all components will initially decrease in volume approximately according to the ideal gas law. However, some of the water will condense until returning to almost the same humidity as before, making the resulting total volume deviate from what the ideal gas law predicted. Conversely, decreasing temperature would also make some water condense, again making the final volume deviate from what the ideal gas law predicted. Therefore, gas volume may alternatively be expressed excluding the humidity content: Vd (volume dry). This fraction more accurately follows the ideal gas law. By contrast, Vs (volume saturated) is the volume a gas mixture would have if humidity was added to it until saturation (or 100% relative humidity). General conversion To compare gas volume between two conditions of different temperature or pressure (1 and 2), assuming nR are the same, the following equation uses humidity exclusion in addition to the ideal gas law: V₂ = V₁ × (p₁ − pw,1)/(p₂ − pw,2) × T₂/T₁, where, in addition to the terms used in the ideal gas law, pw,1 and pw,2 are the partial pressures of gaseous water during conditions 1 and 2, respectively. For example, calculating how much 1 liter of air (a) at 0 °C, 100 kPa, pw = 0 kPa (known as STPD, see below) would fill when breathed into the lungs, where it is mixed with water vapor (l) and quickly becomes 37 °C (310 K), 100 kPa, pw = 6.2 kPa (BTPS): V_l = 1 L × (100 − 0)/(100 − 6.2) × 310/273 ≈ 1.21 L.
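The same conversion is easy to script; the following minimal sketch (our own illustration) reproduces the STPD-to-BTPS example above:

```python
def convert_gas_volume(v1, p1, pw1, t1, p2, pw2, t2):
    """Dry-gas volume conversion: V2 = V1 * (p1 - pw1)/(p2 - pw2) * (T2/T1).

    Pressures in kPa (total p and water-vapor pw), temperatures in kelvin.
    """
    return v1 * (p1 - pw1) / (p2 - pw2) * (t2 / t1)

# 1 L of dry air at STPD (0 degC, 100 kPa, pw = 0) breathed into the lungs
# at BTPS (37 degC, 100 kPa, saturated with water vapor: pw = 6.2 kPa):
v_lungs = convert_gas_volume(1.0, 100.0, 0.0, 273.15, 100.0, 6.2, 310.15)
print(f"{v_lungs:.2f} L")   # ~1.21 L
```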
Common conditions Some common expressions of gas volume with defined or variable temperature, pressure and humidity inclusion are: ATPS: ambient temperature (variable) and pressure (variable), saturated (humidity depends on temperature). ATPD: ambient temperature (variable) and pressure (variable), dry (no humidity). BTPS: body temperature (37 °C or 310 K) and pressure (generally same as ambient), saturated (47 mmHg or 6.2 kPa). STPD: standard temperature (0 °C or 273 K) and pressure (760 mmHg or 101.3 kPa), dry (no humidity). Conversion factors The following conversion factors can be used to convert between expressions for volume of a gas. Partial volume The partial volume of a particular gas is a fraction of the total volume occupied by the gas mixture, with unchanged pressure and temperature. In gas mixtures, e.g. air, the partial volume allows focusing on one particular gas component, e.g. oxygen. It can be approximated both from partial pressure and from molar fraction: V_X = V_tot × P_X/P_tot = V_tot × n_X/n_tot, where: V_X is the partial volume of an individual gas component (X), V_tot is the total volume of the gas mixture, P_X is the partial pressure of gas X, P_tot is the total pressure of the gas mixture, n_X is the amount of substance of gas X, and n_tot is the total amount of substance in the gas mixture. See also Volumetric flow rate References Gases Physical chemistry Standards Thermodynamic properties Volume State functions
Volume (thermodynamics)
[ "Physics", "Chemistry", "Mathematics" ]
1,764
[ "State functions", "Scalar physical quantities", "Thermodynamic properties", "Gases", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Statistical mechanics", "Phases of matter", "Size", "Extensive quantities", "Thermodynamics", "nan", "Volume", "Wikipedia cat...
3,606,300
https://en.wikipedia.org/wiki/Kruskal%27s%20tree%20theorem
In mathematics, Kruskal's tree theorem states that the set of finite trees over a well-quasi-ordered set of labels is itself well-quasi-ordered under homeomorphic embedding. History The theorem was conjectured by Andrew Vázsonyi and proved by Joseph Kruskal in 1960; a short proof was given by Crispin Nash-Williams in 1963. It has since become a prominent example in reverse mathematics as a statement that cannot be proved in ATR0 (a second-order arithmetic theory with a form of arithmetical transfinite recursion). In 2004, the result was generalized from trees to graphs as the Robertson–Seymour theorem, a result that has also proved important in reverse mathematics and leads to the even-faster-growing SSCG function, which dwarfs TREE(3). A finitary application of the theorem gives the existence of the fast-growing TREE function. Statement The version given here is that proven by Nash-Williams; Kruskal's formulation is somewhat stronger. All trees we consider are finite. Given a tree T with a root, and given vertices v, w, call w a successor of v if the unique path from the root to w contains v, and call w an immediate successor of v if additionally the path from v to w contains no other vertex. Take X to be a partially ordered set. If T, T′ are rooted trees with vertices labeled in X, we say that T is inf-embeddable in T′ and write T ≤ T′ if there is an injective map F from the vertices of T to the vertices of T′ such that: For all vertices v of T, the label of v precedes the label of F(v); If w is any successor of v in T, then F(w) is a successor of F(v); and If w₁, w₂ are any two distinct immediate successors of v, then the path from F(w₁) to F(w₂) in T′ contains F(v). Kruskal's tree theorem then states: If X is well-quasi-ordered, then the set of rooted trees with labels in X is well-quasi-ordered under the inf-embeddable order defined above. (That is to say, given any infinite sequence T₁, T₂, … of rooted trees labeled in X, there is some i < j so that Tᵢ ≤ Tⱼ.)
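For unlabeled trees, inf-embeddability can be checked by brute force. The following small sketch (our own illustration, exponential-time and only suitable for tiny trees) maps the root of the embedded tree either strictly below the root of the host tree, or onto it, in which case distinct children must embed into pairwise distinct child subtrees, so that the paths between their images pass through the image of their common parent:

```python
from itertools import permutations

# A rooted tree is represented as a tuple of its child subtrees; a leaf is ().
LEAF = ()

def embeds(s, t):
    """True if tree s is inf-embeddable in tree t (unlabeled case)."""
    # Case 1: map all of s strictly below the root of t.
    if any(embeds(s, child) for child in t):
        return True
    # Case 2: map root(s) to root(t); children of s must embed into pairwise
    # distinct child subtrees of t, enforcing the "infimum" condition above.
    if len(s) > len(t):
        return False
    return any(all(embeds(c, img) for c, img in zip(s, perm))
               for perm in permutations(t, len(s)))

chain = ((LEAF,),)        # a path: root - node - leaf
cherry = ((LEAF, LEAF),)  # root - node - {leaf, leaf}
print(embeds(chain, cherry))   # True
print(embeds(cherry, chain))   # False: the sibling leaves have nowhere to go
```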
Friedman's work For a countable label set X, Kruskal's tree theorem can be expressed and proven using second-order arithmetic. However, like Goodstein's theorem or the Paris–Harrington theorem, some special cases and variants of the theorem can be expressed in subsystems of second-order arithmetic much weaker than the subsystems where they can be proved. This was first observed by Harvey Friedman in the early 1980s, an early success of the field of reverse mathematics. In the case where the trees above are taken to be unlabeled (that is, in the case where X has size one), Friedman found that the result was unprovable in ATR0, thus giving the first example of a predicative result with a provably impredicative proof. This case of the theorem is still provable by Π¹₁-CA0, but by adding a "gap condition" to the definition of the order on trees above, he found a natural variation of the theorem unprovable in this system. Much later, the Robertson–Seymour theorem would give another theorem unprovable by Π¹₁-CA0. Ordinal analysis confirms the strength of Kruskal's theorem, with the proof-theoretic ordinal of the theorem equaling the small Veblen ordinal (sometimes confused with the smaller Ackermann ordinal). Weak tree function Suppose that P(n) is the statement: There is some m such that if T₁, …, Tₘ is a finite sequence of unlabeled rooted trees where Tₖ has at most k + n vertices, then Tᵢ ≤ Tⱼ for some i < j. All the statements P(n) are true as a consequence of Kruskal's theorem and Kőnig's lemma. For each n, Peano arithmetic can prove that P(n) is true, but Peano arithmetic cannot prove the statement "P(n) is true for all n". Moreover, the length of the shortest proof of P(n) in Peano arithmetic grows phenomenally fast as a function of n, far faster than any primitive recursive function or the Ackermann function, for example. The least m for which P(n) holds similarly grows extremely quickly with n. Define tree(n), the weak tree function, as the largest m so that we have the following: There is a sequence T₁, …, Tₘ of unlabeled rooted trees, where each Tₖ has at most k + n vertices, such that Tᵢ ≤ Tⱼ does not hold for any i < j ≤ m. It is known that tree(1) = 2, tree(2) = 5, tree(3) = 844,424,930,131,960 (about 844 trillion), tree(4) is far larger than g₆₄ (where g₆₄ is Graham's number), and TREE(3) (where the argument specifies the number of labels; see below) is larger still, dwarfing even towers of iterated applications of the weak tree function. To differentiate the two functions, "TREE" (with all caps) is the big TREE function, and "tree" (with all letters in lowercase) is the weak tree function. TREE function By incorporating labels, Friedman defined a far faster-growing function. For a positive integer n, take TREE(n) to be the largest m so that we have the following: There is a sequence T₁, …, Tₘ of rooted trees labelled from a set of n labels, where each Tᵢ has at most i vertices, such that Tᵢ ≤ Tⱼ does not hold for any i < j ≤ m. The TREE sequence begins TREE(1) = 1, TREE(2) = 3, before suddenly exploding at TREE(3) to a value so large that many other "large" combinatorial constants, such as Friedman's n(4) and Graham's number, are extremely small by comparison. A lower bound for n(4), and, hence, an extremely weak lower bound for TREE(3), is A(A(…A(1)…)), where the number of nested A's is A(187196). Graham's number (g₆₄ in terms of Graham's function g), for example, is much smaller than this lower bound. See also Paris–Harrington theorem Kanamori–McAloon theorem Robertson–Seymour theorem Notes Friedman originally denoted this function by TR[n]. n(k) is defined as the length of the longest possible sequence that can be constructed with a k-letter alphabet such that no block of letters x_i, …, x_{2i} is a subsequence of any later block x_j, …, x_{2j}. A(x), taking one argument, is defined as A(x, x), where A(k, n), taking two arguments, is a particular version of Ackermann's function defined as: A(1, n) = 2n, A(k+1, 1) = A(k, 1), A(k+1, n+1) = A(k, A(k+1, n)). References Citations Bibliography Mathematical logic Order theory Theorems in discrete mathematics Trees (graph theory) Wellfoundedness
Kruskal's tree theorem
[ "Mathematics" ]
1,305
[ "Discrete mathematics", "Order theory", "Mathematical logic", "Wellfoundedness", "Theorems in discrete mathematics", "Mathematical problems", "Mathematical theorems", "Mathematical induction" ]
3,606,944
https://en.wikipedia.org/wiki/Unit%20dummy%20force%20method
The unit dummy force method provides a convenient means for computing displacements in structural systems. It is applicable for both linear and non-linear material behaviours as well as for systems subject to environmental effects, and is hence more general than Castigliano's second theorem.

Discrete systems

Consider a discrete system such as a truss, beam or frame having members interconnected at the nodes. Let the consistent set of member deformations be given by q, which can be computed using the member flexibility relation. These member deformations give rise to the nodal displacements r, which we want to determine.

We start by applying N virtual nodal forces R*, one for each wanted component of r, and find the virtual member forces Q* that are in equilibrium with R*:

Q* = B R*    (1)

In the case of a statically indeterminate system, the matrix B is not unique because the set of Q* that satisfies nodal equilibrium is infinite. It can be computed as the inverse of the nodal equilibrium matrix of any primary system derived from the original system.

Imagine that internal and external virtual forces undergo, respectively, the real deformations and displacements; the virtual work done can be expressed as:

External virtual work: R*ᵀ r
Internal virtual work: Q*ᵀ q

According to the virtual work principle, the two work expressions are equal:

R*ᵀ r = Q*ᵀ q

Substitution of (1) gives

R*ᵀ r = R*ᵀ Bᵀ q

Since R* contains arbitrary virtual forces, the above equation gives

r = Bᵀ q    (2)

It is remarkable that the computation in (2) does not involve any integration regardless of the complexity of the system, and that the result is unique irrespective of the choice of primary system for B. It is thus far more convenient and general than the classical form of the dummy unit load method, which varies with the type of system as well as with the imposed external effects. On the other hand, it is important to note that Eq. (2) is for computing displacements or rotations of the nodes only. This is not a restriction, because we can make any point into a node when desired.

Finally, the name unit load arises from the interpretation that the coefficients in matrix B are the member forces in equilibrium with a unit nodal force, by virtue of Eq. (1).

General systems

For a general system, the unit dummy force method also comes directly from the virtual work principle. Fig. (a) shows a system with known actual deformations. These deformations, supposedly consistent, give rise to displacements throughout the system. For example, a point A has moved to A', and we want to compute the displacement r of A in the direction shown. For this particular purpose, we choose the virtual force system in Fig. (b), which shows the unit force R* at A in the direction of r, so that the external virtual work done by R* is r itself, noting that the work done by the virtual reactions in (b) is zero because their displacements in (a) are zero. The internal virtual work is done by the virtual stresses acting through the real strains, where the virtual stresses must satisfy equilibrium everywhere. Equating the two work expressions gives the desired displacement r.

Structural analysis
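A minimal numerical sketch of Eq. (2) for the discrete case (all numbers below are assumed, not taken from a real structure): given member deformations q and a matrix B whose column j lists the member forces in equilibrium with a unit virtual force at wanted displacement j, the nodal displacements follow from a single matrix product.

    import numpy as np

    # Hypothetical system with M = 3 members and N = 2 wanted displacements.
    q = np.array([1.2, -0.4, 0.8])        # consistent member deformations (e.g. mm)
    B = np.array([[ 0.6, 0.0],
                  [-0.8, 1.0],
                  [ 1.0, 0.5]])           # member forces per unit virtual nodal force

    r = B.T @ q                           # Eq. (2): r = B^T q
    print(r)                              # -> [1.84  0.  ]

Note that no integration is involved, matching the remark after Eq. (2).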
Unit dummy force method
[ "Engineering" ]
618
[ "Structural engineering", "Structural analysis", "Mechanical engineering", "Aerospace engineering" ]
3,607,704
https://en.wikipedia.org/wiki/Fano%27s%20inequality
In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook.

It is used to find a lower bound on the error probability of any decoder as well as lower bounds for minimax risks in density estimation.

Let the discrete random variables X and Y represent input and output messages with a joint probability P(x, y). Let e represent an occurrence of error; i.e., that X ≠ X̃, with X̃ = f(Y) being an approximate version of X. Fano's inequality is

H(X | Y) ≤ H_b(e) + P(e) log(|𝒳| − 1)

where 𝒳 denotes the support of X, |𝒳| denotes the cardinality of (number of elements in) 𝒳, H(X | Y) is the conditional entropy, P(e) is the probability of the communication error, and H_b(e) is the corresponding binary entropy.

Proof

Define an indicator random variable E that indicates the event that our estimate X̃ is in error:

E = 0 if X̃ = X, and E = 1 if X̃ ≠ X.

Consider H(E, X | X̃). We can use the chain rule for entropies to expand this in two different ways:

H(E, X | X̃) = H(X | X̃) + H(E | X, X̃) = H(E | X̃) + H(X | E, X̃)

Equating the two:

H(X | X̃) + H(E | X, X̃) = H(E | X̃) + H(X | E, X̃)

Expanding the right-most term:

H(X | E, X̃) = P(E = 0) H(X | X̃, E = 0) + P(E = 1) H(X | X̃, E = 1)

Since E = 0 means X = X̃, being given the value of X̃ allows us to know the value of X with certainty. This makes the term H(X | X̃, E = 0) equal to 0. On the other hand, E = 1 means that X ≠ X̃, hence given the value of X̃, we can narrow X down to one of |𝒳| − 1 different values, allowing us to upper bound the conditional entropy: H(X | X̃, E = 1) ≤ log(|𝒳| − 1). Hence

H(X | E, X̃) ≤ P(e) log(|𝒳| − 1)

The other term satisfies H(E | X̃) ≤ H(E), because conditioning reduces entropy. Because of the way E is defined, H(E) = H_b(e), meaning that H(E | X̃) ≤ H_b(e). On the left-hand side, H(E | X, X̃) = 0, since E is completely determined by X and X̃. Putting it all together,

H(X | X̃) ≤ H_b(e) + P(e) log(|𝒳| − 1)

Because X → Y → X̃ is a Markov chain, we have I(X; X̃) ≤ I(X; Y) by the data processing inequality, and hence H(X | Y) ≤ H(X | X̃), giving us

H(X | Y) ≤ H_b(e) + P(e) log(|𝒳| − 1)

Intuition

Fano's inequality can be interpreted as a way of dividing the uncertainty of a conditional distribution into two questions given an arbitrary predictor. The first question, corresponding to the term H_b(e), relates to the uncertainty of the predictor: was the prediction correct or not? If the prediction is correct, there is no more uncertainty remaining. If the prediction is incorrect, the uncertainty of any discrete distribution has an upper bound of the entropy of the uniform distribution over all choices besides the incorrect prediction. This has entropy log(|𝒳| − 1). Looking at extreme cases, if the predictor is always correct the first and second terms of the inequality are 0, and the existence of a perfect predictor implies X is totally determined by Y, and so H(X | Y) = 0. If the predictor is always wrong, then the first term is 0, and H(X | Y) can only be upper bounded by the entropy of a uniform distribution over the remaining choices, log(|𝒳| − 1).

Alternative formulation

Let X be a random variable with density equal to one of r + 1 possible densities f_1, …, f_{r+1}. Furthermore, the Kullback–Leibler divergence between any pair of densities cannot be too large:

D_KL(f_i ‖ f_j) ≤ β for all i ≠ j.

Let ψ(X) ∈ {1, …, r + 1} be an estimate of the index. Then

sup_i P_i(ψ(X) ≠ i) ≥ 1 − (β + log 2) / log r

where P_i is the probability induced by f_i.

Generalization

The following generalization is due to Ibragimov and Khasminskii (1979), Assouad and Birgé (1983). Let F be a class of densities with a subclass of r + 1 densities f_θ such that for any θ ≠ θ′,

‖f_θ − f_{θ′}‖_{L1} ≥ α, and
D_KL(f_θ ‖ f_{θ′}) ≤ β.

Then in the worst case the expected value of the estimation error is bounded from below:

sup_{f ∈ F} E ‖f_n − f‖_{L1} ≥ (α / 2) (1 − (nβ + log 2) / log r)

where f_n is any density estimator based on a sample of size n.

References

P. Assouad, "Deux remarques sur l'estimation", Comptes Rendus de l'Académie des Sciences de Paris, Vol. 296, pp. 1021–1024, 1983.
L. Birgé, "Estimating a density under order restrictions: nonasymptotic minimax risk", Technical report, UER de Sciences Économiques, Universite Paris X, Nanterre, France, 1983.
L. Devroye, A Course in Density Estimation. Progress in Probability and Statistics, Vol. 14. Boston: Birkhäuser, 1987.
R. M. Fano, Transmission of Information: A Statistical Theory of Communications. Cambridge, Massachusetts: M.I.T. Press, 1961.
R. Fano, "Fano inequality", Scholarpedia, 2008.
I. A. Ibragimov, R. Z. Has'minskii, Statistical Estimation: Asymptotic Theory. Applications of Mathematics, vol. 16, Springer-Verlag, New York, 1981.

Information theory
Inequalities
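As a numerical sanity check of the inequality stated above, the bound can be verified directly on a toy discrete channel (the joint distribution, alphabet size, and choice of MAP estimator below are assumptions made purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Random joint distribution P(x, y) over |X| = |Y| = 4 symbols.
    P = rng.random((4, 4))
    P /= P.sum()

    # MAP estimator: for each observed y, guess the most probable x.
    xhat = P.argmax(axis=0)
    Pe = 1.0 - sum(P[xhat[y], y] for y in range(4))     # error probability

    # Entropies in bits: H(X|Y) = H(X, Y) - H(Y).
    H = lambda p: -(p[p > 0] * np.log2(p[p > 0])).sum()
    H_X_given_Y = H(P.flatten()) - H(P.sum(axis=0))

    Hb = H(np.array([Pe, 1.0 - Pe]))                    # binary entropy of Pe
    print(H_X_given_Y <= Hb + Pe * np.log2(4 - 1))      # Fano bound holds -> True

Any estimator built from Y must satisfy the bound; MAP is used here only because it is easy to write down.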
Fano's inequality
[ "Mathematics", "Technology", "Engineering" ]
861
[ "Telecommunications engineering", "Applied mathematics", "Binary relations", "Computer science", "Information theory", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
3,607,910
https://en.wikipedia.org/wiki/Atomic%20layer%20deposition
Atomic layer deposition (ALD) is a thin-film deposition technique based on the sequential use of a gas-phase chemical process; it is a subclass of chemical vapor deposition. The majority of ALD reactions use two chemicals called precursors (also called "reactants"). These precursors react with the surface of a material one at a time in a sequential, self-limiting manner. A thin film is slowly deposited through repeated exposure to separate precursors. ALD is a key process in fabricating semiconductor devices, and part of the set of tools for synthesizing nanomaterials.

Introduction

During atomic layer deposition, a film is grown on a substrate by exposing its surface to alternate gaseous species (typically referred to as precursors or reactants). In contrast to chemical vapor deposition, the precursors are never present simultaneously in the reactor, but they are inserted as a series of sequential, non-overlapping pulses. In each of these pulses the precursor molecules react with the surface in a self-limiting way, so that the reaction terminates once all the available sites on the surface are consumed. Consequently, the maximum amount of material deposited on the surface after a single exposure to all of the precursors (a so-called ALD cycle) is determined by the nature of the precursor–surface interaction. By varying the number of cycles it is possible to grow materials uniformly and with high precision on arbitrarily complex and large substrates.

ALD is a deposition method with great potential for producing very thin, conformal films with control of the thickness and composition of the films possible at the atomic level. A major driving force for the recent interest is the prospects seen for ALD in scaling down microelectronic devices according to Moore's law. ALD is an active field of research, with hundreds of different processes published in the scientific literature, though some of them exhibit behaviors that depart from that of an ideal ALD process. Currently there are several comprehensive review papers that give a summary of the published ALD processes, including the work of Puurunen, Miikkulainen et al., Knoops et al., and Mackus & Schneider et al. An interactive, community-driven database of ALD processes is also available online, which generates an up-to-date overview in the form of an annotated periodic table.

The sister technique of atomic layer deposition, molecular layer deposition (MLD), uses organic precursors to deposit polymers. By combining the ALD/MLD techniques, it is possible to make highly conformal and pure hybrid films for many applications. Another technology related to ALD is sequential infiltration synthesis (SIS), which uses alternating precursor vapor exposures to infiltrate and modify polymers. SIS is also referred to as vapor phase infiltration (VPI) and sequential vapor infiltration (SVI).

History

1960s

In the 1960s, Stanislav Koltsov, together with Valentin Aleskovsky and colleagues, experimentally developed the principles of ALD at Leningrad Technological Institute (LTI) in the Soviet Union. The purpose was to experimentally build upon the theoretical considerations of the "framework hypothesis" coined by Aleskovsky in his 1952 habilitation thesis. The experiments started with metal chloride reactions and water with porous silica, soon extending to other substrate materials and planar thin films. Aleskovsky and Koltsov together proposed the name "Molecular Layering" for the new technique in 1965.
The principles of Molecular Layering were summarized in the doctoral thesis ("professor's thesis") of Koltsov in 1971. Research activities on molecular layering covered a broad scope, from fundamental chemistry research to applied research on porous catalysts, sorbents and fillers, to microelectronics and beyond.

In 1974, when starting the development of thin-film electroluminescent displays (TFEL) at Instrumentarium Oy in Finland, Tuomo Suntola devised ALD as an advanced thin-film technology. Suntola named it atomic layer epitaxy (ALE) based on the meaning of "epitaxy" in the Greek language, "arrangement upon". The first experiments were made with elemental Zn and S to grow ZnS. ALE as a means for growth of thin films was internationally patented in more than 20 countries. A breakthrough occurred when Suntola and co-workers switched from high-vacuum reactors to inert gas reactors, which enabled the use of compound reactants like metal chlorides, hydrogen sulfide and water vapor for performing the ALE process. The technology was first disclosed at the 1980 SID conference. The TFEL display prototype presented consisted of a ZnS layer between two aluminum oxide dielectric layers, all made in an ALE process using ZnCl2 + H2S and AlCl3 + H2O as the reactants. The first large-scale proof-of-concept of ALE-EL displays was the flight information boards installed at Helsinki-Vantaa airport in 1983. TFEL flat-panel display production was started in the mid-1980s by Lohja Oy at the Olarinluoma factory. Academic research on ALE started at Tampere University of Technology (where Suntola gave lectures on electron physics) in the 1970s, and in the 1980s at Helsinki University of Technology.

TFEL display manufacturing remained the only industrial application of ALE until the 1990s. In 1987, Suntola started the development of the ALE technology for new applications like photovoltaic devices and heterogeneous catalysts at Microchemistry Ltd., established for that purpose by the Finnish national oil company Neste Oy. In the 1990s, ALE development at Microchemistry was directed to semiconductor applications and ALE reactors suitable for silicon wafer processing. In 1999, Microchemistry Ltd. and the ALD technology were sold to the Dutch ASM International, a major supplier of semiconductor manufacturing equipment, and Microchemistry Ltd. became ASM Microchemistry Oy, ASM's Finnish subsidiary. Microchemistry Ltd/ASM Microchemistry Ltd was the only manufacturer of commercial ALD reactors in the 1990s. In the early 2000s, the expertise on ALD reactors in Finland gave rise to two new manufacturers, Beneq Oy and Picosun Oy, the latter started by Sven Lindfors, Suntola's close coworker since 1975. The number of reactor manufacturers increased rapidly, and semiconductor applications became the industrial breakthrough of the ALD technology, as ALD became an enabling technology for the continuation of Moore's law. In 2004, Tuomo Suntola received the European SEMI award for the development of the ALD technology for semiconductor applications, and in 2018 the Millennium Technology Prize.

The developers of ML and ALE met at the 1st international conference on atomic layer epitaxy, "ALE-1", in Espoo, Finland, 1990. An attempt to survey the extent of the molecular layering work was made in a scientific ALD review article in 2005 and later in the VPHA-related publications.
The name "atomic layer deposition" was apparently proposed for the first time in writing as an alternative to ALE, in analogy with CVD, by Markku Leskelä (professor at the University of Helsinki) at the ALE-1 conference, Espoo, Finland. It took about a decade before the name gained general acceptance, with the onset of the international conference series on ALD by the American Vacuum Society.

2000s

In 2000, Gurtej Singh Sandhu and Trung T. Doan of Micron Technology initiated the development of atomic layer deposition high-κ films for DRAM memory devices. This helped drive cost-effective implementation of semiconductor memory, starting with 90-nm node DRAM. Intel Corporation has reported using ALD to deposit high-κ gate dielectric for its 45 nm CMOS technology.

ALD was developed in two independent discoveries under the names atomic layer epitaxy (ALE, Finland) and molecular layering (ML, Soviet Union). To clarify the early history, the Virtual Project on the History of ALD (VPHA) was set up in summer 2013. It has resulted in several publications reviewing the historical development of ALD under the names ALE and ML.

2010s

In 2010, sequential infiltration synthesis (SIS), first reported by researchers at Argonne National Laboratory, was added to the family of ALD-derived techniques.

Surface reaction mechanisms

In a prototypical ALD process, a substrate is exposed to two reactants A and B in a sequential, non-overlapping way. In contrast to other techniques such as chemical vapor deposition (CVD), where thin-film growth proceeds in a steady-state fashion, in ALD each reactant reacts with the surface in a self-limited way: the reactant molecules can react only with a finite number of reactive sites on the surface. Once all those sites have been consumed in the reactor, the growth stops. The remaining reactant molecules are flushed away, and only then is reactant B inserted into the reactor. By alternating exposures of A and B, a thin film is deposited. This process is shown in the side figure. Consequently, when describing an ALD process one refers to both dose times (the time a surface is being exposed to a precursor) and purge times (the time left in between doses for the precursor to evacuate the chamber) for each precursor. The dose-purge-dose-purge sequence of a binary ALD process constitutes an ALD cycle. Also, rather than using the concept of growth rate, ALD processes are described in terms of their growth per cycle.

In ALD, enough time must be allowed in each reaction step so that a full adsorption density can be achieved. When this happens the process has reached saturation. This time will depend on two key factors: the precursor pressure and the sticking probability. Therefore, the rate of adsorption per unit of surface area can be expressed as

R = S · F

where R is the rate of adsorption, S is the sticking probability, and F is the incident molar flux. However, a key characteristic of ALD is that S changes with time: as more molecules react with the surface, the sticking probability becomes smaller, reaching a value of zero once saturation is attained.

The specific details of the reaction mechanisms are strongly dependent on the particular ALD process. With hundreds of processes available to deposit oxide, metal, nitride, sulfide, chalcogenide, and fluoride materials, the unraveling of the mechanistic aspects of ALD processes is an active field of research. Some representative examples are shown below.
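The time-dependent sticking probability is what makes an ALD half-reaction self-limiting. A toy first-order Langmuir model (a sketch under assumed parameter values, not a model from the ALD literature) shows the resulting saturation:

    import numpy as np

    # If S(theta) = S0*(1 - theta), coverage obeys d(theta)/dt = S0*(1 - theta)*F/N0,
    # whose closed-form solution is used below.
    k = 5.0                                  # lumped rate S0*F/N0 in 1/s (assumed)
    t = np.linspace(0.0, 2.0, 9)             # dose time in seconds
    theta = 1.0 - np.exp(-k * t)             # fractional surface coverage
    for ti, th in zip(t, theta):
        print(f"t = {ti:4.2f} s   coverage = {th:.3f}")

Past saturation, extending the dose time deposits essentially no extra material, which is why film thickness is controlled by the number of cycles rather than by exposure time.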
Thermal ALD

Thermal ALD requires temperatures ranging from room temperature (~20 °C) to 350 °C for ligand-exchange or combustion-type surface reactions. It occurs through surface reactions, which enables accurate thickness control regardless of the substrate geometry (subject to aspect ratio) and reactor design.

The synthesis of Al2O3 from trimethylaluminum (TMA) and water is one of the best known thermal ALD examples. During the TMA exposure, TMA dissociatively chemisorbs on the substrate surface and any remaining TMA is pumped out of the chamber. The dissociative chemisorption of TMA leaves a surface covered with Al–CH3 groups. The surface is then exposed to H2O vapor, which reacts with the surface –CH3 groups, forming CH4 as a reaction byproduct and resulting in a hydroxylated Al2O3 surface.

Plasma ALD

In plasma-assisted ALD (PA-ALD), the high reactivity of the plasma species allows the deposition temperature to be reduced without compromising the film quality; also, a wider range of precursors can be used, and thus a wider range of materials can be deposited as compared to thermal ALD.

Spatial ALD

In temporal ALD the separate precursor and co-reactant doses are separated from each other in time by a purge step. In contrast, in spatial ALD (s-ALD), these gases are delivered at different locations, so they are separated in space. In atmospheric-pressure s-ALD the precursor and co-reactant are delivered continuously and are separated from each other by a gas curtain to prevent gas-phase reactions. Such a gas curtain typically consists of nitrogen injection and exhaust positions. As a substrate moves through the different gas zones, self-limiting reactions take place at the substrate surface and the ALD process proceeds. As this process can easily be accelerated, the deposition rate for spatial ALD can be much higher than for conventional ALD. For example, for ALD of Al2O3 the deposition rate increases from 100–300 nm per hour to 60 nm per minute. The inline nature of spatial ALD makes it suitable for high-volume production lines and roll-to-roll production. In general, s-ALD has been employed to apply moisture permeation barriers, passivation layers in silicon solar cells and functional layers in batteries. The chemistry for spatial ALD processes is comparable with typical temporal ALD processes, and materials that have been explored include inorganic metal oxides such as Al2O3, (Al- or Ga-doped) ZnO, SiO2, In2O3, InZnO, LIPON, Zn(O,S), SnOx and TiOx, but platinum-group metals (Pt, Ir, Ru) can also be deposited. Additionally, organic molecules can be grown in combination with inorganic atoms to enable molecular layer deposition (MLD). Plasma- or ozone-enhanced spatial ALD has been demonstrated, which generally lowers the required deposition temperatures.

Photo-assisted ALD

In this ALD variety, UV light is used to accelerate surface reactions on the substrate. Hence the reaction temperature can be reduced, as in plasma-assisted ALD. Compared to plasma-assisted ALD, the activation is weaker, but it is often easier to control by adjusting the wavelength, intensity and timing of the illumination.

Metal ALD

Copper metal ALD has attracted much attention due to the demand for copper as an interconnect material and the relative ease with which copper can be deposited thermally. Copper has a positive standard electrochemical potential and is the most easily reduced metal of the first-row transition metals. Thus, numerous ALD processes have been developed, including several using hydrogen gas as the coreactant.
Ideally, copper metal ALD should be performed at ≤100 °C to achieve continuous films with low surface roughness, since higher temperatures can result in agglomeration of the deposited copper.

Some metals can be grown by ALD via fluorosilane elimination reactions using a metal halide and a silicon precursor (e.g. SiH4, Si2H6) as the reactants. These reactions are very exothermic due to the formation of stable Si–F bonds. Metals deposited by fluorosilane elimination include tungsten and molybdenum. As an example, the surface reactions for tungsten metal ALD using WF6 and Si2H6 as the reactants can be expressed as

WSiF2H* + WF6 → WWF5* + SiF3H
WF5* + Si2H6 → WSiF2H* + SiF3H + 2 H2

The overall ALD reaction is

WF6 + Si2H6 → W + 2 SiF3H + 2 H2,  ∆H = −181 kcal

The growth rate can vary from 4 to 7 Å/cycle depending on the deposition temperature (177 to 325 °C) and the Si2H6 reactant exposure (~10^4 to 10^6 L), factors that may influence Si2H6 insertion into Si–H bonds and result in a silicon CVD contribution to the tungsten ALD growth.

The thermal ALD of many other metals is challenging (or presently impossible) due to their very negative electrochemical potentials. Recently, the application of novel strong reducing agents has led to the first reports of low-temperature thermal ALD processes for several electropositive metals. Chromium metal was deposited using a chromium alkoxide precursor and BH3(NHMe2). Titanium and tin metals were grown from their respective metal chlorides (MCl4, M = Ti, Sn) and a bis(trimethylsilyl) six-membered ring compound. Aluminum metal was deposited using an aluminum dihydride precursor and AlCl3.

Catalytic SiO2 ALD

The use of catalysts is of paramount importance in delivering reliable methods of SiO2 ALD. Without catalysts, surface reactions leading to the formation of SiO2 are generally very slow and only occur at exceptionally high temperatures. Typical catalysts for SiO2 ALD include Lewis bases such as NH3 or pyridine, and SiO2 ALD can also be initiated when these Lewis bases are coupled with other silicon precursors such as tetraethoxysilane (TEOS). Hydrogen bonding is believed to occur between the Lewis base and the SiOH* surface species or between the H2O-based reactant and the Lewis base. Oxygen becomes a stronger nucleophile when the Lewis base hydrogen-bonds with the SiOH* surface species, because the SiO–H bond is effectively weakened. As such, the electropositive Si atom in the SiCl4 reactant is more susceptible to nucleophilic attack. Similarly, hydrogen bonding between a Lewis base and an H2O reactant makes the electronegative O in H2O a strong nucleophile that is able to attack the Si in an existing SiCl* surface species. The use of a Lewis base catalyst is more or less a requirement for SiO2 ALD, since without a Lewis base catalyst reaction temperatures must exceed 325 °C and pressures must exceed 10 torr. Generally, the most favorable temperature to perform SiO2 ALD is 32 °C, and a common deposition rate is 1.35 angstroms per binary reaction sequence. The two surface reactions for SiO2 ALD and the overall reaction are provided below.

Primary reactions at the surface:
SiOH* + SiCl4 → SiOSiCl3* + HCl
SiCl* + H2O → SiOH* + HCl

Overall ALD reaction:
SiCl4 + 2 H2O → SiO2 + 4 HCl

Applications

Microelectronics applications

ALD is a useful process for the fabrication of microelectronics due to its ability to produce accurate thicknesses and uniform surfaces, in addition to high-quality film production using various different materials.
In microelectronics, ALD is studied as a potential technique to deposit high-κ (high-permittivity) gate oxides, high-κ memory capacitor dielectrics, ferroelectrics, and metals and nitrides for electrodes and interconnects. In high-κ gate oxides, where the control of ultrathin films is essential, ALD is only likely to come into wider use at the 45 nm technology node. In metallizations, conformal films are required; currently it is expected that ALD will be used in mainstream production at the 65 nm node. In dynamic random access memories (DRAMs), the conformality requirements are even higher, and ALD is the only method that can be used when feature sizes become smaller than 100 nm. Several products that use ALD include magnetic recording heads, MOSFET gate stacks, DRAM capacitors, nonvolatile ferroelectric memories, and many others.

Gate oxides

Deposition of the high-κ oxides Al2O3, ZrO2, and HfO2 has been one of the most widely examined areas of ALD. The motivation for high-κ oxides comes from the problem of high tunneling current through the commonly used SiO2 gate dielectric in MOSFETs when it is downscaled to a thickness of 1.0 nm and below. With the high-κ oxide, a thicker gate dielectric can be made for the required capacitance density, and thus the tunneling current through the structure can be reduced.

Transition-metal nitrides

Transition-metal nitrides, such as TiN and TaN, find potential use both as metal barriers and as gate metals. Metal barriers are used to encase the copper interconnects used in modern integrated circuits, both to avoid diffusion of Cu into the surrounding materials, such as insulators and the silicon substrate, and to prevent Cu contamination by elements diffusing from the insulators, by surrounding every Cu interconnect with a layer of metal barrier. The metal barriers have strict demands: they should be pure, dense, conductive, conformal and thin, and have good adhesion towards metals and insulators. These process requirements can be fulfilled by ALD. The most-studied ALD nitride is TiN, which is deposited from TiCl4 and NH3.

Metal films

Motivations for interest in metal ALD include:

Cu interconnects and W plugs, or at least Cu seed layers for Cu electrodeposition and W seeds for W CVD;
transition-metal nitrides (e.g. TiN, TaN, WN) for Cu interconnect barriers;
noble metals for ferroelectric random access memory (FRAM) and DRAM capacitor electrodes;
high- and low-work-function metals for dual-gate MOSFETs.

Magnetic recording heads

Magnetic recording heads utilize electric fields to polarize particles and leave a magnetized pattern on a hard disk. Al2O3 ALD is used to create uniform, thin layers of insulation. By using ALD, it is possible to control the insulation thickness to a high level of accuracy. This allows for more accurate patterns of magnetized particles and thus higher-quality recordings.

DRAM capacitors

DRAM capacitors are yet another application of ALD. An individual DRAM cell can store a single bit of data and consists of a single MOS transistor and a capacitor. Major efforts are being put into reducing the size of the capacitor, which will effectively allow for greater memory density. In order to change the capacitor size without affecting the capacitance, different cell orientations are being used. Some of these include stacked or trench capacitors. With the emergence of trench capacitors, the problem of fabricating these capacitors comes into play, especially as the size of semiconductors decreases.
ALD allows trench features to be scaled to beyond 100 nm. The ability to deposit single layers of material allows for a great deal of control over the material. Except for some issues of incomplete film growth (largely due to an insufficient amount of precursor or low-temperature substrates), ALD provides an effective means of depositing thin films like dielectrics or barriers.

Photovoltaic applications

The use of the ALD technique in solar cells is becoming more prominent with time. In the past, it has been used to deposit surface passivation layers in crystalline-silicon (c-Si) solar cells, buffer layers in copper indium gallium selenide (CIGS) solar cells and barrier layers in dye-sensitized solar cells (DSSCs). For example, the use of ALD-grown Al2O3 for solar cell applications was demonstrated by Schmidt et al.; it was used as a surface passivation layer for the development of PERC (passivated emitter and rear cell) solar cells. The use of the ALD technique to deposit charge transport layers (CTLs) is also being explored widely for perovskite solar cells. The ability of ALD to deposit high-quality and conformal films with precise control of thickness can provide a great advantage in finely tailoring the interfaces between the CTL and the perovskite layer. Moreover, it can be useful in obtaining uniform and pin-hole-free films over large areas. These aspects make ALD a promising technique for further improving and stabilizing the performance of perovskite solar cells.

Electrooptic applications

Thin film couplers

As photonic integrated circuits (PICs) emerge, often in a manner similar to electronic integrated circuits, a wide variety of on-chip optical device structures are needed. One example is the nanophotonic coupler, which behaves as a micrometer-size beamsplitter at the intersection of optical waveguides: high-aspect-ratio trenches (~100 nm width × 4 micrometer depth) are first defined by etching and then back-filled with aluminum oxide by ALD to form optical-quality interfaces.

Biomedical applications

Understanding and being able to specify the surface properties of biomedical devices is critical in the biomedical industry, especially regarding devices that are implanted in the body. A material interacts with the environment at its surface, so the surface properties largely direct the interactions of the material with its environment. Surface chemistry and surface topography affect protein adsorption, cellular interactions, and the immune response.

Some current uses in biomedical applications include creating flexible sensors, modifying nanoporous membranes, polymer ALD, and creating thin biocompatible coatings. ALD has been used to deposit TiO2 films to create optical waveguide sensors as diagnostic tools. Also, ALD is beneficial in creating flexible sensing devices that can be used, for example, in the clothing of athletes to detect movement or heart rate. ALD is one possible manufacturing process for flexible organic field-effect transistors (OFETs) because it is a low-temperature deposition method.

Nanoporous materials are emerging throughout the biomedical industry in drug delivery, implants, and tissue engineering. The benefit of using ALD to modify the surfaces of nanoporous materials is that, unlike many other methods, the saturating and self-limiting nature of the reactions means that even deeply embedded surfaces and interfaces are coated with a uniform film.
Nanoporous surfaces can have their pore size reduced further in the ALD process, because the conformal coating completely coats the insides of the pores. This reduction in pore size may be advantageous in certain applications.

As a permeation barrier for plastics

ALD can be used as a permeation barrier for plastics. For example, it is well established as a method for encapsulation of OLEDs on plastic. ALD can also be used to coat 3-D printed plastic parts for use in vacuum environments by mitigating outgassing, which allows for custom low-cost tools for both semiconductor processing and space applications. ALD can be used to form a barrier on plastics in roll-to-roll processes.

Quality and its control

The quality of an ALD process can be monitored using several different imaging techniques to make sure that the ALD process is occurring smoothly and producing a conformal layer over a surface. One option is the use of cross-sectional scanning electron microscopy (SEM) or transmission electron microscopy (TEM). High magnification of images is pertinent for assessing the quality of an ALD layer. X-ray reflectivity (XRR) is a technique that measures thin-film properties including thickness, density, and surface roughness. Another optical quality-evaluation tool is spectroscopic ellipsometry. Its application between the depositions of each layer by ALD provides information on the growth rate and material characteristics of the film. Applying this analysis tool during the ALD process, sometimes referred to as in situ spectroscopic ellipsometry, allows for greater control over the growth rate of the films during the ALD process. This type of quality control occurs during the ALD process rather than assessing the films afterwards, as in TEM imaging or XRR. Additionally, Rutherford backscattering spectroscopy (RBS), X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), and four-terminal sensing can be used to provide quality-control information for thin films deposited by ALD.

Advantages and limitations

Advantages

ALD provides a very controlled method to produce a film to an atomically specified thickness. Also, the growth of different multilayer structures is straightforward. Because of the sensitivity and precision of the equipment, it is very beneficial to those in the fields of microelectronics and nanotechnology in producing small but efficient semiconductor devices. ALD typically involves the use of relatively low temperatures and a catalyst, which is thermochemically favored. The lower temperature is beneficial when working with soft substrates, such as organic and biological samples. Some precursors that are thermally unstable may still be used so long as their decomposition rate is relatively slow.

Disadvantages

High purity of the substrates is very important, and as such, high costs will ensue. Although this cost may not be much relative to the cost of the equipment needed, one may need to run several trials before finding conditions that favor the desired product. Once the layer has been made and the process is complete, excess precursors may need to be removed from the final product. In some final products, less than 1% of impurities are present.

Economic viability

Atomic layer deposition instruments can range anywhere from $200,000 to $800,000 based on the quality and efficiency of the instrument.
There is no set cost for running a cycle of these instruments; the cost varies depending on the quality and purity of the substrates used, as well as on the temperature and time of machine operation. Some substrates are less available than others and require special conditions, as some are very sensitive to oxygen and may then decompose at an increased rate. Multicomponent oxides and certain metals traditionally needed in the microelectronics industry are generally not cost-efficient.

Reaction time

The process of ALD is very slow, and this is known to be its major limitation. For example, Al2O3 is deposited at a rate of 0.11 nm per cycle, which can correspond to an average deposition rate of 100–300 nm per hour, depending on cycle duration and pumping speed. This problem can be overcome by using spatial ALD, where the substrate is moved in space below a special ALD showerhead and the two precursor gases are separated by gas curtains/bearings. In this way, deposition rates of 60 nm per minute can be reached. ALD is typically used to produce substrates for microelectronics and nanotechnology, and therefore thick atomic layers are not needed. Many substrates cannot be used because of their fragility or impurity. Impurities are typically found at the 0.1–1 at.% level because some of the carrier gases are known to leave residue and are also sensitive to oxygen.

Chemical limitations

Precursors must be volatile but not subject to decomposition, and since most precursors are very sensitive to oxygen/air, this limits the substrates that may be used. Some biological substrates are very sensitive to heat and may have fast decomposition rates that are not favored and yield larger impurity levels. A multitude of thin-film substrate materials are available, but the important substrates needed for use in microelectronics can be hard to obtain and may be very expensive.

References

External links

ALD animation

Atomic layer deposition
Chemical vapor deposition
Thin film deposition
Semiconductor device fabrication
Finnish inventions
Soviet inventions
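The deposition-rate figures quoted under "Reaction time" above can be put in perspective with a short calculation (the growth per cycle is taken from the text; the target thickness and cycle duration are assumed values):

    # Conventional (temporal) ALD of Al2O3:
    target_nm = 30.0                    # desired film thickness (assumed)
    gpc_nm    = 0.11                    # growth per cycle, from the text
    cycle_s   = 4.0                     # assumed dose + purge time per cycle

    cycles = target_nm / gpc_nm         # ~273 cycles
    hours  = cycles * cycle_s / 3600.0  # ~0.30 h, i.e. roughly 100 nm per hour
    print(f"{cycles:.0f} cycles, {hours:.2f} h")

    # Spatial ALD at 60 nm per minute reaches the same thickness in half a minute:
    print(f"spatial ALD: {target_nm / 60.0:.1f} min")

The assumed 4 s cycle reproduces the low end of the quoted 100–300 nm per hour range, which is consistent with the text's figures.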
Atomic layer deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
6,384
[ "Thin film deposition", "Microtechnology", "Coatings", "Thin films", "Semiconductor device fabrication", "Chemical vapor deposition", "Planes (geometry)", "Solid state engineering" ]
3,608,401
https://en.wikipedia.org/wiki/Acoustic%20foam
Acoustic foam is an open-celled foam used for acoustic treatment. It attenuates airborne sound waves, reducing their amplitude for the purposes of noise reduction or noise control; the absorbed energy is dissipated as heat. Acoustic foam can be made in several different colors, sizes and thicknesses. It can be attached to walls, ceilings, doors, and other features of a room to control noise levels, vibration, and echoes. Many acoustic foam products are treated with dyes and/or fire retardants.

Uses

The objective of acoustic foam is to improve or change a room's sound qualities by controlling residual sound through absorption. This requires strategic placement of acoustic foam panels on walls, ceilings, floors and other surfaces. Proper placement can effectively manage resonance within the room and help give the room the desired sonic qualities.

Acoustic enhancement

Acoustic foam enhances the sonic properties of a room by managing unwanted reverberation. For this reason, it is often used in restaurants, performance spaces, and recording studios. It is also often installed in large rooms with large, reverberant surfaces, such as gymnasiums, places of worship, theaters, and concert halls, where excess reverberation is prone to arise. The purpose is to reduce, but not entirely eliminate, resonance within the room.

In untreated spaces without acoustic foam or similar sound-absorbing materials, sound waves reflect off surfaces and continue to bounce around the room. When a wave encounters a change in acoustic impedance, such as hitting a solid surface, acoustic reflections occur. These reflections repeat many times before the wave becomes inaudible. Reflections can cause acoustic problems such as phase summation and phase cancellation: a new complex wave originates when the direct source wave coincides with the reflected waves, and this complex wave changes the frequency response of the source material.

Functionality

Acoustic foam is a lightweight material made from polyurethane (either polyether or polyester) or extruded melamine foam. It is usually cut into tiles. One surface of these tiles often features pyramid, cone, wedge, or uneven cuboid shapes. Acoustic foam tiles are suited to placing on sonically reflective surfaces to act as sound absorbers, thus enhancing or changing the sound properties of a room. This type of sound absorption is different from soundproofing, which is typically used to keep sound from escaping or entering a room rather than changing the properties of sound within the room itself.

Acoustic foam panels typically suppress reverberation in the mid and high frequencies. To deal with lower frequencies, much thicker pieces of acoustic foam (often in metal or wood enclosures) can be placed in the corners of a room; these are called acoustic foam bass traps.

See also

Anechoic chamber
Bushing (isolator)
Polystyrene
Polyurethane
Sorbothane
Soundproofing
Styrofoam
Vibration isolation

References

Acoustics
Foams
Noise reduction
Noise control
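The effect of added absorption on reverberation can be estimated with Sabine's formula, RT60 = 0.161·V/(S·α), a standard result not discussed above; the room dimensions and absorption coefficients in this sketch are assumed purely for illustration.

    # Sabine reverberation-time estimate for a small rectangular room.
    V = 5.0 * 4.0 * 3.0                   # room volume, m^3
    S = 2 * (5*4 + 5*3 + 4*3)             # total surface area, m^2
    for name, alpha in [("bare surfaces", 0.05), ("foam-treated", 0.90)]:
        rt60 = 0.161 * V / (S * alpha)    # alpha: average absorption coefficient
        print(f"{name}: RT60 = {rt60:.2f} s")

Raising the average absorption coefficient from 0.05 to 0.9 shortens the estimated RT60 from roughly 2 s to roughly 0.1 s, which illustrates how strongly absorptive treatment shortens reverberation.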
Acoustic foam
[ "Physics", "Chemistry" ]
600
[ "Foams", "Classical mechanics", "Acoustics" ]
3,609,527
https://en.wikipedia.org/wiki/Lomer%E2%80%93Cottrell%20junction
In materials science, a Lomer–Cottrell junction is a particular configuration of dislocations that forms when two perfect dislocations interact on intersecting slip planes in a crystalline material.

Formation mechanism

When two perfect dislocations encounter one another on intersecting slip planes, each perfect dislocation can split into two Shockley partial dislocations: a leading dislocation and a trailing dislocation. When the two leading Shockley partials combine, they form a separate dislocation with a Burgers vector that lies in neither slip plane. This is the Lomer–Cottrell dislocation. It is sessile and immobile in the slip plane, acting as a barrier against other dislocations in the plane. The trailing dislocations pile up behind the Lomer–Cottrell dislocation, and an ever greater force is required to push additional dislocations into the pile-up.

Example in FCC crystals

For an FCC crystal with slip planes of the form {111}, consider the following reactions, in which each perfect dislocation splits into a leading and a trailing partial:

Dissociation of the perfect dislocations:
(a/2)[-110] → (a/6)[-12-1] + (a/6)[-211]   on the (111) plane
(a/2)[101]  → (a/6)[2-11]  + (a/6)[112]    on the (11-1) plane

Combination of the leading partial dislocations:
(a/6)[-12-1] + (a/6)[2-11] → (a/6)[110]

The resulting (a/6)⟨110⟩ "stair-rod" dislocation lies along the line of intersection of the two {111} planes, and its glide plane is of {001} type, which is not a slip plane at room temperature in FCC materials. This configuration accounts for the immobility of the Lomer–Cottrell junction.

Significance

The sessile nature of the Lomer–Cottrell dislocation forms a strong barrier to further dislocation motion. Trailing dislocations pile up behind this junction, leading to an increase in the stress required to sustain deformation. This mechanism is a key contributor to work hardening in ductile materials like aluminum and copper.

References

Crystallographic defects
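A quick check with Frank's energy criterion (the b² rule), using the example reaction above, confirms that forming the stair-rod dislocation is energetically favorable:

    \left|\mathbf{b}_1\right|^2 + \left|\mathbf{b}_2\right|^2
      = \frac{a^2}{36}(1+4+1) + \frac{a^2}{36}(4+1+1)
      = \frac{a^2}{3}
      \;>\;
      \left|\mathbf{b}_3\right|^2 = \frac{a^2}{36}(1+1+0) = \frac{a^2}{18}

Since the elastic energy (proportional to b²) drops by a factor of six, the reaction releases energy and the junction forms spontaneously; being sessile, it then resists the glide of subsequent dislocations.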
Lomer–Cottrell junction
[ "Chemistry", "Materials_science", "Engineering" ]
364
[ "Materials science stubs", "Crystallographic defects", "Materials science", "Crystallography stubs", "Crystallography", "Materials degradation" ]
3,609,638
https://en.wikipedia.org/wiki/Standing%20frame
A standing frame (also known as a stand, stander, standing technology, standing aid, standing device, standing box, or tilt table) is assistive technology that can be used by a person who relies on a wheelchair for mobility. A standing frame provides alternative positioning to sitting in a wheelchair by supporting the person in the standing position.

Types and function

Common types of standers include sit-to-stand, prone, supine, upright and multi-positioning standers, as well as standing wheelchairs. Long leg braces are also standing devices, but they are not often used today.

Categories of standers

By mobility

Passive (static) stander: A passive stander remains in one place. They sometimes have casters, but cannot be self-propelled.
Mobile (dynamic) stander: The user can self-propel a mobile stander if they have the strength to push a manual wheelchair. Some standers are also available with powered mobility.
Active stander: An active stander creates reciprocal movement of the arms and legs while standing.

By type

Prone: Prone standers distribute the body weight to the front of the individual and usually have a tray in front of them. This makes them good for users who are actively trying to carry out some task.
Supine: Supine standers distribute the body weight to the back and are good for cases where the user has more limited mobility or is recovering from injury.
Sit-to-stand: Sit-to-stand devices enable people in wheelchairs to stand without requiring other people to lift them. They are more commonly used by adults than children, because children are generally light enough for one person to lift.

Diagnoses and users

Standers are used by people with mild to severe disabilities such as spinal cord injury, traumatic brain injury, cerebral palsy, spina bifida, muscular dystrophy, multiple sclerosis, stroke, Rett syndrome, and post-polio syndrome.

Spinal cord injury

Standers are used by people with both paraplegia and quadriplegia, since a variety of support options are available to accommodate mild to severe disabilities. Doug Betters and Mike Utley are both former NFL football players who are quadriplegic due to spinal cord injury; both stand using active standers.

Bone mineral loss and osteoporosis are common consequences of spinal cord injury. Therapeutic standing, a weight-bearing intervention that can be applied using a standing frame, has traditionally been incorporated into rehabilitation programs for those with chronic spinal cord injury in order to prevent osteoporosis. A systematic review of the literature conducted by Biering-Sørensen et al. (2009) shows that therapeutic standing in the chronic phase of injury, defined as one year after injury, has no effect on maintaining bone density. Results on the effectiveness of therapeutic standing during the first year of injury are conflicting and show that shorter, less aggressive intervention is less effective. If therapeutic standing is to be incorporated into treatment, it should be more aggressive and initiated in the early stages of injury if any beneficial impact on bone mineral density is to be achieved.
Common settings and applications

Standing devices are used in a variety of settings, including: the home and workplace; early intervention centers; schools (special education classes or the inclusive classroom) and adapted physical education classes; children's hospitals and therapy centers; rehabilitation facilities and hospitals; extended care units, nursing homes, assisted living centers and group homes; and veterans' hospitals.

Obtaining a standing frame

Funding (government funding or health insurance) for standing equipment is achievable in most developed countries, but it usually requires medical justification and a letter of medical necessity (a detailed medical prescription) written by a physical therapist or medical professional.

Sources

AbleData factsheet on standing aids

External links

National Registry of Rehabilitation Technology Suppliers - nrrts.org
Standing and gait therapy can be a winning combination for SCI patients

Assistive technology
Mobility devices
Accessibility
Medical equipment
Standing frame
[ "Engineering", "Biology" ]
797
[ "Accessibility", "Design", "Medical equipment", "Medical technology" ]
3,610,203
https://en.wikipedia.org/wiki/Cdc25
Cdc25 is a dual-specificity phosphatase first isolated from the yeast Schizosaccharomyces pombe as a cell-cycle-defective mutant. As with other cell cycle proteins or genes such as Cdc2 and Cdc4, the "cdc" in its name refers to "cell division cycle". Dual-specificity phosphatases are considered a sub-class of protein tyrosine phosphatases. By removing inhibitory phosphate residues from target cyclin-dependent kinases (Cdks), Cdc25 proteins control entry into and progression through various phases of the cell cycle, including mitosis and S ("synthesis") phase.

Function in activating Cdk1

Cdc25 activates cyclin-dependent kinases by removing phosphate from residues in the Cdk active site. In turn, phosphorylation by M-Cdk (a complex of Cdk1 and cyclin B) activates Cdc25. Together with Wee1, this makes M-Cdk activation switch-like. The switch-like behavior forces entry into mitosis to be quick and irreversible. Cdk activity can be reactivated after dephosphorylation by Cdc25. The Cdc25 enzymes Cdc25A–C are known to control the transitions from G1 to S phase and from G2 to M phase.

Structure

The structure of Cdc25 proteins can be divided into two main regions: the N-terminal region, which is highly divergent and contains sites for phosphorylation and ubiquitination, which regulate the phosphatase activity; and the C-terminal region, which is highly homologous and contains the catalytic site.

Evolution and species distribution

Cdc25 enzymes are well conserved through evolution, and have been isolated from fungi such as yeasts as well as from all metazoans examined to date, including humans. The exception among eukaryotes may be plants, as the purported plant Cdc25s have characteristics (such as the use of cations for catalysis) that are more akin to serine/threonine phosphatases than dual-specificity phosphatases, raising doubts as to their authenticity as Cdc25 phosphatases. The Cdc25 family appears to have expanded in relation to the complexity of the cell cycle and life cycle of higher animals. Yeasts have a single Cdc25 (as well as a distantly related enzyme known as Itsy-bitsy phosphatase 1, or Ibp1). Drosophila melanogaster has two Cdc25s, known as string and twine, which control mitosis and meiosis, respectively. Most other model organisms examined have three Cdc25s, designated Cdc25A, Cdc25B, and Cdc25C. An exception is the nematode Caenorhabditis elegans, which has four distinct Cdc25 genes (cdc-25.1 to cdc-25.4).

Knockout models

Although the highly conserved nature of the Cdc25s implies an important role in cell physiology, Cdc25B and Cdc25C knockout mice (both single and double mutants) are viable and display no major alterations in their cell cycles, suggesting some functional compensation either via other Cdk regulatory enzymes (such as Wee1 and Myt1) or from the activity of the third member of the family, Cdc25A. Hiroaki Kiyokawa's laboratory has shown that Cdc25A knockout mice are not viable.

In human disease

The Cdc25s, and in particular Cdc25A and Cdc25B, are proto-oncogenes in humans and have been shown to be overexpressed in a number of cancers. The central role of Cdc25s in the cell cycle has garnered them considerable attention from the pharmaceutical industry as potential targets for novel chemotherapeutic (anti-cancer) agents. To date, no clinically viable compounds targeting these enzymes have been described.
A large number of potent small-molecule Cdc25 inhibitors have been identified that bind to the active site; they belong to various chemical classes, including natural products, lipophilic acids, quinonoids, electrophiles, sulfonylated aminothiazoles and phosphate bioisosteres. Although some progress has been made in developing potent and selective inhibitors for the Cdc25 family of proteins, there is scope for the development of novel therapeutic strategies to target them. A new class of peptide-derived inhibitors, based on sequence homology with the protein substrate, could be developed, although it is challenging to use such compounds as drugs due to their lack of suitable ADME properties.

See also

Cyclin

References

External links

Human Cdc25A at PDB

Genes
Enzymes
Cell cycle
Fungal proteins
Cdc25
[ "Biology" ]
978
[ "Cell cycle", "Cellular processes" ]
3,610,236
https://en.wikipedia.org/wiki/Foster%27s%20reactance%20theorem
Foster's reactance theorem is an important theorem in the fields of electrical network analysis and synthesis. The theorem states that the reactance of a passive, lossless two-terminal (one-port) network always strictly monotonically increases with frequency. It is easily seen that the reactances of inductors and capacitors individually increase or decrease with frequency, respectively, and from that basis a proof for passive lossless networks generally can be constructed. The proof of the theorem was presented by Ronald Martin Foster in 1924, although the principle had been published earlier by Foster's colleagues at American Telephone & Telegraph.

The theorem can be extended to admittances and the encompassing concept of immittances. A consequence of Foster's theorem is that zeros and poles of the reactance must alternate with frequency. Foster used this property to develop two canonical forms for realising these networks. Foster's work was an important starting point for the development of network synthesis.

It is possible to construct non-Foster networks using active components such as amplifiers. These can generate an impedance equivalent to a negative inductance or capacitance. The negative impedance converter is an example of such a circuit.

Explanation

Reactance is the imaginary part of the complex electrical impedance. Both capacitors and inductors possess reactance (but of opposite sign), and it is frequency dependent. The specification that the network must be passive and lossless implies that there are no resistors (lossless), or amplifiers or energy sources (passive) in the network. The network consequently must consist entirely of inductors and capacitors, and the impedance will be purely an imaginary number with zero real part.

Foster's theorem applies equally to the admittance of a network: the susceptance (imaginary part of admittance) of a passive, lossless one-port monotonically increases with frequency. This result may seem counterintuitive since admittance is the reciprocal of impedance, but it is easily proved. If the impedance is

Z = iX

where X is the reactance and i is the imaginary unit, then the admittance is given by

Y = 1/(iX) = −i/X = iB

where B = −1/X is the susceptance. If X is monotonically increasing with frequency, then 1/X must be monotonically decreasing. −1/X must consequently be monotonically increasing, and hence it is proved that B is increasing also.

It is often the case in network theory that a principle or procedure applies equally well to impedance or admittance, reflecting the principle of duality for electric networks. It is convenient in these circumstances to use the concept of immittance, which can mean either impedance or admittance. The mathematics is carried out without specifying units until it is desired to calculate a specific example. Foster's theorem can thus be stated in a more general form as:

Foster's theorem (immittance form): The imaginary immittance of a passive, lossless one-port strictly monotonically increases with frequency.

Foster's theorem is quite general. In particular, it applies to distributed-element networks, although Foster formulated it in terms of discrete inductors and capacitors. It is therefore applicable at microwave frequencies just as much as at lower frequencies.

Examples

The following examples illustrate this theorem in a number of simple circuits.

Inductor

The impedance of an inductor is given by

Z = iωL

where L is the inductance and ω is the angular frequency, so the reactance is

X = ωL

which by inspection can be seen to be monotonically (and linearly) increasing with frequency.
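A small numerical check of the duality argument above, taking the inductor as the one-port (the component value is assumed): both its reactance X and its susceptance B = −1/X increase monotonically with frequency.

    import numpy as np

    w = np.linspace(1e2, 1e5, 8)    # angular frequency samples (rad/s)
    L = 1e-3                        # 1 mH inductor (assumed value)
    X = w * L                       # reactance: positive, linear in frequency
    B = -1.0 / X                    # susceptance of the same one-port
    print(np.all(np.diff(X) > 0), np.all(np.diff(B) > 0))   # -> True True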
Capacitor

The impedance of a capacitor is given by

Z = 1/(iωC)

where C is the capacitance, so the reactance is

X = −1/(ωC)

which again is monotonically increasing with frequency. The impedance function of the capacitor is identical to the admittance function of the inductor and vice versa. It is a general result that the dual of any immittance function that obeys Foster's theorem will also follow Foster's theorem.

Series resonant circuit

A series LC circuit has an impedance that is the sum of the impedances of an inductor and capacitor:

Z = iωL + 1/(iωC), that is, X = ωL − 1/(ωC)

At low frequencies the reactance is dominated by the capacitor and so is large and negative. This monotonically increases towards zero (the magnitude of the capacitor reactance becoming smaller). The reactance passes through zero at the point where the magnitudes of the capacitor and inductor reactances are equal (the resonant frequency) and then continues to monotonically increase as the inductor reactance becomes progressively dominant.

Parallel resonant circuit

A parallel LC circuit is the dual of the series circuit, and hence its admittance function has the same form as the impedance function of the series circuit:

Y = iωC + 1/(iωL)

The impedance function is

Z = iωL/(1 − ω²LC), that is, X = ωL/(1 − ω²LC)

At low frequencies the reactance is dominated by the inductor and is small and positive. This monotonically increases towards a pole at the anti-resonant frequency, where the susceptances of the inductor and capacitor are equal and opposite and cancel. Past the pole the reactance is large and negative and increases towards zero, where it is dominated by the capacitance.

Zeros and poles

A consequence of Foster's theorem is that the zeros and poles of any passive immittance function must alternate as frequency increases. After passing through a pole the function will be negative and is obliged to pass through zero before reaching the next pole if it is to be monotonically increasing.

The poles and zeroes of an immittance function completely determine the frequency characteristics of a Foster network. Two Foster networks that have identical poles and zeroes will be equivalent circuits in the sense that their immittance functions will be identical. There can be a scaling factor difference between them (all elements of the immittance multiplied by the same scaling factor), but the shape of the two immittance functions will be identical.

Another consequence of Foster's theorem is that the phase of an immittance must monotonically increase with frequency. Consequently, the plot of a Foster immittance function on a Smith chart must always travel around the chart in a clockwise direction with increasing frequency.

Realisation

A one-port passive immittance consisting of discrete elements (that is, not distributed elements) can be represented as a rational function of s,

Z(s) = P(s)/Q(s)

where Z(s) is the immittance, P(s) and Q(s) are polynomials with real, positive coefficients, and s is the Laplace transform variable, which can be replaced with iω when dealing with steady-state AC signals. This follows from the fact that the impedances of L and C elements are themselves simple rational functions and any algebraic combination of rational functions results in another rational function. This is sometimes referred to as the driving point impedance because it is the impedance at the place in the network at which the external circuit is connected and "drives" it with a signal. In his paper, Foster describes how such a lossless rational function may be realised (if it can be realised) in two ways. Foster's first form consists of a number of series-connected parallel LC circuits.
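Foster's first form can be read off from a partial-fraction expansion of the driving point impedance. A short symbolic sketch (the example impedance is assumed, chosen only because it factors neatly; it is not from Foster's paper):

    import sympy as sp

    s = sp.symbols('s')
    Z = (s**2 + 1) * (s**2 + 9) / (s * (s**2 + 4))   # example lossless immittance

    print(sp.apart(Z, s))
    # expected: s + 9/(4*s) + 15*s/(4*(s**2 + 4))  (term order may vary)
    # Read as Foster's first form: a 1 H series inductor, a 4/9 F series
    # capacitor, and a series-connected parallel LC tank with C = 4/15 F
    # and L = 15/16 H (anti-resonant at omega = 2).

Each term of the expansion maps to one series element or tank, which is why the poles and zeros determine the network up to a scaling factor, as noted above.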
Foster's second form of driving point impedance consists of a number of parallel connected series LC circuits. The realisation of the driving point impedance is by no means unique. Foster's realisation has the advantage that the poles and/or zeroes are directly associated with a particular resonant circuit, but there are many other realisations. Perhaps the most well known is Wilhelm Cauer's ladder realisation from filter design. Non-Foster networks A Foster network must be passive, so an active network, containing a power source, may not obey Foster's theorem. These are called non-Foster networks. In particular, circuits containing an amplifier with positive feedback can have reactance which declines with frequency. For example, it is possible to create negative capacitance and inductance with negative impedance converter circuits. These circuits will have an immittance function with a phase of ±π/2 like a positive reactance but a reactance amplitude with a negative slope against frequency. These are of interest because they can accomplish tasks a Foster network cannot. For example, the usual passive Foster impedance matching networks can only match the impedance of an antenna with a transmission line at discrete frequencies, which limits the bandwidth of the antenna. A non-Foster network could match an antenna over a continuous band of frequencies. This would allow the creation of compact antennas that have wide bandwidth, violating the Chu-Harrington limit. Practical non-Foster networks are an active area of research. History The theorem was developed at American Telephone & Telegraph as part of ongoing investigations into improved filters for telephone multiplexing applications. This work was commercially important; large sums of money could be saved by increasing the number of telephone conversations that could be carried on one line. The theorem was first published by Campbell in 1922 but without a proof. Great use was immediately made of the theorem in filter design, it appears prominently, along with a proof, in Zobel's landmark paper of 1923 which summarised the state of the art of filter design at that time. Foster published his paper the following year which included his canonical realisation forms. Cauer in Germany grasped the importance of Foster's work and used it as the foundation of network synthesis. Amongst Cauer's many innovations was the extension of Foster's work to all 2-element-kind networks after discovering an isomorphism between them. Cauer was interested in finding the necessary and sufficient condition for realisability of a rational one-port network from its polynomial function, a condition now known to be a positive-real function, and the reverse problem of which networks were equivalent, that is, had the same polynomial function. Both of these were important problems in network theory and filter design. Foster networks are only a subset of realisable networks, References Bibliography Foster, R. M., " A reactance theorem", Bell System Technical Journal, vol.3, no. 2, pp. 259–267, November 1924. Campbell, G. A., " Physical theory of the electric wave filter", Bell System Technical Journal, vol.1, no. 2, pp. 1–32, November 1922. Zobel, O. J.," Theory and Design of Uniform and Composite Electric Wave Filters", Bell System Technical Journal, vol.2, no. 1, pp. 1–46, January 1923. Matthew M. Radmanesh, RF & Microwave Design Essentials, AuthorHouse, 2007 . James T. 
Aberle, Robert Loepsinger-Romak, Antennas with non-Foster matching networks, Morgan & Claypool Publishers, 2007 . Colin Cherry, Pulses and Transients in Communication Circuits, Taylor & Francis, 1950. K. C. A. Smith, R. E. Alley, Electrical circuits: an introduction, Cambridge University Press, 1992 . Carol Gray Montgomery, Robert Henry Dicke, Edward M. Purcell, Principles of microwave circuits, IET, 1987 . E. Cauer, W. Mathis, and R. Pauli, " Life and Work of Wilhelm Cauer (1900–1945)", Proceedings of the Fourteenth International Symposium of Mathematical Theory of Networks and Systems (MTNS2000), Perpignan, June, 2000. Retrieved 19 September 2008. Bray, J, Innovation and the Communications Revolution, Institute of Electrical Engineers, 2002 . Circuit theorems
Foster's reactance theorem
[ "Physics" ]
2,330
[ "Equations of physics", "Circuit theorems", "Physics theorems" ]
3,610,312
https://en.wikipedia.org/wiki/Graphite%20intercalation%20compound
In the area of solid state chemistry, graphite intercalation compounds are a family of materials prepared from graphite. In particular, the sheets of carbon that comprise graphite can be pried apart by the insertion (intercalation) of ions. The graphite is viewed as a host and the inserted ions as guests. The materials have the formula where n ≥ 6. The insertion of the guests increases the distance between the carbon sheets. Common guests are reducing agents such as alkali metals. Strong oxidants also intercalate into graphite. Intercalation involves electron transfer into or out of the carbon sheets. So, in some sense, graphite intercalation compounds are salts. Intercalation is often reversible: the inserted ions can be removed and the sheets of carbon collapse to a graphite-like structure. The properties of graphite intercalation compounds differ from those of the parent graphite. Preparation and structure These materials are prepared by treating graphite with a strong oxidant or a strong reducing agent: The reaction is reversible. The host (graphite) and the guest X interact by charge transfer. An analogous process is the basis of commercial lithium-ion batteries. In a graphite intercalation compound not every layer is necessarily occupied by guests. In so-called stage 1 compounds, graphite layers and intercalated layers alternate and in stage 2 compounds, two graphite layers with no guest material in between alternate with an intercalated layer. The actual composition may vary and therefore these compounds are an example of non-stoichiometric compounds. It is customary to specify the composition together with the stage. The layers are pushed apart upon incorporation of the guest ions. Examples Alkali and alkaline earth derivatives One of the best studied graphite intercalation compounds, , is prepared by melting potassium over graphite powder. The potassium is absorbed into the graphite and the material changes color from black to bronze. The resulting solid is pyrophoric. The composition is explained by assuming that the potassium to potassium distance is twice the distance between hexagons in the carbon framework. The bond between anionic graphite layers and potassium cations is ionic. The electrical conductivity of the material is greater than that of α-graphite. is a superconductor with a very low critical temperature Tc = 0.14 K. Heating leads to the formation of a series of decomposition products as the K atoms are eliminated: Via the intermediates (blue in color), , , ultimately the compound results. The stoichiometry is observed for M = K, Rb and Cs. For smaller ions M = , , , , , and , the limiting stoichiometry is . Calcium graphite is obtained by immersing highly oriented pyrolytic graphite in liquid Li–Ca alloy for 10 days at 350 °C. The crystal structure of belongs to the Rm space group. The graphite interlayer distance increases upon Ca intercalation from 3.35 to 4.524 Å, and the carbon-carbon distance increases from 1.42 to 1.444 Å. With barium and ammonia, the cations are solvated, giving the stoichiometry ((stage 1)) or those with caesium, hydrogen and potassium ((stage 1)). In situ adsorption on free-standing graphene and intercalation in bilayer graphene of the alkali metals K, Cs, and Li was observed by means of low-energy electron microscopy. Different from other alkali metals, the amount of Na intercalation is very small. 
Quantum-mechanical calculations show that this originates from a quite general phenomenon: among the alkali and alkaline earth metals, Na and Mg generally have the weakest chemical binding to a given substrate, compared with the other elements in the same group of the periodic table. The phenomenon arises from the competition between trends in the ionization energy and the ion–substrate coupling, down the columns of the periodic table. However, considerable Na intercalation into graphite can occur in cases when the ion is wrapped in a solvent shell through the process of co-intercalation. A complex magnesium(I) species has also been intercalated into graphite. Graphite bisulfate, perchlorate, hexafluoroarsenate: oxidized carbons The intercalation compounds graphite bisulfate and graphite perchlorate can be prepared by treating graphite with strong oxidizing agents in the presence of strong acids. In contrast to the potassium and calcium graphites, the carbon layers are oxidized in this process: 48 C + 0. 5 [O ]+ 3 H2SO4 → [C24]+[HSO4]−·2H2SO4 + 0.5 H2O In graphite perchlorate, planar layers of carbon atoms are 794 picometers apart, separated by ions. Cathodic reduction of graphite perchlorate is analogous to heating , which leads to a sequential elimination of . Both graphite bisulfate and graphite perchlorate are better conductors as compared to graphite, as predicted by using a positive-hole mechanism. Reaction of graphite with affords the salt . Metal halide derivatives A number of metal halides intercalate into graphite. The chloride derivatives have been most extensively studied. Examples include (M = Zn, Ni, Cu, Mn), (M = Al, Fe, Ga), (M = Zr, Pt), etc. The materials consists of layers of close-packed metal halide layers between sheets of carbon. The derivative exhibits spin glass behavior. It proved to be a particularly fertile system on which to study phase transitions. A stage n magnetic graphite intercalation compounds has n graphite layers separating successive magnetic layers. As the stage number increases the interaction between spins in successive magnetic layers becomes weaker and 2D magnetic behaviour may arise. Halogen- and oxide-graphite compounds Chlorine and bromine reversibly intercalate into graphite. Iodine does not. Fluorine reacts irreversibly. In the case of bromine, the following stoichiometries are known: for n = 8, 12, 14, 16, 20, and 28. Because it forms irreversibly, carbon monofluoride is often not classified as an intercalation compound. It has the formula . It is prepared by reaction of gaseous fluorine with graphitic carbon at 215–230 °C. The color is greyish, white, or yellow. The bond between the carbon and fluorine atoms is covalent. Tetracarbon monofluoride () is prepared by treating graphite with a mixture of fluorine and hydrogen fluoride at room temperature. The compound has a blackish-blue color. Carbon monofluoride is not electrically conductive. It has been studied as a cathode material in one type of primary (non-rechargeable) lithium batteries. Graphite oxide is an unstable yellow solid. Properties and applications Graphite intercalation compounds have fascinated materials scientists for many years owing to their diverse electronic and electrical properties. Superconductivity Among the superconducting graphite intercalation compounds, exhibits the highest critical temperature Tc = 11.5 K, which further increases under applied pressure (15.1 K at 8 GPa). 
Superconductivity in these compounds is thought to be related to the role of an interlayer state, a free electron like band lying roughly above the Fermi level; superconductivity only occurs if the interlayer state is occupied. Analysis of pure using a high quality ultraviolet light revealed to conduct angle-resolved photoemission spectroscopy measurements. The opening of a superconducting gap in the π* band revealed a substantial contribution to the total electron–phonon-coupling strength from the π*-interlayer interband interaction. Reagents in chemical synthesis: The bronze-colored material is one of the strongest reducing agents known. It has also been used as a catalyst in polymerizations and as a coupling reagent for aryl halides to biphenyls. In one study, freshly prepared was treated with 1-iodododecane delivering a modification (micrometre scale carbon platelets with long alkyl chains sticking out providing solubility) that is soluble in chloroform. Another potassium graphite compound, , has been used as a neutron monochromator. A new essential application for potassium graphite was introduced by the invention of the potassium-ion battery. Like the lithium-ion battery, the potassium-ion battery should use a carbon-based anode instead of a metallic anode. In this circumstance, the stable structure of potassium graphite is an important advantage. See also Buckminsterfullerene intercalates Covalent superconductors Magnesium diboride, which uses hexagonal planar boron sheets instead of carbon Pyrolytic graphite References Further reading (187 pages), also reprinted as Inorganic carbon compounds Supramolecular chemistry
Graphite intercalation compound
[ "Chemistry", "Materials_science" ]
1,892
[ "Inorganic compounds", "Inorganic carbon compounds", "nan", "Nanotechnology", "Supramolecular chemistry" ]
3,610,887
https://en.wikipedia.org/wiki/Hammar%20experiment
The Hammar experiment was an experiment designed and conducted by Gustaf Wilhelm Hammar (1935) to test the aether drag hypothesis. Its negative result refuted some specific aether drag models, and confirmed special relativity. Overview Experiments such as the Michelson–Morley experiment of 1887 (and later other experiments such as the Trouton–Noble experiment in 1903 or the Trouton–Rankine experiment in 1908), presented evidence against the theory of a medium for light propagation known as the luminiferous aether; a theory that had been an established part of science for nearly one hundred years at the time. These results cast doubts on what was then a very central assumption of modern science, and later led to the development of special relativity. In an attempt to explain the results of the Michelson–Morley experiment in the context of the assumed medium, aether, many new hypotheses were examined. One of the proposals was that instead of passing through a static and unmoving aether, massive objects like the Earth may drag some of the aether along with them, making it impossible to detect a "wind". Oliver Lodge (1893–1897) was one of the first to perform a test of this theory by using rotating and massive lead blocks in an experiment that attempted to cause an asymmetrical aether wind. His tests yielded no appreciable results differing from previous tests for the aether wind. In the 1920s, Dayton Miller conducted repetitions of the Michelson–Morley experiments. He ultimately constructed an apparatus in such a way as to minimize the mass along the path of the experiment, conducting it at the peak of a tall hill in a building that was made of lightweight materials. He produced measurements showing a diurnal variance, suggesting detection of the "wind", which he ascribed to the lack of mass making while previous experiments were carried out with considerable mass around their apparatus. The experiment To test Miller's assertion, Hammar conducted the following experiment using a common-path interferometer in 1935. Using a half-silvered mirror A, he divided a ray of white light into two half-rays. One half-ray was sent in the transverse direction into a heavy walled steel pipe terminated with lead plugs. In this pipe, the ray was reflected by mirror D and sent into the longitudinal direction to another mirror C at the other end of the pipe. There it was reflected and sent in the transverse direction to a mirror B outside of the pipe. From B it traveled back to A in the longitudinal direction. The other half-ray traversed the same path in the opposite direction. The topology of the light path was that of a Sagnac interferometer with an odd number of reflections. Sagnac interferometers offer excellent contrast and fringe stability, and the configuration with an odd number of reflections is only slightly less stable than the configuration with an even number of reflections. (With an odd number of reflections, the oppositely traveling beams are laterally inverted with respect to each other over most of the light path, so that the topology deviates slightly from strict common path.) The relative immunity of his apparatus to vibration, mechanical stress and temperature effects, allowed Hammar to detect fringe displacements as little as 1/10 of a fringe, despite using the interferometer outdoors in an open environment with no temperature control. Similar to Lodge's experiment, Hammar's apparatus should have caused an asymmetry in any proposed aether wind. 
Hammar's expectation of the results was that: With the apparatus aligned perpendicular to the aether wind, both long arms would be equally affected by aether entrainment. With the apparatus aligned parallel to the aether wind, one arm would be more affected by aether entrainment than the other. The following expected propagation times for the counter-propagating rays were given by Robertson/Noonan: where is the velocity of the entrained aether. This gives an expected time difference: On September 1, 1934, Hammar set up the apparatus on top of a high hill two miles south of Moscow, Idaho, and made many observations with the apparatus turned in all directions of the azimuth during the daylight hours of September 1, 2, and 3. He saw no shift of the interference fringes, corresponding to an upper limit of km/s. These results are considered a proof against the aether drag hypothesis as it was proposed by Miller. Consequences for Aether drag hypothesis Because differing ideas of "aether drag" existed, the interpretation of all aether drag experiments can be done in the context of each version of the hypothesis. None or partial entrainment by any object with mass. This was discussed by scientists such as Augustin-Jean Fresnel and François Arago. It was refuted by the Michelson–Morley experiment. Complete entrainment within or in the vicinity of all masses. It was refuted by the Aberration of light, Sagnac effect, Oliver Lodge's experiments, and Hammar's experiment. Complete entrainment within or in the vicinity of only very large masses such as Earth. It was refuted by the Aberration of light, Michelson–Gale–Pearson experiment. References Physics experiments Aether theories
Hammar experiment
[ "Physics" ]
1,075
[ "Experimental physics", "Physics experiments" ]
3,611,293
https://en.wikipedia.org/wiki/Gas-phase%20ion%20chemistry
Gas phase ion chemistry is a field of science encompassed within both chemistry and physics. It is the science that studies ions and molecules in the gas phase, most often enabled by some form of mass spectrometry. By far the most important applications for this science is in studying the thermodynamics and kinetics of reactions. For example, one application is in studying the thermodynamics of the solvation of ions. Ions with small solvation spheres of 1, 2, 3... solvent molecules can be studied in the gas phase and then extrapolated to bulk solution. Theory Transition state theory Transition state theory is the theory of the rates of elementary reactions which assumes a special type of chemical equilibrium (quasi-equilibrium) between reactants and activated complexes. RRKM theory RRKM theory is used to compute simple estimates of the unimolecular ion decomposition reaction rates from a few characteristics of the potential energy surface. Gas phase ion formation The process of converting an atom or molecule into an ion by adding or removing charged particles such as electrons or other ions can occur in the gas phase. These processes are an important component of gas phase ion chemistry. Associative ionization Associative ionization is a gas phase reaction in which two atoms or molecules interact to form a single product ion. where species A with excess internal energy (indicated by the asterisk) interacts with B to form the ion AB+. One or both of the interacting species may have excess internal energy. Charge-exchange ionization Charge-exchange ionization (also called charge-transfer ionization) is a gas phase reaction between an ion and a neutral species in which the charge of the ion is transferred to the neutral. Chemical ionization In chemical ionization, ions are produced through the reaction of ions of a reagent gas with other species. Some common reagent gases include: methane, ammonia, and isobutane. Chemi-ionization Chemi-ionization can be represented by where G is the excited state species (indicated by the superscripted asterisk), and M is the species that is ionized by the loss of an electron to form the radical cation (indicated by the superscripted "plus-dot"). Penning ionization Penning ionization refers to the interaction between a gas-phase excited-state atom or molecule G* and a target molecule M resulting in the formation of a radical molecular cation M+., an electron e−, and a neutral gas molecule G: Penning ionization occurs when the target molecule has an ionization potential lower than the internal energy of the excited-state atom or molecule. Associative Penning ionization can also occur: Fragmentation There are many important dissociation reactions that take place in the gas phase. Collision-induced dissociation CID (also called collisionally activated dissociation - CAD) is a method used to fragment molecular ions in the gas phase. The molecular ions collide with neutral gas molecules such as helium, nitrogen, or argon. In the collision some of the kinetic energy is converted into internal energy which results in fragmentation. Charge remote fragmentation Charge remote fragmentation is a type of covalent bond breaking that occurs in a gas phase ion in which the cleaved bond is not adjacent to the location of the charge. 
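The transition state theory introduced under Theory above is commonly applied through the Eyring equation. The sketch below is a generic illustration, not a calculation from this article; the activation free energy and temperature are assumed values.

```python
import math

# Generic transition-state-theory (Eyring) rate estimate.
k_B = 1.380649e-23      # Boltzmann constant, J/K
h   = 6.62607015e-34    # Planck constant, J*s
R   = 8.314462618       # gas constant, J/(mol*K)

def eyring_rate(delta_G_activation, T, kappa=1.0):
    """k = kappa * (k_B*T/h) * exp(-dG_act / (R*T)), with dG_act in J/mol."""
    return kappa * (k_B * T / h) * math.exp(-delta_G_activation / (R * T))

print(f"{eyring_rate(80e3, 298.15):.3g} s^-1")   # assumed ~80 kJ/mol barrier at 298 K
```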
Charge transfer reactions There are several types of charge-transfer reactions (also known as charge-permutation reactions): partial-charge transfer , charge-stripping reaction , and charge-inversion reaction positive to negative and negative to positive . Applications Pairwise interactions between alkali metal ions and amino acids, small peptides and nucleobases have been studied theoretically in some detail. See also Adiabatic ionization Mass-analyzed ion kinetic energy spectrometry Plasma (physics) Michael T. Bowers R. Graham Cooks Helmut Schwarz References Bibliography Fundamentals of gas phase ion chemistry, Keith R. Jennings (ed.), Dordrecht, Boston, Kluwer Academic, 1991, pp. 226–8 Gas Phase Ion Chemistry, Michael T. Bowers, ed., Academic Press, New York, 1979 Gas Phase Ion Chemistry Vol 2.; Bowers, M.T., Ed.; Academic Press: New York, 1979 Gas Phase Ion Chemistry Vol 3., Michael T. Bowers, ed., Academic Press, New York, 1983 External links http://webbook.nist.gov/chemistry/ion/ Mass spectrometry
Gas-phase ion chemistry
[ "Physics", "Chemistry" ]
923
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
3,612,067
https://en.wikipedia.org/wiki/Invader%20potential
In ecology, invader potential is a qualitative and quantitative measure of a given invasive species' probability of invading a given ecosystem. It is often assessed through climate matching. There are many reasons why a species may invade a new area. The term invader potential is also used interchangeably with invasiveness. Species invasions are a major threat to global biodiversity: it has been shown that ecosystem function is lost when species are introduced to areas where they are not native. Invaders are species that, through biomass, abundance, and strong interactions with native species, have significantly altered the structure and composition of the established community. This differs greatly from the term "introduced", which merely refers to species that have been brought into an environment, regardless of whether they have become successfully established. Introduced species are simply organisms that have been accidentally, or deliberately, placed into an unfamiliar area. Often, in fact, such species do not have a strong impact on the new habitat, for a variety of reasons; either the newcomers are not abundant, or they are small and unobtrusive. Understanding the mechanisms behind invader potential is important for understanding why species relocate and for predicting future invasions. Three reasons are commonly proposed for why species invade an area: adaptation to the physical environment, resource competition and/or utilization, and enemy release. Some of these reasons are relatively simple to understand. For example, species may adapt to a new physical environment through high phenotypic plasticity and broad environmental tolerance; species with these traits find it easier to establish in new environments. In terms of resources, species with low resource requirements thrive in unfamiliar areas more readily than those with complex resource needs, as shown directly by Tilman's R* rule: those with lower requirements can competitively exclude those with more demanding needs and take over an area. Finally, species with a high reproduction rate and low investment in defence against natural enemies have a better chance of invading other areas, since those enemies are often left behind (enemy release). All of these are reasons why species may thrive in places where they are not native, owing to flexibility in their requirements. Climate matching Climate matching is a technique used to identify extralimital destinations that an invasive species is likely to overtake, based on their similarity to the species' previous native range. Species are more likely to invade areas whose climate matches that of their origin, where conditions and resources are familiar. Climate matching assesses invasion risk and prioritizes destination-specific action. Boiga irregularis, the brown tree snake, is a well-known example of a species whose invasion success reflects climate matching. This species is native to northern and eastern Australia, eastern Indonesia, Papua New Guinea and most of the Solomon Islands. The brown tree snake was accidentally translocated in ship cargo to Guam, where it is responsible for the loss of the majority of the native bird species. Human-mediated invader potential Humans play a significant role in the ways species invade an area. By changing the habitat, humans can make an invasion easier or more advantageous for an invasive species. As previously mentioned, species are more likely to invade areas in which they hold a competitive advantage. 
As an example, human led shoreline development, specifically in New England, was found to explain over 90% of intermarsh variation. This has boosted nitrogen availability, which can draw in new species. This human made change, among others, was the reason that Phragmites australis invaded the New England salt marshes. In a study by Sillman and Bertness, 22 salt marshes were surveyed for changes following this invasion. This study specifically looked at how human habitat alteration led to the invasion success of this species. Shoreline development, nutrient enrichment, and salinity reduction were all human made changes that contributed to the species ability to invade. Impact and risk assessment It is critical, especially in conservation biology, to have the ability to foresee impacts on ecosystems. For example, the predictions of the identities and ecological impacts of invasive alien species assists in risk assessment. Currently, scientists are lacking the universal and standardized metrics that are reliable enough to predict the likelihood and degree of impact of the specific invaders. Data on the measurable changes in populations of the affected species, for instance, would be especially beneficial. Invader potential is a tool to aid in this dilemma. By understanding the qualitative and quantitative measures of a given invasive species probability to invade a given ecosystem, researchers can hypothesize which species will impact which environments. The addition, or removal, of a species from an ecosystem can cause drastic changes to environmental factors as well as the community's food web. Predicting these inevitable situations can aid in both maintenance and conservation. This is especially advised for emerging and potential future invaders that have no invasion history. Consequences faced by invading species Although the focus is typically on the invading species' adverse impacts on native species, they are also often negatively impacted, as well. The new colonization of a foreign species has proven to lead to introduced species being subject to genetic bottlenecks, random genetic drift, and increased levels of inbreeding. Genetic changes, such as these, can pose a potential threat to allelic diversity. This could lead to genetic differentiation of the introduced population. In addition, invasive organisms face new biotic and abiotic factors. Invasion potential has a great impact on whether or not the invasive organism will survive these biotic or abiotic factors. The species' ability to adapt to the new conditions will contribute to the success of the particular invasion. In the majority of cases, a small subset of introduced species become invaders as a result of rapid changes in the new habitat. In other cases, the species fails to thrive symbiotically with the ecosystem. See also Invasive species Ecological niche Ecological metrics References Conservation biology Invasive species Ecological metrics
Invader potential
[ "Mathematics", "Biology" ]
1,184
[ "Metrics", "Ecological metrics", "Quantity", "Invasive species", "Pests (organism)", "Conservation biology" ]
3,612,259
https://en.wikipedia.org/wiki/Hierarchy%20%28mathematics%29
In mathematics, a hierarchy is a set-theoretical object, consisting of a preorder defined on a set. This is often referred to as an ordered set, though that is an ambiguous term that many authors reserve for partially ordered sets or totally ordered sets. The term pre-ordered set is unambiguous, and is always synonymous with a mathematical hierarchy. The term hierarchy is used to stress a hierarchical relation among the elements. Sometimes, a set comes equipped with a natural hierarchical structure. For example, the set of natural numbers N is equipped with a natural pre-order structure, in which a ≤ b whenever we can find some other number c so that a + c = b. That is, b is bigger than a only because we can get to b from a using c. This idea can be applied to any commutative monoid. On the other hand, the set of integers Z requires a more sophisticated argument for its hierarchical structure, since we can always solve the equation a + c = b by writing c = b − a. A mathematical hierarchy (a pre-ordered set) should not be confused with the more general concept of a hierarchy in the social realm, particularly when one is constructing computational models that are used to describe real-world social, economic or political systems. These hierarchies, or complex networks, are much too rich to be described in the category Set of sets. This is not just a pedantic claim; there are also mathematical hierarchies, in the general sense, that are not describable using set theory. Other natural hierarchies arise in computer science, where the word refers to partially ordered sets whose elements are classes of objects of increasing complexity. In that case, the preorder defining the hierarchy is the class-containment relation. Containment hierarchies are thus special cases of hierarchies. Related terminology Individual elements of a hierarchy are often called levels and a hierarchy is said to be infinite if it has infinitely many distinct levels but said to collapse if it has only finitely many distinct levels. Example In theoretical computer science, the time hierarchy is a classification of decision problems according to the amount of time required to solve them. See also Order theory Nested set collection Lattice Tree related topics: Tree structure Tree (data structure) Tree (graph theory) Tree network Tree (descriptive set theory) Tree (set theory) Effective complexity hierarchies: Polynomial hierarchy Exponential hierarchy Chomsky hierarchy Ineffective complexity hierarchies: Arithmetical hierarchy Hyperarithmetical hierarchy Analytical hierarchy In set theory or logic: Borel hierarchy Difference hierarchy Wadge hierarchy Abstract algebraic hierarchy References Hierarchy Set theory
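The natural-number preorder described above can be written out explicitly. The following sketch is illustrative only and not part of the original article; the function name and the range checked are arbitrary choices.

```python
from itertools import product

# Pre-order on the natural numbers induced by the additive monoid:
# a <= b exactly when some c exists with a + c == b.
def leq(a: int, b: int) -> bool:
    return any(a + c == b for c in range(b + 1))

N = range(10)

# Reflexivity and transitivity -- the two axioms of a preorder.
assert all(leq(a, a) for a in N)
assert all(not (leq(a, b) and leq(b, c)) or leq(a, c)
           for a, b, c in product(N, repeat=3))
```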
Hierarchy (mathematics)
[ "Mathematics" ]
517
[ "Mathematical logic", "Set theory" ]
13,000,492
https://en.wikipedia.org/wiki/Moment%20distribution%20method
The moment distribution method is a structural analysis method for statically indeterminate beams and frames developed by Hardy Cross. It was published in 1930 in an ASCE journal. The method only accounts for flexural effects and ignores axial and shear effects. From the 1930s until computers began to be widely used in the design and analysis of structures, the moment distribution method was the most widely practiced method. Introduction In the moment distribution method, every joint of the structure to be analysed is fixed so as to develop the fixed-end moments. Then each fixed joint is sequentially released and the fixed-end moments (which by the time of release are not in equilibrium) are distributed to adjacent members until equilibrium is achieved. The moment distribution method in mathematical terms can be demonstrated as the process of solving a set of simultaneous equations by means of iteration. The moment distribution method falls into the category of displacement methods of structural analysis. Implementation In order to apply the moment distribution method to analyse a structure, the following things must be considered. Fixed end moments Fixed end moments are the moments produced at member ends by external loads. The calculation is carried out span by span, assuming each support to be fixed and applying the standard fixed-end-moment formulas for the type of load, i.e. a point load (at mid-span or elsewhere), a uniformly distributed load, a uniformly varying load, or an applied couple. Bending stiffness The bending stiffness (EI/L) of a member is represented as the flexural rigidity of the member (product of the modulus of elasticity (E) and the second moment of area (I)) divided by the length (L) of the member. What is needed in the moment distribution method is not the specific values but the ratios of bending stiffnesses between all members. Distribution factors When a joint is being released and begins to rotate under the unbalanced moment, resisting forces develop at each member framed together at the joint. Although the total resistance is equal to the unbalanced moment, the magnitudes of resisting forces developed at each member differ by the members' bending stiffness. Distribution factors can be defined as the proportions of the unbalanced moment carried by each of the members. In mathematical terms, the distribution factor of member k framed at joint j is given as D_jk = (E_k I_k / L_k) / Σ_{i=1..n} (E_i I_i / L_i), where n is the number of members framed at the joint. Carryover factors When a joint is released, a balancing moment occurs to counterbalance the unbalanced moment. The balancing moment is initially the same as the fixed-end moment. This balancing moment is then carried over to the member's other end. The ratio of the carried-over moment at the other end to the fixed-end moment of the initial end is the carryover factor. Determination of carryover factors Let one end (end A) of a fixed beam be released and a moment M_A applied there while the other end (end B) remains fixed. This will cause end A to rotate through an angle θ_A. Once the magnitude of the moment M_B developed at end B is found, the carryover factor of this member is given as the ratio of M_B over M_A: C_AB = M_B / M_A. In the case of a beam of length L with constant cross-section whose flexural rigidity is EI, the moment developed at the fixed end is half the applied moment, M_B = M_A / 2, and therefore the carryover factor is C_AB = 1/2. Sign convention Once a sign convention has been chosen, it has to be maintained for the whole structure. The traditional engineer's sign convention is not used in the calculations of the moment distribution method although the results can be expressed in the conventional way. 
In the BMD case, the left side moment is clockwise direction and other is anticlockwise direction so the bending is positive and is called sagging. Framed structure Framed structure with or without sidesway can be analysed using the moment distribution method. Example The statically indeterminate beam shown in the figure is to be analysed. The beam is considered to be three separate members, AB, BC, and CD, connected by fixed end (moment resisting) joints at B and C. Members AB, BC, CD have the same span . Flexural rigidities are EI, 2EI, EI respectively. Concentrated load of magnitude acts at a distance from the support A. Uniform load of intensity acts on BC. Member CD is loaded at its midspan with a concentrated load of magnitude . In the following calculations, clockwise moments are positive. Fixed end moments Bending stiffness and distribution factors The bending stiffness of members AB, BC and CD are , and , respectively . Therefore, expressing the results in repeating decimal notation: The distribution factors of joints A and D are and . Carryover factors The carryover factors are , except for the carryover factor from D (fixed support) to C which is zero. Moment distribution Numbers in grey are balanced moments; arrows ( → / ← ) represent the carry-over of moment from one end to the other end of a member.* Step 1: As joint A is released, balancing moment of magnitude equal to the fixed end moment develops and is carried-over from joint A to joint B.* Step 2: The unbalanced moment at joint B now is the summation of the fixed end moments , and the carry-over moment from joint A. This unbalanced moment is distributed to members BA and BC in accordance with the distribution factors and . Step 2 ends with carry-over of balanced moment to joint C. Joint A is a roller support which has no rotational restraint, so moment carryover from joint B to joint A is zero.* Step 3: The unbalanced moment at joint C now is the summation of the fixed end moments , and the carryover moment from joint B. As in the previous step, this unbalanced moment is distributed to each member and then carried over to joint D and back to joint B. Joint D is a fixed support and carried-over moments to this joint will not be distributed nor be carried over to joint C.* Step 4: Joint B still has balanced moment which was carried over from joint C in step 3. Joint B is released once again to induce moment distribution and to achieve equilibrium.* Steps 5 - 10: Joints are released and fixed again until every joint has unbalanced moments of size zero or neglectably small in required precision. Arithmetically summing all moments in each respective columns gives the final moment values. Result Moments at joints determined by the moment distribution method The conventional engineer's sign convention is used here, i.e. positive moments cause elongation at the bottom part of a beam member. For comparison purposes, the following are the results generated using a matrix method. Note that in the analysis above, the iterative process was carried to >0.01 precision. The fact that the matrix analysis results and the moment distribution analysis results match to 0.001 precision is mere coincidence. Moments at joints determined by the matrix method Note that the moment distribution method only determines the moments at the joints. Developing complete bending moment diagrams require additional calculations using the determined joint moments and internal section equilibrium. 
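For readers who prefer to see the release-and-carry-over cycle as an algorithm, here is a minimal sketch. It does not reproduce the worked example above: the joint layout, distribution factors and fixed-end moments are assumed values chosen only to show the mechanics of the iteration.

```python
# Minimal sketch of the moment distribution iteration (assumed inputs only).
def distribute(fem, df, carryover, joints, released, tol=1e-6, max_sweeps=100):
    """fem maps (near_joint, far_joint) -> end moment; returns final moments."""
    m = dict(fem)
    for _ in range(max_sweeps):
        worst = 0.0
        for j in released:                        # release each free joint in turn
            unbalanced = sum(m[(j, k)] for k in joints[j])
            worst = max(worst, abs(unbalanced))
            for k in joints[j]:                   # balance in proportion to stiffness
                balance = -df[(j, k)] * unbalanced
                m[(j, k)] += balance
                m[(k, j)] += carryover[(j, k)] * balance   # carry over to the far end
        if worst < tol:                           # all joints balanced -> stop
            break
    return m

# Hypothetical two-span beam A-B-C: joint B free, A and C fixed, equal EI/L in both spans.
joints = {"B": ["A", "C"]}
df = {("B", "A"): 0.5, ("B", "C"): 0.5}
carryover = {("B", "A"): 0.5, ("B", "C"): 0.5}
fem = {("B", "A"): 10.0, ("B", "C"): -20.0,       # assumed fixed-end moments, kN*m
       ("A", "B"): -10.0, ("C", "B"): 20.0}
print(distribute(fem, df, carryover, joints, released=["B"]))
```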
Result via displacements method As the Hardy Cross method provides only approximate results, with a margin of error inversely proportionate to the number of iterations, it is important to have an idea of how accurate this method might be. With this in mind, here is the result obtained by using an exact method: the displacement method For this, the displacements method equation assumes the following form: For the structure described in this example, the stiffness matrix is as follows: The equivalent nodal force vector: Replacing the values presented above in the equation and solving it for leads to the following result: Hence, the moments evaluated in node B are as follows: The moments evaluated in node C are as follows: See also Finite element method Slope deflection method Notes References Structural analysis
Moment distribution method
[ "Engineering" ]
1,557
[ "Structural engineering", "Structural analysis", "Mechanical engineering", "Aerospace engineering" ]
13,000,954
https://en.wikipedia.org/wiki/Australian%20Stem%20Cell%20Centre
The Australian Stem Cell Centre is an Australian medical research and development centre which focuses on regenerative medicine through the use of stem cells. Founded in 2003, the Centre is the National Biotechnology Centre of Excellence and has received over $100 million in funding in recent years. It is Australia's premier stem cell research organisation. In June 2008, the Centre announced that it had begun working on induced pluripotent (iPS) cells (human embryonic stem cells, artificially created without human eggs or embryos). This was the first time in Australia that such research had been carried out, and the first time that scientists had worked on this type of stem cell outside the US or Japan. It is based at Monash Science Technology Research and Innovation Precinct and was founded by nine leading Australian universities and medical research institutes. One of the founders of the Centre is Dr Alan Trounson, a Monash scientist who was part of the team that delivered Australia's first IVF baby in 1980. Trounson has also made several ground-breaking discoveries in stem cell research. In 2000, Trounson led the team of scientists which first reported nerve stem cells derived from embryonic stem cells, which led to a dramatic increase in interest in the potential of stem cell research. See also Australian Regenerative Medicine Institute Health in Australia References External links Official website redirected here in 2014: http://www.stemcellfoundation.net.au/ Australian Stem Cell Centre at Stem Cell Channel Australian Stem Cell Centre Monash University Stem cell research 2003 establishments in Australia Research institutes established in 2003 Research institutes in Australia
Australian Stem Cell Centre
[ "Chemistry", "Biology" ]
329
[ "Translational medicine", "Tissue engineering", "Stem cell research" ]
23,979,335
https://en.wikipedia.org/wiki/Blast%20damper
A blast damper is used to protect occupants and equipment of a structure against overpressures resultant of an explosion. The blast dampers normally protect air inlets and exhaust penetrations in an otherwise hardened structure. Blast dampers are related or identical to blast valves, the latter name is generally used to describe blast mitigation devices as they relate to nuclear explosions. Operation Blast dampers usually employ some type of blade held open with tension from a spring. The damper blades close automatically when pressure overcomes the resistance offered by the spring. Various models differ in the amount of blast protection (e.g. 1 bar/14.5 psi or lower amounts of protection) and whether they stay closed after the blast or remain functional. Design Typical blast dampers are sized to match HVAC ductwork and provide proper airflow with low pressure drop. Common applications include protecting equipment and personnel of control rooms and accommodation modules in petrochemical and industrial process facilities onshore and offshore. Acid- and corrosion-resistant versions are often requested in these instances. See also Explosion vent Fire damper Rupture disc References Explosion protection Heating, ventilation, and air conditioning Safety engineering
Blast damper
[ "Chemistry", "Engineering" ]
235
[ "Systems engineering", "Explosion protection", "Safety engineering", "Combustion engineering", "Explosions" ]
23,979,425
https://en.wikipedia.org/wiki/Torrefaction
Torrefaction of biomass, e.g., wood or grain, is a mild form of pyrolysis at temperatures typically between 200 and 320 °C. Torrefaction changes biomass properties to provide a better fuel quality for combustion and gasification applications. Torrefaction produces a relatively dry product, which reduces or eliminates its potential for organic decomposition. Torrefaction combined with densification creates an energy-dense fuel carrier of 20 to 21 GJ/ton lower heating value (LHV). Torrefaction causes the material to undergo Maillard reactions. Torrefied biomass can be used as an energy carrier or as a feedstock used in the production of bio-based fuels and chemicals. Biomass can be an important energy source. However, there exists a large diversity of potential biomass sources, each with its own unique characteristics. To create efficient biomass-to-energy chains, torrefaction of biomass, combined with densification (pelletisation or briquetting), is a promising step towards overcoming the logistical challenges in developing large-scale sustainable energy solutions, by making it easier to transport and store. Pellets or briquettes have higher density, contain less moisture, and are more stable in storage than the biomass they are derived from. Process Torrefaction is a thermochemical treatment of biomass at . It is carried out under atmospheric pressure and in the absence of oxygen. During the torrefaction process, the water contained in the biomass as well as superfluous volatiles are released, and the biopolymers (cellulose, hemicellulose and lignin) partly decompose, giving off various types of volatiles. The final product is the remaining solid, dry, blackened material that is referred to as torrefied biomass or bio-coal. During the process, the biomass typically loses 20% of its mass (bone dry basis) and 10% of its heating value, with no appreciable change in volume. This energy (the volatiles) can be used as a heating fuel for the torrefaction process. After the biomass is torrefied it can be densified, usually into briquettes or pellets using conventional densification equipment, to increase its mass and energy density and to improve its hydrophobic properties. The final product may repel water and thus can be stored in moist air or rain without appreciable change in moisture content or heating value, unlike the original biomass. The history of torrefaction dates to the beginning of the 19th century, and gasifiers were used on a large scale during the Second World War. Added value of torrefied biomass Torrefied and densified biomass has several advantages in different markets, which makes it a competitive option compared to conventional biomass wood pellets. Higher energy density An energy density of 18–20 GJ/m3 – compared to the 26 to 33 gigajoules per tonne heat content of natural anthracite coal – can be achieved when combined with densification (pelletizing or briquetting) compared to values of 10–11 GJ/m3 for raw biomass, driving a 40–50% reduction in transportation costs. Importantly, pelletizing or briquetting primarily increases energy density. Torrefaction alone typically decreases energy density, though it makes the material easier to make into pellets or briquettes. More homogeneous composition Torrefied biomass can be produced from a wide variety of raw biomass feedstocks that yield similar product properties. Most woody and herbaceous biomass consists of three main polymeric structures: cellulose, hemicellulose and lignin. Together these are called lignocellulose. 
Torrefaction primarily drives moisture and oxygen-rich and hydrogen-rich functional groups from these structures, producing similar char-like structures in all three cases. Therefore, most biomass fuels, regardless of origin, produce torrefied products with similar propertieswith the exception of ash properties, which largely reflect the original fuel ash content and composition. Hydrophobic behavior Torrefied biomass has hydrophobic properties, i.e., repels water, and when combined with densification make bulk storage in open air feasible. Elimination of biological activity All biological activity is stopped, reducing the risk of fire and stopping biological decomposition like rotting. Improved grindability Torrefaction of biomass leads to improved grindability of biomass. This leads to more efficient co-firing in existing coal-fired power stations or entrained-flow gasification for the production of chemicals and transportation fuels. Markets for torrefied biomass Torrefied biomass has added value for different markets. Biomass in general provides a low-cost, low-risk route to lower CO2-emissions. When high volumes are needed, torrefaction can make biomass from distant sources price competitive because the denser material is easier to store and transport. Wood powder fuel: Torrefied wood powder can be ground into a fine powder and when compressed, mimics liquefied petroleum gas (LPG). Large-scale co-firing in coal-fired power plants: Torrefied biomass results in lower handling costs; Torrefied biomass enables higher co-firing rates; Product can be delivered in a range of LHVs (20–25 GJ/ton) and sizes (briquette, pellet). Co-firing torrefied biomass with coal leads to reduction in net power plant emissions. Steel production: Fibrous biomass is very difficult to deploy in furnaces; To replace injection coal, biomass product needs to have LHV of more than 25 GJ/ton. Residential/decentralized heating: Relatively high percentage of transport on wheels in the supply chain makes biomass expensive. Increasing volumetric energy density does decrease costs; Limited storage space increases need for increased volumetric density; Moisture content important as moisture leads to smoke and smell. Biomass-to-Liquids: Torrefied biomass results in lower handling costs. Torrefied biomass serves as a 'clean' feedstock for production of transportation fuels (Fischer–Tropsch process), which saves on production costs. Miscellaneous uses: Several guitar builders have used torrefaction to obtain more dimensionally stable wood for guitar parts than traditional kiln-drying or air-drying provides, including Yamaha, Martin, Gibson, and luthier Dana Bourgeois. See also Pyrolysis Thermally modified wood Carbonization (contains a detailed description of the inferior combustion qualities of biomass compared to coal, and the positive effects of torrefaction.) 
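The mass and energy losses quoted earlier (roughly 20% of the dry mass and 10% of the heating value) imply a modest increase in heating value per unit mass from torrefaction alone; the short calculation below makes that explicit. The raw-biomass heating value used is an assumed round number, not a figure from the text.

```python
# Energy densification from torrefaction alone (before pelletising).
mass_yield   = 0.80          # ~20 % of the dry mass is lost (from the text)
energy_yield = 0.90          # ~10 % of the heating value is lost (from the text)

lhv_raw = 17.0               # assumed LHV of dry raw biomass, GJ per tonne

densification = energy_yield / mass_yield           # = 1.125
lhv_torrefied = lhv_raw * densification              # ~19.1 GJ per tonne

print(f"energy densification factor: {densification:.3f}")
print(f"torrefied LHV: {lhv_torrefied:.1f} GJ/t (vs {lhv_raw} GJ/t raw)")
```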
References Further reading "Torrefied Wood Powder to Propane"; Zwart, R.W.R.; "Torrefaction Quality Control based on logistic & end-user requirements", ECN report, ECN-L–11-107 Verhoeff, F.; Adell, A.; Boersma, A.R.; Pels, J.R.; Lensselink, J.; Kiel, J.H.A.; Schukken, H.; "TorTech: Torrefaction as key Technology for the production of (solid) fuels from biomass and waste", ECN report, ECN-E–11-039 Bergman, P.C.A.; Kiel, J.H.A., 2005, "Torrefaction for biomass upgrading", ECN report, ECN-RX–05-180 Bergman, P.C.A.; Boersma, A.R.; Zwart, R.W.R.; Kiel, J.H.A., 2005, "Development of torrefaction for biomass co-firing in existing coal-fired power stations", ECN report, ECN-C–05-013 Bergman, P.C.A., 2005, "Combined torrefaction and pelletisation – the TOP process", ECN Report, ECN-C–05-073 Bergman, P.C.A.; Boersma, A.R.; Kiel, J.H.A.; Prins, M.J.; Ptasinski, K.J.; Janssen, F.G.G.J., 2005, "Torrefied biomass for entrained-flow gasification of biomass", ECN Report, ECN-C–05-026. Drying processes Pyrolysis Fuel production
Torrefaction
[ "Chemistry" ]
1,716
[ "Pyrolysis", "Oil shale technology", "Synthetic fuel technologies", "Organic reactions" ]
23,979,808
https://en.wikipedia.org/wiki/Vorton
A vorton is a hypothetical circular cosmic string loop stabilized by the angular momentum of the charge and current trapped on the string. References Further reading Physical cosmology Large-scale structure of the cosmos String theory
Vorton
[ "Physics", "Astronomy" ]
43
[ "Astronomical hypotheses", "Astronomical sub-disciplines", "Theoretical physics", "Astrophysics", "String theory", "Physical cosmology" ]
23,981,306
https://en.wikipedia.org/wiki/C21H27NO2
{{DISPLAYTITLE:C21H27NO2}} The molecular formula C21H27NO2 (molar mass: 325.44 g/mol) may refer to: Etafenone, a vasodilator Ifenprodil Norpropoxyphene SR 59230A Molecular formulas
C21H27NO2
[ "Physics", "Chemistry" ]
70
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,981,348
https://en.wikipedia.org/wiki/C21H29NO
{{DISPLAYTITLE:C21H29NO}} The molecular formula C21H29NO (molar mass: 311.46 g/mol) may refer to: Alphamethadol Betamethadol Biperiden Dimepheptanol, or methadol Isomethadol UR-144 Molecular formulas
C21H29NO
[ "Physics", "Chemistry" ]
71
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,981,390
https://en.wikipedia.org/wiki/C22H14
{{DISPLAYTITLE:C22H14}} The molecular formula C22H14 (molar mass: 278.36 g/mol) may refer to: Dibenz[a,h]anthracene Dibenz[a,j]anthracene Pentacene Picene Molecular formulas
C22H14
[ "Physics", "Chemistry" ]
67
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,981,399
https://en.wikipedia.org/wiki/C22H29NO2
{{DISPLAYTITLE:C22H29NO2}} The molecular formula C22H29NO2 (molar mass: 339.471 g/mol) may refer to: A-834,735 Dextropropoxyphene Levopropoxyphene Lobelanidine Noracymethadol Molecular formulas
C22H29NO2
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
26,825,148
https://en.wikipedia.org/wiki/De%20Bruijn%E2%80%93Erd%C5%91s%20theorem%20%28incidence%20geometry%29
In incidence geometry, the De Bruijn–Erdős theorem, originally published by Nicolaas Govert de Bruijn and Paul Erdős in 1948, states a lower bound on the number of lines determined by n points in a projective plane. By duality, this is also a bound on the number of intersection points determined by a configuration of lines. Although the proof given by De Bruijn and Erdős is combinatorial, De Bruijn and Erdős noted in their paper that the analogous (Euclidean) result is a consequence of the Sylvester–Gallai theorem, by an induction on the number of points. Statement of the theorem Let P be a configuration of n points in a projective plane, not all on a line. Let t be the number of lines determined by P. Then, t ≥ n, and if t = n, any two lines have exactly one point of P in common. In this case, P is either a projective plane or P is a near pencil, meaning that exactly n - 1 of the points are collinear. Euclidean proof The theorem is clearly true for three non-collinear points. We proceed by induction. Assume n > 3 and the theorem is true for n − 1. Let P be a set of n points not all collinear. The Sylvester–Gallai theorem states that there is a line containing exactly two points of P. Such two point lines are called ordinary lines. Let a and b be the two points of P on an ordinary line. If the removal of point a produces a set of collinear points then P generates a near pencil of n lines (the n - 1 ordinary lines through a plus the one line containing the other n - 1 points). Otherwise, the removal of a produces a set, P' , of n − 1 points that are not all collinear. By the induction hypothesis, P' determines at least n − 1 lines. The ordinary line determined by a and b is not among these, so P determines at least n lines. J. H. Conway's proof John Horton Conway has a purely combinatorial proof which consequently also holds for points and lines over the complex numbers, quaternions and octonions. References Theorems in projective geometry Euclidean plane geometry Theorems in discrete geometry Incidence geometry Paul Erdős
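The bound t ≥ n, and the near-pencil equality case, can be checked directly for small configurations. The sketch below is illustrative only; the configuration is a hypothetical near-pencil, and the helper function is not from the sources cited above.

```python
from fractions import Fraction
from itertools import combinations

# Count the distinct lines determined by a finite planar point set and check
# the De Bruijn-Erdos bound t >= n on a near-pencil example.
def lines_determined(points):
    lines = set()
    for (x1, y1), (x2, y2) in combinations(points, 2):
        # Line through the two points as a*x + b*y + c = 0, normalised with
        # exact rational arithmetic so equal lines hash identically.
        a, b = Fraction(y2 - y1), Fraction(x1 - x2)
        c = -(a * x1 + b * y1)
        scale = a if a != 0 else b
        lines.add((a / scale, b / scale, c / scale))
    return lines

n = 7
near_pencil = [(i, 0) for i in range(n - 1)] + [(0, 1)]   # not all collinear
t = len(lines_determined(near_pencil))
assert t >= n
print(t)   # exactly n lines: the base line plus n - 1 lines through (0, 1)
```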
De Bruijn–Erdős theorem (incidence geometry)
[ "Mathematics" ]
471
[ "Theorems in projective geometry", "Euclidean plane geometry", "Combinatorics", "Theorems in discrete mathematics", "Theorems in geometry", "Theorems in discrete geometry", "Planes (geometry)", "Incidence geometry" ]
26,826,914
https://en.wikipedia.org/wiki/Pauli%E2%80%93Lubanski%20pseudovector
In physics, the Pauli–Lubanski pseudovector is an operator defined from the momentum and angular momentum, used in the quantum-relativistic description of angular momentum. It is named after Wolfgang Pauli and Józef Lubański. It describes the spin states of moving particles. It is the generator of the little group of the Poincaré group, that is the maximal subgroup (with four generators) leaving the eigenvalues of the four-momentum vector invariant. Definition It is usually denoted by (or less often by ) and defined by: where is the four-dimensional totally antisymmetric Levi-Civita symbol; is the relativistic angular momentum tensor operator (); is the four-momentum operator. In the language of exterior algebra, it can be written as the Hodge dual of a trivector, Note , and where is the generator of rotations and is the generator of boosts. evidently satisfies as well as the following commutator relations, Consequently, The scalar is a Lorentz-invariant operator, and commutes with the four-momentum, and can thus serve as a label for irreducible unitary representations of the Poincaré group. That is, it can serve as the label for the spin, a feature of the spacetime structure of the representation, over and above the relativistically invariant label for the mass of all states in a representation. Little group On an eigenspace of the 4-momentum operator with 4-momentum eigenvalue of the Hilbert space of a quantum system (or for that matter the standard representation with interpreted as momentum space acted on by 5×5 matrices with the upper left 4×4 block an ordinary Lorentz transformation, the last column reserved for translations and the action effected on elements (column vectors) of momentum space with appended as a fifth row, see standard texts) the following holds: The components of with replaced by form a Lie algebra. It is the Lie algebra of the Little group of , i.e. the subgroup of the homogeneous Lorentz group that leaves invariant. For every irreducible unitary representation of there is an irreducible unitary representation of the full Poincaré group called an induced representation. A representation space of the induced representation can be obtained by successive application of elements of the full Poincaré group to a non-zero element of and extending by linearity. The irreducible unitary representation of the Poincaré group are characterized by the eigenvalues of the two Casimir operators and . The best way to see that an irreducible unitary representation actually is obtained is to exhibit its action on an element with arbitrary 4-momentum eigenvalue in the representation space thus obtained. Irreducibility follows from the construction of the representation space. Massive fields In quantum field theory, in the case of a massive field, the Casimir invariant describes the total spin of the particle, with eigenvalues where is the spin quantum number of the particle and is its rest mass. It is straightforward to see this in the rest frame of the particle, the above commutator acting on the particle's state amounts to ; hence and , so that the little group amounts to the rotation group, Since this is a Lorentz invariant quantity, it will be the same in all other reference frames. It is also customary to take to describe the spin projection along the third direction in the rest frame. 
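Because the displayed equations in this article did not survive extraction, the standard relations are collected below as a hedged reconstruction rather than a verbatim restoration of the article's formulas; the signs depend on the conventions stated in the comment.

```latex
% Hedged reconstruction; conventions assumed: metric (+,-,-,-),
% \varepsilon_{0123} = +1, J^{0i} = K^i, natural units \hbar = c = 1.
W_\mu \;=\; \tfrac{1}{2}\,\varepsilon_{\mu\nu\rho\sigma}\, J^{\nu\rho} P^{\sigma},
\qquad
W^0 = \mathbf{J}\cdot\mathbf{P},
\qquad
\mathbf{W} = P^0\,\mathbf{J} - \mathbf{P}\times\mathbf{K},

W_\mu P^\mu = 0,
\qquad
[\,W_\mu ,\, P_\nu\,] = 0,
\qquad
W_\mu W^\mu \,|m,s\rangle = -\,m^2\, s(s+1)\, |m,s\rangle .
```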
In moving frames, decomposing W = (W^0, W) into components (W^1, W^2, W^3), with W^1 and W^2 orthogonal to P, and W^3 parallel to P, the Pauli–Lubanski vector may be expressed in terms of the spin vector S = (S^1, S^2, S^3) (similarly decomposed) as W^0 = P S^3, W^1 = m S^1, W^2 = m S^2, W^3 = E S^3, where E² = P² + m² is the energy–momentum relation (in units with c = 1). The transverse components W^1 and W^2, along with W^3, satisfy commutator relations which apply generally, not just to non-zero mass representations; for instance [W^1, W^2] = i m² S^3. For particles with non-zero mass, and the fields associated with such particles, W² = −m² s(s+1). Massless fields In general, in the case of non-massive representations, two cases may be distinguished. For massless particles, W^0 = J·P and W = E J − P × K, where K is the dynamic mass moment vector. So, mathematically, P² = 0 does not imply W² = 0. Continuous spin representations In the more general case, the components of W transverse to P may be non-zero, thus yielding the family of representations referred to as the cylindrical luxons ("luxon" is another term for "massless particle"), their identifying property being that the components of W form a Lie subalgebra isomorphic to the 2-dimensional Euclidean group ISO(2), with the longitudinal component of W playing the role of the rotation generator, and the transverse components the role of translation generators. This amounts to a group contraction of SO(3), and leads to what are known as the continuous spin representations. However, there are no known physical cases of fundamental particles or fields in this family. It can be argued that continuous spin states possess an internal degree of freedom not seen in observed massless particles. Helicity representations In a special case, W is parallel to P, or equivalently W × P = 0. For non-zero W this constraint can only be consistently imposed for luxons (massless particles), since the commutator of the two transverse components of W is proportional to m². For this family, W² = 0 and W^μ = λ P^μ; the invariant is, instead, given by (W^0)² = (J·P)², so the invariant is represented by the helicity operator λ = J·P/|P|. All particles that interact with the weak nuclear force, for instance, fall into this family, since the definition of weak nuclear charge (weak isospin) involves helicity, which, by the above, must be an invariant. The appearance of non-zero mass in such cases must then be explained by other means, such as the Higgs mechanism. Even after accounting for such mass-generating mechanisms, however, the photon (and therefore the electromagnetic field) continues to fall into this class, although the other mass eigenstates of the carriers of the electroweak force (the W⁺ and W⁻ bosons and the Z boson) acquire non-zero mass. Neutrinos were formerly considered to fall into this class as well. However, because neutrinos have been observed to oscillate in flavour, it is now known that at least two of the three mass eigenstates of the left-helicity neutrinos and right-helicity anti-neutrinos each must have non-zero mass. See also Center of mass (relativistic) Wigner's classification Angular momentum operator Casimir operator Chirality Pseudovector Pseudotensor Induced representation Notes References Quantum field theory Representation theory of Lie algebras
Pauli–Lubanski pseudovector
[ "Physics" ]
1,335
[ "Quantum field theory", "Quantum mechanics" ]
26,828,712
https://en.wikipedia.org/wiki/Quantization%20of%20the%20electromagnetic%20field
The quantization of the electromagnetic field is a procedure in physics turning Maxwell's classical electromagnetic waves into particles called photons. Photons are massless particles of definite energy, definite momentum, and definite spin. To explain the photoelectric effect, Albert Einstein assumed heuristically in 1905 that an electromagnetic field consists of particles of energy of amount hν, where h is the Planck constant and ν is the wave frequency. In 1927 Paul A. M. Dirac was able to weave the photon concept into the fabric of the new quantum mechanics and to describe the interaction of photons with matter. He applied a technique which is now generally called second quantization, although this term is somewhat of a misnomer for electromagnetic fields, because they are solutions of the classical Maxwell equations. In Dirac's theory the fields are quantized for the first time and it is also the first time that the Planck constant enters the expressions. In his original work, Dirac took the phases of the different electromagnetic modes (Fourier components of the field) and the mode energies as dynamic variables to quantize (i.e., he reinterpreted them as operators and postulated commutation relations between them). At present it is more common to quantize the Fourier components of the vector potential. This is what is done below. A quantum mechanical photon state |k, μ⟩ belonging to mode (k, μ) is introduced below, and it is shown that it has the following properties: m_photon = 0; H |k, μ⟩ = hν |k, μ⟩; P_EM |k, μ⟩ = ħk |k, μ⟩; S_z |k, μ⟩ = μħ |k, μ⟩, μ = ±1. These equations say respectively: a photon has zero rest mass; the photon energy is hν = ħc|k| (k is the wave vector, c is speed of light); its electromagnetic momentum is ħk [ħ = h/(2π)]; the polarization μ = ±1 is the eigenvalue of the z-component of the photon spin. Second quantization Second quantization starts with an expansion of a scalar or vector field (or wave functions) in a basis consisting of a complete set of functions. These expansion functions depend on the coordinates of a single particle. The coefficients multiplying the basis functions are interpreted as operators and (anti)commutation relations between these new operators are imposed, commutation relations for bosons and anticommutation relations for fermions (nothing happens to the basis functions themselves). By doing this, the expanded field is converted into a fermion or boson operator field. The expansion coefficients have been promoted from ordinary numbers to operators, creation and annihilation operators. A creation operator creates a particle in the corresponding basis function and an annihilation operator annihilates a particle in this function. In the case of EM fields the required expansion of the field is the Fourier expansion. Electromagnetic field and vector potential As the term suggests, an EM field consists of two vector fields, an electric field E(r, t) and a magnetic field B(r, t). Both are time-dependent vector fields that in vacuum depend on a third vector field A(r, t) (the vector potential), as well as a scalar field φ(r, t): B(r, t) = ∇ × A(r, t) and E(r, t) = −∇φ(r, t) − ∂A(r, t)/∂t, where ∇ × A is the curl of A. Choosing the Coulomb gauge, for which ∇⋅A = 0, makes A into a transverse field. The Fourier expansion of the vector potential enclosed in a finite cubic box of volume V = L³ is then A(r, t) = Σ_k Σ_{μ=−1,1} ( a_k^(μ)(t) e^(μ) e^{ik·r} + a_k^(μ)*(t) e^(μ)* e^{−ik·r} ), where a_k^(μ)* denotes the complex conjugate of a_k^(μ). The wave vector k gives the propagation direction of the corresponding Fourier component (a polarized monochromatic wave) of A(r, t); the length of the wave vector is |k| = 2πν/c, with ν the frequency of the mode. In this summation k runs over all integers, both positive and negative.
(The Fourier component for −k is the complex conjugate of the component for k, as A is real.) The components of the vector k have discrete values (a consequence of the boundary condition that A has the same value on opposite walls of the box): k_x = 2πn_x/L, k_y = 2πn_y/L, k_z = 2πn_z/L, with n_x, n_y, n_z integers. Two e^(μ) ("polarization vectors") are conventional unit vectors for left and right hand circular polarized (LCP and RCP) EM waves (see Jones calculus or Jones vector) and perpendicular to k. They are related to the orthonormal Cartesian vectors e_x and e_y through a unitary transformation, e^(±1) = ∓(e_x ± i e_y)/√2. The kth Fourier component of A is a vector perpendicular to k and hence is a linear combination of e^(1) and e^(−1). The superscript μ indicates a component along e^(μ). Clearly, the (discrete infinite) set of Fourier coefficients a_k^(μ)(t) and a_k^(μ)*(t) are variables defining the vector potential. In the following they will be promoted to operators. By using the field equations for B and E in terms of A above, the electric and magnetic fields are obtained mode by mode, using the identity ∇ × (a e^{ib·r}) = i b × a e^{ib·r} (a and b are vectors) and the fact that each mode has a single frequency dependence. Quantization of EM field The best known example of quantization is the replacement of the time-dependent linear momentum of a particle by the rule p → −iħ∇. Note that the Planck constant is introduced here and that the time-dependence of the classical expression is not taken over in the quantum mechanical operator (this is true in the so-called Schrödinger picture). For the EM field we do something similar. The quantity ε0 is the electric constant, which appears here because of the use of electromagnetic SI units. The quantization rules are: a_k^(μ)(t) → √(ħ/(2ωε0V)) a_k^(μ) and a_k^(μ)*(t) → √(ħ/(2ωε0V)) a_k^(μ)†, subject to the boson commutation relations [a_k^(μ), a_k′^(μ′)] = 0, [a_k^(μ)†, a_k′^(μ′)†] = 0, [a_k^(μ), a_k′^(μ′)†] = δ_{kk′} δ_{μμ′}. The square brackets indicate a commutator, defined by [A, B] ≡ AB − BA for any two quantum mechanical operators A and B. The introduction of the Planck constant is essential in the transition from a classical to a quantum theory. The factor √(ħ/(2ωε0V)) is introduced to give the Hamiltonian (energy operator) a simple form, see below. The quantized fields (operator fields) follow by substituting these rules into the classical expressions for A, E, and B, where ω = c|k| = ck. Hamiltonian of the field The classical Hamiltonian has the form H = (1/2) ∫_V (ε0 E·E + B·B/μ0) d³r = V Σ_k Σ_{μ=−1,1} 2 ε0 ω² a_k^(μ)* a_k^(μ). The right-hand-side is easily obtained by first using ∫_V e^{i(k−k′)·r} d³r = V δ_{kk′} (which can be derived from Euler's formula and trigonometric orthogonality), where k is the wavenumber for a wave confined within the box of V = L × L × L as described above, and second, using ω = kc. Substitution of the field operators into the classical Hamiltonian gives the Hamilton operator of the EM field, H = (1/2) Σ_{k,μ} ħω (a_k^(μ)† a_k^(μ) + a_k^(μ) a_k^(μ)†) = Σ_{k,μ} ħω (a_k^(μ)† a_k^(μ) + 1/2). The second equality follows by use of the third of the boson commutation relations from above with k′ = k and μ′ = μ. Note again that ħω = hν = ħc|k| and remember that ω depends on k, even though it is not explicit in the notation. The notation ω(k) could have been introduced, but is not common as it clutters the equations. Digression: harmonic oscillator The second quantized treatment of the one-dimensional quantum harmonic oscillator is a well-known topic in quantum mechanical courses. We digress and say a few words about it. The harmonic oscillator Hamiltonian has the form H = ħω (a†a + 1/2), where ω ≡ 2πν is the fundamental frequency of the oscillator. The ground state of the oscillator is designated by |0⟩ and is referred to as the "vacuum state". It can be shown that a† is an excitation operator, it excites from an n-fold excited state to an (n + 1)-fold excited state: a† |n⟩ = √(n+1) |n+1⟩. In particular: a† |0⟩ = |1⟩ and (a†)ⁿ |0⟩ = √(n!) |n⟩. Since harmonic oscillator energies are equidistant, the n-fold excited state |n⟩ can be looked upon as a single state containing n particles (sometimes called vibrons) all of energy hν. These particles are bosons.
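These ladder relations are easy to verify with truncated matrices. A minimal sketch (Python with NumPy; the truncation size N is an assumed, illustrative choice, with ħ = ω = 1):

import numpy as np

N = 8                                       # Fock-space truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation: a|n> = sqrt(n) |n-1>
adag = a.conj().T                           # creation: a†|n> = sqrt(n+1) |n+1>

num = adag @ a                              # number operator a†a
print(np.allclose(np.diag(num), np.arange(N)))     # True: eigenvalues 0..N-1

comm = a @ adag - adag @ a                  # [a, a†] = 1 away from the truncation edge
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # True

H = num + 0.5 * np.eye(N)                   # H = a†a + 1/2 in units of hbar*omega
print(np.diag(H))                           # [0.5 1.5 2.5 ...]: equidistant levels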
For obvious reason the excitation operator a† is called a creation operator. From the commutation relation [a, a†] = 1 it follows that the Hermitian adjoint a de-excites: a |n⟩ = √n |n−1⟩; in particular a |0⟩ = 0, so that the vacuum state cannot be de-excited further. For obvious reason the de-excitation operator a is called an annihilation operator. By mathematical induction the following "differentiation rule", that will be needed later, is easily proved: a (a†)ⁿ = (a†)ⁿ a + n (a†)ⁿ⁻¹. Suppose now we have a number of non-interacting (independent) one-dimensional harmonic oscillators, each with its own fundamental frequency ω_i. Because the oscillators are independent, the Hamiltonian is a simple sum: H = Σ_i ħω_i (a_i† a_i + 1/2). By substituting a_k^(μ) for a_i we see that the Hamiltonian of the EM field can be considered a Hamiltonian of independent oscillators of energy ħω = ħc|k| oscillating along direction e^(μ) with μ = ±1. Photon number states (Fock states) The quantized EM field has a vacuum (no photons) state |0⟩. The application to it of, say, (a_k^(μ)†)^m (a_k′^(μ′)†)^n gives a quantum state of m photons in mode (k, μ) and n photons in mode (k′, μ′): |m_k^(μ); n_k′^(μ′)⟩ ∝ (a_k^(μ)†)^m (a_k′^(μ′)†)^n |0⟩. The proportionality symbol is used because the state on the left-hand side is not normalized to unity, whereas the state on the right-hand side may be normalized. The operator N_k^(μ) = a_k^(μ)† a_k^(μ) is the number operator. When acting on a quantum mechanical photon number state, it returns the number of photons in mode (k, μ). This also holds when the number of photons in this mode is zero, then the number operator returns zero. To show the action of the number operator on a one-photon ket, we consider N_k^(μ) a_k′^(μ′)† |0⟩ = δ_{kk′} δ_{μμ′} a_k′^(μ′)† |0⟩; i.e., a number operator of mode (k, μ) returns zero if the mode is unoccupied and returns unity if the mode is singly occupied. To consider the action of the number operator of mode (k, μ) on a n-photon ket of the same mode, we drop the indices k and μ and consider N (a†)ⁿ |0⟩. Use the "differentiation rule" introduced earlier and it follows that N (a†)ⁿ |0⟩ = n (a†)ⁿ |0⟩. A photon number state (or a Fock state) is an eigenstate of the number operator. This is why the formalism described here is often referred to as the occupation number representation. Photon energy Earlier the Hamiltonian H = Σ_{k,μ} ħω (a_k^(μ)† a_k^(μ) + 1/2) was introduced. The zero of energy can be shifted, which leads to an expression in terms of the number operator, H = Σ_{k,μ} ħω N_k^(μ). The effect of H on a single-photon state is H a_k^(μ)† |0⟩ = ħω a_k^(μ)† |0⟩. Thus the single-photon state is an eigenstate of H and ħω = hν is the corresponding energy. In the same way, an n-photon state (a_k^(μ)†)ⁿ |0⟩ is an eigenstate of H with energy nħω. Photon momentum Introducing the Fourier expansion of the electromagnetic field into the classical form P_EM = ε0 ∫_V E(r, t) × B(r, t) d³r yields P_EM = V Σ_{k,μ} 2 ε0 ω a_k^(μ)* a_k^(μ) k. Quantization gives P_EM = Σ_{k,μ} ħk (a_k^(μ)† a_k^(μ) + 1/2). The term 1/2 could be dropped, because when one sums over the allowed k, k cancels with −k. The effect of P_EM on a single-photon state is P_EM a_k^(μ)† |0⟩ = ħk a_k^(μ)† |0⟩. Apparently, the single-photon state is an eigenstate of the momentum operator, and ħk is the eigenvalue (the momentum of a single photon). Photon mass The photon having non-zero linear momentum, one could imagine that it has a non-vanishing rest mass m0, which is its mass at zero speed. However, we will now show that this is not the case: m0 = 0. Since the photon propagates with the speed of light, special relativity is called for. The relativistic expressions for energy and momentum squared are E² = m0²c⁴/(1 − v²/c²) and p² = m0²v²/(1 − v²/c²). From p²/E² = v²/c⁴, use E = ħc|k| and p = ħ|k| and it follows that v = c, so that m0 = 0. Photon spin The photon can be assigned a triplet spin with spin quantum number S = 1. This is similar to, say, the nuclear spin of the ¹⁴N isotope, but with the important difference that the state with MS = 0 is zero, only the states with MS = ±1 are non-zero. Define spin operators: S_z ≡ −iħ (e_x e_y − e_y e_x) and its cyclic permutations S_x and S_y. The products of two orthogonal unit vectors appearing in these operators are dyadic products.
The unit vectors are perpendicular to the propagation direction k (the direction of the z axis, which is the spin quantization axis). The spin operators satisfy the usual angular momentum commutation relations [S_x, S_y] = iħ S_z and cyclic permutations. Indeed, use the dyadic product property (e_y e_z)·(e_z e_x) = e_y (e_z · e_z) e_x = e_y e_x, because e_z is of unit length. In this manner, the commutation relations follow. By inspection it follows that S_z e^(±1) = ±ħ e^(±1), and therefore μ labels the photon spin: S_z |k, μ⟩ = μħ |k, μ⟩, μ = ±1. Because the vector potential A is a transverse field, the photon has no forward (μ = 0) spin component. Classical approximation The classical approximation to EM radiation is good when the number of photons is much larger than unity in the volume (λ/(2π))³, where λ is the length of the radio waves. In that case quantum fluctuations are negligible. For example, the photons emitted by a radio station broadcasting at the frequency ν = 100 MHz have an energy content of νh = (1 × 10⁸) × (6.6 × 10⁻³⁴) = 6.6 × 10⁻²⁶ J, where h is the Planck constant. The wavelength of the station is λ = c/ν = 3 m, so that λ/(2π) = 48 cm and the volume is 0.109 m³. The energy content of this volume element at 5 km from the station is 2.1 × 10⁻¹⁰ × 0.109 = 2.3 × 10⁻¹¹ J, which amounts to 3.4 × 10¹⁴ photons per (λ/(2π))³. Since 3.4 × 10¹⁴ > 1, quantum effects do not play a role. The waves emitted by this station are well-described by the classical limit and quantum mechanics is not needed. See also QED vacuum Generalized polarization vector of arbitrary spin fields. References Gauge theories Mathematical quantization
Quantization of the electromagnetic field
[ "Physics" ]
2,667
[ "Mathematical quantization", "Quantum mechanics" ]
491,903
https://en.wikipedia.org/wiki/Thue%E2%80%93Morse%20sequence
In mathematics, the Thue–Morse or Prouhet–Thue–Morse sequence is the binary sequence (an infinite sequence of 0s and 1s) that can be obtained by starting with 0 and successively appending the Boolean complement of the sequence obtained thus far. It is sometimes called the fair share sequence (because of its applications to fair division) or the parity sequence. The first few steps of this procedure yield the strings 0, 01, 0110, 01101001, 0110100110010110, and so on, which are the prefixes of the Thue–Morse sequence. The full sequence begins: 01101001100101101001011001101001.... The sequence is named after Axel Thue and Marston Morse. Definition There are several equivalent ways of defining the Thue–Morse sequence. Direct definition To compute the nth element tn, write the number n in binary. If the number of ones in this binary expansion is odd then tn = 1, if even then tn = 0. That is, tn is the even parity bit for n. John H. Conway et al. deemed numbers n satisfying tn = 1 to be odious (intended to be similar to odd) numbers, and numbers for which tn = 0 to be evil (similar to even) numbers. Fast sequence generation This method leads to a fast method for computing the Thue–Morse sequence: start with t0 = 0, and then, for each n, find the highest-order bit in the binary representation of n that is different from the same bit in the representation of n − 1. If this bit is at an even index, tn differs from tn−1, and otherwise it is the same as tn−1. In Python:

def generate_sequence(seq_length: int):
    """Thue–Morse sequence."""
    value = 1
    for n in range(seq_length):
        # Note: assumes that (-1).bit_length() gives 1
        x = (n ^ (n - 1)).bit_length() + 1
        if x & 1 == 0:
            # Bit index is even, so toggle value
            value = 1 - value
        yield value

The resulting algorithm takes constant time to generate each sequence element, using only a logarithmic number of bits (constant number of words) of memory. Recurrence relation The Thue–Morse sequence is the sequence tn satisfying the recurrence relation t0 = 0, t2n = tn, t2n+1 = 1 − tn, for all non-negative integers n. L-system The Thue–Morse sequence is a morphic word: it is the output of the following Lindenmayer system: variables 0, 1; constants none; start 0; rules (0 → 01), (1 → 10). Characterization using bitwise negation The Thue–Morse sequence in the form given above, as a sequence of bits, can be defined recursively using the operation of bitwise negation. So, the first element is 0. Then once the first 2n elements have been specified, forming a string s, then the next 2n elements must form the bitwise negation of s. Now we have defined the first 2n+1 elements, and we recurse. Spelling out the first few steps in detail: We start with 0. The bitwise negation of 0 is 1. Combining these, the first 2 elements are 01. The bitwise negation of 01 is 10. Combining these, the first 4 elements are 0110. The bitwise negation of 0110 is 1001. Combining these, the first 8 elements are 01101001. And so on. So T0 = 0. T1 = 01. T2 = 0110. T3 = 01101001. T4 = 0110100110010110. T5 = 01101001100101101001011001101001. T6 = 0110100110010110100101100110100110010110011010010110100110010110. And so on. In Python:

def thue_morse_bits(n):
    """Return an int containing the first 2**n bits of the Thue-Morse sequence, low-order bit 1st."""
    bits = 0
    for i in range(n):
        bits |= ((1 << (1 << i)) - 1 - bits) << (1 << i)
    return bits

Which can then be converted to a (reversed) string as follows:

n = 7
print(f"{thue_morse_bits(n):0{1<<n}b}")

Infinite product The sequence can also be defined by: Π_{i≥0} (1 − x^(2^i)) = Σ_{j≥0} (−1)^(tj) x^j, where tj is the jth element if we start at j = 0.
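As a quick cross-check of the definitions above, the following sketch (Python 3.10+ is assumed, for int.bit_count()) tests the parity-of-popcount definition against the recurrence t(2n) = t(n), t(2n+1) = 1 − t(n):

def t(n: int) -> int:
    # 1 if the binary expansion of n contains an odd number of ones.
    return n.bit_count() & 1  # needs Python 3.10+; use bin(n).count("1") otherwise

assert t(0) == 0
assert all(t(2 * n) == t(n) and t(2 * n + 1) == 1 - t(n) for n in range(1000))
print("".join(str(t(n)) for n in range(16)))  # 0110100110010110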
Properties The Thue–Morse sequence contains many squares: instances of the string XX, where X denotes the string A, Ā, AĀA, or ĀAĀ, where A = Tk for some k and Ā is the bitwise negation of A. For instance, if A = T0 = 0, then AĀA = 010. The square 010010 appears in T starting at the 16th bit. Since all squares in T are obtained by repeating one of these 4 strings, they all have length 2ⁿ or 3·2ⁿ for some n. T contains no cubes: instances of XXX. There are also no overlapping squares: instances of 0X0X0 or 1X1X1. The critical exponent of T is 2. The Thue–Morse sequence is a uniformly recurrent word: given any finite string X in the sequence, there is some length nX (often much longer than the length of X) such that X appears in every block of length nX. Notably, the Thue-Morse sequence is uniformly recurrent without being either periodic or eventually periodic (i.e., periodic after some initial nonperiodic segment). The sequence T_{2n} is a palindrome for any n. Furthermore, let qn be a word obtained by counting the ones between consecutive zeros in T_{2n}. For instance, q1 = 2 and q2 = 2102012. Since T does not contain overlapping squares, the words qn are palindromic squarefree words. The Thue–Morse morphism μ is defined on alphabet {0,1} by the substitution map μ(0) = 01, μ(1) = 10: every 0 in a sequence is replaced with 01 and every 1 with 10. If T is the Thue–Morse sequence, then μ(T) is also T. Thus, T is a fixed point of μ. The morphism μ is a prolongable morphism on the free monoid {0,1}∗ with T as fixed point: T is essentially the only fixed point of μ; the only other fixed point is the bitwise negation of T, which is simply the Thue–Morse sequence on (1,0) instead of on (0,1). This property may be generalized to the concept of an automatic sequence. The generating series of T over the binary field is the formal power series τ(X) = Σ_{n≥0} tn Xⁿ. This power series is algebraic over the field of rational functions, satisfying the equation (1 + X)³ τ² + (1 + X)² τ + X = 0. In combinatorial game theory The set of evil numbers (numbers n with tn = 0) forms a subspace of the nonnegative integers under nim-addition (bitwise exclusive or). For the game of Kayles, evil nim-values occur for few (finitely many) positions in the game, with all remaining positions having odious nim-values. The Prouhet–Tarry–Escott problem The Prouhet–Tarry–Escott problem can be defined as: given a positive integer N and a non-negative integer k, partition the set S = { 0, 1, ..., N−1 } into two disjoint subsets S0 and S1 that have equal sums of powers up to k, that is: Σ_{n∈S0} nⁱ = Σ_{n∈S1} nⁱ for all integers i from 1 to k. This has a solution if N is a multiple of 2^(k+1), given by: S0 consists of the integers n in S for which tn = 0, S1 consists of the integers n in S for which tn = 1. For example, for N = 8 and k = 2, 0 + 3 + 5 + 6 = 1 + 2 + 4 + 7 and 0² + 3² + 5² + 6² = 1² + 2² + 4² + 7². The condition requiring that N be a multiple of 2^(k+1) is not strictly necessary: there are some further cases for which a solution exists. However, it guarantees a stronger property: if the condition is satisfied, then the set of kth powers of any set of N numbers in arithmetic progression can be partitioned into two sets with equal sums. This follows directly from the expansion given by the binomial theorem applied to the binomial representing the nth element of an arithmetic progression. For generalizations of the Thue–Morse sequence and the Prouhet–Tarry–Escott problem to partitions into more than two parts, see Bolker, Offner, Richman and Zara, "The Prouhet–Tarry–Escott problem and generalized Thue–Morse sequences". Fractals and turtle graphics Using turtle graphics, a curve can be generated if an automaton is programmed with a sequence.
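As a quick numerical check of the Prouhet–Tarry–Escott example above, before continuing with the turtle-graphics construction (a Python sketch; the names S0 and S1 follow the text):

N, k = 8, 2                    # N is a multiple of 2**(k+1) = 8
S0 = [n for n in range(N) if bin(n).count("1") % 2 == 0]  # t(n) = 0: evil numbers
S1 = [n for n in range(N) if bin(n).count("1") % 2 == 1]  # t(n) = 1: odious numbers
for i in range(1, k + 1):
    assert sum(n**i for n in S0) == sum(n**i for n in S1)  # equal power sums
print(S0, S1)                  # [0, 3, 5, 6] [1, 2, 4, 7]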
When Thue–Morse sequence members are used in order to select program states: If t(n) = 0, move ahead by one unit, If t(n) = 1, rotate by an angle of π/3 radians (60°) The resulting curve converges to the Koch curve, a fractal curve of infinite length containing a finite area. This illustrates the fractal nature of the Thue–Morse Sequence. It is also possible to draw the curve precisely using the following instructions: If t(n) = 0, rotate by an angle of π radians (180°), If t(n) = 1, move ahead by one unit, then rotate by an angle of π/3 radians. Equitable sequencing In their book on the problem of fair division, Steven Brams and Alan Taylor invoked the Thue–Morse sequence but did not identify it as such. When allocating a contested pile of items between two parties who agree on the items' relative values, Brams and Taylor suggested a method they called balanced alternation, or taking turns taking turns taking turns . . . , as a way to circumvent the favoritism inherent when one party chooses before the other. An example showed how a divorcing couple might reach a fair settlement in the distribution of jointly-owned items. The parties would take turns to be the first chooser at different points in the selection process: Ann chooses one item, then Ben does, then Ben chooses one item, then Ann does. Lionel Levine and Katherine E. Stange, in their discussion of how to fairly apportion a shared meal such as an Ethiopian dinner, proposed the Thue–Morse sequence as a way to reduce the advantage of moving first. They suggested that “it would be interesting to quantify the intuition that the Thue–Morse order tends to produce a fair outcome.” Robert Richman addressed this problem, but he too did not identify the Thue–Morse sequence as such at the time of publication. He presented the sequences Tn as step functions on the interval [0,1] and described their relationship to the Walsh and Rademacher functions. He showed that the nth derivative can be expressed in terms of Tn. As a consequence, the step function arising from Tn is orthogonal to polynomials of order n − 1. A consequence of this result is that a resource whose value is expressed as a monotonically decreasing continuous function is most fairly allocated using a sequence that converges to Thue–Morse as the function becomes flatter. An example showed how to pour cups of coffee of equal strength from a carafe with a nonlinear concentration gradient, prompting a whimsical article in the popular press. Joshua Cooper and Aaron Dutle showed why the Thue–Morse order provides a fair outcome for discrete events. They considered the fairest way to stage a Galois duel, in which each of the shooters has equally poor shooting skills. Cooper and Dutle postulated that each dueler would demand a chance to fire as soon as the other's a priori probability of winning exceeded their own. They proved that, as the duelers’ hitting probability approaches zero, the firing sequence converges to the Thue–Morse sequence. In so doing, they demonstrated that the Thue–Morse order produces a fair outcome not only for sequences Tn of length 2n, but for sequences of any length. Thus the mathematics supports using the Thue–Morse sequence instead of alternating turns when the goal is fairness but earlier turns differ monotonically from later turns in some meaningful quality, whether that quality varies continuously or discretely. 
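A small sketch of this balanced alternation in Python (the function and variable names are illustrative assumptions): captain A picks whenever t(n) = 0 and captain B whenever t(n) = 1, with the first pick at n = 0:

def thue_morse(n: int) -> int:
    # Parity of the number of ones in the binary expansion of n.
    return bin(n).count("1") & 1

order = "".join("AB"[thue_morse(n)] for n in range(8))
print(order)  # ABBABAAB: A gets picks 1, 4, 6 and 7; B gets picks 2, 3, 5 and 8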
Sports competitions form an important class of equitable sequencing problems, because strict alternation often gives an unfair advantage to one team. Ignacio Palacios-Huerta proposed changing the sequential order to Thue–Morse to improve the ex post fairness of various tournament competitions, such as the kicking sequence of a penalty shoot-out in soccer. He did a set of field experiments with pro players and found that the team kicking first won 60% of games using ABAB (or T1), 54% using ABBA (or T2), and 51% using full Thue–Morse (or Tn). As a result, ABBA is undergoing extensive trials in FIFA (European and World Championships) and English Federation professional soccer (EFL Cup). An ABBA serving pattern has also been found to improve the fairness of tennis tie-breaks. In competitive rowing, T2 is the only arrangement of port- and starboard-rowing crew members that eliminates transverse forces (and hence sideways wiggle) on a four-membered coxless racing boat, while T3 is one of only four rigs to avoid wiggle on an eight-membered boat. Fairness is especially important in player drafts. Many professional sports leagues attempt to achieve competitive parity by giving earlier selections in each round to weaker teams. By contrast, fantasy football leagues have no pre-existing imbalance to correct, so they often use a "snake" draft (forward, backward, etc.; or T1). Ian Allan argued that a "third-round reversal" (forward, backward, backward, forward, etc.; or T2) would be even more fair. Richman suggested that the fairest way for "captain A" and "captain B" to choose sides for a pick-up game of basketball mirrors T3: captain A has the first, fourth, sixth, and seventh choices, while captain B has the second, third, fifth, and eighth choices. Hash collisions The initial bits of the Thue–Morse sequence are mapped to 0 by a wide class of polynomial hash functions modulo a power of two, which can lead to hash collisions. Riemann zeta function Certain linear combinations of Dirichlet series whose coefficients are terms of the Thue–Morse sequence give rise to identities involving the Riemann zeta function (Tóth, 2022). For instance, sums of the form Σ_{n≥1} tn/nˢ, where tn is the nth term of the Thue–Morse sequence, admit such evaluations; in fact, identities of this kind hold for all s with real part greater than 1. History The Thue–Morse sequence was first studied by Eugène Prouhet in 1851, who applied it to number theory. However, Prouhet did not mention the sequence explicitly; this was left to Axel Thue in 1906, who used it to found the study of combinatorics on words. The sequence was only brought to worldwide attention with the work of Marston Morse in 1921, when he applied it to differential geometry. The sequence has been discovered independently many times, not always by professional research mathematicians; for example, Max Euwe, a chess grandmaster and mathematics teacher, discovered it in 1929 in an application to chess: by using its cube-free property (see above), he showed how to circumvent the threefold repetition rule aimed at preventing infinitely protracted games by declaring repetition of moves a draw. At the time, consecutive identical board states were required to trigger the rule; the rule was later amended to the same board position reoccurring three times at any point, as the sequence shows that the consecutive criterion can be evaded forever. See also Dejean's theorem Fabius function First difference of the Thue–Morse sequence Gray code Komornik–Loreti constant Prouhet–Thue–Morse constant Notes References Further reading External links Allouche, J.-P.; Shallit, J. O. 
The Ubiquitous Prouhet-Thue-Morse Sequence. (contains many applications and some history) Thue–Morse Sequence over (1,2) Reducing the influence of DC offset drift in analog IPs using the Thue-Morse Sequence. A technical application of the Thue–Morse Sequence MusiNum - The Music in the Numbers. Freeware to generate self-similar music based on the Thue–Morse Sequence and related number sequences. Binary sequences Fixed points (mathematics) Parity (mathematics)
Thue–Morse sequence
[ "Mathematics" ]
3,474
[ "Fixed points (mathematics)", "Mathematical analysis", "Topology", "Dynamical systems" ]
492,012
https://en.wikipedia.org/wiki/C4%20carbon%20fixation
C4 carbon fixation or the Hatch–Slack pathway is one of three known photosynthetic processes of carbon fixation in plants. It owes its names to the 1960s discovery by Marshall Davidson Hatch and Charles Roger Slack. C4 fixation is an addition to the ancestral and more common C3 carbon fixation. The main carboxylating enzyme in C3 photosynthesis is called RuBisCO, which catalyses two distinct reactions using either CO2 (carboxylation) or oxygen (oxygenation) as a substrate. RuBisCO oxygenation gives rise to phosphoglycolate, which is toxic and requires the expenditure of energy to recycle through photorespiration. C4 photosynthesis reduces photorespiration by concentrating CO2 around RuBisCO. To enable RuBisCO to work in a cellular environment where there is a lot of carbon dioxide and very little oxygen, C4 leaves generally contain two partially isolated compartments called mesophyll cells and bundle-sheath cells. CO2 is initially fixed in the mesophyll cells in a reaction catalysed by the enzyme PEP carboxylase in which the three-carbon phosphoenolpyruvate (PEP) reacts with CO2 to form the four-carbon oxaloacetic acid (OAA). OAA can then be reduced to malate or transaminated to aspartate. These intermediates diffuse to the bundle sheath cells, where they are decarboxylated, creating a CO2-rich environment around RuBisCO and thereby suppressing photorespiration. The resulting pyruvate (PYR), together with about half of the phosphoglycerate (PGA) produced by RuBisCO, diffuses back to the mesophyll. PGA is then chemically reduced and diffuses back to the bundle sheath to complete the reductive pentose phosphate cycle (RPP). This exchange of metabolites is essential for C4 photosynthesis to work. Additional biochemical steps require more energy in the form of ATP to regenerate PEP, but concentrating CO2 allows high rates of photosynthesis at higher temperatures. Higher CO2 concentration overcomes the reduction of gas solubility with temperature (Henry's law). The CO2-concentrating mechanism also maintains high gradients of CO2 concentration across the stomatal pores. This means that C4 plants have generally lower stomatal conductance, reduced water losses and generally higher water-use efficiency. C4 plants are also more efficient in using nitrogen, since PEP carboxylase is cheaper to make than RuBisCO. However, since the C3 pathway does not require extra energy for the regeneration of PEP, it is more efficient in conditions where photorespiration is limited, typically at low temperatures and in the shade. Discovery The first experiments indicating that some plants do not use C3 carbon fixation but instead produce malate and aspartate in the first step of carbon fixation were done in the 1950s and early 1960s by Hugo Peter Kortschak and Yuri Karpilov. The C4 pathway was elucidated by Marshall Davidson Hatch and Charles Roger Slack, in Australia, in 1966. While Hatch and Slack originally referred to the pathway as the "C4 dicarboxylic acid pathway", it is sometimes called the Hatch–Slack pathway. Anatomy C4 plants often possess a characteristic leaf anatomy called kranz anatomy, from the German word for wreath. Their vascular bundles are surrounded by two rings of cells; the inner ring, called bundle sheath cells, contains starch-rich chloroplasts lacking grana, which differ from those in mesophyll cells present as the outer ring. Hence, the chloroplasts are called dimorphic. The primary function of kranz anatomy is to provide a site in which CO2 can be concentrated around RuBisCO, thereby avoiding photorespiration. 
Mesophyll and bundle sheath cells are connected through numerous cytoplasmic sleeves called plasmodesmata whose permeability at leaf level is called bundle sheath conductance. A layer of suberin is often deposited at the level of the middle lamella (tangential interface between mesophyll and bundle sheath) in order to reduce the apoplastic diffusion of CO2 (called leakage). The carbon concentration mechanism in C4 plants distinguishes their isotopic signature from other photosynthetic organisms. Although most C4 plants exhibit kranz anatomy, there are, however, a few species that operate a limited C4 cycle without any distinct bundle sheath tissue. Suaeda aralocaspica, Bienertia cycloptera, Bienertia sinuspersici and Bienertia kavirense (all chenopods) are terrestrial plants that inhabit dry, salty depressions in the deserts of the Middle East. These plants have been shown to operate single-cell CO2-concentrating mechanisms, which are unique among the known C4 mechanisms. Although the cytology of both genera differs slightly, the basic principle is that fluid-filled vacuoles are employed to divide the cell into two separate areas. Carboxylation enzymes in the cytosol are separated from decarboxylase enzymes and RuBisCO in the chloroplasts. A diffusive barrier is between the chloroplasts (which contain RuBisCO) and the cytosol. This enables a bundle-sheath-type area and a mesophyll-type area to be established within a single cell. Although this does allow a limited C4 cycle to operate, it is relatively inefficient. Much leakage of CO2 from around RuBisCO occurs. There is also evidence of inducible C4 photosynthesis by the non-kranz aquatic macrophyte Hydrilla verticillata under warm conditions, although the mechanism by which CO2 leakage from around RuBisCO is minimised is currently uncertain. Biochemistry In C3 plants, the first step in the light-independent reactions of photosynthesis is the fixation of CO2 by the enzyme RuBisCO to form 3-phosphoglycerate. However, RuBisCO has a dual carboxylase and oxygenase activity. Oxygenation results in part of the substrate being oxidized rather than carboxylated, resulting in loss of substrate and consumption of energy, in what is known as photorespiration. Oxygenation and carboxylation are competitive, meaning that the rate of the reactions depends on the relative concentrations of oxygen and CO2. In order to reduce the rate of photorespiration, C4 plants increase the concentration of CO2 around RuBisCO. To do so two partially isolated compartments differentiate within leaves, the mesophyll and the bundle sheath. Instead of direct fixation by RuBisCO, CO2 is initially incorporated into a four-carbon organic acid (either malate or aspartate) in the mesophyll. The organic acids then diffuse through plasmodesmata into the bundle sheath cells. There, they are decarboxylated creating a CO2-rich environment. The chloroplasts of the bundle sheath cells convert this CO2 into carbohydrates by the conventional C3 pathway. There is large variability in the biochemical features of C4 assimilation, and it is generally grouped in three subtypes, differentiated by the main enzyme used for decarboxylation (NADP-malic enzyme, NADP-ME; NAD-malic enzyme, NAD-ME; and PEP carboxykinase, PEPCK). Since PEPCK is often recruited atop NADP-ME or NAD-ME, it was proposed to classify the biochemical variability in two subtypes. For instance, maize and sugarcane use a combination of NADP-ME and PEPCK, millet uses preferentially NAD-ME, and Megathyrsus maximus uses preferentially PEPCK. 
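The CO2 pump just described can be summarised in a minimal bookkeeping sketch (Python; the rate and leakiness values are assumed, illustrative numbers, not measurements from this article):

vp = 40.0    # assumed PEP carboxylation rate (umol CO2 per m^2 per s)
phi = 0.2    # assumed leakiness: fraction of delivered CO2 escaping the bundle sheath

delivered = vp                        # CO2 pumped into the bundle sheath
net_supply = delivered * (1 - phi)    # CO2 remaining for RuBisCO after leakage
extra_atp = 2 * delivered             # ~2 extra ATP per delivered CO2 (PEP regeneration)
print(net_supply, extra_atp)          # 32.0 80.0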
NADP-ME The first step in the NADP-ME type pathway is the conversion of pyruvate (Pyr) to phosphoenolpyruvate (PEP), by the enzyme pyruvate phosphate dikinase (PPDK). This reaction requires inorganic phosphate and ATP plus pyruvate, producing PEP, AMP, and inorganic pyrophosphate (PPi). The next step is the carboxylation of PEP by the PEP carboxylase enzyme (PEPC) producing oxaloacetate. Both of these steps occur in the mesophyll cells: pyruvate + Pi + ATP → PEP + AMP + PPi; PEP + CO2 → oxaloacetate. PEPC has a low KM for CO2 and, hence, high affinity, and is not confounded by O2, thus it will work even at low concentrations of CO2. The product is usually converted to malate (M), which diffuses to the bundle-sheath cells surrounding a nearby vein. Here, it is decarboxylated by the NADP-malic enzyme (NADP-ME) to produce CO2 and pyruvate. The CO2 is fixed by RuBisCO to produce phosphoglycerate (PGA) while the pyruvate is transported back to the mesophyll cell, together with about half of the phosphoglycerate (PGA). This PGA is chemically reduced in the mesophyll and diffuses back to the bundle sheath where it enters the conversion phase of the Calvin cycle. For each CO2 molecule exported to the bundle sheath the malate shuttle transfers two electrons, and therefore reduces the demand of reducing power in the bundle sheath. NAD-ME Here, the OAA produced by PEPC is transaminated by aspartate aminotransferase to aspartate (ASP) which is the metabolite diffusing to the bundle sheath. In the bundle sheath ASP is transaminated again to OAA and then undergoes a futile reduction and oxidative decarboxylation to release CO2. The resulting pyruvate is transaminated to alanine, diffusing to the mesophyll. Alanine is finally transaminated to pyruvate (PYR) which can be regenerated to PEP by PPDK in the mesophyll chloroplasts. This cycle bypasses the reaction of malate dehydrogenase in the mesophyll and therefore does not transfer reducing equivalents to the bundle sheath. PEPCK In this variant the OAA produced by aspartate aminotransferase in the bundle sheath is decarboxylated to PEP by PEPCK. The fate of PEP is still debated. The simplest explanation is that PEP would diffuse back to the mesophyll to serve as a substrate for PEPC. Because PEPCK uses only one ATP molecule, the regeneration of PEP through PEPCK would theoretically increase the photosynthetic efficiency of this subtype, however this has never been measured. An increase in relative expression of PEPCK has been observed under low light, and it has been proposed to play a role in facilitating the balancing of energy requirements between mesophyll and bundle sheath. Metabolite exchange While in C3 photosynthesis each chloroplast is capable of completing light reactions and dark reactions, in C4 plants chloroplasts differentiate in two populations, contained in the mesophyll and bundle sheath cells. The division of the photosynthetic work between two types of chloroplasts results inevitably in a prolific exchange of intermediates between them. The fluxes are large and can be up to ten times the rate of gross assimilation. The type of metabolite exchanged and the overall rate will depend on the subtype. To reduce product inhibition of photosynthetic enzymes (for instance PEPC) concentration gradients need to be as low as possible. 
This requires increasing the conductance of metabolites between mesophyll and bundle sheath, but this would also increase the retro-diffusion of CO2 out of the bundle sheath, resulting in an inherent and inevitable trade-off in the optimisation of the CO2-concentrating mechanism. Light harvesting and light reactions To meet the NADPH and ATP demands in the mesophyll and bundle sheath, light needs to be harvested and shared between two distinct electron transfer chains. ATP may be produced in the bundle sheath mainly through cyclic electron flow around Photosystem I, or in the mesophyll mainly through linear electron flow depending on the light available in the bundle sheath or in the mesophyll. The relative requirement of ATP and NADPH in each type of cells will depend on the photosynthetic subtype. The apportioning of excitation energy between the two cell types will influence the availability of ATP and NADPH in the mesophyll and bundle sheath. For instance, green light is not strongly absorbed by mesophyll cells and can preferentially excite bundle sheath cells, or vice versa for blue light. Because bundle sheaths are surrounded by mesophyll, light harvesting in the mesophyll will reduce the light available to reach bundle sheath cells. Also, the bundle sheath size limits the amount of light that can be harvested. Efficiency Different formulations of efficiency are possible depending on which outputs and inputs are considered. For instance, average quantum efficiency is the ratio between gross assimilation and either absorbed or incident light intensity. Large variability of measured quantum efficiency is reported in the literature between plants grown in different conditions and classified in different subtypes, but the underpinnings are still unclear. One of the components of quantum efficiency is the efficiency of dark reactions, biochemical efficiency, which is generally expressed in reciprocal terms as the ATP cost of gross assimilation (ATP/GA). In C3 photosynthesis ATP/GA depends mainly on the CO2 and O2 concentrations at the carboxylating sites of RuBisCO. When CO2 concentration is high and O2 concentration is low, photorespiration is suppressed and C3 assimilation is fast and efficient, with ATP/GA approaching the theoretical minimum of 3. In C4 photosynthesis the CO2 concentration at the RuBisCO carboxylating sites is mainly the result of the operation of the CO2-concentrating mechanisms, which cost circa an additional 2 ATP/GA but make efficiency relatively insensitive to external CO2 concentration in a broad range of conditions. Biochemical efficiency depends mainly on the speed of CO2 delivery to the bundle sheath, and will generally decrease under low light when the PEP carboxylation rate decreases, lowering the ratio of CO2/O2 concentrations at the carboxylating sites of RuBisCO. The key parameter defining how much efficiency will decrease under low light is bundle sheath conductance. Plants with higher bundle sheath conductance will be facilitated in the exchange of metabolites between the mesophyll and bundle sheath and will be capable of high rates of assimilation under high light. However, they will also have high rates of CO2 retro-diffusion from the bundle sheath (called leakage), which will increase photorespiration and decrease biochemical efficiency under dim light. This represents an inherent and inevitable trade-off in the operation of C4 photosynthesis. C4 plants have an outstanding capacity to attune bundle sheath conductance. 
Interestingly, bundle sheath conductance is downregulated in plants grown under low light and in plants grown under high light subsequently transferred to low light, as it occurs in crop canopies where older leaves are shaded by new growth. Evolution and advantages C4 plants have a competitive advantage over plants possessing the more common C3 carbon fixation pathway under conditions of drought, high temperatures, and nitrogen or CO2 limitation. When grown in the same environment, at 30 °C, C3 grasses lose approximately 833 molecules of water per CO2 molecule that is fixed, whereas C4 grasses lose only 277. This increased water use efficiency of C4 grasses means that soil moisture is conserved, allowing them to grow for longer in arid environments. C4 carbon fixation has evolved on at least 62 independent occasions in 19 different families of plants, making it a prime example of convergent evolution. This convergence may have been facilitated by the fact that many potential evolutionary pathways to a C4 phenotype exist, many of which involve initial evolutionary steps not directly related to photosynthesis. C4 plants arose during the Oligocene (precisely when is difficult to determine) and were becoming ecologically significant in the early Miocene. C4 metabolism in grasses originated when their habitat migrated from the shady forest undercanopy to more open environments, where the high sunlight gave it an advantage over the C3 pathway. Drought was not necessary for its innovation; rather, the increased parsimony in water use was a byproduct of the pathway and allowed C4 plants to more readily colonize arid environments. Today, C4 plants represent about 5% of Earth's plant biomass and 3% of its known plant species. Despite this scarcity, they account for about 23% of terrestrial carbon fixation. Increasing the proportion of C4 plants on earth could assist biosequestration of CO2 and represent an important climate change avoidance strategy. Present-day C4 plants are concentrated in the tropics and subtropics (below latitudes of 45 degrees) where the high air temperature increases rates of photorespiration in C3 plants. Plants that use C4 carbon fixation About 8,100 plant species use C4 carbon fixation, which represents about 3% of all terrestrial species of plants. All these 8,100 species are angiosperms. C4 carbon fixation is more common in monocots compared with dicots, with 40% of monocots using the C4 pathway, compared with only 4.5% of dicots. Despite this, only three families of monocots use C4 carbon fixation compared to 15 dicot families. Of the monocot clades containing C4 plants, the grass (Poaceae) species use the C4 photosynthetic pathway most. 46% of grasses are C4 and together account for 61% of C4 species. C4 has arisen independently in the grass family some twenty or more times, in various subfamilies, tribes, and genera, including the Andropogoneae tribe which contains the food crops maize, sugar cane, and sorghum. Various kinds of millet are also C4. Of the dicot clades containing C4 species, the order Caryophyllales contains the most species. Of the families in the Caryophyllales, the Chenopodiaceae use C4 carbon fixation the most, with 550 out of 1,400 species using it. About 250 of the 1,000 species of the related Amaranthaceae also use C4. Members of the sedge family Cyperaceae, and members of numerous families of eudicots – including Asteraceae (the daisy family), Brassicaceae (the cabbage family), and Euphorbiaceae (the spurge family) – also use C4. 
No large trees (above 15 m in height) use C4; however, a number of small trees or shrubs smaller than 10 m exist which do: six species of Euphorbiaceae all native to Hawaii and two species of Amaranthaceae growing in deserts of the Middle East and Asia. Converting C3 plants to C4 Given the advantages of C4, a group of scientists from institutions around the world are working on the C4 Rice Project to produce a strain of rice, naturally a C3 plant, that uses the C4 pathway by studying the plants maize and Brachypodium. As rice is the world's most important human food—it is the staple food for more than half the planet—having rice that is more efficient at converting sunlight into grain could have significant global benefits towards improving food security. The team claims C4 rice could produce up to 50% more grain—and be able to do it with less water and nutrients. The researchers have already identified genes needed for C4 photosynthesis in rice and are now looking towards developing a prototype C4 rice plant. In 2012, the Government of the United Kingdom along with the Bill & Melinda Gates Foundation provided US$14 million over three years towards the C4 Rice Project at the International Rice Research Institute. In 2019, the Bill & Melinda Gates Foundation granted another US$15 million to the Oxford-University-led C4 Rice Project. The goal of the 5-year project is to have experimental field plots up and running in Taiwan by 2024. C2 photosynthesis, an intermediate step between C3 and Kranz C4, may be preferred over C4 for rice conversion. The simpler C2 system is less optimized for high light and high temperature conditions than C4, but has the advantage of requiring fewer steps of genetic engineering and performing better than C3 under all temperatures and light levels. In 2021, the UK Government provided £1.2 million for studying C2 engineering. See also C2 photosynthesis CAM photosynthesis C3 photosynthesis References External links Khan Academy, video lecture Photosynthesis
C4 carbon fixation
[ "Chemistry", "Biology" ]
4,165
[ "Biochemistry", "Photosynthesis" ]
492,043
https://en.wikipedia.org/wiki/Photorespiration
Photorespiration (also known as the oxidative photosynthetic carbon cycle or C2 cycle) refers to a process in plant metabolism where the enzyme RuBisCO oxygenates RuBP, wasting some of the energy produced by photosynthesis. The desired reaction is the addition of carbon dioxide to RuBP (carboxylation), a key step in the Calvin–Benson cycle, but approximately 25% of reactions by RuBisCO instead add oxygen to RuBP (oxygenation), creating a product that cannot be used within the Calvin–Benson cycle. This process lowers the efficiency of photosynthesis, potentially lowering photosynthetic output by 25% in C3 plants. Photorespiration involves a complex network of enzyme reactions that exchange metabolites between chloroplasts, leaf peroxisomes and mitochondria. The oxygenation reaction of RuBisCO is a wasteful process because 3-phosphoglycerate is created at a lower rate and higher metabolic cost compared with RuBP carboxylase activity. While photorespiratory carbon cycling results in the formation of G3P eventually, around 25% of carbon fixed by photorespiration is re-released as CO2, and nitrogen as ammonia. Ammonia must then be detoxified at a substantial cost to the cell. Photorespiration also incurs a direct cost of one ATP and one NAD(P)H. While it is common to refer to the entire process as photorespiration, technically the term refers only to the metabolic network which acts to rescue the products of the oxygenation reaction (phosphoglycolate). Photorespiratory reactions Addition of molecular oxygen to ribulose-1,5-bisphosphate produces 3-phosphoglycerate (PGA) and 2-phosphoglycolate (2PG, or PG). PGA is the normal product of carboxylation, and productively enters the Calvin cycle. Phosphoglycolate, however, inhibits certain enzymes involved in photosynthetic carbon fixation (hence it is often said to be an 'inhibitor of photosynthesis'). It is also relatively difficult to recycle: in higher plants it is salvaged by a series of reactions in the peroxisome, mitochondria, and again in the peroxisome where it is converted into glycerate. Glycerate reenters the chloroplast by the same transporter that exports glycolate. A cost of 1 ATP is associated with the conversion to 3-phosphoglycerate (PGA) (phosphorylation) within the chloroplast, which is then free to re-enter the Calvin cycle. Several costs are associated with this metabolic pathway; one is the production of hydrogen peroxide in the peroxisome (associated with the conversion of glycolate to glyoxylate). Hydrogen peroxide is a dangerously strong oxidant which must be immediately split into water and oxygen by the enzyme catalase. The conversion of two 2-carbon glycine molecules to one serine in the mitochondria by the enzyme glycine decarboxylase is a key step, which releases CO2 and NH3, and reduces NAD to NADH. Thus, one CO2 molecule is produced for every three O2 molecules consumed (two deriving from RuBisCO and one from peroxisomal oxidations). The assimilation of NH3 occurs via the GS-GOGAT cycle, at a cost of one ATP and one NADPH. Cyanobacteria have three possible pathways through which they can metabolise 2-phosphoglycolate. They are unable to grow if all three pathways are knocked out, despite having a carbon concentrating mechanism that should dramatically lower the rate of photorespiration (see below). 
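The 25% carbon loss quoted above follows from simple bookkeeping, sketched here in Python:

# Two 2-carbon phosphoglycolate molecules are salvaged into one 3-carbon
# glycerate, releasing one CO2, so 25% of the diverted carbon is lost.
carbons_in = 2 * 2                # two phosphoglycolate molecules, 2 carbons each
carbons_recovered = 3             # one glycerate molecule
co2_released = carbons_in - carbons_recovered
print(co2_released / carbons_in)  # 0.25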
Substrate specificity of RuBisCO The oxidative photosynthetic carbon cycle reaction is catalyzed by RuBP oxygenase activity: RuBP + O2 → phosphoglycolate + 3-phosphoglycerate + 2H+. During the catalysis by RuBisCO, an 'activated' intermediate is formed (an enediol intermediate) in the RuBisCO active site. This intermediate is able to react with either CO2 or O2. It has been demonstrated that the specific shape of the RuBisCO active site acts to encourage reactions with CO2. Although there is a significant "failure" rate (~25% of reactions are oxygenation rather than carboxylation), this represents significant favouring of CO2, when the relative abundance of the two gases is taken into account: in the current atmosphere, O2 is approximately 500 times more abundant, and in solution O2 is 25 times more abundant than CO2. The ability of RuBisCO to specify between the two gases is known as its selectivity factor (or Srel), and it varies between species, with angiosperms more efficient than other plants, but with little variation among the vascular plants. A suggested explanation of RuBisCO's inability to discriminate completely between CO2 and O2 is that it is an evolutionary relic: the early atmosphere in which primitive plants originated contained very little oxygen, so the early evolution of RuBisCO was not influenced by its ability to discriminate between O2 and CO2. Conditions which affect photorespiration Photorespiration rates are affected by: Altered substrate availability: lowered CO2 or increased O2. Factors which influence this include the atmospheric abundance of the two gases, the supply of the gases to the site of fixation (i.e. in land plants: whether the stomata are open or closed), the length of the liquid phase (how far these gases have to diffuse through water in order to reach the reaction site). For example, when the stomata are closed to prevent water loss during drought: this limits the CO2 supply, while O2 production within the leaf will continue. In algae (and plants which photosynthesise underwater), gases have to diffuse significant distances through water, which results in a decrease in the availability of CO2 relative to O2. It has been predicted that the increase in ambient CO2 concentrations predicted over the next 100 years may lower the rate of photorespiration in most plants by around 50%. However, at temperatures higher than the photosynthetic thermal optimum, the increases in turnover rate are not translated into increased CO2 assimilation because of the decreased affinity of RuBisCO for CO2. Increased temperature At higher temperatures RuBisCO is less able to discriminate between CO2 and O2. This is because the enediol intermediate is less stable. Increasing temperatures also lower the solubility of CO2, thus lowering the concentration of CO2 relative to O2 in the chloroplast. Biological adaptation to minimize photorespiration The vast majority of plants are C3, meaning they photorespire when necessary. Certain species of plants or algae have mechanisms to lower the uptake of molecular oxygen by RuBisCO. These are commonly referred to as carbon concentrating mechanisms (CCMs), as they increase the concentration of CO2 so that RuBisCO is less likely to produce glycolate through reaction with O2. Biochemical carbon concentrating mechanisms Biochemical CCMs concentrate carbon dioxide in one temporal or spatial region, through metabolite exchange. C4 and CAM photosynthesis both use the enzyme phosphoenolpyruvate carboxylase (PEPC) to add CO2 to a 4-carbon sugar. PEPC is faster than RuBisCO, and more selective for CO2. 
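The competition between the two gases can be put into numbers with the usual relative-specificity model, v_c/v_o = Srel × [CO2]/[O2]. A Python sketch (the Srel value below is an assumed, illustrative figure of the typical order for land plants, not a constant taken from this article):

S_rel = 80       # assumed selectivity factor (dimensionless, illustrative)
o2_to_co2 = 25   # dissolved O2 is ~25x more abundant than CO2 (from the text)

carbox_per_oxygenation = S_rel / o2_to_co2          # carboxylations per oxygenation
oxygenation_fraction = 1 / (1 + carbox_per_oxygenation)
print(f"~{oxygenation_fraction:.0%} of reactions are oxygenations")  # ~24%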
C4 C4 plants capture carbon dioxide in their mesophyll cells (using an enzyme called phosphoenolpyruvate carboxylase which catalyzes the combination of carbon dioxide with a compound called phosphoenolpyruvate (PEP)), forming oxaloacetate. This oxaloacetate is then converted to malate and is transported into the bundle sheath cells (site of carbon dioxide fixation by RuBisCO) where oxygen concentration is low to avoid photorespiration. Here, carbon dioxide is removed from the malate and combined with RuBP by RuBisCO in the usual way, and the Calvin cycle proceeds as normal. The CO2 concentrations in the bundle sheath are approximately 10–20 fold higher than the concentration in the mesophyll cells. This ability to avoid photorespiration makes these plants more hardy than other plants in dry and hot environments, wherein stomata are closed and internal carbon dioxide levels are low. Under these conditions, photorespiration does occur in C4 plants, but at a much lower level compared with C3 plants in the same conditions. C4 plants include sugar cane, corn (maize), and sorghum. CAM (Crassulacean acid metabolism) CAM plants, such as cacti and succulent plants, also use the enzyme PEP carboxylase to capture carbon dioxide, but only at night. Crassulacean acid metabolism allows plants to conduct most of their gas exchange in the cooler night-time air, sequestering carbon in 4-carbon sugars which can be released to the photosynthesizing cells during the day. This allows CAM plants to minimize water loss (transpiration) by maintaining closed stomata during the day. CAM plants usually display other water-saving characteristics, such as thick cuticles, stomata with small apertures, and typically lose around 1/3 of the amount of water per CO2 fixed. C2 C2 photosynthesis (also called the glycine shuttle and the photorespiratory CO2 pump) is a CCM that works by making use of – as opposed to avoiding – photorespiration. It performs carbon refixation by delaying the breakdown of photorespired glycine, so that the molecule is shuttled from the mesophyll into the bundle sheath. Once there, the glycine is decarboxylated in mitochondria as usual, releasing CO2 and concentrating it to triple the usual concentration. Although C2 photosynthesis is traditionally understood as an intermediate step between C3 and C4, a wide variety of plant lineages do end up in the C2 stage without further evolving, showing that it is an evolutionary steady state of its own. C2 may be easier to engineer into crops, as the phenotype requires fewer anatomical changes to produce. Algae There have been some reports of algae operating a biochemical CCM: shuttling metabolites within single cells to concentrate CO2 in one area. This process is not fully understood. Biophysical carbon-concentrating mechanisms This type of carbon-concentrating mechanism (CCM) relies on a contained compartment within the cell into which CO2 is shuttled, and where RuBisCO is highly expressed. In many species, biophysical CCMs are only induced under low carbon dioxide concentrations. Biophysical CCMs are more evolutionarily ancient than biochemical CCMs. There is some debate as to when biophysical CCMs first evolved, but it is likely to have been during a period of low carbon dioxide, after the Great Oxygenation Event (2.4 billion years ago). Low CO2 periods occurred around 750, 650, and 320–270 million years ago. 
Eukaryotic algae
In nearly all species of eukaryotic algae (Chloromonas being one notable exception), upon induction of the CCM, ~95% of RuBisCO is densely packed into a single subcellular compartment: the pyrenoid. Carbon dioxide is concentrated in this compartment using a combination of CO2 pumps, bicarbonate pumps, and carbonic anhydrases. The pyrenoid is not a membrane-bound compartment, but is found within the chloroplast, often surrounded by a starch sheath (which is not thought to serve a function in the CCM).

Hornworts
Certain species of hornwort are the only land plants known to have a biophysical CCM, involving concentration of carbon dioxide within pyrenoids in their chloroplasts.

Cyanobacteria
Cyanobacterial CCMs are similar in principle to those found in eukaryotic algae and hornworts, but the compartment into which carbon dioxide is concentrated has several structural differences. Instead of the pyrenoid, cyanobacteria contain carboxysomes, which have a protein shell and linker proteins packing RuBisCO inside with a very regular structure. Cyanobacterial CCMs are much better understood than those found in eukaryotes, partly due to the ease of genetic manipulation of prokaryotes.

Possible purpose of photorespiration
Lowering photorespiration may not result in increased growth rates for plants. Photorespiration may be necessary for the assimilation of nitrate from soil. Thus, a lowering of photorespiration by genetic engineering or because of increasing atmospheric carbon dioxide may not benefit plants as has been proposed. Several physiological processes may be responsible for linking photorespiration and nitrogen assimilation. Photorespiration increases the availability of NADH, which is required for the conversion of nitrate to nitrite. Certain nitrite transporters also transport bicarbonate, and elevated CO2 has been shown to suppress nitrite transport into chloroplasts. However, in an agricultural setting, replacing the native photorespiration pathway with an engineered synthetic pathway to metabolize glycolate in the chloroplast resulted in a 40 percent increase in crop growth.

Although photorespiration is much lower in C4 species, it is still an essential pathway: mutants without functioning 2-phosphoglycolate metabolism cannot grow in normal conditions. One mutant was shown to rapidly accumulate glycolate.

Although the functions of photorespiration remain controversial, it is widely accepted that this pathway influences a wide range of processes, from bioenergetics, photosystem II function, and carbon metabolism to nitrogen assimilation and respiration. The oxygenase reaction of RuBisCO may prevent CO2 depletion near its active sites and contributes to the regulation of CO2 concentration in the atmosphere. The photorespiratory pathway is a major source of hydrogen peroxide (H2O2) in photosynthetic cells. Through H2O2 production and pyridine nucleotide interactions, photorespiration makes a key contribution to cellular redox homeostasis. In so doing, it influences multiple signalling pathways, in particular those that govern plant hormonal responses controlling growth, environmental and defense responses, and programmed cell death.

It has been postulated that photorespiration may function as a "safety valve", preventing the excess of reductive potential coming from an over-reduced NADPH pool from reacting with oxygen and producing free radicals (oxidants), as these can damage the metabolic functions of the cell by subsequent oxidation of membrane lipids, proteins or nucleotides.
The mutants deficient in photorespiratory enzymes are characterized by a high redox level in the cell, impaired stomatal regulation, and accumulation of formate.

See also
C3 photosynthesis
C4 photosynthesis
CAM photosynthesis

References

Further reading

Plant physiology
Photosynthesis
Metabolism
Photorespiration
[ "Chemistry", "Biology" ]
3,088
[ "Plant physiology", "Plants", "Photosynthesis", "Cellular processes", "Biochemistry", "Metabolism" ]
492,429
https://en.wikipedia.org/wiki/Electret
An electret (formed as a portmanteau of electr- from "electricity" and -et from "magnet") is a dielectric material that has a quasi-permanent electrical polarisation. An electret has internal and external electric fields, and is the electrostatic equivalent of a permanent magnet. The term electret was coined by Oliver Heaviside for a (typically dielectric) material which has electrical charges of opposite sign at its extremities.

Some materials with electret properties were already known to science and had been studied since the early 1700s. One example is the electrophorus, a device consisting of a slab with electret properties and a separate metal plate. The electrophorus was originally invented by Johan Carl Wilcke in Sweden in 1762 and improved by Alessandro Volta in Italy in 1775.

The first documented case of production was by Mototarô Eguchi in 1925, who melted a suitable dielectric material, such as a polymer or wax containing polar molecules, and then allowed it to solidify in a powerful electric field. The polar molecules of the dielectric align themselves to the direction of the electric field, producing a dipole electret with a permanent polarization. Modern electrets are sometimes made by embedding excess charges into a highly insulating dielectric, e.g. using an electron beam, corona discharge, injection from an electron gun, electric breakdown across a gap, or a dielectric barrier.

Electret types
There are two types of electrets:
Real-charge electrets, which contain excess free charges such as electrons or electron holes of one or both polarities, either on the dielectric's surfaces (a surface charge) or within the dielectric's volume (a space charge). Space-charge electrets with internal bipolar charges are known as ferroelectrets.
Oriented-dipole electrets, which contain oriented (aligned) dipoles. These contain bound charges at their surface, which are not free to move around. They are similar to ferroelectric materials, and occur only in materials which have no inversion symmetry, so they also display piezoelectricity.

Similarity to magnets
Electrets, like magnets, are dipoles. Another similarity is the fields: they produce an electrostatic field (as opposed to a magnetic field) outside the material. When a magnet and an electret are near one another, a rather unusual phenomenon occurs: while stationary, neither has any effect on the other. However, when an electret is moved with respect to a magnetic pole, a force is felt which acts perpendicular to the magnetic field, pushing the electret along a path 90 degrees to the expected direction of "push" as would be felt with another magnet.

Similarity to capacitors
There is a similarity between an electret and the dielectric layer used in capacitors; the difference is that dielectrics in capacitors have an induced polarisation that is only transient, dependent on the potential applied on the dielectric, while dielectrics with electret properties exhibit quasi-permanent charge storage or polarisation. Some materials also display ferroelectricity (i.e. they react to external fields with a hysteresis of the polarisation). Ferroelectrics can retain the polarisation permanently because they are in thermodynamic equilibrium, and thus are used in ferroelectric capacitors. Although electrets are only in a metastable state, those fashioned from very low leakage materials can retain excess charge or polarisation for many years.
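How long such a metastable state persists can be estimated from the dielectric (Maxwell) relaxation time τ = ε0 εr ρ, where ρ is the bulk resistivity: the stored charge decays roughly as exp(−t/τ). The sketch below is a minimal illustration of this estimate; the effective resistivity used for PTFE is an assumed order-of-magnitude figure of the kind inferred from long-term charge retention, not a standard datasheet value.

# Minimal sketch: dielectric (Maxwell) relaxation time of an electret,
# tau = eps0 * eps_r * rho, with stored charge decaying as exp(-t / tau).
# The PTFE numbers below are assumed order-of-magnitude values.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relaxation_time_years(eps_r: float, resistivity_ohm_m: float) -> float:
    tau_seconds = EPS0 * eps_r * resistivity_ohm_m
    return tau_seconds / (365.25 * 24 * 3600)

# PTFE: eps_r ~ 2.1; effective resistivity ~ 1e21 ohm*m (assumed)
print(f"{relaxation_time_years(2.1, 1e21):.0f} years")  # several hundred years

With these assumptions τ comes out at several hundred years, consistent with the charge-retention figures for high-resistivity polymers quoted below.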
An electret microphone is a type of condenser microphone that eliminates the need for a polarisation voltage from the power supply by using a permanently charged material.

Materials
Electret materials are quite common in nature. Quartz and other forms of silicon dioxide, for example, are naturally occurring electrets. Today, most electrets are made from synthetic polymers, e.g. fluoropolymers, polypropylene, polyethylene terephthalate (PET), etc. Real-charge electrets contain either positive or negative excess charges or both, while oriented-dipole electrets contain oriented dipoles. The quasi-permanent internal or external electric fields created by electrets can be exploited in various applications.

Manufacture
Bulk electrets can be prepared by heating or melting the material, then cooling it in the presence of a strong electric field. The electric field repositions the charge carriers or aligns the dipoles within the material. When the material cools, solidification "freezes" the dipoles in position. Materials used for electrets are usually waxes, polymers or resins. One of the earliest recipes consists of 45% carnauba wax, 45% white rosin, and 10% white beeswax, melted, mixed together, and left to cool in a static electric field of several kilovolts/cm. The thermo-dielectric effect, related to this process, was first described by the Brazilian researcher Joaquim Costa Ribeiro.

Electrets can also be manufactured by embedding excess negative charge within a dielectric using a particle accelerator, or by stranding charges on, or near, the surface using high-voltage corona discharges, a process called corona charging.

Excess charge within an electret decays exponentially. The decay constant is a function of the material's relative dielectric constant and its bulk resistivity. Materials with extremely high resistivity, such as PTFE, may retain excess charge for many hundreds of years. Most commercially produced electrets are based on fluoropolymers (e.g. amorphous Teflon) machined to thin films.

See also
Oliver Heaviside
Corona wire
Telephone
Electret microphone
Electromotive force
Tip ring sleeve
Ferroelectricity

References

Patents
Nowlin, Thomas E., and Curt R. Raschke, "A process for making polymer electrets"

Further reading
A discussion on polarization, thermoelectrets, photoelectrets and applications

Condensed matter physics
Electrical phenomena
Dielectrics
Electrostatics
Electret
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,279
[ "Physical phenomena", "Phases of matter", "Materials science", "Materials", "Electrical phenomena", "Condensed matter physics", "Dielectrics", "Matter" ]
493,521
https://en.wikipedia.org/wiki/Flux%20%28metallurgy%29
In metallurgy, a flux is a chemical reducing agent, flowing agent, or purifying agent. Fluxes may have more than one function at a time. They are used in both extractive metallurgy and metal joining.

Some of the earliest known fluxes were sodium carbonate, potash, charcoal, coke, borax, lime, lead sulfide and certain minerals containing phosphorus. Iron ore was also used as a flux in the smelting of copper. These agents served various functions, the simplest being a reducing agent, which prevented oxides from forming on the surface of the molten metal, while others absorbed impurities into slag, which could be scraped off molten metal.

Fluxes are also used in foundries for removing impurities from molten nonferrous metals such as aluminium, or for adding desirable trace elements such as titanium. As reducing agents, fluxes facilitate soldering, brazing, and welding by removing oxidation from the metals to be joined. In some applications molten flux also serves as a heat-transfer medium, facilitating heating of the joint by the soldering tool.

Uses
Metal joining
In high-temperature metal joining processes (welding, brazing and soldering), fluxes are nearly inert at room temperature, but become strongly reducing at elevated temperatures, preventing oxidation of the base and filler materials. The role of flux is typically dual: dissolving the oxides already present on the metal surface to facilitate wetting by molten metal, and acting as an oxygen barrier by coating the hot surface, preventing oxidation. For example, tin-lead solder attaches very well to copper metal, but poorly to its oxides, which form quickly at soldering temperatures. By preventing the formation of metal oxides, flux enables the solder to adhere to the clean metal surface, rather than forming beads, as it would on an oxidized surface.

Soldering
In soldering metals, flux serves a threefold purpose: it removes any oxidized metal from the surfaces to be soldered, seals out air thus preventing further oxidation, and improves the wetting characteristics of the liquid solder. Some fluxes are corrosive, so the parts have to be cleaned with a damp sponge or other absorbent material after soldering to prevent damage. Several types of flux are used in electronics. A number of standards exist to define the various flux types. The principal standard is J-STD-004. Various tests, including the ROSE test, may be used after soldering to check for the presence of ionic or other contaminants that could cause short circuits or other problems.

Brazing and silver soldering
Brazing (sometimes known as silver soldering or hard soldering) requires a higher temperature than soft soldering (> 450 °C). As well as removing existing oxides, rapid oxidation of the metal at the elevated temperatures has to be avoided. This means that fluxes need to be more aggressive and to provide a physical barrier. Traditionally borax was used as a flux for brazing, but there are now many different fluxes available, often using active chemicals such as fluorides as well as wetting agents. Many of these chemicals are toxic and due care should be taken during their use.

Smelting
In the process of smelting, inorganic chlorides, fluorides (see fluorite), limestone and other materials are designated as "fluxes" when added to the contents of a smelting furnace or a cupola for the purpose of purging the metal of chemical impurities such as phosphorus, and of rendering slag more liquid at the smelting temperature. Slag is a liquid mixture of ash, flux, and other impurities.
This reduction of slag viscosity with temperature, increasing the flow of slag in smelting, is the origin of the word flux in metallurgy. The flux most commonly used in iron and steel furnaces is limestone, which is charged in the proper proportions with the iron and fuel.

Drawbacks
Fluxes have several serious drawbacks:
Corrosivity, which is mostly due to the aggressive compounds of the activators; hygroscopic properties of the flux residues may aggravate the effects
Interference with test equipment, which is due to the insulating residues deposited on the test contacts on electronic circuit boards
Interference with machine vision systems when the layer of flux or its remains is too thick or improperly located
Contamination of sensitive parts, e.g. facets of laser diodes, contacts of connectors and mechanical switches, and MEMS assemblies
Deterioration of electrical properties of printed circuit boards, as soldering temperatures are above the glass transition temperature of the board material and flux components (e.g. glycols, or chloride and bromide ions) can diffuse into its matrix; e.g. water-soluble fluxes containing polyethylene glycol were demonstrated to have such impact
Deterioration of high-frequency circuit performance by flux residues
Deterioration of surface insulation resistance, which tends to be as much as three orders of magnitude lower than the bulk resistance of the material
Electromigration and growth of whiskers between nearby traces, aided by ionic residues, surface moisture and a bias voltage
The fumes liberated during soldering have adverse health effects, and volatile organic compounds can be outgassed during processing
The solvents required for post-soldering cleaning of the boards are expensive and may have adverse environmental impact

In special cases the drawbacks are sufficiently serious to warrant using fluxless techniques.

Dangers
Acid flux types (not used in electronics) may contain hydrochloric acid, zinc chloride or ammonium chloride, which are harmful to humans. Therefore, flux should be handled with gloves and goggles, and used with adequate ventilation. Prolonged exposure to rosin fumes released during soldering can cause occupational asthma (formerly called colophony disease in this context) in sensitive individuals, although it is not known which component of the fumes causes the problem.

While molten solder has low tendency to adhere to organic materials, molten fluxes, especially of the resin/rosin type, adhere well to fingers. A mass of hot sticky flux can transfer more heat to skin and cause more serious burns than a comparable particle of non-adhering molten metal, which can be quickly shaken off. In this regard, molten flux is similar to molten hot glue.

Fluxless techniques
In some cases the presence of flux is undesirable; flux traces interfere with e.g. precision optics or MEMS assemblies. Flux residues also tend to outgas in vacuum and space applications, and traces of water, ions and organic compounds may adversely affect long-term reliability of non-hermetic packages. Trapped flux residues are also the cause of most voids in the joints. Flux-less techniques are therefore desirable there.

For successful soldering and brazing, the oxide layer has to be removed from both the surfaces of the materials and the surface of the filler metal preform; the exposed surfaces also have to be protected against oxidation during heating. Flux-coated preforms can also be used to eliminate flux residue entirely from the soldering process.
Protection of the surfaces against further oxidation is relatively simple, by using vacuum or inert atmosphere. Removal of the native oxide layer is more troublesome; physical or chemical cleaning methods have to be employed and the surfaces can be protected by e.g. gold plating. The gold layer has to be sufficiently thick and non-porous to provide protection for reasonable storage time. Thick gold metallization also limits the choice of soldering alloys, as tin-based solders dissolve gold and form brittle intermetallics, embrittling the joint. Thicker gold coatings are usually limited to use with indium-based solders and solders with high gold content.

Removal of the oxides from the solder preform is also troublesome. Fortunately some alloys are able to dissolve the surface oxides in their bulk when superheated by several degrees above their melting point; the Sn-Cu1 and Sn-Ag4 require superheating by 18–19 °C, the Sn-Sb5 requires as little as 10 °C, but the Sn-Pb37 alloy requires 77 °C above its melting point to dissolve its surface oxide. The self-dissolved oxide degrades the solder's properties and increases its viscosity in the molten state, however, so this approach is not optimal. Solder preforms are preferred to have a high volume-to-surface ratio, as that limits the amount of oxide being formed. Pastes have to contain smooth spherical particles; preforms are ideally made of round wire. Problems with preforms can also be sidestepped by depositing the solder alloy directly on the surfaces of the parts or substrates, by chemical or electrochemical means for example.

A protective atmosphere with chemically reducing properties can be beneficial in some cases. Molecular hydrogen can be used to reduce surface oxides of tin and indium at temperatures above 430 and 470 °C; for zinc the temperature is above 500 °C, where zinc is already becoming volatilized. (At lower temperatures the reaction speed is too slow for practical applications.) Very low partial pressures of oxygen and water vapor have to be achieved for the reaction to proceed.

Other reactive atmospheres are also in use. Vapors of formic acid and acetic acid are the most commonly used. Carbon monoxide and halogen gases (for example carbon tetrafluoride, sulfur hexafluoride, or dichlorodifluoromethane) require fairly high temperatures for several minutes to be effective.

Atomic hydrogen is much more reactive than molecular hydrogen. In contact with surface oxides it forms hydroxides, water, or hydrogenated complexes, which are volatile at soldering temperatures. A practical dissociation method is an electrical discharge. Argon-hydrogen gas compositions with hydrogen concentration below the lower flammable limit can be used, eliminating the safety issues. The operation has to be performed at low pressure, as the stability of atomic hydrogen at atmospheric pressure is insufficient. Such hydrogen plasma can be used for fluxless reflow soldering.

Active atmospheres are relatively common in furnace brazing; due to the high process temperatures the reactions are reasonably fast. The active ingredients are usually carbon monoxide (possibly in the form of combusted fuel gas) and hydrogen. Thermal dissociation of ammonia yields an inexpensive mixture of hydrogen and nitrogen.

Bombardment with atomic particle beams can remove surface layers at a rate of tens of nanometers per minute. The addition of hydrogen to the plasma augments the removal efficiency by chemical mechanisms.
Mechanical agitation is another possibility for disrupting the oxide layer. Ultrasound can be used for assisting tinning and soldering; an ultrasonic transducer can be mounted on the soldering iron, in a solder bath, or in the wave for wave soldering. The oxide disruption and removal involves cavitation effects between the molten solder and the base metal surface. A common application of ultrasound fluxing is in tinning of passive parts (active parts do not cope well with the mechanical stresses involved); even aluminium can be tinned this way. The parts can then be soldered or brazed conventionally.

Mechanical rubbing of a heated surface with molten solder can be used for coating the surface. Both surfaces to be joined can be prepared this way, then placed together and reheated. This technique was formerly used to repair small damages on aluminium aircraft skins.

A very thin layer of zinc can be used for joining aluminium parts. The parts have to be perfectly machined, or pressed together, due to the small volume of filler metal. At high temperature applied for a long time, the zinc diffuses away from the joint. The resulting joint does not present a mechanical weakness and is corrosion-resistant. The technique is known as diffusion soldering.

Fluxless brazing of copper alloys can be done with self-fluxing filler metals. Such metals contain an element capable of reaction with oxygen, usually phosphorus. A good example is the family of copper-phosphorus alloys.

Properties
Fluxes have several important properties:
Activity – the ability to dissolve existing oxides on the metal surface and promote wetting with solder. Highly active fluxes are often acidic or corrosive in nature.
Corrosivity – the promotion of corrosion by the flux and its residues. Most active fluxes tend to be corrosive at room temperatures and require careful removal. As activity and corrosivity are linked, the preparation of surfaces to be joined should allow use of milder fluxes. Some water-soluble flux residues are hygroscopic, which causes problems with electrical resistance and contributes to corrosion. Fluxes containing halides and mineral acids are highly corrosive and require thorough removal. Some fluxes, especially those based on borax used for brazing, form very hard glass-like coatings that are difficult to remove.
Cleanability – the difficulty of removal of flux and its residues after the soldering operation. Fluxes with higher content of solids tend to leave larger amounts of residues; thermal decomposition of some vehicles also leads to formation of difficult-to-clean, polymerized and possibly even charred deposits (a problem especially for hand soldering). Some flux residues are soluble in organic solvents, others in water, some in both. Some fluxes are no-clean, as they are sufficiently volatile or undergo thermal decomposition to volatile products, so that they do not require the cleaning step. Other fluxes leave non-corrosive residues that can be left in place. However, flux residues can interfere with subsequent operations; they can impair adhesion of conformal coatings, or act as undesired insulation on connectors and contact pads for test equipment.
Residue tack – the stickiness of the surface of the flux residue. When not removed, the flux residue should have a smooth, hard surface. Tacky surfaces tend to accumulate dust and particulates, which causes issues with electrical resistance; the particles themselves can be conductive or they can be hygroscopic or corrosive.
Volatility – this property has to be balanced to facilitate easy removal of solvents during the preheating phase but to not require too frequent replenishing of solvent in the process equipment.
Viscosity – especially important for solder pastes, which have to be easy to apply but also thick enough to stay in place without spreading to undesired locations. Solder pastes may also function as a temporary adhesive for keeping electronic parts in place before and during soldering. Fluxes applied by e.g. foam require low viscosity.
Flammability – relevant especially for glycol-based vehicles and for organic solvents. Flux vapors tend to have low autoignition temperature and present a risk of a flash fire when the flux comes in contact with a hot surface.
Solids – the percentage of solid material in the flux. Fluxes with low solids, sometimes as little as 1–2%, are called low solids flux, low-residue flux, or no-clean flux. They are often composed of weak organic acids, with addition of a small amount of rosin or other resins.
Conductivity – some fluxes remain conductive after soldering if not cleaned properly, leading to random malfunctions on circuits with high impedances. Different types of fluxes are differently prone to cause these issues.

Composition
Fluxes for metal joining
The composition of fluxes is tailored for the required properties: the base metals and their surface preparation (which determine the composition and thickness of surface oxides), the solder (which determines the wetting properties and the soldering temperature), the corrosion resistance and ease of removal, and others.

Fluxes for soft soldering are typically of organic nature, though inorganic fluxes, usually based on halogenides or acids, are also used in non-electronics applications. Fluxes for brazing operate at significantly higher temperatures and are therefore mostly inorganic; the organic compounds tend to be of supplementary nature, e.g. to make the flux sticky at low temperature so it can be easily applied.

The surface of tin-based solder is coated predominantly with tin oxides; even in alloys the surface layer tends to become relatively enriched by tin. Fluxes for indium- and zinc-based solders have different compositions than fluxes for ordinary tin-lead and tin-based solders, due to different soldering temperatures and the different chemistry of the oxides involved.

Organic fluxes are unsuitable for flame soldering and flame brazing, as they tend to char and impair solder flow. Some metals are classified as "unsolderable" in air, and have to be either coated with another metal before soldering, or special fluxes or protective atmospheres have to be used. Such metals are beryllium, chromium, magnesium, titanium, and some aluminium alloys.

Fluxes for high-temperature soldering differ from the fluxes for use at lower temperatures. At higher temperatures even relatively mild chemicals have sufficient oxide-disrupting activity, but the metal oxidation rates become fairly high; the barrier function of the vehicle therefore becomes more important than the fluxing activity. High molecular weight hydrocarbons are often used for this application; a diluent with a lower molecular weight, boiling off during the preheat phase, is usually used to aid application.
Common fluxes are: ammonium chloride or resin acids (contained in rosin) for soldering copper and tin; hydrochloric acid and zinc chloride for soldering galvanized iron (and other zinc surfaces); and borax for brazing, braze-welding ferrous metals, and forge welding.

Organic fluxes
Organic fluxes typically consist of four major components:
Activators – chemicals disrupting/dissolving the metal oxides. Their role is to expose unoxidized, easily wettable metal surface and aid soldering by other means, e.g. by exchange reactions with the base metals. Highly active fluxes contain chemicals that are corrosive at room temperature. The compounds used include metal halides (most often zinc chloride or ammonium chloride), hydrochloric acid, phosphoric acid, citric acid, and hydrobromic acid. Salts of mineral acids with amines are also used as aggressive activators. Aggressive fluxes typically facilitate corrosion, require careful removal, and are unsuitable for finer work. Activators for fluxes for soldering and brazing aluminium often contain fluorides. Milder activators begin to react with oxides only at elevated temperature. Typical compounds used are carboxylic acids (e.g. fatty acids (most often oleic acid and stearic acid), dicarboxylic acids) and sometimes amino acids. Some milder fluxes also contain halides or organohalides.
Vehicles – high-temperature tolerant chemicals in the form of non-volatile liquids or solids with suitable melting point; they are generally liquid at soldering temperatures. Their role is to act as an oxygen barrier to protect the hot metal surface against oxidation, to dissolve the reaction products of activators and oxides and carry them away from the metal surface, and to facilitate heat transfer. Solid vehicles tend to be based on natural or modified rosin (mostly abietic acid, pimaric acid, and other resin acids) or natural or synthetic resins. Water-soluble organic fluxes tend to contain vehicles based on high-boiling polyols: glycols, diethylene glycol and higher polyglycols, polyglycol-based surfactants and glycerol.
Solvents – added to facilitate processing and deposition to the joint. Solvents are typically dried out during preheating before the soldering operation; incomplete solvent removal may lead to boiling off and spattering of solder paste particles or molten solder.
Additives – numerous other chemicals modifying the flux properties. Additives can be surfactants (especially nonionic), corrosion inhibitors, stabilizers and antioxidants, tackifiers, thickeners and other rheological modifiers (especially for solder pastes), plasticizers (especially for flux-cored solders), and dyes.

Inorganic fluxes
Inorganic fluxes contain components playing the same role as in organic fluxes. They are more often used in brazing and other high-temperature applications, where organic fluxes have insufficient thermal stability. The chemicals used often simultaneously act as both vehicles and activators; typical examples are borax, borates, fluoroborates, fluorides and chlorides. Halogenides are active at lower temperatures than borates, and are therefore used for brazing of aluminium and magnesium alloys; they are however highly corrosive.

Behavior of activators
The role of the activators is primarily disruption and removal of the oxide layer on the metal surface (and also the molten solder), to facilitate direct contact between the molten solder and metal. The reaction product is usually soluble or at least dispersible in the molten vehicle.
The activators are usually either acids, or compounds that release acids at elevated temperature. The general reaction of oxide removal is:

Metal oxide + Acid → Salt + Water

Salts are ionic in nature and can cause problems from metallic leaching or dendrite growth, with possible product failure. In some cases, particularly in high-reliability applications, flux residues must be removed.

The activity of the activator generally increases with temperature, up to a certain value where activity ceases, either due to thermal decomposition or excessive volatilization. However, the oxidation rate of the metals also increases with temperature. At high temperatures, copper oxide reacts with hydrogen chloride to form water-soluble and mechanically weak copper chloride, and with rosin to form salts of copper and abietic acid, which are soluble in molten rosin.

Some activators may also contain metal ions, capable of an exchange reaction with the underlying metal; such fluxes aid soldering by chemically depositing a thin layer of more easily solderable metal on the exposed base metal. An example is the group of fluxes containing zinc, tin or cadmium compounds, usually chlorides, sometimes fluorides or fluoroborates.

Inorganic activators
Common high-activity activators are mineral acids, often together with halides, amines, water or alcohols:
hydrochloric acid, most common
phosphoric acid, less common, use limited by its polymerization at higher temperatures

Inorganic acids are highly corrosive to metals even at room temperature, which causes issues during storage, handling and applications. As soldering involves high temperatures, compounds that decompose or react, with acids as products, are frequently used:
zinc chloride, which at high temperatures reacts with moisture, forming oxychloride and hydrochloric acid
ammonium chloride, thermally decomposing to ammonia and hydrochloric acid
amine hydrochlorides, decomposing to the amine and hydrochloric acid

Rosin fluxes
The terms resin flux and rosin flux are ambiguous and somewhat interchangeable, with different vendors using different assignments. Generally, fluxes are labeled as rosin if the vehicle they are based on is primarily natural rosin. Some manufacturers reserve the "rosin" designation for military fluxes based on rosin (R, RMA and RA compositions) and label others as "resin".

Rosin has good flux properties. A mixture of organic acids (resin acids, predominantly abietic acid, with pimaric acid, isopimaric acid, neoabietic acid, dihydroabietic acid, and dehydroabietic acid), rosin is a glassy solid, virtually nonreactive and noncorrosive at normal temperature, but liquid, ionic and mildly reactive to metal oxides in the molten state. Rosin tends to soften between 60–70 °C and is fully fluid at around 120 °C; molten rosin is weakly acidic and is able to dissolve thinner layers of surface oxides from copper without further additives. For heavier surface contamination or improved process speed, additional activators can be added.

There are several possible activator groups for rosins:
halide activators (organic halide salts, e.g. dimethylammonium chloride and diethylammonium chloride)
organic acids (monocarboxylic, e.g. formic acid, acetic acid, propionic acid, and dicarboxylic, e.g. oxalic acid, malonic acid, sebacic acid)

There are three types of rosin: gum rosin (from pine tree oleoresin), wood rosin (obtained by extraction of tree stumps), and tall oil rosin (obtained from tall oil, a byproduct of the kraft paper process).
Gum rosin has a milder odor and lower tendency to crystallize from solutions than wood rosin, and is therefore preferred for flux applications. Tall oil rosin finds increased use due to its higher thermal stability and therefore lower tendency to form insoluble thermal decomposition residues. The composition and quality of rosin differ by the tree type, and also by location and even by year. In Europe, rosin for fluxes is usually obtained from a specific type of Portuguese pine; in America a North Carolina variant is used.

Natural rosin can be used as is, or can be chemically modified by e.g. esterification, polymerization, or hydrogenation. The properties being altered are increased thermal stability, better cleanability, altered solution viscosity, and harder residue (or conversely, softer and more tacky residue). Rosin can also be converted to a water-soluble rosin flux, by formation of an ethoxylated rosin amine, an adduct with a polyglycol and an amine.

One of the early fluxes was a mixture of equal amounts of rosin and vaseline. A more aggressive early composition was a mixture of saturated solution of zinc chloride, alcohol, and glycerol.

Fluxes can also be prepared from synthetic resins, often based on esters of polyols and fatty acids. Such resins have improved fume odor and lower residue tack, but their fluxing activity and solubility tend to be lower than those of natural resins.

Rosin flux grades
Rosin fluxes are categorized by grades of activity: L for low, M for moderate, and H for high. There are also other abbreviations for different rosin flux grades:
R (rosin) – pure rosin, no activators, low activity, mildest
WW (water-white) – purest rosin grade, no activators, low activity, sometimes synonymous with R
RMA (rosin mildly activated) – contains mild activators, typically no halides
RA (rosin activated) – rosin with strong activators, high activity, contains halides
OA (organic acid) – rosin activated with organic acids, high activity, highly corrosive, aqueous cleaning
SA (synthetically activated) – rosin with strong synthetic activators, high activity; formulated to be easily soluble in organic solvents (chlorofluorocarbons, alcohols) to facilitate cleaning
WS (water-soluble) – usually based on inorganic or organic halides; highly corrosive residues
SRA (superactivated rosin) – rosin with very strong activators, very high activity
IA (inorganic acid) – rosin activated with inorganic acids (usually hydrochloric acid or phosphoric acid), highest activities, highly corrosive

R, WW, and RMA grades are used for joints that cannot be easily cleaned or where there is too high a corrosion risk. More active grades require thorough cleaning of the residues. Improper cleaning can actually aggravate the corrosion by releasing trapped activators from the flux residues.

Special fluxes
Fluxes for soldering certain metals
Some materials are very difficult to solder. In some cases special fluxes have to be employed.

Aluminium and its alloys
Aluminium and its alloys are difficult to solder due to the formation of the passivation layer of aluminium oxide. The flux has to be able to disrupt this layer and facilitate wetting by solder. Salts or organic complexes of some metals can be used; the salt has to be able to penetrate the cracks in the oxide layer. The metal ions, more noble than aluminium, then undergo a redox reaction, dissolve the surface layer of aluminium and form a deposit there. This intermediate layer of another metal can then be wetted with a solder.
One example of such a flux is a composition of triethanolamine, fluoroboric acid, and cadmium fluoroborate. More than 1% magnesium in the alloy impairs the flux action, however, as the magnesium oxide layer is more refractory. Another possibility is an inorganic flux composed of zinc chloride or tin(II) chloride, ammonium chloride, and a fluoride (e.g. sodium fluoride). The presence of silicon in the alloy impairs the flux effectivity, as silicon does not undergo the exchange reaction that aluminium does.

Magnesium alloys
A putative flux for soldering magnesium alloys at low temperature is molten acetamide. Acetamide dissolves surface oxides on both aluminium and magnesium; promising experiments were done with its use as a flux for a tin-indium solder on magnesium.

Stainless steel
Stainless steel is a material which is difficult to solder because of its stable, self-healing surface oxide layer and its low thermal conductivity. A solution of zinc chloride in hydrochloric acid is a common flux for stainless steels; it has however to be thoroughly removed afterwards, as it would cause pitting corrosion. Another highly effective flux is phosphoric acid; its tendency to polymerize at higher temperatures however limits its applications.

Metal salts as flux in hot corrosion
Hot corrosion can affect gas turbines operating in high salt environments (e.g., near the ocean). Salts, including chlorides and sulfates, are ingested by the turbines and deposited in the hot sections of the engine; other elements present in fuels also form salts, e.g. vanadates. The heat from the engine melts these salts, which then can flux the passivating oxide layers on the metal components of the engine, allowing corrosion to occur at an accelerated rate.

List of fluxes
Borax – for brazing
Beeswax
Citric acid – for soldering copper/electronics
Tallow and lead
Paraffin wax
Palm oil
Zinc chloride ("killed spirits")
Zinc chloride and ammonium chloride
Olive oil and ammonium chloride – for iron
Rosin, tallow, olive oil, and zinc chloride – for aluminium
Cryolite (sodium hexafluoroaluminate)
Cryolite and phosphoric acid
Phosphoric acid and alcohol
Cryolite and barium chloride
Oleic acid
Lithium chloride
Magnesium chloride
Sodium chloride
Potassium chloride
Unslaked lime

Flux recovery
During the submerged arc welding process, not all flux turns into slag. Depending on the welding process, 50% to 90% of the flux can be reused.

Standards
Solder fluxes are specified according to several standards.

ISO 9454-1 and DIN EN 29454-1
The most common standard in Europe is ISO 9454-1 (also known as DIN EN 29454-1). This standard specifies each flux by a four-character code: flux type, base, activator, and form. The form is often omitted. Therefore, 1.1.2 means rosin flux with halides.

DIN 8511
The older German DIN 8511 specification is still often in use in shops. The correspondence between DIN 8511 and ISO 9454-1 codes is not one-to-one.

J-STD-004
One standard increasingly used (e.g. in the United States) is J-STD-004. It is very similar to DIN EN 61190-1-1.
Four characters (two letters, then one letter, and last a number) represent flux composition, flux activity, and whether activators include halides:

First two letters: Base
RO: rosin
RE: resin
OR: organic
IN: inorganic

Third letter: Activity
L: low
M: moderate
H: high

Number: Halide content
0: less than 0.05% in weight ("halide-free")
1: halide content depends on activity:
less than 0.5% for low activity
0.5% to 2.0% for moderate activity
greater than 2.0% for high activity

Any combination is possible, e.g. ROL0, REM1 or ORH0.

J-STD-004 characterizes the flux by reliability of residue from a surface insulation resistance (SIR) and electromigration standpoint. It includes tests for electromigration and surface insulation resistance (which must be greater than 100 MΩ after 168 hours at elevated temperature and humidity with a DC bias applied).

MIL-F-14256 and QQ-S-571
The old MIL-F-14256 and QQ-S-571 standards defined fluxes as:
R (rosin)
RMA (rosin mildly activated)
RA (rosin activated)
WS (water-soluble)

Any of these categories may be no-clean, or not, depending on the chemistry selected and the standard that the manufacturer requires.

See also
Flux-cored arc welding
Gas metal arc welding
Shielded metal arc welding

References

External links
MetalShapers.Org Tips & Tricks from the Pros: "Aluminum Welding" (includes Filler Metal chart)
Solder Fume and You

Metallurgy
Brazing and soldering
Flux (metallurgy)
[ "Chemistry", "Materials_science", "Engineering" ]
6,971
[ "Metallurgy", "Materials science", "nan" ]
494,055
https://en.wikipedia.org/wiki/DNA%20synthesis
DNA synthesis is the natural or artificial creation of deoxyribonucleic acid (DNA) molecules. DNA is a macromolecule made up of nucleotide units, which are linked by covalent bonds and hydrogen bonds, in a repeating structure. DNA synthesis occurs when these nucleotide units are joined to form DNA; this can occur artificially (in vitro) or naturally (in vivo). Nucleotide units are made up of a nitrogenous base (cytosine, guanine, adenine or thymine), a pentose sugar (deoxyribose) and a phosphate group. Each unit is joined when a covalent bond forms between its phosphate group and the pentose sugar of the next nucleotide, forming a sugar-phosphate backbone. DNA is a complementary, double-stranded structure, as specific base pairing (adenine with thymine, guanine with cytosine) occurs naturally when hydrogen bonds form between the nucleotide bases.

There are several different definitions for DNA synthesis: it can refer to DNA replication (DNA biosynthesis, in vivo DNA amplification), the polymerase chain reaction (enzymatic DNA synthesis, in vitro DNA amplification) or gene synthesis (physically creating artificial gene sequences). Though each type of synthesis is very different, they do share some features. Nucleotides that have been joined to form polynucleotides can act as a DNA template for one form of DNA synthesis, PCR, to occur. DNA replication also works by using a DNA template: the DNA double helix unwinds during replication, exposing unpaired bases for new nucleotides to hydrogen bond to. Gene synthesis, however, does not require a DNA template and genes are assembled de novo.

DNA synthesis occurs in all eukaryotes and prokaryotes, as well as some viruses. The accurate synthesis of DNA is important in order to avoid mutations to DNA. In humans, mutations could lead to diseases such as cancer, so DNA synthesis, and the machinery involved in vivo, has been studied extensively throughout the decades. In the future these studies may be used to develop technologies involving DNA synthesis, to be used in data storage.

DNA replication
In nature, DNA molecules are synthesised by all living cells through the process of DNA replication. This typically occurs as a part of cell division. DNA replication occurs so that, during cell division, each daughter cell contains an accurate copy of the genetic material of the cell. In vivo DNA synthesis (DNA replication) is dependent on a complex set of enzymes which have evolved to act during the S phase of the cell cycle in a concerted fashion. In both eukaryotes and prokaryotes, DNA replication occurs when specific topoisomerases, helicases and gyrases (replication initiator proteins) uncoil the double-stranded DNA, exposing the nitrogenous bases. These enzymes, along with accessory proteins, form a macromolecular machine which ensures accurate duplication of DNA sequences. Complementary base pairing takes place, forming a new double-stranded DNA molecule. This is known as semi-conservative replication, since one strand of the new DNA molecule is from the 'parent' strand.

Continuously, eukaryotic enzymes encounter DNA damage which can perturb DNA replication. This damage is in the form of DNA lesions that arise spontaneously or due to DNA-damaging agents. DNA replication machinery is therefore highly controlled in order to prevent collapse when encountering damage. Control of the DNA replication system ensures that the genome is replicated only once per cycle; over-replication induces DNA damage.
Deregulation of DNA replication is a key factor in genomic instability during cancer development. This highlights the specificity of DNA synthesis machinery in vivo. Various means exist to artificially stimulate the replication of naturally occurring DNA, or to create artificial gene sequences. However, DNA synthesis in vitro can be a very error-prone process.

DNA repair synthesis
Damaged DNA is subject to repair by several different enzymatic repair processes, where each individual process is specialized to repair particular types of damage. The DNA of humans is subject to damage from multiple natural sources, and insufficient repair is associated with disease and premature aging. Most DNA repair processes form single-strand gaps in DNA during an intermediate stage of the repair, and these gaps are filled in by repair synthesis. The specific repair processes that require gap filling by DNA synthesis include nucleotide excision repair, base excision repair, mismatch repair, homologous recombinational repair, non-homologous end joining and microhomology-mediated end joining.

Reverse transcription
Reverse transcription is part of the replication cycle of particular virus families, including retroviruses. It involves copying RNA into double-stranded complementary DNA (cDNA), using reverse transcriptase enzymes. In retroviruses, viral RNA is inserted into a host cell nucleus. There, a viral reverse transcriptase enzyme adds DNA nucleotides onto the RNA sequence, generating cDNA that is inserted into the host cell genome by the enzyme integrase, encoding viral proteins.

Polymerase chain reaction
A polymerase chain reaction is a form of enzymatic DNA synthesis in the laboratory, using cycles of repeated heating and cooling of the reaction for DNA melting and enzymatic replication of the DNA. DNA synthesis during PCR is very similar to that in living cells, but uses very specific reagents and conditions. During PCR, DNA is chemically extracted from host chaperone proteins, then heated, causing thermal dissociation of the DNA strands. Two new complementary strands are built from the original strands, and these strands can be split again to act as templates for further PCR products. The original DNA is multiplied through many rounds of PCR. More than a billion copies of the original DNA strand can be made.

Random mutagenesis
For many experiments, such as structural and evolutionary studies, scientists need to produce a large library of variants of a particular DNA sequence. Random mutagenesis takes place in vitro, when mutagenic replication with a low-fidelity DNA polymerase is combined with selective PCR amplification to produce many copies of mutant DNA.

RT-PCR
RT-PCR differs from conventional PCR as it synthesizes cDNA from mRNA, rather than from template DNA. The technique couples a reverse transcription reaction with PCR-based amplification, as an RNA sequence acts as the template for the enzyme reverse transcriptase. RT-PCR is often used to test gene expression in particular tissue or cell types at various developmental stages, or to test for genetic disorders.

Gene synthesis
Artificial gene synthesis is the process of synthesizing a gene in vitro without the need for initial template DNA samples. In 2010 J. Craig Venter and his team were the first to use entirely synthesized DNA to create a self-replicating microbe, dubbed Mycoplasma laboratorium.

Oligonucleotide synthesis
Oligonucleotide synthesis is the chemical synthesis of sequences of nucleic acids.
The majority of biological research and bioengineering involves synthetic DNA, which can include oligonucleotides, synthetic genes, or even chromosomes. Today, most synthetic DNA is custom-built using the phosphoramidite method developed by Marvin H. Caruthers. Oligos are synthesized from building blocks which replicate natural bases. Other techniques for synthesising DNA have been made commercially available, including Short Oligo Ligation Assembly. The process has been automated since the late 1970s and can be used to form desired genetic sequences, as well as for other uses in medicine and molecular biology. However, creating sequences chemically is impractical beyond 200–300 bases, and is an environmentally hazardous process. These oligos, of around 200 bases, can be connected using DNA assembly methods, creating larger DNA molecules.

Some studies have explored the possibility of enzymatic synthesis using terminal deoxynucleotidyl transferase (TdT), a DNA polymerase that requires no template. However, this method is not yet as effective as chemical synthesis, and is not commercially available.

With advances in artificial DNA synthesis, the possibility of DNA data storage is being explored. With its ultrahigh storage density and long-term stability, synthetic DNA is an interesting option for storing large amounts of data. Although information can be retrieved very quickly from DNA through next-generation sequencing technologies, de novo synthesis of DNA is a major bottleneck in the process. Only one nucleotide can be added per cycle, with each cycle taking seconds, so the overall synthesis is very time-consuming, as well as very error-prone. However, if biotechnology improves, synthetic DNA could one day be used in data storage.

Base pair synthesis
It has been reported that new nucleobase pairs can be synthesized, in addition to A-T (adenine–thymine) and G-C (guanine–cytosine). Synthetic nucleotides can be used to expand the genetic alphabet and allow specific modification of DNA sites. Even just a third base pair would expand the number of amino acids that can be encoded by DNA from the existing 20 amino acids to a possible 172. Hachimoji DNA is built from eight nucleotide letters, forming four possible base pairs. It therefore doubles the information density of natural DNA. In studies, RNA has even been produced from hachimoji DNA. This technology could also be used to allow data storage in DNA.

References

DNA replication
DNA synthesis
[ "Biology" ]
1,933
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
494,418
https://en.wikipedia.org/wiki/Proper%20time
In relativity, proper time (from Latin, meaning own time) along a timelike world line is defined as the time as measured by a clock following that line. The proper time interval between two events on a world line is the change in proper time, which is independent of coordinates, and is a Lorentz scalar. The interval is the quantity of interest, since proper time itself is fixed only up to an arbitrary additive constant, namely the setting of the clock at some event along the world line.

The proper time interval between two events depends not only on the events, but also the world line connecting them, and hence on the motion of the clock between the events. It is expressed as an integral over the world line (analogous to arc length in Euclidean space). An accelerated clock will measure a smaller elapsed time between two events than that measured by a non-accelerated (inertial) clock between the same two events. The twin paradox is an example of this effect.

By convention, proper time is usually represented by the Greek letter τ (tau) to distinguish it from coordinate time represented by t. Coordinate time is the time between two events as measured by an observer using that observer's own method of assigning a time to an event. In the special case of an inertial observer in special relativity, the time is measured using the observer's clock and the observer's definition of simultaneity.

The concept of proper time was introduced by Hermann Minkowski in 1908, and is an important feature of Minkowski diagrams.

Mathematical formalism
The formal definition of proper time involves describing the path through spacetime that represents a clock, observer, or test particle, and the metric structure of that spacetime. Proper time is the pseudo-Riemannian arc length of world lines in four-dimensional spacetime. From the mathematical point of view, coordinate time is assumed to be predefined and an expression for proper time as a function of coordinate time is required. On the other hand, proper time is measured experimentally and coordinate time is calculated from the proper time of inertial clocks.

Proper time can only be defined for timelike paths through spacetime which allow for the construction of an accompanying set of physical rulers and clocks. The same formalism for spacelike paths leads to a measurement of proper distance rather than proper time. For lightlike paths, there exists no concept of proper time and it is undefined as the spacetime interval is zero. Instead, an arbitrary and physically irrelevant affine parameter unrelated to time must be introduced.

In special relativity
With the timelike convention for the metric signature, the Minkowski metric is defined by

η_μν = diag(1, −1, −1, −1),

and the coordinates by

x = (ct, x, y, z)

for arbitrary Lorentz frames. In any such frame an infinitesimal interval, here assumed timelike, between two events is expressed as

ds² = c²dt² − dx² − dy² − dz² = η_μν dx^μ dx^ν

and separates points on a trajectory of a particle (think of a clock). The same interval can be expressed in coordinates such that at each moment, the particle is at rest. Such a frame is called an instantaneous rest frame, denoted here by the coordinates (cτ, xτ, yτ, zτ) for each instant. Due to the invariance of the interval (instantaneous rest frames taken at different times are related by Lorentz transformations) one may write

ds² = c²dτ² − dxτ² − dyτ² − dzτ² = c²dτ²,

since in the instantaneous rest frame, the particle or the frame itself is at rest, i.e., dxτ = dyτ = dzτ = 0. Since the interval is assumed timelike (i.e. ds² > 0),
taking the square root of the above yields

ds = c dτ,

or

dτ = ds/c.

Given this differential expression for dτ, the proper time interval is defined as

Δτ = ∫_P dτ = ∫ ds/c.

Here P is the worldline from some initial event to some final event, with the ordering of the events fixed by the requirement that the final event occurs later according to the clock than the initial event. Using the interval above and again the invariance of the interval, one may write

Δτ = ∫_P (1/c) √(η_μν dx^μ dx^ν)
   = ∫_a^b √(1 − (1/c²)[(dx/dt)² + (dy/dt)² + (dz/dt)²]) dt
   = ∫_a^b √(1 − v(t)²/c²) dt,

where t = a and t = b are the coordinate times of the endpoints of the worldline with a < b; v(t) is the coordinate speed at coordinate time t; and x(t), y(t), and z(t) are space coordinates. The first expression is manifestly Lorentz invariant. They are all Lorentz invariant, since proper time and proper time intervals are coordinate-independent by definition.

If t, x, y, z are parameterised by a parameter λ, this can be written as

Δτ = ∫_a^b √((dt/dλ)² − (1/c²)[(dx/dλ)² + (dy/dλ)² + (dz/dλ)²]) dλ.

If the motion of the particle is constant, the expression simplifies to

Δτ = √((Δt)² − (Δx)²/c² − (Δy)²/c² − (Δz)²/c²),

where Δ means the change in coordinates between the initial and final events. The definition in special relativity generalizes straightforwardly to general relativity as follows below.

In general relativity
Proper time is defined in general relativity as follows: Given a pseudo-Riemannian manifold with local coordinates x^μ and equipped with a metric tensor g_μν, the proper time interval Δτ between two events along a timelike path P is given by the line integral

Δτ = ∫_P dτ = ∫_P (1/c) √(g_μν dx^μ dx^ν).

This expression is, as it should be, invariant under coordinate changes. It reduces (in appropriate coordinates) to the expression of special relativity in flat spacetime.

In the same way that coordinates can be chosen such that the space coordinates are constant in special relativity, this can be done in general relativity too. Then, in these coordinates,

Δτ = ∫_P dτ = ∫_P (1/c) √(g₀₀) dx⁰.

This expression generalizes the definition above and can be taken as the definition. Then, using the invariance of the interval, the line-integral expression follows from it in the same way as in special relativity, except that here arbitrary coordinate changes are allowed.

Examples in special relativity
Example 1: The twin "paradox"
For a twin paradox scenario, let there be an observer A who moves between the A-coordinates (0,0,0,0) and (10 years, 0, 0, 0) inertially. This means that A stays at x = y = z = 0 for 10 years of A-coordinate time. The proper time interval for A between the two events is then

Δτ = √((10 years)²) = 10 years.

So being "at rest" in a special relativity coordinate system means that proper time and coordinate time are the same.

Let there now be another observer B who travels in the x direction from (0,0,0,0) for 5 years of A-coordinate time at 0.866c to (5 years, 4.33 light-years, 0, 0). Once there, B accelerates, and travels in the other spatial direction for another 5 years of A-coordinate time to (10 years, 0, 0, 0). For each leg of the trip, the proper time interval can be calculated using A-coordinates, and is given by

Δτ = √((5 years)² − (4.33 light-years)²/c²) = √(25 − 18.75) years = √(6.25) years = 2.5 years.

So the total proper time for observer B to go from (0,0,0,0) to (5 years, 4.33 light-years, 0, 0) and then to (10 years, 0, 0, 0) is

Δτ_B = 2.5 years + 2.5 years = 5 years.

Thus it is shown that the proper time equation incorporates the time dilation effect. In fact, for an object in a SR (special relativity) spacetime traveling with velocity v for a time ΔT, the proper time interval experienced is

Δτ = ΔT √(1 − v²/c²),

which is the SR time dilation formula.

Example 2: The rotating disk
An observer rotating around another inertial observer is in an accelerated frame of reference. For such an observer, the incremental (dτ) form of the proper time equation is needed, along with a parameterized description of the path being taken, as shown below.
Let there be an observer C on a disk rotating in the xy plane at a coordinate angular rate of ω and who is at a distance of r from the center of the disk with the center of the disk at x = y = z = 0. The path of observer C is given by (ct, r cos ωt, r sin ωt, 0), where t is the current coordinate time. When r and ω are constant, dx = −r ω sin(ωt) dt and dy = r ω cos(ωt) dt. The incremental proper time formula then becomes dτ = √(1 − r²ω²/c²) dt. So for an observer rotating at a constant distance of r from a given point in spacetime at a constant angular rate of ω between coordinate times t₁ and t₂, the proper time experienced will be Δτ = (t₂ − t₁) √(1 − r²ω²/c²), since v = rω for a rotating observer. This result is the same as for the linear motion example, and shows the general application of the integral form of the proper time formula. Examples in general relativity The difference between SR and general relativity (GR) is that in GR one can use any metric which is a solution of the Einstein field equations, not just the Minkowski metric. Because inertial motion in curved spacetimes lacks the simple expression it has in SR, the line integral form of the proper time equation must always be used. Example 3: The rotating disk (again) An appropriate coordinate conversion done against the Minkowski metric creates coordinates where an object on a rotating disk stays in the same spatial coordinate position. The new coordinates are r = √(x² + y²) and θ = arctan(y/x) − ωt. The t and z coordinates remain unchanged. In this new coordinate system, the incremental proper time equation is dτ = (1/c) √[(c² − r²ω²) dt² − 2r²ω dθ dt − dr² − r² dθ² − dz²]. With r, θ, and z being constant over time, this simplifies to dτ = dt √(1 − r²ω²/c²), which is the same as in Example 2. Now let there be an object off of the rotating disk and at inertial rest with respect to the center of the disk and at a distance of R from it. This object has a coordinate motion described by dθ = −ω dt (with dr = dz = 0), which describes the inertially at-rest object as counter-rotating in the view of the rotating observer. Now the proper time equation becomes dτ = (1/c) √[(c² − R²ω²) dt² + 2R²ω² dt² − R²ω² dt²] = dt. So for the inertial at-rest observer, coordinate time and proper time are once again found to pass at the same rate, as expected and required for the internal self-consistency of relativity theory. Example 4: The Schwarzschild solution – time on the Earth The Schwarzschild solution has an incremental proper time equation of dτ = √[(1 − 2m/r) dt² − (1/c²)(1 − 2m/r)⁻¹ dr² − (r²/c²)(dɸ² + sin²ɸ dθ²)], where t is time as calibrated with a clock distant from and at inertial rest with respect to the Earth, r is a radial coordinate (which is effectively the distance from the Earth's center), ɸ is a co-latitudinal coordinate, the angular separation from the north pole in radians. θ is a longitudinal coordinate, analogous to the longitude on the Earth's surface but independent of the Earth's rotation. This is also given in radians. m is the geometrized mass of the Earth, m = GM/c², M is the mass of the Earth, G is the gravitational constant. To demonstrate the use of the proper time relationship, several sub-examples involving the Earth will be used here. For the Earth, M = 5.972×10²⁴ kg, meaning that m ≈ 4.435×10⁻³ m. When standing on the north pole, we can assume dr = dθ = dɸ = 0 (meaning that we are neither moving up nor down nor along the surface of the Earth). In this case, the Schwarzschild solution proper time equation becomes dτ = dt √(1 − 2m/r). Then using the polar radius of the Earth as the radial coordinate (r = 6,356,752 m), we find that dτ ≈ (1 − 6.98×10⁻¹⁰) dt. At the equator, the radius of the Earth is r = 6,378,137 m. In addition, the rotation of the Earth needs to be taken into account. This imparts on an observer an angular velocity dθ/dt of 2π divided by the sidereal period of the Earth's rotation, 86162.4 seconds, so dθ/dt ≈ 7.292×10⁻⁵ rad/s. The proper time equation then produces dτ = √[(1 − 2m/r) − r²(dθ/dt)²/c²] dt ≈ (1 − 6.97×10⁻¹⁰) dt. From a non-relativistic point of view this should have been the same as the previous result.
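The arithmetic in Example 4 is easy to reproduce numerically. A minimal Python sketch follows; the physical constants are standard reference values, and the last digits of the results depend slightly on the values chosen:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of the Earth, kg
c = 2.99792458e8       # speed of light, m/s
m = G * M / c**2       # geometrized mass of the Earth, ~4.43e-3 m

def clock_rate(r, omega=0.0):
    """dtau/dt for a clock at Schwarzschild radial coordinate r,
    circling with angular rate omega (omega = 0 for a static clock)."""
    return math.sqrt(1 - 2 * m / r - (r * omega / c) ** 2)

r_pole, r_equator = 6_356_752.0, 6_378_137.0   # polar and equatorial radii, m
omega_earth = 2 * math.pi / 86162.4            # sidereal rotation rate, rad/s

print(1 - clock_rate(r_pole))                   # ~7.0e-10: fractional slowing at the pole
print(1 - clock_rate(r_equator, omega_earth))   # ~7.0e-10: gravity plus rotation at the equator
```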
This example demonstrates how the proper time equation is used, even though the Earth rotates and hence is not spherically symmetric as assumed by the Schwarzschild solution. To describe the effects of rotation more accurately the Kerr metric may be used. See also Lorentz transformation Minkowski space Proper length Proper acceleration Proper mass Proper velocity Clock hypothesis Peres metric Footnotes References Minkowski spacetime Theory of relativity Timekeeping Time in physics
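As a complement to Example 1, the proper time integral Δτ = ∫ √(1 − v(t)²/c²) dt can also be evaluated numerically. A minimal sketch in Python, in units where c = 1 (years and light-years):

```python
import numpy as np

c = 1.0       # work in units of years and light-years, so c = 1
v = 0.866     # observer B's coordinate speed on both legs

t = np.linspace(0.0, 10.0, 100_001)   # A-coordinate time over the full trip
speed = np.full_like(t, v)            # |v(t)| = 0.866c throughout

tau_B = np.trapz(np.sqrt(1 - (speed / c) ** 2), t)
print(tau_B)   # ~5.0 years for B, versus 10 years of coordinate time for A
```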
Proper time
[ "Physics" ]
2,294
[ "Physical phenomena", "Time in physics", "Physical quantities", "Time", "Timekeeping", "Theory of relativity", "Spacetime" ]
4,876,828
https://en.wikipedia.org/wiki/Law%20of%20the%20wall
In fluid dynamics, the law of the wall (also known as the logarithmic law of the wall) states that the average velocity of a turbulent flow at a certain point is proportional to the logarithm of the distance from that point to the "wall", or the boundary of the fluid region. This law of the wall was first published in 1930 by Hungarian-American mathematician, aerospace engineer, and physicist Theodore von Kármán. It is only technically applicable to parts of the flow that are close to the wall (<20% of the height of the flow), though it is a good approximation for the entire velocity profile of natural streams. General logarithmic formulation The logarithmic law of the wall is a self-similar solution for the mean velocity parallel to the wall, and is valid for flows at high Reynolds numbers — in an overlap region with approximately constant shear stress and far enough from the wall for (direct) viscous effects to be negligible: u⁺ = (1/κ) ln y⁺ + C⁺, with y⁺ = y uτ/ν and uτ = √(τw/ρ), where y⁺ is the wall coordinate: the distance y to the wall, made dimensionless with the friction velocity uτ and kinematic viscosity ν; u⁺ is the dimensionless velocity: the velocity u parallel to the wall as a function of y (distance from the wall), divided by the friction velocity uτ; τw is the wall shear stress; ρ is the fluid density; uτ is called the friction velocity or shear velocity; κ is the Von Kármán constant; C⁺ is a constant; and ln is the natural logarithm. From experiments, the von Kármán constant is found to be κ ≈ 0.41, and C⁺ ≈ 5.0 for a smooth wall. With dimensions, the logarithmic law of the wall can be written as: u = (uτ/κ) ln (y/y0), where y0 is the distance from the boundary at which the idealized velocity given by the law of the wall goes to zero. This is necessarily nonzero because the turbulent velocity profile defined by the law of the wall does not apply to the laminar sublayer. The distance from the wall at which it reaches zero is determined by comparing the thickness of the laminar sublayer with the roughness of the surface over which it is flowing. For a near-wall laminar sublayer of thickness δν and a characteristic roughness length-scale ks: ks < δν gives hydraulically smooth flow, ks ≈ δν gives transitional flow, and ks > δν gives hydraulically rough flow. Intuitively, this means that if the roughness elements are hidden within the laminar sublayer, they have a much different effect on the turbulent law of the wall velocity profile than if they are sticking out into the main part of the flow. This is also often more formally formulated in terms of a boundary Reynolds number, Re∗ = ks uτ/ν. The flow is hydraulically smooth for Re∗ < 3, hydraulically rough for Re∗ > 100, and transitional for intermediate values. Values for y0 are given by: y0 ≈ ν/(9 uτ) for hydraulically smooth flow, and y0 ≈ ks/30 for hydraulically rough flow. Intermediate values are generally given by the empirically derived Nikuradse diagram, though analytical methods for solving for this range have also been proposed. For channels with a granular boundary, such as natural river systems, ks ≈ 3.5 D84, where D84 is the average diameter of the 84th largest percentile of the grains of the bed material. Power law solutions Works by Barenblatt and others have shown that besides the logarithmic law of the wall — the limit for infinite Reynolds numbers — there exist power-law solutions, which are dependent on the Reynolds number. In 1996, Cipra submitted experimental evidence in support of these power-law descriptions. This evidence itself has not been fully accepted by other experts.
In 2001, Oberlack claimed to have derived both the logarithmic law of the wall, as well as power laws, directly from the Reynolds-averaged Navier–Stokes equations, exploiting the symmetries in a Lie group approach. However, in 2014, Frewer et al. refuted these results. For scalars For scalars (most notably temperature), the self-similar logarithmic law of the wall has been theorized (first formulated by B. A. Kader) and observed in experimental and computational studies. In many cases, extensions to the original law of the wall formulation (usually through integral transformations) are generally needed to account for compressibility, variable-property and real fluid effects. Near the wall Below the region where the law of the wall is applicable, there are other estimations for friction velocity. Viscous sublayer In the region known as the viscous sublayer, below 5 wall units, the variation of u⁺ to y⁺ is approximately 1:1, such that: u⁺ = y⁺ for y⁺ < 5, where y⁺ is the wall coordinate: the distance y to the wall, made dimensionless with the friction velocity uτ and kinematic viscosity ν, and u⁺ is the dimensionless velocity: the velocity u parallel to the wall as a function of y (distance from the wall), divided by the friction velocity uτ. This approximation can be used farther than 5 wall units, but by y⁺ = 12 the error is more than 25%. Buffer layer In the buffer layer, between 5 wall units and 30 wall units, neither law holds, such that: u⁺ ≠ y⁺ and u⁺ ≠ (1/κ) ln y⁺ + C⁺ for 5 < y⁺ < 30, with the largest variation from either law occurring approximately where the two equations intersect, at y⁺ ≈ 11. That is, before 11 wall units the linear approximation is more accurate and after 11 wall units the logarithmic approximation should be used, though neither is particularly accurate at 11 wall units. The mean streamwise velocity profile in this region is improved with an eddy viscosity formulation based on a near-wall turbulent kinetic energy function and the van Driest mixing length equation. Comparisons with DNS data of fully developed turbulent channel flows over a range of Reynolds numbers showed good agreement. Notes References Further reading External links Definition from ScienceWorld Formula on CFD Online Y+ estimator Fluid dynamics Turbulence
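The piecewise description above is easy to tabulate. A minimal Python sketch of the composite profile, using the commonly quoted smooth-wall constants κ = 0.41 and C⁺ = 5.0; the buffer layer is handled as a simple switch at y⁺ ≈ 11, where the two laws intersect:

```python
import numpy as np

KAPPA, C_PLUS = 0.41, 5.0   # commonly quoted smooth-wall constants

def u_plus(y_plus):
    """Dimensionless mean velocity u+ as a function of wall coordinate y+.

    Uses the viscous-sublayer relation u+ = y+ below the crossover and the
    log law above it; the crossover near y+ ~ 11 is where the two intersect.
    """
    y_plus = np.asarray(y_plus, dtype=float)
    log_law = np.log(np.maximum(y_plus, 1e-12)) / KAPPA + C_PLUS
    return np.where(y_plus < 11.0, y_plus, log_law)

for y in (1.0, 5.0, 11.0, 30.0, 100.0):
    print(y, u_plus(y))
```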
Law of the wall
[ "Chemistry", "Engineering" ]
1,266
[ "Piping", "Chemical engineering", "Turbulence", "Fluid dynamics" ]
4,879,260
https://en.wikipedia.org/wiki/Asymmetric%20induction
Asymmetric induction describes the preferential formation in a chemical reaction of one enantiomer (enantioinduction) or diastereoisomer (diastereoinduction) over the other as a result of the influence of a chiral feature present in the substrate, reagent, catalyst or environment. Asymmetric induction is a key element in asymmetric synthesis. Asymmetric induction was introduced by Hermann Emil Fischer based on his work on carbohydrates. Several types of induction exist. Internal asymmetric induction makes use of a chiral center bound to the reactive center through a covalent bond, which remains so during the reaction. The starting material is often derived from chiral pool synthesis. In relayed asymmetric induction the chiral information is introduced in a separate step and removed again in a separate chemical reaction. Such special synthons are called chiral auxiliaries. In external asymmetric induction chiral information is introduced in the transition state through a catalyst or chiral ligand. This method of asymmetric synthesis is economically most desirable. Carbonyl 1,2 asymmetric induction Several models exist to describe chiral induction at carbonyl carbons during nucleophilic additions. These models are based on a combination of steric and electronic considerations and are often in conflict with each other. Models have been devised by Cram (1952), Cornforth (1959), Felkin (1969) and others. Cram's rule Cram's rule of asymmetric induction, named after Donald J. Cram, states: In certain non-catalytic reactions that diastereomer will predominate, which could be formed by the approach of the entering group from the least hindered side when the rotational conformation of the C-C bond is such that the double bond is flanked by the two least bulky groups attached to the adjacent asymmetric center. The rule indicates that the presence of an asymmetric center in a molecule induces the formation of an asymmetric center adjacent to it based on steric hindrance (scheme 1). The experiments involved two reactions. In the first experiment 2-phenylpropionaldehyde (1, racemic but (R)-enantiomer shown) was reacted with the Grignard reagent of bromobenzene to give 1,2-diphenyl-1-propanol (2) as a mixture of diastereomers, predominantly the threo isomer (see the Fischer projection for an explanation). The preference for the formation of the threo isomer can be explained by the rule stated above by having the active nucleophile in this reaction attacking the carbonyl group from the least hindered side (see Newman projection A) when the carbonyl is positioned in a staggered conformation with the methyl group and the hydrogen atom (the two smallest substituents, creating a minimum of steric hindrance) in a gauche orientation and phenyl, as the most bulky group, in the anti conformation. The second reaction is the organic reduction of 1,2-diphenyl-1-propanone 2 with lithium aluminium hydride, which results in the same reaction product as above but now with preference for the erythro isomer (2a). Now a hydride anion (H−) is the nucleophile attacking from the least hindered side (imagine hydrogen entering from the paper plane). Felkin model The Felkin model (1968) named after Hugh Felkin also predicts the stereochemistry of nucleophilic addition reactions to carbonyl groups. Felkin argued that the Cram model suffered a major drawback: an eclipsed conformation in the transition state between the carbonyl substituent (the hydrogen atom in aldehydes) and the largest α-carbonyl substituent.
He demonstrated that by increasing the steric bulk of the carbonyl substituent from methyl to ethyl to isopropyl to tert-butyl, the stereoselectivity also increased, which is not predicted by Cram's rule: The Felkin rules are: The transition states are reactant-like. Torsional strain (Pitzer strain) involving partial bonds (in transition states) represents a substantial fraction of the strain between fully formed bonds, even when the degree of bonding is quite low. The conformation in the TS is staggered and not eclipsed with the substituent R skew with respect to two adjacent groups one of them the smallest in TS A. For comparison TS B is the Cram transition state. The main steric interactions involve those around R and the nucleophile but not the carbonyl oxygen atom. Attack of the nucleophile occurs according to the Dunitz angle (107 degrees), eclipsing the hydrogen, rather than perpendicular to the carbonyl. A polar effect or electronic effect stabilizes a transition state with maximum separation between the nucleophile and an electron-withdrawing group. For instance haloketones do not obey Cram's rule, and, in the example above, replacing the electron-withdrawing phenyl group by a cyclohexyl group reduces stereoselectivity considerably. Felkin–Anh model The Felkin–Anh model is an extension of the Felkin model that incorporates improvements suggested by Nguyễn Trọng Anh and Odile Eisenstein to correct for two key weaknesses in Felkin's model. The first weakness addressed was the statement by Felkin of a strong polar effect in nucleophilic addition transition states, which leads to the complete inversion of stereochemistry by SN2 reactions, without offering justifications as to why this phenomenon was observed. Anh's solution was to offer the antiperiplanar effect as a consequence of asymmetric induction being controlled by both substituent and orbital effects. In this effect, the best nucleophile acceptor σ* orbital is aligned parallel to both the π and π* orbitals of the carbonyl, which provide stabilization of the incoming anion. The second weakness in the Felkin Model was the assumption of substituent minimization around the carbonyl R, which cannot be applied to aldehydes. Incorporation of Bürgi–Dunitz angle ideas allowed Anh to postulate a non-perpendicular attack by the nucleophile on the carbonyl center, anywhere from 95° to 105° relative to the oxygen-carbon double bond, favoring approach closer to the smaller substituent and thereby solve the problem of predictability for aldehydes. Anti–Felkin selectivity Though the Cram and Felkin–Anh models differ in the conformers considered and other assumptions, they both attempt to explain the same basic phenomenon: the preferential addition of a nucleophile to the most sterically favored face of a carbonyl moiety. However, many examples exist of reactions that display stereoselectivity opposite of what is predicted by the basic tenets of the Cram and Felkin–Anh models. Although both of the models include attempts to explain these reversals, the products obtained are still referred to as "anti-Felkin" products. One of the most common examples of altered asymmetric induction selectivity requires an α-carbon substituted with a component with Lewis base character (i.e. O, N, S, P substituents). In this situation, if a Lewis acid such as Al-iPr2 or Zn2+ is introduced, a bidentate chelation effect can be observed. 
This locks the carbonyl and the Lewis base substituent in an eclipsed conformation, and the nucleophile will then attack from the side with the smallest free α-carbon substituent. If the chelating R group is identified as the largest, this will result in an "anti-Felkin" product. This stereoselective control was recognized and discussed in the first paper establishing the Cram model, causing Cram to assert that his model requires non-chelating conditions. An example of chelation control of a reaction can be seen here, from a 1987 paper that was the first to directly observe such a "Cram-chelate" intermediate, vindicating the model: Here, the methyl titanium chloride forms a Cram-chelate. The methyl group then dissociates from titanium and attacks the carbonyl, leading to the anti-Felkin diastereomer. A non-chelating electron-withdrawing substituent effect can also result in anti-Felkin selectivity. If a substituent on the α-carbon is sufficiently electron withdrawing, the nucleophile will add anti- relative to the electron withdrawing group, even if the substituent is not the largest of the three groups bonded to the α-carbon. Each model offers a slightly different explanation for this phenomenon. A polar effect was postulated by the Cornforth model and the original Felkin model, which placed the EWG substituent and incoming nucleophile anti- to each other in order to most effectively cancel the dipole moment of the transition structure. This Newman projection illustrates the Cornforth and Felkin transition state that places the EWG anti- to the incoming nucleophile, regardless of its steric bulk relative to RS and RL. The improved Felkin–Anh model, as discussed above, makes a more sophisticated assessment of the polar effect by considering molecular orbital interactions in the stabilization of the preferred transition state. A typical reaction illustrating the potential anti-Felkin selectivity of this effect, along with its proposed transition structure, is pictured below: Carbonyl 1,3 asymmetric induction It has been observed that the stereoelectronic environment at the β-carbon of a carbonyl compound can also direct asymmetric induction. A number of predictive models have evolved over the years to define the stereoselectivity of such reactions. Chelation model According to Reetz, the Cram-chelate model for 1,2-inductions can be extended to predict the chelated complex of a β-alkoxy aldehyde and metal. The nucleophile is seen to attack from the less sterically hindered side and anti- to the substituent Rβ, leading to the anti-adduct as the major product. To make such chelates, the metal center must have at least two free coordination sites and the protecting ligands should form a bidentate complex with the Lewis acid. Non-chelation model Cram–Reetz model Cram and Reetz demonstrated that 1,3-stereocontrol is possible if the reaction proceeds through an acyclic transition state. The reaction of β-alkoxy aldehyde with allyltrimethylsilane showed good selectivity for the anti-1,3-diol, which was explained by the Cram polar model. The polar benzyloxy group is oriented anti to the carbonyl to minimize dipole interactions and the nucleophile attacks anti- to the bulkier (RM) of the remaining two substituents. Evans model More recently, Evans presented a different model for nonchelate 1,3-inductions. In the proposed transition state, the β-stereocenter is oriented anti- to the incoming nucleophile, as seen in the Felkin–Anh model.
The polar X group at the β-stereocenter is placed anti- to the carbonyl to reduce dipole interactions, and Rβ is placed anti- to the aldehyde group to minimize the steric hindrance. Consequently, the 1,3-anti-diol would be predicted as the major product. Carbonyl 1,2 and 1,3 asymmetric induction If the substrate has both an α- and β-stereocenter, the Felkin–Anh rule (1,2-induction) and the Evans model (1,3-induction) should be considered at the same time. If these two stereocenters have an anti- relationship, both models predict the same diastereomer (the stereoreinforcing case). However, in the case of the syn-substrate, the Felkin–Anh and the Evans model predict different products (non-stereoreinforcing case). It has been found that the size of the incoming nucleophile determines the type of control exerted over the stereochemistry. In the case of a large nucleophile, the interaction of the α-stereocenter with the incoming nucleophile becomes dominant; therefore, the Felkin product is the major one. Smaller nucleophiles, on the other hand, result in 1,3 control determining the asymmetry. Acyclic alkenes asymmetric induction Chiral acyclic alkenes also show diastereoselectivity upon reactions such as epoxidation and enolate alkylation. The substituents around the alkene can favour the approach of the electrophile from one or the other face of the molecule. This is the basis of Houk's model, based on theoretical work by Kendall Houk, which predicts that the selectivity is stronger for cis than for trans double bonds. In the example shown, the cis alkene assumes the shown conformation to minimize steric clash between RS and the methyl group. The approach of the electrophile preferentially occurs from the same side of the medium group (RM) rather than the large group (RL), mainly producing the shown diastereoisomer. Since for a trans alkene the steric hindrance between RS and the H group is not as large as for the cis case, the selectivity is much lower. Substrate control: asymmetric induction by molecular framework in acyclic systems Asymmetric induction by the molecular framework of an acyclic substrate is the idea that asymmetric steric and electronic properties of a molecule may determine the chirality of subsequent chemical reactions on that molecule. This principle is used to design chemical syntheses where one stereocentre is in place and additional stereocentres are required. When considering how two functional groups or species react, the precise 3D configurations of the chemical entities involved will determine how they may approach one another. Any restrictions as to how these species may approach each other will determine the configuration of the product of the reaction. In the case of asymmetric induction, we are considering the effects of one asymmetric centre on a molecule on the reactivity of other functional groups on that molecule. The closer together these two sites are, the larger an influence is expected to be observed. A more holistic approach to evaluating these factors is by computational modelling; however, simple qualitative factors may also be used to explain the predominant trends seen for some synthetic steps. The ease and accuracy of this qualitative approach means it is more commonly applied in synthesis and substrate design. Examples of appropriate molecular frameworks are alpha-chiral aldehydes and the use of chiral auxiliaries.
Asymmetric induction at alpha-chiral aldehydes Possible reactions at aldehydes include nucleophilic attack and addition of allylmetals. The stereoselectivity of nucleophilic attack at alpha-chiral aldehydes may be described by the Felkin–Anh or polar Felkin–Anh models, and addition of achiral allylmetals may be described by Cram's rule. Felkin–Anh and polar Felkin–Anh model Selectivity in nucleophilic additions to chiral aldehydes is often explained by the Felkin–Anh model (see figure). The nucleophile approaches the carbon of the carbonyl group at the Bürgi–Dunitz angle. At this trajectory, attack from the bottom face is disfavored due to steric bulk of the adjacent, large, functional group. The polar Felkin–Anh model is applied in the scenario where X is an electronegative group. The polar Felkin–Anh model postulates that the observed stereochemistry arises due to hyperconjugative stabilization arising from the anti-periplanar interaction between the C-X antibonding σ* orbital and the forming bond. Improving Felkin–Anh selectivity for organometal additions to aldehydes can be achieved by using organo-aluminum nucleophiles instead of the corresponding Grignard or organolithium nucleophiles. Claude Spino and co-workers have demonstrated significant stereoselectivity improvements upon switching from vinyl Grignard to vinylalane reagents with a number of chiral aldehydes. Cram's rule Addition of achiral allylmetals to aldehydes forms a chiral alcohol; the stereochemical outcome of this reaction is determined by the chirality of the α-carbon on the aldehyde substrate (Figure "Substrate control: addition of achiral allylmetals to α-chiral aldehydes"). The allylmetal reagents used include boron, tin and titanium. Cram's rule explains the stereoselectivity by considering the transition state depicted in figure 3. In the transition state the oxygen lone pair is able to interact with the boron centre whilst the allyl group is able to add to the carbon end of the carbonyl group. The steric demand of this transition state is minimized by the α-carbon configuration holding the largest group away from (trans to) the congested carbonyl group and the allylmetal group approaching past the smallest group on the α-carbon centre. In the example below (Figure "An example of substrate controlled addition of achiral allyl-boron to α-chiral aldehyde"), (R)-2-methylbutanal (1) reacts with the allylboron reagent (2) to give two possible diastereomers, of which the (R, R)-isomer is the major product. The Cram model of this reaction is shown with the carbonyl group placed trans to the ethyl group (the large group) and the allyl boron approaching past the hydrogen (the small group). The structure is shown in Newman projection. In this case the nucleophilic addition reaction happens at the face where the hydrogen (the small group) is, producing the (R, R)-isomer as the major product. Chiral auxiliaries Asymmetric stereoinduction can be achieved with the use of chiral auxiliaries. Chiral auxiliaries may be reversibly attached to the substrate, inducing a diastereoselective reaction prior to cleavage, overall producing an enantioselective process. Examples of chiral auxiliaries include Evans' chiral oxazolidinone auxiliaries (for asymmetric aldol reactions), pseudoephedrine amides, and tert-butanesulfinamide imines. Substrate control: asymmetric induction by molecular framework in cyclic systems Cyclic molecules often exist in much more rigid conformations than their linear counterparts.
Even very large macrocycles like erythromycin exist in defined geometries despite having many degrees of freedom. Because of these properties, it is often easier to achieve asymmetric induction with macrocyclic substrates rather than linear ones. Early experiments performed by W. Clark Still and colleagues showed that medium- and large-ring organic molecules can provide striking levels of stereoinduction as substrates in reactions such as kinetic enolate alkylation, dimethylcuprate addition, and catalytic hydrogenation. Even a single methyl group is often sufficient to bias the diastereomeric outcome of the reaction. These studies, among others, helped challenge the widely held scientific belief that large rings are too floppy to provide any kind of stereochemical control. A number of total syntheses have made use of macrocyclic stereocontrol to achieve desired reaction products. In the synthesis of (−)-cladiella-6,11-dien-3-ol, a strained trisubstituted olefin was dihydroxylated diastereoselectively with N-methylmorpholine N-oxide (NMO) and osmium tetroxide, in the presence of an unstrained olefin. En route to (±)-periplanone B, chemists achieved a face-selective epoxidation of an enone intermediate using tert-butyl hydroperoxide in the presence of two other alkenes. Sodium borohydride reduction of a 10-membered ring enone intermediate en route to the sesquiterpene eucannabinolide proceeded as predicted by molecular modelling calculations that accounted for the lowest energy macrocycle conformation. Substrate-controlled synthetic schemes have many advantages, since they do not require the use of complex asymmetric reagents to achieve selective transformations. Reagent control: addition of chiral allylmetals to achiral aldehydes In organic synthesis, reagent control is an approach to selectively forming one stereoisomer out of many; the stereoselectivity is determined by the structure and chirality of the reagent used. When chiral allylmetals are used for nucleophilic addition reactions to achiral aldehydes, the chirality of the newly generated alcohol carbon is determined by the chirality of the allylmetal reagents (Figure 1). The chirality of the allylmetals usually comes from the asymmetric ligands used. The metals in the allylmetal reagents include boron, tin, titanium, silicon, etc. Various chiral ligands have been developed to prepare chiral allylmetals for the reaction with aldehydes. H. C. Brown was the first to report the chiral allylboron reagents for asymmetric allylation reactions with aldehydes. The chiral allylboron reagents were synthesized from the natural product (+)-α-pinene in two steps. The TADDOL ligands developed by Dieter Seebach have been used to prepare chiral allyltitanium compounds for asymmetric allylation with aldehydes. Jim Leighton has developed chiral allylsilicon compounds in which the release of ring strain facilitated the stereoselective allylation reaction; 95% to 98% enantiomeric excess could be achieved for a range of achiral aldehydes. See also Macrocyclic stereocontrol Cieplak effect References External links The Evolution of Models for Carbonyl Addition Evans Group Afternoon Seminar Sarah Siska February 9, 2001 Stereochemistry
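The selectivities discussed throughout this article are commonly reported as diastereomer ratios, which relate directly to the free-energy gap between the competing transition states. As a rough illustration of the scale of the effect (the ΔΔG‡ value below is hypothetical, chosen purely for demonstration), a short Python sketch:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def diastereomer_ratio(ddG_kJ, T=298.15):
    """Major:minor product ratio for a free-energy gap ddG (kJ/mol) between
    two competing diastereomeric transition states (Boltzmann weighting)."""
    return math.exp(ddG_kJ * 1000 / (R * T))

# A hypothetical 1.5 kJ/mol preference for the Felkin transition state:
r = diastereomer_ratio(1.5)
print(r, r / (1 + r))   # ratio ~1.8:1, i.e. roughly a 65:35 d.r.
```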
Asymmetric induction
[ "Physics", "Chemistry" ]
4,819
[ "Stereochemistry", "Space", "nan", "Asymmetry", "Spacetime", "Symmetry" ]
4,882,496
https://en.wikipedia.org/wiki/Parks%E2%80%93McClellan%20filter%20design%20algorithm
The Parks–McClellan algorithm, published by James McClellan and Thomas Parks in 1972, is an iterative algorithm for finding the optimal Chebyshev finite impulse response (FIR) filter. The Parks–McClellan algorithm is utilized to design and implement efficient and optimal FIR filters. It uses an indirect method for finding the optimal filter coefficients. The goal of the algorithm is to minimize the error in the pass and stop bands by utilizing the Chebyshev approximation. The Parks–McClellan algorithm is a variation of the Remez exchange algorithm, with the change that it is specifically designed for FIR filters. It has become a standard method for FIR filter design. History History of optimal FIR filter design In the 1960s, researchers within the field of analog filter design were using the Chebyshev approximation for filter design. During this time, it was well known that the best filters contain an equiripple characteristic in their frequency response magnitude and the elliptic filter (or Cauer filter) was optimal with regards to the Chebyshev approximation. When the digital filter revolution began in the 1960s, researchers used a bilinear transform to produce infinite impulse response (IIR) digital elliptic filters. They also recognized the potential for designing FIR filters to accomplish the same filtering task and soon the search was on for the optimal FIR filter using the Chebyshev approximation. It was well known in both mathematics and engineering that the optimal response would exhibit an equiripple behavior and that the number of ripples could be counted using the Chebyshev approximation. Several attempts to produce a design program for the optimal Chebyshev FIR filter were undertaken in the period between 1962 and 1971. Despite the numerous attempts, most did not succeed, usually due to problems in the algorithmic implementation or problem formulation. Otto Herrmann, for example, proposed a method for designing equiripple filters with restricted band edges. This method obtained an equiripple frequency response with the maximum number of ripples by solving a set of nonlinear equations. Another method introduced at the time implemented an optimal Chebyshev approximation, but the algorithm was limited to the design of relatively low-order filters. Similar to Herrmann's method, Ed Hofstetter presented an algorithm that designed FIR filters with as many ripples as possible. This has become known as the Maximal Ripple algorithm. The Maximal Ripple algorithm imposed an alternating error condition via interpolation and then solved a set of equations that the alternating solution had to satisfy. One notable limitation of the Maximal Ripple algorithm was that the band edges were not specified as inputs to the design procedure. Rather, the initial frequency set {ωi} and the desired function D(ωi) defined the pass and stop band implicitly. Unlike previous attempts to design an optimal filter, the Maximal Ripple algorithm used an exchange method that tried to find the frequency set {ωi} where the best filter had its ripples. Thus, the Maximal Ripple algorithm was not an optimal filter design but it had quite a significant impact on how the Parks–McClellan algorithm would formulate. History of Parks–McClellan In August 1970, James McClellan entered graduate school at Rice University with a concentration in mathematical models of analog filter design and enrolled in a new course called "Digital Filters" due to his interest in filter design. 
The course was taught jointly by Thomas Parks and Sid Burrus. At that time, DSP was an emerging field and as a result lectures often involved recently published research papers. The following semester, the spring of 1971, Thomas Parks offered a course called "Signal Theory," which McClellan took as well. During spring break of the semester, Parks drove from Houston to Princeton in order to attend a conference, where he heard Ed Hofstetter's presentation about a new FIR filter design algorithm (Maximal Ripple algorithm). He brought the paper by Hofstetter, Oppenheim, and Siegel, back to Houston, thinking about the possibility of using the Chebyshev approximation theory to design FIR filters. He heard that the method implemented in Hofstetter's algorithm was similar to the Remez exchange algorithm and decided to pursue the path of using the Remez exchange algorithm. The students in the "Signal Theory" course were required to do a project and since Chebyshev approximation was a major topic in the course, the implementation of this new algorithm became James McClellan's course project. This ultimately led to the Parks–McClellan algorithm, which involved the theory of optimal Chebyshev approximation and an efficient implementation. By the end of the spring semester, McClellan and Parks were attempting to write a variation of the Remez exchange algorithm for FIR filters. It took about six weeks to develop and some optimal filters had been designed successfully by the end of May. The algorithm The Parks–McClellan Algorithm is implemented using the following steps: Initialization: Choose an extremal set of frequencies {ωi(0)}. Finite Set Approximation: Calculate the best Chebyshev approximation on the present extremal set, giving a value δ(m) for the min-max error on the present extremal set. Interpolation: Calculate the error function E(ω) over the entire set of frequencies Ω using (2). Look for local maxima of |E(m)(ω)| on the set Ω. If max(ω∈Ω)|E(m)(ω)| > δ(m), then update the extremal set to {ωi(m+1)} by picking new frequencies where |E(m)(ω)| has its local maxima. Make sure that the error alternates on the ordered set of frequencies as described in (4) and (5). Return to Step 2 and iterate. If max(ω∈Ω)|E(m)(ω)| ≤ δ(m), then the algorithm is complete. Use the final set {ωi(m)} and the interpolation formula to compute an inverse discrete Fourier transform to obtain the filter coefficients. The Parks–McClellan Algorithm may be restated as the following steps: Make an initial guess of the L+2 extremal frequencies. Compute δ using the equation given. Using Lagrange Interpolation, we compute the dense set of samples of A(ω) over the passband and stopband. Determine the new L+2 largest extrema. If the alternation theorem is not satisfied, then we go back to (2) and iterate until the alternation theorem is satisfied. If the alternation theorem is satisfied, then we compute h(n) and we are done. To gain a basic understanding of the Parks–McClellan Algorithm mentioned above, we can rewrite the algorithm above in a simpler form as: Guess that the positions of the extrema are evenly spaced in the pass and stop bands. Perform polynomial interpolation and re-estimate positions of the local extrema. Move extrema to new positions and iterate until the extrema stop shifting. Explanation The picture above on the right displays the various extremal frequencies for the plot shown. The extremal frequencies are the maximum and minimum points in the stop and pass bands.
The stop band ripple is the lower portion of ripples on the bottom right of the plot and the pass band ripple is the upper portion of the ripples on the top left of the plot. The dashed lines going across the plot indicate the δ or maximum error. Given the positions of the extremal frequencies, there is a formula for the optimum δ or optimum error. Since we do not know the optimum δ or the exact positions of the extrema on the first attempt, we iterate. Effectively, we assume the positions of the extrema initially, and calculate δ. We then re-estimate and move the extrema and recalculate δ, or the error. We repeat this process until δ stops changing. The algorithm will cause the δ error to converge, generally within ten to twelve iterations. Additional notes Before applying the Chebyshev approximation, a set of steps were necessary: Define the set of basis function for the approximation, and Exploit the fact that the pass and stop bands of bandpass filters would always be separated by transition regions. Since FIR filters could be reduced to the sum of cosines case, the same core program could be used to perform all possible linear-phase FIR filters. In contrast to the Maximum Ripple approach, the band edges could now be specified ahead of time. To achieve an efficient implementation of the optimal filter design using the Parks-McClellan algorithm, two difficulties have to be overcome: Defining a flexible exchange strategy, and Implementing a robust interpolation method. In some sense, the programming involved the implementation and adaptation of a known algorithm for use in FIR filter design. Two faces of the exchange strategy were taken to make the program more efficient: Allocating the extremal frequencies between the pass and stop bands, and Enabling movement of the extremals between the bands as the program iterated. At initialization, the number of extremals in the pass and stop band could be assigned by using the ratio of the sizes of the bands. Furthermore, the pass and stop band edge would always be placed in the extremal set, and the program's logic kept those edge frequencies in the extremal set. The movement between bands was controlled by comparing the size of the errors at all the candidate extremal frequencies and taking the largest. The second element of the algorithm was the interpolation step needed to evaluate the error function. They used a method called the Barycentric form of Lagrange interpolation, which was very robust. All conditions for the Parks–McClellan algorithm are based on Chebyshev's alternation theorem. The alternation theorem states that the polynomial of degree L that minimizes the maximum error will have at least L+2 extrema. The optimal frequency response will barely reach the maximum ripple bounds. The extrema must occur at the pass and stop band edges and at either ω=0 or ω=π or both. The derivative of a polynomial of degree L is a polynomial of degree L−1, which can be zero at most at L−1 places. So the maximum number of local extrema is the L−1 local extrema plus the 4 band edges, giving a total of L+3 extrema. 
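In practice the algorithm is available in standard signal-processing libraries, so a designer rarely re-implements the exchange loop. A minimal sketch using SciPy's implementation of the Parks–McClellan design (the band edges, tap count, and sampling rate below are arbitrary illustration values):

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 1000.0     # sampling rate in Hz (illustrative)
numtaps = 73    # filter length; more taps give a narrower transition band

# Low-pass design: pass 0-100 Hz, stop 150-500 Hz, transition band between.
taps = remez(numtaps, bands=[0, 100, 150, 0.5 * fs], desired=[1, 0], fs=fs)

# Inspect the equiripple behavior of the resulting frequency response.
w, h = freqz(taps, worN=2048, fs=fs)
print(np.max(np.abs(np.abs(h[w <= 100]) - 1)))   # passband ripple
print(np.max(np.abs(h[w >= 150])))               # stopband ripple
```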
References Additional references The following additional links provide information on the Parks–McClellan Algorithm, as well as on other research and papers written by James McClellan and Thomas Parks: Chebyshev Approximation for Nonrecursive Digital Filters with Linear Phase Short Help on Parks–McClellan Design of FIR Low Pass Filters Using MATLAB Intro to DSP The MathWorks MATLAB documentation ELEC4600 Lecture Notes (original link, archived on 15 Apr 2012) C Code Implementation (LGPL License) – By Jake Janovetz Revised and expanded algorithm McClellan, Parks, & Rabiner, 1975; Fortran code. Digital signal processing Filter theory
Parks–McClellan filter design algorithm
[ "Engineering" ]
2,289
[ "Telecommunications engineering", "Filter theory" ]
4,882,514
https://en.wikipedia.org/wiki/Remez%20algorithm
The Remez algorithm or Remez exchange algorithm, published by Evgeny Yakovlevich Remez in 1934, is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a Chebyshev space that are the best in the uniform norm L∞ sense. It is sometimes referred to as Remes algorithm or Reme algorithm. A typical example of a Chebyshev space is the subspace of Chebyshev polynomials of order n in the space of real continuous functions on an interval, C[a, b]. The polynomial of best approximation within a given subspace is defined to be the one that minimizes the maximum absolute difference between the polynomial and the function. In this case, the form of the solution is precised by the equioscillation theorem. Procedure The Remez algorithm starts with the function to be approximated and a set of sample points in the approximation interval, usually the extrema of Chebyshev polynomial linearly mapped to the interval. The steps are: Solve the linear system of equations (where ), for the unknowns and E. Use the as coefficients to form a polynomial . Find the set of points of local maximum error . If the errors at every are of equal magnitude and alternate in sign, then is the minimax approximation polynomial. If not, replace with and repeat the steps above. The result is called the polynomial of best approximation or the minimax approximation algorithm. A review of technicalities in implementing the Remez algorithm is given by W. Fraser. Choice of initialization The Chebyshev nodes are a common choice for the initial approximation because of their role in the theory of polynomial interpolation. For the initialization of the optimization problem for function f by the Lagrange interpolant Ln(f), it can be shown that this initial approximation is bounded by with the norm or Lebesgue constant of the Lagrange interpolation operator Ln of the nodes (t1, ..., tn + 1) being T being the zeros of the Chebyshev polynomials, and the Lebesgue functions being Theodore A. Kilgore, Carl de Boor, and Allan Pinkus proved that there exists a unique ti for each Ln, although not known explicitly for (ordinary) polynomials. Similarly, , and the optimality of a choice of nodes can be expressed as For Chebyshev nodes, which provides a suboptimal, but analytically explicit choice, the asymptotic behavior is known as ( being the Euler–Mascheroni constant) with for and upper bound Lev Brutman obtained the bound for , and being the zeros of the expanded Chebyshev polynomials: Rüdiger Günttner obtained from a sharper estimate for Detailed discussion This section provides more information on the steps outlined above. In this section, the index i runs from 0 to n+1. Step 1: Given , solve the linear system of n+2 equations (where ), for the unknowns and E. It should be clear that in this equation makes sense only if the nodes are ordered, either strictly increasing or strictly decreasing. Then this linear system has a unique solution. (As is well known, not every linear system has a solution.) Also, the solution can be obtained with only arithmetic operations while a standard solver from the library would take operations. Here is the simple proof: Compute the standard n-th degree interpolant to at the first n+1 nodes and also the standard n-th degree interpolant to the ordinates To this end, use each time Newton's interpolation formula with the divided differences of order and arithmetic operations. 
The polynomial has its i-th zero between and , and thus no further zeroes between and : and have the same sign . The linear combination is also a polynomial of degree n and This is the same as the equation above for and for any choice of E. The same equation for i = n+1 is and needs special reasoning: solved for the variable E, it is the definition of E: As mentioned above, the two terms in the denominator have same sign: E and thus are always well-defined. The error at the given n+2 ordered nodes is positive and negative in turn because The theorem of de La Vallée Poussin states that under this condition no polynomial of degree n exists with error less than E. Indeed, if such a polynomial existed, call it , then the difference would still be positive/negative at the n+2 nodes and therefore have at least n+1 zeros which is impossible for a polynomial of degree n. Thus, this E is a lower bound for the minimum error which can be achieved with polynomials of degree n. Step 2 changes the notation from to . Step 3 improves upon the input nodes and their errors as follows. In each P-region, the current node is replaced with the local maximizer and in each N-region is replaced with the local minimizer. (Expect at A, the near , and at B.) No high precision is required here, the standard line search with a couple of quadratic fits should suffice. (See ) Let . Each amplitude is greater than or equal to E. The Theorem of de La Vallée Poussin and its proof also apply to with as the new lower bound for the best error possible with polynomials of degree n. Moreover, comes in handy as an obvious upper bound for that best possible error. Step 4: With and as lower and upper bound for the best possible approximation error, one has a reliable stopping criterion: repeat the steps until is sufficiently small or no longer decreases. These bounds indicate the progress. Variants Some modifications of the algorithm are present on the literature. These include: Replacing more than one sample point with the locations of nearby maximum absolute differences. Replacing all of the sample points with in a single iteration with the locations of all, alternating sign, maximum differences. Using the relative error to measure the difference between the approximation and the function, especially if the approximation will be used to compute the function on a computer which uses floating point arithmetic; Including zero-error point constraints. The Fraser-Hart variant, used to determine the best rational Chebyshev approximation. See also References External links Minimax Approximations and the Remez Algorithm, background chapter in the Boost Math Tools documentation, with link to an implementation in C++ Intro to DSP Polynomials Approximation theory Numerical analysis
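To make the exchange procedure concrete, here is a compact Python sketch of the iteration for the degree-n minimax polynomial. It is a simplified single-routine version, not a robust implementation: the extremum search is a plain scan over a dense grid, and no stopping test on the de La Vallée Poussin bounds is included.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def remez_poly(f, a, b, n, iters=10):
    """Sketch of the Remez exchange for the degree-n minimax polynomial
    approximation of f on [a, b]."""
    k = np.arange(n + 2)
    # Initialize the reference set at Chebyshev extrema mapped to [a, b].
    x = np.sort(0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * k / (n + 1)))
    grid = np.linspace(a, b, 20_001)
    for _ in range(iters):
        # Solve sum_j c_j x_i^j + (-1)^i E = f(x_i) for the coefficients c_j
        # and the levelled error E (the linear system of step 1 above).
        A = np.hstack([np.vander(x, n + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        *c, E = np.linalg.solve(A, f(x))
        # Exchange step: move each reference point to the extremum of the
        # error within its own segment of the interval.
        err = f(grid) - P.polyval(grid, np.array(c))
        bounds = np.concatenate(([a], 0.5 * (x[1:] + x[:-1]), [b]))
        for i in range(n + 2):
            seg = (grid >= bounds[i]) & (grid <= bounds[i + 1])
            x[i] = grid[seg][np.argmax(np.abs(err[seg]))]
    return np.array(c), abs(E)

coeffs, E = remez_poly(np.exp, -1.0, 1.0, n=5)
print(E)   # levelled equioscillation error of the degree-5 fit to exp
```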
Remez algorithm
[ "Mathematics" ]
1,338
[ "Approximation theory", "Polynomials", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Approximations", "Algebra" ]
4,882,595
https://en.wikipedia.org/wiki/Minimax%20approximation%20algorithm
A minimax approximation algorithm (or L∞ approximation or uniform approximation) is a method to find an approximation of a mathematical function that minimizes maximum error. For example, given a function defined on the interval and a degree bound , a minimax polynomial approximation algorithm will find a polynomial of degree at most to minimize Polynomial approximations The Weierstrass approximation theorem states that every continuous function defined on a closed interval [a,b] can be uniformly approximated as closely as desired by a polynomial function. For practical work it is often desirable to minimize the maximum absolute or relative error of a polynomial fit for any given number of terms in an effort to reduce computational expense of repeated evaluation. Polynomial expansions such as the Taylor series expansion are often convenient for theoretical work but less useful for practical applications. Truncated Chebyshev series, however, closely approximate the minimax polynomial. One popular minimax approximation algorithm is the Remez algorithm. References External links Minimax approximation algorithm at MathWorld Numerical analysis
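The remark above that truncated Chebyshev series closely approximate the minimax polynomial is easy to check numerically. A small sketch using NumPy (the degree and test function are arbitrary choices):

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C

deg = 5
xs = np.linspace(-1.0, 1.0, 10_001)

# Degree-5 Chebyshev interpolant of exp on [-1, 1] (a near-minimax fit).
cheb_coeffs = C.chebinterpolate(np.exp, deg)
err_cheb = np.max(np.abs(np.exp(xs) - C.chebval(xs, cheb_coeffs)))

# Degree-5 Taylor polynomial of exp about 0, for comparison.
taylor = sum(xs**k / math.factorial(k) for k in range(deg + 1))
err_taylor = np.max(np.abs(np.exp(xs) - taylor))

print(err_cheb, err_taylor)   # the Chebyshev fit's maximum error is ~10x smaller
```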
Minimax approximation algorithm
[ "Mathematics" ]
202
[ "Mathematical relations", "Computational mathematics", "Approximations", "Numerical analysis" ]
1,285,760
https://en.wikipedia.org/wiki/Penetrating%20oil
Penetrating oil, also known as penetrating fluid, is a low-viscosity oil. It can be used to free rusted mechanical parts (such as nuts and bolts) so that they can be removed, because it can penetrate into the narrow space between the threads of two parts. It can also be used as a cleaner; however, it should not be used as a general-purpose lubricant or a corrosion stopper. Using penetrating fluids as general-purpose lubricants is not advisable, because such oils are relatively volatile. As a result, much of the penetrating oil will evaporate in a short amount of time, leaving little residual lubricant. Other uses include removing chewing gum and adhesive stickers, and lessening friction on metal-stringed musical instruments. External links How to make penetrating oil, by Rob Goodier on "Engineering for change". oils
Penetrating oil
[ "Physics", "Chemistry" ]
186
[ "Oils", "Carbohydrates", "Materials stubs", "Materials", "Matter" ]
1,285,827
https://en.wikipedia.org/wiki/Steam%20distillation
Steam distillation is a separation process that consists of distilling water together with other volatile and non-volatile components. The steam from the boiling water carries the vapor of the volatiles to a condenser; both are cooled and return to the liquid or solid state, while the non-volatile residues remain behind in the boiling container. If, as is usually the case, the volatiles are not miscible with water, they will spontaneously form a distinct phase after condensation, allowing them to be separated by decantation or with a separatory funnel. Steam distillation can be used when the boiling point of the substance to be extracted is higher than that of water, and the starting material cannot be heated to that temperature because of decomposition or other unwanted reactions. It may also be useful when the amount of the desired substance is small compared to that of the non-volatile residues. It is often used to separate volatile essential oils from plant material. For example, it can be used to extract limonene (boiling point 176 °C) from orange peels. Steam distillation once was a popular laboratory method for purification of organic compounds, but it has been replaced in many such uses by vacuum distillation and supercritical fluid extraction. It is however much simpler and more economical than those alternatives, and remains important in certain industrial sectors. In the simplest form, water distillation or hydrodistillation, the water is mixed with the starting material in the boiling container. In direct steam distillation, the starting material is suspended above the water in the boiling flask, supported by a metal mesh or perforated screen. In dry steam distillation, the steam from a boiler is forced to flow through the starting material in a separate container. The latter variant allows the steam to be heated above the boiling point of water (thus becoming superheated steam), for more efficient extraction. History Steam distillation is used in many of the recipes given in the ('Book of Gentleness on Perfume'), also known as the ('Book of the Chemistry of Perfume and Distillations'), attributed to the early Arabic philosopher al-Kindi (c. 801–873). Steam distillation was also used by the Persian philosopher and physician Avicenna (980–1037) to produce essential oils by adding water to rose petals and distilling the mixture. The process was also used by al-Dimashqi (1256–1327) to produce rose water on a large scale. Principle Every substance has some vapor pressure even below its boiling point, so in theory it could be distilled at any temperature by collecting and condensing its vapors. However, ordinary distillation below the boiling point is not practical because a layer of vapor-rich air would form over the liquid, and evaporation would stop as soon as the partial pressure of the vapor in that layer reached the vapor pressure. The vapor would then flow to the condenser only by diffusion, which is an extremely slow process. Simple distillation is generally done by boiling the starting material, because, once its vapor pressure exceeds atmospheric pressure, that still vapor-rich layer of air will be disrupted, and there will be a significant and steady flow of vapor from the boiling flask to the condenser. In steam distillation, that positive flow is provided by steam from boiling water, rather than by the boiling of the substances of interest. The steam carries with it the vapors of the latter. The substance of interest does not need to be miscible with water or soluble in it.
It suffices that it has significant vapor pressure at the steam's temperature. If the water forms an azeotrope with the substances of interest, the boiling point of the mixture may be lower than the boiling point of water. For example, bromobenzene boils at 156 °C (at normal atmospheric pressure), but a mixture with water boils at 95 °C. However, the formation of an azeotrope is not necessary for steam distillation to work. Applications Steam distillation is often employed in the isolation of essential oils, for use in perfumes, for example. In this method, steam is passed through the plant material containing the desired oils. Eucalyptus oil, camphor oil and orange oil are obtained by this method on an industrial scale. Steam distillation is a means of purifying fatty acids, e.g. from tall oils. Steam distillation is sometimes used in the chemical laboratory. Illustrative is a classic preparation of bromobiphenyl where steam distillation is used to first remove the excess benzene and subsequently to purify the brominated product. In one preparation of benzophenone, steam is employed to first recover unreacted carbon tetrachloride and subsequently to hydrolyze the intermediate benzophenone dichloride into benzophenone, which is in fact not steam distilled. In one preparation of a purine, steam distillation is used to remove volatile benzaldehyde from the nonvolatile product. Equipment On a lab scale, steam distillations are carried out using steam generated outside the system and piped through the mixture to be purified. Steam can also be generated in-situ using a Clevenger-type apparatus. See also Azeotropic distillation Batch distillation Distillation Extractive distillation Fractional distillation Heteroazeotrope Herbal distillates Hydrodistillation Laboratory equipment Steam engine Steam stripping Supercritical fluid extraction Theoretical plate References Distillation
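A practical consequence of the principle above is that the distillate composition follows directly from the pure-component vapor pressures: for immiscible liquids each component exerts its full vapor pressure, so the mixture boils when the pressures sum to 1 atm, and the mole ratio in the vapor equals the ratio of vapor pressures. A short illustrative calculation in Python for the bromobenzene/water example (the vapor pressure of water quoted below is an approximate handbook-style value, used here only for illustration):

```python
M_water, M_PhBr = 18.02, 157.01   # molar masses, g/mol
p_total = 101.3                   # the mixture boils when vapor pressures sum to ~1 atm, kPa
p_water = 84.5                    # approximate vapor pressure of water near 95 C, kPa
p_PhBr = p_total - p_water        # partial pressure contributed by bromobenzene

# Mole ratio in the vapor = ratio of vapor pressures; weight by molar mass
# to get the mass ratio of the two phases collected in the distillate.
mass_ratio = (p_PhBr * M_PhBr) / (p_water * M_water)
print(mass_ratio)                     # ~1.7
print(mass_ratio / (1 + mass_ratio))  # the distillate is ~63% bromobenzene by mass
```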
Steam distillation
[ "Chemistry" ]
1,159
[ "Distillation", "Separation processes" ]
1,286,190
https://en.wikipedia.org/wiki/Lithium%20chloride
Lithium chloride is a chemical compound with the formula LiCl. The salt is a typical ionic compound (with certain covalent characteristics), although the small size of the Li+ ion gives rise to properties not seen for other alkali metal chlorides, such as extraordinary solubility in polar solvents (83.05 g/100 mL of water at 20 °C) and its hygroscopic properties. Chemical properties The salt forms crystalline hydrates, unlike the other alkali metal chlorides. Mono-, tri-, and pentahydrates are known. The anhydrous salt can be regenerated by heating the hydrates. LiCl also absorbs up to four equivalents of ammonia/mol. As with any other ionic chloride, solutions of lithium chloride can serve as a source of chloride ion, e.g., forming a precipitate upon treatment with silver nitrate: LiCl + AgNO3 → AgCl + LiNO3 Preparation Lithium chloride is produced by treatment of lithium carbonate with hydrochloric acid. Anhydrous LiCl is prepared from the hydrate by heating in a stream of hydrogen chloride. Uses Commercial applications Lithium chloride is mainly used for the production of lithium metal by electrolysis of a LiCl/KCl melt at about 450 °C. LiCl is also used as a brazing flux for aluminium in automobile parts. It is used as a desiccant for drying air streams. In more specialized applications, lithium chloride finds some use in organic synthesis, e.g., as an additive in the Stille reaction. Also, in biochemical applications, it can be used to precipitate RNA from cellular extracts. Lithium chloride is also used as a flame colorant to produce dark red flames. Niche uses Lithium chloride is used as a relative humidity standard in the calibration of hygrometers. At 25 °C, a saturated solution (45.8%) of the salt will yield an equilibrium relative humidity of 11.30%. Additionally, lithium chloride can be used as a hygrometer. This deliquescent salt forms a self-solution when exposed to air. The equilibrium LiCl concentration in the resulting solution is directly related to the relative humidity of the air. The percent relative humidity at 25 °C can be estimated, with minimal error in the range 10–30 °C, from the following first-order equation: RH=107.93-2.11C, where C is solution LiCl concentration, percent by mass. Molten LiCl is used for the preparation of carbon nanotubes, graphene and lithium niobate. Lithium chloride has been shown to have strong acaricidal properties, being effective against Varroa destructor in populations of honey bees. Lithium chloride is used as an aversive agent in lab animals to study conditioned place preference and aversion. Precautions Lithium salts affect the central nervous system in a variety of ways. While the citrate, carbonate, and orotate salts are currently used to treat bipolar disorder, other lithium salts including the chloride were used in the past. For a short time in the 1940s lithium chloride was manufactured as a salt substitute for people with hypertension, but this was prohibited after the toxic effects of the compound (tremors, fatigue, nausea) were recognized. It was, however, noted by J. H. Talbott that many symptoms attributed to lithium chloride toxicity may have also been attributable to sodium chloride deficiency, to the diuretics often administered to patients who were given lithium chloride, or to the patients' underlying conditions. See also Lithium chloride (data page) Solubility table References Handbook of Chemistry and Physics, 71st edition, CRC Press, Ann Arbor, Michigan, 1990. N. N. Greenwood, A.
Molten LiCl is used for the preparation of carbon nanotubes, graphene and lithium niobate. Lithium chloride has been shown to have strong acaricidal properties, being effective against Varroa destructor in populations of honey bees. Lithium chloride is also used as an aversive agent in lab animals to study conditioned place preference and aversion.

Precautions
Lithium salts affect the central nervous system in a variety of ways. While the citrate, carbonate, and orotate salts are currently used to treat bipolar disorder, other lithium salts, including the chloride, were used in the past. For a short time in the 1940s lithium chloride was manufactured as a salt substitute for people with hypertension, but this was prohibited after the toxic effects of the compound (tremors, fatigue, nausea) were recognized. It was, however, noted by J. H. Talbott that many symptoms attributed to lithium chloride toxicity may also have been attributable to sodium chloride deficiency, to the diuretics often administered to patients who were given lithium chloride, or to the patients' underlying conditions.

See also
Lithium chloride (data page)
Solubility table

References
Handbook of Chemistry and Physics, 71st edition, CRC Press, Ann Arbor, Michigan, 1990.
N. N. Greenwood, A. Earnshaw, Chemistry of the Elements, 2nd ed., Butterworth-Heinemann, Oxford, UK, 1997.
R. Vatassery, titration analysis of LiCl, saturated in ethanol, by AgNO3 to precipitate AgCl(s); the endpoint of this titration gives %Cl by mass.
H. Nechamkin, The Chemistry of the Elements, McGraw-Hill, New York, 1968.

External links
Radiochemical measurements of activity coefficients, from Betts & MacKenzie, Can. J. Chem.

Chlorides Alkali metal chlorides Lithium salts Metal halides Mood stabilizers Desiccants Rock salt crystal structure
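As a postscript to the Vatassery titration entry in the references above, the expected percent chloride can be checked from molar masses. A minimal Python sketch, assuming standard rounded atomic weights (6.94, 35.45 and 107.87 g/mol for Li, Cl and Ag); these values are not taken from the article itself.

# Theoretical %Cl by mass in LiCl, the figure a clean titration endpoint
# should reproduce, plus the mass of AgCl precipitate per gram of LiCl.
LI, CL, AG = 6.94, 35.45, 107.87  # g/mol, rounded standard atomic weights

M_LICL = LI + CL   # ~42.39 g/mol
M_AGCL = AG + CL   # ~143.32 g/mol

print(round(100 * CL / M_LICL, 1))  # ~83.6 %Cl by mass
print(round(M_AGCL / M_LICL, 2))    # ~3.38 g AgCl per g LiCl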
Lithium chloride
[ "Physics", "Chemistry" ]
906
[ "Chlorides", "Inorganic compounds", "Lithium salts", "Salts", "Desiccants", "Materials", "Metal halides", "Matter" ]
1,286,222
https://en.wikipedia.org/wiki/Pyrometric%20cone
Pyrometric cones are pyrometric devices that are used to gauge heatwork during the firing of ceramic materials in a kiln. The cones, often used in sets of three, are positioned in a kiln with the wares to be fired and, because the individual cones in a set soften and fall over at different temperatures, they provide a visual indication of when the wares have reached a required state of maturity, a combination of time and temperature. Pyrometric cones give a temperature equivalent; they are not simple temperature-measuring devices.

Definition
The pyrometric cone is "A pyramid with a triangular base and of a defined shape and size; the "cone" is shaped from a carefully proportioned and uniformly mixed batch of ceramic materials so that when it is heated under stated conditions, it will bend due to softening, the tip of the cone becoming level with the base at a definitive temperature. Pyrometric cones are made in series, the temperature interval between the successive cones usually being 20 degrees Celsius. The best known series are Seger Cones (Germany), Orton Cones (USA) and Staffordshire Cones (UK)."

Usage
For some products, such as porcelain and lead-free glazes, it can be advantageous to fire within a two-cone range. The three-cone system can be used to determine temperature uniformity and to check the performance of an electronic controller. The three-cone system consists of three consecutively numbered cones:

Guide cone – one cone number cooler than the firing cone.
Firing cone – the cone recommended by the manufacturer of the glaze, slip, etc.
Guard cone – one cone number hotter than the firing cone.

Additionally, most kilns have temperature differences from top to bottom. The amount of difference depends on the design of the kiln, the age of the heating elements, the load distribution in the kiln, and the cone number to which the kiln is fired. Usually, kilns have a greater temperature difference at cooler cone numbers. Cones should be used on the lower, middle and top shelves to determine how much difference exists during firing. This will aid in the way the kiln is loaded and fired to reduce the difference. Downdraft venting will also even out temperature variance.

Both temperature and time, and sometimes atmosphere, affect the final bending position of a cone. Temperature is the predominant variable. The temperature is referred to as an equivalent temperature, since actual firing conditions may vary somewhat from those in which the cones were originally standardized. Observation of cone bending is used to determine when a kiln has reached a desired state. Additionally, small cones or bars can be arranged to mechanically trigger kiln controls when the temperature rises enough for them to deform. Precise, consistent placement of large and small cones must be followed to ensure the proper temperature equivalent is being reached. Every effort needs to be made to always have the cone inclined at 8° from the vertical. Large cones must be mounted 2 inches above the plaque and small cones 15/16 inch above it. Because they incorporate their own base, "self-supporting cones" eliminate such mounting errors.

Pyrometric cones can be used in a "kiln sitter", a device which senses the softening of a cone and produces a mechanical output through a trigger assembly, typically to switch off the kiln.
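The guide/firing/guard relationship described above lends itself to a small lookup. The following Python sketch is illustrative only: cone designations are ordered strings rather than integers ("022" is the coolest shown, "01" is cooler than "1"), the list below is an assumed representative slice of the common numbering scheme rather than a complete or manufacturer-specific series, and actual cone selections must come from the cone maker's tables.

# Cone designations from coolest to hottest (representative slice only).
CONE_SCALE = [
    "022", "021", "020", "019", "018", "017", "016", "015", "014",
    "013", "012", "011", "010", "09", "08", "07", "06", "05", "04",
    "03", "02", "01", "1", "2", "3", "4", "5", "6", "7", "8", "9",
    "10", "11", "12", "13", "14",
]

def three_cone_set(firing_cone: str) -> tuple:
    """Return (guide, firing, guard): one cone number cooler, the
    firing cone itself, and one cone number hotter."""
    i = CONE_SCALE.index(firing_cone)  # raises ValueError if unknown
    if i == 0 or i == len(CONE_SCALE) - 1:
        raise ValueError("no neighbouring cone on this slice of the scale")
    return CONE_SCALE[i - 1], firing_cone, CONE_SCALE[i + 1]

print(three_cone_set("6"))   # ('5', '6', '7')
print(three_cone_set("06"))  # ('07', '06', '05')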
Control of variability
Pyrometric cones are sensitive measuring devices, and it is important to users that they remain consistent in the way that they react to heating. Cone manufacturers follow procedures to control variability (within batches and between batches) to ensure that cones of a given grade remain consistent in their properties over long periods. A number of national standards and an ISO standard have been published regarding pyrometric cones. Even though cones from different manufacturers can have relatively similar numbering systems, they are not identical in their characteristics; if a change is made from one manufacturer to another, allowances for the differences can sometimes be necessary.

History
In 1782, Josiah Wedgwood created an accurately scaled pyrometric device, with details published in the Philosophical Transactions of the Royal Society of London in 1782 (Vol. LXXII, part 2). This led him to be elected a fellow of the Royal Society. The modern form of the pyrometric cone was developed by Hermann Seger and first used to control the firing of porcelain wares in 1886 at the Royal Porcelain Factory, Berlin (Königliche Porzellanmanufaktur), where Seger was director. Seger cones are made by a small number of companies, and the term is often used as a synonym for pyrometric cones. The Standard Pyrometric Cone Company was founded in Columbus, Ohio, by Edward J. Orton, Jr. in 1896 to manufacture pyrometric cones; following his death, a charitable trust was established to operate the company, which is known as the Edward Orton Jr. Ceramic Foundation, or Orton Ceramic Foundation. Pyrometric cones are often referred to as Orton Cones within the United States, but in his lifetime Orton preferred calling them Seger cones.

Ceramic art
A biennial ceramic art exhibition for small work, the Orton Cone Box Show, took the Orton Cone company's pyrometric cone box as the size constraint for submissions.

Temperature ranges
The following temperature equivalents for pyrometric cones were retrieved from references in the External Links section.

References
Hamer, Frank and Hamer, Janet (1991). The Potter's Dictionary of Materials and Techniques. Third edition. A & C Black Publishers, Limited, London, England.

External links
Temperature equivalents table Nimra & description of Nimra Cerglass pyrometric cones
Temperature equivalents table Orton Celsius Temperature Equivalents and Description of Orton Cones up to Cone 14
Temperature equivalents table of Seger pyrometric cones

Pottery Temperature
Pyrometric cone
[ "Physics", "Chemistry" ]
1,178
[ "Scalar physical quantities", "Thermodynamic properties", "Temperature", "Physical quantities", "SI base quantities", "Intensive quantities", "Thermodynamics", "Wikipedia categories named after physical quantities" ]