Dataset fields: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
25,234,160
https://en.wikipedia.org/wiki/Properties%20of%20concrete
Concrete has relatively high compressive strength (resistance to breaking when squeezed), but significantly lower tensile strength (resistance to breaking when pulled apart). The compressive strength is typically controlled with the ratio of water to cement when forming the concrete, and tensile strength is increased by additives, typically steel, to create reinforced concrete. In other words we can say concrete is made up of sand (which is a fine aggregate), ballast (which is a coarse aggregate), cement (can be referred to as a binder) and water (which is an additive). Reinforced concrete Concrete has relatively high compressive strength, but significantly lower tensile strength. As a result, without compensating, concrete would almost always fail from tensile stresses even when loaded in compression. The practical implication of this is that concrete elements subjected to tensile stresses must be reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion, and as it matures concrete shrinks. All concrete structures will crack to some extent, due to shrinkage and tension. Concrete which is subjected to long-duration forces is prone to creep. The density of concrete varies, but is around . Reinforced concrete is the most common form of concrete. The reinforcement is often steel rebar (mesh, spiral, bars and other forms). Structural fibers of various materials are available. Concrete can also be prestressed (reducing tensile stress) using internal steel cables (tendons), allowing for beams or slabs with a longer span than is practical with reinforced concrete alone. Inspection of existing concrete structures can be non-destructive if carried out with equipment such as a Schmidt hammer, which is sometimes used to estimate relative concrete strengths in the field. Mix design The ultimate strength of concrete is influenced by the water-cementitious ratio (w/cm), the design constituents, and the mixing, placement and curing methods employed. All things being equal, concrete with a lower water-cement (cementitious) ratio makes a stronger concrete than that with a higher ratio. The total quantity of cementitious materials (portland cement, slag cement, pozzolans) can affect strength, water demand, shrinkage, abrasion resistance and density. All concrete will crack independent of whether or not it has sufficient compressive strength. In fact, high Portland cement content mixtures can actually crack more readily due to increased hydration rate. As concrete transforms from its plastic state, hydrating to a solid, the material undergoes shrinkage. Plastic shrinkage cracks can occur soon after placement but if the evaporation rate is high they often can actually occur during finishing operations, for example in hot weather or a breezy day. In very high-strength concrete mixtures (greater than 70 MPa) the crushing strength of the aggregate can be a limiting factor to the ultimate compressive strength. In lean concretes (with a high water-cement ratio) the crushing strength of the aggregates is not so significant. The internal forces in common shapes of structure, such as arches, vaults, columns and walls are predominantly compressive forces, with floors and pavements subjected to tensile forces. Compressive strength is widely used for specification requirement and quality control of concrete. 
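The inverse relationship between water-cement ratio and strength described above is often summarized by Abrams' law. The following is a minimal illustrative sketch; the constants A and B are placeholder values chosen for illustration, not figures from this article:

```python
def abrams_strength_mpa(w_c_ratio, A=96.5, B=8.2):
    """Illustrative Abrams'-law estimate of 28-day compressive strength (MPa).

    A and B are empirical constants that depend on the materials and curing;
    the defaults here are placeholders for illustration only.
    """
    return A / (B ** w_c_ratio)

# A lower water-cement ratio gives a higher estimated strength:
for wc in (0.40, 0.50, 0.60):
    print(f"w/c = {wc:.2f} -> ~{abrams_strength_mpa(wc):.0f} MPa")
```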
Engineers know their target tensile (flexural) requirements and will express these in terms of compressive strength. Wired.com reported on 13 April 2007 that a team from the University of Tehran, competing in a contest sponsored by the American Concrete Institute, demonstrated several blocks of concretes with abnormally high compressive strengths between at 28 days. The blocks appeared to use an aggregate of steel fibres and quartz – a mineral with a compressive strength of 1100 MPa, much higher than typical high-strength aggregates such as granite (). Reactive powder concrete, also known as ultra-high-performance concrete, can be even stronger, with strengths of up to 800 MPa (116,000 PSI). These are made by eliminating large aggregate completely, carefully controlling the size of the fine aggregates to ensure the best possible packing, and incorporating steel fibers (sometimes produced by grinding steel wool) into the matrix. Reactive powder concretes may also make use of silica fume as a fine aggregate. Commercial reactive powder concretes are available in the strength range. Elasticity The modulus of elasticity of concrete is a function of the modulus of elasticity of the aggregates and the cement matrix and their relative proportions. The modulus of elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. The elastic modulus of the hardened paste may be in the order of 10-30 GPa and aggregates about 45 to 85 GPa. The concrete composite is then in the range of 30 to 50 GPa. The American Concrete Institute allows the modulus of elasticity to be calculated using the following equation: (psi) where weight of concrete (pounds per cubic foot) and where compressive strength of concrete at 28 days (psi) This equation is completely empirical and is not based on theory. Note that the value of Ec found is in units of psi. For normal weight concrete (defined as concrete with a wc of 150 lb/ft3 and subtracting 5 lb/ft3 for steel) Ec is permitted to be taken as . The publication used by structural bridge engineers is the AASHTO Load and Resistance Factor Design Manual, or "LRFD." From the LRFD, section 5.4.2.4, Ec is determined by: (ksi) where correction factor for aggregate source (taken as 1.0 unless determined otherwise) weight of concrete (kips per cubic foot), where and specified compressive strength of concrete at 28 days (ksi) For normal weight concrete (wc=0.145 kips per cubic feet) Ec may be taken as: (ksi) Thermal properties Expansion and shrinkage Concrete has a very low coefficient of thermal expansion. However, if no provision is made for expansion, very large forces can be created, causing cracks in parts of the structure not capable of withstanding the force or the repeated cycles of expansion and contraction. The coefficient of thermal expansion of Portland cement concrete is 0.000009 to 0.000012 (per degree Celsius) (8 to 12 microstrains/°C)(8-12 1/MK). Thermal Conductivity Concrete has moderate thermal conductivity, much lower than metals, but significantly higher than other building materials such as wood, and is a poor insulator. A layer of concrete is frequently used for 'fireproofing' of steel structures. However, the term fireproof is inappropriate, for high temperature fires can be hot enough to induce chemical changes in concrete, which in the extreme can cause considerable structural damage to the concrete. 
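The elastic-modulus expressions referred to in the Elasticity passage above appear with their formulas stripped out. For reference, the standard empirical forms published by ACI and AASHTO, which the surrounding wording evidently describes, are reproduced below; treat this as a reconstruction rather than a quotation of the original article:

E_c = 33 \, w_c^{1.5} \sqrt{f'_c} (psi), with w_c in lb/ft3 and f'_c in psi (ACI), reducing to E_c = 57{,}000 \sqrt{f'_c} (psi) for normal-weight concrete;

E_c = 33{,}000 \, K_1 \, w_c^{1.5} \sqrt{f'_c} (ksi), with w_c in kips/ft3 and f'_c in ksi (AASHTO LRFD 5.4.2.4), reducing to E_c = 1{,}820 \sqrt{f'_c} (ksi) for normal-weight concrete with w_c = 0.145 kips/ft3.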
Cracking As concrete matures it continues to shrink, due to the ongoing reaction taking place in the material, although the rate of shrinkage falls relatively quickly and keeps reducing over time (for all practical purposes concrete is usually considered to not shrink due to hydration any further after 30 years). The relative shrinkage and expansion of concrete and brickwork require careful accommodation when the two forms of construction interface. All concrete structures will crack to some extent. One of the early designers of reinforced concrete, Robert Maillart, employed reinforced concrete in a number of arched bridges. His first bridge was simple, using a large volume of concrete. He then realized that much of the concrete was very cracked, and could not be a part of the structure under compressive loads, yet the structure clearly worked. His later designs simply removed the cracked areas, leaving slender, beautiful concrete arches. The Salginatobel Bridge is an example of this. Concrete cracks due to tensile stress induced by shrinkage or stresses occurring during setting or use. Various means are used to overcome this. Fiber reinforced concrete uses fine fibers distributed throughout the mix or larger metal or other reinforcement elements to limit the size and extent of cracks. In many large structures, joints or concealed saw-cuts are placed in the concrete as it sets to make the inevitable cracks occur where they can be managed and out of sight. Water tanks and highways are examples of structures requiring crack control. Shrinkage cracking Shrinkage cracks occur when concrete members undergo restrained volumetric changes (shrinkage) as a result of either drying, autogenous shrinkage, or thermal effects. Restraint is provided either externally (i.e. supports, walls, and other boundary conditions) or internally (differential drying shrinkage, reinforcement). Once the tensile strength of the concrete is exceeded, a crack will develop. The number and width of shrinkage cracks that develop are influenced by the amount of shrinkage that occurs, the amount of restraint present, and the amount and spacing of reinforcement provided. These are minor indications and have no real structural impact on the concrete member. Plastic-shrinkage cracks are immediately apparent, visible within 0 to 2 days of placement, while drying-shrinkage cracks develop over time. Autogenous shrinkage also occurs when the concrete is quite young and results from the volume reduction resulting from the chemical reaction of the Portland cement. Tension cracking Concrete members may be put into tension by applied loads. This is most common in concrete beams where a transversely applied load will put one surface into compression and the opposite surface into tension due to induced bending. The portion of the beam that is in tension may crack. The size and length of cracks is dependent on the magnitude of the bending moment and the design of the reinforcing in the beam at the point under consideration. Reinforced concrete beams are designed to crack in tension rather than in compression. This is achieved by providing reinforcing steel which yields before failure of the concrete in compression occurs and allowing remediation, repair, or if necessary, evacuation of an unsafe area. Creep Creep is the permanent movement or deformation of a material in order to relieve stresses within the material. Concrete that is subjected to long-duration forces is prone to creep. 
Short-duration forces (such as wind or earthquakes) do not cause creep. Creep can sometimes reduce the amount of cracking that occurs in a concrete structure or element, but it also must be controlled. The amount of primary and secondary reinforcing in concrete structures contributes to a reduction in the amount of shrinkage, creep and cracking. Water retention Portland cement concrete holds water. However, some types of concrete (like Pervious concrete) allow water to pass, hereby being perfect alternatives to Macadam roads, as they do not need to be fitted with storm drains. Concrete testing Engineers usually specify the required compressive strength of concrete, which is normally given as the 28-day compressive strength in megapascals (MPa) or pounds per square inch (psi). Twenty eight days is a long wait to determine if desired strengths are going to be obtained, so three-day and seven-day strengths can be useful to predict the ultimate 28-day compressive strength of the concrete. A 25% strength gain between 7 and 28 days is often observed with 100% OPC (ordinary Portland cement) mixtures, and between 25% and 40% strength gain can be realized with the inclusion of pozzolans such as flyash, and supplementary cementitious materials (SCMs) such as slag cement. Strength gain depends on the type of mixture, its constituents, the use of standard curing, proper testing by certified technicians, and care of cylinders in transport. For practical immediate considerations, it is incumbent to accurately test the fundamental properties of concrete in its fresh, plastic state. Concrete is typically sampled while being placed, with testing protocols requiring that test samples be cured under laboratory conditions (standard cured). Additional samples may be field cured (non-standard) for the purpose of early 'stripping' strengths, that is, form removal, evaluation of curing, etc. but the standard cured cylinders comprise acceptance criteria. Concrete tests can measure the "plastic" (unhydrated) properties of concrete prior to, and during placement. As these properties affect the hardened compressive strength and durability of concrete (resistance to freeze-thaw), the properties of workability (slump/flow), temperature, density and age are monitored to ensure the production and placement of 'quality' concrete. Depending on project location, tests are performed per ASTM International, European Committee for Standardization or Canadian Standards Association. As measurement of quality must represent the potential of concrete material delivered and placed, it is imperative that concrete technicians performing concrete tests are certified to do so according to these standards. Structural design, concrete material design and properties are often specified in accordance with national/regional design codes such as American Concrete Institute. Compressive strength tests are conducted by certified technicians using an instrumented, hydraulic ram which has been annually calibrated with instruments traceable to the Cement and Concrete Reference Laboratory (CCRL) of the National Institute of Standards and Technology (NIST) in the U.S., or regional equivalents internationally. Standardized form factors are 6" by 12" or 4" by 8" cylindrical samples, with some laboratories opting to utilize cubic samples. These samples are compressed to failure. Tensile strength tests are conducted either by three-point bending of a prismatic beam specimen or by compression along the sides of a standard cylindrical specimen. 
These destructive tests are not to be equated with nondestructive testing using a rebound hammer or probe systems, which are hand-held indicators of the relative strength of the top few millimeters of comparative concretes in the field. Mechanical properties at elevated temperature Temperatures elevated above a certain threshold degrade the mechanical properties of concrete, including compressive strength, fracture strength, tensile strength, and elastic modulus, as a result of deleterious structural changes in the material. Chemical changes At elevated temperature, concrete loses its hydration products through water evaporation. Its resistance to moisture flow therefore decreases, and the number of unhydrated cement grains grows as chemically bound water is lost, resulting in lower compressive strength. Also, the decomposition of calcium hydroxide in concrete forms lime and water. When the temperature decreases, the lime reacts with water and expands, causing a further reduction in strength. Physical changes At elevated temperatures, small cracks form and propagate inside the concrete as the temperature rises, possibly caused by differing coefficients of thermal expansion within the cement matrix. Likewise, when water evaporates from the concrete, the resulting shrinkage of the cement matrix counteracts its thermal expansion. Moreover, when the temperature becomes high enough, siliceous aggregates transform from the α-phase, with a hexagonal crystal system, to the β-phase, with a bcc structure, causing expansion of the concrete and decreasing the strength of the material. Spalling Spalling at elevated temperature is pronounced, driven by vapor pressure and thermal stresses. When the concrete surface is subjected to a sufficiently high temperature, the water close to the surface starts to move out of the concrete into the atmosphere. However, with a high temperature gradient between the surface and the interior, vapor can also move inwards, where it may condense at the lower temperatures. A water-saturated interior resists the further movement of vapor into the mass of the concrete. If vapor condenses much faster than it can escape from the concrete, because of a sufficiently high heating rate or a sufficiently dense pore structure, a large pore pressure can build up and cause spalling. At the same time, thermal expansion on the surface will generate a perpendicular compressive stress opposing the tensile stress within the concrete. Spalling occurs when the compressive stress exceeds the tensile stress. See also Segregation in concrete - particle segregation in concrete applications Creep and shrinkage of concrete References Properties
Properties of concrete
[ "Engineering" ]
3,230
[ "Structural engineering", "Concrete" ]
6,302,572
https://en.wikipedia.org/wiki/Hydrogen%20leak%20testing
Hydrogen leak testing is the normal way in which a hydrogen pressure vessel or installation is checked for leaks or flaws. This usually involves charging hydrogen as a tracer gas into the device undergoing testing, with any leaking gas detected by hydrogen sensors. Various test mechanisms have been devised. Test mechanisms Hydrostatic test In the hydrostatic test, a vessel is filled with a nearly incompressible liquid – usually water or oil – and examined for leaks or permanent changes in shape. The test pressure is always considerably higher than the operating pressure to give a margin for safety, typically 150% of the operating pressure. Burst test In the burst test, a vessel is filled with a gas and tested for leaks. The test pressure is always considerably higher than the operating pressure to give a margin for safety, typically 200% or more of the operating pressure. Helium leak test The helium leak test uses helium (the lightest inert gas) as a tracer gas and detects it in concentrations as small as one part in 10 million. Helium is selected primarily because it readily penetrates small leaks, is inert and will not react with the test piece, and is naturally present in air only in small quantities, which makes detection less complicated. It is possible to detect leaks as small as 5×10−10 Pa·m3/s in vacuum mode, and modern digital machines can detect 5×10−10 Pa·m3/s in sniffing mode. Vacuum test Usually a vacuum is created inside the object with an external pump connected to the instrument. Alternatively, helium can be injected into the product while the product itself is enclosed in a vacuum chamber connected to the instrument. In this case, burst and leakage tests can be combined in one operation. Hydrogen sensor test During the hydrogen sensor test, the object is filled with a mixture of 5% hydrogen and 95% nitrogen, which, being below 5.7% hydrogen, is non-flammable (ISO 10156). This is typically called a sniffing test. A hand probe connected to microelectronic hydrogen sensors is used to check the object, and an audio signal increases in the proximity of a leak. Leaks as small as 5×10−7 cubic centimeters per second can be detected. Compared with the helium test, hydrogen is cheaper than helium, no vacuum is needed, and the instrument can be cheaper, but it is not as sensitive as a helium leak detector and so will not find smaller leaks. Chemo-chromic hydrogen leak detectors are materials that can be proactively applied to a connection or fitting. In the event of a hydrogen leak, the chemo-chromic material changes color to alert an inspector that a leak is present. Chemo-chromic indicators can also be added to silicone tapes for hydrogen detection purposes. See also Hydrogen analyzer Hydrogen piping Hydrogen safety Hydrogen station Tracer-gas leak testing method Tubing (material) References External links Leakpedia – The free wiki encyclopedia about industrial leak testing Detectape – Hydrogen detection silicone tape Hydrogen technologies Tests Hydrogen Nondestructive testing
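The helium and hydrogen sensitivities quoted above are expressed in different units (Pa·m3/s versus cm3/s). A small sketch of the unit conversion, assuming the cm3/s figure refers to gas at roughly atmospheric pressure, makes the two figures directly comparable:

```python
# 1 atm·cm^3/s = 101,325 Pa x 1e-6 m^3/s ≈ 0.1013 Pa·m^3/s
ATM_CM3_S_IN_PA_M3_S = 101_325e-6

def pa_m3_s_to_atm_cm3_s(q):
    return q / ATM_CM3_S_IN_PA_M3_S

def atm_cm3_s_to_pa_m3_s(q):
    return q * ATM_CM3_S_IN_PA_M3_S

helium_limit = 5e-10    # Pa·m^3/s, quoted above for helium leak detectors
hydrogen_limit = 5e-7   # cm^3/s, quoted above for hydrogen sniffing

print(f"Helium detector:  {pa_m3_s_to_atm_cm3_s(helium_limit):.1e} cm^3/s")
print(f"Hydrogen sniffer: {atm_cm3_s_to_pa_m3_s(hydrogen_limit):.1e} Pa·m^3/s")
# The helium figure works out to roughly 5e-9 cm^3/s, i.e. about two orders of
# magnitude more sensitive than the hydrogen sniffing figure, consistent with
# the comparison made in the article.
```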
Hydrogen leak testing
[ "Materials_science" ]
615
[ "Nondestructive testing", "Materials testing" ]
6,308,157
https://en.wikipedia.org/wiki/Nanoporous%20materials
Nanoporous materials consist of a regular organic or inorganic bulk phase in which a porous structure is present. Nanoporous materials exhibit pore diameters that are most appropriately quantified using units of nanometers. The diameter of pores in nanoporous materials is thus typically 100 nanometers or smaller. Pores may be open or closed, and pore connectivity and void fraction vary considerably, as with other porous materials. Open pores are pores that connect to the surface of the material whereas closed pores are pockets of void space within a bulk material. Open pores are useful for molecular separation techniques, adsorption, and catalysis studies. Closed pores are mainly used in thermal insulators and for structural applications. Most nanoporous materials can be classified as bulk materials or membranes. Activated carbon and zeolites are two examples of bulk nanoporous materials, while cell membranes can be thought of as nanoporous membranes. A porous medium or a porous material is a material containing pores (voids). The skeletal portion of the material is often called the "matrix" or "frame". The pores are typically filled with a fluid (liquid or gas). There are many natural nanoporous materials, but artificial materials can also be manufactured. One method of doing so is to combine polymers with different melting points, so that upon heating one polymer degrades. A nanoporous material with consistently sized pores has the property of letting only certain substances pass through, while blocking others. Classifications Classification By Size The term nanomaterials covers diverse forms of materials with various applications. According to IUPAC porous materials are subdivided into 3 categories: Microporous materials: 0.2–2 nm Mesoporous materials: 2–50 nm Macroporous materials: 50–1000 nm These categories conflict with the classical definition of nanoporous materials, as they have pore diameters between 1 and 100 nm. This range covers all the classifications listed above. However, for the sake of simplicity, scientists choose to use the term nanomaterials and list its associated diameter instead. Microporous and mesoporous materials are distinguished as separate material classes owing to the distinct applications afforded by the pores sizes in these materials. Confusingly, the term microporous is used to describe materials with smaller pores sizes than materials commonly referred to simply as nanoporous. More correctly, microporous materials are better understood as a subset of nanoporous materials, namely materials that exhibit pore diameters smaller than 2 nm. Having pore diameters with length scales of molecules, such materials enable applications that require molecular selectivity such as filtration and separation membranes. Mesoporous materials, referring generally to materials with average pore diameters in the range 2-50 nm are interesting as catalyst support materials and adsorbents owing to their high surface area to volume ratios. Sometimes classifying by size becomes difficult as there could be porous materials that have various diameters. For example, microporous materials may have a few pores with 2 to 50 nm diameter due to random grain packing. These specifics must be taken into consideration when categorizing by pore size. Classification By Network Materials In addition to classification by size, nanoporous materials can be further classified into organic and inorganic network materials. 
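Before turning to the classification by network material, which continues below, here is a minimal sketch of the IUPAC size classification just listed, using the boundary values quoted above:

```python
def classify_pore(diameter_nm: float) -> str:
    """Classify a pore by diameter in nm using the IUPAC ranges quoted above."""
    if diameter_nm < 0.2:
        return "below the microporous range"
    if diameter_nm < 2:
        return "microporous"
    if diameter_nm <= 50:
        return "mesoporous"
    if diameter_nm <= 1000:
        return "macroporous"
    return "larger than the macroporous range"

for d in (0.5, 10, 200):
    print(f"{d} nm -> {classify_pore(d)}")
```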
A network material is the structure that 'hosts' the pores and is where the medium (gas or liquid) interacts with the substrate. While there are plenty of inorganic nanoporous membranes, there are few organic ones due to issues with stability. Organic Organic nanoporous materials are polymers made from elements such as boron, carbon, nitrogen, and oxygen. These materials are usually microporous, although mesoporous/microporous structures do exist. They include covalent organic frameworks (COFs), covalent triazine frameworks, polymers of intrinsic microporosity (PIMs), hyper-cross-linked polymers (HCPs), and conjugated microporous polymers (CMPs). Each of these has different structures and manufacturing steps. In general, to create organic nanoporous materials, a monomer with more than two branching points (i.e. sites for covalent bonds) is dissolved in a solvent. After additional monomers are added and polymerization occurs, the solvent is removed and the remaining structure is considered a nanoporous material. Organic nanoporous materials can be further classified into crystalline and amorphous networks. Crystalline networks are materials that have well-defined pore sizes. The pore sizes are so well defined that simply by changing the monomer, one can obtain different pore sizes. COFs are an example of such a crystalline structure. In contrast, amorphous nanoporous materials have a distribution of pore sizes and are usually disordered; PIMs are an example. Both categories have various uses in gas sorption and catalysis reactions. Inorganic Inorganic nanoporous materials are porous materials based on oxide-type, carbon, binary, and pure metal materials. Examples include zeolites, nanoporous alumina, and titania nanotubes. Zeolites are crystalline hydrated tectoaluminosilicates, a combination of alkali/alkaline earth metals, alumina, and silica hydrates. These are used for ion-exchange beds and for water purification. Nanoporous alumina is a biocompatible material widely used in various dental and orthopedic implants. Titania nanotubes are also used in orthopedics but are distinctive in that they can form a titanium oxide layer upon exposure to oxygen. Because the surface of the material is oxide-protected, this material has excellent biocompatibility along with high mechanical strength. Applications Gas Storage/Sensing Gas storage is crucial for energy, medical, and environmental applications. Nanoporous materials enable a unique method of gas storage through adsorption. When the substrate and gas interact with each other, the gas molecules can physisorb or covalently bond with the nanoporous material, which is known as physical storage and chemical storage, respectively. While one may store gases in the bulk phase, such as in a bottle, nanoporous materials enable higher storage density, which is attractive for energy applications. One example of this application is hydrogen storage. With the onset of climate change, there is increased interest in zero-emission vehicles, especially fuel cell electric vehicles. By storing hydrogen at high densities using porous materials, one can increase the driving range of such vehicles. Another use case for nanoporous materials is as a substrate for gas sensors. For example, measuring the electrical resistivity of a porous metal can yield the concentration of an analyte species in gaseous form.
Since the resistivity of the substrate is proportional to the surface area of the porous media, using nanoporous materials will yield higher sensitivity in detecting trace gaseous species than their bulk counterparts. This is especially useful as nanoporous materials have a higher effective surface area normalized to the top-view surface area Biological applications Nanoporous materials are used in biological applications as well. Enzyme catalyzed reactions in biological applications are highly utilized for metabolism and processing large molecules. Nanoporous materials offer the opportunity to embed enzymes onto the porous substrate which enhances the lifetime of the reactions for long-term implants. Another application is found in DNA sequencing. By coating an inorganic nanoporous membrane on an insulating material, nanopores can be utilized for single-molecule analysis. By threading DNA through these nanopores, one can read out the ionic current through the pore which can be correlated to one of four nucleotides. References Porous media Nanomaterials
Nanoporous materials
[ "Materials_science", "Engineering" ]
1,610
[ "Nanotechnology", "Nanomaterials", "Porous media", "Materials science" ]
6,308,405
https://en.wikipedia.org/wiki/Delta%20potential
In quantum mechanics the delta potential is a potential well mathematically described by the Dirac delta function - a generalized function. Qualitatively, it corresponds to a potential which is zero everywhere, except at a single point, where it takes an infinite value. This can be used to simulate situations where a particle is free to move in two regions of space with a barrier between the two regions. For example, an electron can move almost freely in a conducting material, but if two conducting surfaces are put close together, the interface between them acts as a barrier for the electron that can be approximated by a delta potential. The delta potential well is a limiting case of the finite potential well, which is obtained if one maintains the product of the width of the well and the potential constant while decreasing the well's width and increasing the potential. This article, for simplicity, only considers a one-dimensional potential well, but analysis could be expanded to more dimensions. Single delta potential The time-independent Schrödinger equation for the wave function of a particle in one dimension in a potential is where is the reduced Planck constant, and is the energy of the particle. The delta potential is the potential where is the Dirac delta function. It is called a delta potential well if is negative, and a delta potential barrier if is positive. The delta has been defined to occur at the origin for simplicity; a shift in the delta function's argument does not change any of the following results. Solving the Schrödinger equation Source: The potential splits the space in two parts ( and ). In each of these parts the potential is zero, and the Schrödinger equation reduces to this is a linear differential equation with constant coefficients, whose solutions are linear combinations of and , where the wave number is related to the energy by In general, due to the presence of the delta potential in the origin, the coefficients of the solution need not be the same in both half-spaces: where, in the case of positive energies (real ), represents a wave traveling to the right, and one traveling to the left. One obtains a relation between the coefficients by imposing that the wavefunction be continuous at the origin: A second relation can be found by studying the derivative of the wavefunction. Normally, we could also impose differentiability at the origin, but this is not possible because of the delta potential. However, if we integrate the Schrödinger equation around , over an interval : In the limit as , the right-hand side of this equation vanishes; the left-hand side becomes because Substituting the definition of into this expression yields The boundary conditions thus give the following restrictions on the coefficients Bound state (E < 0) In any one-dimensional attractive potential there will be a bound state. To find its energy, note that for , is imaginary, and the wave functions which were oscillating for positive energies in the calculation above are now exponentially increasing or decreasing functions of x (see above). Requiring that the wave functions do not diverge at infinity eliminates half of the terms: . The wave function is then From the boundary conditions and normalization conditions, it follows that from which it follows that must be negative, that is, the bound state only exists for the well, and not for the barrier. The Fourier transform of this wave function is a Lorentzian function. 
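The displayed formulas in this article appear to have been stripped during extraction. As a reconstruction of the missing expressions up to this point, written with the common convention V(x) = -\lambda\,\delta(x), \lambda > 0, for a well (sign conventions vary between texts), the Schrödinger equation, the derivative jump condition obtained by integrating across the delta, and the single bound state are:

-\frac{\hbar^2}{2m}\,\psi''(x) - \lambda\,\delta(x)\,\psi(x) = E\,\psi(x), \qquad \psi'(0^+) - \psi'(0^-) = -\frac{2m\lambda}{\hbar^2}\,\psi(0),

\psi(x) = \sqrt{\kappa}\, e^{-\kappa |x|}, \qquad \kappa = \frac{m\lambda}{\hbar^2}, \qquad E = -\frac{\hbar^2 \kappa^2}{2m} = -\frac{m\lambda^2}{2\hbar^2}.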
The energy of the bound state is then Scattering (E > 0) For positive energies, the particle is free to move in either half-space: or . It may be scattered at the delta-function potential. The quantum case can be studied in the following situation: a particle incident on the barrier from the left side . It may be reflected or transmitted . To find the amplitudes for reflection and transmission for incidence from the left, we put in the above equations (incoming particle), (reflection), (no incoming particle from the right) and (transmission), and solve for and even though we do not have any equations in . The result is Due to the mirror symmetry of the model, the amplitudes for incidence from the right are the same as those from the left. The result is that there is a non-zero probability for the particle to be reflected. This does not depend on the sign of , that is, a barrier has the same probability of reflecting the particle as a well. This is a significant difference from classical mechanics, where the reflection probability would be 1 for the barrier (the particle simply bounces back), and 0 for the well (the particle passes through the well undisturbed). The probability for transmission is Remarks and application The calculation presented above may at first seem unrealistic and hardly useful. However, it has proved to be a suitable model for a variety of real-life systems. One such example regards the interfaces between two conducting materials. In the bulk of the materials, the motion of the electrons is quasi-free and can be described by the kinetic term in the above Hamiltonian with an effective mass . Often, the surfaces of such materials are covered with oxide layers or are not ideal for other reasons. This thin, non-conducting layer may then be modeled by a local delta-function potential as above. Electrons may then tunnel from one material to the other giving rise to a current. The operation of a scanning tunneling microscope (STM) relies on this tunneling effect. In that case, the barrier is due to the air between the tip of the STM and the underlying object. The strength of the barrier is related to the separation being stronger the further apart the two are. For a more general model of this situation, see Finite potential barrier (QM). The delta function potential barrier is the limiting case of the model considered there for very high and narrow barriers. The above model is one-dimensional while the space around us is three-dimensional. So, in fact, one should solve the Schrödinger equation in three dimensions. On the other hand, many systems only change along one coordinate direction and are translationally invariant along the others. The Schrödinger equation may then be reduced to the case considered here by an Ansatz for the wave function of the type . Alternatively, it is possible to generalize the delta function to exist on the surface of some domain D (see Laplacian of the indicator). The delta function model is actually a one-dimensional version of the Hydrogen atom according to the dimensional scaling method developed by the group of Dudley R. Herschbach The delta function model becomes particularly useful with the double-well Dirac Delta function model which represents a one-dimensional version of the Hydrogen molecule ion, as shown in the following section. 
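Continuing the reconstruction with the same convention (the results depend only on \lambda^2, so they hold for the barrier as well as the well, as the text notes), the elided reflection and transmission probabilities for E > 0 are:

R = \frac{1}{1 + 2\hbar^2 E / (m\lambda^2)}, \qquad T = \frac{1}{1 + m\lambda^2 / (2\hbar^2 E)}, \qquad R + T = 1.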
Double delta potential The double-well Dirac delta function models a diatomic hydrogen molecule by the corresponding Schrödinger equation: where the potential is now where is the "internuclear" distance with Dirac delta-function (negative) peaks located at (shown in brown in the diagram). Keeping in mind the relationship of this model with its three-dimensional molecular counterpart, we use atomic units and set . Here is a formally adjustable parameter. From the single-well case, we can infer the "ansatz" for the solution to be Matching of the wavefunction at the Dirac delta-function peaks yields the determinant Thus, is found to be governed by the pseudo-quadratic equation which has two solutions . For the case of equal charges (symmetric homonuclear case), , and the pseudo-quadratic reduces to The "+" case corresponds to a wave function symmetric about the midpoint (shown in red in the diagram), where , and is called gerade. Correspondingly, the "−" case is the wave function that is anti-symmetric about the midpoint, where , and is called ungerade (shown in green in the diagram). They represent an approximation of the two lowest discrete energy states of the three-dimensional H2^+ and are useful in its analysis. Analytical solutions for the energy eigenvalues for the case of symmetric charges are given by where W is the standard Lambert W function. Note that the lowest energy corresponds to the symmetric solution . In the case of unequal charges, and for that matter the three-dimensional molecular problem, the solutions are given by a generalization of the Lambert W function (see ). One of the most interesting cases is when qR ≤ 1, which results in . Thus, one has a non-trivial bound state solution with . For these specific parameters, there are many interesting properties that occur, one of which is the unusual effect that the transmission coefficient is unity at zero energy. See also Free particle Particle in a box Finite potential well Particle in a ring Particle in a spherically symmetric potential Quantum harmonic oscillator Hydrogen atom or hydrogen-like atom Ring wave guide Particle in a one-dimensional lattice (periodic potential) Hydrogen molecular ion Holstein–Herring method Laplacian of the indicator List of quantum-mechanical systems with analytical solutions References For the 3-dimensional case look for the "delta shell potential"; further see K. Gottfried (1966), Quantum Mechanics Volume I: Fundamentals, ch. III, sec. 15. External links Quantum mechanical potentials Quantum models Scattering theory Schrödinger equation Exactly solvable models
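For the double-well section above, the elided transcendental equation and its Lambert-W solution can be reconstructed as follows (atomic units with \hbar = m = 1, equal strengths q, separation R, bound-state energy E = -d^2/2); conventions and branch choices differ between sources, so treat this as a sketch:

d = q\left(1 \pm e^{-dR}\right), \qquad d_{\pm} = q + \frac{W\!\left(\pm\, qR\, e^{-qR}\right)}{R},

where the "+" (gerade) root exists for any q, R > 0, while the "−" (ungerade) root yields a bound state only when qR > 1.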
Delta potential
[ "Physics", "Chemistry" ]
1,904
[ "Scattering theory", "Equations of physics", "Eponymous equations of physics", "Quantum mechanics", "Quantum models", "Quantum mechanical potentials", "Scattering", "Schrödinger equation" ]
20,894,507
https://en.wikipedia.org/wiki/Nitraria%20retusa
Nitraria retusa, commonly known as Nitre bush, is a salt-tolerant and drought-resistant shrub in the family Nitrariaceae. It can grow to heights of , although it seldom exceeds more than 1 m in height. It produces small white/green coloured flowers and small edible red fruit. The plant is native to desert areas of northern Africa, where it grows in primary succession on barren sand dunes, and in areas with high salinities such as salt marshes. Description Nitraria retusa is a bush growing to a maximum height of about . The twigs are furry when young, with the bluish-grey fleshy leaves being alternate, wedge or sickle-shaped, with entire margins and measuring by . The small, sweetly-scented, whitish or greenish flowers have short pedicels and parts in fives. The fruit is a triangular drupe, in diameter. Distribution and habitat This plant is native to North and East Africa, the Arabian Peninsula and the Middle East. It typically grows in salt marshes and semi-arid saline areas of deserts and it can help in the stabilisation of loose soils. Ecology In the Moghra Oasis, N. retusa plays an important role in the stabilisation of sand dunes. Here it is the dominant plant in some zones, forming hummocks known as nabkhas, where windblown materials heap up at the base of the plants. It shows a range of tolerances toward soil salinity and the availability of water. Near the lake, where salinity is low and the water table high, it associates with the sea rush, the common reed, salt grass and Zygophyllum album. At the outer fringe of the vegetated zone, where the salinity is high and the water table deep, it grows with Z.album and the Nile tamarisk. Uses The fruit turns red as it ripens and is enjoyed by humans and wildlife, which spread the seed. Camels and goats feed on the succulent leaves of this plant and desert-dwellers have used it as a source of salt. The wood is used for fuel and the fruits are sometimes used to make an intoxicating drink. N. retusa is one of a number of salt-tolerant plants that are being investigated as potential fodder crops for livestock. References Nitrariaceae Desert flora Flora of North Africa Taxa named by Paul Friedrich August Ascherson Taxa named by Peter Forsskål Flora of the Arabian Peninsula
Nitraria retusa
[ "Biology" ]
516
[ "Flora", "Desert flora" ]
20,894,986
https://en.wikipedia.org/wiki/Radon%E2%80%93Riesz%20property
The Radon–Riesz property is a mathematical property for normed spaces that helps ensure convergence in norm. Given two assumptions (essentially weak convergence and convergence of the norms), we would like to ensure convergence in the norm topology. Definition Suppose that (X, ||·||) is a normed space. We say that X has the Radon–Riesz property (or that X is a Radon–Riesz space) if whenever (xn) is a sequence in the space and x is a member of X such that (xn) converges weakly to x and ||xn|| → ||x||, then (xn) converges to x in norm; that is, ||xn − x|| → 0. Other names Although it would appear that Johann Radon was one of the first to make significant use of this property in 1913, M. I. Kadets and V. L. Klee also used versions of the Radon–Riesz property to make advancements in Banach space theory later in the 20th century. It is common for the Radon–Riesz property to also be referred to as the Kadets–Klee property or property (H). According to Robert Megginson, the letter H does not stand for anything. It was simply referred to as property (H) in a list of properties for normed spaces that starts with (A) and ends with (H). This list was given by K. Fan and I. Glicksberg (observe that the definition of (H) given by Fan and Glicksberg additionally includes rotundity of the norm, so it does not coincide with the Radon–Riesz property itself). The "Riesz" part of the name refers to Frigyes Riesz, who also made some use of this property in the 1920s. It is worth noting that the name "Kadets–Klee property" is sometimes used to refer to the coincidence of the weak and norm topologies on the unit sphere of the normed space. Examples 1. Every real Hilbert space is a Radon–Riesz space. Indeed, suppose that H is a real Hilbert space and that (xn) is a sequence in H converging weakly to a member x of H. Using the two assumptions on the sequence and the fact that weak convergence gives ⟨xn, x⟩ → ⟨x, x⟩, and letting n tend to infinity, we see (as in the expansion reconstructed below) that ||xn − x|| → 0. Thus H is a Radon–Riesz space. 2. Every uniformly convex Banach space is a Radon–Riesz space. See Section 3.7 of Haim Brezis' Functional analysis. See also Johann Radon Frigyes Riesz Hilbert space or Banach space theory Weak topology Normed space Functional analysis Schur's property References Functional analysis
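The computation elided in Example 1 above is the standard expansion of the norm via the inner product; a reconstruction:

\|x_n - x\|^2 = \|x_n\|^2 - 2\langle x_n, x\rangle + \|x\|^2 \;\longrightarrow\; \|x\|^2 - 2\langle x, x\rangle + \|x\|^2 = 0,

using \|x_n\| \to \|x\| (the norm assumption) and \langle x_n, x\rangle \to \langle x, x\rangle (weak convergence applied to the functional \langle \cdot, x\rangle).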
Radon–Riesz property
[ "Mathematics" ]
544
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
20,895,158
https://en.wikipedia.org/wiki/Olszewski%20tube
An Olszewski tube is a pipe designed to bring oxygen-poor water from the bottom of a lake to the top. This tube was first proposed by a Polish limnologist named Przemysław Olszewski in 1961 and helps combat the negative effects of eutrophication, high nutrient content, in lakes. The basic concept behind the Olszewski tube is the reduction of nutrient concentration and destratification; the more specific goal is hypolimnetic withdrawal. Eutrophication When nutrients build up in a lake, eutrophication occurs, and this generally occurs in the top layer of a lake. The nutrients come both naturally and artificially and usually contain phosphates. The artificial nutrients can come from sewage and fertilizers, from agricultural runoff. Phosphorus from the phosphates causes algae to grow rapidly and spread throughout the top layer of the lake. Algal blooms have negative effects on both the aesthetics and the ecology of the lake. Aesthetically, the lake is not pleasing because it is covered with algae. Ecologically, eutrophication causes organisms in the lake to die because the algae deplete the dissolved oxygen in the lake. Design At the most simple level, the Olszewski tube is a pipe that spans from the bottom, hypolimnetic layer of the lake to the outlet. The outlet part of the pipe is installed under lake level in order for the device to act as a siphon. Once warm water flows in the lake at the surface, it forces the cold anoxic water of the hypolimnetic layer through and up the tube. This oxygen-poor water is then brought to the top of the lake where the eutrophication occurs. This eventually helps the lake as a whole because the bottom of the lake will have more dissolved oxygen and the top of the lake will have less eutrophication. Implementations The first implementation of the Olszewski tube was attempted at Lake Kortowo in Poland and this led to oligotrophication, reduction of nutrient cycling. This tube has shown the most promise in a 3.9 meter deep eutrophic lake in Switzerland because the phosphorus and nitrogen levels in the summer drastically decreased, oxygen levels increased, and the amount of cyanobacteria decreased from 152 grams per square meter to 41 grams per square meter. It has also been reported by a scientist named Bjork that there have been successes with the Olszewski tube in European lakes. Other limnologists like Pechlaner and Gachter have reported successes in small lakes where the total phosphorus decreased, transparency of water increased, and less algae was present. Complications Some complications that can arise with the use of an Olszewski tube include disruption of the thermocline and excessive water loss. The thermocline separates the upper layer of water that is mixed temperatures with the deeper, cooler water. If the thermocline is disrupted, it could alter the ecology of the lake, potentially making it uninhabitable. Another complication is that the installation must be a long-term process. Short-term uses of Olszewski tubes have largely failed because it takes some time for the anoxic condition of the hypolimnetic layer to increase in dissolved oxygen. Also, it must be a slow process in order to avoid disrupting the thermocline in a lake. If the Olszewski tube is operated slowly enough, the rate of water going in and going out will be fairly constant causing the thermocline to stay intact. Cost One advantage to hypolimnetic withdrawal is that it is relatively inexpensive to install an Olszewski tube or any similar device. 
Along with low initial cost, it also has a relatively low annual maintenance cost. The following are four systems installed in the United States (as of 2002), with their lake area in hectares, withdrawal flow rate in cubic meters per minute, and initial installation cost in US dollars:
Lake Ballinger – 41 ha – 3.4 m3/min – $420,000
Lake Waramaug – 287 ha – 6.3 m3/min – $62,000
Devil's Lake – 151 ha – 9.1 m3/min – $310,000
Pine Lake – 412 ha – 5.3 m3/min – $282,000
Other Techniques Aside from using an Olszewski tube and hypolimnetic withdrawal, there are other techniques implemented to achieve the same goals as an Olszewski tube. These include increasing dissolved oxygen, reducing nutrient concentration, and lessening the amount of algae and unwanted biomass in lakes. Sediment oxidation is the artificial oxidation of the top 15 to 20 centimeters of anaerobic lake sediment. This technique reduces internal nutrient release through a series of chemical reactions starting with iron(III) chloride. After these reactions, the concentrations of phosphorus and ammonium (another nutrient found in lakes) decrease, and the demand for oxygen gas is reduced as well. This technique is not yet fully developed but can mirror the effects of the Olszewski tube. Biological control methods are the most promising techniques because they do the least harm to the ecosystem. These methods introduce a particular species (e.g. fish, bacteria, etc.) into a lake as a solution to a current problem. The introduction of a certain type of bacteria can help decrease nutrients; in turn, the algae will not spread and the dissolved oxygen in the lake will stay at high concentrations. Hypolimnetic aeration is another technique in which oxygen is added to the lake. This helps increase the concentration of dissolved oxygen in the lake as well as bring down the levels of phosphorus. While the results of this technique are similar to those of the Olszewski tube, hypolimnetic aeration differs in that it uses compressed air to move the water rather than a siphoning effect. References Limnology Environmental engineering
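As a rough feel for what the flow rates in the list above mean in practice, the short sketch below converts a flow rate into a seasonal withdrawal volume and a nutrient export estimate; the operating season and phosphorus concentration are purely hypothetical placeholders, not values from this article:

```python
def seasonal_withdrawal_m3(flow_m3_per_min, days_operated=120):
    """Total water withdrawn over an assumed operating season."""
    return flow_m3_per_min * 60 * 24 * days_operated

def phosphorus_export_kg(volume_m3, p_mg_per_litre=0.5):
    """Phosphorus carried out with the withdrawn water (1 m^3 = 1000 L)."""
    return volume_m3 * 1000 * p_mg_per_litre / 1e6

vol = seasonal_withdrawal_m3(9.1)  # e.g. the Devil's Lake flow rate above
print(f"~{vol:,.0f} m^3 withdrawn, ~{phosphorus_export_kg(vol):,.0f} kg P exported")
```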
Olszewski tube
[ "Chemistry", "Engineering" ]
1,210
[ "Chemical engineering", "Civil engineering", "Environmental engineering" ]
20,898,181
https://en.wikipedia.org/wiki/Geometric%20combinatorics
Geometric combinatorics is a branch of mathematics in general and combinatorics in particular. It includes a number of subareas such as polyhedral combinatorics (the study of faces of convex polyhedra), convex geometry (the study of convex sets, in particular combinatorics of their intersections), and discrete geometry, which in turn has many applications to computational geometry. Other important areas include metric geometry of polyhedra, such as the Cauchy theorem on rigidity of convex polytopes. The study of regular polytopes, Archimedean solids, and kissing numbers is also a part of geometric combinatorics. Special polytopes are also considered, such as the permutohedron, associahedron and Birkhoff polytope. See also Topological combinatorics References What is geometric combinatorics?, Ezra Miller and Vic Reiner, 2004 Topics in Geometric Combinatorics Geometric Combinatorics, Edited by: Ezra Miller and Victor Reiner Combinatorics Discrete geometry
Geometric combinatorics
[ "Mathematics" ]
212
[ "Discrete geometry", "Discrete mathematics", "Combinatorics stubs", "Combinatorics" ]
20,899,639
https://en.wikipedia.org/wiki/Schur%27s%20property
In mathematics, Schur's property, named after Issai Schur, is the property of normed spaces that is satisfied precisely if weak convergence of sequences entails convergence in norm. Motivation When we are working in a normed space X and we have a sequence that converges weakly to , then a natural question arises. Does the sequence converge in perhaps a more desirable manner? That is, does the sequence converge to in norm? A canonical example of this property, and commonly used to illustrate the Schur property, is the sequence space. Definition Suppose that we have a normed space (X, ||·||), an arbitrary member of X, and an arbitrary sequence in the space. We say that X has Schur's property if converging weakly to implies that . In other words, the weak and strong topologies share the same convergent sequences. Note however that weak and strong topologies are always distinct in infinite-dimensional space. Examples The space ℓ1 of sequences whose series is absolutely convergent has the Schur property. Name This property was named after the early 20th century mathematician Issai Schur who showed that ℓ1 had the above property in his 1921 paper. See also Radon-Riesz property for a similar property of normed spaces Schur's theorem Notes References Functional analysis Issai Schur
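Stated symbolically (a reconstruction, not text from the original article), X has Schur's property when weak convergence implies norm convergence:

x_n \rightharpoonup x \;\Longrightarrow\; \|x_n - x\| \to 0.

For contrast, in the Hilbert space \ell^2 the standard unit vectors satisfy e_n \rightharpoonup 0 while \|e_n\| = 1 for every n, so \ell^2 does not have Schur's property; by Schur's 1921 theorem no such sequence exists in \ell^1, which is the canonical example referred to above.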
Schur's property
[ "Mathematics" ]
276
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
20,901,691
https://en.wikipedia.org/wiki/Apogee%20kick%20motor
An apogee kick motor (AKM) is a rocket motor that is regularly employed on artificial satellites to provide the final impulse to change the trajectory from the transfer orbit into its final orbit (most commonly circular). For a satellite launched from the Earth, the rocket firing is done at the highest point of the transfer orbit, known as the apogee. An apogee kick motor is used, for example, for satellites launched into a geostationary orbit. As the vast majority of geostationary satellite launches are carried out from spaceports at a significant distance away from Earth's equator, the carrier rocket often only launches the satellite into an orbit with a non-zero inclination approximately equal to the latitude of the launch site. This orbit is commonly known as a "geostationary transfer orbit" or a "geosynchronous transfer orbit". The satellite must then provide thrust to bring forth the needed delta v to reach a geostationary orbit. This is typically done with a fixed onboard apogee kick motor. When the satellite reaches its orbit's apogee position, the AKM is ignited, transforming the elliptical orbit into a circular orbit, while at the same time bringing the inclination to around zero degrees, thereby accomplishing the insertion into a geostationary orbit. This process is called an "apogee kick". More generally, firing a rocket engine to place a vehicle into the desired final orbit from a transfer orbit is labelled an "orbital insertion burn" or, if the desired orbit is circular, a circularization burn. For orbits around bodies other than Earth, it may be referred to as an apoapsis burn. The amount of fuel carried on board a satellite directly affects its lifetime, therefore it is desirable to make the apogee kick maneuver as efficient as possible. The mass of most geostationary satellites at the beginning of their operational life in geostationary orbit is typically about half that when they separated from their vehicle in geostationary transfer orbit, with the other half having been fuel expended in the apogee kick maneuver. Use on interplanetary missions A Star 48 kick motor was used to launch the New Horizons spacecraft towards Pluto. See also Advanced Cryogenic Evolved Stage Centaur Hohmann transfer orbit Liquid apogee engine List of upper stages Multistage rocket Space tug References Rocketry Rocket engines
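As an illustration of the size of a typical apogee-kick burn, the following sketch computes the combined circularization and plane-change delta-v at the apogee of a geostationary transfer orbit. The perigee altitude and inclination used are assumed example values, not figures from this article:

```python
import math

MU = 398_600.4418    # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_378.137  # equatorial radius, km
R_GEO = 42_164.0     # geostationary orbit radius, km

def apogee_kick_dv(perigee_alt_km=185.0, inclination_deg=28.5):
    """Delta-v (km/s) to circularize at GEO and remove the inclination in one burn."""
    r_p = R_EARTH + perigee_alt_km
    a = (r_p + R_GEO) / 2.0                          # GTO semi-major axis
    v_apo = math.sqrt(MU * (2.0 / R_GEO - 1.0 / a))  # speed at GTO apogee
    v_geo = math.sqrt(MU / R_GEO)                    # circular GEO speed
    i = math.radians(inclination_deg)
    # Law of cosines for a single burn that both speeds up and rotates the plane.
    return math.sqrt(v_apo**2 + v_geo**2 - 2.0 * v_apo * v_geo * math.cos(i))

print(f"{apogee_kick_dv():.2f} km/s")  # roughly 1.8 km/s for these assumed values
```

With a typical storable-propellant exhaust velocity, a burn of this size consumes close to half of the separated mass, in line with the roughly two-to-one mass ratio quoted above.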
Apogee kick motor
[ "Technology", "Engineering" ]
482
[ "Rocket engines", "Rocketry", "Engines", "Aerospace engineering" ]
20,901,868
https://en.wikipedia.org/wiki/Combinatorics%20and%20dynamical%20systems
The mathematical disciplines of combinatorics and dynamical systems interact in a number of ways. The ergodic theory of dynamical systems has recently been used to prove combinatorial theorems about number theory, which has given rise to the field of arithmetic combinatorics. Dynamical systems theory is also heavily involved in the relatively recent field of combinatorics on words. Conversely, combinatorial aspects of dynamical systems are studied, and dynamical systems can be defined on combinatorial objects; see for example graph dynamical system. See also Symbolic dynamics Analytic combinatorics Combinatorics and physics Arithmetic dynamics References External links Combinatorics of Iterated Functions: Combinatorial Dynamics & Dynamical Combinatorics Combinatorial dynamics at Scholarpedia
Combinatorics and dynamical systems
[ "Physics", "Mathematics" ]
164
[ "Discrete mathematics", "Combinatorics", "Mechanics", "Combinatorics stubs", "Dynamical systems" ]
20,902,402
https://en.wikipedia.org/wiki/Reichstein%20process
The Reichstein process in chemistry is a combined chemical and microbial method for the production of ascorbic acid from D-glucose that takes place in several steps. This process was devised by Nobel Prize winner Tadeusz Reichstein and his colleagues in 1933 while working in the laboratory of the ETH in Zürich. Reaction steps The reaction steps are: hydrogenation of D-glucose to D-sorbitol, an organic reaction with nickel as a catalyst under high temperature and high pressure. Microbial oxidation or fermentation of sorbitol to L-sorbose with acetobacter at pH 4-6 and 30 °C. protection of the 4 hydroxyl groups in sorbose by formation of the acetal with acetone and an acid to Diacetone-L-sorbose (2,3:4,6−Diisopropyliden−α−L−sorbose) Organic oxidation with potassium permanganate (to Diprogulic acid) followed by heating with water gives the 2-Keto-L-gulonic acid The final step is a ring-closing step or gamma lactonization with removal of water. Intermediate 5 can also be prepared directly from 3 with oxygen and platinum The microbial oxidation of sorbitol to sorbose is important because it provides the correct stereochemistry. Importance This process was patented and sold to Hoffmann-La Roche in 1934. The first commercially sold vitamin C product was either Cebion from Merck or Redoxon from Hoffmann-La Roche. Even today industrial methods for the production of ascorbic acid can be based on the Reichstein process. In modern methods however, sorbose is directly oxidized with a platinum catalyst (developed by Kurt Heyns (1908–2005) in 1942). This method avoids the use of protective groups. A side product with particular modification is 5-Keto-D-gluconic acid. A shorter biotechnological synthesis of ascorbic acid was announced in 1988 by Genencor International and Eastman Chemical. Glucose is converted to 2-keto-L-gulonic acid in two steps (via 2,4-diketo-L-gulonic acid intermediate) as compared to five steps in the traditional process. Though many organisms synthesize their own vitamin C, the steps can be different in plants and mammals. Smirnoff concluded that “..little is known about many of the enzymes involved in ascorbate biosynthesis or about the factors controlling flux through the pathways". There is interest in finding alternatives to the Reichstein process. Experiments suggest that genetically modified bacteria might be commercially usable. References Literature Boudrant, J. (1990): Microbial processes for ascorbic acid biosynthesis: a review. In: Enzyme Microb Technol. 12(5); 322–9; ; Bremus, C. et al. (2006): The use of microorganisms in L-ascorbic acid production. In: J Biotechnol.'' 124(1); 196–205; ; External links http://www.chemieunterricht.de/dc2/asch2/a-synthe.htm Der Schweizerische Weg zur Viamin-C-Synthese Organic reactions
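The numbered steps above can be condensed into a single scheme (only reagents named in the text are shown):

D-glucose --(H2, Ni catalyst)--> D-sorbitol --(Acetobacter oxidation)--> L-sorbose --(acetone, acid)--> diacetone-L-sorbose --(KMnO4, then water/heat)--> 2-keto-L-gulonic acid --(lactonization, -H2O)--> L-ascorbic acid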
Reichstein process
[ "Chemistry" ]
699
[ "Organic reactions" ]
481,484
https://en.wikipedia.org/wiki/Wood%20preservation
Wood easily degrades without sufficient preservation. Apart from structural wood preservation measures, there are a number of different chemical preservatives and processes (also known as timber treatment, lumber treatment or pressure treatment) that can extend the life of wood, timber, and their associated products, including engineered wood. These treatments generally increase the durability of the wood and its resistance to destruction by insects or fungi. History As proposed by Richardson, treatment of wood has been practiced for almost as long as the use of wood itself. There are records of wood preservation reaching back to ancient Greece during Alexander the Great's rule, where bridge wood was soaked in olive oil. The Romans protected their ship hulls by brushing the wood with tar. During the Industrial Revolution, wood preservation became a cornerstone of the wood processing industry. Inventors and scientists such as Bethell, Boucherie, Burnett and Kyan made historic developments in wood preservation, both in preservative solutions and in treatment processes. Commercial pressure treatment began in the latter half of the 19th century with the protection of railroad cross-ties using creosote. Treated wood was used primarily for industrial, agricultural, and utility applications, where it is still used, until its use grew considerably (at least in the United States) in the 1970s, as homeowners began building decks and backyard projects. Innovation in treated timber products continues to this day, with consumers becoming more interested in less toxic materials. Hazards Wood that has been industrially pressure-treated with approved preservative products poses a limited risk to the public and should be disposed of properly. On December 31, 2003, the U.S. wood treatment industry stopped treating residential lumber with arsenic and chromium (chromated copper arsenate, or CCA). This was a voluntary agreement with the United States Environmental Protection Agency. CCA was replaced by copper-based pesticides, with exceptions for certain industrial uses. CCA may still be used for outdoor products like utility trailer beds and non-residential construction like piers, docks, and agricultural buildings. Industrial wood preservation chemicals are generally not available directly to the public and may require special approval to import or purchase, depending on the product and the jurisdiction in which it is used. In most countries, industrial wood preservation operations are notifiable industrial activities that require licensing from relevant regulatory authorities such as the EPA or equivalent. Reporting and licensing conditions vary widely, depending on the particular chemicals used and the country of use. Although pesticides are used to treat lumber, preserving lumber protects natural resources (in the short term) by enabling wood products to last longer. Previous poor practices in industry have left legacies of contaminated ground and water around wood treatment sites in some cases. However, under currently approved industry practices and regulatory controls, such as those implemented in Europe, North America, Australia, New Zealand, Japan and elsewhere, the environmental impact of these operations should be minimal. Wood treated with modern preservatives is generally safe to handle, given appropriate handling precautions and personal protection measures.
However, treated wood may present certain hazards in some circumstances, such as during combustion or where loose wood dust particles or other fine toxic residues are generated, or where treated wood comes into direct contact with food and agriculture. Preservatives containing copper in the form of microscopic particles have recently been introduced to the market, usually with "micronized" or "micro" trade names and designations such as MCQ or MCA. The manufacturers represent that these products are safe and EPA has registered these products. The American Wood Protection Association (AWPA) recommends that all treated wood be accompanied by a Consumer Information Sheet (CIS), to communicate safe handling and disposal instructions, as well as potential health and environmental hazards of treated wood. Many producers have opted to provide Material Safety Data Sheets (MSDS) instead. Although the practice of distributing MSDS instead of CIS is widespread, there is an ongoing debate regarding the practice and how to best communicate potential hazards and hazard mitigation to the end-user. Neither MSDS nor the newly adopted International Safety Data Sheets (SDS) are required for treated lumber under current U.S. Federal law. Chemical Chemical preservatives can be classified into three broad categories: water-borne preservatives oil-borne preservatives light organic solvent preservatives (LOSPs). Micronized copper Particulate (micronised or dispersed) copper preservative technology has been introduced in the US and Europe. In these systems, copper is ground into micro sized particles and suspended in water rather than dissolved, as is the case with other copper products such as ACQ and copper azole. There are two particulate copper systems in production. One system uses a quat biocide system (known as MCQ) and is a derivative of ACQ. The other uses an azole biocide (known as MCA or μCA-C) derived from copper azole. Two particulate copper systems, one marketed as MicroPro and the other as Wolmanized using μCA-C formulation, have achieved Environmentally Preferable Product (EPP) certification. The EPP certification was issued by Scientific Certifications Systems (SCS) and is based on a comparative life-cycle impact assessments with an industry standard. The copper particle size used in the "micronized" copper beads ranges from 1 to 700 nm with an average under 300 nm. Larger particles (such as actual micron-scale particles) of copper do not adequately penetrate the wood cell walls. These micronized preservatives use nano particles of copper oxide or copper carbonate, for which there are alleged safety concerns. An environmental group petitioned EPA in 2011 to revoke the registration of the micronized copper products, citing safety issues. Alkaline copper quaternary Alkaline copper quaternary (ACQ) is a preservative made of copper, a fungicide, and a quaternary ammonium compound (quat) like didecyl dimethyl ammonium chloride, an insecticide which also augments the fungicidal treatment. ACQ has come into wide use in the US, Europe, Japan and Australia following restrictions on CCA. Its use is governed by national and international standards, which determine the volume of preservative uptake required for a specific timber end use. Since it contains high levels of copper, ACQ-treated timber is five times more corrosive to common steel. 
It is necessary to use fasteners meeting or exceeding requirements for ASTM A 153 Class D, such as ceramic-coated, as mere galvanized and even common grades of stainless steel corrode. The U.S. began mandating the use of non-arsenic containing wood preservatives for virtually all residential use timber in 2004. The American Wood Protection Association (AWPA) standards for ACQ require a retention of for above ground use and for ground contact. Chemical Specialties, Inc (CSI, now Viance) received U.S. Environmental Protection Agency's Presidential Green Chemistry Challenge Award in 2002 for commercial introduction of ACQ. Its widespread use has eliminated major quantities of arsenic and chromium previously contained in CCA. Copper azole Copper azole preservative (denoted as CA-B and CA-C under American Wood Protection Association/AWPA standards) is a major copper based wood preservative that has come into wide use in Canada, the US, Europe, Japan and Australia following restrictions on CCA. Its use is governed by national and international standards, which determine the volume of preservative uptake required for a specific timber end use. Copper azole is similar to ACQ with the difference being that the dissolved copper preservative is augmented by an azole co-biocide like organic triazoles such as tebuconazole or propiconazole, which are also used to protect food crops, instead of the quat biocide used in ACQ. The azole co-biocide yields a copper azole product that is effective at lower retentions than required for equivalent ACQ performance. The general appearance of wood treated with copper azole preservative is similar to CCA with a green colouration. Copper azole treated wood is marketed widely under the Preserve CA and Wolmanized brands in North America, and the Tanalith brand across Europe and other international markets. The AWPA standard retention for CA-B is for above ground applications and for ground contact applications. Type C copper azole, denoted as CA-C, has been introduced under the Wolmanized and Preserve brands. The AWPA standard retention for CA-C is for above ground applications and for ground contact applications. Copper naphthenate Copper naphthenate, invented in Denmark in 1911, has been used effectively for many applications including: fencepost, canvas, nets, greenhouses, utility poles, railroad ties, beehives, and wooden structures in ground contact. Copper naphthenate is registered with the EPA as a non-restricted use pesticide, so there is no federal applicators licensing requirements for its use as a wood preservative. Copper Naphthenate can be applied by brush, dip, or pressure treatment. The University of Hawaii has found that copper naphthenate in wood at loadings of is resistant to Formosan termite attack. On February 19, 1981, the Federal Register outlined the EPA's position regarding the health risks associated with various wood preservatives. As a result, the National Park Service recommended the use of copper naphthenate in its facilities as an approved substitute for pentachlorophenol, creosote, and inorganic arsenicals. A 50-year study presented to AWPA in 2005 by Mike Freeman and Douglas Crawford says, "This study reassessed the condition of the treated wood posts in southern Mississippi, and statistically calculated the new expected post life span. 
It was determined that commercial wood preservatives, like pentachlorophenol in oil, creosote, and copper naphthenate in oil, provided excellent protection for posts, with life spans now calculated to exceed 60 years. Surprisingly, creosote and penta treated posts at 75% of the recommended AWPA retention, and copper naphthenate at 50% of the required AWPA retention, gave excellent performance in this AWPA Hazard Zone 5 site. Untreated southern pine posts lasted 2 years in this test site." The AWPA M4 Standard for the care of preservative-treated wood products, reads, "The appropriateness of the preservation system for field treatment shall be determined by the type of preservative originally used to protect the product and the availability of a field treatment preservative. Because many preservative products are not packaged and labeled for use by the general public, a system different from the original treatment may need to be utilized for field treatment. Users shall carefully read and follow the instructions and precautions listed on the product label when using these materials. Copper naphthenate preservatives containing a minimum of 2.0% copper metal are recommended for material originally treated with copper naphthenate, pentachlorophenol, creosote, creosote solution or waterborne preservatives." The M4 Standard has been adopted by the International Code Council's (ICC) 2015 International Building Code (IBC) section 2303.1.9 Preservative-treated Wood, and 2015 International Residential Code (IRC) R317.1.1 Field Treatment. The American Association of State Highway and Transportation Officials AASHTO has also adopted the AWPA M4 Standard. A waterborne copper naphthenate is sold to consumers under the tradename QNAP 5W. Oilborne copper naphthenates with 1% copper as metal solutions are sold to consumers under the tradenames Copper Green, and Wolmanized Copper Coat, a 2% copper as metal solution is sold under the tradename Tenino. Chromated copper arsenate (CCA) In CCA treatment, copper is the primary fungicide, arsenic is a secondary fungicide and an insecticide, and chromium is a fixative which also provides ultraviolet (UV) light resistance. Recognized for the greenish tint it imparts to timber, CCA is a preservative that was very common for many decades. In the pressure treatment process, an aqueous solution of CCA is applied using a vacuum and pressure cycle, and the treated wood is then stacked to dry. During the process, the mixture of oxides reacts to form insoluble compounds, helping with leaching problems. The process can apply varying amounts of preservative at varying levels of pressure to protect the wood against increasing levels of attack. Increasing protection can be applied (in increasing order of attack and treatment) for: exposure to the atmosphere, implantation within soil, or insertion into a marine environment. In the last decade concerns were raised that the chemicals may leach from the wood into surrounding soil, resulting in concentrations higher than naturally occurring background levels. A study cited in Forest Products Journal found 12–13% of the chromated copper arsenate leached from treated wood buried in compost during a 12-month period. Once these chemicals have leached from the wood, they are likely to bind to soil particles, especially in soils with clay or soils that are more alkaline than neutral. 
In the United States the US Consumer Product Safety Commission issued a report in 2002 stating that exposure to arsenic from direct human contact with CCA treated wood may be higher than was previously thought. On 1 January 2004, the Environmental Protection Agency (EPA) in a voluntary agreement with industry began restricting the use of CCA in treated timber in residential and commercial construction, with the exception of shakes and shingles, permanent wood foundations, and certain commercial applications. This was in an effort to reduce the use of arsenic and improve environmental safety, although the EPA were careful to point out that they had not concluded that CCA treated wood structures in service posed an unacceptable risk to the community. The EPA did not call for the removal or dismantling of existing CCA treated wood structures. In Australia, the Australian Pesticides and Veterinary Medicines Authority (APVMA) restricted the use of CCA preservative for treatment of timber used in certain applications from March 2006. CCA may no longer be used to treat wood used in 'intimate human contact' applications such as children's play equipment, furniture, residential decking and handrailing. Use for low contact residential, commercial and industrial applications remains unrestricted, as does its use in all other situations. The APVMA decision to restrict the use of CCA in Australia was a precautionary measure, even though the report found no evidence that demonstrated CCA treated timber posed unreasonable risks to humans in normal use. Similarly to the US EPA, the APVMA did not recommend dismantling or removal of existing CCA treated wood structures. In Europe, Directive 2003/2/EC restricts the marketing and use of arsenic, including CCA wood treatment. CCA treated wood is not permitted to be used in residential or domestic constructions. It is permitted for use in various industrial and public works, such as bridges, highway safety fencing, electric power transmission and telecommunications poles. In the United Kingdom waste timber treated with CCA was classified in July 2012 as hazardous waste by the department for the Environment, Food and Rural Affairs. Other copper compounds These include copper HDO (Bis-(N-cyclohexyldiazeniumdioxy)-copper or CuHDO), copper chromate, copper citrate, acid copper chromate, and ammoniacal copper zinc arsenate (ACZA). The CuHDO treatment is an alternative to CCA, ACQ and CA used in Europe and in approval stages for United States and Canada. ACZA is generally used for marine applications. Borate Boric acid, oxides and salts (borates) are effective wood preservatives and are supplied under numerous brand names throughout the world. One of the most common compounds used is disodium octaborate tetrahydrate , commonly abbreviated DOT. Borate treated wood is of low toxicity to humans, and does not contain copper or other heavy metals. However, unlike most other preservatives, borate compounds do not become fixed in the wood and can be partially leached out if exposed repeatedly to water that flows away rather than evaporating (evaporation leaves the borate behind so is not a problem). Even though leaching will not normally reduce boron concentrations below effective levels for preventing fungal growth, borates should not be used where they will be exposed to repeated rain, water or ground contact unless the exposed surfaces are treated to repel water. 
Zinc-borate compounds are less susceptible to leaching than sodium-borate compounds, but are still not recommended for below-ground use unless the timber is first sealed. Recent interest in low toxicity timber for residential use, along with new regulations restricting some wood preservation agents, has resulted in a resurgence of the use of borate treated wood for floor beams and internal structural members. Researchers at CSIRO in Australia have developed organoborates which are much more resistant to leaching, while still providing timber with good protection from termite and fungal attack. The cost of the production of these modified borates will limit their widespread take-up but they are likely to be suitable for certain niche applications, especially where low mammalian toxicity is of paramount importance. PTI Recent concerns about the health and environmental effects of metallic wood preservatives have created a market interest in non-metallic wood preservatives such as propiconazole-tebuconazole-imidacloprid better known as PTI. The American Wood Protection Association (AWPA) standards for PTI require a retention of for above ground use and when applied in combination with a wax stabilizer. The AWPA has not developed a standard for a PTI ground contact preservative, so PTI is currently limited to above ground applications such as decks. All three of the PTI components are also used in food crop applications. The very low required amounts of PTI in pressure treated wood further limits effects and substantially decreases the freight costs and associated environmental impacts for shipping preservative components to the pressure treating plants. The PTI preservative imparts very little color to the wood. Producers generally add a color agent or a trace amount of copper solution so as to identify the wood as pressure treated and to better match the color of other pressure treated wood products. The PTI wood products are very well adapted for paint and stain applications with no bleed-through. The addition of the wax stabilizer allows a lower preservative retention plus substantially reduces the tendency of wood to warp and split as it dries. In combination with normal deck maintenance and sealer applications, the stabilizer helps maintain appearance and performance over time. PTI pressure treated wood products are no more corrosive than untreated wood and are approved for all types of metal contact, including aluminum. PTI pressure treated wood products are relatively new to the market place and are not yet widely available in building supply stores. However, there are some suppliers selling PTI products for delivery anywhere in the US on a job lot order basis. Sodium silicate Sodium silicate is produced by fusing sodium carbonate with sand or heating both ingredients under pressure. It has been in use since the 19th century. It can be a deterrent against insect attack and possesses minor flame-resistant properties; however, it is easily washed out of wood by moisture, forming a flake-like layer on top of the wood. Timber Treatment Technology, LLC, markets TimberSIL, a sodium silicate wood preservative. The TimberSIL proprietary process surrounds the wood fibers with a protective, non-toxic, amorphous glass matrix. The result is a product the company calls "Glass Wood," which they claim is Class A fire-retardant, chemically inert, rot and decay resistant, and superior in strength to untreated wood. Timbersil is currently involved in litigation over its claims. 
Potassium silicate There are a number of European natural paint manufacturers that have developed potassium silicate (potassium waterglass) based preservatives. They frequently include boron compounds, cellulose, lignin and other plant extracts. They are applied as a surface treatment, with minimal impregnation, for internal use. Bifenthrin spray In Australia, a water-based bifenthrin preservative has been developed to improve the insect resistance of timber. As this preservative is applied by spray, it only penetrates the outer 2 mm of the timber cross-section. Concerns have been raised as to whether this thin-envelope system will provide protection against insects in the longer term, particularly when exposed to sunlight for extended periods. Fire retardant treated The fireproofing of wood utilizes a fire retardant chemical that remains stable in high-temperature environments. The fire retardant is applied under pressure at a wood-treating plant, like the preservatives described above, or applied as a surface coating. In both cases, treatment provides a physical barrier to flame spread. The treated wood chars but does not oxidize. Effectively, this creates a convective layer that transfers flame heat to the wood in a uniform way, which significantly slows the progress of fire into the material. There are several commercially available wood-based construction materials using pressure treatment (such as those marketed in the United States and elsewhere under the trade names of 'FirePro', 'Burnblock', 'Wood-safe', 'Dricon', 'D-Blaze' and 'Pyro-Guard'), as well as factory-applied coatings under the trade names of 'PinkWood' and 'NexGen'. Some site-applied coatings as well as brominated fire retardants have lost favor due to safety concerns as well as concerns surrounding the consistency of application. Specialized treatments also exist for wood used in weather-exposed applications. The only impregnation-applied fire retardant commercially available in Australia is 'NexGen'. 'Guardian', which used calcium formate as a 'powerful wood modifying agent', was removed from sale in early 2010 for unspecified reasons. Oil-borne These include pentachlorophenol ("penta") and creosote. They emit a strong petrochemical odor and are generally not used in consumer products. Both of these pressure treatments routinely protect wood for 40 years in most applications. Coal-tar creosote Creosote was the first wood preservative to gain industrial importance more than 150 years ago, and it is still widely used today for the protection of industrial timber components where long service life is essential. Creosote is a tar-based preservative that is commonly used for utility poles and railroad ties or sleepers. Creosote is one of the oldest wood preservatives, and was originally derived from a wood distillate, but now virtually all creosote is manufactured from the distillation of coal tar. Creosote is regulated as a pesticide, and is not usually sold to the general public. Linseed oil In recent years in Australia and New Zealand, linseed oil has been incorporated in preservative formulations as a solvent and water repellent to "envelope treat" timber. This involves treating just the outer 5 mm of the cross-section of a timber member with preservative (e.g., permethrin 25:75), leaving the core untreated. While not as effective as CCA or LOSP methods, envelope treatments are significantly cheaper, as they use far less preservative. Major preservative manufacturers add a blue (or red) dye to envelope treatments.
Blue colored timber is for use south of the Tropic of Capricorn and red for elsewhere. The colored dye also indicates that the timber is treated for resistance to termites/white ants. There is an ongoing promotional campaign in Australia for this type of treatment. Other emulsions Light organic solvent preservatives (LOSP) This class of timber treatments uses white spirit, or light oils such as kerosene, as the solvent carrier to deliver preservative compounds into timber. Synthetic pyrethroids such as permethrin, bifenthrin or deltamethrin are typically used as the insecticide. In Australia and New Zealand, the most common formulations use permethrin as an insecticide, and propiconazole and tebuconazole as fungicides. While still using a chemical preservative, this formulation contains no heavy-metal compounds. With the introduction of strict volatile organic compound (VOC) laws in the European Union, LOSPs have disadvantages due to the high cost and long process times associated with vapour-recovery systems. LOSPs have been emulsified into water-based solvents. While this does significantly reduce VOC emissions, the timber swells during treatment, removing many of the advantages of LOSP formulations. Epoxy Various epoxy resins, usually thinned with a solvent like acetone or methyl ethyl ketone (MEK), can be used to both preserve and seal wood. The wood coatings market in general will exceed $12 billion by 2027. New technologies Biologically modified timber Biologically modified timber is treated with biopolymers from agricultural waste. After drying and curing, the soft timber becomes durable and strong. With this process, fast-growing pinewood acquires properties similar to tropical hardwood. Production facilities for this process are located in the Netherlands, and the product is known under the trade name "NobelWood". From agricultural waste, such as sugarcane bagasse, furfuryl alcohol is manufactured. In theory this alcohol can be produced from any fermented biomass waste and can therefore be called a green chemical. After condensation reactions, pre-polymers are formed from the furfuryl alcohol. Fast-growing softwood is impregnated with the water-soluble biopolymer. After impregnation, the wood is dried and heated, which initiates a polymerisation reaction between the biopolymer and the wood cells. This process results in wood cells which are resistant to microorganisms. At the moment the only timber species being used for this process is Pinus radiata, a very fast-growing species whose porous structure is particularly suitable for impregnation processes. The technique is applied to timber mainly for the building industry as a cladding material. The technique is being further developed in order to achieve similar physical and biological properties in other polyfurfuryl-impregnated wood species. Besides impregnation with the biopolymers, the timber can also be impregnated with fire-retardant resins. This combination creates a timber with durability class I and a fire safety certification of Euro class B. Acetylation of wood Chemical modification of wood at the molecular level has been used to improve its performance properties. Many chemical reaction systems for the modification of wood, especially those using various types of anhydrides, have been published; however, the reaction of wood with acetic anhydride has been the most studied. The physical properties of any material are determined by its chemical structure.
Wood contains an abundance of chemical groups called free hydroxyls. Free hydroxyl groups readily absorb and release water according to changes in the climatic conditions to which they are exposed. This is the main reason why wood's dimensional stability is affected by swelling and shrinking. It is also believed that the digestion of wood by enzymes initiates at the free hydroxyl sites, which is one of the principal reasons why wood is prone to decay. Acetylation effectively changes the compounds with free hydroxyls within wood into acetate esters. This is done by reacting the wood with acetic anhydride, which comes from acetic acid. When free hydroxyl groups are transformed to acetoxy groups, the ability of the wood to absorb water is greatly reduced, rendering the wood more dimensionally stable and, because it is no longer digestible, extremely durable. In general, softwoods naturally have an acetyl content from 0.5 to 1.5% and more durable hardwoods from 2 to 4.5%. Acetylation takes wood well beyond these levels with corresponding benefits. These include an extended coatings life, due to acetylated wood acting as a more stable substrate for paints and translucent coatings. Acetylated wood is non-toxic and does not have the environmental issues associated with traditional preservation techniques. The acetylation of wood was first done in Germany in 1928 by Fuchs. In 1946, Tarkow, Stamm and Erickson first described the use of wood acetylation to stabilize wood against swelling in water. Since the 1940s, many laboratories around the world have looked at the acetylation of many different types of woods and agricultural resources. In spite of the vast amount of research on chemical modification of wood, and, more specifically, on the acetylation of wood, commercialization did not come easily. The first patent on the acetylation of wood was filed by Suida in Austria in 1930. Later, in 1947, Stamm and Tarkow filed a patent on the acetylation of wood and boards using pyridine as a catalyst. In 1961, the Koppers Company published a technical bulletin on the acetylation of wood using no catalysis, but with an organic cosolvent. In 1977, in Russia, Otlesnov and Nikitina came close to commercialization, but the process was discontinued, presumably because cost-effectiveness could not be achieved. In 2007, Titan Wood, a London-based company with production facilities in the Netherlands, achieved cost-effective commercialization and began large-scale production of acetylated wood under the trade name "Accoya". Natural Copper plating Copper plating or copper sheathing is the practice of covering wood, most commonly the wooden hulls of ships, with copper metal. As metallic copper is both repellent and toxic to fungi, insects such as termites, and marine bivalves, this preserves the wood and also acts as an anti-fouling measure, preventing aquatic life from attaching to the ship's hull and reducing its speed and maneuverability. Modern marine bottom paints often incorporate a significant amount of copper in their formulations for the same reason, although they are not recommended for aluminum hulls because of the possibility of galvanic corrosion. Naturally rot-resistant woods These species are resistant to decay in their natural state, due to high levels of organic chemicals called extractives, mainly polyphenols, which give them antimicrobial properties.
Extractives are chemicals that are deposited in the heartwood of certain tree species as they convert sapwood to heartwood; they are present in both parts though. Huon pine (Lagarostrobos franklinii), merbau (Intsia bijuga), ironbark (Eucalyptus spp.), totara (Podocarpus totara), puriri (Vitex lucens), kauri (Agathis australis), and many cypresses, such as coast redwood (Sequoia sempervirens) and western red cedar (Thuja plicata), fall in this category. However, many of these species tend to be prohibitively expensive for general construction applications. Huon pine was used for ship hulls in the 19th century, but over-harvesting and Huon pine's extremely slow growth rate makes this now a specialty timber. Huon pine is so rot resistant that fallen trees from many years ago are still commercially valuable. Merbau is still a popular decking timber and has a long life in above ground applications, but it is logged in an unsustainable manner and is too hard and brittle for general use. Ironbark is a good choice where available. It is harvested from both old-growth and plantation in Australia and is highly resistant to rot and termites. It is most commonly used for fence posts and house stumps. Eastern red cedar (Juniperus virginiana) and black locust (Robinia pseudoacacia) have long been used for rot-resistant fence posts and rails in eastern United States, with the black locust also planted in modern times in Europe. Coast redwood is commonly used for similar applications in the western United States. Totara and puriri were used extensively in New Zealand during the European colonial era when native forests were "mined", even as fence posts of which many are still operating. Totara was used by the Māori to build large waka (canoes). Today, they are specialty timbers as a result of their scarcity, although lower grade stocks are sold for landscaping use. Kauri is a superb timber for building the hulls and decks of boats. It too is now a specialty timber and ancient logs (in excess of 3 000 years) that have been mined from swamps are used by wood turners and furniture makers. The natural durability or rot and insect resistance of wood species is always based on the heartwood (or "truewood"). The sapwood of all timber species should be considered to be non-durable without preservative treatment. Natural extractives Natural substances, purified from naturally rot-resistant trees and responsible for natural durability, also known as natural extractives, are another promising wood preservatives. Several compounds have been described to be responsible for natural durability, including different polyphenols, lignins, lignans (such as gmelinol, plicatic acid), hinokitiol, α-cadinol and other sesquiterpenoids, flavonoids (such as mesquitol), and other substances. These compounds are mostly identified in the heartwood, although they are also present in minimal concentrations in the sapwood. Tannins, which have also shown to act as protectants, are present in the bark of trees. Treatment of timber with natural extractives, such as hinokitiol, tannins, and different tree extracts, has been studied and proposed to be another environmentally-friendly wood preservation method. Tung oil Tung oil has been used for hundreds of years in China, where it was used as a preservative for wood ships. The oil penetrates the wood, and then hardens to form an impermeable hydrophobic layer up to 5 mm into the wood. 
As a preservative it is effective for exterior work above and below ground, but the thin layer makes it less useful in practice. It is not available as a pressure treatment. Heat treatments By going beyond ordinary kiln drying, heat treatment may make timber more durable. By heating timber to a certain temperature, it may be possible to make the wood fibre less appetizing to insects. Heat treatment can also improve the properties of the wood with respect to water, with lower equilibrium moisture, less moisture deformation, and better weather resistance. It is weather-resistant enough to be used unprotected, in facades or in kitchen tables, where wetting is expected. However, heating can reduce the amount of volatile organic compounds, which generally have antimicrobial properties. There are four similar heat treatments: Westwood, developed in the United States; Retiwood, developed in France; Thermowood, developed in Finland by VTT; and Platowood, developed in the Netherlands. These processes autoclave the treated wood, subjecting it to pressure and heat, along with nitrogen or water vapour to control drying, in a staged treatment process ranging from 24 to 48 hours at temperatures of 180 °C to 230 °C depending on the timber species. These processes increase the durability, dimensional stability and hardness of the treated wood by at least one class; however, the treated wood is darkened in colour, and there are changes in certain mechanical characteristics: specifically, the modulus of elasticity is increased by up to 10%, and the modulus of rupture is diminished by 5% to 20%. Thus, the treated wood requires drilling before nailing to avoid splitting. Some of these processes affect the mechanical properties of the treated wood less than others. Wood treated with this process is often used for cladding or siding, flooring, furniture and windows. For the control of pests that may be harbored in wood packaging material (i.e. crates and pallets), ISPM 15 requires heat treatment of wood to 56 °C for 30 minutes to receive the HT stamp. This is typically required to ensure the killing of the pine wilt nematode and other kinds of wood pests that could be transported internationally. Mud treatment Wood and bamboo can be buried in mud to help protect them from insects and decay. This practice is used widely in Vietnam to build farm houses consisting of a wooden structural frame, a bamboo roof frame and walls of bamboo with mud mixed with rice hay. While wood in contact with soil will generally decompose more quickly than wood not in contact with it, it is possible that the predominantly clay soils prevalent in Vietnam provide a degree of mechanical protection against insect attack, which compensates for the accelerated rate of decay. Also, since wood is subject to bacterial decay only under specific temperature and moisture content ranges, submerging it in water-saturated mud can retard decay by saturating the wood's internal cells beyond their moisture decay range. Application processes Introduction and history Probably the first attempts made to protect wood from decay and insect attack consisted of brushing or rubbing preservatives onto the surfaces of the treated wood. Through trial and error, the most effective preservatives and application processes were slowly determined. In the Industrial Revolution, demands for such things as telegraph poles and railroad ties (UK: railway sleepers) helped to fuel an explosion of new techniques that emerged in the early 19th century.
The sharpest rise in inventions took place between 1830 and 1840, when Bethell, Boucherie, Burnett and Kyan were making wood-preserving history. Since then, numerous processes have been introduced or existing processes improved. The goal of modern-day wood preservation is to ensure a deep, uniform penetration with reasonable cost, without endangering the environment. The most widespread application processes today are those using artificial pressure through which many woods are being effectively treated, but several species (such as spruce, Douglas-fir, larch, hemlock and fir) are very resistant to impregnation. With the use of incising, the treatment of these woods has been somewhat successful but with a higher cost and not always satisfactory results. One can divide the wood-preserving methods roughly into either non-pressure processes or pressure processes. Non-pressure processes There are numerous non-pressure processes of treating wood which vary primarily in their procedure. The most common of these treatments involve the application of the preservative by means of brushing or spraying, dipping, soaking, steeping or by means of hot and cold bath. There is also a variety of additional methods involving charring, applying preservatives in bored holes, diffusion processes and sap displacement. Brush and spray treatments Brushing preservatives is a long-practised method and often used in today's carpentry workshops. Technological developments mean it is also possible to spray preservative over the surface of the timber. Some of the liquid is drawn into the wood as the result of capillary action before the spray runs off or evaporates, but unless puddling occurs penetration is limited and may not be suitable for long-term weathering. By using the spray method, coal-tar creosote, oil-borne solutions and water-borne salts (to some extent) can also be applied. A thorough brush or spray treatment with coal-tar creosote can add 1 to 3 years to the lifespan of poles or posts. Two or more coats provide better protection than one, but the successive coats should not be applied until the prior coat has dried or soaked into the wood. The wood should be seasoned before treatment. Dipping Dipping consists of simply immersing the wood in a bath of creosote or other preservative for a few seconds or minutes. Similar penetrations to that of brushing and spraying processes are achieved. It has the advantage of minimizing hand labor. It requires more equipment and larger quantities of preservative and is not adequate for treating small lots of timber. Usually the dipping process is useful in the treatment of window sashes and doors. Except for copper naphthenate, treatment with copper salt preservative is no longer allowed with this method. Steeping In this process the wood is submerged in a tank of water-preservative mix, and allowed to soak for a longer period of time (several days to weeks). This process was developed in the 19th century by John Kyan. The depth and retention achieved depends on factors such as species, wood moisture, preservative and soak duration. The majority of the absorption takes place during the first two or three days, but will continue at a slower pace for an indefinite period. As a result, the longer the wood can be left in the solution, the better treatment it will receive. When treating seasoned timber, both the water and the preservative salt soak into the wood, making it necessary to season the wood a second time. 
Posts and poles can be treated directly in endangered areas, but should be treated at least above the future ground level. The depth obtained during regular steeping periods varies from up to by sap pine. Due to the low absorption, solution strength should be somewhat stronger than that in pressure processes, around 5% for seasoned timber and 10% for green timber (because the concentration slowly decreases as the chemicals diffuse into the wood). The solution strength should be controlled continually and, if necessary, corrected with the salt additive. After the timber is removed from the treatment tank, the chemical will continue to spread within the wood if it has sufficient moisture content. The wood should be weighed down and piled so that the solution can reach all surfaces. (For sawed material, stickers should be placed between every board layer.) This process finds minimal use despite its former popularity in continental Europe and Great Britain. Kyanizing Named after John Howard Kyan, who patented this process in England in 1833, Kyanizing consists of steeping wood in a 0.67% mercuric chloride preservative solution. It is no longer used. Gedrian's Bath Patented by Charles A. Seely, this process achieves treatment by immersing seasoned wood in successive baths of hot and cold preservatives. During the hot baths, the air in the timbers expands. When the timbers are changed to the cold bath (the preservative can also be changed), a partial vacuum is created within the lumen of the cells, causing the preservative to be drawn into the wood. Some penetration occurs during the hot baths, but most of it takes place during the cold baths. This cycle is repeated, with a significant time reduction compared to other steeping processes. Each bath may last 4 to 8 hours or in some cases longer. The temperature of the preservative in the hot bath should be between and in the cold bath (depending on preservative and tree species). The average penetration depth achieved with this process ranges from . Both preservative oils and water-soluble salts can be used with this treatment. Due to the longer treatment periods, this method finds little use in the commercial wood preservation industry today. Preservative precipitation As explained in Uhlig's Corrosion Handbook, this process involves two or more chemical baths that undergo a reaction with the cells of the wood and result in the precipitation of preservative into the wood cells. Two chemicals commonly employed in this process are copper ethanolamine and sodium dimethyldithiocarbamate, which react to precipitate copper dimethyldithiocarbamate. The precipitated preservative is very resistant to leaching. Although in use since the mid-1990s, the process has been discontinued in the United States of America, and it never saw commercialization in Canada. Pressure processes Pressure processes are the most effective and lasting methods in use today for preserving timber. Pressure processes are those in which the treatment is carried out in closed cylinders with applied pressure or vacuum. These processes have a number of advantages over the non-pressure methods. In most cases, a deeper and more uniform penetration and a higher absorption of preservative are achieved. Another advantage is that the treating conditions can be controlled so that retention and penetration can be varied. These pressure processes can be adapted to large-scale production. The high initial costs for equipment and the energy costs are the biggest disadvantages.
These treatment methods are used to protect ties, poles and structural timbers and find use throughout the world today. The various pressure processes that are used today differ in details, but the general method is in all cases the same. The treatment is carried out in cylinders. The timbers are loaded onto special tram cars, so-called buggies or bogies, and run into the cylinder. These cylinders are then set under pressure, often with the addition of heat. As a final treatment, a vacuum is frequently used to extract excess preservatives. These cycles can be repeated to achieve better penetration. Light organic solvent preservative (LOSP) treatments often use a vacuum impregnation process. This is possible because of the lower viscosity of the white-spirit carrier used. Full-cell process In the full-cell process, the intent is to keep as much of the liquid absorbed into the wood during the pressure period as possible, thus leaving the maximum concentration of preservatives in the treated area. Usually, water solutions of preservative salts are employed with this process, but it is also possible to impregnate wood with oil. The desired retention is achieved by changing the strength of the solution. William Burnett patented this development of full-cell impregnation with water solutions in 1838. The patent covered the use of water-borne zinc chloride, a process also known as Burnettizing. A full-cell process with oil was patented in 1838 by John Bethell. His patent described the injection of tar and oils into wood by applying pressure in closed cylinders. This process is still used today with some improvements. Fluctuation pressure process Contrary to the static full-cell and empty-cell processes, the fluctuation process is a dynamic process. In this process the pressure inside the impregnation cylinder alternates between pressure and vacuum within a few seconds. There have been inconsistent claims that through this process it is possible to reverse pit closure in spruce. However, the best results that have been achieved with this process in spruce do not exceed a penetration deeper than . Specialized equipment is necessary and therefore higher investment costs are incurred. Boucherie process Developed by Dr. Boucherie of France in 1838, this approach consisted of attaching a bag or container of preservative solution to a standing or a freshly cut tree with bark, branches, and leaves still attached, thereby injecting the liquid into the sap stream. Through transpiration of moisture from the leaves, the preservative is drawn upward through the sapwood of the tree trunk. The modified Boucherie process consists of placing freshly cut, unpeeled timbers onto declining skids, with the stump slightly elevated, then fastening watertight covering caps or boring a number of holes into the ends, and inserting a solution of copper sulfate or other waterborne preservative into the caps or holes from an elevated container. Preservative oils tend not to penetrate satisfactorily by this method. The hydrostatic pressure of the liquid forces the preservative lengthwise into and through the sapwood, thus pushing the sap out of the other end of the timber. After a few days, the sapwood is completely impregnated; unfortunately, little or no penetration takes place in the heartwood. Only green wood can be treated in this manner.
This process has found considerable usage to impregnate poles and also larger trees in Europe and North America, and has experienced a revival of usage to impregnate bamboo in countries such as Costa Rica, Bangladesh, India and the state of Hawaii. High-pressure sap displacement system Developed in the Philippines, this method (abbreviated HPSD) consists of a cylinder pressure cap made from a 3 mm thick mild steel plate secured with 8 sets of bolts, a 2-HP diesel engine, and a pressure regulator with 1.4–14 kg/m2 capacity. The cap is placed over the stump of a pole, tree or bamboo and the preservative is forced into the wood with pressure from the engine. Incising First tested and patented in 1911 and 1912, this process consists of making shallow, slit-like holes in the surfaces of material to be treated, so that deeper and more uniform penetration of preservative may be obtained. Incisions made in sawed material usually are parallel with the grain of the wood. This process is common in North America (since the 1950s), where Douglas-fir products and pole butts of various species are prepared before treatment. It is most useful for woods that are resistant to side penetration, but allow preservative transport along the grain. In the region in which it is produced, it is common practice to incise all sawed Douglas-fir or more in thickness before treatment. Unfortunately, the impregnation of spruce, the most important structural timber in large areas in Europe, has shown that unsatisfactory treatment depths have been achieved with impregnation. The maximum penetration of is not sufficient to protect wood in weathered positions. The present-day incising machines consist essentially of four revolving drums fitted with teeth or needles or with lasers that burn the incisions into the wood. Preservatives can be spread along the grain up to in radial and up to in tangential and radial direction. In North America, where smaller timber dimensions are common, incision depths of have become standard. In Europe, where larger dimensions are widespread, incision depths of are necessary. The incisions are visible and often considered to be wood error. Incisions by laser are significantly smaller than those of spokes or needles. The costs for each process type are approximately for spoke/conventional all-round incising €0.50/m2, by laser incising €3.60/m2 and by needle incision €1.00/m2. (Figures originate from the year 1998 and may vary from present day prices.) Microwaving An alternative increases the permeability of timber using microwave technology. There is some concern that this method may adversely affect the structural performance of the material. Research in this area has been conducted by the Cooperative Research Centre at the University of Melbourne, Australia. Charring Charring of timber results in surfaces which are fire-resistant, insect-resistant and proof against weathering. Wood surfaces are ignited using a hand-held burner or moved slowly across a fire. The charred surface is then cleaned using a steel brush to remove loose bits and to expose the grain. Oil or varnish may be applied if required. Charring wood with a red-hot iron is a traditional method in Japan, where it is called or (literally 'fire cypress'). 
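As a rough illustration of the per-square-metre incising cost comparison quoted above (1998 figures), the following sketch estimates the cost of incising the four long faces of a rectangular timber member. It is not drawn from the article: the member dimensions are hypothetical, the function name incising_cost is made up for illustration, and the quoted prices are not current.

```python
# Illustrative sketch only: comparing incising costs using the 1998 EUR/m2
# figures quoted above. Member dimensions are hypothetical; end grain is ignored.

COST_PER_M2 = {
    "conventional (spoke)": 0.50,
    "needle": 1.00,
    "laser": 3.60,
}

def incising_cost(length_m: float, width_m: float, thickness_m: float, rate_eur_m2: float) -> float:
    """Approximate cost of incising the four long faces of a rectangular timber member."""
    surface_m2 = 2 * length_m * (width_m + thickness_m)
    return surface_m2 * rate_eur_m2

# Example: a 4.8 m long, 200 mm x 100 mm member (about 2.88 m2 of face area).
for method, rate in COST_PER_M2.items():
    print(f"{method}: EUR {incising_cost(4.8, 0.200, 0.100, rate):.2f}")
```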
See also Conservation and restoration of waterlogged wood Heavy metals Leaky homes crisis in New Zealand Nanotoxicology Impregnation resin Saw dust Yakisugi References External links Non-CCA Non-CCA Wood Preservatives: Guide to Selected Resources - National Pesticide Information Center Arsenate Case Studies in Environmental Medicine - Arsenic Toxicity Sodium silicate Albite (sodium aluminum silicate) mineral Miscellaneous Information from the U.S. Environmental Protection Agency The American Wood Protection Association (AWPA, formerly the American Wood Preservers' Association) American Lumber Standards Committee (ALSC) Wood Structural engineering Preservatives
Wood preservation
[ "Engineering" ]
10,797
[ "Structural engineering", "Civil engineering", "Construction" ]
481,814
https://en.wikipedia.org/wiki/Leased%20line
A leased line is a private telecommunications circuit between two or more locations provided according to a commercial contract. It is sometimes also known as a private circuit, and as a data line in the UK. Typically, leased lines are used by businesses to connect geographically distant offices. Unlike traditional telephone lines in the public switched telephone network (PSTN), leased lines are generally not switched circuits, and therefore do not have an associated telephone number. Each side of the line is permanently connected, always active and dedicated to the other. Leased lines can be used for telephone, Internet, or other data communication services. Some are ringdown services, and some connect to a private branch exchange (PBX) or network router. The primary factors affecting the recurring lease fees are the distance between end stations and the bandwidth of the circuit. Since the connection does not carry third-party communications, the carrier can assure a specified level of quality. An Internet leased line is a premium Internet connectivity product, normally delivered over fiber, which provides uncontended, symmetrical bandwidth with full-duplex traffic. It is also known as an Ethernet leased line, dedicated line, data circuit or private line. History Leased line services (or private line services) became digital in the 1970s with the conversion of the Bell backbone network from analog to digital circuits. This allowed AT&T to offer Dataphone Digital Services (later re-branded digital data services), which started the deployment of ISDN and T1 lines to customer premises. Leased lines were used to connect mainframe computers with terminals and remote sites, via IBM's Systems Network Architecture (created in 1974) or DEC's DECnet (created in 1975). With the extension of digital services in the 1980s, leased lines were used to connect customer premises to Frame Relay or ATM networks. Access data rates increased from the original T1 option, with a maximum transmission speed of 1.544 Mbit/s, up to T3 circuits. In the 1990s, with the advances of the Internet, leased lines were also used to connect customer premises to an ISP's point of presence, while the following decade saw a convergence of the aforementioned services (Frame Relay, ATM, Internet for businesses) with integrated MPLS offerings. Access data rates also evolved dramatically, to speeds of up to 10 Gbit/s in the early 21st century, with the Internet boom and an increased offering of long-haul optical networks and metropolitan area networks. Applications Leased lines are used to build up private data networks and private telephone networks (by interconnecting PBXs), or to access the Internet or a partner network (extranet). Here is a review of leased-line applications in network designs over time: Site to site data connectivity Terminating a leased line with two routers can extend network capabilities across sites. Leased lines were first used in the 1970s by enterprises with proprietary protocols such as IBM Systems Network Architecture and Digital Equipment's DECnet, and with TCP/IP in university and research networks before the Internet became widely available. Note that other Layer 3 protocols, such as Novell IPX, were used on enterprise networks until TCP/IP became ubiquitous in the 2000s. Today, point-to-point data circuits are typically provisioned as either TDM, Ethernet, or Layer 3 MPLS. Site to site PBX connectivity Terminating a leased line with two PBXs allowed customers to bypass the PSTN for inter-site telephony.
This allowed the customers to manage their own dial plan (and to use short extensions for internal telephone numbers) as well as to make significant savings if enough voice traffic was carried across the line (especially when the savings on the telephone bill exceeded the fixed cost of the leased line). Site to network connectivity As demand for data networks grew, telcos started to build more advanced networks using packet switching on top of their infrastructure. Thus, a number of telecommunication companies added ATM, Frame Relay or ISDN offerings to their service portfolios. Leased lines were used to connect the customer site to the telco network access point. International private leased circuit An international private leased circuit (IPLC) functions as a point-to-point private line. IPLCs are usually time-division multiplexing (TDM) circuits that utilize the same circuit amongst many customers. The nature of TDM requires the use of a CSU/DSU and a router. Usually the router will include the CSU/DSU. Then came the Internet (in the mid-1990s), and since then the most common application for a leased line has been to connect a customer to its ISP's point of presence. With the changes that the Internet brought to the networking world, other technologies were developed as alternatives to Frame Relay or ATM networks, such as VPNs (hardware and software) and MPLS networks (which are in effect an upgrade of existing ATM/Frame Relay infrastructures to TCP/IP). Availability In the United Kingdom In the UK, leased lines are available at speeds from 64 kbit/s, increasing in 64 kbit/s increments to 2.048 Mbit/s over a channelised E1 tail circuit, and at speeds from 2.048 Mbit/s to 34.368 Mbit/s via channelised E3 tail circuits. The NTE will terminate the circuit and provide the requested presentation, most frequently X.21; however, higher-speed interfaces such as G.703 or 10BASE-T are available. Some ISPs, however, use the term more loosely, defining a leased line as "any dedicated bandwidth service delivered over a leased fibre connection". As of March 2018, leased line services are most commonly available in the region of 100 Mbit/s to 1 Gbit/s. In large cities, for example London, speeds of 10 Gbit/s are attainable. In the United States In the U.S., low-speed leased lines (56 kbit/s and below) are usually provided using analog modems. Higher-speed leased lines are usually presented using FT1 (Fractional T1): a T1 bearer circuit with 1 to 24 timeslots of 56k or 64k each. Customers typically manage their own network termination equipment, which includes a channel service unit and data service unit (CSU/DSU). In Hong Kong In Hong Kong, leased lines are usually available at speeds of 64k, 128k, 256k, 512k, T1 (channelized or not) or E1 (less common). Whatever the speed, telcos usually provide the CSU/DSU and present to the customer on a V.35 interface. Fibre circuits are slowly replacing the traditional circuits and are available at nearly any bandwidth. In India In India, leased lines are available at speeds of 64 kbit/s, 128 kbit/s, 256 kbit/s, 512 kbit/s, 1 Mbit/s, 2 Mbit/s, 4 Mbit/s, 8 Mbit/s, 1000 Mbit/s, T1 (1.544 Mbit/s) or E1 (2.048 Mbit/s), and up to 622 Mbit/s. Customers are connected either through OFC, telephone lines, ADSL, or Wi-Fi. Customers would have to manage their own network termination equipment, namely the channel service unit and data service unit.
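The channelised speeds quoted above for the UK and the US follow directly from the 64 kbit/s (or 56 kbit/s) timeslot structure of E1 and T1 circuits. The sketch below is illustrative only and not part of the article: the function name fractional_capacity_kbit is made up, and framing and signalling overhead are ignored (the 1.544 Mbit/s T1 line rate mentioned above includes 8 kbit/s of framing on top of the 24 x 64 kbit/s payload).

```python
# Illustrative sketch (not from the article): payload capacity of a channelised
# leased line, assuming 64 kbit/s (or 56 kbit/s) per timeslot, up to 24 timeslots
# on a T1 and 32 on an E1. Framing/signalling overhead is ignored, so a full E1
# works out to the 2.048 Mbit/s figure quoted above.

def fractional_capacity_kbit(timeslots: int, per_slot_kbit: int = 64) -> int:
    """Payload capacity in kbit/s for the given number of leased timeslots."""
    if timeslots < 1:
        raise ValueError("at least one timeslot is required")
    return timeslots * per_slot_kbit

print(fractional_capacity_kbit(32))      # full E1: 2048 kbit/s = 2.048 Mbit/s
print(fractional_capacity_kbit(24))      # full T1 payload: 1536 kbit/s
print(fractional_capacity_kbit(12, 56))  # fractional T1 with 56k timeslots: 672 kbit/s
```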
In Italy In Italy, leased lines are available at speeds of 64 kbit/s (terminated by a DCE2 or DCE2plus modem) or multiples of 64 kbit/s from 128 kbit/s up to framed or unframed E1 (DCE3 modem) in digital form (a PDH service known as CDN, Circuito Diretto Numerico). Local telephone companies may also provide CDA (Circuito Diretto Analogico) circuits, which are plain copper dry pairs between two buildings, without any line termination. In the past (pre-2002) the full analog baseband was provided, giving customers the option to deploy xDSL technology between sites; nowadays the bearer channel is limited to 4 kHz, so the service is just a POTS connection without any setup channel. For many purposes, leased lines are gradually being replaced by DSL and metro Ethernet. Leased line alternatives Leased lines are more expensive than alternative connectivity services (ADSL, SDSL, etc.) because they are reserved exclusively for the leaseholder. Some Internet service providers have therefore developed alternative products that aim to deliver leased-line type services (carrier Ethernet-based, zero contention, guaranteed availability), with more moderate bandwidth, over the standard UK national broadband network. While a leased line is full-duplex, most leased line alternatives provide only half-duplex or, in many cases, asymmetrical service. See also Circuit ID Dark fibre Dry loop Tie line (telephony) References Communication circuits Local loop
Leased line
[ "Engineering" ]
1,816
[ "Telecommunications engineering", "Communication circuits" ]
481,852
https://en.wikipedia.org/wiki/Somatostatin
Somatostatin, also known as growth hormone-inhibiting hormone (GHIH) or by several other names, is a peptide hormone that regulates the endocrine system and affects neurotransmission and cell proliferation via interaction with G protein-coupled somatostatin receptors and inhibition of the release of numerous secondary hormones. Somatostatin inhibits insulin and glucagon secretion. Somatostatin has two active forms produced by the alternative cleavage of a single preproprotein: one consisting of 14 amino acids (shown in infobox to right), the other consisting of 28 amino acids. Among the vertebrates, there exist six different somatostatin genes that have been named: SS1, SS2, SS3, SS4, SS5 and SS6. Zebrafish have all six. The six different genes, along with the five different somatostatin receptors, allow somatostatin to possess a large range of functions. Humans have only one somatostatin gene, SST. Nomenclature Synonyms of "somatostatin" include: growth hormone–inhibiting hormone (GHIH) growth hormone release–inhibiting hormone (GHRIH) somatotropin release–inhibiting factor (SRIF) somatotropin release–inhibiting hormone (SRIH) Production Digestive system Somatostatin is secreted by delta cells at several locations in the digestive system, namely the pyloric antrum, the duodenum and the pancreatic islets. Somatostatin released in the pyloric antrum travels via the portal venous system to the heart, then enters the systemic circulation to reach the locations where it will exert its inhibitory effects. In addition, somatostatin release from delta cells can act in a paracrine manner. In the stomach, somatostatin acts directly on the acid-producing parietal cells via a G-protein coupled receptor (which inhibits adenylate cyclase, thus effectively antagonising the stimulatory effect of histamine) to reduce acid secretion. Somatostatin can also indirectly decrease stomach acid production by preventing the release of other hormones, including gastrin and histamine which effectively slows down the digestive process. Brain Somatostatin is produced by neuroendocrine neurons of the ventromedial nucleus of the hypothalamus. These neurons project to the median eminence, where somatostatin is released from neurosecretory nerve endings into the hypothalamohypophysial system through neuron axons. Somatostatin is then carried to the anterior pituitary gland, where it inhibits the secretion of growth hormone from somatotrope cells. The somatostatin neurons in the periventricular nucleus mediate negative feedback effects of growth hormone on its own release; the somatostatin neurons respond to high circulating concentrations of growth hormone and somatomedins by increasing the release of somatostatin, so reducing the rate of secretion of growth hormone. Somatostatin is also produced by several other populations that project centrally, i.e., to other areas of the brain, and somatostatin receptors are expressed at many different sites in the brain. In particular, populations of somatostatin neurons occur in the arcuate nucleus, the hippocampus, and the brainstem nucleus of the solitary tract. Functions Somatostatin is classified as an inhibitory hormone, and is induced by low pH. Its actions are spread to different parts of the body. Somatostatin release is inhibited by the vagus nerve. 
Anterior pituitary In the anterior pituitary gland, the effects of somatostatin are: Inhibiting the release of growth hormone (GH) (thus opposing the effects of growth hormone–releasing hormone (GHRH)) Inhibiting the release of thyroid-stimulating hormone (TSH) Inhibiting adenylyl cyclase in parietal cells Inhibiting the release of prolactin (PRL) Gastrointestinal system Somatostatin is homologous with cortistatin (see somatostatin family) and suppresses the release of gastrointestinal hormones Decreases the rate of gastric emptying, and reduces smooth muscle contractions and blood flow within the intestine Suppresses the release of pancreatic hormones Somatostatin release is triggered by the beta cell peptide urocortin3 (Ucn3) to inhibit insulin release. Inhibits the release of glucagon Suppresses the exocrine secretory action of the pancreas Synthetic substitutes Octreotide (brand name Sandostatin, Novartis Pharmaceuticals) is an octapeptide that mimics natural somatostatin pharmacologically, though it is a more potent inhibitor of growth hormone, glucagon, and insulin than the natural hormone, and has a much longer half-life (about 90 minutes, compared to 2–3 minutes for somatostatin). Since it is absorbed poorly from the gut, it is administered parenterally (subcutaneously, intramuscularly, or intravenously). It is indicated for symptomatic treatment of carcinoid syndrome and acromegaly. It is also finding increased use in polycystic diseases of the liver and kidney. Lanreotide (Somatuline, Ipsen Pharmaceuticals) is a medication used in the management of acromegaly and symptoms caused by neuroendocrine tumors, most notably carcinoid syndrome. It is a long-acting analog of somatostatin, like octreotide. It is available in several countries, including the United Kingdom, Australia, and Canada, and was approved for sale in the United States by the Food and Drug Administration on August 30, 2007. Pasireotide, sold under the brand name Signifor, is an orphan drug approved in the United States and the European Union for the treatment of Cushing's disease in patients who fail or are ineligible for surgical therapy. It was developed by Novartis. Pasireotide is a somatostatin analog with a 40-fold increased affinity for somatostatin receptor 5 compared with other somatostatin analogs. Evolutionary history Six somatostatin genes have been discovered in vertebrates. The current proposed history as to how these six genes arose is based on the three whole-genome duplication events that took place in vertebrate evolution, along with local duplications in teleost fish. An ancestral somatostatin gene was duplicated during the first whole-genome duplication event (1R) to create SS1 and SS2. These two genes were duplicated during the second whole-genome duplication event (2R) to create four new somatostatin genes: SS1, SS2, SS3, and one gene that was lost during the evolution of vertebrates. Tetrapods retained SS1 (also known as SS-14 and SS-28) and SS2 (also known as cortistatin) after the split of the Sarcopterygii and Actinopterygii lineages. In teleost fish, SS1, SS2, and SS3 were duplicated during the third whole-genome duplication event (3R) to create SS1, SS2, SS4, SS5, and two genes that were lost during the evolution of teleost fish. SS1 and SS2 went through local duplications to give rise to SS6 and SS3. 
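As a numerical aside on the half-lives quoted in the synthetic substitutes section above, the sketch below compares how quickly native somatostatin and octreotide decay in plasma. A simple first-order (exponential) elimination model is an assumption made here for illustration; only the half-life figures (2–3 minutes versus about 90 minutes) come from the text.

```python
import math

# First-order elimination: C(t) = C0 * 2**(-t / t_half).
# Half-lives from the article: somatostatin ~2-3 min, octreotide ~90 min.

def fraction_remaining(t_min: float, t_half_min: float) -> float:
    """Fraction of the initial plasma concentration left after t minutes."""
    return 2.0 ** (-t_min / t_half_min)

def time_to_fraction(fraction: float, t_half_min: float) -> float:
    """Minutes until the concentration falls to the given fraction."""
    return t_half_min * math.log(1.0 / fraction, 2)

if __name__ == "__main__":
    for name, t_half in [("somatostatin", 2.5), ("octreotide", 90.0)]:
        t10 = time_to_fraction(0.10, t_half)
        print(f"{name}: 10% remains after {t10:5.1f} min")
    # somatostatin: ~8.3 min; octreotide: ~299 min. This is one reason the
    # long-acting analog is practical to administer while the native
    # hormone is not.
```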
See also FK962 Hypothalamic–pituitary–somatic axis Octreotide References Further reading External links Antidiarrhoeals Endocrine system Hormones of the somatotropic axis Neuropeptides Neuroendocrinology Pancreatic hormones Somatostatin inhibitors
Somatostatin
[ "Biology" ]
1,660
[ "Organ systems", "Endocrine system" ]
481,856
https://en.wikipedia.org/wiki/Rigid%20body%20dynamics
In the physical science of dynamics, rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. The assumption that the bodies are rigid (i.e. they do not deform under the action of applied forces) simplifies analysis, by reducing the parameters that describe the configuration of the system to the translation and rotation of reference frames attached to each body. This excludes bodies that display fluid, highly elastic, and plastic behavior. The dynamics of a rigid body system is described by the laws of kinematics and by the application of Newton's second law (kinetics) or their derivative form, Lagrangian mechanics. The solution of these equations of motion provides a description of the position, the motion and the acceleration of the individual components of the system, and overall the system itself, as a function of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation of mechanical systems. Planar rigid body dynamics If a system of particles moves parallel to a fixed plane, the system is said to be constrained to planar movement. In this case, Newton's laws (kinetics) for a rigid system of N particles, P, i=1,...,N, simplify because there is no movement in the k direction. Determine the resultant force and torque at a reference point R, to obtain where r denotes the planar trajectory of each particle. The kinematics of a rigid body yields the formula for the acceleration of the particle P in terms of the position R and acceleration A of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as, For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along k perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors e from the reference point R to a point r and the unit vectors , so This yields the resultant force on the system as and torque as where and is the unit vector perpendicular to the plane for all of the particles P. Use the center of mass C as the reference point, so these equations for Newton's laws simplify to become where is the total mass and I is the moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass. Rigid body in three dimensions Orientation or attitude descriptions Several methods to describe orientations of a rigid body in three dimensions have been developed. They are summarized in the following sections. Euler angles The first attempt to represent an orientation is attributed to Leonhard Euler. He imagined three reference frames that could rotate one around the other, and realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space (using two rotations to fix the vertical axis and another to fix the other two axes). The values of these three rotations are called Euler angles. Commonly, is used to denote precession, nutation, and intrinsic rotation. Tait–Bryan angles These are three angles, also known as yaw, pitch and roll, Navigation angles and Cardan angles. Mathematically they constitute a set of six possibilities inside the twelve possible sets of Euler angles, the ordering being the one best used for describing the orientation of a vehicle such as an airplane. 
In aerospace engineering they are usually referred to as Euler angles. Orientation vector Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore, the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed. Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis and module equal to the value of the angle. Therefore, any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called orientation vector, or attitude vector. A similar method, called axis-angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis, and a separate value to indicate the angle (see figure). Orientation matrix With the introduction of matrices the Euler theorems were rewritten. The rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called orientation matrix, or attitude matrix. The above-mentioned Euler vector is the eigenvector of a rotation matrix (a rotation matrix has a unique real eigenvalue). The product of two rotation matrices is the composition of rotations. Therefore, as before, the orientation can be given as the rotation from the initial frame to achieve the frame that we want to describe. The configuration space of a non-symmetrical object in n-dimensional space is SO(n) × Rn. Orientation may be visualized by attaching a basis of tangent vectors to an object. The direction in which each vector points determines its orientation. Orientation quaternion Another way to describe rotations is using rotation quaternions, also called versors. They are equivalent to rotation matrices and rotation vectors. With respect to rotation vectors, they can be more easily converted to and from matrices. When used to represent orientations, rotation quaternions are typically called orientation quaternions or attitude quaternions. Newton's second law in three dimensions To consider rigid body dynamics in three-dimensional space, Newton's second law must be extended to define the relationship between the movement of a rigid body and the system of forces and torques that act on it. Newton formulated his second law for a particle as, "The change of motion of an object is proportional to the force impressed and is made in the direction of the straight line in which the force is impressed." Because Newton generally referred to mass times velocity as the "motion" of a particle, the phrase "change of motion" refers to the mass times acceleration of the particle, and so this law is usually written as where F is understood to be the only external force acting on the particle, m is the mass of the particle, and a is its acceleration vector. The extension of Newton's second law to rigid bodies is achieved by considering a rigid system of particles. Rigid system of particles If a system of N particles, Pi, i=1,...,N, are assembled into a rigid body, then Newton's second law can be applied to each of the particles in the body. 
If Fi is the external force applied to particle Pi with mass mi, then where Fij is the internal force of particle Pj acting on particle Pi that maintains the constant distance between these particles. An important simplification to these force equations is obtained by introducing the resultant force and torque that acts on the rigid system. This resultant force and torque is obtained by choosing one of the particles in the system as a reference point, R, where each of the external forces are applied with the addition of an associated torque. The resultant force F and torque T are given by the formulas, where Ri is the vector that defines the position of particle Pi. Newton's second law for a particle combines with these formulas for the resultant force and torque to yield, where the internal forces Fij cancel in pairs. The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration a of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as, Mass properties The mass properties of the rigid body are represented by its center of mass and inertia matrix. Choose the reference point R so that it satisfies the condition then it is known as the center of mass of the system. The inertia matrix [IR] of the system relative to the reference point R is defined by where is the column vector ; is its transpose, and is the 3 by 3 identity matrix. is the scalar product of with itself, while is the tensor product of with itself. Force-torque equations Using the center of mass and inertia matrix, the force and torque equations for a single rigid body take the form and are known as Newton's second law of motion for a rigid body. The dynamics of an interconnected system of rigid bodies, , , is formulated by isolating each rigid body and introducing the interaction forces. The resultant of the external and interaction forces on each body, yields the force-torque equations Newton's formulation yields 6M equations that define the dynamics of a system of M rigid bodies. Rotation in three dimensions A rotating object, whether under the influence of torques or not, may exhibit the behaviours of precession and nutation. The fundamental equation describing the behavior of a rotating solid body is Euler's equation of motion: where the pseudovectors τ and L are, respectively, the torques on the body and its angular momentum, the scalar I is its moment of inertia, the vector ω is its angular velocity, the vector α is its angular acceleration, D is the differential in an inertial reference frame and d is the differential in a relative reference frame fixed with the body. The solution to this equation when there is no applied torque is discussed in the articles Euler's equation of motion and Poinsot's ellipsoid. It follows from Euler's equation that a torque τ applied perpendicular to the axis of rotation, and therefore perpendicular to L, results in a rotation about an axis perpendicular to both τ and L. This motion is called precession. The angular velocity of precession ΩP is given by the cross product: Precession can be demonstrated by placing a spinning top with its axis horizontal and supported loosely (frictionless toward precession) at one end. 
Instead of falling, as might be expected, the top appears to defy gravity by remaining with its axis horizontal even though the other end of the axis is left unsupported; the free end of the axis slowly describes a circle in a horizontal plane, and this turning is the precession. This effect is explained by the above equations. The torque on the top is supplied by a couple of forces: gravity acting downward on the device's centre of mass, and an equal force acting upward to support one end of the device. The rotation resulting from this torque is not downward, as might be intuitively expected, causing the device to fall, but perpendicular to both the gravitational torque (horizontal and perpendicular to the axis of rotation) and the axis of rotation (horizontal and outwards from the point of support), i.e., about a vertical axis, causing the device to rotate slowly about the supporting point. Under a constant torque of magnitude τ, the speed of precession ΩP is inversely proportional to L, the magnitude of its angular momentum: ΩP = τ / (L sin θ), where θ is the angle between the vectors ΩP and L. Thus, if the top's spin slows down (for example, due to friction), its angular momentum decreases and so the rate of precession increases. This continues until the device is unable to rotate fast enough to support its own weight, when it stops precessing and falls off its support, mostly because friction against the precession causes a further precession that tips the device over. By convention, these three vectors (torque, spin, and precession) are all oriented with respect to each other according to the right-hand rule. Virtual work of forces acting on a rigid body An alternate formulation of rigid body dynamics that has a number of convenient features is obtained by considering the virtual work of forces acting on a rigid body. The virtual work of forces acting at various points on a single rigid body can be calculated using the velocities of their point of application and the resultant force and torque. To see this, let the forces F1, F2 ... Fn act on the points R1, R2 ... Rn in a rigid body. The trajectories of Ri are defined by the movement of the rigid body. The velocity of the points Ri along their trajectories are where ω is the angular velocity vector of the body. Virtual work Work is computed from the dot product of each force with the displacement of its point of contact. If the trajectory of a rigid body is defined by a set of generalized coordinates qj, j = 1, ..., m, then the virtual displacements are given by The virtual work of this system of forces acting on the body in terms of the generalized coordinates becomes or collecting the coefficients of Generalized forces For simplicity consider a trajectory of a rigid body that is specified by a single generalized coordinate q, such as a rotation angle; then the formula becomes Introduce the resultant force F and torque T so this equation takes the form The quantity Q defined by is known as the generalized force associated with the virtual displacement δq. This formula generalizes to the movement of a rigid body defined by more than one generalized coordinate, that is where It is useful to note that conservative forces such as gravity and spring forces are derivable from a potential function, known as a potential energy. In this case the generalized forces are given by D'Alembert's form of the principle of virtual work The equations of motion for a mechanical system of rigid bodies can be determined using D'Alembert's form of the principle of virtual work. 
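Before the virtual-work formulation is developed further, a brief numerical aside on the precession relation above: the sketch below evaluates ΩP = τ / (L sin θ) for a toy gyroscope supported horizontally at one end (so sin θ = 1). All parameter values are invented for illustration; the relations τ = m g r and L = I ω follow from the discussion above.

```python
import math

# Toy gyroscope: a uniform disc spinning on a light axle, supported at
# one end with its axis horizontal (theta = 90 deg, so sin(theta) = 1).

g = 9.81           # m/s^2, gravitational acceleration
m = 0.5            # kg, disc mass (illustrative value)
R_disc = 0.05      # m, disc radius (illustrative value)
r = 0.10           # m, pivot-to-centre-of-mass distance (illustrative)
spin_rpm = 3000.0  # disc spin rate (illustrative)

I = 0.5 * m * R_disc**2              # moment of inertia of a uniform disc
omega = spin_rpm * 2 * math.pi / 60  # spin rate in rad/s
L = I * omega                        # spin angular momentum

tau = m * g * r                      # gravitational torque about the pivot
Omega_p = tau / L                    # precession rate in rad/s

print(f"precession rate: {Omega_p:.2f} rad/s "
      f"({2 * math.pi / Omega_p:.1f} s per revolution)")
# As the text notes, if friction slows the spin (omega falls), L falls
# and the precession rate tau / L rises.
```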
The principle of virtual work is used to study the static equilibrium of a system of rigid bodies, however by introducing acceleration terms in Newton's laws this approach is generalized to define dynamic equilibrium. Static equilibrium The static equilibrium of a mechanical system rigid bodies is defined by the condition that the virtual work of the applied forces is zero for any virtual displacement of the system. This is known as the principle of virtual work. This is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is Qi=0. Let a mechanical system be constructed from rigid bodies, Bi, , and let the resultant of the applied forces on each body be the force-torque pairs, and , . Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity and angular velocities , , for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom. The virtual work of the forces and torques, and , applied to this one degree of freedom system is given by where is the generalized force acting on this one degree of freedom system. If the mechanical system is defined by m generalized coordinates, , , then the system has m degrees of freedom and the virtual work is given by, where is the generalized force associated with the generalized coordinate . The principle of virtual work states that static equilibrium occurs when these generalized forces acting on the system are zero, that is These equations define the static equilibrium of the system of rigid bodies. Generalized inertia forces Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force associated with the generalized coordinate is given by This inertia force can be computed from the kinetic energy of the rigid body, by using the formula A system of rigid bodies with m generalized coordinates has the kinetic energy which can be used to calculate the m generalized inertia forces Dynamic equilibrium D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that for any set of virtual displacements . This condition yields equations, which can also be written as The result is a set of m equations of motion that define the dynamics of the rigid body system. Lagrange's equations If the generalized forces Qj are derivable from a potential energy , then these equations of motion take the form In this case, introduce the Lagrangian, , so these equations of motion become These are known as Lagrange's equations of motion. Linear and angular momentum System of particles The linear and angular momentum of a rigid system of particles is formulated by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, be located at the coordinates ri and velocities vi. 
Select a reference point R and compute the relative position and velocity vectors, The total linear and angular momentum vectors relative to the reference point R are and If R is chosen as the center of mass these equations simplify to Rigid system of particles To specialize these formulas to a rigid body, assume the particles are rigidly connected to each other so P, i=1,...,n are located by the coordinates r and velocities v. Select a reference point R and compute the relative position and velocity vectors, where ω is the angular velocity of the system. The linear momentum and angular momentum of this rigid system measured relative to the center of mass R is These equations simplify to become, where M is the total mass of the system and [I] is the moment of inertia matrix defined by where [ri − R] is the skew-symmetric matrix constructed from the vector ri − R. Applications For the analysis of robotic systems For the biomechanical analysis of animals, humans or humanoid systems For the analysis of space objects For the understanding of strange motions of rigid bodies. For the design and development of dynamics-based sensors, such as gyroscopic sensors. For the design and development of various stability enhancement applications in automobiles. For improving the graphics of video games which involves rigid bodies See also Analytical mechanics Analytical dynamics Calculus of variations Classical mechanics Dynamics (mechanics) History of classical mechanics Lagrangian mechanics Lagrangian Hamiltonian mechanics Rigid body Rigid transformation Rigid rotor Soft-body dynamics Multibody system Polhode Herpolhode Precession Poinsot's ellipsoid Gyroscope Physics engine Physics processing unit Physics Abstraction Layer – Unified multibody simulator RigidChips – Japanese rigid-body simulator Euler's Equation References Further reading E. Leimanis (1965). The General Problem of the Motion of Coupled Rigid Bodies about a Fixed Point. (Springer, New York). W. B. Heard (2006). Rigid Body Mechanics: Mathematics, Physics and Applications. (Wiley-VCH). External links Chris Hecker's Rigid Body Dynamics Information Physically Based Modeling: Principles and Practice DigitalRune Knowledge Base contains a master thesis and a collection of resources about rigid body dynamics. F. Klein, "Note on the connection between line geometry and the mechanics of rigid bodies" (English translation) F. Klein, "On Sir Robert Ball's theory of screws" (English translation) E. Cotton, "Application of Cayley geometry to the geometric study of the displacement of a solid around a fixed point" (English translation) Rigid bodies Rigid bodies mechanics Engineering mechanics Rotational symmetry
Rigid body dynamics
[ "Physics", "Engineering" ]
3,992
[ "Civil engineering", "Engineering mechanics", "Mechanical engineering", "Symmetry", "Rotational symmetry" ]
482,602
https://en.wikipedia.org/wiki/Durability
Durability is the ability of a physical product to remain functional, without requiring excessive maintenance or repair, when faced with the challenges of normal operation over its design lifetime. There are several measures of durability in use, including years of life, hours of use, and number of operational cycles. In economics, goods with a long usable life are referred to as durable goods. Requirements for product durability Product durability is predicated on good repairability and regenerability, in conjunction with maintenance. Every durable product must be capable of adapting to technical, technological and design developments. This must be accompanied by a willingness on the part of consumers to forgo having the "very latest" version of a product. In the United Kingdom, durability as a characteristic relating to the quality of goods that can be demanded by consumers was not clearly established until a 1994 amendment of the Sale of Goods Act 1979 relating to the quality standards for supplied goods. Product life spans and sustainable consumption The lifespan of household goods is a significant factor in sustainable consumption. Longer product life spans can contribute to eco-efficiency and sufficiency, thus slowing consumption in order to progress towards a sustainable level of consumption. Cooper (2005) proposed a model to demonstrate the crucial role of product lifespans in sustainable production and consumption. Types of durability Durability can encompass several specific physical properties of designed products, including: Ageing (of polymers) Dust resistance Resistance to fatigue Fire resistance Radiation hardening Thermal resistance Rot-proofing Rustproofing Toughness Waterproofing Examples Chemically strengthened glass e.g. Superfest Durable medical equipment Durable water repellent See also Availability Consumables Disposable product Durable good Interchangeable parts Maintainability Product life Product stewardship Throwaway society Waste minimization References Broad-concept articles Materials science Waste minimisation
Durability
[ "Physics", "Materials_science", "Engineering" ]
375
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
482,912
https://en.wikipedia.org/wiki/Bingham%20plastic
In materials science, a Bingham plastic is a viscoplastic material that behaves as a rigid body at low stresses but flows as a viscous fluid at high stress. It is named after Eugene C. Bingham, who proposed its mathematical form. It is used as a common mathematical model of mud flow in drilling engineering, and in the handling of slurries. A common example is toothpaste, which will not be extruded until a certain pressure is applied to the tube. It is then pushed out as a relatively coherent plug. Explanation Figure 1 shows a graph of the behaviour of an ordinary viscous (or Newtonian) fluid in red, for example in a pipe. If the pressure at one end of a pipe is increased this produces a stress on the fluid tending to make it move (called the shear stress) and the volumetric flow rate increases proportionally. However, for a Bingham plastic fluid (in blue), stress can be applied but it will not flow until a certain value, the yield stress, is reached. Beyond this point the flow rate increases steadily with increasing shear stress. This is roughly the way in which Bingham presented his observation, in an experimental study of paints. These properties allow a Bingham plastic to have a textured surface with peaks and ridges instead of a featureless surface like a Newtonian fluid. Figure 2 shows the way in which it is normally presented currently. The graph shows shear stress on the vertical axis and shear rate on the horizontal one. (Volumetric flow rate depends on the size of the pipe; shear rate is a measure of how the velocity changes with distance. It is proportional to flow rate, but does not depend on pipe size.) As before, the Newtonian fluid flows and gives a shear rate for any finite value of shear stress. However, the Bingham plastic again does not exhibit any shear rate (no flow and thus no velocity) until a certain stress is achieved. For the Newtonian fluid the slope of this line is the viscosity, which is the only parameter needed to describe its flow. By contrast, the Bingham plastic requires two parameters, the yield stress and the slope of the line, known as the plastic viscosity. The physical reason for this behaviour is that the liquid contains particles (such as clay) or large molecules (such as polymers) which have some kind of interaction, creating a weak solid structure, formerly known as a false body, and a certain amount of stress is required to break this structure. Once the structure has been broken, the particles move with the liquid under viscous forces. If the stress is removed, the particles associate again. Definition The material is an elastic solid for shear stress τ less than a critical value τ0. Once the critical shear stress (or "yield stress") is exceeded, the material flows in such a way that the shear rate, ∂u/∂y (as defined in the article on viscosity), is directly proportional to the amount by which the applied shear stress exceeds the yield stress: ∂u/∂y = (τ − τ0)/μp, where μp is the plastic viscosity. Friction factor formulae In fluid flow, it is a common problem to calculate the pressure drop in an established piping network. Once the friction factor, f, is known, it becomes easier to handle different pipe-flow problems, viz. calculating the pressure drop for evaluating pumping costs or finding the flow rate in a piping network for a given pressure drop. It is usually extremely difficult to arrive at an exact analytical solution for the friction factor associated with the flow of non-Newtonian fluids, and therefore explicit approximations are used to calculate it. 
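As an illustration of why such approximations are popular, here is a minimal Python sketch that instead solves the implicit laminar relation numerically. The dimensionless Buckingham–Reiner form used in the code is reproduced from memory of the standard literature rather than quoted from this article (the article's own statement of it appears below), so the coefficients should be checked against a reference; the flow values are illustrative, and convergence of this naive fixed-point iteration is only assumed for typical laminar conditions.

```python
# Laminar Darcy friction factor for a Bingham plastic from the implicit
# Buckingham-Reiner relation (dimensionless form, as recalled):
#   f = (64/Re) * (1 + He/(6*Re) - 64*He**4 / (3 * f**3 * Re**7))
# Solved by naive fixed-point iteration from the Newtonian value 64/Re.

def buckingham_reiner_f(reynolds: float, hedstrom: float,
                        tol: float = 1e-10, max_iter: int = 200) -> float:
    f = 64.0 / reynolds  # Newtonian starting guess
    for _ in range(max_iter):
        f_new = (64.0 / reynolds) * (
            1.0 + hedstrom / (6.0 * reynolds)
            - 64.0 * hedstrom**4 / (3.0 * f**3 * reynolds**7))
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    raise RuntimeError("fixed-point iteration did not converge")

if __name__ == "__main__":
    # Illustrative values only: Re = 1000 (laminar), He = 1e4.
    f = buckingham_reiner_f(1000.0, 1.0e4)
    print(f"Darcy friction factor: {f:.4f}")  # roughly 0.17 here
```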
Once the friction factor has been calculated the pressure drop can be easily determined for a given flow by the Darcy–Weisbach equation: where: is the Darcy friction factor (SI units: dimensionless) is the frictional head loss (SI units: m) is the gravitational acceleration (SI units: m/s²) is the pipe diameter (SI units: m) is the pipe length (SI units: m) is the mean fluid velocity (SI units: m/s) Laminar flow An exact description of friction loss for Bingham plastics in fully developed laminar pipe flow was first published by Buckingham. His expression, the Buckingham–Reiner equation, can be written in a dimensionless form as follows: where: is the laminar flow Darcy friction factor (SI units: dimensionless) is the Reynolds number (SI units: dimensionless) is the Hedstrom number (SI units: dimensionless) The Reynolds number and the Hedstrom number are respectively defined as: and where: is the mass density of fluid (SI units: kg/m3) is the dynamic viscosity of fluid (SI units: kg/m s) is the yield point (yield strength) of fluid (SI units: Pa) Turbulent flow Darby and Melson developed an empirical expression that was then refined, and is given by: where: is the turbulent flow friction factor (SI units: dimensionless) Note: Darby and Melson's expression is for a Fanning friction factor, and needs to be multiplied by 4 to be used in the friction loss equations located elsewhere on this page. Approximations of the Buckingham–Reiner equation Although an exact analytical solution of the Buckingham–Reiner equation can be obtained because it is a fourth order polynomial equation in f, due to complexity of the solution it is rarely employed. Therefore, researchers have tried to develop explicit approximations for the Buckingham–Reiner equation. Swamee–Aggarwal equation The Swamee–Aggarwal equation is used to solve directly for the Darcy–Weisbach friction factor f for laminar flow of Bingham plastic fluids. It is an approximation of the implicit Buckingham–Reiner equation, but the discrepancy from experimental data is well within the accuracy of the data. The Swamee–Aggarwal equation is given by: Danish–Kumar solution Danish et al. have provided an explicit procedure to calculate the friction factor f by using the Adomian decomposition method. The friction factor containing two terms through this method is given as: where and Combined equation for friction factor for all flow regimes Darby–Melson equation In 1981, Darby and Melson, using the approach of Churchill and of Churchill and Usagi, developed an expression to get a single friction factor equation valid for all flow regimes: where: Both Swamee–Aggarwal equation and the Darby–Melson equation can be combined to give an explicit equation for determining the friction factor of Bingham plastic fluids in any regime. Relative roughness is not a parameter in any of the equations because the friction factor of Bingham plastic fluids is not sensitive to pipe roughness. See also Bagnold number Bernoulli's principle Bingham-Papanastasiou model Rheology Shear thinning References Materials Non-Newtonian fluids Viscosity Offshore engineering
Bingham plastic
[ "Physics", "Engineering" ]
1,392
[ "Physical phenomena", "Physical quantities", "Offshore engineering", "Construction", "Materials", "Wikipedia categories named after physical quantities", "Viscosity", "Physical properties", "Matter" ]
483,855
https://en.wikipedia.org/wiki/Cloud%20chamber
A cloud chamber, also known as a Wilson chamber, is a particle detector used for visualizing the passage of ionizing radiation. A cloud chamber consists of a sealed environment containing a supersaturated vapor of water or alcohol. An energetic charged particle (for example, an alpha or beta particle) interacts with the gaseous mixture by knocking electrons off gas molecules via electrostatic forces during collisions, resulting in a trail of ionized gas particles. The resulting ions act as condensation centers around which a mist-like trail of small droplets form if the gas mixture is at the point of condensation. These droplets are visible as a "cloud" track that persists for several seconds while the droplets fall through the vapor. These tracks have characteristic shapes. For example, an alpha particle track is thick and straight, while a beta particle track is wispy and shows more evidence of deflections by collisions. Cloud chambers were invented in the early 1900s by the Scottish physicist Charles Thomson Rees Wilson. They played a prominent role in experimental particle physics from the 1920s to the 1950s, until the advent of the bubble chamber. In particular, the discoveries of the positron in 1932 (see Fig. 1) and the muon in 1936, both by Carl Anderson (awarded a Nobel Prize in Physics in 1936), used cloud chambers. The Discovery of the kaon by George Rochester and Clifford Charles Butler in 1947 was made using a cloud chamber as the detector. In each of these cases, cosmic rays were the source of ionizing radiation. Yet they were also used with artificial sources of particles, for example in radiography applications as part of the Manhattan Project. Invention Charles Thomson Rees Wilson (1869–1959), a Scottish physicist, is credited with inventing the cloud chamber. Inspired by sightings of the Brocken spectre while working on the summit of Ben Nevis in 1894, he began to develop expansion chambers for studying cloud formation and optical phenomena in moist air. Very rapidly he discovered that ions could act as centers for water droplet formation in such chambers. He pursued the application of this discovery and perfected the first cloud chamber in 1911. In Wilson's original chamber, the air inside the sealed device was saturated with water vapor, then a diaphragm was used to expand the air inside the chamber (adiabatic expansion), cooling the air and starting to condense water vapor. Hence the name expansion cloud chamber is used. When an ionizing particle passes through the chamber, water vapor condenses on the resulting ions and the trail of the particle is visible in the vapor cloud. A cine film was used to record the images. Further developments were made by Patrick Blackett who utilised a stiff spring to expand and compress the chamber very rapidly, making the chamber sensitive to particles several times a second. This kind of chamber is also called a pulsed chamber because the conditions for operation are not continuously maintained. Wilson received half the Nobel Prize in Physics in 1927 for his work on the cloud chamber (the same year as Arthur Compton received half the prize for the Compton Effect). The diffusion cloud chamber was developed in 1936 by Alexander Langsdorf. This chamber differs from the expansion cloud chamber in that it is continuously sensitized to radiation, and in that the bottom must be cooled to a rather low temperature, generally colder than . Instead of water vapor, alcohol is used because of its lower freezing point. 
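As a numerical aside on Wilson's expansion principle described above, the following Python sketch estimates the temperature drop from a rapid (adiabatic) expansion of air. The starting temperature and expansion ratios are illustrative assumptions, not figures from Wilson's apparatus.

```python
# Rapid expansion cools a gas adiabatically: T2 = T1 * (V1 / V2)**(gamma - 1).
# This is the cooling mechanism of Wilson's expansion cloud chamber.

GAMMA_AIR = 1.4  # heat-capacity ratio of (dry) air

def temperature_after_expansion(t1_kelvin: float, expansion_ratio: float,
                                gamma: float = GAMMA_AIR) -> float:
    """Temperature after a reversible adiabatic expansion V2 = ratio * V1."""
    return t1_kelvin * expansion_ratio ** (1.0 - gamma)

if __name__ == "__main__":
    t1 = 293.0  # start at about 20 C (illustrative)
    for ratio in (1.1, 1.25, 1.4):
        t2 = temperature_after_expansion(t1, ratio)
        print(f"expand x{ratio:4.2f}: {t1 - t2:5.1f} K of cooling")
    # Even a modest expansion cools the air by tens of kelvin, enough to
    # supersaturate the water vapor so that droplets can form on ions.
```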
Cloud chambers cooled by dry ice or Peltier effect thermoelectric cooling are common demonstration and hobbyist devices; the alcohol used in them is commonly isopropyl alcohol or methylated spirit. Structure and operation Diffusion-type cloud chambers will be discussed here. A simple cloud chamber consists of the sealed environment, a warm top plate and a cold bottom plate (see Fig. 3). It requires a source of liquid alcohol at the warm side of the chamber where the liquid evaporates, forming a vapor that cools as it falls through the gas and condenses on the cold bottom plate. Some sort of ionizing radiation is needed. Isopropanol, methanol, or other alcohol vapor saturates the chamber. The alcohol falls as it cools down and the cold condenser provides a steep temperature gradient. The result is a supersaturated environment. As energetic charged particles pass through the gas they leave ionization trails. The alcohol vapor condenses around gaseous ion trails left behind by the ionizing particles. This occurs because alcohol and water molecules are polar, resulting in a net attractive force toward a nearby free charge (see Fig. 4). The result is a misty cloud-like formation, seen by the presence of droplets falling down to the condenser. When the tracks are emitted from a source, their point of origin can easily be determined. Fig. 5 shows an example of an alpha particle from a Pb-210 pin-type source undergoing Rutherford scattering. Just above the cold condenser plate there is a volume of the chamber which is sensitive to ionization tracks. The ion trail left by the radioactive particles provides an optimal trigger for condensation and cloud formation. This sensitive volume is increased in height by employing a steep temperature gradient and stable conditions. A strong electric field is often used to draw cloud tracks down to the sensitive region of the chamber and increase its sensitivity. The electric field can also serve to prevent large amounts of background "rain", caused by condensation forming above the sensitive volume of the chamber, which would otherwise obscure tracks with constant precipitation. A black background makes it easier to observe cloud tracks, and typically a tangential light source is needed to illuminate the white droplets against the black background. Often the tracks are not apparent until a shallow pool of alcohol is formed at the condenser plate. If a magnetic field is applied across the cloud chamber, positively and negatively charged particles will curve in opposite directions, according to the Lorentz force law (the sketch below puts rough numbers to this curvature); strong-enough fields are difficult to achieve, however, with small hobbyist setups. This method was also used to prove the existence of the positron in 1932, in accordance with Paul Dirac's theoretical prediction, published in 1928. Other particle detectors The bubble chamber was invented by Donald A. Glaser of the United States in 1952, and for this, he was awarded the Nobel Prize in Physics in 1960. The bubble chamber similarly reveals the tracks of subatomic particles, but inverts the principle of the cloud chamber to detect them as trails of bubbles in a superheated liquid, usually liquid hydrogen, rather than as trails of drops in a supersaturated vapor. Bubble chambers can be made physically larger than cloud chambers, and since they are filled with much-denser liquid material, they can reveal the tracks of much more energetic particles. 
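The sketch below puts rough numbers to the magnetic track curvature mentioned above, using the standard gyroradius relation r = p/(qB) for a singly charged particle; the momentum and field values are illustrative assumptions.

```python
# Radius of curvature of a charged particle moving across a magnetic
# field: r = p / (q * B), from the Lorentz force law.

E_CHARGE = 1.602e-19     # C, elementary charge
MEV_C_TO_SI = 5.344e-22  # kg*m/s per MeV/c of momentum

def track_radius_m(p_mev_c: float, b_tesla: float) -> float:
    """Radius of curvature for a singly charged particle."""
    return (p_mev_c * MEV_C_TO_SI) / (E_CHARGE * b_tesla)

if __name__ == "__main__":
    p = 1.0  # MeV/c, a typical beta-particle momentum (illustrative)
    for b in (0.05, 1.0):  # hobby magnet vs. laboratory magnet, in tesla
        print(f"B = {b:4.2f} T -> r = {track_radius_m(p, b) * 100:6.1f} cm")
    # At hobbyist field strengths the radius is several centimetres or
    # more for MeV-scale particles, so the curling is gentle; a 1 T
    # laboratory field bends the same track to a few millimetres.
```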
These factors rapidly made the bubble chamber the predominant particle detector for a number of decades, so that cloud chambers were effectively superseded in fundamental research by the start of the 1960s. A spark chamber is an electrical device that uses a grid of uninsulated electric wires in a gas-filled chamber, with high voltages applied between the wires. Energetic charged particles cause ionization of the gas along the path of the particle in the same way as in the Wilson cloud chamber, but in this case the ambient electric fields are high enough to precipitate full-scale gas breakdown in the form of sparks at the position of the initial ionization. The presence and location of these sparks is then registered electrically, and the information is stored for later analysis, such as by a digital computer. Similar condensation effects can be observed as Wilson clouds, also called condensation clouds, at large explosions in humid air and other Prandtl–Glauert singularity effects. Gallery See also Nuclear emulsion – also used to record and investigate fast charged particles Bubble chamber Spark chamber Gilbert U-238 Atomic Energy Laboratory science kit for children (1950–1951) Contrail Nephelescope Notes References External links Particle detectors Radioactivity Scottish inventions Ionising radiation detectors Articles containing video clips
Cloud chamber
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,667
[ "Radioactive contamination", "Measuring instruments", "Particle detectors", "Ionising radiation detectors", "Nuclear physics", "Radioactivity" ]
23,912,155
https://en.wikipedia.org/wiki/Gauge%20theory
In physics, a gauge theory is a type of field theory in which the Lagrangian, and hence the dynamics of the system itself, does not change under local transformations according to certain smooth families of operations (Lie groups). Formally, the Lagrangian is invariant under these transformations. The term gauge refers to any specific mathematical formalism to regulate redundant degrees of freedom in the Lagrangian of a physical system. The transformations between possible gauges, called gauge transformations, form a Lie group—referred to as the symmetry group or the gauge group of the theory. Associated with any Lie group is the Lie algebra of group generators. For each group generator there necessarily arises a corresponding field (usually a vector field) called the gauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations (called gauge invariance). When such a theory is quantized, the quanta of the gauge fields are called gauge bosons. If the symmetry group is non-commutative, then the gauge theory is referred to as non-abelian gauge theory, the usual example being the Yang–Mills theory. Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed at every point in the spacetime in which the physical processes occur, they are said to have a global symmetry. Local symmetry, the cornerstone of gauge theories, is a stronger constraint. In fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime (the same way a constant value can be understood as a function of a certain parameter, the output of which is always the same). Gauge theories are important as the successful field theories explaining the dynamics of elementary particles. Quantum electrodynamics is an abelian gauge theory with the symmetry group U(1) and has one gauge field, the electromagnetic four-potential, with the photon being the gauge boson. The Standard Model is a non-abelian gauge theory with the symmetry group U(1) × SU(2) × SU(3) and has a total of twelve gauge bosons: the photon, three weak bosons and eight gluons. Gauge theories are also important in explaining gravitation in the theory of general relativity. Its case is somewhat unusual in that the gauge field is a tensor, the Lanczos tensor. Theories of quantum gravity, beginning with gauge gravitation theory, also postulate the existence of a gauge boson known as the graviton. Gauge symmetries can be viewed as analogues of the principle of general covariance of general relativity in which the coordinate system can be chosen freely under arbitrary diffeomorphisms of spacetime. Both gauge invariance and diffeomorphism invariance reflect a redundancy in the description of the system. An alternative theory of gravitation, gauge theory gravity, replaces the principle of general covariance with a true gauge principle with new gauge fields. Historically, these ideas were first stated in the context of classical electromagnetism and later in general relativity. However, the modern importance of gauge symmetries appeared first in the relativistic quantum mechanics of electrons (quantum electrodynamics), elaborated on below. Today, gauge theories are useful in condensed matter, nuclear and high energy physics among other subfields. History The concept and the name of gauge theory derives from the work of Hermann Weyl in 1918. 
Weyl, in an attempt to generalize the geometrical ideas of general relativity to include electromagnetism, conjectured that Eichinvarianz or invariance under the change of scale (or "gauge") might also be a local symmetry of general relativity. After the development of quantum mechanics, Weyl, Vladimir Fock and Fritz London replaced the simple scale factor with a complex quantity and turned the scale transformation into a change of phase, which is a U(1) gauge symmetry. This explained the electromagnetic field effect on the wave function of a charged quantum mechanical particle. Weyl's 1929 paper introduced the modern concept of gauge invariance subsequently popularized by Wolfgang Pauli in his 1941 review. In retrospect, James Clerk Maxwell's formulation, in 1864–65, of electrodynamics in "A Dynamical Theory of the Electromagnetic Field" suggested the possibility of invariance, when he stated that any vector field whose curl vanishes—and can therefore normally be written as a gradient of a function—could be added to the vector potential without affecting the magnetic field. Similarly unnoticed, David Hilbert had derived the Einstein field equations by postulating the invariance of the action under a general coordinate transformation. The importance of these symmetry invariances remained unnoticed until Weyl's work. Inspired by Pauli's descriptions of connection between charge conservation and field theory driven by invariance, Chen Ning Yang sought a field theory for atomic nuclei binding based on conservation of nuclear isospin. In 1954, Yang and Robert Mills generalized the gauge invariance of electromagnetism, constructing a theory based on the action of the (non-abelian) SU(2) symmetry group on the isospin doublet of protons and neutrons. This is similar to the action of the U(1) group on the spinor fields of quantum electrodynamics. The Yang–Mills theory became the prototype theory to resolve some of the confusion in elementary particle physics. This idea later found application in the quantum field theory of the weak force, and its unification with electromagnetism in the electroweak theory. Gauge theories became even more attractive when it was realized that non-abelian gauge theories reproduced a feature called asymptotic freedom. Asymptotic freedom was believed to be an important characteristic of strong interactions. This motivated searching for a strong force gauge theory. This theory, now known as quantum chromodynamics, is a gauge theory with the action of the SU(3) group on the color triplet of quarks. The Standard Model unifies the description of electromagnetism, weak interactions and strong interactions in the language of gauge theory. In the 1970s, Michael Atiyah began studying the mathematics of solutions to the classical Yang–Mills equations. In 1983, Atiyah's student Simon Donaldson built on this work to show that the differentiable classification of smooth 4-manifolds is very different from their classification up to homeomorphism. Michael Freedman used Donaldson's work to exhibit exotic R4s, that is, exotic differentiable structures on Euclidean 4-dimensional space. This led to an increasing interest in gauge theory for its own sake, independent of its successes in fundamental physics. In 1994, Edward Witten and Nathan Seiberg invented gauge-theoretic techniques based on supersymmetry that enabled the calculation of certain topological invariants (the Seiberg–Witten invariants). 
These contributions to mathematics from gauge theory have led to a renewed interest in this area. The importance of gauge theories in physics is exemplified in the success of the mathematical formalism in providing a unified framework to describe the quantum field theories of electromagnetism, the weak force and the strong force. This theory, known as the Standard Model, accurately describes experimental predictions regarding three of the four fundamental forces of nature, and is a gauge theory with the gauge group SU(3) × SU(2) × U(1). Modern theories like string theory, as well as general relativity, are, in one way or another, gauge theories. See Jackson and Okun for early history of gauge and Pickering for more about the history of gauge and quantum field theories. Description Global and local symmetries Global symmetry In physics, the mathematical description of any physical situation usually contains excess degrees of freedom; the same physical situation is equally well described by many equivalent mathematical configurations. For instance, in Newtonian dynamics, if two configurations are related by a Galilean transformation (an inertial change of reference frame) they represent the same physical situation. These transformations form a group of "symmetries" of the theory, and a physical situation corresponds not to an individual mathematical configuration but to a class of configurations related to one another by this symmetry group. This idea can be generalized to include local as well as global symmetries, analogous to much more abstract "changes of coordinates" in a situation where there is no preferred "inertial" coordinate system that covers the entire physical system. A gauge theory is a mathematical model that has symmetries of this kind, together with a set of techniques for making physical predictions consistent with the symmetries of the model. Example of global symmetry When a quantity occurring in the mathematical configuration is not just a number but has some geometrical significance, such as a velocity or an axis of rotation, its representation as numbers arranged in a vector or matrix is also changed by a coordinate transformation. For instance, if one description of a pattern of fluid flow states that the fluid velocity in the neighborhood of (, ) is 1 m/s in the positive x direction, then a description of the same situation in which the coordinate system has been rotated clockwise by 90 degrees states that the fluid velocity in the neighborhood of (, ) is 1 m/s in the negative y direction. The coordinate transformation has affected both the coordinate system used to identify the location of the measurement and the basis in which its value is expressed. As long as this transformation is performed globally (affecting the coordinate basis in the same way at every point), the effect on values that represent the rate of change of some quantity along some path in space and time as it passes through point P is the same as the effect on values that are truly local to P. Local symmetry Use of fiber bundles to describe local symmetries In order to adequately describe physical situations in more complex theories, it is often necessary to introduce a "coordinate basis" for some of the objects of the theory that do not have this simple relationship to the coordinates used to label points in space and time. 
(In mathematical terms, the theory involves a fiber bundle in which the fiber at each point of the base space consists of possible coordinate bases for use when describing the values of objects at that point.) In order to spell out a mathematical configuration, one must choose a particular coordinate basis at each point (a local section of the fiber bundle) and express the values of the objects of the theory (usually "fields" in the physicist's sense) using this basis. Two such mathematical configurations are equivalent (describe the same physical situation) if they are related by a transformation of this abstract coordinate basis (a change of local section, or gauge transformation). In most gauge theories, the set of possible transformations of the abstract gauge basis at an individual point in space and time is a finite-dimensional Lie group. The simplest such group is U(1), which appears in the modern formulation of quantum electrodynamics (QED) via its use of complex numbers. QED is generally regarded as the first, and simplest, physical gauge theory. The set of possible gauge transformations of the entire configuration of a given gauge theory also forms a group, the gauge group of the theory. An element of the gauge group can be parameterized by a smoothly varying function from the points of spacetime to the (finite-dimensional) Lie group, such that the value of the function and its derivatives at each point represents the action of the gauge transformation on the fiber over that point. A gauge transformation with constant parameter at every point in space and time is analogous to a rigid rotation of the geometric coordinate system; it represents a global symmetry of the gauge representation. As in the case of a rigid rotation, this gauge transformation affects expressions that represent the rate of change along a path of some gauge-dependent quantity in the same way as those that represent a truly local quantity. A gauge transformation whose parameter is not a constant function is referred to as a local symmetry; its effect on expressions that involve a derivative is qualitatively different from that on expressions that do not. (This is analogous to a non-inertial change of reference frame, which can produce a Coriolis effect.) Gauge fields The "gauge covariant" version of a gauge theory accounts for this effect by introducing a gauge field (in mathematical language, an Ehresmann connection) and formulating all rates of change in terms of the covariant derivative with respect to this connection. The gauge field becomes an essential part of the description of a mathematical configuration. A configuration in which the gauge field can be eliminated by a gauge transformation has the property that its field strength (in mathematical language, its curvature) is zero everywhere; a gauge theory is not limited to these configurations. In other words, the distinguishing characteristic of a gauge theory is that the gauge field does not merely compensate for a poor choice of coordinate system; there is generally no gauge transformation that makes the gauge field vanish. When analyzing the dynamics of a gauge theory, the gauge field must be treated as a dynamical variable, similar to other objects in the description of a physical situation. In addition to its interaction with other objects via the covariant derivative, the gauge field typically contributes energy in the form of a "self-energy" term. 
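A toy numerical illustration of these ideas (not from the article: a one-dimensional periodic lattice with a U(1) structure group, as in lattice gauge theory, where the gauge field enters through phases on the links between sites). The covariant difference built from the links transforms exactly like the field itself under an arbitrary local phase rotation, while the naive finite difference does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # matter field on sites
theta = rng.uniform(0, 2 * np.pi, n)                        # local gauge parameter
link = np.exp(1j * rng.uniform(0, 2 * np.pi, n))            # U(1) gauge field on links

def covariant_diff(psi, link):
    # Compare psi at site i+1, parallel-transported back along the
    # link, with psi at site i (periodic boundary via np.roll).
    return link * np.roll(psi, -1) - psi

# Gauge transformation: rotate the field independently at every site
# and compensate on the links.
psi_g = np.exp(1j * theta) * psi
link_g = np.exp(1j * theta) * link * np.exp(-1j * np.roll(theta, -1))

d = covariant_diff(psi, link)
d_g = covariant_diff(psi_g, link_g)

# The covariant difference transforms exactly like psi itself
# (multiplied by the local phase), so invariants built from it survive:
print(np.allclose(d_g, np.exp(1j * theta) * d))          # True

# An ordinary finite difference, by contrast, is not covariant:
plain = np.roll(psi, -1) - psi
plain_g = np.roll(psi_g, -1) - psi_g
print(np.allclose(plain_g, np.exp(1j * theta) * plain))  # False
```

This is the discrete analogue of replacing the bare derivative with a covariant derivative: the link variables play the role of the connection, and there is in general no choice of local phases that makes them all trivial.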
One can obtain the equations for the gauge theory by:
starting from a naïve ansatz without the gauge field (in which the derivatives appear in a "bare" form);
listing those global symmetries of the theory that can be characterized by a continuous parameter (generally an abstract equivalent of a rotation angle);
computing the correction terms that result from allowing the symmetry parameter to vary from place to place; and
reinterpreting these correction terms as couplings to one or more gauge fields, and giving these fields appropriate self-energy terms and dynamical behavior.
This is the sense in which a gauge theory "extends" a global symmetry to a local symmetry, and closely resembles the historical development of the gauge theory of gravity known as general relativity.
Physical experiments
Gauge theories used to model the results of physical experiments engage in:
limiting the universe of possible configurations to those consistent with the information used to set up the experiment, and then
computing the probability distribution of the possible outcomes that the experiment is designed to measure.
We cannot express the mathematical descriptions of the "setup information" and the "possible measurement outcomes", or the "boundary conditions" of the experiment, without reference to a particular coordinate system, including a choice of gauge. Assuming an experiment adequately isolated from "external" influence is itself a gauge-dependent statement. Mishandled gauge dependence in calculations of boundary conditions is a frequent source of anomalies, and approaches to anomaly avoidance classify gauge theories.
Continuum theories
The two gauge theories mentioned above, continuum electrodynamics and general relativity, are continuum field theories. The techniques of calculation in a continuum theory implicitly assume that:
given a completely fixed choice of gauge, the boundary conditions of an individual configuration are completely described;
given a completely fixed gauge and a complete set of boundary conditions, the principle of least action determines a unique mathematical configuration and therefore a unique physical situation consistent with these bounds;
fixing the gauge introduces no anomalies in the calculation, due either to gauge dependence in describing partial information about boundary conditions or to incompleteness of the theory.
Determination of the likelihood of possible measurement outcomes proceeds by:
establishing a probability distribution over all physical situations determined by boundary conditions consistent with the setup information;
establishing a probability distribution of measurement outcomes for each possible physical situation;
convolving these two probability distributions to get a distribution of possible measurement outcomes consistent with the setup information.
These assumptions have enough validity across a wide range of energy scales and experimental conditions to allow these theories to make accurate predictions about almost all of the phenomena encountered in daily life: light, heat, and electricity, eclipses, spaceflight, etc. They fail only at the smallest and largest scales due to omissions in the theories themselves, and when the mathematical techniques themselves break down, most notably in the case of turbulence and other chaotic phenomena.
Quantum field theories
Other than these classical continuum field theories, the most widely known gauge theories are quantum field theories, including quantum electrodynamics and the Standard Model of elementary particle physics. The starting point of a quantum field theory is much like that of its continuum analog: a gauge-covariant action integral that characterizes "allowable" physical situations according to the principle of least action. However, continuum and quantum theories differ significantly in how they handle the excess degrees of freedom represented by gauge transformations. Continuum theories, and most pedagogical treatments of the simplest quantum field theories, use a gauge fixing prescription to reduce the orbit of mathematical configurations that represent a given physical situation to a smaller orbit related by a smaller gauge group (the global symmetry group, or perhaps even the trivial group). More sophisticated quantum field theories, in particular those that involve a non-abelian gauge group, break the gauge symmetry within the techniques of perturbation theory by introducing additional fields (the Faddeev–Popov ghosts) and counterterms motivated by anomaly cancellation, in an approach known as BRST quantization. While these concerns are in one sense highly technical, they are also closely related to the nature of measurement, the limits on knowledge of a physical situation, and the interactions between incompletely specified experimental conditions and incompletely understood physical theory. The mathematical techniques that have been developed in order to make gauge theories tractable have found many other applications, from solid-state physics and crystallography to low-dimensional topology.
Classical gauge theory
Classical electromagnetism
In electrostatics, one can either discuss the electric field, E, or its corresponding electric potential, V. Knowledge of one makes it possible to find the other, except that potentials differing by a constant, C, correspond to the same electric field. This is because the electric field relates to changes in the potential from one point in space to another, and the constant C would cancel out when subtracting to find the change in potential. In terms of vector calculus, the electric field is the (negative) gradient of the potential, E = −∇V. Generalizing from static electricity to electromagnetism, we have a second potential, the vector potential A, with
E = −∇V − ∂A/∂t,  B = ∇ × A.
The general gauge transformations now become not just V → V + C but
V → V − ∂f/∂t,  A → A + ∇f,
where f is any twice continuously differentiable function that depends on position and time. The electromagnetic fields remain the same under the gauge transformation.
Example: scalar O(n) gauge theory
The remainder of this section requires some familiarity with classical or quantum field theory, and the use of Lagrangians. Definitions in this section: gauge group, gauge field, interaction Lagrangian, gauge boson. The following illustrates how local gauge invariance can be "motivated" heuristically starting from global symmetry properties, and how it leads to an interaction between originally non-interacting fields.
Consider a set of n non-interacting real scalar fields, with equal masses m. This system is described by an action that is the sum of the (usual) action for each scalar field φ_i. The Lagrangian (density) can be compactly written as
𝓛 = ½ ∂_μΦ^T ∂^μΦ − ½ m² Φ^TΦ
by introducing a vector of fields Φ = (φ_1, φ_2, ..., φ_n)^T. The term ∂_μΦ is the partial derivative of Φ along dimension μ.
It is now transparent that the Lagrangian is invariant under the transformation
Φ → GΦ
whenever G is a constant matrix belonging to the n-by-n orthogonal group O(n). This is seen to preserve the Lagrangian, since the derivative of Φ transforms identically to Φ and both quantities appear inside dot products in the Lagrangian (orthogonal transformations preserve the dot product). This characterizes the global symmetry of this particular Lagrangian, and the symmetry group is often called the gauge group; the mathematical term is structure group, especially in the theory of G-structures. Incidentally, Noether's theorem implies that invariance under this group of transformations leads to the conservation of the currents
J^a_μ = ∂_μΦ^T T^a Φ (up to normalization),
where the T^a matrices are generators of the SO(n) group. There is one conserved current for every generator.
Now, demanding that this Lagrangian should have local O(n)-invariance requires that the G matrices (which were earlier constant) should be allowed to become functions of the spacetime coordinates x. In this case, the G matrices do not "pass through" the derivatives: when G = G(x),
∂_μ(GΦ) = G ∂_μΦ + (∂_μG)Φ.
The failure of the derivative to commute with G introduces an additional term (in keeping with the product rule), which spoils the invariance of the Lagrangian. In order to rectify this we define a new derivative operator such that the derivative of Φ again transforms identically with Φ,
(D_μΦ)′ = G D_μΦ.
This new "derivative" is called a (gauge) covariant derivative and takes the form
D_μ = ∂_μ + g A_μ(x),
where g is called the coupling constant; a quantity defining the strength of an interaction. After a simple calculation we can see that the gauge field A(x) must transform as follows:
A′_μ = G A_μ G⁻¹ − (1/g)(∂_μG)G⁻¹.
The gauge field is an element of the Lie algebra, and can therefore be expanded as
A_μ(x) = Σ_a A^a_μ(x) T^a.
There are therefore as many gauge fields as there are generators of the Lie algebra. Finally, we now have a locally gauge invariant Lagrangian
𝓛_loc = ½ (D_μΦ)^T D^μΦ − ½ m² Φ^TΦ.
Pauli uses the term gauge transformation of the first type to mean the transformation of Φ, while the compensating transformation in A is called a gauge transformation of the second type. The difference between this Lagrangian and the original globally gauge-invariant Lagrangian is seen to be the interaction Lagrangian
𝓛_int = g ∂_μΦ^T A^μΦ + (g²/2) (A_μΦ)^T A^μΦ.
This term introduces interactions between the n scalar fields just as a consequence of the demand for local gauge invariance. However, to make this interaction physical and not completely arbitrary, the mediator A(x) needs to propagate in space. That is dealt with in the next section by adding yet another term, 𝓛_gf, to the Lagrangian. In the quantized version of the obtained classical field theory, the quanta of the gauge field A(x) are called gauge bosons. The interpretation of the interaction Lagrangian in quantum field theory is of scalar bosons interacting by the exchange of these gauge bosons.
Yang–Mills Lagrangian for the gauge field
The picture of a classical gauge theory developed in the previous section is almost complete, except for the fact that to define the covariant derivatives D, one needs to know the value of the gauge field at all spacetime points. Instead of manually specifying the values of this field, it can be given as the solution to a field equation. Further requiring that the Lagrangian that generates this field equation is locally gauge invariant as well, one possible form for the gauge field Lagrangian is given below, where the field strengths F^a_μν are obtained from the potentials A^a_μ, being the components of A(x), and the f^abc are the structure constants of the Lie algebra of the generators of the gauge group.
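A sketch of those formulas in one common set of conventions (normalizations and sign conventions vary between authors; this choice is consistent with the covariant derivative D_μ = ∂_μ + gA_μ used above):

```latex
\mathcal{L}_{\mathrm{gf}}
  = -\tfrac{1}{2}\,\operatorname{Tr}\!\left(F_{\mu\nu}F^{\mu\nu}\right)
  = -\tfrac{1}{4}\,F^{a}_{\mu\nu}F^{a\,\mu\nu},
\qquad
F^{a}_{\mu\nu}
  = \partial_{\mu}A^{a}_{\nu}
  - \partial_{\nu}A^{a}_{\mu}
  + g\,f^{abc}A^{b}_{\mu}A^{c}_{\nu}.
```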
This formulation of the Lagrangian is called a Yang–Mills action. Other gauge invariant actions also exist (e.g., nonlinear electrodynamics, Born–Infeld action, Chern–Simons model, theta term, etc.). In this Lagrangian term there is no field whose transformation counterweighs that of A. Invariance of this term under gauge transformations is a particular case of a priori classical (geometrical) symmetry. This symmetry must be restricted in order to perform quantization, the procedure being called gauge fixing, but even after restriction, gauge transformations may be possible. The complete Lagrangian for the gauge theory is now
𝓛 = 𝓛_loc + 𝓛_gf.
Example: electrodynamics
As a simple application of the formalism developed in the previous sections, consider the case of electrodynamics, with only the electron field. The bare-bones action that generates the electron field's Dirac equation is (in natural units)
S = ∫ ψ̄(iγ^μ ∂_μ − m)ψ d⁴x.
The global symmetry for this system is
ψ → e^iθ ψ.
The gauge group here is U(1), just rotations of the phase angle of the field, with the particular rotation determined by the constant θ. "Localising" this symmetry implies the replacement of θ by θ(x). An appropriate covariant derivative is then (in one common sign convention)
D_μ = ∂_μ − i e A_μ.
Identifying the "charge" e (not to be confused with the mathematical constant e in the symmetry description) with the usual electric charge (this is the origin of the usage of the term in gauge theories), and the gauge field A_μ(x) with the four-vector potential of the electromagnetic field results in an interaction Lagrangian
𝓛_int = e ψ̄ γ^μ ψ A_μ = e j^μ A_μ,
where j^μ = ψ̄ γ^μ ψ is the electric current four vector in the Dirac field. The gauge principle is therefore seen to naturally introduce the so-called minimal coupling of the electromagnetic field to the electron field. Adding a Lagrangian for the gauge field A_μ(x) in terms of the field strength tensor, exactly as in electrodynamics, one obtains the Lagrangian used as the starting point in quantum electrodynamics:
𝓛_QED = ψ̄(iγ^μ D_μ − m)ψ − ¼ F_μν F^μν.
Mathematical formalism
Gauge theories are usually discussed in the language of differential geometry. Mathematically, a gauge is just a choice of a (local) section of some principal bundle. A gauge transformation is just a transformation between two such sections. Although gauge theory is dominated by the study of connections (primarily because it's mainly studied by high-energy physicists), the idea of a connection is not central to gauge theory in general. In fact, a result in general gauge theory shows that affine representations (i.e., affine modules) of the gauge transformations can be classified as sections of a jet bundle satisfying certain properties. There are representations that transform covariantly pointwise (called by physicists gauge transformations of the first kind), representations that transform as a connection form (called by physicists gauge transformations of the second kind, an affine representation)—and other more general representations, such as the B field in BF theory. There are more general nonlinear representations (realizations), but these are extremely complicated. Still, nonlinear sigma models transform nonlinearly, so there are applications. If there is a principal bundle P whose base space is space or spacetime and structure group is a Lie group, then the sections of P form a principal homogeneous space of the group of gauge transformations. Connections (gauge connection) define this principal bundle, yielding a covariant derivative ∇ in each associated vector bundle.
If a local frame is chosen (a local basis of sections), then this covariant derivative is represented by the connection form A, a Lie algebra-valued 1-form, which is called the gauge potential in physics. This is evidently not an intrinsic but a frame-dependent quantity. The curvature form F, a Lie algebra-valued 2-form that is an intrinsic quantity, is constructed from a connection form by
F = dA + A ∧ A,
where d stands for the exterior derivative and ∧ stands for the wedge product. (A is an element of the vector space spanned by the generators T^a, and so the components of A do not commute with one another. Hence the wedge product A ∧ A does not vanish.)
Infinitesimal gauge transformations form a Lie algebra, which is characterized by a smooth Lie-algebra-valued scalar, ε. Under such an infinitesimal gauge transformation,
δ_ε A = [ε, A] − dε,
where [·, ·] is the Lie bracket. One nice thing is that δ_ε A = −Dε, where D is the covariant derivative
Dε = dε + [A, ε].
Also, δ_ε F = [ε, F], which means F transforms covariantly.
Not all gauge transformations can be generated by infinitesimal gauge transformations in general. An example is when the base manifold is a compact manifold without boundary such that the homotopy class of mappings from that manifold to the Lie group is nontrivial. See instanton for an example.
The Yang–Mills action is now given by (up to normalization)
S = (1/2g²) ∫ Tr(*F ∧ F),
where * is the Hodge star operator and the integral is defined as in differential geometry.
A quantity which is gauge-invariant (i.e., invariant under gauge transformations) is the Wilson loop, which is defined over any closed path, γ, as follows:
W[γ] = χ(ρ(P exp ∮_γ A)),
where χ is the character of a complex representation ρ and P represents the path-ordered operator.
The formalism of gauge theory carries over to a general setting. For example, it is sufficient to ask that a vector bundle have a metric connection; when one does so, one finds that the metric connection satisfies the Yang–Mills equations of motion.
Quantization of gauge theories
Gauge theories may be quantized by specialization of methods which are applicable to any quantum field theory. However, because of the subtleties imposed by the gauge constraints (see section on Mathematical formalism, above) there are many technical problems to be solved which do not arise in other field theories. At the same time, the richer structure of gauge theories allows simplification of some computations: for example Ward identities connect different renormalization constants.
Methods and aims
The first gauge theory quantized was quantum electrodynamics (QED). The first methods developed for this involved gauge fixing and then applying canonical quantization. The Gupta–Bleuler method was also developed to handle this problem. Non-abelian gauge theories are now handled by a variety of means. Methods for quantization are covered in the article on quantization.
The main point of quantization is to be able to compute quantum amplitudes for various processes allowed by the theory. Technically, they reduce to the computations of certain correlation functions in the vacuum state. This involves a renormalization of the theory. When the running coupling of the theory is small enough, then all required quantities may be computed in perturbation theory. Quantization schemes intended to simplify such computations (such as canonical quantization) may be called perturbative quantization schemes. At present some of these methods lead to the most precise experimental tests of gauge theories. However, in most gauge theories, there are many interesting questions which are non-perturbative.
Quantization schemes suited to these problems (such as lattice gauge theory) may be called non-perturbative quantization schemes. Precise computations in such schemes often require supercomputing, and are therefore less well-developed currently than other schemes.
Anomalies
Some of the symmetries of the classical theory are then seen not to hold in the quantum theory; a phenomenon called an anomaly. Among the most well known are:
The scale anomaly, which gives rise to a running coupling constant. In QED this gives rise to the phenomenon of the Landau pole. In quantum chromodynamics (QCD) this leads to asymptotic freedom.
The chiral anomaly in either chiral or vector field theories with fermions. This has close connection with topology through the notion of instantons. In QCD this anomaly causes the decay of a pion to two photons.
The gauge anomaly, which must cancel in any consistent physical theory. In the electroweak theory this cancellation requires an equal number of quarks and leptons.
Pure gauge
A pure gauge is the set of field configurations obtained by a gauge transformation on the null-field configuration, i.e., a gauge transform of zero. So it is a particular "gauge orbit" in the field configuration's space. Thus, in the abelian case, where the gauge transformation is A_μ(x) → A_μ(x) + ∂_μf(x), the pure gauge is just the set of field configurations A_μ(x) = ∂_μf(x) for all f(x).
See also
Gauge principle
Aharonov–Bohm effect
Coulomb gauge
Electroweak theory
Gauge covariant derivative
Gauge fixing
Gauge gravitation theory
Gauge group (mathematics)
Kaluza–Klein theory
Lorenz gauge
Quantum chromodynamics
Gluon field
Gluon field strength tensor
Quantum electrodynamics
Electromagnetic four-potential
Electromagnetic tensor
Quantum field theory
Standard Model
Standard Model (mathematical formulation)
Symmetry breaking
Symmetry in physics
Charge (physics)
Symmetry in quantum mechanics
Fock symmetry
Ward identities
Yang–Mills theory
Yang–Mills existence and mass gap
1964 PRL symmetry breaking papers
Gauge theory (mathematics)
References
Bibliography
General readers
Schumm, Bruce (2004) Deep Down Things. Johns Hopkins University Press. Esp. chpt. 8. A serious attempt by a physicist to explain gauge theory and the Standard Model with little formal mathematics.
External links
Yang–Mills equations on DispersiveWiki
Gauge theories on Scholarpedia
Gauge theories Mathematical physics
Gauge theory
[ "Physics", "Mathematics" ]
6,570
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
23,913,353
https://en.wikipedia.org/wiki/Multiphase%20heat%20transfer
A multiphase flow system is one characterized by the simultaneous presence of several phases, the two-phase system being the simplest case. The term ‘two-component’ is sometimes used to describe flows in which the phases consist of different chemical substances. However, since the same mathematics describes two-phase and two-component flows, the two expressions can be treated as synonymous. Analysis of multiphase systems can include consideration of multiphase flow and multiphase heat transfer. Multiphase flow is the only concern when all phases are at the same temperature; when the temperatures of the individual phases differ, interphase heat transfer also occurs. If different phases of the same pure substance are present in a multiphase system, interphase heat transfer will result in a change of phase, which is always accompanied by interphase mass transfer.
Definitions
A multiphase flow system is one characterized by the simultaneous presence of several phases, the two-phase system being the simplest case. The term ‘two-component’ is sometimes used to describe flows in which the phases consist of different chemical substances. For example, steam-water flows are two-phase, while air-water flows are two-component. Some two-component flows (mostly liquid-liquid) technically consist of a single phase but are identified as two-phase flows in which the term “phase” is applied to each of the components. Since the same mathematics describes two-phase and two-component flows, the two expressions can be treated as synonymous.
Multiphase flow versus heat transfer
The analysis of multiphase systems can include consideration of multiphase flow and multiphase heat transfer. When all of the phases in a multiphase system exist at the same temperature, multiphase flow is the only concern. However, when the temperatures of the individual phases are different, interphase heat transfer also occurs.
Phase-change heat transfer
If different phases of the same pure substance are present in a multiphase system, interphase heat transfer will result in a change of phase, which is always accompanied by interphase mass transfer. The combination of heat transfer with mass transfer during phase change makes multiphase systems distinctly more challenging than simpler systems. Based on the phases that are involved in the system, phase change problems can be classified as: (1) solid–liquid phase change (melting and solidification), (2) solid–vapor phase change (sublimation and deposition), and (3) liquid–vapor phase change (boiling/evaporation and condensation). Melting and sublimation are also referred to as fluidification because both liquid and vapor are regarded as fluids.
References
Faghri, A., and Zhang, Y., 2020, Fundamentals of Multiphase Heat Transfer and Flow, Springer Nature Switzerland AG.
Faghri, A., and Zhang, Y., 2006, Transport Phenomena in Multiphase Systems, Elsevier, Burlington, MA.
Lock, G.S.H., 1994, Latent Heat Transfer, Oxford Science Publications, Oxford University, Oxford, UK.
Mechanical engineering Transport phenomena Heat transfer
Multiphase heat transfer
[ "Physics", "Chemistry", "Engineering" ]
640
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Applied and interdisciplinary physics", "Chemical engineering", "Thermodynamics", "Mechanical engineering" ]
23,915,451
https://en.wikipedia.org/wiki/BS%20EN%2013121-3
BS EN 13121-3 is a European Standard, adopted in the UK, titled "GRP tanks and vessels for use above ground. Design and workmanship". Design and workmanship is the final part of a four-part series of standards which specifies the necessary requirements for design, fabrication, inspection and testing by manufacturers and specifiers within the area of chemical storage. The standard was prepared under European Commission Mandate 83/139/EEC and the Pressure Equipment Directive (PED). In Europe this has been in force since May 2002 and all organizations in the European Union are obliged to comply with it.
Industry concern
With the publication of BS EN 13121-3, BS 4994:1987 Specification for design and construction of vessels and tanks in reinforced plastics was declared obsolescent in the European Union. However, FRP fabricators and design engineers elsewhere are still using it, as it covers tanks that remain in service; tanks made from GRP are generally accepted to have a long working life.
See also
BS4994
FRP tanks and vessels
References
Storage tanks 13121-3
BS EN 13121-3
[ "Chemistry", "Engineering" ]
217
[ "Chemical equipment", "Storage tanks" ]
19,753,612
https://en.wikipedia.org/wiki/Black%20silicon
Black silicon is a semiconductor material, a surface modification of silicon with very low reflectivity and correspondingly high absorption of visible (and infrared) light. The modification was discovered in the 1980s as an unwanted side effect of reactive ion etching (RIE). Other methods for forming a similar structure include electrochemical etching, stain etching, metal-assisted chemical etching, and laser treatment. Black silicon has become a major asset to the solar photovoltaic industry as it enables greater light-to-electricity conversion efficiency in standard crystalline silicon solar cells, which significantly reduces their costs.
Properties
Black silicon is a needle-shaped surface structure where the needles are made of single-crystal silicon and have a height above 10 μm and a diameter less than 1 μm. Its main feature is an increased absorption of incident light—the high reflectivity of the silicon, which is usually 20–30% for quasi-normal incidence, is reduced to about 5%. This is due to the formation of a so-called effective medium by the needles. Within this medium, there is no sharp interface, but a continuous change of the refractive index that reduces Fresnel reflection. When the depth of the graded layer is roughly equal to the wavelength of light in silicon (about one-quarter the wavelength in vacuum) the reflection is reduced to 5%; deeper grades produce even blacker silicon. For low reflectivity, the nanoscale features producing the index-graded layer must be smaller than the wavelength of the incident light to avoid scattering.
Applications
The unusual optical characteristics, combined with the semiconducting properties of silicon, make this material interesting for sensor applications. Potential applications include:
Image sensors with increased sensitivity
Thermal imaging cameras
Photodetectors with high efficiency through increased absorption
Mechanical contacts and interfaces
Terahertz applications
Solar cells
Antibacterial surfaces that work by physically rupturing bacteria's cellular membranes
Surface-enhanced Raman spectroscopy
Semiconductor chemical or gas sensors
Gettering of metallic impurities
Production
Reactive-ion etching
In semiconductor technology, reactive-ion etching (RIE) is a standard procedure for producing trenches and holes with a depth of up to several hundred micrometres and very high aspect ratios. In Bosch-process RIE, this is achieved by repeatedly switching between etching and passivation steps. With cryogenic RIE, the low temperature and oxygen gas achieve this sidewall passivation by forming a passivation layer, which is easily removed from the bottom of the trench by directional ions. Both RIE methods can produce black silicon, but the morphology of the resulting structure differs substantially. The switching between etching and passivation in the Bosch process creates undulated sidewalls, which are visible also on the black silicon formed this way. During etching, however, small debris remain on the substrate; they mask the ion beam, so the silicon beneath them is not removed, and the following etching and passivation steps turn these masked spots into tall silicon pillars. The process can be set so that a million needles are formed on an area of one square millimeter.
Mazur's method
In 1999, a Harvard University group led by Eric Mazur developed a process in which black silicon was produced by irradiating silicon with femtosecond laser pulses.
After irradiation in the presence of a gas containing sulfur hexafluoride and other dopants, the surface of silicon develops a self-organized microscopic structure of micrometer-sized cones. The resulting material has many remarkable properties, such as absorption that extends to the infrared range, below the band gap of silicon, including wavelengths for which ordinary silicon is transparent. Sulfur atoms are forced into the silicon surface, creating a structure with a lower band gap and therefore the ability to absorb longer wavelengths.
Similar surface modification can be achieved in vacuum using the same type of laser and laser processing conditions. In this case, the individual silicon cones lack sharp tips (see image). The reflectivity of such a micro-structured surface is very low, 3–14% in the spectral range 350–1150 nm. This reduction in reflectivity is due in part to the cone geometry, which increases internal reflections of light between the cones and hence raises the probability of absorption. The gain in absorption achieved by femtosecond laser texturization was superior to that achieved by using an alkaline chemical etch method, which is a standard industrial approach for surface texturing of mono-crystalline silicon wafers in solar cell manufacturing. Such surface modification is independent of local crystalline orientation. A uniform texturing effect can be achieved across the surface of a multi-crystalline silicon wafer. The very steep angles lower the reflection to near zero, but they also increase the probability of recombination, which has kept this texture from use in solar cells.
Nanopores
When a mix of copper nitrate, phosphorous acid, hydrogen fluoride and water is applied to a silicon wafer, the phosphorous acid reduces the copper ions to copper nanoparticles. The nanoparticles attract electrons from the wafer's surface, oxidizing it and allowing the hydrogen fluoride to burn inverted pyramid-shaped nanopores into the silicon. The process produced pores as small as 590 nm that let through more than 99% of light.
Chemical etching
Black silicon can also be produced by chemical etching using a process called metal-assisted chemical etching (MACE).
Function
When the material is biased by a small electric voltage, absorbed photons are able to excite dozens of electrons. The sensitivity of black silicon detectors is 100–500 times higher than that of untreated silicon (conventional silicon), in both the visible and infrared spectra.
A group at the National Renewable Energy Laboratory reported black silicon solar cells with 18.2% efficiency. This black silicon anti-reflective surface was formed by a metal-assisted etch process using nanoparticles of silver. In May 2015, researchers from Finland's Aalto University, working with researchers from Universitat Politècnica de Catalunya, announced they had created black silicon solar cells with 22.1% efficiency by applying a thin passivating film on the nanostructures by atomic layer deposition, and by integrating all metal contacts on the back side of the cell.
A team led by Elena Ivanova at Swinburne University of Technology in Melbourne discovered in 2012 that cicada wings were potent killers of Pseudomonas aeruginosa, an opportunist germ that also infects humans and is becoming resistant to antibiotics. The effect came from regularly-spaced "nanopillars" on which bacteria were sliced to shreds as they settled on the surface. Both cicada wings and black silicon were put through their paces in a lab, and both were bactericidal.
Smooth to human touch, the surfaces destroyed Gram-negative and Gram-positive bacteria, as well as bacterial spores. The three targeted bacterial species were P. aeruginosa, Staphylococcus aureus and Bacillus subtilis, a wide-ranging soil germ that is a cousin of anthrax. The killing rate was 450,000 bacteria per square centimetre per minute over the first three hours of exposure, or 810 times the minimum dose needed to infect a person with S. aureus, and 77,400 times that of P. aeruginosa. However, it was later shown that the quantification protocol of Ivanova's team was not suitable for these kinds of antibacterial surfaces.
A group led by Gagik Ayvazyan at National Polytechnic University of Armenia in Yerevan demonstrated in 2023 that needle-like nanotextures provide a feasible light-management solution for perovskite/silicon tandem solar cells. The black silicon interlayer enhances light absorption within the bottom silicon solar sub-cell.
See also
Quantum efficiency of a solar cell
Solasys
References
University of Wisconsin-Madison. "'Stealth' material hides hot objects from infrared eyes." ScienceDaily. www.sciencedaily.com/releases/2018/06/180622174752.htm (accessed 23 June 2018).
External links
SiOnyx brings "Black Silicon" into the light
New York Times article (needs NYT subscription)
SiOnyx homepage
Lasers for Photovoltaics – Knowledge Base
Lasers Improve PV Efficiency
Lasers, Plasmas et Procédés Photoniques – Recherche – Structuration du silicium : Application au Photovoltaïque (in French)
Allotropes of silicon Silicon, black Silicon solar cells Infrared solar cells Thin-film cells
Black silicon
[ "Chemistry", "Materials_science", "Mathematics" ]
1,752
[ "Allotropes", "Thin-film cells", "Semiconductor materials", "Group IV semiconductors", "Allotropes of silicon", "Planes (geometry)", "Thin films" ]
19,755,266
https://en.wikipedia.org/wiki/Pharos%20network%20coordinates
Pharos is a hierarchical and decentralized network coordinate system. With the help of a simple two-level architecture, it achieves much better prediction accuracy than the representative Vivaldi coordinates, and it is incrementally deployable.
Overview
Network coordinate (NC) systems are an efficient mechanism for Internet latency prediction with scalable measurements. Vivaldi is the most common distributed NC system, and it is deployed in many well-known Internet systems, such as Bamboo DHT (distributed hash table), Stream-Based Overlay Network (SBON) and Azureus BitTorrent.
Pharos is a fully decentralized NC system. All nodes in Pharos form two levels of overlays, namely a base overlay for long link prediction, and a local cluster overlay for short link prediction. The Vivaldi algorithm is applied to both the base overlay and the local cluster. As a result, each Pharos node has two sets of coordinates. The coordinates calculated in the base overlay, named the global NC, are used for the global scale, while the coordinates calculated in the corresponding local cluster, named the local NC, cover a smaller range of distance. To form the local cluster, Pharos uses a method similar to binning and chooses some nodes, called anchors, to help node clustering. This method only requires a one-time measurement (with possible periodic refreshes) by the client to a small, fixed set of anchors. Any stable node which is able to respond to ICMP ping messages can serve as an anchor, such as existing DNS servers. The experimental results show that Pharos greatly outperforms Vivaldi in Internet distance prediction without adding any significant overhead.
Insights behind Pharos
Simple and effective: Pharos obtains a significant improvement in prediction accuracy by introducing a straightforward hierarchical distance prediction.
Fully compatible with Vivaldi, the most widely deployed NC system: every host where a Vivaldi client has been deployed just needs to run the classic Vivaldi NC algorithm to join the global overlay and a local cluster, without deploying another NC client.
The anchors in Pharos are different from the landmarks in Global Network Positioning (GNP), which must not only reply to ICMP pings but also answer queries from all clients by sending their latest NCs. There is no requirement to deploy any extra software on the anchors.
See also
Peer-to-peer
Global network positioning
Phoenix network coordinates
External links
Simulator of Pharos Network Coordinates
Network Coordinates Research at Tsinghua University
References
Computer networking
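Pharos applies the classic Vivaldi update in both of its overlays. For illustration, here is a minimal sketch of Vivaldi's spring-relaxation rule (a deliberate simplification, not the Pharos implementation: 2-D Euclidean coordinates, a fixed timestep, and none of Vivaldi's adaptive per-node error weighting or height vectors):

```python
import numpy as np

def vivaldi_update(xi, xj, rtt, delta=0.25):
    """Move node i's coordinate xi toward/away from neighbor j's
    coordinate xj so that predicted distance better matches the
    measured round-trip time (spring relaxation)."""
    d = np.linalg.norm(xi - xj)            # predicted latency
    error = rtt - d                        # positive: prediction too short
    if d > 0:
        direction = (xi - xj) / d          # unit vector pointing away from j
    else:
        direction = np.random.randn(2)     # random kick if coordinates coincide
        direction /= np.linalg.norm(direction)
    return xi + delta * error * direction

# Example: node at (0, 0) measures 50 ms to a neighbor at (10, 0)
print(vivaldi_update(np.zeros(2), np.array([10.0, 0.0]), rtt=50.0))
```

In Pharos, each node would run two instances of such an update: one against neighbors in the base overlay (producing its global NC) and one against neighbors in its local cluster (producing its local NC).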
Pharos network coordinates
[ "Technology", "Engineering" ]
523
[ "Computer networking", "Computer science", "Computer engineering" ]
19,757,640
https://en.wikipedia.org/wiki/Power%20processing%20unit
A power processing unit (PPU) is a circuit device that converts an electricity input from a utility line into the appropriate voltage and current to be used by the device in question. PPUs serve the same purpose as linear amplifiers, but they are much more efficient, since linear amplifiers lose much of the power as heat in the resistor used to change the voltage and current. Instead of using a resistor, PPUs use switches to turn a signal on and off quite rapidly in order to change the average current and voltage. In this way, they could be conflated with DC-AC converters, but the frequency at which they switch the signal on and off is a few orders of magnitude higher than that of AC signals. They are used to convert the current and voltage of both direct current (DC) and alternating current (AC) signals.
Spacecraft
In the context of spacecraft, the power processing unit (PPU) is a module containing the electrical subsystem responsible for providing electrical power to other parts of the spacecraft. The PPU needs to be able to cope with varying demands for power output and provide that power in the most efficient manner possible. There are two main constraints placed on PPUs:
Power generation from, for instance, a solar array or radioisotope thermoelectric generator, where power generation can vary based on external conditions.
Power utilization, which varies depending on the current internal activities performed by the spacecraft, such as burst radio transmission, or external demand factors like outside temperature and the need to provide heat to maintain a constant internal temperature.
Major considerations in building PPUs are weight, size and efficiency. Most PPUs process and supply direct current because that is what is generated by a solar array. The PPU is also responsible for voltage conversion and supplying the required voltage to other subsystems of the spacecraft.
References
Spacecraft components Electrical engineering
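To illustrate the duty-cycle averaging described above, here is a small sketch (an idealized model, not any particular PPU design: a lossless switch, no output filter, and illustrative numbers only). The average output voltage tracks the fraction of each switching period during which the switch is closed, with no series resistor dissipating the difference:

```python
import numpy as np

def pwm_average(v_in, duty, f_switch=100e3, t_total=1e-3, n=100_000):
    """Average output of an ideal switch driven at duty cycle `duty`,
    showing why rapid on/off switching sets the mean voltage without
    burning power in a resistor."""
    t = np.linspace(0, t_total, n, endpoint=False)
    phase = (t * f_switch) % 1.0                 # position within each period
    v_out = np.where(phase < duty, v_in, 0.0)    # switch closed / open
    return v_out.mean()

# e.g. a 28 V bus chopped at 60% duty cycle -> roughly 16.8 V average
print(pwm_average(28.0, 0.6))
```

In a real converter an inductor-capacitor filter smooths the chopped waveform into a steady output, but the mean follows the duty cycle in the same way.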
Power processing unit
[ "Engineering" ]
378
[ "Electrical engineering" ]
19,759,995
https://en.wikipedia.org/wiki/Kinodynamic%20planning
In robotics and motion planning, kinodynamic planning is a class of problems for which velocity, acceleration, and force/torque bounds must be satisfied, together with kinematic constraints such as avoiding obstacles. The term was coined by Bruce Donald, Pat Xavier, John Canny, and John Reif. Donald et al. developed the first polynomial-time approximation schemes (PTAS) for the problem. By providing a provably polynomial-time ε-approximation algorithm, they resolved a long-standing open problem in optimal control. Their first paper considered time-optimal control ("fastest path") of a point mass under Newtonian dynamics, amidst polygonal (2D) or polyhedral (3D) obstacles, subject to state bounds on position, velocity, and acceleration. Later they extended the technique to many other cases, for example, to 3D open-chain kinematic robots under full Lagrangian dynamics. More recently, many practical heuristic algorithms based on stochastic optimization and iterative sampling were developed, by a wide range of authors, to address the kinodynamic planning problem. These techniques for kinodynamic planning have been shown to work well in practice. However, none of these heuristic techniques can guarantee the optimality of the computed solution (i.e., they have no performance guarantees), and none can be mathematically proven to be faster than the original PTAS algorithms (i.e., none have a provably lower computational complexity). References Robot control Automated planning and scheduling Robot kinematics Algorithms
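As a concrete illustration of the constraints involved, the sketch below shows the forward-propagation step that sampling-based kinodynamic planners (such as kinodynamic RRT variants) typically repeat many times. It is a simplified, hypothetical setup—a 2-D point mass under Newtonian dynamics with illustrative bounds—not the algorithm of Donald et al.:

```python
import numpy as np

A_MAX, V_MAX = 1.0, 2.0   # acceleration (force) and velocity bounds

def propagate(state, accel, dt=0.05, steps=10, obstacles=()):
    """Forward-integrate a 2-D point mass (double integrator) under a
    candidate acceleration, rejecting the motion if it violates the
    velocity bound or enters a circular obstacle."""
    pos, vel = state
    if np.linalg.norm(accel) > A_MAX:
        return None                        # control outside the force bound
    for _ in range(steps):
        vel = vel + accel * dt
        if np.linalg.norm(vel) > V_MAX:
            return None                    # kinodynamic (velocity) bound violated
        pos = pos + vel * dt
        for center, radius in obstacles:
            if np.linalg.norm(pos - center) < radius:
                return None                # kinematic (obstacle) constraint violated
    return (pos, vel)

# Sample usage: try one bounded control starting from rest at the origin.
state = (np.zeros(2), np.zeros(2))
print(propagate(state, np.array([0.5, 0.2]),
                obstacles=[(np.array([1.0, 0.0]), 0.3)]))
```

A planner would call such a primitive with many sampled controls, keeping only the motions that satisfy all bounds and growing a tree toward the goal.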
Kinodynamic planning
[ "Mathematics", "Engineering" ]
321
[ "Robotics engineering", "Algorithms", "Mathematical logic", "Applied mathematics", "Robot control", "Robot kinematics" ]
19,760,932
https://en.wikipedia.org/wiki/Sprengel%20pump
The Sprengel pump is a vacuum pump that uses drops of mercury falling through a small-bore capillary tube to trap air from the system to be evacuated. It was invented by Hanover-born chemist Hermann Sprengel in 1865 while he was working in London. The pump created the highest vacuum achievable at that time, less than 1 μPa (approximately 1×10−11 atm).
Operation
The supply of mercury is contained in the reservoir on the left. It flows over into the bulb B, where it forms drops which fall into the long tube on the right. These drops entrap between them the air in B. The mercury which runs out is collected and poured back into the reservoir on the left. In this manner practically all the air can be removed from the bulb B, and hence from any vessel R which may be connected with B. At M is a manometer which indicates the pressure in the vessel R, which is being exhausted. Falling mercury drops compress the air to atmospheric pressure, and the air is released when the stream reaches a container at the bottom of the tube. As the pressure drops, the cushioning effect of trapped air between the droplets diminishes, so a hammering or knocking sound can be heard, accompanied by flashes of light within the evacuated vessel due to electrostatic effects on the mercury. The speed, simplicity and efficiency of the Sprengel pump made it a popular device with experimenters. Sprengel's earliest model could evacuate a half-litre vessel in 20 minutes.
Applications
William Crookes used the pumps in series in his studies of electric discharges. William Ramsay used them to isolate the noble gases, and Joseph Swan and Thomas Edison used them to evacuate their new carbon filament lamps. The Sprengel pump was the key tool which made it possible in 1879 to sufficiently exhaust the air from a light bulb so that a carbon filament incandescent electric light bulb lasted long enough to be commercially practical. Sprengel himself moved on to investigating explosives and was eventually elected a Fellow of the Royal Society.
Notes
References
Further reading
Thompson, Silvanus Phillips, The Development of the Mercurial Air-pump (London, England: E. & F. N. Spon, 1888), pages 14–15.
Vacuum pumps
Sprengel pump
[ "Physics", "Engineering" ]
465
[ "Vacuum pumps", "Vacuum systems", "Vacuum", "Matter" ]
19,761,886
https://en.wikipedia.org/wiki/Toepler%20pump
A Toepler pump is a form of mercury piston pump, invented by August Toepler in 1850.
Operation
The principle is illustrated in the diagram. When reservoir G is lowered, bulb B and tube T are filled with gas from the enclosure being evacuated (through tube A). When G is raised, mercury rises in tube F and cuts off the gas in B and T at C. This gas is then forced through the mercury in tube D into the atmosphere. The end of tube D is bent upward at E to facilitate collection of gas (or vapor). By alternately raising and lowering G, a pumping action results. Clearly tubes F and D must be long enough to support mercury columns corresponding to atmospheric pressure (76 cm at sea level). Instead of using mercury to provide a valving action at C, it is possible to use a glass float valve.
References
Further reading
Andrew Guthrie (1963). Vacuum Technology. Wiley, New York and London.
R. W. Cahn (2001). The Coming of Materials Science. Pergamon. p. 405.
German inventions Vacuum pumps
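A quick back-of-the-envelope check of the 76 cm figure, using standard constants (nothing pump-specific): the mercury column height that balances atmospheric pressure is h = P/(ρg).

```python
# Height of a mercury column balancing standard atmospheric pressure,
# confirming why tubes F and D must be at least ~76 cm long at sea level.
P_ATM = 101_325.0    # Pa, standard atmosphere
RHO_HG = 13_595.1    # kg/m^3, density of mercury at 0 degC
G = 9.80665          # m/s^2, standard gravity

height = P_ATM / (RHO_HG * G)
print(f"{height * 100:.1f} cm")   # ~76.0 cm
```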
Toepler pump
[ "Physics", "Engineering" ]
225
[ "Vacuum pumps", "Vacuum systems", "Vacuum", "Matter" ]
19,762,116
https://en.wikipedia.org/wiki/Fourier-transform%20infrared%20spectroscopy
Fourier transform infrared spectroscopy (FTIR) is a technique used to obtain an infrared spectrum of absorption or emission of a solid, liquid, or gas. An FTIR spectrometer simultaneously collects high-resolution spectral data over a wide spectral range. This confers a significant advantage over a dispersive spectrometer, which measures intensity over a narrow range of wavelengths at a time. The term Fourier transform infrared spectroscopy originates from the fact that a Fourier transform (a mathematical process) is required to convert the raw data into the actual spectrum. Conceptual introduction The goal of absorption spectroscopy techniques (FTIR, ultraviolet-visible ("UV-vis") spectroscopy, etc.) is to measure how much light a sample absorbs at each wavelength. The most straightforward way to do this, the "dispersive spectroscopy" technique, is to shine a monochromatic light beam at a sample, measure how much of the light is absorbed, and repeat for each different wavelength. (This is how some UV–vis spectrometers work, for example.) Fourier transform spectroscopy is a less intuitive way to obtain the same information. Rather than shining a monochromatic beam of light (a beam composed of only a single wavelength) at the sample, this technique shines a beam containing many frequencies of light at once and measures how much of that beam is absorbed by the sample. Next, the beam is modified to contain a different combination of frequencies, giving a second data point. This process is rapidly repeated many times over a short time span. Afterwards, a computer takes all this data and works backward to infer what the absorption is at each wavelength. The beam described above is generated by starting with a broadband light source—one containing the full spectrum of wavelengths to be measured. The light shines into a Michelson interferometer—a certain configuration of mirrors, one of which is moved by a motor. As this mirror moves, each wavelength of light in the beam is periodically blocked, transmitted, blocked, transmitted, by the interferometer, due to wave interference. Different wavelengths are modulated at different rates, so that at each moment or mirror position the beam coming out of the interferometer has a different spectrum. As mentioned, computer processing is required to turn the raw data (light absorption for each mirror position) into the desired result (light absorption for each wavelength). The processing required turns out to be a common algorithm called the Fourier transform. The Fourier transform converts one domain (in this case displacement of the mirror in cm) into its inverse domain (wavenumbers in cm−1). The raw data is called an "interferogram". History The first low-cost spectrophotometer capable of recording an infrared spectrum was the Perkin-Elmer Infracord produced in 1957. This instrument covered the wavelength range from 2.5 μm to 15 μm (wavenumber range 4,000 cm−1 to 660 cm−1). The lower wavelength limit was chosen to encompass the highest known vibration frequency due to a fundamental molecular vibration. The upper limit was imposed by the fact that the dispersing element was a prism made from a single crystal of rock-salt (sodium chloride), which becomes opaque at wavelengths longer than about 15 μm; this spectral region became known as the rock-salt region. Later instruments used potassium bromide prisms to extend the range to 25 μm (400 cm−1) and caesium iodide 50 μm (200 cm−1). 
The region beyond 50 μm (200 cm−1) became known as the far-infrared region; at very long wavelengths it merges into the microwave region. Measurements in the far infrared needed the development of accurately ruled diffraction gratings to replace the prisms as dispersing elements, since salt crystals are opaque in this region. More sensitive detectors than the bolometer were required because of the low energy of the radiation. One such was the Golay detector. An additional issue is the need to exclude atmospheric water vapour, because water vapour has an intense pure rotational spectrum in this region. Far-infrared spectrophotometers were cumbersome, slow and expensive. The advantages of the Michelson interferometer were well-known, but considerable technical difficulties had to be overcome before a commercial instrument could be built. Also an electronic computer was needed to perform the required Fourier transform, and this only became practicable with the advent of minicomputers, such as the PDP-8, which became available in 1965. Digilab pioneered the world's first commercial FTIR spectrometer (Model FTS-14) in 1969. Digilab FTIRs are now a part of Agilent Technologies's molecular product line after Agilent acquired the spectroscopy business from Varian.
Michelson interferometer
In a Michelson interferometer adapted for FTIR, light from the polychromatic infrared source, approximately a black-body radiator, is collimated and directed to a beam splitter. Ideally 50% of the light is reflected towards the fixed mirror and 50% is transmitted towards the moving mirror. Light is reflected from the two mirrors back to the beam splitter and some fraction of the original light passes into the sample compartment. There, the light is focused on the sample. On leaving the sample compartment the light is refocused on to the detector. The difference in optical path length between the two arms of the interferometer is known as the retardation or optical path difference (OPD). An interferogram is obtained by varying the retardation and recording the signal from the detector for various values of the retardation. The form of the interferogram when no sample is present depends on factors such as the variation of source intensity and splitter efficiency with wavelength. This results in a maximum at zero retardation, when there is constructive interference at all wavelengths, followed by a series of "wiggles". The position of zero retardation is determined accurately by finding the point of maximum intensity in the interferogram. When a sample is present the background interferogram is modulated by the presence of absorption bands in the sample. Commercial spectrometers use Michelson interferometers with a variety of scanning mechanisms to generate the path difference. Common to all these arrangements is the need to ensure that the two beams recombine exactly as the system scans. The simplest systems have a plane mirror that moves linearly to vary the path of one beam. In this arrangement the moving mirror must not tilt or wobble as this would affect how the beams overlap as they recombine. Some systems incorporate a compensating mechanism that automatically adjusts the orientation of one mirror to maintain the alignment. Arrangements that avoid this problem include using cube corner reflectors instead of plane mirrors as these have the property of returning any incident beam in a parallel direction regardless of orientation.
Systems where the path difference is generated by a rotary movement have proved very successful. One common system incorporates a pair of parallel mirrors in one beam that can be rotated to vary the path without displacing the returning beam. Another is the double pendulum design where the path in one arm of the interferometer increases as the path in the other decreases. A quite different approach involves moving a wedge of an IR-transparent material such as KBr into one of the beams. Increasing the thickness of KBr in the beam increases the optical path because the refractive index is higher than that of air. One limitation of this approach is that the variation of refractive index over the wavelength range limits the accuracy of the wavelength calibration.
Measuring and processing the interferogram
The interferogram has to be measured from zero path difference to a maximum length that depends on the resolution required. In practice the scan can be on either side of zero resulting in a double-sided interferogram. Mechanical design limitations may mean that for the highest resolution the scan runs to the maximum OPD on one side of zero only.
The interferogram is converted to a spectrum by Fourier transformation. This requires it to be stored in digital form as a series of values at equal intervals of the path difference between the two beams. To measure the path difference a laser beam is sent through the interferometer, generating a sinusoidal signal where the separation between successive maxima is equal to the wavelength of the laser (typically a 633 nm HeNe laser is used). This can trigger an analog-to-digital converter to measure the IR signal each time the laser signal passes through zero. Alternatively, the laser and IR signals can be measured synchronously at smaller intervals, with the IR signal at points corresponding to the laser signal zero crossings being determined by interpolation. This approach allows the use of analog-to-digital converters that are more accurate and precise than converters that can be triggered, resulting in lower noise.
The result of Fourier transformation is a spectrum of the signal at a series of discrete wavelengths. The range of wavelengths that can be used in the calculation is limited by the separation of the data points in the interferogram. The shortest wavelength that can be recognized is twice the separation between these data points. For example, with one point per wavelength of a HeNe reference laser at 0.633 μm, the shortest wavelength that could be recognized would be 1.266 μm. Because of aliasing, any energy at shorter wavelengths would be interpreted as coming from longer wavelengths and so has to be minimized optically or electronically. The spectral resolution, i.e. the separation between wavelengths that can be distinguished, is determined by the maximum OPD. The wavelengths used in calculating the Fourier transform are such that an exact number of wavelengths fit into the length of the interferogram from zero to the maximum OPD, as this makes their contributions orthogonal. This results in a spectrum with points separated by equal frequency intervals. For a maximum path difference d, adjacent wavelengths λ1 and λ2 will have n and (n + 1) cycles, respectively, in the interferogram. The corresponding frequencies are ν1 and ν2:
d = nλ1 and d = (n + 1)λ2
λ1 = d/n and λ2 = d/(n + 1)
ν1 = 1/λ1 and ν2 = 1/λ2
ν1 = n/d and ν2 = (n + 1)/d
ν2 − ν1 = 1/d
The separation is the inverse of the maximum OPD.
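A minimal numpy sketch of this relationship (an idealized simulation I am assuming for illustration—noise-free, line positions exactly on the frequency grid, no apodization or phase correction): two spectral lines 0.5 cm−1 apart are just separated when the maximum OPD is 2 cm.

```python
import numpy as np

# Simulate an idealized two-line source and recover its spectrum,
# illustrating that the point separation equals 1/(maximum OPD).
opd_max = 2.0                     # maximum retardation, cm
n = 4096                          # number of samples (a power of two)
x = np.linspace(0, opd_max, n, endpoint=False)     # OPD grid, cm
lines = [1000.0, 1000.5]          # line positions, cm^-1

# Each spectral line contributes a cosine to the interferogram.
interferogram = sum(np.cos(2 * np.pi * nu * x) for nu in lines)

spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber = np.fft.rfftfreq(n, d=opd_max / n)     # cm^-1 axis

# Adjacent spectrum points are 1/opd_max = 0.5 cm^-1 apart, so the
# two lines land in neighboring, independent points.
peaks = wavenumber[spectrum > spectrum.max() / 2]
print(peaks)                      # [1000.  1000.5]
```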
For example, a maximum OPD of 2 cm results in a separation of 0.5 cm−1. This is the spectral resolution in the sense that the value at one point is independent of the values at adjacent points. Most instruments can be operated at different resolutions by choosing different OPDs. Instruments for routine analyses typically have a best resolution of around 0.5 cm−1, while spectrometers have been built with resolutions as high as 0.001 cm−1, corresponding to a maximum OPD of 10 m. The point in the interferogram corresponding to zero path difference has to be identified, commonly by assuming it is where the maximum signal occurs. This so-called centerburst is not always symmetrical in real-world spectrometers, so a phase correction may have to be calculated. The interferogram signal decays as the path difference increases, the rate of decay being inversely related to the width of features in the spectrum. If the OPD is not large enough to allow the interferogram signal to decay to a negligible level, there will be unwanted oscillations or sidelobes associated with the features in the resulting spectrum. To reduce these sidelobes the interferogram is usually multiplied by a function that approaches zero at the maximum OPD. This so-called apodization reduces the amplitude of any sidelobes and also the noise level, at the expense of some reduction in resolution. For rapid calculation the number of points in the interferogram has to equal a power of two. A string of zeroes may be added to the measured interferogram to achieve this. More zeroes may be added in a process called zero filling to improve the appearance of the final spectrum, although there is no improvement in resolution. Alternatively, interpolation after the Fourier transform gives a similar result.

Advantages

There are three principal advantages for an FT spectrometer compared to a scanning (dispersive) spectrometer.

The multiplex or Fellgett's advantage (named after Peter Fellgett). This arises from the fact that information from all wavelengths is collected simultaneously. It results in a higher signal-to-noise ratio for a given scan-time for observations limited by a fixed detector noise contribution (typically in the thermal infrared spectral region, where a photodetector is limited by generation-recombination noise). For a spectrum with m resolution elements, this increase is equal to the square root of m. Alternatively, it allows a shorter scan-time for a given resolution. In practice multiple scans are often averaged, increasing the signal-to-noise ratio by the square root of the number of scans.

The throughput or Jacquinot's advantage (named after Pierre Jacquinot). This results from the fact that in a dispersive instrument, the monochromator has entrance and exit slits which restrict the amount of light that passes through it. The interferometer throughput is determined only by the diameter of the collimated beam coming from the source. Although no slits are needed, FTIR spectrometers do require an aperture to restrict the convergence of the collimated beam in the interferometer. This is because convergent rays are modulated at different frequencies as the path difference is varied. Such an aperture is called a Jacquinot stop. For a given resolution and wavelength this circular aperture allows more light through than a slit, resulting in a higher signal-to-noise ratio.

The wavelength accuracy or Connes' advantage (named after Janine Connes). The wavelength scale is calibrated by a laser beam of known wavelength that passes through the interferometer.
This is much more stable and accurate than in dispersive instruments, where the scale depends on the mechanical movement of diffraction gratings. In practice, the accuracy is limited by the divergence of the beam in the interferometer, which depends on the resolution. Another minor advantage is less sensitivity to stray light, that is, radiation of one wavelength appearing at another wavelength in the spectrum. In dispersive instruments, this is the result of imperfections in the diffraction gratings and accidental reflections. In FT instruments there is no direct equivalent, as the apparent wavelength is determined by the modulation frequency in the interferometer.

Resolution

The interferogram is measured in the length domain. The Fourier transform (FT) inverts the dimension, so the FT of the interferogram lives in the reciprocal-length domain ([L−1]), that is, the wavenumber domain. The spectral resolution in cm−1 is equal to the reciprocal of the maximal retardation in cm. Thus a 4 cm−1 resolution will be obtained if the maximal retardation is 0.25 cm; this is typical of the cheaper FTIR instruments. Much higher resolution can be obtained by increasing the maximal retardation. This is not easy, as the moving mirror must travel in a near-perfect straight line. The use of corner-cube mirrors in place of flat mirrors is helpful, as an outgoing ray from a corner-cube mirror is parallel to the incoming ray regardless of the orientation of the mirror about axes perpendicular to the axis of the light beam. A spectrometer with 0.001 cm−1 resolution is now available commercially. The throughput advantage is important for high-resolution FTIR, as the monochromator in a dispersive instrument with the same resolution would have very narrow entrance and exit slits. In 1966 Janine Connes measured the temperature of the atmosphere of Venus by recording the vibration-rotation spectrum of Venusian CO2 at 0.1 cm−1 resolution. Michelson himself attempted to resolve the hydrogen Hα emission band in the spectrum of a hydrogen atom into its two components by using his interferometer.

Motivation

FTIR is a method of measuring infrared absorption and emission spectra. For a discussion of why people measure infrared absorption and emission spectra, i.e. why and how substances absorb and emit infrared light, see the article Infrared spectroscopy.

Components

IR sources

FTIR spectrometers are mostly used for measurements in the mid and near IR regions. For the mid-IR region, 2−25 μm (5,000–400 cm−1), the most common source is a silicon carbide (SiC) element heated to about 1,200 °C (Globar). The output is similar to a blackbody. Shorter wavelengths of the near-IR, 1−2.5 μm (10,000–4,000 cm−1), require a higher-temperature source, typically a tungsten-halogen lamp. The long-wavelength output of these is limited to about 5 μm (2,000 cm−1) by the absorption of the quartz envelope. For the far-IR, especially at wavelengths beyond 50 μm (200 cm−1), a mercury discharge lamp gives higher output than a thermal source.

Detectors

Mid-IR spectrometers commonly use pyroelectric detectors that respond to changes in temperature as the intensity of IR radiation falling on them varies. The sensitive elements in these detectors are either deuterated triglycine sulfate (DTGS) or lithium tantalate (LiTaO3). These detectors operate at ambient temperatures and provide adequate sensitivity for most routine applications. To achieve the best sensitivity the time for a scan is typically a few seconds.
Cooled photoelectric detectors are employed for situations requiring higher sensitivity or faster response. Liquid-nitrogen-cooled mercury cadmium telluride (MCT) detectors are the most widely used in the mid-IR. With these detectors an interferogram can be measured in as little as 10 milliseconds. Uncooled indium gallium arsenide photodiodes or DTGS are the usual choices in near-IR systems. Very sensitive liquid-helium-cooled silicon or germanium bolometers are used in the far-IR, where both sources and beam splitters are inefficient.

Beam splitter

An ideal beam splitter transmits and reflects 50% of the incident radiation. However, as any material has a limited range of optical transmittance, several beam splitters may be used interchangeably to cover a wide spectral range. In a simple Michelson interferometer, one beam passes twice through the beam splitter but the other passes through only once. To correct for this, an additional compensator plate of equal thickness is incorporated. For the mid-IR region, the beam splitter is usually made of KBr with a germanium-based coating that makes it semi-reflective. KBr absorbs strongly at wavelengths beyond 25 μm (400 cm−1), so CsI or KRS-5 are sometimes used to extend the range to about 50 μm (200 cm−1). ZnSe is an alternative where water vapour can be a problem, but it is limited to about 20 μm (500 cm−1). CaF2 is the usual material for the near-IR, being both harder and less sensitive to moisture than KBr, but it cannot be used beyond about 8 μm (1,200 cm−1). Far-IR beam splitters are mostly based on polymer films and cover a limited wavelength range.

Attenuated total reflectance

Attenuated total reflectance (ATR) is an accessory for FTIR spectrophotometers that measures the surface properties of solid or thin-film samples rather than their bulk properties. Generally, ATR has a penetration depth of around 1 or 2 micrometres, depending on sample conditions.

Fourier transform

The interferogram in practice consists of a set of intensities measured for discrete values of retardation. The difference between successive retardation values is constant. Thus, a discrete Fourier transform is needed, and the fast Fourier transform (FFT) algorithm is used.

Spectral range

Far-infrared

The first FTIR spectrometers were developed for the far-infrared range. The reason for this has to do with the mechanical tolerance needed for good optical performance, which is related to the wavelength of the light being used. For the relatively long wavelengths of the far infrared, ~10 μm tolerances are adequate, whereas for the rock-salt region tolerances have to be better than 1 μm. A typical instrument was the cube interferometer developed at the NPL and marketed by Grubb Parsons. It used a stepper motor to drive the moving mirror, recording the detector response after each step was completed.

Mid-infrared

With the advent of cheap microcomputers it became possible to have a computer dedicated to controlling the spectrometer, collecting the data, doing the Fourier transform and presenting the spectrum. This provided the impetus for the development of FTIR spectrometers for the rock-salt region. The problems of manufacturing ultra-high-precision optical and mechanical components had to be solved. A wide range of instruments are now available commercially. Although instrument design has become more sophisticated, the basic principles remain the same.
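The processing chain described earlier (sampling, apodization, zero filling, FFT) can be sketched as follows. This is a toy model, not instrument code: the synthetic band position, its width and the sampling grid are assumptions made for illustration.

```python
import numpy as np

dx = 0.633e-4                     # sample spacing, cm (one HeNe wavelength)
n = 8192
delta = np.arange(n) * dx         # one-sided retardation grid
nu0, hwhm = 1500.0, 8.0           # synthetic band position and width, cm^-1

# Decaying cosine: the interferogram of a single Lorentzian band.
igram = np.cos(2 * np.pi * nu0 * delta) * np.exp(-2 * np.pi * hwhm * delta)

igram *= np.hanning(2 * n)[n:]    # apodization: suppresses sidelobes
padded = np.concatenate([igram, np.zeros(n)])  # zero filling: smoother spectrum, no extra resolution

spectrum = np.abs(np.fft.rfft(padded))
wavenumber = np.fft.rfftfreq(len(padded), d=dx)   # axis in cm^-1
print(f"band recovered at {wavenumber[spectrum.argmax()]:.1f} cm^-1")
```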
Nowadays, the moving mirror of the interferometer moves at a constant velocity, and sampling of the interferogram is triggered by finding zero crossings in the fringes of a secondary interferometer lit by a helium–neon laser. In modern FTIR systems the constant mirror velocity is not strictly required, as long as the laser fringes and the original interferogram are recorded simultaneously at a higher sampling rate and then re-interpolated on a constant grid, as pioneered by James W. Brault. This confers very high wavenumber accuracy on the resulting infrared spectrum and avoids wavenumber calibration errors.

Near-infrared

The near-infrared region spans the wavelength range between the rock-salt region and the start of the visible region at about 750 nm. Overtones of fundamental vibrations can be observed in this region. It is used mainly in industrial applications such as process control and chemical imaging.

Applications

FTIR can be used in all applications where a dispersive spectrometer was used in the past (see external links). In addition, the improved sensitivity and speed have opened up new areas of application. Spectra can be measured in situations where very little energy reaches the detector. Fourier-transform infrared spectroscopy is used in geology, chemistry, materials, botany and biology research fields.

Nano and biological materials

FTIR is also used to investigate various nanomaterials and proteins in hydrophobic membrane environments. Studies show the ability of FTIR to directly determine the polarity at a given site along the backbone of a transmembrane protein. FTIR can also be used to characterize the bonding in various organic and inorganic nanomaterials and to analyse them quantitatively.

Microscopy and imaging

An infrared microscope allows samples to be observed and spectra measured from regions as small as 5 microns across. Images can be generated by combining a microscope with linear or 2-D array detectors. The spatial resolution can approach 5 microns with tens of thousands of pixels. The images contain a spectrum for each pixel and can be viewed as maps showing the intensity at any wavelength or combination of wavelengths. This allows the distribution of different chemical species within the sample to be seen. This technique has been applied in various biological applications, including the analysis of tissue sections as an alternative to conventional histopathology, examining the homogeneity of pharmaceutical tablets, and differentiating morphologically similar pollen grains.

Nanoscale and spectroscopy below the diffraction limit

The spatial resolution of FTIR can be further improved below the micrometre scale by integrating it into a scanning near-field optical microscopy platform. The corresponding technique is called nano-FTIR and allows broadband spectroscopy to be performed on materials in ultra-small quantities (single viruses and protein complexes) and with 10 to 20 nm spatial resolution.

FTIR as detector in chromatography

The speed of FTIR allows spectra to be obtained from compounds as they are separated by a gas chromatograph. However, this technique is little used compared to GC-MS (gas chromatography-mass spectrometry), which is more sensitive. The GC-IR method is particularly useful for identifying isomers, which by their nature have identical masses. Liquid chromatography fractions are more difficult to handle because of the solvent present.
One notable exception is measuring chain branching as a function of molecular size in polyethylene using gel permeation chromatography, which is possible using chlorinated solvents that have no absorption in the spectral region in question.

TG-IR (thermogravimetric analysis-infrared spectrometry)

Measuring the gas evolved as a material is heated allows qualitative identification of the species, to complement the purely quantitative information provided by measuring the weight loss.

Water content determination in plastics and composites

FTIR analysis is used to determine water content in fairly thin plastic and composite parts, more commonly in the laboratory setting. Such FTIR methods have long been used for plastics, and were extended to composite materials in 2018, when the method was introduced by Krauklis, Gagani and Echtermeyer. The method uses the maximum of the absorbance band at about 5,200 cm−1, which correlates with the true water content in the material.

References External links Infracord spectrometer photograph The Grubb-Parsons-NPL cube interferometer Spectroscopy, part 2 by Dudley Williams, page 81 Infrared materials Properties of many salt crystals and useful links. University FTIR lab example from the University of Bristol Scientific instruments Fourier analysis Infrared spectroscopy
Fourier-transform infrared spectroscopy
[ "Physics", "Chemistry", "Technology", "Engineering" ]
5,348
[ "Spectrum (physical sciences)", "Scientific instruments", "Measuring instruments", "Infrared spectroscopy", "Spectroscopy" ]
19,763,060
https://en.wikipedia.org/wiki/Slater%E2%80%93Condon%20rules
Within computational chemistry, the Slater–Condon rules express integrals of one- and two-body operators over wavefunctions constructed as Slater determinants of orthonormal orbitals in terms of the individual orbitals. In doing so, the original integrals involving N-electron wavefunctions are reduced to sums over integrals involving at most two molecular orbitals; in other words, the original 3N-dimensional integral is expressed in terms of many three- and six-dimensional integrals.

The rules are used in deriving the working equations for all methods of approximately solving the Schrödinger equation that employ wavefunctions constructed from Slater determinants. These include Hartree–Fock theory, where the wavefunction is a single determinant, and all those methods which use Hartree–Fock theory as a reference, such as Møller–Plesset perturbation theory, and coupled cluster and configuration interaction theories.

In 1929 John C. Slater derived expressions for diagonal matrix elements of an approximate Hamiltonian while investigating atomic spectra within a perturbative approach. The following year Edward Condon extended the rules to non-diagonal matrix elements. In 1955 Per-Olov Löwdin further generalized these results for wavefunctions constructed from non-orthonormal orbitals, leading to what are known as the Löwdin rules.

Mathematical background

In terms of an antisymmetrization operator (\mathcal{A}) acting upon a product of N orthonormal spin-orbitals (with r and σ denoting spatial and spin variables), a determinantal wavefunction is denoted as

|\Psi\rangle = \mathcal{A}\,[\phi_1(\mathbf{r}_1\sigma_1)\,\phi_2(\mathbf{r}_2\sigma_2)\cdots\phi_N(\mathbf{r}_N\sigma_N)].

A wavefunction differing from this by only a single orbital (the m-th orbital) will be denoted as |\Psi_m^p\rangle, in which \phi_m has been replaced by \phi_p, and a wavefunction differing by two orbitals will be denoted as |\Psi_{mn}^{pq}\rangle. For any particular one- or two-body operator, \hat{O}, the Slater–Condon rules show how to simplify the following types of integrals:

\langle\Psi|\hat{O}|\Psi\rangle, \quad \langle\Psi|\hat{O}|\Psi_m^p\rangle, \quad \langle\Psi|\hat{O}|\Psi_{mn}^{pq}\rangle.

Matrix elements for two wavefunctions differing by more than two orbitals vanish unless higher-order interactions are introduced.

Integrals of one-body operators

One-body operators depend only upon the position or momentum of a single electron at any given instant. Examples are the kinetic energy, dipole moment, and total angular momentum operators. A one-body operator in an N-particle system is decomposed as

\hat{F} = \sum_{i=1}^{N} \hat{f}(i).

The Slater–Condon rules for such an operator are:

\langle\Psi|\hat{F}|\Psi\rangle = \sum_{i=1}^{N} \langle\phi_i|\hat{f}|\phi_i\rangle,
\langle\Psi|\hat{F}|\Psi_m^p\rangle = \langle\phi_m|\hat{f}|\phi_p\rangle,
\langle\Psi|\hat{F}|\Psi_{mn}^{pq}\rangle = 0.

Integrals of two-body operators

Two-body operators couple two particles at any given instant. Examples are the electron-electron repulsion, magnetic dipolar coupling, and total angular momentum-squared operators. A two-body operator in an N-particle system is decomposed as

\hat{G} = \frac{1}{2}\sum_{i \neq j}^{N} \hat{g}(i,j).

The Slater–Condon rules for such an operator are:

\langle\Psi|\hat{G}|\Psi\rangle = \frac{1}{2}\sum_{i,j=1}^{N}\left(\langle\phi_i\phi_j|\hat{g}|\phi_i\phi_j\rangle - \langle\phi_i\phi_j|\hat{g}|\phi_j\phi_i\rangle\right),
\langle\Psi|\hat{G}|\Psi_m^p\rangle = \sum_{i=1}^{N}\left(\langle\phi_m\phi_i|\hat{g}|\phi_p\phi_i\rangle - \langle\phi_m\phi_i|\hat{g}|\phi_i\phi_p\rangle\right),
\langle\Psi|\hat{G}|\Psi_{mn}^{pq}\rangle = \langle\phi_m\phi_n|\hat{g}|\phi_p\phi_q\rangle - \langle\phi_m\phi_n|\hat{g}|\phi_q\phi_p\rangle,

where

\langle\phi_a\phi_b|\hat{g}|\phi_c\phi_d\rangle = \iint \phi_a^*(\mathbf{x}_1)\,\phi_b^*(\mathbf{x}_2)\,\hat{g}\,\phi_c(\mathbf{x}_1)\,\phi_d(\mathbf{x}_2)\,d\mathbf{x}_1\,d\mathbf{x}_2.

Any matrix elements of a two-body operator with wavefunctions that differ by three or more spin-orbitals will vanish, meaning

\langle\Psi|\hat{G}|\Psi_{mno}^{pqr}\rangle = 0.

References Computational chemistry Quantum chemistry
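The rules translate directly into code. The sketch below is a minimal illustration, not a standard library API: it assumes one-electron integrals h[p, q] and antisymmetrized two-electron integrals g_as[p, q, r, s] = ⟨pq||rs⟩ over spin-orbitals are already available as arrays, and that the two determinants have been brought into maximum coincidence, so that permutation signs can be ignored.

```python
def one_body_element(h, occ_bra, occ_ket):
    """Slater-Condon rules for a one-body operator F = sum_i f(i)."""
    diff = [(a, b) for a, b in zip(occ_bra, occ_ket) if a != b]
    if not diff:                                  # identical determinants
        return sum(h[i, i] for i in occ_bra)
    if len(diff) == 1:                            # one orbital replaced
        m, p = diff[0]
        return h[m, p]
    return 0.0                                    # two or more replacements

def two_body_element(g_as, occ_bra, occ_ket):
    """Slater-Condon rules for a two-body operator, with <pq||rs> integrals."""
    diff = [(a, b) for a, b in zip(occ_bra, occ_ket) if a != b]
    if not diff:
        return 0.5 * sum(g_as[i, j, i, j] for i in occ_bra for j in occ_bra)
    if len(diff) == 1:
        m, p = diff[0]
        return sum(g_as[m, i, p, i] for i in occ_bra if i != m)
    if len(diff) == 2:
        (m, p), (n, q) = diff
        return g_as[m, n, p, q]
    return 0.0                                    # three or more: vanishes
```

Here occ_bra and occ_ket are equal-length sequences of occupied spin-orbital indices; a real program must also track the sign of the permutation that aligns the two determinants.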
Slater–Condon rules
[ "Physics", "Chemistry" ]
604
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry", " molecular", "Atomic", " and optical physics" ]
19,763,505
https://en.wikipedia.org/wiki/Rat-tail%20splice
A rat-tail splice, also known as a twist splice or a pig-tail splice, is a basic electrical splice that can be done with both solid and stranded wire. It is made by taking two or more bare wires and wrapping them together symmetrically around their common axis. The bare splice can be insulated with electrical tape or by other means. This common and simple splice is not very strong mechanically. It can be made stronger by coating it with solder, which, however, renders the splice less mechanically flexible, or it can be twisted and then held in place by the internal metal spring or threads of a twist-on wire connector, also called a wire nut. For safety reasons, neither covering the splice with tape alone nor soldering the splice is permitted at 110 volts or higher by most, if not all, North American building electrical codes. The rat-tail splice is not meant to connect wires that will be pulled or stressed. Rather, it is intended for wires that are protected inside an enclosure or junction box. See also Western Union splice T-splice References https://web.archive.org/web/20080921113045/http://workmanship.nasa.gov/guidadv_recmeth.jsp Telecommunications equipment Electrical wiring Splices
Rat-tail splice
[ "Physics", "Engineering" ]
281
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
19,763,678
https://en.wikipedia.org/wiki/T-splice
In electrical wiring, a T-splice is a splice that is used for connecting the end of one wire to the middle of another wire, thus forming a shape like that of the letter "T". This splice can be used with solid or stranded wires. The existing wire is called the main wire. The new wire that connects to the main wire is called the branch wire or tap wire. This is a prevalent junction type used in knob and tube wiring. See also Rat-tail splice Western Union splice Wire wrap References Telecommunications equipment Electrical wiring Splices
T-splice
[ "Physics", "Engineering" ]
117
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
2,624,245
https://en.wikipedia.org/wiki/Autoclaved%20aerated%20concrete
Autoclaved aerated concrete (AAC) is a lightweight, precast, cellular concrete building material. It is eco-friendly and suitable for producing concrete-like blocks. It is composed of quartz sand, calcined gypsum, lime, portland cement, water, and aluminium powder. AAC products are cured under heat and pressure in an autoclave. Developed in the mid-1920s, AAC provides thermal insulation as well as fire and mold resistance. Forms include blocks, wall panels, floor and roof panels, cladding (façade) panels and lintels. AAC products are used in construction, such as industrial buildings, residential houses, apartment buildings, and townhouses. Their applications include exterior and interior walls, firewalls, wet room walls, diffusion-open thermal insulation boards, intermediate floors, upper floors, stairs, opening crossings, beams, and pillars. Exterior uses require an applied finish to guard against weathering, such as a polymer-modified stucco or plaster compound, or a covering of siding materials such as natural or manufactured stone, veneer brick, metal, or vinyl siding. AAC materials can be routed, sanded, or cut to size on-site using a hand saw and standard power tools with carbon steel cutters.

Names

Autoclaved aerated concrete is also known by various other names, including autoclaved cellular concrete (ACC), autoclaved concrete, cellular concrete, porous concrete, Aircrete, Thermalite, Hebel, Aercon, Starken, Gasbeton, Airbeton, Durox, Siporex (silicon pore expansion), Suporex, H+H, and Ytong.

History

AAC was first created in the mid-1920s by the Swedish architect and inventor Dr. Johan Axel Eriksson (1888–1961), along with Professor Henrik Kreüger at the Royal Institute of Technology. The process was patented in 1924. In 1929, production started in Sweden in the city of Yxhult. "Yxhults Ånghärdade Gasbetong" later became the first registered building materials brand in the world: Ytong. Another brand, "Siporex", was established in Sweden in 1939, though following WWII activities were reduced to a minimum level, and no new plants have been built since the 1990s. Josef Hebel of Memmingen established another cellular concrete brand, Hebel, which opened its first plant in Germany in 1943. Ytong AAC was originally produced in Sweden using alum shale, whose combustible carbon content was beneficial to the production process. However, these deposits were found to contain natural uranium, which decays over time to radon, which then accumulates in structures where the AAC was used. This problem was addressed in 1972 by the Swedish Radiation Safety Authority, and by 1975 Ytong had abandoned alum shale in favor of a formulation made from quartz sand, calcined gypsum, lime (mineral), cement, water and aluminium powder, currently in use by most major brands. In 1978, Siporex Sweden opened the Siporex Factory in Saudi Arabia, establishing the Lightweight Construction Company - Siporex - LCC SIPOREX, targeting markets in the Middle East, Africa, and Japan. Currently LCC SIPOREX has three branches in Saudi Arabia. Today, the production of AAC is widespread, concentrated in Europe and Asia with some facilities located in the Americas. Egypt has the sole manufacturing plant in Africa. Although the European AAC market has seen a reduction in growth, Asia is experiencing a rapid expansion in the industry, driven by an escalating need for residential and commercial spaces. Currently, China has the largest Aircrete market globally, with several hundred manufacturing plants.
The most significant AAC production and consumption occur in China, Central Asia, India, and the Middle East, reflecting the dynamic growth and demand in these regions. Like other masonry materials, Aircrete is sold under many different brand names. Ytong and Hebel are brands of the internationally operating company Xella, headquartered in Duisburg. Other brand names in Europe are H+H Celcon (Denmark) and Solbet (Poland).

Uses

AAC is a concrete-based material used for both exterior and interior construction. One of its advantages is quick and easy installation, because the material can be routed, sanded, or cut to size on-site using a hand saw and standard power tools with carbon steel cutters. AAC is well suited for high-rise buildings and those with high temperature variations. Due to its lower density, high-rise buildings constructed using AAC require less steel and concrete for structural members. The mortar needed for laying AAC blocks is reduced due to the lower number of joints. Similarly, less material is required for rendering, because AAC can be shaped precisely before installation. Even though regular cement mortar can be used, most buildings that use AAC materials use thin-bed mortar, in thicknesses that depend on the national building codes.

Manufacturing

Unlike most other concrete applications, AAC is produced using no aggregate larger than sand. Quartz sand (SiO2), calcined gypsum, lime (mineral) and/or cement, and water are used as a binding agent. Aluminium powder is used at a rate of 0.05%–0.08% by volume (depending on the pre-specified density). In some countries, like India and China, fly ash generated from coal-fired power plants, and having 50–65% silica content, is used as an aggregate. When AAC is mixed and cast in forms, the aluminium powder reacts with calcium hydroxide and water to form hydrogen. The hydrogen gas foams and doubles the volume of the raw mix, creating gas bubbles up to about 3 mm in diameter; the material has been described as having bubbles inside like "a chocolate Aero bar". At the end of the foaming process, the hydrogen escapes into the atmosphere and is replaced by air, leaving a product as light as 20% of the weight of conventional concrete. When the forms are removed from the material, it is solid but still soft. It is then cut into either blocks or panels and placed in an autoclave chamber for 12 hours. During this steam-pressure hardening process, when the temperature reaches about 190 °C and the pressure reaches 8 to 12 bar, quartz sand reacts with calcium hydroxide to form calcium silicate hydrate, which gives AAC its high strength and other unique properties. Because of the relatively low temperature used, AAC blocks are not considered fired brick but a lightweight concrete masonry unit. After the autoclaving process, the material is stored and shipped to construction sites for use. Depending on its density, up to 80% of the volume of an AAC block is air. AAC's low density also accounts for its low structural compression strength: its compressive strength is approximately 50% of that of regular concrete. In 1978, the first AAC material factory - the LCC Siporex - Lightweight Construction Company - was opened in the Persian Gulf state of Saudi Arabia, supplying Gulf Cooperation Council countries with aerated blocks and panels. Since 1980, there has been a worldwide increase in the use of AAC materials. New production plants are being built in Australia, Bahrain, China, Eastern Europe, India and the United States.
AAC is increasingly used worldwide by developers.

Reinforced autoclaved aerated concrete

Reinforced autoclaved aerated concrete (RAAC) is a reinforced version of autoclaved aerated concrete, commonly used in roofing and wall construction. The first structural reinforced roof and floor panels were manufactured in Sweden, soon after the first autoclaved aerated concrete block plant started up there in 1929, but Belgian and German technologies became market leaders for RAAC elements after the Second World War. In Europe, it gained popularity in the mid-1950s as a cheaper and more lightweight alternative to conventional reinforced concrete, with documented widespread use in a number of European countries as well as Japan and former territories of the British Empire. RAAC was used in roof, floor and wall construction due to its lighter weight and lower cost compared to traditional concrete, and it has good fire resistance properties; it does not require plastering to achieve good fire resistance, and fire does not cause spalling. RAAC elements have also been used in Japan as walling units, owing to their good behaviour in seismic conditions. Limited integrity of the structural reinforcement bars (rebar) has been observed in 40-to-50-year-old RAAC roof panels, beginning in the 1990s. The material is liable to fail without visible deterioration or warning. This is often caused by RAAC's high susceptibility to water infiltration due to its porous nature, which corrodes the internal reinforcement in ways that are hard to detect. This places increased tensile stress on the bond between the reinforcement and concrete, lowering the material's service life. Detailed risk analyses are required on a structure-by-structure basis to identify areas in need of maintenance and lower the chance of catastrophic failure. Professional engineering concern about the structural performance of RAAC was first publicly raised in the United Kingdom in 1995 following inspections of cracked units in British school roofs, and was reinforced in 2022, when the Government Property Agency declared the material to be life-expired, and in 2023 when, following the partial or total closure of 174 schools at risk of a roofing collapse, other buildings were found to have issues with their RAAC construction, some of them only discovered to have been made from RAAC during the crisis. During the 2023 crisis, it was considered likely that RAAC in other countries would exhibit problems similar to those found in the United Kingdom. The original site of the Ontario Science Centre in Toronto, Canada, a major museum with similar roof construction, was ordered permanently closed on 21 June 2024 because of severely deteriorated roof panels dating from its opening in 1969. While repair options were proposed, the centre's ultimate owner, the provincial government of Ontario, had previously announced plans to relocate the centre and therefore requested that the facility be closed immediately rather than paying for repairs. Approximately 400 other public buildings in Ontario are understood to contain the material and are under review, but no other closures were anticipated at the time of the Science Centre closure.
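As a rough cross-check on the gas-forming reaction described under Manufacturing above, the hydrogen yield of the aluminium reaction can be estimated stoichiometrically. The sketch below uses a simplified reaction equation and an illustrative aluminium dosage; both are assumptions made for the purpose of the estimate.

```python
# Simplified gas-forming reaction in the AAC mix (assumed):
#   2 Al + 3 Ca(OH)2 + 6 H2O -> 3CaO.Al2O3.6H2O + 3 H2
# i.e. 1.5 mol of hydrogen gas per mol of aluminium.
M_AL = 26.98           # g/mol
MOLAR_VOLUME = 24.0    # L/mol, ideal gas near room temperature (assumed)

aluminium_g = 500.0    # illustrative dosage for a batch (assumption)
h2_mol = 1.5 * aluminium_g / M_AL
print(f"{h2_mol:.1f} mol H2 -> about {h2_mol * MOLAR_VOLUME:.0f} L of gas")
# ~27.8 mol -> ~667 L: a small aluminium dosage foams the mix substantially.
```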
Eco-friendliness

The high resource efficiency of autoclaved aerated concrete gives it a lower environmental impact than conventional concrete, from raw-material processing to the disposal of aerated concrete waste. Due to continuous improvements in efficiency, the production of aerated concrete blocks requires relatively little raw material per m3 of product, as much as five times less than the production of other building materials. There is no loss of raw materials in the production process, and all production waste is returned to the production cycle. Production of aerated concrete requires less energy than all other masonry products, thereby reducing the use of fossil fuels and the associated carbon dioxide (CO2) emissions. The curing process also saves energy, as the steam curing takes place at relatively low temperatures and the hot steam generated in the autoclaves is reused for subsequent batches.

Advantages

AAC has been produced for more than 70 years and has several advantages over other cement construction materials, one of the most important being its lower environmental impact.

Improved thermal efficiency reduces the heating and cooling load in buildings.
Its porous structure gives superior fire resistance.
Workability allows accurate cutting, which minimizes the generation of solid waste during use.
It is eco-friendly, does not pollute the environment, and can contribute to a building's LEED rating as a green building material.
Resource efficiency gives it a lower environmental impact than conventional concrete in all phases, from the processing of raw materials to the ultimate disposal of waste.
Its lighter weight makes blocks easier to handle, saves cost and energy in transportation and labour, and increases the chances of survival during seismic activity.
Larger blocks lead to faster masonry work and reduce project costs for large constructions.
Fire-resistant: AAC, like other concretes, is fire-resistant.
Ease of handling: AAC blocks are lightweight, making them easier to lift, carry, and install, smoothing construction and further improving efficiency compared to traditional materials.
Good ventilation: the material is very airy and allows the diffusion of water, reducing humidity inside the building. AAC absorbs moisture and releases humidity, helping to prevent condensation and other problems related to mildew.
Non-toxic: there are no toxic gases or other toxic substances in autoclaved aerated concrete. It does not attract rodents or other pests, and cannot be damaged by them.
Accuracy: panels and blocks made of autoclaved aerated concrete are produced to the exact sizes needed before leaving the factory. There is less need for on-site trimming. Since the blocks and panels fit so well together, there is less use of finishing materials such as mortar.
Long-lasting: the life of this material is long because it is not affected by harsh climates or extreme weather changes, and it will not degrade under normal climate changes.

Disadvantages

AAC has been produced for more than 70 years. However, some disadvantages were found when it was introduced in the UK (where double-leaf masonry, also known as cavity walls, is the norm). The process of using AAC is somewhat complex, so builders have to undergo special training. Non-structural shrinkage cracks may appear in AAC blocks after installation in rainy weather or humid environments. This is more likely in poor-quality blocks that were not properly steam-cured.
However, most AAC block manufacturers are certified and their blocks are tested in certified labs, so poor-quality blocks are rare. AAC is somewhat brittle: it requires more care than clay brick to avoid breakage during handling and transport. Fixings: the brittle nature of the blocks requires longer, thinner screws when fitting cabinets and wall hangings. Special wall fasteners (screw wall-plug anchors) designed for autoclaved aerated concrete, gypsum board and plaster tiles are available at a higher cost than standard expandable wall plugs, including special safety-relevant anchors for high load-bearing applications; it is recommended that fixing holes be drilled using HSS drill bits at a steady, constant speed without hammer action. Masonry drill bits and standard expandable wall plugs are not suitable for use with AAC blocks. Using the European standard density (400 kg/m3, B2,5), AAC blocks alone would require very thick walls (500 mm or thicker) to achieve the insulation levels required by newer building codes in Northern Europe. References External links History of Autoclaved Aerated Concrete Using Autoclaved Aerated Concrete Correctly - Masonry Magazine, June 2008 Aircrete Products Association Autoclaved Aerated Concrete - Portland Cement Association Building materials Concrete Masonry Swedish inventions 1929 introductions
Autoclaved aerated concrete
[ "Physics", "Engineering" ]
3,036
[ "Structural engineering", "Masonry", "Building engineering", "Construction", "Materials", "Building materials", "Concrete", "Matter", "Architecture" ]
2,624,309
https://en.wikipedia.org/wiki/Lanthanum%20gallium%20silicate
Lanthanum gallium silicate (referred to as LGS in this article), also known as langasite, has a chemical formula of the form A3BC3D2O14, where A, B, C and D indicate particular cation sites. A is a decahedral (Thomson cube) site coordinated by 8 oxygen atoms. B is an octahedral site coordinated by 6 oxygen atoms, and C and D are tetrahedral sites coordinated by 4 oxygen atoms. In this material, lanthanum occupies the A sites, gallium the B and C sites and half of the D sites, and silicon the other half of the D sites. LGS is a piezoelectric material, with no phase transitions up to its melting point of 1470 °C. Single-crystal LGS can be grown via the Czochralski method, in which crystallization is initiated on a rotating seed crystal lowered into the melt, which is then pulled from the melt. The growth atmosphere is usually argon or nitrogen with up to 5% oxygen. The use of oxygen in the growth environment is reported to suppress gallium loss from the melt; however, too high an oxygen level can lead to dissolution of the platinum crucible material into the melt. The growth of LGS is primarily along the z direction. Currently the 3-inch (76 mm) langasite boules produced commercially have growth rates of 1.5 to 5 mm/h. The quality of the crystals tends to improve as the growth rate is reduced. See also Ceramic lanthanum gallium tantalum oxide, langatate (CAS RN 83381-05-9) La6Ga11TaO28 (i.e., La3Ga5.5Ta0.5O14) References External links Properties of a langasite crystal Lanthanum compounds Gallium compounds Silicates Piezoelectric materials Ceramic materials
Lanthanum gallium silicate
[ "Physics", "Engineering" ]
398
[ "Physical phenomena", "Materials", "Electrical phenomena", "Ceramic materials", "Ceramic engineering", "Piezoelectric materials", "Matter" ]
2,626,942
https://en.wikipedia.org/wiki/Sorel%20cement
Sorel cement (also known as magnesia cement or magnesium oxychloride cement) is a non-hydraulic cement first produced by the French chemist Stanislas Sorel in 1867. In fact, in 1855, before working with magnesium compounds, Stanislas Sorel had first developed a two-component cement by mixing zinc oxide powder with a solution of zinc chloride. In a few minutes he obtained a dense material harder than limestone. A decade later, Sorel replaced zinc with magnesium in his formula and obtained a cement with similarly favorable properties. This new type of cement was stronger and more elastic than Portland cement, and therefore exhibited a more resilient behavior when subjected to shocks. The material could be easily molded like plaster when freshly prepared, or machined on a lathe after setting and hardening. It was very hard, could easily be bound to many different types of materials (good adhesive properties), and could be colored with pigments. It was therefore used to make mosaics and to imitate marble. After being mixed with powdered cotton, it was also used as a surrogate material for ivory in the fabrication of shock-resistant billiard balls. Sorel cement is a mixture of magnesium oxide (burnt magnesia) with magnesium chloride with the approximate chemical formula Mg4Cl2(OH)6(H2O)8, or MgCl2·3Mg(OH)2·8H2O, corresponding to a weight ratio of 2.5–3.5 parts MgO to one part MgCl2. Quite surprisingly, much more recently, another chemist, Charles A. Sorrell (1977, 1980), whose family name sounds quite similar to that of Stanislas Sorel, also studied the topic and published works on the same family of oxychloride compounds based on zinc and magnesium, just as Sorel had done about 100 years before. The zinc oxychloride cement is prepared from zinc oxide and zinc chloride instead of the magnesium compounds.

Composition and structure

The set cement consists chiefly of a mixture of magnesium oxychlorides and magnesium hydroxide in varying proportions, depending on the initial cement formulation, setting time, and other variables. The main stable oxychlorides at ambient temperature are the so-called "phase 3" and "phase 5", whose formulas can be written as 3Mg(OH)2·MgCl2·8H2O and 5Mg(OH)2·MgCl2·8H2O, respectively; or, equivalently, Mg2(OH)3Cl·4H2O and Mg3(OH)5Cl·4H2O. Phase 5 crystallizes mainly as long needles, which are actually rolled-up sheets. These interlocking needles give the cement its strength. In the long term the oxychlorides absorb carbon dioxide from the air and react with it to form magnesium chlorocarbonates.

History

These compounds are the primary components of matured Sorel cement, first prepared in 1867 by Stanislas Sorel. In the late 19th century, several attempts were made to determine the composition of the hardened Sorel's cement, but the results were not conclusive. Phase 3 was properly isolated and described by Robinson and Waggaman (1909), and phase 5 was identified by Lukens (1932).

Properties

Sorel cement can withstand 10,000–12,000 psi (69–83 MPa) of compressive force, whereas standard Portland cement can typically only withstand 7,000–8,000 psi (48–55 MPa). It also achieves high strength in a shorter time. Sorel cement has a remarkable capacity to bond with, and contain, other materials. It also exhibits some elasticity, an interesting property that increases its capacity to resist shocks (better mechanical resilience), particularly useful for billiard balls. The pore solution in wet Sorel cement is slightly alkaline (pH 8.5 to 9.5), but significantly less so than that of Portland cement (hyperalkaline conditions: pH 12.5 to 13.5).
Other differences between magnesium-based cements and portland cement include water permeability, preservation of plant and animal substances, and corrosion of metals. These differences determine which construction applications are suitable. Prolonged exposure of Sorel cement to water leaches out the soluble MgCl2, leaving hydrated brucite as the binding phase, which, without absorption of CO2, can result in loss of strength.

Fillers and reinforcement

In use, Sorel cement is usually combined with filler materials such as gravel, sand, marble flour, asbestos, wood particles and expanded clays. Sorel cement is incompatible with steel reinforcement because the presence of chloride ions in the pore solution and the low alkalinity (pH < 9) of the cement promote steel corrosion (pitting corrosion). However, the low alkalinity makes it more compatible with glass fiber reinforcement. It is also better than Portland cement as a binder for wood composites, since its setting is not retarded by lignin and other wood chemicals. The resistance of the cement to water can be improved with the use of additives such as phosphoric acid, soluble phosphates, fly ash, or silica.

Uses

Magnesium oxychloride cement is used to make floor tiles and industrial flooring, in fire protection, in wall insulation panels, and as a binder for grinding wheels. Due to its resemblance to marble, it is also used for artificial stones, artificial ivory (e.g. for billiard balls) and other similar purposes. Sorel cement is also studied as a candidate material for chemical buffers and engineered barriers (drift seals made of salt-concrete) for deep geological repositories of high-level nuclear waste in salt-rock formations (Waste Isolation Pilot Plant (WIPP) in New Mexico, USA; Asse II salt mine, Gorleben and Morsleben in Germany). Phase 5 of the magnesium oxychloride could be a useful complement, or replacement, for the MgO (periclase) presently used as a getter in the WIPP disposal chambers to limit the solubility of minor actinide carbonate complexes, while establishing moderately alkaline conditions (pH 8.5–9.5) still compatible with the undisturbed geochemical conditions initially prevailing in situ in the salt formations. The much more soluble calcium oxide and hydroxide (portlandite) are not authorized in WIPP (New Mexico) because they would impose too high a pH (12.5). As Mg2+ is the second most abundant cation present in sea water after Na+, and magnesium compounds are less soluble than those of calcium, magnesium-based buffer materials and Sorel cement are considered more appropriate backfill materials for radioactive waste disposal in deep salt formations than common calcium-based cements (Portland cement and its derivatives). Moreover, as magnesium hydroxychloride is also a possible pH buffer in marine evaporite brines, Sorel cement is expected to disturb the initial in situ conditions prevailing in deep salt formations less.

Preparation

Sorel cement is usually prepared by mixing finely divided MgO powder with a concentrated solution of MgCl2. In theory, the ingredients should be combined in the molar proportions of phase 5, which has the best mechanical properties. However, the chemical reactions that create the oxychlorides may not run to completion, leaving unreacted MgO particles and/or MgCl2 in the pore solution. While the former acts as an inert filler, leftover chloride is undesirable since it promotes corrosion of steel in contact with the cement. Excess water may also be necessary to achieve a workable consistency.
Therefore, in practice the proportions of magnesium oxide and water in the initial mix are higher than those in pure phase 5. In one study, the best mechanical properties were obtained with a molar ratio MgO:MgCl2 of 13:1 (instead of the stoichiometric 5:1).

Production

Periclase (MgO) and magnesite (MgCO3) are not abundant raw materials, so their manufacture into Sorel cement is expensive and limited to specialized niche applications requiring modest material quantities. China is the dominant supplier of raw materials for the production of magnesium oxide and its derivatives. Magnesium-based "green cements" derived from the more abundant dolomite (CaMg(CO3)2) deposits, but also containing calcium carbonate, must not be confused with the original Sorel cement, as the latter does not contain calcium oxide. Indeed, Sorel cement is a pure magnesium oxychloride.

See also Binder (material) Magnesium oxychloride Friedel's salt Salt-concrete Periclase (MgO) References Building materials Cement Magnesium compounds Metal halides Oxychlorides Pavements Visual arts materials
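The stoichiometry discussed under Preparation lends itself to a quick batching calculation. The sketch below is illustrative only (practical formulations must also fix the brine concentration and total water); it converts the molar MgO:MgCl2 ratios mentioned above into masses:

```python
M_MGO, M_MGCL2 = 40.30, 95.21     # molar masses, g/mol

def grams_mgo_per_kg_mgcl2(molar_ratio):
    """Mass of MgO needed per kilogram of MgCl2 at a given MgO:MgCl2 ratio."""
    return molar_ratio * (1000.0 / M_MGCL2) * M_MGO

for ratio in (5, 13):             # phase-5 stoichiometry vs the 13:1 study mix
    print(f"{ratio}:1 -> {grams_mgo_per_kg_mgcl2(ratio):.0f} g MgO per kg MgCl2")
# 5:1 -> ~2117 g; 13:1 -> ~5503 g
```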
Sorel cement
[ "Physics", "Chemistry", "Engineering" ]
1,753
[ "Inorganic compounds", "Building engineering", "Salts", "Architecture", "Construction", "Materials", "Metal halides", "Matter", "Building materials" ]
2,629,002
https://en.wikipedia.org/wiki/Le%20Cam%27s%20theorem
In probability theory, Le Cam's theorem, named after Lucien Le Cam, states the following. Suppose:

X_1, X_2, \ldots, X_n are independent random variables, each with a Bernoulli distribution (i.e., equal to either 0 or 1), not necessarily identically distributed, with \Pr(X_i = 1) = p_i;
S_n = X_1 + X_2 + \cdots + X_n (i.e. S_n follows a Poisson binomial distribution);
\lambda_n = p_1 + p_2 + \cdots + p_n.

Then

\sum_{k=0}^{\infty} \left| \Pr(S_n = k) - \frac{\lambda_n^k e^{-\lambda_n}}{k!} \right| < 2 \sum_{i=1}^{n} p_i^2.

In other words, the sum has approximately a Poisson distribution, and the above inequality bounds the approximation error in terms of the total variation distance. By setting p_i = \lambda_n/n, we see that this generalizes the usual Poisson limit theorem. When \lambda_n is large a better bound is possible:

\sum_{k=0}^{\infty} \left| \Pr(S_n = k) - \frac{\lambda_n^k e^{-\lambda_n}}{k!} \right| < 2 \left( 1 \wedge \frac{1}{\lambda_n} \right) \sum_{i=1}^{n} p_i^2,

where \wedge represents the minimum operator. It is also possible to weaken the independence requirement.

References External links Probability theorems Probabilistic inequalities Statistical inequalities Theorems in statistics
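The bound is easy to verify numerically. The sketch below convolves the Bernoulli distributions to get the exact Poisson binomial probability mass function and compares it with the Poisson approximation; the p_i values are arbitrary illustrations.

```python
import math

p = [0.1, 0.05, 0.2, 0.02, 0.15]          # arbitrary Bernoulli parameters
lam = sum(p)

def poisson_binomial_pmf(probs):
    pmf = [1.0]                           # distribution of an empty sum
    for pi in probs:                      # convolve in one Bernoulli at a time
        pmf = [(1 - pi) * a + pi * b
               for a, b in zip(pmf + [0.0], [0.0] + pmf)]
    return pmf

def poisson(k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lhs = sum(abs(q - poisson(k)) for k, q in enumerate(poisson_binomial_pmf(p)))
lhs += sum(poisson(k) for k in range(len(p) + 1, 80))   # tail where Pr(S_n = k) = 0
print(lhs, "<", 2 * sum(pi ** 2 for pi in p))           # bound: 2 * 0.0754 = 0.1508
```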
Le Cam's theorem
[ "Mathematics" ]
169
[ "Mathematical theorems", "Theorems in statistics", "Statistical inequalities", "Theorems in probability theory", "Probabilistic inequalities", "Inequalities (mathematics)", "Mathematical problems" ]
2,629,646
https://en.wikipedia.org/wiki/Spin%20wave
In condensed matter physics, a spin wave is a propagating disturbance in the ordering of a magnetic material. These low-lying collective excitations occur in magnetic lattices with continuous symmetry. From the equivalent quasiparticle point of view, spin waves are known as magnons, which are bosonic modes of the spin lattice that correspond roughly to the phonon excitations of the nuclear lattice. As temperature is increased, the thermal excitation of spin waves reduces a ferromagnet's spontaneous magnetization. The energies of spin waves are typically small, in keeping with typical Curie points at room temperature and below.

Theory

The simplest way of understanding spin waves is to consider the Hamiltonian for the Heisenberg ferromagnet:

\mathcal{H} = -J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j - g \mu_B \sum_j \mathbf{H} \cdot \mathbf{S}_j

where J is the exchange energy, the operators S represent the spins at Bravais lattice points, g is the Landé g-factor, μB is the Bohr magneton and H is the internal field which includes the external field plus any "molecular" field. Note that in the classical continuum case and in 1 + 1 dimensions the Heisenberg ferromagnet equation has the form

\mathbf{S}_t = \mathbf{S} \times \mathbf{S}_{xx}.

In 1 + 1, 2 + 1 and 3 + 1 dimensions this equation admits several integrable and non-integrable extensions like the Landau-Lifshitz equation, the Ishimori equation and so on. For a ferromagnet J > 0 and the ground state of the Hamiltonian |0\rangle is that in which all spins are aligned parallel with the field H. That |0\rangle is an eigenstate of \mathcal{H} can be verified by rewriting it in terms of the spin-raising and spin-lowering operators given by:

S_j^{\pm} = S_j^x \pm i S_j^y

resulting in

\mathcal{H} = -g \mu_B H \sum_j S_j^z - J \sum_{\langle i,j \rangle} \left[ S_i^z S_j^z + \tfrac{1}{2}\left( S_i^+ S_j^- + S_i^- S_j^+ \right) \right]

where z has been taken as the direction of the magnetic field. The spin-lowering operator S^- annihilates the state with minimum projection of spin along the z-axis, while the spin-raising operator S^+ annihilates the ground state with maximum spin projection along the z-axis. Since S_j^z |0\rangle = s |0\rangle for the maximally aligned state, we find

\mathcal{H}|0\rangle = \left( -g \mu_B H N s - \tfrac{1}{2} J N z s^2 \right) |0\rangle

where N is the total number of Bravais lattice sites and z is the number of nearest neighbours. The proposition that the ground state is an eigenstate of the Hamiltonian is confirmed. One might guess that the first excited state of the Hamiltonian has one randomly selected spin at position i rotated so that its z-projection is reduced to s − 1, but in fact this arrangement of spins is not an eigenstate. The reason is that such a state is transformed by the spin-raising and spin-lowering operators. The operator S_i^+ will increase the z-projection of the spin at position i back to its low-energy orientation, but the operator S_j^- will lower the z-projection of the spin at a neighbouring position j. The combined effect of the two operators is therefore to propagate the rotated spin to a new position, which is a hint that the correct eigenstate is a spin wave, namely a superposition of states with one reduced spin. The exchange energy penalty associated with changing the orientation of one spin is reduced by spreading the disturbance over a long wavelength. The degree of misorientation of any two near-neighbor spins is thereby minimized. From this explanation one can see why the Ising model magnet with discrete symmetry has no spin waves: the notion of spreading a disturbance in the spin lattice over a long wavelength makes no sense when spins have only two possible orientations. The existence of low-energy excitations is related to the fact that in the absence of an external field, the spin system has an infinite number of degenerate ground states with infinitesimally different spin orientations. The existence of these ground states can be seen from the fact that the state |0\rangle does not have the full rotational symmetry of the Hamiltonian \mathcal{H}, a phenomenon which is called spontaneous symmetry breaking.
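Before turning to the magnetization dynamics, the dispersion of these excitations can be illustrated numerically. For the one-dimensional nearest-neighbour version of the Hamiltonian above at H = 0, a standard textbook result gives the one-magnon energy E(k) = 2Js(1 − cos ka); the values of J, s and a below are illustrative, and the small-k limit exhibits the parabolic behaviour discussed in the next section.

```python
import numpy as np

J, s, a = 1.0, 0.5, 1.0           # exchange, spin, lattice constant (arbitrary units)

k = np.linspace(-np.pi / a, np.pi / a, 501)
E = 2 * J * s * (1 - np.cos(k * a))          # one-magnon dispersion

# Small-k limit: E ~ J s a^2 k^2, quadratic in k, in contrast to the
# linear dispersion of phonons.
E_quadratic = J * s * a**2 * k**2
small = np.abs(k * a) < 0.1
print(np.allclose(E[small], E_quadratic[small], rtol=1e-2, atol=1e-12))  # True
```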
Magnetization

In this model the magnetization is M = g μ_B N s / V, where V is the volume. The propagation of spin waves is described by the Landau-Lifshitz equation of motion:

\frac{d\mathbf{M}}{dt} = -\gamma \, \mathbf{M} \times \mathbf{H} - \frac{\lambda}{M^2} \, \mathbf{M} \times \left( \mathbf{M} \times \mathbf{H} \right)

where γ is the gyromagnetic ratio and λ is the damping constant. The cross-products in this forbidding-looking equation show that the propagation of spin waves is governed by the torques generated by internal and external fields. (An equivalent form is the Landau-Lifshitz-Gilbert equation, which replaces the final term by a more "simple looking" equivalent one.) The first term on the right-hand side of the equation describes the precession of the magnetization under the influence of the applied field, while the final term describes how the magnetization vector "spirals in" towards the field direction as time progresses. In metals the damping forces described by the constant λ are in many cases dominated by the eddy currents.

One important difference between phonons and magnons lies in their dispersion relations. The dispersion relation for phonons is to first order linear in wavevector k, namely ω = ck, where ω is the frequency and c is the velocity of sound. Magnons have a parabolic dispersion relation, ω = Dk², where the parameter D represents the "spin stiffness". The k² form is the third term of a Taylor expansion of a cosine term in the energy expression originating from the dot product. The underlying reason for the difference in dispersion relation is that the order parameter (magnetization) for the ground state in ferromagnets violates time-reversal symmetry. Two adjacent spins in a solid with lattice constant a that participate in a mode with wavevector k have an angle between them equal to ka.

Experimental observation

Spin waves are observed through four experimental methods: inelastic neutron scattering, inelastic light scattering (Brillouin scattering, Raman scattering and inelastic X-ray scattering), inelastic electron scattering (spin-resolved electron energy loss spectroscopy), and spin-wave resonance (ferromagnetic resonance). In inelastic neutron scattering the energy loss of a beam of neutrons that excite a magnon is measured, typically as a function of scattering vector (or equivalently momentum transfer), temperature and external magnetic field. Inelastic neutron scattering measurements can determine the dispersion curve for magnons just as they can for phonons. Important inelastic neutron scattering facilities are present at the ISIS neutron source in Oxfordshire, UK, the Institut Laue-Langevin in Grenoble, France, the High Flux Isotope Reactor at Oak Ridge National Laboratory in Tennessee, USA, and at the National Institute of Standards and Technology in Maryland, USA. Brillouin scattering similarly measures the energy loss of photons (usually at a convenient visible wavelength) reflected from or transmitted through a magnetic material. Brillouin spectroscopy is similar to the more widely known Raman scattering, but probes lower energies and has a superior energy resolution, in order to be able to detect the meV energies of magnons. Ferromagnetic (or antiferromagnetic) resonance instead measures the absorption of microwaves, incident on a magnetic material, by spin waves, typically as a function of angle, temperature and applied field. Ferromagnetic resonance is a convenient laboratory method for determining the effect of magnetocrystalline anisotropy on the dispersion of spin waves.
One group at the Max Planck Institute of Microstructure Physics in Halle, Germany, proved that very-high-energy surface magnons can be excited using spin-polarized electron energy loss spectroscopy (SPEELS). This technique allows one to probe the dispersion of magnons in ultrathin ferromagnetic films. The first experiment was performed on a 5 ML (monolayer) Fe film. With momentum resolution, the magnon dispersion was explored for an 8 ML fcc Co film on Cu(001) and an 8 ML hcp Co film on W(110). The maximum magnon energy at the border of the surface Brillouin zone was 240 meV.

Practical significance

When magnetoelectronic devices are operated at high frequencies, the generation of spin waves can be an important energy loss mechanism. Spin wave generation limits the linewidths, and therefore the quality factors Q, of ferrite components used in microwave devices. The reciprocal of the lowest frequency of the characteristic spin waves of a magnetic material gives a time scale for the switching of a device based on that material.

See also Magnonics Holstein–Primakoff transformation Spin engineering References External links Spin waves - The Feynman Lectures on Physics List of labs performing Brillouin scattering measurements. Magnetic ordering Waves
Spin wave
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,704
[ "Physical phenomena", "Electric and magnetic fields in matter", "Materials science", "Waves", "Motion (physics)", "Magnetic ordering", "Condensed matter physics" ]
1,267,288
https://en.wikipedia.org/wiki/Poincar%C3%A9%E2%80%93Hopf%20theorem
In mathematics, the Poincaré–Hopf theorem (also known as the Poincaré–Hopf index formula, Poincaré–Hopf index theorem, or Hopf index theorem) is an important theorem that is used in differential topology. It is named after Henri Poincaré and Heinz Hopf. The Poincaré–Hopf theorem is often illustrated by the special case of the hairy ball theorem, which simply states that there is no smooth vector field on an even-dimensional n-sphere having no sources or sinks.

Formal statement

Let M be a differentiable manifold of dimension n, and let v be a vector field on M. Suppose that x is an isolated zero of v, and fix some local coordinates near x. Pick a closed ball D centered at x, so that x is the only zero of v in D. Then the index of v at x, \operatorname{index}_x(v), can be defined as the degree of the map u : \partial D \to S^{n-1} from the boundary of D to the (n − 1)-sphere, given by u(z) = v(z)/\|v(z)\|.

Theorem. Let M be a compact differentiable manifold. Let v be a vector field on M with isolated zeroes. If M has boundary, then we insist that v be pointing in the outward normal direction along the boundary. Then we have the formula

\sum_i \operatorname{index}_{x_i}(v) = \chi(M)

where the sum of the indices is over all the isolated zeroes of v and \chi(M) is the Euler characteristic of M. A particularly useful corollary is that if M admits a non-vanishing vector field, its Euler characteristic is 0.

The theorem was proven for two dimensions by Henri Poincaré and later generalized to higher dimensions by Heinz Hopf.

Significance

The Euler characteristic of a closed surface is a purely topological concept, whereas the index of a vector field is purely analytic. Thus, this theorem establishes a deep link between two seemingly unrelated areas of mathematics. It is perhaps as interesting that the proof of this theorem relies heavily on integration and, in particular, on Stokes' theorem, which states that the integral of the exterior derivative of a differential form is equal to the integral of that form over the boundary. In the special case of a manifold without boundary, this amounts to saying that the integral is 0. But by examining vector fields in a sufficiently small neighborhood of a source or sink, we see that sources and sinks contribute integer amounts (known as the index) to the total, and they must all sum to 0. This result may be considered one of the earliest of a whole series of theorems (e.g. the Atiyah–Singer index theorem, De Rham's theorem, the Grothendieck–Riemann–Roch theorem) establishing deep relationships between geometric and analytical or physical concepts. They play an important role in the modern study of both fields.

Sketch of proof

1. Embed M in some high-dimensional Euclidean space (using the Whitney embedding theorem).
2. Take a small neighborhood of M in that Euclidean space, Nε.
3. Extend the vector field to this neighborhood so that it still has the same zeroes, with the same indices. In addition, make sure that the extended vector field at the boundary of Nε is directed outwards.
4. The sum of the indices of the zeroes of the old (and new) vector field is equal to the degree of the Gauss map from the boundary of Nε to the (n − 1)-sphere. Thus, the sum of the indices is independent of the actual vector field and depends only on the manifold M. (Technique: cut away all zeroes of the vector field with small neighborhoods, then use the fact that the degree of a map from the boundary of an n-dimensional manifold to an (n − 1)-sphere that can be extended to the whole n-dimensional manifold is zero.)
5. Finally, identify this sum of indices as the Euler characteristic of M.
To do that, construct a very specific vector field on M using a triangulation of M for which it is clear that the sum of indices is equal to the Euler characteristic. Generalization It is still possible to define the index for a vector field with nonisolated zeroes. A construction of this index and the extension of the Poincaré–Hopf theorem to vector fields with nonisolated zeroes is outlined in Section 1.1.2 of . Another generalization, which uses only a compact triangulable space and continuous mappings with finitely many fixed points, is the Lefschetz–Hopf theorem. Since every vector field induces a flow on the manifold, and the fixed points of small flows correspond to the zeroes of the vector field (with the indices of the zeroes equal to the indices of the fixed points), the Poincaré–Hopf theorem follows immediately from it. See also Eisenbud–Levine–Khimshiashvili signature formula Hopf theorem References Theorems in differential topology Differential topology
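Because the displayed formulas in this entry did not survive extraction, the LaTeX below restates the index definition and the theorem in standard notation; the symbols M, v, x and D are supplied here from the usual textbook presentation rather than recovered from the original markup.

```latex
% Index of an isolated zero x of a vector field v on an n-dimensional
% manifold M: take a small closed ball D around x containing no other zero,
% and measure the degree of the normalized field on its boundary sphere.
\[
  \operatorname{index}_x(v) \;=\; \deg\!\left(u : \partial D \to S^{\,n-1}\right),
  \qquad u(z) = \frac{v(z)}{\|v(z)\|}.
\]
% Poincare--Hopf: for a compact differentiable manifold M and a vector field v
% with isolated zeroes (pointing outward along any boundary),
\[
  \sum_{i}\operatorname{index}_{x_i}(v) \;=\; \chi(M),
\]
% so in particular a nowhere-vanishing vector field forces \chi(M) = 0.
```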
Poincaré–Hopf theorem
[ "Mathematics" ]
939
[ "Topology", "Differential topology", "Theorems in differential topology", "Theorems in topology" ]
1,270,187
https://en.wikipedia.org/wiki/Current%20transformer
A current transformer (CT) is a type of transformer that reduces or multiplies alternating current (AC), producing a current in its secondary which is proportional to the current in its primary. Current transformers, along with voltage or potential transformers, are instrument transformers, which scale the large values of voltage or current to small, standardized values that are easy to handle for measuring instruments and protective relays. Instrument transformers isolate measurement or protection circuits from the high voltage of the primary system. A current transformer presents a negligible load to the primary circuit. Current transformers are the current-sensing units of the power system and are used at generating stations, electrical substations, and in industrial and commercial electric power distribution. Function A current transformer has a primary winding, a core, and a secondary winding, although some transformers use an air core. While the physical principles are the same, the details of a "current" transformer compared with a "voltage" transformer will differ owing to different requirements of the application. A current transformer is designed to maintain an accurate ratio between the currents in its primary and secondary circuits over a defined range. The alternating current in the primary produces an alternating magnetic field in the core, which then induces an alternating current in the secondary. The primary circuit is largely unaffected by the insertion of the CT. Accurate current transformers need close coupling between the primary and secondary to ensure that the secondary current is proportional to the primary current over a wide current range. The current in the secondary is the current in the primary (assuming a single turn primary) divided by the number of turns of the secondary. In the illustration on the right, 'I' is the current in the primary, 'B' is the magnetic field, 'N' is the number of turns on the secondary, and 'A' is an AC ammeter. Current transformers typically consist of a silicon steel ring core wound with many turns of copper wire, as shown in the illustration to the right. The conductor carrying the primary current is passed through the ring. The CT's primary, therefore, consists of a single 'turn'. The primary 'winding' may be a permanent part of the current transformer, i.e., a heavy copper bar to carry current through the core. Window-type current transformers are also common, which can have circuit cables run through the middle of an opening in the core to provide a single-turn primary winding. To assist accuracy, the primary conductor should be centered in the aperture. CTs are specified by their current ratio from primary to secondary. The rated secondary current is normally standardized at 1 or 5 amperes. For example, a 4000:5 CT secondary winding will supply an output current of 5 amperes when the primary winding current is 4000 amperes. This ratio can also be used to find the impedance or voltage on one side of the transformer, given the appropriate value at the other side. For the 4000:5 CT, the secondary impedance can be found as , and the secondary voltage can be found as . In some cases, the secondary impedance is referred to the primary side, and is found as . Referring the impedance is done simply by multiplying initial secondary impedance value by the current ratio. The secondary winding of a CT can have taps to provide a range of ratios, five taps being common. 
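As a quick numeric illustration of the ratio paragraph above, the sketch below scales a primary current through the 4000:5 example CT and computes the voltage and apparent power at an assumed burden; the primary current and burden impedance are made-up values, not figures from the text.

```python
# Minimal sketch of CT ratio arithmetic (illustrative values only).
ratio = 4000 / 5          # current ratio of a 4000:5 CT
i_primary = 3200.0        # amperes in the primary conductor (assumed)
z_burden = 0.5            # ohms presented by meter plus leads (assumed)

i_secondary = i_primary / ratio          # 4.0 A for this example
v_burden = i_secondary * z_burden        # voltage the CT must develop
s_burden = i_secondary ** 2 * z_burden   # apparent power (VA) delivered to the burden

print(f"secondary current: {i_secondary:.2f} A")
print(f"burden voltage:    {v_burden:.2f} V")
print(f"burden load:       {s_burden:.2f} VA")
```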
Current transformer shapes and sizes vary depending on the end-user or switch gear manufacturer. Low-voltage single ratio metering current transformers are either a ring type or plastic molded case. Split-core current transformers either have a two-part core or a core with a removable section. This allows the transformer to be placed around a conductor without disconnecting it first. Split-core current transformers are typically used in low current measuring instruments, often portable, battery-operated, and hand-held (see illustration lower right). Use Current transformers are used extensively for measuring current and monitoring the operation of the power grid. Along with voltage leads, revenue-grade CTs drive the electrical utility's watt-hour meter on many larger commercial and industrial supplies. High-voltage current transformers are mounted on porcelain or polymer insulators to isolate them from ground. Some CT configurations slip around the bushing of a high-voltage transformer or circuit breaker, which automatically centers the conductor inside the CT window. Current transformers can be mounted on the low voltage or high voltage leads of a power transformer. Sometimes a section of a bus bar can be removed to replace a current transformer. Often, multiple CTs are installed as a "stack" for various uses. For example, protection devices and revenue metering may use separate CTs to provide isolation between metering and protection circuits and allows current transformers with different characteristics (accuracy, overload performance) to be used for the devices. In the United States, the National Electrical Code (NEC) requires residual current devices in commercial and residential electrical systems to protect outlets installed in "wet" locations such as kitchens and bathrooms, as well as weatherproof outlets installed outdoors. Such devices, most commonly ground fault circuit interrupters (GFCIs), typically run both the 120-volt energized conductor and the neutral return conductor through a current transformer, with the secondary coil connected to a trip device. Under normal conditions, the current in the two circuit wires will be equal and flow in opposite directions, resulting in zero net current through the CT and no current in the secondary coil. If the supply current is redirected downstream into the third (ground) circuit conductor (e.g., if the grounded metallic case of a power tool contacts a 120-volt conductor), or into earth ground (e.g., if a person contacts a 120-volt conductor), the neutral return current will be less than the supply current, resulting in a positive net current flow through the CT. This net current flow will induce current in the secondary coil, which will cause the trip device to operate and de-energize the circuit - typically within 0.2 seconds. The burden (load) impedance should not exceed the specified maximum value to avoid the secondary voltage exceeding the limits for the current transformer. The primary current rating of a current transformer should not be exceeded, or the core may enter its non-linear region and ultimately saturate. This would occur near the end of the first half of each half (positive and negative) of the AC sine wave in the primary and compromise accuracy. Safety Current transformers are often used to monitor high currents or currents at high voltages. Technical standards and design practices are used to ensure the safety of installations using current transformers. 
The secondary of a current transformer should not be disconnected from its burden while current is in the primary, as the secondary will attempt to continue driving current into an effectively infinite impedance, potentially generating high voltages and thus compromising operator safety. For certain current transformers, this voltage may reach several kilovolts and may cause arcing. Exceeding the secondary voltage may also degrade the accuracy of the transformer or destroy it. Output voltage in open-circuit operation is limited by core saturation, since the primary flux is no longer canceled by secondary flux; smaller current transformers may therefore not reach dangerous voltages at nominal primary current. However, fast current transients, for example from loads being switched on, can still induce dangerous voltage levels because of the high rate of change of current. Accuracy The accuracy of a CT is affected by a number of factors including: Burden Burden class/saturation class Rating factor Load External electromagnetic fields Temperature Physical configuration The selected tap, for multi-ratio CTs Phase change Capacitive coupling between primary and secondary Resistance of primary and secondary Core magnetizing current Accuracy classes for various types of measurement and at standard loads in the secondary circuit (burdens) are defined in IEC 61869-1 as classes 0.1, 0.2s, 0.2, 0.5, 0.5s, 1 and 3. The class designation is an approximate measure of the CT's accuracy. The ratio (primary to secondary current) error of a Class 1 CT is 1% at rated current; the ratio error of a Class 0.5 CT is 0.5% or less. Errors in phase are also important, especially in power measuring circuits. Each class has an allowable maximum phase error for a specified load impedance. Current transformers used for protective relaying also have accuracy requirements at overload currents in excess of the normal rating to ensure accurate performance of relays during system faults. A CT with a rating of 2.5L400 specifies that, with an output from its secondary winding of twenty times its rated secondary current (usually ) and 400 V (IZ drop), its output accuracy will be within 2.5 percent. Burden The secondary load of a current transformer is termed the "burden" to distinguish it from the primary load. The burden in a CT metering electrical network is largely resistive impedance presented to its secondary winding. Typical burden ratings for IEC CTs are 1.5 VA, 3 VA, 5 VA, 10 VA, 15 VA, 20 VA, 30 VA, 45 VA and 60 VA. ANSI/IEEE burden ratings are B-0.1, B-0.2, B-0.5, B-1.0, B-2.0 and B-4.0. This means a CT with a burden rating of B-0.2 will maintain its stated accuracy with up to 0.2 Ω on the secondary circuit. These specification diagrams show accuracy parallelograms on a grid incorporating magnitude and phase angle error scales at the CT's rated burden. Items that contribute to the burden of a current measurement circuit are switch-blocks, meters and intermediate conductors. The most common cause of excess burden impedance is the conductor between the meter and the CT. When substation meters are located far from the meter cabinets, the excessive length of cable creates a large resistance. This problem can be reduced by using thicker cables and CTs with lower secondary currents (1 A), both of which will produce less voltage drop between the CT and its metering devices.
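To make the last point concrete, here is a rough comparison of the voltage drop and volt-ampere load that a long lead run adds for a 5 A versus a 1 A secondary; the cable length and resistance per metre are assumed values chosen only for illustration.

```python
# Illustrative comparison of lead burden for 5 A vs 1 A CT secondaries.
# Assumed values: 100 m of cable each way, roughly 2.5 mm^2 copper.
ohm_per_m = 0.0074                 # assumed resistance per metre of conductor
loop_length_m = 2 * 100            # out-and-back run between CT and meter
r_leads = ohm_per_m * loop_length_m

for i_secondary in (5.0, 1.0):     # rated secondary currents to compare
    v_drop = i_secondary * r_leads             # voltage lost in the leads
    va_leads = i_secondary ** 2 * r_leads      # VA the leads add to the burden
    print(f"{i_secondary:.0f} A secondary: {v_drop:.2f} V drop, "
          f"{va_leads:.2f} VA in the leads")

# For the same cable, the 1 A secondary loads the CT 25 times less,
# which is why it is preferred when meters are far from the CT.
```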
Knee-point core-saturation voltage The knee-point voltage of a current transformer is the magnitude of the secondary voltage above which the output current ceases to linearly follow the input current within declared accuracy. In testing, if a voltage is applied across the secondary terminals the magnetizing current will increase in proportion to the applied voltage, until the knee point is reached. The knee point is defined as the voltage at which a 10% increase in applied voltage increases the magnetizing current by 50%. For voltages greater than the knee point, the magnetizing current increases considerably even for small increments in voltage across the secondary terminals. The knee-point voltage is less applicable for metering current transformers as their accuracy is generally much higher but constrained within a very small range of the current transformer rating, typically 1.2 to 1.5 times rated current. However, the concept of knee point voltage is very pertinent to protection current transformers, since they are necessarily exposed to fault currents of 20 to 30 times rated current. Phase shift Ideally, the primary and secondary currents of a current transformer should be in phase. In practice, this is impossible, but, at normal power frequencies, phase shifts of a few tenths of a degree are achievable, while simpler CTs may have larger phase shifts. For current measurement, phase shift is immaterial as ammeters only display the magnitude of the current. However, in wattmeters, energy meters, and power factor, phase shift produces errors. For power and energy measurement, the errors are considered to be negligible at unity power factor but become more significant as the power factor approaches zero. The introduction of electronic power and energy meters has allowed current phase error to be calibrated out. Construction Bar-type current transformers have terminals for source and load connections of the primary circuit, and the body of the current transformer provides insulation between the primary circuit and ground. By use of oil insulation and porcelain bushings, such transformers can be applied at the highest transmission voltages. Ring-type current transformers are installed over a bus bar or an insulated cable and have only a low level of insulation on the secondary coil. To obtain non-standard ratios or for other special purposes, more than one turn of the primary cable may be passed through the ring. Where a metal shield is present in the cable jacket, it must be terminated so no net sheath current passes through the ring, to ensure accuracy. Current transformers used to sense ground fault (zero sequence) currents, such as in a three-phase installation, may have three primary conductors passed through the ring. Only the net unbalanced current produces a secondary current - this can be used to detect a fault from an energized conductor to ground. Ring-type transformers usually use dry insulation systems, with a hard rubber or plastic case over the secondary windings. For temporary connections, a split ring-type current transformer can be slipped over a cable without disconnecting it. This type has a laminated iron core, with a hinged section that allows it to be installed over the cable; the core links the magnetic flux produced by the single turn primary winding to a wound secondary with many turns. Because the gaps in the hinged segment introduce inaccuracy, such devices are not normally used for revenue metering. 
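Referring back to the knee-point definition earlier in this section (the voltage at which a 10% increase in applied voltage raises the magnetizing current by 50%), the sketch below locates that point numerically on a synthetic saturation curve; the curve is invented for illustration and does not represent any particular transformer.

```python
import numpy as np

# Synthetic magnetizing curve: I_mag rises sharply once the core saturates.
# Purely illustrative; a real curve would come from an excitation test.
v = np.linspace(1.0, 400.0, 4000)            # applied secondary voltage (V)
i_mag = 0.002 * v + 0.5 * (v / 300.0) ** 12  # magnetizing current (A)

def knee_point(voltages, currents):
    """Return the lowest voltage where +10% voltage gives >= +50% current."""
    for vk, ik in zip(voltages, currents):
        i_at_110pct = np.interp(1.1 * vk, voltages, currents)
        if i_at_110pct >= 1.5 * ik:
            return vk
    return None

print(f"knee-point voltage ~ {knee_point(v, i_mag):.0f} V")
```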
Current transformers, especially those intended for high voltage substation service, may have multiple taps on their secondary windings, providing several ratios in the same device. This can be done to allow for reduced inventory of spare units, or to allow for load growth in an installation. A high-voltage current transformer may have several secondary windings with the same primary, to allow for separate metering and protection circuits, or for connection to different types of protective devices. For example, one secondary may be used for branch overcurrent protection, while a second winding may be used in a bus differential protective scheme, and a third winding used for power and current measurement. Special types Specially constructed wideband current transformers are also used (usually with an oscilloscope) to measure waveforms of high frequency or pulsed currents within pulsed power systems. Unlike CTs used for power circuitry, wideband CTs are rated in output volts per ampere of primary current. If the burden resistance is much less than inductive impedance of the secondary winding at the measurement frequency then the current in the secondary tracks the primary current and the transformer provides a current output that is proportional to the measured current. On the other hand, if that condition is not true, then the transformer is inductive and gives a differential output. The Rogowski coil uses this effect and requires an external integrator in order to provide a voltage output that is proportional to the measured current. Standards Ultimately, depending on client requirements, there are two main standards to which current transformers are designed. IEC 61869-1 (in the past IEC 60044-1) & IEEE C57.13 (ANSI), although the Canadian and Australian standards are also recognised. High voltage types Current transformers are used for protection, measurement and control in high-voltage electrical substations and the electrical grid. Current transformers may be installed inside switchgear or in apparatus bushings, but very often free-standing outdoor current transformers are used. In a switchyard, live tank current transformers have a substantial part of their enclosure energized at the line voltage and must be mounted on insulators. Dead tank current transformers isolate the measured circuit from the enclosure. Live tank CTs are useful because the primary conductor is short, which gives better stability and a higher short-circuit current rating. The primary of the winding can be evenly distributed around the magnetic core, which gives better performance for overloads and transients. Since the major insulation of a live-tank current transformer is not exposed to the heat of the primary conductors, insulation life and thermal stability is improved. A high-voltage current transformer may contain several cores, each with a secondary winding, for different purposes (such as metering circuits, control, or protection). A neutral current transformer is used as earth fault protection to measure any fault current flowing through the neutral line from the wye neutral point of a transformer. See also Instrumentation Transformer types Current sensing techniques References Electric transformers Electronic test equipment
Current transformer
[ "Technology", "Engineering" ]
3,365
[ "Electronic test equipment", "Measuring instruments" ]
1,270,458
https://en.wikipedia.org/wiki/Eisenstein%20series
Eisenstein series, named after German mathematician Gotthold Eisenstein, are particular modular forms with infinite series expansions that may be written down directly. Originally defined for the modular group, Eisenstein series can be generalized in the theory of automorphic forms. Eisenstein series for the modular group Let be a complex number with strictly positive imaginary part. Define the holomorphic Eisenstein series of weight , where is an integer, by the following series: This series absolutely converges to a holomorphic function of in the upper half-plane and its Fourier expansion given below shows that it extends to a holomorphic function at . It is a remarkable fact that the Eisenstein series is a modular form. Indeed, the key property is its -covariance. Explicitly if and then Note that is necessary such that the series converges absolutely, whereas needs to be even otherwise the sum vanishes because the and terms cancel out. For the series converges but it is not a modular form. Relation to modular invariants The modular invariants and of an elliptic curve are given by the first two Eisenstein series: The article on modular invariants provides expressions for these two functions in terms of theta functions. Recurrence relation Any holomorphic modular form for the modular group can be written as a polynomial in and . Specifically, the higher order can be written in terms of and through a recurrence relation. Let , so for example, and . Then the satisfy the relation for all . Here, is the binomial coefficient. The occur in the series expansion for the Weierstrass's elliptic functions: Fourier series Define . (Some older books define to be the nome , but is now standard in number theory.) Then the Fourier series of the Eisenstein series is where the coefficients are given by Here, are the Bernoulli numbers, is Riemann's zeta function and is the divisor sum function, the sum of the th powers of the divisors of . In particular, one has The summation over can be resummed as a Lambert series; that is, one has for arbitrary complex and . When working with the -expansion of the Eisenstein series, this alternate notation is frequently introduced: Identities involving Eisenstein series As theta functions Source: Given , let and define the Jacobi theta functions which normally uses the nome , where and are alternative notations. Then we have the symmetric relations, Basic algebra immediately implies an expression related to the modular discriminant, The third symmetric relation, on the other hand, is a consequence of and . Products of Eisenstein series Eisenstein series form the most explicit examples of modular forms for the full modular group . Since the space of modular forms of weight has dimension 1 for , different products of Eisenstein series having those weights have to be equal up to a scalar multiple. In fact, we obtain the identities: Using the -expansions of the Eisenstein series given above, they may be restated as identities involving the sums of powers of divisors: hence and similarly for the others. The theta function of an eight-dimensional even unimodular lattice is a modular form of weight 4 for the full modular group, which gives the following identities: for the number of vectors of the squared length in the root lattice of the type . Similar techniques involving holomorphic Eisenstein series twisted by a Dirichlet character produce formulas for the number of representations of a positive integer ' as a sum of two, four, or eight squares in terms of the divisors of . 
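Since the displayed series in this entry were lost during extraction, the LaTeX below records the definition, the modular transformation law, and the q-expansion in the usual normalization; the symbols G_{2k}, E_{2k}, q and B_{2k} follow common convention and are supplied here rather than taken from the original markup.

```latex
% Weight-2k Eisenstein series for the modular group (k >= 2, Im(tau) > 0):
\[
  G_{2k}(\tau) \;=\; \sum_{(m,n)\in\mathbb{Z}^2\setminus\{(0,0)\}} \frac{1}{(m\tau+n)^{2k}},
  \qquad
  G_{2k}\!\left(\frac{a\tau+b}{c\tau+d}\right) = (c\tau+d)^{2k}\,G_{2k}(\tau).
\]
% Normalized form and Fourier (q-)expansion, with q = e^{2\pi i\tau}:
\[
  E_{2k}(\tau) \;=\; \frac{G_{2k}(\tau)}{2\zeta(2k)}
  \;=\; 1 \;-\; \frac{4k}{B_{2k}} \sum_{n\ge 1} \sigma_{2k-1}(n)\,q^{n},
\]
% for example
\[
  E_4 = 1 + 240\sum_{n\ge1}\sigma_3(n)\,q^n, \qquad
  E_6 = 1 - 504\sum_{n\ge1}\sigma_5(n)\,q^n.
\]
```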
Using the above recurrence relation, all higher can be expressed as polynomials in and . For example: Many relationships between products of Eisenstein series can be written in an elegant way using Hankel determinants, e.g. Garvan's identity where is the modular discriminant. Ramanujan identities Srinivasa Ramanujan gave several interesting identities between the first few Eisenstein series involving differentiation. Let then These identities, like the identities between the series, yield arithmetical convolution identities involving the sum-of-divisor function. Following Ramanujan, to put these identities in the simplest form it is necessary to extend the domain of to include zero, by setting Then, for example Other identities of this type, but not directly related to the preceding relations between , and functions, have been proved by Ramanujan and Giuseppe Melfi, as for example Generalizations Automorphic forms generalize the idea of modular forms for general Lie groups; and Eisenstein series generalize in a similar fashion. Defining to be the ring of integers of a totally real algebraic number field , one then defines the Hilbert–Blumenthal modular group as . One can then associate an Eisenstein series to every cusp of the Hilbert–Blumenthal modular group. References Further reading Translated into English as Mathematical series Modular forms Analytic number theory Fractals
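The product identities of the previous section translate into arithmetic identities among divisor sums; the short check below verifies the weight-8 case, E4^2 = E8, coefficient by coefficient, assuming the standard normalizations E4 = 1 + 240·(sum of sigma_3(n) q^n) and E8 = 1 + 480·(sum of sigma_7(n) q^n). The number of terms N is an arbitrary choice for the demonstration.

```python
from sympy import divisor_sigma

# Check E4(q)^2 == E8(q) term by term, which is equivalent to
#   sigma_7(n) = sigma_3(n) + 120 * sum_{m=1}^{n-1} sigma_3(m) sigma_3(n-m).
N = 30  # number of q-expansion coefficients to compare

def eisenstein_coeffs(weight, factor, n_terms):
    """Coefficients of 1 + factor * sum_{n>=1} sigma_{weight-1}(n) q^n."""
    return [1] + [factor * divisor_sigma(n, weight - 1) for n in range(1, n_terms)]

e4 = eisenstein_coeffs(4, 240, N)
e8 = eisenstein_coeffs(8, 480, N)

# Cauchy product of the E4 expansion with itself, truncated to N terms.
e4_squared = [sum(e4[m] * e4[n - m] for m in range(n + 1)) for n in range(N)]

assert e4_squared == e8, "q-expansions disagree"
print(f"E4^2 and E8 agree on the first {N} coefficients")
```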
Eisenstein series
[ "Mathematics" ]
1,025
[ "Sequences and series", "Analytic number theory", "Functions and mappings", "Series (mathematics)", "Mathematical structures", "Mathematical analysis", "Calculus", "Mathematical objects", "Fractals", "Mathematical relations", "Modular forms", "Number theory" ]
1,270,631
https://en.wikipedia.org/wiki/Methoxy%20group
In organic chemistry, a methoxy group is the functional group consisting of a methyl group bound to oxygen. This alkoxy group has the formula . On a benzene ring, the Hammett equation classifies a methoxy substituent at the para position as an electron-donating group, but as an electron-withdrawing group if at the meta position. At the ortho position, steric effects are likely to cause a significant alteration in the Hammett equation prediction which otherwise follows the same trend as that of the para position. Occurrence The simplest of methoxy compounds are methanol and dimethyl ether. Other methoxy ethers include anisole and vanillin. Many metal alkoxides contain methoxy groups, such as tetramethyl orthosilicate and titanium methoxide. Esters with a methoxy group can be referred to as methyl esters, and the —COOCH3 substituent is called a methoxycarbonyl. Biosynthesis In nature, methoxy groups are found on nucleosides that have been subjected to 2′-O-methylation, for example in variations of the 5′-cap structure known as cap-1 and cap-2. They are also common substituents in O-methylated flavonoids, whose formation is catalyzed by O-methyltransferases that act on phenols, such as catechol-O-methyl transferase (COMT). Many natural products in plants, such as lignins, are generated via catalysis by caffeoyl-CoA O-methyltransferase. Methoxylation Organic methoxides are often produced by methylation of alkoxides. Some aryl methoxides can be synthesized by metal-catalyzed methylation of phenols, or by methoxylation of aryl halides. References Alkoxy groups
Methoxy group
[ "Chemistry" ]
415
[ "Substituents", "Alkoxy groups", "O-methylation", "Functional groups", "Methylation" ]
1,270,878
https://en.wikipedia.org/wiki/Acid-fastness
Acid-fastness is a physical property of certain bacterial and eukaryotic cells, as well as some sub-cellular structures, specifically their resistance to decolorization by acids during laboratory staining procedures. Once stained as part of a sample, these organisms can resist the acid and/or ethanol-based decolorization procedures common in many staining protocols, hence the name acid-fast. The mechanisms of acid-fastness vary by species although the most well-known example is in the genus Mycobacterium, which includes the species responsible for tuberculosis and leprosy. The acid-fastness of Mycobacteria is due to the high mycolic acid content of their cell walls, which is responsible for the staining pattern of poor absorption followed by high retention. Some bacteria may also be partially acid-fast, such as Nocardia. Acid-fast organisms are difficult to characterize using standard microbiological techniques, though they can be stained using concentrated dyes, particularly when the staining process is combined with heat. Some, such as Mycobacteria, can be stained with the Gram stain, but they do not take the crystal violet well and thus appear light purple, which can still potentially result in an incorrect gram negative identification. The most common staining technique used to identify acid-fast bacteria is the Ziehl–Neelsen stain, in which the acid-fast species are stained bright red and stand out clearly against a blue background. Another method is the Kinyoun method, in which the bacteria are stained bright red and stand out clearly against a green background. Acid-fast Mycobacteria can also be visualized by fluorescence microscopy using specific fluorescent dyes (auramine-rhodamine stain, for example). Some acid-fast staining techniques Ziehl–Neelsen stain (classic and modified bleach types) Kinyoun stain For color blind people (or in backgrounds where detecting red bacteria is difficult), Victoria blue can be substituted for carbol fuchsin and picric acid can be used as the counter stain instead of methylene blue, and the rest of the Kinyoun technique can be used. Various bacterial spore staining techniques using Kenyon e.g. Moeller's method Dorner's method (acid alcohol decolorizer) without the Schaeffer–Fulton modification (decolorize by water) Detergent method, using Tergitol 7, nonionic polyglycol ether surfactants type NP-7 Fite stain Fite-Faraco stain Wade Fite stain Ellis and Zabrowarny stain (no phenol/carbolic acid) Auramine-rhodamine stain Auramine phenol stain Notable acid-fast structures Very few structures are acid-fast; this makes staining for acid-fastness particularly useful in diagnosis. The following are notable examples of structures which are acid-fast or modified acid-fast: All Mycobacteria – M. tuberculosis, M. leprae, M. smegmatis and atypical mycobacteria. Certain Actinobacteria (especially aerobic ones in the order Mycobacteriales) with mycolic acid in their cell wall; not to be confused with Actinomyces, which is a non-acid-fast genus of actinomycete. Note that Streptomyces do not contain mycolic acid. Nocardia (weakly acid-fast; resists decolorization with weaker acid concentrations) Rhodococcus Gordonia Tsukamurella Dietzia Head of sperm Bacterial spores, see Endospore Legionella micdadei Certain cellular inclusions e.g. Cytoplasmic inclusion bodies seen in Neurons in layer 5 of cerebral cortex neuronal ceroid lipofuscinosis (Batten disease). Nuclear inclusion bodies seen in Lead poisoning Bismuth poisoning. 
Oocysts of some coccidian parasites in faecal matter, such as: Cryptosporidium parvum, Isospora belli Cyclospora cayetanensis. A few other parasites: Sarcocystis Taenia saginata eggs stain well but Taenia solium eggs don't (can be used to distinguish) Hydatid cysts, especially their "hooklets" stain irregularly with ZN stain but emanate bright red fluorescence under green light, and can aid detection in moderately heavy backgrounds or with scarce hooklets. Fungal yeast forms are inconsistently stained with Acid-fast stain which is considered a narrow spectrum stain for fungi. In a study on acid-fastness of fungi, 60% of blastomyces and 47% of histoplasma showed positive cytoplasmic staining of the yeast-like cells, and Cryptococcus or candida did not stain, and very rare staining was seen in Coccidioides endospores. References Online protocol examples Ziehl–Neelsen protocol (PDF format). Alternate Ellis & Zabrowarny method for staining AFB. Bacteria Staining
Acid-fastness
[ "Chemistry", "Biology" ]
1,057
[ "Staining", "Prokaryotes", "Microbiology techniques", "Bacteria", "Microscopy", "Microorganisms", "Cell imaging" ]
1,271,003
https://en.wikipedia.org/wiki/Five%20prime%20untranslated%20region
The 5′ untranslated region (also known as 5′ UTR, leader sequence, transcript leader, or leader RNA) is the region of a messenger RNA (mRNA) that is directly upstream from the initiation codon. This region is important for the regulation of translation of a transcript by differing mechanisms in viruses, prokaryotes and eukaryotes. While called untranslated, the 5′ UTR or a portion of it is sometimes translated into a protein product. This product can then regulate the translation of the main coding sequence of the mRNA. In many organisms, however, the 5′ UTR is completely untranslated, instead forming a complex secondary structure to regulate translation. Although not common, the 5′ UTR region can sometimes be translated . In addition, this region has been involved in transcription regulation, such as the sex-lethal gene in Drosophila. Regulatory elements within 5′ UTRs have also been linked to mRNA export. General structure Length The 5′ UTR begins at the transcription start site and ends one nucleotide (nt) before the initiation sequence (usually AUG) of the coding region. In prokaryotes, the length of the 5′ UTR tends to be 3–10 nucleotides long, while in eukaryotes it tends to be anywhere from 100 to several thousand nucleotides long. For example, the ste11 transcript in Schizosaccharomyces pombe has a 2273 nucleotide 5′ UTR while the lac operon in Escherichia coli only has seven nucleotides in its 5′ UTR. The differing sizes are likely due to the complexity of the eukaryotic regulation which the 5′ UTR holds as well as the larger pre-initiation complex that must form to begin translation. The 5′ UTR can also be completely missing, in the case of leaderless mRNAs. Ribosomes of all three domains of life accept and translate such mRNAs. Such sequences are naturally found in all three domains of life. Humans have many pressure-related genes under a 2–3 nucleotide leader. Mammals also have other types of ultra-short leaders like the TISU sequence. Elements The elements of a eukaryotic and prokaryotic 5′ UTR differ greatly. The prokaryotic 5′ UTR contains a ribosome binding site (RBS), also known as the Shine–Dalgarno sequence (AGGAGGU), which is usually 3–10 base pairs upstream from the initiation codon. In contrast, the eukaryotic 5′ UTR contains the Kozak consensus sequence (ACCAUGG), which contains the initiation codon. The eukaryotic 5′ UTR also contains cis-acting regulatory elements called upstream open reading frames (uORFs) and upstream AUGs (uAUGs) and termination codons, which have a great impact on the regulation of translation (see below). Unlike prokaryotes, 5′ UTRs can harbor introns in eukaryotes. In humans, ~35% of all genes harbor introns within the 5′ UTR. Secondary structure As the 5′ UTR has high GC content, secondary structures often occur within it. Hairpin loops are one such secondary structure that can be located within the 5′ UTR. These secondary structures also impact the regulation of translation. Role in translational regulation Prokaryotes In bacteria, the initiation of translation occurs when IF-3, along with the 30S ribosomal subunit, bind to the Shine–Dalgarno (SD) sequence of the 5′ UTR. This then recruits many other proteins, such as the 50S ribosomal subunit, which allows for translation to begin. Each of these steps regulates the initiation of translation. Initiation in Archaea is less understood. SD sequences are much rarer, and the initiation factors have more in common with eukaryotic ones. There is no homolog of bacterial IF3. 
Some mRNAs are leaderless. In both domains, genes without Shine–Dalgarno sequences are also translated in a less understood manner. A requirement seems to be a lack of secondary structure near the initiation codon. Eukaryotes Pre-initiation complex regulation The regulation of translation in eukaryotes is more complex than in prokaryotes. Initially, the eIF4F complex is recruited to the 5′ cap, which in turn recruits the ribosomal complex to the 5′ UTR. Both eIF4E and eIF4G bind the 5′ UTR, which limits the rate at which translational initiation can occur. However, this is not the only regulatory step of translation that involves the 5′ UTR. RNA-binding proteins sometimes serve to prevent the pre-initiation complex from forming. An example is regulation of the msl2 gene. The protein SXL attaches to an intron segment located within the 5′ UTR segment of the primary transcript, which leads to the inclusion of the intron after processing. This sequence allows the recruitment of proteins that bind simultaneously to both the 5′ and 3′ UTR, not allowing translation proteins to assemble. However, it has also been noted that SXL can also repress translation of RNAs that do not contain a poly(A) tail, or more generally, 3′ UTR. Closed-loop regulation Another important regulator of translation is the interaction between 3′ UTR and the 5′ UTR. The closed-loop structure inhibits translation. This has been observed in Xenopus laevis, in which eIF4E bound to the 5′ cap interacts with Maskin bound to CPEB on the 3′ UTR, creating translationally inactive transcripts. This translational inhibition is lifted once CPEB is phosphorylated, displacing the Maskin binding site, allowing for the polymerization of the PolyA tail, which can recruit the translational machinery by means of PABP. However, it is important to note that this mechanism has been under great scrutiny. Ferritin regulation Iron levels in cells are maintained by translation regulation of many proteins involved in iron storage and metabolism. The 5′ UTR has the ability to form a hairpin loop secondary structure (known as the iron response element or IRE) that is recognized by iron-regulatory proteins (IRP1 and IRP2). In low levels of iron, the ORF of the target mRNA is blocked as a result of steric hindrance from the binding of IRP1 and IRP2 to the IRE. When iron is high, then the two iron-regulatory proteins do not bind as strongly and allow proteins to be expressed that have a role in iron concentration control. This function has gained some interest after it was revealed that the translation of amyloid precursor protein may be disrupted due to a single-nucleotide polymorphism to the IRE found in the 5′ UTR of its mRNA, leading to a spontaneous increased risk of Alzheimer's disease. uORFs and reinitiation Another form of translational regulation in eukaryotes comes from unique elements on the 5′ UTR called upstream open reading frames (uORF). These elements are fairly common, occurring in 35–49% of all human genes. A uORF is a coding sequence located in the 5′ UTR located upstream of the coding sequences initiation site. These uORFs contain their own initiation codon, known as an upstream AUG (uAUG). This codon can be scanned for by ribosomes and then translated to create a product, which can regulate the translation of the main protein coding sequence or other uORFs that may exist on the same transcript. The translation of the protein within the main ORF after a uORF sequence has been translated is known as reinitiation. 
The process of reinitiation is known to reduce the translation of the ORF protein. Control of protein regulation is determined by the distance between the uORF and the first codon in the main ORF. A uORF has been found to increase reinitiation with the longer distance between its uAUG and the start codon of the main ORF, which indicates that the ribosome needs to reacquire translation factors before it can carry out translation of the main protein. For example, ATF4 regulation is performed by two uORFs further upstream, named uORF1 and uORF2, which contain three amino acids and fifty-nine amino acids, respectively. The location of uORF2 overlaps with the ATF4 ORF. During normal conditions, the uORF1 is translated, and then translation of uORF2 occurs only after eIF2-TC has been reacquired. Translation of the uORF2 requires that the ribosomes pass by the ATF4 ORF, whose start codon is located within uORF2. This leads to its repression. However, during stress conditions, the 40S ribosome will bypass uORF2 because of a decrease in concentration of eIF2-TC, which means the ribosome does not acquire one in time to translate uORF2. Instead, ATF4 is translated. Other mechanisms In addition to reinitiation, uORFs contribute to translation initiation based on: The nucleotides of an uORF may code for a codon that leads to a highly structured mRNA, causing the ribosome to stall. cis- and trans- regulation on translation of the main protein coding sequence. Interactions with IRES sites. Internal ribosome entry sites and viruses Viral (as well as some eukaryotic) 5′ UTRs contain internal ribosome entry sites, which is a cap-independent method of translational activation. Instead of building up a complex at the 5′ cap, the IRES allows for direct binding of the ribosomal complexes to the transcript to begin translation. The IRES enables the viral transcript to translate more efficiently due to the lack of needing a preinitation complex, allowing the virus to replicate quickly. Role in transcriptional regulation msl-2 transcript Transcription of the msl-2 transcript is regulated by multiple binding sites for fly Sxl at the 5′ UTR. In particular, these poly-uracil sites are located close to a small intron that is spliced in males, but kept in females through splicing inhibition. This splicing inhibition is maintained by Sxl. When present, Sxl will repress the translation of msl2 by increasing translation of a start codon located in a uORF in the 5′ UTR (see above for more information on uORFs). Also, Sxl outcompetes TIA-1 to a poly(U) region and prevents snRNP (a step in alternative splicing) recruitment to the 5′ splice site. See also Three prime untranslated region UORF Iron-responsive element-binding protein Iron response element Trans-splicing UTRdb References RNA Gene expression
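To make the uORF and uAUG terminology concrete, below is a toy scanner that reports upstream AUGs in a 5′ UTR and the points where an in-frame stop codon closes the resulting uORF; the example sequence, the helper name find_uorfs, and the simplified rules are all hypothetical and much cruder than real annotation pipelines.

```python
# Toy scanner for upstream open reading frames (uORFs) in a 5' UTR.
# Simplification: a uORF is any AUG in the UTR that is followed, in frame,
# by a stop codon. Real annotation also considers Kozak context, minimum
# length, and overlap with the main ORF.
STOP_CODONS = {"UAA", "UAG", "UGA"}

def find_uorfs(utr, downstream=""):
    """Return (start, end) positions of uORFs whose AUG lies in the UTR."""
    seq = utr + downstream
    uorfs = []
    for start in range(len(utr) - 2):
        if seq[start:start + 3] != "AUG":
            continue
        for pos in range(start + 3, len(seq) - 2, 3):
            if seq[pos:pos + 3] in STOP_CODONS:
                uorfs.append((start, pos + 3))
                break
    return uorfs

# Hypothetical 5' UTR containing one short uORF before the main start codon.
utr = "GGCACGAUGGCUUAAACCGCCACC"
print(find_uorfs(utr, downstream="AUGGGU"))  # -> [(6, 15)]
```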
Five prime untranslated region
[ "Chemistry", "Biology" ]
2,284
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
18,649,261
https://en.wikipedia.org/wiki/Hooke%27s%20atom
Hooke's atom, also known as harmonium or hookium, refers to an artificial helium-like atom where the Coulombic electron-nucleus interaction potential is replaced by a harmonic potential. This system is of significance as it is, for certain values of the force constant defining the harmonic containment, an exactly solvable ground-state many-electron problem that explicitly includes electron correlation. As such it can provide insight into quantum correlation (albeit in the presence of a non-physical nuclear potential) and can act as a test system for judging the accuracy of approximate quantum chemical methods for solving the Schrödinger equation. The name "Hooke's atom" arises because the harmonic potential used to describe the electron-nucleus interaction is a consequence of Hooke's law. Definition Employing atomic units, the Hamiltonian defining the Hooke's atom is As written, the first two terms are the kinetic energy operators of the two electrons, the third term is the harmonic electron-nucleus potential, and the final term the electron-electron interaction potential. The non-relativistic Hamiltonian of the helium atom differs only in the replacement: Solution The equation to be solved is the two electron Schrödinger equation: For arbitrary values of the force constant, , the Schrödinger equation does not have an analytic solution. However, for a countably infinite number of values, such as , simple closed form solutions can be derived. Given the artificial nature of the system this restriction does not hinder the usefulness of the solution. To solve, the system is first transformed from the Cartesian electronic coordinates, , to the center of mass coordinates, , defined as Under this transformation, the Hamiltonian becomes separable – that is, the term coupling the two electrons is removed (and not replaced by some other form) allowing the general separation of variables technique to be applied to further a solution for the wave function in the form . The original Schrödinger equation is then replaced by: The first equation for is the Schrödinger equation for an isotropic quantum harmonic oscillator with ground-state energy and (unnormalized) wave function Asymptotically, the second equation again behaves as a harmonic oscillator of the form and the rotationally invariant ground state can be expressed, in general, as for some function . It was long noted that is very well approximated by a linear function in . Thirty years after the proposal of the model an exact solution was discovered for , and it was seen that . It was later shown that there are many values of which lead to an exact solution for the ground state, as will be shown in the following. Decomposing and expressing the Laplacian in spherical coordinates, one further decomposes the radial wave function as which removes the first derivative to yield The asymptotic behavior encourages a solution of the form The differential equation satisfied by is This equation lends itself to a solution by way of the Frobenius method. That is, is expressed as for some and which satisfy: The two solutions to the indicial equation are and of which the former is taken as it yields the regular (bounded, normalizable) wave function. For a simple solution to exist, the infinite series is sought to terminate and it is here where particular values of are exploited for an exact closed-form solution. Terminating the polynomial at any particular order can be accomplished with different values of defining the Hamiltonian. 
As such there exists an infinite number of systems, differing only in the strength of the harmonic containment, with exact ground-state solutions. Most simply, to impose for , two conditions must be satisfied: These directly force and respectively, and as a consequence of the three term recession, all higher coefficients also vanish. Solving for and yields and the radial wave function Transforming back to the ground-state (with and energy ) is finally Combining, normalizing, and transforming back to the original coordinates yields the ground state wave function: The corresponding ground-state total energy is then . Remarks The exact ground state electronic density of the Hooke atom for the special case is From this we see that the radial derivative of the density vanishes at the nucleus. This is in stark contrast to the real (non-relativistic) helium atom where the density displays a cusp at the nucleus as a result of the unbounded Coulomb potential. See also List of quantum-mechanical systems with analytical solutions References Further reading Quantum chemistry Quantum models
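Because the equations in this entry were stripped during extraction, the LaTeX below records the Hamiltonian in atomic units together with the best-known exactly solvable case as it is usually quoted in the literature; the force constant value k = 1/4, the unnormalized wave function and the 2 hartree energy are the standard published forms, supplied here rather than recovered from the original markup.

```latex
% Hooke's atom Hamiltonian (atomic units): harmonic confinement of both
% electrons plus their Coulomb repulsion.
\[
  \hat H \;=\; -\tfrac12\nabla_{\mathbf r_1}^{2} - \tfrac12\nabla_{\mathbf r_2}^{2}
  \;+\; \tfrac{k}{2}\bigl(r_1^{2}+r_2^{2}\bigr)
  \;+\; \frac{1}{\lvert \mathbf r_1-\mathbf r_2\rvert}.
\]
% For k = 1/4 the ground state is known in closed form (up to normalization):
\[
  \Psi(\mathbf r_1,\mathbf r_2) \;\propto\;
  e^{-(r_1^{2}+r_2^{2})/4}\,\Bigl(1+\tfrac12\lvert\mathbf r_1-\mathbf r_2\rvert\Bigr),
  \qquad E = 2\ \text{hartree}.
\]
```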
Hooke's atom
[ "Physics", "Chemistry" ]
915
[ "Quantum chemistry", "Quantum mechanics", "Quantum models", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
18,651,732
https://en.wikipedia.org/wiki/Coherence%20%28signal%20processing%29
In signal processing, the coherence is a statistic that can be used to examine the relation between two signals or data sets. It is commonly used to estimate the power transfer between input and output of a linear system. If the signals are ergodic, and the system function is linear, it can be used to estimate the causality between the input and output. Definition and formulation The coherence (sometimes called magnitude-squared coherence) between two signals x(t) and y(t) is a real-valued function that is defined as: where Gxy(f) is the Cross-spectral density between x and y, and Gxx(f) and Gyy(f) the auto spectral density of x and y respectively. The magnitude of the spectral density is denoted as |G|. Given the restrictions noted above (ergodicity, linearity) the coherence function estimates the extent to which y(t) may be predicted from x(t) by an optimum linear least squares function. Values of coherence will always satisfy . For an ideal constant parameter linear system with a single input x(t) and single output y(t), the coherence will be equal to one. To see this, consider a linear system with an impulse response h(t) defined as: , where denotes convolution. In the Fourier domain this equation becomes , where Y(f) is the Fourier transform of y(t) and H(f) is the linear system transfer function. Since, for an ideal linear system: and , and since is real, the following identity holds, . However, in the physical world an ideal linear system is rarely realized, noise is an inherent component of system measurement, and it is likely that a single input, single output linear system is insufficient to capture the complete system dynamics. In cases where the ideal linear system assumptions are insufficient, the Cauchy–Schwarz inequality guarantees a value of . If Cxy is less than one but greater than zero it is an indication that either: noise is entering the measurements, that the assumed function relating x(t) and y(t) is not linear, or that y(t) is producing output due to input x(t) as well as other inputs. If the coherence is equal to zero, it is an indication that x(t) and y(t) are completely unrelated, given the constraints mentioned above. The coherence of a linear system therefore represents the fractional part of the output signal power that is produced by the input at that frequency. We can also view the quantity as an estimate of the fractional power of the output that is not contributed by the input at a particular frequency. This leads naturally to definition of the coherent output spectrum: provides a spectral quantification of the output power that is uncorrelated with noise or other inputs. Example Here we illustrate the computation of coherence (denoted as ) as shown in figure 1. Consider the two signals shown in the lower portion of figure 2. There appears to be a close relationship between the ocean surface water levels and the groundwater well levels. It is also clear that the barometric pressure has an effect on both the ocean water levels and groundwater levels. Figure 3 shows the autospectral density of ocean water level over a long period of time. As expected, most of the energy is centered on the well-known tidal frequencies. Likewise, the autospectral density of groundwater well levels are shown in figure 4. It is clear that variation of the groundwater levels have significant power at the ocean tidal frequencies. To estimate the extent at which the groundwater levels are influenced by the ocean surface levels, we compute the coherence between them. 
Let us assume that there is a linear relationship between the ocean surface height and the groundwater levels. We further assume that the ocean surface height controls the groundwater levels so that we take the ocean surface height as the input variable, and the groundwater well height as the output variable. The computed coherence (figure 1) indicates that at most of the major ocean tidal frequencies the variation of groundwater level at this particular site is over 90% due to the forcing of the ocean tides. However, one must exercise caution in attributing causality. If the relation (transfer function) between the input and output is nonlinear, then values of the coherence can be erroneous. Another common mistake is to assume a causal input/output relation between observed variables, when in fact the causative mechanism is not in the system model. For example, it is clear that the atmospheric barometric pressure induces a variation in both the ocean water levels and the groundwater levels, but the barometric pressure is not included in the system model as an input variable. We have also assumed that the ocean water levels drive or control the groundwater levels. In reality it is a combination of hydrological forcing from the ocean water levels and the tidal potential that are driving both the observed input and output signals. Additionally, noise introduced in the measurement process, or by the spectral signal processing can contribute to or corrupt the coherence. Extension to non-stationary signals If the signals are non-stationary, (and therefore not ergodic), the above formulations may not be appropriate. For such signals, the concept of coherence has been extended by using the concept of time-frequency distributions to represent the time-varying spectral variations of non-stationary signals in lieu of traditional spectra. For more details, see. Application in neuroscience Coherence has been used to measure dynamic functional connectivity in brain networks. Studies show that the coherence between different brain regions can change during different mental or perceptual states. Brain coherence during the rest state can be affected by disorders and diseases. See also Bicoherence Scaled Correlation Normalized cross-correlation References Signal processing Telecommunication theory Frequency-domain analysis
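As a self-contained illustration of the magnitude-squared coherence, Cxy(f) = |Gxy(f)|^2 / (Gxx(f) Gyy(f)), the sketch below passes white noise through a simple low-pass filter, adds measurement noise, and estimates the coherence with SciPy; the filter, noise level and sampling rate are arbitrary stand-ins rather than the tidal data of the example above.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 200.0                                  # sampling rate (Hz), arbitrary
t = np.arange(0, 60, 1 / fs)

x = rng.standard_normal(t.size)             # broadband "input" signal
b, a = signal.butter(4, 0.2)                # a stand-in linear system (low-pass)
y = signal.lfilter(b, a, x) + 0.3 * rng.standard_normal(t.size)  # output + noise

f, cxy = signal.coherence(x, y, fs=fs, nperseg=1024)

# Coherence stays near 1 in the filter passband, where the output is
# dominated by the input, and falls toward 0 where the added noise dominates.
print(f"coherence at {f[10]:.1f} Hz: {cxy[10]:.2f}")
print(f"coherence at {f[-10]:.1f} Hz: {cxy[-10]:.2f}")
```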
Coherence (signal processing)
[ "Physics", "Technology", "Engineering" ]
1,210
[ "Telecommunications engineering", "Computer engineering", "Spectrum (physical sciences)", "Signal processing", "Frequency-domain analysis" ]
18,652,109
https://en.wikipedia.org/wiki/Yeoh%20hyperelastic%20model
The Yeoh hyperelastic material model is a phenomenological model for the deformation of nearly incompressible, nonlinear elastic materials such as rubber. The model is based on Ronald Rivlin's observation that the elastic properties of rubber may be described using a strain energy density function which is a power series in the strain invariants of the Cauchy-Green deformation tensors. The Yeoh model for incompressible rubber is a function only of . For compressible rubbers, a dependence on is added on. Since a polynomial form of the strain energy density function is used but all the three invariants of the left Cauchy-Green deformation tensor are not, the Yeoh model is also called the reduced polynomial model. Yeoh model for incompressible rubbers Strain energy density function The original model proposed by Yeoh had a cubic form with only dependence and is applicable to purely incompressible materials. The strain energy density for this model is written as where are material constants. The quantity can be interpreted as the initial shear modulus. Today a slightly more generalized version of the Yeoh model is used. This model includes terms and is written as When the Yeoh model reduces to the neo-Hookean model for incompressible materials. For consistency with linear elasticity the Yeoh model has to satisfy the condition where is the shear modulus of the material. Now, at , Therefore, the consistency condition for the Yeoh model is Stress-deformation relations The Cauchy stress for the incompressible Yeoh model is given by Uniaxial extension For uniaxial extension in the -direction, the principal stretches are . From incompressibility . Hence . Therefore, The left Cauchy-Green deformation tensor can then be expressed as If the directions of the principal stretches are oriented with the coordinate basis vectors, we have Since , we have Therefore, The engineering strain is . The engineering stress is Equibiaxial extension For equibiaxial extension in the and directions, the principal stretches are . From incompressibility . Hence . Therefore, The left Cauchy-Green deformation tensor can then be expressed as If the directions of the principal stretches are oriented with the coordinate basis vectors, we have Since , we have Therefore, The engineering strain is . The engineering stress is Planar extension Planar extension tests are carried out on thin specimens which are constrained from deforming in one direction. For planar extension in the directions with the direction constrained, the principal stretches are . From incompressibility . Hence . Therefore, The left Cauchy-Green deformation tensor can then be expressed as If the directions of the principal stretches are oriented with the coordinate basis vectors, we have Since , we have Therefore, The engineering strain is . The engineering stress is Yeoh model for compressible rubbers A version of the Yeoh model that includes dependence is used for compressible rubbers. The strain energy density function for this model is written as where , and are material constants. The quantity is interpreted as half the initial shear modulus, while is interpreted as half the initial bulk modulus. When the compressible Yeoh model reduces to the neo-Hookean model for incompressible materials. History The model is named after Oon Hock Yeoh. Yeoh completed his doctoral studies under Graham Lake at the University of London. 
Yeoh held research positions at Freudenberg-NOK, MRPRA (England), Rubber Research Institute of Malaysia (Malaysia), University of Akron, GenCorp Research, and Lord Corporation. Yeoh won the 2004 Melvin Mooney Distinguished Technology Award from the ACS Rubber Division. References See also Hyperelastic material Strain energy density function Mooney-Rivlin solid Finite strain theory Stress measures Elasticity (physics) Rubber properties Solid mechanics Continuum mechanics
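Since the displayed formulas in this entry did not survive extraction, the LaTeX below records the incompressible three-term Yeoh strain energy and the uniaxial stress relations in their commonly quoted form; the symbols C_i, I_1 and the stretch lambda follow the usual convention and are supplied here rather than taken from the original markup.

```latex
% Three-term incompressible Yeoh strain energy density, with I_1 = tr(B):
\[
  W \;=\; C_1\,(I_1-3) + C_2\,(I_1-3)^2 + C_3\,(I_1-3)^3,
  \qquad \mu = 2\,C_1 \ \text{(initial shear modulus)}.
\]
% Uniaxial extension with stretch \lambda (incompressibility gives
% I_1 = \lambda^2 + 2/\lambda); Cauchy and engineering stresses:
\[
  \sigma = 2\bigl(\lambda^{2}-\lambda^{-1}\bigr)\,\frac{\partial W}{\partial I_1},
  \qquad
  \sigma_{\mathrm{eng}} = \frac{\sigma}{\lambda}
  = 2\bigl(\lambda-\lambda^{-2}\bigr)\,\frac{\partial W}{\partial I_1}.
\]
```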
Yeoh hyperelastic model
[ "Physics", "Materials_science" ]
791
[ "Solid mechanics", "Physical phenomena", "Elasticity (physics)", "Continuum mechanics", "Deformation (mechanics)", "Classical mechanics", "Mechanics", "Physical properties" ]
18,654,152
https://en.wikipedia.org/wiki/Kalundborg%20Eco-industrial%20Park
Kalundborg Eco-Industrial Park is an industrial symbiosis network located in Kalundborg, Denmark, in which companies in the region collaborate to use each other's by-products and otherwise share resources. The Kalundborg Eco-Industrial Park is the first full realization of industrial symbiosis. The collaboration and its environmental implications arose unintentionally through private initiatives, as opposed to government planning, making it a model for private planning of eco-industrial parks. At the center of the exchange network is the Asnæs Power Station, a 1500MW coal-fired power plant, which has material and energy links with the community and several other companies. Surplus heat from this power plant is used to heat 3500 local homes in addition to a nearby fish farm, whose sludge is then sold as a fertilizer. Steam from the power plant is sold to Novo Nordisk, a pharmaceutical and enzyme manufacturer, in addition to Statoil oil refinery. This reuse of heat reduces the amount thermal pollution discharged to a nearby fjord. Additionally, a by-product from the power plant's sulfur dioxide scrubber contains gypsum, which is sold to a wallboard manufacturer. Almost all of the manufacturer's gypsum needs are met this way, which reduces the amount of open-pit mining needed. Furthermore, fly ash and clinker from the power plant is used for road building and cement production. These exchanges of waste, water and materials have greatly increased environmental and economic efficiency, as well as created other less tangible benefits for these actors, including sharing of personnel, equipment, and information. History The Kalundborg Industrial Park was not originally planned for industrial symbiosis. Its current state of waste heat and materials sharing developed over a period of 20 years. Early sharing at Kalundborg tended to involve the sale of waste products without significant pretreatment. Each further link in the system was negotiated as an independent business deal, and was established only if it was expected to be economically beneficial. The park began in 1959 with the start up of the Asnæs Power Station. The first episode of sharing between two entities was in 1972 when Gyproc, a plaster-board manufacturing plant, established a pipeline to supply gas from Tidewater Oil Company. In 1981 the Kalundborg municipality completed a district heating distribution network within the city of Kalundborg, which utilized waste heat from the power plant. Since then, the facilities in Kalundborg have been expanding, and have been sharing a variety of materials and waste products, some for the purpose of industrial symbiosis and some out of necessity, for example, freshwater scarcity in the area has led to water reuse schemes. In particular, 700,000 cubic meters per year of cooling water is piped from Statoil to Asnaes. A timeline of the creation of the industrial park: 1959 The Asnæs Power Station was started up 1961 Tidewater Oil Company constructed a pipeline from Lake Tissø to provide water for its operation 1963 Tidewater Oil Company's oil refinery is taken over by Esso 1972 Gyproc establishes plaster-board manufacturing plant. A pipeline from the refinery to the Gyproc facility is constructed to supply excess refinery gas 1973 The Asnæs Power Station is expanded. 
A connection is built to the Lake Tissø-Statoil pipeline 1976 Novo Nordisk starts delivering biological sludge to neighboring farms 1979 Asnæs Power Station starts supplying fly ash to cement manufacturers in northern Denmark 1981 the Kalundborg municipality completes a district heating distribution network within the city that utilizes waste heat from the power plant 1982 Novo Nordisk and the Statoil refinery complete construction of steam supply pipelines from the power plant. By purchasing process steam from the power plant, the companies are able to shut down inefficient steam boilers 1987 The Statoil refinery completes a pipeline to supply its effluent cooling water to the power plant for use as raw boiler feed water. 1989 The power plant starts using waste heat from its salt cooling water to produce trout and turbot at its local fish farm 1989 Novo Nordisk enters into agreement with Kalundborg municipality, the power plant, and the refinery to connect to the water supply grid from Lake Tissø 1990 The Statoil refinery completes construction of a sulphur recovery plant. The recovered sulphur is sold as raw material to a sulfuric acid manufacturer in Fredericia 1991 The Statoil refinery commissions the building of a pipeline to supply biologically treated refinery effluent water to the power plant for cleaning purposes, and for fly ash stabilization 1992 The Statoil refinery commissions the building of a pipeline to supply flare gas to the power plant as a supplementary fuel 1993 The power plant completes a stack flue gas desulfurization project. The resulting calcium sulphate is sold to Gyproc, where it replaces imported natural gypsum The Symbiosis The relationships among the firms comprising the Kalundborg Eco-Industrial Park form an industrial symbiosis. Generally speaking, the actors involved in the symbiosis at Kalundborg exchange material wastes, energy, water, and information. The Kalundborg network involves a number of actors, including a power station, two big energy firms, a plaster board company, and a soil remediation company. Other actors include farmers, recycling facilities, and fish factories that use some of the material flows. Kalundborg Municipality plays an active role. Additionally, other actors, such as Novoren, a recycling and urban land field firm, are formally part of the network but do not contribute tangibly in the exchange. A researcher studying the evolution of the Kalundborg Symbiosis concluded that a high level of trust between the actors involved represented an essential element to collaborative success. Partners The Kalundborg Eco-Industrial Park today includes nine private and public enterprises, some of which are some of the largest enterprises in Denmark. The enterprises are: Novo Nordisk - Danish company and largest producer of insulin in the world Novozymes - Danish company and largest enzyme producer in the world Gyproc - French producer of gypsum board Kalundborg Municipality Ørsted A/S - owner of Asnaes Power Station, the largest power plant in Denmark RGS 90 - Danish soil remediation and recovery company Statoil - Norwegian company which owns Denmark's largest oil refinery Kara/Novoren - Danish waste treatment company Kalundborg Forsyning A/S - water and heat supplier, as well as waste disposer, for Kalundborg citizens Material Exchanges There are currently over thirty exchanges of materials among the actors of Kalundborg. The Asnaes Power Station is at the heart of the network. 
The power company gives its steam residuals to the Statoil Refinery, meeting 40% of its steam requirements, in exchange for waste gas from the refinery. The power plant creates electricity and steam from this gas. These products are sent to a fish farm and Novo Nordisk, who receive all of their required steam from Asnaes, and a heating system that supplies 3500 homes. These homeowners pay for the underground piping that supplies their heat, but receive the heat reliably and at a low price. Fly ash from Asnaes is sent to a cement company, and gypsum from its desulfurization process is sent to Gyproc for use in gypsum board. Two-thirds of Gyproc's gypsum needs are met by Asnaes. Statoil Refinery removes sulfur from its natural gas and sells it to a sulfuric acid manufacturer, Kemira. The fish farm sells sludge from its ponds as fertilizer to nearby farms, and Novo Nordisk gives away its own sludge, of which it produces 3,000 cubic meters per day. The sludge is to be refined for biogas for the power plant. Water reuse schemes have also been developed within Kalundborg. Statoil pipes 700,000 cubic meters of cooling water per year to Asnaes, which purifies it and uses it as "boiler feed-water." Asnaes also uses approximately 200,000 cubic meters of Statoil's treated wastewater per year for cleaning. The 90 °C residual heat from the refinery is not used for district heating due to taxes. Instead, heat pumps are used with the 24 °C waste water as a heat reservoir. Savings and environmental impacts Since its start over 25 years ago, Kalundborg has been operating successfully as an eco-industrial park. One of the main goals of industrial symbiosis is to make goods and services that use the least-cost combination of inputs. These relationships were formed on an economic and environmental basis. As mentioned above, there are over thirty exchanges occurring in Kalundborg. While Kalundborg does operate using trades between various firms in the vicinity, it itself is not self-sufficient or contained to the industrial park. There are many trades that occur with companies outside of this park region. All of these exchanges have contributed to water savings, and savings in fuel and input chemicals. Wastes were also avoided through these interchanges. For example, in 1997, Asnaes (the power station) saved 30,000 tons of coal (~2% of throughput) by using Statoil (large oil refinery) fuel gas. And 200,000 tons of fly ash and clinker were avoided from Asnaes landfill. These resources savings and waste avoidances, documented before 1997, are illustrated in the tables to the right. A study in 2002 showed that these exchanges also contributed to more than 95% of the total water supply to the power plant. This is up from 70% in 1990. So, the system is becoming more comprehensive in its ability to save groundwater, however, there is still room for improvement. Out of the 1.2 million m3 of wastewater discharged from Statoil (the refinery), only 9000 m3 were reused at the power plant. More recent numbers show a vast improvement, when comparing to the numbers from 1997, in resource savings. Data from around 2004 show annual savings of 2.9 million cubic meters of ground water, and 1 million cubic meters of surface water. Gypsum savings are estimated around 170,000 tons, and sulfur dioxide waste avoidance is estimated around 53 Tn. These numbers are mostly estimations. Aspects of the eco-industrial park have changed, and there are many levels to consider when doing these calculations. 
All together though, these interchanges have shown annual savings of up to $15 million (US), with investments around $78.5 million (US). The total accumulated savings is estimated around $310 million (US). As a Model Kalundborg was the first example of separate industries grouping together to gain competitive advantage by material exchange, energy exchange, information exchange, and/or product exchange. The very term industrial symbiosis (IS) was first defined by a station manager in Kalundborg as "a cooperation between different industries by which the presence of each…increases the viability of the others, and by which the demands of society for resource savings and environmental protection are considered". Kalundborg's success helped generate interest in industrial symbiosis. Developed nations such as the United States began to formulate incentives for corporations to implement materials exchange with other corporations. Industrial and political circles began to look into the implementation of eco-industrial parks (EIPs). Specifically, the United States worked to put into service several planned EIPs. The U.S. President's Council on Sustainable Development in 1996 proposed fifteen eco-industrial parks to pursue the idea of industrial symbiosis. These parks were created by grouping diverse stakeholders with common material flows together, with added governmental incentives to encourage materials exchange. The goal of these planned EIPs was to test if the industrial symbiosis that worked so well in Kalundborg could be replicated. The Council on Sustainable Development also defined 5 major characteristics of a successful EIP to help guide EIP development. These characteristics include: (1) some form of material exchange between multiple separate entities, (2) industries in close proximity to each other, (3) cooperation between plant management of the different corporations, (4) an existing infrastructure for material sharing that does not require much retooling, and (5) "anchor" tenants (large corporation with resources to support early implementation). Devens Regional Enterprise Zone is a good example of a successful EIP in the United States. Kalundborg became an attractive topic in academia as well because of the obvious sustainability advantages of industrial symbiosis. Research conducted on planning and implementation of eco-industrial parks revealed interesting results. Experts argued over the idea of "planned parks" versus "self organized parks". Research showed systematic failure of forced or planned EIPs. Most successful EIPs originate from industrial symbiosis that occurs naturally during industry life, much like the Kalundborg case. This conclusion served to deflate the momentum that the success of Kalundborg generated. Organizations began to recognize the difficulties associated with forcing eco-industrial parks to coalesce and abandoned the idea. See also Eco-industrial park EcoPark - EIP in Hong-Kong Industrial ecology Industrial symbiosis References External links The Kalundborg Centre for Industrial Symbiosis Indigo Development Eco-Industrial Park page and handbook Existing and Developing Eco-Industrial Park Sites in the U.S. The Kalundborg eco-industrial park with a perspective of sustainable city planning (Chinese version) Industrial ecology Industrial parks in Denmark Waste processing sites Eco-industrial Park
Kalundborg Eco-industrial Park
[ "Chemistry", "Engineering" ]
2,806
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
18,654,720
https://en.wikipedia.org/wiki/Organic%20solderability%20preservative
Organic solderability preservative or OSP is a method for coating printed circuit boards. It uses a water-based organic compound that selectively bonds to copper and protects the copper until soldering. Compared to the traditional HASL process, the OSP process is widely used in the electronics manufacturing industry because it does not require high-temperature treatment and significantly reduces the risk of metal corrosion, environmental impact and damage to electronic components. The basic principle of OSP is to form a protective layer against oxidation and corrosion by applying a layer consisting of a mixture of organic acids, nitrogen compounds and related constituents. The OSP process involves delicate steps designed to ensure the uniformity and quality of the protective layer. Although environmentally friendly and suitable for microelectronic manufacturing, the complexity of the process requires strict control of the composition and quality of the coating agent to guarantee consistent board performance. The compounds typically used are from the azole family, such as benzotriazoles, imidazoles and benzimidazoles. These adsorb on copper surfaces by forming coordination bonds with copper atoms and form thicker films through the formation of copper(I)–N-heterocycle complexes. The typical film thickness used is in the tens to hundreds of nanometers. See also Electroless nickel immersion gold (ENIG) Hot air solder leveling (HASL) Immersion silver plating (IAg) Immersion tin plating (ISn) Reflow soldering Wave soldering References Tong, K. H., M. T. Ku, K. L. Hsu, Q. Tang, C. Y. Chan, and K. W. Yee. “The Evolution of Organic Solderability Preservative (OSP) Process in PCB Application.” 2013 8th International Microsystems, Packaging, Assembly and Circuits Technology Conference (IMPACT). Institute of Electrical & Electronics Engineers (IEEE), October 2013. doi:10.1109/impact.2013.6706620. Printed circuit board manufacturing
Organic solderability preservative
[ "Engineering" ]
411
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
3,574,554
https://en.wikipedia.org/wiki/Mechanical%20resonance
Mechanical resonance is the tendency of a mechanical system to respond at greater amplitude when the frequency of its oscillations matches the system's natural frequency of vibration (its resonance frequency or resonant frequency) closer than it does other frequencies. It may cause violent swaying motions and potentially catastrophic failure in improperly constructed structures including bridges, buildings and airplanes. This is a phenomenon known as resonance disaster. Avoiding resonance disasters is a major concern in every building, tower and bridge construction project. The Taipei 101 building for instance relies on a 660-ton pendulum—a tuned mass damper—to modify the response at resonance. The structure is also designed to resonate at a frequency which does not typically occur. Buildings in seismic zones are often constructed to take into account the oscillating frequencies of expected ground motion. Engineers designing objects that have engines must ensure that the mechanical resonant frequencies of the component parts do not match driving vibrational frequencies of the motors or other strongly oscillating parts. Many resonant objects have more than one resonance frequency. Such objects will vibrate easily at those frequencies, and less so at other frequencies. Many clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal. Description The natural frequency of the very simple mechanical system consisting of a weight suspended by a spring is: where m is the mass and k is the spring constant. For a given mass, stiffening the system (increasing ) increases its natural frequency, which is a general characteristic of vibrating mechanical systems. A swing set is another simple example of a resonant system with which most people have practical experience. It is a form of pendulum. If the system is excited (pushed) with a period between pushes equal to the inverse of the pendulum's natural frequency, the swing will swing higher and higher, but if excited at a different frequency, it will be difficult to move. The resonance frequency of a pendulum, the only frequency at which it will vibrate, is given approximately, for small displacements, by the equation: where g is the acceleration due to gravity (about 9.8 m/s2 near the surface of Earth), and L is the length from the pivot point to the center of mass. (An elliptic integral yields a description for any displacement). Note that, in this approximation, the frequency does not depend on mass. Mechanical resonators work by transferring energy repeatedly from kinetic to potential form and back again. In the pendulum, for example, all the energy is stored as gravitational energy (a form of potential energy) when the bob is instantaneously motionless at the top of its swing. This energy is proportional to both the mass of the bob and its height above the lowest point. As the bob descends and picks up speed, its potential energy is gradually converted to kinetic energy (energy of movement), which is proportional to the bob's mass and to the square of its speed. When the bob is at the bottom of its travel, it has maximum kinetic energy and minimum potential energy. The same process then happens in reverse as the bob climbs towards the top of its swing. Some resonant objects have more than one resonance frequency, particularly at harmonics (multiples) of the strongest resonance. It will vibrate easily at those frequencies, and less so at other frequencies. 
It will "pick out" its resonance frequency from a complex excitation, such as an impulse or a wideband noise excitation. In effect, it is filtering out all frequencies other than its resonance. In the example above, the swing cannot easily be excited by harmonic frequencies, but can be excited by subharmonics, such as pushing the swing every second or third oscillation. Examples Various examples of mechanical resonance include: Musical instruments (acoustic resonance). Most clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal. Tidal resonance of the Bay of Fundy. Orbital resonance, as in some moons of the Solar System's giant planets. The resonance of the basilar membrane in the ear. A wineglass breaking when someone sings a loud note at exactly the right pitch. Resonance may cause violent swaying motions in constructed structures, such as bridges and buildings. The London Millennium Footbridge (nicknamed the Wobbly Bridge) exhibited this problem. A faulty bridge can even be destroyed by its resonance (see Broughton Suspension Bridge and Angers Bridge). Mechanical systems store potential energy in different forms. For example, a spring/mass system stores energy as tension in the spring, which is ultimately stored as the energy of bonds between atoms. Resonance disaster In mechanics and construction a resonance disaster describes the destruction of a building or a technical mechanism by induced vibrations at a system's resonant frequency, which causes it to oscillate. Periodic excitation optimally transfers to the system the energy of the vibration and stores it there. Because of this repeated storage and additional energy input the system swings ever more strongly, until its load limit is exceeded. Tacoma Narrows Bridge The dramatic, rhythmic twisting that resulted in the 1940 collapse of "Galloping Gertie", the original Tacoma Narrows Bridge, is sometimes characterized in physics textbooks as a classic example of resonance. The catastrophic vibrations that destroyed the bridge were due to an oscillation caused by interactions between the bridge and the winds passing through its structure—a phenomenon known as aeroelastic flutter. Robert H. Scanlan, father of the field of bridge aerodynamics, wrote an article about this. Other examples Collapse of Broughton Suspension Bridge (due to soldiers walking in step) Collapse of Angers Bridge Collapse of Königs Wusterhausen Central Tower Resonance of the Millennium Bridge Applications Various method of inducing mechanical resonance in a medium exist. Mechanical waves can be generated in a medium by subjecting an electromechanical element to an alternating electric field having a frequency which induces mechanical resonance and is below any electrical resonance frequency. Such devices can apply mechanical energy from an external source to an element to mechanically stress the element or apply mechanical energy produced by the element to an external load. The United States Patent Office classifies devices that tests mechanical resonance under subclass 579, resonance, frequency, or amplitude study, of Class 73, Measuring and testing. This subclass is itself indented under subclass 570, Vibration. Such devices test an article or mechanism by subjecting it to a vibratory force for determining qualities, characteristics, or conditions thereof, or sensing, studying or making analysis of the vibrations otherwise generated in or existing in the article or mechanism. 
Such devices include methods to cause vibrations at a natural mechanical resonance and to measure the frequency and/or amplitude of the resonance produced. Other devices study the amplitude response over a frequency range; this includes nodal points, wave lengths, and standing wave characteristics measured under predetermined vibration conditions. See also Dunkerley's method Electrical resonance List of laser applications Mechanical filter Reed switch Resonator Sympathetic resonance Transducer Notes Further reading S. Spinner and W. E. Tefft, A method for determining mechanical resonance frequencies and for calculating elastic moduli from these frequencies. American Society for Testing and Materials. C. C. Jones, A mechanical resonance apparatus for undergraduate laboratories. American Journal of Physics, 1995. Patents Method and apparatus for inspecting materials Apparatus for testing textiles Apparatus for testing textiles and like materials Testing and adjusting device Method and apparatus for testing materials Article testing machine Apparatus for determining the behavior of suspended cables Mechanical resonance detection systems Vibrating-blade relays with electro-mechanical resonance Quantum mechanical resonance devices Mechanical resonance indicator Piezoelectric resonance device Tuned ground motion detector utilizing principles of mechanical resonance Apparatus and method for generating mechanical waves Method of controlling mechanical resonance hand Apparatus and method for suppressing mechanical resonance in a mass transit vehicle Mechanical vibrations Earthquake engineering Resonance
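The natural-frequency expressions quoted in the Description section — f = (1/2π)√(k/m) for a mass on a spring and f ≈ (1/2π)√(g/L) for a small-amplitude pendulum — are easy to evaluate numerically. The sketch below is illustrative only; the stiffness, mass and length values are made-up examples, not taken from the article.

```python
import math

def spring_mass_frequency(k, m):
    """Natural frequency (Hz) of a mass m (kg) on a spring of stiffness k (N/m)."""
    return math.sqrt(k / m) / (2 * math.pi)

def pendulum_frequency(length, g=9.8):
    """Small-amplitude natural frequency (Hz) of a pendulum of given length (m)."""
    return math.sqrt(g / length) / (2 * math.pi)

if __name__ == "__main__":
    # Illustrative values only (not from the article).
    print(f"1 kg on a 100 N/m spring: {spring_mass_frequency(100, 1.0):.2f} Hz")
    # Stiffening the system raises its natural frequency, as stated above.
    print(f"1 kg on a 400 N/m spring: {spring_mass_frequency(400, 1.0):.2f} Hz")
    # In this approximation the pendulum frequency depends on length, not mass.
    print(f"1 m pendulum: {pendulum_frequency(1.0):.2f} Hz")
```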
Mechanical resonance
[ "Physics", "Chemistry", "Engineering" ]
1,638
[ "Resonance", "Structural engineering", "Physical phenomena", "Waves", "Scattering", "Civil engineering", "Mechanics", "Mechanical vibrations", "Earthquake engineering" ]
3,574,564
https://en.wikipedia.org/wiki/Electrical%20resonance
Electrical resonance occurs in an electric circuit at a particular resonant frequency when the impedances or admittances of circuit elements cancel each other. In some circuits, this happens when the impedance between the input and output of the circuit is almost zero and the transfer function is close to one. Resonant circuits exhibit ringing and can generate higher voltages or currents than are fed into them. They are widely used in wireless (radio) transmission for both transmission and reception. LC circuits Resonance of a circuit involving capacitors and inductors occurs because the collapsing magnetic field of the inductor generates an electric current in its windings that charges the capacitor, and then the discharging capacitor provides an electric current that builds the magnetic field in the inductor. This process is repeated continually. An analogy is a mechanical pendulum, and both are a form of simple harmonic oscillator. At resonance, the series impedance of the LC circuit is at a minimum and the parallel impedance is at maximum. Resonance is used for tuning and filtering, because it occurs at a particular frequency for given values of inductance and capacitance. It can be detrimental to the operation of communications circuits by causing unwanted sustained and transient oscillations that may cause noise, signal distortion, and damage to circuit elements. Parallel resonance or near-to-resonance circuits can be used to prevent the waste of electrical energy, which would otherwise occur while the inductor built its field or the capacitor charged and discharged. As an example, asynchronous motors waste inductive current while synchronous ones waste capacitive current. The use of the two types in parallel makes the inductor feed the capacitor, and vice versa, maintaining the same resonant current in the circuit, and converting all the current into useful work. Since the inductive reactance and the capacitive reactance are of equal magnitude, , so , where , in which is the resonance frequency in hertz, is the inductance in henries, and is the capacitance in farads, when standard SI units are used. The quality of the resonance (how long it will ring when excited) is determined by its factor, which is a function of resistance: . An idealized, lossless circuit has infinite , but all actual circuits have some resistance and finite , and are usually approximated more realistically by an circuit. RLC circuit An RLC circuit (or LCR circuit) is an electrical circuit consisting of a resistor, an inductor, and a capacitor, connected in series or in parallel. The RLC part of the name is due to those letters being the usual electrical symbols for resistance, inductance and capacitance respectively. The circuit forms a harmonic oscillator for current and resonates similarly to an LC circuit. The main difference stemming from the presence of the resistor is that any oscillation induced in the circuit decays over time if it is not kept going by a source. This effect of the resistor is called damping. The presence of the resistance also reduces the peak resonant frequency of damped oscillation, although the resonant frequency for driven oscillations remains the same as an LC circuit. Some resistance is unavoidable in real circuits, even if a resistor is not specifically included as a separate component. A pure LC circuit is an ideal that exists only in theory. There are many applications for this circuit. It is used in many different types of oscillator circuits. 
An important application is for tuning, such as in radio receivers or television sets, where they are used to select a narrow range of frequencies from the ambient radio waves. In this role the circuit is often referred to as a tuned circuit. An RLC circuit can be used as a band-pass filter, band-stop filter, low-pass filter or high-pass filter. The tuning application, for instance, is an example of band-pass filtering. The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation in circuit analysis. The three circuit elements can be combined in a number of different topologies. All three elements in series or all three elements in parallel are the simplest in concept and the most straightforward to analyse. There are, however, other arrangements, some with practical importance in real circuits. One issue often encountered is the need to take into account inductor resistance. Inductors are typically constructed from coils of wire, the resistance of which is not usually desirable, but it often has a significant effect on the circuit. Example A series circuit has resistance of 4 Ω, and inductance of 500 mH, and a variable capacitance. Supply voltage is 100 V alternating at 50 Hz. At resonance The capacitance required to give series resonance is calculated as: Resonance voltages across the inductor and the capacitor, and , will be: As shown in this example, when the series circuit is at resonance, the magnitudes of the voltages across the inductor and capacitor can become many times larger than the supply voltage. See also Antiresonance Antenna theory Cavity resonator Electronic filter Resonant energy transfer - wireless energy transmission between two resonant coils References Electronic circuits Filter theory Synthesizers
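A quick numerical check of the series-resonance example above, computed from the values given there and assuming the standard relations f = 1/(2π√(LC)), I = V/R at resonance, and V_L = V_C = I·2πfL. This is a sketch for illustration, not part of the original article.

```python
import math

# Values from the worked example: R = 4 ohm, L = 500 mH, 100 V supply at 50 Hz.
R, L, V, f = 4.0, 0.5, 100.0, 50.0

# Capacitance that makes the circuit resonate at the supply frequency:
# f = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / ((2*pi*f)**2 * L)
C = 1.0 / ((2 * math.pi * f) ** 2 * L)

# At resonance the reactances cancel, so the current is limited only by R.
I = V / R
X_L = 2 * math.pi * f * L        # inductive reactance (equal to 1/(2*pi*f*C) here)
V_L = I * X_L                    # voltage magnitude across the inductor
V_C = I / (2 * math.pi * f * C)  # voltage magnitude across the capacitor

print(f"C   = {C * 1e6:.1f} microfarad")
print(f"I   = {I:.1f} A")
print(f"V_L = {V_L:.0f} V, V_C = {V_C:.0f} V  (far above the 100 V supply)")
```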
Electrical resonance
[ "Engineering" ]
1,123
[ "Telecommunications engineering", "Electronic engineering", "Electronic circuits", "Filter theory" ]
3,574,578
https://en.wikipedia.org/wiki/Acoustic%20resonance
Acoustic resonance is a phenomenon in which an acoustic system amplifies sound waves whose frequency matches one of its own natural frequencies of vibration (its resonance frequencies). The term "acoustic resonance" is sometimes used to narrow mechanical resonance to the frequency range of human hearing, but since acoustics is defined in general terms concerning vibrational waves in matter, acoustic resonance can occur at frequencies outside the range of human hearing. An acoustically resonant object usually has more than one resonance frequency, especially at harmonics of the strongest resonance. It will easily vibrate at those frequencies, and vibrate less strongly at other frequencies. It will "pick out" its resonance frequency from a complex excitation, such as an impulse or a wideband noise excitation. In effect, it is filtering out all frequencies other than its resonance. Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of a drum membrane. Acoustic resonance is also important for hearing. For example, resonance of a stiff structural element, called the basilar membrane within the cochlea of the inner ear allows hair cells on the membrane to detect sound. (For mammals the membrane has tapering resonances across its length so that high frequencies are concentrated on one end and low frequencies on the other.) Like mechanical resonance, acoustic resonance can result in catastrophic failure of the vibrator. The classic example of this is breaking a wine glass with sound at the precise resonant frequency of the glass. Vibrating string In musical instruments, strings under tension, as in lutes, harps, guitars, pianos, violins and so forth, have resonant frequencies directly related to the mass, length, and tension of the string. The wavelength that will create the first resonance on the string is equal to twice the length of the string. Higher resonances correspond to wavelengths that are integer divisions of the fundamental wavelength. The corresponding frequencies are related to the speed v of a wave traveling down the string by the equation where L is the length of the string (for a string fixed at both ends) and n = 1, 2, 3...(Harmonic in an open end pipe (that is, both ends of the pipe are open)). The speed of a wave through a string or wire is related to its tension T and the mass per unit length ρ: So the frequency is related to the properties of the string by the equation where T is the tension, ρ is the mass per unit length, and m is the total mass. Higher tension and shorter lengths increase the resonant frequencies. When the string is excited with an impulsive function (a finger pluck or a strike by a hammer), the string vibrates at all the frequencies present in the impulse (an impulsive function theoretically contains 'all' frequencies). Those frequencies that are not one of the resonances are quickly filtered out—they are attenuated—and all that is left is the harmonic vibrations that we hear as a musical note. String resonance in music instruments String resonance occurs on string instruments. Strings or parts of strings may resonate at their fundamental or overtone frequencies when other strings are sounded. For example, an A string at 440 Hz will cause an E string at 330 Hz to resonate, because they share an overtone of 1320 Hz (3rd overtone of A and 4th overtone of E). 
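As a numerical companion to the string relations above — f_n = n·v/(2L) with wave speed v = √(T/ρ) — the sketch below lists the first few resonances of a string fixed at both ends. The tension, length and linear density are illustrative values, not taken from the article; the last line reproduces the shared-overtone arithmetic for the 440 Hz and 330 Hz strings mentioned above.

```python
import math

def string_wave_speed(tension, linear_density):
    """Wave speed v = sqrt(T / rho) on a stretched string."""
    return math.sqrt(tension / linear_density)

def string_harmonics(length, tension, linear_density, count=4):
    """First `count` resonant frequencies f_n = n*v/(2L) of a string fixed at both ends."""
    v = string_wave_speed(tension, linear_density)
    return [n * v / (2 * length) for n in range(1, count + 1)]

if __name__ == "__main__":
    # Illustrative parameters only: 0.65 m string, 60 N tension, 0.4 g/m linear density.
    for n, f in enumerate(string_harmonics(0.65, 60.0, 0.0004), start=1):
        print(f"harmonic {n}: {f:.1f} Hz")
    # Shared overtone of the A (440 Hz) and E (330 Hz) strings, as in the text:
    print("440 * 3 =", 440 * 3, "Hz;", "330 * 4 =", 330 * 4, "Hz")
```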
Resonance of a tube of air The resonance of a tube of air is related to the length of the tube, its shape, and whether it has closed or open ends. Many musical instruments resemble tubes that are conical or cylindrical (see bore). A pipe that is closed at one end and open at the other is said to be stopped or closed while an open pipe is open at both ends. Modern orchestral flutes behave as open cylindrical pipes; clarinets behave as closed cylindrical pipes; and saxophones, oboes, and bassoons as closed conical pipes, while most modern lip-reed instruments (brass instruments) are acoustically similar to closed conical pipes with some deviations (see pedal tones and false tones). Like strings, vibrating air columns in ideal cylindrical or conical pipes also have resonances at harmonics, although there are some differences. Cylinders Any cylinder resonates at multiple frequencies, producing multiple musical pitches. The lowest frequency is called the fundamental frequency or the first harmonic. Cylinders used as musical instruments are generally open, either at both ends, like a flute, or at one end, like some organ pipes. However, a cylinder closed at both ends can also be used to create or visualize sound waves, as in a Rubens Tube. The resonance properties of a cylinder may be understood by considering the behavior of a sound wave in air. Sound travels as a longitudinal compression wave, causing air molecules to move back and forth along the direction of travel. Within a tube, a standing wave is formed, whose wavelength depends on the length of the tube. At the closed end of the tube, air molecules cannot move much, so this end of the tube is a displacement node in the standing wave. At the open end of the tube, air molecules can move freely, producing a displacement antinode. Displacement nodes are pressure antinodes and vice versa. Closed at both ends The table below shows the displacement waves in a cylinder closed at both ends. Note that the air molecules near the closed ends cannot move, whereas the molecules near the center of the pipe move freely. In the first harmonic, the closed tube contains exactly half of a standing wave (node-antinode-node). Considering the pressure wave in this setup, the two closed ends are the antinodes for the change in pressure Δp; Therefore, at both ends, the change in pressure Δp must have the maximal amplitude (or satisfy in the form of the Sturm–Liouville formulation), which gives the equation for the pressure wave: . The intuition for this boundary condition at and is that the pressure of the closed ends will follow that of the point next to them. Applying the boundary condition at gives the wavelengths of the standing waves: And the resonant frequencies are Open at both ends In cylinders with both ends open, air molecules near the end move freely in and out of the tube. This movement produces displacement antinodes in the standing wave. Nodes tend to form inside the cylinder, away from the ends. In the first harmonic, the open tube contains exactly half of a standing wave (antinode-node-antinode). Thus the harmonics of the open cylinder are calculated in the same way as the harmonics of a closed/closed cylinder. The physics of a pipe open at both ends are explained in Physics Classroom. Note that the diagrams in this reference show displacement waves, similar to the ones shown above. These stand in sharp contrast to the pressure waves shown near the end of the present article. 
By overblowing an open tube, a note can be obtained that is an octave above the fundamental frequency or note of the tube. For example, if the fundamental note of an open pipe is C1, then overblowing the pipe gives C2, which is an octave above C1. Open cylindrical tubes resonate at the approximate frequencies: where n is a positive integer (1, 2, 3...) representing the resonance node, L is the length of the tube and v is the speed of sound in air (which is approximately at ). This equation comes from the boundary conditions for the pressure wave, which treats the open ends as pressure nodes where the change in pressure Δp must be zero. A more accurate equation considering an end correction is given below: where r is the radius of the resonance tube. This equation compensates for the fact that the exact point at which a sound wave is reflecting at an open end is not perfectly at the end section of the tube, but a small distance outside the tube. The reflection ratio is slightly less than 1; the open end does not behave like an infinitesimal acoustic impedance; rather, it has a finite value, called radiation impedance, which is dependent on the diameter of the tube, the wavelength, and the type of reflection board possibly present around the opening of the tube. So when n is 1: where v is the speed of sound, L is the length of the resonant tube, r is the radius of the tube, f is the resonant sound frequency, and λ is the resonant wavelength. Closed at one end When used in an organ a tube which is closed at one end is called a "stopped pipe". Such cylinders have a fundamental frequency but can be overblown to produce other higher frequencies or notes. These overblown registers can be tuned by using different degrees of conical taper. A closed tube resonates at the same fundamental frequency as an open tube twice its length, with a wavelength equal to four times its length. In a closed tube, a displacement node, or point of no vibration, always appears at the closed end and if the tube is resonating, it will have a displacement antinode, or point of greatest vibration at the Phi point (length × 0.618) near the open end. By overblowing a cylindrical closed tube, a note can be obtained that is approximately a twelfth above the fundamental note of the tube, or a fifth above the octave of the fundamental note. For example, if the fundamental note of a closed pipe is C1, then overblowing the pipe gives G2, which is one-twelfth above C1. Alternatively we can say that G2 is one-fifth above C2 — the octave above C1. Adjusting the taper of this cylinder for a decreasing cone can tune the second harmonic or overblown note close to the octave position or 8th. Opening a small "speaker hole" at the Phi point, or shared "wave/node" position will cancel the fundamental frequency and force the tube to resonate at a 12th above the fundamental. This technique is used in a recorder by pinching open the dorsal thumb hole. Moving this small hole upwards, closer to the voicing will make it an "Echo Hole" (Dolmetsch Recorder Modification) that will give a precise half note above the fundamental when opened. Note: Slight size or diameter adjustment is needed to zero in on the precise half note frequency. A closed tube will have approximate resonances of: where "n" here is an odd number (1, 3, 5...). This type of tube produces only odd harmonics and has its fundamental frequency an octave lower than that of an open cylinder (that is, half the frequency). 
This equation comes from the boundary conditions for the pressure wave, which treats the closed end as pressure antinodes where the change in pressure Δp must have the maximal amplitude, or satisfy in the form of the Sturm–Liouville formulation. The intuition for this boundary condition at is that the pressure of the closed end will follow that of the point next to it. A more accurate equation considering an end correction is given below: . Again, when n is 1: where v is the speed of sound, L is the length of the resonant tube, d is the diameter of the tube, f is the resonant sound frequency, and λ is the resonant wavelength. Pressure wave In the two diagrams below are shown the first three resonances of the pressure wave in a cylindrical tube, with antinodes at the closed end of the pipe. In diagram 1, the tube is open at both ends. In diagram 2, it is closed at one end. The horizontal axis is pressure. Note that in this case, the open end of the pipe is a pressure node while the closed end is a pressure antinode. Cones An open conical tube, that is, one in the shape of a frustum of a cone with both ends open, will have resonant frequencies approximately equal to those of an open cylindrical pipe of the same length. The resonant frequencies of a stopped conical tube — a complete cone or frustum with one end closed — satisfy a more complicated condition: where the wavenumber k is and x is the distance from the small end of the frustum to the vertex. When x is small, that is, when the cone is nearly complete, this becomes leading to resonant frequencies approximately equal to those of an open cylinder whose length equals L + x. In words, a complete conical pipe behaves approximately like an open cylindrical pipe of the same length, and to first order the behavior does not change if the complete cone is replaced by a closed frustum of that cone. Closed rectangular box Sound waves in a rectangular box include such examples as loudspeaker enclosures and buildings. Rectangular buildings have resonances described as room modes. For a rectangular box, the resonant frequencies are given by where v is the speed of sound, Lx and Ly and Lz are the dimensions of the box. , , and are nonnegative integers that cannot all be zero. If the small loudspeaker box is airtight, the frequency low enough and the compression is high enough, the sound pressure (decibel level) inside the box will be the same anywhere inside the box, this is hydraulic pressure. Resonance of a sphere of air (vented) The resonant frequency of a rigid cavity of static volume V0 with a necked sound hole of area A and length L is given by the Helmholtz resonance formula where is the equivalent length of the neck with end correction   for an unflanged neck   for a flanged neck For a spherical cavity, the resonant frequency formula becomes where D = diameter of sphere d = diameter of sound hole For a sphere with just a sound hole, L=0 and the surface of the sphere acts as a flange, so In dry air at 20 °C, with d and D in metres, f in hertz, this becomes Breaking glass with sound via resonance This is a classic demonstration of resonance. A glass has a natural resonance, a frequency at which the glass will vibrate easily. Therefore the glass needs to be moved by the sound wave at that frequency. If the force from the sound wave making the glass vibrate is big enough, the size of the vibration will become so large that the glass fractures. 
To do it reliably for a science demonstration requires practice and careful choice of the glass and loudspeaker. In musical composition Several composers have begun to make resonance the subject of compositions. Alvin Lucier has used acoustic instruments and sine wave generators to explore the resonance of objects large and small in many of his compositions. The complex inharmonic partials of a swell shaped crescendo and decrescendo on a tamtam or other percussion instrument interact with room resonances in James Tenney's Koan: Having Never Written A Note For Percussion. Pauline Oliveros and Stuart Dempster regularly perform in large reverberant spaces such as the cistern at Fort Worden, WA, which has a reverb with a 45-second decay. Malmö Academy of Music composition professor and composer Kent Olofsson's "Terpsichord, a piece for percussion and pre-recorded sounds, [uses] the resonances from the acoustic instruments [to] form sonic bridges to the pre-recorded electronic sounds, that, in turn, prolong the resonances, re-shaping them into new sonic gestures." See also Harmony Music theory Resonance Reverberation Standing wave Sympathetic string Reflection phase change References Nederveen, Cornelis Johannes, Acoustical aspects of woodwind instruments. Amsterdam, Frits Knuf, 1969. Rossing, Thomas D., and Fletcher, Neville H., Principles of Vibration and Sound. New York, Springer-Verlag, 1995. External links Standing Waves Applet Acoustics Musical instruments
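As a numerical companion to the cylinder formulas earlier in this article — approximately f_n = n·v/(2L) for a pipe open at both ends and f_n = n·v/(4L) with odd n only for a pipe closed at one end — here is a short sketch. It ignores end corrections, assumes a speed of sound of 343 m/s, and uses an illustrative pipe length that is not taken from the article.

```python
V_SOUND = 343.0  # approximate speed of sound in air at room temperature, m/s

def open_pipe_resonances(length, count=3):
    """f_n = n*v/(2L) for a cylinder open at both ends (no end correction)."""
    return [n * V_SOUND / (2 * length) for n in range(1, count + 1)]

def stopped_pipe_resonances(length, count=3):
    """f_n = n*v/(4L), odd n only, for a cylinder closed at one end."""
    return [n * V_SOUND / (4 * length) for n in (1, 3, 5)[:count]]

if __name__ == "__main__":
    L = 0.5  # illustrative pipe length in metres
    # The stopped pipe's fundamental is an octave below the open pipe's,
    # and only odd harmonics appear, as described above.
    print("open pipe:   ", [round(f, 1) for f in open_pipe_resonances(L)])
    print("stopped pipe:", [round(f, 1) for f in stopped_pipe_resonances(L)])
```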
Acoustic resonance
[ "Physics" ]
3,267
[ "Classical mechanics", "Acoustics" ]
3,575,593
https://en.wikipedia.org/wiki/Education%20and%20training%20of%20electrical%20and%20electronics%20engineers
Both electrical and electronics engineers typically possess an academic degree with a major in electrical/ electronics engineering. The length of study for such a degree is usually three or four years and the completed degree may be designated as a Bachelor of Engineering, Bachelor of Science or Bachelor of Applied Science depending upon the university. Scope of undergraduate education The degree generally includes units covering physics, mathematics, project management and specific topics in electrical and electronics engineering. Initially such topics cover most, if not all, of the sub fields of electrical engineering. Students then choose to specialize in one or more sub fields towards the end of the degree. In most countries, a bachelor's degree in engineering represents the first step towards certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States and Canada), Chartered Engineer (in the United Kingdom, Ireland, India, Pakistan, South Africa and Zimbabwe), Chartered Professional Engineer (in Australia) or European Engineer (in much of the European Union). Post graduate studies Electrical engineers can also choose to pursue a postgraduate degree such as a master of engineering, a doctor of philosophy in engineering or an engineer's degree. The master and engineer's degree may consist of either research, coursework or a mixture of the two. The doctor of philosophy consists of a significant research component and is often viewed as the entry point to academia. In the United Kingdom and various other European countries, the master of engineering is often considered an undergraduate degree of slightly longer duration than the bachelor of engineering. Typical electrical/electronics engineering undergraduate syllabus Apart from electromagnetics and network theory, other items in the syllabus are particular to electronics engineering course. Electrical engineering courses have other specializations such as machines, power generation and distribution. Note that the following list does not include the large quantity of mathematics (maybe apart from the final year) included in each year's study. Electromagnetics Elements of vector calculus: divergence and curl; Gauss' and Stokes' theorems, Maxwell's equations: differential and integral forms. Wave equation, Poynting vector. Plane waves: propagation through various media; reflection and refraction; phase and group velocity; skin depth. Transmission lines: characteristic impedance; impedance transformation; Smith chart; impedance matching; pulse excitation. Waveguides: modes in rectangular waveguides; boundary conditions; cut-off frequencies; dispersion relations. Antennas: Dipole antennas; antenna arrays; radiation pattern; reciprocity theorem, antenna gain. Additional basic fundamental in electrical are to be study Network theory Network graphs: matrices associated with graphs; incidence, fundamental cut set and fundamental circuit matrices. Solution methods: nodal and mesh analysis. Network theorems: superposition, Thevenin and Norton's maximum power transfer, Wye-Delta transformation. Steady state sinusoidal analysis using phasors. 
Linear constant coefficient differential equations; time domain analysis of simple RLC circuits, Solution of network equations using Laplace transform: frequency domain analysis of RLC circuits. 2-port network parameters: driving point and transfer functions. State equations. Electronic devices and circuits Electronic Devices: Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon: diffusion current, drift current, mobility, resistivity. Generation and recombination of carriers. p-n junction diode, Zener diode, tunnel diode, BJT, JFET, MOS capacitor, MOSFET, LED, p-I-n and avalanche photo diode, LASERs. Device technology: integrated circuits fabrication process, oxidation, diffusion, ion implantation, photolithography, n-tub, p-tub and twin-tub CMOS process. Analog Circuits: Equivalent circuits (large and small-signal) of diodes, BJTs, JFETs, and MOSFETs, Simple diode circuits, clipping, clamping, rectifier. Biasing and bias stability of transistor and FET amplifiers. Amplifiers: single-and multi-stage, differential, operational, feedback and power. Analysis of amplifiers; frequency response of amplifiers. Simple op-amp circuits. Filters. Sinusoidal oscillators; criterion for oscillation; single-transistor and op-amp configurations. Function generators and wave-shaping circuits. Power supplies. Digital circuits: Boolean algebra, minimization of Boolean functions; logic gates digital IC families (DTL, TTL, ECL, MOS, CMOS). Combinational circuits: arithmetic circuits, code converters, multiplexers and decoders. Sequential circuits: latches and flip-flops, counters and shift-registers. Sample and hold circuits, ADCs, DACs. Semiconductor memories. Microprocessor (8085): architecture, programming, memory and I/O interfacing. Signals and systems Definitions and properties of Laplace transform, continuous-time and discrete-time Fourier series, continuous-time and discrete-time Fourier Transform, z-transform. Sampling theorems. Linear Time-Invariant Systems: definitions and properties; casualty, stability, impulse response, convolution, poles and zeros frequency response, group delay, phase delay. Signal transmission through LTI systems. Random signals and noise: probability, random variables, probability density function, autocorrelation, power spectral density. Control systems Control system components; block diagrammatic description, reduction of block diagrams. Open loop and closed loop (feedback) systems and stability analysis of these systems. Signal flow graphs and their use in determining transfer functions of systems; transient and steady state analysis of LTI control systems and frequency response. Tools and techniques for LTI control system analysis: root loci, Routh-Hurwitz criterion, Bode and Nyquist plots. Control system compensators: elements of lead and lag compensation, elements of Proportional-Integral-Derivative control. State variable representation and solution of state equation of LTI control systems. Communications Communication systems: amplitude and angle modulation and demodulation systems, spectral analysis of these operations, superheterodyne receivers; elements of hardware, realizations of analog communication systems; signal-to-noise ratio calculations for amplitude modulation (AM) and frequency modulation (FM) for low noise conditions. 
Digital communication systems: pulse code modulation, differential pulse-code modulation, delta modulation; digital modulation schemes-amplitude, phase and frequency shift keying schemes, matched filter receivers, bandwidth consideration and probability of error calculations for these schemes. Certification The advantages of certification vary depending upon location. For example, in the United States and Canada "only a licensed engineer may...seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. In other countries, such as Australia, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations such as building codes and legislation pertaining to environmental law. Significant professional bodies for electrical engineers include the Institute of Electrical and Electronics Engineers and the Institution of Engineering and Technology. The former claims to produce 30 percent of the world's literature on electrical engineering, has over 360,000 members worldwide and holds over 300 conferences annually. The latter publishes 14 journals, has a worldwide membership of 120,000, certifies Chartered Engineers in the United Kingdom and claims to be the largest professional engineering society in Europe. See also Engineering education References Much of the above content seems to be copied from: Syllabus for Electronics and Communication Engineering. Graduate Aptitude Test in Engineering. IIT Delhi. (updated 2012-03-22). General information. Graduate Aptitude Test in Engineering. IIT Delhi. 2012 Terman, F. E. (1976). A brief history of electrical engineering education. Proceedings of the IEEE, 64 (9), 1399-1407. Full article can be read here. Education and training Electrical and electronics engineers
Education and training of electrical and electronics engineers
[ "Engineering" ]
1,756
[ "Electrical engineering" ]
22,403,212
https://en.wikipedia.org/wiki/Chemical%20glycosylation
A chemical glycosylation reaction involves the coupling of a glycosyl donor, to a glycosyl acceptor forming a glycoside. If both the donor and acceptor are sugars, then the product is an oligosaccharide. The reaction requires activation with a suitable activating reagent. The reactions often result in a mixture of products due to the creation of a new stereogenic centre at the anomeric position of the glycosyl donor. The formation of a glycosidic linkage allows for the synthesis of complex polysaccharides which may play important roles in biological processes and pathogenesis and therefore having synthetic analogs of these molecules allows for further studies with respect to their biological importance. Terminology The glycosylation reaction involves the coupling of a glycosyl donor and a glycosyl acceptor via initiation using an activator under suitable reaction conditions. A glycosyl donor is a sugar with a suitable leaving group at the anomeric position. This group, under the reaction conditions, is activated and via the formation of an oxocarbenium is eliminated leaving an electrophilic anomeric carbon. A glycosyl acceptor is a sugar with an unprotected nucleophilic hydroxyl group which may attack the carbon of the oxocarbenium ion formed during the reaction and allow for the formation of the glycosidic bond. An activator is commonly a Lewis acid which enables the leaving group at the anomeric position to leave and results in the formation of the oxocarbenium ion. Stereochemistry The formation of a glycosidic linkage results in the formation of a new stereogenic centre and therefore a mixture of products may be expected to result. The linkage formed may either be axial or equatorial (α or β with respect to glucose). To better understand this, the mechanism of a glycosylation reaction must be considered. Neighbouring group participation The stereochemical outcome of a glycosylation reaction may in certain cases be affected by the type of protecting group employed at position 2 of the glycosyl donor. A participating group, typically one with a carboxyl group present, will predominantly result in the formation of a β-glycoside. Whereas a non-participating group, a group usually without a carboxyl group, will often result in an α-glycoside. Below it can be seen that having an acetyl protecting group at position 2 allows for the formation for an acetoxonium ion intermediate that blocks attack to the bottom face of the ring therefore allowing for the formation of the β-glycoside predominantly. Alternatively, the absence of a participating group at position 2 allows for either attack from the bottom or top face. Since the α-glycoside product will be favoured by the anomeric effect, the α-glycoside usually predominates. Protecting groups Different protecting groups on either the glycosyl donor or the glycosyl acceptor may affect the reactivity and yield of the glycosylation reaction. Typically, electron-withdrawing groups such as acetyl or benzoyl groups are found to decrease the reactivity of the donor/acceptor and are therefore termed "disarming" groups. Electron-donating groups such as the benzyl group, are found to increase the reactivity of the donor/acceptor and are therefore called "arming" groups. Current methods in glycoside synthesis Glycosyl iodides Glycosyl iodides were first introduced for use in glycosylation reactions in 1901 by Koenigs and Knorr although were often considered too reactive for synthetic use. 
Recently several research groups have shown these donors to have unique reactive properties and can differ from other glycosyl chlorides or bromides with respect to reaction time, efficiency, and stereochemistry. Glycosyl iodides may be made under a variety of conditions, one method of note is the reaction of a 1-O-acetylpyranoside with TMSI. Iodide donors may typically be activated under basic conditions to give β-glycosides with good selectivity. The use of tetraalkylammonium iodide salts such as tetrabutylammonium iodide (TBAI) allows for in-situ anomerization of the α-glycosyl halide to the β-glycosyl halide and provides the α-glycoside in good selectivity. Thioglycosides Thioglycosides were first reported in 1909 by Fischer and since then have been explored constantly allowing for the development of numerous protocols for their preparation. The advantage of using thioglycosides is their stability under a wide range of reaction conditions allowing for protecting group manipulations. Additionally thioglycosides act as temporary protecting groups at the anomeric position allowing for thioglycosides to be useful as both glycosyl donors as well as glycosyl acceptors. Thioglycosides are usually prepared by reacting per-acetylated sugars with and the appropriate thiol. Thioglycosides used in glycosylation reactions as donors can be activated under a wide range of conditions, most notably using NIS/AgOTf. Trichloroacetimidates Trichloroacetimidates were first introduced and explored by Schmidt in 1980 and since then have become very popular for glycoside synthesis. The use of trichloroacetimidates provides many advantages including ease of formation, reactivity and stereochemical outcome. O-Glycosyl trichloroacetimidates are prepared via the addition of trichloroacetonitrile () under basic conditions to a free anomeric hydroxyl group. Typical activating groups for glycosylation reactions using trichloroacetimidates are or TMSOTf. Column chromatographic purification of the reaction mixture can sometimes be challenging due to the trichloroacetamide by-product. This can, however, be overcome by washing the organic layer with 1 M NaOH solution in a separatory funnel prior to chromatography. Acetyl protecting groups were found to be stable during this procedure. Notable synthetic products Below are a few examples of some notable targets obtained via a series of glycosylation reactions. See also Glycosylation Glycosyl transferase Glycorandomization Glycation Carbohydrate chemistry Oligosaccharide Carbohydrate References Books Carbohydrates Carbohydrate chemistry
Chemical glycosylation
[ "Chemistry" ]
1,410
[ "Biomolecules by chemical classification", "Carbohydrates", "Organic compounds", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Glycobiology" ]
22,408,665
https://en.wikipedia.org/wiki/Covariance%20intersection
Covariance intersection (CI) is an algorithm for combining two or more estimates of state variables in a Kalman filter when the correlation between them is unknown. Formulation Items of information a and b are known and are to be fused into information item c. We know a and b have mean/covariance â, A and b̂, B, but the cross-correlation between them is not known. The covariance intersection update gives the mean ĉ and covariance C for c as C⁻¹ = ωA⁻¹ + (1 − ω)B⁻¹ and C⁻¹ĉ = ωA⁻¹â + (1 − ω)B⁻¹b̂, where ω ∈ [0, 1] is computed to minimize a selected norm, e.g., the trace or the logarithm of the determinant of C. While it is necessary to solve an optimization problem for higher dimensions, closed-form solutions exist for lower dimensions. Application CI can be used in place of the conventional Kalman update equations to ensure that the resulting estimate is conservative, regardless of the correlation between the two estimates, with covariance strictly non-increasing according to the chosen measure. The use of a fixed measure is necessary for rigor, to ensure that a sequence of updates does not cause the filtered covariance to increase. Advantages According to a recent survey, covariance intersection has the following advantages: The identification and computation of the cross covariances are completely avoided. It yields a consistent fused estimate, and thus a non-divergent filter is obtained. The accuracy of the fused estimate outperforms each local estimate. It gives a common upper bound on the actual estimation error variances, which is robust with respect to unknown correlations. These advantages have been demonstrated in the case of simultaneous localization and mapping (SLAM) involving over a million map features/beacons. Motivation It is widely believed that unknown correlations exist in a diverse range of multi-sensor fusion problems. Neglecting the effects of unknown correlations can result in severe performance degradation, and even divergence. As such, the problem has attracted and sustained the attention of researchers for decades. However, owing to its intricate, unknown nature, it is not easy to come up with a satisfying scheme to address fusion problems with unknown correlations. If we ignore the correlations, the so-called "naive fusion", the filter may diverge. To compensate for this kind of divergence, a common sub-optimal approach is to artificially increase the system noise. However, this heuristic requires considerable expertise and compromises the integrity of the Kalman filter framework. References Control theory Nonlinear filters Linear filters Signal estimation Robot control
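As a rough illustration of the update equations above, the following minimal Python sketch fuses two estimates, choosing ω to minimize the trace of the fused covariance as mentioned in the formulation; the function name and the example numbers are illustrative, not taken from a particular reference.

```python
# Minimal sketch of covariance intersection for two estimates (a_hat, A)
# and (b_hat, B) with unknown cross-correlation; omega is chosen to
# minimize the trace of the fused covariance C.
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(a_hat, A, b_hat, B):
    Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)

    def fused_trace(omega):
        # Trace of C(omega) = (omega*A^-1 + (1-omega)*B^-1)^-1.
        return np.trace(np.linalg.inv(omega * Ai + (1 - omega) * Bi))

    omega = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
    C = np.linalg.inv(omega * Ai + (1 - omega) * Bi)
    c_hat = C @ (omega * Ai @ a_hat + (1 - omega) * Bi @ b_hat)
    return c_hat, C, omega

# Illustrative example: fuse two 2-D estimates of the same state.
a_hat, A = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
b_hat, B = np.array([0.5, 0.2]), np.diag([4.0, 1.0])
c_hat, C, omega = covariance_intersection(a_hat, A, b_hat, B)
print(omega, c_hat, np.trace(C))
```

Minimizing the log-determinant instead of the trace only requires swapping the objective; the rest of the update is unchanged.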
Covariance intersection
[ "Mathematics", "Engineering" ]
501
[ "Robotics engineering", "Applied mathematics", "Control theory", "Robot control", "Dynamical systems" ]
22,409,063
https://en.wikipedia.org/wiki/Sectional%20cooler
A sectional cooler is a type of liquid-cooled rotary drum cooler used for continuous processes in chemical engineering. Sectional coolers consist of a rotating cylinder ("drum" or "shell"), a drive unit and a support structure. At each end of the drum there are stationary chutes for material feed and outlet. Depending on the size of the cooler, the drum is pivoted on a shaft running through its axis, or is supported on running treads or external ring gears. The interior of the drum consists of several wedge-shaped chambers arranged around a central hollow shaft. This arrangement is completely surrounded by a jacket of water. In the wedge-shaped chambers are angled fins or shovels which move material through the cooler as the drum is rotated. Function Sectional coolers work by indirectly cooling the material passing through them with water. The water enters the cooler, reaches the space between the wedge-shaped chambers through the central hollow shaft, circulates between and around the chambers, then leaves the cooler. The material to be cooled usually falls into the feed chute directly from another machine. As the drum rotates, the fins on the inside faces of the wedge-shaped sections convey the material to the other end of the cooler. The rotation causes a constant mixing of the product in the chambers, providing good heat transfer and increasing efficiency. Sectional coolers can be rotated by chains, gears or by friction drive onto the drum itself. Applications Any free-flowing bulk material can be cooled in a sectional cooler. They are often found after rotary kilns in calcination or other similar processes. The purpose of a sectional cooler is usually to reduce the material temperature enough that the material can be handled with other machines such as conveyor belts and mills. Often the cooling itself is also an important part of the production process. Typical products processed with sectional coolers include calcined petroleum coke, zinc calcine, soda ash and pigments. The entry temperature of the products can reach up to 1400 °C. See also Rotary dryer Rotary kiln References Heat exchangers Cooling technology
Sectional cooler
[ "Chemistry", "Engineering" ]
414
[ "Chemical equipment", "Heat exchangers" ]
22,409,299
https://en.wikipedia.org/wiki/Framework%20region
In molecular biology, a framework region is a subdivision of the variable region (Fab) of the antibody. The variable region is composed of seven amino acid regions, four of which are framework regions and three of which are hypervariable regions. The framework region makes up about 85% of the variable region. Located on the tips of the Y-shaped molecule, the framework regions are responsible for acting as a scaffold for the complementarity determining regions (CDRs), also referred to as hypervariable regions, of the Fab. These CDRs are in direct contact with the antigen and are involved in binding antigen, while the framework regions support the binding of the CDRs to the antigen and aid in maintaining the overall structure of the four variable domains on the antibody. To increase its stability, the framework region has less variability in its amino acid sequences compared to the CDRs. Function The antibody has a three-dimensional structure with beta-pleated sheets and alpha helices. The antibody folds so that the variable regions form three loops, with the framework regions folded into one another and the CDR regions on the tips of each of these loops in direct contact with the antigen. Individual residues in the framework region can be divided into two categories, depending on whether they are in direct contact with the antigen. Framework residues that come in contact with the antigen are a part of the antibody's binding site and are located either close in sequence to the CDRs or in close proximity to the CDRs in the folded three-dimensional structure. Framework residues that do not come in contact with the antigen affect the binding indirectly by providing structural support for the CDRs. This enables each CDR to take on the correct orientation and position so it is exposed on the surface of the chain, ready to bind to an antigen. The framework regions are highly conserved regions of the variable portion of the antibody. The evolutionary reason for the conservation of these regions is to support proper folding of the antibody, allowing the CDR regions to be stabilized. Folding in the framework regions determines the flexibility or rigidity of the binding region of the antibody. Mutations Mutations in the framework regions of antibodies occur in cells by somatic hypermutation and during affinity maturation of the antibody. In vitro, mutations of the FR may occur naturally or by exposure to mutagens. Recent studies of framework mutations imply that framework region flexibility or rigidity could alter the specificity of the antibody for its intended epitope. While the framework region doesn't directly interact with antigen, its structure determines whether the CDRs can interact with antigen. If the CDR regions have high affinity for the epitope of the antigen, it has been found to be more effective to have a more rigid framework region. When the CDR does not have high affinity for antigen, mutations in the FR that create a more flexible structure may allow for higher affinity maturation. Natural mutations in the variable region are typically due to activation-induced cytidine deaminase (AID). AID leads to deamination of cytosine to uracil in DNA and results in somatic hypermutation. AID activity enables immunoglobulin class switching and also results in affinity maturation of the antibody. The CDRs are the areas of the variable regions in contact with antigen, and thus we see the most mutation in these regions, although the framework regions of the antibody are also mutated. 
Studies have shown that when the CDR is blocked from mutation and only the FR is mutated, certain mutations can lead to increased expression and thermostability of the antibody as a whole. Antibody humanization is an example of beneficial genetic engineering in medicine today. A humanized antibody is made by raising a non-human antibody in vivo in response to antigen, then isolating it and humanizing its framework and constant regions. It has been discovered that while these antibodies remain relatively intact upon humanization, the modifications can also lead to decreased binding affinity in the humanized framework regions and result in improper folding in humans. This observation is believed to be due to the framework region's role in antibody structure. See also Hypervariable region Variable Region References Proteins Antibodies
Framework region
[ "Chemistry" ]
850
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
22,409,391
https://en.wikipedia.org/wiki/Ferrier%20carbocyclization
The Ferrier carbocyclization (or Ferrier II reaction) is an organic reaction first reported by the carbohydrate chemist Robert J. Ferrier in 1979. It is a metal-mediated rearrangement of enol ether pyrans to cyclohexanones. Typically, this reaction is catalyzed by mercury salts, specifically mercury(II) chloride. Several reviews have been published. Reaction mechanism Ferrier proposed the following reaction mechanism: In this mechanism, the terminal olefin undergoes hydroxymercuration to produce the first intermediate, compound 2, a hemiacetal. Next, methanol is lost and the dicarbonyl compound cyclizes through an attack on the electrophilic aldehyde to form the carbocycle as the product. A downside of this reaction is that the loss of CH3OH at the anomeric position (carbon-1) results in a mixture of α- and β-anomers. The reaction also works for substituted alkenes (e.g. those bearing an -OAc group on the terminal alkene). Ferrier also reported that the final product, compound 5, could be converted into a conjugated ketone (compound 6) by reaction with acetic anhydride (Ac2O) and pyridine, as shown below. Modifications In 1997, Sinaÿ and co-workers reported an alternative route to the synthesis (shown below) that did not involve cleavage of the bond at the anomeric position (the glycosidic bond). In this case, the major product retained its original configuration at the anomeric position. (Bn = benzyl, i-Bu = isobutyl) Sinaÿ proposed this reaction went through the following transition state: Sinaÿ also discovered that titanium(IV) derivatives such as [TiCl3(OiPr)] worked in the same reaction as a milder version of the Lewis acid i-Bu3Al, going through a similar transition state involving retention of configuration at the anomeric centre. In 1988, Adam reported a modification of the reaction that used catalytic amounts of palladium(II) salts, which brought about the same conversion of enol ethers into carbosugars in a more environmentally friendly manner. Applications The development of the Ferrier carbocyclization has been useful for the synthesis of numerous natural products that contain the carbocycle group. In 1991, Bender and co-workers reported a synthetic route to pure enantiomers of myo-inositol derivatives using this reaction. It has also been applied to the synthesis of aminocyclitols in work done by Barton and co-workers. Finally, Amano et al. used the Ferrier conditions to synthesize complex conjugated cyclohexanones in 1998. References Carbohydrate chemistry Carbon-carbon bond forming reactions Name reactions Rearrangement reactions
Ferrier carbocyclization
[ "Chemistry" ]
615
[ "Carbon-carbon bond forming reactions", "Organic reactions", "Name reactions", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Glycobiology", "Rearrangement reactions" ]
22,409,801
https://en.wikipedia.org/wiki/Papaloizou%E2%80%93Pringle%20instability
The Papaloizou–Pringle instability (PPI) is an instability of accretion tori identified in 1984 by theoretical physicists John Papaloizou and James E. Pringle, who showed that tori, or accretion disks, in anisotropic stellar systems with constant specific angular momentum are unstable to non-axisymmetric global modes. History Linear analysis of the instability was first outlined by Papaloizou and Pringle in 1984. Relativistic numerical studies confirmed the effects of the instability, probed the non-linear effects primarily responsible for setting the final amplitude of the global modes, and showed that the resulting matter distribution exhibited counter-rotating epicyclic vortices (also called planets). References 1984 in science Astrophysics theories
Papaloizou–Pringle instability
[ "Physics" ]
156
[ "Astrophysics theories", "Astrophysics" ]
22,410,248
https://en.wikipedia.org/wiki/Carbohydrate%20conformation
Carbohydrate conformation refers to the overall three-dimensional structure adopted by a carbohydrate (saccharide) molecule as a result of the through-bond and through-space physical forces it experiences arising from its molecular structure. The physical forces that dictate the three-dimensional shapes of all molecules—here, of all monosaccharide, oligosaccharide, and polysaccharide molecules—are sometimes summarily captured by such terms as "steric interactions" and "stereoelectronic effects" (see below). Saccharide and other chemical conformations can be reasonably shown using two-dimensional structure representations that follow set conventions; these capture for a trained viewer an understanding of the three-dimensional structure via structure drawings (see organic chemistry article, and "3D Representations" section in molecular geometry article); they are also represented by stereograms on the two dimensional page, and increasingly using 3D display technologies on computer monitors. Formally and quantitatively, conformation is captured by description of a molecule's angles—for example, sets of three sequential atoms (bond angles) and four sequential atoms (torsion angles, dihedral angles), where the locations and angular directions of non-bonding lone pair electrons must sometimes also be taken into account. Conformations adopted by saccharide molecules in response to the physical forces arising from their bonding and nonbonding electrons, modified by the molecule's interactions with its aqueous or other solvent environment, strongly influence their reactivity with and recognition by other molecules (processes which in turn can alter conformation). Chemical transformations and biological signalling mediated by conformation-dependent molecular recognition between molecules underlie all essential processes in living organisms. Conformations of carbohydrates Monosaccharide conformation Pyranose and furanose forms can exist as different conformers, and one conformer can interconvert with another if the energy requirement is met. For the furanose system there are two possible conformers: twist (T) and envelope (E). In the pyranose system, five conformers are possible: chair (C), boat (B), skew (S), half-chair (H) or envelope (E). In all cases there are four or more atoms that make up a plane. In order to define which atoms are above and below the plane, one must orient the molecule so that the atoms are numbered clockwise when looking from the top. Atoms above the plane are prefixed as a superscript and atoms below the plane are suffixed as a subscript. If the ring oxygen is above or below the plane it must be prefixed or suffixed appropriately. Conformational analysis The chair conformation of six-membered rings has a dihedral angle of 60° between adjacent substituents, usually making it the most stable conformer. Since there are two possible chair conformations, steric and stereoelectronic effects such as the anomeric effect, 1,3-diaxial interactions, dipoles and intramolecular hydrogen bonding must be taken into consideration when looking at relative energies. Conformations with 1,3-diaxial interactions are usually disfavored due to steric congestion and can shift the equilibrium to the other chair form (example: 1C4 to 4C1). The size of the substituents greatly affects this equilibrium. However, intramolecular hydrogen bonding can be an example of a stabilizing 1,3-diaxial interaction. 
Dipoles also play a role in conformer stability: aligned dipoles lead to an increase in energy while opposed dipoles lead to a lowering of energy and hence a stabilizing effect. This can be complicated by solvent effects, since polar solvents tend to stabilize aligned dipoles. All interactions must be taken into account when determining a preferred conformation. Conformations of five-membered rings are limited to two: envelope and twist. The envelope conformation has four atoms in a plane while the twist form only has three. In the envelope form two different scenarios can be envisioned: one where the ring oxygen is in the four-atom plane and one where it is puckered above or below the plane. When the ring oxygen is not in the plane the substituents eclipse, and when it is in the plane torsional strain is relieved. Conformational analysis for the twist form is similar, thus leading to the two forms being very close in energy. Anomers and related effects Anomers are diastereoisomers of glycosides, hemiacetals or related cyclic forms of sugars, or related molecules differing in configuration only at C-1. When the stereochemistry of the first carbon matches the stereochemistry of the last stereogenic center, the sugar is the α-anomer; when they are opposite, it is the β-anomer. Anomeric effect Anomers can be interconverted through a process known as mutarotation. The anomeric effect, more accurately called the endo-anomeric effect, is the propensity of heteroatoms at C-1 to be oriented axially. This is counterintuitive, as one would expect the equatorial anomer to be the thermodynamic product. This effect has been rationalized through dipole–dipole repulsion and n–σ* arguments. Reverse anomeric effect The reverse anomeric effect, proposed in 1965 by R. U. Lemieux, is the tendency for electropositive groups at the anomeric position to be oriented equatorially. The original publication reported this phenomenon with N-(2,3,4,6-tetra-O-acetyl-α-D-glucopyranosyl)-4-methylpyridinium bromide. However, further studies have shown the effect to be a solvation and steric issue. It is accepted that there is no generalized reverse anomeric effect. Hydroxymethyl conformation Rotation around the C-5/C-6 bond is described by the angle ω. Three staggered conformations are possible: gauche–trans (gt), gauche–gauche (gg), and trans–gauche (tg). The name indicates the interaction between O-5 and OH-6 first, followed by the interaction between OH-6 and C-4. Oligosaccharide conformation In addition to the factors affecting monosaccharide residues, conformational analysis of oligosaccharides and polysaccharides requires consideration of additional factors. The exo-anomeric effect The exo-anomeric effect is similar to the endo-anomeric effect, the difference being that the donated lone pair comes from the substituent at C-1. However, since the substituent can be either axial or equatorial, there are two types of exo-anomeric effects, one from axial glycosides and one from equatorial glycosides, as long as the donating orbital is anti-periplanar to the accepting orbital. Glycosidic torsion angles Three angles are described by φ, ψ and ω (in the case of glycosidic linkages via O-6). Steric considerations and anomeric effects need to be taken into consideration when looking at preferred angles. Conformations in solution In solution, reducing monosaccharides exist in equilibrium between their acyclic and cyclic forms, with less than 1% in the acyclic form. 
The open-chain form can close to give the pyranose and furanose forms, with both the α- and β-anomers present for each. The equilibrium population of conformers depends on their relative energies, which can be determined to a rough approximation using steric and stereoelectronic arguments. It has been shown that cations in solution can shift the equilibrium. See also Anomeric effect Carbohydrate Furanose Monosaccharide Polysaccharide Pyranose References External links Carbohydrates Carbohydrate chemistry
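Because the equilibrium population of conformers follows from their relative energies, a minimal Python sketch of the corresponding Boltzmann weighting may be useful; the conformer labels and the energy values below are hypothetical placeholders, not measured data.

```python
# Minimal sketch: estimating equilibrium conformer populations from
# relative free energies via Boltzmann weighting. Energies are made-up
# illustrative values, not measurements.
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)
T = 298.15    # temperature, K

# Hypothetical relative free energies (kJ/mol) of two chair conformers.
energies = {"4C1": 0.0, "1C4": 5.0}

weights = {c: math.exp(-e / (R * T)) for c, e in energies.items()}
total = sum(weights.values())
populations = {c: w / total for c, w in weights.items()}
print(populations)  # roughly 88% 4C1 and 12% 1C4 for these toy numbers
```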
Carbohydrate conformation
[ "Chemistry" ]
1,691
[ "Biomolecules by chemical classification", "Carbohydrates", "Organic compounds", "Carbohydrate chemistry", "Chemical synthesis", "nan", "Glycobiology" ]
15,794,238
https://en.wikipedia.org/wiki/Fractal%20cosmology
In physical cosmology, fractal cosmology is a set of minority cosmological theories which state that the distribution of matter in the Universe, or the structure of the universe itself, is a fractal across a wide range of scales (see also: multifractal system). More generally, it relates to the usage or appearance of fractals in the study of the universe and matter. A central issue in this field is the fractal dimension of the universe or of the matter distribution within it, when measured at very large or very small scales. Fractals in observational cosmology The first attempt to model the distribution of galaxies with a fractal pattern was made by Luciano Pietronero and his team in 1987, and a more detailed view of the universe's large-scale structure emerged over the following decade as the number of cataloged galaxies grew larger. Pietronero argues that the universe shows a definite fractal aspect over a fairly wide range of scales, with a fractal dimension of about 2. The fractal dimension of a homogeneous 3D object would be 3, and 2 for a homogeneous surface, whilst the fractal dimension of a fractal surface lies between 2 and 3. The universe has been observed to be homogeneous and isotropic (i.e. smoothly distributed) at very large scales, as is expected in a standard Big Bang or Friedmann–Lemaître–Robertson–Walker cosmology and in most interpretations of the Lambda-Cold Dark Matter model. The scientific consensus interpretation is that the Sloan Digital Sky Survey (SDSS) suggests that the galaxy distribution does indeed smooth out above 100 megaparsecs. One study of the SDSS data in 2004 found "The power spectrum is not well-characterized by a single power law but unambiguously shows curvature ... thereby driving yet another nail into the coffin of the fractal universe hypothesis and any other models predicting a power-law power spectrum". Another analysis of luminous red galaxies (LRGs) in the SDSS data calculated the fractal dimension of the galaxy distribution (on scales from 70 to 100 Mpc/h) to be 3, consistent with homogeneity, but found that the fractal dimension is 2 "out to roughly 20 h−1 Mpc". In 2012, Scrimgeour et al. definitively showed that the large-scale structure of galaxies was homogeneous beyond a scale of around 70 Mpc/h. Fractals in theoretical cosmology In the realm of theory, the first appearance of fractals in cosmology was likely with Andrei Linde's "Eternally Existing Self-Reproducing Chaotic Inflationary Universe" theory (see chaotic inflation theory) in 1986. In this theory, the evolution of a scalar field creates peaks that become nucleation points which cause inflating patches of space to develop into "bubble universes", making the universe fractal on the very largest scales. Alan Guth's 2007 paper "Eternal Inflation and its implications" shows that this variety of inflationary universe theory is still being seriously considered today. Inflation, in some form or another, is widely considered to be our best available cosmological model. Since 1986, quite a large number of different cosmological theories exhibiting fractal properties have been proposed. While Linde's theory shows fractality at scales likely larger than the observable universe, theories like causal dynamical triangulation and the asymptotic safety approach to quantum gravity are fractal at the opposite extreme, in the realm of the ultra-small near the Planck scale. These recent theories of quantum gravity describe a fractal structure for spacetime itself, and suggest that the dimensionality of space evolves with time. 
Specifically, they suggest that reality is 2D at the Planck scale, and that spacetime gradually becomes 4D at larger scales. French mathematician Alain Connes has been working for a number of years to reconcile general relativity with quantum mechanics using noncommutative geometry. Fractality also arises in this approach to quantum gravity. An article by Alexander Hellemans in the August 2006 issue of Scientific American quotes Connes as saying that the next important step toward this goal is to "try to understand how space with fractional dimensions couples with gravitation." The work of Connes and physicist Carlo Rovelli suggests that time is an emergent property or arises naturally in this formulation, whereas in causal dynamical triangulation, choosing those configurations where adjacent building blocks share the same direction in time is an essential part of the "recipe". Both approaches suggest that the fabric of space itself is fractal, however. See also Invariant set postulate Large-scale structure of the Universe Scale invariance Shape of the universe Notes References Rassem, M. and Ahmed E., "On Fractal Cosmology", Astro. Phys. Lett. Commun. (1996), 35, 311. Astrophysics Fractals Physical cosmology
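To make the idea of a measured fractal dimension concrete, here is a minimal Python sketch of a box-counting estimate on a synthetic 3-D point cloud; actual galaxy-survey analyses use the correlation integral on observed catalogues, so the data, scales, and function name below are purely illustrative.

```python
# Minimal sketch: box-counting estimate of the fractal dimension of a
# 3-D point set (a toy "galaxy catalogue"). A homogeneous cloud should
# give a dimension close to 3.
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((100_000, 3))  # uniform points in the unit cube

def box_count_dimension(points, scales):
    counts = []
    for s in scales:
        # Number of occupied boxes of side length s.
        boxes = np.unique(np.floor(points / s).astype(int), axis=0)
        counts.append(len(boxes))
    # Slope of log N(s) versus log(1/s) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)),
                          np.log(counts), 1)
    return slope

scales = [0.5, 0.25, 0.125, 0.0625]
print(box_count_dimension(points, scales))  # close to 3 for this cloud
```

A genuinely fractal point set (for example, points concentrated on filaments) would yield a slope noticeably below 3 over the probed range of scales.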
Fractal cosmology
[ "Physics", "Astronomy", "Mathematics" ]
1,039
[ "Functions and mappings", "Mathematical analysis", "Theoretical physics", "Mathematical objects", "Astrophysics", "Fractals", "Mathematical relations", "Physical cosmology", "Astronomical sub-disciplines" ]
15,794,879
https://en.wikipedia.org/wiki/Quantum%20digital%20signature
A Quantum Digital Signature (QDS) refers to the quantum mechanical equivalent of either a classical digital signature or, more generally, a handwritten signature on a paper document. Like a handwritten signature, a digital signature is used to protect a document, such as a digital contract, against forgery by another party or by one of the participating parties. As e-commerce has become more important in society, the need to certify the origin of exchanged information has arisen. Modern digital signatures enhance security based on the difficulty of solving a mathematical problem, such as finding the factors of large numbers (as used in the RSA algorithm). Unfortunately, the task of solving these problems becomes feasible when a quantum computer is available (see Shor's algorithm). To face this new problem, new quantum digital signature schemes are in development to provide protection against tampering, even from parties in possession of quantum computers and using powerful quantum cheating strategies. Classical public-key method The public-key method of cryptography allows a sender to sign a message (often only the cryptographic hash of the message) with a sign key in such a way that any recipient can, using the corresponding public key, check the authenticity of the message. To allow this, the public key is made broadly available to all potential recipients. To make sure only the legal author of the message can validly sign the message, the public key is created from a random, private sign key using a one-way function. This is a function that is designed such that computing the result given the input is very easy, but computing the input given the result is very difficult. A classic example is the multiplication of two very large primes: computing the product n = p·q from p and q is easy, but recovering the factors p and q from n is normally considered infeasible. Quantum Digital Signature Like classical digital signatures, quantum digital signatures make use of asymmetric keys. Thus, a person who wants to sign a message creates one or more pairs of sign and corresponding public keys. In general we can divide quantum digital signature schemes into two groups: schemes that create a public key of quantum bits out of a private classical bit string, k → |f_k⟩, and schemes that create a public key of quantum bits out of a private string of quantum bits. In both cases f is a quantum one-way function that has the same properties as a classical one-way function. That is, the result is easy to compute, but, in contrast to the classical scheme, the function is impossible to invert, even if one uses powerful quantum cheating strategies. The most famous scheme of the first type above is provided by Gottesman and Chuang. Requirements for a good and usable signature scheme Most of the requirements for a classical digital signature scheme also apply to the quantum digital signature scheme. In detail: the scheme has to provide security against tampering by the sender after the message was signed (see bit commitment), by the receiver, and by a third party; creating a signed message has to be easy; and every recipient has to get the same answer when testing the message for validity (valid or not valid). Differences between classical and quantum one-way functions Nature of the one-way function A classical one-way function, as said above, is based on a classically infeasible mathematical task, whereas a quantum one-way function exploits the uncertainty principle, which makes it impossible even for a quantum computer to compute the inverse. 
This is done by providing a quantum output state from which one cannot learn enough about the input string to reproduce it. In the case of the first group of schemes this is shown by Holevo's theorem, which says that from a given n-qubit quantum state one cannot extract more than n classical bits of information. One possibility to ensure that the scheme uses fewer qubits for a bit string of a certain length is to use nearly orthogonal states. That gives us the possibility to induce a basis with more than two states, so that n bits of information can be described using fewer than n qubits. For example, with a basis of nearly orthogonal states built on 3 qubits, only m qubits are needed to describe n classical bits whenever the required relation between m and n holds. Because of Holevo's theorem and the fact that m can be much smaller than n, one can only get m bits out of the n-bit message. More generally, if one gets T copies of the public key, one can extract at most Tm bits of the private key. If n − Tm is big, the number of sign keys consistent with that information, about 2^(n−Tm), becomes very large, which makes it impossible for a dishonest person to guess the sign key. Note: one cannot distinguish between non-orthogonal states if one only has a small number of identical copies. That is how the quantum one-way function works. Nevertheless, the public key leaks some information about the private key, in contrast to the classical public key, from which one learns either nothing or everything about the private key. Copying the public key In the classical case we create a classical public key out of a classical sign key; thus it is easy to provide every potential recipient with a copy of the public key. The public key can be freely distributed. This becomes more difficult in the quantum case, because copying a quantum state is forbidden by the no-cloning theorem as long as the state itself is unknown. So public keys can only be created and distributed by a person who knows the exact quantum state he wants to create, thus who knows the sign key (this can be the sender or, more generally, a trusted institution). Nevertheless, in contrast to the classical public key, there is an upper bound on the number of public quantum keys T which can be created without enabling one to guess the sign key and thus endangering the security of the scheme (n − Tm has to be big). Public key should be the same for every recipient (swap test) To make sure that every recipient gets identical results when testing the authenticity of a message, the public keys distributed have to be the same. This is straightforward in the classical case, because one can easily compare two classical bit strings and see if they match. Nevertheless, in the quantum case it is more complicated. To test whether two public quantum states |ψ⟩ and |φ⟩ are the same, one compares them with the swap test. This is done with the following quantum circuit, which uses one Fredkin gate F, one Hadamard gate H and an ancilla qubit a. First of all the ancilla qubit is set to the symmetric state (|0⟩ + |1⟩)/√2. Right after, the ancilla qubit is used as the control on the targets |ψ⟩ and |φ⟩ in a Fredkin gate. Furthermore, a Hadamard gate is applied to the ancilla qubit, and finally the first qubit is measured. If both states are the same, the result 0 is always measured. If both states are nearly orthogonal, the result can be either 0 or 1. The calculation of the swap test in more detail: the overall state at the start is (1/√2)(|0⟩|ψ⟩|φ⟩ + |1⟩|ψ⟩|φ⟩). After the Fredkin gate is applied it is (1/√2)(|0⟩|ψ⟩|φ⟩ + |1⟩|φ⟩|ψ⟩). After the Hadamard gate is applied on the first qubit it is (1/2)(|0⟩|ψ⟩|φ⟩ + |1⟩|ψ⟩|φ⟩ + |0⟩|φ⟩|ψ⟩ − |1⟩|φ⟩|ψ⟩). After sorting for the state of the first qubit it is (1/2)|0⟩(|ψ⟩|φ⟩ + |φ⟩|ψ⟩) + (1/2)|1⟩(|ψ⟩|φ⟩ − |φ⟩|ψ⟩). Now it is easy to see that if the states are identical, |ψ⟩ = |φ⟩, the |1⟩ component vanishes, which gives us a 0 whenever the first qubit is measured. 
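As a numerical cross-check of the swap-test derivation above, here is a short Python sketch assuming pure states represented as complex vectors: from the sorted state, the probability of measuring 0 on the ancilla works out to (1 + |⟨ψ|φ⟩|²)/2, which equals 1 exactly when the states are identical.

```python
# Minimal sketch of swap-test statistics for pure states |psi>, |phi>:
# P(ancilla reads 0) = (1 + |<psi|phi>|^2) / 2.
import numpy as np

def swap_test_p0(psi, phi):
    overlap = abs(np.vdot(psi, phi)) ** 2  # vdot conjugates the first argument
    return 0.5 * (1.0 + overlap)

psi = np.array([1.0, 0.0])                   # |0>
same = np.array([1.0, 0.0])                  # identical state
other = np.array([1.0, 1.0]) / np.sqrt(2.0)  # |+>, non-orthogonal to |0>

print(swap_test_p0(psi, same))   # 1.0 -> the ancilla always reads 0
print(swap_test_p0(psi, other))  # 0.75 -> the ancilla reads 1 a quarter of the time
```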
An example of a signing-validation process using a simplified Gottesman-Chuang scheme Signing Process Let Person A (Alice) want to send a message to Person B (Bob). Hash algorithms won't be considered, so Alice has to sign every single bit of her message. Let the message bit be b ∈ {0, 1}. Alice chooses M pairs of private keys {k_{1,0}, k_{1,1}}, ..., {k_{M,0}, k_{M,1}}. All the keys k_{i,0} will be used to sign the message bit if b = 0, and all the keys k_{i,1} will be used to sign the message bit if b = 1. The function k → |f_k⟩ which maps private keys to public keys is known to all parties. Alice now computes the corresponding public keys |f_k⟩ and gives all of them to the recipients. She can make as many copies as she needs, but has to take care not to endanger the security as described above; her level of security limits the number of identical public keys she can create. If the message bit b = 0, she sends all her private keys k_{i,0} along with the message bit b to Bob; if the message bit b = 1, she sends all her private keys k_{i,1} along with the message bit b to Bob. Remember: in this example Alice picks only one bit b and signs it. She has to do that for every single bit in her message. Validation Process Bob now possesses the message bit b, the corresponding private keys (k_{i,0} or k_{i,1}), and all public keys. Now Bob calculates |f_k⟩ for all received private keys. After he has done so, he makes use of the swap test to compare each calculated state with the corresponding received public key. Since the swap test has some probability of giving the wrong answer, he has to do it for all M keys and count how many incorrect keys he gets, r. It is obvious that M is a security parameter: it is less likely to validate a bit wrongly for bigger M. If he gets only a few incorrect keys, then the bit is most probably valid, because his calculated keys and the public keys seem to be the same. If he gets many incorrect keys, then somebody has most probably faked the message. Avoiding different validation outcomes One problem, which arises especially for small M, is that the number of incorrect keys measured by different recipients varies randomly. So defining only one threshold is not enough, because it would cause a message to be validated differently when the number of incorrect keys r is very close to the defined threshold. This can be prevented by defining more than one threshold. Because the number of errors increases proportionally with M, the thresholds are defined as fractions of M: an acceptance threshold c1·M and a rejection threshold c2·M, with c1 < c2 (a minimal sketch of this decision rule appears below). If the number of incorrect keys r is below c1·M, then the bit is valid with high probability. If the number of incorrect keys r is above c2·M, then the bit was faked with high probability. If the number of incorrect keys r lies between both thresholds, then the recipient cannot be sure whether another recipient gets the same outcome when validating the bit; furthermore, he can't even be sure that he validated the message correctly. If we assume perfect channels without noise, so that the bit can't be changed in transit, then the acceptance threshold can be set to zero, because the swap test always passes when the compared states are the same. Message authentication Message authentication codes (MACs) mainly aim at data origin authentication, but they can also provide non-repudiation in certain realistic scenarios when a trusted third party is involved. In principle, the same idea can be exploited in the framework of quantum MACs. However, a broad class of quantum MACs does not seem to offer any advantage over their classical counterparts. See also Lamport signature - A practical digital signature method invented in the 1970s and believed to be secure even against quantum computing attacks. 
Quantum cryptography Quantum fingerprinting References Digital signature schemes Key management Quantum information science Theoretical computer science
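Tying back to the two-threshold validation rule in the example above, here is a minimal Python sketch of the decision logic; the fractions c1 and c2 are illustrative values, not constants from any specific protocol.

```python
# Minimal sketch of the two-threshold validation rule: r is the number
# of incorrect keys found via swap tests out of M, and c1 < c2 are the
# acceptance/rejection fractions (illustrative values).
def validate_bit(r, M, c1=0.05, c2=0.20):
    if r < c1 * M:
        return "valid"         # few failures: accept the signed bit
    if r > c2 * M:
        return "forged"        # many failures: reject as tampered
    return "inconclusive"      # between thresholds: no transferable verdict

print(validate_bit(r=2, M=100))   # valid
print(validate_bit(r=30, M=100))  # forged
print(validate_bit(r=10, M=100))  # inconclusive
```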
Quantum digital signature
[ "Mathematics" ]
2,181
[ "Theoretical computer science", "Applied mathematics" ]
15,795,950
https://en.wikipedia.org/wiki/Activity%20recognition
Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations of the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in providing personalized support for many different applications and its connection to many different fields of study such as medicine, human-computer interaction, or sociology. Due to its multifaceted nature, different fields may refer to activity recognition as plan recognition, goal recognition, intent recognition, behavior recognition, location estimation and location-based services. Types Sensor-based, single-user activity recognition Sensor-based activity recognition integrates the emerging area of sensor networks with novel data mining and machine learning techniques to model a wide range of human activities. Mobile devices (e.g. smartphones) provide sufficient sensor data and computing power to enable physical activity recognition and to provide an estimate of the energy consumption during everyday life. Sensor-based activity recognition researchers believe that by empowering ubiquitous computers and sensors to monitor the behavior of agents (under consent), these computers will be better suited to act on our behalf. Visual sensors that incorporate color and depth information, such as the Kinect, allow more accurate automatic action recognition and enable many emerging applications such as interactive education and smart environments. Multiple views of visual sensors enable the development of machine learning methods for automatic view-invariant action recognition. More advanced sensors used in 3D motion capture systems allow highly accurate automatic recognition, at the expense of a more complicated hardware setup. Levels of sensor-based activity recognition Sensor-based activity recognition is a challenging task due to the inherently noisy nature of the input. Thus, statistical modeling has been the main thrust in this direction, working in layers, where recognition at several intermediate levels is conducted and connected. At the lowest level, where the sensor data are collected, statistical learning concerns how to find the detailed locations of agents from the received signal data. At an intermediate level, statistical inference may be concerned with how to recognize individuals' activities from the inferred location sequences and environmental conditions at the lower levels. Furthermore, at the highest level, a major concern is to find out the overall goal or subgoals of an agent from the activity sequences through a mixture of logical and statistical reasoning. Sensor-based, multi-user activity recognition Recognizing activities of multiple users using on-body sensors first appeared in the work by ORL using active badge systems in the early 1990s. Other sensor technology such as acceleration sensors was used for identifying group activity patterns in office scenarios. Activities of multiple users in intelligent environments are addressed in Gu et al. In this work, they investigate the fundamental problem of recognizing activities for multiple users from sensor readings in a home environment, and propose a novel pattern mining approach to recognize both single-user and multi-user activities in a unified solution. 
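To make the sensor-based pipeline above concrete, here is a minimal Python sketch that windows a synthetic accelerometer stream, extracts simple statistical features, and trains an off-the-shelf classifier; the data, window width, and feature set are illustrative assumptions, not taken from a particular published system.

```python
# Minimal sketch of sensor-based activity recognition: window a signal,
# extract per-window statistics, and fit a standard classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_windows(signal, labels, width=50):
    feats, ys = [], []
    for start in range(0, len(signal) - width, width):
        w = signal[start:start + width]
        # Simple per-window features: mean, variance, mean jerk.
        feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])
        ys.append(labels[start])
    return np.array(feats), np.array(ys)

# Synthetic stream: low-variance "sitting" followed by high-variance "walking".
sitting = rng.normal(0.0, 0.1, 5000)
walking = rng.normal(0.0, 1.0, 5000)
signal = np.concatenate([sitting, walking])
labels = np.array([0] * 5000 + [1] * 5000)

X, y = make_windows(signal, labels)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))  # near 1.0 on this trivially separable toy data
```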
Sensor-based group activity recognition Recognition of group activities is fundamentally different from single- or multi-user activity recognition in that the goal is to recognize the behavior of the group as an entity, rather than the activities of the individual members within it. Group behavior is emergent in nature, meaning that the properties of the behavior of the group are fundamentally different from the properties of the behavior of the individuals within it, or any sum of that behavior. The main challenges are in modeling the behavior of the individual group members, as well as the roles of the individuals within the group dynamic and their relationship to the emergent behavior of the group in parallel. Challenges which must still be addressed include quantification of the behavior and roles of individuals who join the group, integration of explicit models for role description into inference algorithms, and scalability evaluations for very large groups and crowds. Group activity recognition has applications for crowd management and response in emergency situations, as well as for social networking and Quantified Self applications. Approaches Activity recognition through logic and reasoning Logic-based approaches keep track of all logically consistent explanations of the observed actions. Thus, all possible and consistent plans or goals must be considered. Kautz provided a formal theory of plan recognition. He described plan recognition as a logical inference process of circumscription. All actions and plans are uniformly referred to as goals, and a recognizer's knowledge is represented by a set of first-order statements called an event hierarchy. An event hierarchy is encoded in first-order logic, which defines abstraction, decomposition and functional relationships between types of events. Kautz's general framework for plan recognition has an exponential time complexity in the worst case, measured in the size of the input hierarchy. Lesh and Etzioni went one step further and presented methods for scaling up goal recognition computationally. In contrast to Kautz's approach, where the plan library is explicitly represented, Lesh and Etzioni's approach enables automatic plan-library construction from domain primitives. Furthermore, they introduced compact representations and efficient algorithms for goal recognition on large plan libraries. Inconsistent plans and goals are repeatedly pruned when new actions arrive. They also presented methods for adapting a goal recognizer to handle individual idiosyncratic behavior given a sample of an individual's recent behavior. Pollack et al. described a direct argumentation model that can account for the relative strength of several kinds of arguments for describing beliefs and intentions. A serious problem of logic-based approaches is their inability, or inherent infeasibility, to represent uncertainty. They offer no mechanism for preferring one consistent explanation to another and are incapable of deciding whether one particular plan is more likely than another, as long as both of them are consistent enough to explain the actions observed. There is also a lack of learning ability associated with logic-based methods. Another approach to logic-based activity recognition is to use stream reasoning based on answer set programming, which has been applied to recognising activities for health-related applications, using weak constraints to model a degree of ambiguity/uncertainty. 
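As a toy illustration of the consistency-based pruning described above (in the spirit of Kautz and of Lesh and Etzioni, but not their actual algorithms), the following Python sketch keeps only the goals whose plans remain consistent with each observed action; the plan library is entirely hypothetical.

```python
# Minimal sketch of consistency-based plan pruning: each candidate goal
# is defined by the set of actions its plans allow, and goals that are
# inconsistent with an observed action are pruned as observations arrive.
plan_library = {
    "make_coffee": {"fill_kettle", "boil_water", "grind_beans", "pour"},
    "make_tea":    {"fill_kettle", "boil_water", "steep_teabag", "pour"},
    "wash_dishes": {"fill_sink", "scrub", "rinse"},
}

def recognize(observations):
    candidates = set(plan_library)
    for action in observations:
        # Keep only goals whose plans can include the observed action.
        candidates = {g for g in candidates if action in plan_library[g]}
    return candidates

print(recognize(["fill_kettle"]))                 # {'make_coffee', 'make_tea'}
print(recognize(["fill_kettle", "grind_beans"]))  # {'make_coffee'}
```

Note how the sketch exhibits the weakness named in the text: when several goals survive, it has no way to prefer one consistent explanation over another.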
Activity recognition through probabilistic reasoning Probability theory and statistical learning models have more recently been applied in activity recognition to reason about actions, plans and goals under uncertainty. In the literature, there have been several approaches which explicitly represent uncertainty in reasoning about an agent's plans and goals. Using sensor data as input, Hodges and Pollack designed machine learning-based systems for identifying individuals as they perform routine daily activities such as making coffee. Intel Research (Seattle) Lab and the University of Washington at Seattle have done important work on using sensors to detect human plans (Mike Perkowitz, Matthai Philipose, Donald J. Patterson, and Kenneth P. Fishkin, "Mining models of human activities from the web", in Proceedings of the Thirteenth International World Wide Web Conference (WWW 2004), pages 573–582, May 2004; Lin Liao, Donald J. Patterson, Dieter Fox and Henry A. Kautz, "Learning and inferring transportation routines", Artif. Intell., 171(5–6):311–331, 2007). Some of these works infer user transportation modes from readings of radio-frequency identifiers (RFID) and global positioning systems (GPS). The use of temporal probabilistic models has been shown to perform well in activity recognition and generally outperform non-temporal models. Generative models such as the Hidden Markov Model (HMM) and the more generally formulated Dynamic Bayesian Networks (DBN) are popular choices in modelling activities from sensor data (TLM van Kasteren, Gwenn Englebienne, Ben Kröse, "Hierarchical Activity Recognition Using Automatically Clustered Actions", 2011, Ambient Intelligence, 82–91, Springer Berlin/Heidelberg; Nuria Oliver, Barbara Rosario and Alex Pentland, "A Bayesian Computer Vision System for Modeling Human Interactions", PAMI Special Issue on Visual Surveillance and Monitoring, Aug 2000). Discriminative models such as Conditional Random Fields (CRF) are also commonly applied and give good performance in activity recognition (Derek Hao Hu, Sinno Jialin Pan, Vincent Wenchen Zheng, Nathan Nan Liu, and Qiang Yang, "Real world activity recognition with multiple goals", in Proceedings of the 10th international conference on Ubiquitous computing, Ubicomp, pages 30–39, New York, NY, USA, 2008, ACM). Generative and discriminative models both have their pros and cons, and the ideal choice depends on the area of application. Conventional temporal probabilistic models such as the hidden Markov model (HMM) and conditional random fields (CRF) directly model the correlations between the activities and the observed sensor data. In recent years, increasing evidence has supported the use of hierarchical models which take into account the rich hierarchical structure that exists in human behavioral data (Nuria Oliver, Ashutosh Garg, and Eric Horvitz, "Layered representations for learning and inferring office activity from multiple sensory channels", Comput. Vis. Image Underst., 96(2):163–180, 2004). The core idea here is that the model does not directly correlate the activities with the sensor data, but instead breaks the activity into sub-activities (sometimes referred to as actions) and models the underlying correlations accordingly. 
An example could be the activity of preparing a stir fry, which can be broken down into the subactivities or actions of cutting vegetables, frying the vegetables in a pan and serving it on a plate. Examples of such a hierarchical model are Layered Hidden Markov Models (LHMMs) and the hierarchical hidden Markov model (HHMM), which have been shown to significantly outperform their non-hierarchical counterparts in activity recognition. Data mining based approach to activity recognition Different from traditional machine learning approaches, an approach based on data mining has been proposed more recently. In the work of Gu et al., the problem of activity recognition is formulated as a pattern-based classification problem. They proposed a data mining approach based on discriminative patterns which describe significant changes between any two activity classes of data to recognize sequential, interleaved and concurrent activities in a unified solution. Gilbert et al. use 2D corners in both space and time. These are grouped spatially and temporally using a hierarchical process, with an increasing search area. At each stage of the hierarchy, the most distinctive and descriptive features are learned efficiently through data mining (the Apriori rule). GPS-based activity recognition Location-based activity recognition can also rely on GPS data to recognize activities (Liao, Lin, Dieter Fox, and Henry Kautz, "Location-based activity recognition", Advances in Neural Information Processing Systems, 2006). Sensor usage Vision-based activity recognition It is a very important and challenging problem to track and understand the behavior of agents through videos taken by various cameras. The primary technique employed is computer vision. Vision-based activity recognition has found many applications such as human-computer interaction, user interface design, robot learning, and surveillance, among others. Scientific conferences where vision-based activity recognition work often appears are ICCV and CVPR. In vision-based activity recognition, a great deal of work has been done. Researchers have attempted a number of methods such as optical flow, Kalman filtering, and hidden Markov models, under different modalities such as single camera, stereo, and infrared. In addition, researchers have considered multiple aspects of this topic, including single pedestrian tracking, group tracking, and detecting dropped objects. Recently some researchers have used RGBD cameras like Microsoft Kinect to detect human activities. Depth cameras add an extra dimension, i.e. depth, which a normal 2D camera fails to provide. Sensory information from these depth cameras has been used to generate real-time skeleton models of humans with different body positions. This skeleton information provides meaningful data that researchers have used to model human activities, which are trained and later used to recognize unknown activities (Piyathilaka, L. and Kodagoda, S., 2015, "Human activity recognition for domestic robots", in Field and Service Robotics (pp. 395–408), Springer, Cham). With the recent emergence of deep learning, RGB video based activity recognition has seen rapid development. It uses videos captured by RGB cameras as input and performs several tasks, including: video classification, detection of activity start and end in videos, and spatial-temporal localization of activity and the people performing the activity. 
Pose estimation methods allow extracting more representative skeletal features for action recognition. That said, it has been discovered that deep learning based action recognition may suffer from adversarial attacks, where an attacker alters the input insignificantly to fool an action recognition system. Despite remarkable progress in vision-based activity recognition, its usage for most actual visual surveillance applications remains a distant aspiration. Conversely, the human brain seems to have perfected the ability to recognize human actions. This capability relies not only on acquired knowledge, but also on the aptitude for extracting information relevant to a given context and logical reasoning. Based on this observation, it has been proposed to enhance vision-based activity recognition systems by integrating commonsense reasoning and contextual and commonsense knowledge. Hierarchical human activity recognition (HAR) Hierarchical human activity recognition is a technique within computer vision and machine learning. It aims to identify and comprehend human actions or behaviors from visual data. This method entails structuring activities hierarchically, creating a framework that represents connections and interdependencies among various actions. HAR techniques can be used to understand data correlations and model fundamentals to improve models, to balance accuracy and privacy concerns in sensitive application areas, and to identify and manage trivial labels that have no relevance in specific use cases. Levels of vision-based activity recognition In vision-based activity recognition, the computational process is often divided into four steps, namely human detection, human tracking, human activity recognition and then a high-level activity evaluation. Fine-grained action localization In computer vision-based activity recognition, fine-grained action localization typically provides per-image segmentation masks delineating the human object and its action category (e.g., Segment-Tube). Techniques such as dynamic Markov networks, CNNs and LSTMs are often employed to exploit the semantic correlations between consecutive video frames. Geometric fine-grained features such as object bounding boxes and human poses facilitate activity recognition with graph neural networks. Automatic gait recognition One way to identify specific people is by how they walk. Gait-recognition software can be used to record a person's gait or gait feature profile in a database for the purpose of recognizing that person later, even if they are wearing a disguise. Wi-Fi-based activity recognition When activity recognition is performed indoors and in cities using the widely available Wi-Fi signals and 802.11 access points, there is much noise and uncertainty. These uncertainties can be modeled using a dynamic Bayesian network model. In a multiple goal model that can reason about a user's interleaving goals, a deterministic state transition model is applied. Another possible method models the concurrent and interleaving activities in a probabilistic approach. A user action discovery model could segment Wi-Fi signals to produce possible actions. Basic models of Wi-Fi recognition One of the primary ideas of Wi-Fi activity recognition is that the signal interacts with the human body during transmission, which causes reflection, diffraction, and scattering. Researchers can get information from these signals to analyze the activity of the human body. 
Static transmission model When wireless signals are transmitted indoors, obstacles such as walls, the ground, and the human body cause various effects such as reflection, scattering, and diffraction. Therefore, the receiving end receives multiple signals from different paths at the same time, because surfaces reflect the signal during the transmission; this is known as the multipath effect. The static model is based on these two kinds of signals: the direct signal and the reflected signal. Because there is no obstacle in the direct path, direct signal transmission can be modeled by the Friis transmission equation P_r = P_t·G_t·G_r·(λ/(4πd))², where P_t is the power fed into the transmitting antenna input terminals; P_r is the power available at the receiving antenna output terminals; d is the distance between antennas; G_t is the transmitting antenna gain; G_r is the receiving antenna gain; and λ is the wavelength of the radio frequency. If we consider the reflected signal, the effective path lengthens, giving P_r = P_t·G_t·G_r·(λ/(4π(d + 4h)))², where h is the distance between the reflection points and the direct path. When a human shows up, we have a new transmission path. Therefore, the final equation is P_r = P_t·G_t·G_r·(λ/(4π(d + 4h + Δ)))², where Δ is the approximate difference of the path caused by the human body. A numerical sketch of this model appears below. Dynamic transmission model In this model, we consider the human motion, which causes the signal transmission path to change continuously. We can use the Doppler shift to describe this effect, which is related to the motion speed. By calculating the Doppler shift of the received signal, we can figure out the pattern of the movement, thereby further identifying human activity. For example, in one study the Doppler shift is used as a fingerprint to achieve high-precision identification of nine different movement patterns. Fresnel zone The Fresnel zone was initially used to study the interference and diffraction of light, and was later used to construct the wireless signal transmission model. The Fresnel zone is a series of elliptical intervals whose foci are the positions of the sender and receiver. When a person is moving across different Fresnel zones, the signal path formed by the reflection of the human body changes, and if people move vertically through Fresnel zones, the change of the signal will be periodic. In a pair of papers, Wang et al. applied the Fresnel model to the activity recognition task and got a more accurate result. Modeling of the human body In some tasks, we should consider modeling the human body accurately to achieve better results. For example, one study described the human body as concentric cylinders for breath detection. The outside of the cylinder denotes the rib cage when people inhale, and the inside denotes that when people exhale. So the difference between the radii of those two cylinders represents the moving distance during breathing. The change of the signal phase can be expressed as Δθ = 2π·(2Δd)/λ, where Δθ is the change of the signal phase; λ is the wavelength of the radio frequency; and Δd is the moving distance of the rib cage. Datasets There are some popular datasets that are used for benchmarking activity recognition or action recognition algorithms. UCF-101: It consists of 101 human action classes, over 13k clips and 27 hours of video data. Action classes include applying makeup, playing dhol, cricket shot, shaving beard, etc. HMDB51: This is a collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,849 video clips from 51 action categories (such as "jump", "kiss" and "laugh"), with each category containing at least 101 clips. 
Kinetics: This is a significantly larger dataset than the previous ones. It contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. This dataset was created by DeepMind. Applications By automatically monitoring human activities, home-based rehabilitation can be provided for people suffering from traumatic brain injuries. One can find applications ranging from security-related applications and logistics support to location-based services. Activity recognition systems have been developed for wildlife observation and energy conservation in buildings. See also AI effect Applications of artificial intelligence Conditional random field Gesture recognition Hidden Markov model Motion analysis Naive Bayes classifier Support vector machines Object co-segmentation Outline of artificial intelligence Video content analysis References Human–computer interaction Applied machine learning Motion in computer vision
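Returning to the Wi-Fi transmission models described above, the following is a small numerical sketch of the quantities they rely on (free-space received power, Doppler shift, and phase change). The transmit power, antenna gains, carrier frequency, and motion parameters are arbitrary assumptions chosen only for illustration.

```python
# Minimal sketch of the Wi-Fi sensing quantities discussed above.
# All parameter values are illustrative assumptions.
import math

c = 3e8                      # speed of light, m/s
f = 2.4e9                    # 2.4 GHz Wi-Fi carrier (assumed)
lam = c / f                  # wavelength, ~0.125 m

def friis_received_power(p_t, g_t, g_r, d):
    """Free-space (Friis) received power for the direct path."""
    return p_t * g_t * g_r * (lam / (4 * math.pi * d)) ** 2

# Static model: direct path between transmitter and receiver 5 m apart.
p_r = friis_received_power(p_t=0.1, g_t=1.0, g_r=1.0, d=5.0)
print(f"received power: {p_r:.3e} W")

# Dynamic model: a moving body changes the path length at some rate v,
# producing a Doppler shift f_d = (1/lambda) * d(path length)/dt.
v_path = 1.0                 # path-length change rate in m/s (assumed)
f_d = v_path / lam
print(f"Doppler shift: {f_d:.1f} Hz")

# Phase change produced by a path-length change (used in breath detection).
delta_L = 0.01               # 1 cm change in path length (assumed)
delta_phase = 2 * math.pi * delta_L / lam
print(f"phase change: {delta_phase:.2f} rad")
```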
Activity recognition
[ "Physics", "Engineering" ]
4,039
[ "Physical phenomena", "Motion (physics)", "Motion in computer vision", "Human–machine interaction", "Human–computer interaction" ]
15,798,971
https://en.wikipedia.org/wiki/Lattice%20plane
In crystallography, a lattice plane of a given Bravais lattice is any plane containing at least three noncollinear Bravais lattice points. Equivalently, a lattice plane is a plane whose intersections with the lattice (or any crystalline structure of that lattice) are periodic (i.e. are described by 2d Bravais lattices). A family of lattice planes is a collection of equally spaced parallel lattice planes that, taken together, intersect all lattice points. Every family of lattice planes can be described by a set of integer Miller indices that have no common divisors (i.e. are relatively prime). Conversely, every set of Miller indices without common divisors defines a family of lattice planes. If, on the other hand, the Miller indices are not relatively prime, the family of planes defined by them is not a family of lattice planes, because not every plane of the family then intersects lattice points. Planes that are not lattice planes, by contrast, have aperiodic intersections with the lattice; such aperiodic point sets are known as quasicrystals, and obtaining them this way is known as the "cut-and-project" construction of a quasicrystal (and is typically also generalized to higher dimensions). References Crystallography Geometry
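As a small illustration of the Miller-index description above, the sketch below reduces an index triple to coprime form and evaluates the interplanar spacing for the special case of a cubic lattice, where d(hkl) = a / sqrt(h² + k² + l²). The lattice constant is an arbitrary assumed value.

```python
# Reduce Miller indices to coprime form and compute the interplanar
# spacing for a cubic lattice: d = a / sqrt(h^2 + k^2 + l^2).
# The lattice constant below is an arbitrary illustrative value.
from math import gcd, sqrt

def reduce_indices(h, k, l):
    """Divide out the greatest common divisor so the indices are relatively prime."""
    g = gcd(gcd(abs(h), abs(k)), abs(l))
    return (h // g, k // g, l // g) if g else (h, k, l)

def cubic_spacing(a, h, k, l):
    """Interplanar spacing of the (hkl) planes in a cubic lattice of constant a."""
    return a / sqrt(h * h + k * k + l * l)

a = 0.40  # lattice constant in nm (assumed)
for hkl in [(1, 0, 0), (2, 0, 0), (1, 1, 1), (2, 2, 0)]:
    reduced = reduce_indices(*hkl)
    print(hkl, "reduced to", reduced, "d =", round(cubic_spacing(a, *hkl), 4), "nm")
# (200) reduces to (100): indices with a common divisor describe planes at a
# fraction of the lattice-plane spacing, so they are not a family of lattice planes.
```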
Lattice plane
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
249
[ "Materials science stubs", "Materials science", "Crystallography stubs", "Crystallography", "Condensed matter physics", "Geometry", "Geometry stubs" ]
1,892,376
https://en.wikipedia.org/wiki/MHC%20class%20I
MHC class I molecules are one of two primary classes of major histocompatibility complex (MHC) molecules (the other being MHC class II) and are found on the cell surface of all nucleated cells in the bodies of vertebrates. They also occur on platelets, but not on red blood cells. Their function is to display peptide fragments of proteins from within the cell to cytotoxic T cells; this will trigger an immediate response from the immune system against a particular non-self antigen displayed with the help of an MHC class I protein. Because MHC class I molecules present peptides derived from cytosolic proteins, the pathway of MHC class I presentation is often called cytosolic or endogenous pathway. In humans, the HLAs corresponding to MHC class I are HLA-A, HLA-B, and HLA-C. Function Class I MHC molecules bind peptides generated mainly from the degradation of cytosolic proteins by the proteasome. The MHC I: peptide complex is then inserted via the endoplasmic reticulum into the external plasma membrane of the cell. The epitope peptide is bound on extracellular parts of the class I MHC molecule. Thus, the function of the class I MHC is to display intracellular proteins to cytotoxic T cells (CTLs). However, class I MHC can also present peptides generated from exogenous proteins, in a process known as cross-presentation. A normal cell will display peptides from normal cellular protein turnover on its class I MHC, and CTLs will not be activated in response to them due to central and peripheral tolerance mechanisms. When a cell expresses foreign proteins, such as after viral infection, a fraction of the class I MHC will display these peptides on the cell surface. Consequently, CTLs specific for the MHC:peptide complex will recognize and kill presenting cells. Alternatively, class I MHC itself can serve as an inhibitory ligand for natural killer cells (NKs). Reduction in the normal levels of surface class I MHC, a mechanism employed by some viruses and certain tumors to evade CTL responses, activates NK cell killing. PirB and visual plasticity Paired-immunoglobulin-like receptor B (PirB), an MHCI-binding receptor, is involved in the regulation of visual plasticity. PirB is expressed in the central nervous system and diminishes ocular dominance plasticity in the developmental critical period and adulthood. When the function of PirB was abolished in mutant mice, ocular dominance plasticity became more pronounced at all ages. PirB loss of function mutant mice also exhibited enhanced plasticity after monocular deprivation during the critical period. These results suggest that PirB may be involved in the modulation of synaptic plasticity in the visual cortex. Structure MHC class I molecules are heterodimers that consist of two polypeptide chains, α and β2-microglobulin (B2M). The two chains are linked noncovalently via interaction of B2M and the α3 domain. Only the α chain is polymorphic and encoded by a HLA gene, while the B2M subunit is not polymorphic and encoded by the Beta-2 microglobulin gene. The α3 domain is plasma membrane-spanning and interacts with the CD8 co-receptor of T-cells. The α3-CD8 interaction holds the MHC I molecule in place while the T cell receptor (TCR) on the surface of the cytotoxic T cell binds its α1-α2 heterodimer ligand, and checks the coupled peptide for antigenicity. The α1 and α2 domains fold to make up a groove for peptides to bind. 
MHC class I molecules bind peptides that are predominantly 8–10 amino acids in length (Parham 87), but the binding of longer peptides has also been reported. While a high-affinity peptide and the B2M subunit are normally required to maintain a stable ternary complex between the peptide, MHC I, and B2M, stable, peptide-deficient MHC I/B2M heterodimers have been observed under subphysiological temperatures. Synthetic stable, peptide-receptive MHC I molecules have been generated using a disulfide bond between the MHC I and B2M, named "open MHC-I". Synthesis The peptides are generated mainly in the cytosol by the proteasome. The proteasome is a macromolecule that consists of 28 subunits, of which half affect proteolytic activity. The proteasome degrades intracellular proteins into small peptides that are then released into the cytosol. Proteasomes can also ligate distinct peptide fragments (termed spliced peptides), producing sequences that are noncontiguous and therefore not linearly templated in the genome. The origin of spliced peptide segments can be from the same protein (cis-splicing) or different proteins (trans-splicing). The peptides have to be translocated from the cytosol into the endoplasmic reticulum (ER) to meet the MHC class I molecule, whose peptide-binding site is in the lumen of the ER; the membrane-proximal domains of the MHC class I molecule adopt Ig folds. Translocation and peptide loading The peptide translocation from the cytosol into the lumen of the ER is accomplished by the transporter associated with antigen processing (TAP). TAP is a member of the ABC transporter family and is a heterodimeric multimembrane-spanning polypeptide consisting of TAP1 and TAP2. The two subunits form a peptide binding site and two ATP binding sites that face the cytosol. TAP binds peptides on the cytoplasmic side and translocates them under ATP consumption into the lumen of the ER. The MHC class I molecule is then, in turn, loaded with peptides in the lumen of the ER. The peptide-loading process involves several other molecules that form a large multimeric complex called the peptide loading complex, consisting of TAP, tapasin, calreticulin, calnexin, and Erp57 (PDIA3). Calnexin acts to stabilize the class I MHC α chains prior to β2m binding. Following complete assembly of the MHC molecule, calnexin dissociates. The MHC molecule lacking a bound peptide is inherently unstable and requires the binding of the chaperones calreticulin and Erp57. Additionally, tapasin binds to the MHC molecule, serves to link it to the TAP proteins, and facilitates the selection of peptide in an iterative process called peptide editing, thus facilitating enhanced peptide loading and colocalization. Once the peptide is loaded onto the MHC class I molecule, the complex dissociates and it leaves the ER through the secretory pathway to reach the cell surface. The transport of the MHC class I molecules through the secretory pathway involves several posttranslational modifications of the MHC molecule. Some of the posttranslational modifications occur in the ER and involve changes to the N-glycan regions of the protein, followed by extensive changes to the N-glycans in the Golgi apparatus. The N-glycans mature fully before they reach the cell surface.
Peptide removal Peptides that fail to bind MHC class I molecules in the lumen of the endoplasmic reticulum (ER) are removed from the ER via the sec61 channel into the cytosol, where they might undergo further trimming in size, and might be translocated by TAP back into ER for binding to a MHC class I molecule. For example, an interaction of sec61 with bovine albumin has been observed. Effect of viruses MHC class I molecules are loaded with peptides generated from the degradation of ubiquitinated cytosolic proteins in proteasomes. As viruses induce cellular expression of viral proteins, some of these products are tagged for degradation, with the resulting peptide fragments entering the endoplasmic reticulum and binding to MHC I molecules. It is in this way, the MHC class I-dependent pathway of antigen presentation, that the virus infected cells signal T-cells that abnormal proteins are being produced as a result of infection. The fate of the virus-infected cell is almost always induction of apoptosis through cell-mediated immunity, reducing the risk of infecting neighboring cells. As an evolutionary response to this method of immune surveillance, many viruses are able to down-regulate or otherwise prevent the presentation of MHC class I molecules on the cell surface. In contrast to cytotoxic T lymphocytes, natural killer (NK) cells are normally inactivated upon recognizing MHC I molecules on the surface of cells. Therefore, in the absence of MHC I molecules, NK cells are activated and recognize the cell as aberrant, suggesting that it may be infected by viruses attempting to evade immune destruction. Several human cancers also show down-regulation of MHC I, giving transformed cells the same survival advantage of being able to avoid normal immune surveillance designed to destroy any infected or transformed cells. Genes and isotypes Very polymorphic (HLA-A) (HLA-B) (HLA-C) Less polymorphic (HLA-E) (HLA-F) (HLA-G) (pseudogene) (pseudogene) Evolutionary history The MHC class I genes originated in the most recent common ancestor of all jawed vertebrates, and have been found in all living jawed vertebrates that have been studied thus far. Since their emergence in jawed vertebrates, this gene family has been subjected to many divergent evolutionary paths as speciation events have taken place. There are, however, documented cases of trans-species polymorphisms in MHC class I genes, where a particular allele in an evolutionary related MHC class I gene remains in two species, likely due to strong pathogen-mediated balancing selection by pathogens that can infect both species. Birth-and-death evolution is one of the mechanistic explanations for the size of the MHC class I gene family. Birth-and-death of MHC class I genes Birth-and-death evolution asserts that gene duplication events cause the genome to contain multiple copies of a gene which can then undergo separate evolutionary processes. Sometimes these processes result in pseudogenization (death) of one copy of the gene, though sometimes this process results in two new genes with divergent function. It is likely that human MHC class Ib loci (HLA-E, -F, and -G) as well as MHC class I pseudogenes arose from MHC class Ia loci (HLA-A, -B, and -C) in this birth-and-death process. References External links Genes Immune system Glycoproteins Protein targeting Single-pass transmembrane proteins
MHC class I
[ "Chemistry", "Biology" ]
2,313
[ "Immune system", "Protein targeting", "Organ systems", "Cellular processes", "Glycoproteins", "Glycobiology" ]
1,892,519
https://en.wikipedia.org/wiki/Barrett%E2%80%93Crane%20model
The Barrett–Crane model is a model in quantum gravity, first published in 1998, which was defined using the Plebanski action. The field in the action is supposed to be a Lie-algebra-valued 2-form, i.e. taking values in the Lie algebra of a special orthogonal group. The field is required to have the symmetries needed for the action to reproduce the Einstein–Hilbert action. However, this form of the field is not unique; it can be posed in different forms built from the tetrad and the antisymmetric symbol acting on the Lie-algebra-valued 2-form fields. The Plebanski action can be constrained to produce the BF model, which is a theory with no local degrees of freedom. John W. Barrett and Louis Crane modeled the analogous constraint on the summation over spin foams. The Barrett–Crane model on a spin foam quantizes the Plebanski action, but its path integral amplitude corresponds to the degenerate sector of the field and not to the specific definition that formally satisfies Einstein's field equations of general relativity. However, if analysed with the tools of loop quantum gravity, the Barrett–Crane model gives an incorrect long-distance limit, and so the model is not identical to loop quantum gravity. References Loop quantum gravity
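The statement above about the different possible forms of the 2-form field is usually spelled out through the so-called simplicity constraint. The sign and normalization conventions below are a common choice in the literature and should be read as an assumption, not as this article's own notation.

```latex
% Commonly quoted solutions of the simplicity constraint on the 2-form field B,
% written in terms of the tetrad e^I (conventions vary between authors):
\[
  B^{IJ} = \pm\, e^{I} \wedge e^{J}
  \qquad \text{or} \qquad
  B^{IJ} = \pm\, \tfrac{1}{2}\, \epsilon^{IJ}{}_{KL}\, e^{K} \wedge e^{L},
\]
% where \epsilon^{IJKL} is the totally antisymmetric symbol on the internal
% indices; the "dual" sector on the right is the one usually associated with
% reproducing the Einstein--Hilbert action.
```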
Barrett–Crane model
[ "Physics" ]
260
[ "Relativity stubs", "Theory of relativity" ]
1,892,704
https://en.wikipedia.org/wiki/Plebanski%20action
General relativity and supergravity in all dimensions meet each other at a common assumption: any configuration space can be coordinatized by gauge fields, where one index is a Lie algebra index and the other is a spatial manifold index. Using these assumptions one can construct an effective field theory at low energies for both. In this form the action of general relativity can be written as the Plebanski action, which can be related to the Palatini action to derive Einstein's field equations of general relativity. The form of the action introduced by Plebanski involves internal indices, the curvature of the orthogonal-group connection, and the connection variables (the gauge fields); the remaining ingredients are a Lagrange multiplier and the antisymmetric symbol valued over the internal indices. With a specific choice of the field, the action formally satisfies Einstein's field equations of general relativity. One application is to the Barrett–Crane model. See also Tetradic Palatini action Barrett–Crane model BF model References Variational formalism of general relativity
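Since the displayed formula did not survive in the text above, the following is one commonly quoted presentation of the Plebanski action. Index placement, signs, and the factor in front of the constraint term vary between authors, so this should be read as an illustrative convention rather than the article's own equation.

```latex
% One common presentation of the Plebanski action (conventions vary):
\[
  S[A, B, \varphi] \;=\; \int_{M}
    B^{IJ} \wedge F_{IJ}(A)
    \;-\; \tfrac{1}{2}\,\varphi_{IJKL}\, B^{IJ} \wedge B^{KL},
\]
% where A is the connection (gauge field) with curvature F(A), B is the
% Lie-algebra-valued 2-form, and \varphi_{IJKL} is a Lagrange multiplier field
% whose variation imposes the simplicity constraints on B. Solving those
% constraints in the "dual" sector reduces the first term to the Palatini form
% of the Einstein--Hilbert action.
```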
Plebanski action
[ "Physics" ]
214
[ "Relativity stubs", "Theory of relativity" ]
1,893,462
https://en.wikipedia.org/wiki/Penicillamine
Penicillamine, sold under the brand name of Cuprimine among others, is a medication primarily used for the treatment of Wilson's disease. It is also used for people with kidney stones who have high urine cystine levels, rheumatoid arthritis, and various heavy metal poisonings. It is taken by mouth. Penicillamine was approved for medical use in the United States in 1970. It is on the World Health Organization's List of Essential Medicines. Medical uses It is used as a chelating agent: In Wilson's disease, a rare genetic disorder of copper metabolism, penicillamine treatment relies on its binding to accumulated copper and elimination through urine. Penicillamine was the second line treatment for arsenic poisoning, after dimercaprol (BAL). It is no longer recommended. In cystinuria, a hereditary disorder in which high urine cystine levels lead to the formation of cystine stones, penicillamine binds with cysteine to yield a mixed disulfide which is more soluble than cystine. Penicillamine has been used to treat scleroderma. Penicillamine can be used as a disease-modifying antirheumatic drug (DMARD) to treat severe active rheumatoid arthritis in patients who have failed to respond to an adequate trial of conventional therapy, although it is rarely used today due to availability of TNF inhibitors and other agents, such as tocilizumab and tofacitinib. Penicillamine works by reducing numbers of T-lymphocytes, inhibiting macrophage function, decreasing IL-1, decreasing rheumatoid factor, and preventing collagen from cross-linking. Adverse effects Common side effects include rash, loss of appetite, nausea, diarrhea, and low white blood cell levels. Other serious side effects include liver problems, obliterative bronchiolitis, and myasthenia gravis. It is not recommended in people with lupus erythematosus. Use during pregnancy may result in harm to the baby. Penicillamine works by binding heavy metals; the resulting penicillamine–metal complexes are then removed from the body in the urine. Bone marrow suppression, dysgeusia, anorexia, vomiting, and diarrhea are the most common side effects, occurring in ~20–30% of the patients treated with penicillamine. Other possible adverse effects include: Nephropathy Hepatotoxicity Membranous glomerulonephritis Aplastic anemia (idiosyncratic) Antibody-mediated myasthenia gravis and Lambert–Eaton myasthenic syndrome, which may persist even after its withdrawal Drug-induced systemic lupus erythematosus Elastosis perforans serpiginosa Toxic myopathies Unwanted breast growth (macromastia) Oligospermia Chemistry Penicillamine is a trifunctional organic compound, consisting of a thiol, an amine, and a carboxylic acid. It is an amino acid structurally similar to cysteine, but with geminal dimethyl substituents α to the thiol. Like most amino acids, it is a colorless solid that exists in the zwitterionic form at physiological pH. Penicillamine is a chiral drug with one stereogenic center; the two enantiomers have distinctly different physiological effects. (S)-penicillamine (D-penicillamine, having (–) optical rotation) is antiarthritic. (R)-penicillamine (L-penicillamine, having (+) optical rotation) is toxic because it inhibits the action of pyridoxine (also known as vitamin B6). That enantiomer is a metabolite of penicillin but has no antibiotic properties itself. A variety of penicillamine–copper complex structures are known. History John Walshe first described the use of penicillamine in Wilson's disease in 1956. 
He had discovered the compound in the urine of patients (including himself) who had taken penicillin, and experimentally confirmed that it increased urinary copper excretion by chelation. He had initial difficulty convincing several world experts of the time (Denny-Brown and Cumings) of its efficacy, as they held that Wilson's disease was not primarily a problem of copper homeostasis but of amino acid metabolism, and that dimercaprol should be used as a chelator. Later studies confirmed both the copper-centered theory and the efficacy of D-penicillamine. Walshe also pioneered other chelators in Wilson's such as triethylene tetramine and tetrathiomolybdate. Penicillamine was first synthesized by John Cornforth under supervision of Robert Robinson. Penicillamine has been used in rheumatoid arthritis since the first successful case in 1964. Cost In the United States, Valeant raised the cost of the medication from about US$500 to US$24,000 per month in 2016. References External links Alpha-Amino acids Antirheumatic products Chelating agents Enantiopure drugs Human drug metabolites Nephrotoxins Non-proteinogenic amino acids Thiols World Health Organization essential medicines Wikipedia medicine articles ready to translate Disease-modifying antirheumatic drugs
Penicillamine
[ "Chemistry" ]
1,137
[ "Thiols", "Stereochemistry", "Enantiopure drugs", "Human drug metabolites", "Organic compounds", "Chelating agents", "Chemicals in medicine", "Process chemicals" ]
1,894,582
https://en.wikipedia.org/wiki/Dielectric%20spectroscopy
Dielectric spectroscopy (which falls in a subcategory of the impedance spectroscopy) measures the dielectric properties of a medium as a function of frequency. It is based on the interaction of an external field with the electric dipole moment of the sample, often expressed by permittivity. It is also an experimental method of characterizing electrochemical systems. This technique measures the impedance of a system over a range of frequencies, and therefore the frequency response of the system, including the energy storage and dissipation properties, is revealed. Often, data obtained by electrochemical impedance spectroscopy (EIS) is expressed graphically in a Bode plot or a Nyquist plot. Impedance is the opposition to the flow of alternating current (AC) in a complex system. A passive complex electrical system comprises both energy dissipater (resistor) and energy storage (capacitor) elements. If the system is purely resistive, then the opposition to AC or direct current (DC) is simply resistance. Materials or systems exhibiting multiple phases (such as composites or heterogeneous materials) commonly show a universal dielectric response, whereby dielectric spectroscopy reveals a power law relationship between the impedance (or the inverse term, admittance) and the frequency, ω, of the applied AC field. Almost any physico-chemical system, such as electrochemical cells, mass-beam oscillators, and even biological tissue possesses energy storage and dissipation properties. EIS examines them. This technique has grown tremendously in stature over the past few years and is now being widely employed in a wide variety of scientific fields such as fuel cell testing, biomolecular interaction, and microstructural characterization. Often, EIS reveals information about the reaction mechanism of an electrochemical process: different reaction steps will dominate at certain frequencies, and the frequency response shown by EIS can help identify the rate limiting step. Dielectric mechanisms There are a number of different dielectric mechanisms, connected to the way a studied medium reacts to the applied field (see the figure illustration). Each dielectric mechanism is centered around its characteristic frequency, which is the reciprocal of the characteristic time of the process. In general, dielectric mechanisms can be divided into relaxation and resonance processes. The most common, starting from high frequencies, are: Electronic polarization This resonant process occurs in a neutral atom when the electric field displaces the electron density relative to the nucleus it surrounds. This displacement occurs due to the equilibrium between restoration and electric forces. Electronic polarization may be understood by assuming an atom as a point nucleus surrounded by spherical electron cloud of uniform charge density. Atomic polarization Atomic polarization is observed when the nucleus of the atom reorients in response to the electric field. This is a resonant process. Atomic polarization is intrinsic to the nature of the atom and is a consequence of an applied field. Electronic polarization refers to the electron density and is a consequence of an applied field. Atomic polarization is usually small compared to electronic polarization. Dipole relaxation This originates from permanent and induced dipoles aligning to an electric field. 
Their orientation polarisation is disturbed by thermal noise (which misaligns the dipole vectors from the direction of the field), and the time needed for dipoles to relax is determined by the local viscosity. These two facts make dipole relaxation heavily dependent on temperature, pressure, and the chemical surroundings. Ionic relaxation Ionic relaxation comprises ionic conductivity and interfacial and space charge relaxation. Ionic conductivity predominates at low frequencies and introduces only losses to the system. Interfacial relaxation occurs when charge carriers are trapped at interfaces of heterogeneous systems. A related effect is Maxwell-Wagner-Sillars polarization, where charge carriers blocked at inner dielectric boundary layers (on the mesoscopic scale) or external electrodes (on a macroscopic scale) lead to a separation of charges. The charges may be separated by a considerable distance and therefore make contributions to the dielectric loss that are orders of magnitude larger than the response due to molecular fluctuations. Dielectric relaxation Dielectric relaxation as a whole is the result of the movement of dipoles (dipole relaxation) and electric charges (ionic relaxation) due to an applied alternating field, and is usually observed in the frequency range from 10² to 10¹⁰ Hz. Relaxation mechanisms are relatively slow compared to resonant electronic transitions or molecular vibrations, which usually have frequencies above 10¹² Hz. Principles Steady-state For a redox reaction R ⇌ O + e⁻, without mass-transfer limitation, the relationship between the current density j and the electrode overpotential η is given by the Butler–Volmer equation j = j_0 (exp(α_a f η) − exp(−α_c f η)), with f = F/(RT), where j_0 is the exchange current density and α_a and α_c are the symmetry factors (F is the Faraday constant, R the gas constant and T the temperature). The curve of j vs. η is not a straight line (Fig. 1); therefore, a redox reaction is not a linear system. Dynamic behavior Faradaic impedance In an electrochemical cell the faradaic impedance of an electrolyte-electrode interface is the joint electrical resistance and capacitance at that interface. Let us suppose that the Butler–Volmer relationship correctly describes the dynamic behavior of the redox reaction. The dynamic behavior of the redox reaction is then characterized by the so-called charge transfer resistance, defined by R_ct = ∂η/∂j. The value of the charge transfer resistance changes with the overpotential. For this simplest example the faradaic impedance is reduced to a resistance. It is worthwhile to notice that R_ct tends to RT/(F j_0 (α_a + α_c)) for η → 0. Double-layer capacitance An electrode-electrolyte interface behaves like a capacitance, called the electrochemical double-layer capacitance C_dl. The equivalent circuit for the redox reaction in Fig. 2 includes the double-layer capacitance as well as the charge transfer resistance R_ct. Another analog circuit commonly used to model the electrochemical double layer is called a constant phase element. The electrical impedance of this circuit is easily obtained by remembering that the impedance of a capacitance C is given by Z_C = 1/(iωC), where ω is the angular frequency of a sinusoidal signal (rad/s) and i is the imaginary unit. For the charge transfer resistance in parallel with the double-layer capacitance it is obtained: Z(ω) = R_ct/(1 + iω R_ct C_dl). The Nyquist diagram of the impedance of the circuit shown in Fig. 3 is a semicircle with a diameter R_ct and an angular frequency at its apex equal to 1/(R_ct C_dl) (Fig. 3). Other representations, such as Bode plots or Black plots, can be used. Ohmic resistance The ohmic resistance appears in series with the electrode impedance of the reaction, and the Nyquist diagram is translated to the right.
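The following is a small numerical sketch of the equivalent circuit just described (an ohmic resistance in series with a charge-transfer resistance in parallel with a double-layer capacitance). The component values are arbitrary assumptions chosen only to make the semicircle visible.

```python
# Impedance of a simple Randles-type circuit: R_ohm in series with (R_ct || C_dl).
# Component values are arbitrary illustrative assumptions.
import numpy as np

R_ohm = 10.0      # ohmic (solution) resistance, ohm
R_ct = 100.0      # charge transfer resistance, ohm
C_dl = 1e-5       # double-layer capacitance, farad

omega = np.logspace(0, 6, 400)                  # angular frequencies, rad/s
Z = R_ohm + R_ct / (1 + 1j * omega * R_ct * C_dl)

# The Nyquist plot (-Im Z vs. Re Z) traces a semicircle of diameter R_ct,
# translated to the right by R_ohm, with its apex at omega = 1/(R_ct * C_dl).
apex = omega[np.argmax(-Z.imag)]
print(f"expected apex frequency: {1 / (R_ct * C_dl):.0f} rad/s")
print(f"numerical apex frequency: {apex:.0f} rad/s")
print(f"low-frequency intercept (R_ohm + R_ct): {Z.real[0]:.1f} ohm")
print(f"high-frequency intercept (R_ohm): {Z.real[-1]:.1f} ohm")
```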
Universal dielectric response Under AC conditions with varying frequency ω, heterogeneous systems and composite materials exhibit a universal dielectric response, in which the overall admittance exhibits a region of power-law scaling with frequency. Measurement of the impedance parameters Plotting the Nyquist diagram with a potentiostat and an impedance analyzer, most often included in modern potentiostats, allows the user to determine the charge transfer resistance, the double-layer capacitance and the ohmic resistance. The exchange current density can be easily determined by measuring the impedance of a redox reaction for η → 0. Nyquist diagrams are made of several arcs for reactions more complex than redox reactions and with mass-transfer limitations. Applications Electrochemical impedance spectroscopy is used in a wide range of applications. In the paint and coatings industry, it is a useful tool to investigate the quality of coatings and to detect the presence of corrosion. It is used in many biosensor systems as a label-free technique to measure bacterial concentration and to detect dangerous pathogens such as Escherichia coli O157:H7 and Salmonella, and yeast cells. Electrochemical impedance spectroscopy is also used to analyze and characterize different food products. Some examples are the assessment of food–package interactions, the analysis of milk composition, the characterization and the determination of the freezing end-point of ice-cream mixes, the measurement of meat ageing, the investigation of ripeness and quality in fruits and the determination of free acidity in olive oil. In the field of human health monitoring the technique is better known as bioelectrical impedance analysis (BIA) and is used to estimate body composition as well as different parameters such as total body water and fat-free mass. Electrochemical impedance spectroscopy can be used to obtain the frequency response of batteries and electrocatalytic systems at relatively high temperatures. Biomedical sensors working in the microwave range rely on dielectric spectroscopy to detect changes in the dielectric properties over a frequency range, for example in non-invasive continuous blood glucose monitoring. The IFAC database can be used as a resource to get the dielectric properties of human body tissues. For heterogeneous mixtures such as suspensions, impedance spectroscopy can be used to monitor the particle sedimentation process. See also Debye relaxation Dielectric absorption, ultra-low frequency changes Dielectric loss Electrochemistry Ellipsometry Green–Kubo relations Induced polarization (IP) Kramers–Kronig relations Linear response function Potentiostat Spectral induced polarisation (SIP) References Electric and magnetic fields in matter Electrochemistry Impedance measurements Spectroscopy
Dielectric spectroscopy
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,878
[ "Molecular physics", "Spectrum (physical sciences)", "Physical quantities", "Instrumental analysis", "Electric and magnetic fields in matter", "Materials science", "Impedance measurements", "Electrochemistry", "Condensed matter physics", "Spectroscopy", "Electrical resistance and conductance" ]
1,895,800
https://en.wikipedia.org/wiki/Riemann%20series%20theorem
In mathematics, the Riemann series theorem, also called the Riemann rearrangement theorem, named after 19th-century German mathematician Bernhard Riemann, says that if an infinite series of real numbers is conditionally convergent, then its terms can be arranged in a permutation so that the new series converges to an arbitrary real number, or rearranged so that the new series diverges. This implies that a series of real numbers is absolutely convergent if and only if it is unconditionally convergent. As an example, the series 1 − 1 + 1/2 − 1/2 + 1/3 − 1/3 + ⋯ converges to 0 (for a sufficiently large number of terms, the partial sum gets arbitrarily near to 0); but replacing all terms with their absolute values gives 1 + 1 + 1/2 + 1/2 + 1/3 + 1/3 + ⋯, which sums to infinity. Thus, the original series is conditionally convergent, and can be rearranged (by taking the first two positive terms followed by the first negative term, followed by the next two positive terms and then the next negative term, etc.) to give a series that converges to a different sum: 1 + 1/2 − 1 + 1/3 + 1/4 − 1/2 + ⋯, which evaluates to ln 2. More generally, using this procedure with p positives followed by q negatives gives the sum ln(p/q). Other rearrangements give other finite sums or do not converge to any sum. History It is a basic result that the sum of finitely many numbers does not depend on the order in which they are added. For example, 1 + 2 + 3 = 3 + 2 + 1. The observation that the sum of an infinite sequence of numbers can depend on the ordering of the summands is commonly attributed to Augustin-Louis Cauchy in 1833. He analyzed the alternating harmonic series, showing that certain rearrangements of its summands result in different limits. Around the same time, Peter Gustav Lejeune Dirichlet highlighted that such phenomena are ruled out in the context of absolute convergence, and gave further examples of Cauchy's phenomenon for some other series which fail to be absolutely convergent. In the course of his analysis of Fourier series and the theory of Riemann integration, Bernhard Riemann gave a full characterization of the rearrangement phenomena. He proved that in the case of a convergent series which does not converge absolutely (known as conditional convergence), rearrangements can be found so that the new series converges to any arbitrarily prescribed real number. Riemann's theorem is now considered as a basic part of the field of mathematical analysis. For any series, one may consider the set of all possible sums, corresponding to all possible rearrangements of the summands. Riemann's theorem can be formulated as saying that, for a series of real numbers, this set is either empty, a single point (in the case of absolute convergence), or the entire real number line (in the case of conditional convergence). In this formulation, Riemann's theorem was extended by Paul Lévy and Ernst Steinitz to series whose summands are complex numbers or, even more generally, elements of a finite-dimensional real vector space. They proved that the set of possible sums forms a real affine subspace. Extensions of the Lévy–Steinitz theorem to series in infinite-dimensional spaces have been considered by a number of authors. Definitions A series ∑aₙ converges if there exists a value ℓ such that the sequence of the partial sums Sₙ = a₁ + a₂ + ⋯ + aₙ converges to ℓ. That is, for any ε > 0, there exists an integer N such that if n ≥ N, then |Sₙ − ℓ| ≤ ε. A series converges conditionally if the series ∑aₙ converges but the series ∑|aₙ| diverges. A permutation is simply a bijection from the set of positive integers to itself.
This means that if is a permutation, then for any positive integer there exists exactly one positive integer such that In particular, if , then . Statement of the theorem Suppose that is a sequence of real numbers, and that is conditionally convergent. Let be a real number. Then there exists a permutation such that There also exists a permutation such that The sum can also be rearranged to diverge to or to fail to approach any limit, finite or infinite. Alternating harmonic series Changing the sum The alternating harmonic series is a classic example of a conditionally convergent series:is convergent, whereasis the ordinary harmonic series, which diverges. Although in standard presentation the alternating harmonic series converges to , its terms can be arranged to converge to any number, or even to diverge. One instance of this is as follows. Begin with the series written in the usual order, and rearrange and regroup the terms as: where the pattern is: the first two terms are 1 and −1/2, whose sum is 1/2. The next term is −1/4. The next two terms are 1/3 and −1/6, whose sum is 1/6. The next term is −1/8. The next two terms are 1/5 and −1/10, whose sum is 1/10. In general, since every odd integer occurs once positively and every even integers occur once negatively (half of them as multiples of 4, the other half as twice odd integers), the sum is composed of blocks of three which can be simplified as: Hence, the above series can in fact be written as: which is half the sum originally, and can only equate to the original sequence if the value were zero. This series can be demonstrated to be greater than zero by the proof of Leibniz's theorem using that the second partial sum is half. Alternatively, the value of which it converges to, cannot be zero. Hence, the value of the sequence is shown to depend on the order in which series is computed. It is true that the sequence: contains all elements in the sequence: However, since the summation is defined as and , the order of the terms can influence the limit. Getting an arbitrary sum An efficient way to recover and generalize the result of the previous section is to use the fact that where γ is the Euler–Mascheroni constant, and where the notation o(1) denotes a quantity that depends upon the current variable (here, the variable is n) in such a way that this quantity goes to 0 when the variable tends to infinity. It follows that the sum of q even terms satisfies and by taking the difference, one sees that the sum of p odd terms satisfies Suppose that two positive integers a and b are given, and that a rearrangement of the alternating harmonic series is formed by taking, in order, a positive terms from the alternating harmonic series, followed by b negative terms, and repeating this pattern at infinity (the alternating series itself corresponds to , the example in the preceding section corresponds to a = 1, b = 2): Then the partial sum of order (a + b)n of this rearranged series contains positive odd terms and negative even terms, hence It follows that the sum of this rearranged series is Suppose now that, more generally, a rearranged series of the alternating harmonic series is organized in such a way that the ratio between the number of positive and negative terms in the partial sum of order n tends to a positive limit r. 
Then, the sum of such a rearrangement will be and this explains that any real number x can be obtained as sum of a rearranged series of the alternating harmonic series: it suffices to form a rearrangement for which the limit r is equal . Proof Existence of a rearrangement that sums to any positive real M Riemann's description of the theorem and its proof reads in full: This can be given more detail as follows. Recall that a conditionally convergent series of real terms has both infinitely many negative terms and infinitely many positive terms. First, define two quantities, and by: That is, the series includes all an positive, with all negative terms replaced by zeroes, and the series includes all an negative, with all positive terms replaced by zeroes. Since is conditionally convergent, both the 'positive' and the 'negative' series diverge. Let be any real number. Take just enough of the positive terms so that their sum exceeds . That is, let be the smallest positive integer such that This is possible because the partial sums of the series tend to . Now let be the smallest positive integer such that This number exists because the partial sums of tend to . Now continue inductively, defining as the smallest integer larger than such that and so on. The result may be viewed as a new sequence Furthermore the partial sums of this new sequence converge to . This can be seen from the fact that for any , with the first inequality holding due to the fact that has been defined as the smallest number larger than which makes the second inequality true; as a consequence, it holds that Since the right-hand side converges to zero due to the assumption of conditional convergence, this shows that the 'th partial sum of the new sequence converges to as increases. Similarly, the 'th partial sum also converges to . Since the 'th, 'th, ... 'th partial sums are valued between the 'th and 'th partial sums, it follows that the whole sequence of partial sums converges to . Every entry in the original sequence appears in this new sequence whose partial sums converge to . Those entries of the original sequence which are zero will appear twice in the new sequence (once in the 'positive' sequence and once in the 'negative' sequence), and every second such appearance can be removed, which does not affect the summation in any way. The new sequence is thus a permutation of the original sequence. Existence of a rearrangement that diverges to infinity Let be a conditionally convergent series. The following is a proof that there exists a rearrangement of this series that tends to (a similar argument can be used to show that can also be attained). The above proof of Riemann's original formulation only needs to be modified so that is selected as the smallest integer larger than such that and with selected as the smallest integer larger than such that The choice of on the left-hand sides is immaterial, as it could be replaced by any sequence increasing to infinity. Since converges to zero as increases, for sufficiently large there is and this proves (just as with the analysis of convergence above) that the sequence of partial sums of the new sequence diverge to infinity. 
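The constructive argument above translates directly into a simple greedy procedure. The sketch below applies it to the alternating harmonic series with a target value chosen arbitrarily for illustration; the finite cut-off on the number of terms is an assumption, since the construction itself is infinite.

```python
# Greedy rearrangement of a conditionally convergent series (here the
# alternating harmonic series) so that its partial sums approach a target M.
# The target and the number of emitted terms are arbitrary illustrative choices.

def rearranged_partial_sum(target, n_terms=100000):
    pos = iter(range(1, 10**9, 2))   # denominators of positive terms 1, 1/3, 1/5, ...
    neg = iter(range(2, 10**9, 2))   # denominators of negative terms 1/2, 1/4, 1/6, ...
    s = 0.0
    for _ in range(n_terms):
        if s <= target:
            s += 1.0 / next(pos)     # take positive terms while at or below the target
        else:
            s -= 1.0 / next(neg)     # take negative terms while above the target
    return s

target = 3.14159
print("partial sum after rearrangement:", rearranged_partial_sum(target))
# The partial sums overshoot the target by at most the size of the last term
# used, and those terms tend to 0, so the rearranged series converges to M.
```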
Existence of a rearrangement that fails to approach any limit, finite or infinite The above proof only needs to be modified so that is selected as the smallest integer larger than such that and with selected as the smallest integer larger than such that This directly shows that the sequence of partial sums contains infinitely many entries which are larger than 1, and also infinitely many entries which are less than , so that the sequence of partial sums cannot converge. Generalizations Sierpiński theorem Given an infinite series , we may consider a set of "fixed points" , and study the real numbers that the series can sum to if we are only allowed to permute indices in . That is, we letWith this notation, we have: If is finite, then . Here means symmetric difference. If then . If the series is an absolutely convergent sum, then for any . If the series is a conditionally convergent sum, then by Riemann series theorem, . Sierpiński proved that rearranging only the positive terms one can obtain a series converging to any prescribed value less than or equal to the sum of the original series, but larger values in general can not be attained. That is, let be a conditionally convergent sum, then contains , but there is no guarantee that it contains any other number. More generally, let be an ideal of , then we can define . Let be the set of all asymptotic density zero sets , that is, . It's clear that is an ideal of . Proof sketch: Given , a conditionally convergent sum, construct some such that and are both conditionally convergent. Then, rearranging suffices to converge to any number in . Filipów and Szuca proved that other ideals also have this property. Steinitz's theorem Given a converging series of complex numbers, several cases can occur when considering the set of possible sums for all series obtained by rearranging (permuting) the terms of that series: the series may converge unconditionally; then, all rearranged series converge, and have the same sum: the set of sums of the rearranged series reduces to one point; the series may fail to converge unconditionally; if S denotes the set of sums of those rearranged series that converge, then, either the set S is a line L in the complex plane C, of the form or the set S is the whole complex plane C. More generally, given a converging series of vectors in a finite-dimensional real vector space E, the set of sums of converging rearranged series is an affine subspace of E. See also Agnew's theorem — describes all rearrangements that preserve convergence to the same sum for all convergent series References External links Mathematical series Theorems in real analysis Permutations Summability theory Bernhard Riemann
Riemann series theorem
[ "Mathematics" ]
2,732
[ "Sequences and series", "Theorems in mathematical analysis", "Functions and mappings", "Mathematical structures", "Series (mathematics)", "Permutations", "Calculus", "Theorems in real analysis", "Mathematical objects", "Combinatorics", "Mathematical relations" ]
8,291,405
https://en.wikipedia.org/wiki/Stability%20conditions
The stability conditions of watercraft are the various standard loading configurations to which a ship, boat, or offshore platform may be subjected. They are recognized by classification societies such as Det Norske Veritas, Lloyd's Register and American Bureau of Shipping (ABS). Classification societies follow rules and guidelines laid down by the International Convention for the Safety of Life at Sea (SOLAS), the International Maritime Organization and the laws of the country under which the vessel is flagged, such as the Code of Federal Regulations. Stability is normally broken into two distinct types: intact and damaged. Intact stability The vessel is in its normal operational configuration. The hull is not breached in any compartment. The vessel will be expected to meet various stability criteria such as GMt (metacentric height), area under the GZ (righting lever) curve, range of stability, trim, etc. Intact conditions Lightship or light displacement The vessel is complete and ready for service in every respect, including permanent ballast, spare parts, lubricating oil, and working stores, but is without fuel, cargo, drinking or washing water, officers, crew, passengers, their effects, temporary ballast or any other variable load. Full load departure or full displacement Along with all the lightship loads, the vessel has all systems charged, meaning that all fresh water, cooling, lubricating, hydraulic and fuel service header tanks, piping and equipment systems are filled with their normal operating fluids. Crew and effects are at their normal values. Consumables (provisions, potable water and fuel) are at 100% capacity. Ammunition and/or cargo is at maximum capacity. The vessel is at its limiting draft or legal load line. Standard condition This is only for military vessels. Along with all the lightship loads, the vessel has all systems charged, meaning that all fresh water, cooling, lubricating, hydraulic and fuel service header tanks, piping and equipment systems are filled with their normal operating fluids. Crew and effects are at their normal values. Consumables (provisions, potable water and fuel) are at 50% capacity. Ammunition and/or cargo is at 100% capacity. This condition is normally used for range and speed calculations. Light arrival Along with all the lightship loads, the vessel has all systems charged, meaning that all fresh water, cooling, lubricating, hydraulic and fuel service header tanks, piping and equipment systems are filled with their normal operating fluids. Crew and effects are at their normal values. Consumables (provisions, potable water and fuel) are at 10% of full load. Ammunition and/or cargo is at 100% capacity. Damaged stability The vessel in the assessed "Worst Intact Condition" is analytically damaged by opening various combinations of watertight compartments to the sea. The number of compartments and their location are dictated by IMO regulations, SOLAS conventions, or other applicable rules. Typically these conditions are identified by the compartment(s) damaged, e.g., "Hold #3 and Water Ballast Tank 4 Port". See also Naval architecture Hull (watercraft) Society of Naval Architects and Marine Engineers Ship stability Displacement (ship) References External links nvr.navy.mil, definitions Displacement Naval architecture Ship measurements
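Since the intact criteria above refer to the metacentric height (GMt) and the righting lever (GZ), the following is a minimal sketch of how these quantities are commonly evaluated for a box-shaped hull at small heel angles. The hull dimensions and centre-of-gravity height are arbitrary assumptions, and the small-angle formula GZ ≈ GM·sin(heel) is only a first approximation.

```python
# Small-angle intact stability sketch for a box-shaped barge.
# Dimensions and centre of gravity are arbitrary illustrative assumptions.
import math

L, B, T = 60.0, 10.0, 3.0   # length, beam, draft in metres (assumed)
KG = 3.5                    # height of centre of gravity above keel, m (assumed)

volume = L * B * T                      # displaced volume, m^3
KB = T / 2                              # centre of buoyancy of a box-shaped hull
I_waterplane = L * B**3 / 12            # transverse waterplane moment of inertia
BM = I_waterplane / volume              # metacentric radius
GM = KB + BM - KG                       # transverse metacentric height (GMt)

print(f"GMt = {GM:.2f} m")
for heel_deg in (2, 5, 10):
    GZ = GM * math.sin(math.radians(heel_deg))   # small-angle righting lever
    print(f"heel {heel_deg:>2} deg: GZ ~ {GZ:.3f} m")
```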
Stability conditions
[ "Physics", "Mathematics", "Engineering" ]
689
[ "Scalar physical quantities", "Naval architecture", "Physical quantities", "Quantity", "Size", "Extensive quantities", "Marine engineering", "Volume", "Wikipedia categories named after physical quantities" ]
8,292,324
https://en.wikipedia.org/wiki/Solid%20solution%20strengthening
In metallurgy, solid solution strengthening is a type of alloying that can be used to improve the strength of a pure metal. The technique works by adding atoms of one element (the alloying element) to the crystalline lattice of another element (the base metal), forming a solid solution. The local nonuniformity in the lattice due to the alloying element makes plastic deformation more difficult by impeding dislocation motion through stress fields. In contrast, alloying beyond the solubility limit can form a second phase, leading to strengthening via other mechanisms (e.g. the precipitation of intermetallic compounds). Types Depending on the size of the alloying element, a substitutional solid solution or an interstitial solid solution can form. In both cases, atoms are visualised as rigid spheres where the overall crystal structure is essentially unchanged. The rationale of crystal geometry to atom solubility prediction is summarized in the Hume-Rothery rules and Pauling's rules. Substitutional solid solution strengthening occurs when the solute atom is large enough that it can replace solvent atoms in their lattice positions. Some alloying elements are only soluble in small amounts, whereas some solvent and solute pairs form a solution over the whole range of binary compositions. Generally, higher solubility is seen when solvent and solute atoms are similar in atomic size (15% according to the Hume-Rothery rules) and adopt the same crystal structure in their pure form. Examples of completely miscible binary systems are Cu-Ni and the Ag-Au face-centered cubic (FCC) binary systems, and the Mo-W body-centered cubic (BCC) binary system. Interstitial solid solutions form when the solute atom is small enough (radii up to 57% the radii of the parent atoms) to fit at interstitial sites between the solvent atoms. The atoms crowd into the interstitial sites, causing the bonds of the solvent atoms to compress and thus deform (this rationale can be explained with Pauling's rules). Elements commonly used to form interstitial solid solutions include H, Li, Na, N, C, and O. Carbon in iron (steel) is one example of interstitial solid solution. Mechanism The strength of a material is dependent on how easily dislocations in its crystal lattice can be propagated. These dislocations create stress fields within the material depending on their character. When solute atoms are introduced, local stress fields are formed that interact with those of the dislocations, impeding their motion and causing an increase in the yield stress of the material, which means an increase in strength of the material. This gain is a result of both lattice distortion and the modulus effect. When solute and solvent atoms differ in size, local stress fields are created that can attract or repel dislocations in their vicinity. This is known as the size effect. By relieving tensile or compressive strain in the lattice, the solute size mismatch can put the dislocation in a lower energy state. In substitutional solid solutions, these stress fields are spherically symmetric, meaning they have no shear stress component. As such, substitutional solute atoms do not interact with the shear stress fields characteristic of screw dislocations. Conversely, in interstitial solid solutions, solute atoms cause a tetragonal distortion, generating a shear field that can interact with edge, screw, and mixed dislocations. The attraction or repulsion of the dislocation to the solute atom depends on whether the atom sits above or below the slip plane. 
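As a quick illustration of the size criterion of the Hume-Rothery rules mentioned above, the sketch below compares atomic radii for a few solvent-solute pairs. The radius values are approximate textbook figures and the 15% threshold is applied only as a screening heuristic, not a guarantee of solubility.

```python
# Screening check of the Hume-Rothery size rule (size mismatch below ~15%
# favours substitutional solubility). Radii are approximate values in picometres
# and should be treated as illustrative, not authoritative.

ATOMIC_RADIUS_PM = {"Cu": 128, "Ni": 125, "Ag": 144, "Au": 144,
                    "Fe": 126, "C": 70, "Mo": 139, "W": 139}

def size_mismatch(solvent, solute):
    r1, r2 = ATOMIC_RADIUS_PM[solvent], ATOMIC_RADIUS_PM[solute]
    return abs(r1 - r2) / r1

for solvent, solute in [("Cu", "Ni"), ("Ag", "Au"), ("Mo", "W"), ("Fe", "C")]:
    mismatch = size_mismatch(solvent, solute)
    verdict = ("substitutional solution plausible" if mismatch < 0.15
               else "fails the substitutional size rule (interstitial or second phase likely)")
    print(f"{solute} in {solvent}: mismatch {mismatch:.1%} -> {verdict}")
```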
For example, consider an edge dislocation encountering a smaller solute atom above its slip plane. In this case, the interaction energy is negative, resulting in attraction of the dislocation to the solute; this is because the dislocation energy is reduced by the compressed volume lying above the dislocation core. If the solute atom were positioned below the slip plane, the dislocation would be repelled by the solute. However, the overall interaction energy between an edge dislocation and a smaller solute is negative, because the dislocation spends more time at sites with attractive energy. This is also true for solute atoms larger than the solvent atoms. Thus, the interaction energy dictated by the size effect is generally negative. The elastic modulus of the solute atom can also determine the extent of strengthening. For a “soft” solute with elastic modulus lower than that of the solvent, the interaction energy due to modulus mismatch (Umodulus) is negative, which reinforces the size interaction energy (Usize). In contrast, Umodulus is positive for a “hard” solute, which results in a lower total interaction energy than for a soft atom, even though the interaction force is negative (attractive) in both cases as the dislocation approaches the solute. The maximum force (Fmax) necessary to tear the dislocation away from the lowest-energy state (i.e. the solute atom) is greater for the soft solute than for the hard one. As a result, a soft solute will strengthen a crystal more than a hard solute, owing to the synergistic strengthening obtained by combining both size and modulus effects. The elastic interaction effects (i.e. size and modulus effects) dominate solid-solution strengthening for most crystalline materials. However, other effects, including charge and stacking fault effects, may also play a role. For ionic solids, where electrostatic interaction dictates bond strength, the charge effect is also important. For example, the addition of a divalent ion to a monovalent material may strengthen the electrostatic interaction between the solute and the charged matrix atoms that comprise a dislocation. However, this strengthening is to a lesser extent than the elastic strengthening effects. For materials containing a higher density of stacking faults, solute atoms may interact with the stacking faults either attractively or repulsively. This lowers the stacking fault energy, leading to repulsion of the partial dislocations, which thus makes the material stronger. Surface carburizing, or case hardening, is one example of solid solution strengthening in which the density of solute carbon atoms is increased close to the surface of the steel, resulting in a gradient of carbon atoms throughout the material. This provides superior mechanical properties to the surface of the steel without having to use a higher-cost material for the component. Governing equations Solid solution strengthening increases the yield strength of the material by increasing the shear stress Δτ required to move dislocations. Δτ grows with the concentration c of the solute atoms, the shear modulus G, the magnitude b of the Burgers vector, and the lattice strain ε due to the solute. The strain ε is composed of two terms, one describing the lattice distortion and the other the local modulus change, combined through a constant that depends on the solute atoms. The lattice distortion term can be described as ε_a = (1/a)(da/dc), where a is the lattice parameter of the material.
Meanwhile, the local modulus change is captured in the analogous expression ε_G = (1/G)(dG/dc), where G is the shear modulus of the solute material. Implications In order to achieve noticeable material strengthening via solution strengthening, one should alloy with solutes of higher shear modulus, hence increasing the local shear modulus in the material. In addition, one should alloy with elements of different equilibrium lattice constants. The greater the difference in lattice parameter, the higher the local stress fields introduced by alloying. Alloying with elements of higher shear modulus or of very different lattice parameters will increase the stiffness and introduce local stress fields, respectively. In either case, dislocation propagation will be hindered at these sites, impeding plasticity and increasing the yield strength proportionally with solute concentration. Solid solution strengthening depends on: the concentration of solute atoms, the shear modulus of the solute atoms, the size of the solute atoms, and the valency of the solute atoms (for ionic materials). For many common alloys, rough experimental fits can be found for the added strengthening, of the form Δσ_s = k_s c^n, where k_s is a solid solution strengthening coefficient, c is the concentration of solute in atomic fractions, and n is an empirical exponent (commonly around 1/2 to 2/3). Nevertheless, one should not add so much solute as to precipitate a new phase. This occurs if the concentration of the solute reaches a certain critical point given by the binary system phase diagram. This critical concentration therefore puts a limit on the amount of solid solution strengthening that can be achieved with a given material. Examples Aluminum alloys Aluminum alloys are an example in which solid solution strengthening is achieved by adding magnesium and manganese to the aluminum matrix. Commercially, Mn is added in the AA3xxx series and Mg in the AA5xxx series. Mn additions to aluminum alloys assist in the recrystallization and recovery of the alloy, which also influences the grain size. Both of these systems are used in low- to medium-strength applications, with appreciable formability and corrosion resistance. Nickel-based superalloys Many nickel-based superalloys depend on solid solution as a strengthening mechanism. The most popular example is the Inconel family, many of whose alloys contain chromium and iron together with additions of cobalt, molybdenum, niobium, and titanium. Nickel-based superalloys are well known for their intensive industrial use, especially in the aeronautical and aerospace industries, due to their superior mechanical and corrosion properties at high temperatures. An example of the industrial use of nickel-based superalloys is turbine blades. One such alloy, known as MAR-M200, is solid solution strengthened by chromium, tungsten and cobalt in the matrix and is also precipitation hardened by carbide and boride precipitates at the grain boundaries. A key factor for these turbine blades is the grain size: an increase in grain size can lead to a significant reduction in the strain rate. An example of this reduced strain rate in MAR-M200 is the comparison between material with a grain size of 100 μm and material with a grain size of 10 mm. This reduced strain rate is extremely important for turbine blade operation, because the blades undergo significant mechanical stress and high temperatures that can lead to the onset of creep deformation.
Therefore, precise control of grain size in nickel-based superalloys is key to creep resistance, mechanical reliability, and longevity. Grain size can be controlled through manufacturing techniques such as directional solidification and single-crystal casting. Stainless steel Stainless steel is one of the most commonly used metals in many industries. Solid solution strengthening of steel is one of the mechanisms used to enhance the properties of the alloy. Austenitic steels mainly contain chromium, nickel, molybdenum, and manganese. They are used mostly for cookware, kitchen equipment, and marine applications because of their good corrosion resistance in saline environments. Titanium alloys Titanium and titanium alloys are widely used in aerospace, medical, and maritime applications. The best-known titanium alloy that relies on solid solution strengthening is Ti-6Al-4V. The addition of oxygen to pure titanium also provides solid solution strengthening, whereas adding it to the Ti-6Al-4V alloy does not have the same influence. Copper alloys Bronze and brass are both copper alloys that are solid solution strengthened. Bronze is the result of adding about 12% tin to copper, while brass is the result of adding about 34% zinc to copper. Both of these alloys are used in coin production, ship hardware, and art. See also Strength of materials Strengthening mechanisms of materials References External links The Strengthening of Iron and Steel Metallurgy Strengthening mechanisms of materials
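As a rough numerical illustration of the Fleischer-type governing relation sketched above, the following Python snippet estimates the increase in shear stress from combined size and modulus mismatch terms. The functional form is the scaling named in the text; the prefactor Z and all example numbers (matrix shear modulus, mismatch parameters, solute concentrations) are assumptions chosen only to show the square-root dependence on concentration, not data for any particular alloy.

```python
# Rough sketch of a Fleischer-type solid solution strengthening scaling:
#   delta_tau ~ G * eps**1.5 * sqrt(c) / Z
# where eps combines the size and modulus mismatch terms described above.
# The prefactor Z and all numbers below are illustrative assumptions.
import math

def lattice_strain(eps_G, eps_a, beta=3.0):
    """Combined mismatch parameter, eps = |eps_G - beta * eps_a| (beta is assumed)."""
    return abs(eps_G - beta * eps_a)

def delta_tau(G, c, eps, Z=700.0):
    """Approximate increase in shear stress (Pa) at solute atomic fraction c."""
    return G * eps ** 1.5 * math.sqrt(c) / Z

G_matrix = 26e9                      # Pa, roughly an aluminum-like matrix (assumed)
eps = lattice_strain(eps_G=0.9, eps_a=0.1)

for c in (0.005, 0.01, 0.02, 0.05):  # solute atomic fractions
    print(f"c = {c:5.3f}  ->  delta_tau ~ {delta_tau(G_matrix, c, eps) / 1e6:5.1f} MPa")
```

Running the loop shows the strengthening increment growing with the square root of solute concentration, consistent with the qualitative statement that yield strength rises with solute content up to the solubility limit.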
Solid solution strengthening
[ "Chemistry", "Materials_science", "Engineering" ]
2,488
[ "Strengthening mechanisms of materials", "Metallurgy", "Materials science", "nan" ]
8,293,816
https://en.wikipedia.org/wiki/Countably%20compact%20space
In mathematics a topological space is called countably compact if every countable open cover has a finite subcover. Equivalent definitions A topological space X is called countably compact if it satisfies any of the following equivalent conditions: (1) Every countable open cover of X has a finite subcover. (2) Every infinite set A in X has an ω-accumulation point in X. (3) Every sequence in X has an accumulation point in X. (4) Every countable family of closed subsets of X with an empty intersection has a finite subfamily with an empty intersection. (1) ⇒ (2): Suppose (1) holds and A is an infinite subset of X without an ω-accumulation point. By taking a subset of A if necessary, we can assume that A is countable. Every x in X has an open neighbourhood O_x such that O_x ∩ A is finite (possibly empty), since x is not an ω-accumulation point. For every finite subset F of A define O_F as the union of all the O_x with O_x ∩ A = F. Every O_x is a subset of one of the O_F, so the O_F cover X. Since there are countably many of them, the O_F form a countable open cover of X. But every O_F intersects A in a finite subset (namely F), so finitely many of them cannot cover A, let alone X. This contradiction proves (2). (2) ⇒ (3): Suppose (2) holds, and let (x_n) be a sequence in X. If the sequence has a value x that occurs infinitely many times, that value is an accumulation point of the sequence. Otherwise, every value in the sequence occurs only finitely many times and the set {x_n : n ∈ N} is infinite and so has an ω-accumulation point x. That x is then an accumulation point of the sequence, as is easily checked. (3) ⇒ (1): Suppose (3) holds and {O_n : n ∈ N} is a countable open cover without a finite subcover. Then for each n we can choose a point x_n that is not in O_1 ∪ … ∪ O_n. The sequence (x_n) has an accumulation point x and that x is in some O_k. But then O_k is a neighborhood of x that does not contain any of the x_n with n ≥ k, so x is not an accumulation point of the sequence after all. This contradiction proves (1). (4) ⇔ (1): Conditions (1) and (4) are easily seen to be equivalent by taking complements. Examples The first uncountable ordinal (with the order topology) is an example of a countably compact space that is not compact. Properties Every compact space is countably compact. A countably compact space is compact if and only if it is Lindelöf. Every countably compact space is limit point compact. For T1 spaces, countable compactness and limit point compactness are equivalent. Every sequentially compact space is countably compact. The converse does not hold. For example, the product of continuum-many closed intervals with the product topology is compact and hence countably compact; but it is not sequentially compact. For first-countable spaces, countable compactness and sequential compactness are equivalent. More generally, the same holds for sequential spaces. For metrizable spaces, countable compactness, sequential compactness, limit point compactness and compactness are all equivalent. The example of the set of all real numbers with the standard topology shows that neither local compactness nor σ-compactness nor paracompactness imply countable compactness. Closed subspaces of a countably compact space are countably compact. The continuous image of a countably compact space is countably compact. Every countably compact space is pseudocompact. In a countably compact space, every locally finite family of nonempty subsets is finite. Every countably compact paracompact space is compact. More generally, every countably compact metacompact space is compact. Every countably compact Hausdorff first-countable space is regular. 
Every normal countably compact space is collectionwise normal. The product of a compact space and a countably compact space is countably compact. The product of two countably compact spaces need not be countably compact. See also Sequentially compact space Compact space Limit point compact Lindelöf space Notes References Properties of topological spaces Compactness (mathematics)
Countably compact space
[ "Mathematics" ]
859
[ "Properties of topological spaces", "Topological spaces", "Topology", "Space (mathematics)" ]
8,293,820
https://en.wikipedia.org/wiki/Sequentially%20compact%20space
In mathematics, a topological space X is sequentially compact if every sequence of points in X has a convergent subsequence converging to a point in X. Every metric space is naturally a topological space, and for metric spaces, the notions of compactness and sequential compactness are equivalent (if one assumes countable choice). However, there exist sequentially compact topological spaces that are not compact, and compact topological spaces that are not sequentially compact. Examples and properties The space of all real numbers with the standard topology is not sequentially compact; the sequence given by x_n = n for all natural numbers n is a sequence that has no convergent subsequence. If a space is a metric space, then it is sequentially compact if and only if it is compact. The first uncountable ordinal with the order topology is an example of a sequentially compact topological space that is not compact. The product of continuum-many copies of the closed unit interval is an example of a compact space that is not sequentially compact. Related notions A topological space X is said to be limit point compact if every infinite subset of X has a limit point in X, and countably compact if every countable open cover has a finite subcover. In a metric space, the notions of sequential compactness, limit point compactness, countable compactness and compactness are all equivalent (if one assumes the axiom of choice). In a sequential (Hausdorff) space, sequential compactness is equivalent to countable compactness. There is also a notion of a one-point sequential compactification: the idea is that the non-convergent sequences should all converge to the extra point. See also Notes References Steen, Lynn A. and Seebach, J. Arthur Jr.; Counterexamples in Topology, Holt, Rinehart and Winston (1970). Compactness (mathematics) Properties of topological spaces
Sequentially compact space
[ "Mathematics" ]
385
[ "Properties of topological spaces", "Space (mathematics)", "Topology stubs", "Topological spaces", "Topology" ]
8,294,746
https://en.wikipedia.org/wiki/Smale%27s%20problems
Smale's problems is a list of eighteen unsolved problems in mathematics proposed by Steve Smale in 1998 and republished in 1999. Smale composed this list in reply to a request from Vladimir Arnold, then vice-president of the International Mathematical Union, who asked several mathematicians to propose a list of problems for the 21st century. Arnold's inspiration came from the list of Hilbert's problems that had been published at the beginning of the 20th century. Table of problems In later versions, Smale also listed three additional problems, "that don't seem important enough to merit a place on our main list, but it would still be nice to solve them:" Mean value problem Is the three-sphere a minimal set (Gottschalk's conjecture)? Is an Anosov diffeomorphism of a compact manifold topologically the same as the Lie group model of John Franks? See also Millennium Prize Problems Simon problems Taniyama's problems Hilbert's problems Thurston's 24 questions References Unsolved problems in mathematics
Smale's problems
[ "Mathematics" ]
216
[ "Unsolved problems in mathematics", "Mathematical problems" ]
8,295,165
https://en.wikipedia.org/wiki/Buparvaquone
Buparvaquone is a naphthoquinone antiprotozoal drug related to atovaquone. It is a promising compound for the therapy and prophylaxis of all forms of theileriosis. Buparvaquone has been shown to have anti-leishmanial activity in vitro. It can be used to treat the protozoa that cause bovine East Coast fever in vitro, along with the only other substance known to do so – Peganum harmala. It is the only really effective commercial therapeutic product against bovine theileriosis, and has been used for this purpose since the late 1980s. Industrial production It was first produced in Great Britain, then in Germany. Its patent expired in the mid-2000s, after which it was produced in other countries such as India and Iran. Use in bovine theileriosis Using a single dose of 2.5 mg/kg, the recovery rate of curable cases is 90 to 98%. In tropical theileriosis, a dosage of 2.0 mg/kg has the same efficacy. Body temperature returns to normal in two to five days. Parasitemia falls from 12% on day 0 to 5% the next day, then to 1% by day 5, and is cleared by day 7. Viruses Buparvaquone has been shown to completely inhibit vaccinia virus in a cell-based assay in a human cell line. Molecular target Buparvaquone resistance appears to be associated with parasite mutations in the Qo quinone-binding site of mitochondrial cytochrome b. Its mode of action is thus likely to be similar to that of the antimalarial drug atovaquone, a similar 2-hydroxy-1,4-naphthoquinone that binds to the Qo site of cytochrome b, thus inhibiting coenzyme Q – cytochrome c reductase. References Antiprotozoal agents 1,4-Naphthoquinones
Buparvaquone
[ "Biology" ]
399
[ "Antiprotozoal agents", "Biocides" ]
18,655,162
https://en.wikipedia.org/wiki/Two-dimensional%20infrared%20spectroscopy
Two-dimensional infrared spectroscopy (2D IR) is a nonlinear infrared spectroscopy technique that has the ability to correlate vibrational modes in condensed-phase systems. This technique provides information beyond linear infrared spectra by spreading the vibrational information along multiple axes, yielding a frequency correlation spectrum. A frequency correlation spectrum can offer structural information such as vibrational mode coupling and anharmonicities, along with chemical dynamics such as energy transfer rates and molecular dynamics, with femtosecond time resolution. 2D IR experiments have only become possible with the development of ultrafast lasers and the ability to generate femtosecond infrared pulses. Systems studied Among the many systems studied with infrared spectroscopy are water, metal carbonyls, short polypeptides, proteins, perovskite solar cells, and DNA oligomers. Experimental approaches There are two main approaches to two-dimensional spectroscopy: the Fourier-transform method, in which the data is collected in the time domain and then Fourier-transformed to obtain a frequency-frequency 2D correlation spectrum, and the frequency-domain approach, in which all the data is collected directly in the frequency domain. Time domain The time-domain approach consists of applying two pump pulses. The first pulse creates a coherence between the vibrational modes of the molecule, and the second pulse, arriving a delay t1 later, creates a population, effectively storing information in the molecules. After a determined waiting time, ranging from zero to a few hundred picoseconds, an interaction with a third pulse again creates a coherence, which, due to an oscillating dipole, radiates an infrared signal. The radiated signal is heterodyned with a reference pulse in order to retrieve frequency and phase information; the signal is usually collected in the frequency domain using a spectrometer, yielding the detection frequency ω3. A Fourier transform along the delay t1 then yields an (ω1, ω3) correlation spectrum. In all these measurements phase stability among the pulses has to be preserved. Recently, pulse shaping approaches were developed to simplify overcoming this challenge. Frequency domain Similarly, in the frequency-domain approach, a narrowband pump pulse is applied and, after a certain waiting time, a broadband pulse probes the system. A 2D IR correlation spectrum is obtained by plotting the probe frequency spectrum at each pump frequency. Spectral interpretation After the waiting time in the experiment, it is possible to reach doubly excited states. This results in the appearance of an overtone peak. The anharmonicity of a vibration can be read from the spectra as the distance between the diagonal peak and the overtone peak. One obvious advantage of 2D IR spectra over normal linear absorption spectra is that they reveal the coupling between different states. This, for example, allows for the determination of the angle between the involved transition dipoles. The true power of 2D IR spectroscopy is that it allows dynamical processes such as chemical exchange, motional narrowing, vibrational population transfer, and molecular reorientation to be followed on the sub-picosecond time scale. It has, for example, been used successfully to study hydrogen bond formation and breaking and to determine the transition-state geometry of a structural rearrangement in an iron carbonyl compound. Spectral interpretation can be successfully assisted with theoretical methods developed for this purpose. 
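As a minimal illustration of the Fourier-transform (time-domain) processing described above, the sketch below builds a synthetic two-oscillator signal on a (t1, t3) grid and Fourier-transforms it into a frequency-frequency map; terms with differing frequencies along the two axes mimic the cross peaks produced by coupled modes. The frequencies, dephasing time, and grid parameters are arbitrary assumptions for demonstration, not parameters of any real system or of any published analysis package.

```python
# Toy Fourier-transform processing: time-domain signal S(t1, t3) -> spectrum S(w1, w3).
# Two coupled oscillators are simulated; all parameters are illustrative assumptions.
import numpy as np

n, dt = 512, 0.005                   # number of time steps and step size in ps (assumed)
t = np.arange(n) * dt
T1, T3 = np.meshgrid(t, t, indexing="ij")
T2 = 1.0                             # dephasing time in ps (assumed)
w_a, w_b = 2000.0, 2060.0            # mode frequencies in cm^-1 (assumed)
cm_to_cyc = 0.03                     # ~cycles per ps for a 1 cm^-1 mode

signal = np.zeros_like(T1)
for w1 in (w_a, w_b):
    for w3 in (w_a, w_b):            # terms with w1 != w3 mimic cross peaks from coupling
        signal += (np.cos(2 * np.pi * cm_to_cyc * w1 * T1)
                   * np.cos(2 * np.pi * cm_to_cyc * w3 * T3)
                   * np.exp(-(T1 + T3) / T2))

spectrum = np.abs(np.fft.rfft2(signal))          # 2D FFT -> frequency-frequency map
f1 = np.fft.fftfreq(n, d=dt)                     # excitation axis (cycles/ps)
f3 = np.fft.rfftfreq(n, d=dt)                    # detection axis (cycles/ps)
i, j = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print(f"strongest peak near (w1, w3) ~ ({abs(f1[i]) / cm_to_cyc:.0f}, "
      f"{f3[j] / cm_to_cyc:.0f}) cm^-1")
```

In a real experiment the signal would of course come from the heterodyned detection described above rather than from a synthetic model, and the waiting-time dependence would be recorded as a third dimension.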
Currently, two freely available packages exist for modeling 2D IR spectra: the SPECTRON program developed by the Mukamel group (University of California, Irvine) and the NISE program developed by the Jansen group (University of Groningen). Solvent effect Consideration of the solvent effect has been shown to be crucial for effectively describing vibrational coupling in solution, since the solvent modifies vibrational frequencies, transition probabilities, and couplings. Computer simulations can reveal the spectral signatures arising from solvent degrees of freedom and their change upon water reorganization. See also Two-dimensional correlation analysis References Infrared spectroscopy
Two-dimensional infrared spectroscopy
[ "Physics", "Chemistry" ]
771
[ "Infrared spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
18,657,630
https://en.wikipedia.org/wiki/Drug%20repositioning
Drug repositioning (also called drug repurposing) involves the investigation of existing drugs for new therapeutic purposes. Repurposing achievements Repurposing generics can have groundbreaking effects for patients: 35% of 'transformative' drugs approved by the US FDA are repurposed products. Repurposing is especially relevant for rare or neglected diseases. A number of successes have been achieved, the foremost including sildenafil (Viagra) for erectile dysfunction and pulmonary hypertension and thalidomide for leprosy and multiple myeloma. Clinical trials have been performed on posaconazole and ravuconazole for Chagas disease. The antifungal agents clotrimazole and ketoconazole have also been investigated for anti-trypanosome therapy. Successful repositioning of antimicrobials has led to the discovery of broad-spectrum therapeutics, which are effective against multiple infection types. Strategy Drug repositioning is a "universal strategy" for neglected diseases because 1) the reduced number of required clinical trial steps can cut the time and cost for the medicine to reach market, 2) existing pharmaceutical supply chains can facilitate "formulation and distribution" of the drug, 3) the known possibility of combining it with other drugs can allow more effective treatment, 4) repositioning can facilitate the discovery of "new mechanisms of action for old drugs and new classes of medicines", and 5) the removal of “activation barriers” in the early research stages can enable a project to advance rapidly into disease-oriented research. Often considered a serendipitous approach, in which repurposable drugs are discovered by chance, drug repurposing has benefited heavily from advances in human genomics, network biology, and chemoproteomics. It is now possible to identify serious repurposing candidates by finding genes involved in a specific disease and checking whether they interact, in the cell, with other genes that are targets of known drugs. It has been shown that drugs against targets supported by human genetics are twice as likely to succeed as drugs overall in the pharmaceutical pipeline. Drug repurposing can be a time- and cost-effective strategy for treating serious diseases such as cancer, and it was applied as a means of solution-finding to combat the COVID-19 pandemic. Computational drug repurposing is the in silico screening of approved drugs for use against new indications. It can use molecular, clinical, or biophysical data. Electronic health records and real-world evidence gained popularity in drug repurposing, for instance for COVID-19. Computational drug repurposing is expected to reduce drug development costs and time. In 2020, during the COVID-19 pandemic, a European project, Exscalate4Cov, conducted drug repurposing experiments, leading to the identification of raloxifene as a possible candidate for treating early-stage COVID-19 patients. Challenges According to a 2022 systematic review, inadequate resources (financial and subject matter expertise), barriers to accessing shelved compounds and their trial data, and the lack of traditional IP protections for repurposed compounds are the key barriers to drug repurposing. There is a lack of financial incentives for pharmaceutical companies to explore the repurposing of generic drugs. Indeed, doctors can prescribe the drug off-label and pharmacists can switch the branded version for a cheaper generic alternative. 
According to pharmacologist Alasdair Breckenridge and patent judge Robin Jacob, this issue is so significant that: "If a generic version of a drug is available, developers have little or no opportunity to recoup their investment in the development of the drug for a new indication". Drug repositioning presents other challenges. First, the dosage required for the treatment of a novel disease usually differs from that of its original target disease, and if this happens, the discovery team will have to begin from Phase I clinical trials, which effectively strips drug repositioning of its advantages over de novo drug discovery. Second, finding new formulation and distribution mechanisms for existing drugs in the areas affected by the novel disease rarely includes the efforts of "pharmaceutical and toxicological" scientists. Third, patent right issues can be very complicated for drug repurposing, owing to the lack of experts in the legal area of drug repositioning, the disclosure of repositioning online or via publications, and the extent of the novelty of the new drug purpose. Drug repurposing in psychiatry Drug repurposing is considered a rapid, cost-effective, and reduced-risk strategy for the development of new treatment options for psychiatric disorders as well. Bipolar disorder In bipolar disorder, repurposed drugs are emerging as feasible augmentation options. Several agents, all supported by a plausible biological rationale, have been evaluated. Evidence from meta-analyses showed that adjunctive allopurinol and tamoxifen were superior to placebo for mania, and add-on modafinil/armodafinil and pramipexole seemed to be effective for bipolar depression, while the efficacy of celecoxib and N-acetylcysteine appeared to be limited to certain outcomes. Further, meta-analytic evidence exists also for adjunctive melatonin and ramelteon in mania, and for add-on acetylsalicylic acid, pioglitazone, memantine, and inositol in bipolar depression, but findings were not significant. The generally low quality of evidence does not allow reliable recommendations to be made for the use of repurposed drugs in clinical practice, but some of these drugs have shown promising results and deserve further attention in research. See also COVID-19 drug repurposing research Chemoproteomics Exscalate4Cov References Further reading Drug development Drug discovery
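As a schematic illustration of the computational (in silico) screening idea described above, the sketch below scores candidate drugs by the overlap between their known target genes and a disease-associated gene set. The gene symbols and drug-target mappings are made-up placeholders; a real analysis would draw on curated databases and network-based or genetics-informed measures rather than simple set overlap.

```python
# Toy in-silico repurposing screen: rank drugs by overlap between their target genes
# and a disease-associated gene set. All gene/drug names below are fictional placeholders.

disease_genes = {"GENE_A", "GENE_B", "GENE_C", "GENE_D"}

drug_targets = {
    "drug_1": {"GENE_A", "GENE_X"},
    "drug_2": {"GENE_B", "GENE_C", "GENE_Y"},
    "drug_3": {"GENE_Z"},
}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two gene sets (0 = no overlap, 1 = identical)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

scores = {drug: jaccard(targets, disease_genes) for drug, targets in drug_targets.items()}
for drug, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{drug}: overlap score {score:.2f}")
```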
Drug repositioning
[ "Chemistry", "Biology" ]
1,200
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
18,658,020
https://en.wikipedia.org/wiki/Fludorex
Fludorex is a stimulant anorexic agent of the phenethylamine chemical class. References Stimulants Trifluoromethyl compounds Ethers Phenylethanolamine ethers Monoamine releasing agents
Fludorex
[ "Chemistry" ]
52
[ "Organic compounds", "Functional groups", "Ethers" ]
25,240,121
https://en.wikipedia.org/wiki/Multiplex%20polymerase%20chain%20reaction
Multiplex polymerase chain reaction (Multiplex PCR) refers to the use of polymerase chain reaction to amplify several different DNA sequences simultaneously (as if performing many separate PCR reactions all together in one reaction). This process amplifies DNA in samples using multiple primers and a temperature-mediated DNA polymerase in a thermal cycler. The primer design for all primer pairs has to be optimized so that all primer pairs can work at the same annealing temperature during PCR. Multiplex PCR was first described in 1988 as a method to detect deletions in the dystrophin gene. It has also been used with the steroid sulfatase gene. In 2008, multiplex PCR was used for analysis of microsatellites and SNPs. In 2020, RT-PCR multiplex assays were designed that combined multiple gene targets from the Centers for Disease Control and Prevention in a single reaction to increase molecular testing accessibility and throughput for SARS-CoV-2 diagnostics. Multiplex PCR uses multiple primer sets within a single PCR mixture to produce amplicons of varying sizes that are specific to different DNA sequences. By targeting multiple sequences at once, additional information may be gained from a single test run that otherwise would require several times the reagents and more time to perform. Annealing temperatures for each of the primer sets must be optimized to work correctly within a single reaction, and amplicon sizes, i.e., their base pair lengths, should be different enough to form distinct bands when visualized by gel electrophoresis. Alternatively, if amplicon sizes overlap, the different amplicons may be differentiated and visualised using primers labelled with different-coloured fluorescent dyes. Commercial multiplexing kits for PCR are available and used by many forensic laboratories to amplify degraded DNA samples. Applications Some of the applications of multiplex PCR include: Pathogen Identification High Throughput SNP Genotyping Mutation Analysis Gene Deletion Analysis Template Quantitation Linkage Analysis RNA Detection Forensic Studies Diet Analysis References Molecular biology Laboratory techniques Amplifiers Polymerase chain reaction
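To illustrate the requirement noted above that all primer pairs in a multiplex reaction work at a common annealing temperature, the sketch below estimates primer melting temperatures with the simple Wallace (2+4) rule and flags primers that fall outside a chosen window. The primer sequences and the acceptable window are arbitrary examples; real assay design relies on nearest-neighbor thermodynamic models and also checks for primer-dimer formation, cross-reactivity, and amplicon size separation.

```python
# Rough multiplex primer check using the Wallace rule: Tm ~ 2*(A+T) + 4*(G+C) degrees C.
# Sequences and the +/- 2 C window are illustrative assumptions only.

primers = {
    "target1_fwd": "AGCTGACCTGAAGTCA",
    "target1_rev": "TTGACCTGGAGAGCAT",
    "target2_fwd": "GGCCGATTACGTGGAA",
    "target2_rev": "CAGGTTCCAGTAACGC",
}

def wallace_tm(seq: str) -> float:
    """Approximate melting temperature (deg C) for short oligos (< ~20 nt)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

tms = {name: wallace_tm(seq) for name, seq in primers.items()}
target = sum(tms.values()) / len(tms)          # aim near the average Tm of the set
for name, tm in tms.items():
    flag = "" if abs(tm - target) <= 2.0 else "  <-- outside +/-2 C window, redesign"
    print(f"{name}: Tm ~ {tm:.0f} C{flag}")
```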
Multiplex polymerase chain reaction
[ "Chemistry", "Technology", "Biology" ]
450
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "nan", "Molecular biology", "Biochemistry", "Amplifiers" ]
25,241,870
https://en.wikipedia.org/wiki/Impedance%20pump
An impedance pump is a valveless pump consisting of an elastic tube connected on both ends to an inelastic tube. Tapping the end of a tube will cause flow of liquid inside the system. Very small versions of an impedance pump -- a micro impedance pump -- can be used as a micropump for lab-on-a-chip active microfluidics. References Pumps
Impedance pump
[ "Physics", "Chemistry" ]
83
[ "Pumps", "Hydraulics", "Physical systems", "Turbomachinery" ]
25,243,874
https://en.wikipedia.org/wiki/Magnetic%20translation
Magnetic translations are naturally defined operators acting on the wave function of a two-dimensional particle in a magnetic field. The motion of an electron in a magnetic field on a plane is described by the following four variables: the guiding center coordinates (X, Y) and the relative (cyclotron) coordinates. The guiding center coordinates are independent of the relative coordinates and, when quantized, satisfy [X, Y] = −iℓ_B², where ℓ_B = √(ħ/eB) is the magnetic length, which makes them mathematically similar to the position and momentum operators x̂ and p̂ in one-dimensional quantum mechanics. Much like the operators exp(ia x̂) and exp(ib p̂) acting on the wave function of a one-dimensional quantum particle generate shifts of its momentum or position, for a quantum particle in 2D in a magnetic field one considers magnetic translation operators of the form T(a, b) = exp[i(aX + bY)/ℓ_B²] for any pair of numbers (a, b). The magnetic translation operators corresponding to two different pairs (a, b) and (a′, b′) do not commute in general; they commute only up to a phase factor. References Magnetism Quantum magnetism
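The short derivation below is a sketch, assuming the operator form given above and the Baker–Campbell–Hausdorff relation for operators with a central commutator; it makes the non-commutativity explicit by showing that two magnetic translations commute only up to a phase set by the area they enclose.

```latex
% Sketch: phase picked up when two magnetic translations are exchanged.
% Assumes T(a,b) = exp[i(aX + bY)/\ell_B^2] and [X, Y] = -i\ell_B^2 as above.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $A = \tfrac{i}{\ell_B^{2}}(aX + bY)$ and $A' = \tfrac{i}{\ell_B^{2}}(a'X + b'Y)$.
Since $[X,Y] = -i\ell_B^{2}$ is a c-number, $[A,A']$ is central and
Baker--Campbell--Hausdorff gives
\begin{equation}
  e^{A}e^{A'} = e^{A'}e^{A}\,e^{[A,A']},
  \qquad
  [A,A'] = -\frac{ab'-a'b}{\ell_B^{4}}\,[X,Y] = \frac{i\,(ab'-a'b)}{\ell_B^{2}} .
\end{equation}
Hence
\begin{equation}
  T(a,b)\,T(a',b') = e^{\,i(ab'-a'b)/\ell_B^{2}}\; T(a',b')\,T(a,b),
\end{equation}
so the two translations commute only when the enclosed area $|ab'-a'b|$ is a
multiple of $2\pi\ell_B^{2}$, i.e.\ when it encloses an integer number of flux quanta.
\end{document}
```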
Magnetic translation
[ "Physics", "Materials_science" ]
162
[ "Quantum magnetism", "Quantum mechanics", "Condensed matter physics" ]
25,244,065
https://en.wikipedia.org/wiki/La%20Pedrera%20de%20R%C3%BAbies%20Formation
The La Pedrera de Rúbies Formation, also called La Pedrera de Meià, is an Early Cretaceous (late Berriasian to early Barremian) geologic formation in Catalonia, Spain. The formation crops out in the area of the Montsec in the Organyà Basin. At the La Pedrera de Meià locality, the formation consists of rhythmically laminated, lithographic limestones that formed in the distal areas of a large, shallow coastal lake. It is noted for the exceptional preservation of articulated small vertebrates and insects, similar to that of the Solnhofen Limestone. Fossil content The La Pedrera de Rúbies Formation has yielded the enantiornithine bird Noguerornis, the scincogekkomorph lizard Pedrerasaurus, two species of the teiid lizard Meyasaurus (M. fauri and M. crusafonti), the indeterminate avialan Ilerdopteryx, the frogs Neusibatrachus wilferti, Eodiscoglossus santonjae and Montsechobatrachus, and the crocodyliform Montsecosuchus, along with many insects and other arthropods, such as: Angarosphex lithographicu Archisphex catalunicus Artitocoblatta hispanica Chalicoridulum montsecensis Chrysobothris ballae Cionocoleus longicapitis Condalia woottoni Cretephialtites pedrerae Hirmoneura (Eohirmoneura) neli Hirmoneura richterae Iberoraphidia dividua Ilerdocossus pulcherrima Ilerdosphex wenzae Jarzembowskia edmundi Leridatoma pulcherrima Manlaya lacabrua Meiagaster cretaceus Meiatermes bertrani Mesoblattina colominasi Mesopalingea lerida Mimamontsecia cretacea Montsecbelus solutus Nanoraphidia lithographica Nogueroblatta fontllongae N. nana Pachypsyche vidali Pompilopterus montsecensis Proraphidia gomezi Prosyntexis montsecensis Pseudochrysobothris ballae Ptiolinites almuthae Vitisma occidentalis Cretaholocompsa montsecana Montsecosphex jarzembowskii Cretobestiola hispanica Angarosphex penyalveri Cretoserphus gomezi Bolbonectus lithographicus ?Anaglyphites pluricavus Palaeaeschna vidali Hispanochlorogomphus rossi Palaeouloborus lacasae Ichthyemidion vivaldi Correlation See also List of dinosaur-bearing rock formations List of stratigraphic units with few dinosaur genera Tremp Formation Baltic, Burmese, Dominican, Mexican amber References Bibliography Further reading A. P. Rasnitsyn and J. Ansorge. 2000. Two new Lower Cretaceous hymenopterous insects (Insecta: Hymenoptera) from Sierra del Montsec, Spain. Acta Geológica Hispánica 35:59-64 X. Martínez-Delclòs. 1993. Blátidos (Insecta, Blattodea) del Cretácico Inferior de España. Familias Mesoblattinidae, Blattulidae y Poliphagidae. Boletín Geológico y Minero 104:516-538 X. Martínez-Delclòs. 1990. Insectos del Cretácico inferior de Santa Maria de Meià (Lleida): colleción Lluís Marià Vidal i Carreras. Treballs del Museu de Geologia de Barcelona 1:91-116 P. E. S. Whalley and E. A. Jarzembowski. 1985. Fossil insects from the Lithographic Limestone Montsech (Late Jurassic-early Cretaceous), Lérida Province, Spain. Bulletin of the British Museum of Natural History (Geology) 38(5):381-412 J. E. Gomez Pallerola. 1979. Un ave y otras especies fósiles nuevas de la biofacies de Santa María de Meyá (Lérida). Boletín Geológico y Minero 90:333-346 Geologic formations of Spain Cretaceous Spain Lower Cretaceous Series of Europe Barremian Stage Hauterivian Stage Valanginian Stage Berriasian Stage Limestone formations Lacustrine deposits Formations Paleontology in Spain
La Pedrera de Rúbies Formation
[ "Physics" ]
966
[ "Amorphous solids", "Unsolved problems in physics", "Amber" ]
25,249,780
https://en.wikipedia.org/wiki/Landfill%20gas%20utilization
Landfill gas utilization is a process of gathering, processing, and treating the methane and other gases emitted from decomposing garbage to produce electricity, heat, fuels, and various chemical compounds. After fossil fuels and agriculture, landfill gas is the third largest human-generated source of methane. Compared to carbon dioxide, methane is 25 times more potent as a greenhouse gas. It is important not only to control its emission but, where conditions allow, use it to generate energy, thus offsetting the contribution of two major sources of greenhouse gases towards climate change. The number of landfill gas projects, which convert the gas into power, went from 399 in 2005 to 519 in 2009 in the United States, according to the U.S. Environmental Protection Agency. These projects are popular because they control energy costs and reduce greenhouse gas emissions. These projects collect the methane gas and treat it, so it can be used for electricity or upgraded to pipeline-grade gas to power homes, buildings, and vehicles. Generation Landfill gas (LFG) is generated through the degradation of municipal solid waste (MSW) and other biodegradable waste by microorganisms. Aerobic conditions (presence of oxygen) lead to predominantly carbon dioxide emissions. In anaerobic conditions, as is typical of landfills, methane and carbon dioxide are produced in a ratio of 60:40. Methane (CH4) is the important component of landfill gas, as it has a calorific value of 33.95 MJ/Nm3, which gives rise to energy generation benefits. The amount of methane that is produced varies significantly based on the composition of the waste. Most of the methane produced in MSW landfills is derived from food waste, composite paper, and corrugated cardboard, which comprise on average 19.4 ± 5.5%, 21.9 ± 5.2%, and 20.9 ± 7.1%, respectively, of MSW landfills in the United States. The rate of landfill gas production varies with the age of the landfill. There are 4 common phases that a section of an MSW landfill undergoes after placement. Typically, in a large landfill, different areas of the site will be at different stages at the same time. The landfill gas production rate will reach a maximum at around 5 years and then start to decline. Landfill gas follows first-order kinetic decay after the decline begins, with a k-value ranging from 0.02 yr-1 for arid conditions to 0.065 yr-1 for wet conditions. The Landfill Methane Outreach Program (LMOP) provides the LandGEM (Landfill Gas Emissions Model), a first-order decay model which aids in the determination of landfill gas production for an individual landfill (a simple numerical sketch of such a model is given below). Typically, gas extraction rates from a municipal solid waste (MSW) landfill range from 25 to 10,000 m3/h, while landfill sites typically range from 100,000 m3 to 10 million m3 of waste in place. MSW landfill gas typically has roughly 45 to 60% methane and 40 to 60% carbon dioxide, depending on the amount of air introduced to the site, either through active gas extraction or from inadequate sealing (capping) of the landfill site. Depending on the composition of the waste in place, there are many other minor components that together comprise roughly 1%, including various trace gases, non-methane volatile organic compounds (NMVOCs), polycyclic aromatic hydrocarbons (PAHs), polychlorinated dibenzodioxins (PCDDs), polychlorinated dibenzofurans (PCDFs), etc. All of these components are harmful to human health at high doses. LFG collection systems Landfill gas collection is typically accomplished through the installation of wells – vertically and/or horizontally – in the waste mass. 
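The sketch below is a minimal first-order decay calculation in the spirit of LandGEM, estimating annual methane generation from yearly waste acceptance. The decay constant, methane generation potential, and waste tonnages are illustrative assumptions, not values for any real site, and the real LandGEM tool uses EPA-specified defaults and a finer (sub-annual) discretization.

```python
# Toy LandGEM-style first-order decay estimate of annual methane generation:
#   Q_CH4(t) = sum over waste placed in year i of k * L0 * M_i * exp(-k * (t - i))
# All parameters below are illustrative assumptions.
import math

k = 0.04          # decay constant, 1/yr (between the arid ~0.02 and wet ~0.065 values)
L0 = 100.0        # methane generation potential, m^3 CH4 per tonne of waste (assumed)
waste_by_year = {2000 + i: 50_000 for i in range(10)}   # tonnes accepted per year (assumed)

def methane_generation(year: int) -> float:
    """Estimated methane generation (m^3/yr) in a given year from all prior waste."""
    total = 0.0
    for placed_year, mass in waste_by_year.items():
        age = year - placed_year
        if age >= 0:
            total += k * L0 * mass * math.exp(-k * age)
    return total

for year in (2005, 2010, 2020, 2040):
    print(f"{year}: ~{methane_generation(year):,.0f} m^3 CH4/yr")
```

The printed values rise while waste is still being placed, peak shortly after placement ends, and then decay exponentially, matching the qualitative production curve described in the text.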
Design heuristics for vertical wells call for about one well per acre of landfill surface, whereas horizontal wells are normally spaced about 50 to 200 feet apart on center. Efficient gas collection can be accomplished at both open and closed landfills, but closed landfills have systems that are more efficient, owing to greater deployment of collection infrastructure since active filling is not occurring. On average, closed landfills have gas collection systems that capture about 84% of produced gas, compared to about 67% for open landfills. Landfill gas can also be extracted through horizontal trenches instead of vertical wells. Both systems are effective at collecting. Landfill gas is extracted and piped to a main collection header, where it is sent to be treated or flared. The main collection header can be connected to the leachate collection system to collect condensate forming in the pipes. A blower is needed to pull the gas from the collection wells to the collection header and further downstream. A landfill gas collection system with a flare designed for a 600 ft3/min extraction rate is estimated to cost $991,000 (approximately $24,000 per acre) with annual operation and maintenance costs of $166,000 per year at $2,250 per well, $4,500 per flare and $44,500 per year to operate the blower (2008). LMOP provides a software model to predict collection system costs. Flaring If gas extraction rates do not warrant direct use or electricity generation, the gas can be flared off in order to avoid uncontrolled release to the atmosphere. One hundred m3/h is a practical threshold for flaring in the U.S. In the U.K, gas engines are used with a capacity of less than 100m3/h. Flares are useful in all landfill gas systems as they can help control excess gas extraction spikes and maintenance down periods. In the U.K. and EU enclosed flares, from which the flame is not visible are mandatory at modern landfill sites. Flares can be either open or enclosed, but the latter are typically more expensive as they provide high combustion temperatures and specific residence times as well as limit noise and light pollution. Some US states require the use of enclosed flares over open flares. Higher combustion temperatures and residence times destroy unwanted constituents such as un-burnt hydrocarbons. General accepted values are an exhaust gas temperature of 1000 °C with a retention time of 0.3 seconds which is said to result in greater than 98% destruction efficiency. The combustion temperature is an important controlling factor as if greater than 1100 °C, there is a danger of the exponential formation of thermal NOx. Landfill gas treatment Landfill gas must be treated to remove impurities, condensate, and particulates. The treatment system depends on the end use. Minimal treatment is needed for the direct use of gas in boilers, furnaces, or kilns. Using the gas in electricity generation typically requires more in-depth treatment. Treatment systems are divided into primary and secondary treatment processing. Primary processing systems remove moisture and particulates. Gas cooling and compression are common in primary processing. Secondary treatment systems employ multiple cleanup processes, physical and chemical, depending on the specifications of the end use. Two constituents that may need to be removed are siloxanes and sulfur compounds, which are damaging to equipment and significantly increase maintenance cost. 
Adsorption and absorption are the most common technologies used in secondary treatment processing. Use of landfill gas Direct use Boiler, dryer, and process heater Pipelines transmit gas to boilers, dryers, or kilns, where it is used much in the same way as natural gas. Landfill gas is cheaper than natural gas and holds about half the heating value at 16,785 – 20,495 kJ/m3 (450 – 550 Btu/ft3) as compared to 35,406 kJ/m3 (950 Btu/ft3) of natural gas. Boilers, dryers, and kilns are used often because they maximize use of the gas, limited treatment is needed, and the gas can be mixed with other fuels. Boilers use the gas to transform water into steam for use in various applications. For boilers, about 8,000 to 10,000  pounds per hour of steam can be generated for every 1  million metric tons of waste-in-place at the landfill. Most direct use projects use boilers. General Motors saves $500,000 on energy costs per year at each of the four plants owned by General Motors that has implemented landfill gas boilers. Disadvantages of Boilers, dryers, and kilns are that they need to be retrofitted in order to accept the gas and the end user has to be nearby (within roughly 5  miles) as pipelines will need to be built. Infrared heaters, greenhouses, artisan studios In situations with low gas extraction rates, the gas can go to power infrared heaters in buildings local to the landfill, provide heat and power to local greenhouses, and power the energy intensive activities of a studio engaged in pottery, metalworking or glass-blowing. Heat is fairly inexpensive to employ with the use of a boiler. A microturbine would be needed to provide power in low gas extraction rate situations. Leachate evaporation The gas coming from the landfill can be used to evaporate leachate in situations where leachate is fairly expensive to treat. The system to evaporate the leachate costs $300,000 to $500,000 to put in place with operations and maintenance costs of $70,000 to $95,000 per year. A 30,000 gallons per day evaporator costs $.05 - $.06 per gallon. The cost per gallon increases as the evaporator size decreases. A 10,000 gallons per day evaporator costs $.18 - $.20 per gallon. Estimates are in 2007 dollars. Pipeline-quality gas, CNG, LNG Landfill gas can be converted to high-Btu gas by reducing its carbon dioxide, nitrogen, and oxygen content. The high-Btu gas can be piped into existing natural gas pipelines or in the form of CNG (compressed natural gas) or LNG (liquid natural gas). CNG and LNG can be used on site to power hauling trucks or equipment or sold commercially. Three commonly used methods to extract the carbon dioxide from the gas are membrane separation, molecular sieve, and amine scrubbing. Oxygen and nitrogen are controlled by the proper design and operation of the landfill since the primary cause for oxygen or nitrogen in the gas is intrusion from outside into the landfill because of a difference in pressure. The high-Btu processing equipment can be expected to cost $2,600 to $4,300 per standard cubic foot per minute (scfm) of landfill gas. Annual costs range from $875,000 to $3.5 million to operate, maintain and provide electricity to. Costs depend on quality of the end product gas as well as the size of the project. The first landfill gas to LNG facility in the United States was the Frank R. Bowerman Landfill in Orange County, California. The same process is used for the conversion to CNG, but on a smaller scale. 
The CNG project at Puente Hills Landfill in Los Angeles has realized $1.40 per gallon of gasoline equivalent with the flow rate of 250 scfm. Cost per gallon equivalent reduces as the flow rate of gas increases. LNG can be produced through the liquification of CNG. However, the oxygen content needs to be reduced to be under 0.5% to avoid explosion concerns, the carbon dioxide content must be as close to zero as possible to avoid freezing problems encountered in the production, and nitrogen must be reduced enough to achieve at least 96% methane. A $20 million facility is estimated to achieve $0.65/gallon for a plant producing 15,000 gallons/day of LNG (3,000 scfm). Estimates are in 2007 dollars. Electricity generation If the landfill gas extraction rate is large enough, a gas turbine or internal combustion engine could be used to produce electricity to sell commercially or use on site. Reciprocating piston engine More than 70 percent of all landfill electricity projects use reciprocating piston (RP) engines, a form of internal combustion engine, because of relatively low cost, high efficiency, and good size match with most landfills. RP engines usually achieve an efficiency of 25 to 35 percent with landfill gas. However, RP engines can be added or removed to follow gas trends. Each engine can achieve 150 kW to 3 MW, depending on the gas flow. An RP engine (less than 1 MW) can typically cost $2,300 per kW with annual operation and maintenance costs of $210 per kW. An RP engine (greater than 800 kW) can typically cost $1,700 per kW with annual operation and maintenance costs of $180 per kW. Estimates are in 2010 dollars. Gas turbine Gas turbines, another form of internal combustion engine, usually meet an efficiency of 20 to 28 percent at full load with landfill gas. Efficiencies drop when the turbine is operating at partial load. Gas turbines have relatively low maintenance costs and nitrogen oxide emissions when compared to RP engines. Gas turbines require high gas compression, which uses more electricity to compress, therefore reducing the efficiency. Gas turbines are also more resistant to corrosive damage than RP engines. Gas turbines need a minimum of 1,300 cfm and typically exceed 2,100 cfm and can generate 1 to 10 MW. A gas turbine (greater than 3 MW) can typically cost $1,400 per kW with annual operation and maintenance costs of $130 per kW. Estimates are in 2010 dollars. Microturbine Microturbines can produce electricity with lower amounts of landfill gas than gas turbines or RP engines. Microturbines can operate between 20 and 200 cfm and emit less nitrogen oxides than RP engines. Also, they can function with less methane content (as little as 35 percent). Microturbines require extensive gas treatment and come in sizes of 30, 70, and 250 kW. A microturbine (less than 1 MW) can typically cost $5,500 per kW with annual operation and maintenance costs of $380 per kW. Estimates are in 2010 dollars. Fuel cell Research has been performed indicating that molten carbonate fuel cells could be fueled by landfill gas. Molten carbonate fuel cells require less purity than typical fuel cells, but still require extensive treatment. The separation of acid gases (HCl, HF, and SO2), VOC oxidation (H2S removal) and siloxane removal are required for molten carbonate fuel cells. Fuel cells are typically run on hydrogen and hydrogen can be produced from landfill gas. Hydrogen used in fuel cells have zero emissions, high efficiency, and low maintenance costs. 
Project incentives Various landfill gas project incentives exist for United States projects at the federal and state level. The Department of the Treasury, Department of Energy, Department of Agriculture, and Department of Commerce all provide federal incentives for landfill gas projects. Typically, incentives are in the form of tax credits, bonds, or grants. For example, the Renewable Electricity Production Tax Credit (PTC) gives a corporate tax credit of 1.1 cents per kWh for landfill projects above 150 kW. Various states and private foundations give incentives to landfill gas projects. A Renewable Portfolio Standard (RPS) is a legislative requirement for utilities to sell or generate a percentage of their electricity from renewable sources, including landfill gas. Some states require all utilities to comply, while others require only public utilities to comply. Environmental impact In 2005, 166 million tons of MSW were discarded to landfills in the United States. Roughly 120 kg of methane is generated from every ton of MSW. Methane has a global warming potential 25 times that of carbon dioxide on a 100-year time horizon. It is estimated that more than 10% of all global anthropogenic methane emissions are from landfills. Landfill gas projects help aid in the reduction of methane emissions. However, landfill gas collection systems do not collect all the gas generated. Around 4 to 10 percent of landfill gas escapes the collection system of a typical landfill with a gas collection system. The use of landfill gas is considered a green fuel source because it offsets the use of environmentally damaging fuels such as oil or natural gas, destroys the heat-trapping gas methane, and the gas is generated by deposits of waste that are already in place. 450 of the 2,300 landfills in the United States had operational landfill gas utilization projects as of 2007. LMOP has estimated that approximately 520 landfills that currently exist could use landfill gas (enough to power 700,000 homes). Landfill gas projects also decrease local pollution and create jobs, revenues, and cost savings. The roughly 450 landfill gas projects operational in 2007 generated 11 billion kWh of electricity and supplied 78 billion cubic feet of gas to end users. These totals are roughly equivalent to the annual carbon sequestered by a large area of pine or fir forest, or to the annual emissions from 14,000,000 passenger vehicles. See also Anaerobic digestion Atmospheric methane Biogas Biodegradation Cogeneration Landfill gas migration Landfill gas monitoring Solar landfill Waste minimisation Underground coal gasification References Waste management concepts Landfill Renewable energy Cogeneration Greenhouse gas emissions Methane
Landfill gas utilization
[ "Chemistry" ]
3,484
[ "Greenhouse gases", "Greenhouse gas emissions", "Methane" ]
25,250,359
https://en.wikipedia.org/wiki/List%20of%20freeware
Freeware is software that is available for use at no monetary cost or for an optional fee, but usually (although not necessarily) closed source with one or more restricted usage rights. Freeware is in contrast to commercial software, which is typically sold for profit, but might be distributed for a business or commercial purpose in the aim to expand the marketshare of a "premium" product. Popular examples of closed-source freeware include Adobe Reader, Free Studio and Skype. This is a list of notable software packages that meet the freeware definition. 3D artistry Anim8or Daz Studio Administration Remote access TeamViewer System monitoring and benchmarking CPU-Z Mactracker Process Explorer Process Monitor Samurize Tweaking and configuration Tweak UI RivaTuner Audio tools Jeskola Buzz SoundApp Mp3tag UTAU Audacity Authoring (CD and DVD writing) CDBurnerXP ImgBurn Communications and messengers ooVoo Skype Telegram Trillian Xfire WeChat Windows Live Messenger Yahoo! Messenger WhatsApp Discord Mobile phone Disc2Phone Compression B1 Free Archiver Filzip LHA TUGZip ZipGenius Decompression StuffIt Expander Zipeg Concept- and mind-mapping software Data recovery Recuva Stellar Phoenix Windows Data Recovery Defragmentation software Desktop plug-ins AveDesk Kapsules RocketDock Download software CoreFTP FlashGet Free Studio WinMX μTorrent Email ePrompter Foxmail Pegasus Mail Emulators File management Xplorer² Lite Fractal generators Fractint Games Image manipulation Artweaver GIMP Paint.NET Pixia Image viewers FastStone Image Viewer IrfanView Jalbum XnView Information GrabIt Lingoes NetNewsWire ProgDVB Xnews Maintenance CCleaner Revo Uninstaller Should I Remove It? UltimateDefrag UpdateStar ZSoft Uninstaller Media manipulation and creation Any Video Converter Audiograbber DVD Shrink FormatFactory Free Studio GSpot VirtualDub Media players and media centers AIMP ALLPlayer foobar2000 GOM Player Groove Music Microsoft Movies & TV Sonique Winamp XMPlay Medical Navigation Navigational Algorithms Office suite Optical disk authoring software Optimization software PDF and printing doPDF Foxit Reader PrimoPDF Sumatra PDF PrimoPDF Productivity Evernote Windows Live Essentials Programming AutoIt HxD Microsoft Visual Studio Express Atom Security Comodo Internet Security HDDerase HijackThis K9 Web Protection Malwarebytes' Anti-Malware RootkitRevealer ZoneAlarm Citizen COP Antivirus Panda Cloud Antivirus Avast! AVG Simulators HNSKY Physics Algodoo Virtual machine VMware Player VirtualBox QEMU Discontinued AIDA32 Statistical packages Text editors BBEdit Lite Codelobster Programmer's File Editor PSPad TED Notepad TextWrangler Typeface software Video transcoding software Web browsers Maxthon Opera QQ browser SlimBrowser Word processors Virtual printer software References Lists of software
List of freeware
[ "Technology" ]
630
[ "Computing-related lists", "Lists of software" ]
12,934,010
https://en.wikipedia.org/wiki/German%20Naval%20Grid%20System
German Naval Grid Reference (German: Gradnetzmeldeverfahren) was a system for referencing a location on a map. Introduced initially by the German Luftwaffe just before World War II, it was used widely in the German armed forces until 1943. Each armed force had its own version of this reference. The reference used in the Gradnetzmeldeverfahren can be viewed as a short form of the full position, without a real translation or encoding. In the Kriegsmarine version, the entire globe was divided into large square sectors (assuming a Mercator projection), each with its unique two-letter designation (e.g. AE, AF, BA, BB, etc.), with each square, called a quadrant, being 486 nautical miles to a side. For example, CA covered the East Coast of the United States from about Portsmouth, New Hampshire, south to Cape Fear, North Carolina. Each such sector was further sub-divided into a 3 x 3 matrix, so that there were nine squares. Each of the nine squares was again divided into nine smaller squares. These were known as Grids, so that there were now 81 grid squares in total within a sector. Each grid was given a two-digit designation, so a grid reference now consisted of two letters and two digits. Each of these Grids was again divided in the same manner – first into a 3 x 3 matrix, and then each cell into nine squares – so that a further 81 squares were formed within the Grid. Each newly formed square was again given a two-digit designation. The complete grid reference would now read as two letters followed by four digits. This can be referred to as the patrol zone. Thus the Kriegsmarine could pinpoint any location on the globe using six characters, a very useful tool when using radio. Its precision was to the level of six nautical miles within a grid. This was how locations were communicated to naval units, particularly U-boats. Thus grid location AN1879 denoted a location east of Northern Scotland, just below Scapa Flow. As another example, major grid AJ is located south of Greenland. Each submarine was equipped with an Adressbuch to decipher the locations. As the war advanced, the Germans suspected that the Allies were deciphering their patrol reports. As a precautionary measure they transmitted patrol zones using coded sectors. Thus, instead of transmitting the actual patrol zone, they would use an offset, so that any zone transmitted over the airwaves would be an offset of a secret location. This secret location was changed at random intervals, and U-boat captains would calculate the new patrol zone based on the offset. References External links Further reading Bray, Jeffrey K. Ultra in the Atlantic: The German Naval Grid and Its Ciphers, Aegean Park Printers, 1996. Geographic coordinate systems Kriegsmarine
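As a small numerical illustration of the subdivision scheme described above, the sketch below computes the side length of the squares after each level of subdivision, starting from the 486-nautical-mile quadrant; the final level reproduces the roughly six-nautical-mile precision mentioned in the text. The helper names are arbitrary, and the sketch does not attempt to reproduce the Kriegsmarine's actual numbering convention for the digit pairs.

```python
# Side length of grid squares after each subdivision level.
# A quadrant (two letters) is 486 nmi on a side; each two-digit group selects one
# of 81 sub-squares (a 3x3 split applied twice), shrinking the side by a factor of 9.

QUADRANT_SIDE_NMI = 486.0

def side_after(levels: int) -> float:
    """Square side length (nautical miles) after `levels` two-digit subdivisions."""
    return QUADRANT_SIDE_NMI / (9 ** levels)

labels = ["quadrant (e.g. AN)", "grid (e.g. AN18)", "patrol zone (e.g. AN1879)"]
for levels, label in enumerate(labels):
    print(f"{label}: {side_after(levels):.0f} nmi per side")
```

Running it prints 486, 54, and 6 nautical miles, which is why a six-character reference located a U-boat to within about six nautical miles.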
German Naval Grid System
[ "Mathematics" ]
587
[ "Geographic coordinate systems", "Coordinate systems" ]
12,939,181
https://en.wikipedia.org/wiki/Quasithin%20group
In mathematics, a quasithin group is a finite simple group that resembles a group of Lie type of rank at most 2 over a field of characteristic 2. The classification of quasithin groups is a crucial part of the classification of finite simple groups. More precisely it is a finite simple group of characteristic 2 type and width 2. Here characteristic 2 type means that its centralizers of involutions resemble those of groups of Lie type over fields of characteristic 2, and the width is roughly the maximal rank of an abelian group of odd order normalizing a non-trivial 2-subgroup of G. When G is a group of Lie type of characteristic 2 type, the width is usually the rank (the dimension of a maximal torus of the algebraic group). Classification The quasithin groups were classified in a 1221-page paper by . An earlier announcement by of the classification, on the basis of which the classification of finite simple groups was announced as finished in 1983, was premature as the unpublished manuscript of his work was incomplete and contained serious gaps. According to , the finite simple quasithin groups of even characteristic are given by Groups of Lie type of characteristic 2 and rank 1 or 2, except that U5(q) only occurs for q = 4 PSL4(2), PSL5(2), Sp6(2) The alternating groups on 5, 6, 8, 9 points PSL2(p) for p a Fermat or Mersenne prime, L(3), L(3), G2(3) The Mathieu groups M11, M12, M22, M23, M24, The Janko groups J2, J3, J4, the Higman-Sims group, the Held group, and the Rudvalis group. If the condition "even characteristic" is relaxed to "even type" in the sense of the revision of the classification by Daniel Gorenstein, Richard Lyons, and Ronald Solomon, then the only extra group that appears is the Janko group J1. References (unpublished typescript) Finite groups
Quasithin group
[ "Mathematics" ]
432
[ "Mathematical structures", "Algebraic structures", "Finite groups" ]
20,908,488
https://en.wikipedia.org/wiki/Patched
Patched (Ptc) is a conserved 12-pass transmembrane protein receptor that plays an obligate negative regulatory role in the Hedgehog signaling pathway in insects and vertebrates. Patched is an essential gene in embryogenesis for proper segmentation in the fly embryo, mutations in which may be embryonic lethal. Patched functions as the receptor for the Hedgehog protein and controls its spatial distribution, in part via endocytosis of bound Hedgehog protein, which is then targeted for lysosomal degradation. Discovery The original mutations in the ptc gene were discovered in the fruit fly Drosophila melanogaster by 1995 Nobel Laureates Eric F. Wieschaus and Christiane Nusslein-Volhard and colleagues, and the gene was independently cloned in 1989 by Joan Hooper in the laboratory of Matthew P. Scott, and by Philip Ingham and colleagues. Role in hedgehog signaling Patched is part of a negative feedback mechanism for hedgehog signaling that helps shape the spatial gradient of signaling activity across tissues. In the absence of hedgehog, low levels of patched are sufficient to suppress activity of the signal transduction pathway. When hedgehog is present, its cholesterol moiety binds to the sterol-sensing domain in patched, which then inhibits the activity of smoothened. Smoothened is a G protein-coupled receptor, most of which is stored in membrane bound vesicles internally within the cell and which increases at the cell surface when hedgehog is present. Smoothened must be present on the cell membrane in order for the Hedgehog signaling pathway to be activated. Among other genes, the transcription of the patched gene is induced by hedgehog signaling, with the accumulation of the patched protein limiting signaling through the Smoothened protein. Recent work implicates the cilium in intracellular trafficking of hedgehog signaling components in vertebrate cells. Role in disease Mutated patched proteins have been implicated in a number of cancers including basal cell carcinoma, medulloblastoma, and rhabdomyosarcoma. Hereditary mutations in the human patched homolog PTCH1 cause autosomal dominant Gorlin syndrome, which consists of overgrowth and hereditary disposition to cancer including basal cell carcinoma and medulloblastoma. Mice with mutations in mouse PTCH1 similarly develop medulloblastoma. References Hedgehog signaling pathway Human proteins Histopathology Developmental genes and proteins
Patched
[ "Chemistry", "Biology" ]
504
[ "Signal transduction", "Microscopy", "Developmental genes and proteins", "Hedgehog signaling pathway", "Induced stem cells", "Histopathology" ]
6,310,882
https://en.wikipedia.org/wiki/Optical%20circulator
An optical circulator is a three- or four-port optical device designed such that light entering any port exits from the next. This means that if light enters port 1 it is emitted from port 2, but if some of the emitted light is reflected back to the circulator, it does not come out of port 1 but instead exits from port 3. This is analogous to the operation of an electronic circulator. Fiber-optic circulators are used to separate optical signals that travel in opposite directions in an optical fiber, for example to achieve bi-directional transmission over a single fiber. Because of their high isolation of the input and reflected optical powers and their low insertion loss, optical circulators are widely used in advanced fiber-optic communications and fiber-optic sensor applications. Optical circulators are non-reciprocal optics, which means that changes in the properties of light passing through the device are not reversed when the light passes through in the opposite direction. This can only happen when the symmetry of the system is broken, for example by an external magnetic field. A Faraday rotator is another example of a non-reciprocal optical device, and indeed it is possible to construct an optical circulator based on a Faraday rotator. History In 1965, Ribbens reported an early form of optical circulator that utilized a Nicol prism with a Faraday rotator. With the advent of fiber and guided-wave optics, waveguide-integrable and polarization-independent optical circulators were later introduced. The concept was later extended to silicon photonic waveguide systems. In 2016, Scheucher et al. have demonstrated a fiber-integrated optical circulator whose nonreciprocal behavior originated from the chiral interaction between a single 85Rb atom and the confined light in a whispering-gallery mode microresonator. The routing direction of the device is controlled by the internal quantum state of the atom and the device is able to route individual photons. In 2013, Davoyan and Engheta proposed a nanoscale plasmonic Y-circulator based on three dielectric waveguides interconnected with a magneto-optical junction with plasmonic nanorods. See also Optical isolator References External links US Patent 5,909,310 (USPTO) (Google Patents) Optical components
Optical circulator
[ "Materials_science", "Technology", "Engineering" ]
489
[ "Glass engineering and science", "Optical components", "Components" ]
6,313,537
https://en.wikipedia.org/wiki/Automated%20insulin%20delivery%20system
Automated insulin delivery systems are automated (or semi-automated) systems designed to assist people with insulin-requiring diabetes, by automatically adjusting insulin delivery in response to blood glucose levels. Currently available systems (as of October 2020) can only deliver (and regulate delivery of) a single hormone—insulin. Other systems currently in development aim to improve on current systems by adding one or more additional hormones that can be delivered as needed, providing something closer to the endocrine functionality of the pancreas. The endocrine functionality of the pancreas is provided by islet cells which produce the hormones insulin and glucagon. Artificial pancreatic technology mimics the secretion of these hormones into the bloodstream in response to the body's changing blood glucose levels. Maintaining balanced blood sugar levels is crucial to the function of the brain, liver, and kidneys. Therefore, for people with diabetes, it is necessary that the levels be kept balanced when the body cannot produce insulin itself. Automated insulin delivery (AID) systems are often referred to using the term artificial pancreas, but the term has no precise, universally accepted definition. For uses other than automated insulin delivery, see Artificial pancreas (disambiguation). General overview History The first automated insulin delivery system was known as the Biostator. Classes of AID systems Currently available AID systems fall into three broad classes based on their capabilities. The first systems released could only halt insulin delivery (predictive low glucose suspend) in response to already low or predicted low glucose. Hybrid Closed Loop systems can modulate delivery both up and down, although users still initiate insulin doses (boluses) for meals and typically "announce" or enter meal information. Fully Closed Loops require no manual insulin delivery actions or announcements for meals. Predictive Low Glucose Suspend (PLGS) A step forward from threshold suspend systems, predictive low glucose suspend (PLGS) systems use a mathematical model to extrapolate predicted future blood sugar levels based on recent past readings from a CGM. This allows the system to reduce or halt insulin delivery prior to a predicted hypoglycemic event. Hybrid Closed Loop (HCL) / Advanced Hybrid Closed Loop (AHCL) Hybrid closed loop (HCL) systems further expand on the capabilities of PLGS systems by adjusting basal insulin delivery rates both up and down in response to values from a continuous glucose monitor. Through this modulation of basal insulin, the system is able to reduce the magnitude and duration of both hyperglycemic and hypoglycemic events. Users still must initiate manual mealtime boluses. Advanced hybrid closed loop (AHCL) systems refine this approach with more capable control algorithms, for example by automating correction boluses in addition to basal adjustments. Fully Closed Loop (FCL) Fully or full closed loop (FCL) systems adjust insulin delivery in response to changes in glucose levels without requiring input by users for mealtime insulin or announcements of meals. Required components An automated insulin delivery system consists of three distinct components: a continuous glucose monitor to determine blood sugar levels, a pump to deliver insulin, and an algorithm that uses the data from the CGM and pump to determine needed insulin adjustments. In the United States, the Food and Drug Administration (FDA) allows each component to be approved independently, allowing for more rapid approvals and incremental innovation. Each component is discussed in greater detail below.
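Before the individual components are described, the behaviour of the system classes above can be illustrated with a deliberately simplified control step. This is a hedged sketch only: the linear extrapolation of the CGM trend, the thresholds, and the proportional basal adjustment are illustrative assumptions and do not reflect the proprietary algorithm of any approved device.

```python
def predict_glucose(readings_mg_dl, horizon_steps=6):
    """Linearly extrapolate the most recent CGM trend (readings ~5 minutes apart)."""
    slope = readings_mg_dl[-1] - readings_mg_dl[-2]
    return readings_mg_dl[-1] + slope * horizon_steps

def control_step(readings_mg_dl, basal_u_per_hr, target=110.0, low=80.0):
    """One control cycle: PLGS-style suspend on a predicted low,
    HCL-style up/down modulation of the basal rate otherwise."""
    predicted = predict_glucose(readings_mg_dl)
    if predicted <= low:                      # predictive low glucose suspend
        return 0.0
    adjustment = 1.0 + (predicted - target) / 200.0   # crude proportional scaling
    return max(0.0, basal_u_per_hr * min(adjustment, 2.0))

# A falling trend suspends delivery; a rising trend increases the basal rate.
print(control_step([140, 120, 100], basal_u_per_hr=1.0))  # 0.0
print(control_step([120, 140, 160], basal_u_per_hr=1.0))  # about 1.85
```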
Continuous glucose monitor (CGM) Continuous glucose monitors (CGMs) are wearable sensors which extrapolate an estimate of the glucose concentration in a patient's blood based on the level of glucose present in the subcutaneous interstitial fluid. A thin, biocompatible sensor wire coated with a glucose-reactive enzyme is inserted into the skin, allowing the system to read the voltage generated and, based on it, estimate blood glucose. The biggest advantage of a CGM over a traditional fingerstick blood glucose meter is that the CGM can take a new reading as often as every 60 seconds (although most only take a reading every 5 minutes), allowing for a sampling frequency that provides not just a current blood sugar level but a record of past measurements, which lets computer systems project short-term trends into the future and show patients where their blood sugar levels are likely headed. Insulin pump An insulin pump delivers insulin subcutaneously. The insulin pump body itself can also contain the algorithm used in an AID system, or it can connect via Bluetooth with a separate mobile device (such as a phone) to send data and receive commands to adjust insulin delivery. Algorithm The algorithm for each AID system differs. In commercial systems (see below), little is known about the details of how the control algorithm works. In open source systems, the code and algorithm are openly available. In general, all algorithms perform the same basic function: they take in CGM data and, based on predicted glucose levels and the user's personal settings (for basal rates, insulin sensitivity, and carbohydrate ratio, for example), recommend insulin dosing to help bring or keep glucose levels in the target range. Depending on the system, users may have the ability to adjust the target for the system, and may have different settings to ask the system to give more or less insulin in general. Currently available systems Commercial Commercial availability varies by country. Approved systems in various countries, described further below, include MiniMed 670G or 780G, Tandem's Control-IQ, Omnipod 5, CamAPS FX, and Diabeloop DBLG1. MiniMed 670G In September 2016, the FDA approved the Medtronic MiniMed 670G, which was the first approved hybrid closed loop system. The device automatically adjusts a patient's basal insulin delivery. It is made up of a continuous glucose monitor, an insulin pump, and a glucose meter for calibration. It automatically modifies the level of insulin delivery based on blood glucose levels detected by the continuous monitor. It does this by sending the blood glucose data through an algorithm that analyzes the data and makes the subsequent adjustments. The system has two modes. Manual mode lets the user choose the rate at which basal insulin is delivered. Auto mode regulates basal insulin levels from the CGM readings every five minutes. Tandem Diabetes Care t:Slim X2 with Control IQ The Tandem Diabetes Care t:Slim X2 was approved by the U.S. Food and Drug Administration in 2019 and is the first insulin pump to be designated as an alternate controller enabled (ACE) insulin pump. ACE insulin pumps allow users to integrate continuous glucose monitors, automated insulin dosing (AID) systems, and other diabetes management devices with the pump to create a personalized diabetes therapy system. Many users of the t:slim X2 integrate the pump with the Dexcom G6, a continuous glucose monitor approved by the FDA in 2018.
It was the first CGM authorized for use in an integrated therapy system. The device does not require fingerstick calibrations. iLet Bionic Pancreas In May 2023, the FDA approved the iLet Bionic Pancreas system for people with Type 1 diabetes of six years and older. The device uses a closed-loop system to deliver both insulin and glucagon in response to sensed blood glucose levels. The 4th generation iLet prototype, presented in 2017, is around the size of an iPhone, with a touchscreen interface. It contains two chambers for both insulin and glucagon, and the device is configurable for use with only one hormone, or both. A 440-patient study of type I diabetes ran in 2020 and 2021 using a device configuration that delivered only insulin in comparison to standard of care; device use led to better circulating glucose control (measured by continuous monitoring) and a reduction in glycated hemoglobin (versus no change for the standard of care group). However, the incidence of severe hypoglycemic events was more than 1.5 times higher among device users versus standard care patients. Non-Commercial There are several non-commercial, non-FDA approved DIY options, using open source code, including OpenAPS, Loop, and/or AndroidAPS. Systems in development Luna Diabetes Former founders of Timesulin, Welldoc, Companion Medical and Bigfoot Biomedical have joined together to create the world's first automated insulin delivery system for those that want to continue to use insulin pens. The team is calling it Episodic AID. The working product name is Luna. Inreda AP In collaboration with the Academic Medical Center in Amsterdam, Inreda Diabetic B.V. has developed a closed loop system with insulin and glucagon. The initiator, Robin Koops, started to develop the device in 2004 and ran the first tests on himself. In October 2016 Inreda Diabetic B.V. got the ISO 13485 license, a first requirement to produce its artificial pancreas. The product itself is called Inreda AP, and soon made some highly successful trials. After clinical trials, it received the CE marking, noting that it complies with European regulation, in February 2020. In October 2020 the health insurance company Menzis and Inreda Diabetic then started a pilot with 100 patients insured by Menzis. These are all patients that face very serious trouble in regulating their blood glucose levels. They now use the Inreda AP instead of the traditional treatment. Another large scale trial with the Inreda AP was set up in July 2021, and should determine whether Dutch health insurance should cover the device for all their insured. A smaller improved version of the Inreda AP is scheduled for release in 2023. Approaches Medical equipment The medical equipment approach involves combining a continuous glucose monitor and an implanted insulin pump that can function together with a computer-controlled algorithm to replace the normal function of the pancreas. The development of continuous glucose monitors has led to the progress in artificial pancreas technology using this integrated system. Closed-loop systems Unlike the continuous sensor alone, the closed-loop system requires no user input in response to reading from the monitor; the monitor and insulin pump system automatically delivers the correct amount of hormone calculated from the readings transmitted. The system is what makes up the artificial pancreas device. Current studies Four studies on different artificial pancreas systems are being conducted starting in 2017 and going into the near future. 
The projects are funded by the National Institute of Diabetes and Digestive and Kidney Diseases, and are the final part of testing the devices before applying for approval for use. Participants in the studies are able to live their lives at home while using the devices and being monitored remotely for safety, efficacy, and a number of other factors. The International Diabetes Closed-Loop trial, led by researchers from the University of Virginia, is testing a closed-loop system called inControl, which has a smartphone user interface. 240 people of ages 14 and up are participating for 6 months. A full-year trial led by researchers from the University of Cambridge started in May 2017 and has enrolled an estimated 150 participants of ages 6 to 18 years. The artificial pancreas system being studied uses a smartphone and has a low glucose feature to improve glucose level control. The International Diabetes Center in Minneapolis, Minnesota, in collaboration with Schneider Children's Medical Center of Israel, are planning a 6-month study that will begin in early 2019 and will involve 112 adolescents and young adults, ages 14 to 30. The main object of the study is to compare the current Medtronic 670G system to a new Medtronic-developed system. The new system has programming that aims to improve glucose control around mealtime, which is still a big challenge in the field. The current 6-month study led by the Bionic Pancreas team started in mid-2018 and enrolled 312 participants of ages 18 and above. Physiological The biotechnical company Defymed, based in France, is developing an implantable bio-artificial device called MailPan which features a bio-compatible membrane with selective permeability to encapsulate different cell types, including pancreatic beta cells. The implantation of the device does not require conjunctive immuno-suppressive therapy because the membrane prevents antibodies of the patient from entering the device and damaging the encapsulated cells. After being surgically implanted, the membrane sheet will be viable for years. The cells that the device holds can be produced from stem cells rather than human donors, and may also be replaced over time using input and output connections without surgery. Defymed is partially funded by JDRF, formerly known as the Juvenile Diabetes Research Foundation, but is now defined as an organization for all ages and all stages of type 1 diabetes. In November 2018, it was announced that Defymed would partner with the Israel-based Kadimastem, a bio-pharmaceutical company developing stem-cell based regenerative therapies, to receive a two-year grant worth approximately $1.47 million for the development of a bio-artificial pancreas that would treat type 1 diabetes. Kadimastem's stem cell technology uses differentiation of human embryonic stem cells to obtain pancreatic endocrine cells. These include insulin-producing beta cells, as well as alpha cells, which produce glucagon. Both cells arrange in islet-like clusters, mimicking the structure of the pancreas. The aim of the partnership is to combine both technologies in a bio-artificial pancreas device, which releases insulin in response to blood glucose levels, to bring to clinical trial stages. The San Diego, California based biotech company ViaCyte has also developed a product aiming to provide a solution for type 1 diabetes which uses an encapsulation device made of a semi-permeable immune reaction-protective membrane. 
The device contains pancreatic progenitor cells that have been differentiated from embryonic stem cells. After surgical implantation in an outpatient procedure, the cells mature into endocrine cells which arrange in islet-like clusters and mimic the function of the pancreas, producing insulin and glucagon. The technology advanced from pre-clinical studies to FDA approval for phase 1 clinical trials in 2014, and presented two-year data from the trial in June 2018. They reported that their product, called PEC-Encap, has so far been safe and well tolerated in patients at a dose below therapeutic levels. The encapsulated cells were able to survive and mature after implantation, and immune system rejection was decreased due to the protective membrane. The second phase of the trial will evaluate the efficacy of the product. ViaCyte has also been receiving financial support from JDRF on this project. Initiatives around the globe In the United States in 2006, JDRF (formerly the Juvenile Diabetes Research Foundation) launched a multi-year initiative to help accelerate the development, regulatory approval, and acceptance of continuous glucose monitoring and artificial pancreas technology. Grassroots efforts to create and commercialize a fully automated artificial pancreas system have also arisen directly from patient advocates and the diabetes community. In April 2024, the NHS announced it would, over the next five years, offer use of a Hybrid Closed Loop system to Type 1 diabetes patients in England. References Notes Insulin delivery Biomedical engineering Biological engineering Implants (medicine) Diabetes-related supplies and medical equipment Prosthetics Pancreas
Automated insulin delivery system
[ "Engineering", "Biology" ]
3,132
[ "Biological engineering", "Medical technology", "Artificial organs", "Biomedical engineering" ]
6,317,357
https://en.wikipedia.org/wiki/Gallium%28III%29%20chloride
Gallium(III) chloride is an inorganic chemical compound with the formula GaCl3 which forms a monohydrate, GaCl3·H2O. Solid gallium(III) chloride is a deliquescent white solid and exists as a dimer with the formula Ga2Cl6. It is colourless and soluble in virtually all solvents, even alkanes, which is truly unusual for a metal halide. It is the main precursor to most derivatives of gallium and a reagent in organic synthesis. As a Lewis acid, GaCl3 is milder than aluminium chloride. It is also easier to reduce than aluminium chloride. The coordination chemistry of Ga(III) is similar to that of Fe(III), so gallium(III) chloride has been used as a diamagnetic analogue of ferric chloride. Preparation Gallium(III) chloride can be prepared from the elements by heating gallium metal in a stream of chlorine at 200 °C and purifying the product by sublimation under vacuum. 2 Ga + 3 Cl2 → 2 GaCl3 It can also be prepared by heating gallium oxide with thionyl chloride: Ga2O3 + 3 SOCl2 → 2 GaCl3 + 3 SO2 Gallium metal reacts slowly with hydrochloric acid, producing hydrogen gas. Evaporation of this solution produces the monohydrate. Structure As a solid, it adopts a bitetrahedral structure with two bridging chlorides. Its structure resembles that of aluminium tribromide. In contrast, AlCl3 and InCl3 contain 6-coordinate metal centers. As a consequence of its molecular nature and associated low lattice energy, gallium(III) chloride has a lower melting point than the aluminium and indium trihalides. The formula of Ga2Cl6 is often written as Ga2(μ-Cl)2Cl4. In the gas phase, the dimeric (Ga2Cl6) and trigonal planar monomeric (GaCl3) forms are in a temperature-dependent equilibrium, with higher temperatures favoring the monomeric form. At 870 K, all gas-phase molecules are effectively in the monomeric form. In the monohydrate, the gallium is tetrahedrally coordinated by three chloride ligands and one water molecule. Properties Physical Gallium(III) chloride is a diamagnetic, deliquescent white solid that melts at 77.9 °C and boils at 201 °C without decomposition to the elements. This low melting point results from the fact that it forms discrete Ga2Cl6 molecules in the solid state. Gallium(III) chloride dissolves in water with the release of heat to form a colorless solution, which, when evaporated, produces a colorless monohydrate, which melts at 44.4 °C. Chemical Gallium is the lightest member of Group 13 to have a full d shell (gallium has the electronic configuration [Ar] 3d10 4s2 4p1) below the valence electrons that could take part in d-π bonding with ligands. The low oxidation state of Ga in Ga(III)Cl3, along with the low electronegativity and high polarisability, allow GaCl3 to behave as a "soft acid" in terms of the HSAB theory. The strength of the bonds between gallium halides and ligands has been extensively studied. What emerges is: GaCl3 is a weaker Lewis acid than AlCl3 towards N and O donors, e.g. pyridine; GaCl3 is a stronger Lewis acid than AlCl3 towards thioethers, e.g. dimethyl sulfide, Me2S. With a chloride ion as ligand the tetrahedral GaCl4− ion is produced; the 6-coordinate GaCl63− cannot be made. Compounds like KGa2Cl7 that have a chloride-bridged anion are known.
In a molten mixture of KCl and GaCl3, the following equilibrium exists: 2 GaCl4− ⇌ Ga2Cl7− + Cl− When dissolved in water, gallium(III) chloride dissociates into the octahedral [Ga(H2O)6]3+ and Cl− ions, forming an acidic solution due to the hydrolysis of the hexaaquagallium(III) ion: [Ga(H2O)6]3+ ⇌ [Ga(H2O)5OH]2+ + H+ (pKa = 3.0) In basic solution, it hydrolyzes to gallium(III) hydroxide, which redissolves with the addition of more hydroxide, possibly to form Ga(OH)4−. Uses Organic synthesis Gallium(III) chloride is used as a Lewis acid catalyst, for example in the Friedel–Crafts reaction, where it can substitute for more common Lewis acids such as ferric chloride. Gallium(III) chloride complexes strongly with π-donors, especially silylethynes, producing strongly electrophilic complexes. These complexes are used as alkylating agents for aromatic hydrocarbons. It is also used in carbogallation reactions of compounds with a carbon-carbon triple bond, and as a catalyst in many other organic reactions. Organogallium compounds It is a precursor to organogallium reagents. For example, trimethylgallium, an organogallium compound used in MOCVD to produce various gallium-containing semiconductors, is produced by the reaction of gallium(III) chloride with various alkylating agents, such as dimethylzinc, trimethylaluminium, or methylmagnesium iodide. Purification of gallium Gallium(III) chloride is an intermediate in various gallium purification processes, where gallium(III) chloride is fractionally distilled or extracted from acid solutions. Detection of solar neutrinos 110 tons of gallium(III) chloride aqueous solution was used in the GALLEX and GNO experiments performed at Laboratori Nazionali del Gran Sasso in Italy to detect solar neutrinos. In these experiments, germanium-71 was produced by neutrino interactions with the isotope gallium-71 (which has a natural abundance of 40%), and the subsequent beta decays of germanium-71 were measured. See also Gallium halides References Further reading External links Inorganic compounds Gallium compounds Chlorides Metal halides
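As a rough numerical illustration of the first hydrolysis equilibrium quoted above (pKa = 3.0 for the hexaaqua ion), the fraction of dissolved gallium present as [Ga(H2O)5OH]2+ can be estimated from the Henderson–Hasselbalch relation. The sketch below is a back-of-the-envelope estimate that treats the ion as a simple monoprotic acid and ignores activity effects and further hydrolysis or polymerization steps.

```python
def fraction_hydrolyzed(pH: float, pKa: float = 3.0) -> float:
    """Fraction of [Ga(H2O)6]3+ converted to [Ga(H2O)5OH]2+ at a given pH."""
    ratio = 10 ** (pH - pKa)          # [base]/[acid] from Henderson-Hasselbalch
    return ratio / (1.0 + ratio)

for pH in (1.0, 3.0, 5.0):
    print(f"pH {pH}: {fraction_hydrolyzed(pH):.1%} hydrolyzed")
# roughly 1% at pH 1, 50% at pH 3 (= pKa), 99% at pH 5
```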
Gallium(III) chloride
[ "Chemistry" ]
1,347
[ "Chlorides", "Inorganic compounds", "Metal halides", "Salts" ]
6,318,968
https://en.wikipedia.org/wiki/List%20of%20materials%20properties
A material property is an intensive property of a material, i.e., a physical property or chemical property that does not depend on the amount of the material. These quantitative properties may be used as a metric by which the benefits of one material versus another can be compared, thereby aiding in materials selection. A property having a fixed value for a given material or substance is called material constant or constant of matter. (Material constants should not be confused with physical constants, that have a universal character.) A material property may also be a function of one or more independent variables, such as temperature. Materials properties often vary to some degree according to the direction in the material in which they are measured, a condition referred to as anisotropy. Materials properties that relate to different physical phenomena often behave linearly (or approximately so) in a given operating range . Modeling them as linear functions can significantly simplify the differential constitutive equations that are used to describe the property. Equations describing relevant materials properties are often used to predict the attributes of a system. The properties are measured by standardized test methods. Many such methods have been documented by their respective user communities and published through the Internet; see ASTM International. Acoustical properties Acoustical absorption Speed of sound Sound reflection Sound transfer Third order elasticity (Acoustoelastic effect) Atomic properties Atomic mass: (applies to each element) the average mass of the atoms of an element, in daltons (Da), a.k.a. atomic mass units (amu). Atomic number: (applies to individual atoms or pure elements) the number of protons in each nucleus Relative atomic mass, a.k.a. atomic weight: (applies to individual isotopes or specific mixtures of isotopes of a given element) (no units) Standard atomic weight: the average relative atomic mass of a typical sample of the element (no units) Chemical properties Corrosion resistance Hygroscopy pH Reactivity Specific internal surface area Surface energy Surface tension Electrical properties Capacitance Dielectric constant Dielectric strength Electrical resistivity and conductivity Electric susceptibility Electrocaloric coefficient Electrostriction Magnetoelectric polarizability Nernst coefficient (thermoelectric effect) Permittivity Piezoelectric constants Pyroelectricity Seebeck coefficient Magnetic properties Curie temperature Diamagnetism Hall coefficient Hysteresis Magnetostriction Magnetocaloric coefficient Magnetothermoelectric power (magneto-Seebeck effect coefficient) Magnetoresistance Maximum energy product Permeability Piezomagnetism Pyromagnetic coefficient Spin Hall effect Manufacturing properties Castability: How easily a high-quality casting can be obtained from the material Machinability rating Machining speeds and feeds Mechanical properties Brittleness: Ability of a material to break or shatter without significant deformation when under stress; opposite of plasticity, examples: glass, concrete, cast iron, ceramics etc. Bulk modulus: Ratio of pressure to volumetric compression (GPa) or ratio of the infinitesimal pressure increase to the resulting relative decrease of the volume Coefficient of restitution: The ratio of the final to initial relative velocity between two objects after they collide. Range: 0–1, 1 for perfectly elastic collision. 
Compressive strength: Maximum stress a material can withstand before compressive failure (MPa) Creep: The slow and gradual deformation of an object with respect to time. If the stress in a material exceeds the yield point, the strain caused in the material by the application of load does not disappear totally on the removal of load. The plastic deformation caused to the material is known as creep. At high temperatures, the strain due to creep is quite appreciable. Density: Mass per unit volume (kg/m^3) Ductility: Ability of a material to deform under tensile load (% elongation). It is the property of a material by which it can be drawn into wires under the action of tensile force. A ductile material must have a high degree of plasticity and strength so that large deformations can take place without failure or rupture of the material. In ductile extension, a material exhibits a certain amount of elasticity along with a high degree of plasticity. Durability: Ability to withstand wear, pressure, or damage; hard-wearing Elasticity: Ability of a body to resist a distorting influence or stress and to return to its original size and shape when the stress is removed Fatigue limit: Maximum stress a material can withstand under repeated loading (MPa) Flexural modulus Flexural strength: Maximum bending stress a material can withstand before failure (MPa) Fracture toughness: Ability of a material containing a crack to resist fracture (J/m^2) Friction coefficient: The ratio of the force resisting relative motion between contacting surfaces to the normal force pressing them together, for a given material pair Hardness: Ability to withstand surface indentation and scratching (e.g. Brinell hardness number) Malleability: Ability of the material to be flattened into thin sheets under applications of heavy compressive forces without cracking by hot or cold working means. This property of a material allows it to expand in all directions without rupture. Mass diffusivity: Ability of one substance to diffuse through another Plasticity: Ability of a material to undergo irreversible or permanent deformations without breaking or rupturing; opposite of brittleness Poisson's ratio: Ratio of lateral strain to axial strain (no units) Resilience: Ability of a material to absorb energy when it is deformed elastically (MPa); combination of strength and elasticity Shear modulus: Ratio of shear stress to shear strain (MPa) Shear strength: Maximum shear stress a material can withstand Slip: A tendency of a material's particles to undergo plastic deformation due to a dislocation motion within the material. Common in crystals.
Specific modulus: Elastic modulus per unit density (Nm/kg) Specific strength: Strength per unit density (Nm/kg) Specific weight: Weight per unit volume (N/m^3) Surface roughness: The deviations in the direction of the normal vector of a real surface from its ideal form Tensile strength: Maximum tensile stress a material can withstand before failure (MPa) Toughness: Ability of a material to absorb energy (or withstand shock) and plastically deform without fracturing (or rupturing); a material's resistance to fracture when stressed; combination of strength and plasticity Viscosity: A fluid's resistance to gradual deformation by tensile or shear stress; thickness Yield strength: The stress at which a material starts to yield plastically (MPa) Young's modulus: Ratio of linear stress to linear strain (MPa) (influences the stiffness and flexibility of an object) Optical properties Absorbance: How strongly a chemical attenuates light Birefringence Color Electro-optic effect Luminosity Optical activity Photoelasticity Photosensitivity Reflectivity Refractive index Scattering Transmittance Radiological properties Attenuation coefficients Half-life Neutron cross section Specific activity Thermal properties Phase diagram Boiling point Coefficient of thermal expansion Critical temperature Curie point Ductile-to-brittle transition temperature Emissivity Eutectic point Flammability Flash point Glass transition temperature Heat of vaporization Inversion temperature Melting point Thermal conductivity Thermal diffusivity Thermal expansion Triple point Vapor pressure Specific heat capacity See also Physical property Strength of materials Supervenience List of thermodynamic properties References Chemical properties Materials science Physical quantities
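Two of the mechanical properties listed above lend themselves to a short worked example: Young's modulus as the slope of the linear (elastic) portion of a stress–strain curve, and specific strength as strength divided by density. The snippet below uses made-up data points purely for illustration; real determinations follow standardized test methods such as those published by ASTM International.

```python
# Illustrative elastic-region data (stress in MPa, strain dimensionless)
strain = [0.000, 0.001, 0.002, 0.003]
stress = [0.0, 200.0, 400.0, 600.0]

# Young's modulus = slope of stress vs strain (least-squares fit through the origin)
youngs_modulus_mpa = sum(s * e for s, e in zip(stress, strain)) / sum(e * e for e in strain)
print(f"E = {youngs_modulus_mpa / 1000:.0f} GPa")      # 200 GPa, typical of steel

# Specific strength = tensile strength / density
tensile_strength_pa = 500e6   # Pa, hypothetical alloy
density_kg_m3 = 7850.0        # kg/m^3
print(f"specific strength = {tensile_strength_pa / density_kg_m3:.0f} N*m/kg")
```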
List of materials properties
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,568
[ "Physical phenomena", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Materials science", "nan", "Physical properties" ]
4,829,062
https://en.wikipedia.org/wiki/Ethylhexyl%20palmitate
Ethylhexyl palmitate, also known as octyl palmitate, is the fatty acid ester derived from 2-ethylhexanol and palmitic acid. It is frequently utilized in cosmetic formulations. Chemical structure Ethylhexyl palmitate is a branched saturated fatty ester derived from ethylhexyl alcohol and palmitic acid. Physical properties Ethylhexyl palmitate is a clear, colorless liquid with a slightly fatty odor at room temperature. The ester is synthesized by reacting palmitic acid and 2-ethylhexanol in the presence of an acid catalyst. Uses Ethylhexyl palmitate is used in cosmetic formulations as a solvent, carrying agent, pigment wetting agent, fragrance fixative and emollient. Its dry-slip skinfeel is similar to some silicone derivatives. References Cosmetics chemicals Fatty acid esters Lipids Palmitate esters 2-Ethylhexyl esters
Ethylhexyl palmitate
[ "Chemistry" ]
200
[ "Organic compounds", "Biomolecules by chemical classification", "Lipids" ]
4,830,276
https://en.wikipedia.org/wiki/Molecular%20logic%20gate
A molecular logic gate is a molecule that performs a logical operation based on at least one physical or chemical inputs and a single output. The field has advanced from simple logic systems based on a single chemical or physical input to molecules capable of combinatorial and sequential operations such as arithmetic operations (i.e. moleculators and memory storage algorithms). Molecular logic gates work with input signals based on chemical processes and with output signals based on spectroscopic phenomena. Logic gates are the fundamental building blocks of computers, microcontrollers and other electrical circuits that require one or more logical operations. They can be used to construct digital architectures with varying degrees of complexity by a cascade of a few to several million logic gates, and are essentially physical devices that produce a singular binary output after performing logical operations based on Boolean functions on one or more binary inputs. The concept of molecular logic gates, extending the applicability of logic gates to molecules, aims to convert chemical systems into computational units. The field has evolved to realize several practical applications in fields such as molecular electronics, biosensing, DNA computing, nanorobotics, and cell imaging. Working principle For logic gates with a single input, there are four possible output patterns. When the input is 0, the output can be either a 0 or 1. When the input is 1, the output can again be 0 or 1. The four output bit patterns correspond to a specific logic type: PASS 0, YES, NOT, and PASS 1. PASS 0 and PASS 1 always outputs 0 and 1, respectively, regardless of input. YES outputs a 1 when the input is 1, and NOT is the inverse of YES – it outputs a 0 when the input is 1. AND, OR, XOR, NAND, NOR, XNOR, and INH are two-input logic gates. The AND, OR, and XOR gates are fundamental logic gates, and the NAND, NOR, and XNOR gates are complementary to AND, OR, and XOR gates, respectively. An INHIBIT (INH) gate is a special conditional logic gate that includes a prohibitory input. When the prohibitory input is absent, the output produced depends solely on the other input. History and development One of the earliest ideas for the use of π-conjugated molecules in molecular computation was proposed by Ari Aviram from IBM in 1988. The first practical realization of molecular logic was by de Silva et al. in their seminal work, in which they constructed a molecular photoionic AND gate with a fluorescent output. While a YES molecular logic gate can convert signals from their ionic to photonic forms, they are singular-input-singular-output systems. To build more complex molecular logic architectures, two-input gates, namely AND and OR gates, are needed. Some early works made some progress in this direction, but they could not realize a complete truth table as their protonated ionic forms could not bind to the substrate in every case. De Silva et al. constructed an anthracene-based AND gate made up of tertiary amine and benzo-18-crown-6 units, both of which were known to show photoinduced electron transfer (PET) processes. The two molecules acted as receptors that were connected to the anthracene-based fluorophore by alkyl spacers. The PET is quenched upon coordination with protons and sodium ions, respectively, for the two receptors, and would cause the anthracene unit to fluoresce. 
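The input–output patterns described above map directly onto Boolean truth tables. The sketch below simply tabulates the two-input gates named in this section, including the INHIBIT gate with its prohibitory second input; it is an abstract model of the logic only, not of the photochemistry of any particular molecular system.

```python
from itertools import product

gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XNOR": lambda a, b: 1 - (a ^ b),
    # INHIBIT: output follows input a unless the prohibitory input b is present
    "INH":  lambda a, b: a & (1 - b),
}

print("a b " + " ".join(f"{name:>4}" for name in gates))
for a, b in product((0, 1), repeat=2):
    row = " ".join(f"{fn(a, b):>4}" for fn in gates.values())
    print(f"{a} {b} {row}")
```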
Examples of molecular logic gates YES molecular logic gate An example of a YES logic gate comprises a benzo-crown-ether connected to a cyano-substituted anthracene unit. An output of 1 (fluorescence) is obtained only when sodium ions are present in the solution (indicating an input of 1). Sodium ions are encapsulated by the crown ether, resulting in a quenching of the PET process and causing the anthracene unit to fluoresce. AND molecular logic gate This molecular logic gate illustrates the advancement from redox-fluorescent switches to multi-input logic gates with an electrochemical switch, detecting the presence of acids. This two-input AND logic gate incorporates a tertiary amine proton receptor and a tetrathiafulvalene redox donor. These groups, when attached to anthracene, can simultaneously process information concerning the concentration of the acid and oxidizing ability of the solution. OR molecular logic gate De Silva et al. constructed an OR molecular logic gate using an aza-crown ether receptor and sodium and potassium ions as the inputs. Either of the two ions could bind to the crown ether, causing the PET to be quenched and the fluorescence to be turned on. Since either of the two ions (input “1”) could cause fluorescence (output “1”), the system resembled an OR logic gate. INH molecular logic gate The INH logic gate incorporates a Tb3+ ion in a chelate complex. This two-input logic gate displays non-commutative behavior with chemical inputs and a phosphorescence output. Whenever dioxygen (input “1”) is present, the system is quenched and no phosphorescence is observed (output “0”). The second input, H+, must also be present for an output “1” to be observed. NAND molecular logic gate Parker and Williams constructed a NAND logic gate based on strong emission from a terbium complex of phenanthridine. When acid and oxygen (the two inputs) are absent (input “0”), the terbium center fluoresces (output “1”). NOR molecular logic gate Akkaya and coworkers demonstrated a molecular NOR gate using a boradiazaindacene system. Fluorescence of the highly-emissive boradiazaindacene (input “1”) was found to be quenched in the presence of either a zinc salt [Zn(II)] or trifluoroacetic acid (TFA). XOR and XNOR molecular logic gates De Silva and McClenaghan designed a proof-of-principle arithmetic device based on molecular logic gates. Compound A is a push-pull olefin with the top receptor containing four carboxylic acid anion groups (and non-disclosed counter cations) capable of binding to calcium. The bottom part is a quinoline molecule which is a receptor for hydrogen ions. The logic gate operates as follows: without any chemical input of Ca2+ or H+, the chromophore shows a maximum absorbance in UV/VIS spectroscopy at 390 nm. When calcium is introduced, a hypsochromic shift (blue shift) takes place and the absorbance at 390 nm decreases; likewise, an addition of protons causes a bathochromic shift (red shift). When both cations are in water, the net result is absorption at the original 390 nm wavelength. This system represents an XNOR logic gate in absorption and an XOR logic gate in transmittance. In another XOR logic gate system, the chemistry is based on pseudorotaxane. In organic solution the electron-deficient diazapyrenium salt (rod) and the electron-rich 2,3-dioxynaphthalene units of the crown ether (ring) self-assemble by formation of a charge transfer complex. An added tertiary amine like tributylamine forms a 1:2 adduct with the diazapyrene and the complex gets dethreaded. 
This process is accompanied by an increase in emission intensity at 343 nm resulting from freed crown ether. Added trifluoromethanesulfonic acid reacts with the amine and the process is reverted. Excess acid locks the crown ether by protonation and the complex is de-threaded again. Half-adder and half-subtractor molecular circuits In compound B, the bottom section contains a tertiary amino group that is capable of binding to protons. In this system, fluorescence only occurs when both cations are present. The presence of both cations hinders PET, allowing compound B to fluoresce. In the absence of either ion, fluorescence is quenched by PET, which involves an electron transfer from either the nitrogen atom or the oxygen atoms, or both to the anthracenyl group. When both receptors are bound to calcium ions and protons respectively, both PET channels are shut off. The overall result of Compound B is AND logic, since an output of "1" (fluorescence) occurs only when both Ca2+ and H+ are present in solution, that is, have values as "1". With both systems running in parallel and the monitoring of transmittance for system A and fluorescence for system B, the result is a half-adder capable of reproducing the equation 1 + 1 = 2. In a modification of system B, three chemical inputs are simultaneously processed in an AND logic gate. An enhanced fluorescence signal is observed only in the presence of excess protons, zinc and sodium ions through interactions with their respective amine, phenyldiaminocarboxylate, and crown ether receptors. The processing mode operates similarly as discussed above – fluorescence is observed due to the prevention of competing PET reactions from the receptors to the excited anthracene fluorophore. The absence of any ion input results in a low fluorescence output. Each receptor is selective for its specific ion as an increase in the concentration of the other ions does not yield a high fluorescence. The specific concentration threshold of each input must be reached to achieve a fluorescent output in accordance with combinatorial AND logic. More complex molecular logic circuits A molecular logic gate can process modulators much like the setup seen in de Silva’s proof-of-principle, but incorporating different logic gates on the same molecule is challenging. Such a function is called integrated logic and is exemplified by the BODIPY-based, half-subtractor logic gate illustrated by Coskun, Akkaya, and their colleagues. When monitored at two different wavelengths, 565 and 660 nm, XOR and INH logic gates operations are realized at the respective wavelengths. Optical studies of this compound in tetrahydrofuran reveal an absorbance peak at 565 nm and an emission peak at 660 nm. Addition of an acid results in a hypsochromic shift of both peaks as protonation of the tertiary amine results in an internal charge transfer. The color of the emission observed is yellow. When a strong base is added, the phenolic hydroxyl group is deprotonated, effecting a PET that renders the molecule non-emissive. When an acid and base are added, the molecule is observed to give off a red emission, as the tertiary amine would not be protonated while the hydroxyl group would remain protonated, resulting in the absence of both PET and intramolecular charge transfer (ICT). Due to the great difference in emission intensity, this single molecule is capable of carrying out subtraction at a nanoscale level. A full adder system based on fluorescein has also been constructed by Shanzer et al. 
The system is able to compute 1+1+1=3. Potential applications Over the years, the utility of molecular logic gates has been explored in a wide range of fields such as chemical and biological detection, the pharmaceutical and food industries, and the emerging fields of nanomaterials and chemical computing. Chemical detection of ions Fluoride (F−) and acetate (CH3COO−) anions are among the most important ones in the context of human health and well-being. The former, used extensively in health care, is known for its toxicity and corrosiveness. The latter can cause alkalosis and affect metabolic pathways beyond a certain concentration. Hence, it is crucial to develop methods to detect these anions in aqueous media. Bhat et al.. constructed an INH gate with receptors that bind selectively to F‑ and CH3COO− anions. The system used changes in absorbance as a colorimetric-based output to detect the concentration of anions.   Wen and coworkers designed an INH molecular logic gate with Fe3+ and EDTA as the inputs and a fluorescent output for the detection of ferric ions in solutions. The fluorescence of the system is quenched if and only if Fe3+ input is present and EDTA is absent. Heavy metal ions are a persistent threat to human health because of their inherent toxicity and low degradability. Several molecular logic gate-based systems have been constructed to detect ions such as Cd2+, Hg2+/Pb2+, and Ag+. In their work, Chen et al. demonstrated that logic gate-based systems could be used to detect Cd2+ ions in rice samples. Biological applications The effectiveness of methods such as chemotherapy to treat cancer tends to plateau after some time, as the cells undergo molecular changes that render them insensitive to the effect of anticancer drugs, making the early detection of cancerous cells important. A biomarker, microRNA (miRNA), is crucial in this detection via its expression patterns. Zhang et al. have demonstrated an INH-OR gate cascade for the purpose, Yue et al. used an AND gate to construct a system with two miRNA inputs and a quantum dot photoluminescence output, and Peng et al. also constructed an AND gate-based dual-input system for the simultaneous detection of miRNAs from tumor cells. Akkaya et al. illustrated the application of a logic gate for photodynamic therapy in their work. A BODIPY dye attached to a crown ether and two pyridyl groups separated by spacers works according to an AND logic gate. The molecule works as a photodynamic agent upon irradiation at 660 nm under conditions of relatively high sodium and proton ion concentrations by converting triplet oxygen to cytotoxic singlet oxygen. This prototypical example uses higher sodium levels and lower pH in tumor tissue compared to the levels in normal cells. When these two cancer-related cellular parameters are satisfied, a change is observed in the absorbance spectrum. DNA computing and logic calculation The concept of DNA computing arose from addressing storage density issues because of the increasing volumes of data information. Theoretically, a gram of single-stranded DNA is capable of storing over 400 exabytes of data at a density of two bits per nucleotide. Leonard Adleman is credited with having established the field in 1994. Recently, molecular logic gate systems have been utilized in DNA computing models. Massey et al. constructed photonic DNA molecular logic circuits using cascades of AND, OR, NAND, and NOR molecular logic gates. 
They used lanthanide complexes as fluorescent markers, and their luminescent outputs were detected by FRET-based devices at the terminals of DNA strands. Works by Campbell et al. on demonstrating NOT, AND, OR, and XNOR logic systems based on DNA crossover tiles, Bader et al. on manipulating the DNA G-quadruplex structure to realize YES, AND, and OR logic operations, and Chatterjee and coworkers on constructing logic gates using reactive DNA hairpins on DNA origami surfaces are some examples of logic gate-based DNA computing. Nanorobotics and advanced machines Nanorobots have the potential to transform drug delivery processes and biological computing. Llopis-Lorente et al. developed a nanorobot that can perform logic operations and process information on glucose and urea. Thubagere et al. designed a DNA molecular nanorobot capable of sorting chemical cargo. The system could work without additional power as the robot was capable of walking across the DNA origami surface on its two feet. It also had an arm to transport cargo. Margulies et al. demonstrated molecular sequential logic, where they created a molecular keypad lock resembling the processing capabilities of an electronic security device, which is equivalent to incorporating several interconnected AND logic gates in parallel. The molecule mimics an electronic keypad of an automated teller machine. The output signals are dependent not only on the presence of inputs but also on a correct order; i.e. the correct password must be entered. The molecule was designed using pyrene and fluorescein fluorophores connected by a siderophore, which binds to Fe(III), and the acidity of the solution changes the fluorescence properties of the fluorescein fluorophore. Molecular logic gate systems can theoretically overcome the problems arising when semiconductors approach nano-dimensions. Molecular logic gates are more versatile than their silicon counterparts, with phenomena such as superposed logic unavailable to semiconductor electronics. Dry molecular gates, such as the one demonstrated by Avouris and colleagues, prove to be possible substitutes for semiconductor devices due to their small size, similar infrastructure, and data processing abilities. Avouris reported a NOT logic gate composed of a bundle of carbon nanotubes. The nanotubes are doped differently in adjoining regions, creating two complementary field-effect transistors, and the bundle operates as a NOT logic gate only when satisfactory conditions are met. See also Molecular scale electronics Molecular machine Chemical computer Host-guest chemistry Molecular switch Photoswitch Molecular memory Quantum computing Unconventional computing References External links The 3rd International Conference on Molecular Sensors & Molecular Logic Gates (MSMLG) was held on July 8–11, 2012 at Korea University in Seoul, Korea. Logic gates Molecular electronics Molecular machines Nanoelectronics Supramolecular chemistry
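The arithmetic behaviour attributed to the molecular systems above (a half-adder reproducing 1 + 1 = 2 and a full adder computing 1 + 1 + 1 = 3) corresponds to the standard digital circuits sketched below. The code is an abstract Boolean model, not a description of the photophysics of the molecules themselves.

```python
def half_adder(a: int, b: int):
    """Sum bit from an XOR gate, carry bit from an AND gate."""
    return a ^ b, a & b

def full_adder(a: int, b: int, c: int):
    """Chain two half-adders; the carries are combined with an OR gate."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, c)
    return s2, c1 | c2

print(half_adder(1, 1))     # (0, 1): binary 10, i.e. 1 + 1 = 2
print(full_adder(1, 1, 1))  # (1, 1): binary 11, i.e. 1 + 1 + 1 = 3
```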
Molecular logic gate
[ "Physics", "Chemistry", "Materials_science", "Technology" ]
3,623
[ "Machines", "Molecular physics", "Molecular electronics", "Physical systems", "Molecular machines", "Nanoelectronics", "nan", "Nanotechnology", "Supramolecular chemistry" ]
4,832,120
https://en.wikipedia.org/wiki/Atmospheric%20Dispersion%20Modelling%20Liaison%20Committee
The Atmospheric Dispersion Modelling Liaison Committee (ADMLC) is composed of representatives from government departments, agencies (predominantly but not exclusively from the UK) and private consultancies. The ADMLC's main aim is to review current understanding of atmospheric dispersion and related phenomena for application primarily in the authorization or licensing of pollutant emissions to the atmosphere from industrial, commercial or institutional sites. The ADMLC is primarily concerned with atmospheric pollutant discharges from regulated emission sites and other fixed sources. Their review and study interests include routine discharges as well as accidental releases or releases cause by operational upsets. Their interests also include modelling dispersion at all scales from on-site short range (including dispersion modelling indoors) to long range distances. The ADMLC does not normally get involved with pollutant emissions from roadway traffic or other non-fixed sources. Nor does it get involved with air pollution topics such as acid rain and ozone formation. History In 1977, a meeting of representatives from UK government departments, utilities and research organisations was held to discuss atmospheric dispersion calculation methods for radioactive releases. Those present agreed on the need for a review of recent developments in atmospheric dispersion modelling and formed an informal Steering Committee, which operated for a number of years. That Steering Committee subsequently became the Atmospheric Dispersion Modelling Liaison Committee in 1995. Although the ADMLC was initially formed to consider primarily radioactive releases from the nuclear industry, it has expanded its range of interests and its membership to more fully reflect the needs of industrial and regulatory organisations. Membership As listed on the ADMLC's web site, the membership of the ADMLC includes the following entities: Atomic Weapons Establishment at Aldermaston Defence Science and Technology Laboratory (DSTL) Department for Energy and Climate Change Department for Environment Food and Rural Affairs (Defra) Environment Agency Environmental Protection Agency (Ireland) Food Standards Agency Health and Safety Executive Methodology and Standards Development Unit, Hazardous Installations Directorate Nuclear Installations Inspectorate Health and Safety Laboratory Home Office Met Office Amec Foster Wheeler Nuclear Department, HMS Sultan Public Health England Scottish Environment Protection Agency The Chairman of ADMLC is provided by the Met Office and the Secretariat of the ADMLC are provided by Public Health England. Areas of interest ADMLC facilitates the exchange of ideas and highlights where there are gaps in knowledge. It tries to provide guidance to, and to endorse good practice in, the dispersion modelling community. The ADMLC has hosted workshops and welcomes ideas for joint meetings or joint workshops with other organisations. The ADMLC members pay an annual subscription which is used to fund reviews on topics agreed on by the members. 
Reviews already funded by ADMLC include: Dispersion at low wind speed Dispersion from sources near groups of buildings, or in urban areas Plume rise Dispersion in coastal areas, The use of old meteorological data or data obtained at some distance from the release point The possible use of data from numerical weather prediction programs Uncertainty of dispersion model predictions as a result of deriving atmospheric stability indicators from meteorological data Proceedings of a workshop on the reliability of dispersion models for regulatory applications Review of the Royal Meteorological Society guidelines for atmospheric dispersion modelling Calculation of air pollutant concentrations indoors Dispersion following explosions A complete list of projects, and links to each report, can be found on the ADMLC website. See also Accidental release source terms Bibliography of atmospheric dispersion modeling Air pollution dispersion terminology Air Quality Modeling Group Air Quality Modelling and Assessment Unit (AQMAU) Air Resources Laboratory AP 42 Compilation of Air Pollutant Emission Factors Atmospheric dispersion modeling :Category:Atmospheric dispersion modeling List of atmospheric dispersion models Finnish Meteorological Institute Met Office National Environmental Research Institute of Denmark NILU, Norwegian Institute of Air Research Royal Meteorological Society UK Dispersion Modelling Bureau References Further reading External links Atmospheric Dispersion Modelling Liaison Committee Air Resources Laboratory (ARL) Air Quality Modeling Group Met Office web site Error propagation in air dispersion modeling Air pollution in the United Kingdom Air pollution organizations Atmospheric dispersion modeling Science and technology in the United Kingdom 1995 establishments in the United Kingdom
Atmospheric Dispersion Modelling Liaison Committee
[ "Chemistry", "Engineering", "Environmental_science" ]
850
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
4,834,091
https://en.wikipedia.org/wiki/Kuhn%20length
The Kuhn length is a theoretical treatment, developed by Werner Kuhn, in which a real polymer chain is considered as a collection of N Kuhn segments, each with a Kuhn length b. Each Kuhn segment can be thought of as freely jointed with the others. Each segment in a freely jointed chain can randomly orient in any direction without the influence of any forces, independent of the directions taken by other segments. Instead of considering a real chain consisting of bonds with fixed bond angles, torsion angles, and bond lengths, Kuhn considered an equivalent ideal chain with N connected segments, now called Kuhn segments, that can orient in any random direction. The length of a fully stretched chain is L = Nb for the Kuhn segment chain. In the simplest treatment, such a chain follows the random walk model, where each step taken in a random direction is independent of the directions taken in the previous steps, forming a random coil. The mean-square end-to-end distance for a chain satisfying the random walk model is ⟨R²⟩ = Nb², so the root-mean-square end-to-end distance is b√N. Since the space occupied by a segment in the polymer chain cannot be taken by another segment, a self-avoiding random walk model can also be used. The Kuhn segment construction is useful in that it allows complicated polymers to be treated with simplified models as either a random walk or a self-avoiding walk, which can simplify the treatment considerably. For an actual homopolymer chain (consisting of the same repeat units) with n bonds of length l, bond angle θ, and a dihedral angle energy potential, the mean-square end-to-end distance can be expressed in terms of n, l, θ, and the average cosine of the dihedral angle; the fully stretched length is likewise determined by n, l, and θ. By equating the two expressions for the mean-square end-to-end distance and the two expressions for the fully stretched length from the actual chain and the equivalent chain with Kuhn segments, the number of Kuhn segments N and the Kuhn segment length b can be obtained. For a worm-like chain, the Kuhn length equals twice the persistence length. References Polymer chemistry Polymer physics
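The ideal-chain statistics described above, with the mean-square end-to-end distance growing as Nb², can be checked with a short freely jointed chain simulation. This is an illustrative Monte Carlo sketch rather than a model of any particular polymer; the segment count, Kuhn length, and sample size are arbitrary choices.

```python
import math
import random

def freely_jointed_chain(num_segments: int, kuhn_length: float = 1.0):
    """Return the end-to-end vector of one freely jointed (Kuhn) chain."""
    x = y = z = 0.0
    for _ in range(num_segments):
        # segment direction drawn uniformly from the unit sphere
        phi = random.uniform(0.0, 2.0 * math.pi)
        cos_theta = random.uniform(-1.0, 1.0)
        sin_theta = math.sqrt(1.0 - cos_theta ** 2)
        x += kuhn_length * sin_theta * math.cos(phi)
        y += kuhn_length * sin_theta * math.sin(phi)
        z += kuhn_length * cos_theta
    return x, y, z

N, b, samples = 100, 1.0, 2000
mean_r2 = sum(sum(c * c for c in freely_jointed_chain(N, b)) for _ in range(samples)) / samples
print(f"<R^2> ~ {mean_r2:.1f} (ideal-chain prediction N*b^2 = {N * b ** 2:.1f})")
```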
Kuhn length
[ "Chemistry", "Materials_science", "Engineering" ]
401
[ "Polymer physics", "Materials science", "Polymer chemistry" ]
4,838,287
https://en.wikipedia.org/wiki/Hot%20blast
Hot blast refers to the preheating of air blown into a blast furnace or other metallurgical process. As this considerably reduced the fuel consumed, hot blast was one of the most important technologies developed during the Industrial Revolution. Hot blast also allowed higher furnace temperatures, which increased the capacity of furnaces. As first developed, it worked by alternately storing heat from the furnace flue gas in a firebrick-lined vessel with multiple chambers, then blowing combustion air through the hot chamber. This is known as regenerative heating. Hot blast was invented and patented for iron furnaces by James Beaumont Neilson in 1828 at Wilsontown Ironworks in Scotland, but was later applied in other contexts, including late bloomeries. Later the carbon monoxide in the flue gas was burned to provide additional heat. History Invention and spread James Beaumont Neilson, previously foreman at Glasgow gas works, invented the system of preheating the blast for a furnace. He found that by increasing the temperature of the incoming air to , he could reduce the fuel consumption from 8.06 tons of coal to 5.16 tons of coal per ton of produced iron with further reductions at even higher temperatures. He, with partners including Charles Macintosh, patented this in 1828. Initially the heating vessel was made of wrought iron plates, but these oxidized, and he substituted a cast iron vessel. On the basis of a January 1828 patent, Thomas Botfield has a historical claim as the inventor of the hot blast method. Neilson is credited as inventor of hot blast, because he won patent litigation. Neilson and his partners engaged in substantial litigation to enforce the patent against infringers. The spread of this technology across Britain was relatively slow. By 1840, 58 ironmasters had taken out licenses, yielding a royalty income of £30,000 per year. By the time the patent expired there were 80 licenses. In 1843, just after it expired, 42 of the 80 furnaces in south Staffordshire were using hot blast, and uptake in south Wales was even slower. Other advantages of hot blast were that raw coal could be used instead of coke. In Scotland, the relatively poor "black band" ironstone could be profitably smelted. It also increased the daily output of furnaces. In the case of Calder ironworks from 5.6 tons per day in 1828 to 8.2 in 1833, which made Scotland the lowest cost steel producing region in Britain in the 1830s. Early hot blast stoves were troublesome, as thermal expansion and contraction could cause breakage of pipes. This was somewhat remedied by supporting the pipes on rollers. It was also necessary to devise new methods of connecting the blast pipes to the tuyeres, as leather could no longer be used. Ultimately this principle was applied even more efficiently in regenerative heat exchangers, such as the Cowper stove (which preheat incoming blast air with waste heat from flue gas; these are used in modern blast furnaces), and in the open hearth furnace (for making steel) by the Siemens-Martin process. Independently, George Crane and David Thomas, of the Yniscedwyn Works in Wales, conceived of the same idea, and Crane filed for a British patent in 1836. They began producing iron by the new process on February 5, 1837. Crane subsequently bought Gessenhainer's patent and patented additions to it, controlling the use of the process in both Britain and the US. 
While Crane remained in Wales, Thomas moved to the US on behalf of the Lehigh Coal & Navigation Company and founded the Lehigh Crane Iron Company to utilize the process. Anthracite in ironmaking Hot blast allowed the use of anthracite in iron smelting. It also allowed use of lower quality coal because less fuel meant proportionately less sulfur and ash. At the time the process was invented, good coking coal was only available in sufficient quantities in Great Britain and western Germany, so iron furnaces in the US were using charcoal. This meant that any given iron furnace required vast tracts of forested land for charcoal production, and generally went out of blast when the nearby woods had been felled. Attempts to use anthracite as a fuel had ended in failure, as the coal resisted ignition under cold blast conditions. In 1831, Dr. Frederick W. Gessenhainer filed for a US patent on the use of hot blast and anthracite to smelt iron. He produced a small quantity of anthracite iron by this method at Valley Furnace near Pottsville, Pennsylvania in 1836, but due to breakdowns and his illness and death in 1838, he was not able to develop the process into large-scale production. Anthracite was displaced by coke in the US after the Civil War. Coke was more porous and able to support the heavier loads in the vastly larger furnaces of the late 19th century. References Blast furnaces Metallurgy Steelmaking Scottish inventions British inventions
Hot blast
[ "Chemistry", "Materials_science", "Engineering" ]
1,004
[ "Metallurgical processes", "Metallurgy", "Steelmaking", "History of metallurgy", "Materials science", "Blast furnaces", "nan" ]
4,838,571
https://en.wikipedia.org/wiki/Position%20operator
In quantum mechanics, the position operator is the operator that corresponds to the position observable of a particle. When the position operator is considered with a wide enough domain (e.g. the space of tempered distributions), its eigenvalues are the possible position vectors of the particle. In one dimension, if by the symbol $|x\rangle$ we denote the unitary eigenvector of the position operator corresponding to the eigenvalue $x$, then $|x\rangle$ represents the state of the particle in which the particle is found with certainty at position $x$. Therefore, denoting the position operator by the symbol $\hat{\mathrm{x}}$, we can write $\hat{\mathrm{x}}|x\rangle = x\,|x\rangle$ for every real position $x$. One possible realization of the unitary state with position $x$ is the Dirac delta (function) distribution centered at the position $x$, often denoted by $\delta_x$. In quantum mechanics, the ordered (continuous) family of all Dirac distributions, i.e. the family $(\delta_x)_{x\in\mathbb{R}}$, is called the (unitary) position basis, just because it is a (unitary) eigenbasis of the position operator in the space of tempered distributions. It is fundamental to observe that there exists only one linear continuous endomorphism $\hat{\mathrm{x}}$ on the space of tempered distributions such that $\hat{\mathrm{x}}(\delta_x) = x\,\delta_x$ for every real point $x$. It is possible to prove that this unique endomorphism is necessarily defined by $\hat{\mathrm{x}}(\psi) = \mathrm{x}\,\psi$ for every tempered distribution $\psi$, where $\mathrm{x}$ denotes the coordinate function of the position line, defined from the real line into the complex plane by $\mathrm{x}(t) = t$. Introduction Consider representing the quantum state of a particle at a certain instant of time by a square integrable wave function $\psi$. For now, assume one space dimension (i.e. the particle "confined to" a straight line). If the wave function $\psi$ is normalized, then the square modulus $|\psi|^2$ represents the probability density of finding the particle at some position of the real-line, at a certain time. That is, if $\|\psi\| = 1$, then the probability to find the particle in the position range $[a, b]$ is $\pi_{[a,b]}(\psi) = \int_a^b |\psi(x)|^2 \, dx$. Hence the expected value of a measurement of the position for the particle is $\langle \hat{\mathrm{x}} \rangle_\psi = \int_{\mathbb{R}} x\,|\psi(x)|^2 \, dx = \int_{\mathbb{R}} \overline{\psi(x)}\,(\mathrm{x}\,\psi)(x) \, dx$, where $\mathrm{x}$ is the coordinate function, which is simply the canonical embedding of the position-line into the complex plane. Strictly speaking, the observable position can be point-wisely defined as $(\hat{\mathrm{x}}\,\psi)(x) = x\,\psi(x)$ for every wave function $\psi$ and for every point $x$ of the real line. In the case of equivalence classes $\psi \in L^2(\mathbb{R},\mathbb{C})$ the definition reads directly as follows: $\hat{\mathrm{x}}\,\psi = \mathrm{x}\,\psi$. That is, the position operator $\hat{\mathrm{x}}$ multiplies any wave-function $\psi$ by the coordinate function $\mathrm{x}$. Three dimensions The generalisation to three dimensions is straightforward. The space-time wavefunction is now $\psi(\mathbf{r}, t)$ and the expectation value of the position operator $\hat{\mathbf{r}}$ at the state $\psi$ is $\langle \hat{\mathbf{r}} \rangle_\psi = \int \mathbf{r}\,|\psi(\mathbf{r}, t)|^2 \, d^3\mathbf{r}$, where the integral is taken over all space. The position operator is $\hat{\mathbf{r}}\,\psi = \mathbf{r}\,\psi$. Basic properties In the above definition, which regards the case of a particle confined upon a line, the careful reader may remark that there does not exist any clear specification of the domain and the co-domain for the position operator. In literature, more or less explicitly, we find essentially three main directions to address this issue. The position operator is defined on the subspace of $L^2(\mathbb{R},\mathbb{C})$ formed by those equivalence classes $\psi$ whose product by the embedding $\mathrm{x}$ lives in the space $L^2(\mathbb{R},\mathbb{C})$ as well. In this case the position operator turns out not to be continuous (it is unbounded with respect to the topology induced by the canonical scalar product of $L^2(\mathbb{R},\mathbb{C})$), with no eigenvectors, no eigenvalues and consequently with empty point spectrum. The position operator is defined on the Schwartz space $\mathcal{S}(\mathbb{R})$ (i.e. the nuclear space of all smooth complex functions defined upon the real-line whose derivatives are rapidly decreasing). 
In this case the position operator reveals continuous (with respect to the canonical topology of ), injective, with no eigenvectors, no eigenvalues and consequently with empty point spectrum. It is (fully) self-adjoint with respect to the scalar product of in the sense that The position operator is defined on the dual space of (i.e. the nuclear space of tempered distributions). As is a subspace of , the product of a tempered distribution by the embedding always lives . In this case the position operator reveals continuous (with respect to the canonical topology of ), surjective, endowed with complete families of generalized eigenvectors and real generalized eigenvalues. It is self-adjoint with respect to the scalar product of in the sense that its transpose operator is self-adjoint, that is The last case is, in practice, the most widely adopted choice in Quantum Mechanics literature, although never explicitly underlined. It addresses the possible absence of eigenvectors by extending the Hilbert space to a rigged Hilbert space: thereby providing a mathematically rigorous notion of eigenvectors and eigenvalues. Eigenstates The eigenfunctions of the position operator (on the space of tempered distributions), represented in position space, are Dirac delta functions. Informal proof. To show that possible eigenvectors of the position operator should necessarily be Dirac delta distributions, suppose that is an eigenstate of the position operator with eigenvalue . We write the eigenvalue equation in position coordinates, recalling that simply multiplies the wave-functions by the function , in the position representation. Since the function is variable while is a constant, must be zero everywhere except at the point . Clearly, no continuous function satisfies such properties, and we cannot simply define the wave-function to be a complex number at that point because its -norm would be 0 and not 1. This suggest the need of a "functional object" concentrated at the point and with integral different from 0: any multiple of the Dirac delta centered at . The normalized solution to the equation is or better such that Indeed, recalling that the product of any function by the Dirac distribution centered at a point is the value of the function at that point times the Dirac distribution itself, we obtain immediately Although such Dirac states are physically unrealizable and, strictly speaking, are not functions, Dirac distribution centered at can be thought of as an "ideal state" whose position is known exactly (any measurement of the position always returns the eigenvalue ). Hence, by the uncertainty principle, nothing is known about the momentum of such a state. Momentum space Usually, in quantum mechanics, by representation in the momentum space we intend the representation of states and observables with respect to the canonical unitary momentum basis In momentum space, the position operator in one dimension is represented by the following differential operator where: the representation of the position operator in the momentum basis is naturally defined by , for every wave function (tempered distribution) ; represents the coordinate function on the momentum line and the wave-vector function is defined by . Formalism in L2(R, C) Consider the case of a spinless particle moving in one spatial dimension. The state space for such a particle contains ; the Hilbert space of complex-valued, square-integrable functions on the real line. 
The position operator is defined as the self-adjoint operator $\hat{\mathrm{x}}$ with domain of definition $D(\hat{\mathrm{x}}) = \{\psi \in L^2(\mathbb{R}, \mathbb{C}) : \mathrm{x}\,\psi \in L^2(\mathbb{R}, \mathbb{C})\}$ and coordinate function $\mathrm{x}$ sending each point $x$ to itself, such that $(\hat{\mathrm{x}}\,\psi)(x) = x\,\psi(x)$ for each pointwisely defined $\psi \in D(\hat{\mathrm{x}})$ and for each real point $x$. Immediately from the definition we can deduce that the spectrum consists of the entire real line and that $\hat{\mathrm{x}}$ has a purely continuous spectrum, i.e., no discrete set of eigenvalues. The three-dimensional case is defined analogously. We shall keep the one-dimensional assumption in the following discussion. Measurement theory in L2(R, C) As with any quantum mechanical observable, in order to discuss position measurement, we need to calculate the spectral resolution of the position operator, which is $\hat{\mathrm{x}} = \int_{\mathbb{R}} t \, d\mu_{\hat{\mathrm{x}}}(t)$, where $\mu_{\hat{\mathrm{x}}}$ is the so-called spectral measure of the position operator. Let $\chi_B$ denote the indicator function for a Borel subset $B$ of $\mathbb{R}$. Then the spectral measure is given by $\mu_{\hat{\mathrm{x}}}(B)\,\psi = \chi_B\,\psi$, i.e., as multiplication by the indicator function of $B$. Therefore, if the system is prepared in a state $\psi$, then the probability of the measured position of the particle belonging to a Borel set $B$ is $\|\mu_{\hat{\mathrm{x}}}(B)\,\psi\|^2 = \|\chi_B\,\psi\|^2 = \int_B |\psi(x)|^2 \, d\lambda(x)$, where $\lambda$ is the Lebesgue measure on the real line. After any measurement aiming to detect the particle within the subset B, the wave function collapses to either $\mu_{\hat{\mathrm{x}}}(B)\,\psi / \|\mu_{\hat{\mathrm{x}}}(B)\,\psi\|$ or $(1 - \mu_{\hat{\mathrm{x}}}(B))\,\psi / \|(1 - \mu_{\hat{\mathrm{x}}}(B))\,\psi\|$, where $\|\cdot\|$ is the Hilbert space norm on $L^2(\mathbb{R}, \mathbb{C})$. See also Position and momentum space Momentum operator Translation operator (quantum mechanics) Notes References Quantum mechanics
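The formalism above can be illustrated with a short numerical sketch (not part of the original article; the discretized grid, natural units, and the Gaussian wave packet are assumptions of the example): the position operator acts by multiplication with the coordinate function, the spectral measure of an interval acts by multiplication with its indicator function, and a very narrow packet behaves like an approximate eigenvector concentrated at its centre.

```python
import numpy as np

# Discretized position line (natural units); a hypothetical normalized Gaussian wave packet.
x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]
x0, sigma = 1.5, 0.5
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)          # normalize so that integral of |psi|^2 is 1

# The position operator acts by multiplication with the coordinate function.
x_psi = x * psi

# Expectation value <x> = integral of x * |psi(x)|^2 dx  (close to x0 for this packet).
expect_x = np.sum(x * np.abs(psi) ** 2) * dx

# Spectral measure of a Borel set B = [a, b]: multiplication by the indicator function of B.
a, b = 1.0, 2.0
chi_B = ((x >= a) & (x <= b)).astype(float)
prob_B = np.sum(chi_B * np.abs(psi) ** 2) * dx          # probability of finding the particle in B

# Post-measurement state when the particle is found in B: chi_B * psi, renormalized.
psi_after = chi_B * psi
psi_after /= np.sqrt(np.sum(np.abs(psi_after) ** 2) * dx)

# A very narrow packet behaves like an approximate eigenvector: x * psi is close to x0 * psi.
narrow = np.exp(-(x - x0) ** 2 / (4 * 0.01 ** 2)).astype(complex)
narrow /= np.sqrt(np.sum(np.abs(narrow) ** 2) * dx)
residual = np.sqrt(np.sum(np.abs(x * narrow - x0 * narrow) ** 2) * dx)   # roughly the packet width

print(expect_x, prob_B, residual)
```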
Position operator
[ "Physics" ]
1,667
[ "Quantum operators", "Quantum mechanics" ]
26,754,386
https://en.wikipedia.org/wiki/Randomized%20rounding
In computer science and operations research, randomized rounding is a widely used approach for designing and analyzing approximation algorithms. Many combinatorial optimization problems are computationally intractable to solve exactly (to optimality). For such problems, randomized rounding can be used to design fast (polynomial time) approximation algorithms—that is, algorithms that are guaranteed to return an approximately optimal solution given any input. The basic idea of randomized rounding is to convert an optimal solution of a relaxation of the problem into an approximately-optimal solution to the original problem. The resulting algorithm is usually analyzed using the probabilistic method. Overview The basic approach has three steps: Formulate the problem to be solved as an integer linear program (ILP). Compute an optimal fractional solution to the linear programming relaxation (LP) of the ILP. Round the fractional solution of the LP to an integer solution of the ILP. (Although the approach is most commonly applied with linear programs, other kinds of relaxations are sometimes used. For example, see Goemans' and Williamson's semidefinite programming-based Max-Cut approximation algorithm.) In the first step, the challenge is to choose a suitable integer linear program. Familiarity with linear programming, in particular modelling using linear programs and integer linear programs, is required. For many problems, there is a natural integer linear program that works well, such as in the Set Cover example below. (The integer linear program should have a small integrality gap; indeed randomized rounding is often used to prove bounds on integrality gaps.) In the second step, the optimal fractional solution can typically be computed in polynomial time using any standard linear programming algorithm. In the third step, the fractional solution must be converted into an integer solution (and thus a solution to the original problem). This is called rounding the fractional solution. The resulting integer solution should (provably) have cost not much larger than the cost of the fractional solution. This will ensure that the cost of the integer solution is not much larger than the cost of the optimal integer solution. The main technique used to do the third step (rounding) is to use randomization, and then to use probabilistic arguments to bound the increase in cost due to the rounding (following the probabilistic method from combinatorics). Therein, probabilistic arguments are used to show the existence of discrete structures with desired properties. In this context, one uses such arguments to show the following: Given any fractional solution of the LP, with positive probability the randomized rounding process produces an integer solution that approximates according to some desired criterion. Finally, to make the third step computationally efficient, one either shows that approximates with high probability (so that the step can remain randomized) or one derandomizes the rounding step, typically using the method of conditional probabilities. The latter method converts the randomized rounding process into an efficient deterministic process that is guaranteed to reach a good outcome. Example: the set cover problem The following example illustrates how randomized rounding can be used to design an approximation algorithm for the set cover problem. Fix any instance of set cover over a universe . 
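To make the running example concrete, the sketches accompanying the next steps use the following small, invented set-cover instance (plain Python data, not taken from the article). The integer linear program of step 1 has one 0/1 variable per set, minimizes the total cost, and requires every element of the universe to be covered by at least one chosen set.

```python
# Hypothetical set-cover instance used in the sketches below (invented for illustration).
# ILP (step 1): minimize sum_S cost[S] * x_S
#               subject to sum_{S : e in S} x_S >= 1 for every element e, with x_S in {0, 1}.
universe = {1, 2, 3, 4, 5}
sets = {
    "A": {1, 2, 3},
    "B": {2, 4},
    "C": {3, 4, 5},
    "D": {1, 5},
}
cost = {"A": 3.0, "B": 1.0, "C": 2.5, "D": 1.5}
```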
Computing the fractional solution For step 1, let IP be the standard integer linear program for set cover for this instance. For step 2, let LP be the linear programming relaxation of IP, and compute an optimal solution to LP using any standard linear programming algorithm. This takes time polynomial in the input size. The feasible solutions to LP are the vectors that assign each set a non-negative weight , such that, for each element , covers —the total weight assigned to the sets containing is at least 1, that is, The optimal solution is a feasible solution whose cost is as small as possible. Note that any set cover for gives a feasible solution (where for , otherwise). The cost of this equals the cost of , that is, In other words, the linear program LP is a relaxation of the given set-cover problem. Since has minimum cost among feasible solutions to the LP, the cost of is a lower bound on the cost of the optimal set cover. Randomized rounding step In step 3, we must convert the minimum-cost fractional set cover into a feasible integer solution (corresponding to a true set cover). The rounding step should produce an that, with positive probability, has cost within a small factor of the cost of .Then (since the cost of is a lower bound on the cost of the optimal set cover), the cost of will be within a small factor of the optimal cost. As a starting point, consider the most natural rounding scheme: For each set in turn, take with probability , otherwise take . With this rounding scheme, the expected cost of the chosen sets is at most , the cost of the fractional cover. This is good. Unfortunately the coverage is not good. When the variables are small, the probability that an element is not covered is about So only a constant fraction of the elements will be covered in expectation. To make cover every element with high probability, the standard rounding scheme first scales up the rounding probabilities by an appropriate factor . Here is the standard rounding scheme: Fix a parameter . For each set in turn, take with probability , otherwise take . Scaling the probabilities up by increases the expected cost by , but makes coverage of all elements likely. The idea is to choose as small as possible so that all elements are provably covered with non-zero probability. Here is a detailed analysis. Lemma (approximation guarantee for rounding scheme) Fix . With positive probability, the rounding scheme returns a set cover of cost at most (and thus of cost times the cost of the optimal set cover). (Note: with care the can be reduced to .) Proof The output of the random rounding scheme has the desired properties as long as none of the following "bad" events occur: the cost of exceeds , or for some element , fails to cover . The expectation of each is at most . By linearity of expectation, the expectation of is at most . Thus, by Markov's inequality, the probability of the first bad event above is at most . For the remaining bad events (one for each element ), note that, since for any given element , the probability that is not covered is (This uses the inequality , which is strict for .) Thus, for each of the elements, the probability that the element is not covered is less than . By the union bound, the probability that one of the bad events happens is less than . Thus, with positive probability there are no bad events and is a set cover of cost at most . QED Derandomization using the method of conditional probabilities The lemma above shows the existence of a set cover of cost ). 
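A minimal sketch of steps 2 and 3 for the invented instance above (repeated here so the snippet is self-contained). The scaling factor lambda = ln(2n) and the retry loop are choices of this sketch rather than the article's exact parameters: the LP relaxation is solved with scipy.optimize.linprog, and each set is then taken independently with probability min(1, lambda * x*_S) until the sampled collection covers the universe.

```python
import math
import random
import numpy as np
from scipy.optimize import linprog

universe = {1, 2, 3, 4, 5}
sets = {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4, 5}, "D": {1, 5}}
cost = {"A": 3.0, "B": 1.0, "C": 2.5, "D": 1.5}
names = sorted(sets)

# Step 2: LP relaxation.  Coverage constraints sum_{S containing e} x_S >= 1 become -A x <= -1.
c = np.array([cost[s] for s in names])
A = np.array([[1.0 if e in sets[s] else 0.0 for s in names] for e in sorted(universe)])
res = linprog(c, A_ub=-A, b_ub=-np.ones(len(universe)), bounds=[(0.0, 1.0)] * len(names))
x_star = res.x                                   # optimal fractional set cover

# Step 3: scaled randomized rounding (assumed scale lambda = ln(2 * |universe|)).
lam = math.log(2 * len(universe))

def round_once():
    return {s for s, xs in zip(names, x_star) if random.random() < min(1.0, lam * xs)}

def is_cover(selection):
    covered = set()
    for s in selection:
        covered |= sets[s]
    return covered >= universe

chosen = round_once()
while not is_cover(chosen):                      # retry until every element is covered
    chosen = round_once()
print(chosen, sum(cost[s] for s in chosen))
```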
In this context our goal is an efficient approximation algorithm, not just an existence proof, so we are not done. One approach would be to increase a little bit, then show that the probability of success is at least, say, 1/4. With this modification, repeating the random rounding step a few times is enough to ensure a successful outcome with high probability. That approach weakens the approximation ratio. We next describe a different approach that yields a deterministic algorithm that is guaranteed to match the approximation ratio of the existence proof above. The approach is called the method of conditional probabilities. The deterministic algorithm emulates the randomized rounding scheme: it considers each set in turn, and chooses . But instead of making each choice randomly based on , it makes the choice deterministically, so as to keep the conditional probability of failure, given the choices so far, below 1. Bounding the conditional probability of failure We want to be able to set each variable in turn so as to keep the conditional probability of failure below 1. To do this, we need a good bound on the conditional probability of failure. The bound will come by refining the original existence proof. That proof implicitly bounds the probability of failure by the expectation of the random variable , where is the set of elements left uncovered at the end. The random variable may appear a bit mysterious, but it mirrors the probabilistic proof in a systematic way. The first term in comes from applying Markov's inequality to bound the probability of the first bad event (the cost is too high). It contributes at least 1 to if the cost of is too high. The second term counts the number of bad events of the second kind (uncovered elements). It contributes at least 1 to if leaves any element uncovered. Thus, in any outcome where is less than 1, must cover all the elements and have cost meeting the desired bound from the lemma. In short, if the rounding step fails, then . This implies (by Markov's inequality) that is an upper bound on the probability of failure. Note that the argument above is implicit already in the proof of the lemma, which also shows by calculation that . To apply the method of conditional probabilities, we need to extend the argument to bound the conditional probability of failure as the rounding step proceeds. Usually, this can be done in a systematic way, although it can be technically tedious. So, what about the conditional probability of failure as the rounding step iterates through the sets? Since in any outcome where the rounding step fails, by Markov's inequality, the conditional probability of failure is at most the conditional expectation of . Next we calculate the conditional expectation of , much as we calculated the unconditioned expectation of in the original proof. Consider the state of the rounding process at the end of some iteration . Let denote the sets considered so far (the first sets in ). Let denote the (partially assigned) vector (so is determined only if ). For each set , let denote the probability with which will be set to 1. Let contain the not-yet-covered elements. Then the conditional expectation of , given the choices made so far, that is, given , is Note that is determined only after iteration . Keeping the conditional probability of failure below 1 To keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of below 1. To do this, it suffices to keep the conditional expectation of from increasing. 
This is what the algorithm will do. It will set in each iteration to ensure that (where ). In the th iteration, how can the algorithm set to ensure that ? It turns out that it can simply set so as to minimize the resulting value of . To see why, focus on the point in time when iteration starts. At that time, is determined, but is not yet determined --- it can take two possible values depending on how is set in iteration . Let denote the value of . Let and , denote the two possible values of , depending on whether is set to 0, or 1, respectively. By the definition of conditional expectation, Since a weighted average of two quantities is always at least the minimum of those two quantities, it follows that Thus, setting so as to minimize the resulting value of will guarantee that . This is what the algorithm will do. In detail, what does this mean? Considered as a function of (with all other quantities fixed) is a linear function of , and the coefficient of in that function is Thus, the algorithm should set to 0 if this expression is positive, and 1 otherwise. This gives the following algorithm. Randomized-rounding algorithm for set cover input: set system , universe , cost vector output: set cover (a solution to the standard integer linear program for set cover) Compute a min-cost fractional set cover (an optimal solution to the LP relaxation). Let . Let for each . For each do: Let .   ( contains the not-yet-decided sets.) If    then set , else set and .   ( contains the not-yet-covered elements.) Return . lemma (approximation guarantee for algorithm) The algorithm above returns a set cover of cost at most times the minimum cost of any (fractional) set cover. proof The algorithm ensures that the conditional expectation of , , does not increase at each iteration. Since this conditional expectation is initially less than 1 (as shown previously), the algorithm ensures that the conditional expectation stays below 1. Since the conditional probability of failure is at most the conditional expectation of , in this way the algorithm ensures that the conditional probability of failure stays below 1. Thus, at the end, when all choices are determined, the algorithm reaches a successful outcome. That is, the algorithm above returns a set cover of cost at most times the minimum cost of any (fractional) set cover. Remarks In the example above, the algorithm was guided by the conditional expectation of a random variable . In some cases, instead of an exact conditional expectation, an upper bound (or sometimes a lower bound) on some conditional expectation is used instead. This is called a pessimistic estimator. Comparison to other applications of the probabilistic method The randomized rounding step differs from most applications of the probabilistic method in two respects: The computational complexity of the rounding step is important. It should be implementable by a fast (e.g. polynomial time) algorithm. The probability distribution underlying the random experiment is a function of the solution of a relaxation of the problem instance. This fact is crucial to proving the performance guarantee of the approximation algorithm --- that is, that for any problem instance, the algorithm returns a solution that approximates the optimal solution for that specific instance. In comparison, applications of the probabilistic method in combinatorics typically show the existence of structures whose features depend on other parameters of the input. 
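Below is a hedged Python rendering of the deterministic algorithm just described. Because the article's pseudocode has its symbols stripped, the estimator and the constants used here (lambda = ln(2n) and the cost scale 2*lambda*cost(x*)) are this sketch's own choices: each set is fixed to 0 or 1 so that the exact conditional expectation of "scaled cost plus expected number of uncovered elements" never increases, which yields a cover of cost at most 2*lambda times the LP optimum once all variables are decided.

```python
import math
import numpy as np
from scipy.optimize import linprog

universe = {1, 2, 3, 4, 5}
sets = {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4, 5}, "D": {1, 5}}
cost = {"A": 3.0, "B": 1.0, "C": 2.5, "D": 1.5}
names = sorted(sets)

# LP relaxation, as in the earlier sketch.
c = np.array([cost[s] for s in names])
A = np.array([[1.0 if e in sets[s] else 0.0 for s in names] for e in sorted(universe)])
x_star = linprog(c, A_ub=-A, b_ub=-np.ones(len(universe)), bounds=[(0.0, 1.0)] * len(names)).x
lp_cost = float(c @ x_star)

lam = math.log(2 * len(universe))                        # assumed scaling factor
p = {s: min(1.0, lam * xs) for s, xs in zip(names, x_star)}

def estimator(decided, undecided):
    """Exact conditional expectation of  cost/(2*lam*lp_cost) + number of uncovered elements,
    given the decisions made so far (undecided sets still taken independently with prob. p[s])."""
    exp_cost = sum(cost[s] for s, v in decided.items() if v == 1)
    exp_cost += sum(cost[s] * p[s] for s in undecided)
    exp_uncovered = 0.0
    for e in universe:
        if any(v == 1 and e in sets[s] for s, v in decided.items()):
            continue                                     # already covered by a committed set
        q = 1.0
        for s in undecided:
            if e in sets[s]:
                q *= 1.0 - p[s]                          # probability that e stays uncovered
        exp_uncovered += q
    return exp_cost / (2.0 * lam * lp_cost) + exp_uncovered

decided, undecided = {}, set(names)
for s in names:                                          # fix the variables one at a time
    undecided.discard(s)
    take = estimator({**decided, s: 1}, undecided)
    skip = estimator({**decided, s: 0}, undecided)
    decided[s] = 1 if take <= skip else 0                # keep the estimator from increasing

cover = [s for s, v in decided.items() if v == 1]
print(cover, sum(cost[s] for s in cover))
```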
For example, consider Turán's theorem, which can be stated as "any graph with n vertices of average degree d must have an independent set of size at least n/(d + 1)". (Turán's theorem admits a short probabilistic proof.) While there are graphs for which this bound is tight, there are also graphs which have independent sets much larger than n/(d + 1). Thus, the size of the independent set shown to exist by Turán's theorem in a graph may, in general, be much smaller than the maximum independent set for that graph. See also Method of conditional probabilities Randomized rounding without solving the linear program. References Further reading Algorithms Probabilistic arguments
Randomized rounding
[ "Mathematics" ]
2,937
[ "Algorithms", "Mathematical logic", "Applied mathematics" ]
23,917,974
https://en.wikipedia.org/wiki/SOX1
SOX1 is a gene that encodes a transcription factor with an HMG-box (high mobility group) DNA-binding domain and functions primarily in neurogenesis. SOX1, SOX2 and SOX3, members of the SOX gene family (specifically the SOXB1 group), encode transcription factors related to SRY, the testis-determining factor. SOX1 is important in the development of the central nervous system (neurogenesis), in particular the development of the eye, where it is functionally redundant with SOX3 and to a lesser degree SOX2, and in the maintenance of neural progenitor cell identity. SOX1 expression is restricted to the neuroectoderm by proliferating progenitor cells in the tetrapod embryo. The induction of this neuroectoderm occurs upon expression of the SOX1 gene. In ectodermal cells committed to a certain cell fate, SOX1 has been shown to be one of the earliest transcription factors expressed. In particular, SOX1 is first detected in the late head fold stage. Clinical significance Striatum development SOX1 is expressed particularly in the ventral striatum, and Sox1-deficient mice have altered striatum development, leading, for example, to epilepsy. Lens development SOX1 has shown clinical significance in its direct regulation of gamma-crystallin genes, which is vital for lens development in mice. Gamma-crystallins serve as a key structural component in lens fiber cells in both mammals and amphibians. Research has shown that direct deletion of the SOX1 gene in mice causes cataracts and microphthalmia. These mutant lenses fail to elongate due to the absence of gamma-crystallins. SOXB1 group redundant roles SOX1 is a member of the SOX gene family, in particular the SOXB1 group, which includes SOX1, SOX2, and SOX3. The SOX gene family encodes transcription factors. It is suggested that the three members of the SOXB1 group have redundant roles in the development of neural stem cells. This group of SOX genes regulates neural progenitor identity. Each of these proteins has unique neural markers. Overexpression of either SOX1, SOX2, or SOX3 increases neural progenitors and prevents neural differentiation. In non-mammalian vertebrates, loss of one SOXB1 protein results in minor phenotypic differences. This supports the claim that SOXB1 group proteins have redundant roles. See also Neurogenesis References Transcription factors Developmental genes and proteins
SOX1
[ "Chemistry", "Biology" ]
518
[ "Transcription factors", "Gene expression", "Signal transduction", "Developmental genes and proteins", "Induced stem cells" ]
23,920,468
https://en.wikipedia.org/wiki/List%20of%20software%20for%20nanostructures%20modeling
This is a list of notable computer programs that are used to model nanostructures at the levels of classical mechanics and quantum mechanics. Furiousatoms - a powerful software for molecular modelling and visualization Aionics.io - a powerful platform for nanoscale modelling Ascalaph Designer Atomistix ToolKit and Virtual NanoLab CoNTub CP2K CST Studio Suite Deneb – graphical user interface (GUI) for SIESTA, VASP, QE, etc., DFT calculation packages Enalos Cloud Platform – a cloud platform containing tools for the digital construction of energy minimized nanotubes and ellipsoidal nanoparticles and the calculation of their atomistic descriptors. Exabyte.io - a cloud-native integrated platform for nanoscale modeling, supporting simulations at multiple scales, including Density Functional Theory and Molecular Dynamics JCMsuite – a finite element analysis software for simulating optical properties of nanostructures LAMMPS – Open source molecular dynamics code MAPS - Graphical user interface to build complex systems (nanostructures, polymers, surfaces...), set up and analyze ab-initio (Quantum Espresso, VASP, Abinit, NWChem...) or classical (LAMMPS, Towhee) simulations Nanoengineer-1 – developed by company Nanorex, but the website doesn't work, may be unavailable nanoHUB allows simulating geometry, electronic properties and electrical transport phenomena in various nanostructures Nanotube Modeler NEMO 3-D – enables multi-million atom electronic structure simulations in empirical tight binding; open source; an educational version is on nanoHUB and Quantum Dot Lab nextnano allows simulating geometry, electronic properties and electrical transport phenomena in various nanostructures using continuum models (commercial software) Ninithi – carbon nanotube, graphene, and Fullerene modelling software Materials Design MedeA Materials Studio Materials Square - a cloud-based materials simulation web platform, provides GUI for Quantum Espresso, LAMMPS, and Open Calphad MBN Explorer and MBN Studio MD-kMC PARCAS – Open source molecular dynamics code SAMSON: interactive carbon nanotube modeling and simulation Scigress TubeASP Tubegen Wrapping See also References Molecular modelling software Carbon nanotubes Materials science
List of software for nanostructures modeling
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
485
[ "Molecular modelling software", "Applied and interdisciplinary physics", "Computational chemistry software", "Materials science", "Molecular modelling", "nan" ]