Dataset columns: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list).
28,016,564
https://en.wikipedia.org/wiki/Two-step%20floating%20catchment%20area%20method
The two-step floating catchment area (2SFCA) method is a method for combining a number of related types of information into a single, immediately meaningful index that allows comparisons to be made across different locations. Its importance lies in the improvement over considering the individual sources of information separately, where none on its own provides an adequate summary. Background The two-step floating catchment area (2SFCA) method is a special case of a gravity model of spatial interaction that was developed to measure spatial accessibility to primary care physicians. 2SFCA is based on the accessibility measure developed by Shen (1998), who used it to compare accessibility to jobs among workers residing in different locations and traveling by different transportation means, and more generally, to measure accessibility to spatially distributed opportunities that have capacity limitations (i.e., rival goods). 2SFCA was inspired by the spatial decomposition idea first proposed by Radke and Mu (2000). The 2SFCA method not only has most of the advantages of a gravity model, but is also intuitive to interpret, as it essentially uses a special form of physician-to-population ratio. It is easy to implement in a GIS environment. In essence, applying the accessibility measure formulated by Shen (1998), the 2SFCA method is an automated procedure for measuring spatial accessibility as a ratio of primary-care physicians to population, combining two steps: first, it assesses “physician availability” at the physicians' (supply) locations as the ratio of physicians to their surrounding population (i.e., the population within a threshold travel time of the physicians); second, it sums up those ratios (i.e., the physician availability derived in the first step) around each residential (demand) location, again within the same threshold travel time. The method has recently been enhanced by considering distance decay within catchments, giving the enhanced two-step floating catchment area (E2SFCA) method. Furthermore, capping certain services according to nearby population size can improve accuracy when analyzing across areas with different environments (e.g. rural and urban). The method has been applied to other related public health issues, such as access to healthy food retailers. See also Primary care service area Gravity model of migration Notes References Luo, W., Wang, F., 2003a. Spatial accessibility to primary care and physician shortage area designation: a case study in Illinois with GIS approaches. In: Skinner, R., Khan, O. (Eds.), Geographic Information Systems and Health Applications. Idea Group Publishing, Hershey, PA, pp. 260–278. Wang, F. 2006. Quantitative Methods and Applications in GIS. London: CRC Press. Geostatistics Accessibility Urban studies and planning terminology
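The two catchment steps described above lend themselves to a direct computation. Below is a minimal Python sketch of the procedure using hypothetical clinic and census-tract data with a precomputed travel-time table; the site names, populations and the 30-minute threshold are illustrative assumptions, not values from the method's literature.

```python
# Minimal two-step floating catchment area (2SFCA) sketch.
# Hypothetical data: travel times are assumed to be precomputed (minutes).

physicians = {"clinic_A": 3, "clinic_B": 5}                        # supply per site
population = {"tract_1": 4000, "tract_2": 2500, "tract_3": 6000}   # demand per tract

travel_time = {                                   # minutes between tracts and clinics
    ("tract_1", "clinic_A"): 10, ("tract_1", "clinic_B"): 40,
    ("tract_2", "clinic_A"): 25, ("tract_2", "clinic_B"): 15,
    ("tract_3", "clinic_A"): 50, ("tract_3", "clinic_B"): 20,
}
THRESHOLD = 30  # catchment size in minutes (illustrative)

# Step 1: physician-to-population ratio at each supply location,
# counting only the population within the travel-time threshold.
ratio = {}
for clinic, supply in physicians.items():
    pop_within = sum(pop for tract, pop in population.items()
                     if travel_time[(tract, clinic)] <= THRESHOLD)
    ratio[clinic] = supply / pop_within if pop_within else 0.0

# Step 2: accessibility at each demand location is the sum of the
# step-1 ratios of all supply locations within the same threshold.
accessibility = {}
for tract in population:
    accessibility[tract] = sum(r for clinic, r in ratio.items()
                               if travel_time[(tract, clinic)] <= THRESHOLD)

for tract, a in accessibility.items():
    print(f"{tract}: {a:.5f} physicians per person")
```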
Two-step floating catchment area method
[ "Engineering" ]
565
[ "Accessibility", "Design" ]
28,021,681
https://en.wikipedia.org/wiki/Absolute%20difference
The absolute difference of two real numbers x and y is given by |x − y|, the absolute value of their difference. It describes the distance on the real line between the points corresponding to x and y. It is a special case of the Lp distance for all 1 ≤ p ≤ ∞ and is the standard metric used for both the set of rational numbers and their completion, the set of real numbers. As with any metric, the metric properties hold: |x − y| ≥ 0, since absolute value is always non-negative; |x − y| = 0 if and only if x = y; |x − y| = |y − x| (symmetry or commutativity); |x − z| ≤ |x − y| + |y − z| (triangle inequality); in the case of the absolute difference, equality holds if and only if x ≤ y ≤ z or x ≥ y ≥ z. By contrast, simple subtraction is not non-negative or commutative, but it does obey the second and fourth properties above, since x − y = 0 if and only if x = y, and (x − y) + (y − z) = x − z. The absolute difference is used to define other quantities including the relative difference, the L1 norm used in taxicab geometry, and graceful labelings in graph theory. When it is desirable to avoid the absolute value function – for example because it is expensive to compute, or because its derivative is not continuous – it can sometimes be eliminated by the identity |x − y| < |z − w| if and only if (x − y)² < (z − w)². This follows since |x − y|² = (x − y)² and squaring is monotonic on the nonnegative reals. Additional properties In any subset S of the real numbers which has an infimum and a supremum, the absolute difference between any two numbers in S is less than or equal to the absolute difference between the infimum and the supremum of S. See also Absolute deviation References Real numbers Distance
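The squared-comparison identity mentioned above can be shown in a couple of lines of code. The sketch below is only illustrative; the function name and test values are made up.

```python
# Comparing absolute differences without calling abs(): |x - y| < |z - w|
# holds exactly when (x - y)**2 < (z - w)**2, since squaring is monotonic
# on the non-negative reals.
def closer_pair(x, y, z, w):
    """Return True if x and y are strictly closer together than z and w."""
    return (x - y) ** 2 < (z - w) ** 2

assert closer_pair(1.0, 2.5, -3.0, 4.0)   # |1 - 2.5| = 1.5 < 7 = |-3 - 4|
assert not closer_pair(5, -5, 0, 1)       # 10 is not less than 1
```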
Absolute difference
[ "Physics", "Mathematics" ]
307
[ "Physical quantities", "Distance", "Real numbers", "Quantity", "Mathematical objects", "Size", "Space", "Spacetime", "Wikipedia categories named after physical quantities", "Numbers" ]
28,022,292
https://en.wikipedia.org/wiki/Plane%20wave%20tube
An acoustic duct or plane wave tube is a test facility used in acoustics. Anechoic chambers are typically subject to a low frequency limit, governed by the length of the sound absorbing wedges employed to prevent reflections within the chamber. Test and measurement microphone calibration services are often required to be undertaken at frequencies where anechoic chambers cannot be used effectively. In this case, a plane wave acoustic duct with anechoic termination provides a practical alternative. Such a facility consists of a long duct, with a special low-frequency sound source (subwoofer) at one end and very large acoustically absorbent wedges at the other end. The duct cross section dimensions are made sufficiently small compared to the wavelength at the frequencies of interest that sound can be assumed to propagate down the duct as a plane wave with no reflections from the sides. Acoustic ducts are most commonly used by National Measurement Institutes that specialise in acoustical measurement (such as the National Physical Laboratory (United Kingdom)), who use them for measurement microphone calibration at low frequencies. Acoustics
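The requirement that the duct cross-section be small compared with the wavelength can be made concrete through the cutoff frequency of the first higher-order duct mode, below which only plane waves propagate. The sketch below uses the standard textbook cutoff expressions for rectangular and circular ducts; the duct sizes and the assumed speed of sound are illustrative and not taken from the article.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)

def rect_duct_cutoff(width_m: float) -> float:
    """First higher-order-mode cutoff for a rectangular duct of given larger
    transverse dimension; below this frequency only plane waves propagate."""
    return SPEED_OF_SOUND / (2.0 * width_m)

def circular_duct_cutoff(diameter_m: float) -> float:
    """First higher-order-mode cutoff for a circular duct (k*a = 1.841)."""
    return 1.841 * SPEED_OF_SOUND / (math.pi * diameter_m)

# Illustrative duct sizes (not taken from the article):
print(f"0.20 m square duct: plane waves below {rect_duct_cutoff(0.20):.0f} Hz")
print(f"0.15 m round duct:  plane waves below {circular_duct_cutoff(0.15):.0f} Hz")
```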
Plane wave tube
[ "Physics" ]
221
[ "Classical mechanics", "Acoustics" ]
28,022,494
https://en.wikipedia.org/wiki/Measurement%20microphone%20calibration
In order to take a scientific measurement with a microphone, its precise sensitivity must be known (in volts per pascal). Since this may change over the lifetime of the device, it is necessary to regularly calibrate measurement microphones. This service is offered by some microphone manufacturers and by independent testing laboratories. Microphone calibration by certified laboratories should ultimately be traceable to primary standards realised at a (National) Measurement Institute that is a signatory to the International Laboratory Accreditation Cooperation. These could include the National Physical Laboratory in the UK, PTB in Germany, NIST in the USA and the National Measurement Institute, Australia, where reciprocity calibration (see below) is the internationally recognised means of realising the primary standard. Laboratory standard microphones calibrated using this method are used in turn to calibrate other microphones using comparison calibration techniques (‘secondary calibration’), referencing the output of the ‘test’ microphone against that of the reference laboratory standard microphone. A microphone’s sensitivity varies with frequency (as well as with other factors such as environmental conditions) and is therefore normally recorded as several sensitivity values, each for a specific frequency band (see frequency spectrum). A microphone’s sensitivity can also depend on the nature of the sound field it is exposed to. For this reason, microphones are often calibrated in more than one sound field, for example a pressure field and a free field. Depending on their application, measurement microphones must be tested periodically (typically every year or every several months), and after any potentially damaging event, such as being dropped or exposed to sound levels beyond the device’s operational range. Reciprocity calibration Reciprocity calibration is currently the favoured primary standard for calibration of measurement microphones. The technique exploits the reciprocal nature of certain transduction mechanisms, such as the electrostatic transducer principle used in condenser measurement microphones. In order to carry out a reciprocity calibration, three uncalibrated microphones (1, 2 and 3) are used. Two of the microphones are placed facing each other with a well-known acoustical coupler between their diaphragms, allowing the acoustic transfer impedance to be easily modelled. One of the microphones is then driven by a current to act as the source of sound and the other responds to the pressure generated in the coupler, producing an output voltage; the ratio of this output voltage to the driving current is the electrical transfer impedance. Provided that the microphones are reciprocal in behaviour, which means the open-circuit sensitivity in V/Pa as a receiver is the same as the sensitivity in m³/s/A as a transmitter, it can be shown that the product of the two microphones' transmission factors and the acoustical transfer impedance equals the electrical transfer impedance. Having determined the product of the transmission factors for one pair of microphones, the process is repeated with the other two possible pair-wise combinations. The set of three measurements then allows the individual microphone transmission factors to be deduced by solving three simultaneous equations. The electrical transfer impedance is determined during the calibration procedure by measuring the current and voltage, and the acoustic transfer impedance depends on the acoustical coupler. Commonly used acoustical couplers are free field, diffuse field and compression chamber. 
For free-field conditions between the two microphones, the sound pressure in the far field can be calculated, and the acoustic transfer impedance follows from the distance between the microphones. For diffuse-field conditions it follows from the equivalent absorption area and the critical distance for reverberation. For compression-chamber conditions it follows from the air volume in the chamber. The technique provides a measurement of the sensitivity of a microphone without the need for comparison with another previously calibrated microphone, and is instead traceable to reference electrical quantities such as volts and ohms, as well as length, mass and time. Although a given calibrated microphone will often have been calibrated by other (secondary) methods, all can be traced (through a process of dissemination) back to a microphone calibrated using the reciprocity method at a National Measurement Institute. Reciprocity calibration is a specialist process, and because it forms the basis of the primary standard for sound pressure, many national measurement institutes have invested significant research efforts to refine the method and develop calibration facilities. A system is also commercially available from Brüel & Kjær. For airborne acoustics, the reciprocity technique is currently the most precise method available for microphone calibration (i.e. has the smallest uncertainty of measurement). Free-field reciprocity calibration (to give the free-field response, as opposed to the pressure response of the microphone) follows the same principles and roughly the same method as pressure reciprocity calibration, but in practice is much more difficult to implement. As such it is more usual to perform reciprocity calibration in an acoustical coupler, and then apply a correction if the microphone is to be used in free-field conditions; such corrections are standardised for laboratory standard microphones (IEC/TS 61094-7) and are generally available from the manufacturers of most of the common microphone types. Calibration using pistonphones and sound calibrators A pistonphone is an acoustical calibrator (sound source) that uses a closed coupling volume to generate a precise sound pressure for the calibration of measurement microphones. The principle relies on a piston mechanically driven to move at a specified cyclic rate, pushing on a fixed volume of air to which the microphone under test is coupled. The air is assumed to be compressed adiabatically and the sound pressure level in the chamber can, potentially, be calculated from internal physical dimensions of the device and the adiabatic gas law, which requires that PVγ is a constant, where P is the pressure in the chamber, V is the volume of the chamber, and γ is the ratio of the specific heat of air at constant pressure to its specific heat at constant volume. Pistonphones are highly dependent on ambient pressure (always requiring a correction to ambient pressure conditions) and are generally only made to reproduce low frequencies (for practical reasons), typically 250 Hz. However, pistonphones can be very precise, with good stability over time. Nevertheless, commercially available pistonphones are not calculable devices and must themselves be calibrated using a calibrated microphone if the results are to be traceable; though generally very stable over time, there will be small differences in the sound pressure level generated between different pistonphones. 
Since their output is also dependent on the volume of the chamber (coupling volume), differences in shape and load volume between different models of microphone will have an influence on the resulting SPL, requiring the pistonphone to be calibrated accordingly. Sound calibrators are used in an identical way to pistonphones, providing a known sound pressure field in a cavity to which a test microphone is coupled. Sound calibrators are different from pistonphones in that they work electronically and use a low-impedance (electrodynamic) source to yield a high degree of volume independent operation. Furthermore, modern devices often use a feedback mechanism to monitor and adjust the sound pressure level in the cavity so that it is constant regardless of the cavity / microphone size. Sound calibrators normally generate a 1 kHz sine tone; 1 kHz is chosen since the A-weighted SPL is equal to the linear level at 1 kHz. Sound calibrators should also be calibrated regularly at a nationally accredited calibration laboratory to ensure traceability. Sound calibrators tend to be less precise than pistonphones, but are (nominally) independent of internal cavity volume and ambient pressure. References IEC 61094-2, edition 2. (February 20, 2009) "Measurement Microphones, part 2". IEC Standard for Pressure Reciprocity Calibration of Measurement Microphones IEC 61094-5, edition 1. (October 16, 2001) "Measurement Microphones, part 5". IEC Standard for Comparison Calibration of Measurement Microphones Acoustics Sound technology
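The pistonphone passage above notes that the generated sound pressure can, in principle, be calculated from the device geometry and the adiabatic gas law. A hedged sketch of that calculation, linearised for small volume changes (Δp ≈ γ·P0·ΔV/V), is given below; the piston area, stroke and chamber volume are invented for illustration and do not describe any real calibrator.

```python
import math

GAMMA = 1.402          # ratio of specific heats for air
P0 = 101_325.0         # ambient static pressure, Pa (pistonphones need this correction)
P_REF = 20e-6          # reference pressure for SPL, Pa

def pistonphone_spl(piston_area_m2, peak_stroke_m, chamber_volume_m3):
    """Approximate SPL produced by a sinusoidally driven piston, assuming
    adiabatic compression and a small volume change relative to the chamber."""
    dv_peak = piston_area_m2 * peak_stroke_m            # peak volume change
    p_peak = GAMMA * P0 * dv_peak / chamber_volume_m3   # linearised P*V**gamma = const
    p_rms = p_peak / math.sqrt(2.0)                     # sinusoidal drive
    return 20.0 * math.log10(p_rms / P_REF)

# Purely illustrative dimensions (not those of any real calibrator):
spl = pistonphone_spl(piston_area_m2=2.0e-5, peak_stroke_m=2.5e-4,
                      chamber_volume_m3=2.0e-5)
print(f"Nominal SPL in the coupling chamber: {spl:.1f} dB re 20 uPa")
```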
Measurement microphone calibration
[ "Physics" ]
1,652
[ "Classical mechanics", "Acoustics" ]
26,155,440
https://en.wikipedia.org/wiki/Plate%20theory
In continuum mechanics, plate theories are mathematical descriptions of the mechanics of flat plates that draw on the theory of beams. Plates are defined as plane structural elements with a small thickness compared to the planar dimensions. The typical thickness to width ratio of a plate structure is less than 0.1. A plate theory takes advantage of this disparity in length scale to reduce the full three-dimensional solid mechanics problem to a two-dimensional problem. The aim of plate theory is to calculate the deformation and stresses in a plate subjected to loads. Of the numerous plate theories that have been developed since the late 19th century, two are widely accepted and used in engineering. These are the Kirchhoff–Love theory of plates (classical plate theory) The Uflyand-Mindlin theory of plates (first-order shear plate theory) Kirchhoff–Love theory for thin plates The Kirchhoff–Love theory is an extension of Euler–Bernoulli beam theory to thin plates. The theory was developed in 1888 by Love using assumptions proposed by Kirchhoff. It is assumed that a mid-surface plane can be used to represent the three-dimensional plate in two-dimensional form. The following kinematic assumptions are made in this theory: straight lines normal to the mid-surface remain straight after deformation straight lines normal to the mid-surface remain normal to the mid-surface after deformation the thickness of the plate does not change during a deformation. Displacement field The Kirchhoff hypothesis implies that the displacement field has the form where and are the Cartesian coordinates on the mid-surface of the undeformed plate, is the coordinate for the thickness direction, are the in-plane displacements of the mid-surface, and is the displacement of the mid-surface in the direction. If are the angles of rotation of the normal to the mid-surface, then in the Kirchhoff–Love theory Strain-displacement relations For the situation where the strains in the plate are infinitesimal and the rotations of the mid-surface normals are less than 10° the strains-displacement relations are Therefore, the only non-zero strains are in the in-plane directions. If the rotations of the normals to the mid-surface are in the range of 10° to 15°, the strain-displacement relations can be approximated using the von Kármán strains. Then the kinematic assumptions of Kirchhoff-Love theory lead to the following strain-displacement relations This theory is nonlinear because of the quadratic terms in the strain-displacement relations. Equilibrium equations The equilibrium equations for the plate can be derived from the principle of virtual work. For the situation where the strains and rotations of the plate are small, the equilibrium equations for an unloaded plate are given by where the stress resultants and stress moment resultants are defined as and the thickness of the plate is . The quantities are the stresses. If the plate is loaded by an external distributed load that is normal to the mid-surface and directed in the positive direction, the principle of virtual work then leads to the equilibrium equations For moderate rotations, the strain-displacement relations take the von Karman form and the equilibrium equations can be expressed as Boundary conditions The boundary conditions that are needed to solve the equilibrium equations of plate theory can be obtained from the boundary terms in the principle of virtual work. 
For small strains and small rotations, the boundary conditions are Note that the quantity is an effective shear force. Stress–strain relations The stress–strain relations for a linear elastic Kirchhoff plate are given by Since and do not appear in the equilibrium equations it is implicitly assumed that these quantities do not have any effect on the momentum balance and are neglected. It is more convenient to work with the stress and moment resultants that enter the equilibrium equations. These are related to the displacements by and The extensional stiffnesses are the quantities The bending stiffnesses (also called flexural rigidity) are the quantities Isotropic and homogeneous Kirchhoff plate For an isotropic and homogeneous plate, the stress–strain relations are The moments corresponding to these stresses are Pure bending The displacements and are zero under pure bending conditions. For an isotropic, homogeneous plate under pure bending the governing equation is In index notation, In direct tensor notation, the governing equation is Transverse loading For a transversely loaded plate without axial deformations, the governing equation has the form where for a plate with thickness . In index notation, and in direct notation In cylindrical coordinates , the governing equation is Orthotropic and homogeneous Kirchhoff plate For an orthotropic plate Therefore, and Transverse loading The governing equation of an orthotropic Kirchhoff plate loaded transversely by a distributed load per unit area is where Dynamics of thin Kirchhoff plates The dynamic theory of plates determines the propagation of waves in the plates, and the study of standing waves and vibration modes. Governing equations The governing equations for the dynamics of a Kirchhoff–Love plate are where, for a plate with density , and The figures below show some vibrational modes of a circular plate. Isotropic plates The governing equations simplify considerably for isotropic and homogeneous plates for which the in-plane deformations can be neglected and have the form where is the bending stiffness of the plate. For a uniform plate of thickness , In direct notation Uflyand-Mindlin theory for thick plates In the theory of thick plates, or theory of Yakov S. Uflyand (see, for details, Elishakoff's handbook), Raymond Mindlin and Eric Reissner, the normal to the mid-surface remains straight but not necessarily perpendicular to the mid-surface. If and designate the angles which the mid-surface makes with the axis then Then the Mindlin–Reissner hypothesis implies that Strain-displacement relations Depending on the amount of rotation of the plate normals two different approximations for the strains can be derived from the basic kinematic assumptions. For small strains and small rotations the strain-displacement relations for Mindlin–Reissner plates are The shear strain, and hence the shear stress, across the thickness of the plate is not neglected in this theory. However, the shear strain is constant across the thickness of the plate. This cannot be accurate since the shear stress is known to be parabolic even for simple plate geometries. To account for the inaccuracy in the shear strain, a shear correction factor () is applied so that the correct amount of internal energy is predicted by the theory. Then Equilibrium equations The equilibrium equations have slightly different forms depending on the amount of bending expected in the plate. 
For the situation where the strains and rotations of the plate are small the equilibrium equations for a Mindlin–Reissner plate are The resultant shear forces in the above equations are defined as Boundary conditions The boundary conditions are indicated by the boundary terms in the principle of virtual work. If the only external force is a vertical force on the top surface of the plate, the boundary conditions are Constitutive relations The stress–strain relations for a linear elastic Mindlin–Reissner plate are given by Since does not appear in the equilibrium equations it is implicitly assumed that it do not have any effect on the momentum balance and is neglected. This assumption is also called the plane stress assumption. The remaining stress–strain relations for an orthotropic material, in matrix form, can be written as Then, and For the shear terms The extensional stiffnesses are the quantities The bending stiffnesses are the quantities Isotropic and homogeneous Uflyand-Mindlin plates For uniformly thick, homogeneous, and isotropic plates, the stress–strain relations in the plane of the plate are where is the Young's modulus, is the Poisson's ratio, and are the in-plane strains. The through-the-thickness shear stresses and strains are related by where is the shear modulus. Constitutive relations The relations between the stress resultants and the generalized displacements for an isotropic Mindlin–Reissner plate are: and The bending rigidity is defined as the quantity For a plate of thickness , the bending rigidity has the form where Governing equations If we ignore the in-plane extension of the plate, the governing equations are In terms of the generalized deformations , the three governing equations are The boundary conditions along the edges of a rectangular plate are Reissner–Stein static theory for isotropic cantilever plates In general, exact solutions for cantilever plates using plate theory are quite involved and few exact solutions can be found in the literature. Reissner and Stein provide a simplified theory for cantilever plates that is an improvement over older theories such as Saint-Venant plate theory. 
The Reissner–Stein theory assumes a transverse displacement field of the form The governing equations for the plate then reduce to two coupled ordinary differential equations: where At , since the beam is clamped, the boundary conditions are The boundary conditions at are where Derivation of Reissner–Stein cantilever plate equations: The strain energy of bending of a thin rectangular plate of uniform thickness is given by where is the transverse displacement, is the length, is the width, is the Poisson's ratio, is the Young's modulus, and The potential energy of transverse loads (per unit length) is The potential energy of in-plane loads (per unit width) is The potential energy of tip forces (per unit width), and bending moments and (per unit width) is A balance of energy requires that the total energy is With the Reissner–Stein assumption for the displacement, we have and Taking the first variation of with respect to and setting it to zero gives us the Euler equations and where Since the beam is clamped at , we have The boundary conditions at can be found by integration by parts: where See also Bending of plates Vibration of plates Infinitesimal strain theory Membrane theory of shells Finite strain theory Stress (mechanics) Stress resultants Linear elasticity Bending Föppl–von Kármán equations Euler–Bernoulli beam equation Timoshenko beam theory References Continuum mechanics
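With its symbols restored, the transverse-loading equation for an isotropic Kirchhoff plate quoted earlier in this article reads D∇⁴w = q, with bending rigidity D = Eh³/12(1−ν²). As an illustration of how that governing equation is used, the sketch below evaluates the classical Navier double-sine series for a simply supported rectangular plate under uniform load, a standard closed-form solution of this equation; the material and load values are arbitrary and not taken from the article.

```python
import math

def kirchhoff_plate_deflection(x, y, a, b, q0, E, h, nu, terms=50):
    """Deflection w(x, y) of a simply supported rectangular Kirchhoff plate
    (side lengths a, b) under a uniform transverse load q0, using the Navier
    double sine series solution of D * biharmonic(w) = q0."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))   # bending rigidity, as defined in the text
    w = 0.0
    for m in range(1, terms, 2):            # only odd terms survive for a uniform load
        for n in range(1, terms, 2):
            num = math.sin(m * math.pi * x / a) * math.sin(n * math.pi * y / b)
            den = m * n * ((m / a) ** 2 + (n / b) ** 2) ** 2
            w += num / den
    return 16.0 * q0 / (math.pi**6 * D) * w

# Illustrative check: centre deflection of a 1 m square steel plate, 10 mm thick,
# under 1 kPa; the result should be close to 0.00406 * q * a**4 / D.
print(kirchhoff_plate_deflection(0.5, 0.5, 1.0, 1.0, 1e3, 210e9, 0.01, 0.3))
```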
Plate theory
[ "Physics" ]
2,106
[ "Classical mechanics", "Continuum mechanics" ]
26,157,481
https://en.wikipedia.org/wiki/Acoustic%20dispersion
In acoustics, acoustic dispersion is the phenomenon of a sound wave separating into its component frequencies as it passes through a material. The phase velocity of the sound wave is viewed as a function of frequency. Hence, separation of component frequencies is measured by the rate of change in phase velocities as the radiated waves pass through a given medium. Broadband transmission method A widely used technique for determining acoustic dispersion is a broadband transmission method. This technique was originally introduced in 1978 and has been employed to study the dispersion properties of metal (1978), epoxy resin (1986), paper materials (1993), and ultra-sound contrast agent (1998). In 1990 and 1993 this method confirmed the Kramers–Kronig relation for acoustic waves. Application of this method requires the measurements of a reference velocity to obtain values for the acoustic dispersion. This is accomplished by determining (usually) the speed of the sound in water, the thickness of the specimen, and the phase spectrum of each of the two transmitted ultrasound pulses. Dispersive attenuation Acoustic attenuation See also Dispersion (optics) References Acoustics Metamaterials
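The broadband transmission method described above reduces to a phase-spectroscopy calculation: the specimen's phase velocity at each frequency follows from the water sound speed, the specimen thickness, and the difference between the unwrapped phase spectra of the pulses measured with and without the specimen. A sketch under those assumptions is given below; the substitution relation 1/c(f) = 1/c_w − Δφ/(2πf·d) is a common form, but sign conventions differ between implementations, and the data here are synthetic.

```python
import numpy as np

def phase_velocity(freq_hz, phase_sample, phase_reference,
                   thickness_m, c_water=1480.0):
    """Phase velocity of a specimen from a through-transmission substitution
    measurement: unwrapped phase spectra with and without the specimen in the
    water path, the specimen thickness, and the speed of sound in water.
    The sign of delta_phi may need flipping for a given FFT convention."""
    delta_phi = np.unwrap(phase_sample) - np.unwrap(phase_reference)
    slowness = 1.0 / c_water - delta_phi / (2.0 * np.pi * freq_hz * thickness_m)
    return 1.0 / slowness

# Synthetic example: a 5 mm specimen whose true velocity rises linearly
# from 2600 to 2700 m/s across the band (illustrative only).
f = np.linspace(1e6, 10e6, 200)
c_true = 2600.0 + 100.0 * (f - f[0]) / (f[-1] - f[0])
d = 5e-3
phi_ref = np.zeros_like(f)
phi_sam = -2.0 * np.pi * f * d * (1.0 / c_true - 1.0 / 1480.0)  # one common convention
print(phase_velocity(f, phi_sam, phi_ref, d)[:3])  # recovers ~2600 m/s at the low end
```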
Acoustic dispersion
[ "Physics", "Materials_science", "Engineering" ]
240
[ "Materials science stubs", "Metamaterials", "Classical mechanics", "Acoustics", "Materials science" ]
26,158,367
https://en.wikipedia.org/wiki/Hyperbaric%20stretcher
A hyperbaric stretcher is a lightweight pressure vessel for human occupancy (PVHO) designed to accommodate one person undergoing initial hyperbaric treatment during or while awaiting transport or transfer to a treatment chamber. Originally developed as advanced diving equipment, it has since been used for other medical conditions such as altitude sickness, carbon monoxide poisoning and smoke inhalation, air and gas embolism and is viewed as potentially important equipment for the early treatment of blast related injuries within the combat zone with the anticipated benefit that traumatic brain injury may not develop in the ensuing months. There is currently only one unit approved under the US National Standard - ASME PVHO-1 (2007) and Case 12. This unit, known as the SOS Hyperlite or by the US military as the EEHS (Emergency Evacuation Hyperbaric Stretcher) is, or has been, in service with the US Army, Navy, Air Force, Coast Guard, NOAA and NASA as well as being supplied to other Government Agencies. The EEHS has a length of 2.26 metres (89 inches) and a diameter of 59 cm. (23.5 inches) and operates at a pressure of up to 2.3 bar (33 psi) above ambient pressure with a built-in safety factor of over 6:1. It is pressurised with air and the occupant breathes oxygen or air through a demand mask (BIBS) during treatment. The Hyperlite also complies with Lloyds Register and ISO 9001/2000 requirements, and is CE marked. It has applications in military, commercial, scientific, and recreational diving, and in hyperbaric medicine. It is made of flexible material and when the internal pressure matches the external pressure, it is collapsible, which can make transfer under pressure possible with relatively small hyperbaric chambers. A hyperbaric stretcher must be portable, and should be compatible with transfer under pressure to and from full size hyperbaric chambers. This can be achieved by making the unit small enough to be loaded inside the hyperbaric facility for transfer under pressure, or by having a mating flange compatible with the larger chamber, by way of an adapter if necessary. Some types of treatment may be done in the hyperbaric stretcher, provided the patient is sufficiently fit for unattended recompression. References External links SOS Hyperlite Hyperbaric medicine Pressure vessels
Hyperbaric stretcher
[ "Physics", "Chemistry", "Engineering" ]
496
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Pressure vessels" ]
44,847,815
https://en.wikipedia.org/wiki/Newlight%20Technologies
Newlight Technologies is a company based in Huntington Beach, California, known for sequestering carbon in materials and products. The company is headquartered and manufactures in Huntington Beach, CA, and employs over 200 people. History and corporate affairs As of October 2020, Newlight Technologies has one facility located in Huntington Beach, California, which serves as its headquarters, R&D, operations, and manufacturing facility. Technology Currently, Newlight captures methane from a dairy farm in California. The methane is transported to a bioreactor. From there, the methane is mixed with air and interacts with enzymes to form a polymer trademarked as AirCarbon. According to Popular Science, the material performs similarly to most oil-based plastics but costs less to produce. AirCarbon has already been contracted for use in desk chairs, computer packaging, and smartphone cases. Newlight Technologies has also commercialized its own lines of carbon-negative eyewear and foodware, formerly known as Covalent and Restore. Recognition In 2014, AirCarbon was named Popular Science's Innovation of the Year, and in 2016, AirCarbon was awarded the Presidential Green Chemistry Challenge Award by the U.S. EPA. References Carbon capture and storage Technology companies based in Greater Los Angeles Companies based in Irvine, California Renewable resource companies established in 2003 Technology companies established in 2003 2003 establishments in California Methane American companies established in 2003
Newlight Technologies
[ "Chemistry", "Engineering" ]
278
[ "Greenhouse gases", "Geoengineering", "Carbon capture and storage", "Methane" ]
44,848,466
https://en.wikipedia.org/wiki/Translational%20Research%20%28journal%29
Translational Research: The Journal of Laboratory and Clinical Medicine is a monthly peer-reviewed medical journal covering translational research. It was established in 1915 as The Journal of Laboratory and Clinical Medicine, obtaining its current title in 2006. Jeffrey Laurence (Weill Cornell Medical College) has been editor-in-chief since 2006. He was preceded by Dale Hammerschmidt. It is the official journal of the Central Society for Clinical and Translational Research. It is published by Mosby. Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2014 impact factor of 5.03, ranking it second out of 30 journals in the category "Medical Laboratory Technology", 17th out of 153 journals in the category "Medicine, General & Internal", and 17th out of 123 journals in the category "Medicine, Research & Experimental". References Further reading External links Central Society for Clinical and Translational Research Academic journals established in 1915 Translational medicine General medical journals English-language journals Mosby academic journals
Translational Research (journal)
[ "Biology" ]
210
[ "Translational medicine" ]
44,849,610
https://en.wikipedia.org/wiki/Catellani%20reaction
The Catellani reaction was discovered by Marta Catellani (Università degli Studi di Parma, Italy) and co-workers in 1997. The reaction uses aryl iodides to perform bi- or tri-functionalization, including C-H functionalization of the unsubstituted ortho position(s), followed by a terminating cross-coupling reaction at the ipso position. This cross-coupling cascade reaction depends on the ortho-directing transient mediator, norbornene. Reaction mechanism The Catellani reaction is catalyzed by palladium and norbornene, although in most cases superstoichiometric amounts of norbornene are used to allow the reaction to proceed at a reasonable rate. The generally accepted reaction mechanism, as outlined below, is intricate and believed to proceed via a series of Pd(0), Pd(II), and Pd(IV) intermediates, although an alternative bimetallic mechanism that avoids the formation of Pd(IV) has also been suggested. Initially, Pd(0) oxidatively adds into the C–X bond of the aryl halide. Subsequently, the arylpalladium(II) species undergoes carbopalladation with the norbornene. The structure of the norbornylpalladium intermediate does not allow for β-hydride elimination at either of the β-positions, due to Bredt's rule for the bridgehead β-hydrogen and the trans configuration between palladium and the other β-hydrogen. Thereafter, the Pd(II) species undergoes electrophilic cyclopalladation at the ortho position of the aryl group. Subsequently, the palladacyclic intermediate undergoes a second oxidative addition with the alkyl halide coupling partner to form a Pd(IV) intermediate, which undergoes reductive elimination to forge the first C–C bond of the product. After β-carbon elimination of norbornene, the resultant Pd(II) species then undergoes a second C–C bond-forming step via a Heck reaction or cross-coupling with an organoboron reagent to afford the final organic product and close the catalytic cycle. Steps of the Catellani reaction: Oxidative addition Carbopalladation of norbornene Palladacycle formation Oxidative addition to palladacycle Reductive elimination from palladacycle Norbornene extrusion Termination via Heck reaction, Suzuki reaction, etc. Ortho and ipso cross-coupling partners The Catellani reaction facilitates a variety of C—C and C—N bond-forming reactions at the ortho position. These include alkylation from alkyl halides, arylation from aryl bromides, amination from benzyloxyamines, and acylation from anhydrides. Likewise, terminating ipso coupling partners include olefins for Heck-type termination, boronic esters for Suzuki-type reactions, bis(pinacolato)diboron for borylation, i-PrOH for protonation, and alkynyl carboxylic acids for decarboxylative alkynylation. Uses With tethered cross-coupling partners, Lautens, Malacria, and Catellani have used this reaction to construct a variety of fused ring systems since 2000. The Catellani reaction has been used as a key step in the total synthesis of (+)-linoxepin, rhazinal, aspidospermidine, and (±)-goniomitine. References External links Total Synthesis of (+)-Linoxepin by Utilizing the Catellani Reaction Chemical reactions Name reactions
Catellani reaction
[ "Chemistry" ]
770
[ "Name reactions", "nan" ]
44,849,824
https://en.wikipedia.org/wiki/Atiyah%E2%80%93Bott%20formula
In algebraic geometry, the Atiyah–Bott formula says the cohomology ring of the moduli stack of principal bundles is a free graded-commutative algebra on certain homogeneous generators. The original work of Michael Atiyah and Raoul Bott concerned the integral cohomology ring of this moduli stack. See also Borel's theorem, which says that the cohomology ring of a classifying stack is a polynomial ring. Notes References Theorems in algebraic geometry
Atiyah–Bott formula
[ "Mathematics" ]
96
[ "Theorems in algebraic geometry", "Topology stubs", "Topology", "Theorems in geometry" ]
44,852,866
https://en.wikipedia.org/wiki/Performance%20gap
A performance gap is a disparity found between the energy use and carbon emissions predicted at the design stage of a building and the energy use of that building in operation. Research in the UK suggests that actual carbon emissions from new homes can be 2.5 times the design estimates, on average. For non-domestic buildings, the gap is even higher: actual carbon emissions can be as much as 3.8 times the design estimates, on average. There are established tools for reducing the performance gap by reviewing project objectives, outline and detailed design drawings, design calculations, implementation of designs on site, and post-occupancy evaluation. NEF's Assured Performance Process (APP) is one such tool, which is being used extensively on different sites that form part of East Hampshire's Whitehill and Bordon new town development, one of the largest regeneration projects anywhere in the UK, with high ambitions for both environmental performance and health. Classification of factors that contribute to the performance gap The performance gap is produced mainly by uncertainties. Uncertainties are found in any “real-world” system, and buildings are no exception. As early as 1978, Gero and Dudnik wrote a paper presenting a methodology to solve the problem of designing subsystems (HVAC) subjected to uncertain demands. Since then, other authors have shown an interest in the uncertainties that are present in building design; Ramallo-González classified uncertainties in building design/construction into three different groups: Environmental. Uncertainty in weather prediction under a changing climate, and uncertain weather data due to the use of synthetic weather data files: (1) use of synthetic years that do not represent a real year, and (2) use of a synthetic year that has not been generated from data recorded at the exact location of the project but at the closest weather station. Workmanship and quality of building elements. Differences between the design and the real building: conductivity of thermal bridges, conductivity of insulation, value of infiltration, or U-values of walls and windows. There may be optimism bias by designers, where expectations about what is possible on site are unrealistic, and/or buildability fails to get adequate attention during design. Behavioural. All other parameters linked to human behaviour, e.g. door and window opening, heating regimes, use of appliances, occupancy patterns or cooking habits. Type 1: Environmental uncertainties Type 1 uncertainties have been divided here into two main groups: one concerning the uncertainty due to climate change, and the other concerning uncertainties due to the use of synthetic weather data files. Concerning the uncertainties due to climate change: buildings have long life spans; for example, in England and Wales, around 40% of the office blocks existing in 2004 were built before 1940 (30% if considered by floor area), and 38.9% of English dwellings in 2007 were built before 1944. This long life span makes buildings likely to operate in climates that might change due to global warming. De Wilde and Coley showed how important it is to design buildings that take climate change into consideration and that are able to perform well under future weather conditions. Concerning the uncertainties due to the use of synthetic weather data files: Wang et al. showed the impact that uncertainties in weather data (among others) may have on energy demand calculations. 
The deviation in calculated energy use due to variability in the weather data were found to be different in different locations from a range of (-0.5% – 3%) in San Francisco to a range of (-4% to 6%) in Washington D.C. The ranges were calculated using TMY as the reference. These deviations on the demand were smaller than the ones due to operational parameters. For those, the ranges were (-29% – 79%) for San Francisco and (-28% – 57%) for Washington D.C. The operation parameters were those linked with occupants’ behaviour. The conclusion of this paper is that occupants will have a larger impact in energy calculations than the variability between synthetically generated weather data files. The spatial resolution of weather data files was the concern covered by Eames et al. Eames showed how a low spatial resolution of weather data files can be the cause of disparities of up to 40% in the heating demand. Type 2: Workmanship In the work of Pettersen, uncertainties of group 2 (workmanship and quality of elements) and group 3 (behaviour) of the previous grouping were considered (Pettersen, 1994). This work shows how important occupants’ behaviour is on the calculation of the energy demand of a building. Pettersen showed that the total energy use follows a normal distribution with a standard deviation of around 7.6% when the uncertainties due to occupants are considered, and of around 4.0% when considering those generated by the properties of the building elements. A large study was carried out by Leeds Metropolitan at Stamford Brook. This project saw 700 dwellings built to high efficiency standards. The results of this project show a significant gap between the energy used expected before construction and the actual energy use once the house is occupied. The workmanship is analysed in this work. The authors emphasise the importance of thermal bridges that were not considered for the calculations, and how those originated by the internal partitions that separate dwellings have the largest impact on the final energy use. The dwellings that were monitored in use in this study show a large difference between the real energy use and that estimated using SAP, with one of them giving +176% of the expected value when in use. Hopfe has published several papers concerning uncertainties in building design that cover workmanship. A more recent publication at the time of writing looks into uncertainties of group 2 and 3. In this work the uncertainties are defined as normal distributions. The random parameters are sampled to generate 200 tests that are sent to the simulator (VA114), the results from which will be analysed to check the uncertainties with the largest impact on the energy calculations. This work showed that the uncertainty in the value used for infiltration is the factor that is likely to have the largest influence on cooling and heating demands. Another study performed by de Wilde and Wei Tian, compared the impact of most of the uncertainties affecting building energy calculations taking into account climate change. De Wilde and Tian used a two dimensional Monte Carlo Analysis to generate a database obtained with 7280 runs of a building simulator. A sensitivity analysis was applied to this database to obtain the most significant factors on the variability of the energy demand calculations. Standardised Regression Coefficients and Standardised Rank Regression Coefficients were used to compare the impacts of the uncertainties. 
De Wilde and Tian agreed with Hopfe on the impact of infiltration uncertainties on energy calculations, but also introduced other factors, including uncertainties in weather, the U-value of windows, and other variables related to occupants’ behaviour (equipment and lighting). Their paper compares many of the uncertainties using a good-sized database, providing a realistic comparison for the scope of the sampling of the uncertainties. Type 3: Occupants The work of Schnieders and Hermelink showed a substantial variability in the energy demands of low-energy buildings designed under the same specification (Passivhaus). Although the Passivhaus standard requires very controlled, high-quality workmanship, large differences have been seen in the energy demand of different houses. Blight and Coley showed that this variability can be attributed to variance in occupant behaviour (the use of windows and doors was included in this work). The work of Blight and Coley proves two things: (1) occupants have a substantial influence on energy use; and (2) the model they used to generate occupants’ behaviour is accurate for the creation of behavioural patterns of inhabitants. The method used in that paper to generate accurate profiles of occupants’ behaviour was the one developed by Richardson et al. The method was developed using the Time-Use Survey (TUS) of the United Kingdom as a reference for the real behaviour of occupants; this database was compiled by recording the activity of more than 6000 occupants in 24-hour diaries with a 10-minute resolution. Richardson’s paper shows how the tool is able to generate behavioural patterns that correlate with the real data obtained from the TUS. The availability of this tool allows scientists to model the uncertainty of occupants’ behaviour as a set of behavioural patterns that have been proven to correlate with real occupants’ behaviour. Works have also been published that take occupancy into account in optimisation, using so-called robust optimisation. External links http://www.zerocarbonhub.org/current-projects/performance-gap http://www.building.co.uk/zero-carbon-hub-report-performance-gap-in-new-homes/5069589.article https://web.archive.org/web/20141223075403/http://greenconstructionboard.org/index.php/resources/performance-gap https://www.gov.uk/government/publications/low-carbon-buildings-best-practices-and-what-to-avoid https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/497758/Domestic_Building_Performance_full_report_2016.pdf http://www.assuredperformanceprocess.org.uk/ http://whitehillbordon.com/ References Building engineering Energy consumption
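The studies surveyed in this article propagate input uncertainties (infiltration, U-values, occupant behaviour) through building energy models by Monte Carlo sampling. The sketch below illustrates the idea on a deliberately crude steady-state heating-demand model; all distributions, areas and degree-day values are invented for illustration and are not taken from any of the cited papers.

```python
import random

random.seed(1)
N = 10_000                    # Monte Carlo samples
HDD = 2_800.0                 # heating degree-days (illustrative UK-like value)
WALL_AREA, WINDOW_AREA, VOLUME = 120.0, 25.0, 250.0   # m2, m2, m3 (made up)

demands_kwh = []
for _ in range(N):
    # "Workmanship" uncertainties: as-built U-values and infiltration differ
    # from the design assumptions (distributions are illustrative).
    u_wall = random.gauss(0.25, 0.05)       # W/m2K
    u_window = random.gauss(1.4, 0.15)      # W/m2K
    ach = random.lognormvariate(-0.7, 0.3)  # air changes per hour

    # "Behavioural" uncertainty: occupants heat to different set-points,
    # crudely folded in as a multiplier on the degree-days.
    behaviour = random.uniform(0.8, 1.3)

    heat_loss = u_wall * WALL_AREA + u_window * WINDOW_AREA + 0.33 * ach * VOLUME  # W/K
    demands_kwh.append(heat_loss * 24.0 * HDD * behaviour / 1000.0)

demands_kwh.sort()
print(f"median  : {demands_kwh[N // 2]:.0f} kWh/yr")
print(f"5th-95th: {demands_kwh[N // 20]:.0f} - {demands_kwh[-N // 20]:.0f} kWh/yr")
```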
Performance gap
[ "Engineering" ]
2,012
[ "Building engineering", "Civil engineering", "Architecture" ]
44,854,262
https://en.wikipedia.org/wiki/PIDA%20%28polymer%29
PIDA, or poly(diiododiacetylene), is an organic polymer that has a polydiacetylene backbone. It is one of the simplest polydiacetylenes that has been synthesized, having only iodine atoms as side chains. It is created by 1,4 topochemical polymerization of diiodobutadiyne. It has many implications in the field of polymer chemistry as it can be viewed as a precursor to other polydiacetylenes by replacing iodine atoms with other side chains using organic synthesis, or as an iodinated form of the carbon allotrope carbyne. Structure The backbone of PIDA is highly conjugated and allows for the formation of an extended pi system along the length of the polymer. This property of PIDA allows it to transport electricity and act as a molecular wire or an organic semiconductor. Considering PIDA's backbone and the fact that Iodine atoms can easily undergo elimination, it is conceivable that PIDA can be subjected to full reductive deiodination in the presence of a Lewis base, such as pyrrolidine to yield carbyne. Synthesis PIDA is synthesized from diiodobutadiyne via 1,4 topochemical polymerization. In order to meet the geometric requirements for polymerization, a host–guest strategy is used by combining a host molecule and diiodobutadiyne in solution and allowing co-crystallization to occur. This can be utilized because hosts that are most commonly used are able to bond to the diyne monomer by halogen bonding from the lewis acidic iodine atom to a lewis basic nitrogen of the host (usually a nitrile or pyridine). In order to give a proper repeat distance to the monomers (5 Å), the hosts also contain oxalamide groups that create a hydrogen bonding network throughout the crystal. In most instances, polymerization is spontaneous upon crystallization or exposure to UV radiation/pressure. Reactions PIDA Can undergo carbonization at high temperatures near 900 °C and reductive dehalogenation carbonization when exposed to pyrrolidine at room temperature. Attempts have been made to replace iodine side groups with other functional groups. There are also attempts being made at making other halogen analogs of PIDA. See also Crystal engineering References Conductive polymers Organic polymers Organoiodides Alkyne derivatives
PIDA (polymer)
[ "Chemistry" ]
500
[ "Organic compounds", "Organic polymers", "Molecular electronics", "Conductive polymers" ]
44,856,539
https://en.wikipedia.org/wiki/Non%20linear%20piezoelectric%20effects%20in%20polar%20semiconductors
Non linear piezoelectric effects in polar semiconductors are the manifestation of the fact that the strain-induced piezoelectric polarization depends not just on the products of the first-order piezoelectric coefficients with the strain tensor components but also on the products of the second-order (or higher) piezoelectric coefficients with products of the strain tensor components. The idea was put forward experimentally for zincblende CdTe heterostructures in 1992. It was confirmed in 1996 by the application of a hydrostatic pressure to the same heterostructures, and found to agree with the results of an ab initio approach, as well as with a simple calculation using what is currently known as Harrison's model. The idea was then extended to all commonly used wurtzite and zincblende semiconductors. Given the difficulty of finding direct experimental evidence for the existence of these effects, there are different schools of thought on how one can reliably calculate all the piezoelectric coefficients. On the other hand, there is widespread agreement on the fact that non linear effects are rather large and comparable to the linear (first-order) terms. Indirect experimental evidence of the existence of these effects has also been reported in the literature in relation to GaN and InN semiconductor optoelectronic devices. History Non linear piezoelectric effects in polar semiconductors were first reported in 1996 by R. André et al. in zincblende cadmium telluride, and later on by G. Bester et al. in 2006 and by M. A. Migliorato et al., in relation to zincblende GaAs and InAs. Different methods were used, and while the influence of second- (and third-) order piezoelectric coefficients was generally recognized as being comparable to first order, fully ab initio approaches and simple approaches using Harrison's model appeared to predict slightly different results, particularly for the magnitude of the first-order coefficients. Formalism While first-order piezoelectric coefficients are of the form eij, the second- and third-order coefficients are higher-rank tensors, expressed as eijk and eijkl. The piezoelectric polarization would then be expressed in terms of products of the piezoelectric coefficients and strain components, products of two strain components, and products of three strain components for the first-, second-, and third-order approximations respectively. Available Non Linear Piezoelectric Coefficients Many more articles were published on the subject. 
Non linear piezoelectric coefficients are now available for many different semiconductor materials and crystal structures: zincblende CdTe, experiments (under pseudomorphic strain and hydrostatic pressure ), and theory (ab initio and using Harrison's Model ) zincblende GaAs and InAs, under pseudomorphic strain, using Harrison's Model zincblende GaAs and InAs, for any combination of diagonal strain components, using Harrison's Model All common III-V semiconductors in the zincblende structure using ab initio GaN, AlN, InN in the Wurtzite crystal structure, using Harrison's Model GaN, AlN, InN in the Wurtzite crystal structure, using ab initio ZnO in the Wurtzite crystal structure, using Harrison's Model Wurtzite crystal structure GaN, InN, AlN and ZnO, using ab initio Wurtzite crystal structure GaAs, InAs, GaP and InP, using Harrison's Model Non linear piezoelectricity in devices Particularly for III-N semiconductors, the influence of non linear piezoelectricity was discussed in the context of light-emitting diodes: Influence of external pressure Increased efficiency See also Piezotronics Piezoelectricity Light-emitting diode Wurtzite crystal structure References Nanoelectronics Semiconductor devices Semiconductors
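The formalism section above expresses the polarization as first-order coefficients multiplying strain components plus second-order coefficients multiplying products of two strain components. The sketch below evaluates such an expansion for the diagonal strain components only; every coefficient value in it is a placeholder for illustration, not measured or published data for any material.

```python
import numpy as np

def piezo_polarization(strain, e_first, e_second):
    """One polarization component including first- and second-order terms:
    P = sum_j e1[j]*eps[j] + 0.5 * sum_jk e2[j, k]*eps[j]*eps[k].
    The coefficient values passed in here are placeholders, not real data,
    and the 1/2 prefactor is one common convention for the quadratic term."""
    strain = np.asarray(strain, dtype=float)
    linear = e_first @ strain
    quadratic = 0.5 * strain @ e_second @ strain
    return linear + quadratic

# Diagonal strain components (exx, eyy, ezz) for an illustrative biaxial strain:
eps = np.array([0.01, 0.01, -0.008])

e1 = np.array([0.1, 0.1, -0.6])           # C/m^2, placeholder first-order coefficients
e2 = np.array([[ 2.0,  1.0, -1.5],        # C/m^2, placeholder second-order coefficients
               [ 1.0,  2.0, -1.5],
               [-1.5, -1.5,  4.0]])

print(f"P (linear + nonlinear) = {piezo_polarization(eps, e1, e2):.5f} C/m^2")
```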
Non linear piezoelectric effects in polar semiconductors
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
798
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Nanoelectronics", "Nanotechnology", "Solid state engineering", "Matter" ]
44,857,653
https://en.wikipedia.org/wiki/International%20Conference%20on%20Composite%20Materials
International Conference on Composite Materials (ICCM) is an international scientific conference devoted to all aspects of composite materials. The list of topics includes manufacturing, mechanics, fracture and damage, fatigue, design of components and structures, impact behavior, experimental methods. History The conference was initiated by the Metallurgical Society of AIME. The first conference was held in 1975 simultaneously in Geneva and Boston and was rather small. The second conference, ICCM-2, held in 1978 in Toronto, Canada, gathered around 300 delegates. Official welcome was given by Frank Thurston, Director of the National Aeronautical Establishment, Ottawa; the key address was given by Alan Lovelace from NASA. List of events References External links ICCM-23 ICCM-20 ICCM-19 Academic conferences
International Conference on Composite Materials
[ "Physics" ]
157
[ "Materials", "Composite materials", "Matter" ]
44,858,871
https://en.wikipedia.org/wiki/Clicked%20peptide%20polymer
Clicked peptide polymers are poly-triazole-poly-peptide hybrid polymers. They are made of repeating units of a 1,2,3-triazole and an oligopeptide. They can be visualized as an oligopeptide that is flanked at both the C-terminus and N-terminus by a triazole molecule. Synthesis Clicked peptide polymers are prepared by the azide-alkyne Huisgen cycloaddition, also called the click reaction, which is commonly used in bioconjugation reactions to link molecules together with a stable triazole bridge. Peptide-based polymers are produced from a cycloaddition variant of step-growth polymerization. The monomers used in this polymerization are oligopeptides with terminal azide and terminal alkyne groups. Monomer preparation To prepare an oligopeptide with both a terminal azide and a terminal alkyne, two modifications must be carried out. The first is the amidation of the oligopeptide's C-terminus by propargylamine. This would be done with all other reactive groups protected and with the C-terminus activated. The second modification required is the addition of an azide to the N-terminus. Unlike the addition of the alkyne, this can be done on the whole peptide, or solely on the N-terminal residue, which is then added to the rest of the oligopeptide by solid-phase peptide synthesis. The addition of the azide occurs by Cu(II)-catalyzed diazo transfer. Degradation The molecules linked to one another by the azide-alkyne Huisgen cycloaddition are connected by an aromatic triazole, which is extremely stable and can withstand high temperatures and extremes of pH. The oligopeptide units of a clicked peptide polymer are a different story. The triazole bridges do not confer any stability to the oligopeptide. Degradation of the polymer occurs at the peptide bonds linking individual amino acids. The amide bonds can be attacked non-specifically by acid- or base-catalyzed hydrolysis. The polymer's peptide bonds can also be attacked by endopeptidases, which will cleave at a specific side of a specific peptide bond based upon the residues which make up that bond. For example, if the first residue is a phenylalanine then the enzyme chymotrypsin will cleave at site 1, and if the third residue of the tripeptide is a lysine then trypsin would cleave at the third cleavage site. See also Biodegradable polymer Peptide nucleic acid Peptide synthesis Peptidomimetic References Peptides Organic polymers Triazoles
Clicked peptide polymer
[ "Chemistry" ]
563
[ "Biomolecules by chemical classification", "Organic polymers", "Organic compounds", "Molecular biology", "Peptides" ]
44,860,596
https://en.wikipedia.org/wiki/Aliflurane
Aliflurane (code name Hoechst Compound 26 or 26-P) is a halocarbon drug which was investigated as an inhalational anesthetic but was never marketed. See also Halopropane Norflurane Roflurane Synthane Teflurane References General anesthetics Cyclopropanes Ethers Organochlorides Organofluorides GABAA receptor positive allosteric modulators Fluranes
Aliflurane
[ "Chemistry" ]
98
[ "Organic compounds", "Functional groups", "Ethers" ]
44,861,052
https://en.wikipedia.org/wiki/Synthane
Synthane (development code BAX-3224) is a halocarbon agent which was investigated as an inhalational anesthetic but was never marketed. See also Aliflurane Halopropane Norflurane Roflurane Teflurane References General anesthetics Ethers Organofluorides GABAA receptor positive allosteric modulators Abandoned drugs
Synthane
[ "Chemistry" ]
83
[ "Functional groups", "Drug safety", "Organic compounds", "Ethers", "Abandoned drugs" ]
44,862,348
https://en.wikipedia.org/wiki/Indonesia%20AirAsia%20Flight%208501
Indonesia AirAsia Flight 8501 was a scheduled international passenger flight operated by Indonesia AirAsia from Surabaya, Java, Indonesia, to Singapore. On 28 December 2014, the Airbus A320-216 flying the route crashed into the Java Sea, killing all 162 of the people on board. When search operations ended in March 2015, only 116 bodies had been recovered. This is the only fatal accident involving Indonesia AirAsia. In December 2015, the Indonesian National Transportation Safety Committee (KNKT or NTSC) released a report concluding that a non-critical malfunction in the rudder control system prompted the captain to perform a non-standard reset of the on-board flight control computers. Control of the aircraft was subsequently lost, resulting in a stall and uncontrolled descent into the sea. Miscommunication between the two pilots was cited as a contributing factor. This was the first fatal crash in the history of AirAsia. History of the flight Indonesia AirAsia Flight 8501 was a scheduled flight from Surabaya, Java, Indonesia, to Singapore on Sunday, 28 December 2014. It was scheduled to depart Juanda International Airport at 05:20 Western Indonesian Time (WIB, UTC+7) and arrive at Singapore Changi Airport at 08:30 Singapore Standard Time (SST, UTC+8). Flight 8501 took off at 05:35 WIB and reached its cruising altitude of flight level (FL) 320 fourteen minutes later. It joined air route M635, heading north-west out over the western Java Sea. The plane was in contact with Jakarta air traffic control (ATC). The flight was normal until 06:00 when an electronic centralised aircraft monitor (ECAM) memo was displayed, along with a master caution light, to indicate a fault with the rudder limiter system. Captain Iriyanto read the actions for fixing this failure, and rebooted the aircraft's two Flight Augmentation Computers (FACs). The same fault recurred at 06:09, and the captain fixed it in the same way. At 06:11, the pilots turned fifteen degrees to the left to avoid inclement weather, and contacted Jakarta ATC to request a climb to FL 380 for the same reason. The controller could not give immediate permission for this due to other aircraft in the vicinity, and instructed them to wait. While they were waiting for permission to climb, the rudder limiter problem occurred for a third time, and for the third time the captain reset the FAC computers. When the memo displayed for the fourth time, Captain Iriyanto decided to reset the FAC circuit breakers (CB). He had previously seen this action being performed by a ground engineer, and believed that it was acceptable to do so in flight. The FAC circuit breakers were reset at 06:16:45, with immediate consequences, as this action not only reset the FAC computers but also disconnected the autopilot and autothrottle, and the flight control law changed from Normal to Alternate. It allowed the aircraft to roll to the left, and by the time First Officer Plesel reacted to this it was banked at 54 degrees. Plesel, possibly spatially disoriented due to the roll sensation, over-corrected twice: first by making a sharp right bank input and then a sharp left bank input. After that, at 06:17, Plesel made a nose-up input on his side-stick, causing the aircraft to enter a steep climb at a 24-degree nose-up pitch. The captain gave a confusing direction to "pull down", which 'bears an internal contradiction as “pull” suggests up, while “down” means down.' In just 54 seconds, the aircraft climbed from 32,000 feet to , exceeding a climb rate of .
It then entered a stall, at around 06:17:40, descending at a rate of up to . The aircraft also began a turn to the left, forming at least one complete circle before disappearing from radar at 06:18:44. At 06:20:35, the flight data recorder stopped recording. The cockpit voice recorder (CVR) stopped recording one second later, at 06:20:36. The aircraft crashed into the Java Sea and was destroyed. All 162 people on board were killed instantly upon impact. Its last recorded position was over the Java Sea, in the Karimata Strait between the islands of Belitung and Borneo (). The cockpit voice recorder captured multiple warnings, including a stall warning, sounding in the cockpit during the final minutes of the flight. No distress signal was sent from the aircraft. Search and rescue (SAR) operations were activated by the Indonesia National Search and Rescue Agency (Basarnas) from the Pangkal Pinang office. Aircraft The aircraft was an Airbus A320-216, with serial number 3648, registered as PK-AXC. It first flew on 2008, and was delivered to AirAsia on 2008. The aircraft was six years old and had accumulated approximately 23,039 airframe hours and around 13,610 takeoff and landing cycles. It had undergone its most recent scheduled maintenance on 2014. The aircraft was powered by two CFM International CFM56-5B6/3 engines and was configured to carry 180 passengers. Victims AirAsia released details of the 155 passengers, which included 137 adults, 17 children, and one infant. The crew consisted of two pilots, four flight attendants and an engineer. The passengers on board were mostly Indonesian; 3 passengers originated from South Korea, and a few were from the United Kingdom, Singapore and Malaysia. The pilots on board the flight were: Captain Iriyanto, age 53, an Indonesian national, had a total of 20,537 flying hours, of which 4,687 hours were on the Airbus A320. The captain began his career with the Indonesian Air Force, graduating from pilot school in 1983 and previously flying fighter jet aircraft. He took early retirement from the air force in the mid-2000s to join Adam Air, and later worked for Merpati Nusantara Airlines and Sriwijaya Air before joining Indonesia AirAsia. He had 6,100 flying hours with Indonesia AirAsia. First Officer Rémi Emmanuel Plesel, age 46, a French national, had a total of 2,247 flying hours, including 1,367 hours on the Airbus A320. He was originally from Le Marigot, Martinique, and had studied and worked as an engineer in Paris. At 44 he left his previous job to fulfill a childhood dream of becoming a pilot; Indonesia AirAsia was the first airline he had worked for. He was living in Indonesia. Forty-one people who were on board the AirAsia flight were members of a single church congregation in Surabaya named Gereja Mawar Sharon. Most were families with young children travelling to Singapore for a new year's holiday. The bodies began to be returned to their families on 1 January 2015. At that time, the East Java Regional Police Department's Disaster Victim Identification commissioner stated that the victims were identified by means of post mortem results, thumb prints, and their personal belongings. AirAsia offered US$32,000 or Rp300 million to the grieving family members for each victim of the accident as an initial payment toward the overall compensation. David Thejakusuma, who had 7 family members on the flight, received the amount for each family member he lost.
On 16 March 2015, Monash University posthumously awarded a Bachelor of Commerce to one of the crash victims, Kevin Alexander Sujipto. Professor Colm Kearney, Dean of the Faculty of Business and Economics, presented it to a member of his family. A memorial service was held alongside the presentation of the award, and was attended by the Consul General of Indonesia for Victoria and Tasmania, Dewi Savitri Wahab, 40 of the deceased's friends and representatives from the Indonesian Student Association in Australia (PPIA) Monash University branch. On 28 December 2015, the first anniversary of the crash, a private prayer service was held in a room in the Mahameru Building of the East Java Regional Police, Surabaya, and was attended by relatives of the victims, as well as by the Head Chief of the Search and Rescue Agency, Henry Bambang Soelistyo. Representatives of the family members asked the National Transportation Safety Committee to ensure the safety of air travel in Indonesia. The Indonesian Government was also asked by the family members to ratify the Montreal Convention, which later occurred on 19 May 2017. Search and recovery Shortly after the aircraft was confirmed to be missing, unconfirmed reports stated that wreckage had been found off the island of Belitung in Indonesia. Indonesia's National Search and Rescue Agency (Basarnas) deployed seven ships and two helicopters to search the shores of Belitung and Kalimantan. The Indonesian Navy and the provincial Indonesian National Police Air and Water Unit each sent out search and rescue teams. In addition, an Indonesian Air Force Boeing 737 reconnaissance aircraft was dispatched to the last known location of the airliner. The Indonesian Navy dispatched four ships by the end of the first search day and the Air Force deployed aircraft including a CASA/IPTN CN-235. The Indonesian Army deployed ground troops to search the shores and mountains of adjacent islands. Local fishermen also participated in the search. Search and rescue operations were under the guidance of the Civil Aviation Authority of Indonesia. The search was suspended at 7:45 pm local time on due to darkness and bad weather, to be resumed in daylight. An operations center to coordinate search efforts was set up in Pangkal Pinang. The search area was a radius near Belitung Island. Search and rescue operations quickly became an international effort, with naval and air units from Singapore, Malaysia and Australia joining Indonesian authorities in patrolling designated search areas. Singapore's Rescue Coordination Centre (RCC) deployed three C-130 Hercules aircraft to aid in the search and rescue operation. RSS Supreme, RSS Valour, RSS Persistence, and MV Swift Rescue subsequently took part in the search and rescue after Indonesia's National Search and Rescue Agency accepted the offer of help from the Republic of Singapore Navy. Singapore's Ministry of Transport provided specialist teams from the Air Accident and Investigation Bureau and underwater locator equipment. The Malaysian government set up a rescue coordination centre at Subang and deployed three military vessels and three aircraft, including a C-130, to assist in search and rescue operations. Australia deployed a P-3 Orion to assist in the search and rescue operation. Elements of the United States Navy also joined the search effort.
More than ninety vessels and aircraft from Indonesia, Singapore, Malaysia, Australia, South Korea, Japan, China, the United States, and Russia participated in the search. This fleet included three ships with underwater detectors and two fuel tankers seconded to ensure efficient operation of the vessels in the search area. On the Indonesian Ministry of Transport reported that two other Indonesian tender vessels had been fitted with equipment that could detect acoustic signals from the flight recorder ("black box") beacons and airframe metal, as well as multibeam side scan sonar. By 5 January, 31 bodies had been recovered with the aid of the Russian and the US search teams. Divers entered the main section of the fuselage underwater and discovered six bodies on 24 January. The official search for bodies ended on , after 116 bodies had been recovered. 46 bodies remained unaccounted for. Wreckage On the day of the disappearance, a fisherman observed "a lot of debris, small and large, near Pulau Tujuh. [...] It looked like the AirAsia colours." Another fisherman reported that, while moored on Sunday at Pulau Senggora, south of the town of Pangkalan Bun in Central Kalimantan, "Around 7 am, I heard a loud booming sound. Soon afterwards, there was haze that usually happened only during the dry season. [...] Before the exploding sound, my friends saw a plane from above Pulau Senggaro heading towards the sea. The plane was said to be flying relatively low, but then disappeared." The fisherman's reports, delivered after he had returned home the next day, were credited with guiding the search-and-rescue team to the vicinity of the crash. The first items of wreckage were spotted by search aircraft on in the Karimata Strait, from where the crew last contacted air traffic control, and three bodies were recovered by the warship KRI Bung Tomo. On , Basarnas claimed that a sonar image obtained by an Indonesian naval ship appeared to show an aircraft upside down on the seabed in about of water, about from the debris found on . The head of the Search and Rescue Agency also denied the existence of any sonar images of the wreckage (as well as the reported recovery of a body wearing a life vest). He stressed that only official information from his search-and-rescue service can be considered to be reliable. On 2015, Basarnas reported evidence of a fuel slick on the water surface in the search area, but detection of the fuselage remained unconfirmed. At a press conference given on the morning of by Basarnas, the discovery of two large submerged objects was reported: , and a thin object . Also, the previously reported fuel slick was confirmed. A later media report mentioned four large sections of wreckage, the largest being located at . Later in the day, Basarnas announced no more bodies had been found, leaving the total at 30. On , divers found parts of the aircraft, including a section of the tail. Other sections of the tail are expected to lie nearby. On , divers used an inflatable device to bring the aircraft's tail to the surface of the sea. They continued to search the sea floor within of where faint pings were heard. The flight data recorder was recovered by Indonesian divers on at , within of part of the fuselage and tail. Later in the day, the cockpit voice recorder was located and was recovered the following day. On , the Republic of Singapore's navy submarine rescue vessel MV Swift Rescue located a large section of the fuselage with one wing attached. 
On , ropes around the fuselage snapped during an initial failed effort to raise the wreckage. Four bodies were recovered, taking the total recovered to 69. More bodies were thought to be inside. Rear Admiral Widodo, who was in charge of recovery operations, said that the fuselage might be too fragile to be lifted. On , salvage workers recovered a large piece of fuselage, including the wings, of the A320. Lifting balloons were used to lift the fuselage, but the first attempt failed as the balloons deflated. By March 2015, all large pieces of fuselage from the jet had been lifted from the seafloor and moved for investigative purposes. Aftermath AirAsia An emergency call center was established by the airline for the families of those who were on board the aircraft, and an emergency information center was set up at Juanda International Airport to provide hourly updates as well as lodging for victims' relatives. Smaller posts were also opened at Soekarno–Hatta International Airport and Sultan Hasanuddin International Airport. On , Indonesia AirAsia retired the flight number QZ8501, changing the designation of its Surabaya-Singapore route to QZ678. The return flight number was also changed, from QZ8502 to QZ679. The Surabaya-Singapore route by AirAsia was then terminated on 4 January 2015. The route was reopened on 25 March 2023, with flight number QZ478 serving the sector. But as of May 2024, the route has been suspended again. Subsequent to the 1 December 2015 NTSC report as to the causes of the crash, the airline said it had already implemented improved pilot training. Indonesia AirAsia did not have any official permission to fly the Surabaya–Singapore route on Sunday – the day of the crash – but was licensed on four other days of the week, and, according to an Indonesian Ministry of Transport statement, "The Indonesian authorities are suspending the company's flights on this route with immediate effect pending an investigation." In response on the same day, the Civil Aviation Authority of Singapore (CAAS) and the Changi Airport Group (CAG) made a clarification that AirAsia QZ8501 "has been given approval at Singapore's end to operate a daily flight for the Northern Winter Season from 26 October 2014 to 28 March 2015". On , Indonesian Ministry of Transport representative Djoko Murjatmojo stated that "officials at the airport operator in Surabaya and [the] air traffic control agency who had allowed the flight to take off had been moved to other duties", and an immediate air transport directive had been issued "making it mandatory for pilots to go through a face-to-face briefing by an airline flight operations officer on weather conditions and other operational issues prior to every flight". The loss of Flight 8501 also brought attention to the lack of weather radar at Indonesian air traffic control centres. According to the Toronto Star, "Indonesia’s aviation industry has been plagued with problems ... pilot shortages, shoddy maintenance and poor oversight have all been blamed following a string of deadly accidents in recent years." The West Kotawaringin Regency administration in Central Kalimantan planned to build a memorial for the AirAsia flight that also doubles as a monument for aviation safety. Central Kalimantan deputy governor Achmad Diran stated that the monument is also going to be the symbol of gratitude and appreciation for the efforts of the National Search and Rescue Agency. 
The cornerstone ceremony was attended by local and state officials and representatives from Australia and Singapore. West Kotawaringin regent Ujang Iskandar said that with the monument, "we hope that the families and the government will lay flowers every 28 December, and continue the dialogue on aviation safety in Indonesia." On 22 March, Indonesia's search and rescue agency's head, Bambang Soelistyo, families of the victims and AirAsia officials visited the crash site to spread flowers and hold prayers. Legal proceedings France opened a criminal investigation to investigate possible manslaughter charges. The family of the first officer, a French national, have filed a lawsuit against AirAsia in connection to the lack of permission to fly on that day, claiming the airline was "endangering the life of others". Surabaya Mayor Tri Rismaharini said her administration had consulted with legal experts from Airlangga University on the fears of most families regarding the difficulties in disbursing insurance funds, after the Transportation Ministry regarded the Surabaya-Singapore flight on 28 Dec as illegal. She said her administration continued to collect data on the victims, including their valuable belongings. The data would later be used for insurance purposes and matters related to the beneficiary rights of the affected families. The families of ten of the victims filed a suit against Airbus and some of its suppliers, alleging that A320 suffered a malfunction of the fly-by-wire system, and that "at the time the accident aircraft left the control of defendant Airbus, it was defectively and unreasonably dangerous". The case (Aris Siswanto et al. v Airbus, SAS et al., 1:15-cv-05486) was dismissed by the Illinois court on the grounds that it would be more appropriate for the case to be heard in Indonesia. Air transport industry Following the recovery of the flight recorders, on 12 and , an anonymous International Civil Aviation Organization (ICAO) representative said, "The time has come that deployable recorders are going to get a serious look." Unlike military recorders, which jettison away from an aircraft and float on the water, signalling their location to search and rescue bodies, recorders on commercial aircraft sink. A second ICAO official said that public attention had "galvanized momentum in favour of ejectable recorders on commercial aircraft". Indonesian tourism Figures from the Indonesian Ministry of Tourism showed that the number of foreign tourists arriving at Surabaya's Juanda Airport was 5.33% lower in February 2015 compared to February 2014, 15.01% down at Jakarta's Soekarno-Hatta International Airport, and 10.66% at Bandung's Husein Sastranegara Airport. The head of Indonesia's Central Statistics Agency (CSA) Suryamin attributed the decrease to the revocation of a number of flight licences in the wake of the accident. By contrast, foreign visitors into Indonesia as a whole increased by 3.71%. Investigation The events leading to the crash were investigated by Indonesia's National Transportation Safety Committee (KNKT or NTSC). Assistance was provided by Australia, France, Singapore, and Malaysia. Data from the flight data recorder was downloaded. Although the aircraft's route took it through areas of cloud that extended from up to , FDR data showed that weather was not a factor in the accident. 124 minutes of cockpit dialogue was successfully extracted from the cockpit voice recorder. 
The sound of many alarms from the flight system can be heard in the final minutes, almost drowning out the voices of the pilots. The investigators ruled out a terrorist attack as the cause and then examined the possibility of human error or aircraft malfunction. Acting director of Air Transportation, Djoko Murjatmodjo, clearly stated that the investigation of the flight route and the investigation of the crash itself are separate. Murjatmodjo said that "AirAsia is clearly wrong because they didn’t fly at a time and schedule that was already determined." Both Singapore's civil aviation authority and the Changi Airport Group stated that AirAsia was allowed daily flights between Surabaya and Singapore. Tatang Kurniadi, head of Indonesia's National Transportation Safety Committee, stated that sabotage was ruled out as a cause of the accident by the black boxes, and a preliminary report was supposedly submitted to the International Civil Aviation Organisation by early February. Final NTSC report After studying the wreckage of the Airbus A320-216 as well as the two black boxes and the cockpit recorder, Indonesia's National Transportation Safety Committee issued a report with their conclusions from the investigation on 1 December 2015. The report stated that the sequence of events that led to the crash started with a malfunction in two of the plane's rudder travel limiter units (RTLU). A tiny soldered electrical connection in the plane's RTLU was found to be cracked, likely for over a year, causing it to intermittently send amber master caution warnings to the electronic centralised aircraft monitor (ECAM)—with the plane's maintenance records showing that the RTLU warning had been sent 23 times over the previous year, but was always solved (and never further investigated, which could have addressed the underlying electrical problem) by resetting the RTLU system. On this flight, the RTLU issue sent an amber caution warning four different times, and the first three times that the ECAM system gave the warning "Auto Flight Rudder Travel Limiter System", the pilot in command followed the ECAM instructions, toggling the flight augmentation computer (FAC) 1 and 2 buttons on the cockpit's overhead panel to off and then on. This procedure did clear the amber master caution warnings for each of those first three warnings. Specifics in the report indicate that French First Officer Rémi Emmanuel Plesel was at the controls just before the stall warning sounded in the cockpit indicating that the jet had lost lift. Investigators also found that, just moments earlier—on the fourth occurrence of the RTLU warning during the flight—the captain chose to ignore the procedure advised by the ECAM instructions, and, instead, left his seat and reset the circuit breaker of the entire FAC, unintentionally disengaging multiple flight control systems, which would have to be turned on by the pilots after the circuit breakers are reset. This circuit breaker is not on the list of circuit breakers that are allowed to be reset in flight, and disabling both FACs placed the aircraft in alternate law mode, disengaging the autopilot and stopping the automatic stall protection and bank angle protection. The FAC is the part of the fly-by-wire system in A320 aircraft responsible for controlling flight surfaces including the rudder. Without the FAC's computerized flight augmentation, pilots would have to "rely on manual flying skills that are often stretched during a sudden airborne emergency". 
When the crew was required to fly the Airbus A320 manually, there was an unexplained nine-second delay between the start of the roll and either pilot attempting to take control. After nine seconds, the aircraft was banking at a 54° angle: the rudder had deflected 2 degrees to the left, causing the aircraft to roll. Subsequent flight crew actions resulted in the aircraft entering a prolonged stall from which they were unable to recover. The report did not specifically conclude that pilot error caused the crash while detailing the chain of events leading to the loss of Flight 8501. One of the investigators, the NTSC's Nurcahyo Utomo, referred to an apparent miscommunication between the pilots (based on the recordings on the cockpit voice recorder) and said that the malfunction should not have led to a total loss of control had they followed the recommended procedure. Side-stick control issue The example of miscommunication between the pilots was when the plane was in a critical stalling condition, the co-pilot misunderstood the captain's command "pull down"; instead of pushing the airplane's nose down (pushing forward on the stick to regain speed and escape the stall), he pulled the stick back, which would have ordered the aircraft to pitch up, deepening the stall. Because the captain was also pushing the stick forward and because Airbus has a dual-input system, the two stick inputs cancelled each other out, which led to the plane remaining in a stall condition until the end of the black box recording. On 3 December 2015, Indonesia's air transportation director general, Suprasetyo, said that the National Safety Transportation Committee (KNKT) had provided recommendations as to tightened controls on aircraft maintenance and flight crew competence. He added that the government had implemented "... a series of corrective actions as a preventive measure so that the same accident will not happen again in the future." Suprasetyo also confirmed that the suspension of Indonesia AirAsia's Surabaya–Singapore route would not be lifted until the carrier had completed the steps recommended by the KNKT. Dramatization The crash was dramatized in the 16th season of the TV Series Mayday, in an episode entitled "Deadly Solution", aired just over two years after the crash on 6 February 2017. Also, Science Channel aired a documentary on 28 April 2015 called "AirAsia 8501: Anatomy of a Crash". See also Air France Flight 447, a 2009 fatal crash involving an Airbus A330 resulting from a high-altitude stall and pilots making opposite inputs with the aircraft's side-stick controls Accidents and incidents involving the Airbus A320 family List of aircraft accidents and incidents resulting in at least 50 fatalities Notes References External links AirAsia Flight 8501 – AirAsia's official webpage for information about Flight 8501 Passenger list (PDF) – From the Indonesian Ministry of Transportation "http://www.bea.aero/en/enquetes/flight.qz.8501/flight.qz.8501.php ." 
– Press release by France's aviation accident investigation agency BEA (representing the state of manufacture of the aircraft) Weather analysis (in Indonesian) – Detailed analysis of weather in the vicinity and time of the crash and its possible implications, by the Indonesian Central Office of Meteorology, Climatology and Geophysics (BMKG) Flight 8501 Final Report Final accident report from KNKT (Indonesian's National Transportation Safety Committee) (Archive) Cockpit Voice Recorder transcript and accident summary Viking Nomads 28 December 2014 as the events unfolded. 2014 disasters in Indonesia 2014 in Singapore Accidents and incidents involving the Airbus A320 8501 Airliner accidents and incidents caused by mechanical failure Airliner accidents and incidents caused by pilot error Aviation accidents and incidents in 2014 Aviation accidents and incidents in Indonesia Java Sea Marine salvage operations Indonesia–Singapore relations December 2014 events in Indonesia December 2014 events in Singapore Airliner accidents and incidents caused by stalls
Indonesia AirAsia Flight 8501
[ "Materials_science" ]
5,875
[ "Airliner accidents and incidents caused by mechanical failure", "Mechanical failure" ]
21,769,719
https://en.wikipedia.org/wiki/Oscillator%20linewidth
The concept of a linewidth is borrowed from laser spectroscopy. The linewidth of a laser is a measure of its phase noise. The spectrogram of a laser is produced by passing its light through a prism. The spectrogram of the output of a pure noise-free laser will consist of a single infinitely thin line. If the laser exhibits phase noise, the line will have non-zero width. The greater the phase noise, the wider the line. The same will be true with oscillators. The spectrum of the output of a noise-free oscillator has energy at each of the harmonics of the output signal, but the bandwidth of each harmonic will be zero. If the oscillator exhibits phase noise, the harmonics will not have zero bandwidth. The more phase noise the oscillator exhibits, the wider the bandwidth of each harmonic. Phase noise is a noise in the phase of the signal. Consider the following noise free signal: v(t) = Acos(2πf0t). Phase noise is added to this signal by adding a stochastic process represented by φ to the signal as follows: v(t) = Acos(2πf0t + φ(t)). If the phase noise in an oscillator stems from white noise sources, then the power spectral density (PSD) of the phase noise produced by an oscillator will be Sφ(f) = n/f², where n specifies the amount of noise. The PSD of the output signal would then be where n = 2cf0². Define the corner frequency fΔ = cπf0² as the linewidth of the oscillator. Then It is more common to report oscillator phase noise as L, the ratio of the single-sideband (SSB) phase noise power to the power in the fundamental (in dBc/Hz). In this case Adding phase noise neither increases nor decreases the power of the signal. It simply redistributes the power by increasing the bandwidth over which the signal is present while decreasing the amplitude of the signal that occurs at the nominal oscillation frequency. The total noise power, as found by integrating the power spectral density over all frequencies, remains constant regardless of the amount of phase noise. This is illustrated in the figures on the right. It can be proven by integrating L over all frequencies to compute the total power of the signal. See also Laser linewidth Spectral linewidth Introduction to RF Simulation and its Application by Ken Kundert Oscillators Stochastic processes Statistical signal processing
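The power-conservation claim above lends itself to a quick numerical check. The sketch below assumes the standard Lorentzian line shape produced by white-noise-driven phase noise (the displayed equation itself is not reproduced in this text), normalised so that the integrated PSD equals the carrier power A²/2; the carrier frequency and linewidths are illustrative values, not taken from the article.

import numpy as np

def lorentzian_psd(f, f0, f_delta, A=1.0):
    # Assumed Lorentzian output PSD for an oscillator whose phase noise stems
    # from white noise; f_delta is the half-width (linewidth). Normalised so
    # the integral over frequency equals the carrier power A**2 / 2.
    return (A**2 / 2) * (f_delta / np.pi) / (f_delta**2 + (f - f0) ** 2)

f0 = 1.0e6                                    # illustrative 1 MHz carrier
f = np.linspace(f0 - 1e5, f0 + 1e5, 400_001)  # fine grid around the carrier
df = f[1] - f[0]

for f_delta in (10.0, 1_000.0):               # two very different linewidths
    total_power = np.sum(lorentzian_psd(f, f0, f_delta)) * df
    print(f"linewidth {f_delta:7.1f} Hz -> integrated power {total_power:.4f}")
# Both totals come out near 0.5 (= A**2 / 2): widening the line redistributes
# power over a broader bandwidth without changing the total, as stated above.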
Oscillator linewidth
[ "Engineering" ]
551
[ "Statistical signal processing", "Engineering statistics" ]
21,770,563
https://en.wikipedia.org/wiki/Fluorine-19%20nuclear%20magnetic%20resonance%20spectroscopy
Fluorine-19 nuclear magnetic resonance spectroscopy (fluorine NMR or 19F NMR) is an analytical technique used to detect and identify fluorine-containing compounds. 19F is an important nucleus for NMR spectroscopy because of its receptivity and large chemical shift dispersion, which is greater than that for proton nuclear magnetic resonance spectroscopy. Operational details 19F has a nuclear spin (I) of and a high gyromagnetic ratio. Consequently, this isotope is highly responsive to NMR measurements. Furthermore, 19F comprises 100% of naturally occurring fluorine. The only other highly sensitive spin NMR-active nuclei that are monoisotopic (or nearly so) are 1H and 31P. Indeed, the 19F nucleus is the third most receptive NMR nucleus, after the 3H nucleus and 1H nucleus. The 19F NMR chemical shifts span a range of about 800 ppm. For organofluorine compounds the range is narrower, being about −50 to −70 ppm (for CF3 groups) to −200 to −220 ppm (for CH2F groups). The very wide spectral range can cause problems in recording spectra, such as poor data resolution and inaccurate integration. It is also possible to record decoupled 19F{1H} and 1H{19F} spectra and multiple bond correlations 19F-13C HMBC and through space HOESY spectra. Chemical shifts 19F NMR chemical shifts in the literature vary strongly, commonly by over 1 ppm, even within the same solvent. Although the reference compound for 19F NMR spectroscopy, neat CFCl3 (0 ppm), has been used since the 1950s, clear instructions on how to measure and deploy it in routine measurements were not present until recently. An investigation of the factors influencing the chemical shift in fluorine NMR spectroscopy revealed the solvent to have the largest effect (Δδ = ±2 ppm or more). A solvent-specific reference table with 5 internal reference compounds has been prepared (CFCl3, C6H5F, PhCF3, C6F6 and CF3CO2H) to allow reproducible referencing with an accuracy of Δδ = ±30 ppb. As the chemical shift of CFCl3 is also affected by the solvent, care must be taken when using dissolved CFCl3 as a reference compound with regard to the chemical shift of neat CFCl3 (0 ppm). Example of chemical shifts determined against neat CFCl3: For a complete list of the reference compounds' chemical shifts in 11 deuterated solvents, the reader is referred to the cited literature. A concise list of appropriately referenced chemical shifts of over 240 fluorinated chemicals has also been recently provided. Chemical shift prediction 19F NMR chemical shifts are more difficult to predict than 1H NMR shifts. Specifically, 19F NMR shifts are strongly affected by contributions from electronic excited states whereas 1H NMR shifts are dominated by diamagnetic contributions. Fluoromethyl compounds Fluoroalkenes For vinylic fluorine substituents, the following formula allows estimation of 19F chemical shifts: where Z is the statistical substituent chemical shift (SSCS) for the substituent in the listed position, and S is the interaction factor. Some representative values for use in this equation are provided in the table below: Fluorobenzenes When determining the 19F chemical shifts of aromatic fluorine atoms, specifically phenyl fluorides, there is another equation that allows for an approximation. Adapted from "Structure Determination of Organic Compounds," this equation is where Z is the SSCS value for a substituent in a given position relative to the fluorine atom.
Some representative values for use in this equation are provided in the table below: The data shown above are only representative of some trends and molecules. Other sources and data tables can be consulted for a more comprehensive list of trends in 19F chemical shifts. Note that, historically, many literature sources used the opposite sign convention; therefore, be wary of the sign of values reported in other sources. Spin–spin coupling 19F-19F coupling constants are generally larger than 1H-1H coupling constants. Long-range 19F-19F couplings (2J, 3J, 4J or even 5J) are commonly observed. Generally, the longer the range of the coupling, the smaller the value. Hydrogen couples with fluorine, which is very typical to see in a 19F spectrum. With a geminal hydrogen, the coupling constants can be as large as 50 Hz. Other nuclei can couple with fluorine; however, this can be prevented by running decoupled experiments. It is common to run fluorine NMRs with both carbon and proton decoupled. Fluorine atoms can also couple with each other. Between fluorine atoms, homonuclear coupling constants are much larger than with hydrogen atoms. Geminal fluorines usually have a J-value of 250-300 Hz. There are many good references for coupling constant values. The citations are included below. Magnetic resonance imaging 19F magnetic resonance imaging (MRI) is a viable alternative to 1H MRI. The sensitivity issues can be overcome by using soft nanoparticles. Applications include pH-, temperature-, enzyme-, metal ion- and redox-responsive contrast agents. They can also be used for long-term cell labelling. Notes References Nuclear magnetic resonance Fluorine
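As a rough illustration of the additive estimation scheme described above, the sketch below applies SSCS increments to an assumed base shift of about −113 ppm for fluorobenzene (versus CFCl3); the increment values are placeholders, since the article's tables are not reproduced here, and real values should be taken from the cited literature.

# Placeholder SSCS increments in ppm, keyed by (substituent, position relative
# to the fluorine). These are illustrative only; real values come from tables.
SSCS = {
    ("X", "ortho"): -4.0,
    ("X", "meta"): 1.0,
    ("X", "para"): -6.5,
    ("Y", "ortho"): 2.5,
    ("Y", "meta"): 0.5,
    ("Y", "para"): 3.0,
}

FLUOROBENZENE_SHIFT = -113.1  # assumed 19F shift of C6H5F vs. neat CFCl3, ppm

def estimate_shift(substituents):
    # Additive estimate: base shift of fluorobenzene plus one SSCS increment
    # per ring substituent.
    return FLUOROBENZENE_SHIFT + sum(SSCS[s] for s in substituents)

# A hypothetical fluorobenzene bearing one X group para and one Y group ortho
# to the fluorine:
print(f"{estimate_shift([('X', 'para'), ('Y', 'ortho')]):.1f} ppm")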
Fluorine-19 nuclear magnetic resonance spectroscopy
[ "Physics", "Chemistry" ]
1,148
[ "Nuclear magnetic resonance", "Nuclear physics" ]
38,885,073
https://en.wikipedia.org/wiki/Vibroscope
Vibroscope ( 'vibrate' + scope) is an instrument for observing and tracing (and sometimes recording) vibration. For example, a primitive mechanical vibroscope consists of a vibrating object with a pointy end which leaves a wave trace on a smoked surface of a rotating cylinder. Vibroscopes are used to study properties of substances. For example, polymers' torsional modulus and Young's modulus may be determined by vibrating the polymers and measuring their frequency of vibration under certain external forces. A similar approach works to determine the linear density of thread-shaped objects, such as fibers, filaments, and yarn. Vibroscopes are also used to study sound in different areas of the mouth during speech. Jean-Marie Duhamel published a description of an early recording device he called a vibroscope in 1843. References Oscillation Measuring instruments
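The linear-density measurement mentioned above is usually analysed by treating the fibre as an ideal taut string, whose fundamental frequency is f1 = (1/2L)·sqrt(T/μ). The sketch below inverts that textbook relation; the gauge length, pretension and resonance frequency are illustrative numbers, not values from any particular instrument.

def linear_density(tension_n, length_m, fundamental_hz):
    # Ideal taut-string model: f1 = (1 / (2 L)) * sqrt(T / mu)
    #                    =>    mu = T / (4 * L**2 * f1**2)
    return tension_n / (4.0 * length_m ** 2 * fundamental_hz ** 2)

# Illustrative numbers: 20 mm gauge length, 0.5 mN pretension, 1 kHz resonance.
mu = linear_density(tension_n=0.5e-3, length_m=0.020, fundamental_hz=1000.0)
print(f"{mu * 1e6:.2f} tex")   # 1 tex = 1 g/km = 1e-6 kg/m  -> about 0.31 tex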
Vibroscope
[ "Physics", "Technology", "Engineering" ]
176
[ "Mechanics", "Measuring instruments", "Oscillation" ]
38,891,557
https://en.wikipedia.org/wiki/Truncated%20hexaoctagonal%20tiling
In geometry, the truncated hexaoctagonal tiling is a semiregular tiling of the hyperbolic plane. There are one square, one dodecagon, and one hexakaidecagon on each vertex. It has a Schläfli symbol of tr{8,6}. Dual tiling Symmetry There are six reflective kaleidoscopic subgroups constructed from [8,6] by removing one or two of its three mirrors. A mirror can be removed if its branch orders are all even, and removing it cuts neighboring branch orders in half. Removing two mirrors leaves a half-order gyration point where the removed mirrors met. In these images fundamental domains are alternately colored black and white, and mirrors exist on the boundaries between colors. The index-8 subgroup, [1+,8,1+,6,1+] (4343), is the commutator subgroup of [8,6]. A radical subgroup is constructed as [8,6*], index 12, as [8,6+], (6*4) with gyration points removed, becomes (*444444), and another [8*,6], index 16 as [8+,6], (8*3) with gyration points removed as (*33333333). Related polyhedra and tilings From a Wythoff construction there are fourteen hyperbolic uniform tilings that can be based on the regular order-6 octagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 7 forms with full [8,6] symmetry, and 7 with subsymmetry. See also Tilings of regular polygons List of uniform planar tilings References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Semiregular tilings Truncated tilings
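A small worked check of why this tiling lives in the hyperbolic plane: the Euclidean interior angles of a regular square, dodecagon and hexakaidecagon add up to more than 360 degrees, so the vertex figure 4.12.16 cannot close up in the Euclidean plane.

def interior_angle(n):
    # Interior angle (degrees) of a regular Euclidean n-gon.
    return (n - 2) * 180.0 / n

corner = sum(interior_angle(n) for n in (4, 12, 16))  # square, 12-gon, 16-gon
print(corner)  # 397.5
# 397.5 degrees > 360 degrees: the three polygons overlap if forced into the
# Euclidean plane, but fit exactly in the hyperbolic plane, where interior
# angles of regular polygons can be made smaller.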
Truncated hexaoctagonal tiling
[ "Physics" ]
465
[ "Semiregular tilings", "Truncated tilings", "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Symmetry" ]
41,632,849
https://en.wikipedia.org/wiki/Supercomplex
Modern biological research has revealed strong evidence that the enzymes of the mitochondrial respiratory chain assemble into larger, supramolecular structures called supercomplexes, instead of the traditional fluid model of discrete enzymes dispersed in the inner mitochondrial membrane. These supercomplexes are functionally active and necessary for forming stable respiratory complexes. One supercomplex of complex I, III, and IV make up a unit known as a respirasome. Respirasomes have been found in a variety of species and tissues, including rat brain, liver, kidney, skeletal muscle, heart, bovine heart, human skin fibroblasts, fungi, plants, and C. elegans. History In 1955, biologists Britton Chance and G. R. Williams were the first to propose the idea that respiratory enzymes assemble into larger complexes, although the fluid state model remained the standard. However, as early as 1985, researchers had begun isolating complex III/complex IV supercomplexes from bacteria and yeast. Finally, in 2000 Hermann Schägger and Kathy Pfeiffer used Blue Native PAGE to isolate bovine mitochondrial membrane proteins, showing Complex I, III, and IV arranged in supercomplexes. Composition and formation The most common supercomplexes observed are Complex I/III, Complex I/III/IV, and Complex III/IV. Most of Complex II is found in a free-floating form in both plant and animal mitochondria. Complex V can be found co-migrating as a dimer with other supercomplexes, but scarcely as part of the supercomplex unit. Supercomplex assembly appears to be dynamic and respiratory enzymes are able to alternate between participating in large respirasomes and existing in a free state. It is not known what triggers changes in complex assembly, but research has revealed that the formation of supercomplexes is heavily dependent upon the lipid composition of the mitochondrial membrane, and in particular requires the presence of cardiolipin, a unique mitochondrial lipid. In yeast mitochondria lacking cardiolipin, the number of enzymes forming respiratory supercomplexes was significantly reduced. According to Wenz et al. (2009), cardiolipin stabilizes the supercomplex formation by neutralizing the charges of lysine residues in the interaction domain of Complex III with Complex IV. In 2012, Bazan et al. was able to reconstitute trimer and tetramer Complex III/IV supercomplexes from purified complexes isolated from Saccharomyces cerevisiae and exogenous cardiolipin liposomes. Another hypothesis for respirasome formation is that membrane potential may initiate changes in the electrostatic/hydrophobic interactions mediating the assembly/disassembly of supercomplexes. Functional significance The functional significance of respirasomes is not entirely clear but more recent research is beginning to shed some light on their purpose. It has been hypothesized that the organization of respiratory enzymes into supercomplexes reduces oxidative damage and increases metabolism efficiency. Schäfer et al. (2006) demonstrated that supercomplexes comprising Complex IV had higher activities in Complex I and III, indicating that the presence of Complex IV modifies the conformation of the other complexes to enhance catalytic activity. Evidence has also been accumulated to show that the presence of respirasomes is necessary for the stability and function of Complex I. In 2013, Lapuente-Brun et al. demonstrated that supercomplex assembly is "dynamic and organizes electron flux to optimize the use of available substrates." 
References Cellular respiration Integral membrane proteins
Supercomplex
[ "Chemistry", "Biology" ]
754
[ "Biochemistry", "Cellular respiration", "Metabolism" ]
36,014,669
https://en.wikipedia.org/wiki/Yff%20center%20of%20congruence
In geometry, the Yff center of congruence is a special point associated with a triangle. This special point is a triangle center and Peter Yff initiated the study of this triangle center in 1987. Isoscelizer An isoscelizer of an angle in a triangle is a line through points , where lies on and on , such that the triangle is an isosceles triangle. An isoscelizer of angle is a line perpendicular to the bisector of angle . Isoscelizers were invented by Peter Yff in 1963. Yff central triangle Let be any triangle. Let be an isoscelizer of angle , be an isoscelizer of angle , and be an isoscelizer of angle . Let be the triangle formed by the three isoscelizers. The four triangles and are always similar. There is a unique set of three isoscelizers such that the four triangles and are congruent. In this special case formed by the three isoscelizers is called the Yff central triangle of . The circumcircle of the Yff central triangle is called the Yff central circle of the triangle. Yff center of congruence Let be any triangle. Let be the isoscelizers of the angles such that the triangle formed by them is the Yff central triangle of . The three isoscelizers are continuously parallel-shifted such that the three triangles are always congruent to each other until formed by the intersections of the isoscelizers reduces to a point. The point to which reduces to is called the Yff center of congruence of . Properties The trilinear coordinates of the Yff center of congruence are Any triangle is the triangle formed by the lines which are externally tangent to the three excircles of the Yff central triangle of . Let be the incenter of . Let be the point on side such that , a point on side such that , and a point on side such that . Then the lines are concurrent at the Yff center of congruence. This fact gives a geometrical construction for locating the Yff center of congruence. A computer assisted search of the properties of the Yff central triangle has generated several interesting results relating to properties of the Yff central triangle. Generalization The geometrical construction for locating the Yff center of congruence has an interesting generalization. The generalisation begins with an arbitrary point in the plane of a triangle . Then points are taken on the sides such that The generalization asserts that the lines are concurrent. See also Congruent isoscelizers point Central triangle References Triangle centers
Yff center of congruence
[ "Physics", "Mathematics" ]
540
[ "Point (geometry)", "Triangle centers", "Points defined for a triangle", "Geometric centers", "Symmetry" ]
36,017,008
https://en.wikipedia.org/wiki/Deletion%20channel
A deletion channel is a communications channel model used in coding theory and information theory. In this model, a transmitter sends a bit (a zero or a one), and the receiver either receives the bit (with probability ) or does not receive anything without being notified that the bit was dropped (with probability ). Determining the capacity of the deletion channel is an open problem. The deletion channel should not be confused with the binary erasure channel, which is much simpler to analyze. Formal description Let be the deletion probability, . The iid binary deletion channel is defined as follows: given an input sequence of bits, each bit can be deleted with probability . The deletion positions are unknown to the sender and the receiver. The output sequence is the sequence of the bits which were not deleted, in the correct order and with no errors. Capacity The capacity of the binary deletion channel (as an analytical expression of the deletion rate ) is unknown; although the capacity exists as a well-defined quantity, no closed-form expression for it has been found. Several upper and lower bounds are known. References External links Implementation of correction for deletion channel Coding theory
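A minimal simulation of the channel as defined above (the probability symbol is elided in the text; it is written d here):

import random

def deletion_channel(bits, d, rng=None):
    # i.i.d. binary deletion channel: each input bit is deleted independently
    # with probability d; the surviving bits arrive in their original order
    # and the receiver gets no marker showing where deletions occurred.
    rng = rng or random.Random()
    return [b for b in bits if rng.random() >= d]

x = [1, 0, 1, 1, 0, 0, 1, 0]
print(x, "->", deletion_channel(x, d=0.3))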
Deletion channel
[ "Mathematics" ]
233
[ "Discrete mathematics", "Coding theory" ]
36,018,530
https://en.wikipedia.org/wiki/Sarayk%C3%B6y%20Nuclear%20Research%20and%20Training%20Center
The Sarayköy Nuclear Research and Training Center (), known as SANAEM, is a nuclear research and training center of Turkey. The organization was established on July 1, 2005, as a subunit of Turkish Atomic Energy Administration (, TAEK) at Kazan district in northwest of Ankara on an area of . The center can be visited for technical purposes upon application. Laboratories SANAEM is accredited to perform following metrology: total alpha and beta radioactivity analysis in drinking water, Tritium analysis in drinking water, analysis of Cs-134 and Cs-137 radionuclides in food, analysis of Ra-226, Th-232, Cs-137 and K-40 radionuclides in earth and building materials, analysis of Ra-226, U-234 and U-238 isotopes with alpha particle energy spectrometry method in water. The center consists of a wide range of laboratories as listed below: Alpha/beta particle spectroscopy laboratory, Liquid scintillation spectrometry laboratory, Alpha particle spectrometry laboratory, Gamma particle spectrometry laboratory, C14 dating laboratory, Radon screening laboratory, Analytical measurement and analysis laboratories, Chemical elements and stable isotopes analysis laboratory, Chromatography laboratory, Spectroscopy laboratory, Nuclear electronics laboratory, Film dosimetry laboratory, Thermoluminescent dosimetry laboratory Finger ring dosimetry laboratory, Environmental radiological monitoring activity, Medical physics applications laboratory, Radiation source quality control test laboratory, Molecular genetics laboratory, Stable isotopes laboratory, Differential scanning calorimetry and Thermogravimetric analysis in material characterization laboratory, Microscopy laboratory, Nuclear fusion laboratory, Plasma physics laboratory, Nuclear fission laboratory, Radiation microbiology laboratory, Food microbiology laboratory, Food irradiation determination laboratory, Gamma radiation facility Three gamma radiation units are in service for sterilization of disposable medical supplies and food irradiation. Electron accelerator facility The electron accelerator is the only one in the country. It has 500 keV energy and 20 mA current. Proton accelerator facility The country's first proton accelerator facility (, TAEK-PHT) is housed in a separate building of two stories built on a ground area of . The building consists of a cyclotron room, four target rooms, six production laboratories, three quality control laboratories, a R&D laboratory and storage rooms for material and waste. Facility's groundbreaking was held on February 24, 2010. The building was constructed by the Turkish Housing Development Administration (TOKİ), and it was completed on December 12, 2010. The installation of the proton accelerator and all the equipment needed concluded in 2011. The facility was officially opened on May 30, 2012, by Prime Minister Recep Tayyip Erdoğan. The cyclotron type proton accelerator "CYCLONE-30" has a variable beam energy between 15 and 30 MeV and variable beam current up to 1.2 mA. It was designed, manufactured and installed by Belgian Ion Beam Applications S.A. (IBA). The cyclotron cost 11.6 million EUR. With the cyclotron, four beamlines can be generated, three beamlines for radioisotope production and one for research and development works. With the proton bombardment following radioisotopes are produced in three separate target rooms: Indium-111, gallium-67, thallium-201 on solid target material, Fluorine-18 on liquid target material, Iodine-123 on evaporated target material. 
These radioisotopes, and the radiopharmaceuticals produced from them, are used for the diagnosis and therapy of diseases such as cancer, neurological disorders and coronary artery disease, and for studies of brain physiology and pathology. By producing radioisotopes and radiopharmaceuticals, the facility conducts research work in the fields of medicine, industry, agriculture, food, biotechnology, animal husbandry and health physics. References Nuclear research institutes Nuclear medicine organizations Research institutes in Turkey Nuclear technology in Turkey Buildings and structures in Ankara Organizations based in Ankara Organizations established in 2005 Tourist attractions in Ankara
Sarayköy Nuclear Research and Training Center
[ "Engineering" ]
838
[ "Nuclear research institutes", "Nuclear medicine organizations", "Nuclear organizations" ]
36,022,927
https://en.wikipedia.org/wiki/Nuclear%20forensics
Nuclear forensics is the investigation of nuclear materials to find evidence for the source, the trafficking, and the enrichment of the material. The material can be recovered from various sources including dust from the vicinity of a nuclear facility, or from the radioactive debris following a nuclear explosion. Results of nuclear forensic testing are used by different organisations to make decisions. The information is typically combined with other sources of information such as law enforcement and intelligence information. History The first seizures of nuclear or otherwise radioactive material were reported in Switzerland and Italy in 1991. Later, reports of incidents of nuclear material occurred in Germany, the Czech Republic, Hungary and other central European countries. Nuclear forensics became a new branch of scientific research with the intent of determining not only the nature of the material, but also the intended use of the seized material, as well as its origin and the potential trafficking routes. Nuclear forensics relies on making these determinations through measurable parameters including, but not limited to, chemical impurities, isotopic composition, microscopic appearance, and microstructure. By measuring these parameters, conclusions can be drawn as to the origin of the material. Identification of these parameters is an ongoing area of research; however, data interpretation also relies on the availability of reference information and on knowledge of fuel cycle operations. The first investigative radiochemical measurements began in the early days of nuclear fission. In 1944, the US Air Force made the first attempts to detect fissiogenic 133Xe in the atmosphere in order to indicate the production of plutonium through the irradiation of uranium and chemical reprocessing, in an effort to gather intelligence on the status of the German nuclear program. However, no 133Xe was detected. In the subsequent years it became increasingly valuable to gather information on the Soviet nuclear weapons program, which resulted in the development of technologies that could gather airborne particles in a WB-29 weather reconnaissance plane. On September 3, 1949, these particles were used to determine the detonation time of the first Soviet atomic test, "Joe 1". Further analysis revealed that this bomb was a replica of the "Fat Man", the bomb dropped on Nagasaki in 1945. This investigative methodology combined radiochemistry and other techniques to gather intelligence on nuclear activities. The first seizures of nuclear materials from trafficking in the early 1990s allowed the nuclear forensic methodology to be adopted by a wider scientific community; the term "nuclear forensics" was coined when scientific laboratories outside the weapons and intelligence community took an interest in this methodology. Unlike standard forensics, nuclear forensics focuses mainly on the nuclear or radioactive material and aims to provide knowledge of the intended use of the materials. In 1994, 560 grams of plutonium and uranium oxide were intercepted at Munich airport in an airplane coming from Moscow. The precise composition was 363 grams of plutonium (87% of which was Plutonium-239) and 122 grams of uranium. It later emerged through a German parliamentary enquiry that the purchase had been arranged and financed by the German Federal Intelligence Service. U.S. Department of Energy official Jay A.
Tilden has advocated for the use of nuclear forensics science to assign responsibility for, or resolve ambiguity about, “unattributed nuclear events,” such as accidents at nuclear facilities, nuclear weapons mishaps in denied geographic areas, accidental nuclear detonations, the limited use of nuclear weapons and subsequent denial of responsibility by the perpetrator, and attempts to blame a clandestine nuclear attack on non-state actors. An example of an unattributed nuclear event was the September 2017 unattributed release of the radioisotope ruthenium across central and eastern Europe and Asia. Chronometry Determining a nuclear material's age is critical to nuclear forensic investigations. Dating techniques can be utilized to identify a material's source as well as procedures performed on the material. This information can, in turn, aid in identifying potential participants and in establishing the "age" of the material of interest. Nuclides related through radioactive decay processes will have relative sample concentrations that can be predicted using parent-daughter in-growth equations and relevant half-lives. Because radioactive isotopes decay at a rate determined by the amount of the isotope in a sample and the half-life of the parent isotope, the relative amount of the decay products compared to the parent isotopes can be used to determine "age". Some heavy-element nuclides have a 4n+2 relationship, where the mass number divided by 4 leaves a remainder of two. The decay network begins with 238Pu and proceeds through the in-growth of long-lived 234U, 230Th, and 226Ra. If any member of the 4n+2 decay chain is purified, it will immediately begin to produce descendant species. The time since a sample was last purified can be calculated from the ratio of any two concentrations among the decaying nuclides. Essentially, if a nuclear material has been put through a refinement process to remove the daughter species, the time elapsed since purification can be "back-calculated" using radiochemical separation techniques in conjunction with analytical measurement of the existing parent-daughter ratios. The α decay of 239Pu to 235U can be used as an example of this procedure. With the assumption of a perfect purification at time T0, there will be a linear relationship between the in-growth of 235U and the time elapsed since purification. There are, however, various instances where the correlation is not as clear. This strategy may not apply when the parent-daughter pair achieve secular equilibrium very rapidly or when the half-life of the daughter nuclide is significantly shorter than the time that has elapsed since purification of the nuclear material, e.g. 237Np/233Pa. Another possible complication arises in environmental samples, where non-equivalent metal/ion transport of parent and daughter species may complicate or invalidate the use of chronometric measurements. Special age-dating relationships exist, including the commonly employed 234U/230Th and 241Pu/241Am chronometers. In special circumstances, parent-granddaughter relationships can be used to elucidate the age of nuclear materials when the material is intentionally made to look older through the addition of daughter nuclides. A practical complication of chronometry is that the composition of the nuclear material continues to change as samples are prepared and analyzed. This barrier can be substantial for species that decay quickly or whose daughter products produce spectral interferences.
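As a numerical sketch of the 239Pu/235U chronometer described above: starting from pure 239Pu at the purification time, and treating 235U (half-life ~7×10^8 years) as effectively stable over forensic timescales, the atom ratio grows as N(235U)/N(239Pu) = exp(λt) − 1, which is approximately linear in t for short times. The half-life used below is a literature value and the measured ratio is an illustrative number, not data from this article.

import math

PU239_HALF_LIFE_Y = 24_110.0   # years; literature value, assumed here

def age_from_ratio(u235_per_pu239):
    # Invert N(235U)/N(239Pu) = exp(lambda * t) - 1 for the elapsed time t,
    # assuming the material was pure 239Pu at the last purification.
    lam = math.log(2) / PU239_HALF_LIFE_Y
    return math.log(1.0 + u235_per_pu239) / lam

# Illustrative measurement: 5.7e-4 atoms of 235U per atom of 239Pu.
print(f"{age_from_ratio(5.7e-4):.1f} years since last purification")  # ~19.8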
The decay of 233U, for example, has a t1/2 ≈ 1.6×10^5 years, which is rapid in comparison to many species, and yields 229Th, which emits an α particle that is isoenergetic (i.e., has the same energy) with that of the parent. To avoid this, freshly prepared samples as well as complementary analysis methods are used for confident nuclear materials characterization. The decay of nuclear samples makes rapid analysis methods highly desirable. Separations Chemical separation techniques are frequently utilized in nuclear forensics as a method of reducing interferences and facilitating the measurement of low-level radionuclides. Rapid purification is ideal, as progeny in-growth begins immediately following purification. Anion Exchange Anion exchange separation methods are widely used in the purification of actinides and actinide-bearing materials through the use of resin columns. The anionic actinide complexes are retained by anion exchange sites on the resin, and neutral species pass through the column unretained. The retained species can then be eluted from the column by conversion to a neutral complex, typically by changing the mobile phase passed through the resin bed. Anion exchange-based separations of actinides, while valued for their simplicity and widely used, tend to be time-consuming and are infrequently automated. Most are still dependent on gravity-driven flow. Speeding up the flow of the mobile phase tends to introduce problems such as impurities and can jeopardize future investigations. Hence, there is still a need for development of this technique to satisfy nuclear forensic research priorities. Co-Precipitation Actinide isolation by co-precipitation is frequently used for samples of relatively large volumes to concentrate analytes and remove interferences. Actinide carriers include iron hydroxides, lanthanide fluorides/hydroxides, manganese dioxide, and a few other species. Analysis A wide range of instrumental techniques are employed in nuclear forensics. Radiometric counting techniques are useful when determining decay products of species with short half-lives. However, for longer half-lives, inorganic mass spectrometry is a powerful means of carrying out elemental analysis and determining isotopic relationships. Microscopy approaches can also be useful in characterization of a nuclear material. Counting Techniques Counting of α, β, γ, or neutron emissions can be used for the analysis of nuclear forensic materials that emit decay species. The most common of these are alpha and gamma spectroscopy. β counting is used infrequently because most short-lived β emitters also give off characteristic γ-rays and produce very broad counting peaks. Neutron counting is found more rarely in analytical labs, due in part to shielding concerns should such neutron emitters be introduced into a counting facility. Alpha-particle spectroscopy Alpha-particle spectroscopy is a method of measuring radionuclides based on the emission of α particles. They can be measured by a variety of detectors, including liquid scintillation counters, gas ionization detectors, and ion-implanted silicon semiconductor detectors. Typical alpha-particle spectrometers have low backgrounds and measure particles ranging from 3 to 10 MeV. Radionuclides that decay through α emission tend to eject α particles with discrete, characteristic energies between 4 and 6 MeV. These energies are attenuated as the particles pass through the layers of the sample.
Increasing the distance between the source and the detector can lead to improved resolution, but decreased particle detection efficiency. The advantages of alpha-particle spectroscopy include relatively inexpensive equipment costs, low backgrounds, high selectivity, and good throughput capabilities with the use of multi-chamber systems. There are also disadvantages of alpha-particle spectroscopy. One disadvantage is that significant sample preparation is needed to obtain useful counting sources. Another is that spectral interferences or artifacts can arise from the extensive preparation prior to counting; to minimize these, high-purity acids are needed. Measurements also require a large quantity of material, which can lead to poor resolution, and undesired spectral overlap and long analysis times are further disadvantages. Gamma Spectroscopy Gamma spectroscopy yields results that are conceptually equivalent to those of alpha-particle spectroscopy; however, it can result in sharper peaks due to reduced attenuation of energy. Some radionuclides produce discrete γ-rays with energies between a few keV and 10 MeV, which can be measured with a gamma-ray spectrometer. This can be accomplished without destroying the sample. The most common gamma-ray detector is a semiconductor germanium detector, which allows for greater energy resolution than alpha-particle spectroscopy; however, gamma spectroscopy typically has an efficiency of only a few percent. Gamma spectroscopy is a less sensitive method due to low detector efficiency and high background. However, gamma spectroscopy has the advantage of less time-consuming sample procedures and portable detectors for field use. Mass Spectrometry Mass spectrometry techniques are essential in nuclear forensic analysis. They can provide elemental and isotopic information, and they require less sample mass than counting techniques. For nuclear forensic purposes it is essential that the mass spectrometer offers excellent resolution in order to distinguish between similar analytes, e.g. 235U and 236U. Ideally, it should offer excellent mass resolution and abundance sensitivity, low backgrounds, and proper instrumental function. Thermal Ionization MS In thermal ionization mass spectrometry, small quantities of highly purified analyte are deposited onto a clean metal filament. Rhenium or tungsten are typically used. The sample is heated under vacuum in the ion source by applying a current to the filament. A portion of the analyte is ionized by the filament and then directed down the flight tube and separated on the basis of mass-to-charge ratio. Major disadvantages include time-consuming sample preparation and inefficient analyte ionization. Multi-Collector Inductively Coupled Plasma-Mass Spectrometry This is a frequently used technique in nuclear forensics. In this technique a purified sample is nebulized in a spray chamber and then aspirated into a plasma. The high temperature of the plasma leads to sample dissociation and high efficiency of ionization of the analyte. The ions then enter the mass spectrometer, where they are discriminated by mass using a double-focusing system. Ions of various masses are detected simultaneously by a bank of detectors similar to those used in thermal ionization mass spectrometry. MC-ICP-MS offers more rapid analysis because it does not require lengthy filament preparation. For high-quality results, however, extensive sample cleanup is required.
The argon plasma is also less stable, and the technique requires relatively expensive equipment as well as skilled operators. Secondary-Ion MS SIMS is a micro-analytical technique valuable for three-dimensional analysis of a material's elemental composition and isotopic ratios. This method can be utilized in characterization of bulk materials with a detection limit in the low parts per billion (10^−9, or ng/g) range. Particles as small as a few hundreds of nanometers can be detected. Ion production in this technique is dependent on the bombardment of solid samples with a focused beam of primary ions. The sputtered, secondary ions are directed into the mass spectrometer to be measured. The secondary ions are a result of kinetic energy transfer from the primary ions. These primary ions penetrate into the solid sample to some depth. This method can be used to detect any element; however, the sputtering process is highly matrix dependent and ion yields vary. This method is especially useful because it can be fully automated to find uranium particles in a sample of many millions of particles in a matter of hours. Particles of interest can then be imaged and further analyzed with very high isotopic precision. Additional Nuclear Forensic Methods Numerous additional approaches may be employed in the interrogation of seized nuclear material. In contrast to the previously mentioned analysis techniques, these approaches have received relatively little attention in recent years in terms of novel advancement and typically require greater quantities of sample. Scanning electron microscope The scanning electron microscope can provide images of an object's surface at high magnification with a resolution on the order of nanometers. A focused beam of energetic electrons is scanned over the sample, and electrons that are backscattered or emitted from the sample surface are detected. Images are constructed by measuring the fluctuations in detected electrons as a function of the beam's scanning position. These data are useful in determining what process may have been employed in the material's production and in distinguishing between materials of differing origins. Measurement of backscattered electrons elucidates the average atomic number of the area being scanned. The emitted, or secondary, electrons provide topographical information. This is a relatively straightforward technique; however, samples must be amenable to being under a vacuum and may require pre-treatment. X-Ray Fluorescence X-ray fluorescence offers rapid and non-destructive determination of the elemental composition of a nuclear material based on the detection of characteristic X-rays. Direct sample irradiation allows for minimal sample preparation and portable instrumentation for field deployment. The detection limit is around 10 ppm, which is well above that of mass spectrometry. This technique tends to be hindered by matrix effects, which must be corrected for. Neutron Activation Analysis Neutron activation analysis is a powerful non-destructive method of analyzing elements of mid to high atomic number. This method combines excitation by nuclear reaction with radiation counting techniques to detect various materials. The measurement of characteristic radiation, following the completion of bombardment, is indicative of the elements of interest. The equation for the production of the product is given by: ^A X + n → ^(A+1) X* → ^(A+1) X + γ, where ^A X is the starting analyte, n is the incoming neutron, ^(A+1) X* is the excited product, and γ is the detected radiation that results from the de-excitation of the product species.
The advantages of this technique include multi-element analysis, excellent sensitivity, high selectivity, and the absence of time-consuming separation procedures. One disadvantage is the requirement of a nuclear reactor for sample irradiation. X-Ray Absorption Spectroscopy X-Ray absorption spectroscopy (XAS) has been demonstrated as a technique for nuclear forensic investigations involving uranium speciation. Both the lower energy near-edge (XANES) and higher energy fine structure (EXAFS) analytical methods may be useful for this type of characterisation. Typically, XANES is employed to determine the oxidation state of the absorbing uranium atom, while EXAFS can be used to determine its local atomic environment. This spectroscopic method, when coupled with X-Ray diffraction (XRD), would be of most benefit to complex nuclear forensic investigations involving species of different oxidation states. Objective Colour Analysis Objective colour analysis can be performed using digital images taken with a digital camera, either in the field or in a laboratory. This method was developed to replace subjective colour reporting, such as by-eye observations, with quantitative RGB and HSV values. The method has previously been demonstrated on the thermal treatment of uranyl peroxide powders, which yield distinctive yellow to brown hues. Hence, this method is noted as particularly useful in determining thermal processing history, especially where colour changes occur in uranium compounds of various oxidation states. References Forensic techniques Nuclear interdisciplinary topics
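To make the chronometric "back-calculation" described in the Chronometry section concrete, here is a short sketch in Python (not part of the source). It treats the 239Pu → 235U pair discussed above, assumes a single perfect purification at time zero, and uses a literature half-life for 239Pu; the measured atom ratio in the example is invented purely for illustration.

import math

T_HALF_PU239_YR = 24110.0                        # 239Pu half-life in years (literature value)
LAMBDA_PU239 = math.log(2) / T_HALF_PU239_YR     # decay constant, 1/yr

def model_age_years(ratio_u235_to_pu239):
    """Time since a hypothetical perfect Pu/U separation, from the 235U/239Pu atom ratio.

    Assumes all 235U present grew in from 239Pu alpha decay:
        N_U235(t)  = N0 * (1 - exp(-lambda * t))
        N_Pu239(t) = N0 * exp(-lambda * t)
    so the ratio R = exp(lambda * t) - 1, hence t = ln(1 + R) / lambda.
    """
    return math.log(1.0 + ratio_u235_to_pu239) / LAMBDA_PU239

# Invented example: a measured atom ratio of 1.15e-3 corresponds to roughly 40 years
# since the last purification (in the linear regime, R is approximately lambda * t).
print(round(model_age_years(1.15e-3), 1))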
Nuclear forensics
[ "Physics" ]
3,575
[ "Nuclear interdisciplinary topics", "Nuclear physics" ]
36,023,644
https://en.wikipedia.org/wiki/Tubulin%20domain
Tubulin/FtsZ family, GTPase domain is an evolutionarily conserved protein domain. This domain is found in all tubulin chains, as well as the bacterial FtsZ family of proteins. These proteins are involved in polymer formation. Tubulin is the major component of microtubules, while FtsZ is the polymer-forming protein of bacterial cell division; it is part of a ring in the middle of the dividing cell that is required for constriction of the cell membrane and cell envelope to yield two daughter cells. FtsZ and tubulin are GTPases, and this entry represents their GTPase domain. FtsZ can polymerise into tubes, sheets, and rings in vitro and is ubiquitous in bacteria and archaea. References Protein domains
Tubulin domain
[ "Biology" ]
152
[ "Protein domains", "Protein classification" ]
36,023,674
https://en.wikipedia.org/wiki/Misato%20segment%20II%20myosin-like%20domain
The Misato segment II myosin-like domain is an evolutionary conserved protein domain. The misato protein contains three distinct, conserved domains, segments I, II and III and is involved in the regulation of mitochondrial distribution and morphology. Segments I and III are common to tubulins (INTERPRO), but segment II aligns with myosin heavy chain sequences from Drosophila melanogaster (Fruit fly, SWISSPROT), rabbit (SWISSPROT), and human. Segment II of misato is a major contributor to its greater length compared with the various tubulins. The most significant sequence similarities to this 54-amino acid region are from a motif found in the heavy chains of myosins from different organisms. A comparison of segment II with the vertebrate myosin heavy chains reveals that it is homologous to a myosin peptide in the hinge region linking the S2 and LMM domains. Segment II also contains heptad repeats which are characteristic of the myosin tail alpha-helical coiled-coils. Deletion of the budding yeast homologue is lethal and unregulated expression leads to mitochondrial dispersion and abnormalities in cell morphology. The group of proteins containing this domain is conserved from yeast to human, but its exact function is still unknown. References Protein domains
Misato segment II myosin-like domain
[ "Biology" ]
277
[ "Protein domains", "Protein classification" ]
36,026,354
https://en.wikipedia.org/wiki/Boolean%20hierarchy
The boolean hierarchy is the hierarchy of boolean combinations (intersection, union and complementation) of NP sets. Equivalently, the boolean hierarchy can be described as the class of boolean circuits over NP predicates. A collapse of the boolean hierarchy would imply a collapse of the polynomial hierarchy. Formal definition BH is defined as follows: BH1 is NP. BH2k is the class of languages which are the intersection of a language in BH2k-1 and a language in coNP. BH2k+1 is the class of languages which are the union of a language in BH2k and a language in NP. BH is the union of all the BHi classes. Derived classes DP (Difference Polynomial Time) is BH2. Equivalent definitions Defining the conjunction and the disjunction of classes as follows allows for more compact definitions. The conjunction of two classes contains the languages that are the intersection of a language of the first class and a language of the second class. Disjunction is defined in a similar way with the union in place of the intersection. C ∧ D = { A ∩ B | A ∈ C and B ∈ D } C ∨ D = { A ∪ B | A ∈ C and B ∈ D } According to this definition, DP = NP ∧ coNP. The other classes of the Boolean hierarchy can be defined as follows. The following equalities can be used as alternative definitions of the classes of the Boolean hierarchy: Alternatively, for every k ≥ 3: Hardness Hardness for classes of the Boolean hierarchy can be proved by showing a reduction from a number of instances of an arbitrary NP-complete problem A. In particular, given a sequence {x1, ... xm} of instances of A such that xi ∈ A implies xi-1 ∈ A, a reduction is required that produces an instance y such that y ∈ B if and only if the number of xi ∈ A is odd or even: BH2k-hardness is proved if m = 2k and the number of xi ∈ A is odd BH2k+1-hardness is proved if m = 2k+1 and the number of xi ∈ A is even Such reductions work for every fixed k. If such reductions exist for arbitrary k, the problem is hard for PNP[O(log n)]. References Hierarchy
Boolean hierarchy
[ "Mathematics", "Technology" ]
477
[ "Mathematical logic", "Computer science stubs", "Computer science", "Computing stubs", "Mathematical logic hierarchies" ]
24,751,678
https://en.wikipedia.org/wiki/C15H21N
The molecular formula C15H21N (molar mass: 215.34 g/mol) may refer to: 8A-PDHQ, also known as 8a-Phenyldecahydroquinoline Fencamfamin, also known as fencamfamine Molecular formulas
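A quick arithmetic check of the molar mass quoted above (Python, not part of the source; standard atomic weights are assumed):

# Molar mass of C15H21N from standard atomic weights (g/mol)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007}
FORMULA = {"C": 15, "H": 21, "N": 1}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(round(molar_mass, 2))   # -> 215.34 g/mol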
C15H21N
[ "Physics", "Chemistry" ]
77
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,758,132
https://en.wikipedia.org/wiki/Constant%20%28mathematics%29
In mathematics, the word constant conveys multiple meanings. As an adjective, it refers to non-variance (i.e. unchanging with respect to some other value); as a noun, it has two different meanings: A fixed and well-defined number or other non-changing mathematical object, or the symbol denoting it. The terms mathematical constant or physical constant are sometimes used to distinguish this meaning. A function whose value remains unchanged (i.e., a constant function). Such a constant is commonly represented by a variable which does not depend on the main variable(s) in question. For example, a general quadratic function is commonly written as: ax² + bx + c, where a, b and c are constants (coefficients or parameters), and x a variable—a placeholder for the argument of the function being studied. A more explicit way to denote this function is x ↦ ax² + bx + c, which makes the function-argument status of x (and by extension the constancy of a, b and c) clear. In this example a, b and c are coefficients of the polynomial. Since c occurs in a term that does not involve x, it is called the constant term of the polynomial and can be thought of as the coefficient of x⁰. More generally, any polynomial term or expression of degree zero (no variable) is a constant. Constant function A constant may be used to define a constant function that ignores its arguments and always gives the same value. A constant function of a single variable, such as f(x) = 5, has a graph of a horizontal line parallel to the x-axis. Such a function always takes the same value (in this case 5), because the variable does not appear in the expression defining the function. Context-dependence The context-dependent nature of the concept of "constant" can be seen in this example from elementary calculus: the derivative of x² is computed as the limit, as h approaches 0, of ((x + h)² − x²)/h, during which x is treated as a constant, and the result, 2x, then has the constant coefficient 2. "Constant" means not depending on some variable; not changing as that variable changes. In the first case above, it means not depending on h; in the second, it means not depending on x. A constant in a narrower context could be regarded as a variable in a broader context. Notable mathematical constants Some values occur frequently in mathematics and are conventionally denoted by a specific symbol. These standard symbols and their values are called mathematical constants. Examples include: 0 (zero). 1 (one), the natural number after zero. π (pi), the constant representing the ratio of a circle's circumference to its diameter, approximately equal to 3.141592653589793238462643. e, approximately equal to 2.718281828459045235360287. i, the imaginary unit such that i² = −1. √2 (square root of 2), the length of the diagonal of a square with unit sides, approximately equal to 1.414213562373095048801688. φ (golden ratio), approximately equal to 1.618033988749894848204586, or algebraically, (1 + √5)/2. Constants in calculus In calculus, constants are treated in several different ways depending on the operation. For example, the derivative (rate of change) of a constant function is zero. This is because constants, by definition, do not change. Their derivative is hence zero. Conversely, when integrating a constant function, the constant is multiplied by the variable of integration. During the evaluation of a limit, a constant remains the same as it was before and after evaluation. Integration of a function of one variable often involves a constant of integration. This arises due to the fact that the integral is the inverse (opposite) of the derivative, meaning that the aim of integration is to recover the original function before differentiation.
The derivative of a constant function is zero, as noted above, and the differential operator is a linear operator, so functions that only differ by a constant term have the same derivative. To acknowledge this, a constant of integration is added to an indefinite integral; this ensures that all possible solutions are included. The constant of integration is generally written as 'c', and represents a constant with a fixed but undefined value. Examples If f is the constant function such that f(x) = 5 for every x, then its derivative is f′(x) = 0 and its indefinite integral is ∫ f(x) dx = 5x + c. See also Constant (disambiguation) Expression Level set List of mathematical constants Physical constant References External links Algebra Elementary mathematics
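A symbolic check of the example above, using Python with SymPy (illustrative only, not part of the original article):

import sympy as sp

x = sp.symbols('x')
f = sp.Integer(5)            # the constant function f(x) = 5

print(sp.diff(f, x))         # 0   : the derivative of a constant is zero
print(sp.integrate(f, x))    # 5*x : integrating a constant multiplies it by x
                             # (SymPy omits the arbitrary constant of integration c)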
Constant (mathematics)
[ "Mathematics" ]
867
[ "Elementary mathematics", "Algebra" ]
24,758,677
https://en.wikipedia.org/wiki/Continuum%20%28set%20theory%29
In the mathematical field of set theory, the continuum means the real numbers, or the corresponding (infinite) cardinal number, denoted by 𝔠. Georg Cantor proved that the cardinality 𝔠 is larger than the smallest infinity, namely, ℵ0. He also proved that 𝔠 is equal to 2^ℵ0, the cardinality of the power set of the natural numbers. The cardinality of the continuum is the size of the set of real numbers. The continuum hypothesis is sometimes stated by saying that no cardinality lies between that of the continuum and that of the natural numbers, ℵ0, or alternatively, that 𝔠 = ℵ1. Linear continuum According to Raymond Wilder (1965), there are four axioms that make a set C and the relation < into a linear continuum: C is simply ordered with respect to <. If [A,B] is a cut of C, then either A has a last element or B has a first element. (compare Dedekind cut) There exists a non-empty, countable subset S of C such that, if x,y ∈ C such that x < y, then there exists z ∈ S such that x < z < y. (separability axiom) C has no first element and no last element. (Unboundedness axiom) These axioms characterize the order type of the real number line. See also Aleph null Suslin's problem Transfinite number References Bibliography Raymond L. Wilder (1965) The Foundations of Mathematics, 2nd ed., page 150, John Wiley & Sons. Set theory Infinity
Continuum (set theory)
[ "Mathematics" ]
308
[ "Set theory", "Mathematical logic", "Mathematical objects", "Infinity", "Mathematical logic stubs" ]
24,760,321
https://en.wikipedia.org/wiki/Integrated%20pulmonary%20index
Integrated pulmonary index (IPI) is a patient pulmonary index which uses information from capnography and pulse oximetry to provide a single value that describes the patient's respiratory status. IPI is used by clinicians to quickly assess the patient's respiratory status to determine the need for additional clinical assessment or intervention. The IPI is a patient index which provides a simple indication in real time of the patient's overall ventilatory status as an integer ranging from 1 to 10. IPI integrates four major physiological parameters provided by a patient monitor, using this information along with an algorithm to produce the IPI score. The IPI score is not intended to replace current patient respiratory parameters, but to provide an additional integrated score or index of the patient ventilation status to the caregiver. Mechanism The IPI incorporates four patient parameters (end-tidal CO2 and respiratory rate measured by capnography, as well as pulse rate and blood oxygenation SpO2 as measured by pulse oximetry) into a single index value. The IPI value on the patient monitor indicates the patient ventilatory status, where a score of 10 is normal, indicating optimal pulmonary status, and a score of 1 or 2 requires immediate intervention. The IPI algorithm was developed based on the data from a group of medical experts (anesthesiologists, nurses, respiratory therapists, and physiologists) who evaluated cases with varying parameter values and who assigned an IPI value to a predefined patient status. A mathematical model was built using patient normal ranges for these parameters and the ratings given to various combinations of the parameters by these professionals. Fuzzy logic, a mathematical method which mimics human logical thinking, was used to develop the IPI model. Clinical validation studies indicate that the IPI value produced by the IPI algorithm accurately reflects the patient's ventilatory status. In studies on both adult and pediatric patients, in which experts’ ratings of ventilatory status were collected along with IPI data, the IPI scores were found to be highly correlated with the experts’ annotated ratings. Studies conducted to validate the index also concluded that the single numeric value of IPI along with the IPI trend may be valuable for promoting early awareness of changes in patient ventilatory status and in simplifying the monitoring of patients in busy clinical environments. How does IPI help clinicians? IPI is a real-time patient value, updated every second, always available to the caregiver. An IPI trend graph also shows IPI scores over the previous hour (or other set time period), indicating if the IPI is remaining steady or trending up or down, thus reflecting changes in pulmonary status over time. For example, a changing IPI score can indicate changes in the ventilatory status of the patient, such as the IPI improving after a stimulus is applied. IPI can promote early awareness of changes in a patient's ventilatory status. The caregiver can view the IPI trend, which indicates changes in IPI over time. A quick view of the IPI trend can show whether the IPI has changed over the previous minutes or hours, to help the clinician ascertain if the patient's overall ventilatory status is worsening, remaining steady, or improving. This information can help determine the next steps in patient care. Thus, IPI can simplify the monitoring of patients in clinical environments.
The caregiver can quickly and easily assess a patient's ventilatory status by following one number, the IPI, before checking the four parameters that make up this number. The four parameters continue to be displayed on the monitor screen. A significant change in the IPI is a “red flag” indicator, indicating that the clinician should review other monitored data and assess the patient. In the clinical environment, a quick check of the IPI value and IPI trend is a first indicator of pulmonary status of the patient and may be used to determine if further patient assessment is warranted. IPI can increase patient safety, by indicating the presence of slow-developing patient respiratory issues not easily identified with individual instantaneous data to the caregiver in real time. This enables timely decisions and interventions to reduce patient risk, improve outcomes and increase patient safety. Since normal values for the physiological parameters are different for different age categories, the IPI algorithm differs for different age groups (three pediatric age groups and adult). IPI is not available for neonatal and infant patients (up to the age of 1 year). See also Anesthesia Medical tests Footnotes References A Novel Integrated Pulmonary Index (IPI) Quantifies Heart Rate, Etco2, Respiratory Rate and SpO2%, Arthur Taft, Ph.D., Michal Ronen, Ph.D., Chad Epps, M.D., Jonathan Waugh, Ph.D., Richard Wales, B.S., presented at the Annual meeting of the American Society of Anesthesiologists, 2008 Reliability of the Integrated Pulmonary Index Postoperatively, D. Gozal, MD, Y. Gozal, MD, presented at Society for Technology in Anesthesia (STA) in 2009 The Integrated Pulmonary Index: Validity and Application in the Pediatric Population, D. Gozal, MD, Y. Gozal, MD, presented at Society for Technology in Anesthesia (STA) in 2009 Medical technology
Integrated pulmonary index
[ "Biology" ]
1,124
[ "Medical technology" ]
24,762,328
https://en.wikipedia.org/wiki/Seismic%20metamaterial
A seismic metamaterial is a metamaterial that is designed to counteract the adverse effects of seismic waves on artificial structures, which exist on or near the surface of the Earth. Current designs of seismic metamaterials utilize configurations of boreholes, trees or proposed underground resonators to act as a large-scale material. Experiments have observed both reflections and bandgap attenuation from artificially induced seismic waves. These are the first experiments to verify that seismic metamaterials can be measured for frequencies below 100 Hz, where damage from Rayleigh waves is most harmful to artificial structures. The mechanics of seismic waves More than a million earthquakes are recorded each year by a worldwide system of earthquake detection stations. The propagation velocity of the seismic waves depends on density and elasticity of the earth materials. In other words, the speeds of the seismic waves vary as they travel through different materials in the Earth. The two main components of a seismic event are body waves and surface waves. Both of these have different modes of wave propagation. Towards Seismic Cloaking Computations showed that seismic waves traveling toward a building could be directed around the building, leaving the building unscathed, by using seismic metamaterials. The very long wavelengths of earthquake waves would be shortened as they interact with the metamaterials; the waves would pass around the building so as to arrive in phase as the earthquake wave proceeded, as if the building was not there. The mathematical models produce the regular pattern provided by Metamaterial cloaking. This method was first understood with electromagnetic cloaking metamaterials - the electromagnetic energy is in effect directed around an object, or hole, and protecting buildings from seismic waves employs this same principle. Giant polymer-made split ring resonators combined with other metamaterials are designed to couple at the seismic wavelength. Concentric layers of this material would be stacked, each layer separated by an elastic medium. The design that worked is ten layers of six different materials, which can be easily deployed in building foundations. As of 2009, the project is still in the design stage. Electromagnetics cloaking principles for seismic metamaterials For seismic metamaterials to protect surface structures, the proposal includes a layered structure of metamaterials, separated by elastic plates in a cylindrical configuration. A prior simulation showed that it is possible to create concealment from electromagnetic radiation with concentric, alternating layers of electromagnetic metamaterials. That study is in contrast to concealment by inclusions in a split ring resonator designed as an anisotropic metamaterial. The configuration can be viewed as alternating layers of "homogeneous isotropic dielectric material" A with "homogeneous isotropic dielectric material" B. Each dielectric material is much thinner than the radiated wavelength. As a whole, such a structure is an anisotropic medium. The layered dielectric materials surround an "infinite conducting cylinder". The layered dielectric materials radiate outward, in a concentric fashion, and the cylinder is encased in the first layer. The other layers alternate and surround the previous layer all the way to the first layer.
Electromagnetic wave scattering was calculated and simulated for the layered (metamaterial) structure and the split-ring resonator anisotropic metamaterial, to show the effectiveness of the layered metamaterial. Acoustic cloaking principles for seismic metamaterials The theory and ultimate development for the seismic metamaterial is based on coordinate transformations achieved when concealing a small cylindrical object with electromagnetic waves. This was followed by an analysis of acoustic cloaking, and whether or not coordinate transformations could be applied to artificially fabricated acoustic materials. Applying the concepts used to understand electromagnetic materials to material properties in other systems shows them to be closely analogous. Wave vector, wave impedance, and direction of power flow are universal. By understanding how permittivity and permeability control these components of wave propagation, applicable analogies can be used for other material interactions. In most instances, applying coordinate transformation to engineered artificial elastic media is not possible. However, there is at least one special case where there is a direct equivalence between electromagnetics and elastodynamics. Furthermore, this case appears practically useful. In two dimensions, isotropic acoustic media and isotropic electromagnetic media are exactly equivalent. Under these conditions, the isotropic characteristic works in anisotropic media as well. It has been demonstrated mathematically that the 2D Maxwell equations with normal incidence apply to 2D acoustic equations when replacing the electromagnetic parameters with the following acoustic parameters: pressure, vector fluid velocity, fluid mass density and the fluid bulk modulus. The compressional wave solutions used in the electromagnetic cloaking are transferred to material fluidic solutions where fluid motion is parallel to the wavevector. The computations then show that coordinate transformations can be applied to acoustic media when restricted to normal incidence in two dimensions. Next the electromagnetic cloaking shell is referenced as an exact equivalence for a simulated demonstration of the acoustic cloaking shell. Bulk modulus and mass density determine the spatial dimensions of the cloak, which can bend any incident wave around the center of the shell. In a simulation with perfect conditions, because it is easier to demonstrate the principles involved, there is zero scattering in any direction. The seismic cloak However, it can be demonstrated through computation and visual simulation that the waves are in fact dispersed around the location of the building. The frequency range of this capability is shown to have no limitation regarding the radiated frequency. The cloak itself demonstrates no forward or back scattering, hence, the seismic cloak becomes an effective medium. Experiments on Seismic Metamaterials In 2012, researchers held an experimental field-test near Grenoble (France), with the aim to highlight analogy with phononic crystals. At the geophysics scale, in a forest in the Landes region of France in 2016, an ambitious seismic experiment called the METAFORET experiment demonstrated that trees could significantly modify the surface wavefield due to their coupled resonances when arranged at a subwavelength scale. A follow-up field experiment called the META-WT experiment was performed in the Nauen wind farm. 
This demonstrated for the first time that, at the city scale, the collective resonance of wind turbine structures can modify seismic waves propagating through the site. These new observations have implications for seismic hazard in a city where dense urban structures like tall buildings can strongly modify the wavefield. See also Negative index metamaterials Metamaterial antennas Photonic crystal Superlens Split-ring resonator Terahertz metamaterials Tunable metamaterials Photonic metamaterials Material properties Acoustic dispersion Bulk modulus Constitutive equation Elastic wave Equation of state Linear elasticity Permeability Permittivity Stress (mechanics) Thermodynamic state References Seismology Metamaterials Continuum mechanics
Seismic metamaterial
[ "Physics", "Materials_science", "Engineering" ]
1,404
[ "Metamaterials", "Materials science", "Classical mechanics", "Continuum mechanics" ]
23,263,587
https://en.wikipedia.org/wiki/Specific%20ion%20interaction%20theory
In theoretical chemistry, Specific ion Interaction Theory (SIT theory) is a theory used to estimate single-ion activity coefficients in electrolyte solutions at relatively high concentrations. It does so by taking into consideration interaction coefficients between the various ions present in solution. Interaction coefficients are determined from equilibrium constant values obtained with solutions at various ionic strengths. The determination of SIT interaction coefficients also yields the value of the equilibrium constant at infinite dilution. Background This theory arises from the need to derive activity coefficients of solutes when their concentrations are too high to be predicted accurately by the Debye–Hückel theory. Activity coefficients are needed because an equilibrium constant is defined in chemical thermodynamics as the ratio of activities but is usually measured using concentrations. The protonation of a monobasic acid will be used to simplify the presentation. The equilibrium for protonation of the conjugate base, A− of the acid HA, may be written as: H+ + A- <=> HA for which the association constant K is defined as: where {HA}, {H+}, and {A–} represent the activity of the corresponding chemical species. The role of water in the association equilibrium is ignored as in all but the most concentrated solutions the activity of water is constant. K is defined here as an association constant, the reciprocal of an acid dissociation constant. Each activity term { } can be expressed as the product of a concentration [ ] and an activity coefficient γ. For example, where the square brackets represent a concentration and γ is an activity coefficient. Thus the equilibrium constant can be expressed as a product of a concentration ratio and an activity coefficient ratio. Taking the logarithms: where: at infinite dilution of the solution K0 is the hypothetical value that the equilibrium constant K would have if the solution of the acid HA was infinitely diluted and that the activity coefficients of all the species in solution were equal to one. It is a common practice to determine equilibrium constants in solutions containing an electrolyte at high ionic strength such that the activity coefficients are effectively constant. However, when the ionic strength is changed the measured equilibrium constant will also change, so there is a need to estimate individual (single ion) activity coefficients. Debye–Hückel theory provides a means to do this, but it is accurate only at very low concentrations. Hence the need for an extension to Debye–Hückel theory. Two main approaches have been used. SIT theory, discussed here and Pitzer equations. Development SIT theory was first proposed by Brønsted in 1922 and was further developed by Guggenheim in 1955. Scatchard extended the theory in 1936 to allow the interaction coefficients to vary with ionic strength. The theory was mainly of theoretical interest until 1945 because of the difficulty of determining equilibrium constants before the glass electrode was invented. Subsequently, Ciavatta developed the theory further in 1980. The activity coefficient of the jth ion in solution is written as γj when concentrations are on the molal concentration scale and as yj when concentrations are on the molar concentration scale. (The molality scale is preferred in thermodynamics because molal concentrations are independent of temperature). 
The basic idea of SIT theory is that the activity coefficient of the jth ion can be expressed as log10 γj = −zj² · 0.509√I/(1 + 1.5√I) + Σk ε(j,k) mk (molalities) or log10 yj = −zj² · 0.509√I/(1 + 1.5√I) + Σk b(j,k) ck (molar concentrations), where z is the electrical charge on the ion, I is the ionic strength, ε and b are interaction coefficients, m and c are concentrations, and the numerical coefficients 0.509 and 1.5 are the values conventionally used for aqueous solutions at 25 °C. The summation extends over the other ions present in solution, which includes the ions produced by the background electrolyte. The first term in these expressions comes from Debye–Hückel theory. The second term shows how the contributions from "interaction" are dependent on concentration. Thus, the interaction coefficients are used as corrections to Debye–Hückel theory when concentrations are higher than the region of validity of that theory. The activity coefficient of a neutral species can be assumed to depend linearly on ionic strength, as in log10 γ = km I, where km is a Sechenov coefficient. In the example of a monobasic acid HA, assuming that the background electrolyte is the salt NaNO3, the interaction coefficients needed will be ε(H+, NO3−) for the interaction between H+ and NO3−, and ε(A−, Na+) for that between A− and Na+. Determination and application Firstly, equilibrium constants are determined at a number of different ionic strengths, at a chosen temperature and particular background electrolyte. The interaction coefficients are then determined by fitting to the observed equilibrium constant values. The procedure also provides the value of K at infinite dilution. It is not limited to monobasic acids and can also be applied to metal complexes. The SIT and Pitzer approaches have been compared recently. The Bromley equation has also been compared to both SIT and Pitzer equations. It has been shown that the SIT equation is a practical simplification of a more complicated hypothesis, that is rigorously applicable only at trace concentrations of reactant and product species immersed in a surrounding electrolyte medium. References External links SIT program A PC program to correct stability constants for changes in ionic strength using SIT theory and to estimate SIT parameters with full statistics. Contains an editable database of published SIT parameters. It also provides routines to inter-convert MolaRities (c) and MolaLities (m), and lg K(c) and lg K(m). Equilibrium chemistry Thermodynamics
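A small numerical sketch of the SIT expression above (Python, not from the source). The Debye–Hückel coefficients 0.509 and 1.5 are the values conventionally used for water at 25 °C, and the interaction coefficient and solution composition in the example are assumed, illustrative numbers.

import math

def log10_gamma_sit(z, ionic_strength, interactions):
    """SIT estimate of log10 of a single-ion activity coefficient (molality scale).

    z               : charge of the ion of interest
    ionic_strength  : molal ionic strength I
    interactions    : list of (epsilon_jk, m_k) pairs for the other ions present
    """
    sqrt_i = math.sqrt(ionic_strength)
    debye_huckel = -z**2 * 0.509 * sqrt_i / (1.0 + 1.5 * sqrt_i)   # Debye-Huckel term
    specific = sum(eps * m for eps, m in interactions)             # interaction term
    return debye_huckel + specific

# Illustrative only: H+ in 1.0 mol/kg NaNO3, with an assumed epsilon(H+, NO3-) = 0.07
print(log10_gamma_sit(z=1, ionic_strength=1.0, interactions=[(0.07, 1.0)]))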
Specific ion interaction theory
[ "Physics", "Chemistry", "Mathematics" ]
1,105
[ "Equilibrium chemistry", "Thermodynamics", "Dynamical systems" ]
23,264,409
https://en.wikipedia.org/wiki/Dynamic%20structure%20factor
In condensed matter physics, the dynamic structure factor (or dynamical structure factor) is a mathematical function that contains information about inter-particle correlations and their time evolution. It is a generalization of the structure factor that considers correlations in both space and time. Experimentally, it can be accessed most directly by inelastic neutron scattering or X-ray Raman scattering. The dynamic structure factor is most often denoted S(k, ω), where k (sometimes q) is a wave vector (or wave number for isotropic materials), and ω a frequency (sometimes stated as energy, ħω). It is defined as: S(k, ω) = (1/2π) ∫ F(k, t) e^(−iωt) dt. Here F(k, t) is called the intermediate scattering function and can be measured by neutron spin echo spectroscopy. The intermediate scattering function is the spatial Fourier transform of the van Hove function G(r, t): F(k, t) = ∫ G(r, t) e^(−ik·r) dr. Thus we see that the dynamical structure factor is the spatial and temporal Fourier transform of van Hove's time-dependent pair correlation function. It can be shown (see below) that the intermediate scattering function is the correlation function of the Fourier components of the density ρ: F(k, t) = (1/N) ⟨ρk(t) ρ−k(0)⟩, with ρk(t) = Σj e^(−ik·rj(t)). The dynamic structure factor is exactly what is probed in coherent inelastic neutron scattering. The differential cross section is proportional to b² (kf/ki) S(k, ω), where b is the scattering length. The van Hove function The van Hove function for a spatially uniform system containing N point particles is defined as: G(r, t) = (1/N) ⟨ Σj Σk δ(r + rk(0) − rj(t)) ⟩. It can be rewritten as: G(r, t) = (1/N) ⟨ ∫ ρ(r′ + r, t) ρ(r′, 0) dr′ ⟩. References Further reading Lovesey, Stephen W. (1986). Theory of Neutron Scattering from Condensed Matter - Volume I: Nuclear Scattering. Oxford University Press. Condensed matter physics Neutron scattering
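The following sketch (Python with NumPy, not part of the source) estimates the intermediate scattering function and, from it, a crude dynamic structure factor for a stored particle trajectory, using the density-correlation form given above; the trajectory is synthetic random-walk data solely to keep the example self-contained.

import numpy as np

def dynamic_structure_factor(positions, q, dt):
    """Estimate F(q, t) and S(q, omega) from a trajectory.

    positions : array of shape (n_frames, n_particles, 3)
    q         : wave vector, shape (3,)
    dt        : time step between frames

    Uses rho_q(t) = sum_j exp(-i q . r_j(t)) and F(q, t) = <rho_q(t) rho_q*(0)> / N,
    followed by a temporal FFT as a rough estimate of S(q, omega).
    """
    n_frames, n_particles, _ = positions.shape
    rho_q = np.exp(-1j * positions @ q).sum(axis=1)          # shape (n_frames,)

    # time autocorrelation, averaged over all available time origins
    corr = np.array([np.mean(rho_q[lag:] * np.conj(rho_q[:n_frames - lag]))
                     for lag in range(n_frames)]) / n_particles

    s_q_omega = np.abs(np.fft.fft(corr.real)) * dt           # crude one-sided spectral estimate
    omega = 2 * np.pi * np.fft.fftfreq(n_frames, d=dt)
    return corr, omega, s_q_omega

# Self-contained demo with a synthetic random-walk trajectory (not real simulation data)
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(scale=0.1, size=(256, 50, 3)), axis=0)
F, omega, S = dynamic_structure_factor(traj, q=np.array([1.0, 0.0, 0.0]), dt=0.01)
print(F[0].real, S.shape)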
Dynamic structure factor
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
303
[ "Materials science stubs", "Neutron scattering", "Scattering stubs", "Phases of matter", "Materials science", "Scattering", "Condensed matter physics", "Condensed matter stubs", "Matter" ]
23,266,515
https://en.wikipedia.org/wiki/Arf%20semigroup
In mathematics, Arf semigroups are certain subsets of the non-negative integers closed under addition, that were studied by . They appeared as the semigroups of values of Arf rings. A subset of the integers forms a monoid if it includes zero, and if every two elements in the subset have a sum that also belongs to the subset. In this case, it is called a "numerical semigroup". A numerical semigroup is called an Arf semigroup if, for every three elements x, y, and z with z = min(x, y, z), the semigroup also contains the element x + y − z. For instance, the set containing zero and all even numbers greater than 10 is an Arf semigroup. References Semigroup theory
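A brute-force check of the Arf condition for the example given above (Python, not from the source); the membership rule encodes the set consisting of zero together with all even numbers greater than 10, and the search is truncated at an arbitrary bound.

def in_s(n):
    """Membership test for the example set: zero together with all even numbers > 10."""
    return n == 0 or (n > 10 and n % 2 == 0)

def is_arf_up_to(bound):
    """Check that x + y - z is in the set for all x >= y >= z in the set, up to 'bound'."""
    elems = [n for n in range(bound + 1) if in_s(n)]
    for x in elems:
        for y in elems:
            for z in elems:
                if x >= y >= z and not in_s(x + y - z):
                    return False
    return True

print(is_arf_up_to(60))   # True: within this range the example satisfies the Arf condition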
Arf semigroup
[ "Mathematics" ]
159
[ "Semigroup theory", "Fields of abstract algebra", "Mathematical structures", "Algebraic structures" ]
23,267,052
https://en.wikipedia.org/wiki/DC%20block
DC blocks are coaxial components that prevent the flow of audio and direct current (DC) frequencies while offering minimum interference to RF signals. There are three basic forms of DC blocks. "Inner only" models have a capacitor in series with the center conductor, "outer only" models have a capacitor in series with the outer conductor, and "inner/outer" models have capacitors in series with both the inner and outer conductors. The insulation material on the outer models is non-conductive. Applications include ground loop elimination, signal source modulation leakage suppression, system signal-to-noise ratio improvement, test setup isolation and other situations where undesired DC or audio current flows in the system. DC blocks serve a wide range of practical functions, primarily in systems where undesired DC or audio currents can degrade performance. One of their key applications is in eliminating ground loops, which are common sources of hum and noise in audio and video systems. DC blocks also help suppress signal leakage, such as modulation leakage in signal sources, thereby improving the system’s signal integrity. They can enhance the signal-to-noise ratio (SNR) in sensitive communication systems by preventing the intrusion of unwanted currents. Additionally, DC blocks are used in test setups to isolate different parts of the system and prevent interference, ensuring more accurate measurements and maintaining the quality of the overall signal transmission. See also Bias tee Choke (electronics) DC-blocking capacitor External links What Is a DC Block? // wiseGEEK DC Blocks & Bias Tees Electrical components Electrical wiring
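To make the "blocks DC, passes RF" behaviour concrete, the short sketch below (Python, not from the source) evaluates the series reactance of an inner-conductor blocking capacitor at several frequencies; the 100 pF capacitance and the 50-ohm system impedance are illustrative assumptions, not values taken from the article.

import math

C = 100e-12   # assumed blocking capacitance: 100 pF
Z0 = 50.0     # typical coaxial system impedance, ohms

for f in (0.0, 50.0, 1e6, 1e9):            # DC, mains hum, 1 MHz, 1 GHz
    if f == 0.0:
        print("DC: reactance is infinite -> no direct current flows")
    else:
        xc = 1.0 / (2 * math.pi * f * C)   # series reactance of the capacitor
        print(f"{f:>12.0f} Hz: Xc = {xc:12.2f} ohm (vs Z0 = {Z0} ohm)")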
DC block
[ "Physics", "Technology", "Engineering" ]
324
[ "Electrical components", "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring", "Components" ]
23,268,911
https://en.wikipedia.org/wiki/Asphalt%20roll%20roofing
Asphalt roll roofing or membrane is a roofing material commonly used for buildings that feature a low sloped roof pitch in North America. The material is based on the same materials used in asphalt shingles; an organic felt or fiberglass mat, saturated with asphalt, and faced with granular stone aggregate. Overview Roll roofing is usually restricted to a lightweight mat compared to shingles, as it must be rolled for shipment. Rolls are typically by in size. Due to its light weight compared to shingles, roll roofing is regarded as an inexpensive, temporary material. Its broad width makes it vulnerable to temperature-induced shrinkage and tearing as it expands and contracts. Other names for this material are "asphalt prepared roofing, asphaltic felt, cold-process roofing, prepared roofing, rolled roofing, rolled strip roofing, roofing felt, sanded bituminous felt, saturated felt, self-finished roofing felt." Roll roofing is normally applied parallel to the eaves from the bottom of the roof upwards, lapping each new roll in the same manner as shingles. Its use is restricted to roofs with a pitch of less than 2:12. To avoid penetrating the exposed membrane with nails, adhesive or "lap cement" must be used at the bottom edge to keep it from being lifted by the wind. The upper edge of the roll is nailed and covered by the next roll. Historical Development: The asphalt-prepared roofing industry had its beginning in Sweden more than a century ago, when roof boards were covered with paper treated with wood tar. Later, in Germany, paper was coated with varnish, surfaced with finely ground mineral matter, and used as a roofing material. In the United States, asphalt was used to waterproof duck fabric in the early part of the nineteenth century. The first recorded use of melted asphalt for impregnating duck fabric in this country was in 1844. About this time roofs composed of sheets of sheathing paper treated with pine tar and pine pitch, and surfaced with fine sand, were being laid. Coal tar and coal-tar pitch were later substituted for the pine tar. These were the forerunners of the present asphalt and coal-tar-pitch built-up roofs. It is not known definitely when felt was first substituted for sheathing paper or when asphalt was first used as the impregnating agent, but it is known that the first asphalt-prepared roofing, that is, roofing manufactured ready to apply, was marketed in 1893. The first roofing was not surfaced. Mineral-surfaced, asphalt-prepared roofing appeared in 1897. The first asphalt shingles, mineral-surfaced, were made in 1901, and about this time slate granules were first used as a surfacing material. Asphalt shingles did not come into general use until about 1911. During 1939, thirty-two manufacturers, representing about 95 percent of the asphalt-prepared roofing industry, produced 34,225,187 squares of prepared roofing. Almost one third of this, 11,173,856 squares, was in the form of asphalt shingles, which are used principally for roofing dwellings. The shingles produced in 1939 were sufficient to cover more than 1,000,000 dwellings, assuming an average size of 10 squares per roof. In two surveys of roofing materials in 20 Eastern States, made during 1938, the kinds of roofing materials on 20,841 dwellings along 4,038 miles of highway were tabulated. Of these dwellings, 6,549 were roofed with asphalt shingles and 2,381 with asphalt roll roofing. Thus, almost 43 percent of these dwellings were roofed with asphalt-prepared roofing.
Statistics of the Bureau of the Census indicate that asphalt-prepared roll roofing and shingles constituted slightly less than half of all the roofing materials sold during 1937. Uses The main uses are: for outbuildings on flat roofs on houses in the UK, a low cost limited life roofing method as a backup water catching & wind stopping layer under roofing slates & tiles Types Several variations of bitumen roofing felt are available. Single coverage thicknesses range from 55 to 90 pounds per square (100 sq. ft.) for single-coverage Double coverage range from 110 to 140 pounds per square. Fibre content: mixed rag fibre - lowest cost, shortest life all plastic fibres fibreglass - longest lived Bitumen: bitumen - stiffens & hardens in winter, cracks in time modified bitumen - stays supple in winter, lasts better Underside: Uncoated - most common, applied with adhesive or nails Self-adhesive - simpler to apply Torchable - applied by torching the underside, which partly melts and glues the sheet. (Most roofing felt is torchable.) Topping: Sand - low cost Stone waste - prettier, better life expectancy. Only used on capsheet. Uncoated - used as undersheet Application methods Glue in place with bitumen/solvent mix Nail in place – relies on the clout nail head being driven slightly under the surface for a pressure seal. Waterproofing not quite perfect, a water durable timber layer is used under the felt, usually OSB or ply. Most common method on sheds. Hot bitumen Torch on – underside of felt melted with a torch and pressed in place References Building materials Roofs
Asphalt roll roofing
[ "Physics", "Technology", "Engineering" ]
1,099
[ "Structural engineering", "Building engineering", "Architecture", "Structural system", "Construction", "Materials", "Roofs", "Matter", "Building materials" ]
23,270,645
https://en.wikipedia.org/wiki/Beta-decay%20stable%20isobars
Beta-decay stable isobars are the set of nuclides which cannot undergo beta decay, that is, the transformation of a neutron to a proton or a proton to a neutron within the nucleus. A subset of these nuclides is also stable with regard to double beta decay or, theoretically, higher-order simultaneous beta decay, as they have the lowest energy of all isobars with the same mass number. This set of nuclides is also known as the line of beta stability, a term already in common use in 1965. This line lies along the bottom of the nuclear valley of stability. Introduction The line of beta stability can be defined mathematically by finding the nuclide with the greatest binding energy for a given mass number, using a model such as the classical semi-empirical mass formula developed by C. F. Weizsäcker. These nuclides are local maxima in terms of binding energy for a given mass number. All odd mass numbers have only one beta decay stable nuclide. Among even mass numbers, five (124, 130, 136, 150, 154) have three beta-stable nuclides. None have more than three; all others have either one or two. From 2 to 34, all have only one. From 36 to 72, only eight (36, 40, 46, 50, 54, 58, 64, 70) have two, and the remaining 11 have one. From 74 to 122, three (88, 90, 118) have one, and the remaining 22 have two. From 124 to 154, only one (140) has one, five have three, and the remaining 10 have two. From 156 to 262, only eighteen have one, and the remaining 36 have two, though there may also exist some undiscovered ones. All primordial nuclides are beta decay stable, with the exception of 40K, 50V, 87Rb, 113Cd, 115In, 138La, 176Lu, and 187Re. In addition, 123Te and 180mTa have not been observed to decay, but are believed to undergo beta decay with an extremely long half-life (over 10^15 years). (123Te can only undergo electron capture to 123Sb, whereas 180mTa can decay in both directions, to 180Hf or 180W.) Among non-primordial nuclides, there are some other cases of theoretically possible but never-observed beta decay, notably including 222Rn and 247Cm (the most stable isotopes of their elements considering all decay modes). Finally, 48Ca and 96Zr have not been observed to undergo single beta decay (theoretically possible for both, but extremely suppressed); double beta decay, however, is known for both. Similar suppression of single beta decay occurs also for 148Gd, a rather short-lived alpha emitter. All elements up to and including nobelium, except technetium, promethium, and mendelevium, are known to have at least one beta-stable isotope. It is known that technetium and promethium have no beta-stable isotopes; current measurement uncertainties are not enough to say whether mendelevium has them or not. List of known beta-decay stable isobars 346 nuclides (including an isotope of Fm whose discovery is unconfirmed) have been definitively identified as beta-stable. Theoretically predicted or experimentally observed double beta decay is shown by arrows, i.e. arrows point toward the lightest-mass isobar. This is sometimes dominated by alpha decay or spontaneous fission, especially for the heavy elements. Observed decay modes are listed as α for alpha decay, SF for spontaneous fission, and n for neutron emission in the special case of 5He. For mass 5 there are no bound isobars at all; mass 8 has bound isobars, but the beta-stable 8Be is unbound.
Two beta-decay stable nuclides exist for odd neutron numbers 1 (2H and 3He), 3 (5He and 6Li – the former has an extremely short half-life), 5 (9Be and 10B), 7 (13C and 14N), 55 (97Mo and 99Ru), and 85 (145Nd and 147Sm); the first four cases involve very light nuclides where odd-odd nuclides are more stable than their surrounding even-even isobars, and the last two surround the proton numbers 43 and 61 which have no beta-stable isotopes. Also, two beta-decay stable nuclides exist for odd proton numbers 1, 3, 5, 7, 17, 19, 29, 31, 35, 47, 51, 63, 77, 81, and 95; the first four cases involve very light nuclides where odd-odd nuclides are more stable than their surrounding even-even isobars, and the other numbers surround the neutron numbers 19, 21, 35, 39, 45, 61, 71, 89, 115, 123, 147 which have no beta-stable isotopes. (For N = 21 the long-lived primordial 40K exists, and for N = 71 there is 123Te whose electron capture has not yet been observed, but neither are beta-stable.) All even proton numbers 2 ≤ Z ≤ 102 have at least two beta-decay stable nuclides, with exactly two for Z = 4 (8Be and 9Be – the former having an extremely short half-life) and 6 (12C and 13C). Also, the only even neutron numbers with only one beta-decay stable nuclide are 0 (1H) and 2 (4He); at least two beta-decay stable nuclides exist for even neutron numbers in the range 4 ≤ N ≤ 160, with exactly two for N = 4 (7Li and 8Be), 6 (11B and 12C), 8 (15N and 16O), 66 (114Cd and 116Sn, noting also primordial but not beta-stable 115In), 120 (198Pt and 200Hg), and 128 (212Po and 214Rn – both very unstable to alpha decay). Seven beta-decay stable nuclides exist for the magic N = 82 (136Xe, 138Ba, 139La, 140Ce, 141Pr, 142Nd, and 144Sm) and five for N = 20 (36S, 37Cl, 38Ar, 39K, and 40Ca), 50 (86Kr, 88Sr, 89Y, 90Zr, and 92Mo, noting also primordial but not beta-stable 87Rb), 58 (100Mo, 102Ru, 103Rh, 104Pd, and 106Cd), 74 (124Sn, 126Te, 127I, 128Xe, and 130Ba), 78 (130Te, 132Xe, 133Cs, 134Ba, and 136Ce), 88 (148Nd, 150Sm, 151Eu, 152Gd, and 154Dy – the last not primordial), and 90 (150Nd, 152Sm, 153Eu, 154Gd, and 156Dy). For A ≤ 209, the only beta-decay stable nuclides that are not primordial nuclides are 5He, 8Be, 146Sm, 150Gd, and 154Dy. (146Sm has a half-life long enough that it should barely survive as a primordial nuclide, but it has never been experimentally confirmed as such.) All beta-decay stable nuclides with A ≥ 209 are known to undergo alpha decay, though for some, spontaneous fission is the dominant decay mode. Cluster decay is sometimes also possible, but in all known cases it is a minor branch compared to alpha decay or spontaneous fission. Alpha decay is energetically possible for all beta-stable nuclides with A ≥ 165 with the single exception of 204Hg, but in most cases the Q-value is small enough that such decay has never been seen. With the exception of 262No, no nuclides with A > 260 are currently known to be beta-stable. Moreover, the known beta-stable nuclei for individual masses A = 222, A = 256, and A ≥ 258 (corresponding to proton numbers Z = 86 and Z ≥ 98, or to neutron numbers N = 136 and N ≥ 158) may not represent the complete set. The general patterns of beta-stability are expected to continue into the region of superheavy elements, though the exact location of the center of the valley of stability is model dependent. 
It is widely believed that an island of stability exists along the beta-stability line for isotopes of elements around copernicium that are stabilized by shell closures in the region; such isotopes would decay primarily through alpha decay or spontaneous fission. Beyond the island of stability, various models that correctly predict many known beta-stable isotopes also predict anomalies in the beta-stability line that are unobserved in any known nuclides, such as the existence of two beta-stable nuclides with the same odd mass number. This is a consequence of the fact that a semi-empirical mass formula must consider shell correction and nuclear deformation, which become far more pronounced for heavy nuclides. The beta-stable fully ionized nuclei (with all electrons stripped) are somewhat different. Firstly, if a proton-rich nuclide can only decay by electron capture (because the energy difference between the parent and daughter is less than 1.022 MeV, the amount of decay energy needed for positron emission), then full ionization makes decay impossible. This happens for example for 7Be. Moreover, sometimes the energy difference is such that while β− decay violates conservation of energy for a neutral atom, bound-state β− decay (in which the decay electron remains bound to the daughter in an atomic orbital) is possible for the corresponding bare nucleus. Within the range , this means that 163Dy, 193Ir, 205Tl, 215At, and 243Am among beta-stable neutral nuclides cease to be beta-stable as bare nuclides, and are replaced by their daughters 163Ho, 193Pt, 205Pb, 215Rn, and 243Cm (bound-state β− decay has been observed for 163Dy, 205Tl and is predicted for 193Ir, 215At, 243Am). Beta decay toward minimum mass Beta decay generally causes nuclides to decay toward the isobar with the lowest mass (which is often, but not always, the one with highest binding energy) with the same mass number. Those with lower atomic number and higher neutron number than the minimum-mass isobar undergo beta-minus decay, while those with higher atomic number and lower neutron number undergo beta-plus decay or electron capture. However, there are a few odd-odd nuclides between two beta-stable even-even isobars, that predominantly decay to the higher-mass of the two beta-stable isobars. For example, 40K could either undergo electron capture or positron emission to 40Ar, or undergo beta minus decay to 40Ca: both possible products are beta-stable. The former process would produce the lighter of the two beta-stable isobars, yet the latter is more common. Isotope masses from: Notes References External links Decay-Chains https://www-nds.iaea.org/relnsd/NdsEnsdf/masschain.html (Russian) Beta-decay stable nuclides up to Z = 118 (data for Z ≥ 102 are predictions) Nuclear physics
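The decay rules discussed in this section can be stated compactly in terms of atomic masses: beta-minus decay is allowed when the parent atom is heavier than the isobar with one more proton, electron capture when it is heavier than the isobar with one fewer proton, and positron emission only when that second mass difference also exceeds 1.022 MeV (twice the electron rest energy). The sketch below encodes exactly this bookkeeping; the numbers in the example call are placeholders, not evaluated masses.

# Which beta-type decays are energetically allowed between neighbouring isobars,
# judged from atomic masses or mass excesses (any common unit, here MeV).
TWO_ME = 1.022  # twice the electron rest energy in MeV, the positron-emission threshold

def allowed_decays(m_parent, m_lower_Z, m_higher_Z):
    """m_lower_Z / m_higher_Z: atomic masses of the isobars with Z-1 and Z+1 protons."""
    decays = []
    if m_parent > m_higher_Z:
        decays.append("beta-minus")          # Q = m_parent - m_higher_Z > 0
    if m_parent > m_lower_Z:
        decays.append("electron capture")    # Q = m_parent - m_lower_Z > 0
    if m_parent - m_lower_Z > TWO_ME:
        decays.append("beta-plus")           # needs an extra 1.022 MeV
    return decays or ["beta-stable against single beta decay"]

# Hypothetical mass excesses in MeV (not real data), just to show the call:
print(allowed_decays(m_parent=-80.0, m_lower_Z=-80.5, m_higher_Z=-81.0))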
Beta-decay stable isobars
[ "Physics" ]
2,390
[ "Nuclear physics" ]
23,271,188
https://en.wikipedia.org/wiki/Format-preserving%20encryption
In cryptography, format-preserving encryption (FPE) refers to encrypting in such a way that the output (the ciphertext) is in the same format as the input (the plaintext). The meaning of "format" varies. Typically only finite sets of characters are used; numeric, alphabetic or alphanumeric. For example: Encrypting a 16-digit credit card number so that the ciphertext is another 16-digit number. Encrypting an English word so that the ciphertext is another English word. Encrypting an n-bit number so that the ciphertext is another n-bit number (this is the definition of an n-bit block cipher). For such finite domains, and for the purposes of the discussion below, the cipher is equivalent to a permutation of the N integers {0, ..., N − 1}, where N is the size of the domain. Motivation Restricted field lengths or formats One motivation for using FPE comes from the problems associated with integrating encryption into existing applications with well-defined data models. A typical example would be a credit card number, such as 1234567812345670 (16 bytes long, digits only). Adding encryption to such applications might be challenging if data models are to be changed, as it usually involves changing field length limits or data types. For example, output from a typical block cipher would turn a credit card number into a hexadecimal value (e.g. 0x96a45cbcf9c2a9425cde9e274948cb67, 34 bytes, hexadecimal digits) or a Base64 value (e.g. lqRcvPnCqUJc3p4nSUjLZw==, 24 bytes, alphanumeric and special characters), which will break any existing applications expecting the credit card number to be a 16-digit number. Apart from simple formatting problems, using AES-128-CBC, this credit card number might get encrypted to the hexadecimal value 0xde015724b081ea7003de4593d792fd8b695b39e095c98f3a220ff43522a2df02. In addition to the problems caused by creating invalid characters and increasing the size of the data, data encrypted using the CBC mode of an encryption algorithm also changes its value when it is decrypted and encrypted again. This happens because the random seed value that is used to initialize the encryption algorithm, and is included as part of the encrypted value, is different for each encryption operation. Because of this, it is impossible to use data that has been encrypted with the CBC mode as a unique key to identify a row in a database. FPE attempts to simplify the transition process by preserving the formatting and length of the original data, allowing a drop-in replacement of plaintext values with their ciphertexts in legacy applications. Comparison to truly random permutations Although a truly random permutation is the ideal FPE cipher, for large domains it is infeasible to pre-generate and remember a truly random permutation. So the problem of FPE is to generate a pseudorandom permutation from a secret key, in such a way that the computation time for a single value is small (ideally constant, but most importantly smaller than O(N)). Comparison to block ciphers An n-bit block cipher technically is an FPE on the set {0, ..., 2^n − 1}. If an FPE is needed on one of these standard-sized sets (for example, n = 64 for DES and n = 128 for AES) a block cipher of the right size can be used. However, in typical usage, a block cipher is used in a mode of operation that allows it to encrypt arbitrarily long messages, and with an initialization vector as discussed above. In this mode, a block cipher is not an FPE. 
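The infeasibility argument above is easy to make concrete. The sketch below (Python, illustrative only, and in no sense a secure cipher) builds an explicit random permutation table for a small domain; the point is that the table needs one entry per domain element, which is exactly what becomes impossible to store when N is on the order of 10^16.

import random

def random_permutation_cipher(n, seed):
    """Explicit lookup-table 'cipher' on {0,...,n-1}: ideal behaviour, but O(n) memory."""
    rng = random.Random(seed)          # the 'secret key' of this toy construction
    table = list(range(n))
    rng.shuffle(table)
    inverse = [0] * n
    for x, y in enumerate(table):
        inverse[y] = x
    encrypt = lambda x: table[x]
    decrypt = lambda y: inverse[y]
    return encrypt, decrypt

enc, dec = random_permutation_cipher(10_000, seed=42)   # fine for N = 10^4
assert dec(enc(1234)) == 1234
# For a 16-digit credit-card domain, N = 10^16 entries is far too large to store,
# which is why FPE constructions derive the permutation from a short key instead.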
Definition of security In cryptographic literature (see most of the references below), the measure of a "good" FPE is whether an attacker can distinguish the FPE from a truly random permutation. Various types of attackers are postulated, depending on whether they have access to oracles or known ciphertext/plaintext pairs. Algorithms In most of the approaches listed here, a well-understood block cipher (such as AES) is used as a primitive to take the place of an ideal random function. This has the advantage that incorporation of a secret key into the algorithm is easy. Where AES is mentioned in the following discussion, any other good block cipher would work as well. The FPE constructions of Black and Rogaway Implementing FPE with security provably related to that of the underlying block cipher was first undertaken in a paper by cryptographers John Black and Phillip Rogaway, which described three ways to do this. They proved that each of these techniques is as secure as the block cipher that is used to construct it. This means that if the AES algorithm is used to create an FPE algorithm, then the resulting FPE algorithm is as secure as AES because an adversary capable of defeating the FPE algorithm can also defeat the AES algorithm. Therefore, if AES is secure, then the FPE algorithms constructed from it are also secure. In all of the following, E denotes the AES encryption operation that is used to construct an FPE algorithm and F denotes the FPE encryption operation. FPE from a prefix cipher One simple way to create an FPE algorithm on {0, ..., N − 1} is to assign a pseudorandom weight to each integer, then sort by weight. The weights are defined by applying an existing block cipher to each integer. Black and Rogaway call this technique a "prefix cipher" and showed it was provably as good as the block cipher used. Thus, to create an FPE on the domain {0,1,2,3}, given a key K apply AES(K) to each integer, giving, for example, weight(0) = 0x56c644080098fc5570f2b329323dbf62 weight(1) = 0x08ee98c0d05e3dad3eb3d6236f23e7b7 weight(2) = 0x47d2e1bf72264fa01fb274465e56ba20 weight(3) = 0x077de40941c93774857961a8a772650d Sorting [0,1,2,3] by weight gives [3,1,2,0], so the cipher is F(0) = 3 F(1) = 1 F(2) = 2 F(3) = 0 This method is only useful for small values of N. For larger values, the size of the lookup table and the required number of encryptions to initialize the table get too big to be practical. FPE from cycle walking If there is a set M of allowed values within the domain of a pseudorandom permutation P (for example, P can be a block cipher like AES), an FPE algorithm can be created from the block cipher by repeatedly applying the block cipher until the result is one of the allowed values (within M). CycleWalkingFPE(x) { if P(x) is an element of M then return P(x) else return CycleWalkingFPE(P(x)) } The recursion is guaranteed to terminate. (Because P is one-to-one and the domain is finite, repeated application of P forms a cycle, so starting with a point in M the cycle will eventually terminate in M.) This has the advantage that the elements of M do not have to be mapped to a consecutive sequence {0,...,N−1} of integers. It has the disadvantage, when M is much smaller than P's domain, that too many iterations might be required for each operation. If P is a block cipher of a fixed size, such as AES, this is a severe restriction on the sizes of M for which this method is efficient. For example, an application may want to encrypt 100-bit values with AES in a way that creates another 100-bit value. 
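A minimal sketch of the prefix cipher follows. To keep it self-contained it uses HMAC-SHA-256 from the Python standard library as the keyed pseudorandom function in place of AES (an assumption made purely for convenience; Black and Rogaway describe the construction in terms of a block cipher), and, as noted above, it only makes sense for small domains because the whole table is built up front.

import hmac, hashlib

def prefix_cipher(n, key):
    """Build an FPE on {0,...,n-1} by sorting the integers by a keyed pseudorandom weight."""
    def weight(i):
        return hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    order = sorted(range(n), key=weight)          # order[x] = integer with rank x
    forward = {x: order[x] for x in range(n)}     # F maps x to the x-th integer after sorting
    backward = {v: k for k, v in forward.items()}
    return forward, backward

F, F_inv = prefix_cipher(4, key=b"sixteen-byte-key")
assert all(F_inv[F[x]] == x for x in range(4))
print([F[x] for x in range(4)])   # some permutation of [0, 1, 2, 3]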
With this technique, AES-128-ECB encryption can be applied until it reaches a value which has all of its 28 highest bits set to 0, which will take an average of 2^28 iterations to happen. FPE from a Feistel network It is also possible to make an FPE algorithm using a Feistel network. A Feistel network needs a source of pseudo-random values for the sub-keys for each round, and the output of the AES algorithm can be used as these pseudo-random values. When this is done, the resulting Feistel construction is good if enough rounds are used. One way to implement an FPE algorithm using AES and a Feistel network is to use as many bits of AES output as are needed to equal the length of the left or right halves of the Feistel network. If a 24-bit value is needed as a sub-key, for example, it is possible to use the lowest 24 bits of the output of AES for this value. This may not result in the output of the Feistel network preserving the format of the input, but it is possible to iterate the Feistel network in the same way that the cycle-walking technique does to ensure that format can be preserved. Because it is possible to adjust the size of the inputs to a Feistel network, it is possible to make it very likely that this iteration ends very quickly on average. In the case of credit card numbers, for example, there are 10^15 possible 16-digit credit card numbers (accounting for the redundant check digit), and because 10^15 ≈ 2^49.8, using a 50-bit wide Feistel network along with cycle walking will create an FPE algorithm that encrypts fairly quickly on average. The Thorp shuffle A Thorp shuffle is like an idealized card shuffle, or equivalently a maximally-unbalanced Feistel cipher where one side is a single bit. It is easier to prove security for unbalanced Feistel ciphers than for balanced ones. VIL mode For domain sizes that are a power of two, and an existing block cipher with a smaller block size, a new cipher may be created using VIL mode as described by Bellare and Rogaway. Hasty Pudding Cipher The Hasty Pudding Cipher uses custom constructions (not depending on existing block ciphers as primitives) to encrypt arbitrary finite small domains. The FFSEM/FFX mode of AES The FFSEM mode of AES (specification) that has been accepted for consideration by NIST uses the Feistel network construction of Black and Rogaway described above, with AES for the round function, with one slight modification: a single key is used and is tweaked slightly for each round. As of February 2010, FFSEM has been superseded by the FFX mode (specification) written by Mihir Bellare, Phillip Rogaway, and Terence Spies. FPE for JPEG 2000 encryption In the JPEG 2000 standard, the marker codes (in the range 0xFF90 through 0xFFFF) should not appear in the plaintext or the ciphertext. The simple modular-0xFF90 technique cannot be applied to solve the JPEG 2000 encryption problem. For example, the ciphertext words 0x23FF and 0x9832 are valid, but their combination 0x23FF9832 becomes invalid since it introduces the marker code 0xFF98. Similarly, the simple cycle-walking technique cannot be applied to solve the JPEG 2000 encryption problem since two valid ciphertext blocks may give invalid ciphertext when they get combined. For example, if the first ciphertext block ends with bytes "...30FF" and the second ciphertext block starts with bytes "9832...", then the marker code "0xFF98" would appear in the ciphertext. 
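The Feistel-plus-cycle-walking approach for decimal domains can be sketched as follows. This is a deliberately simplified toy: a balanced Feistel network over 54 bits (the smallest even width covering 10^16 values) with HMAC-SHA-256 standing in for the AES-based round function, combined with cycle walking so that 16-digit inputs map to 16-digit outputs. It is not FF1 or FF3 and should not be taken as the NIST-specified construction; it only illustrates the mechanics described above.

import hmac, hashlib

DIGITS = 16
DOMAIN = 10 ** DIGITS            # the 10^16 possible 16-digit values
BITS = 54                        # smallest even width with 2^54 > 10^16
HALF = BITS // 2
MASK = (1 << HALF) - 1
ROUNDS = 10

def round_function(key, r, value):
    """Keyed pseudorandom function returning HALF bits (HMAC-SHA-256 stands in for AES)."""
    msg = r.to_bytes(1, "big") + value.to_bytes(8, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") & MASK

def feistel(key, x, reverse=False):
    """Balanced Feistel permutation of the 54-bit integers, invertible with reverse=True."""
    left, right = x >> HALF, x & MASK
    rounds = range(ROUNDS - 1, -1, -1) if reverse else range(ROUNDS)
    for r in rounds:
        if reverse:
            left, right = right ^ round_function(key, r, left), left
        else:
            left, right = right, left ^ round_function(key, r, right)
    return (left << HALF) | right

def encrypt_16_digits(key, plaintext):
    assert 0 <= plaintext < DOMAIN
    x = feistel(key, plaintext)
    while x >= DOMAIN:                      # cycle-walk back into the decimal domain
        x = feistel(key, x)
    return x

def decrypt_16_digits(key, ciphertext):
    x = feistel(key, ciphertext, reverse=True)
    while x >= DOMAIN:
        x = feistel(key, x, reverse=True)
    return x

key = b"demo-key-not-for-production"
c = encrypt_16_digits(key, 1234_5678_1234_5670)
assert decrypt_16_digits(key, c) == 1234_5678_1234_5670
print(c)   # another integer below 10^16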
Two mechanisms for format-preserving encryption of JPEG 2000 were given in the paper "Efficient and Secure Encryption Schemes for JPEG2000" by Hongjun Wu and Di Ma. To perform format-preserving encryption of JPEG 2000, the technique is to exclude the byte "0xFF" in the encryption and decryption. Then a JPEG 2000 encryption mechanism performs modulo-n addition with stream cipher; another JPEG 2000 encryption mechanism performs the cycle-walking technique with block cipher. Other FPE constructions Several FPE constructs are based on adding the output of a standard cipher, modulo n, to the data to be encrypted, with various methods of unbiasing the result. The modulo-n addition shared by many of the constructs is the immediately obvious solution to the FPE problem (thus its use in a number of cases), with the main differences being the unbiasing mechanisms used. Section 8 of the FIPS 74, Federal Information Processing Standards Publication 1981 Guidelines for Implementing and Using the NBS Data Encryption Standard, describes a way to use the DES encryption algorithm in a manner that preserves the format of the data via modulo-n addition followed by an unbiasing operation. This standard was withdrawn on May 19, 2005, so the technique should be considered obsolete in terms of being a formal standard. Another early mechanism for format-preserving encryption was Peter Gutmann's "Encrypting data with a restricted range of values" which again performs modulo-n addition on any cipher with some adjustments to make the result uniform, with the resulting encryption being as strong as the underlying encryption algorithm on which it is based. The paper "Using Datatype-Preserving Encryption to Enhance Data Warehouse Security" by Michael Brightwell and Harry Smith describes a way to use the DES encryption algorithm in a way that preserves the format of the plaintext. This technique doesn't appear to apply an unbiasing step as do the other modulo-n techniques referenced here. The paper "Format-Preserving Encryption" by Mihir Bellare and Thomas Ristenpart describes using "nearly balanced" Feistel networks to create secure FPE algorithms. The paper "Format Controlling Encryption Using Datatype Preserving Encryption" by Ulf Mattsson describes other ways to create FPE algorithms. An example of FPE algorithm is FNR (Flexible Naor and Reingold). Acceptance of FPE algorithms by standards authorities NIST Special Publication 800-38G, "Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption" specifies two methods: FF1 and FF3. Details on the proposals submitted for each can be found at the NIST Block Cipher Modes Development site, including patent and test vector information. Sample values are available for both FF1 and FF3. FF1 is FFX[Radix] "Format-preserving Feistel-based Encryption Mode" which is also in standards processes under ANSI X9 as X9.119 and X9.124. It was submitted to NIST by Mihir Bellare of University of California, San Diego, Phillip Rogaway of University of California, Davis, and Terence Spies of Voltage Security Inc. Test vectors are supplied and parts of it are patented. (DRAFT SP 800-38G Rev 1) requires the minimum domain size of the data being encrypted to be 1 million (previously 100). FF3 is BPS named after the authors. It was submitted to NIST by Éric Brier, Thomas Peyrin and Jacques Stern of Ingenico in France. Authors declared to NIST that their algorithm is not patented. The CyberRes Voltage product, although claims to own patents also for BPS mode. 
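The modulo-n addition family of constructions described above can be illustrated with a very small sketch: a per-position keystream digit (derived here from HMAC-SHA-256 of a key, a tweak and the position, purely as a stand-in for whatever cipher a real scheme would use) is added digit-wise modulo 10. As the paragraph notes, a careful scheme needs an unbiasing step when the keystream values are not uniform modulo n; this sketch sidesteps the issue by reducing a 256-bit integer modulo 10, which leaves only a negligible bias.

import hmac, hashlib

def keystream_digit(key, tweak, position):
    """One pseudorandom decimal digit per position (stand-in keystream generator)."""
    msg = tweak + position.to_bytes(4, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % 10   # bias of order 10 / 2^256, negligible

def encrypt_digits(key, tweak, digits):
    return "".join(str((int(d) + keystream_digit(key, tweak, i)) % 10)
                   for i, d in enumerate(digits))

def decrypt_digits(key, tweak, digits):
    return "".join(str((int(d) - keystream_digit(key, tweak, i)) % 10)
                   for i, d in enumerate(digits))

key, tweak = b"demo-key", b"record-42"
c = encrypt_digits(key, tweak, "1234567812345670")
assert decrypt_digits(key, tweak, c) == "1234567812345670"
print(c)   # another 16-digit string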
On 12 April 2017, NIST concluded that FF3 is "no longer suitable as a general-purpose FPE method" because researchers found a vulnerability. FF3-1 (DRAFT SP 800-38G Rev 1) replaces FF3 and requires the minimum domain size of the data being encrypted to be 1 million (previously 100). Another mode was included in the draft NIST guidance but was removed before final publication. FF2 is the VAES3 scheme for FFX: an addendum to "The FFX Mode of Operation for Format-Preserving Encryption" providing a parameter collection for enciphering strings of arbitrary radix, with a subkey operation to lengthen the life of the enciphering key. It was submitted to NIST by Joachim Vance of VeriFone Systems Inc. Test vectors are not supplied separately from FF1 and parts of it are patented. The authors have submitted a modified algorithm, DFF, which is under active consideration by NIST. Korea has also developed FPE standards, FEA-1 and FEA-2. Implementations Open-source implementations of FF1 and FF3 are publicly available in C, Go, Java, Node.js, Python, C#/.NET and Rust. References Block ciphers Cryptography
Format-preserving encryption
[ "Mathematics", "Engineering" ]
3,589
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
23,271,791
https://en.wikipedia.org/wiki/Precipitation%20polymerization
In polymer science, precipitation polymerization is a heterogeneous polymerization process that begins as a homogeneous system in the continuous phase, where the monomer and initiator are completely soluble, but upon initiation the formed polymer is insoluble and thus precipitates. After precipitation, the polymerization proceeds by absorption of monomer and initiator into the polymer particles. Precipitation polymerization should be distinguished from the closely related dispersion polymerization. A dispersion polymerization is actually a type of precipitation polymerization, but the difference lies in the fact that precipitation polymerizations give larger and less regular particles, as a result of little or no stabilizer being present. References Polymerization reactions
Precipitation polymerization
[ "Chemistry", "Materials_science" ]
162
[ "Polymerization reactions", "Polymer chemistry" ]
23,271,819
https://en.wikipedia.org/wiki/Helmholtz%E2%80%93Kohlrausch%20effect
The Helmholtz–Kohlrausch effect (after Hermann von Helmholtz and V. A. Kohlrausch) is a perceptual phenomenon where some hues, even when of the same lightness, appear to be bolder than others. Explanation Any colored lights will seem brighter to human observers than pure white light. Oftentimes this makes more saturated colors actually seem lighter than shades of gray, no matter how bright they are. Certain colors do not have significant effect, however; any hue of colored lights still seem brighter than white light of the same brightness. Two colors that do not have as great of an Helmholtz–Kohlrausch effect as the others are green and yellow. The Helmholtz–Kohlrausch effect is affected by the viewing environment. This includes the surroundings of the object and the lighting that the object is being viewed under. The Helmholtz–Kohlrausch effect works best in darker environments where there are not any other outside factors influencing the colors. For example, this is why theaters are all dark environments, so more saturated colors on the screen can "pop out" even more than they would normally. An example of this lightness factor would be if there were different colors on a grey background that all are of the same lightness as it is, as in the image above right. Obviously the colors look different because they are different hues, not just gray, but if the image were converted all to grayscale, all of the colors would match the grey background because they all have the same lightness as it does. Brightness Perceived brightness is affected most by what is surrounding the object. In other words, the object can look lighter or darker depending on what is around it. In addition, the brightness can also appear different depending on the color of the object. For example, an object of a grayer color than the exact same object, but this time in a less gray color, will look darker, even when both are just as bright. The difference between brightness and lightness is that the brightness is the intensity of the object independent of the light source. Lightness is the brightness of the object in respect to the light reflecting on it. This is important because the Helmholtz–Kohlrausch effect is a measure of the ratio between the two. Helmholtz color coordinates Similar to the Munsell color system, Helmholtz designed a color coordinate system, where chromaticity is defined by dominant wavelength and purity (chroma). The percentage of purity for each wavelength can be determined by the equation below: where %P is the percent of purity, S is the point being assessed, N is the position of the white point, and DW the dominant wavelength. Modelling The Helmholtz–Kohlrausch effect has been described in mathematical models by Fairchild and Pirrotta 1991, Nayatani 1997, and most recently High, Green, and Nussbamm 2023. Given a color's CIELAB coordinates, these methods produce an adjusted "equivalent achromatic lightness" L*EAL, i.e. the shade of grey humans think is as bright as the color. Effects on industry Entertainment It is essential for lighting technicians to be aware of the Helmholtz–Kohlrausch effect when working in theaters or in other venues where lighting is often used. In order to get the greatest effect to illuminate their stage or theater, the lighting users need to understand that color has an effect on brightness. For example, one color may appear brighter than another but really they have the same brightness. 
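The percent-purity quantity defined above can be computed directly from chromaticity coordinates, assuming the usual colorimetric convention that excitation purity is the distance from the white point N to the sample S divided by the distance from N to the spectral-locus point DW at the dominant wavelength (that convention, and the numbers in the example, are assumptions of this sketch rather than values taken from the article).

from math import hypot

def excitation_purity(S, N, DW):
    """Percent purity of chromaticity S, given white point N and the point DW where
    the line from N through S meets the spectral locus (the dominant wavelength).
    All arguments are (x, y) chromaticity coordinates."""
    d_sample = hypot(S[0] - N[0], S[1] - N[1])
    d_locus = hypot(DW[0] - N[0], DW[1] - N[1])
    return 100.0 * d_sample / d_locus

# Illustrative numbers only: a sample one third of the way from the white point
# toward the spectral locus has a purity of about 33%.
white = (0.3127, 0.3290)          # CIE D65 white point (x, y)
locus = (0.640, 0.330)            # assumed dominant-wavelength point, roughly red
sample = tuple(w + (l - w) / 3 for w, l in zip(white, locus))
print(round(excitation_purity(sample, white, locus), 1))   # ~33.3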
On stage, lighting users have the ability to make a white light appear much brighter by adding a color gel. This occurs even though gels can only absorb some of the light. When lighting a stage, the lighting users tend to choose reds, pinks, and blues because they are highly saturated colors and are really very dim. However, we perceive them as being brighter than the other colors because they are most affected by the Helmholtz–Kohlrausch effect. We perceive that the color white does not look any brighter to us than individual colors. LED lights are a good example of this. Aviation The Helmholtz–Kohlrausch effect influences the use of LED lights in different technological practices. Aviation is one field that relies upon the results of the Helmholtz–Kohlrausch effect. A comparison of runway LED lamps and filtered and unfiltered incandescent lights all at the same luminance shows that in order to accomplish the same brightness, the white reference incandescent lamp needs to have twice the luminance of the red LED lamp, therefore suggesting that the LED lights do appear to have a greater brightness than the traditional incandescent lights. One condition that affects this theory is the presence of fog. Automotive Another field that uses this is the automotive industry. LEDs in the dashboard and instrument lighting are designed for use in mesopic luminance. In studies, it has been found that red LEDs appear brighter than green LEDs under these conditions, which means that a driver would be able to see red light more intensely and would thus be more alerting than green lights when driving at night. See also Color appearance model Bezold–Brücke shift References Further reading External links The geometry of color perception LED Projection Enters the Mainstream Optical phenomena Color appearance phenomena
Helmholtz–Kohlrausch effect
[ "Physics" ]
1,106
[ "Optical phenomena", "Physical phenomena", "Color appearance phenomena" ]
44,584,349
https://en.wikipedia.org/wiki/Modular%20smartphone
A modular smartphone is a smartphone designed for users to upgrade or replace components and modules without the need for resoldering or repair services. The most important component is the main board, to which others such as cameras and batteries are attached. Components can be obtained from open-source hardware stores. This design aims to reduce electronic waste, increase the phone's lifespan, and lower repair costs. However, modular smartphones are generally bulkier and slower than their non-modular counterparts which may make them less attractive for most consumers. Motivation Environmental impact and ethical considerations Consumers may be motivated to buy modular phones to bypass non-modular phones, which are designed with planned obsolescence. Planned obsolescence, originating from American industrial designer Brooks Stevens, is a strategy of selling phones to be replaced rather than repaired. Planned obsolescence in smartphones prematurely shortens their life spans, as users replace their smartphones earlier than necessary. This quick consumption cycle, caused by planned obsolescence, can lead to increased electronic waste.(Electronic waste is one of the world's fastest growing sources of waste.) Modular phones, which are repairable and do not need to be as frequently replaced, are considered as a sustainable consumer electronic. Modular phones have also been proposed as an ethically conscious alternative to annual phone release. However, the degree of benefits are unclear because modular phone companies can not accurately trace the origin of all their materials. In addition to the impact of disposal, the manufacturing of phones, which includes use of conflict minerals can result in soil degradation and heavy metal pollution. High amounts of energy, ore and processing power are required to obtain small quantities of the minerals used in the circuit board, display and battery of mobile phones. Repairability Consumers often prematurely replace their smartphones due to degradation of certain components that experience the most mechanical stress and are costly to repair (specifically the display, battery, or back cover). Modularity in smartphones promotes self-repair over repair services by enabling consumers to swap out faulty components for functional ones without incurring service or labor costs. The ability to self-repair creates positive user experience, which translates to higher satisfaction and brand loyalty. Customization and upgradability Modular phones are part of a trend in mass customization which propelled by consumers’ demand for new product iterations within shorter time frames. Companies like Fairphone and Google saw modular smartphones as a way to extend the life cycles of smartphones and their components while satisfying the consumer need for incremental customizations and upgrades. Such customization-intense platforms can have many resultant configurations. Component lending Modular components that can be lent out when they are not in use by the owner is a concept not yet realized, but is being considered as a viable option to reduce e-waste. Specialized components such as ultra high-definition cameras, condenser microphones, or barometers are generally costly to produce, and are only useful in very specific applications. These specialized components can be lent out to users on a per-need basis, thus reducing the number of units that need to be produced and increasing the number of people who can have access to otherwise hard-to-obtain equipment. 
History Modu (2008) The Modu Phone is a modular smartphone created by an Israeli company. The Modu Phone is the first modular smartphone and has a record as the world’s lightest hand-held mobile phone in the Guinness World Records. The Modu Phone is a ‘Jacket’ type modular smartphone that allows customers to chop and customize the style of their mobile phone by slipping it into various Modu jackets, also known as phone connector. The Modu jackets available for the customers were GPS, camera, MP3 player, and keyboards. The Modu Phone was first commercially launched in Israel in June 2009. The introductory Modu Phone kit was about $125 (500 Israeli shekels). The introductory Modu Phone kit contains 2GB of internal memory device and a music player jacket. In January 2011, Modu announced that the company was in debt and closed all operations in the following month. In May 2011, Google paid $4.9 million for the patents of the Modu company’s mobile phones, including the Modu Phone. Phonebloks (2013) In 2013, Phonebloks (a concept that was never manufactured) was the first modular smartphone concept that attracted widespread attention. First conceptualized by Dutch industrial designer, Dave Hakkens, this smartphone would have been made of detachable blocks that are connected to a base. Each detachable block would have had pins which transfer electrical signals to the base. To lock the device together, two small screws are used at the base. The concept of Phonebloks would not only have allowed a customer to easily replace broken components of the phone, rather than replacing the entire device, it also allow a customer to build and customize their perfect phone. This would have included upgrading to a larger storage block, or a better camera, depending on the user’s use of the component. Project Ara (2013) Inspired by the concept of Phonebloks, Google developed a modular smartphone project called Project Ara. This project was formerly headed by the Advanced Technology and Projects team of Motorola Mobility. The purpose of Project Ara was to develop a smartphone that could be repaired, rather than replaced entirely. It was hoped that it could be part of a solution to decrease the electronic waste produced from non-modular smartphones. Google's design consisted of one metal endoskeleton with several different hardware modules attached. These parts included the battery, the processor, the display screen, the camera, storage components, and speakers. In addition to reducing electronic waste, Project Ara also proposed to include a specialized Wifi module that would ensure a strong signal no matter the ISP. Project Ara's starter kit which includes the endoskeleton, CPU, battery, display, and Wifi was priced at $50. Due to the device's complexity, its need for constant upgrading, and lack of support from mobile carriers, Google abandoned Project Ara. Most consumers purchase their cell phones without a thorough understanding of the internal components, but purchasing a modular smartphone would force consumers to learn about how processors, RAMs, and storage impact a smartphone's functionality when looking for upgrades. In addition, big mobile companies did not support Project Ara because they directly profit from customers replacing their non-modular smartphones every few years. Finally, due to the constant advancements of hardware components, such as graphic cards, CPU, RAM, and storage cards, the modular smartphone would need to be constantly upgraded. 
This may ultimately create more electronic waste since more modules may need to be replaced more frequently than replacing a smartphone. Fairphone (2015) Fairphone is a modular smartphone created by a Dutch company, a social enterprise that aims to produce smartphones with the goal of having a lower environmental footprint. The first model of Fairphone, Fairphone 1, was released in 2013, and the most recent model, Fairphone 5, was released in August 2023. As of 2022, Fairphone 4 was priced at €579 and had sold around 400,000 devices in Europe. Fairphone 4 uses a Kryo 570 processor that can support 5G connectivity, with a Sony IMX363 camera sensor. According to the company, it has increased the lifespan of a phone by two years and achieved a decrease of 29% for the yearly Global Warming Potential impact category when extending the phone lifetime to 5 years and 42% of the GWP when extended to 7 years. Shiftphone (2015) Shiftphone is a modular smartphone created by the German company SHIFT. The first model of Shiftphone, SHIFT4 was released in 2015, and the most recent version was the SHIFT6mq released in June 2020. The next model is expected to be SHIFTphone 8, scheduled for release in 2023. Currently, the annual turnover of Shift is less than 1 million. To lower the inhibition threshold of self-repair, SHIFT provides video instructions via YouTube, and provides a repair service for customers. The company also offers hardware upgrade opportunities. The goal of the company is to provide spare parts for a period of ten years for the Shift 6mq released. Shiftphone and the company were criticized for not providing information regarding conflict-free material used in Shiftphone. The company also did not provide detailed audit reports about component suppliers. Challenges Technical limitations Modular smartphones are difficult to miniaturize, and as a result, they are generally bulkier, slower, and less sturdy than non-modular phones. Because a modular smartphone is separated into individual components, the distance between each of the components is significantly larger than that of non-modular phones. This increased bulkiness leads modular smartphones to having a shorter battery life and slower responsiveness because distances between components are directly correlated with data speeds and power efficiency; the larger the distance, the slower the speed and efficiency. Modular phones also rely on pre-manufactured components from different suppliers like InvenSense, Asahi Kasei, and Amotech that roughly fit different connecting pieces together. This uneven fitting of the different modules causes the device to function slower than non-modular smartphones, which have perfectly aligned components that increase device responsiveness. Furthermore, making pluggable modules that are more space-optimal would be difficult due to the complexity of hardware configurations. Separate modules not only take up more space, but they also require individualized and self-contained boxes in order to ensure each component can be safely handled, which also adds to the device's overall size. In contrast, non-modular phones, such as the iPhone produced by Apple Inc., the memory, the processor, and the graphics circuitry are all built into a single chip. This is able to foster a faster connection and a significantly smaller device. The intrinsically interchangeable nature of modular phones also poses a challenge as this characteristic makes these devices less sturdy. 
While Project Ara used latches and electropermanent magnets to achieve a more durable phone, the device still has a higher potential for breaking apart than non-modular smartphones because they rely on detachable components. In addition, due to the nature of modular smartphones having removable modules, as users pry modules off, replace them, and move them around, there is an increased possibility of breakage that exceeds that of non-modular devices. Market uncertainty There are also market uncertainties about consumer demand and distribution of modular smartphones. Currently, smartphone consumers prefer to have fast product iteration and individualization. There are concerns that consumers may be overwhelmed by the number of choices and would prefer pre-packaged phones, or that the modular smartphone distribution process lacks the agility to keep up with short product life-cycles. Therefore, the secondary component market's viability is unclear, until more products become available. In addition to uncertainties regarding consumer demand, there are concerns about whether smartphone providers have sufficient incentives to distribute modular smartphones. providers, like AT&T and Verizon, are profitable because of their trade-in policies and short-term contracts for phones. Therefore, these companies may not be receptive to selling and promoting modular smartphones that may result in fewer trade-ins if it may risk their own profits. While there are concerns, proponents hope that the technical challenges can be overcome and that a viable market ecosystem (the hardware version of an app store) will enable finer-grained competition that will benefit consumers with better and cheaper choices Modular phone platforms Current Fairphone 5, Fairphone 4, Fairphone 3, 2 and 1 by Fairphone Librem 5, by Purism Pinephone, by Pine64 Shift6mq, Shift6m and Shift5me by SHIFT In development SHIFTmu by SHIFT Discontinued Essential Phone by Essential Products LG G5 by LG Moto Z, Moto Z Force and Moto Z Play by Motorola Mobility Phonebloks Project Ara by Google See also References External links Google plans 2015 Project Ara launch in Puerto Rico, partnering with Ingram Micro, OpenMobile, and Claro. How Google’s Project Ara smartphone will be Project Ara official website Motorola Mobility Project Ara Blog Toshiba Project Ara Modules : camera, media bar, Wi-Fi, display module, wireless communication and solution for activity measuring module. Nexpaq rebrand Environmental impact of products Sustainable technologies
Modular smartphone
[ "Engineering" ]
2,515
[ "Modular design", "Modular smartphones" ]
44,585,187
https://en.wikipedia.org/wiki/Heliotron%20J
Heliotron J is a fusion research device in Japan, specifically a helical-axis heliotron designed to study plasma confinement in this type of device. It is located at the Institute of Advanced Energy of Kyoto University. References Stellarators Nuclear technology in Japan
Heliotron J
[ "Physics" ]
55
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
44,585,565
https://en.wikipedia.org/wiki/2219%20aluminium%20alloy
2219 aluminium alloy is an alloy in the wrought aluminium-copper family (2000 or 2xxx series). It can be heat-treated to produce tempers with higher strength but lower ductility. The aluminium-copper alloys have high strength, but are generally less corrosion resistant and harder to weld than other types of aluminium alloys. To compensate for the lower corrosion resistance, 2219 aluminium can be clad in a commercially pure alloy such as 1050 or painted. This alloy is commonly formed by both extrusion and forging, but is not used in casting. The 2219 aluminium alloy in particular has high fracture toughness, is weldable and resistant to stress corrosion cracking, therefore it is widely used in supersonic aircraft skin and structural members. The Space Shuttle Standard Weight Tank was also fabricated from the 2219 alloy. The Columbus module on the International Space Station also used 2219 aluminium alloy with a cylinder thickness of 4 mm, which was increased to 7 mm for the end cones. The dome and skirt of the Cupola Module on the International Space Station also uses 2219 aluminium alloy. Alternate designations include AlCu6Mn and A92219. It is described in the following standards: ASTM B 209: Standard Specification for Aluminium and Aluminium-Alloy Sheet and Plate ASTM B 211: Standard Specification for Aluminium and Aluminium-Alloy Bar, Rod, and Wire ASTM B 221: Standard Specification for Aluminium and Aluminium-Alloy Extruded Bars, Rods, Wire, Profiles, and Tubes ISO 6361: Wrought Aluminium and Aluminium Alloy Sheets, Strips and Plates Chemical composition The alloy composition of 2219 aluminium is: Aluminium: 91.5 to 93.8% Copper: 5.8 to 6.8% Iron: 0.3% max Magnesium: 0.02% max Manganese: 0.2 to 0.4% Silicon: 0.2% max Titanium: 0.02 to 0.10% Vanadium: 0.05 to 0.15% Zinc: 0.1% max Zirconium: 0.10 to 0.25% Residuals: 0.15% max References Aluminium alloy table Aluminium alloys Aluminium–copper alloys
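Composition limits like those listed above lend themselves to a simple conformance check. The sketch below encodes the 2219 ranges given in this article (weight percent; entries with no minimum are maxima only) and tests a hypothetical melt analysis against them; it is a bookkeeping illustration and not a substitute for the ASTM specifications, which also govern the aluminium balance and unlisted residuals.

# Weight-percent limits for AA2219 as listed above: (min, max), with None meaning no minimum.
LIMITS_2219 = {
    "Cu": (5.8, 6.8), "Fe": (None, 0.3), "Mg": (None, 0.02),
    "Mn": (0.2, 0.4), "Si": (None, 0.2), "Ti": (0.02, 0.10),
    "V": (0.05, 0.15), "Zn": (None, 0.1), "Zr": (0.10, 0.25),
}

def check_composition(analysis):
    """Return a list of (element, value, reason) entries for out-of-range elements."""
    problems = []
    for element, (lo, hi) in LIMITS_2219.items():
        value = analysis.get(element, 0.0)
        if lo is not None and value < lo:
            problems.append((element, value, f"below minimum {lo}"))
        if value > hi:
            problems.append((element, value, f"above maximum {hi}"))
    return problems

# Hypothetical melt analysis in wt%, not real data:
melt = {"Cu": 6.3, "Mn": 0.31, "Ti": 0.06, "V": 0.09, "Zr": 0.15, "Fe": 0.25, "Si": 0.1}
print(check_composition(melt) or "within 2219 limits for the listed elements")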
2219 aluminium alloy
[ "Chemistry" ]
445
[ "Alloys", "Aluminium alloys" ]
44,585,698
https://en.wikipedia.org/wiki/3003%20aluminium%20alloy
3003 aluminium alloy is an alloy in the wrought aluminium-manganese family (3000 or 3xxx series). It can be cold worked (but not, unlike some other types of aluminium alloys, heat-treated) to produce tempers with a higher strength but a lower ductility. Like most other aluminium-manganese alloys, 3003 is a general-purpose alloy with moderate strength, good workability, and good corrosion resistance. It is commonly rolled and extruded, but typically not forged. As a wrought alloy, it is not used in casting. It is also commonly used in sheet metal applications such as gutters, downspouts, roofing, and siding. Alternate designations include 3.0517 and A93003. 3003 aluminium and its various tempers are covered by the ISO standard 6361 and the ASTM standards B209, B210, B211, B221, B483, B491, and B547. Chemical Composition The alloy composition of 3003 aluminium is: Aluminium: 96.8 to 99% Copper: 0.05 to 0.20% Iron: 0.7% max Manganese: 1.0 to 1.5% Silicon: 0.6% max Zinc: 0.1% max Residuals: 0.15% max References Aluminium alloy table Aluminium alloys Aluminium–manganese alloys
3003 aluminium alloy
[ "Chemistry" ]
286
[ "Alloys", "Aluminium alloys" ]
43,135,011
https://en.wikipedia.org/wiki/Nucleation%20in%20microcellular%20foaming
Nucleation in microcellular plastics is an important stage that determines the final cell size, cell density and cell morphology of the foam. Numerous researchers have studied the cell nucleation phenomenon in microcellular polymers. Studies have been performed on ultrasound-induced nucleation during microcellular foaming of acrylonitrile butadiene styrene polymers. M. C. Guo studied nucleation under shear and found that as the shear increased, the cell size diminished and the cell density of the foam correspondingly increased. Plastics Foams
Nucleation in microcellular foaming
[ "Physics", "Chemistry" ]
113
[ "Amorphous solids", "Foams", "Unsolved problems in physics", "Plastics" ]
43,137,281
https://en.wikipedia.org/wiki/Vertebrate%20mitochondrial%20code
The vertebrate mitochondrial code (translation table 2) is the genetic code found in the mitochondria of all vertebrata. Evolution AGA and AGG were thought to have become mitochondrial stop codons early in vertebrate evolution. However, at least in humans it has now been shown that AGA and AGG sequences are not recognized as termination codons. A -1 mitoribosome frameshift occurs at the AGA and AGG codons predicted to terminate the CO1 and ND6 open reading frames (ORFs), and consequently both ORFs terminate in the standard UAG codon. Incomplete stop codons Mitochondrial genes in some vertebrates (including humans) have incomplete stop codons ending in U or UA, which become complete termination codons (UAA) upon subsequent polyadenylation. Translation table The codon AUG both codes for methionine and serves as an initiation site: the first AUG in an mRNA's coding region is where translation into protein begins. Differences from the standard code Alternative initiation codons Bos: AUA Homo: AUA, AUU Mus: AUA, AUU, AUC Coturnix, Gallus: also GUG See also List of genetic codes References This article contains public domain text from the NCBI page compiled by Andrzej Elzanowski and Jim Ostell. Molecular genetics Gene expression Protein biosynthesis
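The reassignments that define translation table 2 can be applied programmatically. The sketch below builds the standard genetic code from the usual 64-codon ordering, overrides the four assignments that differ in the vertebrate mitochondrial code (AGA and AGG as stop, AUA as Met, UGA as Trp, written here with DNA bases), and translates a short made-up sequence. Incomplete stop codons completed by polyadenylation and the alternative initiation codons listed above are not handled.

# Standard genetic code in the usual NCBI ordering: bases T, C, A, G, first base varying slowest.
BASES = "TCAG"
STANDARD_AAS = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
                "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
STANDARD = {b1 + b2 + b3: aa
            for (b1, b2, b3), aa in zip(
                ((x, y, z) for x in BASES for y in BASES for z in BASES),
                STANDARD_AAS)}

# Vertebrate mitochondrial code (translation table 2): four reassignments relative to the standard.
VERTEBRATE_MITO = dict(STANDARD)
VERTEBRATE_MITO.update({"AGA": "*", "AGG": "*", "ATA": "M", "TGA": "W"})

def translate(dna, table=VERTEBRATE_MITO):
    """Translate an in-frame DNA string; '*' marks a stop codon."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        aa = table[dna[i:i + 3].upper()]
        protein.append(aa)
        if aa == "*":
            break
    return "".join(protein)

print(translate("ATGATATGAAGA"))             # 'MMW*' under table 2
print(translate("ATGATATGAAGA", STANDARD))   # 'MI*' under the standard code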
Vertebrate mitochondrial code
[ "Chemistry", "Biology" ]
293
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
43,140,842
https://en.wikipedia.org/wiki/NASICON
NASICON is an acronym for sodium (Na) super ionic conductor, which usually refers to a family of solids with the chemical formula Na1+xZr2SixP3−xO12, 0 < x < 3. In a broader sense, it is also used for similar compounds where Na, Zr and/or Si are replaced by isovalent elements. NASICON compounds have high ionic conductivities, on the order of 10−3 S/cm, which rival those of liquid electrolytes. They are caused by hopping of Na ions among interstitial sites of the NASICON crystal lattice. Properties The crystal structure of NASICON compounds was characterized in 1968. It is a covalent network consisting of ZrO6 octahedra and PO4/SiO4 tetrahedra that share common corners. Sodium ions are located at two types of interstitial positions. They move among those sites through bottlenecks, whose size, and thus the NASICON electrical conductivity, depends on the NASICON composition, on the site occupancy, and on the oxygen content in the surrounding atmosphere. The conductivity decreases for x < 2 or when all Si is substituted for P in the crystal lattice (and vice versa); it can be increased by adding a rare-earth compound to NASICON, such as yttria. NASICON materials can be prepared as single crystals, polycrystalline ceramic compacts, thin films or as a bulk glass called NASIGLAS. Most of them, except NASIGLAS and phosphorus-free Na4Zr2Si3O12, react with molten sodium at 300 °C, and therefore are unsuitable for electric batteries that use sodium as an electrode. However, a NASICON membrane is being considered for a sodium-sulfur battery where the sodium stays solid. Development and potential applications The main application envisaged for NASICON materials is as the solid electrolyte in a sodium-ion battery. Some NASICONs exhibit a low thermal expansion coefficient (< 10−6 K−1), which is useful for precision instruments and household ovenware. NASICONs can be doped with rare-earth elements, such as Eu, and used as phosphors. Their electrical conductivity is sensitive to molecules in the ambient atmosphere, a phenomenon that can be used to detect CO2, SO2, NO, NO2, NH3 and H2S gases. Other NASICON applications include catalysis, immobilization of radioactive waste, and sodium removal from water. The development of sodium-ion batteries is important since it makes use of an earth-abundant material and can serve as an alternative to lithium-ion batteries which are experiencing ever-increasing demand despite the limited availability of lithium. Developing high-performance sodium-ion batteries is a challenge because it is necessary to develop electrodes that meet the requirements of high-energy density and high cycling stability while also being cost-efficient. NaSICON-based electrode materials are known for their wide range of electrochemical potentials, high ionic conductivity, and most importantly their structural and thermal stabilities. NaSICON-type cathode materials for sodium-ion batteries have a mechanically robust three-dimensional (3D) framework with open channels that endow it with the capability for fast ionic diffusion. A strong and lasting structural framework allows for repeated ion de-/insertions with relatively high operating potentials. Its high safety, high potential, and low volume change make NaSICON a promising candidate for sodium-ion battery cathodes. NaSICON cathodes typically suffer from poor electrical conductivity and low specific capacity which severely limits their practical applications. 
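The general formula Na1+xZr2SixP3−xO12 is charge balanced for any x, and its formula mass follows directly from atomic weights, which is easy to check numerically. In the sketch below the atomic weights are standard rounded values and the charge assignments (Na+, Zr4+, Si4+, P5+, O2−) are the usual formal oxidation states; both are assumptions of the illustration rather than statements from this article.

ATOMIC_WEIGHT = {"Na": 22.990, "Zr": 91.224, "Si": 28.086, "P": 30.974, "O": 15.999}
FORMAL_CHARGE = {"Na": +1, "Zr": +4, "Si": +4, "P": +5, "O": -2}

def nasicon_stoichiometry(x):
    """Element counts in Na(1+x)Zr2Si(x)P(3-x)O12 for 0 <= x <= 3."""
    return {"Na": 1 + x, "Zr": 2, "Si": x, "P": 3 - x, "O": 12}

def net_charge(composition):
    return sum(n * FORMAL_CHARGE[el] for el, n in composition.items())

def formula_mass(composition):
    return sum(n * ATOMIC_WEIGHT[el] for el, n in composition.items())

for x in (0.0, 1.0, 2.0, 3.0):
    comp = nasicon_stoichiometry(x)
    # Net formal charge is zero for every x: (1+x) + 8 + 4x + 5(3-x) - 24 = 0.
    print(x, net_charge(comp), round(formula_mass(comp), 1))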
Efforts to enhance the movement of electrons, or electrical conductivity, include particle downsizing and carbon-coating which have both been reported to improve the electrochemical performance. It is important to consider the relationship between lattice parameters and activation energy as the change in lattice size has a direct influence on the size of the pathway for conduction as well as the hopping distance of the ions to the next vacancy. A large hopping distance requires a high activation energy. NaSICON-phosphate compounds are considered promising cathodes with a theoretical specific energy of 400 W h kg−1. Vanadium-based compounds exhibit satisfactory high energy densities that are comparable to those of lithium-ion batteries as they operate through multi-electron redox reactions (V3+/V4+ and V4+/V5+) and a high operating voltage. The use of vanadium is toxic and expensive which introduces a critical issue in real applications. This concern holds true for other electrodes based on costly 3d transition metal elements such as Ni- or Co-based electrodes. The most abundant and non-toxic 3d element, iron, is the favored choice as the redox center in the polyanionic or mixed-polyanion system. Lithium analogues Some lithium phosphates also possess the NASICON structure and can be considered as the direct analogues of the sodium-based NASICONs. The general formula of such compounds is , where M identifies an element like titanium, germanium, zirconium, hafnium, or tin. Similarly to sodium-based NASICONs, lithium-based NASICONs consist of a network of MO6 octahedra connected by PO4 tetrahedra, with lithium ions occupying the interstitial sites among them. Ionic conduction is ensured by lithium hopping among adjacent interstitial sites. Lithium NASICONs are promising materials to be used as solid electrolytes in all-solid-state lithium-ion batteries. Relevant examples The most investigated lithium-based NASICON materials are , , and . Lithium zirconium phosphate Lithium zirconium phosphate, identified by the formula (LZP), has been extensively studied because of its polymorphism and interesting conduction properties. At room temperature, LZP has a triclinic crystal structure (C1) and undergoes a phase transition to rhombohedral crystal structure (R3c) between 25 and 60 °C. The rhombohedral phase is characterized by higher values of ionic conductivity (8×10−6 S/cm at 150 °C) compared to the triclinic phase (≈ 8×10−9 S/cm at room temperature): such difference may be ascribed to the peculiar distorted tetrahedral coordination of lithium ions in the rhombohedral phase, along with the large number of available empty sites. The ionic conductivity of LZP can be enhanced by elemental doping, for example replacing some of the zirconium cations with lanthanum, titanium, or aluminium atoms. In case of lanthanum doping, the room-temperature ionic conductivity of the material approaches 7.2×10−5 S/cm. Lithium titanium phosphate Lithium titanium phosphate, with general formula (LTP or LTPO), is another lithium-containing NASICON material in which TiO6 octahedra and PO4 tetrahedra are arranged in a rhombohedral unit cell. The LTP crystal structure is stable down to 100 K and is characterized by a small coefficient of thermal expansion. LTP shows low ionic conductivity at room temperature, around 10−6 S/cm; however, it can be effectively increased by elemental substitution with isovalent or aliovalent elements (Al, Cr, Ga, Fe, Sc, In, Lu, Y, La). 
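The link between activation energy and ionic conductivity mentioned above is usually expressed through an Arrhenius-type law. The sketch below assumes the simple form sigma = sigma0 * exp(-Ea / (kB*T)) (some treatments put sigma*T on the left-hand side instead), extracts Ea from two conductivity points and extrapolates to another temperature; the data points are invented for the example and are not measured NASICON values.

from math import exp, log

K_B = 8.617e-5   # Boltzmann constant in eV/K

def activation_energy(T1, sigma1, T2, sigma2):
    """Ea in eV from two conductivity points, assuming sigma = sigma0 * exp(-Ea / (kB * T))."""
    return K_B * log(sigma1 / sigma2) / (1.0 / T2 - 1.0 / T1)

# Made-up conductivity data in S/cm, purely to show the calculation:
T1, s1, T2, s2 = 300.0, 1.0e-4, 350.0, 5.0e-4
Ea = activation_energy(T1, s1, T2, s2)
sigma0 = s1 * exp(Ea / (K_B * T1))              # pre-exponential factor from one point
sigma_400K = sigma0 * exp(-Ea / (K_B * 400.0))  # extrapolated conductivity at 400 K
print(round(Ea, 3), "eV", f"{sigma_400K:.2e}", "S/cm")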
The most common derivative of LTP is lithium aluminium titanium phosphate (LATP), whose general formula is Li1+xAlxTi2−x(PO4)3. Ionic conductivity values as high as 1.9×10−3 S/cm can be achieved when the microstructure and the aluminium content (x = 0.3 - 0.5) are optimized. The increase of conductivity is attributed to the larger number of mobile lithium ions necessary to balance the extra electrical charge after Ti replacement by Al, together with a contraction of the c axis of the LATP unit cell. In spite of attractive conduction properties, LATP is highly unstable in contact with lithium metal, with formation of a lithium-rich phase at the interface and with reduction of Ti4+ to Ti3+. Reduction of tetravalent titanium ions proceeds along a single-electron transfer reaction: LiTi2(PO4)3 + Li -> Li2Ti2(PO4)3 Both phenomena are responsible for a significant increase of the electronic conductivity of the LATP material (from 3×10−9 S/cm to 2.9×10−6 S/cm), leading to the degradation of the material and to the ultimate cell failure if LATP is used as a solid electrolyte in a lithium-ion battery with metallic lithium as the anode. Lithium germanium phosphate Lithium germanium phosphate, LiGe2(PO4)3 (LGP), is closely similar to LTP, except for the presence of GeO6 octahedra instead of TiO6 octahedra in the rhombohedral unit cell. Similarly to LTP, the ionic conductivity of pure LGP is low and can be improved by doping the material with aliovalent elements like aluminium, resulting in lithium aluminium germanium phosphate (LAGP), Li1+xAlxGe2−x(PO4)3. In contrast to LGP, the room-temperature ionic conductivity of LAGP spans from 10−5 S/cm up to 10−3 S/cm, depending on the microstructure and on the aluminium content, with an optimal composition for x ≈ 0.5. In both LATP and LAGP, non-conductive secondary phases are expected for larger aluminium content (x > 0.5 - 0.6). LAGP is more stable than LATP against a lithium metal anode, since the reduction reaction of Ge4+ cations is a 4-electron reaction and has a high kinetic barrier: 2LiGe2(PO4)3 + 4Li -> 3GeO2 + 6LiPO3 + Ge However, the stability of the lithium anode-LAGP interface is still not fully clarified and the formation of detrimental interlayers with subsequent battery failure has been reported. Application in lithium-ion batteries Phosphate-based materials with a NASICON crystal structure, especially LATP and LAGP, are good candidates as solid-state electrolytes in lithium-ion batteries, even if their average ionic conductivity (≈10−5 - 10−4 S/cm) is lower compared to other classes of solid electrolytes like garnets and sulfides. However, the use of LATP and LAGP provides some advantages: Excellent stability in humid air and against CO2, with no release of harmful gases or formation of a Li2CO3 passivating layer; High stability against water; Wide electrochemical stability window and high voltage stability, up to 6 V in the case of LAGP, enabling the use of high-voltage cathodes; Low toxicity compared to sulfide-based solid electrolytes; Low cost and easy preparation. A high-capacity lithium metal anode could not be coupled with a LATP solid electrolyte, because of Ti4+ reduction and fast electrolyte decomposition; on the other hand, the reactivity of LAGP in contact with lithium at very negative potentials is still debated, but protective interlayers could be added to improve the interfacial stability. Considering LZP, it is predicted to be electrochemically stable in contact with metallic lithium; the main limitation arises from the low ionic conductivity of the room-temperature triclinic phase. 
Proper elemental doping is an effective route to both stabilize the rhombohedral phase below 50 °C and improve the ionic conductivity. See also Lithium aluminium germanium phosphate LISICON Solid-state electrolyte Sodium-ion battery Lithium-ion battery References Electrolytes Sodium compounds Lithium compounds Phosphates
NASICON
[ "Chemistry" ]
2,415
[ "Electrochemistry", "Phosphates", "Electrolytes", "Salts" ]
37,455,368
https://en.wikipedia.org/wiki/Internal%20measurement
In quantum mechanics, internal measurement refers to the measurement of a quantum system by an observer (referred to as an internal observer or endo-observer). A quantum measurement represents the action of a measuring device on a quantum system. When the measuring device is a part of the measured quantum system, the measurement proceeds internally in relation to the whole system. Internal measurement theory was first introduced by Koichiro Matsuno and developed by Yukio-Pegio Gunji. They expanded on the original ideas of Robert Rosen and Howard Pattee regarding quantum measurement in living systems viewed as natural internal observers that belong to the same scale of the observed objects. According to Matsuno, an internal measurement is accompanied by a redistribution of probabilities that leave them entangled in accordance with the many-worlds interpretation of quantum mechanics by Everett. However, this form of quantum entanglement does not survive in an external measurement, in which the mapping to real numbers takes place and the result is revealed in classical spacetime, as the Copenhagen interpretation suggests. This means that the internal measurement concept unifies the current alternative interpretations of quantum mechanics. Internal measurement and theoretical biology The concept of internal measurement is important for theoretical biology, as living organisms can be regarded as endo-observers having their internal self-referential encoding. An internal measurement leads to an iterative recursive process which appears as the development and evolution of the system where any solution is destined to be relative. The evolutionary increase of complexity becomes possible when the genotype emerges as a system distinct from the phenotype and embedded into it, which separates energy-degenerate rate-independent genetic symbols from the rate-dependent dynamics of construction that they control. Evolution in this concept, which is related to autopoiesis, becomes its own cause, a universal property of our world. Internal measurement and the problem of self The self can be attributed to the internal quantum state with entangled probabilities. This entanglement can be held for prolonged times in the systems with low dissipation without demolition. According to Matsuno, organisms exploit thermodynamic gradients by acting as heat engines to drastically reduce the effective temperature within macromolecular complexes which can potentially provide the maintenance of long-living coherent states in the microtubules of nervous system. The concept of internal measurement develops the ideas of Schrödinger who suggested in "What is life?" that the nature of the self is quantum mechanical, i.e. the self is attributed to an internal state beyond quantum reduction, which generates emergent events by applying quantum reduction externally and observing it. See also Endophysics Interpretations of quantum mechanics Autopoiesis References Philosophical theories Measurement
Internal measurement
[ "Physics", "Mathematics" ]
547
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
33,376,527
https://en.wikipedia.org/wiki/Michele%20Parrinello
Michele Parrinello (born 7 September 1945) is an Italian physicist particularly known for his work in molecular dynamics (the computer simulation of physical movements of atoms and molecules). Parrinello and Roberto Car were awarded the Dirac Medal of the International Centre for Theoretical Physics (ICTP) and the Sidney Fernbach Award in 2009 for their continuing development of the Car–Parrinello method, first proposed in their seminal 1985 paper, "Unified Approach for Molecular Dynamics and Density-Functional Theory". They have continued to receive awards for this breakthrough, most recently the Dreyfus Prize in the Chemical Sciences and the 2021 Benjamin Franklin Medal in Chemistry. Parrinello also co-authored highly cited publications on "polymorphic transitions in single crystals" and "canonical sampling through velocity rescaling." Life and career Michele Parrinello was born in Messina (Sicily) and received his Laurea in physics from the University of Bologna in 1968. After working at the International School for Advanced Studies in Trieste, the IBM research laboratory in Zurich, and the Max Planck Institute for Solid State Research in Stuttgart, he was appointed Professor of Computational Science at the Swiss Federal Institute of Technology Zurich in 2001, a position he also holds at the Università della Svizzera italiana in Lugano. In 2004 he was elected to Great Britain's Royal Society. In 2011 he was awarded the Marcel Benoist Prize. Between 2014 and 2018, he was a member of the Scientific and Technical Committee of the Italian Institute of Technology (IIT). Since 2018, he has been a Senior Researcher, and since 2020, the Principal Investigator of the Atomistic Simulations research unit at the Italian Institute of Technology (IIT). In 2020 he received the Benjamin Franklin Medal (Franklin Institute) in Chemistry. As of 2024, he has received over 150,000 scientific citations and has an h-index of 163, which is one of the highest among all scientists. In Clarivate's annual list of citation laureates, Car and Parrinello have been selected as candidates for the 2024 chemistry Nobel prize. As of 2023, at the age of 78, there are still 6 PhD students working in his group. Selected notable contributions Car–Parrinello molecular dynamics (the original paper on this is now the 5th most highly cited paper in Physical Review Letters) Parrinello–Rahman algorithm Flying ice cube Metadynamics Machine learning potential References Further reading Andreoni, W.; Marx, D.; Sprik, M. (2005). "Editorial: a tribute to Michele Parrinello: from physics via chemistry to biology", ChemPhysChem, Volume 6, Issue 9 (Special Issue: Parrinello Festschrift) Car, R. and Parrinello, M. (1985). "Unified Approach for Molecular Dynamics and Density-Functional Theory" Physical Review Letters, Vol. 55, Issue 22 Kühne, T. D.; Krack, M.; Mohamed, F. R. and Parrinello, M. (2007). "Efficient and Accurate Car-Parrinello-like Approach to Born-Oppenheimer Molecular Dynamics" Physical Review Letters, Vol. 98, 066401 External links . Profile (and CV) at the Università della Svizzera italiana. Profile at Italian Institute of Technology (IIT). 
1945 births Living people 20th-century Italian physicists Italian expatriates in Switzerland Scientists from Messina Fellows of the American Academy of Arts and Sciences Foreign members of the Royal Society Foreign associates of the National Academy of Sciences Academic staff of the University of Lugano Schrödinger Medal recipients Computational chemists Computational physicists 21st-century Italian physicists Academic staff of ETH Zurich Fellows of the American Physical Society Benjamin Franklin Medal (Franklin Institute) laureates
Michele Parrinello
[ "Physics", "Chemistry" ]
781
[ "Computational physicists", "Computational chemists", "Computational physics", "Computational chemistry", "Theoretical chemists" ]
33,381,675
https://en.wikipedia.org/wiki/Hydroxyl%20value
In analytical chemistry, the hydroxyl value is defined as the number of milligrams of potassium hydroxide (KOH) required to neutralize the acetic acid taken up on acetylation of one gram of a chemical substance that contains free hydroxyl groups. The analytical method used to determine hydroxyl value traditionally involves acetylation of the free hydroxyl groups of the substance with acetic anhydride in pyridine solvent. After completion of the reaction, water is added, and the remaining unreacted acetic anhydride is converted to acetic acid and measured by titration with potassium hydroxide. The hydroxyl value can be calculated using the following equation. Note that a chemical substance may also have a measurable acid value affecting the measured endpoint of the titration. The acid value (AV) of the substance, determined in a separate experiment, enters into this equation as a correction factor in the calculation of the hydroxyl value (OHV): OHV = [(B − S) × N × 56.1 / W] + AV, where OHV is the hydroxyl value; B is the amount (ml) of potassium hydroxide solution required for the titration of the blank; S is the amount (ml) of potassium hydroxide solution required for the titration of the acetylated sample; W is the weight of the sample (in grams) used for acetylation; N is the normality of the titrant; 56.1 is the molecular weight of potassium hydroxide (g/mol); and AV is the separately determined acid value of the chemical substance. The content of free hydroxyl groups in a substance can also be determined by methods other than acetylation. Determinations of hydroxyl content by other methods may instead be expressed as a weight percentage (wt. %) of hydroxyl groups in units of the mass of hydroxide functional groups in grams per 100 grams of substance. The conversion between hydroxyl value and other hydroxyl content measurements is obtained by multiplying the hydroxyl value by the factor 17/560. The chemical substance may be a fat, oil, natural or synthetic ester, or other polyol. ASTM D 1957 and ASTM E222-10 describe several versions of this method of determining hydroxyl value. Uses and value The value is important because it helps determine the stoichiometry of a system, for example in polyurethanes. The value may also be used to calculate the equivalent weight and, if the functionality is known, the molecular weight. References See also – related test methods Acid value Bromine number Amine value Epoxy value Iodine value Peroxide value Saponification value Analytical chemistry Dimensionless numbers of chemistry Lipids
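A minimal Python sketch of the titration arithmetic described above; the numerical inputs are hypothetical and serve only to illustrate the (B − S) × N × 56.1 / W + AV relationship, not any particular standard's worked example.

```python
def hydroxyl_value(blank_ml, sample_ml, normality, sample_g, acid_value=0.0):
    """Hydroxyl value in mg KOH per g of sample.

    blank_ml   -- KOH titrant (ml) consumed in the blank titration (B)
    sample_ml  -- KOH titrant (ml) consumed by the acetylated sample (S)
    normality  -- normality of the KOH titrant (N)
    sample_g   -- mass of sample used for acetylation, in grams (W)
    acid_value -- separately determined acid value (AV), added as a correction
    """
    return (blank_ml - sample_ml) * normality * 56.1 / sample_g + acid_value

# Hypothetical example: 25.0 ml blank, 12.5 ml sample, 0.5 N titrant, 2.0 g sample
print(hydroxyl_value(25.0, 12.5, 0.5, 2.0, acid_value=1.2))  # about 176.5 mg KOH/g
```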
Hydroxyl value
[ "Chemistry" ]
551
[ "Biomolecules by chemical classification", "Lipids", "Organic compounds", "nan", "Dimensionless numbers of chemistry" ]
33,383,655
https://en.wikipedia.org/wiki/Chemical%20Engineering%20and%20Biotechnology%20Abstracts
Chemical Engineering and Biotechnology Abstracts (CEABA-VTB) is an abstracting and indexing service that is published by DECHEMA, BASF, and Bayer Technology Services, all based in Germany. This is a bibliographic database that covers multiple disciplines. Subject coverage Subject coverage includes engineering, management, manufacturing plants, equipment, production, and processing pertaining to various disciplines. The fields of interest are bio-process engineering, chemical engineering, process engineering, environmental protection (including safety), fermentation, enzymology, bio-transformation, information technology, technology and testing of materials (including corrosion), mathematical methods (including modeling), measurement (including control of processes), utilities (including services). Also covered are production processes and process development. CAS registry numbers are also part of this database. References Bibliographic databases in engineering Chemical engineering journals Chemical industry in Germany Biotechnology
Chemical Engineering and Biotechnology Abstracts
[ "Chemistry", "Engineering", "Biology" ]
182
[ "Chemical engineering", "nan", "Biotechnology", "Chemical engineering journals" ]
33,387,415
https://en.wikipedia.org/wiki/Matchbox
A matchbox is a container or case for matches, made of cardboard, thin wood, or metal, generally in the form of a box with a separate drawer sliding inside the cover. Matchboxes generally measure 5 x 3.5 x 1.5 cm, and commonly have coarse striking surfaces on the edges for lighting the matches. Cylindrical matchboxes with a round cover on one end are also available. For many applications matchbooks have replaced matchboxes. Metallic model There are metal matchboxes, some of which also have a hollow cylinder in which a nitrated wick is housed so that it can ignite even when it is windy. The metal boxes have a scraper, usually placed on the edge in a slot made for this purpose: a sort of file that can be machined into the metal casing itself or be a metal sheet, welded or glued on. In 1878, the patent document nº 2191, class 44, was registered by Hannoversche Gummi-Kamm-Compagnie in Hannover for a metallic matchbox, with the following text: Other types There are other types different from those described above, made of rubber, wood, mother of pearl, ivory, bone, celluloid, etc., sometimes with very whimsical shapes. Apart from the pocket boxes mentioned, there are tabletop matchboxes and boxes meant to hang on a wall. Tabletop matchboxes are of larger capacity, made of fine wood, cut glass, etc., and may have a lid, the only condition being that they close well and have enough weight to allow a match to be struck without moving the box. Matchboxes hung on the wall are used in kitchens; they are usually made of ash wood, have no lid, and have a hook or hole protruding from the back of the box so that they can be hung on the wall. All matchboxes must have a scraper so that the head of the match can be rubbed against it to light it. Ordinary cardboard boxes have it on one or both sides. In tabletop or wall matchboxes, the scraper is usually made of sandpaper, attached to the most visible part and at the top of the box. Matchbook A book of matches is a small cardboard folder that contains matches joined at the base and has a striking surface on the outside. The folder must be opened to access the matches, which are arranged in a comb shape and must be torn off to be used, unlike those in a standard matchbox, where they are loosely packed in a drawer that can be slid open with a finger. Phillumeny Phillumeny is the hobby of collecting different items related to matches, matchboxes, matchbox labels, matchbooks, matchbox covers, etc. In Japan, Teiichi Yoshizawa was listed in the Guinness Book of Records as the best collector of matches in the world. In Portugal, Jose Manuel Pereira published a series of albums, called "Phillalbum", to catalog and display matchbox collections. References Further reading Steele, H. Thomas; Hiemann, Jim; Dyer, Rod (1987). Close Cover Before Striking: The Golden Age of Matchbook Art. NY: Abbeville Press. Silke Eilers: Zündholzetiketten als historische Quelle. Dissertation. Westfälische Wilhelms-Universität, Münster 2002. Handbuch der Phillumenie. Zündholzetiketten als historische Quelle; eine bildkundliche Untersuchung. (= Modern Imaginarium. 1). Ahlen 2003. Paul Fleischman: Das Streichholzschachtel-Tagebuch. Illustriert von Bagram Ibatoulline. Verlagshaus Jacoby & Stuart, Berlin 2013. External links Containers Collecting Home appliances Packaging
Matchbox
[ "Physics", "Technology" ]
791
[ "Physical systems", "Machines", "Home appliances" ]
34,937,051
https://en.wikipedia.org/wiki/Flooding%20%28nuclear%20reactor%20core%29
Flooding refers to a fluid flow phenomenon whereby counter-current two-phase flow is reversed and runs concurrently in the direction of the initial gas/vapor phase flow when filling, or "flooding", a nuclear reactor core with coolant. This phenomenon is generally discussed with respect to a loss-of-coolant accident (LOCA). As this phenomenon proceeds, counter-current annular flow begins as liquid water is inserted into the system. Then, if conditions are correct, the frictional force at the gas-liquid interface begins to reverse the flow of the liquid. Finally, the flow of the liquid reverses, running concurrently in a slug (or other) flow regime. The significance of this phenomenon is that, if not properly designed for, it can present issues when trying to fill the core with liquid (the phenomenon works against gravity, forcing liquid out of the core). Light water reactor examples In a boiling water reactor (BWR), the emergency core cooling system (ECCS) injects liquid water into the reactor core from the top. Water vapor produced from boiling will flow in the opposite direction. Given a high enough flow rate of steam, reversal of the ECCS-injected liquid water occurs. In a pressurized water reactor (PWR), the ECCS injects liquid into the hot and/or cold leg of the reactor. Coolant from the cold leg flows through a downcomer on the outside of the core before flowing up through the core. The core barrel and the reactor vessel wall form a cylindrical shell that is referred to as the downcomer. In the cold leg, boiling in the downcomer creates an upward flow of steam that can reverse the flow of liquid water coming in through the cold leg. The flooding rate in a PWR is pressure dependent. Similar terminology Flooding as a fluid flow phenomenon should be distinguished from the act of filling the core with coolant ("flooding of the reactor core"), as the fluid flow phenomenon occurs during the filling process. "Flooding of containment" refers to filling the nuclear reactor containment with liquid (usually water), which is distinctly different from either reactor core flooding or flooding as a fluid flow phenomenon. References Nuclear power
Flooding (nuclear reactor core)
[ "Physics" ]
447
[ "Power (physics)", "Physical quantities", "Nuclear power" ]
34,939,027
https://en.wikipedia.org/wiki/Pharmacometabolomics
Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics, the quantification and analysis of metabolites produced by the body. It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. Alternatively, pharmacometabolomics can be applied to measure metabolite levels following the administration of a pharmaceutical compound, in order to monitor the effects of the compound on certain metabolic pathways (pharmacodynamics). This provides detailed mapping of drug effects on metabolism and of the pathways implicated in the mechanisms underlying variation in response to treatment. In addition, the metabolic profile of an individual at baseline (metabotype) provides information about how individuals respond to treatment and highlights heterogeneity within a disease state. All three approaches require the quantification of metabolites found in bodily fluids and tissue, such as blood or urine, and can be used in the assessment of pharmaceutical treatment options for numerous disease states. Goals of Pharmacometabolomics Pharmacometabolomics is thought to provide information that complements that gained from other omics, namely genomics, transcriptomics, and proteomics. Looking at the characteristics of an individual down through these different levels of detail gives an increasingly accurate prediction of a person's ability to respond to a pharmaceutical compound. The genome, made up of 25,000 genes, can indicate possible errors in drug metabolism; the transcriptome, made up of 85,000 transcripts, can provide information about which genes important in metabolism are being actively transcribed; and the proteome, >10,000,000 members, depicts which proteins are active in the body to carry out these functions. Pharmacometabolomics complements the omics with direct measurement of the products of all of these reactions, but with perhaps a relatively smaller number of members: this was initially projected to be approximately 2,200 metabolites, but could be a larger number when gut-derived metabolites and xenobiotics are added to the list. Overall, the goal of pharmacometabolomics is to more closely predict or assess the response of an individual to a pharmaceutical compound, permitting continued treatment with the right drug or dosage depending on the variations in their metabolism and ability to respond to treatment. Pharmacometabolomic analyses, through the use of a metabolomics approach, can provide a comprehensive and detailed metabolic profile or “metabolic fingerprint” for an individual patient. Such metabolic profiles can provide a complete overview of individual metabolite or pathway alterations, offering a more realistic depiction of disease phenotypes. This approach can then be applied to the prediction of response to a pharmaceutical compound by patients with a particular metabolic profile. Pharmacometabolomic analyses of drug response are often coupled with or followed up by pharmacogenetic studies. Pharmacogenetics focuses on the identification of genetic variations (e.g. single-nucleotide polymorphisms) within patients that may contribute to altered drug responses and the overall outcome of a certain treatment.
The results of pharmacometabolomics analyses can act to “inform” or “direct” pharmacogenetic analyses by correlating aberrant metabolite concentrations or metabolic pathways to potential alterations at the genetic level. This concept has been established in two seminal publications from studies of serotonin reuptake inhibitor antidepressants, in which metabolic signatures were able to define a pathway implicated in response to the antidepressant, leading to the identification of genetic variants within a key gene of the highlighted pathway as being implicated in variation in response. These genetic variants were not identified through genetic analysis alone, and hence illustrated how metabolomics can guide and inform genetic studies. History Although the applications of pharmacometabolomics to personalized medicine are largely only being realized now, the study of an individual's metabolism has been used to treat disease since the Middle Ages. Early physicians employed a primitive form of metabolomic analysis by smelling, tasting and looking at urine to diagnose disease. The measurement techniques needed to look at specific metabolites were of course unavailable at that time, but such technologies have evolved dramatically over the last decade into precise, high-throughput devices, along with the accompanying data analysis software needed to analyze their output. Currently, sample purification processes, such as liquid or gas chromatography, are coupled with either mass spectrometry (MS)-based or nuclear magnetic resonance (NMR)-based analytical methods to characterize the metabolite profiles of individual patients. Continually advancing informatics tools allow for the identification, quantification and classification of metabolites to determine which pathways may influence certain pharmaceutical interventions. One of the earliest studies discussing the principle and applications of pharmacometabolomics was conducted in an animal model to look at the metabolism of paracetamol and liver damage. NMR spectroscopy was used to analyze the urinary metabolic profiles of rats pre- and post-treatment with paracetamol. The analysis revealed a certain metabolic profile associated with increased liver damage following paracetamol treatment. At this point, it was eagerly anticipated that such pharmacometabolomics approaches could be applied to personalized human medicine. Since this publication in 2006, the Pharmacometabolomics Research Network, led by Duke University researchers and including partnerships between centers of excellence in metabolomics, pharmacogenomics and informatics (over sixteen academic centers funded by NIGMS), has been able to illustrate for the first time the power of the pharmacometabolomics approach in informing about treatment outcomes in large clinical studies with drugs that include antidepressants, statins, antihypertensives, antiplatelet therapies and antipsychotics. Entirely new concepts emerged from these studies on the use of pharmacometabolomics as a tool that can bring about a paradigm shift in the field of pharmacology. They illustrated how pharmacometabolomics can enable a Quantitative and Systems Pharmacology approach. Pharmacometabolomics has been applied to the treatment of numerous human diseases, such as schizophrenia, diabetes, neural disease, depression and cancer. Personalized Medicine As metabolite analyses are being conducted at the individual patient level, pharmacometabolomics may be considered a form of personalized medicine.
This field is currently being employed in a predictive manner to determine the potential responses of individual patients to therapeutic compounds, allowing for more customized treatment regimens. It is anticipated that such pharmacometabolomics approaches will lead to an improved ability to predict an individual's response to a compound, its efficacy and metabolism, as well as adverse or off-target effects that may take place in the body. The metabolism of certain drugs varies from patient to patient, as the copy number of the genes that code for common drug-metabolizing enzymes varies within the population, leading to differences in an individual's ability to metabolize different compounds. Other important personal factors contributing to an individual's metabolic profile, such as patient nutritional status, commensal bacteria, age, and pre-existing medical conditions, are also reflected in metabolite assessment. Overall, pharmacometabolomic analyses, combined with such approaches as pharmacogenetics, can identify the metabolic processes and particular genetic alterations that may compromise the anticipated efficacy of a drug in a particular patient. The results of such analyses can then allow modification of treatment regimens for an optimal outcome. Current Applications Predicting treatment outcome Metabotype informs about treatment outcomes Pharmacometabolomics may be used in a predictive manner to determine the correct course of action with regard to a patient about to undergo some type of drug treatment. This involves determining the metabolic profile of a patient prior to treatment, and correlating metabolic signatures with the outcome of a pharmaceutical treatment course. Analysis of a patient's metabolic profile can reveal factors that may contribute to altered drug metabolism, allowing for predictions of the overall efficacy of a proposed treatment, as well as potential drug toxicity risks that may differ from those of the general population. This approach has been used to identify novel or previously characterized metabolic biomarkers in patients, which can be used to predict the expected outcome of that patient following treatment with a pharmaceutical compound. One example of the clinical application of pharmacometabolomics is a set of studies that sought to identify a predictive metabolic marker for the treatment of major depressive disorder (MDD). In a study with the antidepressant sertraline, the Pharmacometabolomics Network illustrated that the baseline metabolic profile of patients with major depression can inform about treatment outcomes. In addition, the study illustrated the power of metabolomics for defining response to placebo, compared the response to placebo with the response to sertraline, and showed that several pathways were common to both. In another study, with citalopram/escitalopram, metabolomic analysis of plasma from patients with MDD revealed that variations in glycine metabolism were negatively associated with patient outcome upon treatment with selective serotonin reuptake inhibitors (SSRIs), an important drug class involved in the treatment of this disease. Monitoring drug-related alterations in metabolic pathways The second major application of pharmacometabolomics is the analysis of a patient's metabolic profile following the administration of a specific therapy. This process is often secondary to a pre-treatment metabolic analysis, allowing for the comparison of pre- and post-treatment metabolite concentrations.
This allows for the identification of the metabolic processes and pathways that are being altered by the treatment, either intentionally as a designated target of the compound or unintentionally as a side effect. Furthermore, the concentration and variety of metabolites produced from the compound itself can also be identified, providing information on the rate of metabolism and potentially leading to development of a related compound with increased efficacy or decreased side effects. This approach was used, for example, to investigate the effect of several antipsychotic drugs on lipid metabolism in patients treated for schizophrenia. It was hypothesized that these antipsychotic drugs may be altering lipid metabolism in treated patients with schizophrenia, contributing to weight gain and hypertriglyceridemia. The study monitored lipid metabolites in patients both before and after treatment with antipsychotics. The compiled pre- and post-treatment profiles were then compared to examine the effect of these compounds on lipid metabolism. The researchers found correlations between treatment with antipsychotic drugs and lipid metabolism, in both a lipid-class-specific and drug-specific manner, establishing new foundations for the concept that pharmacometabolomics provides powerful tools for enabling detailed mapping of drug effects. Additional studies by the Pharmacometabolomics Research Network enabled the effects of statins, atenolol and aspirin to be mapped in ways not possible before. Entirely new insights were gained into the effects of these drugs on metabolism, highlighting pathways implicated in response and side effects. Metabolite Quantification and Analysis In order to identify and quantify metabolites produced by the body, various detection methods have been employed. Most often, these involve the use of nuclear magnetic resonance (NMR) spectroscopy or mass spectrometry (MS), providing universal detection, identification and quantification of metabolites in individual patient samples. Although both processes are used in pharmacometabolomic analyses, there are advantages and disadvantages to using either nuclear magnetic resonance (NMR) spectroscopy- or mass spectrometry (MS)-based platforms in this application. Nuclear Magnetic Resonance Spectroscopy NMR spectroscopy has been utilized for the analysis of biological samples since the 1980s, and can be used as an effective technique for the identification and quantification of both known and unknown metabolites. For details on the principles of this technique, see NMR spectroscopy. In pharmacometabolomics analyses, NMR is advantageous because minimal sample preparation is required. Isolated patient samples typically include blood or urine due to their minimally invasive acquisition; however, other fluid types and solid tissue samples have also been studied with this approach. Due to the minimal preparation of samples before analysis, samples can potentially be fully recovered following NMR analysis (if samples are kept refrigerated to avoid degradation). This permits samples to be repeatedly analyzed with extremely high levels of reproducibility, as well as maintaining precious patient samples for an alternative analysis. The high reproducibility and precision of NMR, coupled with relatively fast processing time (greater than 100 samples per day), make this process a relatively high-throughput form of sample analysis.
One disadvantage of this technique is the relatively poor metabolite detection sensitivity compared to MS-based analysis, leading to a requirement for greater initial sample volume. Furthermore, the initial instrument costs are extremely high, for both NMR and MS equipment. Mass Spectrometry An alternative approach to the identification and quantification of patient samples is through the use of mass spectrometry. This approach offers excellent precision and sensitivity in the identification, characterization and quantification of metabolites in multiple patient sample types, such as blood and urine. The mass spectrometry (MS) approach is typically coupled to gas chromatography (GC), in GC-MS or liquid chromatography (LC), in LC-MS, which aid in initially separating out the metabolite components within complex sample mixtures, and can allow for the isolation of particular metabolite subsets for analysis. GC-MS can provide relatively precise quantification of metabolites, as well as chemical structural information that can be compared to pre-existing chemical libraries. GC-MS can be conducted in a relatively high-throughput manner (greater than 100 samples per day) with greater detection sensitivity than NMR analysis. A limitation of GC-MS for this application, however, is that processed metabolite components must be readily volatilized for sample processing. LC-MS initially separates out the components of a sample mixture based on properties such as hydrophobicity, before processing them for identification and quantification by mass spectrometry (MS). Overall, LC-MS is an extremely flexible method for processing most compound types in a somewhat high-throughput manner (20-100 samples a day), also with greater sensitivity than NMR analysis. For both GC-MS and LC-MS there are limitations in the reproducibility of metabolite quantification. Furthermore, sample processing for downstream mass spectrometry (MS) analysis is much more intensive than in NMR application, and results in the destruction of the original sample (via trypsin digestion). Following identification and quantification of metabolites in individual patient samples, NMR and mass spectrometry (MS) output is compiled into a dataset. These datasets include information on the identity and levels of individual metabolites detected within processed samples, as well as characteristics of each metabolite during the detection process (e.g. mass-to-charge ratios for mass spectrometry (MS)-based analysis). Multiple datasets can be created and compiled into large databases for individual patients in order to monitor varying metabolic profiles over a treatment course (i.e. pre- and post-treatment profiles). Each database is then processed through a type of informatics platform with software designed to characterize and analyze the data to generate an overall metabolic profile for the patient. To generate this overall profile, computational programs are designed to: identify metabolic disease signatures assess treatment class (pre- or post-treatment) identify compounds present in a patient sample that may alter drug response, or be caused by a therapy identify metabolite variables and interactions among these variables map identified variables to known metabolic and biochemical pathways Limitations Along with the emerging diagnostic capabilities of pharmacometabolomics, there are limitations introduced when individual variability is looked at. 
The ability to determine an individual's physiological state by measurement of metabolites is not contested, but the extreme variability that can be introduced by age, nutrition, and commensal organisms suggests problems in creating generalized pharmacometabolomes for patient groups. However, as long as meaningful metabolic signatures can be elucidated to create baseline values, there still exists a possible means of comparison. Issues surrounding the measurement of metabolites in an individual can also arise from the methodology of metabolite detection, and there are arguments both for and against NMR and mass spectrometry (MS). Other limitations surrounding metabolite analysis include the need for proper handling and processing of samples, as well as proper maintenance and calibration of the analytical and computational equipment. These tasks require skilled and experienced technicians, and instrument repairs arising from continuous sample processing can be costly. The cost of the processing and analytical platforms alone is very high, making it difficult for many facilities to afford pharmacometabolomics-based treatment analyses. Implications for Health Care Pharmacometabolomics may decrease the burden on the healthcare system by better gauging the correct choice of treatment drug and dosage in order to optimize the response of a patient to a treatment. It is hoped that this approach will also ultimately limit the number of adverse drug reactions (ADRs) associated with many treatment regimens. Overall, physicians would be better able to apply more personalized, and potentially more effective, treatments to their patients. It is important to consider, however, that the processing and analysis of the patient samples take time, resulting in delayed treatment. Another concern about the application of pharmacometabolomics analyses to individual patient care is deciding who should and who should not receive this in-depth, personalized treatment protocol. Certain diseases and stages of disease would have to be classified according to their requirement for such a treatment plan, but there are no criteria for this classification. Furthermore, not all hospitals and treatment institutes can afford the equipment to process and analyze patient samples on site, but sending out samples takes time and ultimately delays treatment. Health insurance coverage of such procedures may also be an issue. Certain insurance companies may discriminate against the application of this type of sample analysis and metabolite characterization. Furthermore, there would have to be regulations put in place to ensure that there was no discrimination by insurance companies against the metabolic profiles of individual patients (“high metabolizers” vs. risky “low metabolizers”). See also Pharmacogenetics Pharmacogenomics Personal genomics Drug development Metabolism Personalized medicine References External links Genomics Directory: A one-stop biotechnology resource center for bioentrepreneurs, scientists, and students Pharmacometabolomics Research Network (PMRN) Human Metabolome Project: Project supported by Genome Alberta and Genome Canada Metabolomics Society: An organization dedicated to promoting the growth, use and understanding of metabolomics in the life sciences. Biological Magnetic Resonance Data Bank: A Repository for Data from NMR Spectroscopy on Proteins, Peptides, Nucleic Acids, and other Biomolecules Scripps Center for Metabolomics and Mass Spectrometry Branches of biology Metabolism
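A minimal, self-contained sketch of the kind of informatics step described above (assessing treatment class from a metabolite matrix); the data here are synthetic and the model choice (logistic regression with cross-validation) is only an assumed stand-in for the analysis platforms mentioned in the article, not any specific tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic data: 40 samples x 20 metabolites (e.g. NMR or MS peak intensities).
n_samples, n_metabolites = 40, 20
X = rng.normal(size=(n_samples, n_metabolites))
y = np.repeat([0, 1], n_samples // 2)   # 0 = pre-treatment, 1 = post-treatment
X[y == 1, :3] += 1.0                    # pretend treatment shifts three metabolites

# Classify treatment class from the metabolic profile and estimate accuracy.
clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))

# Rank metabolites by how strongly they separate the two classes.
clf.fit(X, y)
ranking = np.argsort(-np.abs(clf.coef_[0]))
print("most discriminative metabolite indices:", ranking[:3])
```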
Pharmacometabolomics
[ "Chemistry", "Biology" ]
4,018
[ "Cellular processes", "Biochemistry", "Metabolism", "nan" ]
34,940,171
https://en.wikipedia.org/wiki/Programming%20languages%20used%20in%20most%20popular%20websites
One thing the most visited websites have in common is that they are dynamic websites. Their development typically involves server-side coding, client-side coding and database technology. The programming languages applied to deliver such dynamic web content vary vastly between sites. *data on programming languages is based on: HTTP Header information Request for file types Citations from reliable sources See also Comparison of programming languages List of programming languages TIOBE index "Hello, World!" program References Software comparisons Web development
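A minimal sketch of the first data source listed above (inspecting HTTP response headers for hints about the server-side technology); the header names checked and the example URL are assumptions, since many sites strip or rewrite these headers.

```python
import requests

def server_side_hints(url):
    """Return response headers that commonly hint at the server-side stack."""
    resp = requests.get(url, timeout=10)
    hints = {}
    for name in ("Server", "X-Powered-By", "X-AspNet-Version", "Via"):
        if name in resp.headers:
            hints[name] = resp.headers[name]
    return hints

# Hypothetical example; real sites may omit or obscure these headers.
print(server_side_hints("https://example.com"))
```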
Programming languages used in most popular websites
[ "Technology", "Engineering" ]
95
[ "Software engineering", "Software comparisons", "Computing comparisons", "Web development" ]
34,941,120
https://en.wikipedia.org/wiki/Nucleotide%20pyrophosphatase/phosphodiesterase
Nucleotide pyrophosphatase/phosphodiesterase (NPP) is a class of dimeric enzymes that catalyze the hydrolysis of phosphate diester bonds. NPP belongs to the alkaline phosphatase (AP) superfamily of enzymes. Humans express seven known NPP isoforms, some of which prefer nucleotide substrates, some of which prefer phospholipid substrates, and others of which prefer substrates that have not yet been determined. In eukaryotes, most NPPs are located in the cell membrane and hydrolyze extracellular phosphate diesters to affect a wide variety of biological processes. Bacterial NPP is thought to localize to the periplasm. Structure The catalytic site of NPP consists of a two-metal-ion (bimetallo) Zn2+ catalytic core. These Zn2+ catalytic components are thought to stabilize the transition state of the NPP phosphoryl transfer reaction. Mechanism Overview NPP catalyses the nucleophilic substitution of one ester bond on a phosphodiester substrate. It has a nucleoside binding pocket that excludes phospholipid substrates from the active site. A threonine nucleophile has been identified through site-directed mutagenesis, and the reaction inverts the stereochemistry of the phosphorus center. The sequence of bond breakage and formation has yet to be resolved. Ongoing Investigation Three extreme possibilities have been proposed for the mechanism of NPP-catalyzed phosphoryl transfer. They are distinguished by the sequence in which bonds to phosphorus are made and broken. Though this phenomenon is subtle, it is important for understanding the physiological roles of AP superfamily enzymes, and also to molecular dynamic modeling. Extreme mechanistic scenarios:1) A two-step "dissociative" (elimination-addition or DN + AN) mechanism that proceeds via a trigonal metaphosphate intermediate. This mechanism is represented by the red dashed lines in the figure at right. 2) A two-step "associative" (addition-elimination or AN + DN) mechanism that proceeds via a pentavalent phosphorane intermediate. This is represented by the blue dashed lines in the figure at right. 3) A one-step fully synchronous mechanism analogous to SN2 substitution. Bond formation and breakage occur simultaneously and at the same rate. This is represented by the black dashed line in the figure at right. The above three cases represent archetypes for the reaction mechanism, and the actual mechanism probably falls somewhere in between them. The red and blue dotted lines in Fig. 2a represent more realistic "concerted" mechanisms in which addition and elimination overlap, but are not fully synchronous. The difference in initial rates of the two steps implies different charge distribution in the transition state (TS). When the addition step occurs more quickly than elimination (an ANDN mechanism), more positive charge develops on the nucleophile, and the transition state is said to be "tight." Conversely, if elimination occurs more quickly than addition (DNAN), the transition state is considered "loose." López-Canut et al. modeled substitution of a phosphodiester substrate using a hybrid quantum mechanics/molecular mechanics model. Notably, the model predicted an ANDN concerted mechanism in aqueous solution, but a DNAN mechanism in the active site of Xac NPP. Promiscuity Although NPP primarily catalyzes phosphodiester hydrolysis, the enzyme will also catalyze the hydrolysis of phosphate monoesters, though to a much smaller extent. 
NPP preferentially hydrolyzes phosphate diesters over monoesters by factors of 10^2 to 10^6, depending on the identity of the diester substrate. This ability to catalyze a reaction with a secondary substrate is known as enzyme promiscuity, and may have played a role in NPP's evolutionary history. NPP's promiscuity enables the enzyme to share substrates with alkaline phosphatase (AP), another member of the alkaline phosphatase superfamily. Alkaline phosphatase primarily hydrolyzes phosphate monoester bonds, but it shows some promiscuity towards hydrolyzing phosphate diester bonds, making it, in this respect, the converse of NPP. The active sites of these two enzymes show marked similarities, namely in the presence of nearly superimposable Zn2+ bimetallo catalytic centers. In addition to the bimetallo core, AP also has an Mg2+ ion in its active site. Biological function NPPs have been implicated in several biological processes, including bone mineralization, purine nucleotide and insulin signaling, and cell differentiation and motility. They are generally regulated at the transcriptional level. Mammalian Isoforms Nucleotide pyrophosphatase/phosphodiesterase I NPP1 helps scavenge extracellular nucleotides in order to meet the high purine and pyrimidine requirements of dividing cells. In T-cells, it may scavenge NAD+ from nearby dead cells as a source of adenosine. The pyrophosphate produced by NPP1 in bone cells is thought to serve both as a phosphate source for calcium phosphate deposition and as an inhibitory modulator of calcification. NPP1 appears to be important for maintaining pyrophosphate/phosphate balance. Overactivity of the enzyme is associated with chondrocalcinosis, while deficiency correlates with pathological calcification. NPP1 inhibits the insulin receptor in vitro. In 2005, overexpression of the isoform was implicated in insulin resistance in mice. It has been linked to insulin resistance and Type 2 diabetes in humans. NPP2 NPP2, known in humans as autotaxin, acts primarily in cell motility pathways. When its active site is functional, NPP2 promotes cellular migration at picomolar concentrations. Soluble splice variants of NPP2 are thought to be important to cancer metastasis, and also show angiogenic properties in tumors. NPP3 NPP3 is probably a major contributor to nucleotide metabolism in the intestine and liver. Intestinal NPP3 would be involved in hydrolyzing food-derived nucleotides. The liver releases ATP and ADP into the bile to regulate bile secretion. It subsequently reclaims adenosine via a pathway that probably involves NPP3. Evolution NPP belongs to the alkaline phosphatase superfamily, which is a group of evolutionarily related enzymes that catalyze phosphoryl and sulfuryl transfer reactions. This group includes phosphomonoesterases, phosphodiesterases, phosphoglycerate mutases, phosphopentomutases, and sulfatases. References Protein structure Biochemistry Enzymes of known structure
Nucleotide pyrophosphatase/phosphodiesterase
[ "Chemistry", "Biology" ]
1,458
[ "Biochemistry", "Structural biology", "nan", "Protein structure" ]
34,942,847
https://en.wikipedia.org/wiki/Compression%20of%20genomic%20sequencing%20data
High-throughput sequencing technologies have led to a dramatic decline in genome sequencing costs and to an astonishingly rapid accumulation of genomic data. These technologies are enabling ambitious genome sequencing endeavours, such as the 1000 Genomes Project and 1001 (Arabidopsis thaliana) Genomes Project. The storage and transfer of the tremendous amount of genomic data have become a mainstream problem, motivating the development of high-performance compression tools designed specifically for genomic data. A recent surge of interest in the development of novel algorithms and tools for storing and managing genomic re-sequencing data emphasizes the growing demand for efficient methods for genomic data compression. General concepts While standard data compression tools (e.g., zip and rar) are being used to compress sequence data (e.g., GenBank flat file database), this approach has been criticized as extravagant because genomic sequences often contain repetitive content (e.g., microsatellite sequences) or many sequences exhibit high levels of similarity (e.g., multiple genome sequences from the same species). Additionally, the statistical and information-theoretic properties of genomic sequences can potentially be exploited for compressing sequencing data. Base variants With the availability of a reference template, only differences (e.g., single nucleotide substitutions and insertions/deletions) need to be recorded, thereby greatly reducing the amount of information to be stored. The notion of relative compression is especially obvious in genome re-sequencing projects, where the aim is to discover variations in individual genomes. A reference single nucleotide polymorphism (SNP) map, such as dbSNP, can be used to further reduce the number of variants that need to be stored. Relative genomic coordinates Another useful idea is to store relative genomic coordinates in lieu of absolute coordinates. For example, representing sequence variant bases in the format ‘Position1Base1Position2Base2…’, ‘123C125T130G’ can be shortened to ‘0C2T5G’, where the integers represent intervals between the variants. The cost is the modest arithmetic calculation required to recover the absolute coordinates plus the storage of the correction factor (‘123’ in this example). Prior information about the genomes Further reduction can be achieved if all possible positions of substitutions in a pool of genome sequences are known in advance. For instance, if all locations of SNPs in a human population are known, then there is no need to record variant coordinate information (e.g., ‘123C125T130G’ can be abridged to ‘CTG’). This approach, however, is rarely appropriate because such information is usually incomplete or unavailable. Encoding genomic coordinates Encoding schemes are used to convert coordinate integers into binary form to provide additional compression gains. Encoding designs, such as the Golomb code and the Huffman code, have been incorporated into genomic data compression tools. Of course, encoding schemes entail accompanying decoding algorithms. Choice of the decoding scheme potentially affects the efficiency of sequence information retrieval. Algorithm design choices A universal approach to compressing genomic data may not necessarily be optimal, as a particular method may be more suitable for specific purposes and aims. Thus, several design choices that potentially impact compression performance may be important to consider.
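A minimal sketch of the relative-coordinate idea illustrated by the '123C125T130G' → '0C2T5G' example above; the simple position-plus-base tuples used here are only an illustration, not a real variant file format.

```python
def to_relative(variants):
    """[(123, 'C'), (125, 'T'), (130, 'G')] -> (123, [(0, 'C'), (2, 'T'), (5, 'G')])"""
    if not variants:
        return 0, []
    origin = variants[0][0]          # correction factor ('123' in the article's example)
    rel, prev = [], origin
    for pos, base in variants:
        rel.append((pos - prev, base))
        prev = pos
    return origin, rel

def to_absolute(origin, rel):
    """Recover absolute coordinates from the correction factor and the intervals."""
    out, pos = [], origin
    for delta, base in rel:
        pos += delta
        out.append((pos, base))
    return out

variants = [(123, 'C'), (125, 'T'), (130, 'G')]
origin, rel = to_relative(variants)
print(origin, ''.join(f"{d}{b}" for d, b in rel))   # 123 0C2T5G
assert to_absolute(origin, rel) == variants
```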
Reference sequence Selection of a reference sequence for relative compression can affect compression performance. Choosing a consensus reference sequence over a more specific reference sequence (e.g., the revised Cambridge Reference Sequence) can result in a higher compression ratio because the consensus reference may contain less bias in its data. Knowledge about the source of the sequence being compressed, however, may be exploited to achieve greater compression gains. The idea of using multiple reference sequences has been proposed. Brandon et al. (2009) alluded to the potential use of ethnic group-specific reference sequence templates, using the compression of mitochondrial DNA variant data as an example. The authors found biased haplotype distribution in the mitochondrial DNA sequences of Africans, Asians, and Eurasians relative to the revised Cambridge Reference Sequence. Their result suggests that the revised Cambridge Reference Sequence may not always be optimal because a greater number of variants need to be stored when it is used against data from ethnically distant individuals. Additionally, a reference sequence can be designed based on statistical properties or engineered to improve the compression ratio. Encoding schemes The application of different types of encoding schemes has been explored to encode variant bases and genomic coordinates. Fixed codes, such as the Golomb code and the Rice code, are suitable when the variant or coordinate (represented as integer) distribution is well defined. Variable codes, such as the Huffman code, provide a more general entropy encoding scheme when the underlying variant and/or coordinate distribution is not well-defined (this is typically the case in genomic sequence data). List of genomic re-sequencing data compression tools The compression ratio of currently available genomic data compression tools ranges between 65-fold and 1,200-fold for human genomes. Very close variants or revisions of the same genome can be compressed very efficiently (for example, a compression ratio of 18,133 was reported for two revisions of the same A. thaliana genome, which are 99.999% identical). However, such compression is not indicative of the typical compression ratio for different genomes (individuals) of the same organism. The most common encoding scheme amongst these tools is Huffman coding, which is used for lossless data compression. References Genomics techniques
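A complementary sketch of one of the fixed codes mentioned above: Rice coding, the power-of-two special case of the Golomb code, applied here to the interval integers from the relative-coordinate example; the parameter k = 2 is an arbitrary choice for illustration.

```python
def rice_encode(n, k):
    """Rice code of a non-negative integer n with parameter k (Golomb with M = 2**k):
    the quotient in unary, a '0' terminator, then k binary remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b')

def rice_decode(bits, k):
    """Decode a single Rice-coded integer; returns (value, bits_consumed)."""
    q = 0
    while bits[q] == '1':
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r, q + 1 + k

deltas = [0, 2, 5]                          # intervals from the '0C2T5G' example
code = ''.join(rice_encode(d, 2) for d in deltas)
print(code)                                 # '000' + '010' + '1001'

pos, decoded = 0, []
while pos < len(code):
    value, used = rice_decode(code[pos:], 2)
    decoded.append(value)
    pos += used
assert decoded == deltas
```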
Compression of genomic sequencing data
[ "Chemistry", "Biology" ]
1,132
[ "Genetics techniques", "Genomics techniques", "Molecular biology techniques" ]
34,946,963
https://en.wikipedia.org/wiki/Fusion%20ignition
Fusion ignition is the point at which a nuclear fusion reaction becomes self-sustaining. This occurs when the energy being given off by the reaction heats the fuel mass more rapidly than it cools. In other words, fusion ignition is the point at which the increasing self-heating of the nuclear fusion removes the need for external heating. This is quantified by the Lawson criterion. Ignition can also be defined by the fusion energy gain factor. In the laboratory, fusion ignition defined by the Lawson criterion was first achieved in August 2021, and ignition defined by the energy gain factor was achieved in December 2022, both by the U.S. National Ignition Facility. Research Ignition should not be confused with breakeven, a similar concept that compares the total energy being given off to the energy being used to heat the fuel. The key difference is that breakeven ignores losses to the surroundings, which do not contribute to heating the fuel, and thus are not able to make the reaction self-sustaining. Breakeven is an important goal in the fusion energy field, but ignition is required for a practical energy producing design. In nature, stars reach ignition at temperatures similar to that of the Sun, around 15 million kelvins (27 million degrees F). Stars are so large that the fusion products will almost always interact with the plasma before their energy can be lost to the environment at the outside of the star. In comparison, man-made reactors are far less dense and much smaller, allowing the fusion products to easily escape the fuel. To offset this, much higher rates of fusion are required, and thus much higher temperatures; most man-made fusion reactors are designed to work at temperatures over 100 million kelvins (180 million degrees F). Fusion ignition was first achieved by humans in the cores of detonating thermonuclear weapons. A thermonuclear weapon uses a conventional fission (U-235 or Pu-239/241) "sparkplug" to generate high pressures and compress a rod of fusion fuel (usually lithium deuteride). The fuel reaches high enough pressures and densities to ignite, releasing large amounts of energy and neutrons in the process. The National Ignition Facility at Lawrence Livermore National Laboratory performs laser-driven inertial confinement fusion experiments that achieve fusion ignition. This is similar to a thermonuclear weapon, but the National Ignition Facility uses a 1.8 MJ laser system instead of a fission weapon to compress the fuel, and uses a much smaller amount of fuel (a mixture of deuterium and tritium, which are both isotopes of hydrogen). In January 2012, National Ignition Facility Director Mike Dunne predicted in a Photonics West 2012 plenary talk that ignition would be achieved at NIF by October 2012. By 2022 the NIF had achieved ignition. Based on the tokamak reactor design, the ITER is intended to sustain fusion mostly by internal fusion heating and yield in its plasma a ten-fold return on power. Construction is expected to be completed in 2025. Experts believe that achieving fusion ignition is the first step towards electricity generation using fusion power. 2021 and 2022 ignition reports The National Ignition Facility at the Lawrence Livermore National Laboratory in California reported in 2021 that it had triggered ignition in the laboratory on 8 August 2021, for the first time in the over-60-year history of the ICF program. The shot yielded 1.3 megajoules of fusion energy, an 8-fold improvement on tests done in spring 2021. 
NIF estimates that the laser supplied 1.9 megajoules of energy, 230 kilojoules of which reached the fuel capsule. This corresponds to a total scientific energy gain of 0.7 and a capsule energy gain of 6. While the experiment fell short of ignition as defined by the National Academy of Sciences – a total energy gain greater than one – most people working in the field viewed the experiment as the demonstration of ignition as defined by the Lawson criterion. In August 2022, the results of the experiment were confirmed in three peer-reviewed papers: one in Physical Review Letters and two in Physical Review E. Throughout 2022, the NIF researchers tried and failed to replicate the August result. However, on 13 December 2022, the United States Department of Energy announced via Twitter that an experiment on December 5 had surpassed the August result, achieving a scientific gain of 1.5, surpassing the National Academy of Sciences definition of ignition. See also Burning plasma Inertial confinement fusion Laser Mégajoule Timeline of nuclear fusion References External links National Ignition Facility Laser Megajoule Ignition
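The gain figures quoted above follow directly from the reported energies; a short arithmetic check using only the numbers given in the article for the 8 August 2021 shot (the laser energy for the December 2022 shot is not stated here, so it is not worked through).

```python
# Energy figures for the 8 August 2021 NIF shot, as quoted above (in megajoules).
fusion_yield = 1.3      # fusion energy released by the shot
laser_energy = 1.9      # energy supplied by the laser system
capsule_energy = 0.230  # portion of the laser energy that reached the fuel capsule

scientific_gain = fusion_yield / laser_energy   # "total scientific energy gain"
capsule_gain = fusion_yield / capsule_energy    # "capsule energy gain"

print(f"scientific gain: {scientific_gain:.2f}")  # 0.68, reported as about 0.7
print(f"capsule gain:    {capsule_gain:.1f}")     # 5.7, reported as about 6
```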
Fusion ignition
[ "Physics", "Chemistry" ]
935
[ "Nuclear fusion", "Nuclear physics" ]
31,813,101
https://en.wikipedia.org/wiki/Hopeaphenol
Hopeaphenol is a stilbenoid. It is a resveratrol tetramer. It was first isolated from Dipterocarpaceae such as Shorea ovalis. It has also been isolated from wines from North Africa. It shows an effect opposite to that of vitisin A on apoptosis of myocytes isolated from the adult rat heart. See also Phenolic compounds in wine References Resveratrol oligomers Natural phenol tetramers Wine chemistry
Hopeaphenol
[ "Chemistry" ]
99
[ "Wine chemistry", "Alcohol chemistry" ]
31,813,996
https://en.wikipedia.org/wiki/Basel%20Declaration
The Basel Declaration is a call for greater transparency and communication on the use of animals in research. It is supported by an international scientific non-profit society, the Basel Declaration Society, a forum of scientists established to foster the greatest possible dissemination and acceptance of the Declaration, and dialogue with the public and stakeholders. Summary The Declaration was issued on 30 November 2010 by over 60 scientists from Switzerland, Germany, the United Kingdom, France and Sweden. The signatories commit to accepting greater responsibility in animal experiments and to intensive cooperation with the public in the form of a dialogue free of prejudice. At the same time, they demand that essential animal experiments for obtaining research results remain permitted both now and in the future. With their Basel Declaration, researchers are seeking to achieve a more impartial approach to scientific issues by the public and a more trusting and reliable cooperation with national and international decision makers. The signatories to the Basel Declaration are actively seeking to show that science and animal welfare are not diametrically opposed and to make a constructive contribution to the dialogue taking place in society – for example in the incorporation of the new EU Directive of 22 September 2010 on the protection of animals used for scientific purposes into national laws.[1] (The revised EU Directive provides for the use of fewer laboratory animals for scientific purposes in the future and better reconciles the needs of research with the protection of animals without making research more difficult. The EU Member States must incorporate the Directive into national law within two years and apply these national regulations from January 2013.) Alternatives to animal experiments “Animal experiments will remain necessary in biomedical research for the foreseeable future, but we are constantly working to refine the methods with animal welfare in mind.”[2] The signatories to the Declaration commit, amongst other things, to the use of animal experiments only when the research concerns fundamentally important knowledge and no alternative methods are available. As part of this commitment, their two-day conference in November 2010 ended with an affirmation of their allegiance to the 3R principle “Reduction, Refinement, Replacement”: The 3R principle (replace, reduce, refine) has its origins with William M. S. Russell & Rex L. Burch, who published their “Principles of Humane Experimental Technique” in 1959. These principles are regarded internationally as the guideline for avoiding or reducing animal experiments and the suffering of laboratory animals: Replacement: replacement of animal experiments by methods that do not involve animals Reduction: reduction in the number of animals in unavoidable animal experiments Refinement: improvement in experimental procedures, so that unavoidable animal experiments cause the animals as little distress and suffering as possible Need for improved communication The participants in the symposium that adopted the Basel Declaration unanimously agreed that science must not only take a clear stand with regard to the responsible handling of laboratory animals, but must also show greater transparency toward the public.[3] To make their motivation and methods more comprehensible to the public and the decision makers, the researchers aim to cooperate more closely with politicians, the media and schools in the future and to give greater importance to the communication of science.
Obligation to the public The authors of the Basel Declaration acknowledge the need for greater discussion of animal experiment issues in public and also of the risks of research approaches and possible misuse of new technological developments. In addition, they declare their intention to communicate not only results and scientific controversies, but also processes and approval procedures in the scientific process, in order to foster a deeper understanding of research.[4] With regard to the improvement of information for the public on research involving experiments, the signatories to the Basel Declaration commit to the following: We communicate openly and with transparency – also with regard to animal experiments. We proactively address the problems and openly declare that part of our research involves animal experiments. We grant journalists access to our laboratories. We invite opinion formers, media people and teachers to enter into a dialogue with researchers on the problem area of basic research. We endeavor to use a language that is comprehensible to the general public. We declare our solidarity with all researchers who have to rely on animal experiments. We are united in rejecting unjustified allegations against individuals. We shall jointly and publicly condemn vandalism, threats and other criminal acts. Animal experiments in basic research Modern medicine is based on discoveries of basic biological research and their implementation in applied research. The initial signatories to the Basel Declaration see the tendency to restrict animal experiments, especially in the field of basic research, as a major risk. They maintain that no stage of research (neither basic nor applied research) must be categorically excluded from those purposes of animal experiments that are deemed permissible. Apart from the difficulty of differentiating the two stages in the field of medical research, applied research is generally inconceivable without basic research. Basic research is not an end in itself, but serves as the basis for further consideration. Basic and applied research are all part of the same continuum in biomedical research, and the assignment of a research project to one part or the other is often rather arbitrary. On the other hand, the categorization of an experiment as basic research does not yet justify the use of animals per se. The demonstration that an animal experiment is indispensable is just as necessary as weighing the interests of animal welfare against the expected benefits of the research objective. Better animal models Genetically modified animals represent an important instrument of modern biomedical research. In many cases, species higher on the evolutionary scale in animal experiments can be replaced by the use of simpler organisms bred by means of gene technology, such as fruit flies, laboratory worms or fish. This plays a major part in helping to promote the 3R principles of replacement, reduction and refinement of animal experiments. Disease models in genetically modified animals are mainly in rodents, such as mice and rats. However, they cannot adequately depict human physiology in all cases. Research in animal models using mammals, such as even-toed ungulates (especially for animal health) and in very rare cases also monkeys, remains necessary according to participants at the symposium on the Basel Declaration.
They see the following advantages in the use of genetically modified organisms in animal experiments: Possibility of developing tests for therapeutic antibodies that are being increasingly used in modern medical therapy in humans Production of recombinant products such as anticoagulants or therapeutic antibodies Research on disease mechanisms in complex organisms (e.g. diabetes) Research and understanding of the underlying mechanisms and metabolic pathways in human diseases Fundamental principles for efficient and targeted treatment of diseases such as leukemia, hypertension or obesity Experiments in non-human primates The participants at the symposium on the Basel Declaration at the end of November 2010 summarized the outcome of their discussions on the subject of experiments in non-human primates as follows: 1. Research in non-human primates is an essential part of biomedical progress in the 21st century. Research in non-human primates has led to the development of crucial medical treatments, such as vaccines against poliomyelitis and hepatitis (jaundice), as well as to improved drug safety thanks to indispensable contributions to the basic principles of physiology, immunology, infectious diseases, genetics, pharmacology, reproduction biology and neuroscience. We predict an increased need for research using non-human primates in the future, e.g. for personalized medicine and neurodegenerative diseases in an aging society. This continuing need is also reflected in the EU Directive of 2010 (2010 /63/EU) on animals used for scientific purposes, in which it is recognized that research in non-human primates will remain irreplaceable in the foreseeable future. 2. Biomedical research cannot be divided into “basic research” and “applied research”: it is a continuum that includes both basic studies on normal functions and their failure in diseases and also the development of treatments. This fundamental research is indispensable for biomedical progress. Any categorical restriction of research in non-human primates in basic research is shortsighted and not justified by any scientific evidence. 3. Researchers working with non-human primates are committed to the 3R principle on the replacement, reduction and refinement of animal experiments. Research in animals must satisfy the highest ethical standards. Non-human primates are only used when there are no alternatives. We are working constantly and intensively on refining experimental methods and keeping the number of non-human primates used to a minimum. A strong commitment to the 3Rs guarantees the best science and the best welfare of the animals. 4. We are committed to informing the public and providing objective information on research in non-human primates. External links Basel Declaration in English https://de.basel-declaration.org/basel-declaration-de/assets/basel_declaration_en.pdf Basel Declaration homepage http://www.basel-declaration.org Basler Declaration in Nature http://www.nature.com/news/2010/101206/full/468742a.html Basel Declaration in Scientific American: http://www.scientificamerican.com/article.cfm?id=basel-declaration-defends-animal Animal testing
Basel Declaration
[ "Chemistry" ]
1,872
[ "Animal testing" ]
31,816,022
https://en.wikipedia.org/wiki/Holographic%20weapon%20sight
A holographic weapon sight or holographic diffraction sight is a non-magnifying gunsight that allows the user to look through a glass optical window and see a holographic reticle image superimposed at a distance on the field of view. The hologram of the reticle is built into the window and is illuminated by a laser diode. History The first-generation holographic sight was introduced by EOTech—then an ERIM subsidiary—at the 1996 SHOT Show, under the trade name HoloSight by Bushnell, with whom the company was partnered at the time, initially aiming for the civilian sport shooting and hunting market. It won the Optic of the Year Award from the Shooting Industry Academy of Excellence. EOTech was the only company that manufactured holographic sights until early 2017, when Vortex introduced the Razor AMG UH-1 into the market as a competing product. Vortex introduced the Gen II model in mid-July 2020, which later replaced the original UH-1. Design Holographic weapon sights use a laser transmission hologram of a reticle image that is recorded in three-dimensional space onto holographic film at the time of manufacture. This image is part of the optical viewing window. The recorded hologram is illuminated by the collimated light of a laser diode built into the sight. The sight can be adjusted for range and windage by simply tilting or pivoting the holographic grating. To compensate for any change in the laser wavelength due to temperature, the sight employs a holographic grating that disperses the laser light by an equal amount but in the opposite direction to the hologram forming the aiming reticle. The optical window in a holographic weapon sight looks like a piece of clear glass with an illuminated reticle in the middle. The aiming reticle can be an infinitely small dot whose perceived size is given by the acuity of the eye. For someone with 20/20 vision, it is about 1 minute of arc (0.3 mrad). Holographic sights can be paired with "red dot magnifiers" to better engage farther targets. Parallax error Like the reflector sight, the holographic sight is not "parallax free", having an aim-point that can move with eye position. This can be compensated for by having a holographic image that is set at a finite distance, with parallax due to eye movement being the size of the optical window at close range and diminishing to zero at the set distance, usually around the target range of 100 yards. Compared to reflector sights Light transmission Since the reticle is a transmission hologram, illuminated by a laser shining through the hologram to present a reconstructed image, there is no need for the sight "window" to be partially blocked by a semi-silvered or dielectric dichroic coating needed to reflect an image, as in standard reflex sights. Holographic sights therefore have the potential for better light transmission than reflector sights. Manufacturing costs Holographic sights are considerably more expensive than red dot sights, due to their complexity as well as there being only two manufacturers of holographic sights. Size Holographic sights are generally bulkier than reflex sights and require a rifle to mount, while red dot sights have been made small enough to fit handguns. Battery life Holographic sights have shorter battery life when compared to reflex sights that use LEDs, such as red dot sights.
The laser diode in a holographic sight uses more power and has more complex driving electronics than a standard LED of equivalent brightness, reducing the amount of time a holographic sight can run on a single set of batteries compared to a red dot sight: around 600 hours for typical holographic sights, versus sometimes tens of thousands of hours for red dot sights. For example, the Vortex Razor AMG UH-1 holographic sight has been quoted as having an expected battery life of 1,000 to 1,500 hours (1½ to 2 months) on medium setting. The Aimpoint CompM5s red dot sight has an expected battery life of around 8,000 hours or more (1 to 5 years), depending on the setting. See also Fire-control system Collimator sight Reflex sight Laser sight (firearms) Prism sight, a type of telescopic sight Glossary of firearms terminology Glossary of military abbreviations List of laser articles References External links Red Dot Sights / Reflex Sights & Holosights Explained – Electronic Sights; A look at why they exist, how they work, and how you use them. Firearm sights Optical devices
Holographic weapon sight
[ "Materials_science", "Engineering" ]
960
[ "Glass engineering and science", "Optical devices" ]
31,818,344
https://en.wikipedia.org/wiki/Irish%20logarithm
The Irish logarithm was a system of number manipulation invented by Percy Ludgate for machine multiplication. The system used a combination of mechanical cams as lookup tables and mechanical addition to sum pseudo-logarithmic indices to produce partial products, which were then added to produce results. The technique is similar to Zech logarithms (also known as Jacobi logarithms), but uses a system of indices original to Ludgate. Concept Ludgate's algorithm compresses the multiplication of two single decimal numbers into two table lookups (to convert the digits into indices), the addition of the two indices to create a new index which is input to a second lookup table that generates the output product. Because both lookup tables are one-dimensional, and the addition of linear movements is simple to implement mechanically, this allows a less complex mechanism than would be needed to implement a two-dimensional 10×10 multiplication lookup table. Ludgate stated that he deliberately chose the values in his tables to be as small as he could make them; given this, Ludgate's tables can be simply constructed from first principles, either via pen-and-paper methods, or a systematic search using only a few tens of lines of program code. They do not correspond to either Zech logarithms, Remak indexes or Korn indexes. Pseudocode The following is an implementation of Ludgate's Irish logarithm algorithm in the Python programming language:

table1 = [50, 0, 1, 7, 2, 23, 8, 33, 3, 14]

table2 = [
    1, 2, 4, 8, 16, 32, 64, 3, 6, 12, 24, 48,
    0, 0, 9, 18, 36, 72, 0, 0, 0, 27, 54, 5,
    10, 20, 40, 0, 81, 0, 15, 30, 0, 7, 14, 28,
    56, 45, 0, 0, 21, 42, 0, 0, 0, 0, 25, 63,
    0, 0, 0, 0, 0, 0, 0, 0, 35, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 49, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0,
]

def product(a: int, b: int) -> int:
    """Ludgate's Irish logarithm algorithm."""
    return table2[table1[a] + table1[b]]

Table 1 is taken from Ludgate's original paper; given the first table, the contents of Table 2 can be trivially derived from Table 1 and the definition of the algorithm. Note that since the last third of the second table is entirely zeros, this could be exploited to further simplify a mechanical implementation of the algorithm. See also References Further reading Boys, C.V., "A New Analytical Engine," Nature, Vol. 81, No. 2070, July 1, 1909, pp. 14–15. Randell, B., "Ludgate's analytical machine of 1909", The Computer Journal, Volume 14, Issue 3, 1971, Pages 317–326, https://doi.org/10.1093/comjnl/14.3.317 Includes the text of Ludgate's original paper. External links A detailed treatment of Ludgate's Irish Logarithms, Brian Coghlan, 2019 (Archived from original link) Transcript of "On a Proposed Analytical Machine" by Percy Ludgate (first published in Scientific Proceedings of the Royal Dublin Society 1909 vol 12 pages 77–91), containing Ludgate's own description of the Irish logarithm tables A reproduction of Ludgate's original 1909 paper, from Method for deriving Ludgate's Irish Logarithms from first principles, Brian Coghlan, 2022) Articles with example Python (programming language) code Mechanical calculators Multiplication Algorithms Irish inventions
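As a quick check (an illustration added here, not part of Ludgate's paper or the sources above), the lookup tables can be verified against ordinary single-digit multiplication:

for a in range(10):
    for b in range(10):
        assert product(a, b) == a * b   # e.g. product(7, 8) == 56

Every pair of decimal digits, including pairs involving zero (handled by the large sentinel index 50 in table1, which lands in the all-zero tail of table2), maps to the correct product.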
Irish logarithm
[ "Mathematics" ]
905
[ "Algorithms", "Mathematical logic", "Applied mathematics" ]
26,165,922
https://en.wikipedia.org/wiki/HD%2086226%20b
HD 86226 b is a gas giant exoplanet discovered by the Magellan Planet Search Program in 2010. It was confirmed in data collected by the CORALIE spectrograph on the Swiss 1.2-metre Leonhard Euler Telescope in 2012. It takes about 4.6 years to orbit its G-type star and was initially believed to have a minimum mass of 0.92 Jupiter masses. The discovery of a second planet in the system led to a revision of the mass of HD 86226 b in 2020; it is now estimated to be 0.45 Jupiter masses. See also List of exoplanets discovered in 2010 References Exoplanets discovered in 2010 Exoplanets detected by radial velocity Giant planets Hydra (constellation)
HD 86226 b
[ "Astronomy" ]
149
[ "Hydra (constellation)", "Constellations" ]
4,016,952
https://en.wikipedia.org/wiki/N-electron%20valence%20state%20perturbation%20theory
In quantum chemistry, n-electron valence state perturbation theory (NEVPT) is a perturbative treatment applicable to multireference CASCI-type wavefunctions. It can be considered as a generalization of the well-known second-order Møller–Plesset perturbation theory to multireference complete active space cases. The theory is directly integrated into many quantum chemistry packages such as MOLCAS, Molpro, DALTON, PySCF and ORCA. The research performed into the development of this theory led to various implementations. The theory here presented refers to the deployment for the single-state NEVPT, where the perturbative correction is applied to a single electronic state. Research implementations has been also developed for quasi-degenerate cases, where a set of electronic states undergo the perturbative correction at the same time, allowing interaction among themselves. The theory development makes use of the quasi-degenerate formalism by Lindgren and the Hamiltonian multipartitioning technique from Zaitsevskii and Malrieu. Theory Let be a zero-order CASCI wavefunction, defined as a linear combination of Slater determinants obtained diagonalizing the true Hamiltonian inside the CASCI space where is the projector inside the CASCI space. It is possible to define perturber wavefunctions in NEVPT as zero-order wavefunctions of the outer space (external to CAS) where electrons are removed from the inactive part (core and virtual orbitals) and added to the valence part (active orbitals). At second order of perturbation . Decomposing the zero-order CASCI wavefunction as an antisymmetrized product of the inactive part and a valence part then the perturber wavefunctions can be written as The pattern of inactive orbitals involved in the procedure can be grouped as a collective index , so to represent the various perturber wavefunctions as , with an enumerator index for the different wavefunctions. The number of these functions is relative to the degree of contraction of the resulting perturbative space. Supposing indexes and referring to core orbitals, and referring to active orbitals and and referring to virtual orbitals, the possible excitation schemes are: two electrons from core orbitals to virtual orbitals (the active space is not enriched nor depleted of electrons, therefore ) one electron from a core orbital to a virtual orbital, and one electron from a core orbital to an active orbital (the active space is enriched with one electron, therefore ) one electron from a core orbital to a virtual orbital, and one electron from an active orbital to a virtual orbital (the active space is depleted with one electron, therefore ) two electrons from core orbitals to active orbitals (active space enriched with two electrons, ) two electrons from active orbitals to virtual orbitals (active space depleted with two electrons, ) These cases always represent situations where interclass electronic excitations happen. Other three excitation schemes involve a single interclass excitation plus an intraclass excitation internal to the active space: one electron from a core orbital to a virtual orbital, and an internal active-active excitation () one electron from a core orbital to an active orbital, and an internal active-active excitation () one electron from an active orbital to a virtual orbital, and an internal active-active excitation () Totally Uncontracted Approach A possible approach is to define the perturber wavefunctions into Hilbert spaces defined by those determinants with given k and l labels. 
The determinants characterizing these spaces can be written as a partition comprising the same inactive (core + virtual) part and all possible valence (active) parts The full dimensionality of these spaces can be exploited to obtain the definition of the perturbers, by diagonalizing the Hamiltonian inside them This procedure is impractical given its high computational cost: for each space, a diagonalization of the true Hamiltonian must be performed. Computationally, is preferable to improve the theoretical development making use of the modified Dyall's Hamiltonian . This Hamiltonian behaves like the true Hamiltonian inside the CAS space, having the same eigenvalues and eigenvectors of the true Hamiltonian projected onto the CAS space. Also, given the decomposition for the wavefunction defined before, the action of the Dyall's Hamiltonian can be partitioned into stripping out the constant contribution of the inactive part and leaving a subsystem to be solved for the valence part The total energy is the sum of and the energies of the orbitals involved in the definition of the inactive part . This introduces the possibility to perform a single diagonalization of the valence Dyall's Hamiltonian on the CASCI zero-order wavefunction and evaluate the perturber energies using the property depicted above. Strongly Contracted Approach A different choice in the development of the NEVPT approach is to choose a single function for each space , leading to the Strongly Contracted (SC) scheme. A set of perturbative operators are used to produce a single function for each space, defined as the projection inside each space of the application of the Hamiltonian to the contracted zero order wavefunction. In other words, where is the projector onto the subspace. This can be equivalently written as the application of a specific part of the Hamiltonian to the zero-order wavefunction For each space, appropriate operators can be devised. We will not present their definition, as it could result overkilling. Suffice to say that the resulting perturbers are not normalized, and their norm plays an important role in the Strongly Contracted development. To evaluate these norms, the spinless density matrix of rank not higher than three between the functions are needed. An important property of the is that any other function of the space which is orthogonal to do not interact with the zero-order wavefunction through the true Hamiltonian. It is possible to use the functions as a basis set for the expansion of the first-order correction to the wavefunction, and also for the expression of the zero-order Hamiltonian by means of a spectral decomposition where are the normalized . The expression for the first-order correction to the wavefunction is therefore and for the energy is This result still misses a definition of the perturber energies , which can be defined in a computationally advantageous approach by means of the Dyall's Hamiltonian leading to Developing the first term and extracting the inactive part of the Dyall's Hamiltonian it can be obtained with equal to the sum of the orbital energies of the newly occupied virtual orbitals minus the orbital energies of the unoccupied core orbitals. The term that still needs to be evaluated is the bracket involving the commutator. This can be obtained developing each operator and substituting. To obtain the final result it is necessary to evaluate Koopmans matrices and density matrices involving only active indexes. 
An interesting case is represented by the contribution for the case, which is trivial and can be demonstrated identical to the Møller–Plesset second-order contribution NEVPT2 can therefore be seen as a generalized form of MP2 to multireference wavefunctions. Partially Contracted Approach An alternative approach, named Partially Contracted (PC) is to define the perturber wavefunctions in a subspace of with dimensionality higher than one (like in case of the Strongly Contracted approach). To define this subspace, a set of functions is generated by means of the operators, after decontraction of their formulation. For example, in the case of the operator The Partially Contracted approach makes use of functions and . These functions must be orthonormalized and purged of linear dependencies which may arise. The resulting set spans the space. Once all the spaces have been defined, we can obtain as usual a set of perturbers from the diagonalization of the Hamiltonian (true or Dyall) inside this space As usual, the evaluation of the Partially Contracted perturbative correction by means of the Dyall Hamiltonian involves simply manageable entities for nowadays computers. Although the Strongly Contracted approach makes use of a perturbative space with very low flexibility, in general it provides values in very good agreement with those obtained by the more decontracted space defined for the Partially Contracted approach. This can be probably explained by the fact that the Strongly Contracted perturbers are a good average of the totally decontracted perturbative space. The Partially Contracted evaluation has a very little overhead in computational cost with respect to the Strongly Contracted one, therefore they are normally evaluated together. Properties NEVPT is blessed with many important properties, making the approach very solid and reliable. These properties arise both from the theoretical approach used and on the Dyall's Hamiltonian particular structure: Size consistency: NEVPT is size consistent (strict separable). Briefly, if A and B are two non-interacting systems, the energy of the supersystem A-B is equal to the sum of the energy of A plus the energy of B taken by themselves (). This property is of particular importance to obtain correctly behaving dissociation curves. Absence of intruder states: in perturbation theory, divergencies can occur if the energy of some perturber happens to be nearly equal to the energy of the zero-order wavefunction. This situation, which is due to the presence of an energy difference at the denominator, can be avoided if the energies associated to the perturbers are guaranteed to be never nearly equal to the zero-order energy. NEVPT satisfies this requirement. Invariance under active orbital rotation: The NEVPT results are stable if an intraclass active-active orbital mixing occurs. This arises both from the structure of the Dyall Hamiltonian and the properties of a CASSCF wavefunction. This property has been also extended to the intraclass core-core and virtual-virtual mixing, thanks to the Non Canonical NEVPT approach, allowing to apply a NEVPT evaluation without performing an orbital canonization (which is required, as we saw previously) Spin purity is guaranteed: The resulting wave functions are guaranteed to be spin pure, due to the spin-free formalism. Efficiency: although not a formal theoretical property, computational efficiency is highly important for the evaluation on medium-size molecular systems. 
The current limit of the NEVPT application is largely dependent on the feasibility of the previous CASSCF evaluation, which scales factorially with respect to the active space size. The NEVPT implementation using the Dyall's Hamiltonian involves the evaluation of Koopmans' matrices and density matrices up to the four-particle density matrix spanning only active orbitals. This is particularly convenient, given the small size of currently used active spaces. Partitioning into additive classes: The perturbative correction to the energy is additive on eight different contributions. Although the evaluation of each contribution has a different computational cost, this fact can be used to improve performance, by parallelizing each contribution to a different processor. See also Electron correlation Perturbation theory (quantum mechanics) Post-Hartree–Fock References Electron states Computational chemistry
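For orientation only (a schematic reminder, not taken from the article or its references): every contraction scheme described above ultimately evaluates the standard second-order Rayleigh–Schrödinger expression

E^{(2)} = \sum_{l,\mu} \frac{\bigl|\langle \Psi_m^{(0)} \vert \hat{H} \vert \Psi_{l,\mu}^{(k)} \rangle\bigr|^{2}}{E_m^{(0)} - E_{l,\mu}^{(k)}}

where the perturber functions \Psi_{l,\mu}^{(k)} and their energies E_{l,\mu}^{(k)} are those defined by the chosen scheme (totally uncontracted, strongly contracted or partially contracted) and the denominators are obtained from Dyall's Hamiltonian rather than from a one-electron Fock operator. The index notation here is a conventional choice and may differ from that of the original papers.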
N-electron valence state perturbation theory
[ "Chemistry" ]
2,351
[ "Electron", "Theoretical chemistry", "Computational chemistry", "Electron states" ]
4,018,181
https://en.wikipedia.org/wiki/Algebraic%20specification
Algebraic specification is a software engineering technique for formally specifying system behavior. It was a very active subject of computer science research around 1980. Overview Algebraic specification seeks to systematically develop more efficient programs by: formally defining types of data, and mathematical operations on those data types abstracting implementation details, such as the size of representations (in memory) and the efficiency of obtaining outcome of computations formalizing the computations and operations on data types allowing for automation by formally restricting operations to this limited set of behaviors and data types. An algebraic specification achieves these goals by defining one or more data types, and specifying a collection of functions that operate on those data types. These functions can be divided into two classes: Constructor functions: Functions that create or initialize the data elements, or construct complex elements from simpler ones. The set of available constructor functions is implied by the specification's signature. Additionally, a specification can contain equations defining equivalences between the objects constructed by these functions. Whether the underlying representation is identical for different but equivalent constructions is implementation-dependent. Additional functions: Functions that operate on the data types, and are defined in terms of the constructor functions. Examples Consider a formal algebraic specification for the boolean data type. One possible algebraic specification may provide two constructor functions for the data-element: a true constructor and a false constructor. Thus, a boolean data element could be declared, constructed, and initialized to a value. In this scenario, all other connective elements, such as XOR and AND, would be additional functions. Thus, a data element could be instantiated with either "true" or "false" value, and additional functions could be used to perform any operation on the data element. Alternatively, the entire system of boolean data types could be specified using a different set of constructor functions: a false constructor and a not constructor. In that case, an additional function true could be defined to yield the value not false, and an equation should be added. The algebraic specification therefore describes all possible states of the data element, and all possible transitions between states. For a more complicated example, the integers can be specified (among many other ways, and choosing one of the many formalisms) with two constructors 1 : Z (_ - _) : Z × Z -> Z and three equations: (1 - (1 - p)) = p ((1 - (n - p)) - 1) = (p - n) ((p1 - n1) - (n2 - p2)) = (p1 - (n1 - (p2 - n2))) It is easy to verify that the equations are valid, given the usual interpretation of the binary "minus" function. (The variable names have been chosen to hint at positive and negative contributions to the value.) With a little effort, it can be shown that, applied left to right, they also constitute a confluent and terminating rewriting system, mapping any constructed term to an unambiguous normal form representing the respective integer: ... (((1 - 1) - 1) - 1) ((1 - 1) - 1) (1 - 1) 1 (1 - ((1 - 1) - 1)) (1 - (((1 - 1) - 1) - 1)) ... Therefore, any implementation conforming to this specification will behave like the integers, or possibly a restricted range of them, like the usual integer types found in most programming languages. 
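To make the rewriting argument concrete, the following Python sketch (an illustration with invented helper names, not part of the original article) encodes the two constructors of the integer specification and applies the three equations, read left to right, as rewrite rules until a normal form is reached:

ONE = '1'

def sub(a, b):
    """The binary constructor (_ - _)."""
    return ('-', a, b)

def rewrite_once(t):
    """Apply the first matching rule, at the root if possible, otherwise
    inside a subterm; return None when no rule applies anywhere."""
    if isinstance(t, tuple):
        _, a, b = t
        # Rule 1: (1 - (1 - p)) -> p
        if a == ONE and isinstance(b, tuple) and b[1] == ONE:
            return b[2]
        # Rule 2: ((1 - (n - p)) - 1) -> (p - n)
        if (b == ONE and isinstance(a, tuple) and a[1] == ONE
                and isinstance(a[2], tuple)):
            n, p = a[2][1], a[2][2]
            return sub(p, n)
        # Rule 3: ((p1 - n1) - (n2 - p2)) -> (p1 - (n1 - (p2 - n2)))
        if isinstance(a, tuple) and isinstance(b, tuple):
            p1, n1, n2, p2 = a[1], a[2], b[1], b[2]
            return sub(p1, sub(n1, sub(p2, n2)))
        for i in (1, 2):  # no rule applies at the root: try the subterms
            r = rewrite_once(t[i])
            if r is not None:
                parts = list(t)
                parts[i] = r
                return tuple(parts)
    return None

def normalize(t):
    """Rewrite until no rule applies (the rules are claimed to be confluent
    and terminating, so the result is the unique normal form)."""
    while True:
        r = rewrite_once(t)
        if r is None:
            return t
        t = r

def value(t):
    """Interpret a term with the usual meaning of binary 'minus'."""
    return 1 if t == ONE else value(t[1]) - value(t[2])

two = sub(ONE, sub(sub(ONE, ONE), ONE))   # (1 - ((1 - 1) - 1)), i.e. 2
diff = sub(two, two)                      # (2 - 2), not yet in normal form
norm = normalize(diff)
assert norm == sub(ONE, ONE)              # the normal form representing 0
assert value(norm) == 0

Any implementation that identifies exactly those terms the equations identify (here, terms sharing a normal form) conforms to the specification; the interpreter value plays the role of one such implementation.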
See also Common Algebraic Specification Language Formal specification OBJ Notes Formal methods
Algebraic specification
[ "Engineering" ]
744
[ "Software engineering", "Formal methods" ]
4,018,549
https://en.wikipedia.org/wiki/Miller%20process
The Miller process is an industrial-scale chemical procedure used to refine gold to a high degree of purity (99.5%). It was patented by Francis Bowyer Miller in 1867. This chemical process involves blowing chlorine gas through molten, but (slightly) impure, gold. Nearly all metal contaminants react to form chlorides but gold does not at these high temperatures. The other metals volatilize or form a low density slag on top of the molten gold. When all impurities have been removed from the gold (observable by a change in flame color) the gold is removed and processed in the manner required for sale or use. The resulting gold is 99.5% pure, but of lower purity than gold produced by the other common refining method, the Wohlwill process, which produces gold of up to 99.999% purity. The Wohlwill process is commonly used for producing high-purity gold, such as in electronics work, where exacting standards of purity are required. When highest purity gold is not required, refiners use the Miller process due to its relative ease, quicker turnaround times, and because it does not tie up the large amount of gold in the form of chloroauric acid which the Wohlwill process permanently requires for the electrolyte. See also Gold parting References Metallurgical processes Gold industry
Miller process
[ "Chemistry", "Materials_science" ]
289
[ "Metallurgical processes", "Metallurgy" ]
4,019,573
https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon%20divergence
In probability theory and statistics, the Jensen–Shannon divergence, named after Johan Jensen and Claude Shannon, is a method of measuring the similarity between two probability distributions. It is also known as information radius (IRad) or total divergence to the average. It is based on the Kullback–Leibler divergence, with some notable (and useful) differences, including that it is symmetric and it always has a finite value. The square root of the Jensen–Shannon divergence is a metric often referred to as Jensen–Shannon distance. The similarity between the distributions is greater when the Jensen-Shannon distance is closer to zero. Definition Consider the set of probability distributions where is a set provided with some σ-algebra of measurable subsets. In particular we can take to be a finite or countable set with all subsets being measurable. The Jensen–Shannon divergence (JSD) is a symmetrized and smoothed version of the Kullback–Leibler divergence . It is defined by where is a mixture distribution of and . The geometric Jensen–Shannon divergence (or G-Jensen–Shannon divergence) yields a closed-form formula for divergence between two Gaussian distributions by taking the geometric mean. A more general definition, allowing for the comparison of more than two probability distributions, is: where and are weights that are selected for the probability distributions , and is the Shannon entropy for distribution . For the two-distribution case described above, Hence, for those distributions Bounds The Jensen–Shannon divergence is bounded by 1 for two probability distributions, given that one uses the base 2 logarithm: . With this normalization, it is a lower bound on the total variation distance between P and Q: . With base-e logarithm, which is commonly used in statistical thermodynamics, the upper bound is . In general, the bound in base b is : . A more general bound, the Jensen–Shannon divergence is bounded by for more than two probability distributions: . Relation to mutual information The Jensen–Shannon divergence is the mutual information between a random variable associated to a mixture distribution between and and the binary indicator variable that is used to switch between and to produce the mixture. Let be some abstract function on the underlying set of events that discriminates well between events, and choose the value of according to if and according to if , where is equiprobable. That is, we are choosing according to the probability measure , and its distribution is the mixture distribution. We compute It follows from the above result that the Jensen–Shannon divergence is bounded by 0 and 1 because mutual information is non-negative and bounded by in base 2 logarithm. One can apply the same principle to a joint distribution and the product of its two marginal distribution (in analogy to Kullback–Leibler divergence and mutual information) and to measure how reliably one can decide if a given response comes from the joint distribution or the product distribution—subject to the assumption that these are the only two possibilities. Quantum Jensen–Shannon divergence The generalization of probability distributions on density matrices allows to define quantum Jensen–Shannon divergence (QJSD). It is defined for a set of density matrices and a probability distribution as where is the von Neumann entropy of . 
This quantity was introduced in quantum information theory, where it is called the Holevo information: it gives the upper bound for amount of classical information encoded by the quantum states under the prior distribution (see Holevo's theorem). Quantum Jensen–Shannon divergence for and two density matrices is a symmetric function, everywhere defined, bounded and equal to zero only if two density matrices are the same. It is a square of a metric for pure states, and it was recently shown that this metric property holds for mixed states as well. The Bures metric is closely related to the quantum JS divergence; it is the quantum analog of the Fisher information metric. Jensen–Shannon centroid The centroid C* of a finite set of probability distributions can be defined as the minimizer of the average sum of the Jensen-Shannon divergences between a probability distribution and the prescribed set of distributions: An efficient algorithm (CCCP) based on difference of convex functions is reported to calculate the Jensen-Shannon centroid of a set of discrete distributions (histograms). Applications The Jensen–Shannon divergence has been applied in bioinformatics and genome comparison, in protein surface comparison, in the social sciences, in the quantitative study of history, in fire experiments, and in machine learning. Notes External links Ruby gem for calculating JS divergence Python code for calculating JS distance THOTH: a python package for the efficient estimation of information-theoretic quantities from empirical data statcomp R library for calculating complexity measures including Jensen-Shannon Divergence Statistical distance
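A small self-contained sketch (an illustration, not from the article) of the base-2 definition for discrete distributions, under which the divergence is bounded by 1 and its square root is the Jensen–Shannon distance:

import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def js_divergence(p, q):
    """JSD(P || Q) = (1/2) D(P || M) + (1/2) D(Q || M), with M the equal-weight mixture."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

print(js_divergence([1, 0], [0, 1]))          # 1.0: disjoint supports give the maximum in base 2
print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0: identical distributions
p, q = [0.5, 0.5, 0.0], [0.0, 0.5, 0.5]
print(np.sqrt(js_divergence(p, q)))           # the Jensen-Shannon distance, a metric

Recent versions of SciPy also expose scipy.spatial.distance.jensenshannon, which returns the distance (the square root of the divergence) directly, with a selectable logarithm base.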
Jensen–Shannon divergence
[ "Physics" ]
993
[ "Physical quantities", "Statistical distance", "Distance" ]
4,021,238
https://en.wikipedia.org/wiki/Transcription%20factor%20II%20D
Transcription factor II D (TFIID) is one of several general transcription factors that make up the RNA polymerase II preinitiation complex. RNA polymerase II holoenzyme is a form of eukaryotic RNA polymerase II that is recruited to the promoters of protein-coding genes in living cells. It consists of RNA polymerase II, a subset of general transcription factors, and regulatory proteins known as SRB proteins. Before the start of transcription, the transcription Factor II D (TFIID) complex binds to the core promoter DNA of the gene through specific recognition of promoter sequence motifs, including the TATA box, Initiator, Downstream Promoter, Motif Ten, or Downstream Regulatory elements. Functions Coordinates the activities of more than 70 polypeptides required for initiation of transcription by RNA polymerase II Binds to the core promoter to position the polymerase properly Serves as the scaffold for assembly of the remainder of the transcription complex Acts as a channel for regulatory signals Structure TFIID is itself composed of TBP and several subunits called TATA-binding protein Associated Factors (TBP-associated factors, or TAFs). In a test tube, only TBP is necessary for transcription at promoters that contain a TATA box. TAFs, however, add promoter selectivity, especially if there is no TATA box sequence for TBP to bind to. TAFs are included in two distinct complexes, TFIID and B-TFIID. The TFIID complex is composed of TBP and more than eight TAFs. But, the majority of TBP is present in the B-TFIID complex, which is composed of TBP and TAFII170 (BTAF1) in a 1:1 ratio. TFIID and B-TFIID are not equivalent, since transcription reactions utilizing TFIID are responsive to gene specific transcription factors such as SP1, while reactions reconstituted with B-TFIID are not. Subunits in the TFIID complex include: TBP (TATA binding protein), or: TBP-related factors in animals (TBPL1; TBPL2) TAF1 (TAFII250) TAF2 (CIF150) TAF3 (TAFII140) TAF4 (TAFII130/135) TAF4B (TAFII105) TAF5 (TAFII100) TAF6 (TAFII70/80) TAF7 (TAFII55) TAF8 (TAFII43) TAF9 (TAFII31/32) TAF9B (TAFII31L) TAF10 (TAFII30) TAF11 (TAFII28) TAF12 (TAFII20/15) TAF13 (TAFII18) TAF15 (TAFII68) See also Eukaryotic transcription General transcription factor Preinitiation complex Regulation of gene expression RNA polymerase II holoenzyme TATA binding protein Transcription (genetics) References External links 3D electron microscopy structures of TFIID from the EM Data Bank(EMDB) Gene expression Molecular genetics Proteins Transcription factors
Transcription factor II D
[ "Chemistry", "Biology" ]
659
[ "Biomolecules by chemical classification", "Gene expression", "Signal transduction", "Molecular genetics", "Induced stem cells", "Cellular processes", "Molecular biology", "Biochemistry", "Proteins", "Transcription factors" ]
4,021,739
https://en.wikipedia.org/wiki/LaSalle%27s%20invariance%20principle
LaSalle's invariance principle (also known as the invariance principle, Barbashin-Krasovskii-LaSalle principle, or Krasovskii-LaSalle principle) is a criterion for the asymptotic stability of an autonomous (possibly nonlinear) dynamical system. Global version Suppose a system is represented as where is the vector of variables, with If a (see Smoothness) function can be found such that for all (negative semidefinite), then the set of accumulation points of any trajectory is contained in where is the union of complete trajectories contained entirely in the set . If we additionally have that the function is positive definite, i.e. , for all and if contains no trajectory of the system except the trivial trajectory for , then the origin is asymptotically stable. Furthermore, if is radially unbounded, i.e. , as then the origin is globally asymptotically stable. Local version If , when hold only for in some neighborhood of the origin, and the set does not contain any trajectories of the system besides the trajectory , then the local version of the invariance principle states that the origin is locally asymptotically stable. Relation to Lyapunov theory If is negative definite, then the global asymptotic stability of the origin is a consequence of Lyapunov's second theorem. The invariance principle gives a criterion for asymptotic stability in the case when is only negative semidefinite. Examples Simple example Example taken from "LaSalle's Invariance Principle, Lecture 23, Math 634", by Christopher Grant. Consider the vector field in the plane. The function satisfies , and is radially unbounded, showing that the origin is globally asymptotically stable. Pendulum with friction This section will apply the invariance principle to establish the local asymptotic stability of a simple system, the pendulum with friction. This system can be modeled with the differential equation where is the angle the pendulum makes with the vertical normal, is the mass of the pendulum, is the length of the pendulum, is the friction coefficient, and g is acceleration due to gravity. This, in turn, can be written as the system of equations Using the invariance principle, it can be shown that all trajectories that begin in a ball of certain size around the origin asymptotically converge to the origin. We define as This is simply the scaled energy of the system. Clearly, is positive definite in an open ball of radius around the origin. Computing the derivative, Observe that and . If it were true that , we could conclude that every trajectory approaches the origin by Lyapunov's second theorem. Unfortunately, and is only negative semidefinite since can be non-zero when . However, the set which is simply the set does not contain any trajectory of the system, except the trivial trajectory . Indeed, if at some time , , then because must be less than away from the origin, and . As a result, the trajectory will not stay in the set . All the conditions of the local version of the invariance principle are satisfied, and we can conclude that every trajectory that begins in some neighborhood of the origin will converge to the origin as . History The general result was independently discovered by J.P. LaSalle (then at RIAS) and N.N. Krasovskii, who published in 1960 and 1959 respectively. While LaSalle was the first author in the West to publish the general theorem in 1960, a special case of the theorem was communicated in 1952 by Barbashin and Krasovskii, followed by a publication of the general result in 1959 by Krasovskii. 
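To make the pendulum example above concrete, one conventional choice of state variables and scaled energy is the following (a sketch consistent with the description; the exact scaling in the article's source may differ):

\dot{x}_1 = x_2, \qquad
\dot{x}_2 = -\frac{g}{l}\sin x_1 - \frac{k}{m}\,x_2, \qquad
V(x) = \frac{g}{l}\bigl(1-\cos x_1\bigr) + \tfrac{1}{2}\,x_2^{2}, \qquad
\dot{V}(x) = -\frac{k}{m}\,x_2^{2} \le 0 .

On the set where \dot{V} = 0 (that is, x_2 = 0), the dynamics give \dot{x}_2 = -(g/l)\sin x_1, which is nonzero for 0 < |x_1| < \pi, so the only complete trajectory remaining in that set near the origin is the equilibrium itself, exactly the condition required by the local version of the invariance principle.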
See also Stability theory Lyapunov stability Original papers LaSalle, J.P. Some extensions of Liapunov's second method, IRE Transactions on Circuit Theory, CT-7, pp. 520–527, 1960. (PDF ) Krasovskii, N. N. Problems of the Theory of Stability of Motion, (Russian), 1959. English translation: Stanford University Press, Stanford, CA, 1963. Text books Lectures Texas A&M University notes on the invariance principle (PDF) NC State University notes on LaSalle's invariance principle (PDF). Caltech notes on LaSalle's invariance principle (PDF). MIT OpenCourseware notes on Lyapunov stability analysis and the invariance principle (PDF). References Stability theory Dynamical systems Principles
LaSalle's invariance principle
[ "Physics", "Mathematics" ]
955
[ "Stability theory", "Mechanics", "Dynamical systems" ]
4,022,741
https://en.wikipedia.org/wiki/Coactivator%20%28genetics%29
A coactivator is a type of transcriptional coregulator that binds to an activator (a transcription factor) to increase the rate of transcription of a gene or set of genes. The activator contains a DNA binding domain that binds either to a DNA promoter site or a specific DNA regulatory sequence called an enhancer. Binding of the activator-coactivator complex increases the speed of transcription by recruiting general transcription machinery to the promoter, therefore increasing gene expression. The use of activators and coactivators allows for highly specific expression of certain genes depending on cell type and developmental stage. Some coactivators also have histone acetyltransferase (HAT) activity. HATs form large multiprotein complexes that weaken the association of histones to DNA by acetylating the N-terminal histone tail. This provides more space for the transcription machinery to bind to the promoter, therefore increasing gene expression. Activators are found in all living organisms, but coactivator proteins are typically only found in eukaryotes because they are more complex and require a more intricate mechanism for gene regulation. In eukaryotes, coactivators are usually proteins that are localized in the nucleus. Mechanism Some coactivators indirectly regulate gene expression by binding to an activator and inducing a conformational change that then allows the activator to bind to the DNA enhancer or promoter sequence. Once the activator-coactivator complex binds to the enhancer, RNA polymerase II and other general transcription machinery are recruited to the DNA and transcription begins. Histone acetyltransferase Nuclear DNA is normally wrapped tightly around histones, making it hard or impossible for the transcription machinery to access the DNA. This association is due primarily to the electrostatic attraction between the DNA and histones as the DNA phosphate backbone is negatively charged and histones are rich in lysine residues, which are positively charged. The tight DNA-histone association prevents the transcription of DNA into RNA. Many coactivators have histone acetyltransferase (HAT) activity meaning that they can acetylate specific lysine residues on the N-terminal tails of histones. In this method, an activator binds to an enhancer site and recruits a HAT complex that then acetylates nucleosomal promoter-bound histones by neutralizing the positively charged lysine residues. This charge neutralization causes the histones to have a weaker bond to the negatively charged DNA, which relaxes the chromatin structure, allowing other transcription factors or transcription machinery to bind to the promoter (transcription initiation). Acetylation by HAT complexes may also help keep chromatin open throughout the process of elongation, increasing the speed of transcription. Acetylation of the N-terminal histone tail is one of the most common protein modifications found in eukaryotes, with about 85% of all human proteins being acetylated. Acetylation is crucial for synthesis, stability, function, regulation and localization of proteins and RNA transcripts. HATs function similarly to N-terminal acetyltransferases (NATs) but their acetylation is reversible unlike in NATs. HAT mediated histone acetylation is reversed using histone deacetylase (HDAC), which catalyzes the hydrolysis of lysine residues, removing the acetyl group from the histones. 
This causes the chromatin to close back up from their relaxed state, making it difficult for the transcription machinery to bind to the promoter, thus repressing gene expression. Examples of coactivators that display HAT activity include CARM1, CBP and EP300. Corepression Many coactivators also function as corepressors under certain circumstances. Cofactors such as TAF1 and BTAF1 can initiate transcription in the presence of an activator (act as a coactivator) and repress basal transcription in the absence of an activator (act as a corepressor). Significance Biological significance Transcriptional regulation is one of the most common ways for an organism to alter gene expression. The use of activation and coactivation allows for greater control over when, where and how much of a protein is produced. This enables each cell to be able to quickly respond to environmental or physiological changes and helps to mitigate any damage that may occur if it were otherwise unregulated. Associated disorders Mutations to coactivator genes leading to loss or gain of protein function have been linked to diseases and disorders such as birth defects, cancer (especially hormone dependent cancers), neurodevelopmental disorders and intellectual disability (ID), among many others. Dysregulation leading to the over- or under-expression of coactivators can detrimentally interact with many drugs (especially anti-hormone drugs) and has been implicated in cancer, fertility issues and neurodevelopmental and neuropsychiatric disorders. For a specific example, dysregulation of CREB-binding protein (CBP)—which acts as a coactivator for numerous transcription factors within the central nervous system (CNS), reproductive system, thymus and kidneys—has been linked to Huntington's disease, leukaemia, Rubinstein-Taybi syndrome, neurodevelopmental disorders and deficits of the immune system, hematopoiesis and skeletal muscle function. As drug targets Coactivators are promising targets for drug therapies in the treatment of cancer, metabolic disorder, cardiovascular disease and type 2 diabetes, along with many other disorders. For example, the steroid receptor coactivator (SCR) NCOA3 is often overexpressed in breast cancer, so the development of an inhibitor molecule that targets this coactivator and decreases its expression could be used as a potential treatment for breast cancer. Because transcription factors control many different biological processes, they are ideal targets for drug therapy. The coactivators that regulate them can be easily replaced with a synthetic ligand that allows for control over an increase or decrease in gene expression. Further technological advances will provide new insights into the function and regulation of coactivators at a whole-organism level and elucidate their role in human disease, which will hopefully provide better targets for future drug therapies. Known coactivators To date there are more than 300 known coregulators. 
Some examples of these coactivators include: ARA54 targets androgen receptors ATXN7L3 targets several members of the nuclear receptor superfamily BCL3 targets 9-cis retinoic acid receptor (RXR) CBP targets many transcription factors CDC25B targets steroid receptors COPS5 targets several nuclear receptors DDC targets androgen receptors EP300 targets many transcription factors KAT5 targets many nuclear receptors KDM1A targets androgen receptors Steroid receptor coactivator (SRC) family NCOA1 targets several members of the nuclear receptor superfamily NCOA2 targets several members of the nuclear receptor superfamily NCOA3 targets several nuclear receptors and transcription factors YAP targets transcription factors WWTR1 targets transcription factors See also Repressor Regulation of gene expression Transcription coregulator Translation TcoF-DB References External links Nuclear Receptor Signalling Atlas (NIH-funded research consortium and database; includes open-access PubMed-indexed journal, Nuclear Receptor Signaling) TcoF - Dragon database of transcription co-factors and transcription factor interacting proteins Gene expression Molecular genetics Proteins Transcription coregulators
Coactivator (genetics)
[ "Chemistry", "Biology" ]
1,560
[ "Biomolecules by chemical classification", "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Proteins" ]
4,022,767
https://en.wikipedia.org/wiki/Relativity%20priority%20dispute
Albert Einstein presented the theories of special relativity and general relativity in publications that either contained no formal references to previous literature, or referred only to a small number of his predecessors for fundamental results on which he based his theories, most notably to the work of Henri Poincaré and Hendrik Lorentz for special relativity, and to the work of David Hilbert, Carl F. Gauss, Bernhard Riemann, and Ernst Mach for general relativity. Subsequently, claims have been put forward about both theories, asserting that they were formulated, either wholly or in part, by others before Einstein. At issue is the extent to which Einstein and various other individuals should be credited for the formulation of these theories, based on priority considerations. Various scholars have questioned aspects of the work of Einstein, Poincaré, and Lorentz leading up to the theories’ publication in 1905. Questions raised by these scholars include asking to what degree Einstein was familiar with Poincaré's work, whether Einstein was familiar with Lorentz's 1904 paper or a review of it, and how closely Einstein followed other physicists at the time. It is known that Einstein was familiar with Poincaré's 1902 paper [Poi02], but it is not known to what extent he was familiar with other work of Poincaré in 1905. However, it is known that he knew [Poi00] in 1906, because he quoted it in [Ein06]. Lorentz's 1904 paper [Lor04] contained the transformations bearing his name that appeared in the Annalen der Physik. Some authors claim that Einstein worked in relative isolation and with restricted access to the physics literature in 1905. Others, however, disagree; a personal friend of Einstein, Maurice Solovine, acknowledged that he and Einstein pored over Poincaré's 1902 book, keeping them "breathless for weeks on end" [Rot06]. One television show raised the question of whether Einstein's wife Mileva Marić contributed to Einstein's work, but the network's ombudsman and historians on the topic say that there is no substantive evidence that she made significant contributions. Background In the history of special relativity, the most important names that are mentioned in discussions about the distribution of credit are Albert Einstein, Hendrik Lorentz, Henri Poincaré, and Hermann Minkowski. Consideration is also given to numerous other scientists for either anticipations of some aspects of the theory, or else for contributions to the development or elaboration of the theory. These include Woldemar Voigt, August Föppl, Joseph Larmor, Emil Cohn, Friedrich Hasenöhrl, Max Planck, Max von Laue, Gilbert Newton Lewis and Richard Chase Tolman, and others. In addition, polemics exist about alleged contributions of others such as Olinto De Pretto who according to some mathematical scholars did not create relativity but was the first to use the equation. Einstein's first wife Mileva Marić was featured in a PBS bibliography and claimed she made uncredited contributions, but the network later wrote that the show was "factually flawed and ultimately misleading" and these claims have no foundation according to serious scholars. In his History of the theories of ether and electricity from 1953, E. T. Whittaker claimed that relativity is the creation of Poincaré and Lorentz and attributed to Einstein's papers only little importance. However, most historians of science, like Gerald Holton, Arthur I. Miller, Abraham Pais, John Stachel, or Olivier Darrigol have other points of view. 
They admit that Lorentz and Poincaré developed the mathematics of special relativity, and many scientists originally spoke about the "Lorentz–Einstein theory". But they argue that it was Einstein who eliminated the classical ether and demonstrated the relativity of space and time. They also argue that Poincaré demonstrated the relativity of space and time only in his philosophical writings, but in his physical papers he maintained the ether as a privileged frame of reference that is perfectly undetectable, and continued (like Lorentz) to distinguish between "real" lengths and times measured by observers at rest within the aether, and "apparent" lengths and times measured by observers in motion within the aether. Darrigol summarizes: Most of the components of Einstein's paper appeared in others' anterior works on the electrodynamics of moving bodies. Poincaré and Alfred Bucherer had the relativity principle. Lorentz and Larmor had most of the Lorentz transformations, Poincaré had them all. Cohn and Bucherer rejected the ether. Poincaré, Cohn, and Abraham had a physical interpretation of Lorentz's local time. Larmor and Cohn alluded to the dilation of time. Lorentz and Poincaré had the relativistic dynamics of the electron. None of these authors, however, dared to reform the concepts of space and time. None of them imagined a new kinematics based on two postulates. None of them derived the Lorentz transformations on this basis. None of them fully understood the physical implications of these transformations. It all was Einstein's unique feat. Undisputed facts The following facts are well established and referable: In 1889, ([Poi89]), Henri Poincaré argued that the ether might be unobservable, in which case the existence of the ether is a metaphysical question, and he suggested that some day the ether concept would be thrown aside as useless. However, in the same book (Ch. 10) he considered the ether a "convenient hypothesis" and continued to use the concept also in later books in 1908 ([Poi08], Book 3) and 1912 ([Poi13], Ch. 6). In 1895, Poincaré argued that results like those obtained by Michelson and Morley (Michelson–Morley experiment) show that it seems to be impossible to detect the absolute motion of matter or the relative motion of matter in relation to the ether. In 1900 [Poi00] he called this the Principle of Relative Motion, i.e., that the laws of movement should be the same in all inertial frames. Alternative terms used by Poincaré were "relativity of space" and "principle of relativity". In 1904 he expanded that principle by saying: "The principle of relativity, according to which the laws of physical phenomena must be the same for a stationary observer as for one carried along in a uniform motion of translation, so that we have no means, and can have none, of determining whether or not we are being carried along in such a motion." However, he also stated that we do not know if this principle will turn out to be true, but that it is interesting to determine what the principle implies. In 1900([Poi00]), Poincaré published a paper in which he said that radiation could be considered as a fictitious fluid with an equivalent mass of . He derived this interpretation from Lorentz's 'theory of electrons' which incorporated Maxwell's radiation pressure. Poincaré had described a synchronization procedure for clocks at rest relative to each other in [Poi00] and again in [Poi04]. So two events, which are simultaneous in one frame of reference, are not simultaneous in another frame. 
This procedure is very similar to the one later proposed by Einstein. However, Poincaré distinguished between "local" or "apparent" time of moving clocks, and the "true" time of resting clocks in the ether. In [Poi02] he argued that "some day, no doubt, the ether will be thrown aside as useless". Lorentz's paper [Lor04], containing the transformations bearing his name, appeared in 1904. Albert Einstein in [Ein05c] derived the Lorentz equations by using the principle of the constancy of the velocity of light and the relativity principle. He was the first to argue that those principles (along with certain other basic assumptions about the homogeneity and isotropy of space, usually taken for granted by theorists) are sufficient to derive the theory—see Postulates of special relativity. He said: "The introduction of a luminiferous ether will prove to be superfluous inasmuch as the view here to be developed will not require an absolutely stationary space provided with special properties, nor assign a velocity vector to a point of the empty space in which electromagnetic processes take place." Einstein's Elektrodynamik paper [Ein05c] contains no formal references to other literature. It does mention, in §9, part II, that the results of the paper are in agreement with Lorentz's electrodynamics. Poincaré is not mentioned in this paper, although he is cited formally in a paper on special relativity written by Einstein the following year. In 1905 Einstein was the first to suggest that when a material body lost energy (either radiation or heat) of amount E, its mass decreased by the amount E/c². Hermann Minkowski showed in 1907 that the theory of special relativity could be elegantly described using a four-dimensional spacetime, which combines the dimension of time with the three dimensions of space. Einstein in 1920 returned to a concept of an aether having no state of motion. Comments by Lorentz, Poincaré, and Einstein Lorentz In a paper that was written in 1914 and published in 1921, Lorentz expressed appreciation for Poincaré's Palermo paper (1906) on relativity. Lorentz stated: However, a 1916 reprint of his main work "The theory of electrons" contains notes (written in 1909 and 1915) in which Lorentz sketched the differences between his results and those of Einstein as follows: Regarding the fact that in this book Lorentz only mentioned Einstein and not Poincaré in connection with a) the synchronisation by light signals, b) the reciprocity of the Lorentz transformation, and c) the relativistic transformation law for charge density, Janssen comments: And at a conference on the Michelson–Morley experiment in 1927 at which Lorentz and Michelson were present, Michelson suggested that Lorentz was the initiator of the theory of relativity. Lorentz then replied: Poincaré Poincaré attributed the development of the new mechanics almost entirely to Lorentz. He only mentioned Einstein in connection with the photoelectric effect, but not in connection with special relativity. For example, in 1912 Poincaré raised the question of whether "the mechanics of Lorentz" will still exist after the development of the quantum theory. He wrote: Einstein It is now known that Einstein was well aware of the scientific research of his time. 
The well-known historian of science Jürgen Renn, Director of the Max Planck Institute for the History of Science, wrote on Einstein's contributions to the Annalen der Physik: Einstein wrote in 1907 that one needed only to realize that an auxiliary quantity introduced by Lorentz, which he called "local time", can simply be defined as "time". In 1909 and 1912 Einstein explained: But Einstein and his supporters took the position that this "light postulate" together with the principle of relativity renders the ether superfluous and leads directly to Einstein's version of relativity. It is also known that Einstein had been reading and studying Poincaré's 1902 book Science and hypothesis well before 1905, which included: detailed philosophical assessments on the relativity of space, time, and simultaneity; discussion of the reliance on conventions regarding the use of light signals for the synchronization of clocks; the definition of the principle of relativity and the conjecture that a violation of that principle can never be detected empirically; the possible redundancy of the ether hypothesis; and detailed remarks on the physical status of non-Euclidean geometry. Einstein referred to Poincaré in connection with the inertia of energy in 1906 and non-Euclidean geometry in 1921, but not in connection with the Lorentz transformation, the relativity principle or the synchronization procedure by light signals. However, in the last years before his death Einstein acknowledged some of Poincaré's contributions (according to Darrigol, maybe because his biographer Pais in 1950 sent Einstein a copy of Poincaré's Palermo paper, which he said he had not read before). Einstein wrote in 1953: Timeline This section cites notable publications where people have expressed a view on the issues outlined above. Sir Edmund Whittaker (1954) In 1954, Sir Edmund Taylor Whittaker, an English mathematician and historian of science, credited Henri Poincaré with the equation E = mc², and he included a chapter entitled The Relativity Theory of Poincaré and Lorentz in his book A History of the Theories of Aether and Electricity. He credited Poincaré and Lorentz, and especially alluded to Lorentz's 1904 paper (dated by Whittaker as 1903), Poincaré's St. Louis speech (The Principles of Mathematical Physics) of September 1904, and Poincaré's June 1905 paper. Whittaker attributed only little importance to Einstein's relativity paper, crediting it only with the formulation of the Doppler and aberration formulas. Max Born spent three years trying to dissuade Whittaker, but Whittaker insisted that everything of importance had already been said by Poincaré, and that Lorentz quite plainly had the physical interpretation. Gerald Holton (1960) Whittaker's claims were criticized by Gerald Holton (1960, 1973). He argued that there are fundamental differences between the theories of Einstein on one hand, and Poincaré and Lorentz on the other hand. Einstein radically reformulated the concepts of space and time, and in doing so removed "absolute space" and thus the stationary luminiferous aether from physics. On the other hand, Holton argued that Poincaré and Lorentz still adhered to the stationary aether concept, and tried only to modify Newtonian dynamics, not to replace it. Holton argued that "Poincaré's silence" (i.e., why Poincaré never mentioned Einstein's contributions to relativity) was due to their fundamentally different conceptual viewpoints. 
Einstein's views on space and time and the abandonment of the aether were, according to Holton, not acceptable to Poincaré; therefore, the latter referred only to Lorentz as the creator of the "new mechanics". Holton also pointed out that although Poincaré's 1904 St. Louis speech was "acute and penetrating" and contained a "principle of relativity" that is confirmed by experience and needs new development, it did not "enunciate a new relativity principle". He also alluded to mistakes of Whittaker, such as dating Lorentz's 1904 paper (published April 1904) to 1903. Views similar to Holton's were later (1967, 1970) expressed by his former student, Stanley Goldberg. G. H. Keswani (1965) In a 1965 series of articles tracing the history of relativity, Keswani claimed that Poincaré and Lorentz should have the main credit for special relativity – arguing that Poincaré pointedly credited Lorentz multiple times, while Lorentz credited Poincaré and Einstein and refused to take credit for himself. He also downplayed the theory of general relativity, saying "Einstein's general theory of relativity is only a theory of gravitation and of modifications in the laws of physics in gravitational fields". This would leave the special theory of relativity as the unique theory of relativity. Keswani also cited Vladimir Fock for this same opinion. This series of articles prompted responses, among others, from Herbert Dingle and Karl Popper. Dingle said, among other things, ".. the 'principle of relativity' had various meanings, and the theories associated with it were quite distinct; they were not different forms of the same theory. Each of the three protagonists.... was very well aware of the others .... but each preferred his own views" Karl Popper said "Though Einstein appears to have known Poincaré's Science and Hypothesis prior to 1905, there is no theory like Einstein's in this great book." Keswani did not accept the criticism, and replied in two letters also published in the same journal. In his reply to Dingle, he argues that the three relativity theories were at heart the same: ".. they meant much that was common. And that much mattered the most." Dingle commented the following year on the history of crediting: "Until the first World War, Lorentz's and Einstein's theories were regarded as different forms of the same idea, but Lorentz, having priority and being a more established figure speaking a more familiar language, was credited with it." (Dingle 1967, Nature 216, pp. 119–122). Arthur I. Miller (1973) Miller (1973, 1981) agreed with the analysis of Holton and Goldberg, and further argued that although the terminology (like the principle of relativity) used by Poincaré and Einstein was very similar, their content differed sharply. According to Miller, Poincaré used this principle to complete the aether-based "electromagnetic world view" of Lorentz and Abraham. He also argued that Poincaré distinguished (in his July 1905 paper) between "ideal" and "real" systems and electrons. That is, Lorentz's and Poincaré's usage of reference frames lacks an unambiguous physical interpretation, because in many cases they are only mathematical tools, while in Einstein's theory the processes in inertial frames are not only mathematically, but also physically equivalent. Miller wrote in 1981: p. 172: "Although Poincaré's principle of relativity is stated in a manner similar to Einstein's, the difference in content is sharp. 
The critical difference is that Poincaré's principle admits the existence of the ether, and so considers the velocity of light to be exactly c only when it is measured in coordinate systems at rest in the ether. In inertial reference systems, the velocity of light is c and is independent of the emitter's motion as a result of certain compensatory effects such as the mathematical local time and the hypothesis of an unobservable contraction. Consequently, Poincaré's extension of the relativity principle of relative motion into the dynamics of the electron resided in electromagnetic theory, and not in mechanics...Poincaré came closest to rendering electrodynamics consistent, but not to a relativity theory." p. 217: "Poincaré related the imaginary system Σ' to the ether fixed system S'". Miller (1996) argues that Poincaré was guided by empiricism, and was willing to admit that experiments might prove relativity wrong, and so Einstein is more deserving of credit, even though he might have been substantially influenced by Poincaré's papers. Miller also argues that "Emphasis on conventionalism ... led Poincaré and Lorentz to continue to believe in the mathematical and observational equivalence of special relativity and Lorentz's electron theory. This is incorrect." [p. 96] Instead, Miller claims that the theories are mathematically equivalent but not physically equivalent. [p. 91–92] Abraham Pais (1982) In his 1982 Einstein biography Subtle is the Lord, Abraham Pais argued that Poincaré "comes near" to discovering special relativity (in his St. Louis lecture of September 1904, and the June 1905 paper), but eventually he failed, because in 1904 and also later in 1909, Poincaré treated length contraction as a third independent hypothesis besides the relativity principle and the constancy of the speed of light. According to Pais, Poincaré thus never understood (or at least he never accepted) special relativity, in which the whole theory including length contraction can simply be derived from two postulates. Consequently, he sharply criticized Whittaker's chapter on the "Relativity theory of Poincaré and Lorentz", saying "how well the author's lack of physical insight matches his ignorance of the literature", although Pais admitted that both he and his colleagues hold the original version of Whittaker's History as a masterpiece. Although he was apparently trying to make a point concerning Whittaker's treatment of the origin of special relativity, Pais' phrasing of that statement was rebuked by at least one notable reviewer of his 1982 book as being "scurrilous" and "lamentable". Also in contrast to Pais' overgeneralized claim, notable scientists such as Max Born refer to parts of Whittaker's second volume, especially the history of quantum mechanics, as "the most amazing feats of learning, insight, and discriminations" while Freeman Dyson says of the two volumes of Whittaker's second edition: "it is likely that this is the most scholarly and generally authoritative history of its period that we shall ever get." Pais goes on to argue that Lorentz never abandoned the stationary aether concept, either before or after 1905: p. 118: "Throughout the paper of 1895, the Fresnel aether is postulated explicitly"; p. 125: "Like Voigt before him, Lorentz regarded the transformation ... only as a convenient mathematical tool for proving a physical theorem ... he proposed to call t the general time and t' the local time. 
Although he didn't say it explicitly, it is evident that to him there was, so to speak, only one true time t."; p. 166: "8.3. Lorentz and the Aether... For example, Lorentz still opines that the contraction of the rods has a dynamic origin. There is no doubt that he had read and understood Einstein's papers by then. However, neither then nor later was he prepared to accept their conclusions as the definitive answer to the problems of the aether." Elie Zahar (1983) In several papers, Elie Zahar (1983, 2000) argued that both Einstein (in his June paper) and Poincaré (in his July paper) independently discovered special relativity. He said that "though Whittaker was unjust towards Einstein, his positive account of Poincaré's actual achievement contains much more than a simple grain of truth". According to him, it was Poincaré's unsystematic and sometimes erroneous statements in his philosophical papers (often connected with conventionalism) that hindered many from giving him due credit. In his opinion, Poincaré was rather a "structural realist", and from that he concludes that Poincaré actually adhered to the relativity of time and space, while his allusions to the aether are of secondary importance. He continues that, due to its treatment of gravitation and four-dimensional space, Poincaré's 1905/6 paper was superior to Einstein's 1905 paper. Yet Zahar also gives credit to Einstein, who introduced mass–energy equivalence and transcended special relativity by taking a path leading to the development of general relativity. John Stachel (1995) John Stachel (1995) argued that there is a debate over the respective contributions of Lorentz, Poincaré and Einstein to relativity. These questions depend on the definition of relativity, and Stachel argued that kinematics and the new view of space and time are the core of special relativity, and dynamical theories must be formulated in accordance with this scheme. Based on this definition, Einstein is the main originator of the modern understanding of special relativity. In his opinion, Lorentz interpreted the Lorentz transformation only as a mathematical device, while Poincaré's thinking was much nearer to the modern understanding of relativity. Yet Poincaré still believed in the dynamical effects of the aether and distinguished between observers being at rest or in motion with respect to it. Stachel wrote: "He never organized his many brilliant insights into a coherent theory that resolutely discarded the aether and the absolute time or transcended its electrodynamic origins to derive a new kinematics of space and time on a formulation of the relativity principle that makes no reference to the ether". Peter Galison (2002) In his book Einstein's clocks, Poincaré's maps (2002), Peter Galison compared the approaches of both Poincaré and Einstein to reformulate the concepts of space and time. He wrote: "Did Einstein really discover relativity? Did Poincaré already have it? These old questions have grown as tedious as they are fruitless." This is because it depends on the question of which parts of relativity one considers essential: the rejection of the aether, the Lorentz transformation, the connection with the nature of space and time, predictions of experimental results, or other parts. For Galison, it is more important to acknowledge that both thinkers were concerned with clock synchronization problems, and thus both developed the new operational meaning of simultaneity. 
However, while Poincaré followed a constructive approach and still adhered to the concepts of Lorentz's stationary aether and the distinction between "apparent" and "true" times, Einstein abandoned the aether and therefore all times in different inertial frames are equally valid. Galison argued that this does not mean that Poincaré was conservative, since Poincaré often alluded to the revolutionary character of the "new mechanics" of Lorentz. Anatoly Alexeevich Logunov on special relativity (2004) In Anatoly Logunov's book about Poincaré's relativity theory, there is an English translation (on p. 113, using modern notations) of the part of Poincaré's 1900 article containing E=mc2. Logunov states that Poincaré's two 1905 papers are superior to Einstein's 1905 paper. According to Logunov, Poincaré was the first scientist to recognize the importance of invariance under the Poincaré group as a guideline for developing new theories in physics. In chapter 9 of this book, Logunov points out that Poincaré's second paper was the first one to formulate a complete theory of relativistic dynamics, containing the correct relativistic analogue of Newton's F=ma. On p. 142, Logunov points out that Einstein wrote reviews for the Beiblätter Annalen der Physik, writing 21 reviews in 1905. In his view, this contradicts the claims that Einstein worked in relative isolation and with limited access to the scientific literature. Among the papers reviewed in the Beiblätter in the fourth (of 24) issue of 1905, there is a review of Lorentz' 1904 paper by Richard Gans, which contains the Lorentz transformations. In Logunov's view, this supports the view that Einstein was familiar with the Lorentz' paper containing the correct relativistic transformation in early 1905, while his June 1905 paper does not mention Lorentz in connection with this result. Harvey R. Brown (2005) Harvey R. Brown (2005) (who favors a dynamical view of relativistic effects similar to Lorentz, but "without a hidden aether frame") wrote about the road to special relativity from Michelson to Einstein in section 4: p. 40: "The cradle of special theory of relativity was the combination of Maxwellian electromagnetism and the electron theory of Lorentz (and to a lesser extent of Larmor) based on Fresnel's notion of the stationary aether…. It is well known that Einstein's special relativity was partially motivated by this failure [to find the aether wind], but in order to understand the originality of Einstein's 1905 work it is incumbent on us to review the work of the trailblazers, and in particular Michelson, FitzGerald, Lorentz, Larmor, and Poincaré. After all they were jointly responsible for the discovery of relativistic kinematics, in form if not in content, as well as a significant portion of relativistic dynamics as well." Regarding Lorentz's work before 1905, Brown wrote about the development of Lorentz's "theorem of corresponding states" and then continued: p. 54: "Lorentz's interpretation of these transformations is not the one Einstein would give them and which is standardly embraced today. Indeed, until Lorentz came to terms with Einstein's 1905 work, and somehow despite Poincaré's warning, he continued to believe that the true coordinate transformations were the Galilean ones, and that the 'Lorentz' transformations … were merely a useful formal device…" p. 56. "Lorentz consistently failed to understand the operational significance of his notions of 'local' time…. 
He did however have an intimation of time dilation in 1899, but inevitably there are caveats…. The hypotheses of Lorentz's system were starting to pile up, and the spectre of ad hocness was increasingly hard to ignore." Then the contribution of Poincaré's to relativity: p. 62: "Indeed, the claim that this giant of pure and applied mathematics co-discovered special relativity is not uncommon, and it is not hard to see why. Poincaré was the first to extend the relativity principle to optics and electrodynamics exactly. Whereas Lorentz, in his theorem of corresponding states, had from 1899 effectively assumed this extension of the relativity principle up to second-order effects, Poincaré took it to hold for all orders. Poincaré was the first to show that Maxwell's equations with source terms are strictly Lorentz covariant. … Poincaré was the first to use the generalized relativity principle as a constraint on the form of the coordinate transformations. He recognized that the relativity principle implies that the transformations form a group, and in further appealing to spatial isotropy. … Poincaré was the first to see the connection between Lorentz's ‘local time’, and the issue of clock synchrony. … It is fair to say that Poincaré was the first to understand the relativity of simultaneity, and the conventionality of distant simultaneity. Poincaré anticipated Minkowski's interpretation of the Lorentz transformations as a passive, rigid rotation within a four-dimensional pseudo-Euclidean spacetime. He was also aware that the electromagnetic potentials transform in the manner of what is now called a Minkowski 4-vector. He anticipated the major results of relativistic dynamics (and in particular the relativistic relations between force, momentum and velocity), but not E=mc² in its full generality." However, Brown continued with the reasons which speak against crediting Poincaré with co-discovery: p. 63–64: "What are the grounds for denying Poincaré the title of co-discoverer of special relativity? ... Although Poincaré understood independently of Einstein how the Lorentz transformations give rise to non-Galilean transformation rules for velocities (indeed Poincaré derived the correct relativistic rules), it is not clear that he had a full appreciation of the modern operational significance attached to coordinate transformations.... he did not seem to understand the role played by the second-order terms in the transformation. Compared with the cases of Lorentz and Larmor, it is even less clear that Poincaré understood either length contraction or time dilation to be a consequence of the coordinate transformation.... What Poincaré was holding out for was no less than a new theory of ether and matter – something far more ambitious than what appeared in Einstein's 1905 relativity paper...p. 65. Like Einstein half a decade later, Poincaré wanted new physics, not a reinterpretations or reorganization of existing notions." Brown denies the idea of other authors and historians that the major difference between Einstein and his predecessors is Einstein's rejection of the aether, because it is always possible to add for whatever reason the notion of a privileged frame to special relativity as long as one accepts that it will remain unobservable, and also Poincaré argued that "some day, no doubt, the aether will be thrown aside as useless". However Brown gave some examples of what in his opinion were the new features in Einstein's work: p. 
66: "The full meaning of relativistic kinematics was simply not properly understood before Einstein. Nor was the 'theory of relativity' as Einstein articulated it in 1905 anticipated even in its programmatic form." p. 69. "How did Albert Einstein...arrive at his special theory of relativity?...I want only to stress that it is impossible to understand Einstein's discovery (if that is the right word) of special relativity without taking on board the impacts of the quantum in physics." p. 81. "In this respect [Brown refers to the conventional nature of distant simultaneity] Einstein was doing little more than expanding on a theme that Poincaré had already introduced. Where Einstein goes well beyond the great mathematician is in his treatment of the coordinate transformations... In particular, the extraction of the phenomena of length contraction and time dilation directly from the Lorentz transformations in section 4 of the 1905 paper is completely original." After that, Brown develops his own dynamical interpretation of special relativity as opposed to the kinematical approach of Einstein's 1905 paper (although he says that this dynamical view is already contained in Einstein's 1905 paper, "masqueraded in the language of kinematics", p. 82), and the modern understanding of spacetime. Roger Cerf (2006) Roger Cerf (2006) gave priority to Einstein for developing special relativity, and criticized the assertions of Leveugle and others concerning the priority of Poincaré. While Cerf agreed that Poincaré made important contributions to relativity, he argued (following Pais) that Poincaré "stopped short before the crucial step" because he handled length contraction as a "third hypothesis", therefore Poincaré lacked a complete understanding of the basic principles of relativity. "Einstein's crucial step was that he abandoned the mechanistic ether in favor of a new kinematics." He also denies the idea, that Poincaré invented E=mc² in its modern relativistic sense, because he did not realize the implications of this relationship. Cerf considers Leveugle's Hilbert–Planck–Einstein connection an implausible conspiracy theory. Shaul Katzir (2005) Katzir (2005) argued that "Poincaré's work should not be seen as an attempt to formulate special relativity, but as an independent attempt to resolve questions in electrodynamics." Contrary to Miller and others, Katzir thinks that Poincaré's development of electrodynamics led him to the rejection of the pure electromagnetic world view (due to the non-electromagnetic Poincaré stresses introduced in 1905), and Poincaré's theory represents a "relativistic physics" which is guided by the relativity principle. In this physics, however, "Lorentz's theory and Newton's theory remained as the fundamental bases of electrodynamics and gravitation." Scott Walter (2005, 2007) Walter (2005) argues that both Poincaré and Einstein put forward the theory of relativity in 1905. And in 2007 he wrote, that although Poincaré formally introduced four-dimensional spacetime in 1905/6, he was still clinging to the idea of "Galilei spacetime". That is, Poincaré preferred Lorentz covariance over Galilei covariance when it is about phenomena accessible to experimental tests; yet in terms of space and time, Poincaré preferred Galilei spacetime over Minkowski spacetime, and length contraction and time dilation "are merely apparent phenomena due to motion with respect to the ether". 
This is the fundamental difference in the two principal approaches to relativity theory, namely that of "Lorentz and Poincaré" on one side, and "Einstein and Minkowski" on the other side. See also History of Lorentz transformations History of special relativity Criticism of relativity theory#Accusations of plagiarism and priority discussions List of scientific priority disputes General relativity priority dispute Multiple discovery Notes Citations References Works of physics (primary sources) [Ein05c] : Albert Einstein: Zur Elektrodynamik bewegter Körper, Annalen der Physik 17(1905), 891–921. Received June 30, published September 26, 1905. Reprinted with comments in [Sta89], p. 276–306 English translation, with footnotes not present in the 1905 paper, available on the net [Ein05d] : Albert Einstein: Ist die Trägheit eines Körpers von seinem Energiegehalt abhängig?, Annalen der Physik 18(1905), 639–641, Reprinted with comments in [Sta89], Document 24 English translation available on the net [Ein06] : Albert Einstein: Das Prinzip von der Erhaltung der Schwerpunktsbewegung und die Trägheit der Energie Annalen der Physik 20(1906):627–633, Reprinted with comments in [Sta89], Document 35 [Ein15a]: Einstein, A. (1915) "Die Feldgleichungun der Gravitation". Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 844–847. [Ein15b]: Einstein, A. (1915) "Zur allgemeinen Relativatstheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 778–786 [Ein15c]: Einstein, A. (1915) "Erklarung der Perihelbewegung des Merkur aus der allgemeinen Relatvitatstheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 799–801 [Ein15d]: Einstein, A. (1915) "Zur allgemeinen Relativatstheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 831–839 [Ein16]: Einstein, A. (1916) "Die Grundlage der allgemeinen Relativitätstheorie", Annalen der Physik, 49 [Hil24]: Hilbert, D., Die Grundlagen der Physik – Mathematische Annalen, 92, 1924 – "meiner theorie" quote on page 2 – online at Uni Göttingen – index of journal [Lan05]:Langevin, P. (1905) "Sur l'origine des radiations et l'inertie électromagnétique", Journal de Physique Théorique et Appliquée, 4, pp. 165–183. [Lan14]:Langevin, P. (1914) "Le Physicien" in Henri Poincaré Librairie (Felix Alcan 1914) pp. 115–202. [Lor99]:Lorentz, H. A. (1899) "Simplified Theory of Electrical and Optical Phenomena in Moving Systems", Proc. Acad. Science Amsterdam, I, 427–43. [Lor04]: Lorentz, H. A. (1904) "Electromagnetic Phenomena in a System Moving with Any Velocity Less Than That of Light", Proc. Acad. Science Amsterdam, IV, 669–78. [Lor11]:Lorentz, H. A. (1911) Amsterdam Versl. XX, 87 [Lor14]:. [Pla07]:Planck, M. (1907) Berlin Sitz., 542 [Pla08]:Planck, M. (1908) Verh. d. Deutsch. Phys. Ges. X, p218, and Phys. ZS, IX, 828 [Poi89]:Poincaré, H. (1889) Théorie mathématique de la lumière, Carré & C. Naud, Paris. Partly reprinted in [Poi02], Ch. 12. [Poi97]:Poincaré, H. (1897) "The Relativity of Space", article in English translation [Poi00] : . See also the English translation [Poi02] : [Poi04] : English translation as The Principles of Mathematical Physics, in "The value of science" (1905a), Ch. 7–9. [Poi05] : [Poi06] : [Poi08] : [Poi13] : [Ein20]: Albert Einstein: "Ether and the Theory of Relativity", An Address delivered on May 5, 1920, in the University of Leyden. 
[Sta89] : John Stachel (Ed.), The collected papers of Albert Einstein, volume 2, Princeton University Press, 1989 Further reading Nándor Balázs (1972) "The acceptability of physical theories: Poincaré versus Einstein", pages 21–34 in General Relativity: Papers in Honour of J.L. Synge, L. O'Raifeartaigh editor, Clarendon Press. External links Discovery and invention controversies Albert Einstein Hendrik Lorentz Henri Poincaré Theory of relativity E. T. Whittaker
Relativity priority dispute
[ "Physics" ]
8,727
[ "Theory of relativity" ]
4,023,059
https://en.wikipedia.org/wiki/Two-dimensional%20nuclear%20magnetic%20resonance%20spectroscopy
Two-Dimensional Nuclear Magnetic Resonance (2D NMR) is an advanced spectroscopic technique that builds upon the capabilities of one-dimensional (1D) NMR by incorporating an additional frequency dimension. This extension allows for a more comprehensive analysis of molecular structures. In 2D NMR, signals are distributed across two frequency axes, providing improved resolution and separation of overlapping peaks, particularly beneficial for studying complex molecules. This technique identifies correlations between different nuclei within a molecule, facilitating the determination of connectivity, spatial proximity, and dynamic interactions. 2D NMR encompasses a variety of experiments, including COSY (Correlation Spectroscopy), TOCSY (Total Correlation Spectroscopy), NOESY (Nuclear Overhauser Effect Spectroscopy), and HSQC (Heteronuclear Single Quantum Coherence). These techniques are indispensable in fields such as structural biology, where they are pivotal in determining protein and nucleic acid structures; organic chemistry, where they aid in elucidating complex organic molecules; and materials science, where they offer insights into molecular interactions in polymers and metal-organic frameworks. By resolving signals that would typically overlap in the 1D NMR spectra of complex molecules, 2D NMR enhances the clarity of structural information. 2D NMR can provide detailed information about the chemical structure and the three-dimensional arrangement of molecules. The first two-dimensional experiment, COSY, was proposed by Jean Jeener, a professor at the Université Libre de Bruxelles, in 1971. This experiment was later implemented by Walter P. Aue, Enrico Bartholdi and Richard R. Ernst, who published their work in 1976. Fundamental concepts Each experiment consists of a sequence of radio frequency (RF) pulses with delay periods in between them. The timing, frequencies, and intensities of these pulses distinguish different NMR experiments from one another. Almost all two-dimensional experiments have four stages: the preparation period, where a magnetization coherence is created through a set of RF pulses; the evolution period, a determined length of time during which no pulses are delivered and the nuclear spins are allowed to freely precess (rotate); the mixing period, where the coherence is manipulated by another series of pulses into a state which will give an observable signal; and the detection period, in which the free induction decay signal from the sample is observed as a function of time, in a manner identical to one-dimensional FT-NMR. The two dimensions of a two-dimensional NMR experiment are two frequency axes representing a chemical shift. Each frequency axis is associated with one of the two time variables, which are the length of the evolution period (the evolution time) and the time elapsed during the detection period (the detection time). They are each converted from a time series to a frequency series through a two-dimensional Fourier transform. A single two-dimensional experiment is generated as a series of one-dimensional experiments, with a different specific evolution time in successive experiments, with the entire duration of the detection period recorded in each experiment. The end result is a plot showing an intensity value for each pair of frequency variables. The intensities of the peaks in the spectrum can be represented using a third dimension. More commonly, intensity is indicated using contour lines or different colors. 
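The conversion of the two time variables into two frequency axes can be illustrated with a short numerical sketch. The following Python/NumPy fragment is only an illustration of the processing described above, not the software of any particular spectrometer: it builds a toy set of one-dimensional signals with an incremented evolution time t1 and applies a two-dimensional Fourier transform; the resonance frequencies, decay constant, and sampling parameters are invented for the example.

```python
import numpy as np

# Illustrative parameters (invented for this sketch, not taken from any experiment)
f_a, f_b = 120.0, 310.0     # two resonance offsets in Hz
decay = 20.0                # signal decay rate (1/s), which sets the linewidth
n_t1, n_t2 = 256, 512       # number of t1 increments and of t2 sample points
dt1, dt2 = 1.0e-3, 0.5e-3   # time steps for the evolution and detection periods

t1 = np.arange(n_t1) * dt1  # evolution times, incremented from experiment to experiment
t2 = np.arange(n_t2) * dt2  # detection-time axis, recorded in full for each experiment

# Each row of `fid` plays the role of one 1D experiment acquired with a different t1.
# The "cross" term oscillates at f_a during t1 but at f_b during t2, mimicking
# magnetization transferred between two coupled spins during the mixing period.
fid = np.zeros((n_t1, n_t2), dtype=complex)
for i, tau in enumerate(t1):
    diagonal = np.exp(2j * np.pi * f_a * tau) * np.exp(2j * np.pi * f_a * t2)
    cross = np.exp(2j * np.pi * f_a * tau) * np.exp(2j * np.pi * f_b * t2)
    fid[i] = (diagonal + cross) * np.exp(-decay * (tau + t2))

# Two-dimensional Fourier transform: t2 -> F2 and t1 -> F1 in one call.
spectrum = np.fft.fftshift(np.fft.fft2(fid))
intensity = np.abs(spectrum)   # would normally be displayed as a contour plot
print(intensity.shape)         # (256, 512): one intensity value per (F1, F2) pair
```

In this toy data set the "diagonal" term produces a peak at (f_a, f_a) and the "cross" term a peak at (f_a, f_b), which is the kind of feature interpreted by the correlation experiments described below; real processing additionally involves window functions, zero filling, and phase correction.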
Homonuclear through-bond correlation methods In these methods, magnetization transfer occurs between nuclei of the same type, through J-coupling of nuclei connected by up to a few bonds. Correlation spectroscopy (COSY) The first and most popular two-dimension NMR experiment is the homonuclear correlation spectroscopy (COSY) sequence, which is used to identify spins which are coupled to each other. It consists of a single RF pulse (p1) followed by the specific evolution time (t1) followed by a second pulse (p2) followed by a measurement period (t2). The Correlation Spectroscopy experiment operates by correlating nuclei coupled to each other through scalar coupling, also known as J-coupling. This coupling is the interaction between nuclear spins connected by bonds, typically observed between nuclei that are 2-3 bonds apart (e.g., vicinal protons). By detecting these interactions, COSY provides vital information about the connectivity between atoms within a molecule, making it a crucial tool for structural elucidation in organic chemistry. The COSY experiment generates a two-dimensional spectrum with chemical shifts along the x-axis (horizontal) and y-axis (vertical) and involves several key steps. First, the sample is excited using a series of radiofrequency (RF) pulses, bringing the nuclear spins into a higher energy state. After the first RF pulse, the system evolves freely for a period called t1, during which the spins precess at frequencies corresponding to their chemical shifts. The correlation between nuclei is achieved by incrementally varying the evolution time (t1) to capture indirect interactions. This series of experiments, each with a different value of t1, allows for the detection of chemical shifts from nuclei that may not be observed directly in a one-dimensional spectrum. As t1 is incremented, cross-peaks are produced in the resulting 2D spectrum, representing interactions like coupling or spatial proximity between nuclei. This approach helps map out atomic connections, providing deeper insight into molecular structure and aiding in the interpretation of complex systems. Cross peaks result from a phenomenon called magnetization transfer, and their presence indicates that two nuclei are coupled which have the two different chemical shifts that make up the cross peak's coordinates. Each coupling gives two symmetrical cross peaks above and below the diagonal. That is, a cross-peak occurs when there is a correlation between the signals of the spectrum along each of the two axes at these values. An easy visual way to determine which couplings a cross peak represents is to find the diagonal peak which is directly above or below the cross peak, and the other diagonal peak which is directly to the left or right of the cross peak. The nuclei represented by those two diagonal peaks are coupled. Next, a second RF pulse is applied to allow magnetization to transfer between coupled nuclei. The resulting signal is recorded continuously during a detection period ( t2) after the second RF pulse. The data are then processed through Fourier transformation along both the t1 and t2 axes, creating a 2D spectrum with peaks plotted along the diagonal and off-diagonal. When interpreting the COSY spectrum, diagonal peaks correspond to the 1D chemical shifts of individual nuclei, similar to the standard peaks in a 1D NMR spectrum. The key feature of a COSY spectrum is the presence of cross-peaks as shown in Figure 1, indicating coupling between pairs of nuclei. 
These cross-peaks provide crucial information about the connectivity within a molecule, showing that the two nuclei are connected by a small number of bonds, usually two or three bonds. COSY is especially useful when dealing with complex molecules such as natural products, peptides, and proteins, where understanding the connectivity of different nuclei through bonds is crucial. While 1D NMR is more straightforward and ideal for identifying basic structural features, COSY enhances the capabilities of NMR by providing deeper insights into molecular connectivity. The two-dimensional spectrum that results from the COSY experiment shows the frequencies for a single isotope, most commonly hydrogen (1H) along both axes. (Techniques have also been devised for generating heteronuclear correlation spectra, in which the two axes correspond to different isotopes, such as 13C and 1H.) Diagonal peaks correspond to the peaks in a 1D-NMR experiment, while the cross peaks indicate couplings between pairs of nuclei (much as multiplet splitting indicates couplings in 1D-NMR). COSY-90 is the most common COSY experiment. In COSY-90, the p1 pulse tilts the nuclear spin by 90°. Another member of the COSY family is COSY-45. In COSY-45 a 45° pulse is used instead of a 90° pulse for the second pulse, p2. The advantage of a COSY-45 is that the diagonal-peaks are less pronounced, making it simpler to match cross-peaks near the diagonal in a large molecule. Additionally, the relative signs of the coupling constants (see J-coupling#Magnitude of J-coupling) can be elucidated from a COSY-45 spectrum. This is not possible using COSY-90. Overall, the COSY-45 offers a cleaner spectrum while the COSY-90 is more sensitive. Another related COSY technique is double quantum filtered (DQF) COSY. DQF COSY uses a coherence selection method such as phase cycling or pulsed field gradients, which cause only signals from double-quantum coherences to give an observable signal. This has the effect of decreasing the intensity of the diagonal peaks and changing their lineshape from a broad "dispersion" lineshape to a sharper "absorption" lineshape. It also eliminates diagonal peaks from uncoupled nuclei. These all have the advantage that they give a cleaner spectrum in which the diagonal peaks are prevented from obscuring the cross peaks, which are weaker in a regular COSY spectrum. Exclusive correlation spectroscopy (ECOSY) Total correlation spectroscopy (TOCSY) The TOCSY experiment is similar to the COSY experiment, in that cross peaks of coupled protons are observed. However, cross peaks are observed not only for nuclei which are directly coupled, but also between nuclei which are connected by a chain of couplings. This makes it useful for identifying the larger interconnected networks of spin couplings. This ability is achieved by inserting a repetitive series of pulses which cause isotropic mixing during the mixing period. Longer isotropic mixing times cause the polarization to spread out through an increasing number of bonds. In the case of oligosaccharides, each sugar residue is an isolated spin system, so it is possible to differentiate all the protons of a specific sugar residue. A 1D version of TOCSY is also available, and by irradiating a single proton the rest of the spin system can be revealed. 
Recent advances in this technique include the 1D-CSSF (chemical shift selective filter) TOCSY experiment, which produces higher quality spectra and allows coupling constants to be reliably extracted and used to help determine stereochemistry. TOCSY is sometimes called "homonuclear Hartmann–Hahn spectroscopy" (HOHAHA). Incredible natural-abundance double-quantum transfer experiment (INADEQUATE) INADEQUATE is a method often used to find 13C couplings between adjacent carbon atoms. Because the natural abundance of 13C is only about 1%, only about 0.01% of molecules being studied will have the two nearby 13C atoms needed for a signal in this experiment. However, correlation selection methods are used (similarly to DQF COSY) to prevent signals from single 13C atoms, so that the double 13C signals can be easily resolved. Each coupled pair of nuclei gives a pair of peaks on the INADEQUATE spectrum which both have the same vertical coordinate, which is the sum of the chemical shifts of the nuclei; the horizontal coordinate of each peak is the chemical shift for each of the nuclei separately. Heteronuclear through-bond correlation methods Heteronuclear correlation spectroscopy gives signal based upon coupling between nuclei of two different types. Often the two nuclei are protons and another nucleus (called a "heteronucleus"). For historical reasons, experiments which record the proton rather than the heteronucleus spectrum during the detection period are called "inverse" experiments. This is because the low natural abundance of most heteronuclei would result in the proton spectrum being overwhelmed with signals from molecules with no active heteronuclei, making it useless for observing the desired, coupled signals. With the advent of techniques for suppressing these undesired signals, inverse correlation experiments such as HSQC, HMQC, and HMBC are actually much more common today. "Normal" heteronuclear correlation spectroscopy, in which the heteronucleus spectrum is recorded, is known as HETCOR. Heteronuclear single-quantum correlation spectroscopy (HSQC) Heteronuclear Single Quantum Coherence (HSQC) is a 2D NMR technique utilized for the detection of interactions between different types of nuclei which are separated by one bond, particularly a proton (1H) and a heteronucleus such as carbon (13C) or nitrogen (15N). This method gives one peak per pair of coupled nuclei, whose two coordinates are the chemical shifts of the two coupled atoms. This method plays a role in structural elucidation, particularly in analyzing organic compounds, natural products, and biomolecules such as proteins and nucleic acids. HSQC is designed to detect one-bond correlations between protons and heteronuclear atoms, providing insight into the connectivity of hydrogen and heteronuclear atoms through the transfer of magnetization. The HSQC experiment involves a series of steps to generate a two-dimensional NMR spectrum. Initially, the sample is excited using radiofrequency (RF) pulses, bringing the nuclear spins into an excited state and preparing them for magnetization transfer. Magnetization is then transferred from the proton to the heteronucleus through a one-bond scalar coupling (J-coupling), ensuring that only directly bonded nuclei participate in the transfer. Subsequently, the system evolves during a period called t1, and the magnetization is transferred back from the heteronuclear to the proton. 
The final signal is detected, encoding both the proton and the heteronuclear information, and a Fourier transformation is performed to create a 2D spectrum correlating the proton and heteronuclear chemical shifts. HSQC works by transferring magnetization from the I nucleus (usually the proton) to the S nucleus (usually the heteroatom) using the INEPT pulse sequence; this first step is done because the proton has a greater equilibrium magnetization and thus this step creates a stronger signal. The magnetization then evolves and is transferred back to the I nucleus for observation. An extra spin echo step can then optionally be used to decouple the signal, simplifying the spectrum by collapsing multiplets to a single peak. The undesired uncoupled signals are removed by running the experiment twice with the phase of one specific pulse reversed; this reverses the signs of the desired but not the undesired peaks, so subtracting the two spectra will give only the desired peaks. Interpretation of the HSQC spectrum is based on the observation of cross-peaks, which indicate direct bonding between protons and carbons or nitrogens. Each cross-peak corresponds to a specific 1H-13C or 1H-15N pair, providing direct assignments of 1H–X connectivity, where X is the heteronucleus. The HSQC technique offers several advantages, including its focus on one-bond correlations, increased sensitivity due to the direct detection of protons, and the simplification of crowded spectra by resolving overlapping signals and aiding in the analysis of complex molecules. Heteronuclear multiple-quantum correlation spectroscopy (HMQC) gives a spectrum identical to that of HSQC, but using a different method. The two methods give similar quality results for small to medium-sized molecules, but HSQC is considered to be superior for larger molecules. Heteronuclear multiple-bond correlation spectroscopy (HMBC) HMBC detects heteronuclear correlations over longer ranges of about 2–4 bonds. The difficulty of detecting multiple-bond correlations is that the HSQC and HMQC sequences contain a specific delay time between pulses which allows detection only of a range around a specific coupling constant. This is not a problem for the single-bond methods since the coupling constants tend to lie in a narrow range, but multiple-bond coupling constants cover a much wider range and cannot all be captured in a single HSQC or HMQC experiment. In HMBC, this difficulty is overcome by omitting one of these delays from an HMQC sequence. This increases the range of coupling constants that can be detected, and also reduces signal loss from relaxation. The cost is that this eliminates the possibility of decoupling the spectrum, and introduces phase distortions into the signal. There is a modification of the HMBC method which suppresses one-bond signals, leaving only the multiple-bond signals. Through-space correlation methods These methods establish correlations between nuclei which are physically close to each other regardless of whether there is a bond between them. They use the nuclear Overhauser effect (NOE) by which nearby atoms (within about 5 Å) undergo cross relaxation by a mechanism related to spin–lattice relaxation. Nuclear Overhauser effect spectroscopy (NOESY) In NOESY, the nuclear Overhauser cross relaxation between nuclear spins during the mixing period is used to establish the correlations. 
The spectrum obtained is similar to COSY, with diagonal peaks and cross peaks; however, the cross peaks connect resonances from nuclei that are spatially close rather than those that are through-bond coupled to each other. NOESY spectra also contain extra axial peaks which do not provide extra information and can be eliminated through a different experiment by reversing the phase of the first pulse. One application of NOESY is in the study of large biomolecules, such as in protein NMR, in which relationships can often be assigned using sequential walking. The NOESY experiment can also be performed in a one-dimensional fashion by pre-selecting individual resonances. The spectra are read with the pre-selected nuclei giving a large, negative signal while neighboring nuclei are identified by weaker, positive signals. This only reveals which peaks have measurable NOEs to the resonance of interest but takes much less time than the full 2D experiment. In addition, if a pre-selected nucleus changes environment within the time scale of the experiment, multiple negative signals may be observed. This offers exchange information similar to the EXSY (exchange spectroscopy) NMR method. NOESY experiments are an important tool for identifying the stereochemistry of a molecule in solution, whereas single-crystal XRD is used to identify the stereochemistry of a molecule in solid form. Heteronuclear Overhauser effect spectroscopy (HOESY) HOESY, much like NOESY, makes use of the cross relaxation between nuclear spins. However, HOESY can offer information about other NMR-active nuclei in a spatially relevant manner. Examples include any nuclei X{Y} or X→Y such as 1H→13C, 19F→13C, 31P→13C, or 77Se→13C. The experiments typically observe NOEs from protons on X, X{1H}, but do not have to include protons. Rotating-frame nuclear Overhauser effect spectroscopy (ROESY) ROESY is similar to NOESY, except that the initial state is different. Instead of observing cross relaxation from an initial state of z-magnetization, the equilibrium magnetization is rotated onto the x axis and then spin-locked by an external magnetic field so that it cannot precess. This method is useful for certain molecules whose rotational correlation time falls in a range where the nuclear Overhauser effect is too weak to be detectable, usually molecules with a molecular weight around 1000 daltons, because ROESY has a different dependence between the correlation time and the cross-relaxation rate constant. In NOESY the cross-relaxation rate constant goes from positive to negative as the correlation time increases, giving a range where it is near zero, whereas in ROESY the cross-relaxation rate constant is always positive. ROESY is sometimes called "cross relaxation appropriate for minimolecules emulated by locked spins" (CAMELSPIN). Resolved-spectrum methods Unlike correlated spectra, resolved spectra spread the peaks in a 1D-NMR experiment into two dimensions without adding any extra peaks. These methods are usually called J-resolved spectroscopy, but are sometimes also known as chemical shift resolved spectroscopy or δ-resolved spectroscopy. They are useful for analysing molecules for which the 1D-NMR spectra contain overlapping multiplets as the J-resolved spectrum vertically displaces the multiplet from each nucleus by a different amount. 
Each peak in the 2D spectrum will have the same horizontal coordinate that it has in a non-decoupled 1D spectrum, but its vertical coordinate will be the chemical shift of the single peak that the nucleus has in a decoupled 1D spectrum. For the heteronuclear version, the simplest pulse sequence used is called a Müller–Kumar–Ernst (MKE) experiment, which has a single 90° pulse for the heteronucleus for the preparation period, no mixing period, and applies a decoupling signal to the proton during the detection period. There are several variants on this pulse sequence which are more sensitive and more accurate, which fall under the categories of gated decoupler methods and spin-flip methods. Homonuclear J-resolved spectroscopy uses the spin echo pulse sequence. Higher-dimensional methods 3D and 4D experiments can also be done, sometimes by running the pulse sequences from two or three 2D experiments in series. Many of the commonly used 3D experiments, however, are triple resonance experiments; examples include the HNCA and HNCOCA experiments, which are often used in protein NMR. See also Two-dimensional correlation analysis References Nuclear magnetic resonance spectroscopy
Two-dimensional nuclear magnetic resonance spectroscopy
[ "Physics", "Chemistry" ]
4,462
[ "Nuclear magnetic resonance", "Spectroscopy", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy" ]
4,024,093
https://en.wikipedia.org/wiki/Thermal%20efficiency
In thermodynamics, the thermal efficiency (ηth) is a dimensionless performance measure of a device that uses thermal energy, such as an internal combustion engine, steam turbine, steam engine, boiler, furnace, refrigerator, or air conditioner. For a heat engine, thermal efficiency is the ratio of the net work output to the heat input; in the case of a heat pump, thermal efficiency (known as the coefficient of performance or COP) is the ratio of net heat output (for heating), or the net heat removed (for cooling), to the energy input (external work). The efficiency of a heat engine is fractional, as the output is always less than the input, while the COP of a heat pump is greater than 1. These values are further restricted by the Carnot theorem. Overview In general, energy conversion efficiency is the ratio between the useful output of a device and the input, in energy terms. For thermal efficiency, the input, Qin, to the device is heat, or the heat-content of a fuel that is consumed. The desired output is mechanical work, or heat, or possibly both. Because the input heat normally has a real financial cost, a memorable, generic definition of thermal efficiency is the benefit (the useful output) divided by the cost (the heat input): ηth = benefit/cost. From the first law of thermodynamics, the energy output cannot exceed the input, and by the second law of thermodynamics it cannot be equal in a non-ideal process, so ηth < 1. When expressed as a percentage, the thermal efficiency must be between 0% and 100%. Efficiency must be less than 100% because there are inefficiencies such as friction and heat loss that convert the energy into alternative forms. For example, a typical gasoline automobile engine operates at around 25% efficiency, and a large coal-fuelled electrical generating plant peaks at about 46%. However, advances in Formula 1 motorsport regulations have pushed teams to develop highly efficient power units which peak around 45–50% thermal efficiency. The largest diesel engine in the world peaks at 51.7%. In a combined cycle plant, thermal efficiencies approach 60%. Such a real-world value may be used as a figure of merit for the device. For engines where a fuel is burned, there are two types of thermal efficiency: indicated thermal efficiency and brake thermal efficiency. This form of efficiency is only appropriate when comparing similar types or similar devices. For other systems, the specifics of the calculations of efficiency vary, but the non-dimensional input is still the same: efficiency = output energy / input energy. Heat engines Heat engines transform thermal energy, or heat, Qin, into mechanical energy, or work, Wnet. They cannot do this task perfectly, so some of the input heat energy is not converted into work, but is dissipated as waste heat Qout < 0 into the surroundings: Wnet = Qin + Qout. The thermal efficiency of a heat engine is the percentage of heat energy that is transformed into work. Thermal efficiency is defined as ηth = Wnet/Qin. The efficiency of even the best heat engines is low; usually below 50% and often far below. So the energy lost to the environment by heat engines is a major waste of energy resources. Since a large fraction of the fuels produced worldwide goes to powering heat engines, perhaps up to half of the useful energy produced worldwide is wasted in engine inefficiency, although modern cogeneration, combined cycle and energy recycling schemes are beginning to use this heat for other purposes. This inefficiency can be attributed to three causes. First, there is an overall theoretical limit to the efficiency of any heat engine due to temperature, called the Carnot efficiency. 
Second, specific types of engines have lower limits on their efficiency due to the inherent irreversibility of the engine cycle they use. Thirdly, the nonideal behavior of real engines, such as mechanical friction and losses in the combustion process causes further efficiency losses. Carnot efficiency The second law of thermodynamics puts a fundamental limit on the thermal efficiency of all heat engines. Even an ideal, frictionless engine can't convert anywhere near 100% of its input heat into work. The limiting factors are the temperature at which the heat enters the engine, , and the temperature of the environment into which the engine exhausts its waste heat, , measured in an absolute scale, such as the Kelvin or Rankine scale. From Carnot's theorem, for any engine working between these two temperatures: This limiting value is called the Carnot cycle efficiency because it is the efficiency of an unattainable, ideal, reversible engine cycle called the Carnot cycle. No device converting heat into mechanical energy, regardless of its construction, can exceed this efficiency. Examples of are the temperature of hot steam entering the turbine of a steam power plant, or the temperature at which the fuel burns in an internal combustion engine. is usually the ambient temperature where the engine is located, or the temperature of a lake or river into which the waste heat is discharged. For example, if an automobile engine burns gasoline at a temperature of and the ambient temperature is , then its maximum possible efficiency is: It can be seen that since is fixed by the environment, the only way for a designer to increase the Carnot efficiency of an engine is to increase , the temperature at which the heat is added to the engine. The efficiency of ordinary heat engines also generally increases with operating temperature, and advanced structural materials that allow engines to operate at higher temperatures is an active area of research. Due to the other causes detailed below, practical engines have efficiencies far below the Carnot limit. For example, the average automobile engine is less than 35% efficient. Carnot's theorem applies to thermodynamic cycles, where thermal energy is converted to mechanical work. Devices that convert a fuel's chemical energy directly into electrical work, such as fuel cells, can exceed the Carnot efficiency. Engine cycle efficiency The Carnot cycle is reversible and thus represents the upper limit on efficiency of an engine cycle. Practical engine cycles are irreversible and thus have inherently lower efficiency than the Carnot efficiency when operated between the same temperatures and . One of the factors determining efficiency is how heat is added to the working fluid in the cycle, and how it is removed. The Carnot cycle achieves maximum efficiency because all the heat is added to the working fluid at the maximum temperature , and removed at the minimum temperature . In contrast, in an internal combustion engine, the temperature of the fuel-air mixture in the cylinder is nowhere near its peak temperature as the fuel starts to burn, and only reaches the peak temperature as all the fuel is consumed, so the average temperature at which heat is added is lower, reducing efficiency. An important parameter in the efficiency of combustion engines is the specific heat ratio of the air-fuel mixture, γ. This varies somewhat with the fuel, but is generally close to the air value of 1.4. 
This standard value is usually used in the engine cycle equations below, and when this approximation is made the cycle is called an air-standard cycle. Otto cycle: automobiles The Otto cycle is the name for the cycle used in spark-ignition internal combustion engines such as gasoline and hydrogen fuelled automobile engines. Its theoretical efficiency depends on the compression ratio r of the engine and the specific heat ratio γ of the gas in the combustion chamber. Thus, the efficiency increases with the compression ratio. However, the compression ratio of Otto cycle engines is limited by the need to prevent the uncontrolled combustion known as knocking. Modern engines have compression ratios in the range 8 to 11, resulting in ideal cycle efficiencies of 56% to 61%. Diesel cycle: trucks and trains In the Diesel cycle used in diesel truck and train engines, the fuel is ignited by compression in the cylinder. The efficiency of the Diesel cycle depends on r and γ as in the Otto cycle, and also on the cutoff ratio, rc, which is the ratio of the cylinder volume at the beginning and end of the combustion process. The Diesel cycle is less efficient than the Otto cycle when using the same compression ratio. However, practical Diesel engines are 30%–35% more efficient than gasoline engines. This is because, since the fuel is not introduced to the combustion chamber until it is required for ignition, the compression ratio is not limited by the need to avoid knocking, so higher ratios are used than in spark ignition engines. Rankine cycle: steam power plants The Rankine cycle is the cycle used in steam turbine power plants. The overwhelming majority of the world's electric power is produced with this cycle. Since the cycle's working fluid, water, changes from liquid to vapor and back during the cycle, its efficiency depends on the thermodynamic properties of water. The thermal efficiency of modern steam turbine plants with reheat cycles can reach 47%, and in combined cycle plants, in which a steam turbine is powered by exhaust heat from a gas turbine, it can approach 60%. Brayton cycle: gas turbines and jet engines The Brayton cycle is the cycle used in gas turbines and jet engines. It consists of a compressor that increases the pressure of the incoming air, then fuel is continuously added to the flow and burned, and the hot exhaust gases are expanded in a turbine. The efficiency depends largely on the ratio of the pressure inside the combustion chamber p2 to the pressure outside p1. Other inefficiencies One should not confuse thermal efficiency with other efficiencies that are used when discussing engines. The above efficiency formulas are based on simple idealized mathematical models of engines, with no friction and working fluids that obey simplified thermodynamic models. Real engines have many departures from ideal behavior that waste energy, reducing actual efficiencies below the theoretical values given above. Examples are: friction of moving parts inefficient combustion heat loss from the combustion chamber departure of the working fluid from the thermodynamic properties of an ideal gas aerodynamic drag of air moving through the engine energy used by auxiliary equipment like oil and water pumps. inefficient compressors and turbines imperfect valve timing These factors may be accounted for when analyzing thermodynamic cycles; however, discussion of how to do so is outside the scope of this article.
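For reference, the textbook forms of the efficiency expressions discussed in the sections above are (for an air-standard cycle analysis):

\[ \eta_{th} = \frac{W_{net}}{Q_{in}} = 1 - \frac{|Q_{out}|}{Q_{in}}, \qquad \eta_{Carnot} = 1 - \frac{T_C}{T_H} \]

\[ \eta_{Otto} = 1 - \frac{1}{r^{\gamma-1}}, \qquad \eta_{Diesel} = 1 - \frac{1}{r^{\gamma-1}}\left(\frac{r_c^{\gamma}-1}{\gamma\,(r_c-1)}\right), \qquad \eta_{Brayton} = 1 - \left(\frac{p_1}{p_2}\right)^{(\gamma-1)/\gamma} \]

Here r is the compression ratio, rc the cutoff ratio, γ the specific heat ratio, and TH and TC the absolute temperatures of the hot and cold reservoirs; these are the standard textbook results for the cycles named above.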
Energy conversion For a device that converts energy from another form into thermal energy (such as an electric heater, boiler, or furnace), the thermal efficiency is where the quantities are heat-equivalent values. So, for a boiler that produces 210 kW (or 700,000 BTU/h) output for each 300 kW (or 1,000,000 BTU/h) heat-equivalent input, its thermal efficiency is 210/300 = 0.70, or 70%. This means that 30% of the energy is lost to the environment. An electric resistance heater has a thermal efficiency close to 100%. When comparing heating units, such as a highly efficient electric resistance heater to an 80% efficient natural gas-fuelled furnace, an economic analysis is needed to determine the most cost-effective choice. Effects of fuel heating value The heating value of a fuel is the amount of heat released during an exothermic reaction (e.g., combustion) and is a characteristic of each substance. It is measured in units of energy per unit of the substance, usually mass, such as: kJ/kg, J/mol. The heating value for fuels is expressed as the HHV, LHV, or GHV to distinguish treatment of the heat of phase changes: Higher heating value (HHV) is determined by bringing all the products of combustion back to the original pre-combustion temperature, and in particular condensing any vapor produced. This is the same as the thermodynamic heat of combustion. Lower heating value (LHV) (or net calorific value) is determined by subtracting the heat of vaporization of the water vapor from the higher heating value. The energy required to vaporize the water therefore is not realized as heat. Gross heating value accounts for water in the exhaust leaving as vapor, and includes liquid water in the fuel prior to combustion. This value is important for fuels like wood or coal, which will usually contain some amount of water prior to burning. Which definition of heating value is being used significantly affects any quoted efficiency. Not stating whether an efficiency is HHV or LHV renders such numbers very misleading. Heat pumps and refrigerators Heat pumps, refrigerators and air conditioners use work to move heat from a colder to a warmer place, so their function is the opposite of a heat engine. The work energy (Win) that is applied to them is converted into heat, and the sum of this energy and the heat energy that is taken up from the cold reservoir (QC) is equal to the magnitude of the total heat energy given off to the hot reservoir (|QH|) Their efficiency is measured by a coefficient of performance (COP). Heat pumps are measured by the efficiency with which they give off heat to the hot reservoir, COPheating; refrigerators and air conditioners by the efficiency with which they take up heat from the cold space, COPcooling: The reason the term "coefficient of performance" is used instead of "efficiency" is that, since these devices are moving heat, not creating it, the amount of heat they move can be greater than the input work, so the COP can be greater than 1 (100%). Therefore, heat pumps can be a more efficient way of heating than simply converting the input work into heat, as in an electric heater or furnace. Since they are heat engines, these devices are also limited by Carnot's theorem. 
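In symbols, the standard definitions, using the quantities introduced above, are:

\[ \mathrm{COP_{heating}} = \frac{|Q_H|}{W_{in}}, \qquad \mathrm{COP_{cooling}} = \frac{Q_C}{W_{in}}, \qquad |Q_H| = Q_C + W_{in}. \]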
The limiting value of the Carnot 'efficiency' for these processes, with the equality theoretically achievable only with an ideal 'reversible' cycle, is COPheating,Carnot = TH / (TH − TC) for heating and COPcooling,Carnot = TC / (TH − TC) for cooling, where the temperatures are measured on an absolute scale. The same device used between the same temperatures is more efficient when considered as a heat pump than when considered as a refrigerator, since COPheating = COPcooling + 1. This is because when heating, the work used to run the device is converted to heat and adds to the desired effect, whereas if the desired effect is cooling the heat resulting from the input work is just an unwanted by-product. Sometimes, the term efficiency is used for the ratio of the achieved COP to the Carnot COP, which cannot exceed 100%. Energy efficiency The 'thermal efficiency' is sometimes called the energy efficiency. In the United States, in everyday usage the SEER is the more common measure of energy efficiency for cooling devices, as well as for heat pumps when in their heating mode. For energy-conversion heating devices their peak steady-state thermal efficiency is often stated, e.g., 'this furnace is 90% efficient', but a more detailed measure of seasonal energy effectiveness is the annual fuel utilization efficiency (AFUE). Heat exchangers The role of a heat exchanger is to transfer heat between two media, so the performance of the heat exchanger is closely related to energy or thermal efficiency. A counter flow heat exchanger is the most efficient type of heat exchanger in transferring heat energy from one circuit to the other. However, for a more complete picture of heat exchanger efficiency, exergetic considerations must be taken into account. The thermal efficiency of an internal combustion engine is typically higher than that of an external combustion engine. See also Kalina cycle Electrical efficiency Mechanical efficiency Heat engine Federal roofing tax credit for energy efficiency (US) Lower heating value Cost of electricity by source Higher heating value Energy conversion efficiency References Thermodynamic properties Heating, ventilation, and air conditioning Energy conversion Engineering thermodynamics
Thermal efficiency
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
3,185
[ "Thermodynamic properties", "Physical quantities", "Engineering thermodynamics", "Quantity", "Thermodynamics", "Mechanical engineering" ]
30,861,073
https://en.wikipedia.org/wiki/Fog%20collection
Fog collection is the harvesting of water from fog using large pieces of vertical mesh netting to induce the fog-droplets to flow down towards a trough below. The setup is known as a fog fence, fog collector or fog net. Through condensation, atmospheric water vapour from the air condenses on cold surfaces into droplets of liquid water known as dew. The phenomenon is most observable on thin, flat, exposed objects including plant leaves and blades of grass. As the exposed surface cools by radiating its heat to the sky, atmospheric moisture condenses at a rate greater than that at which it can evaporate, resulting in the formation of water droplets. Water condenses onto the array of parallel wires and collects at the bottom of the net. This requires no external energy and is facilitated naturally through temperature fluctuation, making it attractive for deployment in less developed areas. The term 'fog fence' comes from its long rectangular shape resembling a fence, but fog collectors are not confined just to this structural style. The efficiency of the fog collector is based on the net material, the size of the holes and filament, and chemical coating. Fog collectors can harvest from 2% up to 10% of the moisture in the air, depending on their efficiency. An ideal location is a high altitude arid area near cold offshore currents, where fog is common, and therefore, the fog collector can produce the highest yield. Historical origin The organized collection of dew or condensation through natural or assisted processes is an ancient practice, from the small-scale drinking of pools of condensation collected in plant stems (still practiced today by survivalists), to large-scale natural irrigation without rain falling, such as in the Atacama and Namib deserts. The first man-made fog collectors stretch back as far as the Inca Empire, where buckets were placed under trees to take advantage of condensation. Several man-made devices such as antique stone piles in Ukraine, medieval dew ponds in southern England and volcanic stone covers on the fields of Lanzarote have all been thought to be possible dew-catching devices. One of the first recorded projects of fog collection was in 1969 in South Africa as a water source for an air force base. The structure consisted of two fences each 100m2 (1000 sq. ft.). Between the two, 11 litres (2½ gallons) of water was produced on average per day over the 14-month study, which is 110 ml of water for every square meter (⅓ fl. oz. per sq. ft.). The next large study was performed by the National Catholic University of Chile and the International Development Research Centre in Canada in 1987. One hundred 48m2 (520 sq. ft.) fog fences were assembled in northern Chile. The project was able to yield on average 0.5 litre of water for every square meter (1½ fl. oz. per sq. ft.), or 33L (8 gallons) for each of the 300 villagers, each day. In nature Fog collectors were first seen in nature as a technique for collecting water by some insects and foliage. Namib Desert beetles live off water that condenses on their wings due to a pattern of alternating hydrophilic (water attracting) and hydrophobic (water repelling) regions. Redwood forests are able to survive on limited rainfall due to the addition of condensation on needles which drip into the trees' root systems. Parts of a fog collector The fog collector is made up of three major parts: the frame, the mesh netting, and the trough or basin.
The frame supports the mesh netting and can be made from a wide array of materials from stainless steel poles to bamboo. The frame can vary in shape. Proposed geometries include linear, similar to a fence and cylindrical. Linear frames are rectangles with the vertical endpoints embedded into the ground. They have rope supports connected at the top and staked into the ground to provide stability. The mesh netting is where the condensation of water droplets appear. It consists of filaments knitted together with small openings, coated with a chemical to increase condensation. Shade cloth is used for mesh structure because it can be locally sourced in underdeveloped countries. The filaments are coated to be hydrophilic and hydrophobic, which attracts and repels water to increase the condensation. This can retrieve 2% of moisture in the air. Efficiency increases as the size of the filaments and the holes decrease. The most optimal mesh netting is made from stainless steel filaments the size of three to four human hairs and with holes that are twice as big as the filament. The netting is coated in a chemical that decreases water droplet's contact angle hysteresis, which allows for more small droplets to form. This type of netting can capture 10% of the moisture in the air. Below the mesh netting of a fog fence, there is a small trough for the water to be collected in. The water runs from the trough to some type of storage container or irrigation system for use. If the fog collector is circular the water will be deposited into a basin placed at the bottom of the netting. Principle Fog contains typically from 0.05 to 1 grams of water per cubic meter (⅗ to 12 grains per cu. yd.), with droplets from 1 to 40 micrometres in diameter. It settles slowly and is carried by wind. Therefore, an efficient fog fence must be placed facing the prevailing winds, and must be a fine mesh, as wind would flow around a solid wall and take the fog with it. The water droplets in the fog deposit on the mesh. A second mesh rubbing against the first causes the droplets to coalesce and run to the bottom of the meshes, where the water may be collected and led away. Advantages and disadvantages Advantages Water can be collected in any environment, including extremely arid environments such as the Atacama Desert, one of the driest places on earth. The harvested water can be safer to drink than ground water. Fog collection is considered low maintenance because it requires no exterior energy and only an occasional brushing of the nets to keep them clean. Parts can sometimes be sourced locally in underdeveloped countries, which allows for the collector to be fixed if broken and to not sit in disrepair. No in-depth training is necessary for repairing the collector. Fog collectors are low cost to implement compared to other water alternatives. Disadvantages Fog fences are limited in quantity by the regional climate and topography and cannot produce more water on demand. Their yields are not consistent year round and are affected by local weather and global weather fluctuations (such as El Niño). Their water supply can still be contaminated by windborne dust, birds, and insects. The moisture collected can promote growth of mold and other possibly toxic microorganisms on the mesh. Modern methods In the mid-1980s, the Meteorological Service of Canada (MSC) began constructing and deploying large fog collecting devices on Mont Sutton in Quebec. 
These simple tools consisted of a large piece of canvas (generally 12 metres; 40' long and 4 metres; 10' high) stretched between two 6 metres (20') wooden poles held up by guy wires, with a long trough underneath. Water would condense out of the fog onto the canvas, coalesce into droplets, and then slide down to drip off of the bottom of the canvas and into the collecting trough below. Chilean project The intent of the Canadian project was simply to use fog collection devices to study the constituents of the fog that they collected. However, their success sparked the interest of scientists in Chile's National Forest Corporation (CONAF) and Catholic University of Chile to exploit the or clouds which blanket the northern Chile coast in the southern hemisphere winter. With funding from the International Development Research Centre (IDRC), the MSC collaborated with the Chileans to begin testing different designs of collection facilities on El Tofo Mountain in northern Chile. Once perfected, approximately 50 of the systems were erected and used to irrigate seedlings on the hillside in an attempt at reforestation. Once vegetation became established, it should have begun collecting fog for itself, like the many cloud forests in South America, in order to flourish as a self-sustaining system. The success of the reforestation project is unclear, but approximately five years after the beginning of the project, the nearby village of Chungungo began to push for a pipeline to be sent down the mountain into the town. Though this was not in the scope of CONAF, which pulled out at this point, it was agreed to expand the collection facility to 94 nylon mesh collectors with a reserve tank and piping in order to supply the 300 inhabitants of Chungungo with water. The IDRC reports that ten years later in 2002, only nine of the devices remained and the system overall was in very poor shape. Conversely, the MSC states in its article that the facility was still fully functional in 2003, but provides no details behind this statement. In June 2003 the IDRC reported that plans existed to revive the site on El Tofo. Dar Si Hmad In March 2015 Dar Si Hmad (DSH), a Moroccan NGO, built a large fog-collection and distribution system in the Anti-Atlas Mountains. The region DSH worked in is water-poor, but abundant fog drapes the area 6 months out of the year. DSH's system included technology that monitored the water system via SMS message. These capabilities were crucial in dealing with the effects of fog collection on the social fabric of these rural areas. According to MIT researchers, the fog collection methods implemented by DSH have "improved the fog-collecting efficiency by about five hundred per cent." International use Despite the apparent failure of the fog collection project in Chungungo, the method has caught on in some localities around the world. The International Organization for Dew Utilization organization is working on foil-based effective condensers for regions where rain or fog cannot cover water needs throughout the year. Shortly after the initial success of the project, researchers from the participating organizations formed the nonprofit organization FogQuest, which has set up operational facilities in Yemen and central Chile, while still others are under evaluation in Guatemala, Haiti, and Nepal, this time with much more emphasis on the continuing involvement of the communities in the hopes that the projects will last well into the future. 
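To get a feel for the yields such projects aim for, a rough estimate can be made from the figures given in the Principle section above (a liquid water content of 0.05–1 g per cubic metre and a capture efficiency of 2–10%). The wind speed, panel area and daily fog hours used below are illustrative assumptions rather than measured values:

```python
# Back-of-the-envelope fog-collector yield estimate (all figures illustrative).
liquid_water_g_per_m3 = 0.3   # fog typically carries 0.05-1 g of water per m3
wind_speed_m_per_s = 2.0      # assumed breeze carrying fog through the mesh
mesh_area_m2 = 40.0           # assumed area of a single collector panel
capture_efficiency = 0.05     # collectors capture roughly 2%-10% of the moisture
fog_hours_per_day = 8         # assumed hours of fog cover per day

water_flux_g_per_s = (liquid_water_g_per_m3 * wind_speed_m_per_s *
                      mesh_area_m2 * capture_efficiency)
litres_per_day = water_flux_g_per_s * fog_hours_per_day * 3600 / 1000
print(round(litres_per_day, 1), "litres per day")  # about 35 L/day with these inputs
```

Actual yields vary strongly with site and season, which is why small test collectors are normally run before a full installation is built.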
Villages in a total of 25 countries worldwide now operate fog collection facilities. There is potential for the systems to be used to establish dense vegetation on previously arid grounds. It appears that the inexpensive collectors will continue to flourish. There have been several attempts to set up fog catchers in Peru, with varying success. See also References Sources International Development Research Centre article on the fog collection project Meteorological Service of Canada article on fog collection project Further reading External links Fog Harvesting, chapter from Source Book of Alternative Technologies for Freshwater Augmentation in Latin America and the Caribbean, UNEP International Environmental Technology Centre FogQuest: Sustainable Water Solutions, Canadian organization, historical information on fog collection projects in developing countries Standard Fog Collector, at USGS installation in Hawaii Fog Harvesting, chapter from Source Book of Alternative Technologies for Freshwater Augmentation in Latin America and the Caribbean, UNEP International Environmental Technology Centre The Fog Collectors: Harvesting Water From Thin Air FogQuest: Sustainable Water Solutions, Canadian organization, historical information on fog collection projects in developing countries Water supply Hydrology Appropriate technology
Fog collection
[ "Chemistry", "Engineering", "Environmental_science" ]
2,319
[ "Hydrology", "Water supply", "Environmental engineering" ]
30,862,811
https://en.wikipedia.org/wiki/Rotating%20biological%20contactor
A rotating biological contactor or RBC is a biological fixed-film treatment process used in the secondary treatment of wastewater following primary treatment. The primary treatment process involves removal of grit, sand and coarse suspended material through a screening process, followed by settling of suspended solids. The RBC process allows the wastewater to come in contact with a biological film in order to remove pollutants in the wastewater before discharge of the treated wastewater to the environment, usually a body of water (river, lake or ocean). A rotating biological contactor is a type of secondary (biological) treatment process. It consists of a series of closely spaced, parallel discs mounted on a rotating shaft which is supported just above the surface of the wastewater. Microorganisms grow on the surface of the discs where biological degradation of the wastewater pollutants takes place. Rotating biological contactors (RBCs) are capable of withstanding surges in organic load. To be successful, micro-organisms need both oxygen to live and food to grow. Oxygen is obtained from the atmosphere as the disks rotate. As the micro-organisms grow, they build up on the media until they are sloughed off due to shear forces provided by the rotating discs in the sewage. Effluent from the RBC is then passed through a clarifier where the sloughed biological solids in suspension settle as a sludge. Operation The rotating packs of disks (known as the media) are contained in a tank or trough and rotate at between 2 and 5 revolutions per minute. Commonly used plastics for the media are polyethylene, PVC and expanded polystyrene. The shaft is aligned with the flow of wastewater so that the discs rotate at right angles to the flow, with several packs usually combined to make up a treatment train. About 40% of the disc area is immersed in the wastewater. Biological growth is attached to the surface of the disc and forms a slime layer. The discs contact the wastewater with the atmospheric air for oxidation as it rotates. The rotation helps to slough off excess solids. The disc system can be staged in series to obtain nearly any detention time or degree of removal required. Since the systems are staged, the culture of the later stages can be acclimated to the slowly degraded materials. The discs consist of plastic sheets ranging from 2 to 4 m in diameter and are up to 10 mm thick. Several modules may be arranged in parallel and/or in series to meet the flow and treatment requirements. The discs are submerged in waste water to about 40% of their diameter. Approximately 95% of the surface area is thus alternately submerged in waste water and then exposed to the atmosphere above the liquid. Carbonaceous substrate is removed in the initial stage of RBC. Carbon conversion may be completed in the first stage of a series of modules, with nitrification being completed after the 5th stage. Most design of RBC systems will include a minimum of 4 or 5 modules in series to obtain nitrification of waste water. As the biofilm biomass changes from Carbon metabolizing to nitrifying, a visual colour change from grey/beige to brown can be seen which is illustrated by the adjacent photo. Biofilms, which are biological growths that become attached to the discs, assimilate the organic materials (measured as BOD5) in the wastewater. Aeration is provided by the rotating action, which exposes the media to the air after contacting them with the wastewater, facilitating the degradation of the pollutants being removed. 
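As a rough illustration of the media area such a unit provides, and of how hydraulic loading is expressed, the sketch below uses hypothetical values (the disc count, diameter and flow are not figures from the text):

```python
import math

# Hypothetical single-shaft rotating biological contactor stage.
n_discs = 120            # assumed number of discs on the shaft
disc_diameter_m = 3.0    # within the 2-4 m range described above
flow_m3_per_day = 150.0  # assumed wastewater flow through the stage

# Biofilm grows on both faces of every disc.
media_area_m2 = n_discs * 2 * math.pi * (disc_diameter_m / 2) ** 2

# Hydraulic loading rate: flow treated per unit of media surface per day.
hydraulic_loading = flow_m3_per_day / media_area_m2

print(round(media_area_m2), "m2 of media,", round(hydraulic_loading, 3), "m3/m2/day")
```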
The degree of wastewater treatment is related to the amount of media surface area and the quality and volume of the inflowing wastewater. RBCs regularly achieve the following effluent parameters for treated waste water: BOD5: 20 mg/L, Suspended Solids: 30 mg/L and Ammonia N: 20 mg/L. They consume very little power and make little noise due to the slow rotation of the rotor (2-5 RPM). They are generally considered very robust and low maintenance systems. Better discharge effluent parameters can be achieved by adding a tertiary polishing filter after the RBC to lower BOD5, SS and Ammonia Nitrogen. An additional UV or chlorination step can achieve effluent parameters that make the water suitable for irrigation or toilet flushing. Secondary clarification Secondary clarifiers following RBCs are identical in design to conventional humus tanks, as used downstream of trickling filters. Sludge is generally removed daily, or pumped automatically to the primary settlement tank for co-settlement. Regular sludge removal reduces the risk of anaerobic conditions developing within the sludge, with subsequent sludge flotation due to the release of gases. History The first RBC was installed in West Germany in 1959; it was later introduced in the United States and Canada. In the United States, rotating biological contactors are used for industries producing wastewaters high in biochemical oxygen demand (BOD) (e.g., petroleum industry and dairy industry). In the UK, the first GRP RBCs - manufactured by KEE Process Ltd., originally known as KLARGESTER - date back to 1955. A properly designed RBC produced a very high quality final effluent. However, both the organic and hydraulic loading had to be addressed in the design phase. In the 1980s, problems were encountered in the USA, prompting the Environmental Agency to commission a number of reports. These reports identified a number of issues and criticized the RBC process. One author suggested that, since manufacturers were aware of the problem, the issues would be resolved, and recommended that design engineers specify a long life. Severn Trent Water Ltd, a large UK water company based in the Midlands, employed RBCs as the preferred process for its small works, which amount to over 700 sites. Consequently, long life was essential to compliance. This issue was successfully addressed by Eric Findlay C Eng when he was employed by Severn Trent Water Ltd in the UK following a period of failure of a number of plants. As a result, the issue of short life failure became fully understood in the early 1990s, when the correct process and hydraulic issues had been identified to produce a high quality nitrified effluent. There are several other papers which address the whole issue of RBCs. Findlay also developed a system for repairing defective RBCs, enabling shaft and frame life to be extended up to 30 years based on the Cranfield-designed frame. Where additional capacity was required, intermediate frames were used. See also Activated sludge Aerated lagoon Trickling filter Industrial wastewater treatment List of waste water treatment technologies Sewage treatment References External links Design Criteria for Rotating Biological Contactors Implementing Rotating Biological Contactor Solutions Applying the Rotating Biological Contactor Process Wisconsin Department of Natural Resources - Wastewater Operator Certification.
Biological Treatment - Attached-Growth Processes Study Guide, February 2016 Edition Penn State Harrisburg Environmental Training Center Wastewater Treatment Plant Operator Certification Training - Module 21: Rotating Biological Contactors Environmental engineering Chemical equipment Biodegradable waste management Waste treatment technology Water treatment
Rotating biological contactor
[ "Chemistry", "Engineering", "Environmental_science" ]
1,425
[ "Water treatment", "Biodegradable waste management", "Chemical engineering", "Chemical equipment", "Water pollution", "Biodegradation", "Civil engineering", "nan", "Environmental engineering", "Water technology", "Waste treatment technology" ]
28,034,097
https://en.wikipedia.org/wiki/European%20Association%20of%20Geoscientists%20and%20Engineers
The European Association of Geoscientists and Engineers (EAGE) is a professional organization for geoscientists and engineers, established in 1951 with a worldwide membership. The association provides a platform for professionals in geophysics, petroleum exploration, geology, reservoir engineering, mining, civil engineering, digitalization and energy transition to exchange ideas and information. EAGE is headquartered in the Netherlands and has regional offices in Dubai, Kuala Lumpur, and Bogota. The association is committed to promoting the advancement of geoscience and engineering through various activities, including: Conferences, exhibitions, workshops, and webinars: EAGE organizes a large number of events each year, including its flagship event, the EAGE Annual Conference and Exhibition, which attracts nearly 6,000 visitors from around the world. Publications: EAGE publishes several journals, books, and magazines, including First Break, Geophysical Prospecting, Near Surface Geophysics, Petroleum Geoscience, Basin Research and Geoenergy. Educational programmes: EAGE offers educational programmes, including short courses and lectures, as well as various student programmes. EAGE is an official Continuing Professional Development (CPD) Provider for the "European Geologist" (EurGeol) title, established by European Federation of Geologists. Events The largest EAGE event in any year is the EAGE Annual Conference and Exhibition, attracting almost 6,000 visitors from all over the world. The conference covers a wide range of topics in the fields of geoscience, geophysics, and petroleum engineering. The topics include imaging and interpretation of seismic data, modeling and simulation of reservoirs, geology and petrophysics, geomechanics and structural geology, geothermal energy, artificial intelligence and digitalization, environmental impact and HSE issues, and many more. The conference also covers topics such as exploration and production, carbon capture and storage, energy transition, and education in the future. The conference brings together experts in these fields to discuss current trends, future prospects, and best practices in various aspects of the industry. EAGE also organizes dedicated conferences and workshops on a variety of topics, including exploration for minerals, oil and gas, geothermal energy, as well as emerging areas such as water footprint, hydrogen, carbon capture and storage, and deepwater exploration. The events also focus on various aspects of geoscience and engineering, including geostatistics, depth imaging, borehole geology, reservoir modeling, rock physics, seismology, and geochemistry as well as special sessions on topics such as women in geoscience and engineering. Additionally, EAGE hosts events dedicated to the use of new technologies and digitalization in the energy industry, as well as events that bring together young professionals and students in the field. Publications EAGE's flagship magazine is First Break. In addition, EAGE publishes five scientific journals: Geophysical Prospecting, Near Surface Geophysics, Petroleum Geoscience, Basin Research and Geoenergy. EAGE also publishes several books per year and maintains an online publishing platform called EarthDoc. 
See also List of geoscience organizations Society of Exploration Geophysicists Society of Petroleum Engineers American Association of Petroleum Geologists References External links Geology societies Geophysics societies Geotechnical organizations Petroleum engineering Petroleum organizations International professional associations based in Europe Professional associations based in the Netherlands Scientific organisations based in the Netherlands Scientific organizations established in 1951 1951 establishments in the Netherlands Organisations based in Utrecht (province) Bunnik
European Association of Geoscientists and Engineers
[ "Chemistry", "Engineering" ]
693
[ "Petroleum engineering", "Energy engineering", "Geotechnical organizations", "Civil engineering organizations", "Petroleum", "Petroleum organizations", "Energy organizations" ]
28,034,156
https://en.wikipedia.org/wiki/CING%20%28biomolecular%20NMR%20structure%29
In biomolecular structure, CING stands for the Common Interface for NMR structure Generation and is known for structure and NMR data validation. NMR spectroscopy provides diverse data on the solution structure of biomolecules. CING combines many external programs and internalized algorithms to direct an author of a new structure or a biochemist interested in an existing structure to regions of the molecule that might be problematic in relation to the experimental data. The source code is maintained open to the public at Google Code. There is a secure web interface iCing available for new data. Applications 9000+ validation reports for existing Protein Data Bank structures in NRG-CING. CING has been applied to automatic predictions in the CASD-NMR experiment with results available at CASD-NMR. Validated NMR data Protein or Nucleic acid structure together called Biomolecular structure Chemical shift (Nuclear Overhauser effect) Distance restraint Dihedral angle restraint RDC or Residual dipolar coupling restraint NMR (cross-)peak Software Following software is used internally or externally by CING: 3DNA Collaborative Computing Project for NMR CYANA (Software) DSSP (algorithm) MOLMOL Matplotlib Nmrpipe PROCHECK/Aqua POV-Ray ShiftX TALOS+ WHAT_CHECK Wattos XPLOR-NIH Yasara Algorithms Saltbridge Disulfide bridge Outlier Funding The NRG-CING project was supported by the European Community grants 213010 (eNMR) and 261572 (WeNMR). References External links CING - includes tutorials and blog. iCing - web interface to CING. software - Google code with issue tracker and Wiki. NRG-CING - validation results on all PDB NMR structures. CASD-NMR CING - validation results of recent CASD-NMR predicted structures. Protein structure
CING (biomolecular NMR structure)
[ "Chemistry" ]
390
[ "Protein structure", "Structural biology" ]
44,864,352
https://en.wikipedia.org/wiki/Dissimilatory%20sulfate%20reduction
Dissimilatory sulfate reduction is a form of anaerobic respiration that uses sulfate as the terminal electron acceptor to produce hydrogen sulfide. This metabolism is found in some types of bacteria and archaea, which are often termed sulfate-reducing organisms. The term "dissimilatory" is used when hydrogen sulfide (H2S) is produced in an anaerobic respiration process. By contrast, the term "assimilatory" would be used in relation to the biosynthesis of organosulfur compounds, even though hydrogen sulfide may be an intermediate. Dissimilatory sulfate reduction occurs in four steps: (1) conversion (activation) of sulfate to adenosine 5’-phosphosulfate (APS) via sulfate adenylyltransferase; (2) reduction of APS to sulfite via adenylyl-sulfate reductase; (3) transfer of the sulfur atom of sulfite to the DsrC protein, creating a trisulfide intermediate, catalyzed by DsrAB; and (4) reduction of the trisulfide to sulfide and reduced DsrC via a membrane-bound enzyme, DsrMKJOP. Overall, the process requires the consumption of a single ATP molecule and the input of 8 electrons (e−). The protein complexes responsible for these chemical conversions (Sat, Apr and Dsr) are found in all currently known organisms that perform dissimilatory sulfate reduction. Energetically, sulfate is a poor electron acceptor for microorganisms, as the sulfate-sulfite redox couple has a standard formal reduction potential (E0') of -516 mV, which is too negative to allow reduction by NADH or ferredoxin, the primary intracellular electron mediators. To overcome this issue, sulfate is first converted into APS by the enzyme ATP sulfurylase (Sat), at the cost of a single ATP molecule. The APS-sulfite redox couple has an E0' of -60 mV, which allows APS to be reduced by either NADH or reduced ferredoxin using the enzyme adenylyl-sulfate reductase (Apr), which requires the input of 2 electrons. In the final step, sulfite is reduced by the dissimilatory sulfite reductase (Dsr) to form sulfide, requiring the input of 6 electrons. See also Sulfur cycle Sulfur assimilation References Sulfur metabolism
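In simplified form, leaving out the DsrC trisulfide intermediate described above, the chemistry of the pathway can be summarized by the standard stoichiometry:

\[ \mathrm{SO_4^{2-} + ATP \rightarrow APS + PP_i} \qquad (\text{Sat}) \]
\[ \mathrm{APS + 2\,e^- \rightarrow SO_3^{2-} + AMP} \qquad (\text{Apr}) \]
\[ \mathrm{SO_3^{2-} + 6\,e^- + 8\,H^+ \rightarrow H_2S + 3\,H_2O} \qquad (\text{Dsr}) \]

giving the overall reduction \( \mathrm{SO_4^{2-} + 8\,e^- + 10\,H^+ \rightarrow H_2S + 4\,H_2O} \), driven by the hydrolysis of one ATP to AMP and pyrophosphate.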
Dissimilatory sulfate reduction
[ "Chemistry" ]
493
[ "Sulfur metabolism", "Metabolism" ]
44,865,976
https://en.wikipedia.org/wiki/Premixed%20turbulent%20flames
In a premixed turbulent flame, fuel and oxidizer are mixed by turbulence for a sufficiently long time before combustion is initiated. The deposition of energy from the spark generates a flame kernel that grows at first by laminar, then by turbulent flame propagation. In such a flame the oxidizer has already been mixed with the fuel before the mixture reaches the flame front. This creates a thin flame front, as all of the reactants are readily available. Further reading Reduction of pollutant emissions from high pressure flames using an electric field, Erlangen: ESYTEC-Verl., 2006. The Influence of Pressure on the Control of Premixed Turbulent Flames Using an Electric Field, Combustion and Flame, 2005. Combustion Turbulence
Premixed turbulent flames
[ "Chemistry" ]
148
[ "Turbulence", "Combustion", "Chemical reaction stubs", "Chemical process stubs", "Fluid dynamics" ]
44,867,070
https://en.wikipedia.org/wiki/Magnesium%20monohydride
Magnesium monohydride is a molecular gas with formula MgH that exists at high temperatures, such as the atmospheres of the Sun and stars. It was originally known as magnesium hydride, although that name is now more commonly used when referring to the similar chemical magnesium dihydride. History George Downing Liveing and James Dewar are claimed to be the first to make and observe a spectral line from MgH in 1878. However they did not realise what the substance was. Formation A laser can evaporate magnesium metal to form atoms that react with molecular hydrogen gas to form MgH and other magnesium hydrides. An electric discharge through hydrogen gas at low pressure (20 pascals) containing pieces of magnesium can produce MgH. Thermally produced hydrogen atoms and magnesium vapour can react and condense in a solid argon matrix. This process does not work with solid neon, probably due to the formation of instead. A simple way to produce some MgH is to burn magnesium in a bunsen burner flame, where there is enough hydrogen to form MgH temporarily. Magnesium arcs in steam also produce MgH, but also produce MgO. Natural formation of MgH happens in stars, brown dwarfs, and large planets, where the temperature is high enough. The reaction that produces it is either or Mg + H → MgH. Decomposition is by the reverse process. Formation requires the presence of magnesium gas. The amount of magnesium gas is greatly reduced in cool stars by its extraction in clouds of enstatite, a magnesium silicate. Otherwise in these stars, below any magnesium silicate clouds where the temperature is hotter, the concentration of MgH is proportional to the square root of the pressure, and concentration of magnesium, and 10−4236/T. MgH is the second most abundant magnesium containing gas (after atomic magnesium) in the deeper hotter parts of planets and brown dwarfs. The reaction of Mg atoms with (dihydrogen gas) is actually endothermic and proceeds when magnesium atoms are excited electronically. The magnesium atom inserts into the bond between the two hydrogen atoms to create a temporary molecule, which spins rapidly and breaks up into a spinning MgH molecule and a hydrogen atom. The MgH molecules produced have a bimodal distribution of rotation rates. When Protium is changed for Deuterium in this reaction the distribution of rotations remains unchanged. (). The low rotation rate products also have low vibration levels, and so are "cold". Properties Spectrum The far infrared contains the rotational spectrum of MgH ranging from 0.3 to 2 THz. This also contains hyperfine structure. 24MgH is predicted to have spectral lines for various rotational transition for the following vibrational levels. The infrared vibration rotation bands are in the range 800–2200 cm−1. The fundamental vibration mode is at 6.7 μm. Three isotopes of magnesium and two of hydrogen multiply the band spectra with six isotopomers: 24MgH 25MgH 26MgH 24MgD 25MgD 26MgD. Vibration and rotation frequencies are significantly altered by the different masses of the atoms. The visible band spectrum of magnesium hydride was first observed in the 19th century, and was soon confirmed to be due to a combination of magnesium and hydrogen. Whether there was actually a compound was debated due to no solid material being able to be produced. Despite this the term magnesium hydride was used for whatever made the band spectrum. This term was used before magnesium dihydride was discovered. 
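Written out explicitly, the temperature dependence quoted in the Formation section above corresponds to the scaling

\[ [\mathrm{MgH}] \propto [\mathrm{Mg}]\, p^{1/2}\, 10^{-4236/T}, \]

with T the absolute temperature; this is simply a restatement of the proportionality described in the text.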
The spectral bands had heads with fluting in the yellow green, green, and blue parts of the visible spectrum. The yellow green band of the MgH spectrum is around the wavelength 5622 Å. The blue band is 4845 Å The main band of MgH in the visible spectrum is due to electronic transition between the A2Π→X2Σ+ levels combined with transitions in rotational and vibrational state. For each electronic transition, there are different bands for changes between the different vibrational states. The transition between vibrational states is represented using parenthesis (n,m), with n and m being numbers. Within each band there are many lines organised into three sets called branches. The P, Q and R branch are distinguished by whether the rotational quantum number increases by one, stays the same or decreases by one. Lines in each branch will have different rotational quantum numbers depending on how fast the molecules are spinning. For the A2Π→X2Σ+ transition the lowest vibrational level transitions are the most prominent, however the A2Π energy level can have a vibration quantum state up to 13. Any higher level and the molecule has too much energy and shakes apart. For each level of vibrational energy there are a number of different rates of rotation that the molecule can sustain. For level 0 the maximum rotational quantum number is 49. Above this rotation rate it would spin so fast it would break apart. Then for subsequently higher vibrational levels from 2 to 13 the number of maximum rotational levels decreasing going through the sequence 47, 44, 42, 39, 36, 33, 30, 27, 23, 19, 15, 11 and 6. The B'2Σ+→X2Σ+ system is a transition from a slightly higher electronic state to the ground state. It also has lines in the visible spectrum that are observable in sunspots. The bands are headless. The (0,0) band is weak compared to the (0,3), (0,4), (0,5), (0,6), (0,7), (1,3), (1,4), (1,7), and (1,8) vibrational bands. The C2Π state has rotational parameters of B = 6.104 cm−1, D = 0.0003176 cm −1, A = 3.843 cm−1, and p = -0.02653 cm−1. It has an energy level of 41242 cm−1. Another 2Δ electronic level has energy 42192 cm−1 and rotation parameters B = 6.2861 cm−1 and A = -0.168 cm−1. The ultraviolet has many more bands due to higher energy electronic states. The UV spectrum contains band heads at 3100 Å due to a vibrational transition (1,0) 2940 Å (2,0) 2720 Å (3,0) 2640 Å (0,1) 2567 Å (1,3). Physical The magnesium monohydride molecule is a simple diatomic molecule with a magnesium atom bonded to a hydrogen atom. The distance between hydrogen and magnesium atoms is 1.7297Å. The ground state of magnesium monohydride is X2Σ+. Due to the simple structure the symmetry point group of the molecule is C∞v. The moment of inertia of one molecule is 4.805263×10−40 g cm2. The bond has significant covalent character. The dipole moment is 1.215 Debye. Bulk properties of the MgH gas include enthalpy of formation of 229.79 kJ mol−1, entropy 193.20 J K−1 mol−1 and heat capacity of 29.59 J K−1 mol−1. The dissociation energy of the molecule is 1.33 eV. Ionization potential is around 7.9 eV with the ion formed when the molecule loses an electron. Dimer In noble gas matrices MgH can form two kinds of dimer: HMgMgH and a rhombic shaped (◊) in which a dihydrogen molecule bridges the bond between two magnesium atoms. MgH also can form a complex with dihydrogen . Photolysis increases reactions which form the dimer. The energy to break up the dimer HMgMgH into two MgH radicals is 197 kJ/mol. has 63 kJ/mol more energy than HMgMgH. 
In theory gas phase HMgMgH can decompose to and releasing 24 kJ/mol of energy exothermically. The distance between the magnesium atoms in HMgMgH is calculated to be 2.861 Å. HMgMgH can be considered a formal base compound for other substances LMgMgL that have a magnesium to magnesium bond. In these magnesium can be considered to be in oxidation state +1 rather than the normal +2. However these sorts of compounds are not made from HMgMgH. Related ions can be made by protons hitting magnesium, or dihydrogen gas interacting with singly ionized magnesium atoms (). , and are formed from low pressure hydrogen or ammonia over a magnesium cathode. The trihydride ion is produced the most, and in a greater proportion when pure hydrogen is used rather than ammonia. The dihydride ion is produced the least of the three. Related radicals HMgO and HMgS have been theoretically investigated. MgOH and MgSH are lower in energy. Applications The spectrum of MgH in stars can be used to measure the isotope ratio of magnesium, the temperature, and gravity of the surface of the star. In hot stars MgH will be mostly disassociated due to the heat breaking the molecules, but it can be detected in cooler G, K and M type stars. It can also be detected in starspots or sunspots. The MgH spectrum can be used to study the magnetic field and nature of starspots. Some MgH spectral lines show up prominently in the second solar spectrum, that is the fractional linear polarization. The lines belong to the Q1 and Q2 branches. The MgH absorption lines are immune to the Hanle effect where polarization is reduced in the presence of magnetic fields, such as near sunspots. These same absorption lines do not suffer from the Zeeman effect either. The reason that the Q branch shows up in this way is because Q branch lines are four times more polarizable, and twice as intense as P and R branch lines. These lines that are more polarizable are also less subject to magnetic field effects. References Other reading Metal hydrides Magnesium compounds
Magnesium monohydride
[ "Chemistry" ]
2,088
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
44,876,201
https://en.wikipedia.org/wiki/Setchell%20Carlson
The Setchell Carlson Company was a manufacturer of radios, electronic equipment, and televisions from 1928 until the 1960s. The company was founded in St. Paul, Minnesota, in 1928 by Bart Setchell and Carl Donald Carlson under the name "Karadio Corporation", and its first product was a car radio. The company took the name Setchell Carlson in 1934, and produced consumer radios. During World War II, the company switched to war production, and its most prominent product was the BC-1206-C aviation range receiver. After the war, the company moved to New Brighton, Minnesota, in 1949, and produced televisions, which continued until the 1960s. In the late 1960s or early 1970s, the company moved away from consumer televisions and focused on equipment for institutions, such as schools. It eventually became a subsidiary of Audiotronics Corporation. At its peak in the 1960s, the company employed about 500 at two plants in New Brighton and Arden Hills, Minnesota. References Minneapolis School A-V Equipment of the 1960s and 1970s 1928 establishments in Minnesota Defunct electronics companies of the United States Defunct manufacturing companies based in Minnesota Radio manufacturers
Setchell Carlson
[ "Engineering" ]
232
[ "Radio electronics", "Radio manufacturers" ]
38,895,549
https://en.wikipedia.org/wiki/C25H30O8
The molecular formula C25H30O8 (molar mass: 458.501 g/mol, exact mass: 458.1941 u) may refer to: Kadsurin Mallotojaponin B Molecular formulas
C25H30O8
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
38,896,498
https://en.wikipedia.org/wiki/Snub%20hexaoctagonal%20tiling
In geometry, the snub hexaoctagonal tiling is a semiregular tiling of the hyperbolic plane. There are three triangles, one hexagon, and one octagon on each vertex. It has a Schläfli symbol of sr{8,6}. Images The tiling can be drawn in chiral pairs, with edges missing between black triangles. Related polyhedra and tilings From a Wythoff construction there are fourteen hyperbolic uniform tilings that can be based on the regular order-6 octagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 7 forms with full [8,6] symmetry, and 7 with subsymmetry. See also Tilings of regular polygons List of uniform planar tilings References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things, 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Chiral figures Hyperbolic tilings Isogonal tilings Semiregular tilings Snub tilings
Snub hexaoctagonal tiling
[ "Physics", "Chemistry" ]
274
[ "Snub tilings", "Semiregular tilings", "Isogonal tilings", "Tessellation", "Chirality", "Hyperbolic tilings", "Chiral figures", "Symmetry" ]
38,899,475
https://en.wikipedia.org/wiki/Terradynamics
Terradynamics is the study of forces and movement during terrestrial locomotion (particularly that using legs) on ground that can flow such as sand and soil. The term "terradynamics" is used in analogy to aerodynamics for flying in the air and hydrodynamics for swimming in water. Terradynamics has been used "to predict a small legged robot’s locomotion on granular media". The Johns Hopkins University Terradynamics Lab describes the field as "Movement Science at the Interface of Biology, Robotics & Physics". References Terrestrial locomotion Robot locomotion
Terradynamics
[ "Physics" ]
126
[ "Physical phenomena", "Motion (physics)", "Robot locomotion" ]
38,899,596
https://en.wikipedia.org/wiki/Transfer%20line
A transfer line is a manufacturing system which consists of a predetermined sequence of machines connected by an automated material handling system and designed for working on a very small family of parts. Parts can be moved singly because there is no need for batching when carrying parts between process stations (as opposed to a job shop, for example). The line can be synchronous, meaning that all parts advance at the same speed, or asynchronous, meaning buffers exist between stations where parts wait to be processed. Not all transfer lines are geometrically straight lines: for example, circular solutions have been developed which make use of rotary tables, although using buffers then becomes almost impossible. A crucial problem for this production system is that of line balancing: a trade-off between increasing productivity and minimizing cost while conserving the total processing time. Advantages Easy management: low work in progress and scheduling without simultaneous processing of different products Low need for manpower Less space needed (compared with a job shop) Less output variability: no alternative technological cycles and quality control is more effective (less WIP and easier to automate) High system saturation: less production mix variability Short lead time. High volume of production is possible. Disadvantages Very low flexibility Risk of obsolescence: due to new product introduction High vulnerability to failures: a failure in a single machine blocks the whole system in a very short time See also Job shop Production line Workflow External links Fuel injector transfer line with poka-yoke Brush transfer line Load/Unload Station of a golf ball transfer line Manufacturing
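The line-balancing trade-off mentioned above can be made concrete with a small calculation; the task times, target cycle time and station grouping below are hypothetical:

```python
import math

# Work content of one part, split into elemental tasks (minutes, hypothetical).
task_times = [0.8, 1.2, 0.5, 0.9, 1.1]
cycle_time = 1.5  # target minutes between successive part completions

total_work = sum(task_times)
# Theoretical minimum number of stations able to meet the cycle time.
min_stations = math.ceil(total_work / cycle_time)

# One feasible grouping of tasks into stations (each station's total <= cycle_time).
stations = [[0.8, 0.5], [1.2], [0.9], [1.1]]
# Balance efficiency: work content versus the capacity the stations provide.
efficiency = total_work / (len(stations) * cycle_time)

print(min_stations, "stations minimum,", round(100 * efficiency), "% balance efficiency")
```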
Transfer line
[ "Engineering" ]
319
[ "Manufacturing", "Mechanical engineering" ]
38,899,724
https://en.wikipedia.org/wiki/Unified%20interoperability
Unified interoperability is the property of a system that allows for the integration of real-time and non-real-time communications, activities, data, and information services (i.e., unified) and the display and coordination of those services across systems and devices (i.e., interoperability). Unified interoperability provides the capability to communicate and exchange processing across different applications, data, and infrastructure. Unified communications Unified communications has been led by the business world, which has a need for efficiency, simplicity, and speed. Rather than a single tool or product, unified communications is a set of products that deliver a nearly identical user experience across multiple devices or media types. The system begins with “presence information” - a feature of telecommunications technology that “senses” where a user is in relation to the technology. This change has been dominated by telecommunications providers integrating video, instant messaging, voice, and collaboration. Unified Communications Interoperability Forum In May 2010, a number of communications technology vendors founded a nonprofit organization for the advancement of interoperability. The goal of the Unified Communications Interoperability Forum is to enable complete interoperability of hardware and software across huge networks of systems. The UCIF relies on existing standards rather than the authoring of new ones. Members of the UCIF include (*founding member): HP* Microsoft* Polycom* Logitech* Juniper Networks* Acme Packet Huawei Aspect Software AudioCodes Broadcom BroadSoft Brocade Communications Systems ClearOne Jabra Plantronics Siemens Enterprise Communications Teliris Interoperability In the broadest sense, interoperability is the ability of multiple systems (usually computer systems) to work together seamlessly. In the Information Age, interoperability is a highly desirable trait for most business systems. Likewise, as homes become more infused with networked technologies (desktop PCs, tablet computers, smartphones, Internet-ready television), interoperability becomes an issue even for the average consumer. Computer operating systems are a prime example of interoperability, wherein several programs from different vendors are able to co-exist and, in many cases, exchange data in a meaningful way. An operating system is also “unified” in the sense that it presents the user with a common, easy-to-understand computer interface for executing numerous tasks. The unified interoperability of computers means that users need not have specialized knowledge about how computers function. A system with the property of interoperability will retain that property well into the future. The system will be adaptable to the rapid changes in technology with only minor adjustments. Syntactic interoperability The most fundamental level of interoperability is syntactic interoperability. At this level, systems can exchange data without loss or corruption. Certain data formats are especially suited to the exchange of data between diverse systems. XML (extensible markup language), for instance, allows data to be transmitted in a comprehensible format for people and machines. SQL (structured query language), on the other hand, is an industry-standard, nearly universal language for storing and querying information in a database. SQL databases are essential for a business such as Amazon.com, with its vast catalog of products, attributes, and consumer reviews.
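As a minimal illustration of syntactic interoperability, the sketch below serializes a record to XML in one system and reconstructs it in another; the record and its field names are invented for the example:

```python
import xml.etree.ElementTree as ET

# System A produces a record...
order = {"id": "1042", "item": "headset", "qty": "3"}

# ...and serializes it to XML, a format both systems have agreed on.
root = ET.Element("order")
for field, value in order.items():
    ET.SubElement(root, field).text = value
payload = ET.tostring(root, encoding="unicode")

# System B, written independently, parses the payload back into a record.
received = {child.tag: child.text for child in ET.fromstring(payload)}
assert received == order  # the data survived the exchange without loss
```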
Semantic interoperability Semantic interoperability goes a step further than syntactic interoperability. Systems with semantic interoperability can not only exchange data effortlessly, but also interpret and communicate that data to human users in a meaningful, actionable way. Distributed functions and processing interoperability Distributed functions and processing interoperability focuses on the ability to create new products, applications and operating models without traditional intermediaries such as data models, databases or large system integrations, by establishing a unified interoperability framework across normally diverse and distributed sources, data, technology and other assets. It enables business problems to be solved by connecting interoperable components of any characteristic into a single, uniform, global “instruction chain” of functionality. Components use existing IP or applications and so integrate disparate technology onto a uniform platform. Configuration models pair the components with a runtime processing infrastructure that provides common and predictable performance, security, resiliency, and availability, enabling the uniform exchange of data and consistent processing across components, irrespective of technology, format or location. Benefits Unified interoperability offers benefits for every stakeholder in a system. For customers and end-users of a system, unified interoperability offers a more convenient, satisfying experience. In business, interoperability helps lower costs and improves overall efficiency. As businesses strive to maximize the efficiency of their integrated systems, they encourage innovation and problem solving. References External links UCIF Official Website Strategic management Interoperability
Unified interoperability
[ "Engineering" ]
952
[ "Telecommunications engineering", "Interoperability" ]
38,899,889
https://en.wikipedia.org/wiki/Transformation%20Priority%20Premise
Transformation Priority Premise (TPP) is a programming approach developed by Robert C. Martin (Uncle Bob) as a refinement to make the process of test-driven development (TDD) easier and more effective for a computer programmer. The Transformation Priority Premise states that simpler transformations should be preferred over more complex ones. This approach helps the programmer do the simplest possible thing for the purposes of test-driven development, as they can explicitly refer to the list of transformations and favor the simpler transformations (from the top of the list) over those further down in the list in the first instance. The Transformations ({} → nil) no code at all → code that employs nil (nil → constant) (constant → constant+) a simple constant to a more complex constant (constant → scalar) replacing a constant with a variable or an argument (statement → statements) adding more unconditional statements. (unconditional → if) splitting the execution path (scalar → array) (array → container) (statement → tail-recursion) (if → while) (statement → non-tail-recursion) (expression → function) replacing an expression with a function or algorithm (variable → assignment) replacing the value of a variable. (case) adding a case (or else) to an existing switch or if Uncle Bob also explicitly stated: "There are likely others". How to use the Transformations in Practice Ridlehoover clarifies that the Transformations help you pick which tests to write and in what order. Corey Haines provides a live coding demo (Roman Numerals Kata) where he solves a coding challenge utilising the Transformations. References Roman Numerals Kata with Commentary Transformation Priority Premise Applied The Transformation Priority Premise explained by Uncle Bob External links Bob Martin's original blog post on TPP A subsequent blog post in which Bob Martin extended the list of transformations Extreme programming Software development philosophies Software development process Software testing
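A minimal, hypothetical Python sketch (not taken from Martin's posts or the cited katas) of how the earliest transformations might play out while test-driving a trivial function; the function and tests are invented purely to illustrate the ordering.

```python
# Step 1 - first failing test: is_positive(2) should be True.
# ({} -> nil) followed by (nil -> constant): write no more than a constant return.
def is_positive_v1(n):
    return True

# Step 2 - second failing test: is_positive(-3) should be False.
# (unconditional -> if): the simplest transformation near the top of the list
# that passes both tests splits the execution path, rather than reaching for
# loops or recursion further down the list.
def is_positive_v2(n):
    if n > 0:
        return True
    return False

# Later refactoring can collapse the conditional into a single expression;
# the transformation list only guides which change to make when a test fails.
def is_positive(n):
    return n > 0

assert is_positive_v1(2) and is_positive_v2(2) and not is_positive_v2(-3) and not is_positive(-3)
```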
Transformation Priority Premise
[ "Engineering" ]
397
[ "Software engineering", "Software testing" ]
38,904,907
https://en.wikipedia.org/wiki/Centre%20for%20Advanced%202D%20Materials
The Centre for Advanced 2D Materials (CA2DM), at the National University of Singapore (NUS), is the first centre in Asia dedicated to graphene research. The centre was established under the scientific advice of two Nobel Laureates in physics – Prof Andre Geim and Prof Konstantin Novoselov – who won the 2010 Nobel Prize in Physics for their discovery of graphene. It was created for the conception, characterization, theoretical modeling, and development of transformative technologies based on two-dimensional crystals, such as graphene. In 2019, Prof Konstantin Novoselov moved to Singapore and joined NUS as Distinguished Professor of Materials Science and Engineering. History and funding CA2DM had its beginnings in 2010 as the Graphene Research Centre (GRC), which NUS established under the leadership of Prof. Antonio H. Castro Neto, with a start-up fund from NUS of S$40 Million, 1,000 m2 of laboratory space, and a state-of-the-art clean room facility of 800 m2. In June 2012, the GRC announced the opening of a S$15 Million micro and nano fabrication facility to produce graphene products. Then in 2014, research activities in the GRC expanded to other 2D materials such as 2D transition metal dichalcogenides. To better reflect the research activities, the GRC was renamed the Centre for Advanced 2D Materials (CA2DM), and became an NRF “Medium-Sized Centre", with a S$50 Million grant. In terms of commercial applications, researchers are exploring graphene for making synthetic blood and for developing non-invasive cancer treatments, and graphene has been proposed as a possible replacement for silicon in computer chips, which could lead to faster and more durable tablets, phones and other devices. CA2DM is also participating in a S$50 Million CREATE grant from NRF, together with the University of California, Berkeley and Nanyang Technological University, for the study of new photovoltaic systems based on two-dimensional crystals. Research The target areas of intervention of the NUS Centre for Advanced 2D Materials are Graphene Research Principal Investigator(s): Barbaros Özyilmaz Research areas include: Atomically thin, wafer size, crystal growth, and characterization: Raman, AFM, TEM, STM, magneto transport, angle resolved photoemission (ARPES), optics. Three-dimensional architectures based on atomically thin films (atomic multi-layers, see figure). Composite materials where accumulated stress could be monitored by contactless, non-invasive, optical methods. Spintronics and valleytronics in two-dimensional materials. Graphene-ferroelectric memories (G-FeRAM), graphene spin torque transistors (G-STT). 2D Materials Research Principal Investigator(s): Loh Kian Ping Research areas include: Atomically thin, wafer size, crystal growth, and characterization: Raman, AFM, TEM, STM, magneto transport, angle resolved photoemission (ARPES), optics. Three-dimensional architectures based on atomically thin films (atomic multi-layers, see figure). Composite materials where accumulated stress could be monitored by contactless, non-invasive, optical methods. Spintronics and valleytronics in two-dimensional materials. 2D Device Research Principal Investigator(s): Lim Chwee Teck Research areas include: Flexible electronics and strain engineering of atomically thin materials. Mechanics of atomically thin film transfer. Nano-scale patterning and new device development. Atomically thin electrodes for photovoltaic or OLED applications. Atomically thin gas barriers and electrodes for energy/charge transfer and storage (water splitting, fuel cells, etc.).
Solution-processed atomically thin substrates for bio applications and catalysis. Atomically thin films as optical components in fiber lasers (mode locking, polarizers etc.). Atomically thin film platforms for bio-sensing and stem cell growth. Atomically thin film platforms for sol-gel, organic, and electro-chemistry. Theory Group Principal Investigator(s): Feng Yuan Ping Research areas include: Computational modeling of new atomically thin materials and complex architectures. Spintronics and valleytronics in two-dimensional materials. References Physics research institutes Research institutes in Singapore National University of Singapore 2010 establishments in Singapore Graphene Educational institutions established in 2010 Nanotechnology institutions
Centre for Advanced 2D Materials
[ "Materials_science" ]
901
[ "Nanotechnology", "Nanotechnology institutions" ]
38,905,019
https://en.wikipedia.org/wiki/Prosopine
Prosopine is an alkaloid found in Prosopis africana. References Alkaloids Alkaloids found in Fabaceae
Prosopine
[ "Chemistry" ]
30
[ "Organic compounds", "Biomolecules by chemical classification", "Natural products", "Alkaloids" ]
21,779,590
https://en.wikipedia.org/wiki/GRB%20970508
GRB 970508 was a gamma-ray burst (GRB) detected on May 8, 1997, at 21:42 UTC; it is historically important as the second GRB (after GRB 970228) with a detected afterglow at other wavelengths, the first to have a direct redshift measurement of the afterglow, and the first to be detected at radio wavelengths. A gamma-ray burst is a highly luminous flash associated with an explosion in a distant galaxy and producing gamma rays, the most energetic form of electromagnetic radiation, and often followed by a longer-lived "afterglow" emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, and radio). GRB 970508 was detected by the Gamma Ray Burst Monitor on the Italian–Dutch X-ray astronomy satellite BeppoSAX. Astronomer Mark Metzger determined that GRB 970508 occurred at least 6 billion light years from Earth; this was the first measurement of the distance to a gamma-ray burst. Until this burst, astronomers had not reached a consensus regarding how far away GRBs occur from Earth. Some supported the idea that GRBs occur within the Milky Way, but are visibly faint because they are not highly energetic. Others concluded that GRBs occur in other galaxies at cosmological distances and are extremely energetic. Although the possibility of multiple types of GRBs meant that the two theories were not mutually exclusive, the distance measurement unequivocally placed the source of the GRB outside the Milky Way, effectively ending the debate. GRB 970508 was also the first burst with an observed radio frequency afterglow. By analyzing the fluctuating strength of the radio signals, astronomer Dale Frail calculated that the source of the radio waves had expanded almost at the speed of light. This provided strong evidence that GRBs are relativistically expanding explosions. Discovery A gamma-ray burst (GRB) is a highly luminous flash of gamma rays—the most energetic form of electromagnetic radiation. GRBs were first detected in 1967 by the Vela satellites (a series of spacecraft designed to detect nuclear explosions in space). The initial burst is often followed by a longer-lived "afterglow" emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, and radio). The first GRB afterglow to be discovered was the X-ray afterglow of GRB 970228, which was detected by BeppoSAX, an Italian–Dutch satellite originally designed to study X-rays. On Thursday May 8, 1997, at 21:42 UTC, BeppoSAX's Gamma Ray Burst Monitor registered a gamma-ray burst that lasted approximately 15 seconds. It was also detected by Ulysses, a robotic space probe designed to study the Sun, and by the Burst and Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory. The burst also occurred within the field of view of one of BeppoSAX's two X-ray Wide Field Cameras. Within a few hours, the BeppoSAX team localized the burst to an error box—a small area around the specific position to account for the error in the position—with a diameter of approximately 10 arcminutes. Observations After a rough position of the burst had been determined, Enrico Costa of the BeppoSAX team contacted astronomer Dale Frail at the National Radio Astronomy Observatory's Very Large Array. Frail began making observations at a wavelength of 20 centimeters at 01:30 UTC, less than four hours after the discovery. While preparing for his observations Frail contacted astronomer Stanislav Djorgovski, who was working with the Hale Telescope. 
Djorgovski immediately compared his images of the region with older images from the Digitized Sky Survey, but he found no new sources of light within the error box. Mark Metzger, a colleague of Djorgovski at the Caltech observatory, conducted a more extensive analysis of the data, but was also unable to identify any new light sources. The following evening Djorgovski again observed the region. He compared the images from both nights but the error box contained no objects that had decreased in luminosity between May 8 and May 9. Metzger noticed one object that had increased in luminosity, but he assumed it was a variable star rather than the GRB afterglow. Titus Galama and Paul Groot, members of a research team in Amsterdam led by Jan van Paradijs, compared images taken by the WIYN Telescope on May 8 and the William Herschel Telescope on May 9. They were also unable to find any light sources which had faded during that time. After discovering the burst's X-ray afterglow, the BeppoSAX team provided a more accurate localization, and what Metzger had assumed to be a variable star was still present in this smaller error box. Both the Caltech team and the Amsterdam team were hesitant to publish any conclusions on the variable object. On May 10 Howard Bond of the Space Telescope Science Institute published his discovery, which was later confirmed to be the burst's optical afterglow. On the night between May 10 and May 11, 1997, Metzger's colleague Charles Steidel recorded the spectrum of the variable object at the W. M. Keck Observatory. He then sent the data to Metzger, who, after identifying a system of absorption lines associated with magnesium and iron, determined a redshift of z = 0.8349 ± 0.0002, indicating that light from the burst had been absorbed by matter roughly 6 billion light-years from Earth. Although the redshift of the burst itself had not been determined, the absorbent matter was necessarily located between the burst and the Earth, implying that the burst itself was at least as far away. The absence of Lyman-alpha forest features in the spectra constrained the redshift to z ≤ 2.3, while further investigation by Daniel E. Reichart of the University of Chicago suggested a redshift of z ≈ 1.09. This was the first instance in which scientists were able to measure the redshift of a GRB. Several optical spectra were also obtained at the Calar Alto Observatory over two wavelength ranges, but no emission lines were identified. On May 13, five days after the first detection of GRB 970508, Frail resumed his observations with the Very Large Array. He made observations of the burst's position at a wavelength of 3.5 cm and immediately detected a strong signal. After 24 hours, the 3.5 cm signal became significantly stronger, and he also detected signals at the 6 and 21 cm wavelengths. This was the first confirmed observation of a radio afterglow of a GRB. Over the next month, Frail observed that the luminosity of the radio source fluctuated significantly from day to day but increased on average. The fluctuations did not occur simultaneously along all of the observed wavelengths, which Jeremy Goodman of Princeton University explained as being the result of the radio waves being bent by interstellar plasma in the Milky Way. Such radio scintillations (rapid variations in the radio luminosity of an object) occur only when the source has an apparent diameter of less than 3 microarcseconds.
Characteristics BeppoSAX's Gamma-Ray Burst Monitor, operating in the energy range of 40–700 keV, recorded a fluence of (1.85 ± 0.3) × 10−6 erg/cm2 (1.85 ± 0.3 nJ/m2), and the Wide Field Camera (2–26 keV) recorded a fluence of (0.7 ± 0.1) × 10−6 erg/cm2 (0.7 ± 0.1 nJ/m2). BATSE (20–1000 keV) recorded a fluence of (3.1 ± 0.2) × 10−6 erg/cm2 (3.1 ± 0.2 nJ/m2). About 5 hours after the burst the apparent magnitude of the object—a logarithmic measure of its brightness with a higher number indicating a fainter object—was 20.3 ± 0.3 in the U-band (the ultraviolet region of the spectrum) and 21.2 ± 0.1 in the R-band (the red region of the spectrum). The afterglow reached its peak luminosity in both bands approximately 2 days after the burst was first detected—19.6 ± 0.3 in the U-band at 02:13 UTC on May 11, and 19.8 ± 0.2 in the R-band at 20:55 UTC on May 10. James E. Rhoads, an astronomer at the Kitt Peak National Observatory, analyzed the burst and determined that it was not strongly beamed. Further analysis by Frail and his colleagues indicated that the total energy released by the burst was approximately 5×1050 ergs (5×1043 J), and Rhoads determined that the total gamma-ray energy was approximately 3×1050 erg (3×1043 J). This implied that the gamma-ray and kinetic energy of the burst's ejecta were comparable, effectively ruling out those GRB models which are relatively inefficient at producing gamma rays. Distance scale and emission model Prior to this burst, astronomers had not reached consensus regarding how far away GRBs occur from Earth. Although the isotropic distribution of bursts suggested that they do not occur within the disk of the Milky Way, some astronomers supported the idea that they occur within the Milky Way's halo, concluding that the bursts are visibly faint because they are not highly energetic. Others concluded that GRBs occur in other galaxies at cosmological distances and that they can be detected because they are extremely energetic. The distance measurement and the calculations of the burst's total energy release unequivocally supported the latter theory, effectively ending the debate. Throughout the month of May the radio scintillations became less noticeable until they ceased altogether. This implies that the radio source significantly expanded in the time that had passed since the burst was detected. Using the known distance to the source and the elapsed time before the scintillation ended, Frail calculated that the radio source had expanded at almost the speed of light. While various existing models already encompassed the notion of a relativistically expanding fireball, this was the first strong evidence to support such a model. Host galaxy The afterglow of GRB 970508 reached a peak total luminosity 19.82 days after the burst was detected. It then faded with a power law slope over about 100 days. The afterglow eventually disappeared, revealing the burst's host, an actively star-forming dwarf galaxy with an apparent magnitude of V = 25.4 ± 0.15. The galaxy was well fitted by an exponential disk with an ellipticity of 0.70 ± 0.07. The redshift of GRB 970508's optical afterglow, z = 0.835, agreed with the host galaxy's redshift of z = 0.83, suggesting that, unlike previously observed bursts, GRB 970508 may have been associated with an active galactic nucleus. See also List of gamma-ray bursts Notes References 970508 19970508 May 1997 Camelopardalis
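As a rough, illustrative cross-check of the distance figure quoted above (not a reproduction of the original analyses, which used different cosmological parameters), the short Python sketch below converts the absorber redshift z = 0.835 into a light-travel time, assuming a flat Lambda-CDM cosmology with H0 = 70 km/s/Mpc and a matter density of 0.3.

```python
import numpy as np

H0 = 70.0                       # assumed Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7     # assumed matter and dark-energy densities
HUBBLE_TIME_GYR = 977.8 / H0    # 1/H0 expressed in Gyr

def lookback_time_gyr(z, steps=100_000):
    """Light-travel (lookback) time to redshift z, via a midpoint-rule integral."""
    dz = z / steps
    zs = (np.arange(steps) + 0.5) * dz
    e_of_z = np.sqrt(OMEGA_M * (1 + zs) ** 3 + OMEGA_L)
    return HUBBLE_TIME_GYR * np.sum(dz / ((1 + zs) * e_of_z))

t = lookback_time_gyr(0.835)
print(f"light-travel time to z = 0.835: about {t:.1f} billion years")
# Prints roughly 7 billion years, consistent with the statement that the
# absorbing matter lies at least 6 billion light-years from Earth.
```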
GRB 970508
[ "Astronomy" ]
2,336
[ "Camelopardalis", "Constellations" ]
21,779,600
https://en.wikipedia.org/wiki/Transition%20path%20sampling
Transition path sampling (TPS) is a rare-event sampling method used in computer simulations of rare events: physical or chemical transitions of a system from one stable state to another that occur too rarely to be observed on a computer timescale. Examples include protein folding, chemical reactions and nucleation. Standard simulation tools such as molecular dynamics can generate the dynamical trajectories of all the atoms in the system. However, because of the gap in accessible time-scales between simulation and reality, even present supercomputers might require years of simulations to show an event that occurs once per millisecond without some kind of acceleration. Transition path ensemble TPS focuses on the most interesting part of the simulation, the transition. For example, an initially unfolded protein will vibrate for a long time in an open-string configuration before undergoing a transition and folding on itself. The aim of the method is to reproduce precisely those folding moments. Consider in general a system with two stable states A and B. The system will spend a long time in those states and occasionally jump from one to the other. There are many ways in which the transition can take place. Once a probability is assigned to each of the many pathways, one can construct a Monte Carlo random walk in the path space of the transition trajectories, and thus generate the ensemble of all transition paths. All the relevant information can then be extracted from the ensemble, such as the reaction mechanism, the transition states, and the rate constants. Given an initial path, TPS provides some algorithms to perturb that path and create a new one. As in all Monte Carlo walks, the new path will then be accepted or rejected in order to have the correct path probability. The procedure is iterated and the ensemble is gradually sampled. A powerful and efficient algorithm is the so-called shooting move. Consider the case of a classical many-body system described by coordinates r and momenta p. Molecular dynamics generates a path as a set of (rt, pt) at discrete times t in [0,T] where T is the length of the path. For a transition from A to B, (r0, p0) is in A, and (rT, pT) is in B. One of the path times is chosen at random, the momenta p are modified slightly into p + δp, where δp is a random perturbation consistent with system constraints, e.g. conservation of energy and linear and angular momentum. A new trajectory is then simulated from this point, both backward and forward in time until one of the states is reached. Being in a transition region, this will not take long. If the new path still connects A to B it is accepted, otherwise it is rejected and the procedure starts again. Rate constant computation In the Bennett–Chandler procedure, the rate constant kAB for the transition from A to B is derived from the correlation function C(t) = ⟨hA(0)hB(t)⟩/⟨hA⟩, where hX is the characteristic function of state X, and hX(t) is either 1 if the system at time t is in state X or 0 if not. The time-derivative C'(t) starts at time 0 at the transition state theory (TST) value kABTST and reaches a plateau kAB ≤ kABTST for times of the order of the transition time. Hence once the function is known up to these times, the rate constant is also available. In the TPS framework C(t) can be rewritten as an average in the path ensemble, C(t) = C(t′)⟨hB(t)⟩AB/⟨hB(t′)⟩AB, where the subscript AB denotes an average in the ensemble of paths that start in A and visit B at least once. Time t′ is an arbitrary time in the plateau region of C(t).
The factor C(t′) at this specific time can be computed with a combination of path sampling and umbrella sampling. Transition interface sampling The TPS rate constant calculation can be improved in a variation of the method called Transition interface sampling (TIS). In this method the transition region is divided into subregions using interfaces. The first interface defines state A and the last state B. The interfaces are not physical interfaces but hypersurfaces in the phase space. The rate constant can be viewed as a flux through these interfaces. The rate kAB is the flux of trajectories starting before the first interface and going through the last interface. Being a rare event, the flux is very small and practically impossible to compute with a direct simulation. However, using the other interfaces between the states, one can rewrite the flux in terms of transition probabilities between interfaces: kAB = Φ1,0 PA(2|1) PA(3|2) ··· PA(n|n − 1), where PA(i + 1|i) is the probability for trajectories, coming from state A and crossing interface i, to reach interface i + 1. Here interface 0 defines state A and interface n defines state B. The factor Φ1,0 is the flux through the interface closest to A. By making this interface close enough, the quantity Φ1,0 can be computed with a standard simulation, as the crossing event through this interface is not a rare event any more. Remarkably, in the formula above there is no Markov assumption of independent transition probabilities. The quantities PA(i + 1|i) carry a subscript A to indicate that the probabilities are all dependent on the history of the path, all the way from when it left A. These probabilities can be computed with a path sampling simulation using the TPS shooting move. A path crossing interface i is perturbed and a new path is shot. If the path still starts from A and crosses interface i, it is accepted. The probability PA(i + 1|i) follows from the ratio of the number of paths that reach interface i + 1 to the total number of paths in the ensemble. Theoretical considerations show that TIS computations are at least twice as fast as TPS, and computer experiments have shown that the TIS rate constant can converge up to 10 times faster. A reason for this is that TIS uses paths of adjustable length, on average shorter than those of TPS. Also, TPS relies on the correlation function C(t), computed by summation of positive and negative terms due to recrossings. TIS instead computes the rate as an effective positive flux: the quantity kAB is computed directly as an average of only positive terms contributing to the interface transition probabilities. Time Dependent Processes TPS/TIS as normally implemented can be acceptable for non-equilibrium calculations provided that the interfacial fluxes are time-independent (stationary). To treat non-stationary systems in which there is time dependence in the dynamics, due either to variation of an external parameter or to evolution of the system itself, then other rare-event methods may be needed, such as stochastic-process rare-event sampling. Cited references More references For a review of TPS: For a review of TIS External links C++ source code of an S-PRES wrapper program, with optional parallelism using OpenMP. http://www.pyretis.org Python open source library to perform transition path sampling, Interfaced with GROMACS, LAMMPS, CP2K. Computational chemistry Monte Carlo methods Molecular dynamics Theoretical chemistry
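The sketch below is a deliberately minimal, illustrative Python implementation of the shooting move described above, applied to a single particle in a one-dimensional double-well potential rather than a many-body system; the potential, the state definitions, the fixed path length, and the temperature of the initial-condition ensemble are all arbitrary choices made for the example, not part of the method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: unit-mass particle in the double well V(x) = (x^2 - 1)^2,
# with stable states A (left well) and B (right well).
def force(x):
    return -4.0 * x * (x * x - 1.0)

def energy(x, p):
    return 0.5 * p * p + (x * x - 1.0) ** 2

dt, n_steps, beta = 0.01, 400, 3.0
in_A = lambda x: x < -0.7
in_B = lambda x: x > 0.7

def propagate(x, p, n):
    """Velocity-Verlet trajectory of n steps; returns position and momentum arrays."""
    xs, ps = np.empty(n + 1), np.empty(n + 1)
    xs[0], ps[0] = x, p
    f = force(x)
    for i in range(n):
        p_half = p + 0.5 * dt * f
        x = x + dt * p_half
        f = force(x)
        p = p_half + 0.5 * dt * f
        xs[i + 1], ps[i + 1] = x, p
    return xs, ps

def shoot(xs, ps):
    """Shooting move: perturb the momentum at a random slice, then re-integrate
    forward and (using time reversibility) backward to rebuild the whole path."""
    j = int(rng.integers(1, n_steps))
    p_new = ps[j] + 0.05 * rng.normal()
    xs_f, ps_f = propagate(xs[j], p_new, n_steps - j)   # forward segment
    xs_b, ps_b = propagate(xs[j], -p_new, j)            # backward segment
    new_xs = np.concatenate([xs_b[::-1], xs_f[1:]])
    new_ps = np.concatenate([-ps_b[::-1], ps_f[1:]])
    return new_xs, new_ps

def initial_path():
    """Crude bootstrap: shoot from the barrier top with increasing momentum
    until the fixed-length path happens to connect A to B."""
    for p0 in np.arange(0.2, 3.0, 0.05):
        xs_f, ps_f = propagate(0.0, p0, n_steps - n_steps // 2)
        xs_b, ps_b = propagate(0.0, -p0, n_steps // 2)
        xs = np.concatenate([xs_b[::-1], xs_f[1:]])
        ps = np.concatenate([-ps_b[::-1], ps_f[1:]])
        if in_A(xs[0]) and in_B(xs[-1]):
            return xs, ps
    raise RuntimeError("no initial transition path found")

xs, ps = initial_path()
accepted = 0
for _ in range(2000):
    new_xs, new_ps = shoot(xs, ps)
    # Accept only reactive (A-to-B) paths; the Metropolis factor accounts for the
    # change in the conserved path energy under a canonical weight of initial conditions.
    d_e = energy(new_xs[0], new_ps[0]) - energy(xs[0], ps[0])
    if in_A(new_xs[0]) and in_B(new_xs[-1]) and rng.random() < np.exp(-beta * d_e):
        xs, ps, accepted = new_xs, new_ps, accepted + 1
print(f"accepted {accepted} of 2000 shooting moves")
```

A production implementation (see, for example, the PyRETIS library linked below) would additionally handle flexible path lengths, multiple order parameters, and the bookkeeping needed for rate-constant estimates.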
Transition path sampling
[ "Physics", "Chemistry" ]
1,471
[ "Molecular physics", "Monte Carlo methods", "Computational physics", "Molecular dynamics", "Computational chemistry", "Theoretical chemistry", "nan" ]
21,782,811
https://en.wikipedia.org/wiki/Equatorial%20Rossby%20wave
Equatorial Rossby waves, often called planetary waves, are very long, low frequency water waves found near the equator and are derived using the equatorial beta plane approximation. Mathematics Using the equatorial beta plane approximation, f = βy, where β is the variation of the Coriolis parameter with latitude, β = ∂f/∂y = 2Ωcos(θ)/a. With this approximation, the primitive equations become the following: the continuity equation (accounting for the effects of horizontal convergence and divergence and written with geopotential height): ∂φ/∂t + c²(∂u/∂x + ∂v/∂y) = 0; the U-momentum equation (zonal component): ∂u/∂t − βyv = −∂φ/∂x; the V-momentum equation (meridional component): ∂v/∂t + βyu = −∂φ/∂y. In order to fully linearize the primitive equations, one must assume the following solution: (u, v, φ) = (û(y), v̂(y), φ̂(y)) exp[i(kx − ωt)]. Upon linearization, the primitive equations yield the following dispersion relation: ω = −βk/[k² + (2n + 1)β/c], with meridional mode number n = 1, 2, 3, ..., where c is the phase speed of an equatorial Kelvin wave (c = √(gH)). Their frequencies are much lower than those of gravity waves and represent motion that occurs as a result of the undisturbed potential vorticity varying (not constant) with latitude on the curved surface of the earth. For very long waves (as the zonal wavenumber k approaches zero), the non-dispersive phase speed is approximately ω/k ≈ −c/(2n + 1), which indicates that these long equatorial Rossby waves move in the opposite direction (westward) of Kelvin waves (which move eastward) with speeds reduced by factors of 3, 5, 7, etc. To illustrate, suppose c = 2.8 m/s for the first baroclinic mode in the Pacific; then the Rossby wave speed would correspond to ~0.9 m/s, requiring a 6-month time frame to cross the Pacific basin from east to west. For very short waves (as the zonal wavenumber increases), the group velocity (energy packet) is eastward and opposite to the phase speed, both of which are given by the following relations: Frequency relation: ω ≈ −β/k; Group velocity: cg = ∂ω/∂k ≈ β/k². Thus, the phase and group speeds are equal in magnitude but opposite in direction (phase speed is westward and group velocity is eastward); note that it is often useful to use potential vorticity as a tracer for these planetary waves, due to its invertibility (especially in the quasi-geostrophic framework). Therefore, the physical mechanism responsible for the propagation of these equatorial Rossby waves is none other than the conservation of potential vorticity: d(ζ + βy)/dt = 0. Thus, as a fluid parcel moves equatorward (βy approaches zero), the relative vorticity ζ must increase and become more cyclonic in nature. Conversely, if the same fluid parcel moves poleward (βy becomes larger), the relative vorticity must decrease and become more anticyclonic in nature. As a side note, these equatorial Rossby waves can also be vertically-propagating waves when the Brunt–Vaisala frequency (buoyancy frequency) is held constant, ultimately resulting in solutions proportional to exp[i(kx + mz − ωt)], where m is the vertical wavenumber and k is the zonal wavenumber. Equatorial Rossby waves can also adjust to equilibrium under gravity in the tropics, because the planetary waves have frequencies much lower than gravity waves. The adjustment process tends to take place in two distinct stages where the first stage is a rapid change due to the fast propagation of gravity waves, the same as that on an f-plane (Coriolis parameter held constant), resulting in a flow that is close to geostrophic equilibrium. This stage could be thought of as the mass field adjusting to the wave field (due to the wavelengths being smaller than the Rossby deformation radius).
The second stage is one where quasi-geostrophic adjustment takes place by means of planetary waves; this process can be comparable to the wave field adjusting to the mass field (due to the wavelengths being larger than the Rossby deformation radius). See also Equatorial waves References Oceanography
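As an illustrative back-of-the-envelope check of the numbers quoted above (the basin width below is an assumed round figure, not a value from the references), the long-wave phase speed and the Pacific crossing time can be reproduced in a few lines of Python.

```python
# Long-wave limit c_x ~ -c / (2n + 1) for the first baroclinic mode (n = 1).
c_kelvin = 2.8            # m/s, first-baroclinic Kelvin wave speed assumed above
n = 1                     # meridional mode number
c_rossby = -c_kelvin / (2 * n + 1)
print(f"long Rossby wave speed: {c_rossby:.2f} m/s (westward)")

pacific_width_m = 15e6    # assumed basin width of roughly 15,000 km
crossing_days = pacific_width_m / abs(c_rossby) / 86400
print(f"basin crossing time: ~{crossing_days:.0f} days (~{crossing_days / 30:.0f} months)")
# Gives about -0.93 m/s and roughly 190 days, i.e. the ~0.9 m/s and six-month
# figures quoted in the text.
```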
Equatorial Rossby wave
[ "Physics", "Environmental_science" ]
773
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
21,784,317
https://en.wikipedia.org/wiki/Direct%20and%20indirect%20band%20gaps
In semiconductors, the band gap of a semiconductor can be of two basic types, a direct band gap or an indirect band gap. The minimal-energy state in the conduction band and the maximal-energy state in the valence band are each characterized by a certain crystal momentum (k-vector) in the Brillouin zone. If the k-vectors are different, the material has an "indirect gap". The band gap is called "direct" if the crystal momentum of electrons and holes is the same in both the conduction band and the valence band; an electron can directly emit a photon. In an "indirect" gap, a photon cannot be emitted because the electron must pass through an intermediate state and transfer momentum to the crystal lattice. Examples of direct bandgap materials include hydrogenated amorphous silicon and some III–V materials such as InAs and GaAs. Indirect bandgap materials include crystalline silicon and Ge. Some III–V materials are indirect bandgap as well, for example AlSb. Implications for radiative recombination Interactions among electrons, holes, phonons, photons, and other particles are required to satisfy conservation of energy and crystal momentum (i.e., conservation of total k-vector). A photon with an energy near a semiconductor band gap has almost zero momentum. One important process is called radiative recombination, where an electron in the conduction band annihilates a hole in the valence band, releasing the excess energy as a photon. This is possible in a direct band gap semiconductor if the electron has a k-vector near the conduction band minimum (the hole will share the same k-vector), but not possible in an indirect band gap semiconductor, as photons cannot carry crystal momentum, and thus conservation of crystal momentum would be violated. For radiative recombination to occur in an indirect band gap material, the process must also involve the absorption or emission of a phonon, where the phonon momentum equals the difference between the electron and hole momentum. It can also, instead, involve a crystallographic defect, which performs essentially the same role. The involvement of the phonon makes this process much less likely to occur in a given span of time, which is why radiative recombination is far slower in indirect band gap materials than direct band gap ones. This is why light-emitting and laser diodes are almost always made of direct band gap materials, and not indirect band gap ones like silicon. The fact that radiative recombination is slow in indirect band gap materials also means that, under most circumstances, radiative recombinations will be a small proportion of total recombinations, with most recombinations being non-radiative, taking place at point defects or at grain boundaries. However, if the excited electrons are prevented from reaching these recombination places, they have no choice but to eventually fall back into the valence band by radiative recombination. This can be done by creating a dislocation loop in the material. At the edge of the loop, the planes above and beneath the "dislocation disk" are pulled apart, creating a negative pressure, which raises the energy of the conduction band substantially, with the result that the electrons cannot pass this edge. Provided that the area directly above the dislocation loop is defect-free (no non-radiative recombination possible), the electrons will fall back into the valence shell by radiative recombination, thus emitting light. This is the principle on which "DELEDs" (Dislocation Engineered LEDs) are based. 
Implications for light absorption The exact reverse of radiative recombination is light absorption. For the same reason as above, light with a photon energy close to the band gap can penetrate much farther before being absorbed in an indirect band gap material than a direct band gap one (at least insofar as the light absorption is due to exciting electrons across the band gap). This fact is very important for photovoltaics (solar cells). Crystalline silicon is the most common solar-cell substrate material, despite the fact that it is indirect-gap and therefore does not absorb light very well. As such, crystalline silicon solar cells are typically hundreds of microns thick; thinner wafers would allow much of the light (particularly in longer wavelengths) to simply pass through. By comparison, thin-film solar cells are made of direct band gap materials (such as amorphous silicon, CdTe, CIGS or CZTS), which absorb the light in a much thinner region, and consequently can be made with a very thin active layer (often less than 1 micron thick). The absorption spectrum of an indirect band gap material usually depends more on temperature than that of a direct material, because at low temperatures there are fewer phonons, and therefore it is less likely that a photon and phonon can be simultaneously absorbed to create an indirect transition. For example, silicon is opaque to visible light at room temperature, but transparent to red light at liquid helium temperatures, because red photons can only be absorbed in an indirect transition. Formula for absorption A common and simple method for determining whether a band gap is direct or indirect uses absorption spectroscopy. By plotting certain powers of the absorption coefficient against photon energy, one can normally tell both what value the band gap is, and whether or not it is direct. For a direct band gap, the absorption coefficient is related to light frequency according to the following formula: α(hν) ≈ A*√(hν − Eg), with A* = q²x_vc²(2m_r)^(3/2)/(λ0ε0ħ³n), where: α is the absorption coefficient, a function of light frequency; ν is light frequency; h is the Planck constant (hν is the energy of a photon with frequency ν); ħ is the reduced Planck constant (ħ = h/2π); Eg is the band gap energy; A* is a certain constant, with the formula given above; m_r = m_e*m_h*/(m_e* + m_h*), where m_e* and m_h* are the effective masses of the electron and hole, respectively (m_r is called a "reduced mass"); q is the elementary charge; n is the (real) index of refraction; ε0 is the vacuum permittivity; λ0 is the vacuum wavelength for light of frequency ν; x_vc is a "matrix element", with units of length and typical value the same order of magnitude as the lattice constant. This formula is valid only for light with photon energy larger, but not too much larger, than the band gap (more specifically, this formula assumes the bands are approximately parabolic), and ignores all other sources of absorption other than the band-to-band absorption in question, as well as the electrical attraction between the newly created electron and hole (see exciton). It is also invalid in the case that the direct transition is forbidden, or in the case that many of the valence band states are empty or conduction band states are full. On the other hand, for an indirect band gap, the formula is: α(hν) ∝ (hν − Eg + Ep)²/[exp(Ep/kBT) − 1] + (hν − Eg − Ep)²/[1 − exp(−Ep/kBT)], where: Ep is the energy of the phonon that assists in the transition; kB is the Boltzmann constant; T is the thermodynamic temperature. This formula involves the same approximations mentioned above. Therefore, if a plot of α² versus hν forms a straight line, it can normally be inferred that there is a direct band gap, measurable by extrapolating the straight line to the hν axis.
On the other hand, if a plot of √α versus hν forms a straight line, it can normally be inferred that there is an indirect band gap, measurable by extrapolating the straight line to the hν axis (assuming the phonon energy Ep is small compared with Eg). Other aspects In some materials with an indirect gap, the value of the gap is negative. The top of the valence band is higher than the bottom of the conduction band in energy. Such materials are known as semimetals. See also Moss–Burstein effect Tauc plot References External links B. Van Zeghbroeck's Principles of Semiconductor Devices at Electrical and Computer Engineering Department of University of Colorado at Boulder Electronic band structures Optoelectronics
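The following Python sketch illustrates the direct-gap plotting procedure described above on synthetic data; the gap value, prefactor, and noise level are invented for the example and do not describe a real measurement.

```python
import numpy as np

# Generate synthetic direct-gap absorption data with alpha ~ sqrt(h*nu - Eg).
Eg_true = 1.42                                   # eV, a GaAs-like value chosen for illustration
photon_e = np.linspace(1.45, 1.80, 50)           # photon energies just above the gap, eV
alpha = 8e3 * np.sqrt(photon_e - Eg_true)        # direct-gap form with an arbitrary prefactor
alpha *= 1 + 0.01 * np.random.default_rng(1).normal(size=alpha.size)  # mock measurement noise

# For a direct gap, alpha^2 is linear in photon energy; the intercept of the
# fitted line with alpha^2 = 0 gives the band gap.
slope, intercept = np.polyfit(photon_e, alpha ** 2, 1)
Eg_fit = -intercept / slope
print(f"extracted direct gap: {Eg_fit:.3f} eV (true value {Eg_true} eV)")
```

For an indirect gap one would instead plot the square root of the absorption coefficient against photon energy, as noted above, and read off the intercept in the same way.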
Direct and indirect band gaps
[ "Physics", "Chemistry", "Materials_science" ]
1,626
[ "Electron", "Electronic band structures", "Condensed matter physics" ]
21,787,021
https://en.wikipedia.org/wiki/Movat%27s%20stain
Movat's stain is a pentachrome stain originally developed by Henry Zoltan Movat (1923–1995), a Hungarian-Canadian Pathologist in Toronto in 1955 to highlight the various constituents of connective tissue, especially cardiovascular tissue, by five colors in a single stained slide. In 1972, H. K. Russell, Jr. modified the technique so as to reduce the time for staining and to increase the consistency and reliability of the staining, creating the Russell–Movat stain. Principle Modified Russell–Movat staining highlights numerous tissue components in histological slides. It is obtained by a mix of five stains: alcian blue, Verhoeff hematoxylin and crocein scarlet combined with acidic fuchsine and saffron. At pH 2.5, alcian blue is fixed by electrostatic binding with the acidic mucopolysaccharides. The Verhoeff hematoxylin has a high affinity for nuclei and elastin fibers, negatively charged. The combination of crocein scarlet with acidic fuchsine stains acidophilic tissue components in red. Then, collagen and reticulin fibers are unstained by a reaction with phosphotungstic acid and stained in yellow by saffron. Uses Modified Russell–Movat staining is used to study the heart, blood vessels and connective tissues. It can also be used to diagnose vascular and lung diseases. Gallery References See also Cardiovascular disease Staining
Movat's stain
[ "Chemistry", "Biology" ]
311
[ "Staining", "Microbiology techniques", "Cell imaging", "Microscopy" ]
21,787,470
https://en.wikipedia.org/wiki/Alpha%20particle
Alpha particles, also called alpha rays or alpha radiation, consist of two protons and two neutrons bound together into a particle identical to a helium-4 nucleus. They are generally produced in the process of alpha decay but may also be produced in other ways. Alpha particles are named after the first letter in the Greek alphabet, α. The symbol for the alpha particle is α or α2+. Because they are identical to helium nuclei, they are also sometimes written as He2+ or 4He2+, indicating a helium ion with a +2 charge (missing its two electrons). Once the ion gains electrons from its environment, the alpha particle becomes a normal (electrically neutral) helium atom. Alpha particles have a net spin of zero. When produced in standard alpha radioactive decay, alpha particles generally have a kinetic energy of about 5 MeV and a velocity in the vicinity of 5% of the speed of light. They are a highly ionizing form of particle radiation, with low penetration depth (stopped by a few centimetres of air, or by the skin). However, so-called long-range alpha particles from ternary fission are three times as energetic and penetrate three times as far. The helium nuclei that form 10–12% of cosmic rays are also usually of much higher energy than those produced by nuclear decay processes, and thus may be highly penetrating and able to traverse the human body and also many metres of dense solid shielding, depending on their energy. To a lesser extent, this is also true of very high-energy helium nuclei produced by particle accelerators. Name The term "alpha particle" was coined by Ernest Rutherford in reporting his studies of the properties of uranium radiation. The radiation appeared to have two different characters, the first he called "α radiation" and the more penetrating one he called "β radiation". After five years of additional experimental work, Rutherford and Hans Geiger determined that "the alpha particle, after it has lost its positive charge, is a Helium atom". Alpha radiation consists of particles equivalent to doubly-ionized helium nuclei (He2+) which can gain electrons from passing through matter. This mechanism is the origin of terrestrial helium gas. Sources Alpha decay The best-known source of alpha particles is alpha decay of heavier (mass number of at least 104) atoms. When an atom emits an alpha particle in alpha decay, the atom's mass number decreases by four due to the loss of the four nucleons in the alpha particle. The atomic number of the atom goes down by two, as a result of the loss of two protons – the atom becomes a new element. Examples of this sort of nuclear transmutation by alpha decay are the decay of uranium to thorium, and that of radium to radon. Alpha particles are commonly emitted by all of the larger radioactive nuclei such as uranium, thorium, actinium, and radium, as well as the transuranic elements. Unlike other types of decay, alpha decay as a process must have a minimum-size atomic nucleus that can support it. The smallest nuclei that have to date been found to be capable of alpha emission are beryllium-8 and tellurium-104, not counting beta-delayed alpha emission of some lighter elements. The alpha decay sometimes leaves the parent nucleus in an excited state; the emission of a gamma ray then removes the excess energy. Mechanism of production in alpha decay In contrast to beta decay, the fundamental interactions responsible for alpha decay are a balance between the electromagnetic force and nuclear force.
Alpha decay results from the Coulomb repulsion between the alpha particle and the rest of the nucleus, which both have a positive electric charge, but which is kept in check by the nuclear force. In classical physics, alpha particles do not have enough energy to escape the potential well from the strong force inside the nucleus (this well involves escaping the strong force to go up one side of the well, which is followed by the electromagnetic force causing a repulsive push-off down the other side). However, the quantum tunnelling effect allows alphas to escape even though they do not have enough energy to overcome the nuclear force. This is allowed by the wave nature of matter, which allows the alpha particle to spend some of its time in a region so far from the nucleus that the potential from the repulsive electromagnetic force has fully compensated for the attraction of the nuclear force. From this point, alpha particles can escape. Ternary fission Especially energetic alpha particles deriving from a nuclear process are produced in the relatively rare (one in a few hundred) nuclear fission process of ternary fission. In this process, three charged particles are produced from the event instead of the normal two, with the smallest of the charged particles most probably (90% probability) being an alpha particle. Such alpha particles are termed "long range alphas" since at their typical energy of 16 MeV, they are at far higher energy than is ever produced by alpha decay. Ternary fission happens in both neutron-induced fission (the nuclear reaction that happens in a nuclear reactor), and also when fissionable and fissile actinides nuclides (i.e., heavy atoms capable of fission) undergo spontaneous fission as a form of radioactive decay. In both induced and spontaneous fission, the higher energies available in heavy nuclei result in long range alphas of higher energy than those from alpha decay. Accelerators Energetic helium nuclei (helium ions) may be produced by cyclotrons, synchrotrons, and other particle accelerators. Convention is that they are not normally referred to as "alpha particles". Solar core reactions Helium nuclei may participate in nuclear reactions in stars, and occasionally and historically these have been referred to as alpha reactions (see triple-alpha process and alpha process). Cosmic rays In addition, extremely high energy helium nuclei sometimes referred to as alpha particles make up about 10 to 12% of cosmic rays. The mechanisms of cosmic ray production continue to be debated. Energy and absorption The energy of the alpha particle emitted in alpha decay is mildly dependent on the half-life for the emission process, with many orders of magnitude differences in half-life being associated with energy changes of less than 50%, shown by the Geiger–Nuttall law. The energy of alpha particles emitted varies, with higher energy alpha particles being emitted from larger nuclei, but most alpha particles have energies of between 3 and 7 MeV (mega-electron-volts), corresponding to extremely long and extremely short half-lives of alpha-emitting nuclides, respectively. The energies and ratios are often distinct and can be used to identify specific nuclides as in alpha spectrometry. With a typical kinetic energy of 5 MeV; the speed of emitted alpha particles is 15,000 km/s, which is 5% of the speed of light. This energy is a substantial amount of energy for a single particle, but their high mass means alpha particles have a lower speed than any other common type of radiation, e.g. 
β particles, neutrons. Because of their charge and large mass, alpha particles are easily absorbed by materials, and they can travel only a few centimetres in air. They can be absorbed by tissue paper or by the outer layers of human skin. They typically penetrate skin about 40 micrometres, equivalent to a few cells deep. Biological effects Due to the short range of absorption and inability to penetrate the outer layers of skin, alpha particles are not, in general, dangerous to life unless the source is ingested or inhaled. Because of this high mass and strong absorption, if alpha-emitting radionuclides do enter the body (upon being inhaled, ingested, or injected, as with the use of Thorotrast for high-quality X-ray images prior to the 1950s), alpha radiation is the most destructive form of ionizing radiation. It is the most strongly ionizing, and with large enough doses can cause any or all of the symptoms of radiation poisoning. It is estimated that chromosome damage from alpha particles is anywhere from 10 to 1000 times greater than that caused by an equivalent amount of gamma or beta radiation, with the average being set at 20 times. A study of European nuclear workers exposed internally to alpha radiation from plutonium and uranium found that when relative biological effectiveness is considered to be 20, the carcinogenic potential (in terms of lung cancer) of alpha radiation appears to be consistent with that reported for doses of external gamma radiation i.e. a given dose of alpha-particles inhaled presents the same risk as a 20-times higher dose of gamma radiation. The powerful alpha emitter polonium-210 (a milligram of 210Po emits as many alpha particles per second as 4.215 grams of 226Ra) is suspected of playing a role in lung cancer and bladder cancer related to tobacco smoking. 210Po was used to kill Russian dissident and ex-FSB officer Alexander V. Litvinenko in 2006. History of discovery and use In 1896, Henri Becquerel discovered that uranium emits an invisible radiation that can leave marks on photographic plates, and this mystery radiation wasn't phosphorescence. Marie Curie showed that this phenomenon, which she called "radioactivity", was not unique to uranium and a consequence of individual atoms. Ernest Rutherford studied uranium radiation and discovered that it could ionize gas particles. In 1899, Rutherford discovered that uranium radiation is a mixture of two types of radiation. He performed an experiment which involved two electrodes separated by 4 cm of air. He placed some uranium on the bottom electrode, and the radiation from the uranium ionized the air between the electrodes, creating a current. Rutherford then placed an aluminium foil (5 micrometers thick) over the uranium and noticed that the current dropped a bit, indicating that the foil was absorbing some of the uranium's radiation. Rutherford placed a few more foils over the uranium and found that, for the first four foils, the current steadily decreased at a geometric rate. However, after the fourth layer of foil over the uranium, the current didn't drop anymore and remained more or less level for up to twelve layers of foil. This result indicated that uranium radiation has two components. Rutherford dubbed one component "alpha radiation" which was fully absorbed by just a few layers of foil, and what was left was a second component that could penetrate the foils more easily, and he dubbed the latter "beta radiation". 
In 1900, Marie Curie noticed that the absorption coefficient of alpha rays seemed to increase the thicker the barrier she placed in their path. This suggested that alpha radiation is not a form of light but made of particles that lose kinetic energy as they pass through barriers. In 1902, Rutherford found that he could deflect alpha rays with a magnetic field and an electric field, showing that alpha radiation is composed of positively charged particles. In 1906, Rutherford made some more precise measurements of the charge-to-mass ratio of alpha particles. Firstly, he found that the ratio was more or less the same whether the source was radium or actinium, showing that alpha particles are the same regardless of the source. Secondly, he found the charge-to-mass ratio of alpha particles to be half that of the hydrogen ion. Rutherford proposed three explanations: 1) an alpha particle is a hydrogen molecule (H2) with a charge of 1 e; 2) an alpha particle is an atom of helium with a charge of 2 e; 3) an alpha particle is half a helium atom with a charge of 1 e. At that time in history, scientists knew that hydrogen ions have an atomic weight of 1 and a charge of 1 e, and that helium has an atomic weight of 4. Nobody knew exactly how many electrons were in an atom. Protons and neutrons had not yet been discovered. Rutherford decided the second explanation was the most plausible because it was the simplest and because sizeable deposits of helium were commonly found underground next to deposits of radioactive elements. His explanation was that as alpha particles are emitted by underground radioactive elements, they become trapped in the rock strata and acquire electrons, becoming helium atoms. Therefore an alpha particle is essentially a helium atom stripped of two electrons. In 1909, Ernest Rutherford and Thomas Royds finally proved that alpha particles were indeed helium ions. To do this they collected and purified the gas emitted by radium, a known alpha particle emitter, in a glass tube. An electric spark discharge inside the tube produced light. Subsequent study of the spectra of this light showed that the gas was helium and thus the alpha particles were indeed helium ions. In 1911, Rutherford used alpha particle scattering data to argue that the positive charge of an atom is concentrated in a tiny nucleus. In 1913, Antonius van den Broek suggested that anomalies in the periodic table would be reduced if the nuclear charge in an atom and thus the number of electrons in an atom is equal to its atomic number. Therefore a helium atom has two electrons, and an alpha particle is essentially a helium nucleus. In 1920, Rutherford deduced the existence of the proton as the source of positive charge in the atom. In 1932, James Chadwick discovered the neutron. Thereafter it was known that an alpha particle is an agglomeration of two protons and two neutrons. Anti-alpha particle While anti-matter equivalents for helium-3 have been known since 1970, it took until 2010 for members of the international STAR collaboration using the Relativistic Heavy Ion Collider at the U.S. Department of Energy's Brookhaven National Laboratory to detect the antimatter partner of the helium-4 nucleus. Like the Rutherford scattering experiments, the antimatter experiment used gold. This time, gold ions moving at nearly the speed of light were collided head-on to produce the antiparticle, also dubbed the "anti-alpha" particle. Applications Devices Some smoke detectors contain a small amount of the alpha emitter americium-241.
The alpha particles ionize air within a small gap. A small current is passed through that ionized air. Smoke particles from fire that enter the air gap reduce the current flow, sounding the alarm. The isotope is extremely dangerous if inhaled or ingested, but the danger is minimal if the source is kept sealed. Many municipalities have established programs to collect and dispose of old smoke detectors, to keep them out of the general waste stream. However, the US EPA says they "may be thrown away with household garbage". Alpha decay can provide a safe power source for radioisotope thermoelectric generators used for space probes. Alpha decay is much more easily shielded against than other forms of radioactive decay. Plutonium-238, a source of alpha particles, requires only 2.5 mm of lead shielding to protect against unwanted radiation. Static eliminators typically use polonium-210, an alpha emitter, to ionize air, allowing the "static cling" to more rapidly dissipate. Cancer treatment Alpha-emitting radionuclides are presently being used in three different ways to eradicate cancerous tumors: as an infusible radioactive treatment targeted to specific tissues (radium-223), as a source of radiation inserted directly into solid tumors (radium-224), and as an attachment to a tumor-targeting molecule, such as an antibody to a tumor-associated antigen. Radium-223 is an alpha emitter that is naturally attracted to the bone because it is a calcium mimetic. Radium-223 (as radium-223 dichloride) can be infused into a cancer patient's veins, after which it migrates to parts of the bone where there is rapid turnover of cells due to the presence of metastasized tumors. Once within the bone, Ra-223 emits alpha radiation that can destroy tumor cells within a 100-micron distance. This approach has been in use since 2013 to treat prostate cancer which has metastasized to the bone. Radionuclides infused into the circulation are able to reach sites that are accessible to blood vessels. This means, however, that the interior of a large tumor that is not vascularized (i.e. is not well penetrated by blood vessels) may not be effectively eradicated by the radioactivity. Radium-224 is a radioactive atom that is utilized as a source of alpha radiation in a cancer treatment device called DaRT (diffusing alpha emitters radiation therapy). Each radium-224 atom undergoes a decay process producing 6 daughter atoms. During this process, 4 alpha particles are emitted. The range of an alpha particle—up to 100 microns—is insufficient to cover the width of many tumors. However, radium-224's daughter atoms can diffuse up to 2–3 mm in the tissue, thus creating a "kill region" with enough radiation to potentially destroy an entire tumor, if the seeds are placed appropriately. Radium-224's half-life is short enough at 3.6 days to produce a rapid clinical effect while avoiding the risk of radiation damage due to overexposure. At the same time, the half-life is long enough to allow for handling and shipping the seeds to a cancer treatment center at any location across the globe. Targeted alpha therapy for solid tumors involves attaching an alpha-particle-emitting radionuclide to a tumor-targeting molecule such as an antibody, that can be delivered by intravenous administration to a cancer patient. Alpha radiation and DRAM errors In computer technology, dynamic random access memory (DRAM) "soft errors" were linked to alpha particles in 1978 in Intel's DRAM chips.
The discovery led to strict control of radioactive elements in the packaging of semiconductor materials, and the problem is largely considered to be solved. See also Alpha nuclide Alpha process (Also known as alpha-capture, or the alpha-ladder) Beta particle Cosmic rays Helion, the nucleus of helium-3 rather than helium-4 List of alpha emitting materials Nuclear physics Particle physics Radioactive isotope Rays: β (beta) rays γ Gamma ray δ Delta ray ε Epsilon radiation Rutherford scattering References Further reading External links Helium Alpha Alpha Subatomic particles with spin 0
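As an illustrative numerical check of the figures quoted in the Energy and absorption section above (the alpha-particle rest energy is the standard tabulated value; everything else follows from the stated 5 MeV kinetic energy), the short Python sketch below confirms that a typical decay alpha moves at roughly 5% of the speed of light.

```python
import math

KE_MEV = 5.0                 # typical alpha-decay kinetic energy, MeV
MASS_MEV = 3727.4            # alpha-particle rest energy m*c^2, MeV
C_KM_S = 299_792.458         # speed of light, km/s

# Non-relativistic estimate: KE = (1/2) m v^2  ->  v/c = sqrt(2*KE / (m c^2))
beta_nr = math.sqrt(2 * KE_MEV / MASS_MEV)

# Relativistic version: KE = (gamma - 1) m c^2
gamma = 1 + KE_MEV / MASS_MEV
beta_rel = math.sqrt(1 - 1 / gamma ** 2)

print(f"non-relativistic: v/c = {beta_nr:.3f}  ({beta_nr * C_KM_S:,.0f} km/s)")
print(f"relativistic:     v/c = {beta_rel:.3f}  ({beta_rel * C_KM_S:,.0f} km/s)")
# Both give about 0.05 c, i.e. roughly 15,000 km/s, matching the figures above;
# at this energy the relativistic correction is negligible.
```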
Alpha particle
[ "Physics", "Chemistry" ]
3,707
[ "Ionizing radiation", "Physical phenomena", "Radiation", "Nuclear physics", "Radioactivity" ]